Towards AI that understands humans

Computational Behavior Modeling

"What I cannot create, I do not understand." -- Richard Feynman.

My research advances AI systems by deepening their understanding of human behavior: I develop computational models that predict and simulate how people behave in interactive tasks. This work goes beyond task-specific expertise to capture the underlying cognitive mechanisms of perception, decision-making, and motor control. By modeling realistic human behavior, these computational models provide a foundation for AI systems that better interpret and respond to humans in dynamic interactive environments.

Publications:

[1] Typoist: Simulating Errors in Touchscreen Typing [website]

[2] Chartist: Task-driven Eye Movement Control for Chart Reading [website]

[3] CRTypist: Simulating Touchscreen Typing Behavior via Computational Rationality [website]

[4] A Workflow for Building Computationally Rational Models of Human Behavior [paper]

[5] WigglyEyes: Inferring Eye Movements from Keypress Data [paper]


Human Feedback for AI Alignment

To align AI more closely with human preferences, a second line of my research designs interactive systems that integrate human feedback directly into the alignment process. I develop human-centered interfaces that visualize AI agent behaviors and enable effective, efficient user control. By combining visualization and interaction techniques, my work empowers users to intuitively explore and guide the behavior of AI agents.

Publications:

[1] DxHF: Providing High-Quality Human Feedback for LLM Alignment with Interactive Decomposition [website]

[2] Interactive Reward Tuning: Interactive Visualization for Preference Elicitation [website]

[3] Interactive Groupwise Comparison for Reinforcement Learning from Human Feedback [website]