Computational Human Behavior Modeling

2023 -

Humans intentionally interact with external environments. This process relies not only on their goals, knowledge, and experience, but also on the complex inner mechanisms of perception, cognition, and motor control. My research is grounded in both empirical user studies and the concept of computational rationality [1], which holds that people make rational decisions within the limits of their physical and cognitive resources.

I have led advances in building computational behavior models in both human-computer interaction and visualization. For instance, in the context of touchscreen typing, I developed a supervisory control model of eye-hand coordination [2]. This model captures the intricate interactions among perceptual, cognitive, and motor control processes. A key feature is its reformulation of the supervisory control problem: both visual attention and the motor system are regulated based on a representation held in working memory. The movement policy is designed to asymptotically approach optimal performance within cognitive and design-related constraints. For the visualization community, I have designed computational models for real-world visualization tasks. For example, the chart-reading model [3] simulates how users move their eyes to extract information from charts in order to complete analytical tasks. A significant insight from this model is its two-level hierarchical control structure, which accurately predicts task-driven scanpaths and human-like statistical summaries of eye movement behavior across different tasks.
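To give a flavor of such two-level control, the toy sketch below separates a high-level policy (which chart region to attend next, greedy on task relevance weighted by remaining uncertainty) from a low-level step (a noisy fixation that reduces uncertainty about the attended region). The regions, relevance values, and noise model are all hypothetical illustrations, not the published model.

```python
import random

random.seed(0)

# Hypothetical chart regions and their relevance to a "compare two bars" task.
RELEVANCE = {"bar_a": 1.0, "bar_b": 1.0, "title": 0.1, "legend": 0.3}

def high_level_choose(uncertainty):
    """High level: attend the region with the largest expected
    information gain (task relevance * remaining uncertainty)."""
    return max(uncertainty, key=lambda r: RELEVANCE[r] * uncertainty[r])

def low_level_fixate(region, uncertainty):
    """Low level: a noisy fixation that reduces uncertainty about
    the attended region (perceptual noise keeps it above zero)."""
    noise = random.uniform(0.0, 0.2)
    uncertainty[region] *= noise
    return ("fixate", region)

def simulate_scanpath(n_fixations=4):
    uncertainty = {r: 1.0 for r in RELEVANCE}  # initial ignorance
    return [low_level_fixate(high_level_choose(uncertainty), uncertainty)
            for _ in range(n_fixations)]

print(simulate_scanpath())
```

Even this toy version produces task-driven scanpaths: the controller inspects the two task-relevant bars first, then falls back to less relevant regions as uncertainty about the bars shrinks.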

These modeling approaches in HCI and visualization hold the potential to enhance design generation and evaluation. To further generalize this modeling approach, my collaborators and I investigated the workflow of behavior modeling through computational rationality [1], concentrating on how cognitive processes can be understood and improved by combining deep learning and reinforcement learning techniques.

Related Publications:

[1] A Workflow for Building Computationally Rational Models of Human Behavior [paper]

[2] CRTypist: Simulating Touchscreen Typing Behavior via Computational Rationality [website]

[3] Chartist: Task-driven Eye Movement Control for Chart Reading (CHI'25)

[4] Typoist: Simulating Errors in Touchscreen Typing (CHI'25)


Interactive AI Behavior Alignment

2023 -

My research leverages interactive visualization techniques for understanding and aligning AI behaviors. For example, in the context of tuning AI behavior, traditional methods that rely on pairwise comparisons often fail to fully harness human cognitive abilities. To address this, I developed an interactive, visualization-based approach [1] that follows the visual information-seeking mantra: overview first, zoom and filter, then details on demand. The system allows users to efficiently explore and adjust large sets of sampled behaviors through two linked views: an embedding view for contextual overviews and a sample view for detailed time-series motion data. This capability enables more precise tuning with fewer queries. In another example of AI alignment using reinforcement learning from human feedback (RLHF), my collaborators and I enhanced standard RLHF by introducing visualizations that organize behaviors hierarchically [2]. This hierarchy enables users to compare and explore entire groups of behaviors. The group-wise comparison approach significantly increases the efficiency of preference elicitation, reduces error rates, and leads to improved policy outcomes. To evaluate the design of these interactive visualization systems, we conduct simulation studies that model user behaviors and preferences, which aligns well with my research on computational behavior modeling.
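For context, the pairwise-comparison baseline that these interactive systems improve upon is typically a Bradley-Terry reward model fitted to human preference judgments. The sketch below is a minimal, self-contained illustration of that standard ingredient (with hypothetical behaviors and judgments), not the implementation from either paper.

```python
import math

# Hypothetical robot behaviors and human judgments of the form (winner, loser).
behaviors = ["walk", "hop", "stumble"]
preferences = [("walk", "hop"), ("walk", "stumble"), ("hop", "stumble")]

reward = {b: 0.0 for b in behaviors}  # scalar reward per behavior

def p_prefer(a, b):
    """Bradley-Terry probability that behavior a is preferred over b."""
    return 1.0 / (1.0 + math.exp(reward[b] - reward[a]))

# Gradient ascent on the log-likelihood of the observed preferences.
for _ in range(500):
    for winner, loser in preferences:
        g = 1.0 - p_prefer(winner, loser)  # d logL / d reward[winner]
        reward[winner] += 0.1 * g
        reward[loser] -= 0.1 * g

ranking = sorted(behaviors, key=reward.get, reverse=True)
print(ranking)  # learned reward recovers the preference order
```

Because each query in this scheme compares exactly two behaviors, many queries are needed to rank a large behavior set; organizing behaviors hierarchically and eliciting group-wise judgments is what lets the interactive systems above reduce the number of queries.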

Related Publications:

[1] Interactive Reward Tuning: Interactive Visualization for Preference Elicitation [website]

[2] Interactive Groupwise Comparison for Reinforcement Learning from Human Feedback


Narrative Chart

2021 - 2022

Narrative Chart (narchart.github.io) is an open-source visualization library specialized for authoring charts that facilitate data storytelling, built around a high-level, action-oriented declarative grammar. Unlike general-purpose visualization libraries such as D3.js, Vega, and ECharts, Narrative Chart is designed specifically for data storytelling and lowers the barrier to creating such charts. It provides rich features supporting visual narratives, enabling users to rapidly create expressive charts and inspiring their creativity.

Related Publications:

[1] AutoClips: An Automatic Approach to Video Generation from Data Facts [website]

[2] Understanding and Automating Graphical Annotations on Animated Scatterplots


Calliope - A Visual Data Story Generation Platform

2020 - 2021

Calliope (project homepage) is a visual data story generation platform that employs advanced AI techniques to automatically analyze data and represent data insights in the form of narrative visualization. It helps non-expert users create visual data stories through an automatic process, without any technical barrier. The system is named after the Muse who presides over eloquence and epic poetry. In 2021, Calliope joined Greenplum as a plugin to help more users in the database community [link] 🌟.

Related Publications:

[1] Calliope: Automatic Visual Data Story Generation from a Spreadsheet [website]

[2] AutoClips: An Automatic Approach to Video Generation from Data Facts [website]

[3] Calliope·Data: An Intelligent Visual Data Story Generation Platform for Mass [website]


Intelligent Visualization Systems

2018 - 2021

Intelligent visualization systems: an action-driven authoring tool and a natural language interface (NLI) for visualizations. See the following papers for details.

Related Publications:

[1] VisAct: a visualization design system based on semantic actions [website]

[2] Task-Oriented Optimal Sequencing of Visualization Charts [website]

[3] Talk2Data: A Natural Language Interface for Exploratory Visual Analysis via Question Decomposition [paper]