Bridging research, trust, and teaching

As automation and generative AI become woven into nearly everyone's daily life, understanding how people trust and rely on automated agents calls for new approaches to investigating beliefs and behavior, along with rigorous evaluation practices.
Dave Miller, Assistant Teaching Professor in the Department of Mechanical Engineering, contributed to two peer-reviewed papers and co-presented a course at the Computer-Human Interaction (CHI) conference in Yokohama, Japan. His work explores how to equip the next generation of researchers with the tools to conduct meaningful, impactful studies.
Fostering trust and reliance
Miller is a co-author of the workshop paper “Quantifying Calibration: Bridging Trust and Reliance in Automation Across Dispositional Factors,” written in collaboration with graduate student and first author Evelyn Goroza; Gavin McCarthy-Bui, A26; Anne Zhao, E27; and Elsa R. Ostenson, E26. The study was presented at the CHI AutomationXP Workshop and investigates how the Human-Computer Interaction (HCI) community can better identify and support calibrated trust and appropriate reliance on automated systems.
The paper explores how poorly calibrated trust in automated systems can lead to misuse or outright disuse, and emphasizes the need to understand two underexplored factors shaping trust: cultural background and personality. These traits are rarely studied but may play an important role in how people decide whether to rely on a system. The team designed an experiment to measure three things: how much people trust a system, how much they actually rely on it, and how well the two align.
Although the model is currently conceptual, the team is actively conducting a study to test it in a real-world setting. In the experiment, participants cooperate with an adaptable automation system to complete a sorting game. Their goal is to provide actionable insights for designers and to contribute to broader conversations around human-computer partnership.
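As a rough illustration of the calibration question the team is probing, the sketch below (Python, with invented data and a hypothetical measure, not the study's actual analysis) checks how strongly participants' stated trust tracks their observed reliance on an automated recommendation.

```python
# Hypothetical sketch: relate self-reported trust to observed reliance.
# The data, scales, and measure are illustrative assumptions only.
from statistics import correlation, mean

# Each entry: a trust rating (e.g., on a 1-7 scale) and the fraction of
# trials on which the participant accepted the automation's recommendation.
participants = [
    {"trust": 6, "reliance_rate": 0.85},
    {"trust": 3, "reliance_rate": 0.40},
    {"trust": 5, "reliance_rate": 0.55},
    {"trust": 2, "reliance_rate": 0.30},
]

trust = [p["trust"] for p in participants]
reliance = [p["reliance_rate"] for p in participants]

# One simple calibration signal: how closely stated trust tracks behavior.
print(f"trust-reliance correlation: {correlation(trust, reliance):.2f}")
print(f"mean reliance rate: {mean(reliance):.2f}")
```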
Motivating contributors
Understanding what motivates people to contribute to evolving datasets is essential for keeping information accurate and up to date. Maintaining dynamic, publicly editable data, such as Wikipedia, requires sustained human engagement, which remains an ongoing challenge for system designers. “Towards Fair and Equitable Incentives to Motivate Paid and Unpaid Crowd Contributions,” co-authored by Miller and accepted to the highly selective 2025 CHI Conference, explores this issue through the lens of Drafty, a publicly editable database of computer science faculty profiles.
In operation for nine years and visited by over half a million users, Drafty serves as the basis for a discrete choice experiment exploring what drives contributors. “Understanding why people choose to contribute and what motivates highly accurate contributions is essential for designing equitable public systems that balance incentives, motivations, and practical trade-offs,” the paper highlights.
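To make the method concrete: in a discrete choice experiment, participants repeatedly choose between profiles whose attributes are varied, and the analysis asks which attributes predict those choices. The sketch below uses invented attributes (payment and topic match) and a naive tally rather than the study's actual design or choice model.

```python
# Hypothetical discrete-choice-experiment data: each trial offers two task
# profiles, and the participant picks one. Attributes and data are invented.
from collections import Counter

trials = [
    # (option A, option B, index of chosen option)
    ({"pay": "paid", "topic": "own field"},   {"pay": "paid", "topic": "other"}, 0),
    ({"pay": "unpaid", "topic": "own field"}, {"pay": "paid", "topic": "other"}, 0),
    ({"pay": "unpaid", "topic": "other"},     {"pay": "paid", "topic": "other"}, 1),
]

# Naive first pass: how often each attribute level is chosen relative to how
# often it is offered (a full study would fit a choice model instead).
chosen, offered = Counter(), Counter()
for option_a, option_b, picked in trials:
    for option in (option_a, option_b):
        offered.update(option.values())
    chosen.update((option_a, option_b)[picked].values())

for level in offered:
    print(f"{level}: chosen {chosen[level]} of {offered[level]} times offered")
```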
The study found that people are more motivated when tasks align with their interests or expertise, often more so than by money. The findings identify widely shared motivators and offer design recommendations for systems involving both paid and unpaid contributors. One recommendation is for paid crowdsourcing platforms to match tasks with workers who are genuinely interested in the topic, increasing the likelihood of accurate work. Overall, the study aims to bridge the gap between paid crowdsourcing and peer-production models.
Co-authors included first author and Assistant Professor Shaun Wallace of the University of Rhode Island, along with Associate Professor Jeff Huang and PhD candidates Talie Massachi and Jiaqi Su, all of Brown University.
Teaching research design across disciplines
In addition to research, Miller is committed to developing the research skills of students and professionals. He co-presented the course “Research Methods for People in a Hurry” with Assistant Professor Shaun Wallace of the University of Rhode Island at the CHI Conference in April 2025. The course built upon a version the two previously hosted at CHI 2022. Designed for learners at all levels, it offered a foundational understanding of research methods rooted in experimental psychology and human factors. Through interactive instruction, it addressed a core question: “What does someone need to know to do good research that will yield useful information that can tell a story—and will pass muster by reviewers?” The course empowered attendees to approach their own investigations with clarity and precision.
Centering the human in human-computer interaction
As technology continues to evolve, the role of human judgment remains essential. Accurate and efficient data collection is critical for producing meaningful research, but ensuring trust in that data is just as important. Equally vital is the ability to synthesize and share this knowledge with the next generation of researchers. Through his contributions to CHI 2025, Miller demonstrated his commitment to thoughtful, human-centered innovation.
About Dave Miller
Dave B. Miller is an Assistant Teaching Professor in the Department of Mechanical Engineering at Tufts University. He currently teaches courses in human factors methods, human factors in automation and AI, and interface design and has previously taught at Cornell University and Stanford University. He holds a PhD in communication research from Stanford University, a master’s degree from NYU’s Gallatin School, and a bachelor’s degree in human-environment relations with a minor in psychology from Cornell.
Learn more about Assistant Teaching Professor Dave Miller.