
Project

Designing Personalized AI Agents to Foster Human-Human Connections

Copyright

PRG


Research Overview

As artificial intelligence (AI) devices become more common in our homes, concerns about their potential harm to human-human connections arise accordingly. This dissertation studies the responsible design of embodied agents as social catalysts that purposefully enhance human-human interactions. It aims to shed light on three overarching research questions: Can we become more socially connected and collaborative with one another through the facilitation of a socially embodied agent? What social capabilities do these embodied agents need to acquire as social catalysts? What approaches should we take to responsibly design, develop, and evaluate computing systems that enable positive social interactions between a human group and an embodied agent?

Robots as Social Catalysts in Human Groups
Certain individuals possess a unique knack for fostering communication and elevating interpersonal relationships, serving as social catalysts within a group. They play a critical role in creating rapport among teammates, eliciting collective wisdom, and streamlining problem-solving efforts. To act as a social catalyst, one needs multiple key social, cognitive, and affective capabilities, such as understanding a group's social dynamics and each member's affective states and traits, as well as employing negotiation, counseling, and mediation strategies when conflicts arise. In my dissertation, I explore the possibility of enabling robots to act as social catalysts in human groups.


To investigate these three questions, this work proposes a multidisciplinary framework for the holistic design and evaluation of embodied social agents intended to foster human-human connection and collaboration. It argues that robots need to possess three social capabilities: social-affective perception, context awareness, and social adaptation. These capabilities are elaborated in detail within the framework, together with a comprehensive, iterative process for their design, evaluation, and enhancement. This process needs to be grounded in psychological theories and findings, and to employ a mixed-methods integrative approach that spans computing, the social sciences, and interaction design.

Case Study: Parent-Child Reciprocal Interaction
To illustrate and evaluate my proposed framework, parent-child reciprocal interaction is selected as the case study. Exploring robot-facilitated parent-child interaction has the potential to bridge the societal gap in children's at-home exposure to the enriching adult-child exchanges essential for their development.

From a research perspective, young children are a more challenging and less investigated population in HAI than typical adult populations, owing to various child-specific technological and methodological barriers (e.g., the scarcity of young children's behavior and interaction data and the limited performance of computing technologies on children's data). For example, acoustic modeling of children's speech remains challenging due to physiological differences in the size of children's articulatory apparatus and the high heterogeneity of their vocal effort, speaking rate, and proficiency levels. Moreover, the large-scale linguistic resources that enable high-performance acoustic modeling are often unavailable for children's speech, requiring compensatory computing techniques (e.g., transfer learning, feature adaptation) to offset the limited model performance.

HAI involving children requires more sophisticated computing tools as well as child-specific evaluation methods and design considerations. In addition, child-adult interactions are more complex and challenging to study than typical adult-adult interactions: they rely less on verbal exchanges and are enriched with a greater wealth of nonverbal social and emotional communication. Children lean heavily on nonverbal abilities (e.g., facial expression recognition) as essential tools for social interaction, given their still-developing language skills. In contrast, daily adult-adult interactions tend to be more socially reserved and less behaviorally expressive. Human groups involving two distinct populations, i.e., children and adults, also introduce a new dimension of interaction parameters that the framework needs to account for.

Altogether, focusing on children's development as a case study yields an HAI scenario with highly difficult technical challenges, enriched multimodal human-human communication, and complex interaction styles and user backgrounds. The framework's generalizability is more effectively demonstrated in this scenario, and the framework can then be readily applied to a variety of other contexts and scenarios.

Summary

This dissertation provides insights into the potential of designing embodied social agents as social catalysts within human groups. It invites future exploration into the possibilities and challenges of machine-catalyzed group interactions, emphasizing both technical and ethical considerations. As sociable intelligent devices—from personal voice agents at home to autonomous vehicles—rapidly proliferate, humans increasingly interact with AI agents in an ecology composed of other humans and other intelligent machines. Accordingly, this work helps advance the social sophistication of intelligent machines that live with humans in this emergent human-agent ecology, as well as the understanding of the social and behavioral mechanisms underlying this ecology. 

Publications:

Technical Advancements Track: 

  • H. Chen, S. Alghowinem, S. Jang, C. Breazeal, and H. Park, “Dyadic Affect in Parent-child Multi-modal Interaction: Introducing the DAMI-P2C Dataset and its Preliminary Analysis,” IEEE Trans. Affect. Comput., 2022.
  • S. Alghowinem, H. Chen, C. Breazeal, and H. W. Park, in Proc. 16th IEEE Int. Conf. Autom. Face Gesture Recognit. (FG), 2021.
  • H. Chen, Y. Zhang, F. Weninger, R. Picard, C. Breazeal, and H. W. Park, "Dyadic Speech-based Affect Recognition using DAMI-P2C Parent-child Multimodal Interaction Dataset," in Proc. 2020 Int. Conf. Multimodal Interaction (ICMI), 2020.

Theory Track: 

  • H. Chen, S. Alghowinem, C. Breazeal, and H. W. Park, "Integrating Flow Theory and Adaptive Robot Roles: A Conceptual Model of Dynamic Robot Role Adaptation for the Enhanced Flow Experience in Long-term Multi-person Human-Robot Interactions," in Proc. 19th ACM/IEEE Int. Conf. Human-Robot Interaction (HRI), 2024. **Best Paper Award**.

Interaction Design Track:

  • H. Chen, A. Ostrowski, S. Jang, C. Breazeal, and H. Park, "Designing Long-term Parent-child-robot Triadic Interaction at Home through Lived Technology Experiences and Interviews," RO-MAN 2022 (under review).
  • H. Chen, H. W. Park, and C. Breazeal, “Teaching and learning with children: Impact of reciprocal peer learning with a social robot on children’s learning and emotive engagement,” Comput Educ, vol. 150, 2020.
  • H. Chen, H. W. Park, X. Zhang, and C. Breazeal, “Impact of interaction context on the student affect-learning relationship in child-robot interaction,” in Proc. HRI ’20, 2020.