Research


Multimodal Learning

Our main focus in multimodal learning is to understand human actions, intentions, affective conditions, and instructions by utilizing data from multisensory systems, such as video, audio, language, and wearable sensors. Along this line, we have addressed several challenges of multimodal learning in order to extract robust multimodal representations. For example, we have developed learning approaches that prioritize the most salient modalities and extract robust representations from noisy and misaligned sensor data. Furthermore, we have developed multimodal multitask learning frameworks that recognize human affective conditions by utilizing data from different domains. We are continuing our research on multimodal learning to develop multitask learning frameworks for various human-centered tasks.
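
To make the modality-prioritization idea concrete, here is a minimal sketch of an attention-style fusion scheme in PyTorch: each modality is projected into a shared space and assigned a learned weight, so noisy or uninformative modalities can be down-weighted. The class name, dimensions, and architecture are illustrative, not our published models.

```python
import torch
import torch.nn as nn

class GatedMultimodalFusion(nn.Module):
    """Illustrative sketch: fuse per-modality embeddings with learned
    attention weights, so noisy modalities can be down-weighted."""

    def __init__(self, dims, fused_dim):
        super().__init__()
        # Project each modality (e.g., video, audio, wearable) to a shared space.
        self.projections = nn.ModuleList([nn.Linear(d, fused_dim) for d in dims])
        # One scalar attention score per modality, computed from its projection.
        self.score = nn.Linear(fused_dim, 1)

    def forward(self, inputs):
        # inputs: list of tensors, inputs[i] has shape (batch, dims[i])
        projected = torch.stack(
            [torch.tanh(p(x)) for p, x in zip(self.projections, inputs)], dim=1
        )  # (batch, n_modalities, fused_dim)
        weights = torch.softmax(self.score(projected), dim=1)  # (batch, n_mod, 1)
        return (weights * projected).sum(dim=1)  # (batch, fused_dim)

# Toy usage: video, audio, and wearable-sensor features of different sizes.
fusion = GatedMultimodalFusion(dims=[512, 128, 64], fused_dim=256)
video, audio, imu = torch.randn(8, 512), torch.randn(8, 128), torch.randn(8, 64)
fused = fusion([video, audio, imu])  # (8, 256)
```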


Close-Proximity Human-Robot Collaboration

Robots are moving from working in isolated work cells to working in close proximity to human collaborators as part of human-robot teams. In such settings, robots are increasingly expected to work with multiple humans and to effectively model both human-human and human-robot dynamics before taking timely actions. Working toward this goal, we have proposed new algorithms that model human intent and motion while remaining interpretable and scalable to multiple humans. Our current work builds upon these algorithms to (1) obtain a more holistic representation of the environment and (2) interleave robot perception and control.
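
As a toy illustration of interleaving perception and control (a sketch under simplifying assumptions, not our actual algorithms), the code below alternates a Bayesian belief update over a human's intended goal with a control step that keeps the robot clear of the most likely goal. The 1-D world, function names, and parameters are all hypothetical.

```python
import numpy as np

# Toy world: a human on a line moves toward one of two goals; the robot
# interleaves perception (belief update) with control (avoidance).
GOALS = np.array([-1.0, 1.0])

def perceive(belief, human_pos, human_vel, sigma=0.5):
    """Bayesian filter step: motion toward a goal raises its posterior."""
    predicted_vel = np.sign(GOALS - human_pos)  # direction to each goal
    likelihood = np.exp(-0.5 * ((human_vel - predicted_vel) / sigma) ** 2)
    posterior = belief * likelihood
    return posterior / posterior.sum()

def control(belief, robot_pos, step=0.1):
    """Move the robot away from the human's most likely goal."""
    likely_goal = GOALS[np.argmax(belief)]
    return robot_pos - step * np.sign(likely_goal - robot_pos)

belief, robot_pos, human_pos = np.array([0.5, 0.5]), 0.0, -0.2
for _ in range(10):
    human_vel = 0.8                                   # human drifts toward +1
    human_pos += 0.05 * human_vel
    belief = perceive(belief, human_pos, human_vel)   # perception step
    robot_pos = control(belief, robot_pos)            # control step
print(belief, robot_pos)
```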


Trust in Human-Robot Interaction

In human-robot teams, trust has become an effective metric for representing a human agent’s willingness to rely on or interact with a robotic teammate. A well-calibrated balance of trust in a robot partner is essential to achieving proficient human-robot collaboration, and trust can also be used both to mitigate failures and to recover from them. We aim to equip robots with the tools to recognize and react to human trust levels in order to generate optimized, collaborative interactions. Our most recent work examines the interplay between failure recovery, humor, and trust in human-robot interaction.
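
For intuition, here is a minimal sketch of a performance-driven trust model in which trust rises after robot successes, falls more sharply after failures, and drives a behavior choice. The asymmetric gains, thresholds, and behavior labels are illustrative assumptions, not values from our studies.

```python
class TrustModel:
    """Sketch of performance-driven trust dynamics: trust rises after
    successes and drops (more sharply) after failures."""

    def __init__(self, trust=0.5, gain=0.05, loss=0.15):
        self.trust, self.gain, self.loss = trust, gain, loss

    def update(self, success: bool) -> float:
        delta = self.gain if success else -self.loss
        self.trust = min(1.0, max(0.0, self.trust + delta))  # clamp to [0, 1]
        return self.trust

def choose_behavior(trust: float) -> str:
    """Adapt the robot's behavior to the estimated trust level."""
    if trust < 0.3:
        return "explain-and-ask"      # recover: explain failure, request consent
    if trust > 0.8:
        return "act-autonomously"
    return "act-with-confirmation"

model = TrustModel()
for outcome in [True, True, False, False, True]:
    t = model.update(outcome)
    print(f"trust={t:.2f} -> {choose_behavior(t)}")
```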