Personal Robots
Building socially engaging robots and interactive technologies to help people live healthier lives, connect with others, and learn better.
Robots are an intriguing technology that can straddle both the physical and social worlds of people. Inspired by animal and human behavior, we aim to build capable robotic creatures with a "living" presence, and to gain a better understanding of how humans will interact with this new kind of technology. People will physically interact with them, communicate with them, understand them, and teach them, all in familiar human terms. Ultimately, such robots will possess the social savvy, physical adeptness, and everyday common sense to partake in people's daily lives in useful and rewarding ways.

Research Projects

  • AIDA: Affective Intelligent Driving Agent

    Cynthia Breazeal and Kenton Williams
    Drivers spend a significant amount of time multitasking while they are behind the wheel. These dangerous behaviors, particularly texting while driving, can lead to distractions and ultimately to accidents. Many in-car interfaces designed to address this issue neither take a proactive role in assisting the driver nor leverage aspects of the driver's daily life to make the driving experience more seamless. In collaboration with Volkswagen/Audi and the SENSEable City Lab, we are developing AIDA (Affective Intelligent Driving Agent), a robotic driver-vehicle interface that acts as a sociable partner. AIDA exhibits facial expressions and strong non-verbal cues to engage the driver in social interaction. AIDA also uses the driver's mobile device as its face, which promotes safety, offers proactive driver support, and fosters deeper personalization for the driver.
  • Animal-Robot Interaction

    Brad Knox, Patrick McCabe, and Cynthia Breazeal

    Like people, dogs and cats live among technologies that affect their lives. Yet little of this technology has been designed with pets in mind. We are developing systems that interact intelligently with animals to entertain, exercise, and empower them. Currently, we are developing a laser-chasing game, in which dogs or cats are tracked by a ceiling-mounted webcam, and a computer-controlled laser is moved with knowledge of the pet's position and movement. Machine learning will be applied to optimize the specific laser strategy. We envision enabling owners to initiate and view the interaction remotely through a web interface, providing stimulation and exercise to pets when the owners are at work or otherwise cannot be present.
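
    The sensing-and-actuation loop described above can be sketched as follows (a minimal sketch with hypothetical hardware hooks: it assumes the ceiling webcam is readable through OpenCV and that move_laser stands in for a pan-tilt turret command; it is not the deployed system):

      # Track the pet from the ceiling webcam and keep the laser dot a short,
      # "chaseable" distance ahead of it. Constants and interfaces are illustrative.
      import cv2
      import numpy as np

      LEAD_PIXELS = 120    # how far ahead of the pet to place the dot

      def pet_centroid(frame, background):
          """Rough pet detector: difference from an empty-room reference image."""
          diff = cv2.absdiff(frame, background)
          gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
          _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)
          m = cv2.moments(mask)
          if m["m00"] < 1e-3:
              return None
          return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

      def move_laser(target_xy):
          """Placeholder for the pan-tilt laser command (e.g., sent over serial)."""
          print("laser ->", target_xy.astype(int))

      cap = cv2.VideoCapture(0)
      _, background = cap.read()    # assume the room is empty at startup
      prev = None
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          pos = pet_centroid(frame, background)
          if pos is not None:
              velocity = pos - prev if prev is not None else np.zeros(2)
              heading = velocity / (np.linalg.norm(velocity) + 1e-6)
              move_laser(pos + LEAD_PIXELS * heading)    # dot leads the pet's motion
              prev = pos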

  • Cloud-HRI

    Cynthia Breazeal, Nicholas DePalma, Adam Setapen and Sonia Chernova

    Imagine opening your eyes and being awake for only half an hour at a time. This is the life that robots traditionally live, due to factors such as battery life and wear on prototype joints. Roboticists have typically muddled through this challenge by crafting handmade perception and planning models of the world, or by using machine learning with synthetic and real-world data. Cloud-based robotics, in contrast, aims to marry large distributed systems with machine learning techniques to build robots that interpret the world in a richer way. This movement seeks large-scale machine learning algorithms that draw on the experiences of large groups of people, whether sourced from many tabletop robots or from many interactions with virtual agents. Large-scale robotics aims to change embodied AI as large-scale data and computation changed non-embodied AI.

  • Command Not Found

    David Nunez, Tod Machover, and Cynthia Breazeal

    A performance between a human and a robot tells the story of growing older and trying to maintain friendships with those we meet along the way. This project explores live-coding with a robot, in which the actor creates and executes software on the robot in real time; the audience can watch the program evolve on screen, and the code itself is part of the narrative.

  • DragonBot: Android Phone Robots for Long-Term HRI

    Adam Setapen, Natalie Freed, and Cynthia Breazeal

    DragonBot is a new platform built to support long-term interactions between children and robots. The robot runs entirely on an Android cell phone, which displays an animated virtual face. Additionally, the phone provides sensory input (camera and microphone) and fully controls the actuation of the robot (motors and speakers). Most importantly, the phone always has an Internet connection, so a robot can harness cloud-computing paradigms to learn from the collective interactions of multiple robots. To support long-term interactions, DragonBot is a "blended-reality" character: if you remove the phone from the robot, a virtual avatar appears on the screen and the user can still interact with the virtual character on the go. Costing less than $1,000, DragonBot was specifically designed to be a low-cost platform that can support longitudinal human-robot interactions "in the wild."
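
    The phone-centric architecture can be illustrated with a minimal sketch (hypothetical class and function names, not the DragonBot codebase): the same behavior code drives either the physical robot or, when the phone is undocked, the on-screen avatar, while interaction events are logged to the cloud.

      # Blended-reality switch: docked -> motors, undocked -> virtual avatar.
      import time

      class Face:
          def show(self, expression): print("face:", expression)

      class Motors:
          def pose(self, name): print("motors:", name)

      class VirtualAvatar:
          def pose(self, name): print("avatar animation:", name)

      class CloudLog:
          def record(self, event):    # stand-in for cloud-side collective learning
              print("uploading interaction event:", event)

      def run(docked=True):
          face, cloud = Face(), CloudLog()
          body = Motors() if docked else VirtualAvatar()
          for step in range(3):
              # placeholder perception: the phone's camera and microphone go here
              child_is_speaking = (step % 2 == 0)
              expression = "listening" if child_is_speaking else "excited"
              face.show(expression)
              body.pose("lean_forward" if child_is_speaking else "bounce")
              cloud.record({"step": step, "expression": expression})
              time.sleep(0.1)

      run(docked=True)     # phone docked in the robot
      run(docked=False)    # phone removed: same behavior drives the virtual character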

  • Global Literacy Tablets

    Cynthia Breazeal, David Nunez, Tinsley Galyean, Maryanne Wolf (Tufts), and Robin Morris (GSU)

    We are developing a system of early literacy apps, games, toys, and robots that will triage how children are learning, diagnose literacy deficits, and deploy dosages of content to encourage app play using a mentoring algorithm that recommends an appropriate activity given a child's progress. Currently, over 200 Android-based tablets have been sent to children around the world; these devices are instrumented to provide a very detailed picture of how kids are using these technologies. We are using this big data to discover usage and learning models that will inform future educational development.
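
    A toy sketch of the mentoring idea (hypothetical data model and thresholds, not the deployed algorithm): given per-skill mastery estimates derived from the tablet logs, recommend the activity that targets the child's weakest skill among those the child is ready to attempt.

      ACTIVITIES = [
          {"name": "letter_tracing", "skill": "letters",  "difficulty": 0.2},
          {"name": "word_builder",   "skill": "decoding", "difficulty": 0.5},
          {"name": "story_reader",   "skill": "fluency",  "difficulty": 0.8},
      ]

      def recommend(progress, slack=0.3):
          """progress maps skill -> mastery estimate in [0, 1] from usage logs."""
          ready = [a for a in ACTIVITIES
                   if a["difficulty"] <= progress.get(a["skill"], 0.0) + slack]
          if not ready:
              return ACTIVITIES[0]["name"]    # fall back to the easiest activity
          # target the least-mastered skill the child can currently practice
          return min(ready, key=lambda a: progress.get(a["skill"], 0.0))["name"]

      print(recommend({"letters": 0.6, "decoding": 0.3, "fluency": 0.1}))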

  • Magician-Robot Interaction

    Cynthia Breazeal, David Scott Nunez, Luke Plummer, and Marco Tempest

    Can a robot and magician collaborate on stage to create a believable, evocative performance? Close human-robot proximity and coordination on a performance stage, such as the rapid passing of objects between human hands and robot grippers, is a recent development. Our tools allow us to compose a human-robot performance that blends pre-rendered choreography with key moments of dynamic interactivity to enhance the realism of the character. For example, as the robot plays back a series of poses, it might also track the face of the performer to maintain eye contact. We are studying how perceived agency and blended static/dynamic interactivity affect an audience's perception of the performance, and how changes in computational robot choreography might influence a viewer's emotional state. We have built trajectory timeline composition software, a sympathetic interface to an industrial robot, and custom hardware to achieve magic effects.
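
    The blend of scripted choreography and live interactivity can be sketched as follows (a minimal illustration with hypothetical joint ordering and interfaces, not the composition software itself): each pose comes from the pre-rendered timeline, and the head joints receive a small correction toward the performer's tracked face so the character appears to hold eye contact.

      import numpy as np

      def timeline_pose(t, keyframes):
          """Linearly interpolate between (time, joint-vector) keyframes."""
          times = [k[0] for k in keyframes]
          poses = np.array([k[1] for k in keyframes])
          return np.array([np.interp(t, times, poses[:, j])
                           for j in range(poses.shape[1])])

      def blend(scripted, gaze_target, weight=0.3):
          """Keep the scripted pose, but pull head pan/tilt toward the face target."""
          pose = scripted.copy()
          pose[:2] = (1 - weight) * scripted[:2] + weight * gaze_target
          return pose

      keyframes = [(0.0, [0.0, 0.0, 0.2]),    # (seconds, [head_pan, head_tilt, arm])
                   (2.0, [0.5, -0.1, 0.6]),
                   (4.0, [0.0, 0.0, 0.2])]
      face_pan_tilt = np.array([0.3, 0.1])    # from a face tracker, in joint space
      for t in (0.0, 1.0, 2.0, 3.0):
          print(t, blend(timeline_pose(t, keyframes), face_pan_tilt).round(2))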

  • Mind-Theoretic Planning for Robots

    Cynthia Breazeal and Sigurdur Orn Adalgeirsson

    Mind-Theoretic Planning (MTP) is a technique for robots to plan in social domains. The system takes into account probability distributions over the task-relevant initial beliefs and goals of people in the environment, and predicts how they will rationally act on those beliefs to achieve their goals. MTP then creates an action plan for the robot that both takes advantage of the effects of the anticipated actions of others and avoids interfering with them.
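
    A toy sketch of the idea (illustrative only, not the MTP implementation): maintain a distribution over a person's goal, predict their rational action under each hypothesis, and choose the robot action with the best expected outcome, penalizing plans that would interfere with the person.

      GOAL_BELIEF = {"red": 0.7, "blue": 0.3}     # P(the person wants this block)
      ROBOT_ACTIONS = ["fetch_red", "fetch_blue", "stay_clear"]

      def predicted_human_action(goal):
          return "reach_" + goal                  # rational-action prediction

      def outcome(robot_action, person_action):
          wanted = person_action.split("_")[1]
          helped = robot_action == "fetch_" + wanted
          # fetching the wrong block puts the robot's arm in the person's path
          interferes = robot_action.startswith("fetch_") and not helped
          return (1.0 if helped else 0.0) - (0.5 if interferes else 0.0)

      def expected_value(robot_action):
          return sum(p * outcome(robot_action, predicted_human_action(goal))
                     for goal, p in GOAL_BELIEF.items())

      best = max(ROBOT_ACTIONS, key=expected_value)
      print(best, {a: round(expected_value(a), 2) for a in ROBOT_ACTIONS})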

  • Robot Learning from Human-Generated Rewards

    Brad Knox, Robert Radway, Tom Walsh, and Cynthia Breazeal

    To serve us well, robots and other agents must understand our needs and how to fulfill them. To that end, our research develops robots that empower humans by interactively learning from them. Interactive learning methods enable technically unskilled end-users to designate correct behavior and communicate their task knowledge to improve a robot's task performance. This research on interactive learning focuses on algorithms that facilitate teaching by signals of approval and disapproval from a live human trainer. We operationalize these feedback signals as numeric rewards within the machine-learning framework of reinforcement learning. Compared to the complementary approach of teaching by demonstration, this feedback-based teaching may require less task expertise and place less cognitive load on the trainer. Envisioned applications include human-robot collaboration and assistive robotic devices for handicapped users, such as myoelectrically controlled prosthetics.
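
    A minimal sketch of feedback-based teaching (illustrative, in the spirit of TAMER-style interactive learning rather than the group's own code): button presses become numeric rewards, and the agent learns a per-state-action estimate of the trainer's reward that it then acts on.

      import random
      from collections import defaultdict

      ALPHA = 0.3                       # learning rate
      H = defaultdict(float)            # estimated human reward per (state, action)

      def choose(state, actions, epsilon=0.1):
          if random.random() < epsilon:         # occasional exploration
              return random.choice(actions)
          return max(actions, key=lambda a: H[(state, a)])

      def update(state, action, feedback):
          """feedback: +1 for approval, -1 for disapproval, 0 for no signal."""
          H[(state, action)] += ALPHA * (feedback - H[(state, action)])

      # toy interaction: the trainer approves "wave" in the "greeting" state
      for _ in range(20):
          a = choose("greeting", ["wave", "look_away"])
          update("greeting", a, +1 if a == "wave" else -1)
      print(max(["wave", "look_away"], key=lambda a: H[("greeting", a)]))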

  • Robotic Language Learning Companions

    Cynthia Breazeal, Jacqueline Kory, Sooyeon Jeong, Paul Harris, Dave DeSteno, and Leah Dickens

    Young children learn language not through listening alone, but through active communication with a social actor. Cultural immersion and context are also key to long-term language development. We are developing robotic conversational partners and hybrid physical/digital environments for language learning. For example, the robot Sophie helped young children learn French through a food-sharing game situated on a digital tablet embedded in a café table. Sophie modeled how to order food, and as the child practiced the new vocabulary, the food was delivered as digital assets on the table's surface. Meanwhile, a teacher or parent could observe and shape the interaction remotely via a digital tablet interface, adjusting the robot's conversation and behavior to support the learner. More recently, we have been examining how social nonverbal behaviors impact children's perceptions of the robot as an informant and social companion.

  • Socially Assistive Robotics: An NSF Expedition in Computing

    Tufts University, University of Southern California, Kasia Hayden with Stanford University, Cynthia Breazeal, Edith Ackermann, Catherine Havasi, Sooyeon Jeong, Brad Knox, Jacqueline Kory, Jin Joo Lee, Samuel Spaulding, Willow Garage and Yale University

    Our mission is to develop the computational techniques that will enable the design, implementation, and evaluation of "relational" robots, in order to encourage social, emotional, and cognitive growth in children, including those with social or cognitive deficits. Funding for the project comes from the NSF Expeditions in Computing program. This Expedition has the potential to substantially impact the effectiveness of education and healthcare, and to enhance the lives of children and other groups that require specialized support and intervention. In particular, the MIT effort is focusing on developing second language learning companions for pre-school aged children, ultimately for ESL (English as a Second Language).

  • TinkRBook: Reinventing the Reading Primer

    Cynthia Breazeal, Angela Chang, and David Nunez
    TinkRBook is a storytelling system that introduces a new concept of reading, called textual tinkerability. Textual tinkerability uses storytelling gestures to expose the text-concept relationships within a scene, prompting readers to become more physically active and expressive as they explore concepts while reading together. TinkRBooks are interactive storybooks that prompt interactivity in a subtle way, enhancing communication between parents and children during shared picture-book reading. TinkRBooks encourage positive reading behaviors in emergent literacy: parents act out the story to control the words onscreen, demonstrating print referencing and dialogic questioning techniques. Young children actively explore the abstract relationship between printed words and their meanings, even before this relationship is fully understood. By making story elements alterable within a narrative, readers can learn to read by playing with how word choices affect the storytelling experience. Recently, this research has been applied in developing countries.
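
    A toy sketch of textual tinkerability (hypothetical story content, not the TinkRBook engine): tapping a word swaps it for an alternative, and the scene description updates so the child sees how the word choice changes the story.

      STORY = {"text": "The {color} duck takes a bath in the {liquid}.",
               "slots": {"color": ["yellow", "purple", "green"],
                         "liquid": ["water", "mud", "juice"]}}

      state = {"color": "yellow", "liquid": "water"}

      def render(state):
          print(STORY["text"].format(**state))
          print("scene: duck tinted", state["color"], "| tub filled with", state["liquid"])

      def tinker(slot):
          """Child taps a word: cycle to the next alternative and redraw the scene."""
          options = STORY["slots"][slot]
          state[slot] = options[(options.index(state[slot]) + 1) % len(options)]
          render(state)

      render(state)
      tinker("color")    # "yellow" -> "purple": text and illustration change together
      tinker("liquid")   # "water" -> "mud"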