Animated Conversation
Because non-verbal signs are integral parts of the communicative process, we are designing a system that integrates gesture, intonation, and
facial expression into multi-modal human figure animation. In this project, appropriate speech, intonation, facial expression, and gesture are
generated by rule from a semantic representation that originates in a goal-directed conversational planner. The output of the dialogue
generation is used to drive a graphical animation of a conversation between two simulated autonomous agents. The two animated agents
interact with one another, producing speech with appropriate intonational contours, hand gestures, and facial movements such as head turns
and nods. This work was begun when I was visiting faculty at the University of Pennsylvania, in the Center for Human Modeling and
Simulation.
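To make the pipeline concrete, here is a minimal sketch of how a rule-based generator of this kind might map a planner's semantic representation onto synchronized speech, intonation, and gesture directives. All of the names below (Proposition, MultimodalAct, generate_by_rule) and the specific rules are hypothetical illustrations, not the actual system; the "H*" label is standard notation for a pitch accent.

```python
from dataclasses import dataclass

# Hypothetical semantic representation: what the conversational planner
# might emit for a single utterance.
@dataclass
class Proposition:
    text: str               # surface words of the utterance
    new_information: bool   # is this content new to the hearer?
    turn_final: bool        # does the speaker yield the turn afterwards?

# Hypothetical multi-modal output: directives that would drive the
# speech synthesizer and the graphical animation.
@dataclass
class MultimodalAct:
    words: str
    pitch_accent: str       # intonational marking, e.g. "H*" or none
    gesture: str            # hand gesture synchronized with the words
    gaze: str               # head/gaze behavior, e.g. turn toward hearer

def generate_by_rule(prop: Proposition) -> MultimodalAct:
    """Toy stand-in for rule-based generation: new information receives
    a pitch accent and an accompanying gesture; turn-final utterances
    receive a gaze shift toward the hearer, a common turn-taking cue."""
    return MultimodalAct(
        words=prop.text,
        pitch_accent="H*" if prop.new_information else "none",
        gesture="beat" if prop.new_information else "rest",
        gaze="toward_hearer" if prop.turn_final else "away",
    )

if __name__ == "__main__":
    # A two-utterance exchange between the two simulated agents.
    plan = [
        Proposition("Can you help me with this?", new_information=True, turn_final=True),
        Proposition("Yes, I can.", new_information=False, turn_final=True),
    ]
    for prop in plan:
        # Each act would be handed to the speech and animation back ends.
        print(generate_by_rule(prop))
```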
Future directions for this work include adding a vision component so that the two agents may perceive each other's movements, enlarging the kinds of discourse that
can be handled to include storytelling, and adapting the system to handle human-computer interaction.