Learning to Map Prosody to Small-Talk Phrases
This prototype learns to translate a small set of vocalizations into clear speech, and is intended for individuals with severe speech disorders. Each symbol on the touch screen is hard-wired to a pre-recorded phrase. Initially, the user touches a symbol to activate the associated phrase, and may also vocalize while touching it. A learning algorithm searches for consistent aspects of the vocalizations that can be used to identify the intended phrase. Over time the system learns to map vocalizations onto phrases, eliminating the need for the touch tablet. A speech-only interface appeals to many individuals who want to use what little vocal control they have to communicate, since it allows face-to-face contact and is socially more compelling than pointing to a picture board.
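The learning loop described above can be sketched in a few lines. This is a minimal illustration, not the system's actual algorithm: it assumes prosodic features (e.g. pitch contour statistics, duration, energy) have already been extracted into fixed-length vectors, and it stands in for the real learner with a simple nearest-centroid classifier. The class name and distance threshold are hypothetical.

```python
import math
from collections import defaultdict

class ProsodyMapper:
    """Illustrative sketch: vocalizations paired with touched symbols
    accumulate as labeled examples; later vocalizations are classified
    by distance to each phrase's centroid in feature space."""

    def __init__(self, threshold=0.5):
        # phrase_id -> list of prosodic feature vectors observed so far
        self.examples = defaultdict(list)
        # maximum centroid distance to accept a spoken match
        self.threshold = threshold

    def observe(self, phrase_id, features):
        """User touched a symbol while vocalizing: store the pairing."""
        self.examples[phrase_id].append(features)

    def _centroid(self, vectors):
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def classify(self, features):
        """Return the phrase whose stored vocalizations best match, or
        None when nothing is close enough (fall back to the touch tablet)."""
        best, best_dist = None, float("inf")
        for phrase_id, vectors in self.examples.items():
            d = math.dist(self._centroid(vectors), features)
            if d < best_dist:
                best, best_dist = phrase_id, d
        return best if best_dist <= self.threshold else None
```

As more touch-plus-vocalization pairs are observed, the centroids capture the consistent aspects of each user's vocalizations; the rejection threshold lets the system decline uncertain matches rather than speak the wrong phrase.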
This work is in collaboration with Prof. Rupal Patel (Department of Biobehavioral Studies, Columbia University, and visiting professor at the MIT Media Lab).
Rupal Patel and Deb Roy. "Adaptive Spoken Communication Aids." Proceedings of the American Speech and Hearing Association Annual Conference, San Francisco, CA, November 1999.
Rupal Patel and Deb Roy. "Teachable Interfaces for Individuals with Dysarthric Speech and Severe Physical Impairments." AAAI Workshop on Integrating Artificial Intelligence and Assistive Technology, Madison, WI, July 1998.