Grant: $775,940 - National Science Foundation - Jul. 10, 2009
Award Description: The purpose of this study is to develop a communication interface that will enable older people to communicate effectively with robotic assistants, thereby allowing them to remain living safely in their homes. The proposed communication interface will be: (1) multimodal, that is, supporting spoken, gestural, and physical interactions, as all of these typically occur simultaneously when people communicate with one another, and (2) adaptive, so that the robotic assistant adjusts to the older person rather than the older person to the robotic assistant. The combination of speech, gestures, and physical interactions (haptics) has received only limited attention, but it will be critical for the successful deployment of assistive robots for many elderly individuals. The transformative component of this research is to view haptics as one of the drivers of the dialogue between the user and the robot, and to study its relation to speech and gestures through language-processing methods. To adapt to each user, the interpretation of the speech, gesture, and haptic signals will be performed by means of RISq (Recognition by Indexing and Sequencing), a novel, adaptive, and reliable recognition methodology. Finally, a formal and modular control design methodology will be developed that guarantees the robot responds safely and reliably to the interpretation of user intent provided by dialogue processing.

The proposed work will be performed in five stages:

1. Human data collection and analysis: We will start by analyzing existing videotapes developed to train caregivers. This will help focus the subsequent substantial data collection, in which we will videotape elderly persons being assisted by their caregivers while performing some (Instrumental) Activities of Daily Living ((I)ADLs).

2. Recognition by multidimensional indexing and sequencing (RISq): RISq, originally developed for human activity recognition, will be extended to recognize speech, gestures, and haptic patterns. The output of RISq is in the form of words and symbols, which are passed to the dialogue processing module that integrates and interprets them as control commands for the robot. (A sketch of the general indexing-and-sequencing idea appears after this description.)

3. Dialogue processing: We will investigate whether collaborations involving haptics, where what is exchanged are forces, can be modeled similarly to human spoken dialogues. We will also develop a representation language for each utterance exchanged between the user and the robot that can integrate the contributions of speech, gestures, haptics, and context.

4. Control architecture: We will identify specific activities from the collected data that can be performed by an assistive robot, and develop a formal methodology for designing a set of controllers that guarantee these activities can be executed reliably, and without harming the user, in response to the representation of user intentions provided by the dialogue processing module.

5. Evaluation: A prototype of the proposed user interface will be implemented on a commercial robot and evaluated both by younger subjects in the Computer Vision and Robotics Laboratory in the College of Engineering, and by elderly users in Room 977 in the Department of Occupational Therapy at Rush University.
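The award names RISq but does not spell out how it works. As a rough illustration of the general indexing-and-sequencing idea only, the following minimal Python sketch quantizes multidimensional feature vectors into a hash index and then verifies temporal ordering at recognition time. The class name, the `bin_width` quantization, and the voting scheme are all assumptions for illustration, not the actual RISq algorithm.

```python
from collections import defaultdict
import numpy as np

class IndexSequenceRecognizer:
    """Illustrative indexing-and-sequencing recognizer (not actual RISq).

    Training vectors are quantized and stored in a hash index mapping
    each quantized vector to the (label, position) pairs where it was
    seen. At recognition time, each query frame votes for labels, and a
    vote counts only if it advances that label's temporal ordering.
    """

    def __init__(self, bin_width=0.5):
        self.bin_width = bin_width      # quantization step for indexing
        self.index = defaultdict(list)  # quantized vector -> [(label, pos)]

    def _key(self, vec):
        # Quantize the multidimensional feature vector into a hashable key.
        return tuple(np.round(np.asarray(vec) / self.bin_width).astype(int).tolist())

    def add_sequence(self, label, sequence):
        # Index every frame of a labeled training sequence.
        for pos, vec in enumerate(sequence):
            self.index[self._key(vec)].append((label, pos))

    def recognize(self, sequence):
        # Sequencing step: a frame votes for a label only if the retrieved
        # position moves forward relative to that label's last match.
        last_pos = defaultdict(lambda: -1)
        votes = defaultdict(int)
        for vec in sequence:
            best = {}
            for label, pos in self.index.get(self._key(vec), []):
                if pos > last_pos[label] and (label not in best or pos < best[label]):
                    best[label] = pos
            for label, pos in best.items():
                last_pos[label] = pos
                votes[label] += 1
        return max(votes, key=votes.get) if votes else None

# Toy usage with made-up one-dimensional "gesture" trajectories:
rec = IndexSequenceRecognizer()
rec.add_sequence("wave", [[0.0], [1.0], [0.0], [1.0]])
rec.add_sequence("push", [[0.0], [2.0], [3.0]])
print(rec.recognize([[0.1], [0.9], [0.1]]))  # -> "wave"
```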
Project Description: We performed all the set-up activities that a new project requires, including hiring new students as Research Assistants (RAs), setting up biweekly project meetings, and beginning to foster a sense of community among PIs and students who come from different fields (robotics, natural language processing (NLP), and signal and vision processing). As far as specific activities are concerned, we have started a preliminary analysis of the data available to us, and we are evaluating different sensors that need to be purchased for data collection; in particular, we are comparing various data gloves that can provide contact and hand-shape information. On the NLP side, the RA (Lin Chen) has started to familiarize himself with some of the components we will use, such as VerbNet and its API. The RA for the robotics part (Maria Javaid) is evaluating different sensors for collecting touch information and familiarizing herself with the nature of touch and force data. The first RA for signal processing (Simone Franzini) is now working on applying RISq to speech recognition; he is currently studying the Sphinx speech recognition system with the aim of utilizing parts of its signal processing software for our speech preprocessing and postprocessing stages. The second RA (Kai Ma) is working on new hand and face detection and tracking algorithms based on a combination of LDA (linear discriminant analysis) and AdaBoost (adaptive boosting); a sketch of one way these techniques can be combined appears after this paragraph. We are also conducting a broad literature survey on these subjects, including pointing-gesture estimation.
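The report says only that the detector will combine LDA with AdaBoost; one plausible arrangement (an assumption, not the project's actual algorithm) projects patch features into a discriminative subspace with LDA and then boosts weak decision-stump classifiers on the projection. A minimal scikit-learn sketch follows; the synthetic data and the `n_components` and `n_estimators` values are illustrative stand-ins.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in data: rows are image-patch feature vectors,
# labels mark hand/face patches (1) versus background (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# LDA reduces patch features to a discriminative subspace; AdaBoost
# then combines weak (depth-1) classifiers on that subspace.
detector = make_pipeline(
    LinearDiscriminantAnalysis(n_components=1),
    AdaBoostClassifier(n_estimators=50),
)
detector.fit(X, y)
print(f"training accuracy: {detector.score(X, y):.2f}")
```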
Jobs Summary: JOBS CREATED (Total jobs reported: 3)
Academic salary:
  Milos Zefran: 50%, one summer month (7/16/09-8/15/09), FTE 0.5
  Barbara Di Eugenio: 50%, one summer month (7/16/09-8/15/09), FTE 0.5
  Jezekiel Ben-Arie: 100%, one summer month (7/16/09-8/15/09), FTE 1.0
Assistant salary:
  Lin Chen: FTE 0.25
  Simone Franzini: FTE 0.25
  Maria Javaid: FTE 0.25
  Max Koleshnikov: FTE 0.25
  Kai Ma: FTE 0.25
Project Status: Less Than 50% Completed
This award's data was last updated on Jul. 10, 2009.