The overarching objective of this project is to transform access to assistive communication technologies (augmentative and alternative communication) for individuals with motor disabilities and/or visual impairment whose natural speech does not meet their communicative needs. These individuals often cannot access traditional augmentative and alternative communication because of their restricted movement or visual function. However, most such individuals have idiosyncratic, body-based means of communication that are reliably interpreted by familiar communication partners. The project will test artificial intelligence algorithms that gather information from sensors or camera feeds about the idiosyncratic movement patterns of the individual with motor/visual impairments. Based on the sensor or camera feed information, the artificial intelligence algorithms will interpret the individual's gestures and translate that interpretation into speech output. For instance, if an individual waves their hand as their means of communicating "I want," the artificial intelligence algorithm will detect that gesture and prompt the speech-generating technology to produce the spoken message "I want." This will allow individuals with restricted but idiosyncratic movements to access augmentative and alternative communication technologies that are otherwise out of reach.
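The gesture-to-speech pipeline described above can be sketched as a simple two-stage mapping: a recognizer classifies an idiosyncratic movement from sensor data, and a lookup translates the recognized gesture into a spoken message. The sketch below is purely illustrative, assuming mock sensor frames and a stand-in rule-based classifier; the names (`classify_gesture`, `GESTURE_MESSAGES`) and thresholds are hypothetical, not the project's actual algorithms.

```python
# Illustrative sketch only: a stand-in for the AI gesture recognizer and
# the gesture-to-message mapping. All names and thresholds are assumptions.

# Per-user mapping from recognized gestures to spoken messages
# (e.g., a hand wave means "I want" for this individual).
GESTURE_MESSAGES = {
    "hand_wave": "I want",
    "head_tilt": "yes",
}

def classify_gesture(frame):
    """Stand-in for an AI model: classify a gesture from one sensor frame.

    `frame` is a dict of mock sensor readings; a real system would run a
    trained model over camera or wearable-sensor data.
    """
    if frame.get("wrist_motion", 0.0) > 0.8:
        return "hand_wave"
    if frame.get("head_angle", 0.0) > 20.0:
        return "head_tilt"
    return None  # no recognized gesture in this frame

def gesture_to_speech(frame):
    """Translate a recognized idiosyncratic gesture into a message string.

    Returns None when no gesture is recognized; a real system would pass
    the string to a speech-generating device for spoken output.
    """
    gesture = classify_gesture(frame)
    return GESTURE_MESSAGES.get(gesture)

print(gesture_to_speech({"wrist_motion": 0.95}))  # prints "I want"
```

In a deployed system, the dictionary would be programmed per user by the individual or their personal care aides, which is what the outcome measures below track.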
Time taken to program the artificial intelligence algorithms by users/personal care aides
Timeframe: average of 12 months
Number of messages programmed by users/personal care aides
Timeframe: average of 12 months