Apple and the University of Illinois are teaming up with Google, Meta, and other tech companies to collaborate on something called the Speech Accessibility Project. The goal of the initiative is to study how artificial intelligence algorithms can be tuned to improve voice recognition for users with conditions that affect speech, including ALS and Down syndrome.
Engadget was first to report on the Speech Accessibility Project, which has yet to go online at the time of writing. According to the report, tech companies working with the University of Illinois include Amazon, Apple, Google, Meta, and Microsoft. Nonprofits Team Gleason, which empowers people living with ALS, and the Davis Phinney Foundation for Parkinson's are also working on the Speech Accessibility Project.
Conditions that affect speech touch tens of millions of people in the United States alone, according to the National Institutes of Health. Apple and other tech companies have innovated in the voice assistant space over the last decade with tools like Siri, Amazon Alexa, Google Assistant, and others. Apple has also invested in technologies like VoiceOver and Voice Control that are best-in-class for users with low vision or limited mobility.
Voice-driven features are only as good as the algorithms that power them, however, and that's critical for reaching users with Lou Gehrig's disease, cerebral palsy, and other conditions that affect speech.
We’ll update our coverage with more details when the Speech Accessibility Project launches.