Speech recognition software such as Apple’s Siri and Amazon’s Alexa is being trained to accurately recognize the voices of ALS patients; the error rate has already been reduced to 10%.
An article that may be of interest to ALS patients and their families appeared in the University of Illinois’s online newsletter, “Illinois Insider,” dated January 11, 2024.
The Speech Accessibility Project began last year, collecting speech data from stroke and Parkinson’s disease patients to train recognition software; speech from Down syndrome patients was added later. This year, the project officially began adding speech data from ALS and cerebral palsy patients. Mark Hasegawa-Johnson, a professor of electrical and computer engineering, and his team started the project after noticing that Siri and Alexa had trouble understanding these patients’ speech. They have received research funding from Apple, Amazon, Google, and others, and have already collected and provided as many as 100,000 speech samples, which have been used to train Siri, Alexa, Google Assistant, and other systems.
Before training, about 20% of Parkinson’s patients’ speech was misrecognized; after training, the misrecognition rate dropped to 10%. As speech data from people with a wider range of disabilities becomes available and further training is conducted, the error rate is expected to fall still further. Once speech recognition software becomes easily accessible to these users, they will share in the same convenience the technology offers everyone else. In the future, the collected speech data will be made available free of charge to nonprofits and small businesses, on the condition that the privacy of the voice donors is respected. Since this effort covers English speech in an English-speaking country, it is hoped that Japanese researchers will train a Japanese version of the software in the future.
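The article does not say how these error rates are measured, but the standard metric for speech recognition is the word error rate (WER): the word-level edit distance between the reference transcript and the recognizer’s output, divided by the length of the reference. A minimal sketch (the sample sentences are invented for illustration):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, computed over words rather than characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word in a ten-word reference gives a 10% error rate.
print(wer("please set a timer for ten minutes in the kitchen",
          "please set a time for ten minutes in the kitchen"))  # → 0.1
```

By this measure, the improvement reported above means the recognizer went from getting roughly one word in five wrong to one word in ten.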
Jan. 12, 2024 Reporter: Nobuko Schlough