
“A startup called Silence Speaks provides AI-powered avatars turning spoken words into sign language in real time,” notes historian Paul Matzko. “Their first use is for announcements in noisy public spaces like train stations where cochlear implants don’t work well and in a context where human interpreters are rare.” He adds: “AI tools are also improving the performance of those same implants by automatically and on the fly fine-tuning their performance to match the boutique needs of the individual and the particulars of a given physical context.”
“AI takes the guesswork out of [cochlear implant] programming,” says Susan B. Waltzman, PhD, Professor of Otolaryngology and co-director of the Cochlear Implant Center. “It gives patients the best possible chance of realizing the full potential of their implants by targeting optimal, objective performance and making that available to every patient, everywhere.”
NYU Langone Health adds:
Researchers are testing an AI technology called FOX, which uses psychoacoustic tests and other novel methods to calibrate patients’ devices to the appropriate pitch and volume. Sounds can be delivered directly from the audiologist’s computer to the device’s sound processor, enabling testing in a setting more natural than an audiometric booth. Since FOX measures individual results against a database of anonymous patient performance data to make unique fitting recommendations, outcomes should continue to improve as more information is integrated into FOX’s predictive technology.
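To make the idea concrete, here is a purely illustrative sketch of the kind of database-driven fitting recommendation the passage describes. FOX's actual methods are proprietary and far more sophisticated; the record fields, settings names, and nearest-neighbor matching below are all assumptions for illustration only.

```python
# Toy "predictive fitting": recommend processor settings by finding the
# anonymized database record whose psychoacoustic test profile is closest
# to the new patient's. (Illustrative only; not NYU's or FOX's algorithm.)
from dataclasses import dataclass
import math

@dataclass
class Record:
    test_profile: tuple[float, ...]  # e.g. hearing thresholds at several frequencies (dB)
    settings: dict[str, float]       # hypothetical processor settings that worked well

def recommend(profile: tuple[float, ...], database: list[Record]) -> dict[str, float]:
    """Return the settings from the nearest-neighbor test profile in the database."""
    nearest = min(database, key=lambda r: math.dist(r.test_profile, profile))
    return nearest.settings

# A tiny mock database of anonymized outcomes (fabricated example values).
db = [
    Record((40.0, 55.0, 60.0), {"gain": 1.2, "pitch_shift": 0.0}),
    Record((70.0, 75.0, 80.0), {"gain": 1.8, "pitch_shift": -0.5}),
]
print(recommend((42.0, 50.0, 62.0), db))  # matches the first, closer record
```

As more records accumulate, a simple lookup like this would naturally improve, which mirrors the quoted claim that outcomes should improve as more data is integrated.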
A Chinese girl who was born deaf can now hear: gene therapy restored her hearing by using viruses to deliver working genes to cells in her inner ear. MIT Technology Review describes her transformation.
Last year, an English toddler had her hearing restored in the world’s first gene therapy trial for deafness.