By Jen Mullins, BS, CTRS, MATP Staff
Voice Assistants (VAs) are software programs that listen to users and (hopefully) perform the requested actions. “Users can ask their [voice] assistants questions, control home automation devices and media playback via voice, and manage other basic tasks such as email, to-do lists, and calendars with verbal commands.” –source. Some of the most popular VAs, in order of popularity according to a study by Adobe Analytics, are Alexa, Google Assistant, Siri, Cortana, and Bixby.
More than a convenience, voice assistants can be Assistive Technology (AT) for some users with disabilities. “For many people with disabilities, voice assistant technology can be a key tool for living independently. Voice assistant technology makes it easier to set up schedules and reminders, have more control over the home environment and lets people to connect with others easier. It can help people learn language and communication skills. Smart homes can empower people with disabilities to live more independently, giving us control over our environment and freedom to make choices able-bodied people may take for granted. As smart home technology becomes more widespread and affordable to those who need it most, our world will continue to become a more accessible place.” –The Mighty
You might be thinking, ‘If VAs are so great, why doesn’t everyone use them?’ One of the challenges people with disabilities may face is getting their VA to understand them.
For people whose voice is affected by their disability (sometimes called a ‘disability accent’), it can be extremely frustrating to repeat, repeat, and repeat when asking their VA to do something; even more so when the VA can’t help them at all because it doesn’t understand their voice. Unfortunately, as is the case with many current VAs, if your accent isn’t one the software has been trained to recognize and respond to, it will have a hard time understanding you. Amazon/Alexa has taken some steps to help devices “learn your voice” and better understand individual users, but it is ultimately up to the user to “train” their VA to understand them.
Recently, Google launched Project Understood to “create a database that can help train Google’s technology to better understand people with Down syndrome.”
The need is great, and Google’s initiative is both needed and smart business for the company, but why did Google single out only people with Down syndrome? Why not include people who have Cerebral Palsy (CP) accents? People who have a stoma? People whose voice has been affected by a stroke or brain injury? Why not people who are deaf? Why not everyone whose VA has a hard time understanding them?
The other issue I have with Project Understood is that Google is asking people with Down syndrome to ‘donate their voice’: “The more voice samples shared by the Down syndrome community, the closer we get to a world where every person is understood.” This initiative will hopefully help lots of people, including people with Down syndrome, and, like I said, it’s smart business for Google. Why, then, is Google not paying people for this crucial information? It’s wrong of Google to do this important work in this way. People with disabilities should be paid for their contributions, period. I’m not the only one who feels this way; co-workers and friends I’ve talked with feel similarly, and people on Twitter and Reddit have been speaking out:
- Redditor ‘roymondous’ commented, “Shouldn’t they be paying for that, not asking people to ‘donate’ their voices? Seems like they wanna improve their product and that is certainly product research and development. If it was a little startup, then fine. But for one of the biggest companies in the world, shouldn’t they be paying people for that service?”
- Redditor ‘Lordjaraxxus64’ commented: “How about Google, a multi-billion dollar company actually pays these human beings instead of asking them to donate. This is ridiculous.”
If you use a voice assistant, which one do you use, and what do you use it for most often? What do you think about Google’s Project Understood? Comment on this post to continue the conversation!