
HERE ARE SOME STORIES FROM 

THE DARK SIDE

BROWSE THE STORIES TO LEARN MORE ABOUT THE CONCERNING IMPLICATIONS OF AIVAs

Education

While Siri and her siblings (namely Alexa, Google Assistant, and Cortana) each offer a multitude of potential benefits in the classroom, there are still problems associated with using AIVAs in education. These issues range in severity, but they are important to recognize and consider when integrating smart assistants into a classroom. Some of them include:

  • Lack of function: While very advanced, Siri and her siblings are not complete AI systems. Their functionality is limited, which can cause disruption or frustration, especially in an educational setting.

  • Does not play well with children: While a nuanced user can navigate the sometimes janky responses and confusing interactions, many of these AIVAs still have problems understanding different voices and accents. If a child is not specific enough or does not speak clearly enough, the smart assistant may misinterpret a question or command, which can cause frustration or feelings of isolation over the user’s vocal identity or verbal skill level.

  • Privacy concerns: What is done and discussed in a classroom should, of course, be open to consideration and inspection by anyone. Still, AIVAs’ data collection is vague and extensive enough to reasonably give parents and teachers pause before allowing them in schools. Siri and her siblings are, after all, built on AI platforms that learn from their users and alter their responses and interactions accordingly. What happens to stored information and past inquiries remains unclear, and considering that some of this stored information consists of users’ vocal patterns and tones of voice, it is a significant amount of private information to blindly hand over to the corporation behind the chosen smart assistant.

  • Unknown sources: While Siri and her siblings respond to questions with specific answers, many users are unaware of where those answers come from. In an educational setting, there is a distinct lack of cited sources or options for further investigation when a question is answered. Unlike a textbook or printed source, Siri does not come with a 'works cited' section or a bibliography, leading many to question the validity of its answers.

Accessibility

Though Siri and her siblings have come a long way, there is still major room for improvement as many people around the world with various forms of disabilities and speech impairments are still being left out of the conversation. According to the National Institute on Deafness and Other Communication Disorders, approximately 7.5 million people in the US “have trouble using their voices” due to speech disorders such as stuttering, or speech-affecting conditions such as ALS or cerebral palsy. 

 

The companies behind Siri and her siblings - Apple, Amazon, Microsoft and Google - have generally engineered voice technology that caters to uninterrupted, 'normal' speech from the average North American, English-speaking voice. To interpret the speech, smart assistants convert voice commands to text, and compare the text to a database of recognizable words. According to Frank Rudzicz, associate professor at the University of Toronto who studies speech, language and artificial intelligence, many databases do not contain data collected from people with atypical speech.  
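A rough sketch of that word-matching step shows why this design fails atypical speakers. Everything here (the function name and the tiny vocabulary) is invented for illustration; real systems use acoustic and language models, but the principle is the same: words the database does not contain are simply lost.

```python
# Hypothetical sketch of a recognizer's vocabulary lookup. Any transcribed
# word that is not in the database is silently dropped.

KNOWN_WORDS = {"play", "music", "stop", "what", "is", "the", "weather", "today"}

def interpret(transcribed_text: str) -> list[str]:
    """Keep only the words the database recognizes."""
    return [w for w in transcribed_text.lower().split() if w in KNOWN_WORDS]

print(interpret("play music"))         # ['play', 'music']
print(interpret("pl-play muh music"))  # ['music'] -- the stutter-affected words vanish
```

This is also why databases that lack samples of atypical speech, as Rudzicz notes, fail those speakers: the mismatch happens before any 'understanding' begins.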

 

In other words, Apple, Amazon, Microsoft and Google have rarely prioritized those whose speech does not match the ‘norm.’ 


Who might Siri and her siblings be inaccessible to?

Hearing: 

  • People who are Deaf, and some people with hearing loss, cannot effectively hear or interact with a voice assistant

Speech:

  • Voice recognition technology may struggle to understand people with speech difficulties (e.g., stuttering, slurring and verbal tics)

  • Accents could pose a challenge 

  • Voice assistants may struggle to detect speech-generating devices (as used by some people with physical disabilities)

  • People without speech will not be able to use a voice-based smart assistant

  • Stroke survivors may have speech difficulties (e.g., substituting words) which may produce unintended results from a voice assistant

Memory:

  • Smart assistant skills that require user authentication (e.g., checking your bank account balance) require users to remember a password, which can be difficult for people with memory challenges

  • Individuals who struggle to remember a voice assistant’s key word or phrase (its ‘wake phrase’) will struggle to activate the device


Though some of this work is underway, companies must ensure that disabled people and the “atypical” speech community are prioritized in the research and development of AIVAs. Voice technology must account for diverse speech patterns from the moment a product hits the market.



Entertainment & Global Use

Many individuals around the globe are becoming more familiar with the technology behind Siri and her siblings, and with its potential for misuse. Many users worry about what big companies are learning about them, but it is even more dangerous for user data to fall into the hands of people building AI tools in their basements with the intention of hacking systems or stealing information. This type of technology is entertaining, but it could easily get out of hand if not properly controlled. Check out this link on how simple it is for everyday users to build their own voice-activated personal assistant using Python: 

https://towardsdatascience.com/how-to-build-your-own-ai-personal-assistant-using-python-f57247b4494b
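The linked tutorial layers speech recognition and text-to-speech on top of what is, at heart, simple keyword matching. As a text-only illustration of that point (this is a hypothetical sketch, not the tutorial's actual code), the core logic can be as small as:

```python
import datetime

# Toy text-only 'assistant': real DIY assistants add speech recognition and
# text-to-speech around keyword-matching logic much like this.

def respond(command: str) -> str:
    command = command.lower()
    if "time" in command:
        return datetime.datetime.now().strftime("The time is %H:%M")
    if "hello" in command:
        return "Hello! How can I help you?"
    if "bye" in command:
        return "Goodbye!"
    return "I'm not sure I understand."

print(respond("Hello there"))       # Hello! How can I help you?
print(respond("What time is it?"))  # e.g. The time is 14:05
print(respond("Order me a pizza"))  # I'm not sure I understand.
```

That a recognizable 'assistant' fits in a dozen lines is exactly why this technology is so easy for anyone, well-intentioned or not, to build on.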

 

Will the industrial revolution happen again, but with AIVAs? Will this be known as the Siri revolution? Will these voices replace ‘people-assisting’ corporate jobs, like call centers and chat assistants? With evolving technology, several institutions now use versions of smart assistants instead of hiring someone to answer phone calls or chat questions. Check out this link on how this company uses AI assistive technology to accommodate users’ needs:
https://www.artificial-solutions.com/ 


There are several benefits but also numerous risks to using AIVAs. There are many global myths, implications, and challenges around accepting this new technology into our lives. How can it be dangerous? Why should we research AI safety? Where do most controversies come from? Why are people naturally afraid of smart assistants outsmarting us? All of these questions are addressed on this fascinating website:
https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Gender

Why are Siri and her siblings sisters by default? Why do these AIVAs have female names, and speak in a submissive, even flirtatious female voice? When it comes to gender, Smart Assistants certainly have a dark side. “Siri’s ‘female’ obsequiousness — and the servility expressed by so many other digital assistants projected as young women — provides a powerful illustration of gender biases coded into technology products” (UNESCO, 2019).


Why female gendering of smart assistants is problematic:

  • Reflecting, reinforcing and spreading gender bias 

  • Smart Assistants’ tolerance of sexual harassment and verbal abuse

  • Blurring the lines between machine and human voices

  • Female smart assistants as the face and voice of servility and dumb mistakes 

  • Answers without complexity and referrals to higher authorities


This gendering is a direct reflection of broader gender disparities and a lack of diversity in the industry of AI technology research and development. Women make up 12 percent of AI researchers and a mere six percent of software developers in the field of AI. The stereotypes perpetuated by Siri and her siblings do nothing but continue to widen the digital gender gap.

 

According to UNESCO’s director for gender equality, Saniye Gulser Corat, “obedient and obliging machines that pretend to be women are entering our homes, cars and offices … The world needs to pay much closer attention to how, when and whether A.I. technologies are gendered and, crucially, who is gendering them” (Specia, 2019).


Security

They. Know. Everything.

One of the most profound security issues with these AIVAs is that they enable private entities to record your voice, movements and behavioural patterns around the clock. Constantly recording your biometrics and preferential data means they have access to your name, phone number, date of birth, passwords, credit cards, purchase history, search history, relationship status, and voiceprint. This information can then be used by or sold to third parties. Apple, Amazon and Google have all admitted to contracting people to listen to voice recordings in an effort to improve speech recognition. In 2019, Apple only temporarily suspended this process after it was reported that the recordings included conversations about medical information, illegal acts, and even people having sex.


It's. Always. Listening. 

You're having a conversation at home and all of a sudden your phone or smart speaker randomly says, ‘I’m not sure I understand.’ Smart assistants are designed to sit in a partial (or passive) functioning state, always attentive and listening for their programmed wake phrase, such as 'hey Siri' or 'OK Google', which launches them into their full functioning state. When we want them to wake, we appreciate the hands-free voice activation. Sometimes, though, they activate in error after mishearing sounds similar to their wake phrase, and record things we never intended. Even in a locked state, some user information is recorded and can be extracted because of this partial functioning state. While beneficial for forensic scientists, it means your personal assistant may be listening even when you don't realize it. 
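To see how a wake phrase can trigger by mistake, here is a toy sketch of passive wake-word detection. The fuzzy text comparison and the 0.75 threshold are stand-ins invented for illustration; real devices score audio, not text, but the failure mode is the same.

```python
import difflib

WAKE_PHRASE = "hey siri"

def sounds_like_wake_phrase(heard: str, threshold: float = 0.75) -> bool:
    """Toy stand-in for acoustic wake-word scoring: fuzzy string similarity
    between what was (supposedly) heard and the wake phrase."""
    score = difflib.SequenceMatcher(None, heard.lower(), WAKE_PHRASE).ratio()
    return score >= threshold

print(sounds_like_wake_phrase("hey siri"))      # True: intended activation
print(sounds_like_wake_phrase("hey series"))    # True (~0.78): accidental wake
print(sounds_like_wake_phrase("good morning"))  # False (~0.30): stays asleep
```

The trade-off is built in: a looser threshold wakes (and records) more often by accident, while a stricter one misses real requests.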

[Image: Siri extending its presence to a partial state]

You. Are. Not. Safe.

Voice-based personal assistants can be prone to impersonation or attack. Studies reviewing security issues reveal that mangled voice commands can be sent to your personal device, secretly and remotely, using ultrasound, wireless signals, or even public radio. Impersonating a person's actual voice can bypass voice-as-biometric authentication, leading to security issues "ranging from information theft and financial loss [to] physical harm via unauthorized access to smart appliances and vehicles" (Feng et al., 2018, p. 36). While comedic and seemingly far-fetched, perhaps the concern of toasters attacking people is closer than we think.

© 2023 by AmBits. Proudly created with Wix.com
