It doesn’t get much scarier or creepier than this: on Wednesday, Amazon demonstrated a new feature that lets its virtual assistant, Alexa, mimic the voices of users’ deceased relatives. Yes, you read that right!
The company demonstrated the feature at Amazon’s re:MARS (Machine Learning, Automation, Robots, and Space) conference in Las Vegas. In the short demo video, a boy asks Alexa to read the story of “The Wizard of Oz” in the voice of his late grandmother.
Alexa acknowledges the child’s request in her standard robotic voice, then immediately switches to a softer, more human tone that mimics the voice of the child’s deceased grandmother and narrates an excerpt from the children’s novel.
“As you have seen during this experience, it is not the voice of Alexa reading the book, but the voice of the child’s grandmother,” said Rohit Prasad, Amazon’s Senior Vice President and Head Scientist for Alexa AI.
Prasad introduced the clip by saying that adding “human attributes” to AI systems has become increasingly important “in these times of the ongoing pandemic where so many of us have lost someone we love.”
“While AI can’t take away that pain of loss, it can certainly make their memories last,” he added.
Watch the demo video below:
The Alexa team has taught the digital assistant to mimic the voice of anyone it hears using less than a minute of recorded audio.
The company is presenting the functionality as a way to help people preserve memories, especially those who have lost loved ones to COVID-19.
“This required inventions where we had to learn how to produce a high-quality voice in less than a minute of recording versus hours of recording in the studio,” Prasad said at the conference.
“The way we made it happen is by framing the problem as a voice conversion task and not a speech generation task. We are unquestionably living in the golden era of AI, where our dreams and science fiction are becoming a reality.”
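Amazon has not published details of its system, so as a rough, purely illustrative intuition for what framing the problem as “voice conversion” (transforming existing audio toward a target speaker) rather than from-scratch speech generation can mean, here is a toy sketch. It learns a crude spectral-envelope profile from a short target sample and applies it to source audio; this is NOT Amazon’s method, and all function names and signal choices below are invented for illustration.

```python
# Toy "voice conversion" sketch (not Amazon's technique): match the average
# frequency-bin magnitudes of source audio to a short target sample.
import numpy as np

def spectral_profile(audio, n_fft=512):
    """Average magnitude per frequency bin over overlapping windowed frames."""
    hop = n_fft // 2
    frames = [audio[i:i + n_fft] for i in range(0, len(audio) - n_fft, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames) * np.hanning(n_fft), axis=1))
    return mags.mean(axis=0) + 1e-8  # epsilon avoids divide-by-zero

def convert_voice(source, target_sample, n_fft=512):
    """Rescale each frequency bin of `source` so its average spectrum
    matches the profile learned from the short `target_sample`."""
    gain = spectral_profile(target_sample, n_fft) / spectral_profile(source, n_fft)
    hop = n_fft // 2
    window = np.hanning(n_fft)
    out = np.zeros(len(source))
    # Overlap-add resynthesis with the per-bin gain applied to every frame.
    for i in range(0, len(source) - n_fft, hop):
        frame = np.fft.rfft(source[i:i + n_fft] * window)
        out[i:i + n_fft] += np.fft.irfft(frame * gain, n=n_fft) * window
    return out

rng = np.random.default_rng(0)
source = rng.normal(size=16000)              # 1 s of stand-in "speech" at 16 kHz
target = np.cumsum(rng.normal(size=16000))   # darker, low-frequency-heavy sample
target /= np.abs(target).max()
converted = convert_voice(source, target)
print(converted.shape)  # (16000,)
```

The key design point the quote hints at: conversion only needs enough target audio to estimate speaker characteristics (here, one second for a spectral profile), whereas training a full speech generator traditionally required hours of studio recordings.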
The new feature is currently in development, and Amazon has not disclosed when it plans to roll it out to the public. After the voice-impersonation feature was announced, some took to Twitter to raise concerns that the capability could be misused by scammers and cybercriminals to impersonate other people’s voices without their consent.
The FUCK you want.
Remember when we told you that deepfakes would increase the mistrust, alienation and epistemic crisis already going on in this culture? Yes that. That times A LOT.
Amazon has a plan to make Alexa mimic someone’s voice [w/o their consent] https://t.co/kXm4EXKgp8
— Damien P. Williams, MA, MSc, ABD, Patternist (@Wolven) June 22, 2022
The phone attack implications of this tool are not good at all – it will most likely be used for impersonation.
At Amazon’s re:MARS conference they announced they are working on taking short audio clips of someone’s voice and reprogramming them for longer speech https://t.co/5TkEIHoeXG
— Rachel Tobac (@RachelTobac) June 22, 2022
“ALEXA, PLAY DESPACITO”
My late grandmother’s voice, robotic: “Now playing on Amazon Music, Despacito” https://t.co/D6BLQCafzj
— Parker Molloy (@ParkerMolloy) June 22, 2022
Umm, so how soon will criminals be able to use it to call your relatives and beg them for Venmo money? Or ask them for social security numbers? Or bank details?
— bitty_in_pink (@bitty_in_pink) June 22, 2022