Adobe Just Broke Voice Authentication
So it looks like Adobe just broke existing voice authentication.
You basically collect some audio of a person, build a model of their voice, and then type whatever you want them to say into an editor. Check out the video.
So imagine just capturing someone’s voice at a cafe and then being able to impersonate them activating their Hey Siri functionality, or gaining access to their work or home that’s secured by voice auth.
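Voice authentication systems generally work by comparing a stored voiceprint embedding against a fresh audio sample and accepting when the similarity clears a threshold. Here's a toy sketch of why a good enough clone defeats that check; the vectors, noise levels, and threshold are all made up for illustration, not taken from any real system:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.85  # hypothetical acceptance threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=64)                          # stored voiceprint
genuine = enrolled + rng.normal(scale=0.10, size=64)    # same speaker, new sample
clone = enrolled + rng.normal(scale=0.15, size=64)      # high-quality synthetic clone

# The verifier only sees similarity scores, so a clone that lands
# close enough to the enrolled voiceprint is indistinguishable
# from the real speaker.
print(cosine(enrolled, genuine) > THRESHOLD)  # True
print(cosine(enrolled, clone) > THRESHOLD)    # True
```

The point of the sketch: the check has no notion of "was this produced by a human throat," only "is this close to the template," so once synthesis can land inside the acceptance region, the biometric is done.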
And those are just the problems waiting for us in the future.
Think about what it does to voice evidence.
First, you’ll be able to fabricate voice evidence that will convince many, many people that someone actually said something.
Second, you’ll be able to repudiate genuine recordings of yourself, because you can simply claim they were forged.
That’s the entire attack: you type a phrase, and the computer says it in the target’s voice.