Will 2023 be the year banks and telcos are finally hacked at scale using deepfake voice technology? One of the most puzzling trends of recent years has been the adoption of voiceprint technology to authenticate callers by vocal characteristics such as pitch, timbre and tone.

And believe me, I have asked everyone from executives to board members in these sectors how they convinced themselves to prioritize the perceived convenience of voice recognition over the elevated risk of security and privacy breaches this technology introduces.

No one seems to know for sure, but a common theme is the post-pandemic push to cut customer support costs. For the moment, companies appear to be treating impersonations and unauthorized account access as isolated security incidents or password compromises, in an apparent pattern of turning a blind eye to the larger attack trend.

Referenced article: Fraudsters Cloned Company Director’s Voice In $35 Million Heist, Police Find (forbes.com)

Way back in 2018, Google's Duplex demonstrated convincingly human-sounding synthetic speech, and Baidu's "Deep Voice" showed that a voice could be cloned from just a few seconds of recorded audio. Today, AI tools such as Respeecher and Murf have improved to the point where account takeovers (ATO), vishing and other voice communication fraud can be used for identity theft, cyber heists and phone number hijacking (SIM swapping).

If your decision makers remain unconvinced about the clear and present risk of adopting this technology without thinking it through, the fictional but uncanny interviews with Steve Jobs and Richard Feynman hosted by podcast.ai just might get the point across.

Either way, next time your phone rings and you say hello a few times into a silent line, ask yourself what someone could do with the voice recording you just provided.