That voice you hear, even one you recognize, might not be real, and you may have no way of knowing. Voice synthesis is not a new phenomenon, but a growing number of freely available apps are putting this powerful voice-cloning capability in the hands of ordinary people, and the ramifications could be far-reaching and unstoppable.

A recent Consumer Reports study of half a dozen such tools puts the risks in stark relief. Platforms like ElevenLabs, Speechify, and Resemble AI use powerful speech synthesis models to analyze and recreate voices, sometimes with little to no safeguards in place. Some do try: Descript, for example, requires recorded voice consent before the system will recreate a voice signature. But others are not so careful.
