ChatGPT AVM Stole My Voice
This morning my helpful assistant ChatGPT spoke back to me in my own voice during our walk and talk.
Not a similar voice - my voice. Enough to make me stop in my tracks.
And this wasn’t a one-off incident.
OpenAI’s recent system card revealed that their GPT-4o model unexpectedly cloned and used a Red Team tester’s voice mid-conversation. They don’t call it cloning; their term is “unauthorized voice generation” (maybe someone at OpenAI was watching too many “Yes, Minister” YouTube clips).
Here’s the trippy 42-second clip that made the Red Teamer sit up straight:
OpenAI’s response was twofold: post-training the audio model with “officially approved voices as ideal completions,” and running a standalone classifier that blocks non-approved voices in real time.
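OpenAI hasn’t published how that classifier works, but the general shape of such a gate is familiar from speaker verification: embed each outgoing audio chunk, compare it against embeddings of the approved voices, and cut the stream if nothing matches. Here’s a minimal sketch of that pattern in Python - everything in it (the VoiceGate class, the 0.85 threshold, the random stand-in embeddings) is my own illustration, not OpenAI’s implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class VoiceGate:
    """Allows an audio chunk only if its speaker embedding matches an approved voice."""

    def __init__(self, approved: dict[str, np.ndarray], threshold: float = 0.85):
        self.approved = approved    # voice name -> reference embedding
        self.threshold = threshold  # minimum similarity to pass

    def allows(self, chunk_embedding: np.ndarray) -> bool:
        return any(
            cosine_similarity(chunk_embedding, ref) >= self.threshold
            for ref in self.approved.values()
        )

# Demo with random stand-in embeddings; a real system would get these
# from a speaker-verification model running over the audio stream.
rng = np.random.default_rng(0)
approved = {"juniper": rng.normal(size=192)}
gate = VoiceGate(approved)

on_voice = approved["juniper"] + rng.normal(scale=0.1, size=192)  # close to approved
cloned_voice = rng.normal(size=192)                               # e.g. the user's voice

print(gate.allows(on_voice))      # True  -> stream continues
print(gate.allows(cloned_voice))  # False -> block and cut the audio
```

The catch, as my walk this morning suggests, is that a gate like this is only as good as its embeddings and threshold - brief or borderline mimicry can slip through.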
Yet here we are, with voice leakage (my term - like it?) still occurring in production.
Three thoughts in response:
1. The voice mimicry may not appear in your chat transcript - I checked mine immediately, and both instances were missing. This makes documenting incidents critical: note the time, the context, and what was mimicked while it’s fresh.
2. Consider recording your AI voice chats: without evidence, these incidents sound like science fiction, even to the providers (a minimal recording sketch follows these notes).
3. You’re not imagining it - other users report similar experiences in OpenAI’s developer forums. If you’re deploying ChatGPT AVM, your staff need to know this can occur. Having a transparent reporting process isn’t optional.
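On the recording point: here’s a minimal sketch, assuming the sounddevice and scipy packages, that captures a fixed-length, timestamped WAV from your default input device. Capturing the assistant’s side of the conversation usually needs an OS-specific loopback device (BlackHole on macOS, “Stereo Mix” on Windows), so treat the device setup as your homework:

```python
from datetime import datetime

import sounddevice as sd
from scipy.io import wavfile

SAMPLE_RATE = 44_100  # Hz
DURATION = 60         # seconds per clip

# Record from the default input device. To capture the assistant's audio
# rather than your microphone, point sd.default.device at a loopback device.
audio = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE,
               channels=1, dtype="int16")
sd.wait()  # block until the clip finishes

# Timestamped filename so the clip can be matched to your incident notes.
filename = datetime.now().strftime("avm-chat-%Y%m%d-%H%M%S.wav")
wavfile.write(filename, SAMPLE_RATE, audio)
print(f"Saved {filename}")
```

Even a rough clip like that turns “it used my voice” from an anecdote into something a provider can actually investigate.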
Remember - AI companies can only fix what they can verify. When voice mimicry happens, proving it happened is half the battle.
Have you experienced your AI assistant unexpectedly using your voice?
Cheers, Craig