A US federal judge ordered OpenAI to hand over 20 million ChatGPT conversation logs. Full prompts. Full responses. Everything users typed in.
When I tell this to companies, the most common reaction is: "Yes, but we ticked the box. Our data isn't used for training."
That box has nothing to do with this.
"Do not train on my data" means OpenAI won't use your prompts to improve their models. It does not mean your data isn't stored. It does not mean your data can't be subpoenaed. It does not mean a US court won't order your AI provider to hand over everything your employees ever typed in.
That's exactly what happened. A judge ordered OpenAI to preserve and produce conversation logs. Not training data. Conversation logs. The actual prompts your people type every day.
Nearly 40% of employee AI inputs contain sensitive data. Client names. Financial details. HR decisions. Strategic plans. All of it sitting on servers you don't control, in a jurisdiction where your GDPR rights are irrelevant the moment a court order lands.
"We turned off training" is not a governance strategy. It's one checkbox in a settings page. It says nothing about where your data lives, who can access it, or what happens when a court comes knocking.
The only thing that protects you is making sure sensitive data never gets there in the first place. Strip it out before it leaves your network. Keep your own audit trail. Then it doesn't matter what your AI provider is ordered to hand over.
There's nothing to find.
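What does "strip it out before it leaves your network" look like in practice? Here is a minimal sketch in Python: a redaction gateway that replaces sensitive matches with placeholders before a prompt is sent out, and records what it removed as a local audit trail. The pattern names and regexes are illustrative assumptions; a real deployment would use a vetted PII-detection library and patterns tuned to your own data.

```python
import re

# Illustrative patterns only -- a production gateway would use a proper
# PII/NER detection library, not a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders before the prompt
    leaves the network. Returns the redacted prompt plus an audit list
    of what was removed, to be stored on your own infrastructure."""
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(prompt):
            findings.append(f"{label}: {match}")
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt, findings

redacted, audit = redact("Email jane.doe@acme.com, call +44 20 7946 0958.")
print(redacted)  # placeholders instead of the email address and phone number
```

The point is architectural, not the regexes: redaction happens on your side of the wire, and the only complete record of what was typed stays in your audit log, not on the provider's servers.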

