No employee (except the rare bad actor) means to leak sensitive company data. But it happens, especially when people use generative AI tools like ChatGPT to “polish a proposal,” “summarize a contract,” or “write code faster.” Here’s the problem: unless you’re using ChatGPT Team or Enterprise, ChatGPT doesn’t treat your data as confidential.
According to OpenAI’s own Terms of Use: “We do not use Content that you provide to or receive from our API to develop or improve our Services.”
Read the fine print, though: that protection covers the API and business plans like Team and Enterprise, not the standard consumer version. For everyday users, ChatGPT can use your prompts, including anything you type or upload, to train its large language models.
Translation:
That “confidential strategy doc” you asked ChatGPT to summarize?
That “internal pricing sheet” you wanted to reword for a client?
That “source code” you needed help debugging?
☠️ Poof. Trade secret status, gone. ☠️
If you don’t take reasonable measures to maintain their secrecy, your trade secrets lose their legal protection.
So how do you protect your business?
1. Write an AI Acceptable Use Policy. Be explicit: what’s allowed, what’s off limits, and what’s confidential.
2. Educate employees. Most folks don’t realize that ChatGPT isn’t a secure sandbox. Make sure they do.
3. Control tool access. Invest in an enterprise solution with confidentiality protections (a minimal sketch of one approach follows this list).
4. Audit and enforce. Treat ChatGPT the way you treat Dropbox or Google Drive: as a tool that can leak data if left unmanaged.
5. Update your confidentiality and trade secret agreements. Include restrictions on AI disclosures.
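If your team wants a concrete starting point for step 3, here is a minimal sketch (in Python, assuming the official OpenAI SDK) of an internal wrapper that routes employee prompts through a company-managed API account, the path OpenAI’s terms exclude from model training, instead of personal ChatGPT logins. The environment variable, function name, and model choice are illustrative assumptions, not a specific product’s setup.

# Minimal sketch: send prompts through the company's API account instead of
# personal ChatGPT accounts. Per OpenAI's Terms of Use, content sent to the
# API is not used to develop or improve its models. Names are illustrative.
import os
from openai import OpenAI

# The API key is provisioned and rotated by IT, never held by individual employees.
client = OpenAI(api_key=os.environ["COMPANY_OPENAI_API_KEY"])

def ask(prompt: str) -> str:
    """Send a prompt through the company-controlled account and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

A wrapper like this also gives you a single choke point for logging and reviewing usage, which makes the audits in step 4 far easier.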
AI isn’t going anywhere. The companies that get ahead of its risk will be the ones still standing when the dust settles. If you don’t have an AI policy and a plan to protect your data, you’re not just behind—you’re exposed.