AI Fundamentals

Data Privacy in AI

What Happens in the Kitchen


The Analogy

When you order food at a restaurant, you trust the kitchen. When you type into ChatGPT, do you know where that text goes?

By default, most AI services log your conversations, and some use them to train future models. Enterprise plans typically offer agreements that keep your data out of training. If you paste a client's confidential document into a free ChatGPT account, that data has left your control. For students building products, this compounds: your users' data goes to your AI provider unless you explicitly configure otherwise.

In Plain English

When you use AI tools, your inputs are often stored, logged, and potentially used for model training. This matters for personal privacy and is critical when handling client or user data professionally. Enterprise tiers and privacy settings exist — but you must actively choose them.


The Technical Picture

By default, LLM providers collect input/output pairs for safety monitoring, abuse detection, and model improvement. Opt-out options (ChatGPT's "improve the model" toggle, Anthropic's API data training opt-out) disable use for training. GDPR and India's DPDP Act impose obligations on organisations processing user data through third-party AI services.
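Because defaults vary by provider, a common defensive pattern is to strip obvious personal data before a prompt ever leaves your system. Below is a minimal sketch of that idea; the regex patterns and the `redact` helper are illustrative assumptions, not part of any provider's SDK, and real PII detection needs far more than two patterns.

```python
import re

# Illustrative patterns only -- real PII detection is much harder.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{8,}\d")

def redact(text: str) -> str:
    """Mask e-mail addresses and phone-like numbers with placeholders
    before the text is sent to a third-party AI service."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Contact Priya at priya@example.com or +91 98765 43210."
print(redact(prompt))  # placeholders instead of the real contact details
```

A pre-processing step like this runs on your side, so it protects users even when a provider's privacy settings are misconfigured; it complements, rather than replaces, choosing the right data-handling tier.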

Real-World Examples

  • Samsung engineers accidentally leaked internal source code by pasting it into ChatGPT
  • Anthropic's API has a no-training data option for enterprise users
  • India's DPDP Act 2023 requires explicit consent before processing personal data — including via AI

Key Takeaway

Free AI tools use your data. If you're handling someone else's data, check the privacy settings before pasting.
