AI Fundamentals

Grounding

The Journalist with Sources

The Analogy

A journalist who only writes from verified sources is trusted; one who makes things up is fired.

Grounding is making the AI cite its sources — anchoring every claim to a verifiable piece of information provided in the context. An ungrounded AI might confidently invent a statistic. A grounded AI is told: 'only use information from this document' — making its outputs traceable and trustworthy.

In Plain English

Grounding anchors an AI's responses to specific, provided source material — preventing it from making things up. A grounded AI only draws from the documents or data you've given it, making responses verifiable.


The Technical Picture

Grounding techniques constrain LLM generation to information from a provided context (RAG-retrieved chunks, database query results, or user-provided documents). This reduces hallucination by limiting the model's reliance on parametric memory and enabling attribution to source material.
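The constraint described above is often applied at the prompt level: the model is handed numbered source chunks and instructed to answer only from them, with citations. A minimal sketch of that assembly step, assuming nothing vendor-specific (the chunk numbering scheme and instruction wording are illustrative, not any particular API):

```python
def build_grounded_prompt(question: str, chunks: list[str]) -> str:
    """Assemble a prompt that anchors the answer to numbered source chunks."""
    # Number each chunk so the model can cite it as [1], [2], ...
    sources = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer using ONLY the sources below. Cite each claim as [n]. "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "What is the refund window?",
    ["Refunds are accepted within 30 days of purchase.",
     "Items must be unused and in original packaging."],
)
```

The same structure works whether the chunks come from a RAG retriever, a database query, or a user-uploaded document; only the source of `chunks` changes.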

Real-World Examples

  • Google's Grounding with Google Search feature for Gemini
  • Enterprise chatbots grounded on company policy documents
  • RAG systems, which are inherently grounded because answers come from retrieved chunks
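Because grounded answers carry citations, they can also be checked after the fact. A toy verifier, assuming the `[n]` citation convention from a numbered-source prompt (the regex and numbering are assumptions of this sketch, not a standard):

```python
import re

def citations_valid(answer: str, num_chunks: int) -> bool:
    """Return True if the answer cites at least one source and every
    [n] citation refers to a chunk that was actually provided."""
    cited = {int(m) for m in re.findall(r"\[(\d+)\]", answer)}
    return bool(cited) and all(1 <= n <= num_chunks for n in cited)

citations_valid("Refunds are accepted within 30 days [1].", 2)  # True
citations_valid("Our CEO confirmed this policy [7].", 2)        # False: no chunk 7
```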

Key Takeaway

Grounding keeps AI honest — it can only say what's in the sources you provide.
