AI Fundamentals

Fine-Tuning

The Specialist Doctor

The Analogy

A general MBBS doctor undergoes three years of additional training to become a cardiologist; they don't start from scratch.

The cardiologist already has all the general medical knowledge from MBBS. The specialisation just sharpens their focus on one domain. Fine-tuning works exactly like this — you take a foundational model with broad knowledge and train it further on specialised data so it excels at a specific task.

In Plain English

Fine-tuning is taking a pre-trained foundational model and training it further on a specific dataset to make it better at a particular task — without rebuilding it from scratch.
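The idea can be sketched in a few lines of code. Here's a toy illustration (the numbers and one-parameter "model" are purely hypothetical, not a real neural network): we start from a weight that was already "pre-trained" and continue gradient descent on a small task-specific dataset.

```python
# Toy sketch of fine-tuning: a 1-parameter "model" y = w * x,
# "pre-trained" on general data, then trained further on a small
# specialised dataset. All numbers here are illustrative.

def sgd_step(w, data, lr=0.01):
    """One gradient-descent step on mean squared error for y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

# Imagine this weight was learned earlier on a large general dataset.
w_pretrained = 2.0

# Small specialised dataset whose true relationship is y = 2.5 * x.
task_data = [(1.0, 2.5), (2.0, 5.0), (3.0, 7.5)]

# Fine-tuning: continue gradient descent from the pre-trained weight,
# rather than starting from a random initial value.
w = w_pretrained
for _ in range(200):
    w = sgd_step(w, task_data)

print(round(w, 3))  # converges near 2.5, starting from "general knowledge"
```

Because training starts from a good initial weight instead of a random one, only a short run on a small dataset is needed; that is the whole appeal of fine-tuning over training from scratch.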


The Technical Picture

Fine-tuning continues gradient descent on a pre-trained model using a domain-specific or task-specific dataset. Parameters are updated, either all of them (full fine-tuning) or a small subset via parameter-efficient methods such as LoRA (low-rank adaptation), so the model adapts to the new task while retaining its general pre-trained knowledge.
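The LoRA idea mentioned above can be sketched in miniature (the matrix sizes here are tiny, hypothetical values chosen for illustration): the pre-trained weight matrix W is frozen, and only two small low-rank factors B and A are trained, with the effective weight being W + B @ A.

```python
# Minimal LoRA-style sketch in pure Python. The frozen base weight W is
# never updated; only the small factors B (d x r) and A (r x d) would be
# trained. Sizes and values are illustrative, not from a real model.

def matmul(X, Y):
    """Multiply two matrices given as nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def matadd(X, Y):
    """Add two matrices of the same shape."""
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

d, r = 4, 1  # hidden size and LoRA rank (r much smaller than d)

# Frozen "pre-trained" weight (identity matrix, purely for illustration).
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

# Trainable low-rank factors. B starts at zero, so before any training
# step the effective weight equals W exactly: fine-tuning begins at the
# pre-trained behaviour and drifts away only as B is updated.
B = [[0.0] for _ in range(d)]   # d x r
A = [[0.1, 0.2, 0.3, 0.4]]      # r x d

W_eff = matadd(W, matmul(B, A))
assert W_eff == W  # identical to the base model before training

# Parameter comparison: full fine-tuning vs LoRA.
full_params = d * d
lora_params = d * r + r * d
print(full_params, lora_params)  # 16 vs 8 trainable parameters
```

Even in this tiny example LoRA halves the trainable parameter count; for real models with d in the thousands and r around 8 or 16, the savings are several orders of magnitude, which is why partial-update methods make fine-tuning large models practical.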

Real-World Examples

  • A hospital fine-tuning GPT-4 on medical records to create a clinical assistant
  • A law firm fine-tuning Claude on case law to improve legal research responses
  • GitHub Copilot fine-tuned on billions of lines of code from open-source repositories

Key Takeaway

Fine-tuning = specialising a general AI model for a specific task, without starting from scratch.
