Plain AI in plain English


Fine-tuning

30-second gist

Fine-tuning is taking a general AI that already knows the world and teaching it more about your specific corner of it. The model keeps everything it learnt; you just nudge it toward your domain, your tone, or your way of doing things.

Most "Copilot for [industry]" products you see advertised are fine-tuned versions of a familiar foundation model, not new AIs from scratch.

If you want more

When you'd want to fine-tune

Three classic reasons:

  • A specialist domain — medical notes, legal contracts, fishing reports — where the general AI lacks vocabulary.
  • A specific tone — your bank's customer-service voice, your law firm's letterhead, your school's reading level.
  • A specific output format — always returns valid JSON, always replies in five bullets, always greets in te reo first.
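Under the hood, all three come down to showing the model a pile of examples of the behaviour you want. A minimal sketch of what that training data can look like, using the common chat-style JSONL pattern (the bank-voice examples here are invented for illustration, and the exact field names vary by provider):

```python
import json

# Two hand-written examples of questions answered in the tone we want the
# model to learn (greeting in te reo first, friendly bank voice).
# Real fine-tuning sets usually run to hundreds of examples.
examples = [
    {
        "messages": [
            {"role": "user", "content": "What's my account balance?"},
            {"role": "assistant", "content": "Kia ora! Your balance is shown under Accounts > Summary."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Kia ora! Choose 'Forgot password' on the sign-in page and follow the emailed link."},
        ]
    },
]

# Write one JSON object per line -- the JSONL file a fine-tuning job ingests.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The fine-tuning job itself then runs on the provider's side; your work is mostly in writing good examples.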

For most people, fine-tuning is overkill. A well-written prompt with a few examples usually gets you 80% of the way for far less effort.
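That cheaper route looks like this: instead of training anything, you paste a few examples straight into the prompt and let the model copy the pattern. A rough sketch (the bank-voice examples are made up for illustration):

```python
# Few-shot prompting: show the model a couple of worked examples in the
# prompt itself. It mimics the pattern -- often enough for tone and format.
examples = [
    ("Is the branch open on Saturday?",
     "Kia ora! Yes, our branches open 9am to 1pm on Saturdays."),
    ("Can I increase my card limit?",
     "Kia ora! You can request a limit change in the app under Cards > Settings."),
]

question = "How do I order a new card?"

prompt_lines = ["Answer in our bank's friendly voice, greeting in te reo first.", ""]
for q, a in examples:
    prompt_lines += [f"Customer: {q}", f"Agent: {a}", ""]
prompt_lines += [f"Customer: {question}", "Agent:"]

# This string is what you'd send to the model; it ends mid-conversation,
# so the model's natural next move is to answer in the demonstrated style.
prompt = "\n".join(prompt_lines)
print(prompt)
```

If the examples in the prompt get you the behaviour you need, you can stop there; fine-tuning only earns its keep when prompting falls short at scale.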

A real example

Khan Academy's "Khanmigo" tutor is a fine-tuned version of OpenAI's GPT-4. It still has GPT-4's general knowledge, but it's been trained to teach the way a Khan Academy coach would — never giving the answer outright, asking leading questions, staying patient. Most enterprise "Copilot for X" products work this way.