Bayes’ Theorem
What
A way to update your belief about something after seeing evidence:
P(hypothesis | evidence) = P(evidence | hypothesis) × P(hypothesis) / P(evidence)
Or more compactly:
posterior = likelihood × prior / evidence
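The update rule is one line of arithmetic. A minimal sketch (function name is illustrative, not from any library):

```python
def posterior(prior, likelihood, evidence):
    """P(H | E) = P(E | H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Example numbers: prior 0.01, likelihood 0.99, evidence 0.0198
print(posterior(0.01, 0.99, 0.0198))  # ≈ 0.5
```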
Why it matters
- Naive Bayes classifier: simple, fast, works well for text
- Bayesian inference: update model beliefs as data comes in
- Spam filters: P(spam | these words) using word frequencies
- Medical diagnosis: P(disease | positive test) — famously unintuitive
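The spam-filter idea can be sketched in a few lines. This is a toy naive Bayes scorer with made-up word probabilities, purely for illustration; real filters estimate these from training data and smooth unseen words:

```python
import math

# Made-up per-class word probabilities for illustration only.
p_word_given_spam = {"free": 0.30, "win": 0.20, "meeting": 0.01}
p_word_given_ham  = {"free": 0.02, "win": 0.01, "meeting": 0.20}
p_spam, p_ham = 0.4, 0.6  # priors

def spam_posterior(words):
    # Sum logs instead of multiplying probabilities, to avoid underflow.
    log_spam = math.log(p_spam) + sum(math.log(p_word_given_spam[w]) for w in words)
    log_ham  = math.log(p_ham)  + sum(math.log(p_word_given_ham[w])  for w in words)
    # Normalize: P(spam | words) = e^ls / (e^ls + e^lh), shifted for stability.
    m = max(log_spam, log_ham)
    return math.exp(log_spam - m) / (math.exp(log_spam - m) + math.exp(log_ham - m))

print(spam_posterior(["free", "win"]))  # close to 1
print(spam_posterior(["meeting"]))      # close to 0
```

The "naive" part is treating each word as independent given the class, which is wrong but works surprisingly well for text.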
Classic example
A disease affects 1% of people. A test has 99% sensitivity and 99% specificity (loosely, "99% accurate"). You test positive. What's the probability you actually have the disease?
P(disease | positive) = P(positive | disease) × P(disease) / P(positive)
= 0.99 × 0.01 / (0.99 × 0.01 + 0.01 × 0.99)
= 0.0099 / 0.0198
= 0.5 ← only 50%!
The low base rate (1%) means most positives are false positives.
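The example above can be checked directly, and varying the base rate shows why it dominates (function name is illustrative):

```python
def p_disease_given_positive(prevalence, sensitivity, specificity):
    # Numerator: true positives. Denominator adds false positives
    # from the (much larger) healthy population.
    true_pos  = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

print(p_disease_given_positive(0.01, 0.99, 0.99))  # ≈ 0.5
# With a 10% base rate, the same test is far more convincing:
print(p_disease_given_positive(0.10, 0.99, 0.99))  # ≈ 0.92
```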