Bayes' Theorem Calculator
Bayes' theorem updates what you believe when new evidence arrives. It is the math behind why a 99% accurate test for a rare disease can still produce mostly false alarms. Plug in your starting probability and how good your test is to get the updated probability, with a per-10,000 walkthrough that makes the surprise visible.
New to Bayes' rule? Read the 4-min primer below.
What Bayes' rule says. Start with a belief, observe some evidence, and update. The update is mechanical: multiply your prior belief by the likelihood of the evidence under that belief, then renormalise across all the ways the evidence could have arisen. The output is the posterior probability that the belief is true given what you just saw.
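That mechanical update can be sketched in a few lines of Python (a hypothetical `bayes_update` helper for illustration, not the calculator's own code):

```python
def bayes_update(prior, p_d_given_h, p_d_given_not_h):
    """Posterior P(H|D): multiply the prior by the likelihood,
    then renormalise over both ways the evidence could arise."""
    numerator = prior * p_d_given_h
    denominator = numerator + (1 - prior) * p_d_given_not_h
    return numerator / denominator

# Prior of 50%; the evidence is 3x more likely if H is true (0.9 vs 0.3).
print(bayes_update(0.5, 0.9, 0.3))  # ≈ 0.75
```

The denominator is the total probability of the evidence, which is what makes the posterior a proper probability.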
The base-rate fallacy. A test that is 99% accurate sounds airtight, but if only 1 in 1,000 people have the disease, a positive result is overwhelmingly a false alarm. Most people skip the prior and treat 99% accurate as 99% chance of disease. Bayes forces the prior back into the answer, which is why low-prevalence screening intuition is so often wrong.
Medical screening intuition. Sensitivity is how often the test catches a sick person; specificity is how often it correctly clears a healthy one. The positive predictive value (PPV) is what you actually want: given a positive test, what is the chance you are sick? PPV depends on prevalence as much as on the test, which is why a great test for a rare disease still produces mostly false alarms.
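The PPV arithmetic can be sketched like this (an illustrative `ppv` helper, assumed names), using the "99% accurate" test and 1-in-1,000 prevalence from above:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(sick | positive test)."""
    true_pos = prevalence * sensitivity            # sick and caught
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy but flagged
    return true_pos / (true_pos + false_pos)

# 99% sensitivity and 99% specificity, disease in 1 of 1,000 people:
print(ppv(0.001, 0.99, 0.99))  # ≈ 0.09: a positive is only ~9% likely to be real
```

Per 10,000 people: 10 are sick (9.9 true positives) and 9,990 are healthy (99.9 false positives), so positives run roughly ten-to-one false.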
Picking a mode. Use the medical screening mode if you have prevalence, sensitivity, and specificity. Pick the false-positive paradox mode if you want the "out of 10,000 people" walkthrough. Use spam mode if you have base rates and word likelihoods, and generic Bayes if you have raw P(D|H), P(D|~H), and a prior. Two-test mode chains a second test, feeding the first test's posterior in as the second test's prior.
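The two-test chain can be sketched as follows, assuming the tests are conditionally independent given disease status (a hypothetical `update` helper; see the caveats below for when that assumption fails):

```python
def update(prior, sensitivity, specificity, positive=True):
    """One Bayes update for a binary test result."""
    if positive:
        num = prior * sensitivity
        den = num + (1 - prior) * (1 - specificity)
    else:
        num = prior * (1 - sensitivity)
        den = num + (1 - prior) * specificity
    return num / den

# Prevalence 0.1%, sensitivity 99%, specificity 95%, two positives in a row.
p1 = update(0.001, 0.99, 0.95)  # posterior after the first positive, ~2%
p2 = update(p1, 0.99, 0.95)     # first posterior becomes the second prior, ~28%
```

Chaining is just running the same update twice with a new prior, which is why correlated tests overstate the evidence: the second result is not as informative as the formula assumes.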
Try loading a real-world example.
A general-population HIV ELISA: prevalence about 0.1%, sensitivity 99%, specificity 95%. Given a random positive result, how worried should you be?
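With those numbers, the per-10,000 walkthrough works out roughly like this (an illustrative sketch, not the calculator's code):

```python
prevalence, sensitivity, specificity = 0.001, 0.99, 0.95
n = 10_000

sick = n * prevalence                        # 10 infected people
true_pos = sick * sensitivity                # 9.9 correctly flagged
false_pos = (n - sick) * (1 - specificity)   # 499.5 healthy people flagged anyway
ppv = true_pos / (true_pos + false_pos)

print(ppv)  # ≈ 0.019: under 2% of positives are real at this prevalence
```

A 99%-sensitive test with 95% specificity still yields about fifty false alarms for every true positive when only 1 in 1,000 people is infected.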
Read more: Anatomy of Bayes' rule
Caveats: When this is the wrong tool
| If you have… | Use instead |
| --- | --- |
| Bayes factors (model evidence, not single events) | Model comparison rather than belief updating. A future Bayes factor calculator covers t-tests, proportions, and correlation with JZS priors. |
| Continuous likelihood ratios (e.g. biomarker level) | The slope of the LR with biomarker value matters and needs a different UI; out of scope here. |
| Prevalence estimated from the same sample | You have a calibration / Bayesian latent-class problem; treat prevalence as uncertain rather than known. |
| Imperfect gold standard | Sensitivity / specificity assume the truth is known. If your "truth" is itself a test, use latent-class methods. |
| Two tests that are not conditionally independent | Chaining understates uncertainty. Either model the dependence explicitly or treat the pair as one combined test with empirically measured operating characteristics. |
- Conditional probability in R - the building block under Bayes' rule.
- Sample spaces and probability axioms - why the denominator must sum.
- Logistic regression - the regression analogue: log-odds as a linear function.
Numerical accuracy: closed-form arithmetic. Stable down to prior = 1e-9; near that boundary, the posterior is approximately prior × LR+, where LR+ = sensitivity / (1 − specificity).
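A quick sketch of that low-prevalence approximation (illustrative check, not the calculator's implementation):

```python
sensitivity, specificity = 0.99, 0.95
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio, 19.8 here

prior = 1e-9
# Exact closed-form posterior after a positive test.
exact = prior * sensitivity / (prior * sensitivity + (1 - prior) * (1 - specificity))
# Low-prevalence approximation: posterior ~ prior * LR+.
approx = prior * lr_pos

print(exact, approx)  # the two agree to many significant figures
```

When the prior is tiny, the denominator is dominated by the false-positive term, so dividing by it reduces to multiplying the prior by LR+.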