
Clinical principles

False Positives and False Negatives in Baseline Testing: Understanding the Error Rates That Matter

No medical test is perfect. Understanding error rates is essential for anyone using baseline data.

5 min read

No medical test is perfect, and baseline concussion testing is no exception. Understanding error rates — and their real-world implications — is essential for clinicians, families, and program administrators.

Sensitivity and specificity

Sensitivity measures how well a test detects concussions when they’re actually present (true positive rate). ImPACT’s sensitivity for detecting concussion ranges from 79.2% to 91.4% depending on the study, the composite scores used, and the comparison methodology, as reported in research published in the American Journal of Sports Medicine and the Archives of Clinical Neuropsychology. This means that 9–21% of genuinely concussed athletes may produce cognitive scores that don’t show statistically reliable decline from their baseline.

Specificity measures how well a test correctly identifies healthy athletes (true negative rate — i.e., not flagging a healthy person as concussed). ImPACT’s specificity is generally higher than its sensitivity, but imperfect. Some healthy athletes will produce scores that meet decline criteria — particularly if their baseline was taken under suboptimal conditions, if significant time has passed since the baseline, or if normal test-retest variability happens to produce an unusual result.
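The arithmetic behind these two error rates can be made concrete with a small sketch. The cohort sizes and the 0.85/0.90 values below are hypothetical illustrations chosen from within the published ranges above, not figures for any specific program:

```python
def confusion_counts(n_injured, n_healthy, sensitivity, specificity):
    """Expected test outcomes for a cohort, given a test's error rates.

    sensitivity: P(test flags decline | athlete is concussed)
    specificity: P(test reads normal   | athlete is healthy)
    """
    true_positives = n_injured * sensitivity
    false_negatives = n_injured * (1 - sensitivity)   # concussed, but "passes"
    true_negatives = n_healthy * specificity
    false_positives = n_healthy * (1 - specificity)   # healthy, but "fails"
    return true_positives, false_negatives, true_negatives, false_positives

# Hypothetical example: 100 concussed and 100 healthy athletes,
# sensitivity 0.85 (mid-range of published values), specificity 0.90.
tp, fn, tn, fp = confusion_counts(100, 100, 0.85, 0.90)
print(round(fn), "concussed athletes pass;", round(fp), "healthy athletes fail")
```

Even with respectable error rates, a sizeable program will see both kinds of mistakes in absolute numbers, which is why the sections below insist on corroborating evidence.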

What this means practically

Some concussed athletes will pass their cognitive test. Some healthy athletes will fail. This is why baseline testing should never be the sole basis for clinical decisions. It is one piece of evidence in a comprehensive evaluation that includes symptom assessment, clinical examination, balance testing, vestibular-ocular screening, and the clinician’s integrated clinical judgment.

Multi-domain testing closes the gap

The multi-domain approach improves overall detection. An athlete who passes cognitive testing but fails VOMS or balance assessment is still flagged for continued management. The more domains you test, the smaller the gap through which a concussion can slip undetected. See also why multi-domain testing catches what single tests miss.
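The "smaller gap" claim can be quantified under one simplifying assumption: if each domain's chance of missing a concussion were independent of the others, the overall miss rate would be the product of the per-domain miss rates. The per-domain sensitivities below are illustrative placeholders, not published figures, and real domains are unlikely to be fully independent:

```python
def combined_miss_rate(sensitivities):
    """Probability a concussion slips past every domain, assuming
    domains err independently (a simplifying assumption)."""
    miss = 1.0
    for s in sensitivities:
        miss *= (1 - s)   # each domain must miss for the athlete to slip through
    return miss

# Hypothetical sensitivities for three testing domains.
domains = {"cognitive": 0.85, "balance": 0.60, "voms": 0.70}

print(f"cognitive alone misses {combined_miss_rate([0.85]):.0%}")
print(f"all three combined miss {combined_miss_rate(domains.values()):.1%}")
```

Each added domain multiplies the miss rate by a number less than one, so detection can only improve; in practice the gain is smaller than this independent-errors sketch suggests, but the direction of the effect is the same.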

Our approach

At Headquarters, we treat baseline comparison as an important data point — not a diagnostic verdict. Every clinical decision is made using the totality of evidence, and we educate families about both the strengths and limitations of the tools we use. Transparency about error rates builds trust; overpromising what tests can do erodes it.

Frequently asked questions


What is ImPACT's sensitivity for detecting concussions?
79.2% to 91.4% across published studies, depending on methodology. That means 9–21% of concussed athletes may produce scores that don't show statistically reliable decline.
What is ImPACT's specificity?
Generally higher than sensitivity but imperfect. Some healthy athletes will produce scores that meet decline criteria — especially if their baseline was taken under suboptimal conditions.
Should baseline testing be the sole basis for clinical decisions?
No. It's one piece of evidence in a comprehensive evaluation that includes symptoms, clinical examination, balance, and vestibular-ocular screening.
Does multi-domain testing improve detection?
Yes. An athlete who passes cognitive testing but fails VOMS or balance assessment is still flagged. The more domains tested, the smaller the gap through which a concussion can slip.

Honest about what tests can (and can't) do.

Multi-domain baselines that close the gaps each single tool leaves — and transparent interpretation for every family.