Exploring how the 2015 MCAT revision preserved predictive validity for medical school success while assessing the broader competencies needed in modern medicine.
Every year, tens of thousands of aspiring doctors face a single, formidable challenge: the Medical College Admission Test, better known as the MCAT. For nearly a century, this exam has served as a critical gatekeeper to the medical profession, but in 2015 it underwent its most significant transformation in decades. The revised exam emerged with a new structure, new content, and a new scoring system—prompting an urgent question among educators: Does the new MCAT actually do a better job of predicting who will succeed in medical school and beyond?
This question isn't just academic trivia; it carries profound implications for the future of healthcare. Medical schools invest tremendous resources in selecting students who will not only survive the rigorous curriculum but also become competent, caring physicians. The stakes of admission decisions couldn't be higher—for applicants dreaming of white coats, for schools striving to maintain standards, and for patients who will one day entrust their lives to these future doctors.
Before examining the MCAT's transformation, we need to understand a key statistical concept: incremental validity. In simple terms, incremental validity measures whether a new assessment adds meaningful predictive power beyond existing tools. Imagine adding an extra ingredient to a recipe—incremental validity tells us whether that ingredient actually makes the dish better or if it's just adding unnecessary complexity.
In medical admissions, the central question becomes: Does the new MCAT provide better predictions of student performance compared to the old exam, especially when we consider other available information like undergraduate grades? 4
This concept matters because medical school admissions represent a high-stakes balancing act. Admissions committees must weigh multiple factors—grades, test scores, interviews, essays, and letters of recommendation—to select candidates most likely to succeed. If a new test version doesn't improve predictive accuracy, its value remains questionable despite the substantial costs and efforts associated with implementing changes.
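To make the concept concrete, the sketch below shows how incremental validity is typically quantified: fit a baseline regression using existing information (undergraduate GPA), fit competing and combined models that add the test score, and compare how much variance in an outcome each explains. The data are synthetic and the variable names (`ugpa`, `mcat`, `step1`) are purely illustrative; this is not drawn from any of the studies discussed here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data for illustration only: GPA, a correlated test score,
# and a downstream outcome (e.g., a licensing exam score).
rng = np.random.default_rng(0)
n = 1000
ugpa = rng.normal(3.5, 0.3, n)
mcat = 500 + 20 * (ugpa - 3.5) + rng.normal(0, 6, n)
step1 = 230 + 10 * (ugpa - 3.5) + 0.4 * (mcat - 500) + rng.normal(0, 12, n)
df = pd.DataFrame({"ugpa": ugpa, "mcat": mcat, "step1": step1})

# Fit GPA-only, test-only, and combined models.
gpa_only = smf.ols("step1 ~ ugpa", data=df).fit()
test_only = smf.ols("step1 ~ mcat", data=df).fit()
combined = smf.ols("step1 ~ ugpa + mcat", data=df).fit()

# Incremental validity of the test = gain in R-squared over the GPA baseline.
print(f"R2, GPA only:   {gpa_only.rsquared:.3f}")
print(f"R2, test only:  {test_only.rsquared:.3f}")
print(f"R2, GPA + test: {combined.rsquared:.3f}")
print(f"Incremental R2: {combined.rsquared - gpa_only.rsquared:.3f}")
```

If the combined model's R² is not meaningfully higher than the GPA-only baseline, the new score is adding little beyond what grades already capture.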
The 2015 revision went beyond superficial changes: it fundamentally reconceived the knowledge and skills future physicians are expected to master. The table below captures the dramatic shift in focus:
| Feature | Old MCAT (pre-2015) | New MCAT (post-2015) |
|---|---|---|
| Total Score Range | 3-45 | 472-528 |
| Mean Score | Approximately 25 | 501 |
| Core Sections | Physical Sciences, Biological Sciences, Verbal Reasoning | Chemical and Physical Foundations of Biological Systems; Biological and Biochemical Foundations of Living Systems; Psychological, Social, and Biological Foundations of Behavior; Critical Analysis and Reasoning Skills |
| Content Emphasis | Primarily natural sciences | Expanded to include social/behavioral sciences and biochemistry |
| Testing Time | Approximately 4.5 hours | Approximately 7.5 hours |
The most significant changes extended beyond scoring adjustments. The new exam incorporated psychological, social, and biological foundations of behavior, recognizing that understanding human behavior is as crucial to modern medicine as understanding biochemistry. The exam also expanded its testing of scientific reasoning within living systems, with greater emphasis on biochemistry and research methods—reflecting the evolving nature of medical science and practice.
The new MCAT added psychology, sociology, and biochemistry content to better reflect the knowledge needed by modern physicians.
How do researchers determine whether the new MCAT represents a genuine improvement over its predecessor? A 2025 study conducted at Sidney Kimmel Medical College provides compelling evidence through meticulous comparison. 1
Researchers adopted a straightforward but powerful approach: comparing two distinct groups of medical students. The first cohort consisted of 1,312 students who entered medical school between 2012 and 2015 with old MCAT scores. The second included 1,111 students who matriculated between 2016 and 2020 with new MCAT scores. This design allowed a direct comparison of how well each exam version predicted performance on the United States Medical Licensing Examination (USMLE), the mandatory three-step series that all physicians must pass to practice in the U.S. 1
The research team used path analysis—a sophisticated statistical technique that examines complex relationships between variables—to determine how much variance in USMLE scores could be explained by MCAT scores. This method provides clearer insight into predictive power than simple correlations alone.
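When all variables are observed and the model is recursive, a path analysis can be estimated one equation at a time with ordinary regression, which is enough to illustrate what "variance explained" means here. The sketch below uses invented numbers and a hypothetical two-step structure (MCAT predicting Step 1, and Step 1 plus MCAT predicting Step 2); it is not a reproduction of the study's actual model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data standing in for MCAT and USMLE Step scores.
rng = np.random.default_rng(1)
n = 1200
mcat = rng.normal(511, 6, n)
step1 = 230 + 0.8 * (mcat - 511) + rng.normal(0, 14, n)
step2 = 245 + 0.5 * (step1 - 230) + 0.2 * (mcat - 511) + rng.normal(0, 12, n)
df = pd.DataFrame({"mcat": mcat, "step1": step1, "step2": step2})

# A recursive path model with observed variables, estimated equation by equation:
#   step1 ~ mcat            (direct effect of MCAT on Step 1)
#   step2 ~ step1 + mcat    (direct effect of MCAT on Step 2, plus an
#                            indirect effect routed through Step 1)
eq1 = smf.ols("step1 ~ mcat", data=df).fit()
eq2 = smf.ols("step2 ~ step1 + mcat", data=df).fit()

print(f"Variance in Step 1 explained: R2 = {eq1.rsquared:.2f}")
print(f"Variance in Step 2 explained: R2 = {eq2.rsquared:.2f}")

# Indirect MCAT -> Step 1 -> Step 2 effect = product of the path coefficients.
indirect = eq1.params["mcat"] * eq2.params["step1"]
print(f"Indirect effect of MCAT on Step 2 via Step 1: {indirect:.2f}")
```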
The findings revealed crucial insights about the revised exam's performance:
| USMLE Step | Variance Explained by New MCAT Score (R²) | Statistical Significance |
|---|---|---|
| Step 1 | 14% | p < 0.001 |
| Step 2 | 11% | p < 0.001 |
| Step 3 | 16% | p < 0.001 |
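As a rough translation of those numbers: when a single predictor is involved, variance explained is simply the squared correlation, so these R² values correspond to correlations in roughly the 0.3 to 0.4 range (with multiple predictors in a path model, the mapping is only approximate):

$$ r = \sqrt{R^2}: \quad \sqrt{0.14} \approx 0.37, \qquad \sqrt{0.11} \approx 0.33, \qquad \sqrt{0.16} \approx 0.40 $$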
Perhaps most importantly, the study found that the new MCAT demonstrated predictive validity comparable to the prior version. The revised exam didn't represent a step backward—it maintained the predictive utility of the old exam while assessing a broader range of knowledge and skills relevant to modern medicine. 1
The 2025 Sidney Kimmel study isn't the only research validating the MCAT revision. Earlier investigations from multiple institutions have reinforced these findings:
An analysis of data from 7,970 matriculants found medium-to-large correlations between new MCAT scores and performance in the first year of medical school. It also showed that while MCAT scores and undergraduate GPAs predicted early medical school performance about equally well, using both metrics together provided better prediction than either measure alone. 3
A comprehensive study of osteopathic medical schools revealed that MCAT scores alone provided superior predictive value for licensing exam outcomes compared to undergraduate GPA alone. However, the combination of both metrics offered the strongest predictive power—reinforcing the value of comprehensive review in admissions decisions. 8
What does it take to conduct rigorous research on medical admission testing? The table below highlights essential methodological components:
| Research Tool | Purpose | Application in MCAT Studies |
|---|---|---|
| Path Analysis | Examines complex relationships between multiple variables | Used to determine how MCAT scores predict USMLE performance while controlling for other factors 1 |
| Multiple Regression | Assesses how well combinations of predictors explain outcomes | Determines how MCAT and UGPA together predict medical school performance 3 8 |
| Cross-Validation | Tests whether findings hold across different samples | Researchers split samples to verify prediction models remain accurate 4 |
| Differential Prediction Analysis | Examines whether tests function equally across demographic groups | Investigated gender differences in predictive validity of MCAT sections 1 5 |
| Correction for Range Restriction | Adjusts for limited variability in admitted students | Enhances accuracy of validity coefficients by accounting for the select group of admitted students 2 |
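The last row of the table merits a brief illustration. Validity studies only observe students who were admitted, so the MCAT scores in the sample vary less than they do in the full applicant pool, which mechanically shrinks the observed correlation. A standard adjustment is the Thorndike Case 2 correction for direct range restriction; the sketch below uses hypothetical standard deviations chosen only for illustration.

```python
import math

def correct_range_restriction(r_restricted: float,
                              sd_unrestricted: float,
                              sd_restricted: float) -> float:
    """Thorndike Case 2 correction for direct range restriction on the predictor.

    r_restricted    : correlation observed in the selected (admitted) group
    sd_unrestricted : predictor SD in the full applicant pool
    sd_restricted   : predictor SD among admitted students
    """
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / math.sqrt(
        1 - r_restricted**2 + (r_restricted**2) * (u**2)
    )

# Hypothetical numbers for illustration only.
r_observed = 0.37      # correlation seen among admitted students
sd_applicants = 6.5    # MCAT SD across all applicants (assumed)
sd_admitted = 4.0      # MCAT SD among matriculants (assumed)

print(f"Corrected r: {correct_range_restriction(r_observed, sd_applicants, sd_admitted):.2f}")
```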
The accumulating evidence supporting the new MCAT's validity carries important implications for medical education and the physician workforce:
- The findings validate the substantial investment in revising the exam and adjusting admissions processes: the new MCAT assesses a broader range of knowledge while maintaining the predictive utility of the previous version.
- The research underscores that the MCAT remains an important, though not exclusive, factor in admissions decisions; its demonstrated relationship with licensing exam performance reinforces its continued relevance.
- The findings also highlight opportunities to better support students from diverse backgrounds, and emerging research on differential prediction invites a more nuanced understanding of how these assessments function across demographic groups. 5
While the new MCAT has demonstrated its value as a predictive tool, research continues to explore what other factors might improve medical student selection. Recent investigations have begun examining whether non-cognitive assessments—including personality measures and situational judgment tests—might add predictive power beyond academic metrics.
The journey to optimize medical student selection continues, but the evidence now clearly indicates that the 2015 MCAT revision represents a meaningful step forward. By more comprehensively assessing the knowledge and skills needed in modern medicine, while maintaining strong predictive validity, the new exam better serves medical schools, applicants, and ultimately, the patients who will depend on these future physicians.
The evolution of the MCAT reflects the ongoing effort to balance multiple goals: selecting students who will succeed in rigorous training, expanding the range of competencies assessed, and promoting diversity in the physician workforce—all while ensuring that the fundamental standard of medical excellence remains uncompromised.