How U.S. News Quietly Rewrote the Rules on Medical School
Prospective medical students navigating the 2026 U.S. News rankings face a system in transition, with elite schools like Harvard and Stanford still absent from the list entirely.

By Sahil Singh

For decades, aspiring doctors pored over a single, definitive number. Harvard: No. 1. Johns Hopkins: No. 2. The U.S. News & World Report medical school rankings were gospel — a clean, ordinal scoreboard that shaped where students applied, where faculty aspired to work, and how institutions measured their own prestige.

That era is over.

In 2024, U.S. News quietly but consequentially overhauled how it ranks American medical schools, replacing its famous numerical ladder with a four-tier system. Now, in 2026, the tiered approach is entering its third year — and it's reshaping not just how schools are evaluated, but which schools even bother to participate.

The Breaking Point

The shift didn't happen in a vacuum. Beginning around 2022, a wave of elite medical schools began withdrawing from the U.S. News ranking process entirely, refusing to submit the data the publication needed to assign them a number.

Harvard Medical School was among the first and most prominent to walk away. Its dean had publicly criticised the old methodology for creating what he called "perverse incentives" and "unintended consequences" — chief among them the pressure on admissions committees to prioritise applicants with sky-high MCAT scores and GPAs over those committed to serving underserved communities. The rankings, critics argued, were inadvertently rewarding exclusivity over mission.

Others followed. Columbia, Stanford, Duke, Cornell, NYU, and the University of Pennsylvania all pulled out. By the time U.S. News released its 2025 edition, roughly 20% of the schools that had ranked in the top 100 just two years earlier were absent from the list entirely — labelled simply as "unranked."

For a publication whose identity was built on definitive rankings, this was a crisis. You cannot credibly declare a No. 1 medical school in America if Harvard, Stanford, and Penn aren't playing.

The Tiered Solution

U.S. News's answer was the four-tier system, introduced with the 2024 rankings and now firmly established. Rather than assigning each school a specific position, the new system groups schools into performance bands based on their percentile scores:

  • Tier 1: 85th–99th percentile (top performers)
  • Tier 2: 50th–84th percentile
  • Tier 3: 15th–49th percentile
  • Tier 4: 1st–14th percentile (lowest performers)

Within each tier, schools are listed alphabetically — no internal ordering, no splitting hairs between the third- and fifth-best programs. A Tier 1 school is simply a Tier 1 school.

The methodology also dropped reputational survey data — a component that had asked residency program directors to weigh in on school prestige. U.S. News acknowledged the obvious: residency directors, by and large, don't factor a school's magazine ranking into their decisions. Removing it was both more honest and, conveniently, less dependent on the goodwill of institutions that were increasingly hostile to the whole exercise.

What Changed in 2026

The 2026 rankings, released April 7, mark the third iteration of this new approach — and the system continues to evolve. This year, the research ranking methodology was expanded to include federal, state, local, and private contracts alongside NIH grants, and a two-year average was applied to smooth out year-to-year fluctuations in funding data.

Movement at the top was meaningful. Two schools — the University of Colorado and the University of Florida — newly achieved Tier 1 status for research, joining established names like Baylor College of Medicine, Mayo Clinic, UCSF, UCLA, Vanderbilt, and Yale. On the primary care side, five schools entered Tier 1, including Dartmouth's Geisel School of Medicine and UC San Diego.

In total, 203 accredited medical and osteopathic schools were surveyed, with 102 qualifying for research rankings and 99 for primary care — comparable to previous years.

The Case For Tiers

Supporters of the new system argue it solves a fundamental flaw in ordinal ranking: false precision.

Is there a meaningful, real-world difference between the No. 4 and No. 5 medical school in America? Almost certainly not. Yet the old system presented that gap as fact, and applicants treated it as such. Schools responded accordingly — gaming metrics, narrowing admissions criteria, and optimising for the things U.S. News measured rather than the things that actually produce good doctors.

The tiered approach, proponents say, acknowledges what any honest medical educator will tell you: a cohort of 15 or 16 truly excellent research institutions doesn't have a meaningful internal hierarchy. Grouping them together is simply more accurate.

There's also an argument that tiers reduce the most damaging incentive distortions. When the difference between No. 3 and No. 7 could affect donor perception, faculty recruitment, and application volume, schools had powerful reasons to chase ranking-friendly metrics. A broader tier band blunts that pressure — at least at the margins.

The Case Against

Critics, however, are not convinced the tier system solves the deeper problem. They argue it merely obscures it.

The most pointed objection is that the rankings still exclude some of the best medical schools in the country. Harvard, Stanford, Columbia, Penn, and Duke remain unranked in the 2026 edition. For a prospective medical student trying to compare programs, a system that labels Harvard "unranked" while elevating lesser-known institutions to Tier 1 is not more informative — it's more confusing.

There's also the question of what tiers actually tell you. Knowing a school falls in the 50th–84th percentile range says something, but not much. A Tier 2 school with exceptional primary care training and a strong rural medicine programme might be a far better fit for many students than a Tier 1 research powerhouse. The tiers, like the old rankings, still struggle to capture what applicants actually need to know: what will my training look like, and where will it take me?

Some critics also question whether the methodology shift has genuinely addressed the incentive problems that drove schools to boycott in the first place — or whether it has simply repackaged the same underlying metrics in a way that made staying more palatable for some institutions while not enticing the elite holdouts back.

A Rankings Ecosystem in Flux

What's clear is that U.S. News no longer holds the monopoly on medical school prestige it once did. The boycott, and the tier system that followed it, have opened space for alternative frameworks to gain traction.

Some schools now promote their own outcome data directly — residency match rates by specialty, the percentage of graduates practising in underserved areas, average debt at graduation. Mission-driven institutions like Mercer University School of Medicine and Campbell University's osteopathic school have leaned into specialty rankings — such as "Most Graduates Practicing in Rural Areas" — that better reflect their actual strengths and purpose.

Times Higher Education's subject rankings offer a separate, internationally benchmarked view of research excellence. And within medical education circles, there's growing momentum behind the idea that accreditation outcomes and residency match data are simply more useful signals than any composite ranking score.

What It Means for Applicants

For the tens of thousands of students preparing medical school applications, the practical message is this: the tiered system is more honest than what it replaced, but it is not more comprehensive.

A Tier 1 designation for research tells you that a school has strong NIH funding, a favourable faculty-to-student ratio, and a selective admissions profile. It does not tell you whether the curriculum suits your learning style, whether the clinical training sites match your specialty interests, or whether you can afford to attend.

An "unranked" label, meanwhile, tells you almost nothing about educational quality — only that the school declined to hand its data to a magazine.

The most useful question a prospective medical student can ask isn't "what tier is this school?" It's a harder one: what kind of doctor do I want to be, and which programme will best help me get there?

That question, unfortunately, no ranking system — tiered or otherwise — has ever been designed to answer.