MedCity Influencers

New York Times stands by Dartmouth Atlas critique

Over the weekend, the two New York Times reporters who challenged the core findings of the Dartmouth Atlas of Health Care stuck to their guns in a detailed response to the rejoinder to their critique. The Dartmouth Atlas, which documents regional variation in Medicare spending, provides the intellectual underpinning for assertions by health care reformers (including those in the White House) that 30 percent of all health care spending is wasted and does not improve either the quality or outcome of care.

The Times’ original critique (see this GoozNews post) contained three main ideas:

  • The Dartmouth researchers fail to adjust their maps for regional variations in prices;
  • The Dartmouth researchers fail to adjust their maps for illness burden; and
  • The assertion that more spending leads to worse outcomes is not borne out by the data. In some cases more spending leads to better outcomes.

Some of this back-and-forth may sound like a quibble over language. Is “30 percent” of health care wasted, or “up to 30 percent,” as the Dartmouth researchers now state in public? Reed Abelson and Gardiner Harris provide a link to the original 21-page response to their queries. “We think the 30 percent estimate could be too low,” the Dartmouth researchers wrote in a highlighted section.

On the other hand, the Times reporters appear to be taking a step back on the price issue. They went back to David Cutler, the Harvard health care economist whom they originally quoted. Cutler told them that the original 2003 articles by John Wennberg and Elliott Fisher of Dartmouth that appeared in the medical literature did, in fact, adjust for price. “But he said he agreed with the Times assertion that most of the atlas’s maps and rankings, as distinct from the group’s academic work, are not fully adjusted for prices,” Abelson and Harris wrote.

Notably, Cutler is now hedging his bets on the “30 percent is waste” argument. “Some believe that number is higher, and others think that it’s lower,” he wrote in the latest Health Affairs. “But there is little disagreement that health care is characterized by enormous waste.”

In my view, it is the dispute over quality that really needs to be explored by other researchers and policymakers. Eliminating waste ought to improve quality. It has always been true in manufacturing that reducing steps and reducing waste not only lowers costs but also improves the quality of the finished product.

There’s no reason to think it won’t be true in delivering a complicated service like health care, which some have compared to building and flying jet airplanes. Doing more than necessary to get the job done will only increase the possibility that errors will occur in the process, which in health care translates into more complications, further costs and, in some cases, lost lives.

Yet the Times reporters continue to assert through their dissection of the Dartmouth data that more spending on more services may actually result in higher quality. They go back to the original research — the two studies published in 2003 — to make their point:

The researchers are incorrect in saying that the results of those 2003 studies were “all in the same direction.” In fact, two of the various measures of quality and mortality cited in the articles actually seemed to show that more spending could correlate to better care. [footnotes 2 and 3] Heart attack patients in the most expensive regions, for example, were more likely to receive necessary beta blockers – a positive correlation between spending and quality. Similarly, hip fracture patients experienced “a small decrease in mortality rates” in more expensive places – another positive correlation.

We have very poor metrics for measuring quality of care, and one of the examples they cite shows why. Giving beta blockers is a “process” measure. We know from clinical trials that giving beta blockers after a heart attack improves outcomes. But does it improve outcomes equally in regions where the ratio of obesity-related heart attacks to stress-related heart attacks differs? Does it have the same effect in regions with higher proportions of mild heart attacks (because they are more likely to use a sophisticated blood test to categorize chest pains as a heart attack) as it does in a region with a higher proportion of serious heart attacks?

These are the confounding variables that no data set can capture adequately until it fully reflects both the diagnoses of the incoming patients and the care delivered. The Dartmouth Atlas data, which relies on Medicare claims, falls far short of that goal. And the Times reporters, by trying to resift the data to make a counterpoint, only add another blind man’s hands to the elephant in the room — the absence of electronic data about the actual medical conditions of the patients behind those Medicare claims.
