According to a 2012 study, gains in cancer survival, as opposed to mortality rates, reportedly translated into $550 billion in additional value from U.S. healthcare spending.
But the situation isn't that simple, because the relationship between survival and mortality, and what each number actually says about preventing deaths, isn't intuitive.
The New York Times has taken a closer look at why survival rates aren't a sufficient metric for measuring the cost-effectiveness of cancer care.
A new study was recently published in the same journal [Health Affairs], but using mortality rates. That study found that cancer care in the United States might provide significantly less value than that in Western Europe. Which should you believe? It’s worth exploring these two studies, and their metrics of choice, to get a better understanding of whether what we are spending in the United States really is worth it.
Survival rates are simply easier to measure, and that convenience shapes how treatment and spending are understood. Survival is also the figure most commonly collected because it's what cancer patients actually want to know; it's the number that is useful to them, the Times noted.
Here’s the example I always use to explain this concept: Let’s consider a hypothetical illness, thumb cancer. We have no method to detect the disease other than feeling a lump. From that moment, everyone lives about four years with our best therapy. Therefore, the five-year survival rate for thumb cancer is effectively zero, because within five years of detection, everyone dies.
Now, let’s assume that we develop a new scanner that can detect thumb cancer five years earlier. We prevent no more deaths, mind you, because our therapy hasn’t improved. Everyone now dies nine years after detection instead of four. The five-year survival rate is now 100 percent.
Carroll goes on to further explain why this difference in research measurements is problematic.
But the mortality rate remains unchanged, because the same relative number of people are dying every year. We’ve just moved up the time of diagnosis and potentially subjected people to five more years of therapy, increased health care spending and caused more side effects. No real improvements were made.
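The arithmetic behind Carroll's thumb-cancer example (a classic illustration of lead-time bias) can be sketched in a few lines. This is a minimal illustration using the hypothetical numbers from the example above; the detection ages are made-up values added only to show that the date of death never moves.

```python
# Lead-time bias, using the hypothetical "thumb cancer" numbers:
# every patient dies exactly 4 years after the lump becomes palpable,
# no matter when the disease is detected.

years_lived_after_lump = 4   # best therapy: 4 years from palpable lump
lead_time = 5                # new scanner detects 5 years earlier

# Five-year survival = fraction of patients alive 5 years after DETECTION.
old_survival_5yr = 1.0 if years_lived_after_lump >= 5 else 0.0
years_after_new_detection = years_lived_after_lump + lead_time  # 9 years
new_survival_5yr = 1.0 if years_after_new_detection >= 5 else 0.0

print(f"5-year survival, lump detection:    {old_survival_5yr:.0%}")
print(f"5-year survival, scanner detection: {new_survival_5yr:.0%}")

# Mortality is unchanged: each patient dies at the same age in both
# scenarios; only the detection date moved. (Detection age of 60 is a
# hypothetical value for illustration.)
death_age_old = 60 + years_lived_after_lump                    # detect at 60
death_age_new = (60 - lead_time) + years_after_new_detection   # detect at 55
print("Same age at death:", death_age_old == death_age_new)
```

The five-year survival rate jumps from 0 percent to 100 percent while not a single death is delayed, which is exactly why survival-based comparisons can flatter aggressive screening programs.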
But if we just looked at survival rates, we would think we made a difference. Unfortunately, that happens far too often in international comparisons, as the United States often does much more screening than other countries and then justifies it through improved survival rates.
The cost-effectiveness of treating patients looks very different depending on which lens you use. As Carroll puts it, “When it comes to preventing death, we need to consider mortality rates, not survival rates, or we may be getting far less for our money than we think.”