Welcome to the world of comparative effectiveness research

If there were one place research should be easy to perform, it would be on a disease that’s incredibly common.

Further, if there are two generally accepted strategies for treating symptomatic patients with that ailment — one invasive and the other not — it should be pretty easy to compare which is best, right?

Maybe. Maybe not.

Welcome to the real-life world of comparative effectiveness research, that politically and pundit-popular means of deciding which treatment approach doctors should use and which approach, based on the results of these studies, our government will fund.

But first, before starting the study, decide which way you’re leaning. Call that your “hypothesis”. Make sure your desired approach is the invasive one (this is very important) — that way, patients feel that at least you are trying to do something.

Good.

Now, be sure there are plenty of articles in the literature supporting your approach, but also discussing the substantial risks that might occur if that option is used and an accident happens.

Then have plenty of articles in the literature that talk about the other non-invasive, but potentially dangerous, treatment option.

Then go before your Institutional Review Board (IRB). Show them how cool it is and convince them this is the first prospective randomized trial comparing the two forms of treatment for this incredibly common disorder. Have a 15-page all-inclusive consent form for the patient describing the good, the bad, and the potentially ugly. No, make it 17 pages just to be sure. (They’ll like that.) Get the IRB’s blessing.

Then announce the trial to your colleagues and patients.

Then wait for patient referrals from colleagues who do not have the same vested interest in the trial as you, or wait for the Perfect Patient to enter your exam room.

Spend an hour with them telling them about the trial.

Then tell them that you really don’t know which option for therapy is best (and that’s why you’re doing the study), even though they have come to you in hopes you’ll explain to them which treatment option is best.

Look at their confused faces.

Offer plenty of time for them to decide if they want to be in the trial or not.

When they don’t call back, call them again to remind them of the importance of the trial. Talk to them for two more hours to answer their questions. Try to stay neutral so they can decide for themselves. Hear them looking things up on the internet. Clarify the purpose of the trial for them. Sense their pressure.

Then watch them decline simply because they can’t decide whether to be in the trial or not.

Lather. Rinse. Repeat.

* * *

Sound familiar to others trying to do this work?

Now look at which topic was #1 among the Institute of Medicine’s Top 100 stand-alone topics in the first quartile for performing comparative effectiveness research.

Yep, atrial fibrillation.

Here’s the sad reality: the first comprehensive NIH- and industry-sponsored comparative effectiveness trial studying the best approach to treating atrial fibrillation, the CABANA Trial, is having one hell of a time enrolling subjects.

No one knows why.

But I suspect there are several reasons:

1) CER is complicated. Perhaps too much is being asked of these trials and their investigating centers, since not only clinical endpoints but also costs are being studied.

2) These trials cost more to perform than their funding covers. People can only work so long out of the goodness of their hearts before they must turn to some income-producing endeavor to justify their existence. In our current cost-conscious era, resources are limited for any complex, underfunded study.

3) Patients are better informed about their treatment options than ever before. This affects recruitment of subjects in at least two ways: (a) patients often hold preconceived biases favoring one therapy over the other before they are even invited into a trial, and (b) the subject population is more educated about the risks of any proposed therapy.

The real question becomes: can we really expect to put all of our health care reform financial eggs in the basket of comparative effectiveness research’s unrealized promise when it’s so damn hard to enroll patients in these trials?

Westby G. Fisher, MD, FACC is a board-certified internist, cardiologist, and cardiac electrophysiologist (a doctor specializing in heart rhythm disorders) practicing at NorthShore University HealthSystem in Evanston, IL, USA, and is a Clinical Associate Professor of Medicine at the University of Chicago's Pritzker School of Medicine. He entered the blog-o-sphere in November 2005. He writes regularly at Dr. Wes. DISCLAIMER: The opinions expressed in this blog are strictly those of the author(s) and should not be construed as the opinion(s) or policy(ies) of NorthShore University HealthSystem, nor as recommendations for your care or anyone else's. Please seek professional guidance instead.
