A closer look at the significance of study replication in health informatics

A new article in the Journal of the American Medical Informatics Association explores how setting standards for replicating health informatics research could improve quality and patient safety in healthcare.

Many scientific fields have struggled with poor reproducibility of their studies. A new article in the Journal of the American Medical Informatics Association explores how the problem is playing out in health informatics.

“The inability for researchers to reproduce many of the findings of past studies is causing particular concern in several disciplines, including psychology and the medical sciences,” the authors wrote.

In health informatics, failing to properly test new technologies and verify their results can have drastic consequences: the quality of patient care can suffer, and lives can be harmed or lost.

Generally speaking, failures to replicate can stem from problems as basic as a small sample size or a statistical error.
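To see why small samples alone can sink a replication, consider a minimal simulation (ours, not the article's; all numbers are illustrative). It estimates the same true effect many times at two sample sizes and shows how widely the small-sample estimates scatter.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
TRUE_EFFECT = 0.3  # illustrative "true" standardized effect size

# Re-run the same hypothetical study 1,000 times at each sample size
# and look at how much the estimated effect swings from run to run.
for n in (20, 500):
    estimates = [rng.normal(TRUE_EFFECT, 1.0, n).mean() for _ in range(1000)]
    print(f"n={n:3d}: mean estimate = {np.mean(estimates):+.2f}, "
          f"run-to-run spread (SD) = {np.std(estimates):.2f}")

# With n=20 the spread is roughly 0.22, so an original study and its
# replication can land on opposite sides of zero by chance alone;
# with n=500 the estimates cluster tightly around the true effect.
```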

Health informatics, though, layers its own challenges on top of these generic ones. One such difficulty is ensuring “replication fidelity,” a measure of the similarity between the methods used in the original study and the replication. The more closely a replication mirrors the original, the more likely it is to be seen as a genuine test of the original study’s validity.

But when studies are repeated in informatics, the intervention is typically adapted to fit its new environment. “This act of local adoption, however, means that we no longer are comparing similar interventions,” the article notes.
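The article treats fidelity qualitatively, but purely as an illustration, one could score it as the overlap between the two studies' method attributes. Everything below, including the choice of a Jaccard index and the attribute names, is a hypothetical sketch rather than anything the authors prescribe.

```python
def replication_fidelity(original: set[str], replication: set[str]) -> float:
    """Jaccard overlap between two studies' method attributes
    (0 = completely different, 1 = identical).

    A hypothetical illustration -- the JAMIA article describes
    fidelity qualitatively and does not define this (or any) formula.
    """
    if not original and not replication:
        return 1.0
    return len(original & replication) / len(original | replication)

# Local adaptation (swapping the EHR system and care setting) lowers fidelity:
original = {"CPOE alerts", "Epic EHR", "academic hospital", "6-month follow-up"}
adapted  = {"CPOE alerts", "Cerner EHR", "community clinic", "6-month follow-up"}
print(replication_fidelity(original, adapted))  # ~0.33
```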

Though these problems stand in the way, the authors take a stab at defining what the future of replication in health informatics should look like.

First, it’s necessary to define when replication studies are genuinely needed and when additional repetition adds little to the evidence base. It’s also important to identify when a replication study is worth publishing in a journal.

Moreover, there need to be standards for original studies (so they can be properly replicated), as well as for the study replication process itself. For instance, researchers should be able to explicitly describe the context of their project.
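As a hypothetical sketch of what such a standard could ask for (the article does not specify a schema, and every field name below is our own), a structured context record would let a replication team state exactly which conditions they kept and which they changed:

```python
from dataclasses import dataclass, asdict

@dataclass
class StudyContext:
    """Minimal, hypothetical schema for reporting the context of a
    health informatics study so a replication can compare conditions
    field by field."""
    setting: str          # e.g. "academic hospital", "community clinic"
    ehr_system: str       # vendor/version of the electronic health record
    intervention: str     # the informatics intervention being tested
    population: str       # who the intervention targeted
    workflow_notes: str   # local workflow details that may affect outcomes

original = StudyContext(
    setting="academic hospital",
    ehr_system="Epic 2023",
    intervention="medication-interaction alerts",
    population="inpatient prescribers",
    workflow_notes="alerts fire at order signing",
)
print(asdict(original))
```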

Additionally, the article authors believe there should be formal standards for peer review of health informatics studies. The field should also work to recognize and address the cultural differences between institutions and research groups.

As the article concludes:

We have taken as a mantra that different outcomes between similar studies are the consequences of context and implementation changes. We much less often consider the obvious alternative, that failure to replicate may mean the original study was flawed. Learning to separate these effects will in and of itself be a new research challenge, and may lead us to a deeper, richer, more theoretically robust understanding of informatics and the nature of digital interventions in a complex socio-technical universe.

Photo: Pixtum, Getty Images