MedCity Influencers

Connecting the healthcare data dots: How a harmonized healthcare data strategy can create efficiencies

When the healthcare industry talks about data, the conversation usually focuses on interoperability and data standards. While these are certainly important, the most effective way to connect the dots and gain a complete view of a patient is through data normalization.

When the healthcare industry talks about data, the conversation usually focuses on interoperability and data standards. These are certainly important topics, but they don’t fully address the challenge of making complex forms of clinical data available for exchange and analysis.

Overcoming these challenges is a critical need for organizations aiming to provide data-driven, high-quality care at both an individual and population level. That’s because hospitals, health systems, community clinics, and physician practices are increasingly reimbursed – and ranked – based on patient outcomes.

The most effective way to connect the dots and gain a complete view of a patient is through data normalization. This is the process through which data from diverse systems is not just aggregated into a single data warehouse but also standardized into common terminology. Data normalization is not without its challenges, but the right combination of business processes and technology to capture, store, and standardize data can help organizations realize the clinical, financial, and operational benefits of data normalization.

Why data normalization is more than a technology initiative

The sprawl of data throughout the typical healthcare organization presents three formidable challenges. First, each clinical data type – diagnosis, procedure, medication, lab, device, and so on – is stored in its own siloed enterprise application. Second, each data type is coded differently – ICD for diagnoses, LOINC for lab tests and results, SNOMED for clinical documentation, RxNorm for medications – and in some cases, no code is associated with a given data set at all. Third, coding systems overlap, so a diagnosis or a medication could be coded in multiple formats.
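
To make that overlap concrete, here is a minimal sketch in Python. The codes shown are real entries from their respective terminologies, but the record structure is an assumption for illustration, not any particular vendor’s schema:

```python
# Minimal sketch: the same patient's facts arriving from siloed systems
# under different coding schemes. Codes are real; the schema is illustrative.
from dataclasses import dataclass

@dataclass
class SourceRecord:
    patient_id: str
    system: str        # terminology used by the source application
    code: str
    description: str

records = [
    # The same diagnosis, coded twice by two different systems
    SourceRecord("p-001", "ICD-10-CM", "E11.9",
                 "Type 2 diabetes mellitus without complications"),
    SourceRecord("p-001", "SNOMED CT", "44054006", "Diabetes mellitus type 2"),
    # A lab result coded in LOINC
    SourceRecord("p-001", "LOINC", "4548-4",
                 "Hemoglobin A1c/Hemoglobin.total in Blood"),
    # A medication coded in RxNorm
    SourceRecord("p-001", "RxNorm", "6809", "metformin"),
    # And sometimes no standard code at all -- just free text
    SourceRecord("p-001", "", "", "pt reports dizziness after exercise"),
]
```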

These data types may be effective in their individual context, but on their own they don’t provide a complete picture of patient or population health, or of health system performance. To get that full picture, each data set needs to be moved from its individual system into a data warehouse.

A data warehouse alone isn’t going to be enough, though. The data sets may be together, but they’re still in their unique formats, which leads to inconsistency throughout the data warehouse. Interoperability standards from the Centers for Medicare & Medicaid Services will help in the future, but they don’t apply to legacy coding systems or clinical applications.

That’s where data normalization enters the picture. Through data normalization, data from disparate systems is standardized onto a common set of clinically validated terminology as it’s moved to the data warehouse.
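
Conceptually, this step is a crosswalk: each (source system, local code) pair resolves to a single clinically validated concept before loading. Below is a minimal sketch assuming a hand-built mapping table; a production normalization engine would draw on full, maintained terminology content and handle unmapped codes far more gracefully:

```python
# Crosswalk sketch: map (source system, local code) onto one canonical
# concept. The table is hand-built for illustration only.
CROSSWALK = {
    ("ICD-10-CM", "E11.9"):    ("SNOMED CT", "44054006", "Diabetes mellitus type 2"),
    ("SNOMED CT", "44054006"): ("SNOMED CT", "44054006", "Diabetes mellitus type 2"),
}

def normalize(system: str, code: str):
    """Return the canonical (system, code, description), or None if unmapped."""
    return CROSSWALK.get((system, code))

print(normalize("ICD-10-CM", "E11.9"))
# -> ('SNOMED CT', '44054006', 'Diabetes mellitus type 2')
```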

Without a normalized data set, healthcare organizations are limited in their analytics capabilities. They tend to focus on the data sets with the fewest gaps. On the clinical side, that’s the patient registry, which is actually quite limited when it comes to looking at patient outcomes. On the financial and operational side, it’s the various documents they are required to put together for compliance, quality, or financial reporting.

A normalized data set makes it possible to take a more mature approach to analysis. Organizations have in one place a single data source that can be combed to take on initiatives such as reducing care variability, eliminating waste, managing population health, and introducing predictive analytics at the point of care. This makes data normalization more than just a technology initiative – it’s an important tool for value-based care, clinical decision support, and data-driven strategic planning.

How to make data normalization easier and connect the dots faster

Data normalization is not without its challenges. Feeding free text and other low-quality data sets into a data warehouse requires extract, transform, load (ETL) processes, as data must be cleansed before it can be standardized. This requires additional infrastructure and personnel; it also creates bottlenecks that diminish the value of the data, as it will likely be outdated by the time it finally reaches the warehouse. The data is also often redundant, since a single data point – such as a patient’s diagnosis of stage 3 breast cancer – may be expressed in multiple data sets (albeit in different ways).
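
In pipeline terms, cleansing, normalization, and deduplication sit between extract and load. Here is a hedged sketch of that flow; the function names, record shape, and deduplication key are assumptions for illustration, not a prescribed implementation:

```python
import re

# Tiny illustrative crosswalk (same idea as the sketch above).
CROSSWALK = {("ICD-10-CM", "E11.9"): "44054006"}  # local code -> canonical code

def cleanse(record: dict) -> dict:
    # Collapse stray whitespace in free text; trim and uppercase codes.
    record["description"] = re.sub(r"\s+", " ", record.get("description", "")).strip()
    record["code"] = record.get("code", "").strip().upper()
    return record

def normalize(record: dict) -> dict:
    # Attach the canonical code so duplicates become comparable.
    record["canonical_code"] = CROSSWALK.get((record["system"], record["code"]))
    return record

def deduplicate(records: list) -> list:
    # Keep one row per (patient, canonical concept): the same diagnosis often
    # arrives from several systems under different codings.
    seen, unique = set(), []
    for r in records:
        key = (r["patient_id"], r["canonical_code"])
        if r["canonical_code"] is not None and key in seen:
            continue  # redundant expression of a fact we already have
        seen.add(key)
        unique.append(r)
    return unique

def run_etl(raw: list) -> list:
    return deduplicate([normalize(cleanse(r)) for r in raw])
```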

Given these obstacles, many healthcare organizations have not yet undertaken data normalization. But it is a critical step in data aggregation, standardization, and analysis. Without it, the common data set in the data warehouse may be incomplete at best or inaccurate at worst. This can have clinical, financial, and operational consequences. At the individual level, it may lead care teams to get a diagnosis, prescription, or treatment plan wrong. At the population level, it can steer a population health, care quality, or care management initiative in the wrong direction.

Start at the point of care

Fortunately, two simple steps can lead organizations down the path to data normalization. First, start at the beginning and standardize data as it’s being entered into clinical systems at the point of care. This doesn’t need to disrupt clinical workflows; it simply means verifying that the correct code is associated with the data being entered. Then, use a data normalization engine to map each data point from each clinical system to the normalized description and associated code before the data is transferred to the data warehouse.
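
A minimal sketch of that point-of-entry check, assuming a small local dictionary of valid diagnosis codes; a real system would validate against the full terminology release from inside the clinical application:

```python
import re

# Illustrative subset of valid codes; a real check uses the full release.
VALID_ICD10 = {"E11.9", "I10", "J45.909"}
ICD10_FORMAT = re.compile(r"^[A-Z][0-9][0-9A-Z](\.[0-9A-Z]{1,4})?$")  # loose shape check

def verify_at_entry(code: str) -> bool:
    """Accept a diagnosis entry only if the code is well-formed and known."""
    code = code.strip().upper()
    return bool(ICD10_FORMAT.match(code)) and code in VALID_ICD10

assert verify_at_entry("E11.9")        # valid, known code
assert not verify_at_entry("E119")     # malformed -- prompt the clinician to correct it
```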

With a normalized data set in hand, any internal or external stakeholder – whether it’s a health system, hospital, insurer, public health registry, research organization, or health information exchange (HIE) – will have a single version of the truth. This can allow organizations to do work they haven’t been able to do previously.

For example, an HIE that serves health systems and hospitals in the western United States found that aggregating data from payers, providers, and government agencies often led to gaps; this was especially true with Covid-19 lab data, which often lacked the LOINC codes that are valuable for surveillance and quality reporting. Through the use of a data normalization platform, it has normalized more than 1.8 million messages, and its effort to normalize lab data has expanded beyond Covid-19 test results to include blood bank and microbiology messages.
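
The gap-filling idea can be sketched as a lookup from local test names to LOINC codes. Here, 94500-6 is a real LOINC code for SARS-CoV-2 RNA detection, but the local names and message shape are assumptions; this is not a description of the HIE’s actual platform:

```python
# Illustrative backfill of missing LOINC codes on inbound lab messages.
# 94500-6 is a real LOINC code (SARS-CoV-2 RNA, NAA with probe detection);
# the local test names and message structure are assumed for this sketch.
NAME_TO_LOINC = {
    "SARS-COV-2 PCR": "94500-6",
    "COVID-19 RNA":   "94500-6",
}

def backfill_loinc(message: dict) -> dict:
    if not message.get("loinc"):
        message["loinc"] = NAME_TO_LOINC.get(message["local_test_name"].upper())
    return message

msg = backfill_loinc({"local_test_name": "Covid-19 RNA", "loinc": None, "result": "Detected"})
print(msg["loinc"])  # -> 94500-6
```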

Getting the simple tasks right makes the complex tasks possible

Today, only about a dozen organizations worldwide have reached Stage 7 of the HIMSS analytics maturity model. That’s a far cry from the hundreds that have reached Stage 7 of the HIMSS EHR maturity model.

It’s certainly true that prescriptive and predictive analytics are complex tasks. However, they cannot be done without first establishing a standardized clinical vocabulary and terminology.

In theory, getting everyone to use the same terms should be simple. But it’s difficult in healthcare, as different clinical systems, not to mention different medical disciplines, have historically used different terms to define the same thing.

Instead of forcing entire fields of medicine to change, a data normalization strategy can ensure that data is available in a common language for exchange, use, and analysis. Connecting the dots enables organizations to spend less time and fewer resources cleansing data prior to analysis – and more time using data to improve clinical care and operations.

Photo: Filograph, Getty Images

Ivana Naeymi-Rad joined IMO in 2012, where she now leads the company’s software engineering, content development and delivery, project management, and enterprise IT departments.

Ivana came to IMO with over 15 years of software engineering and data experience in various industries, including e-commerce, government, academia, and online advertising. In previous roles, she led numerous engineering teams at both Jetsetter/Gilt Groupe and Yahoo.
Ivana earned a Bachelor of Science in Computer Science from the University of San Francisco and a Master of Science in Computer Science from the University of Southern California.
