MedCity Influencers

How computer vision will redesign healthcare

Machine learning can transform healthcare, but the interoperability barrier is the first domino that needs to fall in order to realize the full scope of its potential.


Currently, we depend upon the eyes, ears, and touch of medical staff to examine the human body and diagnose patients. Physicians must identify abnormalities of the skin, check for any bodily irregularities, examine complex MRIs, listen for arrhythmias, and inspect CT scans day in and day out. While all of these tasks seem routine, they actually test physicians’ knowledge about everything that could possibly be wrong with the human body.

Reliance on human skill has been the norm since the dawn of medicine, and through the advent of modern medicine, too. But an advancement in technology—machine learning—is poised to change how we diagnose ailments in healthcare.

Making data useful with machine learning
By applying machine learning algorithms and neural networks, computer scientists have been working hard on the problem of accurate classification (the process of categorizing a group of objects using only basic data features to describe them). Classification is useful because it gives physicians trusted benchmarks to work against, and it helps health systems cut recurring visits due to misdiagnosis and incorrect treatment.

When input data is classified, a machine learning algorithm produces a confidence score. For example, if we needed to quickly deduce whether a patient was sick or healthy, and all we had was their height, weight, heart rate, and body temperature, the classifier would analyze this data and produce a confidence score in a matter of seconds. A higher score would mean there’s a high chance the patient is sick, and a lower score would suggest that the patient is healthy.
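To make that example concrete, here is a minimal sketch of such a classifier. The weights are hand-picked for illustration only; a real model would learn them from labeled patient data, and the function name and thresholds here are hypothetical, not any production diagnostic tool.

```python
import math

def sickness_confidence(height_cm, weight_kg, heart_rate_bpm, temp_c):
    """Return a 0-1 confidence that the patient is sick (toy example)."""
    # Hand-picked weights for illustration; a trained model would
    # learn these coefficients from labeled patient records.
    z = (0.08 * (heart_rate_bpm - 70)   # elevated heart rate raises the score
         + 1.5 * (temp_c - 37.0)        # fever raises the score
         + 0.02 * (weight_kg - 70)
         - 0.01 * (height_cm - 170))
    # Logistic function squashes the raw score into a 0-1 confidence.
    return 1 / (1 + math.exp(-z))

# A feverish, tachycardic patient scores high; normal vitals score low.
sick = sickness_confidence(175, 72, 110, 39.1)
healthy = sickness_confidence(170, 68, 65, 36.8)
```

In practice the score is thresholded or shown alongside the inputs, so the physician sees why the model leaned one way.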

While this is just one very simple example of the many potential use cases for machine learning, it represents a faster and more efficient way of performing routine diagnostics that otherwise require the time and expertise of human physicians. However, implementing machine learning isn’t about replacing doctors and radiologists with computers that supposedly never misdiagnose; it’s about giving them tools that advance the way they work.

Knowledge is power—for those who have it
Machine learning has the power to help millions, but creating machine learning tools requires incredibly specialized knowledge. Realistically, there are only a few thousand machine learning experts in the world who know how to develop algorithms that will have a true, positive impact. Those experts are unbelievably in demand, and their skills are being clamored for across dozens of industries with thousands of problems.

Beyond those few experts, though, are millions of capable developers who understand the basics of teaching a machine how to learn and perform tasks based on data. Their skills may not equal the experts’, whether because they’re young, inexperienced, short on time to learn, or some combination of reasons. While they may not be making a huge impact now, all they need in order to do so is the right information to solve the problems they’re facing.

Now, you tell me: would you rather stick with the current pace of innovation that relies on a few thousand experts, or empower the millions with higher knowledge and see what they can produce? I think I’d place my bets on the millions, which is the same choice Amazon’s DeepLens initiative is making.

(Machine Learning) Power to the People
DeepLens puts the ability to utilize machine learning knowledge, techniques, and neural networks in the hands of developers by giving them access to a fully programmable camera that captures actions and insights, coding tutorials, and pre-trained machine learning models. This “computer vision” tool, designed by the few thousand experts, empowers change by making their knowledge and expertise available to the rest of the field.

But the advancements afforded by computer vision don’t stop at analyzing biometrics or scanning images: virtually any task that requires a trained eye can be improved through the use of computer vision.

Gauss Surgical, for example, developed a real-time blood monitoring solution that scans surgical sponges to provide an accurate estimation of blood loss during medical procedures. Accurate blood loss measurement is a legitimate issue within healthcare: an estimated 20 to 60 percent of blood transfusions are unnecessary, driving $10 billion of waste each year, per the company’s website. Gauss’ computer vision algorithms help recognize hemorrhage status and improve patient outcomes by suggesting blood transfusions only when truly necessary.
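As a toy illustration of the general idea (not Gauss Surgical’s actual, proprietary algorithm), a computer vision system might relate the redness of a sponge image to its blood content, calibrated against lab measurements. The function name and the calibration constant below are invented for this sketch:

```python
# Toy sketch: estimate blood content of a surgical sponge from pixel color.
# The calibration constant ml_per_red_unit is invented; a real system
# would be calibrated against lab-measured hemoglobin.

def estimate_blood_ml(pixels, ml_per_red_unit=0.002):
    """pixels: list of (r, g, b) tuples sampled from a sponge image."""
    # Sum how much redder each pixel is than the average of its
    # green and blue channels; clamp at zero for non-red pixels.
    redness = sum(max(0, r - (g + b) / 2) for r, g, b in pixels)
    return redness * ml_per_red_unit
```

A deployed system would of course work on full camera frames, correct for lighting, and track totals across every sponge in a procedure.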

Big Problems, Bigger Data
With tools like DeepLens, many problems that humans only partially (or inadequately) solve can be eradicated. But with one knowledge-based obstacle cleared, developers face another: the need to access and leverage huge amounts of data.

Getting your hands on patient information is not as easy as waltzing into a hospital and asking for a bunch of data to train algorithms with—for a multitude of reasons, healthcare and data are like oil and water. Even with the adoption of the EHR, the lack of enforced standards produces a lot of confusion around healthcare data: different departments using different formats, inconsistent data across systems, and varying definitions of conditions all make the sharing and utilization of healthcare data incredibly difficult.

Beyond this complexity lies another challenge: the fact that healthcare brings a whole new meaning to the term “Big Data.” In oncology, for instance, we can quantify imaging studies and apply AI, but we also have digital pathology, genomics, and electronic medical record data at our disposal. Before you know it, the combination of all these factors creates terabytes upon terabytes of data. If you then want to start comparing this information over time across 10,000 similar cases, the result is some truly staggering Big Data.

An Innovative Future
Combine all of the above issues with the legal red tape surrounding the access of patient health information, and you can see how quickly innovators in healthcare can run into a brick wall.

The simple fact is that the healthcare industry stifles innovation by making it difficult to gain access to the incredibly complex information people need in order to build new tools. And with the rest of the world hurtling into the future in terms of technological advances, the lack of data fluidity prevents people from receiving the best possible care at the lowest cost.

With healthcare costs (and waste) continuing to skyrocket each year, figuring out how to rein in healthcare expenditures is something that desperately needs attention. Part of doing that means developing more efficient means to help patients access care and to help providers and clinicians deliver it, and making sure such products get to market with greater ease and speed.

Increasing speed to market is the toughest part of the equation to solve for because it requires figuring out how to make disparate systems interoperable—that is, making decades-old technology compatible with new technology in order to share data (and learn, and innovate, and apply insights) easily.

Interoperability is the key, so we’ve made it our problem to solve. We’ve built an API that harmonizes all the inconsistent data schemas and formatting across healthcare so that systems can share data that was once incompatible. How people leverage this data and knowledge is going to impact healthcare, and we’re excited to watch the innovation unfold.
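As a simplified sketch of what that harmonization involves (the system names, field names, and unit conversion below are invented for illustration, not our actual API), consider two systems that record the same vitals under different keys and units:

```python
# Hypothetical sketch of schema harmonization: two source systems record
# the same measurement with different field names and units, and a
# normalization layer maps both into one common schema.

def normalize(record, source):
    """Map a source-specific record into the common schema."""
    if source == "system_a":
        # system_a already uses Celsius, but its own key names.
        return {"patient_id": record["PatientID"],
                "temp_c": record["TempCelsius"]}
    if source == "system_b":
        # system_b stores Fahrenheit under a different naming convention.
        return {"patient_id": record["pid"],
                "temp_c": round((record["temp_f"] - 32) * 5 / 9, 1)}
    raise ValueError(f"unknown source system: {source}")
```

Multiply this by hundreds of vendors, thousands of fields, and variant clinical definitions, and the scale of the interoperability problem becomes clear.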

We know that removing this barrier is the first domino that needs to fall because if the sharing of deep-learning knowledge shows anything, it’s that when more people can utilize the knowledge that’s already out there, true innovation can happen.

Photo: ANDRZEJ WOJCICKI, Getty Images


QuHarrison Terry is the Marketing Director and in-house "Futurist" at Redox, the platform for healthcare data exchange. Named twice as a “LinkedIn Top Voice” in Technology, Terry focuses on the future of technology and how it will influence human behavior. Before joining the Redox team, Terry founded 23VIVI, the world's first digital art marketplace powered by the blockchain. He attended the University of Wisconsin-Madison and currently lives in Madison, WI.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.