What Can You Do With 5 Billion Heartbeats?


Our team at Medical Informatics Corp (MIC) has helped collect and amass over 100 bed-years of data, which includes more than 5 billion heartbeats. Alongside PhysioNet, it is one of the largest known physiological data repositories in the world. While this is an impressive volume of data, we wanted to outline what we believe is the actual value of high-resolution physiological data.


Why So Much Data?

Before we start down the path of outlining the value of this data, we will first answer the obvious question: Why do we need so much data?

Traditional EMRs sample vital signs off monitors at specific points in time, such as once per minute, once every 5 minutes, or once an hour. It turns out that a LOT can happen during that time! The body is constantly changing, constantly adapting to stresses on a moment-to-moment basis. If you don't record information fast enough, you can miss out on important information. There have been instances where patients have arrested and been resuscitated in between the data samples taken by the EMR, so that if you look back on the patient's vital sign history, you would not even see a change. That's simply not the resolution you need for predictive analytics.
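To make that concrete, here is a minimal sketch with made-up numbers showing how a short event can vanish entirely between EMR snapshots:

```python
import numpy as np

# A minimal sketch with invented numbers: one hour of 1 Hz heart-rate data
# containing a hypothetical 90-second bradycardia episode.
rng = np.random.default_rng(0)
hr = 80 + rng.normal(0, 2, size=3600)   # baseline ~80 bpm, one sample per second
hr[1810:1900] = 35                      # transient event: heart rate drops to 35 bpm

emr_samples = hr[::300]                 # what a once-per-5-minutes EMR would keep

print(f"monitor minimum: {hr.min():.0f} bpm")           # 35 bpm -- event captured
print(f"EMR minimum:     {emr_samples.min():.0f} bpm")  # ~80 bpm -- event invisible
```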

The fact is, there are a number of situations in which the behavior of a patient's vital signs changes prior to specific events. For example, a cardiac patient does not instantly arrest, but may quickly build up to this life-threatening event. There are several reasons why a patient might go into arrest. Maybe their heart isn't receiving enough blood flow to function properly, or maybe their heart has so much blood inside of it that it's under too much stress to function properly. To start developing analytics around arrest, it is important to observe the ways that patients actually do arrest, so that we can start to recognize these features if they happen to another patient. The reason we need so much data is that every patient is slightly different, and there are many different ways that patients arrest. If you have enough data, and you do the math, you can start to see patterns that occur before the event happens, which may allow you to predict whether or not a patient will arrest within the next 1-2 hours.
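As a hedged illustration (this is not MIC's actual analytics), pattern-finding of this kind typically starts by turning the raw stream into sliding-window features that a predictive model could learn from:

```python
import numpy as np

# An illustrative sketch: reduce a raw 1 Hz heart-rate stream to
# per-window summary features (mean, variability, trend).
def window_features(hr, window=300, step=60):
    """One (mean, std, slope) row per 5-minute window, stepped every minute."""
    feats = []
    for start in range(0, len(hr) - window, step):
        w = hr[start:start + window]
        slope = np.polyfit(np.arange(window), w, 1)[0]  # bpm drift per second
        feats.append((w.mean(), w.std(), slope))
    return np.array(feats)

rng = np.random.default_rng(1)
hr = 80 + rng.normal(0, 2, size=7200)   # two hours of synthetic 1 Hz heart rate
X = window_features(hr)
print(X.shape)  # one feature row per window, ready for a classifier
```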

Moving Towards Goal-Directed Therapy

But what about preventing arrest altogether? Wouldn't it be nice for a physician to know the range of blood pressure where their patient's heart becomes ischemic? Or the blood pressure where the heart starts to get stressed? If they knew where these ranges were for their patient, they could simply make sure to stay within those bounds. In medicine, this is referred to as goal-directed therapy. It's like having lines on the road that let you know when you are starting to veer into oncoming traffic, before something bad happens. The challenge is that the goals for a particular patient are typically unknown, because the physician is relying on sparsely sampled data taken at specific points in time. For goal-directed therapy to really work, you need to make available to the physician all of the patient data collected by the patient monitors. You see, the best target goal for one patient can be bad for another. Also, the best goal for a patient can change and drift over time, meaning that what's good for them now could be bad for them tomorrow.
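For illustration only (this is not a clinical algorithm, and all values are invented), a first-pass version of a patient-specific goal range could be derived from that patient's own stable history:

```python
import numpy as np

# A hedged sketch: derive patient-specific blood-pressure "lane markers"
# from that patient's own history -- here simply the 5th and 95th
# percentiles of mean arterial pressure (MAP) over a stable day.
rng = np.random.default_rng(2)
map_history = rng.normal(75, 6, size=86400)   # hypothetical 1 Hz MAP for one day

low, high = np.percentile(map_history, [5, 95])
print(f"suggested MAP goal range: {low:.0f}-{high:.0f} mmHg")

# Anything drifting outside this patient's own band would be flagged for
# review; another patient's band could be quite different.
```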

You may be wondering: don't physicians know these goals already? Aren't there guidelines for what these goals should be? It is true that medical researchers, organizations, and foundations have done good work in establishing broad guidelines for the care of patients. The problem is that these guidelines are set very wide, because they're made for the average patient population, not for a specific individual patient. Such goals really need to be created on a patient-by-patient basis to obtain the best outcomes.

By using the complete, high-resolution physiological history of the patient, patient-specific metrics can be derived that allow physicians to set patient-specific goals and guidelines. MIC has this data, and better yet, we have a system that allows physicians to leverage this data in order to provide even better care for their patients.


So How Is It Done Now?

Monitors measure vital signs (like heart rate) once every 1-2 seconds. Generally, the hospital's EMR is only getting one of these data points every 1-5 minutes. As a result, physicians don't know what's happening in between those sample times.

But the processes that happen in your body don't happen on a one-sample-per-minute basis. Your heart beats 70-110 times per minute! You can't generate a single number that accurately represents all of this information. If you're looking for a particular change in physiology, you have to sample it at the rate at which that change is occurring.

For example, if you're interested in problems associated with the heart, you need to sample the heart at a rate sufficient to see the change. If you need second-to-second data but you're looking at minute-to-minute data, there's no way you will glean any useful information. This is why an EKG is sampled 240 times per second: that's the rate at which the information is actually changing, and that's why the bedside monitors are set at that rate.
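Here is a toy numerical illustration of that sampling argument (the 10-second oscillation is invented for the example): a signal that changes faster than you sample it can disappear completely.

```python
import numpy as np

# A toy illustration of the sampling argument: a signal oscillating once
# every 10 seconds, observed at two different sampling rates.
t = np.arange(0, 600, 1.0)             # 10 minutes at 1 sample per second
signal = np.sin(2 * np.pi * t / 10)    # one full cycle every 10 seconds

per_second = signal                    # monitor-style sampling
per_minute = signal[::60]              # EMR-style, once-per-minute sampling

print(f"1 Hz samples span:   {per_second.min():.2f} to {per_second.max():.2f}")
print(f"1/min samples span:  {per_minute.min():.2f} to {per_minute.max():.2f}")
# The minute-level series aliases the oscillation away (every sample is ~0).
```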

Unlike an EMR, the MIC Sickbay® platform takes in the data at the rate it's coming out of the monitoring equipment. This means that we don't miss a thing. Every data point that a patient generates is captured and made available for patient care. As you can imagine, this produces an enormous amount of data, which leads to its own challenges.

The Challenges Around Data Storage, Management, and Access

If a hospital were to collect all of the data from an ICU bedside monitor, it would gather roughly 200-300 megabytes of data per day per patient. That's not a huge amount for just one patient. But when you scale that to a typical large hospital that admits 100,000 patients per year, it becomes an enormous amount of data (terabytes to petabytes), which creates data acquisition, processing, storage, and management challenges.
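A quick back-of-the-envelope calculation shows the scale. Note that the average length of stay below is an assumption we've added for illustration; only the per-day and per-year figures come from the numbers above.

```python
# Back-of-the-envelope storage math from the figures above.
mb_per_patient_day = 250        # midpoint of the 200-300 MB/day estimate
patients_per_year = 100_000     # typical large-hospital admissions (from above)
avg_stay_days = 5               # ASSUMED average length of stay, illustrative only

total_mb = mb_per_patient_day * patients_per_year * avg_stay_days
print(f"{total_mb / 1e6:.0f} TB per year")   # ~125 TB/year at these assumptions
```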

It takes technologies specifically engineered and designed to support the entire workflow required to use this data: from acquisition, to real-time processing, to management and retrospective analysis, to distributed visualization. If your system doesn't have those components, at some point in your workflow you'll hit a wall and won't be able to proceed. And that's the challenge. You need a system that is designed from the ground up to accelerate each step of this workflow.


So Back to Our Original Question: What Can You Do With 5 Billion Heartbeats?

A lot of things! We have only started to scratch the surface. For example, our team is working on patient population characterizations. It's surprising to realize that the vast majority of patient populations don't have standard physiologic characterization profiles. Where profiles do exist, they're 40 years old and based on a small subset of patients and a small subset of data.
We're going back and looking at all the physiological data for particular cohorts of patients that share a specific diagnosis. Then we're characterizing what that patient population looks like: the distributions of heart rate, SpO2, and respiratory rate, how much time patients spent on ventilators, what drugs they were given, what laboratory tests were run, what the outcomes were, and so on. We're building better baselines for what a patient doing well looks like, which will let us better identify when a patient might be at risk.
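As a hedged sketch of what a cohort summary might look like (the diagnosis labels, column names, and values below are all invented for the example):

```python
import pandas as pd

# An illustrative sketch of cohort characterization: summarize vital-sign
# distributions per diagnosis group. All names and values are hypothetical.
df = pd.DataFrame({
    "diagnosis":  ["CHD", "CHD", "sepsis", "sepsis", "sepsis"],
    "heart_rate": [118, 126, 104, 97, 110],   # toy values, bpm
    "spo2":       [88, 91, 96, 94, 95],       # toy values, percent
})

profile = df.groupby("diagnosis").agg(["mean", "std", "min", "max"])
print(profile)
# Real profiles would be built from the full high-resolution record, plus
# ventilator time, medications, laboratory tests, and outcomes.
```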

By having better data on patients, you can provide better outcomes.
