My phone rang late one Friday night. On the other end of the line was a physician at the end of his proverbial rope.
“I can’t do it,” he said, desperation in his voice. “I can’t tell another mother that I don’t know why her baby died. Four hours ago, he was alive and now he is dead and all the monitoring data was deleted. We have to do better.”
His desperation is shared by the tens of thousands of healthcare providers who went into this field to save lives. “I live and die by waveforms,” the physician continued. “It’s imperative that I see data beat to beat, and everything that’s in the EMR is piecemeal, single points in time.
“I need to know about every moment in time, across every device my patient is connected to. I need to correlate that with other data so I can figure out the root cause quickly, and treat. When there isn’t anything I can do to save a patient, I want to be able to figure out why. And then, if there is a way to predict risk faster, potentially by using AI, I want access to the data to do it.”
“But it all starts with the data,” he said. “Something has to change.”
The frustration lies not only in the fact that care is delayed, length of stay is increased, and lives are potentially lost, but also in the fact that we have the data that might have allowed for different care decisions and earlier interventions. The challenge… the data simply isn’t easily accessible.
Clinicians need more timely and contextual data than what is currently available in places like the EMR, data lakes, and in other systems. Specifically, providers need access to the physiologic waveform data from the biomedical devices monitoring patient health, especially in critical care settings where patients crash quickly, and often without warning.
Patients – especially critically ill patients in the ICU – generate an incredible amount of information through the many devices monitoring each person’s condition. On average, these patients are connected to 4 to 10 devices generating over 800,000 samples per hour. In most hospitals today, that information is locked down in proprietary formats, only visible at the bedside, with only fragments of discrete vitals data available in the EMR.
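To put that figure in perspective, here is a back-of-envelope sketch; the device mix and sampling rates below are illustrative assumptions, not vendor specifications, but even a modest set of monitors quickly exceeds 800,000 samples per hour:

```python
# Back-of-envelope estimate of hourly waveform sample volume for one
# monitored ICU patient. Rates are illustrative assumptions, not specs.
devices = {
    "ECG (single lead)": 250,      # Hz, typical ECG waveform rate
    "arterial pressure": 125,      # Hz
    "pulse oximetry (pleth)": 60,  # Hz
    "respiration": 60,             # Hz
}

samples_per_hour = {name: hz * 3600 for name, hz in devices.items()}
total = sum(samples_per_hour.values())

for name, n in samples_per_hour.items():
    print(f"{name}: {n:,} samples/hour")
print(f"total: {total:,} samples/hour")  # well over 800,000 for 4 devices
```

A single ECG lead alone contributes roughly 900,000 samples per hour, which is why interval-based charting captures only a sliver of what the devices produce.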
If healthcare is going to realize the promise of AI, we have to first fix the medical data problem, starting by making sure that the data already being captured is accessible for clinical surveillance and monitoring beyond the bedside. Once we unlock it, we can then start building new models, creating new analytics, and leveraging new measures of risk to predict deterioration faster, and further reduce patient risk.
But how do we get the data? It’s all about breaking down those data silos and proprietary formats through device integration.
This gathering of information is part of the process that Eric Topol, M.D. – cardiologist, geneticist, author, and scientist – refers to in his book Deep Medicine, a manifesto about the pressing need for useful information to transform healthcare. “To do so, we need to deeply define the patient using ALL medical data,” Dr. Topol states.
“There are risks and challenges due to misuse of data, including the fact that patient data is often held for ransom.” Unlocking the data that is often held only in device silos, part of the process Dr. Topol refers to as “deep phenotyping,” is critical to gathering all the information possible from the patient.
Most device integration solutions today weren’t designed to capture the waveform data needed for monitoring, clinical surveillance, or AI. Rather, they were designed to automate the documentation process and send single, discrete vitals to the EMR at defined intervals.
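The difference matters clinically. The toy sketch below uses synthetic data (not a real monitor trace) to show how a brief hypotensive episode can fall entirely between once-a-minute charted vitals while being obvious in the continuous stream; all values are assumptions for illustration:

```python
import math

# Synthetic 10-minute mean arterial pressure (MAP) signal sampled at 1 Hz,
# with a 30-second hypotensive dip starting at t = 310 s. Illustrative only.
def map_mmhg(t):
    base = 80 + 3 * math.sin(2 * math.pi * t / 60)  # mild baseline variation
    if 310 <= t < 340:
        return base - 35  # transient dip into the 40s
    return base

waveform = [map_mmhg(t) for t in range(600)]        # continuous 1 Hz stream
charted = [waveform[t] for t in range(0, 600, 60)]  # once-a-minute charting

print("waveform minimum:", round(min(waveform), 1))  # the dip is captured
print("charted minimum:", round(min(charted), 1))    # the dip is missed entirely
```

The interval-charted record never drops below baseline, while the full-resolution stream shows a clinically significant event.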
There are other challenges, too. Some data points are only available through a particular interface, so it’s important to outline which data is most critical to create and inform the model. If one interface doesn’t contain that data, another will be required.
And finally, implementing device integration can be difficult. Many solutions require hardware at the bedside which is costly and challenging, especially in an already crowded critical care setting, or on an operating bed that can’t be taken out of service to enable the integration. These hardware constraints increase the complexity and often delay the implementation and therefore the ability to realize the benefits of device integration for surveillance and patient-centered analytics.
These many roadblocks underscore the need for a scalable, secure device integration solution that eliminates or reduces hardware at the bedside, possibly leverages your existing architecture, and enables the integration of waveform data into the patient data stream.
For meaningful analysis and clinical decision support, physicians and other members of the care team need better ways to augment their complex decision-making in real time. While EMR data has enabled major breakthroughs in care and is a prerequisite for clinical summaries, data modeling, analytics, and ultimately precision medicine, care teams also need the waveform data that reflects a patient’s changing condition – and not just from a single cardiac monitor, but from all monitoring devices connected to a patient.
When we capture and integrate both, we will have the full-resolution historical data care teams need to rapidly build trends that detect hidden problems, such as a medication causing hypotension, or to assess readiness for extubation. Providing this beat-to-beat historical data for as long as a patient is in the hospital, or possibly even from a prior admission, is crucial to determining the why and to reducing time to intervention, length of stay, and overall patient risk.
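As a toy illustration of the kind of trend analysis full-resolution history enables, the sketch below fits a least-squares slope over a trailing window of minute-averaged MAP values and flags a sustained downward drift after a hypothetical medication; the data, window size, and threshold are all assumptions for demonstration, not clinical logic from any product:

```python
# Illustrative retrospective trend detection on minute-averaged MAP values.
# All numbers are synthetic assumptions, not clinical guidance.
def rolling_slope(values, window):
    """Least-squares slope (units/minute) over each trailing window."""
    xs = range(window)
    x_mean = (window - 1) / 2
    denom = sum((x - x_mean) ** 2 for x in xs)
    slopes = []
    for i in range(window, len(values) + 1):
        seg = values[i - window:i]
        y_mean = sum(seg) / window
        num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, seg))
        slopes.append(num / denom)
    return slopes

# MAP steady at 78 mmHg, then drifting down 0.8 mmHg/min after minute 30.
map_trace = [78.0] * 30 + [78.0 - 0.8 * k for k in range(1, 31)]
slopes = rolling_slope(map_trace, window=10)

# Flag the first sustained downward trend (slope below -0.5 mmHg/min).
first_alert = next(i for i, s in enumerate(slopes, start=10) if s < -0.5)
print("downward trend flagged at minute", first_alert)  # a few minutes after onset
```

Even this crude sketch flags the drift within minutes of onset; with beat-to-beat data and real models, such trends can surface far earlier than interval charting allows.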
When I think back to the desperate words of that skilled physician who is dedicated to saving lives, I think about the information that he never sees, and the opportunity he misses to advance care for his patients. If this physician had the ability to look at all of his patient’s data retrospectively, for as far back as needed, his trained eye could spot warning signs, describe trends, intervene faster, and ultimately save more patients.
Once we get ALL of the data, how do we then start predicting patterns and further reducing patient risk? In an interview with The Scientist, Dr. Topol discussed the next step in the evolution after data gathering, a process he calls “deep learning.”
“No human being can process all that data because it’s dynamic and it’s quite large to deal with,” he explained. “That’s why we have deep learning. It’s a type of artificial intelligence that takes all these inputs and it crystallizes, distills it all.” He points out that even among the data we already have, only five percent of it is being processed.
“We need to digitize, democratize, and do deep learning to create a framework that nurtures the human being, creates deep medicine and restores the human touch between doctors and patients to get ‘care’ back into healthcare,” he states in his book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.
We agree with Dr. Topol and others who underscore the need to have machines process the data but then deliver it back to the providers caring for the patients so they can make more informed decisions faster.
The burning question… how do we make this vision a reality? It starts with looking at the data you need first to determine the best device integration method. Then, it’s about finding a solution that processes all the data, transforms it, and unlocks it so you can do your own analytics and AI at scale. This is the foundation of clinical surveillance, and it’s the dawn of a new age of care.
I think often of that physician’s call late that evening, and of the heartbreaking news he had to deliver to that baby’s mother. I think of his dedication to his patients – the late nights, the weekends, the hard conversations and the tough decisions – and I feel like we have to do better for those who are trying to make the rest of the world better. Let’s get the data and start there.
Heather Hitchcock is the Chief Commercialization Officer of Medical Informatics Corp.