All that's missing from a high-tech approach to quelling a future outbreak is the ability to share personal and portable health records.

By Gary Meyer and Jay Sultan
How well we fight the next global viral threat depends on the U.S. healthcare industry getting serious about data interoperability.
Go forward in time with us for a few minutes. In late 2023, individuals in gateway cities around the U.S. begin to complain of flu-like symptoms at their monthly telehealth check-ins or when they visit drugstore-based clinics. Whether engaging virtually or in person, patients authorize their caregivers to have immediate access to their longitudinal health records. These records, with data available in a common format and terminology, are housed in hospital electronic medical record (EMR) systems yet used conveniently and securely in more and more contexts.

Algorithms scan the data routinely, searching for signals and patterns. Within two weeks of the initial patient visits, the public health-specific algorithms suggest the emergence of a new virus. The algorithms flag the health records of patients presenting with most or all of the symptoms; hospitals test those patients in minutes for molecular evidence of the virus. Contact tracing immediately identifies friends, family and community members who may have been infected.

Meanwhile, with an emergency declared, public health officials tap anonymized health records to monitor disease progression, diagnosis and treatment responses, underlying health information and mortality rates. Officials use this data to send messages to at-risk persons requesting they self-quarantine. Mobile devices, once used only for bulk broadcasts such as storm and Amber alerts, now enable precision epidemiology, pinpointing immediate intervention without the economy-crippling actions of statewide and nationwide lockdowns. Human and other resources are targeted to the few thousand individuals now affected by the new virus – before it can “go viral.” Equipment and drugs from the industry’s stockpile and supply chain are delivered to hot spots with the accuracy of precision-guided munitions to neutralize this new kind of enemy.

Concurrently, researchers use the ever-growing amounts of longitudinal data to identify the virus’s genetic pathways. Academic and private researchers tap new public health data banks of depersonalized data to begin developing therapies and preventative measures aligned to genomic variants of the virus. Clear data about which individuals are most susceptible to the virus, its mortality rate and its spread factor helps elected officials make well-informed public health decisions. With granular, high-risk parameters defined down to the genetic level, vulnerable individuals successfully self-quarantine; in-home telehealth consults and treatments ensure medical centers are not overwhelmed; and the economic and human costs of the virus are minimized.
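The pattern-matching at the heart of this scenario is simple to sketch. What follows is a purely illustrative example, not any vendor’s actual surveillance logic: the record structure, the symptom codes and the 75% match threshold are assumptions made only for the sake of the sketch.

```python
# Illustrative sketch only: flag longitudinal records whose recent encounters
# cover most of a watch-list symptom pattern. The record structure, symptom
# codes and threshold below are assumptions, not part of any real system.
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical symptom watch list expressed as standardized terminology codes;
# the specific strings here are placeholders.
WATCH_LIST = {"fever", "dry-cough", "fatigue", "loss-of-smell"}
MATCH_THRESHOLD = 0.75          # flag when >= 75% of watch-list symptoms appear
LOOKBACK = timedelta(days=14)   # only consider recent encounters

@dataclass
class Encounter:
    when: date
    symptoms: set[str]          # coded symptoms recorded at the visit

@dataclass
class PatientRecord:
    patient_id: str
    encounters: list[Encounter]

def flag_for_review(record: PatientRecord, today: date) -> bool:
    """Return True if recent encounters cover most of the watch-list pattern."""
    recent = {
        code
        for enc in record.encounters
        if today - enc.when <= LOOKBACK
        for code in enc.symptoms
    }
    coverage = len(recent & WATCH_LIST) / len(WATCH_LIST)
    return coverage >= MATCH_THRESHOLD

# Example: two telehealth visits in the past week trip the flag.
record = PatientRecord(
    patient_id="example-123",
    encounters=[
        Encounter(date(2023, 11, 2), {"fever", "dry-cough"}),
        Encounter(date(2023, 11, 6), {"fatigue", "loss-of-smell"}),
    ],
)
print(flag_for_review(record, today=date(2023, 11, 8)))  # True
```

In practice such a flag would run continuously over standardized records and would be tuned and validated by epidemiologists rather than hard-coded as above; the point is that none of it works unless the records themselves are available in a common format.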
Many of the components of the above scenario – near-real-time genomic testing of viruses and humans, telehealth capabilities, defined standard data formats, mobile tracing – exist today. China, Taiwan and Singapore have used many of these emergent capabilities effectively in quelling COVID-19 outbreaks. These countries also had one other crucial capability: health data interoperability. In Taiwan, health insurance and immigration agencies combined the 14-day travel histories of local and foreign residents with their insurance card information, enabling healthcare providers and pharmacies to integrate those insights into personalized care. Officials also track the cell phones of self-quarantined patients to ensure they stay home.

The U.S. health industry, by contrast, cannot support a high-tech approach to quelling an outbreak because health insurers, care providers and researchers cannot easily share data. Instantaneous genomic sequencing will do little good if a provider doesn’t know which ER patients to apply that data to because their health records are inaccessible or incomplete. For healthcare professionals to be well-equipped to fight the next pandemic, the U.S. healthcare industry must get serious about data interoperability.

The foundations of interoperability were legislated as far back as the Health Insurance Portability and Accountability Act (HIPAA) of 1996. HIPAA is mainly known for protecting privacy, but its twin goal was to enable individuals to access their own data. Few healthcare organizations invested in those capabilities, in part because there was little incentive: the penalties for failing to release data were nowhere near as onerous as the fines for unauthorized access. Then came the 21st Century Cures Act of 2016, designed to ensure that healthcare data can flow as needed while preserving patient privacy and enabling individuals to control how and where their health data is used. A data interoperability rule required by the act is scheduled to go into effect at the end of 2020. That date is the result of one deadline extension, and some in the industry are already calling for another.
How well we fight the next global viral threat may depend on the interoperability rule and other regulations that may emerge from the Cures Act. It’s not perfect. The Interoperability and Patient Access Final Rule applies only to health insurers, not hospitals, and addresses specific government-sponsored health plans: Medicare, Medicaid, the Children’s Health Insurance Program and Qualified Health Plans. CMS has indicated this is a foundation on which to build out more interoperability for privately insured individuals and among physician and hospital systems.

If the U.S. interoperability rule had gone into effect on January 1, 2019, as originally planned, many at-risk seniors could right now authorize their health insurers to release all the administrative and clinical data they’ve collected to their new telehealth doctor or to the hospital where they are being treated for the virus. This data would include all doctors and hospitals visited, all prescription medicines received, and all procedures and major diagnoses. Symptomatic patients arriving at an ER would not have to remember any of that information: with their permission, ER physicians could immediately retrieve it and start making better-informed care decisions. During a national state of emergency, local, state and federal public health officials could retrieve required patient data almost instantly, reducing the need for elected officials to take drastic social and economic measures. The value of accelerated research and faster availability of treatments and vaccines would be measured in lives saved.

Interoperability also enables third-party developers to tap standardized health plan data via APIs and write applications designed to use that data (a brief code sketch follows below). They could combine that information with data that individuals collect from home monitoring and fitness devices. AI agents in the apps could alert patients and their physicians if subtle signals indicate a patient’s condition is deteriorating. These measures could be refined if health data, especially about risk factors, could be combined in real time with geo-tracing data from mobile phones and credit card purchases. That would enable precision epidemiology and more targeted social isolation, reducing the personal and economic effects of an outbreak or pandemic.

Achieving this goal requires health record data to be as portable as our mobile phones. The interoperability rule standardizes how we can do that while balancing privacy and access. Instead of claiming interoperability needs to be delayed yet again, the health industry must give it high priority as a clear preventative measure for whatever virus comes next.
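To make the developer scenario above concrete, here is a minimal sketch of how a patient-authorized app might read standardized plan data over an HL7 FHIR-style API, the standard the rule builds on. The endpoint URL, token and patient ID are placeholders; in a real application, access would be granted through the plan’s OAuth 2.0 consent flow, with the patient in control.

```python
# Minimal sketch of a third-party app reading a patient's data through a
# FHIR-style "Patient Access" API. The base URL, token and patient ID are
# placeholders; a real app would obtain its token through the health plan's
# OAuth 2.0 authorization flow with the patient's explicit consent.
import requests

FHIR_BASE = "https://example-health-plan.com/fhir"   # hypothetical endpoint
TOKEN = "patient-authorized-access-token"            # placeholder credential
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/fhir+json",
}

def fetch_bundle(resource_type: str, params: dict) -> list[dict]:
    """Return the resources in the first page of a FHIR search bundle."""
    resp = requests.get(f"{FHIR_BASE}/{resource_type}", params=params, headers=HEADERS)
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

patient_id = "example-patient-id"

# Diagnoses (Condition) and claims-derived ExplanationOfBenefit resources are
# two of the data types a patient-access API is meant to expose.
conditions = fetch_bundle("Condition", {"patient": patient_id})
claims = fetch_bundle("ExplanationOfBenefit", {"patient": patient_id})

for cond in conditions:
    coding = cond.get("code", {}).get("coding", [{}])[0]
    print(coding.get("display", coding.get("code")), cond.get("recordedDate"))
```

The point is less the specific calls than the fact that, once the data is exposed in a common format with the patient’s consent, a few dozen lines are all that separate that consent from an application able to act on a complete record.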
Gary Meyer is an Assistant Vice President within Cognizant Consulting’s Healthcare Practice, and Jay Sultan is Vice President of Healthcare Innovation and Strategy at Cognizant.