Sunday, October 23, 2011

Burning Issues: Health

See also Club of Amsterdam Journal, November 2011, Issue 143, Special Edition

Philip Gagner, Chief Scientist and Vice President, Schloer Consulting Group

Schloer Consulting Group is presently developing large scale interoperable electronic health records (EHR) systems specifically designed to assist in the delivery of patient-oriented, biometrically secure healthcare on municipal, regional, and national levels.

Philip has more than 30 years of experience in the computer and technology fields, including robotics, digital hardware design, software development, data communications, finance, and law. He earned a Juris Doctor degree from Georgetown University, and has litigated some of the leading cases in software and technology law. In addition, his technical experience includes work as a researcher at the M.I.T. Artificial Intelligence Laboratory, hardware and software engineering as Senior Software Engineer at Digital Equipment Corporation, several years as a senior researcher at the Federal Judicial Center in Washington, D.C., and founding Legal Data Systems, a software solutions company.


Club of Amsterdam: With technology now present in all aspects of our daily lives, it is not surprising that healthcare systems in high-income countries depend more and more on technological and computer-assisted devices for their functioning and services. The 2011 American healthcare reform perfectly illustrates the computerization of a number of Medicare activities. Are the current security measures and regulations enough to guarantee the functioning of a system concerned with something as important as health?

Philip Gagner: Let me, somewhat artificially, divide healthcare technologies into two parts: first, those that actually deliver healthcare (such as EEG machines), which I will label "medical devices"; and second, those that keep patient histories and perform billing and financial functions, which are generally called electronic healthcare records (EHR) systems. I think that, given current technologies, these must be viewed separately, although some devices, such as sleep apnea PAP machines, some dialysis machines, and some insulin dispensers, are both therapeutic and keepers of records.

With regard to medical devices, the current security situation is appalling. At a recent Black Hat conference, a security researcher demonstrated how easily he could hack medical devices such as an insulin pump. The researcher, Jerome Radcliffe, was interested in the security of his own insulin pump, and he discovered that there was essentially no security in the device at all: he could remotely control it with a simple radio transceiver, assisted by a Java applet provided by the device manufacturer.(1)

More and more medical devices are being networked, and those networks are connected to the Internet. This is true both for home monitoring and for use in physicians' offices and hospitals. The convenience of remote access (as well as remote control) makes such connections inevitable.(2) The increasing use of wearable monitoring devices (often connected to smartphones) raises further security issues, affecting both the privacy and even the continued good health of their users.

Connecting a device to a smartphone is easy. However, smartphones are notoriously easy to hack, and any system that uses them is vulnerable to denial of service, eavesdropping, man-in-the-middle attacks, and the insertion of dangerously false data or commands. These devices are generally connected wirelessly, and the protocols they use (CGM radio links and serial Bluetooth, for example) are also fatally insecure. In addition to wearable sensors, hospital equipment is increasingly connected to networks, and the security used is generally non-existent or easy to compromise.
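
To make the exposure concrete: in an unauthenticated command protocol, nothing in a packet tells the device who built it. The sketch below is a toy model in Python - the packet layout and field names are my own invention, not any vendor's actual protocol - showing that a forged packet is indistinguishable from a legitimate one.

    # Toy model of an UNauthenticated device command channel.
    # Packet layout and field names are hypothetical, for illustration only.
    import struct

    def parse_command(packet: bytes):
        """Device-side parser: accepts any well-formed packet."""
        device_id, opcode, dose_tenths = struct.unpack(">IHH", packet)
        return {"device": device_id, "opcode": opcode, "dose_units": dose_tenths / 10}

    # A legitimate controller and an attacker build packets the same way;
    # nothing in the protocol lets the device tell them apart.
    legit = struct.pack(">IHH", 0xCAFE01, 0x01, 25)    # controller: deliver 2.5 units
    forged = struct.pack(">IHH", 0xCAFE01, 0x01, 250)  # attacker: deliver 25 units
    print(parse_command(legit))
    print(parse_command(forged))  # accepted just the same; the device would execute it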

If the possibility of remote commands that mis-deliver insulin is not sufficiently alarming, consider the group of researchers who gained wireless access to a commercially available heart defibrillator and pacemaker. They were able to do so in an undetectable manner, and claimed that they could easily have set the device to kill the user, had it been in a human body.(3)

Many millions of such devices are implanted or worn today, and tens of thousands more are prescribed or implanted each day. For existing devices, correcting even the most blatant security flaws is an intractable problem. As Gollakota et al. point out, such devices have limited memory and limited possibility of upgrade. Replacing them would often require major surgery, with high risks. In addition, using cryptographically secure techniques might actually endanger patients, for example if doctors at a different hospital required emergency access to the device.

For future devices, security can, and must, be built in. Existing medical standards are inadequate, and all software (including cryptographic software) for medical devices has special requirements of reliability and proper fail-safe modes. All software is notoriously prone to unanticipated bugs, and the more complex the device, the more bug-prone it becomes. Security for medical devices must therefore be simple, and at the same time highly resistant to both passive attacks (e.g. unauthorized monitoring) and active attacks, meaning attacks that issue unauthorized commands to the device.
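
As one illustration of what "simple yet resistant" could look like - this is an illustrative design of my own, not a published medical standard - each command could carry a short keyed MAC and a monotonically increasing counter: a passive eavesdropper learns nothing that helps forgery, and a replayed or altered active command is rejected.

    # Illustrative sketch: authenticated device commands with replay protection.
    # The shared key, packet layout, and tag length are assumptions, not a standard.
    import hmac, hashlib, struct

    SHARED_KEY = b"provisioned-at-manufacture"  # hypothetical provisioning step

    def make_command(counter: int, opcode: int, arg: int) -> bytes:
        body = struct.pack(">QHH", counter, opcode, arg)
        tag = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()[:8]
        return body + tag

    def verify_command(packet: bytes, last_counter: int):
        body, tag = packet[:-8], packet[-8:]
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, expected):
            raise ValueError("bad tag: forged or corrupted command")
        counter, opcode, arg = struct.unpack(">QHH", body)
        if counter <= last_counter:
            raise ValueError("stale counter: replayed command")
        return counter, opcode, arg

    pkt = make_command(counter=42, opcode=0x01, arg=25)
    print(verify_command(pkt, last_counter=41))  # accepted
    # verify_command(pkt, last_counter=42)       # would raise: replay detected

Even this tiny design illustrates the tension described above: a hard-coded shared key is exactly the sort of mechanism that would lock out an emergency-room doctor at a different hospital.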

Adopting rigorous medical device communication standards and thorough device testing can reduce these problems for future devices. Today, there are no universally accepted standards, and there is little if any penetration testing. Even so simple a measure as wearing a removable metal shield over an implanted device can significantly reduce radio remote-control hacking (see footnote 3), but such measures are not generally known to, or even thought about by, doctors.

The difficulty of allowing access to authorized medical providers while denying it to unauthorized ones ties the problems of device security to problems in electronic healthcare records (EHR). The healthcare records industry is fragmented not only along national borders, but also within nations. In many countries there are multiple competing EHR systems, with the United States being the worst example. Simply obtaining a patient's electronic healthcare records can be such a bureaucratic and technical nightmare that doctors often merely fax them. This is even more true for records stored on incompatible systems, or on systems with incompatible authorization protocols.

When healthcare records are stored electronically, there are no universally accepted security standards. There are various laws in various countries regarding patient privacy, but from a technical standpoint, these are meaningless. If my doctor has my records on an office computer, and a worker in the doctor's office, on the same network, downloads a pirated electronic game containing computer viruses and Trojan horses, then all the policies and laws in the world have no effect. A famous case of public disclosure involved the cancer records of the actress Farrah Fawcett and other celebrities: in 2008, an unauthorized employee with an administrative password was easily able to access them, and sold them to the press.(4)

Security of EHR, like security of medical devices, is both a technological problem and a medical problem. As medical devices and medical records systems become more and more integrated, issues of security and privacy become issues of medical ethics and of sound medical practice. Just as doctors should not use equipment on which they have not been trained, neither should they use computer systems that they do not understand. But, every day, in every country, they do.

Social and legal systems must be changed to address these issues. First, we can no longer tolerate fragmentation of EHR standards. To be minimally medically acceptable, an EHR system must be able to forward records to at least the likely set of medical providers. My company, Schloer Consulting, has designed a system that provides for electronic translation and interchange of EHR between all major standards, and uses biometric security and encrypted channels as integral components. This is not a perfect solution, but it is far superior to most systems.
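
I cannot reproduce our implementation here, but the general shape of cross-standard interchange is a canonical intermediate representation: each supported format is mapped into, and out of, one common model, so that N formats need 2N translators rather than N-squared pairwise ones. The sketch below uses two invented "standards" and invented field names purely for illustration.

    # Minimal sketch of EHR interchange via a canonical intermediate record.
    # Both "standards" and all field names are invented for illustration.
    def from_format_a(rec: dict) -> dict:
        """Hypothetical Standard A stores a single free-text name field."""
        given, family = rec["name"].split(" ", 1)
        return {"patient_id": rec["pid"], "given_name": given,
                "family_name": family, "dob": rec["birth_date"]}

    def to_format_b(canon: dict) -> dict:
        """Hypothetical Standard B wants family-name-first display names."""
        return {"id": canon["patient_id"],
                "display_name": canon["family_name"] + ", " + canon["given_name"],
                "dob": canon["dob"]}

    record_a = {"pid": "P-0001", "name": "Marie Curie", "birth_date": "1867-11-07"}
    print(to_format_b(from_format_a(record_a)))
    # In a production system each hop would additionally travel over an
    # encrypted channel and be gated by strong (e.g. biometric) authentication.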

Devising secure technical solutions for EHR within one group - a nation, for example - is not that difficult, but it is expensive. It requires cooperation and enforced standards among providers, and between providers and payers.(5) In the United States, such cooperation has been mandated by recently passed legislation, which Republicans there term "ObamaCare". We do not think that this legislation goes far enough, and it certainly does not solve, or even address, the problems globally.

Medical device and EHR cyber-security standards must both be rigorous, and both ought to require thorough penetration testing. The first murder by cyber attack probably has not yet occurred - although we would not know if it had - but in today's world it is a very real possibility. The first major unauthorized releases of EHR data have, indeed, occurred. Present security technologies for medical devices and records are totally inadequate. We can correct this with a combination of legal, ethical, and technological changes, but resources must be made available to do so. I do not see this happening to nearly the extent required.


Club of Amsterdam: The Personalized Healthcare Initiative, a recently launched project in the USA, has set itself the goal of using clinical and genomic information to improve the effectiveness, safety, and quality of treatment by adapting it to each individual's medical identity. Would this kind of project be possible on an international scale, or are established healthcare systems, such as the French one with its universal coverage, the most efficient we can hope for at this scale?

Philip Gagner: The US healthcare system is a disaster. People in the USA pay four times as much as most of the civilized world for healthcare, without significantly better outcomes (and in many cases, such as infant mortality, much worse outcomes). Despite the deplorable state of both its public and its private healthcare, the US remains a leader in medical technology research. One of the most ambitious and controversial high-tech programs is the Personalized Healthcare Initiative (PHCI).

In the words of official US Department of Health and Human Services documents:
"The Personalized Health Care Initiative will improve the safety, quality and effectiveness of healthcare for every patient in the US. By using "genomics", or the identification of genes and how they relate to drug treatment, personalized health care will enable medicine to be tailored to each person's needs."(6)

The PHCI has two guiding principles and four goals:
Principle 1: Provide federal leadership supporting research addressing individual aspects of disease and disease prevention with the ultimate goal of shaping preventive and diagnostic care to match each person's unique genetic characteristics.
Principle 2: Create a "network of networks" to aggregate anonymous health care data to help researchers establish patterns and identify genetic "definitions" for existing diseases.

The four goals are, generally: [1] link clinical with genetic information; [2] protect individuals from unauthorized or discriminatory use of genetic information; [3] ensure the accuracy and clinical validity of genetic testing; and [4] develop common policies for access to genomic databases. It is notable that neither of the two guiding principles explicitly includes ethical or privacy concerns. The second goal (and to a limited extent the fourth) addresses individual privacy but, as I read the descriptions, fails to recognize that privacy is, in fact, in conflict with the other goals and principles.

PHCI builds on prior U.S. law, primarily the Genetic Information Nondiscrimination Act (GINA), which prohibits most uses of genetic information by employers and by health insurers. This law, according to the NIH National Human Genome Research Institute, is needed so that individualized healthcare can flourish without patients worrying that test results may adversely affect their employment or insurance situation.

It is worth focusing on GINA because it is both an inspiration for, and intimately connected with, the implementation of PHCI, and because the concern that patients may risk bad non-medical consequences from a medical test is a valid one. Employers are rationally less likely to hire and train an employee who carries a genetic marker for early death. Similarly, private insurance companies, if they have the option, are less likely to insure somebody who is more likely than average to develop a severe condition requiring expensive medical care.

Employers are relatively easy to regulate. Their actions are, to the employees, quite public, and measures in the US such as work-hour rules, the minimum wage, and even anti-discrimination laws have been widely successful. Because GINA is another anti-discrimination statute, it too is likely to be widely observed and honored.

Insurance companies are another matter. American insurance law does not protect people with non-genetic indicators of future bad health, such as cancerous polyps or an unfavorable X-ray finding. In fact, GINA actually leaves them worse off, because, deprived of clearly relevant predictive data, insurers must rely on less reliable indicators or on secondary sources, such as the treatments given. This means, at best, attempts to circumvent the prohibition indirectly, as well as the use of an inconsistent set of predictors. It means increased randomness in outcomes, which in turn means that the overall variance in insurance premiums will rise across the population. And because a negative diagnosis can be inferred from treatment (and the use of this data is not prohibited), patients will be less willing to undergo preventative care if they perceive it as likely to raise their premiums.(7)

Second, patients who receive bad news from their genetic testing will, rationally, opt for more medical insurance coverage, and patients who receive good news will, rationally, opt for less. The outcome is an overall smaller insured population with greater healthcare risks, and the GINA goal of spreading risk through the population cannot possibly be satisfied.
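
The dynamic is easy to see in a toy model - all the numbers below are invented, purely to show the mechanism, sometimes called the adverse-selection spiral:

    # Toy adverse-selection model; all numbers are invented for illustration.
    import random
    random.seed(1)

    population = [random.uniform(0.01, 0.20) for _ in range(100_000)]  # annual risk
    avg_risk = sum(population) / len(population)
    premium = avg_risk * 50_000  # insurer prices at the pool-average expected cost

    # After testing, each person knows their own risk; those whose expected
    # cost falls below the premium rationally opt for less (here: no) coverage.
    stay = [r for r in population if r * 50_000 >= premium]
    new_avg = sum(stay) / len(stay)
    print(f"average risk before: {avg_risk:.3f}, after low-risk exit: {new_avg:.3f}")
    # The premium must now rise to cover the sicker remaining pool, pushing
    # out the next tier of low-risk members, and so on.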

The distinction between genomic information and other medical information is, in my view, arbitrary and without any valid basis in science. If you consider a future where genetic testing technologies are low-cost and commonplace, and where better genomic knowledge predicts more and more about human physiology (including disease processes), then such testing becomes just another medical tool, like a biopsy. The entire idea behind GINA - the distinction between genomic disease probabilities and observable current medical conditions - is a false distinction, and the underlying policy problem is that insurers are permitted (in the United States) to discriminate by risk-pool manipulation based on any medical test.

In a system such as the French one, where essentially 100% of the population is in the same risk pool, there is no discrimination by excluding those with genetic markers perceived as negative. Many of the features of PHCI and of GINA are based on policies to prevent such exclusion, but they do not solve the problem in a way likely to succeed, nor are such features necessary or desirable in other nations.

Whether to prohibit discrimination by employers is another matter. To me, it seems probable that such a prohibition would be both necessary, on moral and social grounds, and effective; similar prohibitions are found in French and other European national laws.

Other provisions of PHCI remain valid, and appear to hold great promise both for clinical treatment and for public health. Genomic testing is still expensive, but costs a fraction - as little as one fifth - of what it did a decade ago. This cost will continue to decrease, and equipment to sequence DNA and DNA fragments will be available at any large hospital in developed countries. With present technologies, much of the analysis required to perform genetic tests is done by highly trained people, but nearly all of this can, very likely, be automated. One researcher at George Washington University Hospital is developing large-molecule detection devices that cost less than ten dollars and are disposable. These particular devices test for certain antibodies, but similar technologies are feasible for DNA marker testing.

The medical risks of genomic testing - as distinguished from the risks of genomic diagnosis - are almost non-existent for adults, and minor for infants and fetuses. Assuming that current cost-reduction trends continue, and that an increasing number of disease processes are linked to genetic markers, demand from physicians and patients will increase. Pharmacogenetics, allowing the targeted prescription of drugs based on DNA and large-molecule markers, has already entered medical practice and has been successful.(8)

Low-cost, large-scale genetic testing provides two very different benefits. First, it has clinical utility; that is, it can alert healthcare providers to increased probabilities of certain outcomes for an individual patient. Second, it can provide a database for medical research. These two benefits carry two parallel sets of risks. In the clinical case, the risks include the psychological burden of knowing that one is at higher risk for a certain condition (which may lead to behavioral changes harmful to the individual's overall health, such as fad diets, wasting money on charlatan healers, or even taking unnecessary medications), and can include false complacency based on negative test results.(9)

By way of example, consider a newborn screened for genetic markers for cystic fibrosis. Early diagnosis of that condition is believed to significantly improve clinical outcomes by allowing prompt administration of pancreatic enzymes and treatment of infections.(10) There are, of course, corresponding risks, and one can easily identify the risk of incorrect test results among them. Nevertheless, as a matter of clinical utility, one must determine whether the evidence-based benefits outweigh the evidence-based risks.

From the public health viewpoint (the viewpoint in which the database referred to above is useful), there is a significant benefit to large-scale genetic testing. But since the public at large will carry the cost burden, the public health benefits and risks must also be weighed. As a matter of basic research, the type of database envisioned by PHCI will be valuable; a simple example is correlating gene markers on one DNA segment with those on another and comparing them with other observed health information. Such database mining has already found correlations, and has identified areas for further (non-genomic) research into specific disease processes. The problem is that, to achieve these benefits, individual data, including environmental data, must be stored in the database. The more data that are stored, and the greater the degree of public access, the more difficult it becomes to protect (or obscure) the identity and privacy of the tested individuals.
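
The mining involved is statistically unsophisticated; what matters is scale. Here is a sketch of the basic operation - testing whether a marker is associated with a condition - on synthetic data; the marker, the condition, and the planted effect size are all invented:

    # Sketch of marker/condition association mining on synthetic data.
    import random
    random.seed(7)

    rows = []
    for _ in range(10_000):
        marker = random.random() < 0.3
        # Planted ground truth: the marker roughly triples the odds of the condition.
        rows.append((marker, random.random() < (0.06 if marker else 0.02)))

    # 2x2 contingency table and odds ratio
    a = sum(1 for m, c in rows if m and c)        # marker+, condition+
    b = sum(1 for m, c in rows if m and not c)    # marker+, condition-
    c2 = sum(1 for m, c in rows if not m and c)   # marker-, condition+
    d = sum(1 for m, c in rows if not m and not c)
    print(f"odds ratio = {(a * d) / (b * c2):.2f}")  # ~3 recovers the planted effect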

An example might be helpful here. Consider a database that contains the following information: a male individual (name and exact address obscured in the database), of mixed Caucasian-Asian ancestry, brown hair, dark brown eyes (all easily determinable from DNA markers), born July 2009, early medical history including persistent cough and stomach swelling, living in a farming community near Nice, France, within 5 km of a fertilizer storage and processing facility, with some genetic markers for cystic fibrosis. Given this information, it very likely would be possible, even easy, to identify the particular individual. At present, testing infants at birth for cystic fibrosis (particularly if there is a family history) is commonplace. But the results of that testing are not stored in a large and generally accessible database, and so are not available to neighbors, potential employers of other family members, the press, or charlatans hoping to peddle quackery to distressed parents.
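
Re-identification is nothing more exotic than filtering on such quasi-identifiers until one record remains. A sketch on a synthetic table (every record below is invented):

    # Re-identification by filtering quasi-identifiers; all data are synthetic.
    records = [
        {"sex": "M", "ancestry": "Caucasian-Asian", "birth": "2009-07",
         "region": "near Nice", "cf_markers": True},
        {"sex": "F", "ancestry": "Caucasian", "birth": "2009-07",
         "region": "near Nice", "cf_markers": False},
        # ...in reality, thousands more rows...
    ]

    def candidates(db, **quasi_ids):
        return [r for r in db if all(r.get(k) == v for k, v in quasi_ids.items())]

    matches = candidates(records, sex="M", birth="2009-07",
                         region="near Nice", cf_markers=True)
    print(len(matches))  # each attribute shrinks the set; 1 means identified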

A large public database of highly personal and traditionally private information is, by its very nature, inconsistent with individual privacy. The more one limits access to such data, the less likely the data are to be used for useful research. The more access one provides, the fewer realistic assurances of privacy one can give. This problem cannot be solved by legislation or by technology - the two worthwhile goals are simply inconsistent. One must decide how important the privacy issues are and how valuable the research results will be, and then adjust the database's content and access to achieve the balance.

In conclusion, the United States, first with GINA and later with PHCI, has determined to create a highly regulated national database of individual genomic information. The designers of the system are rightly concerned with the individual privacy issues and with the public health risks, including those described here. The Obama administration has determined that the probable public benefits outweigh the public and individual risks, and this is likely the correct decision. But it is, in my view, a decision in an area fatally marred by the US healthcare payment and insurance coverage system. An insurance company that, in partial or complete defiance of the law, uses genomic information to reduce its payment risk will make greater profits than one that does not. Since the purpose of corporations is to maximize profits, this pressure to gain information will be intense. And although US law prohibits using individual data, it does not, as I read it, prohibit using genomic information to create risk pools by statistically analyzing the data after removing individual identification. The smaller (the more specific) the risk pool, the more closely this resembles discrimination against individuals. There is a large grey area of vagueness here, and insurance companies will undoubtedly exploit it.

By contrast, in a system where healthcare coverage is universal, the calculus becomes much easier. Assuming that reasonable measures are taken to maximize privacy and minimize security and penetration risks, the benefits to the public of such a database seem quite clearly to outweigh the risks. Genomic markers generally indicate a probability, not a certainty, of medical conditions, and genes generally work in combination to produce physiological effects. Understanding the probabilistic evidence for or against clinical therapeutic measures comes only with large populations,(11) and such a database is likely to reduce the number of expensive clinical trials. In addition, knowing what genetic predispositions exist in the population as a whole is valuable to public health officials.
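
A rough illustration of why large populations are needed: to distinguish a condition rate of 2.0% in marker carriers from 1.5% in non-carriers (hypothetical rates, chosen only to show the scale), the textbook two-proportion sample-size calculation calls for on the order of ten thousand subjects per arm.

    # Two-proportion sample-size estimate (textbook normal-approximation formula).
    # The rates are hypothetical, chosen only to illustrate the scale.
    from math import sqrt

    def n_per_group(p1, p2, z_a=1.96, z_b=0.84):  # 5% two-sided alpha, 80% power
        p_bar = (p1 + p2) / 2
        num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
               + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return num / (p1 - p2) ** 2

    print(round(n_per_group(0.020, 0.015)))  # roughly 10,800 in each arm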

Genomic information, in databases or otherwise, is not different in kind from other medical information; it is just newer. The same measures that are necessary to protect people from the disclosure of private matters are necessary for genomic information, not more and not less. Those features of PHCI that do not relate specifically to the US healthcare insurance industry can, and should, be adopted in other countries and, to the extent politically possible, the PHCI database should be extended into an international genomic database of the human race.



1) Hacking Medical Devices for Fun and Insulin: Breaking the Human SCADA System, Jerome Radcliffe, http://www.blackhat.com/html/bh-us-11/bh-us-11-briefings.html (retrieved October 19, 2011).
2) Wearable Wireless Sensors, ABI Research, 3Q 2009
3) They Can Hear Your Heartbeats: Non-Invasive Security for Implantable Medical Devices. Gollakota, Hassanieh, Ransford, Katabi, Fu, In Proceedings of ACM SIGCOMM. August 2011.
4) Los Angeles Times, May 9, 2009, http://articles.latimes.com/2009/may/09/local/me-hospital9 (retrieved October 14, 2011).
5) In the data processing world, it is often said, jokingly, that the best things about standards are that (1) there are so many to choose from and (2) if you don't like the existing ones, wait a month, because they change so frequently. This is certainly true in healthcare records management, and the only feasible solution is to mandate, by government regulation, that systems be compatible with (that is, capable of interchanging data with) a certain standard of choice. Yet this very requirement adds complexity and new security vulnerabilities.
6) http://www.hhs.gov/myhealthcare/ (retrieved October 18, 2011).
7) This analysis of insurance company reactions to GINA was first and cogently argued by Professor Russell Korobkin, J.D., UCLA Center for Society and Genetics and UCLA Law School, and Dr. Rahul Rajkumar, M.D., J.D., of Brigham and Women's Hospital, Boston, MA, USA. The conclusions drawn from the argument, however, are mine, and not to be attributed to them.
8) Genetic Testing in Clinical Practice, Lamberts and Uitterlinden, Annu. Rev. Med 2009, 60:431-42.
9) What is the clinical utility of genetic testing? Scott D. Grosse and Muin J. Khoury, Genetics in Medicine, Vol. 8, No. 7 (July 2006).
10) Ibid at 449.
11) Genetic Testing in Clinical Practice, Annual Review of Medicine, Vol. 60: 431-442 (Volume publication date February 2009).
