Computers, Privacy & the Constitution

Feeling Safe When Your Health Is at Risk

-- By RahulWadwa - 15 Jul 2015


Thanks to the Internet and the widespread use of smartphones, most of us are constantly connected. Our locations can easily be traced, old search histories dug up, and intimate messages read. It has been confirmed that the government has been spying not only on us but on other countries as well. If any of this makes you uncomfortable, there are ways to protect your online privacy and avoid being tracked. You could abandon the technologies on which such spying occurs and live off the grid, so to speak. Or, if that is understandably too difficult or impractical in today's world, you could take steps like encrypting your email and deleting your Facebook account.

But what if you could not leave those things behind? Many Americans, attached to their smartphones, would probably say they couldn't. But what if you actually had no choice? What if your life depended on gadgets that are susceptible to hacking, and you literally couldn't get away from them because they are planted inside of you? This is the problem facing the healthcare industry as implantable medical devices (IMDs) that use wireless technology to save lives improve and become increasingly available. With more than 2.5 million people already relying on them and projected market growth of 7.7% for 2015, more people find themselves deciding between their privacy and their survival.

While the choice may seem obvious (survival), we shouldn't have to completely sacrifice our security in the process. Unfortunately, many vendors don't feel the same. Their reluctance to address the vulnerabilities in their products may put the millions of people who use them at risk of having their information exposed, or much worse. People will naturally choose their lives over guarding their personal information, and so hospitals will continue to purchase such equipment. The companies have no incentive to improve; their concern ends once they turn a profit. It's a shame, but that's how the market works. Of course, we could use that same market to pressure for change, and the government has already begun taking this issue seriously. But is it worth it? How big are the risks?


Before we go there, we need to acknowledge the huge advantages of IMDs that are controlled wirelessly. IMDs include equipment such as pacemakers, defibrillators, neuro-stimulators, and drug delivery pumps like automatic insulin pumps. Today, they incorporate complex software and wireless capabilities such as Wi-Fi and Bluetooth communication in order to make the flow of information from the patient to medical staff more efficient. With this real-time information, doctors can adjust prescriptions, diagnose diseases, and regulate conditions from their computers without inconveniencing the patient with regular visits or more invasive procedures.

This all sounds like fantastic progress for patients and the medical field. But at the same time, the thought of having a device implanted in you that can be instantly, remotely monitored and programmed by another person is also, frankly, terrifying.

Potential Abuses

This cutting-edge technology that is saving lives could also end up costing lives. Studies show that most IMDs are vulnerable to hackers, malware, and other forms of cyber attack. Research by the DHS's Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) cited 300 medical devices from 40 companies that had unchangeable, and often simple, passwords. Upon cracking the passwords, hackers would have access not only to a wealth of medical and other personal data, such as Social Security numbers, but also to entire hospital networks. Although the FDA has released guidelines for bringing IMDs up to standard, they are not binding, and most manufacturers have yet to fix these issues. The problem is not just that they lack an incentive, but that there often isn't enough money or manpower: most vendors (80%) are small businesses or start-ups that can't afford security experts. In addition, many hospitals are still unaware of the issue because it is so new. Medical equipment has never been regulated for security, only for reliability, effectiveness, and safety.

But the concern goes beyond the exposure of private, individual information: hackers could potentially gain control over your physical body. And these hackers could be anyone from the government to international terrorists to a neighbor down the street.

Given the glaring security flaws of wireless IMDs, the delayed reaction of the industry, the number of patients using them, and the number of people who could hack the system, the potential for abuse is high.

What are the chances?

Is it far-fetched to think terrorists or governments could hack into IMDs? In 2007, Dick Cheney had the wireless feature of his implanted defibrillator disabled to prevent terrorists from reconfiguring it to kill him. If a hacked IMD could kill you, are you really still choosing between life and privacy? Yes. The chances of a hacker administering a lethal dose to a patient, for example, are quite low: targeted attacks are not so straightforward, though random ones are fairly easy. Most hackers are looking not to kill but to steal personal data, which they can use to commit financial fraud, identity theft, Medicare or Medicaid fraud, and the like. Stolen medical data is far more valuable than stolen credit card data, and its theft takes much longer to notice.


The odds being low is no excuse to leave personal information unprotected. The steps to guard our privacy can be as simple as placing some sort of removable memory inside the device. With more than 300,000 Americans receiving IMDs each year, the FDA should make its security requirements binding and more stringent.

However, with that many people relying on IMDs, the risks, though real, may not be significant enough to force vendors to raise their costs by securing their devices, thereby putting an already unaffordable healthcare procedure further out of reach. For now, sacrificing privacy seems like a small price to pay for your life.

I think a research effort, as opposed to copying material revealed by a first Google search, would have turned up my own organization's Killed By Code: Software Transparency in Implantable Medical Devices, which despite being five years old would have filled in some of the spaces for you.

The draft would be more effective if it were technically more tightly accurate: pacemaker calibration by RF does not involve data collection, for example, so the vulnerability that concerned the Secret Service (and us, as you see) is not at all the same as data leakage. You would have an easier time getting successfully to the big ideas if you weren't choked off early, for a knowledgeable reader, by looseness and inaccuracy in detail.

But the primary problem is the cyber-security focus, which as usual is a ridiculous over-concentration on the least prevalent failure modes. People die not because of malign hacking, but because of ordinary software failure, known to this trade too as "bugs." You might, in research mode, also have discovered my When Software is in Everything: Future Liability Nightmares Free Software Helps Avoid, also from 2010, and also still state of the art.



r2 - 26 Jul 2015 - 20:54:45 - EbenMoglen