This ‘digital copy’ is a very real and expanding entity that both represents you as a proxy and reveals more and more detailed aspects of your personal and private life. Shocking as that may sound, you are the very one feeding that copy, providing information freely and in large quantities. Every day, we trade our personal information (and our privacy) for things like ‘free’ email, faster product shipping, and social news feeds that connect us with friends and family. This very public copy is not going away anytime soon, so the question is: “Is that something that is helping or hindering you?”
Now what if I told you that you have another digital copy that has intimate knowledge of your physical and mental health? A more private copy that only gets to know and interact with your doctors and care providers. One whose quality of information can quite literally mean the difference between life and death? What if these two copies could interact without you knowing? Would you want your public and private copies sharing information freely? Perhaps just the private pulling information from the public? Maybe not at all. Then again, maybe that choice has already been made for you…
ITSPmagazine: You’re an information security professional in the healthcare industry. If you would, please describe to our readers what you do, what your role is, and how (or why) information security and privacy are top of mind for you.
Elrod: I'm a technologist, consultant, and organizational leader with over 30 years of experience, the last 20 of which have been primarily focused on privacy and information security. I work for a non-profit healthcare system in Northern California, and in my current role I am primarily responsible for ensuring the strategic alignment of the organization's privacy and information security requirements with the technical and operational assets used to achieve them.
ITSPmagazine: A few months back, we had what I found to be a fascinating conversation about privacy and ethics in the healthcare space and we've kept in touch over the last few months looking to dig deeper into this topic. And I am very pleased today to continue that conversation and explore it a bit more with you: looking at the value of information as a means to provide better care, while also dissecting some of the risks associated with having that data and the ethics associated with using that data. We have a bunch of fun stuff to talk about today, so perhaps you can start off with what types of data are collected to provide health care.
Elrod: In healthcare, we collect all sorts of data in the regular operation of the business. We collect what I would call the “3 Ps of custodial data”. Those would be: PHI (Protected Health Information), PII (Personally Identifiable Information) and PCI (Payment Card Industry) data. Put that all together and you are talking about some of the most highly regulated and sensitive types of information you can have in one place. We gather that data as a normal part of business.
From a protected health information standpoint, we get direct information from patients about how the person's feeling, the subjective experience about the effectiveness of treatment, health goals, health concerns. We also have physician reported information, test results and data from all sorts of clinical systems and biomedical devices. More recently, we are seeing a stream of other ‘health information’ that’s become available from social and commercial sources. That's where it starts to get interesting, because we begin to cross over into the topic of metadata and information mining.
ITSPmagazine: Different types of metadata might be where the people live, perhaps? How often they get treatment for a particular thing or see a doctor about a particular case? What types of other metadata might there be?
Elrod: Metadata is really data about the data. It enables things like search, definition, and governing functions around information sets. With that in mind we approach the concept of metadata as one of three categories: Administrative, Descriptive (which you were just talking about) or Structural.
Administrative metadata might be something that conveys an individual’s consent related to sharing information or not. Descriptive metadata covers the types of things that you were mentioning: dates of service, ZIP codes, or some type of uniquely identifying information. And finally, Structural metadata describes the data set itself and not the data elements within it: What's the history of that data? Where was it gathered? Who gathered it? Those types of things.
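The three categories Elrod describes can be sketched as a small data structure. This is a toy illustration with hypothetical field names, not drawn from any real healthcare system; it just shows how administrative metadata can drive a governing function while descriptive and structural metadata document the data and its provenance:

```python
from dataclasses import dataclass, field

@dataclass
class Metadata:
    # Administrative: governs use of the data, e.g. the patient's consent flags
    administrative: dict = field(default_factory=dict)
    # Descriptive: describes the data elements, e.g. dates of service, ZIP codes
    descriptive: dict = field(default_factory=dict)
    # Structural: describes the data set itself, e.g. source and provenance
    structural: dict = field(default_factory=dict)

record_meta = Metadata(
    administrative={"consent_to_share": False},
    descriptive={"date_of_service": "2017-03-14", "zip_code": "94203"},
    structural={"source": "infusion pump", "collected_by": "ICU ward 3"},
)

# A governing function keys off administrative metadata, never the data itself:
def may_share(meta: Metadata) -> bool:
    return meta.administrative.get("consent_to_share", False)

print(may_share(record_meta))  # False
```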
ITSPmagazine: I don't want to get too far ahead of ourselves here, but just for a scenario that we are leading up to… can we make predictive, calculated assumptions using these various data sets? Say, perhaps, we see an outbreak or the potential for a cure? But, based on the metadata, we see that the effects of the cure are best suited to low-income neighborhoods located in a specific region? Are we able to make those kinds of assumptions? Is it possible to look at the data to, perhaps, provide better care for a specific population that is at risk?
As we look at this, I’d like to point out that the origin of the data is also an interesting piece for me. In “the old days” everything was manually entered, if entered at all, into a computer system. I suspect a lot more of the data that we have on hand now is generated by ‘something’ rather than by ‘someone.’ So, what are some of the sources of information that we're collecting?
Elrod: In healthcare, the whole ‘Internet of Things’ includes all sorts of biomedical devices that could provide some type of information and data. As an example, data from an infusion pump or oxygen meter; something in a clinical setting. One also has to look at information gathered from outside sources. Nowadays, everybody has these fitness tracking things. You have it on your phone or you have it on your wrist or whatever. Maybe you are reporting information through an application about your diet or eating habits. We have a lot of potential information sources coming in and providing literally petabytes of data that could potentially be used. But that is where it starts to get a little bit interesting. At what point is it an appropriate, applicable use of that data when we see it?
Along those lines, if we were to get data from a fitness app for example (a commercial company that you have shared your data with, which then provides it to us) and it is transmitted to our systems, does it become part of your standard record, or is it some sort of ‘other’ data pool by which we construct a digital shadow image of a patient? Was that data your data, or did it belong to the fitness tracking company?
What if one of your digital copies had only half of the information needed to determine if you had a health condition and the other half of the information resided in the data of a different digital you? Wouldn’t it be great if they could just work together automatically and get a more complete picture of your health? You could have all that data working for you on a 24/7 basis looking out for your best interests and proactively letting you know about the risks and corrective actions available.
ITSPmagazine: You mention external data in the context of the medical device providing that data. I'm curious to know if it's possible now, or if you see it happening in the future, where other sources of data could be collected based on our activity online. Do you see a point where doctors or hospitals say, “Can we have access to your Facebook feed or your Twitter feed?” so they can look at your physical activity based on your social activity? These are two simple scenarios where the doctors, for example, may want this information to get a sense of how often somebody complains on Facebook that they have a migraine or that they woke up with back pain 3 months ago (not yesterday, like they said during the consultation). Yet, maybe it's more of an insurance thing than a healthcare thing that I am talking about here. But we are already starting to see that type of information coming into play, right? We are seeing this used to help provide better care, but also potentially feed into some of the other aspects of the healthcare ecosystem.
Elrod: Well that's a curious spot and you start to hit the fine line between the appropriate and ethical use of data, and possible invasion of privacy. I think we all have a perception of privacy in our lives. In the matter of health care and our well-being, we often have a different privacy expectation than we have in other areas. You mentioned Facebook, but it goes well beyond just one commercial organization. What if I can combine my data from multiple commercial organizations like Facebook, Google, Amazon or Apple for that matter? They're all collecting massive amounts of data on us all the time:
- What time of day we are shopping
- What we are searching for
- How many times we have complained about this or that
- Whether we have posted positive or negative views about this kind of restaurant or that sort of business
- Where we are going
- What time of the year we are spending money on gifts
- What kind of health-related information we are searching for and reading about
All of this is constructed into some sort of digital version of you that a commercial organization is relatively free to use how they see fit. We are willing to trade our privacy because we receive some sort of remuneration (like the use of a service). I'm giving you the data and you're giving me free email, or free searches on the Internet, or a free way to connect with my family and friends through a social newsfeed. All the data that we’re freely disclosing in those contexts, all the individual data points, are then being correlated and used.
They collected data and metadata about you, and they collected information freely given to them, information made public that they recorded; so who owns that information now? Who owns that digital version of you that's been collected and correlated by several different sources? Do you still own that? Case in point, I'll get on a well-known e-commerce site and they'll post some little ad up for me to purchase a widget. I say “Hey, wow, I never thought I'd be interested in that,” but curiously I seem to be. They knew to advertise that particular product to me because they have a shadow version of me, a digital version they use to determine whether I’m likely to purchase it or not. The more information they collect, the more precision and accuracy they have in knowing whether or not I’m going to buy it. The probability gets better, and eventually, I buy it.
In healthcare, it's a different world. We're making decisions on data that can literally impact the safety of patients. So one would think that we should do our best to have the most comprehensive data available about our patients in order to make the most accurate and precise decisions about their healthcare right? But that brings us back into that interesting place. Is it an ethical and appropriate use of data for healthcare systems to query the likes of Facebook, Amazon, and Google, about their patients? It is available for those outside of healthcare to use today, but for healthcare, where does that line get drawn? When is that okay for your doctor to do and when is it too invasive or possibly illegal?
ITSPmagazine: We assume that a line is maintained between the healthcare system and the Internet space in which we all operate. Certainly, a lot of people self-diagnose and self-treat themselves using search engines to find their symptoms and translate those into potential remedy without going to see a doctor, right?
Elrod: Sure, my wife and I joke about “Dr. Google”.
ITSPmagazine: Exactly. Your point is that all of that information is now available for advertising for pharmaceuticals or spa treatments or massage parlors, whatever there might be as a potential remedy. In one sense they become – rather, the Internet becomes – the Healthcare System, right?
Elrod: I can see a model where that is definitely starting to happen. Let’s consider a place where that collision seems to already be happening. Have you ever heard of these companies that do your genetic sequence for you? You send a sample in and they give you the results that say you might be related to these people, and this is your ethnic background, or this is what your genotype is. In healthcare, this type of genetic testing is commonly associated with what is called ‘Precision Medicine’, where the medical treatment is directed at, and customized for, an individual or specific population. An example might be using this type of testing and research into genetic markers to develop better drugs to fight disease.
ITSPmagazine: I'm assuming the end-user license agreement for those companies that you're sending the genetic information to gives them a ton of rights. You’re basically saying you can use that information to trace my ancestry but then also for “whatever else you want.” Do hospitals and the healthcare system ask for that same right, or is it strictly being collected to check for diabetes, for example? And what's the extent to which we can look at this data both legally and ethically? Or, worse yet, are they not bound by the same rules that would prevent them from sharing the information with other entities outside the US? What I'm trying to get at is, do these companies operating on the Internet have a better ability to use (and share) the data than the healthcare system since they are not bound by HIPAA regulations, for example, or am I stretching it too far?
Elrod: Exactly. In many cases they have much more flexibility in how they use the data. It is really two separate contexts. The Internet company / lab that does its genetic stuff and gives it back to you is not a healthcare company per se; they are a research company and a commercial organization. Read the agreements carefully, because when you sign up for one of those you actually are saying it's alright to do a lot of things with the data they come up with. You are consenting, in many cases, to allow that information to be used for research, whether it's for their own use, for profit, or for educational purposes. Now one would hope they are de-identifying that data so you have your privacy protected. But you are basically consenting to allow them to do all sorts of stuff, both for uses known today and for unknown future uses. When healthcare organizations collect samples, it is for the specific use for which they were taken. They don’t get to generalize that into genetic testing or any other sort of testing that you haven’t consented to. You have to specifically consent for its usage.
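The de-identification Elrod mentions can be illustrated with a toy sketch. The field names below are hypothetical, and this is only the simplest step: real-world methods (such as HIPAA's Safe Harbor approach) cover a much longer list of identifiers and also treat quasi-identifiers, which is exactly why a ZIP code surviving the scrub matters:

```python
# Direct identifiers to strip before data is used for research.
# Illustrative only; a real identifier list is far longer.
DIRECT_IDENTIFIERS = {"name", "address", "date_of_birth", "phone", "email"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

sample = {
    "name": "Jane Doe",
    "date_of_birth": "1980-05-01",
    "zip_code": "94203",          # a quasi-identifier: survives the scrub
    "genetic_marker": "BRCA1",
}

print(deidentify(sample))  # {'zip_code': '94203', 'genetic_marker': 'BRCA1'}
```

Note that the ZIP code remains: combined with other public data sets, quasi-identifiers like this are what make the re-identification dilemma discussed below possible in the first place.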
ITSPmagazine: Is that in the context of visiting the GP (general practitioner) or is it specifically tied to signing up for clinical trial related to a particular case or something else altogether?
Elrod: My understanding is that you would have to specifically consent to that information being used inside of a healthcare system. For instance, my blood should not go to the genetic lab at a particular healthcare system to be used as part of those efforts without my explicit consent. It is possible that I might be asked by my GP, “Hey look, would you be willing to participate or have your information utilized by our genetic lab to help cure X?” I guess you could then find yourself in that situation where you are being asked to extend the scope of the sharing of that info. But that is not something that's going on without your consent. Just because you had a blood draw doesn’t mean it was automatically sent over, genotyped, and put in a database somewhere to be used without your knowledge.
ITSPmagazine: Healthcare entities shouldn't be doing that, right?
Elrod: Correct, they really shouldn't be doing that without consent. But let's put a bit of a spin on this. Let’s say we find out through research that a specific gene marker (from samples that people agreed to give as long as their privacy and anonymity were secured) leads to cancer that can either be easily treated or completely avoided if you make a specific behavioral change. We notice that there are a couple of records in the anonymous set that have this marker. Do we try to re-identify the individuals whose information has been de-identified? Is it acceptable, from an ethical standpoint, to do that, especially if the person specifically wanted to remain anonymous? When is it a good idea? I don't have an answer to this: an effort where you know you could save someone's life, or you could positively serve them, but they requested the data be anonymized. They don't want you to know who they are; that was their agreement for allowing their data to be used. Your access to it was dependent on that factor. How do we then balance the benefits of data science in healthcare against the privacy needs of the individual? And what if I found something like this, would it be okay for me to go and pay one of these internet companies to know who is in their entire database? Who in their database actually has this marker and needs to know this information? There's a blurring right there. In that instance, I'm going from the healthcare context into that commercial or public space. Who gets to make that decision when the owner wants to remain anonymous in one context but gave that data openly in another? I want to be able to get this potentially lifesaving information out. What do I do? That's a big dilemma, because there’s the potential for such a positive outcome.
What if the digital copies that had the data worked together and made that information available for anyone to use, for the right price? Not so much to steal your identity, but to reveal everything about your identity. Things that may be good or even lifesaving in one context, but not so good in another.
It reminds me of the movie GATTACA from the late 90s. It's interesting and has some relevance here.
It's a movie that talks about genetic discrimination; it was definitely science fiction then, but not so much now. In the movie, it's illegal to do genetic profiling on individuals and to use that information to discriminate in your decision making. But it's done anyway, as the answers are there to purchase. This is where we start talking about the potential misuse of this kind of data and metadata. Picking on genotype profiling, it might be that certain markers in people mean they may be more prone to certain kinds of chronic disease symptoms. As such, they'll take more time off work and they'll also be a higher insurance risk. One of those types of situations, right? In the movie, if I'm recalling correctly, it has become a means of systematic discrimination. You would not be hired for this particular role or job because you wouldn't be the perfect match, or you would be an increased liability. Like some version of eugenics. It was taken to an extreme in the movie because that data was accessible publicly, or they could easily purchase it even though they weren't supposed to.
Using genetic markers this way means that now we're talking about the possible correlations with non-health related factors like ‘they have a higher percentage of experiencing these kinds of diseases ergo they will be taking more time off of work’. The derived metadata set would affect their employability for certain positions or jobs. Maybe it would change their insurance rates negatively. Maybe it would even change the opinion people have of them in the general public sense; judging people based on genetic data, and deciding whether it's a good idea to be in a relationship with them because of a potential disease or problems down the road for possible children. Along the same lines, you're looking at things like blackmail and profiling of people too; using it for some kind of criminal act or purpose. There are a lot of gray areas here, because there are always two sides to every story. Lots of potential good but also the potential for bad in there.
ITSPmagazine: You bring up an interesting point; how do we maintain privacy and balance that with providing better care? Hopefully, at some point, we all agree on what’s ethical and legal and play by the rules. However, there's probably some gray area that gets crossed. And, all of this assumes that people are playing by those rules, whatever they are. This is not the case in many situations, I’m sure.
Elrod: Well, we're certainly not all playing the same game because, depending on the context, completely different rules apply. For example, I can post a bunch of information about myself: my name, my address, my date of birth, and that I'm going to the doctor because I broke my collarbone skiing. I can post this on a social networking site, write a blog about it, whatever I want. All that information is PHI. But it’s my PHI so I can put it out there as I see fit. Once I call my doctor or healthcare organization, even though I publicly put it out there, there’s a different set of rules.
If I were to provide that same information to my doctor or he read it from my public post, then he could not use it in the same way that the social networking or search company could.
All that said, I think we're going to see some changes in the near future on the concept of privacy and what's ethical and appropriate use. It will happen culturally first, before we see any kind of legal changes, and I notice it happening now. My opinion of robust privacy and what that means is different than my kids'. What they're willing to share and what they're okay with having others see – what they're OK with other entities using – is much different than mine. You can call it a generational thing, but it's also a connectivity thing too. That's probably another conversation altogether though, Sean.
ITSPmagazine: Yeah, another conversation, most definitely. As society moves forward, I think what we’re comfortable with today will look very different tomorrow. I was talking with another gentleman recently on a similar topic, and he said: 20 years ago, if somebody told you, “We would like to hand you this tracking device. You can keep it in your pocket or your purse, and you will carry it around with you 24/7. You can even leave it next to your bed when you sleep. It will know where you're at, all the time, and will know when you wake up and go to sleep,” would you have been comfortable with that? Most would probably say “no way.” But yet, here we are. Our iPhones and Android devices do those very things, and we are “OK” with it.
Elrod: Exactly! I finally got creeped out enough when my phone kept telling me where I parked my car. I got a notification: “You parked your car in the street” then an hour later when I moved it again, I got a “You parked your car in the garage”. My very next thought was ‘How about I turn location services off’. So, you're right. Convenience and privacy. Convenience and compliance. We've definitely got some work to do there.
ITSPmagazine: Absolutely. We're all about raising awareness at ITSPmagazine, which I think we've done in this conversation. I don't know that we did too much to create a bunch of fear; that's certainly not our goal – rather, we want to educate and raise awareness without frightening folks too much. With that said, is there anything that individuals or consumers should be aware of given the points we’ve just made?
Elrod: There's a lot of great advances that can come out of all this data but there is also a new cultural norm around privacy in a hyper-connected world that we need to figure out. We need to find what the balance is because bad guys will always do bad things but that shouldn't stop the good guys from doing good things.
ITSPmagazine: Exactly, I had a chat the other day with Bill Cheswick, one of the “fathers of the firewall.” My initial instinct of how he would operate as a person (not as a security professional) would be that he would be fairly paranoid and cautiously approach new technologies. He had the other extreme position, however, where he told me “I'm so interested to see how far things can go, that I use this stuff with open arms.” And I said “do you not even change the default password?” He replied (paraphrasing) “No, I just segment my network. Beyond that, I just put the stuff on and see how far I can take it and what extreme benefit and value can I get from these new technologies, without any filters or limitations.” That's a completely different viewpoint than what I expected to hear from him. But to his point, there’s so much possible and I think we do need to keep our eyes open so we can explore, enjoy and ultimately benefit from it all.
Elrod: I have a similar opinion. As security professionals, we have to ask ourselves, “What do the bad guys do?” and respond accordingly as the good guys. Just like that first question you asked me about keeping privacy and information security top of mind, it's a methodology and a way of thinking. I know that there is some usefulness in all of these tools. I want to embrace them and use them and find that balance personally in my life. I want to take advantage of these things but do it in a secure and ethical fashion.
You want to make sure that we encourage innovation, but we do so in a way in which we are walking forward into things with eyes wide open. I want to be able to use all the things we’ve talked about. If there was a genetic marker for me that could inform me there was something that I could do today to stop contracting some kind of chronic disease later, I would want to know that. But I don't necessarily want my insurers, employers, my friends or the general public to know that about me.
Ultimately, I think privacy controls need to be turned back over to the individual, not to these companies or any sort of outside entity. Privacy and security are neither mutually exclusive, nor should they be seen as impediments to progress. We need systems built to know when anybody uses any of our data. Perhaps some sort of public or private blockchain where it's encrypted and recorded. Understanding who's using (or requesting to use) my data, and then I as an individual get to either grant or revoke permission to do that. I think there's a model there, especially with some of the new technologies currently evolving. Finding the balance between simplicity of interface while addressing the complexity of the problem behind the scenes. On a case-by-case basis, we can start to make that privacy decision intelligently, or at least more consciously, regardless of whether it's a health organization or not. There are solutions to be had out there, and I want to encourage that. I have high hopes. This is by no means a doom and gloom conversation.
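The model Elrod sketches can be made concrete with a toy consent ledger: an append-only, hash-chained log of access events, with grant and revoke decisions held by the individual. Everything here is hypothetical illustration, not a real blockchain or any real product, but it captures the idea that every use (and attempted use) of your data is recorded and auditable:

```python
import hashlib
import json

class ConsentLedger:
    """Toy sketch: individual-controlled consent with a tamper-evident log."""

    def __init__(self):
        self.entries = []   # hash-chained log of every consent and access event
        self.grants = set() # requesters the individual currently permits

    def _append(self, event: dict) -> None:
        # Chain each entry to the previous one so history can't be quietly edited.
        event["prev"] = self.entries[-1]["hash"] if self.entries else "0" * 64
        event["hash"] = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(event)

    def grant(self, requester: str) -> None:
        self.grants.add(requester)
        self._append({"type": "grant", "who": requester})

    def revoke(self, requester: str) -> None:
        self.grants.discard(requester)
        self._append({"type": "revoke", "who": requester})

    def request_access(self, requester: str) -> bool:
        # Every request is logged, whether or not it is allowed.
        allowed = requester in self.grants
        self._append({"type": "access", "who": requester, "allowed": allowed})
        return allowed

ledger = ConsentLedger()
ledger.grant("my_hospital")
print(ledger.request_access("my_hospital"))  # True
print(ledger.request_access("ad_network"))   # False, and the attempt is logged
ledger.revoke("my_hospital")
print(ledger.request_access("my_hospital"))  # False, after revocation
```

The design choice worth noting is that revocation is an event, not a deletion: the individual can withdraw consent at any time, and the full history of who asked, when, and with what outcome remains verifiable.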
ITSPmagazine: Definitely not doom and gloom. There are lots of fantastic things on the horizon. I think our lives will continue to improve and we are going to have to learn along the way; understanding what some of the risks are that accompany the benefits. Hopefully the benefits will outweigh the risks.
Elrod: As we push that knowledge, that capability, and that type of control to everyone across the board, that is what's really going to kick things into high gear.
ITSPmagazine: Jason, this has been a fantastic dialogue. I’m really glad we had a chance to do this. I know we've been talking about it for a while and I’m very happy that we got the chance to have this conversation and hopefully our audience enjoyed it. I think we’ve keyed up at least another couple conversations in the process. Perhaps we can find some more time in the near future to touch on some of those additional topics.
Elrod: Definitely. Thank you very much for having me Sean.
About Jason Elrod
Jason Elrod is a senior technologist and organizational leader with over 30 years of experience. For the last 20 years, Jason has focused primarily on strategic architecture, privacy, and information security. Currently, Jason is the Chief Information Security Architect for Sutter Health where he is responsible for ensuring the enterprise-wide alignment of technical solutions with the privacy and information security requirements of the organization.