By Dr. John Torous
Image courtesy of Flickr user Integrated Change.
We often hear much about the potential of digital health to revolutionize medicine and transform care – but less about the risks and harms associated with the same technology-based monitoring and care. “It’s a smartphone app … how much harm can it really cause?” is a common thought today, but also the starting point for a deeper conversation. That conversation is increasingly happening at Institutional Review Boards (IRBs) as they are faced with an expanding number of research protocols featuring digital- and smartphone-based technologies.
In our article, “Assessment of Risk Associated with Digital and Smartphone Health Research: a New Challenge for IRBs,” published in the Journal of Technology in Behavioral Science [1], we explore the evolving ethical challenges in evaluating digital health risk, and here we expand on them. Risk and harm in our 21st-century digital era are themselves evolving concepts that shift with both technology and societal norms, so how do we quantify them to help IRBs make safe and ethical decisions regarding clinical research?
A first step is to consider the baseline risk of any online or connected technology. Take, for example, privacy. In countries like the United States, internet service providers can now legally collect and sell users’ web browsing history without consent [2]. Popular websites such as Facebook may at times track users even when they are logged out, or even if they never signed up [3]. Uses of this digital data run the gamut from targeted advertising to police subpoenas of Fitbit and smartphone data for criminal prosecution [4]. With so much personal data already being collected in everyday life as the price of admission for today’s online services, what qualifies as high- or low-risk digital data collection in a clinical study, or even in everyday life? In Europe, this question led to the new General Data Protection Regulation (GDPR), which took effect on May 25, 2018 [5] and set strict, enforceable standards for online privacy, the right to be forgotten, data portability, data access, and breach notification. Whether other countries will follow with legislation similar to the GDPR remains to be seen, but until then, assessing privacy risk remains challenging for IRBs, researchers, and the public.
Image courtesy of Pixabay.
While privacy is the chief risk considered today in online-, sensor-, and smartphone-based research, as these tools develop, so does their potential to create new risks. Three other types of risk that are important to consider are physical, psychological, and financial. In the published literature, there are surprisingly few reported cases of physical harm resulting from smartphone-based research studies, which may in part reflect that most studies today focus on monitoring or on lower-risk lifestyle interventions. There is also a paucity of data on psychological harms from smartphone-based studies, or on how people may react to being closely monitored, for example via GPS on their smartphones. Likewise, little has been reported on financial risks associated with inadvertent disclosure of digital data collected by sensors and smartphones. This is not to say these risks are minimal, but rather that, as a field, we need to better study and quantify these risks and their magnitude of harm. Without good data, it is challenging for IRBs to make informed decisions about studies, and equally challenging for research participants to make informed decisions about joining them.
Further considerations more specific to digital health studies include assessing technology literacy and bystander risk. While terms like ‘GPS,’ ‘anonymized data,’ and ‘hashing’ are frequently used in informed consent documents for smartphone studies, it is important to ensure that those signing informed consent actually understand what these words mean. Do you know the difference between de-identified and anonymized data? (The sketch after this paragraph illustrates the distinction.) There is some research suggesting that those with lower health literacy may also be prone to assuming that health technologies like smartphone apps are safer and more secure than they actually are [6]. This raises the prospect of a new digital divide based not on access to technology, but on understanding its risks and using it equitably. Yet another risk to consider, one that rarely arises in classical clinical research but appears more frequently in digital technology studies, is bystander risk: voice recordings may capture other voices in the vicinity, Bluetooth monitoring will record information about nearby smartphones, and cameras may capture an entire scene with others in it.
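To make that distinction concrete, here is a minimal Python sketch of the kind of hashing often named in consent documents; the salt and participant ID are hypothetical. The point it illustrates: a salted hash de-identifies a record (a pseudonym replaces the identifier), but it does not anonymize it, because anyone who holds the salt can recompute the hash and re-link the record to the person.

import hashlib

SALT = b"study-specific-secret"  # hypothetical per-study salt

def pseudonymize(participant_id: str) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256(SALT + participant_id.encode()).hexdigest()

# The stored record carries no name or raw ID...
record = {"id": pseudonymize("participant-042"), "daily_steps": 8211}

# ...but anyone holding the salt can test whether a known person
# matches a stored record, so the data remain re-linkable.
assert record["id"] == pseudonymize("participant-042")

Truly anonymized data, by contrast, would admit no such re-linkage even by the study team, which is why the two terms should never be used interchangeably in consent language.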
Image courtesy of Wikimedia Commons.
Putting it all together, the model below seeks to guide IRBs through the different types of risk as well as ways to mitigate them. While this model is not designed to be comprehensive for every type of digital health study or every clinical population, the basic themes and examples provided may help guide informed decision making.
Recognizing risks in digital health studies is not an exercise in hindering research, but rather a pathway to mitigating risk and helping ensure safer and better studies. For example, privacy, often the largest risk, is frequently the easiest to mitigate with appropriate encryption and security protocols (a minimal sketch follows this paragraph). Ensuring that informed consent language is appropriate for those who are less technology-literate can help them better understand a study and participate more meaningfully. Communities like Connected and Open Research Ethics (CORE) offer free and easy access to support and online forums where researchers, IRBs, and anyone else can ask questions and receive answers on digital health ethics. The Neuroethics Blog you are reading right now also offers a wealth of relevant posts to help guide ethical decision making in this digital era. But perhaps the best resource of all remains an open mind willing to explore not only the benefits of digital technology but also its risks, bringing both sides together for more informed decision making.
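As one illustration of how straightforward that mitigation can be, here is a minimal Python sketch of encrypting a sensor reading at rest using the third-party cryptography package, one reasonable choice among several. The payload is hypothetical, and a real study would manage keys through institutional infrastructure rather than generating them inline.

from cryptography.fernet import Fernet

# Generate a symmetric key; in practice it belongs in a key vault,
# never stored alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# A hypothetical sensor reading, serialized as bytes.
sensor_reading = b'{"gps": [42.34, -71.10], "timestamp": 1530000000}'

# Encrypt before storing or transmitting; the ciphertext is safe to
# keep on a phone or server, and only key holders can recover it.
token = fernet.encrypt(sensor_reading)
assert fernet.decrypt(token) == sensor_reading

A few lines like these do not make a study safe by themselves, but they show that the most commonly cited risk often has the most tractable technical remedy.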
_______________
John Torous, MD, is director of the digital psychiatry division at Beth Israel Deaconess Medical Center, Harvard Medical School. As a board-certified psychiatrist with a background in computer science and clinical informatics, his research on smartphone apps and sensors for predicting relapse in serious mental illnesses like schizophrenia bridges engineering and clinical care. In 2017, Dr. Torous was awarded the Carol Davis Ethics Award by the American Psychiatric Association, an annual award for ethics and mental health work.
References
1. Torous J, Roberts LW. Assessment of Risk Associated with Digital and Smartphone Health Research: a New Challenge for Institutional Review Boards. Journal of Technology in Behavioral Science. 2018:1-5.
2. Federal Communications Commission. FCC releases proposed rules to protect broadband consumer privacy. 2016. https://www.fcc.gov/document/fcc-releases-proposed-rules-protect-broadband-consumer-privacy
3. Kantrowitz A. Here’s how Facebook tracks you when you’re not on Facebook. BuzzFeed News. https://www.buzzfeed.com/alexkantrowitz/heres-how-facebook-tracks-you-when-youre-not-on-facebook
4. Commit a crime? Your Fitbit, key fob or pacemaker could snitch on you. The Washington Post. 2017. https://www.washingtonpost.com/local/public-safety/commit-a-crime-your-fitbit-key-fob-or-pacemaker-could-snitch-on-you/2017/10/09/f35a4f30-8f50-11e7-8df5-c2e5cf46c1e2_story.html
5. EU General Data Protection Regulation (GDPR). https://www.eugdpr.org/
6. Mackert M, Mabry-Flynn A, Champlin S, Donovan EE, Pounders K. Health literacy and health information technology adoption: the potential for a new digital divide. Journal of Medical Internet Research. 2016;18(10).
Want to cite this post?
Torous, J. (2018). Exploring the Risks of Digital Health Research: Towards a Pragmatic Framework. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2018/07/exploring-risks-of-digital-health.html