
Tuesday, April 24, 2018

The Effects of Neuroscientific Framing on Legal Decision Making




By Corey H. Allen







Corey Allen is a graduate research fellow in the Georgia State University Neuroscience and Philosophy departments with a concentration in Neuroethics. He is a member of the Cooperation, Conflict, and Cognition Lab, and his research investigates (1) the ethical and legal implications of neuropredictive models of high-risk behavior, (2) the role of consciousness in attributions of moral agency, and (3) the impact of neurobiological explanations in legal and moral decision making.





More than ever, an extraordinary number of up-and-coming companies are jumping to attach the prefix "neuro" to their products. In many cases, this "neurobabble" is inadequate and irrelevant, serving only to take advantage of the public's preconceptions about the term. This hasty neuroscientific framing doesn't stop with marketing but instead creeps into public and legal discourse surrounding action and responsibility. This leads to the question: does framing an issue as "neuroscientific" change the perceptions of and reactions to that issue? This question, especially in the realm of legal decision making, is the focus of ongoing research by Eyal Aharoni, Jennifer Blumenthal-Barby, Gidon Felsen, Karina Vold, and me, with the support of Duke University and the John Templeton Foundation. With backgrounds ranging from psychology and philosophy to neuroscience and neuroethics, our team employs a multi-disciplinary approach to probe the effects of neuroscientific framing on public perceptions of legal evidence, as well as the ethical issues surrounding such effects.




The Power of “Neuro”






When it comes to public perception, neuroscientific information seems to sow more confusion than clarity. For example, the inclusion of irrelevant neuroscientific information (such as complex-sounding brain areas) clouds a person's ability to tell good explanations from bad ones and increases satisfaction with those bad explanations (1). Similarly, the inclusion of brain images alongside scientific argumentation increases perceptions of scientific reasoning above and beyond other graphical representations of the same data, though it is worth noting that these results have been contested (2 & 3). Regardless, neuroscientific explanations seem to possess a "seductive allure" that other modes of explanation lack (1). But this seductive allure does not stop at our perceptions; it also extends to how people behave and react to them. Please also see previous posts on this topic here.








Image courtesy of Wikimedia Commons.

Typically, research on the behavioral implications of neuroscientific discourse focuses on an actor's moral responsibility and individual control over his/her future actions. This focus likely stems from laypeople's proclivity to make (potentially faulty) assumptions regarding what caused an act and how much control the actor had in that process (4). In other words, painting a picture in which the actor's brain activity arises prior to "them" realizing it leads to the assumption that his/her actions are nothing more than a middleman in a larger causal chain and are, therefore, events that "they" have no control over. For example, Vohs & Schooler (2008) found that when participants were exposed to a deterministic message (i.e., an argument that behavior is a direct result of genetic and environmental factors and, thus, unchangeable), they were more likely to cheat on a task, presumably because they saw themselves as unable to do otherwise (5). Furthermore, neuroscientific discourse can affect how individuals treat others. When exposed to neuroscientific information (whether an entire semester of an introductory course in neuroscience or just a magazine blurb), people are less likely to be overly punitive when deciding how to respond to an individual's bad actions (6). Seemingly, due to assumptions about causality, we are more likely to see others' actions as being caused by their brains instead of by "them," leading to the intuition that they are less morally responsible for their bad actions and therefore less deserving of punishment.





Neuroscience and Criminal Sentencing





One realm in which this research is especially pertinent is criminal sentencing in the courtroom. Given that the use of neuroscientific explanations and evidence in criminal sentencing has been steadily rising, the question of the seductive allure of neuroscience in the courtroom has already been posed frequently in the literature (7, 8, 9, & 10). In particular, there is a focus on what is commonly referred to as "the double-edged sword" of neuroscientific evidence; that is, neuroscientific evidence has the ability either to mitigate or to aggravate punishment depending on how the argument is framed. For example, explaining away a criminal action by positing that it was the offender's brain that made him/her do it has the potential to decrease punitive sentences for that crime. On the other hand, arguing that an offender is "unable to do otherwise" because of his/her brain paints a picture of an offender who cannot control his/her actions. Offenders who are unable to control their actions pose a larger future danger to society, and this increased danger has the potential to bring with it the intuition that the offender needs to be incapacitated for a longer time in order to protect society.








Image courtesy of Flickr.

While most experiments examining the effects of neuroscientific explanations in courtroom settings have found modestly mitigating (if any) effects on punishment (8, 9, & 10), two studies in particular, by Aspinwall (2012) and Fuss (2015), give reason to think that the "double-edged sword" of neuroscientific evidence is more than just theoretical (11 & 12). These studies examined the effect of neuroscientific explanations on American and German judges, respectively. Both found similar mitigating effects: the inclusion of a biological defense in the trial decreased attributions of legal responsibility and, furthermore, decreased the recommended punishment. But both studies also found potentially aggravating effects. Aspinwall, for example, found that as mitigating post-hoc rationalizations (such as decreases in perceived moral culpability) increased with the introduction of a biological defense, so did mentions of balancing this effect against aggravating rationalizations (taking into account the offender's future danger to society). Similarly, Fuss reported an increase in judges' recommendations for involuntary civil commitment. Though these results are certainly indicative and supportive of the notion of the "double-edged sword," they rely heavily on analyses of open-ended questions rather than on direct punitive and incapacitative measures.





Our Research on the Double-Edge Effect 





Because of the mixed results in the literature, we wondered whether failures to detect the "double-edge" effect were due to how punishment was being measured. A key assumption of the "double-edge" effect seems to be that mitigation and aggravation are driven by very different motives—one concerned with the moral responsibility of the offender (mitigation) and the other concerned with the future consequences of letting a potentially dangerous offender back into society (aggravation). We worried that prison sentences confound these motives and mask the "double-edge" effect, so we aimed to test it by including multiple measurements. In this research, we used an experimental vignette method in which participants read a case summary of an offender with an impulse control disorder who was found guilty of sexual assault. They were then asked to assume the role of the judge overseeing the case and recommend a prison sentence for the offender. Participants were also given the opportunity to recommend that the offender be involuntarily hospitalized instead of, or in addition to, the prison sentence. This measure served as a way to incapacitate the offender (due to his/her dangerousness) without necessarily doing so for punitive reasons.





Within these experimental vignettes, we manipulated two main aspects of the offender's story: 1) whether the offender's impulse control disorder was caused by behavioral or neurobiological factors, and 2) whether this disorder was found to be treatable or untreatable. Alongside these manipulations, we also included a control condition in which the offender was completely healthy. In line with the notion of the double-edged sword, we hypothesized that neurobiological evidence, compared to behavioral evidence, would reduce recommended prison sentences and, inversely, increase recommended time in involuntary hospitalization. This is exactly what we found: neuroscientific evidence decreased prison sentences (Figure 1) and increased involuntary hospitalization (Figure 2). We also found that both prison sentence recommendations (Figure 1) and involuntary hospitalization terms (Figure 2) were greater when the disorder was described as untreatable as opposed to treatable.
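For readers who want the design's logic laid out concretely, here is a minimal simulation sketch in Python of a 2 (evidence type) x 2 (treatability) between-subjects vignette design plus a healthy-offender control. Every number in it (cell size, baseline terms, and the per-condition shifts) is a hypothetical placeholder chosen only to reproduce the qualitative pattern described above; none of these values are estimates from our data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# 2 (evidence: neurobiological vs. behavioral) x 2 (disorder: treatable vs.
# untreatable) between-subjects design, plus a healthy-offender control.
# Shifts are (prison years, hospitalization years) relative to the control
# baseline; all values are illustrative placeholders, not study estimates.
baseline = {"prison": 10.0, "hospital": 2.0}
shifts = {
    ("neurobiological", "untreatable"): (-1.5, +2.0),
    ("neurobiological", "treatable"):   (-2.5, +1.0),
    ("behavioral",      "untreatable"): (-0.5, +0.5),
    ("behavioral",      "treatable"):   (-1.0, +0.2),
    ("control",         "none"):        ( 0.0,  0.0),
}

rows = []
for (evidence, disorder), (d_prison, d_hosp) in shifts.items():
    for _ in range(200):  # 200 hypothetical participants per cell
        rows.append({
            "evidence": evidence,
            "disorder": disorder,
            "prison_years": max(0.0, rng.normal(baseline["prison"] + d_prison, 3.0)),
            "hospital_years": max(0.0, rng.normal(baseline["hospital"] + d_hosp, 1.5)),
        })

df = pd.DataFrame(rows)

# Cell means show the double-edge pattern: neurobiological evidence lowers
# recommended prison time but raises involuntary-hospitalization time, and
# untreatable disorders raise both measures relative to treatable ones.
print(df.groupby(["evidence", "disorder"])[["prison_years", "hospital_years"]]
        .mean().round(2))
```

The point of separating the two outcome columns is exactly the point of the study design: averaging them into a single "sentence" measure would let the mitigating and aggravating shifts cancel out.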







Figure 1. Neurobiological evidence mitigated sentences relative to behavioral evidence and no evidence. Similarly, treatable conditions evoked significantly shorter sentences than untreatable conditions.







Figure 2. Neurobiological evidence increased involuntary hospitalization terms relative to behavioral evidence and no evidence. Similarly, treatable conditions evoked significantly shorter involuntary hospitalization terms than untreatable conditions.




What do our results suggest? 





While neuroscientific evidence certainly has the capacity to mitigate criminal sentences, as previous studies have shown, that may not be the whole story. Our research suggests that when people are given the option of involuntary hospitalization in addition to prison time, they are able to manage the offender's future dangerousness without resorting to the "one-stop shop" of retributive prison sentencing. In other words, when prison time is the only option available, our pluralistic motivations, and thus the potentially mitigating influence of neuroscientific evidence, may be obscured. This sensitivity to neuroscientific framing may arise only when individuals can choose among sentencing options with distinct functions.





Our results imply that sentencing decisions may be susceptible to how evidence is framed. If this framing effect turns out to be both prominent and replicable, many ethical and legal issues arise with it. Though it is not my intention to address these issues here in the depth they warrant, two points are worth making: 1) more empirical research is needed to bolster the theoretical claims regarding the ethical and legal issues of neuroscientific evidence in the courtroom, and 2) this type of research can play an important role in educating judges about the influences of framing, as well as in addressing misconceptions about what neuroscience can and cannot tell us with respect to questions of causation and control.





By applying the experimental method to the theoretical claims that have arisen in the popular media regarding the potential effects of neuroscientific evidence and framing within the courtroom, this line of research serves to scrutinize these claims and either substantiate or dispel them. In doing so, both the accuracy and the precision of these concerns increase, leaving the public, future researchers, and legal scholars better equipped to address certain susceptibilities within the sentencing process. For example, if this framing effect is truly present only when punitive motives are separated out and represented as different sentencing options, then concerns about framing might be misplaced when a judge can only consider time in prison. On the other hand, if a judge is considering civil commitment, involuntary hospitalization, or supervised probation, these framing effects become incredibly important and pertinent. In this case, the framing of the offense can potentially alter not only the offender's quality of life but also his/her access to resources necessary for successful rehabilitation.








Image courtesy of Wikimedia Commons.

Further down the line, this research can also educate the judges making these decisions about how certain framing effects might alter their recommended sentences. This education doesn't stop with the effects of framing on sentencing but also includes more fine-grained effects on notions of moral and legal responsibility, causation, and culpability. Though the jury is still out on how well such education would curb these framing susceptibilities, increased recognition of, and interest in, these effects has the potential to inform the creation of additional safeguards within the legal system – safeguards designed to better balance the inherent tension between offenders' rights and public safety.







References 





(1) Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470–477. 





(2) McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition, 107(1), 343–352. 





(3) Farah, M. J., & Hook, C. J. (2013). The seductive allure of "seductive allure." Perspectives on Psychological Science, 8(1), 88–90.





(4) Nahmias, E. (2011). Intuitions about Free Will, Determinism, and Bypassing. The Oxford Handbook on Free Will 2nd Edition, 555–575.  





(5) Vohs, K. D., & Schooler, J. W. (2008). The value of believing in free will: Encouraging a belief in determinism increases cheating. Psychological Science, 19(1), 49–54. 





(6) Shariff, A. F., Greene, J. D., Karremans, J. C., Luguri, J. B., Clark, C. J., Schooler, J. W., ... & Vohs, K. D. (2014). Free will and punishment: A mechanistic view of human nature reduces retribution. Psychological Science, 25(8), 1563–1570.





(7) Farahany, N. A. (2016). Neuroscience and behavioral genetics in US criminal law: an empirical analysis. Journal of Law and the Biosciences, 2(3), 485-509. 





(8) Greene, E., & Cahill, B. S. (2011). Effects of neuroimaging evidence on mock juror decision making. Behavioral Sciences & the Law, 30(3), 280–296.





(9) Saks, M. J., Schweitzer, N. J., Aharoni, E., & Kiehl, K. A. (2014). The impact of neuroimages in the sentencing phase of capital trials. Journal of Empirical Legal Studies, 11(1), 105–131.





(10) Schweitzer, N. J., & Saks, M. J. (2011). Neuroimage evidence and the insanity defense. Behavioral Sciences & the Law, 29(4), 592–607.





(11) Aspinwall, L. G., Brown, T. R., & Tabery, J. (2012). The double-edged sword: Does biomechanism increase or decrease judges' sentencing of psychopaths? Science, 337(6096), 846–849.





(12) Fuss, J., Dressing, H., & Briken, P. (2015). Neurogenetic evidence in the courtroom: A randomised controlled trial with German judges. Journal of Medical Genetics, 52(11), 730–737.






Want to cite this post?



Allen, C. (2018). The Effects of Neuroscientific Framing on Legal Decision Making. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/04/the-effects-of-neuroscientific-framing.html

Tuesday, April 17, 2018

The Fake News Effect in Biomedicine


By Robert T. Thibault




Robert Thibault is interested in expediting scientific discoveries through efficient research practices. Throughout his PhD in the Integrated Program in Neuroscience at McGill University, he has established himself as a leading critical voice in the field of neurofeedback and published on the topic in Lancet Psychiatry, Brain, American Psychologist, and NeuroImage among other journals. He is currently finalizing an edited volume with Dr. Amir Raz, tentatively entitled “Casting light on the Dark Side of Brain Imaging,” slated for release through Academic Press in early 2019. 





We all hate being deceived. That feeling when we realize the “health specialist” who took our money was nothing more than a smooth-talking quack. When that politician we voted for never really planned to implement their platform. Or when that caller who took our bank information turned out to be a fraud. 





These deceptions share a common theme—the deceiver is easy to identify and even easier to resent. Once we understand what happened and who to blame, we’re unlikely to be misled by such chicanery again. 





But what if the perpetrator is more difficult to identify? What if they are someone we have a particular affection for? Can we maintain the same objectivity? 





What if the deceiver is you? 




In the case of self-deception, a different set of rules seems to apply. Self-deception is rarely deliberate and generally well-intentioned; it often stems from common cognitive biases and remains difficult to recognize. In this post, I discuss self-deception in the context of biomedical research. More specifically, I argue that researchers and practitioners can deceive themselves by clinging to promising seminal findings, overlooking emerging data, and in turn, believing an effect is present when it is not.





The cognitive biases that misdirect us are well established. For example, when people are presented with new information that contradicts their folk understanding of the world, they tend to "quietly exempt themselves" from the general conclusions (1). In other words, if we don't like the experimental results, we easily ignore them. This tendency is an example of confirmation bias.





In a similar vein, experimenters have shown that the first information we hear on a particular topic often holds more weight than subsequent data. To test this concept, researchers provided participants with a script—helping them establish an initial belief—and later revealed that the script contained only false information (2). Nonetheless, participants continued to answer questions as if the initial script held some truth. This study, and others like it, depict what psychologists call the primacy bias or continued influence effect (3).







Image courtesy of dimland.



How, you may ask, are these biases relevant to biomedicine? Take, for example, well-established treatments like antidepressants for depression, knee surgery for arthritis, acupuncture for lower back pain, neurofeedback for attention deficits, and even implanting tubular supports (stents) into coronary arteries for chest pain. In all of these cases, robust randomized controlled trials or meta-analyses reveal that these treatments provide little clinical benefit above and beyond placebo effects (4–8). Nonetheless, one in eight Americans continues to take antidepressants (9), surgeons perform up to a million arthroscopic knee surgeries every year (10), over 14 million people have undergone acupuncture (11), thousands of neurofeedback practitioners continue to read brainwaves, and doctors implant hundreds of thousands of coronary stents annually.





Of course, we like the idea that these treatments work through the presumed biological mechanisms (driving a confirmation bias) and we were probably first exposed to data suggesting they do (promoting a primacy bias). So now that conflicting, and notably stronger, evidence comes out against our original beliefs, we find the new conclusions difficult to swallow. Undoing our biases is hard, but the stakes are high. 





How did we get here? 





The overrepresentation of positive results in the published literature (i.e., publication bias) likely contributes to the confusion surrounding the evidence for many biomedical treatments. When analyzing antidepressant research, for example, scientists were looking at only a biased subset of the data until 2002, when Irving Kirsch submitted a Freedom of Information Act request and meta-analyzed all published and unpublished data together. This analysis revealed that antidepressants modestly outperformed placebos in terms of statistical significance but carried little additional clinical benefit. A recent meta-analysis found comparable results (12). Similarly, when evaluating neurofeedback research, we generally see only positive findings; until recently, it was notoriously difficult to publish a null finding in this field (5).





Publication bias remains commonplace not only because researchers may forego publishing null findings, but also because journals are less likely to accept papers presenting such results (13). This trend drives a state of affairs in which the first paper published on any particular topic almost always reports positive findings. When follow-up studies deflate the hype surrounding seminal publications, as is often the case, we end up with a situation I call the fake news effect in biomedicine—a less reliable positive finding gets trumped by a more decisive null result, and yet we cling to what we heard first and what makes us feel good.
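The mechanics of this effect are easy to demonstrate. Below is a small, self-contained Python simulation; the true effect size, arm size, and study count are arbitrary illustrative choices, not figures from any cited study. It shows that when only statistically significant positive results reach print, the published literature systematically overstates a small true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_effect = 0.1   # small true standardized effect (illustrative choice)
n_per_arm = 25      # small-study arm size (illustrative choice)
n_studies = 1000

all_effects, published_effects = [], []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    estimate = treatment.mean() - control.mean()  # effect estimate in SD units
    result = stats.ttest_ind(treatment, control)
    all_effects.append(estimate)
    # The "file drawer": only significant positive findings get published.
    if result.pvalue < 0.05 and estimate > 0:
        published_effects.append(estimate)

print(f"true effect:                {true_effect:.2f}")
print(f"mean estimate, all studies: {np.mean(all_effects):.2f}")
print(f"mean estimate, published:   {np.mean(published_effects):.2f}")
# The published-only average comes out several times larger than the true
# effect, and the first paper on a topic is very likely to be drawn from
# this inflated, significant-only pool.
```

Under these assumptions, the seminal finding a reader encounters first is almost guaranteed to overstate the effect, which is exactly the opening the fake news effect needs.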





Beyond publication bias, some important experiments are simply never conducted due to narrowly-framed ethical concerns. For example, it’s rare to see placebo-controlled experiments in surgery because many scientists feel that they cannot justify exposing a patient to the potential complications of surgery without guaranteeing a genuine treatment. Likewise, regulatory agencies seldom require a placebo-controlled study before approving a new surgical technique.







Image courtesy of Wikimedia Commons.



The results from placebo-controlled trials, however, challenge this position. They demonstrate that some procedures, such as knee surgery, hardly outperform a sham comparator. Thus, in a broader frame, we expose millions of patients to the potential complications of certain surgeries while providing little more than placebo benefits. A panel of experts now strongly recommends against knee surgery for arthritis (14). Unfortunately, these placebo-controlled trials were performed after the medical profession had established the infrastructure to practice knee surgery. Had the robust null findings been published before the uncontrolled positive results, perhaps fewer practitioners would recommend this surgery.





Even with these findings in mind, both ethics review boards and researchers themselves continue to shy away from certain placebo-controlled experiments. And we can't blame them: as humans, we tend to regret choices that stem from action more than those that stem from inaction (15). For example, if an institution runs a protocol in which a placebo-control patient experiences serious adverse effects, lawyers are likely to get involved. Alternatively, if the institution refuses to conduct a placebo-controlled experiment for an invasive technique that turns out to provide only placebo benefits, few repercussions will surface. In a narrow frame, we can praise our inaction for how it minimized exposure to invasive treatments; in a broad frame, however, we can appreciate how this inaction may help perpetuate invasive placebo treatments that sometimes carry serious side effects.





Taken together, our scientific publishing model, with its disdain for null findings, and our tendency to narrowly frame ethical concerns and assume inaction as the default stack the deck against us. They feed our cognitive biases and drive us toward self-deception.





What next? 





If we were infallible interpreters of science, self-deception would be a non-issue. We could instantaneously weigh the influence of publication bias, and we would never forget that studies without controls necessarily conflate placebo and treatment effects. Upon exposure to new and more convincing data, we would change our opinions accordingly. With inspiration from the economist Richard Thaler, let's call this hypothetical character homo scientificus, or a scicon for short (Thaler depicts the perfectly rational and omniscient economic agent as homo economicus, or an econ) (16).





If we gave two scicons the same set of data, it wouldn’t matter what order we presented it in, what journal it was published in, or whether it was even published at all. They couldn’t deceive themselves even if they tried. As humans, however, we interpret data in relation to the order we receive it, our field of expertise, our own theoretical and methodological preferences, and even our emotional state at the time of reading. Needless to say, even the most seasoned scientists fall short of scicon status. 








Image courtesy of Pixabay.



While we can’t reset the past and clean our slate of biases, we can strive to override them when we look back at data and to circumvent them as we move forward. For example, we can use what statistician Andrew Gelman calls the time-reversal heuristic (17) to override the fake news effect in biomedicine. He encourages us to conduct thought experiments where we imagine that a robust null study was published before an uncontrolled positive result—and then to re-evaluate our belief. This technique attempts to override our biases in that we remain exposed to the same data while acknowledging our predispositions and attempting to minimize errors in thinking. 





To circumvent our biases—i.e., to avoid information that feeds them or to present data that hinders them—at least two practices can help. We can (1) publish null results immediately and unabashedly, if not in a journal, then at least in a freely accessible repository; and (2) assume a broad frame when considering the ethical pros and cons of conducting a particular study.





It remains difficult to identify when we've been deceiving ourselves, even more difficult to assume the blame, and perhaps most difficult of all to implement a lasting behavioral change in light of our discovery. As a first step to evade the perils of self-deception, we can remain wary of our cognitive biases and present research in formats designed for humans, not scicons.



References 



1. Nisbett RE, Borgida E. Attribution and the psychology of prediction. J Pers Soc Psychol 1975; 32: 932–43.



2. Anderson CA, Lepper MR, Ross L. Perseverance of social theories: The role of explanation in the persistence of discredited information. J Pers Soc Psychol 1980; 39: 1037–49.



3. Lewandowsky S, Ecker UKH, Seifert CM, Schwarz N, Cook J. Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychol Sci Public Interes Suppl 2012; 13: 106–31.



4. Kirsch I, Deacon BJ, Huedo-Medina TB, Scoboria A, Moore TJ, Johnson BT. Initial severity and antidepressant benefits: A meta-analysis of data submitted to the Food and Drug Administration. PLoS Med 2008; 5(2): e45.



5. Thibault RT, Raz A. The Psychology of Neurofeedback: Clinical Intervention even if Applied Placebo. Am Psychol 2017; 72: 679–88.



6. Al-Lamee R, Thompson D, Dehbi HM, et al. Percutaneous coronary intervention in stable angina (ORBITA): A double-blind, randomised controlled trial. Lancet 2018; 391: 31–40.



7. Moseley JB, O’Malley K, Petersen NJ, et al. A controlled trial of arthroscopic surgery for osteoarthritis of the knee. N Engl J Med 2002; 347: 81–8.



8. Harris CS, Lifshitz M, Raz A. Acupuncture for Chronic Pain? Clinical Wisdom Undecided Despite Over 4000 Years of Practice. Am J Med 2015; 128: 331–3.



9. Pratt LA, Brody DJ, Gu Q. Antidepressant use among persons aged 12 and over: United States, 2011–2014. NCHS Data Brief 2017; 283: 1–8.



10. Kim S, Bosque J, Meehan JP, Jamali A, Marder R. Increase in Outpatient Knee Arthroscopy in the United States: A Comparison of National Surveys of Ambulatory Surgery, 1996 and 2006. J Bone Jt Surgery-American Vol 2011; 93: 994–1000.



11. Zhang Y, Lao L, Chen H, Ceballos R. Acupuncture use among american adults: What acupuncture practitioners can learn from national health interview survey 2007? Evidence-based Complement Altern Med 2012; 2012. DOI:10.1155/2012/710750.



12. Cipriani A, Furukawa TA, Salanti G, et al. Comparative efficacy and acceptability of 21 antidepressant drugs for the acute treatment of adults with major depressive disorder: a systematic review and network meta-analysis. Lancet 2018; 391: 1357–66.



13. Smaldino PE, McElreath R. The natural selection of bad science. R Soc Open Sci 2016; 3: 160384. DOI:10.1098/rsos.160384.



14. Siemieniuk RAC, Harris IA, Agoritsas T, et al. Arthroscopic surgery for degenerative knee arthritis and meniscal tears: a clinical practice guideline. BMJ 2017; 357: j1982.



15. Zeelenberg M, Van Dijk E, Van Den Bos K, Pieters R. The inaction effect in the psychology of regret. J Pers Soc Psychol 2002; 82: 314–27.



16. Thaler RH, Sunstein CR. Nudge: Improving Decisions about Health, Wealth, and Happiness. 2008.



17. Gelman A. The time-reversal heuristic—a new way to think about a published finding that is followed up by a large, preregistered replication (in context of Amy Cuddy’s claims about power pose). 2016. http://andrewgelman.com/2016/01/26/more-power-posing/ (accessed April 1, 2018).



Want to cite this post?



Thibault, R. (2018). The Fake News Effect in Biomedicine. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/04/the-fake-news-effect-in-biomedicine.html

Tuesday, April 10, 2018

Global Neuroethics and Cultural Diversity: Some Challenges to Consider




By Karen Herrera-Ferrá, Arleen Salles and Laura Cabrera






Karen Herrera-Ferrá, MD, MA, lives in Mexico City and founded the Mexican Association of Neuroethics. She completed a post-doctorate in neuroethics (Neuroethics Studies Program at the Pellegrino Center for Clinical Bioethics (PCCB) at Georgetown University) and holds an MA in Clinical Psychology and an MD, as well as certificates in Cognitive Behavioral Therapy and the History of Religions. She completed a one-year fellowship on psychosis and another on OCD, and she is currently a PhD candidate in bioethics. In May 2016 she launched a national project to formally introduce and develop neuroethics in her country. The main focus of this project is to identify and include national leaders in mental health who are interested in neuroethics, so as to inform scholars and society about the discipline. She also works as a mental health clinician in a private hospital, lectures in hospitals and universities across Mexico, and is an Affiliated Scholar of the Neuroethics Studies Program at the PCCB at Georgetown University. Her interests and research focus on two main topics: recurrent violent behavior and the globalization of neuroethics in Latin America.





Arleen Salles is a Senior Researcher at the Centre for Research Ethics and Bioethics, Uppsala University, Sweden; task leader and research collaborator in the Ethics and Society subproject (SP12) of the EU-flagship Human Brain Project; and Director of the Neuroethics Program at CIF (Centro de Investigaciones Filosóficas) in Buenos Aires, Argentina.





Dr. Laura Cabrera is Assistant Professor of Neuroethics at the Center for Ethics and Humanities in the Life Sciences. She is also a Faculty Affiliate at the National Core for Neuroethics, University of British Columbia. Her interests focus on the ethical and societal implications of neurotechnology, in particular when used for enhancement as well as for psychiatric treatment. She has been working on projects at the interface of conceptual and empirical methods, exploring the attitudes of professionals and the public toward pharmacological and brain stimulation interventions, as well as their normative implications. Her current work also focuses on the ethical and social implications of environmental changes for brain and mental health. She received a BSc in Electrical and Communication Engineering from the Instituto Tecnológico de Estudios Superiores de Monterrey (ITESM) in Mexico City, an MA in Applied Ethics from Linköping University in Sweden, and a PhD in Applied Ethics from Charles Sturt University in Australia. Her career goal is to pursue interdisciplinary neuroethics scholarship, provide active leadership, and train and mentor future leaders in the field.





The impact of scientific brain research and of neurotechnology on human beings, as both biological and moral beings, is increasingly felt in medicine and the humanities around the world. Neuroethics attempts to offer a collective response to the ethical issues raised by rapidly developing science and to find new answers to age-old philosophical questions. A growing number of publications show that the field has spread to many countries, including developing ones (1-3). Mindful that ethical issues are typically shaped by the interplay of science and society, there has been a recent emphasis on the need for a more culturally and socially sensitive field and a call for a wider and more inclusive neuroethics: a "cross-cultural," "global," or "international" neuroethics (4). While the sentiment is good, what exactly a more inclusive neuroethics entails is not necessarily clear. Does it entail merely recognizing the need for the field to be more aware of existing disparities in brain and mental health issues and their treatment in different regions? Does it entail recognizing the global scope of neuroethical problems? Or possibly working toward a common, unified approach to neuroethical issues that incorporates different viewpoints and methods?









Image courtesy of Wikimedia.

While increased awareness of the scope of international issues is easier to accomplish, such awareness is less impactful if not followed by some kind of action. But if action is needed, the issue becomes: what moral framework should guide it? More fundamentally, is there a unified moral framework that allows us to address neuroethical issues in their cultural and social contexts? While the search for a unified framework might sound promising, it is not without difficulties. It requires rethinking usual approaches and ethical frameworks, becoming aware of local context as a critical element when addressing the issues, and considering the history and prevalent socio-cultural traditions that might shape people's attitudes towards neuroscience, the questions it raises, and the potential ways to resolve them. This requires a good deal of conceptual work on issues such as how to understand notions of culture and difference and how to avoid stereotypical understandings of the values and beliefs of different groups. It also requires examining the moral weight of particularities in order to ensure that inclusivity recognizes differences but does not overstate their impact or otherwise promote further separation of cultures.





As we show next by focusing on some Latin American particularities, this is still a challenge. 





Latin America is a vast and multi-ethnic territory, rich in cultural traditions and histories that vary from country to country. Thus, it is difficult to talk about Latin American values and beliefs in a generalized way. This becomes evident if we focus on topics such as the brain-mind relation, approaches to research and technology development or adoption, as well as approaches to mental health and brain diseases. 





In general, people's views on brain health, mental health, and mental well-being are shaped by historical, social, cultural, political, and even economic considerations, and these views typically influence each culture's attitude towards the use of neuroscience and neurotechnology for diagnosis, the treatment of neuropsychiatric disorders, and academic concerns (such as the discussion of neuro-enhancement). The same happens in Mexico and Argentina. Beyond the economic and political considerations that clearly shape people's perceptions, there are medical, cultural, and social factors that shape their ethical priorities in ways left unaddressed in much mainstream neuroethical literature.



a) Medical and scientific: Mexico and Argentina are not only consumers of advanced neuroscientific techniques and neurotechnological tools in clinical and research settings (e.g., genomics, diffusion tensor imaging, deep brain stimulation, transcranial electric stimulation, transcranial magnetic stimulation) (5-12) but also producers of neuroscientific research (13,14). Importantly, while both countries share global medical priorities, such as the correct diagnosis and treatment of Alzheimer's and Parkinson's diseases, there are additional locally relevant neuropsychiatric priorities. For instance, in Mexico, Guatemala, Colombia, and Brazil, among other countries, brain parasites account for up to 30% of epilepsy's etiology (15).






Image courtesy of Wikimedia.

b) Cultural: Another important issue is how mental health and wellbeing are perceived and addressed locally. For example, in Mexico, the mix of Western and pre-Hispanic philosophies has been an important factor leading many patients to consider going to a priest or a shaman before consulting a psychologist or a psychiatrist (16). In Argentina, on the other hand, the prevalence of a psychoanalytic culture often results in people visiting psychoanalysts rather than psychiatrists (1). Indeed, the discipline of psychology, particularly in its psychoanalytic form, is part of the everyday landscape and has shaped the language and beliefs of a significant portion of the population (1, 2).



c) Social/practical: When thinking of a global, cross-cultural, or international neuroethics, it is key to consider international collaborations. This requires reflection on the day-to-day practicalities of carrying out cross-border collaborations. Cultural differences bring challenges in the form of different bureaucratic hurdles. In some locations, Institutional Review Boards (IRBs) might work quite differently in terms of the expertise of their members, the number of meetings they hold, and the information requested for protocols that might not involve clinical research. In some places, IRBs might not even exist. There might be places where collaboration with certain countries occurs only in certain research areas, but not so much in the field of neuroethics. To develop fruitful international collaboration and better understand neuroethical issues from a cross-cultural perspective, more needs to be done to address these day-to-day practical considerations.


A continuous discussion around what it means to do global, international, or cross-cultural neuroethics has many merits: a better understanding of culturally relevant practices, beliefs, and concerns will advance the goals of neuroethics beyond the few countries where it has flourished. Yet much conceptual work and groundwork remains to be done to respectfully learn from different cultures and to promote frameworks that advance the local and global goals of neuroethics.





Careful reflection on cultural diversity is a necessity: reflection on what it means, on how it shapes the perception of neuroscience, neurotechnology, and mental health, and on how respect for cultural diversity might unintentionally lead to stigma and discrimination against "different others."





Such reflection will hopefully enrich a 'global neuroethics.' This is, however, a work in progress, one that starts with a re-evaluation of global concepts and moves on to contextual clinical and research challenges.







References 






1. Salles A. Neuroethics in a "psy" world. The case of Argentina. Camb Q Healthc Ethics. 2014;23(3):297-307.







2. Salles A. Neuroethics in Context: The Development of the Discipline in Argentina. In: Johnson L. S. M. RK, editor. The Routledge Handbook of Neuroethics. New York, NY: Routledge; 2018.







3. Buniak L, Darragh M, Giordano J. A four-part working bibliography of neuroethics: part 1: overview and reviews--defining and describing the field and its practices. Philos Ethics Humanit Med. 2014;9:9.







4. Lombera S, Illes J. The international dimensions of neuroethics. Dev World Bioeth. 2009;9(2):57-64.







5. Morales-Marín ME, Genis-Mendoza AD, Tovilla-Zarate CA, Lanzagorta N, Escamilla M, et al. Association between obesity and the brain-derived neurotrophic factor gene polymorphism Val66Met in individuals with bipolar disorder in Mexican population. Neuropsychiatr Dis Treat. 2016; 25 (12):1843-8.







6. San-Juan D, Sarmiento CI, Hernandez-Ruiz A, Elizondo-Zepeda E, Santos-Vázquez G. Transcranial Alternating Current Stimulation: A Potential Risk for Genetic Generalized Epilepsy Patients (Study Case). Front Neurol. 2016; 28 (7): 213.







7. Alvarado-Alanis P, León-Ortiz P, Reyes-Madrigal F, Favila R, Rodríguez-Mayoral O, et al. Abnormal white matter integrity in antipsychotic-naïve first-episode psychosis patients assessed by a DTI principal component analysis. Schizophr Res. 2015; 162(1-3):14-21.







8. Piedimonte F, Andreani JC, Piedimonte L, Micheli F, Graff P, et al. Remarkable clinical improvement with bilateral globus pallidus internus deep brain stimulation in a case of Lesch-Nyhan disease: five-year follow-up. Neuromodulation. 2015; 18 (2).







9. Bendersky D, Ajler P, Yampolsky C. [The use of neuromodulation for the treatment of tremor]. Surg Neurol Int. 2014; 5 (5): S232-46.







10. Fisher RS, Velasco A. Electrical brain stimulation for epilepsy. Nat Rev Neurol. 2014; 10 (5): 261-70.







11. Piedimonte F, Andreani JC, Piedimonte L, Graff P, Bacaro V, et al. Behavioral and motor improvement after deep brain stimulation of the globus pallidus externus in a case of Tourette's syndrome. Neuromodulation. 2013; 16 (1):55-8.







12. Rocha L. Interaction between electrical modulation of the brain and pharmacotherapy to control pharmacoresistant epilepsy. Pharmacol Ther. 2013; 138 (2): 211-28.







13. Consejo Nacional de Ciencia y Tecnología. Neurociencia 2017. Retrieved January 5, 2018, from: http://www.conacytprensa.mx/index.php/component/search/?searchword=neurociencia&ordering=newest&searchphrase=all







14. Consejo Nacional de Investigaciones Científicas y Técnicas. Neurociencia 2017. Retrieved January 5, 2018, from Conicet: http://www.conicet.gov.ar/new_scp/search.php?keywords=neurociencias







15. World Health Organization. WHO, 2017. Retrieved November 2, 2017, from WHO: http://www.who.int/taeniasis/epidemiology/en.







16. INEGI. Encuesta sobre la Percepción Pública de la Ciencia y la Tecnología (ENPECYT) 2013. Retrieved November 1, 2017, from INEGI: http://www.beta.inegi.org.mx/proyectos/enchogares/especiales/enpecyt/2013/







Want to cite this post?




Herrera-Ferrá, K., Salles, A., and Cabrera, L. (2018). Global Neuroethics and Cultural Diversity: Some Challenges to Consider. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/04/global-neuroethics-and-cultural.html

Tuesday, April 3, 2018

The Seven Principles for Ethical Consumer Neurotechnologies: How to Develop Consumer Neurotechnologies that Contribute to Human Flourishing



By Karola Kreitmair 







Karola Kreitmair, PhD, is a Clinical Ethics Fellow at the Stanford Center for Biomedical Ethics. She received her PhD in philosophy from Stanford University in 2013 and was a postdoctoral fellow in Stanford’s Thinking Matters program from 2013-2016. Her research interests include neuroethics, especially new technologies, deep brain stimulation, and the minimally-conscious state, as well as ethical issues associated with wearable technology and citizen science.  





Brain-computer interfaces, neurostimulation devices, virtual reality systems, wearables, and smartphone apps are increasingly available as consumer technologies intended to promote health and wellness, entertainment, productivity, enhancement, communication, and education. At the same time, a growing body of literature addresses ethical considerations with respect to these neurotechnologies (Wexler 2016; Ienca & Andorno 2017; Kreitmair & Cho 2017). The ultimate goal of ethical consumer products is to contribute to human flourishing. As such, there are seven principles that developers must respect if they are to develop ethical consumer neurotechnologies. I take these considerations to be necessary for the development of ethical consumer neurotechnologies, i.e., technologies that contribute to human flourishing, but I am not committed to claiming they are also jointly sufficient.





The seven principles are: 



1. Safety 


2. Veracity 


3. Privacy 


4. Epistemic appropriateness 


5. Existential authenticity 


6. Just distribution 


7. Oversight 



1. Safety

Consumer neurotechnologies must be safe! 








Image courtesy of Wikimedia Commons.

Technology should be safe both when used as intended and when under threat from cybersecurity attacks. The bar for ensuring safety is relative to the degree of risk of harm inherent in the technology. For technology to be safe when used as intended, its development and production must be based on valid scientific principles and methods; failure here risks harm to users. Security breaches are particularly risky because neurotechnology stands in an intimate relationship with the brain.





Consider, for example, neurostimulation devices like the tDCS device Thync. Such devices are advertised as enhancing cognition, relieving symptoms of anxiety and depression, and combating cravings. However, there are considerable risks associated with such technologies, including that unintended brain areas may be affected, that enhancing one area might hurt another, that effects may be longer-lasting than expected, and that tDCS may cause contact dermatitis and skin burns (Riedel, Kabisch, Ragert, & von Kriegstein 2012; Wurzman, Hamilton, Pascual-Leone & Fox 2016).





At the same time, neurotechnologies must be safe from cybersecurity threats. In August 2017, the FDA recalled 465,000 pacemakers because they were vulnerable to hacking, allowing malicious actors to deliver inappropriate shocks to the heart or rapidly drain batteries. It is not hard to imagine similar attacks on consumer neurotechnologies. For instance, Nervana is a neurostimulation device that stimulates the vagus nerve with gentle electrical impulses. If hackers were to hijack the device to amplify or intensify the stimulation, the results could be extremely harmful.



2. Veracity

Consumer neurotechnologies must not promise results they do not deliver! 





Flouting the principle of honesty violates a user's autonomy, because it prevents her from making an informed decision about whether to use a particular technology. In an ideal world, all consumer neurotechnology would provide a valuable benefit to the user. While this may be too high a bar for consumer products in our actual world, requiring honesty with respect to the value the technology provides is not.








Image courtesy of Flickr.

There are currently a number of EEG sensor headsets on the market, such as the Neurosky Mindwave, that purport to measure attention, calmness, mental effort, appreciation, pleasantness, cognitive preparedness, and creativity. These devices can be combined with diverse apps that claim to allow users to train their performance along these dimensions. Take, for instance, apps meant to increase a user's creativity through neurofeedback. There is limited evidence that 'creativity' can be reliably captured through EEG rather than a broader 'heightened internal awareness' (Fink, Schwab & Papousek 2011). Moreover, research supporting the claim that EEG neurofeedback training protocols can improve creativity tends to focus on effects in musical performance (Gruzelier et al 2014), a fact that is generally not explicitly stated on the EEG headset training websites.



3. Privacy

Consumer neurotechnologies must be private! 





Neurotechnology can capture massive amounts of highly sensitive data about users. Brain wave data may soon give insight into contentful mental states, i.e., 'what' a particular user may be thinking or experiencing, making it a new class of data. There have been calls to regulate the handling of this exploding trove of neurodata (Yuste et al 2017), and I agree with this sentiment. In the meantime, however, users still have a right for their brain data to remain private, including, for instance, not having their brain data sold or shared to target products at them. Concerns regarding cybersecurity threats also factor into privacy considerations, as data can be a prime target for hackers.




EEG BCI devices, such as the Emotiv device, gather vast amounts of information regarding a user's arousal, valence, frustration, and focus, as well as vast amounts of raw EEG data. This is used for a number of purposes, such as real-time modelling of brain activity, flying drones with one's mind, or translating thoughts to speech.





Such data must be handled in a responsible manner that respects users' privacy. Absent new regulation, it is developers' responsibility to ensure this.



4. Epistemic appropriateness

Consumer neurotechnology should preserve epistemic appropriateness! 








Image courtesy of Pixabay.

Much of this technology functions by upending traditional epistemic pathways, mediating how we acquire information about ourselves or about the world. Traditionally, we gain such information through direct perception or introspection, relying on our embeddedness and embodiedness. Neurotechnology instead encourages the user to gather information through tracking and measurement, such as wearable tracking or neurofeedback, or through the generation of visual or tactile sensory input. An emerging literature suggests that these altered epistemic pathways may have profound impacts on an individual's psychology, phenomenology, and even physiology.





Tracking technology, including wearable devices and neurotracking devices, such as Hexoskin smart clothing or Interaxon Muse, can track location, activity, heart rate, breathing volume, EEG, EKG, sweat composition, and brain activity. It is this kind of technology that permits so-called 'self-quantification' (Wolf 2009). Evidence is emerging, however, that tracking an activity and focusing on measuring its output can diminish the enjoyment of the experience. In studies, individuals who tracked the number of steps on their walks through a forest using a fitness tracker did accumulate more steps, but they also enjoyed the experience less because it felt more like work (Etkin 2016). Moreover, there is concern that tracking, and focusing on external means of gaining self-knowledge, may be counterproductive to experiencing phenomena such as 'flow' and 'being-in-the-moment,' and may contribute to alienation from embodiedness and embeddedness (Kreitmair, Cho & Magnus 2017).





Consider also virtual reality systems, such as the Oculus Go, which retails at $199 and is hitting the market in 2018. Such systems generate alternative visual and auditory phenomenological experiences. Or they can be extended to generate embodied virtual reality experiences, including 'tactile' or 'haptic' responsiveness. (For instance, the NullSpace haptic suit includes 32 independently activated vibration pads on the chest, abdomen, shoulders, arms, and hands, which can activate 117 built-in haptic effects.) Such virtual reality systems may have effects beyond what is intended. Physiologically, evidence from studies involving the rubber hand illusion suggests that perceptual illusions can have effects on the immune system, such as increasing the histamine reactivity of the real body part (Barnsley et al 2011). Such illusions can also be induced in virtual reality and are thus likely to cause similar immune responses in these settings. Moreover, the phenomenological effects of tactile responsiveness in virtual reality are not yet known. It is possible that embodied virtual reality may affect a user's being-in-the-world in ways that are disorienting and alienating.




5. Existential authenticity

Consumer neurotechnology should respect existential authenticity. 








Image courtesy of Wikimedia Commons.

This consideration also arises from my previous observation regarding the shift in epistemic access, namely that neurotechnology encourages the user to gather information through tracking and measurements, e.g. wearable tracking and neurofeedback, or through the generation of visual, auditory, or tactile sensory input. 





Neurotechnologies mediate experiences. In a sense, a user is experiencing the representation of an experience rather than the experience itself. This is either a quantified representation, when she accesses states of the self and the world through tracking technology, or a virtual representation, when she accesses them through virtual reality technology. It is a different sort of experience, in that the user is not engaging with reality in an authentic way.





This raises existential concerns. Experiences are that within which we ground our self-fashioning. As Kierkegaard says, the project of being a human is that of 'becoming what one is.' We are always a 'becoming,' never a 'being.' What happens if we fashion ourselves on the basis of inauthentic experiences? Specifically, can we authentically fashion ourselves when the tactile, auditory, and visual input we receive is incongruous with reality? Can we authentically fashion ourselves when we acquire beliefs about ourselves through quantified data?





Take the example of virtual reality. Sartre, in Anti-Semite and Jew (1948), states that "Authenticity consists in having a true and lucid consciousness of the situation, in assuming the responsibilities and risks it involves." This raises the question of what effect this shift in the kind of thing we experience might have on our moral sensibilities. What happens when we fashion our moral sensibilities in an experience space unbound by the constraints of reality? What happens when this technology becomes widespread among children? These are the kinds of issues we, as neuroethicists, should be thinking about.



6. Just distribution

Consumer neurotechnologies must be justly distributed. 





These technologies constitute a good, and as such they must be justly distributed. Without committing myself to a particular theory of just distribution, it is nonetheless the case that how widely an individual technology ought to be accessible depends on the value it bestows. Very beneficial technologies must be widely accessible, while less beneficial technologies may be available only to a niche market. The 'digital divide' is already an issue concerning equitable access to the internet across socioeconomic strata. The same will likely hold for neurotechnologies.








Image courtesy of Wikimedia Commons.

An example is neurostimulation devices, such as the Starstim neurostimulator, which is advertised as performing transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS), with the aim of enhancing, among other things, executive function, language, attention, learning, memory, mental arithmetic, and social cognition.





Justice demands that if this, or any other direct-to-consumer neurotechnology is an effective intervention that can be used to gain a benefit in a competitive environment, then it should be accessible in a just fashion.



7. Oversight

Consumer neurotechnologies must be subject to oversight! 





Oversight should address the six dimensions discussed above and should be proportional to the extent to which a technology is implicated in them. Of course, certain oversight mechanisms already exist. In the US, for instance, some of the technologies discussed most likely ought to be regulated by the FDA. The situation resembles the early days of direct-to-consumer genomics. When companies like 23andMe began, they operated in an unregulated market. However, thanks in part to the work of bioethicists (Magnus, Cook-Deegan & Cho 2009), the FDA stepped in and began to regulate 23andMe and others like it.





For other technologies, those that would not be covered by FDA regulations, stakeholders need to develop industry guidelines. Specifically, stakeholders need to make judgment calls about where along the six dimensions thresholds of acceptability should fall. If a particular technology falls below such a threshold, that consumer neurotechnology should not be made available. Stakeholders here include users, parents, developers, medical experts, cybersecurity experts, and neuroethicists.





In conclusion, these are the seven dimensions that the development of ethical consumer neurotechnologies must take into consideration: safety, veracity, privacy, epistemic appropriateness, existential authenticity, just distribution, and oversight. Only when these dimensions are considered will consumer neurotechnologies truly contribute to human flourishing.





Note: These principles are limited to consumer neurotechnologies. They may therefore not hold for clinical, military, or third-party applications. They also do not necessarily apply to pharmacological technologies. 







References 





1. Barnsley, N., McAuley, J. H., Mohan, R., Dey, A., Thomas, P., & Moseley, G. L. (2011). The rubber hand illusion increases histamine reactivity in the real arm. Current Biology, 21(23), R945-R946. 





2. Etkin, J. (2016). The hidden cost of personal quantification. Journal of Consumer Research, 42(6), 967-984. 





3. Fink, A., Schwab, D., & Papousek, I. (2011). Sensitivity of EEG upper alpha activity to cognitive and affective creativity interventions. International Journal of Psychophysiology, 82(3), 233-239. 





4. Gruzelier, J. H., Foks, M., Steffert, T., Chen, M. L., & Ros, T. (2014). Beneficial outcome from EEG-neurofeedback on creative music performance, attention and well-being in school children. Biological psychology, 95, 86-95. 





5. Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(1), 5. 





6. Kreitmair, K. V., & Cho, M. K. (2017). The neuroethical future of wearable and mobile health technology. Neuroethics: Anticipating the Future, 80-107. 





7. Kreitmair, K. V., Cho, M. K., & Magnus, D. C. (2017). Consent and engagement, security, and authentic living using wearable and mobile health technology. Nature Biotechnology, 35(7), 617-620.  


8. Magnus, D., Cho, M. K., & Cook-Deegan, R. (2009). Direct-to-consumer genetic tests: Beyond medical regulation? Genome Medicine, 1(2), 17.





9. Riedel, P., Kabisch, S., Ragert, P., & von Kriegstein, K. (2012). Contact dermatitis after transcranial direct current stimulation. Brain Stimulation, 5(3), 432-434.





10. Sartre, J. P. (1948). Anti-Semite and Jew (G. J. Becker, Trans.). New York: Schocken Books.





11. Wexler, A. (2016). The practices of do-it-yourself brain stimulation: Implications for ethical considerations and regulatory proposals. Journal of Medical Ethics, 42(4), 211-215.





12. Wolf, G. (2009, June 22). Know thyself: Tracking every facet of life, from sleep to mood to pain, 24/7/365. Wired. http://archive.wired.com/medtech/health/magazine/17-





13. Wurzman, R., Hamilton, R. H., Pascual-Leone, A., & Fox, M. D. (2016). An open letter concerning do-it-yourself users of transcranial direct current stimulation. Annals of Neurology, 80(1), 1-4.




14. Yuste, R., Goering, S., Bi, G., Carmena, J. M., Carter, A., Fins, J. J., ... & Kellmeyer, P. (2017). Four ethical priorities for neurotechnologies and AI. Nature, 551(7679), 159-163.





Want to cite this post?





Kreitmair, K. (2018). The Seven Principles for Ethical Consumer Neurotechnologies: How to Develop Consumer Neurotechnologies that Contribute to Human Flourishing. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/04/the-seven-principles-for-ethical.html