Tuesday, June 26, 2018

Facial recognition, values, and the human brain




By Elisabeth Hildt








Image courtesy of Pixabay.

Research is not an isolated activity. It takes place in a social context, sometimes influenced by value assumptions and sometimes accompanied by social and ethical implications. A recent example of this complex interplay is an article, “Deep neural networks can detect sexual orientation from faces” by Yilun Wang and Michal Kosinski, accepted in 2017 for publication in the Journal of Personality and Social Psychology.





In this facial recognition study, the researchers used deep neural networks to classify the sexual orientation of persons depicted in facial images uploaded to a dating website. While the discriminatory power of the system was limited, the algorithm was reported to achieve higher accuracy in this setting than human judges. The study can be seen in the context of the “prenatal hormone theory of sexual orientation,” which claims that gay men and women tend to have gender-atypical facial morphology.
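To make the methodology concrete, here is a minimal sketch of the general pipeline the paper describes: extract a feature vector for each face with a pretrained convolutional network, then fit a simple linear classifier on those features. Everything here is an illustrative stand-in, not the authors’ code: the study used VGG-Face features, whereas this sketch substitutes an off-the-shelf ResNet-18, and the image paths and labels are hypothetical placeholders.

import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.linear_model import LogisticRegression

# Pretrained CNN as a generic feature extractor (a stand-in for the
# VGG-Face features used in the study).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep the 512-d penultimate features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(image_paths):
    """Map a list of image files to one feature vector per face."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                         for p in image_paths])
    with torch.no_grad():
        return backbone(batch).numpy()

# Hypothetical labeled data; the classifier itself is plain logistic
# regression on the embeddings.
# X_train = embed(train_paths)
# clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
# probabilities = clf.predict_proba(embed(test_paths))

The point of the sketch is how little specialized machinery such a system requires: the heavy lifting is done by a generic pretrained network, which is part of why the concerns discussed below extend well beyond this one study.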




The abstract of the article ends with the following sentences (p. 2): “Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”





The authors of the study seem to assume that their role is confined to conducting research, sending the results out to society, and (maybe) sounding a note of caution (Murphy 2017; Resnick 2018), and that, beyond this caveat, their research does not pose any considerable ethical issues. This assumption can be questioned, however, for researchers have a clear responsibility to think about the social embeddedness and ethical implications of their research before it is carried out and published, and to design their studies in a way that keeps possible negative consequences to a minimum.





To begin with, there has been an ongoing discussion on whether this study complies with research ethics standards. Issues raised include whether the research is in line with the dating site’s guidelines and with copyright regulations, as well as whether the researchers were entitled to use the photos without obtaining the informed consent of the individuals who had uploaded them for an entirely different purpose (Flaherty 2017; Leetaru 2017). Amid an ongoing investigation into these issues, and a broader discussion on whether new guidelines regulating artificial intelligence (AI) and digital data research are needed (Leetaru 2017), the study has not yet been published in the journal.








Image courtesy of Flickr.

Apart from the above-mentioned research ethics issues, ethical aspects matter in two respects: first with regard to the value assumptions implicit in the study design and second with regard to possible ethical implications of the research. Physiognomy, the broader context in which the study is located, is a controversial field that many consider to be a pseudoscience (Emspak 2017).



Physiognomy assumes that a person’s facial features give indications of his/her personality traits. It is not by chance that the study reminds me of the pseudoscientific phrenological approaches of the 19th century, which attempted to draw conclusions about individuals’ personality traits based on the shape of their skulls (Holtzman 2015). What unites the two is that both approaches are shaped by social value assumptions and by the motivation to identify individuals with behavior or characteristics considered socially deviant.



Other brain-related research fields are not immune to social value assumptions either. One example is craniometry and the highly questionable claim made by Samuel George Morton in the 19th century that differences in cranial capacity between ethnic groups indicate differences in their intellectual capacity. The same applies to discussions of the relevance of brain size for intelligence (Fausto-Sterling 1993). These examples tell us more about the underlying social assumptions of the researchers than about the actual relevance of their measurements.





Similarly, one of the basic assumptions of the Wang & Kosinski paper, the “prenatal hormone theory of sexual orientation” and the associated view that there is a correlation between the shape of a person’s face and his/her sexual orientation, is far from proven (Emspak 2017; Murphy 2017). While the quality of the underlying scientific assumption is a complex question that cannot be resolved here, the choice of the research topic itself reflects the view that AI-based facial recognition to detect sexual orientation is worth pursuing.



One of the central conclusions of the study is that human faces “contain more information about sexual orientation than can be perceived or interpreted by the human brain” (Wang & Kosinski, p. 29). Deep neural networks are reported to provide more accurate results in the described study setting because they take into consideration features that are not accessible or not relevant to humans when it comes to distinguishing between heterosexual and homosexual individuals based on their faces. Nevertheless, it is obvious that the resulting data needs human interpretation, especially in view of the intimacy of the trait under investigation. For example, the authors explain the higher probability of seeing a shadow on the forehead of heterosexual men and lesbian women in the study by the tendency of both groups to wear baseball caps and “the association between baseball caps and masculinity in American culture” (p. 20). In other cultural contexts, different influencing factors may be expected. But it remains unexplained why gay people in the study had a higher probability of wearing glasses (Emspak 2017).








Image courtesy of Pixabay.

The underlying question is: how can we ever adequately interpret the resulting data in a situation in which not only a considerable number of the features used by the system but also their relevance escape our understanding? There is a clear risk of discrimination against homosexual men and women based on opaque algorithms (O’Neil 2016; Agüera y Arcas et al. 2018). This leads to the question of possible negative consequences for homosexual men and women.





Concerning possible ethical implications of their research study, the authors stress that their intention was to raise awareness of the risks gay people may already face, particularly in view of the growing digitalization of everyday life, and that they did not develop algorithms for their study but instead used widely available off-the-shelf tools. However, it seems obvious that the study not only raises awareness of the options available in the digital age, but also suggests how to realize similar approaches; it also reinforces the assumption that using facial recognition to find out about the sexual orientation of individuals may be worthwhile.


_______________








Elisabeth Hildt is Professor of Philosophy and Director of the Center for the Study of Ethics in the Professions at Illinois Institute of Technology, Chicago. Her research focus is on neuroethics, ethics of technology, and Science and Technology Studies. Before moving to Chicago, she was the head of the Research Group on Neuroethics/Neurophilosophy at the University of Mainz, Germany.


References

Agüera y Arcas, B., Todorov, A., Mitchell, M. (2018): “Do algorithms reveal sexual orientation or just expose our stereotypes?”, Medium, Jan 11, 2018; https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477

Emspak, J. (2017): “Facing Facts: Artificial Intelligence and the Resurgence of Physiognomy”, Undark, 11.08.2017; https://undark.org/article/facing-facts-artificial-intelligence/

Fausto-Sterling, A. (1993): “Sex, Race, Brains, and Calipers”, Discover Magazine 14(10): 32–37; http://discovermagazine.com/1993/oct/sexracebrainsand288

Flaherty, C. (2017): “Prominent journal that accepted controversial study on AI gaydar is reviewing ethics in the work”, Inside Higher Ed, Sep 13, 2017; https://www.insidehighered.com/news/2017/09/13/prominent-journal-accepted-controversial-study-ai-gaydar-reviewing-ethics-work

Holtzman, G.S. (2015): “When Phrenology Was Used in Court: Lessons in neuroscience from the 1834 trial of a 9-year-old”, Slate, Future Tense, Dec 16, 2015; http://www.slate.com/articles/technology/future_tense/2015/12/how_phrenology_was_used_in_the_1834_trial_of_9_year_old_major_mitchell.html

Leetaru, K. (2017): “AI ‘Gaydar’ And How The Future Of AI Will Be Exempt From Ethical Review”, Forbes, Sep 16, 2017; https://www.forbes.com/sites/kalevleetaru/2017/09/16/ai-gaydar-and-how-the-future-of-ai-will-be-exempt-from-ethical-review/#704e7602c09a

Murphy, H. (2017): “Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine”, New York Times, Oct 9, 2017; https://www.nytimes.com/2017/10/09/science/stanford-sexual-orientation-study.html

O’Neil, C. (2016): Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.

Resnick, B. (2018): “This psychologist’s ‘gaydar’ research makes us uncomfortable. That’s the point.”, Vox, Jan 29, 2018; https://www.vox.com/science-and-health/2018/1/29/16571684/michal-kosinski-artificial-intelligence-faces

Wang, Y., Kosinski, M. (2017): “Deep neural networks can detect sexual orientation from faces”, manuscript; https://www.gsb.stanford.edu/sites/gsb/files/publication-pdf/wang_kosinski.pdf

Want to cite this post?



Hildt, E. (2018). Facial recognition, values, and the human brain. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/06/facial-recognition-values-and-human.html

Tuesday, June 19, 2018

Disrupting diagnosis: speech patterns, AI, and ethical issues of digital phenotyping



By Ryan Purcell, PhD







Jim Schwoebel, presenter at the April seminar of The Future Now: Neuroscience and Emerging Ethical Dilemmas (NEEDs).

Diagnosing schizophrenia can be complex, time-consuming, and expensive. The April seminar in The Future Now: Neuroscience and Emerging Ethical Dilemmas (NEEDs) series at Emory focused on one innovative effort to improve this process in the flourishing field of digital phenotyping. Presenter and NeuroLex founder and CEO Jim Schwoebel had witnessed his brother struggle for several years with frequent headaches and anxiety, and saw him accrue nearly $15,000 in medical expenses before his first psychotic break. From there it took many more years and additional psychotic episodes before Jim’s brother began responding to medication and his condition stabilized. Unfortunately, this experience is not uncommon; a recent study found that the median period from the onset of psychotic symptoms until treatment is 74 weeks. Naturally, Schwoebel thought deeply about how this had happened and what clues might have been seen earlier. “I had been sensing that something was off about my brother’s speech, so after he was officially diagnosed, I looked more closely at his text messages before his psychotic break and saw noticeable abnormalities,” Schwoebel told Psychiatric News. For Schwoebel, a Georgia Tech alum and co-founder of the neuroscience startup accelerator NeuroLaunch, this was the spark of an idea. Looking into the academic literature, he found a 2015 study led by researchers from Columbia University who applied machine learning to speech from a sample of participants at high risk for psychosis. They found that the algorithm correctly predicted which individuals would transition to psychosis over the next several years.
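To give a sense of what “machine learning applied to speech” can mean here: one family of features the Columbia group analyzed was semantic coherence, roughly, how related each sentence is to the one before it, with unusually low coherence as a candidate warning sign. The toy sketch below computes such a coherence score; the TF-IDF vectorizer and the sample sentences are simplified stand-ins for illustration, not the study’s actual method or data.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def coherence_features(sentences):
    """Similarity between each pair of adjacent sentences; the minimum
    coherence in a transcript is the kind of signal of interest."""
    vectors = TfidfVectorizer().fit_transform(sentences)
    sims = [cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
            for i in range(vectors.shape[0] - 1)]
    return {"min_coherence": float(np.min(sims)),
            "mean_coherence": float(np.mean(sims))}

sample = ["I went to the store this morning.",
          "They were out of the bread I like.",
          "The moon controls what the neighbors think."]
print(coherence_features(sample))  # prints min and mean adjacent-sentence coherence

In the actual study, linguistic features of this kind, combined with syntactic measures, fed a classifier trained to separate participants who later developed psychosis from those who did not.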









Image Courtesy of Pixabay user mohamed_hassan.

Schwoebel went on to found NeuroLex Laboratories, which is developing technology to analyze speech samples for diagnostic purposes. For NeuroLex, schizophrenia is only one of several neuropsychiatric disorders that may be diagnosable by AI-mediated linguistic analysis. In their early stages, depression, Alzheimer’s disease (AD), and Parkinson’s disease (PD) may also affect the brain in unseen ways that algorithms can identify before a clinician can. Early diagnosis of these conditions could have a profound impact on patient outcomes and the development of future treatments. While no disease-reversing cures are currently available for any of these disorders, preventing psychotic episodes is a major goal in schizophrenia treatment, and early diagnosis at least provides the opportunity for intervention. For neurodegenerative diseases, the case for early diagnosis may be even more compelling. By the time many patients are diagnosed with a neurodegenerative disease such as AD or PD, the disease has already done catastrophic and perhaps irreversible damage to the brain. Clinical trials that include such patients would therefore be doomed before they even begin. Knowing about the disease earlier could provide more opportunity for treatment and would also have practical benefits for families wanting to plan for future care.





NeuroLex is far from the only startup in the digital phenotyping space. Jeff Arnold, the founder of WebMD, co-founded Sharecare, which offers analysis of phone calls to determine stress levels, among other personalized medical services. Dr. Thomas Insel, former director of the US National Institute of Mental Health, co-founded Mindstrong, which measures countless aspects of smartphone use as indicators of mental and emotional health. We know of Facebook’s efforts to identify users who may be suicidal, and it is safe to assume that Facebook and Google are interested in (if not already performing) other digital phenotyping analyses.








Image Courtesy of Max Pixel

More efficient, more accurate, and earlier diagnosis of neuropsychiatric diseases could provide real benefits for patients, their families, and researchers. However, there are also real ethical issues to consider. First, as with any application involving AI, there is an increasing appreciation that bias may be a major hurdle. Related to speech, Schwoebel noted that there are obvious regional (think Brooklyn vs. Birmingham) as well as racial, ethnic, and gender differences in how people speak and in the words they choose. He emphasized the importance of a diverse training set, which would hopefully head off harmful and embarrassing situations like the one in which a Google Photos algorithm generated some of the most racist tags imaginable. Yet the really scary part may be that the public will likely never know about most of the discrimination that happens in the background as we browse the web. Diverse training data in the research phase and transparent computation would likely help avoid systemic biases, but it is doubtful that they could be eliminated completely.








Image Courtesy of Pexels user Andres Urena

Speech and voice data may contain everything from the most mundane to the very personal and sensitive, so privacy issues could also present a significant challenge, both legally and ethically. From a legal perspective in the United States, states have varying wiretapping and voice recording statutes: in some, it is legal to record a conversation with the consent of only one party; in others, the consent of all parties is required. This may seem like an easy ethical judgment (simply go with the stricter regulation and get everyone’s consent), but it probably is not that simple in practice. Another important consideration is how the speech data is collected. AI-powered home assistants like Google Home and Amazon Echo, and even many smartphones and televisions, are listening, and the owner of the device, not to mention other people within earshot, may not know exactly what they have consented to having recorded. Just recently, Amazon was asked to explain why an Echo emailed an audio recording of a conversation a Portland woman had at home to one of her husband’s co-workers.





Lastly, there are concerns about how this technology could be used. Predictive data about an individual’s likelihood of developing neuropsychiatric conditions that may result in long periods of disability would be very valuable to insurance companies and employers, to name only two examples. In a one-party consent state, an applicant for insurance or a job would not even need to agree to submit to this sort of analysis. In this era of deregulation, with no obvious appetite for increased oversight at the federal level, it will likely be up to the private sector to police itself and decide on ethical principles to guide the development and implementation of these technologies. After all, the more incidents of ugly AI bias and gross disregard for privacy that make it into public view, the less interest there will be in adopting these technologies, which could seriously hamper their potential for good. There is little doubt that digital phenotyping in its many forms has the potential not only to improve efficiency at drastically lower cost, but also to enhance the ability and extend the reach of clinicians. Thoughtfully considering and addressing these concerns should improve the chances of reaching that potential, not hinder them.






Want to cite this post?




Purcell, R. (2018). Disrupting diagnosis: speech patterns, AI, and ethical issues of digital phenotyping. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/06/disrupting-diagnosis-speech-patterns-ai.html 

Tuesday, June 12, 2018

Ethical Concerns Surrounding Psychiatric Treatments: Do Academics Agree with the Public?


By Laura Y. Cabrera, Rachel McKenzie, Robyn Bluhm







Image courtesy of the U.S. Air Force Special Operations Command.

Treatments for psychiatric disorders raise unique ethical issues because they aim to change behaviors, beliefs, and affective responses that are central to an individual’s sense of who they are. For example, interventions for depression aim to change feelings of guilt and worthlessness (as well as depressed mood), while treatments for obsessive-compulsive disorder try to diminish both problematic obsessive beliefs and compulsive behaviors. In addition to the specific mental states that are the target of intervention, these treatments can also affect non-pathological values, beliefs, and affective responses. The bioethics and neuroethics communities have been discussing the ethical concerns that these changes pose for individual identity [1,2], personality [3,4], responsibility [5], autonomy [6,7], authenticity [8], and agency [9,10]. 




What we did





Pharmacological interventions such as antidepressants, antipsychotics, and stimulants, and neurosurgical psychiatric interventions such as ablative surgeries and deep brain stimulation, can be regarded as more “direct” in their mechanisms of action (compared to psychotherapy or other behavioral therapies) and therefore raise greater concern about their effects on the patient. We set out to compare the ethical issues raised by these two types of psychiatric interventions. Given the recent increase in research on neurosurgical interventions in psychiatry (as well as the historical precedents), it is especially important to compare discussions of these therapies with those of pharmacological interventions. In our study, we were interested in comparing two key stakeholders around these interventions: the academic community and the public. The former matters because the academic discussion around these topics has a long history, and because academics can influence clinical guidelines and proposed recommendations that play a role in whether or not a particular intervention is adopted, as well as in identifying the types of safeguards that might be warranted. The latter group we see as important because the public has, and should have, influence in shaping the future use of pharmacological and neurosurgical interventions. For example, think of the impact that the public’s voice has had on the negative perceptions around electroconvulsive therapy. There can be a disconnect between these two groups, however, when discussing the relevancy, legitimacy, and significance of these issues [9].

Image courtesy of Pixabay.





We compared ethical concerns raised regarding neurosurgical and pharmacological interventions and examined how these ethical issues are discussed both in the academic community and among the public. To gauge academic perspectives, we analyzed medical and bioethics literature that discussed pharmacological and neurosurgical interventions in psychiatry together with some discussion of ethical concerns. To gauge public perspectives, we used online comments responding to articles covering interventions aimed at treating psychiatric disorders in major online American newspapers and magazines. Some examples of these comments are below.











Even though online comments can't provide a representative sample of public views, they are increasingly used by researchers to study public opinion on current issues [11]. The use of online comments has several advantages, including the possibility that, in a forum where people can comment anonymously (most people used pseudonyms or fake names), people may be more honest than they would be when interviewed by researchers. Yet anonymity may also embolden commenters to write nonsense and politicize the conversation. While not all commenters provide rich and insightful comments, some are truly elaborate and reflect deep thinking on the issues at hand.





Our analysis included perspectives related to various domains, including scientific and patient-related issues. Here, we focus on results related to philosophical and ethical concerns— specifically, six commonly discussed in the neuroethics literature: autonomy, authenticity, identity, enhancement, personal responsibility, and neurocentrism. 





What we found 





In a way, we were not surprised that the public and the academic groups have different concerns, or that different modalities shift the focus of attention to particular ethical and philosophical concerns. What surprised us most was the predominance of the theme of “personal and social responsibility,” which aimed to capture whether or not a person with severe mental illness has a responsibility to do something about their disorder (such as trying psychotherapy or another intervention instead of, or in addition to, drugs/surgery). This theme also captures current social practices (e.g., stigma, blaming patients for their illnesses, and social conditions that might promote the prevalence of mental disorders) that might be responsible for exacerbating or failing to address mental health problems, and it was also one of the themes for which we did not find any statistically significant difference between public and academic concerns. In the academic literature, personal responsibility was the most discussed concern for pharmacological interventions (32.6%), yet in the neurosurgery literature, it was issues of identity (62%). In the public comments, personal and social responsibility was the most discussed concern for both types of interventions (pharma: 27.32%; neuro: 14.28%).
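To illustrate the kind of comparison behind these percentages, here is a minimal sketch of how a difference in theme frequency between the two groups can be tested with a chi-square test of independence. The counts below are invented for illustration only; they are not the study's data.

from scipy.stats import chi2_contingency

#                  [theme raised, theme not raised]
table = [[33, 88],   # public comments (hypothetical counts)
         [28, 58]]   # academic articles (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A p-value above 0.05 would be consistent with finding no statistically
# significant difference between the groups for this theme.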








Image courtesy of Wikimedia Commons.

While we expected to find differences in the types of concerns raised, we did not expect to find such a difference in the frequency with which ethical and philosophical concerns are raised. Overall, philosophical issues were discussed less frequently in the public comments than in the academic literature for both types of intervention (pharma: 40.55% vs. 66.27%; neuro: 27.32% vs. 82.35%). Perhaps we expected a public as keen to engage with these topics as we are; instead, we found a public that challenges assumptions about scientific validity and about what counts as a disorder, and one that is willing to share personal anecdotes online.





What we think it means 





These findings reveal similarities and discrepancies in how philosophical issues associated with these two types of psychiatric treatment are discussed in professional circles and among the public. While the public might be less likely to use academic terms such as “authenticity” or “autonomy,” the few commenters who did bring up ethical concerns regarding the use of particular psychiatric interventions used terms such as “personality,” “true self,” or “his choice.” Thus, there may be a need to look more deeply into whether there are substantial differences in how these terms shape ethical concerns in the two groups. The differences found regarding the type of intervention also have important implications. For example, it is possible that academics see an important difference between the interventions: in the case of psychiatric neurosurgery, identity is a frequently raised ethical concern, but this is not true in the case of pharmaceuticals. The fact that contemporary forms of psychiatric neurosurgery are relatively new might explain why many members of the public commenting on neurosurgical interventions draw on more familiar interventions, such as pharmacological ones, to try to understand and assess the issues involved.





Conclusions 





The public as well as psychiatric patients should be able to access and understand the concerns of the scientific community in order to better discern the risks and benefits of treatments. There is certainly a growing acknowledgement that the public is not waiting to be educated by the experts (a.k.a. the deficit model of public engagement) but rather is a group of people who bring valuable perspectives and knowledge from which experts can learn and benefit. As such, it is essential that the scientific community adequately considers and addresses the public’s concerns and perspectives so as to provide future patients with effective care. 



_______________






Dr. Laura Cabrera is Assistant Professor of Neuroethics at the Center for Ethics and Humanities in the Life Sciences. She is also Faculty Affiliate at the National Core for Neuroethics, University of British Columbia. Laura Cabrera's interests focus on the ethical and societal implications of neurotechnology, in particular when used for enhancement purposes as well as for treatments in psychiatry. She has been working on projects at the interface of conceptual and empirical methods, exploring the attitudes of professionals and the public toward pharmacological and brain stimulation interventions, as well as their normative implications. Her current work also focuses on the ethical and social implications of environmental changes for brain and mental health. She received a BSc in Electrical and Communication Engineering from the Instituto Tecnológico de Estudios Superiores de Monterrey (ITESM) in Mexico City, an MA in Applied Ethics from Linköping University in Sweden, and a PhD in Applied Ethics from Charles Sturt University in Australia. Her career goal is to pursue interdisciplinary neuroethics scholarship, provide active leadership, and train and mentor future leaders in the field.






Rachel McKenzie is a fourth year undergraduate studying neuroscience at Michigan State University. She is interested in bioethics and science communication, and hopes to pursue a graduate degree and continue research in these areas after graduating in the spring.  






Robyn Bluhm is an Associate Professor in the Department of Philosophy and Lyman Briggs College at Michigan State University. Her research focuses on the relationship between epistemological and ethical issues in medicine and in neuroscience. She is the co-editor of Neurofeminism: Issues at the Intersection of Feminist Theory and Cognitive Science and of the International Journal of Feminist Approaches to Bioethics, and is the editor of Knowing and Acting in Medicine. 







References

1. Kramer, P. D. 1993. Listening to Prozac. New York: Viking Penguin.

2. Lipsman, N. and W. Glannon. 2012. Brain, mind and machine: What are the implications of deep brain stimulation for perceptions of personal identity, agency and free will? Bioethics 27: 465–470. doi:10.1111/j.1467-8519.2012.01978.x.

3. Synofzik, M. and T. E. Schlaepfer. 2008. Stimulating personality: Ethical criteria for deep brain stimulation in psychiatric patients and for enhancement purposes. Biotechnol J 3: 1511–1520. doi:10.1002/biot.200800187.

4. de Haan, S., E. Rietveld, M. Stokhof, and D. Denys. 2017. Becoming more oneself? Changes in personality following DBS treatment for psychiatric disorders: Experiences of OCD patients and general considerations. PLoS ONE 12: e0175748. doi:10.1371/journal.pone.0175748.

5. Klaming, L. and P. Haselager. 2010. Did my brain implant make me do it? Questions raised by DBS regarding psychological continuity, responsibility for action and mental competence. Neuroethics 6: 527–539. doi:10.1007/s12152-010-9093-1.

6. Glannon, W. 2012. Neuromodulation, agency and autonomy. Brain Topogr 27: 46–54. doi:10.1007/s10548-012-0269-3.

7. Gilbert, F. 2015. A threat to autonomy? The intrusion of predictive brain implants. AJOB Neurosci 6: 4–11. doi:10.1080/21507740.2015.1076087.

8. Kraemer, F. 2011. Authenticity anyone? The enhancement of emotions via neuro-psychopharmacology. Neuroethics 4: 51–64. doi:10.1007/s12152-010-9075-3.

9. Singh, I. 2013. Not robots: children’s perspectives on authenticity, moral agency and stimulant drug treatments. J Med Ethics 39: 359–366.

10. Goering, S., E. Klein, D. D. Dougherty, and A. Widge. 2017. Staying in the loop: Relational agency and identity in next-generation DBS for psychiatry. AJOB Neurosci 8: 59–70. doi:10.1080/21507740.2017.1320320.

11. Henrich, N. and B. Holmes. 2013. Web news readers’ comments: Towards developing a methodology for using on-line comments in social inquiry. Journal of Media and Communication Studies 5(1): 1–4.

Disclosures: None





Want to cite this post?



Cabrera, L., McKenzie, R., Bluhm, R. (2018). Ethical Concerns Surrounding Psychiatric Treatments: Do Academics Agree with the Public? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/06/ethical-concerns-surrounding.html

Tuesday, June 5, 2018

Participatory Neuroscience: Something to Strive For?





By Phoebe Friesen








Image courtesy of Pixabay.

In the last few decades, there has been an increasing push towards making science more participatory by engaging those who are part of or invested in the community that will be impacted by the research in the actual research process, from determining the questions that are worth asking, to contributing to experimental design, to communicating findings to the public. Some of this push stems from the recognition that research is always value-laden and that the values guiding science have long been those of an elite and unrepresentative few (Longino, 1990). This push also has roots in feminist standpoint theory, which recognizes the way in which marginalized individuals may have an epistemic advantage when it comes to identifying problematic assumptions within a relevant knowledge project (Wylie, 2003). Additionally, many have noted how including the voices of those likely to be impacted by research can support the process itself (e.g. by identifying meaningful outcome measures) (Dickert & Sugarman, 2005). As a result, participatory research is becoming widely recognized as having both ethical and epistemic advantages. The field of neuroscience, however, which takes the brain as its primary target of investigation, has been slow to take up such insights. Here, I outline five stages of participatory research and the uptake of neuroscientific research in each, discuss the challenges and benefits of engaging in such research, and suggest that the field has an obligation, particularly in some cases, to shift towards more participatory research.




The first and lowest stage of participatory research is represented by most scientific projects, in which all power and decision-making lies in the hands of the investigators. The vast majority of neuroscientific research takes place at this stage of participation (Robertson, Hiebert, Seergobin, Owen, & MacDonald, 2015).





The second stage is best exemplified by citizen science projects, in which the parameters of a research project are established by investigators and the content of the project is then filled in by many non-scientists. In some cases, there are opportunities for participants to become more involved (and even to be given credit for discoveries and publications), but this does not always occur (Wicks, Vaughan, Massagli, & Heywood, 2011). A significant amount of neuroscientific research takes place at this stage, often in the form of games that allow investigators to crowdsource many hours of identification tasks that the human eye can perform better than algorithms (e.g. EyeWire, Mozak). Research at this stage can also be derived from social networks for patients that collect vast quantities of patient-reported data (e.g. PatientsLikeMe) (Suresh, 2016).





The third stage gives some say to those impacted by the science, offering them an opportunity to shape the data set being collected, through their words or interpretations of their own experiences, or by commenting on the research design before it is implemented. Some neuroscientific research takes place at this stage, particularly research that involves qualitative data or community consultation (Klein et al., 2016; Stratford et al., 2016).





The fourth stage can be found in methodologies such as critical participatory action research and community-based participatory research, in which a community contributes to the research design from start to finish and is given ownership of data and authority over how that data is communicated to others. Central to this stage is the premise that power is shared between scientists and community members throughout the entire research process. This form of research appears to be very rare within neuroscience. One neuroscientific project, which investigates the influence of colors on learning, resembles these models in some ways; it was inspired by and continues to involve high school students from Catalonia (Ruiz Mallen et al., 2016).








Members of a citizen science team

Image courtesy of Wikimedia Commons.

The fifth and final stage of participatory research is exemplified by user-led research, in which a scientific project is entirely conducted by individuals outside of industry or the academy (e.g. survivor research in mental health). I have found no examples of research within neuroscience that takes place at this stage.





Taking this all into account, it looks like neuroscience is not very participatory, and when it is, community members’ role is primarily that of free laborers. Why might this be the case? Part of the answer likely lies in the many challenges that participatory research involves, some of which are unique to the field of neuroscience and some of which are seen across scientific endeavors. Those unique to neuroscience include the background and technical knowledge required, the expensive tools involved that are difficult to obtain and operate, and the hype that frequently surrounds findings within the field (Stratford et al., 2016). Similarly, there is often a significant distance between neuroscientific research and the communities that might be impacted by it, making applications or implications of the research harder to foresee or understand. This distance might also mean that only a small subset of the population is interested in engaging in participatory neuroscientific research, or that there is a greater risk of “wandering terminology,” which occurs when terms used in one epistemic realm are misapplied within another (e.g. see Tekin, 2017).





There are also several structural challenges that prevent researchers, in any field, from engaging in participatory research. Funding structures may not support these forms of research, especially since building relationships with impacted communities and engaging in this form of research can take a significant amount of time (Price, Chatterjee, & Biswas, 2014). Relatedly, reward systems within institutions, including the pressure to publish and bring in funding, can deter investigators from engaging in participatory research and from sharing data, decision-making power, or publications with additional collaborators (Choudhury, Fishman, McGowan, & Juengst, 2014). Additionally, systems of research oversight (e.g. IRBs, RECs), which were not constructed with participatory research in mind, are often unable or unwilling to accommodate research that cannot be entirely specified in advance and that includes individuals from outside academia in data collection and analysis (Boser, 2007; Noorani, Charlesworth, Kite, & McDermont, 2017).





Additional challenges to engaging in participatory research are epistemic, including the risks of self-selection bias and of collecting variable data in citizen science projects (Kelling et al., 2015). It is also not unusual for investigators and those impacted by research to be interested in different kinds of research questions and to have different expectations of what good research looks like (Ottinger, 2010). Ethical challenges arise as well, including the risk of engaging in tokenism, where participants are merely invited to participate in a symbolic sense (Stratford et al., 2016), or of failing to fully inform the community about how their contributions are being used or sold (e.g. Lumosity; see Purcell & Rommelfanger, 2015), especially since “the allure of participation in a scientific study [can] be used as a Trojan horse to entice individuals to part with information they might not otherwise volunteer” (Janssens & Kraft, 2012, p. 1).





Given these challenges, it is hard not to ask: why bother? Are there good reasons that neuroscience should strive to be more participatory, or should it simply continue as is, reaping the rewards of participation when humans can outperform algorithms, but otherwise remaining within the ivory tower?







Participatory action research

Image courtesy of Flickr.

An answer to this question requires a look at the benefits that participatory research can produce, which are largely captured by three Rs: Rigor, Relevance, and Reach (Balazs & Morello-Frosch, 2013). Participatory research enhances scientific rigor by encouraging both reflection and transparency during the process; community members may also contribute to the identification of assumptions and issues, including those stemming from various conflicts of interest. Participatory research can also increase the relevance of research, because topics neglected in mainstream research are more likely to be taken up, and results that will directly benefit those impacted by the investigation are often pursued. Finally, the reach of the knowledge is expanded, in that the community may be able to access people or places that scientists could not, both during recruitment and during the transmission of results; this can lead to larger and more diverse data sets, which in turn improve generalizability (Cooper, 2017).





Taking these benefits into account, it appears that the field of neuroscience has a lot to gain from including more non-investigators in the research process, not just during data collection, but while asking larger questions regarding methodologies and goals. Keeping in mind the challenges discussed above, it is likely that some neuroscientific projects are better off remaining in the halls of the academy, but particularly in cases where neuroscientific research involves vulnerable communities or when a drug or device is being developed for clinical application, significant benefits may result from engaging in participatory research (e.g. through ensuring that an intervention meets the needs of its users). Within participatory research, misleading assumptions can be identified and excluded early on, applications of the science can be kept in mind throughout the process, and scientists can be given the opportunity to learn what matters most to the individuals impacted by their work. This shift will require buy-in not only from investigators, but also from institutions and regulators, who often restrict participatory research through reward systems and systems of oversight that are based on a narrow conception of what constitutes science.


_______________



Phoebe Friesen is a postdoctoral fellow at the Ethox Centre at the University of Oxford. She recently received a PhD in philosophy from the CUNY Graduate Center, where her doctoral work focused on theoretical and ethical issues related to the placebo effect. She works primarily on questions within the realms of research ethics, bioethics, and psychiatry.


References

Balazs, C. L., & Morello-Frosch, R. (2013). The three Rs: How community-based participatory research strengthens the rigor, relevance, and reach of science. Environmental Justice, 6(1), 9–16.

Boser, S. (2007). Power, ethics, and the IRB: Dissonance over human participant review of participatory research. Qualitative Inquiry, 13(8), 1060–1074.

Choudhury, S., Fishman, J. R., McGowan, M. L., & Juengst, E. T. (2014). Big data, open science and the brain: lessons learned from genomics. Frontiers in Human Neuroscience, 8(239). doi:10.3389/fnhum.2014.00239

Cooper, R. (2017). Classification, rating scales, and promoting user-led research. In Extraordinary Science and Psychiatry: Responses to the Crisis in Mental Health Research, 197.

Dickert, N., & Sugarman, J. (2005). Ethical goals of community consultation in research. American Journal of Public Health, 95(7), 1123–1127.

Janssens, A. C. J., & Kraft, P. (2012). Research conducted using data obtained through online communities: ethical implications of methodological limitations. PLoS Medicine, 9(10), e1001328.

Kelling, S., Fink, D., La Sorte, F. A., Johnston, A., Bruns, N. E., & Hochachka, W. M. (2015). Taking a ‘Big Data’ approach to data quality in a citizen science project. Ambio, 44(4), 601–611.

Klein, E., Goering, S., Gagne, J., Shea, C. V., Franklin, R., Zorowitz, S., . . . Widge, A. S. (2016). Brain-computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain-Computer Interfaces, 3(3), 140–148. doi:10.1080/2326263X.2016.1207497

Longino, H. E. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton University Press.

Noorani, T., Charlesworth, A., Kite, A., & McDermont, M. (2017). Participatory research and the medicalization of research ethics processes. Social & Legal Studies, 26(3), 378–400.

Ottinger, G. (2010). Buckets of resistance: Standards and the effectiveness of citizen science. Science, Technology, & Human Values, 35(2), 244–270.

Price, A., Chatterjee, P., & Biswas, R. (2014). Time for person centered research in neuroscience: users driving the change. Annals of Neurosciences, 21(2), 37–40. doi:10.5214/ans.0972.7531.210201

Purcell, R. H., & Rommelfanger, K. S. (2015). Internet-based brain training games, citizen scientists, and Big Data: ethical issues in unprecedented virtual territories. Neuron, 86(2), 356–359.

Robertson, B. D., Hiebert, N. M., Seergobin, K. N., Owen, A. M., & MacDonald, P. A. (2015). Dorsal striatum mediates cognitive control, not cognitive effort per se, in decision-making: An event-related fMRI study. Neuroimage, 114, 170–184.

Ruiz Mallen, I., Riboli-Sasco, L., Ribrault, C., Heras, M., Laguna, D., & Perié, L. (2016). Citizen science, engagement and transformative learning: a study of the co-construction of a neuroscience research project in Catalonia.

Stratford, A., Brophy, L., Castle, D., Harvey, C., Robertson, J., Corlett, P., . . . Everall, I. (2016). Embedding a recovery orientation into neuroscience research: Involving people with a lived experience in research activity. Psychiatric Quarterly, 87(1), 75–88. doi:10.1007/s11126-015-9364-4

Suresh, A. (2016). The science behind WeCureALZ: A participatory research project tackling Alzheimer’s disease. Retrieved from http://blogs.plos.org/citizensci/2016/04/22/science-behind-wecurealz-citizen-science-alzheimers/

Tekin, S. (2017). Looking for the self in psychiatry: Perils and promises of phenomenology–neuroscience partnership in schizophrenia research. In Extraordinary Science and Psychiatry: Responses to the Crisis in Mental Health Research, 249.

Wicks, P., Vaughan, T. E., Massagli, M. P., & Heywood, J. (2011). Accelerated clinical discovery using self-reported patient data collected online and a patient-matching algorithm. Nature Biotechnology, 29(5), 411–414.

Wylie, A. (2003). Why standpoint matters. In Science and Other Cultures: Issues in Philosophies of Science and Technology, 26–48.

Want to cite this post?

Friesen, P. (2018). Participatory Neuroscience: Something to Strive For? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2018/06/participatory-neuroscience-something-to.html