By Sean Batir (1), Rafael Yuste (1), Sara Goering (2), and Laura Specker Sullivan (2)
*Image from Kavli Futures Symposium*
(1) Neurotechnology Center, Kavli Institute of Brain Science, Department of Biological Sciences, Columbia University, New York, NY 10027
(2) Department of Philosophy, and Center for Sensorimotor Neural Engineering, University of Washington, Seattle, WA 98195
Detailed biographies for each author are located at the end of this post
Few would deny the divide between the humanities and the sciences, often described as the “two cultures.” If humanistic progress is to be made with transformative technologies, this divide must be broken down. The 2016 Kavli Futures Symposium, organized by Dr. Rafael Yuste and Dr. Sara Goering and held at the Neurotechnology Center of Columbia University, addressed this divide by curating an interdisciplinary dialogue between leading neuroscientists, neural engineers, and bioethicists across three broad topics: identity and mind reading, agency and brain stimulation, and definitions of normality in the context of brain enhancement. The message of the event is clear: dialogue between neurotechnology and ethics is necessary because novel neurotechnologies are poised to generate a profound transformation in our society.
With the emergence of technology that can read the brain’s patterns at an intimate level, questions arose about how these methods could reveal the core of human identity – the mind. Jack Gallant, from UC Berkeley, reported on a neural decoder that can identify the visual imagery experienced by human subjects (1). As subjects in Gallant’s studies watched videos, the decoder learned to identify which videos they were watching from their fMRI data. Gallant is convinced that “technologically, ubiquitous non-invasive brain decoding will happen. The only way that’s not going to happen is if society stops funding science and technology.”
Other panelists at the symposium shared Gallant’s confidence in the advent of technology that can decode the content of mental activity, and discussed how motor intentions can be decoded and used to control external objects, like a computer cursor or robotic arm. For instance, Miguel Nicolelis from Duke University described a “Brain Net” that merged neural commands from the brains of three monkeys “into a collective effort responsible for moving a virtual arm.” As the leader of one of the laboratories at the forefront of improving brain-computer interfaces (BCIs) for prosthetic control, Nicolelis raised the question of whether such technologies “should be used for military applications.” Beyond specialized use, Nicolelis expressed concern that access to new technologies could be limited – who will be using brain decoders or multiple brain-machine interfaces, and why?

Since sophisticated brain stimulation technologies are already capable of eliciting complex behaviors in lower mammals, ethicists discussed an array of concerns related to agency: how can we know whether our actions and behaviors actually result from our own intentions when adaptive neural devices interact with our brains? Pim Haselager of Radboud University explored our “sense of agency” in experiments designed to separate our belief in our agency from our actual causal efficacy in acting (2). His work suggests that “the harder you work, the more agency you feel,” and he noted that maintaining a strong sense of agency while using a BCI may be linked to a relatively high level of effort on the part of the user. Haselager described the sense of agency as multi-faceted – while we are learning more about the dimensions of agency, interpersonal and psychosocial issues are still emerging with neurotechnological research. Ed Boyden from MIT, whose laboratory is developing tools for mapping and controlling the brain through optogenetics, continued the discussion of the multifaceted nature of agency by asking, “Can detailed models of an individual’s [mental] traits be reconstructed to the point in which simulation could be possible?” He suggested that as the ability to probe neural circuits expands, we will face increasingly complex questions about ourselves and our priorities. If a human-like simulation could be developed, would it possess the same internal dilemma of agency that persists in any decision-making human?
Leigh Hochberg, from Brown University, whose laboratory focuses on brain-computer interfaces for patients with paralysis and the clinical trials of BrainGate technology, suggested that how and why the privacy of neural data is protected depends on what we think is in the data – what does it tell us? This affects how he assesses risk and benefit in his own work: in a trial with a small number of participants, clinical data might be easily identifiable, which requires what Hochberg described as an “extraordinary consent process.” With growing evidence of the safety and efficacy of BCIs, increasing numbers of participants in BCI clinical trials, and changes to consent requirements, more thinking is needed about how neural data and security are handled. Finally, Martha Farah, from the University of Pennsylvania, raised important conceptual questions about agency. She proposed that agency is ethically significant because it is necessary for freedom and autonomy, which underlie commonsense notions of moral responsibility. The concern with neurotechnology and agency is not whether an intervention is “in the head,” but whether it is quantitatively different from preceding technologies, like pharmaceuticals – does it allow for drastically more control over individuals and their agency? Farah suggested that new neural technology might allow for more fine-grained control of human thoughts and behavior, a possibility that raises economic and regulatory issues in the short term, equality and opportunity questions in the medium term, and existential questions about humanity in the long term.

Gregor Wolbring from the University of Calgary expanded the conversation on normality and enhancement to address “ability privilege,” the idea that “individuals who enjoy the advantages are unwilling to give up their advantages,” because for many people the judgment of abilities is intrinsic to one’s self-identity and security. He posed questions about how we determine ability expectations, and how those expectations alter the treatment of people whose bodies are not typical. Will disabled people want neurotechnologies? Perhaps, if they are understood as tools to achieve well-being, rather than as ways to “fix” people. When asked about the role of neuroprosthetics in the disability world, Wolbring replied, “Tool, yes. Identity, no.” David Wasserman from the NIH turned the conversation to neurodiversity and the movement to reframe some neuroatypical forms of processing as forms of valuable diversity. Such individuals may not need medical technology so much as better social accommodation. Wasserman thus argued for a shift in funding priorities, emphasizing that “more funding ought to be given to…biomedical research that would increase the flourishing” of people living with various neuroatypical conditions. He suggested that such research should be less focused on medical “fixes,” even though the public tends to be moved by research justifications focused on medical advancement. This latter point was confirmed by Gallant, who noted that “while scientists do a bad job of explaining how science works, the public knows they get sick, and they go to the hospital. This is why the NIH budget is 10 times greater than NSF….medicalizing research has the good effect of attracting funding to biomedical research.” With this knowledge in hand, a slightly clearer picture begins to emerge: unless funding structures change, research at the frontiers of neurotechnology may be forced to address normalization in a medical context for the sake of securing further funding.
An open discussion held at the end of the Kavli Futures Symposium with all speakers and members of the NIH BRAIN Neuroethics Workgroup synthesized the kernels of knowledge shared throughout the event, including a sense of urgency about funding ethical and legal work to guide the development of new technologies that have the capacity to radically transform the human experience. Multiple stakeholders – scientists, disabled people, members of the general public, and ethicists – need to work together to consider the ethical aspects of scientific and technological developments. These ethical aspects are clearest in the short term: funding priorities, institutional space for ethics, translational goals, and social support for individuals using novel technologies. Long-term questions can also be raised, including the value of preserving the separateness of individuals with private mental space, the potential for combining consciousness toward shared tasks, and the significance of potential enhancements that radically alter what we can directly control with our brains.
By exploring the collective web of thought that connects the humanities and the sciences, the symposium identified several profound issues. Attending to these issues should galvanize the relevant public and private entities to integrate neurotechnological research with human values more fully.
Author Biographies
Sara Goering is Associate Professor of Philosophy at the University of Washington, Seattle, and affiliated with the Program on Values and the Disability Studies Program. She leads the ethics thrust at the Center for Sensorimotor Neural Engineering.
References
1. Naselaris, T., et al. (2015). A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes. NeuroImage 105: 215-228.
2. Haselager, W.F.G. (2013). Did I do that? Brain-Computer Interfacing and the sense of agency. Minds and Machines 23(3): 405-418.
Want to cite this post?
Batir S, Yuste R, Goering S, and Specker Sullivan L. (2016). The 2016 Kavli Futures Symposium: Ethical foundations of Novel Neurotechnologies: Identity, Agency and Normality. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2016/11/the-2016-kavli-futures-symposium_14.htm