It’s a common saying, and one that belies any real capacity to predict another’s thoughts. Yet our drive to uncover arguably the most secretive and private form of information – thoughts – has led to major advances in the science of “reading one’s mind.”
Current technologies can make generally accurate predictions about robust neurological phenomena, such as distinguishing whether an individual is looking at (or imagining) a face or a home (Haynes and Rees, 2006). However, as these technologies advance, it may soon be possible to detect and identify more complex and covert thoughts – lies (Langleben et al., 2005) and even subconscious thought (Dehaene et al., 1998). With this ability, we must question the ethics surrounding the pursuit of this knowledge.
Figure 3 from Haynes and Rees, 2006
The foremost concern with the use of this technology is the right to privacy. To what extent can we sidestep an individual’s basic right to privacy? If an individual is unaware of their own thoughts, are those thoughts subject to the same privacy rules? To even begin addressing this question, we would have to establish the nature of privacy as it relates to our thoughts. Popular opinion would hold that thoughts are private, pure and simple, off limits to anyone but those who experience them.
But what of more ambiguous situations?
Would it be ethical to use thoughts as evidence? Conceivably, there would be a push to use this technology to try defendants (Dickson and McMahon, 2005), but we must call into question the nature of the information obtained from a defendant. Perhaps they are experiencing a subconscious thought rather than actively lying. Is it possible to distinguish between these two states? Even if we could, what is the quality of the evidence garnered? It is possible that a defendant exhibits a pattern of thought that suggests a lie, when in reality it is the superimposition of a subconscious thought and a truthful one.
Of further concern, thoughts cannot be assumed to translate into action, and so we must again ask what the acquisition of a thought means. Take, for example, the common scenario of road rage. We have all felt annoyance at a reckless driver who nearly clips our vehicle. In the instant after their car narrowly evades ours, a variety of shock-induced thoughts come to mind. Granted, some individuals may put such thoughts into action, but the large majority grumble under their breath and carry on with their day. Knowing that all of these people experience similar thoughts after such an incident, can we associate a thought in line with road rage with an action in line with road rage? Clearly we cannot, as a thought in this case does not equate to action. Thus, returning to the notion of reading these thoughts: what would a read thought mean, and how should it be interpreted?
Admittedly, most of these concerns address the technical limitations of the technology in its current state (Haynes and Rees, 2006), and so perhaps a mature version of the technology could be applied, in limited ways, to cases such as these. However, no matter how refined the technology becomes, we cannot readily solve the dilemma of which thoughts should be read and which should be left to the individual, and therein lies our greatest quandary.
Ultimately, I believe we will inevitably come to see this technology used in limited realms, as we have an important history outlining when privacy can be impinged upon. Notably, legal wiretaps and search warrants are justified breaches of privacy, and this technology will conceivably, in time, be used in a similar fashion. In addition, there will be cases where individuals either consent to having their brain activity recorded or are required by law to do so. Still, while these possibilities are near certain, the question remains where we as a society will place limits on using and accessing this information: while I may soon be able to know what you’re thinking, would we even want to know?
--Gordon Dale
Want to cite this post?
Dale, G. (2012). I know what you’re thinking… The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/
References
Haynes, J., & Rees, G. (2006). Decoding mental states from brain activity in humans. Nature Reviews Neuroscience, 7, 523-534. doi: 10.1038/nrn1931
Dickson, K., & McMahon, M. (2005). Will the law come running? The potential role of "brain fingerprinting" in crime investigation and adjudication in Australia. Journal of Law and Medicine, 13(2), 204-222.
Dehaene, S., Naccache, L., Le Clec'H, G., Koechlin, E., Mueller, M., Dehaene-Lambertz, G., van de Moortele, P., & Le Bihan, D. (1998). Imaging unconscious semantic priming. Nature, 395(6702), 597-600. doi: 10.1038/26967
Langleben, D. D., Loughead, J. W., Bilker, W. B., Ruparel, K., Childress, A. R., Busch, S. I., & Gur, R. C. (2005). Telling truth from lie in individual subjects with fast event-related fMRI. Human Brain Mapping, 26, 262-272.