
Tuesday, December 18, 2012

Two Internship Openings with Emory's Neuroethics Program for Spring 2013!



NEUROETHICS INTERNSHIP OPENINGS


Are you interested in the ethical and social implications of neuroscience?


The Emory Neuroethics Program invites you to apply for a Neuroethics Internship. We are looking for up to two self-motivated, creative, and organized individuals who are interested in topics that fall at the intersection of neuroscience, society, and ethics.



The Neuroethics Program is a community of scholars at the Emory University Center for Ethics who explore the ethical and social implications of neuroscience and neurotechnology. You can be part of that exciting team.



The Center for Ethics at Emory is an interdisciplinary hub that collaborates with every school at Emory University as well as local universities and the private and public community. The Center for Ethics houses The American Journal of Bioethics Neuroscience, the premier journal in Neuroethics. The director of the Center for Ethics, Dr. Paul Root Wolpe, is one of the founders of the field of Neuroethics and of the International Neuroethics Society, where he serves on the Executive Board.



Students will have creative input into this new, growing program and play an integral role in its day-to-day functions. Duties will include things like:



• Social media: writing for The Neuroethics Blog, Facebook, and web design

• Participating in projects led by the undergrad-run Neuroethics Creative

• Neuroethics Journal Club

• Organizing Symposia

• Neuroethics Research and more…



Please visit our program page (ethics.emory.edu/neuroethics) or Facebook (The Neuroethics Program at Emory) to learn more about us, or contact us at neuroethics@emory.edu.



To apply, please submit a one-page letter of interest and resume to neuroethics@emory.edu by January 18, 2013.



Eligibility and expectations:

• Must be organized and deadline-oriented

• Must be self-motivated

• Must currently be an undergraduate student (can be from any discipline)

• Hours are flexible, but must be consistent


Who's responsible for 'free will?' Reminding you that all ideas were once new





A figure adapted from Soon, Brass, Heinze and Haynes' 2008 fMRI study, in which a "free decision" could be predicted above chance 7 seconds before it was consciously "felt." Those green globs could be thought of as the unconscious part of your brain that is actually in control of your life. Image here, paper here.

As seen previously on this blog, the notion of "Free Will" is a bit of a Neuroethics battleground. About 30 years ago, Dr. Benjamin Libet et al. published an experiment in which the researchers were able to predict when human volunteers would press a button, a fraction of a second before the participants themselves realized they were going to do so. And despite suggestions that the scientific method is breaking down, there is an entire cottage industry of scientists replicating Libet's result and finding more and more effective ways to predict what you are going to be 'freely' thinking.
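Soon et al.'s prediction came from a pattern-classification ("decoding") analysis: a classifier is trained on fMRI activity patterns recorded seconds before the button press and tested on whether it can guess the eventual left-or-right choice better than chance. The sketch below illustrates that style of analysis in Python with scikit-learn; the data are synthetic, and the trial count, voxel count, and signal strength are invented rather than taken from the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for real data: activity patterns sampled
# several seconds before each button press.
n_trials, n_voxels = 200, 500
choices = rng.integers(0, 2, size=n_trials)   # 0 = left, 1 = right
patterns = rng.normal(size=(n_trials, n_voxels))

# Inject a weak choice-related signal into a handful of voxels,
# mimicking the modest predictability reported in such studies.
patterns[:, :10] += 0.3 * choices[:, None]

# Linear classifier evaluated with cross-validation, as is
# standard in fMRI decoding work.
clf = SVC(kernel="linear")
accuracy = cross_val_score(clf, patterns, choices, cv=10).mean()

print(f"Decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```

It's worth noting that decoding accuracies in this literature are only modestly above chance (on the order of 60% in Soon et al.), a caveat that tends to get lost when the result is glossed as the unconscious brain "being in control."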



I'll defer to Scott Adams of Dilbert fame to describe why this is a problem:





This is from 1992. Libet's study was published in 1983. Your life has been absurd for the past 30 years. (I haven't been able to track down exactly what "Brain Research" Scott Adams was referencing here, but it seems to be similar to the Libet experiment.) From http://dilbert.com/strips/comic/1992-09-13/


The implications are pretty tremendous: if my conscious mind is just observing a decision that has already been made, and not participating in that decision, how is the decision mine? How can I be blamed for decisions that I am merely watching?



However, it's hard to scientifically argue that free will is (or isn't) an illusion unless you know exactly what it is in the first place. So Jason Shepard and Shane Reuter ran a test to see how folks actually use the phrase 'free will.' All well and good; that certainly beats just assuming that everyone has the same definition.



But then a sinking feeling emerges: here is an idea so precious to us that we actually start becoming worse people when we hear that it is an illusion. And yet this authoritative definition is coming to us through majority rule? Our hero is roused to action, and sets out to find a 'correct' definition, not just a 'popular' one...




While using undergraduates in these sorts of psychology studies isn't necessarily problematic, I've been an undergraduate, and thus do not trust these people with my free will.  They might put it in a Dr. Pepper bottle filled with dry ice and chuck it into an abandoned parking lot late at night.  That would be terrible.  Image from www.quickmeme.com

But with so many variants on free will floating around, how do you choose a 'correct' definition? Does free will mean free from the laws of physics (metaphysical libertarianism)? Free from control by an omnipotent God? Free from mental disease, free from peer pressure, free from our own emotions? And what, exactly, is a 'will'? Does it need to be 'conscious'? Our hero thinks to himself, "What would Science do?"



And suddenly the answer becomes embarrassingly clear: why, just give the definition of the phrase to whoever coined it! Those who followed should be forced to use different phrases for whatever 'revisionist' notions of free will they invented: free-ish will. Free-range will. Will Zero. Our hero sits back and reflects on the cleverness and superiority of the Sciences over all other domains of thought [1]. Now all that is left to do is a wee bit of Googling. Pish pish, easy post.



However, after significant Googling, and digging through two different libraries, and further Googling, and reading books that were over 100 years old[2], and talking to people who were familiar enough with the topic to actually respect its subtleties, and staring at a computer screen wondering what the hell he had gotten himself into, our hero realized that figuring out who is responsible for infecting us with an attachment to 'free will' wasn't going to be an easy task.  More like a thesis, or a career.



What is clear from the relatively small portion of the literature that I've been able to digest is that people have been talking about free will for over a millennium, but less than three millennia.  Probably.  In the early 20th century it was common to presume that folks have always had a notion of free will, an example being in 1923  when W.D. Ross confidently asserted that “Aristotle shared the plain man’s belief in free will.” This was despite Ross's admission, two pages later, that Aristotle “did not examine the problem very thoroughly, and did not express himself with perfect consistency.”[3]  Later scholars took this lack of clear discussion to conclude that Aristotle lacked a notion of will, free or otherwise, altogether.  So, Aristotle’s clean.  For now.








Would Shepard and Reuter's study have gone differently if St. Augustine had taken a psychology class at Emory in the spring of 2012? Images from here and here

In his 1974 Sather lectures at Berkeley[4], Albrecht Dihle put forth his argument that St. Augustine, in the 4th century AD, was the first person to put together our 'Western notion' of free will.  St. Augustine came to an (arguably libertarian) notion of free will as a way to solve the problem of evil: how could a benevolent and omnipotent God allow for a world with evil? Answer: humans are responsible for evil due to their free will (which got tainted when Adam and Eve consumed a particular fruit).  Augustine describes this 'free will' as a first cause, with no causes before it, meaning God gets none of the blame and gets to remain omnipotent.  So, St. Augustine is the one responsible!  Or so it seems...



Fast forward to 1997, and we find Michael Frede putting forth an argument (in his own Sather lectures at Berkeley[5]) that Dihle was being too restrictive in his definition of 'Free Will,' and that St. Augustine got his idea of what free will was from the Stoics. Frede argues that it was actually the late Stoic Epictetus[6] who first developed a full notion of will, in the 2nd century AD. Epictetus gets the blame because he was the first to link three critical claims together: that all voluntary acts are caused by wishes, that wishes are created by the rational soul, and therefore that all voluntary acts are caused by acts of reason, which is to say, caused by choice. This contrasts with the Aristotelian and Platonic schools of thought, which held that voluntary acts could also be the result of non-rational urges (thirst, hunger, etc.). So Epictetus is responsible then. Okay...



But coming up to the present, we see reactions to Frede's arguments. Karen Nielsen recently published an article[7] in which she argues that Aristotle (HIM again!) actually developed a notion of will prior to Epictetus, making the point that Frede translates the Greek 'prohairesis' as 'will' for Epictetus and as 'decision' for Aristotle (although Nielsen makes no comment on the 'free' aspect). So to understand where 'will' came from, we are looking at shifting definitions in ancient Greek. AAGH!



The lesson here is that this concept didn't emerge suddenly out of the history of the West. "Free will," whatever it is, was a gradual development over thousands of years, with input from several schools of classical Greek thought, as well as Jewish and Christian traditions. Perhaps then, instead of thinking of "free will" as a single well-defined idea, it should be thought of as an entire lineage of ideas. If this is the case, neuroscientists, science writers, and the public at large need to be very cautious when asserting that "free will is an illusion" is a scientifically valid hypothesis. If neuroscience wants to make claims about "free will," it needs to be both more specific as to which variant of "free will" it refers to, and broader in the variants of "free will" it entertains.







[1] I hope the childish language here makes it clear that I actually think otherwise. Joking aside, there is an important point to be made here about the differences between philosophy and science (and the subsequent frustrations felt by both when they interact), especially considering that this is a NeuroEthics blog. I don't have a real answer, but I'll start a discussion by pointing to Robert Hartman's 1963 paper "The Logical Difference Between Philosophy and Science." Hartman asserts that all of science is built on top of the super-system of mathematics, whereas each philosopher effectively creates his or her own, semi-independent system. Seeing how Hartman admits to building on the ideas of the famed 18th-century philosopher Immanuel Kant, one might be tempted to call this bogus. But perhaps Science can be thought of like Star Wars, where new authors are continually adding to the same (expanded) universe, whereas Philosophy can be thought of like Batman, where authors are continually re-telling the same story and re-imagining the same characters in different ways. Or perhaps someone who studies the philosophy of science needs to slap me around a bit in the comments section.

[2] Keep in mind that I'm a neuroscientist here, so I rarely read things that were written more than 20 years ago: Wow.  I had to read the printed-in-1900 copy of Epictetus's discourses I found at Tech's library out loud.  Just in case it contained an incantation.  Also, the pages smelled AMAZING.

[3] W. D. Ross, Aristotle (London: Methuen, 1923), pp. 199–201.

[4] A. Dihle, The Theory of Will in Classical Antiquity (Berkeley: University of California Press, 1982).

[5] M. Frede, A Free Will: Origins of the Notion in Ancient Thought (Berkeley: University of California Press, 2011).

[6] Epictetus was a freed slave in the house of Nero.  As a former slave, his accusatory discussion on freedom in 'Discourses' is particularly difficult to write off.  Highly suggested.

[7] K. Nielsen, 'The Will – Origins of the Notion in Aristotle's Thought.' Antiquorum Philosophia (2012). Available at: http://works.bepress.com/karennielsen/16. Note that I owe a lot of the structure of the history above to this source.







Want to cite this post?

Zeller-Townson, RT. (2012). Who's responsible for 'free will?' Reminding you that all ideas were once new. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/12/whos-responsible-for-free-will.html

Monday, December 10, 2012

Uncovering the Neurocognitive Systems for 'Help This Child'

In their article, “Socioeconomic status and the brain: mechanistic insights from human and animal research,” Daniel A. Hackman, Martha J. Farah, and Michael J. Meaney explore how low socioeconomic status (SES) affects underlying cognitive and affective neural systems. They identify and focus on two sets of factors that determine the relationship between SES and cognitive development: (1) the environmental factors or ‘mechanisms’ that demonstrably mediate SES and brain development; and (2) those neurocognitive systems that are most strongly affected by low SES, including language processing and executive function.  They argue that “these findings provide a unique opportunity for understanding how environmental factors can lead to individual differences in brain development, and for improving the programmes and policies that are designed to alleviate SES-related disparities in mental health and academic achievement” [1].






Neuroscience can tell us how SES may affect her brain.

Can it move us to do something about it?






Theoretically, I have no doubt that neuroscience can make a powerful contribution to early childhood development by determining whether and which neurocognitive systems appear to be more extensively affected by low socioeconomic status.

This is, as the authors themselves point out, important work, because understanding which systems are affected can help educators and policy-makers develop programs to target them more directly and successfully. For example, the work of D'Angiulli et al. demonstrates that low-SES children pay more attention to unattended stimuli, and are thereby more susceptible to becoming distracted and having a harder time focusing on a given task. [2] A corresponding, corrective strategy would consist of introducing games, lessons, and computer-based strategies that explicitly target executive functions; indeed, just such a set of measures is being used by the Tools of the Mind curriculum, which as of this year is being implemented in 18,000 pre-kindergarten and kindergarten classrooms, in Head Start programs, public schools, and childcare centers across the nation.






Fig. 1: The yellow 'brain development' box represents those neurocognitive systems that are most affected by low SES, and could include 'language processing' and 'executive function'



So far, so good. So what am I worried about?






I'm not 'worried' so much as left wondering about one issue in Hackman et al.'s review that I would now like to explore, and that I would welcome further discussion about.



My concern relates to the broader relationship between scientific knowledge and our individual and collective moral motivation to do something about an ongoing injustice. Allow me to illustrate what I mean using two diagrams adapted from the Hackman et al. article. The first represents the state of our knowledge regarding the relationship between SES and development, without any concrete neuroscientific understanding of the neurocognitive systems that mediate between them:




Fig. 2: We know that SES affects developmental outcomes,

even if we don't understand the neurocognitive systems that mediate the relationship











The second represents the state of our knowledge regarding the relationship between SES and development, now including our emerging neuroscientific understanding of the neurocognitive systems that mediate between them, outlined in the paper:




Fig. 3: Neuroscience is beginning to elucidate which neurocognitive systems are

most strongly affected by SES, and thereby influence children's developmental outcomes







My question is this: if sociologists and psychologists have already firmly established the relationship between SES, specific environmental mediators, and resulting developmental outcomes, as in Figure 2 (and they have, as the evidence cited by Hackman et al. attests), then can the addition of a scientific understanding of the intermediary mechanisms in any way enhance or strengthen our practical commitment to improving children's SES and the corresponding environmental mediators that affect their development? In other words, if I already know that SES, and specifically prenatal influences, directly affect elements of children's cognitive and emotional development, do I need to know anything more before doing something about it? And will knowing more about it, including understanding the causal sequence mediating the relationship, prompt me to do anything more about it than I was doing before?



Again, as mentioned, I fully recognize and appreciate the potential of neuroscientists and their collaborators to contribute to the "design of more specific and powerful interventions to prevent and remediate the effects of low childhood SES." [1] A second, equally essential neuroscientific question to explore is whether certain brain propensities increase the likelihood of individuals' living in low-SES circumstances. Could we say that certain brain propensities correspond to developmental diseases, or to a kind of physical handicap, one that traps people in poverty and decreases their likelihood of attaining a better quality of life? If so, would this oblige us to take action? These are fundamental questions that need to be explored further. For my part, I'm not sure I agree with the statement that neuroscience can "highlight the importance of policies that shape the broader environments to which families are exposed" with any more clarity or motivational force than our existing knowledge already does. [1]



I am a neurophile, but…





Here’s why I’m slightly skeptical.  To borrow an example from the philosopher Peter Singer, imagine that you’re driving down the street and see a person bleeding profusely from his leg. [3] You could rush in and help this man, but you’re wearing your brand new, $375 J.Crew Ludlow suit jacket, so you think to yourself, ‘Ok, do I leave him there? I mean, it’s terrible, but I guess so, because I don’t want to get blood all over my beautiful jacket.’ If you responded to the situation in this way, we would probably call you a moral monster.






One of these is not like the other. Or...?





Now consider a different case. Imagine that you're watching your favorite episode of The Walking Dead when a commercial from CARE comes on and reminds you that for $375, you could pay for and facilitate 8 healthy births, and thereby help save the lives of several mothers and their babies. Now you think to yourself, "Well, I guess it would be good to save those people, but I really just want that jacket." In this case, the general consensus would be that while you're no Mother Teresa, we probably wouldn't want to condemn you as a moral monster. (After all, that jacket is made from 'world class wool'!) So what gives? As Singer pointed out in a series of influential articles, our rational obligation towards the mothers and their newborns should be the same as towards the bleeding man. [3] So how and why do our intuitions differ?





In his article, "From neural 'is' to moral 'ought': what are the moral implications of neuroscientific moral psychology?," the philosopher Joshua Greene suggests that an evolutionary perspective may help explain the differences in our responses. He proposes, "consider that our ancestors did not evolve in an environment in which total strangers on opposite sides of the world could save each others' lives by making relatively modest material sacrifices. Consider also that our ancestors did evolve in an environment in which individuals standing face-to-face could save each others' lives, sometimes only through considerable personal sacrifice. Given all of this, it makes sense that we would have evolved altruistic instincts that direct us to help others in dire need, but mostly when the ones in need are presented in an 'up-close-and-personal' way." [4] According to Greene, this makes sense of why human beings can be extraordinarily altruistic in their immediate, interpersonal interactions, but still gobsmackingly selfish in their transnational relations.



Unfortunately, our relationship to children in lower-SES environments is closer to the distant pregnant mothers in Singer's analogy than it is to the bleeding stranger right in front of us. Few of us interact with low-SES children on a daily basis, and so many of us worry about how they get on only in abstract, theoretical terms. But if this is right, then more information, or even more scientific understanding, will not be enough to move us toward addressing their developmental issues. Rather, we will need to use other kinds of knowledge, such as our emerging understanding of biased moral motivation, to reflectively increase the probability of translating our moral principles into actions. That is, examples like Singer's bleeding stranger tell us something about how our moral motivation works, and we need to use this type of knowledge to try to make low-SES children seem more like the man with the leg wound in our moral imaginations. This would increase the likelihood of our doing something to improve low-SES children's circumstances. One way of achieving this would be to ensure that we interact with low-SES parents and their children on a more regular basis, e.g., by doing something as simple as taking public transportation. This would make us more likely to put our hard-won neuroscience research to use.




____



[1] Hackman, D. A., Farah, M. J., & Meaney, M. J. (2010). 'Socioeconomic status and the brain: mechanistic insights from human and animal research.' Nature Reviews Neuroscience, 11(9), 651–659.



[2] D'Angiulli, A., Herdman, A., Stapells, D., & Hertzman, C. (2008). 'Children's event-related potentials of auditory selective attention vary with their socioeconomic status.' Neuropsychology, 22(3), 293–300.



[3] Singer, P. (1972). 'Famine, affluence, and morality.' Philosophy and Public Affairs, 1, 229–243.



[4] Greene, J. (2003). 'From neural "is" to moral "ought": what are the moral implications of neuroscientific moral psychology?' Nature Reviews Neuroscience, 4. Available at: http://www.overcominghateportal.org/uploads/5/4/1/5/5415260/from_neural_is_to_moral_ought.pdf







Want to cite this post? 

Haas, J. (2012). Uncovering the Neurocognitive Systems for 'Help This Child'. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/12/uncovering-neurocognitive-systems-for.html

Tuesday, December 4, 2012

Neurodiversity and autism: where do we draw the line?

In April 2012, the Emory Neuroethics Program conducted an interview with Steven Hyman, the director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, in which he expressed his belief that mental illnesses and developmental disorders should not be thought of as clear and distinct categories. He said that "classifications are, in the end, cognitive schemata that we impose on data in order to organize it and manipulate it…it's really not helpful to act like there's a 'bright line' in nature that separates the well from non-well." Rather, he said, there are spectrums of behaviors, and disorders exist along them with differing degrees of severity.



This idea of spectrum disorders is common in modern psychiatry, with a commonly known example being the autism spectrum. This approach groups similar disorders of varying levels of severity along a spectrum which also includes behaviors and emotions classified as normal. While the spectrum approach is often touted as an improvement over the previous methods of classification, it still does not solve the lingering problem of how to define disorders.






Neurodiversity shirt



This question is one of the biggest issues in modern psychiatry: where along the spectrum is the transition from the normal range to a diagnosable mental disorder? Doctors and therapists rely on the Diagnostic and Statistical Manual of Mental Disorders (DSM) and the scientific literature to make decisions, but it is not a perfect system and leaves room for controversy. The as-yet-unreleased DSM-5 will move further toward the spectrum approach. For example, it will not include Asperger syndrome as a separate disorder, instead incorporating it into autism spectrum disorder (ASD). But despite this, the DSM-5 is being criticized for emphasizing the negative aspects of ASD (more so than the DSM-IV) and, more generally, for pathologizing behaviors and mental states that, some feel, should be (and were at one point) considered normal.



When it comes to classifications, there are some who want to extend the spectrum of behaviors considered "normal" even further, or even remove the dichotomy of "normal" and "abnormal" psychology altogether. One group active in this debate is the neurodiversity movement. The term neurodiversity was coined by Judy Singer, a sociologist with Asperger syndrome. It is based on the biological term biodiversity, the variety among and within species required for a healthy ecosystem. Proponents of neurodiversity argue that certain neurological, psychological, and developmental conditions usually described as disorders should instead be viewed as part of the normal variation that exists among humans. They argue that while particular ways of thinking or acting may be more common and normalized, there are other equally acceptable ways of living and being. A wide variety of neurological conditions are supported by the neurodiversity movement, including ADHD, dyslexia, and Tourette's, but largely the movement has been led by those fighting for greater acceptance of the world view and experience of those with autism.



The autism rights movement applies the ideas of neurodiversity to disorders on the autism spectrum (including autism and Asperger syndrome). Proponents of the movement, which includes people both with and without autism, see the autism spectrum “disorders” as neurological variations in functioning that, while different from the norm, should still be seen as normal and not, in fact, as disorders. These activists seek to correct myths and misconceptions about autism, emphasize autism’s positive aspects, and seek a greater role for people with autism in discussions about the condition.






A symbol of the autism rights movement



Neurodiversity proponents do not entirely disapprove of teaching or training people with autism to function better in society by helping them cope with some of the more damaging symptoms (difficulties with taking care of themselves, communicating, and reading emotions, for example). But they think this should be done without losing the beneficial aspects of autism and without making people with autism conform to a normal ideal. For example, behaviors common in autism like stimming (repetitive movements) and keeping strict routines would not be discouraged.



Another idea commonly held among neurodiversity and autism rights activists is that autism and Asperger's are identities with their own unique culture. This explains some of the resistance to being "cured". While ASD may encompass both positive and negative characteristics, "curing" autism, activists argue, would destroy who the individuals are as people (since their autism so shapes the way they see and interact with the world) and would eliminate autistic culture. Computers and the internet have contributed to this culture, both by allowing people with autism to more easily connect with each other and by offering a mode of communication that many are more comfortable with than face-to-face interaction. But some worry about the detrimental effects that relying on computer-mediated communication might have on important social skills gained through face-to-face interaction.



Neurodiversity advocates encourage the removal of medicalized language such as "disease", "disorder", "treatment", "cure", and "epidemic" from discussions about autism. And some prefer terms like "autistic", "autie", and "autist" (for those with autism) and "aspie" (for those with Asperger syndrome) instead of "person with autism/Asperger's" to emphasize the belief that these are identities rather than disorders. This has also led to the term "neurotypical" being used to refer to people who are seen as developmentally and psychologically "normal" from a non-neurodiverse perspective. Being neurotypical has even been jokingly described as a disorder to satirize how autism is often viewed.



Just as neurodiversity is championed by those both with and without autism, the movement faces criticism from within both groups as well. A common argument is that the neurodiversity approach is championed mostly by people with Asperger's or high-functioning autism, and while it might work for them, others need treatment. According to these critics, working to help low-functioning individuals better fit into society will greatly improve their quality of life. They think that maintaining autistic culture is more of a utopian ideal and that, in actuality, society will never be able to truly accept or accommodate low-functioning individuals with autism otherwise.






A protest against Autism Speaks, an advocacy organization often criticized by the neurodiversity movement  



The problem remains in identifying the "bright line" of distinction between "low-functioning" and "high-functioning" individuals. The "high-functioning" and "low-functioning" labels for autism are not clinical classifications and have no agreed-upon definitions. The closest thing to an official distinction is that high-functioning autism is "unaccompanied by mental retardation," while low-functioning autism is accompanied by it. But these terms are still controversial given the challenges of assessing the intelligence of those who have difficulty with (or little desire for) communicating through conventional methods and conventional testing. Neurodiversity advocates stress that some people diagnosed with low-functioning autism do not have any intellectual deficits and only appear to because they communicate and think so differently.



The concept of neurodiversity offers a different paradigm to approach autism. Instead of viewing it as a disease that should be cured, neurodiversity acknowledges both the positive and negative aspects of it. While most within the movement agree that it is desirable to alleviate the negative effects, they reject a “sledge-hammer approach” that tries to force everyone with autism to be more like neurotypicals in all ways, especially when they feel such treatments are harmful and even abusive.



But on the other hand, though listening to what people with autism have to say about themselves and their identities is important to understanding autism, I think that neurodiversity can also go too far, becoming too idealistic when it comes to deciding what behaviors should be accepted by society. For some individuals, it might be in their best interest to help them communicate verbally and curb behaviors like stimming (at least in certain situations) through treatments that are less controversial (though I understand the irony here, considering that the neurodiversity movement started in response to neurotypicals thinking they knew what was best for those with autism).



Still, neurodiversity presents an important viewpoint to keep in mind and, at the very least, illustrates how, despite how much we know about psychology and neuroscience, the only way to understand people’s subjective conscious and emotional experiences is by listening to what they have to say. Well, at least for now.





Want to cite this post?

Queen, J. (2012). Neurodiversity and autism: where do we draw the line? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/12/neurodiversity-and-autism-where-do-we.html





Additional Resources (Added 6/16/2015)



Autism

http://healthfinder.gov/FindServices/SearchContext.aspx?topic=81



Autism Speaks Resource Guide

http://www.autismspeaks.org/family-services/resource-guide



Career Assistance for People with Autism

http://www.hloom.com/career-assistance-for-people-with-autism



National Center for Autism Resources & Education

https://www.disability.gov/resource/national-center-for-autism-resources-education-ncare



AutismNOW Transition Planning

http://autismnow.org/in-the-classroom/transition-planning-for-students

The Future of Intelligence Testing

Few people I know actually enjoy standardized tests. Wouldn’t it be great if technology could eliminate the need for bubble-in forms and Scantron sheets? How nice would it be to simply go in and get a snapshot of your brain to find out how smart you are? Imagine walking into the test center, signing on the dotted line, getting a quick scan, and walking out with your scores in hand, helping you gain admittance into a college or land your next job. No brain-racking questions, no tricky analogies, and no obscure vocabulary. Goodbye SAT, hello functional magnetic resonance imaging (fMRI).






Image from http://theturingcentenary.files.wordpress.com/2012/06/brain-functions.jpg





In general, there have been two types of intelligence studies: psychometric and biological. Biological approaches make use of neuroimaging techniques and examine brain function. Psychometrics focuses on mental abilities (think IQ tests). Dr. Ian Deary and associates suggest that a greater overlap between these approaches will reveal new findings. In their paper, 'Testing versus understanding human intelligence,' they state:



“The lack of overlap between these approaches means that it is unclear what the scores of intelligence tests mean in terms of fundamental biological processes of the brain.”[2]



Coupling psychometric analysis techniques (IQ tests) with advanced imaging has the potential to reveal the locations of "higher" cognition and neural processing. I expect that future technologies will have vastly improved resolution, which will allow scientists to see not only which regions the brain uses but also the exact pathways that are activated for each action. By understanding which pathways are utilized for specific tasks, it may be possible to identify which genes, as well as which environmental factors (nutrition, education), are responsible for their development. This could lead to programs dedicated to training specific areas of the brain (several of which already exist) [5], and perhaps even drugs that foster development.



I believe it is increasingly important to consider the implications of a technology powerful enough to quickly evaluate an individual's level of intelligence. However, before I get there, it is necessary to first explain what we can and cannot currently do. What we cannot currently do is use neuroimaging to determine how smart you are. What we can currently do is use neuroimaging to see what parts of your brain contribute to how you process information, access memories, and function [3].



Localizing Intelligence



Different parts of your brain do different things, and none of these regions works in isolation. Some regions contribute to eating and seeing, while others play a role in "intelligence". The various techniques of imaging-based testing search for different correlates of intelligence [4] (i.e., general intelligence, problem solving, learning abilities). Developments in imaging technologies have allowed for greater analysis, enabling the study of both damaged and healthy brains. For example, MRI studies have found that the volume of gray matter correlates with intelligence, providing evidence for generalizations made regarding brain volume and intelligence [7]. A 2006 study of 100 postmortem brains examined the relationship between an individual's Full Scale Wechsler Adult Intelligence Scale (WAIS) score and the volume of their brain regions. The factors considered important to the relationship between brain size and intelligence were age, sex, and hemispheric functional lateralization. (The authors found that general verbal ability was correlated with cerebral volume in women and in right-handed men; they did not find a relationship between ability and volume in every group, however.)
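For readers curious what such a volume-IQ analysis looks like in practice, here is a purely hypothetical sketch (not the 2006 study's actual method): regress test scores on regional brain volume while adjusting for age and sex, the covariates the study flagged as important. All numbers below are synthetic, and the effect size is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # e.g., a sample of 100 brains

# Synthetic stand-in data: regional volume (cm^3), age (years), sex (0/1).
volume = rng.normal(450, 40, size=n)
age = rng.uniform(25, 80, size=n)
sex = rng.integers(0, 2, size=n)

# Generate WAIS-like scores with a weak, invented dependence on volume.
wais = 70 + 0.08 * volume + rng.normal(0, 10, size=n)

# Multiple regression: score ~ intercept + volume + age + sex.
X = np.column_stack([np.ones(n), volume, age, sex])
beta, *_ = np.linalg.lstsq(X, wais, rcond=None)

print(f"Volume coefficient (IQ points per cm^3): {beta[1]:.3f}")
```

With real data one would also test significance and examine subgroups separately (as the study did for sex and handedness); the sketch shows only the shape of the analysis.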



Additionally, PET and fMRI studies have revealed more information regarding the functionality of certain regions of the brain. By recording and interpreting the brain activity of subjects as they complete a variety of tasks, researchers are able to draw inferences about the types of task (and thus the types of intelligence) that call on particular areas of the brain. This is interesting, as knowing how parts of the brain are utilized may reveal more about the structure and hierarchy of neural development. It may also provide interesting information regarding the pathways of neural signals throughout the nervous system. Image-based testing may allow researchers to discover why certain neurons are connected, whether they are indeed aligned in a purposeful manner, and consequently how to repair such pathways when they are damaged.






Image from http://news.wustl.edu/news/Pages/24068.aspx

A study from Washington University in St. Louis has shed light on how our brains utilize various networks during working memory tasks [1]. The authors described a mechanism, global connectivity, that coordinates control of other networks. In a sense, global connectivity is the CEO of your brain, ensuring that all components of your system are functioning and allowing for effective control of thought and behavior. Specifically, they found that a region of the lateral prefrontal cortex (LPFC), whose activity has been found to predict working memory performance, exhibits high global connectivity. They report that,



“critically, global connectivity in this LPFC region, involving connections both within and outside the frontoparietal network, showed a highly selective relationship with individual differences in fluid intelligence. These findings suggest LPFC is a global hub with a brainwide influence that facilitates the ability to implement control processes central to human intelligence.”



This fascinating study identified a very specific characteristic of the brain, the global connectivity of the left LPFC, from which investigators were able to predict fluid intelligence (where fluid intelligence refers to reasoning and novel problem-solving ability) [4].
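At its core, the "global connectivity" measure is simple: compute the functional connectivity (timeseries correlation) between every pair of brain regions, then average each region's correlations with all the others. Here is a minimal sketch of that computation on synthetic timeseries; the region count is an arbitrary placeholder, and this simplifies Cole et al.'s actual pipeline considerably.

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_timepoints = 264, 300  # illustrative parcellation size

# Synthetic stand-in for region-averaged fMRI timeseries.
timeseries = rng.normal(size=(n_regions, n_timepoints))

# Functional connectivity: pairwise correlation between region timeseries.
fc = np.corrcoef(timeseries)

# Global (brain-wide) connectivity of each region: its mean correlation
# with all other regions, excluding the trivial self-correlation.
np.fill_diagonal(fc, np.nan)
global_connectivity = np.nanmean(fc, axis=1)

# In Cole et al.'s analysis, values like these (for an LPFC region) were
# then correlated with fluid-intelligence scores across participants.
print(global_connectivity[:5])
```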





Potential Issues



We are learning more about the brain and the biological bases for intelligence every day. We have expanded our understanding of memory, cognitive thought, and neural computation [2,6]. Our understanding of imaging techniques and what we can learn from them continues to grow. It may very well be possible someday to use neuroimaging to evaluate an individual's intelligence. It is becoming more widely accepted that a neurobiological basis for intelligence exists (at least for reasoning and problem solving) [4]. At the same time, the success of these intelligence studies presents ethical issues. Gray and Thompson pose the question, "Is it ever ethical to assess population-group (racial or ethnic) differences in intelligence?" While little variation has been found between racial groups, the public perception of intelligence studies has been negatively impacted by concerns of racism [4]. It is important to consider the consequences of studies that investigate intelligence differences between population groups (racial, ethnic, and socioeconomic). Gray and Thompson state that it is not necessary to consider race when exploring the neurobiological bases of intelligence: the majority of variation occurs within racial groups, not between them. However, if a study were to investigate race and intelligence, they state that it would be necessary to have consent as well as active support (e.g., financial support) from the target groups.



There are in fact studies that have investigated test score differences associated with race. Claude Steele, Ph.D., a professor of social psychology at Stanford University, has discussed the test performance differences of white and black Americans. In his interview with PBS, Dr. Steele explained that a serious gap in test scores exists between whites and blacks, citing 100-point differences on the verbal and quantitative sections of the SAT. This is a concern for policy makers who are responsible for maintaining a standard level of education, as these low scores may help identify areas that need improvement. However, the negative effect of these findings is due to something Dr. Steele calls "stereotype threat." He explains that when a salient negative stereotype about a group you identify with may apply (e.g., lower SAT scores), the prospect of confirming that stereotype in the test situation can be distracting and upsetting. This stereotype threat impacts test taking, undermining test performance.



When considering the neurobiological bases of intelligence, it may be harder to escape these stereotypes. It may someday become known which genes code for higher intelligence, and thus higher test scores. Already, we have begun studying what regions of the brain relate to intelligence test performance. The next step will be discovering which genes code for the development of those regions, then connecting the dots from genes to intelligence. Furthermore, an understanding of the environmental and social influence that control the activation of these genes (and thus development) will be needed. Future generations may have stereotype threats that are based on genes rather than groups. When you know that you carry the genes for a certain level of intelligence, you may find yourself doubtful of your ability to perform above the level predicted by your genome and fall short of your potential.



There is a lot to be learned regarding how our genes relate to intelligence, and understanding how different gene pools code for their "smarts" could expand our knowledge immensely. However, as I mentioned earlier, suggesting that one group is genetically hard-coded to be smarter than another would have enormous implications. Science and research are going to continue pushing this ethical boundary, and I predict that we eventually will be able to find many links between genes and intelligence. It is both beneficial and crucial to discuss how genes and intelligence should be studied now, before the research takes place. We have a unique opportunity where ethics can lay down guidelines before this technology emerges.



There are exciting possibilities that neuroimaging intelligence tests may bring. On the one hand, we can learn so much about the brain, how we think and how we can make it better. On the other hand, it may drive ethnic and racial groups, as well as socio-economic groups, farther apart. So we really have to ask ourselves, is that the price we have to pay to not fill out another Scantron sheet?







Want to Cite This Post?

Craig, E. (2012). The Future of Intelligence Testing. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/12/the-future-of-intelligence-testing.html






Related articles



http://headblitz.com/what-brain-scans-might-replace-iq-tests-in-the-future/

http://www.mobiledia.com/news/142925.html

http://www.medicaldaily.com/articles/11216/20120801/intelligence-mri-iq-test-brain.htm

http://www.psychologytoday.com/blog/finding-the-next-einstein/201202/could-brain-imaging-replace-the-sat

http://en.wikipedia.org/wiki/Neuroimaging_intelligence_testing





References



1. Cole, M. W., Yarkoni, T., Repovs, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. The Journal of Neuroscience, 32(26), 8988–8999. doi:10.1523/JNEUROSCI.0536-12.2012

2. Deary, I. J., & Caryl, P. G. (1997). Neuroscience and human intelligence differences. Trends in Neurosciences, 20(8), 365–371. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/9246731

3. Duncan, J. (2000). A neural basis for general intelligence. Science, 289(5478), 457–460. doi:10.1126/science.289.5478.457

4. Gray, J. R., & Thompson, P. M. (2004). Neurobiology of intelligence: science and ethics. Nature Reviews Neuroscience, 5(6), 471–482. doi:10.1038/nrn1405

5. Hackman, D. A., Farah, M. J., & Meaney, M. J. (2010). Socioeconomic status and the brain: mechanistic insights from human and animal research. Nature Reviews Neuroscience, 11(9), 651–659. doi:10.1038/nrn2897

6. Prabhakaran, V., Rypma, B., & Gabrieli, J. D. E. (2001). Neural substrates of mathematical reasoning: A functional magnetic resonance imaging study of neocortical activation during performance of the necessary arithmetic operations test. Neuropsychology, 15(1), 115–127. doi:10.1037//0894-4105.15.1.115

7. Witelson, S. F., Beresh, H., & Kigar, D. L. (2006). Intelligence and brain size in 100 postmortem brains: sex, lateralization and age factors. Brain, 129(Pt 2), 386–398. doi:10.1093/brain/awh696