
Tuesday, December 18, 2012

Two Internship Openings with Emory's Neuroethics Program for Spring 2013!



NEUROETHICS INTERNSHIP OPENINGS


Are you interested in the ethical and social implications of neuroscience?


The Emory Neuroethics Program invites you to apply for a Neuroethics Internship. We are looking for up to two self-motivated, creative, and organized individuals who are interested in topics that fall at the intersection of neuroscience, society, and ethics.



The Neuroethics Program is a community of scholars at the Emory University Center for Ethics who explore the ethical and social implications of neuroscience and neurotechnology. You can be part of that exciting team.



The Center for Ethics at Emory is an interdisciplinary hub that collaborates with every school at Emory University as well as local universities and the private and public community. The Center for Ethics houses The American Journal of Bioethics Neuroscience, the premier journal in Neuroethics. The director of the Center for Ethics, Dr. Paul Root Wolpe, is one of the founders of the field of Neuroethics as well as the International Neuroethics Society, where he serves on the Executive Board.



Students will have creative input into this new, growing program and play an integral role in its day-to-day functions. Duties will include things like:



• Social media: writing for The Neuroethics Blog, Facebook, and web design

• Participating in projects led by the undergrad-run Neuroethics Creative

• Neuroethics Journal Club

• Organizing Symposia

• Neuroethics Research and more…



Please visit our program page (ethics.emory.edu/neuroethics) or Facebook (The Neuroethics Program at Emory) to learn more about us, or contact us at neuroethics@emory.edu.



To apply, please submit a one-page letter of interest and a resume to neuroethics@emory.edu by January 18, 2013.



Eligibility and expectations:

• Must be organized and deadline-oriented

• Must be self-motivated

• Must currently be an undergraduate student (can be from any discipline)

• Hours are flexible, but must be consistent


Who's responsible for 'free will?' Reminding you that all ideas were once new





A figure adapted from Soon, Brass, Heinze and Haynes' 2008 fMRI study, where a "free decision" could be predicted above chance 7 seconds before it was consciously "felt." Those green globs could be thought of as the unconscious part of your brain that is actually in control of your life. Image here, paper here


As seen previously on this blog, the notion of "Free Will" is a bit of a Neuroethics battleground. About 30 years ago, Dr. Benjamin Libet et al. published an experiment in which the researchers were able to predict when human volunteers would press a button, a fraction of a second before the participants themselves realized they were going to do so. And despite suggestions that the scientific method is breaking down, there is an entire cottage industry of scientists replicating Libet's result and finding more and more effective ways to predict what you are going to be 'freely' thinking.



I'll defer to Scott Adams of Dilbert fame to describe why this is a problem:





This is from 1992. Libet's study was published in 1983. Your life has been absurd for the past 30 years. (I haven't been able to track down exactly what "Brain Research" Scott Adams was referencing here, but it seems to be similar to the Libet experiment.) From http://dilbert.com/strips/comic/1992-09-13/


The implications are pretty tremendous: if my conscious mind is just observing a decision that has already been made, and not participating in it, how is that decision mine? How can I be blamed for decisions that I am merely watching?



However, it's hard to scientifically argue that free will is (or isn't) an illusion, unless you know exactly what it is in the first place.  So Jason Shepard and Shane Reuter ran a test to see how folks actually use the phrase 'free will.'  All well and good, that certainly beats just assuming that everyone has the same definition.



But then a sinking feeling emerges: here is an idea so precious to us that we actually start becoming worse people when we hear that it is an illusion. And yet this authoritative definition is coming to us through majority rule? Our hero is roused to action, and sets out to find a 'correct' definition, not just a 'popular' one...




While using undergraduates in these sorts of psychology studies isn't necessarily problematic, I've been an undergraduate, and thus do not trust these people with my free will.  They might put it in a Dr. Pepper bottle filled with dry ice and chuck it into an abandoned parking lot late at night.  That would be terrible.  Image from www.quickmeme.com

But with so many variants on free will floating around, how do you choose a 'correct' definition? Does free will mean free from the laws of physics (metaphysical libertarianism)? Free from control by an omnipotent God? Free from mental disease, free from peer pressure, free from our own emotions? And what, exactly, is a 'will'? Does it need to be 'conscious'? Our hero thinks to himself, "What would Science do?"



And suddenly the answer becomes embarrassingly clear: why, just give the definition of the phrase to whoever coined it! Those who followed should be forced to use different phrases for whatever 'revisionist' notions of free will they invented: free-ish will. Free-range will. Will Zero. Our hero sits back and reflects on the cleverness and superiority of the Sciences over all other domains of thought [1]. Now all that is left to do is a wee bit of Googling. Pish pish, easy post.



However, after significant Googling, and digging through two different libraries, and further Googling, and reading books that were over 100 years old[2], and talking to people who were familiar enough with the topic to actually respect its subtleties, and staring at a computer screen wondering what the hell he had gotten himself into, our hero realized that figuring out who is responsible for infecting us with an attachment to 'free will' wasn't going to be an easy task.  More like a thesis, or a career.



What is clear from the relatively small portion of the literature that I've been able to digest is that people have been talking about free will for over a millennium, but less than three millennia.  Probably.  In the early 20th century it was common to presume that folks have always had a notion of free will, an example being in 1923  when W.D. Ross confidently asserted that “Aristotle shared the plain man’s belief in free will.” This was despite Ross's admission, two pages later, that Aristotle “did not examine the problem very thoroughly, and did not express himself with perfect consistency.”[3]  Later scholars took this lack of clear discussion to conclude that Aristotle lacked a notion of will, free or otherwise, altogether.  So, Aristotle’s clean.  For now.








Would Shepard and Reuter's study have gone differently if St. Augustine had taken a psychology class at Emory in the spring of 2012? Images from here and here

In his 1974 Sather lectures at Berkeley[4], Albrecht Dihle put forth his argument that St. Augustine, in the 4th century AD, was the first person to put together our 'Western notion' of free will.  St. Augustine came to an (arguably libertarian) notion of free will as a way to solve the problem of evil: how could a benevolent and omnipotent God allow for a world with evil? Answer: humans are responsible for evil due to their free will (which got tainted when Adam and Eve consumed a particular fruit).  Augustine describes this 'free will' as a first cause, with no causes before it, meaning God gets none of the blame and gets to remain omnipotent.  So, St. Augustine is the one responsible!  Or so it seems...



Fast forward to 1997, and we find Michael Frede putting forth an argument (in his own Sather lectures at Berkeley[5]) that Dihle was being too restrictive in his definition of 'Free Will,' and that St. Augustine got his idea about what free will was from the Stoics. Frede argues that it was actually the late Stoic Epictetus[6] who first developed a full notion of will, in the 2nd century AD. Epictetus gets the blame as he was the first to link three critical claims together: that all voluntary acts are caused by wishes, that wishes are created by the rational soul, and therefore that all voluntary acts are caused by acts of reason, which is to say, caused by choice. This is contrasted with the Aristotelian and Platonic schools of thought, which held that voluntary acts could also be the result of non-rational urges (thirst, hunger, etc). So Epictetus is responsible then. Okay...



But coming up to the present, we see reactions to Frede's arguments. Karen Nielsen recently published an article[7] where she argues that Aristotle (HIM again!) actually developed a notion of will prior to Epictetus, making the point that Frede translates the Greek 'prohairesis' as 'will' for Epictetus and as 'decision' for Aristotle (although Nielsen makes no comment on the 'free' aspect). So to understand where 'will' came from, we are looking at shifting definitions in ancient Greek. AAGH!



The lesson here is that this concept didn't emerge suddenly out of the history of the West. "Free will," whatever it is, was a gradual development over thousands of years, with input from several schools of classical Greek thought, as well as Jewish and Christian traditions. Perhaps then, instead of thinking of "free will" as a single well-defined idea, it should be thought of as an entire lineage of ideas. If this is the case, neuroscientists, science writers, and the public at large need to be very cautious when asserting that "free will is an illusion" is a scientifically valid hypothesis. If neuroscience wants to make points about "free will," it needs to be both more specific as to what variant of "free will" it refers to, and broader in the variants of 'free will' it entertains.







[1]- I hope the childish language here makes it clear that I actually think otherwise. Joking aside, there is an important point to be made here on the differences between philosophy and science (and the subsequent frustrations felt by both when they interact), especially considering that this is a NeuroEthics blog. I don't have a real answer, but I'll start a discussion by pointing to Robert Hartman's 1963 paper "The Logical difference between Philosophy and Science." Hartman asserts that all of science is built on top of the super-system of mathematics, whereas each philosopher effectively creates his or her own, semi-independent system. Seeing how Hartman admits to building on the ideas of famed 18th century philosopher Immanuel Kant, one might be tempted to call this bogus. But perhaps Science can be thought of like Star Wars, where new authors are continually adding to the same (expanded) universe, whereas Philosophy can be thought of like Batman, where authors are continually re-telling the same story and re-imagining the same characters in different ways. Or perhaps someone who studies the philosophy of science needs to slap me around a bit in the comments section.

[2] Keep in mind that I'm a neuroscientist here, so I rarely read things that were written more than 20 years ago: Wow.  I had to read the printed-in-1900 copy of Epictetus's discourses I found at Tech's library out loud.  Just in case it contained an incantation.  Also, the pages smelled AMAZING.

[3] W. D. Ross, Aristotle (London: Methuen, 1923), pp. 199-201.

[4] A. Dihle, The Theory of Will in Classical Antiquity (Berkeley: University of California Press, 1982).

[5] M. Frede, A Free Will: Origins of the Notion in Ancient Thought (Berkeley: University of California Press, 2011).

[6] Epictetus was a freed slave in the house of Nero.  As a former slave, his accusatory discussion on freedom in 'Discourses' is particularly difficult to write off.  Highly suggested.

[7] K. Nielsen, 'The Will – Origins of the Notion in Aristotle's Thought,' Antiquorum Philosophia (2012). Available at: http://works.bepress.com/karennielsen/16. Note that I owe much of the structure of the history above to this source.







Want to cite this post?

Zeller-Townson, RT. (2012). Who's responsible for 'free will?' Reminding you that all ideas were once new. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/12/whos-responsible-for-free-will.html

Monday, December 10, 2012

Uncovering the Neurocognitive Systems for 'Help This Child'














In their article, “Socioeconomic status and the brain: mechanistic insights from human and animal research,” Daniel A. Hackman, Martha J. Farah, and Michael J. Meaney explore how low socioeconomic status (SES) affects underlying cognitive and affective neural systems. They identify and focus on two sets of factors that determine the relationship between SES and cognitive development: (1) the environmental factors or ‘mechanisms’ that demonstrably mediate the relationship between SES and brain development; and (2) those neurocognitive systems that are most strongly affected by low SES, including language processing and executive function. They argue that “these findings provide a unique opportunity for understanding how environmental factors can lead to individual differences in brain development, and for improving the programmes and policies that are designed to alleviate SES-related disparities in mental health and academic achievement” [1].






Neuroscience can tell us how SES may affect her brain.

Can it move us to do something about it?






Theoretically, I have no doubt that neuroscience can make a powerful contribution to early childhood development by determining whether and which neurocognitive systems appear to be more extensively affected by low socioeconomic status.

This is, as the authors themselves point out, important work, because understanding which systems are affected can help educators and policy-makers develop programs to target them more directly and successfully. For example, the work of D'Angiulli et al. demonstrates that low-SES children pay more attention to unattended stimuli, and are thereby more susceptible to becoming distracted and having a harder time focusing on a given task. [2] A corresponding, corrective strategy would consist in introducing games, lessons and computer-based strategies which explicitly target executive functions – and indeed, just such a set of measures is being used by the Tools of the Mind curriculum, which as of this year is being implemented in 18,000 pre-kindergarten and kindergarten classrooms, in Head Start programs, public schools, and childcare centers across the nation.






Fig. 1: The yellow 'brain development' box represents those neurocognitive systems that are most affected by low SES, and could include 'language processing' and 'executive function'



So far, so good. So what am I worried about?






I’m not ‘worried’ so much as left wondering about one issue in Hackman et al.’s review that I would now like to explore, and that I would welcome further discussion about.



My concern relates to the broader relationship between scientific knowledge and our individual and collective moral motivation to do something about an ongoing injustice. Allow me to illustrate what I mean using two diagrams adapted from the Hackman et al. article. The first represents the state of our knowledge regarding the relationship between SES and development, without any concrete neuroscientific understanding of the neurocognitive systems that mediate between them:




Fig. 2: We know that SES affects developmental outcomes,

even if we don't understand the neurocognitive systems that mediate the relationship











The second represents the state of our knowledge regarding the relationship between SES and development, now including our emerging neuroscientific understanding of the neurocognitive systems that mediate between them, outlined in the paper:




Fig. 3: Neuroscience is beginning to elucidate which neurocognitive systems are

most strongly affected by SES, and thereby influence children's developmental outcomes







My question is this: if sociologists and psychologists have already firmly established the relationship between SES, specific environmental mediators, and resulting developmental outcomes, as in Figure 2 (and they have, as the evidence cited by Hackman et al. attests), then can the addition of a scientific understanding of the intermediary mechanisms in any way enhance or strengthen our practical commitment to improving children’s SES and the corresponding environmental mediators that affect their development? In other words, if I already know that SES, and specifically prenatal influences, directly affect elements of children’s cognitive and emotional development, do I need to know anything more before doing something about it? And will knowing more about it, including understanding the causal sequence mediating the relationship, prompt me to do anything more about it than I was doing before?



Again, as mentioned, I fully recognize and appreciate the potential of neuroscientists and their collaborators to contribute to the “design of more specific and powerful interventions to prevent and remediate the effects of low childhood SES.” [1] A second, equally essential neuroscientific question to explore is whether certain brain propensities increase the likelihood of individuals' living in low-SES circumstances. Could we say that certain brain propensities correspond to developmental diseases, or to a kind of physical handicap - one that traps people in poverty and decreases their likelihood of attaining a better quality of life? If so, would this oblige us to take action? These are fundamental questions that need to be explored further. For my part, I'm not sure I agree with the statement that neuroscience can “highlight the importance of policies that shape the broader environments to which families are exposed” with any more clarity or motivational force than our existing knowledge already does. [1]



I am a neurophile, but…





Here’s why I’m slightly skeptical.  To borrow an example from the philosopher Peter Singer, imagine that you’re driving down the street and see a person bleeding profusely from his leg. [3] You could rush in and help this man, but you’re wearing your brand new, $375 J.Crew Ludlow suit jacket, so you think to yourself, ‘Ok, do I leave him there? I mean, it’s terrible, but I guess so, because I don’t want to get blood all over my beautiful jacket.’ If you responded to the situation in this way, we would probably call you a moral monster.






One of these is not like the other. Or...?





Now consider a different case. Imagine that you’re watching your favorite episode of the Walking Dead when a commercial from Care comes on and reminds you that for $375, you could pay for and facilitate 8 healthy births, and thereby help save the lives of several mothers and their babies. Now you think to yourself "Well, I guess it would be good to save those people, but I really just want that jacket." In this case, our general consensus would be that while you're no Mother Teresa, we probably wouldn't want to condemn you for being a moral monster. (After all, that jacket is made from 'world class wool'!) So what gives? As Singer pointed out in a series of influential articles, our rational obligation towards the mothers and their newborns should be the same as towards the bleeding man. [3] So how and why do our intuitions differ?





In his article, “From neural ‘is’ to moral ‘ought’: what are the moral implications of neuroscientific moral psychology?,” the philosopher Joshua Greene suggests that an evolutionary perspective may help explain the differences in our responses. He proposes, “consider that our ancestors did not evolve in an environment in which total strangers on opposite sides of the world could save each others’ lives by making relatively modest material sacrifices. Consider also that our ancestors did evolve in an environment in which individuals standing face-to-face could save each others’ lives, sometimes only through considerable personal sacrifice. Given all of this, it makes sense that we would have evolved altruistic instincts that direct us to help others in dire need, but mostly when the ones in need are presented in an ‘up-close-and-personal’ way.” [4] According to Greene, this makes sense of why human beings can be extraordinarily altruistic in their immediate, interpersonal interactions, but still gobsmackingly selfish in their transnational relations.



Unfortunately, our relationship to children in lower-SES environments is closer to the distant pregnant mothers in Singer's analogy than it is to the bleeding stranger right in front of us. Few of us interact with low-SES children on a daily basis, and so many of us worry about how they get on in more abstract, theoretical terms. But if this is right, then more information, or even more scientific understanding, will not be enough to move us toward addressing their developmental issues. Rather, we will need to use other kinds of knowledge, such as our emerging understanding of biased moral motivation, to reflectively increase the probability of translating our moral principles into actions. That is, examples like Singer's bleeding stranger tell us something about how our moral motivation works, and we need to use this type of knowledge to try and make low-SES children seem more like the man with the leg wound in our moral imaginations. This would increase the likelihood of our doing something to improve low-SES children's circumstances. One way of achieving this would be to ensure that we interact with low-SES parents and their children on a more regular basis, e.g., by doing something as simple as taking public transportation. This would make us more likely to put our hard-won neuroscience research to use.




____



[1] Hackman, D. A., Farah, M. J., Meaney, M. J., 2010. 'Socioeconomic status and the brain: mechanistic insights from human and animal research.' Nature Reviews Neuroscience 11(9), 651-659.



[2] D’Angiulli, A., Herdman, A., Stapells, D., Hertzman, C., 2008. 'Children’s event-related potentials of auditory selective attention vary with their socioeconomic status.' Neuropsychology 22, 293–300.



[3] Singer, P., 1972. 'Famine, affluence, and morality.' Philosophy and Public Affairs 1, 229–243.



[4] Greene, J., 2003. 'From neural 'is' to moral 'ought': what are the moral implications of neuroscientific moral psychology?' Nature Reviews Neuroscience. Available at: http://www.overcominghateportal.org/uploads/5/4/1/5/5415260/from_neural_is_to_moral_ought.pdf







Want to cite this post? 

Haas, J. Uncovering the Neurocognitive Systems for 'Help This Child'. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/12/uncovering-neurocognitive-systems-for.html

Tuesday, December 4, 2012

Neurodiversity and autism: where do we draw the line?

In April 2012, the Emory Neuroethics Program conducted an interview with Steven Hyman, the director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, where he expressed his belief that mental illnesses and developmental disorders should not be thought of as clear and distinct categories. He said that “classifications are, in the end, cognitive schemata that we impose on data in order to organize it and manipulate it…it's really not helpful to act like there's a ‘bright line’ in nature that separates the well from non-well.” Rather, he said, there are spectrums of behaviors, and disorders exist along them with differing degrees of severity.



This idea of spectrum disorders is common in modern psychiatry, with a commonly known example being the autism spectrum. This approach groups similar disorders of varying levels of severity along a spectrum which also includes behaviors and emotions classified as normal. While the spectrum approach is often touted as an improvement over the previous methods of classification, it still does not solve the lingering problem of how to define disorders.






Neurodiversity shirt



This question is one of the biggest issues in modern psychiatry: where along the spectrum is the transition from the normal range to a diagnosable mental disorder? Doctors and therapists rely on the Diagnostic and Statistical Manual of Mental Disorders (DSM) and scientific literature to make decisions, but it is not a perfect system and leaves room for controversy. The unreleased DSM-5 will move more towards the spectrum approach. For example, it will not include Asperger syndrome as a separate disorder, instead incorporating it into autism spectrum disorder (ASD). But despite this, the DSM-5 is being criticized for emphasizing the negative aspects of ASD (more so than the DSM-IV) and, more generally, for pathologizing behaviors and mental states that, some feel, should be (and were at one point) considered normal.



When it comes to classifications, there are some who want to extend the spectrum of behaviors considered “normal” even further, or even remove the dichotomy of “normal” and “abnormal” psychology altogether. One group active in this debate is the neurodiversity movement. The term neurodiversity was coined by Judy Singer, a sociologist with Asperger syndrome. It is based on the biological term biodiversity, the variety among and within species required for a healthy ecosystem. Proponents of neurodiversity argue that certain neurological, psychological, and developmental conditions which are usually described as disorders should instead be viewed as part of the normal variation that exists among humans. Neurodiversity proponents argue that while particular ways of thinking or acting may be more common and normalized, there are other equally acceptable ways of living and being. A wide variety of neurological conditions are supported by the neurodiversity movement, including ADHD, dyslexia, and Tourette’s, but the movement has largely been led by those fighting for greater acceptance of the world view and experience of those with autism.



The autism rights movement applies the ideas of neurodiversity to disorders on the autism spectrum (including autism and Asperger syndrome). Proponents of the movement, which includes people both with and without autism, see the autism spectrum “disorders” as neurological variations in functioning that, while different from the norm, should still be seen as normal and not, in fact, as disorders. These activists seek to correct myths and misconceptions about autism, emphasize autism’s positive aspects, and seek a greater role for people with autism in discussions about the condition.






A symbol of the autism rights movement



Neurodiversity proponents do not entirely disapprove of teaching or training people with autism to function better in society by helping them cope with some of the more damaging symptoms (difficulties with taking care of themselves, communicating, and reading emotions, for example). But they think this should be done without losing the beneficial aspects of autism and without making people with autism conform to a normal ideal. For example, behaviors common in autism like stimming (repetitive movements) and keeping strict routines would not be discouraged.



Another idea commonly held among neurodiversity and autism rights activists is that autism and Asperger’s are identities with their own unique culture. This explains some of the resistance to being “cured”. While ASD may contain both positive and negative characteristics, “curing” autism, activists argue, would destroy who the individuals are as people (since their autism so shapes the way they see and interact with the world) and would eliminate the autistic culture. Computers and the internet have contributed to this culture, both by allowing people with autism to more easily connect with each other and by offering a mode of communication that many are more comfortable with than face-to-face interaction. But some are worried about the detrimental effects that relying on computer mediated communication might have on important social skills gained through face-to-face interaction.



Neurodiversity advocates encourage the removal of medicalized language such as “disease”, “disorder”, “treatment”, “cure”, and “epidemic” from discussions about autism. And some prefer terms like “autistic”, “autie”, and “autist” (for those with autism) and “aspie” (for those with Asperger syndrome) instead of “person with autism/Asperger’s” to emphasize the belief that these are identities rather than disorders. This has also led to the term “neurotypical” being used to refer to people who are seen as developmentally and psychologically “normal” from a non-neurodiverse perspective. Being neurotypical has even been jokingly described as a disorder to satirize how autism is often viewed.



Just as neurodiversity is championed by both those with and without autism, the movement faces criticism from within both groups as well. A common argument is that the neurodiversity approach is championed mostly by people with Asperger’s or high-functioning autism, and while it might work for them, others need treatment. According to these critics, working to help low-functioning individuals better fit into society will help them and greatly improve their quality of life. They think that maintaining autistic culture is more of a utopian ideal and that, in actuality, society will never truly be able to accept or accommodate low-functioning individuals with autism otherwise.






A protest against Autism Speaks, an advocacy organization often criticized by the neurodiversity movement  



The problem remains in identifying the “bright line” of distinction between “low-functioning” and “high-functioning” individuals. The “high-functioning” and “low-functioning” labels for autism are not clinical classifications and have no agreed-upon definitions. The closest thing to an official distinction is that high-functioning autism is “unaccompanied by mental retardation” while low-functioning autism is accompanied by it. But these terms are still controversial given the challenges of assessing the intelligence of those who have difficulty with (or little desire for) communicating through conventional methods and conventional testing. Neurodiversity advocates stress that some people diagnosed with low-functioning autism do not have any intellectual deficits and only appear to because they communicate and think so differently.



The concept of neurodiversity offers a different paradigm for approaching autism. Instead of viewing it as a disease that should be cured, neurodiversity acknowledges both its positive and negative aspects. While most within the movement agree that it is desirable to alleviate the negative effects, they reject a “sledge-hammer approach” that tries to force everyone with autism to be more like neurotypicals in all ways, especially when they feel such treatments are harmful and even abusive.



But on the other hand, though listening to what people with autism have to say about themselves and their identities is important to understanding autism, I think that neurodiversity can also go too far, becoming too idealistic when it comes to deciding what behaviors should be accepted by society. For some individuals, it might be in their best interest to help them communicate verbally and curb behaviors like stimming (at least in certain situations) through treatments that are less controversial (though I understand the irony here, considering that the neurodiversity movement started in response to neurotypicals thinking they knew what was best for those with autism).



Still, neurodiversity presents an important viewpoint to keep in mind and, at the very least, illustrates how, despite how much we know about psychology and neuroscience, the only way to understand people’s subjective conscious and emotional experiences is by listening to what they have to say. Well, at least for now.





Want to cite this post?

Queen, J. (2012). Neurodiversity and autism: where do we draw the line? The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/12/neurodiversity-and-autism-where-do-we.html





Additional Resources (Added 6/16/2015)



Autism

http://healthfinder.gov/FindServices/SearchContext.aspx?topic=81



Autism Speaks Resource Guide

http://www.autismspeaks.org/family-services/resource-guide



Career Assistance for People with Autism

http://www.hloom.com/career-assistance-for-people-with-autism



National Center for Autism Resources & Education

https://www.disability.gov/resource/national-center-for-autism-resources-education-ncare



AutismNOW Transition Planning

http://autismnow.org/in-the-classroom/transition-planning-for-students

The Future of Intelligence Testing















Few people I know actually enjoy standardized tests. Wouldn’t it be great if technology could eliminate the need for bubble-in forms and Scantron sheets? How nice would it be to simply go in and get a snapshot of your brain to find out how smart you are? Imagine walking into the test center, signing on the dotted line, getting a quick scan, and walking out with your scores in hand, helping you gain admittance into a college or land your next job. No brain-racking questions, no tricky analogies, and no obscure vocabulary. Goodbye SAT, hello functional magnetic resonance imaging (fMRI).






Image from http://theturingcentenary.files.wordpress.com/2012/06/brain-functions.jpg





In general, there have been two types of intelligence studies: psychometric and biological. Biological approaches make use of neuroimaging techniques and examine brain function. Psychometrics focuses on mental abilities (think IQ tests). Dr. Ian Deary and associates suggest that a greater overlap of these techniques will reveal new findings. In their paper, 'Testing versus understanding human intelligence,' they state:



“The lack of overlap between these approaches means that it is unclear what the scores of intelligence tests mean in terms of fundamental biological processes of the brain.”[2]



Applying psychometric analysis techniques (IQ tests) coupled with advanced imaging has the potential to reveal the locations of “higher” cognition and neural processing. I expect that future technologies will have incredibly improved resolution, which will allow scientists to see not only what regions the brain uses but also the exact pathways that are activated for each action. By understanding which pathways are utilized for specific tasks, it may be possible to identify which genes as well as environmental factors (nutrition, education) are responsible for their development. This can lead to programs dedicated to training specific areas of the brain (several of which already exist) [5], and perhaps even drugs that foster development.
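To make that correlational logic concrete, here is a minimal, hypothetical sketch in Python. It is not code from any of the studies cited here: the data are simulated and the variable names and effect size are invented. It simply shows how one might test whether an imaging-derived measure (say, regional gray matter volume) tracks a psychometric score across subjects:

```python
# Hypothetical sketch: correlating a psychometric score with an imaging-derived
# measure across subjects. All data below are simulated stand-ins, not real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 40
iq_scores = rng.normal(100, 15, n_subjects)  # e.g., full-scale IQ scores

# Simulated regional gray-matter volume, weakly coupled to IQ purely for illustration.
gray_matter_vol = 600 + 0.8 * (iq_scores - 100) + rng.normal(0, 20, n_subjects)

# Across-subject Pearson correlation between the two measures.
r, p = stats.pearsonr(iq_scores, gray_matter_vol)
print(f"r = {r:.2f}, p = {p:.3f}")
```

Real studies layer covariates (age, sex, total brain volume) and corrections for multiple comparisons on top of this, but the core statistic is the same kind of across-subject correlation.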



I believe it is increasingly important to consider the implications of a technology powerful enough to quickly evaluate an individual’s level of intelligence. However, before I get there, it is necessary to first explain what we can and cannot currently do. What we cannot currently do is use neuroimaging to determine how smart you are. What we can currently do is use neuroimaging to see what parts of your brain contribute to how you process information, access memories and function [3].



Localizing Intelligence



Different parts of your brain do different things, and none of these brain regions work in isolation. Some regions contribute to eating or seeing, while other regions play a role in “intelligence”. The varying techniques of imaging-based testing search for different correlates of intelligence [4] (i.e., general intelligence, problem solving, learning abilities). Developments in imaging technologies have improved our ability for greater analysis, allowing for the study of both damaged and healthy brains. For example, MRI studies have found that the volume of gray matter correlates with intelligence, providing evidence for generalizations made regarding brain volume and intelligence [7]. A 2006 study of 100 postmortem brains examined the relationship between an individual’s Full Scale Wechsler Adult Intelligence Scale (WAIS) score and the volume of their brain regions. The factors they considered important to the relationship between brain size and intelligence were age, sex and hemispheric functional lateralization (they found that general verbal ability was correlated with cerebral volume in women and right-handed men; they did not find a relationship between ability and volume in every group, however).



Additionally, PET and fMRI studies have revealed more information regarding the functionality of certain regions of the brain. By recording and interpreting the brain activity of subjects as they complete a variety of tasks, researchers are able to draw inferences about which types of tasks (and thus, which types of intelligence) call on particular areas of the brain. This is interesting, as knowing how parts of the brain are utilized may reveal more information about the structure and hierarchy used in neural development. It also may provide interesting information regarding the pathways of neural signals throughout the nervous system. Image-based testing may allow researchers to discover why certain neurons are connected, whether they are indeed aligned in a purposeful manner, and consequently, how to repair such pathways when they are damaged.






Image from http://news.wustl.edu/news/Pages/24068.aspx

A study from Washington University in St. Louis has shed light on how our brains utilize various networks for performance on working memory tasks [1]. They described a mechanism, global connectivity, which coordinates control of other networks. In a sense, global connectivity is the CEO of your brain, ensuring that all components of your system are functioning and allowing for effective control of thought and behavior. Specifically, they found that a region of the lateral prefrontal cortex (LPFC), whose activity has been found to predict working memory performance, employs global connectivity. They report that,



“critically, global connectivity in this LPFC region, involving connections both within and outside the frontoparietal network, showed a highly selective relationship with individual differences in fluid intelligence. These findings suggest LPFC is a global hub with a brainwide influence that facilitates the ability to implement control processes central to human intelligence.”



This fascinating study identified a very specific characteristic of the brain (the global connectivity of the left LPFC) from which investigators were able to predict fluid intelligence (where fluid intelligence refers to reasoning and novel problem-solving ability) [4].
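For readers curious what "global connectivity" means computationally, here is a rough, hypothetical sketch of the basic idea: correlate each region's time series with every other region's and take the average. This is simulated data, not the Cole et al. analysis pipeline; the array sizes are invented and "region 0" is only a stand-in for an LPFC region of interest.

```python
# Hypothetical sketch of "global connectivity": for each region, the mean
# correlation of its time series with every other region's time series.
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_timepoints = 100, 300
timeseries = rng.standard_normal((n_regions, n_timepoints))  # simulated fMRI time series

corr = np.corrcoef(timeseries)                  # region-by-region correlation matrix
np.fill_diagonal(corr, np.nan)                  # ignore each region's self-correlation
global_connectivity = np.nanmean(corr, axis=1)  # one summary value per region

print(f"Global connectivity of region 0 (stand-in LPFC): {global_connectivity[0]:.3f}")
```

In the actual study, a regional summary of this kind was then related to individual differences in fluid intelligence across subjects; the sketch only shows the connectivity step.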





Potential Issues



We are learning more about the brain and the biological bases for intelligence every day. We have expanded our understanding of memory, cognitive thought and neural computation [2,6]. Our understanding of imaging techniques and what we can learn from them continues to grow. It may very well be possible to someday use neuroimaging to evaluate an individual’s intelligence. It is becoming more widely accepted that a neurobiological basis for intelligence exists (at least for reasoning and problem-solving) [4]. At the same time, the success of these intelligence studies presents ethical issues. Gray et al. pose the question, “Is it ever ethical to assess population-group (racial or ethnic) differences in intelligence?” While little variation has been found between racial groups, the public perception of intelligence studies has been negatively impacted by concerns of racism [4]. It is important to consider the consequences of studies that investigate intelligence differences in population-groups (racial, ethnic, and socioeconomic status). Gray states that it is not necessary to consider race when exploring the neurobiological bases of intelligence, since the majority of variation occurs within racial groups and not between them. However, if a study were to investigate race and intelligence, Gray states that it would be necessary to have consent as well as active support from the target groups (i.e., financial support).



There are in fact studies that have investigated test score differences associated with race. Claude Steele, Ph.D., a professor of social psychology at Stanford University, discussed the test performance differences of white and black Americans. In his interview with PBS, Dr. Steele explained that a serious gap in test scores exists between whites and blacks, citing 100-point differences on the verbal and quantitative sections of the SAT. This is a concern for policy makers who are responsible for maintaining a standard level of education, as these low scores may help identify areas that need improvement. However, part of the negative effect of these findings is due to something Dr. Steele refers to as “stereotype threat.” He explains that when a salient negative stereotype about a group you identify with may apply (e.g., lower SAT scores), the prospect of matching that stereotype in the test situation can be distracting and upsetting. This stereotype threat impacts test taking, undermining test performance.



When considering the neurobiological bases of intelligence, it may be harder to escape these stereotypes. It may someday become known which genes code for higher intelligence, and thus higher test scores. Already, we have begun studying what regions of the brain relate to intelligence test performance. The next step will be discovering which genes code for the development of those regions, then connecting the dots from genes to intelligence. Furthermore, an understanding of the environmental and social influences that control the activation of these genes (and thus development) will be needed. Future generations may have stereotype threats that are based on genes rather than groups. When you know that you carry the genes for a certain level of intelligence, you may find yourself doubtful of your ability to perform above the level predicted by your genome and fall short of your potential.



There is a lot to be learned regarding how our genes relate to intelligence, and understanding how different gene pools code for their “smarts” could expand that knowledge immensely. However, as I mentioned earlier, suggesting that one group is genetically hard-coded to be smarter than another will have enormous implications. Science and research are going to continue pushing this ethical boundary, and I predict that we eventually will be able to find many links between genes and intelligence. It is both beneficial and crucial to discuss how genes and intelligence should be studied now, before the research takes place. We have a unique opportunity where ethics can lay the guidelines before this technology emerges.



There are exciting possibilities that neuroimaging intelligence tests may bring. On the one hand, we can learn so much about the brain, how we think, and how we can make it better. On the other hand, it may drive ethnic and racial groups, as well as socio-economic groups, further apart. So we really have to ask ourselves: is that the price we have to pay to not fill out another Scantron sheet?







Want to Cite This Post?

Craig, E. The Future of Intelligence Testing. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/12/the-future-of-intelligence-testing.html






Related articles



http://headblitz.com/what-brain-scans-might-replace-iq-tests-in-the-future/

http://www.mobiledia.com/news/142925.html

http://www.medicaldaily.com/articles/11216/20120801/intelligence-mri-iq-test-brain.htm

http://www.psychologytoday.com/blog/finding-the-next-einstein/201202/could-brain-imaging-replace-the-sat

http://en.wikipedia.org/wiki/Neuroimaging_intelligence_testing





References



1. Cole, M. W., Yarkoni, T., Repovs, G., Anticevic, A., & Braver, T. S. (2012). Global connectivity of prefrontal cortex predicts cognitive control and intelligence. The Journal of neuroscience : the official journal of the Society for Neuroscience, 32(26), 8988–99. doi:10.1523/JNEUROSCI.0536-12.2012

2. Deary, I. J., & Caryl, P. G. (1997). Neuroscience and human intelligence differences. Trends in neurosciences, 20(8), 365–71. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/9246731

3. Duncan, J. (2000). A Neural Basis for General Intelligence. Science, 289(5478), 457–460. doi:10.1126/science.289.5478.457

4. Gray, J. R., & Thompson, P. M. (2004). Neurobiology of intelligence: science and ethics. Nature reviews. Neuroscience, 5(6), 471–82. doi:10.1038/nrn1405

5. Hackman, D. a, Farah, M. J., & Meaney, M. J. (2010). Socioeconomic status and the brain: mechanistic insights from human and animal research. Nature reviews. Neuroscience, 11(9), 651–9. doi:10.1038/nrn2897

6. Prabhakaran, V., Rypma, B., & Gabrieli, J. D. E. (2001). Neural substrates of mathematical reasoning: A functional magnetic resonance imaging study of neocortical activation during performance of the necessary arithmetic operations test. Neuropsychology, 15(1), 115–127. doi:10.1037//0894-4105.15.1.115

7. Witelson, S. F., Beresh, H., & Kigar, D. L. (2006). Intelligence and brain size in 100 postmortem brains: sex, lateralization and age factors. Brain : a journal of neurology, 129(Pt 2), 386–98. doi:10.1093/brain/awh69




Tuesday, November 27, 2012

Staring into the Zombie Abyss

By Guest Contributor Marc Merlin, Director of the Atlanta Science Tavern.



In his excellent review of the recent Zombethics Conference, Ross Gordon covers the central themes discussed during its morning session: a hypothetical neuroanatomy of zombies that would account for their hostile behavior, the possibility of the existence of philosophical zombies, soulless humans walking among us and, finally, the always-vexing question of free will, as it concerns both zombies and us.



Without a doubt these discussions have much to say about neuroscience and the philosophy of mind. What is less clear to me is what they have to say about ethics. They help us think more carefully about zombie behavior, but they offer little additional understanding of our own behavior, which is, after all, the grist for the ethics mill.






The Piano Kill, via Zombieland



The fact of the matter is that we are the ethical agents in the universe of human-zombie interactions. What motivates us and informs our behavior - consciously or unconsciously - is what is of primary importance here. Why is it that human characters in zombie dramas are moved to pursue the destruction of the walking dead with unabashed gusto, without the least apology or excuse? And why is their success at dismemberment and decapitation of zombie foes met with the eager applause of broad-based television and film audiences, many of whom would not be caught dead walking into a movie theater to catch the latest installment of the horror-porn “Saw” franchise?



From this perspective, the central zombethical question becomes, 'Why do we find zombies so delightfully kill-able?'



One implication of this line of thinking is that neuroscience research relative to zombie ethics should focus less on the mental states of zombies and more on the mental states that zombies (and their destruction) evoke in us. There being a dearth of real-life zombies with brains to scan, this shift of perspective toward the human offers obvious experimental advantages. In addition, it suggests a testable hypothesis: the response of the human brain when presented with depictions of the killing of zombies will differ from its response when presented with depictions of the killing of other kinds of threats.



Speculating here a bit, it may be that zombies occupy a sweet spot of sorts when it comes to hateability. Since they possess hardly a shred of personality, which makes identifying with them difficult or impossible, they hold little claim on our store of empathy. Yet, unlike empathy, which is bestowed on all sorts of creatures - even expressive robots - the antipathy that we feel toward zombies may not be an equal opportunity disposition. Perhaps this form of hostility, bent on annihilation, is especially reserved for members of our own species or facsimiles thereof.



All this leads me to suggest a topic for next year’s Zombethics conference: Zombies: Why do we hate them so, and why do we kill them with such glee? Some brainy food for thought until Halloween 2013.








Want to Cite this Post?

Merlin, M. (2012). Staring into the Zombie Abyss. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2012/11/staring-into-zombie-abyss.html.


Doing Neuroscience, Doing Feminism: Interview with Dr. Sari van Anders













Dr. Sari van Anders

After attending the Neurogenderings Conference in Vienna, where participants debated whether it would be possible to conduct feminist neuroscience research, I decided it would be useful to interview an actual practicing feminist neuroscientist – and I knew just who to talk to. Dr. Sari van Anders is an Assistant Professor in Psychology and Women’s Studies at the University of Michigan. She earned her Ph.D. in Biological & Cognitive Psychology from Simon Fraser University. In her social neuroendocrinology lab at the University of Michigan, she conducts feminist neuroscience research on a variety of topics, with a principal focus on the social modulation of testosterone via sexuality, partnering/pair bonding, and nurturance. She has received grants from the National Institutes of Health (NIH) and the American Institute of Bisexuality and has published articles in Hormones and Behavior, Archives of Sexual Behavior, and Psychoneuroendocrinology, among others.







I asked her to talk about what she sees as feminist about her own behavioral neuroscience research, how she has secured support for her work from other behavioral neuroendocrinologists, and what advice she would give to early career scientists who want to incorporate feminist concerns into their research. Read on for Dr. Van Anders’ thoughtful and thought-provoking answers.




I have heard you describe your research as a behavioral neuroscientist as ‘feminist.’ Can you explain what you see as feminist about your behavioral neuroscience research?






Feminist science practice, like other aspects of feminism (e.g., activism, praxis, theory, etc.), is not one thing. So the ways in which I position my work as feminist may not be the same as the ways in which other scientists might position their science, or the ways nonscientists might position my work. With that caveat in mind, onwards! One important feminist facet of my work is that I see science as one way to approach knowledge creation/production, as opposed to the only way or the most valuable way. Science can help us understand certain aspects of certain phenomena and is valuable as such, but is more valuable when we recognize its limitations and acknowledge the value of insights gained from other approaches.





Another important feminist facet of my work is that I see vast gulfs of difference between bioscience and biologically determinist thinking; so, I separate out natural from material, innate from trait, must from is, etc. Our bodies and the biological systems inside of them are recipients of socialization in the same ways our behaviors and cultural practices are. Social modulation of hormones is a major thrust of my research program… how could I (or we) think of our bodily systems as only preprogrammed when we increasingly know how each biobody exists in a social context? A major part of feminist thought critiques the split between gender and sex because it has in large part left sex (i.e., biology; nature) as a fixed, natural, acultural entity. Part of the work my research does is to expand notions of sex/nature/biology such that we see biological properties as malleable and socially located.





Another way my work is feminist is that I think about inequities while I do my work, including how social location might affect the questions I ask and my own understandings of phenomena, but also how a gender or intersectional lens might help me understand my findings better (which it almost always does). Critically engaging with one’s positionality has been called ‘strong objectivity.’ Theory compelled me and my own research has convinced me that objectivity works closer to how we want it to when we constantly engage with and interrogate our own biases and positions.





My work is also feminist because it’s informed by feminist thought, especially feminist science studies, even when the work is not focused on gender/sex. It’s feminist because I don’t think that science leads to simpler answers; I’m not, and I don’t think science intrinsically is (except in practice), reductionist. I study hormones and this research often leads me to explode phenomenological categories. For example, we found that cuddling increased testosterone – and followed up by theorizing and studying both cuddling and testosterone with fascinating and – to my mind – transformative findings about both. Similarly, we found that sexual desire is linked to testosterone in sometimes counterintuitive ways, which has led us to ask: what are people desiring when they desire? These are far from reductionist implications, because they leave us with more questions about hormones but also the social phenomena we’re studying (rather than simplifying them). The world is complex, and science helps us appreciate how complex.









van Anders has found that cuddling can increase testosterone levels in women

Image from Flickr by malloreigh



I also see my work as feminist because I think about it as community- and alliance-building. If knowledge production were collaborative rather than competitive, what would it look like? We try to build those sorts of relationships with colleagues, junior and senior, to make science what we wish it could be (i.e., where we constantly push at the clarity and meaningfulness of our understandings of phenomena together, critically, constructively, enthusiastically, and connected to lived experiences). Finally, I think of my work as feminist because the knowledge we create is situated, as I and my lab happily acknowledge that our findings make sense in this time and place because they were produced in this time and place.





Can you say a little bit about what you mean by “inclusive research and lab practices”?





I’ve been thinking about inclusive research and lab practices since early graduate school, and I’ve come to define it for myself as an ongoing process that involves thinking about how my lab operates, research methods, and science communication approaches. I could go on and on about this, and love to, but will limit this to some concrete examples. In the lab, e.g., I think about how I recruit people, how I make clear the implicit and explicit ‘rules’ of labs and my lab for the people who work in my lab and come from diverse backgrounds, how diverse perspectives will help us get closer to more truthful and rounded knowledge. I think about how we treat each other in ways that are respectful of difference, sameness, and culture, and are realistic about power.





In my methods, I think a lot about how we recruit participants and who feels welcome into science and why. I work hard to make our studies places where people from rightly science-skeptical groups have a place, for reasons beyond or unrelated to difference (while still making room to honor those differences). So, posters, questionnaires, recruitment ads, etc. How do we ask questions – and most of my research is quantitative – that honor people’s lived experiences? That map onto people’s realities? That reflect people’s autonomy and respect their self-identities? These are grand goals, and we are obviously therefore continually striving to do better at the principles that underlie them.









Inclusive questionnaires as a part of inclusive research methods






In science communication, I think a lot about the ways I write papers and the ways that I am allowed to write papers (I get some pretty hostile reviews that limit my ability to communicate certain ideas or in certain ways), how I involve my students (e.g., I have a lot of undergraduate co-authors, including first-authors on my papers), whom I speak to at conferences, how I get involved in mentoring, etc.





So... I see inclusive research practices as trying to
provide a model of science that explicitly acknowledges that science is a human
endeavor and therefore political, and that works within a consciously
articulated and progressive frame. Inclusive research practices are kind of
like saying ‘the personal is political, and it’s not just Politics that is
political,’ but in a science-y way: ‘the day-to-day of science is political,
and it’s not just Science that is political.’





The fact that you
have received a number of major grants and have published your work in the
leading journals in your field indicates that you have managed to secure the
support of other behavioral neuroscientists. How were you able to get other
scientists to support your research?







“Coming out as
Feminist”:
Feminists come in all sizes

Image from Flickr by Daniel Morrison


Well, one strategy of many feminists in non-feminist-allied
disciplines (of which behavioral neuroscience is certainly one!) is to go into
stealth mode. I had a major strategy which was to build up a large body of
research and then one day be like: surprise! This was feminist all along! I
think I’ve adhered to this strategy somewhat, but there are cues that
scientists pick up on (‘radical’ things like using self-identification terms
for sexuality, using non-binaristic gender/sex language, incorporating social
location) and I think now I’ve been made. Also, it became increasingly
difficult to do the work while straddling a fence – like, have you ever tried
to do anything while
fence-straddling? – because that meant partitioning myself in uncomfortable and
inauthentic ways… I found that the more people could level ‘Feminist!’ as an insult, the more they would. As soon as I became more explicitly
feminist, it became harder for others to level ‘feminist’
as an insult. It’s sort of like coming out: sometimes people have more power
when they can insinuate something you’re not yet sharing. I also think that my
subfields – behavioral neuroendocrinology (BNE) and sex research – are
feminist-friendly in their own ways. BNE already pays a lot of attention to
sexual diversity and gender/sex, as well as social location in certain limited
ways (e.g., how poverty might affect stress hormones). So it’s less of a leap
to think about how other aspects of social location might matter. Sex research
also has some progressive traditions and elements, and I’ve been lucky to be
continually able to mine that vein of progressiveness in my
colleagues. I think I’ve had a lot of privilege that I’ve been able to use, too:
I’m trained in neuroscience, I’m white, I’ve had financial safety nets, and I’m
Canadian and now in the U.S., so I think my position has let me do a lot with
fewer roadblocks than others might experience.





I am not so naïve as to think that merit is enough for
anything. But I do want to stake a claim to doing good work; I think I do great
work! People know that I love my work, and I think my enthusiasm is catching. I
think that my feminist approaches are intrinsically part of why my work is
great – feminist science is not just ‘good science.’ Feminist science is more
than just good science, even while it also is
good science. So, the more critically engaged my science is, the better the science I
produce.





I also think that I have worked extremely hard to be
bio-legible and speak to my colleagues in ways they will understand. I used to
think of my work as challenging/pushing/etc., but I now see my research program
as building/reframing/expanding. I think this noncombative approach is more in
line with how I’d like to see change happen when possible (‘be the change you
want to see’ sort of thing). And I think because I work within my fields but on
the margins, this insider/outsider status has given me a lot of space to do
what I do, but also others to be generous and supportive. I’m really careful,
too. I read book and article after book and article about the doing of science
in terms of the politics and management, etc. I’ve never believed that whatever
merit I do have will shine on its own as some sort of Sari-beacon, so I work
hard to connect with people who have shared interests in some way. I’m also
beyond extroverted (I’d way rather talk to a stranger than eat alone!) so that
makes it a pleasure to connect with people. And since science is done by and
with people, I think that this has helped too.





But you know, this question is hard to answer, especially as
I’m pre-tenure and still junior. I think I’ll have more perspective as time –
and I – march on.





How has your work
been received by feminist scholars and activists who are not scientists?





I often worry about how my work will be received by critical
scholarship audiences when I'm not there to situate it... and even when I am.
So it has been a really pleasant and welcome experience to find that folks from
across women's studies and critical scholarship seem to be really interested in
my work and, moreover, really extraordinarily generous. I think part of the
reason is that I really do listen to and am interested in what people have to say, and make changes in
my science. I think another reason is that I also try really hard to speak the
language. I think scientists are often worried about how their work will be
received and whether it will be attacked, like: why open up another front?! But I think critical thought and careful, conscious positioning go a
long way (in scholarship, and elsewhere!). Like I said about neuroscience, I
try to be biolegible. But I also often joke that I'm 'bilingual' because I can
speak to both groups and even joint groups, so I also try to be WS-legible. In part, I think this is because I
really, truly understand how deeply different these epistemological approaches
are, and so I can see where there’s room for them to come together.





Do you have any
advice for students and early-career researchers who want to incorporate
feminist insights into their basic science research?





I can’t not recommend stealth mode. People are still so
misinformed about what feminist science would be that it could be such a major
and immediate stumbling block, especially to a junior person. I also can’t not
recommend authenticity. We are all most passionate about doing work that has
meaning, and I know that the times when I’ve gone into deep stealth have been some
of the most professionally (and personally) deathly stultifying and unfulfilling times.







Sometimes stealth
mode is required


Image from Flickr by jeriaska


There are few guides to doing feminist science practice, but
I’m trying to build some – get in touch with me and others who seem like
allies. I’m also building a feminist science practice website just to
facilitate these sorts of alliances, so look for that! I have other more
prosaic suggestions: remember that you are the person on the ground, so you
have to make decisions that will
sometimes turn out to be wrong in ways you can only realize through
experience. Remember that no matter how grand your audience might be in your
imagination, you have to get through reviewers, editors, program officers, etc.
to get your work published and funded, and that doing so involves negotiations
with your principles that need not be positioned as ‘selling out’ or used to
guilt-trip yourself. Finally, remember that what you’re doing is hard, because you’re
creating new knowledge (which is hard enough) but you’re also creating the ways
to create new knowledge, so be patient with yourself, excited at your successes,
and generous with your colleagues (and maybe also generous with yourself and patient with your colleagues).







Want to Cite this Post?


Gupta, K. (2012). Doing Neuroscience, Doing Feminism: Interview with Dr. Sari van Anders. The Neuroethics Blog. Retrieved on
, from http://www.theneuroethicsblog.com/2012/11/doing-neuroscience-doing-feminism.html.