
Morality and Neuroscience

Submitted by Ken Watts on Tue, 11/27/2007 - 15:36

Ronald Bailey, in Reason Magazine, notes that recent discoveries in neuroscience fit nicely with the theory of moral sentiments proposed by Adam Smith in the eighteenth century:


"As we have no immediate experience of what other men feel, we can form no idea of the manner in which they are affected, but by conceiving what we ourselves should feel in the like situation," observed British philosopher and economist Adam Smith in the first chapter of his magisterial The Theory of Moral Sentiments (1759). "Whatever is the passion which arises from any object in the person principally concerned, an analogous emotion springs up, at the thought of his situation, in the breast of every attentive spectator." Smith's argument is that our ability to empathize with others is at the root of our morality.

Recent discoveries in neuroscience are bolstering Smith's insights about the crucial role of empathy in human sociality and morality. For example, in the 1990s, Italian scientists researching motor neurons in macaque monkeys discovered mirror neurons. As the story goes, a monkey's brain had been wired up to detect the firing of his neurons when planning and carrying out a movement such as grasping a peanut. One researcher returned from lunch licking an ice cream cone. As the monkey watched the researcher, some of his neurons fired as though he were eating the ice cream, even though he was not moving. The monkey's neurons were "mirroring" the activity that the monkey was observing. [read the article]

Mirror neurons are found in humans as well, and it appears that they not only aid us in learning but also provide one source of the empathy that underlies our dedication to fairness and care for others. It turns out that we really do feel each other's pain, by a sort of internal acting out of the other's situation.

This kind of research takes a different, and I think superior, approach to questions of morality. By asking how we make moral judgments, rather than what moral judgments we "ought" to be making, it follows a wisdom approach rather than a legal one.

The benefit is greater understanding, and, perhaps, a deeper appreciation for human nature.

One example of this is the question (also addressed in the article) of why we will make heavy sacrifices to help those who are close to us, yet fail to make even moderate sacrifices for those who are not. (By close, I mean either emotionally or physically.)

The example in the article, provided by Harvard University psychologist Joshua Greene, is that most people would find it immoral to drive by a bleeding hiker on the roadside, but would not find it immoral to fail to make a donation to a charity that feeds poor people in Africa.

Greene's explanation is that we did not evolve in a context where we could give aid to people half a globe away. It's a point that has been made before, both by me and by Adam Smith:

"That we should be but little interested, therefore, in the fortune of those whom we can neither serve nor hurt, and who are in every respect so very remote from us, seems wisely ordered by Nature."

The legal model's approach is to objectify morality, treating it as something like one of Plato's forms, and, usually, to take an extreme position in the process.

"It is obviously right and good for people to take care of others, and no one person is more deserving of care than another, so we should be just as willing to help people half a world away as we are to help our own family."

But, of course, we won't live up to such high standards, and that allows the legal model to do what it does best—pronounce us guilty. 

The wisdom model, on the other hand, looks at what is, and tries to understand it. Morality isn't some objectified and arbitrary Platonic form—it's an activity. It's something we humans do. So it makes sense to take it at face value. If we don't feel morally obligated to donate to every deserving charity, there might just be a good reason for that. 

I would suggest that the reason doesn't just lie in our evolutionary history, either. Most people do give something to those charities—but most people feel more responsibility for those who are closer. 

There may be times when some adjustment would be helpful, but think of the alternative.

What if we were all just as concerned about those who are distant as those who are near? What if every time I went to the store to buy food for my children, I found myself weighing the needs of other children—half a world away—against theirs: trying to balance the value of this loaf of bread against the need of the entire world? 

I doubt that it would be an improvement. How much energy would be wasted in those calculations? How efficient would they be—how accurate? And would the starving poor on the other side of the world be making the same calculations about the needs of my children?

It's obviously unworkable—but the primary function of the legal model is not to work, but to lay blame.

The wisdom approach, on the other hand, might, through understanding, suggest ways to improve the situation.