Saturday, November 16, 2013

Saturday Reading: On Good vs. Evil

Sam Harris (hey, does anyone out there know him?) in this excellent interview. Enjoy the read.

The Roots of Good and Evil

An Interview with Paul Bloom

Paul Bloom is the Brooks and Suzanne Ragen Professor of Psychology at Yale University. His research explores how children and adults understand the physical and social world, with special focus on morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is a past president of the Society for Philosophy and Psychology and a co-editor of Behavioral and Brain Sciences, one of the major journals in the field. Dr. Bloom has written for scientific journals such as Nature and Science and for popular outlets such as The New York Times, The Guardian, The New Yorker, and The Atlantic. He is the author or editor of six books, including Just Babies: The Origins of Good and Evil.
Paul was kind enough to answer a few questions about his new book.
*  *  *


Harris: What are the greatest misconceptions people have about the origins of morality?
Bloom: The most common misconception is that morality is a human invention. It’s like agriculture and writing, something that humans invented at some point in history. From this perspective, babies start off as entirely self-interested beings—little psychopaths—and only gradually come to appreciate, through exposure to parents and schools and church and television, moral notions such as the wrongness of harming another person.
Now, this perspective is not entirely wrong. Certainly some morality is learned; this has to be the case because moral ideals differ across societies. Nobody is born with the belief that sexism is wrong (a moral belief that you and I share) or that blasphemy should be punished by death (a moral belief that you and I reject). Such views are the product of culture and society. They aren’t in the genes.
But the argument I make in Just Babies is that there also exist hardwired moral universals—moral principles that we all possess. And even those aspects of morality—such as the evils of sexism—that vary across cultures are ultimately grounded in these moral foundations.
A very different misconception sometimes arises, often stemming from a religious or spiritual outlook. It’s that we start off as Noble Savages, as fundamentally good and moral beings. From this perspective, society and government and culture are corrupting influences, blotting out and overriding our natural and innate kindness.
This, too, is mistaken. We do have a moral core, but it is limited—Hobbes was closer to the truth than Rousseau. Relative to an adult, your typical toddler is selfish, parochial, and bigoted. I like the way Kingsley Amis once put it: “It was no wonder that people were so horrible when they started life as children.” Morality begins with the genes, but it doesn’t end there.

Harris: How do you distinguish between the contributions of biology and those of culture?
Bloom: There is a lot you can learn about the mind from studying the fruit flies of psychological research—college undergraduates. But if you want to disentangle biology and culture, you need to look at other populations. One obvious direction is to study individuals from diverse cultures. If it turns out that some behavior or inclination shows up only in so-called WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies, it’s unlikely to be a biological adaptation. For instance, a few years ago researchers were captivated by the fact that subjects in the United States and Switzerland are highly altruistic and highly moral when playing economic games. They assumed that this reflects the workings of some sort of evolved module—only to discover that people in the rest of the world behave quite differently, and that their initial findings are better explained as a quirk of certain modern societies.
One can do comparative research—if a human capacity is shared with other apes, then its origin is best explained in terms of biology, not culture. And there’s a lot of fascinating research with apes and monkeys that’s designed to address questions about the origin of pro-social behavior.
Then there’s baby research. We can learn a lot about human nature by looking at individuals before they are exposed to school, television, religious institutions, and the like. The powerful capacities that we and other researchers find in babies are strong evidence for the contribution of biology. Now, even babies have some life history, and it’s possible that very early experience, perhaps even in the womb, plays some role in the origin of these capacities. I’m comfortable with this—my claim in Just Babies isn’t that the moral capacities of babies emerge without any interaction with the environment. That would be nuts. Rather, my claim is the standard nativist one: These moral capacities are not acquired through learning.
We should also keep in mind that failure to find some capacity in a baby does not show that it is the product of culture. For one thing, the capacity might be present in the baby’s mind but psychologists might not be clever enough to detect it. In the immortal words of Donald Rumsfeld, “Absence of evidence is not evidence of absence.” Furthermore, some psychological systems that are pretty plainly biological adaptations might emerge late in development—think about the onset of disgust at roughly the age of four, or the powerful sexual desires that emerge around the time of puberty. Developmental research is a useful tool for pulling apart biology and culture, but it’s not a magic bullet.

Harris: What are the implications of our discovering that many moral norms emerge very early in life?
Bloom: Some people think that once we know what the innate moral system is, we’ll know how to live our lives. For them it’s as if the baby’s mind contains a holy text of moral wisdom, written by Darwin instead of Yahweh, and once we can read it, all ethical problems will be solved.
This seems unlikely. Mature moral decision-making involves complex reasoning, and often the right thing to do involves overriding our gut feelings, including those that are hardwired. And some moral insights, such as the wrongness of slavery, are surely not in our genes.
But I do think that this developmental work has some interesting implications. For one thing, the argument in Just Babies is that, to a great extent, all people have the same morality. The differences that we see—however important they are to our everyday lives—are variations on a theme. This universality provides some reason for optimism. It suggests that if we look hard enough, we can find common ground with any other neurologically normal human, and that has to be good news.
Just Babies is optimistic in another way. The zeitgeist in modern psychology is pro-emotion and anti-reason. Prominent writers and intellectuals such as David Brooks, Malcolm Gladwell, and Jonathan Haidt have championed the view that, as David Hume famously put it, we are slaves of the passions. From this perspective, moral judgments and moral actions are driven mostly by gut feelings—rational thought has little to do with it.
That’s a grim view of human nature. If it were true, we should buck up and learn to live with it. But I argue in Just Babies that it’s not true. It is refuted by everyday experience, by history, and by the science of developmental psychology. Rational deliberation is part of our everyday lives, and, as many have argued—including Steven Pinker, Peter Singer, Joshua Greene, you, and me, in the final chapter of Just Babies—it is a powerful force in driving moral progress.

Harris: When you talk about moral progress, it implies that some moralities are better than others. Do you think, then, that it is legitimate to say that certain individuals or cultures have the wrong morality?
Bloom: If humans were infinitely plastic, with no universal desires, goals, or moral principles, the answer would have to be no. But it turns out that we have deep commonalities, and so, yes, we can talk meaningfully about some moralities’ being better than others.
Consider a culture in which some minority is kept as slaves—tortured, raped, abused, bought and sold, and so on—and this practice is thought of by the majority as a moral arrangement. Perhaps it’s justified by reference to divine command, or the demands of respected authorities, or long-standing tradition. I think we’re entirely justified in arguing that they are wrong, and when we do this, we’re not merely saying “We like our way better.” Rather, we can argue that it’s wrong by pointing out that it’s wrong even for them—the majority who benefit from the practice.
Obstetricians used to deliver babies without washing their hands, and many mothers and babies died as a result. They were doing it wrong—wrong by their own standards, because obstetricians wanted to deliver babies, not kill them. Similarly, given that the humans in the slave society possess certain values and intuitions and priorities, they are acting immorally by their own lights, and they would appreciate this if they were exposed to certain arguments and certain facts.
Now, this is an empirical claim, drawing on assumptions about human psychology, but it’s supported by history. Good moral ideas can spread through the world in much the same way that good scientific ideas can, and once they are established, people marvel that they could ever have thought differently. Americans are no more likely to reinstate slavery than we are to give up on hand-washing for doctors.
You’ve written extensively on these issues in The Moral Landscape and elsewhere, and since we agree on so much, I can’t resist sounding a note of gentle conflict. Your argument is that morality is about maximizing the well-being of conscious minds. This means that determining the best moral system reduces to the empirical/scientific question of what system best succeeds at this goal. From this standpoint, we can reject a slave society for precisely the same reason we can reject a dirty-handed-obstetrician society—it involves needless human pain.
My view is slightly different. You’re certainly right that maximizing well-being is something we value, and needless suffering is plainly a bad thing. But there remain a lot of hard questions—the sort that show up in Ethics 101 and never go away. Are we aspiring for the maximum total amount of individual well-being or the highest average? Are principles of fairness and equality relevant? What if the slave society has very few unhappy slaves and very many happy slaveholders, so its citizens are, in total and on average, more fulfilled than ours? Is that society more moral? If my child needs an operation to save his sight, am I a better person if I let him go blind and send the money to a charity where it will save another child’s life? These are hard questions, and they don’t go away if we have a complete understanding of the empirical facts.
The source of these difficulties, I think, is that as reflective moral beings, we sometimes have conflicting intuitions as to what counts as morally good. If we were natural-born utilitarians of the Benthamite sort, then determining the best possible moral world really would be a straightforward empirical problem. But we aren’t, and so it isn’t.
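A minimal numeric sketch of the total-versus-average question Bloom raises above; the two societies, the scores, and the 0-10 scale are invented purely for illustration:

```python
# Toy illustration (invented numbers, not from the interview): how
# "maximize total well-being" and "maximize average well-being" can both
# favor an unequal society, leaving the fairness question open.

def total_wellbeing(scores):
    """Sum of individual well-being scores."""
    return sum(scores)

def average_wellbeing(scores):
    """Mean individual well-being score."""
    return sum(scores) / len(scores)

# Hypothetical well-being scores on an arbitrary 0-10 scale.
society_a = [6] * 100            # 100 moderately fulfilled equals
society_b = [9] * 95 + [1] * 5   # a very happy majority and a miserable minority

for name, scores in [("A (egalitarian)", society_a), ("B (unequal)", society_b)]:
    print(f"{name}: total={total_wellbeing(scores)}, "
          f"average={average_wellbeing(scores):.1f}")

# Society B comes out ahead on both total (860 vs. 600) and average
# (8.6 vs. 6.0), yet many intuitions still rank A as the more moral
# arrangement: the choice of aggregation rule is itself a moral question.
```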
Harris: Well, it won’t surprise you to learn that I agree with everything you’ve said up until this last bit. In fact, these last points illustrate why I choose not to follow the traditional lines laid down by academic philosophers. If you declare that you are a “utilitarian,” everyone who has taken Ethics 101, as you say, imagines that he understands the limits of your view. Unfortunately, those limits have been introduced by philosophers themselves and are enshrined in the way that we have been encouraged to talk about moral philosophy.
For instance, you suggest that a concern for well-being might be opposed to a concern for fairness and equality—but fairness and equality are immensely important precisely because they are so good at safeguarding the well-being of people who have competing interests. If someone says that fairness and equality are important for reasons that have nothing to do with the well-being of people, I have no idea what he is talking about.
Similarly, you suggest that the hard questions of ethics wouldn’t go away if we had a complete understanding of empirical facts. But we really must pause to appreciate just how unimaginably different things would be IF we had such an understanding. This kind of omniscience is probably impossible—but nothing in my account depends on its being possible in practice. All we need to establish a strong, scientific conception of moral truth in principle is to admit that there is a landscape of experiences that conscious beings like ourselves can have, both individually and collectively—and that some are better than others (in any and every sense of “better”). Must we really defend the proposition that an experience of effortless good humor, serenity, love, creativity, and awe spread over all possible minds would be better than everyone’s being flayed alive in a dungeon by unhappy devils? I don’t think so.
I agree that how we think about collective well-being presents certain difficulties (average vs. maximum, for instance)—but a strong conception of moral truth requires only that we acknowledge the extremes. It seems to me that the paradoxes that Derek Parfit has engineered here, while ingenious, need no more impede our progress toward increased well-being than the paradoxes of Zeno prevent us from getting to the coffee pot each morning. I admit that it can be difficult to say whether a society of unhappy egalitarians would be better or worse than one composed of happy slaveholders and none-too-miserable slaves. And if we tuned things just right, I would be forced to say that these societies are morally equivalent. However, one thing is not debatable (and it is all that my thesis as presented in The Moral Landscape requires): If you took either of these societies and increased the well-being of everyone, you would be making a change for the good. If, for instance, the slaveholders invented machines that could replace the drudgery of slaves, and the slaves themselves became happy machine owners—and these changes introduced no negative consequences that canceled the moral gains—this would be an improvement in moral terms. And any person who later attempted to destroy the machines and begin enslaving his neighbors would be acting immorally.
Again, the changes in well-being that are possible for creatures like ourselves are possible whether or not anyone knows about them, and their possibility depends in some way on the laws that govern the states of conscious minds in this universe (or any other).
Whatever its roots in our biology, I think we should now view morality as a navigation problem: How can we (or any other conscious system) reduce suffering and increase happiness? There might be an uncountable number of morally equivalent peaks and valleys on the landscape—but that wouldn’t undermine the claim that basking on some peak is better than being tortured in one of the valleys. Nor would it suggest that movement up or down depends on something other than the laws of nature.
Bloom: I agree with almost all of this. Sure—needless suffering is a bad thing, and increased well-being is a good thing, and that’s why I’m comfortable saying that some societies (and some individuals) have better moralities than others. I agree as well that determining the right moral system will rest in part on knowing the facts. This is true for the extremes, and it’s also true for real-world cases. The morality of drug laws in the United States, for instance, surely has a lot to do with whether those laws cause an increase or a decrease in human suffering.
My point was that there are certain moral problems that don’t seem to be solvable by science. You accept this but think that these are like paradoxes of metaphysics—philosophical puzzles with little practical relevance.
This is where we clash, because some of these moral problems keep me up at night. Take the problem of how much I should favor my own children. I spend money to improve my sons’ well-being—buying them books, taking them on vacations, paying dentists to fix their teeth, etc.—that could instead be used to save the lives of children in poor countries. I don’t need a neuroscientist to tell me that I’m not acting to increase the total well-being of conscious individuals. Am I doing wrong? Maybe so. But would you recommend the alternative, where (to use my earlier example) I let my son go blind so that I can send the money I would have paid for the operation to Oxfam so that another child can live? This seems grotesque. So what’s the right balance? How should we weigh the bonds of family, friendship, and community?
This is a serious problem of everyday life, and it’s not going to be solved by science. 

Harris: Actually, I don’t think our views differ much. This just happens to be a place where we need to distinguish between answers in practice and answers in principle. I completely agree that there are important ethical problems that we might never solve. I also agree that there are circumstances in which we tend to act selfishly to a degree that beggars any conceivable philosophical justification. We are, therefore, not as moral as we might be. Is this really a surprise? As you know, the forces that rule us here are largely situational: It is one thing for you to toss an appeal from the Red Cross in the trash on your way to the ice cream store. It would be another for you to step over the prostrate bodies of starving children. You know such children exist, of course, and yet they are out of sight and (generally) out of mind. Few people would counsel you to let your own children go blind, but I can well imagine Peter Singer’s saying that you should deprive them of every luxury as long as other children are deprived of food. To understand the consequences of doing this, we would really need to take all the consequences into account.
I briefly discuss this problem in The Moral Landscape. I suspect that some degree of bias toward one’s own offspring could be normative in that it will tend to lead to better outcomes for everyone. Communism, many have noticed, appears to run so counter to human nature as to be more or less unworkable. But the crucial point is that we could be wrong about this—and we would be wrong with reference to empirical facts that we may never fully discover. To say that these answers will not be found through science is merely to say that they won’t be established with any degree of certainty or precision. But that is not to say that such answers do not exist. It is also possible to know exactly what we should do but to not be sufficiently motivated to do it. We often find ourselves in this situation in life. For example, a person desperately wants to lose weight and knows that he would be happier if he did. He also knows how to do it—by eating less junk and exercising more. And yet he may spend his whole life not doing what he knows would be good for him. In many respects, I think our morality suffers from this kind of lassitude.
But we can achieve something approaching moral certainty for the easy cases. As you know, many academics and intellectuals deny this. You and I are surrounded by highly educated and otherwise intelligent people who believe that opposition to the burqa is merely a symptom of Western provincialism. I think we agree that this kind of moral relativism rests on some very dubious (and unacknowledged) assumptions about the nature of morality and the limits of science. Let us go out on a scientific limb together: Forcing half the population to live inside cloth bags isn’t the best way to maximize individual or collective well-being. On the surface, this is a rather modest ethical claim. When we look at the details, however, we find that it is really a patchwork of claims about psychology, sociology, economics, and probably several other scientific disciplines. In fact, the moment we admit that we know anything at all about human well-being, we find that we cannot talk about moral truth outside the context of science. Granted, the scientific details may be merely implicit, or may remain perpetually out of reach. But we are talking about the nature of human minds all the same.
Bloom: We still have more to talk about regarding the hard cases, but I agree with you that there are moral truths and that we can learn about them, at least in part, through science. Part of the program of doing so is understanding human nature, and especially our universal moral sense, and this is what my research, and my new book, is all about.
Source: http://www.samharris.org/blog/item/the-roots-of-good-and-evil
