A week is an eternity in the political blogosphere, but it's a short time in the world of philosophy. Or at least, that's my excuse for responding now to a couple of Matt Yglesias' week-old posts on the objectivity of morality. Matt says some things, in the course of arguing against the existence of objective moral facts, that really don't have much bearing on the question of objectivity. In fact, many of his criticisms apply just as well to areas of inquiry where objective truth is clearly at stake. Let's first consider this:
Sometimes, you face a question that you think has an objective answer like "How much should we care about budget deficits?" What you're supposed to do in those circumstances is look at the evidence in an even-handed and objective way. The big issues of political commitment don't work like that at all. Siegel didn't go learn Arabic fluently, then read the Koran (it says you should only read it in Arabic), then study the works of Sayyid Qutb and other Islamist commentators, and then objectively weigh those arguments against the great names of liberal political thought in an open-minded and unprejudiced way before deciding, "Yes, those Islamists are all wrong!" That would be dumb, and nobody lives their life like that.

Matt's pointing out that the way we usually arrive at moral beliefs is quite different from the way we ought to arrive at beliefs on the empirical questions that drive public policy. That seems right. But it's important to note that very few people -- Matt and Ezra perhaps, but not most of us -- actually arrive at their public policy views in that ideal way. Most people don't wonk out on pdfs from Brookings, seek out the best arguments from all sides, and make well-considered decisions. Emotional gut reactions, sadly, play an outsized role in determining many ordinary people's beliefs on issues where there are objective right and wrong answers. You don't even need to go to normatively laden questions of the "How much should we care" variety to see this. You can just look at ordinary, purely descriptive questions -- "Do tax cuts stimulate the economy more or less than spending increases?" or "Which candidate is more electable?" -- to find places where many people's emotional attitudes (for example, their feelings about taxation or about the candidates) determine what sorts of beliefs they form.
Does this tear away the objectivity from facts of public policy? I don't think so. All it says is that people are forming their public policy beliefs in an unreliable and untrustworthy way. All the more reason to recognize the possibility of error within ourselves, and dedicate some energy to thinking clearly, considering well-collected empirical data, and listening carefully to all sides. Similarly, the fact that people tend to make emotionally driven moral judgments doesn't mean that morality isn't objective. It just means that we're likely to make mistakes, and so we need to understand that our intuitive moral judgments could be wrong. Objective truth could still be out there -- we're just bad at finding it.
This attitude towards moral belief underlies my own approach to the issue. I think that people very often go wrong in their beliefs about the objective moral facts. How do I separate the true moral beliefs from the false ones? I first try to determine what sorts of processes of belief-formation are generally reliable, considering many examples where morality isn't at stake. Then I look at all the ways that people form their beliefs about which states of affairs are good, and which actions are right. I throw out the beliefs that are generated by unreliable processes. In particular, I throw out the beliefs formed by having some emotionally driven attitude towards a state of affairs, and thus coming to believe that there's some objective goodness or badness out there in that state of affairs. All that's left is the goodness of pleasure and the badness of displeasure, which can be discovered without any emotions standing between us and our pleasure or displeasure. You can know that your sensations of black are sensations of darkness without any emotion standing between you and the black, and similarly, you can know that your experiences of pleasure are experiences of goodness without any emotion standing between you and the pleasure. Looking at your experiences and determining what they're like, with no emotional interference, is a reliable way of knowing. So the objective goodness of pleasure and the objective badness of displeasure are all we can know of objective goodness and badness.
Another related point Matt makes:
Islamists do a lot of stuff that seems cruel and repugnant -- sawing off people's heads, for example, or stoning gay people to death. Is that "really" wrong? Do I need to check? Deduce it from first principles? If I can't come up with an airtight argument against head-sawing within the next fifteen minutes, does that throw everything into doubt? Again, that's silly; nobody thinks that.

The fact that we don't usually require airtight arguments for moral conclusions doesn't really bear much on the question of objectivity. Consider the easier questions of physics -- you don't need to determine the gravitational constant in order to know that when you shoot a basketball, it'll travel in an arc, and eventually come down. But physics is an objective matter, if anything is. So there's nothing incompatible between our being able to get it right on a fair number of the objective questions, and saying that we haven't got a good theory worked out to decide the hard cases and explain everything. (Of course, once we do figure out the gravitational constant and build our theory, we can do all sorts of neat stuff.) My point here shouldn't be taken as a rejection of the idea that we're often wrong, or unjustified, in our moral judgments. All I'm saying is that it's possible to occasionally make correct judgments while lacking any developed theory to explain them.
There's this, from Matt's next post:
When you argue with people, you try to appeal to shared sentiments, point out alleged inconsistencies in the other guy's position, and so on and so forth. What underlies the possibility of discussion isn't objective moral truth but the fact that, say, Jonah and I have a vast stockpile of things we agree about and one tries to resolve controversies with appeals to stuff in that store of previous agreement.

What I want to say to Matt here is that objective moral truth actually underlies the possibility of this kind of discussion. Why is it interesting, in these discussions, to point out "alleged inconsistencies in the other guy's position"? Why would he even care about inconsistencies? Here's one answer: because his position consists of his beliefs, and when you have inconsistent beliefs, at least one of them has to be false. And why is it a problem that one is false? Because our beliefs aspire to objective truth, and when they are false, they fail.
Now, there are sophisticated versions of anti-realism that propose their own answers to these questions, like those offered by Simon Blackburn and Allan Gibbard (Matt links to one of Blackburn's books in his post). I reject their views because I reject deflationary theories of truth, but that's a fairly technical issue that I won't get into here.
One more thing to quibble with Matt about:
Sometimes you face someone whose disagreements with you are so profound that appeals to shared premises don't get you anywhere. Or you face someone who just doesn't care about doing the right thing. It's precisely because there's no way to decide who's objectively right in a dispute between, say, Adolf Hitler and liberal democracy, that we resolve the biggest moral controversies with force and threats of force rather than moral discourse and appeals to conscience. Debate and deliberation only work for the small stuff.

But this doesn't really have that much to do with objectivity either. It's an objective question whether or not Allah exists. He exists or he doesn't. But people are really set in their beliefs on this issue, some for good reasons and others for bad reasons. And this is the kind of disagreement that could conceivably (and does actually) cause people to start using force against one another. Similarly, you don't have to take morality outside the domain of objectivity to explain why people wouldn't be able to settle their differences through debate. Sometimes people just can't come to agreement on a question with an objective answer, and their desires are such that this lack of agreement seems to them like something worth fighting about.
If there's a single take-home message to all of this, here it is: Don't throw out the idea that there are objective facts somewhere just because people keep forming their beliefs in wacky ways, or because there's a lot of disagreement, or because everyone is fighting over stuff. It's still possible that there are objective facts, and people just aren't being very smart about figuring them out.