Wednesday, June 15, 2005

Explanations and birds

If anyone knows enough epistemology or philosophy of science, I'm wondering if there's an orthodox thing to say about explanations in cases like the following:

You're a hippie, lying on your back in a meadow where birds are fairly uncommon. You're carrying a drug that will cause you to have a visual hallucination of a flock of birds flying above you, but leave your reasoning abilities as normal. You take the drug, and soon enough, you have a visual experience as of a flock of birds flying above you. If you had experienced this without taking the drug, you would've believed that lots of birds really were flying overhead. But you know that you did take the drug.

I think it's pretty clear that in this case, it's rational to believe that there are no birds up there. Even though a causal story that implies the existence of birds would explain all the observational data, and even though you have no counterevidence against the birds' existence, you have no reason to accept the existence of the birds. All your observational data is already explained by something you have good reason to believe -- namely, that the drug has given you illusory bird sensations. If you have one good explanation of something, you don't keep buying other explanations and accepting them in conjunction with the first one. Maybe you accept an inclusive disjunction of them if other reasonable explanations are offered, but you don't jump to accepting the conjunction without further evidence.

So here's my question: What general principle of explanation do we appeal to in rejecting the real-birds explanation in the above case? One possibility is simplicity. You take on existential commitments in making explanations -- commitments to the existence of a causal mechanism by which the drug influenced your sensations, for instance. Occam's razor will slice away redundant existential commitments like a commitment to real birds.

I post this because I have a similar example in a paper, and I thought simplicity was doing the work, but Brian Leiter thought that was weird. If anyone has a better idea of what principles are involved here, I'd like to know!

13 comments:

Richard said...

I would think it involves some sort of (Bayesian?) probability calculation. Given the low "base rate" probability of real birds, and the very high chance of a hallucination, it's rational to conclude that it was just a hallucination. (Isn't it?)

Neil Sinhababu said...

Hey, that sounds pretty good!

Dennis said...

Yep, I agree completely (and was going to say precisely this); the chance you see birds because of the drugs is high relative to the chance there are actually birds, so assume no birds.

Justin said...

A couple of things occurred to me here. First, simplicity is often used as a kind of catch-all. So, on the one hand there may be a sense in which simplicity is doing work here, while on the other hand you probably want to say something more informative.

Second, if simplicity *is* doing work here, I'm not sure what you write perfectly captures it. Here is the preferred explanation: the drugs cause your bird impressions. Here is the alternative explanation: birds cause your bird impressions. How is the first explanation really any simpler? I don't think it is. Notice, the alternative explanation *isn't* this: birds+drugs cause your bird impressions (or birds and drugs both cause bird impressions). I can see how this explanation is more complicated than the preferred one, but I take it that the real alternative explanation here says that drugs played no role in your bird-impressions, and thus the drugs do not figure as any part of the genuine explanation. So, at the least, some care would need to be taken to spell out how simplicity figures here.

The Bayes stuff is fine as far as it goes. Given that you've updated your belief set so that it includes the belief that you've taken drugs, the low prior probability you attribute to birds actually being present won't get much of a boost from your having bird impressions. There are foundational questions here which might be relevant, though. For instance: exactly what is the deeper principle about explanation that the Bayes stuff is capturing?

I presume that you want the *reason* we think the drug explanation is better, not just a formal mechanism codifying our intuition that it is. If so, here would be one shot at giving you what you want...

Assume that explanations (more carefully, explanans) are propositions. Then it would seem to be a necessary truth that explanations must be true: that snow is blue can't genuinely explain anything, that snow is white can. Given this, if you know that some proposition is false then you know it can't genuinely explain anything. In your example, you know that the proposition that drugs were taken is true. Now of course, this by itself doesn't guarantee that your taking drugs explains your bird impressions, but it's at least a necessary condition for that.

On the other hand, the proposition that birds are present is assigned a low prior probability, so you think it's probably not true. And, that you have bird impressions doesn't significantly boost that probability in this case, so its posterior probability will be low too. Since a false proposition can't explain anything, and since even upon having the bird impressions the proposition in question should be assigned a low probability (i.e., should be taken to be probably false), you should think that this proposition doesn't explain anything. Yet more could be said here, but I've probably already written too much.

Rousseau said...

Of course the Bayes stuff is nice. That's why all of us economists and mathematicians have probability-based outlooks on the world.

I always had the impression, Neil, that you were looking for absolute answers.

(And no, the fact that one outcome has a majority or plurality of the probability does not mean you should say it occupies the 1 in a {0, 1} assignment.)

Lindsay Beyerstein said...

Are we assuming that this drug always causes instant bird-flock hallucinations, and that the hippie knows this?

In that case, it's rational to believe that if you swallow the pill and immediately see a bird flock, your experience is caused by the drug.

If the hippie knows only that the drug usually (but not always) produces bird-flock hallucinations, and/or that it takes a while to take effect, then I'm not sure it's rational for him to believe that he just hallucinated a flock. If there's uncertainty, it's probably rational to suspend judgment.

I think it comes down to the base rates. What's the chance that a flock of birds would fly overhead at any given instant, vs. the chances of the drug inducing that experience during the same interval?
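
To make that comparison concrete, here's a rough sketch in the odds form of Bayes' theorem. All of the numbers are placeholders I'm making up purely for illustration, not anything fixed by the example:

```python
# Back-of-the-envelope version of the base-rate comparison.
# Every number below is an illustrative assumption.

p_birds = 0.01               # chance a real flock passes overhead in the relevant interval
p_exp_given_birds = 0.99     # chance of a bird-experience if a flock really is there
p_exp_given_no_birds = 0.90  # chance the drug alone produces the experience anyway

# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio
prior_odds = p_birds / (1 - p_birds)
likelihood_ratio = p_exp_given_birds / p_exp_given_no_birds
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"P(real birds | bird-experience) = {posterior:.3f}")
# With these numbers the experience barely moves you off the base rate,
# because the drug makes the experience almost as likely without birds as with them.
```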

Neil Sinhababu said...

Thanks, everyone. After reading Justin's comment, I've realized that the initial post was ambiguous. Was I trying to explain why one rejects the real-bird explanation, or why one rejects the existence of real birds, not considered as part of an explanation? It's actually the latter claim that I was particularly interested in. Of course, that implies some interest in the former, since the former implies the latter.

I'm actually not sure how the story works in Bayesian terms. Most of my knowledge about Bayesianism comes from just having read this Stanford article, so count me as a confused beginner.

Certainly, the base rate of birds is pretty low. But P(Birds | Bird-sensations) is pretty high. Now, you might say that the relevant thing to consider is P(Birds | Bird-sensations-when-you're-on-drugs), which ought to be low. But what I'm looking for is an explanation of that. What's the reason why the conditional probability of birds when you're having specifically drug-induced sensations is so low? I'm assuming that a low P(Birds | Bird-sensations-when-you're-on-drugs) isn't one of our priors, and that we have to build it out of some other stuff. So what is this other stuff?

A low P(X | X-isn't-part-of-the-simplest-explanation-that-explains-all-the-data) would do the work.

Justin said...

So, here's the relevant equation: P(H|E) = [P(H)*P(E|H)]/P(E), where 'H' stands for hypothesis and 'E' stands for evidence. In this particular case, let the hypothesis be that there are real birds and let the evidence be that you have bird-sensations. We want to calculate P(H|E) -- the probability of there being real birds, given that you have bird-sensations -- from the other values.

P(H) is the prior probability of there being real birds; that is, the probability of there being birds, prior to taking into account your sensations. The example says this should be low -- birds are uncommon in the meadow -- so let's say it's .2. Next, P(E|H): the probability of your having bird-sensations, assuming that there are actual birds. Presumably this is very high; if there are real birds, you'll most likely have bird-sensations. To keep things simple, let's just suppose this probability is 1. Finally, P(E): the likelihood that you will have bird-sensations, considered independently of whether the actual-birds hypothesis is true or false. Here's where the bit about the drugs is relevant. Given that you've taken the drugs, it's actually pretty likely that you will have bird-sensations, so let's say this is about .9.

Substituting the values into the equation, you get P(H|E) = (.2*1)/.9 = .2222; this value is the posterior probability. What this means is that the probability of there being birds jumped from .2 prior to your having bird-sensations to only .222 after having them -- a very slight jump, which leaves the actual-birds hypothesis still unlikely. Perhaps for your purposes, the crucial thing here is that the higher P(E) is, the less of a "boost" the actual-birds hypothesis gets, because you're dividing by a bigger number. When you take drugs that cause bird-sensations, P(E) goes up, corresponding to a lower posterior probability.
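
If it helps to see the arithmetic run, here's the same calculation as a few lines of code. The only thing I'm adding is an assumed value of .875 for the probability of bird-sensations in the absence of birds, which is just what's needed for P(E) to come out to the .9 used above:

```python
# Spelling out the calculation above. The .875 for P(E | no birds) is an
# assumption chosen so that the total P(E) comes out to the .9 used in the comment.

p_h = 0.2                # prior probability of real birds
p_e_given_h = 1.0        # probability of bird-sensations if there are real birds
p_e_given_not_h = 0.875  # probability of bird-sensations from the drug alone (assumed)

# Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|not-H)P(not-H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # = 0.9

# Bayes' theorem: P(H|E) = P(H)P(E|H)/P(E)
p_h_given_e = (p_h * p_e_given_h) / p_e

print(f"P(E)   = {p_e:.3f}")          # 0.900
print(f"P(H|E) = {p_h_given_e:.4f}")  # 0.2222 -- the 'very slight jump' from .2
```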

Neil Sinhababu said...

Great! Thanks.

Nancy Goldstein said...

Um, sorry Neil, but I actually *did* do drugs as a young hipster and, um, this is just a little too, um, complicated for me, um...

Battlepanda said...

There is actually a real-life example analogous to your thought experiment, in my mind at least.

Back in the dark ages, it was widely accepted that rocks fell out of the sky. Of course, it was the work of the devil. But then that newfangled scientific thinking took hold, and all of a sudden, people who believed that rocks fall out of the skies were dismissed as ignorant and superstitious. Except of course they were right.

I think the main thing to take from this is that 'pretty darn sure' is not the same as 'absolutely certain'. 'Conclusion' implies certainty. Take a hundred hippies on drugs, and some of them are bound to be seeing real birds as well as phonies.

Neil Sinhababu said...

Yes, Angelica... as Justin points out, seeing the bird-images raises the probability of real birds by only about 2.2 percentage points.

Marty said...

Hi, just stumbled across your blog from Rox Populi, where you had the funniest caption of the bunch.

My thought is, it's only LIKE Bayesianism AFTER the drug has worn off. If you're considering your judgment while the drug is affecting you, then you may well believe it's real birds.

But since Bayesianism is about prediction more than it is about ex post reasoning, semantically you might be better off calling the doubt you experience afterwards Peirce's "abduction," Lawson's "retroduction," or more generally "inference to the best explanation."

Oddly enough, it's just this sort of IMPROBABILITY that would seem to cause real trouble for a Humean inductivist. He would continue to say, "just because it was entirely improbable that I would ever see birds, I can't be sure birds will never appear. And I saw birds, therefore how do I know the birds WEREN'T in fact real?" So, oddly enough, we're at certain times best off when we harbor strong convictions and erase doubts that might lead us into this sort of Humean trap.