Sunday, April 05, 2009

Inference, perception, and babies

I think the thing people want to say in epistemology these days is that perceptual knowledge is non-inferential. This makes a good deal of phenomenological sense. Insofar as there's something distinctive it feels like to infer things from other things, it isn't going on in ordinary cases of perception. I don't look in the general direction of my desk, have a bunch of sense experiences, puzzle out what's going on with them, suddenly go 'Aha!' and arrive at the conclusion that that's my desk in front of me. Nor do I need to in order to be justified in believing that my desk is there.

There are lots of things for which we have this sort of noninferential phenomenology. It's not just the way things go with the size, color, and orientation of surfaces -- I can look at a sequence of letters and figure out what word it is in basically the same way. People extend this to all sorts of other stuff. Apparently experienced chess players can see who's winning a chess game in this sort of way. People talk about moral perception working this way too.

It's examples like the chess thing that make me less impressed with this point, however. That's something that probably started out with inferential phenomenology earlier in the players' chess careers, as people considered the positions and the values of the pieces and arrived at a decision about who was winning. They've just done it so many times that it's become second nature to them and there's no inferential phenomenology. Same with words -- you start out doing Hooked On Phonics or something and puzzling them out. Then after a while it all works automatically.

If I had to bet, I'd bet that perception of physical objects is the same way too. The puzzling-out part just gave way to automatic knowledge long ago when we were little babies, so we don't remember it. If people want to say that perceptual knowledge is noninferential for adults but inferential for really little kids, that's cool. But sometimes it sounds like they really want to make claims about this sort of knowledge as a general category, regardless of the age of the perceiver, and then I'd want to say that perceptual knowledge just involves an inference we've gotten real good at so we don't have to think about it.

7 comments:

Protagoras said...

I think that this is connected to the things some people believe about qualia; you can't, it is believed, infer to a phenomenal feel. If you could, then those inferential links would be the sort of thing one could use to start trying to give a functional account of qualia, which some insist can't be done. However, I think you're quite right; we can infer to phenomenal feels. I'm actually working on a paper defending a functionalist account of qualia relying partly on this point.

Michael Drake said...

Good point. Also, if perception were noninferential per se, we'd never have cause to do a double take. I recount a pertinent experience here.

Thomas said...

There's more to it than the connection with qualia, since there can be behavioral consequences.

You can do experiments with very young children that set up a perception that something is a single physical object and then contradict it, and see if they are surprised.

If I recall correctly, these experiments show that some aspects of perception of physical objects exist very early, and others appear rather later.

Of course, even if these perceptions were apparently non-inferential at birth, they could (must?) still have been learned during evolution. Whether this would count as non-inferential would depend on what argument you wanted to have, and with whom.

dustin locke said...

Hey man! Long time no see!

Interesting post. But perhaps the argument for thinking that the experienced chess player's beliefs are non-inferential is not that they are automatic, but rather that the beliefs the player would need to infer them from simply don't seem to be there. Of course, that's an empirical question that needs to be answered empirically.

jed said...

Two reactions:

1) Why is conscious inference the issue? If our brains are doing a lot of defeasible analysis (probably using statistics rather than logic), does it matter whether it was ever conscious? If philosophers have a rationale for demanding conscious reasoning, I'd really like to know.

This anyway seems like an empirical question, thus a matter for cog sci -- is it not?

2) You use the interesting phrase "the thing people want to say in epistemology" -- is what they'd like to say the real root of the argument? It certainly seems so in cases such as property dualism -- you have to want that conclusion pretty strongly to keep walking down that road. I have seen some pretty explicit statements of such motivations in other cases.

But if this is how things often work in philosophy, I find it disturbing.

Neil Sinhababu said...

Thanks, Dustin, I think that helps.

Jed, I think the motivations in this case usually are closer to avoiding external-world skepticism (you don't know whether or not you're dreaming) than avoiding dualism. Regarding inference, there's going to be a pretty contentious question about what counts as an inference. I've talked to some people who think that inference is a fundamentally normative notion -- inferences are things an agent can be held to account for doing incorrectly. Personally, I don't have any clue what would be a good account of inference.

Neil Sinhababu said...

Or really I shouldn't say I don't have any clue -- I just don't know what a good theory would look like, or what's out there in the literature.