Monday, May 09, 2005

Good dog!

After reading about the stray dog that rescued the abandoned baby, I was reminded of one of my many odd philosophical views -- the view that animals can be morally praiseworthy. In my view, what makes one an appropriate object for moral praise is an intrinsic desire to help others. (Intrinsic desires are to be contrasted with instrumental desires. If you want to help others only because someone promises you a bone for helping others, you have an instrumental desire and that's not morally praiseworthy.) So any creature that can be motivated by feelings of sympathy or benevolence is a candidate for moral praise.

As often happens, my big enemy here is Immanuel Kant. Kant believed that motivations rooted in our desires, and not in reason itself, couldn't be moral. I agree that animals don't have the capacity that's necessary for moral esteem on Kant's view -- they can't consider their reasons for acting, look for reasons why those are good reasons, and discover the foundation of all their reasons in their own nature as free and rational agents. Kant expressed a common belief that moral praise requires some kind of reflection or deliberation that animals probably can't do, and that's the belief I don't share.

7 comments:

Dennis said...

So the question I immediately pose in my instrumentalist way is this: can a small computer program be morally praiseworthy? An extremely complex multipurpose program? A dog-equivalent AI? A Data-style emotionless but otherwise human-equivalent intelligence? What if we know every mechanistic thing about the cognitive processes in these examples? I suspect I'd have to parse Kant's answers (in the first instance) as no, maybe (depending on whether the program can "understand" morality), no, and yes, and yours as no, no, maybe (depending on the level of comprehensibility of the dog simulation), and no. Neither of these sets of answers strikes me as terribly satisfying. Am I missing something fundamental about the nature of desires for you?

Neil Sinhababu said...

I'm not exactly sure how broad I want the scope of moral praise to be. But at the very least, I want to praise all creatures (and robots) who desire to help others. If they feel pleasure in knowing of another's happiness and are motivated to help others, they get praise from me. I don't know which of these your AIs have, so it's a bit hard to answer exactly.

Dennis said...

I suppose this kind of gets wrapped up in the whole question of what it is to be a "creature." It's far from clear to me what a "desire" should be in any sort of universal sense -- I for one would like to say that a human-equivalent AI (in all senses, unlike the examples I listed) would have desires, and thus be capable of being morally praiseworthy on your view, even if I completely understood all of the component pieces and could follow its mental states through with pencil and paper if I tried hard enough. At the same time, I'd like a program like "Hello, world" to be essentially an inanimate object and thus ineligible for such treatment. Since I think you'd share these intuitions, I think you'd be on my (confused) side in wanting to figure out where that line is. And, of course, I have no friggin' clue.

Blue said...

I think this deterministic outlook kinda makes the difference between intrinsic desires and instrumental desires very ambiguous. An AI programmed to want to help people? A mother who doesn't desire to increase world utility, so much as pass on her genes, when she feels compassion for her doe-eyed child?

Blue said...

Also (inspired by a certain online quiz going around), I wonder just how far the ability to have moral worth goes. Not just AIs or animals, but what about whole cultures? Can you judge the "belief of a nation" as ethically good or not, or is it a meaningless statement?

Anonymous said...

According to Candace Vogler, Kant would go so far as to say (I think) that an intrinsic desire to help others makes it less plausible that you are behaving morally. If you're helping others out of the warm fuzzy feelings you get from helping people, you may be fulfilling your duties through your actions, but you're doing so from your own inclinations, not for the sake of fulfilling duty. It's more plausible to believe that someone who's a cold-hearted bastard but who helps others because he feels obliged to is behaving morally.

This is why I did badly on my Kant midterm.

Neil Sinhababu said...

Tony, I think I can make sense of the moral worth of an entire culture. If people in one culture are more benevolent than people in another culture, I'd assign it higher moral worth. Cultures with more cruelty are worse.

You're right, Julian. The biggest role desire can play in Kant (as I interpret him) is in making someone aware of a particular option. But desire can't play any role whatsoever in the agent's justification of the action, or the agent will be trapped in heteronomy. This view sucks.