I’m currently engaging in an email discussion on free will with Billie Pritchett, a keen thinker and writer on many philosophical topics. He is posting our exchange on his blog, which you should definitely take a look at.
The conversation has illuminated a key point for me in this overall free will discussion:
Compatibilism is in opposition both to the commonsense notion of free will and to itself.
First, two definitions:
1. Incompatibilists believe free will requires the ability to have willfully chosen otherwise for any previous decision.
2. Compatibilists agree that people do not have the ability to have done otherwise, and happily grant that THIS type of free will is impossible. They believe instead that the standard for free will lies in the ability to contemplate options and choose one according to desires and values.
Those definitions are uncontroversial. So let’s see how the compatibilist view aligns with the commonsense picture of free will and moral responsibility.
The commonsense standard
The easiest way to determine what regular folk think about moral responsibility and free will is not to ask them about those terms directly, but rather to have them evaluate common morally-charged scenarios and get their feedback.
Here’s a scenario: imagine a man named Christopher who deliberately commits a serious crime.
If we ask the average American what they think about Christopher, they will almost invariably say that he deserves to go to jail for his crime. And if you probe further for a reason, you’ll find that their judgment hinges on the claim that he could have done otherwise.
Or take the prototypical example of a child reprimanded by his parent for taking a cookie before supper. All of these situations hinge on a single concept:
He did X, when he could—and should—have done Y.
Upon inspection, this principle is clearly inherent in any commonsense discussion of human morality. Try it for yourself. Do your best to imagine any clear case of moral guilt where the central claim is NOT that the subject could have done otherwise.
There aren’t any.
Whether it’s on the playground or in criminal court, when the subject had no option but the one they took, we either have exoneration due to illness or we have a justified action.
Stated plainly, we call that person “not guilty.”
To be morally responsible is to have taken moral action X when you could/should have taken moral action Y, and it simply cannot be present when action X was the only option.
Yet this is precisely what the compatibilists are selling. They agree that people could not have done otherwise (that’s the determinism part), yet they still think people can be held responsible for their actions.
They cannot have both.
Either people could have chosen other than what they did choose (Libertarianism), or they were compelled, however imperceptibly, by the universe to make the choice they made (and therefore lack free will and moral responsibility).
So which is it?
The compatibilist argument reduces to this: you could not have done otherwise, but because you were able to contemplate your options and choose according to your desires and values, you are free and morally responsible anyway.
This is ridiculous.
Freedom and responsibility are not determined by whether you got to think about your choice; they’re determined by whether you had the option to make a different one.
The alternative standard, merely being able to consider your options, is so broad that it applies to many other things: dogs, dolphins, apes, and computers.
Computers, as an example, think about their choices all the time. In fact, they do far more analysis than we do when coming to a given decision.
If the standard were “thinking about options in the context of goals” then chess computers from the 1990s would have already achieved both free will and moral responsibility. Why do we intuitively know that computers don’t have these things?
Because computers can’t do anything other than what their programming concludes for them. They can’t step outside the program and make a different choice.
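This kind of deterministic “deliberation” can be sketched in a few lines of code. The following is a toy illustration (not from any real system; the options and values are invented): the program weighs its options against its values and picks the best one, yet given the same inputs it can never choose otherwise.

```python
# A toy deterministic "chooser": it contemplates options and selects one
# according to its values -- but it can never do otherwise.

def choose(options, values):
    """Score each option by summing the values of its traits, then pick the best."""
    scored = {opt: sum(values.get(trait, 0) for trait in traits)
              for opt, traits in options.items()}
    return max(scored, key=scored.get)

# Invented example: the cookie-before-supper scenario.
options = {
    "take_cookie": ["tasty", "forbidden"],
    "wait_for_supper": ["obedient"],
}
values = {"tasty": 2, "forbidden": -3, "obedient": 1}

# The program "deliberates" over its options...
print(choose(options, values))  # always "wait_for_supper" -- it cannot do otherwise
```

Run it a thousand times with the same inputs and you get the same answer every time. On the compatibilist standard, this deliberation would seem to qualify as free choice.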
But this is precisely the restriction that compatibilists agree humans have as well when they accept “compatibility” with determinism.
So why, then, are humans free when computers are not?
This contradiction forces compatibilists to pick one of the following:
Give free will and moral responsibility to dogs and computers
Realize that humans don’t have it either
They must choose.
Notes
The incompatibilist definition above is an adaptation of a classical definition of free will; it is not the formal definition of incompatibilism, which is simply the view that determinism is incompatible with free will.
The terms “free will” and “moral responsibility” are academic enough to require explanation, and any explanation introduces bias and noise by way of definition. So scenarios and feedback are likely the best way to learn about common intuitions and beliefs.
The claim that this is how people will respond to these moral scenarios is an empirical one, and it needs to be borne out through actual research. But I think the scenarios are obvious enough that few will disagree that these would be the findings.