What follows comes from a long-running debate with two friends on free will. We’ve made progress recently (after around 5 years), and now we’re getting into a distinction between two different types of choice, freedom, and responsibility.
Here’s my friend:
And here’s my response:
I’m good up to #4.
(5) (4) does not mean that we didn’t make an actual choice.
This is pure semantics. If the output of an algorithm is determined, then it's silly to say that any particular spot along the way in the code made a choice.
I 100% agree with you that we experience making conscious, voluntary, and deliberative choices, and that this experience is not an illusion. No question.
But when you go on to say that this is the definition of choice, that’s where I part with you.
We will presumably be able to build a computer that has a set algorithm for how it makes every choice, and, excepting randomness, its choices would be determined. But we could also add a consciousness to it, where it attaches attribution to its actions and deliberations for certain kinds of (voluntary) effort.
This would not fit most people's standard of the computer making a choice (I don't think). I think you'll find that when most people speak of choice and free will, they really are talking about libertarian free will.
Not a sensation of choice-making within the context of a deterministic algorithm, as you’re describing here. You’re basically moving the bar relative to what most people call choice, but still using the WORD choice.
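To make the algorithm analogy concrete, here's a minimal, purely illustrative sketch. Everything in it (the function name, the options, the weights) is invented for this example; the point is only that a deterministic procedure can produce a record that looks like deliberation while having exactly one possible output for a given input.

```python
def pick_action(options, weights):
    """Deterministically 'choose' the highest-weighted option,
    while logging something that resembles deliberation."""
    deliberation = []  # a record that feels like weighing alternatives
    best = None
    for option in options:
        score = weights[option]
        deliberation.append(f"considered {option}: score {score}")
        if best is None or score > weights[best]:
            best = option
    return best, deliberation

# Identical inputs yield the identical "choice" every single time.
action, log = pick_action(["stay", "go"], {"stay": 0.3, "go": 0.7})
```

The `deliberation` log is just bookkeeping along the way, not an extra degree of freedom: nothing in it could have changed the output.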
So let me state this plainly: if that deliberation and contemplation and distinction between voluntary and involuntary was labeled “choice”, I’d absolutely agree we had it. I do think we have that, whatever it’s called.
I just call it the experience of choice. David’s experiments show it quite well.
My problem is that this doesn’t meet the standard for moral responsibility, which is the topic of the discussion (and has been for centuries).
If a deterministic algorithm is executing, you cannot find moral fault in any part of it other than its creator.
But that’s what you’re asking us to do. You’re asking us to morally judge the person on that algorithm’s train tracks, even though they can only do what the algorithm dictates. And you’re saying this is OK because onboard that train the conductor THINKS he’s driving (he’s not, the train was programmed at the station).
That is no foundation for moral responsibility. Not REAL moral responsibility.
But let me come more towards your position for a second, and maybe arrive in the same place (hopefully).
I don’t believe we should conduct our society as if we don’t have free will. Or, at least, not most of the time—not in 2015.
Our criminal justice system, which probably does more good than harm, is based on punishing people who do things that are bad for our society. It’d be ridiculous for a 1st degree murderer to give a ‘two lever argument’ defense that proves they shouldn’t be held responsible.
They’d be right, of course, but we’re simply not able to run a civilization this way.
So what do I propose? (this might look familiar to you)
The more practically responsible someone appears (e.g., capable of deliberation, in good mental health, etc.), the more blameworthy they are in a purely practical sense. Meaning, in the sense that our standard criminal justice system deals with.
We can then have another part of government which has as its underlying principle that anything that goes wrong with a person is either bad genes or bad environment—neither of which are their fault. So they’re all about improving the environment, getting people proper parenting, education, etc.
And these two systems work simultaneously with different philosophies.
Now, let me make a further admission.
If someone punches me in the face tomorrow, I’m going to make the EXACT SAME calculation as you and Tamler are proposing when I decide how to act immediately. The cops will do the same. And so will the courts.
But if I am calm and collected, I’ll be contemplating blame based on the second system (genes + environment), and thus trying to help them rather than punish them. These blur in the real world, but you get the idea.
So Tamler was largely just stating that he could not live in the second, pure world anymore. He couldn’t go through life thinking people weren’t to blame because it wasn’t practical for him, and it flew counter to his emotions. So he stopped trying.
He simply fell back to the intuitive position of blaming people at the criminal court standard:
No? All healthy? Well, this was your fault then.
That’s intuitive. It’s clean. And it’s PRACTICAL.
So I don’t think it’s a bad argument for how we should live our lives (much of the time anyway). Especially since I live much of my life that way as well.
But it IS flawed as a method of yielding ACTUAL moral responsibility for the same reason I’ve said multiple times. In fact, that actor was simply playing out an algorithm of which he was not the author, and thus true blame is not justified.
So, what are we left with?
If we’re talking about building a society, and being practical about how we treat each other, I think it’s perfectly sane to say someone is “blameworthy” for drunk driving or whatever (if they meet the obvious standard we talked about).

But if we’re talking about a free will discussion, and the true core of moral responsibility, and what’s required to attain that standard, then no, those living within the bubble of a deterministic algorithm are NOT morally responsible for their actions.
Image from Mario Quintana