I was pleased to hear that Sam Harris recently appeared on one of my favorite philosophy podcasts, Very Bad Wizards, to talk about free will (among other things). Sam is one of the few people I’ve found who shares my exact views on the matter.
This was especially rich since I had planned on discussing free will in person with one of the two guys who run the podcast: Tamler Sommers. Tamler is a philosopher and professor at the University of Houston. I travel to Houston a good amount for work, so I wanted to arrange a meeting with him to discuss our particular delta on the topic. I think I’ll still try to do that, actually, but this discussion on the podcast with Sam was a pleasant substitute for me in the interim.
I’ll start off by saying that this was the best discussion of free will I’ve ever observed. Here are a few of the key points:
All three participants claimed from the very beginning to reject the concept of libertarian free will. This is fascinating because it would seem to remove most of the waste that occurs in free will conversations.
Tamler used to hold Sam’s exact position, i.e. that moral responsibility is not possible because we’re just Roombas who lack free will.
Tamler changed his mind recently for a remarkable reason: he felt his intellectual understanding of free will was in conflict with natural human responses to crime, i.e. that because he would feel legitimate in hating himself for driving drunk and injuring a child, or in hating and wanting retribution against someone who hurt his child, he could not in good faith maintain the purist view on free will that Sam and I hold.
While all three claimed to reject libertarian free will, 70% of the conversation was spent on showing how it wasn’t possible. I felt this was heat from Sam where there should have been light, but it’s true that Tamler in particular kept throwing out signals that he supported it, so Sam kept repeating himself.
Tamler and David’s main (and great) question was the following: “even knowing we lack libertarian free will, can we not still acknowledge that certain people react to reasonable stimuli aimed at improving their behavior, and others do not?” This then became: “What should we do with this information?”, and, “Can this not become a new foundation for reward and punishment in a world that we all agree does not include actual free will?”
In short, the discussion hit the event horizon of the XY problem. They lost sight of X (moral responsibility without a consequentialist framework) and proceeded to address a number of Ys. Here’s the X that I think they should have remained focused on:
Whenever they lost each other, or felt required to rehash already-accepted points, they should have returned to that key question.
My greatest frustration, which led to loud and creative swearing directed at my inanimate steering wheel, was that I felt as if Sam missed the opportunity to make the crucial point in a teed-up situation when everyone had already rejected libertarian free will:
This seemed patently obvious to me, and I was sure that Sam was going to state it directly and force a response, but he didn’t. He did keep repeating that he thought all their points would ultimately reduce to consequentialist reasoning, which is really the same point, but he didn’t ever state the position directly, which I think (hope) would have been enlightening.
Instead he was continually pulled into the undergrowth of detail by false signals that they actually did hold libertarian viewpoints. And as I said, I think this is what produced so much heat vs. light.
I want to return to Tamler’s problem with maintaining a pure view on free will, i.e. that people can have precisely zero moral responsibility other than what we give artificially for consequentialist purposes.
This point is critically important because it represents most people’s main objection to the view of free will that I hold. Interestingly, it also illustrates why people generally don’t respond to logical argument on moral issues, which so often involve emotion.
This is especially interesting since Tamler just gave a great TEDx talk on this exact topic. I’m going to write about it separately as well, but the summary seems to be the following:
I trust Tamler will clean that up if I made a mess of it.
The irony is that Tamler seems to be doing this exact thing. As Tamler says in his TED talk, we generally cannot be persuaded by reason alone—especially when we have an emotional force being applied in the opposite direction.
And in this case, he knows we are Meat Roombas. He knows we are literally not responsible for (because we’re not the cause of) any of our actions, but despite this he is claiming it’s defensible to seek retribution against a hypothetical someone who hurt his child based purely on emotion.
So while I personally applaud Tamler for having the courage to explore and accept the implications of this cognitive dissonance within himself, I am deeply troubled by his claim that his emotions are a superior compass for establishing moral responsibility than reason, simply because they’re more tangible to him as a human.
Sam picked up on this perfectly, and offered the following retort (which was also in his book on Free Will):
[ NOTE: That’s not a quote from Harris, but rather my summary of his position. ]
Tamler and David’s responses to this were curious. They said that the presence of the cure somehow changed their responsibility. This I did not understand.
Here’s a deductive form of the argument I would have made in Sam’s position:
Once we know everything about the brain and how it creates and makes up the mind, we will see that all negative behavior is causally (and therefore morally) the same as having a tumor or being controlled by an evil genius.
The presence of a cure for X Negative Behavior, or a tumor, or being controlled by said evil genius does not stop the cause of those things from being outside the control of the person in question.
Therefore, the presence of a cure for the cause of whatever negative behavior resulted in an unwanted outcome does not affect in any way the moral responsibility of the person who committed the behavior.
[ PRE-DEFENSE: #1 includes not having the intelligence or state of mind to take the cure were it available. Nice try, though. 😉 ]
You know the drill: either accept #3 or show me what’s wrong with #1 or #2.
Anyway, Tamler’s question still stands, and my answer to him is simultaneously undeniable and unfulfilling:
I told you it would suck.
So to Tamler: I sympathize with you. I agree with you. And I respect your willingness to explore the consequences of the tangible nature of your emotions.
But it seems to me that the one thing we cannot do is use that emotion as an escape hatch to unreasonableness.
When we stub our toe at night, or a driver cuts us off aggressively, we as humans are universally justified in being angry. We are allowed to cry out. We are allowed to cuss. We are allowed to make gestures.
But only for a moment.
We’re only allowed to cuss while in the throes of pure emotion. And the truth seems to be that it is—or at least should be—the same for real crimes. It seems to simply be a matter of degrees.
If I had a daughter, and a drunk man killed her with his car, I would anticipate holding two things in my mind simultaneously:
This man is a moist robot comprised of big bang matter vibrating according to the laws of physics. His genetics, chemistry, upbringing, education, and general circumstances turned him into a projectile aimed at someone I care about. He too is a victim.
SO FUCKING WHAT? In THIS world, Sarah is gone. In MY world, my wife is devastated. In the REAL world, I’ll never see her laugh again. Fuck this guy, I want to be left in a room alone with him for an hour. I will make him feel the pain he has caused. I will make him know what we have lost.
I am an empathic person, so I was emotional even writing that. And it is a bit presumptuous of me to talk about what I would do in position X when I have never been in it. But what I can say is that I would expect to be able to grasp position #1 after some measure of time had passed.
To be clear, I would not be ashamed of still feeling the searing wrath of position #2 at times. I would not feel lesser for it. Not for a long while, anyway.
But once I was able to maintain position #1 for any period of time—even a span of minutes, or hours, or days—I would hope to begin to see that position #2 was in fact not ok.
You either believe your beliefs or you don’t. And when I am in state #1 I know for a fact that the man is not responsible. I simultaneously know that when I’m in state #2 it’s ok for me to feel what I’m feeling.
But I wouldn’t ever be tempted to conflate these two by thinking that it’s ok to replace #1 with #2 as a matter of policy, and that is the mistake that I believe Tamler is making.
So those were my observations and comments. Overall I thought it a tremendously enjoyable podcast, and I recommend that you listen regularly.
Tamler and David kept mentioning sensitivity to punishment and moral arguments as potential indicators of increased moral responsibility, which is curious given they had already accepted that everyone is a Roomba. In that analogy we would simply have Roombas with more (or fewer) collision-avoidance sensors and lights, which does exactly nothing to keep them from being Roombas.