Free Will Revisited

At a recent team meetup in Atlanta I had the chance to talk to someone who actually enjoyed hearing about my perspective on free will, a moral framework, and a quasi-universal method of evaluating political propositions.

Normally, people’s eyes glaze over at one or two levels of depth, either because they’re lost or because it’s just not interesting to them. But in this case, my new friend Nick grabbed on with both hands. We ended up talking for nearly five hours on the topic over drinks in our hotel lobby. It was quite enjoyable.

Anyway, I’m going to attempt to capture what we discussed in the hope that it will spawn additional conversation here.

[ Keep in mind that this will be quickly spewed out, so it won’t be terribly precise. I’ll clean it up as I have time or as it is mercilessly and justifiably attacked. ]

Basics

We started with the basics: there is the underlying layer of actual truth, which is that only physics truly exists and everything else is happening on top of that layer. So, physical matter, combined with the laws of physics, plus quantum randomness, equals outcomes.

I also gave Nick my two-lever argument, but it seemed mostly unnecessary. Within a minute of my explaining why we most likely don’t have free will, he 100% had it. It was tremendous. I kept trying to explain it in a different way (out of muscle memory), but after my very first pass his comment was something like, “Yeah, I get it, seems rather obvious actually if you think about it…”

So, with that taken care of in the first few minutes, we set out to evaluate the repercussions.

A Moral Framework

I then hit him with the Russell/Harris good life / moral framework, which is that we should be trying to reduce suffering and increase happiness for conscious creatures, and that we should do so by manipulating variables in the real world.

He was quick to pick up on how we can’t manipulate anything if we’re not making decisions, which is obvious but no less troubling for being so. I explained that we’re living within the illusion: no matter what, the first (physical) layer is the true one, and anything else we’re going to talk about, including anything having to do with our experiences, is real in the sense that it’s real to us, but is subordinate to the physical layer.

So, in that sense, we experience turning those knobs, and that’s all that can be said for it. He instantly got the moral framework, just as he had the physical layer, and he had an interesting comment. He said, “This is going to change everything about how I look at moral situations.” Which is true. Even if you were mostly there already in terms of behavior and belief, this gives you an actual framework within which it makes sense to behave and believe that way.

And we went through various political positions to see if we could find any where we disagreed. We couldn’t. This person, whom I’d never spoken with at length before, and who accepted my two basic premises of the underlying physical layer combined with the moral framework that sits on top of it, couldn’t find a single point of political disagreement with me.

In short, we agreed that the goal is to minimize suffering and maximize happiness, and that we use science as a tool to do so. So with respect to gun control, for instance, we look at the data. If injecting guns into a certain society and population increases safety and security, and thus reduces suffering and increases happiness, then we’re pro-gun there. And where it does the opposite, we’re anti-gun. We believe both places probably exist, which is why we’re all for gathering more data on which we can then make solid policy decisions.
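
To make that decision rule concrete, here is a purely illustrative sketch in Python. Everything in it (the names, the dataclass, the numbers) is invented for illustration and does not come from the conversation; it just shows the shape of the procedure we kept applying: estimate a policy’s net effect on suffering and happiness from the data for a given population, and support it only where that net effect is positive.

```python
# Hypothetical sketch of the consequentialist decision rule described above.
# All names and numbers are invented for illustration only.
from dataclasses import dataclass


@dataclass
class PolicyEstimate:
    name: str
    delta_suffering: float  # estimated change in suffering (negative = less suffering)
    delta_happiness: float  # estimated change in happiness (positive = more happiness)


def support(policy: PolicyEstimate) -> bool:
    """Support a policy only where the data says it reduces suffering
    and/or increases happiness on net, for that specific population."""
    net_benefit = policy.delta_happiness - policy.delta_suffering
    return net_benefit > 0


# The same policy can score differently in different populations,
# which is why the answer can be "pro" in one place and "anti" in another.
region_a = PolicyEstimate("more guns, region A", delta_suffering=-0.2, delta_happiness=0.1)
region_b = PolicyEstimate("more guns, region B", delta_suffering=0.4, delta_happiness=0.0)

print(support(region_a))  # True  -> pro-gun there
print(support(region_b))  # False -> anti-gun there
```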

Anyway, we now agreed on the two layers: the physical layer, which does not allow for free will, and the experience layer, which allows for practical free will (the experience of it) and the ability to live subjectively meaningful lives based on improving oneself, creating, sharing, loving, etc.

And that’s where it got interesting.

What Now?

Ok, so we agreed on the nature of reality and how to generally improve it given the horrible state it’s currently in. But what about after that? What is the endgame? How do you maximize human happiness beyond that?

And specifically, how does our belief in free will affect that? Or, to put it more pointedly, what if believing free will exists makes people happier?

Alternatively, what if free will does exist (something we explored considerably) and the best thing for increasing happiness was to believe that it didn’t?

This was a fun one. Here’s how it goes: try to imagine a world in which free will does exist, but where we would NOT want to behave as if it didn’t when it comes to criminal justice, welfare, taxation, etc. Or, to put it another way, how would believing in free will (because it actually existed) change how we should treat each other? We couldn’t find a way that it would.

In other words, we don’t think actually having free will would change how we’d treat criminals or billionaires. Wouldn’t we still want to act as if we didn’t have it, because that’s the best way to run a society? Would retributive justice, or more rewards for the rich, based on them DESERVING those things, actually help society?

My thought was that it would if the punishment were actually deserved, and if the punishment led to better behavior. But that’s the case with consequentialist approaches as well. In fact, that’s the definition, isn’t it? You do what produces the best outcomes. Of course there is an underlying concept of fairness there, so if torturing a baby for 1,000 years yielded a .01% better outcome for the world overall, we probably still wouldn’t go for it, but it’s a good guideline.

Once he stumbled on this he became convinced that it’s an even better argument against free will, i.e. that even if we were to have it, which isn’t likely, we should behave as if we don’t because it yields the best possible outcomes.

My point there was that we must remain anchored to truth in some significant way. Otherwise we could just hook ourselves up to endorphin machines and maximize our happiness all day long. The slippery slope there is that once you start believing things that aren’t true in the name of happiness, you open yourself up to suffering. See religion.

So, ultimately, we came away thinking that if we did have free will we’d need to live within the illusion of not having it, just as we currently lack it yet live within the illusion of having it.

At one level, that of our experience, we must do what flows with our interpretation of the world. Our biology and evolution make it so that we cannot walk around in the first (physical) layer. It’s too strange. It’s too disconnected. It’s too depressing. It’s too…unreal. We must live within the world we experience, and that world seems to have free will in it, so we act as if it does.

Fine. No issues. As long as we remain anchored to truth. And that truth is that we don’t have it, and that truth leads us to treat ourselves better and build a better society.

And that’s where the craziness comes in. If behaving, at some level, as if free will doesn’t exist is ALWAYS the better way to behave, then why does it matter whether it exists or not? This was Nick’s point on many occasions, and it may be Carl’s point as well, although I’m not sure it’s for the same reasons. Carl has said many times that it doesn’t matter, so it doesn’t matter.

I disagree. I think we should use the truth as an anchor either way, whether we’re pulling ourselves toward it or pushing ourselves away from it. It must remain the anchor, lest we lose sight of true north and start embracing superstition and other forms of nonsense.

Maximizing Happiness

The second interesting point we came upon is that it’s hard to maximize happiness for advanced lifeforms.

So let’s say we transfer ourselves into digital form. We recreate the brain, can live in artificial/eternal bodies, switch bodies, etc. Crime is gone, our horrible impulses to harm others are gone. We’re hyper-advanced, say, in 1,000 years.

Great. Now what? What constitutes the good life then?

Or, to return to some theological arguments, what constitutes happiness if it cannot be contrasted against suffering? I’ve always thought of the endgame as digital beings moving through life experiencing things together. Creating art, music. Dancing. Making love. Laughing. Exploring things we don’t yet understand about the universe. Looking for other civilizations to share experiences with.

But if you think about it, this all hinges on us having remnants of our current psychology. It requires that we be curious. It requires that we enjoy overcoming obstacles. It requires that we enjoy improving ourselves.

All these things, however, exist because we used to be primitive creatures trying not to die. Put in perspective, getting a Nobel Prize, and enjoying the experience of doing so, is just a few short hops from building a straw hut that keeps the rain off of you. In both cases the enjoyment is based on a single thing: struggle.

And struggle implies obstacles, and risk.

So as obstacles and risk disappear, doesn’t happiness disappear as well? This needs exploration.

Best Case as No Possibility of Suffering?

Here’s a sad thought: what if the ultimate stage of enlightenment is achieved, and we realize that the way to be most happy is to avoid the possibility of suffering, so we just unplug ourselves?

Or perhaps suffering simply gives us a deeper low from which happiness can launch?

I’d hate to get this far and realize we’ve actually justified nihilism, which is a sophomoric attack on our premise as it pertains to our current world. In other words, decreasing suffering and increasing happiness TODAY by acknowledging that free will doesn’t exist, and that we have a moral framework, does not lead to nihilism. It leads to a far superior society.

But what about 5,000 years from now? Will the point be lost once we don’t have suffering to spring off of? What will happiness be at that point? When you can arbitrarily squirt it whenever you want, what meaning will it have? It seems to me that we will do as we’ve always done and invent the difference between suffering and happiness. We’ll artificially generate the struggles. But even that depends on the primitive mind that gets high off of the struggles–a sign of our lowly origins.

I imagine there will be movements to install primitive components into one’s brain in order to have the…texture of conflict and primitiveness. This will provide the delta between happy and unhappy that is required to enjoy the happy.

Summary

So, it was a great conversation. Here are the main things I got from it:

  1. Confidence that there is a subset of people (in this case, those with a Master’s in CompSci) who get the basic arguments instantly, and with whom the moral landscape resonates strongly.

  2. Nick’s discovery that a strong argument against free will itself is that even if it existed, we’d likely have to behave as if it didn’t. To the degree that retributivist punishment or true merit-based reward produced worse outcomes, we’d have to move back toward the pure consequence-based system of a world in which no free will exists.

  3. The endgame for achieving and sustaining happiness is not clear, as it seems that struggle, risk, and obstacles (i.e., suffering) are required to some degree for meaningful happiness. And if we are advanced enough to eliminate those things, then we may have also eliminated our chance at happiness. The alternative is to still have those things while maintaining the ability to remove them, which again is an embrace of the non-real for the sake of contentment.

If anyone would like to comment on these developments, I’d love to engage with you.
