Human Immortality Using LLMs

Using LLMs and Memory to back ourselves up before we die

I’d been thinking about digital immortality for a long time before GPTs and LLMs became a thing. Back when I wrote this piece, I’m pretty sure I was imagining something that could extract and emulate the brain by interfacing with the biology itself. That still seems very far off.

Now that we have LLMs, though, another door has opened for us. First, a bit about identity itself.

What makes us, us?

I’ve read a million (realistically a hundred or so) books about human identity. What creates it. Whether it’s mostly nature or nurture. How much it changes over our lifetimes. And—probably most importantly—how much those changes matter or don’t matter for our understanding of identity.

Sam Harris just had a guest on talking about Derek Parfit, a philosopher obsessed with these kinds of questions about identity. It was great to hear Sam cover this topic, because his work is what got me thinking about these questions years ago, especially as they relate to Free Will.

One of the top questions, or perhaps the umbrella question, of this type of work is this:

If we’re changing all the time, from our molecules to our memories, then how is it that we still feel like ourselves? And how do others still see us as ourselves?

It’s a lot like the Ship of Theseus: if it’s changing all the time, what makes it the Ship of Theseus? Note: Here’s my answer to that.

Constant change yet still the same

People who don’t believe that a person you love can change into someone you no longer expect or want have never been close to a divorce, or to parenting.

We’ve all heard of Tay, the AI personality Microsoft launched that got exposed to the underworld of the internet and became sexist and racist within a couple of days.

We think of this as an AI training issue, but how different is it when a conservative Lutheran family raises a teenager who grows up in the ’60s?

With Tay you have liberals creating a Bambi of a soul that turns into 4chan Johnson. The conservatives hoped to get Tucker Carlson and got a flower child.

In both cases you have a creator with hopes for what their creation will become. And in both cases they are horribly disappointed, because the inputs shaped the output.

People change all the time, but we still consider them the same person.

And who would say it’s unrealistic that an LLM could be one thing at one stage, perhaps a great partner, only for you to find out in a few months, or twenty years, that you don’t like them anymore? Isn’t that some massive percentage of relationships?

Isn’t that falling out of love? Isn’t that growing apart? Isn’t that raising a kid who becomes someone you’re not proud of? Isn’t that divorce?

And this isn’t even touching the most important type of change within a person: the kind that’s a complete mystery even to oneself. How often does a human go through life finding themselves accepting, wanting, or rejecting things that completely surprise them?

Wow, I didn’t like garlic before and now I want it on everything!

Wow, I didn’t know I could ever be attracted to a person like her!

Wow, why do I have these feelings?

Here’s the most human thing ever:

Who am I? And what do I want?

We are not as far from LLMs as we’d like to believe. We’re often confused about what we are, we’re changing constantly based on input, and our output is massively unpredictable—even to loved ones and ourselves.

The science of human inconsistency

Daniel Kahneman has written extensively on human bias and inconsistency. The biggest and most poignant example is the difference between the Experiencing Self and the Remembering Self.

Basically, we could be having a good or bad time during a two-week vacation, and rather than accurately recalling all the different good or bad moments a year after the trip, we’re likely to remember only a few key samples, mostly the most intense moment and how it ended (Kahneman’s peak-end rule). It’s like you only sample experiences every once in a while, and those samples are what’s used to construct your memory.

This is huge because our memories are a big part of our identity. So we might think we really enjoyed, or hated, a particular vacation. Or relationship. Not because it was mostly good or bad, but because our Remembering Self sees it that particular way.
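To make the sampling idea concrete, here’s a toy sketch (my own numbers, not Kahneman’s data) contrasting an Experiencing Self that averages every moment with a Remembering Self that keeps only the peak and the end:

```python
# Toy model of Kahneman's Experiencing Self vs. Remembering Self.
# Moment-by-moment enjoyment of a 14-day vacation, scored -10..10.
vacation = [6, 7, 5, 8, 6, 7, 4, 6, 7, 5, 6, 7, 3, -8]  # lost luggage on the last day

experienced = sum(vacation) / len(vacation)  # what you actually lived through
peak = max(vacation, key=abs)                # the most intense single moment (here: +8)
end = vacation[-1]                           # how the trip ended (here: -8)
remembered = (peak + end) / 2                # crude peak-end approximation

print(f"Experiencing Self average:   {experienced:.1f}")  # ~4.9 -> mostly a good trip
print(f"Remembering Self (peak-end): {remembered:.1f}")   # 0.0 -> "eh, it was a wash"
```

Almost every day was good, but one bad ending drags the remembered score down to neutral. That gap between the lived average and the stored summary is the inconsistency Kahneman is pointing at.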

Reflection in a recent AI paper

[Figure: a screenshot from the experiment]

Somewhat related to this, a recent Stanford paper described an experiment in which autonomous AI agents were placed in a Sims-like environment and asked to behave as themselves. They started with basic personalities, but over time they gained more and more nuance and idiosyncrasy.

The team did this using a concept called Reflection, whereby the agents review recent events within the context of their goals and overall personality, and then log the result as a separate “memory”. It’s basically journaling: you think about what’s happened, reflect on it, and capture that reflection in writing.
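Here’s a minimal sketch of what that loop might look like. This is my own reconstruction, not the paper’s code: `llm()` is a stub standing in for a real model call, and the prompt wording is invented.

```python
# Sketch of a Reflection loop in the spirit of the Stanford generative-agents
# paper. llm() is a placeholder; a real agent would call an actual model.
def llm(prompt: str) -> str:
    return f"[model-generated insight based on: {prompt[:50]}...]"

memory_stream = [
    "Talked with Klaus about his research at the cafe.",
    "Skipped lunch to finish the town-hall flyer.",
    "Felt proud when Maria complimented the flyer design.",
]

def reflect(persona: str, goals: str) -> str:
    """Review recent events against goals/personality, then log the
    resulting insight back into the memory stream as its own 'memory'."""
    recent = memory_stream[-5:]
    prompt = (
        f"You are {persona}. Your goals: {goals}.\n"
        "Recent events:\n" + "\n".join(f"- {m}" for m in recent) +
        "\nWhat high-level insight about yourself follows from these?"
    )
    insight = llm(prompt)
    memory_stream.append(insight)  # the journal entry becomes a memory itself
    return insight

print(reflect("a community organizer", "bring the town together"))
```

The key design point is that last append: reflections get stored alongside raw observations, so later reflections can build on earlier ones, which is where the accumulating nuance comes from.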

Constant flux

Going back to human change for a moment, think about the ways in which we’re constantly changing, like the Ship of Theseus.

  • We are constantly changing physical form through the motion of molecules, the destruction and birth of cells, etc.

  • Our memories are imperfect and constantly forming and decaying, and we are our memories.

  • We often can’t remember key pieces of what we did, or what happened, during important parts of our lives.

  • Our preferences are constantly changing: what we look for in food, mates, music, and many other things.

Also, when someone asks us to describe what we’re about, we have no idea what we’ll say. What do you mean, what I’m about? We can’t even capture or articulate what we care most about. Our goals. Our priorities.

Self-definition

Ask the average person to describe who they are and what their core values are, and you’ll find the answers to be a lot more random than GPT-4 with a temperature of 0.75. Even more astounding, the human will be just as surprised by their answer as anyone.
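If temperature is unfamiliar: it’s the knob that controls how much randomness goes into each token an LLM picks. A toy sketch of the mechanism (made-up logits, not GPT-4’s actual internals):

```python
import numpy as np

def sample_with_temperature(logits, temperature=0.75):
    """Divide logits by temperature, softmax, then sample.
    Higher temperature flattens the distribution (more random);
    near zero, the top choice wins almost every time."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Four toy next-token candidates: at 0.75 the favorite usually wins,
# but the long tail still surfaces often enough to surprise you.
logits = [2.0, 1.0, 0.5, 0.1]
counts = [0, 0, 0, 0]
for _ in range(1000):
    counts[sample_with_temperature(logits)] += 1
print(counts)  # e.g. [~675, ~180, ~90, ~55] -- unpredictable, but not uniform
```

At 0.75 the output is neither deterministic nor chaotic, which is roughly where this essay is placing human self-description.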

Our own minds are basically opaque LLMs. Not just to others, but to ourselves.

If you don’t believe me, think about what’s happening when you ask yourself what you’re in the mood to eat, or where you’d like to go on vacation. You don’t know, which is why you’re asking your internal LLM to generate some options.

What will pop up? You have no idea. And the prompt and the context matter to what comes out.

Building a backup of ourselves

So that brings us back to backing ourselves up using an LLM.

If you think through this whole identity thing carefully, and realize how flawed it all is, and how opaque it is even to ourselves, you’ll realize that a backup of ourselves doesn’t need to be a perfect replica to be “real”. In fact, our imperfection and flux is what makes us real.

We change constantly. We age. We grow. We forget things. We become different people.

So it’s not perfection we need for immortality. What we need is for us—as a dying version of ourselves—to believe that we’ve adequately transferred enough of our being into another version of ourselves. Notice how that’s a lot like having kids!

The question isn’t whether our backups can hit perfection, but whether they can hit “good enough” to convince ourselves and others.

So if we can interview this, um, thing…and it responds like me, it might be good enough. It knows all my past experiences. It has my sense of humor. My preferences. My general approach to things. Etc. Then I might feel pretty good lying down for my last sleep at 97 (or 117) as my first, biological body fails.

You might be OK doing this because the company handling the procedure has told you that the moment this body dies, the entity you just interviewed will legally become you.

You will wake up as your new self. Eventually. Whenever we sort out how to create subjective experience and/or can put you into a body of some sort.

In the meantime before #newbody tech

We can’t currently put digital selves into bodies. Biological, robotic, or otherwise. But since the fear of dying is arguably the primary motivator for all of humanity, you can be sure lots of people are working on it.

In the meantime, here’s what people can (and will) do with existing LLM technology.

  1. Write an Extraordinarily Deep Description of Yourself: Your whole life story. Your dramas and traumas. Your key life events. Your preferences. Likes and dislikes.

  2. Import Everything You’ve Done Online: Do you have a podcast? A video channel? Instagram? The company doing this for you will import all of it, and that will be part of the training.

  3. Journals, Texts, and Other Private Data: They’ll also ask you to import as much conversation as you can gather from throughout your life, because that will train it (you) on how you interact with others.

  4. Extensive Interviews and Scenario Exercises: Once all that is done, the company (let’s just call it Eterna) will take you through as many hours of interviews as you can stomach: deep, interactive interviews that put you in various scenarios to extract your preferences to an extraordinary level of fidelity.

  5. Interviews With Loved Ones, Friends, Coworkers, and Associates: The more interactions, impressions, and data, the better.

  6. Your Full Genome: Of course, as the tech gets better it’ll be able to triangulate on “youness” by also looking at your genetic blueprint. Plus, if we want to make a future body for you, you might want it to be somewhat like you.

All of this will, using LLMs, create a version of yourself that responds more like you than you’d imagine. You couldn’t just tell it who you are because, as I talked about above, we’re not good at that. No, we have to show it who we are.
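As a rough sketch of how the retrieval side of this could work: everything imported in steps 1 through 3 gets chunked into “memories” that the model pulls from when answering as you. The code below is purely illustrative; `embed()` is a toy bag-of-words stand-in for a real embedding model, and the final prompt would go to an actual LLM.

```python
# Illustrative sketch of a persona backup answering from imported life data.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in: word counts instead of a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Steps 1-3: life story, online content, journals/texts, chunked into memories.
memories = [
    "I journal every morning and hate being late to anything.",
    "Vacation highlight: hiking alone in the Dolomites, total silence.",
    "I always order extra garlic now, which past-me would find hilarious.",
]

def persona_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most relevant memories and frame them for generation."""
    q = embed(question)
    top = sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]
    context = "\n".join(f"- {m}" for m in top)
    return f"Answer as this person, grounded in their memories:\n{context}\n\nQ: {question}"

print(persona_prompt("What kind of vacation do you enjoy?"))
```

The interviews and scenario exercises (steps 4 and 5) would then shape how the model speaks and decides, not just what it knows.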

Then you can die knowing that some approximation of “you” is stored somewhere. And when the tech becomes available, Eterna will be able to put you in a new body and you can continue on in Life 2.0.

Of course it won’t truly be you. But you aren’t either. That’s the point.

The bar we need to hit is “good enough”, not perfection. And as the tech gets better we’ll keep getting closer to a 1:1 mapping.

This isn’t theoretical for me. I’m doing it.

How about you?