There are many articles—like this one—that say that automation and AI are going to take menial human jobs, but that it’ll be a good thing.
The argument basically goes like so:
- Yes, many human jobs will disappear, but they'll be replaced with jobs we actually want.
- Everyone will move on to more advanced work that requires more creativity.
- So it's ultimately a good thing; don't worry about it.
There are a couple of major assumptions here that I think need to be called out and discussed.
First, who says these millions of people can or want to do this new creative work? Isn’t it a bit condescending to say to millions of people who’ve known one life that they’re now being upgraded into a better one, and that they can now cast off their shackles and stop doing the shit work because machines can do it instead?
It’s pretty insulting, I think.
I do believe that creative work is ultimately better for humanity, and probably would produce net more happiness for people in general, but this is not a simple point to make to millions of people who’ve been leading those lives for generations.
Even more importantly, there’s absolutely no guarantee (or even solid reason to believe) that AI advancement is going to stop at some magical place between grunt work and creative work. As if it’s going to evolve and improve and then suddenly stop getting smarter and better as soon as it’s figured out how to do boring clerical work.
We don’t know how or where it’s going to stop, but AI can already write articles and short stories, and compose music, that are good enough to convince humans that other humans created them. And we’re just now hitting level 0 in a game that goes to infinity.
Based on the current pacing, the odds seem very good to me that virtually all human creativity will be reproducible by AI within 20 years. And it wouldn’t surprise me if it were 10.
So the question then, assuming I’m right, is very simply, “What do we do then?”
Forget about the chances that it just self-improves at an exponential pace, we become ant-like compared to it, and it wipes us out. That’s a separate conversation.
But let’s assume they produce everything from the best comedy to the best art to the best…well, everything.
What do billions of humans do then?
One scenario I can envision is having each person wield AI, integrate with it, and in a very controlled way basically become that thing. So it’s like we’re still doing things and adding value, except it’s really mostly the AI doing it. If we’re integrated enough with it, though, it’ll still kind of be “us”.
So you have a small number of people, innovating wildly towards humanist ends, and we continue much like now except with fewer, smarter people working toward unified goals.
If that sounds too tree-hugger, that’s for a reason. If this type of AI gets weaponized we’re all going to be dead, so the only scenarios where we’re sitting around wondering what to do are those where we’ve also decided to co-exist.
So that’s one option: we turn into a small number of super-humans powered by AI, and we basically live in a commune doing art and science to improve humanity.
Another option is that we combine.
AI gets centralized. It sees our knowledge and our humanity and doesn’t want to kill us, so it offers to merge with us.
It parses all of our thoughts and ideas and experiences, brings them into a single mind, and now we all live in a central consciousness.
The problem there is that humans will want to maintain some concept of individuality, which doesn’t seem conducive to this environment. So perhaps we’d have to partition off identities in some way to maintain a human-like feel to our lives.
This started out as an exploration of what humans are going to do for work, and it turned into an exploration of what we’re going to become.
I guess I’m ok with that.