Revisiting the AI Bubble

Why a major crash, AGI, and millions of jobs replaced aren't mutually exclusive


I did a short post (and a video) arguing that AI shouldn't be thought of as a bubble, because a bubble is a false belief that nobody will still hold once it crashes and burns.

But as I said in the note on that post, it's a semantic argument, and if the term is already being used to mean "overinvestment in something that will crash for many investors," or something like that, then the battle is lost.

Well, the battle is lost.

The term is already in use in the financial industry, so some tech nerd with a "better" definition isn't going to change anything. So while I like my definition better, it doesn't f-ing matter. Definitions are community-owned, and alive.

Anyway.

People are starting to point out that the overinvestment is getting insane, with that MarketWatch article saying the bubble is 17 times the size of the dotcom bubble and 4 times the size of the subprime bubble. That's big.

No idea if that's true or not because I'm not an expert in that space, but I wanted to highlight and differentiate a couple of things that I think are important.

First, AI isn't going anywhere, and I 100% think we're on track to viably replace a human knowledge worker by 2028, which is my definition of AGI. And it could happen as early as 2026 or 2027.

As Sholto talked about in a recent podcast, the systems the main labs currently use to build AI are like...abysmal. Super inefficient. Basically sets of hacks chained together. In other words, there is "slack in the rope" everywhere in the AI creation/optimization process, just waiting to be discovered. Another way to say that: there is no reason to believe we're even close to optimized across many dimensions of AI.

That doesn't negate the bubble, though. It only makes it nastier, because the White Whale being chased is actually real.

Only a small percentage of investments, startups, and businesses will survive the shift over the next 3-10 years (or whatever), and the rest will die off due to lack of understanding, vision, luck, or countless other causes.

What we end up with is one group, call it 20%, who are like, "I told you AI was awesome!" and launch off into the stratosphere, and another 80% who are like, "Well, that turned out to be complete rubbish, and I've lost everything."

What I urge you to absorb is that these outcomes are not mutually exclusive. All of them can happen at the same time:

  1. We get AGI (human knowledge worker replacement) by 2028.
  2. Millions of jobs are lost or reduced by 2030.
  3. Google, OpenAI, Anthropic, Nvidia, et al. become even richer.
  4. Most AI startups crash and burn.
  5. Many AI startups thrive and replace traditional companies while employing only 1-5% of the workforce.
  6. Many traditional companies fail to implement AI fast enough and get wiped out.
  7. Many traditional companies implement AI eventually and survive.
  8. Most knowledge workers face extraordinary pressure to learn AI and do more with less.
  9. Many won't ever be employable again.
  10. Some will be more employable and prosperous than ever.

This is not an either-or situation. It's a mix.

The question—which nobody has the answer to—is simply how much of each we'll get. And when.