Sam Altman Wants AGI as Fast as Possible, and He Has Powerful Opposition

Sam Altman was removed from OpenAI by the forces of XRisk and EA
November 20, 2023

A lot of people are asking for my thoughts on what happened at OpenAI this weekend.

As I’ll explain below, I believe what happened ultimately came down to two opposing philosophies on AI—and specifically AGI (the ability of an AI to fully replace a pretty smart human).

On one side you have what people like Balaji call the Accelerators, and on the other side you have what he calls the Decelerators. I have my own problems with Balaji, but the analysis below looks pretty good.

Balaji (@balajis), Nov 20, 2023:

NO DECENTRALIZATION WITHOUT POLARIZATION

Haseeb is right. But this is good.

Because before the events of the last few days, we had only one dominant view — and it resulted in executive orders, compute bans, and well-funded coalitions for “responsible AI.”

But all can now see… twitter.com/i/web/status/1…

Quoting Haseeb (@hosseeb):

This weekend we all witnessed how a culture war is born.

E/accs now have their original sin they can point back to. This will become the new thing that people feel compelled to take a side on--e/acc vs decel--and nuance or middle ground will be punished.

Two other terms to spend some time Googling are the Existential Risk (XRisk) community and the Effective Altruism (EA) community. They are not the same, but they have a lot of overlap.

  • Basically the EA community is trying to do the most good for the most people in the future
  • And the XRisk community is trying to articulate and prevent events that could end humanity or our civilization

Specifically for the AGI conversation, these two groups are aligned on not destroying humanity by building an AGI so quickly that it outright kills us.

Eliezer Yudkowsky is something of a leader in the AI XRisk community, and here’s what he had to say on Thursday of last week, just to give a taste.

Eliezer Yudkowsky ⏹️ (@ESYudkowsky), Nov 17, 2023:

Never have so many scientists warned of a serious danger of utter human extinction, while so many others pretend to have no idea what they could be talking about.

And no, I’m not saying that tweet is what started this. But the connection is strong enough that Eliezer had to come out and tell people that no—he did not in fact order them to fire Sam. The fact that he actually had to clear that up tells us a lot.

And here’s what Elon Musk had to say as things started going down.

Elon Musk (@elonmusk), replying to @DrKnowItAll16, Nov 19, 2023:

I am very worried.

Ilya has a good moral compass and does not seek power.

He would not take such drastic action unless he felt it was absolutely necessary.

What (very likely) happened this weekend

So, what actually happened?

Details are murky, and it’s hard to speak specifically unless you have Hamiltonian knowledge from “the room where it happened”, but after having spoken with people close to the issue (yeah I’m doing that), and having had conversations about this dynamic for months before, this seems to be the situation.

I’m being broad enough here to hopefully be accurate even when it’s impossible to know the details yet. And it’s pretty easy to check everything here.

  1. There are large and/or powerful EA and XRisk factions at OpenAI.
  2. For months now, they have been very concerned about how quickly we’re moving towards AGI.
  3. They’ve been getting increasingly concerned/vocal over the last 2-3 months.
  4. The DevDay announcements, with the release of GPTs and Assistants, crossed a line for them, and they basically said, “We need to do something.”
  5. The OpenAI board used to have more people on it, and those people were on Team Sam. They had to leave the board for unrelated reasons.
  6. This left a remaining board that was significantly in the Deceleration camp (being careful here because the details of exactly who, and how much, aren’t clear).
  7. Ilya has always been very cautious about building AGI, and adamant that it be aligned with humans.
  8. He also just recently became the co-leader of the new Superalignment group within OpenAI to help ensure that happens.
  9. The board would eventually, and likely sooner rather than later, be filled out with more people who were Team Sam.
  10. Based on all of this, it seems that the current board (as of Friday) decided they simply had to take drastic action to prevent unaligned AGI from being created.

There have been rumors that AGI has already been created, and that Ilya decided to pull the fire alarm because he knew about it. But based on what I know, this is not true.

Anyway, that is the gist of it.

Basically, there are powerful people at OpenAI who believe that we’re very close to opening Pandora’s box and killing everyone.

They believe this to their core, so they’re willing to do anything to stop it. Hence—Friday.

This is my current working theory—which could still be wrong, mind you.

I’ll be watching Season 4 of Sam Altman along with you all, and I’ll add notes to this if I am wrong or need to make adjustments. But I won’t be changing the text above. I’ll just be appending below.

🍿

NOTES

  1. When I say Sam wants AGI “as fast as possible”, I mean as fast as “safely” possible. He’s commented at great length about how he sees AI safety playing out, and his view seems plausible: small, incremental steps of progress that give us time to adjust as things happen.