Sam Altman Wants AGI as Fast as Possible, and He Has Powerful Opposition
Sam Altman was removed from OpenAI by forces aligned with the XRisk and EA communities
A lot of people are asking for my thoughts on what happened at OpenAI this weekend.
As I’ll explain below, I believe what happened ultimately came down to two opposing philosophies on AI—and specifically AGI (an AI that can fully replace a reasonably smart human).
On one side you have what people like Balaji call the Accelerators, and on the other side you have what he calls the Decelerators. I have my own problems with Balaji, but the analysis below looks pretty good.
NO DECENTRALIZATION WITHOUT POLARIZATION
Haseeb is right. But this is good.
Because before the events of the last few days, we had only *one* dominant view — and it resulted in executive orders, compute bans, and well-funded coalitions for “responsible AI.”
But all can now see…
— Balaji (@balajis)
12:06 PM • Nov 20, 2023
Two other terms worth spending some time Googling are the Existential Risk (XRisk) community and the Effective Altruism (EA) community. They are not the same, but they have a lot of overlap.
Basically, the EA community is trying to do the most good for the most people in the future.
And the XRisk community is trying to articulate and prevent events that could end humanity or our civilization.
Specifically for the AGI conversation, these two groups are aligned on not inventing an AGI so quickly that it outright kills us and destroys humanity.
Eliezer Yudkowsky is something of a leader in the AI XRisk community, and here’s what he had to say on Thursday of last week, just to give a taste.
Never have so many scientists warned of a serious danger of utter human extinction, while so many others pretend to have no idea what they could be talking about.
— Eliezer Yudkowsky ⏹️ (@ESYudkowsky)
3:10 AM • Nov 17, 2023
And no, I’m not saying that tweet is what started this. But the connection is strong enough that Eliezer had to come out and tell people that no—he did not in fact order them to fire Sam. The fact that he actually had to clear that up tells us a lot.
Elon Musk had this to say as things started going down.
I am very worried.
Ilya has a good moral compass and does not seek power.
He would not take such drastic action unless he felt it was absolutely necessary.
— Elon Musk (@elonmusk)
11:06 PM • Nov 19, 2023
What (very likely) happened this weekend
So, what actually happened?
Details are murky, and it’s hard to speak specifically unless you have Hamiltonian knowledge from “the room where it happened.” But after speaking with people close to the issue (yeah, I’m doing that), and having had conversations about this dynamic for months beforehand, this seems to be the situation.
I’m keeping this broad enough to hopefully be accurate even though it’s impossible to know the details yet, and everything here is easy enough to check.
- There are large and/or powerful EA and XRisk factions at OpenAI.
- They have been very concerned for months about how quickly we’re moving toward AGI.
- They’ve been getting increasingly concerned and vocal over the last 2-3 months.
- The DevDay announcements, with the release of GPTs and Assistants, crossed a line for them, and they basically said, “We need to do something.”
- The OpenAI board used to have more people on it, and those people were on Team Sam. They had to leave the board for unrelated reasons.
- This left a board that leaned significantly toward the Deceleration camp (I’m being careful here because the details of exactly who, and how much, aren’t clear).
- Ilya has always been very cautious about making sure any AGI we build is aligned with humans.
- He also just recently became co-leader of the new Superalignment group within OpenAI to help ensure that happens.
- The board would eventually, and likely sooner rather than later, be filled out with more people who were on Team Sam.
- Based on all of this, it seems the current board (as of Friday) decided they simply had to take drastic action to prevent unaligned AGI from being created.
- There have been rumors that AGI has already been created and that Ilya decided to pull the fire alarm because he knew it, but based on what I know, this is not true.
Anyway, that is the gist of it.
Basically, there are powerful people at OpenAI who believe that we’re very close to opening Pandora’s box and killing everyone.
They believe this to their core, so they’re willing to do anything to stop it. Hence—Friday.
This is my current working theory—which could still be wrong, mind you.
I’ll be watching Season 4 of Sam Altman along with you all, and I’ll add notes to this if I am wrong or need to make adjustments. But I won’t be changing the text above. I’ll just be appending below.
🍿
NOTES
When I say Sam wants AGI “as fast as possible,” I mean as fast as safely possible. He has commented at great length about how he sees AI safety playing out, and his view seems plausible. In short, it’s small, incremental steps of progress that give us time to adjust as things happen.