I think we should make a distinction between Functional AGI and Technical AGI.
I think Functional AGI matters most, and it's likely to come before Technical AGI. And of course, if Technical AGI comes first, it will achieve Functional AGI naturally as a bonus.
The reason I think Functional AGI matters most is that if an AI system can replace an average knowledge worker, we're talking about a BILLION knowledge workers being affected (Forbes). It doesn't matter if it's not a Pure (or Technical) AGI, as long as it can do the general part of AGI.
So that means:
Doing lots of different types of work tasks, and adjusting to a dynamic work environment where tomorrow could look different from yesterday.
My argument is that if an AI system can do this at the level of a decent knowledge worker, it doesn't matter if it's actually 319 different AI models behind the scenes running some insane orchestration that makes us THINK it's generalized when it's not. Outwardly, it's doing enough generalization to replace a human, and that's the part that matters.
Why is that the key fact?
Because that's a role a human being isn't going to fill, whether the reason is alien intelligence, smoke, mirrors, or duct tape. Either way, someone either lost a good-paying job or failed to get one.
All this to say that we don't actually need true Technical AGI for AGI to have a monumental impact on society.
If we hit Functional AGI—through some ingenious orchestration of narrowly intelligent agents—the effect on humans will be the same.
And it’s that effect on humans that actually matters.