I've had what I think is a good definition for AGI for a while now, but ASI has been more elusive—at least for me.
The problem is that AGI is trying to get to something we know, which is roughly our intelligence level. And it seems much harder to intuit what it means to be above human intelligence.
So what I'm going to do here is:
Similar to my definition of AGI, I think a definition of ASI should be human-centric. In other words, the definition should start to answer the question of:
Why do we care?
Or, what theoretical ASI capabilities could an AI have that would have the most impact on humans? I think the answer comes down to two main components:
If an AI can do those things better than any human, I'd say that's a good basis for a definition of ASI.
But just to tighten it up and generalize it, let's go with:
So we're really saying ASI extends AGI.
It's the same thing—general cognitive ability—but to a superhuman level. So, in condensed form:
It's a spectrum, as we see in the chart above.
And that brings us to the main idea here, which is the question of how to move up in this chart. Procedurally—as a general approach.
I think the answer is to emulate what we know works in humans—which I'll capture and simplify as the following:
Repeat.
This is what I find so promising about this whole challenge of getting to AGI and ASI:
It seems like the iterative process described above can be easily orchestrated and scaled using tech—including current AI.
We basically take the human process of learning, thinking, combining and copying ideas, sleeping on them, trying them out, etc.—like we saw in The Enlightenment and like we see with places like San Francisco—and we automate the hell out of it at scale.
The testing part is the most difficult because it often can't be theoretical.
If we're talking about medicine, for example, you have to actually see if the molecule does what you think it'll do. That means making the actual molecule and exposing it to the pathogen, or whatever. The same goes for many other types of problems.
Even without that, though, many other solution types can be tested in a purely digital/modeled environment—more like A/B testing—and that by itself could multiply the creative output of humanity many times over.
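Where a solution can be modeled digitally, the testing step can look like a simple A/B comparison: run both candidates through the model and keep the better one. A toy sketch follows, where `simulate` is a hypothetical stand-in for an actual digital model of the problem.

```python
import random

def simulate(candidate, trials=500):
    # Hypothetical placeholder for a digital model of the problem:
    # here a candidate is just a success probability, and we measure
    # how often it "works" across simulated trials.
    wins = sum(random.random() < candidate for _ in range(trials))
    return wins / trials

def pick_winner(a, b, trials=500):
    # A/B-style comparison: evaluate both in the model, keep the better one.
    return a if simulate(a, trials) >= simulate(b, trials) else b
```

Because each comparison is cheap and parallelizable, this kind of modeled testing is where the "multiply creative output" claim is most immediately plausible.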
Finally, this same model for approaching problems—which is loosely based around the scientific method—could serve as content for Reinforcement Learning for future AI.
As we start to learn which types of approaches to problems are most fruitful, AI may get better at proposing strong initial solutions, as well as iterating faster as we face new problems.
Will that take us all the way to AGI? ASI?
Impossible to say. But I think it's a promising path.