A Possible Path to ASI

Could the route to ASI be the scaled and orchestrated mixture of ideas?

ASI Workflow

I've had what I think is a good definition for AGI for a while now, but ASI has been more elusive—at least for me.

The problem is that AGI is trying to get to something we know, which is roughly our intelligence level. And it seems much harder to intuit what it means to be above human intelligence.

So what I'm going to do here is:

  1. Give a working definition of ASI
  2. Show how it relates to, and extends, AGI
  3. Describe a practical methodology for pursuing it

A Definition of ASI

Similar to my definition of AGI, I think a definition of ASI should be human-centric. In other words, the definition should start to answer the question of:

Why do we care?

Or, what theoretical ASI capabilities could an AI have that would have the most impact on humans? I think the answer comes down to two main components:

  1. Creating net-new things that help or harm us—like medicines or weapons
  2. Managing things we care about—like our lives, our businesses, our countries, and our society

If an AI can do those things better than any human, I'd say that's a good basis for a definition of ASI.

But just to tighten it up and generalize it, let's go with:

ASI is an AI that can create net-new things and manage the things we care about better than any human can.

Extending AGI to Get to ASI

AGI -> ASI

So we're really saying ASI extends AGI.

It's the same thing—general cognitive ability—but to a superhuman level. So, in condensed form:

  • AGI is an AI that's able to do cognitive work as well as an average knowledge worker
  • ASI is an AI that's able to do cognitive work better than any human

It's a spectrum, as we see in the chart above.

A Possible Path to Both AGI and ASI

The Cognitive Progress Workflow

And that brings us to the main idea here, which is the question of how to move up in this chart. Procedurally—as a general approach.

I think the answer is to emulate what we know works in humans—which I'll capture and simplify as the following:

  1. Have decent hardware (human brain, human evolution, etc.)
  2. Have lots of experiences, combined with training/education
  3. Face challenges / problems
  4. Use your hardware, training, and experience to try to solve those problems
  5. Learn from the results
  6. Talk with other people who are doing the same
  7. Take some of their ideas and copy them, modify them, or combine them with your own
  8. Sleep and/or take time away from the problem, and let your subconscious work on the problem without you
  9. Continue bombarding yourself with new inputs, through reading, conversation with others, etc.
  10. Suddenly get inspiration for a new way to solve the problem, which you then go and try

Repeat.

This is what I find so promising about this whole challenge of getting to AGI and ASI:

It seems like the iterative process described above can be easily orchestrated and scaled using tech—including current AI.

We basically take the human process of learning, thinking, combining and copying ideas, sleeping on them, trying them out, etc.—like we saw in The Enlightenment and like we see with places like San Francisco—and we automate the hell out of it at scale.

  • We can collect ideas at scale
  • We can collect problems at scale
  • We can build AI that combines them with all sorts of randomness and errors to produce creative variance
  • We can build a system for testing them against the problems
  • We can build a system that interprets results and turns that into new ideas for top of funnel
  • Etc.
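As a concrete (if toy) sketch of that funnel, here is one way the generate/test/interpret loop might be wired up. Everything in it is a hypothetical stand-in: ideas are represented as small vectors, "problems" are target vectors, and the scoring function plays the role of the testing system.

```python
import random

def idea_pipeline(problems, seed_ideas, score, rounds=50, pool_size=10, rng=None):
    """Toy version of the funnel above: keep a pool of ideas, combine
    them with randomness to produce creative variance, test them
    against the problems, and feed the winners back to the top."""
    rng = rng or random.Random(0)
    pool = list(seed_ideas)
    for _ in range(rounds):
        # Combine two existing ideas and add noise (creative variance)
        a, b = rng.sample(pool, 2)
        variant = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
        pool.append(variant)
        # Test every idea against the problems; keep the best for the next round
        pool.sort(key=lambda idea: score(idea, problems), reverse=True)
        pool = pool[:pool_size]
    return pool[0]

# Hypothetical stand-ins: a "problem" is a target vector, and an idea
# scores higher the closer it lands to all of the targets.
problems = [[1.0, 2.0], [1.2, 1.8]]

def score(idea, problems):
    return -sum(abs(i - t) for p in problems for i, t in zip(idea, p))

best = idea_pipeline(problems, seed_ideas=[[0.0, 0.0], [3.0, 3.0]], score=score)
```

The point isn't this particular math; it's that the whole loop—generate, test, select, repeat—is a few dozen lines of orchestration once you have a way to score ideas against problems.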

The testing part is the most difficult because it often can't be theoretical.

If we're talking about medicine, for example, you have to actually see if the molecule does what you think it'll do. That means making the actual molecule and exposing it to the pathogen, or whatever. The same goes for many other types of problems.

Even without that, though, many other solution types can be tested in a purely digital/modeled environment—more like A/B testing—and that by itself could multiply the creative output of humanity many times over.
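A minimal sketch of what that kind of purely digital test could look like, with a made-up simulator standing in for the modeled environment (both the `simulate` function and the `success_rate` parameter are invented for illustration):

```python
import random

def ab_test(variant_a, variant_b, simulate, trials=2000, rng=None):
    """Compare two candidate solutions entirely in simulation:
    run each many times and keep whichever scores higher on average."""
    rng = rng or random.Random(42)
    mean_a = sum(simulate(variant_a, rng) for _ in range(trials)) / trials
    mean_b = sum(simulate(variant_b, rng) for _ in range(trials)) / trials
    return ("A", mean_a) if mean_a >= mean_b else ("B", mean_b)

# Hypothetical simulator: each trial succeeds with the variant's
# built-in probability, the way a modeled environment would return
# one noisy outcome per run.
def simulate(variant, rng):
    return 1.0 if rng.random() < variant["success_rate"] else 0.0

winner, mean = ab_test({"success_rate": 0.30}, {"success_rate": 0.55}, simulate)
```

Because no physical experiment is involved, a loop like this can run millions of comparisons per day—which is the multiplier being claimed above.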

Finally, this same model for approaching problems—which is loosely based around the scientific method—could serve as content for Reinforcement Learning for future AI.

As we start to learn which types of approaches to problems are most fruitful, AI may become generally better at proposing strong initial solutions, as well as at iterating faster when we face new problems.

Will that take us all the way to AGI? ASI?

Impossible to say. But I think it's a promising path.

Summary

  1. AGI and ASI only matter in the context of human needs and desires
  2. One of our general and primary needs is to create new solutions to our problems
  3. AGI and ASI are on a spectrum of general AI cognitive ability, with ASI being AGI at a superhuman level
  4. It may be possible to speedrun AI's ability to generate, evolve, and test ideas at scale using some fairly basic automation and AI (not counting the real-world testing piece)
  5. This could be a path to both AGI and ASI level invention and problem-solving, which could have a tremendous positive impact on society

Notes

  1. ONE-SENTENCE SUMMARY: ASI is just an extension of AGI, and we might be able to get there through scalable creation, mixing, and testing of ideas.
  2. Thanks to Joel Parish and Joseph Thacker for talking through parts of this over the years.
  3. Here are all my AI Definitions RAID