An Idea on How to Build a Conscious Machine
September 19, 2019
I’ve been consuming a lot of content about AI lately. I’m reading What to Think About Machines That Think, a compilation of expert opinions on the topic, and I’ve also read a few other books and follow the a16z and Waking Up podcasts, where it’s a frequent topic. None of this makes me an expert by any stretch—I’m just someone with an interest who’s read some things.
That aside, I think I might have an interesting idea for how to build a conscious machine.
If you follow the AI space at all, you know that consciousness is a big deal, and also that it’s quite separate from human-level intelligence or super-intelligence. These are all distinct things according to most.
The issue of super-intelligence is far beyond my ability to even poke a stick at, although I think it’s somewhat better understood as a complexity problem, and thus inevitable.
Conscious machines are what really get me excited. How would you go about making them conscious? Would they be conscious in a different way than we are, similar to how machines are intelligent in different ways than us?
And these questions, of course, first require us to understand how and why we are conscious, so step 0 is still out of our reach.
But I have an idea, which starts with explaining why I think we evolved consciousness. A few years back I wrote a short piece called An Evolutionary Explanation for Free Will.
Essentially, we became conscious as a side effect of evolving the sensation of free will, responsibility, and blame and praise. These together allowed societies to form, grow, and prosper, and once humans had these things—or perhaps in order to get them—we developed the idea that we were the authors of our actions, and our experience of the world became what we now call consciousness.
This of course required the necessary hardware, sufficient complexity, etc., but the main component was the concept of praise, blame, responsibility, and the notion that we were the original cause of the action. As Daniel Dennett writes, “It’s a bag of tricks.”
So that’s my explanation for how we gained consciousness. Now on to the task of creating it in machines.
Remember that I’m a security consultant, not an AI researcher. I am not claiming expertise here.
I wrote another piece more recently about what it will take to make human-like computers, called My Current Predictions For Thinking Machines. There I argued that it’s not about intelligence, or consciousness, but instead all about goals. In Desire is the Center of Humanity, I made this exact case, and pointed out that having and satisfying strong desires is the center of human happiness.
Now let’s add what just happened with AlphaGo Zero. I’m sure you remember that AlphaGo was the computer that beat the best human Go player. It ran on something like 140 of Google’s processors and was initially trained on games played by expert human players.
AlphaGo Zero is the new version of AlphaGo, and it runs on just 4 processors instead of 140. It surpassed its predecessor after just 3 days of training, which was impressive, but the mind-blowing part is that it didn’t learn anything from humans: it taught itself by playing against itself.
Reinforcement Learning is a subset of Machine Learning in which agents take actions within an environment to maximize a reward, and that’s what AlphaGo Zero used to improve. Winning was the reward, and it kept trying new combinations of moves to win more often.
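To make that concrete, here’s a minimal sketch of the reward-maximization loop at the heart of Reinforcement Learning. It’s a tabular Q-learning agent on a toy environment I made up purely for illustration; AlphaGo Zero’s actual approach (self-play with deep neural networks and tree search) is far more sophisticated, but the basic pattern of acting, receiving a reward, and updating is the same.

```python
import random
from collections import defaultdict

# Toy environment: walk along positions 0..4; reaching position 4 pays reward 1.
# This is purely illustrative and has nothing to do with Go.
N_STATES = 5
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

# Tabular Q-learning: estimate the long-run reward of each (state, action) pair.
q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: which direction to move from each position.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```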
Now combine this with Evolutionary Algorithms, which model evolution by creating randomly modified variants and testing them for success against a given criterion.
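Here’s the same kind of sketch for an Evolutionary Algorithm: random variation plus selection against a fitness criterion. The target string, population size, and mutation rate are all arbitrary choices I made for illustration.

```python
import random
import string

# Illustrative fitness criterion: how many characters match a target string.
TARGET = "conscious"
CHARS = string.ascii_lowercase

def fitness(candidate):
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Randomly modify some characters to produce a variant.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

# Start from a random population, then repeatedly keep the fittest variants
# and refill the population with mutated copies of them.
population = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(100)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:20]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

print(generation, population[0])
```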
That’s quite a stack of blocks we’ve just assembled. Let’s review:
- Humans attained consciousness as a side effect (or prerequisite?) of gaining the responsibility capability, which allowed them to thrive in complex societies.
- Desires and goals are the most fundamental component of humanity, with evolution supplying ours as survival and reproduction.
- Once you have goals, you can use Reinforcement Learning to try to achieve them.
- You can potentially use Evolutionary Algorithms to try many different ways of achieving better outcomes, i.e., further maximizing your reward.
So that brings us to the idea.
I think we can build conscious machines by supplying the goal of maximizing the effectiveness of a large society of agents, and letting the system evolve toward that goal the same way biological evolution did with us.
Basically, conscious agents are so good at following rules and creating orderly societies that we should expect an evolutionary algorithm to stumble upon the same solution multiple times.
So we create a massive number of agents, put them into societies, and give them the goal of having those societies achieve better things than their competitors.
If sufficient experimentation is allowed, and there are sufficient resources to run these experiments, I believe the system will stumble on the same thing evolution did—agents that have a sense of responsibility and blame and, as a result, internal experience.
In short: reproduce the environment in which we developed consciousness, line up the goals and incentives the same way, and let the combination of Reinforcement Learning and Evolutionary Algorithms do its thing.
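For the curious, here’s a rough sketch of how those two loops might nest: evolutionary selection over competing societies on the outside, reward-seeking agents on the inside. Every piece of it (train_with_rl, society_score, mutate) is a toy stand-in I invented for this post; in a real attempt, each one hides an enormous unsolved problem.

```python
import random

# Each "society" is just a list of agent parameters (single floats here).
# In the real proposal these would be full RL agents in a rich environment;
# these stubs exist only to make the outer loop concrete.

def society_score(society):
    # Toy stand-in for "how effective is this society": agents score higher
    # when they coordinate (cluster together) near a shared target of 1.0.
    mean = sum(society) / len(society)
    return -sum((p - mean) ** 2 for p in society) - (mean - 1.0) ** 2

def train_with_rl(society, steps=10):
    # Toy stand-in for the inner reinforcement-learning phase: each agent
    # keeps small parameter changes that don't lower the society's score.
    for _ in range(steps):
        for i, p in enumerate(society):
            candidate = p + random.uniform(-0.1, 0.1)
            trial = society[:i] + [candidate] + society[i + 1:]
            if society_score(trial) >= society_score(society):
                society[i] = candidate
    return society

def mutate(society, rate=0.2):
    # Random variation between generations, as in an evolutionary algorithm.
    return [p + random.uniform(-0.5, 0.5) if random.random() < rate else p
            for p in society]

# Outer loop: a population of competing societies, selected on effectiveness.
societies = [[random.uniform(-2, 2) for _ in range(5)] for _ in range(10)]
for generation in range(20):
    societies = [train_with_rl(s) for s in societies]            # inner RL
    societies.sort(key=society_score, reverse=True)              # selection
    best = societies[:3]
    societies = best + [mutate(random.choice(best)) for _ in range(7)]  # variation

print(round(society_score(societies[0]), 4))
```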
It might not build the same consciousness we have (and probably won’t), but I don’t see any more direct path to accomplishing the task.
I would love for someone with formal training in the field to tell me the different ways I’m wrong—or potentially on to something.
Notes
As I was writing this, and listening to part of the book mentioned above, I realized that complexity and time might be an insurmountable constraint. It’s not helpful, for example, to say that we could recreate consciousness if we could recreate Earth. Sure, but what does that get you? I do think we could potentially do this at a much smaller scale with far fewer variables, but that’s where ideas reach their limits and you start to need true expertise in the field to explore and validate them.
Thanks to a reader for sending in: “1. With an evolutionary approach, evolving agents and minds isn’t the hard part. Computing a rich virtual environment where agents can learn from embodied action and feedback is the hard part. 2. Addressing the curse of dimensionality in Von Neumann architectures. Most evolutionary attempts have trouble moving beyond low-fidelity 2D grids. 3. Reinforcement learning is typically very serial. Being hard to parallelize means it’s very hard to scale. All of it can be done, but it will take a lot of problem solving!”