My Current Predictions for Thinking Machines


I’ve been thinking a lot about what I called “Getting Better at Getting Better” in my book. It’s the idea of accelerating machine intelligence, where computers aren’t just getting better at solving problems, but the pace at which they improve is itself increasing. I think this comes in two forms:

  • Machine learning systems that improve as we feed them greater quantities of high-quality data.

  • Evolutionary algorithms that innovate through variation and selection (a toy sketch follows this list).
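
To make that second form concrete, here is a minimal sketch of an evolutionary algorithm in Python. It’s my own illustration of the general technique, not something from the book: random genomes are repeatedly mutated, and the fitter variants survive into the next generation.

    import random

    def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.05):
        """Evolve bit-string genomes toward all-ones via mutation and selection."""
        fitness = lambda g: sum(g)  # toy fitness: count the 1-bits
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fitter half of the population.
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            # Reproduction: each survivor yields one child with random bit-flips.
            children = [[bit ^ (random.random() < mutation_rate) for bit in g]
                        for g in survivors]
            population = survivors + children
        return max(population, key=fitness)

    best = evolve()
    print(f"best genome: {best}  fitness: {sum(best)}")

No single step in that loop is intelligent, yet the population improves generation after generation—innovation through blind variation and selection, in miniature.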

I’m also reading a book called What to Think About Machines That Think, which is a collection of short thoughts by dozens of experts in various fields on whether computers will soon be able to think like—or better than—humans.

This spawned a few ideas of my own on the topic, but not being an expert in the area, I was at first reluctant to capture them. Then I remembered that it’s fine to have raw thoughts as long as you maintain an appropriate respect for your limitations. So here are my current ideas on the topic of Thinking Machines.

  1. First, I don’t think human intelligence is all that special. It seems to be mostly a matter of complexity, connection counts, and the like, and this is what we’re observing with the massive breakthroughs in neural nets and Deep Learning. In other words, intelligence is largely a complexity problem, and that complexity is now becoming technologically approachable.

  2. Second, consciousness, as many experts in neuroscience, philosophy, and related fields have argued, is not a single special thing that sits atop a mountain, but rather an emergent property of multiple, segmented components in the human brain reaching a certain level of complexity. Or as Daniel Dennett says, it’s simply a bag of tricks. Further, it’s my belief that this strange emergent property conferred its advantage by allowing one to experience and assign blame and praise, which was tremendously useful to its early adopters as they built communities. It’s also quite distinct from intelligence.

  3. Third, the core question when asking whether AI will become human-like is not intelligence or consciousness, but rather goals. Humans are unique in that our goals come from evolution. At their center are survival and reproduction, and every other aspiration or ambition sits on top of, and secondary to, those drives. So in order to make something like a human, it seems to me that you’d have to create something where every component of its being is steeped in a similar sauce. In other words, we were made over millions of years, step by step, with the goals of survival and reproduction guiding every successful iteration. If we don’t want to end up with something extremely foreign to ourselves, we’ll need to somehow replicate that same process in machines. Failing to emulate it will likely leave their goals and ambitions feeling painted-on rather than baked-in.

So when we talk about the mystery of human intelligence, or thinking machines (which usually means something that reminds us of ourselves), we’re really talking about three things:

  • Something smart.

  • Something conscious.

  • Something with a recognizable goal structure.

The key is realizing how distinct these three things are, and that our “humanness” seems to emanate from the combination of these things, not from one of them in particular.

Summary

So, human intelligence is just a matter of sufficient complexity, which we’re quickly approaching and will soon exceed. Consciousness is separate from intelligence, and will turn out to be a rather unremarkable hack produced by different parts of the brain working independently of each other. And the most difficult component of this entire “replicate humans” equation will end up being not super-intelligence or consciousness, but the creation of human-like (and human-aligned) goals.

OSS: Intelligence is easy, consciousness is a red herring, and the hard problem is actually goal creation.

This is my current, non-expert prediction for how the “Thinking Machines” story will play out in coming years and decades.

Notes

  1. Some of these ideas were inspired by Waking Up, by Sam Harris, multiple essays by Daniel Dennett on the nature of consciousness, and dozens of other books I’ve read on various orthogonal topics.

  2. OSS = One Sentence Summary. I think we should be able to take anything interesting and make it a 1,000-page book, a 100-page book, a 1,000-word essay, or a one-sentence summary. I strive to keep this flexibility of explanation in anything I’m learning or trying to understand.

  3. Because this is a prediction, and I love tracking and learning from being wrong, I’ll add updates in sections below the original content rather than changing the prediction itself.
