A Merged Pothole Model of Consciousness

I was just listening to a Sam Harris podcast on AI and had an interesting idea for a model of consciousness.

Imagine a series of 12 potholes in a large paved parking lot. They’re scattered randomly, a few feet apart, and each looks completely different from the others.

Over time they start to merge, however.

Now imagine that water comes up from each pothole, say from rising groundwater (not a real thing, just stick with me).

This happens simultaneously in all 12, and the flow from each pothole, shaped by its individual size, contours, and depth, meets the others in the general center of the cluster.

The speed and flow of that water as it meets in the center, mixing and swirling semi-randomly—that’s a model of consciousness.

What we’re seeing now from various areas of science is that consciousness comes from the connection of multiple sub-parts of the brain. And each of those subcomponents evolved on their own, for their own purposes. Just like individual potholes had their own origins with completely unique shapes.

So if you have 12 completely unique areas, all with millions of years of history, which then get joined in a completely random way (through evolution), and then consciousness springs from the particular way they COMBINE…that’s complex.

Why am I mentioning this?

Because this model makes me think that it’s even more ridiculous to think that AI will think or behave the way that we do.

With AI we’re building potholes, adding water, and then linking the subsystems together. But there’s no reason to think that the swirling of water in the center is going to match what ours looked like.

Just to add some additional imperfect variables to the analogy: what temperature is the water? What’s its salt content? Maybe it’s not water flow at all. Maybe it’s electricity being conducted through that water. What’s the material of the pavement where the potholes form? How does that determine the random shape each one takes?

All this combines to support my strong intuition that “human-like” intelligence in AI is not extremely likely. AI is likely to have similar capabilities, of course, but how it gets there could be very, very different.

Notes

  1. Image from ScienceMag.
