The “Resource Exhaustion” AI Problem

paperclip-robot-l

Listening to the PEL episode on AI with Nick Bostrom I was introduced to a fascinating threat concept related to Artificial Intelligence.

The idea is that there are serious dangers associated with with giving a new AI goals. The example given was telling a new toddler AI that its purpose in life is to make paper clips.

Seems harmless enough, right?

Problem is, the thing might say, “Yes, master.”, and then proceed to do the following:

  1. Connect to every online system in the world

  2. Take over the manufacturing plants

  3. Build an army of robots to help build more plants

  4. Build an army of robots to help create steel

  5. Build an army of robots to kill all humans because they are wasting resources that could be used for making more paper clip plants

  6. Cover the earth in manufacturing plants, and harvest the world’s entire supply of iron, killing all life in its path

  7. When the earth is covered in paperclips and paperclip plants, and all resources are exhausted, build spaceships so they can find other worlds that might have iron, or paper clips

  8. Send distress signals into space with hopes of getting aliens to send rescue ships, so they can attack them, kill them, and harvest their ship for iron, so they can make paper clips

You get the idea.

We essentially have to be life-and-death careful about giving an AI goals, because we don’t know what it’ll be willing to do to achieve them.

Related posts: