
Machine Learning’s Effect On Humanity Will Be Magnifying Our Successes and Failures

AI—and specifically machine learning—are going to empower humans the way a futuristic exoskeleton would empower a 4-year-old.

When they want something from the kitchen cabinet, they’re going to get it. And if they don’t want to brush their teeth, it’s not going to happen.

And in the process of expressing these desires, the kitchen, the bathroom, and perhaps the entire house will be destroyed.

That’s humans with machine learning. It’s a force multiplier. An exoskeleton. The granting of superpowers.

I am not a math Ph.D. or an AI expert, but after reading around 15 books on all the different types of artificial intelligence—past, present, and future—I think machine learning is going to greatly magnify our successes and failures as humans. It’s going to make everything we do more extreme—for good, and for evil.

Humans getting access to machine learning is going to produce realities like Black Mirror and Star Trek: The Next Generation, with very little in between.

The problem is that we as humans are experimenters. We try things. We try economic systems. We try safety programs. We try social incentive programs.

But we’re often sloppy and wrong, so even when something is meant to cause harm, it seldom works well enough to inflict maximum damage before someone notices.

With AI/ML, companies and governments will be able to launch half-baked ideas—just as they always have—that work extraordinarily well. Too well.

These results will still require skill and intent to interpret and misuse, but we should assume both of those will be in abundant supply.

  • We’ll launch marketing campaigns designed to gather information on people and determine their preferences, and algorithms will come back with answers for how to manipulate them politically.

  • We’ll ask who is most likely to commit crimes, and algorithms will come back with lists of our least fortunate.

  • We’ll ask how to improve designs we’ve had for hundreds of years, and the algorithms will surpass human ingenuity in minutes.

What this means is that every mistake we make will be magnified, accelerated, and perfected—automatically. And the potential for this to produce dystopian power structures cannot be overstated.

I wish I were saying this so that people would read it, learn, and become more cautious.

But they won’t.

Many books have been written on the topic of AI, and many of them call for caution in building this potentially civilization-ending technology.

But there’s no one to listen. We are not a people. We are not a government. We are not a world government with a unified people.

We are a collection of market-driven companies trying to win, and that means we will act independently—in our own interest—to beat out our competitors.

That’s how Black Mirror gets made in the United States, without an overlord government like China’s. In China they make it on purpose, and in the U.S. it gets made because it’s effective at accomplishing things and therefore makes people money.

Either way you end up with Black Mirror.

But that raises the question: how can we get ST:TNG instead?

I think the only option is to win a series of very precarious races. In short, we have to get lucky.

We basically need to continue to grow in intelligence, blend with technology through implants, create some semblance of AGI, and have a series of really bad failures—but not so bad that they destroy us.

Think dystopian societies where everyone kills themselves. Or where someone creates a bot army to try to take over the world. Etc.

We need a series of serious but small mistakes, in other words, to show us the destructive potential of missteps while holding vorpal scissors.

If we can make enough of those to learn from, but not so many or so large that we get destroyed, we might be able to improve ourselves to the point where we’re responsible enough to wield the power of machine learning, reinforcement learning, and evolutionary algorithms without erasing ourselves in the process.

So it’s a race between the power of the ML we can create and our own maturity.
