
YouTube’s Ban of Hacking Videos Moves Us Closer to an Entertainment-only Public Sphere

Marcus Hutchins wrote a great essay recently about YouTube’s new ban on “hacking” videos.

He writes:

One major problem here is that hacking tutorials are not inherently bad. There exists a vast YouTube community aimed at teaching the next generation of cyber security experts.

YouTube’s New Policy on Hacking Tutorials is Problematic, Marcus Hutchins


I think he’s absolutely correct, but it’s actually worse than that. I think it reveals a precarious future where “dangerous” is redefined as “anything that can be used to do harm.”

Almost any information can be used to do—or contribute to—harm, so the question is where to draw the line.

I am using a pure hacking definition here.

The heart of the hacking community is not so different from that of education, since one of the central goals is describing how things work.

But if you’re a cynic, or you have a desire to control people, you’re prone to notice that knowing how something works also helps you destroy it. Which means it must be dangerous, right?

Sure, but in a healthy environment this isn’t a problem. When you trust other people, you freely distribute information about how things work. We call it education. And we do this because we assume that others will be responsible with that knowledge.

Videos by STÖK are a great example of positive hacking culture.

That trust is the part that’s going away.

You seldom hear of things being taken off the banned list.

YouTube is now in the position of building a global platform based on edge cases. If someone describes how an alarm system works, it only takes a few people to complain—or maybe a small incident to happen—for them to place that type of content on the dangerous list. Problem is, they’ll keep adding to the dangerous list without the courage to ever remove anything. Eventually everything will be dangerous.

I’m forced to make the analogy to hardware stores, where you can freely walk into thousands of locations across the US and purchase nail guns, ice picks, and saw blades. How is that possible? Don’t people realize how much harm someone could do with such tools?

Knowing how a bridge is built isn’t that far away from knowing how to find vulnerabilities in your own web applications.

Knowing how a bridge is built is very similar. Or how a security alarm works. Or how to find vulnerabilities in systems that you own or use. They’re all examples of education being simultaneously useful and dangerous.

Defensive hackers use this same knowledge to anticipate attacks.

As Marcus points out, much of the hacking community on YouTube is about explaining how things work. Tutorials. Tooling. Explanations. They show us how to find flaws in the things we have, so that we can fix them before someone else takes advantage.

Yes, there are some people who use those videos to do harm, but there are also people who commit murder with kitchen knives. Should we ban cooking at home because of it?


Eventually “they” comes around to “you”.

That’s the future I’m worried about with moves like this from YouTube. I’m worried that when people don’t trust each other, the game becomes removing weapons from the enemy, because “they” cannot be trusted with such information. In such an environment, tools and information quickly become conflated with weaponry.

In that world—where nobody trusts one another, and where education becomes something that we withhold from our enemies—the only approved content will be entertainment.

Entertainment is safe—information is dangerous.

This will do two things: first, it’ll drive real information exchange underground, where—due to the pressures applied—it will often take on darker forms. Second, it will leave the public platforms as a sterile place devoid of real content and conversation.

Ultimately, true accounts of the world will be labeled dangerous or offensive, the public platforms will become Nerf Zones devoid of true information/idea exchange, and quality education will only be found in private forums and private schools.

The only people who will know how things work will either be rich/smart enough to gain access to true education, or people trying to break things rather than build them. And the only people engaging in honest conversations, about things that matter, will be angry with those who tried to silence them.

Privileged access. Banned content. Anger. Secrecy. These are not good things for society.

Banned content policies like this are well-meaning, but they result in a stratification into information-haves and information-have-nots, where the privileged and/or malicious have real information, and the masses and/or law-abiding are left with neutered and approved entertainment.

Eventually that imbalance will produce a net-loss for everyone.
