Marcus Ranum is a highly respected (for good reason) information security professional who has done a ton of cool stuff during his multiple decades in the field. Most notable to me is the fact that he is the father of the proxy firewall — quite an accomplishment. I actually had an opportunity to exchange a couple of words with him once at a conference; both his experience and his unique perspective were quite palpable.
At any rate, Marcus has just come out with a new essay entitled “The Six Dumbest Ideas in Computer Security.” I have some major issues with it.
His second point is that rather than eternally enumerating, tracking, and messing with what’s bad on a system, one should simply figure out what’s good, allow that, and deny everything else. I think that’s a great idea, don’t get me wrong, but it’s a bit like saying it’d be better if all programmers were security experts. True, but it’s just not possible.
The problem here is simple. People have jobs where they maintain other people’s systems. They didn’t write the software that runs on them, and they barely have the power to ask for patches to be installed — let alone that the entire paradigm of system security be re-evaluated and implemented. I just don’t see anything practical in this point for the everyday administrator. We don’t have the luxury of fixing problems by being perfect.
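To make the distinction concrete, here’s a minimal sketch of the two policies being contrasted — default permit (enumerating badness) versus default deny (enumerating goodness). The process names and lists are purely hypothetical illustrations, not anything from Marcus’ essay:

```python
# Hypothetical examples for illustration only.
BADLIST = {"worm.exe", "keylogger.exe"}       # enumerating badness: must be kept current forever
ALLOWLIST = {"winword.exe", "outlook.exe"}    # enumerating goodness: small, known-good set

def default_permit(process: str) -> bool:
    """Default permit: anything runs unless it's on the known-bad list."""
    return process not in BADLIST

def default_deny(process: str) -> bool:
    """Default deny: nothing runs unless it's explicitly approved."""
    return process in ALLOWLIST

# A brand-new piece of malware isn't on anyone's badlist yet,
# so default permit lets it run while default deny blocks it.
print(default_permit("unknown_malware.exe"))  # True  (slips through)
print(default_deny("unknown_malware.exe"))    # False (blocked)
```

The sketch also shows why the idea is hard to apply after the fact: building that allowlist means someone has to know, up front, everything legitimate on the system — which is exactly the knowledge the everyday administrator usually doesn’t have.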
Vulnerability Scanning and Pentesting Are Pointless?
His next point that I take issue with is the notion that vulnerability assessments and pentesting are pointless. I see his point and actually think it’s valid. The only problem with it is that it absolutely doesn’t matter. His point is essentially that by plugging holes we’re not addressing true security problems, i.e. the system’s security design.
Well, sure. But the conclusion is absurd. What are we to do given the current problems? Stop patching, too? Just give up because we’re not worthy of perfection? Seriously.
The basic problem here is that Marcus is speaking in a completely ideal sense that doesn’t represent the real world. The real world is made of systems that have holes in them. Those holes either get patched or money gets lost. As such, we run around playing the patching game — which is of course related to the “vulnerability research” game and the “vulnerability assessment” game. It’s life.
The underlying issue is that systems are horribly designed from the ground up. This is the concept that I touched on in my paper, “Jousting From Unicycles”, and I think it’s quite in line with Marcus’ comments. What I didn’t do in that piece, however, was dismiss the measures being taken today — in the unclean world we live in — to deal with the unfortunate truths we face.
Hacking Is Cool, Isn’t It?
His next point is that the whole concept of teaching and learning hacking is lame.
I couldn’t disagree more. Does he not realize that learning how memory is set up or how network traffic moves across a system is learning about engineering? Learning “hacking” in the true sense is learning how things work, and not just on a small, specific scale. The better security professionals learn “concepts”, not a series of examples that stand on their own. I think he’s really off on this one.
Overall, I think Marcus is a man of extremes. He’s brilliant, there’s no doubt about that; it’s just that very little of what he’s said here is practical in any way. These ideas need to be incorporated into security “design”, i.e. into future projects and future products (preferably into programming languages, system architecture, IDEs, and compilers).
In other words, we’re dealing with the here and now — not with some sort of fictional, ideal world where default-deny policies prevail and all system admins have a full understanding of everything in their environment. It just doesn’t represent reality. Should it? Sure, but there are lots of things that should be the case. To me this is the same as criticizing local police because they wouldn’t be needed if everyone were nice.