Since putting up my post titled Obscurity is a Valid Security Layer back in 2009, I have had all manner of discussions about whether obscurity adds to, takes away from, or has no effect on security.
The post recently made its rounds on Hacker News and Twitter, and Robert Graham dedicated a post to explaining why I’m wrong about it. I don’t know Robert personally, but I know of his reputation. He’s a very smart guy who’s done tremendous things for the industry and who tends to put together very strong arguments.
It may be for semantic reasons—meaning we might actually agree once they’re out of the way—but the position in his response was flawed.
Let’s take a look. He writes:
Ok, this sounds like his issue might be a purely semantic one, which, if that were his only argument, I might agree with. It is the reason I’ve titled this post Disambiguation of “Security by Obscurity”—because the terms themselves are injecting confusion.
But then he continues…
First of all, I believe it’s somewhat accepted within the security community that while the military may use AES and other public standards, organizations like the NSA have their own algorithms, and they’re not public.
Maybe I’m wrong about that, but it doesn’t matter that much to the argument. Let’s continue.
The fundamental mistake he’s making here, which is the same one that so many others make, is thinking that obscurity and security are fundamentally disconnected.
Let’s (for these purposes) define security as reducing risk, and let’s define risk as:
risk = probability × impact
So we can lower risk by reducing the probability of being effectively targeted and attacked, or we can lower risk by reducing the impact of an attack that does succeed.
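To make the lever concrete, here’s a toy calculation using the definition above. The dollar figures and probabilities are invented purely for illustration:

```python
def risk(probability: float, impact: float) -> float:
    """Risk as defined above: probability of a successful attack times its impact."""
    return probability * impact

# Hypothetical numbers: a 40% chance of a successful attack costing $100,000.
baseline = risk(0.40, 100_000)        # 40,000

# Suppose obscurity measures (camouflage, a non-default port, port knocking)
# halve the chance of being successfully targeted; impact is unchanged.
with_obscurity = risk(0.20, 100_000)  # 20,000

print(baseline, with_obscurity)
```

Same impact, lower probability, lower risk. That’s the entire mechanism.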
Pretty straightforward.
Now let’s talk about the word “Obscurity”. As I said, it’s a word that means different things to different people, and for that reason it tends to be a bad word to use in explanations and discussions. So I’ll take the hit on that one, and I’ll try to repent by creating this post.
Obscurity in my mind (as illustrated through the port-knocking, SSH port changing, and tank camouflage examples in my original article) refers to the dictionary definition of making something obscure. Definitions include things like: not readily seen or noticed; kept from being known; concealed or hidden from view.
That’s what obscure means, and what obscurity is, when dealing with most types of military or information security. It equates to a very simple, and very powerful lever in the risk equation: lowering the chance that you will be successfully attacked.
That means lowering probability in the risk equation, which means improving the security of the system.
This applies to a well-camouflaged (but otherwise identically armored) tank on the battlefield, it applies to a secure SSH daemon running on an alternative port, and it applies to a secured web server protected by port-knocking.
If you haven’t otherwise lowered the security of the system in some way, but you make the target less likely to be targeted and/or successfully attacked, you have improved its security. It’s obscurity, and it provides security.
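As a concrete sketch of the SSH example: moving the daemon off the default port is a one-line change in sshd_config (the port number 2222 here is an arbitrary illustration, not a recommendation):

```
# /etc/ssh/sshd_config
# Listen on a non-default port instead of 22. The daemon is otherwise
# unchanged: still patched, still hardened, still using strong auth.
Port 2222
```

Opportunistic scanners sweeping port 22 now miss the service entirely, while the underlying hardening is untouched.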
Here’s Robert again:
This was the entire point of my previous post—that security shouldn’t rely on obscurity alone, but that obscurity used as a layer does improve it. Not sure how he missed this.
But let’s continue…
Ok, now I’m confused. Are you saying it is or is not a security layer? By driving up the cost to attack the service you raise security, which is accomplished by making it harder to find you and attack you (see tanks, SSH daemons, port-knocking).
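Since port knocking keeps coming up, here’s a toy sketch of the mechanism for readers unfamiliar with it: the protected port stays closed until a client “knocks” on a secret sequence of ports. The sequence below is invented, and real implementations (knockd, for example) work at the packet-filter level rather than as application code—this just illustrates the state machine:

```python
# Toy port-knocking state machine: tracks each client's progress through a
# secret knock sequence and signals when the protected port should open.
KNOCK_SEQUENCE = [7000, 8000, 9000]  # hypothetical secret sequence

class KnockTracker:
    def __init__(self, sequence=KNOCK_SEQUENCE):
        self.sequence = list(sequence)
        self.progress = {}  # client IP -> index of the next expected knock

    def knock(self, ip: str, port: int) -> bool:
        """Record a knock; return True when the full sequence completes."""
        expected = self.progress.get(ip, 0)
        if port != self.sequence[expected]:
            # A wrong knock resets this client's progress.
            self.progress[ip] = 0
            return False
        self.progress[ip] = expected + 1
        if self.progress[ip] == len(self.sequence):
            self.progress[ip] = 0
            return True  # caller would now open the real port for this IP
        return False

tracker = KnockTracker()
print(tracker.knock("203.0.113.5", 7000))  # False: sequence not complete yet
print(tracker.knock("203.0.113.5", 8000))  # False
print(tracker.knock("203.0.113.5", 9000))  # True: open the protected port
```

To an attacker who doesn’t know the sequence, the box shows no open service at all—which is exactly the “harder to find you” lever described above.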
So where’s the disagreement?
Hold on. We’re talking about whether we’re adding security, not the tradeoff between security and usability. Don’t cross the streams.
First, you don’t patch SSH on ports, you patch SSH. You stop the service, patch, and restart the service, and when it comes back up it’ll be running on whatever port(s) it was running on before. It’s pretty easy to check lsof or netstat on the boxes in your environment to see what ports the daemon is bound to.
And if you don’t have that information, because your service/asset management equates to “zmap all the things and patch what you find”, then you are already screwed for a different reason.
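And if you do want to confirm that a given box answers on the port you think it does, you don’t need a network-wide scan—a quick TCP connect check works alongside lsof/netstat. A minimal sketch, standard library only (the hostname and port are placeholders):

```python
import socket

def port_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g., confirm your SSH daemon is on the non-default port you chose:
# port_listening("my-ssh-box.example.com", 2222)
```

Note this only tells you a listener is up, not what it is—your asset inventory should already hold that.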
Hold on, who is the attacker here? Who are we defending against? Are we defending against someone who’s harvested Shodan for all custom SSH ports on the Internet, and who attacks those custom ports with 0days at the exact same time as they attack everyone else on port 22?
When the threat isn’t a mass-0day with a short lifespan directed at millions, and someone is specifically targeting your organization, you have bigger issues than what port your SSH server listens on. This is especially true if you’re not sophisticated enough to know what boxes run what services without doing a live discovery scan every time you need to patch.
The key point is that when you “increase attacker effort” to find you, to target you, or to attack you, you improve your security. Period.
The cryptography case that Kerckhoffs’s Principle addressed (which hijacked the “obscurity” term and created mass confusion on this topic) was a very specific and stupid case where one creates a security system based entirely on a secret which, once revealed, compromises everything.
That’s not generic “obscurity” as is used all throughout various security disciplines; that’s a dumb design for a crypto system.
To make the point more clear with the tank example, imagine hearing the following argument from someone criticizing the “obscurity” of using camouflage on M1A1 tanks in the desert.
Brilliant, except they’re not relying on it. It’s still an armored tank. And we’re not relying on being on another SSH port either, because it’s a patched and hardened server that only accepts internal CA certificates for authentication.
All we did is increase the cost to the attacker of attacking us, which lowered the probability that we’d be attacked, which reduced our risk, which made us more secure.
And we did it by making ourselves more obscure.
Anyone who has trouble with this concept should imagine being an armored soldier in a sniper area. In scenario 1 you’re wearing an orange safety vest, and in scenario 2 you’re using a cloaking device that makes you invisible.
Have you lowered your risk by using the cloaking device? Have you raised it by putting on the vest?
And just to add some more real-world examples, consider the following activities performed by security and intelligence services all over the world:
Alternating routes taken by high-profile people to and from locations
Using multiple cars and aircraft to hide which one the principal is in
Keeping locations and future plans secret so people cannot set up ambushes
Imagine the argument against these:
Someone should explain that to the Secret Service who, according to this argument, have been wasting all those extra motorcade limos for no reason.
In fact, might as well throw out all of OPSEC—that’s just obscurity too.
Look, Robert wrote BlackICE when I could barely ping an IP address. He’s one of the pioneers of this industry, I’m standing on his shoulders, and he could no doubt still teach me a million different things. This just isn’t one of them.
He got this one wrong, and that’s ok. You should see the time that I argued we should drop the “www” subdomain.
I do agree that the term “Security by/through Obscurity” has too much baggage to be used effectively in discussions. I am looking for a better way to describe making it difficult to target you. I thought about “Ambiguity” or something similar, but I haven’t found a good one.
Who knows, maybe “Obscurity” really is the best word, and we need to untrain the Kerckhoffs / crypto case as an automatic response. I remain agnostic as to the best way to solve the semantic problem.
Regardless of what it’s called, making it harder and/or more costly to effectively attack you definitely improves security.