Security and Obscurity Revisited
Many of us are familiar with a concept known as Security by Obscurity. The term has quite negative connotations within the infosec community, often for the wrong reasons. There's little debate that security by obscurity is bad: the term describes a system in which the hidden secret is the key to the entire system's security.
Obscurity itself, however, when added to a system that already has decent controls in place, is not necessarily a bad thing. In fact, when done right, obscurity can be a strong addition to an overall approach.
Good Obscurity vs. Bad Obscurity
An example of security by obscurity is when someone has an expensive house outfitted with the latest alarm system, but they keep the key and alarm code in the planter box next to the front door. This is security by obscurity because if anyone knows the secret, i.e. that the key and code are stored in the planter, then the security of the entire system is compromised.
That's security by obscurity: if the secret ever gets out, it's game over. The concept comes from cryptography, where it's utterly sacrilegious to base the security of a cryptographic system on the secrecy of the algorithm.
Obscurity as a Layer
Technologies such as port knocking (PK) and single packet authorization (SPA) allow you to hide network services behind an additional layer of protection. Using them, you can have an SSH server (or any other already-secured daemon) sitting live on the Internet that port scanners literally can't see. This works because your firewall sits between the Internet and your listening service.
Your firewall listens for incoming requests and ignores all standard attempts to connect to your system. If, however, you ask in a very specific way, i.e. using the secret knock sequence (PK) or a packet with a special payload (SPA), it'll open access to the server for your specific source IP. This is where many respond with something like the following:
That's stupid because it's security by obscurity. If anyone figures out the secret, they'll just replay it and be into the system!
That's where they make the error. They are missing the fact that you still have to authenticate to the daemon behind this layer. You didn't replace the service's security with this layer, you simply added it to what already existed. Remember, the NSA most likely has great algorithms, but they still don't publish them.
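To make the port-knocking idea concrete, here is a minimal sketch using iptables and its recent match module. The knock ports (1234, 2345) and the timing windows are hypothetical choices for illustration, not details from any particular deployment:

```shell
# Hypothetical two-port knock sequence guarding SSH (run as root).
# Step 1: a packet to port 1234 marks the source IP in list K1 (and is dropped).
iptables -A INPUT -p tcp --dport 1234 -m recent --name K1 --set -j DROP
# Step 2: within 10 seconds, a packet to port 2345 from an IP in K1 promotes it to K2.
iptables -A INPUT -p tcp --dport 2345 -m recent --name K1 --rcheck --seconds 10 \
         -m recent --name K2 --set -j DROP
# Step 3: within 30 seconds, that IP may connect to the real SSH daemon...
iptables -A INPUT -p tcp --dport 22 -m recent --name K2 --rcheck --seconds 30 -j ACCEPT
# ...and everyone else sees nothing listening on port 22 at all.
iptables -A INPUT -p tcp --dport 22 -j DROP
```

Note that even an attacker who observes and replays the knock still lands in front of SSH's own authentication; the knock is a layer in front of the existing control, not a replacement for it.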
Another example of this can be found in the concept of camouflage, used throughout the history of warfare. Specifically, consider an armored tank such as the M-1. The tank is equipped with some of the most advanced armor ever fielded, and has repeatedly been shown effective in real-world combat. So, given this highly effective armor, would the danger to the tank somehow increase if it were painted the same color as its surroundings? Or, in some future where we can make the tank completely invisible, would that reduce the effectiveness of the armor? No, it wouldn't. Making something harder to see does not make it easier to attack if or when it is discovered. This is a fallacy that simply must die.
When the goal is to reduce the number of successful attacks, starting with solid, tested security and adding obscurity as a layer does yield an overall benefit to the security posture. Camouflage accomplishes this on the battlefield, and PK/SPA accomplish this when protecting hardened services.
Of course, being scientific types, we like to see data. In that vein I decided to do some testing of the idea using the SSH daemon (full results here).
I configured my SSH daemon to listen on port 24 in addition to its regular port of 22 so I could see the difference in attempts to connect to each (such connections are usually password-guessing attempts). I expected far fewer attempts to access SSH on port 24 than on port 22, which I equate to less risk to my, or any, SSH daemon.
[ Setup for the testing was easy: I added a Port 24 line to my sshd_config file, and then added some logging to my firewall rules for ports 22 and 24. ]
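For readers who want to see that setup spelled out, here is one way it might look with OpenSSH and iptables; the firewall syntax and log file path are assumptions, since the original setup note doesn't specify them:

```shell
# /etc/ssh/sshd_config -- listen on both the default and the obscured port:
#   Port 22
#   Port 24

# Log new connection attempts to each port so they can be counted later:
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j LOG --log-prefix "SSH-22: "
iptables -A INPUT -p tcp --dport 24 -m state --state NEW -j LOG --log-prefix "SSH-24: "

# Count the attempts per port from the kernel log (path varies by distro):
grep -c "SSH-22: " /var/log/kern.log
grep -c "SSH-24: " /var/log/kern.log
```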
I ran with this configuration for a single weekend, and received over eighteen thousand (18,000) connections to port 22, and five (5) to port 24.
That's 18,000 to 5.
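It's worth spelling out the arithmetic on those weekend totals (the counts are the ones reported above):

```shell
# 5 attempts on the obscured port vs. 18,000 on the default port:
awk 'BEGIN {
  printf "obscured port saw %.3f%% of baseline traffic\n", 5 / 18000 * 100
  printf "that is a %.2f%% reduction in connection attempts\n", (1 - 5 / 18000) * 100
}'
```

In other words, the non-standard port saw about 0.028% of the default port's traffic, a reduction of roughly 99.97% in attempts before any authentication even comes into play.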
Let's say there's a new zero-day out for OpenSSH that's owning boxes with impunity. Is anyone willing to argue that someone unleashing such an attack would be equally likely to launch it against a non-standard port as against port 22? If not, then your risk goes down simply by not being there. It's that simple.
So the next time the subject comes up, remember a simple concept: Security by Obscurity is bad, but obscurity when added as a layer on top of other controls can be an effective way to lower risk. Those who dismiss obscurity out of hand are simply regurgitating someone else's (wrong) ideas rather than working through the concepts themselves. ::