Cryptography and Open Source


I’ve been reading Bruce Schneier’s book on cryptography for the last couple of days, and one of the main concepts in the text struck me as interesting.

One of the points of discussion when looking at the security of a given algorithm is its exposure to scrutiny. Bruce explicitly states that no one should ever trust a proprietary algorithm. He states that, with few exceptions, the only relatively secure algorithms are those that have stood the test of time while being pored over by thousands of cryptanalysts.

What strikes me is the similarity between this mode of thought and that of the Open Source community on the topic of security. In that debate there is much disagreement about which is better – open or closed – while in the crypto world it’s considered common knowledge that open is better. According to the crypto paradigm, basing any measure of an algorithm’s security on the fact that it’s a secret is generally a bad thing. There, keys are what make the system secure – not the algorithm being a secret.
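Here’s a minimal sketch of that idea (mine, not from Schneier’s book), using Python and the third-party cryptography package as an assumed example library. The algorithm (AES-GCM) is completely public and has been attacked for years; the only thing kept secret is the key.

```python
# Illustrative sketch: security rests in the key, not in a secret algorithm.
# Assumes the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the key is the only secret
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # unique per message, but not secret

ciphertext = aesgcm.encrypt(nonce, b"sensitive data", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"sensitive data"

# An attacker can know the algorithm, the nonce, and the ciphertext;
# without the key, none of that helps them recover the plaintext.
```

The point of the sketch is simply that publishing the algorithm costs you nothing, because everything an attacker would need is still locked behind the key.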

I realize there are some differences in these two models, but they are small enough, in my opinion, to say that those participating in the Open/Closed Source debate could learn something by tapping into the body of knowledge held by this related field.

Can we, for example, take the analogy at face value and compare Joe’s Nifty Algorithm vs. DES with the source code for the Windows kernel + IIS vs. the Linux kernel + Apache? (Comparing Windows to Linux alone isn’t quite fair, since Linux by itself is only a kernel.) If we can, then it’s fairly easy to see which is more secure – Linux. Why? Simply because there are so many people looking for ways to attack it while having full access to the source code (the algorithm). For Windows and IIS, the number of people with that kind of access is relatively paltry in comparison.

In the crypto world, an experienced analyst would never even consider encrypting sensitive data with Sally’s Uber-Secure Algorithm simply because of one concept: Who the hell is Sally? Why is she qualified to create algorithms that are secure? Why should I trust her?

Well, to take the analogy to its conclusion, Sally’s algorithm is very much like the source code of Windows, Sun, or any other proprietary company’s product. What you have is something like this:

Proprietary Company: Trust us, this stuff is pretty solid.

Analyst/Skeptic: Show me.

Proprietary Company: I can’t; it’s secret. But trust me, it’s secure.


If that were an algorithm up for review, the conversation would look more like this:

Algorithm Designer: This is secure.

Cryptanalyst: Show me all the source code, let the world attack it for years, and if it holds up, I’ll believe you – a little bit.

Call me crazy, but the second paradigm is the only system that I can place any significant amount of trust in.

Now, there is one trump card when it comes to secrets and algorithms. When the organization or person creating the secret is trusted to do a good job simply because of who they are, e.g. the NSA or Bruce Schneier himself, then, and only then, can a secret possibly be an asset. The NSA doesn’t, for example, give out its algorithms to be scrutinized, even though they know doing so could potentially lead to the discovery of weaknesses.

To them, it’s more important that it remain secret. I would still be relatively confident that their algorithms were stout, despite the fact that they haven’t been torn apart by the world’s best. Why? Because they have many of the world’s best working for them, while Bruce… well, he’s just the man.

I would also place a fair amount of trust in the security of a Closed Source web server package if that package was known to come from Wietse Venema, for example. Realize, though, that Wietse himself would most likely demand that the project be Open Source, precisely so that the world could verify its relative security. He’s not likely to trust even himself to produce code that’s as secure as the world could make it, and that should say something right there.

So, now the question becomes very simple:

Do you trust both the talent and the moral integrity of the company supplying your Closed Source software so much that you believe its products are superior (from a security standpoint) to those created through peer review and attack by tens of thousands? For me, in the case of Microsoft, Sun, and countless others, the answer is no.
