Thoughts on the OWASP Top 10 2021


This post will talk about my initial thoughts on The OWASP Top 10 release for 2021.

Let me start by saying that I have respect for the people working on this project, and that as a project maintainer myself, I know how impossibly hard this is.

Right, so with that out of the way, here’s what struck me with this list, along with some comments on building lists like this in general.

The list’s biggest problem is that it doesn’t have a clear identity.

Is it a list of vulnerabilities? Is it a list of vulnerability categories? Is this for developers? Is this for security companies? Is this for security tool output labeling? Is it a tool for helping security metrics functions within companies?

It’s not clear.

Basically, when I look at this list, I don’t see a tangible list of things; what I see is an ontological mess.

  • The addition of Insecure Design is one of the problems. While I think everyone can agree it’s important, it’s not a thing in itself. It’s instead a set of behaviors that we use to prevent issues. Ontologically, we’ve confused a security process with the problems that process is meant to discover or prevent. As my friend Joel Parish points out, the far end of this problem is that Secure Design is meant to find everything else on the list. So, it’s basically an Inception-like “Draw The Rest of the Owl” type of situation. He’s like, “When does ‘failure to follow OWASP Top 10’ end up on the list?” Indeed.

  • The mixing of categories and vulnerabilities has always been a huge problem for me, both as a user and as an OWASP Project Leader. If you’re going to call out the worst vulns, do that. If you’re going to list a bunch of categories, do that. Or at least call out that you’re mixing the two.

  • Then you have the situations where things can fall into multiple categories. Not turning off FTP on your home router can be Security Misconfiguration and also Insecure Defaults. The same can exist with Vulnerable and Outdated Components and Software and Data Integrity failures. Relying on an insecure component can hit both of those.

  • Security Logging and Monitoring suffers from something similar to Insecure Design. Namely, it sits in this weird realm between being a tangible issue and being part of a security review process. And it’s definitely part of Secure Design. So the question is, are we being told what exactly to log? Or are we just saying we should generally log? It’s the category vs. issue problem again.

  • And then we have the lonely SSRF at number 10. The only specific issue on the list. I mean it’s a good vuln, but it stands out like a vulnerability on a category list, which it is.
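To make the contrast concrete: SSRF really is a single, specific bug class, which is part of why it looks out of place next to the categories. Here’s a minimal sketch of one common SSRF defense, a check that refuses URLs whose host is a literal internal IP. The function name and scope are my own illustration, not anything from the OWASP list, and a real defense would also need to resolve hostnames and guard against DNS rebinding.

```python
import ipaddress
from urllib.parse import urlparse

def is_obviously_internal(url: str) -> bool:
    """Flag URLs whose host is a literal private/loopback/link-local IP.

    Illustrative only: a production SSRF defense must also resolve
    hostnames and re-check the result, then pin that resolved IP for
    the actual request to avoid DNS-rebinding bypasses.
    """
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Hostname, not a literal IP -- would need DNS resolution to judge.
        return False
    return ip.is_private or ip.is_loopback or ip.is_link_local
```

So a fetch-this-URL feature would reject `http://169.254.169.254/latest/meta-data` (the classic cloud-metadata target) before ever making the request. The point is that this is a concrete, testable check, unlike “Insecure Design,” which you can’t unit test.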

Analysis

I think ultimately this comes down to the first point: the list has an identity problem.

It started as a list of vulnerabilities that simply had a hard cut-off at 10. Its purpose was never to fix all of WebAppSec in 10 bullets. That fix-everything mentality is what’s gotten us to the point of adding “Lack of Secure Design”.

Think of it in terms of two wildly different questions:

If you could only fix 10 specific vulnerabilities, which 10 would you fix first?

vs…

What are the first 10 categories or activities a security program should invest in to improve its security?

Those are very different things.

These lists started as the former, and then over the years have migrated to the latter. We’re currently stuck in an uncanny valley between the two.

I’m not even convinced the list shouldn’t be categories. I’d love to hear the arguments. It could be that specific vulns are too narrow and not useful enough unless you do a Top 100 list or something. But what I do know is that confusion is not a good thing, and that’s what we’re grappling with at this point.

Recommendations to the team

  1. Solve the identity problem, and communicate that identity very clearly.

  2. Don’t start with the data and look for categories and vulns. Instead, start with the purpose of the project and the output you want it to produce for a defined audience, and then look at the data, and then the list.

  3. In doing the first two, be willing to completely reconsider the list. It might be that it’s time for a bigger adjustment than ever for 2022. Like moving to pure categories, or pure issues. A reframing.

Summary

The list still provides value. It does. And I appreciate everyone’s work on the project. I just think it would be far more useful with more clarity around its identity and intended use.