Comments on the OWASP Top 10 2017 Draft

I’m going to make some comments about the proposed 2017 update of the flagship OWASP Project—the OWASP Top 10.

Before I do, I just want to say that as a present and former leader of multiple OWASP projects (IoT Security, Mobile Top 10, Game Security Framework, etc.) over the last seven years, I empathize with the difficulty of making these lists. It sucks to work for weeks, months, and sometimes years to get consensus on something, only to have 1,000 internet randos fly by and crap on it. Please accept my comments as constructive criticism rather than hate. I appreciate what you do.

Initial impressions and analysis

So here are some of my thoughts on the new proposed organization of the categories.

I hated A7 initially

My adverse reaction to it was immediate and strong. It’s called Insufficient Attack Protection, which struck me as just a horrible name, for reasons I’ll explain in my next point. But then I read the full-page description of it, and I can definitely see where they were going with it.

It’s basically detection and response (which includes patching), which I think is a phenomenal addition to the list. I do dislike the name, though. So I started thinking about better names, as I’ve done a million times in my various OWASP projects, and quickly realized all the better names are four to five words long.

  • Insufficient Detection and Response

  • Lack of Detection, Response, and Patching

  • Etc.

So they probably battled over this point for days or weeks or months, only to arrive at the not-great compromise of “Insufficient Attack Protection”. It’s a noble attempt, although I might still try to convince them to change it to “Insufficient Detection / Response”.

Anyway, the short version is that I love the category but really dislike the name.

A more fundamental problem

TABLE 1. — The current proposed 2017 list (April 2017).

This brings me to the reason I moved away from “Top 10” for the OWASP IoT Security Project. With that project, the Mobile Security Project, and virtually every other list-based effort that I’ve encountered, I’ve seen the project team run into the same wall.

They’re mixing different types of security issues into a single list.

What we’ve done with these different OWASP projects is collect different forms of “bad things”, which can include any of the following:

  • Vulnerabilities

  • Threats

  • Risks

  • Miscellaneous Bad Hombres

  • Etc.

The differences between these are quite important, and blending them all together into a single list can be problematic. But the biggest problem is when people on the project team don’t agree on the definitions, their differences, or whether or not it’s ok to mix them in a single list.

So let me just ask you, dear reader: What is the OWASP Top 10 a list of?

Is it a list of vulnerabilities? Not really. Injection isn’t a vulnerability; it’s a category of vulnerability. Same with Auth and Session Management, Access Control, Security Misconfiguration, etc. Then you have XSS and CSRF, which are individual vulns.

So we have a list of 10 somethings—and on that list we have a mix of parents and children, containers and contents, buckets and water. That kills me. Always has. Especially since people don’t usually realize it’s happening during the discussion, and even when they do we can’t agree on terms.

So then we have Underprotected APIs. That’s not even a category. And it’s definitely not a vulnerability. It’s like a…thing. And I love it actually. I think it’s a good item to be on the list. But what is it? And does it belong on this list?

Hard to say.
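To make the parent/child mixing concrete, here’s a rough sketch in Python. The entry names come from the 2017 draft, but the level labels are my own read on them, not anything OWASP has published:

```python
# A sketch of the different levels mixed together in the current draft list.
# Entry names are from the 2017 draft; the labels are my own characterization.
top_10_entries = {
    "Injection": "category of vulnerabilities (SQLi, LDAP injection, ...)",
    "Broken Authentication and Session Management": "category of vulnerabilities",
    "Security Misconfiguration": "category of vulnerabilities",
    "Cross-Site Scripting (XSS)": "individual vulnerability class",
    "Cross-Site Request Forgery (CSRF)": "individual vulnerability class",
    "Underprotected APIs": "attack surface / consideration, not a vulnerability",
}

# Ranking all of these against each other in one flat list is like ranking
# ["Fruit", "Apples", "Vegetables"] against each other.
for entry, level in top_10_entries.items():
    print(f"{entry:48} -> {level}")
```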

My thoughts on OWASP lists

So here are some of the ideas I’ve had regarding this composite listing problem.

I think we need more discrete and granular lists that clearly indicate what they are and who they’re for. In the IoT Security project we’re doing this by having more sub-projects within the project: breaking vulnerabilities out into a vulnerabilities list, attack surfaces into an attack-surfaces list, risks into a risks list, and so on. I don’t want to cross the streams. I haven’t solved the problem, but that’s what we’re working towards.

I think the OWASP Top 10 could benefit greatly by calling itself what it is—a list of things to consider and avoid when building web applications. That’s not pretty. It’s not catchy. It doesn’t sound as cool as “Top 10 Vulnerabilities”. But it’s more honest. If it’s useful to make it a composite list, then let’s do it. But let’s not lie to ourselves about it.

Perhaps it could be the Top 10 Risks—if we were to argue that each item’s probability and impact have been weighed not just within each vulnerability but in relation to the others. In other words, the project team would be saying something like:

We the OWASP Top 10 Team have studied x amount of data and have ranked not only the prevalence but the impact of all these issues as they relate to overall web application risk. The list is a composite of vulnerabilities, categories of vulnerabilities, and considerations, and we’ve determined that this is the order in which you should work to prevent these types of issues within your own web applications.

I don’t know that they are saying that, or that they can based on the data they’ve considered, or that they would even want to. Is the number actually a priority rank, or is it just a designation so you can keep track? Not everyone knows the answer to that, and it should be clearer.
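To illustrate what an explicit risk ordering would even mean, here’s a toy sketch using the basic risk = likelihood × impact framing behind the OWASP Risk Rating Methodology. The item names are real; every score below is invented purely for illustration:

```python
# Toy example of ranking list items as risks rather than as vulnerabilities.
# Scores use a 0-9 scale as in the OWASP Risk Rating Methodology, but the
# actual numbers here are invented for illustration only.
items = [
    # (name, likelihood 0-9, impact 0-9)
    ("Injection", 8, 9),
    ("Cross-Site Scripting (XSS)", 9, 6),
    ("Underprotected APIs", 6, 7),
]

# An explicitly risk-based list would commit to an ordering like this one,
# where the number really is a priority rank and not just a label.
ranked = sorted(items, key=lambda item: item[1] * item[2], reverse=True)
for rank, (name, likelihood, impact) in enumerate(ranked, start=1):
    print(f"A{rank}: {name} (risk score {likelihood * impact})")
```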

Summary

This is a hard problem, and I applaud the work that has been done.

The TL;DR for me is that this is a great list of things for developers to avoid doing in their own applications, but I’m not happy with the seemingly confused way we get there. Not just in this project, but in all similar projects. We’re throwing things into these lists with no regard for taxonomy, hierarchy, or any other structure. That would be ok with me if we were explicit about it, but I don’t think we are.

I think more clarity on said structure could help significantly.

Hopefully my comments are taken with the love that produced them, and can lead to some additional conversation and/or clarity around the structure of not just this list, but other OWASP projects going forward.

Happy to be part of that conversation if anyone wants to engage.

[ Apr 13, 2017 — Two points that I didn’t catch in my first reading of the draft: 1) The blatant promotion of specific companies and products in the document diminishes its authority, and I think they should be removed. 2) At the end of the document the project leaders actually make it fairly clear that the Top 10 is a list of risks and not vulnerabilities, and they even give a link to the specific methodology used to determine those risks. I’m glad they make that clarification, although I think it belongs at the very top of the document rather than at the end. ]

Notes

  1. Jeremiah Grossman had a great suggestion, which was “Lack of Web Asset Inventory”, or something similar. I loved it, but the issue is that the OWASP lists are supposed to be for makers and breakers primarily (at least as I understand the history and current zeitgeist), and the lack of inventory point—while powerful—is clearly an organizational issue, not an issue with a single application or system. For that reason I’m not convinced that it fits in the current form of OWASP Top 10. But perhaps in a better version, or in an orthogonal project.

  2. Speaking of OWASP projects, I’ll be at the OWASP Summit in London in June. If you’re going, let’s meet up while we’re there.
