Responsible Disclosure? How About Responsible Behavior?
A vulnerability was disclosed today in Apple's macOS that allows anyone to log in to the root account with no password.
I'm not 100% sure what Responsible Disclosure means. It seems to have lots of definitions, and they change based on the person and over time. So requiring someone to "responsibly disclose" something, according to whatever arbitrary definition they happen to be using, seems like a silly and unrealistic standard.
But maybe it’ll be easier to agree on responsible behavior. And sometimes it’s useful to transfer the situation to another industry to remove our watch-strap bias.
*Something orange and green in a Petri dish*
Let’s say I’m a smart, young biologist and I just ordered a new do-it-yourself CRISPR kit. And let’s say I accidentally stumbled onto a way, after seven weeks of unrelated research, to make Ebola live longer in a dormant state while simultaneously being more deadly. If it were released, it could kill millions or billions of people.
In this situation there are a lot of ways to do the right thing. The path is not clear. But I absolutely know what not to do.
It’s not ok to find an ISIS representative online and sell the secret for $400,000 so my kids can go to Harvard.
I’m not going to make a mural of the DNA sequence of the new strain, and paint it on the side of my house and invite the local news to film it.
I’m not going to email the Vatican and say, “Hey, you might want to let God know his underwear is showing.”
And I’m not going to get on Twitter and say, “Hey, anyone with a CRISPR kit—order a sample from here and then do X, Y, Z to create a civilization-ending virus.”
Since people tend to wonder about such things: yes, as a tester for over 17 years I’ve submitted a lot of bugs to a lot of companies.
Now, if you’re feeling particularly spunky, you might push back on anyone telling you what to do with your own discovery. And that’s a great reaction to someone insisting that you absolutely must follow procedure 244889.2b, subsection 11, which starts with filling out 49 forms and setting yourself on fire. But nobody is demanding anything that specific, and people will have differing opinions on the exact right move. Something like calling the CDC and saying, “I found something you guys need to see,” though, sounds like a pretty good option.
Now, you might be putting yourself at risk by doing this, especially if you’re in a particularly bad political climate. Or maybe your name is Mohammed McVeigh, and you’d rather not have conversations about ethics with people wearing three-letter acronyms on their jackets.
But let me put an idea to you:
When life presents choices to moral people, it also removes some of the options.
Good people don’t get to sell the virus to ISIS. They don’t get to put it in the water supply in the name of science. And they don’t get to claim that they own the virus, or the decision of what to do with it, just because they discovered it.
And yes, it’s the same with the cybers.
If this had been an insta-root exploit for all Apache servers on the internet, for example, there would still be lots of right answers for how to handle it. I don’t think forcing researchers to follow some sort of strict, varied, and proprietary protocol is the right answer. Give them some freedom and autonomy to do the right thing, in the way that works best for them.
But there would also be lots of wrong answers.
Getting on Twitter with, “OMG Apache security sux. NGINX 4Lyfe. https://pastebin.com/88sl2el20m02l2s4”.
Compromising every box you can find on the internet to do a cool talk in Vegas next year.
Putting the exploit out to the highest bidder so you can retire early.
These aren’t bad options because of cyber, or because of some dumb thing called Responsible Disclosure. Screw that. They’re bad options because they place the good of the discoverer (fame, money, etc.) above the negative effects on others (disruption, financial loss, safety, etc.). This is about human behavior, not “cyber” behavior.
There’s a ton of grey area here, of course, because not everything is Ebola and Apache Armageddon. But in this case, where Apple has a highly responsive security team that surely would have addressed the issue quickly (and probably rewarded the researcher as well), I’m not sure dumping it on Twitter was one of the correct choices.
In short, it’s ridiculous to think that researchers have to choose between 1) following a dogmatic and arbitrarily defined disclosure procedure, or 2) behaving as if the rules of moral human behavior somehow don’t apply to vulnerability research. It’s a false choice, and we should be smarter than that.
Take the cyber out of it and view it purely as a set of choices that benefit you and negatively affect others to various degrees. In most cases—just as in regular life—there will be a number of solid, moral options between the extremes of dogma and chaos.