
October 30, 2013


Security profiling is profoundly misunderstood. Like Security by Obscurity, many people think that as soon as you hear “profiling” (or “obscurity”), it must mean the bad kind.

This is a mistake.

The Oxford American dictionary defines profiling as:

  "The recording and analysis of a person's psychological and behavioral characteristics, so as to assess or predict their capabilities in a certain sphere or to assist in identifying a particular subgroup of people."

The bad kind of profiling involves individuals assuming that anyone who looks like X is likely to be malicious—ignoring (or not understanding) that the statistics are usually overwhelmingly against that assumption. Profiling in this sense is fundamentally an emotional, rather than a logical, approach to security, and has the following flaws:

  1. It fails to understand that humans are individuals, and that trigger attributes need to be weighed against all other attributes in order for a proper security review to be performed.
  2. It’s often driven by overt racism or other types of xenophobia, rather than data, and thus is a compromised approach from the very beginning.
  3. It often negatively affects security by causing the person performing the profiling to ignore threats that don’t match a certain appearance.

The good kind of profiling—which unfortunately looks to non-security-types very much like the bad kind—is found all throughout security. Within Information Security, for example, the fields of spam detection and intrusion detection are all based upon the simple concept of pattern matching.

There are sets of characteristics for known bad actors, and there are sets of characteristics for actors that are being inspected. We compare these attributes using some sort of algorithm, and we arrive at a decision.
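That compare-attributes-and-decide step can be sketched in a few lines. Everything below—the attribute names, the known-bad profiles, and the overlap threshold—is invented for illustration, not taken from any real detection system:

```python
# Hypothetical sketch: score an observed actor by its overlap with
# known-bad attribute sets, and decide once overlap crosses a threshold.
# Attribute names, profiles, and threshold are illustrative only.

KNOWN_BAD_PROFILES = [
    {"spoofed_sender", "bad_link", "urgent_tone"},
    {"misspellings", "attachment_exe"},
]

def matches_known_bad(observed, threshold=0.5):
    """Return True if the observed attributes cover at least `threshold`
    of any known-bad profile (a Jaccard-style fraction of the profile)."""
    return any(
        len(observed & profile) / len(profile) >= threshold
        for profile in KNOWN_BAD_PROFILES
    )

print(matches_known_bad({"bad_link", "urgent_tone", "greeting"}))  # True (2/3 of first profile)
print(matches_known_bad({"greeting", "signature"}))                # False (no overlap)
```

Real systems use far richer algorithms than set overlap, but the shape is the same: known-bad characteristics on one side, observed characteristics on the other, and a decision rule in between.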

More technically, a Bayesian approach is often behind much of this analysis: start with a prior probability P(spam), then update it with each piece of evidence E via Bayes' rule, P(spam | E) = P(E | spam) · P(spam) / P(E).

So, we start off with the probability that a message is spam, or that someone is a malicious actor, and then, based on the evidence we see come in, we adjust that probability. And if we hit a certain threshold, we take some sort of action—whether that’s flagging for additional scrutiny, or what have you.
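That update loop can be sketched directly. The prior, the evidence attributes, and the per-attribute likelihoods below are made-up illustration values, not real spam statistics:

```python
# Minimal sketch of Bayesian updating for spam scoring.
# All numbers are invented for illustration.

def update(prior, p_given_spam, p_given_ham):
    """One Bayes update: fold a single piece of evidence into the prior."""
    numerator = p_given_spam * prior
    denominator = numerator + p_given_ham * (1 - prior)
    return numerator / denominator

prior = 0.5  # assumed starting base rate

# (P(evidence | spam), P(evidence | ham)) for each observed attribute
evidence = [
    (0.80, 0.05),  # blatant misspellings
    (0.70, 0.10),  # link to a known-bad domain
    (0.60, 0.30),  # urgent call to action
]

for p_spam, p_ham in evidence:
    prior = update(prior, p_spam, p_ham)

THRESHOLD = 0.95
print(f"P(spam | evidence) = {prior:.4f}")  # ≈ 0.9956 with these made-up numbers
print("flag for scrutiny" if prior > THRESHOLD else "deliver")
```

Production filters do the same thing in log space across thousands of tokens, but the mechanic—prior, evidence, posterior, threshold—is identical.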

That’s security, and it’s happening trillions of times a day within security systems all over the world. The reason it’s accepted is because it’s overwhelmingly obvious that it’s a good way of doing things.

Bad profiling happens when someone has a poor understanding of the initial odds of someone being a malicious actor and/or they incorrectly adjust that probability based on seeing a particular attribute or set of attributes—again, because of emotion, bias, and/or a lack of training.

What if we didn’t profile?

Let’s turn it around. Imagine a security system that didn’t use this concept of evidence-based pattern matching to make security decisions. Say you’re evaluating an email anti-spam solution and you ask the vendor how the system knows whether something is spam. Suppose the answer is that messages only get flagged after users report them.

Hmmm…ok. Isn’t that too late though? What about blocking it before it annoys someone?

Right then—this meeting is over.

Put another way: if the person in charge of security is not evaluating incoming potential threats against previous threats, comparing the attributes of each to find similarities, that person is plainly terrible at their job.


This type of analysis is fundamental to security screening—always has been and always will be.

A real-world example

A good example would be a security guard in Northern Ireland during the IRA bombing period. What do you do with a security guard who doesn’t scrutinize young Irish men more than elderly Indonesian women when you have limited time and resources to perform screening?

You fire him.

Similarly, what do you do with an email security vendor that won’t mark multiple blatant misspellings in emails as probable spam, on the grounds that “some people aren’t good at spelling, and we don’t want to offend them”?

Fire that vendor.

This is because looking at young Irish men and poor spellers more (but not exclusively) is a better use of limited security resources in these situations. That’s the game we play in security—making decisions quickly, with limited information, and doing so as accurately as possible.
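The limited-resources point can be made concrete with a small triage sketch. The capacity, attribute weights, and example items below are all hypothetical—chosen to show the idea that everyone still gets screened, but scrutiny is allocated by risk score rather than uniformly:

```python
# Hypothetical triage sketch: with capacity to deep-screen only a few
# items, rank by a risk score derived from matched attributes.
# Weights and capacity are illustrative, not real data.

CAPACITY = 2  # we can only deep-screen two items per batch

WEIGHTS = {"misspellings": 2.0, "bad_link": 3.0, "urgent_tone": 1.0}

emails = [
    {"id": "a", "attrs": ["urgent_tone"]},
    {"id": "b", "attrs": ["misspellings", "bad_link"]},
    {"id": "c", "attrs": []},
    {"id": "d", "attrs": ["misspellings"]},
]

def risk(email):
    """Sum the weights of the attributes this email exhibits."""
    return sum(WEIGHTS.get(a, 0.0) for a in email["attrs"])

# Everyone is still screened lightly; only the top scorers get deep review.
ranked = sorted(emails, key=risk, reverse=True)
deep_review = [e["id"] for e in ranked[:CAPACITY]]
print(deep_review)  # b (5.0) and d (2.0) outrank the others
```

Note that nothing here is exclusive: items outside the top scores are deprioritized, not ignored—which is exactly the "more (but not exclusively)" distinction above.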

Know the difference

So it’s pretty simple:

  1. Profiling is part of good security—period. We profile (pattern match) because it’s an effective way to adjust scrutiny when you have limited resources. This is so true that not doing it is often outright negligent.
  2. Poor (false) profiling substitutes personal bias and emotion for actual data and, ironically, often ends up reducing security by ignoring potential threats that don’t match the profiler’s pet notions of what malicious looks like.

Profiling is part of security. Do the good kind. Don’t do the bad kind.
