I’ve been writing about Information Security for around 20 years now, and I’ve written a lot about the definitions of various terms. A number of people have asked me to assemble those definitions into a single location, and that’s what I’ve done below.
These terms are active works in progress, so if you can improve them please let me know.
Each term links to further discussion of its definition.
- Threat
- A negative event that can lead to an undesired outcome, such as damage to, or loss of, an asset. Threats can use—or become more dangerous because of—a vulnerability in a system. MORE
- Risk
- A risk, in plain language, is a chance of something bad happening combined with how bad it would be if it did happen. MORE
- Threat Actors
- The person, group, or organization that initiates a given (Threat) scenario. This is generally reserved for human-driven scenarios, such as hacking attempts. It doesn’t usually make sense to talk about threat actors when the event is a flood or an earthquake, for example. MORE
- Vulnerability
- Vulnerabilities are weaknesses in a system. They make threats possible and/or more significant. MORE
- Programmer
- A Programmer is someone who can solve problems by manipulating computer code. They can have a wide range of skill levels—from just being “ok” with basic scripting to being an absolute sorcerer with any language. MORE
- Hacker
- A Hacker is someone who makes things. In this context, it’s someone who makes things by programming computers. This is the original and purest definition of the term, i.e., that you have an idea and you “hack” something together to make it work. It also applies to people who modify things to significantly change their functionality, but less so. MORE
- Developer
- A Developer is a formally trained programmer. They don’t just solve problems or create things, but do so in accordance with a set of design and implementation principles. These include things like performance, maintainability, scale, robustness, and (ideally) security. MORE
- Event
- An event is an observed change to the normal behavior of a system, environment, process, workflow, or person. Examples: router ACLs were updated, firewall policy was pushed. MORE
- Alert
- An alert is a notification that a particular event (or series of events) has occurred, which is sent to responsible parties for the purpose of spawning action. Examples: the events above sent to on-call personnel. MORE
- Incident
- An incident is an event that negatively affects the confidentiality, integrity, and/or availability (CIA) of an organization’s systems or data in a way that impacts the business. Examples: attacker posts company credentials online, attacker steals customer credit card database, worm spreads through network. MORE
- Breach
- A Breach is an Incident that results in the confirmed disclosure—not just potential exposure—of data to an unauthorized party. MORE
- Penetration Test
- A Penetration Test is a time-boxed technical assessment designed to achieve a specific goal, e.g., to steal customer data, to gain domain administrator access, or to modify sensitive salary information. MORE
- Red Team Engagement
- A Red Team Engagement is a long-term or continuous campaign-based assessment that emulates the target’s real-world adversaries in order to improve the quality of the organization’s information security defenses—which, if one exists, means its Blue Team. MORE
- Vulnerability Assessment
- A vulnerability assessment is a technical assessment designed to yield as many vulnerabilities as possible in an environment, along with severity and remediation priority information. MORE
- Audit
- An audit can be technical and/or documentation-based, and focuses on how an existing configuration compares to a desired standard. This is an important point. It doesn’t prove or validate security; it validates conformance with a given perspective on what security means. These two things should not be confused. MORE
- White/Grey/Black-Box Testing
- The white/grey/black assessment parlance is used to indicate how much internal information a tester will get to know or use during a given technical assessment. The shades map to levels of internal transparency, so a white-box assessment is one where the tester has full access to all internal information available, such as network diagrams, source code, etc. A grey-box assessment is the next level of opacity down from white, meaning that the tester has some information but not all; the amount varies. A black-box assessment—as you’re hopefully guessing—is an assessment where the tester has zero internal knowledge about the environment, i.e., it’s performed from the attacker’s perspective. MORE
- Risk Assessment
- Risk Assessments, like threat models, are extremely broad in both how they’re understood and how they’re carried out. At the highest level, a risk assessment should involve determining the current level of acceptable risk, measuring the current risk level, and then determining what can be done to bring these two in line where there are mismatches. Risk Assessments commonly involve rating risks in two dimensions—probability and impact—and both quantitative and qualitative models are used. In many ways, risk assessments and threat modeling are similar exercises, as the goal of each is to determine a course of action that will bring risk to an acceptable level. MORE
- Threat Assessment
- A threat assessment is a type of security review that’s somewhat different than the others mentioned. In general it pertains more to physical attacks than technology, but the lines are blurring. The primary focus of a threat assessment is to determine whether a threat (think bomb threat or violence threat) that was made, or that was detected some other way, is credible. The driver for the assessment is to determine how many resources—if any—should be spent on addressing the issue in question. MORE
- Threat Modeling
- Threat Modeling is a type of security assessment that is not well understood by most organizations, and part of the problem is that it means many different things to many different people. At the most basic level, threat modeling is the process of capturing, documenting, and (often) visualizing how threat agents, vulnerabilities, attacks, countermeasures, and impacts to the business are related for a given environment. As the name suggests, the focus often starts with the threat agent and a given attack scenario, but the subsequent workflow then captures what vulnerabilities may be taken advantage of, what exploits may be used, what countermeasures may exist to stop or diminish such an attack, and what business impact may result. MORE
- Bug Bounty
- A Bug Bounty is a type of technical security assessment that leverages crowdsourcing to find vulnerabilities in a system. The central concept is simple: security testers, regardless of quality, have their own set of strengths, weaknesses, experiences, biases, and preferences, and these combine to yield different findings for the same system when tested by different people. In other words, you can give 100 experienced security testers the exact same testing methodology and they’re likely to find very different vulnerabilities. The bug bounty concept is to embrace this difference instead of fighting it, by harnessing multiple testers on a single assessment. MORE
- Red Team
- Red Teams are internal or external entities dedicated to testing the effectiveness of a security program by emulating the tools and techniques of likely attackers in the most realistic way possible. The practice is similar, but not identical to, penetration testing, and involves the pursuit of one or more objectives—usually executed as a campaign. MORE
- Blue Team
- Blue Teams refer to the internal security team that defends against both real attackers and Red Teams. Blue Teams should be distinguished from standard security teams in most organizations, as most security operations teams do not have a mentality of constant vigilance against attack, which is the mission and perspective of a true Blue Team. MORE
- Purple Team
- Purple Teams exist to ensure and maximize the effectiveness of the Red and Blue teams. They do this by integrating the defensive tactics and controls from the Blue Team with the threats and vulnerabilities found by the Red Team into a single narrative that maximizes both. Ideally Purple shouldn’t be a team at all, but rather a permanent dynamic between Red and Blue. MORE
- Green Team
- An offensively-trained and defensively-focused security team dedicated to working with development and infrastructure groups to address issues discovered using offensive security techniques, systemically and at scale across an organization. MORE
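To make the probability-times-impact idea behind the Risk and Risk Assessment entries concrete, here’s a minimal sketch of a qualitative risk-scoring scheme. The scale names, numeric values, and level cutoffs are illustrative assumptions on my part, not a standard.

```python
# Sketch of qualitative "probability x impact" risk scoring.
# The rating scales and thresholds below are illustrative assumptions.

PROBABILITY = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"minor": 1, "moderate": 2, "severe": 3}

def risk_score(probability: str, impact: str) -> int:
    """Score a risk as its probability rating times its impact rating."""
    return PROBABILITY[probability] * IMPACT[impact]

def risk_level(score: int) -> str:
    """Bucket a numeric score into a coarse level for prioritization."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Example: an attack judged "likely" with "moderate" impact
score = risk_score("likely", "moderate")  # 3 * 2 = 6
print(score, risk_level(score))           # 6 high
```

Real risk assessments use richer scales (and often quantitative models), but the core exercise is the same: rate each risk on both dimensions, combine the ratings, and prioritize remediation by the result.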