There is some confusion about the definitions of Red, Blue, and Purple teams within Information Security. Here are my definitions and concepts associated with them.
- Red Teams are external entities brought in to test the effectiveness of a security program. This is accomplished by emulating the behaviors and techniques of likely attackers as realistically as possible. The practice is similar to, but not identical to, Penetration Testing, and involves the pursuit of one or more objectives.
- Blue Teams refer to the internal security team that defends against both real attackers and Red Teams. Blue Teams should be distinguished from standard security teams in most organizations, as most security operations teams do not have a mentality of constant vigilance against attack, which is the mission and perspective of a true Blue Team.
- Purple Teams are ideally superfluous groups that exist to ensure and maximize the effectiveness of the Red and Blue teams. They do this by integrating the defensive tactics and controls from the Blue Team with the threats and vulnerabilities found by the Red Team into a single narrative that ensures each team's efforts are used to maximum effect. When done properly, 1 + 1 equals 3, but this should happen naturally as a benefit of having Red and Blue teams.
The purpose of a Red Team is to find ways to improve the Blue Team, so Purple Teams should not be needed in organizations where the Red Team is functioning properly. I equate it to hiring a separate person in a restaurant whose whole job is to pick up food from the kitchen and bring it to the tables.
When management is asked why they hired this extra person instead of having the waiters do it themselves, the answer is, “The waiters said it wasn’t their job.” This is the same as a Red Team hacking and not sharing its knowledge with the defense.
If you have this problem, the solution is to fix the Red Team, not to create a separate team to do their job.
Concepts and philosophy
Red and Blue teams ideally work in perfect harmony with each other, as two opposing sides of the same coin.
Like Yin and Yang or Attack and Defense, Red and Blue teams could not be more opposite in their tactics and behaviors, but these differences are precisely what make them part of a healthy and effective whole.
Red Teams attack, and Blue Teams defend, but the primary goal is shared between them: improve the security posture of the organization.
Purple Teams are arguably an artificial addition to this pairing. They exist to ensure that observations and lessons from both teams make it to the other so that continuous improvement can occur. Without this crucial bridge, each team discovers key insights but doesn’t share them with the other.
For example, the Red Team might learn ways they could have been stopped but not share this knowledge with the Blue Team. Or the Blue Team may be aware of gaps in their controls but not share them with the Red Team.
Some of the common problems with Red and Blue team cooperation include:
- The Red Team thinks itself too elite to share information with the Blue Team
- The Red Team is pulled inside the organization and becomes neutered, restricted, and demoralized, ultimately resulting in a catastrophic reduction in their effectiveness
- The Red Team and Blue Team are not designed to interact with each other on a continuous basis, as a matter of course, so lessons learned on each side are effectively lost
- Information Security management does not see the Red and Blue teams as part of the same effort, and there are no shared management or metrics between them
Organizations that suffer from one or more of these ailments are most likely to need a Purple Team to solve them.
A key point in understanding Purple Teams is that the Purple Team should be thought of as a function, or a concept, more than as a separate entity. This can come in the form of an actual, named team that performs the function, or it can be part of the Red and Blue teams’ management organization, which ensures that the feedback loop between them remains continuous and healthy.
Having the Purple Team function occur as part of security management may be ideal so that it does not appear as if the Purple Team is a peer of the other two, or that the Purple Team is the only way the Red and Blue teams will communicate with each other. Either perception can perpetuate the negative adversarial aspects (which include reluctance to share information) between the Red and Blue teams.
- Red Teams emulate attackers in order to find flaws in the defenses of the organizations they’re working for.
- Blue Teams defend against attackers and work to constantly improve their organization’s security posture.
- A properly functioning Red / Blue Team implementation features continuous knowledge sharing between the Red and Blue teams in order to enable continuous improvement of both.
- Purple Teams are often created to facilitate this continuous integration between the two groups.
- The Purple Team can be conceptualized as a Purple Team function, and can exist as a separate team or as part of the security management organization.
- In an ideal, mature organization, the Red Team’s entire purpose is to improve the Blue Team, so the interaction provided by the Purple Team should be superfluous.
- All these terms can apply to any kind of security operation, but these specific definitions are tuned towards information security.
- The ideal organizational placement of a Purple Team is a subject of debate. The most important thing is simply that it occurs somewhere.
- A Tiger Team is similar to, but not quite the same as, a Red Team. A 1964 paper defined the term as “a team of undomesticated and uninhibited technical specialists, selected for their experience, energy, and imagination, and assigned to track down relentlessly every possible source of failure in a spacecraft subsystem.” The term is now often used as a synonym for Red Team, but the general definition is an elite group of people assembled to solve a particular technical challenge.
- It is important that Red Teams maintain a certain separation from the organizations they are testing, as this is what gives them the proper scope and perspective to continue emulating attackers. Organizations that bring Red Teams inside, as part of their security team, tend to (with few exceptions) slowly erode the authority, scope, and general freedom of the Red Team to operate like an actual attacker. Over time (often just a number of months) Red Teams that were previously elite and effective become constrained, stale, and ultimately impotent.
- In addition to being a bridge organization for less mature programs, Purple Teams can also help organizations acclimate their management to the concept of attacker emulation, which can be a frightening concept for many organizations.
- Another factor that dilutes the effectiveness of internal Red Teams is that elite Red Team members seldom transition well to the cultures of companies with the means to hire them. In other words, companies that can afford a true Red Team tend to have cultures that are difficult or impossible for elite Red Team members to tolerate. This often leads to high attrition among Red Team members who make the transition to internal roles.
- It is technically possible for an internal Red Team to be effective; it’s just extremely unlikely that they can remain protected and supported at the highest levels over long periods of time. This tends to lead to erosion, frustration, and attrition.
- One trap that internal Red Teams regularly fall into is being reduced in power and scope to the point of being ineffective, at which point management brings in consultants who have full support and who come back with a bunch of great findings. Management then looks at the internal team and says, “Wow! They’re amazing! Why can’t you do that?” That’s usually a LinkedIn-generating event.
- Other analogies to Red Teams that don’t collaborate: Professional footballers who kick but don’t pass, professional applauders who only use their right hand, professional auditors who don’t write reports, professional teachers who don’t interact with students. You get the idea.
- Thanks to Rob Fuller, Dave Kennedy, and Jason Haddix for reading drafts.