
Transforming Security Operations: The Power of Adversary Emulation

Dylan Williams, Appian
April 9, 2024

The Security Operations team at Appian is responsible for detecting, responding to, and shutting down potential cybersecurity threats against our company. As an innovative, cutting-edge security team, we put a great deal of effort into coming up with novel ideas for how a potential threat may manifest. Threat ideas can come from industry and domain experience, interacting with internal stakeholders, or reading security blogs.

Most organizations turn threat ideas into detection rules in their intrusion detection/prevention systems so their teams can be alerted and act to protect the organization, trusting that the detection rules will work as intended. At Appian, we go beyond the standard detection engineering process. Every detection we create is also tested by emulating what the detection is trying to catch. The objective is to demonstrate, by actually emulating the behavior, how susceptible we are to a specific attack. This is done by mimicking the tactics, techniques, and procedures followed by real-world threat actors. By incorporating these tests regularly into the detection engineering process, from idea generation to production alert, we can start to move away from traditional security frameworks and toward threat-informed defense.

This blog discusses threat-informed defense and adversary emulation, explains how Appian uses this approach, and offers some advice for applying it in your own security program.

Common misconceptions.

Here are some of the most common security assumptions organizations make that lead to a false sense of security: 

  1. Assuming that technologies work as vendors claim.
  2. Assuming detection tools are configured and deployed correctly. 
  3. Assuming that changes to environments are properly communicated and implemented.

At Appian, we don’t make these assumptions. We do not blindly or implicitly trust our processes and technologies to determine whether we can prevent or detect a specific threat or attack. 

For example, suppose someone asks whether your organization monitors for employees logging into applications at odd times of the day. Instead of giving a canned industry answer like “yes, we have a product or tool that prevents that” or “our team wrote an alert for that last year,” we go ahead and actually test it via emulation. In other words, we have someone log in outside of normal business hours and see whether our detection systems catch that specific abnormal behavior. Incorporating this purple team–like concept has gained a lot of traction in the industry, and there are many terms surrounding it now, such as security validation, breach and attack simulation (BAS), and adversary emulation.
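To make this concrete, here is a minimal Python sketch of what such a test could look like. It is an illustration only, not Appian's actual tooling: the application URL, SIEM search endpoint, query parameters, and test account name are all hypothetical placeholders.

```python
"""Minimal sketch of an off-hours login test (all endpoints are placeholders)."""
from datetime import datetime, timezone

import requests

APP_LOGIN_URL = "https://internal-app.example.com/login"       # placeholder
SIEM_SEARCH_URL = "https://siem.example.com/api/alerts/search"  # placeholder
TEST_USER = "adversary-emulation-svc"                           # dedicated test account


def emulate_off_hours_login(password: str) -> None:
    """Perform a real login; schedule this outside normal business hours."""
    print(f"Detonating test login at {datetime.now(timezone.utc).isoformat()} UTC")
    resp = requests.post(
        APP_LOGIN_URL,
        data={"username": TEST_USER, "password": password},
        timeout=10,
    )
    resp.raise_for_status()


def alert_fired(rule_name: str, api_token: str) -> bool:
    """Ask the SIEM whether the expected rule fired for the test account."""
    resp = requests.get(
        SIEM_SEARCH_URL,
        params={"rule": rule_name, "user": TEST_USER, "window": "1h"},
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return len(resp.json().get("results", [])) > 0
```

Running the emulation and then checking whether the alert fired is what turns “we think we would catch this” into evidence that we did (or did not).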

However, there is no industry-wide consensus on where and when to leverage this approach in your security operations program. Instead of adding another layer of processes and technologies, the Appian Security Operations team takes a potential threat and tests it first in our environment. Then, we ask the following questions and record the answers (see the sketch after this list):

  • Was it prevented? 
  • Was it detected? 
  • If not, do we have visibility? 
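One lightweight way to track those answers per scenario is sketched below. The structure and field names are hypothetical, not a description of Appian's internal tooling.

```python
"""Sketch of recording the three answers for every emulated scenario."""
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ScenarioResult:
    scenario: str       # e.g. "Off-hours login to internal app"
    technique: str      # MITRE ATT&CK technique ID, e.g. "T1078" (Valid Accounts)
    prevented: bool     # Was it prevented?
    detected: bool      # Was it detected?
    visibility: bool    # If not, do we at least have the telemetry to build a rule?
    last_tested: date = field(default_factory=date.today)

    @property
    def gap(self) -> bool:
        """A gap exists when the behavior was neither prevented nor detected."""
        return not (self.prevented or self.detected)


# Example: the behavior slipped past prevention and detection,
# but we have log visibility to build a detection from.
result = ScenarioResult(
    scenario="Off-hours login to internal app",
    technique="T1078",
    prevented=False,
    detected=False,
    visibility=True,
)
print(result.gap)  # True -> a known gap to feed back into detection engineering
```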

We call this approach a proof-based security monitoring program. Here's a visualization of the entire cycle:

Proof-based security in action.

Okay, now that we know what proof-based security is, what does it look like in practice? 

Remember, the objective is to demonstrate that we can accurately describe our susceptibility to a specific kind of attack by emulating the behavior. There are a multitude of great tools out there to automate this, including free and open source ones. For example, you can use Atomic Red Team to emulate endpoint techniques on Windows or Mac, or Stratus Red Team for cloud environments. By iterating through this process, we have built out an extensive attack scenario library. Every attack scenario has a corresponding test for our team to execute and a detection rule to develop. All team members are encouraged to focus more on behaviors and less on atomic indicators like IP addresses, domain names, file names, or other information that can be easily changed by an adversary.
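As an illustration, here is a minimal sketch of how a scenario library entry might drive a cloud emulation with the Stratus Red Team CLI. It assumes the `stratus` binary is installed and that the runner has credentials for a dedicated test account; the technique ID shown is illustrative, so confirm the exact names with `stratus list`.

```python
"""Sketch of detonating one Stratus Red Team technique from a scenario library."""
import subprocess


def detonate(technique_id: str) -> bool:
    """Run one technique in a test cloud account, then revert what it created."""
    try:
        subprocess.run(["stratus", "detonate", technique_id], check=True)
        return True
    except subprocess.CalledProcessError:
        return False
    finally:
        # Always tear down the infrastructure the technique deployed.
        subprocess.run(["stratus", "cleanup", technique_id], check=False)


if __name__ == "__main__":
    # Illustrative technique ID -- confirm the exact name with `stratus list`.
    detonated = detonate("aws.defense-evasion.cloudtrail-stop")
    print("Detonated:", detonated)
```

After each emulated technique, the same questions apply: was it prevented, was it detected, and if not, do we have visibility?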

One important thing to remember when doing this is that you are not limited to what’s available on the web. The most valuable content will come from things unique to your organization. There is no automated way to test for every threat that could affect your organization, so remember to test things manually as well. We encourage other teams to move beyond traditional, endpoint-based tests and expand into cloud-based and software as a service (SaaS)–based attacks. Experiment with behaviors like logging into an Okta account from an abnormal user agent (a sketch follows the example answers below), taking anomalous actions in an AWS account, or trying creative ways to push potentially suspicious code into your build pipeline. Try using adversary emulation to drive detection creation and obtain concrete evidence of whether a particular threat was prevented, detected, or neither. By doing this, and more importantly by doing it continuously, you can create a clear picture of your organization’s susceptibility to an attack that is based on data, not on people, tools, or products. You can give concrete, objective answers to questions like, “can we catch XYZ?”

  • “Yes, we can because we tested this, and we tested it yesterday.” 

  • “No, we can’t, but we are aware of this gap and are working on it.”
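For instance, the Okta test mentioned above can start as small as the sketch below, which attempts a primary authentication with a deliberately unusual User-Agent header. The Okta domain, account, and user-agent string are placeholders; only ever run this with a dedicated test account.

```python
"""Sketch of an "Okta login from an abnormal user agent" emulation."""
import requests

OKTA_DOMAIN = "https://yourcompany.okta.com"           # placeholder
WEIRD_USER_AGENT = "curl/7.68.0 adversary-emulation"   # deliberately abnormal


def emulate_abnormal_login(username: str, password: str) -> int:
    """Attempt Okta primary authentication with an unusual User-Agent."""
    resp = requests.post(
        f"{OKTA_DOMAIN}/api/v1/authn",
        json={"username": username, "password": password},
        headers={
            "User-Agent": WEIRD_USER_AGENT,
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
        timeout=10,
    )
    # Whether the login succeeds or fails, the attempt lands in the Okta
    # System Log, which is where your detection should pick it up.
    return resp.status_code
```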

So remember, don’t assume. Prove it through testing. The image below shows some examples of tests you can try:

Learn more about how Appian safeguards security with Appian Protect.