How do security professionals study threat actors, & why do we do it?

I receive a lot of great questions about my work in Digital Forensics and Incident Response (DFIR), and while I’ve written a bit on the topic of threat actors and attribution, I’ve been repeatedly asked some interesting questions about this topic in particular. In the interest of not answering the same question 101 more times, today I will attempt to tackle some of the most popular, difficult, and ambiguous ones.

Before we deep dive into a conversation about modern computer hacking, there are a couple of things that are really important to understand:

  1. Digital attacks are often launched through hacked or infected computers which belong to innocent people or companies. Those computers might not even be in the same country as the bad guy. The bottom line is: gone is the notion of “just trace the IP address back”.
  2. There are often computers in many countries used in the same digital attack, whether as part of a big DDoS attack, or something more complex like exfiltrating stolen data through a chain of computers. The bottom line is: it’s not uncommon to see computers in 10 countries used in the same criminal operation online.

With that in mind, let’s have a short chat about the strange world of attack attribution, the secret sauce that goes into making it happen, and why it sometimes appears like we as computer security professionals really, really suck at catching bad guys.

Why would anybody care who is hacking them?

I’ve noted before that for the average commercial company, it’s usually not terribly relevant to discuss the specific national origin of attacks at an executive level. That energy is better spent understanding why a breach occurred and preventing it from happening again. Companies are rarely going to cease business operations in a country because of attacks sourced there.

That being said, it can potentially be helpful for the right operational security staff to have an understanding of the actors who are attempting to breach their defenses. Once a team knows that actor CRAZY HAMSTER is attacking their company, they can read reports on attacks by CRAZY HAMSTER against other companies. Reports often document tools, tactics, and procedures used by the attacker, and the security team can use this information to ensure they have appropriate mitigation and detection in place. It might also give the security team an idea of what ends CRAZY HAMSTER is trying to accomplish through their campaign of digital villainy.

Outside of the commercial space, things become a lot more complex. I’ve noted previously that espionage and sabotage are as old as human civilization – and they are still just as relevant to politics and warfare, today. It is very important to not think of “cyber war” as a domain entirely independent of the other realms of warfare, political maneuvering, and espionage. No matter how tempting it is to worry about catastrophic digital attacks on critical infrastructure or the internet, precedent and rhetoric support hacking mostly being used as one component of more complex global conflicts. So, hacking really has to be analyzed as one part of a whole, but it certainly shouldn’t be ignored.

How can anybody know with any certainty who is hacking whom?

I’ve talked about the complexities of digital attribution in the past, and I always take the time to note that attribution is a complex, time-consuming process. That does not make it impossible, with the right resources and substantial work hours, for qualified experts to make a determination beyond a reasonable doubt.

I already told you that IP addresses alone aren’t very useful for figuring out the source of attacks anymore. That’s okay – it doesn’t mean that hackers and their tools don’t leave lots of digital evidence. In essence, the entire field of Digital Forensics and Incident Response (DFIR) centers on responding to and analyzing compromised networks, systems, and their logs, then providing detailed reports on what occurred. DFIR tends to focus on hard evidence: recovering deleted tools, files, and malware, retrieving command history and even tiny changes made to the computer, identifying communication with other systems, and then building a very comprehensive timeline of an attacker’s activity.
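The timeline-building step can be sketched very simply: take timestamped artifacts from different forensic sources and merge them into one chronological view. This is a minimal illustration – the artifact records below are invented, and real tools handle thousands of sources and formats:

```python
from datetime import datetime

# Hypothetical artifact records, each from a different forensic source.
# In a real investigation these would come from filesystem metadata,
# shell history, malware analysis, and network logs.
artifacts = [
    {"time": "2023-04-02T11:07:00", "source": "filesystem", "event": "dropper.exe created in temp directory"},
    {"time": "2023-04-02T11:05:12", "source": "network",    "event": "outbound connection to staging host"},
    {"time": "2023-04-02T11:09:45", "source": "shell",      "event": "command history shows documents being archived"},
]

def build_timeline(records):
    """Sort artifacts chronologically so an analyst can reconstruct attacker activity."""
    return sorted(records, key=lambda r: datetime.fromisoformat(r["time"]))

for entry in build_timeline(artifacts):
    print(entry["time"], entry["source"], entry["event"])
```

The real value comes from correlation: once events from independent sources line up on one timeline, the sequence of an intrusion becomes much harder to hide.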

In plain English – an unencrypted computer hard drive is an archaeological treasure trove of information, containing everything from what has been typed into a search bar, which sites were viewed in private browsing mode, and what devices have been plugged into it, to which process started exactly five months and 16 days ago. Computer memory contains even more juicy details about use and abuse of computers. It’s very hard to hide every artifact of an attack on a computer that is not encrypted or hasn’t been powered down. Reliable evidence can persist for months, or even years.
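As a concrete example of one such artifact: Chrome-style browsers keep visited URLs in an SQLite database, in a table named `urls`. A rough sketch of how an analyst might pull the most recent visits from a copy of that file (the path and data would come from a forensic image; never query the live file):

```python
import sqlite3

def recent_visits(history_db_path, limit=10):
    """Query a Chrome-style History SQLite database for the most recently
    visited URLs. Chrome records last_visit_time as microseconds since
    1601-01-01 (the WebKit epoch), so larger values are more recent."""
    con = sqlite3.connect(history_db_path)
    try:
        rows = con.execute(
            "SELECT url, title, last_visit_time FROM urls "
            "ORDER BY last_visit_time DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        con.close()
    return rows
```

In practice, forensic suites like plaso or Autopsy automate this kind of extraction across hundreds of artifact types at once; the point here is just how plainly this evidence sits on disk.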

Where DFIR tends to answer the “how”, “when”, and the “what” of a hacking incident, cyber threat intelligence strives to grow our understanding into “who” and “why”. Much like traditional Intelligence, good threat intel professionals take a more holistic approach in looking at attackers: taking hard evidence found by DFIR analysis and combining it with softer evidence like typical attacker behavior, linguistics, favored tools, target selection, previous attacker activities and indicators, and global events.

A balance of good quality evidence that DFIR discovers with the comprehensive view that good quality intelligence provides is the secret sauce that can allow agencies and researchers to point towards the source of an attack.

But what if reports on an attack or threat actor conflict?

It happens. Two good investigators can look at similar evidence and come to slightly different conclusions. Our recourse is to carefully read all available reports, then look hard at the quality of the expertise, reasoning, and access to evidence within each. Again, good detective work doesn’t lead to absolute certainty. The goal is to reach the most reasonable and supported conclusion possible. Some assembly is required.

But what about those “false flag” operations?

There’s certainly lots of precedent for false flag operations in the (very, very) long and storied history of espionage and counterespionage. Digital attacks are no exception. Bad guys can try to pretend to be other bad guys, and people can claim credit for other peoples’ activities for a multitude of reasons.

This is why good intelligence, as opposed to digital forensics alone, is crucial to any attribution. It is rare to see a hands-on computer compromise occur without any attacker mistakes (or evidence of those mistakes), and those small errors in syntax, language, or exploitation can be quite telling to a keen and attentive analyst.

Who are these commercial threat intelligence companies?

It takes a lot of resources for a company to build a large-scale threat intelligence program. So, a number of successful companies have popped up which hire intelligence specialists, linguists, security researchers, and political scientists to provide detailed threat intelligence to organizations, for profit. A small word of caution: keep in mind that while it is certainly in these companies’ best interests to be technically correct when they release reports and findings, they are still businesses and their objectives are to sell a product. They will probably not give everything away for free.

So if you know who’s hacking an organization, why aren’t they getting arrested?

Unfortunately, even if we know who is hacking whom, there’s often not a lot we can do about the perpetrators. Hacking the attacker back in retaliation is extremely murky legal water, especially since we already noted that hackers like to use innocent people’s computers to launch their attacks. One misstep and we could end up sued or prosecuted ourselves. Government action could have even more severe repercussions.

We can certainly go to the appropriate law enforcement agencies and report theft, intrusion, or damage – indeed, I highly recommend it. However, LEOs don’t have it easy, either. Not only are their computer crimes groups often overtaxed by the surge in ransomware and phishing, but as we noted earlier, computer crimes often cross many international borders. Taking down a big criminal hacking operation usually takes coordination between private firms and several countries’ law enforcement agencies. That means each one has to approve and fund the takedown. It happens fairly regularly, but it’s a big effort.

Then, there’s the issue of state-sponsored attacks, which are a matter of politics above most law enforcement organizations’ ability to pursue. If one country conducts espionage or sabotage against a public or private institution in another, politicians must weigh retaliation for what was done versus the potential of souring international relations (or worse).

So, sometimes we really do know who is attacking, but there’s no feasible way to pursue them ourselves, right at the moment.

I want to see evidence of CRAZY HAMSTER attacking companies first hand. Why can’t I?

First of all, make sure you really can’t. Not every threat intelligence company uses the same nomenclature for the same actors – a sore spot for many security professionals. When in doubt, please check first, and ask if needed.

Many commercial intelligence companies and research firms produce reports for the public that contain an executive summary that is easily readable at any technical skill level. A good report should also contain substantial technical detail including indicators of compromise – specific evidence found in the analysis of the attack which can potentially be used to identify the same actor elsewhere.
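Indicators of compromise are usually simple, concrete values – domains, IP addresses, file hashes – that defenders can sweep their own logs for. A minimal sketch of that sweep (the indicators and log line below are made up for illustration):

```python
# Hypothetical indicators of compromise, as published in a threat report.
iocs = {
    "domains": {"evil-update.example", "cdn-sync.example"},
    "ips": {"203.0.113.42"},
}

def scan_log_line(line, indicators):
    """Return every (category, value) indicator that appears in a log line."""
    hits = []
    for category, values in indicators.items():
        for value in values:
            if value in line:
                hits.append((category, value))
    return hits

log = "2023-04-02 11:05:12 DNS query evil-update.example from 10.0.0.7"
print(scan_log_line(log, iocs))  # the domain indicator matches this line
```

Real-world matching is done at scale by SIEM platforms and standardized formats like STIX, but the underlying idea is exactly this: published evidence from one victim helps every other defender look for the same actor.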

Unfortunately, in any breach or attack, there will very likely be a lot of evidence unavailable to the general public. The first problem with releasing it all is that raw digital forensic evidence almost always contains proprietary and confidential data. That’s just the nature of raw network traffic and system drives. Even attacker activity alone usually contains passwords, account lists, and sensitive network configuration and vulnerability data. Some of this information may be made available to information sharing partners and colleagues through NDA/TLP, while some is kept strictly confidential.

The second problem is that any data provided to the general public is by its nature also being made available to the attackers. If they are still operating, showing all cards could really hurt efforts to bring them to justice.

Why aren’t you security professionals and researchers doing anything about these threat actors?

We are. While we might not be able to get every perpetrator arrested today, there are concerted efforts to share data on attackers and malware between commercial companies, law enforcement, and government agencies. The ISAC program is a great example of this. Many threat researchers and non-profit organizations release and share threat intelligence data and malware research for free.

Information sharing not only helps in law enforcement efforts, but it mutually improves detection of attackers and preventative security with their behaviors in mind. If we can’t stop the attackers right now, we can work together to hinder them at every turn.
