How do security professionals study threat actors, & why do we do it?

I receive a lot of great questions about my work in Digital Forensics and Incident Response (DFIR), and while I’ve written a bit on the topic of threat actors and attribution, I’ve been repeatedly asked some interesting questions about this in particular. In the interest of not answering the same questions 101 more times, today I will attempt to tackle some of the most popular, difficult, and ambiguous ones.

Before we dive into a conversation about modern computer hacking, there are a couple of things that are really important to understand:

  1. Digital attacks are often launched through hacked or infected computers which belong to innocent people or companies. Those computers might not even be in the same country as the bad guy. The bottom line is: gone is the notion of “just trace the IP address back”.
  2. A single digital attack often involves computers in many countries, whether as part of a big DDoS attack or something more complex, like funneling stolen data through a chain of compromised hosts. The bottom line is: it’s not uncommon to see computers in 10 countries used in the same criminal operation.

With that in mind, let’s have a short chat about the strange world of attack attribution, the secret sauce that goes into making it happen, and why it sometimes appears like we as computer security professionals really, really suck at catching bad guys.

Why would anybody care who is hacking them?

I’ve noted before that for the average commercial company, it’s usually not terribly relevant to discuss the specific national origin of attacks at an executive level. That energy is better spent understanding why a breach occurred and preventing it from happening again. Companies are rarely going to cease business operations in a country because of attacks sourced there.

That being said, it can potentially be helpful for the right operational security staff to have an understanding of the actors who are attempting to breach their defenses. Once a team knows that actor CRAZY HAMSTER is attacking their company, they can read reports on attacks by CRAZY HAMSTER against other companies. Reports often document tools, tactics, and procedures used by the attacker, and the security team can use this information to ensure they have appropriate mitigation and detection in place. It might also give the security team an idea of what ends CRAZY HAMSTER is trying to accomplish through their campaign of digital villainy.

Outside of the commercial space, things become a lot more complex. I’ve noted previously that espionage and sabotage are as old as human civilization – and they are still just as relevant to politics and warfare today. It is very important not to think of “cyber war” as a domain entirely independent of the other realms of warfare, political maneuvering, and espionage. No matter how tempting it is to worry about catastrophic digital attacks on critical infrastructure or the internet, precedent and rhetoric suggest that hacking is mostly used as one component of more complex global conflicts. So, hacking really has to be analyzed as one part of a whole, but it certainly shouldn’t be ignored.

How can anybody know with any certainty who is hacking whom?

I’ve talked about the complexities of digital attribution in the past, and I always take the time to note that attribution is a complex, time-consuming process. That does not make it impossible for qualified experts, with the right resources and substantial work hours, to make some determination beyond a reasonable doubt.

I already told you that IP addresses alone aren’t very useful for figuring out the source of attacks anymore. That’s okay – it doesn’t mean that hackers and their tools don’t leave lots of digital evidence. In essence, the entire field of DFIR centers around responding to and analyzing compromised networks, systems, and their logs, then providing detailed reports on what occurred. DFIR tends to focus on hard evidence: recovering deleted tools, files, and malware, retrieving command history and even tiny changes made to the computer, identifying communication with other systems, and then building a very comprehensive timeline of an attacker’s activity.

In plain English – an unencrypted computer hard drive is an archaeological treasure trove of information, containing everything from what has been typed into a search bar, which sites were viewed in private browsing mode, and what’s been plugged into the machine, to which process started exactly five months and 16 days ago. Computer memory contains even more juicy details about the use and abuse of computers. It’s very hard to hide every artifact of an attack on a computer that is not encrypted or hasn’t been powered down. Reliable evidence can persist for months, or even years.
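
If you’re curious what “building a timeline” looks like in its simplest form, here is a minimal sketch (a toy example under assumed inputs, not any real forensic tool) that walks a directory tree and sorts file modification times into a chronology. Real DFIR work pulls timestamps from disk images, registry hives, logs, and memory, but the collect, sort, and review idea is the same.

```python
#!/usr/bin/env python3
"""Toy filesystem timeline: list files under a path, sorted by modification time.
Real DFIR tools parse disk images, registry hives, logs, and more; this only
illustrates the general 'collect timestamps, sort, review' workflow."""
import os
import sys
from datetime import datetime, timezone

def build_timeline(root):
    events = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # unreadable or vanished file; skip it
            events.append((st.st_mtime, path))
    # Oldest first, so the output reads like a chronology of file activity
    return sorted(events)

if __name__ == "__main__":
    for mtime, path in build_timeline(sys.argv[1] if len(sys.argv) > 1 else "."):
        ts = datetime.fromtimestamp(mtime, tz=timezone.utc).isoformat()
        print(f"{ts}  {path}")
```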

Where DFIR tends to answer the “how”, “when”, and “what” of a hacking incident, cyber threat intelligence strives to grow our understanding into the “who” and “why”. Much like traditional intelligence analysts, good threat intel professionals take a more holistic approach to looking at attackers: taking hard evidence found through DFIR analysis and combining it with softer evidence like typical attacker behavior, linguistics, favored tools, target selection, previous attacker activities and indicators, and global events.

Balancing the good quality evidence that DFIR discovers with the comprehensive view that good quality intelligence provides is the secret sauce that can allow agencies and researchers to point towards the source of an attack.

But what if reports on an attack or threat actor conflict?

It happens. Two good investigators can look at similar evidence and come to slightly different conclusions. Our recourse is to carefully read all available reports, then look hard at the quality of the expertise, reasoning, and access to evidence within each. Again, good detective work doesn’t lead to absolute certainty. The goal is to reach the most reasonable and supported conclusion possible. Some assembly is required.

But what about those “false flag” operations?

There’s certainly lots of precedent for false flag operations in the (very, very) long and storied history of espionage and counterespionage. Digital attacks are no exception. Bad guys can try to pretend to be other bad guys, and people can claim credit for other people’s activities for a multitude of reasons.

This is why good intelligence, as opposed to digital forensics alone, is crucial to any attribution. It is rare to see a human-driven computer compromise occur without any attacker mistakes (or evidence of those mistakes), and those small errors in syntax, language, or exploitation can be quite telling to a keen and attentive analyst.

Who are these commercial threat intelligence companies?

It takes a lot of resources for a company to build a large-scale threat intelligence program. So, a number of successful companies have popped up which hire intelligence specialists, linguists, security researchers, and political scientists to provide detailed threat intelligence to organizations, for profit. A small word of caution: keep in mind that while it is certainly in these companies’ best interests to be technically correct when they release reports and findings, they are still businesses and their objective is to sell a product. They will probably not give everything away for free.

So if you know who’s hacking an organization, why aren’t they getting arrested?

Unfortunately, even if we know who is hacking whom, there’s often not a lot we can do about the perpetrators. Hacking the attacker back in retaliation is extremely murky legal water, especially since we already noted that hackers like to use innocent people’s computers to launch their attacks. One misstep and we could end up sued or prosecuted ourselves. Government action could have even more severe repercussions.

We can certainly go to the appropriate law enforcement agencies and report theft, intrusion, or damage – indeed, I highly recommend it. However, LEOs don’t have it easy, either. Not only are their computer crimes groups often overtaxed by the surge in ransomware and phishing, but as we noted earlier, computer crimes often cross many international borders. Taking down a big criminal hacking operation usually takes coordination between private firms and several countries’ law enforcement agencies. That means each one has to approve and fund the takedown. It happens fairly regularly, but it’s a big effort.

Then, there’s the issue of state-sponsored attacks, which are a matter of politics above most law enforcement organizations’ ability to pursue. If one country conducts espionage or sabotage against a public or private institution in another, politicians must weigh retaliation for what was done versus the potential of souring international relations (or worse).

So, sometimes we really do know who is attacking, but there’s no feasible way to pursue them ourselves right at the moment.

I want to see evidence of CRAZY HAMSTER attacking companies first hand. Why can’t I?

First of all, make sure you really can’t. Not every threat intelligence company uses the same nomenclature for the same actors – a sore spot for many security professionals. When in doubt, please check first, and ask if needed.

Many commercial intelligence companies and research firms produce reports for the public that contain an executive summary that is easily readable at any technical skill level. A good report should also contain substantial technical detail including indicators of compromise – specific evidence found in the analysis of the attack which can potentially be used to identify the same actor elsewhere.
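
To show how published indicators get used in practice, here is a minimal, hypothetical sketch that sweeps a plain-text log file for known-bad IP addresses and file hashes lifted from a report. The indicator values are placeholders invented for the example; real IOC matching usually happens in a SIEM or dedicated tooling, but the idea is the same.

```python
#!/usr/bin/env python3
"""Toy IOC sweep: flag log lines containing indicators published in a report.
All indicator values below are placeholders, not real threat data."""
import re
import sys

# Hypothetical indicators copied out of a (fictional) vendor report
BAD_IPS = {"203.0.113.42", "198.51.100.7"}          # RFC 5737 documentation ranges
BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}   # placeholder MD5 value

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5_RE = re.compile(r"\b[a-fA-F0-9]{32}\b")

def sweep(log_path):
    """Return (line number, indicator type, line) for every IOC hit."""
    hits = []
    with open(log_path, "r", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if BAD_IPS.intersection(IP_RE.findall(line)):
                hits.append((lineno, "ip", line.strip()))
            if BAD_HASHES.intersection(h.lower() for h in MD5_RE.findall(line)):
                hits.append((lineno, "hash", line.strip()))
    return hits

if __name__ == "__main__":
    for lineno, kind, text in sweep(sys.argv[1]):
        print(f"line {lineno} [{kind}] {text}")
```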

Unfortunately, in any breach or attack, there will very likely be a lot of evidence unavailable to the general public. The first problem with releasing it all is that raw digital forensic evidence almost always contains proprietary and confidential data. That’s just the nature of raw network traffic and system drives. Even attacker activity alone usually contains passwords, account lists, and sensitive network configuration and vulnerability data. Some of this information may be made available to information sharing partners and colleagues under NDA or TLP (Traffic Light Protocol) restrictions, while some is kept strictly confidential.

The second problem is that any data provided to the general public is by its nature also being made available to the attackers. If they are still operating, showing all cards could really hurt efforts to bring them to justice.

Why aren’t you security professionals and researchers doing anything about these threat actors?

We are. While we might not be able to get every perpetrator arrested today, there are concerted efforts to share data on attackers and malware between commercial companies, law enforcement, and government agencies. The ISAC (Information Sharing and Analysis Center) program is a great example of this. Many threat researchers and non-profit organizations release and share threat intelligence data and malware research for free.

Information sharing not only helps law enforcement efforts, but it also improves everyone’s detection of attackers and preventative security designed with their behaviors in mind. If we can’t stop the attackers right now, we can work together to hinder them at every turn.

Nation State Threat Attribution: a FAQ

Threat actor attribution has been big news, and big business, for the past couple of years. This blog post consists of seven very different infosec professionals’ responses to frequently asked questions about attribution, with thoughts, experiences, and opinions (focusing on nation state attribution circa 2016). The contributors to this FAQ introduce themselves as follows (and express personal opinions in this article that don’t necessarily reflect those of their employers or this site):

  • DA_667: A loud, ranty guy on social media. Farms potatoes. Has nothing to do with Cyber.
  • Ryan Duff: Former cyber tactician for the gov turned infosec profiteer.
  • Munin: Just a simple country blacksmith who happens to do infosec.
  • Lesley Carhart: Irritatingly optimistic digital forensics and incident response nerd.
  • Krypt3ia: Cyber Nihilist
  • Viss: Dark Wizard, Internet bad-guy, feeder and waterer of elderly shells.
  • Coleman Kane: Cyber Intelligence nerd, malware analyst, threat hunter.

Many thanks to everybody above for helping create this, and for sharing their thoughts on a super-contentious and complex subject. Additional thanks to everybody on social media who contributed questions.

This article’s primary target audience is IT staff and management at traditional corporations and non-governmental organizations who do not deal with traditional military intelligence on a regular basis. Chances are, if you’re the exception to our rules, you already know it (and you’re probably not reading this FAQ).

Without further ado, let’s start with some popular questions. We hope you find some answers (and maybe more questions) in our responses.

Are state-sponsored network intrusions a real thing?

DA_667: Absolutely. “Cyber” has been considered a domain of warfare. State-sponsored intrusions have skyrocketed. Nation-states see the value of data that can be obtained through what is termed “cyberwarfare”. Not only is access to sensitive data a primary motivator, but so is access to critical systems. Like, say, computers that control the power grid. Denying access to critical infrastructure can come in handy when used in concert with traditional, kinetic warfare.

Coleman: I definitely feel there’s ample evidence reported publicly by the community to corroborate this claim. It is likely important to distinguish how the “sponsorship” happens, and that there may (or may not) be a divide between those whose goal is the network intrusion and those carrying out the attack.

Krypt3ia: Moot question. Next.

Lesley: There’s pretty conclusive public domain evidence that they are. For instance, we’ve seen countries’ new weapons designs appear in other nations’ arsenals, critical infrastructure attacked, communications disrupted, and flagship commercial and scientific products duplicated within implausibly short timeframes.

Munin: Certainly, but they’re not exactly common, and there’s a continuum of attackers from “fully state sponsored” (that is, “official” “cyberwarfare” units) to “tolerated” (independent groups whose actions are not materially supported but whose activities are condoned).

Viss: Yes, but governments outsource that. We do. Look at NSA/Booz.

Ryan: Of course they are real. I spent a decent portion of my career participating in the planning of them.


Is this sort of thing new?

Coleman: Blame is most frequently pointed at China, though a lot of evidence (again, in the public) indicates that it is broader. That said, one of the earliest publicly-documented “nation-state” attacks is “Titan Rain”, which was reported as going back as far as 2003, and widely regarded as “state sponsored”. With that background, it would give an upper bound of ~13 years, which is pretty old in my opinion.

Ryan: It’s definitely not new. These types of activities have been around for as long as they have been possible. Any well-resourced nation will identify when an intelligence or military opportunity presents itself at the very earliest stages of that opportunity. This is definitely true when it comes to network intrusions. Ever since there has been intel to retrieve on a network, you can bet there have been nation states trying to get it.

Munin: Not at all. This is merely an extension of the espionage activities that countries have been flinging at each other since time immemorial.

DA_667: To make a long story short, absolutely not. For instance, it is believed that a recent exploit used by a group of nation-state actors is well over 10 years old. That’s one exploit, supposedly tied to one actor. Just to give you an idea.

Lesley: Nation state and industrial sabotage, political maneuvering, espionage, and counterespionage have existed as long as industry and nation states have. It’s nothing new. In some ways, it’s just gotten easier in the internet era. I don’t really differentiate.

Krypt3ia: No. Go read The Cuckoo’s Egg.

Viss: Hard to say – first big one we knew about was Stuxnet, right? – Specifically computer security stuff, not in-person assets doing Jason Bourne stuff.


How are state-sponsored network intrusions different from everyday malware and attacks?

Lesley: Sometimes they may be more sophisticated, and other times aspects are less sophisticated. It really depends on actor goals and resources. A common theme we’ve seen is long term persistence – hiding in high value targets’ networks quietly for months or years until an occasion to sabotage them or exfiltrate data. This is pretty different from your average crimeware, the goal of which is to make as much money as possible as quickly as possible. Perhaps surprisingly, advanced actors might favor native systems administration tools over highly sophisticated malware in order to make their long term persistence even harder to detect. Conversely, they might employ very specialized malware to target a specialized system. There’s often some indication that their goals are not the same as the typical crimeware author.

Viss: The major difference is time, attention to detail and access to commercial business resources. Take Stuxnet – they went to Microsoft to validate their usb hardware so that it would run autorun files – something that Microsoft killed years and years ago. Normal malware can’t do that. Red teams don’t do that. Only someone who can go to MS and say “Do this. Or you’ll make us upset” can do that. That’s the difference.

Munin: It’s going to differ depending on the specifics of the situation, and on the goals being served by the attack. It’s kind of hard to characterize any individual situation as definitively state-sponsored because of the breadth of potential actions that could be taken.

DA_667: In most cases, the difference between a state-sponsored intruder and your run-of-the-mill intruder is going to boil down to their motivations and their tradecraft. Tradecraft being defined as, and I really hate to use this word, their sophistication. How long have the bad guys operated in your network? How much data did they take? Did they use unique tools that have never before been seen, or are they using commodity malware and RATs (remote access trojans) to access targets? Did they actively try to hide or suppress evidence that they were on your computers and in your network? Nation-state actors are usually in one’s network for an extended period of time — studies show the average amount of time between initial access and first detection is somewhere over 180 days (and this is considered an improvement over the past few years). This is the primary difference between nation-states and standard actors; nation-states are in it for the long haul (unlike commodity malware attackers). They have the skill (unlike skids and/or hacktivists). They want sustained access so that they can keep tabs on you, your business, and your trade secrets to further whatever goals they have.

Krypt3ia: All of the above, with one caveat. TTPs are being spread through sales, disinformation campaigns, and the use of proxies. Soon it will be a singularity.

Coleman: Not going to restate a lot of really good info provided above. However, I think some future-proofing to our mindset is in order. There are a lot of historic “nation-state attributed” attacks (you can easily browse FireEye’s blog for examples) with very specific tools/TTPs. More recently, some tools have emerged as being allegedly used in both nation-state and commodity attacks (Poison Ivy, PlugX, DarkComet, Gh0st RAT). It kind of boils down to the “malware supply chain”. Back in 2003, the “supply chain” for malware capable of both stealth and remote-access capability was comparatively small compared to today, so it was likely more common to have divergence between tooling funded for “state sponsored” attacks and what was available to the more common “underground market”. I think we have, and will continue to see, a convergence in tactics that muddies the waters and also makes our work as intel analysts more difficult, as more commodity tools improve.


Is attributing network attacks to a nation state actor really possible?

Munin: Maybe, under just the right circumstances – and with information outside of that gained within the actual attacked systems. Confirming nation-state responsibility is likely to require more conventional espionage information channels [ e.g. a mole in the ‘cyber’ unit who can confirm that such a thing happened ] for attribution to be firmer than a “best guess” though.

DA_667: Yes and No. Hold on, let me explain. There are certain signatures, TTPs, common targets, common tradecraft between victims that can be put together to grant you clues as to what nation-state might be interested in given targets (foreign governments, economic verticals, etc.). There may be some interesting clues in artifacts (tools, scripts, executables, things the nation-state uses) such as compile times and/or language support that could be used if you have enough samples to make educated guesses as well, but that is all that data will amount to: hypothetical attribution. There are clues that say X is the likely suspect, but that is about as far as you can go.

Lesley: Kind of, by the right people with access to the right evidence. It ends up being a matter of painstaking analysis leading to a supported conclusion that is deemed plausible beyond a reasonable doubt, just like most criminal investigations.

Viss: Sure! Why not? You could worm your way back from the c2 and find the people talking to it and shell them! NSA won’t do that though, because they don’t care or haven’t been tasked to – and the samples they find, if they even find samples will be kept behind closed doors at Mandiant or wherever, never to see the light of day – and we as the public will always get “trust us, we’re law enforcement”. So while, sure, It’s totally possible, A) they won’t let us do it because, well, “we’re not cool enough”, and B) they can break the law and we can’t. It will always boil down to “just trust us”, which isn’t good enough, and never helps any public discourse at all. The only purpose it serves talking to the press about it is so that they can convince the House/Senate/other decision makers “we need to act!” or whatever. It’s so that they can go invade countries, or start shit overseas, or tap cables, or spy on Americans. The only purpose talking about it in the media serves is so that they get their way.

Coleman: It is, but I feel only by the folks with the right level of visibility (which, honestly, involves diplomacy and basically the resources of a nation-state to research). I feel the interstate diplomacy/cooperation part is significantly absent from a lot of the nation-state attribution reporting today. At the end of the day, I can’t tell you with 100% certainty what the overall purpose of an intrusion or data theft is. I can only tell you what actions were taken, where they went, what was taken, and possible hypotheses about what relevance it may have.

Ryan: Yes, but I believe it takes the resources of a nation-state to do it properly. There needs to be a level of access to the foreign actors that is beyond just knowing the tools they use and the tradecraft they employ. These can all be stolen and forged. There needs to be insight into adversaries’ mission planning, the creation of their infrastructure, their communications with each other, etc. in order to conduct proper attribution. Only a nation-state with an intelligence capability can realistically perform this kind of collection. That’s why it’s extremely difficult, in my opinion, for a non-government entity to really do proper state-sponsored attribution.

Krypt3ia: There will always be doubt because disinformation can be baked into the malware, the operations, and the clues left deliberately. As we move forward, the actors will be using these techniques more and it will really rely on other “sources and methods” (i.e. espionage with HUMINT) to say more definitively who dunnit.


Why do security professionals say attribution is hard?

Lesley: Commercial security teams and researchers often lack enough access to data to make any reliable determination. This doesn’t just include lack of the old-fashioned spy vs. spy intelligence, but also access to the compromised systems that attackers often use to launch their intrusions and control their malware. It can take heavy cooperation from law enforcement and foreign governments far outside one network to really delve into a well-planned global hacking operation. There’s also the matter of time – while a law enforcement or government agency has the freedom to track a group across multiple intrusions for years, the business goal of most private organizations is normally to mitigate the damage and move on to the next fire.

Munin: Being truly anonymous online is extremely difficult. Framing someone else? That’s comparatively easy. Especially in situations where there exists knowledge that certain infrastructure was used to commit certain acts, it’s entirely possible to co-opt that infrastructure for your own uses – and thus gain at least a veneer of being the same threat actor. If you pay attention to details (compiling your programs during the working hours of those you’re seeking to frame; using their country’s language for localizing your build systems; connecting via systems and networks in that country, etc.) then you’re likely to fool all but the most dedicated and well-resourced investigators.

Coleman: In my opinion, many of us in the security field suffer from a “fog of war” effect. We only have complete visibility into our own interior, and beyond that we have very limited visibility of the perimeter of the infrastructure used for attacks. Beyond that, unless we are very lucky, we won’t be granted any visibility into other victims’ networks. This is a unique space that both governments and private sector infosec companies get to reside within. However, in my opinion, the visibility will still end just beyond their customer base or scope of authority. At the end of the day, it becomes an inference game, trying to sum together multiple data points of evidence to eliminate alternative hypotheses in order to converge on the “likeliest reality”. It takes a lot of time and effort to get it right, and very frequently, there are external drivers to get it “fast” before getting it “correct”. When the “fast” attribution ends up in public, it becomes “ground truth” for many, whether or not it actually is. This complicates the job of an analyst trying to do it correctly. So I guess, both “yes” and “no” apply. Attribution is “easy” if your audience needs to point a finger quickly; attribution is “hard” if your audience expects you to blame the right perp ;).

DA_667: Okay, so in answering this, let me give you an exercise to think about. If I were a nation-state and I wanted to attack target “Z” to serve some purpose or goal, directly attacking target “Z” has implications and risks associated with it, right? So instead, why not look for a vulnerable system in another country “Y”, compromise that system, then make all of my attacks on “Z” look like they are coming from “Y”? This is the problem with trying to do attribution. There were previous campaigns where there was evidence that nation-states were doing exactly this: proxying off of known, compromised systems to purposely hinder attribution efforts (https://krypt3ia.wordpress.com/2014/12/20/fauxtribution/). Now, imagine having to get access to a system that was used to attack you, that is in a country that doesn’t speak your native language or perhaps doesn’t have good diplomatic ties with your country. Let’s not even talk about the possibility that they may have used more than one system to hide their tracks, or the fact that there may be no forensic data on these systems that assists in the investigation. This is why attribution is a nightmare.

Krypt3ia: See my answers above.

Viss: Because professionals never get to see the data. And if they *DO* get to see the data, they get to deal with what DA explains above. It’s a giant shitshow and you can’t catch people breaking the law if you have to follow the law. That’s just the physics of things.

Ryan: DA gave a great example about why you can’t trust where the attack “comes from” to perform attribution. I’d like to give an example regarding why you can’t trust what an attack “looks like” either. It is not uncommon for nation-state actors to not only break into other nation-state actors’ networks and take their tools for analysis, but to also then take those tools and repurpose them for their own use. If you walk the dog on that, you’re now in a situation where the actor is using pre-compromised infrastructure in use by another actor, while also using tools from another actor to perform their mission. If Russia is using French tools and deploying them from Chinese compromised hop-points, how do you actually know it’s Russia? As I mentioned above, I believe you need the resources of a nation-state to truly get the information needed to make the proper attribution to Russia (ie: an intelligence capability). This makes attribution extremely hard to perform for anyone in the commercial sector.


How do organizations attribute attacks to nation states the wrong way?

Munin: Wishful thinking, trying to make an attack seem more severe than perhaps it really was. Nobody can blame you for falling to the wiles of a nation-state! But if the real entrypoint was boring old phishing, well, that’s a horse of a different color – and likely a set of lawsuits for negligence.

Lesley: From a forensics perspective, the number one problem I see is trying to fit evidence to a conclusion, which is totally contrary to the business of investigating crimes. You don’t base your investigation or conclusions off of your initial gut feeling. There is certainly a precedent for false flag operations in espionage, and it’s pretty easy for a good attacker to emulate a less advanced one. To elaborate, quite a bit of “advanced” malware is available to anybody on the black market, and adversaries can use the same publicly posted indicators of compromise that defenders do to emulate another actor, as DA and Ryan previously discussed (for various political and defensive reasons). That misdirection can be really misleading, especially if it plays to our biases and suits our conclusions.

DA_667: Trying to fit data into a mold; you’ve already made up your mind that advanced nation-state actors from Elbonia want your secret potato fertilizer formula, and you aren’t willing to see it any differently. What I’m saying is that some organizations have a bias that leads them to believe that a nation-state actor hacked them.

In other cases, you could say “It was a nation-state actor that attacked me”, and if you have an incident response firm back up that story, it could be enough to get an insurance company to pay out a “cyber insurance” policy for a massive data breach because, after all, “no reasonable defense could have been expected to stop such sophisticated actors and tools.”

Krypt3ia: Firstly they listen to vendors. Secondly they are seeking a bad guy to blame when they should be focused on how they got in, how they did what they did, and what they took. Profile the UNSUB and forget about attribution in the cyber game of Clue.

Viss: They do it for political reasons. If you accuse Pakistan of lobbing malware into the US it gives politicians the talking points they need to get the budget and funding to send the military there – or to send drones there – or spies – or write their own malware. Since they never reveal the samples/malware, and since they aren’t on the hook to, everyone seems to be happy with the “trust us, we’re law enforcement” replies, so they can accuse whoever they want, regardless of the reality and face absolutely no scrutiny. Attribution at the government level is a universal adapter for motive. Spin the wheel of fish, pick a reason, get funding/motive/etc.

Coleman: All of the above are great answers. In my opinion, among the biggest mistakes I’ve seen not addressed above is asking the wrong questions. I’ve heard many stories about “attributions” driven by a desire by customers/leaders to know “Who did this?”, which 90% of the time is non-actionable information, but it satisfies the desires of folks glued to TV drama timelines like CSI and NCIS. Almost all the time, “who did this?” doesn’t need to be answered, but rather “what tools, tactics, infrastructure, etc. should I be looking for next?”. Nine times out of ten, the adversary resides beyond the reach of prosecution, and your “end game” is documentation of the attack, remediation of the intrusion, and closing the vulnerabilities used to execute the attack.


So, what does it really take to fairly attribute an attack to a nation state?

Munin: Extremely thorough analysis coupled with corroborating reports from third parties – you will never get the whole story from the evidence your logs capture; you are only getting the story that your attacker wants you to see. Only the most naive of attackers is likely to let you have a true story – unless they’re sending a specific message.

Coleman: In my opinion, there can be many levels to “attribution” of an attack. Taking the common “defense/industrial espionage” use case that’s widely associated with “nation state attacks”, there could be three semi-independent levels that may or may not intersect: 1) tool authors/designers, 2) network attackers/exploiters, 3) tasking/customers. A common fallacy that I’ve observed is to assume that a particular adversary (#2 from above) exclusively cares about gathering the specific data that they’ve been tasked with at one point. IMO, recognize that any data you have is “in play” for any of #2 from my list above. If you finally get an attacker out, and keep them out, someone else is bound to be thrown your way with different TTPs to get the same data. Additionally, a good rule as time goes on is that all malware becomes “shared tooling”, so make sure not to confuse “tool sharing” with any particular adversary. Or, maybe you’re tracking a “Poison Ivy Group”. Lots of hard work, and also a recognition that no matter how certain you are, new information can (and will!) lead to reconsideration.

Lesley: It’s not as simple as looking at IP addresses! Attribution is all about doing thorough analysis of internal and external clues, then deciding that they lead to a conclusion beyond a reasonable doubt. Clues can include things like human language and malicious code, timestamps on files that show activity in certain time zones, targets, tools, and even “softer” indicators like the patience, error rate, and operational timeframes of the attackers. Of course, law enforcement and the most well-resourced security firms can employ more traditional detective, Intel, and counterespionage resources. In the private sector, we can only leverage shared, open source, or commercially purchased intelligence, and the quality of this varies.
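
As a concrete example of one of those “softer” clues, here is a minimal sketch that reads the compile timestamp embedded in a Windows executable’s PE header. A pile of samples all compiled during one time zone’s business hours is a classic hint, and also a classic thing for an attacker to forge, which is exactly why no single clue settles attribution on its own.

```python
#!/usr/bin/env python3
"""Toy PE compile-timestamp reader. The TimeDateStamp field in the COFF header
records when the binary was (ostensibly) built: a classic soft attribution
clue, and one an attacker can trivially forge."""
import struct
import sys
from datetime import datetime, timezone

def pe_compile_time(path):
    with open(path, "rb") as fh:
        data = fh.read(4096)  # headers live at the start of the file
    if data[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ header)")
    # Offset 0x3C of the DOS header points at the PE signature
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")
    # The COFF header follows the signature; TimeDateStamp is 4 bytes at offset +8
    timestamp = struct.unpack_from("<I", data, pe_offset + 8)[0]
    return datetime.fromtimestamp(timestamp, tz=timezone.utc)

if __name__ == "__main__":
    print(pe_compile_time(sys.argv[1]))
```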

Viss: A slip up on their part – like the NSA derping it up and leaving their malware on a staging server, or using the same payload in two different places at the same time which gets ID’ed later at something like Stuxnet where attribution happens for one reason or another out of band and it’s REALLY EASY to put two and two together. If you’re a government hacking another government you want deniability. If you’re the NSA you use Booz and claim they did it. If you’re China you proxy through Korea or Russia. If you’re Russia you ride in on a fucking bear because you literally give no fucks.

DA_667: A lot of hard work, thorough analysis of tradecraft (across multiple targets), access to vast sets of data to attempt to perform some sort of correlation, and, in most cases, access to intelligence community resources that most organizations cannot reasonably expect to have access to.

Krypt3ia: Access to IC data and assets for other sources and methods. Then you adjudicate that information the best you can. Then you forget that and move on.

Ryan: The resources of a nation-state are almost a prerequisite to “fairly” attribute something to a nation state. You need intelligence resources that are able to build a full picture of the activity. Just technical indicators of the intrusion are not enough.


Is there a way to reliably tell a private advanced actor aiding a state (sanctioned or unsanctioned) from a military or government threat actor?

Krypt3ia: Let me put it this way. How do you know that your actor isn’t a freelancer working for a nation state? How do you know that a nation state isn’t using proxy hacking groups or individuals?

Ryan: No. Not unless there is some outside information informing your analysis, like intelligence information on the private actor or a leak of their tools (for example, the HackingTeam hack). I personally believe there isn’t much of a distinction to be made between these types of actors if they are still state-sponsored in their activities, because they are working off of their sponsor’s requirements. Depending on the level of the sponsor’s involvement, the tools could even conform to standards laid out by the nation-state itself. I think efforts to try to draw these distinctions are rather futile.

DA_667: No. In fact, given what you now know about how nation-state actors can easily make it seem like attacks are coming from a different IP address and country entirely, what makes you think that they can’t alter their tool footprint and just use open-source penetration testing tools, or recently open-sourced bots with re-purposed code?

Munin: Not a chance.

Viss: Not unless you have samples or track record data of some kind. A well funded corporate adversary who knows what they’re doing should likely be indistinguishable from a government. Especially because the governments will usually hire exactly these companies to do that work for them, since they tend not to have the talent in house.

Coleman: I don’t think there is a “reliable” way to do it. Rather, for many adversaries, with constant research and regular data point collection, it is possible to reliably track specific adversary groups. Whether or not they could be distinguished as “military”, “private”, or “paramilitary” is up for debate. I think that requires very good visibility into the cyber aspects of the country / military in question.

Lesley: That would be nearly impossible without boots-on-ground, traditional intelligence resources that you and I will never see (or illegal hacking of our own).


Why don’t all security experts publicly corroborate the attribution provided by investigating firms and agencies?

DA_667: In most cases, disagreements on attribution boil down to:

  1. Lack of information
  2. Inconclusive evidence
  3. Said investigating firms and/or agencies are not laying all the cards out on the table; security experts do not have access to the same dataset the investigators have (either due to proprietary vendor data, or classified intelligence)

Munin: Lack of proof. It’s very hard to prove with any reliability who’s done what online; it’s even harder to make it stick. Plausible deniability is very much a thing.

Lesley: Usually, because I don’t have enough information. We might lean towards agreeing or disagreeing with the conclusions of the investigators, but at the same time be reluctant to stake our professional and ethical reputation on somebody else’s investigation of evidence we can’t see ourselves. There have also been many instances where the media jumped to conclusions which were not yet appropriate or substantiated. The important thing to remember is that attribution has nothing to do with what we want or who we dislike. It’s the study of facts, and the consequences for being wrong can be pretty dire.

Krypt3ia: Because they are smarter than the average Wizard?

Coleman: In my opinion, many commercial investigative firms are driven to threat attribution by numerous non-evidential factors. There’s kind of a “race to the top (bottom?)” these days for “threat intelligence”, and a significant influence on private companies to be first-to-report, as well as show themselves to have unique visibility to deliver a “breaking” story. In a word: marketing. Each agency wants to look like they have more and better intelligence on the most advanced threats than their competition. Additionally, there’s an audience component to it as well. Many organizations suffering a breach would prefer to adopt the story line that their expensive defenses were breached by “the most advanced well-funded nation-state adversary” (a.k.a. “Deep Panda”), versus “some 13 year-olds hanging out in an IRC chatroom named #operation_dildos”. Because of this, I generally consider a lot of public reporting conclusions to be worth taking with a grain of salt, and I’m more interested in the handful that actually report technical data that I can act upon.

Viss: Some want to get in bed with (potential)employers so they cozy up to that version of the story. Some don’t want to rock the boat so they go along with the boss. Some have literally no idea what they’re talking about, they’re fresh out of college and they can’t keep their mouths shut. Some are being paid by someone to say something. It’s a giant grab bag.


Should my company attribute network attacks to a nation state?

DA_667: No. Often times, your organization will NOT gain anything of value attempting to attribute an attack to a given nation-state. Identify the Indicators of Compromise as best you can, and distribute them to peers in your industry or professional organizations who may have more resources for determining whether an attack was a part of a campaign spanning multiple targets. Focus on recovery and hardening your systems so you are no longer considered a soft target.

Viss: I don’t understand why this would be even remotely interesting to average businesses. This is only interesting to the “spymaster bobs” of the world, and the people who routinely fellate the intelligence community for favors/intel/jobs/etc. In most cases it doesn’t matter, and in the cases it DOES matter, it’s not really a public discussion – or a public discussion won’t help things.

Lesley: For your average commercial organization, there’s rarely any reason (or sufficient data) to attribute an attack to a nation state. Identifying the type of actor, IOCs, and TTPs is normally adequate to maintain threat intelligence or respond to an incident. Be very cautious (legally / ethically / career-wise) if your executives ask you to attribute to a foreign government.

Munin: I would advise against it. You’ll get a lot of attention, and most of it’s going to be bad. Attribution to nation-state actors is very much part of the espionage and diplomacy game and you do not want to engage in that if you do not absolutely have to.

Ryan: No. The odds of your organization even being equipped to make such an attribution are almost nil. It’s not worth expending the resources to even attempt such an attribution. The gain, even if you are successful, would still be minimal.

Coleman: I generally would say “no”. You should ask yourselves, if you actually had that information in a factual form, what are you going to do? Stop doing business in that country? I think it is generally more beneficial to focus on threat grouping/clustering (if I see activity from IP address A.B.C.D, what historically have I observed in relation to that that I should look out for?) over trying to tie back to “nation-states” or even to answer the question “nation state or not?”. If you’re only prioritizing things you believe are “nation-state”, you’re probably losing the game considerably in other threat areas. I have observed very few examples where nation-state attribution makes any significant difference, as far as response and mitigation are concerned.

Krypt3ia: Too many try and fail.


Can’t we just block [nation state]?

Krypt3ia: HA! I have seen rule sets on firewalls where they try to block whole countries. It’s silly. If I am your adversary and I have the money and time, I will get in.

DA_667: No, and for a couple reasons. By the time a research body or a government agency has released indicators against a certain set of tools or a supposed nation-state actor to the general public, those indicators are long past stale. The actors have moved on to using new hosts to hide their tracks, using new tools and custom malware to achieve their goals, and so on, and so forth. Not only that, but the solution isn’t as easy as block [supposed malicious country’s IP address space]. A lot of companies that are targeted by nation-states are international organizations with customers and users that live in countries all over the world. Therefore, you can’t take a ham-fisted approach such as blocking all Elbonian IP addresses. In some cases, if you’re a smaller business who has no users or customers from a given country (e.g. a local bank somewhere in Nevada would NOT be expecting customers or users to connect from Elbonia.), you might be able to get away with blocking certain countries and that will make it harder for the lowest tier of attackers to attack your systems directly… but again, given what you now know about how easy it is for a nation-state actor to compromise another system, in another country, you should realize that blocking IP addresses assigned to a given country is not going to be terribly helpful if the nation-state is persistent and has high motivation to attack you.

Munin: Not really. IP blocks will kill the low bar attacks, but those aren’t really what you’re asking after if you’re in this FAQ, are you? Any attacker worth their salt can find some third party to proxy through. Not to mention IP ranges get traded or sold now and then – today’s Chinese block could be someone else entirely tomorrow.

Lesley: Not only might this be pretty bad for business, it’s pretty easy for any actor to evade using compromised hosts elsewhere as proxies. Some orgs do it, though.

Coleman: Depending upon the impact, sure, why not? It’s up to you to inform your leadership, and if your leaders are fine with blocking large blocks of the Internet that sometimes are the endpoint of an attack, then that’s acceptable. I’ve had some associates in my peer group that are able to successfully execute this strategy. Sometimes (3:30pm on a Friday, for instance) I envy them.

Ryan: If you’re not doing business outside of your local country and don’t ever care to, it couldn’t hurt. By restricting connections to your network from only your home country, you will likely add some security. However, if your network is a target, doing this won’t stop an actor from pivoting from a location that is within your whitelist to gain access to your network.

Viss: Sure! Does your company do business with China? Korea? Pakistan? Why bother accepting traffic from them? Take the top ten ‘shady countries’ and just block them at the firewall. If malware lands on your LAN, it won’t be able to phone home. If your company DOES do business with those countries, it’s another story – but if there is no legitimate reason 10 laptops in your sales department should be talking to Spain or South Africa, then it’s a pretty easy win. It won’t stop a determined attacker, but if you’re paying attention to dropped packets leaving your network you’re gonna find out REAL FAST if there’s someone on your LAN. They won’t know you’re blocking til they slam headfirst into a firewall rule and leave a bunch of logs.
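
For illustration only, here is a minimal sketch (not an endorsement of the approach, and not a real firewall) that uses Python’s standard ipaddress module to check destination IPs pulled from logs against a list of “blocked country” CIDR ranges. The ranges shown are documentation examples, not a real country allocation; actual enforcement would live in firewall or ipset rules.

```python
#!/usr/bin/env python3
"""Toy geo-block check: flag destination IPs that fall inside 'blocked country'
CIDR ranges. The ranges below are RFC 5737 documentation examples, not a real
country allocation; real blocking belongs in firewall/ipset rules."""
import ipaddress
import sys

BLOCKED_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(ip_string):
    try:
        addr = ipaddress.ip_address(ip_string)
    except ValueError:
        return False  # not an IP address at all
    return any(addr in net for net in BLOCKED_RANGES)

if __name__ == "__main__":
    # Expect one destination IP per line on stdin, e.g. cut from firewall logs
    for line in sys.stdin:
        ip = line.strip()
        if ip and is_blocked(ip):
            print(f"BLOCKED-RANGE HIT: {ip}")
```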


Hey, what’s with the Attribution Dice?

Ryan: I’m convinced that lots of threat intelligence companies have these as part of their standard report writing kit.

Lesley: They’re awesome! If you do purposefully terrible, bandwagon attribution of the trendy scapegoat of the day, infosec folks are pretty likely to notice and poke a little fun at your expense.

Krypt3ia: They are cheaper than Mandiant or Crowdstrike and likely just as accurate.

Coleman: In some situations, the “Who Hacked Us?” web application may be better than public reporting.

Munin: I want a set someday….

Viss: they’re more accurate than the government, that’s for sure.

DA_667: I have a custom set of laser-printed attribution dice that a friend had commissioned for me, where my twitter handle is listed as a possible threat actor. But in all seriousness, the attribution dice are a sort of inside joke amongst security experts who deal in threat intelligence. Trying to do attribution is a lot like casting the dice.