Consolidated Malware Sinkhole List

A common practice of researchers studying a piece of malware is to seize control of its malicious command and control domains, then redirect traffic destined for those domains to benign research servers for analysis and victim notification. I always highly recommend monitoring for traffic to these sinkholes – it is frequently indicative of infection.
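
For illustration, here is a minimal monitoring sketch in Python. The file names and formats (a CSV of sinkhole IP/owner pairs, and tab-separated DNS resolver log rows) are my own assumptions for the example, not a standard:

```python
import csv

# Hypothetical inputs; adjust paths and formats to your environment.
SINKHOLE_CSV = "sinkholes.csv"   # rows of: ip,owner
DNS_LOG = "resolver_log.tsv"     # rows of: timestamp<TAB>client_ip<TAB>domain<TAB>answer_ip

# Load the sinkhole list into a dict for O(1) lookups.
sinkholes = {}
with open(SINKHOLE_CSV, newline="") as f:
    for ip, owner in csv.reader(f):
        sinkholes[ip.strip()] = owner.strip()

# Flag any resolution whose answer lands on a known sinkhole, which is
# a strong hint that the querying client is infected.
with open(DNS_LOG) as f:
    for line in f:
        ts, client, domain, answer = line.rstrip("\n").split("\t")
        if answer in sinkholes:
            print(f"{ts} {client} resolved {domain} -> {answer} "
                  f"(sinkhole operated by {sinkholes[answer]})")
```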

I’ve found no comprehensive public list of these sinkholes. There have been some previous efforts to compile a list, for instance by reverse engineering Emerging Threats Signatures (mikesxrs – I hope this answers your questions, a little late!). Some sinkholes are documented on the vendors’ sites, while others are clearly labeled in whois data, but undocumented. Still others are only detectable through behavior and hearsay.

Below, I share my personal list of publicly-noted sinkholes only. Please understand that with few exceptions I have not received any of this information from the vendors or organizations mentioned. It is possible there is some misattribution, and addresses in use do change over time. This is merely intended as a helpful aid for threat hunting, and there are no guarantees whatsoever.

Before we proceed, credit where credit is due:

I am certainly not claiming credit for this entire list. There are many smart people out there who provided partial data and clues.

http://www.kleissner.org/ maintains fantastically useful lists of command and control servers for numerous botnets. Within those lists, a number of sinkholes are attributed to specific organizations, some of which I could independently verify, and some of which I could not.

The extremely talented Miroslav Stampar has quite a few sinkholes identified within his maltrail malicious traffic detection tool.

Many, many Robtex, DomainTools, and VirusTotal queries and a lot of Google search hacking went into compiling and cross-checking this list. Michael B. Jacobs has written a terrific paper which covers some of the methodologies I used to detect and confirm undocumented sinkhole servers through DNS and behavioral analysis.
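
As a taste of those methodologies, here is a rough sketch using only the Python standard library: forward-resolve a suspect domain, compare the answers against known sinkhole IPs, and peek at the PTR record for telltale labels. The IPs, PTR substrings, and test domain below are placeholders, and these checks are hints, not proof:

```python
import socket

KNOWN_SINKHOLE_IPS = {"192.0.2.10", "198.51.100.25"}  # documentation-range examples
PTR_HINTS = ("sinkhole", "sinkdns", "honeypot")       # labels sometimes seen in PTR records

def check_domain(domain):
    try:
        _, _, addrs = socket.gethostbyname_ex(domain)
    except socket.gaierror:
        print(f"{domain}: does not resolve")
        return
    for ip in addrs:
        verdict = []
        if ip in KNOWN_SINKHOLE_IPS:
            verdict.append("matches a known sinkhole IP")
        try:
            ptr = socket.gethostbyaddr(ip)[0].lower()
            if any(hint in ptr for hint in PTR_HINTS):
                verdict.append(f"PTR record looks like a sinkhole ({ptr})")
        except socket.herror:
            pass  # no PTR record; common and not suspicious by itself
        print(f"{domain} -> {ip}: {'; '.join(verdict) or 'no sinkhole indicators'}")

check_domain("suspect-domain.example")  # placeholder domain
```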

There are more detailed databases of sinkholes, but they tend to be access-restricted and contain data I will not repost for confidentiality reasons. My list is fully OSINT-based and can be reproduced with time and effort.

Here’s the current list:

If you have any corrections to offer either as one of these organizations or an independent researcher, please contact me and I will give credit in this blog accordingly.


Phishing Exercises, without the “Ish”

Much like open offices and outsourcing in business, information security is subject to trends. One you probably saw in your vendor spam folder over the past couple of years is phishing awareness exercises.

The premise sounds simple – phish your employees before the bad guys do, monitor how they respond, and react accordingly. In reality, people’s experiences have been more complex. There’s not much middle ground in the discussion of phishing exercises. I see either glowing articles praising their merits (most of which are selling something), or bemused cynicism about them from security professionals. In my experience, there really can be benefits to running phishing exercises in a sensible way, but many organizations don’t run them sensibly, so their programs end up pretty worthless.

When you’re setting up a phishing test program, you have the option of developing your own phishing exercise infrastructure and metrics collection toolkit, combining open source solutions like King Phisher or the Social-Engineer Toolkit (SET), or purchasing one of many available commercial solutions. I won’t advocate for one brand over another in this blog – most will work (in the right configuration and conditions). A similar set of concerns exists whether you develop your own deployment and metrics solution or buy a commercial solution in a box. Let’s discuss how any and all of these tools are being used incorrectly.
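
Whatever tooling you choose, the deployment and metrics plumbing reduces to unique, per-recipient tracking links. Here is a toy sketch of that idea (the addresses and landing page URL are placeholders, and this is no substitute for a real platform):

```python
import secrets

recipients = ["alice@example.com", "bob@example.com"]  # placeholder addresses

# Issue each recipient a single-use token; the landing page records which
# tokens get clicked, giving you per-message metrics without guesswork.
tokens = {secrets.token_urlsafe(16): addr for addr in recipients}

for token, addr in tokens.items():
    link = f"https://training.example.com/lp?t={token}"  # hypothetical landing page
    print(f"send to {addr}: {link}")
```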


Before spending money, or implementing anything

Develop a clear goal for your program with your senior leadership fully involved. This goal should not be “stop employees from clicking on phishing messages”. That’s simply unattainable. Yeah, you want that number to decrease, but even top security professionals have fallen for well-crafted phishing messages. People click on things when they’re busy and distracted, and it theoretically only takes one compromised host to breach a network. A real attacker only has to get that one, inattentive click. If your senior management measures your success by phishing clicks reaching zero, you’ll ultimately find yourself dumbing down campaigns to look more successful. This won’t do anybody any favors.

A more realistic goal is improving the quantity and speed of reporting of suspicious emails. Detecting phishing with tech is hard. Most organizations spend a great deal of money on modern solutions to catch and alert on phishing messages, and even those can be circumvented. Your last line of defense against phishing and social engineering is a good relationship with end users who will promptly tell you they are being attacked. While it takes only one phish to compromise a network, it takes only one prompt report to security to shut an attack down.
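
That goal is also easy to measure. A small illustrative sketch, with made-up campaign numbers, computing a report rate and median time to report:

```python
from datetime import datetime
from statistics import median

sent_at = datetime(2016, 9, 1, 9, 0)   # campaign send time (example)
reports = [                            # timestamps of user reports (example data)
    datetime(2016, 9, 1, 9, 4),
    datetime(2016, 9, 1, 9, 11),
    datetime(2016, 9, 1, 10, 2),
]
recipients = 250                       # campaign size (example)

minutes_to_report = [(r - sent_at).total_seconds() / 60 for r in reports]
print(f"report rate: {len(reports) / recipients:.1%}")
print(f"median time to report: {median(minutes_to_report):.0f} minutes")
```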

Next, you should bring your HR and Legal teams into the conversation and discuss anonymity. There is no room for gray area here. You will either conduct phishing exercises anonymously, or you will not. If you conduct them anonymously, you must develop the program in a double-blind way, where even network security can’t practically retrieve the names of people who clicked. You’ll still see an overall view of the health of your organization, but nobody can be pressured to provide identifying data, even by angry executives.
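
One way to satisfy that double-blind requirement (a sketch of the concept, not a vetted design) is to record only keyed hashes of employee IDs and destroy the key when the campaign ends:

```python
import hashlib
import hmac
import secrets

# One random key per campaign; destroy it when the campaign ends. After
# that, nobody (including network security) can feasibly map a stored
# hash back to an employee name, but aggregate stats still work.
campaign_key = secrets.token_bytes(32)

def anonymize(employee_id):
    return hmac.new(campaign_key, employee_id.encode(), hashlib.sha256).hexdigest()

clicks = set()
clicks.add(anonymize("jdoe"))    # recorded when a click comes in
clicks.add(anonymize("jdoe"))    # the same person twice still counts once
clicks.add(anonymize("asmith"))

print(f"unique clickers: {len(clicks)}")  # organizational health, no identities
```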

If you choose to not conduct exercises anonymously, I recommend that you clearly document any repercussions for clicking, and ensure they are uniform across your organization. Otherwise, your exercises could easily become a public humiliation game or end in unequal punishment by managers, putting you in hot water with HR.


A carrot, instead of a stick

Regardless of whether you conduct your exercises anonymously, you may decide to provide extra security training to people who click on your test phishes. Frankly, a lot of security awareness training is pretty awful, “death by PowerPoint” stuff. If your users can fly through every slide and kludge their way through your multiple-choice test, chances are it’s a waste of time. Try to have some empathy for how an end user is feeling when they click on a test phish and are routed to a long, mandatory training. They’re embarrassed, frustrated, and it’s very possible they clicked because they were already frantically busy. In their minds, you aren’t helping – they feel like you tricked them. There’s now hostility in your relationship, not a willingness to help “the team” stop attackers.

If possible, in-person training is a great option (snack bribery highly encouraged). Offer a lunch and learn, or a social hour with IT security. Offer this in lieu of traditional web-based training, and have a conversation with your end users. People are statistically more inclined to help somebody they have met in person and feel some connection to. You want to try to make your phishing exercises a positive thing that people want to improve, not a negative thing that people subconsciously associate with punishment or embarrassment.

If training has to be computer-based, try to make it quick, effective, and interactive. This is a space where you may wish to spend some money to get something high quality and enjoyable.

Be clear about what you’re trying to accomplish with phishing exercises and why they are important to your organization. Ensure you give credit to people who report phishing and help your team improve more than you punish people who make genuine mistakes. It’s better to provide measures to protect victims and help them learn, rather than encourage them to circumvent your security team.


Who should you phish?

Establish the scope of your exercises. Must certain employees be exempt for legal reasons? Are multiple languages spoken in your organization which will require separate exercises? Will your exercises be conducted across global business hours and all shifts? Have you done some OSINT to generate a list of exposed users and email addresses that require special attention?

I highly advise against phishing everybody at once. The only things that travel faster than light in workplaces are rumors. Once one person realizes he or she has fallen for the phishing exercise, it’s nearly impossible to contain the “helpful warnings” to neighbors and friends. That instinct to warn colleagues is a good thing, but it won’t necessarily give you accurate metrics about individual performance.
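
Staggering your sends into randomized waves helps limit the rumor effect. A trivial sketch (the address list is a placeholder):

```python
import random

employees = [f"user{i}@example.com" for i in range(1, 101)]  # placeholder list
WAVES = 4

random.shuffle(employees)  # randomize so waves aren't grouped by team or floor
waves = [employees[i::WAVES] for i in range(WAVES)]

for n, wave in enumerate(waves, 1):
    print(f"wave {n}: {len(wave)} recipients, sent on its own schedule")
```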


Designing your phish

Security teams everywhere look forward to this part with glee. I must remind my blue team friends of a lesson that successful red teamers learn early in their careers: your job is not to “get” your target for the laughs. Your job is to educate your target and improve their security. You are on their team. Yes, you can phish nearly anybody with a well crafted message and insider knowledge. Conversely, you can produce excellent metrics by selecting an absurdly easy phish. Neither results in any significant security training.

Your phishing exercises are a scientific experiment, and a good experiment has as few variables as possible. The variables that do exist must be well quantified, and should include the difficulty of the phishing message, which is easier said than measured. Comparing clicks on an excellent phish with perfect grammar and a timely topic to one that applies to few employees and is written in poor English is apples to oranges. If you want to change the variable of phishing difficulty, do not change the variable of employee selection or time of day, and vice-versa.

If you’re having trouble with this, look to your phishing awareness training. Most commercial training programs list warning signs of a phish. When developing your messages, choose a set number of those warning signs to include.
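
Here is a sketch of what that quantification might look like. The catalog of warning signs and the difficulty buckets are arbitrary examples, not a standard scoring model:

```python
# Arbitrary example catalog; use the warning signs your own training teaches.
WARNING_SIGNS = {
    "generic greeting", "spelling/grammar errors", "mismatched link text and URL",
    "unexpected attachment", "urgency or threat", "lookalike sender domain",
}

def difficulty(included_signs):
    """More visible warning signs make a phish easier to spot."""
    count = len(set(included_signs) & WARNING_SIGNS)
    return {0: "very hard", 1: "hard", 2: "moderate"}.get(count, "easy")

campaign = {"urgency or threat", "lookalike sender domain"}
print(difficulty(campaign))  # "moderate": exactly two cues, comparable across campaigns
```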


Avoiding phishing-related divorces, and other unpleasantness

Writing a phishing email seems fun and easy. You copy one you’ve seen in your filters, or use a common phishing theme, and send it out with a link or attachment, right?

Or not.

Bad guys have it a lot easier than we do as defenders and pen testers. Bad guys can emulate any public company or person they want in their phishing messages, and abuse any emotion. While we want to make test phishes as realistic as possible, there are good reasons why we have to put more thought into ours.

The reaction of a human being to a phishing email depends on a lot more factors than just their corporate security training. They’re also influenced by their outside security education, their biases and experiences with the content of the message, and their emotions. Imagine a phishing test email that uses the classic “payment received” scam, ostensibly from some real online payment firm. Some people will look at the phish, see it for what it is, and report it appropriately. Others will Google the payment provider and report the phish to them instead; a black eye (or even a blacklist) for your company. In a worst case scenario, an employee could receive the message and apply a personal context, forwarding it to their spouse as ‘proof’ they’re hiding money.

You must try to keep your phishing exercise contained. Remember, you are handling live lies. Not only could forwarding of your test message alter your metrics, but it could also result in more dire legal or ethical consequences if it should leave your network perimeter. Ensure you thoroughly prevent this, and clean up after your exercise as soon as possible once you’re done.

Nation State Threat Attribution: a FAQ

Threat actor attribution has been big news, and big business for the past couple years. This blog consists of seven very different infosec professionals’ responses to frequently asked questions about attribution, with thoughts, experiences, and opinions (focusing on nation state attribution circa 2016). The contributors to this FAQ introduce themselves as follows (and express personal opinions in this article that don’t necessarily reflect those of their employers or this site):

  • DA_667: A loud, ranty guy on social media. Farms potatoes. Has nothing to do with Cyber.
  • Ryan Duff: Former cyber tactician for the gov turned infosec profiteer.
  • Munin: Just a simple country blacksmith who happens to do infosec.
  • Lesley Carhart: Irritatingly optimistic digital forensics and incident response nerd.
  • Krypt3ia: Cyber Nihilist
  • Viss: Dark Wizard, Internet bad-guy, feeder and waterer of elderly shells.
  • Coleman Kane: Cyber Intelligence nerd, malware analyst, threat hunter.

Many thanks to everybody above for helping create this, and for sharing their thoughts on a super-contentious and complex subject. Additional thanks to everybody on social media who contributed questions.

This article’s primary target audience is IT staff and management at traditional corporations and non-governmental organizations who do not deal with traditional military intelligence on a regular basis. Chances are, if you’re the exception to our rules, you already know it (and you’re probably not reading this FAQ).

Without further ado, let’s start with some popular questions. We hope you find some answers (and maybe more questions) in our responses.



Are state-sponsored network intrusions a real thing?

DA_667: Absolutely. “Cyber” has been considered a domain of warfare. State-sponsored intrusions have skyrocketed. Nation-states see the value of data that can be obtained through what is termed “Cyberwarfare”. Not only is access to sensitive data a primary motivator, but so is access to critical systems. Like, say, computers that control the power grid. Denying access to critical infrastructure can come in handy when used in concert with traditional, kinetic warfare.

Coleman: I definitely feel there’s ample evidence reported publicly by the community to corroborate this claim. It is likely important to distinguish how the “sponsorship” happens, and that there may (or may not) be a divide between those whose goal is the network intrusion and those carrying out the attack.

Krypt3ia: Moot question. Next.

Lesley: There’s pretty conclusive public domain evidence that they are. For instance, we’ve seen countries’ new weapons designs appear in other nations’ arsenals, critical infrastructure attacked, communications disrupted, and flagship commercial and scientific products duplicated within implausibly short timeframes.

Munin: Certainly, but they’re not exactly common, and there’s a continuum of attackers from “fully state sponsored” (that is, “official” “cyberwarfare” units) to “tolerated” (independent groups whose actions are not materially supported but whose activities are condoned).

Viss: Yes, but governments outsource that. We do. Look at NSA/Booz.

Ryan: Of course they are real. I spent a decent portion of my career participating in the planning of them.


Is this sort of thing new?

Coleman: Blame is most frequently pointed at China, though a lot of evidence (again, in the public) indicates that it is broader. That said, one of the earliest publicly-documented “nation-state” attacks is “Titan Rain”, which was reported as going back as far as 2003, and widely regarded as “state sponsored”. With that background, it would give an upper bound of ~13 years, which is pretty old in my opinion.

Ryan: It’s definitely not new. These types of activities have been around for as long as they have been able to be. Any well-resourced nation will identify when an intelligence or military opportunity presents itself at the very earliest stages of that opportunity. This is definitely true when it comes to network intrusions. Ever since there has been intel to retrieve on a network, you can bet there have been nation states trying to get it.

Munin: Not at all. This is merely an extension of the espionage activities that countries have been flinging at each other since time immemorial.

DA_667: To make a long story short, absolutely not. For instance, it is believed that a recent exploit used by a group of nation-state actors is well over 10 years old. That’s one exploit, supposedly tied to one actor. Just to give you an idea.

Lesley: Nation state and industrial sabotage, political maneuvering, espionage, and counterespionage have existed as long as industry and nation states have. It’s nothing new. In some ways, it’s just gotten easier in the internet era. I don’t really differentiate.

Krypt3ia: No. Go read The Cuckoo’s Egg.

Viss: Hard to say – the first big one we knew about was Stuxnet, right? (Specifically computer security stuff, not in-person assets doing Jason Bourne stuff.)


How are state-sponsored network intrusions different from everyday malware and attacks?

Lesley: Sometimes they may be more sophisticated, and other times aspects are less sophisticated. It really depends on actor goals and resources. A common theme we’ve seen is long term persistence – hiding in high value targets’ networks quietly for months or years until an occasion to sabotage them or exfiltrate data. This is pretty different from your average crimeware, the goal of which is to make as much money as possible as quickly as possible. Perhaps surprisingly, advanced actors might favor native systems administration tools over highly sophisticated malware in order to make their long term persistence even harder to detect. Conversely, they might employ very specialized malware to target a specialized system. There’s often some indication that their goals are not the same as the typical crimeware author.

Viss: The major difference is time, attention to detail, and access to commercial business resources. Take Stuxnet – they went to Microsoft to validate their USB hardware so that it would run autorun files – something that Microsoft killed years and years ago. Normal malware can’t do that. Red teams don’t do that. Only someone who can go to MS and say “Do this. Or you’ll make us upset” can do that. That’s the difference.

Munin: It’s going to differ depending on the specifics of the situation, and on the goals being served by the attack. It’s kind of hard to characterize any individual situation as definitively state-sponsored because of the breadth of potential actions that could be taken.

DA_667: In most cases, the differences between state-sponsored network intrusions and your run-of-the-mill intruder are going to boil down to their motivations, and their tradecraft. Tradecraft being defined as, and I really hate to use this word, their sophistication. How long have the bad guys operated in their network? How much data did they take? Did they use unique tools that have never before been seen, or are they using commodity malware and RATs (Trojans) to access targets? Did they actively try to hide or suppress evidence that they were on your computers and in your network? Nation-state actors are usually in one’s network for an extended period of time — studies show the average amount of time between initial access and first detection is somewhere over 180 days (and this is considered an improvement over the past few years). This is the primary difference between nation-states and standard actors; nation-states are in it for the long haul (unlike commodity malware attackers). They have the skill (unlike skids and/or hacktivists). They want sustained access so that they can keep tabs on you, your business, and your trade secrets to further whatever goals they have.

Krypt3ia: All of the above with one caveat. TTPs are being spread through sales, disinformation campaigns, and use of proxies. Soon it will be a singularity.

Coleman: Not going to restate a lot of really good info provided above. However, I think some future-proofing to our mindset is in order. There are a lot of historic “nation-state attributed” attacks (you can easily browse FireEye’s blog for examples) with very specific tools/TTPs. More recently, some tools have emerged as being allegedly used in both (Poison Ivy, PlugX, DarkComet, Gh0st RAT). It kind of boils down to “malware supply chain”. Back in 2003, the “supply chain” for malware offering both stealth and remote-access capability was far smaller than it is today, so it was likely more common to have divergence between tooling funded for “state sponsored” attacks, versus what was available to the more common “underground market”. I think we have, and will continue to see, a convergence in tactics that muddies the waters and also makes our work as intel analysts more difficult, as more commodity tools improve.


Is attributing network attacks to a nation state actor really possible?

Munin: Maybe, under just the right circumstances – and with information outside of that gained within the actual attacked systems. Confirming nation-state responsibility is likely to require more conventional espionage information channels [ e.g. a mole in the ‘cyber’ unit who can confirm that such a thing happened ] for attribution to be firmer than a “best guess” though.

DA_667: Yes and No. Hold on, let me explain. There are certain signatures, TTPs, common targets, common tradecraft between victims that can be put together to grant you clues as to what nation-state might be interested in given targets (foreign governments, economic verticals, etc.). There may be some interesting clues in artifacts (tools, scripts, executables, things the nation-state uses) such as compile times and/or language support that could be used if you have enough samples to make educated guesses as well, but that is all that data will amount to: hypothetical attribution. There are clues that say X is the likely suspect, but that is about as far as you can go.

Lesley: Kind of, by the right people with access to the right evidence. It ends up being a matter of painstaking analysis leading to a supported conclusion that is deemed plausible beyond a reasonable doubt, just like most criminal investigations.

Viss: Sure! Why not? You could worm your way back from the C2 and find the people talking to it and shell them! NSA won’t do that though, because they don’t care or haven’t been tasked to – and the samples they find, if they even find samples, will be kept behind closed doors at Mandiant or wherever, never to see the light of day – and we as the public will always get “trust us, we’re law enforcement”. So while, sure, it’s totally possible, A) they won’t let us do it because, well, “we’re not cool enough”, and B) they can break the law and we can’t. It will always boil down to “just trust us”, which isn’t good enough, and never helps any public discourse at all. The only purpose it serves talking to the press about it is so that they can convince the House/Senate/other decision makers “we need to act!” or whatever. It’s so that they can go invade countries, or start shit overseas, or tap cables, or spy on Americans. The only purpose talking about it in the media serves is so that they get their way.

Coleman: It is, but I feel only by the folks with the right level of visibility (which, honestly, involves diplomacy and basically the resources of a nation-state to research). I feel the interstate diplomacy/cooperation part is significantly absent from a lot of the nation-state attribution reporting today. At the end of the day, I can’t tell you with 100% certainty what the overall purpose of an intrusion or data theft is. I can only tell you what actions were taken, where they went, what was taken, and possible hypotheses about what relevance it may have.

Ryan: Yes, but I believe it takes the resources of a nation-state to do it properly. There needs to be a level of access to the foreign actors that is beyond just knowing the tools they use and the tradecraft they employ. These can all be stolen and forged. There needs to be insight into adversaries’ mission planning, the creation of their infrastructure, their communications with each other, etc., in order to conduct proper attribution. Only a nation-state with an intelligence capability can realistically perform this kind of collection. That’s why it’s extremely difficult, in my opinion, for a non-government entity to really do proper state-sponsored attribution.

Krypt3ia: There will always be doubt because disinformation can be baked into the malware, the operations, and the clues left deliberately. As we move forward, the actors will be using these techniques more and it will really rely on other “sources and methods” (i.e. espionage with HUMINT) to say more definitively who dunnit.


Why do security professionals say attribution is hard?

Lesley: Commercial security teams and researchers often lack enough access to data to make any reliable determination. This doesn’t just include lack of the old-fashioned spy vs. spy intelligence, but also access to the compromised systems that attackers often use to launch their intrusions and control their malware. It can take heavy cooperation from law enforcement and foreign governments far outside one network to really delve into a well-planned global hacking operation. There’s also the matter of time – while a law enforcement or government agency has the freedom to track a group across multiple intrusions for years, the business goal of most private organizations is normally to mitigate the damage and move on to the next fire.

Munin: Being truly anonymous online is extremely difficult. Framing someone else? That’s comparatively easy. Especially in situations where there exists knowledge that certain infrastructure was used to commit certain acts, it’s entirely possible to co-opt that infrastructure for your own uses – and thus gain at least a veneer of being the same threat actor. If you pay attention to details (compiling your programs during the working hours of those you’re seeking to frame; using their country’s language for localizing your build systems; connecting via systems and networks in that country, etc.) then you’re likely to fool all but the most dedicated and well-resourced investigators.

Coleman: In my opinion, many of us in the security field suffer from a “fog of war” effect. We only have complete visibility to our interior, and beyond that we have very limited visibility of the perimeter of the infrastructure used for attacks. Beyond that, unless we are very lucky, we won’t be granted visibility into other victims’ networks. This is a unique space that both governments and private-sector infosec companies get to reside within. However, in my opinion, the visibility will still end just beyond their customer base or scope of authority. At the end of the day, it becomes an inference game, trying to sum together multiple data points of evidence to eliminate alternative hypotheses in order to converge on “likeliest reality”. It takes a lot of time and effort to get it right, and very frequently, there are external drivers to get it “fast” before getting it “correct”. When the “fast” attribution ends up in public, it becomes “ground truth” for many, whether or not it actually is. This complicates the job of an analyst trying to do it correctly. So I guess, both “yes” and “no” apply. Attribution is “easy” if your audience needs to point a finger quickly, attribution is “hard” if your audience expects you to blame the right perp ;).

DA_667: Okay so in answering this, let me give you an exercise to think about. If I were a nation-state and I wanted to attack target “Z” to serve some purpose or goal, directly attacking target “Z” has implications and risks associated with it, right? So instead, why not look for a vulnerable system in another country, “Y”, compromise that system, then make all of my attacks on “Z” look like they are coming from “Y”? This is the problem with trying to do attribution. There were previous campaigns where there was evidence that nation-states were doing exactly this: proxying off of known, compromised systems to purposely hinder attribution efforts (https://krypt3ia.wordpress.com/2014/12/20/fauxtribution/). Now, imagine having to get access to a system that was used to attack you, that is in a country that doesn’t speak your native language or perhaps doesn’t have good diplomatic ties with your country. Let’s not even talk about the possibility that they may have used more than one system to hide their tracks, or the fact that there may be no forensic data on these systems that assists in the investigation. This is why attribution is a nightmare.

Krypt3ia: See my answers above.

Viss: Because professionals never get to see the data. And if they *DO* get to see the data, they get to deal with what DA explains above. It’s a giant shitshow and you can’t catch people breaking the law if you have to follow the law. That’s just the physics of things.

Ryan: DA gave a great example about why you can’t trust where the attack “comes from” to perform attribution. I’d like to give an example regarding why you can’t trust what an attack “looks like” either. It is not uncommon for nation-state actors to not only break into other nation-state actors’ networks and take their tools for analysis, but to also then take those tools and repurpose them for their own use. If you walk the dog on that, you’re now in a situation where the actor is using pre-compromised infrastructure in use by another actor, while also using tools from another actor to perform their mission. If Russia is using French tools and deploying them from Chinese compromised hop-points, how do you actually know it’s Russia? As I mentioned above, I believe you need the resources of a nation-state to truly get the information needed to make the proper attribution to Russia (ie: an intelligence capability). This makes attribution extremely hard to perform for anyone in the commercial sector.


How do organizations attribute attacks to nation states the wrong way?

Munin: Wishful thinking, trying to make an attack seem more severe than perhaps it really was. Nobody can blame you for falling to the wiles of a nation-state! But if the real entrypoint was boring old phishing, well, that’s a horse of a different color – and likely a set of lawsuits for negligence.

Lesley: From a forensics perspective, the number one problem I see is trying to fit evidence to a conclusion, which is totally contrary to the business of investigating crimes. You don’t base your investigation or conclusions off of your initial gut feeling. There is certainly a precedent for false flag operations in espionage, and it’s pretty easy for a good attacker to emulate a less advanced one. To elaborate, quite a bit of “advanced” malware is available to anybody on the black market, and adversaries can use the same publicly posted indicators of compromise that defenders do to emulate another actor like DA and Ryan previously discussed (for various political and defensive reasons). That misdirection can be really misleading, especially if it plays to our biases and suits our conclusions.

DA_667: Trying to fit data into a mold; you’ve already made up your mind that advanced nation-state actors from Elbonia want your secret potato fertilizer formula, and you aren’t willing to see it any differently. What I’m saying is that some organizations have a bias that leads them to believe that a nation-state actor hacked them.

In other cases, you could say “It was a nation-state actor that attacked me”, and if you have an incident response firm back up that story, it could be enough to get an insurance company to pay out a “cyber insurance” policy for a massive data breach because, after all, “no reasonable defense could have been expected to stop such sophisticated actors and tools.”

Krypt3ia: Firstly they listen to vendors. Secondly they are seeking a bad guy to blame when they should be focused on how they got in, how they did what they did, and what they took. Profile the UNSUB and forget about attribution in the cyber game of Clue.

Viss: They do it for political reasons. If you accuse Pakistan of lobbing malware into the US it gives politicians the talking points they need to get the budget and funding to send the military there – or to send drones there – or spies – or write their own malware. Since they never reveal the samples/malware, and since they aren’t on the hook to, everyone seems to be happy with the “trust us, we’re law enforcement” replies, so they can accuse whoever they want, regardless of the reality and face absolutely no scrutiny. Attribution at the government level is a universal adapter for motive. Spin the wheel of fish, pick a reason, get funding/motive/etc.

Coleman: All of the above are great answers. In my opinion, among the biggest mistakes I’ve seen not addressed above is asking the wrong questions. I’ve heard many stories about “attributions” driven by a desire by customers/leaders to know “Who did this?”, which 90% of the time is non-actionable information, but it satisfies the desires of folks glued to TV drama timelines like CSI and NCIS. Almost all the time, “who did this?” doesn’t need to be answered, but rather “what tools, tactics, infrastructure, etc. should I be looking for next?”. Nine times out of ten, the adversary resides beyond the reach of prosecution, and your “end game” is documentation of the attack, remediation of the intrusion, and closing the vulnerabilities used to execute the attack.


So, what does it really take to fairly attribute an attack to a nation state?

Munin: Extremely thorough analysis coupled with corroborating reports from third parties – you will never get the whole story from the evidence your logs get; you are only getting the story that your attacker wants you to see. Only the most naive of attackers is likely to let you have a true story – unless they’re sending a specific message.

Coleman: In my opinion, there can be many levels to “attribution” of an attack. Taking the common “defense/industrial espionage” use case that’s widely associated with “nation state attacks”, there could be three semi-independent levels that may or may not intersect: 1) Tool authors/designers, 2) Network attack/exploiters, 3) Tasking/customers. A common fallacy that I’ve observed is assuming that a particular adversary (#2 from above) exclusively cares about gathering the specific data they’ve been tasked with at one point. IMO, recognize that any data you have is “in play” for any of #2 from my list above. If you finally get an attacker out, and keep them out, someone else is bound to be thrown your way with different TTPs to get the same data. Additionally, a good rule as time goes on is that all malware becomes “shared tooling”, and to make sure not to confuse “tool sharing” with any particular adversary. Or, maybe you’re tracking a “Poison Ivy Group”. Lots of hard work, and also a recognition that no matter how certain you are, new information can (and will!) lead to reconsideration.

Lesley: It’s not as simple as looking at IP addresses! Attribution is all about doing thorough analysis of internal and external clues, then deciding that they lead to a conclusion beyond a reasonable doubt. Clues can include things like human language and malicious code, timestamps on files that show activity in certain time zones, targets, tools, and even “softer” indicators like the patience, error rate, and operational timeframes of the attackers. Of course, law enforcement and the most well-resourced security firms can employ more traditional detective, Intel, and counterespionage resources. In the private sector, we can only leverage shared, open source, or commercially purchased intelligence, and the quality of this varies.

Viss: A slip up on their part – like the NSA derping it up and leaving their malware on a staging server, or using the same payload in two different places at the same time which gets ID’ed later at something like Stuxnet where attribution happens for one reason or another out of band and it’s REALLY EASY to put two and two together. If you’re a government hacking another government you want deniability. If you’re the NSA you use Booz and claim they did it. If you’re China you proxy through Korea or Russia. If you’re Russia you ride in on a fucking bear because you literally give no fucks.

DA_667: A lot of hard work, thorough analysis of tradecraft (across multiple targets), access to vast sets of data to attempt to perform some sort of correlation, and, in most cases, access to intelligence community resources that most organizations cannot reasonably expect to have access to.

Krypt3ia: Access to IC data and assets for other sources and methods. Then you adjudicate that information the best you can. Then you forget that and move on.

Ryan: The resources of a nation-state are almost a prerequisite to “fairly” attribute something to a nation state. You need intelligence resources that are able to build a full picture of the activity. Just technical indicators of the intrusion are not enough.


Is there a way to reliably tell a private advanced actor aiding a state (sanctioned or unsanctioned) from a military or government threat actor?

Krypt3ia: Let me put it this way. How do you know that your actor isn’t a freelancer working for a nation state? How do you know that a nation state isn’t using proxy hacking groups or individuals?

Ryan: No. Not unless there is some outside information informing your analysis, like intelligence information on the private actor or a leak of their tools (for example, the HackingTeam hack). I personally believe there isn’t much of a distinction to be made between these types of actors if they are still state-sponsored in their activities, because they are working off of their sponsor’s requirements. Depending on the level of the sponsor’s involvement, the tools could even conform to standards laid out by the nation-state itself. I think efforts to try to draw these distinctions are rather futile.

DA_667: No. In fact, given what you now know about how nation-state actors can easily make it seem like attacks are coming from a different IP address and country entirely, what makes you think that they can’t alter their tool footprint and just use open-source penetration testing tools, or recently open-sourced bots with re-purposed code?

Munin: Not a chance.

Viss: Not unless you have samples or track record data of some kind. A well funded corporate adversary who knows what they’re doing should likely be indistinguishable from a government. Especially because the governments will usually hire exactly these companies to do that work for them, since they tend not to have the talent in house.

Coleman: I don’t think there is a “reliable” way to do it. Rather, for many adversaries, with constant research and regular data point collection, it is possible to reliably track specific adversary groups. Whether or not they could be distinguished as “military”, “private”, or “paramilitary” is up for debate. I think that requires very good visibility into the cyber aspects of the country / military in question.

Lesley: That would be nearly impossible without boots-on-ground, traditional intelligence resources that you and I will never see (or illegal hacking of our own).


Why don’t all security experts publicly corroborate the attribution provided by investigating firms and agencies?

DA_667: In most cases, disagreements on attribution boil down to:

  1. Lack of information
  2. Inconclusive evidence
  3. Said investigating firms and/or agencies are not laying all the cards out on the table; security experts do not have access to the same dataset the investigators have (either due to proprietary vendor data, or classified intelligence)

Munin: Lack of proof. It’s very hard to prove with any reliability who’s done what online; it’s even harder to make it stick. Plausible deniability is very much a thing.

Lesley: Usually, because I don’t have enough information. We might lean towards agreeing or disagreeing with the conclusions of the investigators, but at the same time be reluctant to stake our professional and ethical reputation on somebody else’s investigation of evidence we can’t see ourselves. There have also been many instances where the media jumped to conclusions which were not yet appropriate or substantiated. The important thing to remember is that attribution has nothing to do with what we want or who we dislike. It’s the study of facts, and the consequences for being wrong can be pretty dire.

Krypt3ia: Because they are smarter than the average Wizard?

Coleman: In my opinion, many commercial investigative firms are driven to threat attribution by numerous non-evidential factors. There’s kind of a “race to the top (bottom?)” these days for “threat intelligence”, and a significant influence on private companies to be first-to-report, as well as show themselves to have unique visibility to deliver a “breaking” story. In a word: marketing. Each agency wants to look like they have more and better intelligence on the most advanced threats than their competition. Additionally, there’s an audience component to it as well. Many organizations suffering a breach would prefer to adopt the story line that their expensive defenses were breached by “the most advanced well-funded nation-state adversary” (a.k.a. “Deep Panda”), versus “some 13 year-olds hanging out in an IRC chatroom named #operation_dildos”. Because of this, I generally consider a lot of public reporting conclusions to be worth taking with a grain of salt, and I’m more interested in the handful that actually report technical data that I can act upon.

Viss: Some want to get in bed with (potential)employers so they cozy up to that version of the story. Some don’t want to rock the boat so they go along with the boss. Some have literally no idea what they’re talking about, they’re fresh out of college and they can’t keep their mouths shut. Some are being paid by someone to say something. It’s a giant grab bag.


Should my company attribute network attacks to a nation state?

DA_667: No. Oftentimes, your organization will NOT gain anything of value attempting to attribute an attack to a given nation-state. Identify the Indicators of Compromise as best you can, and distribute them to peers in your industry or professional organizations who may have more resources for determining whether an attack was part of a campaign spanning multiple targets. Focus on recovery and hardening your systems so you are no longer considered a soft target.

Viss: I don’t understand why this would be even remotely interesting to average businesses. This is only interesting to the “spymaster bobs” of the world, and the people who routinely fellate the intelligence community for favors/intel/jobs/etc. In most cases it doesn’t matter, and in the cases it DOES matter, it’s not really a public discussion – or a public discussion won’t help things.

Lesley: For your average commercial organization, there’s rarely any reason (or sufficient data) to attribute an attack to a nation state. Identifying the type of actor, IOCs, and TTPs is normally adequate to maintain threat intelligence or respond to an incident. Be very cautious (legally / ethically / career-wise) if your executives ask you to attribute to a foreign government.

Munin: I would advise against it. You’ll get a lot of attention, and most of it’s going to be bad. Attribution to nation-state actors is very much part of the espionage and diplomacy game and you do not want to engage in that if you do not absolutely have to.

Ryan: No. The odds of your organization even being equipped to make such an attribution are almost nil. It’s not worth expending the resources to even attempt such an attribution. The gain, even if you are successful, would still be minimal.

Coleman: I generally would say “no”. You should ask yourselves, if you actually had that information in a factual form, what are you going to do? Stop doing business in that country? I think it is generally more beneficial to focus on threat grouping/clustering (if I see activity from IP address A.B.C.D, what historically have I observed in relation to that that I should look out for?) over trying to tie back to “nation-states” or even to answer the question “nation state or not?”. If you’re only prioritizing things you believe are “nation-state”, you’re probably losing the game considerably in other threat areas. I have observed very few examples where nation-state attribution makes any significant difference, as far as response and mitigation are concerned.

Krypt3ia: Too many try and fail.


Can’t we just block [nation state]?

Krypt3ia: HA! I have seen rule sets on firewalls where they try to block whole countries. It’s silly. If I am your adversary and I have the money and time, I will get in.

DA_667: No, and for a couple reasons. By the time a research body or a government agency has released indicators against a certain set of tools or a supposed nation-state actor to the general public, those indicators are long past stale. The actors have moved on to using new hosts to hide their tracks, using new tools and custom malware to achieve their goals, and so on, and so forth. Not only that, but the solution isn’t as easy as block [supposed malicious country’s IP address space]. A lot of companies that are targeted by nation-states are international organizations with customers and users that live in countries all over the world. Therefore, you can’t take a ham-fisted approach such as blocking all Elbonian IP addresses. In some cases, if you’re a smaller business who has no users or customers from a given country (e.g. a local bank somewhere in Nevada would NOT be expecting customers or users to connect from Elbonia.), you might be able to get away with blocking certain countries and that will make it harder for the lowest tier of attackers to attack your systems directly… but again, given what you now know about how easy it is for a nation-state actor to compromise another system, in another country, you should realize that blocking IP addresses assigned to a given country is not going to be terribly helpful if the nation-state is persistent and has high motivation to attack you.

Munin: Not really. IP blocks will kill the low bar attacks, but those aren’t really what you’re asking after if you’re in this FAQ, are you? Any attacker worth their salt can find some third party to proxy through. Not to mention IP ranges get traded or sold now and then – today’s Chinese block could be someone else entirely tomorrow.

Lesley: Not only might this be pretty bad for business, it’s pretty easy for any actor to evade using compromised hosts elsewhere as proxies. Some orgs do it, though.

Coleman: Depending upon the impact, sure, why not? It’s up to you to inform your leadership, and if your leaders are fine with blocking large blocks of the Internet that sometimes are the endpoint of an attack, then that’s acceptable. I’ve had some associates in my peer group who are able to successfully execute this strategy. Sometimes (3:30pm on a Friday, for instance) I envy them.

Ryan: If you’re not doing business outside of your local country and don’t ever care to, it couldn’t hurt. By restricting connections to your network from only your home country, you will likely add some security. However, if your network is a target, doing this won’t stop an actor from pivoting from a location that is within your whitelist to gain access to your network.

Viss: Sure! Does your company do business with China? Korea? Pakistan? Why bother accepting traffic from them? Take the top ten ‘shady countries’ and just block them at the firewall. If malware lands on your LAN, it won’t be able to phone home. If your company DOES do business with those countries, it’s another story – but if there is no legitimate reason 10 laptops in your sales department should be talking to Spain or South Africa, then it’s a pretty easy win. It won’t stop a determined attacker, but if you’re paying attention to dropped packets leaving your network you’re gonna find out REAL FAST if there’s someone on your LAN. They won’t know you’re blocking til they slam headfirst into a firewall rule and leave a bunch of logs.
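
For what it’s worth, the mechanics of country blocking are trivial; it’s the caveats above that matter. A sketch using Python’s ipaddress module, with documentation-range CIDRs standing in for a real (and constantly shifting) per-country list:

```python
from ipaddress import ip_address, ip_network

# Placeholder CIDRs; in practice you would pull a per-country list from a
# GeoIP provider and refresh it often, since ranges get traded and sold.
BLOCKED_RANGES = [ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")]

def is_blocked(addr):
    ip = ip_address(addr)
    return any(ip in net for net in BLOCKED_RANGES)

for src in ("203.0.113.7", "192.0.2.55"):
    print(src, "drop" if is_blocked(src) else "allow")
```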


Hey, what’s with the Attribution Dice?

Ryan: I’m convinced that lots of threat intelligence companies have these as part of their standard report writing kit.

Lesley: They’re awesome! If you do purposefully terrible, bandwagon attribution of the trendy scapegoat of the day, infosec folks are pretty likely to notice and poke a little fun at your expense.

Krypt3ia: They are cheaper than Mandiant or Crowdstrike and likely just as accurate.

Coleman: In some situations, the “Who Hacked Us?” web application may be better than public reporting.

Munin: I want a set someday….

Viss: They’re more accurate than the government, that’s for sure.

DA_667: I have a custom set of laser-printed attribution dice that a friend had commissioned for me, where my twitter handle is listed as a possible threat actor. But in all seriousness, the attribution dice are a sort of inside joke amongst security experts who deal in threat intelligence. Trying to do attribution is a lot like casting the dice.

The $5 Vendor-Free Crash Course: Cyber Threat Intel

Threat intelligence is currently the trendy thing in information security, and as with many new security trends, it is frequently misunderstood and misused. I want to take the time to discuss some common misunderstandings about what threat intelligence is and isn’t, where it can be beneficial, and where it’s wasting your (and your analysts’) time and money.

To understand cyber threat intelligence as more than a buzzword, we must first understand what intelligence is in a broader sense. Encyclopedia Britannica provides this gem of a summary:

“… Whether tactical or strategic, military intelligence attempts to respond to or satisfy the needs of the operational leader, the person who has to act or react to a given set of circumstances. The process begins when the commander determines what information is needed to act responsibly.”

The purpose of intelligence is to aid in informed decision making. Period. There is no point in doing intelligence for intelligence’s sake.

Cyber threat intelligence is not simply endless feeds of malicious IP addresses and domain names. To be truly useful intelligence, threat intel should be actionable and contextual. That doesn’t mean attribution of a set of indicators to a specific country or organization; for most companies that is at best futile and at worst dangerous. It simply means gathering data to anticipate, detect, and mitigate threat actor behavior as it may relate to your organization. If threat intelligence is not contextual or is frequently non-actionable in your environment, you’re doing “cyber threat” without much “intelligence” (and it’s probably not providing much benefit).

Threat intelligence should aid you in answering the following six questions:

  1. What types of actors might currently pose a threat to your organization or industry? Remember that for something to pose a threat, it must have capability, opportunity, and intent.
  2. How do those types of actors typically operate?
  3. What are the “crown jewels” prime for theft or abuse in your environment?
  4. What is the risk of your organization being targeted by these threats? Remember that risk is a measure of the probability of your being targeted and the harm that could be caused if you were.
  5. What are better ways to detect and mitigate these types of threats in a timely and proactive manner?
  6. How can these types of threats be responded to more effectively?

Note that the fifth question is the only one that really involves those big lists of Indicators of Compromise (IoCs). There is much more that goes into intelligence about the threats that face us than simply raw detection of specific file hashes or domains without any context. You can see this in good quality threat intelligence reports – they clearly answer “what” and “how” while also providing strategic and tactical intelligence.

I’m not a fan of the “throw everything at the wall and see what sticks” mentality of using every raw feed of IoCs available. This is incredibly inefficient and difficult to vet and manage. The real intelligence aspect comes in when selecting which feeds of indicators and signatures are applicable to your environment, where to place sensors, and which monitored alerts might merit a faster response. Signatures should be used as opposed to one-off indicators when possible. Indicators and signatures should be vetted and deduplicated. Sensibly planning expiration for indicators that are relatively transient (like compromised sites used in phishing or watering hole attacks) is also pretty important for your sanity and the health of your security appliances.
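
Here is a minimal sketch of that indicator hygiene, deduplicating across feeds and expiring transient indicators. The fields and TTL values are illustrative assumptions, not a schema recommendation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class IoC:
    value: str       # e.g. a domain, IP, or file hash
    kind: str        # "domain", "ip", "hash", ...
    source: str
    first_seen: datetime
    ttl_days: int    # transient indicators (phishing sites) get short TTLs

def dedupe(iocs):
    # Key on (kind, value) so the same indicator from two feeds loads once.
    return list({(i.kind, i.value): i for i in iocs}.values())

def active(iocs, now):
    return [i for i in iocs if i.first_seen + timedelta(days=i.ttl_days) > now]

feed = [
    IoC("evil.example", "domain", "feed-a", datetime(2016, 1, 1), 30),
    IoC("evil.example", "domain", "feed-b", datetime(2016, 1, 1), 30),  # duplicate
    IoC("203.0.113.9", "ip", "feed-a", datetime(2016, 6, 1), 365),
]
survivors = active(dedupe(feed), now=datetime(2016, 6, 15))
print(len(survivors))  # 1: the stale phishing domain has aged out
```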

So, how do you go about these tasks if you can’t staff a full-time threat intelligence expert? Firstly, many of the questions about how you might be targeted and what might be targeted in your environment can be answered by your own staff. After your own vulnerability assessments, bring your risk management, loss prevention, and legal experts into the discussion (as well as your sales and development teams, if you develop products or services). Executive buy-in and support is key at this stage. Find out where the money is going to and coming from, and you will have a solid start on your list of crown jewels and potential threats. I also highly recommend speaking to your social media team about your company’s global reputation and any frequent threats or anger directed at them online. Are you disliked by a hacktivist organization? Do you have unscrupulous competitors? This all plays into threat intelligence and security decisions.

Additionally, identify your industry’s ISAC or equivalent, and become a participating member. This allows you the unique opportunity to speak under strict NDA with security staff at your competitors about threats that may impact you both. Be cognizant that this is a two way street; you will likely be expected to participate actively as opposed to just gleaning information from others, so you’ll want to discuss this agreement with your legal counsel and have the support of your senior leadership. It’s usually worth it.

Once you have begun to answer questions about how you might be targeted, and what types of organizations might pose a threat, you can begin to make an educated decision about which specific IoCs might be useful, and where to apply them in your network topology. For instance, most organizations are impacted by mass malware, yet if your environment consists entirely of Mac OS, a Windows ransomware indicator feed is probably not high in your priorities. You might, however, have a legacy Solaris server containing engineering data that could be a big target for theft, and decide to install additional sensors and Solaris signatures accordingly.

There are numerous commercial threat intelligence companies who will sell your organization varying types of cyber threat intelligence data of varying quality (in the interest of affability, I’ll not be rating them in this article). When selecting between paid and free intelligence sources (and indeed, you should probably be using a combination of both), keep the aforementioned questions in mind. If a vendor’s product will not help answer a few of those questions for you, you may want to look elsewhere. When an alert fires, a vendor who sells “black box” feeds of indicators without context may cost you extra time and money, while conversely a vendor who sells nation state attribution in great detail doesn’t really provide the average company any actionable information.

Publicly available sources of threat intelligence data are almost endless on the internet and can be as creative as your ability to look for them. Emerging Threats provides a fantastic feed of free signatures that include malware and exploits used by advanced actors. AlienVault OTX and CIRCL’s MISP are great efforts to bring together a lot of community intelligence in one place. Potentially useful IoC feeds are available from many organizations like abuse.ch, IOC Bucket, SANS ISC DShield, and MalwareDomains.com (I recommend checking out hslatman’s fairly comprehensive list). As previously noted, don’t discount social media and your average saved Google search as great sources of intel, either.
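
To show how simple basic feed ingestion can be, here is a fetch-and-normalize sketch for a plaintext domain feed. The URL is a placeholder; check each project’s documentation for real endpoints and formats:

```python
from urllib.request import urlopen

# Placeholder endpoint; substitute the real feed URL for whichever
# project you use, and respect its licensing and rate limits.
FEED_URL = "https://feeds.example.com/malware-domains.txt"

def fetch_domains(url):
    domains = set()
    with urlopen(url, timeout=30) as resp:
        for raw in resp.read().decode("utf-8", "replace").splitlines():
            line = raw.split("#", 1)[0].strip().lower()  # drop comments and whitespace
            if line:
                domains.add(line)
    return domains

local_blocklist = {"known-bad.example"}
local_blocklist |= fetch_domains(FEED_URL)
print(f"{len(local_blocklist)} unique domains loaded")
```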

The most important thing to remember about threat intelligence is that the threat landscape is always changing – both on your side, and the attackers’. You are never done with gathering intelligence or making security decisions based on it. You should touch base regularly with everybody involved in your threat intelligence gathering and decision process, to ensure you are still using actionable data in the correct context.

***

In summary, don’t do threat intelligence for the sake of doing threat intelligence. Give careful consideration to choosing intelligence that can provide contextual and actionable information to your organization’s defense. This is a doable task, possible even for organizations that do not have dedicated threat intelligence staff or budgets, but it will require some regular maintenance and thought.


Many thanks to the seasoned intel pros who kindly took the time to read and critique this article: @swannysec, @MalwareJake, and @edwardmccabe.

I highly recommend reading John Swanson’s work on building a Threat Intel program next, here.

The Gamemaster’s Guide to Incident Response

I had the honor and pleasure of being asked to teach a four-hour incident response class at last month’s Circle City Con in Indianapolis, IN (you can watch a recording here). The subject was pre-established based on attendee interest: building an incident response program in small, medium, and large enterprises. Granted, most of the talks I give aren’t on spaceships or robots or other such entertaining stuff, but this in particular presented a conundrum – developing a program and team can be a spectacularly dry subject.

I approached the class with a few goals in mind:

1) I wanted to keep audience interest for the full four hours (and maintain my ability to speak!)

2) I wanted to ensure that every critical team-building topic I presented was reinforced in an entertaining way.

and finally,

3) I wanted to maintain audience involvement, so that students would be offered auditory, visual, and hands-on learning simultaneously, giving them the best chance of success possible.

The answer became obvious – I firmly believe that incident response is made of endless, great stories. Therefore, I would gamify building an incident response team, turning it into a story-based role playing game. I’d already had positive exposure to gamification in and out of infosec education – I regularly speak at gaming conventions about hacking. Now it was apparent that I was going to have to bring gaming to the hacking convention.

Gamification has been a key component of education for a long time, but in the past it’s been pretty kid-specific. However, as gaming itself becomes more mainstream, teens and adults are becoming more and more comfortable thinking of their lives in terms of games – accumulating achievements, reaching rewards through set goals, and learning through creative, fun activities. This is now reflected as the new, hot thing in everything from fitness apps to HR training – educators have discovered that people often learn better and stay more interested in fun, creative, and achievement-oriented environments.

The tremendous value gamification (and role playing in particular) brings to education is the creativity and emotional involvement it inspires. If I simply presented a case and explained my response, there’d be no emotional weight – the students would be passive observers hearing about something that already happened. I could take it a step further and give each group a pre-written scenario – a vast improvement, because they would have to think critically about their solution. However, my solution took this emotional involvement a bit further, letting each group randomly generate their own unique scenario, with random benefits and pitfalls. I couldn’t predict the outcome, therefore every situation was unique and posed its own challenges (to the class, and to me).

After 40 minutes of lecture where I presented incident response team concepts and methodology, I let each of my student groups generate a company faced with multiple security problems. Each team was given two polyhedral dice (20- and 6-sided). They were provided some brief instructions:

Exercise 1: Our Saga Begins – Building an Incident Response Team

INSTRUCTIONS: You will be completing these exercises as a small group. Every group’s scenario will be a little different based on the roll of the polyhedral dice you’ve been provided. Fill in the blanks with your random dice roll, and then complete the exercises. Some groups will be ‘luckier’ than others, but that’s how the cookie crumbles. Every group will share their situation and solution.

And some static background, for the sake of brevity:

Due to your stellar reputation in Incident Response, your consulting firm has been hired by Renraku, a 4500-employee global company, to design their very first dedicated Incident Response team. They’ve been having an increasing number of security incidents over the past year, and they’ve relied on outside contractors and vendors to help investigate and resolve them. They currently only have a Security Operations Center (SOC) that does some basic log monitoring, patching, and malware remediation. The CISO, Mr. Lanier, provides the following specifications:

– The IR team will respond 24/7/365 with a projected staff of 10 people (on an on-call rotation).
– They will respond to both physical and digital security incidents.
– The IR team will collaborate with members of the HR, legal, loss prevention, and physical security teams.
– On detection of incidents, the SOC will normally be the ones to page out the Incident Response team.

I then let them roll the dice to determine some key aspects of the incident response scenario:

Roll [D6] _______. This will reflect the industry that Renraku focuses on (for the rest of this course):
(1) Retail Stores
(2) Hospitals
(3) Financial Investments
(4) Defense Contracting
(5) News Media
(6) Oil, Gas, and Electric

Roll [D20] ___________. This is how many major security incidents Renraku has faced in the last 12 months. If you rolled over a 12, it means they were dealing with more than one incident at once (and may have to again).

Roll [D20] ___________. The number of months it took Renraku to detect their last major compromise.

Roll [D20] ___________. The number of countries that Renraku operates offices in.

Roll [D20] ___________. The number of subsidiaries with different system configurations and software that have unrestricted connections to Renraku’s internal network.
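For anyone adapting this exercise who’d rather prototype than print, here’s a quick, illustrative Python sketch of the same scenario generation. The tables mirror the handout above; everything else is standard library:

import random

# Industry table from the D6 roll in the handout above
INDUSTRIES = [
    "Retail Stores",
    "Hospitals",
    "Financial Investments",
    "Defense Contracting",
    "News Media",
    "Oil, Gas, and Electric",
]

def roll(sides):
    # Simulate one polyhedral die roll (e.g., roll(20) for a D20)
    return random.randint(1, sides)

def generate_scenario():
    # Each key corresponds to one fill-in-the-blank on the worksheet
    return {
        "industry": INDUSTRIES[roll(6) - 1],
        "major incidents in last 12 months": roll(20),
        "months to detect last compromise": roll(20),
        "countries with offices": roll(20),
        "unrestricted subsidiaries": roll(20),
    }

if __name__ == "__main__":
    for item, value in generate_scenario().items():
        print(f"{item}: {value}")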

This posed an interesting challenge, because not everybody in the class had played a roleplaying game with polyhedral dice before. Having small groups helped with this, as the people who enjoyed tabletop gaming immediately latched on and took their dice rolls very seriously!

I knew there would be stumbling points in presenting the course, and did my best to anticipate them. I predicted that some teams would end up in very good situations and others in untenable ones, so I was careful to weight each dice roll to keep results in realistic ranges. In practice, this worked fairly well, but in the future I will widen the range slightly, because none of the groups ended up in a truly difficult staffing position on their mock incident response teams. I was also worried that some of the companies would turn out too similar, but fortunately my statistical math was (surprisingly) good, and of my seven groups, only two ended up very alike in industry and security issues. This is something to consider carefully when developing a new scenario: what are the worst and best cases that can occur?

The biggest issue I ran into was time. This was both a bad and a good problem to have – the groups had in-depth, occasionally heated discussions about their companies. I was trying to engage them emotionally, and I think I succeeded to an extent because of this. I actually had to stop a couple of the exercises early to stay near the timeframes I had set for each. In the future, I’ll trim down the lecture a bit more and make the exercise constraints clearer. I stated at the beginning of the course that none of the exercise solutions needed to be technical in nature, but some of the groups worked out very interesting technical solutions to the problems.

Another problem I faced was rewarding achievement. When I present this course again, I’ll have to clearly establish criteria for which group wins each exercise (and the fantastic geeky swag)! All of the groups had great solutions and creative ideas, and I felt very on-the-spot trying to choose a winner quickly.

During the class I learned very quickly that camping it up and interjecting roleplay kept it fun. I played the role of ‘GM’ throughout the exercises, occasionally rolling a D20 to see if the companies were set up in hostile countries or involved in embarrassing data breaches. I was careful to start this with groups that were already having a good time and playing with the scenario – the group that rolled miserably on their Oil & Gas company network, and made up a great story as to why, was repeatedly dogged by hacktivists! This really kept me on my toes. My recommendation to others less familiar with off-the-cuff roleplaying would be to write down some injects in advance.

All in all, I highly recommend this method for teaching the creative thinking and logical reasoning skills that are so desperately needed in incident response. Presenting a randomized scenario made each team care a bit more about the company they were representing, and introduced a large number of unpredictable scenarios. My students were engaged throughout the class and I got some very positive feedback afterwards.

I’m excited to continue development of my gamified Incident Response course through the year, and I’m happy to present it at cons as I’m available, or help you set up your own program. You can find my slides, worksheets, and sample scenarios on Google Docs here – I only ask for credit if you use my work directly. Enjoy!

Lesley’s Rules of SOC

I see a lot of the same errors made repeatedly as organizations stand up Security Operations. They not only result in lost time and money, but often in breaches and malware outbreaks. I tweeted these out of frustration quite some time ago, and I’ve since been repeatedly asked for a blog post condensing and elaborating on them. So, without further ado, here are Lesley’s Rules of SOC, in their unabridged form. Enjoy!


  1. You can’t secure anything if you don’t know what you’re securing. 

    Step one in designing and planning a SOC should be identifying high-value targets in your organization, and who wants to steal or deface them. This basic risk and threat analysis shows you where to place sensors, what hours you should be staffed in what regions, what types of skill and talent you need on your team, and what your Incident Response plan might need to include.

  2. If you’re securing and monitoring one area really well and ignoring another, you’re really not securing anything. 

    An unfortunate flaw in us as an infosec community is that we often get distracted by the newest, coolest exploit. The vast majority of breaches and compromises don’t involve a cool exploit at all. They involve unpatched systems, untrained employees, and weak credentials. Unfortunately, I often see organizations spending immense time on their crown jewel systems like their domain controllers, with very little attention paid to their workstations or test systems. All an attacker needs to get into a network is a single vulnerable system, from which he or she can move laterally to other devices (see the Target breach). I also see people following the letter of the law in PCI compliance, ignoring all the software and human practices beyond this insufficient box.

  3. You can buy the shiniest magic box, but if it’s not monitored, updated, and maintained with your input, you’re not doing security. 

    Security is a huge growth market, and vendors get better and better at selling solutions to executives with every newsworthy data breach. A lot of ‘cybersecurity’ solutions are now being sold as a product in a box – ‘install our appliances on your network and become secure’. This is simply not the case. Vendor solutions vary vastly in quality and upkeep. All of this is moot if the devices are placed in illogical places in the network, where they can’t see inbound or outbound internet traffic, or host-to-host traffic. Even with a sales engineer providing initial product setup, a plan must be developed for the devices to be patched and updated. Who will troubleshoot the devices if they fail? And finally, their output must be monitored by somebody who understands it. I’m constantly appalled by the poor documentation big vendors provide for the signatures produced by their products. Blocking alone is not adequate. Who is attacking, and what is the attack?

  4. If your executives aren’t at the head of your InfoSec initiatives, they’re probably clicking on phishing emails. 

    I think this is pretty self-explanatory. Security is not an initiative that can be ‘tacked on’ at a low level in an organization. To get the support and response needed to respond to incidents and prevent compromise, the SOC team must have a fast line to their organization’s executives in an emergency. 

  5. Defense in Depth, mother##%er. Your firewall isn’t stopping phishing, zero days, or port 443. 

    I constantly hear organizations (and students, and engineers) bragging about their firewall configs. This is tone-deaf and obsolete thinking. Firewalls, even next-generation firewalls that operate at layer 7, can only do so much. As I’ve said previously, exploits from outside to inside networks are not the #1 way that major breaches occur. All it takes is one employee clicking yes to security prompts on a phishing message or compromised website to have malware resident on a host inside the network. The command and control traffic from that host can take nigh-infinite forms, many of which won’t be caught by a firewall without specific threat intelligence. You can’t block port 80 or 443 at the firewall in almost any environment, and that’s all that’s really needed for an attacker to remote control a system. So you have to add layers of detection that have more control and visibility, such as HIDS, internal IDS, and system-level restrictions. 

  6. There are a lot of things that log besides your firewall and antivirus. 

    I wrote a post on this a while back listing a bunch. The thing that horrifies me more than SOCs that don’t have a decent SIEM or log aggregation solution is the ones that only monitor their antivirus console and firewall. So many network devices and systems can provide security logs. Are you looking at authentication or change logs? DNS requests? Email? 

  7. Good security analysts and responders are hard to find. Educate, motivate, and compensate yours. 

    Or you will lose them just as they are becoming experienced. Our field has almost a 0% unemployment rate. 

  8. Make good connections everywhere in your organization. People will know who to report security incidents to, and you’ll know who to call when they do. 

    There’s often a personality and culture clash between infosec people and the rest of the business. This is really dangerous. We are ultimately just another agency supporting the business and business goals. All of our cases involve other units in our organization to some extent or another. 

  9. If you don’t have some kind of Wiki or KB with processes, contact info, and lessons learned, you’re doing it wrong. 

    I can’t believe I have to say this because it’s true of almost any scientific or technical field. If you don’t write down what you did and how you did it, the next person who comes along will have to spend the time and effort to recreate your steps and potentially make the same mistakes. This also means everybody on your team needs to be able to make notes and comment on processes, not just one gatekeeper. 

  10. You can’t do everything simultaneously. Identify and triage your security issues and tackle one project at a time. 

    Plenty of the horror stories I hear from security operations centers in their early stages involve taking on too much at once – especially without the guidance of a project manager. These teams drop everything because they can’t do it all simultaneously. We have the unfortunate tendency to be ideas people, without organizing the tasks we develop into structured projects.

  11. Threat Intelligence is not a buzzword and does not center around APTs. Have good feeds of new malware indicators. 

    Yes, there are predatory companies selling threat intelligence feeds with little or no value (or ones that consist entirely of otherwise free data). The peril in discounting threat intelligence is that signature-based malware and threat detection is becoming less valuable every day. Every sample of the same malware campaign can look different due to polymorphism, and command and control mechanisms have gotten complex enough that traffic can change drastically. We are forced, at this point, to start looking in a more sophisticated way at who is attacking and how they operate, to predict what they will do next. This includes everything from identifying domains resolving to a set of IPs (see the sketch after this list) to sophisticated intelligence analysis. How far you take threat intelligence depends on time, funding, and industry, but every organization should be making it a part of their security plan.

  12. If your employees have to DM me for help with their basic SIEM / log aggregation, you’re failing at training. 

    Happens all the time, folks. I see a lot of good people at organizations with terrible training cultures. Make sure everybody has a level of basic knowledge from the start, and isn’t so intimidated about asking for help that he or she feels forced to go outside your organization. 

  13. Team build, and don’t exclude. The SOC that plays well will respond well together and knows their members’ strengths and shortfalls. 

    Prototypical hacker culture, while an absolute blast, is not for everyone. I’ve seen people shamed out of infosec for the most bizarre reasons – the fact is that some people don’t drink alcohol, or want to go to cons, or think Cards Against Humanity is appropriate. Yes, we are generally intelligent people and we can be rather eccentric. That doesn’t mean that people who find these things unpleasant don’t have skills and knowledge to contribute. Accept that they don’t have the same interests and move on without badgering. It’s their personal choice. When you plan your teambuilding activities, try to make them inclusive – people with kids might not be able to hang out at the bar at midnight.

  14. If you seek hires, do it in a range of places. Grads, veterans, exploit researchers, and more all may have different stuff to offer. 

    I see a lot of organizations with a relationship with an infosec group or university that only recruit from that specific pool. As with a lack of genetic diversity, this provides no advancement or innovation. There are tons of places to find interesting perspectives on infosec from well-educated candidates. It’s important to bring fresh ideas and perspective into your team.

  15. If your ticketing system doesn’t work in a security context, get your own dang ticketing system and forward. 

    There are two main reasons that you shouldn’t be using the same ticketing system for security cases that your IT department uses for everyday help desk operations. The first is security – there is no reason that your IT contractors, or non-IT staff in general, should be able to see the details of sensitive cases, even through an error in permissions. The same goes for their accounts, should they become compromised. The second is that these ticketing systems are not designed with security incidents in mind. A security incident case management platform should do critical things like store malware samples safely, provide court-admissible records of evidence hashes and case notes, and integrate with SIEM or log aggregation solutions. If your ticketing solution is not doing these basic functions, it’s high time to consider a separate platform.

  16. DO virtualize your malware analysis. DON’T virtualize your security applications unless the vendor says how to. 

    Virtualization software is critical for lots of reasons in infosec – from setting up malware analysis labs to CTFs to honeypots. However, it is not appropriate for all security applications and solutions. Most organizations are heavily pushing virtualization as a cost-saving initiative, but be very cautious about presuming all resource-intensive and highly specialized security tools will function alike when virtualized.
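Finally, as promised in rule 11, here is a minimal illustration of one of the simplest techniques mentioned there – checking whether suspect domains currently resolve to IPs in a known-bad set. The domains and addresses below are placeholders (documentation-range IPs), not real indicators:

import socket

KNOWN_BAD_IPS = {"203.0.113.10", "203.0.113.11"}  # placeholder documentation-range IPs
SUSPECT_DOMAINS = ["suspicious-example.com", "another-example.net"]  # hypothetical domains

for domain in SUSPECT_DOMAINS:
    try:
        # Collect every IPv4 address the domain currently resolves to
        ips = {info[4][0] for info in socket.getaddrinfo(domain, None, socket.AF_INET)}
    except socket.gaierror:
        continue  # domain doesn't resolve right now; check again later
    overlap = ips & KNOWN_BAD_IPS
    if overlap:
        print(f"{domain} resolves to known-bad IP(s): {', '.join(sorted(overlap))}")

It’s crude, but it demonstrates the principle: pivot from an indicator you know to infrastructure you didn’t, rather than waiting for a static signature to match.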