101 Ways I Screwed Up Making a Fake Identity

As most of you know, my professional area of expertise in security is incident response, with an emphasis on system / malware forensics and OSINT. I’m fortunate enough in my position in the security education and con community to sometimes get pulled in other directions of blue teaming and the occasional traditional penetration testing. However, the rarest of those fun little excursions are into the physical pen testing and social engineering realm. When it comes to breaking into buildings and pretending to be a printer tech, I’m merely a hobbyist.🙂

Therefore, it was a bit remarkable when, in the course of developing some training, I was asked to create some fake online personas that would hold up against moderately security-savvy users. I think most of us have created an online alter ego to some extent, but these needed to be pretty comprehensive to stand up to some scrutiny. Just making an email account wasn’t going to cut it.

So Pancakes went on an adventure into Backstop land. And made a lot of amusing mistakes and learned quite a few things on the way. I’ll share some of them here, so the social engineers can have a giggle and offer suggestions in the comments, and the other hobbyists can learn from my mistakes. Yes, there are automated tools that will help you do this if you have to do it in bulk for work, but many of the problems still exist. (Please keep in mind that misrepresenting yourself on these services can cause your account to be suspended or banned, so if you’re doing more than academic security education or research, do cover your legal bases.)

What I messed up

I’m not going to waste everybody’s time talking about how to build an unremarkable, average character in a sea of people, nor how we always set up a VM to work in to avoid cookies and other identity leakage (including our own fat fingering). Those have been discussed ad infinitum. Let’s start with what happened after those essentials, because creating a good identity is apparently a lot more involved.

  • It pretty much required a phone number from the get-go. I spun up my VMs and created the base sets of email and social media accounts that an average internet user might have, but Twitter was on to me from the start. I wasn’t planning on involving a phone for 2FA at all, but their black-box security algorithm tripped in seconds and made me use a phone to enable the first account. So, I’m pretty much terrible. Granted, there are plenty of online services that will give you a phone number, and I could have used burners if I felt the need, but it added a layer of complexity. In a good move, it looks like most of social media is now spamming new users to enable 2FA.
  • My super authorial D&D skills at creating dull people in big towns and reposting memes weren’t enough. I had to make friends and meet people to make the profiles pass as real. I knew that was going to be a challenge, but I didn’t expect it to become such a thought problem.
    • Twitter was the easiest once I fleshed out the characters and followed a bunch of accounts they would like, then people following those accounts. Some people just follow back folks who aren’t eggs (I do). I quickly had 40 or 50 followers on the dummy accounts. I’m apparently big in the vegan cooking scene now.
    • LinkedIn wasn’t too bad once somebody clued me into (LION) tags and good old 2000+ connection recruiter accounts. The people who participate in that essentially connect with anybody, regardless of the normal LinkedIn security and privacy rules about knowing people personally. So after making decent profiles, I just had to find a couple people with the tag, then fork out through 2nd degree connections in their vast networks to the correct industries and regions. Of course, I first had to do a bit of strategic plagiarizing from the skills sections of other people in my characters’ professions to build believable people. (We have yet to see if they got any recruiter messages, but none of them had really lucrative careers.)
    • Facebook was actually the one I struggled with the most, because you really need a starting point in your network to even add other people. I talked to a lot of security folks about my woes there and they made some good suggestions. The first was to play some Facebook browser games for a few minutes (I feel like my time with Candy Crush was worse than the dark web), then go to their community pages and plead “add me”. Again, people cheating the security / privacy system make it easy to gain a foothold. A couple popular games got me 50-100 friends, and from there, by using Facebook’s lovely verbose search system, I could move my network into the regions that my personas “lived in”. For instance, if the character were from Chicago, I would search for friends of friends of the connections I had made for people in Chicago, and those people were much more likely to add me because I was a “friend of so and so”. The other effective strategy people gave me was to present myself as an ardent fan of a sports team or political party in article comments. That worked pretty well, but not as fast as the games.
    • Once I had some “friends” on Facebook, moving into specific workplaces and schools wasn’t too hard. Public Facebook Events at those institutions and their associated venues provide lists of lots of people to add who were almost certainly physically present. Again, once I had a few connections in that circle, it became exponentially easier to add more.
    • Pinterest, YouTube, and Meetup were pretty easy – there’s really not a lot of verification of users there, by design. I liked them for this because they’re very public and tie the other social media profiles together nicely. I confess that I did lose my nerve when Meetup group sign up forms asked me detailed questions about my “kids” or my “spouse”, and stuck to ones that weren’t so intrusive, because that just felt creepy (says the woman who looked up a cached copy of your 2004 MySpace page).
  • I don’t normally feel guilty when I’m hacking somebody in a pen testing engagement (it’s for a good cause), but I did feel a little weird and guilty interacting with unwitting strangers on the internet as other people. It definitely took me out of my comfort zone – not only did I have to role play other personalities with wildly different views, but I had to shake my normal security paranoia to do stuff like click “add friend” a lot without hesitation and leak data through privacy settings, strategically.
  • I really had to commit to one character at a time to develop them into a person.
  • Even in a clean VM, there was still apparent tracking to my IP space on LinkedIn! I didn’t bother to use a proxy or a public connection for an educational endeavor, but if I had to flee the mafia or something I would certainly keep that in mind. Internet advertisement tracking is insidious and possibly scarier than any nation state actor.
  • Photos are everywhere yet were strangely really hard to come by. Fake identity creation sites provide profile pictures, but anybody half decent at OSINT will immediately reverse image search a suspicious profile’s picture. Their stock art photos have been so abused that searching any one at random provides a trove of suspect business reviews and fake LinkedIn profiles (a blog of its own…). Again, since this was a legal and ethical endeavor, I just used a collection of donated (previously unposted) photos from friends, heavily visually filtered and transformed. Even that required a lot of careful checking for metadata and visual clues that tied them to a location. I’m sure there are more expensive stock art photo sources that are less abused, but I’m not sure how ultimately virginal even their photos are. Maybe I should invest in a good wig and glasses.
  • This was time consuming, and I can see it becoming incredibly time consuming, which is the reason you use tools to automate the wits out of it if you do it regularly as a penetration tester. Facebook and Twitter timestamp content, and comprehensive ways around that are the kind of things social media companies give out hefty bug bounties for. On Twitter, you can retweet a year’s worth of old tweets in temporal sequence, but that will never change your publicly visible account creation date. Similarly, on Facebook you can manually change the date and location of posts, but your account creation date is still pretty easy to see based on other time data and your profile ID number. Ultimately, there seems to be no substitute for good old months and years of the account existing. If somebody has a workaround they’d like to share, I’m all ears.
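As an aside on the photo scrubbing mentioned above: if you want to verify your metadata cleaning actually worked, the EXIF-bearing parts of a JPEG are easy to inspect or drop with nothing but the standard library. This is a minimal sketch of my own (not a substitute for a dedicated tool like exiftool) that strips APP1 segments, where EXIF and XMP metadata such as GPS coordinates and camera details normally live:

```python
import struct

def strip_app1_segments(jpeg_bytes: bytes) -> bytes:
    """Drop APP1 (EXIF/XMP) segments from a JPEG byte stream."""
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG stream")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        marker = jpeg_bytes[i:i + 2]
        if marker == b"\xff\xda":  # Start of Scan: image data follows, copy it all
            out += jpeg_bytes[i:]
            break
        # Segment length is big-endian and includes its own two bytes
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker != b"\xff\xe1":  # keep every segment except APP1
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Remember that metadata is only half the problem – visual clues like street signs, reflections, and distinctive interiors survive any amount of header scrubbing.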

What we can learn about OSINT and defense from this exercise

  1. Not new, but always good to reiterate: people bypassing security and privacy controls for convenience is a really big security issue. People who blatantly bypassed the personal connection requirements on Facebook and LinkedIn made my job a lot easier. If nobody had accepted my fake characters’ invites on social media, I would have been pretty stymied and stuck buying followers or building my own network to be friends with myself.
  2. As an adjunct to #1, be mindful of connections via one of these “wide open” social media accounts (many hundreds of connections, or an indication they don’t screen requests in their profiles).
  3. Reverse image search the photo, all of the time. Maybe on two sites! This should be something you do before dating somebody or making a business deal, just like googling their name. No photos are, as always, a red flag.
  4. Check the age of social media profiles even if they look verbose and well defined. Stealing other peoples’ bios is easy.
  5. Never be connection #1, #2, or #3 to a profile you don’t recognize (you enabler).
  6. Don’t accept connection requests from Robin Sage, (or anybody else who presents themselves as a member of your community with no prior contact).
  7. In fact, don’t accept friend invites from people you don’t know even if they have 52 mutual friends and “go to your school”. Within a few minutes, I had 52 mutual friends and was bantering with the school mascot about a sportsball team I’d never heard of.
  8. Look for some stuff that’s deeper than social media and typical web 2.0 services when you’re investigating a person. My typical OSINTing delves into stuff like public records, phone and address history, and yes, family obituaries. Real people leave more artifacts online over the course of their lives than merely things that require a [Click Here to Sign in with Facebook], and the artifacts I listed are harder to fake quickly.
  9. Forget trust, verify everything.

Nation State Threat Attribution: a FAQ

Threat actor attribution has been big news, and big business for the past couple years. This blog consists of seven very different infosec professionals’ responses to frequently asked questions about attribution, with thoughts, experiences, and opinions (focusing on nation state attribution circa 2016). The contributors to this FAQ introduce themselves as follows (and express personal opinions in this article that don’t necessarily reflect those of their employers or this site):

  • DA_667: A loud, ranty guy on social media. Farms potatoes. Has nothing to do with Cyber.
  • Ryan Duff: Former cyber tactician for the gov turned infosec profiteer.
  • Munin: Just a simple country blacksmith who happens to do infosec.
  • Lesley Carhart: Irritatingly optimistic digital forensics and incident response nerd.
  • Krypt3ia: Cyber Nihilist
  • Viss: Dark Wizard, Internet bad-guy, feeder and waterer of elderly shells.
  • Coleman Kane: Cyber Intelligence nerd, malware analyst, threat hunter.

Many thanks to everybody above for helping create this, and for sharing their thoughts on a super-contentious and complex subject. Additional thanks to everybody on social media who contributed questions.

This article’s primary target audience is IT staff and management at traditional corporations and non-governmental organizations who do not deal with traditional military intelligence on a regular basis. Chances are, if you’re the exception to our rules, you already know it (and you’re probably not reading this FAQ).

Without further ado, let’s start with some popular questions. We hope you find some answers (and maybe more questions) in our responses.


Are state-sponsored network intrusions a real thing?

DA_667: Absolutely. “Cyber” has been considered a domain of warfare. State-sponsored intrusions have skyrocketed. Nation-states see the value of data that can be obtained through what is termed “Cyberwarfare”. Not only is access to sensitive data a primary motivator, but so is access to critical systems. Like, say, computers that control the power grid. Denying access to critical infrastructure can come in handy when used in concert with traditional, kinetic warfare.

Coleman: I definitely feel there’s ample evidence reported publicly by the community to corroborate this claim. It is likely important to distinguish how the “sponsorship” happens, and that there may (or may not) be a divide between those whose goal is the network intrusion and those carrying out the attack.

Krypt3ia: Moot question. Next.

Lesley: There’s pretty conclusive public domain evidence that they are. For instance, we’ve seen countries’ new weapons designs appear in other nations’ arsenals, critical infrastructure attacked, communications disrupted, and flagship commercial and scientific products duplicated within implausibly short timeframes.

Munin: Certainly, but they’re not exactly common, and there’s a continuum of attackers from “fully state sponsored” (that is, “official” “cyberwarfare” units) to “tolerated” (independent groups whose actions are not materially supported but whose activities are condoned).

Viss: Yes, but governments outsource that. We do. Look at NSA/Booz.

Ryan: Of course they are real. I spent a decent portion of my career participating in the planning of them.



Is this sort of thing new?

Coleman: Blame is most frequently pointed at China, though a lot of evidence (again, in the public) indicates that it is broader. That said, one of the earliest publicly-documented “nation-state” attacks is “Titan Rain”, which was reported as going back as far as 2003, and widely regarded as “state sponsored”. With that background, it would give an upper bound of ~13 years, which is pretty old in my opinion.

Ryan: It’s definitely not new. These types of activities have been around for as long as they have been able to be. Any well resourced nation will identify when an intelligence or military opportunity presents itself at the very earliest stages of that opportunity. This is definitely true when it comes to network intrusions. Ever since there has been intel to retrieve on a network, you can bet there have been nation states trying to get it.

Munin: Not at all. This is merely an extension of the espionage activities that countries have been flinging at each other since time immemorial.

DA_667: To make a long story short, absolutely not. For instance, it is believed that a recent exploit used by a group of nation-state actors is well over 10 years old. That’s one exploit, supposedly tied to one actor. Just to give you an idea.

Lesley: Nation state and industrial sabotage, political maneuvering, espionage, and counterespionage have existed as long as industry and nation states have. It’s nothing new. In some ways, it’s just gotten easier in the internet era. I don’t really differentiate.

Krypt3ia: No. Go read The Cuckoo’s Egg.

Viss: Hard to say – the first big one we knew about was Stuxnet, right? Specifically computer security stuff, that is, not in-person assets doing Jason Bourne stuff.



How are state-sponsored network intrusions different from everyday malware and attacks?

Lesley: Sometimes they may be more sophisticated, and other times aspects are less sophisticated. It really depends on actor goals and resources. A common theme we’ve seen is long term persistence – hiding in high value targets’ networks quietly for months or years until an occasion to sabotage them or exfiltrate data. This is pretty different from your average crimeware, the goal of which is to make as much money as possible as quickly as possible. Perhaps surprisingly, advanced actors might favor native systems administration tools over highly sophisticated malware in order to make their long term persistence even harder to detect. Conversely, they might employ very specialized malware to target a specialized system. There’s often some indication that their goals are not the same as the typical crimeware author.

Viss: The major difference is time, attention to detail, and access to commercial business resources. Take Stuxnet – they went to Microsoft to validate their USB hardware so that it would run autorun files – something that Microsoft killed years and years ago. Normal malware can’t do that. Red teams don’t do that. Only someone who can go to MS and say “Do this. Or you’ll make us upset” can do that. That’s the difference.

Munin: It’s going to differ depending on the specifics of the situation, and on the goals being served by the attack. It’s kind of hard to characterize any individual situation as definitively state-sponsored because of the breadth of potential actions that could be taken.

DA_667: In most cases, the differences between state-sponsored network intrusions and your run-of-the-mill intruder is going to boil down to their motivations, and their tradecraft. Tradecraft being defined as, and I really hate to use this word, their sophistication. How long have the bad guys operated in their network? How much data did they take? Did they use unique tools that have never before been seen, or are they using commodity malware and RATs (Trojans) to access targets? Did they actively try to hide or suppress evidence that they were on your computers and in your network? Nation-state actors are usually in one’s network for an extended period of time — studies show the average amount of time between initial access and first detection is somewhere over 180 days (and this is considered an improvement over the past few years). This is the primary difference between nation-states and standard actors; nation-states are in it for the long haul (unlike commodity malware attackers). They have the skill (unlike skids and/or hacktivists). They want sustained access so that they can keep tabs on you, your business, and your trade secrets to further whatever goals they have.

Krypt3ia: All of the above with one caveat. TTPs are being spread through sales, disinformation campaigns, and use of proxies. Soon it will be a singularity.

Coleman: Not going to restate a lot of really good info provided above. However, I think some future-proofing to our mindset is in order. There are a lot of historic “nation-state attributed” attacks (you can easily browse FireEye’s blog for examples) with very specific tools/TTPs. More recently, some tools have emerged as being allegedly used by both nation-state and criminal actors (Poison Ivy, PlugX, DarkComet, Gh0st RAT). It kind of boils down to “malware supply chain”. Back in 2003, the “supply chain” for malware capable of both stealth and remote access was comparatively small next to today’s, so it was likely more common to have divergence between the tooling funded for “state sponsored” attacks and what was available on the “underground market”. I think we have, and will continue to see, a convergence in tactics that muddies the waters and also makes our work as intel analysts more difficult, as more commodity tools improve.



Is attributing network attacks to a nation state actor really possible?

Munin: Maybe, under just the right circumstances – and with information outside of that gained within the actual attacked systems. Confirming nation-state responsibility is likely to require more conventional espionage information channels [ e.g. a mole in the ‘cyber’ unit who can confirm that such a thing happened ] for attribution to be firmer than a “best guess” though.

DA_667: Yes and no. Hold on, let me explain. There are certain signatures, TTPs, common targets, and common tradecraft between victims that can be put together to grant you clues as to what nation-state might be interested in given targets (foreign governments, economic verticals, etc.). There may be some interesting clues in artifacts (tools, scripts, executables, things the nation-state uses) such as compile times and/or language support that could be used if you have enough samples to make educated guesses as well, but that is all that data will amount to: hypothetical attribution. There are clues that say X is the likely suspect, but that is about as far as you can go.
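To make the compile-time clue concrete: the build timestamp lives at a fixed spot in every Windows PE header, and pulling it out takes only a few lines of standard-library Python. A minimal illustrative sketch (mine, not any contributor’s tooling) – and remember this field is trivially forged, which is exactly why it’s a clue rather than proof:

```python
import struct
from datetime import datetime, timezone

def pe_compile_timestamp(data: bytes) -> datetime:
    """Return the TimeDateStamp from a Windows PE (COFF) header as UTC."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    # e_lfanew at offset 0x3C points to the "PE\0\0" signature
    (pe_offset,) = struct.unpack_from("<I", data, 0x3C)
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")
    # COFF header: Machine (2 bytes), NumberOfSections (2), TimeDateStamp (4)
    (stamp,) = struct.unpack_from("<I", data, pe_offset + 8)
    return datetime.fromtimestamp(stamp, tz=timezone.utc)
```

Across a large set of samples, clustering these timestamps by hour of day is one (weak, forgeable) way analysts guess at an author’s working time zone.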

Lesley: Kind of, by the right people with access to the right evidence. It ends up being a matter of painstaking analysis leading to a supported conclusion that is deemed plausible beyond a reasonable doubt, just like most criminal investigations.

Viss: Sure! Why not? You could worm your way back from the C2 and find the people talking to it and shell them! NSA won’t do that though, because they don’t care or haven’t been tasked to – and the samples they find, if they even find samples, will be kept behind closed doors at Mandiant or wherever, never to see the light of day – and we as the public will always get “trust us, we’re law enforcement”. So while, sure, it’s totally possible, A) they won’t let us do it because, well, “we’re not cool enough”, and B) they can break the law and we can’t. It will always boil down to “just trust us”, which isn’t good enough, and never helps any public discourse at all. The only purpose talking to the press about it serves is so that they can convince the House/Senate/other decision makers “we need to act!” or whatever. It’s so that they can go invade countries, or start shit overseas, or tap cables, or spy on Americans. The only purpose talking about it in the media serves is so that they get their way.

Coleman: It is, but I feel only by the folks with the right level of visibility (which, honestly, involves diplomacy and basically the resources of a nation-state to research). I feel the interstate diplomacy/cooperation part is significantly absent from a lot of the nation-state attribution reporting today. At the end of the day, I can’t tell you with 100% certainty what the overall purpose of an intrusion or data theft is. I can only tell you what actions were taken, where they went, what was taken, and possible hypotheses about what relevance it may have.

Ryan: Yes, but I believe it takes the resources of a nation-state to do it properly. There needs to be a level of access to the foreign actors that is beyond just knowing the tools they use and the tradecraft they employ. These can all be stolen and forged. There needs to be insight into adversaries’ mission planning, the creation of their infrastructure, their communications with each other, etc., in order to conduct proper attribution. Only a nation-state with an intelligence capability can realistically perform this kind of collection. That’s why it’s extremely difficult, in my opinion, for a non-government entity to really do proper state-sponsored attribution.

Krypt3ia: There will always be doubt because disinformation can be baked into the malware, the operations, and the clues left deliberately. As we move forward, the actors will be using these techniques more and it will really rely on other “sources and methods” (i.e. espionage with HUMINT) to say more definitively who dunnit.



Why do security professionals say attribution is hard?

Lesley: Commercial security teams and researchers often lack enough access to data to make any reliable determination. This doesn’t just include lack of the old-fashioned spy vs. spy intelligence, but also access to the compromised systems that attackers often use to launch their intrusions and control their malware. It can take heavy cooperation from law enforcement and foreign governments far outside one network to really delve into a well-planned global hacking operation. There’s also the matter of time – while a law enforcement or government agency has the freedom to track a group across multiple intrusions for years, the business goal of most private organizations is normally to mitigate the damage and move on to the next fire.

Munin: Being truly anonymous online is extremely difficult. Framing someone else? That’s comparatively easy. Especially in situations where there exists knowledge that certain infrastructure was used to commit certain acts, it’s entirely possible to co-opt that infrastructure for your own uses – and thus gain at least a veneer of being the same threat actor. If you pay attention to details (compiling your programs during the working hours of those you’re seeking to frame; using their country’s language for localizing your build systems; connecting via systems and networks in that country, etc.) then you’re likely to fool all but the most dedicated and well-resourced investigators.

Coleman: In my opinion, many of us in the security field suffer from a “fog of war” effect. We only have complete visibility to our interior, and beyond that we have very limited visibility of the perimeter of the infrastructure used for attacks. Beyond that, unless we are very lucky, we may be granted some visibility into other victims’ networks. This is a unique space that both governments and private sector infosec companies get to reside within. However, in my opinion, the visibility will still end just beyond their customer base or scope of authority. At the end of the day, it becomes an inference game, trying to sum together multiple data points of evidence to eliminate alternative hypotheses in order to converge on the “likeliest reality”. It takes a lot of time and effort to get it right, and very frequently, there are external drivers to get it “fast” before getting it “correct”. When the “fast” attribution ends up in public, it becomes “ground truth” for many, whether or not it actually is. This complicates the job of an analyst trying to do it correctly. So I guess both “yes” and “no” apply. Attribution is “easy” if your audience needs to point a finger quickly; attribution is “hard” if your audience expects you to blame the right perp😉.

DA_667: Okay, so in answering this, let me give you an exercise to think about. If I were a nation-state and I wanted to attack target “Z” to serve some purpose or goal, directly attacking target “Z” has implications and risks associated with it, right? So instead, why not look for a vulnerable system in another country, “Y”, compromise that system, then make all of my attacks on “Z” look like they are coming from “Y”? This is the problem with trying to do attribution. There were previous campaigns where there was evidence that nation-states were doing exactly this: proxying off of known, compromised systems to purposely hinder attribution efforts. Now, imagine having to get access to a system that was used to attack you, that is in a country that doesn’t speak your native language or, perhaps, doesn’t have good diplomatic ties with your country. Let’s not even talk about the possibility that they may have used more than one system to hide their tracks, or the fact that there may be no forensic data on these systems that assists in the investigation. This is why attribution is a nightmare.

Krypt3ia: See my answers above.

Viss: Because professionals never get to see the data. And if they *DO* get to see the data, they get to deal with what DA explains above. It’s a giant shitshow and you can’t catch people breaking the law if you have to follow the law. That’s just the physics of things.

Ryan: DA gave a great example about why you can’t trust where the attack “comes from” to perform attribution. I’d like to give an example regarding why you can’t trust what an attack “looks like” either. It is not uncommon for nation-state actors to not only break into other nation-state actors’ networks and take their tools for analysis, but to also then take those tools and repurpose them for their own use. If you walk the dog on that, you’re now in a situation where the actor is using pre-compromised infrastructure in use by another actor, while also using tools from another actor to perform their mission. If Russia is using French tools and deploying them from Chinese compromised hop-points, how do you actually know it’s Russia? As I mentioned above, I believe you need the resources of a nation-state to truly get the information needed to make the proper attribution to Russia (ie: an intelligence capability). This makes attribution extremely hard to perform for anyone in the commercial sector.



How do organizations attribute attacks to nation states the wrong way?

Munin: Wishful thinking, trying to make an attack seem more severe than perhaps it really was. Nobody can blame you for falling prey to the wiles of a nation-state! But if the real entrypoint was boring old phishing, well, that’s a horse of a different color – and likely a set of lawsuits for negligence.

Lesley: From a forensics perspective, the number one problem I see is trying to fit evidence to a conclusion, which is totally contrary to the business of investigating crimes. You don’t base your investigation or conclusions off of your initial gut feeling. There is certainly a precedent for false flag operations in espionage, and it’s pretty easy for a good attacker to emulate a less advanced one. To elaborate, quite a bit of “advanced” malware is available to anybody on the black market, and adversaries can use the same publicly posted indicators of compromise that defenders do to emulate another actor like DA and Ryan previously discussed (for various political and defensive reasons). That misdirection can be really misleading, especially if it plays to our biases and suits our conclusions.

DA_667: Trying to fit data into a mold; you’ve already made up your mind that advanced nation-state actors from Elbonia want your secret potato fertilizer formula, and you aren’t willing to see it any differently. What I’m saying is that some organizations have a bias that leads them to believe that a nation-state actor hacked them.

In other cases, you could say “It was a nation-state actor that attacked me”, and if you have an incident response firm back up that story, it could be enough to get an insurance company to pay out a “cyber insurance” policy for a massive data breach because, after all, “no reasonable defense could have been expected to stop such sophisticated actors and tools.”

Krypt3ia: Firstly they listen to vendors. Secondly they are seeking a bad guy to blame when they should be focused on how they got in, how they did what they did, and what they took. Profile the UNSUB and forget about attribution in the cyber game of Clue.

Viss: They do it for political reasons. If you accuse Pakistan of lobbing malware into the US it gives politicians the talking points they need to get the budget and funding to send the military there – or to send drones there – or spies – or write their own malware. Since they never reveal the samples/malware, and since they aren’t on the hook to, everyone seems to be happy with the “trust us, we’re law enforcement” replies, so they can accuse whoever they want, regardless of the reality and face absolutely no scrutiny. Attribution at the government level is a universal adapter for motive. Spin the wheel of fish, pick a reason, get funding/motive/etc.

Coleman: All of the above are great answers. In my opinion, among the biggest mistakes I’ve seen not addressed above is asking the wrong questions. I’ve heard many stories about “attributions” driven by a desire by customers/leaders to know “Who did this?”, which 90% of the time is non-actionable information, but it satisfies the desires of folks glued to TV drama timelines like CSI and NCIS. Almost all the time, “who did this?” doesn’t need to be answered, but rather “what tools, tactics, infrastructure, etc. should I be looking for next?”. Nine times out of ten, the adversary resides beyond the reach of prosecution, and your “end game” is documentation of the attack, remediation of the intrusion, and closing the vulnerabilities used to execute the attack.



So, what does it really take to fairly attribute an attack to a nation state?

Munin: Extremely thorough analysis coupled with corroborating reports from third parties – you will never get the whole story from the evidence your logs get; you are only getting the story that your attacker wants you to see. Only the most naive of attackers is likely to let you have a true story – unless they’re sending a specific message.

Coleman: In my opinion, there can be many levels to “attribution” of an attack. Taking the common “defense/industrial espionage” use case that’s widely associated with “nation state attacks”, there could be three semi-independent levels that may or may not intersect: 1) Tool authors/designers, 2) Network attack/exploiters, 3) Tasking/customers. A common fallacy that I’ve observed is to assume that a particular adversary (#2 from above) exclusively cares about gathering the specific data that they’ve been tasked with at one point. IMO, recognize that any data you have is “in play” for any of #2 from my list above. If you finally get an attacker out, and keep them out, someone else is bound to be thrown your way with different TTPs to get the same data. Additionally, a good rule as time goes on is that all malware becomes “shared tooling”, so make sure not to confuse “tool sharing” with any particular adversary. Or maybe you’re tracking a “Poison Ivy Group”. Lots of hard work, and also a recognition that no matter how certain you are, new information can (and will!) lead to reconsideration.

Lesley: It’s not as simple as looking at IP addresses! Attribution is all about doing thorough analysis of internal and external clues, then deciding that they lead to a conclusion beyond a reasonable doubt. Clues can include things like human language in malicious code, timestamps on files that show activity in certain time zones, targets, tools, and even “softer” indicators like the patience, error rate, and operational timeframes of the attackers. Of course, law enforcement and the most well-resourced security firms can employ more traditional detective, intelligence, and counterespionage resources. In the private sector, we can only leverage shared, open source, or commercially purchased intelligence, and the quality of this varies.

Viss: A slip up on their part – like the NSA derping it up and leaving their malware on a staging server, or using the same payload in two different places at the same time which gets ID’ed later at something like Stuxnet where attribution happens for one reason or another out of band and it’s REALLY EASY to put two and two together. If you’re a government hacking another government you want deniability. If you’re the NSA you use Booz and claim they did it. If you’re China you proxy through Korea or Russia. If you’re Russia you ride in on a fucking bear because you literally give no fucks.

DA_667: A lot of hard work, thorough analysis of tradecraft (across multiple targets), access to vast sets of data to attempt to perform some sort of correlation, and, in most cases, access to intelligence community resources that most organizations cannot reasonably expect to have access to.

Krypt3ia: Access to IC data and assets for other sources and methods. Then you adjudicate that information the best you can. Then you forget that and move on.

Ryan: The resources of a nation-state are almost a prerequisite to “fairly” attribute something to a nation state. You need intelligence resources that are able to build a full picture of the activity. Just technical indicators of the intrusion are not enough.



Is there a way to reliably tell a private advanced actor aiding a state (sanctioned or unsanctioned) from a military or government threat actor?

Krypt3ia: Let me put it this way. How do you know that your actor isn’t a freelancer working for a nation state? How do you know that a nation state isn’t using proxy hacking groups or individuals?

Ryan: No. Not unless there is some outside information informing your analysis, like intelligence information on the private actor or a leak of their tools (for example, the HackingTeam hack). I personally believe there isn’t much of a distinction to be made between these types of actors if they are still state-sponsored in their activities, because they are working off of their sponsor’s requirements. Depending on the level of the sponsor’s involvement, the tools could even conform to standards laid out by the nation-state itself. I think efforts to try to draw these distinctions are rather futile.

DA_667: No. In fact, given what you now know about how nation-state actors can easily make it seem like attacks are coming from a different IP address and country entirely, what makes you think that they can’t alter their tool footprint and just use open-source penetration testing tools, or recently open-sourced bots with re-purposed code?

Munin: Not a chance.

Viss: Not unless you have samples or track record data of some kind. A well funded corporate adversary who knows what they’re doing should likely be indistinguishable from a government. Especially because the governments will usually hire exactly these companies to do that work for them, since they tend not to have the talent in house.

Coleman: I don’t think there is a “reliable” way to do it. Rather, for many adversaries, with constant research and regular data point collection, it is possible to reliably track specific adversary groups. Whether or not they could be distinguished as “military”, “private”, or “paramilitary” is up for debate. I think that requires very good visibility into the cyber aspects of the country / military in question.

Lesley: That would be nearly impossible without boots-on-ground, traditional intelligence resources that you and I will never see (or illegal hacking of our own).



Why don’t all security experts publicly corroborate the attribution provided by investigating firms and agencies?

DA_667: In most cases, disagreements on attribution boil down to:

  1. Lack of information
  2. Inconclusive evidence
  3. Said investigating firms and/or agencies are not laying all the cards out on the table; security experts do not have access to the same dataset the investigators have (either due to proprietary vendor data, or classified intelligence)

Munin: Lack of proof. It’s very hard to prove with any reliability who’s done what online; it’s even harder to make it stick. Plausible deniability is very much a thing.

Lesley: Usually, because I don’t have enough information. We might lean towards agreeing or disagreeing with the conclusions of the investigators, but at the same time be reluctant to stake our professional and ethical reputation on somebody else’s investigation of evidence we can’t see ourselves. There have also been many instances where the media jumped to conclusions which were not yet appropriate or substantiated. The important thing to remember is that attribution has nothing to do with what we want or who we dislike. It’s the study of facts, and the consequences for being wrong can be pretty dire.

Krypt3ia: Because they are smarter than the average Wizard?

Coleman: In my opinion, many commercial investigative firms are driven to threat attribution by numerous non-evidential factors. There’s kind of a “race to the top (bottom?)” these days for “threat intelligence”, and a significant influence on private companies to be first-to-report, as well as show themselves to have unique visibility to deliver a “breaking” story. In a word: marketing. Each agency wants to look like they have more and better intelligence on the most advanced threats than their competition. Additionally, there’s an audience component to it as well. Many organizations suffering a breach would prefer to adopt the story line that their expensive defenses were breached by “the most advanced well-funded nation-state adversary” (a.k.a. “Deep Panda”), versus “some 13 year-olds hanging out in an IRC chatroom named #operation_dildos”. Because of this, I generally consider a lot of public reporting conclusions to be worth taking with a grain of salt, and I’m more interested in the handful that actually report technical data that I can act upon.

Viss: Some want to get in bed with (potential) employers, so they cozy up to that version of the story. Some don’t want to rock the boat, so they go along with the boss. Some have literally no idea what they’re talking about; they’re fresh out of college and they can’t keep their mouths shut. Some are being paid by someone to say something. It’s a giant grab bag.



Should my company attribute network attacks to a nation state?

DA_667: No. Oftentimes, your organization will NOT gain anything of value attempting to attribute an attack to a given nation-state. Identify the Indicators of Compromise as best you can, and distribute them to peers in your industry or professional organizations who may have more resources for determining whether an attack was part of a campaign spanning multiple targets. Focus on recovery and hardening your systems so you are no longer considered a soft target.

Viss: I don’t understand why this would be even remotely interesting to average businesses. This is only interesting to the “spymaster bobs” of the world, and the people who routinely fellate the intelligence community for favors/intel/jobs/etc. In most cases it doesn’t matter, and in the cases it DOES matter, it’s not really a public discussion – or a public discussion won’t help things.

Lesley: For your average commercial organization, there’s rarely any reason (or sufficient data) to attribute an attack to a nation state. Identifying the type of actor, IOCs, and TTPs is normally adequate to maintain threat intelligence or respond to an incident. Be very cautious (legally / ethically / career-wise) if your executives ask you to attribute to a foreign government.

Munin: I would advise against it. You’ll get a lot of attention, and most of it’s going to be bad. Attribution to nation-state actors is very much part of the espionage and diplomacy game and you do not want to engage in that if you do not absolutely have to.

Ryan: No. The odds of your organization even being equipped to make such an attribution are almost nil. It’s not worth expending the resources to even attempt such an attribution. The gain, even if you are successful, would still be minimal.

Coleman: I generally would say “no”. You should ask yourselves, if you actually had that information in a factual form, what are you going to do? Stop doing business in that country? I think it is generally more beneficial to focus on threat grouping/clustering (if I see activity from IP address A.B.C.D, what historically have I observed in relation to it that I should look out for?) over trying to tie back to “nation-states” or even to answer the question “nation state or not?”. If you’re only prioritizing things you believe are “nation-state”, you’re probably losing the game considerably in other threat areas. I have observed very few examples where nation-state attribution makes any significant difference, as far as response and mitigation are concerned.

Krypt3ia: Too many try and fail.


Can’t we just block [nation state]?

Krypt3ia: HA! I have seen rule sets on firewalls where they try to block whole countries. It’s silly. If I am your adversary and I have the money and time, I will get in.

DA_667: No, and for a couple reasons. By the time a research body or a government agency has released indicators against a certain set of tools or a supposed nation-state actor to the general public, those indicators are long past stale. The actors have moved on to using new hosts to hide their tracks, using new tools and custom malware to achieve their goals, and so on, and so forth. Not only that, but the solution isn’t as easy as block [supposed malicious country’s IP address space]. A lot of companies that are targeted by nation-states are international organizations with customers and users that live in countries all over the world. Therefore, you can’t take a ham-fisted approach such as blocking all Elbonian IP addresses. In some cases, if you’re a smaller business that has no users or customers from a given country (e.g., a local bank somewhere in Nevada would NOT be expecting customers or users to connect from Elbonia), you might be able to get away with blocking certain countries, and that will make it harder for the lowest tier of attackers to attack your systems directly… but again, given what you now know about how easy it is for a nation-state actor to compromise another system, in another country, you should realize that blocking IP addresses assigned to a given country is not going to be terribly helpful if the nation-state is persistent and has high motivation to attack you.

Munin: Not really. IP blocks will kill the low bar attacks, but those aren’t really what you’re asking after if you’re in this FAQ, are you? Any attacker worth their salt can find some third party to proxy through. Not to mention IP ranges get traded or sold now and then – today’s Chinese block could be someone else entirely tomorrow.

Lesley: Not only might this be pretty bad for business, it’s pretty easy for any actor to evade using compromised hosts elsewhere as proxies. Some orgs do it, though.

Coleman: Depending upon the impact, sure, why not? It’s up to you informing your leadership, and if your leaders are fine with blocking large blocks of the Internet that sometimes are the endpoint of an attack, then that’s acceptable. I’ve had some associates in my peer group that are able to successfully execute this strategy. Sometimes (3:30pm on a Friday, for instance) I envy them.

Ryan: If you’re not doing business outside of your local country and don’t ever care to, it couldn’t hurt. By restricting connections to your network from only your home country, you will likely add some security. However, if your network is a target, doing this won’t stop an actor from pivoting from a location that is within your whitelist to gain access to your network.

Viss: Sure! Does your company do business with China? Korea? Pakistan? Why bother accepting traffic from them? Take the top ten ‘shady countries’ and just block them at the firewall. If malware lands on your LAN, it won’t be able to phone home. If your company DOES do business with those countries, it’s another story – but if there is no legitimate reason 10 laptops in your sales department should be talking to Spain or South Africa, then it’s a pretty easy win. It won’t stop a determined attacker, but if you’re paying attention to dropped packets leaving your network you’re gonna find out REAL FAST if there’s someone on your LAN. They won’t know you’re blocking til they slam headfirst into a firewall rule and leave a bunch of logs.
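
To illustrate why country blocking is coarse at best, here’s a minimal Python sketch of the kind of egress check the answers above describe. The CIDR blocks are made-up placeholders (RFC 5737 documentation ranges), not real country allocations – as noted above, real allocations change hands and any static list goes stale:

```python
import ipaddress

# Hypothetical "blocked country" ranges. These are documentation ranges
# standing in for a real (and constantly shifting) country allocation list.
BLOCKED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(ip_str: str) -> bool:
    """Return True if a destination IP falls inside a blocked range."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in BLOCKED_RANGES)

# Log matches instead of silently dropping them, so you notice a host on
# your LAN repeatedly trying to phone home to a blocked range.
for dest in ["203.0.113.7", "192.0.2.1"]:
    print(dest, "blocked" if is_blocked(dest) else "allowed")
```

The real value, as Viss points out, is less in the block itself than in watching the denied traffic: a workstation hammering a blocked range is a loud indicator that something on your LAN is phoning home.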


Hey, what’s with the Attribution Dice?

Ryan: I’m convinced that lots of threat intelligence companies have these as part of their standard report writing kit.

Lesley: They’re awesome! If you do purposefully terrible, bandwagon attribution of the trendy scapegoat of the day, infosec folks are pretty likely to notice and poke a little fun at your expense.

Krypt3ia: They are cheaper than Mandiant or Crowdstrike and likely just as accurate.

Coleman: In some situations, the “Who Hacked Us?” web application may be better than public reporting.

Munin: I want a set someday….

Viss: They’re more accurate than the government, that’s for sure.

DA_667: I have a custom set of laser-printed attribution dice that a friend had commissioned for me, where my Twitter handle is listed as a possible threat actor. But in all seriousness, the attribution dice are a sort of inside joke among security experts who deal in threat intelligence. Trying to do attribution is a lot like casting the dice.


What’s a Challenge Coin, Anyway? (For Hackers)

So what are these “challenge coins”?

Challenge coins come from an old military tradition that bled into the professional infosec realm, then into the broader hacker community through the continual overlap between the communities. In some ways like an informal medal, coins generally represent somewhere you have been or something you have accomplished. You can buy some, and be gifted or earn others; the latter are generally more traditional and respected.

There are a few stories about how challenge coins originated in the U.S. military; most have been lost to history and embellished over time, but I will tell you the tale as it was passed down to me:

During World War I, an officer gifted coin-like squadron medallions to his men. One of his pilots decided to wear it about his neck as we would wear dog tags today. Some time later, that pilot’s plane was shot down by the enemy and he was forced down behind enemy lines and captured. As a prisoner of war, all of his papers were taken, but as was customary he was allowed to keep his jewelry, including the medallion. During the night, the pilot managed to take advantage of a distraction to make a daring escape. He spent days avoiding patrols and ultimately made his way to the French border. Unfortunately, the pilot could not speak any French, and with no uniform and no identification, the French soldiers he encountered assumed he was a spy. The only thing that spared him execution was showing them his medallion, upon which there was a squadron emblem the soldiers recognized and could verify.

Today, people who collect challenge coins tend to have quite a few more than just one.

What’s the “challenge”?

Challenge coins are named such because anybody who has one can issue a challenge to anybody else who has one. The game is a gamble, and it goes like this:

  • The challenger throws down their coin, thereby issuing a challenge to one or more people.
  • The person or people challenged must each immediately produce a coin of their own.
  • If any of the people challenged cannot produce a coin, they must buy a drink for the challenger.
  • If the people challenged all produce coins, the challenger must buy the next round of drink(s) for them.

Therefore, a wise person carries a coin in a pocket, wallet, or purse, at all times!

How do I get challenge coins?

As I mentioned before, the three major ways to get a challenge coin in the military and in the hacking community are to buy one, earn one, or be gifted one.

  • You can buy coins at many places and events to show you were there. Many cons sell them now, as well as places like military installations and companies. They’re a good fundraiser.
  • You can be gifted a coin. This is normally done as a sign of friendship or gratitude, and the coins gifted are normally ones that represent a group or organization like a military unit, company, non-profit, or government agency. The proper way to gift a coin is enclosed in a handshake.
  • You can earn a coin. Many competitions and training programs offer special coins for top graduates, champions, and similar accomplishments (similar to a trophy). This is the most traditional way to receive a coin.

How do I display my coins, once I have more than one?

On a coin rack or coin display case.

Can I make my own challenge coins? How much do they cost?

Yes. Lots of companies will sell you challenge coins. The price varies drastically based on the number ordered, colors, materials, and complexity of the vector design.

Think about whether you plan to sell coins to people, gift them on special occasions, or make them a reward, and plan accordingly.

Can I see some examples of infosec / hacking challenge coins?

Sure! I hope you’ve enjoyed this brief introduction to challenge coins. Here are some of my friends and their favorite challenge coins:




The $5 Vendor-Free Crash Course: Cyber Threat Intel

Threat intelligence is currently the trendy thing in information security and, as with many new security trends, it is frequently misunderstood and misused. I want to take the time to discuss some common misunderstandings about what threat intelligence is and isn’t, where it can be beneficial, and where it’s wasting your (and your analysts’) time and money.

To understand cyber threat intelligence as more than a buzzword, we must first understand what intelligence is in a broader sense. Encyclopedia Britannica provides this gem of a summary:

“… Whether tactical or strategic, military intelligence attempts to respond to or satisfy the needs of the operational leader, the person who has to act or react to a given set of circumstances. The process begins when the commander determines what information is needed to act responsibly.”

The purpose of intelligence is to aid in informed decision making. Period. There is no point in doing intelligence for intelligence’s sake.

Cyber threat intelligence is not simply endless feeds of malicious IP addresses and domain names. To truly be useful intelligence, threat intel should be actionable and contextual. That doesn’t mean attribution of a set of indicators to a specific country or organization; for most companies, that is at best futile and at worst dangerous. It simply means gathering data to anticipate, detect, and mitigate threat actor behavior as it may relate to your organization. If threat intelligence is not contextual or is frequently non-actionable in your environment, you’re doing “cyber threat” without much “intelligence” (and it’s probably not providing much benefit).

Threat intelligence should aid you in answering the following six questions:

  1. What types of actors might currently pose a threat to your organization or industry? Remember that for something to pose a threat, it must have capability, opportunity, and intent.
  2. How do those types of actors typically operate?
  3. What are the “crown jewels” prime for theft or abuse in your environment?
  4. What is the risk of your organization being targeted by these threats? Remember that risk is a measure of the probability of being targeted and the harm that could be caused if you were.
  5. What are better ways to detect and mitigate these types of threats in a timely and proactive manner?
  6. How can these types of threats be responded to more effectively?

Note that the fifth question is the only one that really involves those big lists of Indicators of Compromise (IoCs). There is much more that goes into intelligence about the threats that face us than simply raw detection of specific file hashes or domains without any context. You can see this in good quality threat intelligence reports – they clearly answer “what” and “how” while also providing strategic and tactical intelligence.

I’m not a fan of the “throw everything at the wall and see what sticks” mentality of using every raw feed of IoCs available. This is incredibly inefficient and difficult to vet and manage. The real intelligence aspect comes in when selecting which feeds of indicators and signatures are applicable to your environment, where to place sensors, and which monitored alerts might merit a faster response. Signatures should be used as opposed to one-off indicators when possible. Indicators and signatures should be vetted and deduplicated. Sensibly planning expiration for indicators that are relatively transient (like compromised sites used in phishing or watering hole attacks) is also pretty important for your sanity and the health of your security appliances.
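
The vetting steps above – deduplication plus expiration for transient indicators – can be sketched in a few lines of Python. The indicator fields and TTL values here are illustrative assumptions, not any standard feed format:

```python
from datetime import datetime, timedelta

# Illustrative TTLs: transient indicators (e.g., compromised sites used in
# phishing) expire quickly; attacker-controlled C2 infrastructure lives longer.
TTL_BY_TYPE = {
    "phishing_url": timedelta(days=14),
    "c2_domain": timedelta(days=90),
}

def vet_indicators(raw_feed, now=None):
    """Deduplicate raw indicators and drop any past their type's TTL.

    raw_feed: iterable of dicts like
        {"value": "evil.example", "type": "c2_domain",
         "first_seen": datetime(...)}
    """
    now = now or datetime.utcnow()
    seen, vetted = set(), []
    for ioc in raw_feed:
        if ioc["value"] in seen:                      # deduplicate
            continue
        ttl = TTL_BY_TYPE.get(ioc["type"], timedelta(days=30))
        if now - ioc["first_seen"] > ttl:             # expire stale entries
            continue
        seen.add(ioc["value"])
        vetted.append(ioc)
    return vetted
```

Running something like this before loading feeds into your appliances keeps alert volume (and analyst sanity) manageable; the right TTLs depend entirely on your environment and the feeds in question.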

So, how do you go about these tasks if you can’t staff a full time threat intelligence expert? Firstly, many of the questions about how you might be targeted and what might be targeted in your environment can be answered by your own staff. After your own vulnerability assessments, bring your risk management, loss prevention, and legal experts into the discussion (as well as your sales and development teams if you develop products or services). Executive buy-in and support is key at this stage. Find out where the money is going and coming from, and you will have a solid start on your list of crown jewels and potential threats. I also highly recommend speaking to your social media team about your company’s global reputation and any frequent threats or anger directed at them online. Are you disliked by a hacktivist organization? Do you have unscrupulous competitors? This all plays into threat intelligence and security decisions.

Additionally, identify your industry’s ISAC or equivalent, and become a participating member. This allows you the unique opportunity to speak under strict NDA with security staff at your competitors about threats that may impact you both. Be cognizant that this is a two-way street; you will likely be expected to participate actively as opposed to just gleaning information from others, so you’ll want to discuss this agreement with your legal counsel and have the support of your senior leadership. It’s usually worth it.

Once you have begun to answer questions about how you might be targeted, and what types of organizations might pose a threat, you can begin to make an educated decision about which specific IOCs might be useful, and where to apply them in your network topology. For instance, most organizations are impacted by mass malware, yet if your environment consists entirely of Mac OS, a Windows ransomware indicator feed is probably not high in your priorities. You might, however, have a legacy Solaris server containing engineering data that could be a big target for theft, and decide to install additional sensors and Solaris signatures accordingly.
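
Following the Mac OS / Solaris example, here is a trivial sketch of prioritizing feeds by the platforms actually present in your environment. The inventory, feed names, and platform tags are all hypothetical stand-ins:

```python
# Hypothetical asset inventory: platform -> host count. In practice this
# would come from your CMDB or asset management tooling.
INVENTORY = {"macos": 250, "solaris": 2}

# Hypothetical feed catalog, each entry tagged with the platform it covers.
feeds = [
    {"name": "windows-ransomware", "platform": "windows"},
    {"name": "solaris-exploits", "platform": "solaris"},
    {"name": "macos-malware", "platform": "macos"},
]

# Keep only feeds matching platforms we actually run; the Windows
# ransomware feed is dropped because there are no Windows hosts here.
relevant = [f["name"] for f in feeds if f["platform"] in INVENTORY]
print(relevant)
```

Even a filter this crude forces the useful question: does this indicator source apply to anything we actually operate, and if so, where should the sensors go?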

There are numerous commercial threat intelligence companies who will sell your organization varying types of cyber threat intelligence data of varying qualities (in the interest of affability, I’ll not be rating them in this article). When selecting between paid and free intelligence sources (and indeed, you should probably be using a combination of both), keep the aforementioned questions in mind. If a vendor’s product will not help answer a few of those questions for you, you may want to look elsewhere. When an alert fires, a vendor who sells “black box” feeds of indicators without context may cost you extra time and money, while conversely a vendor who sells nation state attribution in great detail doesn’t really provide the average company any actionable information.

Publicly available sources of threat intelligence data are almost endless on the internet and can be as creative as your ability to look for them. Emerging Threats provides a fantastic feed of free signatures that include malware and exploits used by advanced actors. AlienVault OTX and CIRCL’s MISP are great efforts to bring together a lot of community intelligence into one place. Potentially useful IoC feeds are available from many organizations like IOC Bucket and SANS ISC DShield (I recommend checking out hslatman’s fairly comprehensive list). As previously noted, don’t discount social media and your average saved Google search as great sources of intel, either.

The most important thing to remember about threat intelligence is that the threat landscape is always changing – both on your side and the attackers’. You are never done with gathering intelligence or making security decisions based on it. You should touch base with everybody involved in your threat intelligence process on a regular basis to ensure you are still using actionable data in the correct context.


In summary, don’t do threat intelligence for the sake of doing threat intelligence. Give careful consideration to choosing intelligence that can provide contextual and actionable information to your organization’s defense. This is a doable task, possible even for organizations that do not have dedicated threat intelligence staff or budgets, but it will require some regular maintenance and thought.

Many thanks to the seasoned Intel pros who kindly took the time to read and critique this article: @swannysec, @MalwareJake, and @edwardmccabe

I highly recommend reading John Swanson’s work on building a Threat Intel program next, here.

Why do Smartphones make great Spy Devices?

There has been extensive, emotional political debate over the use of shadow IT and misuse of mobile phones in sensitive areas by former US Secretaries of State Colin Powell and Hillary Clinton. There is a much needed and very complex discussion we must have about executive security awareness and buy-in, but due to extensive misinformation, I wanted to briefly tackle the issue of bringing smartphones into sensitive areas and conversations (and why it’s our responsibility to educate our leadership to stop doing so).

This should not be a partisan issue. It underscores a pervasive security issue in business and government: if employees perceive security controls as inexplicably inconvenient, they will try to find a way to circumvent them, and if they are high enough level, their actions may go unquestioned. This can happen regardless of party or organization, and in the interest of security, information security professionals must try to discuss these cases in a non-partisan way to try to prevent them from reoccurring.

That being said, let’s talk briefly about why carrying smartphones into any sensitive business or government conversations matters, and is a particularly bad habit that needs to be broken.

There are two things to remember about hackers. The first is that we’re as lazy (efficient?) as any other humans, and we will take the path of least resistance to breach and move across a network. Instead of uploading and configuring our own tools on a network to move laterally and exfiltrate data, we will reach for the scripting and integrated tools already available on the network. In doing so, smart hackers accomplish a second and much more critical objective of limiting the number of detectable malicious tools in an environment. Every piece of malware removed from an infiltration operation is one less potential antivirus or intrusion detection system alert, and one less layer of defense in depth that is effective against hackers. An intrusion conducted using trusted and expected administrative tools and protocols is very hard to detect.

These same principles can apply to more traditional audio and video surveillance. In the past, covert surveillance devices had to be brought into a target facility via human intervention (for instance, carried in by an operative, obtained through a bribe, or covertly planted on a person or delivery). The decades of history (that we know of) about bugs are fascinating – they had to be engineered to pass through intensive security measures and remain in target facilities without notice. In the pre-transistor and early microelectronics eras, this was a complex engineering feat indeed.

Personal communication devices, and to a greater extent smartphones, are a game changer. Every function that a Cold War-era industrial or military spy could want of a bug is a standard feature of the smartphones that billions of people carry everywhere. Most have excellent front- and rear-facing cameras. They have microphones capable of working at conference-phone range. They have storage capable of holding hours of recording, multiple radio transmitters, and integrated GPS. James Bond’s dream.

More importantly than any of this, smartphones tend to run one of three major operating systems, which are commercially available globally and exhaustively studied for exploits by every sort of hacker. Some of these exploits are offered to the highest bidder on the black market. Although the vulnerability of smartphone operating systems varies by age and phone manufacturer, each is also vulnerable to social engineering and phishing through watering hole attacks, email, text message, or malicious apps.

Why expend the effort and risk to get a bug into a facility and conceal it when an authorized person brings such a fantastic, exploitable surveillance device in knowingly and hides it themselves? If the right person in the right position is targeted, they may not even be searched or reprimanded if caught.

There’s been a lot of discussion about countermeasures against compromised smartphones. Unfortunately, even operating inside a Faraday cage that blocks all communication is not effective, because eventually the phone leaves; a traditional covert device may not. As with the USB devices used to deploy Stuxnet, a trusted air gap is broken the moment an untrusted device can pass across it. A compromised phone can simply be instructed to begin recording audio when its cellular signal is lost, and to upload the recording as soon as the connection is restored. Turning off the device is also not particularly effective in the era of smartphones with non-removable batteries.
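The store-and-forward behavior described above is trivially simple to implement. Here is a purely conceptual sketch of the logic (class and method names are mine, and there is no real recording or network code – this just models the state machine):

```python
# Conceptual sketch only: models the "record while offline, upload when
# the signal returns" logic that defeats a Faraday cage. All names are
# hypothetical; no actual audio capture or networking is performed.

class StoreAndForwardRecorder:
    def __init__(self):
        self.backlog = []    # chunks captured while no signal is available
        self.uploaded = []   # chunks exfiltrated once connectivity returned

    def tick(self, has_signal, chunk):
        """Called once per recording interval with the current signal state."""
        if has_signal:
            # Connection restored: flush everything captured offline first.
            self.uploaded.extend(self.backlog)
            self.backlog.clear()
            self.uploaded.append(chunk)
        else:
            # Inside the shielded area: keep recording to local storage.
            self.backlog.append(chunk)
```

The point is that the cage only delays the upload; once the phone walks out the door, the entire backlog leaves with it.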

Yes, of course it’s still possible to put a listening device in a remote control or a light fixture. Surreptitious hacking tools used to compromise networks on site can still function this way. But why expend the substantial effort and risk in installing, communicating to, and removing them if there’s an easier way?

This is not to say it’s time to put on our tin foil hats and throw out our phones. Most people are probably not individual targets of espionage, and using smartphones with current updates and good security settings is decent protection against malware. However, there are people all over the world who are viable targets of industrial or nation-state espionage, either for their own position or for their access to sensitive people, information, or places. If you are informed by a credible authority that you may be targeted and should not bring your smartphone into a particular area, please take this advice seriously and consider that your device(s) could be compromised. If you suspect there is another valid reason you could be targeted by industrial or nation-state espionage, leave your phone outside. It is generally far simpler to compromise your smartphone than it would have been to break into your office and install a listening device.



Starting an InfoSec Career – The Megamix – Chapter 7


Chapter 7: Landing the Job

So, we’ve come this far in your infosec journey. You’ve studied hard, attended conferences, played a CTF or two, updated your resume, and networked a bit within the information security community. Great work!

Let’s prepare for your very first information security interview.


=== What to Say ===

There have been nigh infinite pieces written on the subject of interviewing, but I’d like to briefly share some basic interview skills that have really served me and my candidates well:

  • Make sure you spend at least 30 minutes researching the organization you will be interviewing with. What are their strategic goals or products? Where do they have offices? What’s their corporate culture like? Consider what interests you about their mission, and how you feel you could benefit them as a security professional.
  • Always bring several printed copies of your resume and references to your interview, formatted the way you intended. HR systems will often remove formatting and line breaks before routing your resume to a hiring manager, and your copy may be more pleasant to read. You will also want a copy to reference yourself.
  • Bring note taking materials to your interview, and make sure you’ve written down a few relevant questions to ask your interviewers about the position and the organization.
  • Arrive 15 minutes early for your interview, and be polite to everybody you meet. You never know if the person you make eye contact with and say “good morning” to in the hall will be interviewing you later.
  • Make eye contact, and pay attention during the interview. Most of us are introverts, and this can be a challenge. Make the effort to be personable and show that you are listening to your interviewers.
  • Put your phone away and on silent. I shouldn’t have to say this.
  • Answer questions honestly. Most of my colleagues and I would very much prefer, “I’m not sure”, to an evasive answer or an outright lie, particularly on technical questions. Often, knowing where you would look something up is an okay answer to a technical question. When we ask you questions about where you could improve, there should be a real response that verifies you are a human. Everybody has some area they can improve in, and we will never believe you’re utterly perfect.
  • The initial interview is not normally the appropriate place to ask about compensation. Yes, infosec is an understaffed and in-demand field. You have better chances than most at landing the job. No, your Master’s in Information Security does not immediately guarantee you the position in lieu of a technical interview.
  • Do talk about your (legal) infosec-related hobbies and activities! We want to hear about the security lab you built in your house, the book you read, the CTF that you participated in, or the security related talks and projects you’re participating in. They show you are an interested and involved candidate, and a good fit for our teams.



=== What to Know ===

The previous chapters in this blog series suggested ways to build your foundational skills in the key areas of networking, systems administration, and security, so I won’t dwell too much on the necessity of knowing the fundamentals of these things such as common ports and protocols, malware types, and operating system functionality in an entry level infosec interview. Suffice to say, this is where the free educational resources, formal training, and your home lab really come into play.

You should ensure, before going to an interview, that you are up to date on the basics of current threats and security news. What you learned at your university is almost certainly not current enough for most interviews. There are a lot of great resources that provide information on ongoing threat activity. For instance, I really like the exploit kit status dashboard at (ProofPoint) EmergingThreats. SANS ISC posts botnet and scanner activity from publicly submitted data, and Sophos posts a nice free malware dashboard that shows their overview of currently detected malware. Threat trackers, coupled with the blogs, news services, and educational resources we’ve previously discussed, should enable you to go to your interview ready to answer general questions about the top threats that are currently plaguing organizations.


=== What Not to Say ===



In May, I surveyed a broad swath of security professionals, asking them to share the statements they hear from interview candidates that most indicate the person is inexperienced in professional information security work. I’d like to share a few of the most popular, and why they carry that connotation. Keep in mind, the selected candidate statements aren’t necessarily technically wrong; more often, they oversimplify or ignore administrative and business-related problems in security. It would be wise to choose your words diplomatically before saying any of the following things:

“Antivirus is obsolete, and a waste of money! Get rid of it.”

We can’t all be Netflix, dramatic headlines or not. It’s true that antimalware programs have a lot of problems to contend with in the 2010s. Between a cat-and-mouse game with well-funded malware authors, polymorphism, and regular botnet updates, simply maintaining a library of static signatures is indeed no longer effective. Most decent antivirus vendors recognize this, and have implemented new tactics like heuristic engines and HIPS functionality to catch new variants and unknown threats. Antivirus is one component of a solid ‘defense in depth’ strategy. It has a reasonable potential to mitigate a percentage of the things that slip past network IPS, firewalls, web filters, attachment sandboxes, and other enterprise security solutions.

“Why are you wasting money on $x commercial product? I can do the same thing with this open source project on GitHub.”

We love the philosophy and price tag on open source projects, and it’s great that commercial vendors have open source competition that drives them to improve and enhance their products. This doesn’t mean that free tools are always a viable replacement for commercial tools in an enterprise environment. There are intangible things which usually come with the purchase of a good quality commercial security product: support, regular updates, scalability, certifications, and product warranties. Those intangible things can have a tangible cost for an enterprise implementing an open source product in their stead. For instance, the organization may have to hire a full time developer to maintain and tweak the tool to their needs and scale. They may also be solely legally liable if a vulnerability in free open source software is exploited in a breach – a risk many organizations’ legal teams will simply not accept.

“They deserved to get breached because they didn’t remove Java / Flash / USB functionality / Obsolete Software…”

Most organizations exist to provide a product or service, and that’s usually not “security”. As security professionals, we’re just one small part of our organizations and their mission, and we never function in a vacuum. Oversimplified assertions like this are a dead giveaway that a candidate is not used to compromising and negotiating inside a business environment. Yes, in an ideal security world, we would use hardened operating systems with limited administrative rights and no insecure applications. Few of us actually operate in that ideal world, and many of us work at an operational scale that alone renders this unfeasible. We do what we can, navigating the political risk-management game where we must, to provide the most secure environment we are capable of.

“Just block China/Russia/x… IPs.”

Once again, this indicates a candidate is thinking only as a security person (and a biased security person) and not as a member of a business. Unfortunately, it also shows a lack of technical knowledge, as many attackers use large, global networks of compromised hosts to launch attacks.

“Security Awareness is a waste of money. Users will always be stupid.”

This is an appalling lack of confidence in your own ‘team’. Yes, some end users will probably always click / ignore / fail to report. (Most security people will also click when properly socially engineered.) The point of security awareness is not to create a perfect environment where nobody ever clicks on a phishing message or ignores an alert window – if your management has made that their measure of success, they’re doing security wrong. The point of security awareness is to improve awareness of threats, encourage some employees to report potential threats so you can respond, and decrease day to day problems so you can focus on the more severe ones.

“[Fortune 100] should already have gotten rid of $OS and gone to $OTHEROS, because it’s more secure / real security people use $OTHEROS.”

This is dogmatic elitism without real business or technical foundation. Any up-to-date operating system can have a valid use case in business and in security work. A good red team or blue team security professional should be able to secure, compromise, and use tools on OSX, Linux, and Windows effectively (and indeed, there are valuable tools unique to each). It’s okay to have an operating system preference and to intelligently discuss the merits of $OperatingSystem for your specific use case. Don’t assume everybody else’s use case is the same.

“Hack them back / have the attackers arrested…”

We all crave the movie ending where the black hat hackers get their comeuppance and are thrown in jail. Unfortunately, unless we work for a LEO, the military, or a huge global telco, we’re rarely likely to get it. “Hacking back” of any sort is usually wildly illegal (especially because attacks are almost always launched from compromised hosts that belong to law-abiding people). Arrests happen when time-consuming, coordinated efforts between security firms, global law enforcement, and lawyers are successful. Even the terrifying financial spearphish to your CFO is unlikely to be chased down by law enforcement for some time. When permitted, absolutely do share your threat intelligence with law enforcement and working groups to aid in these important efforts, but expect that any response will take significant time.

“Don’t you monitor every brute force attempt against your perimeter? I count the dictionary attacks against my honeypot every night!”

No, monitoring this would be a waste of time in most large organizations. Behavioral trends and specific sequences of events that could indicate a compromise are more valuable to monitor. Time is money.
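To illustrate the difference, here is a hedged sketch (the function name, event format, and threshold are all mine, purely illustrative) of a behavioral rule that alerts only on a meaningful sequence of events – many failures followed by a success from the same source – rather than paging someone for every failed login:

```python
# Illustrative sketch: alert on a behavioral sequence (brute force that
# appears to SUCCEED), not on every individual failed attempt.
from collections import Counter

def brute_force_alerts(events, threshold=10):
    """events: iterable of (source_ip, outcome) tuples, outcome in
    {"fail", "success"}. Returns source IPs that logged in successfully
    after crossing the failure threshold."""
    failures = Counter()
    alerts = []
    for ip, outcome in events:
        if outcome == "fail":
            failures[ip] += 1           # just count; no alert yet
        elif outcome == "success" and failures[ip] >= threshold:
            alerts.append(ip)           # likely credential compromise
    return alerts
```

A noisy perimeter might log thousands of failures a night; only the handful of sources matching the pattern above are worth an analyst’s time.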

Any statement beginning with, “Why don’t you just…?” or “It’s simple…”

It pretty much never is that simple, so don’t personally insult your interviewer by assuming it is.



This concludes the InfoSec Career Megamix! I hope you’ve enjoyed this blog series and that it has been helpful to you in furthering your own security career. Many thanks to everybody who has commented on my blogs or provided input and suggestions. Please do check out the links to other peoples’ wonderful work on the subject which I have included throughout the blogs.

[You can find the previous chapters in this blog series here:

The Fundamentals

> Education & Certifications

> Fields and Niches

Blue Team Careers in Depth

Red Team Careers in Depth

Self-Study Options]

InfoSec tickets for Veterans & Twitter Feed!

Hello all,

For the past year, I have reached out to Twitter folk for their spare and unwanted infosec con tickets that they wished to donate to military veterans and military members interested in the field. I have made the decision today to formalize this into a website and Twitter account.

Why do I do this?

There is a collective misunderstanding in our society about military members. Although we may consider them to be heroes, valiant and tough, the vast majority of charities and educational programs available to them after serving also presume that they will all move into specific blue-collar professions after their enlistment or commission.

This is simply not the case. The military staffs almost every job seen in the civilian world, from cooks to network engineers. There are even infosec professionals, just like us, using similar tools and attending the same certification programs. The difference is, they are doing it on the military payscale and within military lifestyle restrictions, to benefit a larger cause.

To a junior enlisted military member working in IT or recently discharged from the military, with industry certifications and years of experience, a couple hundred dollars we might consider trivial can be an impossible roadblock to attending conferences and networking. We all know how absolutely critical attending these events can be to progressing in our careers. So why keep qualified people out over a trifling sum of money?

This Twitter account is meant to reduce the isolation veterans may feel when trying to enter the information security field. I will endeavor to RT and tweet programs beneficial to veterans and currently enlisted people, as well as provide an informal operator service connecting people who have spare infosec con tickets with veterans who want to attend the conferences.

This service will be best effort and a work in progress until I find others willing to volunteer time and effort to it. However, I believe this service fills a serious void between military and civilian IT security.

If you wish to donate a spare ticket or sponsor an attendee, please DM the account and I will relay that the opportunity is available. If somebody requests the ticket, I will connect you both to arrange the transfer of ownership. I hold no liability for the transfer, so please exercise caution and common sense.

If you are a veteran or military member who would like a ticket to a conference, please stay tuned to this Twitter account and DM if you see a ticket that interests you (first come, first served).

This service is on the honor system and I will only be spot checking military service. I can only ask for your honesty to maintain the account.

This account will not be accepting advertisement or promotional offers; corporate sponsorship for individuals is okay within reason.

[Off Topic] On Dealing with Completely Impossible Situations


Just some non-infosec-specific thoughts regarding things I’ve learned about dealing with burnout, and the really bad days:

  • Steve, Diane, Kay, Erica, Bryan, and Anna taught me that sometimes you just find family when it’s needed. (Say yes to seeing your friends, even when you’re indescribably exhausted.)
  • Jack taught me to have a breakdown plan.
  • Col E. taught me that it’s not weakness to ask for help when you really need it.
  • Rance taught me that sometimes you have to put on the chicken hat and dance despite it all.
  • D.T. taught me that you can be lying there missing a leg in the hospital, then marry your nurse.
  • Johnny taught me that even on a really crappy day, a lightsaber battle is still okay.
  • Selena taught me that writing about the worst situations can help you face them (and help other people, too).
  • Reggie taught me that however messed up things are, it’s not too late to reconcile with people you fought with honorably, like a true samurai.
  • 60 hackers confirmed that our community is real, and if you’re not a dreadful person both the “black hat” and “white hat” people in it might send cards and books to a person across the planet who needs them, without prompting or asking questions.
  • Jodi taught me to be unapologetically your own self, and to keep fighting and screaming at what’s wrong in the world, even when the world is beating you down.
  • Two ER visits and permanent health damage have taught me that there are repercussions for not taking care of yourself under continual and intense stress.
  • Completely impossible situations taught me that you never really know what you’re capable of dealing with until you’re faced with one. Overcoming those obstacles makes the memories you keep.

I hope someday when you are going through an impossible time, you can come back to this post and find some help and hope.


The Worst InfoSec Resume, Ever

I do quite a bit of InfoSec résumé reviewing and critiquing, both personally and professionally, so I’m repeatedly asked for tips on common problems. To ensure that these problems were not exclusive to me, I recently had a lengthy discussion with a number of InfoSec professionals involved in hiring (thank you!). We discussed our “top 10” pet peeves when reading candidates’ résumés.

So without further ado, here is an illustrated example of some common problems we see on many résumés, and some suggestions about how to fix them.

(If these images are hard to view on your phone or at a specific resolution, you may click them to view them full screen.)