Health and Wellness in InfoSec

Most of us know that being a hacker isn’t exactly the lowest-stress gig out there. With the holiday season fast approaching, thinking about taking care of our well-being and that of our colleagues, family, and friends becomes even more important than usual. I’d like to have a quick chat about ways I personally have approached health and self-care, some lessons I’ve learned after nearly two decades in IT, and some suggestions for caring for yourself and those around you. Of course, I’m not a doctor. I don’t even play one on TV. I can only speak to my own personal experiences coping with extreme and long-term stress. I hope they provide some food for thought.


Nutrition

Eating a portioned, balanced diet is an oft forgotten but very important element of our overall physical and mental health and longevity. How we eat is also very important. Let’s start with some really easy changes.

I’ve personally found great value in (whenever possible) ensuring I eat on a regular schedule. I also try to force myself to eat a minimum of a couple meals a week at a table (not in front of my computers or my TV), from an actual plate. This forces me to eat more slowly, control portions, and keeps my focus away from work and news during the meal.

It’s no secret that I’m a pretty incompetent chef, which sometimes hurts my eating habits. I’ve worked hard to balance this out a bit by eating more steamed and raw vegetables and fresh fruit, carefully reading nutrition, preservative, and preparation facts on microwave meals, and occasionally utilizing delivery services with semi-prepared or pre-prepared healthy meals. I also try to get together for shared meals with friends or family on a regular basis (they cook, I bake, everybody lives 😉 ). Check out MealSharing if you don’t live near friends or family, or arrange something with your local hacking group. I saw a lot of Hacker Family holiday meals out there this year.

I’ve never seen a ‘fad diet’ or non-FDA-approved weight loss pill that worked long term, and I’m not even terribly keen on excessive meal replacements. Be cautious about anything that seems too good to be true. We’re hackers, and we are some of the best out there at uncovering bad science and scams. Don’t forget to do that research when you’re tempted by a quick fix. Unless your doctor says otherwise, start with simple things like portion control, balanced nutrition, fresh foods, and avoiding too much added sugar and sodium. In the long term, eating sensibly and reducing portions is a lot easier to stick with than drastic dietary changes and a lack of variety.

Finally, try to drink more water. There are tons of reasons to avoid the added sugars of soda and the sugar substitutes in calorie-free drinks, as well as excessive caffeine and alcohol consumption. Drinking more water can have a huge positive impact on our physical and mental health. Using reusable water bottles instead of plastic soda bottles or cans is also great for the environment.

Exercise

Many people in information security work long hours and travel extensively. This makes getting regular exercise difficult. So, let’s have a little chat about the exercise that wiser experts than I say you should be doing at a minimum.

The American Heart Association currently recommends the following for healthy adults:

For Overall Cardiovascular Health:

  • At least 30 minutes of moderate-intensity aerobic activity at least 5 days per week for a total of 150 minutes

OR

  • At least 25 minutes of vigorous aerobic activity at least 3 days per week for a total of 75 minutes; or a combination of moderate- and vigorous-intensity aerobic activity

AND

  • Moderate- to high-intensity muscle-strengthening activity at least 2 days per week for additional health benefits.

For Lowering Blood Pressure and Cholesterol:

  • An average 40 minutes of moderate- to vigorous-intensity aerobic activity 3 or 4 times per week

Obviously, 75 to 150 minutes of exercise can be pretty hard to get when we’re working long nights and sleeping in airports. Hotel gyms get really old. That doesn’t mean we shouldn’t still make an effort, because not only does exercise provide physical benefits, but it can get our minds off our troubles as well.

In my personal experience, getting involved in group exercise classes in which missed attendance is noticed and checked on was a great help. I chose martial arts and yoga. Martial arts gave me a structured, moderate to vigorous intensity activity with concrete goals to achieve, and strict attendance and coaching requirements. Even if I’m exhausted and flying out the same night, I have to make my classes or provide a valid excuse.

Yoga provides me a low- to medium-intensity, stress-relieving exercise activity I can do almost anywhere I travel. Finding yoga schools wherever I go for work has become an exciting adventure – I always meet new instructors with new ideas and perspectives. There’s no reason national or international exercise programs like CrossFit, BJJ, or aerobics can’t provide the same for you. Pick an exercise routine you find fun and captivating, not something that’s a chore you try to get out of. (Always consult a doctor and research the routine before starting a new exercise program – we’ll talk about this shortly.)

Community & Friendship

Introversion is pretty common in hackers; I’m no exception. As unappealing as it can feel, there are good reasons for us to have a community of support and a little regular interaction with other humans. We’re very fortunate as hackers to have a tremendous community of practice with many local, regional, and international events, which we all should try to attend if able. However, those don’t ensure that we aren’t isolated on a day-to-day basis. Folks who work from home are especially vulnerable to the trap of staying home surrounded by hobbies, games, and gadgets, frustrated with other people.

Ask yourself, “Have I spoken out loud to another human today?”.

There will be Really Bad Days in your life when the escapism of books, games, toys, and what box you popped isn’t enough. You will eventually need some support to dig out of a dark, overwhelming place. The best way to ensure that safety net is there is to build it right now, even when watching Netflix or con videos seems a lot more fun and less stressful. Be part of the hacker community, your local community, and your communities of interest.

Yes, the internet is a great resource for friendships, especially when we’re geographically isolated from folks with similar interests. If possible, don’t rely on the internet alone. Make sure you have a couple real phone numbers to call on the Really Bad Day. Make sure somebody relatively local can pick you up at the hospital, or bring you a can of gas, or bail you out of jail on that Really Bad Day. Be that person for your friend’s Very Bad Day, too.  It can be very wearing to put yourself out there, but it’s easier to meet people with common interests and hobbies than ever before in human history. Join your local 2600, CitySec, or DEF CON local group.  I also highly recommend Meetup.com for finding or starting low-key hobby and geekdom groups in your area.

Remember that we have family that we are born with, but we can also have family that we choose – and sometimes those bring us much more compassion and care on the Bad Days.

Sleep

I promise, no matter what you think, you really, really do need it. Even if your 3 energy drinks tell you that you don’t. If you start crashing hard on your days off, repeatedly, you are probably pushing yourself too hard. The National Sleep Foundation recommends 7 to 9 hours of sleep for adults; less than 6 hours isn’t even considered healthy on their scale. I know a lot of people in infosec talk about living on 4 or 5 hours of sleep routinely as a matter of pride, but you’re only hurting yourself (or your employees, if you promote this). You will very likely notice a physical and mental performance improvement when you get enough rest.

Humanitarianism

Kindness, service, and volunteer work not only help those around us, but they improve our personal well-being as well, and get us involved in local and global communities. A small act of kindness, like showing honest appreciation to people around us, or showing compassion to somebody in need, can make an endless difference in another person’s life.

Quite a few of you might be surprised that I (a humanist and an atheist) go to church on a regular basis. Let me endeavor to answer the immediate questions raised by this. Firstly, I attend a humanist church that doesn’t promote any specific religious ideology (Unitarian Universalist). Secondly, it forces me to listen to people with varied philosophies about their concerns and their perspectives, which gives me a more nuanced and human view of worldviews that are different than mine. Thirdly, it allows me to be part of a supportive community of humanitarians who are also interested in helping less fortunate people in organized ways. Some problems are too big to tackle alone.

No matter your philosophical and spiritual views (or lack thereof), the idea of the golden rule is pretty universal. I personally subscribe to the concept of leaving the world a little better than I came into it, for future humans. Others did it for me, and we all benefit from random acts of kindness. Find a way to give back to the communities you are a part of by choice or by chance.

Seeing That Doctor

I honestly can’t count the number of friends in infosec, including myself, who have ended up in the hospital after ignoring health problems due to high-pressure, fast-paced lifestyles. Nobody likes going to the doctor, and health insurance can be a nightmare in the US. I can’t stress this enough: learn from our mistakes, or suffer the consequences.

Even if you’re in your 20s or 30s, go to your yearly physical. Make sure you have routine blood work done to check for stuff like vitamin deficiencies. Vitamin D deficiency is super common in IT and shift work, and as many can attest, it can have a huge impact on your physical and mental well-being. Get screened for cancer and hereditary conditions appropriately for your age, gender, and risk factors. No job is ever worth your life.

I’m Still Really Stressed, What Now?

Here are some thoughts for you.

  • Try to reduce excessive caffeine intake. It raises your heart rate, and artificially reduces your desire to get (needed) rest.
  • Have a cup of herbal tea. Take the time to put in some honey or lemon, and try to relax for a few minutes while drinking it.
  • Remove your social media apps from your phone if your feeds are stressing you out. Social media vacations are okay.
  • Actual vacations are okay, too. They are not a mark of shame. The things you did out of the routine are the things you will remember in a decade.
  • Call a friend, and chat for a while. Even better, chat with a friend in person.
  • Try a new hobby. Groupon Local is great for this. It doesn’t have to be something intense like skydiving. Try something low-pressure that you’ve always wanted to learn more about, like photography, painting, sushi making, or home brewing. It’s a big world out there, full of endless things.
  • Meditate. This doesn’t necessarily mean sit still on the floor, cross-legged. Moving meditation is a thing, too. Sweeping can be meditation. Lockpicking can be meditation. So can music or art. For something more traditional, Tai Chi, Hatha Yoga, and Qi Gong are organized moving meditation. We’re just talking about calm, repetitive motion activity that allows you to focus your thoughts and breathe without getting frustrated.
  • ASMR videos, however silly-seeming, help some folks relax.
  • For the “Type-A” hackers: find something to plan out that doesn’t stress you out. It can be a totally mental, pretend exercise. For example, plan out a vacation, a CFP submission, a research project, a business you’d like to start, or a job change. Have fun working out the logistics or details, and don’t worry about the real life roadblocks or requirements. If you get inspired, that’s great. If not, move on to something else.
  • Read an actual, physical book that you enjoy. Or replay an old game that you enjoy, that won’t stress you out. Something you equate to happy memories.
  • Finally, and most importantly,

Professional Help

There is no shame in seeking professional help when you’re in a dark place. While I’ve offered a few suggestions of possible ways to improve the quality of your life, health, and support structures, there are truly long-term and short-term conditions that can best be worked through with a licensed professional. Depression and substance abuse are sadly huge problems in the hacker family, and they call for proper care. We don’t want to lose anybody else. Please do not hesitate to seek out professional resources when you need them. You are valued. You are important. You can do good in the world.

The National Suicide Hotline: 1-800-273-8255
SAMHSA Substance Abuse Hotline: 1-800-662-HELP

Bridging the Gap between Media and Hackers: An Olive Branch

I had a lovely interview about IoT security with Emmy-award-winning reporter Kerry Tomlinson of Archer News this past week at BSides Jackson. It’s unfortunately rare in our field that we get to have such productive, mutually beneficial conversations with members of the media. There’s a lot of uncertainty and (often justified) lack of trust on both sides, which makes it easy to forget that presenting a coherent, technically correct, and comprehensible message on information security and privacy is crucial for everyone.

Since organizations like I Am the Cavalry are already approaching the outreach problem primarily from the side of security professionals, I’d like to take a slightly different approach by specifically addressing journalists and the media.

We need your help!

With the plethora of hacker conferences gaining legitimacy and attention across the world, there are many opportunities to address our community. Hacking conference calls for papers are often open to everybody, not just people gainfully employed in security. You are welcome to apply and lend your unique perspective to these problems. It doesn’t have to be DEF CON or Black Hat. There are many smaller options which record and post talks, and have great reach within our community.

Here are some important topics which you could help educate us about, by sharing your perspective:

  • What is it like being a journalist covering security? What are the challenges?
  • How should we prepare for a media interview?
  • Many people in security feel burnt by misquotes and misinterpretations of their work. How can we better avoid this? What should we do if we feel we have been misrepresented by a media organization?
  • How can we better vet news outlets which want to work with us?
  • How can we help you as subject matter experts or fact checkers?
  • How can we help you present our most important security research to society without sensationalizing?
  • How can we better format and target our blogs and research for the media?

We want to help you!

There are plenty of security topics that are timely and  highly relevant to journalists and the media, and many of us are willing to offer education and insights to your communities of practice, if offered opportunities to do so.

Here are some topics which many willing security professionals (including myself) could provide a range of insights and training on at media conferences and educational programs:

  • How to conduct secure and private communications with sources and colleagues.
  • How to maintain operational security and avoid leakage of sensitive personal information.
  • How to secure computers and mobile devices.
  • Understanding, detecting, and avoiding social engineering.
  • How to approach hackers (white, grey, and black hat) for information on security research.
  • The realities of hacker “culture” and work, and how these differ from fictional stereotypes.
  • Current issues with malvertising on news sites, how to decrease that risk, and its effect on the rise of adblockers.

I want to take a moment to thank the many journalists and reporters who provide fabulous coverage of security topics and associate with our community on a regular basis (especially Steve Ragan, who wrote the essential article on how to deal with the media as a hacker). Thanks for dealing with our foibles and for doing great work.

Using Team Cymru’s MHR with Volatility

Today we’ll briefly discuss cross-checking Team Cymru’s Malware Hash Registry (MHR) against files found in memory images or hibernation files by Volatility. We’re going to do it by hand at the command line, as a quick exercise in manipulating both tools and thinking through command-line problems. Please note that Team Cymru places restrictions on automated use of their lookup tool, so don’t automate anything like this without speaking to them.

To do this, we’ll obviously need a memory image and a Linux environment with Volatility functioning (I recommend downloading the SIFT kit VM if you don’t have one). Our starting point in this exercise assumes that memory has already been properly retrieved with an imaging tool, that we’ve identified an appropriate Volatility profile with imageinfo, and that we’ve identified a suspicious process or processes using our standard toolkit of commands like malfind, malsysproc, unloadedmodules, etc.
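
For anybody who wants a refresher on that prep work, here’s a minimal sketch of what it might look like (assuming Volatility 2.x, a hypothetical image name of memory.img, and a hypothetical Win7SP1x64 profile; substitute your own):

vol.py -f memory.img imageinfo
vol.py -f memory.img --profile=Win7SP1x64 pslist
vol.py -f memory.img --profile=Win7SP1x64 malfind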

We shall begin by dumping some files of interest from our memory image using a command like moddump (which extracts kernel drivers) or dlldump. For this example, we will simply be dumping DLLs. To avoid a mess, we will first make a directory to put the dumped files in.

mkdir dlls

Next, we perform the dump. In practice, we should be focusing on specific suspicious processes using --pid, or on the results of a search with --regex. There will be a cap on the number of hashes we can submit using this mechanism, so don’t try to submit the entire raw results of dumpfiles. However, as an example, examining only the .exe files output by dumpfiles -n might be interesting (there’s a sketch of that variation after the command breakdown below). Each command has its purpose.

vol.py -f [mem] --profile=[Profile] dlldump --dump-dir=dlls

(As a reminder, that command is:

vol.py -f [filename of the memory file] --profile=[the profile that imageinfo / kdbgscan identified] dlldump --dump-dir=[the path to our dump directory] --pid=[suspicious process ID, if available])
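
And here’s the dumpfiles -n variation mentioned earlier, as a rough sketch only: the --regex, --ignore-case, and --dump-dir options exist in Volatility 2.x’s dumpfiles plugin, but double-check your version’s help output before relying on this exact syntax.

mkdir exes
vol.py -f [mem] --profile=[Profile] dumpfiles -n -i --regex '\.exe$' --dump-dir=exes

For the rest of this walkthrough, though, we’ll stick with the dlldump output.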

Now, we ought to have a big folder full of DLL files which Volatility found in memory. Let’s head there and make sure everything worked okay.

cd dlls
ls

Team Cymru requires that the input be in a specific format, with a begin marker and an end marker. So let’s make a new file that starts with that.

echo begin > hashes.txt

We can’t just use the output of md5sum or sha1sum directly, because it contains two columns (hash, then filename) and the MHR service needs line-delimited hashes only. We have to do something to remove that second column. There are a lot of ways to do this in Linux. In this example, I chose to pipe the results of md5sum into awk, which I use to select only column 1. I’ll then stick that output into our hashes.txt file.

md5sum * | awk '{ print $1 }' >> hashes.txt

(Grep is a powerful tool. We could certainly do some file filtering at this point if we failed to do so properly within Volatility. For instance, in our example of dumpfiles -n, this might be where we filter for only .exes, with md5sum * | grep .exe | awk '{ print $1 }' >> exehashes.txt)

Now let’s properly close our file as requested.

echo end >> hashes.txt
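
If everything went to plan, hashes.txt should now look roughly like this (one 32-character MD5 hash per line, between the markers):

begin
[md5 hash of the first dll]
[md5 hash of the second dll]
end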

The bulk command-line submission method for Team Cymru is sending the file to their whois server with netcat. We shall upload the file we just made, and redirect their response into a new file.

netcat hash.cymru.com 43 < hashes.txt > hashescheck.txt

Remember that our syntax for netcat will be [destination server]  [port] < [the file we are sending] > [the returned output’s destination].
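
(As a side note: if you only have one or two hashes to check, Team Cymru also documents a simpler single-hash lookup via a standard whois client. It looks roughly like the line below, but verify the exact syntax against their current documentation before relying on it.)

whois -h hash.cymru.com [single md5 or sha1 hash]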

Now, we can check the contents of the resulting file. If we sent a larger list of files, we’ll probably want to filter out noise by eliminating any line that returned NO_DATA. For verification, there should be a header returned at a minimum.

cat hashescheck.txt | grep -v NO_DATA

# Bulk mode; hash.cymru.com [2016-x-x x:x:x +0000]
# SHA1|MD5 TIME(unix_t) DETECTION_PERCENT

And that’s that!

(Please don’t ask me about submitting files to VirusTotal, because that already exists; all you’ll need is your API key.)

101 Ways I Screwed Up Making a Fake Identity

As most of you know, my professional area of expertise in security is incident response, with an emphasis on system / malware forensics and OSINT. I’m fortunate enough in my position in the security education and con community to sometimes get pulled in other directions, into blue teaming and the occasional traditional penetration test. However, the rarest of those fun little excursions are into physical pen testing and social engineering. When it comes to breaking into buildings and pretending to be a printer tech, I’m merely a hobbyist. 🙂

Therefore, it was a bit remarkable that in the course of developing some training, there was a request for me to create some fake online personas that would hold up against moderately security savvy users. I think most of us have created an online alter ego to some extent, but these needed to be pretty comprehensive to stand up to some scrutiny. Just making an email account wasn’t going to cut it.

So Pancakes went on an adventure into Backstop land. And made a lot of amusing mistakes and learned quite a few things on the way. I’ll share some of them here, so the social engineers can have a giggle and offer suggestions in the comments, and the other hobbyists can learn from my mistakes. Yes, there are automated tools that will help you do this if you have to do it in bulk for work, but many of the problems still exist. (Please keep in mind that misrepresenting yourself on these services can cause your account to be suspended or banned, so if you’re doing more than academic security  education or research, do cover your legal bases.)

What I messed up

I’m not going to waste everybody’s time talking about how to build an unremarkable and average character in a sea of people or use www.fakenamegenerator.com, nor how we always set up a VM to work in to avoid cookies and other identity leakage (including our own fat fingering). Those have been discussed ad infinitum. Let’s start with what happened after those essentials, because creating a good identity is apparently a lot more involved.

  • It pretty much required a phone number from the get go. I spun up my VMs and created the base sets of email and social media accounts that an average internet user might have, but Twitter was on to me from the start. I wasn’t planning on involving a phone for 2FA at all, but their black box security algorithm tripped in seconds and made me use a phone to enable the first account. So, I’m pretty much terrible. Granted, there are plenty of online services that will give you a phone number, and I could have burners if I felt the need, but it added a layer of complexity. In a good move, it looks like most of social media is now spamming new users to enable 2FA.
  • My super authorial D&D skills at creating dull people in big towns and reposting memes weren’t enough. I had to make friends and meet people to make the profiles pass as real. I knew that was going to be a challenge, but I didn’t expect it to become such a thought problem.
    • Twitter was the easiest once I fleshed out the characters and followed a bunch of accounts they would like, then people following those accounts. Some people just follow back folks who aren’t eggs (I do). I quickly had 40 or 50 followers on the dummy accounts. I’m apparently big in the vegan cooking scene now.
    • LinkedIn wasn’t too bad once somebody clued me into (LION) tags and good old 2000+ connection recruiter accounts. The people who participate in that essentially connect with anybody, regardless of the normal LinkedIn security and privacy rules about knowing people personally. So after making decent profiles, I just had to find a couple of people with the tag, then branch out through 2nd-degree connections in their vast networks to the correct industries and regions. Of course, I first had to do a bit of strategic plagiarizing from the skills sections of other people in my characters’ professions to build believable people. (We have yet to see if they got any recruiter messages, but none of them had really lucrative careers.)
    • Facebook was actually the one I struggled with the most, because you really need a starting point in your network to even add other people. I talked to a lot of security folks about my woes there and they made some good suggestions. The first was to play some Facebook browser games for a few minutes (I feel like my time with Candy Crush was worse than the dark web), then go to their community pages and plead “add me”.  Again, people cheating the security / privacy system make it easy to gain a foothold. A couple popular games got me 50-100 friends, and from there by using Facebook’s lovely verbose search system, I could move my network into the regions that my personas “lived in”. For instance, if the character were from Chicago I would search for friends of friends of the connections I had made for people in Chicago, and those people were much more likely to add me because I was a “friend of so and so”. The other effective strategy people gave me was to present myself as an ardent fan of a sports team or political party in article comments. That worked pretty well, but not as fast as the games.
    • Once I had some “friends” on Facebook, moving into specific workplaces and schools wasn’t too hard. Public Facebook Events at those institutions and their associated venues provide lists of lots of people to add who were almost certainly physically present. Again, once I had a few connections in that circle, it became exponentially easier to add more.
    • Pinterest, YouTube, and Meetup were pretty easy – there’s really not a lot of verification of users there, by design. I liked them for this because they’re very public and tie the other social media profiles together nicely. I confess that I did lose my nerve when Meetup group sign up forms asked me detailed questions about my “kids” or my “spouse”, and stuck to ones that weren’t so intrusive, because that just felt creepy (says the woman who looked up a cached copy of your 2004 MySpace page).
  • I don’t normally feel guilty when I’m hacking somebody in a pen testing engagement (it’s for a good cause), but I did feel a little weird and guilty interacting with unwitting strangers on the internet as other people. It definitely took me out of my comfort zone – not only did I have to role play other personalities with wildly different views, but I had to shake my normal security paranoia to do stuff like click “add friend” a lot without hesitation and leak data through privacy settings, strategically.
  • I really had to commit to one character at a time to develop them into a person.
  • Even in a clean VM, there was still apparent tracking to my IP space on LinkedIn! I didn’t bother to use a proxy or a public connection for an educational endeavor, but if I had to flee the mafia or something I would certainly keep that in mind. Internet advertisement tracking is insidious and possibly scarier than any nation state actor.
  • Photos are everywhere yet were strangely really hard to come by. Fake identity creating sites like https://randomuser.me/ provide profile pictures, but anybody half decent at OSINT will immediately reverse image search a suspicious profile’s picture. Their stock art photos have been so abused that searching any one at random provides a trove of suspect business reviews and fake LinkedIn profiles (a blog of its own…). Again, since this was a legal and ethical endeavor, I just used a collection of donated (previously unposted) photos from friends, heavily visually filtered and transformed. Even that required a lot of careful checking for metadata and visual clues that tied them to a location. I’m sure there are more expensive stock art photo sources that are less abused, but I’m not sure how ultimately virginal even their photos are. Maybe I should invest in a good wig and glasses.
  • This was time consuming, and I can see it becoming incredibly time consuming, which is the reason you use tools to automate the wits out of it if you do it regularly as a penetration tester. Facebook and Twitter timestamp content, and comprehensive ways around that are the kind of things social media companies give out hefty bug bounties for. On Twitter, you can retweet a year’s worth of old tweets in temporal sequence, but that will never change your publicly visible account creation date. Similarly, on Facebook, you can manually change the date and location of posts, but your account creation date is still pretty easy to see based on other time data and your profile ID number. Ultimately, there seems to be no substitute for good old months and years of the account existing. If somebody has a workaround they’d like to share, I’m all ears.

What we can learn about OSINT and defense from this exercise

  1. Not new, but always good to reiterate: people bypassing security and privacy controls for convenience is a really big security issue. People who blatantly bypassed the personal connection requirements on Facebook and LinkedIn made my job a lot easier. If nobody had accepted my fake characters’ invites on social media, I would have been pretty stymied and stuck buying followers or building my own network to be friends with myself.
  2. As an adjunct to #1, be mindful of connections via one of these “wide open” social media accounts (many hundreds of connections, or an indication they don’t screen requests in their profiles).
  3. Reverse image search the photo, all of the time. Maybe on two sites! This should be something you do before dating somebody or making a business deal, just like googling their name. As always, a profile with no photos at all is a red flag.
  4. Check the age of social media profiles even if they look verbose and well defined. Stealing other peoples’ bios is easy.
  5. Never be connection #1, #2, or #3 to a profile you don’t recognize (you enabler).
  6. Don’t accept connection requests from Robin Sage, (or anybody else who presents themselves as a member of your community with no prior contact).
  7. In fact, don’t accept friend invites from people you don’t know even if they have 52 mutual friends and “go to your school”. Within a few minutes, I had 52 mutual friends and was bantering with the school mascot about a sportsball team I’d never heard of.
  8. Look for some stuff that’s deeper than social media and typical web 2.0 services when you’re investigating a person. My typical OSINTing delves into stuff like public records, phone and address history, and yes, family obituaries. Real people leave more artifacts online over the course of their lives than merely things that require a [Click Here to Sign in with Facebook], and the artifacts I listed are harder to fake quickly.
  9. Forget trust, verify everything.

Nation State Threat Attribution: a FAQ

Threat actor attribution has been big news, and big business for the past couple years. This blog consists of seven very different infosec professionals’ responses to frequently asked questions about attribution, with thoughts, experiences, and opinions (focusing on nation state attribution circa 2016). The contributors to this FAQ introduce themselves as follows (and express personal opinions in this article that don’t necessarily reflect those of their employers or this site):

  • DA_667: A loud, ranty guy on social media. Farms potatoes. Has nothing to do with Cyber.
  • Ryan Duff: Former cyber tactician for the gov turned infosec profiteer.
  • Munin: Just a simple country blacksmith who happens to do infosec.
  • Lesley Carhart: Irritatingly optimistic digital forensics and incident response nerd.
  • Krypt3ia: Cyber Nihilist
  • Viss: Dark Wizard, Internet bad-guy, feeder and waterer of elderly shells.
  • Coleman Kane: Cyber Intelligence nerd, malware analyst, threat hunter.

Many thanks to everybody above for helping create this, and for sharing their thoughts on a super-contentious and complex subject. Additional thanks to everybody on social media who contributed questions.

This article’s primary target audience is IT staff and management at traditional corporations and non-governmental organizations who do not deal with traditional military intelligence on a regular basis. Chances are, if you’re the exception to our rules, you already know it (and you’re probably not reading this FAQ).

Without further ado, let’s start with some popular questions. We hope you find some answers (and maybe more questions) in our responses.


 

Are state-sponsored network intrusions a real thing?

DA_667: Absolutely. “Cyber” has been considered a domain of warfare. State-sponsored intrusions have skyrocketed. Nation-states see the value of data that can be obtained through what is termed “Cyberwarfare”. Not only is access to sensitive data a primary motivator, but so is access to critical systems. Like, say, computers that control the power grid. Denying access to critical infrastructure can come in handy when used in concert with traditional, kinetic warfare.

Coleman: I definitely feel there’s ample evidence reported publicly by the community to corroborate this claim. It is likely important to distinguish how the “sponsorship” happens, and that there may (or may not) be a divide between those whose goal is the network intrusion and those carrying out the attack.

Krypt3ia: Moot question. Next.

Lesley: There’s pretty conclusive public domain evidence that they are. For instance, we’ve seen countries’ new weapons designs appear in other nations’ arsenals, critical infrastructure attacked, communications disrupted, and flagship commercial and scientific products duplicated within implausibly short timeframes.

Munin: Certainly, but they’re not exactly common, and there’s a continuum of attackers from “fully state sponsored” (that is, “official” “cyberwarfare” units) to “tolerated” (independent groups whose actions are not materially supported but whose activities are condoned).

Viss: Yes, but governments outsource that. We do. Look at NSA/Booz.

Ryan: Of course they are real. I spent a decent portion of my career participating in the planning of them.

 

 

Is this sort of thing new?

Coleman: Blame is most frequently pointed at China, though a lot of evidence (again, in the public) indicates that it is broader. That said, one of the earliest publicly documented “nation-state” attacks is “Titan Rain”, which was reported as going back as far as 2003 and is widely regarded as “state sponsored”. With that background, it would give an upper bound of ~13 years, which is pretty old in my opinion.

Ryan: It’s definitely not new. These types of activities have been around for as long as they have been able to be. Any well resourced nation will identify when an intelligence or military opportunity presents itself at the very earliest stages of that opportunity. This is definitely true when it comes to network intrusions. Ever since there has been intel to retrieve on a network, you can bet there have been nation states trying to get it.

Munin: Not at all. This is merely an extension of the espionage activities that countries have been flinging at each other since time immemorial.

DA_667: To make a long story short, absolutely not. For instance, it is believed that a recent exploit used by a group of nation-state actors is well over 10 years old. That’s one exploit, supposedly tied to one actor. Just to give you an idea.

Lesley: Nation state and industrial sabotage, political maneuvering, espionage, and counterespionage have existed as long as industry and nation states have. It’s nothing new. In some ways, it’s just gotten easier in the internet era. I don’t really differentiate.

Krypt3ia: No. Go read The Cuckoo’s Egg.

Viss: Hard to say – first big one we knew about was Stuxnet, right? – Specifically computer security stuff, not in-person assets doing Jason Bourne stuff.

 

 

How are state-sponsored network intrusions different from everyday malware and attacks?

Lesley: Sometimes they may be more sophisticated, and other times aspects are less sophisticated. It really depends on actor goals and resources. A common theme we’ve seen is long term persistence – hiding in high value targets’ networks quietly for months or years until an occasion to sabotage them or exfiltrate data. This is pretty different from your average crimeware, the goal of which is to make as much money as possible as quickly as possible. Perhaps surprisingly, advanced actors might favor native systems administration tools over highly sophisticated malware in order to make their long term persistence even harder to detect. Conversely, they might employ very specialized malware to target a specialized system. There’s often some indication that their goals are not the same as the typical crimeware author.

Viss: The major difference is time, attention to detail and access to commercial business resources. Take Stuxnet – they went to Microsoft to validate their usb hardware so that it would run autorun files – something that Microsoft killed years and years ago. Normal malware can’t do that. Red teams don’t do that. Only someone who can go to MS and say “Do this. Or you’ll make us upset” can do that. That’s the difference.

Munin: It’s going to differ depending on the specifics of the situation, and on the goals being served by the attack. It’s kind of hard to characterize any individual situation as definitively state-sponsored because of the breadth of potential actions that could be taken.

DA_667: In most cases, the differences between state-sponsored network intrusions and your run-of-the-mill intruder is going to boil down to their motivations, and their tradecraft. Tradecraft being defined as, and I really hate to use this word, their sophistication. How long have the bad guys operated in their network? How much data did they take? Did they use unique tools that have never before been seen, or are they using commodity malware and RATs (Trojans) to access targets? Did they actively try to hide or suppress evidence that they were on your computers and in your network? Nation-state actors are usually in one’s network for an extended period of time — studies show the average amount of time between initial access and first detection is somewhere over 180 days (and this is considered an improvement over the past few years). This is the primary difference between nation-states and standard actors; nation-states are in it for the long haul (unlike commodity malware attackers). They have the skill (unlike skids and/or hacktivists). They want sustained access so that they can keep tabs on you, your business, and your trade secrets to further whatever goals they have.

Krypt3ia: All of the above, with one caveat: TTPs are being spread through sales, disinformation campaigns, and use of proxies. Soon it will be a singularity.

Coleman: Not going to restate a lot of really good info provided above. However, I think some future-proofing to our mindset is in order. There are a lot of historic “nation-state attributed” attacks (you can easily browse FireEye’s blog for examples) with very specific tools/TTPs. More recently, some tools have emerged as being allegedly used in both (Poison Ivy, PlugX, DarkComet, Gh0st RAT). It kind of boils down to the “malware supply chain”. Back in 2003, the “supply chain” for malware capable of stealth and remote access was comparatively small next to today’s, so it was likely more common to have divergence between tooling funded for “state sponsored” attacks versus what was available to the more common “underground market”. I think we have, and will continue to see, a convergence in tactics that muddies the waters and also makes our work as intel analysts more difficult, as more commodity tools improve.

 

 

Is attributing network attacks to a nation state actor really possible?

Munin: Maybe, under just the right circumstances – and with information outside of that gained within the actual attacked systems. Confirming nation-state responsibility is likely to require more conventional espionage information channels [ e.g. a mole in the ‘cyber’ unit who can confirm that such a thing happened ] for attribution to be firmer than a “best guess” though.

DA_667: Yes and No. Hold on, let me explain. There are certain signatures, TTPs, common targets, common tradecraft between victims that can be put together to grant you clues as to what nation-state might be interested in given targets (foreign governments, economic verticals, etc.). There may be some interesting clues in artifacts (tools, scripts, executables, things the nation-state uses) such as compile times and/or language support that could be used if you have enough samples to make educated guesses as well, but that is all that data will amount to: hypothetical attribution. There are clues that say X is the likely suspect, but that is about as far as you can go.

Lesley: Kind of, by the right people with access to the right evidence. It ends up being a matter of painstaking analysis leading to a supported conclusion that is deemed plausible beyond a reasonable doubt, just like most criminal investigations.

Viss: Sure! Why not? You could worm your way back from the c2 and find the people talking to it and shell them! NSA won’t do that though, because they don’t care or haven’t been tasked to – and the samples they find, if they even find samples will be kept behind closed doors at Mandiant or wherever, never to see the light of day – and we as the public will always get “trust us, we’re law enforcement”. So while, sure, It’s totally possible, A) they won’t let us do it because, well, “we’re not cool enough”, and B) they can break the law and we can’t. It will always boil down to “just trust us”, which isn’t good enough, and never helps any public discourse at all. The only purpose it serves talking to the press about it is so that they can convince the House/Senate/other decision makers “we need to act!” or whatever. It’s so that they can go invade countries, or start shit overseas, or tap cables, or spy on Americans. The only purpose talking about it in the media serves is so that they get their way.

Coleman: It is, but I feel only by the folks with the right level of visibility (which, honestly, involves diplomacy and basically the resources of a nation-state to research). I feel the interstate diplomacy/cooperation part is significantly absent from a lot of the nation-state attribution reporting today. At the end of the day, I can’t tell you with 100% certainty what the overall purpose of an intrusion or data theft is. I can only tell you what actions were taken, where they went, what was taken, and possible hypotheses about what relevance it may have.

Ryan: Yes, but I believe it takes the resources of a nation-state to do it properly. There needs to be a level of access to the foreign actors that is beyond just knowing the tools they use and the tradecraft they employ. These can all be stolen and forged. There needs to be insight into adversaries’ mission planning, the creation of their infrastructure, their communications with each other, etc. in order to conduct proper attribution. Only a nation-state with an intelligence capability can realistically perform this kind of collection. That’s why it’s extremely difficult, in my opinion, for a non-government entity to really do proper state-sponsored attribution.

Krypt3ia: There will always be doubt because disinformation can be baked into the malware, the operations, and the clues left deliberately. As we move forward, the actors will be using these techniques more and it will really rely on other “sources and methods” (i.e. espionage with HUMINT) to say more definitively who dunnit.

 

 

Why do security professionals say attribution is hard?

Lesley: Commercial security teams and researchers often lack enough access to data to make any reliable determination. This doesn’t just include lack of the old-fashioned spy vs. spy intelligence, but also access to the compromised systems that attackers often use to launch their intrusions and control their malware. It can take heavy cooperation from law enforcement and foreign governments far outside one network to really delve into a well-planned global hacking operation. There’s also the matter of time – while a law enforcement or government agency has the freedom to track a group across multiple intrusions for years, the business goal of most private organizations is normally to mitigate the damage and move on to the next fire.

Munin: Being truly anonymous online is extremely difficult. Framing someone else? That’s comparatively easy. Especially in situations where there exists knowledge that certain infrastructure was used to commit certain acts, it’s entirely possible to co-opt that infrastructure for your own uses – and thus gain at least a veneer of being the same threat actor. If you pay attention to details (compiling your programs during the working hours of those you’re seeking to frame; using their country’s language for localizing your build systems; connecting via systems and networks in that country, etc.) then you’re likely to fool all but the most dedicated and well-resourced investigators.

Coleman: In my opinion, many of us in the security field suffer from a “fog of war” effect. We only have complete visibility to our interior, and beyond that we have very limited visibility of the perimeter of the infrastructure used for attacks. Beyond that, unless we are very lucky, we may be granted some visibility into other victims’ networks. This is a unique space that both the governments and the private sector infosec companies get to reside within. However, in my opinion, the visibility will still end just beyond their customer base or scope of authority. At the end of the day, it becomes an inference game, trying to sum together multiple data points of evidence to eliminate alternative hypotheses in order to converge on “likeliest reality”. It takes a lot of time and effort to get it right, and very frequently, there are external drivers to get it “fast” before getting it “correct”. When the “fast” attribution ends up in public, it becomes “ground truth” for many, whether or not it actually is. This complicates the job of an analyst trying to do it correctly. So I guess, both “yes” and “no” apply. Attribution is “easy” if your audience needs to point a finger quickly; attribution is “hard” if your audience expects you to blame the right perp ;).

DA_667: Okay, so in answering this, let me give you an exercise to think about. If I were a nation-state and I wanted to attack target “Z” to serve some purpose or goal, directly attacking target “Z” has implications and risks associated with it, right? So instead, why not look for a vulnerable system in another country “Y”, compromise that system, then make all of my attacks on “Z” look like they are coming from “Y”? This is the problem with trying to do attribution. There were previous campaigns where there was evidence that nation-states were doing exactly this: proxying off of known, compromised systems to purposely hinder attribution efforts (https://krypt3ia.wordpress.com/2014/12/20/fauxtribution/). Now, imagine having to get access to a system that was used to attack you, that is in a country that doesn’t speak your native language or perhaps doesn’t have good diplomatic ties with your country. Let’s not even talk about the possibility that they may have used more than one system to hide their tracks, or the fact that there may be no forensic data on these systems that assists in the investigation. This is why attribution is a nightmare.

Krypt3ia: See my answers above.

Viss: Because professionals never get to see the data. And if they *DO* get to see the data, they get to deal with what DA explains above. It’s a giant shitshow and you can’t catch people breaking the law if you have to follow the law. That’s just the physics of things.

Ryan: DA gave a great example about why you can’t trust where the attack “comes from” to perform attribution. I’d like to give an example regarding why you can’t trust what an attack “looks like” either. It is not uncommon for nation-state actors to not only break into other nation-state actors’ networks and take their tools for analysis, but to also then take those tools and repurpose them for their own use. If you walk the dog on that, you’re now in a situation where the actor is using pre-compromised infrastructure in use by another actor, while also using tools from another actor to perform their mission. If Russia is using French tools and deploying them from Chinese compromised hop-points, how do you actually know it’s Russia? As I mentioned above, I believe you need the resources of a nation-state to truly get the information needed to make the proper attribution to Russia (ie: an intelligence capability). This makes attribution extremely hard to perform for anyone in the commercial sector.

 

 

How do organizations attribute attacks to nation states the wrong way?

Munin: Wishful thinking, trying to make an attack seem more severe than perhaps it really was. Nobody can blame you for falling to the wiles of a nation-state! But if the real entrypoint was boring old phishing, well, that’s a horse of a different color – and likely a set of lawsuits for negligence.

Lesley: From a forensics perspective, the number one problem I see is trying to fit evidence to a conclusion, which is totally contrary to the business of investigating crimes. You don’t base your investigation or conclusions off of your initial gut feeling. There is certainly a precedent for false flag operations in espionage, and it’s pretty easy for a good attacker to emulate a less advanced one. To elaborate, quite a bit of “advanced” malware is available to anybody on the black market, and adversaries can use the same publicly posted indicators of compromise that defenders do to emulate another actor like DA and Ryan previously discussed (for various political and defensive reasons). That misdirection can be really misleading, especially if it plays to our biases and suits our conclusions.

DA_667: Trying to fit data into a mold; you’ve already made up your mind that advanced nation-state actors from Elbonia want your secret potato fertilizer formula, and you aren’t willing to see it any differently. What I’m saying is that some organizations have a bias that leads them to believe that a nation-state actor hacked them.

In other cases, you could say “It was a nation-state actor that attacked me”, and if you have an incident response firm back up that story, it could be enough to get an insurance company to pay out a “cyber insurance” policy for a massive data breach because, after all, “no reasonable defense could have been expected to stop such sophisticated actors and tools.”

Krypt3ia: Firstly they listen to vendors. Secondly they are seeking a bad guy to blame when they should be focused on how they got in, how they did what they did, and what they took. Profile the UNSUB and forget about attribution in the cyber game of Clue.

Viss: They do it for political reasons. If you accuse Pakistan of lobbing malware into the US it gives politicians the talking points they need to get the budget and funding to send the military there – or to send drones there – or spies – or write their own malware. Since they never reveal the samples/malware, and since they aren’t on the hook to, everyone seems to be happy with the “trust us, we’re law enforcement” replies, so they can accuse whoever they want, regardless of the reality and face absolutely no scrutiny. Attribution at the government level is a universal adapter for motive. Spin the wheel of fish, pick a reason, get funding/motive/etc.

Coleman: All of the above are great answers. In my opinion, among the biggest mistakes I’ve seen not addressed above is asking the wrong questions. I’ve heard many stories about “attributions” driven by a desire by customers/leaders to know “Who did this?”, which 90% of the time is non-actionable information, but it satisfies the desires of folks glued to TV drama timelines like CSI and NCIS. Almost all the time, “who did this?” doesn’t need to be answered, but rather “what tools, tactics, infrastructure, etc. should I be looking for next?”. Nine times out of ten, the adversary resides beyond the reach of prosecution, and your “end game” is documentation of the attack, remediation of the intrusion, and closing the vulnerabilities used to execute the attack.

 

 

So, what does it really take to fairly attribute an attack to a nation state?

Munin: Extremely thorough analysis coupled with corroborating reports from third parties – you will never get the whole story from the evidence your logs get; you are only getting the story that your attacker wants you to see. Only the most naive of attackers is likely to let you have a true story – unless they’re sending a specific message.

Coleman: In my opinion, there can be many levels to “attribution” of an attack. Taking the common “defense/industrial espionage” use case that’s widely associated with “nation state attacks”, there could be three semi-independent levels that may or may not intersect: 1) Tool authors/designers, 2) Network attack/exploiters, 3) Tasking/customers. A common fallacy that I’ve observed is to mistake that a particular adversary (#2 from above) exclusively cares about espionage gathering specific data that they’ve been tasked with at one point. IMO, recognize that any data you have is “in play” for any of #2, from my list above. If you finally get an attacker out, and keep them out, someone else is bound to be thrown your way with different TTPs to get the same data. Additionally, a good rule as time goes on, is that all malware becomes “shared tooling”, and to make sure not to confuse “tool sharing” with any particular adversary. Or, maybe you’re tracking a “Poison Ivy Group”. Lots of hard work, and also a recognition that no matter how certain you are, new information can (and will!) lead to reconsideration.

Lesley: It’s not as simple as looking at IP addresses! Attribution is all about doing thorough analysis of internal and external clues, then deciding that they lead to a conclusion beyond a reasonable doubt. Clues can include things like human language and malicious code, timestamps on files that show activity in certain time zones, targets, tools, and even “softer” indicators like the patience, error rate, and operational timeframes of the attackers. Of course, law enforcement and the most well-resourced security firms can employ more traditional detective, Intel, and counterespionage resources. In the private sector, we can only leverage shared, open source, or commercially purchased intelligence, and the quality of this varies.

Viss: A slip up on their part – like the NSA derping it up and leaving their malware on a staging server, or using the same payload in two different places at the same time which gets ID’ed later at something like Stuxnet where attribution happens for one reason or another out of band and it’s REALLY EASY to put two and two together. If you’re a government hacking another government you want deniability. If you’re the NSA you use Booz and claim they did it. If you’re China you proxy through Korea or Russia. If you’re Russia you ride in on a fucking bear because you literally give no fucks.

DA_667: A lot of hard work, thorough analysis of tradecraft (across multiple targets), access to vast sets of data to attempt to perform some sort of correlation, and, in most cases, access to intelligence community resources that most organizations cannot reasonably expect to have access to.

Krypt3ia: Access to IC data and assets for other sources and methods. Then you adjudicate that information the best you can. Then you forget that and move on.

Ryan: The resources of a nation-state are almost a prerequisite to “fairly” attribute something to a nation state. You need intelligence resources that are able to build a full picture of the activity. Just technical indicators of the intrusion are not enough.

 

 

Is there a way to reliably tell a private advanced actor aiding a state (sanctioned or unsanctioned) from a military or government threat actor?

Krypt3ia: Let me put it this way. How do you know that your actor isn’t a freelancer working for a nation state? How do you know that a nation state isn’t using proxy hacking groups or individuals?

Ryan: No. Not unless there is some outside information informing your analysis, like intelligence on the private actor or a leak of their tools (for example, the HackingTeam hack). I personally believe there isn’t much of a distinction to be made between these types of actors if they are still state-sponsored in their activities, because they are working off of their sponsor’s requirements. Depending on the level of the sponsor’s involvement, the tools could even conform to standards laid out by the nation-state itself. I think efforts to draw these distinctions are rather futile.

DA_667: No. In fact, given what you now know about how nation-state actors can easily make it seem like attacks are coming from a different IP address and country entirely, what makes you think that they can’t alter their tool footprint and just use open-source penetration testing tools, or recently open-sourced bots with re-purposed code?

Munin: Not a chance.

Viss: Not unless you have samples or track-record data of some kind. A well-funded corporate adversary who knows what they’re doing should likely be indistinguishable from a government – especially because governments will usually hire exactly these companies to do that work for them, since they tend not to have the talent in house.

Coleman: I don’t think there is a “reliable” way to do it. Rather, for many adversaries, with constant research and regular data point collection, it is possible to reliably track specific adversary groups. Whether or not they could be distinguished as “military”, “private”, or “paramilitary” is up for debate. I think that requires very good visibility into the cyber aspects of the country / military in question.

Lesley: That would be nearly impossible without boots-on-ground, traditional intelligence resources that you and I will never see (or illegal hacking of our own).

Why don’t all security experts publicly corroborate the attribution provided by investigating firms and agencies?

DA_667: In most cases, disagreements on attribution boil down to:

  1. Lack of information
  2. Inconclusive evidence
  3. Said investigating firms and/or agencies are not laying all the cards out on the table; security experts do not have access to the same dataset the investigators have (either due to proprietary vendor data, or classified intelligence)

Munin: Lack of proof. It’s very hard to prove with any reliability who’s done what online; it’s even harder to make it stick. Plausible deniability is very much a thing.

Lesley: Usually, because I don’t have enough information. We might lean towards agreeing or disagreeing with the conclusions of the investigators, but at the same time be reluctant to stake our professional and ethical reputation on somebody else’s investigation of evidence we can’t see ourselves. There have also been many instances where the media jumped to conclusions which were not yet appropriate or substantiated. The important thing to remember is that attribution has nothing to do with what we want or who we dislike. It’s the study of facts, and the consequences for being wrong can be pretty dire.

Krypt3ia: Because they are smarter than the average Wizard?

Coleman: In my opinion, many commercial investigative firms are driven to threat attribution by numerous non-evidential factors. There’s kind of a “race to the top (bottom?)” these days for “threat intelligence”, and significant pressure on private companies to be first to report, as well as to show themselves to have unique visibility to deliver a “breaking” story. In a word: marketing. Each agency wants to look like it has more and better intelligence on the most advanced threats than its competition. Additionally, there’s an audience component to it as well. Many organizations suffering a breach would prefer to adopt the story line that their expensive defenses were breached by “the most advanced well-funded nation-state adversary” (a.k.a. “Deep Panda”), versus “some 13-year-olds hanging out in an IRC chatroom named #operation_dildos”. Because of this, I generally take a lot of public reporting conclusions with a grain of salt, and I’m more interested in the handful that actually report technical data I can act upon.

Viss: Some want to get in bed with (potential) employers, so they cozy up to that version of the story. Some don’t want to rock the boat, so they go along with the boss. Some have literally no idea what they’re talking about; they’re fresh out of college and they can’t keep their mouths shut. Some are being paid by someone to say something. It’s a giant grab bag.

Should my company attribute network attacks to a nation state?

DA_667: No. Oftentimes, your organization will NOT gain anything of value by attempting to attribute an attack to a given nation-state. Identify the Indicators of Compromise as best you can, and distribute them to peers in your industry or professional organizations who may have more resources for determining whether an attack was part of a campaign spanning multiple targets. Focus on recovery and hardening your systems so you are no longer considered a soft target.

Viss: I don’t understand why this would be even remotely interesting to average businesses. This is only interesting to the “spymaster bobs” of the world, and the people who routinely fellate the intelligence community for favors/intel/jobs/etc. In most cases it doesn’t matter, and in the cases it DOES matter, it’s not really a public discussion – or a public discussion won’t help things.

Lesley: For your average commercial organization, there’s rarely any reason (or sufficient data) to attribute an attack to a nation state. Identifying the type of actor, IOCs, and TTPs is normally adequate to maintain threat intelligence or respond to an incident. Be very cautious (legally / ethically / career-wise) if your executives ask you to attribute to a foreign government.

Munin: I would advise against it. You’ll get a lot of attention, and most of it’s going to be bad. Attribution to nation-state actors is very much part of the espionage and diplomacy game and you do not want to engage in that if you do not absolutely have to.

Ryan: No. The odds of your organization even being equipped to make such an attribution are almost nil. It’s not worth expending the resources to even attempt such an attribution. The gain, even if you are successful, would still be minimal.

Coleman: I generally would say “no”. You should ask yourselves: if you actually had that information in a factual form, what are you going to do? Stop doing business in that country? I think it is generally more beneficial to focus on threat grouping/clustering (if I see activity from IP address A.B.C.D, what have I historically observed in relation to it that I should look out for?) over trying to tie back to “nation-states”, or even to answer the question “nation state or not?”. If you’re only prioritizing things you believe are “nation-state”, you’re probably losing the game considerably in other threat areas. I have observed very few examples where nation-state attribution makes any significant difference as far as response and mitigation are concerned.

Krypt3ia: Too many try and fail.


Can’t we just block [nation state]?

Krypt3ia: HA! I have seen rule sets on firewalls where they try to block whole countries. It’s silly. If I am your adversary and I have the money and time, I will get in.

DA_667: No, and for a couple of reasons. By the time a research body or a government agency has released indicators against a certain set of tools or a supposed nation-state actor to the general public, those indicators are long past stale. The actors have moved on to using new hosts to hide their tracks, using new tools and custom malware to achieve their goals, and so on, and so forth. Not only that, but the solution isn’t as easy as blocking [the supposed malicious country’s IP address space]. A lot of companies that are targeted by nation-states are international organizations with customers and users who live in countries all over the world. Therefore, you can’t take a ham-fisted approach such as blocking all Elbonian IP addresses. In some cases, if you’re a smaller business with no users or customers from a given country (e.g., a local bank somewhere in Nevada would NOT be expecting customers or users to connect from Elbonia), you might be able to get away with blocking certain countries, and that will make it harder for the lowest tier of attackers to attack your systems directly… but again, given what you now know about how easy it is for a nation-state actor to compromise another system in another country, you should realize that blocking IP addresses assigned to a given country is not going to be terribly helpful if the nation-state is persistent and has high motivation to attack you.

Munin: Not really. IP blocks will kill the low bar attacks, but those aren’t really what you’re asking after if you’re in this FAQ, are you? Any attacker worth their salt can find some third party to proxy through. Not to mention IP ranges get traded or sold now and then – today’s Chinese block could be someone else entirely tomorrow.

Lesley: Not only might this be pretty bad for business, it’s pretty easy for any actor to evade using compromised hosts elsewhere as proxies. Some orgs do it, though.

Coleman: Depending upon the impact, sure, why not? It’s up to you to inform your leadership, and if your leaders are fine with blocking large blocks of the Internet that are sometimes the endpoint of an attack, then that’s acceptable. I’ve had some associates in my peer group who are able to successfully execute this strategy. Sometimes (3:30pm on a Friday, for instance) I envy them.

Ryan: If you’re not doing business outside of your local country and don’t ever care to, it couldn’t hurt. By restricting connections to your network to those coming from your home country, you will likely add some security. However, if your network is a target, doing this won’t stop an actor from pivoting from a location that is within your whitelist to gain access to your network.

Viss: Sure! Does your company do business with China? Korea? Pakistan? Why bother accepting traffic from them? Take the top ten ‘shady countries’ and just block them at the firewall. If malware lands on your LAN, it won’t be able to phone home. If your company DOES do business with those countries, it’s another story – but if there is no legitimate reason 10 laptops in your sales department should be talking to Spain or South Africa, then it’s a pretty easy win. It won’t stop a determined attacker, but if you’re paying attention to dropped packets leaving your network you’re gonna find out REAL FAST if there’s someone on your LAN. They won’t know you’re blocking until they slam headfirst into a firewall rule and leave a bunch of logs.
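
For readers who do decide to try this, here is a minimal sketch of what the plumbing might look like on a Linux netfilter firewall. It reads a plain-text list of CIDR blocks for whichever countries you have chosen to block (the file name and ipset name below are placeholders, not a standard feed), then prints ipset/iptables commands that log and drop outbound traffic to those ranges, so a compromised host trying to phone home shows up in your logs right away:

```python
#!/usr/bin/env python3
"""Sketch: turn a per-country CIDR list into egress-blocking firewall rules.

Assumes a plain-text file of CIDR blocks, one per line, for the countries you
have decided to block (for example, exported from a GeoIP database). The
ipset/iptables commands printed below are representative Linux netfilter
syntax; adapt them to whatever firewall you actually run.
"""
import ipaddress
import sys

SET_NAME = "geo_blocked"  # placeholder ipset name


def load_cidrs(path):
    """Yield validated networks so a typo in the list never becomes a bad rule."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            yield ipaddress.ip_network(line, strict=False)


def main(path):
    print(f"ipset create {SET_NAME} hash:net -exist")
    for net in load_cidrs(path):
        print(f"ipset add {SET_NAME} {net} -exist")
    # Log, then drop, outbound traffic to the blocked ranges: a beaconing host
    # on the LAN shows up in the firewall logs the moment it tries to phone home.
    print(f'iptables -A OUTPUT -m set --match-set {SET_NAME} dst '
          f'-j LOG --log-prefix "GEO-BLOCK "')
    print(f"iptables -A OUTPUT -m set --match-set {SET_NAME} dst -j DROP")


if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "blocked_country_cidrs.txt")
```

As everyone above points out, the block list itself won’t stop a determined attacker; regularly reviewing those dropped-egress log entries is the part that actually pays off.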


Hey, what’s with the Attribution Dice?

Ryan: I’m convinced that lots of threat intelligence companies have these as part of their standard report writing kit.

Lesley: They’re awesome! If you do purposefully terrible, bandwagon attribution of the trendy scapegoat of the day, infosec folks are pretty likely to notice and poke a little fun at your expense.

Krypt3ia: They are cheaper than Mandiant or Crowdstrike and likely just as accurate.

Coleman: In some situations, the “Who Hacked Us?” web application may be better than public reporting.

Munin: I want a set someday….

Viss: They’re more accurate than the government, that’s for sure.

DA_667: I have a custom set of laser-printed attribution dice that a friend had commissioned for me, where my Twitter handle is listed as a possible threat actor. But in all seriousness, the attribution dice are a sort of inside joke amongst security experts who deal in threat intelligence. Trying to do attribution is a lot like casting the dice.

What’s a Challenge Coin, Anyway? (For Hackers)

So what are these “challenge coins”?

Challenge coins come from an old military tradition that bled into the professional infosec realm, and then into the broader hacker community, through the continual overlap between the communities. In some ways like an informal medal, coins generally represent somewhere you have been or something you have accomplished. Consequently, you can buy some, and be gifted or earn others; the latter are generally more traditional and respected.

There are a few stories about how challenge coins originated in the U.S. military, and most have been lost to history and embellished over time, but I will tell you the tale as it was passed down to me:

During World War I, an officer gifted coin-like squadron medallions to his men. One of his pilots decided to wear it about his neck as we would wear dog tags today. Some time later, that pilot’s plane was shot down by the enemy and he was forced down behind enemy lines and captured. As a prisoner of war, he had all of his papers taken, but as was customary he was allowed to keep his jewelry, including the medallion. During the night, the pilot managed to take advantage of a distraction to make a daring escape. He spent days avoiding patrols and ultimately made his way to the French border. Unfortunately, the pilot could not speak any French, and with no uniform and no identification, the French soldiers assumed he was a spy. The only thing that spared him execution was showing them his medallion, upon which there was a squadron emblem the soldiers recognized and could verify.

Today, people who collect challenge coins tend to have quite a few more than just one.

What’s the “challenge”?

Challenge coins are named such because anybody who has one can issue a challenge to anybody else who has one. The game is a gamble and goes like this:

  • The challenger throws down their coin, thereby issuing a challenge to one or more people.
  • The person or people challenged must each immediately produce a coin of their own.
  • If any of the people challenged cannot produce a coin, they must buy a drink for the challenger.
  • If the people challenged all produce coins, the challenger must buy the next round of drink(s) for them.

Therefore, a wise person carries a coin in a pocket, wallet, or purse, at all times!

How do I get challenge coins?

As I mentioned before, the three major ways to get a challenge coin in the military and in the hacking community are to buy one, earn one, or be gifted one.

  • You can buy coins at many places and events to show you were there. Many cons sell them now, as well as places like military installations and companies. They’re a good fundraiser.
  • You can be gifted a coin. This is normally done as a sign of friendship or gratitude, and the coins gifted are normally ones that represent a group or organization like a military unit, company, non-profit, or government agency. The proper way to gift a coin is to pass it enclosed in a handshake.
  • You can earn a coin. Many competitions and training programs offer special coins for top graduates, champions, and similar accomplishments (similar to a trophy). This is the most traditional way to receive a coin.

How do I display my coins, once I have more than one?

On a coin rack or in a coin display case; you can find them on Amazon.


Can I make my own challenge coins? How much do they cost?

Yes. Lots of companies will mint custom challenge coins for you. The price varies drastically based on the number ordered, colors, materials, and complexity of the vector design.

Think about whether you plan to sell coins to people, gift them on special occasions, or make them a reward, and plan accordingly.

Can I see some examples of infosec / hacking challenge coins?

Sure! I hope you’ve enjoyed this brief introduction to challenge coins. Here are some of my friends and their favorite challenge coins:

[Photos of friends’ favorite challenge coins]
The $5 Vendor-Free Crash Course: Cyber Threat Intel

Threat intelligence is currently the trendy thing in information security and, as with many new security trends, it is frequently misunderstood and misused. I want to take the time to discuss some common misunderstandings about what threat intelligence is and isn’t, where it can be beneficial, and where it’s wasting your (and your analysts’) time and money.

To understand cyber threat intelligence as more than a buzzword, we must first understand what intelligence is in a broader sense. Encyclopedia Britannica provides this gem of a summary:

“… Whether tactical or strategic, military intelligence attempts to respond to or satisfy the needs of the operational leader, the person who has to act or react to a given set of circumstances. The process begins when the commander determines what information is needed to act responsibly.”

The purpose of intelligence is to aid in informed decision making. Period. There is no point in doing intelligence for intelligence’s sake.

Cyber threat intelligence is not simply endless feeds of malicious IP addresses and domain names. To truly be useful intelligence, threat intel should be actionable and contextual. That doesn’t mean attribution of a set of indicators to a specific country or organization; for most companies that is at best futile and at worst dangerous. It simply means gathering data to anticipate, detect, and mitigate threat actor behavior as it may relate to your organization. If threat intelligence is not contextual or is frequently non-actionable in your environment, you’re doing “cyber threat” without much “intelligence” (and it’s probably not providing much benefit).

Threat intelligence should aid you in answering the following six questions:

  1. What types of actors might currently pose a threat to your organization or industry? Remember that for something to pose a threat, it must have capability, opportunity, and intent.
  2. How do those types of actors typically operate?
  3. What are the “crown jewels” prime for theft or abuse in your environment?
  4. What is the risk of your organization being targeted by these threats? Remember that risk is a measure of both the probability of your being targeted and the harm that could be caused if you were.
  5. What are better ways to detect and mitigate these types of threats in a timely and proactive manner?
  6. How can these types of threats be responded to more effectively?

Note that the fifth question is the only one that really involves those big lists of Indicators of Compromise (IoCs). There is much more that goes into intelligence about the threats that face us than simply raw detection of specific file hashes or domains without any context. You can see this in good quality threat intelligence reports – they clearly answer “what” and “how” while also providing strategic and tactical intelligence.

I’m not a fan of the “throw everything at the wall and see what sticks” mentality of using every raw feed of IoCs available. This is incredibly inefficient and difficult to vet and manage. The real intelligence aspect comes in when selecting which feeds of indicators and signatures are applicable to your environment, where to place sensors, and which monitored alerts might merit a faster response. Signatures should be used as opposed to one-off indicators when possible. Indicators and signatures should be vetted and deduplicated. Sensibly planning expiration for indicators that are relatively transient (like compromised sites used in phishing or watering hole attacks) is also pretty important for your sanity and the health of your security appliances.
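
As a loose illustration of that vetting step, here is a minimal Python sketch of basic indicator hygiene: it merges records from multiple feeds, de-duplicates them, remembers which feeds corroborated each indicator, and ages out transient indicator types after an assumed time-to-live. The record layout, type names, and TTL values are illustrative assumptions, not any particular vendor’s schema:

```python
"""Sketch: de-duplicate indicators across feeds and expire transient ones."""
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Assumed lifetimes: transient indicator types (phishing domains, compromised
# watering-hole sites) age out quickly; file hashes can live much longer.
DEFAULT_TTL = {
    "phishing_domain": timedelta(days=14),
    "watering_hole_url": timedelta(days=7),
    "malware_sha256": timedelta(days=365),
}


@dataclass
class Indicator:
    value: str
    ioc_type: str
    first_seen: datetime
    sources: set = field(default_factory=set)

    def expired(self, now=None):
        now = now or datetime.utcnow()
        ttl = DEFAULT_TTL.get(self.ioc_type, timedelta(days=90))
        return now - self.first_seen > ttl


def merge_feeds(feeds):
    """feeds: mapping of feed name -> iterable of (value, ioc_type, first_seen)."""
    merged = {}
    for feed_name, records in feeds.items():
        for value, ioc_type, first_seen in records:
            key = (value.lower(), ioc_type)
            if key in merged:
                # Duplicate across feeds: keep one copy, note the extra source.
                merged[key].sources.add(feed_name)
                merged[key].first_seen = min(merged[key].first_seen, first_seen)
            else:
                merged[key] = Indicator(value.lower(), ioc_type, first_seen, {feed_name})
    # Drop anything past its useful life before it clutters your appliances.
    return [ind for ind in merged.values() if not ind.expired()]
```

Indicators corroborated by several feeds, or covered by a signature rather than a one-off atom, are usually the ones worth keeping closest to your sensors.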

So, how do you go about these tasks if you can’t staff a full-time threat intelligence expert? Firstly, many of the questions about how you might be targeted and what might be targeted in your environment can be answered by your own staff. After your own vulnerability assessments, bring your risk management, loss prevention, and legal experts into the discussion (as well as your sales and development teams, if you develop products or services). Executive buy-in and support is key at this stage. Find out where the money is going to and coming from, and you will have a solid start on your list of crown jewels and potential threats. I also highly recommend speaking to your social media team about your company’s global reputation and any frequent threats or anger directed at them online. Are you disliked by a hacktivist organization? Do you have unscrupulous competitors? This all plays into threat intelligence and security decisions.

Additionally, identify your industry’s ISAC or equivalent, and become a participating member. This allows you the unique opportunity to speak under strict NDA with security staff at your competitors about threats that may impact you both. Be cognizant that this is a two-way street; you will likely be expected to participate actively as opposed to just gleaning information from others, so you’ll want to discuss this agreement with your legal counsel and have the support of your senior leadership. It’s usually worth it.

Once you have begun to answer questions about how you might be targeted, and what types of organizations might pose a threat, you can begin to make an educated decision about which specific IoCs might be useful, and where to apply them in your network topology. For instance, most organizations are impacted by mass malware, yet if your environment consists entirely of Mac OS, a Windows ransomware indicator feed is probably not high in your priorities. You might, however, have a legacy Solaris server containing engineering data that could be a big target for theft, and decide to install additional sensors and Solaris signatures accordingly.

There are numerous commercial threat intelligence companies who will sell your organization varying types of cyber threat intelligence data of varying quality (in the interest of affability, I’ll not be rating them in this article). When selecting between paid and free intelligence sources (and indeed, you should probably be using a combination of both), keep the aforementioned questions in mind. If a vendor’s product will not help answer a few of those questions for you, you may want to look elsewhere. When an alert fires, a vendor who sells “black box” feeds of indicators without context may cost you extra time and money, while conversely a vendor who sells nation-state attribution in great detail doesn’t really provide the average company any actionable information.

Publicly available sources of threat intelligence data are almost endless on the internet and can be as creative as your ability to look for them. Emerging Threats provides a fantastic feed of free signatures that include malware and exploits used by advanced actors. AlienVault OTX and CIRCL’s MISP are great efforts to bring together a lot of community intelligence in one place. Potentially useful IoC feeds are available from many organizations like abuse.ch, IOC Bucket, SANS ISC DShield, and MalwareDomains.com (I recommend checking out hslatman’s fairly comprehensive list). As previously noted, don’t discount social media and your average saved Google search as great sources of intel, either.

The most important thing to remember about threat intelligence is that the threat landscape is always changing – both on your side and the attackers’. You are never done with gathering intelligence or making security decisions based on it. You should touch base with everybody involved in your threat intelligence gathering process on a regular basis, to ensure you are still using actionable data in the correct context.

***

In summary, don’t do threat intelligence for the sake of doing threat intelligence. Give careful consideration to choosing intelligence that can provide contextual and actionable information to your organization’s defense. This is a doable task, possible even for organizations that do not have dedicated threat intelligence staff or budgets, but it will require some regular maintenance and thought.


Many thanks to the seasoned Intel pros who kindly took the time to read and critique this article: @swannysec, @MalwareJake, and @edwardmccabe

I highly recommend reading John Swanson’s work on building a Threat Intel program next, here.

Why do Smartphones make great Spy Devices?

There has been extensive, emotional political debate over the use of shadow IT and the misuse of mobile phones in sensitive areas by former US Secretaries of State Colin Powell and Hillary Clinton. There is a much-needed and very complex discussion we must have about executive security awareness and buy-in, but due to extensive misinformation I wanted to briefly tackle the issue of bringing smartphones into sensitive areas and conversations (and why it is our responsibility to educate our leadership to stop doing this).

This should not be a partisan issue. It underscores a pervasive security issue in business and government: if employees perceive security controls as inexplicably inconvenient, they will try to find a way to circumvent them, and if they are high enough level, their actions may go unquestioned. This can happen regardless of party or organization, and in the interest of security, information security professionals must discuss these cases in a non-partisan way to try to prevent them from recurring.

That being said, let’s talk briefly about why carrying smartphones into sensitive business or government conversations matters, and why it is a particularly bad habit that needs to be broken.

There are two things to remember about hackers. The first is that we’re as lazy (efficient?) as any other humans, and we will take the path of least resistance to breach and move across a network. Instead of uploading and configuring our own tools on a network to move laterally and exfiltrate data, we will reach for the scripting and integrated tools already available on the network. In doing so, smart hackers accomplish a second and much more critical objective: limiting the number of detectable malicious tools in an environment. Every piece of malware removed from an infiltration operation is one less potential antivirus or intrusion detection system alert, and one less layer of defense in depth that is effective against the attackers. An intrusion conducted using trusted and expected administrative tools and protocols is very hard to detect.

These same principles can apply to more traditional audio and video surveillance. In the past, covert surveillance devices had to be brought into a target facility via human intervention (for instance, carried in by an operative, placed through a bribe, or covertly planted on a person or delivery). The decades of history (that we know of) surrounding bugs are fascinating – they had to be engineered to pass through intensive security measures and remain in target facilities without notice. In the pre-transistor era and the early era of microelectronics, this was a complex engineering feat indeed.

Personal communication devices, and to a greater extent smartphones, are a game changer. Every function that a Cold War-era industrial or military spy could want of a bug is a standard feature of the smartphones that billions of people carry everywhere. Most have excellent front- and rear-facing cameras. They have microphones capable of working at conference-phone range. They have storage capable of holding hours of recording, multiple radio transmitters, and integrated GPS. James Bond’s dream.

More importantly than any of this, smartphones tend to run one of three major operating systems, which are commercially available globally and exhaustively studied for exploits by every sort of hacker. Some of these exploits are offered to the highest bidder on the black market. Although the vulnerability of smartphone operating systems varies by age and phone manufacturer, each is also vulnerable to social engineering and phishing through watering hole attacks, email, text messages, or malicious apps.

Why expend the effort and risk to get a bug into a facility and conceal it when an authorized person brings such a fantastic, exploitable surveillance device in knowingly and hides it themselves? If the right person in the right position is targeted, they may not even be searched or reprimanded if caught.

There’s been a lot of discussion about countermeasures against compromised smartphones. Unfortunately, even operating inside a Faraday cage that blocks all communication is not effective, because eventually the phone leaves. A traditional covert device may not. As with the USB devices used to deploy Stuxnet, a trusted air gap is broken the moment an untrusted device can pass across it. A compromised phone can simply be instructed to begin recording audio when its cellular signal is lost, and upload the recording as soon as that connection is restored. Turning off the devices is also not particularly effective in the era of smartphones with irremovable batteries.

Yes, of course it’s still possible to put a listening device in a remote control or a light fixture. Surreptitious hacking tools used to compromise networks on site can still function this way. But why expend the substantial effort and risk of installing, communicating with, and removing them if there’s an easier way?

This is not to say it’s time to put on our tin foil hats and throw out our phones. Most people are probably not individual targets of espionage, and using smartphones with current updates and good security settings is decent protection against malware. However, there are people all over the world who are viable targets for industrial or nation-state espionage, either for their own position or for their access to sensitive people, information, or places. If you are informed by a credible authority that you may be targeted and should not bring your smartphone into a particular area, please take this advice seriously and consider that your device(s) could be compromised. If you suspect that there is another valid reason that you could be targeted by industrial or nation state espionage, leave your phone outside. It is generally far simpler to compromise your smartphone than it would have been to break into your office and install a listening device.