Ask Lesley InfoSec Advice Column: 2017-03-16

This week, I address some burning questions about education and training.  As always, submit your problems here!


Dear Lesley,

Let’s cut to the chase. I hate coding. I don’t enjoy building things from scratch. I do, however, love taking things apart, and would probably be able to learn to code if I started in that direction.

I currently work as a Linux sysadmin in the web industry, with a couple certs (and 4 years) under my belt so far. I love infosec and want to move in that direction, but I have no idea where to start, given my utter distaste for traditional methods to teach coding.
Do I just… download some arbitrary code and take it apart? That seems like a horribly insecure idea, but I’m just not sure where to start. I also tend to have serious issues with confidence in everything, especially tech. Please help!

– Flustered and floundering

Dear Flustered,

I don’t like coding, either. It’s actually not uncommon in infosec – we tend to like rapidly changing environments instead of the routine patience involved in coding. I’ve spoken to many ex-programmers and ex-CS students who agreed.

I see two routes you can go if you think anything like me:

  1. The scripting route: Many, many blue team and red team tools are Python and Ruby based, and many of them are extensible by design. Pick offensive or defensive security, then choose a tool set in one of these common languages that interests you. (For me, it was the Volatility framework.) Take apart a few existing scripts and see how they function in real life. Then pick some interesting feature to add with a script of your own (there’s a small sketch of what I mean after this list). This won’t necessarily teach you how to write a stellar production application, but for most security roles, scripting is what you need.
  2. The reversing route: If analyzing malware piques your interest, that’s a great way to learn how software works all the way down to the assembly level. The intrigue can be a great motivator to learn. Definitely don’t pick commodity malware that’s out in the wild today to analyze – it’s purposely hard to reverse! Start with a book like Practical Malware Analysis or Malware Analyst’s Cookbook that has detailed, step-by-step tutorials from the very basics. Learning how to take something apart can be a great way to learn how to put it together, and you’ll definitely figure out which fundamentals you need to brush up on along the way.
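To make the scripting route in option 1 a little more concrete, here’s a minimal sketch of the kind of glue script I mean. It reads the saved text output of a memory analysis tool (think a Volatility pslist-style process listing) and flags process names that aren’t on a small baseline list. The file name, column layout, and baseline below are all invented for illustration (adjust them to whatever your tool actually prints), but the point is that a genuinely useful first script is often just a few dozen lines of parsing somebody else’s output.

```python
# Hypothetical example: flag unexpected process names in a saved,
# pslist-style text report. The file name, column layout, and the
# baseline set below are assumptions for illustration only.
EXPECTED = {"System", "smss.exe", "csrss.exe", "wininit.exe",
            "services.exe", "lsass.exe", "svchost.exe", "explorer.exe"}

def flag_unexpected(report_path):
    findings = []
    with open(report_path, "r", errors="replace") as report:
        for line in report:
            fields = line.split()
            # Skip headers/separators; assume data rows start with an offset.
            if len(fields) < 2 or not fields[0].startswith("0x"):
                continue
            name = fields[1]  # assumes the process name is the second column
            if name not in EXPECTED:
                findings.append(name)
    return findings

if __name__ == "__main__":
    for name in flag_unexpected("pslist_output.txt"):
        print("Unexpected process:", name)
```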

Dear Lesley,

Looking into the future… what would you guess would be the safest career path/area to focus on now in security, considering the growth in available off-the-shelf tools to get the job done? Would penetration testers still be needed, for example, in 10-20 years’ time?

–  Spinner.


Dear Spinner,

First off, no guarantees – I’m not clairvoyant. There definitely is something of an infosec bubble as more people enter degree programs. However, there’s a caveat – being a great hacker is a personality trait, not a skill that can be taught academically. If you’re innovative and adaptable, I sincerely doubt you’ll have trouble finding work in that time frame.

In terms of automation, some tasks automate better than others. Unfortunately, the one that automates best is the entry level security analyst gig. Merely passing the Security+ and being able to read and route SIEM events may not cut it in a couple of years. You’ll need creativity and a broader skill set. More advanced defensive and offensive roles will require human attention for the foreseeable future because attackers innovate constantly. While a magic black box may pick up a new zero-day, remediating it and understanding the impact and additional factors is more complicated.

Security engineering continues to become more automated. The need for people to simply maintain static blocklists, signatures, or firewall rule sets will continue to decrease. Those jobs are trending towards more advanced SIEM and log aggregation management.

The jobs I see in the most demand with the least supply right now are malware reversing at an assembly level, threat intelligence with an actual political science or foreign studies background, and higher level exploit research (coupled with good business and communication skills).


Dear Lesley,

How does one begin exploring the world of sec without coming off as a script kiddie or just wanting to be an “edgy hacker”?     

– Careful but eager beaver


Dear Careful but Eager,

I’m really sad you feel that you have to ask that question, because merely asking it means you probably aren’t the type you’re concerned about. How do you know if you’re skidding it up? You enter commands into a hacking tool with no idea what they are doing, and much more importantly, no interest in knowing what they are doing. Being a good hacker has nothing to do with pwning stuff. It has to do with understanding how lots of stuff works and being able to manipulate that to your advantage.  (I should put that at the top of my blog in huge red letters!)

Imagine you’re a secret agent, needing to break into a vault. You can take one other person with you. Person 1 is another agent who has read a few books on how the vault works. Person 2 is the engineer who has been installing and maintaining the vaults for 30 years and has agreed to help you. Who do you pick? I’d pick the second person, who knows the system inside and out. I can teach her to sneak around a little and how to wear a disguise. Person 1 doesn’t know the foibles of the vault and only knows how to attack it the way the books said.

To summarize, your skid check is how many commands you enter in Kali or SIFT or whatever without bothering to figure out what the heck you are doing. When you’re learning, the goal is understanding that, not getting a shell.

You shouldn’t care what you come off as. If you’re genuinely interested in learning, plenty of hackers will be willing to help you.


Dear Lesley,

(tl;dr at the very last line)

I am a novice who is looking to break into the field of security. Currently, I have received an offer to read a book (The Web Application Hacker’s Handbook) and participate in an assessment to show if I can perform the work necessary to do the job. Essentially, the assessment (from what I’ve gathered) is to assess the security of a vulnerable web application and then reverse a protocol.

Coming from a mathematics background with limited formal education in computer science and no formal education in networking, the book is hard to digest. I have set up pen test labs such as DVWA and WebGoat which I am practicing with, and I have made surprisingly good progress in these labs. I’ve also learned a little bit about networking through much trial and error in setting these labs up in safe environments!

However, I fear that even if I pass the assessment, I will not be offered a position due to my lack of networking knowledge. I am aware of certifications such as OSCP and Security+ to bolster my background, but they suggest a solid understanding of networking before enrollment in the courses or studying for the examinations.

Do you have any recommendations on books/courses/certifications that would take an individual from zero-knowledge of networking to the suggested level of networking knowledge for these kinds of security certifications?

– Not a smart man


Dear Smart Man (I refuse, because it’s untrue!),

It really sounds like you’re doing everything right. You have correctly recognized that solid TCP/IP knowledge is really important in security. The lab is fab. But you can do other things in that lab, like take a step back from the security tools and concentrate on the networking ones. How long have you spent in Wireshark, just observing and filtering through network traffic? Simply watching what’s going on and identifying common ports and protocols can be huge. What does opening a website look like, and why? What does a ping look like? What does it look like when a new computer is connected to the network?
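If you want a scripted companion to that Wireshark time, here’s a rough sketch (assuming Python with the Scapy library installed, and a pcap you captured yourself on your own network) that tallies the destination ports in a capture. Writing and tweaking something like this is a nice forcing function for learning what the protocol layers actually are.

```python
# Rough sketch: tally TCP/UDP destination ports in a capture file.
# Assumes Scapy is installed (pip install scapy) and that the pcap
# is one you captured yourself; the file name is a placeholder.
from collections import Counter
from scapy.all import rdpcap, TCP, UDP

def port_summary(pcap_path):
    counts = Counter()
    for pkt in rdpcap(pcap_path):
        if TCP in pkt:
            counts[("tcp", pkt[TCP].dport)] += 1
        elif UDP in pkt:
            counts[("udp", pkt[UDP].dport)] += 1
    return counts

if __name__ == "__main__":
    for (proto, port), count in port_summary("home_traffic.pcap").most_common(10):
        print(f"{proto}/{port}: {count} packets")
```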

Certs (and associated books)… There are a lot of options in network land. Network+ is okay for fundamentals and really cheap (although an inch deep and a mile wide). The WCNA is the Wireshark-specific cert, but by nature it teaches a pretty in-depth level of knowledge of reading packets. It’s also quite affordable. If you have 600 bucks and free time, I’d do both (in that order) and blow those folks out of the water with your resume. If you don’t have those resources, they give you some great study materials to start with.

There are endless good books and blogs on TCP/IP out there that will get you started and give you an understanding of the OSI model and common ports and protocols. Hands-on experience in your lab or on your home network is much more important.


Ask Lesley InfoSec Advice Column: 2017-02-26

This week, we discuss red team and blue team self-study, getting kids interested in security, and security paranoia. As always, submit your problems here!


Dear Lesley,
I am a threat intelligence analyst who is underutilized in my current job, and I feel like my skills and tradecraft are slipping because of it. I want to give myself some fun projects to work on in my off-time but am not really sure where to start. What types of things would you recommend?
-M

Dear M,
You’re certainly in a great field to want to work in, in 2017. Not only do you have the whole pantheon of nation state actors conducting cyber operations to study, but also a huge range of commodity malware, botnets, insider threats, malware authors, and dark web markets.  If you’re not feeling inspired by anything in that list, perhaps reach out on intel-sharing lists or social media to see if an existing project could use your skill set? Lots of folks are doing non-profit threat research work and need extra hands.


Dear Lesley,
If you do not have the budget to send people to SANS or to conferences, what free supplemental resources would provide fundamental training for someone studying DFIR?
-Curriculum Writer

Dear Curriculum Writer,
I can totally appreciate not being able to send somebody to a thousand dollar (or more) commercial conference or training program. However, most BSides conferences are free (or under 20 dollars). I suppose if you are totally geographically isolated and there is no BSides in any city in driving distance, those may be impossible, but I would definitely explore the conference scene in detail before writing them off. Sending somebody to a BSides or a regional conference for the cost of gas and a few bucks provides a lot of value for the money.

Otherwise, a DFIR lab will be your best friend for self study. Unfortunately, I can’t guarantee a home lab will be totally free to implement. Let’s talk about some fundamental requirements:

– One or more test hosts running assorted operating systems.
– An examiner system running Linux
– An examiner system running Windows (recommended)
– Intermediate networking
– Free (or free non-corporate) forensics and malware analysis tools.
– A disk forensics suite
– A memory forensics suite
– A write blocker, associated cables, and drives.

An ideal comprehensive DFIR lab, where money is no object, might look something like:

– A host PC with 16GB (or more) RAM.
– VMWare Workstation
– Ubuntu (free), Windows 7, 10, and Server 2008 VMs
– A SANS SIFT Kit examiner VM (free)
– A REMnux Kit examiner VM (free)
– A Cuckoo Sandbox VM (free)
– A Server 2k8 examiner VM
– An EnCase or FTK forensics suite license
– A write blocker, associated cables, and a number of hard drives.

But, we can do it more cheaply, sacrificing convenience. We can virtualize with VirtualBox (losing the ability to take non-linear, branching snapshots), or on bare metal machines we scrounge from auctions or second hand stores (the least optimal solution). This can work, but every time we infect or corrupt a machine, we’ll have to spend time restoring the computers to the correct condition. We can stick with analyzing Windows versions that are out of support, but we won’t be totally up to date.
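If you go the VirtualBox route, a little automation takes some of the sting out of that restore cycle. Here’s a hedged sketch of a helper script (the VM names and snapshot name are placeholders, and it assumes VBoxManage is installed and on your PATH) that powers each lab VM off and rolls it back to a known-clean snapshot after an analysis run.

```python
# Sketch: reset VirtualBox lab VMs to a clean snapshot after an analysis run.
# The VM names and snapshot name are placeholders; assumes VBoxManage is on
# the PATH and each VM already has a snapshot named "clean-baseline".
import subprocess

LAB_VMS = ["win7-victim", "win10-victim", "ubuntu-victim"]
SNAPSHOT = "clean-baseline"

def reset_vm(name):
    # Power off first; this fails harmlessly if the VM is already off.
    subprocess.run(["VBoxManage", "controlvm", name, "poweroff"], check=False)
    subprocess.run(["VBoxManage", "snapshot", name, "restore", SNAPSHOT], check=True)

if __name__ == "__main__":
    for vm in LAB_VMS:
        print(f"Restoring {vm} to snapshot {SNAPSHOT}...")
        reset_vm(vm)
```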

One of the most difficult things for people studying the “DF” side of DFIR is the inability to get expensive licenses for industry-standard corporate forensics suites. There’s really no great solution for this. There are limited demo versions of this software that come with some forensics textbooks. The SANS SIFT Kit does include The Sleuth Kit, an open source suite which performs some similar functions.

Physical forensic toolkits aren’t cheap, but aren’t in the same ludicrous territory as forensics software. You can pick up an older used Tableau forensic bridge for about 150 dollars on eBay. Perhaps if you network within your local security meetup, somebody will be able to lend you one, as many college and training courses provide them.

Once we have something resembling a lab, we can follow along with tutorials on SecurityTube and on blogs, in forensics and malware reversing textbooks, in open courseware, and exploring on our own.


Dear Lesley,
I have a daughter whom I would like to encourage to go into IT, and possibly security if she’s interested. I know your father was influential in you getting into security. Do you have any suggestions for me as a dad on things I can do to encourage my daughter to become interested in IT and security?
-Crypto Dad

Hi Crypto Dad,

Yep, both of my parents had a big influence on my career! It’s a hard question to answer, but an important aspect was that they didn’t push me hard towards or away from hobbies. I was treated like a small adult and given the opportunity to follow along with whatever my dad was doing in his shop, and even at a very young age he answered my questions without patronizing me or getting frustrated. He didn’t dumb things down; he just started at the beginning. I always had access to stuff to learn how it worked and how it was made. By the time I found out I ‘wasn’t supposed to’ know or like things, I already knew and liked them.


Dear Lesley,
I’m a penetration tester who seems to be falling behind the times. My methods aren’t efficient. Recently I discovered there are better ways of doing things than my three-year-old SANS curriculum taught me. How can I stay current without becoming a lonely crazy old cat lady?
-Just a crazy cat lady

Hi Crazy Cat Lady,
You’re ahead of many folks by realizing there’s a problem. I see a lot of infosec people let their skills stagnate for many years after training or college, and our field changes really fast. No quick fix, but here are some suggestions:

– Participate in CTFs. Ignore the scoreboard and the dudebros and “rock stars”. Just compete against yourself, but do it genuinely and learn from your mistakes.
– Jump over to the blue team side for a bit and read some really thorough incident and threat reports from the past couple years. Sometimes seeing what other people are doing will give you interesting ideas of avenues to research.
– If you’re still reaching for Kali, escape its clutches. Kali is an amazing VM, but it will only take you so far and lacks some newer tools. It can also discourage thinking “out of the box” about how to compromise a network. After all, it is a box.
– Get out to cons to watch red team talks. Watch recent ones on YouTube, too. See what other folks are up to. Your cats will be okay for a couple days, and you’ll make new friends.
– PowerShell Empire. 💖💖💖
– Don’t be embarrassed to make mistakes and ask questions.
– Don’t be embarrassed to make mistakes and ask questions.
– Don’t be embarrassed to make mistakes and ask questions.


Dear Lesley,
How do you deal with overbearing paranoia in InfoSec? Example: I want my home network to be as secure as, if not more secure than, my work network… How can I explain my paranoia regarding outside threats (however unlikely), and how can I cope with it? 🙂
-Too Paranoid to enter my name

Hi Paranoid,

Fear is healthy in small doses. Fear keeps us alert to potential threats, and helps us survive dangerous situations. However, constant fear is not helpful and is patently unhealthy. If you see illusory threats in every dark corner, you won’t notice when a real one is there, and you’ll be too tired to respond properly to it.

You need to approach this as analytically as you can. Let’s talk about measuring real risk.

– Evaluate your assets. What would somebody genuinely target you for? This isn’t necessarily items or information, but could also include your job position or connections.
– Evaluate real threats to you. Who rationally has motive to “get you”, and do they have the means and the opportunity to?
– Evaluate your vulnerability. How could somebody attack you or your assets, and how much effort and resource would it take to do it? How well do you mitigate vulnerabilities? Are you a harder target than others facing similar threats?

Risk is a direct result of the level of threat against you and your assets, and your vulnerabilities. It’s impossible to change the level of threat. All one can do to change risk is change assets, or change vulnerabilities.

People make personal decisions about acceptable risk. A firefighter lives with a different level of risk than a librarian. The firefighter likely has to deal with occasional moments of quite rational fear and adrenaline (due to actual threats and vulnerability), but does not live in constant fear of burning buildings. The librarian might consider running into burning buildings an unacceptable level of risk, which is why he found a less risky profession. However, both people live comfortably with their overall risk and their mitigations, and not in irrational fear.

With all this in mind, consider the things that you’re paranoid about carefully. What is the real level of risk each poses? What level of real risk will you choose to accept on a daily basis? If your overall level of risk is actually too high to cope with on a daily basis, reduce your targeted assets, or reduce your vulnerabilities. If you find your level of risk acceptable, then maintain that level rationally and try not to be unduly afraid. You likely have more to fear from chronic health problems than nameless threats.

Ask Lesley InfoSec Advice Column: 2017-01-30

Thanks for another wonderful week of submissions to my “Ask Lesley” advice form. Today, we’ll discuss digital forensics methodology, security awareness, career paths, and hostile workplaces.


 

Dear Lesley,

I’m a recent female college graduate who didn’t study computer science but is working in technical support at a software company. The more I learn about infosec, the more curious and interested I get about whether this is the field for me. What resources/videos/courses/ANYTHING do you recommend for people who want to make a serious stab at learning infosec?

– Curious Noob

Dear Curious,

I’m really glad to hear you’re discovering a passion for infosec, because curiosity is really the most fundamental requirement for becoming a good hacker. I wrote a long blog series about information security careers which I hope you may find helpful in discovering niches and planning self-study. For brevity’s sake, here are some options for you.

  • Study up on any fundamental computer science area you’re underexposed to in your current work – that means Windows administration, Linux administration, TCP/IP, or system architecture. You need to have a good base understanding of each.
  • Get involved in your local CitySec, DEF CON group, or 2600 meetup. They are great networking opportunities and a fabulous place to find a mentor or people to study with. There are meetups all over the world in surprising places.
  • Consider attending an infosec / hacking conference. The BSides security conference in the nearest major city to you is a great option and should be very affordable (if not free). Attend some talks and see what speaks to you. Consider playing in the CTFs or other security challenges offered there, or at least observing.
  • Security Tube and Irongeek.com are your friends, with massive repositories of conference talk videos you can watch for free. Nearly any security topic that piques your interest has probably been spoken about at some point. I would favor those sites over random YouTube hacking tutorials which really vary in quality (and legality).
  • Consider building your own home lab to practice with basic tools and techniques. Networked VMs are adequate as long as you keep them segregated: Kali Linux and a Windows XP VM are a great place to start. You need to take stuff apart to learn about hacking.

These are only some brief suggestions – there’s no streamlined approach to becoming a great hacker. Get involved, ask questions, and don’t be afraid to break stuff (legally)!



Dear Lesley,

What do you do when you provide security awareness training to your employees, but they still click on phishing links!

– Mr. Phrustrated

Dear Phrustrated,

Beyond generally poor quality “death by PowerPoint” training, one of the biggest problems I see in corporate security awareness programs is poor, unsustainable measures of success. For instance, it’s become really trendy to conduct internal phishing tests to identify how many people click on a phish. It’s incredibly tempting to show off to executives that this number is trending down, but that metric is really pretty worthless.

No matter how ruthlessly they are trained, somebody (and anybody) will click on a sufficiently well-crafted phish, and it only takes one compromise to breach a network’s defenses. What we should be measuring is the reporting of phishing messages and good communication between employees and the security team. The faster we know an attack is underway, the faster we can respond and mitigate the threat.

In conclusion, you should be less concerned if “somebody is still clicking” phishing messages than if nobody is telling you they clicked, and they resist or lie in embarrassment when asked.


Dear Lesley,

Is there a mental checklist to use while doing digital forensics so that you don’t make the evidence point to your quick conclusions, even if you think you have seen a similar case?

– Jack Reacher Jr.

Dear Jack,

Identifying that this is a problem is a great first step. While intuition is an important part of being a good investigator, sound methodology is even more important. The checklist you use to collect evidence and perform an investigation is going to vary by where you work and what types of things you investigate, but you should always have and follow a checklist – and I recommend it be a paper checklist, not mental.

Don’t ever shortcut or skip steps, even when you’re in a high-pressure situation. Shortcuts and assumptions are incredibly dangerous to the legal and technical validity of investigations. Gather all the facts available to you at the time, and document every step you take so that a colleague (or a legal professional) can follow your work even far in the future.

Finally, always remember that in a digital forensic investigation we are generally providing evidence to reach conclusions about “what, when and how”. “Who” is shaky ground, because in most cases it involves context outside the digital device. “Why” is almost never the business of a forensic analyst (and is indeed often not within the capacity of a company to responsibly answer). If you find yourself looking for evidence to fit a presumed “why” scenario, you have a big problem and you need to step back.


Dear Lesley,

I’m this girl, like I said, who just started working in the field, and for the past 4 months I have worked at this huge corporation, which has, among other services, an information security related one, offering technical security (pen testing, …) and non-technical security services. At that time, I had little knowledge of advanced hacking techniques as well as the good practices that should be followed to secure our systems.

During the first weeks I got hacked by someone who’s working with me, and I have been harassed and shamed by them since then. I knew it because this person would talk about their findings to everyone, even to non-technical people, in the corporation. People would look at me and laugh, smile, smirk, or look at me pathetically, in addition to other situations.

Knowing that this person is an expert (12 or more years working in information security) and that I don’t have any proof of their actions, what should I do, in your opinion? What kind of advice would you give to girls and women like me, who want to work in the field but get harassed by their experienced co-workers instead of being encouraged by them?

– I

Dear I,

Your story gave me pause enough to discuss it substantially with several colleagues in information technology who have also worked in extremely hostile environments.

This is a horrific situation. I want to make it crystal clear that this is utterly shameful on the part of your employer, your infosec colleagues, and your organization’s corporate culture. I truly hope it does not drive you from our field. The most important thing I can tell you is that this is not your fault, and this is not normal.

The first thing I recommend you do is document everything that’s happening in as much detail as possible, even if you don’t feel you have evidence right now. The activity you’re talking about may not only be harassment, but violate hacking laws. Since device compromise is a concern, please maintain this documentation offline.

What you do next depends on factors you don’t mention in your note. First of all, if you have a trusted supervisor, manager outside your team, or senior mentor in your organization, please turn to them for assistance and ensure they are corroborating what has been happening to you on paper. It’s their responsibility to assist you in resolving the issue at a work center or corporate level, even if they’re not directly in your reporting chain.

If there’s nobody at all you can go to in confidence, the situation becomes substantially more unpleasant. Your options are to ignore the behavior and stick out the requisite ~2 years of entry level security at the organization (obviously the worst option), seek employment elsewhere, or contact an HR representative (with the risk of retribution and legal battles that can bring). Obviously, my personal recommendation is taking you and your computer straight to HR. As a wise colleague of mine pointed out, this is most likely not an isolated incident – the behavior and dismal culture will continue for you and others. Sadly, in some places in the world with fewer employment protections, this can carry the risk of termination. Keep in mind that it is okay to confidentially consult a lawyer within the terms of your employment contract, and pro bono options may be available.

If HR / legal action is not an option, you can’t find employment elsewhere, and you’re toughing it out to build entry level experience, please network and find a local mentor and support structure outside of your company as soon as possible. As well as much needed emotional support, these people could help you study, network, bite back, and explore other recourse against the employer. Feel free to reach out to me anonymously and we’ll try to connect you with somebody in your area.

Best,
Lesley

Ask Lesley InfoSec Advice Column: 2017-01-19

Thanks for your interesting question submissions to “Ask Lesley”! This column will repeat, on no specific schedule, when I receive interesting questions that are applicable to multiple people. See further details or submit a question here. Without further ado, today we have OS debates, management communication issues, nation state actors, and career questions galore!



Dear Lesley,

So last year’s Anthem breach was from a nation state – why would a nation state want to hack health insurance info? I understand the identity theft motivation of a criminal, but why do you think a nation state would want this type of data?

– Inquisitive

Dear Inquisitive,

First off, I can’t confirm the details of the Anthem breach – I wasn’t involved in the investigation and haven’t had the privilege of reviewing all the evidence. However, when generally talking about why a state-sponsored actor might want to acquire data, you have to look at a bigger picture than data sets. Nation states usually view hacking as a means to an end. They (ab)use data with a firm political or military objective in mind. Whether a nation state intended to steal 80 million records, or the theft was a crime of opportunity when looking for something more specific, what they stole may unfortunately be useful to them for years to come.

You can obviously already see how the data stolen in a healthcare breach is a treasure trove for general identity theft. The piece I believe you might be missing considers how the data could be combined with other public domain and stolen information to facilitate political objectives. If you already have a target in mind, healthcare data could be a great boon to social engineering, blackmail, and surveillance efforts. For example, consider how much leverage knowing that a target’s child is ill could provide. Or that a target family is hundreds of thousands of dollars in medical debt. These are attractive attack vectors. I can only speculate on potential scenarios, but based on my experience in OSINT, the data stolen from Anthem adds attractive private information about many millions of people.

 


Dear Lesley,

The ‘researcher’ portion of ‘security researcher’ implies graduate school – is PhD study in cybersecurity worth it? There don’t seem to be many programs that are worthwhile (except on paper only)

– Not in Debt, Yet


Dear Not in Debt, Yet,

That’s an interesting implication – not one I necessarily agree with based on empirical evidence. I know full time, professional security researchers studying everything from exploits to governance who have every level of formal education, from GEDs to PhDs.  I do see certain fields of security research represented in higher education more than others – a couple examples are high level cryptography and electronic engineering.

I have always been an advocate for higher education and I see little harm and many benefits in getting a good education in a field you enjoy (particularly a well-rounded education) if you can afford it. However, at present, there are very few information security careers or communities of research which require a degree, and fewer good quality degree programs. You should see few credential-related barriers to participating in or publishing security research if your work and presentation are good quality.

In some ways, existing exclusively in academia can also make it harder to work in practical security research, as the security field changes more quickly than university curricula can keep up. As a result, some academic security research ends up impractical and theoretical to a fault. (See my yearly rants on steganography papers.) If you go the academic route, choose your field of study carefully, and be careful not to lose touch with the working world.


Dear Lesley,

While working on my 5 BILLION dollar data breach, I wanted some blue cheese dip and chips (The Spice House in Chicago has the best mix, btw), and a co-worker looked at me with disgust. Am I wrong? Also, what’s a good resource to learn about file carving?

– Epicurean EnCE

Dear Epicurean,

Clearly, your coworker is a Ranch dressing fan and should therefore be looked upon with disdain. In regard to file carving, your mission (should you choose to accept it) is to review how files are physically and logically stored on a hard drive. Next, you’ll want to start familiarizing yourself with typical file headers and footers. Gary Kessler has a pretty killer list, here. Some file types will be more relevant to your specific work in forensics than others; I can’t tell you which those will be. Your best bet is to pick a couple of file types you look at a lot and examine them in a hex editor, then start searching for them in a forensic image.
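As a deliberately simplified illustration of header/footer carving, here’s a small Python sketch that scans a raw practice image for JPEG signatures (FF D8 FF at the start, FF D9 at the end) and writes out each candidate it finds. Real carvers like foremost, scalpel, or PhotoRec handle fragmentation, size limits, and many more formats; the image path here is just a placeholder.

```python
# Simplified illustration of header/footer carving: pull JPEG candidates
# out of a small practice image. The image path is a placeholder, and real
# carving tools handle far more edge cases than this.
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(image_path, max_size=10 * 1024 * 1024):
    with open(image_path, "rb") as image:
        data = image.read()  # fine for small practice images only
    carved = 0
    start = data.find(JPEG_HEADER)
    while start != -1:
        end = data.find(JPEG_FOOTER, start)
        if end == -1:
            break
        end += len(JPEG_FOOTER)
        if end - start <= max_size:
            with open(f"carved_{carved:04d}.jpg", "wb") as out:
                out.write(data[start:end])
            carved += 1
        start = data.find(JPEG_HEADER, end)
    return carved

if __name__ == "__main__":
    print(f"Carved {carve_jpegs('practice_image.dd')} candidate JPEGs")
```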

Brian Carrier’s File System Forensics book, while a bit older, is still a stellar resource for understanding How Disk Stuff Works. SANS SIFT kit includes the tools you will need to get started carving files from disk, and the associated cheat sheets will help with the commands.

If you want to carve files from packet captures, similar header/footer knowledge is required, along with a different tool set. Wireshark’s export alone will often suffice; if it fails, look at NetworkMiner.


Dear Lesley,

What was the silliest / dumbest thing you’ve googled this week?

– Curious in Cincinnati


Dear Curious,

“The shirt, 2017”

I still don’t get what’s up with that.

 


Dear Lesley,

I teach high school computer science courses, and many students’ biggest interest is infosec stuff. What should they do to prepare at that age? Any recommendations on software or skills I can teach them? I’m willing to put in the time and effort to learn things to teach, and we have class time, but this isn’t what my tech career focused on, so I need some help. Thank you, you’re the best!

– Mentor in Michigan

Dear Mentor,

Being a crummy hacker requires learning to use a few tools by following YouTube. Being a good hacker requires a great deal of foundational knowledge about other, less entertaining computer stuff.

The better one knows how computer hardware, operating systems, and networks work, the better he or she will be at hacking. If kids come out of your classes unafraid of taking their own software and hardware apart, you did your job right. That means a lot of thinking about how Windows and Linux function, how computer programs work all the way down to Assembly, and how data gets from point A to point B. If you are going to encourage kids to take stuff apart, make sure they also understand that law and ethics are involved. Provide them a safe and legal sandbox to explore, and explain why it’s important to know how to break things in order to fix them.

As an aside – by high school, kids are more than old enough to be actively participating in the infosec community if they wish. Numerous kids and teens attend and even present at hacker events these days; in fact, many conferences have educational events and sponsorships specifically for youth.

 


Dear Lesley,

 I normally use a Chromebook, but I also have to use Windows 10 so that I can use Cisco packet tracer (I’m studying CCNA). I really trust the security of my Chromebook, but Windows 10 – not so much. I have antivirus, anti-exploit and anti-ransomware software on my Windows laptop. But my question to you is: Is there a resource that you know of that can help lock down Windows 10 for the home user? Most of what I find is for enterprises and Enterprise versions of Windows 10 and if I do find something for the home user it invariably talks about privacy rather than security.

–  Kerneled Out


Dear Kerneled Out,

The OS wars, while somewhat befuddled by 2016, are alive and well. There are dogmatic Linux fans, and dogmatic Windows fans, and so on and so forth. My opinion is that every OS has its place when used correctly by the right person. Many serious security people I know use every major OS on a daily basis – I sure do.

Swift On Security has a nice guide here on securing Windows 10 that should suit your needs.

As for Chrome over Windows – please don’t fall into the “security by obscurity” trap that MacOS and Chrome can encourage. They are both solid OSes with interesting ideas on security, and viable choices for home and business use cases. However, modern versions are not inherently more or less secure than modern Windows. MacOS, Windows, Chrome, and major Linux distros are as secure as they are configured and used by human beings. Of course, the complexity of configuring them can vary based on user experience and training.

 


Dear Lesley,

How come everyone wants 5 years experience for an entry level infosec job? I’ve been trying to get gainful employment in an offensive role for more than 6 months and no one wants anyone with less than 5 years of pentesting/red teaming experience. Can’t exactly do pen tests until you’re a pentester, so what do I do?

– Frustrated

Dear Frustrated,

I’m sorry to hear you’re having so much trouble finding a position. I have written quite a lot about infosec career paths and job hunting in previous blogs, and I hope that they can assist you a little. Red teaming is unfortunately much harder and more competitive to find work in than Blue teaming, so my suggestions here are not going to be particularly pleasant:

  • Consider your willingness to move. There are simply more red team jobs in places like DC and the west coast.
  • Consider if you can take a lower-paid internship. It sucks, but it’s an in, and pen testing firms do offer them.
  • Consider doing blue team SOC work for a couple years. It’s not exactly your cup of tea, but it will give you solid security experience.
  • Network like crazy. Get to the cons and the meet-ups in person. Talk to people and build relationships.
  • Do research and speak about it. Pick something that intrigues you, even if you have no professional experience, put in a few months’ work, and submit it to a CFP. It will get you name recognition.

Dear Lesley,

Many infosec professionals feel that signature-based antivirus is dead. If that is the case… what do you recommend we replace it with to protect our most vulnerable endpoints (end users)?

– Sigs Uneasy

Dear Sigs,

That’s the kind of black and white statement that makes a good headline, but exaggerates the truth a bit. Yes, there are a couple companies who have been able to ditch antivirus because of their topology and operations. The vast majority still use it. While signatures alone don’t cut it against quickly replaced and polymorphic threats, other antivirus features, such as HIPS and heuristics, still provide a benefit. (So, if you’re still using some kind of antivirus that can’t do those things, it’s time to upgrade.)

Antivirus today is useful as part of a “defense in depth” solution. It is not a silver bullet, and it’s certainly defeatable. However, it still catches mass malware and the occasional targeted threat. The threats AV misses should be caught by your network IPS, your firewall, your web filters, your application whitelisting solution, and so forth. None of those solutions is bulletproof alone, and even the efficacy of trendy solutions like whitelisting is limited if you don’t architect and administer your network securely.


Dear Lesley,

I was testing a network and found some major flaws. The management doesn’t seem too bothered but I feel the issues are huge. I want to out them because these flaws could impact many innocent people. But if I do, I won’t be hired again. I look forward to your response.

– Vaguely Disturbed

Dear Disturbed,

Before whistle-blowing and potentially getting in legal trouble, I highly recommend you approach this argument from a solid risk management perspective. Sometimes, “it could be hacked” means a lot less to management than, “9 companies in our industry were breached in 2016, and if we are, it will probably cost us over 70 million dollars in lost revenue”. If you have access to anybody with a risk analysis background you can reach out to under the relevant NDA, I highly recommend you have a chat with them and put together a quantified, evidenced argument, ASAP. The more dollar signs and legal cases, the better your chances of winning this.

At the very least, win or lose, ensure you’ve covered your butt. This means written statements and acknowledgements stating you clearly explained the potential risk and also that they willfully chose to ignore it. Not only does requiring a notarized signature make the issue look more serious, but it will be helpful in case they decide to blame you or your employer two years from now.

I would suggest you consult a lawyer before breaking NDA or employment contract by whistle blowing, no matter how noble your intentions. I am not a lawyer, nor do I play one on TV.


Dear Lesley,

I make software and web applications that connect to software and services from other companies. Sometimes those companies disable or cripple some features due to possible security exploits. When I’ve met with security people from those companies and asked them about the features they nerfed (disabled or crippled), I’m met with an awkward silence similar to the vague errors I get from their servers. As a developer, I’m so used to the open-source community that wants to help that this feels weird. Is there some certification, secret handshake, or specific brand of white fedora I need to have conversations with security people about their products’ security issues? Just trying to learn and grow, and not cause a mess for anybody.

– Snubbed

Dear Snubbed,

No secret handshake. Here are a couple suggestions from the receiving end of these types of concerns:

  • Set up a security lab with your applications and a client on it. Install a Snort or Suricata sensor (or several) with the free Emerging Threats ruleset in the midst of them to intercept their communication. (Security Onion is a nice, relatively easy to install option.) Send normal application traffic back and forth and see what security signatures are firing on the network (there’s a small sketch of reviewing those alerts after this list). That will give you some idea of what might be getting blocked before you even start the discussion (and help you reduce false positives).
  • Ensure your applications are getting proper vulnerability testing before release. Again, even if you’re coding securely and responsibly, this can help reduce false positive detection by vulnerability scanners or sensors.
  • Ask the security people what security products or appliances they are using on the hosts and on the network, and what signatures are firing. You might not have access to a 20,000 dollar security appliance to test, but their sensor might have full packet capture functionality or verbose logs that will help you troubleshoot.
  • Try to build a better professional relationship with these teams if you can. If they’re involved in a local security group, perhaps drop by and have a drink with them.
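To give a feel for what reviewing those alerts can look like, here’s a rough sketch that assumes Suricata was run with its EVE JSON logging enabled (writing an eve.json file) and simply tallies which alert signatures fired while you replayed application traffic. The log path and field names follow Suricata’s usual EVE output, but treat them as assumptions to verify against your own sensor.

```python
# Sketch: summarize which Suricata alert signatures fired during a lab run.
# Assumes Suricata wrote EVE JSON output; the log path is a placeholder for
# wherever your sensor actually logs.
import json
from collections import Counter

def alert_summary(eve_path="eve.json"):
    counts = Counter()
    with open(eve_path, "r") as log:
        for line in log:
            try:
                event = json.loads(line)
            except ValueError:
                continue  # skip partial or malformed lines
            if event.get("event_type") == "alert":
                counts[event["alert"]["signature"]] += 1
    return counts

if __name__ == "__main__":
    for signature, count in alert_summary().most_common():
        print(f"{count:5d}  {signature}")
```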

 


Dear Lesley,

I’m feeling it is time to move on from Windows XP, but only because many things no longer support it, and 3 GB is a bit limiting when running VMs and the like. I’ve tried Windows 10, and it is completely alien, and I worry about security – it streams things back to Microsoft, and is less secure than my hardened XP install. I’ve tried Mint Linux, and that was quite good, but underneath it is even more alien than Windows 10. I’ve heard of BSD, but I’m worried that my political career could be over if word about that got out, so I’ve not tried it. What do you suggest?

– Unsupported in UK

Dear Unsupported,

It is indeed high time to move off XP.

Windows XP is unsupported, highly vulnerable, and trivially exploitable by hackers. It is not in the same league as Windows 10 in terms of security. Even application whitelisting (which is considered a bit of a last-resort silver bullet in industry) isn’t a reliable means of securing XP against attacks anymore.

Yes, there are some IT professionals who dislike Windows 10. Those concerns usually have to do with things like UI, embedded ads and system telemetry, not the underlying security (which is quite well engineered).

If those are your specific concerns, a current version of Mint (which you tried), Ubuntu, or MacOS are all okay options. They would all need to be thoughtfully configured for security just as much as Windows. BSD will feel just as unfamiliar if you were uncomfortable operating in Mint, but I certainly don’t discourage you from giving it a try. Even MacOS is *nix based under the hood.

Unfortunately, it seems to me that you’re stuck with two options if you want to maintain any semblance of security: cope with your dislike of Windows 10, or dedicate some time to learning the inner workings of a new operating system. Either way, please get off XP as soon as possible.


Dear Lesley,

My friend, since birth – who I’ll call M. E., has had a 23-year, jack-of-most-trades career in IT. ME is currently serving as the IT Decider (and Doer) at an SMB financial firm. Over the last five years, ME has enjoyed focusing on security. Technology, security in particular, is still near the top of his hobby list. However, compared to when he started his IT career, ME places a greater value on having a work-life balance. ME wonders if it’s too late for a change to the cyberz – without “starting over.” In your experience, is there a reasonable way for ME to jump from the “IT rail” to the “security rail” without touching the third rail and returning to Go, without collecting $200?

– ME’s Friend

Dear ME’s Friend,

Your ‘friend’ sounds like a great candidate for many security positions, but he or she might have to take a pay cut. 23 years of experience in systems administration and networking is 23 years of experience in how to take things apart, which is really mostly what security is behind the neat hats and the techno music.

ME is going to need to figure out two important things. Firstly, ME will need to gain some security-specific vocabulary to tie things together – a course or certification might be a nice feather in the cap. Then, ME is going to have to carefully plan out how to present him or herself as an Awesome Security Candidate in interviews and resumes. That will involve taking those 23 years of generalized experience, as well as security hobby work, and selling them as 23 years of Awesome Security Experience. For example, it takes a lot of understanding of Windows administration and scripting to be a good Windows pen tester. Or, it takes a lot of TCP/IP knowledge to do packet analysis of an IPS signature fire. Every niche of security requires deep knowledge of one or more areas of general IT.

All that being said, there are some security skills that need to be learned on the job. I wouldn’t push ME towards an entry level gig, but it may not be an easy lateral move to any senior technical position, either. A good segue if seniority is critical might be security engineering (IPS / SIEM / log aggregation administration, etc).


Dear Lesley,

How does an organization go about starting a patch testing program? Ours seems to be stuck in a “don’t update it, you’ll break the application” mindset.

– TarPitted in Texas

Dear TarPitted,

As I noted to a reader above, sometimes this type of impasse with management can only be solved through presenting things as quantifiable risk. If you are telling management that your application is vulnerable, and they are saying it will cost too much if it breaks when you patch it, somebody else is quantifying risk better than you. You’d best believe that team saying, “the application might break” is also saying, “if this application breaks, it will cost us n dollars a day”. So, play that game. Tell management specifically how much money and time they stand to lose if a security incident occurs. Present this risk clearly – get help if you need to from all of the impacted teams, your disaster recovery and risk management professionals, and even your finance team.

Your managers should be making a decision based on monetary and other quantifiable business impact of the application going down for patching, vs. the monetary and other quantifiable business impacts of a potential security incident at x likelihood. Once they do that on paper, you’ve done due diligence.
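If it helps frame that conversation, the classic back-of-the-envelope math here is annualized loss expectancy: the cost of a single incident times how many times per year you expect it to happen, computed for both sides of the argument. Every number in the sketch below is an invented placeholder; plug in your own downtime costs, breach estimates, and likelihoods.

```python
# Back-of-the-envelope annualized loss expectancy (ALE) comparison.
# Every figure below is an invented placeholder for illustration only.
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

# Cost of patching: say the app is down 4 hours per monthly patch window,
# at an assumed $5,000 per hour of downtime.
patch_downtime_cost = ale(4 * 5_000, 12)

# Cost of not patching: say one estimated $750,000 incident every five years.
breach_cost = ale(750_000, 1 / 5)

print(f"Expected yearly cost of patch downtime: ${patch_downtime_cost:,.0f}")
print(f"Expected yearly cost of staying unpatched: ${breach_cost:,.0f}")
```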

 

Bridging the Gap between Media and Hackers: An Olive Branch

I had a lovely interview about IoT security with Emmy-award-winning reporter Kerry Tomlinson of Archer News this past week at BSides Jackson. It’s unfortunately rare in our field that we get to have such productive, mutually beneficial conversations with members of the media. There’s a lot of uncertainty and (often justified) lack of trust between both parties – which makes it easy to forget that presenting a coherent, technically correct, and comprehensible message on information security and privacy is crucial for everyone.

Since organizations like I Am the Cavalry are already approaching the outreach problem primarily from the side of security professionals, I’d like to take a slightly different approach by specifically addressing journalists and the media.

We need your help!

With the plethora of hacker conferences which are gaining legitimacy and attention across the world, there are many opportunities to address our community. Hacking conference calls for papers are often open to everybody, not just people gainfully employed in security. You are welcome to apply and lend your unique perspective to these problems. It doesn’t have to be DEF CON or Black Hat. There are many smaller options which record and post talks, and have great reach within our community.

Here are some important topics which you could help educate us about, by sharing your perspective:

  • What is it like being a journalist covering security? What are the challenges?
  • How should we prepare for a media interview?
  • Many people in security feel burnt by misquotes and misinterpretations of their work. How can we better avoid this? What should we do if we feel we have been misrepresented by a media organization?
  • How can we better vet news outlets which want to work with us?
  • How can we help you as subject matter experts or fact checkers?
  • How can we help you present our most important security research to society without sensationalizing?
  • How can we better format and target our blogs and research for the media?

We want to help you!

There are plenty of security topics that are timely and highly relevant to journalists and the media, and many of us are willing to offer education and insights to your communities of practice if given the opportunity to do so.

Here are some topics which many willing security professionals (including myself) could provide a range of insights and training on at media conferences and educational programs:

  • How to conduct secure and private communications with sources and colleagues.
  • How to maintain operational security and avoid leakage of sensitive personal information.
  • How to secure computers and mobile devices.
  • Understanding, detecting, and avoiding social engineering.
  • How to approach hackers (white, grey, and black hat) for information on security research.
  • The realities of hacker “culture” and work, and how these differ from fictional stereotypes.
  • Current issues with malvertising on news sites, how to better decrease the risk thereof, and its effect on the rise of adblockers.

I want to take a moment to thank the many journalists and reporters who do fabulous coverage of security topics right now (especially Steve Ragan, who wrote the essential article on how to deal with the media as a hacker) who associate with our community on a regular basis. Thanks for dealing with our foibles and for doing great work.

Nation State Threat Attribution: a FAQ

Threat actor attribution has been big news, and big business for the past couple years. This blog consists of seven very different infosec professionals’ responses to frequently asked questions about attribution, with thoughts, experiences, and opinions (focusing on nation state attribution circa 2016). The contributors to this FAQ introduce themselves as follows (and express personal opinions in this article that don’t necessarily reflect those of their employers or this site):

  • DA_667: A loud, ranty guy on social media. Farms potatoes. Has nothing to do with Cyber.
  • Ryan Duff: Former cyber tactician for the gov turned infosec profiteer.
  • Munin: Just a simple country blacksmith who happens to do infosec.
  • Lesley Carhart: Irritatingly optimistic digital forensics and incident response nerd.
  • Krypt3ia: Cyber Nihilist
  • Viss: Dark Wizard, Internet bad-guy, feeder and waterer of elderly shells.
  • Coleman Kane: Cyber Intelligence nerd, malware analyst, threat hunter.

Many thanks to everybody above for helping create this, and for sharing their thoughts on a super-contentious and complex subject. Additional thanks to everybody on social media who contributed questions.

This article’s primary target audience is IT staff and management at traditional corporations and non-governmental organizations who do not deal with traditional military intelligence on a regular basis. Chances are, if you’re the exception to our rules, you already know it (and you’re probably not reading this FAQ).

Without further ado, let’s start with some popular questions. We hope you find some answers (and maybe more questions) in our responses.


 

Are state-sponsored network intrusions a real thing?

DA_667: Absolutely. “Cyber” has been considered a domain of warfare. State-sponsored intrusions have skyrocketed. Nation-states see the value of data that can be obtained through what is termed “cyberwarfare”. Not only is access to sensitive data a primary motivator, but so is access to critical systems. Like, say, computers that control the power grid. Denying access to critical infrastructure can come in handy when used in concert with traditional, kinetic warfare.

Coleman: I definitely feel there’s ample evidence reported publicly by the community to corroborate this claim. It is likely important to distinguish how the “sponsorship” happens, and that there may (or may not) be a divide between those whose goal is the network intrusion and those carrying out the attack.

Krypt3ia: Moot question. Next.

Lesley: There’s pretty conclusive public domain evidence that they are. For instance, we’ve seen countries’ new weapons designs appear in other nations’ arsenals, critical infrastructure attacked, communications disrupted, and flagship commercial and scientific products duplicated within implausibly short timeframes.

Munin: Certainly, but they’re not exactly common, and there’s a continuum of attackers from “fully state sponsored” (that is, “official” “cyberwarfare” units) to “tolerated” (independent groups whose actions are not materially supported but whose activities are condoned).

Viss: Yes, but governments outsource that. We do. Look at NSA/Booz.

Ryan: Of course they are real. I spent a decent portion of my career participating in the planning of them.

 

 

Is this sort of thing new?

Coleman: Blame is most frequently pointed at China, though a lot of evidence (again, in the public) indicates that it is broader. That said, one of the earliest publicly-documented “nation-state” attacks is “Titan Rain”, which was reported as going back as far as 2003 and is widely regarded as “state sponsored”. With that background, it would give an upper bound of ~13 years, which is pretty old in my opinion.

Ryan: It’s definitely not new. These types of activities have been around for as long as they have been able to be. Any well resourced nation will identify when an intelligence or military opportunity presents itself at the very earliest stages of that opportunity. This is definitely true when it comes to network intrusions. Ever since there has been intel to retrieve on a network, you can bet there have been nation states trying to get it.

Munin: Not at all. This is merely an extension of the espionage activities that countries have been flinging at each other since time immemorial.

DA_667: To make a long story short, absolutely not. For instance, it is believed that a recent exploit used by a group of nation-state actors is well over 10 years old. That’s one exploit, supposedly tied to one actor. Just to give you an idea.

Lesley: Nation state and industrial sabotage, political maneuvering, espionage, and counterespionage have existed as long as industry and nation states have. It’s nothing new. In some ways, it’s just gotten easier in the internet era. I don’t really differentiate.

Krypt3ia: No. Go read The Cuckoo’s Egg.

Viss: Hard to say – first big one we knew about was Stuxnet, right? – Specifically computer security stuff, not in-person assets doing Jason Bourne stuff.

 

 

How are state-sponsored network intrusions different from everyday malware and attacks?

Lesley: Sometimes they may be more sophisticated, and other times aspects are less sophisticated. It really depends on actor goals and resources. A common theme we’ve seen is long term persistence – hiding in high value targets’ networks quietly for months or years until an occasion to sabotage them or exfiltrate data. This is pretty different from your average crimeware, the goal of which is to make as much money as possible as quickly as possible. Perhaps surprisingly, advanced actors might favor native systems administration tools over highly sophisticated malware in order to make their long term persistence even harder to detect. Conversely, they might employ very specialized malware to target a specialized system. There’s often some indication that their goals are not the same as the typical crimeware author.

Viss: The major difference is time, attention to detail and access to commercial business resources. Take Stuxnet – they went to Microsoft to validate their usb hardware so that it would run autorun files – something that Microsoft killed years and years ago. Normal malware can’t do that. Red teams don’t do that. Only someone who can go to MS and say “Do this. Or you’ll make us upset” can do that. That’s the difference.

Munin: It’s going to differ depending on the specifics of the situation, and on the goals being served by the attack. It’s kind of hard to characterize any individual situation as definitively state-sponsored because of the breadth of potential actions that could be taken.

DA_667: In most cases, the differences between state-sponsored network intrusions and your run-of-the-mill intruder is going to boil down to their motivations, and their tradecraft. Tradecraft being defined as, and I really hate to use this word, their sophistication. How long have the bad guys operated in their network? How much data did they take? Did they use unique tools that have never before been seen, or are they using commodity malware and RATs (Trojans) to access targets? Did they actively try to hide or suppress evidence that they were on your computers and in your network? Nation-state actors are usually in one’s network for an extended period of time — studies show the average amount of time between initial access and first detection is somewhere over 180 days (and this is considered an improvement over the past few years). This is the primary difference between nation-states and standard actors; nation-states are in it for the long haul (unlike commodity malware attackers). They have the skill (unlike skids and/or hacktivists). They want sustained access so that they can keep tabs on you, your business, and your trade secrets to further whatever goals they have.

Krypt3ia: All of the above with one caveat. TTPs are being spread through sales, disinformation campaigns, and the use of proxies. Soon it will be a singularity.

Coleman: Not going to restate a lot of really good info provided above. However, I think some future-proofing to our mindset is in order. There are a lot of historic “nation-state attributed” attacks (you can easily browse FireEye’s blog for examples) with very specific tools/TTPs. More recently, some tools have emerged as allegedly being used by both nation-state and criminal actors (Poison Ivy, PlugX, DarkComet, Gh0st RAT). It kind of boils down to the “malware supply chain”. Back in 2003, the “supply chain” for malware offering both stealth and remote-access capability was much smaller than it is today, so it was likely more common to see divergence between tooling funded for “state sponsored” attacks and what was available on the more common “underground market”. I think we have seen, and will continue to see, a convergence in tactics that muddies the waters and makes our work as intel analysts more difficult as more commodity tools improve.


Is attributing network attacks to a nation state actor really possible?

Munin: Maybe, under just the right circumstances – and with information outside of that gained within the actual attacked systems. Confirming nation-state responsibility is likely to require more conventional espionage information channels [ e.g. a mole in the ‘cyber’ unit who can confirm that such a thing happened ] for attribution to be firmer than a “best guess” though.

DA_667: Yes and No. Hold on, let me explain. There are certain signatures, TTPs, common targets, common tradecraft between victims that can be put together to grant you clues as to what nation-state might be interested in given targets (foreign governments, economic verticals, etc.). There may be some interesting clues in artifacts (tools, scripts, executables, things the nation-state uses) such as compile times and/or language support that could be used if you have enough samples to make educated guesses as well, but that is all that data will amount to: hypothetical attribution. There are clues that say X is the likely suspect, but that is about as far as you can go.

Lesley: Kind of, by the right people with access to the right evidence. It ends up being a matter of painstaking analysis leading to a supported conclusion that is deemed plausible beyond a reasonable doubt, just like most criminal investigations.

Viss: Sure! Why not? You could worm your way back from the C2 and find the people talking to it and shell them! NSA won’t do that though, because they don’t care or haven’t been tasked to – and the samples they find, if they even find samples, will be kept behind closed doors at Mandiant or wherever, never to see the light of day – and we as the public will always get “trust us, we’re law enforcement”. So while, sure, it’s totally possible, A) they won’t let us do it because, well, “we’re not cool enough”, and B) they can break the law and we can’t. It will always boil down to “just trust us”, which isn’t good enough and never helps any public discourse at all. The only purpose talking to the press about it serves is so that they can convince the House/Senate/other decision makers that “we need to act!” or whatever. It’s so that they can go invade countries, or start shit overseas, or tap cables, or spy on Americans. The only purpose talking about it in the media serves is so that they get their way.

Coleman: It is, but I feel only by the folks with the right level of visibility (which, honestly, involves diplomacy and basically the resources of a nation-state to research). I feel the interstate diplomacy/cooperation part is significantly absent from a lot of the nation-state attribution reporting today. At the end of the day, I can’t tell you with 100% certainty what the overall purpose of an intrusion or data theft is. I can only tell you what actions were taken, where they went, what was taken, and possible hypotheses about what relevance it may have.

Ryan: Yes, but I believe it takes the resources of a nation-state to do it properly. There needs to be a level of access to the foreign actors that is beyond just knowing the tools they use and the tradecraft they employ. These can all be stolen and forged. There needs to be insight into the adversaries’ mission planning, the creation of their infrastructure, their communications with each other, etc., in order to conduct proper attribution. Only a nation-state with an intelligence capability can realistically perform this kind of collection. That’s why it’s extremely difficult, in my opinion, for a non-government entity to really do proper state-sponsored attribution.

Krypt3ia: There will always be doubt because disinformation can be baked into the malware, the operations, and the clues left deliberately. As we move forward, the actors will be using these techniques more and it will really rely on other “sources and methods” (i.e. espionage with HUMINT) to say more definitively who dunnit.
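As a concrete (and deliberately modest) illustration of the artifact clues DA_667 mentions above – compile times and language support – here is a minimal sketch of pulling the compile timestamp out of a Windows executable and seeing what working hours it would imply in a few candidate time zones. This is my own example, not something from the panel; it assumes the third-party pefile package and Python 3.9+, and the sample path and time zone list are hypothetical.

```python
# A minimal sketch (illustration only): extract one weak attribution clue --
# the PE compile timestamp -- and map it into a few candidate time zones.
# Requires `pip install pefile` and Python 3.9+ (zoneinfo). The timestamp
# itself is trivially forgeable, so this is a data point, never proof.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

import pefile


def compile_timestamp_clue(path):
    pe = pefile.PE(path)
    ts = pe.FILE_HEADER.TimeDateStamp          # seconds since the Unix epoch
    compiled_utc = datetime.fromtimestamp(ts, tz=timezone.utc)
    print(f"{path}: compiled {compiled_utc:%Y-%m-%d %H:%M} UTC")

    # If many samples from one campaign cluster inside a single region's
    # 09:00-18:00 window, that's a hint about the developers' working hours.
    for zone in ("Asia/Shanghai", "Europe/Moscow", "America/New_York"):
        local = compiled_utc.astimezone(ZoneInfo(zone))
        print(f"  {zone:>18}: {local:%H:%M} local time")


if __name__ == "__main__":
    compile_timestamp_clue("sample.exe")        # hypothetical malware sample
```

Munin’s point below about framing applies directly: an attacker who compiles during someone else’s business hours will sail right past this kind of check.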


Why do security professionals say attribution is hard?

Lesley: Commercial security teams and researchers often lack enough access to data to make any reliable determination. This doesn’t just include lack of the old-fashioned spy vs. spy intelligence, but also access to the compromised systems that attackers often use to launch their intrusions and control their malware. It can take heavy cooperation from law enforcement and foreign governments far outside one network to really delve into a well-planned global hacking operation. There’s also the matter of time – while a law enforcement or government agency has the freedom to track a group across multiple intrusions for years, the business goal of most private organizations is normally to mitigate the damage and move on to the next fire.

Munin: Being truly anonymous online is extremely difficult. Framing someone else? That’s comparatively easy. Especially in situations where there exists knowledge that certain infrastructure was used to commit certain acts, it’s entirely possible to co-opt that infrastructure for your own uses – and thus gain at least a veneer of being the same threat actor. If you pay attention to details (compiling your programs during the working hours of those you’re seeking to frame; using their country’s language for localizing your build systems; connecting via systems and networks in that country, etc.) then you’re likely to fool all but the most dedicated and well-resourced investigators.

Coleman: In my opinion, many of us in the security field suffer from a “fog of war” effect. We only have complete visibility of our own interior, and beyond that we have very limited visibility of the perimeter of the infrastructure used for attacks. Beyond that, only if we are very lucky will we be granted some visibility into other victims’ networks. This is a unique space that both governments and private-sector infosec companies get to reside within. However, in my opinion, the visibility will still end just beyond their customer base or scope of authority. At the end of the day, it becomes an inference game, trying to sum together multiple data points of evidence to eliminate alternative hypotheses in order to converge on the “likeliest reality”. It takes a lot of time and effort to get it right, and very frequently, there are external drivers to get it “fast” before getting it “correct”. When the “fast” attribution ends up in public, it becomes “ground truth” for many, whether or not it actually is. This complicates the job of an analyst trying to do it correctly. So I guess both “yes” and “no” apply. Attribution is “easy” if your audience needs to point a finger quickly; attribution is “hard” if your audience expects you to blame the right perp ;).

DA_667: Okay, so in answering this, let me give you an exercise to think about. If I were a nation-state and I wanted to attack target “Z” to serve some purpose or goal, directly attacking target “Z” has implications and risks associated with it, right? So instead, why not look for a vulnerable system in another country, “Y”, compromise that system, then make all of my attacks on “Z” look like they are coming from “Y”? This is the problem with trying to do attribution. There were previous campaigns where there was evidence that nation-states were doing exactly this: proxying off of known, compromised systems to purposely hinder attribution efforts (https://krypt3ia.wordpress.com/2014/12/20/fauxtribution/). Now, imagine having to get access to a system that was used to attack you, in a country that doesn’t speak your native language or perhaps doesn’t have good diplomatic ties with your country. Let’s not even talk about the possibility that they may have used more than one system to hide their tracks, or the fact that there may be no forensic data on these systems that assists in the investigation. This is why attribution is a nightmare.

Krypt3ia: See my answers above.

Viss: Because professionals never get to see the data. And if they *DO* get to see the data, they get to deal with what DA explains above. It’s a giant shitshow and you can’t catch people breaking the law if you have to follow the law. That’s just the physics of things.

Ryan: DA gave a great example about why you can’t trust where the attack “comes from” to perform attribution. I’d like to give an example regarding why you can’t trust what an attack “looks like” either. It is not uncommon for nation-state actors to not only break into other nation-state actors’ networks and take their tools for analysis, but to also then take those tools and repurpose them for their own use. If you walk the dog on that, you’re now in a situation where the actor is using pre-compromised infrastructure in use by another actor, while also using tools from another actor to perform their mission. If Russia is using French tools and deploying them from Chinese compromised hop-points, how do you actually know it’s Russia? As I mentioned above, I believe you need the resources of a nation-state to truly get the information needed to make the proper attribution to Russia (ie: an intelligence capability). This makes attribution extremely hard to perform for anyone in the commercial sector.


How do organizations attribute attacks to nation states the wrong way?

Munin: Wishful thinking – trying to make an attack seem more severe than perhaps it really was. Nobody can blame you for falling prey to the wiles of a nation-state! But if the real entry point was boring old phishing, well, that’s a horse of a different color – and likely a set of lawsuits for negligence.

Lesley: From a forensics perspective, the number one problem I see is trying to fit evidence to a conclusion, which is totally contrary to the business of investigating crimes. You don’t base your investigation or conclusions off of your initial gut feeling. There is certainly a precedent for false flag operations in espionage, and it’s pretty easy for a good attacker to emulate a less advanced one. To elaborate, quite a bit of “advanced” malware is available to anybody on the black market, and adversaries can use the same publicly posted indicators of compromise that defenders do to emulate another actor like DA and Ryan previously discussed (for various political and defensive reasons). That misdirection can be really misleading, especially if it plays to our biases and suits our conclusions.

DA_667: Trying to fit data into a mold; you’ve already made up your mind that advanced nation-state actors from Elbonia want your secret potato fertilizer formula, and you aren’t willing to see it any differently. What I’m saying is that some organizations have a bias that leads them to believe that a nation-state actor hacked them.

In other cases, you could say “It was a nation-state actor that attacked me”, and if you have an incident response firm back up that story, it could be enough to get an insurance company to pay out a “cyber insurance” policy for a massive data breach because, after all, “no reasonable defense could have been expected to stop such sophisticated actors and tools.”

Krypt3ia: Firstly they listen to vendors. Secondly they are seeking a bad guy to blame when they should be focused on how they got in, how they did what they did, and what they took. Profile the UNSUB and forget about attribution in the cyber game of Clue.

Viss: They do it for political reasons. If you accuse Pakistan of lobbing malware into the US it gives politicians the talking points they need to get the budget and funding to send the military there – or to send drones there – or spies – or write their own malware. Since they never reveal the samples/malware, and since they aren’t on the hook to, everyone seems to be happy with the “trust us, we’re law enforcement” replies, so they can accuse whoever they want, regardless of the reality and face absolutely no scrutiny. Attribution at the government level is a universal adapter for motive. Spin the wheel of fish, pick a reason, get funding/motive/etc.

Coleman: All of the above are great answers. In my opinion, among the biggest mistakes I’ve seen not addressed above is asking the wrong questions. I’ve heard many stories about “attributions” driven by a desire by customers/leaders to know “Who did this?”, which 90% of the time is non-actionable information, but it satisfies the desires of folks glued to TV drama timelines like CSI and NCIS. Almost all the time, “who did this?” doesn’t need to be answered, but rather “what tools, tactics, infrastructure, etc. should I be looking for next?”. Nine times out of ten, the adversary resides beyond the reach of prosecution, and your “end game” is documentation of the attack, remediation of the intrusion, and closing the vulnerabilities used to execute the attack.


So, what does it really take to fairly attribute an attack to a nation state?

Munin: Extremely thorough analysis coupled with corroborating reports from third parties – you will never get the whole story from the evidence your logs get; you are only getting the story that your attacker wants you to see. Only the most naive of attackers is likely to let you have a true story – unless they’re sending a specific message.

Coleman: In my opinion, there can be many levels to “attribution” of an attack. Taking the common “defense/industrial espionage” use case that’s widely associated with “nation state attacks”, there could be three semi-independent levels that may or may not intersect: 1) tool authors/designers, 2) network attackers/exploiters, 3) tasking/customers. A common fallacy that I’ve observed is assuming that a particular adversary (#2 from above) exclusively cares about the specific data they were tasked to collect at one point in time. In my opinion, recognize that any data you have is “in play” for any of the #2 groups from my list above. If you finally get an attacker out, and keep them out, someone else is bound to be thrown your way with different TTPs to get the same data. Additionally, a good rule as time goes on is that all malware becomes “shared tooling”, so make sure not to confuse “tool sharing” with any particular adversary. Or, maybe you’re just tracking a “Poison Ivy Group”. It takes lots of hard work, and also a recognition that no matter how certain you are, new information can (and will!) lead to reconsideration.

Lesley: It’s not as simple as looking at IP addresses! Attribution is all about doing thorough analysis of internal and external clues, then deciding that they lead to a conclusion beyond a reasonable doubt. Clues can include things like human language and malicious code, timestamps on files that show activity in certain time zones, targets, tools, and even “softer” indicators like the patience, error rate, and operational timeframes of the attackers. Of course, law enforcement and the most well-resourced security firms can employ more traditional detective, Intel, and counterespionage resources. In the private sector, we can only leverage shared, open source, or commercially purchased intelligence, and the quality of this varies.

Viss: A slip-up on their part – like the NSA derping it up and leaving their malware on a staging server, or using the same payload in two different places at the same time, which gets ID’ed later in something like Stuxnet, where attribution happens for one reason or another out of band and it’s REALLY EASY to put two and two together. If you’re a government hacking another government you want deniability. If you’re the NSA you use Booz and claim they did it. If you’re China you proxy through Korea or Russia. If you’re Russia you ride in on a fucking bear because you literally give no fucks.

DA_667: A lot of hard work, thorough analysis of tradecraft (across multiple targets), access to vast sets of data to attempt to perform some sort of correlation, and, in most cases, access to intelligence community resources that most organizations cannot reasonably expect to have access to.

Krypt3ia: Access to IC data and assets for other sources and methods. Then you adjudicate that information the best you can. Then you forget that and move on.

Ryan: The resources of a nation-state are almost a prerequisite to “fairly” attribute something to a nation state. You need intelligence resources that are able to build a full picture of the activity. Just technical indicators of the intrusion are not enough.


Is there a way to reliably tell a private advanced actor aiding a state (sanctioned or unsanctioned) from a military or government threat actor?

Krypt3ia: Let me put it this way. How do you know that your actor isn’t a freelancer working for a nation state? How do you know that a nation state isn’t using proxy hacking groups or individuals?

Ryan: No. Not unless there is some outside information informing your analysis, like intelligence information on the private actor or a leak of their tools (for example, the HackingTeam hack). I personally believe there isn’t much of a distinction to be made between these types of actors if they are still state-sponsored in their activities, because they are working off of their sponsor’s requirements. Depending on the level of the sponsor’s involvement, the tools could even conform to standards laid out by the nation-state itself. I think efforts to try to draw these distinctions are rather futile.

DA_667: No. In fact, given what you now know about how nation-state actors can easily make it seem like attacks are coming from a different IP address and country entirely, what makes you think that they can’t alter their tool footprint and just use open-source penetration testing tools, or recently open-sourced bots with re-purposed code?

Munin: Not a chance.

Viss: Not unless you have samples or track record data of some kind. A well funded corporate adversary who knows what they’re doing should likely be indistinguishable from a government. Especially because the governments will usually hire exactly these companies to do that work for them, since they tend not to have the talent in house.

Coleman: I don’t think there is a “reliable” way to do it. Rather, for many adversaries, with constant research and regular data point collection, it is possible to reliably track specific adversary groups. Whether or not they could be distinguished as “military”, “private”, or “paramilitary” is up for debate. I think that requires very good visibility into the cyber aspects of the country / military in question.

Lesley: That would be nearly impossible without boots-on-ground, traditional intelligence resources that you and I will never see (or illegal hacking of our own).


Why don’t all security experts publicly corroborate the attribution provided by investigating firms and agencies?

DA_667: In most cases, disagreements on attribution boil down to:

  1. Lack of information
  2. Inconclusive evidence
  3. Said investigating firms and/or agencies are not laying all the cards out on the table; security experts do not have access to the same dataset the investigators have (either due to proprietary vendor data, or classified intelligence)

Munin: Lack of proof. It’s very hard to prove with any reliability who’s done what online; it’s even harder to make it stick. Plausible deniability is very much a thing.

Lesley: Usually, because I don’t have enough information. We might lean towards agreeing or disagreeing with the conclusions of the investigators, but at the same time be reluctant to stake our professional and ethical reputation on somebody else’s investigation of evidence we can’t see ourselves. There have also been many instances where the media jumped to conclusions which were not yet appropriate or substantiated. The important thing to remember is that attribution has nothing to do with what we want or who we dislike. It’s the study of facts, and the consequences for being wrong can be pretty dire.

Krypt3ia: Because they are smarter than the average Wizard?

Coleman: In my opinion, many commercial investigative firms are driven to threat attribution by numerous non-evidential factors. There’s kind of a “race to the top (bottom?)” these days for “threat intelligence”, and a significant incentive for private companies to be first to report, as well as to show themselves to have unique visibility to deliver a “breaking” story. In a word: marketing. Each firm wants to look like it has more and better intelligence on the most advanced threats than its competition. Additionally, there’s an audience component to it as well. Many organizations suffering a breach would prefer to adopt the story line that their expensive defenses were breached by “the most advanced well-funded nation-state adversary” (a.k.a. “Deep Panda”), versus “some 13 year-olds hanging out in an IRC chatroom named #operation_dildos”. Because of this, I generally consider a lot of public reporting conclusions to be worth taking with a grain of salt, and I’m more interested in the handful that actually report technical data that I can act upon.

Viss: Some want to get in bed with (potential) employers, so they cozy up to that version of the story. Some don’t want to rock the boat, so they go along with the boss. Some have literally no idea what they’re talking about; they’re fresh out of college and they can’t keep their mouths shut. Some are being paid by someone to say something. It’s a giant grab bag.


Should my company attribute network attacks to a nation state?

DA_667: No. Often times, your organization will NOT gain anything of value attempting to attribute an attack to a given nation-state. Identify the Indicators of Compromise as best you can, and distribute them to peers in your industry or professional organizations who may have more resources for determining whether an attack was a part of a campaign spanning multiple targets. Focus on recovery and hardening your systems so you are no longer considered a soft target.

Viss: I don’t understand why this would be even remotely interesting to average businesses. This is only interesting to the “spymaster bobs” of the world, and the people who routinely fellate the intelligence community for favors/intel/jobs/etc. In most cases it doesn’t matter, and in the cases it DOES matter, it’s not really a public discussion – or a public discussion won’t help things.

Lesley: For your average commercial organization, there’s rarely any reason (or sufficient data) to attribute an attack to a nation state. Identifying the type of actor, IOCs, and TTPs is normally adequate to maintain threat intelligence or respond to an incident. Be very cautious (legally / ethically / career-wise) if your executives ask you to attribute to a foreign government.

Munin: I would advise against it. You’ll get a lot of attention, and most of it’s going to be bad. Attribution to nation-state actors is very much part of the espionage and diplomacy game and you do not want to engage in that if you do not absolutely have to.

Ryan: No. The odds of your organization even being equipped to make such an attribution are almost nil. It’s not worth expending the resources to even attempt such an attribution. The gain, even if you are successful, would still be minimal.

Coleman: I generally would say “no”. You should ask yourselves: if you actually had that information in a factual form, what are you going to do? Stop doing business in that country? I think it is generally more beneficial to focus on threat grouping/clustering (if I see activity from IP address A.B.C.D, what have I historically observed in relation to it that I should look out for?) than on trying to tie activity back to “nation-states”, or even to answer the question “nation state or not?”. If you’re only prioritizing things you believe are “nation-state”, you’re probably losing the game considerably in other threat areas. I have observed very few examples where nation-state attribution makes any significant difference as far as response and mitigation are concerned.

Krypt3ia: Too many try and fail.


Can’t we just block [nation state]?

Krypt3ia: HA! I have seen rule sets on firewalls where they try to block whole countries. It’s silly. If I am your adversary and I have the money and time, I will get in.

DA_667: No, and for a couple reasons. By the time a research body or a government agency has released indicators against a certain set of tools or a supposed nation-state actor to the general public, those indicators are long past stale. The actors have moved on to using new hosts to hide their tracks, using new tools and custom malware to achieve their goals, and so on, and so forth. Not only that, but the solution isn’t as easy as block [supposed malicious country’s IP address space]. A lot of companies that are targeted by nation-states are international organizations with customers and users that live in countries all over the world. Therefore, you can’t take a ham-fisted approach such as blocking all Elbonian IP addresses. In some cases, if you’re a smaller business who has no users or customers from a given country (e.g. a local bank somewhere in Nevada would NOT be expecting customers or users to connect from Elbonia.), you might be able to get away with blocking certain countries and that will make it harder for the lowest tier of attackers to attack your systems directly… but again, given what you now know about how easy it is for a nation-state actor to compromise another system, in another country, you should realize that blocking IP addresses assigned to a given country is not going to be terribly helpful if the nation-state is persistent and has high motivation to attack you.

Munin: Not really. IP blocks will kill the low bar attacks, but those aren’t really what you’re asking after if you’re in this FAQ, are you? Any attacker worth their salt can find some third party to proxy through. Not to mention IP ranges get traded or sold now and then – today’s Chinese block could be someone else entirely tomorrow.

Lesley: Not only might this be pretty bad for business, it’s pretty easy for any actor to evade using compromised hosts elsewhere as proxies. Some orgs do it, though.

Coleman: Depending upon the impact, sure, why not? It’s up to you to inform your leadership, and if your leaders are fine with blocking large blocks of the Internet that sometimes are the endpoint of an attack, then that’s acceptable. I’ve had some associates in my peer group who are able to successfully execute this strategy. Sometimes (at 3:30pm on a Friday, for instance) I envy them.

Ryan: If you’re not doing business outside of your local country and don’t ever care to, it couldn’t hurt. By restricting connections to your network from only your home country, you will likely add some security. However, if your network is a target, doing this won’t stop an actor from pivoting from a location that is within your whitelist to gain access to your network.

Viss: Sure! Does your company do business with China? Korea? Pakistan? Why bother accepting traffic from them? Take the top ten ‘shady countries’ and just block them at the firewall. If malware lands on your LAN, it won’t be able to phone home. If your company DOES do business with those countries, it’s another story – but if there is no legitimate reason 10 laptops in your sales department should be talking to Spain or South Africa, then it’s a pretty easy win. It won’t stop a determined attacker, but if you’re paying attention to dropped packets leaving your network you’re gonna find out REAL FAST if there’s someone on your LAN. They won’t know you’re blocking til they slam headfirst into a firewall rule and leave a bunch of logs.
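For what it’s worth, here is a rough sketch of what the “just block them” approach boils down to in practice: checking destinations against a per-country CIDR list. This is my own illustration, not something from the panel; the file name and sample addresses are hypothetical, and real deployments would do this in firewall rules or an ipset rather than in a script.

```python
# A toy sketch of country-based blocking: load a list of CIDR ranges
# (hypothetically, every prefix allocated to "Elbonia") and check outbound
# destinations against it. Illustrates the mechanics and the caveat that a
# proxy in an "allowed" country sails straight through.
import ipaddress


def load_blocklist(path):
    """One CIDR per line, e.g. 203.0.113.0/24; blank lines and # comments are ignored."""
    nets = []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line and not line.startswith("#"):
                nets.append(ipaddress.ip_network(line))
    return nets


def is_blocked(dest_ip, blocklist):
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in blocklist)


if __name__ == "__main__":
    blocklist = load_blocklist("elbonia_cidrs.txt")        # hypothetical file
    for dest in ("203.0.113.7", "198.51.100.20"):           # example destinations
        verdict = "blocked" if is_blocked(dest, blocklist) else "allowed"
        print(f"{dest}: {verdict}")
```

Note how little this buys you against the proxying DA_667 describes earlier: the moment an attacker pivots through a compromised host in an “allowed” country, the list is useless. Viss’s point about watching dropped-packet logs is arguably the more valuable half of the tactic.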


Hey, what’s with the Attribution Dice?

Ryan: I’m convinced that lots of threat intelligence companies have these as part of their standard report writing kit.

Lesley: They’re awesome! If you do purposefully terrible, bandwagon attribution of the trendy scapegoat of the day, infosec folks are pretty likely to notice and poke a little fun at your expense.

Krypt3ia: They are cheaper than Mandiant or Crowdstrike and likely just as accurate.

Coleman: In some situations, the “Who Hacked Us?” web application may be better than public reporting.

Munin: I want a set someday….

Viss: They’re more accurate than the government, that’s for sure.

DA_667: I have a custom set of laser-printed attribution dice that a friend had commissioned for me, where my Twitter handle is listed as a possible threat actor. But in all seriousness, the attribution dice are a sort of inside joke amongst security experts who deal in threat intelligence. Trying to do attribution is a lot like casting the dice.

What’s a Challenge Coin, Anyway? (For Hackers)

So what are these “challenge coins”?

Challenge coins come from an old military tradition that bled into the professional infosec realm, and then into the broader hacker community, through the continual overlap between these communities. In some ways like an informal medal, coins generally represent somewhere you have been or something you have accomplished. Consequently, you can buy some, and be gifted or earn others; the latter are generally more traditional and respected.

There are a few stories about how challenge coins originated in the U.S. Military and most have been lost to history and embellished over time, but I will tell you the tale as it was passed down to me:

During World War I, an officer gifted coin-like squadron medallions to his men. One of his pilots decided to wear his about his neck, as we would wear dog tags today. Some time later, that pilot’s plane was shot down by the enemy, and he was forced down behind enemy lines and captured. As a prisoner of war, all of his papers were taken, but as was customary he was allowed to keep his jewelry, including the medallion. During the night, the pilot managed to take advantage of a distraction to make a daring escape. He spent days avoiding patrols and ultimately made his way to the French border. Unfortunately, the pilot could not speak any French, and with no uniform and no identification, the French soldiers assumed he was a spy. The only thing that spared him execution was showing them his medallion, upon which there was a squadron emblem the French soldiers recognized and could verify.

Today, people who collect challenge coins tend to have quite a few more than just one.

What’s the “challenge”?

Challenge coins are named such because anybody who has one can issue a challenge to anybody else who has one. The game is a gamble and goes as such:

  • The challenger throws down their coin, thereby issuing a challenge to one or more people.
  • The person or people challenged must each immediately produce a coin of their own.
  • If any of the people challenged cannot produce a coin, they must buy a drink for the challenger.
  • If the people challenged all produce coins, the challenger must buy the next round of drink(s) for them.

Therefore, a wise person carries a coin in a pocket, wallet, or purse, at all times!

How do I get challenge coins?

As I mentioned before, the three major ways to get a challenge coin in the military and in the hacking community are to buy one, earn one, or be gifted one.

  • You can buy coins at many places and events to show you were there. Many cons sell them now, as well as places like military installations and companies. They’re a good fundraiser.
  • You can be gifted a coin. This is normally done as a sign of friendship or gratitude, and the coins gifted are normally ones that represent a group or organization like a military unit, company, non-profit, or government agency. The proper way to gift a coin is to pass it enclosed in a handshake.
  • You can earn a coin. Many competitions and training programs offer special coins for top graduates, champions, and similar accomplishments (similar to a trophy). This is the most traditional way to receive a coin.

How do I display my coins, once I have more than one?

On a coin rack or in a coin display case; both are easy to find from online retailers like Amazon.


Can I make my own challenge coins? How much do they cost?

Yes. Lots of companies will sell you challenge coins. The price varies drastically based on the number ordered, colors, materials, and complexity of the vector design.

Think about whether you plan to sell coins to people, gift them on special occasions, or make them a reward, and plan accordingly.

Can I see some examples of infosec / hacking challenge coins?

Sure! I hope you’ve enjoyed this brief introduction to challenge coins. Here are some of my friends and their favorite challenge coins:

[Photos: friends’ favorite challenge coins]

Why do Smartphones make great Spy Devices?

There has been extensive, emotional political debate over the use of shadow IT and the misuse of mobile phones in sensitive areas by former US Secretaries of State Colin Powell and Hillary Clinton. There is a much-needed and very complex discussion we must have about executive security awareness and buy-in, but due to extensive misinformation I wanted to briefly tackle the issue of bringing smartphones into sensitive areas and conversations (and why it’s our responsibility to educate our leadership to stop doing that).

This should not be a partisan issue. It underscores a pervasive security problem in business and government: if employees perceive security controls as inexplicably inconvenient, they will try to find a way to circumvent them, and if they are at a high enough level, their actions may go unquestioned. This can happen regardless of party or organization, and in the interest of security, information security professionals must try to discuss these cases in a non-partisan way to prevent them from recurring.

That being said, let’s talk briefly about why carrying smartphones into any sensitive business or government conversations matters, and is a particularly bad habit that needs to be broken.

There are two things to remember about hackers. The first is that we’re as lazy (efficient?) as any other humans, and we will take the path of least resistance to breach and move across a network. Instead of uploading and configuring our own tools on a network to move laterally and exfiltrate data, we will reach for the scripting and integrated tools already available on the network. In doing so, smart hackers accomplish a second and much more critical objective: limiting the number of detectable malicious tools in an environment. Every piece of malware removed from an infiltration operation is one less potential antivirus or intrusion detection alert, and one less layer of defense in depth that is effective against the hackers. An intrusion conducted using trusted and expected administrative tools and protocols is very hard to detect.
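To make that concrete, here is a minimal, hypothetical sketch of the flip side – the sort of crude hunting a defender might do over process-creation logs to spot abuse of native administrative tools. The CSV column names and the tool/flag lists are assumptions for illustration, not a vetted detection rule.

```python
# A toy hunt over a hypothetical CSV export of process-creation events
# (columns assumed: host, user, process, command_line). Flags a handful of
# native Windows tools and suspicious switches that attackers who "live off
# the land" tend to lean on. Tune ruthlessly for your own environment --
# admins use these tools legitimately all the time.
import csv

SUSPECT_TOOLS = {"psexec.exe", "wmic.exe", "powershell.exe",
                 "bitsadmin.exe", "certutil.exe"}
SUSPECT_SUBSTRINGS = ("-enc", "-encodedcommand", "urlcache", "/transfer")


def hunt(path):
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            proc = row["process"].lower()
            cmd = row["command_line"].lower()
            if proc in SUSPECT_TOOLS or any(s in cmd for s in SUSPECT_SUBSTRINGS):
                print(f"review: {row['host']} / {row['user']} -> "
                      f"{row['process']} {row['command_line']}")


if __name__ == "__main__":
    hunt("process_creation.csv")    # hypothetical EDR/Sysmon export
```

The broader point stands either way: when the attacker’s tooling is your tooling, detection becomes a matter of context and baselining rather than signatures.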

These same principles can apply to more traditional audio and video surveillance. In the past, covert surveillance devices had to be brought into a target facility via human intervention (for instance, carried in by an operative, placed via a bribe, or covertly planted on a person or a delivery). The decades of history (that we know of) surrounding bugs are fascinating – they had to be engineered to pass through intensive security measures and remain in target facilities without notice. In the pre-transistor era and the early era of microelectronics, this was a complex engineering feat indeed.

Personal communication devices, and to a greater extent smartphones, are a game changer. Every function that a Cold War-era industrial or military spy could want of a bug is a standard feature of the smartphones that billions of people carry everywhere. Most have excellent front- and rear-facing cameras. They have microphones capable of working at conference-phone range. They have storage capable of holding hours of recordings, multiple radio transmitters, and integrated GPS. James Bond’s dream.

More important than any of this, smartphones tend to run one of three major operating systems, which are commercially available globally and exhaustively studied for exploits by every sort of hacker. Some of these exploits are offered to the highest bidder on the black market. Although the vulnerability of smartphone operating systems varies by age and phone manufacturer, each is also vulnerable to social engineering and phishing through watering hole attacks, email, text messages, or malicious apps.

Why expend the effort and risk to get a bug into a facility and conceal it when an authorized person brings such a fantastic, exploitable surveillance device in knowingly and hides it themselves? If the right person in the right position is targeted, they may not even be searched or reprimanded if caught.

There’s been a lot of discussion about countermeasures against compromised smartphones. Unfortunately, even operating inside a Faraday cage that blocks all communication is not effective, because eventually the phone leaves; a traditional covert device may not. As with the USB devices used to deploy Stuxnet, a trusted air gap is broken the moment an untrusted device can pass across it. A compromised phone can simply be instructed to begin recording audio when its cellular signal is lost and upload the recording as soon as that connection is restored. Turning off the device is also not particularly effective in the era of smartphones with non-removable batteries.

Yes, of course it’s still possible to put a listening device in a remote control or a light fixture. Surreptitious hacking tools used to compromise networks on site can still function this way. But why expend the substantial effort and risk in installing, communicating to, and removing them if there’s an easier way?

This is not to say it’s time to put on our tin foil hats and throw out our phones. Most people are probably not individual targets of espionage, and using smartphones with current updates and good security settings is decent protection against malware. However, there are people all over the world who are viable targets for industrial or nation-state espionage, either for their own position or for their access to sensitive people, information, or places. If you are informed by a credible authority that you may be targeted and should not bring your smartphone into a particular area, please take this advice seriously and consider that your device(s) could be compromised. If you suspect that there is another valid reason that you could be targeted by industrial or nation state espionage, leave your phone outside. It is generally far simpler to compromise your smartphone than it would have been to break into your office and install a listening device.


The Worst InfoSec Resume, Ever

I do quite a bit of InfoSec résumé reviewing and critiquing, both personally and professionally, so I’m repeatedly asked for tips on common problems. In order to ensure that these problems were not exclusive to me, I recently had a lengthy discussion  with a number of InfoSec professionals involved in hiring (thank you!). We discussed our “top 10” pet peeves when reading candidates’ résumés.

So without further ado, here is an illustrated example of some common problems we see on many résumés, and some suggestions about how to fix them.

(If these images are hard to view on your phone or at a specific resolution, you may click them to view them full screen.)

[Image: example résumé, page 1]

[Image: example résumé, page 2]