Whose Fault Is It? (A brief discussion on misconceptions about Equifax)

Our personal financial identities are exposed, and we’re mad. A sick, visceral, exhausted anger that hits us in the pit of our stomachs and makes us feel powerless.

People are understandably furious about the Equifax breach, to a degree that makes it tough to have a rational discussion about what happened. Unfortunately for information security professionals, anger is a luxury we don't have right now. It's now past time to have frank discussions about what went wrong and how to prevent it in our own environments. I'd like to take a moment to clear up a few exceptionally harmful misconceptions about Equifax's information security, and about security operations in similar practical environments.

Angry You Says: “I’m mad at Equifax for getting breached.”

It's reasonable to be angry about Equifax's existence, or their business model, or their retention of data. It makes no sense to be angry simply because they were breached. Any organization can, and likely eventually will, be breached. What ultimately matters is their preparation, response, and risk mitigation.

You should be angry about Equifax executives selling stock before completing breach notifications. You should be angry that Equifax was not prepared to respond to customer inquiries about their breach in a timely manner. You should be angry that the site Equifax put up in response to the breach was poorly branded and appeared hastily implemented. All these things could and should have been prepared for in advance.

Good incident response involves a lot more than simply performing forensics on an attack after the fact. It also involves solid communications plans, drilling for potential incidents, and procedures for plausible scenarios. To an experienced outside observer, Equifax’s incident response and breach notification plans were mediocre at best. Their DFIR team could be top notch at timelining attacker activity on servers, but that means little if they didn’t know who to call for hours.

We must remember to never base any of our metrics, good or bad, on attacker activity alone. Attackers are an unpredictable variable we cannot control. A sophisticated enough attacker can gain access to nearly any network given proper motivation and resources. You are not immune, and neither is any organization, huge or small. Every organization should plan as if its most critical system will be hacked tomorrow.

It may be Equifax's fault that an individual attack worked due to poor procedures, or that they weren't prepared for an attack, but not simply that they were ultimately breached. It was their job to create the best defensive posture possible and prepare for the worst-case scenario.

Angry You Says: “The breach is Equifax’s fault for not patching.”

There are many scenarios in the corporate world that preclude or delay the application of software patches. Vendors go out of business or discontinue products. Responsible risk management decisions are sometimes made to delay patching when critical application downtime would threaten life and safety, or create financial hardship.

The key phrase here is "responsible risk management decision". At the end of the line, there should be a clear audit trail leading back to risk managers who involved the correct stakeholder teams and provided an analysis of patching versus not patching the system. The risks associated with not patching can be somewhat mitigated through other security practices, like adding defense in depth and monitoring solutions, or segregating vulnerable systems. In a healthy environment, all of these things should occur. If Equifax didn't make a responsible risk decision around not patching, and didn't provide sensible mitigating controls, you can be angry about that.

Angry You Says: “The Equifax server admins are idiots for not patching, and I blame them!”

In most Fortune 1000 companies, if a system can be patched and isn’t, it is likely not the fault of “Joe or Sue admin”.

There are exceptions to this rule, such as malicious insiders. However, in the vast majority of cases, the blame lies squarely with leadership – often C-level executives.

There are cases where a server can't be brought down for patching because the business refuses to accept the required downtime. In those scenarios, it is the responsibility of management to have patching policies in place which account for limited and temporary exceptions, given proper risk evaluation and mitigating controls. These policies must have buy-in at executive levels so that an angry VP can't override them merely by threatening a technician's job.

Of course, there are also instances where organizations operate on unsupported software because leadership has decided to not expend the money or work hours necessary to upgrade to a supported system. Once again, it falls to security and IT managers to make a case to executives that the upgrade expenditure is a good risk management decision and financially responsible. If a sensible decision isn't made by executives after being presented with this information, the blame lies squarely at the C-level.

Finally, there are cases in which a CIO or CISO fails to provide a policy or advocate for patching, and claims no knowledge of a server's existence or of a threat. Ultimately, it's the executives' responsibility to hire savvy and articulate managers, who in turn hire subject matter experts who can generate comprehensive inventories and make reliable recommendations.

Do not make the mistake of comparing operational bureaucracy in a 50-person company with that of a 50,000-person company.

Angry You Says: “Equifax’s CISO was unqualified. She was a fine arts major!”

Susan Mauldin's degree in music composition is totally irrelevant to whether you should be angry with her.

It is possible for the Equifax CISO to have performed poorly at her job, while also being similarly credentialed to numerous, very competent information security professionals. Her degree should be treated as a non-issue.

As I've written in previous blogs, information security academia is new and delightfully inconsistent in quality. The vast majority of professionals with a decade or more of experience in security did not attend a security-centric degree program, because those programs simply did not exist prior to around 2006. As in many fast-paced technical fields, the information security degree programs that exist now are often abysmally out of date and fail to teach relevant skills. Hiring authorities still see many "paper tigers" who leave 2-4 year degree programs with no substantial real-life knowledge.

While I personally do recommend a computer science degree for academically-focused people interested in pursuing a security career, degrees still function mostly as a means of gaining fundamental knowledge in a structured environment, and a stepping stone for career progression and salary increases. Useful intangibles gained by attending a university often tend towards report writing, business, and interpersonal skills. There are other valid ways to gain those skill sets. Many a lauded information security executive has a degree in business, unrelated engineering, or indeed, fine arts. A large percentage don’t have degrees at all (although they still increase promotion potential).

What really counts toward being a competent information security executive? Passion, drive, and business savvy. A firm understanding of high-level fundamentals encompassing a broad range of niches. The ability to hire the right subject matter experts and technical managers to advise him or her without requiring micromanagement. Excellent risk management skills. The ability to play a tough political game to advocate for good security practices and necessary money and headcount.

I don't know more about Ms. Mauldin than what the internet bios say. It's possible the blame for a majority of the mistakes made by Equifax lies with her. It's also possible her input and reports were universally dismissed by the CIO or CEO, and more of the blame can be placed on them. These things may become clearer as more technical and operational details are released. For the time being, stop looking at degrees and certifications for answers, lest you unintentionally personally insult some of the best minds in security as a side effect.

Phishing Exercises, without the “Ish”

Much like open offices and outsourcing in business, information security is subject to trends. One you probably saw in your vendor spam folder over the past couple of years is phishing awareness exercises.

The premise sounds simple – phish your employees before the bad guys do, monitor how they respond, and react accordingly. In reality, people's experiences have been more complex. There's not much middle ground in the discussion of phishing exercises. I see either glowing articles praising their merits (most of which are selling something), or bemused cynicism about them from security professionals. In my experience, there really can be benefits to running phishing test exercises sensibly, but many organizations don't implement them sensibly, so they end up pretty worthless.

When you're setting up a phishing test program, you have the option of developing your own phishing exercise infrastructure and metrics collection toolkit, combining open source solutions like King Phisher or the Social-Engineer Toolkit (SET), or purchasing one of many available commercial solutions. I won't advocate for one brand over another in this blog – most will work (in the right configuration and conditions). A similar set of concerns exists whether you develop your own deployment and metrics solution or buy a commercial solution in a box. Let's discuss how any and all of these tools are being used incorrectly.
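
If you do roll your own, the core send-and-track step is not complicated. Here's a minimal sketch in Python using only the standard library; the relay, sender address, and tracking endpoint are hypothetical placeholders, and a real deployment needs scheduling, consent, logging, and click collection built around it.

    # Minimal roll-your-own phishing exercise sender (illustrative sketch).
    # SMTP_RELAY, FROM_ADDR, and TRACKING_HOST are hypothetical placeholders.
    import smtplib
    import uuid
    from email.mime.text import MIMEText

    SMTP_RELAY = "mail.example.internal"
    FROM_ADDR = "it-notices@example.com"
    TRACKING_HOST = "https://phishtest.example.internal"

    TEMPLATE = ("Hello,\n\n"
                "Your mailbox is near its storage limit. Review your usage here:\n"
                "{link}\n\n"
                "IT Services\n")

    def send_test_phish(recipient):
        """Send one templated test phish; return the recipient's click token."""
        token = uuid.uuid4().hex  # ties a click on the link back to this send
        msg = MIMEText(TEMPLATE.format(link=f"{TRACKING_HOST}/quota?t={token}"))
        msg["Subject"] = "Action required: mailbox storage limit"
        msg["From"] = FROM_ADDR
        msg["To"] = recipient
        with smtplib.SMTP(SMTP_RELAY) as smtp:
            smtp.send_message(msg)
        return token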


Before spending money or implementing anything

Develop a clear goal for your program with your senior leadership fully involved. This goal should not be "stop employees from clicking on phishing messages". That's simply unattainable. Yeah, you want that number to decrease, but even top security professionals have fallen for well-crafted phishing messages. People click on things when they're busy and distracted, and it theoretically takes only one compromised host to breach a network. A real attacker only has to get that one inattentive click. If your senior management measures your success by phishing clicks reaching zero, you'll ultimately find yourself dumbing down campaigns to look more successful. This won't do anybody any favors.

A more realistic goal is improving the quantity and speed of reporting of suspicious emails. Detecting phishing with tech is hard. Most organizations spend a great deal of money on modern solutions to catch and alert on phishing messages, and even those can be circumvented. Your last line of defense against phishing and social engineering is a good relationship with end users who will promptly tell you they are being attacked. While it takes only one phish to compromise a network, it takes only one prompt report to security to shut an attack down.
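
If it helps to make that goal concrete, here's a minimal sketch of the metric I'm describing, assuming your tooling records when each test message was sent and when each report came in (the field names are mine, not any product's):

    # Report-focused metrics: how many people reported, and how fast (sketch).
    from statistics import median

    def reporting_metrics(sent_count, reports):
        """reports: list of (sent_at, reported_at) datetime pairs."""
        minutes = [(reported - sent).total_seconds() / 60 for sent, reported in reports]
        return {
            "report_rate": len(reports) / sent_count if sent_count else 0.0,
            "median_minutes_to_report": median(minutes) if minutes else None,
        }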

Next, you should bring your HR and Legal teams into the conversation and discuss anonymity. There is no room for gray area here: you will either conduct phishing exercises anonymously or you will not. If you conduct the phishing exercises anonymously, you must develop the program in a double-blind way where even network security can't practically retrieve the names of people who clicked. You'll still see an overall view of the health of your organization, but nobody can be pressured to provide identifying data, even by angry executives.
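
One illustrative way to build that double blind, as a sketch and nothing more: store only a keyed hash of each identity, with the key escrowed outside the security team (say, with HR), so analysts can count clicks but can't practically brute-force the stored IDs back to the employee directory.

    # Double-blind click metrics (sketch). The HMAC key is held outside the
    # security team; the hashing runs in the sending component, not at the
    # analyst's desk, so stored IDs can't be mapped back to names.
    import hmac
    import hashlib

    def opaque_id(email, escrow_key):
        """Keyed hash of an identity; escrow_key is bytes held by a third party."""
        return hmac.new(escrow_key, email.lower().encode(), hashlib.sha256).hexdigest()

    def click_rate(clicked_ids, sent_ids):
        """All the security team needs to see: aggregate health."""
        return len(set(clicked_ids) & set(sent_ids)) / len(set(sent_ids))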

If you choose to not conduct exercises anonymously, I recommend that you clearly document any repercussions for clicking, and ensure they are uniform across your organization. Otherwise, your exercises could easily become a public humiliation game or end in unequal punishment by managers, putting you in hot water with HR.


A carrot, instead of a stick

Regardless of whether you conduct your exercises anonymously, you may decide to provide extra security training to people who click on your test phishes. Frankly, a lot of security awareness training is pretty awful, "death by PowerPoint" stuff. If your users can fly through every slide and kludge their way through your multiple-choice test, chances are it's a waste of time. Try to have some empathy for how an end user is feeling when they click on a test phish and are routed to a long, mandatory training. They're embarrassed, frustrated, and it's very possible they clicked because they were already frantically busy. In their minds, you aren't helping – they feel like you tricked them. There's now hostility in your relationship, not a willingness to help "the team" stop attackers.

If possible, in-person training is a great option (snack bribery highly encouraged). Offer a lunch and learn, or a social hour with IT security. Offer this in lieu of traditional web-based training, and have a conversation with your end users. People are statistically more inclined to help somebody they have met in person and feel some connection to. You want to try to make your phishing exercises a positive thing that people want to improve, not a negative thing that people subconsciously associate with punishment or embarrassment.

If training has to be computer-based, try to make it quick, effective, and interactive. This is a space where you may wish to spend some money to get something good quality and enjoyable.

Be clear about what you're trying to accomplish with phishing exercises and why they are important to your organization. Ensure you give credit to people who report phishing and help your team improve more often than you punish people who make genuine mistakes. It's better to provide measures that protect victims and help them learn than to encourage them to circumvent your security team.


Who should you phish?

Establish the scope of your exercises. Must certain employees be exempt for legal reasons? Are multiple languages spoken in your organization that will require separate exercises? Will your exercises be conducted across global business hours and all shifts? Have you done some OSINT to generate a list of exposed users and email addresses that require special attention?

I highly advise against phishing everybody at once. The only things that travel faster than light in workplaces are rumors. Once one person realizes he or she has fallen for the phishing exercise, it's nearly impossible to contain the "helpful warnings" to neighbors and friends. That instinct to warn colleagues is healthy, but it won't give you accurate metrics about individual performance.
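
A staggered rollout helps. As a sketch, split recipients into shuffled cohorts and spread the sends over days or weeks; cohort size and spacing are judgment calls, not gospel:

    # Stagger a campaign into randomized cohorts (sketch).
    import random

    def make_cohorts(recipients, cohort_size=25, seed=None):
        """Shuffle recipients and split them into cohorts of roughly equal size."""
        rng = random.Random(seed)
        pool = list(recipients)
        rng.shuffle(pool)
        return [pool[i:i + cohort_size] for i in range(0, len(pool), cohort_size)]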


Designing your phish

Security teams everywhere look forward to this part with glee. I must remind my blue team friends of a lesson that successful red teamers learn early in their careers: your job is not to "get" your target for the laughs. Your job is to educate your target and improve their security. You are on their team. Yes, you can phish nearly anybody with a well-crafted message and insider knowledge. Conversely, you can produce excellent metrics by selecting an absurdly easy phish. Neither results in any significant security training.

Your phishing exercises are a scientific experiment, and a good experiment has as few variables as possible. The variables that do exist must be well quantified, and should include the difficulty of the phishing message, which is easier said than measured. Comparing clicks on an excellent phish with perfect grammar and a timely topic to one that applies to few employees and is written in poor English is apples to oranges. If you want to change the variable of phishing difficulty, do not change the variable of employee selection or time of day, and vice-versa.

If you're having trouble with this, look to your phishing awareness training. Most commercial training programs list warning signs of a phish. When developing your messages, choose a set number of these warning signs to include.
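
As a crude illustration of holding difficulty constant, tag each template with the deliberate warning signs it contains (the sign names below are invented for the example) and only compare campaigns whose counts match:

    # Quantify phish difficulty by counting deliberate warning signs (sketch).
    WARNING_SIGNS = {
        "mismatched_sender_domain", "generic_greeting", "urgent_language",
        "spelling_errors", "suspicious_link_text", "unexpected_attachment",
    }

    def difficulty_score(template_signs):
        """Fewer deliberate warning signs == a harder phish."""
        signs = set(template_signs)
        unknown = signs - WARNING_SIGNS
        if unknown:
            raise ValueError(f"untracked warning signs: {unknown}")
        return len(WARNING_SIGNS) - len(signs)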


Avoiding phishing-related divorces, and other unpleasantness

Writing a phishing email seems fun and easy. You copy one you’ve seen in your filters, or use a common phishing theme, and send it out with a link or attachment, right?

Or not.

Bad guys have it a lot easier than we do as defenders and pen testers. Bad guys can emulate any public company or person they want in their phishing messages, and abuse any emotion. While we want to make test phishes as realistic as possible, there are good reasons why we have to put more thought into ours.

The reaction of a human being to a phishing email depends on a lot more factors than just their corporate security training. They’re also influenced by their outside security education, their biases and experiences with the content of the message, and their emotions. Imagine a phishing test email that uses the classic “payment received” scam, ostensibly from some real online payment firm. Some people will look at the phish, see it for what it is, and report it appropriately. Others will Google the payment provider and report the phish to them instead; a black eye (or even a blacklist) for your company. In a worst case scenario, an employee could receive the message and apply a personal context, forwarding it to their spouse as ‘proof’ they’re hiding money.

You must try to keep your phishing exercise contained. Remember, you are handling live lies. Not only could forwarding of your test message alter your metrics, but it could also have more dire legal or ethical consequences should it leave your network perimeter. Ensure you thoroughly prevent this, and clean up after your exercise as soon as possible once you're done.
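
One simple containment control, as a sketch: before a campaign goes out, verify that every link in the template points at infrastructure you control (the allowed host below is a placeholder), so a forwarded message can't send outsiders anywhere live-looking.

    # Pre-send containment check: every template link must stay on exercise
    # infrastructure (sketch; ALLOWED_HOSTS is a hypothetical placeholder).
    import re
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"phishtest.example.internal"}
    URL_RE = re.compile(r"https?://[^\s\"'<>]+")

    def links_contained(template):
        """True only if every URL in the template targets an allowed host."""
        return all(urlparse(url).hostname in ALLOWED_HOSTS
                   for url in URL_RE.findall(template))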

Lesley’s Rules of SOC

I see a lot of the same errors made repeatedly as organizations stand up Security Operations. They not only cost time and money, but often result in breaches and malware outbreaks. I tweeted these rules out of frustration quite some time ago, and I've since been repeatedly asked for a blog post condensing and elaborating on them. So, without further ado, here are Lesley's Rules of SOC, in their unabridged form. Enjoy!


  1. You can’t secure anything if you don’t know what you’re securing. 

    Step one in designing and planning a SOC should be identifying high value targets in your organization, and who wants to steal or deface them. This basic risk and threat analysis shows you where to place sensors, what hours you should be staffed in what regions, what types of skill and talent you need on your team, and what your Incident Response plan might need to include.

  2. If you’re securing and monitoring one area really well and ignoring another, you’re really not securing anything. 

    An unfortunate flaw in us as an infosec community is that we often get distracted by the newest, coolest exploit. The vast majority of breaches and compromises don't involve a cool exploit at all. They involve unpatched systems, untrained employees, and weak credentials. Unfortunately, I often see organizations spending immense time on crown jewel systems like their domain controllers, with very little attention paid to their workstations or test systems. All an attacker needs to get into a network is a single vulnerable system from which he or she can move laterally to other devices (see the Target breach). I also see people following the letter of the law on PCI compliance while ignoring all the software and human practices outside that insufficient box.

  3. You can buy the shiniest magic box, but if it's not monitored, updated, and maintained with your input, you're not doing security. 

    Security is a huge growth market, and vendors get better and better at selling solutions to executives with every newsworthy data breach. A lot of "cybersecurity" solutions are now being sold as a product in a box: "install our appliances on your network and become secure". This is simply not the case. Vendor solutions vary vastly in quality and upkeep. All of it is moot if the devices are placed at illogical points in the network where they can't see inbound or outbound internet traffic, or host-to-host traffic. Even with a sales engineer providing initial product setup, a plan must be developed for the devices to be patched and updated. Who will troubleshoot the devices if they fail? And finally, their output must be monitored by somebody who understands it. I'm constantly appalled by the poor documentation big vendors provide for the signatures their products produce. Blocking alone is not adequate. Who is attacking, and what is the attack?

  4. If your executives aren’t at the head of your InfoSec initiatives, they’re probably clicking on phishing emails. 

    I think this is pretty self-explanatory. Security is not an initiative that can be "tacked on" at a low level in an organization. To get the support and response needed to respond to incidents and prevent compromise, the SOC team must have a fast line to their organization's executives in an emergency. 

  5. Defense in Depth, mother##%er. Your firewall isn’t stopping phishing, zero days, or port 443. 

    I constantly hear organizations (and students, and engineers) bragging about their firewall configs. This is tone-deaf and obsolete thinking. Firewalls, even next-generation firewalls that operate at layer 7, can only do so much. As I've said previously, exploits from outside to inside networks are not the #1 way that major breaches occur. All it takes is one employee clicking yes to security prompts on a phishing message or compromised website for malware to be resident on a host inside the network. The command and control traffic from that host can take nigh-infinite forms, many of which won't be caught by a firewall without specific threat intelligence. You can't block port 80 or 443 at the firewall in most any environment, and that's all an attacker really needs to remote control a system. So you have to add layers of detection with more control and visibility, such as HIDS, internal IDS, and system-level restrictions. 

  6. There are a lot of things that log besides your firewall and antivirus. 

    I wrote a post on this a while back listing a bunch. The thing that horrifies me more than SOCs without a decent SIEM or log aggregation solution is the SOC that only monitors its antivirus console and firewall. So many network devices and systems can provide security logs. Are you looking at authentication or change logs? DNS requests? Email? (A small example of what auth logs alone can tell you follows this list.) 

  7. Good security analysts and responders are hard to find. Educate, motivate, and compensate yours. 

    Or you will lose them just as they are becoming experienced. Our field has almost a 0% unemployment rate. 

  8. Make good connections everywhere in your organization. People will know who to report security incidents to, and you’ll know who to call when they do. 

    There's often a personality and culture clash between infosec people and the rest of the business. This is really dangerous. We are ultimately just another agency supporting the business and its goals. All of our cases involve other units in our organization to one extent or another. 

  9. If you don’t have some kind of Wiki or KB with processes, contact info, and lessons learned, you’re doing it wrong. 

    I can’t believe I have to say this because it’s true of almost any scientific or technical field. If you don’t write down what you did and how you did it, the next person who comes along will have to spend the time and effort to recreate your steps and potentially make the same mistakes. This also means everybody on your team needs to be able to make notes and comment on processes, not just one gatekeeper. 

  10. You can’t do everything simultaneously. Identify and triage your security issues and tackle one project at a time. 

    Plenty of the horror stories I hear from security operations centers in their early stages involve taking on too much at once – especially without the guidance of a project manager. These teams drop everything because they can't do it all simultaneously. We have the unfortunate tendency to be ideas people who never organize the tasks we dream up into structured projects.

  11. Threat Intelligence is not a buzzword and does not center around APTs. Have good feeds of new malware indicators. 

    Yes, there are predatory companies selling threat intelligence feeds with little or no value (or ones that consist entirely of otherwise free data). The peril in discounting threat intelligence is that signature-based malware and threat detection is becoming less valuable every day. Every sample of the same malware campaign can look different due to polymorphism, and command and control mechanisms have gotten complex enough that traffic can change drastically. We are forced, at this point, to start looking in a more sophisticated way at who is attacking and how they operate to predict what they will do next. This includes everything from identifying domains resolving to a set of IPs to sophisticated intelligence analysis. How far you take threat intelligence depends on time, funding, and industry, but every organization should be making it a part of their security plan. (A minimal indicator-matching sketch follows this list.)

  12. If your employees have to DM me for help with their basic SIEM / log aggregation, you're failing at training. 

    Happens all the time, folks. I see a lot of good people at organizations with terrible training cultures. Make sure everybody has a level of basic knowledge from the start, and isn't so intimidated about asking for help that he or she feels forced to go outside your organization. 

  13. Team build, and don’t exclude. The SOC that plays well will respond well together and knows their members’ strengths and shortfalls. 

    Prototypical hacker culture, while an absolute blast, is not for everyone. I’ve seen people shamed out of infosec for the most bizarre reasons – the fact is that some people don’t drink alcohol, or want to go to cons, or think Cards Against Humanity is appropriate. Yes, we are generally intelligent people and we can be rather eccentric. That doesn’t mean that people who find these things unpleasant don’t have skills and knowledge to contribute. Accept that they don’t have the same interests and move on without badgering. It’s their personal choice. When you plan your teambuilding activities, try to make them inclusive – people with kids might not be able to hang out at the bar at midnight.

  14. If you seek hires, do it in a range of places. Grads, veterans, exploit researchers, and more all may have different stuff to offer. 

    I see a lot of organizations with a relationship with an infosec group or university that only recruit from that specific pool. Like a lack of genetic diversity, this stifles advancement and innovation. There are tons of places to find interesting perspectives on infosec from well-educated candidates. It's important to bring fresh ideas and perspectives into your team.

  15. If your ticketing system doesn't work in a security context, get your own dang ticketing system and forward. 

    There are two main reasons that you shouldn't be using the same ticketing system for security cases that your IT department uses for everyday help desk operations. The first is security – there is no reason that your IT contractors, or non-IT staff in general, should be able to see the details of sensitive cases, even through an error in permissions. That includes their accounts, should those become compromised. The second is that these ticketing systems are not designed with security incidents in mind. A security incident case management platform should do critical things like store malware samples safely, provide court-admissible records of evidence hashes and case notes, and integrate with SIEM or log aggregation solutions. If your ticketing solution is not doing these basic functions, it's high time to consider a separate platform. (A sketch of evidence-hash record keeping follows this list.)

  16. DO virtualize your malware analysis. DON’T virtualize your security applications unless the vendor says how to. 

    Virtualization software is critical for lots of reasons in infosec – from setting up malware analysis labs to CTFs to honeypots. It is not appropriate for all security applications and solutions. Most organizations are heavily pushing virtualization as a cost-saving initiative, but be very cautious about presuming all resource-intensive and highly specialized security tools will function alike when virtualized.
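
As promised under rule 6, here's a small taste of what non-firewall logs can offer: counting failed SSH logins per source from a standard Linux auth log. The path and message format assume a typical syslog layout; this is a sketch, not a monitoring solution.

    # Failed SSH logins per source IP from a Linux auth log (sketch).
    import re
    from collections import Counter

    FAILED_RE = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

    def failed_ssh_by_source(path="/var/log/auth.log"):
        """Return the ten noisiest sources of failed SSH authentication."""
        counts = Counter()
        with open(path, errors="replace") as log:
            for line in log:
                match = FAILED_RE.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts.most_common(10)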
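
For rule 11, the simplest useful form of threat intelligence in code: checking the resolutions your hosts make against a feed of known-bad infrastructure. The indicator IPs below are documentation-range placeholders, and any real feed needs vetting before you alert on it.

    # Match passive DNS records against a threat intel indicator set (sketch).
    BAD_IPS = {"203.0.113.7", "198.51.100.22"}  # placeholder indicators

    def flag_resolutions(dns_records):
        """dns_records: iterable of (domain, resolved_ip) pairs from DNS logs."""
        return [(domain, ip) for domain, ip in dns_records if ip in BAD_IPS]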
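
And for rule 15, the sort of record a proper security case platform should generate automatically at evidence intake: a hash captured immediately, with case context, so integrity can be demonstrated later. A sketch under those assumptions, not legal advice.

    # Evidence intake record with a SHA-256 hash (sketch).
    import hashlib
    from datetime import datetime, timezone

    def evidence_record(path, case_id, handler):
        """Hash an evidence file at intake and return a chain-of-custody entry."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return {
            "case": case_id,
            "file": path,
            "sha256": digest.hexdigest(),
            "collected_by": handler,
            "collected_at": datetime.now(timezone.utc).isoformat(),
        }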