Thanks for your interesting question submissions to “Ask Lesley”! This column will recur, on no particular schedule, whenever I receive interesting questions that are applicable to multiple people. See further details or submit a question here. Without further ado, today we have OS debates, management communication issues, nation state actors, and career questions galore!
So last year’s Anthem breach was from a nation state – why would a nation state want to hack health insurance info? I understand the identity theft motivation of a criminal, but why do you think a nation state would want this type of data?
First off, I can’t confirm the details of the Anthem breach – I wasn’t involved in the investigation and haven’t had the privilege of reviewing all the evidence. However, when generally talking about why a state-sponsored actor might want to acquire data, you have to look at a bigger picture than data sets. Nation states usually view hacking as a means to an end. They (ab)use data with a firm political or military objective in mind. Whether a nation state intended to steal 80 million records, or the theft was a crime of opportunity when looking for something more specific, what they stole may unfortunately be useful to them for years to come.
You can obviously already see how the data stolen in a healthcare breach is a treasure trove for general identity theft. The piece I believe you might be missing is how the data could be combined with other public domain and stolen information to facilitate political objectives. If you already have a target in mind, healthcare data could be a great boon to social engineering, blackmail, and surveillance efforts. For example, consider how much leverage knowing that a target’s child is ill could provide. Or that a target’s family is hundreds of thousands of dollars in medical debt. These are attractive attack vectors. I can only speculate on potential scenarios, but based on my experience in OSINT, the data stolen from Anthem adds attractive private information about many millions of people.
The ‘researcher’ portion of ‘security researcher’ implies graduate school – is PhD study in cybersecurity worth it? There don’t seem to be many programs that are worthwhile (except on paper).
– Not in Debt, Yet
Dear Not in Debt, Yet,
That’s an interesting implication – not one I necessarily agree with based on empirical evidence. I know full time, professional security researchers studying everything from exploits to governance who have every level of formal education, from GEDs to PhDs. I do see certain fields of security research represented in higher education more than others – a couple examples are high level cryptography and electronic engineering.
I have always been an advocate for higher education, and I see little harm and many benefits in getting a good education in a field you enjoy (particularly a well-rounded education) if you can afford it. However, at present, there are very few information security careers or communities of research which require a degree, and fewer good quality degree programs. You should see few credential-related barriers to participating in or publishing security research if your work and presentation are of good quality.
In some ways, existing exclusively in academia can also make it harder to work in practical security research, as the security field changes more quickly than university curricula can keep up. As a result, some academic security research ends up impractical and theoretical to a fault. (See my yearly rants on steganography papers.) If you go the academic route, choose your field of study carefully, and be careful not to lose touch with the working world.
While working on my 5 BILLION dollar data breach, I wanted some blue cheese dip and chips (The Spice House in Chicago has the best mix, btw), and a co-worker looked at me with disgust. Am I wrong? Also, what’s a good resource to learn about file carving?
– Epicurean EnCE
Clearly, your coworker is a Ranch dressing fan and should therefore be looked upon with disdain. Regarding file carving: your mission, should you choose to accept it, is to review how files are physically and logically stored on a hard drive. Next, you’ll want to start familiarizing yourself with typical file headers and footers. Gary Kessler has a pretty killer list, here. Some file types will be more relevant to your specific work in forensics than others; I can’t tell you which those will be. Your best bet is to pick a couple of file types you encounter frequently, examine them in a hex editor, then start searching for them in a forensic image.
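If you learn best by coding, here’s a minimal, naive carving sketch in Python that illustrates the header/footer idea using JPEG signatures. The filename disk.img is just a placeholder, and dedicated carving tools (foremost, scalpel, PhotoRec) handle fragmentation and false positives far better than this:

```python
import os

JPEG_HEADER = b"\xff\xd8\xff"  # SOI marker plus start of the next segment
JPEG_FOOTER = b"\xff\xd9"      # EOI marker

def carve_jpegs(data: bytes, max_size: int = 10 * 1024 * 1024) -> list:
    """Return byte ranges that start at a JPEG header and end at the
    next footer. Only finds contiguous, unfragmented files."""
    carved = []
    start = data.find(JPEG_HEADER)
    while start != -1:
        end = data.find(JPEG_FOOTER, start + len(JPEG_HEADER))
        if end == -1:
            break  # header with no matching footer; stop scanning
        end += len(JPEG_FOOTER)
        if end - start <= max_size:  # skip implausibly huge matches
            carved.append(data[start:end])
        start = data.find(JPEG_HEADER, end)
    return carved

# Usage sketch (assumes a raw forensic image named disk.img exists):
if os.path.exists("disk.img"):
    with open("disk.img", "rb") as f:
        for i, blob in enumerate(carve_jpegs(f.read())):
            with open(f"carved_{i}.jpg", "wb") as out:
                out.write(blob)
```

Naive signature matching like this will produce false positives on random data that happens to contain the magic bytes, which is exactly why looking at real examples in a hex editor first is so valuable.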
Brian Carrier’s File System Forensic Analysis, while a bit older, is still a stellar resource for understanding How Disk Stuff Works. The SANS SIFT kit includes the tools you will need to get started carving files from disk, and the associated cheat sheets will help with the commands.
If you want to carve files from packet captures, similar header/footer knowledge is required, along with a different tool set. Wireshark’s export alone will often suffice; if it fails, look at Network Miner.
What was the silliest / dumbest thing you’ve googled this week?
– Curious in Cincinnati
“The shirt, 2017”
I still don’t get what’s up with that.
I teach high school computer science courses, and many students’ biggest interest is infosec stuff. What should they do to prepare at that age? Any recommendations on software or skills I can teach them? I’m willing to put in the time and effort to learn things to teach and we have class time, but this isn’t what my tech career focused on, so I need some help. Thank you, you’re the best!
– Mentor in Michigan
Being a crummy hacker requires learning to use a few tools by following YouTube. Being a good hacker requires a great deal of foundational knowledge about other, less entertaining computer stuff.
The better one knows how computer hardware, operating systems, and networks work, the better he or she will be at hacking. If kids come out of your classes unafraid of taking their own software and hardware apart, you did your job right. That means a lot of thinking about how Windows and Linux function, how computer programs work all the way down to Assembly, and how data gets from point A to point B. If you are going to encourage kids to take stuff apart, make sure they also understand that law and ethics are involved. Provide them a safe and legal sandbox to explore, and explain why it’s important to know how to break things in order to fix them.
As an aside – by high school, kids are more than old enough to be actively participating in the infosec community if they wish. Numerous kids and teens attend and even present at hacker events these days; in fact, many conferences have educational events and sponsorships specifically for youth.
I normally use a Chromebook, but I also have to use Windows 10 so that I can use Cisco packet tracer (I’m studying CCNA). I really trust the security of my Chromebook, but Windows 10 – not so much. I have antivirus, anti-exploit and anti-ransomware software on my Windows laptop. But my question to you is: Is there a resource that you know of that can help lock down Windows 10 for the home user? Most of what I find is for enterprises and Enterprise versions of Windows 10 and if I do find something for the home user it invariably talks about privacy rather than security.
– Kerneled Out
Dear Kerneled Out,
The OS wars, while somewhat muddled by 2016, are alive and well. There are dogmatic Linux fans, dogmatic Windows fans, and so on and so forth. My opinion is that every OS has its place when used correctly by the right person. Many serious security people I know use every major OS on a daily basis – I sure do.
SwiftOnSecurity has a nice guide on securing Windows 10 here that should suit your needs.
As for Chrome OS over Windows – please don’t fall into the “security by obscurity” trap that MacOS and Chrome OS can encourage. They are both solid OSes with interesting ideas about security, and viable choices for home and business use cases. However, modern versions are not inherently more or less secure than modern Windows. MacOS, Windows, Chrome OS, and major Linux distros are only as secure as the human beings who configure and use them. Of course, the complexity of configuring them can vary based on user experience and training.
How come everyone wants 5 years experience for an entry level infosec job? I’ve been trying to get gainful employment in an offensive role for more than 6 months and no one wants anyone with less than 5 years of pentesting/red teaming experience. Can’t exactly do pen tests until you’re a pentester, so what do I do?
I’m sorry to hear you’re having so much trouble finding a position. I have written quite a lot about infosec career paths and job hunting in previous blogs, and I hope they can assist you a little. Red teaming is unfortunately a much harder and more competitive field to find work in than blue teaming, so my suggestions here are not going to be particularly pleasant:
- Consider your willingness to move. There are simply more red team jobs in places like DC and the west coast.
- Consider if you can take a lower-paid internship. It sucks, but it’s an in, and pen testing firms do offer them.
- Consider doing blue team SOC work for a couple years. It’s not exactly your cup of tea, but it will give you solid security experience.
- Network like crazy. Get to the cons and the meet-ups in person. Talk to people and build relationships.
- Do research and speak about it. Pick something that intrigues you, even if you have no professional experience, put in a few months’ work, and submit it to a CFP. It will get you name recognition.
Many infosec professionals feel that signature-based antivirus is dead. If that is the case… what do you recommend we replace it with to protect our most vulnerable endpoints (end users)?
– Sigs Uneasy
That’s the kind of black and white statement that makes a good headline, but exaggerates the truth a bit. Yes, there are a couple companies who have been able to ditch antivirus because of their topology and operations. The vast majority still use it. While signatures alone don’t cut it against quickly replaced and polymorphic threats, other antivirus features, such as HIPS and heuristics, still provide a benefit. (So, if you’re still using some kind of antivirus that can’t do those things, it’s time to upgrade.)
Antivirus today is useful as part of a “defense in depth” solution. It is not a silver bullet, and it’s certainly defeatable. However, it still catches mass malware and the occasional targeted threat. The threats AV misses should be caught by your network IPS, your firewall, your web filters, your application whitelisting solution, and so forth. None of those solutions is bulletproof alone, and even the efficacy of trendy solutions like whitelisting is limited if you don’t architect and administer your network securely.
I was testing a network and found some major flaws. The management doesn’t seem too bothered but I feel the issues are huge. I want to out them because these flaws could impact many innocent people. But if I do, I won’t be hired again. I look forward to your response.
– Vaguely Disturbed
Before whistle-blowing and potentially getting in legal trouble, I highly recommend you approach this argument from a solid risk management perspective. Sometimes, “it could be hacked” means a lot less to management than, “9 companies in our industry were breached in 2016, and if we are, it will probably cost us over 70 million dollars in lost revenue”. If you have access to anybody with a risk analysis background you can reach out to under the relevant NDA, I highly recommend you have a chat with them and put together a quantified, evidenced argument, ASAP. The more dollar signs and legal cases, the better your chances of winning this.
At the very least, win or lose, ensure you’ve covered your butt. This means written statements and acknowledgements stating that you clearly explained the potential risk and that they willfully chose to ignore it. Not only does requiring a notarized signature raise the perceived seriousness of the threat, but it will also be helpful if they decide to blame you or your employer two years from now.
I would suggest you consult a lawyer before breaking NDA or employment contract by whistle blowing, no matter how noble your intentions. I am not a lawyer, nor do I play one on TV.
I make software and web applications that connect to software and services from other companies. Sometimes those companies disable or cripple some features due to possible security exploits. When I’ve met with security people from those companies and asked them about the features they nerfed (disabled or crippled), I’m met with an awkward silence similar to the vague errors I get from their servers. As a developer, I’m so used to the open-source community that wants to help that this feels weird. Is there some certification, secret handshake, or specific brand of white fedora I need to have conversations with security people about their products’ security issues? Just trying to learn and grow, and not cause a mess for anybody.
No secret handshake. Here are a couple suggestions from the receiving end of these types of concerns:
- Set up a security lab with your applications and a client in it. Install one or more Snort or Suricata sensors with the free Emerging Threats ruleset between them to inspect their communication. (Security Onion is a nice, relatively easy-to-install option.) Send normal application traffic back and forth and see which security signatures fire on the network. That will give you some idea of what might be getting blocked before you even start the discussion (and help you reduce false positives).
- Ensure your applications are getting proper vulnerability testing before release. Again, even if you’re coding securely and responsibly, this can help reduce false positive detection by vulnerability scanners or sensors.
- Ask the security people what security products or appliances they are using on the hosts and on the network, and what signatures are firing. You might not have access to a 20,000 dollar security appliance to test, but their sensor might have full packet capture functionality or verbose logs that will help you troubleshoot.
- Try to build a better professional relationship with these teams if you can. If they’re involved in a local security group, perhaps drop by and have a drink with them.
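To make the lab suggestion a bit more concrete, here’s a small hedged Python sketch that tallies which Suricata signatures fired by reading its EVE JSON log (one JSON event per line). The path eve.json is a placeholder; your sensor will write it wherever its configured log directory points:

```python
import json
import os
from collections import Counter

def count_alert_signatures(eve_path: str) -> Counter:
    """Count alert events per signature name in a Suricata eve.json log."""
    counts = Counter()
    with open(eve_path) as f:
        for line in f:  # eve.json is newline-delimited JSON
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip truncated or malformed lines
            if event.get("event_type") == "alert":
                counts[event["alert"]["signature"]] += 1
    return counts

# Usage sketch (assumes Suricata has written an eve.json in this directory):
if os.path.exists("eve.json"):
    for sig, n in count_alert_signatures("eve.json").most_common(10):
        print(f"{n:5d}  {sig}")
```

A quick top-ten like this tells you which rules your application traffic is tripping most often, which is exactly the conversation starter you want before approaching the other company’s security team.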
I’m feeling it is time to move on from Windows XP, but only because many things no longer support it, and 3 GB is a bit limiting when running VMs and the like. I’ve tried Windows 10, and it is completely alien, and I worry about security – it streams things back to Microsoft, and is less secure than my hardened XP install. I’ve tried Mint Linux, and that was quite good, but underneath it is even more alien than Windows 10. I’ve heard of BSD, but I’m worried that my political career could be over if word about that got out, so I’ve not tried it. What do you suggest?
– Unsupported in UK
It is indeed high time to move off XP.
Windows XP is unsupported, highly vulnerable, and trivially exploitable by hackers. It is not in the same league as Windows 10 in terms of security. Even application whitelisting (which is considered a bit of a last-resort silver bullet in industry) isn’t a reliable means of securing XP against attacks anymore.
Yes, there are some IT professionals who dislike Windows 10. Those concerns usually have to do with things like the UI, embedded ads, and system telemetry – not the underlying security (which is quite well engineered).
If those are your specific concerns, a current version of Mint (which you tried), Ubuntu, or MacOS are all okay options. They would all need to be thoughtfully configured for security just as much as Windows. BSD will feel just as unfamiliar if you were uncomfortable operating in Mint, but I certainly don’t discourage you from giving it a try. Even MacOS is *nix based under the hood.
Unfortunately, it seems to me that you’re stuck with two options if you want to maintain any semblance of security: cope with your dislike of Windows 10, or dedicate some time to learning the inner workings of a new operating system. Either way, please get off XP as soon as possible.
My friend, since birth – who I’ll call M. E., has had a 23-year, jack-of-most-trades career in IT. ME is currently serving as the IT Decider (and Doer) at an SMB financial firm. Over the last five years, ME has enjoyed focusing on security. Technology, security in particular, is still near the top of his hobby list. However, compared to when he started his IT career, ME places a greater value on having a work-life balance. ME wonders if it’s too late for a change to the cyberz – without “starting over.” In your experience, is there a reasonable way for ME to jump from the “IT rail” to the “security rail” without touching the third rail and returning to Go, without collecting $200?
– ME’s Friend
Dear ME’s Friend,
Your ‘friend’ sounds like a great candidate for many security positions, but he or she might have to take a pay cut. 23 years of experience in systems administration and networking is 23 years of experience in how to take things apart, which is really mostly what security is behind the neat hats and the techno music.
ME is going to need to figure out two important things. Firstly, ME will need to gain some security-specific vocabulary to tie things together – a course or certification might be a nice feather in the cap. Then, ME is going to have to carefully plan out how to present him or herself as an Awesome Security Candidate in interviews and resumes. That will involve taking those 23 years of generalized experience, as well as security hobby work, and selling them as 23 years of Awesome Security Experience. For example, it takes a lot of understanding of Windows administration and scripting to be a good Windows pen tester. Or, it takes a lot of TCP/IP knowledge to do packet analysis of an IPS signature fire. Every niche of security requires deep knowledge of one or more areas of general IT.
All that being said, there are some security skills that need to be learned on the job. I wouldn’t push ME towards an entry level gig, but it may not be an easy lateral move to any senior technical position, either. A good segue if seniority is critical might be security engineering (IPS / SIEM / log aggregation administration, etc).
How does an organization go about starting a patch testing program? Ours seems to be stuck in a “don’t update it, you’ll break the application” mindset.
– TarPitted in Texas
As I noted to a reader above, sometimes this type of impasse with management can only be solved by presenting things as quantifiable risk. If you are telling management that your application is vulnerable, and they are saying it will cost too much if it breaks when you patch it, somebody else is quantifying risk better than you. You’d best believe the team saying, “the application might break” is also saying, “if this application breaks, it will cost us n dollars a day”. So, play that game. Tell management specifically how much money and time they stand to lose if a security incident occurs. Present this risk clearly – get help if you need it from the impacted teams, your disaster recovery and risk management professionals, and even your finance team.
Your managers should be making a decision based on monetary and other quantifiable business impact of the application going down for patching, vs. the monetary and other quantifiable business impacts of a potential security incident at x likelihood. Once they do that on paper, you’ve done due diligence.
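If it helps to make that comparison concrete, one common back-of-the-envelope method is annualized loss expectancy (ALE = single loss expectancy × annual rate of occurrence). Here’s a minimal sketch; every dollar figure and probability in it is an invented placeholder, not data from any real incident, so substitute your organization’s own estimates:

```python
def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    """Annualized loss expectancy: expected yearly cost of one risk scenario."""
    return single_loss_expectancy * annual_rate

# Scenario A: patching breaks the app. Placeholder estimate: 4 hours of
# downtime at $10k/hour, occurring in roughly 2 of 12 monthly patch cycles.
patch_breakage = ale(4 * 10_000, 2)

# Scenario B: the unpatched app is breached. Placeholder estimate:
# $2M incident cost, with a 10% chance of occurring in any given year.
breach = ale(2_000_000, 0.10)

print(f"Expected yearly cost of patch breakage: ${patch_breakage:,.0f}")
print(f"Expected yearly cost of a breach:       ${breach:,.0f}")
```

With numbers like these on paper, the conversation shifts from “patching is scary” to a side-by-side comparison of expected yearly costs, which is the language management actually makes decisions in.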