The InfoSec Amnesty Q&A

Foreword (Lesley)

One of the hardest things to accept in information security is that we as individuals will simply never know everything there is to know about the field, or all of its many niches. Despite this absolute reality, we still often feel embarrassed to ask basic questions about topics we don’t understand, due to a misplaced fear of looking unknowledgeable.

The reality is that there are a number of subjects in information security which many people who are otherwise quite competent professionals in the field are confused by. To try to alleviate this problem, I anonymously polled hundreds of infosec students and professionals about what topics they’re still having trouble wrapping their heads around. A few subjects and concepts rose to the top immediately: Blockchain, the Frida framework, DNSSEC, ASLR (and various associated bypasses), and PKI.

Since information security has many areas of specialty, I’ve stepped aside today and asked people specifically working in each niche to tackle breaking down these topics. Where possible, I have provided two perspectives from people with different experiences with the subject matter. Each of these contributors was tremendously generous with his or her time and knowledge. Please visit their social media profiles and personal blogs!

ASLR (Skip Duckwall and Mohamed Shahat)

Perspective One: Skip

1) This is a pretty tough topic, so let’s start with an easy one. Can you tell us a little about yourself, and your expertise related to ASLR / ASLR bypassing?

Yikes, ask the easy ones first, eh?  I’m a former DOD Red team member (contractor) who did some stuff to some things somewhere at some point in time.  My biggest life achievement is being part of a group which got a multi-billion dollar MS client pissed off enough to call MS to the carpet and eventually MS wrote a whitepaper.  Now I’m a consultant.  My experiences with ASLR, etc. are mostly from an “I have to explain why these are things to C-level folks and why they should care” standpoint.

2) ASLR bypasses are common in security news, but a lot of infosec folks don’t fully understand what ASLR does, and why bypassing it is a goal for attackers. Can you please give us a “500-words-or-less” explanation of the concepts? (Assume an audience with solid IT fundamentals)

Caveat:  This is a very technical question and in order to answer it in an easy to understand manner, I have to provide some background and gloss over a lot of very pertinent details.  My goal is to provide a GIST and context, not a dissertation ;-).
Ok, while I can assume people have solid IT fundamentals, I need to define a Computer Science fundamental, namely the concept of a stack.  A stack is a conceptual (or abstract) data structure where the last element in is the first element out (LIFO).  You put stuff into a stack by “pushing” it and you pull stuff out by “popping” it.  The Wikipedia page for a stack (https://en.wikipedia.org/wiki/Stack_(abstract_data_type) ) is a good read.
This is relevant because stacks are used extensively as the means for an operating system to handle programs and their associated memory spaces.  Generally, the memory associated with a process has three areas (arranged in a stack), namely the Text area (generally the program’s machine code), the data area (used for static variables), and the process stack, which is used to handle the flow of execution through the process.  When a process executes and hits a subroutine, the current information for the process (variables, data, and a pointer to where the execution was last at) gets pushed onto the process stack.  This allows the subroutine to execute and do whatever it needs to do, and if further subroutines occur, the same thing happens.  When the subroutine is finished, the stack gets popped and the previous execution flow gets restored.

One of the earliest types of attacks against programming mistakes was called ‘stack smashing’ (seminal paper here: http://www-inst.eecs.berkeley.edu/~cs161/fa08/papers/stack_smashing.pdf by Aleph One).  In this kind of attack, the attacker would try to stuff too much information into a buffer (a block of data which sits on the process stack) which would overwrite the saved return address on the stack and force the process to execute attacker-generated code included in the buffer.  Given the generally linear nature of how the stacks were handled, once you found a buffer overflow, exploiting it to make bad stuff happen was fairly straightforward.

ASLR (Address Space Layout Randomization) is an attempt to make the class of bugs called buffer overflows much more difficult to exploit.  When a process executes, it is generally given virtual memory space all to itself to work with.  So the idea was, rather than try to have all the process stack be clumped together, what if we just spread it out somewhat randomly throughout the virtual memory space?  This would mean that if somebody did find a buffer overflow, they would no longer reliably know where things sat in memory, making it much harder to redirect the flow of the process to their injected code and raising the bar for attackers (in theory).

Obviously bypassing ASLR is a goal for attackers because it is a potential gate barring access to code execution 😉
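To see the randomization for yourself, here is a minimal sketch (my illustration, assuming a Linux box with Python 3) that prints where libc and a native buffer landed in the process; run it several times and, with ASLR enabled, the addresses should differ on each run:

```
# aslr_demo.py -- print a few addresses from this process, then peek at the
# memory map. Run it repeatedly: with ASLR enabled the values change per run.
# Assumes Linux (for /proc/self/maps); the ctypes portion works elsewhere too.
import ctypes

libc = ctypes.CDLL(None)  # handle to the C library already loaded in this process
printf_addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
buf = ctypes.create_string_buffer(64)  # a natively allocated buffer

print(f"libc printf() mapped at: {printf_addr:#x}")
print(f"buffer allocated at:     {ctypes.addressof(buf):#x}")

# The stack and heap regions themselves are visible in the process map:
with open("/proc/self/maps") as maps:
    for line in maps:
        if "[stack]" in line or "[heap]" in line:
            print(line.strip())
```

Temporarily disabling randomization (for example with `setarch $(uname -m) -R python3 aslr_demo.py` on Linux) makes the addresses repeat, which is a quick way to convince yourself of what the mitigation is doing.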

3) What are two or three essential concepts for us to grasp about ASLR and the various  bypass techniques available?

So when it comes to ASLR bypasses, there are really only a couple of different categories of methods: brute force and information leakage.

In many cases, ASLR implementations were limited somehow.  For example, maybe there were only 16 bits (65,536 possible values) of randomness, so if you were trying to exploit a service which would automatically restart if it crashed, you could keep trying until you got lucky.  Many ASLR implementations suffer from some problem or another.

Another common problem with ASLR is that there may be segments of code which DON’T use ASLR (think external libraries) which are called from code that is using ASLR. So it might be possible to jump into code at a well-known location and then leverage that to exploit further.

Information leakage is the final issue that commonly arises.  The idea is that a different vulnerability (format string vulns are the most common) has to be exploited which will provide the attacker with a snapshot of memory, which can be analyzed to find the requisite information to proceed with the attack.

4) What would you tell somebody in infosec who’s having trouble grasping how ASLR works and how it is bypassed? (For example, what niches in security really need to “get it”? What other things could they study up on first to grasp it better?)

Honestly, unless you are an exploit developer, an application developer, or into operating systems memory design, a gist should be all you need to know. If you are a developer, there’s usually a compiler option somewhere which you’d need to enable to make sure that your program is covered. It is also worth noting that generally 64-bit programs have better ASLR because they can have more randomness in their address space.

5) What about somebody who has a solid grasp on the basics and wants to delve deeper? (Any self-study suggestions? Are there any open source projects that could benefit from their help?)

This topic rapidly reaches into the computer science scholarly paper area (Googling ASLR bypass pdfs will find you a lot of stuff). Also, look through Blackhat / DEF CON / other security conference archives, as many people will present their research. If you want to delve deeper, look into how the Linux kernel implements it, read through the kernel developer mailing lists, etc… lots of info available.

Perspective Two: Mohamed

1) Thank you for joining us! Would you mind telling us a little about yourself, and your expertise related to ASLR / ASLR bypassing?

Hi Lesley! My name is Mohamed, I’m a software engineer who has a lot of passion towards security. Some may know me from my blog (abatchy.com) where I write about various security concepts/challenges.

I currently work as an engineer on the Windows Security team where we design/implement security features and do other cool stuff.

2) ASLR bypasses are common in security news, but a lot of infosec folks don’t fully understand what ASLR does, and why bypassing it is a goal for attackers. Can you please give us a “500-words-or-less” explanation of the concepts? (Assume an audience with solid IT fundamentals)

Address space layout randomization (ASLR) is a security mitigation that aims to prevent an attacker from creating a reliable exploit. Its first implementation was over a decade ago, and it has become a staple in modern operating systems.

What it does is simple: the address space of a process is randomized on each run (or on reboot, depending on the implementation). This can be applied to the base address of the executable and the libraries it loads, as well as other data structures like the stack and the heap, other internal structures, and even the kernel itself (KASLR).

Executables are expected to be position-independent. On Windows, linking must be done with the /DYNAMICBASE flag, while on Linux the binary is built as a position-independent executable (the -fPIE/-pie flags for gcc/ld).

How does that help? Well, exploits rely on knowledge about the address space to be able to manipulate the execution flow (I control EIP, where do I go next?) and with this information taken away, attackers can no longer depend on predictable addresses. When combined with other fundamental mitigations like DEP (Data Execution Prevention), exploiting memory corruption bugs becomes much harder.

Before we discuss the common bypassing techniques, it’s important to stress that bypassing ASLR doesn’t directly enable code execution or pose a risk by itself: it is only one part of the exploit chain, and you still need to trigger a vulnerability that results in code execution. Still, finding an ASLR bypass means that exploits ASLR had broken can become reliable again.

There are a few ways to bypass ASLR, some of these techniques are less likely to be applicable in modern OS/software than others:

  1.  Information Disclosure: The most commonly used method to bypass ASLR nowadays; the attacker aims to “trick” the application into leaking an address.

    Example: CVE-2012-0769

  2.  Abusing non-ASLR modules: The presence of a single non-ASLR module means an attacker has a reliable place to jump to. Nowadays, this is becoming less common.

    Example: CVE-2013-3893, CVE-2013-5057

  3.  Partial overwrite: Instead of overwriting EIP, overwrite the lower bytes only. This way you don’t have to deal with the higher bytes affected by ASLR.

    Example: CVE-2007-0038

  4. Brute-forcing: Keep trying out different addresses. This assumes that the target won’t crash, and the virtual memory area is small (ASLR on 64-bit > ASLR on 32-bit).

    Example: CVE-2003-0201

  5. Implementation flaws: Weak entropy, unexpected regression, logical mistakes or others. Lots of great research on this topic.

    Example: CVE-2015-1593, offset2lib

    In the real world, attackers will need to bypass more than just ASLR.

3) What are two or three essential concepts for us to grasp about ASLR and the various bypass techniques available?

  1. For ASLR to be effective, all memory regions within a process (at least the executable ones) must be randomized; otherwise attackers have a reliable location to jump to. It’s also possible that not all objects are randomized with the same entropy (amount of randomization); in a sense, the object with the lowest entropy is the weakest link.
  2. Bypassing ASLR doesn’t mean attackers can execute code. You still need an actual vulnerability that allows hijacking the execution flow.
  3. Some bypasses don’t defeat the randomization outright but instead aim to reduce the effective entropy.

4) What would you tell somebody in infosec who’s having trouble grasping how ASLR works and how it is bypassed? (For example, what niches in security really need to “get it”? What other things could they study up on first to grasp it better?)

  1. Understand the memory layout of a process for both Linux/Windows, see how they change on rerun/reboot.
  2. Write a simple C++ program that prints the address of local variables/heap allocations with and without ASLR. Fire up a debugger and check the process layout of various segments.
  3. Research past ASLR bypasses, study how they worked, and recreate them if possible.

5) What about somebody who has a solid grasp on the basics and wants to delve deeper? (Any self-study suggestions? Are there any open source projects that could benefit from their help?)

  1. Understand the implementation differences for ASLR in Windows and Linux.
  2. Familiarize yourself with other mitigations like DEP, stack cookies (Windows/Linux), AAAS, KSPP (Linux), policy-based mitigations like ACG/CIG (Windows). This list is in no way comprehensive but serves as a good start.
  3. Solve exploitation challenges from CTFs, recreate public exploits that rely on bypassing ASLR.
  4. Check PaX’s ASLR implementations.

Recommended reads:

  1. Differences Between ASLR on Windows and Linux
  2. On the effectiveness of DEP and ASLR
  3. The info leak era on software exploitation
  4. Exploiting Linux and PaX ASLR’s weaknesses on 32- and 64-bit systems

For hands-on experience I recommend the following:

  1. RPISEC’s MBE course
  2. https://exploit-exercises.com
  3. CTFs

Blockchain (Tony Arcieri and Jesse Mundis)

Perspective One: Tony

1) Thanks for joining us. Would you mind telling us a little about your background, and your expertise with blockchain technology?

I’m probably most known in the space for the blog post “On the dangers of a blockchain monoculture”, which covers both my (somewhat dated) views of blockchains and how alternative “next generation fintech” systems not based on blockchains might provide better alternatives. I spent the last year working for Chain.com, an enterprise blockchain company targeting cryptographic ledgers-as-a-service, which I recently left to pursue other interests.

2) Would you please give us a 500-words-or-less explanation of what a blockchain is, and why the technology is important to us as security professionals? (Assume an audience with solid IT fundamentals)

“Blockchain” is a buzzword which loosely refers to the immutable, append-only log of transactions used by Bitcoin, collectively agreed upon in a distributed manner using a novel consensus algorithm typically referred to as “Nakamoto consensus”. Other systems have adopted some of the ideas from Bitcoin, often changing them radically, but still referring to their design as a “blockchain”, furthering a lack of clarity around what the word actually refers to.

A “blockchain” is more or less analogous to a Merkle Tree with some questionable tweaks by Satoshi[2], which authenticates a batch of transactions which consist of input and output cryptographic authorization programs that lock/unlock stored values/assets using digital signature keys.
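For readers who have not worked with Merkle Trees before, the core construction is simple: hash every transaction, then repeatedly hash adjacent pairs together until a single root remains. Here is a deliberately simplified sketch of that idea (my illustration; Bitcoin’s real construction uses double SHA-256 over binary transaction IDs and its own odd-node handling, which is part of what footnote [2] is grumbling about):

```
# Simplified Merkle-root construction: hash each transaction, then hash
# adjacent pairs upward until one root remains. Illustrative only.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    level = [h(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:              # odd number of nodes: duplicate the last one
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx-a", b"tx-b", b"tx-c"]
print(merkle_root(txs).hex())           # changing any single tx changes the root
```

The useful property is the last comment: one short root value authenticates an arbitrarily large batch of transactions, and any tampering with any leaf changes the root.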

Bitcoin in particular uses a proof-of-work function to implement a sort of by-lottery distributed leader election algorithm. Being a buzzword, it’s unclear whether the use of a proof-of-work function is a requirement of a blockchain (the Bitcoin paper refers to the idea of a blockchain as a “proof-of-work chain”, for example), but in colloquial usage several other systems claiming to be based on a “blockchain” have adopted alternative authorization mechanisms, namely ones based around digital signatures rather than a proof-of-work function.

As a bit of trivia: the term “blockchain” does not appear in the original Bitcoin whitepaper. It appears to be a term originally used by Hal Finney prior to Bitcoin which Satoshi adopted from Hal.

[2]: It really appears like Satoshi didn’t understand Merkle Trees very well: https://github.com/bitcoin/bitcoin/blob/master/src/consensus/merkle.cpp#L9

3) What are a couple really critical concepts we should understand with regards to how blockchain technology functions?

Perhaps the most notable aspect of Bitcoin’s blockchain is its use of authorization programs as part of the “Nakamoto consensus” process: every transaction in Bitcoin involves two programs: an input program which has locked funds which will only unlock them if the authorization program’s requirements are met, and an output program which specifies how funds should be locked after being unlocked. Every validating node in the system executes every program to determine whether or not actions affecting the global state of the system are authorized.

This idea has been referred to as “smart contracts”, which get comparatively little attention with Bitcoin (versus, say, Ethereum) due to its restrictive nature of its scripting language, but every Bitcoin transaction involves unlocking and re-locking of stored value using authorization programs. In other words, “smart contracts” aren’t optional but instead the core mechanism by which the system transfers value. If there is one thing I think is truly notable about Bitcoin, it’s that it was the first wide-scale deployment of a system based on distributed consensus by authorization programs. I would refer to this idea more generally as “distributed authorization programs”.

Bitcoin in particular uses something called the “unspent transaction output” (UTXO) model. In this model, the system tracks a set of unspent values which have been locked by authorization programs/”smart contracts”. UTXOs once created are immutable and can only move from an unspent to spent state, at which point they are removed from the set. This makes the Bitcoin blockchain a sort of immutable functional data structure, which is a clean and reliable programming model.
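A toy illustration of that UTXO bookkeeping (my sketch, not Bitcoin’s actual data structures): the chain state is just a set of unspent outputs, and a transaction atomically removes the outputs it spends and adds the new ones it creates.

```
# Toy UTXO bookkeeping: the "state" is a dict of unspent outputs keyed by
# (txid, output_index). Spending removes entries; new outputs add entries.
utxo_set = {("genesis", 0): {"value": 50, "locked_by": "alice"}}

def apply_transaction(txid, inputs, outputs):
    for ref in inputs:
        if ref not in utxo_set:
            raise ValueError(f"{ref} is spent or does not exist")  # double-spend check
        del utxo_set[ref]
    for index, output in enumerate(outputs):
        utxo_set[(txid, index)] = output

# Alice pays Bob 30 and takes 20 back as change.
apply_transaction("tx1",
                  inputs=[("genesis", 0)],
                  outputs=[{"value": 30, "locked_by": "bob"},
                           {"value": 20, "locked_by": "alice"}])
print(utxo_set)
```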

Ethereum has experimented in abandoning this nice clean side effect-free programming model for one which is mutable and stateful. This has enabled much more expressive smart contracts, but generally ended in disaster as far as mutability/side effects allowing for new classes of program bugs, to the tune of the Ethereum system losing the equivalent of hundreds of millions of dollars worth of value.

4) What would you tell somebody in infosec who’s struggling to conceptualize how a blockchain works? (For example, does everybody in the field really need to “get it”? Why or why not? What other things could they study up on to grasp it better?)

There are other systems which are a bit more straightforward which share some of the same design goals as Bitcoin, but with a much narrower focus, a more well-defined threat model, and both a cleaner and more rigorous cryptographic design. These are so-called “transparency log” systems originally developed at Google, namely Certificate Transparency (CT), General Transparency (GT) a.k.a. Trillian, Key Transparency (KT), and Binary Transparency. These systems all maintain a “blockchain”-like append-only cryptographically authenticated log, but one whose structure is a pure Merkle Tree free of the wacky gizmos and doodads that Satoshi tried to add. I personally find these systems much easier to understand and consider their cryptographic design far superior to and far more elegant than what has been used in any extant “blockchain”-based system, to the point I would recommend anyone who is interested in blockchains study them first and use them as the basis of their cryptographic designs.

Links to information about the design of the “transparency log” systems I just mentioned:

5) What about somebody who has a solid grasp on the basics and wants to delve deeper? (Any self-study suggestions? Are there any open source projects that could benefit from their help?)

Here are some links to specific bits and pieces of Bitcoin I think are worth studying:
– Bitcoin Transactions (a.k.a. UTXO model): http://chimera.labs.oreilly.com/books/1234000001802/ch05.html

Perspective Two: Jesse

1) Let’s start with the easy one. Would you please tell us a little about your background, and your expertise with blockchain technology?

I’m a C / Unix Senior Software Developer with a CISSP, who has worked with encryption and payment technologies throughout my career. I have a recently published paper on the possible implications of the GDPR (General Data Protection Regulation) on blockchain-based businesses, and have a pending patent application involving cryptographic keying material and cryptocurrencies. As an Info Sec professional, I enjoy the chance to share some knowledge with folks who wish to learn more about the field.

2) Would you please give us a 500-words-or-less explanation of what a blockchain is, and why the technology is important to us as security professionals? (Assume an audience with solid IT fundamentals)

A blockchain is fundamentally a ledger of transactions, with each “block” or set of transactions hashed in such a way as to link it to the previous block, forming a “chain.” There are many blockchains, with varying implementations and design goals, but at their core, they all provide for continuity and integrity of an ever-growing ledger of transactions. They provide an unalterable(*) record of events, in a distributed fashion, verifiable by any participant, and can be an important tool for providing “Integrity” in the CIA triad. The Bitcoin blockchain is the most famous, providing a basis for the BTC currency, so I will use it as a blockchain example. However, please understand that blockchain transactions don’t have to be financial in nature – they could be hashes of timestamped signed documents, or just about anything else you might want to keep an unalterable, witnessed record of.

(*) “unalterable” – In this case, it means that the integrity of the network as a whole is secured only by substantial ongoing compute power in a proof-of-work blockchain. Without that, you lose the core assurance the technology is trying to provide.

In the proof-of-work bitcoin blockchain, transactions are effectively of the form “At time Z, wallet number X paid wallet number Y the sum of N bitcoins.” Imagine many of these messages being dumped on a common message bus worldwide. “Miners” (who should more descriptively be thought of as “notaries”) collect up a “block” of these transactions, and along with the digital hash of the previous block in the chain, begin searching for a nonce value, which, when added to their block, will make the hash of their block have the required number of leading zeros to be considered successful. The winning miner announces this block with their nonce to the world. All other miners confirm the block is valid, throw their in-progress block away, and begin working on a new block, which must now contain the winning block’s hash, thus adding another link to the chain.

Checking the hash of a block is trivial, but finding the right nonce to create a valid hash takes time inversely proportional to the miner’s computing power. Once the chain has a sufficiently large number of blocks, each chaining back to the previous block, it becomes impractical to refute, change, or delete any records deep enough in the chain, without re-doing all the computational work which follows. An attacker would require a substantial percentage of the entire computational capacity of the network to do this.

In summary, a “block” is a set or group of transactions or entries plus a nonce, and the “chain” is formed by including the hash of the previous block as part of the next block. The weight of all future computations to find nonces for future blocks collectively secure the integrity of all the previous records in the chain.
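The nonce search Jesse describes fits in a few lines of code. A toy sketch (my own; real Bitcoin hashes an 80-byte binary header with double SHA-256 and compares against a full numeric target, but the shape of the work is the same):

```
# Toy proof-of-work: find a nonce so that SHA-256(prev_hash | txs | nonce)
# starts with `difficulty` zero hex digits. Verifying takes one hash;
# finding the nonce takes roughly 16**difficulty attempts on average.
import hashlib

def mine(prev_hash: str, transactions: str, difficulty: int = 4):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}|{transactions}|{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

prev = "0" * 64  # pretend genesis hash
nonce, block_hash = mine(prev, "at time Z, wallet X paid wallet Y the sum of N bitcoins")
print(f"nonce={nonce}\nhash={block_hash}")
```

Raise `difficulty` by one and the search takes roughly sixteen times longer, while verification stays a single hash; that asymmetry is what makes rewriting deep history impractical.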

3) What are a couple really critical concepts we should understand with regards to how blockchain technology functions?

“Blockchain” is not magical security pixie dust, and many new startup businesses pitching blockchain haven’t thought it through. As mentioned above, proof-of-work blockchains need a lot of compute power to secure them. Bitcoin is a fascinating social hack, in that by making the transactions about a new currency, the algorithm was designed to incentivize participants to donate compute power to secure the network in return for being paid fees in the new currency. On the other hand, private blockchains, kept within a single company, may be no more secure against tampering than other existing record-keeping mechanisms. That is not to say blockchains are useless outside of cryptocurrencies. The blockchain is applicable to “The Byzantine Generals Problem” [1] in that it can create a distributed, trusted ledger of agreement between parties who don’t necessarily trust each other. I fully expect the basics of blockchain technology to soon be taught in CS classes, right alongside data structures and algorithms.

[1] https://www.microsoft.com/en-us/research/publication/byzantine-generals-problem/

4) What would you tell somebody in infosec who’s struggling to conceptualize how a blockchain works? (For example, does everybody in the field really need to “get it”? Why or why not? What other things could they study up on to grasp it better?)

Keep it simple. A block is just a set of entries, and the next block is chained back to the previous block via inclusion of the previous block’s hash. The hash on each individual block is the integrity check for that block, and by including it in the next block, you get an inheritance of integrity. A change in any earlier block would be detected by the mismatched hash, and replacing it with a new hash would invalidate all the later blocks. Hashing is computationally easy, but finding a new nonce to make the altered hash valid in a proof-of-work scheme requires redoing all the work for all the blocks after the change. That’s really all you need to keep in mind.

Not everyone in the security field needs to understand blockchain at a deep level. You should have a basic understanding, like I’ve sketched out above, so you can judge whether blockchain makes sense for your given use case. Again, using the more famous Bitcoin blockchain as an example, I’d strongly recommend everyone read the original 2008 Satoshi white paper initially describing Bitcoin[2]. It’s only eight pages, light on math, and very readable. It encapsulates many of the ideas all blockchains share, but I have to say again that while Bitcoin is implemented on the original blockchain, it is far from the only way to “do blockchains” today.

[2] https://bitcoin.org/bitcoin.pdf

5) What about somebody who has a solid grasp on the basics and wants to delve deeper? (Any self-study suggestions? Are there any open source projects that could benefit from their help?)

Blockchain startups, projects, and new cryptocurrencies are all hot. Ethereum is getting a lot of press due to its “smart contracts” which provide compute actions executed on their blockchain. There are over ten thousand hits on github for “blockchain” right now, and over one hundred and fifty for books and videos at Safari Online. The challenge really is to narrow down your interest. What do you want to do with blockchain technology? That should guide your next steps. Just to throw out some ideas, how about finding a more power efficient way to do proof-of-work? Currently the Bitcoin network as a whole is estimated to be running at about 12 petahashes per second, and consuming 30 terawatt-hours per year. This is environmentally unsustainable. Or, examine some of the proof-of-stake alt-coins. Figure out what kinds of problems we can solve with this nifty, distributed, trust-out-of-trustlessness tool.

In my opinion, blockchain technologies really are a tool searching for the right problem. An alt-currency was an interesting first experiment, which may or may not stand the test of time. Smart contracts don’t seem ready for production business use to me just yet, but what do I know – Ethereum has a 45 billion dollar market cap, second only to Bitcoin right now. I personally don’t see how inventory tracking within an enterprise is really done better with a private blockchain than traditional methods, but I do see how one might be of use for recording land title deed transfers in a government setting. All of these activities, and many more, are having blockchain technologies slapped onto them to see what works. My advice is to find something which excites you, and try it.

The distributed, immutable ledger a blockchain provides feels like it is an important new thing to me for our industry. Maybe one of you will figure out what it’s really good for.

DNSSEC (Paul Ebersman)

1) Nice to meet you, Paul. Could you please tell us a little about yourself, and a bit about your work with DNSSEC?

I’ve been supporting internet connected servers since 1984, large scale DNS since 1990. I’ve been involved with the IETF development of DNS/DNSSEC standards and the DNS-OARC organization. For 3+ years, I was the DNS/DNSSEC SME for Comcast, one of the largest users of DNSSEC signing and validation.

2) Would you please give us a brief explanation of what DNSSEC is, and why it’s important?

The DNS is used to convert human-friendly names, like www.example.com, into the IP address or other information a computer or phone needs to connect a user to the desired service.

But if a malicious person can forge the DNS answer your device gets and give you the IP address of a “bad” machine instead of the server you think you’re connecting to, they can steal login information, infect your device with malware, etc.

DNSSEC is a technology that lets the owner of a domain, such as example.com, put cryptographic signatures on DNS records. If the user then uses a DNS resolver that does DNSSEC validation, the resolver can verify that the DNS answer it passes to the end user really is exactly what the domain owner signed, i.e. that the IP address for www.example.com is the IP address the example.com owner wanted you to connect to.

That validation means that the user will know that this answer is correct, or that someone has modified the answer and that it shouldn’t be trusted.
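One easy way to watch this in action is to query a validating resolver with DNSSEC data included and look for the RRSIG records and the “ad” (authenticated data) flag in the reply. A small sketch (my own, assuming the `dig` utility is installed; Quad9 at 9.9.9.9 validates by default):

```
# Quick DNSSEC check: query a validating resolver with +dnssec and report
# whether signatures (RRSIG) came back and whether the resolver set the
# "ad" (authenticated data) flag. Assumes the `dig` utility is installed.
import subprocess

def dnssec_status(name: str) -> None:
    out = subprocess.run(["dig", "@9.9.9.9", name, "A", "+dnssec"],
                         capture_output=True, text=True).stdout
    flags_line = next((line for line in out.splitlines() if "flags:" in line), "")
    print(f"{name}: RRSIG present={'RRSIG' in out}, ad flag={' ad' in flags_line}")

dnssec_status("example.com")        # a signed zone
dnssec_status("dnssec-failed.org")  # deliberately broken; a validating resolver returns SERVFAIL
```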

3) What are a couple really critical concepts we should understand with regards to how DNSSEC functions?

DNSSEC means that a 3rd party can’t modify DNS answers without it being detected.

However, this protection is only in place if the domain owner “signs” the zone data and if the user is using a DNS resolver that is doing DNSSEC validation.

4) What would you tell somebody in infosec who’s struggling to conceptualize how DNSSEC works?

DNSSEC is end-to-end data integrity only. It does raise the bar on how hard it is to hijack a DNS zone, modify data in that zone, or modify the answer in transit.

But it just means you know you got whatever the zone owner put into the zone and signed. There are some caveats:

– It does not mean that the data is “safe”, just unmodified in transit.
– This is data integrity, not encryption. Anyone in the data path can see both the DNS query and response, who asked and who answered.
– It doesn’t guarantee delivery of the answer. If the zone data is DNSSEC signed, the user uses a DNSSEC-validating resolver, and the data doesn’t validate, the user gets no answer to the DNS query at all, making this a potential denial-of-service attack.

Because it does work for end to end data integrity, DNSSEC is being used to distribute certificates suitable for email/web (DANE) and to hold public keys for various PKI (PGP keys). Use of DNSSEC along with TLS/HTTPS greatly increases the security and privacy of internet use, since you don’t connect to a server unless DNSSEC validation for your answer succeeds.

5) What about somebody who has a solid grasp on the basics and wants to delve deeper?

Start with the documentation for your DNS authoritative server for information on signing your zones. Similarly, read the documentation for your recursive resolver and enable DNSSEC validation on your recursive resolver (or use a public validating resolver, such as 8.8.8.8 or 9.9.9.9).

Here are some good online resources:

For debugging DNSSEC problems or seeing if a zone is correctly signed: https://www.dnsviz.com

For articles on DNSSEC: https://www.internetsociety.org/deploy360/dnssec/

PKI (Tarah M. Wheeler and Mohammed Aldoub)

Perspective One: Tarah

(Tarah Wheeler, principal security researcher at Red Queen Technologies, New America Cybersecurity Policy Fellow, author Women In Tech. Find her at @tarah on Twitter.)

1) Hi, Tarah! Why don’t we start off with you telling us a little about your background, and your expertise with PKI.

My tech journey started in academia, where I spent my time writing math in Java. As I transitioned more and more to tech, I ended up as the de facto PKI manager for several projects. I handled certificate management while I was at Microsoft Game Studios working on Lips for Xbox and Halo for Xbox, and debugged the cert management process internally for two teams I worked on. On my own projects and for two startups, I used a 2009 Thawte initiative that provided certificates free to open source projects, and then rolled my own local CA out of that experience. I managed certs from Entrust for one startup. I handled part of certificate management at Silent Circle, the company founded by Phil Zimmermann and Jon Callas, the creators of PGP. I was Principal Security Advocate at Symantec, and Senior Director of Engineering in Website Security—the certificate authority that owns familiar words like VeriSign, Thawte, GeoTrust, and others. I was one of the Symantec representatives to the CA/B (Certification Authority/Browser) Forum, the international body that hosts fora on standards for  certificates, adjudicates reliability/trustworthiness of certificate authorities, and provides a discussion ground for the appropriate issuance and implementation of certificates in browsers. Now, I use LetsEncrypt and Comodo certs for two WordPress servers. I have a varied and colorful, and fortunately broad experience with cert management, and it helped me get a perspective on the field and on good vs. bad policy.

2) Would you please give your best, “500 words or less” explanation of what PKIs are and what they’re used for today (assume an audience with solid IT fundamentals)?

PKI or public key infrastructure is about how two entities learn to trust each other in order to exchange messages securely. You may already know that Kerberos and the KDC (Key Distribution Center) work on a shared-secrets principle, where users can go to a central authority and get authorization to communicate and act in a given network. PKI is a more complex system that understands lots of different networks which may or may not share a common trust authority. In PKI, you’re negotiating trust with a root which then tells you all the other entities that you can trust by default. The central idea of public key infrastructure is that some keys you already trust can delegate their trust (and hence yours) to other keys you don’t yet know. Think of it as a very warm introduction by a friend to someone you don’t yet know!

There are five parts of certificate or web PKI.

  1. Certificate authorities, the granting bodies for public/private keys, are in practice a form of verification to grease those wheels when there’s no other method of demonstrating that you are who you say you are…a function of identity. Yeah, I know I said that two entities can trust each other without a common authority, but humans aren’t good at that kind of trust without someone vouching for them. So, we have CAs.
  2. Registration authorities have what is essentially a license to issue certificates based on being trusted by the CA, and dependent upon their ability to validate organizational identity in a trustworthy way. Certificate authorities may perform their own registration, or they might outsource it. CAs issue certificates, and RAs verify the information provided in those certificates.
  3. Certificate databases store requests for certificates as opposed to the certificates themselves.
  4. Certificate stores hold the actual certificates. I wasn’t in charge of naming these bloody things or I’d have switched this one with certificate databases because it’s not intuitive.
  5. Key archival servers are a possible backup to the certificate database in case of some kind of disaster. This is optional and not used by all CAs.

Keys work like this: a pair of keys is generated from some kind of cryptographic algorithm. One common algorithm is the RSA (Rivest-Shamir-Adleman) algorithm, and ECDSA (Elliptic Curve Digital Signature Algorithm) is coming into more common use. Think of those as wildly complicated algebraic equations that spit out an ‘x’ string and a ‘y’ string at the end that are interrelated. You can give the ‘x’ to anyone anywhere, and they can encrypt any message, ‘m’ with that x. Now, while they know the original message, only you can unencrypt the message using your ‘y’ key. That’s why you can send the ‘x’ key to anyone who wants to talk to you, but you should protect the secrecy of your ‘y’ key with your teeth and nails.
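As a concrete (if simplified) illustration of that “x encrypts, only y decrypts” relationship, here is a short sketch using the third-party `cryptography` package (my assumption, not something from the interview):

```
# Sketch of the "x encrypts, only y decrypts" idea with RSA. Assumes the
# third-party `cryptography` package is installed (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # the 'y' you guard
public_key = private_key.public_key()                                         # the 'x' you hand out

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"meet me at the usual place", oaep)  # anyone can do this
plaintext = private_key.decrypt(ciphertext, oaep)                     # only the key owner can
print(plaintext)
```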

The two major uses for PKI are for email and web traffic. On a very high level, remember that traffic over the Internet is just a series of packets—little chunks of bits and bytes. While we think of email messages and web requests as philosophically distinct, at the heart, they’re just packets with different port addresses. We define the difference between messages and web requests arbitrarily, but the bits and bytes are transmitted in an identical fashion. So, encrypting those packets is conceptually the same in PKI as well.

If you want to secure email back and forth between two people, the two most common forms of PKI are PGP (Pretty Good Privacy) and S/MIME (Secure/Multipurpose Internet Mail Extensions). PGP is the first commonly used form of email encryption. Created by Phil Zimmermann and Jon Callas in the early 1990s, PGP is notoriously both secure and difficult to configure for actual human usage, but remains the standard for hyper-secure communication such as with journalists or in government usage. S/MIME is the outsourced version of PKI that your email provider almost certainly uses (once they’ve machine-read your email for whatever commercial/advertising purposes they have) to transmit your email to another person over open Internet traffic. While S/MIME is something most users don’t have to think about, you’ll want to think about whether you trust both your email provider and the provider of the person you’re sending your email to.

The other major use for PKI is a web server authenticating itself to and encrypting communications back and forth with a client—an SSL/TLS certificate that’s installed and working when you see “https” instead of “http” at the beginning of a URL. Most of the time, when we’re talking about PKI in a policy sense or in industry, this is what we mean. Certificate authorities such as DigiCert, Comodo, LetsEncrypt, and others will create those paired keys for websites to use to both verify that they are who they say they are, and to encrypt traffic between a client who’s then been assured that they’re talking to the correct web server and not a visually similar fake site created by an attacker.

This is the major way that we who create the Internet protect people’s personal information in transit from a client to a server.

Quick tangent: I’m casually using the terms “identification” and “authentication,” and to make sure we’re on the same page: identification is making sure someone is who they say they are. Authentication is making sure they’re allowed to do what they say they’re allowed to do. If I’m a night-time security guard, I can demand ID and verify the identity of anybody with their driver’s license, but that doesn’t tell me if they’re allowed to be in the building they’re in. The most famous example in literature of authentication without identification is the carte blanche letter Cardinal de Richelieu wrote for Madame de Winter in “The Three Musketeers,” saying that “By My Hand, and for the good of the State, the bearer has done what has been done.” Notably, D’Artagnan got away with literal murder by being authenticated without proof of identification when he presented this letter to Louis XIII at the end of the novel. Also: yes, this is a spoiler, but Alexandre Dumas wrote it in 1844. You’ve had 174 years to read it, so I’m calling it fair game.

There are a few other uses for PKI, including encrypting documents in XML and some Internet Of Things applications (but far, far fewer IoT products are using PKI well than should be, if I can mount my saponified standing cube for a brief moment).

Why do we use PKI and why do information security experts continue to push people and businesses to use encryption everywhere? It’s because encryption is the key (pun absolutely intended) to increasing the expense in terms of time for people who have no business watching your traffic to watch your traffic. Simple tools like Wireshark can sniff and read your mail and web traffic in open wireless access points without it.

3) What are a couple really critical concepts we as infosec people should understand with regards to how a modern PKI functions?

The difference between identity and security/encryption. We as security people understand the difference, but most of the time, the way we explain it to people is to say “are you at PayPal? See the big green bar? That’s how you know you’re at PayPal” as opposed to “whatever the site is that you’re at, your comms are encrypted on the way to them and back.”

There’s a bit of a polite war on between people who think that CAs should help to verify identity and those who think it is solely a function of encryption/security. Extended validation (“EV certs”) certificates show up as those green bars in many desktop browsers, and are often used to show that a company is who they say they are, not just whether your traffic back and forth is safe.

Whether they *should* be used to identify websites and companies is a topic still up for debate and there are excellent arguments on both sides. An extended validation certificate can prove there’s a real company registered with the correct company name to own that site, but in rare cases, it may still not be the company you’re looking for. However, in practice and especially for nontechnical people, identifying the site is still a step up from being phished and is often the shortcut explanation we give our families at holidays when asked how to avoid bad links and giving out credit card info to the wrong site.

4) What would you tell somebody in infosec who’s struggling to conceptualize how PKI works? (For example, does everybody in the field really need to “get it”? Why or why not? What other things could they study up on to grasp it better?)

PKI has become an appliance with service providers and a functional oligopoly of certificate authorities that play well with the major browsers. That isn’t necessarily a bad thing; it’s simply how this technology evolved into its current form of staid usefulness and occasional security hiccups. In reality, most people would do better knowing how best to implement PKI, since vulnerabilities are in general about the endpoints of encryption, not in the encryption itself. For instance: don’t leave 777 perms on the directory with your private keys. If your security is compromised, it’s likely not because someone cracked your key encryption—they just snagged the files from a directory they shouldn’t have been allowed in. Most PKI security issues are actually sysadmin issues. A new 384-bit ECDSA key isn’t going to be cracked by the NSA brute forcing it. It’ll be stolen from a thumb drive at a coffee shop. PKI security is the same as all other kinds of security; if you don’t track your assets and keep them updated, you’ve got Schroedinger’s Vulnerability on your hands.

PKI isn’t the lowest-hanging fruit on the security tree, but having gaping network/system security holes is like leaving a convenient orchard ladder lying about.

5) What about somebody who has a solid grasp on the basics and wants to delve deeper? (Any self-study suggestions? Are there any open source projects that could benefit from their help?)

Roll your own certs and create your own CA. Do it for the practice. I was on Ubuntu years ago when I was rolling my own, and I used the excellent help docs. One best security practice is to regularly generate and use new keys, instead of keeping the same key for years and years, for the same reasons that changing your password regularly for high-security sites is a good idea—and that’s true whether you’re creating your own certs and local CA or if you’re simply purchasing a certificate from a CA. As with so much else, rolling your own crypto means that YMMV, so if you’re thinking of doing so formally and for a company or project that holds critical or personal information, get a pro to assess it. Think of this like a hobbyist building cars or airplanes at home. Most may be fine with riding in their own homebrewed contraptions, but wouldn’t put a child in it. If you don’t have the time to be a PKI professional, don’t keep other people’s data safe with your home-brewed certificate authority.
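If you want a programmatic starting point for that practice CA, here is a rough sketch using the third-party `cryptography` package (my choice, not something Tarah prescribes); it generates a key pair and a self-signed root certificate you could then use to sign practice leaf certificates:

```
# Rough sketch of a practice root CA: generate a key pair and a self-signed
# CA certificate. For practice only; follow the library's and your distro's
# docs before trusting anything like this with real data.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"My Practice Root CA")])
cert = (x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)          # self-signed: subject == issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256()))

with open("practice-root.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("practice-root-key.pem", "wb") as f:
    f.write(key.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.TraditionalOpenSSL,
                              serialization.NoEncryption()))   # practice only: protect real keys
```

Note the short validity period: rotating keys regularly, as Tarah suggests, is easiest when issuance is something you can re-run on demand.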

Most of the time, security issues aren’t with the encryption itself, but with how it’s been implemented and what happens on the endpoints—not with the math, but with the people. Focus on keeping your keys safe, your networks segmented, and your passwords unique, and you’ll be ok!

*I would like to thank Ryan Sleevi for feedback, and especially for providing the Kerberos/PKI analogy for comparison. All errors are mine.

Perspective Two: Mohammed

1) Thank you for sharing your knowledge! When you reached out to me, you noted you had quite a unique perspective on PKI. Would you mind telling us a little about your background, and your expertise on the subject?

In my first information security job in the government of Kuwait, we had the opportunity to work on the country’s national PKI and Authentication project in its infancy, basically from the start, and together in a small team (5 at the time) we set out on a journey of ultra-accelerated and solid education, training and development for the country’s custom in-house solutions. Deciding that development of internal capability is far more useful, compliant with national security, and of course more fun, we began to develop our own tools and libraries for PKI, authentication, smartcards, and related technology. We produced our first version deployed to the public in 2010, much sooner than most (if not all) countries in the region, so it was for us a “throw them in the sea to learn swimming” type of experience. Developing certificate pinning in 2010 in C++ is not fun, but if there is one thing I learned, it’s this: chase the cutting edge, challenge yourself, and don’t belittle yourself or your background.

2) Would you please give your best, “500 words or less” explanation of what PKIs are and what they’re used for today (assume an audience with solid IT fundamentals)?

PKI (Public Key Infrastructure – ignore the name, it’s counterintuitive) is basically the set of technologies and standards/procedures that help you manage and utilize real-world cryptography.

PKI basically is a (major) field of applied cryptography.

If you ever took a cryptography course, while not being a total math nerd, and found out there’s lots of theory and math gibberish, then I can totally understand and sympathize. I personally believe math is one of the worst ways to get introduced to cryptography (just like grammar is a really bad way to start learning a new language). Cryptography should first be taught in an applied crypto fashion; then, as one understands the main concepts and fundamentals, math can be slowly introduced when needed (you probably don’t need to understand the Chinese Remainder Theorem to be able to use RSA!).

Ever visited an HTTPS website and wondered how you connected securely without having any shared keys to that website? That’s because of PKI.

Without asymmetric encryption, it would be impossible to create global-scale encrypted communication standards like SSL without presharing your keys with everyone in the world, and without PKI, managing global-scale asymmetric encryption deployments would be impossible at both the technical and management level.

So where is PKI in our world? Everywhere!

If you connected to HTTPS websites: PKI

Used Windows Update: PKI

Ran an application from a verified publisher: PKI

Email security? PKI

Connected through RDP or SSH? PKI

PKI encompasses technologies related to digital certificates, keys, encryption, signing, verification and procedures related to enrollment, registration, validation and other requirements that these technologies depend on.

Think of Let’s Encrypt. It’s now a Certificate Authority (entity that gives you certificates to put on your site and enable https/ssl/tls). To give you a certificate, they have certain procedures to check your identity and right to have a certificate issued to your domain name. This way anybody in the world can securely connect to your website without having to trust you personally through this delegated chain of trust.

For Let’s Encrypt to be trusted globally, proper application of PKI must be done, and must be verified by 3rd parties. If this trust is violated or abused through improper practices, compromise or negligence, you lose total or partial trust globally. DigiNotar went out of business after state actors compromised its CA and issued fake certificates for major websites, allowing them to intercept communications at a wide scale. Symantec used improper certificate issuance practices and is now scheduled for full distrust in browsers in September 2018 (they have already sold their PKI business to DigiCert).

The same idea applies to almost every popular software we run: It’s signed by trusted publishers to verify ownership. Software updates are, too.

Without PKI, you can’t boot your device with even a hint of security.

Fun exercise: Go check your device’s list of trusted Root Certificate Authorities (Root CAs: all-powerful entities having, at least in theory, the power to compromise most of your communications and systems if that power is abused and targeted against you). You’d be surprised to find entries for so many foreign government CAs (sometimes even China) already trusted by your device!
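If you would rather do that exercise from code than from the OS settings dialogs, Python’s standard library can show at least part of the picture (what gets listed depends on the platform and on how OpenSSL loads its trust store, so treat this as a rough sketch):

```
# Peek at the root CAs loaded by Python's default SSL context (a rough proxy
# for the OS trust store; coverage varies by platform and OpenSSL config,
# and CAs loaded via a capath directory may not appear in this list).
import ssl

ctx = ssl.create_default_context()          # loads the system's default CA certificates
for ca in ctx.get_ca_certs():
    subject = dict(part[0] for part in ca["subject"])
    print(subject.get("organizationName", "?"), "-", subject.get("commonName", "?"))
print(len(ctx.get_ca_certs()), "root CAs loaded")
```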

3) What are a couple really critical concepts we as infosec people should understand with regards to how a modern PKI functions?

There are many concepts to understand in PKI, but I’ll list the ones I think are most important based on the mistakes I’ve seen in the wild:

– Learn the importance of securing and not sharing private keys (real-world blunders: Superfish adware, VMWare VDP and Rapid7 Nexpose appliances) https://blog.rapid7.com/2017/05/17/rapid7-nexpose-virtual-appliance-duplicate-ssh-host-key-cve-2017-5242/

– Know the secure and insecure protocol/algorithm configurations (real-world blunders: Rapid7 CVE-2017-5243 SSH weak configs, Flame malware, the FREAK vulnerability using weak RSA_EXPORT configs – even the NSA.GOV website was vulnerable!) https://blog.cryptographyengineering.com/2015/03/03/attack-of-week-freak-or-factoring-nsa

– Don’t charge the bull; dance around it. Most PKI implementations can be attacked/bypassed not by trying to break the math involved but by abusing misplaced trust, wide-open policies, bad management and wrong assumptions. Real-world blunder: GoDaddy issued wrong certificates because they implemented a bad challenge-response method that was bypassed by 404 pages that reflected the requested URL – so GoDaddy’s tool thought the server error was a valid response to their random code challenge: https://www.infoworld.com/article/3157535/security/godaddy-revokes-nearly-9000-ssl-certificates-issued-without-proper-validation.html

4) What would you tell somebody in infosec who’s struggling to conceptualize how PKI works? (For example, does everybody in the field really need to “get it”? Why or why not? What other things could they study up on to grasp it better?)

Learn it in an applied fashion. No math. Take a look at your own setup. Check out the Digital Signature tab in any signed EXE that you have on your system. Open Wireshark and check out the SSL handshake, or wait till an OCSP request/response is made and check how it looks in Wireshark. Get a bit familiar with PKI tools such as openssl.
Or write a small program that connects over SSL to some SSL port, then write a small program that listens on an SSL interface. Use ready-made libraries at first.
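As a starting point for that exercise, here is a minimal TLS client using only Python’s standard library (the host name is just an example); it performs the handshake, verifies the chain against the trusted roots, and prints what PKI negotiated and vouched for:

```
# Minimal TLS client sketch: perform a handshake, then print what PKI gave us:
# the negotiated protocol/cipher and the server certificate's subject/issuer.
import socket
import ssl

host = "www.example.com"
ctx = ssl.create_default_context()          # verifies the chain against trusted root CAs
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("protocol:", tls.version())
        print("cipher:  ", tls.cipher())
        print("subject: ", dict(x[0] for x in cert["subject"]))
        print("issuer:  ", dict(x[0] for x in cert["issuer"]))
```

The listening side of the exercise needs a certificate and key pair, which the practice-CA sketch earlier (or plain openssl) can produce.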

5) What about somebody who has a solid grasp on the basics and wants to delve deeper? (Any self-study suggestions? Are there any open source projects that could benefit from their help?)

Check out the following topics/ideas:

– Certificate Transparency.

– OCSP stapling.

– Code signing.

– Check out The Update Framework (https://theupdateframework.github.io/) to learn how to implement secure software updates.

– Implementing client certificates for server-to-server communications.

– Hardware security modules (HSMs). YubiHSM is an affordable example of such hardware.

I believe understanding PKI is growing more important as we start automating more and more of our tools and workflows, and that using tools (such as certbot) is not a valid excuse to not learn the fundamentals.

Frida (Dawn Isabel and Jahmel [Jay] Harris)

Perspective One: Dawn

1) Thanks for taking the time to speak with us, Dawn. Would you mind telling us a little about yourself, and your expertise with Frida?

Thanks for the opportunity!  I’ve been in information security for around 12 years, and before that worked as a web application developer.  I currently work as a consultant, primarily testing web and mobile application security.  I’ve been using Frida for a little over a year, and most of my experience with it is on mobile platforms.  I regularly write scripts for Frida to automate testing tasks and to teach others about iOS internals.

2) Assume we work in infosec, but have never used Frida. How would you briefly explain the framework to us? Why is it useful for security professionals? (Assume an audience with solid IT fundamentals)

At a high level, Frida is a framework that enables you to inject your own code (JavaScript) into an application at runtime.  One of the simplest use cases for this is tracing or debugging – if you’ve ever sprinkled “print” statements in a program to debug it, you’ll immediately appreciate using Frida to inject logging into an application to see when and how functions and methods are called!  Security professionals will also use Frida to bypass security controls in an application – for instance, to make an iOS application think that a device is not jailbroken, or to force an application to accept an invalid SSL certificate.  On “jailed” platforms like stock iOS, Frida provides security professionals with a window into the application’s inner workings – you can interact with everything the application can, including the filesystem and memory.
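To make that concrete, here is a minimal sketch using Frida’s Python bindings to inject a small piece of JavaScript that logs calls to libc’s open(); the process name and the hooked export are placeholders to adapt to whatever you are exploring:

```
# Minimal Frida sketch (Python bindings + injected JavaScript): attach to a
# running process and log every call to libc's open(). The process name and
# the hooked export are placeholders - adapt them to your target.
import sys
import frida   # pip install frida (frida-tools adds the CLI and frida-trace)

JS = """
Interceptor.attach(Module.findExportByName(null, "open"), {
    onEnter: function (args) {
        // args[0] is the pathname argument passed to open()
        send("open(" + Memory.readUtf8String(args[0]) + ")");
    }
});
"""

def on_message(message, data):
    print(message)

session = frida.attach("target-process")      # or frida.get_usb_device().attach(...)
script = session.create_script(JS)
script.on("message", on_message)
script.load()
sys.stdin.read()                               # keep the hook alive while we watch
```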

3) What are a couple important things to know about Frida before we start using it?

I think the first thing to understand is that Frida is ultimately a framework for building tools.  Although it comes with several useful command-line tools for exploring applications (the Frida command-line interface (CLI) and frida-trace are both invaluable!), it isn’t a scanner or set-and-forget tool that will output a list of vulnerabilities.  If you are looking for a flexible, open-ended framework that will facilitate your runtime exploration, Frida might be for you! 

The second thing to keep in mind is that Frida is much more useful if you approach it with a specific goal, especially when you are starting out.  For instance, a good initial goal might be “figure out how the application interacts with the network”.  To use Frida to accomplish that goal, you would first need to do a little research around determining what libraries, classes, functions, and methods are involved in network communications in the application.  Once you have a list of those targets, you can use one of Frida’s tools (such as frida-trace) to get an idea of how they are invoked.  Because Frida is so flexible, the specifics of how you use it will vary greatly on the particular problem you are trying to solve.  Sometimes you’ll be able to rely on the provided command-line tools, and sometimes you’ll need to write your own scripts using Frida as a library.

4) What would you tell somebody in infosec who’s having trouble using Frida? (For example, what niches in security really need to “get it”? What other things could they study up on first to grasp it better?)

When I first started using Frida, I tried to jump right in writing scripts from scratch without having a clear idea of what I was trying to accomplish.  Figuring out all the moving parts at once ended up slowing me down, and felt overwhelming!  Based on those experiences, I usually recommend that people who are new to Frida get started by using frida-trace.  The neat thing about frida-trace is that it will generate stubs called “handlers” that print a simple log message when the functions and methods you specify are invoked.  These handlers are injected into the target process by frida-trace, which also handles details like receiving and formatting the log messages.  Editing the handlers is a great way to learn about Frida’s JavaScript API (https://www.frida.re/docs/javascript-api/) and gain visibility into specific areas of an application.  There is a nice walkthrough of the process of editing a handler script in the post “Hacking Android Apps With Frida I” (https://www.codemetrix.net/hacking-android-apps-with-frida-1/).

Once you are comfortable editing the handler code, experiment with creating your own self-contained script that can be loaded into a process using the Frida CLI.  Start by loading some examples that are compatible with your platform, and then try using those as a template to write your own.  There are many example scripts you can try on Frida Codeshare (https://codeshare.frida.re/) – copy the code to a file so you can easily edit it, and load it into the Frida CLI using the “-l” flag.  Initially, aim to gain proficiency using Frida to invoke native methods in the application.  Then practice using the Interceptor to attach to and replace functions.  Incidentally, if you started out by using frida-trace then using the Interceptor will be very familiar – just compare the contents of a handler script to the Interceptor.attach() example shown at https://www.frida.re/docs/javascript-api/#interceptor!
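
To give a feel for what such a self-contained script can look like, here’s a minimal sketch that both invokes a native function and replaces it – getpid() is only a stand-in target, chosen because it exists nearly everywhere:

    // Sketch of a small self-contained script you could load with the Frida CLI's -l flag.
    var getpidPtr = Module.findExportByName(null, 'getpid');

    // 1) Invoke a native function directly from JavaScript.
    var getpid = new NativeFunction(getpidPtr, 'int', []);
    console.log('real pid: ' + getpid());

    // 2) Replace it, so every caller inside the process now sees a fixed answer.
    Interceptor.replace(getpidPtr, new NativeCallback(function () {
        return 1337;
    }, 'int', []));

Save it under any name you like (say, pid.js – a hypothetical filename) and load it with the “-l” flag as described above.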

I don’t think you need to have a deep understanding of Frida’s internals to use it, but it is definitely helpful to understand the architecture at a high level.  Frida’s “Hacking” page has a nice diagram that lays out the different components (https://www.frida.re/docs/hacking/).  You’ll also want to know enough JavaScript that you don’t spend a lot of time struggling with syntax and basic programming primitives.  If you’ve never written in a scripting language, running through some JavaScript tutorials will make it easier to use Frida with the provided command-line tools.

5) What about somebody who has a solid grasp on the basics and wants to delve deeper? (Any self-study suggestions? Are there any open source projects that could benefit from their help?)

If you want to dive deeper, there are several directions you can go!  Since Frida is an open-source project, there are many ways to contribute depending on your interests.  There are also a lot of great tools built with Frida, many of which take contributions.  For any level of interest, I suggest checking out https://github.com/dweinstein/awesome-frida as a starting point.  You’ll find blog posts and demos showing some concrete examples of Frida’s functionality, as well as links to some of the projects that use it.

If you want to contribute to Frida, or build more complex tools that leverage it, I’d recommend gaining a greater understanding of how it works.  One good starting point is “Getting fun with Frida” (https://www.coresecurity.com/system/files/publications/2016/10/Getting%20fun%20with%20Frida-Ekoparty-21-10-2016.pdf), which introduces concepts in Dynamic Binary Instrumentation (DBI) and covers prior work.  The 2015 presentation “The Engineering Behind the Reverse Engineering” (slides and video at https://www.frida.re/docs/presentations/) is even more in-depth, and a good follow-up once you grasp the high-level concepts.

Perspective Two: Jay

1) Hi Jay! Thanks for taking the time to chat with us. Would you please tell us a little about yourself, and your expertise with Frida?

My name is Jahmel Harris but some people know me as Jay. I’m a freelance pentester in the UK (digitalinterruption.com) and run Manchester Grey Hats (https://twitter.com/mcrgreyhats) – a group where we put on free workshops, CTFs, etc. to help teach practical cyber security skills to our members. We live stream, so there’s no need to be in the UK to attend! Also, feel free to join our Slack (invite link on Twitter).

I started using Frida when performing mobile application testing and found it worked much better than Xposed, which I was using at the time. Although Xposed and Frida allow us to do similar things, Frida lets us work in a faster and more iterative way. A simple task that could take several hours in Xposed can be done in minutes with Frida. More recently, I’ve been using Frida in bug bounties, as many mobile apps go overlooked due to some (fairly easy to bypass) client-side security controls.

2) Assume we work in infosec but have never used Frida. How would you briefly explain the framework to us? Why is it useful for security professionals? (Assume an audience with solid IT fundamentals)

Frida allows us to inject JavaScript into a running application. Why is this useful? Well, it means we have the ability to change the behaviour of applications at runtime. By changing the behaviour of the application, we can add logging to help us understand the flow, remove security controls, or even dump secrets and keys. I find Frida helps take testing one step further, especially where mobile apps are concerned. We can test assumptions more easily, and change parts of the code without changing the signature. The other advantage is that as it becomes more difficult to jailbreak some devices, Frida can still allow us to perform a thorough test.
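
To make “changing the behaviour of the application” concrete, here’s the general shape of an Android hook. Java.perform() and Java.use() are Frida’s Java bridge APIs, but the class and method names below are invented for illustration – you’d find the real ones by reversing the app:

    // Sketch: force a hypothetical root-detection check to return false on Android.
    Java.perform(function () {
        var Checks = Java.use('com.example.app.SecurityChecks');  // made-up class name
        Checks.isDeviceRooted.implementation = function () {
            console.log('isDeviceRooted() called - returning false');
            return false;
        };
    });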

3) What are a couple important things to know about Frida before we start using it?

Frida is a great framework, but there are a few things I remind people about:

  1. It is not very mature so you *will* discover bugs. Ole André V. Ravnås (the creator of Frida) is very friendly though and helps where he can so don’t be afraid to reach out to him.
  2. It’s not only for mobile application testing. For some reason I tend to only see Frida being used for Android and iOS application testing. It supports Windows and Linux, so it can be used for instrumenting desktop applications too!
  3. Frida is bundled with a few tools such as frida-trace. This is where I start when trying to RE an application. Frida-trace will log functions that are called as well as generate the JavaScript handlers. This makes it super easy to start guessing interesting function names and tracing on them. As an example, if we’re looking at an IRC client, we can put traces on *send* or *irc* and we’re likely to get something interesting. Using Frida it’s then easy to start changing the parameters to these functions, or even their behaviour, *all at runtime without restarting the application!* (There’s a small sketch of that kind of tampering just after this list.)
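
Here’s that small sketch of tampering with a parameter in flight. Targeting send() in an IRC client is just an example scenario; the APIs used (Interceptor.attach, Memory.allocUtf8String, reassigning args[]) are standard Frida:

    // Sketch: rewrite the buffer an application is about to send, at runtime.
    Interceptor.attach(Module.findExportByName(null, 'send'), {
        onEnter: function (args) {
            var msg = 'PRIVMSG #test :hello from Frida\r\n';
            var buf = Memory.allocUtf8String(msg);
            this.keepAlive = buf;       // hold a reference so the allocation isn't collected
            args[1] = buf;              // swap the outgoing buffer pointer
            args[2] = ptr(msg.length);  // adjust the length argument to match
        }
    });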

4) What would you tell somebody in infosec who’s having trouble using Frida? (For example, what niches in security really need to “get it”? What other things could they study up on first to grasp it better?)

Frida can really help mobile application testers go beyond the basics of app tests. Frida is also invaluable because it allows us to perform a lot of useful tests from non-rooted and non-jailbroken devices, which is something we struggle with more with each new release of iOS. It’s important to understand, though, that Frida isn’t an exploitation framework. We still need to know what we’re looking for in an application, or which controls we’re trying to disable. As an example, when doing a mobile application test, I might discover the application uses certificate pinning. To bypass this using Frida, I will need to reverse the application and figure out the certificate pinning logic before writing a Frida hook to bypass it, which of course requires some basic coding knowledge.
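
For a feel of what such a hook looks like once that reversing is done, here’s the general shape of a pinning bypass for an Android app that happens to use OkHttp3 – an assumption made for the example; if the pinning logic lives elsewhere, this particular hook does nothing:

    // Sketch: neutralise OkHttp3 certificate pinning on Android (assumes the app bundles okhttp3).
    Java.perform(function () {
        var CertificatePinner = Java.use('okhttp3.CertificatePinner');
        CertificatePinner.check.overload('java.lang.String', 'java.util.List').implementation =
            function (hostname, peerCertificates) {
                console.log('CertificatePinner.check() bypassed for ' + hostname);
                // simply return without throwing, so any certificate chain is accepted
            };
    });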

5) What about somebody who has a solid grasp on the basics and wants to delve deeper? (Any self-study suggestions? Are there any open source projects that could benefit from their help?)

As Frida is a framework and not an application per se, anyone using Frida who wants to help should work on higher-level tooling built on top of it – for example, more general-purpose certificate pinning bypass tools or fuzzing tools. The code for Frida is very well written, so it’s easy to understand how Frida works and to contribute bug fixes. As you find bugs or missing functionality in Frida, raise bug reports, as the same issue will likely be faced by many other people.

The Infosec Introvert Travel Blog

So, you’ve finally landed that infosec job of your dreams! The clouds have parted and angels have descended from the sky singing Aphex Twin.

Congratulations, I believed in you all along.

One small problem: they say you’re going to have to travel. Maybe to a customer site. Maybe to training. It doesn’t matter. You’re an introvert and haven’t traveled much, and you’re starting to panic.

Don’t worry – I’m here for you, friend! Let’s go over some basic travel tips for introverted infosec people.


Learn How and What to Pack

There are hundreds of great blogs on packing for travel you can seek out, so I’ll keep these tips fairly brief:

  • A decent suitcase is a really important investment. Cheap suitcases without proper roller wheels are frustrating to lug across airports and will break at incredibly inopportune times. I recommend that every traveler have one decent quality carry-on suitcase and one decent quality backpack or shoulder bag with a laptop pouch, at a minimum. The last thing you need is a strap, zipper, or wheel snapping in the middle of the airport. I see no particular advantage to either soft-side or hard-side bags – the most important thing to me in a carry-on is a lightweight, sturdy bag that will fit in regional jet overhead bins even when full.
  • Learn to neatly and tightly fold or roll your clothes. Clean ones, and dirty ones upon your return. Packing cubes are a huge help with this. I personally like these ones. Some people prefer compression bags, but I’ve found them a lot more frustrating to use on the return trip, and they don’t last as long.
  • Choose clothes that don’t easily wrinkle, and stick to a common color scheme. The more pieces of clothing you can mix, match, reuse for a couple days, and layer, the easier your life will be on your trip.
  • Shoes and boots are some of the bulkiest and heaviest things you can pack, so choose a versatile pair of dress shoes and bring as few pairs as possible.
  • Pack a small towel.
  • When flying, always pack essential travel-size toiletries and one change of clothes (underwear and socks at a minimum) in your carry-on bag. Luggage does get lost, and flights get delayed (sometimes overnight).
  • On the same note, always have medication, contact lenses, underwear, and socks for one more day than you plan to travel.
  • Always carry travel-size ibuprofen, Benadryl, and an antacid. Those are a few small things you do not want to have to take a walk for in a strange city when you really need them.
  • Consider your personal daily usage of toiletry items. A million bloggers will tell you a million different things about how much soap to pack. For the most part, travel-size items will last you 3-4 days. For longer trips, you’ll probably need more. However, if you have long hair like I do, you might need more than a 3oz / 100ml bottle of conditioner for even a three day trip. This is something you’ll learn with practice.
  • If you run out of your travel-size toiletry items, buying toiletries at your destination is usually by far the most economical option, particularly when flying. There are convenience stores or pharmacies almost anywhere. However, expensive cosmetics or skincare products are definitely an exception and may motivate you to pay $25 each way to check a suitcase. Your call.
  • One final note about toiletries and flying – learn what the TSA and similar international agencies consider a “liquid” and a “gel”. There are lots of alternative toiletries like face wipes and solid deodorants that are not controlled by liquid restrictions that can give you a bit more wiggle room.
  • Have two phone chargers – one in your suitcase or car, and one in your carry-on or laptop bag.
  • If traveling to a different country, ensure you have the correct power adapters or plugs for your electronics. Bring a power converter if necessary, but they’re bulky and becoming irrelevant. Most laptops and phones made in the last 10 years can handle either 110v or 220v AC, so all you’ll need to replace is the plug, not the power brick. Check yours and make sure.
    TIP: MacBook wall plugs slide off the power brick and are trivial to swap at will!
  • Plan for a catastrophic laptop crash, with either a USB drive or a recovery partition.

Have a Passport

They last a decade and aren’t super-expensive, but they take quite a while to arrive unless you pay for them to be expedited. Every infosec person should have one for last minute work or conference travel. Pat notes that it’s a great idea to pay for a passport card as well, as secondary emergency ID, and for the smaller form factor.

Learn How To Fly

It’s okay if you’ve never flown on a plane before. Lots of great infosec people hadn’t before they got their first job.

Read up a bit on air travel regulations before getting on your first flight. Prepare to go through airport security. For instance, read up on liquid and gel restrictions, and keep this bag easily retrievable in your carry-on. Be prepared to take your laptop out quickly in the security line. In most places, security also requires removing belts, jewelry, wallets, and shoes, then placing them in a bin.

US Residents – ensure your State ID or Driver’s License is still adequate to use at the airport. Some states’ IDs soon will not be, and you may need to purchase an enhanced ID or use a federal ID such as a passport or military ID card.

Domestically, check into your flight at least an hour prior to boarding time (not departure time) – longer if you intend to check a bag. (If you’re running late, checking in on your phone can sometimes get you on the plane after check-in closes at the airport.) International travel has a significantly longer lead time – check the airport’s website for details.

Check the gate on your boarding pass, find it, and verify it has not changed before going off for a washroom break or a coffee. Airports all over the world are full of signs and maps to help you. Make sure you’re back at the gate before boarding time. (Once again, this is not the same as departure time.)

Most economy-class domestic flights in the US no longer serve any meal, and some may not even serve drinks. Others offer packaged food at a pretty exorbitant cost. I recommend you grab a sandwich and a drink in the airport after you find your gate. In my experience, most other countries’ carriers still serve a light snack – your ticket will usually indicate this. International flights will usually serve at least one meal, but you might not get any choice of what it is (allergen free, vegetarian, etc).

A bit about boarding groups – you and I will probably never be in the oft-fabled Boarding Group 1. That tends to be pay-to-play, or extremely frequent travelers, or business class. If you’re in a higher boarding group (3-5 on most airlines), the overhead bins may fill up, and you’ll be required to check your carry-on bag for free at the gate. Ensure your important documents, electronics, and medications are transferred to your person if this is required.

On the plane, follow all posted safety instructions and stay seated with your seatbelt fastened unless you go to the lavatory. Be polite to the crew and don’t be afraid to ask questions.

What I normally have on my person or under the seat (not in the overhead bin) on your average flight:

  • Phone in airplane mode
  • Headphones (most commercial aircraft now support standard ones)
  • Wallet
  • Earplugs
  • Sandwich (on domestic flights)
  • Water bottle
  • Book
  • Travel neck pillow
  • Pen (especially if I have to fill out international customs forms)
  • Melatonin (on international flights) – (please note different sleep aids are OTC-authorized in different countries; plan accordingly).
  • Vicks Vapor Inhaler or equivalent (no, it’s not a vape – it helps with the dry air.)

Congratulations, you’re now an airport pro.

Safety and Security

Once again, we’ve reached a topic on which there have been many great blogs and articles already written (I particularly love Stephen Northcutt‘s – he’s definitely had some adventures!)

A few small fundamentals:

  • Be aware of the threats you will face as an individual and as an information security employee of your company in the place you’re going, before you arrive.
  • Consider bringing loaner / disposable electronic devices. At the very least, update and encrypt your devices. (They should be already, but this becomes absolutely critical during travel.)
  • Do not carry large sums of cash on your person, and don’t carry all your money in one place. Consider a discreet money belt or anti-theft bag.
  • Ensure the locks, peephole, phone, and safe in your hotel room work properly and ask to change rooms immediately if they do not.
  • Never let a stranger into your hotel room.
  • Pay attention to your surroundings. It’s very easy in a strange city to get distracted by the sights or your map. Tourist areas all over the world often have heavy pickpocket activity and crazy traffic.
  • Consider sightseeing with a buddy, but don’t let eating or sightseeing alone stop you from getting out. (Just make sure somebody knows where you are.)
  • Don’t make yourself a target! Don’t wear clothing that identifies your point of origin or that you are a tourist (language, flags, distinct regional clothing styles, etc). Dress like a local whenever possible. Keep the camera in the bag until you’re ready to use it.
  • Addendum, AMERICANS: Yes, us! We stand out. We tend to be significantly louder and less professionally dressed than locals, especially in Europe. Please, just don’t.
  • If you’re leaving your country, understand what access foreign internet service providers and customs agents may have to your personal and work devices.
  • Evaluate your personal threat model and make an informed risk decision about what devices and data to bring with you, and how you plan to connect to the internet and authenticate to your accounts while traveling (private VPN? Yubikey?)
  • One contributor notes that when progressing through security, Immigration, or Customs, it’s never particularly wise to introduce yourself as a “computer hacker”. “IT” or “computer security” is quite sufficient unless pressed for specifics. “Hacking” carries various legal and social connotations around the world.

We as information security professionals tend to be highly (and often reasonably) paranoid about our personal security, so I will simply leave you with a reminder that not everyone is in fact out to get you. While you should always make sensible and informed risk decisions about your security, you should not let them entirely prevent you from exploring a new place.

Before You Leave Your Country

For US Residents:

  • Check the State Department Website for travel safety information on the country you will be visiting: https://travel.state.gov/content/passports/en/country.html
  • Check the CDC website for information on vaccinations you require prior to travel: https://wwwnc.cdc.gov/travel/destinations/list/
    TIP: Doctor on Demand can provide you a cheap and easy vaccine referral via your phone or tablet when walk-in clinic nurse practitioners cannot.
  • Consider enrolling in the US State Department STEP program.
  • One contributor comments that the TSA PreCheck and Global Entry programs are a huge benefit for frequent air travelers, especially travelers in a professional group. Those programs do come with significant background checks and biometric disclosure, so while I personally find them extremely time-saving, you will need to make your own privacy decision.

For Everyone:

  • Contact your personal and/or work mobile phone provider for information on international voice and data plans for the duration of your travel. If you do not purchase international data service, disable cellular data for the duration of the trip or you may unwittingly face extremely steep fees. T-Mobile One is my favorite pick for frequent international travelers from the US, as it provides free 2G data service in dozens of countries with no plan modification or additional fees. Another contributor prefers GoogleFi for the faster global 3G speeds, but their plans contain a firm data cap and overage charges if you plan to tether. If your phone is unlocked, you can also consider buying a SIM card at your destination if you need to do a lot of local calling.
  • Consider purchasing a travel health insurance policy, particularly if you’re traveling somewhere without universal health coverage for non-residents, or if you might be participating in high risk activities. Do get your shots in advance.
  • Choose a chip-enabled credit card that is preferably not your primary bill auto-payment method to bring on your travel, and contact the provider in advance to inform them you will be traveling abroad. (Another contributor adds a great reminder that some credit cards carry not-insignificant international transaction fees – ensure you check this with your bank.)
  • Read up a little on your destination. Understand the general geography, weather, economy, customs and courtesies (like tipping), criminal statistics, food and water safety, corruption, and political climate. Learn the current exchange rate to your country’s currency. Learning a couple of phrases in the local language (particularly courtesies and greetings) is usually appreciated by locals.
  • Make a copy of your important travel documents to lock in your room safe for the duration of your trip, in case of a lost or stolen wallet.

Have a Good Attitude

So you’re going to training in Springfield, population 700, with nothing but cornfields for miles in every direction. Or maybe you’re going to a country you never wanted to visit and you don’t speak the language. Everything’s terrible, right?

Let me let you in on a secret: I have never in my life traveled anywhere I didn’t like something about! In the most remote, Midwestern town I’ve ever traveled to, I found an amazing Amish market with the best sandwich I’ve ever eaten! I had amazing traditional Central American chocolate and an incredible boat ride through the glaciers in Anchorage. I saw adorable meerkats at a private zoo in Germany. These are the things you will remember in 10 years. You will not remember the hotel room – they start to blend together.

It’s important to remember that people are complicated individuals with lives and hobbies, wherever you go. Life might be much faster paced or much slower paced than what you’re used to, but people still eat, have families, and find recreation. If you keep your spirits up and ask around, you’ll find something cool to do anywhere you’re sent.

Packing the Game Console?

I love gaming too, but try to leave the PS4 at home if at all possible on your first trip to a new place. Give the place a chance. If you still hate it after 3 days, I’ll give you a pass on watching cable and playing smartphone games.

Plan Outside Business Hours

Traveling for business is a very different experience than traveling for pleasure. Significantly – packing requirements will be different, and your schedule will be different. This shouldn’t be an excuse for you to stay in your hotel room. Particularly in large cities, there are plenty of sights to see after business hours. While museums may frequently be closed after 5PM, outdoor sights will likely remain open much later – and be less crowded! Many attractions and tour companies offer passes and tickets at discounted rates in the evenings. There are also musical and theatrical events, even on weeknights.

Tripadvisor and Viator are great resources for finding interesting things to do prior to your travel. Keep in mind that lots of smaller attractions have active Facebook pages where you can seek additional information from locals or employees. I like to take some notes with operating hours, locations, and prices to bring with me.

Ask a Local, and Keep an Open Mind

Don’t be afraid to ask colleagues, employees, or the hotel concierge for recommendations of local stuff to do or places to eat. People usually love talking about their favorite things! Even if what they suggest isn’t normally your cup of tea, consider giving their recommendations a shot (with reasonable health, security, and safety considerations).

The absolute worst that is likely to happen in 99.5% of cases is you’ll be stuck ordering the plain tomato soup, or you’ll be bored and bemused for a few hours. Conversely, you might have a great time, and discover a new favorite food. Either way, you’ve had a new life experience and you’ve grown as an individual.

Be The Travel Agent

Traveling with a group can be tough – even deciding where to eat can take a while if everybody is polite and introverted. Don’t be afraid to make yourself the travel agent for a day. Once you’ve identified something cool to see or a great place to eat, do a little research and suggest it to your traveling companions, and you’ll probably be surprised how many people were just waiting for somebody else to take the initiative. If you can tell them how you’ll get there and what the entry fees and hours are, all the better!

Have An Escape Plan

It’s important for any introverted traveler to plan reliable places to recombobulate – places that exist in almost any unfamiliar city and are similar wherever you go. Two reasons:

1) When something goes wrong (hotel room not ready, plane delayed, etc), this will give you a place to spend an hour or two and rethink your plans, and

2) When you get fed up with being around the same coworkers or customers, it will provide you with something to do alone.

These places are unique to you and I can’t tell you exactly what yours are going to be. In general, they should:

  • Be open across a broad range of hours.
  • Have a place to sit with free WiFi.
  • Be safely and easily accessible by ride-share, walking, or taxi – even if your phone’s dead.
  • Have reasonably clean public washrooms.
  • Be reasonably secure.
  • Allow you to stay for an hour or two.
  • Have friendly employees or patrons who can give you directions or assistance.
  • Provide you something to do, even if it’s just reading a map without disruption.
  • Outlets are a plus.

My personal choices are shopping malls and yoga studios. They exist pretty ubiquitously and it’s easy for a stranger to patronize them without a lot of discussion. They provide me with familiar surroundings and some peace and quiet to think about my next move. Any rideshare driver knows where one is. Some other suggestions that exist in nearly any medium to large town might be:

  • Gyms with drop-in rates.
  • Libraries
  • Coffee shops

Bars are great, but I don’t recommend them for this specific purpose.

Whatever you choose, make sure you have those factors in the back of your mind, and even consider looking up where your choices are on a map before you travel. You’ll have a fallback plan when something goes wrong (or you just need some time to yourself). Don’t spend all of your time there, but use them as needed to recharge.

3-2-1

No amount of Vitamin C in a pouch alone will reliably keep you from getting sick! The facts are simple – you will likely be in a confined space with a few sick people during any flight, class, or conference. The #1 best way to prevent con plague is adequate sleep, healthy meals, and washing your hands regularly with soap and warm water. Bring hand sanitizer, but don’t rely on it exclusively. Try to drink plenty of water and juice, and moderate the coffee and alcohol.

No Problem is Insurmountable

Everybody makes mistakes while traveling. I’ve been in 7 countries this year and have a go bag, and I still occasionally forget to pack basic stuff. Things are going to go wrong. You’re going to forget something important like deodorant or medication, or it’s going to rain your entire trip, or your luggage is going to get lost. Maybe your wallet will get stolen or misplaced.

Do your best to plan sensibly, but realize plans will sometimes go awry. There are very few places you will travel for an information security job where even these problems will be insurmountable or deadly. There are convenience stores, pharmacies, and Western Unions all over the world. Clothes can be replaced. Replacement credit cards can be overnighted to your hotel. Toiletries can be replaced. Cables and adapters can be same-day delivered by Amazon. Even money, passports, and mobile phones can be replaced within a day in most places. Consider it a learning experience.

The first thing you must do when something goes massively awry is take a deep breath and think. The second thing you should do is contact the authorities if a crime has been committed. This may be local police, or your country’s consulate, or both. Your employer’s loss prevention, physical security, or travel team will probably be able to assist you with next steps. Your hotel can also provide assistance in many situations you might feel are impossible crises.

You can do this! Keep calm and carry on!

Ask Lesley InfoSec Advice Column: 2017-03-16

This week, I address some burning questions about education and training.  As always, submit your problems here!


Dear Lesley,

Let’s cut to the chase. I hate coding. I don’t enjoy building things from scratch. I do, however, love taking things apart, and would probably be able to learn to code if I started in that direction.

I currently work as a Linux sysadmin in the web industry, with a couple certs (and 4 years) under my belt so far. I love infosec and want to move in that direction, but I have no idea where to start, given my utter distaste for traditional methods of teaching coding.
Do I just… download some arbitrary code and take it apart? That seems like a horribly insecure idea, but I’m just not sure where to start. I also tend to have serious issues with confidence in everything, especially tech. Please help!

– Flustered and floundering

Dear Flustered,

I don’t like coding, either. It’s actually not uncommon in infosec – we tend to like rapidly changing environments instead of the routine patience involved in coding. I’ve spoken to many ex-programmers and ex-CS students who agreed.

I see two routes you can go if you think anything like me:

  1. The scripting route: Many, many blue team and red team tools are Python and Ruby based, and many of them are extensible by design. Pick offensive or defensive security, then choose a tool set in one of these common languages that interests you. (For me, it was the Volatility framework). Take apart a few existing scripts and see how they function in real life. Then pick some interesting feature to add in your own script. This won’t necessarily teach you how to write a stellar production application, but for most security roles scripting is what you need.
  2. The reversing route. If analyzing malware piques your interest, that’s a great way to learn how software works all the way down to the assembly level. The intrigue can be a great motivator to learn. Definitely don’t pick today’s commodity malware to analyze – it’s purposely hard to reverse! Start with a book like Practical Malware Analysis or Malware Analyst’s Cookbook that has detailed, step-by-step tutorials from the very basics. Learning how to take something apart can be a great way to learn how to put it together, and you’ll definitely figure out which fundamentals you need to brush up on along the way.

Dear Lesley,

Looking into the future… what would you guess would be the safest career path/area to focus on now in security, considering the growth in available off-the-shelf tools to get the job done? Would penetration testers still be needed, for example, in 10-20 years’ time?

–  Spinner.


Dear Spinner,

First off, no guarantees – I’m not clairvoyant. There definitely is something of an infosec bubble as more people enter degree programs. However, there’s a caveat – being a great hacker is a personality trait, not a skill that can be taught academically. If you’re innovative and adaptable, I sincerely doubt you’ll have trouble finding work in that time frame.

In terms of automation, some tasks automate better than others. Unfortunately, the one that automates the best is the entry level security analyst gig. Merely passing the Security+ and being able to read and route SIEM events may not cut it in a couple years. You’ll need creativity and a broader skill set. More advanced defensive and offensive roles will require human attention for the foreseeable future because attackers innovate constantly. While a magic black box may pick up a new zero day, remediating and understanding the impact and additional factors is more complicated.

Security engineering continues to become more automated. The need for people to simply maintain static blocklists, signatures, or firewall rule sets will continue to decrease. Those jobs are trending towards more advanced SIEM and log aggregation management.

The jobs I see in the most demand with the least supply right now are malware reversing at an assembly level, threat intelligence with an actual political science or foreign studies background, and higher level exploit research (coupled with good business and communication skills).


Dear Lesley,

How does one begin exploring the world of sec without coming off as a script kiddie or just wanting to be an “edgy hacker”?     

– Careful but eager beaver


Dear Careful but Eager,

I’m really sad you feel that you have to ask that question, because merely asking it means you probably aren’t the type you’re concerned about. How do you know if you’re skidding it up? You enter commands into a hacking tool with no idea what they are doing, and much more importantly, no interest in knowing what they are doing. Being a good hacker has nothing to do with pwning stuff. It has to do with understanding how lots of stuff works and being able to manipulate that to your advantage.  (I should put that at the top of my blog in huge red letters!)

Imagine you’re a secret agent, needing to break into a vault. You can take one other person with you. Person 1 is another agent who has read a few books on how the vault works. Person 2 is the engineer who has been installing and maintaining the vaults for 30 years and has agreed to help you. Who do you pick? I’d pick the second person, who knows the system inside and out. I can teach her to sneak around a little and how to wear a disguise. Person 1 doesn’t know the foibles of the vault and only knows how to attack it the way the books said.

To summarize, your skid check is how many commands you enter in Kali or SIFT or whatever without bothering to figure out what the heck you are doing. When you’re learning, the goal is understanding what you’re doing, not getting a shell.

You shouldn’t care what you come off as. If you’re genuinely interested in learning, plenty of hackers will be willing to help you.


Dear Lesley,

(tl;dr at the very last line)

I am a novice who is looking to break into the field of security. Currently, I have received an offer to read a book (The Web Application Hacker’s Handbook) and participate in an assessment to show if I can perform the work necessary to do the job. Essentially, the assessment (from what I’ve gathered) is to assess the security of a vulnerable web application and then reverse a protocol.

Coming from a mathematics background with limited formal education in computer science and no formal education in networking, the book is hard to digest. I have set up pen test labs such as DVWA and WebGoat, which I am practicing with, and I have made surprisingly good progress in these labs. I’ve also learned a little bit about networking through much trial and error in setting these labs up in safe environments!

However, I fear that even if I pass the assessment, I will not be offered a position due to my lack of networking knowledge. I am aware of certifications such as OSCP and Security+ to bolster my background, but they suggest a solid understanding of networking before enrollment in the courses or studying for the examinations.

Do you have any recommendations on books/courses/certifications that would take an individual from zero-knowledge of networking to the suggested level of networking knowledge for these kinds of security certifications?

– Not a smart man


Dear Smart Man (I refuse, because it’s untrue!),

It really sounds like you’re doing everything right. You have correctly recognized that solid TCP/IP knowledge is really important in security. The lab is fab. But you can do other things in that lab – like take a step back from the security tools and concentrate on the networking ones. How long have you spent in Wireshark, just observing and filtering through network traffic? Simply watching what’s going on and identifying common ports and protocols can be huge. What does opening a website look like, and why? What does a ping look like? What does it look like when a new computer is connected to the network?

Certs (and associated books)… There are a lot of options in network land. Network+ is okay for fundamentals and really cheap (although an inch deep and a mile wide). WCNA is the Wireshark-specific cert, but by nature it teaches a pretty in-depth level of knowledge of reading packets. It’s also quite affordable. If you have 600 bucks and free time, I’d do both (in that order) and blow those folks out of the water with your resume. If you don’t have those resources, the associated study materials are a great place to start.

There are endless good books and blogs on TCP/IP out there that will get you started and give you an understanding of the OSI model and common ports and protocols. Hands-on experience in your lab or on your home network is much more important.

Ask Lesley InfoSec Advice Column: 2017-02-26

This week, we discuss red team and blue team self-study, getting kids interested in security, and security paranoia. As always, submit your problems here!


Dear Lesley,
I am a threat intelligence analyst who is currently underutilized in my job, and I feel like my skills and tradecraft are slipping because of it. I want to give myself some fun projects to work on in my off-time but am not really sure where to start. What types of things would you recommend?
-M

Dear M,
You’re certainly in a great field to want to work in, in 2017. Not only do you have the whole pantheon of nation state actors conducting cyber operations to study, but also a huge range of commodity malware, botnets, insider threats, malware authors, and dark web markets.  If you’re not feeling inspired by anything in that list, perhaps reach out on intel-sharing lists or social media to see if an existing project could use your skill set? Lots of folks are doing non-profit threat research work and need extra hands.


Dear Lesley,
If you do not have the budget to send people to SANS or to conferences, what free supplemental resources would provide fundamental training for someone studying DFIR?
-Curriculum Writer

Dear Curriculum Writer,
I can totally appreciate not being able to send somebody to a thousand dollar (or more) commercial conference or training program. However, most BSides conferences are free (or under 20 dollars). I suppose if you are totally geographically isolated and there is no BSides in any city in driving distance, those may be impossible, but I would definitely explore the conference scene in detail before writing them off. Sending somebody to a BSides or a regional conference for the cost of gas and a few bucks provides a lot of value for the money.

Otherwise, a DFIR lab will be your best friend for self study. Unfortunately, I can’t guarantee a home lab will be totally free to implement. Let’s talk about some fundamental requirements:

– One or more test hosts running assorted operating systems.
– An examiner system running Linux
– An examiner system running Windows (recommended)
– Intermediate networking
– Free (or free for non-corporate use) forensics and malware analysis tools.
– A disk forensics suite
– A memory forensics suite
– A write blocker, associated cables, and drives.

An ideal comprehensive DFIR lab, where money is no object, might look something like:

– A host PC with 16GB (or more) RAM.
– VMWare Workstation
– Ubuntu (free), Windows 7, 10, and Server 2008 VMs
– A SANS Sift Kit examiner VM (free)
– A REMnux Kit examiner VM (free)
– A Cuckoo Sandbox VM (free)
– A Server 2k8 examiner VM
– An EnCase or FTK forensics suite license
– A write blocker, associated cables, and a number of hard drives.

But, we can do it more cheaply, sacrificing convenience. We can virtualize with VirtualBox (losing the ability to take non-linear, branching snapshots), or on bare metal machines we scrounge from auctions or second-hand stores (the least optimal solution). This can work, but every time we infect or corrupt a machine, we’ll have to spend time restoring the computers to the correct condition. We can stick with analyzing Windows versions that are out of support, but we won’t be totally up to date.

One of the most difficult things for people studying the “DF” side of DFIR is the inability to get expensive licenses for industry-standard corporate forensics suites. There’s really no great solution for this. There are limited demo versions of this software that come with some forensics textbooks. SANS Sift Kit does include The Sleuth Kit, an open source suite which performs some similar functions.

Physical forensic toolkits aren’t cheap, but aren’t in the same ludicrous territory as forensics software. You can pick up an older used Tableau forensic bridge for about 150 dollars on eBay. Perhaps if you network within your local security meetup, somebody will be able to lend you one, as many college and training courses provide them.

Once we have something resembling a lab, we can follow along with tutorials on SecurityTube and on blogs, in forensics and malware reversing textbooks, in open courseware, and exploring on our own.


Dear Lesley,
I have a daughter whom I would like to encourage to go into IT, and possibly security if she’s interested. I know your father was influential in you getting into security. Do you have any suggestions for me, as a dad, on things I can do to encourage my daughter to become interested in IT and security?
-Crypto Dad

Hi Crypto Dad,

Yep, both of my parents had a big influence on my career! A hard question to answer, but an important aspect was not pushing me hard towards or away from hobbies. I was treated like a small adult and provided the opportunity to follow along with whatever my dad was doing in his shop, and even at a very young age he answered my questions without patronizing me or getting frustrated. He didn’t dumb things down; he just started at the beginning. I always had access to stuff to learn how it worked and how it was made. By the time I found out I ‘wasn’t supposed to’ know or like things, I already knew and liked them.


Dear Lesley,
I’m a penetration tester who seems to be falling behind the times. My methods aren’t efficient. Recently I discovered there are better ways of doing things than my three-year-old SANS curriculum taught me. How can I stay current without becoming a lonely crazy old cat lady?
-Just a crazy cat lady

Hi Crazy Cat Lady,
You’re ahead of many folks by realizing there’s a problem. I see a lot of infosec people let their skills stagnate for many years after training or college, and our field changes really fast. No quick fix, but here are some suggestions:

– Participate in CTFs. Ignore the scoreboard and the dudebros and “rock stars”. Just compete against yourself, but do it genuinely and learn from your mistakes.
– Jump over to the blue team side for a bit and read some really thorough incident and threat reports from the past couple years. Sometimes seeing what other people are doing will give you interesting ideas of avenues to research.
– If you’re still reaching for Kali, escape its clutches. Kali is an amazing VM, but it will only take you so far and lacks some newer tools. It can also discourage thinking “out of the box” about how to compromise a network. After all, it is a box.
– Get out to cons to watch red team talks. Watch recent ones on YouTube, too. See what other folks are up to. Your cats will be okay for a couple days, and you’ll make new friends.
– PowerShell Empire. 💖💖💖
– Don’t be embarrassed to make mistakes and ask questions.
– Don’t be embarrassed to make mistakes and ask questions.
– Don’t be embarrassed to make mistakes and ask questions.


Dear Lesley,
How do you deal with any overbearing paranoia from being in InfoSec? Example: I want my home network to be as secure as, if not more secure than, my work network… How can I explain my paranoia regarding outside threats (however unlikely), and how can I cope with it 🙂
-Too Paranoid to enter my name

Hi Paranoid,

Fear is healthy in small doses. Fear keeps us alert to potential threats, and helps us survive dangerous situations. However, constant fear is not helpful and is patently unhealthy. If you see illusory threats in every dark corner, you won’t notice when a real one is there, and you’ll be too tired to respond properly to it.

You need to approach this as analytically as you can. Let’s talk about measuring real risk.

– Evaluate your assets. What would somebody genuinely target you for? This isn’t necessarily items or information, but could also include your job position or connections.
– Evaluate real threats to you. Who rationally has motive to “get you”, and do they have the means and the opportunity to?
– Evaluate your vulnerability. How could somebody attack you or your assets, and how much effort and resource would it take to do it? How well do you mitigate vulnerabilities? Are you a harder target than others facing similar threats?

Risk is a direct result of the level of threat against you and your assets, and your vulnerabilities. It’s impossible to change the level of threat. All one can do to change risk is change assets, or change vulnerabilities.

People make personal decisions about acceptable risk. A firefighter lives with a different level of risk than a librarian. The firefighter likely has to deal with occasional moments of quite rational fear and adrenaline (due to actual threats and vulnerability), but does not live in constant fear of burning buildings. The librarian might consider running into burning buildings an unacceptable level of risk, which is why he found a less risky profession. However, both people live comfortably with their overall risk and their mitigations, and not in irrational fear.

With all this in mind, consider the things that you’re paranoid about carefully. What is the real level of risk each poses? What level of real risk will you choose to accept on a daily basis? If your overall level of risk is actually too high to cope with on a daily basis, reduce your targeted assets, or reduce your vulnerabilities. If you find your level of risk acceptable, then maintain that level rationally and try not to be unduly afraid. You likely have more to fear from chronic health problems than nameless threats.

Ask Lesley InfoSec Advice Column: 2017-01-30

Thanks for another wonderful week of submissions to my “Ask Lesley” advice form. Today, we’ll discuss digital forensics methodology, security awareness, career paths, and hostile workplaces.


Dear Lesley,

I’m a recent female college graduate who didn’t study computer science but is working in technical support at a software company. The more I learn about infosec, the more curious and interested I get about whether this is the field for me. What resources/videos/courses/ANYTHING do you recommend for people who want to make a serious stab at learning infosec?

– Curious Noob

Dear Curious,

I’m really glad to hear you’re discovering a passion for infosec, because curiosity is really the most fundamental requirement for becoming a good hacker. I wrote a long blog series about information security careers which I hope you may find helpful in discovering niches and planning self-study. For brevity’s sake, here are some options for you.

  • Study up on any fundamental computer science area you’re underexposed to in your current work – that means Windows administration, Linux administration, TCP/IP, or system architecture. You need to have a good base understanding of each.
  • Get involved in your local CitySec, DEF CON local, or 2600 meet up group. They are great networking opportunities and a fabulous place to find a mentor or people to study with. There are meet ups all over the world in surprising places.
  • Consider attending an infosec / hacking conference. The BSides security conference in the nearest major city to you is a great option and should be very affordable (if not free). Attend some talks and see what speaks to you. Consider playing in the CTFs or other security challenges offered there, or at least observing.
  • Security Tube and Irongeek.com are your friends, with massive repositories of conference talk videos you can watch for free. Nearly any security topic that piques your interest has probably been spoken about at some point. I would favor those sites over random YouTube hacking tutorials which really vary in quality (and legality).
  • Consider building your own home lab to practice with basic tools and techniques. Networked VMs are adequate as long as you keep them segregated: Kali Linux and a Windows XP VM are a great place to start. You need to take stuff apart to learn about hacking.

These are only some brief suggestions – there’s no streamlined approach to becoming a great hacker. Get involved, ask questions, and don’t be afraid to break stuff (legally)!



Dear Lesley,

What do you do when you provide security awareness training to your employees, but they still click on phishing links!

– Mr. Phrustrated

Dear Phrustrated,

Beyond generally poor quality “death by PowerPoint” training, one of the biggest problems I see in corporate security awareness programs is poor, unsustainable measures of success. For instance, it’s become really trendy to conduct internal phishing tests to identify how many people click on a phish. It’s incredibly tempting to show off to executives that this number is trending down, but that metric is really pretty worthless.

No matter how ruthlessly trained, somebody (and anybody) will click on a well-enough crafted phish, and it only takes one compromise to breach a network’s defenses. What we should be measuring is the reporting of phishing messages and good communication between employees and the security team. The faster we know an attack is underway, the faster we can respond and mitigate the threat.

In conclusion, you should be less concerned if “somebody is still clicking” phishing messages than if nobody is telling you they clicked, and they resist or lie in embarrassment when asked.


Dear Lesley,

Is there a mental checklist to use while doing digital forensics so that you don’t make the evidence fit your quick conclusions, even if you think you have seen a similar case?

– Jack Reacher Jr.

Dear Jack,

Identifying that this is a problem is a great first step. While intuition is an important part of being a good investigator, sound methodology is even more important. The checklist you use to collect evidence and perform an investigation is going to vary by where you work and what types of things you investigate, but you should always have and follow a checklist – and I recommend it be a paper checklist, not mental.

Don’t ever shortcut or skip steps, even when you’re in a high pressure situation. Shortcuts and assumptions are incredibly dangerous to the legal and technical validity of investigations. Gather all the facts available to you at the time, and document every step you take so that a colleague (or a legal professional) can follow your work even far in the future.

Finally, always remember that in a digital forensic investigation we are generally providing evidence to reach conclusions about “what, when and how”. “Who” is shaky ground, because in most cases it involves context outside the digital device. “Why” is almost never the business of a forensic analyst (and is indeed often not within the capacity of a company to responsibly answer). If you find yourself looking for evidence to fit a presumed “why” scenario, you have a big problem and you need to step back.


Dear Lesley,

I’m this girl, like I said, who just started working in the field. For the past 4 months, I have worked at this huge corporation, which has, among other services, an information security-related one, offering technical security (pen testing, …) and non-technical security services. At that time, I had little information about advanced hacking techniques as well as the good practices that should be followed to secure our systems.

During the first weeks I got hacked by someone who works with me, and I have been harassed and shamed by them ever since. I knew it because this person would talk about their findings to everyone in the corporation, even to non-technical people. People would look at me and laugh, smile, smirk, or look at me as if I were pathetic, among other situations.

Knowing that this person is an expert (12 or more years working in information security) and that I don’t have any proof of their actions, what should I do in your opinion? What kind of advice would you give to girls and women like me, who want to work in the field but get harassed by their experienced co-workers instead of being encouraged by them?

– I

Dear I,

Your story gave me pause enough to discuss it substantially with several colleagues in information technology who have also worked in extremely hostile environments.

This is a horrific situation. I want to make it crystal clear that this is utterly shameful on the part of your employer, your infosec colleagues, and your organization’s corporate culture. I truly hope it does not drive you from our field. The most important thing I can tell you is that this is not your fault, and this is not normal.

The first thing I recommend you do is document everything that’s happening in as much detail as possible, even if you don’t feel you have evidence right now. The activity you’re talking about may not only be harassment, but violate hacking laws. Since device compromise is a concern, please maintain this documentation offline.

What you do next depends on factors you don’t mention in your note. First of all, if you have a trusted supervisor, manager outside your team, or senior mentor in your organization, please turn to them for assistance and ensure they are corroborating what has been happening to you on paper. It’s their responsibility to assist you in resolving the issue at a work center or corporate level, even if they’re not directly in your reporting chain.

If there’s nobody at all you can go to in confidence, the situation becomes substantially more unpleasant. Your options are to ignore the behavior and stick out the requisite ~2 years of entry-level security at the organization (obviously the worst option), seek employment elsewhere, or contact an HR representative (with the risk of retribution and legal battles that can bring). Obviously, my personal recommendation is taking you and your computer straight to HR. As a wise colleague of mine pointed out, this is most likely not an isolated incident – the behavior and dismal culture will continue for you and others. Sadly, in some places in the world with fewer employment protections, this can carry the risk of termination. Keep in mind that it is okay to confidentially consult a lawyer within the terms of your employment contract, and pro bono options may be available.

If HR / legal action is not an option, you can’t find employment elsewhere, and you’re toughing it out to build entry level experience, please network and find a local mentor and support structure outside of your company as soon as possible. As well as much needed emotional support, these people could help you study, network, bite back, and explore other recourse against the employer. Feel free to reach out to me anonymously and we’ll try to connect you with somebody in your area.

Best,
Lesley

Ask Lesley InfoSec Advice Column: 2017-01-19

Thanks for your interesting question submissions to “Ask Lesley”! This column will repeat, on no specific schedule, when I receive interesting questions that are applicable to multiple people. See further details or submit a question, here. Without further ado, today we have OS debates, management communication issues, nation state actors, and career questions galore!



Dear Lesley,

So last year’s Anthem breach was from a nation state – why would a nation state want to hack health insurance info? I understand the identity theft motivation of a criminal, but why do you think a nation state would want this type of data?

– Inquisitive

Dear Inquisitive,

First off, I can’t confirm the details of the Anthem breach – I wasn’t involved in the investigation and haven’t had the privilege of reviewing all the evidence. However, when generally talking about why a state-sponsored actor might want to acquire data, you have to look at a bigger picture than data sets. Nation states usually view hacking as a means to an end. They (ab)use data with a firm political or military objective in mind. Whether a nation state intended to steal 80 million records, or the theft was a crime of opportunity when looking for something more specific, what they stole may unfortunately be useful to them for years to come.

You can obviously already see how the data stolen in a healthcare breach is a treasure trove for general identity theft. The piece I believe you might be missing considers how the data could be combined with other public domain and stolen information to facilitate political objectives. If you already have a target in mind, healthcare data could be a great boon to social engineering, blackmail, and surveillance efforts. For example, consider how much leverage knowing that a target’s child is ill could provide. Or that a target family is hundreds of thousands of dollars in medical debt. These are attractive attack vectors. I can only speculate on potential scenarios, but based on my experience in OSINT, the data stolen from Anthem adds attractive private information about many millions of people.

 


Dear Lesley,

The ‘researcher’ portion of ‘security researcher’ implies graduate school – is PhD study in cybersecurity worth it? There don’t seem to be many programs that are worthwhile (except on paper).

– Not in Debt, Yet


Dear Not in Debt, Yet,

That’s an interesting implication – not one I necessarily agree with based on empirical evidence. I know full-time, professional security researchers studying everything from exploits to governance who have every level of formal education, from GEDs to PhDs. I do see certain fields of security research represented in higher education more than others – a couple of examples are high-level cryptography and electronic engineering.

I have always been an advocate for higher education and I see little harm and many benefits in getting a good education in a field you enjoy (particularly, a well-rounded education) if you can afford it. However, at present, there are very few information security careers or communities of research which require a degree, and fewer good-quality degree programs. You should see few credential-related barriers to participating in or publishing security research if your work and presentation are of good quality.

In some ways, existing exclusively in academia can also make it harder to work in practical security research, as the security field changes more quickly than university curricula can keep up. As a result, some academic security research ends up impractical and theoretical to a fault. (See my yearly rants on steganography papers.) If you go the academic route, choose your field of study carefully, and be careful not to lose touch with the working world.


Dear Lesley,

While working on my 5 BILLION dollar data breach, I wanted some blue cheese dip and chips (The Spice House in Chicago has the best mix btw), and a co-worker looked at me with disgust. Am I wrong? Also what’s a good resource to learn about file carving?

– Epicurean EnCE

Dear Epicurean,

Clearly, your coworker is a Ranch dressing fan and should therefore be looked upon with disdain. In regards to file carving, your mission (should you choose to accept it) is to review how files are physically and logically stored on a hard drive. Next, you’ll want to start familiarizing yourself with typical file headers and footers. Gary Kessler has a pretty killer list, here. Some file types will be more relevant to your specific work in forensics than others; I can’t tell you which those will be. Your best bet is to pick a couple of file types you look at a lot and examine them in a hex editor, then start searching for them in a forensic image.

Brian Carrier’s File System Forensic Analysis, while a bit older, is still a stellar resource for understanding How Disk Stuff Works. The SANS SIFT kit includes the tools you will need to get started carving files from disk, and the associated cheat sheets will help with the commands.
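
If you want to see the core idea in action before touching a full toolkit, here’s a minimal, illustrative Python sketch that carves JPEGs out of a raw image by scanning for the standard header and footer bytes. The image filename is made up, and real carvers (foremost, scalpel, PhotoRec) handle fragmentation and validation that this does not:

    # Minimal file-carving sketch: find JPEG header/footer pairs in a raw
    # image and write each candidate out. Illustrative only -- "evidence.dd"
    # is a hypothetical filename.
    JPEG_HEADER = b"\xff\xd8\xff"   # JPEG start-of-image marker
    JPEG_FOOTER = b"\xff\xd9"       # JPEG end-of-image marker

    def carve_jpegs(image_path, out_prefix="carved"):
        with open(image_path, "rb") as f:
            data = f.read()          # fine for small test images

        count = 0
        start = data.find(JPEG_HEADER)
        while start != -1:
            end = data.find(JPEG_FOOTER, start)
            if end == -1:
                break
            end += len(JPEG_FOOTER)
            with open(f"{out_prefix}_{count}.jpg", "wb") as out:
                out.write(data[start:end])
            count += 1
            start = data.find(JPEG_HEADER, end)
        return count

    if __name__ == "__main__":
        print(carve_jpegs("evidence.dd"), "candidate JPEGs carved")

Once the header/footer concept clicks, the dedicated carving tools in SIFT will make much more sense.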

If you want to carve files from packet captures, similar header/footer knowledge is required, along with a different tool set. Wireshark’s export alone will often suffice; if it fails, look at Network Miner.


Dear Lesley,

What was the silliest / dumbest thing you’ve googled this week?

– Curious in Cincinnati


Dear Curious,

“The shirt, 2017”

I still don’t get what’s up with that.

 


Dear Lesley,

I teach high school computer science courses and many students’ biggest interest is infosec stuff. What should they do to prepare at that age? Any recommendations on software or skills I can teach them? I’m willing to put in the time and effort to learn things to teach and we have class time, but this isn’t what my tech career focused on so I need some help. Thank you, you’re the best!

– Mentor in Michigan

Dear Mentor,

Being a crummy hacker requires learning to use a few tools by following YouTube tutorials. Being a good hacker requires a great deal of foundational knowledge about other, less entertaining computer stuff.

The better one knows how computer hardware, operating systems, and networks work, the better he or she will be at hacking. If kids come out of your classes unafraid of taking their own software and hardware apart, you did your job right. That means a lot of thinking about how Windows and Linux function, how computer programs work all the way down to Assembly, and how data gets from point A to point B. If you are going to encourage kids to take stuff apart, make sure they also understand that law and ethics are involved. Provide them a safe and legal sandbox to explore, and explain why it’s important to know how to break things in order to fix them.

As an aside – by high school, kids are more than old enough to be actively participating in the infosec community if they wish. Numerous kids and teens attend and even present at hacker events these days; in fact, many conferences have educational events and sponsorships specifically for youth.

 


Dear Lesley,

 I normally use a Chromebook, but I also have to use Windows 10 so that I can use Cisco packet tracer (I’m studying CCNA). I really trust the security of my Chromebook, but Windows 10 – not so much. I have antivirus, anti-exploit and anti-ransomware software on my Windows laptop. But my question to you is: Is there a resource that you know of that can help lock down Windows 10 for the home user? Most of what I find is for enterprises and Enterprise versions of Windows 10 and if I do find something for the home user it invariably talks about privacy rather than security.

–  Kerneled Out


Dear Kerneled Out,

The OS wars, while somewhat befuddled by 2016, are alive and well. There are dogmatic Linux fans, and dogmatic Windows fans, and so on and so forth. My opinion is that every OS has its place when used correctly by the right person. Many serious security people I know use every major OS on a daily basis – I sure do.

SwiftOnSecurity has a nice guide here on securing Windows 10 that should suit your needs.

As for Chrome OS over Windows – please don’t fall into the “security by obscurity” trap that MacOS and Chrome OS can encourage. They are both solid OSes with interesting ideas on security, and viable choices for home and business use cases. However, modern versions are not inherently more or less secure than modern Windows. MacOS, Windows, Chrome OS, and major Linux distros are as secure as they are configured and used by human beings. Of course, the complexity of configuring them can vary based on user experience and training.

 


Dear Lesley,

How come everyone wants 5 years experience for an entry level infosec job? I’ve been trying to get gainful employment in an offensive role for more than 6 months and no one wants anyone with less than 5 years of pentesting/red teaming experience. Can’t exactly do pen tests until you’re a pentester, so what do I do?

– Frustrated

Dear Frustrated,

I’m sorry to hear you’re having so much trouble finding a position. I have written quite a lot about infosec career paths and job hunting in previous blogs, and I hope that they can assist you a little. Red teaming is unfortunately much harder and more competitive to find work in than Blue teaming, so my suggestions here are not going to be particularly pleasant:

  • Consider your willingness to move. There are simply more red team jobs in places like DC and the west coast.
  • Consider if you can take a lower-paid internship. It sucks, but it’s an in, and pen testing firms do offer them.
  • Consider doing blue team SOC work for a couple years. It’s not exactly your cup of tea, but it will give you solid security experience.
  • Network like crazy. Get to the cons and the meet-ups in person. Talk to people and build relationships.
  • Do research and speak about it. Pick something that intrigues you, even if you have no professional experience, do a few months of work, and submit it to a CFP. It will get you name recognition.

Dear Lesley,

Many infosec professionals feel that signature-based antivirus is dead. If that is the case… What do you recommend we replace it with to protect our most vulnerable endpoints (end users)?

– Sigs Uneasy

Dear Sigs,

That’s the kind of black and white statement that makes a good headline, but exaggerates the truth a bit. Yes, there are a couple of companies that have been able to ditch antivirus because of their topology and operations. The vast majority still use it. While signatures alone don’t cut it against quickly replaced and polymorphic threats, other antivirus features, such as HIPS and heuristics, still provide a benefit. (So, if you’re still using some kind of antivirus that can’t do those things, it’s time to upgrade.)

Antivirus today is useful as part of a “defense in depth” solution. It is not a silver bullet, and it’s certainly defeatable. However, it still catches mass malware and the occasional targeted threat. The threats AV misses should be caught by your network IPS, your firewall, your web filters, your application whitelisting solution, and so forth. None of those solutions is bulletproof alone, and even the efficacy of trendy solutions like whitelisting is limited if you don’t architect and administer your network securely.


Dear Lesley,

I was testing a network and found some major flaws. The management doesn’t seem too bothered but I feel the issues are huge. I want to out them because these flaws could impact many innocent people. But if I do, I won’t be hired again. I look forward to your response.

– Vaguely Disturbed

Dear Disturbed,

Before whistle-blowing and potentially getting in legal trouble, I highly recommend you approach this argument from a solid risk management perspective. Sometimes, “it could be hacked” means a lot less to management than, “9 companies in our industry were breached in 2016, and if we are, it will probably cost us over 70 million dollars in lost revenue”. If you have access to anybody with a risk analysis background you can reach out to under the relevant NDA, I highly recommend you have a chat with them and put together a quantified, evidenced argument, ASAP. The more dollar signs and legal cases, the better your chances of winning this.

At the very least, win or lose, ensure you’ve covered your butt. This means written statements and acknowledgements stating that you clearly explained the potential risk and that they willfully chose to ignore it. Not only does requiring a notarized signature make the issue look more serious, but it will also be helpful in case they decide to blame you or your employer two years from now.

I would suggest you consult a lawyer before breaking NDA or employment contract by whistle blowing, no matter how noble your intentions. I am not a lawyer, nor do I play one on TV.


Dear Lesley,

I make software and web applications that connect to software and services from other companies. Sometimes those companies disable or cripple some features due to possible security exploits. When I’ve met with security people from those companies and asked them about the features they nerfed (disabled or crippled), I’m met with an awkward silence similar to the vague errors I get from their servers. As a developer, I’m so used to the open-source community that wants to help that this feels weird. Is there some certification, secret handshake, or specific brand of white fedora I need to have conversations with security people about their products’ security issues? Just trying to learn and grow, and not cause a mess for anybody.

– Snubbed

Dear Snubbed,

No secret handshake. Here are a couple suggestions from the receiving end of these types of concerns:

  • Set up a security lab with your applications and a client on it. Install one or more Snort or Suricata sensors with the free Emerging Threats ruleset in the midst of them to inspect their communication. (Security Onion is a nice, relatively easy-to-install option.) Send normal application traffic back and forth and see what security signatures are firing on the network. That will give you some idea of what might be getting blocked before you even start the discussion (and help you reduce false positives). There’s a small log-parsing sketch after this list that shows one way to summarize those alerts.
  • Ensure your applications are getting proper vulnerability testing before release. Again, even if you’re coding securely and responsibly, this can help reduce false positive detection by vulnerability scanners or sensors.
  • Ask the security people what security products or appliances they are using on the hosts and on the network, and what signatures are firing. You might not have access to a 20,000 dollar security appliance to test, but their sensor might have full packet capture functionality or verbose logs that will help you troubleshoot.
  • Try to build a better professional relationship with these teams if you can. If they’re involved in a local security group, perhaps drop by and have a drink with them.
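
To illustrate the first bullet, here’s a small, hedged Python sketch that tallies which alert signatures fired during a lab test run by reading Suricata’s EVE JSON log. The log path below is an assumption – adjust it for however your sensor or Security Onion box is set up:

    # Summarize which IDS signatures fired during a lab test, using
    # Suricata's EVE JSON output. The path is an assumed default.
    import json
    from collections import Counter

    EVE_LOG = "/var/log/suricata/eve.json"

    def alert_summary(path=EVE_LOG):
        counts = Counter()
        with open(path) as log:
            for line in log:
                try:
                    event = json.loads(line)
                except json.JSONDecodeError:
                    continue                 # skip partial or corrupt lines
                if event.get("event_type") == "alert":
                    counts[event["alert"].get("signature", "unknown")] += 1
        return counts

    if __name__ == "__main__":
        for signature, hits in alert_summary().most_common(20):
            print(f"{hits:6d}  {signature}")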

 


Dear Lesley,

I’m feeling it is time to move on from Windows XP, but only because many things no longer support it, and 3Gb is a bit limiting when running VMs and the like. I’ve tried Windows 10, and it is completely alien, and I worry about security – it streams things back to Microsoft, and is less secure than my hardened XP install. I’ve tried Mint Linux, and that was quite good, but underneath it is even more alien than Windows 10. I’ve heard of BSD, but I’m worried that my political career could be over if word about that got out, so I’ve not tried it. What do you suggest?

– Unsupported in UK

Dear Unsupported,

It is indeed high time to move off XP.

Windows XP is unsupported, highly vulnerable, and trivially exploitable by hackers. It is not in the same league as Windows 10 in terms of security. Even application whitelisting (which is considered a bit of a last-resort silver bullet in industry) isn’t a reliable means of securing XP against attacks anymore.

Yes, there are some IT professionals who dislike Windows 10. Those concerns usually have to do with things like UI, embedded ads and system telemetry, not the underlying security (which is quite well engineered).

If those are your specific concerns, a current version of Mint (which you tried), Ubuntu, or MacOS are all okay options. They would all need to be thoughtfully configured for security just as much as Windows. BSD will feel just as unfamiliar if you were uncomfortable operating in Mint, but I certainly don’t discourage you from giving it a try. Even MacOS is *nix based under the hood.

Unfortunately, it seems to me that you’re stuck with two options if you want to maintain any semblance of security: cope with your dislike of Windows 10, or dedicate some time to learning the inner workings of a new operating system. Either way, please get off XP as soon as possible.


Dear Lesley,

My friend, since birth – who I’ll call M. E., has had a 23-year, jack-of-most-trades career in IT. ME is currently serving as the IT Decider (and Doer) at an SMB financial firm. Over the last five years, ME has enjoyed focusing on security. Technology, security in particular, is still near the top of his hobby list. However, compared to when he started his IT career, ME places a greater value on having a work-life balance. ME wonders if it’s too late for a change to the cyberz – without “starting over.” In your experience, is there a reasonable way for ME to jump from the “IT rail” to the “security rail” without touching the third rail and returning to Go, without collecting $200?

– ME’s Friend

Dear ME’s Friend,

Your ‘friend’ sounds like a great candidate for many security positions, but he or she might have to take a pay cut. 23 years of experience in systems administration and networking is 23 years of experience in how to take things apart, which is really mostly what security is behind the neat hats and the techno music.

ME is going to need to figure out two important things. Firstly, ME will need to gain some security-specific vocabulary to tie things together – a course or certification might be a nice feather in the cap. Then, ME is going to have to carefully plan out how to present himself or herself as an Awesome Security Candidate in interviews and resumes. That will involve taking those 23 years of generalized experience, as well as security hobby work, and selling them as 23 years of Awesome Security Experience. For example, it takes a lot of understanding of Windows administration and scripting to be a good Windows pen tester. Or, it takes a lot of TCP/IP knowledge to do packet analysis when an IPS signature fires. Every niche of security requires deep knowledge of one or more areas of general IT.

All that being said, there are some security skills that need to be learned on the job. I wouldn’t push ME towards an entry level gig, but it may not be an easy lateral move to any senior technical position, either. A good segue if seniority is critical might be security engineering (IPS / SIEM / log aggregation administration, etc).


Dear Lesley,

How does an organization go about starting a patch testing program? Ours seems to be stuck in a “don’t update it, you’ll break the application” mindset.

– TarPitted in Texas

Dear TarPitted,

As I noted to a reader above, sometimes this type of impasse with management can only be solved through presenting things as quantifiable risk. If you are telling management that your application is vulnerable, and they are saying it will cost too much if it breaks when you patch it, somebody else is quantifying risk better than you. You’d best believe that team saying, “the application might break” is also saying, “if this application breaks, it will cost us n dollars a day”. So, play that game. Tell management specifically how much money and time they stand to lose if a security incident occurs. Present this risk clearly – get help if you need to from all of the impacted teams, your disaster recovery and risk management professionals, and even your finance team.

Your managers should be making a decision based on monetary and other quantifiable business impact of the application going down for patching, vs. the monetary and other quantifiable business impacts of a potential security incident at x likelihood. Once they do that on paper, you’ve done due diligence.
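
If it helps to see the arithmetic, here’s a toy sketch of that comparison using the classic annualized loss expectancy formula (ALE = single loss expectancy × annual rate of occurrence). Every number in it is invented for illustration – plug in figures your application, finance, and risk teams actually agree on:

    # Toy comparison: cost of planned patch-window downtime vs. expected
    # annual loss from leaving the application unpatched. All figures are
    # made up for illustration.
    def annualized_loss_expectancy(single_loss, annual_rate):
        """Classic ALE = SLE * ARO."""
        return single_loss * annual_rate

    downtime_cost_per_hour = 5_000        # revenue lost while the app is down
    patch_window_hours     = 4            # planned outage per patch cycle
    patch_cycles_per_year  = 12

    breach_cost_estimate   = 2_000_000    # response, downtime, fines, churn
    breach_likelihood      = 0.15         # estimated incidents per year if unpatched

    patching_cost = downtime_cost_per_hour * patch_window_hours * patch_cycles_per_year
    incident_ale  = annualized_loss_expectancy(breach_cost_estimate, breach_likelihood)

    print(f"Annual cost of patch windows:      ${patching_cost:,.0f}")
    print(f"Expected annual loss if unpatched: ${incident_ale:,.0f}")

However rough the inputs, putting both sides of the decision in the same units is what moves the conversation forward.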

 

Bridging the Gap between Media and Hackers: An Olive Branch

I had a lovely interview about IoT security with Emmy-award-winning reporter Kerry Tomlinson of Archer News this past week at BSides Jackson. It’s unfortunately rare in our field that we get to have such productive, mutually beneficial conversations with members of the media. There’s a lot of uncertainty and (often justified) lack of trust between both parties – which makes it easy to forget that presenting a coherent, technically correct, and comprehensible message on information security and privacy is crucial for everyone.

Since organizations like I Am the Cavalry are already approaching the outreach problem primarily from the side of security professionals, I’d like to take a slightly different approach by specifically addressing journalists and the media.

We need your help!

With the plethora of hacker conferences gaining legitimacy and attention across the world, there are many opportunities to address our community. Hacking conference calls for papers are often open to everybody, not just people gainfully employed in security. You are welcome to apply and lend your unique perspective to these problems. It doesn’t have to be DEF CON or Black Hat. There are many smaller options which record and post talks, and have great reach within our community.

Here are some important topics which you could help educate us about, by sharing your perspective:

  • What is it like being a journalist covering security? What are the challenges?
  • How should we prepare for a media interview?
  • Many people in security feel burnt by misquotes and misinterpretations of their work. How can we better avoid this? What should we do if we feel we have been misrepresented by a media organization?
  • How can we better vet news outlets which want to work with us?
  • How can we help you as subject matter experts or fact checkers?
  • How can we help you present our most important security research to society without sensationalizing?
  • How can we better format and target our blogs and research for the media?

We want to help you!

There are plenty of security topics that are timely and highly relevant to journalists and the media, and many of us are willing to offer education and insights to your communities of practice, if offered opportunities to do so.

Here are some topics which many willing security professionals (including myself) could provide a range of insights and training on at media conferences and educational programs:

  • How to conduct secure and private communications with sources and colleagues.
  • How to maintain operational security and avoid leakage of sensitive personal information.
  • How to secure computers and mobile devices.
  • Understanding, detecting, and avoiding social engineering.
  • How to approach hackers (white, grey, and black hat) for information on security research.
  • The realities of hacker “culture” and work, and how these differ from fictional stereotypes.
  • Current issues with malvertising on news sites, how to better decrease that risk, and its effect on the rise of ad blockers.

I want to take a moment to thank the many journalists and reporters who do fabulous coverage of security topics right now (especially Steve Ragan, who wrote the essential article on how to deal with the media as a hacker) who associate with our community on a regular basis. Thanks for dealing with our foibles and for doing great work.

Nation State Threat Attribution: a FAQ

Threat actor attribution has been big news, and big business for the past couple years. This blog consists of seven very different infosec professionals’ responses to frequently asked questions about attribution, with thoughts, experiences, and opinions (focusing on nation state attribution circa 2016). The contributors to this FAQ introduce themselves as follows (and express personal opinions in this article that don’t necessarily reflect those of their employers or this site):

  • DA_667: A loud, ranty guy on social media. Farms potatoes. Has nothing to do with Cyber.
  • Ryan Duff: Former cyber tactician for the gov turned infosec profiteer.
  • Munin: Just a simple country blacksmith who happens to do infosec.
  • Lesley Carhart: Irritatingly optimistic digital forensics and incident response nerd.
  • Krypt3ia: Cyber Nihilist
  • Viss: Dark Wizard, Internet bad-guy, feeder and waterer of elderly shells.
  • Coleman Kane: Cyber Intelligence nerd, malware analyst, threat hunter.

Many thanks to everybody above for helping create this, and for sharing their thoughts on a super-contentious and complex subject. Additional thanks to everybody on social media who contributed questions.

This article’s primary target audience is IT staff and management at traditional corporations and non-governmental organizations who do not deal with traditional military intelligence on a regular basis. Chances are, if you’re the exception to our rules, you already know it (and you’re probably not reading this FAQ).

Without further ado, let’s start with some popular questions. We hope you find some answers (and maybe more questions) in our responses.


 

Are state-sponsored network intrusions a real thing?

DA_667: Absolutely. “Cyber” has been considered a domain of warfare. State-sponsored intrusions have skyrocketed. Nation-states see the value of data that can be obtained through what is termed “Cyberwarfare”. Not only is access to sensitive data a primary motivator, but so is access to critical systems. Like, say, computers that control the power grid. Denying access to critical infrastructure can come in handy when used in concert with traditional, kinetic warfare.

Coleman: I definitely feel there’s ample evidence reported publicly by the community to corroborate this claim. It is likely important to distinguish how the “sponsorship” happens, and that there may (or may not) be a divide between those whose goal is the network intrusion and those carrying out the attack.

Krypt3ia: Moot question. Next.

Lesley: There’s pretty conclusive public domain evidence that they are. For instance, we’ve seen countries’ new weapons designs appear in other nations’ arsenals, critical infrastructure attacked, communications disrupted, and flagship commercial and scientific products duplicated within implausibly short timeframes.

Munin: Certainly, but they’re not exactly common, and there’s a continuum of attackers from “fully state sponsored” (that is, “official” “cyberwarfare” units) to “tolerated” (independent groups whose actions are not materially supported but whose activities are condoned).

Viss: Yes, but governments outsource that. We do. Look at NSA/Booz.

Ryan: Of course they are real. I spent a decent portion of my career participating in the planning of them.

 

 

Is this sort of thing new?

Coleman: The most common blame frequently is pointed at China, though a lot of evidence (again, in the public) indicates that it is broader. That said, one of the earliest publicly-documented “nation-state” attacks is “Titan Rain”, which was reported as going back as far as 2003, and widely regarded as “state sponsored”. With that background, it would give an upper bound of ~13 years, which is pretty old in my opinion.

Ryan: It’s definitely not new. These types of activities have been around for as long as they have been possible. Any well-resourced nation will identify when an intelligence or military opportunity presents itself at the very earliest stages of that opportunity. This is definitely true when it comes to network intrusions. Ever since there has been intel to retrieve on a network, you can bet there have been nation states trying to get it.

Munin: Not at all. This is merely an extension of the espionage activities that countries have been flinging at each other since time immemorial.

DA_667: To make a long story short, absolutely not. For instance, it is believed that a recent exploit used by a group of nation-state actors is well over 10 years old. That’s one exploit, supposedly tied to one actor. Just to give you an idea.

Lesley: Nation state and industrial sabotage, political maneuvering, espionage, and counterespionage have existed as long as industry and nation states have. It’s nothing new. In some ways, it’s just gotten easier in the internet era. I don’t really differentiate.

Krypt3ia: No. Go read The Cuckoo’s Egg.

Viss: Hard to say – first big one we knew about was Stuxnet, right? – Specifically computer security stuff, not in-person assets doing Jason Bourne stuff.

 

 

How are state-sponsored network intrusions different from everyday malware and attacks?

Lesley: Sometimes they may be more sophisticated, and other times aspects are less sophisticated. It really depends on actor goals and resources. A common theme we’ve seen is long term persistence – hiding in high value targets’ networks quietly for months or years until an occasion to sabotage them or exfiltrate data. This is pretty different from your average crimeware, the goal of which is to make as much money as possible as quickly as possible. Perhaps surprisingly, advanced actors might favor native systems administration tools over highly sophisticated malware in order to make their long term persistence even harder to detect. Conversely, they might employ very specialized malware to target a specialized system. There’s often some indication that their goals are not the same as the typical crimeware author.

Viss: The major difference is time, attention to detail and access to commercial business resources. Take Stuxnet – they went to Microsoft to validate their USB hardware so that it would run autorun files – something that Microsoft killed years and years ago. Normal malware can’t do that. Red teams don’t do that. Only someone who can go to MS and say “Do this. Or you’ll make us upset” can do that. That’s the difference.

Munin: It’s going to differ depending on the specifics of the situation, and on the goals being served by the attack. It’s kind of hard to characterize any individual situation as definitively state-sponsored because of the breadth of potential actions that could be taken.

DA_667: In most cases, the differences between state-sponsored network intrusions and your run-of-the-mill intruder is going to boil down to their motivations, and their tradecraft. Tradecraft being defined as, and I really hate to use this word, their sophistication. How long have the bad guys operated in their network? How much data did they take? Did they use unique tools that have never before been seen, or are they using commodity malware and RATs (Trojans) to access targets? Did they actively try to hide or suppress evidence that they were on your computers and in your network? Nation-state actors are usually in one’s network for an extended period of time — studies show the average amount of time between initial access and first detection is somewhere over 180 days (and this is considered an improvement over the past few years). This is the primary difference between nation-states and standard actors; nation-states are in it for the long haul (unlike commodity malware attackers). They have the skill (unlike skids and/or hacktivists). They want sustained access so that they can keep tabs on you, your business, and your trade secrets to further whatever goals they have.

Krypt3ia: All of the above with one caveat. TTPs are being spread through sales, disinformation campaigns and use of proxies. Soon it will be a singularity.

Coleman: Not going to restate a lot of really good info provided above. However, I think some future-proofing to our mindset is in order. There are a lot of historic “nation-state attributed” attacks (you can easily browse FireEye’s blog for examples) with very specific tools/TTPs. More recently, some tools have emerged as being allegedly used in both (Poison Ivy, PlugX, DarkComet, Gh0st RAT). It kind of boils down to the “malware supply chain”. Back in 2003, the “supply chain” for malware capable of stealth as well as remote-access capability was comparatively small compared to today, so it was likely more common to have divergence between tooling funded for “state sponsored” attacks and what was available to the more common “underground market”. I think we have seen, and will continue to see, a convergence in tactics that muddies the waters and also makes our work as intel analysts more difficult, as more commodity tools improve.

 

 

Is attributing network attacks to a nation state actor really possible?

Munin: Maybe, under just the right circumstances – and with information outside of that gained within the actual attacked systems. Confirming nation-state responsibility is likely to require more conventional espionage information channels [ e.g. a mole in the ‘cyber’ unit who can confirm that such a thing happened ] for attribution to be firmer than a “best guess” though.

DA_667: Yes and No. Hold on, let me explain. There are certain signatures, TTPs, common targets, common tradecraft between victims that can be put together to grant you clues as to what nation-state might be interested in given targets (foreign governments, economic verticals, etc.). There may be some interesting clues in artifacts (tools, scripts, executables, things the nation-state uses) such as compile times and/or language support that could be used if you have enough samples to make educated guesses as well, but that is all that data will amount to: hypothetical attribution. There are clues that say X is the likely suspect, but that is about as far as you can go.

Lesley: Kind of, by the right people with access to the right evidence. It ends up being a matter of painstaking analysis leading to a supported conclusion that is deemed plausible beyond a reasonable doubt, just like most criminal investigations.

Viss: Sure! Why not? You could worm your way back from the c2 and find the people talking to it and shell them! NSA won’t do that though, because they don’t care or haven’t been tasked to – and the samples they find, if they even find samples will be kept behind closed doors at Mandiant or wherever, never to see the light of day – and we as the public will always get “trust us, we’re law enforcement”. So while, sure, It’s totally possible, A) they won’t let us do it because, well, “we’re not cool enough”, and B) they can break the law and we can’t. It will always boil down to “just trust us”, which isn’t good enough, and never helps any public discourse at all. The only purpose it serves talking to the press about it is so that they can convince the House/Senate/other decision makers “we need to act!” or whatever. It’s so that they can go invade countries, or start shit overseas, or tap cables, or spy on Americans. The only purpose talking about it in the media serves is so that they get their way.

Coleman: It is, but I feel only by the folks with the right level of visibility (which, honestly, involves diplomacy and basically the resources of a nation-state to research). I feel the interstate diplomacy/cooperation part is significantly absent from a lot of the nation-state attribution reporting today. At the end of the day, I can’t tell you with 100% certainty what the overall purpose of an intrusion or data theft is. I can only tell you what actions were taken, where they went, what was taken, and possible hypotheses about what relevance it may have.

Ryan: Yes, but I believe it takes the resources of a nation-state to do it properly. There needs to be a level of access to the foreign actors that is beyond just knowing the tools they use and the tradecraft they employ. These can all be stolen and forged. There needs to be insight into adversaries’ mission planning, the creation of their infrastructure, their communications with each other, etc in order to conduct proper attribution. Only a nation-state with an intelligence capability can realistically perform this kind of collection. That’s why it’s extremely difficult, in my opinion, for a non-government entity to really do proper state-sponsored attribution.

Krypt3ia: There will always be doubt because disinformation can be baked into the malware, the operations, and the clues left deliberately. As we move forward, the actors will be using these techniques more and it will really rely on other “sources and methods” (i.e. espionage with HUMINT) to say more definitively who dunnit.

 

 

Why do security professionals say attribution is hard?

Lesley: Commercial security teams and researchers often lack enough access to data to make any reliable determination. This doesn’t just include lack of the old-fashioned spy vs. spy intelligence, but also access to the compromised systems that attackers often use to launch their intrusions and control their malware. It can take heavy cooperation from law enforcement and foreign governments far outside one network to really delve into a well-planned global hacking operation. There’s also the matter of time – while a law enforcement or government agency has the freedom to track a group across multiple intrusions for years, the business goal of most private organizations is normally to mitigate the damage and move on to the next fire.

Munin: Being truly anonymous online is extremely difficult. Framing someone else? That’s comparatively easy. Especially in situations where there exists knowledge that certain infrastructure was used to commit certain acts, it’s entirely possible to co-opt that infrastructure for your own uses – and thus gain at least a veneer of being the same threat actor. If you pay attention to details (compiling your programs during the working hours of those you’re seeking to frame; using their country’s language for localizing your build systems; connecting via systems and networks in that country, etc.) then you’re likely to fool all but the most dedicated and well-resourced investigators.

Coleman: In my opinion, many of us in the security field suffer from a “fog of war” effect. We only have complete visibility to our interior, and beyond that we have very limited visibility of the perimeter of the infrastructure used for attacks. Beyond that, unless we are very lucky, we won’t be granted any visibility into other victims’ networks. This is a unique space that both the governments and the private sector infosec companies get to reside within. However, in my opinion, the visibility will still end just beyond their customer base or scope of authority. At the end of the day, it becomes an inference game, trying to sum together multiple data points of evidence to eliminate alternative hypotheses in order to converge on the “likeliest reality”. It takes a lot of time and effort to get it right, and very frequently, there are external drivers to get it “fast” before getting it “correct”. When the “fast” attribution ends up in public, it becomes “ground truth” for many, whether or not it actually is. This complicates the job of an analyst trying to do it correctly. So I guess both “yes” and “no” apply. Attribution is “easy” if your audience needs to point a finger quickly; attribution is “hard” if your audience expects you to blame the right perp ;).

DA_667: Okay so in answering this, let me give you an exercise to think about. If I were a nation-state and I wanted to attack target “Z” to serve some purpose or goal, directly attacking target “Z” has implications and risks associated to it, right? So instead, why not look for a vulnerable system in another country “Y”, compromise that system, then make all of my attacks on “Z” look like they are coming from “Y”? This is the problem with trying to do attribution. There were previous campaigns where there was evidence that nation-states were doing exactly this; proxying off of known, compromised systems to purposely hinder attribution efforts (https://krypt3ia.wordpress.com/2014/12/20/fauxtribution/). Now, imagine having to get access to a system that was used to attack you, that is in a country that doesn’t speak your native language or, perhaps doesn’t have good diplomatic ties with your country. Let’s not even talk about the possibility that they may have used more than one system to hide their tracks, or the fact that there may be no forensic data on these systems that assists in the investigation. This is why attribution is a nightmare.

Krypt3ia: See my answers above.

Viss: Because professionals never get to see the data. And if they *DO* get to see the data, they get to deal with what DA explains above. It’s a giant shitshow and you can’t catch people breaking the law if you have to follow the law. That’s just the physics of things.

Ryan: DA gave a great example about why you can’t trust where the attack “comes from” to perform attribution. I’d like to give an example regarding why you can’t trust what an attack “looks like” either. It is not uncommon for nation-state actors to not only break into other nation-state actors’ networks and take their tools for analysis, but to also then take those tools and repurpose them for their own use. If you walk the dog on that, you’re now in a situation where the actor is using pre-compromised infrastructure in use by another actor, while also using tools from another actor to perform their mission. If Russia is using French tools and deploying them from Chinese compromised hop-points, how do you actually know it’s Russia? As I mentioned above, I believe you need the resources of a nation-state to truly get the information needed to make the proper attribution to Russia (ie: an intelligence capability). This makes attribution extremely hard to perform for anyone in the commercial sector.

 

 

How do organizations attribute attacks to nation states the wrong way?

Munin: Wishful thinking, trying to make an attack seem more severe than perhaps it really was. Nobody can blame you for falling to the wiles of a nation-state! But if the real entrypoint was boring old phishing, well, that’s a horse of a different color – and likely a set of lawsuits for negligence.

Lesley: From a forensics perspective, the number one problem I see is trying to fit evidence to a conclusion, which is totally contrary to the business of investigating crimes. You don’t base your investigation or conclusions off of your initial gut feeling. There is certainly a precedent for false flag operations in espionage, and it’s pretty easy for a good attacker to emulate a less advanced one. To elaborate, quite a bit of “advanced” malware is available to anybody on the black market, and adversaries can use the same publicly posted indicators of compromise that defenders do to emulate another actor like DA and Ryan previously discussed (for various political and defensive reasons). That misdirection can be really misleading, especially if it plays to our biases and suits our conclusions.

DA_667: Trying to fit data into a mold; you’ve already made up your mind that advanced nation-state actors from Elbonia want your secret potato fertilizer formula, and you aren’t willing to see it any differently. What I’m saying is that some organizations have a bias that leads them to believe that a nation-state actor hacked them.

In other cases, you could say “It was a nation-state actor that attacked me”, and if you have an incident response firm back up that story, it could be enough to get an insurance company to pay out a “cyber insurance” policy for a massive data breach because, after all, “no reasonable defense could have been expected to stop such sophisticated actors and tools.”

Krypt3ia: Firstly they listen to vendors. Secondly they are seeking a bad guy to blame when they should be focused on how they got in, how they did what they did, and what they took. Profile the UNSUB and forget about attribution in the cyber game of Clue.

Viss: They do it for political reasons. If you accuse Pakistan of lobbing malware into the US it gives politicians the talking points they need to get the budget and funding to send the military there – or to send drones there – or spies – or write their own malware. Since they never reveal the samples/malware, and since they aren’t on the hook to, everyone seems to be happy with the “trust us, we’re law enforcement” replies, so they can accuse whoever they want, regardless of the reality and face absolutely no scrutiny. Attribution at the government level is a universal adapter for motive. Spin the wheel of fish, pick a reason, get funding/motive/etc.

Coleman: All of the above are great answers. In my opinion, among the biggest mistakes I’ve seen not addressed above is asking the wrong questions. I’ve heard many stories about “attributions” driven by a desire by customers/leaders to know “Who did this?”, which 90% of the time is non-actionable information, but it satisfies the desires of folks glued to TV drama timelines like CSI and NCIS. Almost all the time, “who did this?” doesn’t need to be answered, but rather “what tools, tactics, infrastructure, etc. should I be looking for next?”. Nine times out of ten, the adversary resides beyond the reach of prosecution, and your “end game” is documentation of the attack, remediation of the intrusion, and closing the vulnerabilities used to execute the attack.

 

 

So, what does it really take to fairly attribute an attack to a nation state?

Munin: Extremely thorough analysis coupled with corroborating reports from third parties – you will never get the whole story from the evidence your logs get; you are only getting the story that your attacker wants you to see. Only the most naive of attackers is likely to let you have a true story – unless they’re sending a specific message.

Coleman: In my opinion, there can be many levels to “attribution” of an attack. Taking the common “defense/industrial espionage” use case that’s widely associated with “nation state attacks”, there could be three semi-independent levels that may or may not intersect: 1) Tool authors/designers, 2) Network attackers/exploiters, 3) Tasking/customers. A common fallacy that I’ve observed is assuming that a particular adversary (#2 from above) exclusively cares about gathering the specific data they’ve been tasked with at one point. IMO, recognize that any data you have is “in play” for any of #2 from my list above. If you finally get an attacker out, and keep them out, someone else is bound to be thrown your way with different TTPs to get the same data. Additionally, a good rule as time goes on is that all malware becomes “shared tooling”, so make sure not to confuse “tool sharing” with any particular adversary. Or, maybe you’re tracking a “Poison Ivy Group”. Lots of hard work, and also a recognition that no matter how certain you are, new information can (and will!) lead to reconsideration.

Lesley: It’s not as simple as looking at IP addresses! Attribution is all about doing thorough analysis of internal and external clues, then deciding that they lead to a conclusion beyond a reasonable doubt. Clues can include things like human language and malicious code, timestamps on files that show activity in certain time zones, targets, tools, and even “softer” indicators like the patience, error rate, and operational timeframes of the attackers. Of course, law enforcement and the most well-resourced security firms can employ more traditional detective, Intel, and counterespionage resources. In the private sector, we can only leverage shared, open source, or commercially purchased intelligence, and the quality of this varies.
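
As a concrete example of one of those “softer” clues, here’s a small sketch using the third-party pefile library to print the compile timestamps of a folder of Windows executables in UTC, so an analyst can eyeball apparent working hours. The folder name is an assumption, and – as several answers here stress – these stamps are trivially forged, so treat them as one data point among many:

    # Print PE compile timestamps in UTC as one "soft" attribution clue.
    # Requires the third-party pefile library (pip install pefile).
    # "samples" is an assumed folder of suspect executables.
    from datetime import datetime, timezone
    from pathlib import Path

    import pefile

    def compile_times(sample_dir="samples"):
        for path in Path(sample_dir).glob("*"):
            if not path.is_file():
                continue
            try:
                pe = pefile.PE(str(path), fast_load=True)
            except pefile.PEFormatError:
                continue                      # not a PE file, skip it
            stamp = pe.FILE_HEADER.TimeDateStamp
            built = datetime.fromtimestamp(stamp, tz=timezone.utc)
            print(f"{path.name}: built {built:%Y-%m-%d %H:%M} UTC")

    if __name__ == "__main__":
        compile_times()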

Viss: A slip up on their part – like the NSA derping it up and leaving their malware on a staging server, or using the same payload in two different places at the same time which gets ID’ed later at something like Stuxnet where attribution happens for one reason or another out of band and it’s REALLY EASY to put two and two together. If you’re a government hacking another government you want deniability. If you’re the NSA you use Booz and claim they did it. If you’re China you proxy through Korea or Russia. If you’re Russia you ride in on a fucking bear because you literally give no fucks.

DA_667: A lot of hard work, thorough analysis of tradecraft (across multiple targets), access to vast sets of data to attempt to perform some sort of correlation, and, in most cases, access to intelligence community resources that most organizations cannot reasonably expect to have access to.

Krypt3ia: Access to IC data and assets for other sources and methods. Then you adjudicate that information the best you can. Then you forget that and move on.

Ryan: The resources of a nation-state are almost a prerequisite to “fairly” attribute something to a nation state. You need intelligence resources that are able to build a full picture of the activity. Just technical indicators of the intrusion are not enough.

 

 

Is there a way to reliably tell a private advanced actor aiding a state (sanctioned or unsanctioned) from a military or government threat actor?

Krypt3ia: Let me put it this way. How do you know that your actor isn’t a freelancer working for a nation state? How do you know that a nation state isn’t using proxy hacking groups or individuals?

Ryan: No. Not unless there is some outside information informing your analysis, like intelligence information on the private actor or a leak of their tools (for example, the HackingTeam hack). I personally believe there isn’t much of a distinction to be made between these types of actors if they are still state-sponsored in their activities, because they are working off of their sponsor’s requirements. Depending on the level of the sponsor’s involvement, the tools could even conform to standards laid out by the nation-state itself. I think efforts to try to draw these distinctions are rather futile.

DA_667: No. In fact, given what you now know about how nation-state actors can easily make it seem like attacks are coming from a different IP address and country entirely, what makes you think that they can’t alter their tool footprint and just use open-source penetration testing tools, or recently open-sourced bots with re-purposed code?

Munin: Not a chance.

Viss: Not unless you have samples or track record data of some kind. A well funded corporate adversary who knows what they’re doing should likely be indistinguishable from a government. Especially because the governments will usually hire exactly these companies to do that work for them, since they tend not to have the talent in house.

Coleman: I don’t think there is a “reliable” way to do it. Rather, for many adversaries, with constant research and regular data point collection, it is possible to reliably track specific adversary groups. Whether or not they could be distinguished as “military”, “private”, or “paramilitary” is up for debate. I think that requires very good visibility into the cyber aspects of the country / military in question.

Lesley: That would be nearly impossible without boots-on-ground, traditional intelligence resources that you and I will never see (or illegal hacking of our own).

 

 

Why don’t all security experts publicly corroborate the attribution provided by investigating firms and agencies?

DA_667: In most cases, disagreements on attribution boil down to:

  1. Lack of information
  2. Inconclusive evidence
  3. Said investigating firms and/or agencies are not laying all the cards out on the table; security experts do not have access to the same dataset the investigators have (either due to proprietary vendor data, or classified intelligence)

Munin: Lack of proof. It’s very hard to prove with any reliability who’s done what online; it’s even harder to make it stick. Plausible deniability is very much a thing.

Lesley: Usually, because I don’t have enough information. We might lean towards agreeing or disagreeing with the conclusions of the investigators, but at the same time be reluctant to stake our professional and ethical reputation on somebody else’s investigation of evidence we can’t see ourselves. There have also been many instances where the media jumped to conclusions which were not yet appropriate or substantiated. The important thing to remember is that attribution has nothing to do with what we want or who we dislike. It’s the study of facts, and the consequences for being wrong can be pretty dire.

Krypt3ia: Because they are smarter than the average Wizard?

Coleman: In my opinion, many commercial investigative firms are driven to threat attribution by numerous non-evidential factors. There’s kind of a “race to the top (bottom?)” these days for “threat intelligence”, and a significant influence on private companies to be first-to-report, as well as show themselves to have unique visibility to deliver a “breaking” story. In a word: marketing. Each agency wants to look like they have more and better intelligence on the most advanced threats than their competition. Additionally, there’s an audience component to it as well. Many organizations suffering a breach would prefer to adopt the story line that their expensive defenses were breached by “the most advanced well-funded nation-state adversary” (a.k.a. “Deep Panda”), versus “some 13 year-olds hanging out in an IRC chatroom named #operation_dildos”. Because of this, I generally consider a lot of public reporting conclusions to be worth taking with a grain of salt, and I’m more interested in the handful that actually report technical data that I can act upon.

Viss: Some want to get in bed with (potential)employers so they cozy up to that version of the story. Some don’t want to rock the boat so they go along with the boss. Some have literally no idea what they’re talking about, they’re fresh out of college and they can’t keep their mouths shut. Some are being paid by someone to say something. It’s a giant grab bag.

 

 

Should my company attribute network attacks to a nation state?

DA_667: No. Oftentimes, your organization will NOT gain anything of value attempting to attribute an attack to a given nation-state. Identify the Indicators of Compromise as best you can, and distribute them to peers in your industry or professional organizations who may have more resources for determining whether an attack was a part of a campaign spanning multiple targets. Focus on recovery and hardening your systems so you are no longer considered a soft target.

Viss: I don’t understand why this would be even remotely interesting to average businesses. This is only interesting to the “spymaster bobs” of the world, and the people who routinely fellate the intelligence community for favors/intel/jobs/etc. In most cases it doesn’t matter, and in the cases it DOES matter, it’s not really a public discussion – or a public discussion won’t help things.

Lesley: For your average commercial organization, there’s rarely any reason (or sufficient data) to attribute an attack to a nation state. Identifying the type of actor, IOCs, and TTPs is normally adequate to maintain threat intelligence or respond to an incident. Be very cautious (legally / ethically / career-wise) if your executives ask you to attribute to a foreign government.

Munin: I would advise against it. You’ll get a lot of attention, and most of it’s going to be bad. Attribution to nation-state actors is very much part of the espionage and diplomacy game and you do not want to engage in that if you do not absolutely have to.

Ryan: No. The odds of your organization even being equipped to make such an attribution are almost nil. It’s not worth expending the resources to even attempt such an attribution. The gain, even if you are successful, would still be minimal.

Coleman: I generally would say “no”. You should ask yourselves, if you actually had that information in a factual form, what are you going to do? Stop doing business in that country? I think it is generally more beneficial to focus on threat grouping/clustering (if I see activity from IP address A.B.C.D, what historically have I observed in relation to that that I should look out for?) over trying to tie back to “nation-states” or even to answer the question “nation state or not?”. If you’re only prioritizing things you believe are “nation-state”, you’re probably losing the game considerably in other threat areas. I have observed very few examples where nation-state attribution makes any significant difference, as far as response and mitigation are concerned.

Krypt3ia: Too many try and fail.

 

Can’t we just block [nation state]?

Krypt3ia: HA! I have seen rule sets on firewalls where they try to block whole countries. It’s silly. If I am your adversary and I have the money and time, I will get in.

DA_667: No, and for a couple reasons. By the time a research body or a government agency has released indicators against a certain set of tools or a supposed nation-state actor to the general public, those indicators are long past stale. The actors have moved on to using new hosts to hide their tracks, using new tools and custom malware to achieve their goals, and so on, and so forth. Not only that, but the solution isn’t as easy as block [supposed malicious country’s IP address space]. A lot of companies that are targeted by nation-states are international organizations with customers and users that live in countries all over the world. Therefore, you can’t take a ham-fisted approach such as blocking all Elbonian IP addresses. In some cases, if you’re a smaller business who has no users or customers from a given country (e.g. a local bank somewhere in Nevada would NOT be expecting customers or users to connect from Elbonia.), you might be able to get away with blocking certain countries and that will make it harder for the lowest tier of attackers to attack your systems directly… but again, given what you now know about how easy it is for a nation-state actor to compromise another system, in another country, you should realize that blocking IP addresses assigned to a given country is not going to be terribly helpful if the nation-state is persistent and has high motivation to attack you.

Munin: Not really. IP blocks will kill the low bar attacks, but those aren’t really what you’re asking after if you’re in this FAQ, are you? Any attacker worth their salt can find some third party to proxy through. Not to mention IP ranges get traded or sold now and then – today’s Chinese block could be someone else entirely tomorrow.

Lesley: Not only might this be pretty bad for business, it’s pretty easy for any actor to evade using compromised hosts elsewhere as proxies. Some orgs do it, though.

Coleman: Depending upon the impact, sure, why not? It’s up to you to inform your leadership, and if your leaders are fine with blocking large swaths of the Internet that are sometimes the endpoint of an attack, then that’s acceptable. Some associates in my peer group are able to execute this strategy successfully. Sometimes (3:30pm on a Friday, for instance) I envy them.

Ryan: If you’re not doing business outside of your local country and don’t ever care to, it couldn’t hurt. Restricting connections to your network to only those from your home country will likely add some security. However, if your network is a target, doing this won’t stop an actor from pivoting through a location that is within your whitelist to gain access to your network.

Viss: Sure! Does your company do business with China? Korea? Pakistan? Why bother accepting traffic from them? Take the top ten ‘shady countries’ and just block them at the firewall. If malware lands on your LAN, it won’t be able to phone home. If your company DOES do business with those countries, it’s another story – but if there is no legitimate reason 10 laptops in your sales department should be talking to Spain or South Africa, then it’s a pretty easy win. It won’t stop a determined attacker, but if you’re paying attention to dropped packets leaving your network, you’re gonna find out REAL FAST if there’s someone on your LAN. They won’t know you’re blocking until they slam headfirst into a firewall rule and leave a bunch of logs.
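(As a rough illustration of Viss’s point about watching what hits a country block, here is a hedged sketch that checks outbound connections against CIDR ranges you have chosen to deny. The ranges and log entries are made-up placeholders using RFC 5737 documentation addresses, not real country allocations; in practice the ranges would come from your GeoIP feed or registry data, and the block itself would live in the firewall.)

```python
# Hypothetical sketch: flag outbound connections whose destination falls in a
# CIDR range you have decided to block. Addresses below are documentation
# placeholders, not real country allocations.
import ipaddress

blocked_ranges = [ipaddress.ip_network(cidr) for cidr in (
    "203.0.113.0/24",   # stand-in for a "blocked country" allocation
    "198.51.100.0/24",
)]

# (src, dst) pairs as they might appear in egress logs.
outbound = [
    ("10.0.5.23", "203.0.113.77"),   # internal host -> blocked range
    ("10.0.5.40", "93.184.216.34"),  # internal host -> not blocked
]

for src, dst in outbound:
    if any(ipaddress.ip_address(dst) in net for net in blocked_ranges):
        print(f"ALERT: {src} tried to reach {dst} in a blocked range")
```

The value Viss highlights is less in the block itself than in reviewing the denied egress attempts: a burst of drops from one internal host is a loud signal that something on your LAN is trying to phone home.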

 

Hey, what’s with the Attribution Dice?

Ryan: I’m convinced that lots of threat intelligence companies have these as part of their standard report writing kit.

Lesley: They’re awesome! If you do purposefully terrible, bandwagon attribution of the trendy scapegoat of the day, infosec folks are pretty likely to notice and poke a little fun at your expense.

Krypt3ia: They are cheaper than Mandiant or CrowdStrike and likely just as accurate.

Coleman: In some situations, the “Who Hacked Us?” web application may be better than public reporting.

Munin: I want a set someday….

Viss: They’re more accurate than the government, that’s for sure.

DA_667: I have a custom set of laser-printed attribution dice that a friend had commissioned for me, where my twitter handle is listed as a possible threat actor. But in all seriousness, the attribution dice are a sort of inside joke amongst security experts who deal in threat intelligence. Trying to do attribution is a lot like casting the dice.

What’s a Challenge Coin, Anyway? (For Hackers)

So what are these “challenge coins”?

Challenge coins come from an old military tradition that bled into the professional infosec realm, and then into the broader hacker community, through the continual overlap between those communities. Somewhat like an informal medal, a coin generally represents somewhere you have been or something you have accomplished. You can buy some, and be gifted or earn others; the latter are generally more traditional and respected.

There are a few stories about how challenge coins originated in the U.S. military; most have been lost to history or embellished over time, but I will tell you the tale as it was passed down to me:

During World War I, an officer gifted coin-like squadron medallions to his men. One of his pilots decided to wear his about his neck, as we would wear dog tags today. Some time later, that pilot’s plane was shot down by the enemy; he was forced down behind enemy lines and captured. As a prisoner of war, all of his papers were taken, but as was customary, he was allowed to keep his jewelry, including the medallion. During the night, the pilot managed to take advantage of a distraction to make a daring escape. He spent days avoiding patrols and ultimately made his way to the French border. Unfortunately, he could not speak any French, and with no uniform and no identification, the French soldiers who found him assumed he was a spy. The only thing that spared him execution was his medallion, upon which was a squadron emblem the soldiers recognized and could verify.

Today, people who collect challenge coins tend to have quite a few more than just one.

What’s the “challenge”?

Challenge coins are named such because anybody who has one can issue a challenge to anybody else who has one. The game is a gamble and goes like this:

  • The challenger throws down their coin, thereby issuing a challenge to one or more people.
  • The person or people challenged must each immediately produce a coin of their own.
  • If any of the people challenged cannot produce a coin, they must buy a drink for the challenger.
  • If the people challenged all produce coins, the challenger must buy the next round of drink(s) for them.

Therefore, a wise person carries a coin in a pocket, wallet, or purse at all times!

How do I get challenge coins?

As I mentioned before, the three major ways to get a challenge coin in the military and in the hacking community are to buy one, earn one, or be gifted one.

  • You can buy coins at many places and events to show you were there. Many cons sell them now, as well as places like military installations and companies. They’re a good fundraiser.
  • You can be gifted a coin. This is normally done as a sign of friendship or gratitude, and the coins gifted are normally ones that represent a group or organization like a military unit, company, non-profit, or government agency. The proper way to gift a coin is to pass it enclosed in a handshake.
  • You can earn a coin. Many competitions and training programs offer special coins for top graduates, champions, and similar accomplishments (similar to a trophy). This is the most traditional way to receive a coin.

How do I display my coins, once I have more than one?

On a coin rack or in a coin display case (available from retailers such as https://www.amazon.com).


Can I make my own challenge coins? How much do they cost?

Yes. Lots of companies will sell you challenge coins. The price varies drastically based on the number ordered, colors, materials, and complexity of the vector design.

Think about whether you plan to sell coins to people, gift them on special occasions, or make them a reward, and plan accordingly.

Can I see some examples of infosec / hacking challenge coins?

Sure! I hope you’ve enjoyed this brief introduction to challenge coins. Here are some of my friends and their favorite challenge coins: