Well, the Tiger is going to Black Hat in Las Vegas in a few weeks' time, so I’ve been planning our risk mitigation strategy for when we are there. The last thing I want is for the Tiger to show up on the Wall of Sheep. To that end, I’ve written up an internal document that covers potential threats and the mitigation steps we will take to limit the risk they present. I’m not going to release that document to the public right now, not because I think doing so would be a security risk (I’m actually a big proponent of transparent security), but because I haven’t had time to polish it into something I would foist on the general public, rather than the suffering souls here at Tigera who have to read what I write.
Instead, I thought I would spend a bit of time going through my methodology and some examples. The reason I think this might be useful is that we (and everyone else) are talking about Zero Trust environments and defining our security posture around the concept of an untrusted environment, as discussed in the Zero Trust book and the Google BeyondCorp papers.
Tigera is a very cloud-centric company, not only in terms of our product but also in how we run our company. We have next to no critical on-site resources, and very little self-hosted infrastructure in public cloud providers. If there’s a SaaS solution, we will try to use it before we go down the Roll-Your-Own / Deploy-Your-Own (RYO/DYO) path.
We also assume that Tigerians will be accessing those resources from a wide variety of locations over a wide range of networks (from the HQ office network to co-working spaces, to home, airport, customer and coffee-shop networks). We are not alone in that set of assumptions. That said, we don’t use VPNs (they tend to cause more grief than they solve) and rely on TLS, 2FA, and other concepts to secure that access.
As with any security posture, we needed to evaluate each threat, the risk it presents to the company, and the cost to mitigate it in dollars, lost productivity, frustration, etc.
As a general rule, we’ve drawn the hard line around requiring TLS, limiting the number of unique accounts someone has (instead using a smaller set of master accounts that provide authentication and authorization for other services), requiring some form of 2FA for access to those master accounts, etc. None of this should look odd to most folks who base their organization on cloud-delivered SaaS.
However, some attacks can breach the current level of security that we deploy. In those cases, I judged that the likelihood of such an attack, and the damage it could do, were outweighed by the cost of mitigation: making it harder for folks to work, adding technical complexity to their platforms, etc. These are therefore cataloged risks and threats, but ones that I have judged acceptable to bear. That is, until we decided to send folks to Black Hat for the first time.
Part of that risk evaluation was that the threats I was looking at were technically much more complex than standard Starbucks snooping, and would most likely need to be directed at us in particular; frankly, we’re just not that interesting.
However, that calculation changes in the light of Black Hat. You could be a target of attack at events like Black Hat, or at other events or locations you travel to. There will be lots of skilled opposition at Black Hat explicitly trying to collect scalps: in most cases just for bragging rights or to test a potential vulnerability, but in some cases with more serious motivations. In the light of that environment, I decided it was time to go back and re-appraise the threat/risk/cost equations.
The outcome of that evaluation is that we will be tightening up across the company during the Black Hat event in general, and adding some specific guards and wards for the folks at the event. I also look at this as a good lab to see if the additional guards and wards are really more burdensome, or whether we should just adopt them going forward. Let’s look at some of these.
- There have been attacks at Black Hat for years where SDR (Software Defined Radio) based pico-cells can insert a Man-in-the-Middle between your phone and your carrier. This attack allows hackers to intercept, drop, modify, etc. various traffic, including SMS. This is one way (not the only one) in which an SMS-based 2FA security regime can be breached. To mitigate this (and the other SMS-based 2FA vulnerabilities), we will be deploying U2F/FIDO keys for the folks going to Black Hat.
- We are encouraging our staff not to bring phones that they care about, as the same technique used for the SMS interception above can be used to force some phones to ‘upgrade’ their firmware with an image the MitM is offering. Such an event never ends well.
- We will be tightening up our laptop/phone/tablet endpoint filtering rules to ONLY allow TLS or other cryptographically secured and authenticated traffic out of those devices.
- We will be using DNS over HTTPS and/or DNSCrypt to ensure that we are talking to the DNS servers we think we are talking to, limiting various attack vectors, such as compromised CAs and/or Internationalized Domain Name (IDN) DNS zone spoofing.
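To make the TLS-only egress point above concrete, here is a minimal sketch of what such an endpoint policy can look like as an nftables ruleset on a Linux laptop. The table name, the exact set of allowed ports, and the decision to permit SSH are all illustrative assumptions, not our actual ruleset; the idea is simply default-deny outbound, with narrow exceptions for cryptographically protected protocols.

```
#!/usr/sbin/nft -f
# Illustrative egress policy: drop everything outbound by default, then
# allow only traffic we expect to be encrypted and authenticated end-to-end.
table inet egress {
    chain output {
        type filter hook output priority 0; policy drop;

        oif lo accept                        # local loopback
        ct state established,related accept # replies within allowed flows

        tcp dport 443 accept                 # HTTPS (and DNS over HTTPS)
        udp dport 443 accept                 # QUIC
        tcp dport 22 accept                  # SSH (assumed needed; adjust)

        # Plain DNS (port 53) is deliberately NOT allowed: resolution must
        # go over DoH/DNSCrypt, per the policy described above.
    }
}
```

A rule set like this is also a cheap way to discover which of your applications quietly depend on unencrypted protocols: they simply stop working on the hostile network instead of leaking.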
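To illustrate what DNS over HTTPS buys you, here is a small Python sketch that resolves a name via a public DoH resolver’s JSON API (Cloudflare’s endpoint is used here purely as an example; any DoH resolver you trust would do). Because the query rides inside an ordinary TLS connection to a known hostname, a hostile network can’t silently substitute its own DNS answers the way it can with plain UDP port 53.

```python
import json
import urllib.parse
import urllib.request

# Example public DoH resolver (an assumption for illustration, not a recommendation).
DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"


def build_doh_request(name: str, record_type: str = "A") -> urllib.request.Request:
    """Build a DoH lookup using the JSON API offered by several public resolvers.

    (RFC 8484 defines the binary wire format; the JSON variant shown here is
    easier to read in an example.)
    """
    query = urllib.parse.urlencode({"name": name, "type": record_type})
    return urllib.request.Request(
        f"{DOH_ENDPOINT}?{query}",
        headers={"Accept": "application/dns-json"},
    )


def resolve(name: str, record_type: str = "A") -> list:
    """Perform the lookup over HTTPS; TLS authenticates the resolver."""
    req = build_doh_request(name, record_type)
    with urllib.request.urlopen(req, timeout=5) as resp:
        answers = json.load(resp).get("Answer", [])
    return [a["data"] for a in answers]


# Example (requires network access):
# resolve("example.com")  # returns the A records for example.com, fetched over TLS
```

Note that DoH moves the trust to the resolver operator and the CA validating its certificate; it narrows the attack surface from "anyone on the local network" to "the resolver and the PKI", which is the trade-off being made here.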
The key thing I want to point out is that the cost/benefit calculations we do about our security environment cannot be static. You must watch the threat environment as it evolves, as your value as a target changes, and as the environment in which you operate changes, and when there is a change, rerun that cost/benefit analysis. Use perceived changes as an excuse to experiment and test whether new security methods are really as painful as you think, or whether they have become mostly transparent as technology advances. Just as with anything else in this new world of application delivery, rapid cycles of develop – test – deploy – evaluate are necessary. Don’t be afraid to fail (too complex a 2FA system, for example), but fail quickly and always learn from it.
Hope to see you at Black Hat, but not on the Wall of Sheep.