TLS (Transport Layer Security) is the protocol used to secure HTTP traffic. There’s an increasing push to use HTTPS by default, and non-HTTPS sites are being labeled by browsers as “insecure.” Meanwhile, HTTPS adoption reached the tipping point of 50% of web traffic just last year.

To further the adoption of HTTPS – and iron out existing problems in the protocol – a new version of TLS has arrived, version 1.3. It’s the first update in over eight years. Changes include:

  • Removed a round trip from the handshake, reducing the latency of setting up a TLS connection
  • Stripped out insecure cryptographic algorithms, leaving only vetted ones that have gone through open cryptographic analysis
  • Addressed downgrade attacks – where an attacker forces your connection onto a weaker cryptographic protocol, allowing them to intercept your data
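One practical consequence of the downgrade fixes is that a client can simply refuse anything older than TLS 1.3. A minimal sketch using Python’s standard `ssl` module (the context here is illustrative, not tied to any particular application):

```python
import ssl

# Build a client-side context that refuses any protocol version older
# than TLS 1.3, so a man-in-the-middle cannot downgrade the connection
# to a weaker version with known attacks.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
```

Any handshake attempted through `ctx` will now fail outright rather than fall back to TLS 1.2 or earlier.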

These changes have left TLS faster, simpler, and more secure, with one last big change worth mentioning: perfect forward secrecy. Perfect forward secrecy was optional under v1.2 but is mandated by v1.3. Under TLS 1.2, the majority of traffic between clients and servers is encrypted using a rotating series of session keys. If an attacker is monitoring your network traffic, they can record both the session key exchanges and the encrypted traffic itself. With the older RSA key exchange, all of those session keys are protected by the server’s relatively static certificate key. If the attacker steals the server’s private key, they can begin decrypting the intercepted session keys, as well as any other stored traffic that they may have.

Single points of failure are bad news in security. New versions of TLS employ a key exchange based on Diffie-Hellman, which does offer perfect forward secrecy. In Diffie-Hellman, each party combines a freshly generated private random value with shared public parameters, producing a public key. The client and server now each have a unique public key. These public keys are then exchanged and combined with each side’s private value once again, resulting in a shared secret key that is identical on both ends.
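The exchange above can be sketched in a few lines. This is a toy finite-field Diffie-Hellman with deliberately small, hypothetical parameters, purely to show the arithmetic – real TLS 1.3 uses vetted groups such as X25519 with far larger values:

```python
import secrets

# Toy Diffie-Hellman -- illustration only, NOT secure parameters.
p = 0xFFFFFFFB  # shared public prime modulus (hypothetical choice)
g = 5           # shared public generator

# Each side picks a fresh private random value...
a = secrets.randbelow(p - 2) + 2   # client's private value
b = secrets.randbelow(p - 2) + 2   # server's private value

# ...and derives a public key from it.
A = pow(g, a, p)  # client's public key, sent to the server
B = pow(g, b, p)  # server's public key, sent to the client

# Each side combines the peer's public key with its own private value.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)

assert client_secret == server_secret  # same secret, never transmitted
```

Because `a` and `b` never leave their respective machines, an eavesdropper who records `A` and `B` cannot reconstruct the shared secret.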

To reiterate, the client and server both arrive at the same secret key without ever actually transmitting it between the two parties. This means that an attacker can’t intercept it. TLS 1.3 de-emphasizes certificates and relies on these ephemeral secret keys almost exclusively – and secret keys are rotated as fast as every five minutes. A theoretical attacker now has a much smaller attack surface and is forced to expend much more effort to decrypt a smaller amount of data.

So, How Does This Impact IDS/IPS?

In a Kubernetes environment, hundreds of pods are communicating with the internet at any given time. Under TLS 1.2, this represented a massive attack surface – an attacker could steal a server’s private key and gain access to every piece of data being communicated across the internet.

Under TLS 1.3, the game is changed – the attacker has to expend large amounts of effort to decrypt a few megabytes of data at most. This assumes, of course, that all traffic is encrypted and that the keys are widely separated.

Enter deep packet inspection (DPI).

Typical DPI models assume that your TLS proxy (the component that encrypts and decrypts traffic) is placed next to the internet gateway. Unfortunately, this approach is incompatible with the perfect forward secrecy provided by TLS 1.3.

The approach above lets administrators conduct DPI easily because traffic runs unencrypted from the Kubernetes Pods, through the hosts, all the way to the TLS proxy. See the danger? An attacker can inspect that traffic just as easily. What’s more, concentrating your traffic at the TLS proxy means concentrating your keys – making it easy for attackers to totally own your communications.

There’s another approach – putting the TLS proxy on the hosts. Now you, and your attacker, can only run DPI on the hosts. This makes the attacker’s job harder, but not much harder. Cracking a single host is a smaller prize than cracking the gateway, but each host still contains hundreds of Pods.

Now what? There’s another answer, and it’s new – put the TLS proxy in each Pod. This narrows the attack surface down to a single point. Your attacker must attack each Pod individually to gain access to its communications. To have worthwhile access, your attacker must maintain a permanent active presence across a hundred thousand Pods or more – no mean feat.

By moving the DPI boundary into the pod, you create a secure, flexible, and ephemeral environment that’s perfect for cloud-native applications. It has the side benefit of making it prohibitively hard for an attacker to get at your communications. The actual implementation of the DPI boundary at the pod level may be easier said than done. For information on that process and more, watch our on-demand webinar or request a Tigera demo.
