Why Security Teams Need Visibility into Container Networks

Containers and orchestration systems rely on layers of technical abstraction to support auto-scaling and distributed applications. Those same abstractions obscure application communication flows, leaving security teams blind, rendering traditional tools useless, and exposing the application to risk.

Containerized applications come in all shapes and sizes, and they’re often a mashup of service-oriented, microservice, and monolithic architectures. Services communicate with one another, some depend on others, and each feature exercises an expected technical flow across services. A single application can also be split into several services: one service may run a web server container, another may consume from a message queue, and a third may handle scheduled jobs.

In this blog post, we will cover some of the technical challenges with mapping container networks, and discuss why it’s important for security teams to have visibility into application flows.

Containers Create Security Challenges

Containers offer many advantages to organizations and engineering teams. They help technically diverse teams ship products faster, meet scaling challenges, and let operations standardize deployment infrastructure. Containers are managed by orchestration systems like Kubernetes, which represents applications as “pods”: groups of functionally related containers. All containers in a pod run on the same node in the cluster. A container can communicate with every other container in its pod, and with other pods in the cluster through Kubernetes services.
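
As a quick illustration, the following sketch uses the official Kubernetes Python client to list each pod, the node it is scheduled on, and the containers grouped inside it. It assumes a reachable cluster and a local kubeconfig; nothing here is specific to any one application.

```python
# List every pod, its node, and the containers grouped inside it.
# Assumes a reachable cluster and a local kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    containers = [c.name for c in pod.spec.containers]
    print(f"{pod.metadata.namespace}/{pod.metadata.name} "
          f"on node {pod.spec.node_name}: containers={containers}")
```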

How containers establish connections varies based on where the connection is going. Container-to-container connections within the same pod (i.e., intra-pod communication) happen over localhost, because the containers share a single network namespace; traffic reaches the correct container according to the port. And because each pod has its own IP address, multiple pods can run containers on the same port (e.g., different applications each running a web server on port 80) without conflict. Under the hood, this is implemented with a mix of iptables rules, the container network interface (CNI) plugin, and Kubernetes machinery that is managed automatically.
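
For example, a sidecar container can reach the application container in the same pod over plain localhost, with no pod IP involved. This is a minimal sketch; the port 8080 and the /healthz path are assumptions, not fixed Kubernetes values.

```python
# Hypothetical sidecar process: because containers in a pod share one
# network namespace, the app container is reachable on localhost.
import urllib.request

resp = urllib.request.urlopen("http://localhost:8080/healthz", timeout=2)
print(resp.status)  # the application container answers; no pod IP is needed
```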

Now, think about the abstractions a container orchestration system like Kubernetes requires to make various networking topologies work. For example, Kubernetes can create publicly accessible load balancers for configured applications. These load balancers route traffic to any node in the cluster, and the node then routes the traffic to the correct container.
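
To make that concrete, here is a hedged sketch using the Kubernetes Python client to declare a Service of type LoadBalancer; the name "web-lb", the app=web selector, and the port numbers are illustrative assumptions.

```python
# Declare a Service of type LoadBalancer: Kubernetes provisions an
# external load balancer that forwards to matching pods on any node.
from kubernetes import client, config

config.load_kube_config()
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-lb"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "web"},  # route to pods labeled app=web
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```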

Comparatively, connections generated by Kubernetes services are more complex. A Kubernetes service works like a proxy to all of the associated pods. Each service has a unique internal cluster IP and DNS name. Containers (as part of pods) establish connections to the cluster IP or DNS name, and the connection is then routed to a matching pod, which may be running anywhere in the cluster. For example, let’s say pod X connects to service A. Service A matches two pods (P1, P2) running on nodes (N1, N2). Pod X may actually connect to P1 on N1 or P2 on N2. Kubernetes handles this behind the scenes.
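
The sketch below shows what pod X experiences, assuming it runs inside the cluster (service DNS names do not resolve from outside): it connects to one stable name while Kubernetes transparently picks P1 or P2. The service name "service-a" and the "default" namespace are assumptions.

```python
# From inside a pod: resolve the service's stable DNS name and connect.
import socket

cluster_ip = socket.gethostbyname("service-a.default.svc.cluster.local")
print(f"service-a resolves to cluster IP {cluster_ip}")

sock = socket.create_connection((cluster_ip, 80), timeout=2)
# From here, traffic may land on P1/N1 or P2/N2; the client cannot tell.
sock.close()
```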

You Can’t Secure What You Can’t See

So, how do you identify which connections are opened by the orchestration layer and which by your own processes? How can security teams peel back the layers and gain the visibility needed to see what’s really going on behind the scenes?

Consider a pre-container application that handles medical records, with traditional visibility and security controls in place. In the web server tier, whitelisted processes are permitted to communicate on ports 80 and 443, and all other traffic is blocked. Attackers routinely scan ports to launch attacks and move across the network toward high-value targets. We’ve seen on numerous occasions that attackers on a host in the web server tier attempt to move laterally to a host in the database tier using a whitelisted port. This is why it’s important for security teams to have visibility into all connection flows: so they can identify when a host is communicating suspiciously. Connections from the web server process to the database server are expected, but connections from a random Python program are not.
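
The sketch below shows what process-aware visibility could look like on a single host, assuming the third-party psutil library and a hypothetical whitelist: any process other than the expected web server that opens a connection to the database port gets flagged.

```python
# Flag non-whitelisted processes connecting to the database port.
# May require root privileges to see connections for all processes.
import psutil

WHITELISTED = {"nginx", "httpd"}   # processes expected to reach the DB tier (assumed)
DB_PORT = 5432                     # assumed database port

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.raddr.port == DB_PORT and conn.pid:
        try:
            name = psutil.Process(conn.pid).name()
        except psutil.NoSuchProcess:
            continue  # process exited between enumeration and lookup
        if name not in WHITELISTED:
            print(f"suspicious: {name} (pid {conn.pid}) -> "
                  f"{conn.raddr.ip}:{DB_PORT}")
```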

Using container orchestration does not mean sacrificing security or insight into your applications. Instead, it requires equally powerful technology that peels back the abstraction layers to show which applications and containers are communicating, how they’re communicating, and across which hosts they are communicating. Without this visibility, it is difficult to enforce micro-segmentation policies that secure containerized applications, to determine compliance risks, and to detect anomalies before they cause a security breach or impact customers.
