
Containers vs Virtual Machines – Your Cheat Sheet to Know the Differences

Docker, Kubernetes, and even Windows Server Containers have seen a huge rise in popularity over the last few years. With the application container market having a projected CAGR (Compound Annual Growth Rate) of 32.9% from 2018 to 2023, we can expect that trend to continue. Containers have a huge impact on application delivery and are a real game changer for DevOps teams.

However, despite the popularity of containerization, there is still significant confusion and misunderstanding about how containers work and the difference between containers and virtual machines. This also leads to ambiguity in how to properly secure infrastructure that uses containers.

In this piece, we’ll provide a crash course on containers vs virtual machines by comparing the two, describing some common use cases for both, and providing some insights to help you keep both your virtual machines and containers secure.

What are Virtual Machines?

VMware’s description of a virtual machine as a “software computer” is a succinct way to describe the concept. A virtual machine is effectively an operating system or other similar computing environment that runs on top of software (a hypervisor) as opposed to directly on top of bare metal computer hardware (e.g. a server).

To better conceptualize what a virtual machine is, it’s useful to understand what a hypervisor is. A hypervisor is a special type of operating system that enables a single physical computer or server to run multiple virtual machines with different operating systems. The virtual machines are logically isolated from one another, and the hypervisor virtualizes the underlying hardware, giving the virtual machines virtual compute resources (CPU, RAM, storage, etc.) to work with. Two of the most popular hypervisors today are Microsoft Hyper-V and VMware ESXi.

In short, hypervisors abstract away the hardware layer so virtual machines can run independent of the underlying hardware resources. This technology has enabled huge strides in virtualization and cloud computing over the last two decades.

Note: What we’ve described here is a “Type 1” hypervisor. If you’re interested in the nuts and bolts, there are also “Type 2” hypervisors (e.g. VirtualBox or VMware Fusion) that run on top of standard operating systems (e.g. Windows 10).

What are Containers?

A container is a means of packaging an application and all its dependencies into a single unit that can run anywhere the corresponding container engine is available. To conceptualize this, we can compare what a container engine does for containers to what a hypervisor does for virtual machines. While a hypervisor abstracts away hardware for the virtual machines so they can run an operating system, a container engine abstracts away an operating system so containers can run applications.
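As a minimal sketch (the image name and port mapping here are just examples), the same packaged image runs unchanged on any host where the Docker Engine is installed:

# Fetch a packaged application and run it; nothing beyond the container engine
# needs to be installed on the host.
docker pull nginx:latest
docker run -d --name web -p 8080:80 nginx:latest
curl http://localhost:8080   # the containerized app answers on the mapped port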

If you’re new to the world of containers and containerization, there is likely a ton of new terminology you need to get up to speed on, so here is a quick reference:

  • Docker. One of the biggest players in the world of containers and makers of the Docker Engine. However, there are many other options for using containers such as LXC (Linux Containers) and CoreOS rkt.
  • Kubernetes. A popular orchestration system for managing containers, often written as “K8s” for short. Other, less popular orchestration tools include Docker Swarm and Marathon (which runs on Apache Mesos).
  • Cluster. A group of machines (nodes) consisting of a “master” machine that handles orchestration and one or more worker machines that actually run pods.
  • Pods. Pods are one or more containers in a cluster with shared resources that are deployed for a specific purpose (see the short kubectl sketch after this list).
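If you have access to a running Kubernetes cluster, a few kubectl commands make the cluster/pod relationship concrete (a minimal sketch; the workload name and image are illustrative):

kubectl get nodes                 # the master and worker machines that make up the cluster
kubectl run web --image=nginx     # ask the cluster to run a simple nginx workload
kubectl get pods -o wide          # the resulting pod(s) and the worker nodes they were scheduled on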

Understanding the differences between containers vs virtual machines becomes easier when you view them from the standpoint of what is being abstracted away to provide the technology. With virtual machines, you’re abstracting away the hardware that would have previously been provided by a server and running your operating system. With containers you’re abstracting away the operating system that has been provided by your virtual machine (or server) and running your application (e.g. MySQL, Apache, NGINX, etc.).

Use Cases for Containers vs Virtual Machines

At this point, you may be asking: “Why bother with containers if I already have virtual machines?” While that is a common question, it’s important to understand that each technology has valid use cases, and there is plenty of room for both in the modern data center.

Many of the benefits of containers stem from the fact they only include the binaries, libraries, other required dependencies, and your app – no other overhead. It should be noted that all containers on the same host share the same operating system kernel. This makes them significantly smaller than virtual machines and more lightweight. As a result, containers boot quicker, ease application delivery, and help maximize efficient utilization of server resources. This means containers make sense for use cases such as:

  • Microservices
  • Web applications
  • DevOps testing
  • Maximizing the number of apps you can deploy per server

Virtual machines on the other hand are larger and boot slower, but they are logically isolated from one another (with their own kernel) and can run multiple applications if needed. They also give you all the benefits of a full-blown operating system. This means virtual machines make sense for use cases such as:

  • Running multiple applications together
  • Monolithic applications
  • Complete logical isolation between apps
  • Legacy apps that require old operating systems

It’s also important to note that the topic of containers vs virtual machines is not zero-sum and the two can be used together. For example, you can install the Ubuntu Operating System on a virtual machine, install the Docker Engine on Ubuntu, and then run containers on top of the Docker Engine.
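As a rough sketch of that layered setup on an Ubuntu virtual machine (package names are Ubuntu’s; adjust for your distribution):

# Inside the Ubuntu VM: install the Docker Engine, then run containers on top of it.
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
sudo docker run -d --name hello nginx   # this container now runs inside the virtual machine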

Security Challenges of Containers vs Virtual Machines

As data centers and hybrid cloud infrastructures integrate containers into an already complex ecosystem that includes virtual machines running on-premises and a variety of cloud services providers, keeping up with security can be difficult.

While virtual machines do offer logical isolation of kernels, there is still a myriad of challenges associated with virtual machines, including limited visibility into virtual networks, sprawl leading to an expanded attack surface, and hypervisor security. These problems only become more pronounced as your infrastructure scales and grows more complex. Without the proper tools, maintaining adequate visibility and security is difficult.

This is where Guardicore Centra can help. Centra enables enterprises to gain process-level visibility over the entirety of their infrastructure, whether virtual machines are deployed on-premises, in the cloud, or a mixture of both. Further, micro-segmentation helps limit the spread of threats and meet compliance requirements.

Micro-segmentation is particularly important when you begin to consider the challenges associated with container security. Containers running on the same operating system share the same kernel. This means that a single compromised container could lead to the host operating system and all the other containers on the host being compromised as well. Micro-segmentation can help limit the lateral movement of breaches and further harden a hybrid cloud infrastructure that uses containers.

Interested in Learning More About Securing Your Infrastructure?

That was our quick “cheat sheet” regarding containers vs virtual machines. We hope you enjoyed it! If you’d like to learn more about Docker security, check out our 5 Docker Security Best Practices to Avoid Breaches article. To learn more about securing modern infrastructure, check out our white paper on securing modern data centers and clouds. If you’d like to learn more about how Centra can help secure your hybrid cloud infrastructure, contact us today.

5 Docker Security Best Practices to Avoid Breaches

Docker has had a major impact on the world of IT over the last five years, and its popularity continues to surge. Since its release in 2013, 3.5 million apps have been “Dockerized” and 37 billion Docker containers have been downloaded. Enterprises and individual users have been implementing Docker containers in a variety of use cases to deploy applications in a fast, efficient, and scalable manner.

There are a number of compelling benefits for organizations that adopt Docker, but, as with any technology, there are security concerns as well. For example, the recently discovered runc container breakout vulnerability (CVE-2019-5736) could allow malicious containers to compromise a host machine. This means that organizations adopting Docker need to do so in a way that takes security into account. In this piece, we’ll provide an overview of the benefits of Docker and then dive into 5 Docker security best practices to help keep your infrastructure and applications secure.

Benefits of Docker

Many people who are new to the world of containerization and Docker are confused about what makes containers different from running virtual machines on top of a hypervisor. After all, both are ways of running multiple logically isolated apps on the same hardware.

Why then would anyone bother with containerization if virtual machines are available? Why are so many DevOps teams such big proponents of Docker? Simply put, containers are more lightweight, scalable, and a better fit for many use cases related to automation and application delivery. This is because containers abstract away the need for an underlying hypervisor and can run on a single operating system.

Using web apps as an example, let’s review the differences.

In a typical hypervisor/virtual machine configuration you have bare metal hardware, the hypervisor (e.g. VMware ESXi), the guest operating system (e.g. Ubuntu), the binaries and libraries required to run an application, and then the application itself. Generally, another set of binaries and libraries for a different app would require a new guest operating system.

With containerization you have bare metal hardware, an operating system, the container engine, the binaries and libraries required to run an application, and the application itself. You can then stack more containers running different binaries and libraries on the same operating system, significantly reducing overhead and increasing efficiency and portability.
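For example, several applications with completely different binaries and libraries can be stacked side by side on one host and one engine (image names are illustrative):

docker run -d --name web nginx
docker run -d --name cache redis
docker run -d --name db -e MYSQL_ROOT_PASSWORD=example mysql
docker stats --no-stream   # each container's footprint on the shared operating system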

When coupled with orchestration tools like Kubernetes or Docker Swarm, the benefits of Docker are magnified even further.

Docker Security Best Practices

With an understanding of the benefits of Docker, let’s move on to 5 Docker security best practices that can help you address your Docker security concerns and keep your network infrastructure secure.

#1 Secure the Docker host

As any infosec professional will tell you, truly robust security must be holistic. With Docker containers, that means not only securing the containers themselves, but also the host machines that run them. Containers on a given host all share that host’s kernel. If an attacker is able to compromise the host, all your containers are at risk. This means that using secure, up to date operating systems and kernel versions is vitally important. Ensure that your patch and update processes are well defined and audit systems for outdated operating system and kernel versions regularly.
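What such an audit might look like on a Debian/Ubuntu Docker host (a hedged sketch; commands vary by distribution):

uname -r                                        # running kernel version
lsb_release -d                                  # host OS release
docker version --format '{{.Server.Version}}'   # Docker Engine version
sudo apt list --upgradable 2>/dev/null | grep -Ei 'docker|linux-image'   # pending kernel/Docker updates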

#2 Only use trusted Docker images

It’s a common practice to download and leverage Docker images from Docker Hub. Doing so provides DevOps teams an easy way to get a container for a given purpose up and running quickly. Why reinvent the wheel?

However, not all Docker images are created equal and a malicious user could create an image that includes backdoors and malware to compromise your network. This isn’t just a theoretical possibility either. Last year it was reported by Ars Technica that a single Docker Hub account posted 17 images that included a backdoor. These backdoored images were downloaded 5 million times. To help avoid falling victim to a similar attack, only use trusted Docker images. It’s good practice to use images that are “Docker Certified” whenever possible or use images from a reputable “Verified Publisher”.
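One way to enforce this on the client side is Docker Content Trust, which refuses to pull image tags that are not signed (a minimal sketch):

export DOCKER_CONTENT_TRUST=1                # require signed images for pull and run
docker pull nginx:latest                     # fails if the tag has no valid signature
docker trust inspect --pretty nginx:latest   # review who signed the image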

#3 Don’t run Docker containers using --privileged or --cap-add

If you’re familiar with why you should NOT “sudo” every Linux command you run, this tip will make intuitive sense. The --privileged flag gives your container nearly unrestricted capabilities, including access to kernel features and host devices that could be dangerous, so only use this flag to run your containers if you have a very specific reason to do so.

Similarly, you can use the --cap-add switch to grant specific capabilities that aren’t granted to containers by default. Following the principle of least privilege, you should only use --cap-add if there is a well-defined reason to do so.
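A small illustration of least privilege in practice: drop every capability, then add back only the one the workload actually needs (ping needs CAP_NET_RAW):

docker run --rm --cap-drop=ALL alpine ping -c 1 127.0.0.1                     # fails: the raw-socket capability was dropped
docker run --rm --cap-drop=ALL --cap-add=NET_RAW alpine ping -c 1 127.0.0.1   # works with just that one capability added back
# Avoid: docker run --privileged ...   (hands the container nearly unrestricted access to the host)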

#4 Use Docker Volumes for your data

By storing data (e.g. database files & logs) in Docker Volumes as opposed to within a container, you help enhance data security and help ensure your data persists even if the container is removed. Additionally, volumes can enable secure data sharing between multiple containers, and contents can be encrypted for secure storage at 3rd party locations (e.g. a co-location data center or cloud service provider).
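A short sketch of the pattern (the volume and image names are illustrative):

docker volume create app-data
docker run -d --name db -v app-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=example mysql
docker rm -f db                                     # the container is removed...
docker run --rm -v app-data:/data alpine ls /data   # ...but the data in the volume persists and can be shared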

#5 Maintain Docker Network Security

As container usage grows, teams develop a larger and more complex network of Docker containers within Kubernetes clusters. Analyzing and auditing traffic flows becomes harder as these networks grow, and striking a balance between security and performance can be difficult. If security policies are too strict, the inherent advantages of agility, speed, and scalability offered by containers are hamstrung. If they are too lax, breaches can go undetected and an entire network could be compromised.
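At the Docker level, user-defined networks provide a coarse first layer of segmentation (network and container names here are illustrative):

docker network create frontend
docker network create backend
docker run -d --name web --network frontend nginx
docker run -d --name db --network backend -e MYSQL_ROOT_PASSWORD=example mysql
# "web" and "db" sit on separate networks and cannot reach each other
# unless explicitly connected, e.g.: docker network connect backend web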

Process-level visibility, tracking network flows between containers, and effectively implementing micro-segmentation are all important parts of Docker network security. Doing so requires tools and platforms that can help integrate with Docker and implement security without stifling the benefits of containerization. This is where Guardicore Centra can assist.

How Guardicore Centra helps enhance Docker Network Security

The Centra security platform takes a holistic approach to network security that includes integration with containers. Centra is able to provide visibility into individual containers, track network flows and process information, and implement micro-segmentation for any size deployment of Docker & Kubernetes.

For example, with Centra, you can create scalable segmentation policies that take into account both pod-to-pod traffic flows and flows between pods and bare metal servers or virtual machines, without negatively impacting performance. Additionally, Centra can help DevSecOps teams implement and demonstrate the monitoring and segmentation required for compliance with standards such as PCI-DSS 3.2. For more on how Guardicore Centra can help enable Docker network security, check out the Container Security Use Case page.

Interested in learning more?

There are a variety of Docker security issues you’ll need to be prepared to address if you want to securely leverage containers within your network. By following the 5 Docker security best practices we reviewed here, you’ll be off to a great start. If you’re interested in learning more about Docker network security, check out our How to Leverage Micro-Segmentation for Container Security webinar. If you’d like to discuss Docker security with a team of experts that understand Docker security requires a holistic approach that leverages a variety of tools and techniques, contact us today!

CVE-2019-5736 – runC container breakout

A major vulnerability affecting containers was disclosed on Feb 12th. The vulnerability allows a malicious container running as root to break out into the host OS and gain administrative privileges.

Adam Iwaniuk, one of the researchers who took part in the discovery, shares in detail the different paths taken to discover this vulnerability.

The mitigations suggested as part of the research for unpatched systems are:

  1. Use Docker containers with SELinux enabled (--selinux-enabled). This prevents a process inside the container from overwriting the host docker-runc binary.
  2. Use a read-only file system on the host, at least for storing the docker-runc binary.
  3. Use a low-privileged user inside the container, or a new user namespace with uid 0 mapped to that user (that user should not have write access to the runC binary on the host).

The first two suggestions are pretty straightforward, but I would like to elaborate on the third one. It’s important to understand that Docker containers run as root by default unless stated otherwise. This does not necessarily mean that the container also has root access to the host OS, but it is the main prerequisite for this vulnerability to work.
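For comparison, explicitly assigning an unprivileged user changes what the process inside the container runs as (UID/GID 1000 is illustrative; you can also bake this into the image with a USER instruction in the Dockerfile):

docker run --rm --user 1000:1000 alpine id   # prints uid=1000 instead of uid=0 (root)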

To run a quick check whether your host is running any containers as root:


#!/bin/bash

# List running containers whose configured user is root.
# A container runs as root when its User field is empty (the default), "0", or "root".

# get all running docker container names
containers=$(docker ps --format '{{.Names}}')

echo "List of containers running as root"

# loop through all running containers
for container in $containers
do
    user=$(docker inspect --format '{{.Config.User}}' "$container")
    if [ -z "$user" ] || [ "$user" = "0" ] || [ "$user" = "root" ]; then
        echo "Container name: $container"
    fi
done

In any case, as a best practice you should prevent your users from running containers as root. This can be enforced by the existing controls of common orchestration/management systems. For example, OpenShift prevents users from running containers as root out of the box, so your job there is basically done. In Kubernetes, however, containers can run as root by default, but you can easily configure a PodSecurityPolicy to prevent this, as described here.
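As a hedged sketch, a PodSecurityPolicy like the following (using the policy/v1beta1 API that was current at the time of this CVE) rejects pods that try to run as root; note that it only takes effect once the PodSecurityPolicy admission controller is enabled and RBAC grants workloads use of the policy:

cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-root
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot   # pods whose containers run as uid 0 are rejected
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
EOF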

To fix this issue, you should patch your container runtime. Whether you are just using a container runtime (Docker) or some flavor of container orchestration system (Kubernetes, Mesos, etc.), you should look up the instructions for your specific software version and OS.
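On an Ubuntu host running Docker CE from Docker’s own repositories, patching might look like this (package names and commands depend on how your runtime was installed):

sudo apt-get update
sudo apt-get install --only-upgrade docker-ce containerd.io
docker version --format 'Engine: {{.Server.Version}}'
docker info 2>/dev/null | grep -i runc   # confirm the patched runc build is in use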

How can Guardicore help?

Guardicore provides a network security solution for hybrid cloud environments that spans multiple compute architectures, containers being one of them. Guardicore Centra is a holistic micro-segmentation solution that provides process-level visibility and enforcement of traffic flows for both containers and VMs. This is extremely important in the case of this CVE: if a malicious actor breaks out of a container, the follow-on attack would originate from the host VM or a different container rather than the original container.

Guardicore can mitigate this risk by controlling which processes can actually communicate between the containers or VMs covered by the system.

Learn more about containers and cloud security

Guardicore Enables Secure Rapid Container Deployment

Guardicore Centra Security Platform Reduces Compliance Risks, Enforces Security Policies Within Containerized Applications and Workloads


Why Security Teams Need Visibility into Container Networks

Containers and orchestration systems use numerous technical abstractions to support auto-scaling and distributed applications, which obscure application communication flows. Security teams lose visibility into those flows, rendering traditional tools useless and exposing the application to risk.

GuardiCore Expands Support for Docker Open Platform

Process-Level Visibility Between Containers Delivers More Granular Application Security Monitoring and Troubleshooting for “Dockerized” Applications

DockerCon Europe 2015, Barcelona, Spain – GuardiCore, a leader in internal data center security, today announced that it has expanded support for the Docker open platform for building, shipping and running distributed applications. In addition to providing advanced breach detection and response for “Dockerized” applications, GuardiCore has extended its support for Docker environments to deliver process-level visibility between any two containers, allowing security and DevOps teams to effectively secure, monitor, maintain and troubleshoot applications in a very granular manner. GuardiCore will be demonstrating its Docker support at DockerCon in the New Innovators Showcase.

5 Key Takeaways from VMworld Barcelona

So, VMworld Europe just concluded last week, and there was certainly a lot to talk about: hybrid clouds, VMware’s acquisition of Boxer, Dell’s acquisition of EMC and how this affects VMware (it doesn’t, according to Dell CEO Michael Dell), VMware CEO Pat Gelsinger’s keynote where he highlighted the five imperatives of the digital business and called out some enterprises for their lack of agility (“Elephants must learn to dance”), and of course security, which seemed to be integrated into almost every topic at the event.
