5 Docker Security Best Practices to Avoid Breaches

5 Docker Security Best Practices

Docker has had a major impact on the world of IT over the last five years, and its popularity continues to surge. Since its release in 2013, 3.5 million apps have been “Dockerized” and 37 billion Docker containers have been downloaded. Enterprises and individual users have been implementing Docker containers in a variety of use-cases to deploy applications in a fast, efficient, and scalable manner.

There are a number of compelling benefits for organizations that adopt Docker, but as with any technology, there are security concerns as well. For example, the recently discovered runC container breakout vulnerability (CVE-2019-5736) could allow malicious containers to compromise a host machine. This means organizations adopting Docker need to do so in a way that takes security into account. In this piece, we'll provide an overview of the benefits of Docker and then dive into 5 Docker security best practices to help keep your infrastructure and applications secure.

Benefits of Docker

Many who are new to the world of containerization and Docker are often confused about what makes containers different from running virtual machines on top of a hypervisor. After all, both are ways of running multiple logically isolated apps on the same hardware.
Why then would anyone bother with containerization if virtual machines are available? Why are so many DevOps teams such big proponents of Docker? Simply put, containers are more lightweight, scalable, and a better fit for many use cases related to automation and application delivery. This is because containers eliminate the need for an underlying hypervisor and share a single operating system kernel.

Using web apps as an example, let’s review the differences.

In a typical hypervisor/virtual machine configuration you have bare metal hardware, the hypervisor (e.g. VMware ESXi), the guest operating system (e.g. Ubuntu), the binaries and libraries required to run an application, and then the application itself. Generally, another set of binaries and libraries for a different app would require a new guest operating system.

With containerization you have bare metal hardware, an operating system, the container engine, the binaries and libraries required to run an application, and the application itself. You can then stack more containers running different binaries and libraries on the same operating system, significantly reducing overhead and increasing efficiency and portability.

When coupled with orchestration tools like Kubernetes or Docker Swarm, the benefits of Docker are magnified even further.

Docker Security Best Practices

With an understanding of the benefits of Docker, let’s move on to 5 Docker security best practices that can help you address your Docker security concerns and keep your network infrastructure secure.

#1 Secure the Docker host

As any infosec professional will tell you, truly robust security must be holistic. With Docker containers, that means not only securing the containers themselves, but also the host machines that run them. Containers on a given host all share that host's kernel. If an attacker is able to compromise the host, all your containers are at risk. This means that using secure, up-to-date operating systems and kernel versions is vitally important. Ensure that your patch and update processes are well defined, and regularly audit systems for outdated operating system and kernel versions.
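A quick host audit along these lines is easy to script. Below is a minimal sketch, assuming a Debian/Ubuntu host with apt (adjust the package commands for your distribution), that prints the kernel and Docker versions and flags pending kernel or Docker updates:

#!/bin/bash

# Print the running kernel and Docker engine versions
echo "Kernel: $(uname -r)"
echo "Docker: $(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 'not running')"

# List pending kernel/Docker package upgrades (Debian/Ubuntu; use yum/dnf equivalents elsewhere)
apt list --upgradable 2>/dev/null | grep -iE 'linux-image|docker' || echo "No pending kernel or Docker updates found"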

#2 Only use trusted Docker images

It’s a common practice to download and leverage Docker images from Docker Hub. Doing so provides DevOps teams an easy way to get a container for a given purpose up and running quickly. Why reinvent the wheel?

However, not all Docker images are created equal and a malicious user could create an image that includes backdoors and malware to compromise your network. This isn’t just a theoretical possibility either. Last year it was reported by Ars Technica that a single Docker Hub account posted 17 images that included a backdoor. These backdoored images were downloaded 5 million times. To help avoid falling victim to a similar attack, only use trusted Docker images. It’s good practice to use images that are “Docker Certified” whenever possible or use images from a reputable “Verified Publisher”.
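One way to enforce this at the command line is Docker Content Trust, which makes the Docker client refuse image tags that are not signed by their publisher. A minimal sketch (the image name is purely illustrative):

# Enable Docker Content Trust so that pulls and runs only accept signed image tags
export DOCKER_CONTENT_TRUST=1

# This pull succeeds only if the tag carries a valid publisher signature;
# unsigned tags are rejected
docker pull nginx:latest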

#3 Don't run Docker containers using --privileged or --cap-add

If you're familiar with why you should NOT "sudo" every Linux command you run, this tip will make intuitive sense. The --privileged flag gives your container full capabilities. This includes access to kernel capabilities that could be dangerous, so only use this flag to run your containers if you have a very specific reason to do so.

Similarly, you can use the --cap-add switch to grant specific capabilities that aren't granted to containers by default. Following the principle of least privilege, you should only use --cap-add if there is a well-defined reason to do so.
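In practice, the safest pattern is the opposite: drop every capability and add back only what the workload genuinely needs. A hedged sketch (the nginx image and NET_BIND_SERVICE capability are just examples; substitute your own):

# Avoid --privileged entirely; drop all capabilities and grant back only what is required
docker run -d \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --name web nginx:latest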

#4 Use Docker Volumes for your data

By storing data (e.g. database files and logs) in Docker Volumes as opposed to within a container, you enhance data security and help ensure your data persists even if the container is removed. Additionally, volumes can enable secure data sharing between multiple containers, and their contents can be encrypted for secure storage at third-party locations (e.g. a co-location data center or cloud service provider).
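Creating and mounting a named volume is a one-liner. The sketch below uses a MySQL container purely as an example; the volume and container names are placeholders:

# Create a named volume managed by Docker
docker volume create db-data

# Mount the volume at the database's data directory; the data outlives the container
docker run -d \
  --name db \
  -v db-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=change-me \
  mysql:8.0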

#5 Maintain Docker Network Security

As container usage grows, teams develop a larger and more complex network of Docker containers within Kubernetes clusters. Analyzing and auditing traffic flows becomes harder as these networks grow, and finding the right balance between security and performance can be difficult. If security policies are too strict, the inherent advantages of agility, speed, and scalability offered by containers are hamstrung. If they are too lax, breaches can go undetected and an entire network could be compromised.
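At the Docker level itself, the basic building block for network segmentation is the user-defined network: containers can only reach each other if they share one. A minimal sketch (the network, container, and image names are illustrative):

# Create an isolated user-defined bridge network for backend services
docker network create --driver bridge backend

# The database is reachable only from containers attached to "backend"
docker run -d --name db --network backend -e MYSQL_ROOT_PASSWORD=change-me mysql:8.0

# The web tier joins the same network to reach the database;
# containers on other networks (or the default bridge) cannot connect to it
docker run -d --name web --network backend nginx:latest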

Process-level visibility, tracking network flows between containers, and effectively implementing micro-segmentation are all important parts of Docker network security. Doing so requires tools and platforms that can help integrate with Docker and implement security without stifling the benefits of containerization. This is where Guardicore Centra can assist.

How Guardicore Centra helps enhance Docker Network Security

The Centra security platform takes a holistic approach to network security that includes integration with containers. Centra is able to provide visibility into individual containers, track network flows and process information, and implement micro-segmentation for any size deployment of Docker & Kubernetes.

For example, with Centra, you can create scalable segmentation policies that take into account both pod-to-pod traffic flows and flows to and from bare metal or virtual machines, without negatively impacting performance. Additionally, Centra can help DevSecOps teams implement and demonstrate the monitoring and segmentation required for compliance with standards such as PCI-DSS 3.2. For more on how Guardicore Centra can help enable Docker network security, check out the Container Security Use Case page.

Interested in learning more?

There are a variety of Docker security issues you'll need to be prepared to address if you want to securely leverage containers within your network. By following the 5 Docker security best practices we reviewed here, you'll be off to a great start. If you're interested in learning more about Docker network security, check out our How to Leverage Micro-Segmentation for Container Security webinar. If you'd like to discuss Docker security with a team of experts who understand that Docker security requires a holistic approach leveraging a variety of tools and techniques, contact us today!

Are you Protected against These Common Types of Cyber Attacks?

The types of cyber-security attacks that businesses need to protect themselves from are continually growing and evolving. Keeping your company secure means having insight into the most common threats, and the categories of cyber attacks that might go unnoticed. From how to use the principle of least privilege to which connections you need to be monitoring, we look at the top types of network attacks and how to level up your security for 2019.

Watering Hole Attacks

A watering hole attack compromises a website its targets are known to visit, leveraging vulnerabilities in the site's software or design to embed malicious code. One well-known example is Magecart, the consumer website malware campaign. There are at least half a dozen criminal groups using this toolkit, notably in a payment-card skimming exploit that has used JavaScript code on the checkout pages of major retailers to steal card details.

Last year, Guardicore Labs discovered Operation Prowli, a campaign that compromised more than 40,000 machines around the world, using attack techniques such as brute-force, exploits, and the leveraging of weak configurations. This was achieved by targeting CMS servers hosting popular websites, backup servers running HP Data Protector, DSL modems and IoT devices among other infrastructure. Consumers were tricked and diverted from legitimate websites to fake ones, and the attackers then spread malware and malicious code to over 9,000 companies through scam services and browser extensions. This kind of attack puts a whole organization in jeopardy.

Watering hole attacks are most effective when an attacker homes in on the websites that you and your employees use regularly. To defend against them, always make sure that your software is up to date so that attackers cannot leverage vulnerabilities to complete these types of cyber attacks, and ensure you have a method in place to closely watch network traffic and prevent intrusions.

Third-Party Service Vulnerabilities

Today's surge in connectivity means that enterprises are increasingly relying on third-party services for backup, storage, scale, or MSSPs, to name a few examples. Attackers are increasingly managing to infiltrate your network through your connections with other businesses that have access to your data center or systems. According to the Ponemon Institute, more than half of businesses have suffered a breach due to access through a third-party vendor, one example being the devastating Home Depot breach, where attackers used a third-party vendor's credentials to steal more than 56 million customer credit and debit card details.

As well as current suppliers, businesses need to be aware of previous suppliers who might not have removed your information from their systems, and of breaches of confidentiality where third parties have sold or shared your data with another unknown party. As such, your company needs visibility into all your communication flows, including those with third-party vendors, suppliers, or cloud services, as well as in-depth incident response to handle these kinds of attacks.

Web Application Attacks

When it comes to categories of cyber attacks that use web applications, SQL injection is one of the most common. An attacker simply inserts additional SQL commands into an application database query, allowing them to access data from the database, modify or delete the data, and sometimes even execute operations or issue commands to the operating system itself. This can be done in a number of ways, often through client-server web forms, by modifying cookies, or by using server variables such as HTTP headers.

Another example of a web application attack is managed through deserialization vulnerabilities. Many serialization and deserialization specifications have inherent design flaws that mean systems will convert any serialized stream into an object without validating its content. At an application level, companies need to be sure that deserialization endpoints are only accessible by trusted users.
Giving web applications the minimum privilege necessary is one way to keep these types of cyber-security attacks from breaching your network. Ensuring you have full visibility of connections and flows to your database server is also essential, with alerts set up for any suspicious activity.

What Can Attackers Do Once They Have Access to Your Network?

  • Ransomware: Attackers can use all types of network attacks to withhold access to your data and operations, usually through encryption, in the hope of a pay-out.
  • Data destruction/theft: Once attackers have breached your perimeter, without controls in place they can access critical assets such as customer data, which can be destroyed or stolen, causing untold brand damage and legal consequences.
  • Crypto-jacking: These types of cyber attacks are usually initiated when a user downloads malicious crypto-mining code onto their machine, or by brute-forcing SSH credentials, like the 'Butter' attacks monitored by Guardicore Labs over the past few years.
  • Pivot to attack other internal applications: If a hacker breaches one area, they can leverage user credentials to escalate their privileges or move laterally to another, more sensitive area. This is why it's so important to isolate critical assets, as well as take advantage of easy, early wins like separating the production arm of your company from development.

The Most Common Types of Cyber-Security Attacks are Always Evolving

With so many types of cyber attacks risking your network, and subtle changes turning even known quantities into new threats, visibility of your whole ecosystem is foundational for a well-protected IT environment.

As well as using micro-segmentation to separate environments, you can create policy that secures endpoints and servers with application segmentation. This helps stop a breach from escalating, with strong segmentation policies that secure your communication flows under the principle of least privilege.

On top of this, complementary controls that include breach detection and incident response, with visibility at their core, ensure that nothing sinister can fly under your radar.

The cost of over-compliance

A few weeks ago I visited a prospect who presented me with an interesting business case.
They are a financial services company with all their applications hosted on their premises.
As expected from a financial services company, they are heavily regulated – having to meet PCI DSS and other standards and requirements.

When they started their business roughly 10 years ago, the core set of their applications fell under one regulation or another. At that time, a plausible solution was to define their entire production environment as "regulated" and implement all the requirements there. The overhead was small, and it greatly simplified managing the segregation of regulated from non-regulated systems.

But over the years the situation has changed quite a lot. In addition to the financial applications that remain regulated, they have added dozens of other applications to their production environment, so that today fewer than 50% of their servers run regulated applications, and the overhead has become significant. They estimate that a few hundred thousand dollars are "wasted" annually on compliance where it is not needed (software licenses, auditing hours, the time of internal compliance-oriented engineers, and so on).

So “why not separate the irrelevant applications from the regulated data-center?” you might ask, and so did I. But here are a few challenges that the prospect presented me with:

  1. The data center is quite complex today, spanning a few different virtualization solutions, networking equipment, etc., so separating applications into different VLANs would require quite a lot of networking effort.
  2. The regulated and non-regulated applications are interconnected – mapping those dependencies (to identify the firewall rules) is a very complex task without the right visibility.
  3. Some applications are business critical, and they cannot afford the downtime associated with moving them to another VLAN, changing their IPs, and so on; just the thought of it scares everyone, from application owners to leadership.
  4. Looking deeper into the regulation requirements, they would like to separate the "regulated part" even further into separate segments, driving the compliance and auditing costs even further down. So take all the problems above and multiply them…
  5. As with all modern organizations, they would like to embrace "new" technologies such as cloud, so any change they implement in their IT should enable this easily and allow for future expansion.

What a perfect use case for an overlay segmentation solution such as Guardicore! We can help implement segments of any size, across any infrastructure, without any downtime, and save quite a lot of money in the process of uplifting their security posture.

Want to hear more? Talk to us.

Understanding and Avoiding Security Misconfiguration

Security Misconfiguration is simply defined as failing to implement all the security controls for a server or web application, or implementing the security controls, but doing so with errors. What a company thought of as a safe environment actually has dangerous gaps or mistakes that leave the organization open to risk. According to the OWASP top 10, this type of misconfiguration is number 6 on the list of critical web application security risks.

How Do I Know if I Have a Security Misconfiguration, and What Could It Be?

The truth is, you probably do have misconfigurations in your security, as this is a widespread problem that can happen at any level of the application stack. Some of the most common misconfigurations in traditional data centers include default configurations that have never been changed and remain insecure, incomplete configurations that were intended to be temporary, and wrong assumptions about an application's expected network behavior and connectivity requirements.

In today’s hybrid data centers and cloud environments, and with the complexity of applications, operating systems, frameworks and workloads, this challenge is growing. These environments are technologically diverse and rapidly changing, making it difficult to understand and introduce the right controls for secure configuration. Without the right level of visibility, security misconfiguration is opening new risks for heterogeneous environments. These include:

  • Unnecessary administration ports that are open for an application, exposing it to remote attacks (a quick audit sketch follows this list).
  • Outbound connections to various internet services, which could reveal unwanted behavior of the application in a critical environment.
  • Legacy applications trying to communicate with applications that no longer exist. Attackers could mimic these applications to establish a connection.
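One quick way to spot the first of these on a Linux host is to list which processes are listening on which ports and compare that against what the application actually needs. A minimal sketch (the flagged port list is just an example):

# List all listening TCP sockets and the processes that own them
sudo ss -tlnp

# Flag a few administration/legacy ports that are often left open unintentionally
for port in 21 23 2375 3389 5900; do
    if sudo ss -tln | grep -q ":$port "; then
        echo "Warning: port $port is listening and may not be needed"
    fi
done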

The Enhanced Risk of Misconfiguration in a Hybrid-Cloud Environment

While security misconfiguration in traditional data centers puts companies at risk of unauthorized access to application resources, data exposure, and in-organization threats, the advent of the cloud has increased the threat landscape exponentially. It comes as no surprise that "2017 saw an incredible 424 percent increase in records breached through misconfigurations in cloud servers," according to a recent report by IBM. This kind of cloud security misconfiguration accounted for almost 70% of the overall compromised data records that year.

One element to consider in a hybrid environment is the use of public cloud services, third-party services, and applications that are hosted in different infrastructure. Unauthorized application access, whether from external sources, internal applications, or legacy applications, can open a business up to a large amount of risk.

Firewalls can often suffer from misconfiguration, with policies left dangerously loose and permissive, providing a large amount of exposure to the network. In many cases, production environments are not firewalled from development environments, or firewalls are not used to enforce least privilege where it could be most beneficial.

Private servers with third-party vendors or software can lack visibility or an understanding of shared responsibility, often resulting in misconfiguration. One example is the 2018 Exactis breach, where 340 million records were exposed, affecting more than 21 million companies. Exactis was responsible for its data, despite using standard, commonly deployed Elasticsearch infrastructure as its database. Critically, it failed to implement any access control to manage this shared responsibility.

With so much complexity in a heterogeneous environment, and human error often responsible for misconfiguration that may well be outside of your control, how can you demystify errors and keep your business safe?

Learning about Application Behavior to Mitigate the Risk of Misconfiguration

Visibility is your new best friend when it comes to fighting security misconfiguration in a hybrid cloud environment. Your business needs to learn the behavior of its applications, focusing in on each critical asset and its behavior. To do this, you need an accurate, real-time map of your entire ecosystem, which shows you communication and flows across your data center environment, whether that’s on premises, bare metal, hybrid cloud, or using containers and microservices.

This visibility not only helps you learn more about expected application behaviors, it also allows you to identify potential misconfigurations at a glance. An example could be revealing repeated connection failures from one specific application. On exploration, you may uncover that it is attempting to connect to a legacy application that is no longer in use. Without a real-time map into communications and flows, this could well have been the cause of a breach, where malware imitated the abandoned application to extract data or expose application behaviors. With foundational visibility, you can use this information to remove any disused or unnecessary applications or features.

Once you gain visibility, and you have a thorough understanding of your entire environment, the best way to manage risk is to lock down the most critical infrastructure, allowing only desired behavior, in a similar method to a zero-trust model. Any communication which is not necessary for an application should be blocked. This is what OWASP calls a ‘segmented application architecture’ and is their recommendation for protecting yourself against security misconfiguration.

Micro-segmentation is an effective way to make this happen. Strict policy protects communication to the most sensitive applications and therefore its information, so that even if a breach happens due to security misconfiguration, attackers cannot pivot to the most critical areas.

Visibility and Smart Policy Limit the Risk of Security Misconfiguration

The chances are, your business is already plagued by security misconfiguration. Complex and dynamic data centers are only increasing the risk of human error, as we add third-party services, external vendors, and public cloud management to our business ecosystems.

Guardicore Centra provides an accurate and detailed map of your hybrid-cloud data center as an important first step, enabling you to automatically identify unusual behavior and remove or mitigate unpatched features and applications, as well as identify anomalies in communication.

Once you’ve revealed your critical assets, you can then use micro-segmentation policy to ensure you are protected in case of a breach, limiting the attack surface if misconfigurations go unresolved, or if patch management is delayed on-premises or by external vendors. This all in one solution of visibility, breach detection and response is a powerful tool to protect your hybrid-cloud environment against security misconfiguration, and to amp up your security posture as a whole.

Want to hear more about Guardicore Centra and micro-segmentation? Get in touch.

Want to learn more about securing data centers and clouds? Check out our white paper.


Ready to Give Micro-Segmentation Your Full Attention? Look Out for the Most Common Roadblocks

Security experts continue to promote micro-segmentation as an essential tool for risk reduction in hybrid cloud environments. If you’re ready to make 2019 the year you get your micro-segmentation journey off the ground, make sure you can identify the roadblocks you should be looking to avoid.

The irreversible movement of critical workloads into virtualized, hybrid cloud environments demands new security solutions that go further than traditional firewalls or endpoint controls. Audits and industry compliance requirements make it an imperative. News stories of the continued fallout of data center breaches in which attackers have caused severe brand and monetary damage, such as the Equifax breach, make it even more important to move this to the top of your to-do list.

East-west data center traffic now accounts for most enterprise traffic — and has been said to “dwarf traditional client-server traffic which moves north-south.” As a result, traditional network and host-based security, even when virtualized, doesn’t provide the visibility, security controls, or protection capabilities to secure what has become the largest attack surface of today’s enterprise computing environments. Furthermore, point solutions offered by cloud and on-premises vendors come up short and add layers of complexity most enterprises can’t afford.

Attackers know this and are exploiting it. Today's attacks are smarter and more straightforward, often launched to covertly harness portions of an enterprise's compute power to commit other crimes. A good example of this is the rise in crypto-jacking, which is growing faster than ransomware as the means by which attackers attempt a pay-out. Alongside APTs, these types of threats take advantage of zero-day vulnerabilities or weaknesses in existing security and launch attacks directed against the data center or cloud.

As IT environments continue to grow increasingly dynamic and complex, attackers can accomplish their ends more quickly and efficiently. This is especially true in a hybrid ecosystem, given the lack of native security controls and the average length of dwell time before detection.

The responsibility to ensure that you are protected against these threats lies squarely in your court: it is on you to safeguard your business. Security is ultimately, and contractually, a shared responsibility between the provider and the user, and enterprises must continue to work on securing the workloads and applications themselves, not merely rely on intrusion prevention tools.

The Micro-Segmentation Dilemma

In view of this sense of urgency, micro-segmentation has become a popular solution to address the reality of today's data centers. We've had conversations with people at dozens of organizations that have tried to implement micro-segmentation. By identifying some of the more common pitfalls, we can lay out the tips and tricks that will help you make your implementation a success.

Lack of visibility: Without deep visibility into east-west data center traffic, any effort to implement micro-segmentation is thwarted. Even with lengthy analysis meetings, traffic collection, and manual mapping processes, security professionals will be left with blind spots. Even where automated mapping is used, too many efforts lack process-level visibility and critical contextual orchestration data. The ability to map out application workflows at a very granular level is necessary to identify logical groupings of applications for segmentation purposes.

All-or-nothing segmentation paralysis: Too often, executives think they need to micro-segment everything decisively, which leads to fears of disruption. The project looks too intimidating, so they never begin. They fail to understand that micro-segmentation must be done gradually, in phases. The right provider will be able to identify use cases that will provide quick time to value for your unique business context.

Layer 4 complacency: Some organizations believe that traditional network segmentation is sufficient. But ask them, "When was the last time your perimeter firewalls were strictly Layer 4 port forwarding devices?" Attacks over the last 15 years often include port hijacking – taking over an allowed port with a new process for obfuscation and data exfiltration. Attackers can exploit open ports and protocols for lateral movement. Layer 4 approaches, typical of most point solutions, can in some cases amount to under-segmentation. Of course, effective micro-segmentation must strike a balance between application protection and business agility, delivering strong security without disrupting business-critical applications, so it's important not to enforce such tight policy that you lose flexibility. However, in dynamic infrastructures where workloads are communicating and often migrating across segments, you will want to enforce more granular policy, down to Layer 7.

Lack of multi-cloud convergence: The hybrid cloud data center adds agility through autoscaling and mobility of workloads. However, it is built on a heterogeneous architectural base. Each cloud vendor may offer point solutions and security group methodologies that focus on its own architecture. They have their own best interests at heart, and multiple solutions can result in unnecessary complexity. Successful micro-segmentation requires a solution that works in a converged fashion across the entire architecture. On top of this, a converged approach can be implemented more quickly and easily than one that must account for different cloud providers’ security technologies.

Inflexible policy engines: Point solutions often have poorly thought-out policy engines. Most include “allow-only” rule sets. Most security professionals would prefer to start with a “global-deny” list, which establishes a base policy against unauthorized actions across the entire environment. This lets enterprises demonstrate a security posture directly correlated with the compliance standards they must adhere to, such as HIPAA for health organizations or PCI-DSS for anyone who takes payments.

Moreover, point solutions usually don’t allow policies to be dynamically provisioned or updated when workflows are autoscaled, services expand or contract, or processes spin up or down — a key reason enterprises are moving to hybrid cloud data centers in the first place. Without this capability, micro-segmentation is virtually impossible.

Given these obstacles, it’s understandable that most micro-segmentation projects suffer from lengthy implementation cycles, cost overruns, and excessive demands on scarce security resources, ultimately failing to achieve their goals.

So, how can you increase your chances of success?

Winning Strategies for Successful Micro-Segmentation

When intelligently planned and executed, reducing risk with micro-segmentation is very achievable. It starts with discovery of your applications and a visual map of their communications and dependencies within your network. With granular visibility into your entire environment, including network flows, assets, and orchestration details from various third-party platforms and workloads, you can more easily identify critical assets that can logically be grouped via labels to use in policy creation. Process-level (Layer 7) visibility accelerates your ability to identify and label workflows, and to achieve a more effective level of protection.
Converged micro-segmentation strategies that work seamlessly across your entire heterogeneous environment, from on premises to the cloud, will simplify and accelerate the rollout. When a policy can truly follow the workload, regardless of the underlying platform, it becomes easier to implement and manage, and delivers more effective protection.

Autoscaling is one of the major features of the hybrid cloud terrain. The inherent intelligence to understand and apply policies to workloads as they dynamically appear and disappear is key.

Finally, take a gradual, phased approach to operationalizing micro-segmentation. Start with critical assets, or applications that need to be secured for compliance. Which assets are most likely to be targeted by attackers? Which contain sensitive customer data or are most vulnerable to compute hijacking? Create policies around those groups first. Over time, you can gradually build out increasingly refined policies, whether for increased risk reduction, the principle of least privilege, wider compliance needs, or any other specific end goals for your business.

Want to learn more about best practices for micro-segmentation? Read more.

AWS Security Best Practices

AWS is the biggest player in the public IaaS (Infrastructure as a Service) market and a critical component of the hybrid-cloud infrastructure in many enterprises. Understanding how to secure AWS resources and minimize the impact of any breaches that do occur has become more important than ever. For this reason, after closing 2018 with Infection Monkey & GuardiCore Centra’s integration into AWS Security Hub, we decided to open 2019 with a crash course on AWS security best practices.

In this piece, we’ll dive into some of the basics of AWS security, provide some tips to help you get started, and supply you with information on where you can learn more.

#1 AWS security best practice: Get familiar with the AWS shared responsibility model

Understanding the AWS security paradigm at a high level is an important part of getting started securing your AWS infrastructure. AWS uses the shared responsibility model to define who is responsible for securing what in the world of AWS. To help conceptualize the model, the public cloud infrastructure giant has come up with succinct verbiage to describe what they are responsible for and what you (the customer) are responsible for. In short:

  • AWS is responsible for "security of the cloud" – the software, hardware, and global infrastructure that run AWS services (think racks in physical data centers, hypervisors, switches, routers, storage, etc.) are AWS's responsibility to secure.
  • Customers are responsible for "security in the cloud" – customers are responsible for securing things like customer data, applications, operating systems, firewalls, authentication, and access management.

Worded differently, AWS gives you the public cloud infrastructure to build upon, but it’s up to you to do so responsibly. It is expected that not everything you need will be baked into any given AWS solution. Third-party security tools like Centra can help fill those gaps. Understanding the shared responsibility model and what tools can help will allow you to ensure you’re doing your part to secure your infrastructure.

#2 AWS security best practice: Use IAM wisely

AWS Identity and Access Management (IAM) is a means of managing access to AWS resources and services, and is built into AWS accounts. In a nutshell, IAM enables you to configure granular permissions and access rights for users, groups, and roles. Here are a few useful high-level recommendations to help you get started with IAM:

  • Grant least privilege – The principle of least privilege is a popular concept in the world of InfoSec, and it is even more important to adhere to in the cloud. Only grant users and services the privileges necessary for the given set of tasks they should be legitimately responsible for, and nothing more.
  • Use IAM groups – Using groups to assign permissions to users significantly simplifies and streamlines access management (see the sketch after this list).
  • Regularly rotate credentials – Enforcing expiration dates on credentials helps ensure that if a given set of credentials is compromised, there is a limited window for an attacker to access your infrastructure.
  • Limit use of root – Avoid using the AWS account root user for day-to-day tasks. Being conservative with your use of root access helps keep your infrastructure secure.
  • Use MFA – Multi-factor authentication (MFA) should be considered a must for users with high-level privileges.
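As a rough illustration of the groups and least-privilege points above, the AWS CLI can create a group, attach a narrowly scoped managed policy, and add a user to it. This is only a sketch; the group name, user name, and policy choice are placeholders to adapt to your own environment:

# Create an IAM group for read-only security auditors (names are illustrative)
aws iam create-group --group-name security-auditors

# Attach a narrowly scoped AWS managed policy to the group
aws iam attach-group-policy \
  --group-name security-auditors \
  --policy-arn arn:aws:iam::aws:policy/SecurityAudit

# Add an existing user to the group rather than granting permissions directly
aws iam add-user-to-group --user-name alice --group-name security-auditors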

#3 AWS security best practice: Disable SSH password authentication

If you're familiar with Linux server administration, you likely know the benefits of SSH keys over passwords. If you're not, the short version is below, followed by a minimal setup sketch:

  • SSH keys are less susceptible to brute force attacks than passwords.
  • To compromise SSH public-key authentication used with a passphrase, an attacker would need to obtain the SSH private-key AND determine (or guess) the passphrase.
  • While SSH keys may require a little more work when it comes to key management, the pros far outweigh the cons from a security perspective.
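Assuming an OpenSSH server (adjust paths and the service name, ssh vs. sshd, for your distribution), switching an instance to key-only logins looks roughly like this:

# On your workstation: generate a key pair and copy the public key to the server
ssh-keygen -t ed25519 -C "admin workstation"
ssh-copy-id user@your-server

# On the server: disable password logins entirely, then reload the SSH daemon
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload sshd   # the service may be named "ssh" on Debian/Ubuntu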

#4 AWS security best practice: Use security groups

First, to clear up a common misconception: AWS security groups are NOT user groups or IAM groups. An AWS security group is effectively a virtual firewall. If you’re comfortable understanding the benefits of a firewall within a traditional network infrastructure, conceptualizing the benefits of AWS security groups will be intuitive.

AWS security group best practices

Now that we’ve clarified what a security group is, we’ll dive into a few AWS security group best practices to help you get started using them.

    • Minimize open ports – Unless there is a highly compelling argument to do so, only allow access to required ports on any given instance. For example, if you're running a cluster of instances for a web server, access to TCP ports 80 and 443 makes sense (and maybe 22 for SSH), but opening other ports is an unnecessary risk.
    • Don't expose database ports to the Internet – In most cases, there is no need to expose the database to the Internet, and doing so puts your infrastructure at risk. Use security group policies to restrict database port access (e.g. TCP 3306 for MySQL) to other specific AWS security groups (see the sketch after this list).
    • Regularly audit your security group policies – Requirements change, rules that were once needed become liabilities, and people make mistakes. Regularly auditing your security rules for relevance and proper configuration helps you minimize the likelihood that an outdated or misconfigured security group creates a network breach.
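To make the database-port point concrete, here is a hedged AWS CLI sketch that allows MySQL traffic into a database security group only from an application tier's security group; both group IDs are placeholders:

# Allow TCP 3306 into the database security group (sg-0123456789abcdef0)
# only from the application tier's security group (sg-0fedcba9876543210);
# both IDs are placeholders
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3306 \
  --source-group sg-0fedcba9876543210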

This is just the tip of the iceberg when it comes to AWS security group best practices. For more information, check out the AWS Security Groups User Guide and our Strategies for Protecting Cloud Workloads with Shared Security Models whitepaper.

#5 AWS security best practice: Leverage micro-segmentation

One of the most important components of securing public-cloud infrastructure, particularly in hybrid-cloud environments, is micro-segmentation. Micro-segmentation helps limit both north-south and east-west movement of breaches when they occur, which helps mitigate the spread of threats from one node to another. Further, Guardicore’s intelligent micro-segmentation solution can limit one of the biggest drivers of breach impact: dwell time. If you’re interested in learning more, check out this blog post for a crash course on micro-segmentation best practices.

How micro-segmentation complements AWS security groups

Security groups are an important part of AWS security, and micro-segmentation is an excellent way to complement them and round out a hybrid-cloud security plan. A micro-segmentation solution like Guardicore Centra helps ensure you are able to implement micro-segmentation seamlessly both on-premises and in the cloud. Specific benefits of using Centra to complement AWS security groups include:

  • Enhanced visibility – Centra is able to automatically discover applications and flows, use its AWS API integration to pull labels and asset information, and provide granular visibility and baselining for your entire infrastructure.
  • Application-aware policies – Next Generation Firewalls (NGFWs) are a big part of on-premises security, and Centra helps bring the same features to your AWS cloud. You wouldn't compromise on application-aware security in a physical datacenter, and with Centra you don't have to in the cloud either.
  • Protection across multiple cloud platforms and on-prem – It is common for the modern enterprise to have workloads scattered across multiple cloud service providers as well as physical servers on-premises. Centra is able to provide micro-segmentation for workloads running in AWS, other IaaS providers, and on physical servers in corporate offices and data centers. This helps enterprises ensure that their security is robust across the entirety of their infrastructure.

If you’re interested in learning more about the benefits of Centra for AWS, check out this solution brief (PDF).

Putting it all together: a holistic approach to AWS security

As we have seen, there is no single magic bullet when it comes to securing your AWS infrastructure. Understanding the AWS shared responsibility model enables you to know where to focus your attention, and leveraging built-in AWS features like security groups and IAM is a great start. However, there are still gaps left unaccounted for by AWS tools, and third-party solutions are needed to address them. Guardicore Centra provides users with micro-segmentation, breach detection and response, and application-level visibility that help round out a holistic approach to AWS security.

Want to learn more?

For more on how Guardicore Centra and micro-segmentation can help you keep your AWS resources secure, contact us today or sign up for a demo of the Centra Security Platform.

Interested in cloud security for hybrid environments? Get our white paper about protecting cloud workloads with shared security models.


Looking for a Micro-segmentation Technology That Works? Think Overlay Model

Gartner’s Four Models for Micro-Segmentation

Gartner has recently updated its micro-segmentation evaluation factors document ("How to Use Evaluation Factors to Select the Best Micro-Segmentation Model," refreshed 5 November 2018).

The report details four different models for micro-segmentation, but it does not make a clear recommendation on which is best. Understanding the answer means looking at the limitations of each model and recognizing what the future looks like for dynamic hybrid-cloud data centers. I recommend reading the report and evaluating the different capabilities; however, for us at Guardicore, it is clear that one solution model stands above the others, and it should not be a surprise that vendors that previously used other models are now changing their technology to use this model: Overlay.

But first, let me explain why other models are not adequate for most enterprise customers.

The Inflexibility of Native-Cloud Controls

The native model uses the tools that are provided with a virtualization platform, hypervisor, or infrastructure. This model is inherently limited and inflexible. Even for businesses only using a single hypervisor provider, this model ties them into one service, as micro-segmentation policy cannot simply be moved when you switch providers. In addition, while businesses might think they are working with a single IaaS provider or hypervisor, workloads may in fact be running elsewhere too, a phenomenon known as shadow IT. The reality is that vendors that used to support native controls for micro-segmentation have realized that customers are transforming and have had to develop new Overlay-based products.

More commonly, enterprises know that they are working with multiple cloud providers and services, and need a micro-segmentation strategy that can work seamlessly across this heterogeneous environment.

The Inconsistency of Third-Party Firewalls

This model is based on virtual firewalls offered by third-party vendors. Enterprises using this model are often subject to network layer design limitations, and therefore forced to change their networking topology. They can be prevented from gaining visibility due to proprietary applications, encryption, or invisible and uncontrolled traffic on the same VLAN.

A known issue with this approach is the creation of bottlenecks due to reliance on additional third-party infrastructure. Essentially, this model is not a consistent solution across different architectures, and can’t be used to control the container layer.

The Complexity of a Hybrid Model

A combination of the above two models, enterprises using a hybrid model for micro-segmentation are attempting to limit some of the downsides of both models alone. To allow them more flexibility than native controls, they usually utilize third-party firewalls for north-south traffic. Inside the data center where you don’t have to worry about multi-cloud support, native controls can be used for east-west traffic.

However, as discussed, both of these solutions, even in tandem, are limited at best. With a hybrid approach, you also take on the extra problems of a complex and arduous setup and maintenance strategy. Visibility and control in a hybrid approach are unsustainable in a future-focused IT ecosystem where workloads and applications are spun up, automated, auto-scaled, and migrated across multiple environments. Enterprises need one solution that works well, not two that are sub-par individually and limited together.

Understanding the Overlay Model – the Only Solution Built for Future Focused Micro-Segmentation

Rather than a patched-together hybrid solution from imperfect models, Overlay is built to be a more robust and future-proof solution from the ground up. Gartner describes the Overlay model as a solution where a host agent or software is enforced on the workload itself. Agent-to-agent communication is utilized rather than network zoning.

One of the downsides of third-party firewalls is that they do not scale well. In contrast, agents have no choke points to be constrained by, making them far more scalable for your needs.

With Overlay, your business has the best possible visibility across a complex and dynamic environment, with insight and control down to the process layer, including for future-focused architecture like container technology. The only solution that can address infrastructure differences, Overlay is agnostic to any operational or infrastructure environment, which means an enterprise has support for anything from bare metal and cloud to virtual machines or microservices, or whatever technology comes next. Without an Overlay model, your business can't be sure of supporting future use cases and remaining competitive.

Not all Overlay Models are Created Equal

It’s clear that Overlay is the strongest technology model, and the only future-focused solution for micro-segmentation. This is true for traditional access-list style micro-segmentation as well as for implementing deeper security capabilities that include support for layer 7 and application-level controls.

Unfortunately, not every vendor will provide the best version of Overlay, delivering the full functionality it is capable of. Utilizing the inherent benefits of an Overlay solution means you can put agents in the right places, setting communication policy that works in a granular way. With the right vendor, you can make intelligent choices about where to place agents, using context and process-level visibility all the way up to Layer 7. Your vendor should also be able to provide extra functionality, such as enforcement by account, user, or hash, all within the same agent.

Remember that protecting the infrastructure requires more than micro-segmentation and you will have to deploy additional solutions that will allow you to reduce risk and meet security and compliance requirements.

Micro-segmentation has moved from being an exciting new buzzword in cyber-security to an essential risk reduction strategy for any forward-thinking enterprise. If it’s on your to-do list for 2019, make sure you do it right, and don’t fall victim to the limitations of an agentless model. Guardicore Centra provides an all in one solution for risk reduction, with a powerful Overlay model that supports a deep and flexible approach to workload security in any environment.

Want to learn more about the differences between agent and agentless micro-segmentation? Check out our recent white paper.


CVE-2019-5736 – runC container breakout

A major vulnerability related to containers was disclosed on February 12th. The vulnerability allows a malicious container that is running as root to break out to the host OS and gain administrative privileges.

Adam Iwaniuk, one of the researchers who took part in the discovery, shares in detail the different paths taken to discover this vulnerability.

The mitigations suggested as part of the research for unpatched systems are:

  1. Use Docker containers with SELinux enabled (--selinux-enabled). This prevents processes inside the container from overwriting the host docker-runc binary.
  2. Use a read-only file system on the host, at least for storing the docker-runc binary.
  3. Use a low-privileged user inside the container, or a new user namespace with uid 0 mapped to that user (that user should not have write access to the runC binary on the host).

The first two suggestions are pretty straightforward, but I would like to elaborate on the third one. It's important to understand that Docker containers run as root by default unless stated otherwise. This does not necessarily mean that the container also has root access to the host OS, but it is the main prerequisite for this vulnerability to work.

To run a quick check whether your host is running any containers as root:


#!/bin/bash

# get all running docker container names
containers=$(docker ps | awk '{if(NR>1) print $NF}')

echo "Containers running as root (or with no explicit user set):"

# loop through all containers
for container in $containers
do
    uid=$(docker inspect --format='{{json .Config.User}}' "$container")
    # An empty user ("") falls back to the image default, which is root
    # unless the image's Dockerfile sets a USER explicitly
    if [ "$uid" = '""' ] || [ "$uid" = '"0"' ] || [ "$uid" = '"root"' ] ; then
        echo "Container name: $container"
    fi
done

In any case, as a best practice you should prevent your users from running containers as root. This can be enforced by existing controls of the common orchestration/management systems. For example, OpenShift prevents users from running containers as root out of the box, so your job here is basically done. However, in Kubernetes containers can run as root by default, but you can easily configure a PodSecurityPolicy to prevent this, as described here.
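If you are running containers directly with Docker, the same idea can be applied per container by specifying a non-root user at run time; a USER directive in the Dockerfile achieves the same thing at build time. A minimal sketch (the UID/GID and image name are placeholders):

# Run the container as an unprivileged UID/GID instead of the default root user
docker run -d \
  --user 1000:1000 \
  --name app myorg/myapp:latest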

In order to fix this issue, you should patch the version of your container runtime. Whether you are just using a container runtime (Docker) or some flavor of container orchestration system (Kubernetes, Mesos, etc.), you should look up the instructions for your specific software version and OS.

How can Guardicore help?

Guardicore provides a network security solution for hybrid cloud environments that spans across multiple compute architectures, containers being one of them. Guardicore Centra is a holistic micro-segmentation solution that provides process-level visibility and enforcement of the traffic flows both for containers and VMs. This is extremely important in the case of this CVE, as the attack would originate from the host VM or a different container and not the original container in case of a malicious actor breaking out.

Guardicore can mitigate this risk by controlling which processes can actually communicate between the containers or VMs covered by the system.

Learn more about containers and cloud security


Highlights of BlueHat Israel 2019

BlueHat Israel featured many interesting talks, covering supply chain attacks, processor flaws, and much more.