Posts

Limitations of Azure Security Groups: Policy Creation Across Multiple vNets

In our previous post, we discussed the limitations of Cloud Security Groups and flow logs within a specific vNet. In today’s post, we will focus on another specific scenario and use case that is common to most organizations, discussing Cloud Security Group limitations across multiple regions and vNets. We will then deep dive into Guardicore’s value in this scenario.

In a recent analysis, Gartner mentions the inherent incompatibility between existing monitoring tools and the cloud providers’ native monitoring platforms and data handling solutions. Gartner explains that an organization’s own monitoring strategies must evolve to accommodate these differences.

As the infrastructure monitoring feature sets offered by cloud providers’ native tools are continuing to evolve and mature, Gartner comments that “Gaps still exist between the capabilities of these tools and the monitoring requirements of many production applications… Remediation mechanisms can still require significant development and integration efforts, or the introduction of a third-party tool or service.”

To understand the challenges faced when using native monitoring tools, in this post I’ll again share details from an experiment that was performed by one of our customers. The customer created a simulation of multiple applications running in Azure, and created security policies between these applications.

The lab setup

Let’s look at the simulation environment. There are multiple Azure subscriptions, and within each subscription there is a Virtual Network (VNet). In this case, SubscriptionA is the Production environment, based in the Brazil South region, and SubscriptionB is the Development environment, based in West Europe. Each has its own vNet, and the two vNets are peered together.

ASGs:
The team created 3 Application Security Groups (ASGs). Note that the locations correspond to the locations used for the Virtual Networks (VNets).

The customer wanted to test the following scenario:
Block all communication from the CMS application over port 80, unless CMS communicates over this port with the SWIFT and Billing applications.

However, CMS application servers reside in the West Europe region, and the Swift and Billing application servers reside in the Brazil South region.

In this scenario, with two Virtual Networks (vNets), our customer wanted to know: will an Application Security Group (ASG) that exists in one vNet be available for reference in the other vNet’s Network Security Group (NSG)? Would it be possible to create a rule with an ASG for the CMS App servers to the SWIFT and Billing applications even though they are in separate vNets?

The limitations and constraints of using Azure Security Groups were immediately clear

The team attempted to add a new inbound security rule from the CMS servers’ ASG to the SWIFT servers’ ASG. As you can see from the screenshot, the only Application Security Group (ASG) that appears in the list of options is the local one, the CMS servers ASG.

Let’s explore what happened above. According to the documentation provided by Azure:

  • Each subscription in Azure is assigned to a specific, single region.
  • Multiple subscriptions cannot share the same vNet.
  • NSGs can only be applied within a vNet.

Thus each region must contain its own vNet, and each region will have its own specific NSGs in place. The team attempted a few options to troubleshoot this issue using Security Groups.

First, they attempted to use ASGs to resolve this and create policies across regions. However, the customer came up against the following Azure rule:
“All network interfaces assigned to an ASG have to exist in the same vNet. You cannot add network interfaces from different vNets to the same application security group.”
If your application spans regions or vNets, you cannot create a single ASG to include all servers within the application. A similar rule applies when application dependencies cross regions. ASGs therefore couldn’t solve the problem with policy creation.

Next, the customer tried combining two ASGs from different vNets to achieve this policy. Again, Azure rules made this impossible, as you can see below.
“If you specify an application security group as the source and destination in a security rule, the network interfaces in both application security groups must exist in the same virtual network. For example, if AsgLogic contained network interfaces from VNet1, and AsgDb contained network interfaces from VNet2, you could not assign AsgLogic as the source and AsgDb as the destination in a rule. All network interfaces for both the source and destination application security groups need to exist in the same virtual network.”

Simply put, according to Azure documentation, it is not possible to create an NSG rule that references two ASGs from different vNets.

Thus, if your application spans multiple vNets, using a single ASG for all application components is not an option, nor is combining two ASGs in an NSG rule. You’ll see the same problem when application dependencies cross regions, as in the case of our CMS, SWIFT, and Billing applications above.

Bottom line: It is not possible to create NSG rules using ASGs for cross-region or cross-vNet traffic.
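
The constraint can be illustrated with a short sketch. This is a hypothetical model of the documented restriction, not Azure’s actual API: every NIC in an ASG belongs to a vNet, and a rule’s source and destination ASGs must share a single vNet.

```python
# Hypothetical model of the Azure ASG restriction: an NSG rule referencing
# two ASGs is valid only if all member NICs are in the same virtual network.

class ASG:
    def __init__(self, name, nic_vnets):
        self.name = name
        self.nic_vnets = set(nic_vnets)  # vNets of the member NICs

def validate_rule(source_asg, dest_asg):
    """Reject rules whose ASGs span more than one vNet."""
    vnets = source_asg.nic_vnets | dest_asg.nic_vnets
    if len(vnets) > 1:
        raise ValueError(
            f"NSG rule {source_asg.name} -> {dest_asg.name} spans vNets "
            f"{sorted(vnets)}; all NICs must be in the same virtual network."
        )
    return True

cms = ASG("CMS-servers", ["vnet-westeurope"])       # Development, West Europe
swift = ASG("SWIFT-servers", ["vnet-brazilsouth"])  # Production, Brazil South

try:
    validate_rule(cms, swift)
except ValueError as e:
    print(e)  # the rule is rejected, just as in the customer's test
```

The ASG and rule names here are illustrative; the point is that the validation fails purely because the two applications sit in different vNets, regardless of how the rule itself is written.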

Introducing Guardicore to the Simulation

The team had an entirely different experience when using Guardicore Centra to enforce the required policy settings.

The team had already been using Guardicore Centra for visibility to explore the network. In fact, this visibility had helped the team realize they needed to permit the CMS application to communicate with SWIFT over port 80 in the first place. The team was therefore immediately able to view the real traffic between both regions/vNets and within each region/vNet, visualizing the connections between the CMS application in West Europe and the SWIFT and Billing applications in the Brazil South region.

With Guardicore, policies are created based on labels, and are therefore decoupled from the underlying infrastructure, supporting seamless migration of policies alongside workloads, wherever they may go in the future. As the customer planned to test migrating the CMS application to AWS, policies were created based on the environments and applications, not based on the infrastructure or the underlying “Cloud” context.

A critical layer added to Guardicore Centra’s visibility is labeling and grouping. This context enables deep comprehension of application dependencies. While Centra provides a standard hierarchy that many customers follow, our labeling approach is highly customizable: flexible grouping enables you to see your data center in the same terms your business uses to describe it.

Labeling decouples the IP address from the segmentation process and enables seamless application migration between environments, without the need to change the policies in place. With this functionality, the lab team was able to put the required policies into place.
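
To make the decoupling concrete, here is a minimal sketch of label-based policy matching. The rule syntax and workload records are illustrative assumptions, not Centra’s actual data model; the point is that the IP address never appears in the policy, so the rule survives a migration unchanged.

```python
# Illustrative label-based policy matching: a workload is selected by its
# labels, so changing its IP or underlying platform does not change policy.

def matches(workload, selector):
    """True if the workload carries every label in the selector."""
    return all(workload["labels"].get(k) == v for k, v in selector.items())

policy = {
    "src": {"app": "CMS"},
    "dst": {"app": "SWIFT"},
    "port": 80,
    "action": "allow",
}

# Same application, two different infrastructures (hypothetical addresses).
cms_on_azure = {"ip": "10.1.0.4", "labels": {"app": "CMS", "env": "Development"}}
cms_on_aws = {"ip": "172.16.8.9", "labels": {"app": "CMS", "env": "Development"}}

# Both instances match the same rule: the policy references labels only.
assert matches(cms_on_azure, policy["src"])
assert matches(cms_on_aws, policy["src"])
```

This is why the customer’s planned migration of the CMS application to AWS required no policy changes: the selector follows the workload wherever it runs.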

 

One of the most impactful things we can do to make Guardicore’s visualization relevant to your organization quickly, is integrate with any existing sources of metadata, such as data center or cloud orchestration tools or configuration management databases. In the case above, all labels were received automatically from the existing Azure orchestration tags.

As Guardicore does not rely on the underlying infrastructure to enforce policies, such as Security Groups or endpoint firewalls, policies are completely decoupled from the underlying infrastructure. This enables the creation of a single policy across the whole environment, and covers those use cases that are cross environment, too. In the case of Azure, it allowed our customer to simulate policies that cross vNet and Region, while doing so seamlessly from a single pane of glass.

Guardicore Now Available in the Microsoft Azure Marketplace

Microsoft Azure customers worldwide now gain access to the Guardicore Centra security platform to take advantage of the scalability, reliability, and agility of Azure to drive application development and shape business strategies

Boston, Mass. and Tel Aviv, Israel – October 8, 2019 – Guardicore, a leader in internal data center and cloud security, today announced the availability of its Guardicore Centra security platform in the Microsoft Azure Marketplace, an online store providing applications and services for use on Azure. Guardicore customers can now take advantage of the scalability, high availability, and security of Azure, with streamlined deployment and management.

Guardicore Centra helps accelerate security migration from an on-premises data center to Azure. Additionally, it supports hybrid clouds and can protect legacy applications for those customers that prefer to keep such applications in their traditional data centers while migrating other applications to Azure. The Guardicore Centra security platform is also among the first cloud and data center micro-segmentation solutions in the market to achieve Microsoft IP Co-Sell status. This designation recognizes that Guardicore has demonstrated proven technology and deep expertise that helps customers achieve their cloud security goals.

“By implementing Guardicore Centra, combined with the range of powerful tools from Microsoft Azure, customers are able to gain the highest level of visibility and implement micro-segmentation for enhanced security. And they can do it faster and more effectively than traditional firewall technology with our simple-to-deploy overlay that can go to the cloud, stay on-premise, or do both at the same time,” said Pavel Gurvich, CEO and cofounder, Guardicore. “Achieving this status demonstrates our commitment to the Microsoft partner ecosystem and our ability to deliver innovative solutions that help forward-thinking enterprise customers to secure their business-critical applications and data quickly, reduce the cost and burden of compliance, and secure cloud adoption.”

Sajan Parihar, Senior Director, Microsoft Azure Platform at Microsoft Corp said, “We’re pleased to welcome Guardicore and the Guardicore Centra security platform to the Microsoft Azure Marketplace, which gives our partners great exposure to cloud customers around the globe. Azure Marketplace offers world-class quality experiences from global trusted partners with solutions tested to work seamlessly with Azure.”

The Azure Marketplace is an online market for buying and selling cloud solutions certified to run on Azure. The Azure Marketplace helps connect companies seeking innovative, cloud-based solutions with partners who have developed solutions that are ready to use.

About Guardicore

Guardicore is a data center and cloud security company that protects your organization’s core assets using flexible, quickly deployed, and easy to understand micro-segmentation controls. Our solutions provide a simpler, faster way to guarantee persistent and consistent security — for any application, in any IT environment. For more information, visit www.guardicore.com.

The Risk of Legacy Systems in a Modern-Day Hybrid Data Center

If you’re still heavily reliant on legacy infrastructure, you’re not alone. In many industries, legacy servers are an integral part of ‘business as usual’ and are far too complex or expensive to replace or remove.

Examples include Oracle databases that run on Solaris servers, applications using Linux RHEL4, or industry-specific legacy technology. Think about the legacy AIX machines that often manage transaction processing for financial institutions, or end-of-life operating systems such as Windows XP that are frequently used as end devices in healthcare enterprises. While businesses do attempt to modernize these applications and infrastructure, it can take years of planning to execute, and even then might never be fully successful.

When Legacy Isn’t Secured – The Whole Data Center is at Risk

When you think about the potential risk of legacy infrastructure, you may go straight to the legacy workloads, but that’s just the start. Think about an unpatched device that is running Windows XP. If this is exploited, an attacker can gain access directly to your data center. Security updates like this recent warning about a remote code execution vulnerability in Windows Server 2003 and Windows XP should show us how close this danger could be.

Gaining access to just one unpatched device, especially when it is a legacy machine, is relatively simple. From this point, lateral movement can allow an attacker to move deeper inside the network. Today’s data centers are increasingly complex, with an intricate mix of technologies: not just the two binary categories of legacy and modern, but hybrid and future-focused infrastructure such as public and private clouds and containers. When a data center takes advantage of this kind of dynamic and complex infrastructure, the risk grows exponentially. Traffic patterns are harder to visualize and therefore control, and attackers are able to move undetected around your network.

Digital Transformation Makes Legacy More Problematic

The threat that legacy servers pose is not as simple as it was before digital transformation. Modernization of the data center has increased the complexity of any enterprise, and attackers have more vectors than ever before to gain a foothold in your data centers and make their way to critical applications and digital crown jewels.

Historically, an on-premises application might have been used by only a few other applications, probably also on premises. Today, however, it is likely to be used by cloud-based applications too, without any improvements to its security. As legacy systems are exposed to more and more applications and environments, the risk they pose grows all the time, exacerbated by every new innovation, communication, or advance in technology.

Blocking these communications isn’t actually an option in these scenarios, and digital transformation makes these connections necessary regardless. However, you can’t embrace the latest innovation without securing business-critical elements of your data center. How can you rapidly deploy new applications in a modern data center without putting your enterprise at risk?

Quantifying the Risk

Many organizations think they understand their infrastructure, but don’t actually have an accurate or real-time visualization of their IT ecosystem. Organizational or ‘tribal’ knowledge about legacy systems may be incorrect, incomplete or lost, and it’s almost impossible to obtain manual visibility over a modern dynamic data center. Without an accurate map of your entire network, you simply can’t quantify what the risks are if an attack was to occur.

Once you’ve obtained visibility, here’s what you need to know:

  1. The servers and endpoints that are running legacy systems.
  2. The business applications and environments where the associated workloads belong.
  3. The ways in which the workloads interact with other environments and applications. Think about what processes they use and what goals they are trying to achieve.

Once you have this information, you then know which workloads are presenting the most risk, the business processes that are most likely to come under attack, and the routes that a hacker could use to get from the easy target of a legacy server, across clouds and data centers to a critical prized asset. We often see customers surprised by the ‘open doors’ that could lead attackers directly from an insecure legacy machine to sensitive customer data, or digital crown jewels.

Once you’ve got full visibility, you can start building a list of what to change, which systems to migrate to new environments, and which policy you could use to protect the most valuable assets in your data center. With smart segmentation in place, legacy machines do not have to be a risky element of your infrastructure.
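
The three-step inventory above can be sketched as a toy script: given a flow map, list the legacy workloads and the assets each can reach directly. The workload names, OS list, and flow records are invented for illustration; in practice this data would come from your visibility tooling.

```python
# Toy risk inventory: find legacy workloads and the "open doors" they have
# into the rest of the data center. All data below is hypothetical.

LEGACY_OS = {"Windows XP", "Windows Server 2003", "RHEL4", "Solaris"}

workloads = {
    "hr-kiosk-01": {"os": "Windows XP", "app": "HR", "env": "Prod"},
    "db-core-01": {"os": "Solaris", "app": "Billing", "env": "Prod"},
    "web-01": {"os": "Ubuntu 18.04", "app": "Portal", "env": "Prod"},
}
flows = [  # (source, destination) pairs observed on the network
    ("hr-kiosk-01", "db-core-01"),
    ("web-01", "db-core-01"),
]

# Step 1: which servers run legacy systems.
legacy = {name for name, w in workloads.items() if w["os"] in LEGACY_OS}

# Steps 2-3: where those workloads belong and what they can reach.
exposure = {name: [dst for src, dst in flows if src == name] for name in legacy}
print(exposure)  # assets a compromised legacy box can reach directly
```

Ranking workloads by the sensitivity of what appears in their `exposure` list is one simple way to decide which policies to write first.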

Micro-segmentation is a Powerful Tool Against Lateral Movement

Using micro-segmentation effectively reduces risk in a hybrid data center environment. Specific, granular security policy can be enforced, which works across all infrastructure – from legacy servers to clouds and containers. This policy limits an attacker’s ability to move laterally inside the data center, stopping movement across workloads, applications, and environments.

If you’ve been using VLANs up until now, you’ll know how ineffective they are when it comes to protecting legacy systems. VLANs usually place all legacy systems into one segment, which means just one breach puts them all in the line of fire. VLANs rely on firewall rules that are difficult to maintain and do not leverage sufficient automation, which often results in organizations accepting loose policy that leaves them open to risk. Without visibility, security teams are unable to enforce tight policy and flows, not only among the legacy systems themselves, but also between the legacy systems and the rest of a modern infrastructure.

One Solution – Across all Infrastructure

Many organizations make the mistake of forgetting about legacy systems when they think about their entire IT ecosystem. However, as legacy servers can be the most vulnerable, it’s essential that your micro-segmentation solution works here, too. Covering all infrastructure types is a must-have for any company when choosing a micro-segmentation vendor that works with modern data centers. Even the enterprises who are looking to modernize or replace their legacy systems may be years away from achieving this, and security is more important than ever in the meantime.

Say Goodbye to the Legacy Challenge

Legacy infrastructure is becoming harder to manage. The servers and systems are business critical, but it’s only becoming harder to secure and maintain them in a modern hybrid data center. Not only that, but the risk and the attack surface are increasing with every new cloud-based technology and every new application you take on.

Visibility is the first important step. Security teams can use an accurate map of their entire network to identify legacy servers and their interdependencies and communications, and then control the risks using tight micro-segmentation technology.

Guardicore Centra can cover legacy infrastructure alongside any other platform, removing the issue of gaps or blind spots for your network. Without fear of losing control over your existing legacy servers, your enterprise can create a micro-segmentation policy that’s future-focused, with support for where you’ve come from and built for today’s hybrid data center.

Interested in learning more about implementing a hybrid cloud center security solution?

Download our white paper

Thoughts on the Capital One Attack

The ink on the Equifax settlement papers is hardly dry, and another huge data breach, this time at Capital One, is sending shock waves across North America.

The company has disclosed that in March of this year, a former systems engineer, Paige Thompson, exploited a configuration vulnerability (associated with a firewall or WAF) and was able to execute a series of commands on the bank’s servers that were hosted on AWS. About 106 million customers have had their data exposed, including names, incomes, dates of birth, and even social security numbers and bank account credentials. Some of the data was encrypted and some was tokenized, but there has been a large amount of damage to customers, as well as to the bank’s reputation and the entire security ecosystem.

Our customers, partners, and even employees have asked us to comment on the Capital One data breach. Guardicore is an Advanced Technology Partner for AWS with security competency. Only a small number of companies hold this certification, so I’d like to think that our thoughts do matter.

First – there are a couple of positive things related to this breach:

  1. Once notified, Capital One acted very quickly. It means that they have the right procedures, processes and people.
  2. Responsible disclosure programs provide real value. This is important and many organizations should follow suit.

While not a lot of information is available, based on the content that has been published thus far, we have some additional thoughts:

Could this Data Breach Have Been Avoided?

Reading the many articles on this subject, everyone is trying to figure out the same thing: how did this happen, and what could have been done to keep Capital One’s customer data more secure?

What Does a ‘Configuration Vulnerability’ Mean on AWS?

When it comes to managing security in a cloud or a hybrid-cloud environment, organizations often experience issues with maintaining good visibility and control over applications and traffic. The first step is understanding what your role is in a partnership with any cloud vendor. Being part of a shared-responsibility model in AWS means recognizing that Amazon gives you what it calls “full ownership and control” over how you store and secure your content and data. While AWS is responsible for infrastructure, having freedom over your content means you need to take charge when it comes to securing applications and data.

Looking at this data breach specifically, an AWS representative has said “AWS was not compromised in any way and functioned as designed. The perpetrator gained access through misconfiguration of the web application and not the underlying cloud-based infrastructure.”

Thompson gained access by leveraging a configuration error or vulnerability that affected a web firewall guarding a web application. Bypassing what seems to have been a thin (maybe even single) layer of defense, she was then able to move laterally across the network to the S3 bucket where the sensitive data was being stored.

Cloud Native Security Controls are Just Your First Layer of Defense

Can we learn anything from this incomplete information? I think the answer is yes: cloud-native security controls provide a good start, but on their own they are not enough. Best practice is to add an extra layer of detection and prevention, bringing application-aware security to the cloud just as you would expect on-premises. Defense-in-depth as a concept is not going away, even in the cloud. The controls and defenses that the cloud service provider includes should be seen as part of the basic hygiene requirements.

I would argue that the built-in cloud APIs for policy enforcement are insufficient: SecDevOps teams need more effective ways to identify and block malicious or suspicious traffic than cloud APIs can provide. When we were designing Guardicore Centra, we decided to develop independent capabilities whenever possible, even when it meant spending more time and effort on development. The result is a better security solution that is independent of the infrastructure and is not limited to what a third-party supplier, vendor, or partner provides.

Guardicore Centra is used as an added security platform for AWS as well as other clouds. We know from our customers that the capabilities listed below have protected them on multiple occasions.

  • Guardicore is an Advanced Technology Partner for AWS: Guardicore is the only vendor that specializes in micro-segmentation with this certification from AWS, and Guardicore Centra is fully integrated with AWS. Users can see native-cloud information and AWS-specific data alongside all information about their hybrid ecosystem. When creating policy, this can be visualized and enforced on flows and down to the process level, layer 7.
  • Micro-Segmentation Fills the Gaps of Built-in Cloud Segmentation: Many companies might rely on native cloud segmentation through cloud-vendor tools, but it would have been insufficient to stop the kind of lateral movement the attacker used to harvest these credentials in the Capital One breach. In contrast, solutions like Centra that are deployed on top of the cloud’s infrastructure and are independent are not limited. Specifically for Centra, the product enables companies to set policies at the process level itself.
  • Cloud API for Policy Enforcement is Insufficient: SecDevOps need more effective ways to block malicious or suspicious traffic than cloud APIs can achieve. In contrast, Guardicore Centra can block unwanted traffic with dynamic application policies that monitor and enforce on east-west traffic as well as north-south. As smart labeling and grouping can pull in information such as EC2 tags, users obtain a highly visible and configurable expression of their data centers, both for mapping and policy enforcement.
  • Breach Detection in Minutes, not Months: The Capital One breach was discovered on July 19, 2019, but the attack occurred in late March of the same year: a gap of almost four months from breach to detection. Many businesses struggle with visibility in the cloud, but Guardicore Centra’s foundational map is created with equal insight into all environments. Breach detection occurs in real time, with visibility down to Layer 7. Security incidents or policy violations can be sent immediately to AWS Security Hub, handled automatically, or escalated internally for mitigation.

Capital One is well known for good security practices, and its contributions to the security and open source communities are tremendous. This highlights how easily even a business with a strong security posture can fall victim to this kind of vulnerability. As more enterprises move to hybrid-cloud realities, visibility and control become more difficult to achieve.

Guardicore micro-segmentation is built for this challenge, achieving full visibility on the cloud, and creating single granular policies that follow the workload, working seamlessly across a heterogeneous environment.

Want to find out more about how to secure your AWS instances?

Read these Best Practices

Guardicore’s Insights from Security Field Day 2019

We had such a great time speaking at Security Field Day recently, presenting the changes to our product since our last visit, and hearing from delegates about what issues concern them in micro-segmentation technology.

The last time we were at Field Day was four years ago, and our product was in an entirely different place. The technology and vision have evolved since then. Of course, we’re still learning as we innovate, with the help of our customers who continually come up with new use cases and challenges to meet.

For those who missed our talk, here’s a look at some of what we discussed, and a brief recap of a few interesting and challenging questions that came up on the day.

Simplicity and Visibility First

At Guardicore, we know that ease of use is the foundation of widespread adoption of a new technology for any business. When we get into discussions with each enterprise, customer, or team, we see clearly that each has its own issues and road map to address. As there is no such thing as the ultimate or only use case for micro-segmentation, we start with the customer in mind; our product can support any flavor, any need. Some of the most popular use cases include separation of environments such as Dev/Prod, ring-fencing critical assets, micro-segmenting digital crown jewels, compliance and least privilege, and more general IT hygiene such as locking down vulnerable ports and protocols.

To make these use cases a reality, organizations need deep visibility to understand what’s going on in the data center from a human lens. It’s important to have flexible labeling so that you can visualize your data center in the same language you use to speak about it. We also enhance this by allowing users to see a particular view based on their need or role within the company; a compliance officer would use the map differently than the CTO or a DevSecOps engineer, for example. In addition, organizations need both blacklist and whitelist policy models for intuitive threat prevention and response. Our customers benefit from our cutting-edge visibility tool, Reveal, which is completely customizable and checks all of these boxes, as well as from our flexible policy models that include both whitelisting and blacklisting.

To learn more about how our mapping and visibility work, how they help enforce policy with our uniquely flexible policy model, and how they show quick value, watch our full presentation below.

Addressing Questions and Challenges

With only one hour to present our product, there were a lot of questions that we couldn’t get to. Maybe next time! Here are three of the topics we wanted to address further.

Q. How does being agent-based affect your solution?

One of the questions raised during the session concerned the fact that Guardicore micro-segmentation is an agent-based solution. The benefits of the approach are clear, but people often want to know what the agent’s impact is on the workload.

The first thing we always tell customers who ask this question is that our solution is tried and tested. It is already deployed in some of the world’s biggest data centers, such as Santander and Openlink, and works with a negligible impact on performance. Our agent footprint is very small: less than 0.1% CPU, 185MB on Linux and 800MB on Windows. Resource usage is also configurable, allowing you to tailor the agent to your needs. At the same time, we support the largest number of operating systems compared to other vendors.

If the agent is still not suitable, you can use our L4 collectors, which sit at the hypervisor level or switch level, and give you full visibility, and use our virtual appliance for enforcement, as we touched upon during the talk. As experts in segmentation, we can talk you through your cybersecurity use cases, and discuss which approach works best, and where.

Q. Which breach detection capabilities are included?

Complementary controls are an important element of our solution, because they contribute to the ease of use and simplicity. One tool for multiple use cases offers a powerful competitive edge. Here are three of the tools we include:

  • Reputation Analysis: We can identify granular threats, including suspicious domain names, IP addresses, and even file hashes in traffic flows.
  • Dynamic Deception: The latest in incident response, this technique tricks attackers, diverting them to a honeypot environment where Labs can learn from their behavior.
  • File Integrity Monitoring: A prerequisite for many compliance regulations, this change-detection mechanism will immediately alert to any unauthorized changes to files.

Q. How do you respond to a known threat?

Flexible policy models allow us to respond quickly and intuitively when it comes to breach detection and incident response. Some vendors have a whitelist-only model, which impedes their ability to take immediate action and is not enough in a fast-paced hybrid data center. In contrast, we can immediately block a known threat or undesired port by adding it to the blacklist. One example might be blocking Telnet across the whole environment, or blocking FTP at the process level. This helps us show real value from day one. Composite models allow complex rules like the real-world example we used in the presentation: SSH is only allowed in the Production environment if it comes from jumpboxes. With Guardicore, this takes two simple rules, while with a whitelist-only model it would take thousands.
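
The two-rule example can be sketched as follows. The rule syntax and first-match evaluation order are illustrative assumptions, not Centra’s actual policy engine: one whitelist rule allows SSH from jumpboxes into Production, and one blacklist rule blocks all other SSH into Production.

```python
# Sketch of a composite (whitelist + blacklist) model with first-match
# evaluation. Rule and label names are hypothetical.

rules = [
    {"action": "allow", "port": 22,
     "src_label": {"role": "jumpbox"}, "dst_label": {"env": "Production"}},
    {"action": "block", "port": 22, "dst_label": {"env": "Production"}},
]

def evaluate(src, dst, port):
    """Return the action of the first rule matching this connection."""
    for rule in rules:
        if rule["port"] != port:
            continue
        if rule.get("src_label") and not all(
            src.get(k) == v for k, v in rule["src_label"].items()
        ):
            continue
        if all(dst.get(k) == v for k, v in rule["dst_label"].items()):
            return rule["action"]
    return "allow"  # default for traffic no rule matches

jumpbox = {"role": "jumpbox"}
laptop = {"role": "workstation"}
prod_db = {"env": "Production"}

assert evaluate(jumpbox, prod_db, 22) == "allow"
assert evaluate(laptop, prod_db, 22) == "block"
```

With a whitelist-only model, the same intent would instead require enumerating an allow rule for every legitimate non-SSH flow into Production, which is where the "thousands of rules" figure comes from.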

Security Field Day 2019 staff

Until Next Time!

We loved presenting at Field Day, and want to thank all the delegates for their time and their interesting questions! If you want to talk more about any of the topics raised in the presentation, reach out to me via LinkedIn.

In the meantime, learn more about securing a hybrid modern data center which works across legacy infrastructure as well as containers and clouds.

Download our white paper

NSX-T vs. NSX-V – Key Differences and Pitfalls to Avoid

While working with many customers on segmentation projects, we often get questions about alternative products to Guardicore. This is expected, and, in fact, welcome, as we will take on any head-to-head comparison of Guardicore Centra to other products for micro-segmentation.

Guardicore vs. NSX-T vs. NSX-V

One of the common comparisons we get is to VMware NSX. And specifically, we get a lot of questions from customers about the difference between VMware’s two offerings in this space, NSX-T vs NSX-V. Although many security and virtualization experts have written about the differences between the two offerings, including speculation on whether or not these two solutions will merge into a single offering, we think we offer a unique perspective on some of the differences, and what to pay attention to in order to ensure segmentation projects are successful. Also, regardless of which product variant an organization is considering, there are several potential pitfalls with NSX that are important to understand and consider before proceeding with deployment.

NSX-T vs. NSX-V: Key Differences

NSX-V (NSX for “vSphere”) was the first incarnation of NSX and has been around for several years now. As the name suggests, NSX-V is designed for on-premises vSphere deployments only and is architected so that a single NSX-V manager is tied to a single VMware vCenter Server instance. It is only applicable to VMware virtual machines, which leaves a coverage gap for organizations that use a hybrid infrastructure model. In fact, the 2019 RightScale State of the Cloud Report shows that 94% of organizations use the cloud (28% of those prioritizing hybrid cloud), with VMware vSphere at 50% of private cloud adoption, flat from last year. So, given the large number of organizations embracing the cloud, interest in NSX-V is waning.

NSX-T (NSX “Transformers”) was designed to address the use cases that NSX-V could not cover, such as multi-hypervisors, cloud, containers and bare metal servers. It is decoupled from VMware’s proprietary hypervisor platform and incorporates agents to perform micro-segmentation on non-VMware platforms. As a result, NSX-T is a much more viable offering than NSX-V now that hybrid cloud and cloud-only deployment models are growing in popularity. However, NSX-T remains limited by feature gaps when compared to both NSX-V and other micro-segmentation solutions, including Guardicore Centra.

Key Pitfalls to Avoid with NSX

While the evolution to NSX-T was a step in the right direction for VMware strategically, a number of limitations continue to constrain NSX’s value and effectiveness, particularly when compared to specialized micro-segmentation solutions like Guardicore Centra.

The following are some of the key pitfalls to avoid when considering NSX.

  • Solution Complexity
    VMware NSX requires multiple tools to cover the entire hybrid data center environment. This means NSX-V for ESXi hosts, NSX-T for bare-metal servers, and NSX-Cloud for VMware cloud hosting. In addition, it is a best practice in any micro-segmentation project to start with visibility, mapping flows and classifying the assets where policy will be applied. This requires a separate product, vRealize Network Insight (vRNI). So, a true hybrid infrastructure requires multiple products from VMware, plus the need to synchronize policy across them. This leads to more complexity and significantly more time to achieve results. In addition, vRNI is not well integrated into NSX, which makes the task of moving from visibility to policy a long and complex process: it requires manually downloading and uploading files to share information between tools. But don’t just take our word for it. A recent Gartner report, Solution Comparison for Microsegmentation Products (April 2019), stated that VMware NSX “comes with massive complexity and many moving parts”. And for organizations that have implemented the VMware SDN, there is additional complexity. For example, the network virtualization service alone requires an architecture that consists of “logical switches, logical routers, NSX Edge Nodes, NSX Edge Clusters, Transport Nodes, Transport Zones, the logical firewall and logical load balancers,” according to Gartner. Not to mention all the manual configuration steps required to implement it.
  • Overspending on Licensing
    For many organizations, segmentation requirements develop in stages. They may not even be consciously beginning a micro-segmentation project. It could start as a focused need to protect a critical set of “digital crown jewels” or subsets of the infrastructure that are subject to regulatory requirements. VMware’s licensing model for NSX does not align well with practical approaches to segmentation like these. When deploying NSX, an organization must license its entire infrastructure. If a segmentation project applies to only 20 percent of the total infrastructure, NSX licenses must still be purchased for the remaining 80 percent, regardless of whether they will ever be used.
  • Management Console Sprawl
    As mentioned above, detailed infrastructure visualization is a critical building block for effective micro-segmentation. You can’t protect what you can’t see. While purpose-built micro-segmentation products integrate visualization and micro-segmentation into a single interface, NSX does not include native visualization capabilities. Instead, NSX requires the use of a separately licensed product, vRealize Network Insight, for infrastructure visibility. This adds both cost and complexity. It also makes it much more difficult and time-consuming to translate insights from visualization into corresponding micro-segmentation policies. The impact is significant, as it puts additional strain on already over-taxed IT resources and results in less effective and less complete segmentation policies.
  • Limited Visibility
    Even when NSX customers choose to deploy vRNI as part of an NSX deployment, the real-time visibility it provides is limited to Layer 4 granularity. This does not provide the level of visibility needed to set fine-grained, application-aware policies that protect against today’s data center and cloud infrastructure threats. As environments and security requirements become more sophisticated, it is often necessary to combine Layer 4 and Layer 7 views to gain a complete picture of how applications and workloads behave and to develop strategies for protecting them. Also, while real-time visibility is critical, historical visibility plays an important role in segmentation as well. IT environments – and the threat landscape – are constantly changing, and the ability to review historical activity helps security teams continuously improve segmentation policies over time. However, NSX and vRNI lack any historical reporting or views.
  • Enforcement Dependencies and Limitations
    As with visualization, it is important to be able to implement policy enforcement at both the network and process levels. Native NSX policy enforcement can only be performed at the network level. It is possible to achieve limited application-level policy control by using NSX in conjunction with a third VMware product, VMware Distributed Firewall. However, even using VMware Distributed Firewall and NSX together has significant limitations. For example, VMware Distributed Firewall can only be used with on-premises vSphere deployments or with VMware’s proprietary VMware Cloud on AWS deployment model. This makes it a poor fit for modern hybrid cloud infrastructure.
  • Insufficient Protection of Legacy Assets
    While most organizations strive to deploy key applications on modern operating systems, legacy assets remain a fact of life in many environments. While the introduction of agents with NSX-T broadens platform coverage beyond the VMware stack, operating system compatibility is highly constrained. NSX-T agent support is limited to Windows Server 2012 or newer and the latest Linux distributions. Many organizations continue to run high-value applications on older versions of Windows and Linux. The same is true for legacy operating systems like Solaris, AIX, and HP-UX. In many ways, these legacy systems are leading candidates for protection with micro-segmentation, as they are less likely than more modern systems to have current security updates available and applied. But they cannot be protected with NSX.
  • Inability to Detect Breaches
    While the intent of micro-segmentation policies is to proactively block attacks and lateral movement attempts, it is important to complement policy controls with breach detection capabilities. Doing so acts as a safety net, allowing security teams to detect and respond to any malicious activities that micro-segmentation policies do not block. Detecting infrastructure access from sources with questionable reputation and monitoring for network scans and unexpected file changes can both uncover in-progress security incidents and help inform ongoing micro-segmentation policy improvements. NSX lacks any integrated breach detection capabilities.

With the introduction of NSX-T, VMware took an important step away from the proprietary micro-segmentation model it originally created with NSX-V. But even NSX-T requires customers to lock themselves into a sprawling collection of VMware tools. And some key elements, such as VMware Distributed Firewall, remain highly aligned with VMware’s traditional on-premises model.

In contrast, Guardicore Centra is a software-defined micro-segmentation solution that was designed from day one to be platform-agnostic. This makes it much more effective than NSX at applying micro-segmentation to any combination of VMware and non-VMware infrastructure.

Centra also avoids the key pitfalls that limit the usefulness of NSX.

For example, Centra offers:

  • Flexible licensing that can be applied to a subset of the overall infrastructure if desired.
  • Visualization capabilities that are fully integrated with the micro-segmentation policy creation process.
  • Visibility and integrated enforcement at both Layer 4 and Layer 7 for more granular micro-segmentation control.
  • Extensive support for legacy operating systems, including older Windows and Linux versions, Solaris, AIX, and HP-UX.
  • Fully integrated breach detection and response capabilities, including reputation-based detection, dynamic deception, file integrity monitoring, and network scan detection.

Don’t Let NSX Limitations Undermine Your Micro-Segmentation Strategy

Before considering NSX, see first-hand how Guardicore Centra can help you achieve a simpler and more effective micro-segmentation approach.

Interested in more information on how Guardicore Centra is better for your needs than any NSX amalgam? Read our Guardicore vs. VMware NSX Comparison Guide.


How to Establish your Next-Gen Data Center Security Strategy

In 2019, 46 percent of businesses are expected to use hybrid data centers, and it is therefore critical for these businesses to be prepared to deal with the inherent security challenges. Developing a next gen data center security strategy that takes into account the complexity of hybrid cloud infrastructure can help keep your business operations secure by way of real-time responsiveness, enhanced scalability, and improved uptime.

One of the biggest challenges of securing the next gen data center is accounting for the various silos that develop. Every cloud service provider has its own methods to implement security policies, and those solutions are discrete from one another. These methods are also discrete from on-premises infrastructure and associated security policies. This siloed approach to security adds complexity and increases the likelihood of blind spots in your security plan, and isn’t consistent with the goals of developing a next gen data center. To overcome these challenges, any forward-thinking company with security top of mind requires security tools that enable visibility and policy enforcement across the entirety of a hybrid cloud infrastructure.

In this piece, we’ll review the basics of the next gen data center, dive into some of the details of developing a next gen data center security strategy, and explain how Guardicore Centra fits into a holistic security plan.

What is a next gen data center?

The idea of hybrid cloud has been around for a while now, so what’s the difference between what we’re used to and a next gen data center? In short, next gen data centers are hybrid cloud infrastructures that abstract away complexity, automate as many workflows as possible, and include scalable orchestration tools. Scalable technologies like SDN (software defined networking), virtualization, containerization, and Infrastructure as Code (IaC) are hallmarks of the next gen data center.

Given this definition, the benefits of the next gen data center are clear: agile, scalable, standardized, and automated IT operations that limit costly manual configuration, human error, and oversights. However, when creating a next gen data center security strategy, enterprises must ensure that the policies, tools, and overall strategy they implement are able to account for the inherent challenges of the next gen data center.

Asking the right questions about your next gen data center security strategy

There are a number of questions enterprises must ask themselves as they begin to design a next gen data center and a security strategy to protect it. Here, we’ll review a few of the most important.

  • What standards and compliance regulations must we meet? Regulations such as HIPAA, PCI-DSS, and SOX subject enterprises to strict security and data protection requirements that must be met, regardless of other goals. Failure to account for these requirements in the planning stages can prove costly in the long run should you fail an audit due to a simple oversight.
  • How can we gain granular visibility into our entire infrastructure? One of the challenges of the next gen data center is the myriad of silos that emerge from a security and visibility perspective. With so many different IaaS, SaaS, and on-premises solutions going into a next gen data center, capturing detailed visibility of data flows down to the process level can be a daunting task. However, in order to optimize security, this is a question you’ll need to answer in the planning stages. If you don’t have a baseline of what traffic flows on your network look like at various points in time (e.g. peak hours on a Monday vs. midnight on a Saturday), identifying and reacting to anomalies becomes almost impossible.
  • How can we implement scalable, cross-platform security policies? As mentioned, the variety of solutions that make up a next gen data center can lead to a number of silos and discrete security policies. Managing security discretely for each platform flies in the face of the scalable, DevOps-inspired ideals of the next gen data center. To ensure that your security can keep up with your infrastructure, you’ll need to seek out scalable, intelligent security tools. While security is often viewed as hamstringing DevOps efforts, the right tools and strategy can help bridge the gap between these two teams.

Finding the right solutions

Given what we have reviewed thus far, we can see that the solutions to the security challenges of the next gen data center need to be scalable and compliant, provide granular visibility, and function across the entirety of your infrastructure.

Guardicore Centra is uniquely capable of addressing these challenges and helping secure the next gen data center. For example, not only can micro-segmentation help enable compliance to standards like HIPAA and PCI-DSS, but Centra offers enterprises the level of visibility required in the next gen data center. Centra is capable of contextualizing all application dependencies across all platforms to ensure that your micro-segmentation policies are properly implemented. Regardless of where your apps run, Centra helps you overcome silos and provides visibility down to the process level.

Further, Centra is capable of achieving the scalability that the next gen data center demands. To help conceptualize how scalable micro-segmentation with Guardicore Centra can be, consider that a typical LAN build-out can last for a few months and require hundreds of IT labor hours. A comparable micro-segmentation deployment, on the other hand, takes about a month and significantly fewer IT labor hours.

Finally, Centra can help bridge the gap between DevOps and Security teams by enabling the use of “zero trust” security models. The general idea behind zero trust is, as the name implies, nothing inside or outside of your network should be trusted by default. This shifts focus to determining what is allowed as opposed to being strictly on the hunt for threats, which is much more conducive to a modern DevSecOps approach to the next gen data center.

Guardicore helps enable your next gen data center security strategy

When developing a next gen data center security strategy, you must be able to account for the nuances of the various pieces of on-premises and cloud infrastructure that make up a hybrid data center. A big part of doing so is selecting tools that minimize complexity and can scale across all of your on-premises and cloud platforms. Guardicore Centra does just that and helps implement scalable and granular security policies to establish the robust security required in the next gen data center.

If you’re interested in redefining and adapting the way you secure your hybrid cloud infrastructure, contact us to learn more.

Want to know more about proper data center security? Get our white paper about operationalizing a proper micro-segmentation project.


Determining security posture, and how micro-segmentation can improve it

As the recent Quora breach that compromised 100 million user accounts demonstrates, the threat of a cyber attack is ever present in the modern IT environment. Cybercrime and data breaches continue to plague small businesses and enterprises alike, and network security teams are constantly working to stay one step ahead of an attack. This is no easy task since intrusion attempts occur daily and are constantly evolving to find the smallest weakness to exploit.

Attackers can launch direct attacks on data centers and clouds, carry out cryptojacking to mine cryptocurrency, devise advanced persistent threat (APT) attacks to extract data while remaining hidden within a network, or even deploy fileless malware to exploit in-memory vulnerabilities and access sensitive system resources.

For these reasons, it’s more important than ever for IT teams to evaluate their current security posture to ensure the safety of their sensitive information and assets. This is particularly true in hybrid cloud environments where discrete platforms take siloed approaches to security that can make infrastructure-wide visibility and a holistic approach to security policies extremely difficult. In this piece, we’ll dive into the basics of security posture and explain how Guardicore Centra can help you improve yours.

Security posture defined

Security posture is the overall defensive capability a business has over its computing infrastructure. Also referred to as cybersecurity posture, the term covers not only hardware and software resources, but also the people, policies, and processes in place to maintain security. It is then necessary to prioritize which areas require the most protection, manage the greatest risks, identify weaknesses, and have incident response and disaster recovery plans in place in the event a breach does occur. All of these factors determine the effectiveness, or lack thereof, of an organization’s security posture.

Identifying the areas that deserve attention

In order to determine an organization’s security posture, it is first the responsibility of the security team to have a complete and thorough understanding of the risks associated with the operation of its computing systems. Research must be conducted to quantify attack surfaces, determine risk tolerance, and identify areas within the infrastructure that require more focus.

This planning stage is particularly difficult when attempting to account for the complexities that come with a hybrid cloud infrastructure, as the dynamics of a hybrid cloud make it difficult to get a holistic view of enterprise information systems. Often different policies and controls are in place for different endpoints that exist in different clouds or on-premises.

All of this internal assessment and process scrutiny is essential to develop a foundation for a robust security posture. However, the right tools are required to enforce policies that support it. Modern integrated security techniques such as micro-segmentation and process-level visibility, which are enabled by solutions like Guardicore Centra, help enterprises ensure that they are effectively implementing their strategy and capable of meeting the security challenges of the modern hybrid cloud.

The impact of enhanced visibility on security posture

The heterogeneous nature of a hybrid cloud environment makes it difficult to scale security policies, since there usually is not an effective way to account for the entire infrastructure. Further, because you are dealing with multiple platforms and varying security controls, the possibility of blind spots and oversights increases.

The visualization features of Guardicore Centra were created with these challenges in mind. Using Centra, enterprises can drill down and rapidly discover specific applications and flows within a network, regardless of the particular platform a given node may be running on. Since Guardicore can provide visibility to the process level and enable inspection of systems down to the TCP/UDP port level, blind spots that may otherwise become exploit targets can be eliminated. In a hybrid cloud environment this means you are able to automatically and rapidly learn how applications behave within your network to build a baseline of expected behavior, and better understand how to harden your infrastructure.

The value of micro-segmentation

Given that the more lateral movement an attacker can perform after a breach, the more damage they can do, it is easy to see the value of micro-segmentation. We’re all familiar with the benefits of network segmentation using techniques such as access control lists, firewalls, and VLANs; micro-segmentation brings these controls down to the most granular levels and applies them across the entire hybrid cloud infrastructure. For users of Centra, this means least-access policies can limit access to specific groups of users (e.g. database admins), restrict access to certain applications (e.g. a MySQL database server), and restrict access to specific ports (e.g. TCP 3306), with the flexibility of process-level context and cross-platform coverage.
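As a rough host-firewall analogue of such a least-access rule (illustrative iptables commands, not Centra's actual policy syntax; the admin subnet address is hypothetical):

```shell
# Allow MySQL (TCP 3306) only from the database-admin subnet,
# then drop everything else destined for that port
iptables -A INPUT -p tcp --dport 3306 -s 10.0.42.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP
```

A micro-segmentation policy expresses the same intent with labels and process context rather than raw IPs and ports, so it survives workloads moving between hosts and platforms.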

As an added benefit, Centra suggests rules based on analysis of historical data, making the development of robust policies significantly easier. By removing complexity, enabling micro-segmentation, and providing process-level visibility, Centra reduces blind spots and limits exposed attack surfaces, two key components of improving security posture.

The importance of threat detection and proactive responses

In addition to enhanced visibility and micro-segmentation, identifying unrecognized and malicious intrusions and reducing dwell time is an important part of improving security posture. A pragmatic, modern organization understands that despite the best-laid plans, breaches may occur, and when they do, they must be rapidly detected, contained, and remediated.

To this end, Centra is uniquely capable of meeting the breach detection and incident response challenges enterprises with hybrid cloud infrastructures face. Centra uses three different detection methods (Dynamic Deception, Reputation Analysis, and Policy-Based Detection) to rapidly identify and react to attacks. By doing so, Centra helps ensure that in the event a security breach does occur, you are able to reduce the damage and minimize dwell time. This proactive approach to threat detection and response rounds out the Centra offering and helps you ensure your hybrid cloud infrastructure is secure and flexible enough to meet the challenges of modern IT security without sacrificing the performance of your infrastructure or adding unnecessary complexity.

Interested in learning more?

Guardicore Centra can help you significantly enhance your security posture, particularly in complex, difficult-to-manage hybrid cloud environments. The benefits of hybrid cloud infrastructure are clear from a capex and scalability standpoint, but the technology is not without inherent risk: hybrid cloud suffers from a myriad of siloed approaches to security policies and controls for reducing attack surfaces in an environment.

Adopting a proactive approach to security and leveraging security solutions that enable micro-segmentation are important steps towards enhancing your security posture and protecting your systems from falling victim to the next data breach.

To learn more about how micro-segmentation can benefit your enterprise, check out the micro-segmentation hub, or set up a demo to see Guardicore Centra in action.

Want to learn more about securing your hybrid cloud environment and strengthening your security posture? Get our white paper on best practices for the technical champion.


5 Docker Security Best Practices to Avoid Breaches

Docker has had a major impact on the world of IT over the last five years, and its popularity continues to surge. Since its release in 2013, 3.5 million apps have been “Dockerized” and 37 billion Docker containers have been downloaded. Enterprises and individual users have been implementing Docker containers in a variety of use-cases to deploy applications in a fast, efficient, and scalable manner.

There are a number of compelling benefits for organizations that adopt Docker, but as with any technology, there are security concerns as well. For example, the recently discovered runc container breakout vulnerability (CVE-2019-5736) could allow malicious containers to compromise a host machine. This means organizations that adopt Docker need to be sure to do so in a way that takes security into account. In this piece, we’ll provide an overview of the benefits of Docker and then dive into 5 Docker security best practices to help keep your infrastructure and applications secure.

Benefits of Docker

Many new to the world of containerization and Docker are often confused about what makes containers different from running virtual machines on top of a hypervisor. After all, both are ways of running multiple logically isolated apps on the same hardware.

Why then would anyone bother with containerization if virtual machines are available? Why are so many DevOps teams such big proponents of Docker? Simply put, containers are more lightweight, scalable, and a better fit for many use cases related to automation and application delivery. This is because containers abstract away the need for an underlying hypervisor and can run on a single operating system.

Using web apps as an example, let’s review the differences.

In a typical hypervisor/virtual machine configuration you have bare metal hardware, the hypervisor (e.g. VMware ESXi), the guest operating system (e.g. Ubuntu), the binaries and libraries required to run an application, and then the application itself. Generally, another set of binaries and libraries for a different app would require a new guest operating system.

With containerization you have bare metal hardware, an operating system, the container engine, the binaries and libraries required to run an application, and the application itself. You can then stack more containers running different binaries and libraries on the same operating system, significantly reducing overhead and increasing efficiency and portability.
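One way to see the shared-kernel point concretely (an illustrative sketch assuming a Linux host with the Docker CLI installed; the `alpine` image is just an example):

```shell
# The host reports its running kernel version
uname -r

# A container reports the same version: it shares the host's kernel
# rather than booting its own operating system, which is what a VM
# on a hypervisor would do
docker run --rm alpine uname -r
```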

When coupled with orchestration tools like Kubernetes or Docker Swarm, the benefits of Docker are magnified even further.

Docker Security Best Practices

With an understanding of the benefits of Docker, let’s move on to 5 Docker security best practices that can help you address your Docker security concerns and keep your network infrastructure secure.

#1 Secure the Docker host

As any infosec professional will tell you, truly robust security must be holistic. With Docker containers, that means not only securing the containers themselves, but also the host machines that run them. Containers on a given host all share that host’s kernel. If an attacker is able to compromise the host, all your containers are at risk. This means that using secure, up-to-date operating systems and kernel versions is vitally important. Ensure that your patch and update processes are well defined, and audit systems regularly for outdated operating system and kernel versions.
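As a starting point for such an audit, a quick sketch (assumes a Debian/Ubuntu-style host; substitute your distribution's package tooling where noted):

```shell
# Record the running kernel version so it can be compared against
# your patch baseline
uname -r

# List pending package updates if apt is available (Debian/Ubuntu);
# on RHEL-family hosts use `yum check-update` or `dnf check-update`
if command -v apt >/dev/null 2>&1; then
  apt list --upgradable 2>/dev/null
fi
```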

#2 Only use trusted Docker images

It’s a common practice to download and leverage Docker images from Docker Hub. Doing so provides DevOps teams an easy way to get a container for a given purpose up and running quickly. Why reinvent the wheel?

However, not all Docker images are created equal, and a malicious user could create an image that includes backdoors and malware to compromise your network. This isn’t just a theoretical possibility, either. Last year, Ars Technica reported that a single Docker Hub account posted 17 images that included a backdoor. These backdoored images were downloaded 5 million times. To help avoid falling victim to a similar attack, only use trusted Docker images. It’s good practice to use images that are “Docker Certified” whenever possible, or to use images from a reputable “Verified Publisher”.
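One concrete guardrail here is Docker Content Trust, which makes the Docker CLI refuse to pull unsigned images (a sketch assuming the Docker CLI; the image name is just an example):

```shell
# Require signed images for pull, run, and build operations
# in this shell session
export DOCKER_CONTENT_TRUST=1

# This now succeeds only if the tag carries a valid signature;
# pulling an unsigned tag fails with a trust error instead
docker pull alpine:latest
```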

#3 Don’t run Docker containers using --privileged or --cap-add

If you’re familiar with why you should NOT “sudo” every Linux command you run, this tip will make intuitive sense. The --privileged flag gives your container full capabilities, including access to kernel capabilities that could be dangerous, so only use this flag to run your containers if you have a very specific reason to do so.

Similarly, you can use the --cap-add switch to grant specific capabilities that aren’t granted to containers by default. Following the principle of least privilege, you should only use --cap-add if there is a well-defined reason to do so.
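In practice, the least-privilege pattern is to drop all capabilities and add back only what the workload needs, for example (a sketch; the image and the specific capability are illustrative):

```shell
# Start with no Linux capabilities at all, then grant only the one
# this workload needs (here: binding to ports below 1024)
docker run --rm \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  nginx:latest
```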

#4 Use Docker Volumes for your data

By storing data (e.g. database files and logs) in Docker Volumes as opposed to within a container, you enhance data security and help ensure your data persists even if the container is removed. Additionally, volumes can enable secure data sharing between multiple containers, and their contents can be encrypted for secure storage at third-party locations (e.g. a co-location data center or cloud service provider).
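A minimal sketch of the pattern (the volume name, container name, and image are illustrative):

```shell
# Create a named volume and mount it where the app writes its data
docker volume create db-data
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=example \
  -v db-data:/var/lib/mysql \
  mysql:8.0

# Removing the container does not remove the volume; the data persists
# and can be mounted into a replacement container
docker rm -f db
docker volume inspect db-data
```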

#5 Maintain Docker Network Security

As container usage grows, teams develop a larger and more complex network of Docker containers within Kubernetes clusters, and analyzing and auditing traffic flows becomes harder. Striking a balance between security and performance in these environments can be difficult. If security policies are too strict, the inherent advantages of agility, speed, and scalability offered by containers are hamstrung. If they are too lax, breaches can go undetected and an entire network could be compromised.

Process-level visibility, tracking network flows between containers, and effectively implementing micro-segmentation are all important parts of Docker network security. Doing so requires tools and platforms that can help integrate with Docker and implement security without stifling the benefits of containerization. This is where Guardicore Centra can assist.

How Guardicore Centra helps enhance Docker Network Security

The Centra security platform takes a holistic approach to network security that includes integration with containers. Centra is able to provide visibility into individual containers, track network flows and process information, and implement micro-segmentation for any size deployment of Docker & Kubernetes.

For example, with Centra, you can create scalable segmentation policies that take into account both pod-to-pod traffic flows and flows between pods and bare-metal or virtual machines, without negatively impacting performance. Additionally, Centra can help DevSecOps teams implement and demonstrate the monitoring and segmentation required for compliance with standards such as PCI-DSS 3.2. For more on how Guardicore Centra can help enable Docker network security, check out the Container Security Use Case page.

Interested in learning more?

There are a variety of Docker security issues you’ll need to be prepared to address if you want to securely leverage containers within your network. By following the 5 Docker security best practices we reviewed here, you’ll be off to a great start. If you’re interested in learning more about Docker network security, check out our How to Leverage Micro-Segmentation for Container Security webinar. If you’d like to discuss Docker security with a team of experts that understand Docker security requires a holistic approach that leverages a variety of tools and techniques, contact us today!

CVE-2019-5736 – runC container breakout

A major vulnerability related to containers was disclosed on February 12th, 2019. The vulnerability allows a malicious container that is running as root to break out into the hosting OS and gain administrative privileges.

Adam Iwanuik, one of the researchers who took part in the discovery, shares in detail the different paths taken to discover this vulnerability.

The mitigations suggested as part of the research for unpatched systems are:

  1. Use Docker containers with SELinux enabled (--selinux-enabled). This prevents processes inside the container from overwriting the host docker-runc binary.
  2. Use a read-only file system on the host, at least for storing the docker-runc binary.
  3. Use a low privileged user inside the container or a new user namespace with uid 0 mapped to that user (then that user should not have write access to runC binary on the host).

The first two suggestions are pretty straightforward, but I would like to elaborate on the third. It’s important to understand that Docker containers run as root by default unless stated otherwise. This does not mean the container also has root access to the host OS, but it is the main prerequisite for this vulnerability to work.
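In practice there are two common ways to avoid running as root, sketched below (the image name myapp and the UID:GID pair are placeholders for your own workload):

```shell
# Option 1: run the container process as an unprivileged UID:GID
# instead of the default root user.
docker run -d --user 1000:1000 myapp

# Option 2: enable user namespace remapping in the Docker daemon,
# so that UID 0 inside containers maps to an unprivileged UID on the host.
dockerd --userns-remap=default
```

With either option, even a process that believes it is root inside the container lacks the host-level privileges this exploit depends on.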

To quickly check whether your host is running any containers as root:


#!/bin/bash

# List running containers whose configured user is root.
# A container runs as root if its User field is "0", "root",
# or empty (Docker defaults to root when no user is set).
containers=$(docker ps --format '{{.Names}}')

echo "List of containers running as root"

for container in $containers
do
    user=$(docker inspect --format '{{.Config.User}}' "$container")
    if [ -z "$user" ] || [ "$user" = "0" ] || [ "$user" = "root" ]; then
        echo "Container name: $container"
    fi
done

In any case, as a best practice you should prevent your users from running containers as root. This can be enforced by existing controls of the common orchestration/management systems. For example, OpenShift prevents users from running containers as root out of the box, so your job here is basically done. In Kubernetes, however, containers run as root by default, but you can easily configure a PodSecurityPolicy to prevent this, as described here.
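As a rough sketch, a PodSecurityPolicy that rejects root containers might look like the following (the policy name is illustrative, and the other `rule` fields are set permissively just to keep the example minimal):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-root-containers
spec:
  privileged: false
  # Reject any pod whose containers would run as UID 0.
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
```

Note that a PodSecurityPolicy only takes effect once the PodSecurityPolicy admission controller is enabled and the policy is bound to the relevant users or service accounts via RBAC.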

In order to fix this issue, you should patch your container runtime. Whether you are using a container runtime directly (Docker) or some flavor of container orchestration system (Kubernetes, Mesos, etc.), you should look up the patching instructions for your specific software version and OS.
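To confirm what you are actually running, you can query the versions in use (Docker 18.09.2 was the first Docker release to ship the patched runc; check your vendor's advisory for the exact fixed version of your runtime):

```shell
# Print the Docker server version and the runc commit it bundles.
docker version --format '{{.Server.Version}}'
docker info --format '{{.RuncCommit.ID}}'

# If runc is installed standalone, check it directly.
runc --version
```

Compare the reported versions against the fixed releases listed in the CVE-2019-5736 advisories before assuming a host is safe.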

How can Guardicore help?

Guardicore provides a network security solution for hybrid cloud environments that spans multiple compute architectures, containers being one of them. Guardicore Centra is a holistic micro-segmentation solution that provides process-level visibility and enforcement of traffic flows for both containers and VMs. This is extremely important in the case of this CVE: if a malicious actor breaks out, the follow-up attack would originate from the host VM or a different container rather than from the original container.

Guardicore can mitigate this risk by controlling which processes can actually communicate between the containers or VMs covered by the system.

Learn more about containers and cloud security