
Limitations of Azure Security Groups: Policy Creation Across Multiple vNets

In our previous post, we discussed the limitations of Cloud Security Groups and flow logs within a specific vNet. In today’s post, we will focus on another specific scenario and use case that is common to most organizations, discussing Cloud Security Group limitations across multiple regions and vNets. We will then deep dive into Guardicore’s value in this scenario.

In a recent analysis, Gartner mentions the inherent incompatibility between existing monitoring tools and the cloud providers’ native monitoring platforms and data handling solutions. Gartner explains that an organization’s own monitoring strategies must evolve to accommodate these differences.

As the infrastructure monitoring feature sets offered by cloud providers’ native tools are continuing to evolve and mature, Gartner comments that “Gaps still exist between the capabilities of these tools and the monitoring requirements of many production applications… Remediation mechanisms can still require significant development and integration efforts, or the introduction of a third-party tool or service.”

To understand the challenges faced when using native monitoring tools, in this post I’ll again share details from an experiment that was performed by one of our customers. The customer created a simulation of multiple applications running in Azure, and created security policies between these applications.

The lab setup

Let’s look at the simulation environment. There are multiple Azure subscriptions, and within each subscription there is a Virtual Network (vNet). In this case, SubscriptionA is the Production environment, based in the Brazil South region, and SubscriptionB is the Development environment, based in West Europe. Each has its own vNet, and the two vNets are peered together.

ASGs:
The team created 3 Application Security Groups (ASGs). Note that the locations correspond to the locations used for the Virtual Networks (VNets).

The customer wanted to test the following scenario:
Block all communication from the CMS application over port 80, unless CMS communicates over this port with the SWIFT and Billing applications.

However, CMS application servers reside in the West Europe region, and the SWIFT and Billing application servers reside in the Brazil South region.

In this scenario, with two Virtual Networks (vNets), our customer wanted to know: will an Application Security Group (ASG) that exists in one vNet be available for reference in the other vNet’s Network Security Group (NSG)? Would it be possible to create a rule from the ASG for the CMS app servers to the SWIFT and Billing applications even though they are in separate vNets?

The limitations and constraints of using Azure Security Groups were immediately clear

The team attempted to add a new inbound security rule from the CMS servers’ ASG to the SWIFT servers’ ASG. In the Azure portal, the only Application Security Group (ASG) that appeared in the list of options was the local one, the CMS servers ASG.

Let’s explore what happened above. According to the documentation provided by Azure:

  • Each subscription in Azure is assigned to a single, specific region.
  • Multiple subscriptions cannot share the same vNet.
  • NSGs can only be applied within a vNet.

Thus, in this setup, each region contains its own vNet, and each vNet has its own NSGs in place. The team attempted a few options to troubleshoot this issue using Security Groups.

First, they attempted to use ASGs to resolve this and create policies across regions. However, the customer came up against the following Azure rule:
All network interfaces assigned to an ASG have to exist in the same vNet. You cannot add network interfaces from different vNets to the same application security group.
If your application spans regions or vNets, you cannot create a single ASG that includes all servers within that application. A similar rule applies when application dependencies cross regions. ASGs therefore couldn’t solve the problem with policy creation.

Next, the customer tried combining two ASGs from different vNets to achieve this policy. Again, Azure rules made this impossible, as you can see below.
If you specify an application security group as the source and destination in a security rule, the network interfaces in both application security groups must exist in the same virtual network. For example, if AsgLogic contained network interfaces from VNet1, and AsgDb contained network interfaces from VNet2, you could not assign AsgLogic as the source and AsgDb as the destination in a rule. All network interfaces for both the source and destination application security groups need to exist in the same virtual network.

Simply put, according to the Azure documentation, it is not possible to create an NSG rule that references ASGs whose member interfaces live in different vNets.
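To make the constraint concrete, here is a rough sketch of the kind of rule the team tried to create, written with the Azure SDK for Python (azure-mgmt-network). The subscription IDs, resource groups, and ASG names are placeholders rather than the customer’s actual resources; Azure rejects the request because the source and destination ASGs contain network interfaces from different vNets.

```python
# Sketch only: placeholder names/IDs; assumes the track-2 Azure SDK for Python.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule, ApplicationSecurityGroup

credential = DefaultAzureCredential()
client = NetworkManagementClient(credential, subscription_id="<SubscriptionA-id>")

rule = SecurityRule(
    name="allow-cms-to-swift-80",
    priority=200,
    direction="Inbound",
    access="Allow",
    protocol="Tcp",
    source_port_range="*",
    destination_port_range="80",
    # The CMS ASG lives in the West Europe vNet (SubscriptionB)...
    source_application_security_groups=[ApplicationSecurityGroup(
        id="/subscriptions/<SubscriptionB-id>/resourceGroups/<dev-rg>/providers/"
           "Microsoft.Network/applicationSecurityGroups/<asg-cms>")],
    # ...while the SWIFT ASG lives in the Brazil South vNet (SubscriptionA).
    destination_application_security_groups=[ApplicationSecurityGroup(
        id="/subscriptions/<SubscriptionA-id>/resourceGroups/<prod-rg>/providers/"
           "Microsoft.Network/applicationSecurityGroups/<asg-swift>")],
)

# Azure returns an error here: all network interfaces referenced by the source and
# destination ASGs of a rule must exist in the same virtual network.
client.security_rules.begin_create_or_update(
    "<prod-rg>", "<swift-nsg>", rule.name, rule
).result()
```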

Thus if your application spans multiple vNets, using a single ASG for all application components is not an option, nor is referencing two such ASGs in a single NSG rule. You’ll see the same problem when application dependencies cross regions, as in the case of our CMS, SWIFT and Billing applications above.

Bottom line: it is not possible to create NSG rules using ASGs for cross-region or cross-vNet traffic.

Introducing Guardicore to the Simulation

The team had an entirely different experience when using Guardicore Centra to enforce the required policy settings.

The team had already been using Guardicore Centra for visibility to explore the network. In fact, this visibility had helped the team realize in the first place that they needed to permit the CMS application to communicate with SWIFT over port 8080. The team was therefore immediately able to view the real traffic between the two regions/vNets and within each region/vNet, visualizing the connections between the CMS application in West Europe and the SWIFT and Billing applications in the Brazil South region.

With Guardicore, policies are created based on labels, and are therefore decoupled from the underlying infrastructure, supporting seamless migration of policies alongside workloads, wherever they may go in the future. As the customer planned to test migrating the CMS application to AWS, policies were created based on the environments and applications, not based on the infrastructure or the underlying “Cloud” context.

A critical layer added to Guardicore Centra’s visibility is labeling and grouping. This context enables deep comprehension of application dependencies. While Centra provides a standard hierarchy that many customers follow, our labeling approach is highly customizable. Flexible grouping lets you see your data center in the context of how your business talks about it.

Labeling decouples the IP address from the segmentation process and enables seamless application migration between environments, without the need to change the policies in place. With this functionality, the lab team was able to put the required policies into place.
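To illustrate what label-based, infrastructure-decoupled policy looks like, here is a minimal sketch in Python. The data structures and names are hypothetical and are not Guardicore Centra’s actual API; the point is simply that the rule references labels only, so it keeps working when a workload’s IP or location changes.

```python
# Hypothetical structures, not Guardicore's API: a rule expressed purely over labels.
workloads = [
    {"name": "cms-01",   "ip": "10.1.2.4",  "labels": {"env": "Development", "app": "CMS"}},
    {"name": "swift-01", "ip": "10.0.2.10", "labels": {"env": "Production",  "app": "SWIFT"}},
]

# No IPs, vNets, or regions appear in the rule itself.
rule = {"source": {"app": "CMS"}, "destination": {"app": "SWIFT"}, "port": 80, "action": "allow"}

def matches(selector, workload):
    """True if every key/value in the selector appears in the workload's labels."""
    return all(workload["labels"].get(k) == v for k, v in selector.items())

def allowed(src, dst, port):
    return (rule["action"] == "allow" and port == rule["port"]
            and matches(rule["source"], src) and matches(rule["destination"], dst))

src = next(w for w in workloads if w["name"] == "cms-01")
dst = next(w for w in workloads if w["name"] == "swift-01")
print(allowed(src, dst, 80))   # True

# Simulate migrating the CMS workload (e.g., to AWS): the IP changes, the labels do not,
# so the same policy still applies without any modification.
src["ip"] = "172.31.5.7"
print(allowed(src, dst, 80))   # still True
```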

 

One of the most impactful things we can do to make Guardicore’s visualization relevant to your organization quickly is to integrate with any existing sources of metadata, such as data center or cloud orchestration tools or configuration management databases. In the case above, all labels were received automatically from the existing Azure orchestration tags.

Because Guardicore does not rely on the underlying infrastructure, such as Security Groups or endpoint firewalls, to enforce policies, those policies are completely decoupled from that infrastructure. This enables the creation of a single policy across the whole environment, and covers cross-environment use cases too. In the case of Azure, it allowed our customer to simulate policies that cross vNets and regions, seamlessly, from a single pane of glass.

Trials and Tribulations – A Practical Look at the Challenges of Azure Security Groups and Flow Logs

Cloud Security Groups are the firewalls of the cloud. They are built in and provide basic access control functionality as part of the shared responsibility model. However, Cloud Security Groups do not provide the same protection or functionality that enterprises have come to expect from on-premises deployments. While next-generation firewalls protect and segment applications on premises (mostly at the perimeter), AWS, Azure, and GCP do not mirror this in the cloud. Segmenting applications using Cloud Security Groups is done in a restricted manner, supporting only Layer 4 constructs: ports and IPs. This means that to benefit from application-aware security capabilities for your cloud applications, you will need an additional set of controls that is not available with the built-in functionality of Cloud Security Groups.

The basic function that Cloud Security Groups should provide is network separation, so they are best compared to what VLANs, Access Control Lists on switches, and endpoint firewalls provide on premises. Unfortunately, Cloud Security Groups come with similar ailments and limitations, which makes using them complex, expensive and ultimately ineffective for modern networks that are hybrid and require adequate segmentation. To create application-aware policies and micro-segment an application, you need to visualize application dependencies, which Cloud Security Groups do not support. Furthermore, if your application dependencies cross regions within the same cloud provider, or between clouds and on premises, application security groups are ineffective by design. We will touch on this topic in upcoming posts.

In today’s post we will focus on a specific scenario and use case that is common to most organizations, discussing Cloud Security Group and flow log limitations within a specific vNet, and illustrating Guardicore’s value in this scenario.

Experiment: Simulate a SWIFT Application Migration to Azure

Let’s look at the details from an experiment performed by one of our customers during a simulation of a SWIFT application migration to Azure.

Our customer used a subscription in Azure, in the Brazil South region. Within the subscription, there is a Virtual Network (vNet). The vNet includes a subnet, 10.0.2.0/24, with various application servers that serve different roles.

This customer attempted to simulate the migration of their SWIFT application to Azure using the subscription above. General segmentation rules for the migrated SWIFT application were set using both NSGs (Network Security Groups) and ASGs (Application Security Groups). These were used to administer and control network traffic within the virtual network (vNet) and, specifically, to segment this application.

Let’s review the difference:

  • An NSG is the Azure resource that is used to enforce and control the network traffic. NSGs control access by permitting or denying network traffic. All traffic entering or leaving your Azure network can be processed via an NSG.
  • An ASG is an object reference within a Network Security Group. ASGs are used within an NSG to apply a network security rule to a specific workload or group of VMs. An ASG is a “network object” to which the network interfaces of VMs are assigned, providing the capability to group VMs into associated groups or workloads (a minimal sketch of this relationship follows the list).
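As a rough illustration of that relationship, the sketch below attaches a VM’s network interface to an ASG using the Azure SDK for Python; the resource group and NIC names are placeholders. Once the NIC belongs to the ASG, NSG rules can target the ASG instead of listing IP addresses.

```python
# Sketch only: placeholder names; assumes the track-2 Azure SDK for Python (azure-mgmt-network).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import ApplicationSecurityGroup

credential = DefaultAzureCredential()
net = NetworkManagementClient(credential, subscription_id="<subscription-id>")

# Create the grouping object itself: an ASG for the SWIFT web tier.
asg = net.application_security_groups.begin_create_or_update(
    "<swift-rg>", "asg-swift-web", ApplicationSecurityGroup(location="brazilsouth")
).result()

# Place a workload in the group by attaching its NIC's IP configuration to the ASG.
nic = net.network_interfaces.get("<swift-rg>", "<swift-web-01-nic>")
nic.ip_configurations[0].application_security_groups = [asg]
net.network_interfaces.begin_create_or_update("<swift-rg>", nic.name, nic).result()

# From here, NSG rules can use "asg-swift-web" as a source or destination,
# rather than enumerating the VMs' IP addresses.
```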

The lab setup:
The cloud setup in this experiment included a single vNet, with a single Subnet, which has its own Network Security Group (NSG) assigned.

ASGs

  • Notice that they are all contained within the same Resource Group, and belong to the Location of the vNet (Brazil South).

NSGs:

The following NSG rules were in place for the simulated migrated SWIFT application (modeled in the sketch after this list):

  • Load Balancers to Web Servers, over specific ports, allow.
  • Web Servers to Databases, over specific ports, allow.
  • Deny all else between SWIFT servers.
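A first-match model of these three rules, sketched in Python below, shows how the rule set behaves; the tier names and ports are hypothetical, since the original rules only say “specific ports”. Any SWIFT-to-SWIFT flow that is not explicitly allowed, for example a load balancer talking directly to a database, falls through to the deny rule.

```python
# Hypothetical tier names and ports; first match wins, as with NSG rule priorities.
RULES = [
    {"src": "swift-lb",  "dst": "swift-web", "port": 443,  "action": "allow"},   # LBs -> Web
    {"src": "swift-web", "dst": "swift-db",  "port": 1433, "action": "allow"},   # Web -> DB
    {"src": "swift-*",   "dst": "swift-*",   "port": None, "action": "deny"},    # deny all else between SWIFT servers
]

def matches(pattern, name):
    """Exact match, or prefix match when the pattern ends with '*'."""
    return name.startswith(pattern[:-1]) if pattern.endswith("*") else name == pattern

def decision(src, dst, port):
    """Return the action of the first matching rule, defaulting to the implicit deny."""
    for rule in RULES:
        if matches(rule["src"], src) and matches(rule["dst"], dst) and rule["port"] in (None, port):
            return rule["action"]
    return "deny"

print(decision("swift-lb", "swift-web", 443))    # allow
print(decision("swift-lb", "swift-db", 1433))    # deny -- no allow rule covers this pair
```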

The problem:

A SWIFT application team member in charge of the simulation project called the cloud security team, telling them that a critical backup operation had stopped working on the migrated application and that he suspected the connection was being blocked. The cloud network team, at this point, had to verify the root cause of the problem, partly through a process of elimination, from several possible options:

  1. The application team member was wrong: it is not a policy issue but a configuration issue within the application.
  2. The ASGs are misconfigured while NSGs are configured correctly.
  3. The ASGs are configured correctly but the NSGs are misconfigured or missing a rule.

The cloud team began the process of elimination. They used Azure flow logs to try to detect the potentially blocked connections.

Using the Microsoft Azure Log Analytics platform, the cloud team sifted through the data with no success. They were searching for a blocked connection that could potentially be the backup process, but no such connection could be found. The cloud team members therefore dismissed the issue as a misconfiguration in the application.

The SWIFT team member insisted it was not an application issue, and several days passed with no solution, all while the SWIFT backup operation kept failing. In a live environment, this stalemate would have been a catastrophe, with team members likely working around the clock to find the blocked connection or to prove a misconfiguration in the application itself. In many cases an incident like this would lead to removing the security policy for the sake of business continuity, as millions of dollars are at stake daily.

After many debates and an escalation of the incident, it was decided, based on the Protect team’s recommendation, to leverage Guardicore Centra in the Azure cloud environment to help with the investigation and the migration simulation project.

With Guardicore Centra, the team used Reveal to filter for all failed connections related to the SWIFT application. This immediately revealed a failed connection attempt between the SWIFT load balancer and the SWIFT databases. The connection failed because no allow rule covered it: there was no NSG rule in the policy allowing the SWIFT load balancers to talk to the SWIFT databases.

The filters in Reveal

 

Discovering the process

Guardicore was able to provide visibility down to the process level for further context and identification of the failed backup process.

Application Context is a Necessity

The reason the flow logs were inadequate for detecting the connection was that IPs were constantly changing as the application scaled up and down and the migration simulation project moved forward. Throughout this, the teams had no context for when the backup operation was supposed to occur or which servers initiated the attempted connections, so the search came up empty-handed. They were searching for what they thought would reveal the failed connections, but as flow logs are limited to IPs and ports, they were unable to search based on application context.
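As a small illustration of that gap, the sketch below uses made-up flow records; the process field stands in for the application context that IP/port-only flow logs lack. A search keyed on a remembered IP misses the connection once the workload’s IP has changed, while a search keyed on context still finds it.

```python
# Made-up records for illustration; real NSG flow logs carry only IPs, ports and the allow/deny decision.
flows = [
    {"src_ip": "10.0.2.10", "dst_ip": "10.0.2.20", "dst_port": 445, "process": "swift-backup", "decision": "deny"},
    # The application scaled and was redeployed, so the same backup flow now originates from a new IP.
    {"src_ip": "10.0.2.57", "dst_ip": "10.0.2.20", "dst_port": 445, "process": "swift-backup", "decision": "deny"},
]

# Searching the way flow logs allow: by an IP address the team remembered.
by_ip = [f for f in flows if f["src_ip"] == "10.0.2.10" and f["decision"] == "deny"]
print(len(by_ip))        # 1 -- only the stale record; the newer attempt is invisible to this search

# Searching by application context, which flow logs cannot express.
by_process = [f for f in flows if f["process"] == "swift-backup" and f["decision"] == "deny"]
print(len(by_process))   # 2 -- both attempts found
```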

The cloud team decided to use Guardicore Centra to manage the migration and segmentation of the SWIFT application simulation, for ease of management and maintenance. Additionally, they added process and user context to the rules for more granular security and testing. Guardicore Centra also made it possible to compare the on-premises application deployment with the cloud setup to make sure all configurations were in place.

The team then went on to use Guardicore Centra to simulate the SWIFT policy over real SWIFT traffic, making sure they were not blocking additional critical services and would not inadvertently block them in the future.

 

Guardicore Centra provided the cloud security team with:

  • Visibility and speed to detect the relevant blocked flows
  • Process and user context to identify the failed operation as the backup operation
  • Ability to receive real-time alerts on any policy violation
  • Process-level and user-level rules, as required for the critical SWIFT application
  • Simulation and testing capabilities to simulate the policies over real application traffic before blocking

None of these capabilities is available natively in Azure. These limitations have serious implications, such as the backup operation failure above and the inability to adequately investigate and resolve the issue.

Furthermore, as part of general environment hygiene, our customer attempted to add several rules to govern the whole vNet, blocking Telnet and insecure FTP. For Telnet, our customer could add a block rule in Azure on port 23. For FTP, an issue arose: FTP can communicate over high-range ports that many other applications also need to use, so how could it be blocked? Using Guardicore, a simple block rule on the ftpd process was put in place with no port restriction, immediately blocking any insecure FTP communication at the process level regardless of the ports used.
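A minimal, hypothetical model of the two rule types described above (not Guardicore’s actual policy engine) makes the contrast clear: Telnet is blocked by its well-known port, while insecure FTP is blocked by the process name, whatever ports it happens to use.

```python
# Hypothetical rules: one port-based (Telnet), one process-based (ftpd, any port).
RULES = [
    {"match": {"port": 23},        "action": "block"},   # Telnet: block by port
    {"match": {"process": "ftpd"}, "action": "block"},   # insecure FTP: block by process
]

def decision(process, port):
    """Return the first matching rule's action, defaulting to allow."""
    for rule in RULES:
        m = rule["match"]
        if m.get("port", port) == port and m.get("process", process) == process:
            return rule["action"]
    return "allow"

print(decision("telnetd", 23))     # block -- port rule
print(decision("ftpd", 41231))     # block -- process rule, even on a high ephemeral port
print(decision("httpd", 41231))    # allow -- other applications keep using high ports freely
```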

Visibility is key to any successful application migration project. Understanding your application dependencies is a critical step, enabling setting up the application successfully in the cloud. Guardicore Centra provides rich context for each connection, powerful filtering capabilities, flexible timeframes and more. We collect all the flows, show successful, failed, and blocked connections, and store historical data, not just short windows of it, to be able to support many use cases. These include troubleshooting, forensics, compliance and of course, segmentation. This enables us to help our customers migrate to the cloud 30x faster and achieve their segmentation and application migration goals across any infrastructure.

Securing a Hybrid Data Center – Strategies and Best Practices

Today’s data centers exist in a hybrid reality. They often include on-premises infrastructure such as bare metal or virtual machines, as well as both public and private cloud. At the same time, most businesses have legacy systems that they need to support. Even as you embrace cutting-edge infrastructure like containers and microservices, your legacy systems aren’t going anywhere, and replacing them is probably not even on your near-future roadmap. As a result, your security strategy needs to be suitable across a hybrid ecosystem, which is not as simple as it sounds.

The Top Issues with Securing a Hybrid Data Center

Many businesses attempt to use traditional security tools to manage a hybrid data center, and quickly run into problems.

Here are the most common problems that companies encounter when traditional tools are used to secure a modern, hybrid data center:

  • Keeping up with the pace of change: Business moves fast, and traditional security tools such as legacy firewalls, ACLs, VLANs and cloud security groups are ineffectual because each is made for one specific underlying infrastructure. VLANs work well on premises but fall short when it comes to cloud and container infrastructure. Cloud security groups work for the cloud, but won’t support additional cloud providers or on-premises workloads. If you want to migrate, security will seriously affect the speed and flexibility of your move, slowing down the whole process and probably negating the reasons you chose cloud to begin with.
  • Management overhead: Incorporating different solutions for different infrastructure is nothing short of a headache. You’ll need to hire more staff, including experts in each area. A cross-platform security strategy that incorporates everyone’s field of expertise is costly, complex, and prone to bottlenecks because of the traditional ‘too many cooks’ issue.
  • No visibility: Your business will also need to think about compliance. This could involve an entirely different solution and staff member dedicated to compliance and visibility. Without granular insight into your entire ecosystem, it’s impossible to pass an audit. VLANs, for example, offer no visibility into application dependencies, a major requirement for audit compliance. When businesses use VLANs, compliance therefore becomes an additional headache.
  • Insufficient control: Today’s security solutions need Layer 7 control, with granularity that can look at user identity, FQDNs (fully qualified domain names), command lines and more. Existing solutions rely on IPs and ports, which are insufficient to say the least.
    Take cloud security groups, for example, which for many have become the standard technology for segmenting applications, the same way they would on premises. However, in the cloud this solution stops at Layer 4: ports and IPs. For application-aware security on AWS, you will need to add another set of controls. In a dynamic data center, security needs to be decoupled from the IPs themselves, allowing for migration of machines. Smart security uses an abstraction layer, enabling the policy to follow the workload rather than the IP.
  • Lack of automation: In a live hybrid cloud data center, automation is essential. Without automation as standard, for example using VLANs, changes can take weeks or even months. Manually implementing rules can result in the downtime of critical systems, as well as multiple lengthy changes in IPs, configurations of routers, and more.

Hybrid Data Center Security Strategies that Meet These Issues Head-On

The first essential item on your checklist should be infrastructure-agnostic security. Centralized management means one policy approach across everything, from modern and legacy technology on-premises to both public and private cloud. Distributed enforcement decouples the security from the IP or any underlying infrastructure – allowing policy to follow the workload, however it moves or changes. Security policy becomes an enabler of migration and change, automatically moving with the assets themselves.

The most effective hybrid cloud solutions will be software-based, able to integrate with any other existing software solution, including Ansible, Chef, Puppet, SCCM, and more. This will also make deployment fast and seamless, with implementation taking hours rather than days. At Guardicore, our customers often express surprise when we request three hours to install our solution for a POC, while competitors have asked for three days!

The ease of use should continue after the initial deployment. An automated, readable visualization of your entire ecosystem makes issues like compliance entirely straightforward, and provides an intuitive, information-rich map that is the foundation of policy creation. Coupling this with a flexible labeling system means that any stakeholder can view the map of your infrastructure and immediately understand what they are looking at.

These factors allow you to implement micro-segmentation in a highly effective way, with granular control down to the process level. In comparison to traditional security tools, Guardicore can secure and micro-segment an application in just weeks, whereas one customer had previously taken nine months to do the same task using VLANs.

What Makes Guardicore Unique When it Comes to Hybrid Data Center Security Strategies?

For Guardicore, it all starts with the map. We collect all the flows, rather than just a sample, and allow you to access all your securely stored historical data rather than only snapshotting small windows in time. This allows us to support more use cases for our customers, from making compliance simple to troubleshooting a slowdown or running a forensic investigation into a breach. We also apply contextual analysis to all application dependencies and traffic, using orchestration data as well as the process, user, FQDN and command line of all traffic. We can deliver results for whatever use case you’re looking to meet.

Guardicore is also known for our flexibility, providing a grouping and labeling process that lets you see your data center the way you talk about it, using your own labels rather than pre-defined ones superimposed on you by a vendor, and Key:Value formats instead of flat tags. This makes it much easier to create the right policies for your environment, and to use the map to see a hierarchical view of your entire business structure, with context that makes sense to you. Taking this a step further into policy creation, your rules methodology can be a composite of whitelisting and blacklisting, reducing the risk of inflexibility and complexity in your data center, and even allowing security rules that are not connected to segmentation use cases. In contrast, competitors use whitelist-only approaches with fixed labels and tiers.

Fast & Simple Segmentation with Guardicore

Your hybrid data center security strategies should enable speed and flexibility, not stand in your way. First, ensure that your solution supports any environment. Next, gain as much visibility as possible, including context. Use this to glean all data in an intuitive way, without gaps, before creating flexible policies that focus on your key objectives – regardless of the underlying infrastructure.

Interested in learning more about implementing a hybrid cloud center security solution?

Download our white paper

The Risk of Legacy Systems in a Modern-Day Hybrid Data Center

If you’re still heavily reliant on legacy infrastructure, you’re not alone. In many industries, legacy servers are an integral part of ‘business as usual’ and are far too complex or expensive to replace or remove.

Examples include Oracle databases that run on Solaris servers, applications using Linux RHEL4, or industry-specific legacy technology. Think about the legacy AIX machines that often manage the processing of transactions for financial institutions, or end-of-life operating systems such as Windows XP that are frequently used as end devices in healthcare enterprises. While businesses do attempt to modernize these applications and infrastructure, execution can take years of planning, and even then it might never be fully successful.

When Legacy Isn’t Secured – The Whole Data Center is at Risk

When you think about the potential risk of legacy infrastructure, you may go straight to the legacy workloads, but that’s just the start. Think about an unpatched device that is running Windows XP. If this is exploited, an attacker can gain access directly to your data center. Security updates like this recent warning about a remote code execution vulnerability in Windows Server 2003 and Windows XP should show us how close this danger could be.

Gaining access to just one unpatched device, especially when it is a legacy machine, is relatively simple. From this point, lateral movement can allow an attacker to move deeper inside the network. Today’s data centers are increasingly complex, with an intricate mix of technologies: not just the two binary categories of legacy and modern, but future-focused and hybrid infrastructure such as public and private clouds and containers. When a data center takes advantage of this kind of dynamic and complex infrastructure, the risk grows exponentially. Traffic patterns are harder to visualize and therefore control, and attackers are able to move undetected around your network.

Digital Transformation Makes Legacy More Problematic

The threat that legacy servers pose is not as simple as it was before digital transformation. Modernization of the data center has increased the complexity of any enterprise, and attackers have more vectors than ever before to gain a foothold in your data centers and make their way to critical applications and digital crown jewels.

Historically, an on-premises application might have been used by only a few other applications, probably also on premises. Today, however, it’s likely that it will be used by cloud-based applications too, without any improvements to its security. As legacy systems are exposed to more and more applications and environments, the risk posed by unpatched or insecure legacy systems grows all the time, exacerbated by every new innovation, communication or advance in technology.

Blocking these communications isn’t actually an option in these scenarios, and digital transformation makes these connections necessary regardless. However, you can’t embrace the latest innovation without securing business-critical elements of your data center. How can you rapidly deploy new applications in a modern data center without putting your enterprise at risk?

Quantifying the Risk

Many organizations think they understand their infrastructure, but don’t actually have an accurate or real-time visualization of their IT ecosystem. Organizational or ‘tribal’ knowledge about legacy systems may be incorrect, incomplete or lost, and it’s almost impossible to obtain manual visibility over a modern dynamic data center. Without an accurate map of your entire network, you simply can’t quantify what the risks are if an attack was to occur.

Once you’ve obtained visibility, here’s what you need to know:

  1. The servers and endpoints that are running legacy systems.
  2. The business applications and environments where the associated workloads belong.
  3. The ways in which the workloads interact with other environments and applications. Think about what processes they use and what goals they are trying to achieve.

Once you have this information, you then know which workloads are presenting the most risk, the business processes that are most likely to come under attack, and the routes that a hacker could use to get from the easy target of a legacy server, across clouds and data centers to a critical prized asset. We often see customers surprised by the ‘open doors’ that could lead attackers directly from an insecure legacy machine to sensitive customer data, or digital crown jewels.

Once you’ve got full visibility, you can start building a list of what to change, which systems to migrate to new environments, and which policy you could use to protect the most valuable assets in your data center. With smart segmentation in place, legacy machines do not have to be a risky element of your infrastructure.

Micro-segmentation is a Powerful Tool Against Lateral Movement

Using micro-segmentation effectively reduces risk in a hybrid data center environment. Specific, granular security policy can be enforced, which works across all infrastructure – from legacy servers to clouds and containers. This policy limits an attacker’s ability to move laterally inside the data center, stopping movement across workloads, applications, and environments.

If you’ve been using VLANs up until now, you’ll know how ineffective they are when it comes to protecting legacy systems. VLANs usually place all legacy systems into one segment, which means just one breach puts them all in the line of fire. VLANs rely on firewall rules that are difficult to maintain and do not leverage sufficient automation. This often results in organizations accepting loose policy that leaves them open to risk. Without visibility, security teams are unable to enforce tight policy and flows, not only among the legacy systems themselves, but also between the legacy systems and the rest of a modern infrastructure.

One Solution – Across all Infrastructure

Many organizations make the mistake of forgetting about legacy systems when they think about their entire IT ecosystem. However, as legacy servers can be the most vulnerable, it’s essential that your micro-segmentation solution works here, too. Covering all infrastructure types is a must-have for any company when choosing a micro-segmentation vendor that works with modern data centers. Even the enterprises who are looking to modernize or replace their legacy systems may be years away from achieving this, and security is more important than ever in the meantime.

Say Goodbye to the Legacy Challenge

Legacy infrastructure is becoming harder to manage. The servers and systems are business critical, but it’s only becoming harder to secure and maintain them in a modern hybrid data center. Not only this, but the risk and the attack surface are increasing with every new cloud-based technology and every new application you take on.

Visibility is the first important step. Security teams can use an accurate map of their entire network to identify legacy servers and their interdependencies and communications, and then control the risks using tight micro-segmentation technology.

Guardicore Centra can cover legacy infrastructure alongside any other platform, removing the issue of gaps or blind spots for your network. Without fear of losing control over your existing legacy servers, your enterprise can create a micro-segmentation policy that’s future-focused, with support for where you’ve come from and built for today’s hybrid data center.

Interested in learning more about implementing a hybrid cloud center security solution?

Download our white paper

Guardicore Raises $60 Million; Funding Fuels Company Growth and Continued Disruption

Today I am excited to share that we have secured a Series C funding round of $60 million, bringing our total funding to more than $110 million. The latest round was led by Qumra Capital and was joined by other new investors DTCP, Partech, and ClalTech. Existing investors Battery Ventures, 83North, TPG Growth, and Greenfield Partners also participated in the round.

Since we launched the company in 2015, Guardicore has been focused on a single vision for providing a new, innovative way to protect critical assets in the cloud and data center. Our focus, and our incredible team, has earned the trust of some of the world’s most respected brands by helping them protect what matters most to their business. As the confidence our customers have in us has grown, so has our business, which has demonstrated consistent year-over-year growth for the past three years.

Our growth is due to our ability to deliver on a new approach to securing data centers and clouds using distributed, software-defined segmentation. This approach aligns with the transformation of the modern data center, driven by cloud, hybrid cloud, and PaaS adoption. As a result, we have delivered a solution that redefines the role of firewalls and helps organizations implement Zero Trust security frameworks. More dynamic, agile, and practical security techniques are required to complement or even replace next-generation firewall technologies. We are delivering this, giving our customers the ability to innovate rapidly with the confidence that their security posture can keep up with the pace of change.

Continued Innovation

The movement of critical workloads into virtualized, hybrid cloud environments, industry compliance requirements and the increase of data center breaches demands a new approach to security that moves away from legacy firewalls and other perimeter-based security products to a new, software-defined approach. This movement continues to inspire our innovations and ensure that our customers have a simpler, faster way to guarantee persistent and consistent security — for any application, in any IT environment.

Our innovation is evident in several areas of the company. First, we have been able to quickly add new innovative technology into our Centra solution, working in close partnership with our customers. For example, we deliver expansive coverage of data center, cloud infrastructure and operating environments, and simpler and more intuitive ways to define application dependencies and segmentation policies. This gives our customers the right level of protection for critical applications and workloads in virtually any environment.

Second, our Guardicore Labs global research team continues to provide deep insights into the latest exploits and vulnerabilities that matter to the data center. They also equip the industry with open source tools like Infection Monkey, and with Cyber Threat Intelligence (CTI) that allows security teams to keep track of potential threats in real time.

We have also continued to build out other areas of our business, such as our partner ecosystem, which has earned the five-star partner program rating from CRN since its inception two years ago, as well as our technology alliances, which include relationships with leading cloud/IaaS infrastructure players such as AWS, Azure, and Nutanix.

Looking Ahead

We are proud of our past, but even more excited about our future. While there is always more work to do, we are in a unique position to lead the market with not only great technology, but a strong roster of customers, partners and, most importantly, a team of Guardicorians that challenge the status quo every single day to deliver the most innovative solutions to meet the new requirements of a cloud-centric era. I truly believe that we have the best team in the business.

Finally, as we celebrate this important milestone, I want to say thanks to our customers who have made Guardicore their trusted security partner. It is our mission to continue to earn your trust by ensuring you maximize the value of your security investments beyond your goals and expectations.

Understanding and Avoiding Security Misconfiguration

Security Misconfiguration is simply defined as failing to implement all the security controls for a server or web application, or implementing the security controls, but doing so with errors. What a company thought of as a safe environment actually has dangerous gaps or mistakes that leave the organization open to risk. According to the OWASP top 10, this type of misconfiguration is number 6 on the list of critical web application security risks.

How to Detect Security Misconfiguration: Diagnosing and Determining the Issue

The truth is, you probably do have misconfigurations in your security, as this is a widespread problem that can happen at any level of the application stack. Some of the most common misconfigurations in traditional data centers include default configurations that have never been changed and remain insecure, incomplete configurations that were intended to be temporary, and wrong assumptions about the application’s expected network behavior and connectivity requirements.

In today’s hybrid data centers and cloud environments, and with the complexity of applications, operating systems, frameworks and workloads, this challenge is growing. These environments are technologically diverse and rapidly changing, making it difficult to understand and introduce the right controls for secure configuration. Without the right level of visibility, security misconfiguration is opening new risks for heterogeneous environments. These include:

  • Unnecessary administration ports that are open for an application. These expose the application to remote attacks.
  • Outbound connections to various internet services. These could reveal unwanted behavior of the application in a critical environment.
  • Legacy applications that are trying to communicate with applications that do not exist anymore. Attackers could mimic these applications to establish a connection.

The Enhanced Risk of Misconfiguration in a Hybrid-Cloud Environment

While security misconfiguration in traditional data centers puts companies at risk of unauthorized access to application resources, data exposure and in-organization threats, the advent of the cloud has increased the threat landscape exponentially. It comes as no surprise that “2017 saw an incredible 424 percent increase in records breached through misconfigurations in cloud servers,” according to a recent report by IBM. This kind of cloud security misconfiguration accounted for almost 70% of the overall compromised data records that year.

One element to consider in a hybrid environment is the use of public cloud services, third-party services, and applications that are hosted in different infrastructure. Unauthorized application access, whether from external sources, internal applications, or legacy applications, can open a business up to a large amount of risk.

Firewalls can often suffer from misconfiguration, with policies left dangerously loose and permissive, providing a large amount of exposure to the network. In many cases, production environments are not firewalled from development environments, or firewalls are not used to enforce least privilege where it could be most beneficial.

Private servers with third-party vendors or software can lack visibility or an understanding of shared responsibility, often resulting in misconfiguration. One example is the 2018 Exactis breach, where 340 million records were exposed, affecting more than 21 million companies. Exactis was responsible for its data, despite the fact that it used standard and commonly used Elasticsearch infrastructure as its database. Critically, it failed to implement any access control to manage this shared responsibility.

With so much complexity in a heterogeneous environment, and human error often responsible for misconfiguration that may well be outside of your control, how can you demystify errors and keep your business safe?

Learning about Application Behavior to Mitigate the Risk of Misconfiguration

Visibility is your new best friend when it comes to fighting security misconfiguration in a hybrid cloud environment. Your business needs to learn the behavior of its applications, focusing in on each critical asset and its behavior. To do this, you need an accurate, real-time map of your entire ecosystem, which shows you communication and flows across your data center environment, whether that’s on premises, bare metal, hybrid cloud, or using containers and microservices.

This visibility not only helps you learn more about expected application behaviors, it also allows you to identify potential misconfigurations at a glance. An example could be revealing repeated connection failures from one specific application; on exploration, you may uncover that it is attempting to connect to a legacy application that is no longer in use. Without a real-time map of communications and flows, this could well have been the cause of a breach, where malware imitated the abandoned application to extract data or expose application behaviors. With foundational visibility, you can use this information to remove any disused or unnecessary applications or features.
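As a toy sketch of the kind of check described above (invented records and names, not a specific Centra feature), grouping failed connections by destination and flagging destinations that no longer exist in your inventory is often enough to surface a stale configuration pointing at a decommissioned application.

```python
# Invented data for illustration only.
from collections import Counter

inventory = {"billing-db", "orders-api"}          # applications that still exist
failed_connections = [
    {"source": "cms-app", "destination": "legacy-reporting"},
    {"source": "cms-app", "destination": "legacy-reporting"},
    {"source": "cms-app", "destination": "billing-db"},
]

counts = Counter(f["destination"] for f in failed_connections)
for dest, n in counts.items():
    if dest not in inventory and n > 1:
        print(f"{dest}: {n} repeated failures to a destination that no longer exists - possible misconfiguration")
```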

Once you gain visibility, and you have a thorough understanding of your entire environment, the best way to manage risk is to lock down the most critical infrastructure, allowing only desired behavior, in a similar method to a zero-trust model. Any communication which is not necessary for an application should be blocked. This is what OWASP calls a ‘segmented application architecture’ and is their recommendation for protecting yourself against security misconfiguration.

Micro-segmentation is an effective way to make this happen. Strict policy protects communication to the most sensitive applications and therefore its information, so that even if a breach happens due to security misconfiguration, attackers cannot pivot to the most critical areas.

Visibility and Smart Policy Limit the Risk of Security Misconfiguration

The chances are, your business is already plagued by security misconfiguration. Complex and dynamic data centers are only increasing the risk of human error, as we add third-party services, external vendors, and public cloud management to our business ecosystems.

Guardicore Centra provides an accurate and detailed map of your hybrid-cloud data center as an important first step, enabling you to automatically identify unusual behavior and remove or mitigate unpatched features and applications, as well as identify anomalies in communication.

Once you’ve revealed your critical assets, you can then use micro-segmentation policy to ensure you are protected in case of a breach, limiting the attack surface if misconfigurations go unresolved, or if patch management is delayed on premises or by external vendors. This all-in-one solution of visibility, breach detection and response is a powerful tool to protect your hybrid-cloud environment against security misconfiguration, and to amp up your security posture as a whole.

Want to hear more about Guardicore Centra and micro-segmentation? Get in touch

Looking for a Micro-segmentation Technology That Works? Think Overlay Model

Gartner’s Four Models for Micro-Segmentation

Gartner has recently updated its micro-segmentation evaluation factors document, “How to Use Evaluation Factors to Select the Best Micro-Segmentation Model” (refreshed 5 November 2018).

This report details the four different models for micro-segmentation, but it does not make a clear recommendation on which is best. Understanding the answer means looking at the limitations of each model and recognizing what the future looks like for dynamic hybrid-cloud data centers. I recommend reading the report and evaluating the different capabilities; however, for us at Guardicore it is clear that one model stands above the others, and it should not be a surprise that vendors that previously used other models are now changing their technology to use this model: Overlay.

But first, let me explain why other models are not adequate for most enterprise customers.

The Inflexibility of Native-Cloud Controls

The native model uses the tools that are provided with a virtualization platform, hypervisor, or infrastructure. This model is inherently limited and inflexible. Even for businesses only using a single hypervisor provider, this model ties them into one service, as micro-segmentation policy cannot simply be moved when you switch providers. In addition, while businesses might think they are working with one IaaS provider or hypervisor, workloads may be running elsewhere too, known as shadow IT. The reality is that vendors that used to support native controls for micro-segmentation have realized that customers are transforming, and have had to develop new Overlay-based products.

More commonly, enterprises know that they are working with multiple cloud providers and services, and need a micro-segmentation strategy that can work seamlessly across this heterogeneous environment.

The Inconsistency of Third-Party Firewalls

This model is based on virtual firewalls offered by third-party vendors. Enterprises using this model are often subject to network layer design limitations, and therefore forced to change their networking topology. They can be prevented from gaining visibility due to proprietary applications, encryption, or invisible and uncontrolled traffic on the same VLAN.

A known issue with this approach is the creation of bottlenecks due to reliance on additional third-party infrastructure. Essentially, this model is not a consistent solution across different architectures, and can’t be used to control the container layer.

The Complexity of a Hybrid Model

A combination of the above two models, enterprises using a hybrid model for micro-segmentation are attempting to limit some of the downsides of both models alone. To allow them more flexibility than native controls, they usually utilize third-party firewalls for north-south traffic. Inside the data center where you don’t have to worry about multi-cloud support, native controls can be used for east-west traffic.

However, as discussed, both of these solutions, even in tandem, are limited at best. With a hybrid approach, you are also going to add the extra problems of a complex and arduous set up and maintenance strategy. Visibility and control of a hybrid choice is unsustainable in a future-focused IT ecosystem where workloads and applications are spun up, automated, auto-scaled and migrated across multiple environments. Enterprises need one solution that works well, not two that are sub-par individually and limited together.

Understanding the Overlay Model – the Only Solution Built for Future Focused Micro-Segmentation

Rather than a patched-together hybrid solution built from imperfect models, Overlay is designed to be a more robust and future-proof solution from the ground up. Gartner describes the Overlay model as a solution in which a host agent or software runs on the workload itself and enforcement happens there, with agent-to-agent communication used rather than network zoning.

One of the negative sides to third-party firewalls is that they are inherently unscalable. In contrast, agents have no choke points to be constrained by, making them infinitely scalable for your needs.

With Overlay, your business has the best possible visibility across a complex and dynamic environment, with insight and control down to the process layer, including for future-focused architecture like container technology. The only model that can address infrastructure differences, Overlay is agnostic to any operational or infrastructure environment, which means an enterprise has support for anything from bare metal and cloud to virtual or micro-services, or whatever technology comes next. Without an Overlay model, your business can’t be sure of supporting future use cases and remaining competitive.

Not all Overlay Models are Created Equal

It’s clear that Overlay is the strongest technology model, and the only future-focused solution for micro-segmentation. This is true for traditional access-list style micro-segmentation as well as for implementing deeper security capabilities that include support for layer 7 and application-level controls.

Unfortunately, not every vendor provides the best version of Overlay or delivers the functionality it is capable of. Utilizing the inherent benefits of an Overlay solution means you can put agents in the right places, setting communication policy that works in a granular way. With the right vendor, you can make intelligent choices about where to place agents, using context and process-level visibility all the way to Layer 7. Your vendor should also be able to provide extra functionality, such as enforcement by account, user, or hash, all within the same agent.

Remember that protecting the infrastructure requires more than micro-segmentation and you will have to deploy additional solutions that will allow you to reduce risk and meet security and compliance requirements.

Micro-segmentation has moved from being an exciting new buzzword in cyber-security to an essential risk reduction strategy for any forward-thinking enterprise. If it’s on your to-do list for 2019, make sure you do it right, and don’t fall victim to the limitations of an agentless model. Guardicore Centra provides an all in one solution for risk reduction, with a powerful Overlay model that supports a deep and flexible approach to workload security in any environment.

Want to learn more about the differences between agent and agentless micro-segmentation? Check out our recent white paper.


What is Micro-Segmentation?

Micro-segmentation is an emerging security best practice that offers a number of advantages over more established approaches like network segmentation and application segmentation. The added granularity that micro-segmentation offers is essential at a time when many organizations are adopting cloud services and new deployment options like containers that make traditional perimeter security less relevant.

Micro-Segmentation Methods

The best way for organizations to get started with micro-segmentation is to identify the methods that best align with their security and policy objectives, start with focused policies, and gradually layer additional micro-segmentation techniques over time through step-by-step iteration.

Harness the Benefits of Micro-Segmentation

One of the major benefits of micro-segmentation is that it provides shared visibility into the assets and activities in an environment without slowing development and innovation. Implementing micro-segmentation greatly reduces the attack surface in environments with a diverse set of deployment models and a high rate of change.