The Risk of Legacy Systems in a Modern-Day Hybrid Data Center

If you’re still heavily reliant on legacy infrastructure, you’re not alone. In many industries, legacy servers are an integral part of ‘business as usual’ and are far too complex or expensive to replace or remove.

Examples include Oracle databases that run on Solaris servers, applications running on RHEL 4, or industry-specific legacy technology. Think of the legacy AIX machines that often process transactions for financial institutions, or end-of-life operating systems such as Windows XP that are still common as endpoint devices in healthcare enterprises. Businesses do attempt to modernize these applications and infrastructure, but the effort can take years of planning to execute, and even then may never be fully successful.

When Legacy Isn’t Secured – The Whole Data Center is at Risk

When you think about the potential risk of legacy infrastructure, you may go straight to the legacy workloads, but that's just the start. Consider an unpatched device running Windows XP: if it is exploited, an attacker gains direct access to your data center. Security advisories like the recent warning about a remote code execution vulnerability in Windows Server 2003 and Windows XP show how close this danger can be.

Gaining access to just one unpatched device, especially a legacy machine, is relatively simple. From there, lateral movement lets an attacker push deeper into the network. Today's data centers run an intricate mix of technologies: not a binary split of legacy and modern, but a hybrid spread that includes public and private clouds and containers. In this kind of dynamic, complex infrastructure, the risk grows quickly. Traffic patterns are harder to visualize, and therefore to control, and attackers can move around your network undetected.

Digital Transformation Makes Legacy More Problematic

The threat that legacy servers pose is not as simple as it was before digital transformation. Modernization of the data center has increased the complexity of any enterprise, and attackers have more vectors than ever before to gain a foothold in your data centers and make their way to critical applications or digital crown jewels.

Historically, an on-premises application might have been used by only a few other applications, probably also on premises. Today, however, it is likely to be used by cloud-based applications too, often without any improvement to its security. As legacy systems are exposed to more and more applications and environments, the risk posed by unpatched or insecure systems keeps growing, exacerbated by every new integration, communication, or advance in technology.

Blocking these communications isn't a realistic option; digital transformation makes the connections necessary. At the same time, you can't embrace the latest innovation without securing the business-critical elements of your data center. How can you rapidly deploy new applications in a modern data center without putting your enterprise at risk?

Quantifying the Risk

Many organizations think they understand their infrastructure, but don't actually have an accurate, real-time view of their IT ecosystem. Organizational or 'tribal' knowledge about legacy systems may be incorrect, incomplete, or lost, and it's almost impossible to maintain visibility over a modern dynamic data center manually. Without an accurate map of your entire network, you simply can't quantify the risk if an attack were to occur.

Once you’ve obtained visibility, here’s what you need to know:

  1. The servers and endpoints that are running legacy systems.
  2. The business applications and environments where the associated workloads belong.
  3. The ways in which the workloads interact with other environments and applications. Think about what processes they use and what goals they are trying to achieve. (A sketch of how this inventory might be recorded follows the list.)
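
To make this concrete, here is a minimal sketch of how such an inventory might be recorded. The field names and example values are illustrative assumptions, not a Guardicore schema; the point is that each workload carries its legacy status, its business context, and its observed flows in one place.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Flow:
    dst_host: str
    dst_env: str     # environment of the peer, e.g. "Production"
    port: int
    process: str     # the process behind the connection

@dataclass
class WorkloadRecord:
    hostname: str
    os: str                      # e.g. "Windows XP", "RHEL 4", "AIX 7.1"
    is_legacy: bool              # unsupported or unpatchable OS
    application: str             # business application it belongs to
    environment: str             # e.g. "Production", "Dev", "Clinic"
    flows: List[Flow] = field(default_factory=list)

inventory = [
    WorkloadRecord("txn-aix-01", "AIX 7.1", True, "payments", "Production",
                   [Flow("oracle-db-02", "Production", 1521, "sqlplus")]),
    WorkloadRecord("kiosk-17", "Windows XP", True, "patient-intake", "Clinic",
                   [Flow("ehr-web-01", "Production", 443, "iexplore.exe")]),
]

# The riskiest workloads: legacy machines with flows into Production.
at_risk = [w.hostname for w in inventory
           if w.is_legacy and any(f.dst_env == "Production" for f in w.flows)]
print(at_risk)  # ['txn-aix-01', 'kiosk-17']
```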

Once you have this information, you then know which workloads are presenting the most risk, the business processes that are most likely to come under attack, and the routes that a hacker could use to get from the easy target of a legacy server, across clouds and data centers to a critical prized asset. We often see customers surprised by the ‘open doors’ that could lead attackers directly from an insecure legacy machine to sensitive customer data, or digital crown jewels.

Once you've got full visibility, you can start building a list of what to change, which systems to migrate to new environments, and which policies to use to protect the most valuable assets in your data center. With smart segmentation in place, legacy machines do not have to be a risky element of your infrastructure.

Micro-segmentation is a Powerful Tool Against Lateral Movement

Using micro-segmentation effectively reduces risk in a hybrid data center environment. Specific, granular security policy can be enforced, which works across all infrastructure – from legacy servers to clouds and containers. This policy limits an attacker’s ability to move laterally inside the data center, stopping movement across workloads, applications, and environments.
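
As a rough illustration of the idea, the sketch below models a default-deny flow check: anything not explicitly allowed between labeled workloads is dropped. The rule shape and labels are hypothetical simplifications, not Centra's actual policy language.

```python
# A minimal default-deny flow check, illustrating how granular segmentation
# policy limits lateral movement. Rule fields are hypothetical.
ALLOW_RULES = [
    # (src label, dst label, port, process) -- anything not listed is dropped
    ("app:payments/env:prod", "app:oracle/env:prod", 1521, "sqlplus"),
    ("app:web/env:prod",      "app:payments/env:prod", 8443, "java"),
]

def is_allowed(src: str, dst: str, port: int, process: str) -> bool:
    return (src, dst, port, process) in ALLOW_RULES

# A compromised legacy box trying to reach the database directly is blocked:
assert not is_allowed("os:winxp/env:clinic", "app:oracle/env:prod", 1521, "sqlplus")
```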

If you've been using VLANs up until now, you'll know how ineffective they are when it comes to protecting legacy systems. VLANs usually place all legacy systems into one segment, which means just one breach puts them all in the line of fire. VLANs rely on firewall rules that are difficult to maintain and lack sufficient automation. This often results in organizations accepting loose policy that leaves them open to risk. Without visibility, security teams are unable to enforce tight policy and flows, not only among the legacy systems themselves, but also between the legacy systems and the rest of a modern infrastructure.

One Solution – Across all Infrastructure

Many organizations make the mistake of forgetting about legacy systems when they think about their entire IT ecosystem. However, as legacy servers can be the most vulnerable, it's essential that your micro-segmentation solution works here, too. Coverage of all infrastructure types should be a must-have when choosing a micro-segmentation vendor for a modern data center. Even enterprises that are looking to modernize or replace their legacy systems may be years away from doing so, and security matters more than ever in the meantime.

Say Goodbye to the Legacy Challenge

Legacy infrastructure is becoming harder to manage. The servers and systems are business critical, but securing and maintaining them in a modern hybrid data center only gets harder. On top of this, the risk and the attack surface grow with every new cloud-based technology and every new application you take on.

Visibility is the first important step. Security teams can use an accurate map of their entire network to identify legacy servers and their interdependencies and communications, and then control the risks using tight micro-segmentation technology.

Guardicore Centra covers legacy infrastructure alongside any other platform, removing gaps and blind spots in your network. Without fear of losing control over your existing legacy servers, your enterprise can create a micro-segmentation policy that is future-focused, supports where you've come from, and is built for today's hybrid data center.

Interested in learning more about implementing a hybrid cloud center security solution?

Download our white paper

From On-Prem to Cloud: The Complete AWS Security Checklist

Cloud computing has redefined how organizations handle “business as usual.” In the past, organizations were responsible for deploying, maintaining, and securing all of their own systems. However, doing this properly requires resources, and some organizations simply don't have the necessary in-house talent to accomplish it. With the cloud, it's now possible to rent resources from a cloud service provider (CSP) and offload the maintenance and some of the security workload to them.

Just as the cloud is different from an on-premises deployment, security in the cloud can differ from traditional best practices as well. Below, we provide an AWS security checklist that includes the most crucial steps for implementing network security best practices within a cloud environment.

AWS Security Checklist: Step-by-Step Guide

  • Get the Whole Picture. Before you can secure the cloud, you need to know what's in the cloud. Cloud computing is designed to be easy to use, which means that even non-technical employees can create accounts and upload sensitive data to it. Amazon does what it can to help, but poorly secured cloud storage is still a major cause of data breaches. Before your security team can secure your organization's footprint in the cloud, they first need to do the research necessary to find any unauthorized (and potentially insecure) cloud accounts containing company data. (A concrete example of such a check is sketched after this list.)
  • Define an AWS Audit Checklist. After you have an understanding of the scope of your organization’s cloud security deployments, it’s time to apply an AWS audit checklist to them. The purpose of this checklist is to ensure that every deployment containing your organization’s sensitive data meets the minimum standards for a secure cloud deployment. There are a variety of resources available for development of your organization’s AWS audit checklist. Amazon has provided a security checklist for cloud computing, and our piece on AWS Security Best Practices provides the information that you need for a solid foundation in cloud security. Use these resources to define a baseline for a secure AWS and then apply it to all cloud resources in your organization.
  • Improve Visibility. A CSP’s “as a Service” offerings sacrifice visibility for convenience. When using a cloud service, you lose visibility into and control over the underlying infrastructure, a situation that is very different from an on-premises deployment. Your applications may be deployed over multiple cloud instances and on servers in different sites and even different regions, making it more difficult to define clear security boundaries. Guardicore Centra’s built-in dashboard can be a major asset when trying to understand the scope and layout of your cloud resources. The tool automatically discovers applications on your cloud deployment and maps the data flows between them. This data is then presented in an intuitive user interface, making it easy to understand applications that you have running in the cloud and how they interact with one another.
  • Manage Your Attack Surface. Once you have a solid understanding of your cloud deployment, the next step is working to secure it. The concept of network segmentation to minimize the impact of a breach is nothing new, but many organizations are at a loss on how to do it in the cloud. While securing all of your application’s traffic within a particular cloud infrastructure (like AWS) or securing traffic between applications and external networks is a good start, it’s simply not enough. In the cloud, it’s necessary to implement micro-segmentation, defining policies at the application level. By defining which applications are allowed to interact and the types of interactions that are permitted, it’s possible to provide the level of security necessary for applications operating in the cloud. In an attempt to ensure the security of their applications, many organizations go too far in defining security policies. In fact, according to Gartner, 70% of segmentation projects originally suffer from over-segmentation. With Guardicore Centra, the burden of defining effective policy rules no longer rests on the members of the security team. Centra’s micro-segmentation solution provides automatic policy recommendations that can be effectively applied on any cloud infrastructure, streamlining your organization’s security policy for AWS and all other cloud deployments.
  • Empower Security Through Visualization. The success of Security Information and Event Management (SIEM) solutions demonstrates the effectiveness and importance of collating security data into an easy-to-use format for the security team. Many data breaches are enabled by a lack of understanding of the protected system or an inability to effectively analyze and cross-reference alert data. Humans operate most effectively when dealing with visual data, and Centra is designed to provide your security team with the information that they need to secure your cloud deployment. Centra’s threat detection and response technology uses dynamic detection, reputation analysis, and policy-based detection to draw analysts’ attention to where it is needed most. The Guardicore incident response dashboard aggregates all necessary details regarding the attack, empowering defenders to respond rapidly and minimize the organizational impact of an attack.
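
As promised above, here is one concrete, minimal check from the first step: sweeping S3 for buckets that lack a public access block. It is a sketch assuming boto3 and credentials allowed to call s3:ListAllMyBuckets and s3:GetBucketPublicAccessBlock; a flagged bucket is not necessarily public, only worth reviewing.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            # One or more of the four block settings is off -- review it.
            print(f"{name}: public access only partially blocked: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured -- review this bucket")
        else:
            raise
```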

Applying the AWS Security Checklist

Protecting your organization’s sensitive data and intellectual property requires going beyond the minimum when securing your organization’s cloud deployment. Built for the cloud, Guardicore Centra is designed to provide your organization with the tools it needs to secure your AWS deployment.

To find out more, contact us today or sign up for a demo of the Centra Security Platform and see its impact on your cloud security for yourself.

Thoughts on the Capital One Attack

The ink on the Equifax settlement papers is hardly dry, and another huge data breach, this time at Capital One, is sending shock waves across North America.

The company has disclosed that in March of this year, a former systems engineer, Paige Thompson, exploited a configuration vulnerability (associated with a firewall or WAF) and was able to execute a series of commands on the bank's servers that were hosted on AWS. About 106 million customers have had their data exposed, including names, incomes, dates of birth, and even social security numbers and bank account credentials. Some of the data was encrypted and some was tokenized, but there has been a large amount of damage to customers, as well as to the bank's reputation and the entire security ecosystem.

Our customers, partners and even employees have asked us to comment about the Capital One data breach. Guardicore is an Advanced Technology Partner for AWS with security competency. There are only a small number of companies with such certification and thus I’d like to think that our thoughts do matter.

First – there are a couple of positive things related to this breach:

  1. Once notified, Capital One acted very quickly. It means that they have the right procedures, processes and people.
  2. Responsible disclosure programs provide real value. This is important and many organizations should follow suit.

While not a lot of information is available, based on the content that has been published thus far, we have some additional thoughts:

Could this Data Breach Have Been Avoided?

Reading the many articles on this subject, everyone is trying to figure out the same thing: how did this happen, and what could have been done to keep Capital One's customer data more secure?

What Does a ‘Configuration Vulnerability’ Mean on AWS?

When it comes to managing security in a cloud or a hybrid-cloud environment, organizations often experience issues with maintaining good visibility and control over applications and traffic. The first step is understanding what your role is in a partnership with any cloud vendor. Being part of a shared-responsibility model in AWS means recognizing that Amazon gives you what it calls “full ownership and control” over how you store and secure your content and data. While AWS is responsible for infrastructure, having freedom over your content means you need to take charge when it comes to securing applications and data.

Looking at this data breach specifically, an AWS representative has said “AWS was not compromised in any way and functioned as designed. The perpetrator gained access through misconfiguration of the web application and not the underlying cloud-based infrastructure.”

Thompson gained access by leveraging a configuration error or vulnerability affecting a web application firewall. After bypassing what seems to have been a thin (perhaps even single) layer of defense, she was able to make some kind of lateral movement across the network and reach the S3 bucket where the sensitive data was stored.
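
Public reporting on the breach pointed to server-side request forgery (SSRF) against the EC2 instance metadata service as the way credentials were harvested, though Capital One's disclosure only described a configuration vulnerability. The sketch below illustrates why an SSRF-reachable metadata endpoint is so dangerous; the requests are illustrative, not a reproduction of the attack.

```python
import requests

# The (real) EC2 instance metadata endpoint. Any process that can make HTTP
# requests from the instance -- including an SSRF-driven web app -- can read it.
METADATA = "http://169.254.169.254/latest/meta-data"

# Name of the IAM role attached to the instance...
role = requests.get(f"{METADATA}/iam/security-credentials/", timeout=2).text

# ...and temporary credentials for that role, usable against S3 from anywhere.
creds = requests.get(f"{METADATA}/iam/security-credentials/{role}", timeout=2).json()
print(creds["AccessKeyId"], creds["Expiration"])

# Mitigations: require IMDSv2 (session tokens with a hop limit), scope the
# instance role to the minimum needed, and segment east-west traffic so a
# compromised web tier cannot reach storage directly.
```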

Cloud Native Security Controls are Just Your First Layer of Defense

Can we learn anything from this incomplete information? I do think the answer is “yes”: cloud-native security controls are a good start, but alone they are not enough. Best practice is to add an extra layer of detection and prevention, bringing application-aware security to the cloud just as you would expect on-premises. Defense-in-depth as a concept is not going away, even in the cloud. The controls and defenses that the cloud service provider includes should be seen as part of the basic hygiene requirements, not the whole defense.

I would argue that the built-in cloud API for policy enforcement is insufficient: SecDevOps need more effective ways to identify and block malicious or suspicious traffic than cloud APIs provide. When we designed Guardicore Centra, we decided to develop independent capabilities wherever possible, even when that meant investing more time in development. The result is a better security solution, one that is independent of the infrastructure and not limited to what a third-party vendor or partner provides.

Guardicore Centra is used as an added security platform for AWS as well as other clouds. We know from our customers that acting on the points listed below has protected them on multiple occasions.

  • Guardicore is an Advanced Technology Partner for AWS: Guardicore is the only vendor specializing in micro-segmentation with this certification from AWS, and Guardicore Centra is fully integrated with AWS. Users can see native-cloud information and AWS-specific data alongside all information about their hybrid ecosystem. When creating policy, it can be visualized and enforced on flows, down to the process level (Layer 7).
  • Micro-Segmentation Fills the Gaps of Built-in Cloud Segmentation: Many companies might rely on native cloud segmentation through cloud-vendor tools, but it would have been insufficient to stop the kind of lateral movement the attacker used to harvest these credentials in the Capital One breach. In contrast, solutions like Centra that are deployed on top of the cloud’s infrastructure and are independent are not limited. Specifically for Centra, the product enables companies to set policies at the process level itself.
  • Cloud API for Policy Enforcement is Insufficient: SecDevOps need more effective ways to block malicious or suspicious traffic than cloud APIs can achieve. In contrast, Guardicore Centra can block unwanted traffic with dynamic application policies that monitor and enforce on east-west traffic as well as north-south. As smart labeling and grouping can pull in information such as EC2 tags, users obtain a highly visible and configurable expression of their data centers, both for mapping and policy enforcement. (A sketch of tag-driven labeling follows this list.)
  • Breach Detection in Minutes, not Months: The Capital One breach was discovered on July 19th, 2019, but the attack occurred in late March of the same year, a gap of almost four months from breach to detection. Many businesses struggle with visibility in the cloud, but Guardicore Centra's foundational map is created with equal insight into all environments. Breach detection occurs in real time, with visibility down to Layer 7. Security incidents or policy violations can be sent immediately to AWS Security Hub, handled automatically, or escalated internally for mitigation.
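
As referenced in the labeling point above, here is a minimal sketch of pulling EC2 tags so policy can be written against labels instead of IP addresses. It assumes boto3 with ec2:DescribeInstances permission; the Environment/Application tag keys and the label format are illustrative, not a Guardicore schema.

```python
import boto3

ec2 = boto3.client("ec2")
labels = {}

for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            labels[instance["InstanceId"]] = {
                "env": tags.get("Environment", "unlabeled"),
                "app": tags.get("Application", "unlabeled"),
            }

# Instances with missing labels are exactly the ones most likely to fall
# outside any segmentation policy -- surface them for review.
unlabeled = [i for i, l in labels.items() if "unlabeled" in l.values()]
print(unlabeled)
```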

Capital One is well known for good security practices, and its contributions to the security and open-source communities are tremendous. This highlights how easily even a business with a strong security posture can fall victim to this kind of vulnerability. As more enterprises move to hybrid-cloud realities, visibility and control get more difficult to achieve.

Guardicore micro-segmentation is built for this challenge, achieving full visibility on the cloud, and creating single granular policies that follow the workload, working seamlessly across a heterogeneous environment.

Want to find out more about how to secure your AWS instances?

Read these Best Practices

Moving Zero Trust from a Concept to a Reality

Most people understand the reasoning and the reality behind a zero trust model. While historically a network perimeter was considered sufficient to keep attacks at bay, today this is no longer the case. Zero trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. This added layer of security has been shown to be much more effective at preventing breaches.

But how can organizations move from a concept or idea into implementation? Using the same tools, developed with 15-20 year old technologies, is not adequate.

There is a growing demand for IT resources that can be accessed in a location-agnostic way, and cloud services are being used more widely than ever. These facts, on top of businesses embracing broader use of distributed application architectures, mean that neither the traditional firewall nor the next-generation firewall is effective for risk reduction any longer.

The other factor to consider is that new malware and attack vectors are discovered every day, and businesses have no idea where the next threat might come from. It's more important than ever to use micro-segmentation and micro-perimeters to limit the fallout of a cyber attack.

How does applying the best practices of zero trust combat these issues?

Simply put, implementing the zero trust model creates and enforces small segments of control around sensitive data and applications, increasing your data security overall. Businesses can use zero trust to monitor all network traffic for malicious activity or unauthorized access, limiting the risk of lateral movement through escalated user privileges and improving breach detection and incident response. As Forrester Research, which originally introduced the concept, explains, zero trust lets network policy be managed from one central console through automation.

The Guardicore principles of zero trust

At Guardicore, we support IT teams in implementing zero trust with the support of our four high level principles. Together, they create an environment where you are best-placed to glean the benefits of zero trust.

  • A least privilege access strategy: Access permissions are assigned only on the basis of a well-defined need: 'never trust, always verify'. This doesn't stop at users alone. We also include applications, and even the data itself, with continuous review of the need for access. Group permissions can help make this seamless, and individual assets or elements can then be removed from each group as necessary. (A minimal sketch of this decision logic follows the list.)
  • Secure access to all resources: This holds regardless of a resource's location or its user. The authentication level is the same both inside and outside of the local area network; for example, services available on the LAN will not simply be made available via VPN.
  • Access control at all levels: Both the network itself and each resource or application need multi-factor authentication.
  • Audit everything: Rather than simply collecting data, we review all of the collected logs, using automation to generate alerts where necessary. These bots perform multiple actions; our 'nightwatch bot', for example, generates phone calls to the right member of staff in an emergency.
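
The sketch below, referenced in the first principle, shows the shape of a request-time decision under these principles: default deny, verification on every request, and an audit record for every outcome. The entities, permission table, and MFA flag are hypothetical.

```python
from datetime import datetime, timezone

PERMISSIONS = {
    # (principal, resource) -> action granted because of a well-defined need
    ("svc-billing", "db-customers"): "read",
}

def authorize(principal: str, resource: str, action: str, mfa_ok: bool) -> bool:
    granted = PERMISSIONS.get((principal, resource))
    decision = mfa_ok and granted == action      # never trust, always verify
    # Audit everything -- both allows and denies feed alerting.
    print(f"{datetime.now(timezone.utc).isoformat()} {principal} -> "
          f"{resource}:{action} mfa={mfa_ok} allowed={decision}")
    return decision

authorize("svc-billing", "db-customers", "read", mfa_ok=True)    # allowed
authorize("svc-billing", "db-customers", "write", mfa_ok=True)   # denied by default
```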

However, knowing these best principles and understanding the benefits behind zero trust is not the same as being able to implement securely and with the right amount of flexibility and control.

Many companies fall at the first hurdle, unsure how to gain full visibility of their ecosystem. Without this, it is impossible to define policy clearly, set up the correct alerts so that business can run as usual, or stay on top of costs. If your business does not have the right guidance or skill-sets, the zero trust model becomes a ‘nice to have’ in theory but not something that can be achieved in practice.

It all starts with the map

With a zero trust model that starts with deep visibility, you can automatically identify all resources across all environments, at both the application and network level. At this point, you can work out what you need to enforce, turning to technology once you know what you’re looking to build as a strategy for your business. Other solutions will start with their capabilities, using these to suggest enforcement, which is the opposite of what you need, and can leave gaps where you need policy the most.

It’s important to ensure that you have a method in place for classification so that stakeholders can understand what they are looking at on your map. We bring in data from third-party orchestration, using automation to create a highly accessible map that is simple to visualize across both technical and business teams. With a context-rich map, you can generate intelligence on malicious activity even at the application layer, and tightly enforce policy without worrying about the impact on business as usual.

With these best practices in mind, and a map as your foundation – your business can achieve the goals of zero trust, enforcing control around sensitive data and apps, finding malicious activity in network traffic, and centrally managing network policy with automation.

Want to better understand how to implement segmentation for securing modern data centers to work towards a zero trust model?

Download our white paper

Guardicore’s Insights from Security Field Day 2019

We had such a great time speaking at Security Field Day recently, presenting the changes to our product since our last visit, and hearing from delegates about what issues concern them in micro-segmentation technology.

The last time we were at Field Day was four years ago, and our product was in an entirely different place. The technology and vision have evolved since then. Of course, we’re still learning as we innovate, with the help of our customers who continually come up with new use cases and challenges to meet.

For those who missed our talk, here’s a look at some of what we discussed, and a brief recap of a few interesting and challenging questions that came up on the day.

Simplicity and Visibility First

At Guardicore, we know that ease of use is the foundation of widespread adoption of any new technology. In discussions with each enterprise, customer, or team, we see clearly that each has its own issues and road map to address. As there is no such thing as the ultimate or only use case for micro-segmentation, we start with the customer in mind; our product can support any flavor, any need. Some of the most popular use cases include separation of environments such as Dev/Prod, ring-fencing critical assets, micro-segmenting digital crown jewels, compliance and least privilege, and more general IT hygiene such as locking down vulnerable ports and protocols.

To make these use cases a reality, organizations need deep visibility to understand what's going on in the data center from a human lens. It's important to have flexible labeling so that you can see your data center described in the same language that you use to speak about it. We also enhance this by letting users see a particular view based on their need or role within the company: a compliance officer has a different use for the map than the CTO or a DevSecOps engineer, for example. In addition, organizations need to enforce both blacklist and whitelist policy models for intuitive threat prevention and response. Our customers benefit from our cutting-edge visibility tool, Reveal, which is completely customizable and checks all of these boxes, as well as from flexible policy models that include both whitelisting and blacklisting.

To learn more about how our mapping and visibility work, how they help enforce policy with our uniquely flexible policy model, and how they show quick value, watch our full presentation below.

Addressing Questions and Challenges

With only one hour to present our product, there were a lot of questions we couldn't get to. Maybe next time! Here are three of the topics we wanted to address further.

Q. How does being agent-based affect your solution?

One question raised during the session concerned the fact that Guardicore micro-segmentation is an agent-based solution; the benefits are clear, but people often want to know what impact the agent has on the workload.

The first thing we always tell customers who ask this question is that our solution is tried and tested. It is already deployed in some of the world's biggest data centers, such as Santander and Openlink, and works with negligible impact on performance. Our agent footprint is very small: less than 0.1% CPU, 185MB on Linux, and 800MB on Windows. Resource usage is also configurable, allowing you to tailor the agent to your needs. At the same time, we support more operating systems than other vendors.

If an agent is still not suitable, you can use our L4 collectors, which sit at the hypervisor or switch level and give you full visibility, together with our virtual appliance for enforcement, as we touched on during the talk. As experts in segmentation, we can talk you through your cybersecurity use cases and discuss which approach works best, and where.

Q. Which breach detection capabilities are included?

Complementary controls are an important element of our solution, because they contribute to the ease of use and simplicity. One tool for multiple use cases offers a powerful competitive edge. Here are three of the tools we include:

  • Reputation Analysis: We can identify granular threats, including suspicious domain names, IP addresses, and even file hashes in traffic flows.
  • Dynamic Deception: The latest in incident response, this technique tricks attackers, diverting them to a honeypot environment where Guardicore Labs can learn from their behavior.
  • File Integrity Monitoring: A prerequisite for many compliance regulations, this change-detection mechanism will immediately alert to any unauthorized changes to files.

Q. How do you respond to a known threat?

Flexible policy models allow us to respond quickly and intuitively when it comes to breach detection and incident response. Some vendors offer a whitelist-only model, which impedes their ability to take immediate action and is not enough in a fast-paced hybrid data center. In contrast, we can immediately block a known threat or unwanted ports by adding them to the blacklist. One example might be blocking Telnet across the whole environment, or blocking FTP at the process level. This helps us show real value from day one. Composite models allow complex rules like the real-world example we used in the presentation: SSH is only allowed into the Production environment if it comes from jumpboxes. With Guardicore, this takes two simple rules, while a whitelist-only model would need thousands.
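
Here is a sketch of how a composite model expresses that example in two ordered rules. The rule syntax is an illustration, not Centra's actual rule language; a whitelist-only model would instead have to enumerate every legitimate flow into Production.

```python
# First matching rule wins; traffic outside these rules falls through.
RULES = [
    ("allow", "label:jumpbox", "env:prod", 22),   # SSH from jumpboxes is fine
    ("block", "*",             "env:prod", 22),   # ...all other SSH to prod is not
]

def verdict(src_label: str, dst_label: str, port: int) -> str:
    for action, src, dst, p in RULES:
        if src in ("*", src_label) and dst in ("*", dst_label) and p == port:
            return action
    return "allow"  # outside these rules, other policy applies

assert verdict("label:jumpbox", "env:prod", 22) == "allow"
assert verdict("label:dev-box", "env:prod", 22) == "block"
```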

Security Field Day 2019 staff

Until Next Time!

We loved presenting at Field Day, and want to thank all the delegates for their time and their interesting questions! If you want to talk more about any of the topics raised in the presentation, reach out to me via LinkedIn.

In the meantime, learn more about securing a hybrid modern data center which works across legacy infrastructure as well as containers and clouds.

Download our white paper

Rethinking Segmentation for Better Security

Cloud services and their related security challenges will continue to grow

One of the biggest shifts in the enterprise computing industry in the past decade is the migration to the cloud. As more and more organizations discover the benefits of moving their data centers to private and public cloud environments, this trend is expected to continue dominating the enterprise landscape. Gartner projects strong growth for cloud services from 2019 through 2022, with Infrastructure-as-a-Service (IaaS) the fastest-growing segment of the market, already up 27.5% in 2019 compared to 2018.

So what’s the big challenge?

The added agility of cloud infrastructure comes with a trade-off, in the form of increased complexity of cyber security. Traditional security tools were designed for on-premises servers and endpoints, focusing on perimeter defense to block attacks at the entry point. But the dynamic nature of hybrid cloud services has made perimeter defense insufficient. When the perimeter itself is constantly shifting, as data and workloads move back and forth among public and private clouds and on-premises data centers, the attack surface becomes much larger, and network segmentation is required to control lateral movement within the perimeter.

From the early days of the cloud, segmentation was a popular concept. Traditionally, businesses divided the network into segments and enforced some form of access control between them. In practice, this meant putting the relevant servers into a dedicated VLAN and routing traffic through a firewall. A higher level of segmentation meant smaller segments, which reduced the attack surface and limited the impact of any potential breach.

Then the rules of the game changed! From static networks to dynamic, hybrid cloud-based data centers

Simple segmentation by firewalls worked in the past, when networks comprised relatively large, static segments. However, the “rules of the game” have changed significantly in recent years. Dynamic data centers and hybrid cloud adoption have created problems that legacy firewalls cannot solve, yet achieving segmentation is now more vital than ever. The cadence of change to infrastructure and application services is very high, sharpening the need for granular segments, an understanding of their dependencies, and security policy that keeps pace with them.

Take, for example, the 2017 Equifax breach. The US House of Representatives report on the incident pointed directly to the lack of internal segmentation as one of the key gaps that allowed the breach's impact to be so large, affecting 143 million consumers.

Regulation is another driver of segmentation. One of Guardicore's customers, a global investment bank, needed to comply with a new SWIFT regulation that requires all SWIFT servers to be placed in a separate segment, with every connection in and out of that segment whitelisted. Using traditional methods, it took the bank 10 months and a costly, labor-intensive process to complete the change, spurring it to find smarter segmentation methods going forward.

The examples above demonstrate that although segmentation is a known and well-understood security measure, in practice organizations struggle to implement it properly and cost-effectively.

Adapt easily to these changes and start micro-segmentation

To deal with these challenges, micro-segmentation was born. Micro-segmentation takes enterprise security to a new level, going a step further than existing network and application segmentation methods by adding visibility and policy granularity. It typically works by establishing security policies around individual applications or groups of applications, regardless of where they reside in the hybrid data center. These policies dictate which applications can and cannot communicate with each other.

Micro-segmentation includes the ability to fully visualize the environment and define security policies with Layer 7 process-level precision, making it highly effective at preventing lateral movement in a hybrid cloud environment.

Take the first step in preparing your enterprise for better data security

Want to learn more? Listen to Guardicore’s CTO and Co-founder, Ariel Zeitlin, as he walks through the challenges and the solutions to better secure your data in his latest interview with the CIO Talk Network. In this podcast, Ariel discusses the new approaches to implementing segmentation, the key aspects you need to consider when comparing different vendors and technologies, and what comes ahead of the curve for security leaders in this space.


Want to learn more about how to first think through, then properly implement micro-segmentation? Read our white paper on operationalizing your segmentation project.


NSX-T vs. NSX-V – Key Differences and Pitfalls to Avoid

While working with many customers on segmentation projects, we often get questions about alternative products to Guardicore. This is expected, and, in fact, welcome, as we will take on any head-to-head comparison of Guardicore Centra to other products for micro-segmentation.

Guardicore vs. NSX-T vs. NSX-V

One of the common comparisons we get is to VMware NSX. Specifically, we get a lot of questions from customers about the difference between VMware's two offerings in this space, NSX-T and NSX-V. Although many security and virtualization experts have written about the differences between the two offerings, including speculation on whether they will merge into a single offering, we think we offer a unique perspective on some of the differences and on what to pay attention to in order to ensure segmentation projects are successful. Regardless of which product variant an organization is considering, there are several potential pitfalls with NSX that are important to understand before proceeding with deployment.

NSX-T vs. NSX-V: Key Differences

NSX-V (NSX for “vSphere”) was the first incarnation of NSX and has been around for several years now. As the name suggests, NSX-V is designed for on-premises vSphere deployments only and is architected so that a single NSX-V manager is tied to a single VMware vCenter Server instance. It applies only to VMware virtual machines, which leaves a coverage gap for organizations that use a hybrid infrastructure model. In fact, the 2019 RightScale State of the Cloud Report shows that 94% of organizations use the cloud (28% of them prioritizing hybrid cloud), with VMware vSphere at 50% of private cloud adoption, flat from last year. So, given the large number of organizations embracing the cloud, interest in NSX-V is waning.

NSX-T (NSX “Transformers”) was designed to address the use cases that NSX-V could not cover, such as multi-hypervisors, cloud, containers and bare metal servers. It is decoupled from VMware’s proprietary hypervisor platform and incorporates agents to perform micro-segmentation on non-VMware platforms. As a result, NSX-T is a much more viable offering than NSX-V now that hybrid cloud and cloud-only deployment models are growing in popularity. However, NSX-T remains limited by feature gaps when compared to both NSX-V and other micro-segmentation solutions, including Guardicore Centra.

Key Pitfalls to Avoid with NSX

While the evolution to NSX-T was a step in the right direction for VMware strategically, a number of limitations continue to undermine NSX's value and effectiveness, particularly when compared to specialized micro-segmentation solutions like Guardicore Centra.

The following are some of the key pitfalls to avoid when considering NSX.

  • Solution Complexity
    VMware NSX requires multiple tools to cover the entire hybrid data center environment: NSX-V for ESXi hosts, NSX-T for bare-metal servers, and NSX-Cloud for VMware cloud hosting. In addition, it is a best practice in any micro-segmentation project to start with visibility, mapping flows and classifying assets where policy will be applied. This requires a separate product, vRealize Network Insight (vRNI). So, a true hybrid infrastructure requires multiple products from VMware, plus the need to synchronize policy across them. This leads to more complexity and significantly more time to achieve results. In addition, vRNI is not well integrated into NSX, which makes moving from visibility to policy a long and complex process, requiring manual downloading and uploading of files to share information between tools. But don’t just take our word for it. A recent Gartner report, Solution Comparison for Microsegmentation Products, April 2019, stated that VMware NSX “comes with massive complexity and many moving parts”. And for organizations that have implemented the VMware SDN, there is additional complexity. For example, the network virtualization service alone requires an architecture that consists of “logical switches, logical routers, NSX Edge Nodes, NSX Edge Clusters, Transport Nodes, Transport Zones, the logical firewall and logical load balancers,” according to Gartner. Not to mention all the manual configuration steps required to implement it.
  • Overspending on Licensing
    For many organizations, segmentation requirements develop in stages. They may not even consciously be beginning a micro-segmentation project. It could start as a focused need to protect a critical set of “digital crown jewels” or subsets of the infrastructure that are subject to regulatory requirements. VMware’s licensing model for NSX does not align well with practical approaches to segmentation like these. When deploying NSX, an organization must license its entire infrastructure. If a segmentation project only applies to 20 percent of the total infrastructure, NSX licenses must be purchased for the remaining 80 percent regardless of whether they will ever be used.
  • Management Console Sprawl
    As mentioned above, detailed infrastructure visualization is a critical building block for effective micro-segmentation. You can’t protect what you can’t see. While leading micro-segmentation products integrate visualization and micro-segmentation into a single interface, NSX does not include native visualization capabilities. Instead, NSX requires the use of a separately licensed product, vRealize Network Insight, for infrastructure visibility. This adds both cost and complexity. It also makes it much more difficult and time-consuming to translate insights from visualization into corresponding micro-segmentation policies. The impact is significant, as it puts additional strain on already over-taxed IT resources and results in less effective and less complete segmentation policies.
  • Limited Visibility
    Even when NSX customers choose to deploy vRNI as part of an NSX deployment, the real-time visibility it provides is limited to Layer 4 granularity. This does not provide the level of visibility needed to set fine-grained, application-aware policies that protect against today’s data center and cloud infrastructure threats. As environments and security requirements become more sophisticated, it is often necessary to combine Layer 4 and Layer 7 views to gain a complete picture of how applications and workloads behave and to develop strategies for protecting them. Also, while real-time visibility is critical, historical visibility plays an important role in segmentation too. IT environments, and the threat landscape, are constantly changing, and the ability to review historical activity helps security teams continuously improve segmentation policies over time. However, NSX and vRNI lack any historical reporting or views.
  • Enforcement Dependencies and Limitations
    As with visualization, it is important to be able to implement policy enforcement at both the network and process levels. Native NSX policy enforcement can only be performed at the network level. It is possible to achieve limited application-level policy control by using NSX in conjunction with a third VMware product, VMware Distributed Firewall. However, even using VMware Distributed Firewall and NSX together has significant limitations. For example, VMware Distributed Firewall can only be used with on-premises vSphere deployments or with VMware’s proprietary VMware Cloud on AWS deployment model. This makes it inapplicable to modern hybrid cloud infrastructure.
  • Insufficient Protection of Legacy Assets
    While most organizations strive to deploy key applications on modern operating systems, legacy assets remain a fact of life in many environments. While the introduction of agents with NSX-T broadens platform coverage beyond the VMware stack, operating system compatibility is highly constrained. NSX-T agent support is limited to Windows Server 2012 or newer and the latest Linux distributions. Many organizations continue to run high-value applications on older versions of Windows and Linux. The same is true for legacy operating systems like Solaris, AIX, and HP-UX. In many ways, these legacy systems are leading candidates for protection with micro-segmentation, as they are less likely than more modern systems to have current security updates available and applied. But they cannot be protected with NSX.
  • Inability to Detect Breaches
    While the intent of micro-segmentation policies is to proactively block attacks and lateral movement attempts, it is important to complement policy controls with breach detection capabilities. Doing so acts as a safety net, allowing security teams to detect and respond to any malicious activities that micro-segmentation policies do not block. Detecting infrastructure access from sources with questionable reputation and monitoring for network scans and unexpected file changes can both uncover in-progress security incidents and help inform ongoing micro-segmentation policy improvements. NSX lacks any integrated breach detection capabilities.

With the introduction of NSX-T, VMware took an important step away from the proprietary micro-segmentation model it originally created with NSX-V. But even NSX-T requires customers to lock themselves into a sprawling collection of VMware tools. And some key elements, such as VMware Distributed Firewall, remain highly aligned with VMware’s traditional on-premises model.

In contrast, Guardicore Centra is a software-defined micro-segmentation solution that was designed from day one to be platform-agnostic. This makes it much more effective than NSX at applying micro-segmentation to any combination of VMware and non-VMware infrastructure.

Centra also avoids the key pitfalls that limit the usefulness of NSX.

For example, Centra offers:

  • Flexible licensing that can be applied to a subset of the overall infrastructure if desired.
  • Visualization capabilities that are fully integrated with the micro-segmentation policy creation process.
  • Visibility and integrated enforcement at both Layer 4 and Layer 7 for more granular micro-segmentation control.
  • Extensive support for legacy operating systems, including older Windows and Linux versions, Solaris, AIX, and HP-UX.
  • Fully integrated breach detection and response capabilities, including reputation-based detection, dynamic deception, file integrity monitoring, and network scan detection.

Don’t Let NSX Limitations Undermine Your Micro-Segmentation Strategy

Before considering NSX, see first-hand how Guardicore Centra can help you achieve a simpler and more effective micro-segmentation approach.

Interested in more information on how Guardicore Centra is better for your needs than any NSX amalgam? Read our Guardicore vs. VMware NSX Comparison Guide


What is AWS re:Inforce?

AWS re:Inforce is a spin-off of AWS re:Invent. Why the need for a spin-off? Legend has it that the security tracks during re:Invent got so crowded that AWS decided the security track should have a conference of its own.

AWS re:Inforce is a different kind of conference: a highly technical conference of curated content meant for security professionals. This is a conference where knowledge runs deep and conversations go deeper, with few marketing overtures or high-level musings. Even the vendor-sponsored presentations were very technical, with interesting takeaways. If your organization is invested in AWS at any level, it's a great conference to attend. You get two condensed days of dedicated security content for the different services, architectures, and platforms offered by AWS, with content available for multiple levels of expertise. You also get access to top-tier AWS experts, whom you can consult on your various architecture dilemmas.

Since this conference turned out to be very popular, one tip I'd give next year's attendees is to book your desired sessions as far ahead of time as you can (at least a few weeks, if possible). In conversations with colleagues, I learned that many couldn't get into all the sessions they had wanted, so plan well for next year.

Here are some of the takeaways from the conference that I’d like to share with you:

  1. Humans don’t scale – This is not a revolutionary new thought; it’s common knowledge in the DevOps world. However, the same understanding is becoming prevalent in the security industry as well. Organizations are starting to understand that as they move to the cloud, managing security for multiple dynamic environments just doesn’t scale, from both a configuration and an incident response perspective. Organizations are moving away from complaining about the security personnel shortage, and instead are looking to converge their multiple security platforms into 2-3 systems that provide wide coverage of use cases and allow a high level of automation and compatibility with common DevOps practices.
  2. Security platforms converge – Organizations are transforming their IT operations to be efficient and automated. Security has to follow suit and be an enabler instead of a roadblock. The end goal from a CISO perspective is to achieve governance of the whole network, not just the cloud deployments or just the on-prem ones. Vendors can no longer offer separate solutions for on-prem and cloud; a single unified solution is the only viable, sustainable option.
  3. Migration is hard – Migrating your workloads to the cloud is hard; migrating your security policy is even harder. Organizations moving all or some of their workloads to AWS find it very hard to keep the same level of security posture. Running a successful migration project without compromising on security means replacing controls that simply do not exist in the cloud, and the existing security tools these organizations use are not suitable or sufficient for enforcing the same posture there.
  4. Hit F5 on your threat model – One of the main takeaways for security practitioners on AWS is to take a fresh look at what actually needs to be secured. Make sure that as new cloud constructs and services are adopted by the organization, you actually have the right tools and policies in place to secure them. AWS Control Tower (announced GA at the time of the conference), for example, helps you govern your AWS environment and account policies. When looking at hybrid or cloud-only topologies that require a complex network model, you realize that you need a hybrid solution providing an overlay policy for both your cloud and on-prem assets.
  5. API is king – As our architectures and networks become more complex, the ability of a human to monitor or maintain a network becomes unrealistic. A great example is the SOAR (security orchestration, automation and response) space. Organizations are moving away from shiny SOCs (security operations centers) with big TVs and hordes of operators. Human operators are not an effective solution over time, especially at scale. The move to automated playbooks solves both the staffing issue and the variable quality of incident handling: each incident is handled according to a premeditated script for that scenario, with no need to reinvent the wheel. Sometimes it’s smart to let automation be our friend and make our lives easier.

As CISOs need to secure their entire network, not just the cloud elements, the same concepts apply more widely to network security. These have been the cornerstones of building Guardicore Centra, a micro-segmentation solution that works across all environments and can complement and secure your AWS strategy. Modern infrastructures are dynamic and can change thousands of times over the span of a day. Security policies should be just as dynamic, applied just as fast, and able to keep the same cadence. Guardicore enables security practitioners to integrate with APIs and move at the speed of the organization. Tools that require your security and network engineers to define security policy only through a UI, with no way to script and automate policy creation, will not make the transition to the cloud.
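
To make the point concrete, here is a sketch of policy-as-code: a CI job pushing a segmentation rule through a management API instead of clicking through a UI. The endpoint, payload shape, and authentication scheme are hypothetical; consult your vendor's actual API reference.

```python
import os
import requests

API = os.environ["SEGMENTATION_API_URL"]       # e.g. the management server
TOKEN = os.environ["SEGMENTATION_API_TOKEN"]   # issued out of band

# An illustrative rule: allow the prod web tier to reach the prod API tier.
rule = {
    "action": "allow",
    "source": {"label": "app:web", "env": "prod"},
    "destination": {"label": "app:api", "env": "prod"},
    "port": 8443,
}

resp = requests.post(f"{API}/policy/rules", json=rule,
                     headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
resp.raise_for_status()
print("rule created:", resp.json())
```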

We believe that security shouldn't be an obstacle or a cause for delay, so one single, unified solution is a must-have. It obviously needs to work in a hybrid and multi-cloud reality, without interfering with AWS best practices, for it to be beneficial and not slow you down.

Want to learn more about hybrid-cloud security? Watch this video about micro-segmentation and breach detection in an increasingly complex environment.


Interested in cloud security for hybrid environments? Get our white paper about protecting cloud workloads with shared security models.


Are You on Top of the Latest Cloud Security Trends?

As enterprises embrace public and private cloud adoption, most find themselves working in a hybrid environment. Whether a hybrid architecture is a step towards becoming a fully cloud-enabled business, or an end-goal choice that allows you more freedom and flexibility over your business, you need the ability to protect your critical applications and assets across multiple environments while reducing your overall attack surface.

Understanding the Effect of Cloud Security Future Regulations

Achieving compliance can feel like an uphill struggle, with regular updates to existing regulations and new regulations being written to handle the latest issues enterprises face. While compliance doesn't guarantee security, it's tough to be secure without compliance as a minimum foundation. The EU's GDPR, for example, was created in response to the large number of data breaches businesses were facing, protecting PII (personally identifiable information) from attackers who would use it for identity theft, crime, and fraud. Another example is the new California privacy law that goes into effect in 2020, reportedly as strict as GDPR, which will affect all companies with customers living in California, whether based in America or internationally.

As fines and consequences for non-compliance take effect (GDPR fines, for instance, totaled €56 million in the regulation's first year), it's likely that businesses will start uncovering their own limitations, including legacy architecture and security techniques. This will prompt companies to adopt public cloud services built with GDPR or California privacy compliance in mind, and to extend their networks to include cloud as well as on-premises assets. It's more important than ever that businesses put security first when making this kind of change, or they may solve the compliance problem at the expense of overall security and visibility.

Visibility is More Important than Ever as Businesses Adopt New Cloud Security Trends

All three main public cloud providers, AWS, Azure, and Google, use the shared responsibility model. Simply put, the cloud provider manages the infrastructure, and you as a customer are fully responsible for your data, access management, and network and firewall configuration. Each enterprise has its own needs in terms of governance, SLA requirements, and overall security, and in a multi-cloud environment, staying on top of this can be complex.

The bottom line is that customers often experience a lack of visibility and control when they consolidate their IT on the cloud, exactly where they need that insight and attention the most. If you have specific regulatory or industry needs, you will need more assurance that you have control over your workloads and communication flows.

Cloud-Native Environments are the Cloud Security Future

Improving your visibility across a hybrid IT ecosystem reduces the chance of falling victim to attacks on vulnerable or poorly authenticated infrastructure. Guardicore Centra offers automatic asset and dependency mapping down to the process level, allowing IT to quickly uncover and manage misconfigurations or dangerous open communications, providing early value to your business.

Once these are dealt with, a continuous view of all communication flows and assets moving forward puts your business in a strong position as attackers begin launching more sophisticated campaigns in the cloud. As cloud adoption continues to grow, future-focused businesses need to be on the lookout for cloud-native attacks that take advantage of container vulnerabilities and architectures, for example.

Shift-Left on Cloud Security

Enterprises are realizing that cloud providers are not responsible for their workload or application security, and that cloud solutions do not remove a business' own responsibility for data security and compliance. One popular cloud security trend is that businesses are adopting early, continuous security to meet this challenge head-on. The latest micro-segmentation technology is robust enough to take control of an increasingly complex environment while delivering early value on infrastructure use cases. As a built-in security method, the strongest micro-segmentation technology can handle a heterogeneous data center, covering legacy systems, bare metal, VMs, hybrid, containers, serverless, and multi-cloud. A single security vendor reduces complexity, which is why many companies opt for solutions that include strong complementary controls such as breach detection and incident response.

‘Application-Aware’ is a Cloud Security Future Must-Have

Moving to the cloud is all about flexibility: scaling faster and larger, and both providing and benefiting from new services. Your micro-segmentation solution needs to keep up. Application-centric security takes over from traditional manual implementation, providing deep visibility, smart policy creation, and airtight governance that protects against threats holistically. Success in the cloud security future depends on security that is built for the cloud and its vulnerabilities while effortlessly managing legacy systems and everything in between.

Want to learn more about cloud security trends and how to manage a heterogeneous environment? Check out this white paper.

How to Establish your Next-Gen Data Center Security Strategy

In 2019, 46 percent of businesses are expected to use hybrid data centers, and it is therefore critical for these businesses to be prepared to deal with the inherent security challenges. Developing a next gen data center security strategy that takes into account the complexity of hybrid cloud infrastructure can help keep your business operations secure by way of real-time responsiveness, enhanced scalability, and improved uptime.

One of the biggest challenges of securing the next gen data center is accounting for the various silos that develop. Every cloud service provider has its own methods to implement security policies, and those solutions are discrete from one another. These methods are also discrete from on-premises infrastructure and associated security policies. This siloed approach to security adds complexity and increases the likelihood of blind spots in your security plan, and isn’t consistent with the goals of developing a next gen data center. To overcome these challenges, any forward-thinking company with security top of mind requires security tools that enable visibility and policy enforcement across the entirety of a hybrid cloud infrastructure.

In this piece, we’ll review the basics of the next gen data center, dive into some of the details of developing a next gen data center security strategy, and explain how Guardicore Centra fits into a holistic security plan.

What is a next gen data center?

The idea of hybrid cloud has been around for a while now, so what's the difference between what we're used to and a next gen data center? In short, next gen data centers are hybrid cloud infrastructures that abstract away complexity, automate as many workflows as possible, and include scalable orchestration tools. Scalable technologies like software-defined networking (SDN), virtualization, containerization, and Infrastructure as Code (IaC) are hallmarks of the next gen data center.
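To see what "automate as many workflows as possible" means in practice, consider the declare-then-reconcile loop at the heart of IaC. The toy sketch below is not any particular tool; it simply shows how a declared desired state is compared against reality to produce the actions that converge the two:

```python
# Toy illustration of the Infrastructure-as-Code pattern: desired state is
# declared as data, and a reconcile step computes how to converge toward it.
DESIRED = {
    "web": {"image": "nginx:1.25", "replicas": 3},
    "api": {"image": "api:2.4", "replicas": 5},
}

def reconcile(desired, actual):
    """Return the actions needed to move the running estate to the declared state."""
    actions = []
    for name, spec in desired.items():
        current = actual.get(name)
        if current is None:
            actions.append(("create", name, spec))
        elif current != spec:
            actions.append(("update", name, spec))
    for name in actual.keys() - desired.keys():
        actions.append(("delete", name))  # anything undeclared is removed
    return actions

# One service has drifted, one is missing, one is obsolete.
actual = {
    "web": {"image": "nginx:1.24", "replicas": 3},
    "cache": {"image": "redis:7", "replicas": 1},
}
print(reconcile(DESIRED, actual))
# [('update', 'web', ...), ('create', 'api', ...), ('delete', 'cache')]
```

Real IaC tools such as Terraform or CloudFormation follow the same loop at much larger scale, which is what makes the next gen data center reproducible rather than hand-built.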

Given this definition, the benefits of the next gen data center are clear: agile, scalable, standardized, and automated IT operations that limit costly manual configuration, human error, and oversights. However, when creating a next gen data center security strategy, enterprises must ensure that the policies, tools, and overall strategy they implement are able to account for the inherent challenges of the next gen data center.

Asking the right questions about your next gen data center security strategy

There are a number of questions enterprises must ask themselves as they begin to design a next gen data center and a security strategy to protect it. Here, we’ll review a few of the most important.

  • What standards and compliance regulations must we meet? Regulations such as HIPAA, PCI-DSS, and SOX subject enterprises to strict security and data protection requirements that must be met, regardless of other goals. Failure to account for these requirements in the planning stages can prove costly in the long run should you fail an audit due to a simple oversight.
  • How can we gain granular visibility into our entire infrastructure? One of the challenges of the next gen data center is the myriad of silos that emerge from a security and visibility perspective. With so many different IaaS, SaaS, and on-premises solutions going into a next gen data center, capturing detailed visibility of data flows down to the process level can be a daunting task. However, in order to optimize security, this is a question you'll need to answer in the planning stages. If you don't have a baseline of what traffic flows on your network look like at various points in time (e.g. peak hours on a Monday vs. midnight on a Saturday), identifying and reacting to anomalies becomes almost impossible; see the sketch after this list.
  • How can we implement scalable, cross-platform security policies? As mentioned, the variety of solutions that make up a next gen data center can lead to a number of silos and discrete security policies. Managing security discretely for each platform flies in the face of the scalable, DevOps-inspired ideals of the next gen data center. To ensure that your security can keep up with your infrastructure, you'll need to seek out scalable, intelligent security tools. While security is often viewed as hamstringing DevOps efforts, the right tools and strategy can help bridge the gap between these two teams.
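
As a toy illustration of the baselining point in the list above, the sketch below counts flows observed during a known-good period and flags current flows that were rarely or never seen. The flow tuples and threshold are illustrative assumptions; a real deployment would keep separate baselines per time window:

```python
from collections import Counter

def build_baseline(flow_log):
    """Count how often each (source, destination, port) flow appeared
    during a known-good observation period."""
    return Counter(flow_log)

def find_anomalies(baseline, current_flows, min_seen=5):
    """Flag flows that were rarely or never seen while baselining."""
    return [flow for flow in current_flows if baseline[flow] < min_seen]

# Known-good traffic: the web tier talks to its database and cache.
known_good = [("web-1", "db-1", 5432)] * 50 + [("web-1", "cache-1", 6379)] * 20
baseline = build_baseline(known_good)

# A never-before-seen SSH flow to a backup host stands out immediately.
observed = [("web-1", "db-1", 5432), ("web-1", "backup-host", 22)]
print(find_anomalies(baseline, observed))  # [('web-1', 'backup-host', 22)]
```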

Finding the right solutions

Given what we have reviewed thus far, we can see that the solutions to the security challenges of the next gen data center need to be scalable and compliant, provide granular visibility, and function across the entirety of your infrastructure.

Guardicore Centra is uniquely capable of addressing these challenges and helping secure the next gen data center. For example, not only can micro-segmentation help enable compliance with standards like HIPAA and PCI-DSS, but Centra offers enterprises the level of visibility required in the next gen data center. Centra is capable of contextualizing all application dependencies across all platforms to ensure that your micro-segmentation policies are properly implemented. Regardless of where your apps run, Centra helps you overcome silos and provides visibility down to the process level.

Further, Centra is capable of achieving the scalability that the next gen data center demands. To help conceptualize how scalable micro-segmentation with Guardicore Centra can be, consider that a typical LAN build-out can last for a few months and require hundreds of IT labor hours. A comparable micro-segmentation deployment, on the other hand, takes about a month and significantly fewer IT labor hours.

Finally, Centra can help bridge the gap between DevOps and Security teams by enabling the use of "zero trust" security models. The general idea behind zero trust is, as the name implies, that nothing inside or outside your network should be trusted by default. This shifts the focus to determining what is allowed, rather than being strictly on the hunt for threats, and is much more conducive to a modern DevSecOps approach to the next gen data center.
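
A minimal sketch of that default-deny posture, with purely illustrative labels: a flow is denied unless it appears on an explicit allowlist.

```python
# Zero trust in miniature: nothing is trusted by default, and policy describes
# what is allowed rather than enumerating what is forbidden.
ALLOWLIST = {
    ("app: billing", "db: billing", 5432),
    ("app: billing", "queue: billing", 5672),
}

def is_allowed(source_label, dest_label, port):
    """Default deny: a flow passes only if it is explicitly allowed."""
    return (source_label, dest_label, port) in ALLOWLIST

assert is_allowed("app: billing", "db: billing", 5432)
assert not is_allowed("app: billing", "db: hr", 5432)  # denied by default
```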

Guardicore helps enable your next gen data center security strategy

When developing a next gen data center security strategy, you must be able to account for the nuances of the various pieces of on-premises and cloud infrastructure that make up a hybrid data center. A big part of doing so is selecting tools that minimize complexity and can scale across all of your on-premises and cloud platforms. Guardicore Centra does just that and helps implement scalable and granular security policies to establish the robust security required in the next gen data center.

If you’re interested in redefining and adapting the way you secure your hybrid cloud infrastructure, contact us to learn more.

Want to know more about proper data center security? Get our white paper about operationalizing a proper micro-segmentation project.
