The Risk of Legacy Systems in a Modern-Day Hybrid Data Center

If you’re still heavily reliant on legacy infrastructure, you’re not alone. In many industries, legacy servers are an integral part of ‘business as usual’ and are far too complex or expensive to replace or remove.

Examples include Oracle databases running on Solaris servers, applications on Red Hat Enterprise Linux 4, or industry-specific legacy technology. Think of the legacy AIX machines that often handle transaction processing for financial institutions, or end-of-life operating systems such as Windows XP that are still common as end devices in healthcare enterprises. Businesses do attempt to modernize these applications and infrastructure, but it can take years of planning to reach execution, and even then the effort may never fully succeed.

When Legacy Isn’t Secured – The Whole Data Center is at Risk

When you think about the potential risk of legacy infrastructure, you may go straight to the legacy workloads, but that's just the start. Think about an unpatched device running Windows XP. If it is exploited, an attacker gains direct access to your data center. Security updates like the recent warning about a remote code execution vulnerability in Windows Server 2003 and Windows XP show how close this danger can be.

Gaining access to just one unpatched device, especially a legacy machine, is relatively simple. From there, lateral movement lets an attacker move deeper inside the network. Today's data centers are increasingly complex, with an intricate mix of technologies: not just the two binary categories of legacy and modern, but hybrid, future-focused infrastructure such as public and private clouds and containers. When a data center takes advantage of this kind of dynamic, complex infrastructure, the risk grows exponentially. Traffic patterns are harder to visualize and therefore to control, and attackers can move around your network undetected.

Digital Transformation Makes Legacy More Problematic

The threat that legacy servers pose is not as simple as it was before digital transformation. Modernization of the data center has increased the complexity of any enterprise, and attackers have more vectors than ever before to gain a foothold in your data centers and make their way to critical applications and digital crown jewels.

Historically, an on-premises application might have been used by only a few other applications, probably also on premises. Today, however, it is likely to be used by cloud-based applications too, without any improvement to its security. As legacy systems are exposed to more and more applications and environments, the risk posed by unpatched or insecure systems keeps growing, exacerbated by every new innovation, communication channel, or advance in technology.

Blocking these communications isn't a realistic option; digital transformation makes the connections necessary. But you can't embrace the latest innovation without securing the business-critical elements of your data center. How can you rapidly deploy new applications in a modern data center without putting your enterprise at risk?

Quantifying the Risk

Many organizations think they understand their infrastructure, but don't actually have an accurate or real-time visualization of their IT ecosystem. Organizational or 'tribal' knowledge about legacy systems may be incorrect, incomplete or lost, and it's almost impossible to obtain manual visibility over a modern dynamic data center. Without an accurate map of your entire network, you simply can't quantify the risks if an attack were to occur.

Once you’ve obtained visibility, here’s what you need to know:

  1. The servers and endpoints that are running legacy systems.
  2. The business applications and environments where the associated workloads belong.
  3. The ways in which the workloads interact with other environments and applications. Think about what processes they use and what goals they are trying to achieve.
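Once collected, even a basic inventory can be queried to surface exposure. The sketch below is a hypothetical illustration of how a network map ties legacy workloads to the peers that communicate with them; all workload names, fields, and flows are made up for the example.

```python
# Hypothetical sketch: surface legacy workloads and the peers that
# connect to them. All names and record shapes here are illustrative.

LEGACY_OS = {"Windows XP", "Windows Server 2003", "Solaris", "RHEL4", "AIX"}

workloads = [
    {"name": "txn-core", "os": "AIX", "app": "payments", "env": "on-prem"},
    {"name": "web-api", "os": "Ubuntu 20.04", "app": "portal", "env": "aws"},
    {"name": "xp-terminal", "os": "Windows XP", "app": "intake", "env": "on-prem"},
]

flows = [  # observed (source, destination) pairs from the network map
    ("xp-terminal", "txn-core"),
    ("web-api", "txn-core"),
]

def legacy_exposure(workloads, flows):
    """Return each legacy workload with the peers that connect to it."""
    legacy = {w["name"] for w in workloads if w["os"] in LEGACY_OS}
    return {
        name: sorted(src for src, dst in flows if dst == name)
        for name in sorted(legacy)
    }

print(legacy_exposure(workloads, flows))
# {'txn-core': ['web-api', 'xp-terminal'], 'xp-terminal': []}
```

Even this toy model makes the 'open doors' visible: a modern cloud workload and an unpatched XP terminal both reach the legacy transaction core.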

Once you have this information, you then know which workloads are presenting the most risk, the business processes that are most likely to come under attack, and the routes that a hacker could use to get from the easy target of a legacy server, across clouds and data centers to a critical prized asset. We often see customers surprised by the ‘open doors’ that could lead attackers directly from an insecure legacy machine to sensitive customer data, or digital crown jewels.

Once you’ve got full visibility, you can start building a list of what to change, which systems to migrate to new environments, and which policy you could use to protect the most valuable assets in your data center. With smart segmentation in place, legacy machines do not have to be a risky element of your infrastructure.

Micro-segmentation is a Powerful Tool Against Lateral Movement

Used effectively, micro-segmentation reduces risk in a hybrid data center environment. Specific, granular security policy can be enforced across all infrastructure, from legacy servers to clouds and containers. This policy limits an attacker's ability to move laterally inside the data center, stopping movement across workloads, applications, and environments.
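The core of this approach is an allow-list, default-deny model: a flow is denied unless a rule explicitly permits it. The sketch below illustrates the idea; the rule fields and names are invented for the example and are not Centra's actual policy format.

```python
# Minimal sketch of allow-list (default-deny) segmentation policy
# evaluation. Rule fields and names are illustrative only.

rules = [
    {"src_app": "portal", "dst_app": "payments", "port": 5432, "process": "postgres"},
    {"src_app": "intake", "dst_app": "payments", "port": 8443, "process": "txn-gw"},
]

def is_allowed(src_app, dst_app, port, process, rules):
    """A flow is permitted only if an explicit rule matches it."""
    return any(
        r["src_app"] == src_app and r["dst_app"] == dst_app
        and r["port"] == port and r["process"] == process
        for r in rules
    )

# A lateral-movement attempt from a compromised host is denied because
# no rule covers SSH into the payments application:
print(is_allowed("intake", "payments", 22, "sshd", rules))        # False
# Legitimate, explicitly allowed traffic passes:
print(is_allowed("portal", "payments", 5432, "postgres", rules))  # True
```

Because matching includes the process, even traffic between two "allowed" applications is blocked when an attacker tries to use an unexpected service on an open port.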

If you've been using VLANs up until now, you'll know how ineffective they are when it comes to protecting legacy systems. VLANs usually place all legacy systems into one segment, which means a single breach puts them all in the line of fire. VLANs rely on firewall rules that are difficult to maintain and poorly automated, which often results in organizations accepting loose policy that leaves them open to risk. Without visibility, security teams cannot enforce tight policy and flows, not only among the legacy systems themselves but also between the legacy systems and the rest of a modern infrastructure.

One Solution – Across all Infrastructure

Many organizations make the mistake of forgetting about legacy systems when they think about their IT ecosystem as a whole. Yet because legacy servers can be the most vulnerable, it's essential that your micro-segmentation solution works there, too. Coverage of all infrastructure types is a must-have when choosing a micro-segmentation vendor for a modern data center. Even enterprises that are looking to modernize or replace their legacy systems may be years away from doing so, and in the meantime security matters more than ever.

Say Goodbye to the Legacy Challenge

Legacy infrastructure is becoming harder to manage. The servers and systems are business critical, but securing and maintaining them in a modern hybrid data center only gets harder. On top of this, the risk and the attack surface grow with every new cloud-based technology and every new application you take on.

Visibility is the first important step. Security teams can use an accurate map of their entire network to identify legacy servers and their interdependencies and communications, and then control the risks using tight micro-segmentation technology.

Guardicore Centra covers legacy infrastructure alongside any other platform, removing gaps and blind spots from your network. Freed from the fear of losing control over existing legacy servers, your enterprise can create a micro-segmentation policy that is future-focused, supports where you've come from, and is built for today's hybrid data center.

Interested in learning more about implementing a hybrid cloud data center security solution?

Download our white paper

From On-Prem to Cloud: The Complete AWS Security Checklist

Cloud computing has redefined how organizations handle "business as usual." In the past, organizations were responsible for deploying, maintaining, and securing all of their own systems. However, doing this properly requires resources, and some organizations simply don't have the necessary in-house talent to accomplish it. With the cloud, it's now possible to rent resources from a cloud service provider (CSP) and offload the maintenance, and some of the security workload, to them.

Just as the cloud is different from an on-premises deployment, security in the cloud can differ from traditional best practices as well. Below, we provide an AWS security checklist that includes the most crucial steps for implementing network security best practices within a cloud environment.

AWS Security Checklist: Step-by-Step Guide

  • Get the Whole Picture. Before you can secure the cloud, you need to know what’s in the cloud. Cloud computing is designed to be easy to use, which means that even non-technical employees can create accounts and upload sensitive data to it. Amazon does what it can to help, but poorly secured cloud storage is still a major cause of data breaches. Before your security team can secure your organization’s footprint in the cloud, they first need to do the research necessary to find any unauthorized (and potentially insecure) cloud accounts containing company data.
  • Define an AWS Audit Checklist. After you have an understanding of the scope of your organization’s cloud security deployments, it’s time to apply an AWS audit checklist to them. The purpose of this checklist is to ensure that every deployment containing your organization’s sensitive data meets the minimum standards for a secure cloud deployment. There are a variety of resources available for development of your organization’s AWS audit checklist. Amazon has provided a security checklist for cloud computing, and our piece on AWS Security Best Practices provides the information that you need for a solid foundation in cloud security. Use these resources to define a baseline for a secure AWS and then apply it to all cloud resources in your organization.
  • Improve Visibility. A CSP’s “as a Service” offerings sacrifice visibility for convenience. When using a cloud service, you lose visibility into and control over the underlying infrastructure, a situation that is very different from an on-premises deployment. Your applications may be deployed over multiple cloud instances and on servers in different sites and even different regions, making it more difficult to define clear security boundaries. Guardicore Centra’s built-in dashboard can be a major asset when trying to understand the scope and layout of your cloud resources. The tool automatically discovers applications on your cloud deployment and maps the data flows between them. This data is then presented in an intuitive user interface, making it easy to understand applications that you have running in the cloud and how they interact with one another.
  • Manage Your Attack Surface. Once you have a solid understanding of your cloud deployment, the next step is working to secure it. The concept of network segmentation to minimize the impact of a breach is nothing new, but many organizations are at a loss on how to do it in the cloud. While securing all of your application's traffic within a particular cloud infrastructure (like AWS) or securing traffic between applications and external networks is a good start, it's simply not enough. In the cloud, it's necessary to implement micro-segmentation, defining policies at the application level. By defining which applications are allowed to interact and the types of interactions that are permitted, it's possible to provide the level of security necessary for applications operating in the cloud. In an attempt to ensure the security of their applications, many organizations go too far in defining security policies. In fact, according to Gartner, 70% of segmentation projects originally suffer from over-segmentation. With Guardicore Centra, the burden of defining effective policy rules no longer rests on the members of the security team. Centra's micro-segmentation solution provides automatic policy recommendations that can be effectively applied on any cloud infrastructure, streamlining your organization's security policy for AWS and all other cloud deployments.
  • Empower Security Through Visualization. The success of Security Information and Event Management (SIEM) solutions demonstrates the effectiveness and importance of collating security data into an easy-to-use format for the security team. Many data breaches are enabled by a lack of understanding of the protected system or an inability to effectively analyze and cross-reference alert data. Humans operate most effectively when dealing with visual data, and Centra is designed to provide your security team with the information that they need to secure your cloud deployment. Centra's threat detection and response technology uses dynamic detection, reputation analysis, and policy-based detection to draw analysts' attention to where it is needed most. The Guardicore incident response dashboard aggregates all necessary details regarding the attack, empowering defenders to respond rapidly and minimize the organizational impact of an attack.
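As one concrete example of the first audit step above, poorly secured cloud storage can be found by checking bucket ACLs for public grants. The sketch below is illustrative: the bucket records are hard-coded samples that mimic the shape of an S3 ACL response, whereas in practice you would enumerate real buckets through the AWS APIs.

```python
# Illustrative audit helper: flag S3 buckets whose ACL grants access to
# the AllUsers or AuthenticatedUsers groups. Bucket records below are
# hard-coded samples shaped like an S3 ACL response.

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

buckets = {
    "corp-finance-data": {"Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
         "Permission": "FULL_CONTROL"},
    ]},
    "marketing-assets": {"Grants": [
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]},
}

def publicly_accessible(buckets):
    """Names of buckets with at least one grant to a public group."""
    return sorted(
        name for name, acl in buckets.items()
        if any(g["Grantee"].get("URI") in PUBLIC_GRANTEES
               for g in acl["Grants"])
    )

print(publicly_accessible(buckets))  # ['marketing-assets']
```

A check like this, run regularly across every account found in the discovery step, turns the audit checklist from a document into an enforceable baseline.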

Applying the AWS Security Checklist

Protecting your organization’s sensitive data and intellectual property requires going beyond the minimum when securing your organization’s cloud deployment. Built for the cloud, Guardicore Centra is designed to provide your organization with the tools it needs to secure your AWS deployment.

To find out more, contact us today or sign up for a demo of the Centra Security Platform and see its impact on your cloud security for yourself.

Thoughts on the Capital One Attack

The ink on the Equifax settlement papers is hardly dry, and another huge data breach, this time at Capital One, is sending shock waves across North America.

The company has disclosed that in March of this year, a former systems engineer, Paige Thompson, exploited a configuration vulnerability (associated with a firewall or WAF) and was able to execute a series of commands on the bank's servers that were hosted on AWS. About 106 million customers have had their data exposed, including names, incomes, dates of birth, and even social security numbers and bank account credentials. Some of the data was encrypted and some was tokenized, but there has been a large amount of damage to customers, as well as to the bank's reputation and the entire security ecosystem.

Our customers, partners and even employees have asked us to comment about the Capital One data breach. Guardicore is an Advanced Technology Partner for AWS with security competency. There are only a small number of companies with such certification and thus I’d like to think that our thoughts do matter.

First – there are a couple of positive things related to this breach:

  1. Once notified, Capital One acted very quickly. It means that they have the right procedures, processes and people.
  2. Responsible disclosure programs provide real value. This is important and many organizations should follow suit.

While not a lot of information is available, based on the content that has been published thus far, we have some additional thoughts:

Could this Data Breach Have Been Avoided?

Reading the many articles on this subject, everyone is trying to figure out the same thing: how did this happen, and what could have been done to keep Capital One's customer data more secure?

What Does a ‘Configuration Vulnerability’ Mean on AWS?

When it comes to managing security in a cloud or a hybrid-cloud environment, organizations often experience issues with maintaining good visibility and control over applications and traffic. The first step is understanding what your role is in a partnership with any cloud vendor. Being part of a shared-responsibility model in AWS means recognizing that Amazon gives you what it calls “full ownership and control” over how you store and secure your content and data. While AWS is responsible for infrastructure, having freedom over your content means you need to take charge when it comes to securing applications and data.

Looking at this data breach specifically, an AWS representative has said “AWS was not compromised in any way and functioned as designed. The perpetrator gained access through misconfiguration of the web application and not the underlying cloud-based infrastructure.”

Thompson gained access by leveraging a configuration error or vulnerability affecting a web application firewall. After bypassing what seems to have been a thin (maybe even single) layer of defense, she was able to move laterally across the network to the S3 bucket where the sensitive data was stored.
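The exact misconfiguration has not been fully detailed publicly, but overly permissive ingress rules are the classic form of this class of error. As a hedged illustration (the group record below is a hard-coded sample shaped like an EC2 security group, not data from the breach), a simple audit can flag ports exposed to the entire internet:

```python
# Illustrative check: list ports in a security-group-style record that
# are reachable from any address (0.0.0.0/0). Sample data only.

security_group = {
    "GroupId": "sg-example",
    "IpPermissions": [
        {"FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},
    ],
}

def open_to_world(group):
    """Return the ports whose ingress rules allow 0.0.0.0/0."""
    return sorted(
        p["FromPort"] for p in group["IpPermissions"]
        if any(r["CidrIp"] == "0.0.0.0/0" for r in p["IpRanges"])
    )

print(open_to_world(security_group))  # [443]
```

Flagging such rules doesn't make them wrong (443 open to the world is normal for a public web tier), but it tells you exactly where a thin perimeter must be backed by deeper controls.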

Cloud Native Security Controls are Just Your First Layer of Defense

Can we learn anything from this incomplete information? I think the answer is yes: cloud-native security controls are a good start, but on their own they are not enough. Best practice is to add an extra layer of detection and prevention, bringing application-aware security to the cloud just as you would expect on-premises. Defense-in-depth as a concept is not going away, even in the cloud. The controls and defenses that the cloud service provider includes should be seen as part of the basic hygiene requirements, not the whole solution.

I would also argue that the built-in cloud APIs for policy enforcement are insufficient: SecDevOps need more effective ways to identify and block malicious or suspicious traffic than cloud APIs can provide. When we designed Guardicore Centra, we decided to develop independent capabilities wherever possible, even when it meant spending more time and effort on development. The result is a better security solution that is independent of the infrastructure and not limited to what a third-party supplier, vendor, or partner provides.

Guardicore Centra is used as an added security platform for AWS as well as other clouds. We know from our customers that acting on the points listed below has protected them on multiple occasions.

  • Guardicore is an Advanced Technology Partner for AWS: Guardicore is the only vendor that specializes in micro-segmentation with this certification from AWS, and Guardicore Centra is fully integrated with AWS. Users can see native-cloud information and AWS-specific data alongside all information about their hybrid ecosystem. When creating policy, this can be visualized and enforced on flows and down to the process level, layer 7.
  • Micro-Segmentation Fills the Gaps of Built-in Cloud Segmentation: Many companies rely on native cloud segmentation through cloud-vendor tools, but that would have been insufficient to stop the kind of lateral movement the attacker used to harvest credentials in the Capital One breach. In contrast, solutions like Centra that are deployed on top of the cloud's infrastructure, and are independent of it, are not limited in this way. Specifically, Centra enables companies to set policies down to the process level.
  • Cloud API for Policy Enforcement is Insufficient: SecDevOps need more effective ways to block malicious or suspicious traffic than cloud APIs can achieve. In contrast, Guardicore Centra can block unwanted traffic with dynamic application policies that monitor and enforce on east-west traffic as well as north-south. As smart labeling and grouping can pull in information such as EC2 tags, users obtain a highly visible and configurable expression of their data centers, both for mapping and policy enforcement.
  • Breach Detection in Minutes, not Months: The Capital One breach was discovered on July 19, 2019, but the attack occurred in late March of this year, a gap of almost four months from breach to detection. Many businesses struggle with visibility on the cloud, but Guardicore Centra's foundational map is created with equal insight into all environments. Breach detection occurs in real time, with visibility down to Layer 7. Security incidents or policy violations can be sent immediately to AWS Security Hub, handled automatically, or escalated internally for mitigation.
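To illustrate the tag-based grouping mentioned above (the instance records and tags here are invented samples, not Centra's implementation), labels pulled from EC2-style tags let policy target logical groups instead of IP addresses:

```python
# Illustrative sketch: select instances by tag so policy can address
# groups like "env=prod, app=payments". Sample data only.

instances = [
    {"id": "i-01", "tags": {"env": "prod", "app": "payments"}},
    {"id": "i-02", "tags": {"env": "prod", "app": "portal"}},
    {"id": "i-03", "tags": {"env": "dev",  "app": "payments"}},
]

def select(instances, **required_tags):
    """IDs of instances whose tags match every required key/value pair."""
    return [
        i["id"] for i in instances
        if all(i["tags"].get(k) == v for k, v in required_tags.items())
    ]

print(select(instances, env="prod", app="payments"))  # ['i-01']
```

Because membership is computed from tags rather than addresses, a policy written against the group automatically follows workloads as instances are created, moved, or retired.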

Capital One is well known for good security practices, and its contributions to the security and open-source communities are tremendous. This breach highlights how easily even a business with a strong security posture can fall victim to this kind of vulnerability. As more enterprises move to hybrid-cloud realities, visibility and control become harder to achieve.

Guardicore micro-segmentation is built for this challenge, achieving full visibility on the cloud, and creating single granular policies that follow the workload, working seamlessly across a heterogeneous environment.

Want to find out more about how to secure your AWS instances?

Read these Best Practices