Posts

The Risk of Legacy Systems in a Modern-Day Hybrid Data Center

If you’re still heavily reliant on legacy infrastructure, you’re not alone. In many industries, legacy servers are an integral part of ‘business as usual’ and are far too complex or expensive to replace or remove.

Examples include Oracle databases that run on Solaris servers, applications running on RHEL 4, or industry-specific legacy technology. Think about the legacy AIX machines that often handle transaction processing for financial institutions, or end-of-life operating systems such as Windows XP that are frequently used on end devices in healthcare enterprises. While businesses do attempt to modernize these applications and infrastructure, it can take years of planning to reach execution, and even then the effort may never be fully successful.

When Legacy Isn’t Secured – The Whole Data Center is at Risk

When you think about the potential risk of legacy infrastructure, you may go straight to the legacy workloads, but that’s just the start. Think about an unpatched device running Windows XP. If it is exploited, an attacker can gain direct access to your data center. Security advisories like this recent warning about a remote code execution vulnerability in Windows Server 2003 and Windows XP show how close this danger can be.

Gaining access to just one unpatched device, especially a legacy machine, is relatively simple. From that foothold, lateral movement allows an attacker to move deeper inside the network. Today’s data centers are increasingly complex, with an intricate mix of technologies: not a binary split between legacy and modern, but a hybrid of public and private clouds, containers, and more. When a data center takes advantage of this kind of dynamic and complex infrastructure, the risk grows exponentially. Traffic patterns are harder to visualize and therefore harder to control, and attackers can move around your network undetected.

Digital Transformation Makes Legacy More Problematic

The threat that legacy servers pose is not as simple as it was before digital transformation. Modernization of the data center has increased the complexity of the enterprise, and attackers have more vectors than ever before to gain a foothold in your data centers and make their way to critical applications and digital crown jewels.

Historically, an on-premises application might have been used by only a few other applications, probably also on premises. Today, it is likely to be used by cloud-based applications too, without any improvements to its security. As legacy systems are exposed to more and more applications and environments, the risk posed by unpatched or insecure legacy systems keeps growing, exacerbated by every new innovation, communication path, or advance in technology.

Blocking these communications isn’t a realistic option; digital transformation makes these connections necessary. At the same time, you can’t embrace the latest innovation without securing the business-critical elements of your data center. How can you rapidly deploy new applications in a modern data center without putting your enterprise at risk?

Quantifying the Risk

Many organizations think they understand their infrastructure, but don’t actually have an accurate or real-time visualization of their IT ecosystem. Organizational or ‘tribal’ knowledge about legacy systems may be incorrect, incomplete or lost, and it’s almost impossible to obtain manual visibility over a modern dynamic data center. Without an accurate map of your entire network, you simply can’t quantify what the risks are if an attack was to occur.

Once you’ve obtained visibility, here’s what you need to know:

  1. The servers and endpoints that are running legacy systems.
  2. The business applications and environments where the associated workloads belong.
  3. The ways in which the workloads interact with other environments and applications. Think about what processes they use and what goals they are trying to achieve.

Once you have this information, you then know which workloads are presenting the most risk, the business processes that are most likely to come under attack, and the routes that a hacker could use to get from the easy target of a legacy server, across clouds and data centers to a critical prized asset. We often see customers surprised by the ‘open doors’ that could lead attackers directly from an insecure legacy machine to sensitive customer data, or digital crown jewels.
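
As a rough illustration of how this inventory can be assembled once flow data is available, here is a minimal Python sketch; the asset list, flow records, and field names are hypothetical and not Guardicore's data model:

```python
# Hypothetical inventory: each asset has an OS and a business application label.
LEGACY_OS = {"Windows XP", "Windows Server 2003", "RHEL 4", "Solaris 10", "AIX 6.1"}

assets = [
    {"name": "billing-db-01", "os": "Solaris 10", "app": "Billing"},
    {"name": "web-frontend-03", "os": "Ubuntu 22.04", "app": "Storefront"},
    {"name": "branch-kiosk-17", "os": "Windows XP", "app": "Branch Ops"},
]

flows = [  # (source asset, destination asset, destination port)
    ("branch-kiosk-17", "billing-db-01", 1521),
    ("web-frontend-03", "billing-db-01", 1521),
]

# 1. Which servers and endpoints run legacy systems?
legacy = {a["name"]: a for a in assets if a["os"] in LEGACY_OS}

# 2. Which business applications do those workloads belong to?
exposed_apps = {a["app"] for a in legacy.values()}

# 3. How do the legacy workloads interact with everything else?
for src, dst, port in flows:
    if src in legacy or dst in legacy:
        print(f"{src} -> {dst}:{port}  (involves a legacy workload)")

print("Legacy workloads:", sorted(legacy))
print("Business applications at risk:", sorted(exposed_apps))
```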

Once you’ve got full visibility, you can start building a list of what to change, which systems to migrate to new environments, and which policy you could use to protect the most valuable assets in your data center. With smart segmentation in place, legacy machines do not have to be a risky element of your infrastructure.

Micro-segmentation is a Powerful Tool Against Lateral Movement

Using micro-segmentation effectively reduces risk in a hybrid data center environment. Specific, granular security policy can be enforced, which works across all infrastructure – from legacy servers to clouds and containers. This policy limits an attacker’s ability to move laterally inside the data center, stopping movement across workloads, applications, and environments.

If you’ve been using VLANs up until now, you’ll know how ineffective they are when it comes to protecting legacy systems. VLANs usually place all legacy systems into one segment, which means a single breach puts them all in the line of fire. VLANs rely on firewall rules that are difficult to maintain and do not leverage sufficient automation. This often results in organizations accepting loose policy that leaves them open to risk. Without visibility, security teams are unable to enforce tight policy and flows, not only among the legacy systems themselves, but also between the legacy systems and the rest of a modern infrastructure.

One Solution – Across all Infrastructure

Many organizations make the mistake of forgetting about legacy systems when they think about their entire IT ecosystem. However, as legacy servers can be the most vulnerable, it’s essential that your micro-segmentation solution works here, too. Covering all infrastructure types is a must-have for any company when choosing a micro-segmentation vendor that works with modern data centers. Even the enterprises who are looking to modernize or replace their legacy systems may be years away from achieving this, and security is more important than ever in the meantime.

Say Goodbye to the Legacy Challenge

Legacy infrastructure is becoming harder to manage. The servers and systems are business critical, yet securing and maintaining them in a modern hybrid data center grows more difficult every year. On top of this, the risk and the attack surface increase with every new cloud-based technology and every new application you take on.

Visibility is the first important step. Security teams can use an accurate map of their entire network to identify legacy servers and their interdependencies and communications, and then control the risks using tight micro-segmentation technology.

Guardicore Centra can cover legacy infrastructure alongside any other platform, removing the issue of gaps or blind spots for your network. Without fear of losing control over your existing legacy servers, your enterprise can create a micro-segmentation policy that’s future-focused, with support for where you’ve come from and built for today’s hybrid data center.

Interested in learning more about implementing a hybrid cloud data center security solution?

Download our white paper

Moving Zero Trust from a Concept to a Reality

Most people understand the reasoning and the reality behind a zero trust model. While historically, a network perimeter was considered sufficient to keep attacks at bay, today this is not the case. Zero trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. This added layer of security has been shown to be much more useful and capable in preventing breaches.

But how can organizations move from a concept or an idea to implementation? Relying on tools built on 15-20 year old technologies is not adequate.

There is a growing demand for IT resources that can be accessed in a location-agnostic way, and cloud services are being used more widely than ever. These facts, on top of businesses embracing broader use of distributed application architectures, mean that neither traditional nor next-generation firewalls are effective for risk reduction any longer.
The other factor to consider is that new malware and attack vectors are being discovered every day, and businesses have no idea where the next threat might come from. It’s more important than ever to use micro-segmentation and micro-perimeters to limit the fallout of a cyber attack.

How does applying the best practices of zero trust combat these issues?

Simply put, implementing the zero trust model creates and enforces small segments of control around sensitive data and applications, increasing your data security overall. Businesses can use zero trust to monitor all network traffic for malicious activity or unauthorized access, limiting the risk of lateral movement through escalated user privileges and improving breach detection and incident response. As Forrester Research, which originally introduced the concept, explains, with zero trust, network policy can be managed from one central console through automation.

The Guardicore principles of zero trust

At Guardicore, we help IT teams implement zero trust, guided by our four high-level principles. Together, they create an environment where you are best placed to reap the benefits of zero trust.

  • A least privilege access strategy: Access permissions are assigned only on the basis of a well-defined need. ‘Never trust, always verify.’ This doesn’t stop at users alone. We also include applications, and even the data itself, with continuous review of the need for access. Group permissions can help make this seamless, and individual assets or elements can then be removed from each group as necessary.
  • Secure access to all resources: This holds regardless of a resource’s location or its user. The authentication level is the same both inside and outside the local area network; for example, services from the LAN will not be made available via VPN.
  • Access control at all levels: Both the network itself and each resource or application need multi-factor authentication.
  • Audit everything: Rather than simply collecting data, we review all collected logs, using automation to generate alerts where necessary. These bots perform multiple actions, such as our ‘nightwatch bot’, which generates phone calls to the right member of staff in the case of an emergency.

However, knowing these principles and understanding the benefits of zero trust is not the same as being able to implement it securely, with the right amount of flexibility and control.

Many companies fall at the first hurdle, unsure how to gain full visibility of their ecosystem. Without this, it is impossible to define policy clearly, set up the correct alerts so that business can run as usual, or stay on top of costs. If your business does not have the right guidance or skill-sets, the zero trust model becomes a ‘nice to have’ in theory but not something that can be achieved in practice.

It all starts with the map

With a zero trust model that starts with deep visibility, you can automatically identify all resources across all environments, at both the application and network level. At this point, you can work out what you need to enforce, turning to technology once you know what you’re looking to build as a strategy for your business. Other solutions will start with their capabilities, using these to suggest enforcement, which is the opposite of what you need, and can leave gaps where you need policy the most.

It’s important to ensure that you have a method in place for classification so that stakeholders can understand what they are looking at on your map. We bring in data from third-party orchestration, using automation to create a highly accessible map that is simple to visualize across both technical and business teams. With a context-rich map, you can generate intelligence on malicious activity even at the application layer, and tightly enforce policy without worrying about the impact on business as usual.
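
As a rough illustration of what bringing in third-party orchestration data can look like in practice, the sketch below joins raw flows with hypothetical orchestration labels so each connection can be read in business terms rather than IP addresses; the label source and field names are assumptions made for the example:

```python
# Hypothetical labels pulled from an orchestration source (e.g. a CMDB or Kubernetes tags).
labels = {
    "10.0.1.15": {"app": "Payments", "tier": "web", "env": "production"},
    "10.0.2.40": {"app": "Payments", "tier": "db", "env": "production"},
    "10.0.9.77": {"app": "Reporting", "tier": "batch", "env": "staging"},
}

flows = [
    {"src": "10.0.1.15", "dst": "10.0.2.40", "port": 5432},
    {"src": "10.0.9.77", "dst": "10.0.2.40", "port": 5432},
]

def describe(flow):
    # Fall back to 'unknown' labels for workloads the orchestration source doesn't know.
    src = labels.get(flow["src"], {"app": "unknown", "tier": "?", "env": "?"})
    dst = labels.get(flow["dst"], {"app": "unknown", "tier": "?", "env": "?"})
    return (f'{src["app"]}/{src["tier"]} ({src["env"]}) -> '
            f'{dst["app"]}/{dst["tier"]} ({dst["env"]}) on port {flow["port"]}')

for f in flows:
    print(describe(f))
# A cross-environment flow (staging -> production) stands out immediately once the
# map is expressed in application terms rather than IP addresses.
```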

With these best practices in mind, and a map as your foundation – your business can achieve the goals of zero trust, enforcing control around sensitive data and apps, finding malicious activity in network traffic, and centrally managing network policy with automation.

Want to better understand how to implement segmentation for securing modern data centers to work towards a zero trust model?

Download our white paper

Rethinking Segmentation for Better Security

Cloud services and their related security challenges will continue to grow

One of the biggest shifts in the enterprise computing industry in the past decade is the migration to the cloud. As more and more organizations discover the benefits of moving their data centers to private and public cloud environments, this trend is expected to continue dominating the enterprise landscape. Gartner projects cloud services will grow exponentially from 2019 through 2022, with Infrastructure-as-a-Service (IaaS) being the fastest growing segment of the market, already showing an increase of 27.5% in 2019 compared to 2018.

So what’s the big challenge?

The added agility of cloud infrastructure comes with a trade-off, in the form of increased cyber security complexity. Traditional security tools were designed for on-premises servers and endpoints, focusing on perimeter defense to block attacks at the entry point. But the dynamic nature of hybrid cloud services has made perimeter defense insufficient. When the perimeter itself is constantly shifting, as data and workloads move back and forth among public and private clouds and on-premises data centers, the attack surface becomes much larger and requires network segmentation to control lateral movement within the perimeter.

From the early days of the cloud, segmentation was a popular concept. Traditionally, businesses looked to divide the network into segments and enforce some form of access control between them. In practice, this meant putting the relevant servers into a dedicated VLAN and routing traffic through a firewall. A higher degree of segmentation meant smaller segments, which reduced the attack surface and limited the impact of any potential breach.

Then – the rules of the game changed! Moving from static networks to dynamic, hybrid cloud-based data centers

Simple segmentation by firewalls worked in the past, when networks consisted of relatively large, static segments. However, the “rules of the game” have changed significantly in recent years. Dynamic data centers and hybrid cloud adoption have created problems that cannot be solved with legacy firewalls, and yet achieving segmentation is now more vital than ever. The cadence of change in infrastructure and application services is very high, accentuating the need for granular segments, an understanding of their dependencies, and security policy that keeps pace.

Take, for example, the 2017 Equifax breach. The US House of Representatives report on this incident pointed directly to the lack of internal segmentation as one of the key gaps that allowed the breach impact to be so big, affecting 143 million consumers.

Regulation is another driver of segmentation. One of Guardicore’s customers, a global investment bank, needed to comply with a new SWIFT regulation, which requires all SWIFT servers to be placed in a separate segment and all connections allowed in and out of that segment to be whitelisted. Using traditional methods, it took the bank 10 months and a costly, labor-intensive process to complete this change, spurring them on to find smarter segmentation methods moving forward.

The examples above demonstrate how although segmentation is a known and well understood security measure, in practice organizations struggle to implement it properly in a cost-effective way.

Adapt easily to these changes and start micro-segmentation

To deal with these challenges, micro-segmentation was born. Micro-segmentation takes enterprise security to a new level and is a step further than existing network segmentation and application segmentation methods, adding visibility and policy granularity. It typically works by establishing security policies around individual or groups of applications, regardless of where they reside in the hybrid data center. These policies dictate which applications can and cannot communicate with each other.

Micro-segmentation includes the ability to fully visualize the environment and define security policies with Layer 7 process-level precision, making it highly effective at preventing lateral movement in a hybrid cloud environment.
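
To make “Layer 7 process-level precision” concrete, here is a small sketch that evaluates a connection against rules naming not just source and destination applications but also the processes expected to make the connection; the rule format and names are illustrative and are not Centra’s actual policy syntax:

```python
# Illustrative allow-list: which application/process pairs may talk, and on which port.
RULES = [
    {"src_app": "Storefront", "src_proc": "nginx",
     "dst_app": "Orders",     "dst_proc": "java", "port": 8443},
    {"src_app": "Orders",     "src_proc": "java",
     "dst_app": "OrdersDB",   "dst_proc": "postgres", "port": 5432},
]

def is_allowed(conn):
    """Deny by default; allow only connections matching an explicit rule."""
    return any(
        conn["src_app"] == r["src_app"] and conn["src_proc"] == r["src_proc"]
        and conn["dst_app"] == r["dst_app"] and conn["dst_proc"] == r["dst_proc"]
        and conn["port"] == r["port"]
        for r in RULES
    )

# Same applications and port, wrong process: a Layer 4 rule would allow this,
# a process-aware (Layer 7) rule does not.
suspect = {"src_app": "Storefront", "src_proc": "powershell.exe",
           "dst_app": "Orders", "dst_proc": "java", "port": 8443}
print(is_allowed(suspect))  # False
```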

Take the first step in preparing your enterprise for better data security

Want to learn more? Listen to Guardicore’s CTO and Co-founder, Ariel Zeitlin, as he walks through the challenges and the solutions to better secure your data in his latest interview with the CIO Talk Network. In this podcast, Ariel discusses new approaches to implementing segmentation, the key aspects you need to consider when comparing different vendors and technologies, and what lies ahead for security leaders in this space.


Want to learn more about how to first think through, then properly implement micro-segmentation? Read our white paper on operationalizing your segmentation project.


You don’t have to be mature in order to be more secure – cloud, maturity, and micro-segmentation

Whether you’ve transitioned to the cloud, are still using on-prem servers, or are operating on a hybrid system, you need security services that are up to the task of protecting all your assets. Naturally, you want the best protection for your business assets. In the cybersecurity world, it’s generally agreed that micro-segmentation is the foundation for truly powerful, flexible, and complete cloud network security. The trouble is that conventional wisdom might tell you that you aren’t yet ready for it.

If you are using a public cloud or VMware NSX-V, you already have a limited set of basic micro-segmentation capabilities built into your cloud infrastructure, using security groups and the distributed firewall (DFW) in NSX-V. But your security requirements, the way you have built your network, or your use of multiple vendors may demand more than a limited set of basic capabilities.

The greatest security benefits can be accessed by enterprises that can unleash the full potential of micro-segmentation beyond layers 3 and 4 of the OSI model, and use application-aware micro-segmentation. Generally, your cloud security choices will be based on the cloud maturity level of your organization. It’s assumed that enterprises that aren’t yet fully mature, according to typical cloud maturity models, won’t have the resources to implement the most advanced cloud security solutions.

But what if that’s not the case? Perhaps a different way of thinking about organizational maturity would show that you can enjoy at least some of the benefits of advanced cloud security systems. Take a closer look at a different way to assess your enterprise’s maturity.

A different way to think about your organizational maturity

Larger organizations already have a solid understanding of their maturity. They constantly monitor and reevaluate their maturity profile, so as to make the best decisions about cloud services and cloud security options. We like to compare an organization learning about the best cloud security services to people who are learning to ski.

When an adult learns how to ski, they’ll begin by buying ski equipment and signing up for ski lessons. Then they’ll spend some time learning how to use their skis and getting used to the feeling of wearing them, before they’re taught to actually ski. It could take a few lessons until an adult skis downhill. If they don’t have strong core muscles and a good sense of balance, they are likely to be sent away to improve their general fitness before trying something new. But when a child learns how to ski, they usually learn much faster than an adult, without taking as long to adjust to the new movements.

Just like an adult needs to be strong enough to learn to ski, an organization needs to be strong enough to implement cloud security services. While adults check their fitness with exercises and tests, organizations check their fitness using cloud maturity models. But typical cloud maturity models might not give an accurate picture of your maturity profile. They usually use 4, 5, or 6 levels of maturity to evaluate your organization in a number of different areas. If your enterprise hasn’t reached a particular level in enough areas, you’ll have to build up your maturity before you can implement an advanced cloud security solution.

At Guardicore, we take a different approach. We developed a solution that yields high security dividends, even if the security capabilities of your organization are not fully mature.

Assessing the maturity of ‘immature’ organizations

Most cloud security providers assume that a newer enterprise doesn’t have the maturity to use advanced cloud security systems. But we view newer enterprises like children who learn to ski. Children have less fear and more flexibility than an adult. They don’t worry about falling, and when they do fall, they simply get up and carry on. The consequences of falling can be a lot more serious for adults. In the same way, newer enterprises can be more agile, less risk-averse, and more able to try something new than an older enterprise that appears to be more mature.

Newer organizations often have these advantages:

  • Fewer silos between departments
  • Better visibility into a less complex environment
  • A much higher tolerance for risk that enables them to test new cloud services and structures, due to a lower investment in existing architecture and processes
  • A more agile and streamlined environment
  • A lighter burden of inherited infrastructure
  • A more unified environment that isn’t weakened by a patchwork of legacy items

While a newer enterprise might not be ready to run a full package of advanced cloud security solutions, it could be agile enough to implement many or most of the security features while it continues to mature. Guardicore allows young organizations to leapfrog the functions that they aren’t yet ready for, while still taking advantage of the superior protection offered by micro-segmentation. Like a child learning to ski, we’ll help you enjoy the blue runs sooner, even if you can’t yet head off-piste.

Organizational maturity in ‘mature’ organizations

Although an older, longer-established organization might seem more cloud mature, it may not be ready for advanced cloud security systems. Many older enterprises aren’t even sure what is within their own ecosystem. They face data silos, duplicate workflows, and cumbersome business processes. Factors holding them back can include:

  • Inefficient workflows
  • Long-winded work processes
  • Disparate and divided infrastructure
  • Awkward legacy environments
  • Business information that is siloed in various departments
  • Complex architectures

Here, Guardicore Centra will be instrumental in bridging the immaturity gap: It provides deep visibility through clear visualization of the entire environment, even those parts that are siloed. Guardicore Centra delivers benefits for multiple teams, and its policy engine supports (almost) any kind of organizational security policy.

What’s more, Guardicore supports phased deployment. It is not an all-or-nothing solution. An organization that can’t yet run a full set of advanced cloud security services still needs the best protection it can get for its business environment. In these situations, Guardicore helps implement only those features that your organization is ready for, while making alternative security arrangements for the rest of your enterprise. By taking it slowly, you can grow into your cloud capabilities and gradually implement the full functionality of micro-segmentation.

Flexible cloud security solutions for every organization

Guardicore’s advanced cloud security solutions provide the highest level of protection for your critical business assets. They are flexible enough to handle legacy infrastructure and complex environments, while allowing for varying levels of cloud maturity.

Whether you are a ‘young’ organization that’s not seen as cloud-mature, or an older enterprise struggling with organizational immaturity, Guardicore can help you to get your skis on. As long as you have a realistic understanding of your organization’s requirements and capabilities, you can apply the right Guardicore security solution to your business and enjoy superior protection without breaking a leg.

Lessons Learned from One of the Largest Bank Heists in Mexico

News report: $20M was stolen from Mexican banks, with the initial intention of stealing $150M. We are automatically drawn to think of a “Casa de Papel”-style heist: bank robbers wearing masks hijacking a bank and stealing money from an underground vault. This time, the bank robbers were hackers, the vault was the SPEI application, and, well, no masks were needed. The hackers were able to figuratively “walk right in” and take the money. Nothing stopped them from entering through the back door and moving laterally until they reached the SPEI application.

Central bank Banco de México, also known as Banxico, has published an official report detailing the attack, the techniques used by the attackers and how they were able to compromise several banks in Mexico to steal $20M. The report clearly emphasizes how easy it was for the attackers to reach their goal, due to insecure network architecture and lack of controls.

The bank heist was directed at the Mexican financial system called SPEI, Mexico’s domestic money transfer platform, managed by Banxico. Once the attackers found their initial entrance into the network, they started moving laterally to find the “crown jewels”, the SPEI application servers. The report states that the lack of network segmentation enabled the intruders to use that initial access to go deeper in the network with little to no interference and reach the SPEI transaction servers easily. Moreover, the SPEI app itself and its different components had bugs and lacked adequate validation checks of communication between the application servers. This meant that within the application the attackers could create an infrastructure of control that eventually enabled them to create bogus transactions and extract the money they were after.

Questions arise: what can be learned from this heist? How do we prevent the next one? Attackers will always find their way into the network, so how do you prevent them from getting the gold?

Follow Advice to Remain Compliant

When it comes to protecting valuable customer information and achieving regulatory compliance, standards such as PCI-DSS and SWIFT’s Customer Security Programme recommend the following basic steps: system integrity monitoring, vulnerability management, and segmentation and application control. For financial information, PCI-DSS regulations enforce file integrity monitoring on the Cardholder Data Environment itself, to examine the way that files change, establish the origin of such changes, and determine whether they are suspicious in nature. SWIFT regulations require customers to “Restrict internet access and protect critical systems from the general IT environment” and encourage companies to implement internal segmentation within each secure zone to further reduce the attack surface.

Let’s look at a few guidelines, as detailed by SWIFT, while incorporating our general advice on remaining compliant in a hybrid environment.

  • Inbound and outbound connectivity for the secure zone is fully limited.
  • Transport layer stateful firewalls are used to create logical separation at the boundary of the secure zone.
  • No “allow any” firewall rules are implemented, and all network flows are explicitly authorized.
  • Operators connect from dedicated operator PCs located within the secure zone (that is, PCs located within the secure zone, and used only for secure zone purposes).
  • SWIFT systems within the secure zone restrict administrative access to only expected ports, protocols, and originating IPs.
  • Internal segmentation is implemented between components in the secure zone to further reduce the risk.

SPEI servers, which serve a similar function to SWIFT application servers, should adhere to similar regulatory requirements; as Banxico elaborates in its official analysis report, such regulations are taking shape for this critical application.
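
As a simple illustration of auditing a secure-zone rule set against the guidelines above (no “allow any” rules, every flow explicitly authorized, a default deny at the end), here is a small sketch; the rule structure is hypothetical:

```python
# Hypothetical firewall rules guarding a SWIFT/SPEI-style secure zone.
rules = [
    {"src": "10.20.0.0/24", "dst": "10.30.5.10", "port": 443,   "action": "allow"},
    {"src": "any",          "dst": "10.30.5.10", "port": "any", "action": "allow"},
    {"src": "any",          "dst": "any",        "port": "any", "action": "deny"},
]

def audit(rules):
    findings = []
    # Flag allow rules that use "any" anywhere: these violate explicit authorization.
    for i, r in enumerate(rules):
        if r["action"] == "allow" and ("any" in (r["src"], r["dst"]) or r["port"] == "any"):
            findings.append(f"rule {i}: overly broad allow rule {r}")
    # Require a catch-all deny so anything not explicitly authorized is blocked.
    if not any(r["action"] == "deny" and r["src"] == r["dst"] == "any" for r in rules):
        findings.append("no default deny rule at the end of the rule set")
    return findings

for finding in audit(rules):
    print(finding)  # flags rule 1 as an 'allow any' that violates the guideline
```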

Don’t Rely on Traditional Security Controls

The protocols detailed above are recommended by security experts and compliance regulations worldwide, so it’s safe to assume the Mexican bank teams were aware of the benefits of such controls. Many of them have even been open about their attempts to implement these kinds of controls with traditionally available tools such as VLANs and endpoint firewalls. This has proven to be a long, costly and tiresome process, sometimes requiring 9 months of work to segment a single SWIFT application! Would you take 9 months to install a metal gate around your vault and between your vault compartments? I didn’t think so…

Guardicore Centra is set on resolving this challenge. By moving away from traditional segmentation methods to micro-segmentation built on foundational, actionable data center visibility, it delivers quick time to value, with controls down to the process level. Our customers, including Santander Brasil and BancoDelBajio in Mexico, benefit from early wins like protecting critical assets or achieving regulatory compliance, avoiding the trap of “all or nothing segmentation” that can occur when competitors do not take a phased approach.

Guardicore provides the whole package to secure the data center, including real-time and historical visibility down to the process level, segmentation and micro-segmentation supporting various use cases, and breach detection and response, to thoroughly strengthen our clients’ overall security posture.

Micro-segmentation is more achievable than ever before. Let’s upgrade your company’s security practices to prevent attackers from gaining access to sensitive information and crown jewels in your hybrid data center. Request a demo now or read more about smart segmentation.


Micro-Segmentation: Getting Done Faster With Machine Learning

Building micro-segmentation policies around workloads to address compliance, reduce attack surfaces and prevent threat propagation between machines is on every organization’s security agenda and has made it onto the CISO’s 2019 shortlist. Often, though, deploying segmentation policies in hybrid data centers proves harder than it looks. At Guardicore, we are very proud of our ability to help customers segment and micro-segment their clouds and data centers quickly, protecting their workloads across any environment and achieving a fast return on security investments.

But we always think there is room for improvement. Analyzing the different tasks involved in micro-segmentation, we identified several steps that can be accelerated with more sophisticated code. Using data collected from our customers and studied by Guardicore Labs, we added machine learning capabilities that accelerate micro-segmentation.

To properly micro-segment a large environment, one must discover all the workloads, create application dependency mappings, and classify and label the workloads accordingly. Next, one must understand how each application is tiered and how it behaves in order to set micro-segmentation policies both for its internal components and for the other entities it serves.

This is where our machine learning capabilities can assist.

We take advantage of the fact that Guardicore deployments collect information about every flow in the network. Discovery is automatic, producing a visual map of all application communications and dependencies that shows how workloads are communicating. Our algorithms model this network as an annotated graph and apply a customized unsupervised machine learning technique to cluster similar workloads into groups based on their communication patterns. Centra can then perform the following tasks (a simplified sketch of the clustering idea follows the list):

  • Automatic classification of workloads
  • Automatic label creation for applications and their tiers
  • Automatic rule suggestion for flow level-segmentation and process level micro-segmentation
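
The sketch below is a highly simplified illustration of this idea and not Guardicore’s actual algorithm: it encodes hypothetical workloads as feature vectors built from the ports they communicate on and clusters them with scikit-learn’s KMeans:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical workloads and the ports on which each was observed communicating.
workloads = ["web-1", "web-2", "web-3", "db-1", "db-2", "cache-1"]
observed_ports = [
    {80, 443, 6379},   # web-1
    {80, 443, 6379},   # web-2
    {80, 443},         # web-3
    {5432},            # db-1
    {5432},            # db-2
    {6379},            # cache-1
]

# Encode each workload as a binary feature vector over all ports seen in the map.
all_ports = sorted(set().union(*observed_ports))
X = np.array([[1 if p in ports else 0 for p in all_ports] for ports in observed_ports])

# Unsupervised clustering: similar communication patterns land in the same group,
# suggesting a common label (e.g. 'web tier' or 'db tier') for policy creation.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for name, cluster in zip(workloads, clusters):
    print(f"{name}: cluster {cluster}")
```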

Here is an example of running classification from Reveal’s data center map:

[Image: running classification from Reveal with ML]

Below is a visualization of results of automatic workload classification:

[Image: results of automatic workload classification with machine learning]

And this is how this looks in Reveal, at the application tier:

[Image: Reveal view with ML]

Want to learn more about our solution? Contact us.

Guardicore Threat Intelligence Helps Cybersecurity Community Research Attacks and Mitigate Risks

Guardicore Labs Launches Freely Available Public Resource For Investigating Malicious IP Addresses and Domains


Boston, Mass. and Tel Aviv, Israel – March 25, 2019 – Guardicore, a leader in internal data center and cloud security, today announced the launch of its Guardicore Threat Intelligence community resource. Developed by the Guardicore Labs research team, Guardicore Threat Intelligence is a freely available public resource for identifying and investigating malicious IP addresses and domains. With an easy to understand dashboard, Guardicore Threat Intelligence rates top attackers, top attacked ports and top malicious domains, giving security teams the insight they need to research and understand attacks and mitigate risks.

“Based on our deployment and technology Guardicore has a unique view of the most recent threats that are targeting servers in the cloud and in data centers. As a company we believe in giving back to the community and contributing where we can to the benefit of all. Thus, the Guardicore Labs research team has made its data and research available for the public,” said Pavel Gurvich, Co-founder and CEO, Guardicore. “With the launch of Guardicore Threat Intelligence, the cyber security community now has the opportunity to benefit from the same insights leveraged by Guardicore to protect its customers. Busy security teams can now benefit from a trusted, freely available resource that allows them to keep track of potential threats and enjoy unique analysis specific to data center attacks.”


Guardicore Threat Intelligence Features

Guardicore Threat Intelligence is currently the only publicly available community resource to focus exclusively on data center attacks. Specifically, it includes data not available in other public feeds, including the role of IP addresses in specific attacks and detailed attack flow, providing context for attacks on Internet-facing servers with a single aggregated view. Security analysts, threat hunters, and incident response or forensics teams can leverage Guardicore Threat Intelligence as an aggregated source to verify threats, understand attack patterns, and update IoCs quickly, eliminating the need to check multiple feeds and accelerating the time to response. Ultimately, Guardicore Threat Intelligence can help defenders anticipate future attacks and mitigate risks. Guardicore sources data from its Guardicore Global Sensors Network (GGSN), which streams early threat information to Guardicore Labs’ team for new attack identification and analysis.


Availability & Contributions

Guardicore Threat Intelligence is freely available now at https://threatintelligence.guardicore.com. Contributions are welcome. Guardicore Labs invites the cybersecurity community to contribute to its Threat Intelligence knowledge base by submitting data, asking questions and collaborating with Guardicore researchers on additional findings.


Guardicore Labs

Guardicore Labs is a global research team, consisting of hackers, cybersecurity researchers and industry experts. Its mission is to deliver cutting-edge cyber security research, lead and participate in academic research and provide analysis, insights and response methodologies to the latest cyber threats. Guardicore Labs helps Guardicore customers and the security community to continually enhance their security posture and protect critical business applications and infrastructure.


Guardicore Labs is also the creator of Infection Monkey, a popular open-source network resiliency test tool. Its high-profile threat discoveries include the Hexmen multiple attack campaigns targeting database services, the Bondnet botnet used to mine different cryptocurrencies, Operation Prowli, a traffic manipulation and cryptocurrency mining campaign, and Butter, a brute force SSH attack on Linux machines that leaves a backdoor to deliver a Samba payload. To learn more visit Guardicore Labs.


About Guardicore

Guardicore is a data center and cloud security company that protects your organization’s core assets using flexible, quickly deployed, and easy to understand micro-segmentation controls. Our solutions provide a simpler, faster way to guarantee persistent and consistent security — for any application, in any IT environment. For more information, visit www.guardicore.com.

Understanding and Avoiding Security Misconfiguration

Security Misconfiguration is simply defined as failing to implement all the security controls for a server or web application, or implementing the security controls, but doing so with errors. What a company thought of as a safe environment actually has dangerous gaps or mistakes that leave the organization open to risk. According to the OWASP top 10, this type of misconfiguration is number 6 on the list of critical web application security risks.

How Do I Know if I Have a Security Misconfiguration, and What Could It Be?

The truth is, you probably do have misconfigurations in your security, as this is a widespread problem that can happen at any level of the application stack. Some of the most common misconfigurations in traditional data centers include default configurations that have never been changed and remain insecure, incomplete configurations that were intended to be temporary, and wrong assumptions about an application’s expected network behavior and connectivity requirements.

In today’s hybrid data centers and cloud environments, and with the complexity of applications, operating systems, frameworks and workloads, this challenge is growing. These environments are technologically diverse and rapidly changing, making it difficult to understand and introduce the right controls for secure configuration. Without the right level of visibility, security misconfiguration opens new risks in heterogeneous environments. These include:

  • Unnecessary administration ports that are open for an application. These expose the application to remote attacks.
  • Outbound connections to various internet services. These could reveal unwanted behavior of the application in a critical environment.
  • Legacy applications that are trying to communicate with applications that do not exist anymore. Attackers could mimic these applications to establish a connection.

The Enhanced Risk of Misconfiguration in a Hybrid-Cloud Environment

While security misconfiguration in traditional data centers puts companies at risk of unauthorized access to application resources, data exposure and in-organization threats, the advent of the cloud has expanded the threat landscape exponentially. It comes as no surprise that “2017 saw an incredible 424 percent increase in records breached through misconfigurations in cloud servers,” according to a recent report by IBM. This kind of cloud security misconfiguration accounted for almost 70% of the overall compromised data records that year.

One element to consider in a hybrid environment is the use of public cloud services, third-party services, and applications hosted on different infrastructure. Unauthorized application access, whether from external sources, internal applications, or legacy applications, can open a business up to a large amount of risk.

Firewalls can often suffer from misconfiguration, with policies left dangerously loose and permissive, providing a large amount of exposure to the network. In many cases, production environments are not firewalled from development environments, or firewalls are not used to enforce least privilege where it could be most beneficial.

Private servers hosted with third-party vendors, or third-party software, can lack visibility or an understanding of shared responsibility, often resulting in misconfiguration. One example is the 2018 Exactis breach, where 340 million records were exposed, affecting more than 21 million companies. Exactis was responsible for its data, despite using the standard and commonly deployed Elasticsearch infrastructure as its database. Critically, it failed to implement any access control to manage this shared responsibility.

With so much complexity in a heterogeneous environment, and human error often responsible for misconfiguration that may well be outside of your control, how can you demystify errors and keep your business safe?

Learning about Application Behavior to Mitigate the Risk of Misconfiguration

Visibility is your new best friend when it comes to fighting security misconfiguration in a hybrid cloud environment. Your business needs to learn the behavior of its applications, focusing in on each critical asset and its behavior. To do this, you need an accurate, real-time map of your entire ecosystem, which shows you communication and flows across your data center environment, whether that’s on premises, bare metal, hybrid cloud, or using containers and microservices.

This visibility not only helps you learn more about expected application behaviors, it also allows you to identify potential misconfigurations at a glance. An example could be revealing repeated connection failures from one specific application. On exploration, you may uncover that it is attempting to connect to a legacy application that is no longer in use. Without a real-time map into communications and flows, this could well have been the cause of a breach, where malware imitated the abandoned application to extract data or expose application behaviors. With foundational visibility, you can use this information to remove any disused or unnecessary applications or features.
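
As a small illustration of the kind of check this visibility enables, the sketch below counts repeated connection failures per source and destination in hypothetical flow logs and flags destinations that no longer exist in the inventory; the data and field names are invented for the example:

```python
from collections import Counter

# Hypothetical flow log entries: (source, destination, outcome).
flow_log = [
    ("erp-app-02", "legacy-ftp-01", "failed"),
    ("erp-app-02", "legacy-ftp-01", "failed"),
    ("erp-app-02", "legacy-ftp-01", "failed"),
    ("erp-app-02", "erp-db-01", "established"),
]

inventory = {"erp-app-02", "erp-db-01"}  # legacy-ftp-01 was decommissioned

# Count failed connection attempts per (source, destination) pair.
failures = Counter((src, dst) for src, dst, outcome in flow_log if outcome == "failed")

for (src, dst), count in failures.items():
    if count >= 3:
        note = " (destination not in inventory)" if dst not in inventory else ""
        print(f"{src} failed to reach {dst} {count} times{note}")
```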

Once you gain visibility and have a thorough understanding of your entire environment, the best way to manage risk is to lock down the most critical infrastructure, allowing only desired behavior, much as in a zero-trust model. Any communication that is not necessary for an application should be blocked. This is what OWASP calls a ‘segmented application architecture’, and it is their recommendation for protecting yourself against security misconfiguration.

Micro-segmentation is an effective way to make this happen. Strict policy protects communication to the most sensitive applications and therefore their information, so that even if a breach happens due to security misconfiguration, attackers cannot pivot to the most critical areas.

Visibility and Smart Policy Limit the Risk of Security Misconfiguration

The chances are, your business is already plagued by security misconfiguration. Complex and dynamic data centers are only increasing the risk of human error, as we add third-party services, external vendors, and public cloud management to our business ecosystems.

Guardicore Centra provides an accurate and detailed map of your hybrid-cloud data center as an important first step, enabling you to automatically identify unusual behavior and remove or mitigate unpatched features and applications, as well as identify anomalies in communication.

Once you’ve revealed your critical assets, you can then use micro-segmentation policy to ensure you are protected in case of a breach, limiting the attack surface if misconfigurations go unresolved, or if patch management is delayed on-premises or by external vendors. This all in one solution of visibility, breach detection and response is a powerful tool to protect your hybrid-cloud environment against security misconfiguration, and to amp up your security posture as a whole.

Want to hear more about Guardicore Centra and micro-segmentation? Get in touch

Ready to Give Micro-Segmentation Your Full Attention? Look Out for the Most Common Roadblocks

Security experts continue to promote micro-segmentation as an essential tool for risk reduction in hybrid cloud environments. If you’re ready to make 2019 the year you get your micro-segmentation journey off the ground, make sure you can identify the roadblocks you should be looking to avoid.

The irreversible movement of critical workloads into virtualized, hybrid cloud environments demands new security solutions that go further than traditional firewalls or endpoint controls. Audits and industry compliance requirements make it an imperative. News stories of the continued fallout of data center breaches in which attackers have caused severe brand and monetary damage, such as the Equifax breach, make it even more important to move to the top of your to-do list.

East-west data center traffic now accounts for most enterprise traffic — and has been said to “dwarf traditional client-server traffic which moves north-south.” As a result, traditional network and host-based security, even when virtualized, doesn’t provide the visibility, security controls, or protection capabilities to secure what has become the largest attack surface of today’s enterprise computing environments. Furthermore, point solutions offered by cloud and on-premises vendors come up short and add layers of complexity most enterprises can’t afford.

Attackers know this and are exploiting it. Today’s attacks are smarter and more straightforward, often launched to covertly harness portions of an enterprise’s compute power to commit other crimes. A good example of this is the rise in crypto-jacking, which is growing faster than ransomware as the means by which attackers attempt a pay-out. Alongside APTs, these types of threats take advantage of zero-day vulnerabilities or weaknesses in existing security and launch attacks directly against the data center or cloud.

As IT environments continue to grow increasingly dynamic and complex, attackers can accomplish their ends more quickly and efficiently. This is especially true in a hybrid ecosystem, given the lack of native security controls and the average length of dwell time before detection.

The responsibility to ensure that you are protected against these threats lies squarely in your court – it is on you to safeguard your business. Security is ultimately — and contractually — a shared responsibility between the provider and the user, and enterprises must continue to secure the workloads and applications themselves, not merely rely on intrusion prevention tools.

The Micro-Segmentation Dilemma

In view of this urgency, micro-segmentation has become a popular solution for addressing the reality of today’s data centers. We’ve had conversations with people at dozens of organizations that have tried to implement micro-segmentation. By identifying some of the more common pitfalls, we can lay out the tips and tricks that will help you make your implementation a success.

Lack of visibility: Without deep visibility into east-west data center traffic, any effort to implement micro-segmentation is thwarted. Even with lengthy analysis meetings, traffic collection, and manual mapping processes, security professionals will be left with blind spots. Even automated mapping efforts too often lack process-level visibility and critical contextual orchestration data. The ability to map out application workflows at a very granular level is necessary to identify logical groupings of applications for segmentation purposes.

All-or-nothing segmentation paralysis: Too often, executives think they need to micro-segment everything decisively, which leads to fears of disruption. The project looks too intimidating, so they never begin. They fail to understand that micro-segmentation must be done gradually, in phases. The right provider will be able to identify use cases that will provide quick time to value for your unique business context.

Layer 4 complacency: Some organizations believe that traditional network segmentation is sufficient. But ask them, “When was the last time your perimeter firewalls were strictly Layer 4 port forwarding devices?” Attacks over the last 15 years have often included port hijacking – taking over an allowed port with a new process for obfuscation and data exfiltration. Attackers can exploit open ports and protocols for lateral movement. Layer 4 approaches, typical of most point solutions, can in some cases amount to under-segmentation. Of course, effective micro-segmentation must strike a balance between application protection and business agility, delivering strong security without disrupting business-critical applications, so it’s important not to enforce such tight policy that you lose flexibility. However, in dynamic infrastructures where workloads communicate and often migrate across segments, you will want to enforce more granular policy, down to Layer 7.
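
As a simplified illustration of why process context matters here, the sketch below compares the process actually listening on an allowed port against the process the policy expects. It uses the psutil library, and the expected-process table is an assumption for the example; it is not how Centra performs enforcement:

```python
import psutil  # pip install psutil; may require elevated privileges to see all sockets

# Hypothetical policy expectation: which process should own each allowed listening port.
EXPECTED = {443: "nginx", 5432: "postgres"}

def check_listeners():
    for conn in psutil.net_connections(kind="inet"):
        # Only consider listening sockets on ports the policy allows.
        if conn.status != psutil.CONN_LISTEN or not conn.laddr:
            continue
        port = conn.laddr.port
        if port not in EXPECTED or conn.pid is None:
            continue
        try:
            proc = psutil.Process(conn.pid).name()
        except psutil.NoSuchProcess:
            continue
        if proc != EXPECTED[port]:
            print(f"port {port}: expected {EXPECTED[port]}, found {proc} "
                  "- possible port hijacking")

if __name__ == "__main__":
    check_listeners()
```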

Lack of multi-cloud convergence: The hybrid cloud data center adds agility through autoscaling and mobility of workloads. However, it is built on a heterogeneous architectural base. Each cloud vendor may offer point solutions and security group methodologies that focus on its own architecture. They have their own best interests at heart, and multiple solutions can result in unnecessary complexity. Successful micro-segmentation requires a solution that works in a converged fashion across the entire architecture. On top of this, a converged approach can be implemented more quickly and easily than one that must account for different cloud providers’ security technologies.

Inflexible policy engines: Point solutions often have poorly thought-out policy engines. Most include “allow-only” rule sets. Most security professionals would prefer to start with a “global-deny” list, which establishes a base policy against unauthorized actions across the entire environment. This lets enterprises demonstrate a security posture directly correlated with the compliance standards they must adhere to, such as HIPAA for health organizations or PCI-DSS for anyone who takes payments.

Moreover, point solutions usually don’t allow policies to be dynamically provisioned or updated when workflows are autoscaled, services expand or contract, or processes spin up or down — a key reason enterprises are moving to hybrid cloud data centers in the first place. Without this capability, micro-segmentation is virtually impossible.
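
To illustrate the difference in spirit, here is a minimal deny-by-default policy check keyed on labels rather than addresses, so the same rule keeps applying as workloads autoscale or move; the rule and label formats are hypothetical:

```python
# Base posture: deny everything. Explicit allow rules are expressed with labels,
# so a rule keeps applying to new instances of a tier as they autoscale.
ALLOW_RULES = [
    {"src": {"app": "shop", "tier": "web"}, "dst": {"app": "shop", "tier": "api"}, "port": 8443},
]

def matches(labels, selector):
    return all(labels.get(k) == v for k, v in selector.items())

def decide(src_labels, dst_labels, port):
    for rule in ALLOW_RULES:
        if matches(src_labels, rule["src"]) and matches(dst_labels, rule["dst"]) and port == rule["port"]:
            return "allow"
    return "deny"  # global default

# A freshly autoscaled web instance inherits the policy purely through its labels.
new_web = {"app": "shop", "tier": "web", "instance": "web-042"}
api = {"app": "shop", "tier": "api", "instance": "api-007"}
print(decide(new_web, api, 8443))   # allow
print(decide(api, new_web, 22))     # deny
```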

Given these obstacles, it’s understandable that most micro-segmentation projects suffer from lengthy implementation cycles, cost overruns, and excessive demands on scarce security resources, ultimately failing to achieve their goals.

So, how can you increase your chances of success?

Winning Strategies for Successful Micro-Segmentation

When intelligently planned and executed, reducing risk with micro-segmentation is very achievable. It starts with discovery of your applications and a visual map of their communications and dependencies within your network. With granular visibility into your entire environment, including network flows, assets, and orchestration details from various third-party platforms and workloads, you can more easily identify critical assets that can logically be grouped via labels to use in policy creation. Process-level (Layer 7) visibility accelerates your ability to identify and label workflows, and to achieve a more effective level of protection.
Converged micro-segmentation strategies that work seamlessly across your entire heterogeneous environment, from on premises to the cloud, will simplify and accelerate the rollout. When a policy can truly follow the workload, regardless of the underlying platform, it becomes easier to implement and manage, and delivers more effective protection.

Autoscaling is one of the major features of the hybrid cloud terrain. The inherent intelligence to understand and apply policies to workloads as they dynamically appear and disappear is key.

Finally, take a gradual, phased approach to operationalizing micro-segmentation. Start with critical assets, or applications that need to be secured for compliance. Which assets are most likely to be targeted by attackers? Which contain sensitive customer data or are most vulnerable to compute hijacking? Create policies around those groups first. Over time, you can gradually build out increasingly refined policies, whether for further risk reduction, the principle of least privilege, wider compliance needs, or any other end goals specific to your business.

Want to learn more about best practices for micro-segmentation? Read more.

Looking for a Micro-segmentation Technology That Works? Think Overlay Model

Gartner’s Four Models for Micro-Segmentation

Gartner has recently updated its micro-segmentation evaluation factors document, “How to Use Evaluation Factors to Select the Best Micro-Segmentation Model” (refreshed 5 November 2018).

The report details four different models for micro-segmentation, but it does not make a clear recommendation on which is best. Understanding the answer means looking at the limitations of each model and recognizing what the future looks like for dynamic hybrid-cloud data centers. I recommend reading the report and evaluating the different capabilities; for us at Guardicore, however, it is clear that one model stands above the others, and it should come as no surprise that vendors that previously used other models are now changing their technology to adopt it: Overlay.

But first, let me explain why other models are not adequate for most enterprise customers.

The Inflexibility of Native-Cloud Controls

The native model uses the tools that are provided with a virtualization platform, hypervisor, or infrastructure. This model is inherently limited and inflexible. Even for businesses using only a single hypervisor provider, this model ties them into one service, as micro-segmentation policy cannot simply be moved when you switch providers. In addition, while businesses might think they are working with one IaaS provider or hypervisor, there may well be servers elsewhere too, known as shadow IT. The reality is that vendors that used to offer native controls for micro-segmentation have realized that customers are transforming and have had to develop new Overlay-based products.

More commonly, enterprises know that they are working with multiple cloud providers and services, and need a micro-segmentation strategy that can work seamlessly across this heterogeneous environment.

The Inconsistency of Third-Party Firewalls

This model is based on virtual firewalls offered by third-party vendors. Enterprises using this model are often subject to network layer design limitations, and therefore forced to change their networking topology. They can be prevented from gaining visibility due to proprietary applications, encryption, or invisible and uncontrolled traffic on the same VLAN.

A known issue with this approach is the creation of bottlenecks due to reliance on additional third-party infrastructure. Essentially, this model is not a consistent solution across different architectures, and can’t be used to control the container layer.

The Complexity of a Hybrid Model

A combination of the above two models, enterprises using a hybrid model for micro-segmentation are attempting to limit some of the downsides of both models alone. To allow them more flexibility than native controls, they usually utilize third-party firewalls for north-south traffic. Inside the data center where you don’t have to worry about multi-cloud support, native controls can be used for east-west traffic.

However, as discussed, both of these solutions, even in tandem, are limited at best. With a hybrid approach, you are also going to add the extra problems of a complex and arduous set up and maintenance strategy. Visibility and control of a hybrid choice is unsustainable in a future-focused IT ecosystem where workloads and applications are spun up, automated, auto-scaled and migrated across multiple environments. Enterprises need one solution that works well, not two that are sub-par individually and limited together.

Understanding the Overlay Model – the Only Solution Built for Future Focused Micro-Segmentation

Rather than a patched-together hybrid solution from imperfect models, Overlay is built to be a more robust and future-proof solution from the ground up. Gartner describes the Overlay model as a solution where a host agent or software is enforced on the workload itself. Agent-to-agent communication is utilized rather than network zoning.

One of the negative sides to third-party firewalls is that they are inherently unscalable. In contrast, agents have no choke points to be constrained by, making them infinitely scalable for your needs.

With Overlay, your business has the best possible visibility across a complex and dynamic environment, with insight and control down to the process layer, including for future-focused architecture like container technology. The only solution that can address infrastructure differences, Overlay is agnostic to any operational or infrastructure environments, which means an enterprise has support for anything from bare metal and cloud to virtual or micro-services, or whatever technology comes next. Without an Overlay model – your business can’t be sure of supporting future use cases and remaining competitive against the opposition.

Not all Overlay Models are Created Equal

It’s clear that Overlay is the strongest technology model, and the only future-focused solution for micro-segmentation. This is true for traditional access-list style micro-segmentation as well as for implementing deeper security capabilities that include support for layer 7 and application-level controls.

Unfortunately, not every vendor provides the best version of Overlay, one that delivers the functionality it is capable of. Utilizing the inherent benefits of an Overlay solution means you can put agents in the right places, setting communication policy that works in a granular way. With the right vendor, you can make intelligent choices about where to place agents, using context and process-level visibility all the way to Layer 7. Your vendor should also be able to provide extra functionality, such as enforcement by account, user, or hash, all within the same agent.

Remember that protecting the infrastructure requires more than micro-segmentation and you will have to deploy additional solutions that will allow you to reduce risk and meet security and compliance requirements.

Micro-segmentation has moved from being an exciting new buzzword in cyber-security to an essential risk reduction strategy for any forward-thinking enterprise. If it’s on your to-do list for 2019, make sure you do it right, and don’t fall victim to the limitations of an agentless model. Guardicore Centra provides an all in one solution for risk reduction, with a powerful Overlay model that supports a deep and flexible approach to workload security in any environment.

Want to learn more about the differences between agent and agentless micro-segmentation? Check out our recent white paper.
