Moving Zero Trust from a Concept to a Reality

Most people understand the reasoning and the reality behind a zero trust model. Historically, a network perimeter was considered sufficient to keep attacks at bay; today, this is no longer the case. Zero trust security means that no one is trusted by default, whether inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. This added layer of security has been shown to be far more effective at preventing breaches.

But how can organizations move from a concept or idea to implementation? Using tools built on 15-20 year old technologies is not adequate.

There is a growing demand for IT resources that can be accessed in a location-agnostic way, and cloud services are being used more widely than ever. These facts, on top of businesses embracing broader use of distributed application architectures, mean that both the traditional firewall and the next-generation firewall are no longer effective for risk reduction.
The other factor to consider is that new malware and attack vectors are being discovered every day, and businesses have no idea where the next threat might come from. It’s more important than ever to use micro-segmentation and micro-perimeters to limit the fallout of a cyber attack.

How does applying the best practices of zero trust combat these issues?

Simply put, implementing the zero trust model creates and enforces small segments of control around sensitive data and applications, increasing your overall data security. Businesses can use zero trust to monitor all network traffic for malicious activity or unauthorized access, limiting the risk of lateral movement through escalated user privileges and improving breach detection and incident response. As Forrester Research, which originally introduced the concept, explains, zero trust allows network policy to be managed from one central console through automation.
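
To make the core idea concrete, here is a minimal sketch of a default-deny access check, in which every request is verified regardless of where it originates. The names, segments, and rules are illustrative assumptions for this example, not any vendor's actual API.

```python
# Minimal sketch of a zero trust access decision: every request is
# verified against explicit policy, regardless of where it originates.
# Names and rules here are illustrative, not any vendor's actual API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    user: str
    source_segment: str   # e.g. "corp-lan", "vpn", "internet"
    resource: str         # e.g. "billing-db"

# Explicit allow rules scoped to small segments of control.
# Anything not listed is denied, whether inside or outside the network.
ALLOW_RULES = {
    ("alice", "billing-db"),
    ("billing-app", "billing-db"),
}

def is_authenticated(req: Request) -> bool:
    # Placeholder: in practice this would be MFA / token validation.
    return True

def authorize(req: Request) -> bool:
    # Default deny: trust is never granted based on network location.
    return is_authenticated(req) and (req.user, req.resource) in ALLOW_RULES

print(authorize(Request("alice", "corp-lan", "billing-db")))    # True
print(authorize(Request("mallory", "corp-lan", "billing-db")))  # False: inside != trusted
```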

The Guardicore principles of zero trust

At Guardicore, we support IT teams in implementing zero trust, guided by our four high-level principles. Together, they create an environment where you are best placed to realize the benefits of zero trust.

  • A least privilege access strategy: Access permissions are assigned only on the basis of a well-defined need: 'never trust, always verify'. This doesn't stop at users alone. We also include applications, and even the data itself, with continuous review of the need for access. Group permissions can help make this seamless, with individual assets or elements removed from each group as necessary.
  • Secure access to all resources: This holds no matter the resource's location or who its user is. Authentication requirements are the same inside and outside the local area network; for example, services available on the LAN are not automatically exposed via VPN.
  • Access control at all levels: Both the network itself and each resource or application need multi-factor authentication.
  • Audit everything: Rather than simply collecting data, we review all collected logs automatically, generating alerts where necessary (a minimal sketch of this kind of automated review follows this list). These bots perform multiple actions; our 'nightwatch bot', for example, phones the right member of staff in an emergency.
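
Here is a hedged sketch of what such automated log review might look like. The severity patterns and the escalation hook are hypothetical stand-ins, not Guardicore's actual bot.

```python
# Illustrative sketch of automated log review: scan collected audit
# logs and escalate on matching events. The patterns and the escalation
# hook are hypothetical, not Guardicore's actual implementation.
import re

CRITICAL = re.compile(r"(auth_failure|privilege_escalation|policy_violation)")

def escalate(event: str):
    # In production this might page or phone the on-call engineer,
    # as the 'nightwatch bot' described above does.
    print(f"ALERT -> on-call: {event}")

def review(log_lines):
    for line in log_lines:
        if CRITICAL.search(line):
            escalate(line)

review([
    "2019-07-22T03:14:00 auth_failure user=svc-backup host=db-01",
    "2019-07-22T03:14:05 routine heartbeat host=db-01",
])
```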

However, knowing these principles and understanding the benefits of zero trust is not the same as being able to implement it securely, with the right amount of flexibility and control.

Many companies fall at the first hurdle, unsure how to gain full visibility of their ecosystem. Without this, it is impossible to define policy clearly, set up the correct alerts so that business can run as usual, or stay on top of costs. If your business does not have the right guidance or skill-sets, the zero trust model becomes a ‘nice to have’ in theory but not something that can be achieved in practice.

It all starts with the map

With a zero trust model that starts with deep visibility, you can automatically identify all resources across all environments, at both the application and network level. At this point, you can work out what you need to enforce, turning to technology only once you know what you are looking to build as a strategy for your business. Other solutions start from their own capabilities and use these to suggest enforcement, which is the opposite of what you need and can leave gaps exactly where you need policy the most.

It’s important to have a classification method in place so that stakeholders can understand what they are looking at on your map. We bring in data from third-party orchestration, using automation to create a highly accessible map that both technical and business teams can easily read. With a context-rich map, you can generate intelligence on malicious activity even at the application layer, and tightly enforce policy without worrying about the impact on business as usual.
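
As one illustration of such classification, the sketch below groups assets by labels pulled from orchestration metadata. Kubernetes-style labels are assumed here purely for the example; the field names are not Guardicore's actual data model.

```python
# Hedged sketch: building a classification map from third-party
# orchestration metadata. Kubernetes-style labels are assumed as the
# source; all field names are illustrative only.
from collections import defaultdict

# Asset records as they might arrive from an orchestration API.
assets = [
    {"name": "web-1", "labels": {"env": "prod", "app": "storefront"}},
    {"name": "web-2", "labels": {"env": "prod", "app": "storefront"}},
    {"name": "db-1",  "labels": {"env": "prod", "app": "billing"}},
    {"name": "ci-1",  "labels": {"env": "dev",  "app": "storefront"}},
]

def build_map(assets):
    """Group assets by (env, app) so the map speaks business language."""
    groups = defaultdict(list)
    for a in assets:
        key = (a["labels"]["env"], a["labels"]["app"])
        groups[key].append(a["name"])
    return dict(groups)

for (env, app), members in build_map(assets).items():
    print(f"{env}/{app}: {members}")
```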

With these best practices in mind, and a map as your foundation, your business can achieve the goals of zero trust: enforcing control around sensitive data and apps, finding malicious activity in network traffic, and centrally managing network policy with automation.

Want to better understand how to implement segmentation for securing modern data centers to work towards a zero trust model?

Download our white paper

Guardicore Selected as Finalist in Black Unicorn Awards for 2019

Guardicore Named One of the Top 30 Finalists for Cybersecurity Companies

Boston, Mass. and Tel Aviv, Israel – July 22, 2019 – Guardicore, a leader in internal data center and cloud security, today announced that it has been named a finalist in the Black Unicorn Awards for 2019, sponsored by Cyber Defense Magazine. Founded in Tel Aviv in 2013, Guardicore is a global company with more than 150 employees, a worldwide network of more than 50 partners, and $110 million in venture funding from Battery Ventures, 83North, TPG, Qumra Capital, Deutsche Telekom Capital Partners, and Partech.

Guardicore recognizes that traditional perimeter defenses are ineffective at reducing the attack surface, maintaining compliance, and deploying granular policies quickly and at scale in today’s dynamic, heterogeneous hybrid environments. Guardicore protects modern enterprise networks by constantly engaging with customers to understand their challenges in hybrid data centers and by providing micro-segmentation to define security policies.

As one of thirty finalists, Guardicore is competing against many of the industry’s leading providers of cybersecurity products and services for this prestigious award. The term “Black Unicorn” signifies a cybersecurity company with the potential to reach a $1 billion market value as determined by private or public investment, and these awards showcase companies with that kind of potential in the cybersecurity marketplace. Ten winners will be announced on August 7, 2019 by Cyber Defense Magazine.

“It’s exciting to see Guardicore making it into the finalist round among other cybersecurity industry leaders in our first annual Black Unicorn awards,” said judges Robert Herjavec of Herjavec Group, David DeWalt of Night Dragon, and Gary Miliefsky of Cyber Defense Media Group. Learn more about the judges at: Black Unicorn Awards 2019.

“We are honored to be recognized as a finalist for this prestigious award as Guardicore continues to expand research areas to identify and prevent threats before they impact the enterprise organizations that put their trust in our hands,” said Pavel Gurvich, CEO and co-founder of Guardicore. “We will continue to provide a simple and flexible solution to meet the current and future needs of the modern enterprise,” added Gurvich.

About Guardicore

Guardicore is a data center and cloud security company that protects your organization’s core assets using flexible, quickly deployed, and easy to understand micro-segmentation controls. Our solutions provide a simpler, faster way to guarantee persistent and consistent security — for any application, in any IT environment. For more information, visit www.guardicore.com.

Guardicore’s Insights from Security Field Day 2019

We had such a great time speaking at Security Field Day recently, presenting the changes to our product since our last visit, and hearing from delegates about what issues concern them in micro-segmentation technology.

The last time we were at Field Day was four years ago, and our product was in an entirely different place. The technology and vision have evolved since then. Of course, we’re still learning as we innovate, with the help of our customers who continually come up with new use cases and challenges to meet.

For those who missed our talk, here’s a look at some of what we discussed, and a brief recap of a few interesting and challenging questions that came up on the day.

Simplicity and Visibility First

At Guardicore, we know that ease of use is the foundation of widespread adoption of any new technology. When we get into discussions with an enterprise, customer, or team, we see clearly that each has its own issues and road map to address. Because there is no such thing as the ultimate or only use case for micro-segmentation, we start with the customer in mind; our product can support any flavor, any need. To name just a few examples, the most popular use cases include separating environments such as Dev/Prod, ring-fencing critical assets, micro-segmenting digital crown jewels, meeting compliance and least-privilege requirements, and more general IT hygiene such as locking down vulnerable ports and protocols.

To make these use cases a reality, organizations need deep visibility to understand what’s going on in the data center through a human lens. It’s important to have flexible labeling so that you can see your data center described in the same language you use to talk about it. We also enhance this by letting users see a particular view based on their need or role within the company: a compliance officer has a different use for the map than the CTO or a developer in DevSecOps, for example. In addition, organizations need to be able to enforce both blacklist and whitelist policy models for intuitive threat prevention and response. Our customers benefit from our cutting-edge visibility tool, Reveal, which is completely customizable and checks all of these boxes, as well as from our flexible policy models that include both whitelisting and blacklisting.
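
To illustrate the idea of role-based views, here is a hedged sketch that filters the same flow data differently per viewer. The roles, fields, and filters are assumptions for the example, not the actual Reveal data model.

```python
# Illustrative sketch of role-based map views: the same underlying
# flow data, filtered per viewer. Roles and fields are assumptions
# for this example, not the actual Reveal data model.
flows = [
    {"src": "web-1", "dst": "db-1", "port": 5432, "compliant": True},
    {"src": "ci-1",  "dst": "db-1", "port": 5432, "compliant": False},
]

VIEWS = {
    # A compliance officer mainly needs policy-violating traffic.
    "compliance": lambda f: not f["compliant"],
    # A CTO-level view keeps everything for the big picture.
    "cto": lambda f: True,
}

def view(role: str):
    return [f for f in flows if VIEWS[role](f)]

print(view("compliance"))  # just the dev-to-prod violation
```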

To learn more about how our mapping and visibility work, and how they help enforce policy through our uniquely flexible policy model while showing quick value, watch our full presentation below.

Addressing Questions and Challenges

With only one hour to present our product, there were a lot of questions that we couldn’t get to. Maybe next time! Here are three of the topics we wanted to address further.

Q. How does being agent-based affect your solution?

One of the questions raised during the session concerned the fact that Guardicore micro-segmentation is an agent-based solution. The benefits of agents are clear, but people often want to know what impact the agent has on the workload.

The first thing we always tell customers who ask this question is that our solution is tried and tested. It is already deployed in some of the world’s biggest data centers, such as Santander and Openlink, and works with negligible impact on performance. Our agent footprint is very small: less than 0.1% CPU, taking up 185MB on Linux and 800MB on Windows. The agent’s resource usage is also configurable, allowing you to tailor it to your needs. At the same time, we support the widest range of operating systems compared to other vendors.

If the agent is still not suitable, you can use our L4 collectors, which sit at the hypervisor or switch level and give you full visibility, together with our virtual appliance for enforcement, as we touched upon during the talk. As experts in segmentation, we can talk you through your cybersecurity use cases and discuss which approach works best, and where.

Q. Which breach detection capabilities are included?

Complementary controls are an important element of our solution because they contribute to ease of use and simplicity. One tool for multiple use cases offers a powerful competitive edge. Here are three of the tools we include:

  • Reputation Analysis: We can identify granular threats, including suspicious domain names, IP addresses, and even file hashes in traffic flows.
  • Dynamic Deception: The latest in incident response, this technique tricks attackers, diverting them to a honeypot environment where Guardicore Labs can learn from their behavior.
  • File Integrity Monitoring: A prerequisite for many compliance regulations, this change-detection mechanism immediately alerts on any unauthorized changes to files (a minimal sketch follows this list).
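
As one concrete illustration of the last item, here is a minimal file integrity monitoring sketch: baseline file hashes, then alert on drift. It is a simplified assumption of how such a mechanism works; production FIM also tracks permissions, ownership, and deletions.

```python
# Minimal sketch of file integrity monitoring: record baseline hashes,
# then alert on any drift. A temporary file stands in for a watched
# config file so the example is self-contained.
import hashlib, os, pathlib, tempfile

def sha256(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def snapshot(paths):
    return {p: sha256(p) for p in paths}

def drift(baseline, current):
    return [p for p in current if baseline.get(p) != current[p]]

with tempfile.TemporaryDirectory() as d:
    conf = os.path.join(d, "app.conf")
    pathlib.Path(conf).write_text("port=443\n")
    baseline = snapshot([conf])
    pathlib.Path(conf).write_text("port=4444\n")   # unauthorized change
    for p in drift(baseline, snapshot([conf])):
        print(f"ALERT: unauthorized change detected in {p}")
```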

Q. How do you respond to a known threat?

Flexible policy models allow us to respond quickly and intuitively when it comes to breach detection and incident response. Some vendors offer a whitelist-only model, which impedes their ability to take immediate action and is not enough in a fast-paced hybrid data center. In contrast, we can immediately block a known threat or an undesired port by adding it to the blacklist. One example might be blocking Telnet across the whole environment, or blocking FTP at the process level. This helps us show real value from day one. Composite models also allow complex rules, like the real-world example we used in the presentation: SSH is only allowed into the Production environment if it comes from jumpboxes. With Guardicore this takes two simple rules (sketched below), while a whitelist-only model would need thousands.
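
Here is a hedged re-creation of that example as two ordered rules in a composite model. The rule syntax is illustrative only, not Guardicore's actual policy language.

```python
# Hedged re-creation of the example above: "SSH into Production only
# from jumpboxes" as two ordered rules in a composite model. The rule
# syntax is illustrative, not Guardicore's actual policy language.
RULES = [
    # 1. Allow SSH from jumpboxes into production.
    {"action": "allow", "src": "jumpbox", "dst": "production", "port": 22},
    # 2. Block all other SSH into production.
    {"action": "block", "src": "*",       "dst": "production", "port": 22},
]

def decide(src_label, dst_label, port):
    for r in RULES:  # first match wins
        if (r["src"] in ("*", src_label)
                and r["dst"] == dst_label and r["port"] == port):
            return r["action"]
    return "allow"  # traffic outside these rules is governed elsewhere

print(decide("jumpbox", "production", 22))  # allow
print(decide("laptop",  "production", 22))  # block
```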

Security Field Day 2019 staff

Until Next Time!

We loved presenting at Field Day, and want to thank all the delegates for their time and their interesting questions! If you want to talk more about any of the topics raised in the presentation, reach out to me via LinkedIn.

In the meantime, learn more about securing a modern hybrid data center that spans legacy infrastructure as well as containers and clouds.

Download our white paper

Rethinking Segmentation for Better Security

Cloud services and their related security challenges will continue to grow

One of the biggest shifts in the enterprise computing industry in the past decade is the migration to the cloud. As more and more organizations discover the benefits of moving their data centers to private and public cloud environments, this trend is expected to continue dominating the enterprise landscape. Gartner projects cloud services will grow exponentially from 2019 through 2022, with Infrastructure-as-a-Service (IaaS) being the fastest growing segment of the market, already showing an increase of 27.5% in 2019 compared to 2018.

So what’s the big challenge?

The added agility of cloud infrastructure comes with a trade-off in the form of increased cyber security complexity. Traditional security tools were designed for on-premises servers and endpoints, focusing on perimeter defense to block attacks at the entry point. But the dynamic nature of hybrid cloud services has made perimeter defense insufficient. When the perimeter itself is constantly shifting, as data and workloads move back and forth among public clouds, private clouds, and on-premises data centers, the attack surface grows much larger, and network segmentation becomes necessary to control lateral movement within the perimeter.

From the early days of the cloud, segmentation was a popular concept. Traditionally, businesses looked to divide the network into segments and enforce some form of access control between them. In practice, this meant placing the relevant servers in a dedicated VLAN and routing traffic through a firewall. Finer segmentation meant smaller segments, which reduced the attack surface and limited the impact of any potential breach.

Then the rules of the game changed! Moving from static networks to dynamic, hybrid cloud-based data centers

Simple segmentation by firewalls worked in the past, when networks were comprised of relatively large, static segments. However, the “rules of the game” have changed significantly in recent years. Dynamic data centers and hybrid cloud adoption have created problems that cannot be solved with legacy firewalls, and yet achieving segmentation is now more vital than ever before. The cadence of change to infrastructure and application services is very high, accentuating the need for granular segments, along with an understanding of their dependencies and of the implications for their security policy.

Take, for example, the 2017 Equifax breach. The US House of Representatives report on the incident pointed directly to the lack of internal segmentation as one of the key gaps that allowed the breach’s impact to be so large, affecting 143 million consumers.

Regulation is another driver of segmentation. One of Guardicore’s customers, a global investment bank, needed to comply with a new SWIFT regulation, which requires all SWIFT servers to be placed in a separate segment, with every connection allowed in and out of that segment whitelisted. Using traditional methods, it took the bank 10 months and a costly, labor-intensive process to complete this change, spurring it to find smarter segmentation methods moving forward.

The examples above demonstrate that although segmentation is a known and well-understood security measure, in practice organizations struggle to implement it properly and cost-effectively.

Adapt easily to these changes and start micro-segmentation

To deal with these challenges, micro-segmentation was born. Micro-segmentation takes enterprise security to a new level and is a step further than existing network segmentation and application segmentation methods, adding visibility and policy granularity. It typically works by establishing security policies around individual or groups of applications, regardless of where they reside in the hybrid data center. These policies dictate which applications can and cannot communicate with each other.

Micro-segmentation includes the ability to fully visualize the environment and define security policies with Layer 7 process-level precision, making it highly effective at preventing lateral movement in a hybrid cloud environment.
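
To illustrate what process-level precision adds, here is a minimal sketch contrasting a Layer 4 match with a Layer 7, process-aware match. The flow fields and process names are illustrative assumptions, not any product's actual schema.

```python
# Illustrative contrast between Layer 4 and Layer 7 matching. A Layer 4
# rule sees only addresses and ports; a Layer 7, process-level rule can
# also require which program is talking. Field names are assumptions.
flow = {
    "src": "10.0.1.7", "dst": "10.0.2.9",
    "dst_port": 5432, "process": "postgres",
}

def l4_allows(f):
    # Port 5432 is open: any process could use it.
    return f["dst_port"] == 5432

def l7_allows(f):
    # Same port, but only when the expected process is on the other end.
    return f["dst_port"] == 5432 and f["process"] == "postgres"

print(l4_allows(flow), l7_allows(flow))          # True True
print(l7_allows({**flow, "process": "netcat"}))  # False: blocked at L7
```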

Take the first step in preparing your enterprise for better data security

Want to learn more? Listen to Guardicore’s CTO and Co-founder, Ariel Zeitlin, as he walks through the challenges and the solutions for better securing your data in his latest interview with the CIO Talk Network. In this podcast, Ariel discusses new approaches to implementing segmentation, the key aspects to consider when comparing vendors and technologies, and what lies ahead for security leaders in this space.


Want to learn more about how to first think through, then properly implement micro-segmentation? Read our white paper on operationalizing your segmentation project.


NSX-T vs. NSX-V – Key Differences and Pitfalls to Avoid

While working with many customers on segmentation projects, we often get questions about alternative products to Guardicore. This is expected, and, in fact, welcome, as we will take on any head-to-head comparison of Guardicore Centra to other products for micro-segmentation.

Guardicore vs. NSX-T vs. NSX-V

One of the common comparisons we get is to VMware NSX. Specifically, we get a lot of questions from customers about the difference between VMware’s two offerings in this space, NSX-T vs. NSX-V. Although many security and virtualization experts have written about the differences between the two, including speculation on whether they will eventually merge into a single offering, we think we offer a unique perspective on some of the differences and on what to pay attention to in order to ensure segmentation projects are successful. Also, regardless of which product variant an organization is considering, there are several potential pitfalls with NSX that are important to understand before proceeding with deployment.

NSX-T vs. NSX-V: Key Differences

NSX-V (NSX for “vSphere”) was the first incarnation of NSX and has been around for several years now. As the name suggests, NSX-V is designed for on-premises vSphere deployments only and is architected so that a single NSX-V manager is tied to a single VMware vCenter Server instance. It is only applicable to VMware virtual machines, which leaves a coverage gap for organizations that use a hybrid infrastructure model. In fact, the 2019 RightScale State of the Cloud Report shows that 94% of organizations use the cloud, with 28% of those prioritizing hybrid cloud, and VMware vSphere at 50% of private cloud adoption, flat from the previous year. So, given the large number of organizations embracing the cloud, interest in NSX-V is waning.

NSX-T (NSX “Transformers”) was designed to address the use cases that NSX-V could not cover, such as multiple hypervisors, cloud, containers, and bare-metal servers. It is decoupled from VMware’s proprietary hypervisor platform and incorporates agents to perform micro-segmentation on non-VMware platforms. As a result, NSX-T is a much more viable offering than NSX-V now that hybrid cloud and cloud-only deployment models are growing in popularity. However, NSX-T remains limited by feature gaps when compared with both NSX-V and other micro-segmentation solutions, including Guardicore Centra.

Key Pitfalls to Avoid with NSX

While the evolution to NSX-T was a step in the right direction for VMware strategically, a number of shortcomings continue to limit NSX’s value and effectiveness, particularly when compared to specialized micro-segmentation solutions like Guardicore Centra.

The following are some of the key pitfalls to avoid when considering NSX.

  • Solution Complexity
    VMware NSX requires multiple tools to cover the entire hybrid data center environment: NSX-V for ESXi hosts, NSX-T for bare-metal servers, and NSX-Cloud for VMware cloud hosting. In addition, it is a best practice in any micro-segmentation project to start with visibility, mapping flows and classifying the assets where policy will be applied. This requires a separate product, vRealize Network Insight (vRNI). So a true hybrid infrastructure requires multiple products from VMware, plus the need to synchronize policy across them, which leads to more complexity and significantly more time to achieve results. Moreover, vRNI is not well integrated into NSX, which makes moving from visibility to policy a long and complex process, requiring manual downloading and uploading of files to share information between tools. But don’t just take our word for it. A recent Gartner report, Solution Comparison for Microsegmentation Products, April 2019, stated that VMware NSX “comes with massive complexity and many moving parts”. And for organizations that have implemented the VMware SDN, NSX adds further complexity: the network virtualization service alone requires an architecture consisting of “logical switches, logical routers, NSX Edge Nodes, NSX Edge Clusters, Transport Nodes, Transport Zones, the logical firewall and logical load balancers,” according to Gartner. Not to mention all the manual configuration steps required to implement it.
  • Overspending on Licensing
    For many organizations, segmentation requirements develop in stages. They may not even consciously be beginning a micro-segmentation project. It could start as a focused need to protect a critical set of “digital crown jewels” or subsets of the infrastructure that are subject to regulatory requirements. VMware’s licensing model for NSX does not align well with practical approaches to segmentation like these. When deploying NSX, an organization must license its entire infrastructure. If a segmentation project only applies to 20 percent of the total infrastructure, NSX licenses must be purchased for the remaining 80 percent regardless of whether they will ever be used.
  • Management Console Sprawl
    As mentioned above, detailed infrastructure visualization is a critical building block for effective micro-segmentation: you can’t protect what you can’t see. While leading micro-segmentation products integrate visualization and micro-segmentation into a single interface, NSX does not include native visualization capabilities. Instead, NSX requires the use of a separately licensed product, vRealize Network Insight, for infrastructure visibility. This adds both cost and complexity. It also makes it much more difficult and time-consuming to translate insights from visualization into corresponding micro-segmentation policies. The impact is significant, as it puts additional strain on already over-taxed IT resources and results in less effective and less complete segmentation policies.
  • Limited Visibility
    Even when NSX customers choose to deploy vRNI as part of an NSX deployment, the real-time visibility it provides is limited to Layer 4 granularity. This does not provide the level of visibility needed to set fine-grained, application-aware policies that protect against today’s data center and cloud infrastructure threats. As environments and security requirements become more sophisticated, it is often necessary to combine Layer 4 and Layer 7 views to gain a complete picture of how applications and workloads behave and to develop strategies for protecting them. Also, while real-time visibility is critical, historical visibility plays an important role in segmentation as well. IT environments, and the threat landscape, are constantly changing, and the ability to review historical activity helps security teams continuously improve segmentation policies over time. However, NSX and vRNI lack any historical reporting or views.
  • Enforcement Dependencies and Limitations
    As with visualization, it is important to be able to implement policy enforcement at both the network and process levels. Native NSX policy enforcement can only be performed at the network level. It is possible to achieve limited application-level policy control by using NSX in conjunction with a third VMware product, VMware Distributed Firewall. However, even using VMware Distributed Firewall and NSX together has significant limitations. For example, VMware Distributed Firewall can only be used with on-premises vSphere deployments or with VMware’s proprietary VMware Cloud on AWS deployment model. This makes it inapplicable to modern hybrid cloud infrastructure.
  • Insufficient Protection of Legacy Assets
    While most organizations strive to deploy key applications on modern operating systems, legacy assets remain a fact of life in many environments. While the introduction of agents with NSX-T broadens platform coverage beyond the VMware stack, operating system compatibility is highly constrained. NSX-T agent support is limited to Windows Server 2012 or newer and the latest Linux distributions. Many organizations continue to run high-value applications on older versions of Windows and Linux. The same is true for legacy operating systems like Solaris, AIX, and HP-UX. In many ways, these legacy systems are leading candidates for protection with micro-segmentation, as they are less likely than more modern systems to have current security updates available and applied. But they cannot be protected with NSX.
  • Inability to Detect Breaches
    While the intent of micro-segmentation policies is to proactively block attacks and lateral movement attempts, it is important to complement policy controls with breach detection capabilities. Doing so acts as a safety net, allowing security teams to detect and respond to any malicious activity that micro-segmentation policies do not block. Detecting infrastructure access from sources with questionable reputations and monitoring for network scans and unexpected file changes can both uncover in-progress security incidents and help inform ongoing micro-segmentation policy improvements; a minimal scan-detection sketch follows this list. NSX lacks any integrated breach detection capabilities.
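
For a sense of how scan detection can work, here is a minimal sketch that flags a source touching many distinct destination/port pairs. The threshold is illustrative; real detectors also weigh time windows and history.

```python
# Minimal sketch of network scan detection: flag any source that
# contacts many distinct (destination, port) pairs. The threshold is
# illustrative; production detectors also consider time windows.
from collections import defaultdict

SCAN_THRESHOLD = 20  # distinct targets from one source

def find_scanners(flow_log):
    targets_by_src = defaultdict(set)
    for src, dst, port in flow_log:
        targets_by_src[src].add((dst, port))
    return [s for s, targets in targets_by_src.items()
            if len(targets) >= SCAN_THRESHOLD]

# e.g. one host sweeping ports 1-100 on a database server:
log = [("10.0.3.3", "10.0.2.9", p) for p in range(1, 101)]
print(find_scanners(log))  # ['10.0.3.3']
```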

With the introduction of NSX-T, VMware took an important step away from the proprietary micro-segmentation model it originally created with NSX-V. But even NSX-T requires customers to lock themselves into a sprawling collection of VMware tools. And some key elements, such as VMware Distributed Firewall, remain highly aligned with VMware’s traditional on-premises model.

In contrast, Guardicore Centra is a software-defined micro-segmentation solution that was designed from day one to be platform-agnostic. This makes it much more effective than NSX at applying micro-segmentation to any combination of VMware and non-VMware infrastructure.

Centra also avoids the key pitfalls that limit the usefulness of NSX.

For example, Centra offers:

  • Flexible licensing that can be applied to a subset of the overall infrastructure if desired.
  • Visualization capabilities that are fully integrated with the micro-segmentation policy creation process.
  • Visibility and integrated enforcement at both Layer 4 and Layer 7 for more granular micro-segmentation control.
  • Extensive support for legacy operating systems, including older Windows and Linux versions, Solaris, AIX, and HP-UX.
  • Fully integrated breach detection and response capabilities, including reputation-based detection, dynamic deception, file integrity monitoring, and network scan detection.

Don’t Let NSX Limitations Undermine Your Micro-Segmentation Strategy

Before considering NSX, see first-hand how Guardicore Centra can help you achieve a simpler and more effective micro-segmentation approach.

Interested in more information on how Guardicore Centra is better for your needs than any NSX amalgam? Read our Guardicore vs. VMware NSX Comparison Guide.
