NSX-T vs. NSX-V – Key Differences and Pitfalls to Avoid

While working with many customers on segmentation projects, we often get questions about alternative products to Guardicore. This is expected, and, in fact, welcome, as we will take on any head-to-head comparison of Guardicore Centra to other products for micro-segmentation.

Guardicore vs. NSX-T vs. NSX-V

One of the common comparisons we get is to VMware NSX. Specifically, we get a lot of questions from customers about the difference between VMware's two offerings in this space, NSX-T and NSX-V. Although many security and virtualization experts have written about the differences between the two, including speculation on whether or not they will eventually merge into a single offering, we think we can offer a unique perspective on some of the differences and on what to pay attention to in order to ensure segmentation projects are successful. And regardless of which product variant an organization is considering, there are several potential pitfalls with NSX that are important to understand before proceeding with deployment.

NSX-T vs. NSX-V: Key Differences

NSX-V (NSX for “vSphere”) was the first incarnation of NSX and has been around for several years now. As the name suggests, NSX-V is designed for on-premises vSphere deployments only and is architected so that a single NSX-V Manager is tied to a single VMware vCenter Server instance. It is only applicable to VMware virtual machines, which leaves a coverage gap for organizations that use a hybrid infrastructure model. In fact, the 2019 RightScale State of the Cloud Report shows that 94% of organizations use the cloud, with 28% of those prioritizing hybrid cloud, and with VMware vSphere at 50% of private cloud adoption, flat from the previous year. So, given the large number of organizations embracing the cloud, interest in NSX-V is waning.

NSX-T (NSX “Transformers”) was designed to address the use cases that NSX-V could not cover, such as multiple hypervisors, cloud, containers, and bare-metal servers. It is decoupled from VMware's proprietary hypervisor platform and incorporates agents to perform micro-segmentation on non-VMware platforms. As a result, NSX-T is a much more viable offering than NSX-V now that hybrid cloud and cloud-only deployment models are growing in popularity. However, NSX-T remains limited by feature gaps when compared to both NSX-V and other micro-segmentation solutions, including Guardicore Centra.

Key Pitfalls to Avoid with NSX

While the evolution to NSX-T was a step in the right direction for VMware strategically, a number of limitations continue to constrain NSX's value and effectiveness, particularly when compared to purpose-built micro-segmentation solutions like Guardicore Centra.

The following are some of the key pitfalls to avoid when considering NSX.

  • Solution Complexity
    VMware NSX requires multiple tools to cover the entire hybrid data center environment: NSX-V for ESXi hosts, NSX-T for bare-metal servers and non-vSphere platforms, and NSX Cloud for workloads running in public clouds. In addition, it is a best practice in any micro-segmentation project to start with visibility, mapping flows and classifying the assets where policy will be applied. This requires yet another product, vRealize Network Insight (vRNI). So a true hybrid infrastructure requires multiple products from VMware, along with the need to synchronize policy across them. This leads to more complexity and significantly more time to achieve results. In addition, vRNI is not well integrated with NSX, which makes moving from visibility to policy a long and complex process; it requires manually downloading and uploading files to share information between the tools. But don't just take our word for it. A recent Gartner report, Solution Comparison for Microsegmentation Products (April 2019), stated that VMware NSX “comes with massive complexity and many moving parts”. And for organizations that have implemented the VMware SDN, there is additional complexity. For example, the network virtualization service alone requires an architecture that consists of “logical switches, logical routers, NSX Edge Nodes, NSX Edge Clusters, Transport Nodes, Transport Zones, the logical firewall and logical load balancers,” according to Gartner. That is before all the manual configuration steps required to implement it.
  • Overspending on Licensing
    For many organizations, segmentation requirements develop in stages. They may not even think of the first stage as a micro-segmentation project. It could start as a focused need to protect a critical set of “digital crown jewels” or the subsets of the infrastructure that are subject to regulatory requirements. VMware's licensing model for NSX does not align well with practical, staged approaches to segmentation like these. When deploying NSX, an organization must license its entire infrastructure. If a segmentation project only applies to 20 percent of the total infrastructure, NSX licenses must still be purchased for the remaining 80 percent, regardless of whether they will ever be used.
  • Management Console Sprawl
    As mentioned above, detailed infrastructure visualization is a critical building block for effective micro-segmentation. You can't protect what you can't see. While purpose-built micro-segmentation products integrate visualization and policy management into a single interface, NSX does not include native visualization capabilities. Instead, NSX requires the use of a separately licensed product, vRealize Network Insight, for infrastructure visibility. This adds both cost and complexity. It also makes it much more difficult and time-consuming to translate insights from visualization into corresponding micro-segmentation policies. The impact is significant: it puts additional strain on already over-taxed IT resources and results in less effective and less complete segmentation policies.
  • Limited Visibility
    Even when NSX customers choose to deploy vRNI as part of an NSX deployment, the real-time visibility it provides is limited to Layer 4 granularity. This does not provide the level of visibility needed to set the fine-grained, application-aware policies required to protect against today's data center and cloud infrastructure threats. As environments and security requirements become more sophisticated, it is often necessary to combine Layer 4 and Layer 7 views to gain a complete picture of how applications and workloads behave and to develop strategies for protecting them. Also, while real-time visibility is critical, historical visibility plays an important role in segmentation as well. IT environments and the threat landscape are constantly changing, and the ability to review historical activity helps security teams continuously improve segmentation policies over time. However, NSX and vRNI lack any historical reporting or views.
  • Enforcement Dependencies and Limitations
    As with visualization, it is important to be able to implement policy enforcement at both the network and process levels. Native NSX policy enforcement can only be performed at the network level. It is possible to achieve limited application-level policy control by using NSX in conjunction with a third VMware product, VMware Distributed Firewall. However, even using VMware Distributed Firewall and NSX together has significant limitations. For example, VMware Distributed Firewall can only be used with on-premises vSphere deployments or with VMware's proprietary VMware Cloud on AWS deployment model. This makes it a poor fit for modern hybrid cloud infrastructure.
  • Insufficient Protection of Legacy Assets
    While most organizations strive to deploy key applications on modern operating systems, legacy assets remain a fact of life in many environments. While the introduction of agents with NSX-T broadens platform coverage beyond the VMware stack, operating system compatibility is highly constrained. NSX-T agent support is limited to Windows Server 2012 or newer and the latest Linux distributions. Many organizations continue to run high-value applications on older versions of Windows and Linux. The same is true for legacy operating systems like Solaris, AIX, and HP-UX. In many ways, these legacy systems are leading candidates for protection with micro-segmentation, as they are less likely than more modern systems to have current security updates available and applied. But they cannot be protected with NSX.
  • Inability to Detect Breaches
    While the intent of micro-segmentation policies is to proactively block attacks and lateral movement attempts, it is important to complement policy controls with breach detection capabilities. Doing so acts as a safety net, allowing security teams to detect and respond to any malicious activities that micro-segmentation policies do not block. Detecting infrastructure access from sources with questionable reputation and monitoring for network scans and unexpected file changes can both uncover in-progress security incidents and help inform ongoing micro-segmentation policy improvements. NSX lacks any integrated breach detection capabilities.

With the introduction of NSX-T, VMware took an important step away from the proprietary micro-segmentation model it originally created with NSX-V. But even NSX-T requires customers to lock themselves into a sprawling collection of VMware tools. And some key elements, such as VMware Distributed Firewall, remain highly aligned with VMware’s traditional on-premises model.

In contrast, Guardicore Centra is a software-defined micro-segmentation solution that was designed from day one to be platform-agnostic. This makes it much more effective than NSX at applying micro-segmentation to any combination of VMware and non-VMware infrastructure.

Centra also avoids the key pitfalls that limit the usefulness of NSX.

For example, Centra offers:

  • Flexible licensing that can be applied to a subset of the overall infrastructure if desired.
  • Visualization capabilities that are fully integrated with the micro-segmentation policy creation process.
  • Visibility and integrated enforcement at both Layer 4 and Layer 7 for more granular micro-segmentation control.
  • Extensive support for legacy operating systems, including older Windows and Linux versions, Solaris, AIX, and HP-UX.
  • Fully integrated breach detection and response capabilities, including reputation-based detection, dynamic deception, file integrity monitoring, and network scan detection.

Don’t Let NSX Limitations Undermine Your Micro-Segmentation Strategy

Before considering NSX, see first-hand how Guardicore Centra can help you achieve a simpler and more effective micro-segmentation approach.

Interested in more information on how Guardicore Centra is better for your needs than any NSX amalgam? Read our Guardicore vs. VMware NSX Comparison Guide.


What is AWS re:Inforce?

AWS re:Inforce is a spin-off of AWS re:Invent. Why the need for a spin-off? Legend has it that the security tracks during re:Invent got so crowded that AWS decided the security track should have a conference of its own.

AWS re:Inforce is a different kind of conference: a highly technical event of curated content meant for security professionals. This is a conference where knowledge runs deep and conversations go deeper, with few marketing overtures and high-level musings. Even the vendor-sponsored presentations were very technical, with interesting takeaways. If your organization is invested in AWS at any level, it's a great conference to attend. You get two condensed days of dedicated security content for the different services, architectures, and platforms offered by AWS. The content is available for multiple levels of expertise. You also get access to top-tier AWS experts, whom you can consult on your different architecture dilemmas. Because the conference turned out to be very popular, one tip I'd give next year's attendees is to book your desired sessions as far ahead of time as you can (at least a few weeks, if possible). In conversations with colleagues, I learned that many couldn't get into all the sessions they had wanted. So plan well for next year.

Here are some of the takeaways from the conference that I’d like to share with you:

  1. Humans don’t scale – This is not a revolutionary new thought; it's common knowledge in the DevOps world. However, the same understanding is becoming prevalent in the security industry as well. Organizations are starting to understand that as they move to the cloud, managing security for multiple dynamic environments just doesn't scale, from both the configuration and incident response perspectives. Organizations are moving away from complaining about the security personnel shortage and are instead looking to converge their multiple security platforms into two or three systems that provide wide coverage of use cases and allow a high level of automation and compatibility with common DevOps practices.
  2. Security platforms converge – Organizations are transforming their IT operations to be efficient and automated. Security has to follow suit and be an enabler instead of a roadblock. The end goal from a CISO's perspective is to achieve governance of the whole network, not just the cloud deployments or just the on-prem ones. Vendors can no longer have separate solutions for on-prem and cloud; a single unified solution is the only viable, sustainable option.
  3. Migration is hard – Migrating your workloads to the cloud is hard; migrating your security policy is even harder. Organizations moving all or some of their workloads to AWS find it very hard to maintain the same security posture. Running a successful migration project without compromising on security requires rethinking controls that simply do not exist in the cloud. The existing security tools these organizations are using are not suitable or sufficient for enforcing the same security posture in the cloud.
  4. Hit F5 on your threat model – One of the main takeaways for security practitioners on AWS is to take a fresh look at what actually needs to be secured. Make sure that as new cloud constructs and services are adopted by the organization, you actually have the right tools and policies in place to secure them. For example, AWS Control Tower (announced as generally available at the time of the conference) helps you govern your AWS environment and account policies. When looking at hybrid or cloud-only topologies that require a complex network model, you realize that you need a hybrid solution to provide an overlay policy for both your cloud and on-prem assets.
  5. API is king – As our architectures and networks become more complex, the ability of a human to monitor or maintain a network by hand becomes unrealistic. A great example is the SOAR (security orchestration, automation and response) space. Organizations are moving away from shiny SOCs (security operations centers) with big TVs and hordes of operators. Human operators are not an effective solution over time, especially at scale. The move to automated playbooks solves both the staffing issue and the variable quality of incident handling: each incident is handled according to a predefined playbook for that scenario, with no need to reinvent the wheel. Sometimes it's smart to let automation be our friend and make our lives easier.

As CISOs need to be able to secure their entire network, and not just the cloud elements, the same concepts should apply more widely to network security. These have been the cornerstones of building Guardicore Centra, a micro-segmentation solution that works across all environments and can complement and secure your AWS strategy. Modern infrastructures are dynamic and can change thousands of times over the span of a day. Security policies should be just as dynamic, applied just as fast, and able to keep the same cadence. Guardicore enables security practitioners to integrate with APIs and move at the speed of the organization. Tools that require your security and network engineers to define security policy only through a UI, with no way to script and automate policy creation, will not make the transition to the cloud.
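
To make that concrete, here is a minimal sketch of what API-driven policy creation can look like. The endpoint URL, rule fields, and token handling are purely illustrative assumptions rather than any specific vendor's API; the point is that a rule expressed as data can be scripted, reviewed, versioned, and pushed automatically instead of being clicked together in a UI.

    import requests

    # Hypothetical policy endpoint and token; substitute your platform's real API.
    API = "https://segmentation.example.com/api/v1/policies"
    TOKEN = "REPLACE_WITH_API_TOKEN"

    # A segmentation rule expressed as data: allow the web tier to reach the DB tier on 5432.
    rule = {
        "name": "allow-web-to-db",
        "source": {"label": "app=webshop,tier=web"},
        "destination": {"label": "app=webshop,tier=db"},
        "port": 5432,
        "protocol": "TCP",
        "action": "allow",
    }

    resp = requests.post(API, json=rule,
                         headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
    resp.raise_for_status()
    print("Policy created:", resp.json())

A script like this can run from a CI/CD pipeline, so segmentation policy changes ship alongside the application changes that require them.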

We believe that security shouldn't be an obstacle or a cause for delay, so a single, unified solution is a must-have. To be beneficial and not slow you down, it obviously needs to work in a hybrid and multi-cloud reality, without interfering with AWS best practices.

Want to learn more about hybrid-cloud security? Watch this video about micro-segmentation and breach detection in an increasingly complex environment.

 

Interested in cloud security for hybrid environments? Get our white paper about protecting cloud workloads with shared security models.


Are You on Top of the Latest Cloud Security Trends?

As enterprises embrace public and private cloud adoption, most find themselves working in a hybrid environment. Whether a hybrid architecture is a step towards becoming a fully cloud-enabled business, or an end-goal choice that allows you more freedom and flexibility over your business, you need the ability to protect your critical applications and assets across multiple environments while reducing your overall attack surface.

Understanding the Effect of Future Cloud Security Regulations

Achieving compliance can feel like an uphill struggle, with regular updates to existing regulations as well as new regulations being written to handle the latest issues that enterprises face. While compliance doesn't guarantee security, it's tough to be secure without compliance as a minimum foundation. The EU's GDPR, for example, was created in response to the large number of data breaches that businesses are facing, protecting PII (personally identifiable information) from attackers who would use it for identity theft, crime, and fraud. Another example is the new California privacy law (the CCPA) that goes into effect in 2020. It is considered nearly as strict as GDPR and will affect any company with customers living in California, whether based in America or internationally.

As fines and consequences for non-compliance ramp up (GDPR fines, for instance, totaled €56 million in the regulation's first year), it's likely that businesses will start uncovering their own limitations, including legacy architecture and security techniques. This will prompt them to make changes: adopting public cloud services built with GDPR or California privacy compliance in mind, and extending their networks to include cloud as well as on-premises assets. It's more important than ever that businesses put security first when making this kind of change, or they may be solving the problem of compliance at the expense of overall security and visibility.

Visibility is More Important than Ever as Businesses Adopt New Cloud Security Trends

All three major public cloud providers (AWS, Azure, and Google Cloud) use the shared responsibility model. Simply put, the cloud provider manages the infrastructure, and you as the customer are fully responsible for your data, access management, and network and firewall configuration. Each enterprise will have its own unique needs in terms of governance, SLA requirements, and overall security, and in a multi-cloud environment, staying on top of this can be complex.

The bottom line is that customers often experience a lack of visibility and control when they consolidate their IT on the cloud, exactly where they need that insight and attention the most. If you have specific regulatory or industry needs, you will need more assurance that you have control over your workloads and communication flows.

Cloud-Native Environments are the Cloud Security Future

Improving your visibility across a hybrid IT ecosystem limits the chances of you falling victim to attacks on vulnerable or poorly authenticated infrastructure. Guardicore Centra offers automatic asset and dependency mapping down to the process level, allowing IT to quickly uncover and manage misconfigurations or dangerous open communications, providing early value to your business.

Once these are dealt with, a continuous view of all communication flows and assets moving forward puts your business in a strong position as attackers begin launching more sophisticated campaigns in the cloud. As cloud adoption continues to grow, future-focused businesses need to be on the lookout for cloud-native attacks that take advantage of container vulnerabilities and architectures, for example.

Shift-Left on Cloud Security

Enterprises are realizing that cloud providers are not responsible for their workload or application security, and that cloud solutions do not remove a business's own responsibility when it comes to data security and compliance. One of the popular cloud security trends is that businesses are looking to adopt an early and continuous security solution to meet this challenge head-on. The latest micro-segmentation technology is smart and modern, robust enough to take control of an increasingly complex environment while accomplishing early-value use cases when it comes to solving infrastructure problems. As a built-in security method, the strongest micro-segmentation technology can handle a heterogeneous data center, covering legacy systems, bare metal, VMs, hybrid environments, containers, serverless, and multi-cloud. Consolidating on a single security vendor also reduces complexity, which explains why many companies are opting for solutions that include strong complementary controls such as breach detection and incident response.

‘Application-Aware’ is a Cloud Security Future Must-Have

Moving to the cloud is all about enabling businesses to be more flexible, to scale faster and larger, and to provide and benefit from new and exciting services. Your micro-segmentation solution needs to be able to keep up. Application-centric security takes over from traditional manual implementation, providing deep visibility, smart policy creation, and airtight governance, protecting against threats in a holistic way. Future success in cloud security depends on security that is built for the cloud and all its vulnerabilities, while effortlessly managing legacy systems and everything in between.

Want to learn more about cloud security trends and how to manage a heterogeneous environment? Check out this white paper.

How to Establish your Next-Gen Data Center Security Strategy

In 2019, 46 percent of businesses are expected to use hybrid data centers, and it is therefore critical for these businesses to be prepared to deal with the inherent security challenges. Developing a next gen data center security strategy that takes into account the complexity of hybrid cloud infrastructure can help keep your business operations secure by way of real-time responsiveness, enhanced scalability, and improved uptime.

One of the biggest challenges of securing the next gen data center is accounting for the various silos that develop. Every cloud service provider has its own methods to implement security policies, and those solutions are discrete from one another. These methods are also discrete from on-premises infrastructure and associated security policies. This siloed approach to security adds complexity and increases the likelihood of blind spots in your security plan, and isn’t consistent with the goals of developing a next gen data center. To overcome these challenges, any forward-thinking company with security top of mind requires security tools that enable visibility and policy enforcement across the entirety of a hybrid cloud infrastructure.

In this piece, we’ll review the basics of the next gen data center, dive into some of the details of developing a next gen data center security strategy, and explain how Guardicore Centra fits into a holistic security plan.

What is a next gen data center?

The idea of hybrid cloud has been around for a while now, so what’s the difference between what we’re used to and a next gen data center? In short, next gen data centers are hybrid cloud infrastructures that abstract away complexity, automate as many workflows as possible, and include scalable orchestration tools. Scalable technologies like SDN (software defined networking), virtualization, containerization, and Infrastructure as Code (IaC) are hallmarks of the next gen data center.

Given this definition, the benefits of the next gen data center are clear: agile, scalable, standardized, and automated IT operations that limit costly manual configuration, human error, and oversights. However, when creating a next gen data center security strategy, enterprises must ensure that the policies, tools, and overall strategy they implement are able to account for the inherent challenges of the next gen data center.

Asking the right questions about your next gen data center security strategy

There are a number of questions enterprises must ask themselves as they begin to design a next gen data center and a security strategy to protect it. Here, we’ll review a few of the most important.

  • What standards and compliance regulations must we meet? Regulations such as HIPAA, PCI-DSS, and SOX subject enterprises to strict security and data protection requirements that must be met, regardless of other goals. Failure to account for these requirements in the planning stages can prove costly in the long run should you fail an audit due to a simple oversight.
  • How can we gain granular visibility into our entire infrastructure? One of the challenges of the next gen data center is the myriad of silos that emerge from a security and visibility perspective. With so many different IaaS, SaaS, and on-premises solutions going into a next gen data center, capturing detailed visibility of data flows down to the process level can be a daunting task. However, in order to optimize security, this is a question you'll need to answer in the planning stages. If you don't have a baseline of what traffic flows on your network look like at various points in time (e.g. peak hours on a Monday vs. midnight on a Saturday), identifying and reacting to anomalies becomes almost impossible (see the short sketch after this list).
  • How can we implement scalable, cross-platform security policies? As mentioned, the variety of solutions that make up a next gen data center can lead to a number of silos and discrete security policies. Managing security discretely for each platform flies in the face of the scalable, DevOps-inspired ideals of the next gen data center. To ensure that your security can keep up with your infrastructure, you'll need to seek out scalable, intelligent security tools. While security is often viewed as hamstringing DevOps efforts, the right tools and strategy can help bridge the gap between these two teams.
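
To illustrate what such a baseline can look like, here is a minimal sketch that groups historical connection counts by weekday and hour, then flags new observations that fall far outside the norm. The sample data and threshold are illustrative assumptions; in practice the inputs would come from your flow-visibility tooling.

    import statistics
    from collections import defaultdict

    # (weekday, hour, connection_count) samples collected over several weeks.
    # Illustrative values; real data would come from flow logs or a visibility platform.
    flow_log = [
        ("Mon", 9, 1200), ("Mon", 9, 1150), ("Mon", 9, 1300),
        ("Sat", 0, 40), ("Sat", 0, 35), ("Sat", 0, 50),
    ]

    baseline = defaultdict(list)
    for day, hour, count in flow_log:
        baseline[(day, hour)].append(count)

    def is_anomalous(day, hour, count, threshold=3.0):
        """Flag counts more than `threshold` standard deviations from the baseline mean."""
        history = baseline.get((day, hour))
        if not history or len(history) < 2:
            return False  # not enough history to judge
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0
        return abs(count - mean) / stdev > threshold

    print(is_anomalous("Sat", 0, 900))  # True: far above the Saturday-midnight baseline

Even a simple model like this is the difference between "traffic looks high" and "traffic is many times the normal Saturday-midnight level," which is the kind of signal segmentation and detection policies can act on.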

Finding the right solutions

Given what we have reviewed thus far, we can see that the solutions to the security challenges of the next gen data center need to be scalable and compliant, provide granular visibility, and function across the entirety of your infrastructure.

Guardicore Centra is uniquely capable of addressing these challenges and helping secure the next gen data center. For example, not only can micro-segmentation help enable compliance to standards like HIPAA and PCI-DSS, but Centra offers enterprises the level of visibility required in the next gen data center. Centra is capable of contextualizing all application dependencies across all platforms to ensure that your micro-segmentation policies are properly implemented. Regardless of where your apps run, Centra helps you overcome silos and provides visibility down to the process level.

Further, Centra is capable of achieving the scalability that the next gen data center demands. To help conceptualize how scalable micro-segmentation with Guardicore Centra can be, consider that a typical LAN build-out can last for a few months and require hundreds of IT labor hours. By contrast, a comparable micro-segmentation deployment takes about a month and significantly fewer IT labor hours.

Finally, Centra can help bridge the gap between DevOps and Security teams by enabling the use of "zero trust" security models. The general idea behind zero trust is, as the name implies, that nothing inside or outside of your network should be trusted by default. This shifts the focus to determining what is allowed, as opposed to being strictly on the hunt for threats, which is much more conducive to a modern DevSecOps approach to the next gen data center.
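
As a conceptual illustration of that "define what is allowed" mindset, the sketch below evaluates flows against an explicit allowlist and denies everything else by default. The labels and rules are made up for illustration; in a real deployment they would be derived from the application dependency map described above.

    from typing import NamedTuple

    class Flow(NamedTuple):
        source: str
        destination: str
        port: int

    # Explicit allow rules; anything not listed here is denied by default.
    ALLOWED = {
        ("web", "db", 5432),
        ("web", "cache", 6379),
    }

    def is_allowed(flow: Flow) -> bool:
        """Default-deny: only flows explicitly on the allowlist pass."""
        return (flow.source, flow.destination, flow.port) in ALLOWED

    print(is_allowed(Flow("web", "db", 5432)))  # True: explicitly allowed
    print(is_allowed(Flow("web", "db", 22)))    # False: denied by default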

Guardicore helps enable your next gen data center security strategy

When developing a next gen data center security strategy, you must be able to account for the nuances of the various pieces of on-premises and cloud infrastructure that make up a hybrid data center. A big part of doing so is selecting tools that minimize complexity and can scale across all of your on-premises and cloud platforms. Guardicore Centra does just that and helps implement scalable and granular security policies to establish the robust security required in the next gen data center.

If you’re interested in redefining and adapting the way you secure your hybrid cloud infrastructure, contact us to learn more.

Want to know more about proper data center security? Get our white paper about operationalizing a proper micro-segmentation project.


Have You Heard the News? Guardicore Employees Making Waves in Cybersecurity

Here at Guardicore, our employee successes are always a cause for celebration. We love seeing their names up in lights when they gain media attention for their achievements in cybersecurity and beyond.

With that in mind, let’s take a closer look at some of our Guardicore family who have hit the headlines recently, and understand why the Guardicore culture promotes and attracts this kind of success.

Encouraging our Diverse Voices

Ola Sergatchov, our Vice President of Corporate Strategy, was recently recognized as one of The Software Report’s Top 25 Women Leaders in Cybersecurity for 2019. An Executive Leader at Guardicore, Ola encourages women in technology to pursue both technical and leadership positions with creativity, integrity, and determination. Ola has more than 20 years in the industry, and combines technical knowledge with strategic business experience and an innovative flair.

On the topic of awesome Guardicore women who are gaining press attention, check out Danielle Kuznetz Nohi, Guardicore's Information Security Researcher and Team Leader, featured in this article on female voices that are making a difference in cybersecurity. She talked about how she looks for the right skill set and personality when hiring for her team: applicants who show creativity, communication, organization, and superb management ability.

Age is Just a Number

An open mind when it comes to hiring practices is an area where many companies fall short, often focusing on the age and experience of candidates rather than their skills, raw talent, and potential to contribute. In contrast, at Guardicore we look for the right talent, no matter where it comes from. Rather than restricting ourselves to one 'type' of person, we look for interesting people with fresh ideas who can add to our teams. Omri, who joined us at just 18 years old, is a great example, and his story has attracted a lot of interest. His high school teacher had sparked his interest by teaching him Scratch, and he began developing his own applications and programming websites.

When Omri applied to Guardicore, Daniel Goldberg, our Information Security Expert and Security Researcher, said that the decision to hire him was an easy one, although he knew that Omri could only join the team for a few months and then would leave for his army service. He saw the win-win nature of the situation, and said yes where others may have said no. Tangling with the bad actors and malicious hackers that only the top percentage of security experts ever grapple with is an unusual experience for any teenager, and one that Omri feels has prepared him for both his army intelligence unit, and an ongoing career in hi-tech.

Innovation and Fresh Thinking

A fresh voice shouting out from the frontlines of cybersecurity research, Ophir Harpaz is a reverse-engineering enthusiast, sharing her skills through her pet project, begin.re where even beginners can get some hands-on advice and knowledge. She was recently featured in 21 Cybersecurity Twitter Accounts You Should Follow for bestowing her insight and practical know-how to the masses. Innovative and exciting, it’s easy to see why she is such a good fit for Guardicore Labs.

Sharing her own story and experience in cybersecurity, Product Manager Avishag Daniely was recently featured in ITSP magazine, giving her fresh and unique perspective on how minorities in the workplace can fight their fear of failure.

We encourage our staff to work on their own unique personal goals, and then use these to excel in the workplace, too. Expanding the company's global footprint and extending the search for talent to new markets is increasingly important. With this in mind, Avishag worked to become confident in business Spanish, learning to present and hold meetings in the language. This helped her close the culture gap, whether she was making new connections, presenting to large audiences, or building informal relationships while temporarily relocated abroad.

The Best People for the Job

Despite the company experiencing great growth over the past few years, one unique element of Guardicore is that we still manage to keep a truly caring culture, the feeling of being one big family, celebrating one another’s successes.

I believe that this has a lot to do with our hiring practices, and how we create a strong, cohesive culture that runs through everything we do as a company. Tune in to my next blog to hear about the steps we put in place to make this happen.

4 Insights about the Salesforce Outage

On May 17th, Salesforce announced a significant outage to its service, resulting in customers losing access to one of the most critical applications they use daily. The issue was acknowledged by Parker Harris, Salesforce's chief technology officer and co-founder, while the company worked to resolve the critical outage as quickly as possible.

At the center of the disaster was a faulty database script deployed in the production environment. Salesforce announced that “a database script deployment inadvertently gave users broader data access than intended.” This affected Salesforce customers who use Salesforce Pardot, a B2B marketing automation product, as well as any customers who have used Pardot in the past. The inadvertent access gave users both read and write permissions to restricted data.

Salesforce took initial steps to mitigate the problem by blocking access to all instances that contained impacted customers, and by shutting down other Salesforce services. This heat map below shows the extent of the blackout for Salesforce customers.

Salesforce outage map

The essential nature of the Salesforce application is self-evident, so these outages were extremely significant. Users who need Salesforce on a daily basis as part of their job found themselves idle, forcing many businesses to simply send them home.

As a data center security company focused on protecting the most critical applications, we took away four essential insights from the crisis:

  1. Think Further than Cyber-Attacks
    Always remember that cyber-attacks are not the only threats to your data center. When evaluating your data center risks, it is important to take internal "threats" into account and implement the right controls to protect your "digital crown jewels" – the most critical business applications and processes. For example, separating your production and development environments is foundational for strong security, ensuring that testing scripts cannot run in your production environment, even in the case of human error (see the sketch after this list).
  2. Always Consider the Cloud
    Companies are increasing their presence in the cloud for reasons such as a positive impact on cost, maintenance effort, and flexibility. However, security needs to be considered from the outset of your cloud strategy. Some companies are unaware that cloud apps can have greater exposure to threats due to a lack of visibility and the difficulty of introducing policies and controls. In the cloud, your business is at greater risk in the case of a breach or an outage.
  3. Zero Trust
    You cannot trust a single point of configuration to control and isolate your environment. Best practice is to challenge your controls and simulate failure scenarios. Zero Trust, the approach of "never trust, always verify," is usually discussed in terms of lateral movement and breach detection attempts in internal as well as external networks. However, it is also relevant to any security control that is being used or updated. In many cases, your business is in danger from internal threats, misconfigurations, and innocent mistakes, all of which can be as catastrophic as a malicious cyber-attack. The zero trust approach helps to limit the damage.
  4. Be Ready for a Crisis
    Distributed controls are your strongest weapon to ensure that you are prepared for any eventuality. These will allow you to act quickly against the unexpected, especially in hybrid cloud environments where you need to manage multiple clusters and control planes. Make sure that you have the visibility and control of your entire environment that allows you to instantly isolate any affected environments. This will give you time to put your incident response plan into place, and protect your critical assets until a solution has been found.
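
To illustrate the production/development separation called out in the first insight above, here is a small sketch of a guard that a deployment or database script could call before doing anything. The environment variable names and override flag are illustrative assumptions; the point is that running anything against production should require an explicit, auditable decision rather than defaulting to whatever environment happens to be configured.

    import os
    import sys

    def guard_environment(target_env: str) -> None:
        """Abort before running a script against production unless explicitly authorized."""
        if target_env.lower() != "production":
            return  # non-production environments: nothing to check
        if os.environ.get("ALLOW_PROD_RUN", "").lower() != "yes":
            sys.exit("Refusing to run against production. "
                     "Set ALLOW_PROD_RUN=yes (with approval) to override.")

    if __name__ == "__main__":
        # Hypothetical: the target environment is injected by the deployment pipeline.
        target = os.environ.get("TARGET_ENV", "development")
        guard_environment(target)
        print(f"Running database script against {target}...")

Combined with segmentation that keeps development tooling off production networks entirely, a guard like this turns an innocent mistake into a blocked action instead of an outage.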

The Salesforce outage shows that mistakes can happen to anyone, and the best protection is always going to be preparation. Start by separating your environments, limiting the exposed surface, and then move on to using the zero trust model to keep your most critical assets safe from harm, even in a hybrid-cloud infrastructure. Remember that without adequate segmentation, you are exposing your applications to internal threats as well as external ones. With strong data center security, you are one step ahead at all times.

Want to learn more about micro-segmentation in the cloud? Read our white paper on how to secure today’s modern data centers.


Guardicore Raises $60 Million; Funding Fuels Company Growth and Continued Disruption

Today I am excited to share that we have secured a Series C funding round of $60 million, bringing our total funding to more than $110 million. The latest round was led by Qumra Capital and was joined by other new investors DTCP, Partech, and ClalTech. Existing investors Battery Ventures, 83North, TPG Growth, and Greenfield Partners also participated in the round.

Since we launched the company in 2015, Guardicore has been focused on a single vision for providing a new, innovative way to protect critical assets in the cloud and data center. Our focus, and our incredible team, has earned the trust of some of the world’s most respected brands by helping them protect what matters most to their business. As the confidence our customers have in us has grown, so has our business, which has demonstrated consistent year-over-year growth for the past three years.

Our growth is due to our ability to deliver a new approach to securing data centers and clouds using distributed, software-defined segmentation. This approach aligns with the transformation of the modern data center, driven by cloud, hybrid cloud, and PaaS adoption. As a result, we have delivered a solution that redefines the role of firewalls and helps organizations implement Zero Trust security frameworks. More dynamic, agile, and practical security techniques are required to complement or even replace next-generation firewall technologies. We are delivering this, giving our customers the ability to innovate rapidly with the confidence that their security posture can keep up with the pace of change.

Continued Innovation

The movement of critical workloads into virtualized, hybrid cloud environments, industry compliance requirements, and the increase in data center breaches all demand a new approach to security, one that moves away from legacy firewalls and other perimeter-based security products toward a new, software-defined approach. This movement continues to inspire our innovations and ensure that our customers have a simpler, faster way to guarantee persistent and consistent security for any application, in any IT environment.

Our innovation is evident in several areas of the company. First, we have been able to quickly add new innovative technology into our Centra solution, working in close partnership with our customers. For example, we deliver expansive coverage of data center, cloud infrastructure and operating environments, and simpler and more intuitive ways to define application dependencies and segmentation policies. This gives our customers the right level of protection for critical applications and workloads in virtually any environment.

Second, our Guardicore Labs global research team continues to provide deep insights into the latest exploits and vulnerabilities that matter to the data center. The team also equips the industry with open source tools like Infection Monkey and with Cyber Threat Intelligence (CTI) that allows security teams to keep track of potential threats in real time.

We have also continued to build out other areas of our business, such as our partner ecosystem, which has earned a five-star partner program rating from CRN since its inception two years ago, as well as our technology alliances, which include relationships with leading cloud and IaaS infrastructure players such as AWS, Azure, and Nutanix.

Looking Ahead

We are proud of our past, but even more excited about our future. While there is always more work to do, we are in a unique position to lead the market with not only great technology, but a strong roster of customers, partners and, most importantly, a team of Guardicorians that challenge the status quo every single day to deliver the most innovative solutions to meet the new requirements of a cloud-centric era. I truly believe that we have the best team in the business.

Finally, as we celebrate this important milestone, I want to say thanks to our customers who have made Guardicore their trusted security partner. It is our mission to continue to earn your trust by ensuring you maximize the value of your security investments beyond your goals and expectations.

4 of the Most Devastating Data Center Breaches of the Past 5 Years (And How They Could Have Been Prevented)

In our last blog about data center hygiene, I talked about how most hackers are getting into your data centers in pretty standard, and more importantly, preventable ways. You can read more in depth about hacks and security with some interesting perspectives and thought leadership from our Guardicore Labs team, who research and write about the truly interesting campaigns they discover and analyze. In this article, however, I want to focus on four of the most talked about breaches of the past few years. By looking at what was stolen, who was impacted, how the data center breaches occurred, and what the tangible damage was, we should be able to see a pattern in how these attacks were perpetrated, and how they could have been stopped.

Equifax – What Happened?

The amount of data stolen was huge, including the names, dates of birth, and social security numbers of 148 million Americans; the names, dates of birth, driving license details, and financial details of 15 million British citizens; and an unknown amount of PII belonging to Canadians and Australians. Home addresses, genders, passport details, and taxpayer ID numbers were stolen, as well as payment card information.

The damage to Equifax continues to grow. The stock drop alone cost the company $4 billion, alongside scrapped bonuses and IT costs of $242.7 million. There are 19 class action lawsuits pending against the company, and fines outstanding from US and Canadian regulatory commissions. The UK fined the company £500,000 (US$660,000), the maximum that could be levied prior to the new GDPR regulations.

How Could it Have Been Prevented?

The initial entry point for the Equifax attackers was an unpatched vulnerability in their front-end web services, specifically in Apache Struts 2. According to the US House of Representatives, “The company’s failure to implement basic security protocols, including file integrity monitoring and network segmentation, allowed attackers to access and remove large amounts of data.” To understand more about this advice, see our deep dive on the Equifax failures.
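
File integrity monitoring, one of the controls the House report calls out, does not have to be exotic. Here is a minimal sketch of the idea: hash a set of sensitive files, store the baseline, and alert when a later run sees different hashes. The watched paths and storage format are illustrative assumptions; production FIM tools add scheduling, tamper-resistant storage, and alert routing.

    import hashlib
    import json
    from pathlib import Path

    def snapshot(paths):
        """Hash a set of files so later runs can detect unexpected changes."""
        return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

    def diff(baseline, current):
        """Return files whose hashes changed since the baseline was taken."""
        return [f for f, h in current.items() if baseline.get(f) != h]

    if __name__ == "__main__":
        watched = ["/etc/passwd", "/etc/ssh/sshd_config"]  # illustrative paths
        baseline_file = Path("baseline.json")
        current = snapshot(watched)
        if baseline_file.exists():
            changed = diff(json.loads(baseline_file.read_text()), current)
            if changed:
                print("ALERT: files changed since baseline:", changed)
        baseline_file.write_text(json.dumps(current))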

Segmentation was obviously a core problem for Equifax: the lack of segmentation allowed the attackers to move with ease to critical areas once they made it through the perimeter. This was made worse by poor data hygiene, something we mentioned in our previous blog. The hackers were able to steal the data unnoticed because of an out-of-date digital certificate, which had expired more than 19 months before the breach.

Equifax has also been criticized for its lack of proper incident response, a problem we see in many data center breaches. As early as August 2016, Equifax was warned about vulnerabilities and told about flaws in its data center. Allegedly, the company did nothing, even when it learned that hackers had broken into its computer systems and when it observed and blocked what it called “suspicious network traffic.” This dangerous inaction borders on negligence, and may well have been illegal.

Target – What Happened?

100 million customers were affected by the 2013 Target data center breach, with data exposed including mailing addresses, names, email addresses, phone numbers and credit and debit card account data. This information could then be used to hack consumer accounts or launch phishing scams. Financial data stolen was complete, including account numbers, CVV codes and expiration dates.

Target also suffered a drop in its stock price as a result of the data center breach, with hundreds of lawsuits and around $3.6 billion worth of fines levied against the company. The most notable outcome compared with other data center breaches is that it drove sweeping change across the retail industry: card payment and point of sale systems were overhauled, the EMV chip was adopted, and a new protocol of tokenizing transactions began.

How Can Data Center Breaches like the Target Attack be Avoided?

Third-party vendors and services can be the weakest link in your ecosystem, without you even knowing it. For Target, the compromise of its HVAC vendor's network was the entry point for the hackers. With the right amount of visibility, the company could have seen that this was a risky connection and a potential breach point. Once inside the Target network, the financial data was accessible to the attackers because it was not segmented for PCI compliance. Lastly, poorly patched point of sale systems could have been protected with better account management.

While the company was warned of the breach in advance, Target's security leadership was concerned about losing revenue during the all-important holiday season, and so delayed the incident response.

Yahoo – What Happened?

The largest breach of its kind, 1 billion records were exposed in 2014 when Russian hackers infiltrated the Yahoo network. Among the data stolen were email addresses, usernames, phone numbers, security questions, and encrypted passwords. This information has since been used in hundreds of attacks worldwide. Shockingly, Yahoo failed to notify anyone about the breach until 2016, and there is still no clear answer as to how the network was breached.

In 2018, the SEC fined the company $50 million, and as with all data center breaches that affect this many individuals, there are likely to be more financial and legal consequences on the way.

Marriott – What Happened?

It can be hard to gauge the damage of an attack, especially when the company in question is less than upfront about the situation. Attackers breached the network of the Starwood guest reservation system, later acquired by Marriott hotels, during 2014, which means they achieved a dwell time of at least 1,441 days. The hackers are believed to have been working for Chinese intelligence services, with the motive of tracking people of interest and espionage.

The data stolen includes the names, phone numbers, email addresses, dates of birth, and passport information of guests at the hotels, providing clear benefits for intelligence agencies that want insight into people's movements, meetings, and credentials. It can also be used to create counterfeit passports with real identification information. The secrecy around this attack and the length of the dwell time mean that the consequences are likely to be harsh. The GDPR breach alone is expected to cost Marriott $915 million, while US federal investigations are still underway before further fines can be given.

Gaining Visibility and Control over Data Center Breaches like Yahoo and Marriott

Now let’s look again at our data center security checklist from the previous blog. In all of these cases, solving the issues on the checklist could have reduced risk and perhaps even prevented these data center breaches. Starting with visibility, identify your critical assets and digital crown jewels, so that you know where segmentation can make a difference. Ensure that areas of compliance are near the top of your to-do list. Protect your data center from the weakest links, namely the third-party vendors, suppliers and distributors who could be putting you at risk. Lastly, alongside underlying data hygiene, make sure you have an incident response plan that is up to date and tried and tested.

Want to learn more about how segmentation and micro-segmentation can help you achieve early wins for your company? Check out our white paper on smart segmentation.


Interested in research? Follow the exploits uncovered by Guardicore Labs here. You can also check out Infection Monkey, a free, open source vulnerability assessment tool that works across on-premises environments, vSphere, multiple clouds, and containers. A recent addition: look up potentially threatening domains and IPs using our cutting-edge Cyber Threat Intelligence.

Containers vs Virtual Machines – Your Cheat Sheet to Know the Differences

Docker, Kubernetes, and even Windows Server Containers have seen a huge rise in popularity over the last few years. With the application container market having a projected CAGR (compound annual growth rate) of 32.9% between 2018 and 2023, we can expect that trend to continue. Containers have a huge impact on application delivery and are a real game changer for DevOps teams.

However, despite the popularity of containerization, there is still significant confusion and misunderstanding about how containers work and the difference between containers and virtual machines. This also leads to ambiguity in how to properly secure infrastructure that uses containers.

In this piece, we’ll provide a crash course on containers vs virtual machines by comparing the two, describing some common use cases for both, and providing some insights to help you keep both your virtual machines and containers secure.

What are Virtual Machines?

VMware’s description of a virtual machine as a “software computer” is a succinct way to describe the concept. A virtual machine is effectively an operating system or other similar computing environment that runs on top of software (a hypervisor) as opposed to directly on top of bare metal computer hardware (e.g. a server).

To better conceptualize what a virtual machine is, it's useful to understand what a hypervisor is. A hypervisor is a special type of operating system that enables a single physical computer or server to run multiple virtual machines with different operating systems. The virtual machines are logically isolated from one another, and the hypervisor virtualizes the underlying hardware and gives the virtual machines virtual compute resources (CPU, RAM, storage, etc.) to work with. Two of the most popular hypervisors today are Microsoft Hyper-V and VMware's ESXi.

In short, hypervisors abstract away the hardware layer so virtual machines can run independent of the underlying hardware resources. This technology has enabled huge strides in virtualization and cloud computing over the last two decades.

Note: If you're interested in learning more about the nuts and bolts of hypervisors, it is important to note that what we've described here is a "Type 1" hypervisor. There are also "Type 2" hypervisors (e.g. VirtualBox or VMware Fusion) that run on top of standard operating systems (e.g. Windows 10).

What are Containers?

A container is a means of packaging an application and all its dependencies into a single unit that can run anywhere the corresponding container engine is available. To conceptualize this, we can compare what a container engine does for containers to what a hypervisor does for virtual machines. While a hypervisor abstracts away hardware for the virtual machines so they can run an operating system, a container engine abstracts away an operating system so containers can run applications.

If you’re new to the world of containers and containerization, there is likely a ton of new terminology you need to get up to speed on, so here is a quick reference:

  • Docker. One of the biggest players in the world of containers and makers of the Docker Engine. However, there are many other options for using containers such as LXC Linux Containers and CoreOS rkt.
  • Kubernetes. A popular orchestration system for managing containers, often written as "K8s" for short. Other, less popular orchestration tools include Docker Swarm and Marathon (which runs on Apache Mesos).
  • Cluster. A group of machines (nodes) managed together: a "master" (control plane) node that handles orchestration, and one or more worker nodes that actually run pods.
  • Pods. One or more containers that share resources and are deployed together for a specific purpose; a pod is the smallest unit Kubernetes schedules onto a worker node (see the short sketch after this list).
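
If you have access to a cluster, the official Kubernetes Python client makes these terms concrete. The sketch below connects using your local kubeconfig and lists the pods running in every namespace; it assumes the kubernetes package is installed and a kubeconfig is present, and otherwise follows the client's standard usage.

    from kubernetes import client, config

    # Load credentials from ~/.kube/config (the same file kubectl uses).
    config.load_kube_config()

    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(watch=False)

    # Each pod belongs to a namespace and runs on a worker node in the cluster.
    for pod in pods.items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)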

Understanding the differences between containers vs virtual machines becomes easier when you view them from the standpoint of what is being abstracted away to provide the technology. With virtual machines, you’re abstracting away the hardware that would have previously been provided by a server and running your operating system. With containers you’re abstracting away the operating system that has been provided by your virtual machine (or server) and running your application (e.g. MySQL, Apache, NGINX, etc.).

Use Cases for Containers vs Virtual Machines

At this point, you may be asking: "Why bother with containers if I already have virtual machines?" While that is a common thought process, it's important to understand that each technology has valid use cases, and there is plenty of room for both in the modern data center.

Many of the benefits of containers stem from the fact that they only include the binaries, libraries, other required dependencies, and your app, with no other overhead. It should be noted that all containers on the same host share the same operating system kernel. This makes them significantly smaller than virtual machines and more lightweight. As a result, containers boot more quickly, ease application delivery, and help maximize efficient utilization of server resources. This means containers make sense for use cases such as:

  • Microservices
  • Web applications
  • DevOps testing
  • Maximizing the number of apps you can deploy per server

Virtual machines on the other hand are larger and boot slower, but they are logically isolated from one another (with their own kernel) and can run multiple applications if needed. They also give you all the benefits of a full-blown operating system. This means virtual machines make sense for use cases such as:

  • Running multiple applications together
  • Monolithic applications
  • Complete logical isolation between apps
  • Legacy apps that require old operating systems

It’s also important to note that the topic of containers vs virtual machines is not zero-sum and the two can be used together. For example, you can install the Ubuntu Operating System on a virtual machine, install the Docker Engine on Ubuntu, and then run containers on top of the Docker Engine.
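
To make that layering concrete, here is a short sketch using the Docker SDK for Python (the docker package) against a local Docker Engine, such as one installed on an Ubuntu VM as described above. The image, port mapping, and container name are arbitrary choices for illustration.

    import docker

    # Talk to the local Docker Engine (the same daemon the Docker CLI uses).
    client = docker.from_env()

    # Start an NGINX container in the background, publishing container port 80 on host port 8080.
    web = client.containers.run("nginx:alpine", detach=True, name="demo-web",
                                ports={"80/tcp": 8080})

    # List everything currently running on this engine.
    for c in client.containers.list():
        print(c.name, c.status, c.image.tags)

    # Clean up the demo container.
    web.stop()
    web.remove()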

Security Challenges of Containers vs Virtual Machines

As data centers and hybrid cloud infrastructures integrate containers into an already complex ecosystem that includes virtual machines running on-premises and a variety of cloud services providers, keeping up with security can be difficult.

While virtual machines do offer logical isolation at the kernel level, there is still a myriad of challenges associated with them, including limited visibility into virtual networks, sprawl that expands the attack surface, and hypervisor security. These problems only become more magnified as your infrastructure scales and grows more complex. Without the proper tools, adequate visibility and security are difficult to achieve.

This is where Guardicore Centra can help. Centra enables enterprises to gain process-level visibility over the entirety of their infrastructure, whether virtual machines are deployed on-premises, in the cloud, or a mixture of both. Further, micro-segmentation helps limit the spread of threats and meet compliance requirements.

Micro-segmentation is particularly important when you begin to consider the challenges associated with container security. Containers running on the same operating system share the same kernel. This means that a single compromised container could lead to the host operating system and all the other containers on the host being compromised as well. Micro-segmentation can help limit the lateral movement of breaches and further harden a hybrid cloud infrastructure that uses containers.
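To show what segmenting container traffic can look like in practice, here is a minimal sketch, using the Kubernetes Python client, of a NetworkPolicy that allows ingress to pods labeled app=db only from pods labeled app=web and denies everything else. The labels and namespace are illustrative assumptions, and this is generic Kubernetes policy used as a simple example, not a description of how Guardicore Centra enforces policy.

```python
# Minimal sketch: a Kubernetes NetworkPolicy that limits lateral movement by
# allowing ingress to "db" pods only from "web" pods. Labels and namespace are
# illustrative assumptions; this is generic Kubernetes policy, not Centra policy.
from kubernetes import client, config

def restrict_db_ingress():
    config.load_kube_config()

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="db-allow-web-only"),
        spec=client.V1NetworkPolicySpec(
            # Apply the policy to all pods labeled app=db in this namespace.
            pod_selector=client.V1LabelSelector(match_labels={"app": "db"}),
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    # Only pods labeled app=web may reach the db pods;
                    # traffic from any compromised neighbor is denied.
                    _from=[
                        client.V1NetworkPolicyPeer(
                            pod_selector=client.V1LabelSelector(
                                match_labels={"app": "web"}
                            )
                        )
                    ]
                )
            ],
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="default", body=policy
    )

if __name__ == "__main__":
    restrict_db_ingress()
```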

Interested in Learning More About Securing Your Infrastructure?

That was our quick “cheat sheet” regarding containers vs virtual machines. We hope you enjoyed it! If you’d like to learn more about Docker security, check out our 5 Docker Security Best Practices to Avoid Breaches article. To learn more about securing modern infrastructure, check out our white paper on securing modern data centers and clouds. If you’d like to learn more about how Centra can help secure your hybrid cloud infrastructure, contact us today.

Easy Ways to Greatly Reduce Risk in Today’s Data Centers

Whether your infrastructure is on premises, in the cloud, or a hybrid combination of both, breached data centers tend to share core characteristics that made them vulnerable to attack. Such data centers are easier to penetrate and exploit, making them attractive, high-value targets for opportunistic hackers.

The truth is, protection is not that complicated. There are common, easily fixable data center problems that come up again and again in the biggest breaches, and best practices that can be easily implemented to significantly reduce your company’s risk against these kinds of threats. Security professionals are often inundated with content arguing that “IT ecosystems are increasingly complex and fast-changing, and are therefore so difficult to secure,” but in most cases this is simply wrong.

What Are the Attackers Looking For?

Data centers offer the biggest bang for the criminal’s buck, whether that’s harvesting PII or other sensitive information such as technical intellectual property and best practices. Beyond direct gain, data centers offer a wealth of processing power, which attackers can hijack and resell to other criminal groups for additional revenue. The black market for cyber-crime is continuously growing, with offerings such as DDoS-as-a-service and RAT-as-a-service giving attackers access to your compute infrastructure to inject malware or achieve remote access. We’ve even seen victims become the “false flag” bounce network used to obfuscate an attack’s origin. Using hijacked resources for cryptocurrency mining is a steadily growing threat as well, up 459% in 2018.

The Simple Fixes That, if Ignored, Make a Data Center Easy to Compromise

Just over three years ago, in proposing a Zero Trust model, John Kindervag of Forrester said that we need to move to architectures with “no more chewy centers.” When we look broadly at data centers, several things naturally make them exactly what we don’t want: very soft in the middle. By making small changes, we can turn these deficits into enterprise strengths, doing much to prevent future attacks and catching them more quickly when they do happen.

  1. Good hygiene: Far too often, attacks on data centers start by taking advantage of poor hygiene. By merely shoring up the items below, an organization makes it much more difficult for attackers to get in.
    1. Better patching acumen – doing a better job of finding and fixing unpatched vulnerabilities in applications.
    2. Better password and account management, and enabling two-factor authentication – many attacks start with simple brute-force password attacks against applications protected by single-factor authentication.
    3. Better automation, including OS, application, and kernel checks – while we have become very good at applying DevOps scripting in the form of auto-provisioning and managing playbooks/scripts with tools like Chef, Puppet, and Ansible, we have not always added easy-to-incorporate OS, application, and kernel update checks to those scripts. Instead of spinning up new automations that are only as good as the day they were born, it is easy to update these scripts perpetually and automatically with such checks, cutting down exploitable vulnerabilities (see the sketch after this list).
  2. Better segmentation & micro-segmentation – when an enterprise incorporates modern segmentation techniques, even sparingly, its risk is greatly reduced. What makes these modern segmentation techniques different from what we have used in the past? Several things.
    1. Segmentation that is platform-agnostic and provides visibility and enforcement across all platforms quickly and easily – Today’s data centers are heterogeneous. Enterprises have embraced modern hypervisors and operating systems, containers and clouds, as well as serverless technology. Most enterprises also run a good number of legacy systems and EoL operating systems such as Solaris, HP-UX, AIX, EoL Windows, or EoL Linux.
    2. Segmentation that can be automated and works like your DevOps-based enterprise – Traditional security controls such as legacy firewalls, ACLs, and VLANs are extremely resource-intensive and impossible to manage in this kind of complex and dynamic environment. In some cases, such as a hybrid cloud infrastructure, legacy security is not just insufficient, it is unfeasible altogether. Enterprises need visibility across all of their platforms, easily and seamlessly. Micro-segmentation technology is built for the dynamic and platform-agnostic nature of today’s enterprises, without the need for manual moves, adds, changes, or deletes. Just as important, these modern techniques have been proven time and again to be implemented as much as 30x faster than legacy techniques can be deployed and maintained.
    3. Segmentation, even when applied sparingly in a “just a start” manner, begins to greatly reduce the attack surface. Grabbing this low-hanging fruit is easy. Examples include, but are not limited to:
      1. Isolating and securing compliance-mandated environments
      2. Segmenting your critical “crown jewel” applications
      3. Sectioning off vendors, suppliers, distributors, and contractors from the rest of the enterprise
      4. Securing critical enterprise services and applications such as remote access and network services
  3. Adequate Incident Response Plans & Practice – the final critical ingredient that can easily change an enterprise data center’s posture is a well-thought-out incident response plan, one that incorporates not only the technical staff but also the business and legal parties that need to be involved. These plans should be practiced, with incident response drills planned and run to uncover blind spots or gaps in security.
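As a small illustration of the automation point in item 1 above, here is a minimal sketch of a hygiene check in Python that lists pending package updates and compares the running kernel with the newest installed kernel. It assumes a Debian/Ubuntu host, so the exact commands are assumptions that would differ on other platforms, but a check like this is easy to fold into existing Chef, Puppet, or Ansible runs.

```python
# Minimal sketch: a hygiene check that could be bolted onto existing
# provisioning automation (Chef/Puppet/Ansible runs, cron jobs, CI pipelines).
# Assumes a Debian/Ubuntu host; commands would differ on other platforms.
import platform
import subprocess

def pending_apt_updates():
    # "apt list --upgradable" prints one line per package with a newer version.
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=False,
    ).stdout
    return [line.split("/")[0] for line in out.splitlines() if "/" in line]

def kernel_status():
    # Compare the running kernel with the newest installed kernel image.
    running = platform.release()
    out = subprocess.run(
        ["bash", "-c", "ls /boot/vmlinuz-* | sort -V | tail -n 1"],
        capture_output=True, text=True, check=False,
    ).stdout.strip()
    newest = out.rsplit("vmlinuz-", 1)[-1] if out else running
    return running, newest

if __name__ == "__main__":
    updates = pending_apt_updates()
    running, newest = kernel_status()
    print(f"{len(updates)} packages have pending updates")
    if running != newest:
        print(f"Reboot needed: running kernel {running}, newest installed {newest}")
```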

Don’t believe everything you hear. Many of today’s biggest breaches are entirely preventable. In my next blog, I’ll take a look at four of the most devastating data center breaches from the last five years, and see how the checklist above could have made all the difference.

Interested in learning more about how to secure modern data centers and hybrid cloud environments?

Check out our White Paper on re-evaluating your security architecture