
Looking for a Micro-segmentation Technology That Works? Think Overlay Model

Gartner’s Four Models for Micro-Segmentation

Gartner recently updated its micro-segmentation evaluation factors document, “How to Use Evaluation Factors to Select the Best Micro-Segmentation Model” (refreshed 5 November 2018).

The report details the four different models for micro-segmentation but does not make a clear recommendation on which is best. Finding the answer means looking at the limitations of each model and recognizing what the future looks like for dynamic hybrid-cloud data centers. I recommend reading the report and evaluating the different capabilities; for us at Guardicore, however, it is clear that one model stands above the others, and it should not be a surprise that vendors that previously used other models are now changing their technology to adopt it: Overlay.

But first, let me explain why other models are not adequate for most enterprise customers.

The Inflexibility of Native-Cloud Controls

The native model uses the tools that are provided with a virtualization platform, hypervisor, or infrastructure. This model is inherently limited and inflexible. Even for businesses using only a single hypervisor provider, this model ties them to one service, as micro-segmentation policy cannot simply be moved when you switch providers. In addition, while businesses might think they are working with one IaaS provider or hypervisor, workloads may in fact be running elsewhere too, a situation known as shadow IT. The reality is that vendors that used to support native controls for micro-segmentation have realized that customers are transforming and have had to develop new Overlay-based products.

More commonly, enterprises know that they are working with multiple cloud providers and services, and need a micro-segmentation strategy that can work seamlessly across this heterogeneous environment.

The Inconsistency of Third-Party Firewalls

This model is based on virtual firewalls offered by third-party vendors. Enterprises using this model are often subject to network layer design limitations, and therefore forced to change their networking topology. They can be prevented from gaining visibility due to proprietary applications, encryption, or invisible and uncontrolled traffic on the same VLAN.

A known issue with this approach is the creation of bottlenecks due to reliance on additional third-party infrastructure. Essentially, this model is not a consistent solution across different architectures, and can’t be used to control the container layer.

The Complexity of a Hybrid Model

The hybrid model combines the two models above; enterprises using it are attempting to limit some of the downsides of each model on its own. To gain more flexibility than native controls provide, they usually use third-party firewalls for north-south traffic. Inside the data center, where multi-cloud support is less of a concern, native controls can be used for east-west traffic.

However, as discussed, both of these solutions, even in tandem, are limited at best. A hybrid approach also adds the extra problems of a complex and arduous setup and maintenance strategy. Visibility and control under a hybrid choice are unsustainable in a future-focused IT ecosystem where workloads and applications are spun up, automated, auto-scaled, and migrated across multiple environments. Enterprises need one solution that works well, not two that are sub-par individually and limited together.

Understanding the Overlay Model – the Only Solution Built for Future Focused Micro-Segmentation

Rather than a patched-together hybrid of imperfect models, Overlay is built from the ground up to be a more robust and future-proof solution. Gartner describes the Overlay model as one in which a host agent or software enforces policy on the workload itself, and agent-to-agent communication is used rather than network zoning.

One of the downsides of third-party firewalls is that they are inherently hard to scale. In contrast, agents have no choke points to be constrained by, so they scale with your workloads as your needs grow.

With Overlay, your business has the best possible visibility across a complex and dynamic environment, with insight and control down to the process layer, including for future-focused architecture like container technology. As the only model that can address infrastructure differences, Overlay is agnostic to operational and infrastructure environments, which means an enterprise has support for anything from bare metal and cloud to virtual or micro-services, or whatever technology comes next. Without an Overlay model, your business can’t be sure of supporting future use cases and remaining competitive.

Not all Overlay Models are Created Equal

It’s clear that Overlay is the strongest technology model, and the only future-focused solution for micro-segmentation. This is true for traditional access-list style micro-segmentation as well as for implementing deeper security capabilities that include support for layer 7 and application-level controls.

Unfortunately, not every vendor provides the best version of Overlay or delivers the full functionality it is capable of. Utilizing the inherent benefits of an Overlay solution means you can put agents in the right places, setting communication policy that works in a granular way. With the right vendor, you can make intelligent choices about where to place agents, using context and process-level visibility all the way to Layer 7. Your vendor should also be able to provide extra functionality, such as enforcement by account, user, or hash, all within the same agent.

Remember that protecting the infrastructure requires more than micro-segmentation, and you will have to deploy additional solutions that allow you to reduce risk and meet security and compliance requirements.

Micro-segmentation has moved from being an exciting new buzzword in cyber-security to an essential risk reduction strategy for any forward-thinking enterprise. If it’s on your to-do list for 2019, make sure you do it right, and don’t fall victim to the limitations of an agentless model. Guardicore Centra provides an all in one solution for risk reduction, with a powerful Overlay model that supports a deep and flexible approach to workload security in any environment.

Want to learn more about the differences between agent and agentless micro-segmentation? Check out our recent white paper.


Operationalizing Micro-Segmentation to Get You Started

It seems like the idea of micro-segmentation is everywhere today. The modern corporate perimeter is vastly different from that of the past, with our dependence on cloud-based resources, mobile devices, and third-party vendors, and what worked to protect it back then is no longer sufficient. It’s becoming clear that micro-segmenting networks is the way forward in a security landscape where traditional tools are failing to protect whatever is left of the disappearing perimeter.

In its 2017 Gauge Your Zero Trust Maturity report, Forrester details how micro-segmentation has become a crucial element in any security architecture, but Forrester isn’t the only one praising the model. The concept is garnering more and more attention from leading security experts because it’s one of the most effective means of preventing lateral movement within networks, and it’s time- and cost-effective too: a typical VLAN segmentation deployment for a small number of applications can take many months and eat up hundreds of hours of IT staff time, while a properly deployed micro-segmentation project of similar scope takes about a month and consumes far less time to set up.

Getting it right, the first time

Before you begin your micro-segmentation journey however, there are some aspects to consider. Establishing a solid micro-segmentation deployment isn’t a 1-2-3 process. It needs to be approached with deliberate forethought and cannot be slapped together. Doing it the right way the first time ensures that your micro-segmentation project will remain cost and time effective when viewed against other extended projects that don’t effectively segment, let alone secure, networks.

Here are some of the most important things you need to think about as you embark on your micro-segmentation project. This post will focus on the ‘before implementation’ stage to help facilitate your journey to a successful micro-segmentation deployment.

Micro-segmentation considerations – Before deployment

Understand what you want to segment:

Before you do anything, decide what needs segmenting and why you want to segment it. This is critical because what you are going to segment will impact how you do it. For example, if you need to comply with industry regulations like HIPAA, SWIFT or PCI, your micro-segmentation project must start with ensuring that the areas in which those assets are housed are ringed off immediately. On the other hand, if you’re considering micro-segmentation with an end goal of general risk reduction, you’ll need to segment your most sensitive assets first and then move on to the less sensitive areas in a phased approach.

Go after short term, then long term, priorities:

Yes, you already know that your long-term priority is improving your overall security posture and reducing organizational risk. But to get there, you need to prioritize your segmentation goals. Start with simple micro-segmentation tasks that significantly reduce risk (for example, blocking insecure protocols such as telnet or FTP, or forcing SSH traffic through a specific jumpbox), and then proceed to a more thorough policy (such as application and tier segmentation).
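To make the phasing concrete, here is a minimal sketch of those "quick win" rules expressed as data before committing them to a vendor's policy engine. The port numbers are the standard IANA assignments; the jumpbox address, rule shape, and function names are illustrative assumptions, not any product's schema.

```python
# Hypothetical phase-1 rules: block insecure protocols, constrain SSH.
INSECURE_PORTS = {23, 21}      # telnet and FTP (standard IANA ports)
JUMPBOX = "10.0.5.10"          # assumed address of the approved SSH jumpbox

def evaluate(flow):
    """Return 'block' or 'allow' for a flow dict with src, dst, and port."""
    if flow["port"] in INSECURE_PORTS:
        return "block"                           # phase 1: no telnet/FTP anywhere
    if flow["port"] == 22 and flow["src"] != JUMPBOX:
        return "block"                           # phase 1: SSH only via the jumpbox
    return "allow"                               # tighter application policy comes later

print(evaluate({"src": "10.0.1.4", "dst": "10.0.2.8", "port": 23}))   # block
print(evaluate({"src": "10.0.5.10", "dst": "10.0.2.8", "port": 22}))  # allow
```

The point of the sketch is the ordering: a handful of coarse, high-value rules first, with the default left open until the more thorough application-level policy is ready.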

So start with identifying and prioritizing your goals; then you can put the plans to implement them into action.

Build a picture of your environment for visibility:

Building a clear visualization of your environment will help you roll out a robust micro-segmentation implementation. The initial picture of what you think you have running and connected is inherently limited, so be aware that as you build out your detailed plan, you’re likely to discover connections and dependencies you had not thought of before. With this in mind, you may need to backtrack a bit to incorporate the aspects that were initially missed and create a comprehensive, accurate map. Once you’ve got this picture, you can map all your dependencies, which will help you understand where you need to create and enforce rules.

Decide which labels you’re going to use:

Based on the goals you established, figure out which labels you need. Labels are snippets of metadata attached to workloads, which provide context regarding that specific workload. For example, a server might be labeled with “Production Environment”, “PCI compliant” and “ERP application” labels.

Properly labeling your workloads is critical to the success of your micro-segmentation deployment. Thus, it’s important to have flexibility in this process, instead of trying to force yourself into existing, limited options. Labels can and should be adapted to your specific organizational needs. For example, if you need application segmentation, use the “Application” label type. Need to segment your PCI-compliant machines from others? You probably need a “PCI compliant” label. The important part is that labels can be augmented to reflect your unique business requirements.

ID your information sources:

Identify the sources of information from which labels are going to be imported to your micro-segmentation solution. Labels can be found in various places: Public Cloud instance tags, Kubernetes labels, orchestration platforms, CMDB systems and more. Once this is done effectively, you can plan (and later implement) a way to extract the information from these sources into your micro-segmentation solution. At first this might be a semi-manual process, but long term, the goal is to create a fully automated, reliable solution here.
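That "semi-manual first pass" might look like the following sketch: pulling label data from differently shaped sources and normalizing it into one inventory. The payload shapes and field names below are invented for illustration; real cloud tag and Kubernetes label APIs differ per platform.

```python
# Hypothetical normalizers for two label sources with different payload shapes.
def from_cloud_tags(instance):
    # assumed shape: {"Name": ..., "Tags": [{"Key": ..., "Value": ...}, ...]}
    return {t["Key"].capitalize(): t["Value"] for t in instance["Tags"]}

def from_k8s_labels(pod):
    # assumed shape: {"metadata": {"labels": {key: value, ...}}}
    return {k.capitalize(): v for k, v in pod["metadata"]["labels"].items()}

inventory = {}
inventory["srv-1"] = from_cloud_tags(
    {"Name": "srv-1", "Tags": [{"Key": "env", "Value": "prod"}]})
inventory["pod-billing"] = from_k8s_labels(
    {"metadata": {"labels": {"app": "billing"}}})
print(inventory)  # {'srv-1': {'Env': 'prod'}, 'pod-billing': {'App': 'billing'}}
```

The long-term goal stated above is to replace the hand-fed dictionaries with scheduled, automated pulls from each source, so the inventory stays reliable as workloads churn.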

Further considerations for a successful micro-segmentation deployment

Congratulations – you’ve laid the groundwork for your micro-segmentation project. But there is more to consider to make sure your deployment is solid and seamless. Want to find out about the next set of key considerations for ensuring micro-segmentation success?

Download our white paper now and learn how to nail down all the considerations for your micro-segmentation deployment.

A Deep Dive into Point of Sale Security

Many businesses think of their Point of Sale (POS) systems as an extension of a cashier behind a sales desk. But with multiple risk factors to consider, such as network connectivity, open ports, internet access, and communication with the most sensitive data a company handles, POS solutions are more accurately an extension of a company’s data center, a remote branch of its critical applications. Seen this way, they constitute a high-threat environment that needs a targeted security strategy.

Understanding a Unique Attack Surface

Distributed geographically, POS systems can be found in varied locations at multiple branches, making it difficult to keep track of each device individually and to monitor their connections as a group. They cover in-store terminals, as well as public kiosks and self-service stations in places like shopping malls, airports, and hospitals. Multiple factors, from a lack of resources to logistical difficulties, can make it nearly impossible to secure these devices at the source or to react quickly enough in case of a vulnerability or a breach. Remote IT teams often lack the visibility to accurately see data and communication flows. This creates blind spots that prevent a full understanding of the open risks across a spread-out network. Threats are exacerbated further by the vulnerabilities of the old operating systems that many POS solutions run.

Underestimating the extent of this risk could be a devastating oversight. POS solutions are connected to many of a business’s main assets, from customer databases to credit card information and internal payment systems, to name a few. The devices themselves are very exposed, as they are accessible to anyone, from a waiter in a restaurant to a passer-by in a department store. This makes them high-risk for physical attacks, such as downloading a malicious application through USB, as well as remote attacks, like exploiting the terminal through exposed interfaces. Recently, inherent vulnerabilities have been found in mobile POS solutions from vendors that include PayPal, Square and iZettle, because of their use of Bluetooth and third-party mobile apps. According to the security researchers who uncovered the vulnerabilities, these “could allow unscrupulous merchants to raid the accounts of customers or attackers to steal credit card data.”

In order to allow system administrators remote access for support and maintenance, POS systems are often connected to the internet, leaving them exposed to remote attacks, too. In fact, 62% of attacks on POS environments are carried out through remote access. For business decision makers, ensuring that staff are comfortable using the system needs to be a priority, which can make security a balancing act. A straightforward onboarding process, a simple UI, and flexibility for non-technical staff are all important factors, yet they can open up new attack vectors when security considerations are left behind.

One example of a remote attack is the POSeidon malware which includes a memory scraper and keylogger, so that credit card details and other credentials can be gathered on the infected machine and sent to the hackers. POSeidon gains access through third party remote support tools such as LogMeIn. From this easy access point, attackers then have room to move across a business network by escalating user privileges or making lateral moves.

High risk yet hard to secure, POS systems are a serious security blind spot for many businesses.

Safeguarding this Complex Environment and Getting Ahead of the Threat Landscape

Firstly, assume your POS environment is compromised. You need to ensure that your data is safe, and the attacker is unable to make movements across your network to access critical assets and core servers. At the top of your list should be preventing an attacker from gaining access to your payment systems, protecting customer cardholder information and sensitive data.

The first step is visibility. While some businesses will wait for operational slowdown or clear evidence of a breach before they look for any anomalies, a complex environment needs full contextual visibility of the ecosystem and all application communication within. Security teams will then be able to accurately identify suspicious activity and where it’s taking place, such as which executables are communicating with the internet where they shouldn’t be. A system that generates reports on high severity incidents can show you what needs to be analyzed further.

Now that you have detail on the communication among your critical applications, you can identify the expected behavior and create tight segmentation policy. Block rules, with application process context, can be used to contain any potential threat, ensuring that any future attacker in the data center would be completely isolated without disrupting business processes or affecting performance.

The risk goes in both directions. Next, let’s imagine your POS is secure, but it’s your data center that is under attack. Your POS is an obvious target, with links to sensitive data and customer information. Micro-segmentation can protect this valuable environment and stop an attack from getting any further once it’s already in progress, without limiting the communication that your payment system needs to keep business running as usual.

With visibility and clarity, you can create and enforce the right policies, crafted around the strict boundaries that your POS application needs to communicate, and no further. Some examples of policy include:

    • Limiting outgoing internet connections to only the relevant servers and applications
    • Limiting incoming internet connections to only specific machines or labels
    • Building default block rules for ports that are not in use
    • Creating block rules that detail known malicious processes for network connectivity
    • Whitelisting applications to prevent unauthorized apps from running on the POS
    • Creating strict allow rules that enable only the processes that should communicate, and blocking all other potential traffic
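The rule types above can be sketched as a default-deny evaluation with process context. The rule fields, process names, and destinations below are illustrative assumptions, not a real product schema or actual malware inventory.

```python
# Minimal sketch: explicit blocks first, then strict allows, then default-deny.
ALLOW_RULES = [
    # hypothetical: the payment process may reach the internal payment service only
    {"process": "pos_payment.exe", "dst": "payments.internal", "port": 443},
]
BLOCK_PROCESSES = {"credential_scraper.exe"}  # placeholder for known-bad processes

def decide(conn):
    """Evaluate a connection dict with process, dst, and port fields."""
    if conn["process"] in BLOCK_PROCESSES:
        return "block"                          # explicit block for known malicious processes
    for rule in ALLOW_RULES:
        if (conn["process"] == rule["process"]
                and conn["dst"] == rule["dst"]
                and conn["port"] == rule["port"]):
            return "allow"                      # a strict allow rule matched
    return "block"                              # default-deny: everything else is blocked

print(decide({"process": "pos_payment.exe", "dst": "payments.internal", "port": 443}))  # allow
print(decide({"process": "browser.exe", "dst": "example.com", "port": 80}))             # block
```

Note the ordering: because the final branch denies everything unmatched, an attacker's new executable is blocked even if it was never explicitly listed.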

Tight policy means that your business can detect any attempt to connect to other services or communicate with an external application, reducing risk and potential damage. With a flexible policy engine, these policies will be automatically copied to any new terminal that is deployed within the network, allowing you to adapt and scale automatically, with no manual moves, changes, or adds slowing down business processes.

Don’t Risk Leaving this Essential Touchpoint Unsecured

Point of Sale solutions are a high-risk open door for attackers to access some of your most critical infrastructure and assets. Without adequate protection, a breach could grind your business to a halt and cost you dearly in both financial damage and brand reputation.

Intelligent micro-segmentation policy can isolate an attacker quickly to stop them doing any further damage, and set up strong rules that keep your network proactively safe against any potential risk. Combined with integrated breach detection capabilities, this technology allows for quick response and isolation of an attacker before the threat is able to spread and create more damage.

Want to learn more about how micro-segmentation can protect your endpoints while hardening the overall security for your data center?


Micro-Segmentation and Application Discovery – Gaining Context for Accurate Action

The infrastructure and techniques used to deliver applications are undergoing a significant transformation. Many organizations now use the public cloud extensively alongside traditional on-premises data centers, and DevOps-focused deployment techniques and processes are bringing rapid and constant change to application delivery infrastructure.

While this transformation is realizing many positive business benefits, a side effect is that it is now more challenging than ever for IT and security teams to maintain both point-in-time and historical awareness of all application activity. Achieving the best possible security protection, compliance posture, and application performance levels amidst constant change is only possible through an effective application discovery process that spans all of an organization’s environments and application delivery technologies.

Essential Application Discovery Process Components

Application discovery plays a valuable role for organizations defining and implementing a micro-segmentation strategy. Micro-segmentation solutions like GuardiCore Centra are more powerful and simpler to use when they have a complete and granular representation of an organization’s infrastructure as a foundation.

Application discovery is achieved through a multi-step process that includes the following key elements:

  • Collecting and aggregating data from throughout the infrastructure
  • Organizing and labeling data for business context
  • Presenting application discovery data in a visual and relevant manner
  • Making it seamless to use application discovery insights to create policies and respond to security incidents

Each step has its own nuances, which require consideration when evaluating micro-segmentation technologies.
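As a toy end-to-end illustration of the first three steps, the sketch below collects raw flow records, attaches label context, and aggregates the traffic for display. The record fields, addresses, and labels are invented for illustration.

```python
# Step 1 (collect): raw flow records as they might arrive from sensors.
raw_flows = [
    {"src": "10.0.1.4", "dst": "10.0.2.8", "port": 5432},
    {"src": "10.0.1.4", "dst": "10.0.2.9", "port": 5432},
]

# Step 2 (label): business context per asset, e.g. imported from a CMDB.
labels = {
    "10.0.1.4": "Billing/Web",
    "10.0.2.8": "Billing/DB",
    "10.0.2.9": "Billing/DB",
}

def contextualize(flows, labels):
    """Steps 2-3: replace addresses with labels and aggregate duplicate edges."""
    grouped = {}
    for f in flows:
        key = (labels.get(f["src"], f["src"]),
               labels.get(f["dst"], f["dst"]),
               f["port"])
        grouped[key] = grouped.get(key, 0) + 1
    return grouped

print(contextualize(raw_flows, labels))
# {('Billing/Web', 'Billing/DB', 5432): 2}
```

Two raw flows collapse into one labeled edge, which is the transformation that turns an unreadable flow dump into the application-level maps discussed in the visualization step below.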

Application Data Collection and Aggregation

Modern application delivery infrastructure often consists of numerous physical locations, including third-party cloud infrastructure, and a wide range of application types and delivery models. This can make it challenging to collect comprehensive data from throughout the infrastructure. For example, GuardiCore Centra relies on multiple techniques to collect data, including:

  • Deploying agents on application hosts to monitor application activity
  • Collecting detailed network data through TAP/SPAN ports and virtual SPANs
  • Collecting VPC flow logs from cloud providers

While each of these techniques is valuable, agent-based collection in particular ensures that Layer 7 granularity is included in the application discovery data set.

Once collected, application activity data must be aggregated and stored in a scalable manner to support the subsequent steps in the application discovery process.

Applying Context to Application Discovery Data

Whenever data is collected from disparate sources, it is difficult to interpret and derive value from it in its raw form. Therefore, it is critical to organize data and present it in context that is relevant to the organization. GuardiCore Centra employs several complementary techniques to simplify and, when possible, automate this essential step, including:

    • Querying an organization’s existing data sources, such as orchestration tools and configuration management databases, using REST APIs
    • Automatically applying dynamic labels based on pre-defined logic
    • Discovering labels using agents deployed on application hosts
    • Giving customers a simple and flexible framework to create labels manually

A sound labeling approach makes it easy for an organization to view application activity in meaningful ways using attributes such as environment, application type, regulatory scope, location, role, or owner. While these are common examples, GuardiCore Centra’s labeling framework is also highly flexible, so organizations can define a custom label hierarchy to accommodate any specialized needs.

Visualizing Application Discovery Information

Once application data has been collected, harmonized, and contextualized, the next step is to present it in a manner that is meaningful to IT professionals, security experts, and application owners. The following examples from GuardiCore Centra illustrate the impact that the preceding three steps have on the quality of the visual representation of application discovery data.

Without context, raw data may look something like this:


As you can see, this view contains a large amount of information but provides very little insight into which applications exist in the environment and how they interact with one another.

In contrast, once context has been added through labeling, more meaningful visualizations like the following example become possible:


In this case, the underlying data is presented in a manner that defines a specific application, its components, and its flows very clearly.

When evaluating possible application discovery data visualization approaches, it is important to consider both real-time and historical visualization needs. Real-time data is helpful for assessing additional policy needs or responding to in-progress security incidents. Historical data is also extremely valuable for compliance audits, security incident forensics, and post-mortems.

Moving from Application Discovery to Action

A final consideration when implementing an application discovery process is how best to make the collected data actionable. Once security teams and application stakeholders gain a complete view of application activity across their infrastructure, they often identify new legitimate applications that must be protected, unauthorized applications that they would like to block, possible security enhancements for existing applications, and even active security incidents that must be contained. Therefore, it is important to have seamless linkage between application discovery and micro-segmentation policy definition.

GuardiCore Centra accomplishes this by making application discovery visualizations directly actionable through point-and-click actions. Administrators can click on assets and flows in the visualization and gain immediate access to policy definition options. They can even create sophisticated compound policies through GuardiCore’s intuitive, highly visual interface.


This final step illustrates the mutually-beneficial relationship between application discovery and micro-segmentation. A well-implemented application discovery process gives an organization’s application stakeholders both a clear view of application activity across all environments and an intuitive path to positively affect it through granular micro-segmentation policies. Similarly, once micro-segmentation policies have been implemented, the ability to view them in an up-to-date visualization of the infrastructure at any time makes it easier to update and maintain policies as environments change and new threats emerge.

The challenge of implementing an integrated application discovery process that spans all environments and delivery models may seem daunting to many organizations. However, by breaking the problem down into its four key elements and considering how each can be addressed more effectively with the help of flexible technologies like GuardiCore Centra, security teams and other stakeholders can set their application discovery process on a path to success.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Secure Critical Applications

Today’s information security teams face two major trends that make it more challenging than ever to secure critical applications. The first is that IT infrastructure is evolving rapidly and continuously. Hybrid cloud architectures with a combination of on-premises and cloud workloads are now the norm. There are also now a multitude of application workload deployment methods, including bare-metal servers, virtualization platforms, cloud instances, and containers. This growing heterogeneity, combined with increased automation, makes it more challenging for security teams to stay current with sanctioned application usage, much less malicious activity.

The second major challenge that makes it difficult to secure critical applications is that attackers are growing more targeted and sophisticated over time. As security technologies become more effective at detecting and stopping more generic, broad-scale attacks, attackers are shifting to more deliberate techniques focused on specific targets. These efforts are aided by the rapid growth of east-west traffic in enterprise environments as application architectures become more distributed and as cloud workloads introduce additional layers of abstraction. By analyzing this east-west traffic for clues about how applications function and interact with each other, attackers can identify potential attack vectors. The large quantity of east-west traffic also provides potential cover when attacks are advanced, as attackers often attempt to blend unauthorized lateral movement in with legitimate traffic.

Securing Critical Applications with Micro-Segmentation

Implementing a sound micro-segmentation approach is one of the best steps that security teams can take to gain greater infrastructure visibility and secure critical applications. While the concept of isolating applications and application components is not new, micro-segmentation solutions like GuardiCore Centra have improved on this concept in a number of ways that help security teams overcome the challenges described above.

It’s important for organizations considering micro-segmentation to avoid becoming overwhelmed by its broad range of applications. While the flexibility that micro-segmentation offers is one of its key advantages over alternative security approaches, attempting to address every possible micro-segmentation use case on day one is impractical. The best results are often achieved through a phased approach. Focusing on the most critical applications early in a micro-segmentation rollout process is an excellent way to deliver value to the organization quickly while developing a greater understanding of how micro-segmentation can be applied to additional use cases in subsequent phases.

Process-Level Granularity

The most significant benefit that micro-segmentation provides over more traditional segmentation approaches is that it enables visibility and control at the process level. This gives security teams much greater ability to secure critical applications by making it possible to align segmentation policies with application logic. Application-aware micro-segmentation policies that allow known legitimate flows while blocking everything else significantly reduce attackers’ ability to move laterally and blend in with legitimate east-west traffic.

Unified Data Center and Cloud Workload Protection

Another important advantage that micro-segmentation offers is a consistent policy approach for both on-premises and cloud workloads. While traditional segmentation approaches are often tied to specific environments, such as network infrastructure, a specific virtualization technology, or a specific cloud provider, micro-segmentation solutions like GuardiCore Centra are implemented at the workload level and can migrate with workloads as they move between environments. This makes it possible to secure critical applications in hybrid cloud infrastructure and prevent new security risks from being introduced as the result of infrastructure changes.

Platform Independence

In addition to providing a unified security approach across disparate environments, micro-segmentation solutions like GuardiCore Centra also work consistently across various operating systems and deployment models. This is essential at a time when many organizations have a blend of bare-metal servers, virtualized servers, containers, and cloud instances. Implementing micro-segmentation at the application level ensures that policies can persist as underlying deployment platform technologies change.

Common Workload Protection Needs

There are several categories of critical applications that exist in most organizations and are particularly challenging – and particularly important – to secure.

Protecting High-Value Targets

Every organization has infrastructure components that play a central role in governing access to other systems throughout the environment. Examples include domain controllers, privileged access management systems, and jump servers. It is essential to have a well-considered workload protection strategy for these systems, as a compromise gives an attacker extensive ability to move laterally toward systems containing sensitive or highly valuable data. Micro-segmentation policies with process-level granularity allow security teams to tightly manage how these systems are used, reducing the risk of unauthorized use.

Cloud Workload Protection

As more workloads migrate to the cloud, traditional security controls are often supplanted by security settings provided by a specific cloud provider. While the native capabilities that cloud providers offer are often valuable, they create situations in which security teams must segment their environment one way on-premises and another way in the cloud. This creates greater potential for new security issues as a result of confusion, misconfiguration, or lack of clarity about roles and responsibilities.

The challenge is compounded when organizations use more than one cloud provider, as each has its own set of security frameworks. Because micro-segmentation is platform-independent, the introduction of cloud workloads does not significantly increase the attack surface. Moreover, micro-segmentation can be performed consistently across multiple cloud platforms as a complement to any native cloud provider security features in use, avoiding confusion and providing greater flexibility to migrate workloads between cloud providers.

New Application Deployment Technologies

While bare-metal servers, virtualized servers, and cloud instances all preserve the traditional Windows or Linux operating system deployment model, new technologies such as containers represent a fundamentally different application deployment approach with a unique set of workload protection challenges. Implementing a micro-segmentation solution that includes support for containerized applications is another step organizations can take to secure critical applications in a manner that will persist as the underlying infrastructure evolves over time.

Critical Applications in Specific Industries

Along with the general steps that all organizations should take to secure critical applications, many industries have unique workload protection challenges based on the types of data they store or their specific regulatory requirements.

Examples include:

  • Healthcare applications that store or access protected health information (PHI) for patients that is both confidential and subject to HIPAA regulation.
  • Financial services applications that contain extensive personally identifiable information (PII) and other sensitive data that is subject to industry regulations like PCI DSS.
  • Law firm applications that store sensitive information that must be protected for client confidentiality reasons.

In these and other vertical-specific scenarios, micro-segmentation technologies can be used to both enforce required regulatory boundaries within the infrastructure and gain real-time and historical visibility to support regulatory audits.

Decoupling Security from Infrastructure

While there are a variety of factors that security teams must consider when securing critical applications in their organization, workload protection efforts do not need to be complicated by IT infrastructure evolution. By using micro-segmentation to align security policies with application functionality rather than underlying infrastructure, security teams can protect key applications effectively even as deployment approaches change or diversify. In addition, the added granularity of control that micro-segmentation provides makes it easier to address organization- or industry-specific security requirements effectively and consistently.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Are you Following Micro-Segmentation Best Practices?

With IT infrastructures becoming increasingly virtualized and software-defined, micro-segmentation is fast becoming a priority for IT teams seeking to enhance security and reduce the attack surface of their data center and cloud environments. With its fine-grained approach to segmentation policy, micro-segmentation enables granular control of communication flows between critical application components, going a step beyond traditional network segmentation methods in support of a move to a Zero Trust security model.

Finding the Right Segmentation Balance

If not approached in the right way, micro-segmentation can be a complex process to plan, implement, and manage. For example, overzealous organizations may rush to implement fine-grained policies across their entire environment, leading to over-segmentation, which can degrade the availability of IT applications and services, increase security complexity and overhead, and actually increase risk. At the same time, businesses need to be aware of the risks of under-segmentation, which leaves the attack surface dangerously large in the event of a breach.

With a well-thought-out approach to micro-segmentation, organizations can see fast time to value for high-priority, short-term use cases, while also putting in place the right structure for a broader implementation of micro-segmentation across future data center architectures. To achieve your micro-segmentation goals without adding unnecessary complexity, consider the following micro-segmentation security best practices.

Start with Granular Visibility Into Your Environment

It’s simple when you think about it – how can you secure what you can’t see? Whether you’re using application segmentation to reduce the risk between individual or groups of applications, or tier segmentation to define the rules for communication within the same application cluster, you need visibility into workloads and flows, at a process level. Process-level visibility allows security administrators to identify servers with similar roles and shared responsibilities so they can be easily grouped for the purpose of establishing security policies.

At first blush, this may seem to be a daunting task and is likely the first impediment to effective micro-segmentation. However, with the aid of graphic visualization tools that enable administrators to automatically discover and accurately map their data center applications and the communication processes between them, the complexity of implementing an effective micro-segmentation strategy can be greatly simplified.

Once administrators have gained this depth of visibility, they can begin to filter and organize applications into groups for the purpose of setting common security policies – for example, all applications related to a particular workflow or business function. Best practice is then to create policies that can be tested and refined as needed for each defined group.
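To illustrate the grouping step (with invented discovery data; a real visualization tool derives this from observed traffic), workloads exposing identical service signatures can be clustered into candidate policy groups:

```python
from collections import defaultdict

# Invented discovery data: workload -> the (process, port) services it exposes.
observed = {
    "srv-01": frozenset({("nginx", 443)}),
    "srv-02": frozenset({("nginx", 443)}),
    "srv-03": frozenset({("postgres", 5432)}),
}

# Workloads with the same service signature likely share a role and can be
# grouped under a single label for a common security policy.
groups = defaultdict(list)
for host, signature in sorted(observed.items()):
    groups[signature].append(host)

for signature, hosts in groups.items():
    print(hosts, "->", sorted(signature))
```

Real groupings would also factor in observed peers and flow direction, but the principle is the same: shared behavior suggests a shared policy group.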

Micro-Segmentation Best Practices for Choosing the Right Model

There are two basic models for implementing micro-segmentation in a data center or cloud environment: network-centric, which typically leverages hypervisor-based virtual firewalls or security groups in cloud environments, and application-centric, which typically uses agent-based distributed firewalls. Both have pros and cons.

In a network-centric model, traffic control is managed at network choke points, by third-party controls, or by pushing rules into each workload’s existing network enforcement mechanism.

In contrast, an application-centric model deploys agents onto the workload itself. This has a number of benefits. Visibility is unmatched, extending down to Layer 7 and unimpeded by the constraints or encryption that proprietary applications may impose. An agent-based solution is also suitable across varied infrastructures and operational environments. This provides one consistent method across technologies, even as you invest in containers and other microservices-based application development and delivery models.

Additionally, as there are no choke points to consider, the policy is entirely scalable, and can follow the workload even as it moves between environments, from on-premises to public cloud and back. Also, an application-centric approach allows you to define more granular policies, which reduces the attack surface beyond what can be accomplished with a network-centric model. Tools built for a specific environment are simply not good enough for hybrid multi-cloud data center needs, which explains why agent-based solutions have become micro-segmentation security best practices in recent years.

Agent-based approaches also align more easily with the DevOps models most enterprises use today. Businesses can leverage automation and autoscaling to streamline provisioning and management of workloads, and micro-segmentation policies can be incorporated easily and dynamically. There is no need for the manual moves, adds, and changes required in the network-centric model.

Define “Early Win” Use Cases

Organizations that are successful with micro-segmentation typically start by focusing on projects that are tangible, fairly easy to complete, and in which the benefits will be readily apparent. These typically include something as basic as environment segmentation, such as separating servers and workloads in development or quality assurance from those in production.

Another common starting point is the isolation of applications for compliance purposes, known to be one of micro-segmentation security best practices. Regulatory regimes such as SWIFT, PCI, or HIPAA typically spell out the types of data and processes that must be protected from everyday network traffic. Micro-segmentation allows for the quick isolation of these applications and data, even if the application workloads are distributed across different environments, such as on-premises data centers and public clouds.

Organizations may also undertake projects to restrict access to data center assets or services from outside users or Internet of Things devices. In health care, hospitals will use micro-segmentation to isolate medical devices from the general network. Businesses might use micro-segmentation as a means of traditional ring-fencing to isolate their most critical applications.

The common thread running through these examples is that they represent business needs and challenges for which micro-segmentation is ideally suited. They are easily defined projects with clear business objectives, while at the same time providing a proving ground for micro-segmentation.

Think Long Term and Consider the Cloud

Organizations that have successfully implemented micro-segmentation typically take a phased approach, piloting on a few priority projects, getting comfortable with the tools and the process, and gradually expanding. A pragmatic approach to micro-segmentation is to align your requirements with both your current and future-state data center architectures.

A key component of this is consideration of “coverage” in your micro-segmentation tool stack. Look for tools that cover not only a single environment, but provide support for workloads in both your current and future data center architectures. This typically includes workloads running on legacy systems, bare metal servers, virtualized environments, containers and public cloud.

In addition, don’t assume that native security controls offered by IaaS or public cloud services will be adequate to fully protect your cloud workloads. Cloud service providers operate on a shared responsibility model, in which the provider secures the cloud infrastructure while customers are responsible for their own operating systems, applications, and data. A cloud provider’s controls are only effective in that provider’s environment, so enterprises would have to manage multiple security platforms and make manual adjustments as applications move among different cloud environments. Furthermore, most native security controls operate at the port level (Layer 4), not at the process level (Layer 7) where vulnerable applications reside, which means they will not reduce the attack surface sufficiently to be effective.

Integrate with Complementary Controls

When evaluating solutions, another micro-segmentation best practice is to look for those that include value-added, integrated complementary controls. This helps reduce security management complexity, as you can find solutions that give you more than just micro-segmentation out of the box.

Single-platform micro-segmentation solutions might be effective at segmenting your applications and workloads to reduce risk. Micro-segmentation security best practices, however, call for a solution that takes you to the next level. Threat detection and response is a perfect example of a valuable complementary control. It allows you to do more than simply protect processes and check compliance off your to-do list. Of course, both breach detection and incident response are must-haves for any complex IT infrastructure.

The difference with an all-in-one solution is the reduced administrative overhead of making disparate solutions work in tandem. While micro-segmentation tackles risk reduction in both data centers and clouds, threat detection and incident response take the obvious next step of quickly detecting and mitigating active breaches, which can help you dramatically reduce dwell time and the cost and impact of a breach.

A Summary of Micro-Segmentation Security Best Practices

From choosing an application-centric model that deploys agents onto the workload itself and comes with valuable complementary controls, to ensuring visibility from the start and looking for the ‘quick wins’ that provide early value, following these micro-segmentation security best practices will give your business the best chance of successful implementation.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Policy Enforcement Essentials for your Micro-Segmentation Strategy

Policy enforcement is one of those terms that can have varied meanings depending on the context. When discussing data center security, application and network policy enforcement refers to any controls that your business uses to govern behavior and access to your network and applications, with special emphasis on east-west (E-W) traffic patterns. The data center and your cloud environments are harder to manage than other parts of your network, due to the special characteristics of their virtualization layers. In addition, most of the traffic inside the data center never traverses a choke point, so businesses lose the ability to use these parts of the topology as a control point.

Network policy enforcement should in theory allow organizations to ensure that only authorized applications and users are communicating with each other while enabling them to meet their own governance, security, and compliance requirements. There are a number of challenges in a hybrid data center that make creating and enforcing flexible policy more difficult. These include:

  • Creating policy that finds the right scope
  • Reducing attack surface in case of a breach
  • Remaining adaptable despite granular policy
  • Managing networks with thousands or more workloads over varied locations
  • Creating policy across multiple cloud architectures
  • Enforcing at both the network and process level for ultimate risk reduction

Micro-segmentation technology was invented to solve these challenges, allowing businesses to secure the data center from the inside, prevent lateral movement, meet compliance requirements, and gain east-west traffic visibility. With the right micro-segmentation policy, these rules can be truly granular – not only keeping environments from interacting with one another through coarse segmentation, but also enforcing fine-grained policy. With GuardiCore Centra, your micro-segmentation project gains enforcement capabilities that let you orchestrate at the flow level, and even down to the process level, on all platforms. Stakeholders can thus meet different security and compliance mandates, using micro-segmentation both as a security solution and as a compensating control for compliance where other tools can’t be used.


Finding the Right Scope for your Policy Enforcement Strategy

Anyone involved in compliance and security knows that defining the scope is the biggest initial challenge. One of the first stages of creating effective micro-segmentation policy is to be clear about your policy objectives, both for business and security. If you’re just looking at security, the more granular your policies are, the stronger your security posture is. However, this could also limit communication and flexibility. The wrong policy choice could cause frustration or delays for your business. Overall, smart micro-segmentation policy allows you to enforce a strong security policy without compromising your communications or your business goals.

Network policy enforcement is well known as a method to help businesses meet compliance regulations. Take PCI DSS, for example. By reducing the scope of what can reach your cardholder data environment (CDE), you dramatically reduce the work it takes to achieve compliance. By building application-aware policies, you can enforce system access to specific data. If your policies segment all the way to Layer 7, the application layer, attackers who have breached your perimeter still can’t pivot from an out-of-scope area to one that is in scope. With tier segmentation, this can be enforced even within the same application cluster.

Strategies for Practical Implementation of Micro-segmentation Policy

First, you will want to map out your business objectives and gain visibility of your environment, understanding application dependencies and flows within your architecture as a whole. Then, you can start to think about the kinds of controls that are required for your business and teams. This will allow you to set the right policy enforcement. A good security solution will allow you to start with global, high-level rules, and then add layers, increasing the granularity of your policies.

Some rules will apply to large segments, such as only allowing sales staff to access the sales applications, allowing DNS resolution only through the internal, secure DNS cluster, or keeping the production environment separate from the test environment altogether. With GuardiCore you can also define blocking rules as part of your micro-segmentation policy strategy. In fact, you can combine both allow rules and block rules within the same policy. This enables you to define rules like blocking non-admin access to SSH on the network.
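A minimal sketch of how combined allow and block rules might be evaluated (a hypothetical engine, not GuardiCore’s actual rule syntax): explicit blocks are checked first, and anything matching neither list is denied by default.

```python
# Hypothetical policy evaluation: block rules take precedence over allow rules,
# and unmatched traffic falls through to a default deny.
def evaluate(flow, block_rules, allow_rules):
    if any(rule(flow) for rule in block_rules):
        return "block"
    if any(rule(flow) for rule in allow_rules):
        return "allow"
    return "block"  # default-deny: nothing moves unless explicitly allowed

# Block non-admin SSH anywhere on the network.
block_rules = [lambda f: f["port"] == 22 and f["src_role"] != "admin"]
# Allow sales staff to reach the sales applications.
allow_rules = [lambda f: f["src_group"] == "sales" and f["dst_group"] == "sales-apps"]

flow = {"port": 22, "src_role": "dev", "src_group": "sales", "dst_group": "sales-apps"}
print(evaluate(flow, block_rules, allow_rules))  # the block rule wins over the allow
```

Evaluating blocks before allows is what lets a narrow prohibition (non-admin SSH) carve an exception out of a broader permission without rewriting it.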

Micro-segmentation policy should allow you to be creative, providing the ability to use different collection and enforcement methods based on your clouds and network topologies. With this technology, you can set up the rules that balance your unique needs for flexibility and security.

The following are some examples of policy creation ideas. Some are coarse, while others show the benefits of enforcing at a granular level.

  • Separating Development, Production, and Test environments (as required by regulations like PCI DSS)
  • Restricting access to servers from non-server environments
  • Application segmentation inside environments, for example allowing the Sharepoint applications to communicate with internal storage while limiting other types of traffic
  • Tier segmentation inside application environments, such as communication between a web server and a DB server
  • Restricting admin access to servers to comply with EU regulations such as GDPR
  • Blocking all unencrypted protocols, such as FTP or Telnet, within your data center traffic
  • Denying a specific application tier or data center area from communicating with the internet

Building Flexible Policy Enforcement That Works in the Real World

Businesses are increasingly moving away from static environments with flat structures or purely on-premises data centers. Whichever policy engine a company chooses needs to be future-proof, allowing policy creation that gives control over auto-scaled workloads, expanding and contracting services, and processes that are constantly changing and adapting. Hybrid-cloud data centers are a great example of this kind of environment, where traditional, inflexible policy engines can’t provide adequate dynamic provisioning.

In contrast, a flexible policy engine will support the latest breakthroughs in policy enforcement, such as the ability to support auto-scaling environments or to let the policy follow the workload, no matter what platform it runs on or which kind of cloud it is deployed in. This is impossible if policy is expressed in IP addresses, ranges, or VLANs. To really get the benefits of micro-segmentation technology in network policy enforcement, your policy engine and labeling need to breathe with your data center, providing different models of control and delivering both quick wins and ongoing risk reduction. In other words, you can see clearly how the entire data center behaves and communicates, application-wise, and turn that into a policy using allow rules, while adding block rules to enforce compliance and best-practice security requirements.
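The contrast with address-based policy can be sketched as follows (a hypothetical label scheme, not a specific product’s syntax). Because the rule selects on labels, it keeps matching a workload even after a migration changes its IP address:

```python
# A policy rule expressed against labels rather than addresses.
rule = {"src": {"env": "prod", "app": "billing"},
        "dst": {"env": "prod", "app": "billing-db"},
        "port": 5432}

def matches(labels, selector):
    """A workload matches a selector if it carries every label the selector requires."""
    return all(labels.get(k) == v for k, v in selector.items())

workload = {"env": "prod", "app": "billing", "ip": "10.0.1.7"}

# The workload migrates to a cloud instance and gets a new IP address...
workload["ip"] = "172.31.5.20"

# ...but the label-based rule still applies, unchanged.
assert matches(workload, rule["src"])
```

An equivalent rule written as `src == 10.0.1.7` would have silently stopped matching at the moment of migration, which is exactly the fragility the paragraph above describes.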

Being Smart About Network Policy Enforcement

Not all micro-segmentation policy enforcement solutions are created equal. With the help of a flexible policy engine that supports both allow and block rules, and includes policies built and enforced at the process level as well as the network level, you can take policy enforcement to the next level. Firstly, you can achieve visibility over your entire environment. Secondly, you can enforce at a level of granularity that your organizational maturity can tolerate. Thirdly, you can eliminate a lot of risk fast, using a small number of rules.

Using a micro-segmentation policy enforcement engine that supports granular deep visibility and micro-segmentation policies makes your investment yield even more results, faster. With such an engine, the real-time view of the dependencies and communication on your network can be turned into policies that suit the context of your unique business objectives and needs, strengthening your security posture without limiting business agility.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Implementing Micro-Segmentation Insights, Part 2: Getting Internal Buy-In

In a recent blog, I revealed part one of my insights from implementing micro-segmentation projects with large customers. Vendors don’t always get the perspective of deep involvement in the execution of these projects, but I have been fortunate enough to cultivate relationships with our customers and have thus been granted an inner view. In my first blog of this series, I discussed short versus long-term objectives and the importance of knowing your goals and breaking them down into phases to ensure that you’re truly working on the important matters first and foremost. In this blog, I want to get into the importance of getting internal buy-in from other teams in order to enable improved implementation. To make the process easier for all involved, “selling” the project to other teams is an important early step.

More often than not, the segmentation project is driven by the security organization, and they are the ones who see immediate value from it, but they need the collaboration of other teams to help them deploy the product. It doesn’t really matter whether the product is an overlay (usually based on agents) or an underlay (part of the infrastructure).

Some teams will need to carry more of the weight in deploying such a project: networking, application, infrastructure, and systems teams, among others. To achieve the collaboration of those teams, it helps to carry a carrot, not just a stick. Getting early buy-in on these projects and planning out collaboration with the right people will make the process much easier in the long run. To do so, the solution needs to be “sold” internally. And just like any sales process, the more prepared and equipped you are, the smoother it will go.

As is true with any sales process, there will be “early adopters” and “late adopters.” In our experience, when the project team was well prepared and presented the benefits of the solution to the other teams, the impact was significant. If you can show the application team how they can benefit from the solution, they will not just drop their objections; they will push for the deployment.

But there is a significant BUT to this. The product you choose for your segmentation needs to be able to deliver those carrots. It needs to support use cases that are not the clear, immediate concern of its direct, original audience. Here is a small example: let’s say you need to convince the application owner to install the agents that will enforce the policy. The application owner usually has little interest in the security use case and might object or hesitate because of the additional player introduced into the mix, wondering how it will affect stability, performance, and so on. But if you can demonstrate that they will gain value from the product, they will in fact become the one pushing the deployment.

One such value proposition we constantly see is “getting visibility into your application.” This is of course a great promise, but for the application owner to actually gain value from this visibility, it must have certain properties: it needs to be Layer 7 with application context; it needs to collect data and store it historically (note that for building policy the historical aspect of the data is not important at all; you just need to know what connections might happen); and it needs to be searchable and filterable to allow simple, convenient consumption by the application owner. This is just one example, but the important lesson is to show the many ways in which the solution can expand beyond the obvious reason for implementation, helping ensure the buy-in of other teams by illustrating other benefits.

So, when starting to deploy a segmentation solution, make sure that you prepare an onboarding package for the teams you need to cooperate with that includes ways they can leverage the product, to expedite the adoption process and meet the project’s deadlines. Equally important, make sure that the product you choose can actually cater to those use cases; many of the products on the market today miss that important point.

Learn more on our micro-segmentation hub or read about GuardiCore Centra for best practices.

Read part one of my insights from implementing micro-segmentation.

Reduce Attack Surface

Rapid adoption of cloud services by companies of all sizes is enabling many business benefits, most notably improved agility and lower IT infrastructure costs. However, as IT environments become more heterogeneous and geographically distributed in nature, many organizations are seeing their security attack surface multiply exponentially. This challenge is compounded by the accelerating rate of IT infrastructure change as more organizations embrace DevOps-style application deployment approaches and more extensive infrastructure automation.

Longstanding security practices such as system hardening, proactive vulnerability management, strong access controls, and network segmentation continue to play valuable roles in security teams’ attack surface reduction efforts. However, these measures alone are no longer sufficient in hybrid cloud environments for several reasons.

The first is that while these practices remain relevant, they do little to counteract the significant attack surface growth that cloud adoption and new application deployment models like containers introduce. In addition, it is difficult to implement these practices consistently across a hybrid cloud infrastructure, as they are often tied to a specific on-premises or cloud environment. Lastly, as application deployment models become more distributed and dynamic, they expose organizations to greater risk of unsanctioned lateral movement. As the volume of east/west traffic grows, network-based segmentation alone is too coarse to prevent attackers from exploiting open ports and services to expand their attack footprint and find exploitable vulnerabilities.

These realities are leading many security executives and industry experts to embrace micro-segmentation as a strategic priority. Implementing a holistic micro-segmentation approach that includes visualization capabilities and process-level policy controls is the most effective way to reduce attack surface as the cloud transforms IT infrastructure. Moreover, because micro-segmentation is performed at the workload level rather than at the infrastructure level, it can be implemented consistently throughout a hybrid cloud infrastructure and adapt seamlessly as environments change or workloads relocate.

Visualizing the Attack Surface

One of the most beneficial steps that security teams can take to reduce their attack surface is to gain a deeper understanding of how their application infrastructure functions and how it is evolving over time. By understanding the attack surface in detail, security teams can be much more effective at implementing new controls to reduce its size.

Using a micro-segmentation solution to visualize the environment makes it easier for security teams to identify indicators of compromise and assess their current state of potential exposure. This process should include visualizing individual applications (and their dependencies), systems, networks, and flows to clearly define expected behavior and identify areas where additional controls can be applied to reduce the attack surface.

Attack Surface Reduction with Micro-Segmentation

As more application workloads shift to public cloud and hybrid-cloud architectures, one area where existing attack surface reduction efforts often fall short is lateral movement detection and prevention. More distributed application architectures are significantly increasing the volume of “east/west” traffic in many data center and cloud environments. While much of this traffic is legitimate, trusted assets that are capable of communicating broadly within these environments are attractive targets for attackers. They are also much more accessible as the traditional concept of a network perimeter becomes less relevant.

When an asset is compromised, the first step that attackers often take is to probe and profile the environment around the compromised asset, seek out higher-value targets, and attempt to blend lateral movement in with legitimate application and network activity.

Micro-segmentation solutions can help defend against this type of attack by giving security teams the ability to create granular policies that:

  • Segment applications from each other
  • Segment the tiers within an application
  • Create a clear security boundary around assets with specific compliance or regulatory requirements
  • Enforce general corporate security policies and best practices throughout the infrastructure

These measures and others like them slow or block attackers’ efforts to move laterally. When implemented effectively, micro-segmentation applies the principle of least privilege more broadly throughout the infrastructure, even as it extends from the data center to one or more cloud platforms.
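The granular policies described above can be sketched as label-based rules evaluated with a default-deny fallback. The rule structure, field names, and labels below are illustrative assumptions for this sketch, not any particular vendor's policy API:

```python
# Hypothetical micro-segmentation policies at increasing granularity.
# The "app", "tier", and "scope" labels are assumptions for this sketch.
POLICIES = [
    # Segment applications from each other: billing may never reach HR
    {"action": "block", "source": {"app": "billing"}, "dest": {"app": "hr"}},
    # Segment tiers within an application: only the web tier may reach the app tier
    {"action": "allow", "source": {"app": "shop", "tier": "web"},
     "dest": {"app": "shop", "tier": "app"}, "port": 8443},
    # Security boundary around regulated assets: deny all traffic into PCI scope
    {"action": "block", "source": {}, "dest": {"scope": "pci"}},
]

def matches(selector, labels):
    """A workload matches a selector when every selector key/value is present."""
    return all(labels.get(k) == v for k, v in selector.items())

def evaluate(policies, src_labels, dst_labels, port):
    """Return the action of the first matching rule; default-deny otherwise."""
    for rule in policies:
        if (matches(rule["source"], src_labels)
                and matches(rule["dest"], dst_labels)
                and rule.get("port") in (None, port)):
            return rule["action"]
    return "block"  # least privilege: anything not explicitly allowed is blocked
```

Because the rules reference workload labels rather than IP addresses or VLANs, the same policy set can follow a workload between the data center and cloud platforms; for example, `evaluate(POLICIES, {"app": "shop", "tier": "web"}, {"app": "shop", "tier": "app"}, 8443)` returns `"allow"` regardless of where either tier runs.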

This focus on preventing lateral movement through in-depth governance of applications and flows reduces the available attack surface even as IT infrastructure grows and diversifies.

Beyond the Network Attack Surface

Successful use of micro-segmentation to reduce attack surface requires both Layer 4 controls and Layer 7, process-level controls. Process-level control allows security teams to truly align their security policies with specific application logic and regulatory requirements rather than viewing them purely through an infrastructure lens.

This application awareness is a key enabler of the attack surface reduction benefits of micro-segmentation. Granular policies that whitelist very specific process-level flows are much more effective at reducing attack surface than Layer 4 controls, which savvy attackers can circumvent by exploiting systems with trusted IP addresses and/or blending attacks in over allowed ports.
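The difference is easy to see in a small sketch. In a process-aware model, each flow record carries the originating binary, not just the 5-tuple; the field names, paths, and subnet below are hypothetical:

```python
# Hypothetical flow records: in the L7 model each flow carries the originating
# process path in addition to the usual network 5-tuple fields.
def l4_allows(flow):
    # Layer 4 rule: allow anything from the app subnet to the database port
    return flow["src_ip"].startswith("10.1.") and flow["dst_port"] == 5432

def l7_allows(flow):
    # Layer 7 rule: additionally require the sanctioned client process
    return l4_allows(flow) and flow["process"] == "/usr/sbin/app-server"

legit  = {"src_ip": "10.1.0.7", "dst_port": 5432, "process": "/usr/sbin/app-server"}
attack = {"src_ip": "10.1.0.7", "dst_port": 5432, "process": "/tmp/.hidden/nc"}

# An attacker riding a trusted IP over an allowed port looks identical at
# Layer 4; only the process-level policy distinguishes the unsanctioned binary.
assert l4_allows(legit) and l4_allows(attack)
assert l7_allows(legit) and not l7_allows(attack)
```

Both flows use a trusted source address and an allowed port, so the Layer 4 rule passes both; the process-level rule admits only the sanctioned application binary.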

Granular Layer 7 policy controls make it possible for organizations to achieve a zero-trust architecture in which only application activity and flows that represent known, sanctioned behavior are allowed to function unimpeded in the trusted environment.

The Importance of a Multi-OS, Multi-Environment Approach

As the transition to hybrid cloud environments accelerates, it is easy for organizations to overlook the extent to which this change magnifies the size of their attack surface. New physical environments, platforms, and application deployment methods create many new areas of potential exposure.

In addition to providing more granular control, another benefit that micro-segmentation provides to organizations seeking to reduce attack surface is that it achieves a unified security model that spans multiple operating systems and deployment environments. When policies are focused on specific processes and flows rather than infrastructure components, they can be applied across any mix of on-premises and cloud-hosted resources and even remain consistent when a specific workload moves between the data center and one or more cloud platforms. This is a major advantage over point security products that are tied to a specific environment or platform, as it enables attack surface to be minimized even as the environment becomes larger and more heterogeneous.

When selecting a micro-segmentation platform, it is important to validate that the solution works seamlessly across your entire infrastructure without any environment- or platform-specific dependencies. This includes validating that the level of control is consistent between Windows and Linux and that there is no dependence on built-in operating system firewalls, which do not offer the necessary flexibility and granularity.

While the transformation to cloud or hybrid-cloud IT infrastructure does have the potential to introduce new security risks, a well-managed micro-segmentation approach that is highly granular, de-coupled from the underlying infrastructure, and application aware can actually reduce the attack surface even as infrastructure diversity and complexity increase.

For more information on micro-segmentation, visit our Micro-Segmentation Hub