
Secure Critical Applications

Today’s information security teams face two major trends that make it more challenging than ever to secure critical applications. The first is that IT infrastructure is evolving rapidly and continuously. Hybrid cloud architectures with a combination of on-premises and cloud workloads are now the norm. There are also now a multitude of application workload deployment methods, including bare-metal servers, virtualization platforms, cloud instances, and containers. This growing heterogeneity, combined with increased automation, makes it more challenging for security teams to stay current with sanctioned application usage, much less malicious activity.

The second major challenge that makes it difficult to secure critical applications is that attackers are growing more targeted and sophisticated over time. As security technologies become more effective at detecting and stopping more generic, broad-scale attacks, attackers are shifting to more deliberate techniques focused on specific targets. These efforts are aided by the rapid growth of east-west traffic in enterprise environments as application architectures become more distributed and as cloud workloads introduce additional layers of abstraction. By analyzing this east-west traffic for clues about how applications function and interact with each other, attackers can identify potential attack vectors. The large quantity of east-west traffic also provides potential cover when attacks are advanced, as attackers often attempt to blend unauthorized lateral movement in with legitimate traffic.

Securing Critical Applications with Micro-Segmentation

Implementing a sound micro-segmentation approach is one of the best steps that security teams can take to gain greater infrastructure visibility and secure critical applications. While the concept of isolating applications and application components is not new, micro-segmentation solutions like GuardiCore Centra have improved on this concept in a number of ways that help security teams overcome the challenges described above.

It’s important for organizations considering micro-segmentation to avoid becoming overwhelmed by its broad range of applications. While the flexibility that micro-segmentation offers is one of its key advantages over alternative security approaches, attempting to address every possible micro-segmentation use case on day one is impractical. The best results are often achieved through a phased approach. Focusing on the most critical applications early in a micro-segmentation rollout process is an excellent way to deliver value to the organization quickly while developing a greater understanding of how micro-segmentation can be applied to additional use cases in subsequent phases.

Process-Level Granularity

The most significant benefit that micro-segmentation provides over more traditional segmentation approaches is that it enables visibility and control at the process level. This gives security teams much greater ability to secure critical applications by making it possible to align segmentation policies with application logic. Application-aware micro-segmentation policies that allow known legitimate flows while blocking everything else significantly reduce attackers’ ability to move laterally and blend in with legitimate east-west traffic.
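To make the idea concrete, here is a minimal sketch of a default-deny, process-aware allowlist check. All names here (`Flow`, `ALLOW_RULES`, the three-tier labels) are illustrative assumptions, not a real product API:

```python
# Minimal sketch of a process-aware, default-deny allowlist policy.
# Names and labels are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_app: str   # label of the source workload
    dst_app: str   # label of the destination workload
    dst_port: int
    process: str   # executable observed handling the connection

# Known-legitimate flows for a three-tier app; everything else is denied.
ALLOW_RULES = [
    Flow("web", "api", 8443, "nginx"),
    Flow("api", "db", 5432, "postgres"),
]

def is_allowed(flow: Flow) -> bool:
    """Default-deny: permit only flows that exactly match an allow rule."""
    return flow in ALLOW_RULES
```

The point of including the process name is that a port-level (Layer 4) rule would permit any traffic to the database port, while the process-aware rule blocks the same port when it is reached by an unexpected executable, such as `netcat`.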

Unified Data Center and Cloud Workload Protection

Another important advantage that micro-segmentation offers is a consistent policy approach for both on-premises and cloud workloads. While traditional segmentation approaches are often tied to specific environments, such as network infrastructure, a specific virtualization technology, or a specific cloud provider, micro-segmentation solutions like GuardiCore Centra are implemented at the workload level and can migrate with workloads as they move between environments. This makes it possible to secure critical applications in hybrid cloud infrastructure and prevent new security risks from being introduced as the result of infrastructure changes.

Platform Independence

In addition to providing a unified security approach across disparate environments, micro-segmentation solutions like GuardiCore Centra also work consistently across various operating systems and deployment models. This is essential at a time when many organizations have a blend of bare-metal servers, virtualized servers, containers, and cloud instances. Implementing micro-segmentation at the application level ensures that policies can persist as underlying deployment platform technologies change.

Common Workload Protection Needs

There are several categories of critical applications that exist in most organizations and are particularly challenging – and particularly important – to secure.

Protecting High-Value Targets

Every organization has infrastructure components that play a central role in governing access to other systems throughout the environment. Examples may include domain controllers, privileged access management systems, and jump servers. It is essential to have a well-considered workload protection strategy for these systems, as a compromise will give an attacker extensive ability to move laterally toward systems containing sensitive or highly valuable data. Micro-segmentation policies with process-level granularity allow security teams to tightly manage how these systems are used, reducing the risk of unauthorized use.

Cloud Workload Protection

As more workloads migrate to the cloud, traditional security controls are often supplanted by security settings provided by a specific cloud provider. While the native capabilities that cloud providers offer are often valuable, they create situations in which security teams must segment their environment one way on-premises and another way in the cloud. This creates greater potential for new security issues as a result of confusion, misconfiguration, or lack of clarity about roles and responsibilities.

The challenge is compounded when organizations use more than one cloud provider, as each has its own set of security frameworks. Because micro-segmentation is platform-independent, the introduction of cloud workloads does not significantly increase the attack surface. Moreover, micro-segmentation can be performed consistently across multiple cloud platforms as a complement to any native cloud provider security features in use, avoiding confusion and providing greater flexibility to migrate workloads between cloud providers.

New Application Deployment Technologies

While bare-metal servers, virtualized servers, and cloud instances all preserve the traditional Windows or Linux operating system deployment model, new technologies such as containers represent a fundamentally different application deployment approach with a unique set of workload protection challenges. Implementing a micro-segmentation solution that includes support for containerized applications is another step organizations can take to secure critical applications in a manner that will persist as the underlying infrastructure evolves over time.

Critical Applications in Specific Industries

Along with the general steps that all organizations should take to secure critical applications, many industries have unique workload protection challenges based on the types of data they store or their specific regulatory requirements.

Examples include:

  • Healthcare applications that store or access protected health information (PHI) for patients that is both confidential and subject to HIPAA regulation.
  • Financial services applications that contain extensive personally identifiable information (PII) and other sensitive data that is subject to industry regulations like PCI DSS.
  • Law firm applications that store sensitive information that must be protected for client confidentiality reasons.

In these and other vertical-specific scenarios, micro-segmentation technologies can be used to both enforce required regulatory boundaries within the infrastructure and gain real-time and historical visibility to support regulatory audits.

Decoupling Security from Infrastructure

While there are a variety of factors that security teams must consider when securing critical applications in their organization, workload protection efforts do not need to be complicated by IT infrastructure evolution. By using micro-segmentation to align security policies with application functionality rather than underlying infrastructure, security teams can protect key applications effectively even as deployment approaches change or diversify. In addition, the added granularity of control that micro-segmentation provides makes it easier to address organization- or industry-specific security requirements effectively and consistently.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Are You Following Micro-Segmentation Best Practices?

With IT infrastructures becoming increasingly virtualized and software-defined, micro-segmentation is fast becoming a priority for IT teams seeking to enhance security and reduce the attack surface of their data center and cloud environments. With its fine-grained approach to segmentation policy, micro-segmentation enables more granular control of communication flows between critical application components than traditional network segmentation methods, in support of a move to a Zero Trust security model.

Finding the Right Segmentation Balance

If not approached in the right way, micro-segmentation can be a complex process to plan, implement, and manage. For example, overzealous organizations may move too quickly in implementing fine-grained policies across their environment, leading to over-segmentation, which can degrade the availability of IT applications and services, increase security complexity and overhead, and actually increase risk. At the same time, businesses need to be aware of the risks of under-segmentation, which leaves the attack surface dangerously large in the event of a breach.

With a well-thought-out approach to micro-segmentation, organizations can see fast time to value for high-priority, short-term use cases, while also putting in place the right structure for a broader implementation of micro-segmentation across future data center architectures. To achieve your micro-segmentation goals without adding unnecessary complexity, consider these micro-segmentation security best practices.

Start with Granular Visibility Into Your Environment

It’s simple when you think about it: how can you secure what you can’t see? Whether you’re using application segmentation to reduce risk between individual applications or groups of applications, or tier segmentation to define the rules for communication within the same application cluster, you need visibility into workloads and flows at the process level. Process-level visibility allows security administrators to identify servers with similar roles and shared responsibilities so they can easily be grouped for the purpose of establishing security policies.
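As a rough sketch of the grouping idea, workloads that expose the same set of listening processes can be bucketed together and given one shared policy. The host names and process data below are made up for illustration:

```python
# Hypothetical sketch: group workloads by the (process, port) pairs they
# expose, so servers with the same role can share one segmentation policy.
from collections import defaultdict

# Process-level observations per host (illustrative data, not a real API).
observed = {
    "srv-01": {("nginx", 443)},
    "srv-02": {("nginx", 443)},
    "srv-03": {("postgres", 5432)},
}

# Hosts with identical process fingerprints land in the same group.
groups = defaultdict(list)
for host, procs in observed.items():
    groups[frozenset(procs)].append(host)

for fingerprint, hosts in groups.items():
    print(sorted(hosts), "share role:", sorted(fingerprint))
```

A real discovery tool would of course infer roles from far richer flow data; the sketch only shows why process-level detail makes grouping straightforward, whereas IP addresses alone would not.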

At first blush, this may seem to be a daunting task and is likely the first impediment to effective micro-segmentation. However, with the aid of graphic visualization tools that enable administrators to automatically discover and accurately map their data center applications and the communication processes between them, the complexity of implementing an effective micro-segmentation strategy can be greatly simplified.

Once administrators have gained this depth of visibility, they can begin to filter and organize applications into groups for the purpose of setting common security policies – for example, all applications related to a particular workflow or business function. The best practice is then to create policies that can be tested and refined as needed for each defined group.

Micro-Segmentation Best Practices for Choosing the Right Model

There are two basic models for implementing micro-segmentation in a data center or cloud environment: network-centric, which typically leverages hypervisor-based virtual firewalls or security groups in cloud environments, and application-centric, which typically uses agent-based distributed firewalls. Both have pros and cons.

In a network-centric model, traffic control is managed at network choke points, by third-party controls, or by attempting to push rules onto each workload’s existing network enforcement point.

In contrast, an application-centric model deploys agents onto the workload itself. This has a number of benefits. Visibility is unmatched, extending down to Layer 7 and unconstrained by the limitations or encryption that proprietary applications may impose. An agent-based solution is also suitable across varied infrastructures and any operational environment. This provides one consistent method across technologies, even as you make new investments in containers and other microservices-based application development and delivery models.

Additionally, as there are no choke points to consider, the policy is entirely scalable and can follow the workload even as it moves between environments, from on-premises to public cloud and back. An application-centric approach also allows you to define more granular policies, which reduces the attack surface beyond what can be accomplished with a network-centric model. Tools built for a specific environment are simply not good enough for hybrid multi-cloud data center needs, which explains why agent-based solutions have become a micro-segmentation security best practice in recent years.

Agent-based approaches also align more easily with the DevOps models most enterprises use today. Businesses can leverage automation and autoscaling to streamline provisioning and management of workloads, and micro-segmentation policies can be incorporated easily and dynamically. There is no need for the manual moves, adds, and changes required in a network-centric model.

Define “Early Win” Use Cases

Organizations that are successful with micro-segmentation typically start by focusing on projects that are tangible, fairly easy to complete, and in which the benefits will be readily apparent. These typically include something as basic as environment segmentation, such as separating servers and workloads in development or quality assurance from those in production.

Another common starting point is the isolation of applications for compliance purposes, a recognized micro-segmentation security best practice. Regulatory regimes such as SWIFT, PCI DSS, or HIPAA typically spell out the types of data and processes that must be protected from everyday network traffic. Micro-segmentation allows for the quick isolation of these applications and data, even if the application workloads are distributed across different environments, such as on-premises data centers and public clouds.

Organizations may also undertake projects to restrict access to data center assets or services from outside users or Internet of Things devices. In healthcare, hospitals use micro-segmentation to isolate medical devices from the general network. Businesses might use micro-segmentation as a means of traditional ring-fencing to isolate their most critical applications.

The common thread running through these examples is that they represent business needs and challenges for which micro-segmentation is ideally suited. They are easily defined projects with clear business objectives, while at the same time providing a proving ground for micro-segmentation.

Think Long Term and Consider the Cloud

Organizations that have successfully implemented micro-segmentation typically take a phased approach, piloting on a few priority projects, getting comfortable with the tools and the process, and gradually expanding. A pragmatic approach to micro-segmentation is to align your requirements with both your current and future-state data center architectures.

A key component of this is consideration of “coverage” in your micro-segmentation tool stack. Look for tools that cover not only a single environment, but provide support for workloads in both your current and future data center architectures. This typically includes workloads running on legacy systems, bare metal servers, virtualized environments, containers and public cloud.

In addition, don’t assume that the native security controls offered by IaaS or public cloud services will be adequate to fully protect your cloud workloads. Cloud service providers operate on a shared responsibility model, in which the provider takes responsibility for securing the cloud infrastructure while customers are responsible for their own operating systems, applications, and data. A cloud provider’s controls are only effective in that provider’s environment, so enterprises would have to manage multiple security platforms and make manual adjustments as applications move among different cloud environments. Furthermore, most native security controls operate at the port level (Layer 4), not at the process level (Layer 7) where vulnerable applications reside, which means they will not reduce the attack surface sufficiently to be effective.

Integrate with Complementary Controls

When evaluating solutions, another micro-segmentation best practice is to look for those that include value-added, integrated complementary controls. This helps reduce security management complexity, as you can find solutions that give you more than just micro-segmentation out of the box.

Single-platform micro-segmentation solutions might be effective at segmenting your applications and workloads to reduce risk. Best practice, however, is to look for a solution that takes you to the next level. Threat detection and response is a perfect example of a valuable complementary control: it allows you to do more than simply protect processes and check compliance off your to-do list. Of course, both breach detection and incident response are must-haves for any complex IT infrastructure.

The difference with an all-in-one solution is the reduction in administrative overhead of attempting to make disparate solutions work in tandem. As micro-segmentation tackles risk reduction in both data centers and clouds, threat detection and incident response can take the obvious next step of quickly detecting and mitigating active breaches, which can help you dramatically reduce dwell time and lower the cost and impact of a breach.

A Summary of Micro-Segmentation Security Best Practices

From choosing an application-centric model that deploys agents onto the workload itself and comes with valuable complementary controls, to ensuring visibility from the start and looking for the ‘quick wins’ that provide early value, following these micro-segmentation security best practices will give your business the best chance of successful implementation.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Policy Enforcement Essentials for Your Micro-Segmentation Strategy

Policy enforcement is one of those terms that can have varied meanings depending on the context. In the context of data center security, application and network policy enforcement refers to any controls that your business uses to govern behavior and access to your network and applications, with special emphasis on east-west (E-W) traffic patterns. Every data center needs a set of rules for how the network is managed and how traffic and communications are directed. This enables companies to meet their own governance, security, and compliance requirements. The data center and your clouds are harder to manage than other parts of your network due to the special characteristics of the virtualization layers. In addition, inside the data center most traffic never traverses a choke point, so businesses lose the ability to use these parts of the topology as control points.

Setting up flexible security policies for data centers can therefore be challenging, especially creating policy with the right scope – reducing your attack surface in case of a breach while allowing you to remain adaptable. This task is even more challenging for networks with thousands of workloads or more, varied locations, and multiple cloud architectures. To provide maximum risk reduction, security policies should be built and enforced at both the network and process levels.

Micro-segmentation technology was invented to solve the challenges of securing the data center from the inside: preventing lateral movement, meeting compliance requirements, and gaining east-west traffic visibility. With the right micro-segmentation policy, these rules can be truly granular – not only keeping environments from interacting with one another through coarse segmentation, but also enforcing fine-grained policy. With GuardiCore Centra, your micro-segmentation project gains enforcement capabilities that let you orchestrate at the flow level and even down to the process level on all platforms, allowing you to meet different security and compliance mandates and to use micro-segmentation both as a security solution and as a compensating control for compliance where other tools can’t be used.

Finding the Right Scope for your Policy Enforcement Strategy

Anyone involved in compliance and security knows that defining the scope is the biggest initial challenge. One of the first stages of creating effective micro-segmentation policy is to be clear about your policy objectives, both for business and security. If you’re just looking at security, the more granular your policies are, the stronger your security posture is. However, this could also limit communication and flexibility. The wrong policy choice could cause frustration or delays for your business. Overall, smart micro-segmentation policy allows you to enforce a strong security policy without compromising your communications or your business goals.

Network policy enforcement is well known as a method to help businesses meet compliance regulations. Take PCI DSS, for example. By reducing the scope of what can reach your cardholder data environment (CDE), you dramatically reduce the work it takes to achieve compliance. By building application-aware policies, you can enforce system access to specific data. If your policies segment all the way to Layer 7, the application layer, attackers who have breached your perimeter still can’t pivot from an out-of-scope area to one that is in scope. With tier segmentation, this can be enforced even within the same application cluster.

Strategies for Practical Implementation of Micro-segmentation Policy

First, you will want to map out your business objectives and gain visibility of your environment, understanding application dependencies and flows within your architecture as a whole. Then, you can start to think about the kinds of controls that are required for your business and teams. This will allow you to set the right policy enforcement. A good security solution will allow you to start with global, high-level rules, and then add layers, increasing the granularity of your policies.

Some rules will apply to large segments, such as allowing only the sales staff to access the sales applications, allowing DNS resolution only through the internal, secure DNS cluster, or keeping the production environment separate from the test environment altogether. With GuardiCore you can also define blocking rules as part of your micro-segmentation policy strategy. In fact, you can combine both allow rules and block rules within the same policy. This enables you to define rules like blocking non-admin access to SSH on the network.
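A toy evaluation loop makes the allow/block combination easier to picture. The precedence chosen here (block rules override allow rules, with default deny as the fallback) is an assumption made for this sketch, not a statement about how any particular product resolves conflicts:

```python
# Illustrative sketch of evaluating a policy that mixes allow and block
# rules. Precedence (block beats allow, default deny) is assumed here.

def evaluate(policy, flow):
    """Return the verdict for a flow under the given policy."""
    if any(rule(flow) for rule in policy["block"]):
        return "BLOCK"
    if any(rule(flow) for rule in policy["allow"]):
        return "ALLOW"
    return "BLOCK"  # default deny

policy = {
    # Allow sales staff to reach the sales application.
    "allow": [lambda f: f["dst_app"] == "sales-app" and f["src_group"] == "sales"],
    # Block SSH (port 22) for anyone outside the admins group.
    "block": [lambda f: f["dst_port"] == 22 and f["src_group"] != "admins"],
}

print(evaluate(policy, {"src_group": "sales", "dst_app": "sales-app", "dst_port": 443}))
print(evaluate(policy, {"src_group": "sales", "dst_app": "sales-app", "dst_port": 22}))
```

Note how the block rule wins even when an allow rule would otherwise match: a sales user reaching the sales app over HTTPS is allowed, but the same user attempting SSH is blocked.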

Micro-segmentation policy should allow you to be creative, providing the ability to use different collection and enforcement methods based on your clouds and network topologies. With this technology, you can set up the rules that balance your unique needs for flexibility and security.

The following are some examples of policy creation ideas. Some are coarse, while others show the benefits of enforcing at a granular level.

  • Separating Development, Production, and Test environments (as required by regulations like PCI DSS)
  • Restricting access to servers from non-server environments
  • Application segmentation inside environments – for example, allowing SharePoint applications to communicate with internal storage while limiting other types of traffic
  • Tier segmentation inside application environments, such as communication between a web server and a DB server
  • Restricting admin access to servers to comply with EU regulations such as GDPR
  • Blocking all unencrypted protocols such as FTP or Telnet within your data center traffic
  • Denying a specific application tier or data center area the ability to communicate with the internet

Building Flexible Policy Enforcement That Works in the Real World

Businesses are increasingly moving away from static business environments with flat structures or on-premises data centers. Whichever policy engine a company chooses needs to be future-proof, allowing policy creation that gives control over auto-scaled workloads, expanding and contracting services, and processes that are constantly changing and adapting. Hybrid-cloud data centers are a great example of this kind of environment, where traditional inflexible policy engines can’t provide adequate dynamic provisioning.

In contrast, a flexible policy engine will support the latest advances in policy enforcement, such as the ability to support auto-scaling environments or to allow the policy to follow the workload, no matter what platform it is on or which kind of cloud it is deployed in. This is impossible if policy is expressed in IP addresses, ranges, or VLANs. To really get the benefits of micro-segmentation technology in network policy enforcement, your policy engine and labeling need to be able to breathe with your data center, providing different models of control and delivering both quick wins and ongoing risk reduction. In other words, you should be able to see clearly how the entire data center behaves and communicates, application-wise, and turn that into policy using allow rules, while adding block rules to enforce compliance and best-practice security requirements.
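The difference between IP-based and label-based policy can be shown in a few lines. The workload record, labels, and addresses below are invented for the sketch; the point is only that a label-based rule keeps matching after a migration while an address-based rule does not:

```python
# Sketch: a label-based rule "follows the workload" across environments,
# while an IP-based rule breaks on migration. All data is hypothetical.

workload = {
    "name": "billing-api",
    "labels": {"env": "prod", "app": "billing"},
    "ip": "10.0.1.7",  # current on-premises address
}

def ip_rule(w):
    # Tied to the network location, not to the workload itself.
    return w["ip"] == "10.0.1.7"

def label_rule(w):
    # Tied to what the workload *is*, regardless of where it runs.
    return w["labels"]["app"] == "billing"

# On-premises, both rules match.
print(ip_rule(workload), label_rule(workload))

# After migration to a public cloud, the address changes...
workload["ip"] = "172.31.9.4"

# ...the IP-based rule no longer matches, but the label-based rule still does.
print(ip_rule(workload), label_rule(workload))
```

This is the property the text calls a policy engine that can “breathe” with the data center: because enforcement keys off labels rather than addresses or VLANs, no rule rewrite is needed when a workload moves.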

Being Smart About Network Policy Enforcement

Not all micro-segmentation policy enforcement solutions are created equally. With the help of a flexible policy engine that supports both allow and block rules and includes policies built and enforced on a process level as well as a network level, you can take policy enforcement to the next level. Firstly, you can achieve visibility over your entire environment. Secondly, it allows you to enforce at a level of granularity that your organizational maturity can tolerate. Thirdly, it allows you to eliminate a lot of risk fast, using a small number of rules.

Using a micro-segmentation policy enforcement engine that supports granular deep visibility and micro-segmentation policies makes your investment yield even more results, faster. With such an engine, the real-time view of the dependencies and communication on your network can be turned into policies that suit the context of your unique business objectives and needs, strengthening your security posture without limiting business agility.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Implementing Micro-Segmentation Insights, Part 2: Getting Internal Buy-In

In a recent blog, I revealed part one of my insights from implementing micro-segmentation projects with large customers. Vendors don’t always get the perspective of deep involvement in the execution of these projects, but I have been fortunate enough to cultivate relationships with our customers and have thus been granted an inner view. In my first blog of this series, I discussed short versus long-term objectives and the importance of knowing your goals and breaking them down into phases to ensure that you’re truly working on the important matters first and foremost. In this blog, I want to get into the importance of getting internal buy-in from other teams in order to enable improved implementation. To make the process easier for all involved, “selling” the project to other teams is an important early step.

More often than not, the segmentation project is driven by the security organization, and they are the ones who see immediate value from it, but they need the collaboration of other teams to help them deploy the product. It doesn’t really matter whether the product is an overlay (usually based on agents) or an underlay (part of the infrastructure).

Some teams will need to carry more of the weight in deploying such a project: networking, application, infrastructure, systems, and so on. To achieve the collaboration of those teams, it helps to carry a carrot, not just a stick. Getting early buy-in on these projects and planning out collaboration with the right people will make the process much easier in the long run. To do so, the solution needs to be “sold” internally. And just like any sales process, the more prepared and equipped you are, the smoother it will go.

As is true with any sales process, there will be “early adopters” and “late adopters.” In our experience, when the project team was well prepared and presented the benefits of the solution to the other teams, the impact was significant. If you can show the application team how they can benefit from the solution, they will not just ease up on their objections; they will push for the deployment.

But there is a significant “but” to this. The product you choose for your segmentation needs to be able to deliver those carrots. It needs to support use cases that are not the clear, immediate concern of its direct, original audience. Here is a small example: let’s say you need to convince the application owner to install the agents that will enforce the policy. The application owner usually has little interest in the security use case and might object or hesitate because of the additional player introduced into the mix, wondering how it will affect stability, performance, and so on. But if you can demonstrate that they will gain value from the product, they will in fact become the one pushing the deployment.

One such value proposition that we constantly see is “getting visibility into your application.” This is of course a great promise, but for the application owner to actually gain value from this visibility, it needs certain properties: it needs to be Layer 7 with application context; it needs to collect data and store it historically (note that for the sake of building policy, the historical aspect of the data is not important at all; you just need to know what connections might happen); and it needs to be searchable and filterable to allow simple, convenient consumption by the application owner. This is just one example, but the important lesson is to show the many ways the solution can expand beyond the obvious reason for implementation, helping to ensure the buy-in of other teams through the illustration of other benefits.

So, when starting to deploy a segmentation solution, make sure you prepare an onboarding package for the teams you need to cooperate with that includes ways they can leverage the product, to expedite the adoption process and meet the project’s deadlines. Equally important, make sure that the product you choose can actually cater to those use cases; many of the products on the market today miss that important point.

Learn more on our micro-segmentation hub or read about GuardiCore Centra for best practices.

Read part one of my insights from implementing micro-segmentation.

Reduce Attack Surface

Rapid adoption of cloud services by companies of all sizes is enabling many business benefits, most notably improved agility and lower IT infrastructure costs. However, as IT environments become more heterogeneous and geographically distributed in nature, many organizations are seeing their security attack surface multiply exponentially. This challenge is compounded by the accelerating rate of IT infrastructure change as more organizations embrace DevOps-style application deployment approaches and more extensive infrastructure automation.

Longstanding security practices such as system hardening, proactive vulnerability management, strong access controls, and network segmentation continue to play valuable roles in security teams’ attack surface reduction efforts. However, these measures alone are no longer sufficient in hybrid cloud environments for several reasons.

The first is that while these practices remain relevant, they do little to counteract the significant attack surface growth that cloud adoption and new application deployment models like containers are introducing. In addition, it is difficult to implement these practices consistently across a hybrid cloud infrastructure, as they are often tied to a specific on-premises or cloud environment. Lastly, as application deployment models become more distributed and dynamic, they expose organizations to greater risk of unsanctioned lateral movement. As the volume of east/west traffic grows, network-based segmentation alone is too coarse to prevent attackers from exploiting open ports and services to expand their attack footprint and find exploitable vulnerabilities.

These realities are leading many security executives and industry experts to embrace micro-segmentation as a strategic priority. Implementing a holistic micro-segmentation approach that includes visualization capabilities and process-level policy controls is the most effective way to reduce attack surface as the cloud transforms IT infrastructure. Moreover, because micro-segmentation is performed at the workload level rather than at the infrastructure level, it can be implemented consistently throughout a hybrid cloud infrastructure and adapt seamlessly as environments change or workloads relocate.

Visualizing the Attack Surface

One of the most beneficial steps that security teams can take to reduce their attack surface is to gain a deeper understanding of how their application infrastructure functions and how it is evolving over time. By understanding the attack surface in detail, security teams can be much more effective at implementing new controls to reduce its size.

Using a micro-segmentation solution to visualize the environment makes it easier for security teams to identify indicators of compromise and assess their current state of potential exposure. This process should include visualizing individual applications (and their dependencies), systems, networks, and flows to clearly define expected behavior and identify areas where additional controls can be applied to reduce the attack surface.

Attack Surface Reduction with Micro-Segmentation

As more application workloads shift to public cloud and hybrid-cloud architectures, one area where existing attack surface reduction efforts often fall short is lateral movement detection and prevention. More distributed application architectures are significantly increasing the volume of “east/west” traffic in many data center and cloud environments. While much of this traffic is legitimate, trusted assets that are capable of communicating broadly within these environments are attractive targets for attackers. They are also much more accessible as the traditional concept of a network perimeter becomes less relevant.

When an asset is compromised, the first step that attackers often take is to probe and profile the environment around the compromised asset, seek out higher-value targets, and attempt to blend lateral movement in with legitimate application and network activity.

Micro-segmentation solutions can help defend against this type of attack by giving security teams the ability to create granular policies that:

  • Segment applications from each other
  • Segment the tiers within an application
  • Create a clear security boundary around assets with specific compliance or regulatory requirements
  • Enforce general corporate security policies and best practices throughout the infrastructure
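As a rough illustration, the granular policies above can be thought of as label-based allow rules evaluated against a default-deny baseline. The following sketch is hypothetical (the label scheme and rule format are illustrative, not GuardiCore Centra’s actual policy model):

```python
# Hypothetical sketch of label-based micro-segmentation rules.
# Each rule allows traffic from one labeled group of workloads to
# another on a specific port; any flow not matched by a rule is denied.

RULES = [
    # Segment the tiers within an application: the web tier may reach
    # the app tier, and the app tier may reach the database tier.
    {"src": {"app": "billing", "tier": "web"},
     "dst": {"app": "billing", "tier": "app"}, "port": 8080},
    {"src": {"app": "billing", "tier": "app"},
     "dst": {"app": "billing", "tier": "db"}, "port": 5432},
]

def labels_match(selector, workload_labels):
    """True if every key/value in the rule selector matches the workload."""
    return all(workload_labels.get(k) == v for k, v in selector.items())

def is_allowed(src_labels, dst_labels, port, rules=RULES):
    """Default-deny: a flow is allowed only if some rule matches it."""
    return any(
        labels_match(r["src"], src_labels)
        and labels_match(r["dst"], dst_labels)
        and r["port"] == port
        for r in rules
    )

# The web tier may talk to the app tier...
print(is_allowed({"app": "billing", "tier": "web"},
                 {"app": "billing", "tier": "app"}, 8080))   # True
# ...but not directly to the database, and unrelated apps are segmented off.
print(is_allowed({"app": "billing", "tier": "web"},
                 {"app": "billing", "tier": "db"}, 5432))    # False
print(is_allowed({"app": "crm", "tier": "app"},
                 {"app": "billing", "tier": "db"}, 5432))    # False
```

Because the rules reference workload labels rather than IP addresses, the same policy continues to apply when a workload is redeployed or moves between environments.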

These measures and others like them slow or block attackers’ efforts to move laterally. When implemented effectively, micro-segmentation applies the principle of least privilege more broadly throughout the infrastructure, even as it extends from the data center to one or more cloud platforms.

This focus on preventing lateral movement through in-depth governance of applications and flows reduces the available attack surface even as IT infrastructure grows and diversifies.

Beyond the Network Attack Surface

Successful use of micro-segmentation to reduce attack surface requires both Layer 4 and Layer 7 process-level controls. Process-level control allows security teams to truly align their security policies with specific application logic and regulatory requirements rather than viewing them purely through an infrastructure lens.

This application awareness is a key enabler of the attack surface reduction benefits of micro-segmentation. Granular policies that whitelist very specific process-level flows are much more effective at reducing attack surface than Layer 4 controls, which savvy attackers can circumvent by exploiting systems with trusted IP addresses and/or blending attacks in over allowed ports.

Granular Layer 7 policy controls make it possible for organizations to achieve a zero-trust architecture in which only the application activity and flows that represent known, sanctioned behavior are allowed to function unimpeded in the trusted environment.

The Importance of a Multi-OS, Multi-Environment Approach

As the transition to hybrid cloud environments accelerates, it is easy for organizations to overlook the extent to which this change magnifies the size of their attack surface. New physical environments, platforms, and application deployment methods create many new areas of potential exposure.

In addition to providing more granular control, another benefit that micro-segmentation provides to organizations seeking to reduce attack surface is that it achieves a unified security model that spans multiple operating systems and deployment environments. When policies are focused on specific process and flows rather than infrastructure components, they can be applied across any mix of on-premises and cloud-hosted resources and even remain consistent when a specific workload moves between the data center and one or more cloud platforms. This is a major advantage over point security products that are tied to a specific environment or platform, as it enables attack surface to be minimized even as the environment becomes larger and more heterogeneous.

When selecting a micro-segmentation platform, it is important to validate that the solution works seamlessly across your entire infrastructure without any environment- or platform-specific dependencies. This includes validating that the level of control is consistent between Windows and Linux and that there is no dependence on built-in operating system firewalls, which do not offer the necessary flexibility and granularity.

While the transformation to cloud or hybrid-cloud IT infrastructure does have the potential to introduce new security risks, a well-managed micro-segmentation approach that is highly granular, decoupled from the underlying infrastructure, and application-aware can actually reduce the attack surface even as infrastructure diversity and complexity increase.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Implementing Micro-Segmentation – Insights from the Trenches, Part One

Recently I have been personally engaged in implementing micro-segmentation for our key customers, including a top US retail brand, a major Wall Street bank, a top international pharmaceutical company, and a leading European telco. Spending significant time with each of these customers, including running weekly project calls, workshops, and planning meetings, has given me a unique glimpse into the reality of implementing such projects in a major enterprise – a point of view that is not always available to a vendor.

I would like to share some observations of how those projects roll out, and hope you will find these insights useful and especially helpful if you are planning to implement micro-segmentation in your network. Each blog in this short series will focus on one insight I’ve gathered from my time both in the boardroom and in the trenches, and I hope you find some practical pieces to help you improve your understanding and implementation of any current or upcoming security projects.

Application segmentation is not necessarily the short-term objective

If you look into the online material about micro-segmentation, vendors, analysts, and experts all talk about breaking your data center into applications and those applications into tiers, and limiting the access among them to only what the applications need.

I was surprised to discover that many customers look at the problem from a slightly different angle. For them, segmentation is a risk-reduction project driven by regulations, internal auditing requirements, or simply a desire to reduce the attack surface. These drivers do not always translate into segmenting applications from each other; when segmentation is a priority, it is usually not the primary objective but a means to an end, and it is not necessarily a comprehensive process in the short term. Let me give you a couple of examples:

  1. A major Wall Street bank was required by its auditor to validate that admin access to servers happens only through a jump box or a CyberArk-like solution. In practice, the bank wanted to set a policy that “Windows machines can only be accessed over RDP from this set of machines – all other RDP connections to them are not allowed. Linux machines can only be accessed over SSH from this set of machines – all other SSH connections are not allowed.” There is no need to explain the risk-reduction contribution of such a simple policy, but it has nothing to do with segmenting your data center by applications. Theoretically, one could achieve this goal as a side effect of complete data-center segmentation, but that would require significantly more effort, and the result would be somewhat implicit and harder to demonstrate to the auditor.
  2. A European bank needed to implement a simple risk-reduction scheme: to mark each server in its data center as “accessible from ATMs,” “accessible from printer area,” “accessible from user area,” or “not accessible from non-server area,” with very simple, well-defined rules for each of the groups. Again, the attack surface reduction is quite simple and, in their case, very significant, but it has little to do with textbook application segmentation. Here too you could theoretically achieve the same goal by implementing classic micro-segmentation, but Confucius taught us not to try to kill a mosquito with a cannon. Most of these organizations do plan to implement micro-segmentation as the market defines it, but they know it takes time, and they want to hit the low-hanging fruit in risk reduction early on while implementing this crucial security project incrementally, in a way that makes the most sense for their business.
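The jump-box requirement from the first example can be sketched as a simple allowlist check. This is purely illustrative (the addresses, port numbers, and function names are hypothetical; a real product expresses this as enforced policy, not application code):

```python
# Hypothetical sketch of the jump-box policy from example 1:
# RDP (3389) to Windows hosts and SSH (22) to Linux hosts are allowed
# only from a small set of jump boxes; all other admin connections
# are denied.

JUMPBOXES = {"10.0.0.5", "10.0.0.6"}          # illustrative jump-box IPs
ADMIN_PORTS = {"windows": 3389, "linux": 22}  # RDP / SSH

def admin_access_allowed(src_ip, dst_os, dst_port):
    """Return True if a connection satisfies the jump-box policy."""
    # Ports other than the OS's admin port are outside this rule's scope.
    if dst_port != ADMIN_PORTS.get(dst_os):
        return True
    return src_ip in JUMPBOXES

print(admin_access_allowed("10.0.0.5", "windows", 3389))  # True: via jump box
print(admin_access_allowed("10.1.2.3", "windows", 3389))  # False: direct RDP
print(admin_access_allowed("10.1.2.3", "linux", 22))      # False: direct SSH
```

Note how compact the rule is compared with full application segmentation: two allowlisted ports and a handful of source machines deliver a risk reduction that is easy to demonstrate to an auditor.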

So if you are looking to implement a micro-segmentation project, understand your goals, drivers, and motivations, and remember that this is a risk-reduction project after all. As they say, there are many ways to peel an orange; some are simpler, faster, more straightforward, and more efficient than others. But the sooner you get started, the sooner you can enjoy the taste of your success. In any case, when choosing technology to help you with a segmentation project, make sure you choose one flexible enough to help you do textbook micro-segmentation but also address the numerous other use cases that you might not even be aware of at the initial stages.

Stay tuned to our blog to catch more of my upcoming insights from the trenches.

Learn more information about choosing a micro-segmentation solution.

What is File Integrity Monitoring and Why Do I Need It?

File integrity monitoring (FIM) is an internal control that examines files to see the way that they change, establishing the source, details and reasons behind the modifications made and alerting security if the changes are unauthorized. It is an essential component of a healthy security posture. File integrity monitoring is also a requirement for compliance, including for PCI-DSS and HIPAA, and it is one of the foremost tools used for breach and malware detection. Networks and configurations are becoming increasingly complex, and file integrity monitoring provides an increased level of confidence that no unauthorized changes are slipping through the cracks.

How Does File Integrity Monitoring Work?

In a dynamic, agile environment, you can expect continuous changes to files and configuration. The trick is to distinguish between authorized changes due to security, communication, or patch management, and problems like configuration errors or malicious intent that need your immediate attention.

File integrity monitoring uses baseline comparison to make this differentiation. One or more file attributes are stored internally as a baseline, and the file is periodically compared against that baseline when it is checked. Examples of baseline data include user credentials, access rights, creation dates, and last known modification dates. To ensure the data has not been tampered with, the best solutions calculate a known cryptographic checksum and compare it against the current state of the file at a later date.
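The baseline-and-compare process can be sketched in a few lines of Python. This is a minimal illustration of the checksum approach only; real FIM products also track ownership, permissions, and who made each change:

```python
import hashlib
import os

def file_checksum(path):
    """Cryptographic checksum (SHA-256) of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record the known-good checksum of each monitored file."""
    return {p: file_checksum(p) for p in paths}

def detect_changes(baseline):
    """Compare current state to the baseline; return modified or missing files."""
    return [p for p, digest in baseline.items()
            if not os.path.exists(p) or file_checksum(p) != digest]

# Example: baseline a file, modify it, and detect the change.
with open("demo.conf", "w") as f:
    f.write("setting = original\n")
baseline = build_baseline(["demo.conf"])

with open("demo.conf", "w") as f:   # an unauthorized modification
    f.write("setting = tampered\n")
print(detect_changes(baseline))     # ['demo.conf']
```

Because even a one-byte change produces a completely different SHA-256 digest, the comparison reliably flags any content modification, no matter how subtle.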

File Integrity Monitoring: Essential for Breach Detection and Prevention

File integrity monitoring is a prerequisite for many compliance regulations. PCI DSS, for example, mentions this foundational control in two sections of its policy. For GDPR, this kind of monitoring can support five separate articles on the checklist. From HIPAA for health organizations to NERC CIP for utility providers, file integrity monitoring is explicitly cited as a best practice for preventing unauthorized access or changes to data and files.

Outside of regulatory assessment, although file integrity monitoring can alert you to configuration problems like storage errors or software bugs, it’s most widely used as a powerful tool against malware.

There are two main ways that file integrity monitoring makes a difference. First, once attackers have gained entry to your network, they often make changes to file contents to avoid being detected. By detecting every change happening on your network and raising contextual alerts on unauthorized policy violations, file integrity monitoring helps stop attackers in their tracks.
Second, the monitoring tools give you the visibility to see exactly what changes have been made, by whom, and when. This is the quickest way to detect and limit a breach in real time, getting the information in front of the right personnel through alerts and notifications before any lateral moves can be made or a full-blown attack is launched.

Incorporating file integrity monitoring as part of a strong security solution can give you even more benefits. Micro-segmentation, for example, is an essential tool that goes hand in hand with it. File integrity monitoring gives you the valuable information you need about where an attack is coming from, while micro-segmentation reduces the attack surface within your data centers so that even if a breach occurs, lateral movement is blocked. You can create your own strict access and communication policies, making it easier to use your file integrity monitoring policies to see which changes are authorized and which are not. Because micro-segmentation works in hybrid environments, ‘file’ monitoring becomes the monitoring of your entire infrastructure. This extended perimeter protection can cover anything from servers, workstations, and network devices to VMware, containers, routers and switches, directories, IoT devices, and more.

Features to Look for in a File Integrity Monitoring Solution

Of course, file integrity monitoring can vary between security providers. Your choice needs to be integrated as part of a full-service platform that can help mitigate a breach when it is detected, rather than just handing off the responsibility to another security product down the line.

Making sure you find that ideal security solution involves checking the features on offer. There are some must-haves, which include real-time information, so you always have an accurate view of your IT environment, and multi-platform availability. Most IT environments now span varied platforms, including different Windows versions and Linux distributions.

Another area to consider is how the process of file integrity monitoring seamlessly integrates with other areas of your security posture. One example would be making sure you can compare your change data with other event and log data for easy reporting, allowing you to quickly identify causes and correlative information.

If you’re using a micro-segmentation approach, creating rules is something you’re used to already. You want to look for a file integrity monitoring solution that makes applying rules and configuring them as simple as possible. Preferably, you would have a template that allows you to define the files and services that you want monitored, and which assets or asset labels contain those files. You can then configure how often you want these monitored, and be alerted of incidents as they occur, in real-time.

Lastly, the alerts and notifications themselves will differ between solutions. Your ideal solution is one that provides high level reporting of all the changes throughout the network, and then allows you to drill down for more granular information for each file change, as well as sending information to your email or SIEM (security information and event management) for immediate action.

File Integrity Monitoring with Micro-Segmentation – A Breach Detection Must Have

It’s clear that file integrity monitoring is essential for breach detection, giving you the granular, real-time information on every change to your files, including the who, what, where and when. Alongside a powerful micro-segmentation strategy, you can detect breaches faster, limit the attack area ahead of time, and extend your perimeter to safeguard hybrid and multi-platform environments, giving you the tools to stay one step ahead at all times.

Application Segmentation

Business applications are the principal target of attackers seeking access to an organization’s most sensitive information, and as application deployment approaches become more dynamic and extend to external cloud platforms, the number of possible attack vectors is multiplying. This is driving a shift from traditional perimeter security to an increased focus on detecting and preventing lateral movement within both on-premises and cloud infrastructure.

Most security pros and industry experts agree that greater segmentation is the best step that an organization can take to stop lateral movement, but it can be challenging to parse the various available segmentation techniques. For example, IT pros and security vendors alike often use the terms application segmentation and micro-segmentation interchangeably. There is, in fact, some overlap between these two techniques, but selecting the right approach for a specific set of security and compliance needs requires a clear understanding of the different ways in which segmentation can be performed.

What is Application Segmentation?

Application segmentation is the practice of implementing Layer 4 controls that can both isolate an application’s distinct service tiers from one another and create a security boundary around the complete application to reduce its exposure to attacks originating from other applications.

This serves two purposes:

  • Enforcing clear separation between the tiers of an individual application, allowing only the minimum level of access to each tier required to deliver the application functionality
  • Isolating a complete application from unrelated applications and other resources that could be possible sources of lateral movement attempts if compromised

Intra-Application Segmentation

It is a longstanding IT practice to separate business applications into tiers to improve both scalability and security. For example, a typical business application may include a set of load balancers that field inbound connections, one or more application servers that deliver core application functionality, and one or more database instances that store underlying application data.

Each tier has its own distinct security profile. For example, access to the load balancer is broad, but its capabilities are narrowly limited to directing traffic. In contrast, a database may contain large amounts of sensitive data, so access should be tightly limited.

This is where intra-application segmentation comes into play, as security teams may, for example, limit access to the database to specific IP addresses (e.g., the application server) over specific ports.
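To make the tier boundary concrete, the database rule described above could be rendered as a set of iptables-style allow rules followed by a default drop. The addresses and port below are hypothetical, and this generator is only a sketch of the Layer 4 control, not any product's actual enforcement mechanism:

```python
# Hypothetical sketch: restrict database access to the application
# servers over the database port only, dropping everything else.

APP_SERVERS = ["10.0.1.10", "10.0.1.11"]  # illustrative app-tier IPs
DB_PORT = 5432                            # e.g., a PostgreSQL instance

def db_ingress_rules(app_server_ips, db_port):
    """Render the tier boundary as iptables-style allow rules for each
    app server, followed by a default drop on the database port."""
    rules = [
        f"iptables -A INPUT -p tcp -s {ip} --dport {db_port} -j ACCEPT"
        for ip in app_server_ips
    ]
    rules.append(f"iptables -A INPUT -p tcp --dport {db_port} -j DROP")
    return rules

for rule in db_ingress_rules(APP_SERVERS, DB_PORT):
    print(rule)
```

Rule order matters here: the ACCEPT rules for the app servers must precede the final DROP, since iptables evaluates chains top to bottom and applies the first match.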

Application Isolation

The second important role that application segmentation can play is isolating an entire application cluster, such as the example above, from other applications and IT resources. There are a number of reasons that IT teams may wish to achieve this level of isolation.

One common reason is to reduce the potential for unauthorized lateral movement within the environment. Even with strong intra-application isolation between tiers in place, an attacker who compromises a resource in another application cluster may be able to exploit vulnerabilities or mis-configurations to move laterally to another cluster. Implementing a security boundary around each sensitive application cluster reduces this risk.

There may also be business or compliance reasons for isolating applications. For example, compliance with industry-specific regulations such as HIPAA, PCI-DSS, and the SWIFT security standards is simplified by establishing clear isolation of in-scope IT resources. This is also true for jurisdictional regulations like the EU General Data Protection Regulation (GDPR).

Application Segmentation vs. Micro-Segmentation

The emergence of micro-segmentation as a best practice has created some confusion for IT pros evaluating possible internal security techniques. Micro-segmentation is, in fact, a method of implementing application segmentation. However, micro-segmentation capabilities significantly improve an organization’s ability to perform application segmentation through greater visibility and granularity.

Traditional application segmentation approaches have relied primarily on Layer 4 controls. This does have value, but firewalls and other systems used to implement such controls do not give security teams a clear picture of the impact of these controls. As a result, they are time-consuming to manage and susceptible to configuration errors, particularly as environments evolve to include cloud services and new deployment models like containers.

Moreover, Layer 4 controls alone are very coarse. Sophisticated attackers are skilled at spoofing IP addresses and piggybacking on allowed ports to circumvent Layer 4 controls.

Micro-segmentation improves upon traditional application segmentation techniques in two ways. The first is giving security teams a visual representation of the environment and the policies protecting it. Effective visualization makes it possible for security teams to better understand the policies they need and identify whether gaps in policy coverage exist. This level of visibility rarely exists when organizations are attempting to perform application segmentation using a mix of existing network-centric technologies.

A second major advantage that micro-segmentation offers is greater application awareness. Leading micro-segmentation technologies can display and control activity at Layer 7 in addition to Layer 4. An application-centric micro-segmentation approach can do more than simply create a coarse boundary between application tiers or around an application cluster. It allows specific processes – and their associated data flows – to be viewed in an understandable way and serve as the basis for segmentation policies. Rather than relying solely on IP addresses and ports, micro-segmentation rules can whitelist very specific processes and flows while blocking everything else by default. This enables far stronger application isolation than traditional application segmentation techniques can provide.

Balancing Application Segmentation with Business Agility

Application segmentation is more important than ever as dynamic hybrid cloud environments and fast-paced DevOps deployment models become the norm. The business agility that these advances enable is highly valuable to the organizations that adopt them. However, heterogeneous environments that are constantly evolving are also more challenging to secure. Security teams can easily find themselves facing a lose/lose proposition of either slowing down innovation or overlooking new possible security risks.

The granular visibility and control that application-centric micro-segmentation offers makes it possible to proactively secure new or updated applications at the time of deployment without added complexity or delay. It also ensures that security teams can quickly detect any abnormal application activity that slips through the cracks and respond rapidly to new security risks before they can be exploited.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

GuardiCore’s Journey from Vision to Best-in-Class Micro-Segmentation

Micro-segmentation as we know it today has gone through several stages in the last few years, moving from a rising trend for securing software-defined data centers to a full-blown cyber security technology and a top priority on the agenda of nearly every CISO.

Built on the vision of securing the hybrid cloud and software defined data centers, we started our journey in 2013, thinking how to solve what in our opinion was a huge challenge for a market that did not exist at that time. In this post we’ll share how we created the micro-segmentation solution that is considered the best on the market – from vision to execution.

2015: First steps towards segmentation

Throughout the second half of 2015, we started delivering our micro-segmentation methodology after realizing that understanding how applications communicate inside the cloud was the key to success and, as such, had to be addressed first. “You can’t protect what you can’t see” wasn’t coined by GuardiCore but was immediately embraced by us when we started planning our micro-segmentation solution. We started developing our visibility solution, Reveal: a visual map of all the applications running in the data center, all the way down to the process level. Reveal allows you to view applications and the flows they create in real time while also providing historical views. For the first time, admins and security teams were able to easily discover the running applications, one by one, and then review the relations between application tiers. Early releases supported general data center topologies as well as Docker containers.

2016: Gartner names micro-segmentation a top information security technology

We launched our segmentation solution at the RSA conference 2016 with a big splash. Reveal gained a lot of coverage and was well received by security teams who were lacking the proper tools to see the application flows in their data centers. It was one of the hottest security products at RSA 2016 and for a good reason!

It is important to note that when micro-segmentation was introduced in Gartner’s Top 10 Technologies for Information Security in June 2016, many security professionals were unaware of the concept. In that report Gartner stated that to prevent attackers from moving “unimpeded laterally to other systems” there was “an emerging requirement for microsegmentation of east/west traffic in enterprise networks”. Enthusiasm was then at its peak; micro-segmentation was widely covered in the media, and conferences dealing with the technology abounded.

2017: Micro-segmentation for early adopters

Micro-segmentation was gaining traction as one of the most effective ways to secure data centers and clouds, but organizations learned the hard way that the path to meaningful micro-segmentation was full of challenges. Incomplete visibility into east-west traffic flows, inflexible policy engines, and lack of multi-cloud support were among the most cited reasons. Throughout 2017, market penetration was around 5% of the target audience, and micro-segmentation was far from mainstream. Andrew Lerner, Research Vice President at Gartner, noted in a blog post that “Micro-segmentation is the future of modern data center and cloud security; but not getting the micro-segmentation-supporting technology right can be analogous to building the wrong foundation for a building and trying to adapt afterward”.

That year GuardiCore tackled these challenges head on. Based on the feedback we received from our growing customer base, we added flexible policy management and moved beyond relying solely on third-party integrations to add native enforcement at the flow and process levels. Customers were able to move from zero segmentation to native enforcement in three easy steps: revealing applications, building policies, and natively enforcing them.

2018: Our solution takes complexity out of micro-segmentation

Today, micro-segmentation serves as a foundational element of security in any data center. According to a Citigroup report, cloud security is the number one priority among CISOs in 2018, with micro-segmentation the top purchasing priority in this category. Concentrated effort on the part of organizations across industries has resulted in a better understanding of the technology. This year we were able to deploy micro-segmentation across all types of environments, from bare metal to virtualized machines, through public cloud instances and, recently, to containerized environments.

So if you are planning a micro-segmentation project let’s talk. We can show you how to do it in a way that is quick, affordable, secure, and provable across any environment.

Lateral Movement Security

Security teams often focus significant effort and resources on protecting the perimeter of their IT infrastructure and tightly controlling north-south traffic, or traffic that flows between clients and servers. However, several major transformations in enterprise computing are causing east-west traffic, or server-to-server communication within the data center, to outgrow north-south traffic in both volume and strategic importance.

For example:

  • Traditional on-premises data centers increasingly use horizontal scaling techniques that employ large sets of peer nodes to service the requests of clients, rather than a simple north-south flow.
  • The emergence of big data analytics as an essential competency is also driving substantial growth of east-west traffic, as processing of large data sets distributed across many nodes is generally required.
  • The growing adoption of public cloud infrastructure makes the traditional concept of a network perimeter obsolete, increasing the importance of securing east-west traffic among nodes.

While many organizations remain heavily invested in perimeter security, they are often extremely limited in their ability to detect and prevent lateral movement within their data center and cloud infrastructure.

What is Lateral Movement?

Lateral movement is the set of steps that attackers who have gained a foothold in a trusted environment take to identify the most vulnerable and/or valuable assets, expand their level of access, move to additional trusted assets, and further advance in the direction of high-value targets. Lateral movement typically starts with an infection or credential-based compromise of an initial data center or cloud node. From there, an attacker may employ various discovery techniques to learn more about the networks, nodes, and applications surrounding the compromised resource.

As attackers are learning about the environment, they often make parallel efforts to steal credentials, identify software vulnerabilities, or exploit misconfigurations that may allow them to move successfully to their next target node.

When an attacker executes an effective combination of lateral movement techniques, it can be extremely difficult for IT teams to detect, as these movements often blend in with the growing volume of legitimate east-west traffic. The more attackers learn about how legitimate traffic flows work, the easier it is for them to masquerade their attacks as sanctioned activity. This, combined with many organizations’ insufficient investment in lateral movement security, can cause security breaches to escalate quickly.

Assessing Lateral Movement Security

One fast, simple, and inexpensive step that organizations concerned about lateral movement security can take is to test how vulnerable their environment is to unsanctioned east-west traffic. GuardiCore Labs offers a free, open-source breach and attack simulation tool called Infection Monkey that can be used for this purpose.

Infection Monkey scans the environment, identifies potential points of vulnerability, and executes predetermined attack scenarios to attempt lateral movement. The output is a security report that identifies the issues discovered and includes actionable remediation recommendations.

Infection Monkey Warns of Danger of Lateral Movement
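Infection Monkey's actual attack techniques are far more sophisticated, but the core idea of probing for reachable lateral-movement vectors and reporting on them can be illustrated with a toy sketch. The port list, the `probe_host` and `report` helpers, and the remediation wording below are illustrative assumptions, not part of the real tool:

```python
import socket

# Services commonly probed during lateral movement (illustrative subset).
LATERAL_MOVEMENT_PORTS = {22: "SSH", 445: "SMB", 3389: "RDP", 5985: "WinRM"}

def probe_host(host, ports=LATERAL_MOVEMENT_PORTS, timeout=0.5):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    reachable = {}
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                reachable[port] = service
    return reachable

def report(findings):
    """Turn {host: {port: service}} findings into remediation hints."""
    return [f"{host}:{port} ({service}) is reachable east-west; "
            f"restrict it with a segmentation policy"
            for host, ports in findings.items()
            for port, service in ports.items()]
```

A real breach-and-attack simulation goes well beyond connectivity checks, of course: it also attempts credential reuse and known exploits against the services it finds.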

Visualizing East-West Traffic

Organizations seeking more proactive lateral movement security can begin by visualizing the east-west traffic in their environment. Once a clear baseline of sanctioned east-west traffic is established and viewable on a real-time and historical basis, it becomes much easier to identify unsanctioned lateral movement attempts.

This is one of the flagship capabilities of GuardiCore Centra. Centra uses network and host-based sensors to collect detailed information about assets and flows in data center, cloud, and hybrid environments, combines this information with available labeling information from orchestration tools, and displays a visual representation of east-west traffic in the environment.

Visibility for Lateral Movement

This added visibility alone delivers immediate benefits to organizations seeking a greater understanding of potential lateral movement risks. It also provides the foundation for more sophisticated lateral movement security techniques.
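The collection-and-labeling step described above can be sketched in a few lines. The flow-record fields and the label map below are assumptions for illustration; Centra's actual data model is richer:

```python
from collections import defaultdict

def build_flow_map(flow_records):
    """Aggregate raw flow records into an east-west traffic map:
    (src, dst) -> set of (process, port) pairs observed between the assets."""
    flow_map = defaultdict(set)
    for rec in flow_records:
        flow_map[(rec["src"], rec["dst"])].add((rec["process"], rec["port"]))
    return dict(flow_map)

def apply_labels(flow_map, labels):
    """Re-key the map with orchestration labels (e.g. app or tier names)
    where they are known, keeping raw asset names otherwise."""
    labeled = defaultdict(set)
    for (src, dst), flows in flow_map.items():
        labeled[(labels.get(src, src), labels.get(dst, dst))] |= flows
    return dict(labeled)
```

Once flows are aggregated this way, each edge can be rendered in a traffic map and compared against a historical baseline to surface previously unseen connections.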

Improving Lateral Movement Security

Once an organization has a clear view of both sanctioned and unsanctioned east-west traffic in its data center and cloud infrastructure, it can use this information to take active steps to stop lateral movement. An optimal approach includes a mix of both proactive and reactive lateral movement security techniques.

Micro-Segmentation Policies

Once an IT team has visualized its east-west traffic, the addition of micro-segmentation policies can significantly reduce attackers’ ability to move laterally. Micro-segmentation applies workload and process-level security controls to data center and cloud assets that have an explicit business purpose for communicating with each other. When strong micro-segmentation policies are implemented, attempts at lateral movement that do not explicitly match sanctioned flows – down to the specific process level – can generate alerts to the security operations team or even be blocked proactively.
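A minimal sketch of that process-level matching logic might look like the following, assuming a simple allowlist rule shape. The `Rule` fields and the action names here are illustrative, not Centra's policy model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_label: str   # e.g. application tier of the source workload
    dst_label: str   # application tier of the destination workload
    port: int
    process: str     # specific process sanctioned to serve this flow

def evaluate_flow(flow, rules, default_action="block"):
    """Return 'allow' if the flow matches a sanctioned rule down to the
    process level; otherwise the default action ('alert' or 'block')."""
    for rule in rules:
        if (flow["src_label"], flow["dst_label"],
            flow["port"], flow["process"]) == \
           (rule.src_label, rule.dst_label, rule.port, rule.process):
            return "allow"
    return default_action
```

For example, a web-to-database flow served by the expected database process on its usual port would be allowed, while the identical flow handled by an unexpected process would be blocked or raised as an alert.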

Detecting and Responding to Unauthorized East-West Traffic

While micro-segmentation policies significantly improve lateral movement security, it is important to complement them with additional detection and response capabilities. In addition to raising alerts when policy violations occur, GuardiCore Centra can detect and respond to unauthorized east-west traffic by leveraging deception technology to monitor and investigate suspicious behavior.

Deception

GuardiCore Centra applies deception technology to analyze failed attempts at lateral movement and redirect suspicious behavior to a high-interaction deception engine. The attacker is fed responses that suggest their attack techniques are succeeding, while all of their tools, techniques, and exploits are recorded and analyzed in a fully isolated environment.


This helps IT teams learn more about the lateral movement being attempted in the environment and assess how to best improve security policies over time.
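The redirect-on-suspicion idea can be sketched as a small piece of routing logic. The threshold, honeypot address, and class interface below are illustrative assumptions, not Centra's implementation:

```python
from collections import Counter

class DeceptionRedirector:
    """Divert sources with repeated failed east-west connection attempts
    to an isolated honeypot where their activity can be recorded."""

    def __init__(self, honeypot_addr="10.99.0.1", threshold=3):
        self.honeypot_addr = honeypot_addr
        self.threshold = threshold
        self.failed_attempts = Counter()

    def record_failure(self, src):
        """Called when a lateral-movement attempt from `src` fails."""
        self.failed_attempts[src] += 1

    def route(self, src, dst):
        """Return where the next connection from `src` should land."""
        if self.failed_attempts[src] >= self.threshold:
            return self.honeypot_addr  # suspicious source: feed the deception engine
        return dst  # normal traffic passes through untouched
```

The key design point is that legitimate traffic is never disturbed: only sources that have already tripped over the segmentation policy are quietly steered into the isolated environment.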

A Growing Strategic Priority

While strong perimeter security remains essential, the transition from traditional on-premises infrastructure to hybrid-cloud and multi-cloud architectures is increasing the strategic importance of lateral movement security.

It’s essential for security teams to:

  • Gain ongoing visibility into their organization’s east-west traffic
  • Develop techniques for differentiating between sanctioned and unsanctioned east-west traffic
  • Implement controls like micro-segmentation to tightly govern infrastructure activity
  • Actively monitor for unauthorized lateral movement to both contain breaches quickly and continuously refine policies based on the latest attack techniques

Organizations that move beyond perimeter-focused thinking and place greater emphasis on lateral movement security will ensure that their security measures remain in step as IT infrastructure becomes more dynamic and heterogeneous.

For more information about Micro-Segmentation, visit our Micro-Segmentation Hub