As our CTO Ariel Zeitlin mentioned in his recent post, the Guardicore field team has been very busy over the past several months working with some of the world’s largest corporations on hybrid cloud security projects. More specifically, the Guardicore Centra solution has been helping these companies gain greater visibility and create micro-segmentation policies.
At the same time, the Guardicore product teams were busy developing the next wave of innovation for Guardicore Centra. Some of our customers told us that the ability to quickly innovate and introduce new capabilities is one of our key differentiators as a company, and we take this feedback and the responsibility to push the boundaries of our technology seriously.
I have selected a few highlights from the recent releases to share with you, to give you a glimpse of the exciting progress we are making. The overview below is only partial; for the complete list of new features and release content, please see the documentation on our customer portal (login required).
Of note – we are currently on release 28, will soon make release 29 early-availability (EA), and are starting development of release 30. We are in continuous motion, upgrading, optimizing, and pushing out the best improvements for our customers and, if I may add a personal note, setting an example for the industry.
Guardicore Reveal provides visibility into application flows and processes. When visualizing assets, you can now group them according to multiple, nested keys, which gives a much clearer view of large data centers and of communication flows between environments, applications, and roles. In addition, Centra now supports defining segmentation rules using more complex label logic. Want to know more? Watch the demo to learn about Centra and visibility.
Some of the other recent enhancements include the following capabilities:
Nested Map Groupings
Users can now define map groupings that consist of multiple keys, forming a nested map structure. For example, a user can define a default “Environment” → “Application” → “Role” grouping; Reveal maps will then show the different environments by default. When an environment is expanded, it reveals its underlying applications, and when an application is expanded, Reveal shows its underlying roles.
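As a rough illustration (the label keys and asset names here are hypothetical, not Centra’s actual data model), nested grouping can be thought of as recursively building a tree of label values:

```python
from collections import defaultdict

# Hypothetical asset records labeled with Environment, Application, and Role.
assets = [
    {"name": "web-1", "Environment": "Production", "Application": "eCommerce", "Role": "Web"},
    {"name": "db-1",  "Environment": "Production", "Application": "eCommerce", "Role": "DB"},
    {"name": "web-2", "Environment": "Staging",    "Application": "eCommerce", "Role": "Web"},
]

def group_assets(assets, keys):
    """Recursively group assets by a list of label keys to form a nested map."""
    if not keys:
        return [a["name"] for a in assets]
    head, rest = keys[0], keys[1:]
    buckets = defaultdict(list)
    for a in assets:
        buckets[a[head]].append(a)
    return {value: group_assets(group, rest) for value, group in buckets.items()}

tree = group_assets(assets, ["Environment", "Application", "Role"])
# Expanding "Production" reveals its applications; expanding "eCommerce"
# reveals its roles, mirroring the drill-down behavior described above.
```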
AND Segmentation Rules
Segmentation rules now support specifying the result of a logical AND operation on label criteria as a rule’s source or destination. As in previous versions, users can get these suggestions directly from the Reveal map or enter them manually in the Segmentation Policy screen.
AND rules are directly related to nested groups. For example, when suggesting rules from the eCommerce application node in the Production environment, to the Data Processing application in the Production environment, the resulting rules will have a source of “Environment: Production AND Application: eCommerce” and a destination of “Environment: Production AND Application: Data Processing”.
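The semantics of such a rule can be sketched as follows; the dictionary shape is illustrative, not Centra’s actual rule schema:

```python
# Hypothetical representation of an AND rule: every label criterion on the
# source/destination side must match for the rule to apply (logical AND).
rule = {
    "source":      {"Environment": "Production", "Application": "eCommerce"},
    "destination": {"Environment": "Production", "Application": "Data Processing"},
}

def matches(labels, criteria):
    """True only if the asset's labels satisfy ALL criteria (AND semantics)."""
    return all(labels.get(k) == v for k, v in criteria.items())

src = {"Environment": "Production", "Application": "eCommerce", "Role": "Web"}
dst = {"Environment": "Production", "Application": "Data Processing"}
rule_applies = matches(src, rule["source"]) and matches(dst, rule["destination"])
```

A Staging asset would fail the source criteria, so the rule would not apply to it, which is exactly the precision nested groups make possible.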
One-Click Daily Maps
This new feature produces daily Reveal maps, generated automatically every 24 hours. Clicking “Explore” on the Reveal menu displays the most recent map by default. Maps are created once and are automatically updated based on your configuration.
Time estimation – We added a progress indicator for map builds. When you create a new map over an extended time frame (a week, a month, etc.) or activate the Accurate connection times option in the Create New Map window, you will see an ETA indication on the Saved Maps page.
Tighter Process Level Policy Enforcement
To enable more granular and secure policies, we added the ability to explicitly specify the full path of a process in Allow/Block rules. For example, when creating a policy for the application “nginx”, Centra will suggest allowing /usr/local/nginx rather than /tmp/nginx.
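A minimal sketch of why the full path matters (the rule shape and port are hypothetical): a rule pinned to a bare process name would also match a rogue binary dropped in /tmp, while a full-path rule does not.

```python
# Sketch: an allow rule pinned to a full executable path (values illustrative).
allow_rule = {"process_path": "/usr/local/nginx", "port": 443, "action": "allow"}

def rule_allows(rule, observed_path, observed_port):
    """Allow only when both the exact executable path and the port match."""
    return (rule["action"] == "allow"
            and observed_path == rule["process_path"]
            and observed_port == rule["port"])

legit = rule_allows(allow_rule, "/usr/local/nginx", 443)  # trusted binary
rogue = rule_allows(allow_rule, "/tmp/nginx", 443)        # impostor in /tmp
```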
Cloud Native Visibility, More Multi-Cloud UI Controls
We simplified the way users activate multiple orchestration providers (AWS, vSphere, and Kubernetes) simultaneously. Asset inventory and metadata are continuously fetched from all defined orchestration providers.
We also added the ability to display orchestration data from multiple sources for the same Kubernetes asset. All the data about a specific node is now collected from both the Kubernetes API and the compute providers’ APIs.
For Guardicore customers using agentless, managed cloud solutions such as AWS, GCP, and Azure, we provide a visibility and ‘soft’ enforcement solution based on AWS’s native virtual private cloud (VPC) flow logs. VPC flow logs provide a way to inspect the flows between all the cloud assets within a given cloud network. Policy-wise, this means only alerts are supported, without enforcement.
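To make the ‘soft enforcement’ idea concrete, here is a simplified sketch that parses an AWS VPC flow log record (default v2 field order per AWS documentation) and flags, rather than blocks, a flow that violates a hypothetical policy. The policy, addresses, and alerting rule are invented for illustration:

```python
# Default v2 VPC flow log field order, as documented by AWS.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow(record):
    """Split a space-separated flow log record into named fields."""
    return dict(zip(FIELDS, record.split()))

# Hypothetical 'soft enforcement': alert on (not block) SSH into a protected
# subnet. 10.0.2.0/24 stands in for a sensitive database segment.
def should_alert(flow, protected_prefix="10.0.2.", watched_port="22"):
    return flow["dstaddr"].startswith(protected_prefix) and flow["dstport"] == watched_port

record = "2 123456789010 eni-abc123 10.0.1.5 10.0.2.9 49152 22 6 10 840 1620000000 1620000060 ACCEPT OK"
flow = parse_flow(record)
# The flow reached a protected asset over SSH, so an alert would be raised,
# even though the cloud fabric itself ACCEPTed the packets.
```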
Private Threat Feeds Integrated into Guardicore Reputation Services
Our users asked us to let them use their own existing threat feeds (IoCs) with the Guardicore Reputation Service. Guardicore users can now add an internal threat feed and enjoy the same rich visual incident experience as with all other Guardicore incidents. The supported IoC types are file hashes and IP addresses. IoCs are uploaded in JSON format via the Centra REST API; once uploaded, Centra will alert on the presence of these IoCs across the customer’s entire data center.
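A payload for such an upload might be assembled as below. The JSON shape and field names are illustrative only, not Centra’s documented schema; consult the customer portal for the actual API. (The file hash shown is the well-known EICAR test file MD5.)

```python
import json

# Hypothetical IoC upload payload: one IP indicator and one file-hash indicator.
iocs = [
    {"type": "ip",   "value": "203.0.113.42",
     "description": "Suspected C2 server from internal feed"},
    {"type": "file", "value": "44d88612fea8a8f36de82e1278abb02f",
     "description": "EICAR test file (MD5)"},
]
payload = json.dumps({"indicators": iocs})
# This payload would then be POSTed to the Centra REST API (endpoint path
# omitted here, as it is documented on the customer portal).
```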
Rapid development and deployment can be a major competitive advantage. This approach minimizes waste and cost, aligns business and IT teams, and allows companies to respond to real-time customer needs and market trends. However exciting these opportunities are, it’s important to remember that dynamic, complex IT environments create increasing risk, and that reliability and security are a must-have, not an optional extra.
Ensuring that rapid development and security protocols are not at odds should be a goal for any forward-focused business, especially during October’s Cyber Security Awareness Month. Shifting left on security is becoming increasingly popular, but how can it be done?
Embracing the Shift Left Approach from a Security Standpoint
The idea behind the ‘shift-left’ approach for security is simple. Instead of first building a new product or service entirely, and then introducing security as a rubber stamp of approval at the end, you bring the security process in at an earlier point in the timeline, at the DevOps stage.
This has multiple benefits. From a business perspective, it’s a more cost-effective way to work on a new project. In fact, according to software development guru Steve McConnell, “violations are 10x to 20x less expensive to resolve during software development compared to at the production release step.”
The shift-left approach also ensures that areas such as reliability and compliance are considered at the earliest possible stage and can be part of the game plan from the start. As any security problems are discovered at the beginning, they are much easier to resolve, as they aren’t integral to the product yet. Troubleshooting security issues in advance means you can fix potential security violations before they become a reality.
Change the Way Security Fits Within your Business Structure and Company Culture
Without “shifting left,” when security is added as an afterthought, key stakeholders in development have historically seen security as a hurdle to get past, or a hoop to jump through. Often, security can stand in the way of a product or a service, making it more difficult to make quick decisions or streamline a process.
By moving security earlier on in the process, it can do the exact opposite – making it easier to say yes to new innovation and change. One example could be third-party code that would speed up development of a new product. Instead of being forced to build your own code from scratch to ensure security, automated processes could scan the code at the point of entry and ensure it is architecturally sound, working with DevOps teams to make their lives easier.
Going Further to Break Down Traditional Silos
Another method to increase the speed of deployment and its agility is to create a shared ownership over delivery of projects as well as a shared accountability for each other’s bottom lines. If development is responsible for secure code going out, and security is responsible for quick deployment, they suddenly have a shared goal they can work towards.
This change in mentality delivers functionality and security together, with a seamless ability to gather feedback and improve. It is effective within a specific development cycle, and also as an overall posture of communication and collaboration for your company. Furthermore, this approach makes the security function less disruptive: a quiet, constant part of the process rather than an addition that seems to blow up the hard work of your development team at the very last stage.
What Does This Look Like in Action?
Embedding security into the application itself as part of the risk reduction process can be done in a number of ways. Let’s look at a practical example of implementing this methodology using GuardiCore Centra.
First, identify the applications and the connections they create, either in staging or in the QA environment. You can then verify and analyze the associated risks.
Once the GuardiCore agent is embedded into the workloads, you can configure the security policy using our flexible policy engine. Because policies are workload-specific, this can be implemented with a Zero Trust policy model. Policies are applied to the assets themselves, without relying on IP addresses or physical location, so wherever the application moves, the policy follows.
The Benefits are Clear
Rapid digital transformation is essential for business success, and yet without security at its core – the risks are simply too great. Rather than allow security to continue to take a bolted-on role that is disparate from business process, we should be using tools such as Centra to enable security to shift-left and take an early and equal continuous role in development.
As our CTO Ariel Zeitlin has shared, the sooner you get started, the sooner you can enjoy the taste of your success.
Rapid adoption of cloud services by companies of all sizes is enabling many business benefits, most notably improved agility and lower IT infrastructure costs. However, as IT environments become more heterogeneous and geographically distributed in nature, many organizations are seeing their security attack surface multiply exponentially. This challenge is compounded by the accelerating rate of IT infrastructure change as more organizations embrace DevOps-style application deployment approaches and more extensive infrastructure automation.
Longstanding security practices such as system hardening, proactive vulnerability management, strong access controls, and network segmentation continue to play valuable roles in security teams’ attack surface reduction efforts. However, these measures alone are no longer sufficient in hybrid cloud environments for several reasons.
The first is that while these practices remain relevant, they do little to counteract the significant attack surface growth that cloud adoption and new application deployment models like containers are introducing. In addition, it is difficult to implement these practices consistently across a hybrid cloud infrastructure, as they are often tied to a specific on-premises or cloud environment. Lastly, as application deployment models become more distributed and dynamic, they expose organizations to greater risk of unsanctioned lateral movement. As the volume of east/west traffic grows, network-based segmentation alone is too coarse to prevent attackers from exploiting open ports and services to expand their attack footprint and find exploitable vulnerabilities.
These realities are leading many security executives and industry experts to embrace micro-segmentation as a strategic priority. Implementing a holistic micro-segmentation approach that includes visualization capabilities and process-level policy controls is the most effective way to reduce attack surface as the cloud transforms IT infrastructure. Moreover, because micro-segmentation is performed at the workload level rather than at the infrastructure level, it can be implemented consistently throughout a hybrid cloud infrastructure and adapt seamlessly as environments change or workloads relocate.
Visualizing the Attack Surface
One of the most beneficial steps that security teams can take to reduce their attack surface is to gain a deeper understanding of how their application infrastructure functions and how it is evolving over time. By understanding the attack surface in detail, security teams can be much more effective at implementing new controls to reduce its size.
Using a micro-segmentation solution to visualize the environment makes it easier for security teams to identify indicators of compromise and assess their current state of potential exposure. This process should include visualizing individual applications (and their dependencies), systems, networks, and flows to clearly define expected behavior and to identify areas where additional controls can reduce the attack surface.
Attack Surface Reduction with Micro-Segmentation
As more application workloads shift to public cloud and hybrid-cloud architectures, one area where existing attack surface reduction efforts often fall short is lateral movement detection and prevention. More distributed application architectures are significantly increasing the volume of “east/west” traffic in many data center and cloud environments. While much of this traffic is legitimate, trusted assets that are capable of communicating broadly within these environments are attractive targets for attackers. They are also much more accessible as the traditional concept of a network perimeter becomes less relevant.
When an asset is compromised, the first step that attackers often take is to probe and profile the environment around the compromised asset, seek out higher-value targets, and attempt to blend lateral movement in with legitimate application and network activity.
Micro-segmentation solutions can help defend against this type of attack by giving security teams the ability to create granular policies that:
- Segment applications from each other
- Segment the tiers within an application
- Create a clear security boundary around assets with specific compliance or regulatory requirements
- Enforce general corporate security policies and best practices throughout the infrastructure
These measures and others like them slow or block attackers’ efforts to move laterally. When implemented effectively, micro-segmentation applies the principle of least privilege more broadly throughout the infrastructure, even as it extends from the data center to one or more cloud platforms.
This focus on preventing lateral movement through in-depth governance of applications and flows reduces the available attack surface even as IT infrastructure grows and diversifies.
Beyond the Network Attack Surface
Successful use of micro-segmentation to reduce attack surface requires both Layer 4 and Layer 7 process-level controls. Process-level control allows security teams to truly align their security policies with specific application logic and regulatory requirements rather than viewing them purely through an infrastructure lens.
This application awareness is a key enabler of the attack surface reduction benefits of micro-segmentation. Granular policies that whitelist very specific process-level flows are much more effective at reducing attack surface than Layer 4 controls, which savvy attackers can circumvent by exploiting systems with trusted IP addresses and/or blending attacks in over allowed ports.
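As a toy illustration of that difference (all addresses, ports, and paths below are hypothetical), a Layer 4 rule cannot tell a sanctioned binary from an impostor riding the same allowed connection, while a process-aware Layer 7 rule can:

```python
# A flow from a trusted host, over an allowed port, but generated by a rogue
# process. Layer 4 inspection sees only addresses and ports and lets it pass;
# a Layer 7 rule also pins the expected executable and rejects it.
flow = {"src": "10.0.1.5", "dst": "10.0.2.9", "port": 5432, "process": "/tmp/exfil"}

def l4_allows(f):
    """Layer 4 check: trusted source, trusted destination, allowed port."""
    return f["src"] == "10.0.1.5" and f["dst"] == "10.0.2.9" and f["port"] == 5432

def l7_allows(f):
    """Layer 7 check: the Layer 4 conditions AND the sanctioned process."""
    return l4_allows(f) and f["process"] == "/usr/bin/postgres"
```

The attacker blends in at Layer 4 but is exposed at Layer 7, which is precisely the attack-surface reduction argument above.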
Granular Layer 7 policy controls make it far more feasible for organizations to achieve a zero-trust architecture in which only the application activity and flows that represent known, sanctioned behavior are allowed to function unimpeded in the trusted environment.
The Importance of a Multi-OS, Multi-Environment Approach
As the transition to hybrid cloud environments accelerates, it is easy for organizations to overlook the extent to which this change magnifies the size of their attack surface. New physical environments, platforms, and application deployment methods create many new areas of potential exposure.
In addition to providing more granular control, another benefit that micro-segmentation provides to organizations seeking to reduce attack surface is that it achieves a unified security model that spans multiple operating systems and deployment environments. When policies are focused on specific process and flows rather than infrastructure components, they can be applied across any mix of on-premises and cloud-hosted resources and even remain consistent when a specific workload moves between the data center and one or more cloud platforms. This is a major advantage over point security products that are tied to a specific environment or platform, as it enables attack surface to be minimized even as the environment becomes larger and more heterogeneous.
When selecting a micro-segmentation platform, it is important to validate that the solution works seamlessly across your entire infrastructure without any environment- or platform-specific dependencies. This includes validating that the level of control is consistent between Windows and Linux and that there is no dependence on built-in operating system firewalls, which do not offer the necessary flexibility and granularity.
While the transformation to cloud or hybrid-cloud IT infrastructure does have the potential to introduce new security risks, a well-managed micro-segmentation approach that is highly granular, de-coupled from the underlying infrastructure, and application aware can actually reduce the attack surface even more as more infrastructure diversity and complexity is introduced.
For more information on micro-segmentation, visit our Micro-Segmentation Hub
Recently I have been personally engaged in implementing micro-segmentation for several of our key customers, including a top US retail brand, a major Wall Street bank, a top international pharmaceutical company, and a leading European telco. Spending significant time with each of these customers, including running weekly project calls, workshops, and planning meetings, has given me a unique glimpse into the reality of implementing such projects in a major enterprise: a point of view that is not always available to a vendor.
I would like to share some observations of how those projects roll out, and hope you will find these insights useful and especially helpful if you are planning to implement micro-segmentation in your network. Each blog in this short series will focus on one insight I’ve gathered from my time both in the boardroom and in the trenches, and I hope you find some practical pieces to help you improve your understanding and implementation of any current or upcoming security projects.
Application segmentation is not necessarily the short-term objective
If you look at the online material for micro-segmentation, vendors, analysts, and experts all talk about breaking your data center into applications, breaking those applications into tiers, and limiting access among them to only what the applications need.
I was surprised to discover that many customers look at the problem from a slightly different angle. For them, segmentation is a risk-reduction project driven by regulations, internal auditing requirements, or simply a desire to reduce the attack surface. These drivers do not always translate into segmenting applications from each other; when application segmentation is a priority, it is usually a means to an end rather than the primary objective, and it is not necessarily a comprehensive process in the short term. Let me give you a couple of examples:
- A major Wall Street bank was required by its auditor to validate that admin access to servers happens only through a jump box or a CyberArk-like solution. In practice, the bank wanted to set a policy that “Windows machines can only be accessed via RDP from this set of machines – all other RDP connections are not allowed. Linux machines can only be accessed via SSH from this set of machines – all other SSH connections are not allowed.” There is no need to explain the risk-reduction contribution of such a simple policy, but it has nothing to do with segmenting your data center by applications. Theoretically, one could achieve this goal as a side effect of complete data-center segmentation, but that would require significantly more effort, and the result would be somewhat implicit and harder to demonstrate to the auditor.
- A European bank needed to implement a simple risk-reduction scheme: to mark each server in its data center as “accessible from ATMs,” “accessible from printer area,” “accessible from user area,” or “not accessible from non-server area,” with very simple, well-defined rules for each of the groups. Again, the attack surface reduction is quite simple, and in their case very significant, but it has little to do with textbook application segmentation. Here too you could theoretically achieve the same goal by implementing classic micro-segmentation, but as the saying goes, don’t kill a mosquito with a cannon. Most of these organizations do plan to implement micro-segmentation as the market defines it, but they know it takes time, and they want to hit the low-hanging fruit of risk reduction early while implementing this crucial security project incrementally, in a way that makes the most sense for their business.
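The jump-box requirement from the first example above can be sketched as a tiny policy check. All addresses and the machine set are hypothetical:

```python
# Sketch of the jump-box policy: admin protocols (RDP for Windows, SSH for
# Linux) are allowed only when they originate from a designated set of
# machines. All other admin connections are denied.
JUMPBOXES = {"10.0.0.10", "10.0.0.11"}
ADMIN_PORTS = {3389: "RDP", 22: "SSH"}

def admin_access_allowed(src_ip, dst_port):
    """Allow RDP/SSH only from a jump box; other traffic is out of scope here."""
    if dst_port in ADMIN_PORTS:
        return src_ip in JUMPBOXES
    return True  # non-admin traffic is governed by other rules
```

Note how little this has to do with application tiers: it is a single, auditable rule, yet it removes direct admin access as an attack path across the whole estate.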
So if you are looking to implement a micro-segmentation project, understand your goals, drivers, and motivations, and remember that this is a risk-reduction project after all. As they say, there are many ways to peel an orange; some are simpler, faster, and more efficient than others. But the sooner you get started, the sooner you can enjoy the taste of your success. In any case, when choosing technology for a segmentation project, make sure you choose one flexible enough to support textbook micro-segmentation while also addressing the numerous other use cases you might not even be aware of at the initial stages.
Stay tuned to our blog to catch more of my upcoming insights from the trenches.
Learn more information about choosing a micro-segmentation solution.
A critical vulnerability (CVE-2018-10933) was disclosed in libSSH, a library implementing the SSH2 protocol for clients and servers. The vulnerability allows an attacker to completely bypass the authentication step and connect to the server without providing any credentials: by sending the server a SSH2_MSG_USERAUTH_SUCCESS message in place of the expected authentication request, the attacker tricks the server into treating the session as already authenticated. This is the worst possible flaw for a library implementing SSH.
Honeypots are systems on your network that attract hackers and reroute them away from your servers, trapping them so you can identify malicious activity before it causes harm. The perfect decoy, they often contain false information and provide no access to live data. Honeypots are a valuable tool for uncovering information about your adversaries in a low-risk environment. A more sophisticated honeypot can even divert attackers in real time as they attempt to access your network.
How Does Honeypot Security Work?
The design of a honeypot security system is extremely important. The system should be built to look as similar as possible to your real servers and databases, both internally and externally. While it looks like your network, the honeypot is a replica, entirely separate from your real servers, and throughout an attack it can be closely monitored by your IT team.
A honeypot is built to trick attackers into breaking into that system instead of elsewhere. The value of a honeypot is in being hacked. This means that the security controls on your honeypot need to be weaker than on your real server. The balance is essential. Too strong, and attackers won’t be able to make a move. Too weak, and they may suspect a trap.
Your security team will need to decide whether to deploy a low-interaction or a high-interaction honeypot. A low-interaction solution is a less convincing decoy but easier to create and manage, while a high-interaction system provides a more faithful replica of your network but involves more effort for IT. This could include tools for tricking returning attackers or for separating external and internal deception.
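To make the low-interaction end of that spectrum concrete, here is a minimal sketch of a decoy listener: it accepts connections on a port nothing legitimate should use, presents a fake banner, records who connected and what they sent, and serves no real data. The port, banner, and log format are arbitrary choices for illustration:

```python
import socket
from datetime import datetime, timezone

attempts = []  # every entry here is suspect: nothing legitimate connects

def record_attempt(addr, data):
    """Log a connection attempt with a timestamp, source, and raw payload."""
    entry = {"time": datetime.now(timezone.utc).isoformat(),
             "source": addr,
             "payload": data.decode(errors="replace")}
    attempts.append(entry)
    return entry

def serve(host="127.0.0.1", port=2222, max_conns=1):
    """Accept connections, present a decoy SSH banner, and log what arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # decoy banner
                record_attempt(addr[0], conn.recv(1024))
```

A real low-interaction honeypot would fake more of the protocol handshake, but even this skeleton illustrates the principle: the decoy's only job is to attract, record, and reveal.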
What Can a Honeypot Cyber Security System Do?
Your honeypot cyber security system should be able to simulate multiple virtual hosts at the same time, passively fingerprint attackers, simulate numerous TCP/IP stacks and network topologies, and set up HTTP and FTP servers as well as virtual IP addresses with UNIX applications.
The type of information you glean depends on the kind of honeypot security you have deployed. There are two main kinds:
Research Honeypot: This type of honeypot security is usually favored by educational institutions, researchers, and non-profits. By uncovering the motives and behavior of hackers, research teams such as Guardicore Labs can learn the tactics the hacking community is using. They can then spread awareness and new intelligence to prevent threats, promoting innovation and collaboration within the cyber security community.
Production Honeypot: More often used by enterprises and organizations, production honeypot cyber security measures are used to mitigate the risk of an attacker on their own network, and to learn more about the motives of bad actors on their data and security.
These honeypots have one particular element in common: the drive to get into the mind of the attacker and recognize the way they move and respond. By attracting and tracking adversaries, and wasting their time, you can reinforce your security posture with accurate information.
What are the Benefits of Honeypot Security?
Unlike a firewall, a honeypot is designed to identify both internal and external threats. While a firewall can prevent attackers from getting in, a honeypot can detect internal threats and act as a second line of defense when a firewall is breached. A honeypot therefore gives you greater intelligence and threat detection than a firewall alone, and an added layer of security against malware and database attacks.
Because honeypots are not supposed to receive any traffic, any traffic they do see is malicious by definition. This makes detection exceptionally easy: there are no anomalies to sift through before you start learning about possible attacks, and the resulting datasets are small but entirely high-value, since your IT and analytics teams do not have to filter out legitimate traffic.
Honeypot security also puts you ahead of the game. While your attackers believe they have made their way into your network, you have diverted their attacks to a system with no value. Your security team is given early warning against new and emerging attacks, even those that do not have known attack signatures.
Making Valuable Use of Honeypot Security
More recently, sophisticated honeypots support the active prevention of attacks. A comprehensive honeypot security solution can redirect opportunistic hackers from real servers to your honeypot, learning about their intentions and following their moves, before ending the incident internally with no harm done.
Using cutting-edge security technology, a honeypot can divert a hacker in real-time, re-routing them away from your actual systems and to a virtualized environment where they can do no harm. Dynamic deception methods generate live environments that adapt to the attackers, identifying their methods without disrupting your data center performance.
You can then use the information you receive from the zero-risk attack to build policies against malicious domains, IP addresses and file hashes within traffic flows, creating an environment of comprehensive breach detection.
It’s important to remember that a high-interaction honeypot without endpoint security could be used as a launch pad for attacks against legitimate data and truly valuable assets. Honeypots are intended to invite attackers, and therefore add risk and complexity to your IT ecosystem. As with any tool, honeypots work best when they are integrated as part of a comprehensive solution for a strong security posture. The best cyber-security choice for your organization will incorporate honeypots as a detection and prevention tool, while utilizing additional powerful security measures to protect your live production environment.
Virtualization & Cloud Review comments that while honeypots and other methods of intrusion detection “are usable in a classical environment, they really shine in the kinds of highly automated and orchestrated environments that make use of microsegmentation.”
Honeypot security systems can add a valuable layer of security to your IT systems and give you an incomparable chance to observe hackers in action, and learn from their behavior. You can gather valuable insight on new attack vectors, security weaknesses and malware, using this to better train your staff and defend your network. With the help of micro-segmentation, your honeypot security strategy does not need to leave you open to risk, and can support an advanced security posture for your entire organization.
File integrity monitoring (FIM) is an internal control that examines files to see the way that they change, establishing the source, details and reasons behind the modifications made and alerting security if the changes are unauthorized. It is an essential component of a healthy security posture. File integrity monitoring is also a requirement for compliance, including for PCI-DSS and HIPAA, and it is one of the foremost tools used for breach and malware detection. Networks and configurations are becoming increasingly complex, and file integrity monitoring provides an increased level of confidence that no unauthorized changes are slipping through the cracks.
How Does File Integrity Monitoring Work?
In a dynamic, agile environment, you can expect continuous changes to files and configuration. The trick is to distinguish authorized changes, due to security updates, communication, or patch management, from problems like configuration errors or malicious tampering that need your immediate attention.
File integrity monitoring uses baseline comparison to make this differentiation. One or more file attributes are stored as a baseline, and this is then compared periodically when the file is checked. Examples of baseline data include user credentials, access rights, creation dates, and last known modification dates. To ensure the data itself has not been tampered with, the best solutions calculate a known cryptographic checksum and later compare it against the current state of the file.
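The checksum-based baseline comparison can be sketched in a few lines using a standard cryptographic hash (SHA-256 here; the monitored file content is invented for the example):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    """Compute a SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def check_integrity(path, baseline):
    """Return True if the file still matches its recorded baseline digest."""
    return file_digest(path) == baseline

# Establish a baseline, tamper with the file, and detect the change.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("PermitRootLogin no\n")
baseline = file_digest(path)

with open(path, "a") as f:
    f.write("PermitRootLogin yes\n")   # simulated unauthorized change
tampered = not check_integrity(path, baseline)
os.remove(path)
```

In a real FIM product this comparison runs continuously against a protected baseline store, and a mismatch is then correlated with change-management data to decide whether the modification was authorized.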
File Integrity Monitoring: Essential for Breach Detection and Prevention
File integrity monitoring is a prerequisite for many compliance regulations. PCI DSS, for example, mentions this foundational control in two sections of its policy, and for GDPR this kind of monitoring can support five separate articles on the checklist. From HIPAA for health organizations to NERC CIP for utility providers, file integrity monitoring is explicitly mentioned as a best practice for preventing unauthorized access or changes to data and files.
Outside of regulatory assessment, although file integrity monitoring can alert you to configuration problems like storage errors or software bugs, it’s most widely used as a powerful tool against malware.
There are two main ways that file integrity monitoring makes a difference. Firstly, once attackers have gained entry to your network, they often make changes to file contents to avoid being detected. By detecting every change happening on your network in depth and raising contextual alerts on unauthorized policy violations, file integrity monitoring ensures attackers are stopped in their tracks.
Secondly, the monitoring tools give you the visibility to see exactly what changes have been made, by whom, and when. This is the quickest way to detect and limit a breach in real-time, getting the information in front of the right personnel through alerts and notifications before any lateral moves can be made or a full-blown attack is launched.
Incorporating file integrity monitoring as part of a strong security solution can give you even more benefits. Micro-segmentation, for example, is an essential tool that goes hand in hand with it. File integrity monitoring can give you the valuable information you need about where the attack is coming from, while micro-segmentation allows you to reduce the attack surface within your data centers altogether, so that even if a breach occurs, no lateral movement is possible. You can create your own strict access and communication policies, making it easier to use your file integrity monitoring policies to distinguish the changes that are authorized from those that are not. Because micro-segmentation works in hybrid environments, ‘file’ monitoring becomes the monitoring of your entire infrastructure. This extended perimeter protection can cover anything from servers, workstations and network devices, to VMware, containers, routers and switches, directories, IoT devices and more.
Features to Look for in a File Integrity Monitoring Solution
Of course, file integrity monitoring can vary between security providers. Your choice needs to be integrated as part of a full-service platform that can help to mitigate the breach when it’s detected, rather than just hand off the responsibility to another security product down the line.
Making sure you find that ideal security solution involves checking the features on offer. There are some must-haves, which include real-time information so you always have an accurate view of your IT environment, and multi-platform availability. Most IT environments now use varied platforms including different Windows and Linux blends.
Another area to consider is how the process of file integrity monitoring seamlessly integrates with other areas of your security posture. One example would be making sure you can compare your change data with other event and log data for easy reporting, allowing you to quickly identify causes and correlative information.
If you’re using a micro-segmentation approach, creating rules is something you’re used to already. You want to look for a file integrity monitoring solution that makes applying rules and configuring them as simple as possible. Preferably, you would have a template that allows you to define the files and services that you want monitored, and which assets or asset labels contain those files. You can then configure how often you want these monitored, and be alerted of incidents as they occur, in real-time.
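A monitoring template of the kind described above might look something like the following sketch. The field names and values are purely illustrative assumptions, not any product's actual schema; the point is to show files, asset labels, and a check interval expressed as one reusable policy.

```python
# Hypothetical FIM policy template. All field names and values here
# are illustrative examples, not a real vendor schema.
fim_policy = {
    "name": "payment-service-binaries",
    "paths": ["/usr/local/bin/payd", "/etc/payd/payd.conf"],
    "asset_labels": {"env": "production", "app": "payments"},
    "check_interval_minutes": 15,
    "alert_on": ["hash", "permissions", "owner"],
}

def assets_in_scope(policy, assets):
    """Select the assets whose labels match every label the
    policy requires, so the template applies automatically to
    new workloads that carry the right labels."""
    wanted = policy["asset_labels"].items()
    return [a for a in assets
            if all(a.get("labels", {}).get(k) == v for k, v in wanted)]
```

Selecting assets by label rather than by hostname or IP is what lets the same template keep working as workloads are added, moved, or redeployed.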
Lastly, the alerts and notifications themselves will differ between solutions. Your ideal solution is one that provides high level reporting of all the changes throughout the network, and then allows you to drill down for more granular information for each file change, as well as sending information to your email or SIEM (security information and event management) for immediate action.
File Integrity Monitoring with Micro-Segmentation – A Breach Detection Must Have
It’s clear that file integrity monitoring is essential for breach detection, giving you the granular, real-time information on every change to your files, including the who, what, where and when. Alongside a powerful micro-segmentation strategy, you can detect breaches faster, limit the attack area ahead of time, and extend your perimeter to safeguard hybrid and multi-platform environments, giving you the tools to stay one step ahead at all times.
Business applications are the principal target of attackers seeking access to an organization’s most sensitive information, and as application deployment approaches become more dynamic and extend to external cloud platforms, the number of possible attack vectors is multiplying. This is driving a shift from traditional perimeter security to increased focus on detection and prevention of lateral movement within both on-premises and cloud infrastructure.
Most security pros and industry experts agree that greater segmentation is the best step that an organization can take to stop lateral movement, but it can be challenging to parse the various available segmentation techniques. For example, IT pros and security vendors alike often use the terms application segmentation and micro-segmentation interchangeably. There is, in fact, some overlap between these two techniques, but selecting the right approach for a specific set of security and compliance needs requires a clear understanding of the different ways in which segmentation can be performed.
What is Application Segmentation?
Application segmentation is the practice of implementing Layer 4 controls that can both isolate an application’s distinct service tiers from one another and create a security boundary around the complete application to reduce its exposure to attacks originating from other applications.
This serves two purposes:
- Enforcing clear separation between the tiers of an individual application, allowing only the minimum level of access to each tier required to deliver the application functionality
- Isolating a complete application from unrelated applications and other resources that could be possible sources of lateral movement attempts if compromised
It is a longstanding IT practice to separate business applications into tiers to improve both scalability and security. For example, a typical business application may include a set of load balancers that field inbound connections, one or more application servers that deliver core application functionality, and one or more database instances that store underlying application data.
Each tier has its own distinct security profile. For example, access to the load balancer is broad, but its capabilities are narrowly limited to directing traffic. In contrast, a database may contain large amounts of sensitive data, so access should be tightly limited.
This is where intra-application segmentation comes into play, as security teams may, for example, limit access to the database to specific IP addresses (e.g., the application server) over specific ports.
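The Layer 4 control described here is simply a default-deny allow-list keyed on source address, destination port, and protocol. The sketch below shows that logic in Python; the CIDR ranges and ports are invented examples, and a real deployment would enforce this in a firewall or segmentation agent rather than application code.

```python
import ipaddress

# Illustrative Layer 4 allow-list for a database tier. The addresses
# and ports are example values, not a real deployment.
ALLOW_RULES = [
    # (source CIDR, destination port, protocol)
    ("10.0.2.0/24", 5432, "tcp"),   # app servers -> PostgreSQL
]

def is_allowed(src_ip, dst_port, proto):
    """Default-deny: a flow is permitted only if some rule
    matches its source network, destination port, and protocol."""
    src = ipaddress.ip_address(src_ip)
    return any(src in ipaddress.ip_network(cidr)
               and dst_port == port and proto == p
               for cidr, port, p in ALLOW_RULES)
```

With this in place, a connection from an app server (say 10.0.2.15) to the database port is allowed, while the same host connecting to SSH, or any host outside the app-server subnet, is denied by default.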
The second important role that application segmentation can play is isolating an entire application cluster, such as the example above, from other applications and IT resources. There are a number of reasons that IT teams may wish to achieve this level of isolation.
One common reason is to reduce the potential for unauthorized lateral movement within the environment. Even with strong intra-application isolation between tiers in place, an attacker who compromises a resource in another application cluster may be able to exploit vulnerabilities or mis-configurations to move laterally to another cluster. Implementing a security boundary around each sensitive application cluster reduces this risk.
There may also be business or compliance reasons for isolating applications. For example, compliance with industry-specific regulations, such as HIPAA, PCI-DSS, and SWIFT security standards, is simplified by establishing clear isolation of in-scope IT resources. This is also true for jurisdictional regulations like the EU General Data Protection Regulation (GDPR).
Application Segmentation vs. Micro-Segmentation
The emergence of micro-segmentation as a best practice has created some confusion for IT pros evaluating possible internal security techniques. Micro-segmentation is, in fact, a method of implementing application segmentation. However, micro-segmentation capabilities significantly improve an organization’s ability to perform application segmentation through greater visibility and granularity.
Traditional application segmentation approaches have relied primarily on Layer 4 controls. This does have value, but firewalls and other systems used to implement such controls do not give security teams a clear picture of the impact of these controls. As a result, they are time-consuming to manage and susceptible to configuration errors, particularly as environments evolve to include cloud services and new deployment models like containers.
Moreover, Layer 4 controls alone are very coarse. Sophisticated attackers are skilled at spoofing IP addresses and piggybacking on allowed ports to circumvent Layer 4 controls.
Micro-segmentation improves upon traditional application segmentation techniques in two ways. The first is giving security teams a visual representation of the environment and the policies protecting it. Effective visualization makes it possible for security teams to better understand the policies they need and identify whether gaps in policy coverage exist. This level of visibility rarely exists when organizations are attempting to perform application segmentation using a mix of existing network-centric technologies.
A second major advantage that micro-segmentation offers is greater application awareness. Leading micro-segmentation technologies can display and control activity at Layer 7 in addition to Layer 4. An application-centric micro-segmentation approach can do more than simply create a coarse boundary between application tiers or around an application cluster. It allows specific processes – and their associated data flows – to be viewed in an understandable way and serve as the basis for segmentation policies. Rather than relying solely on IP addresses and ports, micro-segmentation rules can white-list very specific processes and flows while blocking everything else by default. This enables far superior application isolation than traditional application segmentation techniques.
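The difference between the coarse Layer 4 rule shown earlier and a Layer 7 rule is that the latter also names the process allowed to originate the flow. The sketch below illustrates that idea; the rule fields and label names are illustrative assumptions, not a real policy language.

```python
# Sketch of a process-aware (Layer 7) whitelist rule: beyond source,
# destination, and port, the rule names the exact process permitted
# to originate the flow. Field names are illustrative only.
L7_RULES = [
    {"src_label": "app", "dst_label": "db",
     "dst_port": 5432, "process": "postgres-client"},
]

def flow_allowed(flow):
    """Block by default; allow only flows whose source label,
    destination label, port, AND originating process all match
    some rule in the whitelist."""
    return any(all(flow.get(k) == v for k, v in rule.items())
               for rule in L7_RULES)
```

This is why Layer 7 controls resist piggybacking: a different binary reusing an allowed IP and port still fails the process match and is blocked.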
Balancing Application Segmentation with Business Agility
Application segmentation is more important than ever as dynamic hybrid cloud environments and fast-paced DevOps deployment models become the norm. The business agility that these advances enable are highly valuable to the organizations that adopt them. However, heterogeneous environments that are constantly evolving are also more challenging to secure. Security teams can easily find themselves facing a lose/lose proposition of either slowing down innovation or overlooking new possible security risks.
The granular visibility and control that application-centric micro-segmentation offers makes it possible to proactively secure new or updated applications at the time of deployment without added complexity or delay. It also ensures that security teams can quickly detect any abnormal application activity that slips through the cracks and respond rapidly to new security risks before they can be exploited.
For more information on micro-segmentation, visit our Micro-Segmentation Hub.
In the US, the financial cost of a data breach is rising year on year. IBM’s Cost of a Data Breach Report is independently conducted annually by the Ponemon Institute. This year, the report included data from more than 15 regions, across 17 industries. Researchers interviewed IT, compliance, and data protection experts from 477 companies. As a result, the true average cost of a data breach is more accurate than ever.
Crunching the Numbers: The Average Cost of a Data Breach
According to the study, the average cost of a data breach in 2018 is $3.86 million, which has increased by 6.4% since last year’s report.
While the risk of a data breach is around 1 in 4, not all breaches are created equal. Of course, the more records that are exposed, the more expensive and devastating a breach will be. A single stolen or exposed data record costs a company an average of $148, while a breach of 1 million records, considered a Mega Breach, will cost $40 million. Breaches of 50 million records may be reserved for the largest enterprises, but they raise the financial cost to $350 million.
Beyond a Ransom: The Hidden Cost of Data Breach
Although many businesses worry about the rise in ransomware, the cost of a data breach is about much more than any malicious demand from a hacker. The true cost can be broken down into dozens of areas, from security upgrades in response to the attack to a drop in your stock price when word of the breach gets out. Research by Comparitech found that companies tend to see a stock price slide of 42% following a breach. Other costly elements of a data breach include incident investigation, legal and regulatory activity, and even updating customers. These all contribute to the escalating cost when you fail to adequately protect your company against a data breach.
The Ponemon study found that the largest cost comes from customer churn. The US sees the highest cost in the world in terms of lost business due to a data breach, more than two times the average figure, at $4.2 million per incident. Most analysts put this discrepancy down to the nature of commerce in the United States. In the US, there is far more competition and choice, and customer loyalty is both harder to hold onto and almost impossible to retrieve once trust is lost.
Customers also have more awareness of data breaches in the US, as laws dictate they must be informed of any issues as they are uncovered. This kind of reputational damage is devastating, especially in the case of a Mega Breach. In fact, one-third of the cost of Mega Breaches can be attributed to lost business.
Of course, there is also the fear that even if you manage to recover from a data breach, the worst is not over. The IBM study found that there is a 27.9% chance of another breach in the two years following an attack, making your company extremely vulnerable unless you can make considerable changes, and fast.
Preparing Your Business for the Average Cost of a Data Breach
The numbers don’t lie. The speed and impact of data breaches is something to which every company, no matter the size, should be paying attention. There are definitely ways to protect your business and to position yourself responsibly for the worst case scenarios.
According to Verizon, 81% of all breaches exploit identity, often through weak passwords or human error. Malware can piggyback onto a legitimate user to get behind a physical firewall, which is why most IT professionals agree that even next-gen firewalls are insufficient. To limit the potential repercussions of this, all businesses need to be employing a zero-trust model.
With micro-segmentation, perimeters can be created specifically for the protection of sensitive or critical data, ensuring that no network is implicitly trusted. A granular approach limits communications and tags the workloads themselves with labels and restrictions. Containment of attacks is built into your security from the outset, limiting the attacker’s freedom of movement and restricting any lateral movement at all. As the financial impact of a data breach rises with the number of data records stolen, this is a significant weapon to have at your disposal.
Rapid Response Can Limit the Cost of Data Breaches
Efficiency in identifying an incident, as well as the speed of the response itself, has a huge impact. Rapid response can save money, as well as proving to your customers that you still deserve their trust. According to the IBM report, the average time it took companies to identify a data breach was 197 days. Even once a breach was detected, the average time to contain it was a further 69 days. When it came to a Mega Breach, it could take an entire year to detect and contain.
With micro-segmentation, the visibility is immediate. All communications are logged, including East-West traffic. This includes private architecture, cloud-based systems, and even hybrid solutions. The best solutions will offer alerts and notifications in case of any unusual behavior, allowing you to stop threats in their tracks, before any damage has been done.
The quicker this happens, the less financial damage will be done. In fact, on average, companies that suffered a breach but managed to contain it within 30 days saved more than $1 million compared with companies that couldn’t. The larger the breach, the more significant these savings are likely to be.
Ensure You’re Fully Armed Against a Data Breach
The complex nature of most businesses’ IT systems explains the growing threat of cyber-crime, and the increasing financial cost of lax security is holding us all to ransom. Traditional security systems are not enough to ensure adequate protection from a data breach, or rapid detection and response in case the worst happens.
Micro-segmentation offers granular flexible security that adapts to your exact environment, detecting and limiting the force of an attack, and providing the visibility and response tools you need to keep your customers loyal.