Do You Have an Effective Security Incident Response Plan? – Assess your Readiness

The Ponemon Institute has found that the survival rate for businesses without a security incident response plan is just 10%. Enterprises will often focus on creating a strong security posture to detect and thwart attackers, but fail to detail what to do if and when a breach actually occurs. That’s not unusual; it can feel defeatist to prepare for the worst. However, with new attacks being discovered all the time, and increasingly connected networks putting us all at risk, an incident response plan is essential.

1. Understanding the Consequences of Ignoring a Security Incident Response Plan

The first stage in your security incident response strategy needs to be recognizing the ramifications of an attack. From the obvious problems, such as asset and data breach, to reputational damage, compliance failures and public image breakdown, it’s in your company’s best interests to be fully prepared. Detailing these threats in writing can help your staff focus on maintaining a strong security posture to prevent attacks, and encourage everyone to work together with a mutual understanding of what’s at risk if the worst happens.

2. Assigning Roles Before an Emergency

Especially in large organizations, it can be hard to keep everyone in the loop when there is a crisis. Identifying the core stakeholders for a security incident before a breach occurs is therefore essential. Here are some key personnel who need to be detailed in your security incident response plan. In some cases they may be obvious, while in others you might need to choose staff to take on responsibilities for some of these roles in your cyber-security incident response team.

  • Incident response managers. It’s worth having at least two members of staff on hand who can oversee and prioritize the incident response plan, communicating information and tasks throughout the business.
  • Security analysts. Maintain the investigation, support the managers in following the plan, and filter out false positives. They may also alert others to potential attacks. It’s essential to ensure that they are given the right tools to be able to manage their role effectively.
  • Threat researchers. These personnel will be the first port of call for contextual information around a threat. Using the web, as well as other threat intelligence, they can build and maintain an internal database of threat intelligence.
  • Key internal stakeholders. Who needs to be kept in the loop when a threat occurs? These might range from board-level personnel, who may need to sign off on your actions or give the go-ahead for your response plan, to your CISO or a human resources representative if human error is involved.
  • Third-party organizations, such as legal counsel, law enforcement, forensics experts or breach remediation companies.

3. Create a High-Level Document Outlining the Security Incident Response Procedure

Many organizations have multiple playbooks with granular detail on the technical side of an attack, in order to help IT manage and contain a breach. However, if you’ve ever experienced a security incident, you know that IT is far from the only department affected by an attack. Your incident response plan needs to be easily communicated to and understood by C-suite employees, Human Resources, Vendor Management and all other line-of-business stakeholders, including global offices or teams in the field. As regulation increasingly dictates that customers are kept informed when their data is at risk, you may even need customer experience managers to be able to relay your position.

Some of the best security incident response plans are one or two pages long, giving a high-level overview of how to manage the consequences of an incident. While playbooks might hold specific information for targeting a type of attack, such as ransomware, your incident response plan should be written so that it can be read by anyone and understood easily in a moment of crisis.

4. Outline Response Priorities

Not every key stakeholder is going to have the same priorities when an attack hits, and not all priorities can be taken into consideration. For example, your board might want to get your operations up and running as quickly as possible, while legal counsel may suggest staying offline until vendors have been notified or customers contacted. Without a clear outline of whose priorities take precedence, existing relationships can dictate what procedure is followed after a breach, following tribal knowledge rather than smart decision making.

Assessing the scale of an attack and making quick decisions about, for example, revenue versus security should not be done in the moment, or by whoever has the ear of the CISO that day. While you’re building your incident response plan, think about who should have autonomy over decisions that manage risk, and engage them in creating priorities based on levels of threat.

Detailed performance objectives can help here. In the event of a customer data breach, your security team might be tasked with finding out what has been exposed and how many customers are affected within a given amount of time. Making smart decisions about the action needed before a problem becomes a reality means all relevant teams can hit the ground running.

5. Simulate Breaches to Troubleshoot in a Safe Environment

Having an incident response plan is not enough in and of itself. Without testing and simulation, there is no way to recognize gaps in protocol or resources, or to uncover changes in third-party procedure. Regular simulations can ensure that your security incident response strategy remains up to date and nothing falls through the cracks. This can include finding replacements for staff who took on security roles and have now left the company, or for external vendors with lapsed service agreements. It can also help you keep up with changes in regulation, and keep new staff informed of the process in case of a breach.

A simulation can be as in-depth as you would like, ranging from tabletop exercises to introducing a known, containable piece of malware into your system, but a few basics to cover include:

  • Going over the lines of communication from detection to resolution
  • Understanding who is authorized to make decisions on security and risk
  • Confirming you have the third-party services in place you need to control a breach
  • Confirming who needs to be contacted in case of a breach for continued regulatory compliance and operations

The more you make simulation and testing part of your usual security posture, the more likely it is that responding will be second nature for the relevant stakeholders when the incident is no longer theoretical.

6. Identify the Scope of a Breach

Many companies act too quickly when they see a threat. Failing to recognize the size of a breach can cause more problems in the long run. Finding one point of entry does not mean that you’ve identified all the endpoints that have been compromised, for example. Acting as if you have found patient zero when it’s actually patient 10 or 15 can slow down overall recovery time. Modern attacks are stealthy and subtle, and could have caused more damage than you might first assume.

The best security solutions will intercept suspicious activity on threat detection and reroute it to where it cannot do any harm using dynamic deception. The full extent of the breach can then be searched for and contained in real-time, giving your security team an accurate dynamic map of your entire data center and network. Your automatically generated report shows you the deception incidents, including integral information you need to investigate the breach. What passwords were used, and where did the attacker gain entry? Were there malicious binaries used, or suspicious C&C servers? With this level of detail, your security teams are able to start building up a clear picture of root cause.

Containment of this kind can also give you more time to understand what you’re dealing with in a safe environment. By rerouting an attacker using dynamic deception, you can isolate them safely, and monitor and learn from their activities rather than frighten them away by alerting them that you know they’ve gained entry. In this way, you can take back the upper hand, responding to the attacker’s behavior without going into crisis mode, calmly following your incident response plan priorities – risk free.

7. Limit Dwell Time

Having this level of granular visibility supports the next part of your incident response plan: limiting the amount of time that attackers are on your network. The SANS Institute found that a shocking 50% of organizations didn’t notice a breach for more than 48 hours, while 7% had no idea how long an attacker had been on their network, even after the fact. The longer an attack continues without being stopped, the more damage can be done, so having a plan for limiting dwell time is essential.

Your security solution should be able to limit dwell time by providing application-layer visibility. This uncovers and tracks process-level activity (not just transport-layer flows) across applications in real time. This activity can then be automatically correlated with network events and context, allowing you to access reports on suspected incidents and any anomalies detected across all workloads. With this, even new attack vectors are isolated in real time. With nowhere for attackers to hide, dwell time is automatically minimized at a policy level.
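
To make the idea of process-level visibility concrete, here is a minimal Python sketch that maps live TCP connections back to their owning processes using the third-party psutil library. It illustrates the concept only; it is not how any particular product implements it, and it may require elevated privileges on some systems.

    import psutil

    def snapshot_process_flows():
        """Return (process_path, local, remote) tuples for live TCP connections."""
        flows = []
        for conn in psutil.net_connections(kind="tcp"):
            if conn.raddr and conn.pid:  # skip listening sockets and unknown owners
                try:
                    path = psutil.Process(conn.pid).exe()  # full executable path
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    continue
                flows.append((path, conn.laddr, conn.raddr))
        return flows

    for path, laddr, raddr in snapshot_process_flows():
        print(f"{path}: {laddr.ip}:{laddr.port} -> {raddr.ip}:{raddr.port}")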

8. Including Recovery Plans

The clearest part of your security incident response plan should outline what happens when a breach has been confirmed. Detail the processes that are automated so that all key stakeholders understand what has already been put into place.

Does your security solution allow IOCs (Indicators of Compromise) to be automatically exported to your SIEM or security gateways to speed up incident response? Can you update your micro-segmentation policies quickly and seamlessly in response to traffic violations? Different automated procedures may be needed for different environments. For example, stopping the spread of damage from VMs or containers could involve an IOC triggering a halt or disconnecting the service entirely. The best solutions will provide an integrated platform that shows the full picture from both a security and an infrastructure point of view.
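
As a hedged sketch of what that automation could look like, the Python below forwards an IOC to a SIEM and tightens the violated policy over HTTP. The endpoint URLs, payload fields and policy ID are illustrative assumptions, not a documented product API; the third-party requests package is assumed.

    import requests

    def on_violation(ioc: dict, policy_id: str):
        # 1. Forward the indicator of compromise to the SIEM for correlation.
        requests.post("https://siem.example.com/api/iocs",  # hypothetical endpoint
                      json=ioc, timeout=10).raise_for_status()
        # 2. Flip the violated micro-segmentation rule from "alert" to "block".
        requests.patch(f"https://segmentation.example.com/policies/{policy_id}",  # hypothetical
                       json={"action": "block"}, timeout=10).raise_for_status()

    on_violation({"type": "ip", "value": "203.0.113.9"}, policy_id="web-to-db")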

Recovery plans might need their own smaller security incident response plans or playbooks. A DDoS attack is different from an injection of malware. An external bad actor is a different adversary from an insider with high-level access who has compromised the network. Your company might have one set of response plans for a breach of customer data, another for artificial intelligence, and yet another for asset recovery. Make sure the right documentation is ready for any event, and the right personnel are equipped with a plan of action.

9. What Lessons Can You Add to Your Security Incident Response Plan?

By utilizing a smart incident response plan, you can use a breach to help prepare for the future. Once the attack is contained and eradicated, make sure to complete any incident documentation for regulation or internal records. You can also perform your own analysis internally to learn from the attack and your responses to it as a company. With the lessons you’ve learned, you can update your security incident response plan. What can you improve for next time, and what gaps did you uncover if any?

A strong security incident response plan is a must-have in today’s increasingly interconnected IT environment. If and when a breach occurs, your business will be asked how you prepared for an incident. This could be used to establish regulatory compliance, as well as to assess the attack and even assign blame. A detailed record of how your company prepares for a threat, responds in the moment, and learns from the experience puts you one step ahead, ready for anything.

5 Ways that PCI DSS Micro-Segmentation Can Help You Achieve Compliance

As regulations for compliance become increasingly stringent, the consequences for failing an audit go far beyond a bureaucratic headache. As well as damage to your public image, you could be subject to financial penalties and even a halt to your business operations altogether until safety measures have been put into place.

A security solution that employs micro-segmentation can be a powerful tool, providing unparalleled control over the traffic across your hybrid IT ecosystem. The right approach will be able to isolate and segment all applications, monitoring and routing all traffic, including east-west. By doing this, micro-segmentation can effortlessly check boxes for your compliance regulations, whether that’s PCI-DSS, HIPAA, or others.

PCI DSS Micro-Segmentation through Separation of Zones

When it comes to PCI DSS, micro-segmentation can support you in reducing scope. The compliance regulations are very clear. “To be considered out of scope for PCI DSS, a system component must be properly isolated from the cardholder data environment (CDE), such that even if the out-of-scope system component was compromised it could not impact the security of the CDE.” A similar rule is found for HIPAA compliance, but this time regarding Protected Health Information (PHI).

It is likely that some systems can be physically separated from your CDE or PHI. In the past, firewalls could enforce network zones, as could virtual LANs with strong ACLs. However, more complex architectures such as cloud-based VMs or containers have made this difficult. Even simple compliance requirements, such as placing a firewall, become a challenge. Additionally, dynamic workloads mean you need granular visibility into where changes are happening within the CDE in real time. This has encouraged businesses to look for a solution that allows for continuous process- or identity-level detail and control.

Ensuring that you have rich visibility into the flow of traffic is number one on the list for any auditor. This has two benefits. Firstly, it shows the regulatory board that you have a strong understanding of the data and access in your network. Secondly, it proves that you can automatically detect a threat or breach if the worst happens.

Reduced Impact of a Breach

Once you have established visibility, controlling traffic to isolate and resolve an attack should be next on the agenda. By starting with broad micro-segmentation policies and then creating more specific layers, you can achieve the right balance between under- and over-segmenting your network. This should be done gradually, allowing you to gain the right amount of control without losing functionality and flexibility. Because the policies you build for micro-segmentation are application-aware, you can use them to enforce system access to specific regulated data, such as PHI for HIPAA compliance. Even if a breach happens at your perimeter, a hacker would not be able to move from an out-of-scope area to one that threatens your compliance posture.

Companies that only focus on protecting the perimeter between external and internal systems are behind the times. If attackers get through your perimeter, your entire data center or network is up for grabs. For PCI-DSS, micro-segmentation can provide a deeper level of security on all the important systems on your network. It can also stop attackers from making lateral moves within your network, pivoting dangerously from an out-of-scope area to one which can reach your CDE or PHI.

Micro-segmentation offers another benefit for HIPAA or PCI DSS: it can meet the requirement to maintain a vulnerability management program. For this to work best, your solution needs to work in tandem with a strong breach detection and mitigation solution, protecting your system against malware. Micro-segmentation works on the principle of least privilege, perfect for verticals like healthcare dealing with HIPAA compliance, where 70% of organizations cite employee negligence as the most worrying reason for breaches.

Another important element to keep in mind for compliance is having separate development and testing environments from production environments. Top tip: Make sure that scanning and auditing is done in a continuous cycle, not just periodically.

Locking Down Systems with PCI DSS Micro-Segmentation

PCI DSS dictates that more in-depth security features should be implemented for what it calls “insecure” services, daemons or protocols. An example could be requiring a VPN for file sharing. A flexible policy engine is an important element of a compliance-ready micro-segmentation approach. This can enable you to validate administrative access to each system, and to require additional security measures for specific protocols.

Another element of compliance is ensuring that only one primary function can be implemented on each server. This means that functions with different security levels cannot be on the same server, preventing lateral moves from weaker entry points. By implementing PCI DSS micro-segmentation, process level policies can be enforced so that only necessary services are making connections, and only one secure function is implemented per server.

Logging all Systems and Mapping Vulnerabilities in PHI or PCI Micro-Segmentation

As well as showing that you’ve created zones in your network, nearly all compliance regulations will expect you to have visibility into the traffic that moves among them and the ability to log this information for later. Traditionally, companies have had visibility into north-south traffic which moves between client and server. The best approaches can now analyze and monitor east-west traffic, also known as server to server traffic, from within the data center itself. The policies that you define for your micro-segmentation approach can be used as documentation of your compliance, and the granular detail of east-west traffic serves as proof that you have a strong security posture that meets regulations.

Many businesses struggle to prove the systems that they have deemed out of scope actually are separate from their CDE or PHI, especially when dynamic boundaries are part of their IT infrastructure. If you choose a PCI micro-segmentation approach with labeling functionality, you can examine the PCI or PHI environments and inspect the flows and communications in granular detail. Filtering where necessary can allow you to drill down to specific protocols at process level, granting you unparalleled levels of control in comparison to traditional network segmentation.

Finding an All-Inclusive Solution for Compliance

There are many requirements for ongoing compliance, and companies will need to have various security controls in place to establish they are meeting the regulations of complex standards like PCI DSS or HIPAA. For example, when you’re employing PCI DSS micro-segmentation to meet regulations, you will need a distributed firewall to separate the CDE from other applications, as well as file integrity monitoring on your CDE itself. For mapping and documentation you’ll benefit from powerful process level visibility on traffic and data flows.

Lastly, and especially important in compliance-heavy industries like healthcare where attacks are so common, your micro-segmentation approach should integrate with tools that allow you to secure the environment and maintain overall vulnerability control. These could include powerful breach detection tools like honeypots and malware detection. Choose a solution that covers many requirements in one, and you’ll take on less risk and management overall, simplifying the road to ongoing compliance.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Learn more about micro-segmentation and PCI compliance.

I Know What We Did Last Summer, You Should Too: See What’s New with GuardiCore Centra

As our CTO Ariel Zeitlin mentioned in his recent post, the GuardiCore field team has been very busy over the past several months working with some of the world’s largest corporations on different hybrid cloud security projects. More specifically, the GuardiCore Centra solution has been helping these large companies achieve greater visibility and assisting them in creating micro-segmentation policies.

At the same time, the GuardiCore product teams were busy developing the next wave of innovation for GuardiCore Centra. Some of our customers told us that the ability to quickly innovate and introduce new capabilities is one of our key differentiators as a company, and we take this feedback and the responsibility to push the boundaries of our technology seriously.

I have selected a couple of important highlights of the recent releases that I wanted to share with you, to give you a glimpse of the exciting progress we are making. The overview below is only partial. For the complete list of new release features and release content, please see the documentation on our customer portal (login required).

Of note – we are currently on release 28, will soon move release 29 to early availability (EA), and will then start development of release 30. We are in continuous motion, upgrading, optimizing and pushing out the best improvements for our customers and, if I may add a personal note, setting an example for the industry.

Reveal

GuardiCore Reveal provides visibility into application flows and processes. When visualizing assets, one can now perform asset grouping according to multiple, nested keys. This allows a much clearer view of large data centers and communication flows between environments, applications and roles. In addition, Centra now supports defining segmentation rules according to complicated logic of labels. Want to know more? Watch the demo to learn about Centra and visibility.

Some of the other recent enhancements include the following capabilities:

Nested Grouping

Users can now define map groupings that consist of multiple keys to form a nested map structure. For example, a user can define a default “Environment” → “Application” → “Role” grouping; Reveal maps will then show the different environments by default. When expanded, each environment will reveal its underlying applications, and correspondingly when an application is expanded, Reveal will show its underlying Roles.

[Image: 3-tier GuardiCore Centra product update]

AND Segmentation Rules

Segmentation rules now support specifying the result of a logical “AND” operation on label criteria as a rule’s source or destination. As in previous versions, users can get these suggestions directly from the Reveal map or enter them manually in the Segmentation Policy screen.

AND rules are directly related to nested groups. For example, when suggesting rules from the eCommerce application node in the Production environment to the Data Processing application in the Production environment, the resulting rules will have a source of “Environment: Production AND Application: eCommerce” and a destination of “Environment: Production AND Application: Data Processing”.
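
A minimal Python sketch of how such an AND rule could be evaluated against labeled assets follows; the rule format and labels are illustrative assumptions, not the Centra data model.

    def matches(asset_labels: dict, criteria: dict) -> bool:
        """True only if the asset carries every label in the criteria (logical AND)."""
        return all(asset_labels.get(k) == v for k, v in criteria.items())

    rule = {
        "source": {"Environment": "Production", "Application": "eCommerce"},
        "destination": {"Environment": "Production", "Application": "Data Processing"},
        "action": "allow",
    }

    src = {"Environment": "Production", "Application": "eCommerce", "Role": "Web"}
    dst = {"Environment": "Production", "Application": "Data Processing", "Role": "DB"}

    if matches(src, rule["source"]) and matches(dst, rule["destination"]):
        print(rule["action"])  # -> allow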

One-Click Daily Maps

This new feature produces daily Reveal maps, generated automatically every 24 hours. Clicking “Explore” on the Reveal menu displays the most recent map by default. Maps are created once and are automatically updated based on your configuration.

Time estimation – We added a progress bar to indicate how long it takes to build a map. When you create a new map over an extended time frame (a week, a month, etc.) or activate the Accurate connection times option in the Create New Map window, you will see an ETA indication on the Saved Maps page.

Tighter Process Level Policy Enforcement

To enable more granular and secure policies, we added the ability to explicitly specify the full path of the process as part of Allow/Block rules. For example, when creating a policy for the application “nginx”, Centra will suggest allowing /usr/local/nginx rather than /tmp/nginx.
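
A tiny sketch of why the full path matters follows; the allowlist uses the paths from the example above, and the check itself is an assumption about how such a rule behaves, not product code.

    ALLOWED_PATHS = {"/usr/local/nginx"}  # explicit full paths from the policy

    def is_allowed(exe_path: str) -> bool:
        """Allow only processes launched from an explicitly whitelisted path."""
        return exe_path in ALLOWED_PATHS

    print(is_allowed("/usr/local/nginx"))  # True  - the sanctioned binary
    print(is_allowed("/tmp/nginx"))        # False - same name, untrusted location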

Cloud Native Visibility, More Multi-Cloud UI Controls

We simplified the way users activate multiple orchestration providers: AWS, vSphere and Kubernetes (K8s) simultaneously. Asset inventory and metadata will be continuously fetched from all defined orchestration providers.

We also added the ability to display orchestration data from multiple sources for the same Kubernetes asset. All the data about a specific node is now collected both from the Kubernetes API and from the compute providers’ APIs.

For GuardiCore customers who are using agentless, managed cloud solutions such as AWS, GCP and Azure, we provide a visibility and ‘soft’ enforcement solution based on AWS’s native virtual private cloud (VPC) flow logs. VPC flow logs provide a way to inspect all the flows between the different cloud assets within a given cloud network. Policy-wise, this means that only alerts are supported, without enforcement.

Private Threat Feeds Integrated into GuardiCore Reputation Services

Our users have asked us to enable them to use their own existing threat feeds (IoCs) with the GuardiCore Reputation Service. Now GuardiCore users can add their internal threat feeds and enjoy the same rich visual incident experience as with all GuardiCore incidents. The supported IoC types are file and IP. IoCs are uploaded in JSON format to the Centra REST API. Once uploaded, Centra will alert on the presence of these IoCs across the customer’s entire data center.
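
As a hedged illustration of such an upload, the Python below posts a JSON threat feed to a REST endpoint. The URL, field names and token are assumptions for illustration only; consult the documentation on the customer portal for the actual Centra API schema.

    import requests

    payload = {
        "feed_name": "internal-threat-feed",  # assumed field name
        "iocs": [
            {"type": "ip", "value": "198.51.100.77"},
            {"type": "file", "value": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
        ],
    }

    resp = requests.post(
        "https://centra.example.com/api/v3/iocs",  # hypothetical endpoint
        json=payload,
        headers={"Authorization": "Bearer REPLACE_ME"},
        timeout=10,
    )
    resp.raise_for_status()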

Shift Left on Security to Enable Secure and Rapid Digital Transformation

Rapid development and deployment can be a major competitive business advantage. This approach minimizes waste and cost, aligns business and IT teams, and allows companies to respond to real-time customer needs and market trends. However exciting these opportunities are, it’s important to remember that dynamic, complex IT environments create increasing risk, and reliability and security are a must-have, not an optional extra.

Ensuring that rapid development and security protocols are not at odds should be a goal for any forward-focused business, especially during October’s Cyber Security Awareness Month. Shifting left on your security is becoming increasingly popular, but how can it be done?

Embracing the Shift Left Approach from a Security Standpoint

The idea behind the ‘shift-left’ approach for security is simple. Instead of first building a new product or service entirely, and then introducing security as a rubber stamp of approval at the end, you bring the security process in at an earlier point in the timeline, at the DevOps stage.

This has multiple benefits. From a business perspective, it’s a more cost-effective way to work on a new project. In fact, according to software development guru Steve McConnell, “violations are 10x to 20x less expensive to resolve during software development compared to at the production release step.”

The shift-left approach also ensures that areas such as reliability and compliance are considered at the earliest possible stage and can be part of the game plan from the start. As any security problems are discovered at the beginning, they are much easier to resolve, as they aren’t integral to the product yet. Troubleshooting security issues in advance means you can fix potential security violations before they become a reality.

Change the Way Security Fits Within your Business Structure and Company Culture

Without “shifting left,” when security is added as an afterthought, key stakeholders in development have historically seen security as a hurdle to get past, or a hoop to jump through. Often, security can stand in the way of a product or a service, making it more difficult to make quick decisions or streamline a process.

By moving security earlier on in the process, it can do the exact opposite – making it easier to say yes to new innovation and change. One example could be third-party code that would speed up development of a new product. Instead of being forced to build your own code from scratch to ensure security, automated processes could scan the code at the point of entry and ensure it is architecturally sound, working with DevOps teams to make their lives easier.

Going Further to Break Down Traditional Silos

Another method to increase the speed of deployment and its agility is to create a shared ownership over delivery of projects as well as a shared accountability for each other’s bottom lines. If development is responsible for secure code going out, and security is responsible for quick deployment, they suddenly have a shared goal they can work towards.

This change in mentality provides functionality and security in one for your business, with a seamless ability to give feedback and improve. This is effective throughout a specific development cycle, and also as an overall posture of communication and collaboration for your company. Furthermore, this approach makes the security function less disruptive. It’s a quiet and constant part of the process rather than an addition that is seen to blow up the hard work of your development team at the very last stages.

What Does This Look Like in Action?

Embedding security into the application itself as part of the risk reduction process can be done in a number of ways. Let’s look at a practical example of implementing this methodology using GuardiCore Centra.

First, you identify each application and the connections it creates, either in staging or in the QA environment. You can then verify and analyze the associated risks.

Once the GuardiCore agent is embedded into the workloads, you can configure the security policy using our flexible policy engine. Because policies are workload-specific, this can be implemented with a Zero Trust policy model. The policies are applied to the assets themselves, without the need to rely on IP addresses or any physical location, so wherever the application moves, the policy follows.
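
The sketch below illustrates this zero-trust, label-based idea in plain Python: rules reference labels rather than IP addresses, and anything not explicitly allowed is denied. The labels, rules and ports are illustrative assumptions, not Centra’s actual policy model.

    POLICY = [  # explicit allow rules; everything else is denied (zero trust)
        {"src": {"app": "web"}, "dst": {"app": "api"}, "port": 443},
        {"src": {"app": "api"}, "dst": {"app": "db"}, "port": 5432},
    ]

    def is_allowed(src_labels: dict, dst_labels: dict, port: int) -> bool:
        return any(
            all(src_labels.get(k) == v for k, v in rule["src"].items())
            and all(dst_labels.get(k) == v for k, v in rule["dst"].items())
            and rule["port"] == port
            for rule in POLICY
        )

    # The web tier may reach the API tier, but never the database directly,
    # regardless of which IP addresses the workloads currently occupy.
    print(is_allowed({"app": "web"}, {"app": "api"}, 443))   # True
    print(is_allowed({"app": "web"}, {"app": "db"}, 5432))   # False (default deny)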

The Benefits are Clear

Rapid digital transformation is essential for business success, and yet without security at its core – the risks are simply too great. Rather than allow security to continue to take a bolted-on role that is disparate from business process, we should be using tools such as Centra to enable security to shift-left and take an early and equal continuous role in development.

As our CTO Ariel Zeitlin shared in his insights, the sooner you get started, the sooner you can enjoy the taste of your success.

Reduce Attack Surface

Rapid adoption of cloud services by companies of all sizes is enabling many business benefits, most notably improved agility and lower IT infrastructure costs. However, as IT environments become more heterogeneous and geographically distributed in nature, many organizations are seeing their security attack surface multiply exponentially. This challenge is compounded by the accelerating rate of IT infrastructure change as more organizations embrace DevOps-style application deployment approaches and more extensive infrastructure automation.

Longstanding security practices such as system hardening, proactive vulnerability management, strong access controls, and network segmentation continue to play valuable roles in security teams’ attack surface reduction efforts. However, these measures alone are no longer sufficient in hybrid cloud environments for several reasons.

The first is that while these practices remain relevant, they do little to counteract the significant attack surface growth that cloud adoption and new application deployment models like containers are introducing. In addition, it is difficult to implement these practices consistently across a hybrid cloud infrastructure, as they are often tied to a specific on-premises or cloud environment. Lastly, as application deployment models become more distributed and dynamic, organizations are exposed to greater risk of unsanctioned lateral movement. As the volume of east/west traffic grows, network-based segmentation alone is too coarse to prevent attackers from exploiting open ports and services to expand their attack footprint and find exploitable vulnerabilities.

These realities are leading many security executives and industry experts to embrace micro-segmentation as a strategic priority. Implementing a holistic micro-segmentation approach that includes visualization capabilities and process-level policy controls is the most effective way to reduce attack surface as the cloud transforms IT infrastructure. Moreover, because micro-segmentation is performed at the workload level rather than at the infrastructure level, it can be implemented consistently throughout a hybrid cloud infrastructure and adapt seamlessly as environments change or workloads relocate.

Visualizing the Attack Surface

One of the most beneficial steps that security teams can take to reduce their attack surface is to gain a deeper understanding of how their application infrastructure functions and how it is evolving over time. By understanding the attack surface in detail, security teams can be much more effective at implementing new controls to reduce its size.

Using a micro-segmentation solution to visualize the environment makes it easier for security teams to identify any indicators of compromise and assess their current state of potential exposure. This process should include visualizing individual applications (and their dependencies), systems, networks, and flows to clearly define expected behavior and identify areas where additional controls can be applied to reduce attack surface.

Attack Surface Reduction with Micro-Segmentation

As more application workloads shift to public cloud and hybrid-cloud architectures, one area where existing attack surface reduction efforts often fall short is lateral movement detection and prevention. More distributed application architectures are significantly increasing the volume of “east/west” traffic in many data center and cloud environments. While much of this traffic is legitimate, trusted assets that are capable of communicating broadly within these environments are attractive targets for attackers. They are also much more accessible as the traditional concept of a network perimeter becomes less relevant.

When an asset is compromised, the first step that attackers often take is to probe and profile the environment around the compromised asset, seek out higher-value targets, and attempt to blend lateral movement in with legitimate application and network activity.

Micro-segmentation solutions can help defend against this type of attack by giving security teams the ability to create granular policies that:

  • Segment applications from each other
  • Segment the tiers within an application
  • Create a clear security boundary around assets with specific compliance or regulatory requirements
  • Enforce general corporate security policies and best practices throughout the infrastructure

These measures and others like them slow or block attackers’ efforts to move laterally. When implemented effectively, micro-segmentation applies the principle of least privilege more broadly throughout the infrastructure, even as it extends from the data center to one or more cloud platforms.

This focus on preventing lateral movement through in-depth governance of applications and flows reduces the available attack surface even as IT infrastructure grows and diversifies.

Beyond the Network Attack Surface

Successful use of micro-segmentation to reduce attack surface requires both Layer 4 and Layer 7 process-level controls. Process-level control allows security teams to truly align their security policies with specific application logic and regulatory requirements rather than viewing them purely through an infrastructure lens.

This application awareness is a key enabler of the attack surface reduction benefits of micro-segmentation. Granular policies that whitelist very specific process-level flows are much more effective at reducing attack surface than Layer 4 controls, which savvy attackers can circumvent by exploiting systems with trusted IP addresses and/or blending attacks in over allowed ports.

Granular Layer 7 policy controls make it possible for organizations to achieve a zero-trust architecture in which only the application activity and flows that represent known, sanctioned behavior are allowed to function unimpeded in the trusted environment.

The Importance of a Multi-OS, Multi-Environment Approach

As the transition to hybrid cloud environments accelerates, it is easy for organizations to overlook the extent to which this change magnifies the size of their attack surface. New physical environments, platforms, and application deployment methods create many new areas of potential exposure.

In addition to providing more granular control, another benefit that micro-segmentation provides to organizations seeking to reduce attack surface is a unified security model that spans multiple operating systems and deployment environments. When policies are focused on specific processes and flows rather than infrastructure components, they can be applied across any mix of on-premises and cloud-hosted resources, and even remain consistent when a specific workload moves between the data center and one or more cloud platforms. This is a major advantage over point security products that are tied to a specific environment or platform, as it enables the attack surface to be minimized even as the environment becomes larger and more heterogeneous.

When selecting a micro-segmentation platform, it is important to validate that the solution works seamlessly across your entire infrastructure without any environment- or platform-specific dependencies. This includes validating that the level of control is consistent between Windows and Linux and that there is no dependence on built-in operating system firewalls, which do not offer the necessary flexibility and granularity.

While the transformation to cloud or hybrid-cloud IT infrastructure does have the potential to introduce new security risks, a well-managed micro-segmentation approach that is highly granular, de-coupled from the underlying infrastructure, and application aware can actually reduce the attack surface even more as more infrastructure diversity and complexity is introduced.

For more information on micro-segmentation, visit our Micro-Segmentation Hub

Implementing Micro-Segmentation: Insights from the Trenches, Part One

Recently I have been personally engaged in implementing micro-segmentation for our key customers, which include a top US retail brand, a major Wall Street bank, a top international pharmaceutical company, and a leading European telco. Spending significant time with each of these customers, including running weekly project calls, workshops, planning meetings, and more, has given me a unique glimpse into the reality of what it means to implement such projects in a major enterprise, a point of view that is not always available to a vendor.

I would like to share some observations of how those projects roll out, and hope you will find these insights useful and especially helpful if you are planning to implement micro-segmentation in your network. Each blog in this short series will focus on one insight I’ve gathered from my time both in the boardroom and in the trenches, and I hope you find some practical pieces to help you improve your understanding and implementation of any current or upcoming security projects.

Application segmentation is not necessarily the short-term objective

If you look at the online material for micro-segmentation, vendors, analysts and experts all talk about breaking your data center into applications and those applications into tiers, and limiting access among them to only what the applications need.

I was surprised to discover that many customers look at the problem from a slightly different angle. For them, segmentation is a risk-reduction project driven by regulations, internal auditing requirements, or simply a desire to reduce the attack surface. These drivers do not always translate into segmenting applications from each other; where application segmentation does appear, it is usually not the primary objective but a means to an end, and it is not necessarily a comprehensive process in the short term. Let me give you a couple of examples:

  1. A major Wall Street bank was required by its auditor to validate that admin access to servers is only done through a WebEx or CyberArk-like solution. In reality this meant the bank wanted to set a policy that Windows machines can only be accessed over RDP from a designated set of machines, with all other RDP connections blocked, and that Linux machines can only be accessed over SSH from a designated set of machines, with all other SSH connections blocked (see the sketch after this list). There is no need to explain the risk-reduction contribution of such a simple policy, but it has nothing to do with segmenting your data center by applications. Theoretically speaking, one could achieve this goal as a side effect of complete data-center segmentation, but that would require significantly more effort, and the result would be somewhat implicit and a bit harder to demonstrate to the auditor.
  2. A European bank needed to implement a simple risk-reduction scheme: to mark each server in their DC as “accessible from ATMs,” “accessible from printer area,” “accessible from user area,” or “not accessible from non-server area,” with very simple, well-defined rules for each of the groups. Again, the attack surface reduction is quite simple, and in their case very significant, but it has little to do with textbook application segmentation. Here too you could theoretically achieve the same goal by implementing classic micro-segmentation, but Confucius taught us not to try to kill a mosquito with a cannon. Most of these organizations do plan to implement micro-segmentation as the market defines it, but they know it takes time, and they want to hit the low-hanging fruit in risk reduction early on while implementing this crucial security project incrementally, in a way that makes the most sense for their business.
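
To make the first example concrete, here is a minimal Python sketch of such an admin-access policy. The jump-host addresses and the OS-to-port mapping are illustrative assumptions, not the bank’s actual configuration.

    ADMIN_HOSTS = {"10.10.8.11", "10.10.8.12"}    # e.g. the CyberArk-style gateways
    ADMIN_PORTS = {3389: "windows", 22: "linux"}  # RDP and SSH respectively

    def allowed(src_ip: str, dst_os: str, dst_port: int) -> bool:
        """Permit administrative protocols only when they originate from a jump host."""
        if ADMIN_PORTS.get(dst_port) == dst_os:
            return src_ip in ADMIN_HOSTS
        return True  # non-admin traffic is governed by other rules

    print(allowed("10.10.8.11", "windows", 3389))  # True  - via the gateway
    print(allowed("192.0.2.50", "windows", 3389))  # False - direct RDP blocked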

So if you are looking to implement a micro-segmentation project, understand your goals, drivers and motivations, and remember that this is a risk-reduction project after all. As they say, there are many ways to peel an orange, and some are simpler, faster, more straightforward, and more efficient than others. But the sooner you get started, the sooner you can enjoy the taste of your success. In any case, when choosing technology to help you with a segmentation project, make sure you choose one that is flexible enough to support textbook micro-segmentation, but also to address the numerous other use cases that you might not even be aware of at the initial stages.

Stay tuned to our blog to catch more of my upcoming insights from the trenches.

Learn more information about choosing a micro-segmentation solution.

Using Dynamic Honeypot Cyber Security: What Do I Need to Know?

Honeypots are systems on your network that attract and reroute hackers away from your servers, trapping them to identify malicious activities before they can cause harm. The perfect decoy, they often contain false information without providing access to any live data. Honeypots are a valuable tool for uncovering information about your adversaries in a no-risk environment. A more sophisticated honeypot can even divert attackers in real time as they attempt to access your network.

How Does Honeypot Security Work?

The design of the honeypot security system is extremely important. The system should be created to look as similar as possible to your real servers and databases, both internally and externally. While it looks like your network, the actual honeypot is a replica, entirely disparate from your real server. Throughout an attack, your IT team can monitor the honeypot closely.

A honeypot is built to trick attackers into breaking into that system instead of elsewhere. The value of a honeypot is in being hacked. This means that the security controls on your honeypot need to be weaker than on your real server. The balance is essential. Too strong, and attackers won’t be able to make a move. Too weak, and they may suspect a trap.

Your security team will need to decide whether to deploy a low-interaction honeypot or a high-interaction honeypot. A low-interaction solution will be a less effective decoy, but easier to create and manage, while a high-interaction system will provide a more perfect replica of your network, but involve more effort for IT. This could include tools for tricking returning attackers or separating external and internal deception.

What Can a Honeypot Cyber Security System Do?

Your honeypot cyber security system should be able to simulate multiple virtual hosts at the same time, passively fingerprint attackers, simulate numerous TCP/IP stacks and network topologies, and set up HTTP and FTP servers as well as virtual IP addresses with UNIX applications.
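
For a sense of scale, a low-interaction honeypot can be as simple as the Python sketch below: listen on an otherwise unused port, present a plausible banner, and log every connection attempt. The port and banner are illustrative assumptions, and a production deployment would need isolation and endpoint security around it, as discussed later in this piece.

    import datetime
    import socket

    def run_honeypot(host: str = "0.0.0.0", port: int = 2222):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, addr = srv.accept()
                with conn:
                    # Nothing legitimate should ever connect here, so every
                    # hit is a high-value signal worth logging and reviewing.
                    print(f"{datetime.datetime.utcnow().isoformat()} "
                          f"connection attempt from {addr[0]}:{addr[1]}")
                    conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # plausible decoy banner

    if __name__ == "__main__":
        run_honeypot()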

The type of information you glean depends on the kind of honeypot security you have deployed. There are two main kinds:

Research Honeypot: This type of honeypot security is usually favored by educational institutions, researchers and non-profits. By uncovering the motives and behavior of hackers, research teams such as Guardicore Labs can learn the tactics the hacking community are using. They can then spread awareness and new intelligence to prevent threats, promoting innovation and collaboration within the cyber security community.

Production Honeypot: More often used by enterprises and organizations, production honeypot cyber security measures are used to mitigate the risk of an attacker on their own network, and to learn more about the motives of bad actors on their data and security.

These honeypots have one particular element in common: the drive to get into the mind of the attacker and recognize the way they move and respond. By attracting and tracking adversaries, and wasting their time, you can reinforce your security posture with accurate information.

What are the Benefits of Honeypot Security?

Unlike a firewall, a honeypot is designed to identify both internal and external threats. While a firewall can prevent attackers getting in, a honeypot can detect internal threats and become a second line of defense when a firewall is breached. A honeypot cyber security method therefore gives you greater intelligence and threat detection than a firewall alone, and an added layer of security against malware and database attacks.

As honeypots are not supposed to receive any traffic, any traffic they do see is suspect by its very existence. This means you have unparalleled ease of detection and no benign anomalies to question before you start learning about possible attacks. The system produces smaller datasets that are entirely high-value, as your IT and analytics team does not have to filter out legitimate traffic.

Honeypot security also puts you ahead of the game. While your attackers believe they have made their way into your network, you have diverted their attacks to a system with no value. Your security team is given early warning against new and emerging attacks, even those that do not have known attack signatures.

Making Valuable Use of Honeypot Security

More recently, sophisticated honeypots support the active prevention of attacks. A comprehensive honeypot security solution can redirect opportunistic hackers from real servers to your honeypot, learning about their intentions and following their moves, before ending the incident internally with no harm done.

Using cutting-edge security technology, a honeypot can divert a hacker in real-time, re-routing them away from your actual systems and to a virtualized environment where they can do no harm. Dynamic deception methods generate live environments that adapt to the attackers, identifying their methods without disrupting your data center performance.

You can then use the information you receive from the zero-risk attack to build policies against malicious domains, IP addresses and file hashes within traffic flows, creating an environment of comprehensive breach detection.

It’s important to remember that a high-interaction honeypot without endpoint security could be used as a launch pad for attacks against legitimate data and truly valuable assets. Honeypots are intended to invite attackers, and therefore add risk and complexity to your IT ecosystem. As with any tool, honeypots work best when they are integrated as part of a comprehensive solution for a strong security posture. The best cyber-security choice for your organization will incorporate honeypots as a detection and prevention tool, while utilizing additional powerful security measures to protect your live production environment.

Virtualization & Cloud Review comments that while honeypots and other methods of intrusion detection “are usable in a classical environment, they really shine in the kinds of highly automated and orchestrated environments that make use of microsegmentation.”

Honeypot security systems can add a valuable layer of security to your IT systems and give you an incomparable chance to observe hackers in action, and learn from their behavior. You can gather valuable insight on new attack vectors, security weaknesses and malware, using this to better train your staff and defend your network. With the help of micro-segmentation, your honeypot security strategy does not need to leave you open to risk, and can support an advanced security posture for your entire organization.

What is File Integrity Monitoring and Why Do I Need It?

File integrity monitoring (FIM) is an internal control that examines files to see the way that they change, establishing the source, details and reasons behind the modifications made and alerting security if the changes are unauthorized. It is an essential component of a healthy security posture. File integrity monitoring is also a requirement for compliance, including for PCI-DSS and HIPAA, and it is one of the foremost tools used for breach and malware detection. Networks and configurations are becoming increasingly complex, and file integrity monitoring provides an increased level of confidence that no unauthorized changes are slipping through the cracks.

How Does File Integrity Monitoring Work?

In a dynamic, agile environment, you can expect continuous changes to files and configuration. The trick is to distinguish between authorized changes due to security, communication, or patch management, and problems like configuration errors or malicious activity that need your immediate attention.

File integrity monitoring uses baseline comparison to make this differentiation. One or more file attributes are stored internally as a baseline, and the file is compared against this baseline periodically when it is checked. Examples of baseline data include user credentials, access rights, creation dates, and last known modification dates. To ensure the data has not been tampered with, the best solutions calculate a known cryptographic checksum and can then compare it against the current state of the file at a later date.
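
A minimal Python sketch of that baseline-and-checksum mechanic is shown below. Real FIM products track many more attributes and alert in real time; this only illustrates the core comparison, with /etc/hosts as an arbitrary example file.

    import hashlib
    from pathlib import Path

    def checksum(path: str) -> str:
        """Cryptographic checksum of a file's current contents."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def build_baseline(paths):
        """Record a known-good checksum per file; store this somewhere tamper-proof."""
        return {path: checksum(path) for path in paths}

    def detect_changes(baseline):
        """Compare current state against the baseline and flag any drift."""
        for path, known in baseline.items():
            current = checksum(path)
            if current != known:
                print(f"ALERT: {path} changed "
                      f"(expected {known[:12]}..., got {current[:12]}...)")

    baseline = build_baseline(["/etc/hosts"])  # arbitrary example file
    detect_changes(baseline)                   # run periodically or on file events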

File Integrity Monitoring: Essential for Breach Detection and Prevention

File integrity monitoring is a prerequisite for many compliance regulations. PCI DSS, for example, mentions this foundational control in two sections of its policy. For GDPR, this kind of monitoring can support five separate articles on the checklist. From HIPAA for health organizations to NERC CIP for utility providers, file integrity monitoring is explicitly mentioned as supporting best practice in preventing unauthorized access or changes to data and files.

Outside of regulatory assessment, although file integrity monitoring can alert you to configuration problems like storage errors or software bugs, it’s most widely used as a powerful tool against malware.

There are two main ways that file integrity monitoring makes a difference. Firstly, once attackers have gained entry to your network, they often make changes to file contents to avoid being detected. By detecting every change happening on your network in depth, and contextually raising alerts on unauthorized policy violations, file integrity monitoring ensures attackers are stopped in their tracks.

Secondly, the monitoring tools give you the visibility to see exactly what changes have been made, by whom, and when. This is the quickest way to detect and limit a breach in real time, getting the information in front of the right personnel through alerts and notifications before any lateral moves can be made or a full-blown attack is launched.

Incorporating file integrity monitoring as part of a strong security solution can give you even more benefits. Micro-segmentation, for example, is an essential tool that goes hand in hand with it. File integrity monitoring gives you the valuable information you need about where an attack is coming from, while micro-segmentation reduces the attack surface within your data centers altogether, so that even if a breach occurs, no lateral movement is possible. You can create your own strict access and communication policies, making it easier to use your file integrity monitoring policies to distinguish authorized changes from unauthorized ones. As micro-segmentation works in hybrid environments, ‘file’ monitoring becomes the monitoring of your entire infrastructure. This extended perimeter protection can cover anything from servers, workstations and network devices, to VMware, containers, routers and switches, directories, IoT devices and more.

Features to Look for in a File Integrity Monitoring Solution

Of course, file integrity monitoring can vary between security providers. Your choice needs to be integrated as part of a full-service platform that can help to mitigate a breach when it is detected, rather than just handing off responsibility to another security product down the line.

Making sure you find that ideal security solution involves checking the features on offer. Some must-haves include real-time information, so you always have an accurate view of your IT environment, and multi-platform availability, as most IT environments now span varied platforms, including different Windows and Linux versions.

Another area to consider is how the process of file integrity monitoring seamlessly integrates with other areas of your security posture. One example would be making sure you can compare your change data with other event and log data for easy reporting, allowing you to quickly identify causes and correlative information.

If you’re using a micro-segmentation approach, creating rules is something you’re used to already. Look for a file integrity monitoring solution that makes applying and configuring rules as simple as possible. Preferably, you would have a template that allows you to define the files and services that you want monitored, and which assets or asset labels contain those files. You can then configure how often you want these monitored, and be alerted of incidents as they occur, in real time.
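
A template of that kind might look something like the sketch below; the field names and values are illustrative assumptions rather than any specific product’s schema.

    FIM_TEMPLATE = {
        "name": "payment-servers-fim",
        "assets": {"labels": {"Environment": "Production", "Role": "Payments"}},
        "paths": ["/etc/passwd", "/usr/local/nginx/conf/", "/opt/app/bin/"],
        "services": ["nginx", "sshd"],
        "interval_minutes": 15,  # how often to re-check against the baseline
        "alert": {"email": "secops@example.com", "siem": True},  # real-time notification targets
    }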

Lastly, the alerts and notifications themselves will differ between solutions. Your ideal solution is one that provides high level reporting of all the changes throughout the network, and then allows you to drill down for more granular information for each file change, as well as sending information to your email or SIEM (security information and event management) for immediate action.

File Integrity Monitoring with Micro-Segmentation – A Breach Detection Must Have

It’s clear that file integrity monitoring is essential for breach detection, giving you the granular, real-time information on every change to your files, including the who, what, where and when. Alongside a powerful micro-segmentation strategy, you can detect breaches faster, limit the attack area ahead of time, and extend your perimeter to safeguard hybrid and multi-platform environments, giving you the tools to stay one step ahead at all times.

Application Segmentation

Business applications are the principal target of attackers seeking access to an organization’s most sensitive information, and as application deployment approaches become more dynamic and extend to external cloud platforms, the number of possible attack vectors is multiplying. This is driving a shift from traditional perimeter security to an increased focus on detection and prevention of lateral movement within both on-premises and cloud infrastructure.

Most security pros and industry experts agree that greater segmentation is the best step that an organization can take to stop lateral movement, but it can be challenging to parse the various available segmentation techniques. For example, IT pros and security vendors alike often use the terms application segmentation and micro-segmentation interchangeably. There is, in fact, some overlap between these two techniques, but selecting the right approach for a specific set of security and compliance needs requires a clear understanding of the different ways in which segmentation can be performed.

What is Application Segmentation?

Application segmentation is the practice of implementing Layer 4 controls that can both isolate an application’s distinct service tiers from one another and create a security boundary around the complete application to reduce its exposure to attacks originating from other applications.

This serves two purposes:

  • Enforcing clear separation between the tiers of an individual application, allowing only the minimum level of access to each tier required to deliver the application functionality
  • Isolating a complete application from unrelated applications and other resources that could be possible sources of lateral movement attempts if compromised

Intra-Application Segmentation

It is a longstanding IT practice to separate business applications into tiers to improve both scalability and security. For example, a typical business application may include a set of load balancers that field inbound connections, one or more application servers that deliver core application functionality, and one or more database instances that store underlying application data.

Each tier has its own distinct security profile. For example, access to the load balancer is broad, but its capabilities are narrowly limited to directing traffic. In contrast, a database may contain large amounts of sensitive data, so access should be tightly limited.

This is where intra-application segmentation comes into play: security teams may, for example, limit access to the database to specific IP addresses (e.g., those of the application servers) over specific ports.
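Conceptually, such a Layer 4 control reduces to an allow-list of source, destination, port, and protocol, with everything else denied. The sketch below illustrates the idea only; real enforcement happens in firewalls or host-based agents, and all addresses are hypothetical.

```python
from ipaddress import ip_address, ip_network

# Hypothetical intra-application rules: only the app-server subnet may reach
# the database host, and only on the database port. Everything else is
# denied by default.
ALLOW_RULES = [
    # (source network, destination network, destination port, protocol)
    (ip_network("10.0.1.0/24"), ip_network("10.0.2.10/32"), 5432, "tcp"),
]

def is_allowed(src: str, dst: str, port: int, proto: str) -> bool:
    """Default-deny Layer 4 check against the allow-list."""
    return any(
        ip_address(src) in s and ip_address(dst) in d and port == p and proto == pr
        for s, d, p, pr in ALLOW_RULES
    )

print(is_allowed("10.0.1.5", "10.0.2.10", 5432, "tcp"))  # True: app -> db
print(is_allowed("10.0.3.7", "10.0.2.10", 5432, "tcp"))  # False: denied by default
```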

Application Isolation

The second important role that application segmentation can play is isolating an entire application cluster, such as the example above, from other applications and IT resources. There are a number of reasons that IT teams may wish to achieve this level of isolation.

One common reason is to reduce the potential for unauthorized lateral movement within the environment. Even with strong intra-application isolation between tiers in place, an attacker who compromises a resource in one application cluster may be able to exploit vulnerabilities or misconfigurations to move laterally to another. Implementing a security boundary around each sensitive application cluster reduces this risk.

There may also be business or compliance reasons for isolating applications. For example, compliance with industry-specific regulations, such as HIPAA, PCI-DSS, and the SWIFT security standards, is simplified by establishing clear isolation of in-scope IT resources. The same is true for jurisdictional regulations like the EU General Data Protection Regulation (GDPR).

Application Segmentation vs. Micro-Segmentation

The emergence of micro-segmentation as a best practice has created some confusion for IT pros evaluating possible internal security techniques. Micro-segmentation is, in fact, a method of implementing application segmentation. However, micro-segmentation capabilities significantly improve an organization’s ability to perform application segmentation through greater visibility and granularity.

Traditional application segmentation approaches have relied primarily on Layer 4 controls. This does have value, but firewalls and other systems used to implement such controls do not give security teams a clear picture of the impact of these controls. As a result, they are time-consuming to manage and susceptible to configuration errors, particularly as environments evolve to include cloud services and new deployment models like containers.

Moreover, Layer 4 controls alone are very coarse. Sophisticated attackers are skilled at spoofing IP addresses and piggybacking on allowed ports to circumvent Layer 4 controls.

Micro-segmentation improves upon traditional application segmentation techniques in two ways. The first is giving security teams a visual representation of the environment and the policies protecting it. Effective visualization makes it possible for security teams to better understand the policies they need and identify whether gaps in policy coverage exist. This level of visibility rarely exists when organizations are attempting to perform application segmentation using a mix of existing network-centric technologies.

A second major advantage of micro-segmentation is greater application awareness. Leading micro-segmentation technologies can display and control activity at Layer 7 in addition to Layer 4. An application-centric micro-segmentation approach can do more than simply create a coarse boundary between application tiers or around an application cluster. It allows specific processes – and their associated data flows – to be viewed in an understandable way and to serve as the basis for segmentation policies. Rather than relying solely on IP addresses and ports, micro-segmentation rules can white-list very specific processes and flows while blocking everything else by default. This enables far stronger application isolation than traditional application segmentation techniques can achieve.
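To make the difference concrete, a Layer 7 rule keys on the process generating a flow, not just its address and port. The sketch below is purely illustrative: process attribution in a real micro-segmentation platform comes from host instrumentation, and all names and addresses here are hypothetical.

```python
from ipaddress import ip_address, ip_network

# Hypothetical Layer 7 whitelist: alongside the address/port tuple, each rule
# names the exact process allowed to generate the flow. Default is deny.
L7_WHITELIST = [
    {"process": "/usr/sbin/nginx", "dst": "10.0.2.0/24", "port": 8080, "proto": "tcp"},
    {"process": "/usr/bin/postgres", "dst": "10.0.3.10/32", "port": 5432, "proto": "tcp"},
]

def flow_allowed(process: str, dst: str, port: int, proto: str) -> bool:
    """Allow a flow only if the originating process matches a whitelisted rule."""
    return any(
        process == r["process"]
        and ip_address(dst) in ip_network(r["dst"])
        and port == r["port"]
        and proto == r["proto"]
        for r in L7_WHITELIST
    )

# A rogue tool piggybacking on an allowed port is still blocked, because
# the process itself is not on the whitelist.
print(flow_allowed("/usr/sbin/nginx", "10.0.2.5", 8080, "tcp"))  # True
print(flow_allowed("/tmp/evil", "10.0.2.5", 8080, "tcp"))        # False
```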

Balancing Application Segmentation with Business Agility

Application segmentation is more important than ever as dynamic hybrid cloud environments and fast-paced DevOps deployment models become the norm. The business agility that these advances enable is highly valuable to the organizations that adopt them. However, heterogeneous environments that are constantly evolving are also more challenging to secure. Security teams can easily find themselves facing a lose/lose proposition: either slow down innovation or overlook new security risks.

The granular visibility and control that application-centric micro-segmentation offers makes it possible to proactively secure new or updated applications at the time of deployment without added complexity or delay. It also ensures that security teams can quickly detect any abnormal application activity that slips through the cracks and respond rapidly to new security risks before they can be exploited.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

The Average Cost of a Data Breach, and how Micro-Segmentation can Make a Difference

In the US, the financial cost of a data breach is rising year on year. IBM’s Cost of a Data Breach Report is independently conducted each year by the Ponemon Institute. This year, the report drew on data from more than 15 regions across 17 industries, based on interviews with IT, compliance, and data protection experts at 477 companies. As a result, the average cost of a data breach it reports is more accurate than ever.

Crunching the Numbers: The Average Cost of a Data Breach

According to the study, the average cost of a data breach in 2018 is $3.86 million, a 6.4% increase over last year’s report.

While the risk of a data breach is around 1 in 4, not all breaches are created equal. The more records that are exposed, the more expensive and devastating a breach becomes. A single stolen or exposed data record costs a company an average of $148, while a breach of 1 million records, considered a Mega Breach, costs around $40 million. Breaches of 50 million records may be reserved for the largest enterprises, but they raise the financial cost to $350 million.
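Note that these figures imply the per-record cost falls sharply with scale; the $148 average applies to typical breaches, not Mega Breaches, as a quick calculation against the report’s own numbers shows:

```python
# The report's own figures: the published $148 average cost per record
# applies to typical breaches, not to Mega Breaches.
mega_breaches = {
    1_000_000: 40_000_000,    # 1M records  -> $40M total cost
    50_000_000: 350_000_000,  # 50M records -> $350M total cost
}

print("Published average: $148 per record")
for records, total in mega_breaches.items():
    print(f"{records:>10,} records: ${total / records:.0f} per record")
# Prints $40/record at 1M records and $7/record at 50M: the total cost grows
# with scale, but far more slowly than the $148-per-record average suggests.
```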

Beyond a Ransom: The Hidden Costs of a Data Breach

Although many businesses worry about the rise in ransomware, the cost of a data breach goes far beyond any malicious demand from a hacker. The true cost breaks down into dozens of areas, from security upgrades in response to the attack to a drop in your stock price when word of the breach gets out: research by Comparitech found that breached companies’ share prices underperform the market by around 42% in the years following a breach. Other costly elements of a data breach include incident investigation, legal and regulatory activity, and even updating customers. These all contribute to the escalating cost when you fail to adequately protect your company against a data breach.

The Ponemon study found that the largest cost comes from customer churn. The US sees the highest cost of lost business due to a data breach in the world, at $4.2 million per incident, more than twice the global average. Most analysts put this discrepancy down to the nature of commerce in the United States: there is far more competition and choice, and customer loyalty is both harder to hold onto and almost impossible to win back once trust is lost.

Customers in the US are also more aware of data breaches, as laws dictate that they must be informed of any issues as they are uncovered. This kind of reputational damage is devastating, especially in the case of a Mega Breach: in fact, a third of the cost of Mega Breaches can be attributed to lost business.

Of course, there is also the fear that even if you manage to recover from a data breach, the worst is not over. The IBM study found a 27.9% chance of another breach in the two years following an attack, leaving your company extremely vulnerable unless you can make considerable changes, and fast.

Preparing Your Business for the Average Cost of a Data Breach

The numbers don’t lie. The speed and impact of data breaches are something every company, no matter its size, should be paying attention to. Fortunately, there are ways to protect your business and to position yourself responsibly for worst-case scenarios.

According to Verizon, 81% of hacking-related breaches exploit identity, typically through weak or stolen passwords or human error. Malware can piggyback on a legitimate user to get behind a physical firewall, which is why most IT professionals agree that even next-gen firewalls are insufficient. To limit the potential repercussions, every business should be employing a zero-trust model.

With micro-segmentation, perimeters can be created specifically to protect sensitive or critical data, ensuring that no network is implicitly trusted. A granular approach limits communications, and workloads themselves are tagged with labels and restrictions. Containment of attacks is built into your security from the outset, limiting an attacker’s freedom of movement and restricting lateral movement altogether. As the financial impact of a data breach rises with the number of records stolen, this is a significant weapon to have at your disposal.
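As a simple illustration of what label-driven, default-deny policy looks like (all labels and rules here are hypothetical), rules can reference what a workload is rather than where it sits:

```python
# Hypothetical label-based rules: policy follows the workload's labels,
# not its IP address, so it holds as workloads move or scale.
POLICY = [
    # (required source labels, required destination labels, allowed port)
    ({"app:billing", "tier:web"}, {"app:billing", "tier:db"}, 5432),
]

def allowed(src_labels: set[str], dst_labels: set[str], port: int) -> bool:
    """Permit traffic only when both endpoints' labels match a rule."""
    return any(
        s <= src_labels and d <= dst_labels and port == p
        for s, d, p in POLICY
    )

web = {"app:billing", "tier:web", "env:prod"}
db = {"app:billing", "tier:db", "env:prod"}
print(allowed(web, db, 5432))            # True: sanctioned flow
print(allowed({"app:other"}, db, 5432))  # False: untrusted by default
```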

Rapid Response Can Limit the Cost of Data Breaches

Efficiency in identifying an incident, as well as the speed of the response itself, has a huge impact. Rapid response saves money and proves to your customers that you still deserve their trust. According to the IBM report, the average time it took companies to identify a data breach was 197 days; even once a breach was detected, containing it took a further 69 days on average, for a total of 266 days. When it came to a Mega Breach, detection and containment could take an entire year.

With micro-segmentation, visibility is immediate. All communications are logged, including East-West traffic, across private architecture, cloud-based systems, and even hybrid solutions. The best solutions offer alerts and notifications for any unusual behavior, allowing you to stop threats in their tracks before any damage is done.

The quicker this happens, the less financial damage is done. In fact, on average, companies that contained a breach within 30 days saved more than $1 million compared with companies that couldn’t. The larger the breach, the more significant these savings are likely to be.

Ensure You’re Fully Armed Against a Data Breach

The complex nature of most businesses’ IT systems explains the growing threat of cyber-crime and the increasing financial cost of lax security. Traditional security systems are not enough to ensure adequate protection from a data breach, or rapid detection and response if the worst happens.

Micro-segmentation offers granular, flexible security that adapts to your exact environment, detecting and limiting the force of an attack and providing the visibility and response tools you need to keep your customers loyal.