A Deep Dive into Point of Sale Security

Many businesses think of their Point of Sale (POS) systems as an extension of a cashier behind a sales desk. But with multiple risk factors to consider, such as network connectivity, open ports, internet access, and communication with the most sensitive data a company handles, POS solutions are more accurately an extension of a company’s data center, a remote branch of its critical applications. With this in mind, they should be treated as a high-threat environment, one that needs a targeted security strategy.

Understanding a Unique Attack Surface

Distributed geographically, POS systems can be found in varied locations at multiple branches, making it difficult to keep track of each device individually and to monitor their connections as a group. They cover in-store terminals, as well as public kiosks and self-service stations in places like shopping malls, airports, and hospitals. Multiple factors, from a lack of resources to logistical difficulties, can make it nearly impossible to secure these devices at the source or to react quickly enough to a vulnerability or a breach. Remote IT teams often lack visibility into data and communication flows, creating blind spots that prevent a full understanding of the open risks across a spread-out network. Threats are exacerbated further by the vulnerable, outdated operating systems that many POS solutions run.

Underestimating the extent of this risk could be a devastating oversight. POS solutions are connected to many of a business’s main assets, from customer databases to credit card information and internal payment systems, to name a few. The devices themselves are very exposed, as they are accessible to anyone, from a waiter in a restaurant to a passer-by in a department store. This makes them high-risk for physical attacks, such as loading a malicious application over USB, as well as remote attacks, like exploiting the terminal through exposed interfaces. Recently, innate vulnerabilities have been found in mobile POS solutions from vendors that include PayPal, Square and iZettle, because of their use of Bluetooth and third-party mobile apps. According to the security researchers who uncovered the vulnerabilities, these “could allow unscrupulous merchants to raid the accounts of customers or attackers to steal credit card data.”

In order to allow system administrators remote access for support and maintenance, POS systems are often connected to the internet, leaving them exposed to remote attacks, too. In fact, 62% of attacks on POS environments are carried out through remote access. For business decision makers, ensuring that staff are comfortable using the system needs to be a priority, which can make security a balancing act. A straightforward on-boarding process, a simple UI, and flexibility for non-technical staff are all important factors, yet they can open up new attack vectors while leaving security considerations behind.

One example of a remote attack is the POSeidon malware, which includes a memory scraper and a keylogger, allowing credit card details and other credentials to be harvested from the infected machine and sent to the attackers. POSeidon gains access through third-party remote support tools such as LogMeIn. From this easy entry point, attackers then have room to move across a business network by escalating user privileges or making lateral moves.

High-risk yet hard to secure, POS systems are a serious security blind spot for many businesses.

Safeguarding this Complex Environment and Getting Ahead of the Threat Landscape

Firstly, assume your POS environment is compromised. You need to ensure that your data is safe, and the attacker is unable to make movements across your network to access critical assets and core servers. At the top of your list should be preventing an attacker from gaining access to your payment systems, protecting customer cardholder information and sensitive data.

The first step is visibility. While some businesses will wait for an operational slowdown or clear evidence of a breach before they look for anomalies, a complex environment needs full contextual visibility of the ecosystem and all application communication within it. Security teams can then accurately identify suspicious activity and where it is taking place, such as which executables are communicating with the internet when they shouldn’t be. A system that generates reports on high-severity incidents can show you what needs to be analyzed further.
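
As a minimal illustration of this kind of check, and not a depiction of how any particular product works, the following Python sketch uses the third-party psutil library to flag local processes holding established connections to public internet addresses (it may require administrator privileges to see all connections):

```python
import ipaddress
import psutil  # third-party: pip install psutil

def external_talkers():
    """Map process names to the public remote endpoints they are connected to."""
    findings = {}
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        remote = ipaddress.ip_address(conn.raddr.ip)
        if remote.is_private or remote.is_loopback:
            continue  # internal traffic; only internet-bound flows are of interest here
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except psutil.NoSuchProcess:
            continue  # the process exited while we were scanning
        findings.setdefault(name, set()).add(f"{conn.raddr.ip}:{conn.raddr.port}")
    return findings

if __name__ == "__main__":
    for proc, remotes in sorted(external_talkers().items()):
        print(f"{proc}: {sorted(remotes)}")
```

Reviewing output like this against the list of executables that legitimately need internet access is a crude version of the anomaly hunting described above.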

Now that you have detail on the communication among the critical applications, you can identify the expected behavior and create a tight segmentation policy. Block rules, with application process context, can be used to contain any potential threat, ensuring that any future attacker in the data center is completely isolated, without disrupting business processes or affecting performance.

The risk goes in both directions. Next, let’s imagine your POS is secure, but it’s your data center that is under attack. Your POS is an obvious target, with links to sensitive data and customer information. Micro-segmentation can protect this valuable environment, and stop an attack getting any further once it’s already in progress, without limiting the communication that your payment system needs to keep business running as usual.

With visibility and clarity, you can create and enforce the right policies, crafted around the strict boundaries of what your POS application needs to communicate, and no further. Some examples of policy include (a schematic sketch of such rules follows the list):

    • Limiting outgoing internet connections to only the relevant servers and applications
    • Limiting incoming internet connections to only specific machines or labels
    • Building default block rules for ports that are not in use
    • Creating block rules that detail known malicious processes for network connectivity
    • Whitelisting rules to prevent unauthorized apps from running on the POS
    • Creating strict allow rules to enable only the processes that should communicate, and blocking all other potential traffic
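
To make this concrete, here is a toy sketch of how such rules might be represented and evaluated. The schema, process names, and hostnames are invented for illustration; a real policy engine has its own rule model and enforcement path:

```python
# Hypothetical rule schema, first match wins; values are illustrative only.
POLICY = [
    # Allow the POS payment process to reach the payment gateway, and nothing else.
    {"action": "allow", "process": "pos_payments.exe",
     "dest": "payments.internal.example", "port": 443},
    # Default-deny everything else leaving the terminal.
    {"action": "block", "process": "*", "dest": "*", "port": "*"},
]

def evaluate(process: str, dest: str, port: int) -> str:
    """Return the action of the first rule matching a (process, dest, port) flow."""
    for rule in POLICY:
        if (rule["process"] in ("*", process)
                and rule["dest"] in ("*", dest)
                and rule["port"] in ("*", port)):
            return rule["action"]
    return "block"  # implicit default deny

print(evaluate("pos_payments.exe", "payments.internal.example", 443))  # allow
print(evaluate("dropper.exe", "198.51.100.7", 4444))                   # block
```

The key property is the process context: the same destination and port would be blocked for any executable other than the expected one.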

Tight policy means that your business can detect any attempt to connect to other services or communicate with an external application, reducing risk and potential damage. With a flexible policy engine, these policies will be automatically copied to any new terminal that is deployed within the network, allowing you to adapt and scale automatically, with no manual moves, changes, or adds slowing down business processes.

Don’t Risk Leaving this Essential Touchpoint Unsecured

Point of Sale solutions are a high-risk open door for attackers to access some of your most critical infrastructure and assets. Without adequate protection, a breach could grind your business to a halt and cost you dearly in both financial damage and brand reputation.

Intelligent micro-segmentation policy can isolate an attacker quickly to stop them doing any further damage, and set up strong rules that keep your network proactively safe against any potential risk. Combined with integrated breach detection capabilities, this technology allows for quick response and isolation of an attacker before the threat is able to spread and create more damage.

Want to learn more about how micro-segmentation can protect your endpoints while hardening the overall security for your data center?

Considering Cyber Insurance in the Aftermath of the NotPetya Attack

It’s been 18 months since June 2017, when the Petya/NotPetya cyber attacks struck businesses around the globe, resulting in dramatic losses of income and intense business disruption. Has cyber insurance limited the fallout for the victims of the ransomware attacks, and should proactive businesses follow suit and ensure they are financially covered in case of a breach?

Monetizing the Impact of Cybercrime

The effect of last year’s wave of cybercrime on the IT and insurance industries continues to grow as businesses disclose silent cyber impacts, as well as affirmative losses, from WannaCry/Petya. The latest reports from Property Claim Services put the loss at over $3.3 billion, and growing.

Despite this, for some businesses, reliance on insurance schemes has proven inadequate. US pharmaceutical company Merck disclosed that the Petya cyberattacks have cost them as much as $580 million since June 2017, and predicted an additional $200 million in costs by the end of 2018. In contrast, experts estimated their insurance pay-out would be around $275 million, a huge number, but under half of the amount they have incurred so far, and their silent costs continue to rise.

Other companies have been left even worse off, such as snack food company Mondelez International Inc., which is in a continuing battle with its property insurer, Zurich American Insurance Company. Mondelez claimed for the Petya attacks under a policy that included “all risks of physical loss or damage” specifying “physical loss or damage to electronic data, programs, or software, including loss or damage caused by the malicious introduction of a machine code or instruction.”

However, Zurich disputed the claim, citing a clause that excludes insurance coverage for any “hostile or war-like act by any government or sovereign power.” As US intelligence officials have determined that the NotPetya malware originated as an attack by the Russian military against Ukraine, Zurich is now fighting Mondelez’s claim that coverage is being wrongfully denied.

How Does This Lawsuit Affect the Cyber-Insurance Market Overall?

As cyber crime continues to rise, cyber insurance is understandably becoming big business. For companies deciding whether to take out coverage, CISOs need to find space in the budget for monthly costs and potentially large premiums. For this investment to be worthwhile, businesses want to be confident that they will recover their costs if a breach happens.

The insurance pay-outs around the Petya cyberattacks, and in particular the Mondelez case, throw this into question. This is especially true considering the rise in cyberattacks that are nation-backed, or that could plausibly be claimed to be nation-backed by insurance companies in order to dispute a claim. As regulations change and the US military is given more freedom to launch preventative cyberattacks against foreign government hackers, any evidence that suggests governmental or military attribution could legitimately be used against claimants looking to settle their losses.

The Effect on Public Research

The ripple effect of this could go beyond the claims sector and have a connected impact on security research, as well as free press and journalism in the long run, something we feel strongly about at GuardiCore Labs. Traditionally, researchers have had the freedom to comment and even speculate on the attribution of cyber attacks, based on information about the attackers’ behind-the-scenes behavior and the attack signatures they use. If insurance companies and claims handlers begin using public research as a reason to deny coverage to victims, this could put research teams in an ethical bind, reducing the amount of public research and the transparency of the industry overall.

How Much of a ‘Guarantee’ Can Security Companies Provide?

The issue of what claims to honor extends to financial guarantees from security companies, not only to insurance handlers. It is becoming increasingly popular to offer guarantees to customers who purchase cybersecurity products, in order to ‘put your money where your mouth is’ on the infallibility of a particular solution.

However, many experts believe that these policies have so many loopholes that they negate the benefit of the warranty overall. One example is the often cited ‘nation state or act of god’ exception, which includes cyberterrorism. Others include exclusions of coverage for portable devices, insider threats, or intentional acts. Even if you are widely covered for an event, does that extend to all employees? According to the latest Cyber Insurance Buying Guide, “most policies do not adequately provide for both first-party and third-party loss.”

Your ‘Guarantee’ is not a Guarantee

The bottom line for CISOs looking to protect their business is that cyber insurance is not a catch-all solution by any means. Whether it’s insurance companies paying out a limited figure or skirting a pay-out altogether, or cybersecurity companies making big promises that are ultimately undermined by the small print, cyber insurance has a way to go.

Focus on your cybersecurity solution, including strong technology like micro-segmentation to limit the attack surface in the case of a breach. With this in place, you can ensure that your critical assets and data are ring-fenced and isolated, no matter what your infrastructure looks like and what direction the attack comes from. Integration with powerful breach detection and incident response capabilities strengthens your position even further, reducing dwell time, and giving you a security posture you can rely on.

What’s the Difference Between a High Interaction Honeypot and a Low Interaction Honeypot?

A honeypot is a decoy system that is intentionally insecure, used to detect and alert on an attacker’s malicious activity. A smart honeypot solution can divert hackers from your real data center, and also allow you to learn about their behavior in greater detail, without any disruption to your data center or cloud performance.

Honeypots differ in the way that they’re deployed and the sophistication of the decoy. One way to classify the different kinds of honeypots is by their level of involvement, or interaction. Businesses can choose from a low interaction honeypot, a medium interaction honeypot or a high interaction honeypot. Let’s look at the key differences, as well as the pros and cons of each.

Choosing a Low Interaction Honeypot

A low interaction honeypot will only give an attacker very limited access to the operating system. ‘Low interaction’ means exactly that: the adversary will not be able to interact with your decoy system in any depth, as it is a much more static environment. A low interaction honeypot will usually emulate a small set of internet protocols and network services, just enough to deceive the attacker and no more. In general, most businesses simulate protocols such as TCP and IP, which allows the attacker to think they are connecting to a real system rather than a honeypot environment.
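
To make ‘low interaction’ concrete, here is a minimal sketch of the idea in Python: a fake service that accepts TCP connections, presents a login banner, logs whatever is sent, and never exposes a real shell. The port and banner are arbitrary choices for illustration:

```python
import logging
import socket

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def fake_telnet(port: int = 2323) -> None:
    """A low-interaction decoy: a banner and logging, with no real OS access."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        client, addr = srv.accept()
        logging.info("connection from %s:%d", *addr)
        try:
            client.sendall(b"login: ")   # emulate just enough to look real
            data = client.recv(1024)     # capture whatever the attacker types
            logging.info("payload from %s: %r", addr[0], data)
        finally:
            client.close()               # no shell is ever provided

if __name__ == "__main__":
    fake_telnet()
```

Every line in honeypot.log is interesting by definition, since no legitimate user has any reason to connect to the decoy.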

A low interaction honeypot is simple to deploy, does not give access to a real root shell, and does not use significant resources to maintain. However, a low interaction honeypot may not be effective enough, as it is only the basic simulation of a machine. It may not fool attackers into engaging, and it’s certainly not in-depth enough to capture complex threats such as zero-day exploits.

Is a High Interaction Honeypot a More Effective Choice?

A high interaction honeypot is at the opposite end of the scale in deception technology. Rather than simply emulating certain protocols or services, the attacker is provided with real systems to attack, making it far less likely they will guess they are being diverted or observed. As the systems are present only as a decoy, any traffic that reaches them is by its very existence malicious, making it easy to spot threats and to track and trace an attacker’s behavior. Using a high interaction honeypot, researchers can learn the tools an attacker uses to escalate privileges, or the lateral moves they make in an attempt to uncover sensitive data.

With today’s cutting-edge dynamic deception methods, a high interaction honeypot can adapt to each incident, making it far less likely that the attacker will realize they are engaging with a decoy. If your vendor team or in-house team has a research arm that works behind the scenes to uncover new and emerging cyber threats, this can be a great tool to allow them to learn relevant information about the latest tactics and trends.

Of course, the biggest downside to a high interaction honeypot is the time and effort it takes to build the decoy system at the start, and then to maintain the monitoring of it long-term in order to mitigate risk for your company. For many, a medium interaction honeypot strategy is the best balance, providing less risk than creating a complete physical or virtualized system to divert attackers, but with more functionality. These would still not be suitable for complex threats such as zero-day exploits, but could target attackers looking for specific vulnerabilities. For example, a medium interaction honeypot might emulate a Microsoft IIS web server, with sophisticated enough functionality to attract a certain attack that researchers want more information about.

Reducing Risk When Using a High Interaction Honeypot

Using a high interaction honeypot is the best way of using deception technology to fool attackers and get the most information out of an attempted breach. Sophisticated honeypots can simulate multiple hosts or network topologies, and can include HTTP and FTP servers and virtual IP addresses. The technology can identify returning hackers by marking them with a unique passive fingerprint. You can also use your honeypot solution to separate internal and external deception, keeping you safe from cyber threats that move East-West as well as North-South.

Mitigating the risk of using a high interaction honeypot is easiest when you choose a security solution that uses honeypot technology as one branch of an in-depth solution. Micro-segmentation technology is a powerful way to segment your live environment from your honeypot decoy, ensuring that attackers cannot make lateral moves to sensitive data. With the information you glean from an isolated attacker, you can enforce and strengthen your policy creation to double down on your security overall.

Sweeter than Honey

Understanding the differences between low, medium and high interaction honeypot solutions can help you make the smart choice for your company. While a low interaction honeypot might be simple to deploy and low risk, the real benefits come from using a strong, multi-faceted approach to breach detection and incident response that uses the latest high interaction honeypot technology. For ultimate security, a solution that utilizes micro-segmentation ensures an isolated environment for the honeypot. This lets you rest assured that you aren’t opening yourself up to unnecessary risk while reaping the rewards of a honeypot solution.

Micro-Segmentation and Application Discovery – Gaining Context for Accurate Action

The infrastructure and techniques used to deliver applications are undergoing a significant transformation. Many organizations now use the public cloud extensively alongside traditional on-premises data centers, and DevOps-focused deployment techniques and processes are bringing rapid and constant change to application delivery infrastructure.

While this transformation is realizing many positive business benefits, a side effect is that it is now more challenging than ever for IT and security teams to maintain both point-in-time and historical awareness of all application activity. Achieving the best possible security protection, compliance posture, and application performance levels amidst constant change is only possible through an effective application discovery process that spans all of an organization’s environments and application delivery technologies.

Essential Application Discovery Process Components

Application discovery plays a valuable role for organizations defining and implementing a micro-segmentation strategy. Micro-segmentation solutions like GuardiCore Centra are more powerful and simpler to use when they have a complete and granular representation of an organization’s infrastructure as a foundation.

Application discovery is achieved through a multi-step process that includes the following key elements:

  • Collecting and aggregating data from throughout the infrastructure
  • Organizing and labeling data for business context
  • Presenting application discovery data in a visual and relevant manner
  • Making it seamless to use application discovery insights to create policies and respond to security incidents

Each step has its own nuances, which require consideration when evaluating micro-segmentation technologies.

Application Data Collection and Aggregation

Modern application delivery infrastructure often consists of numerous physical locations, including third-party cloud infrastructure, and a wide range of application types and delivery models. This can make it challenging to collect comprehensive data from throughout the infrastructure. For example, GuardiCore Centra relies on multiple techniques to collect data, including:

  • Deploying agents on application hosts to monitor application activity
  • Collecting detailed network data through TAP/SPAN ports and virtual SPANs
  • Collecting VPC flow logs from cloud providers

While each of these techniques is valuable, agent-based collection in particular ensures that Layer 7 granularity is included in the application discovery data set.

Once collected, application activity data must be aggregated and stored in a scalable manner to support the subsequent steps in the application discovery process.

Applying Context to Application Discovery Data

Whenever data is collected from disparate sources, it is difficult to interpret and derive value from it in its raw form. Therefore, it is critical to organize data and present it in context that is relevant to the organization. GuardiCore Centra employs several complementary techniques to simplify and, when possible, automate this essential step, including:

    • Querying an organization’s existing data sources, such as orchestration tools and configuration management databases, using REST APIs
    • Automatically applying dynamic labels based on pre-defined logic
    • Discovering labels using agents deployed on application hosts
    • Giving customers a simple and flexible framework to create labels manually

A sound labeling approach makes it easy for an organization to view application activity in meaningful ways using attributes such as environment, application type, regulatory scope, location, role, or owner. While these are common examples, GuardiCore Centra’s labeling framework is also highly flexible, so organizations can define a custom label hierarchy to accommodate any specialized needs.
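
As a rough sketch of what automated, logic-driven labeling can look like, consider the following Python fragment. The rules, attribute names, and naming conventions are invented for illustration, not taken from any product:

```python
import re

# Hypothetical labeling rules: (label key, label value, predicate over asset attributes).
LABEL_RULES = [
    ("environment", "production", lambda a: a["name"].startswith("prd-")),
    ("environment", "staging",    lambda a: a["name"].startswith("stg-")),
    ("role",        "database",   lambda a: re.search(r"db|sql", a["name"]) is not None),
    ("app",         "billing",    lambda a: "billing" in a.get("cmdb_service", "")),
]

def label_asset(asset: dict) -> dict:
    """Attach every label whose predicate matches the asset's attributes."""
    labels = {}
    for key, value, predicate in LABEL_RULES:
        if predicate(asset) and key not in labels:  # first matching rule per key wins
            labels[key] = value
    return labels

print(label_asset({"name": "prd-sql-01", "cmdb_service": "billing"}))
# {'environment': 'production', 'role': 'database', 'app': 'billing'}
```

The same pattern extends naturally to attributes pulled from orchestration tools or a CMDB over REST, rather than from the asset name alone.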

Visualizing Application Discovery Information

Once application data has been collected, harmonized, and contextualized, the next step is to present it in a manner that is meaningful to IT professionals, security experts, and application owners. The following examples from GuardiCore Centra illustrate the impact that the preceding three steps have on the quality of the visual representation of application discovery data.

Without context, raw data may look something like this:

[Figure: an unlabeled map of raw workload-to-workload flows]

As you can see, this view contains a large amount of information but provides very little insight into which applications exist in the environment and how they interact with one another.

In contrast, once context has been added through labeling, more meaningful visualizations like the following example become possible:

[Figure: a labeled application map showing a specific application, its components, and its flows]

In this case, the underlying data is presented in a manner that very clearly defines a specific application, its components, and its flows.

When evaluating possible application discovery data visualization approaches, it is important to consider both real-time and historical visualization needs. Real-time data is helpful for assessing additional policy needs or responding to in-progress security incidents. However, historical data is also extremely valuable for compliance audits and for security incident forensics and post mortems.

Moving from Application Discovery to Action

A final consideration when implementing an application discovery process is how best to make the data collected actionable. Once security teams and application stakeholders gain a complete view of application activity across their infrastructure, they often identify new legitimate applications that must be protected, unauthorized applications that they would like to block, possible security enhancements for existing applications, and even active security incidents that must be contained. Therefore, it is important to have seamless linkage between application discovery and micro-segmentation policy definition.

GuardiCore Centra accomplishes this by making application discovery visualizations directly actionable through point-and-click actions. Administrators can click on assets and flows in the visualization and gain immediate access to policy definition options. They can even create sophisticated compound policies through GuardiCore’s intuitive, highly visual interface.

This final step illustrates the mutually-beneficial relationship between application discovery and micro-segmentation. A well-implemented application discovery process gives an organization’s application stakeholders both a clear view of application activity across all environments and an intuitive path to positively affect it through granular micro-segmentation policies. Similarly, once micro-segmentation policies have been implemented, the ability to view them in an up-to-date visualization of the infrastructure at any time makes it easier to update and maintain policies as environments change and new threats emerge.

The challenge of implementing an integrated application discovery process that spans all environments and delivery models may seem daunting to many organizations. However, by breaking the problem down into its four key elements and considering how each can be addressed more effectively with the help of flexible technologies like GuardiCore Centra, security teams and other stakeholders can set their application discovery process on a path to success.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Secure Critical Applications

Today’s information security teams face two major trends that make it more challenging than ever to secure critical applications. The first is that IT infrastructure is evolving rapidly and continuously. Hybrid cloud architectures with a combination of on-premises and cloud workloads are now the norm. There are also now a multitude of application workload deployment methods, including bare-metal servers, virtualization platforms, cloud instances, and containers. This growing heterogeneity, combined with increased automation, makes it more challenging for security teams to stay current with sanctioned application usage, much less malicious activity.

The second major challenge that makes it difficult to secure critical applications is that attackers are growing more targeted and sophisticated over time. As security technologies become more effective at detecting and stopping generic, broad-scale attacks, attackers are shifting to more deliberate techniques focused on specific targets. These efforts are aided by the rapid growth of east-west traffic in enterprise environments as application architectures become more distributed and as cloud workloads introduce additional layers of abstraction. By analyzing this east-west traffic for clues about how applications function and interact with each other, attackers can identify potential attack vectors. The large quantity of east-west traffic also provides potential cover as attacks progress, since attackers often attempt to blend unauthorized lateral movement in with legitimate traffic.

Securing Critical Applications with Micro-Segmentation

Implementing a sound micro-segmentation approach is one of the best steps that security teams can take to gain greater infrastructure visibility and secure critical applications. While the concept of isolating applications and application components is not new, micro-segmentation solutions like GuardiCore Centra have improved on this concept in a number of ways that help security teams overcome the challenges described above.

It’s important for organizations considering micro-segmentation to avoid becoming overwhelmed by its broad range of applications. While the flexibility that micro-segmentation offers is one of its key advantages over alternative security approaches, attempting to address every possible micro-segmentation use case on day one is impractical. The best results are often achieved through a phased approach. Focusing on the most critical applications early in a micro-segmentation rollout process is an excellent way to deliver value to the organization quickly while developing a greater understanding of how micro-segmentation can be applied to additional use cases in subsequent phases.

Process-Level Granularity

The most significant benefit that micro-segmentation provides over more traditional segmentation approaches is that it enables visibility and control at the process level. This gives security teams much greater ability to secure critical applications by making it possible to align segmentation policies with application logic. Application-aware micro-segmentation policies that allow known legitimate flows while blocking everything else significantly reduce attackers’ ability to move laterally and blend in with legitimate east-west traffic.

Unified Data Center and Cloud Workload Protection

Another important advantage that micro-segmentation offers is a consistent policy approach for both on-premises and cloud workloads. While traditional segmentation approaches are often tied to specific environments, such as network infrastructure, a specific virtualization technology, or a specific cloud provider, micro-segmentation solutions like GuardiCore Centra are implemented at the workload level and can migrate with workloads as they move between environments. This makes it possible to secure critical applications in hybrid cloud infrastructure and prevent new security risks from being introduced as the result of infrastructure changes.

Platform Independence

In addition to providing a unified security approach across disparate environments, micro-segmentation solutions like GuardiCore Centra also work consistently across various operating systems and deployment models. This is essential at a time when many organizations have a blend of bare-metal servers, virtualized servers, containers, and cloud instances. Implementing micro-segmentation at the application level ensures that policies can persist as underlying deployment platform technologies change.

Common Workload Protection Needs

There are several categories of critical applications that exist in most organizations and are particularly challenging – and particularly important – to secure.

Protecting High-Value Targets

Every organization has infrastructure components that play a central role in governing access to other systems throughout the environment. Examples may include domain controllers, privileged access management systems, and jump servers. It is essential to have a well-considered workload protection strategy for these systems, as a compromise will give an attacker extensive ability to move laterally toward systems containing sensitive or highly valuable data. Micro-segmentation policies with process-level granularity allow security teams to tightly manage how these systems are used, reducing the risk of unauthorized use.

Cloud Workload Protection

As more workloads migrate to the cloud, traditional security controls are often supplanted by the security settings provided by a specific cloud provider. While the native capabilities that cloud providers offer are often valuable, they create situations in which security teams must segment their environment one way on-premises and another way in the cloud. This creates greater potential for new security issues as a result of confusion, misconfiguration, or lack of clarity about roles and responsibilities.

The challenge is compounded when organizations use more than one cloud provider, as each has its own set of security frameworks. Because micro-segmentation is platform-independent, the introduction of cloud workloads does not significantly increase the attack surface. Moreover, micro-segmentation can be performed consistently across multiple cloud platforms as a complement to any native cloud provider security features in use, avoiding confusion and providing greater flexibility to migrate workloads between cloud providers.

New Application Deployment Technologies

While bare-metal servers, virtualized servers, and cloud instances all preserve the traditional Windows or Linux operating system deployment model, new technologies such as containers represent a fundamentally different application deployment approach with a unique set of workload protection challenges. Implementing a micro-segmentation solution that includes support for containerized applications is another step organizations can take to secure critical applications in a manner that will persist as the underlying infrastructure evolves over time.

Critical Applications in Specific Industries

Along with the general steps that all organizations should take to secure critical applications, many industries have unique workload protection challenges based on the types of data they store or their specific regulatory requirements.

Examples include:

  • Healthcare applications that store or access patients’ protected health information (PHI), which is both confidential and subject to HIPAA regulation.
  • Financial services applications that contain extensive personally identifiable information (PII) and other sensitive data that is subject to industry regulations like PCI DSS.
  • Law firm applications that store sensitive information that must be protected for client confidentiality reasons.

In these and other vertical-specific scenarios, micro-segmentation technologies can be used to both enforce required regulatory boundaries within the infrastructure and gain real-time and historical visibility to support regulatory audits.

Decoupling Security from Infrastructure

While there are a variety of factors that security teams must consider when securing critical applications in their organization, workload protection efforts do not need to be complicated by IT infrastructure evolution. By using micro-segmentation to align security policies with application functionality rather than underlying infrastructure, security teams can protect key applications effectively even as deployment approaches change or diversify. In addition, the added granularity of control that micro-segmentation provides makes it easier to address organization- or industry-specific security requirements effectively and consistently.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Are you Following Micro-Segmentation Best Practices?

With IT infrastructures becoming increasingly virtualized and software-defined, micro-segmentation is fast becoming a priority for IT teams looking to enhance security measures and reduce the attack surface of their data center and cloud environments. With its fine-grained approach to segmentation policy, micro-segmentation enables more granular control of communication flows between critical application components, going a step further than traditional network segmentation methods in support of a move to a Zero Trust security model.

Finding the Right Segmentation Balance

If not approached in the right way, micro-segmentation can be a complex process to plan, implement, and manage. For example, overzealous organizations may run too fast to implement these fine-grained policies across their environment, leading to over-segmentation, which could have a negative impact on the availability of IT applications and services, increase security complexity and overhead, and actually increase risk. At the same time, businesses need to be aware of the risks of under-segmentation, leaving the attack surface dangerously large in the case of a breach.

With a well-thought-out approach to micro-segmentation, organizations can see fast time to value for high-priority, short-term use cases, while also putting in place the right structure for a broader implementation of micro-segmentation across future data center architectures. To achieve your micro-segmentation goals without adding unnecessary complexity, a business should consider these micro-segmentation security best practices.

Start with Granular Visibility Into Your Environment

It’s simple when you think about it – how can you secure what you can’t see? Whether you’re using application segmentation to reduce the risk between individual or groups of applications, or tier segmentation to define the rules for communication within the same application cluster, you need visibility into workloads and flows, at a process level. Process-level visibility allows security administrators to identify servers with similar roles and shared responsibilities so they can be easily grouped for the purpose of establishing security policies.

At first blush, this may seem to be a daunting task and is likely the first impediment to effective micro-segmentation. However, with the aid of graphic visualization tools that enable administrators to automatically discover and accurately map their data center applications and the communication processes between them, the complexity of implementing an effective micro-segmentation strategy can be greatly simplified.

Once administrators have gained this depth of visibility, they can begin to filter and organize applications into groups for the purpose of setting common security policies – for example, all applications related to a particular workflow or business function. Micro-segmentation best practice is then to create policies that can be tested and refined as needed for each defined group.

Micro-Segmentation Best Practices for Choosing the Right Model

There are two basic models for the implementation of micro-segmentation in a data center or cloud environment: network-centric, which typically leverages hypervisor-based virtual firewalls or security groups in cloud environments, and application-centric, which typically uses agent-based distributed firewalls. Both have pros and cons.

In a network-centric model, traffic control is managed at network choke points, through third-party controls, or by pushing rules onto each workload’s existing network enforcement mechanisms.

In contrast, an application-centric model deploys agents onto the workload itself. This has a number of benefits. Visibility is incomparable, available down to Layer 7, and without the constraints or encryption that proprietary applications may enforce. An agent-based solution is also suitable across varied infrastructures, as well as any operational environment. This gives one consistent method across technologies, even when you consider new investments in containers and other microservices-based application development and delivery models.

Additionally, as there are no choke points to consider, the policy is entirely scalable, and can follow the workload even as it moves between environments, from on-premises to public cloud and back. An application-centric approach also allows you to define more granular policies, which reduces the attack surface beyond what can be accomplished with a network-centric model. Tools built for a specific environment are simply not good enough for hybrid multi-cloud data center needs, which explains why agent-based solutions have become a micro-segmentation best practice in recent years.

Agent-based approaches also align more easily with the DevOps models most enterprises use today. Businesses can leverage automation and autoscaling to streamline provisioning and management of workloads, and micro-segmentation policies can be incorporated easily and dynamically. There is no need for the manual moves, adds, and changes you would have in the network-centric model.

Define “Early Win” Use Cases

Organizations that are successful with micro-segmentation typically start by focusing on projects that are tangible, fairly easy to complete, and in which the benefits will be readily apparent. These typically include something as basic as environment segmentation, such as separating servers and workloads in development or quality assurance from those in production.

Another common starting point is the isolation of applications for compliance purposes, known to be one of micro-segmentation security best practices. Regulatory regimes such as SWIFT, PCI, or HIPAA typically spell out the types of data and processes that must be protected from everyday network traffic. Micro-segmentation allows for the quick isolation of these applications and data, even if the application workloads are distributed across different environments, such as on-premises data centers and public clouds.

Organizations may also undertake projects to restrict access to data center assets or services from outside users or Internet of Things devices. In health care, hospitals will use micro-segmentation to isolate medical devices from the general network. Businesses might use micro-segmentation as a means of traditional ring-fencing to isolate their most critical applications.

The common thread running through these examples is that they represent business needs and challenges for which micro-segmentation is ideally suited. They are easily defined projects with clear business objectives, while at the same time providing a proving ground for micro-segmentation.

Think Long Term and Consider the Cloud

Organizations that have successfully implemented micro-segmentation typically take a phased approach, piloting on a few priority projects, getting comfortable with the tools and the process, and gradually expanding. A pragmatic approach to micro-segmentation is to align your requirements with both your current and future-state data center architectures.

A key component of this is consideration of “coverage” in your micro-segmentation tool stack. Look for tools that cover not only a single environment, but provide support for workloads in both your current and future data center architectures. This typically includes workloads running on legacy systems, bare metal servers, virtualized environments, containers and public cloud.

In addition, don’t assume that the native security controls offered by IaaS or public cloud services will be adequate to fully protect your cloud workloads. Cloud service providers operate on a shared security model, in which the provider takes responsibility for securing the cloud infrastructure while customers are responsible for their own operating systems, applications, and data. A cloud provider’s controls are only effective in that provider’s environment, so enterprises would have to manage multiple security platforms and make manual adjustments as applications move among different cloud environments. Furthermore, most native security controls are directed at the port level (Layer 4) and not at the process level (Layer 7) where vulnerable applications reside. That means they will not reduce the attack surface sufficiently to be effective.
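
To illustrate the Layer 4 limitation, here is a hedged boto3 sketch of a typical native control, an AWS security group ingress rule (the group ID and CIDR are placeholders). Note what the rule can and cannot express:

```python
import boto3

ec2 = boto3.client("ec2")

# A native cloud control stops at Layer 4: it constrains which hosts may reach
# port 5432, but any process on an allowed host can then use the opened port.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "10.0.1.0/24",
                      "Description": "app subnet -> postgres"}],
    }],
)
# Nothing in this rule can say "only the billing service binary may open this
# connection" -- that Layer 7 constraint is where process-level
# micro-segmentation adds value on top of native controls.
```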

Integrate with Complementary Controls

When evaluating solutions, another of micro-segmentation best practices is to look for those where there are value-added and integrated complementary controls. This helps reduce security management complexity, as you can find solutions that give you more than just micro-segmentation out of the box.

Single-platform micro-segmentation solutions might be effective at segmenting your applications and workloads to reduce risk. Micro-segmentation security best practices, however, are to look for a choice that takes you to the next level. Threat detection and response is a perfect example of a valuable complementary control. It allows you to do more than simply protect processes and check compliance off your to-do list. Of course, both breach detection and incident response are must-haves for any complex IT infrastructure.

The difference with an all-in-one solution is the reduction in the administrative overhead of attempting to make disparate solutions work in tandem. As micro-segmentation tackles risk reduction in both data centers and clouds, threat detection and incident response can take the obvious next step of quickly detecting and mitigating active breaches, which can help you dramatically reduce dwell time and lower the cost and impact of a breach.

A Summary of Micro-Segmentation Security Best Practices

From choosing an application-centric model that deploys agents onto the workload itself and comes with valuable complementary controls, to ensuring visibility from the start and looking for the ‘quick wins’ that provide early value, following these micro-segmentation security best practices will give your business the best chance of successful implementation.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

GuardiCore Integrates with AWS Security Hub

Today at re:Invent, Amazon revealed the AWS Security Hub, a security service that provides AWS cloud customers with a comprehensive view of their security state within AWS. GuardiCore has worked with AWS over the past weeks to provide support and integration to this service. While AWS provides some built-in security capabilities, customers require additional capabilities that can only be provided by third-party companies like GuardiCore.

Both GuardiCore Centra and Infection Monkey now integrate with the AWS Security Hub, and this integration provides significant value to customers. Early feedback is extremely positive, and we encourage AWS customers to test both integrations:

GuardiCore Centra Integration with AWS Security Hub

GuardiCore Centra, our flagship product, secures any cloud, private or public. Security incidents are forwarded to the AWS Security Hub, where they can be managed through the console or consumed by other security products.
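
As a rough sketch of how any third-party tool can feed the Security Hub, the following Python snippet uses boto3 to import a finding in the AWS Security Finding Format. The account ID, ARNs, and finding details are placeholders, and this is not GuardiCore’s actual finding schema:

```python
import datetime
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

now = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
finding = {
    "SchemaVersion": "2018-10-08",
    "Id": "example-incident-0001",            # placeholder values throughout
    "ProductArn": ("arn:aws:securityhub:us-east-1:111122223333:"
                   "product/111122223333/default"),
    "GeneratorId": "example-lateral-movement-detector",
    "AwsAccountId": "111122223333",
    "Types": ["TTPs/Lateral Movement"],
    "CreatedAt": now,
    "UpdatedAt": now,
    "Severity": {"Normalized": 70},
    "Title": "Suspicious lateral movement detected",
    "Description": "Unexpected east-west connection between workloads.",
    "Resources": [{"Type": "AwsEc2Instance", "Id": "i-0abcd1234efgh5678"}],
}

response = securityhub.batch_import_findings(Findings=[finding])
print(response["SuccessCount"], "finding(s) imported")
```

Once imported, the finding shows up alongside those from native AWS services, where it can be triaged or routed to other tools.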

Infection Monkey Integration with AWS Security Hub

The Infection Monkey is an open source Breach and Attack Simulation (BAS) tool that assesses the resiliency of private and public cloud environments to post-breach attacks and lateral movement. Its integration with the AWS Security Hub allows anyone to verify and test the resilience of their AWS environment and correlate this information with the native security solutions and benchmark score.

Working on the integration was fun. Since both Centra and Infection Monkey have integration points and can run on AWS, adding reporting interfaces to the Security Hub was a straightforward task.

We believe that the AWS Security Hub represents a good approach, allowing more shared security insights from more vendors in order to improve the overall security posture of your environment. It collects security findings and alerts generated by AWS security services and by other security solutions (like GuardiCore Centra and Infection Monkey), and aggregates those findings and alerts within each supported AWS region.

During the beta period, the service provided integration with Amazon GuardDuty, Amazon Inspector, and Amazon Macie, and added new capabilities by running CIS benchmark checks for AWS workloads. We are looking forward to your feedback. Tell us: what do you think about the integration?

Policy Enforcement Essentials for your Micro-Segmentation Strategy

Policy enforcement is one of those terms that can have varied meanings depending on the context. When discussing data center security, application and network policy enforcement refers to any controls that your business uses to govern behavior and access to your network and applications, with special emphasis on east-west (E-W) traffic patterns. The data center and your cloud environments are harder to manage than other parts of your network, due to the special characteristics of the virtualization layers. In addition, most of the traffic inside the data center never traverses a choke point, so businesses lose the ability to use these parts of a topology as a control point.

Network policy enforcement should in theory allow organizations to ensure that only authorized applications and users are communicating with each other while enabling them to meet their own governance, security, and compliance requirements. There are a number of challenges in a hybrid data center that make creating and enforcing flexible policy more difficult. These include:

  • Creating policy that finds the right scope
  • Reducing attack surface in case of a breach
  • Remaining adaptable despite granular policy
  • Managing networks with thousands or more workloads over varied locations
  • Creating policy across multiple cloud architectures
  • Enforcing at both the network and process level for ultimate risk reduction

Micro-segmentation technology was invented to solve these challenges, allowing businesses to secure the data center from the inside, prevent lateral movements, meet compliance requirements and gain east-west traffic visibility. Using the right micro-segmentation policy, these rules can be truly granular – not only keeping environments from interacting with one another with coarse segmentation, but also making fine-grained policy. With GuardiCore Centra, your micro-segmentation project is enabled with enforcement capabilities that allow you to orchestrate at the flow level and even down to the process level on all platforms, so that stakeholders can meet different security and compliance mandates, using micro-segmentation as a security solution as well as a compensating control for compliance where other tools can’t be used.

Finding the Right Scope for your Policy Enforcement Strategy

Anyone involved in compliance and security knows that defining the scope is the biggest initial challenge. One of the first stages of creating effective micro-segmentation policy is to be clear about your policy objectives, both for business and security. If you’re just looking at security, the more granular your policies are, the stronger your security posture is. However, this could also limit communication and flexibility. The wrong policy choice could cause frustration or delays for your business. Overall, smart micro-segmentation policy allows you to enforce a strong security policy without compromising your communications or your business goals.

Network policy enforcement is well known as a method to help businesses meet compliance regulations. Take PCI DSS, for example. By reducing the scope of what can reach your cardholder data environment (CDE), you dramatically reduce the work it takes to achieve compliance. By building application-aware policies, you can enforce system access to specific data. If your policies segment all the way to Layer 7, the application layer, attackers who have breached your perimeter still can’t pivot from an out-of-scope area to one that is in scope. With tier segmentation, this can be enforced even within the same application cluster.

Strategies for Practical Implementation of Micro-segmentation Policy

First, you will want to map out your business objectives and gain visibility of your environment, understanding application dependencies and flows within your architecture as a whole. Then, you can start to think about the kinds of controls that are required for your business and teams. This will allow you to set the right policy enforcement. A good security solution will allow you to start with global, high-level rules, and then add layers, increasing the granularity of your policies.

Some rules will apply to large segments, such as only allowing sales staff to access the sales applications, allowing DNS resolution only through the internal, secure DNS cluster, or keeping the production environment separate from the test environment altogether. With GuardiCore you can also define blocking rules as part of your micro-segmentation policy strategy. In fact, you can combine both allow rules and block rules within the same policy. This enables you to define rules like blocking non-admin access to SSH on the network.

Micro-segmentation policy should allow you to be creative, providing the ability to use different collection and enforcement methods based on your clouds and network topologies. With this technology, you can set up the rules that balance your unique needs for flexibility and security.

The following are some examples of policy creation ideas; some are coarse, while others show the benefits of enforcing at a granular level. A toy sketch of how combined allow and block rules might be evaluated follows the list.

  • Separating Development, Production and Test environments (as required by regulations like PCI DSS)
  • Restricting access to servers from non-server environments
  • Application segmentation inside environments, for example allowing the Sharepoint applications to communicate with internal storage while limiting other types of traffic
  • Tier segmentation inside application environments, such as communication between a web server and a DB server
  • Restricting admin access to servers to comply with EU regulations such as GDPR
  • Blocking all unencrypted protocols such as FTP or Telnet within your data center traffic
  • Denying a specific application tier or data center area from communicating with the internet
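
Here is the promised toy sketch of combined allow and block rules in Python. The rule schema and the block-overrides-allow semantics are illustrative assumptions, not a description of any particular policy engine:

```python
# Illustrative only: a toy engine in which block rules take precedence over allows.
RULES = [
    {"action": "block", "protocol": "telnet"},                       # no unencrypted protocols
    {"action": "block", "src_env": "test", "dst_env": "production"}, # environment separation
    {"action": "allow", "src_app": "sharepoint", "dst_app": "storage"},
]

def decide(flow: dict) -> str:
    """Evaluate a flow: any matching block rule wins, then allows, then default deny."""
    def matches(rule):
        return all(flow.get(k) == v for k, v in rule.items() if k != "action")
    if any(r["action"] == "block" and matches(r) for r in RULES):
        return "block"
    if any(r["action"] == "allow" and matches(r) for r in RULES):
        return "allow"
    return "block"  # default deny

print(decide({"protocol": "telnet", "src_env": "test", "dst_env": "test"}))  # block
print(decide({"src_app": "sharepoint", "dst_app": "storage",
              "protocol": "https", "src_env": "production",
              "dst_env": "production"}))                                     # allow
```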

Building Flexible Policy Enforcement That Works in the Real World

Businesses are increasingly moving away from static environments with flat structures or purely on-premises data centers. Whichever policy engine a company chooses needs to be future-proof, allowing policy creation that gives control over auto-scaled workloads, services that expand and contract, and processes that are constantly changing and adapting. Hybrid-cloud data centers are a great example of this kind of environment, where traditional, inflexible policy engines can’t provide adequate dynamic provision.

In contrast, a flexible policy engine will support the latest breakthroughs in policy enforcement, such as the ability to support auto-scaling environments, or to allow the policy to follow the workload, no matter what platform it is on or on which kind of cloud it is deployed. This is impossible if policy is expressed in IP addresses, ranges, or VLANs. In order to really get the benefits of micro-segmentation technology in network policy enforcement, your policy engine and labeling need to be able to breathe with your data center, providing different models of control methods, able to deliver both quick wins and ongoing risk reduction. In other words, you should be able to see clearly how the entire data center behaves and communicates at the application level, and turn that insight into policy using allow rules, while adding block rules to enforce compliance and best-practice security requirements.
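
The following Python fragment sketches why label-based policy can ‘breathe’ with the data center while address-based policy cannot. The inventory structure and label names are invented for illustration:

```python
# Hypothetical inventory mapping label sets to current members; IPs change, labels persist.
INVENTORY = {
    ("env:prod", "app:billing", "tier:web"): {"10.0.1.12", "10.0.1.13"},
    ("env:prod", "app:billing", "tier:db"):  {"10.0.2.7"},
}

# The policy is written once, in label terms, never in addresses.
POLICY = {"src": ("env:prod", "app:billing", "tier:web"),
          "dst": ("env:prod", "app:billing", "tier:db"),
          "port": 5432, "action": "allow"}

def render(policy: dict, inventory: dict) -> list:
    """Resolve a label-based rule to concrete address pairs at enforcement time."""
    return [(src, dst, policy["port"], policy["action"])
            for src in sorted(inventory[policy["src"]])
            for dst in sorted(inventory[policy["dst"]])]

print(render(POLICY, INVENTORY))

# When a web node autoscales in, re-rendering picks it up with no rule changes:
INVENTORY[("env:prod", "app:billing", "tier:web")].add("10.0.1.14")
print(render(POLICY, INVENTORY))
```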

Being Smart About Network Policy Enforcement

Not all micro-segmentation policy enforcement solutions are created equal. With the help of a flexible policy engine that supports both allow and block rules, and that builds and enforces policies at the process level as well as the network level, you can take policy enforcement to the next level. Firstly, you can achieve visibility over your entire environment. Secondly, you can enforce at a level of granularity that your organizational maturity can tolerate. Thirdly, you can eliminate a lot of risk fast, using a small number of rules.

Using a micro-segmentation policy enforcement engine that supports granular deep visibility and micro-segmentation policies makes your investment yield even more results, faster. With such an engine, the real-time view of the dependencies and communication on your network can be turned into policies that suit the context of your unique business objectives and needs, strengthening your security posture without limiting business agility.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

Implementing Micro-Segmentation Insights, Part 2: Getting Internal Buy-In

In a recent blog, I revealed part one of my insights from implementing micro-segmentation projects with large customers. Vendors don’t always get the perspective of deep involvement in the execution of these projects, but I have been fortunate enough to cultivate relationships with our customers and have thus been granted an inner view. In my first blog of this series, I discussed short versus long-term objectives and the importance of knowing your goals and breaking them down into phases to ensure that you’re truly working on the important matters first and foremost. In this blog, I want to get into the importance of getting internal buy-in from other teams in order to enable improved implementation. To make the process easier for all involved, “selling” the project to other teams is an important early step.

More often than not, the segmentation project is driven by the security organization, and they are the ones who see immediate value from it, but they need the collaboration of other teams to help them deploy the product. It doesn’t really matter whether the product is an overlay (usually based on agents) or an underlay (part of the infrastructure).

Some teams will need to carry more of the weight in deploying such a project: networking, application, infrastructure, system, and so on. To win the collaboration of those teams, it helps to carry a carrot, not just a stick. Getting early buy-in and planning out collaboration with the right people will make the process much easier in the long run. To do so, the solution needs to be “sold” internally. And just like any sales process, the better prepared and equipped you are, the smoother it will go.

As with any sales process, there will be “early adopters” and “late adopters.” In our experience, when the project team was well prepared and presented the benefits of the solution to the other teams, the impact was significant. If you can show the application team how they benefit from the solution, they will not just drop their objections; they will push the deployment themselves.

But there is a significant BUT to this. The product you choose for your segmentation needs to be able to deliver those carrots. It needs to support use cases beyond the immediate concerns of its direct, original audience. Here is a small example: say you need to convince an application owner to install the agents that will enforce the policy. The application owner usually has little interest in the security use case and might object or hesitate because of the additional component introduced into the mix, wondering how it will affect stability, performance, and so on. But if you can demonstrate that they will gain value from the product, they will in fact become the ones pushing the deployment.

One such value proposition that we constantly see is “getting visibility into your application.” This is of course a great promise, but for the application owner to actually gain value from this visibility, it needs certain properties: it must be L7 with application context; it must collect data and store it historically (note that for the sake of building policy, the historical aspect of the data is not important at all; you just need to know which connections can happen); and it must be searchable and filterable to allow simple, convenient consumption by the application owner. This is just one example, but the lesson is to show the many ways the solution can expand beyond the obvious reason for implementation, securing the buy-in of other teams by illustrating those other benefits.
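
A toy example of what those three properties look like in practice, with invented field names: historical L7 flow records that an application owner can filter directly, without wading through raw packet data.

    # Invented record shape; the point is L7 context, historical retention,
    # and simple filtering.
    from datetime import datetime

    flows = [
        {"ts": datetime(2019, 3, 1, 9, 15), "src": "web-1", "dst": "db-1",
         "l7": "mysql", "process": "java"},
        {"ts": datetime(2019, 3, 1, 9, 16), "src": "batch-2", "dst": "db-1",
         "l7": "mysql", "process": "python"},
    ]

    def search(flows, since=None, **filters):
        """Yield historical flow records matching a time bound and any stored attribute."""
        for f in flows:
            if since and f["ts"] < since:
                continue
            if all(f.get(k) == v for k, v in filters.items()):
                yield f

    # The application owner asks: "what talked to my database over MySQL?"
    for f in search(flows, l7="mysql", dst="db-1"):
        print(f["ts"], f["src"], f["process"])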

So, when starting to deploy a segmentation solution, make sure you prepare an on-boarding package for the teams you need to cooperate with, including ways they can leverage the product, to expedite the adoption process and meet the project’s deadlines. Equally important, make sure the product you choose can actually cater to those use cases; many of the products on the market today miss that important point.

Learn more on our micro-segmentation hub or read about GuardiCore Centra for best practices.

Read part one of my insights from implementing micro-segmentation.

Do You Have an Effective Security Incident Response Plan? – Assess Your Readiness

The Ponemon Institute has found that the survival rate for businesses without a security incident response plan is just 10%. Enterprises will often focus on creating a strong security posture to detect and thwart attackers, but fail to detail what to do if and when a breach actually occurs. That’s not unusual; it can feel defeatist to prepare for the worst. However, with new attacks being discovered all the time, and increasingly connected networks putting us all at risk, an incident response plan is essential.

1. Understanding the Consequences of Ignoring a Security Incident Response Plan

The first stage in your security incident response strategy is recognizing the ramifications of an attack. From the obvious problems, such as asset loss and data breach, to reputational damage, compliance failures, and public-image breakdown, it’s in your company’s best interests to be fully prepared. Detailing these threats in writing can help your staff focus on maintaining a strong security posture to prevent attacks, and encourage everyone to work together with a mutual understanding of what’s at risk if the worst happens.

2. Assigning Roles Before an Emergency

Especially in large organizations, it can be hard to keep everyone in the loop during a crisis. Identifying the core stakeholders for a security incident before a breach occurs is therefore essential. Here are some key personnel who need to be detailed in your security incident response plan. In some cases they will be obvious; in others you may need to choose staff to take on these responsibilities in your cyber-security incident response team.

  • Incident response managers. It’s worth having at least two members of staff on hand who can oversee and prioritize the incident response plan, communicating information and tasks throughout the business.
  • Security analysts. These staff maintain the investigation, support the managers in following the plan, and filter out false positives. They may also alert others to potential attacks. Ensure they are given the right tools to manage their role effectively.
  • Threat researchers. These personnel are the first port of call for contextual information around a threat. Using the web, as well as other threat intelligence, they can build and maintain an internal intelligence database.
  • Key internal stakeholders. Who needs to be kept in the loop when a threat occurs? These range from board-level personnel who may need to sign off on your actions or give the go-ahead for your response plan, to your CISO, or a human resources representative if human error is involved.
  • Third-party organizations, such as legal counsel, law enforcement, forensics experts or breach remediation companies.

3. Create a High-Level Document Outlining the Security Incident Response Procedure

Many organizations have multiple playbooks with granular detail on the technical side of an attack, to help IT manage and contain a breach. However, if you’ve ever experienced a security incident, you know that IT is far from the only department affected by an attack. Your incident response plan needs to be easily communicated and understood by C-suite employees, Human Resources, Vendor Management, and all other lines-of-business stakeholders, including global offices or teams in the field. As regulation increasingly dictates that customers be kept informed when their data is at risk, you may even need customer experience managers who can relay your position.

Some of the best security incident response plans are one or two pages long, giving a high-level overview of how to manage the consequences of an incident. While playbooks might hold specific information for targeting a type of attack, such as ransomware, your incident response plan should be written so that anyone can read and understand it in a moment of crisis.

4. Outline Response Priorities

Not every key stakeholder will have the same priorities when an attack hits, and not all priorities can be accommodated. For example, your board might want operations up and running as quickly as possible, while legal counsel may advise staying offline until vendors have been notified or customers contacted. Without a clear outline of whose priorities take precedence, existing relationships can dictate the procedure followed after a breach, deferring to tribal knowledge rather than smart decision making.

Assessing the scale of an attack and making quick decisions about revenue versus security, for example, should not be done in the moment, or by whoever has the ear of the CISO that day. While you’re building your incident response plan, think about who should have autonomy over decisions that manage risk, and engage them in creating priorities based on levels of threat.

Detailed performance objectives can help here. In the event of a customer data breach, your security team might be tasked with finding out what has been exposed and how many customers are affected within a given amount of time. Making smart decisions about the action needed before a problem becomes a reality means all relevant teams can hit the ground running.

5. Simulate Breaches to Troubleshoot in a Safe Environment

Having an incident response plan is not enough in and of itself. Without testing and simulation, there is no way to recognize gaps in protocol or resources, or to uncover changes in third-party procedure. Regular simulations can ensure that your security incident response strategy remains up to date and nothing falls through the cracks. This can include finding replacements for staff who took on security roles and have now left the company, or for external vendors with lapsed service agreements. It can also help you keep up with changes in regulation, and keep new staff informed of the process in case of a breach.

A simulation can be as in-depth as you like, ranging from tabletop exercises to injecting your system with known, containable malware, but a few basics to cover include:

  • Going over the lines of communication from detection to resolution
  • Understanding who is authorized to make decisions on security and risk
  • Confirming you have the third-party services in place you need to control a breach
  • Confirming who needs to be contacted in case of a breach for continued regulatory compliance and operations

The more you make simulation and testing part of your usual security posture, the more likely the response will be second nature for the relevant stakeholders when an incident is no longer theoretical.

6. Identify the Scope of a Breach

Many companies act too quickly when they see a threat, and failing to recognize the size of a breach can cause more problems in the long run. Finding one point of entry, for example, does not mean you’ve identified every endpoint that has been compromised. Acting as if you have found patient zero when it’s actually patient 10 or 15 can slow down overall recovery time. Modern attacks are stealthy and subtle, and may have caused more damage than you first assumed.

The best security solutions will intercept suspicious activity on threat detection and, using dynamic deception, reroute it to where it can do no harm. The full extent of the breach can then be investigated and contained in real time, giving your security team an accurate, dynamic map of your entire data center and network. An automatically generated report shows the deception incidents, including the information you need to investigate the breach. What passwords were used, and where did the attacker gain entry? Were malicious binaries used, or suspicious C&C servers? With this level of detail, your security team can start building a clear picture of root cause.
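
As a rough illustration (the field names and values are invented, not an actual report schema), the investigative details above map naturally onto a structured incident record that analysts can query and summarize:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DeceptionIncident:
        entry_point: str                                   # where the attacker got in
        passwords_tried: List[str] = field(default_factory=list)
        malicious_binaries: List[str] = field(default_factory=list)
        c2_servers: List[str] = field(default_factory=list)

        def summary(self) -> str:
            return (f"entry via {self.entry_point}; "
                    f"{len(self.passwords_tried)} credentials tried; "
                    f"{len(self.malicious_binaries)} binaries dropped; "
                    f"C&C: {', '.join(self.c2_servers) or 'none observed'}")

    incident = DeceptionIncident(
        entry_point="rdp://pos-terminal-12",               # illustrative values only
        passwords_tried=["admin", "P0S123"],
        malicious_binaries=["mem_scraper.exe"],
        c2_servers=["203.0.113.7:443"],
    )
    print(incident.summary())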

Containment of this kind also gives you more time to understand what you’re dealing with in a safe environment. By rerouting an attacker using dynamic deception, you can isolate them safely, and monitor and learn from their activities rather than frighten them away by revealing that you know they’ve gained entry. In this way, you take back the upper hand, responding to the attacker’s behavior without going into crisis mode, calmly following your incident response plan priorities, risk free.

7. Limit Dwell Time

Having this level of granular visibility addresses the next part of your incident response plan: limiting the amount of time attackers are on your network. The SANS Institute found that a shocking 50% of organizations didn’t notice a breach for more than 48 hours, while 7% had no idea how long an attacker had been on their network, even after the fact. The longer an attack continues without being stopped, the more damage can be done, so having a plan for limiting dwell time is essential.

Your security solution should be able to limit dwell time by providing application-layer visibility. This uncovers and tracks process-level activity (not just the transport layer) across applications in real time, which can then be automatically correlated with network events and context, giving you reports on suspected incidents and any anomalies detected across all workloads. With this, even new attack vectors are isolated in real time. With nowhere for attackers to hide, dwell time is minimized at a policy level.
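
A simplified sketch of that correlation, under the assumption of a learned per-workload process profile (all names invented): a network event opened by an unknown process, or one using an unexpected port, is flagged immediately rather than blending into normal traffic.

    # Per-workload profile of known processes and the ports they normally use.
    known_processes = {
        ("web-1", "nginx"): {80, 443, 8080},
        ("web-1", "java"): {3306},
    }

    network_events = [
        {"src": "web-1", "process": "java", "dst_port": 3306},  # expected
        {"src": "web-1", "process": "perl", "dst_port": 4444},  # unknown process, odd port
    ]

    def suspicious(event):
        allowed_ports = known_processes.get((event["src"], event["process"]))
        return allowed_ports is None or event["dst_port"] not in allowed_ports

    for e in network_events:
        if suspicious(e):
            print("ALERT: unexplained connection", e)  # candidate for real-time isolation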

8. Including Recovery Plans

The clearest part of your security incident response plan should outline what happens when a breach has been confirmed. Detail the processes that are automated so that all key stakeholders understand what has already been put into place.

Does your security solution allow Indicators of Compromise (IOCs) to be automatically exported to your SIEM or security gateways to speed up incident response? Can you update your micro-segmentation policies quickly and seamlessly in response to traffic violations? Different environments may need different automated procedures. For example, stopping the spread of damage from VMs or containers could mean a detected IOC triggers halting or disconnecting the service entirely. The best solutions will provide an integrated platform that shows the full picture from both a security and an infrastructure point of view.
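
For illustration, a minimal sketch of automated IOC hand-off: on a confirmed incident, the indicators are serialized and posted to a SIEM ingestion endpoint. The URL and payload schema here are placeholders, not any particular SIEM vendor's API.

    import json
    import urllib.request

    def export_iocs(incident_id, iocs, siem_url="https://siem.example.com/ingest"):
        """POST the indicators for an incident to a (placeholder) SIEM endpoint."""
        payload = json.dumps({"incident": incident_id, "iocs": iocs}).encode()
        req = urllib.request.Request(
            siem_url, data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    iocs = [
        {"type": "ipv4", "value": "203.0.113.7"},            # suspected C&C server
        {"type": "sha256", "value": "e3b0c44298fc1c14..."},  # truncated placeholder hash
    ]
    # export_iocs("inc-42", iocs)  # left commented out: the endpoint is illustrative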

Recovery plans might need their own smaller incident response plans or playbooks. A DDoS attack is different from an injection of malware. An external bad actor is a different adversary from an insider with high-level access who has compromised the network. Your company might have one set of response plans for a breach of customer data, another for intellectual property, and yet another for asset recovery. Make sure the right documentation is ready for any event, and the right personnel are equipped with a plan of action.

9. What Lessons Can You Add to Your Security Incident Response Plan?

A smart incident response plan lets you use a breach to prepare for the future. Once the attack is contained and eradicated, complete any incident documentation required for regulation or internal records. You can also perform your own internal analysis to learn from the attack and your company’s response to it. With the lessons you’ve learned, update your security incident response plan: what can you improve for next time, and what gaps, if any, did you uncover?

A strong security incident response plan is a must-have in today’s increasingly interconnected IT environment. If and when a breach occurs, your business will be asked how it prepared for the incident. The answer could be used to establish regulatory compliance, to assess the attack, and even to assign blame. A detailed account of how your company prepares for a threat, responds in the moment, and learns from the experience puts you one step ahead, and ready for anything.