Implementing Micro-Segmentation: Insights from the Trenches, Part One

Recently I have been personally engaged in implementing micro-segmentation for our key customers, including a top US retail brand, a major Wall Street bank, a top international pharmaceutical company, and a leading European telco. Spending significant time with each of these customers, running weekly project calls, workshops, planning meetings, and more, has given me a unique glimpse into the reality of what it means to implement such projects in a major enterprise – a point of view that is not always available to a vendor.

I would like to share some observations about how these projects roll out, and I hope you will find these insights useful, especially if you are planning to implement micro-segmentation in your own network. Each blog in this short series will focus on one insight I’ve gathered from my time both in the boardroom and in the trenches, with practical pieces to help you improve your understanding and implementation of any current or upcoming security project.

Application segmentation is not necessarily the short-term objective

If you look at the online material on micro-segmentation, vendors, analysts, and experts all talk about breaking your data center into applications and those applications into tiers, and limiting access among them to only what the applications need.

I was surprised to discover that many customers look at the problem from a slightly different angle. For them, segmentation is a risk-reduction project driven by regulations, internal auditing requirements, or simply a desire to reduce the attack surface. These drivers do not always translate into segmenting applications from each other; when segmentation is a priority, it is usually not the primary objective but a means to an end, and in the short term it is not necessarily a comprehensive process. Let me give you a couple of examples:

  1. A major Wall Street bank was required by its auditor to validate that admin access to servers happens only through a jumpbox or a CyberArk-like solution. In practice, the bank wanted to set a policy that says: “Windows machines can only be accessed over RDP from this set of machines – all other RDP connections to them are not allowed. Linux machines can only be accessed over SSH from this set of machines – all other SSH connections are not allowed.” There is no need to explain the risk-reduction contribution of such a simple policy, but it has nothing to do with segmenting your data center by applications. Theoretically, one could achieve this goal as a side effect of complete data-center segmentation, but that would require significantly more effort, and the result would be somewhat implicit and harder to demonstrate to the auditor. (A minimal sketch of this policy in code appears after this list.)
  2. A European bank needed to implement a simple risk-reduction scheme: mark each server in its data center as “accessible from ATMs,” “accessible from printer area,” “accessible from user area,” or “not accessible from non-server area,” with very simple, well-defined rules for each group. Again, the attack-surface reduction is quite simple and, in their case, very significant, but it has little to do with textbook application segmentation. Here too you could theoretically achieve the same goal by implementing classic micro-segmentation, but Confucius taught us not to kill a mosquito with a cannon. Most of these organizations do plan to implement micro-segmentation as the market defines it, but they know it takes time, and they want to hit the low-hanging fruit in risk reduction early while implementing this crucial security project incrementally, in a way that makes the most sense for their business.
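To make the first example concrete, here is a minimal sketch of what such a jumpbox-only admin-access policy reduces to. The addresses, OS labels, and helper function are hypothetical assumptions for illustration; real enforcement would live in a firewall or segmentation platform, not in application code:

```python
# Minimal sketch of the jumpbox-only admin-access policy described above.
# All names and addresses are hypothetical, for illustration only.

JUMPBOX_IPS = {"10.0.1.10", "10.0.1.11"}  # the approved admin jump hosts

ADMIN_PORTS = {
    "windows": 3389,  # RDP
    "linux": 22,      # SSH
}

def is_allowed(src_ip: str, dst_os: str, dst_port: int) -> bool:
    """Allow RDP/SSH only when the connection originates from a jumpbox."""
    if dst_port != ADMIN_PORTS.get(dst_os):
        return True  # not an admin connection; this policy does not apply
    return src_ip in JUMPBOX_IPS

# An RDP attempt to a Windows server from an ordinary workstation is denied:
assert is_allowed("10.0.5.23", "windows", 3389) is False
# The same attempt from an approved jumpbox is allowed:
assert is_allowed("10.0.1.10", "windows", 3389) is True
```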

So if you are looking to implement a micro-segmentation project – understand your goals, drivers, and motivations, and remember that this is a risk-reduction project after all. As they say, there are many ways to peel an orange, and some are simpler, faster, and more efficient than others. But the sooner you get started, the sooner you can enjoy the taste of your success. In any case, when choosing technology to help you with a segmentation project, make sure you choose one flexible enough to support textbook micro-segmentation but also to address the numerous other use cases you might not even be aware of at the initial stages.

Stay tuned to our blog to catch more of my upcoming insights from the trenches.

Learn more about choosing a micro-segmentation solution.

Using Dynamic Honeypot Cyber Security: What Do I Need to Know?

Honeypots are systems on your network that attract and reroute hackers away from your servers, trapping them to identify malicious activities before they can cause harm. The perfect decoy, they often contain false information without providing access to any live data. Honeypots are a valuable tool for uncovering information about your adversaries in a no-risk environment. A more sophisticated honeypot can even divert attackers in real time as they attempt to access your network.

How Does Honeypot Security Work?

The design of the honeypot security system is extremely important. The system should be built to look as similar as possible to your real servers and databases, both internally and externally. While it looks like your network, the honeypot is a replica, entirely separate from your real servers. Throughout an attack, your IT team can monitor the honeypot closely.

A honeypot is built to trick attackers into breaking into that system instead of your real ones. The value of a honeypot is in being hacked, which means its security controls need to be weaker than those on your real servers. The balance is essential: too strong, and attackers won’t be able to make a move; too weak, and they may suspect a trap.

Your security team will need to decide whether to deploy a low-interaction honeypot or a high-interaction honeypot. A low-interaction solution is a less convincing decoy but easier to create and manage, while a high-interaction system provides a more faithful replica of your network but involves more effort for IT. This could include tools for tricking returning attackers or separating external and internal deception.
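To illustrate what “low-interaction” means in practice, here is a minimal sketch of a decoy listener that presents a service banner and logs every connection attempt. The port, banner, and logging are illustrative assumptions, not a production honeypot:

```python
# A minimal low-interaction honeypot sketch: listen on one port, present a
# fake service banner, and log every connection attempt.
import socket
import datetime

HOST, PORT = "0.0.0.0", 2222           # decoy SSH-like port (assumption)
BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"    # fake service banner (assumption)

def run_honeypot() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                # A honeypot should see no legitimate traffic, so every
                # connection is worth logging and alerting on.
                print(f"{datetime.datetime.utcnow().isoformat()} "
                      f"connection from {addr[0]}:{addr[1]}")
                conn.sendall(BANNER)
                try:
                    data = conn.recv(1024)  # capture the attacker's first move
                    print(f"  first bytes: {data!r}")
                except OSError:
                    pass

if __name__ == "__main__":
    run_honeypot()
```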

What Can a Honeypot Cyber Security System Do?

Your honeypot cyber security system should be able to simulate multiple virtual hosts at the same time, passively fingerprint attackers, simulate numerous TCP/IP stacks and network topologies, and set up HTTP and FTP servers as well as virtual IP addresses with UNIX applications.

The type of information you glean depends on the kind of honeypot security you have deployed. There are two main kinds:

Research Honeypot: This type of honeypot security is usually favored by educational institutions, researchers, and non-profits. By uncovering the motives and behavior of hackers, research teams such as Guardicore Labs can learn the tactics the hacking community is using. They can then spread awareness and new intelligence to prevent threats, promoting innovation and collaboration within the cyber security community.

Production Honeypot: More often used by enterprises and organizations, production honeypots are deployed to mitigate the risk of an attacker already on the network, and to learn more about the motives and methods of bad actors targeting their data and systems.

These honeypots have one particular element in common: the drive to get into the mind of the attacker and recognize the way they move and respond. By attracting and tracking adversaries, and wasting their time, you can reinforce your security posture with accurate information.

What are the Benefits of Honeypot Security?

Unlike a firewall, a honeypot is designed to identify both internal and external threats. While a firewall can prevent attackers from getting in, a honeypot can detect internal threats and become a second line of defense when a firewall is breached. A honeypot cyber security method therefore gives you greater intelligence and threat detection than a firewall alone, and an added layer of security against malware and database attacks.

As honeypots are not supposed to receive any traffic, any traffic they do see is malicious by definition. This makes detection unusually simple, with no anomalies to question before you start learning about possible attacks. The system produces small datasets that are entirely high-value, as your IT and analytics teams do not have to filter out legitimate traffic.

Honeypot security also puts you ahead of the game. While your attackers believe they have made their way into your network, you have diverted their attacks to a system with no value. Your security team is given early warning against new and emerging attacks, even those that do not have known attack signatures.

Making Valuable Use of Honeypot Security

More recently, sophisticated honeypots support the active prevention of attacks. A comprehensive honeypot security solution can redirect opportunistic hackers from real servers to your honeypot, learning about their intentions and following their moves, before ending the incident internally with no harm done.

Using cutting-edge security technology, a honeypot can divert a hacker in real-time, re-routing them away from your actual systems and to a virtualized environment where they can do no harm. Dynamic deception methods generate live environments that adapt to the attackers, identifying their methods without disrupting your data center performance.

You can then use the information you receive from the zero-risk attack to build policies against malicious domains, IP addresses and file hashes within traffic flows, creating an environment of comprehensive breach detection.

It’s important to remember that a high-interaction honeypot without endpoint security could be used as a launch pad for attacks against legitimate data and truly valuable assets. Honeypots are intended to invite attackers, and therefore add risk and complexity to your IT ecosystem. As with any tool, honeypots work best when they are integrated as part of a comprehensive solution for a strong security posture. The best cyber-security choice for your organization will incorporate honeypots as a detection and prevention tool, while utilizing additional powerful security measures to protect your live production environment.

Virtualization & Cloud Review comments that while honeypots and other methods of intrusion detection “are usable in a classical environment, they really shine in the kinds of highly automated and orchestrated environments that make use of microsegmentation.”

Honeypot security systems can add a valuable layer of security to your IT systems and give you an incomparable chance to observe hackers in action, and learn from their behavior. You can gather valuable insight on new attack vectors, security weaknesses and malware, using this to better train your staff and defend your network. With the help of micro-segmentation, your honeypot security strategy does not need to leave you open to risk, and can support an advanced security posture for your entire organization.

What is File Integrity Monitoring and Why Do I Need It?

File integrity monitoring (FIM) is an internal control that examines files to see the way that they change, establishing the source, details and reasons behind the modifications made and alerting security if the changes are unauthorized. It is an essential component of a healthy security posture. File integrity monitoring is also a requirement for compliance, including for PCI-DSS and HIPAA, and it is one of the foremost tools used for breach and malware detection. Networks and configurations are becoming increasingly complex, and file integrity monitoring provides an increased level of confidence that no unauthorized changes are slipping through the cracks.

How Does File Integrity Monitoring Work?

In a dynamic, agile environment, you can expect continuous changes to files and configurations. The trick is to distinguish between authorized changes due to security updates, communication, or patch management, and problems like configuration errors or malicious activity that need your immediate attention.

File integrity monitoring uses baseline comparison to make this differentiation. One or more file attributes are stored internally as a baseline and then compared periodically whenever the file is checked. Examples of baseline data include user credentials, access rights, creation dates, and last known modification dates. To ensure the data itself has not been tampered with, the best solutions calculate a known cryptographic checksum and compare it against the current state of the file at a later date.
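As a rough illustration of baseline comparison with checksums, the sketch below hashes a hypothetical list of watched files and reports any file whose digest drifts from the stored baseline. Real FIM products track many more attributes (ownership, permissions, timestamps) alongside the hash:

```python
# A minimal sketch of baseline comparison with cryptographic checksums.
# The watched paths are hypothetical, for illustration only.
import hashlib
import os

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]  # illustrative file list

def checksum(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record the known-good digest of each existing watched file."""
    return {p: checksum(p) for p in paths if os.path.exists(p)}

def compare(baseline):
    """Yield files whose current digest no longer matches the baseline."""
    for path, digest in baseline.items():
        current = checksum(path) if os.path.exists(path) else None
        if current != digest:
            yield path, digest, current

baseline = build_baseline(WATCHED)
# ... later, on a schedule ...
for path, was, now in compare(baseline):
    print(f"ALERT: {path} changed (baseline {was[:12]}, "
          f"now {now[:12] if now else 'missing'})")
```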

File Integrity Monitoring: Essential for Breach Detection and Prevention

File integrity monitoring is a prerequisite for many compliance regulations. PCI DSS, for example, mentions this foundational control in two sections of its policy, and for GDPR this kind of monitoring can support five separate articles on the checklist. From HIPAA for health organizations to NERC-CIP for utility providers, file integrity monitoring is explicitly mentioned as best practice for preventing unauthorized access or changes to data and files.

Outside of regulatory assessment, file integrity monitoring can alert you to configuration problems like storage errors or software bugs, but it is most widely used as a powerful tool against malware.

There are two main ways that file integrity monitoring makes a difference. First, once attackers have gained entry to your network, they often make changes to file contents to avoid being detected. By detecting every change happening on your network and raising contextual alerts on unauthorized policy violations, file integrity monitoring stops attackers in their tracks.
Second, the monitoring tools give you the visibility to see exactly what changes have been made, by whom, and when. This is the quickest way to detect and limit a breach in real time, getting the information in front of the right personnel through alerts and notifications before any lateral moves can be made or a full-blown attack is launched.

Incorporating file integrity monitoring as part of a strong security solution can give you even more benefits. Micro-segmentation, for example, is an essential tool that goes hand in hand with it. File integrity monitoring gives you valuable information about where an attack is coming from, while micro-segmentation lets you reduce the attack surface within your data centers altogether, so that even if a breach occurs, no lateral movement is possible. You can create your own strict access and communication policies, making it easier to use your file integrity monitoring policies to separate authorized changes from unauthorized ones. Because micro-segmentation works in hybrid environments, ‘file’ monitoring becomes the monitoring of your entire infrastructure. This extended perimeter protection can cover anything from servers, workstations and network devices to VMware, containers, routers and switches, directories, IoT devices and more.

Features to Look for in a File Integrity Monitoring Solution

Of course, file integrity monitoring can vary between security providers. Your choice needs to be integrated as part of a full-service platform that can help mitigate the breach when it’s detected, rather than just hand off the responsibility to another security product down the line.

Making sure you find that ideal security solution involves checking the features on offer. There are some must-haves, which include real-time information so you always have an accurate view of your IT environment, and multi-platform availability, since most IT environments now span varied platforms, including different Windows versions and Linux distributions.

Another area to consider is how the process of file integrity monitoring seamlessly integrates with other areas of your security posture. One example would be making sure you can compare your change data with other event and log data for easy reporting, allowing you to quickly identify causes and correlative information.

If you’re using a micro-segmentation approach, creating rules is something you’re used to already. You want to look for a file integrity monitoring solution that makes applying rules and configuring them as simple as possible. Preferably, you would have a template that allows you to define the files and services that you want monitored, and which assets or asset labels contain those files. You can then configure how often you want these monitored, and be alerted of incidents as they occur, in real-time.
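As a sketch of what such a template might look like, the structure below pairs watched paths with asset labels and a check interval. The schema is an assumption for illustration only, not any vendor's actual rule format:

```python
# A hypothetical file integrity monitoring template: which files to watch,
# which labeled assets they live on, and how often to check.
# The schema and values are illustrative assumptions.
FIM_TEMPLATE = {
    "name": "payment-servers-core-files",
    "asset_labels": ["env:production", "app:payments"],  # where the rule applies
    "paths": [
        "/etc/passwd",
        "/opt/payments/config/*.yml",
    ],
    "interval_minutes": 15,          # how often to re-hash and compare
    "alert": {
        "email": "secops@example.com",
        "siem": True,                # also forward events to the SIEM
    },
}
```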

Lastly, the alerts and notifications themselves will differ between solutions. Your ideal solution is one that provides high-level reporting of all changes throughout the network, then allows you to drill down to more granular information for each file change, as well as sending information to your email or SIEM (security information and event management) system for immediate action.

File Integrity Monitoring with Micro-Segmentation – A Breach Detection Must Have

It’s clear that file integrity monitoring is essential for breach detection, giving you the granular, real-time information on every change to your files, including the who, what, where and when. Alongside a powerful micro-segmentation strategy, you can detect breaches faster, limit the attack area ahead of time, and extend your perimeter to safeguard hybrid and multi-platform environments, giving you the tools to stay one step ahead at all times.

Application Segmentation

Business applications are the principal target of attackers seeking access to an organization’s most sensitive information, and as application deployment approaches become more dynamic and extend to external cloud platforms, the number of possible attack vectors is multiplying. This is driving a shift from traditional perimeter security to an increased focus on detection and prevention of lateral movement within both on-premises and cloud infrastructure.

Most security pros and industry experts agree that greater segmentation is the best step that an organization can take to stop lateral movement, but it can be challenging to parse the various available segmentation techniques. For example, IT pros and security vendors alike often use the terms application segmentation and micro-segmentation interchangeably. There is, in fact, some overlap between these two techniques, but selecting the right approach for a specific set of security and compliance needs requires a clear understanding of the different ways in which segmentation can be performed.

What is Application Segmentation?

Application segmentation is the practice of implementing Layer 4 controls that can both isolate an application’s distinct service tiers from one another and create a security boundary around the complete application to reduce its exposure to attacks originating from other applications.

This serves two purposes:

  • Enforcing clear separation between the tiers of an individual application, allowing only the minimum level of access to each tier required to deliver the application functionality
  • Isolating a complete application from unrelated applications and other resources that could be possible sources of lateral movement attempts if compromised

Intra-Application Segmentation

It is a longstanding IT practice to separate business applications into tiers to improve both scalability and security. For example, a typical business application may include a set of load balancers that field inbound connections, one or more application servers that deliver core application functionality, and one or more database instances that store underlying application data.

Each tier has its own distinct security profile. For example, access to the load balancer is broad, but its capabilities are narrowly limited to directing traffic. In contrast, a database may contain large amounts of sensitive data, so access should be tightly limited.

This is where intra-application segmentation comes into play, as security teams may, for example, limit access to the database to specific IP addresses (e.g., the application server) over specific ports.
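A minimal sketch of such an intra-application rule follows. The addresses and port are hypothetical; the point is the default-deny shape of the policy:

```python
# A minimal sketch of the intra-application rule described above: only the
# application servers may reach the database tier, and only on its service
# port. Addresses and ports are illustrative assumptions.
DB_RULES = [
    # (allowed source IP, destination port)
    ("10.0.2.11", 5432),   # app-server-1 -> database
    ("10.0.2.12", 5432),   # app-server-2 -> database
]

def db_tier_allows(src_ip: str, dst_port: int) -> bool:
    """Default-deny: a flow must match an explicit rule to be allowed."""
    return (src_ip, dst_port) in DB_RULES

assert db_tier_allows("10.0.2.11", 5432)        # app server, service port: allowed
assert not db_tier_allows("10.0.9.99", 5432)    # unknown host: blocked
assert not db_tier_allows("10.0.2.11", 22)      # app server, wrong port: blocked
```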

Application Isolation

The second important role that application segmentation can play is isolating an entire application cluster, such as the example above, from other applications and IT resources. There are a number of reasons that IT teams may wish to achieve this level of isolation.

One common reason is to reduce the potential for unauthorized lateral movement within the environment. Even with strong intra-application isolation between tiers in place, an attacker who compromises a resource in another application cluster may be able to exploit vulnerabilities or misconfigurations to move laterally to another cluster. Implementing a security boundary around each sensitive application cluster reduces this risk.

There may also be business or compliance reasons for isolating applications. For example, compliance with industry-specific regulations, such as HIPAA, PCI-DSS, and the SWIFT security standards, is simplified by establishing clear isolation of in-scope IT resources. This is also true for jurisdictional regulations like the EU General Data Protection Regulation (GDPR).

Application Segmentation vs. Micro-Segmentation

The emergence of micro-segmentation as a best practice has created some confusion for IT pros evaluating possible internal security techniques. Micro-segmentation is, in fact, a method of implementing application segmentation. However, micro-segmentation capabilities significantly improve an organization’s ability to perform application segmentation through greater visibility and granularity.

Traditional application segmentation approaches have relied primarily on Layer 4 controls. This does have value, but firewalls and other systems used to implement such controls do not give security teams a clear picture of the impact of these controls. As a result, they are time-consuming to manage and susceptible to configuration errors, particularly as environments evolve to include cloud services and new deployment models like containers.

Moreover, Layer 4 controls alone are very coarse. Sophisticated attackers are skilled at spoofing IP addresses and piggybacking on allowed ports to circumvent Layer 4 controls.

Micro-segmentation improves upon traditional application segmentation techniques in two ways. The first is giving security teams a visual representation of the environment and the policies protecting it. Effective visualization makes it possible for security teams to better understand the policies they need and identify whether gaps in policy coverage exist. This level of visibility rarely exists when organizations are attempting to perform application segmentation using a mix of existing network-centric technologies.

A second major advantage that micro-segmentation offers is greater application awareness. Leading micro-segmentation technologies can display and control activity at Layer 7 in addition to Layer 4. An application-centric micro-segmentation approach can do more than simply create a coarse boundary between application tiers or around an application cluster. It allows specific processes – and their associated data flows – to be viewed in an understandable way and serve as the basis for segmentation policies. Rather than relying solely on IP addresses and ports, micro-segmentation rules can white-list very specific processes and flows while blocking everything else by default. This enables far superior application isolation than traditional application segmentation techniques.
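To show the difference this granularity makes, the sketch below extends the earlier Layer 4 idea with process awareness. The tuple shape and process names are illustrative assumptions, not any product's policy syntax:

```python
# Extending the Layer 4 sketch with process awareness: rules now name the
# source and destination processes, not just addresses and ports.
# Process names, tier labels, and the tuple shape are assumptions.
L7_WHITELIST = [
    # (src process, src tier, dst process, dst tier, dst port)
    ("java",    "app-tier", "postgres", "db-tier",  5432),
    ("haproxy", "lb-tier",  "java",     "app-tier", 8080),
]

def flow_allowed(src_proc, src_tier, dst_proc, dst_tier, dst_port) -> bool:
    """White-list specific process-to-process flows; block everything else."""
    return (src_proc, src_tier, dst_proc, dst_tier, dst_port) in L7_WHITELIST

# A connection from the right tier but the wrong process is still blocked,
# which is exactly what defeats port piggybacking:
assert not flow_allowed("nc", "app-tier", "postgres", "db-tier", 5432)
assert flow_allowed("java", "app-tier", "postgres", "db-tier", 5432)
```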

Balancing Application Segmentation with Business Agility

Application segmentation is more important than ever as dynamic hybrid cloud environments and fast-paced DevOps deployment models become the norm. The business agility that these advances enable is highly valuable to the organizations that adopt them. However, heterogeneous environments that are constantly evolving are also more challenging to secure. Security teams can easily find themselves facing a lose/lose proposition of either slowing down innovation or overlooking new security risks.

The granular visibility and control that application-centric micro-segmentation offers makes it possible to proactively secure new or updated applications at the time of deployment without added complexity or delay. It also ensures that security teams can quickly detect any abnormal application activity that slips through the cracks and respond rapidly to new security risks before they can be exploited.

For more information on micro-segmentation, visit our Micro-Segmentation Hub.

The Average Cost of a Data Breach, and how Micro-Segmentation can Make a Difference

In the US, the financial cost of a data breach is rising year over year. IBM’s Cost of a Data Breach Report is conducted independently each year by the Ponemon Institute. This year, the report included data from more than 15 regions across 17 industries, with interviews of IT, compliance, and data protection experts from 477 companies. As a result, the estimated average cost of a data breach is more accurate than ever.

Crunching the Numbers: The Average Cost of a Data Breach

According to the study, the average cost of a data breach in 2018 is $3.86 million, a 6.4% increase over last year’s report.

While the risk of a data breach is around 1 in 4, not all breaches are created equal. The more records that are exposed, the more expensive and devastating a breach will be. A single stolen or exposed data record costs a company an average of $148, while 1 million records, considered a Mega Breach, will cost $40 million. Breaches of 50 million records may be reserved for the largest enterprises, but at that scale the financial cost rises to $350 million.
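A quick back-of-the-envelope calculation with the report's own figures shows that breach cost does not scale linearly with record count, which is why Mega Breach figures are reported separately:

```python
# Working through the figures quoted above: a naive per-record extrapolation
# overshoots the reported Mega Breach cost considerably.
PER_RECORD = 148            # average cost of one exposed record (USD)
MEGA_1M = 40_000_000        # reported cost of a 1M-record breach
MEGA_50M = 350_000_000      # reported cost of a 50M-record breach

print(1_000_000 * PER_RECORD)    # 148,000,000 -> naive linear estimate
print(MEGA_1M)                   # 40,000,000  -> actual reported figure
print(MEGA_50M / 50_000_000)     # 7.0         -> per-record cost at mega scale
```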

Beyond a Ransom: The Hidden Costs of a Data Breach

Although many businesses worry about the rise of ransomware, the cost of a data breach goes far beyond any malicious demand from a hacker. The true cost can be broken down into dozens of areas, from security upgrades in response to the attack to a drop in your stock price when word of the breach gets out. Research by Comparitech found that companies tend to see a stock price slide of 42% following a breach. Other costly elements of a data breach include incident investigation, legal and regulatory activity, and even updating customers. These all contribute to the escalating cost when you fail to adequately protect your company against a data breach.

The Ponemon study found that the largest cost comes from customer churn. The US sees the highest cost in the world in terms of lost business due to a data breach, more than two times the average figure, at $4.2 million per incident. Most analysts put this discrepancy down to the nature of commerce in the United States. In the US, there is far more competition and choice, and customer loyalty is both harder to hold onto and almost impossible to retrieve once trust is lost.

Customers also have more awareness of data breaches in the US, as laws dictate they must be informed of any issues as they are uncovered. This kind of reputational damage is devastating, especially in the case of a Mega Breach. In fact, a third of the cost of Mega Breaches can be attributed to lost business.

Of course, there is also the fear that even if you manage to recover from a data breach, the worst is not over. The IBM study found that there is a 27.9% chance of another breach in the two years following an attack, making your company extremely vulnerable unless you can make considerable changes, and fast.

Preparing Your Business for the Average Cost of a Data Breach

The numbers don’t lie. The speed and impact of data breaches is something to which every company, no matter the size, should be paying attention. There are definitely ways to protect your business and to position yourself responsibly for the worst case scenarios.

According to Verizon, 81% of all breaches exploit identity, often through weak passwords or human error. Malware can piggyback onto a legitimate user to get behind a physical firewall, which is why most IT professionals agree that even next-gen firewalls are insufficient. To limit the potential repercussions of this, all businesses need to be employing a zero-trust model.

With micro-segmentation, perimeters can be created specifically for the protection of sensitive or critical data, ensuring that no network is implicitly trusted. A granular approach limits communications and tags the workloads themselves with labels and restrictions. Containment of attacks is built into your security from the outset, limiting an attacker’s freedom of movement and restricting any ability to move laterally at all. As the financial impact of a data breach rises with the number of data records stolen, this is a significant weapon to have at your disposal.

Rapid Response Can Limit the Cost of Data Breaches

Efficiency in identifying an incident, as well as the speed of the response itself, has a huge impact. Rapid response saves money and proves to your customers that you still deserve their trust. According to the IBM report, the average time it took companies to identify a data breach was 197 days, and even once a breach was detected, the average time to contain it was a further 69 days. A Mega Breach could take an entire year to detect and contain.

With micro-segmentation, the visibility is immediate. All communications are logged, including East-West traffic, across private architecture, cloud-based systems, and hybrid solutions. The best solutions offer alerts and notifications for any unusual behavior, allowing you to stop threats in their tracks before any damage has been done.

The quicker this happens, the less financial damage is done. On average, companies that managed to contain a breach within 30 days saved more than $1 million compared with companies that could not. The larger the breach, the more significant these savings are likely to be.

Ensure You’re Fully Armed Against a Data Breach

The complex nature of most businesses’ IT systems explains the growing threat of cyber-crime and the increasing financial cost of lax security. Traditional security systems are not enough to ensure adequate protection from a data breach, or rapid detection and response if the worst happens.

Micro-segmentation offers granular flexible security that adapts to your exact environment, detecting and limiting the force of an attack, and providing the visibility and response tools you need to keep your customers loyal.

Protecting your Business Against Attack Vectors and the Evolving Threat Landscape

Understanding Attack Vectors

An attack vector is the way that an adversary can gain unauthorized access to your network or devices. Over the years, there have been dozens of different attack vectors, many of which have adapted and evolved over time to cause harm or hold companies hostage. Today, networks and organizations are interconnected using both private and public clouds leaving the door ajar for attack vectors that are more sophisticated than ever. What should smart businesses look out for, and how can they protect themselves?

The Evolution of Cyber Attack Vectors

Traditionally, having hardened perimeter security was enough to protect data centers. Layers of security to detect and prevent a breach coming in or out of data centers meant that you could ward off attack vectors to your infrastructure and hardware, which was almost exclusively on-premises.

The cloud and mobile solutions have changed all of this. The reality for data centers today is keeping data private and secure while running an environment that spans public, private, and hybrid clouds. Companies now use a mix of compute resources: containers, serverless functions, and VMs. However, hackers are not just targeting your compute resources; they are sneaking in via routers, switches, storage controllers, and sensors. From this vantage point, attackers can scale their attack, compromising an entire network through lateral movement and connected devices. The MITRE ATT&CK framework is a great resource for diving deeper into the different initial-access techniques¹.

As the way we access the internet changes, cyber attack vectors adapt their own designs right alongside. Assuming that we are plugging all the holes on the IT side is not enough. The human factor has always been a key vulnerability in the security scheme. It has become more prevalent with the advance in end-user technology in recent years. Smartphones are a good example of this. Mobile attack vectors are not something that any organization had to be aware of a decade ago, and now they are an ever-present reality providing an easy gateway into many organizations.

While most people know not to click on dangerous links that arrive via SMS from unknown numbers, and no longer fall prey to email phishing campaigns like unexpected warnings that your bank password has been changed, new attack vectors come from unexpected places. The recent Man-in-the-Disk attacks on Android devices are something no one could have anticipated. This malware relies on careless third-party use of external storage, which is not protected by Android’s sandbox restrictions². Such careless use of external storage can lead to malicious code injection or the silent installation of unrequested apps on the user’s device. From there, the journey from an attacker’s foothold to deeper data center access is very short.

As technology evolves, there are more ways than ever for bad actors to launch attacks. Smart devices and Cloud-solutions only serve to increase the number of platforms which can be used for malicious intent.

Which Attack Vectors Are the Biggest Threats Today?

Email and phishing schemes have been the attack vectors of choice for a large share of malicious attacks over the past few years. However, as simple attacks become more recognizable, more complex threats are increasingly in vogue. Worryingly, the trend in malware is a movement away from reliance on human error toward clever attack vectors that can strike without any conscious act by the user whatsoever³. Man-in-the-Disk was just one example of this.

Take drive-by downloads: a user only has to visit a compromised website, and malicious code can be injected through their web browser. Once in, it can swiftly move laterally across a network. Mouse-hover hacking is also growing, a technique that launches JavaScript when a user hovers over a link to see where it goes. This has been seen in familiar applications such as PowerPoint, showing that even what users consider ‘safe’ environments can be dangerous. Increasingly sophisticated attack vectors that spread without a user’s knowledge or initial action are only going to become more common over time. If these tactics are leveraged against a user with administrator access to your data centers, the results could be catastrophic.

Administrator access could be the weak link when it comes to keeping your data centers safe overall. By obtaining admin privileges, adversaries gain access to the most valuable information you store and can therefore cause the most harm. It’s important to think about the way your business works in a crisis when you’re planning preventive security measures. Used in an emergency, local authentication options are often not logged in the same way as your admins’ usual activity, and the credentials may even be shared across workloads and hosts for ease of use.

As well as smarter attack vectors, the growth in threats such as file-less attacks shows that attackers are getting better at covering their tracks. 77% of cyber-crime in the US last year used a form of file-less attack⁴. Research shows that this type of malware is ten times as likely to succeed as traditional file-based attacks, helping attackers stay well beneath the radar.

AI is also an area that is likely to be compromised in the near future, with many companies deploying chatbots and machine learning tools as the customer-facing representatives of their websites and apps. As virtual assistants are built by humans, they are subject to the same gaps as human knowledge. Studies are beginning to show that AI has problems with hallucinations and recognition⁵. Let loose on customer data and processes, it’s easy to see how advanced malware may slip through the cracks.

More than ever, in preparation for the next stage of intelligent malware, companies need to secure their data centers effectively against the latest attack vectors.

How Can Businesses Protect Themselves from Cyber Attack Vectors?

Keeping your IT environment safe from the latest attack vectors means being able to detect threats faster, and with better intelligence.

This starts with visibility. Being able to identify application flows across your entire infrastructure gives you granular visibility across your whole IT stack. Dynamic deception tactics automatically trap attackers, even when the end user isn’t aware of what is going on under the surface. Reputation analysis instantly uncovers anything suspicious or out of the ordinary, from unexpected IP addresses and domain names to file hashes within application flows. Even new attack vectors are isolated in real time, with mitigation recommendations so that incident response is streamlined.

Ring-fencing, the separation of one specific application from the rest of the IT landscape, is one way that companies are keeping the latest attack vectors away from their most sensitive data and valuable assets. This and other kinds of micro-segmentation allow your business to truly limit the attack surface of any potential breach.

There are a number of benefits to this. Regardless of operating system limitations, communication policy can be enforced at the Layer 4 transport level as well as the Layer 7 process level. By segmenting your flows according to the principle of least privilege, you ensure that even if a breach occurs it is quickly isolated, and attackers are unable to make lateral moves or scale their intrusion any further. When micro-segmentation is enforced alongside breach detection and threat resolution, even new attack vectors quickly become a known quantity and are unable to pose real danger.

Staying Safe Against Future Cyber-Attack Vectors

The way that data is stored and transferred is dynamic in and of itself. Our methods and processes are always changing as the capabilities of the cloud and the hybrid nature of our IT environments continue to grow. In direct response, attack vectors will never stay the same for long, and hackers will always have new tricks up their sleeve to compromise the latest solutions and catch us unaware. As well as current attack vectors that take advantage of IoT devices and no-fault infiltration, predictions for the future include AI-driven malware and an increase in file-less malware attacks, allowing hackers to hide their activities from detection.

The only solution is true visibility of all your applications and workflows. Using this mapping alongside segmentation policy that controls communication flows can stop attackers in their tracks at the smallest sign of an anomaly. Even against new or unknown attack vectors, these tools enable true threat resolution that can protect your entire infrastructure in real time.


1. https://attack.mitre.org/wiki/Main_Page
2. https://research.checkpoint.com/androids-man-in-the-disk/
3. https://churchm.ag/was-it-human-error/
4. https://www.securityweek.com/fileless-attacks-ten-times-more-likely-succeed-report
5. https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix

GuardiCore’s Journey from Vision to Best-in-Class Micro-Segmentation

Micro-segmentation as we know it today has gone through several stages in the last few years, moving from a rising trend for securing software-defined data centers to a full-blown cyber security technology and a top priority on the agenda of nearly every CISO.

Built on the vision of securing the hybrid cloud and software-defined data centers, we started our journey in 2013, thinking about how to solve what was, in our opinion, a huge challenge in a market that did not yet exist. In this post we’ll share how we created the micro-segmentation solution that is considered the best on the market – from vision to execution.

2015: First steps towards segmentation

Throughout the second half of 2015, we started delivering our micro-segmentation methodology after realizing that understanding how applications communicate inside the cloud was the key to success and, as such, had to be addressed first. “You can’t protect what you can’t see” wasn’t coined by GuardiCore, but we embraced it immediately when we started planning our micro-segmentation solution. We started developing our visibility solution, Reveal: a visual map of all the applications running in the data center, all the way down to the process level. Reveal lets you view applications and the flows they create in real time while also providing historical views. For the first time, admins and security teams could easily discover the running applications, one by one, and then review the relations between application tiers. Early releases supported general data center topologies as well as Docker containers.

2016: Gartner names micro-segmentation a top information security technology

We launched our segmentation solution at RSA Conference 2016 with a big splash. Reveal gained a lot of coverage and was well received by security teams who had lacked proper tools to see the application flows in their data centers. It was one of the hottest security products at RSA 2016, and for good reason!

It is important to note that when micro-segmentation was introduced in Gartner’s Top 10 Technologies for Information Security in June 2016, many security professionals were unaware of the concept. In that report Gartner stated that to prevent attackers from moving “unimpeded laterally to other systems” there was “an emerging requirement for microsegmentation of east/west traffic in enterprise networks”. Enthusiasm was then at its peak: micro-segmentation was widely covered in the media, and conferences dealing with the technology abounded.

2017: Micro-segmentation for early adopters

Micro-segmentation was gaining traction as one of the most effective ways to secure data centers and clouds, but organizations learned the hard way that the path to meaningful micro-segmentation was full of challenges. Incomplete visibility into east-west traffic flows, inflexible policy engines, and lack of multi-cloud support were among the most cited reasons. Throughout 2017, market penetration was around 5% of the target audience, and micro-segmentation was far from mainstream. Andrew Lerner, Research Vice President at Gartner, noted in a blog post that “Micro-segmentation is the future of modern data center and cloud security; but not getting the micro-segmentation-supporting technology right can be analogous to building the wrong foundation for a building and trying to adapt afterward”.

That year GuardiCore tackled these challenges head on. Based on feedback from our growing customer base, we added flexible policy management and moved beyond third-party integration alone to add native enforcement at the flow and process levels. Customers could move from zero segmentation to native enforcement in three easy steps: revealing applications, building policies, and natively enforcing those policies.

2018: Our solution takes complexity out of micro-segmentation

Today, micro-segmentation serves as a foundational element of security in any data center. According to a Citigroup report, cloud security is the number one priority among CISOs in 2018, with micro-segmentation the top purchasing priority in this category. Concentrated effort on the part of organizations from different industries has resulted in a better understanding of the technology. This year we have deployed micro-segmentation across all types of environments, from bare metal to virtual machines, through public cloud instances, and recently to containerized environments.

So if you are planning a micro-segmentation project let’s talk. We can show you how to do it in a way that is quick, affordable, secure, and provable across any environment.

Lateral Movement Security

Security teams often focus significant effort and resources on protecting the perimeter of their IT infrastructure and tightly controlling north-south traffic, or traffic that flows between clients and servers. However, several major transformations in enterprise computing are causing east-west traffic, or server-to-server communication within the data center, to outgrow north-south traffic in both volume and strategic importance.

For example:

  • Traditional on-premises data centers increasingly use horizontal scaling techniques that employ large sets of peer nodes to service the requests of clients, rather than a simple north-south flow.
  • The emergence of big data analytics as an essential competency is also driving substantial growth of east-west traffic, as processing of large data sets distributed across many nodes is generally required.
  • The growing adoption of public cloud infrastructure makes the traditional concept of a network perimeter obsolete, increasing the importance of securing east-west traffic among nodes.

While many organizations remain heavily invested in perimeter security, they are often extremely limited in their ability to detect and prevent lateral movement within their data center and cloud infrastructure.

What is Lateral Movement?

Lateral movement is the set of steps that attackers who have gained a foothold in a trusted environment take to identify the most vulnerable and/or valuable assets, expand their level of access, move to additional trusted assets, and further advance in the direction of high-value targets. Lateral movement typically starts with an infection or credential-based compromise of an initial data center or cloud node. From there, an attacker may employ various discovery techniques to learn more about the networks, nodes, and applications surrounding the compromised resource.

As attackers are learning about the environment, they often make parallel efforts to steal credentials, identify software vulnerabilities, or exploit misconfigurations that may allow them to move successfully to their next target node.

When an attacker executes an effective combination of lateral movement techniques, it can be extremely difficult for IT teams to detect, as these movements often blend in with the growing volume of legitimate east-west traffic. The more they learn about how legitimate traffic flows work, the easier it is for them to attempt to masquerade their attacks as a sanctioned activity. This, combined with many organizations’ insufficient investment in lateral movement security, can cause security breaches to escalate quickly.

Assessing Lateral Movement Security

One fast, simple, and inexpensive step that organizations concerned about lateral movement security can take is to test how vulnerable their environment is to unsanctioned east-west traffic. GuardiCore Labs offers a free, open-source breach and attack simulation tool called Infection Monkey that can be used for this purpose.

Infection Monkey scans the environment, identifies potential points of vulnerability, and attempts predetermined attack scenarios to attempt lateral movement. The output is a security report that identifies the security issues that were discovered and includes actionable remediation recommendations.

[Figure: Infection Monkey warns of the danger of lateral movement]

Visualizing East-West Traffic

Organizations seeking more proactive lateral movement security can begin by visualizing the east-west traffic in their environment. Once a clear baseline of sanctioned east-west traffic is established and viewable on a real-time and historical basis, it becomes much easier to identify unsanctioned lateral movement attempts.
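As a rough sketch of this baselining idea, the snippet below records a set of sanctioned flows and flags anything outside it. The flow tuples are hypothetical; real products derive the baseline from observed traffic and orchestration metadata:

```python
# A minimal sketch of baselining east-west traffic: record the set of
# sanctioned flows, then flag anything new. Host names and ports are
# illustrative assumptions.
SANCTIONED = {
    # (src host, dst host, dst port)
    ("web-1", "app-1", 8080),
    ("app-1", "db-1", 5432),
}

def review(observed_flows):
    """Return flows that were never seen during the baselining period."""
    return [f for f in observed_flows if f not in SANCTIONED]

suspicious = review([
    ("web-1", "app-1", 8080),   # known good
    ("app-1", "web-1", 445),    # app server probing SMB on a web host
])
print(suspicious)   # [('app-1', 'web-1', 445)] -> a candidate lateral move
```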

This is one of the flagship capabilities of GuardiCore Centra. Centra uses network and host-based sensors to collect detailed information about assets and flows in data center, cloud, and hybrid environments, combines this information with available labeling information from orchestration tools, and displays a visual representation of east-west traffic in the environment.

[Figure: Visibility for lateral movement]

This added visibility alone delivers immediate benefits to organizations seeking a greater understanding of potential lateral movement risks. It also provides the foundation for more sophisticated lateral movement security techniques.

Improving Lateral Movement Security

Once an organization has a clear view of both sanctioned and unsanctioned east-west traffic in its data center and cloud infrastructure, it can use this information to take active steps to stop lateral movement. An optimal approach includes a mix of both proactive and reactive lateral movement security techniques.

Micro-Segmentation Policies

Once an IT team has visualized its east-west traffic, the addition of micro-segmentation policies can significantly reduce attackers’ ability to move laterally. Micro-segmentation applies workload and process-level security controls to data center and cloud assets that have an explicit business purpose for communicating with each other. When strong micro-segmentation policies are implemented, attempts at lateral movement that do not explicitly match sanctioned flows – down to the specific process level – can generate alerts to the security operations team or even be blocked proactively.

Detecting and Responding to Unauthorized East-West Traffic

While micro-segmentation policies significantly improve lateral movement security, it is important to complement policy measures with additional detection and response capabilities. In addition to generating alerts when policy violations occur, GuardiCore Centra can detect and respond to unauthorized east-west traffic by leveraging deception technology to monitor and investigate suspicious behavior within east-west traffic.

Deception

GuardiCore Centra applies deception technology to analyze all failed attempts at lateral movement and then redirect suspicious behavior to a high-interaction deception engine. The attacker is fed responses that suggest that their attack techniques are successful, but all their tools, techniques and exploits are being recorded and analyzed in a fully isolated environment.


This helps IT teams learn more about the lateral movement being attempted in the environment and assess how to best improve security policies over time.

A Growing Strategic Priority

While strong perimeter security remains essential, the transition from traditional on-premises infrastructure to hybrid-cloud and multi-cloud architectures is increasing the strategic importance of lateral movement security.

It’s essential for security teams to:

  • Gain ongoing visibility into their organization’s east-west traffic
  • Develop techniques for differentiating between sanctioned and unsanctioned east-west traffic
  • Implement controls like micro-segmentation to tightly govern infrastructure activity
  • Actively monitor for unauthorized lateral movement to both contain breaches quickly and continuously refine policies based on the latest attack techniques

Organizations that move beyond perimeter-focused thinking and place greater emphasis on lateral movement security will ensure that their security measures remain in step as IT infrastructure becomes more dynamic and heterogeneous.

For more information about Micro-Segmentation, visit our Micro-Segmentation Hub.

Streamlining a Rolling PCI Compliance Process Within Your Organization

Compliance with PCI regulations is not a one-time job that you can complete and then check off your list. According to Verizon, who publish their regular Payment Security Report, “80% of companies that passed their annual assessment failed a subsequent interim assessment, which indicates that they’ve failed to sustain the security controls they put in place.”

Any business that works with payment data recognizes the challenges involved in maintaining a PCI-compliant data center. IT environments are becoming increasingly complex, with diverse and dynamic technologies that are constantly changing to best support customer needs and provide competitive differentiation. Even small companies with relatively simple structures may still have on-premises data centers, virtual backups, SaaS applications or IaaS in both the public and private cloud, and payment information on physical machines or devices internally. Many of these go through regular application or organizational changes that disrupt your ability to stay compliant as they shift data and workloads to meet demand.

Additionally, PCI regulations are not static; they change as the industry learns more about security and as wider threats evolve. This obviously influences the security tools your business needs. With all this to consider, how can you bring your organization on board for sustainable compliance?

Reduce the Scope

According to the PCI Security Standards Council (SSC), the cardholder data environment (CDE) and all connected systems are considered ‘in scope.’ In fact, a system component can only be ‘out of scope’ if it is unable to communicate with any component within the compliance environment and therefore cannot compromise the security of the CDE. It’s worth remembering that even isolated networks need to be documented in your compliance report. This definition makes reducing scope, and thereby reducing the elements you need to include in your annual assessment, difficult.

    • Tokenization: One way to go about reducing scope is to reduce the data itself. Think about truncating or masking PAN (primary account number) data, which is rarely required in full, or consolidating the systems that store cardholder data, whether hardware or software. Some companies replace PAN data with a fixed-length message digest or use tokenization, which allows this data to be removed from scope. Point-to-point encryption is becoming more popular as a way to remove the whole of merchant services from scope altogether. (A minimal masking sketch appears after this list.)
    • Segmentation & Micro-segmentation: Another tactic is reducing scope using architecture. Traditionally, firewalls were used to create partitions and enforce network zones, while segmentation gateways were shown to improve access control both internally and externally. Virtual LANs with strong ACLs were shown to have the same effect. Everything changed with the advent of cloud-based and hybrid solutions, and today there is no such thing as a simple IT environment. While segregation can help reduce scope using a combination of methods such as IP address restriction, communication protocol restriction, port restriction and application-level restriction, micro-segmentation is garnering the most attention.

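As a small illustration of the masking idea in the tokenization bullet above, the sketch below truncates a PAN to the first six and last four digits, the maximum PCI DSS permits for display. It is illustrative only, not a substitute for a validated tokenization or P2PE solution:

```python
# A minimal sketch of PAN masking for scope reduction: keep only the digits
# most business processes actually need. Illustrative only.
def mask_pan(pan: str) -> str:
    """Keep first 6 (BIN) and last 4 digits, mask the rest (PCI DSS 3.3 style)."""
    digits = pan.replace(" ", "")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))   # 411111******1111
```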
Micro-segmentation enables your staff to work at a process or identity level, setting the rules you need to keep your network secure. As you control the flow of data from process to process, a breach is no longer catastrophic; even in the worst-case scenario it is automatically isolated and easily resolved. The benefits are clear: as well as gaining deep visibility and wide coverage of your architecture, micro-segmentation limits complexity, making continued compliance that much easier.

Learn more about the benefits of Micro-Segmentation

Outsourcing for Compliance

Most enterprises have identified that while their environments continue to grow in complexity, their staffing size and skill sets remain somewhat static. There is a growing demand for qualified IT staff, and growth in the workforce hasn’t kept pace. Executives continue to complain about a shortage of skilled employees. In fact, a January 2018 research study by ESG showed 51 percent of respondents claiming their organization had a problematic shortage of cybersecurity skills.

Many enterprises have found that outsourcing specific components of their PCI strategy to Managed Security Services Providers is the right solution. In the right situations, outsourcing might help you reduce scope, or add tools that help maintain a compliant data center.

  • Security Outsourcing to MSSPs: PCI regulations include ensuring you have an up-to-date antivirus solution. Think also about SIEM/logging capabilities, file integrity monitoring, and vulnerability and patch management solutions. These are great examples of functions that can be outsourced to competent MSSPs, effectively outsourcing compliance in an affordable, smart way that takes advantage of third-party expertise. Of course, antivirus solutions are not all created equal; some options provide an added layer of vulnerability management, helping you achieve compliance without lifting a finger on your side. Look for MSSPs whose solutions check as many of the technical-requirement boxes as possible for you.
  • Other options for outsourcing include Storage, Processing and Handling, all of which can partially or completely remove cardholder data from your CDE, supporting your company in reducing scope.

Selecting Comprehensive Platform Solution over Multiple Point Products

Comprehensive Platform Approach: Since multiple tool sets often lend themselves to confusion and complexity, we’ve seen a shift from enterprises selecting multiple point solutions to unified, comprehensive platforms. A solution may provide adequate threat detection, for example, but does it also offer a distributed firewall, or breach response from the same platform? Dynamic environments need a lot of attention, so using one platform instead of many to manage a whole area of compliance is invaluable when it comes to policy management and the proof process.

Continued Compliance Enhances Enterprise Security as a Whole

It’s important to facilitate an environment where compliance isn’t viewed as a hassle or even a hindrance but instead as part of having a healthy, vibrant, safe and secure enterprise. While it’s true that PCI compliance is not a be-all and end-all, these continued compliance checks, when done correctly, lend themselves to the improvement of the organization as a whole. Here are some examples where continued PCI compliance lends itself to comprehensive enterprise health:

  • Flow Visualization: If you can access a visual map of all application workloads in granular detail, you can use the work towards PCI compliance to uncover underlying security issues. Proper visualization can catch ineffective oversight mechanisms, organizational silos, wasted resources, and poor architecture design, while a lack of this data compromises security integrity. In addition to sustaining compliance, maintaining process-level visibility keeps an accurate tab on the state of your overall security.
  • Set Policies and Rules for Cardholder Data: Intelligent rule design can protect you in case of a breach, but it also helps you refine and strengthen your compliance policies. Setting and enforcing strict compliance rules using a flexible policy engine is essential. These can be higher-level best practices for security when considering larger segments, and then more specific rules for micro-segments. Of course, these need to work across your entire network, including in hybrid environments.
  • Reduce Complexity and Maintain Control: Simplify your IT architecture with business process corrections and investment in new hardware or software, reducing costs for the business. Using a single platform for visualization, micro-segmentation, and breach detection means you don’t have to fear becoming more vulnerable to attacks or less compliant to regulations.
  • Detailed Forensics: The immediate benefits of compliance may not always be clear. Continuous monitoring and sharing of detailed actionable analytics of breach detection or resolution can improve security posture and increase awareness and appreciation of these efforts among your staff. This creates an environment where data protection and compliance are shown to have true value.

Sustainable Compliance Needs Dedication

Ensuring that your security supports continued compliance doesn’t happen without work. All areas of the business need to be on board, from business strategists to customer call representatives. Simplifying your business process through reducing scope, outsourcing, selecting comprehensive platforms over multiple point solutions and understanding how continuous PCI compliance positively affects the health of your enterprise security overall will help make it an integral part of your company culture.

What is Micro-Segmentation?

Micro-segmentation is the emerging IT security best practice of applying workload and process-level security controls to data center and cloud assets that have an explicit business purpose for communicating with each other. It offers more flexibility and granularity than established security techniques like network segmentation and application segmentation, making it more effective at detecting and blocking lateral movement in data center, cloud, and hybrid-cloud environments.

Read more