AWS Security Best Practices

AWS is the biggest player in the public IaaS (Infrastructure as a Service) market and a critical component of the hybrid-cloud infrastructure in many enterprises. Understanding how to secure AWS resources and minimize the impact of any breaches that do occur has become more important than ever. For this reason, after closing 2018 with Infection Monkey & Guardicore Centra’s integration into AWS Security Hub, we decided to open 2019 with a crash course on AWS security best practices.

In this piece, we’ll dive into some of the basics of AWS security, provide some tips to help you get started, and supply you with information on where you can learn more.

#1 AWS security best practice: Get familiar with the AWS shared responsibility model

Understanding the AWS security paradigm at a high level is an important part of getting started securing your AWS infrastructure. AWS uses the shared responsibility model to define who is responsible for securing what in the world of AWS. To help conceptualize the model, the public cloud infrastructure giant has come up with succinct phrasing to describe what they are responsible for and what you (the customer) are responsible for. In short:

AWS is responsible for “security of the cloud” – This means the underlying software, hardware, and global infrastructure (think racks in physical data centers, hypervisors, switches, routers, storage, etc.) are AWS’s responsibility to secure.

Customers are responsible for “security in the cloud” – This means customers are responsible for securing things like customer data, applications, operating systems, firewall configuration, authentication, and access management.

Worded differently, AWS gives you the public cloud infrastructure to build upon, but it’s up to you to do so responsibly. It is expected that not everything you need will be baked into any given AWS solution. Third-party security tools like Centra can help fill those gaps. Understanding the shared responsibility model and what tools can help will allow you to ensure you’re doing your part to secure your infrastructure.

#2 AWS security best practice: Use IAM wisely

AWS Identity and Access Management (IAM) is a means of managing access to AWS resources and services, and is built into AWS accounts. In a nutshell, IAM enables you to configure granular permissions and access rights for users, groups, and roles. Here are a few useful high-level recommendations to help you get started with IAM, followed by a short CLI sketch:

  • Grant least privilege – The principle of least privilege is a popular concept in the world of InfoSec and it is even more important to adhere to in the cloud. Only grant users and services the privileges necessary for the given set of tasks they should be legitimately responsible for, and nothing more.
  • Use IAM groups – Using groups to assign permissions to users significantly simplifies and streamlines access management.
  • Regularly rotate credentials – Enforcing expiration dates on credentials helps ensure that if a given set of credentials is compromised, there is a limited window for an attacker to access your infrastructure.
  • Limit use of root – Avoid using the root account for everyday tasks. In AWS this means the account root user, and on your instances it means the Linux “root” user. Being conservative with your use of root access helps keep your infrastructure secure.
  • Use MFA – Multi-factor authentication (MFA) should be considered a must for users with high-level privileges.
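
To make a few of these ideas concrete, here is a minimal AWS CLI sketch; the group, user, policy, and access key names are illustrative placeholders, not recommendations for your environment:

# Grant least privilege via a group rather than per-user policies.
aws iam create-group --group-name s3-readers
aws iam attach-group-policy \
    --group-name s3-readers \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam add-user-to-group --group-name s3-readers --user-name alice

# Rotate credentials: find a user's access keys and deactivate old ones.
aws iam list-access-keys --user-name alice
aws iam update-access-key --user-name alice \
    --access-key-id AKIAEXAMPLEKEYID --status Inactive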

#3 AWS security best practice: Disable SSH password authentication

If you’re familiar with Linux server administration in general, you’re likely familiar with the benefits of SSH keys over passwords. If you’re not, the short version is below, followed by a sketch of how to turn password authentication off:

  • SSH keys are less susceptible to brute force attacks than passwords.
  • To compromise SSH public-key authentication used with a passphrase, an attacker would need to obtain the SSH private key AND determine (or guess) the passphrase.
  • While SSH keys may require a little more work when it comes to key management, the pros far outweigh the cons from a security perspective.
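
As for actually disabling password authentication on a Linux instance, a minimal sketch along these lines should work, assuming key-based access is already configured (keep a second session open while testing so you don’t lock yourself out):

# Disable password logins in the SSH daemon configuration.
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?ChallengeResponseAuthentication.*/ChallengeResponseAuthentication no/' /etc/ssh/sshd_config

# Reload the daemon to apply (the service name varies by distribution,
# e.g. "ssh" on Ubuntu, "sshd" on Amazon Linux).
sudo systemctl reload sshd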

#4 AWS security best practice: Use security groups

First, to clear up a common misconception: AWS security groups are NOT user groups or IAM groups. An AWS security group is effectively a virtual firewall. If you’re comfortable understanding the benefits of a firewall within a traditional network infrastructure, conceptualizing the benefits of AWS security groups will be intuitive.

AWS security group best practices

Now that we’ve clarified what a security group is, we’ll dive into a few AWS security group best practices to help you get started using them; a brief CLI sketch follows the list.

    • Minimize open ports – Unless there is a compelling reason to do otherwise, only allow access to required ports on any given instance. For example, if you’re running a cluster of instances as web servers, access to TCP ports 80 and 443 makes sense (and perhaps 22 for SSH), but opening other ports is an unnecessary risk.
    • Don’t expose database ports to the Internet – In most cases, there is no need to expose the database to the Internet, and doing so puts your infrastructure at risk. Use security group policies to restrict access to database ports (e.g. TCP 3306 for MySQL) to specific AWS security groups.
    • Regularly audit your security group policies – Requirements change, rules that were once needed become liabilities, and people make mistakes. Regularly auditing your security rules for relevance and proper configuration helps you minimize the likelihood that an outdated or misconfigured security group leads to a breach.
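
As an illustration, here is a hedged AWS CLI sketch of these rules; the security group IDs and the admin CIDR range are placeholders:

# Web tier: allow HTTP/HTTPS from anywhere, SSH only from an admin range.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 198.51.100.0/24

# Database tier: allow MySQL only from the web tier's security group,
# never from the internet at large.
aws ec2 authorize-security-group-ingress --group-id sg-0fedcba9876543210 \
    --protocol tcp --port 3306 --source-group sg-0123456789abcdef0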

This is just the tip of the iceberg when it comes to AWS security group best practices. For more information, check out the AWS Security Groups User Guide and our Strategies for Protecting Cloud Workloads with Shared Security Models whitepaper.

#5 AWS security best practice: Leverage micro-segmentation

One of the most important components of securing public-cloud infrastructure, particularly in hybrid-cloud environments, is micro-segmentation. Micro-segmentation limits both north-south and east-west movement of breaches when they occur, mitigating the spread of threats from one node to another. Further, Guardicore’s intelligent micro-segmentation solution can limit one of the biggest drivers of breach impact: dwell time. If you’re interested in learning more, check out this blog post for a crash course on micro-segmentation best practices.

How micro-segmentation complements AWS security groups

Security groups are an important part of AWS security, and micro-segmentation is an excellent way to complement them and round out a hybrid-cloud security plan. A micro-segmentation solution like Guardicore Centra helps ensure you are able to implement micro-segmentation seamlessly both on-premises and in the cloud. Specific benefits of using Centra to complement AWS security groups include:

  • Enhanced visibility – Centra is able to automatically discover applications and flows, use its AWS API integration to pull labels and asset information, and provide granular visibility and baselining for your entire infrastructure.
  • Application-aware policies – Next Generation Firewalls (NGFWs) are a big part of on-premises security, and Centra helps bring the same features to your AWS cloud. You wouldn’t compromise on application-aware security in a physical data center, and with Centra you don’t have to in the cloud either.
  • Protection across multiple cloud platforms & on-prem – It is common for the modern enterprise to have workloads scattered across multiple cloud service providers as well as physical servers on-premises. Centra is able to provide micro-segmentation for workloads running in AWS, other IaaS providers, and on physical servers in corporate offices and data centers. This helps enterprises ensure that their security is robust across the entirety of their infrastructure.

If you’re interested in learning more about the benefits of Centra for AWS, check out this solution brief (PDF).

Putting it all together: a holistic approach to AWS security

As we have seen, there is no single magic bullet when it comes to securing your AWS infrastructure. Understanding the AWS shared responsibility model enables you to know where to focus your attention, and leveraging built-in AWS features like security groups and IAM is a great start. However, there are still gaps left unaccounted for by AWS tools, and third-party solutions are needed to address them. Guardicore Centra provides users with micro-segmentation, breach detection & response, and application-level visibility that help round out a holistic approach to AWS security.

Want to learn more?

For more on how Guardicore Centra and micro-segmentation can help you keep your AWS resources secure, contact us today or sign up for a demo of the Centra Security Platform.

Interested in cloud security for hybrid environments? Get our white paper about protecting cloud workloads with shared security models.


Looking for a Micro-segmentation Technology That Works? Think Overlay Model

Gartner’s Four Models for Micro-Segmentation

Gartner has recently updated its micro-segmentation evaluation factors document (“How to Use Evaluation Factors to Select the Best Micro-Segmentation Model,” refreshed 5 November 2018).

The report details four different models for micro-segmentation, but it does not make a clear recommendation on which is best. Understanding the answer means looking at the limitations of each model and recognizing what the future looks like for dynamic hybrid-cloud data centers. I recommend reading the report and evaluating the different capabilities; for us at Guardicore, however, it is clear that one solution model stands above the others, and it should not be a surprise that vendors that previously used other models are now changing their technology to use this model: Overlay.

But first, let me explain why other models are not adequate for most enterprise customers.

The Inflexibility of Native-Cloud Controls

The native model uses the tools that are provided with a virtualization platform, hypervisor, or infrastructure. This model is inherently limited and inflexible. Even for businesses using only a single hypervisor provider, this model ties them into one service, as micro-segmentation policy cannot simply be moved when switching providers. In addition, while businesses might think they are running everything on one IaaS platform or hypervisor, workloads often end up elsewhere too, a phenomenon known as shadow IT. The reality is that vendors that used to support native controls for micro-segmentation have realized that customers are transforming, and have had to develop new Overlay-based products.

More commonly, enterprises know that they are working with multiple cloud providers and services, and need a micro-segmentation strategy that can work seamlessly across this heterogeneous environment.

The Inconsistency of Third-Party Firewalls

This model is based on virtual firewalls offered by third-party vendors. Enterprises using this model are often subject to network layer design limitations, and therefore forced to change their networking topology. They can be prevented from gaining visibility due to proprietary applications, encryption, or invisible and uncontrolled traffic on the same VLAN.

A known issue with this approach is the creation of bottlenecks due to reliance on additional third-party infrastructure. Essentially, this model is not a consistent solution across different architectures, and can’t be used to control the container layer.

The Complexity of a Hybrid Model

A combination of the two models above, the hybrid model attempts to limit some of the downsides of each model alone. To gain more flexibility than native controls provide, enterprises usually deploy third-party firewalls for north-south traffic, while native controls handle east-west traffic inside the data center, where multi-cloud support is less of a concern.

However, as discussed, both of these solutions, even in tandem, are limited at best. With a hybrid approach, you also add the extra problems of a complex and arduous setup and maintenance strategy. Visibility and control under a hybrid approach are unsustainable in a future-focused IT ecosystem where workloads and applications are spun up, automated, auto-scaled, and migrated across multiple environments. Enterprises need one solution that works well, not two that are sub-par individually and limited together.

Understanding the Overlay Model – the Only Solution Built for Future Focused Micro-Segmentation

Rather than a patched-together hybrid solution from imperfect models, Overlay is built to be a more robust and future-proof solution from the ground up. Gartner describes the Overlay model as a solution where a host agent or software is enforced on the workload itself. Agent-to-agent communication is utilized rather than network zoning.

One of the downsides of third-party firewalls is that they are inherently difficult to scale. In contrast, agents have no choke points to constrain them, so enforcement capacity scales with the workloads themselves.

With Overlay, your business has the best possible visibility across a complex and dynamic environment, with insight and control down to the process layer, including for future-focused architecture like container technology. The only solution that can address infrastructure differences, Overlay is agnostic to any operational or infrastructure environment, which means an enterprise has support for anything from bare metal and cloud to virtual or micro-services, or whatever technology comes next. Without an Overlay model, your business can’t be sure of supporting future use cases and remaining competitive.

Not all Overlay Models are Created Equal

It’s clear that Overlay is the strongest technology model, and the only future-focused solution for micro-segmentation. This is true for traditional access-list style micro-segmentation as well as for implementing deeper security capabilities that include support for layer 7 and application-level controls.

Unfortunately, not every vendor provides the best version of Overlay, one that delivers the functionality it’s capable of. Utilizing the inherent benefits of an Overlay solution means you can put agents in the right places, setting communication policy that works in a granular way. With the right vendor, you can make intelligent choices about where to place agents, using context and process-level visibility all the way to Layer 7. Your vendor should also be able to provide extra functionality such as enforcement by account, user, or hash, all within the same agent.

Remember that protecting the infrastructure requires more than micro-segmentation and you will have to deploy additional solutions that will allow you to reduce risk and meet security and compliance requirements.

Micro-segmentation has moved from being an exciting new buzzword in cyber-security to an essential risk reduction strategy for any forward-thinking enterprise. If it’s on your to-do list for 2019, make sure you do it right, and don’t fall victim to the limitations of an agentless model. Guardicore Centra provides an all-in-one solution for risk reduction, with a powerful Overlay model that supports a deep and flexible approach to workload security in any environment.

Want to learn more about the differences between agent and agentless micro-segmentation? Check out our recent white paper.


CVE-2019-5736 – runC container breakout

A major vulnerability related to containers was disclosed on February 12th. The vulnerability allows a malicious container that is running as root to break out into the host OS and gain administrative privileges.

Adam Iwaniuk, one of the researchers who took part in the discovery, shares in detail the different paths taken to discover this vulnerability.

The mitigations suggested as part of the research for unpatched systems are:

  1. Use Docker containers with SELinux enabled (--selinux-enabled). This prevents processes inside the container from overwriting the host docker-runc binary.
  2. Use read-only file system on the host, at least for storing the docker-runc binary.
  3. Use a low privileged user inside the container or a new user namespace with uid 0 mapped to that user (then that user should not have write access to the runC binary on the host).

The first two suggestions are pretty straightforward, but I would like to elaborate on the third one. It’s important to understand that Docker containers run as root by default unless stated otherwise. This does not necessarily mean that the container also has root access to the host OS, but it’s the main prerequisite for this vulnerability to work.

To run a quick check whether your host is running any containers as root:


#!/bin/bash

# Get the names of all running containers.
containers=$(docker ps --format '{{.Names}}')

echo "List of containers running as root"

# Loop through the containers and inspect the configured user.
for container in $containers
do
    user=$(docker inspect --format='{{json .Config.User}}' "$container")
    # An empty user ("") means none was set, so Docker defaults to root.
    if [ "$user" = '""' ] || [ "$user" = '"0"' ] || [ "$user" = '"root"' ]; then
        echo "Container name: $container"
    fi
done

In any case, as a best practice you should prevent your users from running containers as root. This can be enforced by existing controls of the common orchestration/management systems. For example, OpenShift prevents users from running containers as root out of the box, so your job here is basically done. However, in Kubernetes containers can run as root by default, but you can easily configure a PodSecurityPolicy to prevent this, as described here and sketched below.
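
As an illustration, a minimal PodSecurityPolicy along these lines should refuse root containers, assuming the PodSecurityPolicy admission controller is enabled and that you authorize the policy for your users or service accounts via RBAC; the policy name is an example:

kubectl apply -f - <<'EOF'
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-root-containers
spec:
  privileged: false
  # Refuse any pod whose containers would run as UID 0.
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
EOF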

In order to fix this issue, you should patch the version of your container runtime. Whether you are just using a container runtime (Docker) or some flavor of a container orchestration system (Kubernetes, Mesos, etc.), you should look up the instructions for your specific software version and OS.

How can Guardicore help?

Guardicore provides a network security solution for hybrid cloud environments that spans multiple compute architectures, containers being one of them. Guardicore Centra is a holistic micro-segmentation solution that provides process-level visibility and enforcement of traffic flows for both containers and VMs. This is extremely important in the case of this CVE: once a malicious actor breaks out, the subsequent attack traffic originates from the host VM or from a different container, not from the original container.

Guardicore can mitigate this risk by controlling which processes can actually communicate between the containers or VMs covered by the system.

Learn more about containers and cloud security

Operationalizing Micro-Segmentation to Get You Started

Micro-segmentation is the way forward in protecting networks. But a successful micro-segmentation deployment cannot be slapped together; it requires deliberate and detailed forethought in order to get it all right the first time around.

Learning from the Equifax Data Breach: Understanding the Details of One of the Largest Cyber Attacks of All Time

148 million consumers were affected by the Equifax Data Breach in 2017, more than half of all American adults. The US House of Representatives recently published an extensive report that allows the public to see what happened throughout the attack step by step, the techniques the attackers used to penetrate, move laterally, and gain access to valuable information, and how they managed to achieve this without being detected. Significantly, the report discusses what could have been done to prevent the extent of the damage. So, what went wrong for Equifax?

Two Missing Security Protocols that Could Have Stopped the Breach

With a breach of this scale, it would be easy to assume that the attackers used a complex attack pattern or took advantage of a new vulnerability that flew under the public radar. Interestingly, the committee outlines basic steps that Equifax failed to put into place that could have prevented the breach and limited its impact.

In particular, the report mentions “the company’s failure to implement basic security protocols, including file integrity monitoring and network segmentation” as an insight into how Equifax “allowed the attackers to access and remove large amounts of data.” Without these in place, the attack lasted 76 days, and attackers were able to use the unprotected credentials they found to access 48 additional databases. The attack was only discovered when the company updated an expired security certificate, one of more than 300 it had failed to update.

The limitations of Equifax’s security protocol were not due to a lack of in-depth tools or the company failing to upgrade to the latest expensive or cutting-edge technology. Many weaknesses could have been improved or even solved with security measures that are often cited by various industry standards and cybersecurity experts.

Below, you can see an informative pyramid by Gartner that details the protection controls an enterprise needs when handling cloud workloads in a dynamic environment. The top point of the pyramid references what Gartner refers to as less critical technology, and as the pyramid widens, the tools become increasingly essential as a foundation for cloud workload protection.

[Figure: Gartner’s pyramid for Cloud Workload Protection Platforms, showing the need for micro-segmentation]

The “optional” top section includes Antivirus and deception tools. The middle section contains controls that are often included outside of cloud workload protection, such as encryption and monitoring. The bottom section is the most essential, and Gartner goes as far as to label these tools core server protection strategies, foundational to a cloud workload environment. System integrity monitoring, vulnerability management, and segmentation and application control all play central roles in this category.

It’s not only Gartner that considers these controls essential. When it comes to protecting valuable customer information and achieving regulatory compliance, standards and frameworks such as PCI-DSS and SWIFT recommend the same basic steps. For financial information, PCI-DSS regulations enforce file integrity monitoring on the Cardholder Data Environment itself, to examine the way that files change, establish the origin of such changes, and determine if they are suspicious in nature. SWIFT regulations require customers to “Restrict internet access and protect critical systems from the general IT environment” as well as encourage companies to implement internal segmentation within each secure zone to further reduce the attack surface.

Equifax’s lack of a well implemented segmentation strategy allowed attackers to gain access to additional databases that contained Personally Identifiable Information. Without drawing attention to their activity, these hackers managed to access and remove large amounts of data held in dozens of different databases.

It’s not a coincidence that the same steps to mitigate these threats are suggested so widely – from industry experts like Gartner analysts to regulatory authorities such as SWIFT and PCI. Vulnerability management and system hardening reduce the risk of being breached in the first place, while segmentation limits the impact that a breach could have if successful.

Similar recommendations are coming from all directions. Implementing these basic steps significantly and measurably reduces risk across your data center, starting with your business’s most critical assets.

The real question is, why aren’t businesses putting these steps into place?

One answer is the growing complexity of IT environments. Take the SWIFT regulations for example. Even identifying the assets that belong in a secure zone can be tough in a large financial institution that may have hundreds of components to track. Some of these are physical, while others might be virtual. They are increasingly hosted on varying kinds of architecture and could be spread across different locations and teams. Gaining visibility of an increasingly complex and dynamic ecosystem is a must before you can put any policy or controls into place, and yet the visibility itself can be a sticking point for many businesses, even before they start considering smart segmentation strategy.

At Guardicore, we recognize how important it is to put these foundational controls in place, making it harder for attackers to gain entry to your environment and reducing the impact of an attack, limiting dwell time to minutes rather than days. That’s why we start with visibility, making it easier to enforce policy in the right places. A clear map of every asset and its dependencies allows businesses to create secure boundaries, track communication within the data center, as well as identify flows between the data center and the rest of the network.

Circling Back to Equifax

It’s unlikely that the Equifax team were unaware of the benefits of these controls. As we’ve seen, these protocols are recommended by experts and even required for various types of compliance. However, knowing is one thing and implementing is another. These steps are some of the first suggestions we put on the roadmap as we partner with new customers, but we’re often met with trepidation. Customers tell us, for example, that their past experiences with traditional segmentation tools have shown them to be slow and expensive, and that it’s difficult to know where to start.

Guardicore Centra has evolved to tackle this challenge head on, moving away from traditional segmentation methods to deliver micro-segmentation with foundational visibility and quick time to value. Our customers benefit from early wins like protecting critical assets or achieving regulatory compliance, avoiding the trap of “all or nothing segmentation” that can happen when competitors do not implement a phased approach.

Our expertise allows enterprises to build this phased approach to micro-segmentation with full context. Risk reduction is one essential element, but at Guardicore we offer a whole-package solution that includes breach detection and incident response, too, strengthening overall security posture.

Micro-segmentation is not a luxury for the few. Anyone can now implement the basic security measures needed to stay protected, shield themselves from obvious security gaps, and prevent attackers from gaining unchecked access to sensitive information in a hybrid IT environment.

Interested in hearing more? Get in touch for a demo.

Want to learn more about operationalizing micro-segmentation for quick time to value?

Read Our White Paper

Understanding the Types of Cyber Threats on the Rise in 2019

Keeping your IT environment safe means ensuring your finger is on the pulse of the latest threats in cyber-security. However, while there are always new zero-day threats and attack vectors, each year we see some fundamental repeats. Attackers often find it easy to penetrate networks with poor hygiene: old exploits left unpatched, weak passwords, and authentication issues such as a lack of two-factor authentication. These types of network threats endanger the security of your enterprise and your public image, and put customer data and privacy at risk.

While some types of cyber threats have been around for many years, as we enter 2019, many are growing in complexity or changing in design. The risk is growing, especially as businesses continue to move their workloads and processes to multi-cloud and hybrid-cloud environments. Virtualization and hypervisors, container orchestration, and auto-scaling workloads are all realities of a modern enterprise. If we really think about what was new in 2018 and will surely continue in 2019, it is attackers targeting critical applications, data centers, and clouds directly. To stay secure, as well as manage compliance and keep control despite potential gaps in vendor security, your own solution needs to step up. Businesses will increasingly need to choose a security solution that can effortlessly manage a hybrid and multi-cloud infrastructure.

Attackers are regularly learning new methods to gain entry or cause damage. Here are the top threats to look out for in 2019.

Direct Attacks on Data Centers and Clouds

What we’ve seen through our work with our customers and through our Guardicore Global Sensor Network is an increase in attacks on data centers and clouds directly. These types of cyber-security threats do not use targeted spear phishing campaigns to gain entry through a user within an enterprise. Instead, attackers find known and zero-day vulnerabilities in applications they can reach directly and exploit them to get inside. In many cases their work is assisted by fundamental weaknesses like insecure passwords and a lack of two-factor authentication. One of Guardicore Labs’ most important finds this year was the Butter campaign, in which the attackers gained access by simply brute-forcing SSH servers with weak passwords. Once inside, we found attackers moving incredibly easily across applications and data centers due to poor segmentation.

While these attacks on data centers are easy to accomplish, they remain difficult to spot. In fact, at some companies, security teams are not even the ones to ring the alarm bell. Dwell time ends and mitigation starts not because the enterprise finds the attackers and blocks the threat, but because a third party lets the enterprise know something is wrong. In some cases this could be white-hat researchers or the customers themselves; in the case of attackers seeking monetization, it could be credit card companies or law enforcement that notify the compromised enterprise.

Crypto-jacking

Many experts failed to predict the increase of cryptocurrency attacks for 2018, but no one is making that mistake this year. Attackers are often financially driven, and mining for cryptocurrency is one way to attempt a quick payout, with more guaranteed results than ransomware. Besides offering DDoS or RAT as a service to their customers, attackers are seeking an additional revenue stream. In fact, while crypto-jacking has risen 44.5% since 2017, ransomware has dropped by almost 30%. Mining malware often looks to exploit vulnerabilities such as unpatched software or known bugs such as this year’s Microsoft Windows Server 2003 vulnerability or the Oracle WebLogic flaw.

The impact of these attacks is huge, and attackers can steal vast amounts of CPU usage from victims, slowing down performance overall and having a negative effect on both business and customers. Like a worm, virus, or other types of cyber-security threats, crypto-jacking attacks can be tough to find, leaving stakeholders using time-wasting trial and error to find the source of the slowdown. Visibility into the traffic on your network is essential, so that you can track CPU usage and compare real-time activity to historical baselines.
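
As a toy illustration of that last point, a sketch like the following flags processes whose CPU usage exceeds a fixed threshold; a real deployment would compare against per-host historical baselines rather than a hard-coded number:

#!/bin/bash

# Hypothetical threshold, in percent of one core.
THRESHOLD=80

# List processes sorted by CPU usage and flag any above the threshold.
ps -eo pid,comm,%cpu --sort=-%cpu | awk -v t="$THRESHOLD" \
    'NR > 1 && $3 > t { printf "High CPU usage: PID %s (%s) at %s%%\n", $1, $2, $3 }'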

APT

An APT is an Advanced Persistent Threat, where an attacker can breach a network and stay undetected for a long period of time. The goal of these attacks is not to cause instant damage or immediately ask for ransom, drawing attention to your breach, but rather to insidiously steal information or security data in an unobtrusive way. An APT could breach your network using malware, exploit kits or by piggybacking on legitimate traffic. This could make it difficult to spot. Once your network is infected, an APT could find login credentials, and then use these to make lateral moves around your data center or wider system.

Origins of APTs are usually found to be state actors, either direct or sponsored government attackers. Probably the best example this year was the Marriott/SPG attack. With a dwell time that began in 2014, the state actor enjoyed great benefit from its access to Marriott’s SPG network. The data stolen included names, phone numbers, email addresses, passport numbers, dates of birth, and arrival and departure information.

This personally identifiable data from an attack of this kind could offer an intelligence agency all sorts of very tangible benefits. One example could be the ability to create more legitimate looking false passports with the use of real identification documents.

This kind of breach would also provide actionable tracking information, allowing an agency or a bad actor to track people’s movements. They could see if someone was checking into particular locations or even catch a meeting between multiple people of interest. The data would also allow them to learn travel patterns and even potentially set up intelligence agencies to “intercept” people of interest.

Because APTs and similar types of cyber-security threats are designed to go unnoticed, they can be difficult to spot. Signs to look out for could be unusual network activity such as spikes in data access. Key defense tactics could be isolating critical data using micro-segmentation and using white lists to limit access to only the applications that should be allowed to communicate with one another.

File-less Malware

One dangerous type of attack that is typically found as part of an APT is file-less malware. As the name suggests, a file is never created, so standard file-based antivirus detection does not work against these breaches. While traditionally file-less techniques were only the first step in malware infection, in recent months fully file-less attacks have been gaining traction.

These types of network threats often pivot from memory exploits to highly trusted system tools and then move on to access the rest of a network, undetected. The most common kinds of file-less malware attacks are remote logins, WMI-based attacks, and attacks based on PowerShell or Microsoft Office. In short, no malware file doesn’t mean no breach. Micro-segmentation, especially when implemented with effective rules down to the process level, can keep your most critical applications safe from lateral moves even within the same application cluster, even against the threats you can’t see coming.

Attacks on Critical IoT Devices

The final and perhaps the most frightening increase we have seen through 2018 is attackers commandeering critical IoT devices. Often unpatched, and residing in what are generally flat networks (ones without any segmentation), medical devices have been a big target in 2018 and are likely to be further exploited in 2019.

Furthermore, “point of sale” systems are another attack environment that we’ve seen grow in popularity, as they also often suffer from a lack of patching and security, and are an easy target for both physical and remote attacks.

Recognizing how to Ward off These Types of Cyber-Security Threats

The combination of increasingly complex IT environments and the growing sophistication of cyber threats is a dangerous one. Micro-segmentation technology can reduce the attack surface in case of a breach, isolating attackers and keeping them away from critical assets and sensitive customer data. Building a smart segmentation strategy starts with a map of your entire IT environment, with application dependency mapping to visualize all the communications and flows in your ecosystem. This true visibility and real-time control over your entire infrastructure, from on-premises data centers to multi- and hybrid-cloud IaaS, is essential, in 2019 and beyond.

Want to learn more about breach detection to help prevent damage from cyber threats to your environment?


A Deep Dive into Point of Sale Security

Many businesses think of their Point of Sale (POS) systems as an extension of a cashier behind a sales desk. But with multiple risk factors to consider, such as network connectivity, open ports, internet access, and communication with the most sensitive data a company handles, POS solutions are more accurately an extension of a company’s data center, a remote branch of its critical applications. With this in mind, they should be seen as a high-threat environment, which means that they need a targeted security strategy.

Understanding a Unique Attack Surface

Distributed geographically, POS systems can be found in varied locations at multiple branches, making it difficult to keep track of each device individually and to monitor their connections as a group. They cover in-store terminals, as well as public kiosks and self-service stations in places like shopping malls, airports, and hospitals. Multiple factors, from a lack of resources to logistical difficulties, can make it near impossible to secure these devices at the source or react quickly enough in case of a vulnerability or a breach. Remote IT teams will often have a lack of visibility when it comes to being able to accurately see data and communication flows. This creates blind spots which prevent a full understanding of the open risks across a spread-out network. Threats are exacerbated further by the vulnerabilities of old operating systems used by many POS solutions.

Underestimating the extent of this risk could be a devastating oversight. POS solutions are connected to many of a business’s main assets, from customer databases to credit card information and internal payment systems, to name a few. The devices themselves are very exposed, as they are accessible to anyone, from a waiter in a restaurant to a passer-by in a department store. This makes them high-risk for physical attacks, such as downloading a malicious application through USB, as well as remote attacks, like exploiting the terminal through exposed interfaces. Recently, innate vulnerabilities have been found in mobile POS solutions from vendors that include PayPal, Square and iZettle, because of their use of Bluetooth and third-party mobile apps. According to the security researchers who uncovered the vulnerabilities, these “could allow unscrupulous merchants to raid the accounts of customers or attackers to steal credit card data.”

In order to allow system administrators remote access for support and maintenance, POS systems are often connected to the internet, leaving them exposed to remote attacks, too. In fact, 62% of attacks on POS environments are carried out through remote access. For business decision makers, ensuring that staff are comfortable using the system needs to be a priority, which can make security a balancing act. A straightforward on-boarding process, a simple UI, and flexibility for non-technical staff are all important factors, yet they can often open up new attack vectors while leaving security considerations behind.

One example of a remote attack is the POSeidon malware which includes a memory scraper and keylogger, so that credit card details and other credentials can be gathered on the infected machine and sent to the hackers. POSeidon gains access through third party remote support tools such as LogMeIn. From this easy access point, attackers then have room to move across a business network by escalating user privileges or making lateral moves.

High risk yet hard to secure, for many businesses POS are a serious security blind spot.

Safeguarding this Complex Environment and Getting Ahead of the Threat Landscape

Firstly, assume your POS environment is compromised. You need to ensure that your data is safe, and the attacker is unable to make movements across your network to access critical assets and core servers. At the top of your list should be preventing an attacker from gaining access to your payment systems, protecting customer cardholder information and sensitive data.

The first step is visibility. While some businesses will wait for operational slowdown or clear evidence of a breach before they look for any anomalies, a complex environment needs full contextual visibility of the ecosystem and all application communication within. Security teams will then be able to accurately identify suspicious activity and where it’s taking place, such as which executables are communicating with the internet where they shouldn’t be. A system that generates reports on high severity incidents can show you what needs to be analyzed further.

Now that you have detail on the communication among the critical applications, you can identify the expected behavior and create tight segmentation policy. Block rules, with application process context, can be used to contain any potential threat, ensuring that any future attackers in the data center would be completely isolated without disrupting business processes or having any effect on performance.

The risk goes in both directions. Next, let’s imagine your POS is secure, but it’s your data center that is under attack. Your POS is an obvious target, with links to sensitive data and customer information. Micro-segmentation can protect this valuable environment, and stop an attack getting any further once it’s already in progress, without limiting the communication that your payment system needs to keep business running as usual.

With visibility and clarity, you can create and enforce the right policies, crafted around the strict boundaries within which your POS application needs to communicate, and no further. Some examples of policy include the following (a generic firewall sketch follows the list):

    • Limiting outgoing internet connections to only the relevant servers and applications
    • Limiting incoming internet connections to only specific machines or labels
    • Building default block rules for ports that are not in use
    • Creating block rules that detail known malicious processes for network connectivity
    • Whitelisting rules to prevent unauthorized apps from running on the POS
    • Creating strict allow rules to enable only the processes that should communicate, and blocking all other potential traffic
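
To make the flavor of such rules concrete, here is a generic, hypothetical host-firewall sketch in iptables; the payment gateway address is a placeholder, and in practice you would express these rules in your segmentation product’s policy engine rather than by hand:

#!/bin/bash

# Hypothetical payment gateway address (documentation range).
PAYMENT_GW="203.0.113.10"

# Allow loopback traffic and established sessions to continue.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Limit outgoing internet connections to only the relevant server:
# outbound HTTPS to the payment gateway and nothing else.
iptables -A OUTPUT -p tcp -d "$PAYMENT_GW" --dport 443 -j ACCEPT

# Default block rule for everything not explicitly allowed.
iptables -A OUTPUT -j DROP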

Tight policy means that your business can detect any attempt to connect to other services or communicate with an external application, reducing risk and potential damage. With a flexible policy engine, these policies will be automatically copied to any new terminal that is deployed within the network, allowing you to adapt and scale automatically, with no manual moves, changes, or adds slowing down business processes.

Don’t Risk Leaving this Essential Touchpoint Unsecured

Point of Sale solutions are a high-risk open door for attackers to access some of your most critical infrastructure and assets. Without adequate protection, a breach could grind your business to a halt and cost you dearly in both financial damage and brand reputation.

Intelligent micro-segmentation policy can isolate an attacker quickly to stop them doing any further damage, and set up strong rules that keep your network proactively safe against any potential risk. Combined with integrated breach detection capabilities, this technology allows for quick response and isolation of an attacker before the threat is able to spread and create more damage.

Want to learn more about how micro-segmentation can protect your endpoints while hardening the overall security for your data center?


Considering Cyber Insurance in the Aftermath of the NotPetya Attack

It’s been 18 months since June 2017, when the Petya/NotPetya cyber attacks fell on businesses around the globe, resulting in dramatic losses of income and intense business disruption. Has cyber insurance limited the fallout for the victims of the ransomware attacks, and should proactive businesses follow suit and ensure they are financially covered in case of a breach?

Monetizing the Impact of Cybercrime

The effect of last year’s wave of cybercrime on the IT and insurance industries continues to grow as businesses disclose silent cyber impacts, as well as affirmative losses from WannaCry/Petya. The latest reports from Property Claim Services put the loss at over $3.3 billion, and growing.

Despite this, for some businesses, reliance on insurance schemes has proven inadequate. US pharmaceutical company Merck disclosed that the Petya cyberattacks have cost it as much as $580 million since June 2017, and predicted an additional $200 million in costs by the end of 2018. In contrast, experts estimated its insurance pay-out would be around $275 million, a huge number, but under half of the amount incurred so far, let alone as silent costs continue to rise.

Other companies have been left even worse off, such as snack food company Mondelez International, which is in a continuing battle with its property insurer, Zurich American Insurance Company. Mondelez claimed for the Petya attacks under a policy that included “all risks of physical loss or damage” specifying “physical loss or damage to electronic data, programs, or software, including loss or damage caused by the malicious introduction of a machine code or instruction.”

However, Zurich disputed the claim, citing a clause that excludes insurance coverage for any “hostile or war-like act by any government or sovereign power.” As US intelligence officials have determined that the NotPetya malware originated as an attack by the Russian military against Ukraine, Zurich is fighting Mondelez’s claim that coverage was wrongfully denied.

How Does This Lawsuit Affect the Cyber-Insurance Market Overall?

As cybercrime continues to rise, cyber insurance is understandably becoming big business. For companies deciding whether to take out coverage, CISOs need to find space in the budget for monthly costs and potentially large premiums. For this risk to be worthwhile, businesses want to be confident that they will recover their costs if a breach happens.

The insurance pay-outs around the Petya cyberattacks, and in particular the Mondelez case, throw this into question. This is especially true considering the rise in cyberattacks that are nation-backed, or could plausibly be claimed to be nation-backed by insurance companies in order to dispute a claim. As regulations change and the US military is given more freedom to launch preventative cyberattacks against foreign government hackers, any evidence that suggests governmental or military attribution could legitimately be used against claimants looking to settle their losses.

The Effect on Public Research

The ripple effect of this could go beyond the claims sector, and have a connected impact on security research, as well as free press and journalism in the long run, something we feel strongly about at Guardicore Labs. Traditionally, researchers have had the freedom to comment and even speculate on the attribution of cyber attacks, through information on the attackers’ behavior behind the scenes and the attack signatures they use. If insurance companies and claims handlers begin using public research as a reason to deny coverage to the victims, this could put research teams in an ethical bind, reducing the amount of public research and the transparency of the industry overall.

How Much of a ‘Guarantee’ Can Security Companies Provide?

The issue of what claims to honor extends to financial guarantees from security companies, not only to insurance handlers. It is becoming increasingly popular to offer guarantees to customers who purchase cybersecurity products, in order to ‘put your money where your mouth is’ on the infallibility of a particular solution.

However, many experts believe that these policies have so many loopholes that they negate the benefit of the warranty overall. One example is the often cited ‘nation state or act of god’ exception, which includes cyberterrorism. Others include exclusions of coverage for portable devices, insider threats, or intentional acts. Even if you are widely covered for an event, does that extend to all employees? According to the latest Cyber Insurance Buying Guide, “most policies do not adequately provide for both first-party and third-party loss.”

Your ‘Guarantee’ is not a Guarantee

The bottom line for CISOs looking to protect their business is that cyber insurance is not a catch-all solution by any means. Whether it’s insurance companies paying out a limited figure or skirting a pay-out altogether, or cybersecurity companies making big promises that are ultimately undermined by the small print, cyber insurance has a way to go.

Focus on your cybersecurity solution, including strong technology like micro-segmentation to limit the attack surface in the case of a breach. With this in place, you can ensure that your critical assets and data are ring-fenced and isolated, no matter what your infrastructure looks like and what direction the attack comes from. Integration with powerful breach detection and incident response capabilities strengthens your position even further, reducing dwell time, and giving you a security posture you can rely on.

What’s the Difference Between a High Interaction Honeypot and a Low Interaction Honeypot?

A honeypot is a decoy system that is intentionally insecure, used to detect and alert on an attacker’s malicious activity. A smart honeypot solution can divert hackers from your real data center, and also allow you to learn about their behavior in greater detail, without any disruption to your data center or cloud performance.

Honeypots differ in the way that they’re deployed and the sophistication of the decoy. One way to classify the different kinds of honeypots is by their level of involvement, or interaction. Businesses can choose from a low interaction honeypot, a medium interaction honeypot or a high interaction honeypot. Let’s look at the key differences, as well as the pros and cons of each.

Choosing a Low Interaction Honeypot

A low interaction honeypot will only give an attacker very limited access to the operating system. ‘Low interaction’ means exactly that: the adversary will not be able to interact with your decoy system in any depth, as it is a much more static environment. A low interaction honeypot will usually emulate a small set of internet protocols and network services, just enough to deceive the attacker and no more. In general, most businesses simulate protocols such as TCP and IP, which allows the attacker to think they are connecting to a real system and not a honeypot environment.

A low interaction honeypot is simple to deploy, does not give access to a real root shell, and does not use significant resources to maintain. However, a low interaction honeypot may not be effective enough, as it is only a basic simulation of a machine. It may not fool attackers into engaging, and it’s certainly not in-depth enough to capture complex threats such as zero-day exploits.
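
As a toy sketch of the ‘low interaction’ idea, the loop below listens on an unused port, records anything that connects, and never exposes a real service; the port, log path, and netcat flags (which vary between nc versions) are examples only:

#!/bin/bash

PORT=2222
LOG=./honeypot.log

while true; do
    # Block until a client connects, capture what it sends (5s timeout),
    # then log it with a timestamp and go back to listening.
    payload=$(nc -l -p "$PORT" -w 5)
    echo "$(date) connection on port $PORT: $payload" >> "$LOG"
done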

Is a High Interaction Honeypot a More Effective Choice?

A high interaction honeypot is the opposite end of the scale in deception technology. Rather than simply emulating certain protocols or services, the attacker is provided with real systems to attack, making it far less likely they will guess they are being diverted or observed. As the systems are only present as a decoy, any traffic that is found is by its very existence malicious, making it easy to spot threats and to track and trace an attacker’s behavior. Using a high interaction honeypot, researchers can learn the tools an attacker uses to escalate privileges, or the lateral movements they make to attempt to uncover sensitive data.

With today’s cutting-edge dynamic deception methods, a high interaction honeypot can adapt to each incident, making it far less likely that the attacker will realize they are engaging with a decoy. If your vendor team or in-house team has a research arm that works behind the scenes to uncover new and emerging cyber threats, this can be a great tool to allow them to learn relevant information about the latest tactics and trends.

Of course, the biggest downside to a high interaction honeypot is the time and effort it takes to build the decoy system at the start, and then to maintain the monitoring of it long-term in order to mitigate risk for your company. For many, a medium interaction honeypot strategy is the best balance, providing less risk than creating a complete physical or virtualized system to divert attackers, but with more functionality than a low interaction honeypot. These would still not be suitable for complex threats such as zero-day exploits, but could target attackers looking for specific vulnerabilities. For example, a medium interaction honeypot might emulate a Microsoft IIS web server and have sophisticated enough functionality to attract a certain attack that researchers want more information about.

Reducing Risk When Using a High Interaction Honeypot

Using a high interaction honeypot is the best way of using deception technology to fool attackers and get the most information out of an attempted breach. Sophisticated honeypots can simulate multiple hosts or network topologies, include HTTP and FTP servers and virtual IP addresses. The technology can identify returning hackers by marking them with a unique passive fingerprint. You could also use your honeypot solution to separate internal and external deception, keeping you safe from cyber threats that move East-West as well as North-South.

Mitigating the risk of using a high interaction honeypot is easiest when you choose a security solution that uses honeypot technology as one branch of an in-depth solution. Micro-segmentation technology is a powerful way to segment your live environment from your honeypot decoy, ensuring that attackers cannot make lateral moves to sensitive data. With the information you glean from an isolated attacker, you can enforce and strengthen your policy creation to double down on your security overall.

Sweeter than Honey

Understanding the differences between low, medium and high interaction honeypot solutions can help you make the smart choice for your company. While a low interaction honeypot might be simple to deploy and low risk, the real benefits come from using a strong, multi-faceted approach to breach detection and incident response that uses the latest high interaction honeypot technology. For ultimate security, a solution that utilizes micro-segmentation ensures an isolated environment for the honeypot. This lets you rest assured that you aren’t opening yourself up to unnecessary risk while reaping the rewards of a honeypot solution.

Micro-Segmentation and Application Discovery – Gaining Context for Accurate Action

Application discovery across all environments and application delivery technologies helps organizations achieve the best possible security protection, compliance posture, and application performance levels.