Determining security posture, and how micro-segmentation can improve it

As the recent Quora breach that compromised 100 million user accounts demonstrates, the threat of a cyber attack is ever present in the modern IT environment. Cybercrime and data breaches continue to plague small businesses and enterprises alike, and network security teams are constantly working to stay one step ahead of an attack. This is no easy task since intrusion attempts occur daily and are constantly evolving to find the smallest weakness to exploit.

Attackers can employ direct attacks on data centers and clouds, enact crypto-jacking threats to mine cryptocurrency, devise advanced persistent threat (APT) attacks to extract data while remaining hidden within a network, or even add fileless malware to manipulate in-memory vulnerabilities and access sensitive system resources.

For these reasons, it’s more important than ever for IT teams to evaluate their current security posture to ensure the safety of their sensitive information and assets. This is particularly true in hybrid cloud environments where discrete platforms take siloed approaches to security that can make infrastructure-wide visibility and a holistic approach to security policies extremely difficult. In this piece, we’ll dive into the basics of security posture and explain how Guardicore Centra can help you improve yours.

Security posture defined

Security posture is the overall defensive capability a business has across its computing infrastructure. Also referred to as cybersecurity posture, the term covers not only hardware and software resources, but also the people, policies, and processes in place to maintain security. Organizations must prioritize which areas require the most protection, manage the greatest risks, identify weaknesses, and have incident response and disaster recovery plans in place in the event a breach does occur. All of these factors determine the effectiveness, or lack thereof, of an organization’s security posture.

Identifying the areas that deserve attention

To determine an organization’s security posture, the security team first needs a complete and thorough understanding of the risks associated with operating its computing systems. Research must be conducted to quantify attack surfaces, determine risk tolerance, and identify areas within the infrastructure that require more focus.

This planning stage is particularly difficult when attempting to account for the complexities that come with a hybrid cloud infrastructure, as the dynamics of a hybrid cloud make it difficult to get a holistic view of enterprise information systems. Often different policies and controls are in place for different endpoints that exist in different clouds or on-premises.

All of this internal assessment and process scrutiny is essential to develop a foundation for a robust security posture. However, the right tools are required to enforce policies that support it. Modern integrated security techniques such as micro-segmentation and process-level visibility, which are enabled by solutions like Guardicore Centra, help enterprises ensure that they are effectively implementing their strategy and capable of meeting the security challenges of the modern hybrid cloud.

The impact of enhanced visibility on security posture

The heterogeneous nature of a hybrid cloud environment makes it difficult to scale security policies, since there usually is not an effective way to account for the entire infrastructure. Further, because you are dealing with multiple platforms and varying security controls, the possibility of blind spots and oversights increases.

The visualization features of Guardicore Centra were created with these challenges in mind. Using Centra, enterprises can drill down and rapidly discover specific applications and flows within a network, regardless of the particular platform a given node may be running on. Since Guardicore can provide visibility to the process level and enable inspection of systems down to the TCP/UDP port level, blind spots that may otherwise become exploit targets can be eliminated. In a hybrid cloud environment this means you are able to automatically and rapidly learn how applications behave within your network to build a baseline of expected behavior, and better understand how to harden your infrastructure.

The value of micro-segmentation

The more lateral movement an attacker can perform after a breach, the more damage they can do, which makes the value of micro-segmentation easy to grasp. We’re all familiar with the benefits of network segmentation using techniques such as access control lists, firewalls, and VLANs; micro-segmentation brings these controls down to the most granular levels and applies them across the entire hybrid cloud infrastructure. For users of Centra, this means least-access policies can be implemented that limit access to specific groups of users (e.g. database admins), restrict access to certain applications (e.g. a MySQL database server), and restrict access to specific ports (e.g. TCP 3306), with the flexibility of process-level context and cross-platform coverage.
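
To make this concrete, here is a minimal, hypothetical sketch of what such a least-access rule encodes. It is a tool-agnostic Python illustration, not Centra’s actual policy syntax, and the group, application, and process names are placeholders.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SegmentationRule:
    source_group: str       # who may initiate the connection (e.g. DB admins)
    destination_app: str    # the workloads being protected (e.g. MySQL servers)
    port: int               # the only TCP port allowed through
    process: str            # process-level context on the destination
    action: str = "allow"   # anything not explicitly allowed is denied

# The example from the paragraph above: database admins may reach the MySQL
# servers on TCP 3306, and only the mysqld process may receive that traffic.
rule = SegmentationRule(
    source_group="database-admins",
    destination_app="mysql-prod",
    port=3306,
    process="mysqld",
)
print(rule)
```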

As an added benefit, Centra suggests rules based on analysis of historical data, making the development of robust policies significantly easier. By removing complexity, enabling micro-segmentation, and providing process-level visibility, Centra reduces blind spots and limits exposed attack surfaces, two key components of improving security posture.

The importance of threat detection and proactive responses

In addition to enhanced visibility and micro-segmentation, identifying unrecognized and malicious intrusions and reducing dwell time are important parts of improving security posture. A pragmatic, modern organization understands that despite the best-laid plans, breaches may occur, and if and when they do, they must be rapidly detected, contained, and remediated.

To this end, Centra is uniquely capable of meeting the breach detection and incident response challenges enterprises with hybrid cloud infrastructures face. Centra uses three different detection methods (Dynamic Deception, Reputation Analysis, and Policy-Based Detection) to rapidly identify and react to attacks. By doing so, Centra helps ensure that in the event a security breach does occur, you are able to reduce the damage and minimize dwell time. This proactive approach to threat detection and response rounds out the Centra offering and helps you ensure your hybrid cloud infrastructure is secure and flexible enough to meet the challenges of modern IT security without sacrificing the performance of your infrastructure or adding unnecessary complexity.

Interested in learning more?

Guardicore Centra can help you significantly enhance your security posture, particularly in complex, difficult-to-manage hybrid cloud environments. The benefits of hybrid cloud infrastructure are clear from a capex and scalability standpoint, but the technology is not without inherent risk. Hybrid cloud suffers from a myriad of siloed approaches to security policies and controls for reducing attack surfaces across the environment.

Adopting a proactive approach to security and leveraging security solutions that enable micro-segmentation are important steps towards enhancing your security posture and protecting your systems from falling victim to the next data breach.

To learn more about how micro-segmentation can benefit your enterprise, check out the micro-segmentation hub, or set up a demo to see Guardicore Centra in action.

Want to learn more about securing your hybrid cloud environment and strengthening your security posture? Get our white paper on best practices for the technical champion.

Read More

You don’t have to be mature in order to be more secure – cloud, maturity, and micro-segmentation

Whether you’ve transitioned to the cloud, are still using on-prem servers, or are operating on a hybrid system, you need security services that are up to the task of protecting all your assets. Naturally, you want the best protection for your business assets. In the cybersecurity world, it’s generally agreed that micro-segmentation is the foundation for truly powerful, flexible, and complete cloud network security. The trouble is that conventional wisdom might tell you that you aren’t yet ready for it.

If you are using a public cloud or VMware NSX-V, you already have a limited set of basic micro-segmentation capabilities built into your infrastructure, using security groups or the NSX-V distributed firewall (DFW). But your security requirements, the way you have built your network, or your use of multiple vendors may require more than a limited set of basic capabilities.

The greatest security benefits go to enterprises that unleash the full potential of micro-segmentation beyond Layers 3 and 4 of the OSI model and use application-aware micro-segmentation. Generally, your cloud security choices will be based on the cloud maturity level of your organization. It’s assumed that enterprises that aren’t yet fully mature, according to typical cloud maturity models, won’t have the resources to implement the most advanced cloud security solutions.

But what if that’s not the case? Perhaps a different way of thinking about organizational maturity would show that you can enjoy at least some of the benefits of advanced cloud security systems. Take a closer look at a different way to assess your enterprise’s maturity.

A different way to think about your organizational maturity

Larger organizations already have a solid understanding of their maturity. They constantly monitor and reevaluate their maturity profile, so as to make the best decisions about cloud services and cloud security options. We like to compare an organization learning about the best cloud security services to people who are learning to ski.

When an adult learns how to ski, they’ll begin by buying ski equipment and signing up for ski lessons. Then they’ll spend some time learning how to use their skis and getting used to the feeling of wearing them, before they’re taught to actually ski. It could take a few lessons until an adult skis downhill. If they don’t have strong core muscles and a good sense of balance, they are likely to be sent away to improve their general fitness before trying something new. But when a child learns how to ski, they usually learn much faster than an adult, without taking as long to adjust to the new movements.

Just like an adult needs to be strong enough to learn to ski, an organization needs to be strong enough to implement cloud security services. While adults check their fitness with exercises and tests, organizations check their fitness using cloud maturity models. But typical cloud maturity models might not give an accurate picture of your maturity profile. They usually use 4, 5, or 6 levels of maturity to evaluate your organization in a number of different areas. If your enterprise hasn’t reached a particular level in enough areas, you’ll have to build up your maturity before you can implement an advanced cloud security solution.

At Guardicore, we take a different approach. We developed a solution that yields high security dividends, even if the security capabilities of your organization are not fully mature.

Assessing the maturity of ‘immature’ organizations

Most cloud security providers assume that a newer enterprise doesn’t have the maturity to use advanced cloud security systems. But we view newer enterprises like children who learn to ski. Children have less fear and more flexibility than an adult. They don’t worry about falling, and when they do fall, they simply get up and carry on. The consequences of falling can be a lot more serious for adults. In the same way, newer enterprises can be more agile, less risk-averse, and more able to try something new than an older enterprise that appears to be more mature.

Newer organizations often have these advantages:

  • Fewer silos between departments
  • Better visibility into a less complex environment
  • A much higher tolerance for risk that enables them to test new cloud services and structures, due to a lower investment in existing architecture and processes
  • A more agile and streamlined environment
  • A lighter burden of inherited infrastructure
  • A more unified environment that isn’t weakened by a patchwork of legacy items

While a newer enterprise might not be ready to run a full package of advanced cloud security solutions, it could be agile enough to implement many or most of the security features while it continues to mature. Guardicore allows young organizations to leapfrog the functions that they aren’t yet ready for, while still taking advantage of the superior protection offered by micro-segmentation. Like a child learning to ski, we’ll help you enjoy the blue runs sooner, even if you can’t yet head off-piste.

Organizational maturity in ‘mature’ organizations

Although an older, longer-established organization might seem more cloud mature, it may not be ready for advanced cloud security systems. Many older enterprises aren’t even sure what is within their own ecosystem. They face data silos, duplicate workflows, and cumbersome business processes. Factors holding them back can include:

  • Inefficient workflows
  • Long-winded work processes
  • Strange and divisive infrastructure
  • Awkward legacy environments
  • Business information that is siloed in various departments
  • Complex architectures

Here, Guardicore Centra will be instrumental in bridging the immaturity gap: It provides deep visibility through clear visualization of the entire environment, even those parts that are siloed. Guardicore Centra delivers benefits for multiple teams, and its policy engine supports (almost) any kind of organizational security policy.

What’s more, Guardicore supports phased deployment. It is not an all-or-nothing solution. An organization that can’t yet run a full set of advanced cloud security services still needs the best protection it can get for its business environment. In these situations, Guardicore helps implement only those features that your organization is ready for, while making alternative security arrangements for the rest of your enterprise. By taking it slowly, you can grow into your cloud capabilities and gradually implement the full functionality of micro-segmentation.

Flexible cloud security solutions for every organization

Guardicore’s advanced cloud security solutions provide the highest level of protection for your critical business assets. They are flexible enough to handle legacy infrastructure and complex environments, while allowing for varying levels of cloud maturity.

Whether you are a ‘young’ organization that’s not seen as cloud-mature, or an older enterprise struggling with organizational immaturity, Guardicore can help you to get your skis on. As long as you have a realistic understanding of your organization’s requirements and capabilities, you can apply the right Guardicore security solution to your business and enjoy superior protection without breaking a leg.

Lessons Learned from One of the Largest Bank Heists in Mexico

News report: $20M was stolen from Mexican banks, with the initial intention of stealing $150M. We are automatically drawn to think of a “Casa de Papel”-style heist: bank robbers wearing masks hijacking a bank and stealing money from an underground vault. This time, the bank robbers were hackers, the vault was the SPEI application, and no mask was needed. The hackers were able to figuratively “walk right in” and take the money. Nothing stopped them from entering through the back door and moving laterally until they reached the SPEI application.

Mexico’s central bank, Banco de México, also known as Banxico, has published an official report detailing the attack, the techniques used by the attackers, and how they were able to compromise several banks in Mexico to steal $20M. The report clearly emphasizes how easy it was for the attackers to reach their goal, due to insecure network architecture and a lack of controls.

The bank heist was directed at the Mexican financial system called SPEI, Mexico’s domestic money transfer platform, managed by Banxico. Once the attackers found their initial entrance into the network, they started moving laterally to find the “crown jewels”, the SPEI application servers. The report states that the lack of network segmentation enabled the intruders to use that initial access to go deeper in the network with little to no interference and reach the SPEI transaction servers easily. Moreover, the SPEI app itself and its different components had bugs and lacked adequate validation checks of communication between the application servers. This meant that within the application the attackers could create an infrastructure of control that eventually enabled them to create bogus transactions and extract the money they were after.

Questions arise: what can be learned from this heist? How do we prevent the next one? Attackers will always find their way into the network, so how do you prevent them from getting the gold?

Follow Advice to Remain Compliant

When it comes to protecting valuable customer information and achieving regulatory compliance, frameworks such as PCI-DSS and SWIFT’s customer security controls recommend the following basic steps: system integrity monitoring, vulnerability management, and segmentation and application control. For financial information, PCI-DSS regulations enforce file integrity monitoring on your Cardholder Data Environment itself, to examine the way that files change, establish the origin of such changes, and determine whether they are suspicious in nature. SWIFT regulations require customers to “Restrict internet access and protect critical systems from the general IT environment” and encourage companies to implement internal segmentation within each secure zone to further reduce the attack surface.

Let’s look at a few guidelines, as detailed by SWIFT while incorporating our general advice on remaining compliant in a hybrid environment.

  • Inbound and outbound connectivity for the secure zone is fully limited.
  • Transport layer stateful firewalls are used to create logical separation at the boundary of the secure zone.
  • No “allow any” firewall rules are implemented, and all network flows are explicitly authorized (see the short audit sketch after this list).
  • Operators connect from dedicated operator PCs located within the secure zone (that is, PCs located within the secure zone and used only for secure zone purposes).
  • SWIFT systems within the secure zone restrict administrative access to only expected ports, protocols, and originating IPs.
  • Internal segmentation is implemented between components in the secure zone to further reduce the risk.
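
As a small illustration of the “no allow any” guideline, here is a hypothetical audit sketch in Python. The rule format is an assumption; adapt it to whatever your firewall tooling actually exports.

```python
# Hypothetical firewall rule export -- the field names are assumptions.
rules = [
    {"src": "10.10.0.0/24", "dst": "10.20.0.5", "port": "443", "action": "allow"},
    {"src": "any",          "dst": "10.20.0.5", "port": "any", "action": "allow"},
    {"src": "10.10.0.7",    "dst": "10.20.0.5", "port": "22",  "action": "allow"},
]

# Flag any "allow" rule whose source or port is left unrestricted.
for i, rule in enumerate(rules):
    if rule["action"] == "allow" and "any" in (rule["src"], rule["port"]):
        print(f"Rule {i} is overly permissive and needs an explicit source and port: {rule}")
```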

SPEI servers, which serve a similar function to SWIFT application servers, should adhere to similar regulatory requirements, and as Banxico elaborates in its official analysis report, such regulations are now taking shape for this critical application.

Don’t Rely on Traditional Security Controls

The protocols detailed above are recommended by security experts and compliance regulations worldwide, so it’s safe to assume the Mexican banks’ security teams were aware of the benefits of such controls. Many of them have even been open about their attempts to implement these kinds of controls with traditionally available tools such as VLANs and endpoint firewalls. This has proven to be a long, costly, and tiresome process, sometimes requiring nine months of work to segment a single SWIFT application! Would you take nine months to install a metal gate around your vault and between your vault compartments? I didn’t think so…

Guardicore Centra is built to resolve this challenge. By moving away from traditional segmentation methods to micro-segmentation built on foundational, actionable data center visibility, the technology delivers quick time to value, with controls down to the process level. Our customers, including Santander Brasil and BancoDelBajio in Mexico, benefit from early wins like protecting critical assets and achieving regulatory compliance, avoiding the “all or nothing segmentation” trap that occurs when a phased approach is not taken.

Guardicore provides the whole package to secure the data center, including real-time and historical visibility down to the process level, segmentation and micro-segmentation supporting a variety of segmentation use cases, and breach detection and response, to thoroughly strengthen our clients’ overall security posture.

Micro-segmentation is more achievable than ever before. Let’s upgrade your company’s security practices to prevent attackers from gaining access to sensitive information and crown jewels in your hybrid data center. Request a demo now or read more about smart segmentation.

Read More

Micro-Segmentation: Getting Done Faster With Machine Learning

Building micro-segmentation policies around workloads to address compliance, reduce attack surfaces, and prevent threat propagation between machines is on every organization’s security agenda and has made it onto the CISO’s 2019 shortlist. Yet deploying segmentation policies in hybrid data centers often proves harder than it looks. At Guardicore, we are very proud of our ability to help customers segment and micro-segment their clouds and data centers quickly, protecting their workloads across any environment and achieving a fast return on security investments.

But we always believe there is room for improvement. Analyzing the different tasks involved in micro-segmentation, we identified several steps that can be accelerated with more sophisticated code. Using data collected from our customers and studied by Guardicore Labs, we added machine learning capabilities that accelerate micro-segmentation.

To properly micro-segment a large environment, you must discover all the workloads, create application dependency mappings, classify the workloads, and label them accordingly. Next, you need to understand how each application is tiered and how it behaves in order to set micro-segmentation policies both for its internal components and for the other entities it serves.

This is where our machine learning capabilities can assist.

We take advantage of the fact that Guardicore deployments collect information about every flow in the network. Discovery is automatic, producing a visualization of all application communications and dependencies; the visualized map shows how workloads communicate. Our algorithms model the network as an annotated graph and use a customized unsupervised machine learning technique to cluster similar workloads into groups based on their communication patterns (a simplified sketch of this idea appears after the list below). Then, Centra can perform the following tasks:

  • Automatic classification of workloads
  • Automatic label creation for applications and their tiers
  • Automatic rule suggestion for flow-level segmentation and process-level micro-segmentation
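
To give a feel for the general approach, here is a simplified sketch of clustering workloads by communication patterns. It is an illustrative toy, not Guardicore’s actual algorithm; the flow records, feature construction, and cluster count are all assumptions.

```python
from collections import defaultdict

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy flow records: (source workload, destination workload, destination port).
flows = [
    ("web-1", "app-1", 8080), ("web-2", "app-1", 8080),
    ("web-1", "app-2", 8080), ("web-2", "app-2", 8080),
    ("app-1", "db-1", 3306),  ("app-2", "db-1", 3306),
]

# Per-workload feature vector: counts of outbound/inbound flows per port.
ports = sorted({port for _, _, port in flows})
col = {("out", p): i for i, p in enumerate(ports)}
col.update({("in", p): len(ports) + i for i, p in enumerate(ports)})

features = defaultdict(lambda: np.zeros(2 * len(ports)))
for src, dst, port in flows:
    features[src][col[("out", port)]] += 1
    features[dst][col[("in", port)]] += 1

workloads = sorted(features)
X = np.stack([features[w] for w in workloads])

# Unsupervised clustering groups workloads with similar communication
# patterns; in this toy example it recovers the web, app, and db tiers.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
for workload, label in zip(workloads, labels):
    print(f"{workload}: cluster {label}")
```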

Here is an example of running classification from Reveal’s data center map:

[Screenshot: running classification from Reveal with ML]

Below is a visualization of the results of automatic workload classification:

[Screenshot: results of automatic workload classification with machine learning]

And this is how it looks in Reveal, at the application tier:

[Screenshot: Reveal view with ML]

Want to learn more about our solution? Contact us.

Cloud migration challenges and risks – prevent and overcome them

Even though it seems to be almost ubiquitous, cloud computing continues to grow at an impressive rate. According to Gartner, public cloud revenues as a whole will grow by 17.3% in 2019, and the IaaS (Infrastructure as a Service) market will experience 27.6% growth. What this means is that more and more organizations will need to navigate the cloud migration challenges associated with maintaining a hybrid cloud infrastructure in order to reap the benefits of the cloud.

While there are a number of benefits to cloud migration, there are also operational, security, and financial risks that must be accounted for. In this piece, we’ll dive into the different approaches to cloud migration, some of the cloud migration challenges many organizations face, and how to effectively address those challenges to minimize your risk and maximize the upside of the cloud.

Approaches to cloud migration

At a high level, there are three different approaches an organization can take to cloud migration, each with its own set of pros and cons. Aater Suleman summarized the three main approaches well in his Forbes piece. They are:

  • Rehost. Simply move workloads as they are. While simple and less work-intensive than the other methods, the downside here is the inability to maximize the cost and performance benefits of operating in the cloud (e.g. elasticity).
  • Replatform. Make minor changes to workloads to help capture some of the inherent benefits of the cloud (e.g. use a managed database for an app). Replatforming seeks to find a middle ground between the benefits of rehosting and refactoring.
  • Refactor. Re-architect the workloads to maximize the benefits of the cloud. While refactoring is the most work-intensive upfront, it also positions enterprises to maximize the cost and performance benefits of the cloud.

Common challenges and risks of cloud migration

In addition to weighing the pros and cons of the different cloud migration strategies, organizations must be able to identify and overcome the inherent cloud migration risks and challenges that come with shifting workloads off of on-premises hardware. Below, we’ll review three of the most common.

Developing the right strategy to address cloud migration risks

Strategy is vital to any major IT endeavor, and cloud migration is no different. A major part of developing the right strategy is selecting the right approach (rehost, replatform, or refactor) to your migration. While this will have a major impact on ROI and operations, it is not the only area to consider when planning a cloud migration.

Another key component of a cloud migration strategy is knowing what solutions you should say “no” to. Wasted spend is a big cloud migration risk. How big? Consider the statistics that suggest 35% of cloud spend is wasted. Understanding what your business needs, and what it doesn’t, will help you properly plan and avoid wasted spend. Paying for additional cloud infrastructure you don’t need and won’t use isn’t only a poor investment, it also unnecessarily increases your attack surface.

Maintaining application visibility in a hybrid cloud

The cloud comes with challenges beyond wasted spend as well. Generally, security policies are applied within the context of a given cloud platform (e.g. AWS, Azure, GCP, private clouds) or on-premises data center. This siloed approach to infrastructure leads to disjointed security policies and one-off configurations that make capturing a holistic, granular view of data across the entirety of a network a real challenge.

Lack of visibility can hurt both before and after a migration, particularly when using a “rehost” approach. For example, in order to understand how an application performs, what its dependencies are, and what ports it uses, granular, process-level visibility is required. Similarly, detailed visibility is required after the migration to ensure the app is operating as expected.
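
As a simple illustration of the kind of pre-migration dependency inventory this visibility supports, here is a hypothetical Python sketch. The flow-record format and application names are assumptions.

```python
from collections import Counter

# Hypothetical flow records observed for the application being migrated:
# (source application, destination application, destination port).
flows = [
    ("billing-web", "billing-api", 443),
    ("billing-api", "billing-db", 5432),
    ("billing-api", "billing-db", 5432),
    ("billing-api", "corp-ldap", 636),
]

# Summarize every dependency and the port it relies on, so nothing is
# forgotten when the workload is rehosted.
dependencies = Counter(flows)
for (src, dst, port), count in sorted(dependencies.items()):
    print(f"{src} -> {dst}:{port}  ({count} flow(s) observed)")
```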

Adapting security to fit the hybrid cloud model

Another important part of executing a cloud migration is understanding and accounting for the complexity it can add to network security once it is complete. We often think of cloud migrations as a way to minimize complexity in IT. After all, the provisioning, maintenance, and patching of software and hardware can be abstracted away and taken care of by a service provider. However, from a security perspective, the more discrete clouds and solutions you implement, the more silos you create. As a result, it becomes more difficult to maintain robust, scalable, and holistic security policies. This complexity is only compounded when a single application spans multiple cloud configurations.

In short, the hybrid cloud model is fundamentally different than an on-premises model. Multiple discrete infrastructures and services each have their own wrinkles that make developing policies that can scale and span the entirety of an enterprise difficult. As a result, you are left with multiple silos within your infrastructure that create blind spots, lead to more maintenance, require more resources, and demand more time and energy from the security professionals on your team.

Addressing cloud migration challenges with Guardicore

Some of the challenges we have discussed thus far, namely selecting the right approach for your cloud migration and knowing when to say “no” to unnecessary solutions, can be mitigated with proper planning and an understanding of your infrastructure needs. However, from an operational perspective, you’ll still require tools that enable the visibility, flexibility, and security required to effectively execute a cloud migration and implement enterprise-grade security thereafter.

This is where a solution like Guardicore Centra can add a tremendous amount of value. Since it is designed from the ground up to solve the security and visibility problems facing the modern enterprise, Centra users are able to create and enforce security policies that span clouds and on-premises environments, helping to break through silos. Further, Centra enables the creation of cloud-ready policies with features like auto-scaling that enable users to get the most out of the flexible, burstable nature of the cloud without compromising security.

Centra offers process-level visibility across clouds and on-premises environments, which enables detailed planning before a migration and performance monitoring after it. Centra also supports a wide variety of cloud API integrations that enable enterprises to capture granular details on migrated infrastructure. Additionally, Centra is able to use dynamic labeling and integrate with Software Defined Data Center (SDDC) controllers, orchestration tools, and bare metal hardware to ensure that security policies follow instances no matter where they are deployed. You can learn more about Centra on the Centra Product Page.

Ready to get started with your cloud migration?

As we have seen, there are a number of factors to consider when planning a cloud migration. Enterprises must be diligent and ensure they aren’t making strategic or operational errors when making the leap. By properly strategizing prior to your migration and leveraging a solution like Guardicore Centra, you can help resolve the inherent cloud migration challenges involved in shifting workloads to the cloud. This will position your business to get the most ROI on your cloud spend and help ensure your IT security is not compromised due to silos and blind spots.

If you’re interested in learning more about how Guardicore can help ensure your next cloud migration is a success, check out our Cloud Migration Use Case Page or contact us today.

The AWS Cloud Security Issues You Don’t Want to Ignore

According to Gartner, through 2022, 95% of cloud security failures will be the customer’s fault. Using the cloud securely on AWS means building a cloud security strategy that faces the challenges head on, with a full understanding of the shared responsibility model and its blind spots.

Securing Containers in AWS

One of the biggest issues when using AWS is securing the container network. This is due to the lack of context that the VPC has for any overlay network running on top of it. Amazon Security Groups can apply security policies to each cluster, but are unable to do so for individual pods, making this technology insufficient on its own. When your business is attempting to troubleshoot or gain better visibility into communications, insight stops at the traffic between the hosts in the cluster rather than the pods, resulting in security blind spots.

As a result, you need two solutions to control your cloud-hosted network: one handles your VM policies, while another governs your containers. Creating network policies for a single application that includes both containers and VMs therefore requires separate solutions. Your business now has two sets of controls to manage, with all the maintenance and administration that comes with them. This adds complexity and risk, when your move to the cloud was probably meant to make your infrastructure and security easier, not more complicated.

Lack of visibility in AWS

62% of IT decision makers at large enterprises believe that their on-premises security is stronger than their cloud security. On premises, these security experts feel that they have control over their IT environment and the data and communications within, and by moving to the cloud, they lose that control and visibility.

With smart micro-segmentation, this doesn’t have to be the case. Going further than AWS security groups, Guardicore Centra provides enhanced visibility, automatically discovering all applications and flows down to the process level (Layer 7). It uses AWS API integration to pull orchestration data and labels, providing valuable context for application mapping, and allows you to baseline your infrastructure in an intelligent, informed way: by understanding how your applications behave and communicate, Centra can detect and alert on changes. Because the Centra solution works across multiple cloud vendors, businesses can use it to gain visibility and apply policy controls across a heterogeneous environment without being tied to any one cloud vendor or infrastructure.

Application-Aware Policy Creation and Control

On premises, companies are used to utilizing next-generation firewalls (NGFWs) to protect and segment applications. In the cloud, AWS doesn’t provide the same functionality: AWS security groups can segment applications only in a restricted manner, controlling traffic down to Layer 4 (ports and IPs), as the sketch below illustrates. With Centra, you can benefit from application-aware security policies that work with dynamic AWS applications down to the process level. Rather than managing two or more sets of controls, Centra works across any infrastructure, including multi-cloud and hybrid data centers, multiple IaaS providers, physical servers on premises, containers, and microservices. As the policy follows the workload, enterprises can enjoy dynamic flexibility without compromising security.
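
For context, here is roughly what that Layer 4 limit looks like in practice: a hedged boto3 sketch that allows MySQL traffic into a database-tier security group only from an application-tier security group. The group IDs and region are placeholders; note that the rule can express ports, protocols, and sources, but nothing about which process is actually listening.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Allow TCP 3306 into the (hypothetical) database-tier security group only
# from the (hypothetical) app-tier security group. This is as granular as
# security groups get: Layer 3/4 only, with no process-level awareness.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder DB-tier security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{
            "GroupId": "sg-0fedcba987654321f",  # placeholder app-tier group
            "Description": "App tier to MySQL",
        }],
    }],
)
```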

One solution across all of these environments promotes an atmosphere of simplicity in your data centers, with smart labeling and grouping that provides one ‘single pane of glass’ view into the most complex of infrastructures. Your staff have easy navigation and insight into problems when they occur, and can define segmentation policy in a matter of minutes, rather than relying on trial and error.

Navigating the Blind Spots to Securely Benefit from AWS

Using AWS securely means understanding that it is your role as the customer to stay on top of securing customer data, as well as platform, application, identity and access management, and any OS, network or firewall configuration. Cloud users need to be prepared to go above and beyond to ensure that their workloads are safe, especially when working across multi or hybrid-cloud environments.

When implemented correctly, micro-segmentation offers a simple way to secure a hybrid environment, including solving the unique challenges of containers on AWS and providing the ability to create dynamic application policies down to process level. We believe the best solutions start with foundational visibility, automatically discovering all network flows and dependencies. This allows your business to take advantage of the latest technological advancements without increasing risk or complexity for your security teams.

5 Docker Security Best Practices to Avoid Breaches

Docker has had a major impact on the world of IT over the last five years, and its popularity continues to surge. Since its release in 2013, 3.5 million apps have been “Dockerized” and 37 billion Docker containers have been downloaded. Enterprises and individual users have been implementing Docker containers in a variety of use-cases to deploy applications in a fast, efficient, and scalable manner.

There are a number of compelling benefits for organizations that adopt Docker, but like with any technology, there are security concerns as well. For example, the recently discovered runc container breakout vulnerability (CVE-2019-5736) could allow malicious containers to compromise a host machine. What this means is organizations that adopt Docker need to be sure to do so in a way that takes security into account. In this piece, we’ll provide an overview of the benefits of Docker and then dive into 5 Docker security best practices to help keep your infrastructure and applications secure.

Benefits of Docker

Many new to the world of containerization and Docker are often confused about what makes containers different from running virtual machines on top of a hypervisor. After all, both are ways of running multiple logically isolated apps on the same hardware.

Why then would anyone bother with containerization if virtual machines are available? Why are so many DevOps teams such big proponents of Docker? Simply put, containers are more lightweight, scalable, and a better fit for many use cases related to automation and application delivery. This is because containers abstract away the need for an underlying hypervisor and can run on a single operating system.

Using web apps as an example, let’s review the differences.

In a typical hypervisor/virtual machine configuration you have bare metal hardware, the hypervisor (e.g. VMware ESXi), the guest operating system (e.g. Ubuntu), the binaries and libraries required to run an application, and then the application itself. Generally, another set of binaries and libraries for a different app would require a new guest operating system.

With containerization you have bare metal hardware, an operating system, the container engine, the binaries and libraries required to run an application, and the application itself. You can then stack more containers running different binaries and libraries on the same operating system, significantly reducing overhead and increasing efficiency and portability.

When coupled with orchestration tools like Kubernetes or Docker Swarm, the benefits of Docker are magnified even further.

Docker Security Best Practices

With an understanding of the benefits of Docker, let’s move on to 5 Docker security best practices that can help you address your Docker security concerns and keep your network infrastructure secure.

#1 Secure the Docker host

As any infosec professional will tell you, truly robust security must be holistic. With Docker containers, that means not only securing the containers themselves, but also the host machines that run them. Containers on a given host all share that host’s kernel. If an attacker is able to compromise the host, all your containers are at risk. This means that using secure, up to date operating systems and kernel versions is vitally important. Ensure that your patch and update processes are well defined and audit systems for outdated operating system and kernel versions regularly.

#2 Only use trusted Docker images

It’s a common practice to download and leverage Docker images from Docker Hub. Doing so provides DevOps teams an easy way to get a container for a given purpose up and running quickly. Why reinvent the wheel?

However, not all Docker images are created equal, and a malicious user could create an image that includes backdoors and malware to compromise your network. This isn’t just a theoretical possibility either: last year, Ars Technica reported that a single Docker Hub account had posted 17 images containing a backdoor, and those images were downloaded 5 million times. To help avoid falling victim to a similar attack, only use trusted Docker images. It’s good practice to use images that are “Docker Certified” whenever possible or to use images from a reputable “Verified Publisher”.

#3 Don’t run Docker containers using --privileged or --cap-add

If you’re familiar with why you should NOT “sudo” every Linux command you run, this tip will make intuitive sense. The --privileged flag gives your container full capabilities. This includes access to kernel capabilities that could be dangerous, so only use this flag to run your containers if you have a very specific reason to do so.

Similarly, you can use the --cap-add switch to grant specific capabilities that aren’t granted to containers by default. Following the principle of least privilege, you should only use --cap-add if there is a well-defined reason to do so.
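
Here is a small sketch of the same principle using the official Docker SDK for Python (the equivalent of the docker run flags above); the image and the capability chosen are just examples.

```python
import docker

client = docker.from_env()

# Avoid: privileged=True hands the container nearly all kernel capabilities.
# client.containers.run("nginx:alpine", detach=True, privileged=True)

# Prefer: keep the default (non-privileged) capability set and add only the
# specific capability the workload actually needs.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    privileged=False,                # the default, stated here for clarity
    cap_add=["NET_BIND_SERVICE"],    # narrowly scoped extra capability
)
print("started container", container.short_id)
```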

#4 Use Docker Volumes for your data

By storing data (e.g. database files and logs) in Docker volumes as opposed to within a container, you enhance data security and ensure your data persists even if the container is removed. Additionally, volumes can enable secure data sharing between multiple containers, and their contents can be encrypted for secure storage at third-party locations (e.g. a co-location data center or cloud service provider).
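
A brief sketch with the Docker SDK for Python shows the idea; the image, volume name, and mount path are examples, not a recommendation for any particular setup.

```python
import docker

client = docker.from_env()

# Keep database files in a named volume rather than the container's writable
# layer, so the data survives if the container is removed and recreated.
client.volumes.create(name="db-data")  # example volume name
container = client.containers.run(
    "postgres:15",
    detach=True,
    environment={"POSTGRES_PASSWORD": "change-me"},  # placeholder credential
    volumes={"db-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print("postgres running in", container.short_id)
```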

#5 Maintain Docker Network Security

As container usage grows, teams develop a larger and more complex network of Docker containers within Kubernetes clusters, and analyzing and auditing traffic flows becomes harder. Finding a balance between security and performance in these environments can be a difficult balancing act. If security policies are too strict, the inherent advantages of agility, speed, and scalability offered by containers are hamstrung. If they are too lax, breaches can go undetected and an entire network could be compromised.

Process-level visibility, tracking network flows between containers, and effectively implementing micro-segmentation are all important parts of Docker network security. Doing so requires tools and platforms that can help integrate with Docker and implement security without stifling the benefits of containerization. This is where Guardicore Centra can assist.

How Guardicore Centra helps enhance Docker Network Security

The Centra security platform takes a holistic approach to network security that includes integration with containers. Centra is able to provide visibility into individual containers, track network flows and process information, and implement micro-segmentation for any size deployment of Docker & Kubernetes.

For example, with Centra, you can create scalable segmentation policies that take into account both pod-to-pod traffic flows and flows to and from bare metal or virtual machines, without negatively impacting performance. Additionally, Centra can help DevSecOps teams implement and demonstrate the monitoring and segmentation required for compliance with standards such as PCI-DSS 3.2. For more on how Guardicore Centra can help enable Docker network security, check out the Container Security Use Case page.

Interested in learning more?

There are a variety of Docker security issues you’ll need to be prepared to address if you want to securely leverage containers within your network. By following the 5 Docker security best practices we reviewed here, you’ll be off to a great start. If you’re interested in learning more about Docker network security, check out our How to Leverage Micro-Segmentation for Container Security webinar. If you’d like to discuss Docker security with a team of experts that understand Docker security requires a holistic approach that leverages a variety of tools and techniques, contact us today!

Are you Protected against These Common Types of Cyber Attacks?

The types of cyber-security attacks that businesses need to protect themselves from are continually growing and evolving. Keeping your company secure means having insight into the most common threats, and the categories of cyber attacks that might go unnoticed. From how to use the principle of least privilege to which connections you need to be monitoring, we look at the top types of network attacks and how to level up your security for 2019.

Watering Hole Attacks

A watering hole attack uses an infected website, where vulnerabilities in software or design are leveraged to embed malicious code. One well-known example is MageCart, the consumer website malware campaign: at least half a dozen criminal groups use this toolkit, notably in a payment-card skimming exploit that has used JavaScript code on the checkout pages of major retailers to steal card details.

Last year, Guardicore Labs discovered Operation Prowli, a campaign that compromised more than 40,000 machines around the world, using attack techniques such as brute-force, exploits, and the leveraging of weak configurations. This was achieved by targeting CMS servers hosting popular websites, backup servers running HP Data Protector, DSL modems and IoT devices among other infrastructure. Consumers were tricked and diverted from legitimate websites to fake ones, and the attackers then spread malware and malicious code to over 9,000 companies through scam services and browser extensions. This kind of attack puts a whole organization in jeopardy.

Watering hole attacks are more effective when an attacker homes in on the websites that you and your employees use regularly, so pay particular attention to the sites your teams depend on. On top of this, always make sure that your software is up to date so that attackers cannot leverage vulnerabilities to complete these types of cyber attacks. Lastly, ensure you have a method in place to closely watch network traffic and prevent intrusions.

Third-Party Service Vulnerabilities

Today’s surge in connectivity means that enterprises increasingly rely on third-party services for backup, storage, scale, or managed security (MSSPs), to name a few examples. Attackers are increasingly managing to infiltrate networks through connections with other businesses that have access to your data center or systems. According to the Ponemon Institute, more than half of businesses have suffered a breach due to access through a third-party vendor; one example is the devastating Home Depot breach, where attackers used a third-party vendor’s credentials to steal more than 56 million customer credit and debit card details.

As well as current suppliers, businesses need to be aware of previous suppliers who might not have removed your information from their systems, and of breaches of confidentiality where third parties have sold or shared your data with another, unknown party. As such, your company needs visibility into all your communication flows, including those with third-party vendors, suppliers, or cloud services, as well as in-depth incident response to handle these kinds of attacks.

Web Application Attacks

When it comes to categories of cyber attacks that use web applications, SQL injection is one of the most common. An attacker simply inserts additional SQL commands into an application database query, allowing them to access data from the database, modify or delete the data, and sometimes even execute operations or issue commands to the operating system itself. This can be done in a number of ways, often through client-server web forms, by modifying cookies, or by using server variables such as HTTP headers.
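
To illustrate the mechanics, here is a minimal Python/sqlite3 sketch contrasting a string-built query with a parameterized one; the schema and the attacker input are contrived for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [("alice", 0), ("bob", 1)])

user_input = "nobody' OR '1'='1"  # attacker-controlled form field

# Vulnerable: the input is concatenated into the SQL text, so the injected
# OR clause makes the WHERE condition true for every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("string-built query returned", len(rows), "row(s)")   # 2

# Safer: a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned", len(rows), "row(s)")  # 0
```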

Another example of a web application attack exploits deserialization vulnerabilities. Inherent design flaws in many serialization and deserialization specifications mean that systems will convert any serialized stream into an object without validating its content. At the application level, companies need to be sure that deserialization endpoints are only accessible to trusted users.
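
Here is a short Python illustration of why blind deserialization is dangerous; the article doesn’t name a language or framework, so this pickle example is just one common case.

```python
import json
import pickle


class Exploit:
    # pickle calls __reduce__ when rebuilding the object, so an attacker can
    # make deserialization execute an arbitrary command.
    def __reduce__(self):
        import os
        return (os.system, ("echo attacker code runs here",))


malicious_blob = pickle.dumps(Exploit())
# pickle.loads(malicious_blob)  # DANGEROUS: would run the attacker's command

# Safer for untrusted input: a data-only format plus explicit validation.
payload = json.loads('{"user_id": 42, "action": "view"}')
assert set(payload) == {"user_id", "action"}, "unexpected fields in payload"
print("validated payload:", payload)
```
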
Giving web applications the minimum privilege necessary is one way to keep these types of cyber-security attacks from breaching your network. Ensuring you have full visibility into connections and flows to your database server is also essential, with alerts set up for any suspicious activity.

What Can Attackers Do Once They Have Access to Your Network?

  • Ransomware: Attackers can use all types of network attacks to withhold access to your data and operations, usually through encryption, in the hope of a pay-out.
  • Data destruction or theft: Once attackers have breached your perimeter, without controls in place they can access critical assets such as customer data, which can be destroyed or stolen, causing untold brand damage and legal consequences.
  • Crypto-jacking: These types of cyber attacks are usually initiated when a user downloads malicious crypto-mining code onto their machine, or by brute-forcing SSH credentials, like the ‘Butter’ attacks monitored by Guardicore Labs over the past few years.
  • Pivoting to attack other internal applications: If a hacker breaches one area, they can leverage user credentials to escalate their privileges or move laterally to another, more sensitive area. This is why it’s so important to isolate critical assets and to take advantage of easy, early wins like separating the production arm of your company from development.

The Most Common Types of Cyber-Security Attacks are Always Evolving

With so many types of cyber attacks putting your network at risk, and subtle changes turning even known quantities into new threats, visibility across your whole ecosystem is foundational for a well-protected IT environment.

As well as using micro-segmentation to separate environments, you can create policies that secure endpoints and servers with application segmentation. This helps stop a breach from escalating, with strong segmentation policies that secure your communication flows according to the principle of least privilege.

On top of this, complementary controls that include breach detection and incident response with visibility at their core ensure that nothing sinister can fly under your radar.

The cost of over-compliance

A few weeks ago I visited a prospect who presented me with an interesting business case. They are a financial services company with all their applications hosted on premises. As expected from a financial services company, they are heavily regulated, having to meet PCI-DSS and other standards and requirements.

When they started their business roughly 10 years ago, the core set of their applications fell under one regulation or another. At that time, a plausible solution was to define their entire production environment as “regulated” and implement all the requirements there. The overhead was small, and it made a lot of sense to simplify the management of segregating regulated from non-regulated systems.

But over the years the situation has changed quite a lot. In addition to the financial applications that remain regulated, they have added tens of other applications to their production environment, and now fewer than 50% of their servers actually run regulated applications, so the overhead has become quite large. They estimate that a few hundred thousand dollars are “wasted” annually on compliance where it is not needed (software licenses, auditing hours, the time of internal compliance-focused engineers, and so on).

So “why not separate the irrelevant applications from the regulated data center?” you might ask, and so did I. But here are a few challenges that the prospect presented me with:

  1. The data center is quite complex today, spanning several different virtualization solutions, networking equipment, and so on, so separating the applications into different VLANs would require quite a lot of networking effort.
  2. The regulated and non-regulated applications are interconnected – mapping those dependencies (to identify the firewall rules) is a very complex task without the right visibility.
  3. Some applications are business-critical, and they cannot afford the downtime associated with moving them to another VLAN, changing their IPs, and so on – just the thought of it scares everyone, from application owners to leadership.
  4. Looking deeper into the regulatory requirements, they would like to split the “regulated part” even further into separate segments, driving compliance and auditing costs down even more. So take all the problems above and multiply them…
  5. As with all modern organizations, they would like to embrace “new” technologies such as cloud, so any change they implement in their IT must accommodate this easily and allow for future expansion.

What a perfect use case for an overlay segmentation solution such as Guardicore! We can help implement segments of any size, across any infrastructure, without any downtime, and save quite a lot of money in the process of uplifting their security posture.

Want to hear more? Talk to us.

Understanding and Avoiding Security Misconfiguration

Security misconfiguration is simply defined as failing to implement all the security controls for a server or web application, or implementing them with errors. What a company thought was a safe environment actually has dangerous gaps or mistakes that leave the organization open to risk. According to the OWASP Top 10, this type of misconfiguration is number 6 on the list of critical web application security risks.

How to Detect Security Misconfiguration – Diagnosing and Determining the Issue

The truth is, you probably do have misconfigurations in your security, as this is a widespread problem that can happen at any level of the application stack. Some of the most common misconfigurations in traditional data centers include default configurations that have never been changed and remain insecure, incomplete configurations that were intended to be temporary, and wrong assumptions about an application’s expected network behavior and connectivity requirements.

In today’s hybrid data centers and cloud environments, and with the complexity of applications, operating systems, frameworks and workloads, this challenge is growing. These environments are technologically diverse and rapidly changing, making it difficult to understand and introduce the right controls for secure configuration. Without the right level of visibility, security misconfiguration is opening new risks for heterogeneous environments. These include:

  • Unnecessary administration ports that are open for an application. These expose the application to remote attacks (see the short check after this list).
  • Outbound connections to various internet services. These could reveal unwanted behavior of the application in a critical environment.
  • Legacy applications that are trying to communicate with applications that do not exist anymore. Attackers could mimic these applications to establish a connection.
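
As a tiny illustration of catching the first risk above, here is a hypothetical sketch that probes application hosts for administrative ports they should not be exposing. The host list and port set are assumptions; real discovery would come from your visibility tooling rather than an ad-hoc scan.

```python
import socket

ADMIN_PORTS = {22: "SSH", 3389: "RDP", 5900: "VNC", 8443: "admin console"}
app_hosts = ["10.0.1.10", "10.0.1.11"]  # hypothetical application servers

for host in app_hosts:
    for port, service in ADMIN_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            if sock.connect_ex((host, port)) == 0:
                print(f"{host}: {service} ({port}/tcp) is reachable -- is it meant to be?")
```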

The Enhanced Risk of Misconfiguration in a Hybrid-Cloud Environment

While security misconfiguration in traditional data centers puts companies at risk of unauthorized access to application resources, data exposure, and in-organization threats, the advent of the cloud has increased the threat landscape exponentially. It comes as no surprise that “2017 saw an incredible 424 percent increase in records breached through misconfigurations in cloud servers,” according to a recent report by IBM. This kind of cloud security misconfiguration accounted for almost 70% of the overall compromised data records that year.

One element to consider in a hybrid environment is the use of public cloud services, third party services, and applications that are hosted in different infrastructure. Unauthorized application access, both from external sources or internal applications or legacy applications can open a business up to a large amount of risk.

Firewalls can often suffer from misconfiguration, with policies left dangerously loose and permissive, providing a large amount of exposure to the network. In many cases, production environments are not firewalled from development environments, or firewalls are not used to enforce least privilege where it could be most beneficial.

Private servers with third-party vendors or software can lack visibility or an understanding of shared responsibility, often resulting in misconfiguration. One example is the 2018 Exactis breach, where 340 million records were exposed, affecting more than 21 million companies. Exactis was responsible for its data, despite the fact that it used standard and commonly deployed Elasticsearch infrastructure as its database. Critically, it failed to implement any access control to manage this shared responsibility.

With so much complexity in a heterogeneous environment, and human error often responsible for misconfiguration that may well be outside of your control, how can you demystify errors and keep your business safe?

Learning about Application Behavior to Mitigate the Risk of Misconfiguration

Visibility is your new best friend when it comes to fighting security misconfiguration in a hybrid cloud environment. Your business needs to learn the behavior of its applications, focusing on each critical asset and how it behaves. To do this, you need an accurate, real-time map of your entire ecosystem, which shows you communication and flows across your data center environment, whether that’s on premises, bare metal, hybrid cloud, or using containers and microservices.

This visibility not only helps you learn more about expected application behaviors, it also allows you to identify potential misconfigurations at a glance. An example could be revealing repeated connection failures from one specific application. On exploration, you may uncover that it is attempting to connect to a legacy application that is no longer in use. Without a real-time map into communications and flows, this could well have been the cause of a breach, where malware imitated the abandoned application to extract data or expose application behaviors. With foundational visibility, you can use this information to remove any disused or unnecessary applications or features.

Once you gain visibility, and you have a thorough understanding of your entire environment, the best way to manage risk is to lock down the most critical infrastructure, allowing only desired behavior, in a similar method to a zero-trust model. Any communication which is not necessary for an application should be blocked. This is what OWASP calls a ‘segmented application architecture’ and is their recommendation for protecting yourself against security misconfiguration.

Micro-segmentation is an effective way to make this happen. Strict policy protects communication to the most sensitive applications and therefore its information, so that even if a breach happens due to security misconfiguration, attackers cannot pivot to the most critical areas.

Visibility and Smart Policy Limit the Risk of Security Misconfiguration

The chances are, your business is already plagued by security misconfiguration. Complex and dynamic data centers are only increasing the risk of human error, as we add third-party services, external vendors, and public cloud management to our business ecosystems.

Guardicore Centra provides an accurate and detailed map of your hybrid-cloud data center as an important first step, enabling you to automatically identify unusual behavior and remove or mitigate unpatched features and applications, as well as identify anomalies in communication.

Once you’ve revealed your critical assets, you can then use micro-segmentation policy to ensure you are protected in case of a breach, limiting the attack surface if misconfigurations go unresolved, or if patch management is delayed on-premises or by external vendors. This all in one solution of visibility, breach detection and response is a powerful tool to protect your hybrid-cloud environment against security misconfiguration, and to amp up your security posture as a whole.

Want to hear more about Guardicore Centra and micro-segmentation? Get in touch