
Trials and Tribulations – A Practical Look at the Challenges of Azure Security Groups and Flow Logs

Cloud Security Groups are the firewalls of the cloud. They are built in and provide basic access control functionality as part of the shared responsibility model. However, Cloud Security Groups do not provide the protection or functionality that enterprises have come to expect from on-premises deployments. While Next-Generation Firewalls protect and segment applications at the on-premises perimeter, AWS, Azure, and GCP do not mirror this capability in the cloud. Segmenting applications with Cloud Security Groups is restricted to Layer 4 traffic: ports and IPs. This means that to benefit from application-aware security capabilities for your cloud applications, you will need an additional set of controls that is not available with the built-in functionality of Cloud Security Groups.

The basic function that Cloud Security Groups provide is network separation, so they are best compared to what VLANs, Access Control Lists on switches, and endpoint firewalls provide on premises. Unfortunately, Cloud Security Groups come with similar ailments and limitations, which makes them complex, expensive, and ultimately ineffective for modern networks that are hybrid and require adequate segmentation. To create application-aware policies and micro-segment an application, you need to visualize application dependencies, which Cloud Security Groups do not support. Furthermore, if your application dependencies cross regions within the same cloud provider, or span clouds and on-premises environments, Application Security Groups are ineffective by design. We will touch on this topic in upcoming posts.

In today’s post we will focus on a scenario and use case that is common to most organizations, discussing the limitations of Cloud Security Groups and flow logs within a specific vNet, and illustrating the value Guardicore provides in this scenario.

Experiment: Simulate a SWIFT Application Migration to Azure

Let’s look at the details from an experiment performed by one of our customers during a simulation of a SWIFT application migration to Azure.

Our customer used a subscription in Azure, in the Brazil South region. Within the subscription, there is a Virtual Network (vNet). The vNet includes a subnet, 10.0.2.0/24, with various application servers that serve different roles.

This customer attempted to simulate the migration of their SWIFT application to Azure given the subscription above. General segmentation rules for their migrated SWIFT application were set using both NSGs (Network Security Groups) & ASGs (Application Security Groups). These were used to administrate and control network traffic within the virtual network (vNet) and specifically to segment this application.

Let’s review the difference:

  • An NSG is the Azure resource that is used to enforce and control the network traffic. NSGs control access by permitting or denying network traffic. All traffic entering or leaving your Azure network can be processed via an NSG.
  • An ASG is an object reference within a Network Security Group. ASGs are used within an NSG to apply a network security rule to a specific workload or group of VMs. An ASG is a logical “network object” to which workload network interfaces are assigned, providing the capability to group VMs into associated groups or workloads.
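To make the distinction concrete, here is what a single NSG security rule that references ASGs might look like as an ARM-template fragment. The rule name, ASG names, and port are hypothetical:

```json
{
  "name": "Allow-Web-To-DB",
  "properties": {
    "priority": 200,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourcePortRange": "*",
    "destinationPortRange": "1433",
    "sourceApplicationSecurityGroups": [
      { "id": "[resourceId('Microsoft.Network/applicationSecurityGroups', 'asg-swift-web')]" }
    ],
    "destinationApplicationSecurityGroups": [
      { "id": "[resourceId('Microsoft.Network/applicationSecurityGroups', 'asg-swift-db')]" }
    ]
  }
}
```

Note that the rule itself still speaks only in ports and protocols; the ASGs merely give the underlying IP groupings a name.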

The lab setup:
The cloud setup in this experiment included a single vNet, with a single Subnet, which has its own Network Security Group (NSG) assigned.

ASGs

  • Notice that they are all contained within the same Resource Group, and belong to the Location of the vNet (Brazil South).

NSGs:

The following NSG rules were in place for the simulated migrated SWIFT Application:

  • Load Balancers to Web Servers, over specific ports, allow.
  • Web Servers to Databases, over specific ports, allow.
  • Deny all else between SWIFT servers.

The problem:

A SWIFT application team member in charge of the simulation project called the cloud security team to report that a critical backup operation had stopped working on the migrated application; he suspected the connection was being blocked. The cloud network team then had to find the root cause, partly through a process of elimination, among several possible options:

  1. The application team member was wrong: it’s not a policy issue but a configuration issue within the application.
  2. The ASGs are misconfigured while NSGs are configured correctly.
  3. The ASGs are configured correctly but the NSGs are misconfigured or missing a rule.

The cloud team began the process of elimination. They used Azure flow logs to try to detect the possible blocked connections. The following is an example of such a log:
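For illustration, a version 2 NSG flow log record has roughly the following shape. All values below are invented; in each flow tuple the decision field is A for allowed or D for denied:

```json
{
  "time": "2019-05-20T13:45:01.000Z",
  "category": "NetworkSecurityGroupFlowEvent",
  "operationName": "NetworkSecurityGroupFlowEvents",
  "properties": {
    "Version": 2,
    "flows": [
      {
        "rule": "DefaultRule_DenyAllInBound",
        "flows": [
          {
            "mac": "000D3AF87856",
            "flowTuples": [
              "1558359901,10.0.2.10,10.0.2.20,49152,1433,T,I,D,B,,,,"
            ]
          }
        ]
      }
    ]
  }
}
```

The record identifies flows only by IPs, ports, protocol, direction, and decision; nothing ties a tuple back to the process or application that generated it.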

Using the Microsoft Azure Log Analytics platform, the cloud team sifted through the data with no success. They were searching for a blocked connection that could potentially be the backup process, but it could not be detected. The cloud team members therefore dismissed the issue as a misconfiguration in the application.

The SWIFT team member insisted it was not an application issue and several days passed with no solution, all while the SWIFT backup operation kept failing. In a live environment, this stalemate would have been a catastrophe, with team members likely working around the clock to find the blocked connection, or prove misconfiguration in the application itself. In many cases an incident like this would lead to removing the security policy for the sake of business continuity as millions of dollars are at stake daily.

After many debates and an escalation of the incident, it was decided, based on the Protect team’s recommendation, to leverage Guardicore Centra in the Azure cloud environment to help with the investigation and migration simulation project.

Using Guardicore Centra, the team used Reveal to filter for all failed connections related to the SWIFT application. This immediately surfaced a failed connection attempt between the SWIFT load balancer and the SWIFT databases. The connection failed because of a missing allow rule: no NSG rule in the policy allowed the SWIFT LBs to talk to the SWIFT DBs.

The filters in Reveal


Discovering the process

Guardicore was able to provide visibility down to the process level for further context and identification of the failed backup process.

Application Context is a Necessity

The reason the flow logs were inadequate for detecting the connection was that IPs were constantly changing as the application scaled up and down and the migration simulation project moved forward. The teams had no context for when the backup operation was supposed to occur or which servers initiated the attempted connections, so the search came up empty-handed. Because flow logs are limited to IPs and ports, the team was unable to search based on application context.
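To see why such a search comes up empty, consider a minimal sketch of filtering version 2 flow tuples; the tuples and IPs below are invented. The only searchable fields are addresses, ports, protocol, direction, and decision, so a query pinned to a stale IP misses denied flows from newly scaled instances:

```python
# Minimal sketch: filtering NSG flow-log tuples (version 2 format).
# Tuple fields: timestamp, src IP, dst IP, src port, dst port,
# protocol (T/U), direction (I/O), decision (A/D), flow state, counters.
# There is no field naming the process or application behind the flow.

FIELDS = ["timestamp", "src_ip", "dst_ip", "src_port", "dst_port",
          "protocol", "direction", "decision"]

def parse_tuple(raw):
    """Parse the first eight fields of a flow tuple into a dict."""
    return dict(zip(FIELDS, raw.split(",")[:8]))

# Invented tuples: the backup client's IP changed as instances scaled.
tuples = [
    "1558359901,10.0.2.10,10.0.2.20,49152,1433,T,I,D,B,,,,",
    "1558359955,10.0.2.31,10.0.2.20,49201,1433,T,I,D,B,,,,",
]

denied = [parse_tuple(t) for t in tuples if parse_tuple(t)["decision"] == "D"]
denied_from_old_ip = [f for f in denied if f["src_ip"] == "10.0.2.10"]

print(len(denied))              # prints 2: both flows were denied
print(len(denied_from_old_ip))  # prints 1: the stale-IP query misses one
```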

The cloud team decided to use Guardicore Centra to manage the migration and segmentation of the SWIFT application simulation for ease of management and ease of maintenance. Additionally, they added process and user context to the rules for more granular security and testing. Guardicore Centra enabled comparing the on-premises application deployment with the cloud setup to make sure all configurations were in place.

The team then went on to use Guardicore Centra to simulate the SWIFT policy over real SWIFT traffic, making sure they were not blocking additional critical services and would not inadvertently block them in the future.


Guardicore Centra provided the cloud security team with:

  • Visibility and speed to detect the relevant blocked flows
  • Process and user context to identify the failed operation as the backup operation
  • Ability to receive real-time alerts on any policy violation
  • Process-level and user-level rules required for the critical SWIFT application
  • Simulation and testing capabilities to simulate the policies over real application traffic before blocking

None of these features are available natively in Azure. These limitations have serious implications, such as the backup operation failure and the inability to adequately investigate and resolve the issue.

Furthermore, as part of general environment hygiene, our customer attempted to add several rules to govern the whole vNet, blocking Telnet and insecure FTP. For Telnet, our customer could add a block rule in Azure on port 23. For FTP, an issue was raised: FTP can communicate over high-range ports that many other applications also need to use, so how could it be blocked? Using Guardicore, a simple block rule over the ftpd process was put in place with no port restriction, immediately blocking any insecure FTP communication at the process level regardless of the ports used.

Visibility is key to any successful application migration project. Understanding your application dependencies is a critical step, enabling setting up the application successfully in the cloud. Guardicore Centra provides rich context for each connection, powerful filtering capabilities, flexible timeframes and more. We collect all the flows, show successful, failed, and blocked connections, and store historical data, not just short windows of it, to be able to support many use cases. These include troubleshooting, forensics, compliance and of course, segmentation. This enables us to help our customers migrate to the cloud 30x faster and achieve their segmentation and application migration goals across any infrastructure.

Securing a Hybrid Data Center – Strategies and Best Practices

Today’s data centers exist in a hybrid reality. They often include on-premises infrastructure such as Bare Metal or Virtual Machines, as well as both Public and Private cloud. At the same time, most businesses have legacy systems that they need to support. Even as you embrace cutting-edge infrastructure like containers and microservices, your legacy systems aren’t going anywhere, and it’s probably not even on your near future road-map to replace them. As a result, your security strategy needs to be suitable across a hybrid ecosystem, which is not as simple as it sounds.

The Top Issues with Securing a Hybrid Data Center

Many businesses attempt to use traditional security tools to manage a hybrid data center, and quickly run into problems.

Here are the most common problems that companies encounter when traditional tools are used to secure a modern, hybrid data center:

  • Keeping up with the pace of change: Business moves fast, and traditional security tools such as legacy firewalls, ACLs, VLANs and cloud security groups are ineffectual. This is because these solutions are made for one specific underlying infrastructure. VLANs will work well for on premises – but fall short when it comes to cloud and container infrastructure. Cloud security groups work for the cloud, but won’t support additional cloud providers or on premises. If you want to migrate, security will seriously affect the speed and flexibility of your move, slowing down the whole process – and probably negating the reasons you chose cloud to begin with.
  • Management overhead: Incorporating different solutions for different infrastructure is nothing short of a headache. You’ll need to hire more staff, including experts in each area. A cross-platform security strategy that incorporates everyone’s field of expertise is costly, complex, and prone to bottlenecks because of the traditional ‘too many cooks’ issue.
  • No visibility: Your business will also need to think about compliance. This could involve an entirely different solution and staff member dedicated to compliance and visibility. Without granular insight into your entire ecosystem, it’s impossible to pass an audit. VLANs for example offer no visibility into application dependencies, a major requirement for audit-compliance. When businesses use VLANs, compliance therefore becomes an additional headache.
  • Insufficient control: Today’s security solutions need Layer 7 control, with granularity that can look at user identity, FQDN (fully qualified domain names), command lines and more. Existing solutions rely on IPs and ports, which are insufficient to say the least.
    Take cloud security groups, for example, which for many have become the standard way to segment applications in the cloud, much as they would segment on-premises. However, in the cloud this solution stops at Layer 4: ports and IPs. For application-aware security on AWS, you will need to add another set of controls. In a dynamic data center, security needs to be decoupled from the IPs themselves, allowing machines to migrate. Smart security uses an abstraction layer, enabling the policy to follow the workload rather than the IP.
  • Lack of automation: In a live hybrid cloud data center, automation is essential. Without automation as standard, for example using VLANs, changes can take weeks or even months. Manually implementing rules can result in the downtime of critical systems, as well as multiple lengthy changes in IPs, configurations of routers, and more.

Hybrid Data Center Security Strategies that Meet These Issues Head-On

The first essential item on your checklist should be infrastructure-agnostic security. Centralized management means one policy approach across everything, from modern and legacy technology on-premises to both public and private cloud. Distributed enforcement decouples the security from the IP or any underlying infrastructure – allowing policy to follow the workload, however it moves or changes. Security policy becomes an enabler of migration and change, automatically moving with the assets themselves.

The most effective hybrid cloud solutions are software-based, able to integrate with any existing software solution, including Ansible, Chef, Puppet, SCCM, and more. This also makes deployment fast and seamless, with implementation taking hours rather than days. At Guardicore, our customers often express surprise when we request three hours to install our solution for a POC, when competitors have asked for three days.

The ease of use should continue after the initial deployment. An automated, readable visualization of your entire ecosystem makes issues like compliance entirely straightforward, and provides an intuitive and knowledgeable map that is the foundation to policy creation. Coupling this with a flexible labeling system means that any stakeholder can view the map of your infrastructure, and immediately understand what they are looking at.

These factors allow you to implement micro-segmentation in a highly effective way, with granular control down to the process level. In comparison to traditional security tools, Guardicore can secure and micro-segment an application in just weeks, while for one customer it had taken 9 months to do the same task using VLANs.

What Makes Guardicore Unique When it Comes to Hybrid Data Center Security Strategies?

For Guardicore, it all starts with the map. We collect all the flows, rather than just a sample, and allow you to access all your securely stored historical data rather than only snapshotting small windows in time. This allows us to support more use cases for our customers, from making compliance simple to troubleshooting a slowdown or forensic investigation into a breach. We also use contextual analysis on all application dependencies and traffic, using orchestration data as well as the process, user, FQDN and command line of all traffic. We can enable results, whatever use case you’re looking to meet.

Guardicore is also known for our flexibility, providing a grouping and labeling process that lets you see your data center the way you talk about it, using your own labels rather than pre-defined ones imposed on you by a vendor, and Key:Value formats instead of tags. This makes it much easier to create the right policies for your environment and to use the map to see a hierarchical view of your entire business structure, with context that makes sense to you. Taking this a step further into policy creation, your rules methodology can combine whitelisting and blacklisting, reducing the risk of inflexibility and complexity in your data center, and even allowing security rules that are not connected to segmentation use cases. In contrast, competitors use whitelist-only approaches with fixed labels and tiers.

Fast & Simple Segmentation with Guardicore

Your hybrid data center security strategies should enable speed and flexibility, not stand in your way. First, ensure that your solution supports any environment. Next, gain as much visibility as possible, including context. Use this to glean all data in an intuitive way, without gaps, before creating flexible policies that focus on your key objectives – regardless of the underlying infrastructure.

Interested in learning more about implementing a hybrid cloud center security solution?

Download our white paper

The Risk of Legacy Systems in a Modern-Day Hybrid Data Center

If you’re still heavily reliant on legacy infrastructure, you’re not alone. In many industries, legacy servers are an integral part of ‘business as usual’ and are far too complex or expensive to replace or remove.

Examples include Oracle databases that run on Solaris servers, applications using Linux RHEL4, or industry-specific legacy technology. Think about legacy AIX machines that often handle transaction processing for financial institutions, or end-of-life operating systems such as Windows XP that are frequently used as end devices in healthcare enterprises. While businesses do attempt to modernize these applications and infrastructure, it can take years of planning to reach execution, and even then the effort might never be fully successful.

When Legacy Isn’t Secured – The Whole Data Center is at Risk

When you think about the potential risk of legacy infrastructure, you may go straight to the legacy workloads, but that’s just the start. Think about an unpatched device that is running Windows XP. If this is exploited, an attacker can gain access directly to your data center. Security updates like this recent warning about a remote code execution vulnerability in Windows Server 2003 and Windows XP should show us how close this danger could be.

Gaining access to just one unpatched device, especially when it is a legacy machine, is relatively simple. From this point, lateral movement can allow an attacker to move deeper inside the network. Today’s data centers are increasingly complex and have an intricate mix of technologies, not just two binary categories of legacy and modern, but future-focused and hybrid such as public and private clouds and containers. When a data center takes advantage of this kind of dynamic and complex infrastructure, the risk grows exponentially. Traffic patterns are harder to visualize and therefore control, and attackers are able to move undetected around your network.

Digital Transformation Makes Legacy More Problematic

The threat that legacy servers pose is not as simple as it was before digital transformation. Modernization of the data center has increased the complexity of the enterprise, and attackers have more vectors than ever before to gain a foothold in your data centers and make their way to critical applications and digital crown jewels.

Historically, an on-premises application might have been used by only a few other applications, probably also on premises. Today however, it’s likely that it will be used by cloud-based applications too, without any improvements to its security. By introducing legacy systems to more and more applications and environments, the risk of unpatched or insecure legacy systems is growing all the time. This is exacerbated by every new innovation, communication or advance in technology.

Blocking these communications isn’t a realistic option; digital transformation makes these connections necessary. However, you can’t embrace the latest innovation without securing the business-critical elements of your data center. How can you rapidly deploy new applications in a modern data center without putting your enterprise at risk?

Quantifying the Risk

Many organizations think they understand their infrastructure, but don’t actually have an accurate or real-time visualization of their IT ecosystem. Organizational or ‘tribal’ knowledge about legacy systems may be incorrect, incomplete or lost, and it’s almost impossible to obtain manual visibility over a modern dynamic data center. Without an accurate map of your entire network, you simply can’t quantify what the risks are if an attack was to occur.

Once you’ve obtained visibility, here’s what you need to know:

  1. The servers and endpoints that are running legacy systems.
  2. The business applications and environments where the associated workloads belong.
  3. The ways in which the workloads interact with other environments and applications. Think about what processes they use and what goals they are trying to achieve.

Once you have this information, you then know which workloads are presenting the most risk, the business processes that are most likely to come under attack, and the routes that a hacker could use to get from the easy target of a legacy server, across clouds and data centers to a critical prized asset. We often see customers surprised by the ‘open doors’ that could lead attackers directly from an insecure legacy machine to sensitive customer data, or digital crown jewels.

Once you’ve got full visibility, you can start building a list of what to change, which systems to migrate to new environments, and which policy you could use to protect the most valuable assets in your data center. With smart segmentation in place, legacy machines do not have to be a risky element of your infrastructure.

Micro-segmentation is a Powerful Tool Against Lateral Movement

Using micro-segmentation effectively reduces risk in a hybrid data center environment. Specific, granular security policy can be enforced, which works across all infrastructure – from legacy servers to clouds and containers. This policy limits an attacker’s ability to move laterally inside the data center, stopping movement across workloads, applications, and environments.

If you’ve been using VLANs up until now, you’ll know how ineffective they are when it comes to protecting legacy systems. VLANs usually place all legacy systems into one segment, which means just one breach puts them all in the line of fire. VLANs rely on firewall rules that are difficult to maintain and do not leverage sufficient automation. This often results in organizations accepting loose policy that leaves it open to risk. Without visibility, security teams are unable to enforce tight policy and flows, not only among the legacy systems themselves, but also between the legacy systems and the rest of a modern infrastructure.

One Solution – Across all Infrastructure

Many organizations make the mistake of forgetting about legacy systems when they think about their entire IT ecosystem. However, as legacy servers can be the most vulnerable, it’s essential that your micro-segmentation solution works here, too. Covering all infrastructure types is a must-have for any company when choosing a micro-segmentation vendor that works with modern data centers. Even the enterprises who are looking to modernize or replace their legacy systems may be years away from achieving this, and security is more important than ever in the meantime.

Say Goodbye to the Legacy Challenge

Legacy infrastructure is becoming harder to manage. The servers and systems are business critical, but it’s only becoming harder to secure and maintain them in a modern hybrid data center. Not only this, but the risk, and the attack surface are increasing with every new cloud-based technology and every new application you take on.

Visibility is the first important step. Security teams can use an accurate map of their entire network to identify legacy servers and their interdependencies and communications, and then control the risks using tight micro-segmentation technology.

Guardicore Centra can cover legacy infrastructure alongside any other platform, removing the issue of gaps or blind spots for your network. Without fear of losing control over your existing legacy servers, your enterprise can create a micro-segmentation policy that’s future-focused, with support for where you’ve come from and built for today’s hybrid data center.

Interested in learning more about implementing a hybrid cloud center security solution?

Download our white paper

How to Establish your Next-Gen Data Center Security Strategy

In 2019, 46 percent of businesses are expected to use hybrid data centers, and it is therefore critical for these businesses to be prepared to deal with the inherent security challenges. Developing a next gen data center security strategy that takes into account the complexity of hybrid cloud infrastructure can help keep your business operations secure by way of real-time responsiveness, enhanced scalability, and improved uptime.

One of the biggest challenges of securing the next gen data center is accounting for the various silos that develop. Every cloud service provider has its own methods to implement security policies, and those solutions are discrete from one another. These methods are also discrete from on-premises infrastructure and associated security policies. This siloed approach to security adds complexity and increases the likelihood of blind spots in your security plan, and isn’t consistent with the goals of developing a next gen data center. To overcome these challenges, any forward-thinking company with security top of mind requires security tools that enable visibility and policy enforcement across the entirety of a hybrid cloud infrastructure.

In this piece, we’ll review the basics of the next gen data center, dive into some of the details of developing a next gen data center security strategy, and explain how Guardicore Centra fits into a holistic security plan.

What is a next gen data center?

The idea of hybrid cloud has been around for a while now, so what’s the difference between what we’re used to and a next gen data center? In short, next gen data centers are hybrid cloud infrastructures that abstract away complexity, automate as many workflows as possible, and include scalable orchestration tools. Scalable technologies like SDN (software defined networking), virtualization, containerization, and Infrastructure as Code (IaC) are hallmarks of the next gen data center.

Given this definition, the benefits of the next gen data center are clear: agile, scalable, standardized, and automated IT operations that limit costly manual configuration, human error, and oversights. However, when creating a next gen data center security strategy, enterprises must ensure that the policies, tools, and overall strategy they implement are able to account for the inherent challenges of the next gen data center.

Asking the right questions about your next gen data center security strategy

There are a number of questions enterprises must ask themselves as they begin to design a next gen data center and a security strategy to protect it. Here, we’ll review a few of the most important.

  • What standards and compliance regulations must we meet? Regulations such as HIPAA, PCI-DSS, and SOX subject enterprises to strict security and data protection requirements that must be met, regardless of other goals. Failure to account for these requirements in the planning stages can prove costly in the long run should you fail an audit due to a simple oversight.
  • How can we gain granular visibility into our entire infrastructure? One of the challenges of the next gen data center is the myriad of silos that emerge from a security and visibility perspective. With so many different IaaS, SaaS, and on-premises solutions going into a next gen data center, capturing detailed visibility of data flows down to the process level can be a daunting task. However, in order to optimize security, this is a question you’ll need to answer in the planning stages. If you don’t have a baseline of what traffic flows on your network look like at various points in time (e.g. peak hours on a Monday vs midnight Saturday) identifying and reacting to anomalies becomes almost impossible.
  • How can we implement scalable, cross-platform security policies? As mentioned, the variety of solutions that make up a next gen data center can lead to a number of silos and discrete security policies. Managing security discretely for each platform flies in the face of the scalable, DevOps-inspired ideals of the next gen data center. To ensure that your security can keep up with your infrastructure, you’ll need to seek out scalable, intelligent security tools. While security is often viewed as hamstringing DevOps efforts, the right tools and strategy can help bridge the gap between these two teams.
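The baselining question above can be sketched in a few lines. The flow counts, time bucket, and threshold below are invented for illustration; a real baseline would also draw on process, user, and FQDN context:

```python
# Minimal sketch of flagging anomalous traffic volume against a baseline.
from statistics import mean, stdev

# Invented flows-per-hour observed for the "Monday 09:00" bucket
# over the past five weeks.
baseline = [1200, 1150, 1300, 1250, 1180]

def is_anomalous(observed, history, n_sigmas=3):
    """Flag a count more than n_sigmas standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > n_sigmas * sigma

print(is_anomalous(1230, baseline))  # prints False: within the normal range
print(is_anomalous(5400, baseline))  # prints True: far above the baseline
```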

Finding the right solutions

Given what we have reviewed thus far, we can see that the solutions to the security challenges of the next gen data center need to be scalable and compliant, provide granular visibility, and function across the entirety of your infrastructure.

Guardicore Centra is uniquely capable of addressing these challenges and helping secure the next gen data center. For example, not only can micro-segmentation help enable compliance to standards like HIPAA and PCI-DSS, but Centra offers enterprises the level of visibility required in the next gen data center. Centra is capable of contextualizing all application dependencies across all platforms to ensure that your micro-segmentation policies are properly implemented. Regardless of where your apps run, Centra helps you overcome silos and provides visibility down to the process level.

Further, Centra is capable of achieving the scalability that the next gen data center demands. To help conceptualize how scalable micro-segmentation with Guardicore Centra can be, consider that a typical LAN build-out can last a few months and require hundreds of IT labor hours, while a comparable micro-segmentation deployment takes about a month and significantly fewer IT labor hours.

Finally, Centra can help bridge the gap between DevOps and Security teams by enabling the use of “zero trust” security models. The general idea behind zero trust is, as the name implies, nothing inside or outside of your network should be trusted by default. This shifts focus to determining what is allowed as opposed to being strictly on the hunt for threats, which is much more conducive to a modern DevSecOps approach to the next gen data center.

Guardicore helps enable your next gen data center security strategy

When developing a next gen data center security strategy, you must be able to account for the nuances of the various pieces of on-premises and cloud infrastructure that make up a hybrid data center. A big part of doing so is selecting tools that minimize complexity and can scale across all of your on-premises and cloud platforms. Guardicore Centra does just that and helps implement scalable and granular security policies to establish the robust security required in the next gen data center.

If you’re interested in redefining and adapting the way you secure your hybrid cloud infrastructure, contact us to learn more.

Want to know more about proper data center security? Get our white paper about operationalizing a proper micro-segmentation project.


4 Insights about the Salesforce Outage

On May 17th, Salesforce announced a significant outage to its service, resulting in customers losing access to one of the most critical applications they use daily. The issue was acknowledged by Parker Harris, Salesforce’s chief technology officer and co-founder, while the company worked to resolve the critical outage as quickly as possible.

At the center of the disaster was a faulty database script deployed in the production environment. Salesforce announced that “a database script deployment inadvertently gave users broader data access than intended.” This affected Salesforce customers who use Salesforce Pardot, a B2B marketing automation tool, as well as any customers who have used Pardot in the past. The inadvertent access gave users both read and write permissions to restricted data.

Salesforce took initial steps to mitigate the problem by blocking access to all instances that contained impacted customers and by shutting down other Salesforce services. The heat map below shows the extent of the blackout for Salesforce customers.

Salesforce outage map

The essential nature of the Salesforce application is self-evident, so these outages were extremely significant. Users who need Salesforce on a daily basis as part of their job found themselves idle, forcing many businesses to simply send them home.

As a data center company, focused on protecting the most critical applications, here are our essential four insights following the crisis:

  1. Think Further than Cyber-Attacks
    Always remember that cyber-attacks are not the only threats to your data center. When evaluating your data center risks, it is important to take internal “threats” into account and implement the right controls to protect your “digital crown jewels” – the most critical business applications and processes. For example, separating your production and development environments is foundational for strong security, ensuring that test scripts cannot run in your production environment, even in the case of human error.
  2. Always Consider the Cloud
    Companies are increasing their presence in the cloud for reasons such as lower cost, reduced maintenance effort, and greater flexibility. However, security needs to be considered from the outset of your cloud strategy. Some companies are unaware that cloud apps have greater exposure to threats due to a lack of visibility and the difficulty of introducing policies and controls. In the cloud, your business is at greater risk in the case of a breach or an outage.
  3. Zero Trust
    You cannot trust a single point of configuration to control and isolate your environment. Best practice is to scrutinize your controls and simulate failure scenarios. Zero Trust, the approach of “never trust, always verify,” is often focused on lateral movement and breach detection in internal versus external networks. However, it is also relevant for any security controls being used or updated. In many cases, your business is endangered by internal threats, misconfigurations, and innocent mistakes, all of which can be as catastrophic as a malicious cyber-attack. The zero trust approach helps limit the damage.
  4. Be Ready for a Crisis
    Distributed controls are your strongest weapon to ensure that you are prepared for any eventuality. These will allow you to act quickly against the unexpected, especially in hybrid cloud environments where you need to manage multiple clusters and control planes. Make sure that you have the visibility and control of your entire environment that allows you to instantly isolate any affected environments. This will give you time to put your incident response plan into place, and protect your critical assets until a solution has been found.

The Salesforce outage shows that mistakes can happen to anyone, and the best protection is always going to be preparation. Start by separating your environments, limiting the exposed surface, and then move on to using the zero trust model to keep your most critical assets safe from harm, even in a hybrid-cloud infrastructure. Remember that without adequate segmentation, you are exposing your applications to internal threats as well as external ones. With strong data center security, you are one step ahead at all times.

Want to learn more about micro-segmentation in the cloud? Read our white paper on how to secure today’s modern data centers.

Download now

Easy Ways to Greatly Reduce Risk in Today’s Data Centers

Whether your infrastructure is on premises, in the cloud, or a hybrid of both, there are core characteristics that breached data centers share and that make them vulnerable to attack. These data centers are easier to penetrate and exploit, making them higher-value targets for opportunistic hackers.

The truth is, protection is not that complicated. There are common, easily fixable data center problems that come up again and again in the biggest breaches, and best practices that can be easily implemented to significantly reduce your company’s risk against these kinds of threats. Security professionals are often inundated with content claiming that “IT ecosystems are increasingly complex and fast-changing, and are therefore so difficult to secure.” In most cases, this is simply wrong.

What Are the Attackers Looking For?

Data centers offer the biggest bang for the criminal’s buck, whether that means harvesting PII or other sensitive information such as technical intellectual property and best practices. Beyond direct gain, data centers offer a wealth of processing power, which many attackers hijack and resell to other criminal groups for additional revenue. The black market for cyber-crime is continuously growing, with offerings such as DDoS-as-a-service and RAT-as-a-service giving attackers access to your compute infrastructure to inject malware or gain remote access. We’ve even seen victims become the “false flag” bounce network used to obfuscate an attack’s origin. Using hijacked resources for cryptocurrency mining is a steadily growing threat as well, up 459% in 2018.

The Simple Fixes That, if Ignored, Make a Data Center Easy to Compromise

Just over three years ago, in proposing a Zero Trust model, John Kindervag of Forrester said that we need to move to architectures with “no more chewy centers.” Looking broadly at data centers, several characteristics naturally make them exactly what we don’t want: very soft in the middle. By making small changes, we can turn these deficits into enterprise strengths, doing much to prevent future attacks and to catch them more quickly when they do happen.

  1. Good hygiene: Far too often, attacks on data centers start by taking advantage of poor hygiene. Merely shoring up the items below would make it much harder for attackers to get in.
    1. Better patching acumen – doing a better job at finding unpatched vulnerabilities in applications.
    2. Better password and account management and enabling two factor authentication – many attacks come from simple brute force password attacks against single factor authentication applications.
    3. Better automation including OS, application, and kernel checks – while we have become very good at applying DevOps scripting in the form of auto-provisioning and configuration management tools like Chef, Puppet, and Ansible, we have not always added easy-to-incorporate OS, application, and kernel update checks to those scripts. Instead of spinning up new automations that are only as good as the day they were born, it would be easy to perpetually and automatically update these scripts with such checks, easily cutting down exploitable vulnerabilities.
  2. Better segmentation & micro-segmentation – when an enterprise incorporates modern segmentation techniques – even if sparingly, it finds its risk greatly reduced. What makes these modern segmentation techniques different than what we have used in the past? Several things.
    1. Segmentation that is platform-agnostic and provides visibility and enforcement across all platforms quickly and easily – Today’s data centers are heterogeneous in nature. Enterprises have embraced modern hypervisors and operating systems, containers and clouds, as well as serverless technology, yet most also run a good number of legacy systems and EoL operating systems such as Solaris, HP-UX, AIX, and EoL Windows or Linux.
    2. Segmentation that can be automated and works like your DevOps-based enterprise – Traditional security controls such as legacy firewalls, ACLs, and VLANs are extremely resource-intensive and impossible to manage in this kind of complex and dynamic environment. In some cases, such as in a hybrid cloud infrastructure, legacy security is not just insufficient, it is unfeasible altogether. Enterprises need visibility across all of their platforms, easily and seamlessly. Micro-segmentation technology is built for the dynamic and platform-agnostic nature of today’s enterprises, without the need for manual moves, adds, changes, or deletes. It is extremely important to understand that these modern techniques have been proven time and time again to be deployable and maintainable 30x faster than legacy techniques.
    3. Segmentation, even when applied sparingly as “just a start,” greatly reduces the attack surface, and grabbing this low-hanging fruit is easy. Examples include, but are not limited to:
      1. Isolating and securing compliance-mandated environments
      2. Segmenting your “crown jewel” critical applications
      3. Sectioning off vendors, suppliers, distributors, and contractors from the rest of the enterprise
      4. Securing critical enterprise services and applications such as remote access and network services
  3. Adequate Incident Response Plans & Practice – the final critical ingredient that can easily change an enterprise data center’s security posture is a well-thought-out incident response plan, one which incorporates not only the technical staff but also the business and legal parties that need to be involved. These plans should be practiced, with incident response drills planned and run to expose blind spots or gaps in security.
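The automated OS and application update checks described in point 1.3 can be folded into provisioning scripts with a small helper. Below is a minimal sketch in Python, assuming a Debian/Ubuntu host where pending updates are discovered by parsing `apt list --upgradable` output; the function names and the apt-based approach are illustrative assumptions, not a prescribed tool.

```python
import re
import subprocess

def parse_upgradable(apt_output):
    """Extract package names from `apt list --upgradable` output.

    Lines look like:
      openssl/focal-updates 1.1.1f-1ubuntu2.20 amd64 [upgradable from: ...]
    """
    packages = []
    for line in apt_output.splitlines():
        match = re.match(r"^(\S+?)/\S+ .*\[upgradable", line)
        if match:
            packages.append(match.group(1))
    return packages

def pending_updates():
    """Run the check on the local host (Debian/Ubuntu only)."""
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_upgradable(out)
```

A provisioning playbook could call `pending_updates()` on each host it spins up and fail the run, or trigger patching, whenever the returned list is non-empty, so automations stay current instead of being "only as good as the day they were born."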

Don’t believe everything you hear. Many of today’s biggest breaches are entirely preventable. In my next blog, I’ll take a look at four of the most devastating data center breaches from the last five years, and see how the checklist above could have made all the difference.

Interested in learning more about how to secure modern data centers and hybrid cloud environments?

Check out our White Paper on re-evaluating your security architecture

A Deep Dive into Point of Sale Security

Many businesses think of their Point of Sale (POS) systems as an extension of a cashier behind a sales desk. But with multiple risk factors to consider, such as network connectivity, open ports, internet access, and communication with the most sensitive data a company handles, POS solutions are more accurately an extension of a company’s data center, a remote branch of its critical applications. With this in mind, they should be treated as a high-threat environment, which means they need a targeted security strategy.

Understanding a Unique Attack Surface

Distributed geographically, POS systems can be found in varied locations at multiple branches, making it difficult to keep track of each device individually and to monitor their connections as a group. They cover in-store terminals, as well as public kiosks and self-service stations in places like shopping malls, airports, and hospitals. Multiple factors, from a lack of resources to logistical difficulties, can make it near impossible to secure these devices at the source or react quickly enough in case of a vulnerability or a breach. Remote IT teams will often have a lack of visibility when it comes to being able to accurately see data and communication flows. This creates blind spots which prevent a full understanding of the open risks across a spread-out network. Threats are exacerbated further by the vulnerabilities of old operating systems used by many POS solutions.

Underestimating the extent of this risk could be a devastating oversight. POS solutions are connected to many of a business’s main assets, from customer databases to credit card information and internal payment systems, to name a few. The devices themselves are very exposed, as they are accessible to anyone, from a waiter in a restaurant to a passer-by in a department store. This makes them high-risk for physical attacks, such as downloading a malicious application through USB, as well as remote attacks, like exploiting the terminal through exposed interfaces. Recently, innate vulnerabilities have been found in mobile POS solutions from vendors including PayPal, Square, and iZettle, because of their use of Bluetooth and third-party mobile apps. According to the security researchers who uncovered the vulnerabilities, these “could allow unscrupulous merchants to raid the accounts of customers or attackers to steal credit card data.”

In order to allow system administrators remote access for support and maintenance, POS systems are often connected to the internet, leaving them exposed to remote attacks, too. In fact, 62% of attacks on POS environments are carried out through remote access. For business decision makers, ensuring that staff are comfortable using the system needs to be a priority, which can make security a balancing act. A straightforward on-boarding process, a simple UI, and flexibility for non-technical staff are all important factors, yet pursuing them can open up new attack vectors when security considerations are left behind.

One example of a remote attack is the POSeidon malware which includes a memory scraper and keylogger, so that credit card details and other credentials can be gathered on the infected machine and sent to the hackers. POSeidon gains access through third party remote support tools such as LogMeIn. From this easy access point, attackers then have room to move across a business network by escalating user privileges or making lateral moves.

High risk yet hard to secure, POS systems are a serious security blind spot for many businesses.

Safeguarding this Complex Environment and Getting Ahead of the Threat Landscape

Firstly, assume your POS environment is compromised. You need to ensure that your data is safe, and the attacker is unable to make movements across your network to access critical assets and core servers. At the top of your list should be preventing an attacker from gaining access to your payment systems, protecting customer cardholder information and sensitive data.

The first step is visibility. While some businesses will wait for operational slowdown or clear evidence of a breach before they look for any anomalies, a complex environment needs full contextual visibility of the ecosystem and all application communication within. Security teams will then be able to accurately identify suspicious activity and where it’s taking place, such as which executables are communicating with the internet where they shouldn’t be. A system that generates reports on high severity incidents can show you what needs to be analyzed further.

Now that you have detail on the communication among the critical applications, you can identify the expected behavior and create a tight segmentation policy. Block rules, with application process context, can be used to contain any potential threat, ensuring that any future attacker in the data center would be completely isolated, without disrupting business processes or affecting performance.

The risk goes in both directions. Next, let’s imagine your POS is secure, but it’s your data center that is under attack. Your POS is an obvious target, with links to sensitive data and customer information. Micro-segmentation can protect this valuable environment, and stop an attack getting any further once it’s already in progress, without limiting the communication that your payment system needs to keep business running as usual.

With visibility and clarity, you can create and enforce the right policies, crafted around the strict boundaries that your POS application needs to communicate, and no further. Some examples of policy include:

    • Limiting outgoing internet connections to only the relevant servers and applications
    • Limiting incoming internet connections to only specific machines or labels
    • Building default block rules for ports that are not in use
    • Creating block rules that detail known malicious processes for network connectivity
    • Whitelisting rules to prevent unauthorized apps from running on the POS
    • Creating strict allow rules to enable only the processes that should communicate, and blocking all other potential traffic
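To make the allow/block model in the list above concrete, here is a minimal sketch of how such a rule set could be represented and evaluated. The rule fields, the `payment_svc` process name, and the `payment-gateway` label are hypothetical illustrations, not Guardicore’s actual policy format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str             # "allow" or "block"
    process: Optional[str]  # process name; None matches any process
    port: Optional[int]     # destination port; None matches any port
    dest: Optional[str]     # destination label; None matches any destination

def evaluate(rules, process, port, dest):
    """Return the action of the first matching rule; default-deny otherwise."""
    for rule in rules:
        if ((rule.process is None or rule.process == process) and
                (rule.port is None or rule.port == port) and
                (rule.dest is None or rule.dest == dest)):
            return rule.action
    # Strict allow-listing: anything not explicitly allowed is blocked.
    return "block"

# Hypothetical POS policy: only the payment process may reach the payment gateway.
pos_policy = [
    Rule("allow", "payment_svc", 443, "payment-gateway"),
    Rule("block", None, None, "internet"),
]
```

Under this sketch, `evaluate(pos_policy, "payment_svc", 443, "payment-gateway")` returns `"allow"`, while any other process, port, or destination falls through to the default block, which is the essence of strict allow-listing.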

Tight policy means that your business can detect any attempt to connect to other services or communicate with an external application, reducing risk and potential damage. With a flexible policy engine, these policies will be automatically copied to any new terminal that is deployed within the network, allowing you to adapt and scale automatically, with no manual moves, changes, or adds slowing down business processes.

Don’t Risk Leaving this Essential Touchpoint Unsecured

Point of Sale solutions are a high-risk open door for attackers to access some of your most critical infrastructure and assets. Without adequate protection, a breach could grind your business to a halt and cost you dearly in both financial damage and brand reputation.

Intelligent micro-segmentation policy can isolate an attacker quickly to stop them doing any further damage, and set up strong rules that keep your network proactively safe against any potential risk. Combined with integrated breach detection capabilities, this technology allows for quick response and isolation of an attacker before the threat is able to spread and create more damage.

Want to learn more about how micro-segmentation can protect your endpoints while hardening the overall security for your data center?

Read More

Globes High Tech Promising Startups: GuardiCore

GuardiCore is featured as one of Globes High Tech Promising Startups. Hundreds of Israeli startups are currently active in the hot cybersecurity market. “There’s a rash of cyber companies,” says GuardiCore CEO Pavel Gurvich. “The only thing that has grown faster than investment in this sector is the damage caused by the attacks.”

Micro-Segmented Data Center Security

Guest blog by Edward Amoroso, Founder and CEO of TAG Cyber, in which he summarizes a recent discussion with GuardiCore about their approach to securing the modern data center.

I recently discovered Matt Butcher’s awesome Illustrated Children’s Guide to Kubernetes. Available in book, video, and blog form (https://deis.com/blog/2016/kubernetes-illustrated-guide/), the cartoon narrative starring a PHP app named Phippy is exactly what good cyber technology writing should be: Fun, simple, and informative. Even if you have no interest in Docker container orchestration, check out Matt’s work. You’ll like it.
Read more

Santander Brasil Chooses GuardiCore Centra Security Platform to Protect Data Center

San Francisco, CA and Tel Aviv, Israel – GuardiCore, a leader in internal data center security and breach detection, today announced that Santander Brasil, the largest subsidiary of Santander Group, has selected GuardiCore’s Centra Security Platform to provide advanced data center security.

Read more