Where to Start? Moving from the Theory of Zero Trust to Making it Work in Practice

For many years, perimeter controls were considered adequate for protecting enterprise networks that held critical assets and data. The assumption was that with strong external perimeter controls, watching your ingress and egress traffic was enough. Larger or more sophisticated entities might add separation points between portions of their environment, but these were still viewed, and functioned, as additional perimeters: concentric circles of trust within which traffic could, more or less, move freely. When threats did occur inside your environment, you could only hope to catch them as they crossed one of these rudimentary borders.

The Moment I Realized that Perimeters Aren’t Enough

This practice worked moderately well for a while. However, around fifteen years ago, security practitioners began to feel a nascent itch, a sense that this was not enough. I personally remember working on a case at a hospital that was hit by a very early spear-phishing attack mimicking a help desk request for a password reset. Clicking a URL in an official-looking email, staff were sent to a fake but convincing website where these hospital professionals were prompted to reset their credentials, or so they thought. Instead, the attack began. This was before the days of the Darknet, and we even caught the German hacker boasting about what he had done, sharing the phishing email and fake website on a hacker message board. I worked for a company with a fantastic IPS solution, and upon deploying it we were able to quickly catch the attacker's exfiltration attempts. At first, we seemed to be winning. We cut the attacker off from major portions of a botnet that resided on the cafeteria cash registers, on most of the doctors' machines and, to my horror, even on the automated pharmacy fulfillment computers. Two weeks later, I received a call: the attacker was back, trying to get around the IPS device in new ways. While we were able to suppress the attack for the most part, I finally had to explain to the hospital IT staff that my IPS sat only at the entrances and exits of their network, and that to really stop these attacks we needed to look at all of the machines and applications that resided within their environment. We needed the ability to inspect traffic before it made its way to and from the exits. This was the first of many realizations for me that reliance on perimeter-based security was slowly but surely eroding.

In the years since, the concept of a perimeter has all but disappeared. Of course, it took quite a while for the larger population to accept this. It was helped along by the business and application interdependencies that bring vendors, contractors, distributors and applications through your enterprise, as well as the emergence of cloud and cloud-like provisioning used by DevOps teams. Relying on true perimeters as a main method of prevention is no longer tenable.

It was this reality that spurred the creation of Forrester's Zero Trust model almost a decade ago. The basic premise is that no person or device is automatically given access or trusted without verification. In theory, this is simple. In practice, especially in data centers that have become increasingly hybrid and complex, it can get complicated fast.

Visibility is Foundational for Zero Trust

A cornerstone of Zero Trust is to 'assume access': any enterprise should assume that an attacker has already breached the perimeter. This could be through stolen credentials, a phishing scam, basic hygiene issues such as weak passwords, lax account control or patching regimens, an IoT or third-party device, a brute-force attack, or any of the virtually limitless other vectors present in today's dynamic data centers.

Protecting your digital crown jewels through this complex landscape is getting increasingly tough. From isolating sensitive data for compliance or customer security, to protecting the critical assets that your operation relies on to run smoothly, you need to be able to visualize, segment and enforce rules to create an air-tight path for communications through your ecosystem.

As John Kindervag, the creator of Zero Trust, once said, in removing "the Soft Chewy Center" and moving towards a Zero Trust environment, visibility is step one. Without an accurate, real-time and historical map of your entire infrastructure, including on-premises and both public and private clouds, it's impossible to be sure that you don't have gaps or blind spots. As Forrester analyst Chase Cunningham writes in the ZTX Ecosystem Strategic Plan, "Visibility is the key in defending any valuable asset. You can't protect the invisible. The more visibility you have into your network across your business ecosystem, the better chance you have to quickly detect the tell-tale signs of a breach in progress and to stop it."

What Should Enterprises Be Seeing to Enable a Zero Trust Model?

Visibility itself is a broad term. Here are some practical necessities that are the building blocks of Zero Trust, and that your map should include.

  • Automated logging and monitoring: With an automated map of your whole infrastructure that updates without the need for manual support, your business has an always-accurate visualization of your data center. When something changes unexpectedly, this is immediately visible.
  • Classification of critical assets and data: Your stakeholders need to be able to make sense of what they see, so labeling and classification are an integral element of visibility. Flexible labeling and grouping of assets streamlines visibility and, later, policy creation.
  • Relationships and dependencies: A clear illustration of the relationships and dependencies between assets, applications and flows should give insight all the way down to the process level.
  • Context: This starts with historical as well as real-time data, so that enterprises can establish baselines to use for smart policy creation. Context can be enhanced with orchestration metadata from the cloud or third-party APIs, imported automatically to add meaning to what you're visualizing (a minimal sketch of such a labeled, baseline-aware map follows this list).
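
To make these building blocks concrete, here is a minimal, hypothetical sketch in Python of what entries on such a map might look like. The field names, labels and baseline format are illustrative assumptions, not any product's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """A workload on the map, with flexible labels used for grouping and classification."""
    hostname: str
    environment: str                              # e.g. "on-prem", "aws", "azure"
    labels: dict = field(default_factory=dict)    # e.g. {"app": "billing", "tier": "db"}

@dataclass
class Flow:
    """An observed connection between assets, captured down to the process level."""
    source: Asset
    destination: Asset
    dest_port: int
    process: str                                  # e.g. "postgres", "nginx"
    first_seen: str
    last_seen: str

def unexpected_flows(observed: list, baseline: set) -> list:
    """Flag flows whose (source app, destination app, port) never appeared in the historical baseline."""
    return [
        f for f in observed
        if (f.source.labels.get("app"), f.destination.labels.get("app"), f.dest_port) not in baseline
    ]
```

Even a toy model like this shows why labeling matters: baselines and policies are far easier to reason about in terms of "billing web servers talking to the billing database" than in terms of raw IP addresses.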

Next Step… Segmentation!

Identifying all resources across all environments is only step one, but it's an essential one for a successful approach to establishing a Zero Trust model. Without visibility into users, their devices, workloads across all environments, applications, and the data itself, moving on to segmentation is like grasping in the dark.

In contrast, with visibility at the start, it's intuitive to sit down and identify your enterprise's most critical assets, decide on your unique access permissions and grouping strategy for resources, and make intelligent, dynamic modifications to policy at the speed of change.

Want to read more about visibility and Zero Trust? Get our white paper about how to move toward a Zero Trust framework faster.

Moving Zero Trust from a Concept to a Reality

Most people understand the reasoning and the reality behind a zero trust model. While a network perimeter was historically considered sufficient to keep attacks at bay, today this is not the case. Zero trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. This added layer of security has proven far more effective at preventing breaches.

But how can organizations move from a concept or idea to implementation? Using tools built on 15-20 year old technologies is not adequate.

There is a growing demand for IT resources that can be accessed in a location-agnostic way, and cloud services are being used more widely than ever. These facts, on top of businesses embracing broader use of distributed application architectures, mean that neither the traditional firewall nor the next-generation firewall is effective for risk reduction any longer.
The other factor to consider is that new malware and attack vectors are being discovered every day, and businesses have no idea where the next threat might come from. It’s more important than ever to use micro-segmentation and micro-perimeters to limit the fallout of a cyber attack.

How does applying the best practices of zero trust combat these issues?

Simply put, implementing the zero trust model creates and enforces small segments of control around sensitive data and applications, increasing your data security overall. Businesses can use zero trust to monitor all network traffic for malicious activity or unauthorized access, limiting the risk of lateral movement through escalated user privileges and improving breach detection and incident response. As Forrester Research, which originally introduced the concept, explains, with zero trust, network policy can be managed from one central console through automation.

The Guardicore principles of zero trust

At Guardicore, we support IT teams in implementing zero trust with the support of our four high-level principles. Together, they create an environment in which you are best placed to reap the benefits of zero trust.

  • A least privilege access strategy: Access permissions are assigned only on the basis of a well-defined need: 'Never trust, always verify.' This doesn't stop at users alone. We also include applications, and even the data itself, with continuous review of the need for access. Group permissions can help make this seamless, and individual assets or elements can then be removed from each group as necessary (a minimal sketch of such a decision appears after this list).
  • Secure access to all resources: This holds regardless of the resource's location or its user. The authentication level is the same both inside and outside the local area network; for example, services on the LAN are not automatically made available over VPN.
  • Access control at all levels: Both the network itself and each resource or application require multi-factor authentication.
  • Audit everything: Rather than simply collecting data, we review everything that is logged, using automation to generate alerts where necessary. These bots perform multiple actions; our 'nightwatch bot', for example, generates phone calls to the right member of staff in the case of an emergency.
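
To illustrate how these principles combine in practice, here is a minimal, hypothetical sketch in Python of a deny-by-default access decision that checks need-to-know and MFA and audits every outcome. The identity fields, role table and resource names are invented for the example and do not describe Guardicore's actual policy engine.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("zt-audit")

@dataclass
class AccessRequest:
    identity: str            # user, service account, or application
    resource: str            # the asset or data being requested
    mfa_passed: bool         # result of multi-factor authentication
    granted_roles: set       # roles currently assigned to the identity

# Hypothetical need-to-know table: the role each resource actually requires.
REQUIRED_ROLE = {"payments-db": "payments-operator", "hr-files": "hr-admin"}

def decide(request: AccessRequest) -> bool:
    """Never trust, always verify: deny by default, require MFA, and audit every decision."""
    allowed = (
        request.mfa_passed
        and REQUIRED_ROLE.get(request.resource) in request.granted_roles
    )
    audit.info("identity=%s resource=%s allowed=%s", request.identity, request.resource, allowed)
    return allowed

# Example: an identity that passed MFA but lacks the required role is still denied.
decide(AccessRequest("svc-reporting", "payments-db", mfa_passed=True, granted_roles={"read-only"}))
```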

However, knowing these principles and understanding the benefits of zero trust is not the same as being able to implement it securely, with the right amount of flexibility and control.

Many companies fall at the first hurdle, unsure how to gain full visibility of their ecosystem. Without this, it is impossible to define policy clearly, set up the correct alerts so that business can run as usual, or stay on top of costs. If your business does not have the right guidance or skill-sets, the zero trust model becomes a ‘nice to have’ in theory but not something that can be achieved in practice.

It all starts with the map

With a zero trust model that starts with deep visibility, you can automatically identify all resources across all environments, at both the application and network level. At this point, you can work out what you need to enforce, turning to technology once you know what you’re looking to build as a strategy for your business. Other solutions will start with their capabilities, using these to suggest enforcement, which is the opposite of what you need, and can leave gaps where you need policy the most.

It’s important to ensure that you have a method in place for classification so that stakeholders can understand what they are looking at on your map. We bring in data from third-party orchestration, using automation to create a highly accessible map that is simple to visualize across both technical and business teams. With a context-rich map, you can generate intelligence on malicious activity even at the application layer, and tightly enforce policy without worrying about the impact on business as usual.
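
As one illustration of that kind of enrichment, the sketch below pulls instance tags from a cloud orchestration API (AWS EC2 via boto3 is assumed here purely as an example) so they could be attached to assets on a map. The merge step and label names are illustrative and not a description of any particular product's import pipeline.

```python
import boto3

def ec2_labels() -> dict:
    """Collect EC2 instance tags so they can be attached to the corresponding assets on the map."""
    ec2 = boto3.client("ec2")
    labels = {}
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            labels[instance["InstanceId"]] = tags
    return labels

# e.g. {"i-0abc123": {"app": "billing", "env": "production", "owner": "platform-team"}}
```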

With these best practices in mind and a map as your foundation, your business can achieve the goals of zero trust: enforcing control around sensitive data and apps, finding malicious activity in network traffic, and centrally managing network policy with automation.

Want to better understand how to implement segmentation for securing modern data centers to work towards a zero trust model?

Download our white paper

You don’t have to be mature in order to be more secure – cloud, maturity, and micro-segmentation

Whether you’ve transitioned to the cloud, are still using on-prem servers, or are operating on a hybrid system, you need security services that are up to the task of protecting all your assets. Naturally, you want the best protection for your business assets. In the cybersecurity world, it’s generally agreed that micro-segmentation is the foundation for truly powerful, flexible, and complete cloud network security. The trouble is that conventional wisdom might tell you that you aren’t yet ready for it.

If you are using a public cloud or VMware NSX-V, you already have a limited set of basic micro-segmentation capabilities built into your infrastructure, using security groups or the NSX-V distributed firewall (DFW). But your security requirements, the way you have built your network, or your use of multiple vendors may demand more than a limited set of basic capabilities.

The greatest security benefits can be accessed by enterprises that can unleash the full potential of micro-segmentation beyond layers 3 and 4 of the OSI model, and use application-aware micro-segmentation. Generally, your cloud security choices will be based on the cloud maturity level of your organization. It’s assumed that enterprises that aren’t yet fully mature, according to typical cloud maturity models, won’t have the resources to implement the most advanced cloud security solutions.
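
To make the distinction concrete, here is a minimal, hypothetical comparison of a layer 3/4 rule and an application-aware rule, written as Python dictionaries. The field names and values are illustrative only and do not represent any vendor's policy syntax.

```python
# Layer 3/4 rule: anything on the web subnet may reach the database port.
l4_rule = {
    "source": "10.0.2.0/24",
    "destination": "10.0.3.10",
    "port": 5432,
    "action": "allow",
}

# Application-aware rule: only the named process on labeled workloads may connect,
# so a compromised host on the same subnet running a different binary is blocked.
l7_rule = {
    "source_labels": {"app": "billing", "tier": "web"},
    "source_process": "billing-api",
    "destination_labels": {"app": "billing", "tier": "db"},
    "destination_process": "postgres",
    "port": 5432,
    "action": "allow",
}
```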

But what if that’s not the case? Perhaps a different way of thinking about organizational maturity would show that you can enjoy at least some of the benefits of advanced cloud security systems. Take a closer look at a different way to assess your enterprise’s maturity.

A different way to think about your organizational maturity

Larger organizations already have a solid understanding of their maturity. They constantly monitor and reevaluate their maturity profile, so as to make the best decisions about cloud services and cloud security options. We like to compare an organization learning about the best cloud security services to people who are learning to ski.

When an adult learns how to ski, they’ll begin by buying ski equipment and signing up for ski lessons. Then they’ll spend some time learning how to use their skis and getting used to the feeling of wearing them, before they’re taught to actually ski. It could take a few lessons until an adult skis downhill. If they don’t have strong core muscles and a good sense of balance, they are likely to be sent away to improve their general fitness before trying something new. But when a child learns how to ski, they usually learn much faster than an adult, without taking as long to adjust to the new movements.

Just like an adult needs to be strong enough to learn to ski, an organization needs to be strong enough to implement cloud security services. While adults check their fitness with exercises and tests, organizations check their fitness using cloud maturity models. But typical cloud maturity models might not give an accurate picture of your maturity profile. They usually use 4, 5, or 6 levels of maturity to evaluate your organization in a number of different areas. If your enterprise hasn’t reached a particular level in enough areas, you’ll have to build up your maturity before you can implement an advanced cloud security solution.

At Guardicore, we take a different approach. We developed a solution that yields high security dividends, even if the security capabilities of your organization are not fully mature.

Assessing the maturity of ‘immature’ organizations

Most cloud security providers assume that a newer enterprise doesn’t have the maturity to use advanced cloud security systems. But we view newer enterprises like children who learn to ski. Children have less fear and more flexibility than an adult. They don’t worry about falling, and when they do fall, they simply get up and carry on. The consequences of falling can be a lot more serious for adults. In the same way, newer enterprises can be more agile, less risk-averse, and more able to try something new than an older enterprise that appears to be more mature.

Newer organizations often have these advantages:

  • Fewer silos between departments
  • Better visibility into a less complex environment
  • A much higher tolerance for risk that enables them to test new cloud services and structures, due to a lower investment in existing architecture and processes
  • A more agile and streamlined environment
  • A lighter burden of inherited infrastructure
  • A more unified environment that isn’t weakened by a patchwork of legacy items

While a newer enterprise might not be ready to run a full package of advanced cloud security solutions, it could be agile enough to implement many or most of the security features while it continues to mature. Guardicore allows young organizations to leapfrog the functions that they aren’t yet ready for, while still taking advantage of the superior protection offered by micro-segmentation. Like a child learning to ski, we’ll help you enjoy the blue runs sooner, even if you can’t yet head off-piste.

Organizational maturity in ‘mature’ organizations

Although an older, longer-established organization might seem more cloud mature, it may not be ready for advanced cloud security systems. Many older enterprises aren’t even sure what is within their own ecosystem. They face data silos, duplicate workflows, and cumbersome business processes. Factors holding them back can include:

  • Inefficient workflows
  • Long-winded work processes
  • Strange and divisive infrastructure
  • Awkward legacy environments
  • Business information that is siloed in various departments
  • Complex architectures

Here, Guardicore Centra will be instrumental in bridging the immaturity gap: It provides deep visibility through clear visualization of the entire environment, even those parts that are siloed. Guardicore Centra delivers benefits for multiple teams, and its policy engine supports (almost) any kind of organizational security policy.

What’s more, Guardicore supports phased deployment. It is not an all-or-nothing solution. An organization that can’t yet run a full set of advanced cloud security services still needs the best protection it can get for its business environment. In these situations, Guardicore helps implement only those features that your organization is ready for, while making alternative security arrangements for the rest of your enterprise. By taking it slowly, you can grow into your cloud capabilities and gradually implement the full functionality of micro-segmentation.

Flexible cloud security solutions for every organization

Guardicore’s advanced cloud security solutions provide the highest level of protection for your critical business assets. They are flexible enough to handle legacy infrastructure and complex environments, while allowing for varying levels of cloud maturity.

Whether you are a ‘young’ organization that’s not seen as cloud-mature, or an older enterprise struggling with organizational immaturity, Guardicore can help you to get your skis on. As long as you have a realistic understanding of your organization’s requirements and capabilities, you can apply the right Guardicore security solution to your business and enjoy superior protection without breaking a leg.

Lessons Learned from One of the Largest Bank Heists in Mexico

News report: $20M was stolen from Mexican banks, with the initial intention to steal $150M. Instinctively, we picture a "Casa de Papel" style heist: bank robbers wearing masks hijacking a bank and stealing money from an underground vault. This time, the bank robbers were hackers, the vault was the SPEI application, and no masks were needed. The hackers were able, figuratively, to walk right in and take the money. Nothing stopped them from entering through the back door and moving laterally until they reached the SPEI application.

Central bank Banco de México, also known as Banxico, has published an official report detailing the attack, the techniques used by the attackers and how they were able to compromise several banks in Mexico to steal $20M. The report clearly emphasizes how easy it was for the attackers to reach their goal, due to insecure network architecture and lack of controls.

The bank heist was directed at the Mexican financial system called SPEI, Mexico’s domestic money transfer platform, managed by Banxico. Once the attackers found their initial entrance into the network, they started moving laterally to find the “crown jewels”, the SPEI application servers. The report states that the lack of network segmentation enabled the intruders to use that initial access to go deeper in the network with little to no interference and reach the SPEI transaction servers easily. Moreover, the SPEI app itself and its different components had bugs and lacked adequate validation checks of communication between the application servers. This meant that within the application the attackers could create an infrastructure of control that eventually enabled them to create bogus transactions and extract the money they were after.

Questions arise: what can be learned from this heist? How do we prevent the next one? Attackers will always find their way into the network, so how do you prevent them from getting the gold?

Follow Advice to Remain Compliant

When it comes to protecting valuable customer information and achieving regulatory compliance, standards and frameworks such as PCI-DSS and SWIFT recommend the following basic steps: system integrity monitoring, vulnerability management, and segmentation and application control. For financial information, PCI-DSS requires file integrity monitoring within your Cardholder Data Environment itself, to examine the way that files change, establish the origin of such changes, and determine whether they are suspicious in nature. SWIFT requires customers to "Restrict internet access and protect critical systems from the general IT environment" and encourages companies to implement internal segmentation within each secure zone to further reduce the attack surface.
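
For readers who want a feel for what file integrity monitoring involves, here is a minimal, hypothetical sketch in Python that hashes files and compares them to a stored baseline. A production FIM deployment would also track permissions, ownership and who made each change; the paths and baseline format here are illustrative only.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def detect_changes(directory: str, baseline: dict) -> dict:
    """Compare current hashes against a stored baseline and return every file that changed or appeared."""
    changes = {}
    for path in Path(directory).rglob("*"):
        if path.is_file():
            current = hash_file(path)
            if baseline.get(str(path)) != current:
                changes[str(path)] = current
    return changes
```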

Let’s look at a few guidelines, as detailed by SWIFT while incorporating our general advice on remaining compliant in a hybrid environment.

  • Inbound and outbound connectivity for the secure zone is fully limited.
  • Transport layer stateful firewalls are used to create logical separation at the boundary of the secure zone.
  • No “allow any” firewall rules are implemented, and all network flows are explicitly authorized (a minimal sketch of such an explicit allow-list follows this list).
  • Operators connect from dedicated operator PCs located within the secure zone (that is, PCs located within the secure zone and used only for secure zone purposes).
  • SWIFT systems within the secure zone restrict administrative access to only expected ports, protocols, and originating IPs.
  • Internal segmentation is implemented between components in the secure zone to further reduce the risk.
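
As an illustration of what "explicitly authorized" flows look like in practice, here is a minimal, hypothetical allow-list in Python with a deny-by-default check. The addresses, ports and rule format are invented for the example and are not taken from SPEI, SWIFT or any bank's actual configuration.

```python
import ipaddress

# Every permitted flow is listed explicitly; anything not matched is denied (no "allow any").
ALLOW_RULES = [
    {"source": "10.10.1.0/28", "dest": "10.10.2.5", "port": 22,  "proto": "tcp"},   # admin only from operator PCs
    {"source": "10.10.3.0/28", "dest": "10.10.2.5", "port": 443, "proto": "tcp"},   # app tier to transaction server
]

def is_allowed(src_ip: str, dest_ip: str, port: int, proto: str) -> bool:
    """Deny by default: a flow is permitted only if an explicit rule matches it."""
    for rule in ALLOW_RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["source"])
                and dest_ip == rule["dest"]
                and port == rule["port"]
                and proto == rule["proto"]):
            return True
    return False

# A workstation outside the operator subnet trying to reach the admin port is denied.
assert is_allowed("10.10.9.7", "10.10.2.5", 22, "tcp") is False
```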

SPEI servers, which serve a similar function to SWIFT application servers, should adhere to similar regulatory requirements, and as Banxico elaborates in its official analysis report, such regulations are now taking shape for this critical application.

Don’t Rely on Traditional Security Controls

The protocols detailed above are recommended by security experts and compliance regulations worldwide, so it's safe to assume the Mexican bank teams were aware of the benefits of such controls. Many of them have even been open about their attempts to implement these kinds of controls with traditionally available tools such as VLANs and endpoint firewalls. This has proven to be a long, costly and tiresome process, sometimes requiring nine months of work to segment a single SWIFT application! Would you take nine months to install a metal gate around your vault and between your vault compartments? I didn't think so…

Guardicore Centra is built to resolve this challenge. By moving away from traditional segmentation methods to micro-segmentation built on foundational, actionable data center visibility, the technology delivers quick time to value, with controls down to the process level. Our customers, including Santander Brasil and BancoDelBajio in Mexico, benefit from early wins such as protecting critical assets or achieving regulatory compliance, avoiding the "all or nothing segmentation" trap that can occur when competitors do not take a phased approach.

Guardicore provides the whole package for securing the data center: real-time and historical visibility down to the process level, segmentation and micro-segmentation supporting a range of use cases, and breach detection and response, to thoroughly strengthen our clients' overall security posture.

Micro-segmentation is more achievable than ever before. Let’s upgrade your company’s security practices to prevent attackers from gaining access to sensitive information and crown jewels in your hybrid data center. Request a demo now or read more about smart segmentation.

What’s New in Infection Monkey Release 1.6

We are proud to announce the release of a new version of the Infection Monkey, GuardiCore’s open-source Breach and Attack Simulation (BAS) tool. Release 1.6 introduces several new features and a few bug fixes.
