Posts

Are You up to Date with the Latest Guardicore Cyber Security Ecosystem News?

2019 was an incredible year of growth and innovation for Guardicore and the world-class technology ecosystem that passionately supports it. The future for software-defined cloud and data center security transformation looks more attainable than ever. A growing number of technology vendors, both large and small, now work with us to deliver joint solutions to solve some of the biggest cyber security pain points of today’s enterprise customers.

We are honored to have some of the world’s most well-known companies as our customers, and to work together with them to secure their most critical assets as part of their digital transformation strategies. These customers build, run and manage an integrated set of applications and services to deliver a unique experience for their own internal and external customers in turn. Guardicore, alongside our technology alliance partners, provides a pragmatic, enterprise-ready solution that allows our customers to embrace a complex and innovative hybrid cloud environment, both culturally and through technology. As we continue to evolve in this new year, I wanted to highlight a few updates.

Cloud Updates:

Guardicore is now available on the Microsoft Azure marketplace as a preferred solution after earning an IP co-sell status. Customers worldwide can now gain access to the Guardicore Centra security platform directly from the Azure marketplace.

Guardicore was selected to join the AWS Outposts launch announcement. Outposts are developed, installed and deployed by AWS on customer premises and managed as if they are part of the cloud. Read more about it in our recent blog.

Don’t miss the recent AWS and Guardicore Webinar featuring our own Dave Klein and Moe Alhassan, Partner Solutions Architect at AWS, on securing and monitoring critical assets and applications on AWS.

Native Cloud Orchestration Updates:

Guardicore now provides out-of-the-box native integration with all of the large cloud service providers: Amazon Web Services, Microsoft Azure, Google Cloud Platform and Oracle Cloud Infrastructure. This is in addition to VMware and OpenStack integration and support for other orchestration services via a built-in RESTful API. This allows our customers to truly embrace a hybrid cloud infrastructure, migrating from on-premises data centers to the cloud or clouds of their choice with the right technologies to meet their needs, whether that’s hosted servers, IaaS, PaaS or hybrid.
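As a sketch of how orchestration metadata can feed this kind of integration, the snippet below converts cloud-provider instance tags into segmentation labels. The tag keys, label schema and instance shape are illustrative assumptions only, not Guardicore’s actual API.

```python
# Hypothetical sketch: turning cloud-provider instance tags into
# key/value segmentation labels that an orchestration integration
# could ingest. The tag keys and label schema are invented examples.

def tags_to_labels(instance):
    """Map a cloud instance's tags to lowercase segmentation labels."""
    labels = {}
    for tag in instance.get("tags", []):
        key, value = tag["Key"], tag["Value"]
        # Only a chosen subset of tag keys become labels (assumption)
        if key in ("Environment", "App", "Role"):
            labels[key.lower()] = value
    return labels

# Example inventory entry, shaped like a cloud SDK response (assumed)
instance = {
    "id": "i-0abc123",
    "tags": [
        {"Key": "Environment", "Value": "production"},
        {"Key": "App", "Value": "billing"},
        {"Key": "Owner", "Value": "alice"},  # ignored: not a label key
    ],
}

print(tags_to_labels(instance))
```

The same mapping can run on a schedule so labels track the cloud inventory as workloads come and go.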

New Ecosystem Product Certifications:

We are happy to announce that the Splunk application for Guardicore has passed the Splunk certification process. The application and the add-on are now available directly from Splunkbase. Guardicore integration is available for Splunk version 7.3 and above, including the newly released Splunk version 8.x.

Guardicore Centra is now listed in the SUSE catalogue, which you can find here, and is a proud member of the SolidDriver program. It is also available in the IBM Global Solution Directory.

Identity Management Updates

Guardicore completed an integration as well as product certifications with Privileged Access Management solution provider CyberArk (the Centra Privileged Session Management plugin is available from the CyberArk marketplace) and with identity providers Okta, Duo, Ping Identity, Ilex International and Red Hat SSO, using SAML and Active Directory integration. To learn more about using Guardicore Centra with CyberArk, read our blog on the integration.

On-premises Virtual Desktops and Desktop-as-a-Service

Guardicore Centra is validated as Citrix ready for Citrix Virtual Apps and Desktops and is listed in the Citrix Ready Marketplace. You can read more about it in this blog. In addition, we have shared information on how Centra can be used to segment users on Amazon Workspaces (DaaS).

We’re also excited about the future innovation that will be announced and demonstrated later this year. As our technology partners continue to work with us to deliver integrated solutions, you can expect more exciting announcements. Stay tuned and keep up with our blog for the most up-to-date information.

Want to learn more about how Guardicore micro-segmentation can help you protect AWS workloads? Download our white paper on supplementing cloud security and going beyond the shared security model.

Read More

When Firewalls & Traditional Segmentation Fail, What’s the Next Big Thing?

Ask many of today’s enterprise businesses what the most important factors are to remain competitive in their industry, and you’re likely to get an answer that includes both speed and innovation. There’s always another competitor snapping at your heels, and there aren’t enough hours in the day to get through your to-do list. The faster you can go live with new features and updates, the better.

For many, this comes at a severely high price – security. If speed and innovation are the top items on the agenda, how can you balance this with keeping your sensitive information or critical assets safe? Of course, pushing security onto the back burner is never a solution, as increased risk, compliance and internal governance mandates will continually remind us.

Fellow cybersecurity evangelist Tricia Howard and I discussed this conundrum a while back. She came up with a terrific visual representation of this dilemma, the Penrose Triangle, shown below. This diagram, also known as the ‘impossible triangle’, is an optical illusion. In this drawing, the two bottom points, speed and innovation, make the top point, security, seem like it’s further away – but it’s not.

penrose triangle

Penrose “Impossible” Triangle. Used in an analogy to modern IT challenges as proposed by cyber evangelist Tricia Howard.

First, let’s look at how organizations are achieving the speed and innovation corners of this triangle, and then we can see why securing our IT environments has become more of a challenge, yet remains an achievable one.

Understanding the Cloud and DevOps Best Practices

There are two key elements to the DevOps process as we know it today. The first is simplifying management by decoupling it from underlying platforms. Instead of managing each system or platform separately, DevOps and cloud best practices seek solutions that provide an abstraction layer. Using this layer, enterprises can work across all systems, from legacy to future-focused, without impediment. This streamlining has become essential in today’s enterprises, which run everything from legacy, end-of-life operating systems and platforms to modern virtualized environments, clouds and containers.

Secondly, DevOps and cloud best practices utilize automated provisioning, management and autoscaling of workloads, allowing teams to work faster and smarter. These are implemented through playbooks and scripts, using tools such as Chef, Puppet and Ansible, to name a few.

Sounds Great, but not for Traditional Segmentation Tools

These new best practices allow enterprises to push out new features quickly, remain competitive, and act at the speed of today’s fast-paced world. However, securing these environments with traditional security methods is all but impossible.

Historically, organizations would use firewalls, VLANs and ACLs for on-premises systems, and then virtualized firewalls and Security Groups in their cloud environments. Without an established external perimeter, with so many advanced cyberattacks, and with dynamic change happening all the time, these have now become yesterday’s solution. Here are just some of the problems:

  • Complex to manage: Running multiple systems just isn’t realistic. Using firewalls, VLANs and ACLs on-premises and security groups in the cloud, for example, means you have multiple systems to manage, which adds management complexity, is resource intensive and does not provide the seamless visibility required. The rule-sets vary, and can even contradict one another, and you don’t know whether gaps are leaving you open to unnecessary risk.
  • Increased maintenance: Changes to these systems must be carried out manually, yet nothing less than automation is enough for today’s complex IT environments. You may have tens of thousands of servers or communication flows to handle, and it’s impossible to manage them by hand.
  • Low visibility: For strong security, your business needs to be able to see down to the process level, including user identity and domain name information, across all systems and assets. Without this basic visibility, your IT teams cannot understand application and user workflows or behavior, and any simple change could cause an outage or a problem that slows down business as usual.
  • Platform-specific: VLANs do not work in the cloud, for example, and security groups won’t help on-premises. To ensure wide coverage, you need a security solution that can visualize and control everything, from the most legacy infrastructure or bare-metal servers all the way through to clouds, containers and serverless computing.
  • Coarse controls: The most common traditional segmentation tools are port and IP-based, despite today’s attackers going after processes, users or workloads for their attacks. Firewalls are innately perimeter controls, so cannot be placed between most traffic points. While companies attempt to fix this by re-engineering traffic flows, this is a huge effort that can become a serious bottleneck.

Introducing Software-Defined Segmentation: An Approach That Works with DevOps From the Start

With these challenges in mind, there are security solutions that take advantage of DevOps and cloud best practices, and allow us to build an abstraction layer that simplifies visibility and control across our environment in a seamless, streamlined fashion. One that allows us to take advantage of DevOps and cloud automation to gain speed as well.

Software-defined segmentation is built to address the challenges of traditional tools for the hybrid cloud and modern data center from the start. Just as with cloud or DevOps processes, visibility and policy management are decoupled from the underlying platforms, working on an abstraction layer across all environments and operating systems. On one unified platform, organizations can gain deep visibility and control over their entire IT ecosystem, from legacy systems through to the most future-focused technology. The insight you receive is far more granular than with any traditional segmentation tool, allowing you to see at a glance the dependencies among applications, users, and workloads, making it simple to define and enforce the right policy for your business needs. These policies can be enforced by process, user identity, and FQDN, rather than relying on port and IP, which will do little to thwart today’s advanced threats.

Software-defined segmentation follows the DevOps mindset in more ways than one. It incorporates the same techniques for efficiency, innovation and speed, such as automated provisioning, management, and autoscaling. Developers can continue to embrace a ‘done once, done right’ attitude, using playbooks and scripts such as Chef, Puppet and Ansible to speed up the process from end to end, and automate faster, rather than rely on manual moves, changes, adds or deletes.
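The ‘done once, done right’ automation described above boils down to a reconcile loop: declare the desired rule set, diff it against the live state, and apply only the changes. The sketch below illustrates that pattern with an invented rule format; it is not any specific product’s API.

```python
# Illustrative sketch of declarative policy automation: compute which
# segmentation rules to add or remove so the live state matches the
# desired state -- the same reconcile pattern configuration-management
# tools use. Rules here are (source_label, dest_label, port) tuples,
# an invented format for illustration.

def reconcile(desired, current):
    """Return (to_add, to_remove) as sorted lists of rule tuples."""
    desired, current = set(desired), set(current)
    return sorted(desired - current), sorted(current - desired)

desired = [("web", "db", 5432), ("web", "cache", 6379)]
current = [("web", "db", 5432), ("legacy", "db", 1521)]

to_add, to_remove = reconcile(desired, current)
print(to_add)     # rules missing from the live environment
print(to_remove)  # stale rules to retire
```

Because the same desired state can be re-applied any number of times with no extra effect, the loop is safe to run from a pipeline on every change.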

Embrace the New, but Cover the Old

Software-defined segmentation is a new age for cybersecurity, providing a faster, more granular way for enterprises to protect their critical assets. Projects that in the past may have spanned many years can now be done in a matter of a few weeks with this new approach, quickly reducing risk and validating compliance.

If your segmentation solution is stuck in the past, you’re leaving yourself open to risk, making it far easier for hackers to launch an attack, and you’re unlikely to be living up to the necessary compliance mandates for your industry.

Instead, think about a new approach that, just like your DevOps practices, is decoupled from any particular infrastructure, and is both automatable and auto-scalable. On top of this, make sure that it provides equal visibility and control across the board in a granular way, so that speed and innovation can thrive, with security an equal partner in the triangle of success.

Securing modern data centers and clouds needs a whole new approach to segmentation. To learn more about it, check out our white paper.

Download now

Segmenting Users on AWS WorkSpaces – Yes It’s a Thing, and Yes, You Should Be Doing It!

I recently came across a Guardicore financial services customer with a very interesting use case. They were looking to protect their virtual desktop (VDI) environment in the cloud.

The customer’s setup is a hybrid cloud: it has legacy systems on-premises, including bare-metal servers, Solaris and some older technologies. It also utilizes many virtual environments such as VMware ESX, Nutanix and OpenStack.

Concurrently with this infrastructure, the customer has started using AWS and Azure and plans to use containers in these platforms, but has not yet committed to anything specific.

One interesting element was how the customer was migrating its on-premises Citrix VDI environment to Amazon WorkSpaces. The customer was happy using Amazon WorkSpaces and had therefore decided to move to them in full production. Amazon WorkSpaces were especially useful for our customer since the majority of its users work remotely, and it was much easier to have those users working with an Amazon WorkSpace than relying on the on-premises Citrix environment.

So, what is an AWS WorkSpace anyway?

In Forrester’s Now Tech: Cloud Desktops, Q4 2019 report, cloud desktops and their various offerings are discussed. Forrester states that “you can use cloud desktops to improve employee experience (eX), enhance workforce continuity, and scale business operations rapidly.” This is exactly what our customer was striving to achieve with AWS WorkSpaces.

AWS’s cloud desktops are named Amazon WorkSpaces: a Desktop-as-a-Service (DaaS) solution running either Windows or Linux desktops. AWS provides this pay-as-you-launch service all around the world. According to AWS, “Amazon WorkSpaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.”

To get started with Amazon WorkSpaces, click here.

Our customer was using Amazon WorkSpaces and scaling their utilization rapidly, which created a need to add a security layer to these cloud desktops. When users access Amazon WorkSpaces, they are automatically assigned a WorkSpace and a dynamic IP. Controlling this access is challenging with traditional, IP-based network segmentation solutions. Thus, our customer was looking for a solution with the following features:

    • Visibility:
      • First and foremost, visibility within the newly adopted cloud platform.
      • Secondly, not just an understanding of traffic among legacy systems on-premises and in the cloud individually, but visibility into inter-platform communications, too.
    • Special attention for Amazon WorkSpaces:
      • User-level protection: Controlling which users from Amazon WorkSpaces should and could interact with the various applications the customer owned, on-premises or in the cloud.
      • Single policy across the hybrid cloud: What was once implemented on-premises alone now needed to be implemented in the cloud, and not only in the cloud but also from cloud to on-premises applications. The customer was looking for simplicity: a single tool to control all policies across any environment.

Tackling this Use Case with Guardicore Centra

Our customer evaluated several solutions for visibility, segmentation and user identity management. The customer eventually chose Guardicore Centra for its ability to deliver all of the above from a single pane of glass, swiftly and simply.

Guardicore was able to provide visibility of all workloads, on-premises or in the cloud, across virtual, bare-metal and cloud environments, including all assets, giving our customer the governance they needed over all traffic and flows, including between environments.

On top of visibility, Centra allowed an unprecedented amount of control for the customer. Guardicore policies were set to control and enforce allowed traffic, adding an additional layer of user identity policies to control which users from Amazon WorkSpaces could talk to which on-premises applications. As mentioned previously, upon access to Amazon WorkSpaces, users are automatically assigned a WorkSpace with a dynamic IP. Traditional, IP-based tools are therefore inadequate and do not provide the flexibility needed to control these users’ access. In contrast, Guardicore Centra enables creating policies based on the user’s identity to the data center and applications, regardless of IP or WorkSpace.


Where Guardicore Centra Stands Apart from the Competition

Guardicore Centra provides distributed, software-based segmentation with user identity access management, enabling an additional layer of control over the network among any workloads.

Centra enables creating policy rules based on the identity of the logged-in user. Identities are pulled from the organizational Active Directory integrated with Centra. Centra requires no network changes and no downtime or reboot of systems. Policies are seamlessly created and take effect in real time, controlling new and active sessions alike.
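To make the idea concrete, here is a minimal sketch of identity-based rule evaluation: the decision keys on the logged-in user’s directory group rather than the WorkSpace’s dynamic IP. The rule format, group names and application names are invented for illustration and are not Centra’s actual policy model.

```python
# Hypothetical sketch of identity-based policy evaluation. The rule
# table, group names, and app names are invented examples; the point
# is that the decision never touches an IP address.

RULES = [
    {"user_group": "Finance", "app": "billing", "action": "allow"},
    {"user_group": "*",       "app": "billing", "action": "block"},
]

def evaluate(user_groups, app):
    """First matching rule wins; default-deny if nothing matches."""
    for rule in RULES:
        group_ok = rule["user_group"] == "*" or rule["user_group"] in user_groups
        if rule["app"] == app and group_ok:
            return rule["action"]
    return "block"

print(evaluate({"Finance", "Staff"}, "billing"))  # allow
print(evaluate({"Engineering"}, "billing"))       # block
```

Because the user’s groups come from the directory at session time, the same rule keeps working no matter which WorkSpace or IP the user lands on.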

This use case is just one example of how Guardicore Centra simplifies segmentation and gives customers fine-grained visibility and control. Centra allows an enterprise to control user access anywhere, setting policy that applies even when multiple users are logged in to the same system at the same time, as well as managing third-party, administrator and network user access to the network.

Want to learn more about securing and monitoring critical assets and applications on AWS? Join our live webinar with AWS on Thursday, December 12th at 1:00pm Eastern.
Register Now

Where to Start? Moving from the Theory of Zero Trust to Making it Work in Practice

Going back many years, perimeter controls were traditionally adequate for protecting enterprise networks that held critical assets and data. The hypothesis was that if you had strong external perimeter controls, watching your ingress and egress should be adequate for protection. If you were a more sophisticated or larger entity, there would be additional fundamental separation points between portions of your environment. However, these were still viewed and functioned as additional perimeter points, merely concentric circles of trust with the ability, more or less, to move freely about. In cases where threats occurred within your environment, you would hope to catch them as they crossed one of these rudimentary borders.

The Moment I Realized that Perimeters Aren’t Enough

This practice worked moderately well for a while. However, around fifteen years ago, security practitioners began to feel a nascent itch, a feeling that this was not enough. I personally remember working on a case, a hospital attacked by a very early spear phishing attack that mimicked a help desk request for a password reset. Clicking on a URL in a very official-looking email, staff were sent to a fake but official-looking website where these hospital professionals were prompted to reset their credentials – or so they thought. Instead, the attack began. This was before the days of the Darknet, and we even caught the German hacker boasting about what he had done, sharing the phishing email and fake website on a hacker messaging board. I worked for a company that had a fantastic IPS solution, and upon deploying it we were able to quickly catch the individual’s exfils. At first, we seemed to be winning. We cut the attacker off from major portions of a botnet that resided on the cafeteria cash registers, most of the doctors’ machines and, to my horror, even on the automated pharmacy fulfillment computers. Two weeks later, I received a call: the attacker was back, trying to get around the IPS device in new ways. While we were able to suppress the attack for the most part, I finally had to explain to the hospital IT staff that my IPS sat merely at the entrances and exits of their network, and that to really stop these attacks we needed to look at all of the machines and applications that resided within their environment. We needed the ability to look at traffic before it made its way to and from the exits. This was the first of many realizations for me that reliance on perimeter-based security was slowly and surely eroding.

In the years since, the concept of a perimeter has all but completely eroded. Of course, it took quite a while for the larger population to accept this. Acceptance was helped along by the business and application interdependencies that bring vendors, contractors, distributors and applications through your enterprise, as well as the emergence of cloud and cloud-like provisioning utilized by DevOps. The concept of true perimeters as a main method of prevention is no longer tangible.

It was this reality that spurred the creation of Forrester’s Zero Trust model almost a decade ago. The basic premise is that no person or device is automatically given access or trusted without verification. In theory, this is simple. In practice, however, especially in data centers that have become increasingly hybrid and complex, it can get complicated fast.

Visibility is Foundational for Zero Trust

A cornerstone of Zero Trust is to ‘assume access.’ This means that any enterprise should assume that an attacker has already breached the perimeter. This could be through stolen credentials, a phishing scam, basic hygiene issues such as poor passwords, account control and patching regimens, an IoT or third-party device, a brute force attack, or any of the countless other vectors present in today’s dynamic data centers.

Protecting your digital crown jewels through this complex landscape is getting increasingly tough. From isolating sensitive data for compliance or customer security, to protecting the critical assets that your operation relies on to run smoothly, you need to be able to visualize, segment and enforce rules to create an air-tight path for communications through your ecosystem.

As John Kindervag, the founder of Zero Trust, once said, in removing “the Soft Chewy Center” and moving towards a Zero Trust environment, visibility is step one. Without having an accurate, real-time and historical map of your entire infrastructure, including on-premises and both public and private clouds, it’s impossible to be sure that you aren’t experiencing gaps or blind spots. As Forrester analyst Chase Cunningham mandates in the ZTX Ecosystem Strategic Plan, “Visibility is the key in defending any valuable asset. You can’t protect the invisible. The more visibility you have into your network across your business ecosystem, the better chance you have to quickly detect the tell-tale signs of a breach in progress and to stop it.”

What Should Enterprises Be Seeing to Enable a Zero Trust Model?

Visibility itself is a broad term. Here are some practical necessities that are the building blocks of Zero Trust, and that your map should include.

  • Automated logging and monitoring: With an automated map of your whole infrastructure that updates without the need for manual support, your business has an always-accurate visualization of your data center. When something changes unexpectedly, this is immediately visible.
  • Classification of critical assets and data: Your stakeholders need to be able to read what they can see. Labeling and classification are therefore an integral element of visibility. Flexible labeling and grouping of assets streamlines visibility, and later, policy creation.
  • Relationships and dependencies: The best illustration of the relationships and dependencies of assets, applications and flows will give insight all the way down to process level.
  • Context: This starts with historical data as well as real-time, so that enterprises can establish baselines to use for smart policy creation. Your context can be enhanced with orchestration metadata from the cloud or third-party APIs, imported automatically to give more understanding to what you’re visualizing.
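As a small illustration of the ‘classification’ and ‘context’ items above, the sketch below collapses raw flow records into a label-level baseline of the kind used for smart policy creation. The IPs, labels and flow shape are invented examples, not output from any particular product.

```python
# Illustrative sketch: summarizing raw flow logs into a baseline of
# traffic between labeled asset groups. The IP-to-label mapping and
# the (src, dst, port) flow shape are assumptions for this example.

from collections import Counter

labels = {"10.0.0.5": "web", "10.0.1.9": "db"}

flows = [
    ("10.0.0.5", "10.0.1.9", 5432),
    ("10.0.0.5", "10.0.1.9", 5432),
    ("10.0.0.5", "10.0.1.9", 22),
]

def baseline(flows, labels):
    """Count flows per (src_label, dst_label, port); unlabeled IPs map to 'unknown'."""
    counts = Counter()
    for src, dst, port in flows:
        key = (labels.get(src, "unknown"), labels.get(dst, "unknown"), port)
        counts[key] += 1
    return counts

print(baseline(flows, labels))
```

Here the frequent web-to-db traffic on 5432 suggests a candidate allow rule, while the lone flow on port 22 stands out as something to investigate before writing policy.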

Next Step… Segmentation!

Identifying all resources across all environments is just step one, but it’s an essential first step for a successful approach to establishing a Zero Trust model. Without visibility into users, their devices, workloads across all environments, applications, and data itself, moving onto segmentation is like grasping in the dark.

In contrast, with visibility at the start, it’s intuitive to sit down and identify your enterprise’s most critical assets, decide on your unique access permissions and grouping strategy for resources, and to make intelligent and dynamic modifications to policy at the speed of change.

Want to read more about visibility and Zero Trust? Get our white paper about how to move toward a Zero Trust framework faster.

Read More

Moving Zero Trust from a Concept to a Reality

Most people understand the reasoning and the reality behind a zero trust model. While historically, a network perimeter was considered sufficient to keep attacks at bay, today this is not the case. Zero trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. This added layer of security has been shown to be much more useful and capable in preventing breaches.

But how can organizations move from a concept or idea to implementation? Using tools built on 15-20-year-old technologies is not adequate.

There is a growing demand for IT resources that can be accessed in a location-agnostic way, and cloud services are being used more widely than ever. These facts, on top of businesses embracing broader use of distributed application architectures, mean that neither traditional nor next-generation firewalls are effective any longer for risk reduction.

The other factor to consider is that new malware and attack vectors are being discovered every day, and businesses have no idea where the next threat might come from. It’s more important than ever to use micro-segmentation and micro-perimeters to limit the fallout of a cyber attack.

How does applying the best practices of zero trust combat these issues?

Simply put, implementing the zero trust model creates and enforces small segments of control around sensitive data and applications, increasing your data security overall. Businesses can use zero trust to monitor all network traffic for malicious activity or unauthorized access, limiting the risk of lateral movement through escalating user privileges and improving breach detection and incident response. As Forrester Research, which originally introduced the concept, explains, with zero trust, network policy can be managed from one central console through automation.

The Guardicore principles of zero trust

At Guardicore, we support IT teams in implementing zero trust with the support of our four high level principles. Together, they create an environment where you are best-placed to glean the benefits of zero trust.

  • A least privilege access strategy: Access permissions are only assigned based on a well-defined need: ‘never trust, always verify.’ This doesn’t stop at users alone. We also include applications, and even the data itself, with continuous review of the need for access. Group permissions can help make this seamless, and individual assets or elements can then be removed from each group as necessary.
  • Secure access to all resources: This is true regardless of the resource’s location or its user. Our authentication level is the same both inside and outside of the local area network; for example, services from the LAN will not be available via VPN.
  • Access control at all levels: Both the network itself and each resource or application need multi-factor authentication.
  • Audit everything: Rather than simply collecting data, we review all of the collected logs, using automation to generate alerts where necessary. These bots perform multiple actions, such as our ‘nightwatch bot’ that generates phone calls to the right member of staff in the case of an emergency.

However, knowing these principles and understanding the benefits of zero trust is not the same as being able to implement it securely, with the right amount of flexibility and control.

Many companies fall at the first hurdle, unsure how to gain full visibility of their ecosystem. Without this, it is impossible to define policy clearly, set up the correct alerts so that business can run as usual, or stay on top of costs. If your business does not have the right guidance or skill-sets, the zero trust model becomes a ‘nice to have’ in theory but not something that can be achieved in practice.

It all starts with the map

With a zero trust model that starts with deep visibility, you can automatically identify all resources across all environments, at both the application and network level. At this point, you can work out what you need to enforce, turning to technology once you know what you’re looking to build as a strategy for your business. Other solutions will start with their capabilities, using these to suggest enforcement, which is the opposite of what you need, and can leave gaps where you need policy the most.

It’s important to ensure that you have a method in place for classification so that stakeholders can understand what they are looking at on your map. We bring in data from third-party orchestration, using automation to create a highly accessible map that is simple to visualize across both technical and business teams. With a context-rich map, you can generate intelligence on malicious activity even at the application layer, and tightly enforce policy without worrying about the impact on business as usual.

With these best practices in mind, and a map as your foundation – your business can achieve the goals of zero trust, enforcing control around sensitive data and apps, finding malicious activity in network traffic, and centrally managing network policy with automation.

Want to better understand how to implement segmentation for securing modern data centers to work towards a zero trust model?

Download our white paper

You don’t have to be mature in order to be more secure – cloud, maturity, and micro-segmentation

Whether you’ve transitioned to the cloud, are still using on-prem servers, or are operating on a hybrid system, you need security services that are up to the task of protecting all your assets. Naturally, you want the best protection for your business assets. In the cybersecurity world, it’s generally agreed that micro-segmentation is the foundation for truly powerful, flexible, and complete cloud network security. The trouble is that conventional wisdom might tell you that you aren’t yet ready for it.

If you are using a public cloud or VMware NSX-V, you already have a limited set of basic micro-segmentation capabilities built into your cloud infrastructure, using security groups and the distributed firewall (DFW) in NSX-V. But your security requirements, the way you have built your network, or your use of multiple vendors may require more than a limited set of basic capabilities.

The greatest security benefits can be accessed by enterprises that can unleash the full potential of micro-segmentation beyond layers 3 and 4 of the OSI model, and use application-aware micro-segmentation. Generally, your cloud security choices will be based on the cloud maturity level of your organization. It’s assumed that enterprises that aren’t yet fully mature, according to typical cloud maturity models, won’t have the resources to implement the most advanced cloud security solutions.

But what if that’s not the case? Perhaps a different way of thinking about organizational maturity would show that you can enjoy at least some of the benefits of advanced cloud security systems. Take a closer look at a different way to assess your enterprise’s maturity.

A different way to think about your organizational maturity

Larger organizations already have a solid understanding of their maturity. They constantly monitor and reevaluate their maturity profile, so as to make the best decisions about cloud services and cloud security options. We like to compare an organization learning about the best cloud security services to people who are learning to ski.

When an adult learns how to ski, they’ll begin by buying ski equipment and signing up for ski lessons. Then they’ll spend some time learning how to use their skis and getting used to the feeling of wearing them, before they’re taught to actually ski. It could take a few lessons until an adult skis downhill. If they don’t have strong core muscles and a good sense of balance, they are likely to be sent away to improve their general fitness before trying something new. But when a child learns how to ski, they usually learn much faster than an adult, without taking as long to adjust to the new movements.

Just like an adult needs to be strong enough to learn to ski, an organization needs to be strong enough to implement cloud security services. While adults check their fitness with exercises and tests, organizations check their fitness using cloud maturity models. But typical cloud maturity models might not give an accurate picture of your maturity profile. They usually use 4, 5, or 6 levels of maturity to evaluate your organization in a number of different areas. If your enterprise hasn’t reached a particular level in enough areas, you’ll have to build up your maturity before you can implement an advanced cloud security solution.

At Guardicore, we take a different approach. We developed a solution that yields high security dividends, even if the security capabilities of your organization are not fully mature.

Assessing the maturity of ‘immature’ organizations

Most cloud security providers assume that a newer enterprise doesn’t have the maturity to use advanced cloud security systems. But we view newer enterprises like children who learn to ski. Children have less fear and more flexibility than an adult. They don’t worry about falling, and when they do fall, they simply get up and carry on. The consequences of falling can be a lot more serious for adults. In the same way, newer enterprises can be more agile, less risk-averse, and more able to try something new than an older enterprise that appears to be more mature.

Newer organizations often have these advantages:

  • Fewer silos between departments
  • Better visibility into a less complex environment
  • A much higher tolerance for risk that enables them to test new cloud services and structures, due to a lower investment in existing architecture and processes
  • A more agile and streamlined environment
  • A lighter burden of inherited infrastructure
  • A more unified environment that isn’t weakened by a patchwork of legacy items

While a newer enterprise might not be ready to run a full package of advanced cloud security solutions, it could be agile enough to implement many or most of the security features while it continues to mature. Guardicore allows young organizations to leapfrog the functions that they aren’t yet ready for, while still taking advantage of the superior protection offered by micro-segmentation. Like a child learning to ski, we’ll help you enjoy the blue runs sooner, even if you can’t yet head off-piste.

Organizational maturity in ‘mature’ organizations

Although an older, longer-established organization might seem more cloud mature, it may not be ready for advanced cloud security systems. Many older enterprises aren’t even sure what is within their own ecosystem. They face data silos, duplicate workflows, and cumbersome business processes. Factors holding them back can include:

  • Inefficient workflows
  • Long-winded work processes
  • Disparate and divisive infrastructure
  • Awkward legacy environments
  • Business information that is siloed in various departments
  • Complex architectures

Here, Guardicore Centra is instrumental in bridging the immaturity gap: it provides deep visibility through clear visualization of the entire environment, even the parts that are siloed. Guardicore Centra delivers benefits for multiple teams, and its policy engine supports almost any kind of organizational security policy.

What’s more, Guardicore supports phased deployment. It is not an all-or-nothing solution. An organization that can’t yet run a full set of advanced cloud security services still needs the best protection it can get for its business environment. In these situations, Guardicore helps implement only those features that your organization is ready for, while making alternative security arrangements for the rest of your enterprise. By taking it slowly, you can grow into your cloud capabilities and gradually implement the full functionality of micro-segmentation.

Flexible cloud security solutions for every organization

Guardicore’s advanced cloud security solutions provide the highest level of protection for your critical business assets. They are flexible enough to handle legacy infrastructure and complex environments, while allowing for varying levels of cloud maturity.

Whether you are a ‘young’ organization that’s not seen as cloud-mature, or an older enterprise struggling with organizational immaturity, Guardicore can help you to get your skis on. As long as you have a realistic understanding of your organization’s requirements and capabilities, you can apply the right Guardicore security solution to your business and enjoy superior protection without breaking a leg.

Lessons Learned from One of the Largest Bank Heists in Mexico

News report: $20M was stolen from Mexican banks, and the attackers’ initial goal was $150M. We instinctively picture a “Casa de Papel”-style heist: masked robbers hijacking a bank and stealing money from an underground vault. This time the robbers were hackers, the vault was the SPEI application, and no masks were needed. The hackers were able to figuratively walk right in and take the money. Nothing stopped them from entering through the back door and moving laterally until they reached the SPEI application.

Central bank Banco de México, also known as Banxico, has published an official report detailing the attack, the techniques used by the attackers and how they were able to compromise several banks in Mexico to steal $20M. The report clearly emphasizes how easy it was for the attackers to reach their goal, due to insecure network architecture and lack of controls.

The bank heist was directed at the Mexican financial system called SPEI, Mexico’s domestic money transfer platform, managed by Banxico. Once the attackers found their initial entrance into the network, they started moving laterally to find the “crown jewels”, the SPEI application servers. The report states that the lack of network segmentation enabled the intruders to use that initial access to go deeper in the network with little to no interference and reach the SPEI transaction servers easily. Moreover, the SPEI app itself and its different components had bugs and lacked adequate validation checks of communication between the application servers. This meant that within the application the attackers could create an infrastructure of control that eventually enabled them to create bogus transactions and extract the money they were after.

Questions arise: what can be learned from this heist? How do we prevent the next one? Attackers will always find their way into the network, so how do you stop them from getting the gold?

Follow Advice to Remain Compliant

When it comes to protecting valuable customer information and achieving regulatory compliance, frameworks such as PCI-DSS and SWIFT’s Customer Security Programme recommend the following basic steps: system integrity monitoring, vulnerability management, and segmentation and application control. For financial information, PCI-DSS regulations enforce file integrity monitoring on the Cardholder Data Environment itself, to examine how files change, establish the origin of such changes, and determine whether they are suspicious in nature. SWIFT regulations require customers to “Restrict internet access and protect critical systems from the general IT environment” and encourage companies to implement internal segmentation within each secure zone to further reduce the attack surface.
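The file integrity monitoring step above can be sketched in a few lines. This is a minimal illustration of the idea, not a production FIM tool: baseline a SHA-256 digest for every file in a monitored directory, then flag any file that was added, removed, or modified. Real FIM products also record who made each change and when, which is out of scope here.

```python
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Map each file path under root to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def changed_files(baseline: dict[str, str], current: dict[str, str]) -> set[str]:
    """Files added, removed, or modified since the baseline was taken."""
    return {
        path
        for path in baseline.keys() | current.keys()
        if baseline.get(path) != current.get(path)
    }
```

In use, you would take a `snapshot()` of the monitored environment at a known-good point, rerun it on a schedule, and alert on any non-empty `changed_files()` result that doesn't correspond to an authorized change.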

Let’s look at a few guidelines, as detailed by SWIFT while incorporating our general advice on remaining compliant in a hybrid environment.

  • Inbound and outbound connectivity for the secure zone is fully limited.
  • Transport layer stateful firewalls are used to create logical separation at the boundary of the secure zone.
  • No “allow any” firewall rules are implemented, and all network flows are explicitly authorized.
  • Operators connect from dedicated operator PCs located within the secure zone (that is, PCs located within the secure zone and used only for secure zone purposes).
  • SWIFT systems within the secure zone restrict administrative access to only expected ports, protocols, and originating IPs.
  • Internal segmentation is implemented between components in the secure zone to further reduce the risk.
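The guidelines above all reduce to one principle: default deny, with every permitted flow explicitly enumerated. The toy evaluator below illustrates that shape; the zone names, ports, and rules are hypothetical examples, not SWIFT's actual rule set.

```python
# A toy default-deny policy evaluator. Every flow is denied unless an
# explicit allow rule matches -- there is no "allow any" rule, mirroring
# the secure-zone guidelines above. Zones, protocols and ports are
# hypothetical examples.

ALLOW_RULES = {
    # (source zone, destination zone, protocol, destination port)
    ("operator_pcs", "swift_zone", "tcp", 443),   # operator HTTPS access
    ("swift_zone",   "swift_zone", "tcp", 48002), # intra-zone messaging
}

def is_allowed(src_zone: str, dst_zone: str, proto: str, port: int) -> bool:
    """Default deny: permit only flows matching an explicit allow rule."""
    return (src_zone, dst_zone, proto, port) in ALLOW_RULES

print(is_allowed("operator_pcs", "swift_zone", "tcp", 443))  # True: explicit rule
print(is_allowed("general_it", "swift_zone", "tcp", 443))    # False: default deny
```

Note that the general IT environment gets no rule at all, so any flow from it into the secure zone fails closed rather than relying on an explicit block.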

SPEI servers, which serve a similar function to SWIFT application servers, should adhere to similar regulatory requirements; as Banxico elaborates in its official analysis report, such regulations are taking shape for this critical application.

Don’t Rely on Traditional Security Controls

The controls detailed above are recommended by security experts and compliance regulations worldwide, so it’s safe to assume the Mexican banks’ teams were aware of their benefits. Many have even been open about their attempts to implement these kinds of controls with traditionally available tools such as VLANs and endpoint firewalls. This has proven to be a long, costly and tiresome process, sometimes requiring nine months of work to segment a single SWIFT application! Would you take nine months to install a metal gate around your vault and between its compartments? I didn’t think so…

Guardicore Centra is set on resolving this challenge. By moving away from traditional segmentation methods to micro-segmentation built on foundational, actionable data center visibility, it delivers quick time to value, with controls down to the process level. Our customers, including Santander Brasil and BancoDelBajio in Mexico, benefit from early wins like protecting critical assets or achieving regulatory compliance, avoiding the trap of “all or nothing segmentation” that can happen when competitors do not implement a phased approach.

Guardicore provides the whole package to secure the data center, including real-time and historical visibility down to the process level, segmentation and micro-segmentation supporting various segmentation use cases, and breach detection and response, to thoroughly strengthen our clients’ overall security posture.

Micro-segmentation is more achievable than ever before. Let’s upgrade your company’s security practices to prevent attackers from gaining access to sensitive information and crown jewels in your hybrid data center. Request a demo now or read more about smart segmentation.


What’s New in Infection Monkey Release 1.6

We are proud to announce the release of a new version of the Infection Monkey, GuardiCore’s open-source Breach and Attack Simulation (BAS) tool. Release 1.6 introduces several new features and a few bug fixes.
