Environment Segmentation Is Your Company’s First Quick Micro-Segmentation Win

We often tell our customers that implementing micro-segmentation technology should be a phased project. After building a thorough map of your entire IT ecosystem, your company should begin with the ‘low-hanging fruit’: the easy wins that show quick time to value and have the least impact on other parts of the business. From here, you’ll be in a strong position to get buy-in for more complex or granular segmentation projects, perhaps even working towards a zero-trust security model.

One of the first tasks that many customers take on is separating environments from one another. Let’s see how it works.

Understanding the Context of your Data Center

Whether your workloads are on-premises, in the cloud, or in a hybrid mix of the two, your data center will be split into environments. These include:

  • Development: Where your developers create code, try out experiments, fix bugs, and use trial and error to create new features and tools.
  • Staging: Where testing is done, either manually or through automation. This environment is resource-heavy and kept as similar as possible to production; it is where you run your final checks.
  • Production: Your live environment. Any errors or bugs that make it this far can be discovered by your users, and because production runs your most critical applications, problems here have the greatest business impact. While all environments are vulnerable, and some may even be more easily breached, penetration and lateral movement in this environment can cause the most damage.

Of course, every organization is different. In some cases, you might have environments such as QA, Local, Feature, or Release, to name just a few. Your segmentation engine should be flexible enough to meet any business structure, suiting your organization rather than the other way around.

It’s important to note that these environments are not entirely separate: they share the same infrastructure and have no physical separation. In this reality, some traffic between the different environments needs to be controlled or blocked to ensure best-practice security. At the same time, however, for business to run as usual, specific communication flows must be allowed despite the environment separations. Mapping those flows, analyzing them, and whitelisting them is often not an easy process in itself, adding another level of complexity to traditional segmentation projects carried out without the right solution.

Use cases for environment segmentation include keeping business-critical servers away from customer access, and isolating the different stages of the product life cycle. This vital segmentation project also allows businesses to keep up with compliance regulations and prevents attackers from exploiting security vulnerabilities to access critical data and assets.
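The flow-mapping and whitelisting work described above can be sketched as a simple environment-label check. This is a minimal illustration under assumed labels and rule structure; the environment names, the allowlisted flows, and the `is_flow_allowed` helper are hypothetical, not any vendor’s actual API:

```python
# Minimal sketch of environment segmentation: every workload carries an
# "env" label, and only explicitly whitelisted cross-environment flows pass.

ALLOWED_CROSS_ENV_FLOWS = {
    # (source env, dest env, dest port): e.g. a CI pipeline promoting builds
    ("development", "staging", 443),
    ("staging", "production", 443),
}

def is_flow_allowed(src_env: str, dst_env: str, dst_port: int) -> bool:
    """Allow traffic within an environment; across environments, allow
    only flows that were mapped and explicitly whitelisted."""
    if src_env == dst_env:
        return True
    return (src_env, dst_env, dst_port) in ALLOWED_CROSS_ENV_FLOWS

print(is_flow_allowed("development", "development", 5432))  # True: same env
print(is_flow_allowed("development", "production", 5432))   # False: blocked
print(is_flow_allowed("staging", "production", 443))        # True: whitelisted
```

The point of the sketch is the default-deny stance between environments: unless a cross-environment flow was mapped and approved, it is blocked.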

Traditional Methods of Environment Segmentation

Historically, enterprises would separate their environments using firewalls and VLANs, often physically creating isolation between each area of the business. They may have relied on cloud platforms for development, and then used on-premises data centers for production for example.

Today, some organizations adapt VLANs to create separations inside a data center. This relies on multiple teams spending time configuring network switches, connecting servers, and making application and code changes where necessary. Despite this, in static environments hosted on the same infrastructure, without dynamic changes or the need for large scale, VLANs get the job done.

However, the rise in popularity of cloud and containers, as well as fast-paced DevOps practices, has made quick implementation and flexibility more important than ever before. Building and enforcing a new VLAN can take months, becoming a huge bottleneck for the entire business and even creating unavoidable downtime for your users. Manually maintaining complex rules and changes invites errors, while out-of-date rules leave dangerous gaps in security that sophisticated attackers can exploit. VLANs do not extend to the cloud, which means your business ends up trying to reconcile multiple security solutions that were not built to work in tandem. Often this results in compromises that put you at risk.

A Software-Based Segmentation Solution Helps Avoid Downtime, Wasted Resources, and Bottlenecks

A policy that follows the workload using software bypasses these problems. Using micro-segmentation technology, you can isolate low-value environments such as Development from Production, so that even in case of a breach, attackers cannot make unauthorized movement to critical assets or data. With intelligent micro-segmentation, this one policy will be airtight throughout your environment. This includes on-premises, in the public or private cloud, or in a hybrid data center.

The other difference is the effort in terms of implementation. Unlike with VLANs, with software-based segmentation, there is no complex coordination among teams, no downtime, and no bottlenecks while application and networking teams configure switches, servers and code. Using Guardicore Centra as an example, it takes just days to deploy our agents, and your customers won’t experience a moment of downtime.

Achieve Environment Segmentation without Infrastructure Changes

Environment segmentation is a necessity in today’s data centers: to achieve compliance, reduce the attack surface, and maintain secure separation between the different life stages of the business. However, this project doesn’t need to be manually intensive. When done right, it shouldn’t involve multiple teams, result in organizational downtime or even require infrastructure changes. In contrast, it can be the first stage of a phased micro-segmentation journey, making it easier to embrace new technology on the cloud, and implement a strong posture of risk-reduction across your organization.

Want to learn more about what’s next after environment segmentation as your first micro-segmentation project? Read up on securing modern data centers and clouds.


Are you Prepared for a Rise in Nation State Attacks and Ransomware in 2020?

Once you know what you’re up against, keeping your business safe might be easier than you think. In this blog, we’re going to look at two kinds of cyber threats: nation state cyber attacks and ransomware. Neither is a new concern, but both are increasing in sophistication and prevalence. Many businesses feel powerless to protect against these events, and yet a list of relatively simple steps could keep you protected in the event of an attack.

Staying Vigilant Against Nation State Actors

According to the 2019 Verizon Data Breach Investigations Report, nation state attacks have increased from 12 percent of breaches in 2017 to 23 percent in 2018.

One of the most important things to recognize about nation state attacks is that it is getting harder to ascertain where they are coming from. Attackers have learned to cleverly obfuscate their attacks by mimicking other state actors’ behavior, tools, and coding, and by working through layers of hijacked, compromised networks. In some cases, they work through proxy actors. This makes the process of attribution very difficult. One good example is the 2018 Winter Olympics in Pyeongchang, where attackers launched the Olympic Destroyer malware. It took down the Olympic network’s wireless access points, servers, ticketing system, and even reporters’ internet access for 12 hours, immediately prior to the start of the games. While metadata in the malware at first seemed to attribute the attack to North Korea, this was actually down to deliberate manipulation of the code; much later, researchers realized the attack was of Russian origin.

These ‘false flag’ attacks have a number of benefits for the perpetrators. Firstly, the real source of the threat may never be discovered. Secondly, even if the correct attribution is eventually found, the news cycle has died down, the exposure is less, and many people may not believe the new evidence.

This has contributed to nation state actors feeling confident enough to launch larger and more aggressive attacks, such as Russian attacks on Ukrainian power grids and communications, or attacks attributed to the Iranian group APT33 that recently took down more than 30,000 laptops and servers used in Saudi oil production.

Ransomware Often Attacks the Vulnerable, Including Local Government and Hospitals

State-sponsored attacks have the clout to do damage where it hurts the most, as seen in the two largest ransomware attacks ever experienced, WannaCry and NotPetya. These were built with EternalBlue, an exploit allegedly stolen from the US NSA, together with Mimikatz, a credential-harvesting tool written by a French security researcher.

This strength, combined with the tight budgets and flat networks of local governments and healthcare systems, is a recipe for catastrophe. Hospitals in particular are known for having flat networks and medical devices based on legacy and end-of-life operating systems. According to some estimates, hospitals are the targets of up to 70% of all ransomware incidents. The sensitive nature of PII and health records and the direct impact on safety and human life makes the healthcare industry a lucrative target for hackers looking to get their ransom paid by attacking national infrastructure.

As attackers become increasingly brazen, and go after organizations that are poorly placed to stand up to the threat, it’s more important than ever that national infrastructure thinks about security, and takes steps to close these glaring gaps.

Shoring Up Your Defenses is Easier Than You Think

The party line often seems to be that attackers are getting smarter and more insidious, and that data centers are too complex to handle the threat. It’s true that today’s networks are more dynamic and interconnected, and that new attack vectors and methods of hiding these risks are cropping up all the time. What businesses miss, however, is the handful of very achievable, even simple, steps that can help limit the impact of an attack, and perhaps even prevent the damage from occurring in the first place.

Here’s what enterprises can do:

  • Create an Incident Response Plan: Make sure that anyone can understand what to do in case of an incident, not just security professionals. Think about the average person on your executive board, or even your end users. You need to assume that a breach or a ransomware attack will happen, you just don’t know when. With this mindset, you’ll be more likely to create a thorough plan for incident response, including drills and practice runs.
  • Protect your Credentials: This starts with utilizing strong passwords and two-factor authentication, improving the posture around credentials in general. On top of this, the days of administrative rights are over. Every user should have only the access they need, and no further. This stops bad actors from escalating privileges and moving laterally within your data center, taking control of your devices.
  • Think Smart on Security Hygiene: Exploits based on EternalBlue, which targets a vulnerability in Microsoft SMB v1, were able to cause damage even though Microsoft had released a patch in March 2017, before the major attacks struck, because so many systems remained unpatched. Software vulnerabilities can be mitigated through patching, vulnerability testing, and certification.
  • Software-Defined Segmentation: If we keep the mindset that an attack will occur, it’s important to be set up to limit the blast radius of a breach. Software-defined segmentation is the smartest way to do this. Without any need to make infrastructure changes, you can isolate and protect your critical applications. This also protects legacy or end-of-life systems that are business critical but cannot be secured with modern solutions, a common problem in the healthcare industry. Unlike VLANs and cloud security groups, it requires no physical infrastructure changes and takes hours, not months, to implement.

Following this Advice for Critical Infrastructure

This advice is a smart starting point for national infrastructure as well as enterprises, but it needs more planning and forethought. When it comes to critical infrastructure, your visibility is essential, especially as you are likely to have multiple platforms and geographies. The last thing you want is to try to make one cohesive picture out of multiple platform-specific disparate solutions.

It’s also important to think about modern-day threat vectors. Today, attacks can come through IP-connected IoT devices or networks, so your teams need to be able to detect non-traditional server compute nodes.

Incident response planning is much harder on a governmental or national level, and therefore needs to be taken up a notch in preparation. You may well need local, state, and national participation and buy-in for your drills, including law enforcement and emergency relief in case of panic or disruption. How are you going to communicate and share information on both a local and international scale, and who will have responsibility for what areas of your incident response plan?

Learning from the 2018 Olympics

Attacks against local government, critical infrastructure and national systems such as healthcare are inevitable in today’s threat landscape. The defenses in place, and the immediate response capabilities will be the difference between disaster and quick mitigation.

The 2018 Olympics can serve as proof. Despite Russia’s best attempts, the attack was thwarted within 12 hours. A strong incident response plan was put into place to find the malware and come up with signatures and remediation scripts within one hour. 4G access points had been put in place to provide networking capabilities, and the machines at the venue were reimaged from backups.

We can only hope that Qatar is already rehearsing as strong an incident response plan for the 2022 World Cup, especially with radical ‘semi-state actors’ in the region, such as the Cyber Caliphate Army and the Syrian Electronic Army, that could act as a proxy for a devastating state actor attack.

We Can Be Just as Skilled as the Attackers

The attitude that ‘there’s nothing we can do’ to protect against the growth in nation state attacks and ransomware threats is not just unhelpful, it’s also untrue. We have strong security tools and procedures at our disposal, we just need to make sure that we put these into place. These steps are not complicated, and they don’t take years or even months to implement. Staying ahead of the attackers is a simple matter of taking these steps seriously, and using our vigilance to limit the impact of an attack when it happens.

Want to understand more about how software defined segmentation can make a real difference in the event of a cyber attack? Check out this webinar.

A Case Study for Security and Flexibility in Multi-cloud Environments

Most organizations today opt for a multi-cloud setup when migrating to the cloud. As Gartner puts it, “most enterprise adopters of public cloud services use multiple providers. This is known as multicloud computing, a subset of the broader term hybrid cloud computing.” In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers, and Gartner comments that “most organizations adopt a multicloud strategy out of a desire to avoid vendor lock-in or to take advantage of best-of-breed solutions.”

When considering segmentation solutions for the cloud, avoiding vendor lock-in is equally important, especially considering security concerns.

Let’s consider the following example, following up on an experiment that was performed by one of our customers. As we discussed in the previous posts in the series, the customer created a simulation of multiple applications running in Azure and AWS. For the specific setup in Azure please consider the first and second posts in this series.

Understanding the Experiment


Phase 1 – Simulate an application migration between cloud providers:

The customer set up various applications in Azure, one of which is the CMS application. Network security groups (NSGs) and application security groups (ASGs) were set up for CMS, using a combination of allow and deny rules.

The customer then attempted to migrate CMS from Azure to AWS. After the relevant application components were set up in AWS, the customer attempted to migrate the policies from Azure security groups to AWS security groups and network access control lists (ACLs). For the policies to migrate with the application, each deny rule in the Azure security groups had to be translated either into allow rules covering all other traffic in AWS security groups, or into network-layer deny rules in AWS ACLs.

Important differences between AWS security groups and ACLs:

  1. Security groups – security groups are applied at the EC2 instance level and are tied to an asset, not an IP. They enable only whitelisting traffic and are stateful. For inbound traffic, the network ACL at the subnet boundary is evaluated first; traffic that passes is then filtered by the instance’s security groups.
  2. ACLs – network access control lists are applied at the subnet level within a VPC and match on IP addresses. They support both allow and deny rules, but because they are tied to specific IPs, they cannot block by application context. They are also stateless, so return traffic must be explicitly allowed.
  3. In short, AWS security groups do not support blacklisting and only enable whitelisting, while AWS ACLs support both deny and allow rules but are tied to IP addresses within a VPC, so they can block only static IPs or whole subnets.
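To make the translation burden concrete, here is a rough sketch of the rule-splitting work described above: Azure-style rules that mix allow and deny must be separated into allow-only security-group rules and IP-based ACL deny entries. The rule fields and the `translate_azure_rules` helper are simplified illustrations, not the real Azure NSG or AWS schemas:

```python
# Sketch of translating mixed allow/deny rules into AWS-style targets:
# allows go to security groups (allow-only, asset-scoped), denies can
# only become ACL entries (IP-scoped, no application context).

def translate_azure_rules(azure_rules):
    """Split a list of Azure-style rules into SG rules and ACL entries."""
    sg_rules, acl_entries = [], []
    rule_number = 100  # AWS ACL entries are evaluated in rule-number order
    for rule in azure_rules:
        if rule["action"] == "allow":
            sg_rules.append({"port": rule["port"], "source": rule["source"]})
        else:
            acl_entries.append({
                "rule_number": rule_number,
                "action": "deny",
                "cidr": rule["source"],  # denies can match only IPs/subnets
                "port": rule["port"],
            })
            rule_number += 10
    return sg_rules, acl_entries

azure_rules = [
    {"action": "allow", "port": 443, "source": "10.1.0.0/24"},
    {"action": "deny", "port": 3306, "source": "10.2.0.0/24"},
]
sg, acl = translate_azure_rules(azure_rules)
print(sg)   # [{'port': 443, 'source': '10.1.0.0/24'}]
print(acl)  # one deny entry for 10.2.0.0/24 on port 3306
```

Even this toy version shows why the work is manual: every deny rule forces a decision about where it can land on the AWS side, and the application context is lost along the way.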

Given the differences between security groups and ACLs (see above), migrating the CMS application from Azure to AWS along with its policies required an employee to evaluate each Azure rule and translate it into the relevant rules in AWS security groups and ACLs. This unexpectedly set back the migration tests and simulation.

This is just one example. Each major public cloud provider offers its own tools for policy management, and working with multiple cloud-native tools requires a lot of time and resources while resulting in a less secure, less flexible policy. The more hybrid your environment, and the more you depend on native tools, the more tools you will end up using. Each tool requires an expert who knows how to use it and can work around its limitations, so you will need security experts for every cloud provider you choose, as each provider offers a completely different solution with its own constraints. One limitation that all cloud-native segmentation tools share is that cloud-based security groups provide only L4 policy control, so additional tools are required to secure your application layer.

Guardicore Provides a Single Pane of Glass for Segmentation Rules

When using Guardicore, each rule is applied across all workloads: virtual machines, public clouds (AWS, Azure, GCP, and others), bare metal, and container systems. Rules follow workloads and applications when they migrate between clouds or from on-premises to the cloud. Security teams can work with a single tool instead of multiple solutions to build a single, secure policy, which saves time and resources and ensures consistency across a heterogeneous environment.
Guardicore therefore enables migrating workloads without security becoming a blocker: policies migrate with the workloads wherever they go. All you need to take into account in a workload migration decision is your cloud provider’s offering.

Our customer used Guardicore to create the CMS application policies, adding Layer 7 security with process-level rules on top of the Layer 4 controls from the native cloud provider. When it came time to migrate CMS from Azure to AWS, policies were no longer a concern: Guardicore Centra policies follow the application wherever it goes. Because the policies are decoupled from the underlying infrastructure and created based on labels, they followed the workloads from Azure to AWS with no changes necessary.
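The idea of label-based policy decoupled from infrastructure can be sketched as follows. The label scheme, the rule shape, and the `workload_matches` helper are hypothetical simplifications for illustration, not Centra’s actual data model:

```python
# Sketch of label-based policy: rules reference labels ("app", "tier"),
# never IPs or cloud providers, so they follow a workload across clouds.

def workload_matches(workload: dict, selector: dict) -> bool:
    """A workload matches a selector if all selector labels agree."""
    return all(workload["labels"].get(k) == v for k, v in selector.items())

policy = {"source": {"app": "CMS", "tier": "web"},
          "dest": {"app": "CMS", "tier": "db"},
          "port": 3306, "action": "allow"}

# The same workload before and after migration: only the "cloud" label
# changes, and "cloud" is not part of the policy selector.
web_on_azure = {"labels": {"app": "CMS", "tier": "web", "cloud": "azure"}}
web_on_aws   = {"labels": {"app": "CMS", "tier": "web", "cloud": "aws"}}

print(workload_matches(web_on_azure, policy["source"]))  # True
print(workload_matches(web_on_aws, policy["source"]))    # True: rule follows it
```

Because the selector never mentions the hosting infrastructure, migrating the workload changes nothing about which rules apply to it.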

Phase 2 – Create policies for cross-cloud application dependencies

The customer experiment setup in AWS included an Accounting application in the London, UK region, that periodically needed to access data from the Billing application databases. The billing application was set up in Azure.

The Accounting application had 2 instances, one in the production environment and another in the development environment. The goal was for only the Accounting application in production to have access to the Billing application.

In a recent Gartner analysis Infrastructure Monitoring With the Native Tools of AWS, Google Cloud Platform and Microsoft Azure, Gartner mentions that “Providers’ native tools do not offer full support for other providers’ clouds, which can limit their usability in multicloud environments and drive the need for third-party solutions.” One such limitation was encountered by our customer.

Azure and AWS security groups and ACLs can control cross-cloud traffic based only on the cloud providers’ public IPs: for two applications to communicate cross-cloud, one must allow the whole regional IP range of one cloud provider to communicate with the other. Public IPs are typically assigned to servers dynamically in both Azure and AWS, so without introducing a third-party solution, there is no assurance that traffic reaching a specific application in AWS is actually coming from a specific application in Azure, and vice versa.

As public IPs are dynamically assigned to workloads within both Azure and AWS, our customer had to permit the whole IP range of the AWS London, UK region to communicate with the Azure environment, with no application context, and no control, introducing risk. Moreover, there was no way to prevent the Accounting application in the development environment from creating such a connection, without introducing an ACL in AWS to block all communication from that application instance to the Azure range. This would be problematic and restrictive, for example if the dev app had dependencies on another application in Azure.
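A small sketch of why region-wide allowlisting is so coarse: any workload whose dynamic public IP lands in the provider’s regional range is allowed, production and development alike. The CIDR blocks below are illustrative placeholders, not AWS’s actual published ranges (those are distributed in AWS’s ip-ranges.json feed):

```python
# Sketch of a region-range allow rule: it admits any IP in the regional
# pool, with no application context at all.
import ipaddress

AWS_LONDON_RANGES = [ipaddress.ip_network("18.130.0.0/16"),   # illustrative
                     ipaddress.ip_network("35.176.0.0/15")]   # illustrative

def allowed_by_region_rule(source_ip: str) -> bool:
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in AWS_LONDON_RANGES)

# Both Accounting instances draw dynamic IPs from the same regional pool,
# so this rule cannot tell production Accounting from development Accounting.
print(allowed_by_region_rule("18.130.45.1"))  # True (could be prod OR dev)
print(allowed_by_region_rule("52.28.10.9"))   # False (outside the region)
```

This is exactly the risk described above: the rule admits the whole region, so the development instance is indistinguishable from the production one.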

Guardicore Makes Multi-cloud Policy Management Simple

As we have already discussed, policies in Guardicore are decoupled from the underlying infrastructure. The customer created policies based on Environment and Application labels, with no dependency on the underlying cloud provider hosting the applications or on the applications’ private or public IPs. This enabled easy policy management: blocking the Accounting application in the Development environment on AWS while allowing the Production instance access to the Billing application in Azure. It also gave our customer complete flexibility and the ability to migrate applications seamlessly between cloud providers in the future.

Guardicore provided a single pane of glass for multi-cloud segmentation rules. Each rule was applied on all relevant workloads regardless of the underlying infrastructure. Security teams were able to work with a single tool instead of managing multiple silos.

The same concept can be introduced for controlling and managing how your on-premises applications communicate with your cloud applications, ensuring a single policy across your whole data center, on premises or in the cloud. Using Guardicore, any enterprise can build a single, secure policy and save time and resources, while ensuring best-of-breed security.

Check out this blog to learn more about Guardicore and security in Azure or read more about Guardicore and AWS security here.

Interested in cloud security for hybrid environments? Get our white paper about protecting cloud workloads with shared security models.


Using Zero Trust Security to Ease Compliance

Data privacy is one of the most heavily regulated areas of cyber-security. New regulations such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR) have joined a list of compliance mandates that already included PCI-DSS for payment card data and HIPAA for patient information. Many enterprises have now established compliance officers or even whole teams, who carry a heavy workload: achieving and proving compliance with these regulations, preparing for audits, and putting best practices into place.


As data centers have become increasingly complex and dynamic, this workload has increased exponentially. Visibility is understandably hard to achieve in a heterogeneous environment, and if you don’t know where your data is – how can you secure it?

Traditional Perimeter Security Causes Problems for Compliance

If your business relies on perimeter-based security, any breach is a breach of your whole network. Everything is equally accessible once an attacker has made it through your external perimeter. This security model cannot distinguish between types of data or applications, and does not define or visualize critical assets, giving everything in your data center an equal amount of protection.

This reality is a struggle for any IT or security team responsible for compliance. Multiple compliance authorities enforce strict controls over the management of customer data, including how it is held, deleted, shared, and accessed. Personally identifiable information (PII) and anywhere financial information is stored (e.g., a cardholder data environment, or CDE) need added security measures and governance under compliance mandates, yet these are often left unidentified, let alone secured. This is made more complicated today by the growing amount of data that resides or communicates outside the firewall, for example in the cloud. Visibility is the first hurdle, and many enterprises fall at it immediately.

On top of this, with border controls alone, as soon as your perimeter is breached, all your data is up for grabs by attackers who can move laterally inside your network. Even if you could see everything you have, perimeter security simply can’t protect critical data that falls in scope for compliance at the required level.

Zero Trust as a Solution for Compliance

Many enterprises know that a Zero Trust model would provide a stronger security posture, and worry about the unprotected movement of east-west traffic, but they think of moving to a Zero Trust paradigm as an incredibly complex initiative. Segmenting applications, writing policy for different areas of the business, establishing what access to permit and where: it sounds like it would complicate security, not make it simpler.

However, when implemented intelligently, a Zero Trust model actually makes security and compliance a whole lot easier, as Forrester Research principal analyst Renee Murphy explains: “You end up with a less complex environment and doing less work overall. Once you know what [your data] is, where it is and how important it is, you can [then] put your efforts towards it.”

For this to be successful, and remain simple, your Zero Trust model’s implementation needs to start with visibility. Data classification is not an IT problem, it’s a business problem, and the business needs to be able to automatically discover all assets and data, both in real-time, and with historical baselines for comparison and policy creation.

Your partner in creating a Zero Trust model should be able to provide an automatic map of all applications, databases, communications and flows, including dependencies and relationships. This needs to be both deep, providing granular insight, and also broad, across your hybrid environment covering everything from legacy on-premises to container systems.

Furthermore, pick a vendor with strong granular enforcement capabilities. The best protection leaves the least possible exposure, so you need policies that can lock compliance environments down further than port and IP. Seek out solutions that can create policies at the process, user, and domain-name level.
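As a rough sketch of what enforcement tighter than port and IP looks like, the check below also pins the process and the user allowed to open a connection. The field names and the `connection_allowed` helper are illustrative assumptions, not any specific product’s schema:

```python
# Sketch of granular enforcement: matching the right port is not enough;
# the connecting process and user must also match the rule.

rule = {"dest_port": 5432,
        "process": "/usr/lib/postgresql/bin/postgres",
        "user": "svc_reports",
        "action": "allow"}

def connection_allowed(conn: dict, rule: dict) -> bool:
    """Allow only when port, process, and user all match the rule."""
    return (conn["dest_port"] == rule["dest_port"]
            and conn["process"] == rule["process"]
            and conn["user"] == rule["user"])

good = {"dest_port": 5432, "process": "/usr/lib/postgresql/bin/postgres",
        "user": "svc_reports"}
bad  = {"dest_port": 5432, "process": "/tmp/dropper", "user": "root"}

print(connection_allowed(good, rule))  # True
print(connection_allowed(bad, rule))   # False: right port, wrong process/user
```

A port-and-IP rule would have admitted both connections; the process and user checks are what shrink the exposure.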

Not only does this provide the best starting point for Zero Trust initiatives, but it also means that compliance becomes far easier as a result of best-in-class documentation and records at every stage.

Regardless of which standard you wish to comply with, using the Zero Trust model for visibility and segmentation to effectively limit scope and resources is essential. For example, the PCI Security Standards Council has published the Information Supplement: Guidance for PCI DSS Scoping and Network Segmentation, in which this is directly called out.

When You Establish Zero Trust, All Data Can be Treated Unequally

Once visibility is established and you have an accurate view of your network, you can easily identify what needs protecting. Compliance mandates are usually very clear about which data is in scope and out of scope, and only require that in-scope data meet the regulations. While perimeter security made it impossible to apportion security differently throughout your data center, this is where micro-segmentation and Zero Trust thrive.

With zero trust, your security strategy can recognize that not everything is created equally. Some data or applications need more security and governance than others, and while certain assets need to be watched and controlled closely, others can be left with minimal controls.

With the right partner in place, enterprises can use a distributed firewall to prioritize their compliance efforts, starting with the most essential tasks. Granular rules can be put in place, down to the process level or based on user identity, strictly enforcing micro-perimeters around systems and data that are in scope. This is a much easier task than ‘protect everything, all the time.’

Demonstrating Compliance using a Zero Trust Environment

Adopting a Zero Trust mentality is also a really strong way to show auditors that you’re doing your part. A huge part of compliance is being able to guarantee that even in case of a breach, you have taken all reasonable steps to ensure that your data was protected from malicious intent. Each time an east-west movement is attempted, this communication is checked and verified. As such, your enterprise has never assumed that broad permissions are enough to guarantee a safe connection, and with micro-segmentation, you have reduced the attack surface as much as possible. This process also provides an audit trail, making incident response and documentation much simpler in case of a breach.

Consider partnering with a vendor that includes monitoring and analytics, as well as breach detection and incident response, to lower the chance of a cyber-attack and to create a plan for any events that violate policy or suggest malicious intent. This can dramatically improve your odds in the event of an attack, as well as help you maintain a robust compliance checklist.

The days of relying on perimeter-based controls to stay compliant and secure are long gone. In a world where Zero Trust models are gaining acceptance and improving security posture so widely, enterprises need to do more to prove that they are compliant with the latest regulations.

The Zero Trust framework acknowledges that internal threats are now almost a guarantee, and enterprises need to protect sensitive data and crown jewel applications with more than just border control alone. Remaining compliant is an important yardstick to measure the security of your infrastructure against, and Zero Trust is an effective model to achieve that compliance.

Want to read more about implementing cloud security toward an effective Zero Trust model? Get our white paper about how to move toward a Zero Trust framework faster.

Guardicore Achieves Microsoft IP Co-Sell Status: Available for Download on the Azure Marketplace – Here’s What That Means for You

A couple of weeks ago we announced that the Guardicore Centra security platform is available in the Microsoft Azure Marketplace. As you might know, Centra was available in the marketplace before, as Guardicore has worked with Microsoft for a very long time, providing various integrations as well as research for Azure and Azure Stack. Now, the latest version of Centra is available and Guardicore has achieved an IP Co-Sell status.

One of the most important capabilities that we developed for Azure provides Centra with real-time integration to Azure orchestration. This provides metadata on the assets deployed in your Azure cloud environment, complementing the information provided by Guardicore agents.

For example, information coming from orchestration may include data that can’t be collected from the VM itself, including: Source Image, Instance Name, Private DNS Name, Instance ID, Instance Type, Security Groups, Architecture, Power State, Private IP Address, and Subscription Name.
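As a rough sketch of how this enrichment might work, consider merging agent-reported data with orchestration metadata. The field names below come from the list above; the merge logic itself is hypothetical, not Centra’s actual implementation.

```python
# Illustrative sketch: enrich agent-reported asset data with Azure
# orchestration metadata. Field names follow the list above; the
# merge logic is hypothetical, not Guardicore Centra's actual code.

def enrich_asset(agent_data: dict, orchestration_data: dict) -> dict:
    """Orchestration metadata fills in fields the agent cannot see,
    without overwriting anything the agent reported directly."""
    enriched = dict(orchestration_data)
    enriched.update(agent_data)  # agent-observed values take precedence
    return enriched

agent_data = {"Hostname": "web-01", "Private IP Address": "10.0.2.4"}
orchestration_data = {
    "Instance Name": "web-01-vm",
    "Instance Type": "Standard_D2s_v3",
    "Source Image": "UbuntuLTS",
    "Subscription Name": "Production",
    "Private IP Address": "10.0.2.4",
}

asset = enrich_asset(agent_data, orchestration_data)
```

The combined record carries both what the agent observed on the VM and what only the orchestration layer knows, such as the instance type and subscription.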

Using this information, Centra will accelerate security migration from an on-premises data center to Azure.

In addition, we are very proud that Guardicore has achieved the Microsoft IP Co-Sell status. This designation recognizes that Guardicore has demonstrated its proven technology and deep expertise that helps customers achieve their cloud security goals. Achieving this status demonstrates our commitment to the Microsoft partner ecosystem. It also proves our ability to deliver innovative solutions that help forward-thinking enterprise customers to secure their business-critical applications and data with quick time to value, reduce the cost and burden of compliance, and securely embrace cloud adoption.

Where to Start? Moving from the Theory of Zero Trust to Making it Work in Practice

Going back many years, perimeter controls were adequate for protecting enterprise networks that held critical assets and data. The hypothesis was that strong external perimeter controls watching your ingress and egress should be adequate protection. If you were a larger or more sophisticated entity, there would be additional separation points between portions of your environment. However, these still functioned as additional perimeter points, merely concentric circles of trust within which traffic could, more or less, move freely. In cases where threats occurred within your environment, you would hope to catch them as they crossed one of these rudimentary borders.

The Moment I Realized that Perimeters Aren’t Enough

This practice worked moderately well for a while. However, around fifteen years ago, security practitioners began to feel a nascent itch, a feeling that this was not enough. I personally remember working on a case at a hospital attacked by a very early spear-phishing campaign that mimicked a help desk request for a password reset. Clicking on a URL in a very official-looking email, staff were sent to a fake but official-looking website where these hospital professionals were prompted to reset their credentials – or so they thought. Instead, the attack began. This was before the days of the Darknet, and we even caught the German hacker boasting about what he had done, sharing the phishing email and fake website on a hacker messaging board. I worked for a company that had a fantastic IPS solution, and upon deploying it, we were able to quickly catch the attacker’s exfiltration attempts. At first, we seemed to be winning. We cut the attacker off from major portions of a botnet that resided on the cafeteria cash registers, most of the doctors’ machines and, to my horror, even on the automated pharmacy fulfillment computers. Two weeks later, I received a call: the attacker was back, trying to get around the IPS device in new ways. While we were able to suppress the attack for the most part, I finally had to explain to the hospital IT staff that my IPS sat merely at the entrances and exits of their network, and that to really stop these attacks, we needed to look at all of the machines and applications that resided within their environment. We needed the ability to look at traffic before it made its way to and from the exits. This was the first of many realizations for me that reliance on perimeter-based security was slowly but surely eroding.

In the years since, the concept of a perimeter has all but completely eroded. Of course, it took quite a while for the larger population to accept this. Acceptance was helped along by the business and application interdependencies that bring vendors, contractors, distributors and applications through your enterprise, as well as by the emergence of cloud and cloud-like provisioning utilized by DevOps. Maintaining true perimeters as a primary method of prevention is no longer tenable.

It was this reality that spurred the creation of Forrester’s Zero Trust model almost a decade ago. The basic premise is that no person or device is automatically given access or trusted without verification. In theory, this is simple. In practice, however, especially in data centers that have become increasingly hybrid and complex, it can get complicated fast.

Visibility is Foundational for Zero Trust

A cornerstone of Zero Trust is to ‘assume access.’ This means that any enterprise should assume that an attacker has already breached the perimeter. This could be through stolen credentials, a phishing scam, basic hygiene issues like poor passwords, account control or patching regimens, an IoT or third-party device, a brute-force attack, or any of the countless other vectors that make up today’s dynamic data centers.

Protecting your digital crown jewels through this complex landscape is getting increasingly tough. From isolating sensitive data for compliance or customer security, to protecting the critical assets that your operation relies on to run smoothly, you need to be able to visualize, segment and enforce rules to create an air-tight path for communications through your ecosystem.

As John Kindervag, founder of Zero Trust once said, in removing “the Soft Chewy Center” and moving towards a Zero Trust environment, visibility is step one. Without having an accurate, real-time and historical map of your entire infrastructure, including on-premises and both public and private clouds, it’s impossible to be sure that you aren’t experiencing gaps or blind spots. As Forrester analyst Chase Cunningham mandates in the ZTX Ecosystem Strategic Plan, “Visibility is the key in defending any valuable asset. You can’t protect the invisible. The more visibility you have into your network across your business ecosystem, the better chance you have to quickly detect the tell-tale signs of a breach in progress and to stop it.”

What Should Enterprises Be Seeing to Enable a Zero Trust Model?

Visibility itself is a broad term. Here are some practical necessities that are the building blocks of Zero Trust, and that your map should include.

  • Automated logging and monitoring: With an automated map of your whole infrastructure that updates without the need for manual support, your business has an always-accurate visualization of your data center. When something changes unexpectedly, this is immediately visible.
  • Classification of critical assets and data: Your stakeholders need to be able to read what they can see. Labeling and classification are therefore an integral element of visibility. Flexible labeling and grouping of assets streamlines visibility, and later, policy creation.
  • Relationships and dependencies: The best illustration of the relationships and dependencies of assets, applications and flows will give insight all the way down to process level.
  • Context: This starts with historical data as well as real-time, so that enterprises can establish baselines to use for smart policy creation. Your context can be enhanced with orchestration metadata from the cloud or third-party APIs, imported automatically to give more understanding to what you’re visualizing.
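The classification and grouping building blocks above can be sketched in a few lines. The asset names and label keys below are illustrative examples, not a prescribed scheme.

```python
# Minimal sketch of flexible Key:Value labeling and grouping of assets.
# Asset names and label keys are illustrative, not a prescribed scheme.
from collections import defaultdict

assets = [
    {"name": "web-01", "labels": {"Environment": "Production", "App": "Billing"}},
    {"name": "db-01",  "labels": {"Environment": "Production", "App": "Billing"}},
    {"name": "web-02", "labels": {"Environment": "Staging",    "App": "Billing"}},
]

def group_by(assets, key):
    """Group asset names by the value of one label key."""
    groups = defaultdict(list)
    for asset in assets:
        groups[asset["labels"].get(key)].append(asset["name"])
    return dict(groups)

by_env = group_by(assets, "Environment")
by_app = group_by(assets, "App")
```

Because the grouping is driven by labels rather than network location, the same inventory can be viewed by environment, by application, or by any other dimension stakeholders care about.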

Next Step… Segmentation!

Identifying all resources across all environments is only the first step, but it is essential to establishing a Zero Trust model successfully. Without visibility into users, their devices, workloads across all environments, applications, and the data itself, moving on to segmentation is like grasping in the dark.

In contrast, with visibility at the start, it’s intuitive to sit down and identify your enterprise’s most critical assets, decide on your unique access permissions and grouping strategy for resources, and to make intelligent and dynamic modifications to policy at the speed of change.

Want to read more about visibility and Zero Trust? Get our white paper about how to move toward a Zero Trust framework faster.

Read More

Limitations of Azure Security Groups: Policy Creation Across Multiple vNets

In our previous post, we discussed the limitations of Cloud Security Groups and flow logs within a specific vNet. In today’s post, we will focus on another specific scenario and use case that is common to most organizations, discussing Cloud Security Group limitations across multiple regions and vNets. We will then deep dive into Guardicore’s value in this scenario.

In a recent analysis, Gartner mentions the inherent incompatibility between existing monitoring tools and the cloud providers’ native monitoring platforms and data handling solutions. Gartner explains that an organization’s own monitoring strategies must evolve to accommodate these differences.

As the infrastructure monitoring feature sets offered by cloud providers’ native tools are continuing to evolve and mature, Gartner comments that “Gaps still exist between the capabilities of these tools and the monitoring requirements of many production applications… Remediation mechanisms can still require significant development and integration efforts, or the introduction of a third-party tool or service.”

To understand the challenges faced when using native monitoring tools, in this post I’ll again share details from an experiment that was performed by one of our customers. The customer created a simulation of multiple applications running in Azure, and created security policies between these applications.

The lab setup

Let’s look at the simulation environment. There are multiple Azure subscriptions, and within each subscription, there is a Virtual Network (VNet). In this case, SubscriptionA is the Production environment based in the Brazil region, and SubscriptionB is the Development environment, based in West Europe. Each has its own vNet. Both VNets are peered together.

ASGs:
The team created 3 Application Security Groups (ASGs). Note that the locations correspond to the locations used for the Virtual Networks (VNets).

The customer wanted to test the following scenario:
Block all communication from the CMS application over port 80, unless CMS communicates over this port with the SWIFT and Billing applications.

However, CMS application servers reside in the West Europe region, and the Swift and Billing application servers reside in the Brazil South region.

In this scenario, with two Virtual Networks (vNets), our customer wanted to know: will an Application Security Group (ASG) that exists in one vNet be available for reference in the other vNet’s Network Security Group (NSG)? Would it be possible to create a rule with an ASG for the CMS App servers to the SWIFT & Billing applications even though they are in separate vNets?
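The constraint the team was about to hit can be modeled as a simple validation rule. This is a sketch of Azure’s documented behavior, not Azure code; the vNet names are made up.

```python
# Sketch of Azure's documented constraint: an NSG rule may only
# reference ASGs whose network interfaces live in the NSG's own vNet.
# This models the behavior; it is not Azure's implementation.

class CrossVNetError(Exception):
    pass

def validate_rule(nsg_vnet: str, source_asg_vnet: str, dest_asg_vnet: str) -> bool:
    """Accept a rule only if the NSG and both ASGs share one vNet."""
    if not (nsg_vnet == source_asg_vnet == dest_asg_vnet):
        raise CrossVNetError(
            "All ASGs referenced in an NSG rule must belong to the NSG's vNet"
        )
    return True

validate_rule("vnet-brazil", "vnet-brazil", "vnet-brazil")      # accepted
# validate_rule("vnet-brazil", "vnet-westeurope", "vnet-brazil")  # raises
```

A rule referencing the CMS servers’ ASG (West Europe vNet) from the SWIFT NSG (Brazil vNet) fails this check, which is exactly what the team observed.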

The limitations and constraints of using Azure Security Groups were immediately clear

The team attempted to add a new inbound security rule from the CMS servers’ ASG to the SWIFT servers’ ASG. As you can see from the screenshot, the only Application Security Group (ASG) that appears in the list of options is the local one, the CMS servers ASG.

Let’s explore what happened above. According to the documentation provided by Azure:

  • A vNet is scoped to a single region and a single subscription.
  • Multiple subscriptions cannot share the same vNet.
  • NSGs can only be applied within a vNet.

Thus, in this setup, each region contains its own vNet, and each vNet has its own specific NSGs in place. The team attempted a few options to troubleshoot this issue using Security Groups.

First, they attempted to use ASGs to resolve this and create policies across regions. However, the customer came up against the following Azure rule:
All network interfaces assigned to an ASG have to exist in the same vNet. You cannot add network interfaces from different vNets to the same application security group.
If your application spans cross regions or vNets, you cannot create a single ASG to include all servers within this application. A similar rule applies when application dependencies cross regions. ASGs therefore couldn’t solve the problem with policy creation.

Next, the customer tried combining two ASGs from different vNets to achieve this policy. Again, Azure rules made this impossible, as you can see below.
If you specify an application security group as the source and destination in a security rule, the network interfaces in both application security groups must exist in the same virtual network. For example, if AsgLogic contained network interfaces from VNet1, and AsgDb contained network interfaces from VNet2, you could not assign AsgLogic as the source and AsgDb as the destination in a rule. All network interfaces for both the source and destination application security groups need to exist in the same virtual network.

Simply put, according to Azure documentation, it is not possible to create an NSG containing two ASGs from different vNets.

Thus if your application spans multiple vNets, using a single ASG for all application components is not an option, nor is combining two ASGs in an NSG. You’ll see the same problem when application dependencies cross regions, like in the case of our CMS, SWIFT and billing applications above.

Bottom line: It is not possible to create NSG rules, using ASGs for cross-region and vNet traffic.

Introducing Guardicore to the Simulation

The team had an entirely different experience when using Guardicore Centra to enforce the required policy settings.

The team had already been using Guardicore Centra for visibility to explore the network. In fact, this visibility had helped the team realize they needed to permit the CMS application to communicate with SWIFT over port 8080 in the first place. The team was therefore immediately able to view the real traffic between both regions/vNets and within each region/vNet, visualizing the connections between the CMS application in West Europe and the SWIFT and Billing application in the Brazil region.

With Guardicore, policies are created based on labels, and are therefore decoupled from the underlying infrastructure, supporting seamless migration of policies alongside workloads, wherever they may go in the future. As the customer planned to test migrating the CMS application to AWS, policies were created based on the environments and applications, not based on the infrastructure or the underlying “Cloud” context.

A critical layer added to Guardicore Centra’s visibility is labeling and grouping. This context enables deep comprehension of application dependencies. While Centra provides a standard hierarchy that many customers follow, our labeling approach is highly customizable. Flexible grouping enables you to see your data center in the context of how you as a business speak about your data center.

Labeling decouples the IP address from the segmentation process and enables application migration between environments, seamlessly, without the need to change the policies in place. With this functionality, the lab team were able to put the required policies into place.
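A minimal sketch of what label-based, IP-decoupled policy looks like is below. The rule model is hypothetical, not Centra’s actual rule engine; the CMS-to-SWIFT flow and port 8080 come from the scenario above.

```python
# Sketch of label-based policy: rules reference labels, not IPs, so a
# workload can change address or environment without policy edits.
# Hypothetical model, not Guardicore Centra's rule engine.

policy = [
    {"src": {"App": "CMS"}, "dst": {"App": "SWIFT"}, "port": 8080, "action": "allow"},
]

def matches(labels: dict, selector: dict) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())

def decide(src_labels: dict, dst_labels: dict, port: int, default: str = "block") -> str:
    """Return the first matching rule's action, else the default."""
    for rule in policy:
        if (matches(src_labels, rule["src"]) and matches(dst_labels, rule["dst"])
                and port == rule["port"]):
            return rule["action"]
    return default

# No IP appears anywhere: the workload can migrate and keep its labels.
cms = {"App": "CMS", "Environment": "Development"}
swift = {"App": "SWIFT", "Environment": "Production"}
```

If the CMS application later moves to AWS, its labels travel with it and `decide(cms, swift, 8080)` still returns the same answer, which is the point of decoupling policy from infrastructure.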

 

One of the most impactful things we can do to make Guardicore’s visualization relevant to your organization quickly, is integrate with any existing sources of metadata, such as data center or cloud orchestration tools or configuration management databases. In the case above, all labels were received automatically from the existing Azure orchestration tags.

As Guardicore does not rely on the underlying infrastructure to enforce policies, such as Security Groups or endpoint firewalls, policies are completely decoupled from the underlying infrastructure. This enables the creation of a single policy across the whole environment, and covers those use cases that are cross environment, too. In the case of Azure, it allowed our customer to simulate policies that cross vNet and Region, while doing so seamlessly from a single pane of glass.

Trials and Tribulations – A Practical Look at the Challenges of Azure Security Groups and Flow Logs

Cloud Security Groups are the firewalls of the cloud. They are built in and provide basic access control functionality as part of the shared responsibility model. However, Cloud Security Groups do not provide the same protection or functionality that enterprises have come to expect from on-premises deployments. While next-generation firewalls protect and segment applications at the on-premises perimeter, AWS, Azure, and GCP do not mirror this in the cloud. Segmenting applications with Cloud Security Groups is restricted to Layer 4 constructs: ports and IPs. This means that to benefit from application-aware security capabilities for your cloud applications, you will need an additional set of controls that is not available in the built-in functionality of Cloud Security Groups.

The basic function that Cloud Security Groups provide is network separation, so they are best compared to what VLANs, Access Control Lists on switches, and endpoint firewalls provide on premises. Unfortunately, Cloud Security Groups come with similar ailments and limitations. This makes using them complex, expensive and ultimately ineffective for modern networks that are hybrid and require adequate segmentation. To create application-aware policies and micro-segment an application, you need to visualize application dependencies, which Cloud Security Groups do not support. Furthermore, if your application dependencies cross regions within the same cloud provider, or between clouds and on premises, Application Security Groups are ineffective by design. We will touch on this topic in upcoming posts.

In today’s post we will focus on a specific scenario and use case that is common to most organizations, discussing Cloud Security Groups and flow logs limitations within a specific vNet, and illustrating what Guardicore’s value is in this scenario.

Experiment: Simulate a SWIFT Application Migration to Azure

Let’s look at the details from an experiment performed by one of our customers during a simulation of a SWIFT application migration to Azure.

Our customer used a subscription in Azure, in the Southern region of Brazil. Within the subscription, there is a Virtual Network (vNet). The vNet includes a Subnet 10.0.2.0/24 with various application servers that serve different roles.

This customer attempted to simulate the migration of their SWIFT application to Azure given the subscription above. General segmentation rules for their migrated SWIFT application were set using both NSGs (Network Security Groups) & ASGs (Application Security Groups). These were used to administrate and control network traffic within the virtual network (vNet) and specifically to segment this application.

Let’s review the difference:

  • An NSG is the Azure resource that is used to enforce and control the network traffic. NSGs control access by permitting or denying network traffic. All traffic entering or leaving your Azure network can be processed via an NSG.
  • An ASG is an object reference within a Network Security Group. ASGs are used within an NSG to apply a network security rule to a specific workload or group of VMs. An ASG is a “network object,” and explicit IP addresses are added to this object. This provides the capability to group VMs into associated groups or workloads.

The lab setup:
The cloud setup in this experiment included a single vNet, with a single Subnet, which has its own Network Security Group (NSG) assigned.

ASGs

  • Notice that they are all contained within the same Resource Group, and belong to the Location of the vNet (Brazil South).

NSGs:

The following NSG rules were in place for the simulated migrated SWIFT Application:

  • Load Balancers to Web Servers, over specific ports, allow.
  • Web Servers to Databases, over specific ports, allow.
  • Deny all else between SWIFT servers.
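The rule set above can be sketched as a default-deny policy. The role names and ports below are illustrative (the source does not specify them), but the structure mirrors the three rules listed.

```python
# Sketch of the NSG rule set above as a default-deny policy.
# Role names and port numbers are illustrative assumptions; the
# three-rule structure follows the list in the text.

rules = [
    ("LoadBalancer", "WebServer", {443}, "allow"),   # LBs -> Web, specific ports
    ("WebServer", "Database", {1433}, "allow"),      # Web -> DBs, specific ports
]

def evaluate(src_role: str, dst_role: str, port: int) -> str:
    """First matching allow rule wins; everything else is denied."""
    for src, dst, ports, action in rules:
        if (src_role, dst_role) == (src, dst) and port in ports:
            return action
    return "deny"  # deny all else between SWIFT servers
```

Note that any flow not explicitly listed, such as a connection straight from a Load Balancer to a Database, falls through to the deny default.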

The problem:

A SWIFT application team member in charge of the simulation project called the cloud security team to report that a critical backup operation had stopped working on the migrated application, and that he suspected the connection was blocked. The cloud network team then had to identify the root cause of the problem, partially through a process of elimination, from several possible options:

  1. The application team member was wrong, it’s not a policy issue but a configuration issue within the application.
  2. The ASGs are misconfigured while NSGs are configured correctly.
  3. The ASGs are configured correctly but the NSGs are misconfigured or missing a rule.

The cloud team began the process of elimination. They used Azure flow logs to try to detect the possible blocked connections. The following is an example of such a log:

Using the Microsoft Azure Log Analytics platform, the cloud team sifted through the data with no success. They were searching for a blocked connection that could potentially be the backup process, but no such blocked connection could be found. The cloud team members therefore dismissed the issue as a misconfiguration in the application.

The SWIFT team member insisted it was not an application issue and several days passed with no solution, all while the SWIFT backup operation kept failing. In a live environment, this stalemate would have been a catastrophe, with team members likely working around the clock to find the blocked connection, or prove misconfiguration in the application itself. In many cases an incident like this would lead to removing the security policy for the sake of business continuity as millions of dollars are at stake daily.

After many debates and an escalation of the incident, it was decided, based on the Protect team’s recommendation, to leverage Guardicore Centra in the Azure cloud environment to help with the investigation and migration simulation project.

Using Guardicore Centra, the team used Reveal to filter for all failed connections related to the SWIFT application. This immediately revealed a failed connection attempt between the SWIFT load balancer and the SWIFT databases. The connection failed due to a missing allow rule: there was no NSG rule in place to allow the SWIFT LBs to talk to the SWIFT DBs.

The filters in Reveal

 

Discovering the process

Guardicore was able to provide visibility down to the process level for further context and identification of the failed backup process.

Application Context is a Necessity

The reason the flow logs were inadequate for detecting the connection was that IPs were constantly changing as the application scaled up and down and the migration simulation project moved forward. Throughout this, the teams had no context of when the backup operation was supposed to occur or which servers initiated the attempted connections, so the search came up empty-handed. As flow logs are limited to IPs and ports, the teams were unable to search based on application context.
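The difference between the two searches can be sketched in a few lines. The flow records, IPs, and process names below are invented for illustration; the point is only that a stale IP finds nothing while process context finds the failure immediately.

```python
# Sketch of why IP-based flow-log search failed while process context
# succeeded. All records, IPs, and process names here are made up.

flows = [
    {"src_ip": "10.0.2.14", "dst_ip": "10.0.2.31", "port": 445,
     "process": "backup-agent", "status": "failed"},
    {"src_ip": "10.0.2.9", "dst_ip": "10.0.2.31", "port": 443,
     "process": "nginx", "status": "ok"},
]

# Searching by the IP the backup server held last week finds nothing,
# because autoscaling has since reassigned addresses:
stale_ip_hits = [f for f in flows if f["src_ip"] == "10.0.2.77"]

# Filtering by process context finds the failed backup immediately,
# regardless of which IP the workload currently holds:
backup_failures = [f for f in flows
                   if f["process"] == "backup-agent" and f["status"] == "failed"]
```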

The cloud team decided to use Guardicore Centra to manage the migration and segmentation of the SWIFT application simulation for ease of management and ease of maintenance. Additionally, they added process and user context to the rules for more granular security and testing. Guardicore Centra enabled comparing the on-premises application deployment with the cloud setup to make sure all configurations were in place.

The team then went on to use Guardicore Centra to simulate the SWIFT policy over real SWIFT traffic, making sure they were not blocking additional critical services and would not inadvertently block them in the future.

 

Guardicore Centra provided the cloud security team with:

  • Visibility and speed to detect the relevant blocked flows
  • Process and user context to identify the failed operation as the backup operation
  • The ability to receive real-time alerts on any policy violation
  • The ability to apply the process-level and user-level rules required for the critical SWIFT application
  • Simulation and testing capabilities to test policies over real application traffic before blocking

None of these features are available in Azure. These limitations have serious implications, such as the backup operation failure and the inability to adequately investigate and resolve the issue.

Furthermore, as part of general environment hygiene, our customer attempted to add several rules to govern the whole vNet, blocking Telnet and insecure FTP. For Telnet, our customer could add a block rule in Azure on port 23. For FTP, an issue was raised: FTP can communicate over high-range ports that many other applications need to use, so how could it be blocked? Using Guardicore, a simple block rule over the ftpd process was put in place with no port restriction, immediately blocking any insecure FTP communication at the process level, regardless of the ports used.
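A process-level rule of this kind can be sketched as follows. This is an illustrative model, not Centra’s enforcement engine, and the non-FTP process name is an invented example.

```python
# Sketch of a process-level rule: block any connection initiated by the
# ftpd process regardless of port, while the same ports remain usable
# by other processes. Illustrative model, not Centra's enforcement engine.

blocked_processes = {"ftpd"}

def allowed(process: str, port: int) -> bool:
    """The decision keys on the initiating process, not the port."""
    return process not in blocked_processes
```

With a port-based rule this would be impossible: blocking the high-range data ports would also break every other application that uses them, whereas the process-level rule blocks only `ftpd`.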

Visibility is key to any successful application migration project. Understanding your application dependencies is a critical step, enabling setting up the application successfully in the cloud. Guardicore Centra provides rich context for each connection, powerful filtering capabilities, flexible timeframes and more. We collect all the flows, show successful, failed, and blocked connections, and store historical data, not just short windows of it, to be able to support many use cases. These include troubleshooting, forensics, compliance and of course, segmentation. This enables us to help our customers migrate to the cloud 30x faster and achieve their segmentation and application migration goals across any infrastructure.

Securing a Hybrid Data Center – Strategies and Best Practices

Today’s data centers exist in a hybrid reality. They often include on-premises infrastructure such as Bare Metal or Virtual Machines, as well as both Public and Private cloud. At the same time, most businesses have legacy systems that they need to support. Even as you embrace cutting-edge infrastructure like containers and microservices, your legacy systems aren’t going anywhere, and it’s probably not even on your near future road-map to replace them. As a result, your security strategy needs to be suitable across a hybrid ecosystem, which is not as simple as it sounds.

The Top Issues with Securing a Hybrid Data Center

Many businesses attempt to use traditional security tools to manage a hybrid data center, and quickly run into problems.

Here are the most common problems that companies encounter when traditional tools are used to secure a modern, hybrid data center:

  • Keeping up with the pace of change: Business moves fast, and traditional security tools such as legacy firewalls, ACLs, VLANs and cloud security groups are ineffectual. This is because these solutions are made for one specific underlying infrastructure. VLANs will work well for on premises – but fall short when it comes to cloud and container infrastructure. Cloud security groups work for the cloud, but won’t support additional cloud providers or on premises. If you want to migrate, security will seriously affect the speed and flexibility of your move, slowing down the whole process – and probably negating the reasons you chose cloud to begin with.
  • Management overhead: Incorporating different solutions for different infrastructure is nothing short of a headache. You’ll need to hire more staff, including experts in each area. A cross-platform security strategy that incorporates everyone’s field of expertise is costly, complex, and prone to bottlenecks because of the traditional ‘too many cooks’ issue.
  • No visibility: Your business will also need to think about compliance. This could involve an entirely different solution and staff member dedicated to compliance and visibility. Without granular insight into your entire ecosystem, it’s impossible to pass an audit. VLANs for example offer no visibility into application dependencies, a major requirement for audit-compliance. When businesses use VLANs, compliance therefore becomes an additional headache.
  • Insufficient control: Today’s security solutions need Layer 7 control, with granularity that can look at user identity, FQDNs (fully qualified domain names), command lines and more. Existing solutions rely on IPs and ports, which are insufficient to say the least.
    Take cloud security groups, for example, which for many have become the standard technology for segmenting applications, the same way as they would on-premises. On the cloud, however, this solution stops at Layer 4: ports and IPs. For application-aware security on AWS, you will need to add another set of controls. In a dynamic data center, security needs to be decoupled from the IPs themselves, allowing for migration of machines. Smart security uses an abstraction level, enabling the policy to follow the workload rather than the IP.
  • Lack of automation: In a live hybrid cloud data center, automation is essential. Without automation as standard, for example using VLANs, changes can take weeks or even months. Manually implementing rules can result in the downtime of critical systems, as well as multiple lengthy changes in IPs, configurations of routers, and more.

Hybrid Data Center Security Strategies that Meet These Issues Head-On

The first essential item on your checklist should be infrastructure-agnostic security. Centralized management means one policy approach across everything, from modern and legacy technology on-premises to both public and private cloud. Distributed enforcement decouples the security from the IP or any underlying infrastructure – allowing policy to follow the workload, however it moves or changes. Security policy becomes an enabler of migration and change, automatically moving with the assets themselves.

The most effective hybrid cloud solutions will be software-based, able to integrate with any other existing software solution, including Ansible, Chef, Puppet, SCCM, and more. This also makes deployment fast and seamless, with implementation taking hours rather than days. At Guardicore, our customers often express surprise when we request three hours to install our solution for a POC, as competitors have asked for three days!

The ease of use should continue after the initial deployment. An automated, readable visualization of your entire ecosystem makes issues like compliance entirely straightforward, and provides an intuitive and knowledgeable map that is the foundation to policy creation. Coupling this with a flexible labeling system means that any stakeholder can view the map of your infrastructure, and immediately understand what they are looking at.

These factors allow you to implement micro-segmentation in a highly effective way, with granular control down to the process level. In comparison to traditional security tools, Guardicore can secure and micro-segment an application in just weeks, while for one customer it had taken 9 months to do the same task using VLANs.

What Makes Guardicore Unique When it Comes to Hybrid Data Center Security Strategies?

For Guardicore, it all starts with the map. We collect all the flows, rather than just a sample, and allow you to access all your securely stored historical data rather than only snap-shotting small windows in time. This allows us to support more use cases for our customers, from making compliance simple to troubleshooting a slowdown or forensic investigation into a breach. We also use contextual analysis on all application dependencies and traffic, using orchestration data, as well as the process, user, FQDN and command line of all traffic. We can enable results, whatever use case you’re looking to meet.

Guardicore is also known for flexibility, providing a grouping and labeling process that lets you see your data center the way you talk about it: your own labels rather than pre-defined ones imposed by a vendor, and Key:Value formats instead of flat tags. This makes it much easier to create the right policies for your environment, and to use the map as a hierarchical view of your entire business structure, with context that makes sense to you. Taking this a step further into policy creation, your rules methodology can be a composite of whitelisting and blacklisting, reducing the risk of inflexibility and complexity in your data center and even allowing security rules that are not tied to segmentation use cases. In contrast, competitors use whitelist-only approaches with fixed labels and tiers.
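As a rough illustration of how Key:Value labels and a composite allow/block rule model fit together, consider this short Python sketch. The labels, rule format, and precedence (explicit blocks first, then allows, then default deny) are assumptions for the example, not Guardicore's actual policy engine:

```python
# Hypothetical sketch of Key:Value label-based policy matching with a
# composite whitelist/blacklist model; not Guardicore's actual engine.

def matches(labels: dict, selector: dict) -> bool:
    """True if every Key:Value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())


def evaluate(src: dict, dst: dict, rules: list) -> str:
    """Composite model: explicit block rules win, then allow rules,
    otherwise fall back to a default-deny (whitelist) posture."""
    for rule in rules:
        if rule["action"] == "block" and matches(src, rule["src"]) and matches(dst, rule["dst"]):
            return "block"
    for rule in rules:
        if rule["action"] == "allow" and matches(src, rule["src"]) and matches(dst, rule["dst"]):
            return "allow"
    return "block"  # default deny


rules = [
    {"action": "block",
     "src": {"Environment": "Development"},
     "dst": {"Environment": "Production"}},
    {"action": "allow",
     "src": {"Environment": "Staging", "App": "Billing"},
     "dst": {"Environment": "Production", "App": "Billing"}},
]

dev_web = {"Environment": "Development", "App": "Billing"}
prod_db = {"Environment": "Production", "App": "Billing"}
staging_billing = {"Environment": "Staging", "App": "Billing"}

print(evaluate(dev_web, prod_db, rules))          # block
print(evaluate(staging_billing, prod_db, rules))  # allow
```

Note that the rules reference labels such as `Environment` and `App`, never IPs, which is what lets a policy follow a workload wherever it moves; the specific environment names here echo the ones discussed earlier in the article.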

Fast & Simple Segmentation with Guardicore

Your hybrid data center security strategies should enable speed and flexibility, not stand in your way. First, ensure that your solution supports any environment. Next, gain as much visibility as possible, including context. Use this to collect all your data in an intuitive way, without gaps, before creating flexible policies that focus on your key objectives, regardless of the underlying infrastructure.

Interested in learning more about implementing a hybrid cloud center security solution?

Download our white paper

What Makes a Business Successful? Celebrating the Culture of Guardicore

Culture is what takes good businesses and makes them successful organizations. Of course you need to have great technology, which we have. We also pride ourselves on addressing a clear need for our customers, which has given us the financial stability to expand. Once you have this foundation, your culture is what is going to add the special spark and engage your team. With it, your business becomes a well-oiled machine, and without it – even a great product with a strong market fit can fail.

One important part of managing growth and global expansion for Guardicore is keeping our finger on the pulse of the organization. For us, this means making sure that the strong and unique culture we created as a small company starting out remains an integral part of our organization, even as we expand. Not only does this keep our employees happy, it also speaks to our customers, making more businesses want to choose us as their security partner and creating relationships that resonate more deeply than the competition's.

We recently ran a global internal survey of our staff, asking for anonymous feedback. Our intention was to highlight the areas in which we’re doing well, and to better understand how we can improve, both as an organization and as an employer. As a part of this survey, we looked at both satisfaction and engagement.

Satisfaction vs Engagement: What’s the Difference?

For us, satisfaction means how happy our employees are at Guardicore. This could include the work environment, how interesting each employee finds their work, their compensation, and more. Engagement builds on satisfaction, and refers to how connected our staff feel to Guardicore. How much will they go the extra mile for the company and for its customers? The level of engagement relies heavily on culture.

The survey results were extremely positive, which showed us that our staff know we're on their side and ready to take their feedback into consideration. The results helped us see that our staff have a high level of satisfaction at work, and gave us some fantastic guidance on where we can support them further by making improvements. Notably, the level of engagement our employees reported was even higher; it was clear from the results that our team feel incredibly engaged.

Because I believe that engagement levels are intrinsically connected to culture, identifying this culture is very important. As we grow and expand, what is the unique make-up of our culture that we need to hold on to, and how can we do it?

Culture is Where the Magic Happens

If I were to summarize the Guardicore culture in four words, I would say Fun, Straightforward, Caring, and Smart. This is our ethos, but it is not written on posters. It is most strongly represented by our people. When we’re hiring new people, we look for candidates who check these boxes, people who you can learn from, but who you also love spending time with. People who are trustworthy and straightforward, and who care about their work and the people who they see every day. If candidates walk into an interview and they are fantastic at what they do professionally, but don’t seem to care, or don’t have a spark of fun – they aren’t likely to be the right choice for the position.

Cultivating this Culture at Guardicore

The level of engagement we achieve owes a lot to how we support our employees in connecting with one another. Of course, we organize team bonding days, corporate events, and at-work initiatives such as a volunteering day. Beyond this, though, we've seen get-togethers organized outside of work, led by the employees themselves.

As our staff love spending time together, they arrange activities outside of working hours. We have a soccer team that was put together by our staff, a band – Layer 7, and spontaneous board game evenings that employees arrange and initiate. Our teams make plans to go out after work for drinks, or get together on the weekends for beach days and outings. We’ve found that when you recruit like-minded people, and bring together individuals who don’t just work together, but have the potential to form friendships too, the effects on engagement, productivity and success are clear to see.

Understanding the Effect of Engagement

This work culture is the magic ingredient that creates engagement in our staff. Because we’re building friendships, and not just co-working relationships, we have found that our staff is a lot more willing to go above and beyond for one another. This isn’t connected to a financial reward, or the promise of any incentives. It truly comes from a place of caring, and liking one another.

This has a powerful effect on our customers, who have a better experience working with us because we’re a cohesive team filled with people who really care. It also affects the employees themselves, because this company-wide culture means that no one is ‘just another member of staff.’ From the toast when we take on a new customer, to the long tables in the dining room where everyone eats together regardless of department or job title, the inclusive nature of Guardicore means that we’re always working with friends.

Company culture is often a product of the people who work there. The original team at Guardicore, who created the company and have been here since the very first weeks and months of the organization, are smart, fun, straightforward and caring. They hired more of the same type of people, and it’s grown from there and continues to be built around this personality. We are now 160 people strong, building the organization with these core principles at heart. The challenge we have is to keep this culture as we grow.

Does it sound like the Guardicore magic culture would be a great fit for you?

Take a look at our open positions and get in touch if you see something that sparks your interest!