Guardicore Centra Integration now available on CyberArk Marketplace

We had our first integration with CyberArk in 2016. One of our very early adopters, a CISO for a large telecommunications company, realized that Guardicore Centra was becoming a critical part of his security infrastructure and decided to integrate the two products.

The CISO understood that one of the biggest security threats for his organization was the misuse of privileged accounts with elevated permissions on IT systems. He decided to use CyberArk with Guardicore in order to manage privileged accounts and protect his critical assets. Guardicore secured access to critical assets via micro-segmentation and detection capabilities, and CyberArk managed the privileged access on these systems.

Since then, we have added additional features such as identity-based policies to provide a stronger overall solution, and many other customers have benefited from these integrated capabilities.

I am happy to update you that this integration of Guardicore Centra security platform and the CyberArk Privileged Access Security Solution has recently been made available on the CyberArk Marketplace, helping our joint customers accelerate their ability to meet compliance requirements and reduce security risk without introducing additional operational complexity.

By providing the Guardicore plug-in via the CyberArk Marketplace, customers can now more easily evolve their privileged access management programs. Our integration enables CyberArk customers to protect their hybrid cloud and data center while maintaining strong privileged access controls.

As a CyberArk C3 Alliance member, Guardicore will continue to work alongside CyberArk to deliver value to shared customers through an integrated plug-in, as part of their security stack.

Privileged access is pervasive and provides attackers the “keys to the IT kingdom.”

It is widely recognized that nearly all damaging cyber-attacks involve privileged account compromise. Attackers are then able to exploit this legitimate privileged access to establish a foothold and make lateral moves across enterprise IT infrastructure. Additionally, without least privilege, internal users might abuse their access rights. By integrating the capabilities of Guardicore Centra with the CyberArk solution, customers can be better positioned to detect and stop lateral movement using both software-defined segmentation and privileged access management.

Thinking about zero trust implementation? CyberArk combines with Guardicore to take you that much closer to the adoption of the zero trust model of security.

Want to read more about how Guardicore micro-segmentation can take you closer to adopting a zero trust framework? Download our white paper on getting there faster.

Read More

Guardicore vs. VLANs. No Contest. All That’s Left is Deciding What to Do with Your Free Time

A fast-paced business world deserves security solutions that can keep up. Speed isn’t everything, but reducing complexity and time when deploying a new strategy can be the difference between success and failure. Let’s look at the process of segmenting just one business critical application via VLANs, and then compare how it works with Guardicore Centra micro-segmentation. Then you can decide how to use all that spare time wisely.

VLANs – How Long Does it Take?

If you decide to go down the VLAN route, you will need to spend around 4-6 months preparing your network and application changes. On the networking side, teams will configure switches, connect servers, and generally get the network ready for the new VLANs. On the application side, teams will build a migration strategy, starting with discovering all the relevant infrastructure, making changes to application code where necessary and preparing any pre-existing dependent applications for the change ahead of time.

After this preparation period, you can start to build policy. It can take anywhere from 2-4 months to submit firewall change requests and have fixes and changes signed off and approved by the firewall governance teams. Meanwhile, your critical applications remain vulnerable.

Once you’re ready to move on to policy enforcement, you’ll need to spend a weekend migrating the application to the new VLAN. This includes manually reconfiguring IP addresses, applications and integration points. Don’t forget to warn your users, as there will be some application downtime that you can’t avoid. Altogether, you’ve spent up to 10 months performing this one segmentation task.


Guardicore Centra – How Long Does it Take?

Now let’s take a look at how it works when you choose smart segmentation for hybrid cloud and modern data center security with Guardicore. The preparation time is just a few days, as opposed to half a year, while Guardicore agents are deployed onto your application. This installation is simple and painless, and works with any platform. Labeling is also done during this time, integrating with your organizational inventory, such as a CMDB or cloud tags. Guardicore’s Reveal platform automatically discovers all traffic and flows, giving you an accurate, real-time map of your IT ecosystem, and continues to provide historical views as you proceed.

As policy creation is automatic, your policy suggestions can be tested immediately, and then run in ‘alert mode’ for two weeks while you tweak your policy to make sure it’s optimized to its full potential. When you’re ready to go – pick a day and switch from alert to enforce mode, with no impact on performance, and no downtime.
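The alert-then-enforce workflow can be sketched in a few lines of Python. This is a hypothetical illustration, not Centra's actual policy engine or API: the labels, rule shape, and mode names are all assumptions made for the example. The point it shows is that switching from alert to enforce changes only what happens to violating flows, not the rules themselves.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    src_label: str   # e.g. "app:billing" -- illustrative label scheme
    dst_label: str
    port: int

@dataclass
class Rule:
    src_label: str
    dst_label: str
    port: int
    action: str      # "allow" or "block"

def evaluate(flow: Flow, rules: list[Rule], mode: str) -> str:
    """Return the outcome for a flow under the current policy mode."""
    for rule in rules:
        if (rule.src_label == flow.src_label
                and rule.dst_label == flow.dst_label
                and rule.port == flow.port):
            if rule.action == "block":
                # In alert mode a violation is only logged; nothing is dropped.
                return "alerted" if mode == "alert" else "blocked"
            return "allowed"
    # Unmatched traffic: visibility-only while alerting, default-deny once enforcing.
    return "alerted" if mode == "alert" else "blocked"

rules = [Rule("app:billing", "db:billing", 5432, "allow")]
# The same unmatched flow is merely logged in alert mode, dropped in enforce mode.
assert evaluate(Flow("web:public", "db:billing", 5432), rules, "alert") == "alerted"
assert evaluate(Flow("web:public", "db:billing", 5432), rules, "enforce") == "blocked"
```

Because the rules are identical in both modes, the two-week alert period lets you tune policy against real traffic before any flow is ever dropped.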

You’ve Just Saved 9 Months – Let’s Use It!

With security handled, and 9 months of time to kill, here are just some of the things you could achieve in your organization.

Start a Language Lunch Club


90% of employees say that taking a regular lunch break helps them to feel more productive in the afternoon. Despite this, most of us often grab a quick sandwich, or don’t even manage to get up from our desks. Why not use some of your newfound company “free time” to encourage teams to eat lunch together, socializing and enjoying some much-needed downtime? This time ‘off’ can give colleagues a chance to get to know one another, forming new friendships, social bonds and levels of trust between your staff. If you want to combine this with learning a new skill and further enriching your staff (expanding their minds and improving memory and brain function), you could start a language club where your team members can learn basic skills that can support them in reaching global customers. With 180 hours to kill – that’s a whole lot of lazy, or super-productive, lunches!

Play with Lego!


Many organizations struggle with how to make team meetings more productive, especially when everyone is always so short on time. If you’re known for sharing memes like “I survived another meeting that should have been an email,” then isn’t it time you did something about it?

Lego Serious Play is one great methodology that can get staff thinking and working outside of the box. As 80% of our brain cells are connected to our hands, building and creating can unlock hidden thoughts and ideas. It’s also a fantastic way to get input from quieter team members, as it works for both introverts and extroverts, and uses visual, kinaesthetic and auditory communication. If you have some free time left over, why not try beating the world record for the tallest Lego tower, built in Tel Aviv in 2017. You’ll have to make it to 36 meters to stand a chance though!

Put More Time into Health and Wellness


With more time in the day, there’s no need to take shortcuts that adversely affect your health. Tell your employees to skip the elevator and take the stairs, or to come in slightly later and cycle instead of jumping on available public transport. If your staff take the stairs twice a day for the whole nine months of saved time – that’s 12,600 calories, or the equivalent of 50 pieces of cheesecake!

Research has shown that employees who have work wellness programs report taking 56% fewer sick days than those without. Use some of the free time you’re saving to set up 8:30am or 5:00pm wellness classes, such as yoga, mindfulness, aerobics or Zumba and give your employees more reasons to love coming to work! Activity also encourages greater focus and productivity while on the job, so consider it a triumph to flex the muscles of your body and your mind.

Do More with Your Day Job


Spend some time getting to know other departments in the company, sitting down with Procurement to understand recent contracts, or heading over to R&D to have that conversation you’ve been meaning to have about intellectual property. Nine months makes 1,440 hour-long coffee meetings! Better yet, why not plan a trip to an at least semi-exotic location to visit your offshore development teams on site? Allow yourself a bit of time out of the office while getting some all-important face time with other members of your team.

You could also use some of your extra time to visit customers or other stakeholders in the supply chain, identifying the risks they pose to your organization and the mitigations you could put in place. Interested in some more informal professional development? It’s the perfect time to start training in a new skill, mentor some junior employees, or think about your own career enrichment. After all, you’ve just saved nine months!

Encourage Innovation


Most people have heard of Google’s 20% rule, where employees are encouraged to work on side projects, new hustles, or research for 20% of their working day. But for many companies this is a huge privilege – only possible if you have enough time in the day to get all the urgent work off your desk – which we know is never the case. But now, with more time to play with, you can implement some dedicated innovation time. With 9 months of extra time to use up, it will take four and a half years of an hour a day before your staff have used up the surplus.

Now It’s Your Turn to Innovate: What Will Your Teams Do With Their Free Time?

Why not draw up a bucket list of what you could do with an extra nine months, and how it could benefit your company?

Take a look at the seven steps to operationalize micro-segmentation so you can see just how simple it would be to get started.

Read More

Guardicore Extends Support to AWS Outposts, Providing Holistic Visibility and Control Across the Hybrid Cloud

Like the real clouds that can be seen in the Earth’s atmosphere, the clouds of IT are constantly changing across the data center landscape. Last year, AWS announced plans to extend the public cloud into on-premises data centers and introduced AWS Outposts, which allows customers to run AWS infrastructure on-premises or in co-location facilities, creating a new type of hybrid cloud. AWS customers can expect a consistent experience, whether they are managing infrastructure on the public cloud or using Outposts.

Today, I am excited to share the news that we will support AWS Outposts just like any other part of the hybrid cloud. Together with AWS and their hardware partners, we look forward to expanding the Guardicore ecosystem to additional areas of the ever-expanding cloud, securing customers wherever they might be.

Highlighting the Benefits of AWS Outposts

Using AWS Outposts, organizations can run services such as EC2, EBS, and EKS on-premises, as well as database services like Amazon RDS or EMR analytics. Even while running AWS services locally, you can still connect to services in the local AWS Region and use the same tools and technology to manage your applications. With this announcement from Guardicore, security can also remain the same on-premises as you’ve come to expect with AWS in the cloud.

The value of this technology for data storage and management is powerful. For organizations that are bound by regulations for storing data off the cloud, or in countries with data sovereignty requirements or no AWS Region, Outposts is a valuable alternative that makes data processing and storage seamless.

Healthcare is a strong example of a vertical that can benefit from Outposts. Organizations can run machine learning and analytics models against their health management platforms, even where low-latency processing requirements dictate that they remain on-premises. Because the data is stored locally, it is also quick to retrieve. Financial services is another use case that can leverage Outposts to deliver banking or processing capabilities within the confines of local data requirements.

Making it Happen

To provide the widest possible coverage, Guardicore will support both variants of AWS Outposts: VMware Cloud on AWS Outposts, through our existing VMware orchestration integration, and the AWS native variant of Outposts running on-premises.

Read more about our ever-evolving capabilities for AWS security as a trusted AWS Technology Partner, and stay tuned for more details on this exciting news and other collaborations.

Want to know more about how Guardicore, a trusted AWS technology partner, helps you nail hybrid cloud security by partnering with AWS? Download our white paper on the shared security model.

Read More

Segmenting Users on AWS WorkSpaces – Yes It’s a Thing, and Yes, You Should Be Doing It!

I recently came across a Guardicore financial services customer with a very interesting use case. They were looking to protect their virtual desktop infrastructure (VDI) environment in the cloud.

The customer’s setup is a hybrid cloud: it has legacy systems that include bare-metal servers, Solaris and some old technologies on-premises. It also utilizes several virtualization environments, such as VMware ESX, Nutanix and OpenStack.

Concurrently with this infrastructure, the customer has started using AWS and Azure and plans to use containers in these platforms, but has not yet committed to anything specific.

One interesting element was how the customer was migrating its on-premises Citrix VDI environment to Amazon WorkSpaces. The customer was happy with Amazon WorkSpaces and had therefore decided to move to them in full production. WorkSpaces were especially useful for this customer because the majority of its users work remotely, and it was far easier to have those users work with an Amazon WorkSpace than to rely on the on-premises Citrix environment.

So, what is an AWS WorkSpace anyway?

In Forrester’s Now Tech: Cloud Desktops, Q4 2019 report, cloud desktops and their various offerings are discussed. Forrester states that “you can use cloud desktops to improve employee experience (eX), enhance workforce continuity, and scale business operations rapidly.” This is exactly what our customer was striving to achieve with AWS WorkSpaces.

AWS desktops are named “Amazon WorkSpaces”: a Desktop-as-a-Service (DaaS) solution offering either Windows or Linux desktops. AWS provides this pay-as-you-launch service all around the world. According to AWS, “Amazon WorkSpaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.”

To get started with Amazon WorkSpaces, click here.

Our customer was using Amazon WorkSpaces and scaling their utilization rapidly, which created a need to add a security layer to these cloud desktops. When users access WorkSpaces, they are automatically assigned a workspace with a dynamic IP address. Controlling this access is challenging with traditional network segmentation solutions that are IP-based. Thus, our customer was looking for a solution with the following features:

    • Visibility:
      • First and foremost within the newly adopted cloud platform
      • Secondly, not just an understanding of traffic between legacy systems on-premises and in the cloud individually, but visibility into inter-platform communications, too.
    • Special attention for Amazon WorkSpaces:
      • User-level protection: Controlling which users on Amazon WorkSpaces should and could interact with the various applications the customer owned, on-premises or in the cloud.
      • Single policy across hybrid-cloud: What was once implemented on-premises alone, now needed to be implemented in the cloud, and not only in the cloud, but cross cloud to on-premises applications. The customer was looking for simplicity, a single tool to control all policies across any environment.

Tackling this Use Case with Guardicore Centra

Our customer evaluated several solutions for visibility, segmentation and user identity management. The customer eventually chose Guardicore Centra for its ability to deliver all of the above from a single pane of glass, and to do so swiftly and simply.

Guardicore was able to provide visibility of all workloads, on-premises or in the cloud, across virtual, bare-metal and cloud environments, including all assets, giving our customer the oversight they needed of all traffic and flows, including between environments.

On top of visibility, Centra gave the customer an unprecedented degree of control. Guardicore policies were set to control and enforce allowed traffic, with an additional layer of user identity policies governing which users from Amazon WorkSpaces could talk to which on-premises applications. As mentioned previously, upon access to WorkSpaces, users are automatically assigned a WorkSpace with a dynamic IP, so traditional IP-based tools are inadequate and do not provide the flexibility needed to control these users’ access. In contrast, Guardicore Centra creates policies based on the user’s identity, controlling access to the data center and applications regardless of IP or WorkSpace.
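The difference between IP-based and identity-based control can be sketched as follows. The group names, application labels, and rule format here are hypothetical, invented for illustration; the point is that a decision keyed on identity is unaffected by whichever dynamic IP a WorkSpace happens to hold today.

```python
# Hypothetical identity-based rules, e.g. pulled from Active Directory groups.
identity_rules = {
    ("AD\\finance-analysts", "app:trading"): "allow",
    ("AD\\contractors",      "app:trading"): "block",
}

def check_access(user_group: str, target_app: str, src_ip: str) -> str:
    # src_ip is deliberately ignored: WorkSpaces IPs are dynamic, so the
    # decision keys on the user's identity rather than the source address.
    return identity_rules.get((user_group, target_app), "block")

# Same user on two different dynamically assigned IPs -- same decision:
assert check_access("AD\\finance-analysts", "app:trading", "10.0.5.17") == "allow"
assert check_access("AD\\finance-analysts", "app:trading", "10.0.9.42") == "allow"
# A group with no allow rule falls through to default-deny:
assert check_access("AD\\contractors", "app:trading", "10.0.5.17") == "block"
```

An IP-based firewall would need constant rule churn to track reassigned addresses; keying on identity makes the rule set stable as users come and go.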


Where Guardicore Centra Stands Apart from the Competition

Guardicore Centra provides distributed, software-based segmentation with user identity access management, enabling additional control of the network among any workloads.

Centra enables creating policy rules based on the identity of the logged-in user, with identities pulled from the organizational Active Directory integrated with Centra. Centra requires no network changes and no downtime or reboots. Policies are created seamlessly and take effect in real time, controlling new and active sessions alike.

This use case is just one example of how Guardicore Centra simplifies segmentation and gives customers fine-grained visibility and control. Centra allows an enterprise to control users’ access anywhere, setting policy that applies even when multiple users are logged in to the same system at the same time, and managing access for third parties, administrators, and network users.

Want to learn more about securing and monitoring critical assets and applications on AWS? Join our live webinar with AWS on Thursday, December 12th at 1:00pm Eastern.
Register Now

Environment Segmentation is your Company’s First Quick Micro-Segmentation Win

We often tell our customers that implementing micro-segmentation technology should be a phased project. Starting with a thorough map of your entire IT ecosystem, your company should begin with the ‘low-hanging fruit’: the easy wins that can show quick time to value and have the least impact on other parts of the business. From here, you’ll be in a strong vantage point to get buy-in for more complex or granular segmentation projects, perhaps even working towards a zero-trust model for your security.

One of the first tasks that many customers take on is separating environments from one another. Let’s see how it works.

Understanding the Context of your Data Center

Whether your workloads are on-premises, in the cloud, or in a hybrid mix of the two, your data center will be split into environments. These include:

  • Development: Where your developers create code, try out experiments, fix bugs, and use trial and error to create new features and tools.
  • Staging: This is where testing is done, either manually or through automation. Resource-heavy, and as similar as possible to your production environment. This is where you would do your final checks.
  • Production: Your live environment is your production environment. If any errors or bugs make it this far, they could be discovered by your users, and through your most critical applications they could have the greatest impact on your business. While all environments are vulnerable, and some may even be more easily breached, penetration and lateral movement in production can cause the most damage.

Of course, every organization is different. In some cases, you might have environments such as QA, Local, Feature, or Release, to name just a few. Your segmentation engine should be flexible enough to meet any business structure, suiting your organization rather than the other way around.

It’s important to note that these environments are not entirely separate. They share the same infrastructure and have no physical separation. In this reality, there will be traffic which needs to be controlled or blocked between the different environments to ensure best-practice security. At the same time however, in order for business to run as usual, specific communication flows need to be allowed access despite the environment separations. Mapping those flows, analyzing them and white-listing them is often not an easy process in itself, adding another level of complexity to traditional segmentation projects carried out without the right solution.

Use cases for environment segmentation include keeping business-critical servers away from customer access, and isolating the different stages of the product life cycle. This vital segmentation project also allows businesses to keep up with compliance regulations and prevents attackers from exploiting security vulnerabilities to access critical data and assets.

Traditional Methods of Environment Segmentation

Historically, enterprises would separate their environments using firewalls and VLANs, often physically creating isolation between each area of the business. They may have relied on cloud platforms for development and used on-premises data centers for production, for example.

Today, some organizations adapt VLANs to create separations inside a data center. This relies on multiple teams spending time configuring network switches, connecting servers, and making application and code changes where necessary. Still, in static environments hosted on the same infrastructure, without dynamic changes or the need for large scale, VLANs get the job done.

However, the rise in popularity of cloud and containers, as well as fast-paced DevOps practices, has made quick implementation and flexibility more important than ever before. It can take months to build and enforce a new VLAN, which can become a huge bottleneck for the entire business and even create unavoidable downtime for your users. Manually maintaining complex rules and changes can cause errors, while out-of-date rules leave dangerous gaps in security that can be exploited by sophisticated attackers. VLANs do not extend to the cloud, which means your business ends up trying to reconcile multiple security solutions that were not built to work in tandem. Often this results in compromises that put you at risk.

A Software-Based Segmentation Solution Helps Avoid Downtime, Wasted Resources, and Bottlenecks

A policy that follows the workload using software bypasses these problems. Using micro-segmentation technology, you can isolate low-value environments such as Development from Production, so that even in case of a breach, attackers cannot make unauthorized movement to critical assets or data. With intelligent micro-segmentation, this one policy will be airtight throughout your environment. This includes on-premises, in the public or private cloud, or in a hybrid data center.
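The isolation rule described above can be sketched in a few lines. This is a minimal, hypothetical model, not any vendor's actual policy format: environments are assumed to be labels on workloads, with a default-deny stance between environments and an explicit allowlist for the few cross-environment flows the business needs.

```python
def env_policy(src_env: str, dst_env: str,
               allowlist: set[tuple[str, str]]) -> str:
    """Default-deny between environments unless a flow is explicitly allowlisted."""
    if src_env == dst_env:
        return "allow"   # intra-environment traffic is left untouched
    return "allow" if (src_env, dst_env) in allowlist else "block"

# Example: a CI pipeline may promote builds from staging to production,
# but nothing in development may reach production directly.
allowlist = {("staging", "production")}
assert env_policy("staging", "production", allowlist) == "allow"
assert env_policy("development", "production", allowlist) == "block"
```

Because the rule keys on workload labels rather than network location, the same policy follows workloads across on-premises, cloud, and hybrid infrastructure.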

The other difference is the effort in terms of implementation. Unlike with VLANs, with software-based segmentation, there is no complex coordination among teams, no downtime, and no bottlenecks while application and networking teams configure switches, servers and code. Using Guardicore Centra as an example, it takes just days to deploy our agents, and your customers won’t experience a moment of downtime.

Achieve Environment Segmentation without Infrastructure Changes

Environment segmentation is a necessity in today’s data centers: to achieve compliance, reduce the attack surface, and maintain secure separation between the different life stages of the business. However, this project doesn’t need to be manually intensive. When done right, it shouldn’t involve multiple teams, result in organizational downtime or even require infrastructure changes. In contrast, it can be the first stage of a phased micro-segmentation journey, making it easier to embrace new technology on the cloud, and implement a strong posture of risk-reduction across your organization.

Want to learn more about what’s next after environment segmentation as your first micro-segmentation project? Read up on securing modern data centers and clouds.

More Here.

Are you Prepared for a Rise in Nation State Attacks and Ransomware in 2020?

Once you know what you’re up against, keeping your business safe might be easier than you think. In this blog, we’re going to look at two kinds of cyber threats: nation state cyber attacks and ransomware. Neither is a new concern, but both are increasing in sophistication and prevalence. Many businesses feel powerless to protect against these events, and yet a list of relatively simple steps could keep you protected in the event of an attack.

Staying Vigilant Against Nation State Actors

According to the 2019 Verizon Data Breach Investigations Report, nation state attacks have increased from 12 percent of breaches in 2017 to 23 percent in 2018.

One of the most important things to recognize about nation state attacks is that it is getting harder to ascertain where they are coming from. Attackers have learned to cleverly obfuscate their attacks by mimicking other state actors’ behavior, tools, and coding, and by working through layers of hijacked, compromised networks. In some cases, they work through proxy actors. This makes the process of attribution very difficult. One good example is the 2018 Winter Olympics in Pyeongchang, where attackers launched the Olympic Destroyer malware. It took down the Olympic network’s wireless access points, servers, ticketing system, and even reporters’ internet access for 12 hours, immediately prior to the start of the games. At first, metadata in the malware appeared to attribute the attack to North Korea, but this was actually the result of deliberate manipulation of the code. Much later, researchers concluded it was of Russian origin.

These ‘false flag’ attacks have a number of benefits for the perpetrators. Firstly, the real source of the threat may never be discovered. Secondly, even if the correct attribution is eventually found, the news cycle has died down, the exposure is less, and many people may not believe the new evidence.

This has contributed to nation state actors feeling confident enough to launch larger and more aggressive attacks, such as Russian attacks on Ukrainian power grids and communications, or the wiper attacks attributed to the Iranian-linked group APT33, which took down more than 30,000 laptops and servers used in Saudi oil production.

Ransomware often Attacks the Vulnerable, including Local Government and Hospitals

State sponsored attacks have the clout to do damage where it hurts the most, as seen in the two largest ransomware attacks ever experienced, WannaCry and NotPetya. These were built using EternalBlue, an exploit allegedly stolen from the US NSA, together with techniques from Mimikatz, an open-source credential-theft tool created by a French researcher.

This strength, combined with the tight budgets and flat networks of local governments and healthcare systems, is a recipe for catastrophe. Hospitals in particular are known for having flat networks and medical devices based on legacy and end-of-life operating systems. According to some estimates, hospitals are the targets of up to 70% of all ransomware incidents. The sensitive nature of PII and health records and the direct impact on safety and human life makes the healthcare industry a lucrative target for hackers looking to get their ransom paid by attacking national infrastructure.

As attackers become increasingly brazen and go after organizations that are poorly placed to stand up to the threat, it’s more important than ever that national infrastructure operators think about security and take steps to close these glaring gaps.

Shoring Up Your Defenses is Easier Than You Think

The party line often seems to be that attackers are getting smarter and more insidious, and data centers are too complex to handle this threat. It’s true that today’s networks are more dynamic and interconnected, and that new attack vectors and methods to hide these risks are cropping up all the time. However, what businesses miss is the handful of very achievable and even simple steps that can help limit the impact of an attack, and perhaps even prevent the damage from occurring in the first place.

Here’s what enterprises can do:

  • Create an Incident Response Plan: Make sure that anyone can understand what to do in case of an incident, not just security professionals. Think about the average person on your executive board, or even your end users. You need to assume that a breach or a ransomware attack will happen, you just don’t know when. With this mindset, you’ll be more likely to create a thorough plan for incident response, including drills and practice runs.
  • Protect your Credentials: This starts with utilizing strong passwords and two-factor authentication, improving the posture around credentials in general. On top of this, the days of blanket administrative rights are over. Every user should have only the access they need, and no more. This stops bad actors from escalating privileges and moving laterally within your data center and taking control of your devices.
  • Think Smart on Security Hygiene: Exploits based on EternalBlue – which targets the Microsoft SMBv1 vulnerability – were able to cause damage only because organizations had not applied the patch Microsoft released in March 2017. Software vulnerabilities can be mitigated through patching, vulnerability testing, and certification.
  • Software-Defined Segmentation: If we continue with the mindset that an attack will occur, it’s important to be set up to limit the blast radius of a breach. Software-defined segmentation is the smartest way to do this. Without any need to make infrastructure changes, you can isolate and protect your critical applications. It also protects legacy or end-of-life systems that are business critical but cannot be secured with modern solutions, a common problem in the healthcare industry. And unlike VLANs and cloud security groups, it takes hours, not months, to implement.

Following this Advice for Critical Infrastructure

This advice is a smart starting point for national infrastructure as well as enterprises, but it needs more planning and forethought. When it comes to critical infrastructure, your visibility is essential, especially as you are likely to have multiple platforms and geographies. The last thing you want is to try to make one cohesive picture out of multiple platform-specific disparate solutions.

It’s also important to think about modern-day threat vectors. Today, attacks can come through IP-connected IoT devices or networks, so your teams need to be able to detect non-traditional server compute nodes.

Incident response planning is much harder on a governmental or national level, and therefore needs to be taken up a notch in preparation. You may well need local, state, and national participation and buy-in for your drills, including law enforcement and emergency relief in case of panic or disruption. How are you going to communicate and share information on both a local and international scale, and who will have responsibility for what areas of your incident response plan?

Learning from the 2018 Olympics

Attacks against local government, critical infrastructure and national systems such as healthcare are inevitable in today’s threat landscape. The defenses in place, and the immediate response capabilities will be the difference between disaster and quick mitigation.

The 2018 Olympics can serve as proof. Despite Russia’s best attempts, the attack was thwarted within 12 hours. A strong incident response plan was put into place to find the malware and come up with signatures and remediation scripts within one hour. 4G access points had been put in place to provide networking capabilities, and the machines at the venue were reimaged from backups.

We can only hope that Qatar is already rehearsing an equally strong incident response plan ahead of the 2022 World Cup, especially with radical ‘semi-state actors’ in the region, such as the Cyber Caliphate Army and the Syrian Electronic Army, who could act as proxies for a devastating state-actor attack.

We Can Be Just as Skilled as the Attackers

The attitude that ‘there’s nothing we can do’ to protect against the growth in nation state attacks and ransomware threats is not just unhelpful, it’s also untrue. We have strong security tools and procedures at our disposal, we just need to make sure that we put these into place. These steps are not complicated, and they don’t take years or even months to implement. Staying ahead of the attackers is a simple matter of taking these steps seriously, and using our vigilance to limit the impact of an attack when it happens.

Want to understand more about how software defined segmentation can make a real difference in the event of a cyber attack? Check out this webinar.

A Case Study for Security and Flexibility in Multi-cloud Environments

Most organizations today opt for a multi-cloud setup when migrating to the cloud. In fact, most enterprise adopters of public cloud services use multiple providers – an approach known as multicloud computing, a subset of the broader term hybrid cloud computing. In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers, and Gartner comments that “most organizations adopt a multicloud strategy out of a desire to avoid vendor lock-in or to take advantage of best-of-breed solutions”.

When considering segmentation solutions for the cloud, avoiding vendor lock-in is equally important, especially considering security concerns.

Let’s consider the following example, based on an experiment performed by one of our customers. As discussed in the previous posts in this series, the customer created a simulation of multiple applications running in Azure and AWS. For the specific setup in Azure, see the first and second posts in this series.

Understanding the Experiment

 

Phase 1 – Simulate an application migration between cloud providers:

The customer set up various applications in Azure, one of which is the CMS application. Network security groups (NSGs) and application security groups (ASGs) were set up for CMS, using a combination of allow and deny rules.

The customer then attempted to migrate CMS from Azure to AWS. After the relevant application components were set up in AWS, the customer attempted to migrate the policies from Azure security groups to AWS security groups and access control lists (ACLs). In order for the policies to migrate with the application, the deny rules in the Azure security groups had to be translated either into allow rules covering all other traffic in AWS security groups, or into network-layer deny rules in AWS ACLs.
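
The translation step above can be sketched in a few lines. This is a toy model, not any provider's actual API: the rule dictionaries and field names are illustrative, and real NSG rules carry far more attributes (direction, priority, CIDR, protocol).

```python
# Toy sketch of the allow/deny translation described above: Azure NSG rules
# mix allow and deny, while AWS security groups accept only allow rules,
# so every deny must become a network-ACL entry instead.
# Rule shapes and field names are illustrative, not real cloud APIs.

def translate_azure_rules(azure_rules):
    """Split Azure-style allow/deny port rules into AWS-style constructs:
    security-group allow rules plus network-ACL deny entries."""
    allowed = {r["port"] for r in azure_rules if r["action"] == "allow"}
    denied = {r["port"] for r in azure_rules if r["action"] == "deny"}
    return {
        "sg_allow": sorted(allowed - denied),  # security group: allow-only
        "acl_deny": sorted(denied),            # network ACL: explicit denies
    }

rules = [
    {"port": 443, "action": "allow"},
    {"port": 80, "action": "allow"},
    {"port": 23, "action": "deny"},  # e.g. block legacy telnet
]
print(translate_azure_rules(rules))
# {'sg_allow': [80, 443], 'acl_deny': [23]}
```

Even in this simplified form, each deny rule forces a decision about which AWS construct absorbs it – which is exactly the per-rule manual evaluation the customer ran into.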

Important differences between AWS security groups and ACLs:

  1. Security groups – security groups are applied at the EC2 instance level and are tied to an asset, not an IP. They support only whitelisting (allow) rules and are stateful, so return traffic for an allowed connection is admitted automatically. Note that for inbound traffic the subnet’s ACL is evaluated first; only traffic the ACL permits reaches the security group.
  2. ACLs – access control lists are applied at the subnet level within a VPC and match on IPs. They support both allow and deny rules, but because they are tied to specific IPs, they cannot block by application context. They are also stateless, so return traffic must be explicitly allowed, which complicates both management and compliance audits.
  3. In short, AWS security groups do not support blacklisting and only enable whitelisting, while AWS ACLs support both deny and allow rules but are tied to IP addresses within a VPC, enabling blocking only of static IPs or whole subnets.
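
The stateful/stateless distinction in the list above can be modeled with two tiny classes. This is a deliberately minimal sketch of the behavior, not AWS's implementation; the class and parameter names are invented for illustration.

```python
# Toy model of the stateful/stateless difference between AWS security
# groups and network ACLs. A stateful security group remembers outbound
# connections and admits their replies; a stateless ACL evaluates every
# packet against its rules alone. Names here are illustrative.

class StatefulSG:
    def __init__(self, allowed_inbound_ports):
        self.allowed = set(allowed_inbound_ports)
        self.tracked = set()           # remote ports of connections we opened

    def outbound(self, remote_port):
        self.tracked.add(remote_port)  # remember, so the reply is admitted
        return True                    # assume egress is allowed

    def inbound(self, src_port, dst_port):
        return dst_port in self.allowed or src_port in self.tracked

class StatelessACL:
    def __init__(self, allow_ports):
        self.allow = set(allow_ports)

    def inbound(self, src_port, dst_port):
        # No connection tracking: reply traffic needs its own allow rule
        return dst_port in self.allow

sg = StatefulSG(allowed_inbound_ports=[443])
sg.outbound(remote_port=5432)                       # app opens a DB connection
print(sg.inbound(src_port=5432, dst_port=33000))    # True: tracked reply admitted

acl = StatelessACL(allow_ports=[443])
print(acl.inbound(src_port=5432, dst_port=33000))   # False: no ephemeral-port rule
```

The asymmetry in the last two lines is why translating a stateful rule set into stateless ACL entries requires extra rules for ephemeral return ports.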

Given these differences between security groups and ACLs, migrating the CMS application from Azure to AWS along with its policies required an engineer to evaluate each Azure rule and translate it into the relevant rules in AWS security groups and ACLs. This unexpectedly set back the migration tests and simulation.

This is just one example. Each major public cloud provider offers its own tools for policy management. Working with multiple cloud-native tools requires a lot of time and resources, and results in less secure policies and added inflexibility. The more hybrid your environment, and the more you depend on native tools, the more tools you will end up using. Each tool requires an expert – someone who knows how to use it and can work around its limitations – so you will need security experts for each cloud provider you choose, as each provider has a completely different solution with its own constraints. One limitation that all cloud providers’ native segmentation tools share is that cloud-based security groups only provide L4 policy control, so additional tools are required to secure your application layer.

Guardicore Provides a Single Pane of Glass for Segmentation Rules

When using Guardicore, each rule is applied to all workloads: virtual machines, public clouds (AWS, Azure, GCP, and others), bare metal, and container systems. Rules follow workloads and applications when migrating to or between clouds, or from on-premises to the cloud. Security teams can work with a single tool instead of multiple solutions to build a single, secure policy, which saves time and resources and ensures consistency across a heterogeneous environment.
Guardicore therefore enables migrating workloads without security concerns: policies migrate with the workloads wherever they go. All you need to take into account when deciding where to migrate a workload is the cloud provider’s offering itself.

Our customer used Guardicore to create the CMS application policies, adding a Layer 7 security layer with process-level rules that enhanced the Layer 4 controls from the native cloud provider. When migrating CMS from Azure to AWS, policies were no longer a concern: Guardicore Centra policies follow the application wherever it goes. Because policies are decoupled from the underlying infrastructure and are created based on labels, they followed the workloads from Azure to AWS with no changes necessary.

Phase 2 – Create policies for cross-cloud application dependencies

The customer’s experiment setup in AWS included an Accounting application in the London, UK region that periodically needed to access data from the Billing application databases. The Billing application was set up in Azure.

The Accounting application had two instances, one in the production environment and another in the development environment. The goal was for only the production instance of the Accounting application to have access to the Billing application.

In a recent Gartner analysis Infrastructure Monitoring With the Native Tools of AWS, Google Cloud Platform and Microsoft Azure, Gartner mentions that “Providers’ native tools do not offer full support for other providers’ clouds, which can limit their usability in multicloud environments and drive the need for third-party solutions.” One such limitation was encountered by our customer.

Azure and AWS security groups and ACLs can control cross-cloud traffic based only on the cloud providers’ public IPs. For two applications to communicate across clouds, one must allow the entire IP range of one provider’s region to communicate with the other.
Public IPs for servloads in Azure and AWS are dynamically assigned by default. Thus, without introducing a third-party solution, there is no assurance that traffic reaching a specific application in AWS actually comes from a specific application in Azure, and vice versa.
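
The coarseness of such a region-wide rule is easy to demonstrate with the standard library. The CIDR below is illustrative, not AWS's actual published London range, and the function is a sketch of the rule's logic, not a real firewall:

```python
# Sketch of the coarse cross-cloud allow rule described above: the only
# available handle is the provider's published region range, so the rule
# admits ANY workload in that range. The CIDR is illustrative only.
import ipaddress

AWS_LONDON_RANGE = ipaddress.ip_network("18.130.0.0/16")  # illustrative range

def allow_from_aws_london(src_ip):
    """Region-wide allow rule: no application context whatsoever."""
    return ipaddress.ip_address(src_ip) in AWS_LONDON_RANGE

print(allow_from_aws_london("18.130.44.7"))    # True: the Accounting prod app...
print(allow_from_aws_london("18.130.200.1"))   # True: ...or any other workload there
print(allow_from_aws_london("52.31.0.9"))      # False: outside the range
```

The middle line is the problem: the rule cannot distinguish the production Accounting instance from the development instance, or from an entirely unrelated workload in the same region.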

As public IPs are dynamically assigned to workloads in both Azure and AWS, our customer had to permit the whole IP range of the AWS London region to communicate with the Azure environment – with no application context and no control, introducing risk. Moreover, there was no way to prevent the development instance of the Accounting application from creating such a connection, short of adding an ACL in AWS to block all communication from that instance to the Azure range. That would be problematic and restrictive if, for example, the dev instance had dependencies on another application in Azure.

Guardicore Makes Multi-cloud Policy Management Simple

As we have already discussed, policies in Guardicore are decoupled from the underlying infrastructure. The customer created policies based on Environment and Application labels, with no dependency on the underlying cloud provider hosting the applications or on the applications’ private or public IPs. This enabled easy policy management: blocking the Accounting application in the Development environment on AWS while allowing the Production instance access to the Billing application in Azure. It also gave our customer the flexibility to migrate applications seamlessly between cloud providers in the future.
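
A label-based rule of this kind can be sketched as follows. The label names mirror the scenario in this post, but the matching logic is a minimal illustration, not Guardicore's actual policy engine:

```python
# Minimal sketch of a label-based policy, decoupled from IPs and from the
# underlying cloud. Label names mirror the scenario above; the matching
# logic is illustrative, not any product's real engine.

def policy_allows(src_labels, dst_labels):
    """Allow only Accounting/Production to reach Billing, wherever it runs."""
    return (src_labels.get("app") == "Accounting"
            and src_labels.get("env") == "Production"
            and dst_labels.get("app") == "Billing")

prod_acct = {"app": "Accounting", "env": "Production",  "cloud": "AWS"}
dev_acct  = {"app": "Accounting", "env": "Development", "cloud": "AWS"}
billing   = {"app": "Billing",    "env": "Production",  "cloud": "Azure"}

print(policy_allows(prod_acct, billing))  # True: production may reach Billing
print(policy_allows(dev_acct, billing))   # False: development is blocked

# Migrating Billing to another provider changes only its "cloud" label;
# the rule references labels, not IPs, so the policy needs no change.
```

Because neither IPs nor provider names appear in the rule, the same policy holds before, during, and after a migration.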

Guardicore provided a single pane of glass for multi-cloud segmentation rules. Each rule was applied on all relevant workloads regardless of the underlying infrastructure. Security teams were able to work with a single tool instead of managing multiple silos.

The same concept can be introduced for controlling and managing how your on-premises applications communicate with your cloud applications, ensuring a single policy across your whole data center, on premises or in the cloud. Using Guardicore, any enterprise can build a single, secure policy and save time and resources, while ensuring best-of-breed security.

Check out this blog to learn more about Guardicore and security in Azure or read more about Guardicore and AWS security here.

Interested in cloud security for hybrid environments? Get our white paper about protecting cloud workloads with shared security models.

Read More

Using Zero Trust Security to Ease Compliance

Data privacy is a heavily regulated area of cyber security. New regulations such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR) have added to a list of compliance mandates that already included PCI-DSS for financial data and HIPAA for patient information. Many enterprises have now established compliance officers or even whole teams, who carry a heavy workload in achieving and proving compliance with these regulations, both to be prepared for an audit and to put best practices into place.

As data centers have become increasingly complex and dynamic, this workload has increased exponentially. Visibility is understandably hard to achieve in a heterogeneous environment, and if you don’t know where your data is – how can you secure it?

Traditional Perimeter Security Causes Problems for Compliance

If your business relies on perimeter-based security, any breach is a breach of your whole network. Everything is equally accessible once an attacker has made it through your external perimeter. This security model cannot distinguish between types of data or applications, and does not define or visualize critical assets, giving everything in your data center an equal amount of protection.

This reality is a struggle for any IT or security team responsible for compliance. Multiple compliance authorities enforce strict controls over the management of customer data, including how it is held, deleted, shared and accessed. Personally identifiable information (PII) and anywhere financial information is stored (e.g., the cardholder data environment, or CDE) need added security measures and governance under compliance mandates, and yet these are often left unidentified, let alone secured. This is further complicated today by the growing amount of data that resides or communicates outside the firewall, for example in the cloud. Visibility is the first hurdle, and many enterprises stumble at it immediately.

On top of this, with border controls alone, as soon as your perimeter is breached, all your data is up for grabs by attackers who can make lateral movements inside your network. Even if you could see what you have, perimeter security simply can’t protect critical data that falls in scope for compliance at the required level.

Zero Trust as a Solution for Compliance

Many enterprises know that a Zero Trust model would provide a stronger security posture, and worry about the east-west traffic that remains unprotected, but they think of moving to a Zero Trust paradigm as an incredibly complex initiative. Segmenting applications, writing policy for different areas of the business, establishing which access permissions to grant and where – it sounds like it would complicate security, not simplify it.

However, Renee Murphy, principal analyst at Forrester Research, explains that when implemented intelligently, a Zero Trust model actually makes security and compliance a whole lot easier: “You end up with a less complex environment and doing less work overall. Once you know what [your data] is, where it is and how important it is, you can [then] put your efforts towards it.”

For this to be successful, and remain simple, your Zero Trust model’s implementation needs to start with visibility. Data classification is not an IT problem, it’s a business problem, and the business needs to be able to automatically discover all assets and data, both in real-time, and with historical baselines for comparison and policy creation.

Your partner in creating a Zero Trust model should be able to provide an automatic map of all applications, databases, communications and flows, including dependencies and relationships. This needs to be both deep, providing granular insight, and also broad, across your hybrid environment covering everything from legacy on-premises to container systems.

Furthermore, pick a vendor with strong granular enforcement capabilities. The best protection leaves the least possible exposure, so you need policies that can lock compliance environments down more tightly than port and IP alone. Seek out vendors that can create policies at the process, user, and domain-name level.

Not only does this provide the best starting point for Zero Trust initiatives, but it also means that compliance becomes far easier as a result of best-in-class documentation and records at every stage.

Regardless of which standard you wish to comply with, utilizing the Zero Trust model for visibility and segmentation to effectively limit scope and resources is essential. For example, the PCI Security Standards Council has published the Information Supplement: Guidance for PCI DSS Scoping and Network Segmentation, in which this is directly called out.

When You Establish Zero Trust, All Data Can be Treated Unequally

Once visibility is established and you have an accurate view of your network, you can easily identify what needs protecting. Compliance mandates are usually very clear about which data is in scope and out of scope, and only require that in-scope data meet the regulations. While perimeter security made it impossible to apportion protection differently throughout your data center, this is where micro-segmentation and Zero Trust thrive.

With zero trust, your security strategy can recognize that not everything is created equally. Some data or applications need more security and governance than others, and while certain assets need to be watched and controlled closely, others can be left with minimal controls.

With the right partner in place, enterprises can use a distributed firewall to prioritize their compliance efforts, starting with the most essential tasks. Granular rules can be put in place – down to the process level or based on user identity – strictly enforcing micro-perimeters around systems and data that are in scope. This is a much easier task than ‘protect everything, all the time.’
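
To make "down to the process level" concrete, here is a toy rule that checks which process and user is making a connection, not just the port. The field names and values are invented for illustration and are not a real product schema:

```python
# Toy illustration of enforcement tighter than port/IP: a rule that also
# checks which process and user opened the connection. Field names and
# values are illustrative, not a real product schema.

RULE = {
    "dst_port": 5432,
    "process": "payments-api",   # only this binary may talk to the DB
    "user": "svc_payments",
}

def connection_allowed(conn, rule=RULE):
    """Admit the connection only if every attribute in the rule matches."""
    return all(conn.get(k) == v for k, v in rule.items())

ok  = {"dst_port": 5432, "process": "payments-api", "user": "svc_payments"}
bad = {"dst_port": 5432, "process": "bash", "user": "root"}  # same port, wrong context

print(connection_allowed(ok))   # True
print(connection_allowed(bad))  # False: a port/IP rule alone would have allowed this
```

The second connection is exactly the kind of misuse that port-level rules cannot see: the port is legitimate, but the process and identity behind it are not.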

Demonstrating Compliance using a Zero Trust Environment

Adopting a Zero Trust mentality is also a really strong way to show auditors that you’re doing your part. A huge part of compliance is being able to guarantee that even in case of a breach, you have taken all reasonable steps to ensure that your data was protected from malicious intent. Each time an east-west movement is attempted, this communication is checked and verified. As such, your enterprise has never assumed that broad permissions are enough to guarantee a safe connection, and with micro-segmentation, you have reduced the attack surface as much as possible. This process also provides an audit trail, making incident response and documentation much simpler in case of a breach.

Consider partnering with a vendor that includes monitoring and analytics, as well as breach detection and incident response, to lower the chance of a cyber-attack and to create a plan for any events that violate policy or suggest malicious intent. This can dramatically reduce the likelihood and impact of an attack, as well as help to bolster a robust compliance checklist.

The days of relying on perimeter-based controls to stay compliant and secure are long gone. In a world where Zero Trust models are gaining acceptance and improving security posture so widely, enterprises need to do more to prove that they are compliant with the latest regulations.

The Zero Trust framework acknowledges that internal threats are now almost a guarantee, and enterprises need to protect sensitive data and crown jewel applications with more than just border control alone. Remaining compliant is an important yardstick to measure the security of your infrastructure against, and Zero Trust is an effective model to achieve that compliance.

Want to read more about implementing cloud security toward an effective Zero Trust model? Get our white paper about how to move toward a Zero Trust framework faster.
Read More

Guardicore Achieves Microsoft IP Co-Sell Status: Available for Download on the Azure Marketplace – Here’s What That Means for You

A couple of weeks ago we announced that the Guardicore Centra security platform is available in the Microsoft Azure Marketplace. As you might know, Centra was available in the marketplace before, as Guardicore has worked with Microsoft for a very long time, providing various integrations as well as research for Azure and Azure Stack. Now, the latest version of Centra is available and Guardicore has achieved an IP Co-Sell status.

One of the most important capabilities that we developed for Azure provides Centra with real-time integration to Azure orchestration. This provides metadata on the assets deployed in your Azure cloud environment, complementing the information provided by Guardicore agents.

For example, information coming from orchestration may include data that can’t be collected from the VM itself, including: Source Image, Instance Name, Private DNS Name, Instance ID, Instance Type, Security Groups, Architecture, Power State, Private IP Address and Subscription Name.

Using this information, Centra can accelerate a secure migration from an on-premises data center to Azure.

In addition, we are very proud that Guardicore has achieved the Microsoft IP Co-Sell status. This designation recognizes that Guardicore has demonstrated its proven technology and deep expertise that helps customers achieve their cloud security goals. Achieving this status demonstrates our commitment to the Microsoft partner ecosystem. It also proves our ability to deliver innovative solutions that help forward-thinking enterprise customers to secure their business-critical applications and data with quick time to value, reduce the cost and burden of compliance, and securely embrace cloud adoption.

Where to Start? Moving from the Theory of Zero Trust to Making it Work in Practice

Going back many years, perimeter controls were traditionally adequate for protecting enterprise networks that held critical assets and data. The hypothesis was that with strong external perimeter controls, watching your ingress and egress should be adequate protection. If you were a larger or more sophisticated entity, there would be additional fundamental separation points between portions of your environment. However, these still functioned as additional perimeter points – merely concentric circles of trust within which one could, more or less, move freely. In cases where threats occurred within your environment, you would hope to catch them as they crossed one of these rudimentary borders.

The Moment I Realized that Perimeters Aren’t Enough

This practice worked moderately well for a while. However, around fifteen years ago, security practitioners began to feel a nascent itch, a feeling that this was not enough. I personally remember working on a case: a hospital attacked by a very early spear-phishing campaign that mimicked a help desk request for a password reset. Clicking on a URL in a very official-looking email, staff were sent to a fake but official-looking website where these hospital professionals were prompted to reset their credentials – or so they thought. Instead, the attack began. This was before the days of the Darknet, and we even caught the German hacker boasting about what he had done – sharing the phishing email and fake website on a hacker message board.

I worked for a company that had a fantastic IPS solution, and upon deploying it, we were able to quickly catch the individual’s exfiltrations. At first, we seemed to be winning. We cut the attacker off from major portions of a botnet that resided on the cafeteria cash registers, most of the doctors’ machines and, to my horror, even the automated pharmacy fulfillment computers. Two weeks later, I received a call: the attacker was back, trying to get around the IPS device in new ways. While we were able to suppress the attack for the most part, I finally had to explain to the hospital IT staff that my IPS sat merely at the entrances and exits of their network, and that to really stop these attacks, we needed to look at all of the machines and applications that resided within their environment. We needed the ability to look at traffic before it made its way to and from the exits. This was the first of many realizations for me that reliance on perimeter-based security was slowly but surely eroding.

In the years since, the concept of a perimeter has all but completely dissolved, though it took quite a while for the larger population to accept this. The shift was helped along by the business and application interdependencies that bring vendors, contractors, distributors and applications through your enterprise, as well as the emergence of cloud and cloud-like provisioning utilized by DevOps teams. Having true perimeters as a main method of prevention is no longer tenable.

It was this reality that spurred the creation of Forrester’s Zero Trust model almost a decade ago. The basic premise is that no person or device is automatically given access or trusted without verification. In theory, this is simple. In practice, however – especially in data centers that have become increasingly hybrid and complex – it can get complicated fast.

Visibility is Foundational for Zero Trust

A cornerstone of Zero Trust is to ‘assume access.’ This means that any enterprise should assume that an attacker has already breached the perimeter – whether through stolen credentials, a phishing scam, basic hygiene failures such as poor passwords, weak account controls or lax patching, an IoT or third-party device, a brute-force attack, or any of the countless other vectors that today’s dynamic data centers expose.

Protecting your digital crown jewels through this complex landscape is getting increasingly tough. From isolating sensitive data for compliance or customer security, to protecting the critical assets that your operation relies on to run smoothly, you need to be able to visualize, segment and enforce rules to create an air-tight path for communications through your ecosystem.

As John Kindervag, founder of Zero Trust once said, in removing “the Soft Chewy Center” and moving towards a Zero Trust environment, visibility is step one. Without having an accurate, real-time and historical map of your entire infrastructure, including on-premises and both public and private clouds, it’s impossible to be sure that you aren’t experiencing gaps or blind spots. As Forrester analyst Chase Cunningham mandates in the ZTX Ecosystem Strategic Plan, “Visibility is the key in defending any valuable asset. You can’t protect the invisible. The more visibility you have into your network across your business ecosystem, the better chance you have to quickly detect the tell-tale signs of a breach in progress and to stop it.”

What Should Enterprises Be Seeing to Enable a Zero Trust Model?

Visibility itself is a broad term. Here are some practical necessities that are the building blocks of Zero Trust, and that your map should include.

  • Automated logging and monitoring: With an automated map of your whole infrastructure that updates without the need for manual support, your business has an always-accurate visualization of your data center. When something changes unexpectedly, this is immediately visible.
  • Classification of critical assets and data: Your stakeholders need to be able to read what they can see. Labeling and classification are therefore an integral element of visibility. Flexible labeling and grouping of assets streamlines visibility, and later, policy creation.
  • Relationships and dependencies: The best illustration of the relationships and dependencies of assets, applications and flows will give insight all the way down to process level.
  • Context: This starts with historical data as well as real-time, so that enterprises can establish baselines to use for smart policy creation. Your context can be enhanced with orchestration metadata from the cloud or third-party APIs, imported automatically to give more understanding to what you’re visualizing.

Next Step… Segmentation!

Identifying all resources across all environments is just step one, but it’s an essential first step for a successful approach to establishing a Zero Trust model. Without visibility into users, their devices, workloads across all environments, applications, and data itself, moving onto segmentation is like grasping in the dark.

In contrast, with visibility at the start, it’s intuitive to sit down and identify your enterprise’s most critical assets, decide on your unique access permissions and grouping strategy for resources, and to make intelligent and dynamic modifications to policy at the speed of change.

Want to read more about visibility and Zero Trust? Get our white paper about how to move toward a Zero Trust framework faster.

Read More