How to Identify Accounts and Prioritize Risk for Privileged Access Management

Privileged Access Management (PAM) is understandably a high priority for today’s enterprises. The misuse of privileged accounts can allow attackers to escalate privileges across complex IT networks, finding open paths to critical assets or sensitive data. This can seriously undermine an enterprise’s ability to remain compliant with external regulations as well as internal governance mandates.

Let’s look in more detail at deploying Privileged Access Management, and how to prioritize risk for your own business needs.

Identifying your privileged accounts and credentials

In some cases, you might have hundreds of thousands of privileged credentials in your IT ecosystem, and in an increasingly connected world, this information might exist in an attack surface that’s larger than you’ve considered before.

Your first step is visibility: ensuring that you can uncover all credentials, from passwords and SSH keys to password hashes, access keys and more, and that you can do so across your entire environment, on premises, in the cloud, and across DevOps processes.

According to CyberArk, there are seven types of accounts you need to consider, as poor hygiene or practices around any of them make your enterprise a target for APTs and other dangerous cybercrime.

  • Emergency accounts: Access to these accounts requires IT management approval, and is only granted in an emergency. Because access is handled manually, these accounts often lack any other security measures.
  • Local Administrative accounts: These accounts are shared to provide admin access to the local host or session. Whenever IT staff need to perform workstation or server maintenance, or work on network devices, mainframes and other systems, these are the accounts they will use. Password hygiene may well be poor across these accounts, as IT professionals sometimes share passwords across an organization to make access easier. This is an open door for attackers.
  • Application accounts: These privileged accounts are used by applications to access databases, run scripts, or connect to other applications. Their passwords are often embedded and stored in plain-text files, copied across multiple channels and servers.
  • Active Directory or Windows domain service accounts: Password changes for these accounts are complex, as your business will need to sync any updates across applications and infrastructure. Because of this, many businesses fail to update these passwords regularly. If this happens in a critical system such as your Active Directory, you have created a single point of failure.
  • Service accounts: These local or domain accounts are used by an application or service to interact directly with the operating system. They may even have administrative privileges, depending on their roles and requirements.
  • Domain Administrative accounts: These accounts have complete control over all domain controllers, and can access and make changes to all administrative accounts within the domain. Their access extends to all workstations and servers within the organization network, so these credentials are under constant attack from hackers, no matter the environment involved.
  • Privileged User accounts: One of the most common forms of account access granted on an enterprise domain, these accounts give users admin rights on their local desktops or across a particular system. Users might choose complex or strong passwords, but this is often the only security control in place.
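To make the application-account risk above concrete, here is a minimal Python sketch of the usual remediation: fetching a credential at runtime instead of embedding it in code or a plain-text file. The variable name and error handling are illustrative only, not tied to any particular vault product.

```python
import os

def get_db_password() -> str:
    """Fetch the database password at runtime instead of embedding it.

    Reads an environment variable for illustration; in practice the value
    would come from a managed vault, so it can be rotated centrally.
    """
    password = os.environ.get("APP_DB_PASSWORD")
    if password is None:
        # Fail loudly rather than fall back to a hardcoded default.
        raise RuntimeError("APP_DB_PASSWORD is not set")
    return password

# Simulate the environment/vault providing the secret:
os.environ["APP_DB_PASSWORD"] = "example-only-not-a-real-secret"
print(get_db_password())
```

Because nothing sensitive lives in the source file, the same code can be promoted between environments while the secret itself stays in the vault.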

Identifying the risk of each kind of account will differ from enterprise to enterprise, and depend on your own digital crown jewels and most critical assets, as well as how you store and manage data, what systems hold intellectual property or other sensitive information, and where you’ve uncovered vulnerabilities in your own unique ecosystem. It’s common to start with your highest risk accounts, and then use a phased approach to build out your PAM.

What does protecting these accounts mean in practice?

Once you’ve established the accounts and credentials you want to protect, protection should be approached in a number of ways. Credentials can and should be placed in a digital vault that uses multi-factor authentication for access. The best solutions will provide encrypted video monitoring of all privileged sessions, with alerts set up against suspicious activity and an easy playback option. In case of an audit or escalation, IT admins should be able to access granular information about each session, down to single keystrokes, escalating this to the SOC or the next level where necessary. In case of a breach, automated behavior could include suspending or terminating sessions, or automatically rotating credentials to protect from further harm.
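The automated breach response described above can be sketched in a few lines of Python. The class and method names here are hypothetical, intended only to show the shape of terminate-then-rotate logic, not any vendor’s actual API.

```python
import secrets
from dataclasses import dataclass

@dataclass
class PrivilegedSession:
    session_id: str
    account: str
    active: bool = True

class SessionGuard:
    """Terminate a suspicious session, then rotate the account's credential."""

    def __init__(self) -> None:
        self.credentials: dict = {}

    def on_suspicious_activity(self, session: PrivilegedSession) -> None:
        session.active = False  # suspend/terminate the live session first
        # Rotate to a fresh random secret so any stolen credential is useless.
        self.credentials[session.account] = secrets.token_urlsafe(24)

guard = SessionGuard()
session = PrivilegedSession("sess-42", "domain-admin")
guard.on_suspicious_activity(session)
print(session.active)  # the session is no longer active
```

The ordering matters: cutting the session limits live damage immediately, while the rotation invalidates whatever the attacker captured.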

It’s also important to think about local administrative access, even though these accounts might seem less dangerous at a glance. Protecting them is essential if you are working towards the principle of ‘least privilege’ or a Zero Trust security model. Every endpoint could be an entry point for hackers, allowing them to move laterally until they hit what they’re looking for, and many users have far more permissions and access than they need to do their job each day. Look for a solution with least-privilege server protection for both Windows and *NIX, allowing you to tightly manage permissions and gain insight into each user’s activity. This can go a long way towards removing the coarse controls and anonymity that often exist in today’s data centers. For *NIX, it also removes the risk of unmanaged SSH keys, a known weakness that attackers can exploit to log in with root access.
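As a first step toward getting unmanaged SSH keys under control on *NIX, a short script can inventory where authorized_keys files grant access. This is a sketch only; it scans a directory tree of home directories, demonstrated here against a throwaway temporary directory rather than a live /home.

```python
import pathlib
import tempfile

def find_authorized_keys(home_root: pathlib.Path) -> list:
    """Return every <user>/.ssh/authorized_keys file under home_root.

    Each file found is an SSH trust grant that should be inventoried,
    tied to an owner, and brought under management.
    """
    return sorted(home_root.glob("*/.ssh/authorized_keys"))

# Demonstrate against a throwaway tree instead of a real /home:
root = pathlib.Path(tempfile.mkdtemp())
ssh_dir = root / "alice" / ".ssh"
ssh_dir.mkdir(parents=True)
(ssh_dir / "authorized_keys").write_text("ssh-ed25519 AAAA... alice@laptop\n")

for path in find_authorized_keys(root):
    print(path.relative_to(root))
```

In a real rollout the output would feed an inventory, so that keys without a known owner can be revoked rather than silently trusted forever.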

The same mentality needs to be front and center when you’re considering third-party applications and services, many of which require access to your network. These can be hard to keep track of, so a strong monitoring solution is essential. Think about best-practice hygiene for commercial off-the-shelf apps, such as removing hard-coded credentials and managing and rotating these privileged accounts in your digital vault.

Protect from on-premises to cloud deployments

The vast majority of today’s enterprises are working in a hybrid reality, with a network that spans on-premises and bare-metal servers all the way to cloud and container systems. Any PAM solution that you deploy needs to handle both, seamlessly. Managing DevOps secrets and credentials is an important part of your strategy: ensure that your code can retrieve the secrets it needs on the fly, rather than having them hardcoded into the application. This will allow you to rotate and secure these secrets and credentials the same way that you do on premises.
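The “retrieve on the fly” point can be illustrated with a minimal Python sketch. The SecretProvider class is a hypothetical stand-in for a real secrets-manager SDK; using an environment variable keeps the example self-contained while showing why per-use retrieval picks up a rotation immediately.

```python
import os

class SecretProvider:
    """Hypothetical stand-in for a secrets-manager client.

    Fetching on every use means a rotated value is picked up immediately;
    a value baked in at deploy time would require a redeploy.
    """

    def get(self, name: str) -> str:
        return os.environ[name]  # a real client would call the vault's SDK

provider = SecretProvider()
os.environ["CI_DEPLOY_TOKEN"] = "token-v1"
print(provider.get("CI_DEPLOY_TOKEN"))       # token-v1

os.environ["CI_DEPLOY_TOKEN"] = "token-v2"   # simulate a rotation
print(provider.get("CI_DEPLOY_TOKEN"))       # token-v2, with no redeploy
```

The design choice to show here is indirection: the pipeline knows the secret’s name, never its value, so rotation is an operation on the vault alone.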

Another large area to consider is SaaS. These applications often carry wide permissions; CRM software like Salesforce, for example, is used by multiple teams. Privileged business users who access these applications are one click away from sensitive customer data, with the ability to move around a network far more freely than other stakeholders. Multi-factor authentication can help here, as can isolating access to shared IDs.

Compliance and Privileged Access Management

Many of the benefits of Privileged Access Management support compliance and internal governance strategies. Firstly, you have one centralized repository for all of your audit data, reducing costs and making reporting far easier. By enforcing privileged access automatically and monitoring it in real time, many audit requirements are met, protecting all systems that handle information processing across a heterogeneous environment and enforcing visibility and control over account usage.

In case of a breach, you have immediate insight into the incident, including where the breach occurred, when it happened, exactly what took place, and how to shore up defenses in the future. It’s easy to see how the right PAM solution can support compliance with a wide range of regulatory authorities, from SWIFT, and MAS-TRM, to SOX, GDPR and ISO 27001 certification.

Partnering with the best in the business

Guardicore has recently formed a partnership with market leader CyberArk, providing customers with a Privileged Session Management solution free of charge, ensuring that all Guardicore deployments meet the high security standards held by its customers. Joint customers will be able to leverage centralized control of all their privileged accounts and credentials, without duplication or sharing.

To download the Guardicore Privileged Session Management tool, head to the CyberArk Marketplace.

Windows Server 2008 R2 and Windows 7 are End of Life

Discover the steps to harden machines running Windows 7, Windows Server 2008 and Windows Server 2008 R2 against the inevitable unpatched vulnerability that will be disclosed for these systems.

No System Left Behind: Why Legacy Systems Should be Part of Your Zero Trust Strategy

The rise of digital transformation dictates that businesses move faster, innovate harder and adopt new technologies to remain competitive in their industries. Many times, it means implementation of systems using the latest IT innovation and methods. While the Zero Trust model of security has risen to the challenge for the latest technologies such as cloud, microservices or container systems, it’s essential to ensure that legacy infrastructure has not been forgotten.

Identifying the legacy systems you rely on

Moving to deploy a Zero Trust model is often triggered by digital transformation, understanding that the attack surface is increasing beyond what traditional security controls can maintain and secure. While it used to be sufficient to look at traffic as it entered and exited your environment (North-South), today’s attackers can be assumed to reside inside your network already, so control over internal, East-West traffic is essential. Practically speaking, the Zero Trust model was created for the most modern and dynamic environments, where organizations come up against phishing scams, connections with IoT devices, partnerships with third-party networks and more on a daily basis. Built to secure a digitally transformed network, it’s easy for enterprises to forget about legacy systems and let business-critical applications fall by the wayside. However, legacy systems that are unpatched (sometimes no patch even exists for a current vulnerability on an old system) or decades old are exactly where security gaps and flaws occur, making it far easier for attackers to take that first step into your data center.

This is where visibility for Zero Trust is so important. Starting with an accurate, real-time map of your whole infrastructure will uncover the legacy systems that you need to include in your Zero Trust journey, some of which you might not even have been aware existed in the first place. In some cases, this could spur you on to modernize the system, such as updating a machine that is using an old operating system. In other cases, it’s more complex to make changes, such as legacy AIX machines that process financial transactions, or Oracle DBs that run on Solaris servers. These systems can be business-critical, and it can be years before they can be updated or modernized, if ever.

Identifying the legacy technology that you rely on is step one. The more difficult these are to update, the more likely they are to be essential to how your business runs. In which case, these are exactly the areas you need to be sure to secure in today’s high-risk cyber landscape.

Including legacy in your Zero Trust model

Make sure that you have coverage for your legacy servers with micro-segmentation policy enforcement modules. The best micro-segmentation technology can then use a flexible policy engine to help you create policy that includes legacy systems in your Zero Trust model. As a starting point, you should be able to use your map to ascertain the servers and endpoints that are running legacy applications, and how these workloads communicate and interact with other applications and business environments. Ideally, this should be granular enough to look at the process level as well as ports and IPs. This insight can help you to recognize how an attacker could use lateral movement to hurt your business the most, or access your most sensitive data and applications.
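As an illustration of the process-level granularity described above, a segmentation rule can be thought of as a match on labels, processes, ports and protocols together, rather than on IPs alone. The field names below are invented for this sketch and are not Centra’s actual policy schema.

```python
# Field names below are illustrative, not an actual vendor policy schema.
rule = {
    "action": "allow",
    "source": {"label": "env:legacy", "process": "sqlservr.exe"},
    "destination": {"label": "app:billing", "port": 1433, "protocol": "TCP"},
}

def matches(rule: dict, flow: dict) -> bool:
    """A flow matches only if labels, process, port and protocol all agree."""
    src, dst = rule["source"], rule["destination"]
    return (flow["src_label"] == src["label"]
            and flow["src_process"] == src["process"]
            and flow["dst_label"] == dst["label"]
            and flow["dst_port"] == dst["port"]
            and flow["protocol"] == dst["protocol"])

flow = {"src_label": "env:legacy", "src_process": "sqlservr.exe",
        "dst_label": "app:billing", "dst_port": 1433, "protocol": "TCP"}
print(matches(rule, flow))   # True: expected process, port and protocol

# The same flow from an unexpected process would not match the allow rule:
rogue = dict(flow, src_process="powershell.exe")
print(matches(rule, rogue))  # False
```

Matching on the process, not just the port, is what stops an attacker who reuses a legitimately open port from an unexpected binary.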

With this information in real-time, you can avoid the challenges of traditional security solutions for legacy systems in the same way that you would for the rest of your data center. After all, if you’ve acknowledged the limitations of VLANs and other insufficient security controls for your modernized systems, why would you rely on them for legacy infrastructure that is even more business-critical, or tough to secure? Network segmentation via VLANs often results in all legacy infrastructure being placed into one segment that can be easily accessed by a single well-placed attack, and firewall rules are tough to maintain between legacy VLANs and more dynamic parts of your network.

In contrast to this traditional method, a micro-segmentation vendor that is built for a heterogeneous environment takes legacy systems into consideration from the start. Rather than dropping support for legacy operating systems, hardware, servers and applications, intelligent micro-segmentation technology provides equal visibility and control across the whole stack.

Zero Trust means zero blind spots

Your legacy systems might be quietly running in the background, but the noise of the fallout in case of a breach could silence your business for good. Don’t let your pursuit of modernization allow you to forget to include legacy infrastructure in your Zero Trust model, where sensitive data and critical applications reside, and where you might well need it the most.

Want to read more about how Guardicore micro-segmentation can take you closer to adopting a Zero Trust framework? Download our white paper on getting there faster.


Guardicore Centra Integration now available on CyberArk Marketplace

We had our first integration with CyberArk in 2016. One of our very early adopters, a CISO for a large telecommunications company, realized that Guardicore Centra was becoming a critical part of his security infrastructure and decided to integrate the two products.

The CISO understood that one of the biggest security threats for his organization was the misuse of privileged accounts with elevated permissions on IT systems. He decided to use CyberArk with Guardicore in order to manage privileged accounts and protect his critical assets. Guardicore secured access to critical assets via micro-segmentation and detection capabilities, and CyberArk managed the privileged access on these systems.

Since then, we have added additional features such as identity-based policies to provide a stronger overall solution, and many other customers have benefited from these integrated capabilities.

I am happy to update you that this integration of Guardicore Centra security platform and the CyberArk Privileged Access Security Solution has recently been made available on the CyberArk Marketplace, helping our joint customers accelerate their ability to meet compliance requirements and reduce security risk without introducing additional operational complexity.

By providing the Guardicore plug-in via the CyberArk Marketplace, customers can now more easily evolve their privileged access management programs. Our integration enables CyberArk customers to protect their hybrid cloud and data center while maintaining strong privileged access controls.

As a CyberArk C3 Alliance member, Guardicore will continue to work alongside CyberArk to deliver value to shared customers through an integrated plug-in, as part of their security stack.

Privileged access is pervasive and provides attackers the “keys to the IT kingdom.”

It is widely recognized that nearly all damaging cyber-attacks involve privileged account compromise. Attackers are then able to exploit this legitimate privileged access to establish a foothold and make lateral moves across enterprise IT infrastructure. Additionally, without least privilege, internal users might abuse their access rights. By integrating the capabilities of Guardicore Centra with the CyberArk solution, customers can be better positioned to detect and stop lateral movement using both software-defined segmentation and privileged access management.

Thinking about zero trust implementation? CyberArk combines with Guardicore to take you that much closer to the adoption of the zero trust model of security.

Want to read more about how Guardicore micro-segmentation can take you closer to adopting a zero trust framework? Download our white paper on getting there faster.


Guardicore vs. VLANs. No Contest. All That’s Left is Deciding What to Do with Your Free Time

A fast-paced business world deserves security solutions that can keep up. Speed isn’t everything, but reducing complexity and time when deploying a new strategy can be the difference between success and failure. Let’s look at the process of segmenting just one business critical application via VLANs, and then compare how it works with Guardicore Centra micro-segmentation. Then you can decide how to use all that spare time wisely.

VLANs – How Long Does it Take?

If you decide to go down the VLAN route, you will need to spend around 4-6 months preparing your network and application changes. On the networking side, teams will configure switches, connect servers, and generally get the network ready for the new VLANs. On the application side, teams will build a migration strategy, starting with discovering all the relevant infrastructure, making changes to application code where necessary and preparing any pre-existing dependent applications for the change ahead of time.

After this 6-month period, you can start to build policy. It can take anywhere from 2-4 months to submit firewall change requests and have fixes and changes signed off and approved by the firewall governance teams. Meanwhile, your critical applications remain vulnerable.

Once you’re ready to move on to policy enforcement, you’ll need to spend a weekend migrating the application to the new VLAN. This includes manually reconfiguring IP addresses, applications and integration points. Don’t forget to warn your users, as there will be some application downtime that you can’t avoid. Altogether, you’ve spent up to 10 months performing this one segmentation task.

VLANs vs Guardicore

Guardicore Centra – How Long Does it Take?

Now let’s take a look at how it works when you choose smart segmentation for hybrid cloud and modern data center security with Guardicore. The preparation time is just a few days, as opposed to half a year, while Guardicore agents are deployed onto your application. This installation is simple and painless, and works with any platform. Labeling is also done during this time, integrating with your organizational inventory such as CMDB or cloud tags. Guardicore’s Reveal platform automatically discovers all traffic and flows, giving you an accurate map of your IT ecosystem, in real time, and continues to give you historical views as you proceed as well.

As policy creation is automatic, your policy suggestions can be tested immediately, and then run in ‘alert mode’ for two weeks while you tweak your policy to make sure it’s optimized to its full potential. When you’re ready to go – pick a day and switch from alert to enforce mode, with no impact on performance, and no downtime.

You’ve Just Saved 9 Months – Let’s Use It!

With security handled, and 9 months of time to kill, here are just some of the things you could achieve in your organization.

Start a Language Lunch Club


90% of employees say that taking a regular lunch break helps them to feel more productive in the afternoon. Despite this, most of us often grab a quick sandwich, or don’t even manage to get up from our desks. Why not use some of your newfound company “free time” to encourage teams to eat lunch together, socializing and enjoying some much needed down-time? This time ‘off’ can give colleagues a chance to get to know one another, forming new friendships, social bonds and levels of trust between your staff. If you want to try to combine this with learning a new skill and further enriching your staff (expanding their minds and improving memory and brain function), you could start a language club where your team members can learn basic skills that can support them in reaching global customers. With 180 hours to kill – that’s a whole lot of lazy, or super-productive, lunches!

Play with Lego!


Many organizations struggle with how to make team meetings more productive, especially when everyone is always so short on time. If you’re known for sharing memes like “I survived another meeting that should have been an email,” then isn’t it time you did something about it?

Lego Serious Play is one great methodology that can get staff thinking and working outside of the box. As 80% of our brain cells are connected to our hands, building and creating can unlock hidden thoughts and ideas. It’s also a fantastic way to get input from quieter team members, as it works for both introverts and extroverts, and uses visual, kinaesthetic and auditory communication. If you have some free time left over, why not try beating the world record for the tallest Lego tower, built in Tel Aviv in 2017. You’ll have to make it to 36 meters to stand a chance though!

Put more Time into Health and Wellness


With more time in the day, there’s no need to take shortcuts that adversely affect your health. Tell your employees to skip the elevator and take the stairs, or to come in slightly later and cycle instead of jumping on available public transport. If your staff take the stairs twice a day for the whole nine months of saved time – that’s 12,600 calories, or the equivalent of 50 pieces of cheesecake!

Research has shown that employees who have work wellness programs report taking 56% fewer sick days than those without. Use some of the free time you’re saving to set up 8:30am or 5:00pm wellness classes, such as yoga, mindfulness, aerobics or Zumba and give your employees more reasons to love coming to work! Activity also encourages greater focus and productivity while on the job, so consider it a triumph to flex the muscles of your body and your mind.

Do More with Your Day Job


Spend some time getting to know other departments in the company, sitting down with Procurement to understand recent contracts, or heading over to R&D and having that conversation you’ve been meaning to have about Intellectual Property. Nine months makes 1440 hour-long coffee meetings! Better yet, why not plan a stint to an at least semi-exotic location to visit your offshore development teams on site? Allow yourself a bit of time out of the office while getting some all-important face-time with other members of your team.

You could also use some of your extra time to visit some customers or other stakeholders in the supply chain, identifying the risks that they pose to your organization and the mitigations you could put in place. Interested in some more informal professional development? It’s the perfect time to start training to develop a new skill, mentor some junior employees, or think about your own career enrichment. After all, you’ve just saved nine months!

Encourage Innovation


Most people have heard of Google’s 20% rule, where employees are encouraged to work on side projects, new hustles, or research for 20% of their working day. But for many companies this is a huge privilege – only possible if you have enough time in the day to get all the urgent work off your desk, which we know is never the case. But now, with more time to play with, literally, you can implement some enforced innovation time. With nine months of extra time to use up, it will take four and a half years of an hour a day before your staff have used up the surplus.

Now It’s your Turn to Innovate: What Will Your Teams Do With Their Free Time?

Why not draw up a bucket list of what you could do with an extra nine months, and how it could benefit your company?

Take a look at the seven steps to operationalize micro-segmentation so you can see just how simple it would be to get started.




Guardicore Extends Support to AWS Outposts, Providing Holistic Visibility and Control Across the Hybrid Cloud

Like the real clouds that can be seen in the Earth’s atmosphere, the IT clouds are constantly changing in the DCsphere. Last year, AWS announced plans to expand the public cloud into on-premises data centers and introduced AWS Outposts, which will allow customers to run AWS infrastructure on-premises or in co-location facilities, creating a new type of hybrid cloud. AWS customers can expect to have a consistent experience, whether they are managing infrastructure on the public cloud or using Outposts.


Today, I am excited to share the news that we will support AWS Outposts just like any other part of the hybrid cloud. Together with AWS and their hardware partners, we look forward to expanding the Guardicore ecosystem to additional areas of the ever-expanding cloud, securing customers wherever they might be.

Highlighting the Benefits of AWS Outposts

Using AWS Outposts, organizations can run services such as EC2, EBS, and EKS on-premises, as well as database services like Amazon RDS or EMR analytics. While running AWS services locally, you will still be able to connect to services from the local AWS Region and use the same tools and technology to manage your applications. With this announcement from Guardicore, Centra security on-premises can also remain the same as you’ve come to expect with AWS in the cloud.

The value of this technology for data storage and management is powerful. For organizations that are bound by regulations for storing data off the cloud, or in countries with data sovereignty requirements or no AWS Region, Outposts is a valuable alternative that makes data processing and storage seamless.

Healthcare is a strong example of a vertical that can benefit from Outposts. Organizations can run machine learning and analytics models against their health management platforms, even where low-latency processing requirements dictate that they remain on-premises. Because the data is stored locally, it is quick to retrieve. Financial services is another use case that can leverage Outposts to deliver banking or processing requirements within the confines of local data requirements.

Making it Happen

To provide the widest possible coverage, Guardicore will support the two variants of AWS Outposts: both VMware Cloud on AWS Outposts with our existing VMware orchestration integration, as well as the AWS native variant of AWS Outposts running on premises.

Read more about our ever-evolving capabilities for AWS security as a trusted AWS Technology Partner, and stay tuned for more details on this exciting news and other collaborations.

Want to know more about how Guardicore, a trusted AWS technology partner, helps you nail hybrid cloud security by partnering with AWS? Download our white paper on the shared security model.


Why You Should Segment Users on AWS WorkSpaces and How it Should be Done

I recently came across a Guardicore financial services customer with a very interesting use case: they were looking to protect their Virtual Desktop Infrastructure (VDI) environment in the cloud.

The customer’s setup is a hybrid cloud: it has legacy systems that include bare-metal servers, Solaris, and some old technologies on-premises. It also utilizes many virtual environments such as VMware ESX, Nutanix, and OpenStack.

Alongside this infrastructure, the customer has started using AWS and Azure and plans to use containers on these platforms, but has not yet committed to anything specific.

Learn More About User Identity Access Management

One interesting element was how the customer was migrating its on-premises Citrix VDI environment to Amazon WorkSpaces. The customer was happy using AWS WorkSpaces and had therefore decided to migrate to them in full production. AWS WorkSpaces were especially useful for our customer since the majority of its users work remotely, and it was much easier to have those users working with an AWS WorkSpace than relying on the on-premises Citrix environment.

Working with an AWS WorkSpace – a Use Case

In Forrester’s Now Tech: Cloud Desktops, Q4 2019 report, cloud desktops and their various offerings are discussed. Forrester states that “you can use cloud desktops to improve employee experience (eX), enhance workforce continuity, and scale business operations rapidly.” This is exactly what our customer was striving to achieve with AWS WorkSpaces.

What is an AWS WorkSpace, Anyway?

AWS cloud desktops are named “Amazon WorkSpaces”: a Desktop-as-a-Service (DaaS) solution that runs either Windows or Linux desktops. AWS provides this pay-as-you-launch service all around the world. According to AWS, “Amazon WorkSpaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.”


What Was the AWS WorkSpaces Infrastructure Missing?

Our customer was using AWS WorkSpaces and scaling their utilization rapidly, which created a need to add a security layer to these cloud desktops. In AWS, when users access WorkSpaces they are automatically assigned a WorkSpace and a dynamic IP. Controlling this access is challenging with traditional, IP-based network segmentation solutions. Thus, our customer was looking for a solution with the following features:

    • Visibility:
      • First and foremost within the newly adopted cloud platform
      • Secondly, not just an understanding of traffic between legacy systems on-premises and in the cloud individually, but visibility into inter-platform communications, too.
    • Special attention for Amazon WorkSpaces:
      • User-level protection: controlling which users from AWS WorkSpaces should and could interact with the various applications the customer owned, on-premises or in the cloud.
      • Single policy across the hybrid cloud: what was once implemented on-premises alone now needed to be implemented in the cloud, and not only in the cloud but from the cloud across to on-premises applications. The customer was looking for simplicity: a single tool to control all policies across any environment.

Tackling User Segmentation with Guardicore Centra

Our customer evaluated several solutions for visibility, segmentation, and user identity management. The customer eventually chose Guardicore Centra for its ability to deliver all of the above from a single pane of glass, and to do so swiftly and simply.

Guardicore was able to provide visibility of all workloads, on premises or in the cloud, across virtual, bare-metal, and cloud environments, including all assets, giving our customer the governance they needed over all traffic and flows, including between environments.

On top of visibility, Centra gave the customer an unprecedented amount of control. Guardicore policies were set to control and enforce allowed traffic, adding a further layer of user identity policies to control which users from AWS WorkSpaces could talk to which on-premises applications. As mentioned previously, upon access to AWS WorkSpaces, users are automatically assigned a WorkSpace with a dynamic IP. Traditional, IP-based tools are therefore inadequate and do not provide the flexibility needed to control these users’ access. In contrast, Guardicore Centra enables creating policies based on the user’s identity to the data center and applications, regardless of IP or WorkSpace.
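The difference from IP-based control can be sketched as follows: the policy key is the logged-in user plus the target application, so a WorkSpace’s dynamic IP never enters the decision. The names and the policy shape below are illustrative only, not Centra’s actual model.

```python
# Hypothetical identity-keyed policy: the decision depends on who is logged
# in and what they are reaching for, never on the WorkSpace's dynamic IP.
POLICY = {
    ("alice@corp.example", "app:trading"): "allow",
}

def decide(user: str, application: str) -> str:
    """Default-deny: anything without an explicit allow rule is blocked."""
    return POLICY.get((user, application), "block")

# Alice gets the same decision from any WorkSpace or IP she lands on:
print(decide("alice@corp.example", "app:trading"))  # allow
print(decide("bob@corp.example", "app:trading"))    # block
```

Because the rule survives any reassignment of WorkSpaces or IPs, it needs no maintenance as the desktop fleet scales up and down.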


Work Safely on VDI with Centra

Guardicore Centra provides distributed, software-based segmentation with user identity access management, enabling additional control of the network between any workloads.

Centra enables creating policy rules based on the identity of the logged-in user. Identities are pulled from the organizational Active Directory integrated with Centra. Centra requires no network changes and no downtime or reboot of systems. Policies are created seamlessly and take effect in real time, controlling new and active sessions alike.
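To illustrate the concept, here is a minimal sketch of identity-based rule matching. This is purely illustrative and is not Centra's actual API or schema: the rule format, group names, and application names are all invented. The point is that the decision keys on the logged-in user, so a WorkSpace's dynamic IP never enters the evaluation.

```python
# Hypothetical illustration of identity-based policy matching (NOT the
# actual Guardicore Centra API). Rules key on the logged-in user's
# directory groups, so a WorkSpace's dynamic IP is irrelevant.

RULES = [
    # (user_group, destination_app, action) -- all names invented
    ("finance-users", "billing-app", "allow"),
    ("contractors",   "billing-app", "block"),
]

def evaluate(user_groups, destination_app):
    """Return the action of the first matching rule; default-deny otherwise."""
    for group, app, action in RULES:
        if group in user_groups and app == destination_app:
            return action
    return "block"  # default-deny

# The source IP never appears anywhere in the decision:
print(evaluate({"finance-users"}, "billing-app"))  # allow
print(evaluate({"contractors"}, "billing-app"))    # block
```

Because identity (not IP) is the match criterion, the same rule keeps working no matter which WorkSpace, and which dynamic IP, a user lands on.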

This use case is just one example of how Guardicore Centra simplifies segmentation and gives customers fine-grained visibility and control. Centra allows an enterprise to control users' access anywhere, setting policy that applies even when multiple users are logged in to the same system at the same time, and managing access to the network for third parties, administrators, and network users.

Learn More About User Identity Access Management

Environment Segmentation is your Company’s First Quick Micro-Segmentation Win

We often tell our customers that implementing micro-segmentation technology should be a phased project. Starting with a thorough map of your entire IT ecosystem, your company should begin with the 'low hanging fruit': the easy wins that can show quick time to value and have the least impact on other parts of the business. From here, you'll be in a strong vantage point to get buy-in for more complex or granular segmentation projects, perhaps even working towards a zero-trust model for your security.

One of the first tasks that many customers take on is separating environments from one another. Let’s see how it works.

Understanding the Context of your Data Center

Whether your workloads are on-premises, in the cloud, or in a hybrid mix of the two, your data center will be split into environments. These include:

  • Development: Where your developers create code, try out experiments, fix bugs, and use trial and error to create new features and tools.
  • Staging: This is where testing is done, either manually or through automation. It is resource-heavy and kept as similar as possible to your production environment; this is where you would do your final checks.
  • Production: Your live environment is your production environment. If any errors or bugs make it this far, they could be discovered by your users. If this happens in this environment, it could have the greatest impact on your business through your most critical applications. While all environments are vulnerable, and some may even be more easily breached, penetration and movement in this environment can have the most impact and cause the most damage.

Of course, every organization is different. In some cases, you might have environments such as QA, Local, Feature, or Release, to name just a few. Your segmentation engine should be flexible enough to meet any business structure, suiting your organization rather than the other way around.

It’s important to note that these environments are not entirely separate: they share the same infrastructure and have no physical separation. This means there will be traffic that needs to be controlled or blocked between the different environments to ensure best-practice security. At the same time, for business to run as usual, specific communication flows need to be allowed despite the environment separations. Mapping those flows, analyzing them, and white-listing them is often not an easy process in itself, adding another level of complexity to traditional segmentation projects carried out without the right solution.
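The flow-mapping step described above can be sketched in a few lines. This is a hypothetical illustration: the flow-log format is invented, and a real deployment would feed in data from whatever visibility tool you use. The idea is simply to aggregate observed cross-environment traffic into candidate allow rules, with everything else defaulting to block.

```python
# Illustrative sketch: aggregate observed cross-environment flows into
# candidate allowlist rules. The flow-log tuples are invented for
# illustration; real data would come from your visibility tooling.

from collections import Counter

observed_flows = [
    # (src_env, dst_env, dst_port)
    ("Development", "Production", 5432),
    ("Development", "Production", 5432),
    ("Staging",     "Production", 443),
]

def allowlist_candidates(flows):
    """Count flows that cross an environment boundary. Frequently seen
    flows are candidates for explicit allow rules; everything else
    stays blocked by the default environment-separation policy."""
    counts = Counter(f for f in flows if f[0] != f[1])
    return counts.most_common()

for (src, dst, port), n in allowlist_candidates(observed_flows):
    print(f"{src} -> {dst}:{port} seen {n} times")
```

In practice this analysis step is what separates a safe environment-segmentation rollout from one that breaks legitimate business traffic.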

Use cases for environment segmentation include keeping business-critical servers away from customer access, and isolating the different stages of the product life cycle. This vital segmentation project also allows businesses to keep up with compliance regulations and prevents attackers from exploiting security vulnerabilities to access critical data and assets.

Traditional Methods of Environment Segmentation

Historically, enterprises would separate their environments using firewalls and VLANs, often physically creating isolation between each area of the business. They may have relied on cloud platforms for development, and then used on-premises data centers for production for example.

Today, some organizations adapt VLANs to create separations inside a data center. This relies on multiple teams spending time configuring network switches, connecting servers, and making application and code changes where necessary. Despite this overhead, in static environments hosted on the same infrastructure, without dynamic changes or the need for large scale, VLANs get the job done.

However, the rise in popularity of cloud and containers, as well as fast-paced DevOps practices, has made quick implementation and flexibility more important than ever before. It can take months to build and enforce a new VLAN, which can become a huge bottleneck for the entire business and even create unavoidable downtime for your users. Manually maintaining complex rules and changes invites errors, while out-of-date rules leave dangerous gaps in security that sophisticated attackers can exploit. VLANs do not extend to the cloud, which means your business ends up trying to reconcile multiple security solutions that were not built to work in tandem. Often this results in compromises that put you at risk.

A Software-Based Segmentation Solution Helps Avoid Downtime, Wasted Resources, and Bottlenecks

A policy that follows the workload using software bypasses these problems. Using micro-segmentation technology, you can isolate low-value environments such as Development from Production, so that even in case of a breach, attackers cannot make unauthorized movement to critical assets or data. With intelligent micro-segmentation, this one policy will be airtight throughout your environment. This includes on-premises, in the public or private cloud, or in a hybrid data center.

The other difference is the effort in terms of implementation. Unlike with VLANs, with software-based segmentation, there is no complex coordination among teams, no downtime, and no bottlenecks while application and networking teams configure switches, servers and code. Using Guardicore Centra as an example, it takes just days to deploy our agents, and your customers won’t experience a moment of downtime.

Achieve Environment Segmentation without Infrastructure Changes

Environment segmentation is a necessity in today’s data centers: to achieve compliance, reduce the attack surface, and maintain secure separation between the different life stages of the business. However, this project doesn’t need to be manually intensive. When done right, it shouldn’t involve multiple teams, result in organizational downtime or even require infrastructure changes. In contrast, it can be the first stage of a phased micro-segmentation journey, making it easier to embrace new technology on the cloud, and implement a strong posture of risk-reduction across your organization.

Want to learn more about what’s next after environment segmentation as your first micro-segmentation project? Read up on securing modern data centers and clouds.


Are you Prepared for a Rise in Nation State Attacks and Ransomware in 2020?

Once you know what you’re up against, keeping your business safe might be easier than you think. In this blog, we’re going to look at two kinds of cyber threats: nation state cyber attacks and ransomware. Neither is a new concern, but both are increasing in sophistication and prevalence. Many businesses feel powerless to protect against these events, and yet a list of relatively simple steps could keep you protected in the event of an attack.

Staying Vigilant Against Nation State Actors

According to the 2019 Verizon Data Breach Investigations Report, nation state attacks increased from 12 percent of breaches in 2017 to 23 percent in 2018.

One of the most important things to recognize about nation state attacks is that it is getting harder to ascertain where they come from. Attackers learn to cleverly obfuscate their attacks by mimicking other state actors' behavior, tools, and coding, and by working through layers of hijacked, compromised networks. In some cases, they work through proxy actors. This makes attribution very difficult. One good example is the 2018 Winter Olympics in Pyeongchang, where attackers launched the Olympic Destroyer malware. It took down the Olympic network's wireless access points, servers, ticketing system, and even reporters' internet access for 12 hours, immediately prior to the start of the games. At first, metadata in the malware seemed to attribute the attack to North Korea, but this was actually the result of deliberate manipulation of the code. Much later, researchers realized the attack was of Russian origin.

These ‘false flag’ attacks have a number of benefits for the perpetrators. Firstly, the real source of the threat may never be discovered. Secondly, even if the correct attribution is eventually found, the news cycle has died down, the exposure is less, and many people may not believe the new evidence.

This has contributed to nation state actors feeling confident enough to launch larger and more aggressive attacks, such as Russian attacks on Ukrainian power grids and communications, or attacks attributed to the Iranian group APT33, which reportedly took down more than 30,000 Saudi oil production laptops and servers.

Ransomware often Attacks the Vulnerable, including Local Government and Hospitals

State-sponsored attacks have the clout to do damage where it hurts the most, as seen in the two largest ransomware attacks ever experienced, WannaCry and NotPetya. These were built on EternalBlue, an exploit allegedly stolen from the US NSA, and, in NotPetya's case, also on Mimikatz, a credential-stealing tool written by a French security researcher.

This strength, combined with the tight budgets and flat networks of local governments and healthcare systems, is a recipe for catastrophe. Hospitals in particular are known for having flat networks and medical devices based on legacy and end-of-life operating systems. According to some estimates, hospitals are the targets of up to 70% of all ransomware incidents. The sensitive nature of PII and health records, and the direct impact on safety and human life, make the healthcare industry a lucrative target for hackers looking to get their ransom paid by attacking national infrastructure.

As attackers become increasingly brazen and go after organizations that are poorly placed to stand up to the threat, it's more important than ever that national infrastructure thinks about security and takes steps to close these glaring gaps.

Shoring Up Your Defenses is Easier Than You Think

The party line often seems to be that attackers are getting smarter and more insidious, and data centers are too complex to handle this threat. It’s true that today’s networks are more dynamic and interconnected, and that new attack vectors and methods to hide these risks are cropping up all the time. However, what businesses miss, is the handful of very achievable and even simple steps that can help to limit the impact of an attack, and perhaps even prevent the damage occurring in the first place.

Here’s what enterprises can do:

  • Create an Incident Response Plan: Make sure that anyone can understand what to do in case of an incident, not just security professionals. Think about the average person on your executive board, or even your end users. You need to assume that a breach or a ransomware attack will happen, you just don’t know when. With this mindset, you’ll be more likely to create a thorough plan for incident response, including drills and practice runs.
  • Protect your Credentials: This starts with utilizing strong passwords and two-factor authentication, improving the posture around credentials in general. On top of this, the days of administrative rights are over. Every user should have only the access they need, and no further. This stops bad actors from escalating privileges and moving laterally within your data center, taking control of your devices.
  • Think Smart on Security Hygiene: Exploits based on EternalBlue (which targets the Microsoft SMBv1 vulnerability) were able to cause damage even though Microsoft had released a patch, MS17-010, in March 2017. Software vulnerabilities can be mitigated through patching, vulnerability testing, and certification.
  • Software-Defined Segmentation: If we continue with the mindset that an attack will occur, it's important to be set up to limit the blast radius of a breach. Software-defined segmentation is the smartest way to do this. Without any need for infrastructure changes, you can isolate and protect your critical applications. This also protects legacy or end-of-life systems that are business critical but cannot be secured with modern solutions, a common problem in the healthcare industry. Unlike VLANs and cloud security groups, it requires no physical infrastructure changes and takes hours, not months, to implement.

Following this Advice for Critical Infrastructure

This advice is a smart starting point for national infrastructure as well as enterprises, but it needs more planning and forethought. When it comes to critical infrastructure, your visibility is essential, especially as you are likely to have multiple platforms and geographies. The last thing you want is to try to make one cohesive picture out of multiple platform-specific disparate solutions.

It’s also important to think about modern day threat vectors. Today, attacks can come through IP connected IoT devices or networks, and so your teams need to be able to detect non-traditional server compute nodes.

Incident response planning is much harder on a governmental or national level, and therefore needs to be taken up a notch in preparation. You may well need local, state, and national participation and buy-in for your drills, including law enforcement and emergency relief in case of panic or disruption. How are you going to communicate and share information on both a local and international scale, and who will have responsibility for what areas of your incident response plan?

Learning from the 2018 Olympics

Attacks against local government, critical infrastructure and national systems such as healthcare are inevitable in today’s threat landscape. The defenses in place, and the immediate response capabilities will be the difference between disaster and quick mitigation.

The 2018 Olympics can serve as proof. Despite Russia's best attempts, the attack was thwarted within 12 hours. A strong incident response plan was put into place: the malware was found, and signatures and remediation scripts were developed within one hour. 4G access points had been put in place to provide networking capabilities, and the machines at the venue were reimaged from backups.

We can only hope that Qatar is already rehearsing as strong an incident response plan for its 2022 World Cup, especially with radical 'semi-state actors' in the region, such as the Cyber Caliphate Army and the Syrian Electronic Army, who could act as proxies for a devastating state actor attack.

We Can Be Just as Skilled as the Attackers

The attitude that ‘there’s nothing we can do’ to protect against the growth in nation state attacks and ransomware threats is not just unhelpful, it’s also untrue. We have strong security tools and procedures at our disposal, we just need to make sure that we put these into place. These steps are not complicated, and they don’t take years or even months to implement. Staying ahead of the attackers is a simple matter of taking these steps seriously, and using our vigilance to limit the impact of an attack when it happens.

Want to understand more about how software defined segmentation can make a real difference in the event of a cyber attack? Check out this webinar.

A Case Study for Security and Flexibility in Multi-cloud Environments

Most organizations today opt for a multi-cloud setup when migrating to the cloud. In fact, most enterprise adopters of public cloud services use multiple providers; this is known as multicloud computing, a subset of the broader term hybrid cloud computing. In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers, and Gartner comments that “most organizations adopt a multicloud strategy out of a desire to avoid vendor lock-in or to take advantage of best-of-breed solutions”.

When considering segmentation solutions for the cloud, avoiding vendor lock-in is equally important, especially considering security concerns.

Let’s consider the following example, based on an experiment performed by one of our customers. As discussed in the previous posts in this series, the customer created a simulation of multiple applications running in Azure and AWS. For the specific setup in Azure, see the first and second posts in this series.

Understanding the Experiment


Phase 1 – Simulate an application migration between cloud providers:

The customer set up various applications in Azure, one of which was the CMS application. Network security groups (NSGs) and application security groups (ASGs) were set up for CMS, using a combination of allow and deny rules.

The customer then attempted to migrate CMS from Azure to AWS. After the relevant application components were set up in AWS, the customer attempted to migrate the policies from Azure Security Groups to AWS Security Groups and network access control lists (ACLs). For the policies to migrate with the application, the deny rules in Azure Security Groups had to be translated either into allow rules covering all other traffic in AWS Security Groups, or into network-layer deny rules in AWS ACLs.

Important differences between AWS security groups and ACLs:

  1. Security groups – applied at the EC2 instance level and tied to an asset, not an IP. They are stateful and support only allow (whitelist) rules; there is no way to express a deny rule. Traffic must be permitted by both the security group and the subnet's ACL to reach an instance.
  2. ACLs – applied at the VPC subnet level and therefore tied directly to IPs. They support both allow and deny rules, but because they match on static IP addresses and subnets, they cannot block by application context. They are also stateless, which can complicate meeting compliance requirements.

Given these differences between security groups and ACLs, migrating the CMS application from Azure to AWS along with its policies required an employee to evaluate each Azure rule and translate it into the relevant rules in AWS Security Groups and ACLs. This unexpectedly set back the migration tests and simulation.
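A rough sketch shows why this translation is manual and lossy. The rule format below is invented for illustration (real NSG and Security Group rules carry more fields): Azure NSG rules mix allow and deny, but AWS Security Groups are allow-only, so every deny rule must either drop down to an IP-based network ACL entry or be re-expressed as the complement set of allow rules.

```python
# Illustrative sketch (rule format invented): splitting Azure-style
# mixed allow/deny rules into the two AWS constructs. Allow rules map
# directly to Security Group rules; deny rules have no Security Group
# equivalent and must fall back to IP-based network ACL entries.

azure_rules = [
    {"action": "allow", "src": "10.0.1.0/24", "port": 443},
    {"action": "deny",  "src": "10.0.9.0/24", "port": 443},
]

def translate(rules):
    sg_allows, acl_denies = [], []
    for r in rules:
        if r["action"] == "allow":
            sg_allows.append(r)   # direct Security Group equivalent
        else:
            acl_denies.append(r)  # only expressible as an IP-based ACL entry
    return sg_allows, acl_denies

sg, acl = translate(azure_rules)
print(len(sg), "SG allow rules;", len(acl), "ACL deny entries")
```

Even this toy split loses information: the ACL entry is stateless and IP-scoped, so the original deny rule's asset-level context does not survive the translation, which is exactly the gap the customer hit.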

This is just one example. Each major public cloud provider offers its own tools for policy management. Working with multiple cloud-native tools requires a lot of time and resources, and results in a less secure policy and added inflexibility. The more hybrid your environment and the more you depend on native tools, the more tools you will end up using. Each tool requires an expert who knows how to use it and can overcome its limitations, so you will need security experts for each cloud provider you use, as each provider has a completely different solution with its own limitations. One limitation that all cloud providers' native segmentation tools share is that cloud-based security groups only provide Layer 4 policy control, so additional tools are required to secure your application layer.

Guardicore Provides a Single Pane of Glass for Segmentation Rules

When using Guardicore, each rule is applied on all workloads: virtual machines, public clouds (AWS, Azure, GCP…), bare metal, and container systems. Rules follow workloads and applications when migrating to or between clouds, or from on-premises to the cloud. Security teams can work with a single tool instead of multiple solutions to build a single, secure policy, which saves time and resources and ensures consistency across a heterogeneous environment.
Guardicore therefore enables migrating workloads without security concerns: policies migrate with the workloads wherever they go. The only thing you need to take into account when deciding where to migrate a workload is the cloud provider's offering.

Our customer used Guardicore to create the CMS application policies, adding an extra layer of security with Layer 7 rules at the process level, enhancing the Layer 4 controls from the native cloud provider. When migrating CMS from Azure to AWS, policies were no longer a concern: Guardicore Centra policies follow the application wherever it goes. Because policies are decoupled from the underlying infrastructure and are created based on labels, the policies in Guardicore followed the workloads from Azure to AWS with no changes necessary.

Phase 2 – Create policies for cross-cloud application dependencies

The customer's experiment setup in AWS included an Accounting application in the London, UK region that periodically needed to access data from the Billing application databases. The Billing application was set up in Azure.

The Accounting application had two instances, one in the production environment and another in the development environment. The goal was for only the production instance of the Accounting application to have access to the Billing application.

In a recent Gartner analysis Infrastructure Monitoring With the Native Tools of AWS, Google Cloud Platform and Microsoft Azure, Gartner mentions that “Providers’ native tools do not offer full support for other providers’ clouds, which can limit their usability in multicloud environments and drive the need for third-party solutions.” One such limitation was encountered by our customer.

Azure and AWS Security Groups and ACLs can control cross-cloud traffic based only on the public IPs of the cloud providers: for two applications to communicate cross-cloud, the entire IP range of one provider's region must be allowed to communicate with the other.
No external IP can be statically set for a server in Azure or AWS. Thus, without introducing a third-party solution, there is no assurance that traffic reaching a specific application in AWS is actually coming from a specific application in Azure, and vice versa.

As public IPs are dynamically assigned to workloads within both Azure and AWS, our customer had to permit the whole IP range of the AWS London, UK region to communicate with the Azure environment, with no application context, and no control, introducing risk. Moreover, there was no way to prevent the Accounting application in the development environment from creating such a connection, without introducing an ACL in AWS to block all communication from that application instance to the Azure range. This would be problematic and restrictive, for example if the dev app had dependencies on another application in Azure.
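To see how coarse a region-level allow rule is, consider filtering a provider's published IP ranges. The sample below is shaped like AWS's public ip-ranges.amazonaws.com feed, but the entries themselves are abbreviated and invented for illustration; "allowing the London region" means allowing every prefix in it, with no application context whatsoever.

```python
# Sample data shaped like AWS's published ip-ranges.amazonaws.com feed
# (prefix entries invented/abbreviated for illustration). Allowing a
# whole region means allowing every CIDR block in it.

ip_ranges = {
    "prefixes": [
        {"ip_prefix": "18.130.0.0/16", "region": "eu-west-2", "service": "EC2"},
        {"ip_prefix": "35.176.0.0/15", "region": "eu-west-2", "service": "EC2"},
        {"ip_prefix": "52.94.48.0/20", "region": "us-east-1", "service": "EC2"},
    ]
}

def region_prefixes(data, region, service="EC2"):
    """Every prefix in the region: this entire list must be allowed
    just so one application can reach one application across clouds."""
    return [p["ip_prefix"] for p in data["prefixes"]
            if p["region"] == region and p["service"] == service]

print(region_prefixes(ip_ranges, "eu-west-2"))
```

Each returned CIDR covers thousands of addresses that could be assigned to any workload, which is exactly the "no application context, no control" risk the customer ran into.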

Guardicore Makes Multi-cloud Policy Management Simple

As we have already discussed, policies in Guardicore are decoupled from the underlying infrastructure. The customer created policies based on Environment and Application labels, with no dependency on the underlying cloud provider hosting the applications or on the applications' private or public IPs. This enabled easy policy management: blocking the Accounting application in the Development environment on AWS while allowing the Production instance access to the Billing application in Azure. Furthermore, this gave our customer ultimate flexibility and the ability to migrate applications seamlessly between cloud providers in the future.
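The label-based approach described above can be illustrated with a small sketch. This is not Centra's actual schema: the labels, workload names, and rule format are hypothetical. The point is that rules reference labels only, so the underlying cloud and the workloads' IPs never enter the decision.

```python
# Hypothetical label-based policy evaluation (NOT Centra's actual
# schema). Workloads carry Environment/Application labels; rules match
# on labels only, so cloud provider and IP addresses are irrelevant.

workloads = {
    "acct-prod": {"env": "Production",  "app": "Accounting", "cloud": "AWS"},
    "acct-dev":  {"env": "Development", "app": "Accounting", "cloud": "AWS"},
    "billing":   {"env": "Production",  "app": "Billing",    "cloud": "Azure"},
}

rules = [
    # (src_env, src_app, dst_app, action)
    ("Production", "Accounting", "Billing", "allow"),
]

def decide(src_id, dst_id):
    """Allow only label-matched flows; everything else is blocked."""
    src, dst = workloads[src_id], workloads[dst_id]
    for env, app, dst_app, action in rules:
        if (src["env"], src["app"], dst["app"]) == (env, app, dst_app):
            return action
    return "block"  # default-deny

print(decide("acct-prod", "billing"))  # allow
print(decide("acct-dev", "billing"))   # block
```

If the Accounting workloads were migrated to another cloud, only their `cloud` attribute would change; the labels, and therefore the policy outcome, would stay exactly the same.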

Guardicore provided a single pane of glass for multi-cloud segmentation rules. Each rule was applied on all relevant workloads regardless of the underlying infrastructure. Security teams were able to work with a single tool instead of managing multiple silos.

The same concept can be introduced for controlling and managing how your on-premises applications communicate with your cloud applications, ensuring a single policy across your whole data center, on premises or in the cloud. Using Guardicore, any enterprise can build a single, secure policy and save time and resources, while ensuring best-of-breed security.

Check out this blog to learn more about Guardicore and security in Azure or read more about Guardicore and AWS security here.

Interested in cloud security for hybrid environments? Get our white paper about protecting cloud workloads with shared security models.
