Environment Segmentation is your Company’s First Quick Micro-Segmentation Win

We often tell our customers that implementing micro-segmentation technology should be a phased project. Starting from a thorough map of your entire IT ecosystem, your company should begin with the ‘low-hanging fruit’: the easy wins that show quick time to value and have the least impact on other parts of the business. From there, you’ll be in a strong position to get buy-in for more complex or granular segmentation projects, perhaps even working toward a zero-trust model for your security.

One of the first tasks that many customers take on is separating environments from one another. Let’s see how it works.

Understanding the Context of your Data Center

Whether your workloads are on-premises, in the cloud, or in a hybrid mix of the two, your data center will be split into environments. These include:

  • Development: Where your developers create code, try out experiments, fix bugs, and use trial and error to create new features and tools.
  • Staging: Where testing is done, either manually or through automation. This environment is resource-heavy and kept as similar as possible to your production environment; it is where you run your final checks.
  • Production: Your live environment. Any errors or bugs that make it this far can be discovered by your users and affect your most critical applications, where the impact on your business is greatest. While all environments are vulnerable, and some may even be easier to breach, penetration and movement in this environment can have the most impact and cause the most damage.

Of course, every organization is different. In some cases, you might have environments such as QA, Local, Feature, or Release, to name just a few. Your segmentation engine should be flexible enough to meet any business structure, suiting your organization rather than the other way around.

It’s important to note that these environments are not entirely separate. They share the same infrastructure and have no physical separation. In this reality, there is traffic that needs to be controlled or blocked between the different environments to ensure best-practice security. At the same time, however, for business to run as usual, specific communication flows need to be allowed despite the environment separations. Mapping those flows, analyzing them, and whitelisting them is often not an easy process in itself, adding another level of complexity to traditional segmentation projects carried out without the right solution.
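To make this concrete, here is a minimal Python sketch of how cross-environment flows might be surfaced from collected flow records so they can be reviewed and whitelisted or blocked. The flow records and label names are hypothetical illustrations, not output from any particular segmentation product.

```python
# Hypothetical flow records: each record carries the environment label of its
# source and destination plus the destination port.
from collections import Counter

flows = [
    {"src_env": "Development", "dst_env": "Development", "dst_port": 8080},
    {"src_env": "Staging",     "dst_env": "Production",  "dst_port": 5432},
    {"src_env": "Development", "dst_env": "Production",  "dst_port": 22},
]

# Flows that cross an environment boundary are the candidates for explicit
# whitelisting or blocking.
cross_env = [f for f in flows if f["src_env"] != f["dst_env"]]
summary = Counter((f["src_env"], f["dst_env"], f["dst_port"]) for f in cross_env)

for (src, dst, port), count in summary.items():
    print(f"{src} -> {dst} on port {port}: {count} flow(s) observed")
```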

Use cases for environment segmentation include keeping business-critical servers away from customer access, and isolating the different stages of the product life cycle. This vital segmentation project also allows businesses to keep up with compliance regulations and prevents attackers from exploiting security vulnerabilities to access critical data and assets.

Traditional Methods of Environment Segmentation

Historically, enterprises would separate their environments using firewalls and VLANs, often creating physical isolation between each area of the business. They may have relied on cloud platforms for development, for example, and used on-premises data centers for production.

Today, some organizations adapt VLANs to create separations inside a data center. This relies on multiple teams spending time configuring network switches, connecting servers, and making application and code changes where necessary. Despite this, in static environments hosted on the same infrastructure, without dynamic changes or the need for large scale, VLANs get the job done.

However, the rise in popularity of cloud and containers, as well as fast-paced DevOps practices, has made quick implementation and flexibility more important than ever before. Building and enforcing a new VLAN can take months, becoming a huge bottleneck for the entire business and even creating unavoidable downtime for your users. Manually maintaining complex rules and changes invites errors, while out-of-date rules leave dangerous gaps in security that sophisticated attackers can exploit. VLANs do not extend to the cloud, which means your business ends up trying to reconcile multiple security solutions that were not built to work in tandem. Often the result is compromises that put you at risk.

A Software-Based Segmentation Solution Helps Avoid Downtime, Wasted Resources, and Bottlenecks

A policy that follows the workload using software bypasses these problems. Using micro-segmentation technology, you can isolate low-value environments such as Development from Production, so that even in the case of a breach, attackers cannot move without authorization to critical assets or data. With intelligent micro-segmentation, this one policy will be airtight throughout your environment, whether on-premises, in the public or private cloud, or in a hybrid data center.

The other difference is the implementation effort. Unlike VLANs, software-based segmentation requires no complex coordination among teams, no downtime, and no bottlenecks while application and networking teams configure switches, servers, and code. Using Guardicore Centra as an example, it takes just days to deploy our agents, and your customers won’t experience a moment of downtime.

Achieve Environment Segmentation without Infrastructure Changes

Environment segmentation is a necessity in today’s data centers: to achieve compliance, reduce the attack surface, and maintain secure separation between the different life stages of the business. However, this project doesn’t need to be manually intensive. When done right, it shouldn’t involve multiple teams, result in organizational downtime, or even require infrastructure changes. Instead, it can be the first stage of a phased micro-segmentation journey, making it easier to embrace new technology on the cloud and implement a strong posture of risk reduction across your organization.

Want to learn more about what’s next after environment segmentation as your first micro-segmentation project? Read up on securing modern data centers and clouds.


Are you Prepared for a Rise in Nation State Attacks and Ransomware in 2020?

Once you know what you’re up against, keeping your business safe might be easier than you think. In this blog, we’re going to look at two kinds of cyber threats: nation state cyber attacks and ransomware. Neither is a new concern, but both are increasing in sophistication and prevalence. Many businesses feel powerless to protect against these events, and yet a list of relatively simple steps could keep you protected in the event of an attack.

Staying Vigilant Against Nation State Actors

According to the 2019 Verizon Data Breach Investigations Report, nation state attacks increased from 12 percent of attacks in 2017 to 23 percent in 2018.

One of the most important things to recognize about nation state attacks is that it is getting harder to ascertain where these attacks are coming from. Attackers have learned to cleverly obfuscate their attacks by mimicking other state actors’ behavior, tools, and coding, and by working through layers of hijacked, compromised networks. In some cases, they work through proxy actors. This makes the process of attribution very difficult. One good example is the 2018 Winter Olympics in Pyeongchang, where attackers launched the Olympic Destroyer malware. It took down the Olympic network’s wireless access points, servers, ticketing system, and even reporters’ internet access for 12 hours, immediately prior to the start of the games. While metadata in the malware at first seemed to attribute the attack to North Korea, this was actually the result of manipulated code. Much later, researchers realized it was of Russian origin.

These ‘false flag’ attacks have a number of benefits for the perpetrators. Firstly, the real source of the threat may never be discovered. Secondly, even if the correct attribution is eventually found, the news cycle has died down, the exposure is less, and many people may not believe the new evidence.

This has contributed to nation state actors feeling confident enough to launch larger and more aggressive attacks, such as Russian attacks on Ukrainian power grids and communications, or the Iran-linked threat group APT33, associated with attacks that took down more than 30,000 Saudi oil production laptops and servers.

Ransomware often Attacks the Vulnerable, including Local Government and Hospitals

State-sponsored attacks have the clout to do damage where it hurts the most, as seen in the two largest ransomware attacks ever experienced, WannaCry and NotPetya. These were built using EternalBlue, an exploit allegedly stolen from the US NSA, as well as Mimikatz, a credential-stealing tool written by a French researcher.

This strength, combined with the tight budgets and flat networks of local governments and healthcare systems, is a recipe for catastrophe. Hospitals in particular are known for having flat networks and medical devices that run on legacy and end-of-life operating systems. According to some estimates, hospitals are the targets of up to 70% of all ransomware incidents. The sensitive nature of PII and health records, and the direct impact on safety and human life, make the healthcare industry a lucrative target for hackers looking to get their ransom paid by attacking national infrastructure.

As attackers become increasingly brazen and go after organizations that are poorly placed to stand up to the threat, it’s more important than ever that operators of national infrastructure think about security and take steps to close these glaring gaps.

Shoring Up Your Defenses is Easier Than You Think

The party line often seems to be that attackers are getting smarter and more insidious, and data centers are too complex to handle this threat. It’s true that today’s networks are more dynamic and interconnected, and that new attack vectors and methods of hiding these risks crop up all the time. However, what businesses miss is the handful of very achievable, even simple, steps that can help limit the impact of an attack, and perhaps even prevent the damage from occurring in the first place.

Here’s what enterprises can do:

  • Create an Incident Response Plan: Make sure that anyone can understand what to do in case of an incident, not just security professionals. Think about the average person on your executive board, or even your end users. You need to assume that a breach or a ransomware attack will happen, you just don’t know when. With this mindset, you’ll be more likely to create a thorough plan for incident response, including drills and practice runs.
  • Protect your Credentials: This starts with strong passwords and two-factor authentication, improving the posture around credentials in general. On top of this, the days of blanket administrative rights are over. Every user should have only the access they need, and no more. This stops bad actors from escalating privileges and moving laterally within your data center to take control of your devices.
  • Think Smart on Security Hygiene: Exploits based on the EternalBlue tool kit – which targets the Microsoft SMBv1 vulnerability – were able to cause damage even though Microsoft had released a patch before May 2017. Software vulnerabilities can be mitigated through patching, vulnerability testing, and certification.
  • Software-Defined Segmentation: If we continue with the mindset that an attack will occur, it’s important to be set up to limit the blast radius of a breach. Software-defined segmentation is the smartest way to do this. Without any need to make infrastructure changes, you can isolate and protect your critical applications. This also works to protect legacy or end-of-life systems that are business critical but cannot be secured with modern solutions, a common problem in the healthcare industry. Unlike VLANs and cloud security groups, it requires no physical infrastructure changes and takes hours, not months, to implement.

Following this Advice for Critical Infrastructure

This advice is a smart starting point for national infrastructure as well as enterprises, but it needs more planning and forethought. When it comes to critical infrastructure, visibility is essential, especially as you are likely to have multiple platforms and geographies. The last thing you want is to try to assemble one cohesive picture out of multiple disparate, platform-specific solutions.

It’s also important to think about modern-day threat vectors. Today, attacks can come through IP-connected IoT devices or networks, so your teams need to be able to detect non-traditional server compute nodes.

Incident response planning is much harder on a governmental or national level, and therefore needs to be taken up a notch in preparation. You may well need local, state, and national participation and buy-in for your drills, including law enforcement and emergency relief in case of panic or disruption. How are you going to communicate and share information on both a local and international scale, and who will have responsibility for what areas of your incident response plan?

Learning from the 2018 Olympics

Attacks against local government, critical infrastructure, and national systems such as healthcare are inevitable in today’s threat landscape. The defenses in place and the immediate response capabilities will be the difference between disaster and quick mitigation.

The 2018 Olympics can serve as proof. Despite Russia’s best attempts, the attack was thwarted within 12 hours. A strong incident response plan enabled the team to find the malware and produce signatures and remediation scripts within one hour. 4G access points were brought in to provide networking capabilities, and the machines at the venue were reimaged from backups.

We can only hope that Qatar is already rehearsing as strong an incident response plan for its upcoming 2022 World Cup, especially with radical ‘semi-state actors’ in the region such as the Cyber Caliphate Army and the Syrian Electronic Army, which could act as proxies for a devastating state actor attack.

We Can Be Just as Skilled as the Attackers

The attitude that ‘there’s nothing we can do’ to protect against the growth in nation state attacks and ransomware threats is not just unhelpful, it’s also untrue. We have strong security tools and procedures at our disposal, we just need to make sure that we put these into place. These steps are not complicated, and they don’t take years or even months to implement. Staying ahead of the attackers is a simple matter of taking these steps seriously, and using our vigilance to limit the impact of an attack when it happens.

Want to understand more about how software defined segmentation can make a real difference in the event of a cyber attack? Check out this webinar.

A Case Study for Security and Flexibility in Multi-cloud Environments

Most organizations today opt for a multi-cloud setup when migrating to the cloud. In fact, most enterprise adopters of public cloud services use multiple providers, an approach known as multicloud computing, a subset of the broader term hybrid cloud computing. In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers, and Gartner comments that “most organizations adopt a multicloud strategy out of a desire to avoid vendor lock-in or to take advantage of best-of-breed solutions”.

When considering segmentation solutions for the cloud, avoiding vendor lock-in is equally important, especially considering security concerns.

Let’s consider the following example, based on an experiment performed by one of our customers. As we discussed in the previous posts in the series, the customer created a simulation of multiple applications running in Azure and AWS. For the specific setup in Azure, please see the first and second posts in this series.

Understanding the Experiment

 

Phase 1 – Simulate an application migration between cloud providers:

The customer set up various applications in Azure, one of which is the CMS application. Network security groups (NSGs) and application security groups (ASGs) were set up for CMS, using a combination of allow and deny rules.

The customer attempted to migrate CMS from Azure to AWS. After the relevant application components were set up in AWS, the customer attempted to migrate the policies from Azure security groups to AWS security groups and network access control lists. For the policies to migrate with the application, the deny rules in Azure security groups had to be translated either into allow rules covering all other traffic in AWS security groups, or into network-layer deny rules in AWS access control lists (ACLs).

Important differences between AWS security groups and ACLs:

  1. Security groups – security groups are applied at the instance (EC2) level and are tied to an asset, not an IP. They only enable whitelisting traffic and are stateful. They are the instance-level layer of defense: inbound traffic is first evaluated by the subnet’s ACL and must then also be allowed by the security group.
  2. ACLs – access control lists are applied at the subnet level within a VPC, and are therefore tied directly to IPs. They support both allow and deny rules, but as they are tied to specific IPs, they do not support blocking by application context. They are not stateful, which can make them insufficient for some compliance requirements.
  3. In short, AWS security groups do not support blacklisting and only enable whitelisting, while AWS ACLs support both deny and allow rules but are tied to IP addresses within a VPC, so they can only block static IPs or whole subnets (see the sketch after this list).
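As a rough illustration of this split, the hedged Python sketch below uses boto3 to open one allow rule in a security group and push one deny rule down to a network ACL. The resource IDs, ports, and CIDR ranges are placeholders, not values from the customer’s environment.

```python
# Minimal sketch of the translation work described above, using boto3.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Security groups are whitelist-only and stateful: an Azure "deny" rule has no
# direct equivalent, so only the traffic that should be allowed is opened here.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "allow CMS front end"}],
    }],
)

# A deny rule has to move down to the VPC layer as a stateless network ACL
# entry, keyed to an IP range rather than to an application.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder
    RuleNumber=100,
    Protocol="6",                 # TCP
    RuleAction="deny",
    Egress=False,
    CidrBlock="203.0.113.0/24",   # placeholder range to block
    PortRange={"From": 443, "To": 443},
)
```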

Given the differences between security groups and ACLs listed above, migrating the CMS application from Azure to AWS alongside its policies required an employee to evaluate each Azure rule and translate it into the relevant rules in AWS security groups and ACLs. This unexpectedly set back the migration tests and simulation.

This is just one example. Each major public cloud provider offers its own tools for policy management. Working with multiple cloud-native tools requires a lot of time and resources, and results in a less secure policy and added inflexibility. The more hybrid your environment, and the more you depend on native tools, the more tools you will end up using. Each tool requires an expert, someone who knows how to use it and can overcome its limitations. You will need security experts for each cloud provider you choose to use, as each one offers a completely different solution with its own limitations. One limitation that all cloud providers’ native segmentation tools share is that cloud-based security groups only provide Layer 4 policy control, so additional tools are required to secure your application layer.

Guardicore Provides a Single Pane of Glass for Segmentation Rules

When using Guardicore, each rule is applied to all workloads: virtual machines, public clouds (AWS, Azure, GCP…), bare metal, and container systems. Rules follow workloads and applications when migrating to or between clouds, or from on-premises to the cloud. Security teams can work with a single tool instead of multiple solutions to build a single, secure policy, which saves time and resources and ensures consistency across a heterogeneous environment.
Guardicore therefore enables migrating workloads without security concerns: policies migrate with the workloads wherever they go. All you need to take into account when deciding where to migrate a workload is the cloud provider’s offering.

Our customer used Guardicore to create the CMS application policies, adding a further layer of security with Layer 7 rules at the process level that enhance the Layer 4 controls from the native cloud provider. When migrating CMS from Azure to AWS, policies were no longer a concern: Guardicore Centra policies follow the application wherever it goes. Because policies are decoupled from the underlying infrastructure and are created based on labels, the policies in Guardicore followed the workloads from Azure to AWS with no changes necessary.

Phase 2 – Create policies for cross-cloud application dependencies

The customer’s experiment setup in AWS included an Accounting application in the London, UK region that periodically needed to access data from the Billing application databases. The Billing application was set up in Azure.

The Accounting application had 2 instances, one in the production environment and another in the development environment. The goal was for only the Accounting application in production to have access to the Billing application.

In a recent Gartner analysis, Infrastructure Monitoring With the Native Tools of AWS, Google Cloud Platform and Microsoft Azure, Gartner notes that “Providers’ native tools do not offer full support for other providers’ clouds, which can limit their usability in multicloud environments and drive the need for third-party solutions.” One such limitation was encountered by our customer.

Azure and AWS security groups and ACLs can only control cross-cloud traffic based on the cloud providers’ public IPs. For two applications to communicate across clouds, the whole IP range of one cloud provider’s region must be allowed to communicate with the other.
Because public IPs are assigned dynamically to servers in Azure and AWS by default, without introducing a third-party solution there is no assurance that traffic reaching a specific application in AWS is really coming from a specific application in Azure, and vice versa.

As public IPs are dynamically assigned to workloads within both Azure and AWS, our customer had to permit the whole IP range of the AWS London, UK region to communicate with the Azure environment, with no application context, and no control, introducing risk. Moreover, there was no way to prevent the Accounting application in the development environment from creating such a connection, without introducing an ACL in AWS to block all communication from that application instance to the Azure range. This would be problematic and restrictive, for example if the dev app had dependencies on another application in Azure.
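To give a sense of how coarse that control is, the minimal Python sketch below (not part of the customer’s setup) pulls AWS’s published IP ranges and lists the EC2 prefixes for the London region (eu-west-2); an IP-based cross-cloud rule would have to allow every one of them.

```python
# Minimal sketch: list the public EC2 prefixes for the AWS London region
# (eu-west-2) from AWS's published ip-ranges.json. An IP-based cross-cloud
# rule would have to whitelist all of these ranges, with no application context.
import json
import urllib.request

URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

london_ec2 = [
    p["ip_prefix"]
    for p in data["prefixes"]
    if p["region"] == "eu-west-2" and p["service"] == "EC2"
]

print(f"{len(london_ec2)} prefixes would need to be allowed, for example:")
for prefix in london_ec2[:5]:
    print(" ", prefix)
```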

Guardicore Makes Multi-cloud Policy Management Simple

As we have already discussed, policies in Guardicore are decoupled from the underlying infrastructure. The customer created policies based on Environment and Application labels, with no dependency on the underlying cloud provider hosting the applications or on the applications’ private or public IPs. This enabled easy policy management, blocking the Accounting application in the Development environment on AWS while allowing the Production application instance access to the Billing application in Azure. Furthermore, it gave our customer complete flexibility and the ability to seamlessly migrate applications between cloud providers in the future.
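The sketch below is a generic, hypothetical illustration of label-based policy evaluation in Python (it is not Guardicore Centra’s actual policy syntax), showing how rules keyed to Environment and Application labels can allow the Production Accounting instance to reach Billing while blocking the Development instance, regardless of which cloud hosts each workload.

```python
# Generic sketch of label-based policy evaluation: rules match on Environment
# and Application labels, so the same rule applies whether the workload runs
# in AWS, Azure, or on-premises.
RULES = [
    {"action": "allow",
     "src": {"Environment": "Production",  "Application": "Accounting"},
     "dst": {"Environment": "Production",  "Application": "Billing"}},
    {"action": "block",
     "src": {"Environment": "Development", "Application": "Accounting"},
     "dst": {"Application": "Billing"}},
]

def matches(selector: dict, labels: dict) -> bool:
    # A selector matches when every label it names has the same value.
    return all(labels.get(k) == v for k, v in selector.items())

def decide(src_labels: dict, dst_labels: dict) -> str:
    for rule in RULES:
        if matches(rule["src"], src_labels) and matches(rule["dst"], dst_labels):
            return rule["action"]
    return "block"  # default-deny posture

# The decision is identical regardless of which cloud hosts the workloads.
print(decide({"Environment": "Production",  "Application": "Accounting"},
             {"Environment": "Production",  "Application": "Billing"}))   # allow
print(decide({"Environment": "Development", "Application": "Accounting"},
             {"Environment": "Production",  "Application": "Billing"}))   # block
```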

Guardicore provided a single pane of glass for multi-cloud segmentation rules. Each rule was applied on all relevant workloads regardless of the underlying infrastructure. Security teams were able to work with a single tool instead of managing multiple silos.

The same concept can be introduced for controlling and managing how your on-premises applications communicate with your cloud applications, ensuring a single policy across your whole data center, on premises or in the cloud. Using Guardicore, any enterprise can build a single, secure policy and save time and resources, while ensuring best-of-breed security.

Check out this blog to learn more about Guardicore and security in Azure or read more about Guardicore and AWS security here.

Interested in cloud security for hybrid environments? Get our white paper about protecting cloud workloads with shared security models.


Guardicore Centra Security Platform Verified as Citrix Ready

Micro-segmentation Solution Enables Strong Security for Citrix Virtual Apps and Desktops by Isolating Workloads and Preventing Lateral Movement

Boston, Mass. and Tel Aviv, Israel – November 12, 2019 – Guardicore, a leader in internal data center and cloud security, today announced its solution has been verified as Citrix® Ready. The Citrix Ready technology partner program offers robust testing, verification, and joint marketing for Digital Workspace, Networking, and Analytics solutions – with over 30,000 verifications listed in the Citrix Ready Marketplace. Guardicore completed a rigorous testing and verification process for its Guardicore Centra security platform to ensure compatibility with Citrix Virtual Apps and Desktops, providing confidence in joint solution compatibility.

“Using Guardicore Centra’s micro-segmentation capabilities, Citrix customers can now more effectively create and enforce policies that isolate Citrix Virtual Apps and Desktops securely, delivering a Zero Trust approach and preventing unauthorized access as well as lateral movement,” said Sharon Besser, Vice President of Business Development, Guardicore. “By integrating with critical technologies from Citrix and other members of our partner ecosystem we enable customers to maximize the value of existing investments while transforming security in the cloud and software-defined data center.”

“The Guardicore Centra security platform delivers a simple and intuitive way to apply micro-segmentation controls to reduce the attack surface, detect, and control breaches,” said John Panagulias, Director, Citrix Ready. “With this integration and Citrix Ready validation, we can offer customers integrated security solutions that combine Guardicore Centra with Citrix Virtual Apps and Desktops to protect virtual workloads while enhancing productivity.”

Virtual desktop infrastructure deployments require effective security controls that can scale without losing visibility and control. Unlike traditional deployments where end-user machines can be physically isolated from the data center and controlled and monitored, securing virtual environments requires a different approach, especially when applying principles of Zero Trust. Micro-segmentation is central to the network virtualization paradigm. It enables better security for these environments by isolating workloads from each other, controlling and enforcing security policies that prevent lateral movement attacks. Guardicore augments Citrix Virtual Apps and Citrix Virtual Desktops with micro-segmentation, using its advanced capabilities for flows, applications and users to create secure zones that enhance the application of Zero Trust without compromising productivity or user experience.

Available now, Guardicore Centra supports Citrix Virtual Apps and Desktops, and older versions of Citrix XenApp and Citrix XenDesktop. Guardicore Centra for Citrix products can be found immediately in the Citrix Ready Marketplace.

About Guardicore

Guardicore is a data center and cloud security company that protects your organization’s core assets using flexible, quickly deployed, and easy to understand micro-segmentation controls. Our solutions provide a simpler, faster way to guarantee persistent and consistent security — for any application, in any IT environment. For more information, visit www.guardicore.com.

Using Zero Trust Security to Ease Compliance

Data privacy is a heavily regulated area of cyber-security. New regulations such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR) have added to a list of compliance mandates that already included PCI-DSS for financial data and HIPAA for patient information. Many enterprises have now established compliance officers or even whole teams, who carry a heavy workload in achieving and proving compliance with these regulations, preparing for audits, and putting best practices into place.

As data centers have become increasingly complex and dynamic, this workload has increased exponentially. Visibility is understandably hard to achieve in a heterogeneous environment, and if you don’t know where your data is – how can you secure it?

Traditional Perimeter Security Causes Problems for Compliance

If your business relies on perimeter-based security, any breach is a breach of your whole network. Everything is equally accessible once an attacker has made it through your external perimeter. This security model cannot distinguish between types of data or applications, and does not define or visualize critical assets, giving everything in your data center an equal amount of protection.

This reality is a struggle for any IT or security team responsible for compliance. Multiple compliance authorities enforce strict controls over the management of customer data, including how it is held, deleted, shared, and accessed. Personally identifiable information (PII) and anywhere financial information is stored (e.g., the cardholder data environment, or CDE) need added security measures or governance under compliance mandates, and yet these are often left unidentified, let alone secured. This is complicated further by the growing amount of data that resides or communicates outside the firewall, for example in the cloud. Visibility is the first hurdle, and many enterprises fall at it immediately.

On top of this, with border controls alone, as soon as your perimeter is breached, all your data is up for grabs by attackers who can make lateral movements inside your network. Even if you could see what you have, perimeter security simply can’t protect critical data that falls in scope for compliance at the required level.

Zero Trust as a Solution for Compliance

Many enterprises know that a Zero Trust model would provide a stronger security posture, and worry about east-west traffic that remains unprotected, but they think of moving to a Zero Trust paradigm as an incredibly complex initiative. Segmenting applications, writing policy for different areas of the business, establishing which access to permit and where: it sounds like it would complicate security, not make it simpler.

However, Renee Murphy, principal analyst at Forrester Research, explains that when implemented intelligently, a Zero Trust model actually makes security and compliance a whole lot easier. “You end up with a less complex environment and doing less work overall. Once you know what [your data] is, where it is and how important it is, you can [then] put your efforts towards it.”

For this to be successful, and remain simple, your Zero Trust model’s implementation needs to start with visibility. Data classification is not an IT problem, it’s a business problem, and the business needs to be able to automatically discover all assets and data, both in real time and with historical baselines for comparison and policy creation.

Your partner in creating a Zero Trust model should be able to provide an automatic map of all applications, databases, communications and flows, including dependencies and relationships. This needs to be both deep, providing granular insight, and also broad, across your hybrid environment covering everything from legacy on-premises to container systems.

Furthermore, pick a vendor with strong granular enforcement capabilities. The best protection leaves the least possible exposure. You need policies that can lock compliance environments down further than port and IP; seek out those that can create policies at the process, user, and domain-name level.
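As a purely illustrative comparison (the field names below are hypothetical, not any specific vendor’s schema), here is how a rule narrowed to process, user, and domain level differs from a plain port-and-IP rule:

```python
# A traditional rule: anything on the subnet may reach the database port.
port_ip_rule = {
    "src": "10.1.0.0/16",
    "dst": "10.2.5.10",
    "port": 1433,
    "action": "allow",
}

# A granular rule: the same port is reachable only for one process, one
# service account, and one named destination host inside the compliance scope.
granular_rule = {
    "src": {"label": "App: Payments", "process": "/usr/bin/payments-api",
            "user": "svc_payments"},
    "dst": {"label": "Env: PCI-CDE", "process": "sqlservr.exe",
            "domain": "db.cde.internal"},
    "port": 1433,
    "action": "allow",
}

# Everything the first rule exposes to an entire subnet, the second exposes
# only to a single, named workflow, which is the reduction in exposure that
# a compliance scope needs.
```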

Not only does this provide the best starting point for Zero Trust initiatives, but it also means that compliance becomes far easier as a result of best-in-class documentation and records at every stage.

Regardless of which standard you wish to comply with, using the Zero Trust model’s visibility and segmentation to effectively limit scope and resources is essential. For example, the PCI Security Standards Council has published the Information Supplement: Guidance for PCI DSS Scoping and Network Segmentation, in which this is directly called out.

When You Establish Zero Trust, All Data Can be Treated Unequally

Once visibility is established and you have an accurate view of your network, you can easily identify what needs protecting. Compliance mandates are usually very clear about what data is in scope and out of scope, and only require that in-scope data keep to the regulations. While perimeter security made it impossible to apportion security differently throughout your data center, this is where micro-segmentation and zero trust thrive.

With zero trust, your security strategy can recognize that not everything is created equally. Some data or applications need more security and governance than others, and while certain assets need to be watched and controlled closely, others can be left with minimal controls.

With the right partner in place, enterprises can use a distributed firewall to prioritize their compliance efforts, working from the most essential tasks outward. Granular rules can be put in place, down to the process level or based on user identity, strictly enforcing micro-perimeters around systems and data that are in scope. This is a much easier task than ‘protect everything, all the time.’

Demonstrating Compliance using a Zero Trust Environment

Adopting a Zero Trust mentality is also a really strong way to show auditors that you’re doing your part. A huge part of compliance is being able to guarantee that even in case of a breach, you have taken all reasonable steps to ensure that your data was protected from malicious intent. Each time an east-west movement is attempted, this communication is checked and verified. As such, your enterprise has never assumed that broad permissions are enough to guarantee a safe connection, and with micro-segmentation, you have reduced the attack surface as much as possible. This process also provides an audit trail, making incident response and documentation much simpler in case of a breach.

Consider partnering with a vendor that includes monitoring and analytics, as well as breach detection and incident response, to lower the chance of a cyber-attack and create a plan for any events that violate policy or suggest malicious intent. This can dramatically improve your chances of withstanding an attack, as well as help to bolster a robust compliance checklist.

The days of relying on perimeter-based controls to stay compliant and secure are long gone. In a world where Zero Trust models are gaining acceptance and improving security posture so widely, enterprises need to do more to prove that they are compliant with the latest regulations.

The Zero Trust framework acknowledges that internal threats are now almost a guarantee, and enterprises need to protect sensitive data and crown jewel applications with more than just border control alone. Remaining compliant is an important yardstick to measure the security of your infrastructure against, and Zero Trust is an effective model to achieve that compliance.

Want to read more about implementing cloud security toward an effective Zero Trust model? Get our white paper about how to move toward a Zero Trust framework faster.

Guardicore Achieves Microsoft IP Co-Sell Status: Available for Download on the Azure Marketplace – Here’s What That Means for You

A couple of weeks ago we announced that the Guardicore Centra security platform is available in the Microsoft Azure Marketplace. As you might know, Centra was available in the marketplace before, as Guardicore has worked with Microsoft for a long time, providing various integrations as well as research for Azure and Azure Stack. Now the latest version of Centra is available, and Guardicore has achieved IP Co-Sell status.

One of the most important capabilities we developed for Azure provides Centra with real-time integration with Azure orchestration. This supplies metadata on the assets deployed in your Azure cloud environment, complementing the information provided by Guardicore agents.

For example, information coming from orchestration may include data that can’t be collected from the VM itself, such as: Source Image, Instance Name, Private DNS Name, Instance ID, Instance Type, Security Groups, Architecture, Power State, Private IP Address, and Subscription Name.
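For readers who want to see the kind of data involved, the hedged Python sketch below queries the Azure Instance Metadata Service (IMDS) from inside a VM and prints a few of these fields. It only illustrates what orchestration-level metadata looks like and is not Centra’s actual integration.

```python
# Minimal sketch: read instance metadata from the Azure Instance Metadata
# Service (IMDS). This only works from inside an Azure VM.
import json
import urllib.request

IMDS_URL = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=2) as resp:
    instance = json.load(resp)

compute = instance["compute"]
print("Instance name: ", compute["name"])
print("VM size:       ", compute["vmSize"])
print("Subscription:  ", compute["subscriptionId"])
print("Resource group:", compute["resourceGroupName"])
```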

Using this information, Centra will accelerate security migration from an on-premises data center to Azure.

In addition, we are very proud that Guardicore has achieved the Microsoft IP Co-Sell status. This designation recognizes that Guardicore has demonstrated its proven technology and deep expertise that helps customers achieve their cloud security goals. Achieving this status demonstrates our commitment to the Microsoft partner ecosystem. It also proves our ability to deliver innovative solutions that help forward-thinking enterprise customers to secure their business-critical applications and data with quick time to value, reduce the cost and burden of compliance, and securely embrace cloud adoption.