Securing a Hybrid Data Center – Strategies and Best Practices

Today’s data centers exist in a hybrid reality. They often include on-premises infrastructure such as bare metal and virtual machines, as well as both public and private clouds. At the same time, most businesses have legacy systems that they need to support. Even as you embrace cutting-edge infrastructure like containers and microservices, your legacy systems aren’t going anywhere, and replacing them probably isn’t even on your near-term roadmap. As a result, your security strategy needs to work across a hybrid ecosystem, which is not as simple as it sounds.

The Top Issues with Securing a Hybrid Data Center

Many businesses attempt to use traditional security tools to manage a hybrid data center, and quickly run into problems.

Here are the most common problems that companies encounter when traditional tools are used to secure a modern, hybrid data center:

  • Keeping up with the pace of change: Business moves fast, and traditional security tools such as legacy firewalls, ACLs, VLANs, and cloud security groups can’t keep up. Each of these solutions is built for one specific underlying infrastructure. VLANs work well on-premises but fall short for cloud and container infrastructure. Cloud security groups work for one cloud, but won’t extend to additional cloud providers or to on-premises systems. If you want to migrate, security will seriously limit the speed and flexibility of your move, slowing down the whole process and probably negating the reasons you chose cloud to begin with.
  • Management overhead: Deploying a different solution for each type of infrastructure is nothing short of a headache. You’ll need to hire more staff, including experts in each area. A cross-platform security strategy that stitches together everyone’s field of expertise is costly, complex, and prone to bottlenecks because of the classic ‘too many cooks’ problem.
  • No visibility: Your business will also need to think about compliance, which could require yet another solution and a staff member dedicated to compliance and visibility. Without granular insight into your entire ecosystem, it’s impossible to pass an audit. VLANs, for example, offer no visibility into application dependencies, a major requirement for audit compliance. When businesses use VLANs, compliance therefore becomes an additional headache.
  • Insufficient control: Today’s security solutions need Layer 7 control, with granularity that can look at user identity, FQDNs (fully qualified domain names), command lines, and more. Existing solutions rely on IPs and ports, which are insufficient to say the least.
    Take cloud security groups, for example, which for many have become the standard technology for segmenting applications in the cloud, the same way they would segment on-premises. However, this control stops at Layer 4: traffic, ports, and IPs. For application-aware security on AWS, you will need to add another set of controls. In a dynamic data center, security needs to be decoupled from the IPs themselves, allowing machines to migrate freely. Smart security uses a level of abstraction, enabling the policy to follow the workload rather than the IP (see the sketch after this list).
  • Lack of automation: In a live hybrid cloud data center, automation is essential. Without automation as standard (with VLANs, for example), changes can take weeks or even months. Manually implementing rules can cause downtime of critical systems, as well as multiple lengthy rounds of IP changes, router reconfigurations, and more.
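
To make the abstraction idea above concrete, here is a minimal sketch of label-based policy, in which a rule is keyed to a workload’s labels rather than its IP. All of the names and structures are hypothetical and for illustration only; they are not any vendor’s actual API.

```python
# Minimal, hypothetical sketch: policy expressed over labels instead of IPs.
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    ip: str                                      # changes when the machine migrates
    labels: dict = field(default_factory=dict)   # stable business identity

@dataclass
class Rule:
    src: dict        # label selector the source must match
    dst: dict        # label selector the destination must match
    port: int
    action: str      # "allow" or "block"

def matches(selector: dict, labels: dict) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())

def evaluate(rules: list, src: Workload, dst: Workload, port: int) -> str:
    for r in rules:
        if r.port == port and matches(r.src, src.labels) and matches(r.dst, dst.labels):
            return r.action
    return "block"   # default-deny when no rule matches

web = Workload("web-01", "10.0.1.5",   {"app": "crm", "tier": "web"})
db  = Workload("db-01",  "172.31.9.2", {"app": "crm", "tier": "db"})
rules = [Rule({"app": "crm", "tier": "web"}, {"app": "crm", "tier": "db"}, 5432, "allow")]

print(evaluate(rules, web, db, 5432))   # allow
web.ip = "52.14.88.1"                   # the workload migrated to a cloud IP
print(evaluate(rules, web, db, 5432))   # still allow: the policy followed the workload
```

Because the rule never mentions an address, the migration changes nothing: the policy moves with the workload.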

Hybrid Data Center Security Strategies that Meet These Issues Head-On

The first essential item on your checklist should be infrastructure-agnostic security. Centralized management means one policy approach across everything, from modern and legacy technology on-premises to both public and private cloud. Distributed enforcement decouples the security from the IP or any underlying infrastructure – allowing policy to follow the workload, however it moves or changes. Security policy becomes an enabler of migration and change, automatically moving with the assets themselves.

The most effective hybrid cloud solutions will be software-based, able to integrate with any existing software solution, including Ansible, Chef, Puppet, SCCM, and more. This also makes deployment fast and seamless, with implementation taking hours rather than days. At Guardicore, our customers often express surprise when we request three hours to install our solution for a POC, as competitors have asked for three days!

The ease of use should continue after the initial deployment. An automated, readable visualization of your entire ecosystem makes tasks like compliance entirely straightforward, and provides an intuitive, information-rich map that is the foundation for policy creation. Coupling this with a flexible labeling system means that any stakeholder can view the map of your infrastructure and immediately understand what they are looking at.

These factors allow you to implement micro-segmentation in a highly effective way, with granular control down to the process level. In comparison to traditional security tools, Guardicore can secure and micro-segment an application in just weeks, a task that had previously taken one customer nine months using VLANs.

What Makes Guardicore Unique When it Comes to Hybrid Data Center Security Strategies?

For Guardicore, it all starts with the map. We collect all the flows, rather than just a sample, and let you access all your securely stored historical data rather than only snapshotting small windows in time. This allows us to support more use cases for our customers, from making compliance simple to troubleshooting a slowdown or conducting a forensic investigation into a breach. We also apply contextual analysis to all application dependencies and traffic, using orchestration data as well as the process, user, FQDN, and command line behind all traffic. We can deliver results, whatever use case you’re looking to meet.

Guardicore is also known for flexibility, providing a grouping and labeling process that lets you see your data center the way you talk about it: your own labels rather than pre-defined ones imposed by a vendor, and Key:Value formats instead of flat tags. This makes it much easier to create the right policies for your environment and to use the map as a hierarchical view of your entire business structure, with context that makes sense to you. Taking this a step further into policy creation, your rules methodology can combine whitelisting and blacklisting, reducing the risk of inflexibility and complexity in your data center, and even allowing security rules that are not tied to segmentation use cases. In contrast, competitors use whitelist-only approaches with fixed labels and tiers.
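
As a rough illustration of what a composite rules methodology can look like, here is a hedged sketch in which explicit block rules are evaluated before allow rules, so blacklist-style and whitelist-style policy coexist. The field names, the precedence order, and the final "alert" fallback are assumptions made for this example, not a description of Guardicore’s actual engine.

```python
# Hypothetical composite policy: blacklist rules first, then whitelist rules.
def decide(flow: dict, block_rules: list, allow_rules: list) -> str:
    """flow is a dict of Key:Value attributes, e.g. labels, port, process."""
    for rule in block_rules:                      # blacklist first ...
        if all(flow.get(k) == v for k, v in rule.items()):
            return "block"
    for rule in allow_rules:                      # ... then whitelist
        if all(flow.get(k) == v for k, v in rule.items()):
            return "allow"
    return "alert"                                # neither: flag for review

block_rules = [{"dst_env": "production", "process": "telnet"}]   # never, anywhere
allow_rules = [{"src_app": "crm", "dst_app": "billing", "port": 443}]

flow = {"src_app": "crm", "dst_app": "billing", "port": 443,
        "dst_env": "production", "process": "nginx"}
print(decide(flow, block_rules, allow_rules))     # allow
```

Note how the block rule here is a security rule in its own right (no telnet into production), independent of any segmentation use case.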

Fast & Simple Segmentation with Guardicore

Your hybrid data center security strategies should enable speed and flexibility, not stand in your way. First, ensure that your solution supports any environment. Next, gain as much visibility as possible, including context. Use this to glean all data in an intuitive way, without gaps, before creating flexible policies that focus on your key objectives – regardless of the underlying infrastructure.

Interested in learning more about implementing a hybrid cloud center security solution?

Download our white paper

The Risk of Legacy Systems in a Modern-Day Hybrid Data Center

If you’re still heavily reliant on legacy infrastructure, you’re not alone. In many industries, legacy servers are an integral part of ‘business as usual’ and are far too complex or expensive to replace or remove.

Examples include Oracle databases that run on Solaris servers, applications using Linux RHEL4, or industry-specific legacy technology. Think of the legacy AIX machines that often handle transaction processing for financial institutions, or end-of-life operating systems such as Windows XP that are frequently used as end devices in healthcare enterprises. While businesses do attempt to modernize these applications and infrastructure, it can take years of planning to reach execution, and even then the effort may never fully succeed.

When Legacy Isn’t Secured – The Whole Data Center is at Risk

When you think about the potential risk of legacy infrastructure, you may go straight to the legacy workloads, but that’s just the start. Think about an unpatched device that is running Windows XP. If this is exploited, an attacker can gain access directly to your data center. Security updates like this recent warning about a remote code execution vulnerability in Windows Server 2003 and Windows XP should show us how close this danger could be.

Gaining access to just one unpatched device, especially when it is a legacy machine, is relatively simple. From this point, lateral movement allows an attacker to move deeper inside the network. Today’s data centers are increasingly complex, with an intricate mix of technologies: not just the two binary categories of legacy and modern, but future-focused and hybrid infrastructure such as public and private clouds and containers. When a data center takes advantage of this kind of dynamic and complex infrastructure, the risk grows exponentially. Traffic patterns are harder to visualize and therefore control, and attackers can move undetected around your network.

Digital Transformation Makes Legacy More Problematic

The threat that legacy servers pose is not as simple as it was before digital transformation. Modernization of the data center has increased the complexity of every enterprise, and attackers have more vectors than ever before to gain a foothold in your data centers and make their way to critical applications and digital crown jewels.

Historically, an on-premises application might have been used by only a few other applications, probably also on-premises. Today, it is likely to be used by cloud-based applications too, without any improvements to its security. As legacy systems are exposed to more and more applications and environments, the risk they carry when unpatched or insecure grows all the time, exacerbated by every new innovation, communication channel, or advance in technology.

Blocking these communications isn’t a realistic option; digital transformation makes these connections necessary. At the same time, you can’t embrace the latest innovation without securing the business-critical elements of your data center. How can you rapidly deploy new applications in a modern data center without putting your enterprise at risk?

Quantifying the Risk

Many organizations think they understand their infrastructure, but don’t actually have an accurate, real-time visualization of their IT ecosystem. Organizational or ‘tribal’ knowledge about legacy systems may be incorrect, incomplete, or lost, and it’s almost impossible to maintain manual visibility over a modern dynamic data center. Without an accurate map of your entire network, you simply can’t quantify the risks if an attack were to occur.

Once you’ve obtained visibility, here’s what you need to know (a short triage sketch follows the list):

  1. The servers and endpoints that are running legacy systems.
  2. The business applications and environments where the associated workloads belong.
  3. The ways in which the workloads interact with other environments and applications. Think about what processes they use and what goals they are trying to achieve.
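
Given an inventory and flow data exported from whatever visibility tooling you use, the triage of these three points can be largely scripted. The sketch below is illustrative only; the field names, sample data, and the list of legacy operating systems are assumptions for this example.

```python
# Hypothetical triage over exported inventory and flow data.
LEGACY_OS = {"Windows XP", "Windows Server 2003", "RHEL4", "Solaris 9", "AIX 6"}

inventory = [
    {"host": "tx-proc-01", "os": "AIX 6",        "app": "payments", "env": "prod"},
    {"host": "web-14",     "os": "Ubuntu 22.04", "app": "portal",   "env": "prod"},
]
flows = [
    {"src": "web-14", "dst": "tx-proc-01", "port": 8583, "process": "txd"},
]

legacy = {w["host"]: w for w in inventory if w["os"] in LEGACY_OS}       # item 1
for host, w in legacy.items():
    print(f"{host}: app={w['app']}, env={w['env']}")                     # item 2
    for f in flows:                                                      # item 3
        if host in (f["src"], f["dst"]):
            print(f"  flow {f['src']} -> {f['dst']} port {f['port']} via {f['process']}")
```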

Once you have this information, you know which workloads present the most risk, which business processes are most likely to come under attack, and the routes an attacker could use to get from the easy target of a legacy server, across clouds and data centers, to a prized critical asset. We often see customers surprised by the ‘open doors’ that could lead attackers directly from an insecure legacy machine to sensitive customer data or digital crown jewels.

Once you’ve got full visibility, you can start building a list of what to change, which systems to migrate to new environments, and which policy you could use to protect the most valuable assets in your data center. With smart segmentation in place, legacy machines do not have to be a risky element of your infrastructure.

Micro-segmentation is a Powerful Tool Against Lateral Movement

Using micro-segmentation effectively reduces risk in a hybrid data center environment. Specific, granular security policy can be enforced, which works across all infrastructure – from legacy servers to clouds and containers. This policy limits an attacker’s ability to move laterally inside the data center, stopping movement across workloads, applications, and environments.

If you’ve been using VLANs up until now, you’ll know how ineffective they are when it comes to protecting legacy systems. VLANs usually place all legacy systems into one segment, which means just one breach puts them all in the line of fire. VLANs rely on firewall rules that are difficult to maintain and offer little automation, so organizations often settle for loose policy that leaves them open to risk. Without visibility, security teams are unable to enforce tight policy and flows, not only among the legacy systems themselves, but also between the legacy systems and the rest of a modern infrastructure.

One Solution – Across all Infrastructure

Many organizations make the mistake of forgetting about legacy systems when they think about their entire IT ecosystem. However, as legacy servers can be the most vulnerable, it’s essential that your micro-segmentation solution works here, too. Coverage of all infrastructure types is a must-have when choosing a micro-segmentation vendor for a modern data center. Even enterprises that are looking to modernize or replace their legacy systems may be years away from achieving it, and security is more important than ever in the meantime.

Say Goodbye to the Legacy Challenge

Legacy infrastructure is becoming harder to manage. The servers and systems are business-critical, but securing and maintaining them in a modern hybrid data center only gets harder. Not only that, but the risk and the attack surface grow with every new cloud-based technology and every new application you take on.

Visibility is the first important step. Security teams can use an accurate map of their entire network to identify legacy servers and their interdependencies and communications, and then control the risks using tight micro-segmentation technology.

Guardicore Centra covers legacy infrastructure alongside any other platform, removing gaps and blind spots from your network. Without fear of losing control over your existing legacy servers, your enterprise can create a micro-segmentation policy that is future-focused, supports where you’ve come from, and is built for today’s hybrid data center.

Interested in learning more about implementing a hybrid cloud center security solution?

Download our white paper

Guardicore Raises $60 Million; Funding Fuels Company Growth and Continued Disruption

Today I am excited to share that we have secured a Series C funding round of $60 million, bringing our total funding to more than $110 million. The latest round was led by Qumra Capital and was joined by other new investors DTCP, Partech, and ClalTech. Existing investors Battery Ventures, 83North, TPG Growth, and Greenfield Partners also participated in the round.

Since we launched the company in 2015, Guardicore has been focused on a single vision for providing a new, innovative way to protect critical assets in the cloud and data center. Our focus, and our incredible team, has earned the trust of some of the world’s most respected brands by helping them protect what matters most to their business. As the confidence our customers have in us has grown, so has our business, which has demonstrated consistent year-over-year growth for the past three years.

Our growth is due to our ability to deliver on a new approach to securing data centers and clouds using distributed, software-defined segmentation. This approach aligns with the transformation of the modern data center, driven by cloud, hybrid cloud, and PaaS adoption. As a result, we have delivered a solution that redefines the role of firewalls and helps implement Zero Trust security frameworks. More dynamic, agile, and practical security techniques are required to complement or even replace next-generation firewall technologies. We are delivering this, giving our customers the ability to innovate rapidly with confidence that their security posture can keep up with the pace of change.

Continued Innovation

The movement of critical workloads into virtualized, hybrid cloud environments, industry compliance requirements, and the increase in data center breaches all demand a new approach to security, one that moves away from legacy firewalls and other perimeter-based security products to a new, software-defined approach. This movement continues to inspire our innovations and ensure that our customers have a simpler, faster way to guarantee persistent and consistent security, for any application, in any IT environment.

Our innovation is evident in several areas of the company. First, we have been able to quickly add new innovative technology into our Centra solution, working in close partnership with our customers. For example, we deliver expansive coverage of data center, cloud infrastructure and operating environments, and simpler and more intuitive ways to define application dependencies and segmentation policies. This gives our customers the right level of protection for critical applications and workloads in virtually any environment.

Second, our Guardicore Labs global research team continues to provide deep insights into the latest exploits and vulnerabilities that matter to the data center. The team also equips the industry with open-source tools like Infection Monkey, and with Cyber Threat Intelligence (CTI) that allows security teams to keep track of potential threats in real time.

We have also continued to build out other areas of our business, such as our partner ecosystem, which has earned a five-star partner program rating from CRN since its inception two years ago, and our technology alliances, which include relationships with leading cloud / IaaS infrastructure players such as AWS, Azure, and Nutanix.

Looking Ahead

We are proud of our past, but even more excited about our future. While there is always more work to do, we are in a unique position to lead the market with not only great technology, but a strong roster of customers, partners and, most importantly, a team of Guardicorians that challenge the status quo every single day to deliver the most innovative solutions to meet the new requirements of a cloud-centric era. I truly believe that we have the best team in the business.

Finally, as we celebrate this important milestone, I want to say thanks to our customers who have made Guardicore their trusted security partner. It is our mission to continue to earn your trust by ensuring you maximize the value of your security investments beyond your goals and expectations.

Understanding and Avoiding Security Misconfiguration

Security misconfiguration is simply defined as failing to implement all the security controls for a server or web application, or implementing the security controls with errors. What a company thought of as a safe environment actually has dangerous gaps or mistakes that leave the organization open to risk. According to the OWASP Top 10, this type of misconfiguration is number six on the list of critical web application security risks.

How Do I Know if I Have a Security Misconfiguration, and What Could It Be?

The truth is, you probably do have misconfigurations in your security, as this is a widespread problem that can happen at any level of the application stack. Some of the most common misconfigurations in traditional data centers include default configurations that have never been changed and remain insecure, incomplete configurations that were intended to be temporary, and wrong assumptions about an application’s expected network behavior and connectivity requirements.

In today’s hybrid data centers and cloud environments, and with the complexity of applications, operating systems, frameworks, and workloads, this challenge is growing. These environments are technologically diverse and rapidly changing, making it difficult to understand and introduce the right controls for secure configuration. Without the right level of visibility, security misconfiguration opens new risks in heterogeneous environments, including the following (a small detection sketch follows the list):

  • Unnecessary administration ports that are open for an application. These expose the application to remote attacks.
  • Outbound connections to various internet services. These could reveal unwanted behavior of the application in a critical environment.
  • Legacy applications that are trying to communicate with applications that do not exist anymore. Attackers could mimic these applications to establish a connection.
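
The first item, exposed administration ports, is also the easiest to check for. Below is a minimal sketch that probes a host for common admin ports from outside its intended trust zone; the hostname and port list are illustrative assumptions, not a complete audit.

```python
# Minimal sketch: probe for administration ports that should not be reachable.
import socket

ADMIN_PORTS = {22: "SSH", 23: "Telnet", 3389: "RDP", 5900: "VNC"}

def exposed_admin_ports(host: str, timeout: float = 1.0):
    open_ports = []
    for port, name in ADMIN_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the TCP connect succeeded
                open_ports.append((port, name))
    return open_ports

# Run this from outside the application's intended trust zone; any hit is a
# candidate misconfiguration to investigate. The hostname is hypothetical.
for port, name in exposed_admin_ports("app.example.internal"):
    print(f"WARNING: {name} ({port}) reachable")
```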

The Enhanced Risk of Misconfiguration in a Hybrid-Cloud Environment

While security misconfiguration in traditional data centers puts companies at risk of unauthorized access to application resources, data exposure, and in-organization threats, the advent of the cloud has expanded the threat landscape exponentially. It comes as no surprise that “2017 saw an incredible 424 percent increase in records breached through misconfigurations in cloud servers,” according to a recent report by IBM. This kind of cloud security misconfiguration accounted for almost 70% of the overall compromised data records that year.

One element to consider in a hybrid environment is the use of public cloud services, third-party services, and applications hosted in different infrastructure. Unauthorized application access, whether from external sources, internal applications, or legacy applications, can expose a business to a large amount of risk.

Firewalls can often suffer from misconfiguration, with policies left dangerously loose and permissive, providing a large amount of exposure to the network. In many cases, production environments are not firewalled from development environments, or firewalls are not used to enforce least privilege where it could be most beneficial.

Private servers with third-party vendors or software can lack visibility or a clear understanding of shared responsibility, often resulting in misconfiguration. One example is the 2018 Exactis breach, where 340 million records were exposed, affecting more than 21 million companies. Exactis was responsible for its data, despite the fact that it used the standard and widely deployed Elasticsearch as its database. Critically, it failed to implement any access control to manage this shared responsibility.

With so much complexity in a heterogeneous environment, and human error often responsible for misconfiguration that may well be outside of your control, how can you demystify errors and keep your business safe?

Learning about Application Behavior to Mitigate the Risk of Misconfiguration

Visibility is your new best friend when it comes to fighting security misconfiguration in a hybrid cloud environment. Your business needs to learn the behavior of its applications, focusing in on each critical asset and its behavior. To do this, you need an accurate, real-time map of your entire ecosystem, which shows you communication and flows across your data center environment, whether that’s on premises, bare metal, hybrid cloud, or using containers and microservices.

This visibility not only helps you learn more about expected application behaviors, it also allows you to identify potential misconfigurations at a glance. An example could be revealing repeated connection failures from one specific application. On exploration, you may uncover that it is attempting to connect to a legacy application that is no longer in use. Without a real-time map into communications and flows, this could well have been the cause of a breach, where malware imitated the abandoned application to extract data or expose application behaviors. With foundational visibility, you can use this information to remove any disused or unnecessary applications or features.
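
As a hedged illustration of how that kind of failure pattern can be surfaced, the sketch below counts repeated connection failures per destination in a flow log; the log format, sample entries, and threshold are invented for this example.

```python
# Flag destinations accumulating repeated connection failures, a pattern that
# (as described above) can indicate calls to a decommissioned application.
from collections import Counter

flow_log = [
    {"src": "billing-02", "dst": "legacy-erp", "status": "failed"},
    {"src": "billing-02", "dst": "legacy-erp", "status": "failed"},
    {"src": "billing-02", "dst": "legacy-erp", "status": "failed"},
    {"src": "web-01",     "dst": "db-01",      "status": "ok"},
]

FAILURE_THRESHOLD = 3
failures = Counter(f["dst"] for f in flow_log if f["status"] == "failed")
for dst, count in failures.items():
    if count >= FAILURE_THRESHOLD:
        print(f"investigate {dst}: {count} failed connections "
              f"(possible decommissioned dependency or stale configuration)")
```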

Once you gain visibility and have a thorough understanding of your entire environment, the best way to manage risk is to lock down the most critical infrastructure, allowing only desired behavior, much like a zero-trust model. Any communication that is not necessary for an application should be blocked. This is what OWASP calls a ‘segmented application architecture,’ and it is their recommendation for protecting yourself against security misconfiguration.

Micro-segmentation is an effective way to make this happen. Strict policy protects communication to the most sensitive applications, and therefore their information, so that even if a breach happens due to security misconfiguration, attackers cannot pivot to the most critical areas.

Visibility and Smart Policy Limit the Risk of Security Misconfiguration

The chances are, your business is already plagued by security misconfiguration. Complex and dynamic data centers are only increasing the risk of human error, as we add third-party services, external vendors, and public cloud management to our business ecosystems.

Guardicore Centra provides an accurate and detailed map of your hybrid-cloud data center as an important first step, enabling you to automatically identify unusual behavior and remove or mitigate unpatched features and applications, as well as identify anomalies in communication.

Once you’ve revealed your critical assets, you can then use micro-segmentation policy to ensure you are protected in case of a breach, limiting the attack surface if misconfigurations go unresolved or if patch management is delayed, whether on-premises or by external vendors. This all-in-one combination of visibility, breach detection, and response is a powerful tool for protecting your hybrid-cloud environment against security misconfiguration, and for strengthening your security posture as a whole.

Want to hear more about Guardicore Centra and micro-segmentation? Get in touch

Looking for a Micro-segmentation Technology That Works? Think Overlay Model

Gartner’s Four Models for Micro-Segmentation

Gartner has recently updated its micro-segmentation evaluation factors document (“How to Use Evaluation Factors to Select the Best Micro-Segmentation Model,” refreshed 5 November 2018).

This report details four different models for micro-segmentation, but it does not make a clear recommendation on which is best. Answering that question means looking at the limitations of each model and recognizing what the future looks like for dynamic hybrid-cloud data centers. I recommend reading the report and evaluating the different capabilities. For us at Guardicore, however, it is clear that one model stands above the others, and it should be no surprise that vendors that previously used other models are now changing their technology to adopt it: Overlay.

But first, let me explain why other models are not adequate for most enterprise customers.

The Inflexibility of Native-Cloud Controls

The native model uses the tools that come with a virtualization platform, hypervisor, or infrastructure. This model is inherently limited and inflexible. Even for businesses using only a single hypervisor provider, this model ties them into one service, as micro-segmentation policy cannot simply be moved when you switch providers. In addition, while businesses might think they are working with one IaaS provider or hypervisor, they may well have servers elsewhere, too, the classic Shadow IT problem. The reality is that vendors that used to offer native controls for micro-segmentation have realized that customers are transforming, and have had to develop new Overlay-based products.

More commonly, enterprises know that they are working with multiple cloud providers and services, and need a micro-segmentation strategy that can work seamlessly across this heterogeneous environment.

The Inconsistency of Third-Party Firewalls

This model is based on virtual firewalls offered by third-party vendors. Enterprises using this model are often subject to network layer design limitations, and therefore forced to change their networking topology. They can be prevented from gaining visibility due to proprietary applications, encryption, or invisible and uncontrolled traffic on the same VLAN.

A known issue with this approach is the creation of bottlenecks due to reliance on additional third-party infrastructure. Essentially, this model is not a consistent solution across different architectures, and can’t be used to control the container layer.

The Complexity of a Hybrid Model

A combination of the two models above, the hybrid model attempts to limit the downsides of each model on its own. For more flexibility than native controls allow, enterprises usually deploy third-party firewalls for north-south traffic; inside the data center, where multi-cloud support is not a concern, native controls handle east-west traffic.

However, as discussed, both of these solutions, even in tandem, are limited at best. With a hybrid approach, you also add the extra problems of a complex and arduous setup and maintenance strategy. Visibility and control in a hybrid choice are unsustainable in a future-focused IT ecosystem where workloads and applications are spun up, automated, auto-scaled, and migrated across multiple environments. Enterprises need one solution that works well, not two that are sub-par individually and limited together.

Understanding the Overlay Model – the Only Solution Built for Future Focused Micro-Segmentation

Rather than a patched-together hybrid solution from imperfect models, Overlay is built to be a more robust and future-proof solution from the ground up. Gartner describes the Overlay model as a solution where a host agent or software is enforced on the workload itself. Agent-to-agent communication is utilized rather than network zoning.

One of the downsides of third-party firewalls is that they are inherently hard to scale. In contrast, agents have no choke points to constrain them, so enforcement scales with the workloads themselves.

With Overlay, your business has the best possible visibility across a complex and dynamic environment, with insight and control down to the process layer, including for future-focused architecture like container technology. Agnostic to the operational and infrastructure environment, Overlay is the only model that can bridge infrastructure differences, which means an enterprise has support for anything from bare metal and cloud to virtual machines or microservices, or whatever technology comes next. Without an Overlay model, your business can’t be sure of supporting future use cases and remaining competitive.
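
To illustrate what enforcement on the workload itself adds over network zoning, here is a hypothetical sketch of an agent-side decision that uses local context (process, user, destination FQDN) rather than just IPs and ports. It is a simplified illustration of the Overlay idea, not any vendor’s actual agent.

```python
# Hypothetical host-agent check: the agent sees the local context of a
# connection attempt, not just the 5-tuple, and decides at that granularity.
def agent_decision(conn: dict, policy: list) -> str:
    for rule in policy:
        if all(conn.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "block"   # default-deny, enforced on the workload itself

policy = [
    # Layer 7-aware: allow only this process, run by this user, to this FQDN.
    {"match": {"process": "/usr/bin/psql", "user": "crm-svc",
               "dst_fqdn": "db.crm.internal", "dst_port": 5432},
     "action": "allow"},
]

conn = {"process": "/usr/bin/psql", "user": "crm-svc",
        "dst_fqdn": "db.crm.internal", "dst_port": 5432}
print(agent_decision(conn, policy))     # allow

conn["process"] = "/tmp/dropper"        # same port and destination, wrong process
print(agent_decision(conn, policy))     # block: the local context matters
```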

Not all Overlay Models are Created Equal

It’s clear that Overlay is the strongest technology model, and the only future-focused solution for micro-segmentation. This is true for traditional access-list style micro-segmentation as well as for implementing deeper security capabilities that include support for layer 7 and application-level controls.

Unfortunately, not every vendor provides the best version of Overlay or delivers the full functionality it is capable of. Utilizing the inherent benefits of an Overlay solution means you can put agents in the right places, setting communication policy that works in a granular way. With the right vendor, you can make intelligent choices about where to place agents, using context and process-level visibility all the way to Layer 7. Your vendor should also be able to provide extra functionality, such as enforcement by account, user, or hash, all within the same agent.

Remember that protecting your infrastructure requires more than micro-segmentation; you will have to deploy additional solutions to reduce risk and meet security and compliance requirements.

Micro-segmentation has moved from being an exciting new buzzword in cyber-security to an essential risk reduction strategy for any forward-thinking enterprise. If it’s on your to-do list for 2019, make sure you do it right, and don’t fall victim to the limitations of an agentless model. Guardicore Centra provides an all in one solution for risk reduction, with a powerful Overlay model that supports a deep and flexible approach to workload security in any environment.

Want to learn more about the differences between agent and agentless micro-segmentation? Check out our recent white paper.

What is Micro-Segmentation?

Micro-segmentation is an emerging security best practice that offers a number of advantages over more established approaches like network segmentation and application segmentation. The added granularity that micro-segmentation offers is essential at a time when many organizations are adopting cloud services and new deployment options like containers that make traditional perimeter security less relevant.

Micro-Segmentation Methods

The best way for organizations to get started with micro-segmentation is to identify the methods that best align with their security and policy objectives, start with focused policies, and gradually layer additional micro-segmentation techniques over time through step-by-step iteration.

Harness the Benefits of Micro-Segmentation

One of the major benefits of micro-segmentation is that it provides shared visibility into the assets and activities in an environment without slowing development and innovation. Implementing micro-segmentation greatly reduces the attack surface in environments with a diverse set of deployment models and a high rate of change.

Time to Transform Data Centre Security?

Digital transformation is, by its very definition, redefining how data centres are designed and how services are managed and deployed. In fact, much like the long-maligned ‘perimeter’ security model, many once datacentre-centric workloads are evaporating and re-forming as more agile and elastic cloud-based operational models.

Security Features of the Hybrid Cloud (OpenStack and AWS)

Everyone knows about the many benefits of the cloud: it is infinitely scalable, developer-friendly, and easy to use. However, we often avoid addressing the reality that the cloud is not perfect. The truth is that, despite the cloud’s many merits, it presents a significant challenge from a security standpoint. Security concerns might make you hesitate to deploy your workloads in any cloud, be it public or private – and understandably so.
