We saw the early signs about two years ago: while everyone was talking about cloud migration and moving to the cloud faster, some enterprises increased their investments in the on-premises data center, and they continue to do so today.
In the months since the COVID-19 pandemic first entered our lives and working from home went from tentative reality to necessity, organizations have been moving to the cloud faster, but many applications and workloads still must remain on premises. We write a lot about critical applications that still run on legacy Unix, old Windows versions, ancient Linux distributions and other veteran operating systems that cannot be migrated to the cloud. Many assumed that enterprises would soon manage to migrate all of their workloads to the cloud; that has not been the case.
As enterprises embrace new technologies, cloud computing and microservices architectures, there is a shift inside the data center. Not every application can be migrated, and some applications explicitly should not be moved to the cloud. Some of the reasons are obvious: the need for speed, higher throughput and lower latency. Others are less visible, such as how containers and container operating systems are installed and deployed, and the overall cost of running highly complicated applications in the cloud. For example, a growing number of Kubernetes clusters are being deployed on bare-metal servers because of better performance, lower latency and reliance on hardware accelerators.
Coupled with growing requirements for AI and other machine learning workloads, these developments are driving faster adoption of new hardware and software infrastructure such as NVIDIA GPU-accelerated computing at the edge, faster connectivity, bigger pipes and, overall, faster, simpler and more agile computing.
The modern application runs inside the data center and at the edge. It has extensions to the cloud and must operate as a well-defined single unit under a new architecture.
While networking architects were busy redesigning the data center, the security architects realized that the firewall as we know it is no longer adequate to protect the modern data center, and new technologies are necessary to enable the required level of security and risk mitigation. There are many limitations that prevent traditional firewalls and even newer firewall-as-a-service solutions from addressing their needs.
First and most obvious, firewalls can only protect traffic that they can inspect, which in practice means mostly North-South traffic. Now imagine hundreds or more servers running at 10, 40, 100 and even 200 Gbps. How can your firewall support that amount of traffic? A top-of-rack (ToR) architecture that steers and redirects traffic through the firewall is not relevant for this new design and cannot be used. Moreover, the existing policy management paradigms built for static designs are not suitable for an architecture that supports a dynamic, fast-changing application environment.
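To make the scale problem concrete, here is a rough back-of-envelope sketch. Every number in it (server count, link speed, East-West share, appliance capacity) is an illustrative assumption, not a measurement or a vendor figure:

```python
# Back-of-envelope: aggregate East-West bandwidth vs. firewall inspection capacity.
# All numbers below are illustrative assumptions, not measured or vendor figures.

servers = 500                 # assumed number of servers in the data center
link_speed_gbps = 100         # assumed NIC speed per server (Gbps)
east_west_utilization = 0.30  # assume ~30% of link capacity is East-West traffic
firewall_capacity_gbps = 100  # assumed inspection throughput of one appliance

east_west_gbps = servers * link_speed_gbps * east_west_utilization
appliances_needed = east_west_gbps / firewall_capacity_gbps

print(f"Aggregate East-West traffic: {east_west_gbps:,.0f} Gbps")
print(f"Appliances needed just to hairpin and inspect it: {appliances_needed:,.0f}")
```

Even with conservative assumptions, the math points to dozens of appliances and a lot of traffic hairpinning, which is exactly the design the new data center is trying to avoid.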
There are many other limitations, each of which frankly deserves a blog of its own. But in the interim, we all should accept the fact that some aspects of the firewall market and some of its current deployment scenarios are about to change dramatically. The winds of change have begun to blow.
In contrast, software-defined segmentation allows companies to apply workload and process-level security controls to data center and cloud assets that have an explicit business purpose for communicating with each other. It is extremely effective at detecting and blocking lateral movement in data center, cloud, and hybrid-cloud environments.
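To illustrate the idea (this is a minimal sketch, not Guardicore's actual policy engine or syntax), the snippet below shows label-based, allow-list segmentation: only flows with an explicit business purpose are permitted, and everything else, including lateral movement between unrelated workloads, is denied by default. All labels, addresses and rules are hypothetical.

```python
# Minimal sketch of label-based segmentation (hypothetical labels and rules,
# not Guardicore's actual policy model).

# Workloads are identified by labels rather than IP addresses.
workloads = {
    "10.0.1.15": {"app": "web", "env": "prod"},
    "10.0.2.20": {"app": "orders", "env": "prod"},
    "10.0.3.30": {"app": "payments-db", "env": "prod"},
    "10.0.9.99": {"app": "build-agent", "env": "dev"},
}

# Allow rules describe communication with an explicit business purpose.
allow_rules = [
    {"src": {"app": "web"},    "dst": {"app": "orders"},      "port": 8443},
    {"src": {"app": "orders"}, "dst": {"app": "payments-db"}, "port": 5432},
]

def matches(labels: dict, selector: dict) -> bool:
    """A workload matches a selector if it carries all of the selector's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def is_allowed(src_ip: str, dst_ip: str, port: int) -> bool:
    """Default-deny: a flow passes only if some rule explicitly allows it."""
    src, dst = workloads.get(src_ip, {}), workloads.get(dst_ip, {})
    return any(
        matches(src, r["src"]) and matches(dst, r["dst"]) and port == r["port"]
        for r in allow_rules
    )

print(is_allowed("10.0.1.15", "10.0.2.20", 8443))  # True: web -> orders is sanctioned
print(is_allowed("10.0.9.99", "10.0.3.30", 5432))  # False: lateral movement blocked
```

Because the policy follows labels rather than network location, it stays valid when workloads move between the data center and the cloud.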
And then DPUs and SmartNICs were invented.
Data processing units (DPUs) are changing how and where data center security is performed. DPU-based SmartNICs fuel the new architectural redesign. It started with hyperscalers, large service providers and tier-1 cloud service providers (CSPs), which discovered the benefits of a managed device that can free up expensive CPU cycles. They all like how SmartNICs provide added-value services beyond core networking functionality. As a reminder, here are some of their capabilities:
- Offloading network functions
- Providing security-related processing
- Offloading TCP processing to dedicated engines, freeing up CPU cores
- Improving networking performance
- Providing cryptography capabilities like faster encryption
And there are even more security services, such as workload isolation, secure boot and protecting customers' workloads from other tenants.
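As a small, host-side illustration of offload capabilities, the sketch below uses the standard Linux ethtool utility to list which offloads a NIC currently advertises. The interface name "eth0" is an assumption, the set of features reported depends entirely on the NIC and driver, and this is a diagnostic query, not a DPU management API:

```python
# Sketch: list which hardware offloads a NIC currently exposes to the host,
# using the standard Linux `ethtool -k` command. The interface name "eth0"
# is an assumption; feature availability depends on the NIC and driver.
import subprocess

def nic_offloads(interface: str = "eth0") -> dict:
    """Return a {feature: enabled} map parsed from `ethtool -k <interface>`."""
    out = subprocess.run(
        ["ethtool", "-k", interface], capture_output=True, text=True, check=True
    ).stdout
    features = {}
    for line in out.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            features[name.strip()] = value.strip().startswith("on")
    return features

if __name__ == "__main__":
    feats = nic_offloads("eth0")
    for f in ("tcp-segmentation-offload", "rx-checksumming", "tx-checksumming"):
        print(f, "->", feats.get(f, "not reported"))
```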
Partnering with NVIDIA, Guardicore pioneered the concept of using SmartNICs for micro-segmentation to enable the best of both worlds: accelerating performance and functionality while providing secure segmentation capabilities for the new data center.
Using Guardicore with the NVIDIA BlueField-2 DPU allows enterprise customers to embrace the new and cover the old with software-defined segmentation enforced in hardware, providing a faster, more granular way for enterprises to protect their critical assets. Projects that in the past may have spanned years can now be completed in a matter of weeks with this new approach, quickly reducing risk and validating compliance.
Guardicore is working with NVIDIA to provide a solution that, just like your DevOps practices, is decoupled from any particular infrastructure, and is both automatable and auto-scalable. On top of this, it provides equal visibility and control across the board in a granular way, so that speed and innovation can thrive, with security as an equal partner in the triangle of success.
We are also working with NVIDIA on new BlueField-2 DPU integrations to support the new data center architecture. With this integration, we enable enterprise customers to accelerate their applications, innovate faster and deliver competitive solutions to market.