This is part 2 of a 2-part series examining how security requirements have changed for an evolving IT infrastructure.
In Part 1 of this blog series, we discussed how businesses and IT teams are changing the way they operate, and how that shift presents security practitioners with a unique opportunity to align their tools and strategies with the direction the business is going. In this post, we'll review some of the strategies and tools that can help you secure your hybrid cloud environment and keep pace with the DevOps model.
Visibility and Visualization
As security practitioners, we've always discussed the need for better visibility into the environments we're tasked with securing. The strategy most organizations have taken is to implement tools that collect NetFlow data and logs from various points in the environment and then use a SIEM solution to correlate that data into something actionable. Unfortunately, that NetFlow data too often becomes an overwhelming pool of information: we don't know what to look for, and we have no method of turning the data into insight about the environment. NetFlow data also lacks the application awareness and scalability needed to provide effective visibility into communication flows in a hybrid cloud environment.
Security practitioners need to implement solutions that provide application-aware visibility into the communication flows within the environment. This means visibility that includes the applications and processes creating and receiving the flows. That level of detail gives us real, actionable information we can use to identify rogue applications, processes and malware running in the environment. Once we've collected all of this information, we need an intelligent way to parse through it, and that's where visualization capabilities help us simplify the processing of large amounts of data. Imagine a graphical view of your environment that you can manipulate for threat hunting, segmentation or troubleshooting. These visibility and visualization capabilities are the means by which we start to regain control of the environment.
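To make the difference from plain NetFlow concrete, here is a minimal sketch of what application-aware flow data enables. The flow records, field names and process names are hypothetical, standing in for what an agent-based visibility tool might report; the point is that once flows carry process context, finding rogue software becomes a simple query.

```python
from collections import defaultdict

# Hypothetical flow records as an agent might report them. Unlike plain
# NetFlow, each flow is annotated with the process that created it.
flows = [
    {"src": "10.0.1.5", "dst": "10.0.2.8", "dport": 5432, "process": "postgres"},
    {"src": "10.0.1.5", "dst": "10.0.2.8", "dport": 5432, "process": "postgres"},
    {"src": "10.0.3.9", "dst": "10.0.2.8", "dport": 5432, "process": "cryptominer"},
]

def summarize_by_process(flows):
    """Aggregate flow counts per (process, destination port) pair."""
    summary = defaultdict(int)
    for f in flows:
        summary[(f["process"], f["dport"])] += 1
    return dict(summary)

def flag_unexpected(flows, allowed_processes):
    """Return flows created by processes outside the expected set,
    e.g. rogue applications or malware talking to the database tier."""
    return [f for f in flows if f["process"] not in allowed_processes]
```

With only IP/port data, the cryptominer's connection to port 5432 would look identical to legitimate database traffic; the process annotation is what makes it stand out.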
Segmentation

For a long time, segmentation was seen as an infrastructure responsibility, and security practitioners used it as a checkbox for compliance initiatives. Advancements in this concept have made segmentation a powerful tool for controlling communication flows and restricting access to sensitive parts of the environment, which helps defend against lateral movement by attackers. Micro-segmentation accomplishes this by defining segmentation policies down to the application level, allowing us to specify how flows between applications may occur and to alert on or block flows that don't meet our defined policies. Once we've gained the visibility we need, micro-segmentation becomes a key part of controlling and securing our environment.
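The alert-or-block behavior described above can be sketched as a simple policy lookup. The rule format and application names here are illustrative, not any vendor's policy model; real micro-segmentation products express this per workload and enforce it at the host or hypervisor level.

```python
# Hypothetical application-level segmentation policy: each rule names the
# source application, destination application, and port it permits.
POLICY = [
    {"src_app": "web", "dst_app": "api", "port": 8443},
    {"src_app": "api", "dst_app": "db",  "port": 5432},
]

def evaluate_flow(src_app, dst_app, port, default_action="block"):
    """Return 'allow' if the flow matches a policy rule, otherwise the
    default action: 'alert' during a learning phase, 'block' once the
    policy is trusted."""
    for rule in POLICY:
        if (rule["src_app"], rule["dst_app"], rule["port"]) == (src_app, dst_app, port):
            return "allow"
    return default_action
```

Starting with a default of alert-only, then flipping to block after the policy has been validated against observed flows, is a common way to roll out segmentation without breaking applications.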
Asset Discovery and Management
We say it all the time: we cannot protect something if we don't know it exists. This is even truer now that DevOps teams are deploying new systems at a much faster pace. The solutions we deploy must have a method of discovering assets or integrating with existing asset management solutions. This helps us understand the magnitude and scope of what we're trying to protect, so we can implement the appropriate tools at a scale that matches our environment.
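At its core, asset discovery reduces to reconciling what a scan actually sees on the network against what the asset management system believes exists. The sketch below uses hypothetical IP addresses; a real integration would key on richer identifiers (hostnames, instance IDs, tags).

```python
# Hypothetical reconciliation between a discovery scan and the existing
# asset inventory. Addresses are illustrative placeholders.
def reconcile(discovered, inventory):
    """Return (unknown, stale): hosts seen on the network but absent from
    inventory (assets we cannot yet protect), and inventory entries no
    longer observed (candidates for cleanup)."""
    unknown = sorted(set(discovered) - set(inventory))
    stale = sorted(set(inventory) - set(discovered))
    return unknown, stale
```

Run continuously, the "unknown" list is the actionable output: every entry is a system that DevOps deployed faster than security could register it.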
Configuration management tools also help us keep pace with DevOps teams. Tools such as Chef and Puppet allow DevOps teams to quickly deploy and configure systems. Security practitioners should work with DevOps teams to leverage these tools to deploy security solutions as systems are provisioned. This cuts down on the time it takes to secure a system and helps bridge the gap between security and DevOps teams by showing that we understand their need for speed.
Breach Detection

To address the challenge of dwell time, we can build on the visibility we've gained into our environment by implementing intelligent, scalable breach detection solutions. Detecting a breach early lets us contain the attack before it becomes a much larger incident. The 2017 Cost of a Data Breach study by the Ponemon Institute shows that identifying a breach within the first 100 days of the attack can save an organization roughly $1 million, and containing an identified breach within 30 days of detection can save roughly another $1 million. Yet the average time to detect a breach is currently 191 days, and the average time to contain one is 66 days.
The graphs above show the relationship between mean time to identify (left) and mean time to contain (right) and the average total cost of a breach. Source: Ponemon Institute, The 2017 Cost of a Data Breach Study.
To detect and contain breaches faster, it's time to go beyond typical malware detection capabilities and invest in the ability to detect and react to lateral movement within the environment. Lateral movement is a core piece of an attacker's strategy once they've gained a foothold in the environment. The investigation following the Anthem breach determined that the attacker accessed 90 systems during their dwell time. That is a lot of movement, and a lot of opportunity to detect the attack before it became the large-scale breach we know it as today. As an attacker moves from system to system, we have an opportunity to detect that movement early and take steps not only to stop the attack but to learn from the attacker, redirecting the lateral movement into isolated deception environments where we can analyze their tools and methods.
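One simple lateral-movement heuristic is to baseline which internal host-to-host connections are normal and alert the first time a new pair appears. This is a deliberately minimal sketch of the idea; real detection products layer on richer signals (credentials used, protocols, timing) and the deception redirect mentioned above.

```python
# Hypothetical lateral-movement heuristic: flag the first time a host
# opens a connection to an internal peer it has never talked to before.
def detect_new_peers(events, baseline):
    """events: iterable of (src_host, dst_host) internal connections.
    baseline: set of (src, dst) pairs observed during normal operation.
    Returns the connections worth alerting on; alerts once per pair,
    then adds the pair to the baseline."""
    alerts = []
    for pair in events:
        if pair not in baseline:
            alerts.append(pair)
            baseline.add(pair)
    return alerts
```

Against an attacker touching 90 systems, even this naive approach would generate dozens of anomalous-pair alerts long before the full dwell time elapsed; the challenge in practice is keeping the baseline accurate in a fast-changing DevOps environment.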
Heterogeneous Support

As we continue to move toward hybrid cloud environments where workloads live both on-premises and in multiple clouds, the tools we implement need to support these heterogeneous environments. We cannot afford to invest in one tool that supports only VMware, another that supports AWS and a third that supports Microsoft Azure. We need a single tool that supports all environments, providing a single view and the same level of capabilities everywhere.
Extensibility

Implementing point solutions that do not integrate with the other solutions in our environment is a costly investment with very little return. As security practitioners, we should demand that every solution include an open API we can use to integrate our solutions, allowing us to automate reactions and responses between tools. That extensibility helps organizations address the cybersecurity skills gap and speeds up our incident response practices.
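The automation that open APIs enable can be as simple as the glue sketched below: a detection tool's alert triggers a quarantine call to an enforcement tool. The endpoint path, payload shape and severity field are entirely hypothetical, standing in for whatever REST API a given vendor exposes.

```python
import json

# Hypothetical integration between a detection tool and an enforcement
# point. Endpoint, payload shape and field names are illustrative only.
def build_quarantine_request(host_id, api_base="https://fw.example.internal"):
    """Construct the HTTP request a response playbook would send."""
    return {
        "method": "POST",
        "url": f"{api_base}/api/v1/quarantine",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"host_id": host_id, "action": "isolate"}),
    }

def on_alert(alert, send):
    """Glue between tools: only quarantine on high-severity alerts, so a
    noisy detector cannot isolate half the environment."""
    if alert.get("severity") == "high":
        return send(build_quarantine_request(alert["host_id"]))
    return None
```

Without an open API on both sides, this response would instead be a ticket waiting in an analyst's queue, which is exactly the manual bottleneck the skills gap makes unaffordable.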
Along the same lines, we should implement platforms that integrate a set of tools rather than individual point solutions that each address a single need. Platforms reduce the complexity of supporting our environments and let security practitioners learn a single UI capable of performing multiple functions.
Scalability

When we talk about securing a hybrid cloud environment, the solutions we implement must be capable of scaling to support larger environments with higher traffic rates. The platforms we implement also need to support auto-scaling and burstable environments without impacting the performance of the systems they protect.