Considerations when managing a hybrid cloud environment


82% of businesses have a self-declared hybrid cloud strategy, which forms a necessary foundation for any organization ultimately looking to migrate its applications to the public cloud. And with the promise of combining the benefits of both public and private clouds, as well as modernizing service delivery and cutting IT infrastructure costs, the appeal of a hybrid model is hardly surprising. However, the reality of managing such an environment is often far more complex than many imagine.

Hybrid cloud opportunities. When key touch points across private and public cloud platforms are managed effectively, cost savings, scalability and much more agile service delivery can be realized – with the centralized management of workloads across different platforms providing the interoperability needed to achieve these outcomes. Developers in particular are keen to leverage hybrid cloud because it gives them the flexibility to rapidly adapt their application delivery, allowing them to select the best IT infrastructure for a specific application. Other use cases include the ability to retain applications with sensitive data on-premises. Similarly, legacy applications can be modernized according to the organization’s own timetable and resources, before being moved to the public cloud (if at all).

Hybrid cloud challenges. It can be hard to realize the full potential of a hybrid cloud environment, with the model often ending in compromise or – worse – failure. Top of mind are concerns about cost center ownership, security, and problem resolution. And with organizations estimating that 32% of their cloud spend is wasted, the desired cost efficiencies can be hard to come by. Choosing the right public cloud provider can also be an incredibly complex decision, as can identifying end-to-end application dependencies prior to migration. A hybrid model also requires additional cross-departmental collaboration in order to streamline processes and workflows, and to identify cost-saving opportunities.

For those currently grappling with hybrid cloud management, here are five key considerations for optimizing the model:

1. Address security challenges

73% of IT staff are concerned about the security of cloud-native applications. This is understandable when a recent report found that 27% of organizations have experienced a security incident in their public cloud infrastructure in the last 12 months – many due to misconfigurations in cloud infrastructure. A public-facing environment not only potentially introduces more vulnerabilities, but may also restrict the ability to customize security configurations.

One layer of protection that can be leveraged in the cloud is a Web Application Firewall (WAF). A WAF inspects traffic and makes decisions to improve the security of the application (this differs from a standard firewall, which provides a barrier between external and internal network traffic). Arguably, any web server that is publicly facing and available on the public internet should have at least a basic level of web protection, and many cloud-native services – AWS, Azure, and Google Cloud among them – have their own native WAF built in. Many Content Delivery Networks (CDNs) also bundle WAF-based protection as part of their package. So it’s now expected that if you’re in the public cloud, or if you’re using a CDN, some level of web protection will be provided.

There are, however, many different types of WAF. For example, there are WAFs built on the OWASP ModSecurity Core Rule Set (CRS) – an open-source set of generic attack-detection rules designed for the ModSecurity WAF engine. What a lot of the web-based services in the public cloud do is offer either: 1) a cut-down version of the Core Rule Set, stripping out some of the functionality to make it easier to use; or 2) very little or no flexibility, so the ability to do anything custom to the application is lost. And almost always, public cloud vendors rely on outdated versions of the Core Rule Set – running on WAF rules that are perhaps two, three, four, or five years out of date. Not great from a security perspective. So realistically, not only does a provider need to be identified that has the right level of protection, but it also needs to be one that offers extended WAF functionality.
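To make the idea of rule-based inspection concrete, here is a minimal sketch in the spirit of CRS-style pattern matching. These few regexes are invented for illustration – the real Core Rule Set contains hundreds of far more sophisticated rules and uses anomaly scoring rather than a single match:

```python
import re

# Simplified, illustrative detection rules in the spirit of the OWASP CRS.
# These are NOT real CRS rules -- the actual rule set is far larger and
# uses anomaly scoring rather than one-shot pattern matches.
RULES = [
    ("sql-injection", re.compile(r"(\bunion\b.+\bselect\b|'\s*or\s+'1'\s*=\s*'1)", re.I)),
    ("xss", re.compile(r"<script\b|onerror\s*=", re.I)),
    ("path-traversal", re.compile(r"\.\./")),
]

def inspect(query_string: str) -> list:
    """Return the names of any rules the request matches."""
    return [name for name, pattern in RULES if pattern.search(query_string)]

print(inspect("id=1' OR '1'='1"))   # flags sql-injection
print(inspect("page=home"))         # clean request -> []
```

A cut-down cloud WAF effectively ships a shorter version of a list like this with no way to add your own entries – which is exactly the flexibility gap described above.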

2. Control cloud spend

Firstly, it is important to note that forecasting cloud costs can be a tricky business. With pressure from the top to take a cloud-first approach, it can be tempting to jump first and ask questions later, but the fact that 58% of CIOs and CTOs have admitted to overspending their cloud budget should perhaps sound a note of caution. Despite low barriers to entry, organizations expanding their infrastructure over time become lucrative customers for the big hyperscalers – so much so that there comes a tipping point where the private cloud ultimately becomes cheaper than the public cloud.

Secondly, optimizing costs in the cloud is nigh on impossible without greater cost visibility and more transparent billing, and with ever-shifting IT infrastructure and technology this only becomes more complex. Having said that, some applications experience more predictable demand than others, and so may be better candidates for initial deployment to the public cloud. The reality, though, is that cost optimization in the cloud is a moving target and should be seen as an ongoing exercise, with the need to continuously review user demand, pricing models, Service Level Agreements (SLAs) and performance expectations, all of which will inevitably change over time.
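The tipping point mentioned above can be reasoned about with a simple break-even calculation. The sketch below compares cumulative public cloud spend against a private deployment’s up-front capex plus running opex; all figures are hypothetical, and a real model would also factor in discounts, reserved capacity, staffing, power, and demand variability:

```python
# Illustrative break-even sketch: at what month does a private deployment
# (capex up front, lower monthly opex) become cheaper than pay-as-you-go
# public cloud? All figures below are hypothetical.

def breakeven_month(public_monthly, private_capex, private_monthly):
    """First month at which cumulative private cost drops below public,
    or None if it never does within a 10-year horizon."""
    for month in range(1, 121):
        public_total = public_monthly * month
        private_total = private_capex + private_monthly * month
        if private_total < public_total:
            return month
    return None

# e.g. $12k/month public vs $200k capex + $4k/month to run privately
print(breakeven_month(12_000, 200_000, 4_000))  # -> 26 (just over 2 years)
```

The point is not the specific numbers but the shape of the curve: steady, predictable workloads accumulate public cloud charges month after month, which is why they are the ones most worth re-evaluating over time.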

3. Optimize architecture

The fundamental appeal of hybrid cloud is the flexibility of its architecture to scale up and down as and when needed. However, as demands and technology change on a regular basis, the architecture will need constant monitoring and adjustment to ensure workflows and applications are hosted on the correct platform. Building a ‘single pane of glass’ can play a critical part in monitoring, and therefore ultimately managing, disparate workloads, regardless of where they sit and which API (Application Programming Interface) they use. This single interface can help remove the complexity of multiple native interfaces, translating what’s important from one cloud to another.

The alternative is having different monitoring tools for different systems, which means jumping between tools to investigate any issues, work out where a problem originates, and debug accordingly. While a ‘single pane of glass’ monitoring tool can be enabled in the public cloud, selecting a cloud-native monitoring tool can increase the risk of vendor lock-in – the very thing you’re likely trying to avoid in an agile environment.
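At its core, a ‘single pane of glass’ is a set of adapters that translate each provider’s native metric format into one common record. The payload shapes below are invented purely for illustration – they are not real AWS or Azure API responses – but they show the normalization pattern:

```python
# Sketch of the 'single pane of glass' idea: per-provider adapters map
# each cloud's native metric payload into one common schema, so one view
# can rank workloads across clouds. Payload shapes here are invented.

def from_provider_a(payload):
    return {"app": payload["Dimensions"]["app"],
            "cpu_pct": payload["Average"],      # A reports a percentage
            "cloud": "provider-a"}

def from_provider_b(payload):
    return {"app": payload["resource"],
            "cpu_pct": payload["cpu"] * 100,    # B reports a 0-1 ratio
            "cloud": "provider-b"}

def unified_view(a_metrics, b_metrics):
    """Merge both feeds into one list, sorted by CPU pressure."""
    rows = ([from_provider_a(m) for m in a_metrics] +
            [from_provider_b(m) for m in b_metrics])
    return sorted(rows, key=lambda r: r["cpu_pct"], reverse=True)

view = unified_view(
    [{"Dimensions": {"app": "billing"}, "Average": 71.0}],
    [{"resource": "checkout", "cpu": 0.93}],
)
print(view[0]["app"])  # the busiest app, whichever cloud it runs in
```

Notice that even the units differ between providers (percentage vs ratio) – exactly the kind of translation a unified interface has to do so that comparisons across clouds are meaningful.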

4. De-risk migration

De-risking hybrid deployments by simplifying the architecture on each platform is also critical. This means conducting a full audit of existing infrastructure, with a view to taking the complexity out and therefore de-risking over time. The less there is to manage, the less risk there is and the easier it is to handle: a smaller team will find it simpler to maintain, and will be more likely to be able to guarantee high availability and performance. Simplifying IT architecture not only reduces costs and saves time, it also has the potential to improve services, as well as protecting data security. This can, however, be a challenge, because historically many applications have been very siloed, with few or no links between systems. Hence there is now a strong focus on building modular systems that support growth and can actually talk to one another.

As a first step, existing IT architecture needs to be reviewed and reshaped to ensure it remains fit for purpose, and to gain a thorough understanding of the existing patchwork of IT ecosystems. Secondly, there is a need to determine how to migrate, support, maintain and guarantee the performance of the workloads identified. For example, with a per-app high availability approach, maintenance headaches are minimized because work on one application has no bearing on the remainder of the application stack, and it becomes much easier to scale without impacting performance.
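The per-app high availability idea can be sketched very simply: each application gets its own active/standby pair and fails over independently, so maintenance on one app never touches the rest of the stack. The class and node names below are purely illustrative:

```python
# Minimal sketch of per-app high availability: each application owns an
# independent active/standby pair, so failing one over has no bearing on
# any other application in the stack. Names are illustrative only.

class AppPair:
    def __init__(self, name):
        self.name = name
        self.active, self.standby = "node-1", "node-2"

    def failover(self):
        """Promote the standby; only this one app is affected."""
        self.active, self.standby = self.standby, self.active

apps = {name: AppPair(name) for name in ("billing", "checkout", "search")}
apps["billing"].failover()     # maintenance on billing's active node
print(apps["billing"].active)  # -> node-2
print(apps["checkout"].active) # -> node-1 (untouched)
```

Contrast this with a monolithic HA pair, where failing over for maintenance on one component drags every application across with it.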

5. Plan for the worst case scenario

No matter what your migration strategy (from on-premises, to private cloud, and ultimately public cloud), all scenarios should be considered in order to mitigate risk. For example, there needs to be a means of failing back, offering protection from unforeseen events that could cause data loss or outages. In this way, it is possible to take a more secure, phased approach, because each workload can be moved and fully tested to make sure it is operating as expected prior to formal migration. Furthermore, having a tried-and-tested disaster recovery plan for cloud infrastructure is the first step to resuming critical functions and avoiding downtime if something goes wrong. It is worth underlining that the security measures deployed into the production stack also need to be applied to the recovery mechanism.

There are a number of different recovery methods to consider: cold, warm, and hot sites. A cold site is little more than standby infrastructure that must be provisioned and restored from scratch – the cheapest option, but the slowest to bring online. A warm site relies on incremental backups, using these to recover to the last stable production environment; depending on the level of investment, it may still take some time to bring online, and in some cases (to reduce costs) a degree of data loss may be an acceptable compromise. Hot sites help the organization recover in a matter of minutes; in a best-case scenario the hot site is always on, increasing both capacity and resilience. Although public cloud services usually have basic disaster recovery plans integrated into them, enhanced disaster recovery capability needs to be requested to guarantee near-instant recovery time using incremental snapshots.
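The warm-site compromise can be put in rough numbers. The sketch below captures two quantities worth agreeing on up front: the worst-case data loss window (RPO), which is one full snapshot interval, and the recovery time, which grows with the length of the restore chain. All figures are illustrative, not from any specific provider:

```python
# Back-of-envelope sketch of the warm-site trade-off: snapshot interval
# bounds the data you can lose (RPO), and the restore chain length drives
# how long recovery takes (RTO). Figures below are purely illustrative.

def worst_case_rpo_minutes(snapshot_interval_min):
    """Data written since the last snapshot is lost on failover."""
    return snapshot_interval_min

def restore_time_minutes(base_restore_min, increments, per_increment_min):
    """Warm sites replay a base image plus each incremental backup."""
    return base_restore_min + increments * per_increment_min

print(worst_case_rpo_minutes(60))         # hourly snapshots -> up to 60 min lost
print(restore_time_minutes(45, 24, 2.5))  # a day of hourly increments -> 105.0 min
```

Shortening the snapshot interval tightens the RPO but lengthens the restore chain – which is exactly why a hot, always-on site is the only way to get near-instant recovery, at a correspondingly higher cost.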


Becoming cloud-native (if indeed that is the ultimate goal) requires a steady, phased migration and management strategy, with success criteria met at each stage, on each platform. Hence it is predicted that hybrid cloud will remain the dominant model for many years to come, as organizations continue to grapple with their hybrid environments and determine which applications best suit which platforms, at what times, and how to integrate them. At the end of the day, there is no substitute for identifying precisely what is being managed, who owns it, how it needs to be secured, and the ins and outs of the ever-evolving Service Level Agreements (SLAs) of each hyperscaler.

James Loveday

James Loveday is a Cloud Specialist and #ADCHero at, guardians of uptime, and experts at load balancing critical applications, using clever, not complex, load balancers that put IT teams in control. Find out how they keep businesses flowing here.
