Native Hybrid Cloud Storage Requirements

Although physical-era storage environments can coexist with and support hybrid cloud storage initiatives, they leave a great deal of efficiency and flexibility on the table. In the world of the hybrid cloud, virtualization is a core component and, as such, a filesystem purpose-built for virtualization is important.

Virtualization-specific Architecture

As the saying goes, “you can put lipstick on a pig” but, at the end of the day, you just have a pretty pig, not an awesome cheetah. You’re still hampered by the limitations inherent in the underlying animal. In the world of storage, you can add all the extensions you like to a legacy architecture, but you will never get the same level of benefit that a modern storage system delivers.

Instead, customers should look for storage systems that don’t depend on uncontrollable and inconsistently delivered API implementations (VVols is one such implementation) to deliver value. VM-aware storage platforms are far better positioned for hybrid cloud storage environments because they eschew complexity in favor of simple implementations that increase performance and manageability. As organizations seek to implement hybrid cloud architectures, improved performance potential and easier administration are significant parts of their journey.

Cloud is a Journey
Consider the Cloud. Often, when thinking about the Cloud, Amazon Web Services and Microsoft Azure immediately come to mind. However, many organizations hesitate to simply throw workloads over the fence to these public cloud providers. Rather, organizations considering the Cloud are really looking for the kinds of outcomes that public cloud services make achievable, such as improved economics and frictionless operations. For them, the Cloud is not a destination but a journey of improvement.

Automation, Orchestration, and Clean APIs

Legacy storage architecture is, well… not friendly. Much has been written about the complexity of legacy storage systems and the negative impact that this complexity has on IT and the business as a whole. Remember, your storage is the lifeblood of your business. Because it hooks into everything you do, difficulty with that resource creates challenges across the organization.

Modern storage with a virtualization-specific architecture eschews this complexity, which also makes it easier to automate. Comprehensive automation is a key tenet of the Cloud and, without it, you can’t implement higher-order features, such as self-service capabilities, inter-system orchestration, and DevOps-friendly workflows. In Figure 4-xx, note that automation drives orchestration and self-service.

Figure 4-xx. APIs, automation, and orchestration are key characteristics of hybrid cloud storage services

Clean REST-based APIs are another key underpinning of automation, orchestration, and user self-service in the world of hybrid cloud storage. A complete set of APIs enables a storage system to be fully managed without a person ever clicking a mouse.
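To make that concrete, here is a minimal sketch in Python using the requests library against a hypothetical storage REST API. The base URL, endpoint paths, payload fields, and token are invented for illustration; any real platform documents its own equivalents. The point is simply that provisioning and protecting storage can happen entirely in code.

```python
import requests

# Hypothetical REST endpoint and token -- every real storage platform
# defines its own paths, payloads, and authentication scheme.
BASE_URL = "https://storage.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <api-token>"}

# Provision a new volume entirely through the API -- no mouse clicks.
resp = requests.post(
    f"{BASE_URL}/volumes",
    headers=HEADERS,
    json={"name": "ecommerce-vols", "size_gb": 500, "qos_iops": 5000},
    timeout=30,
)
resp.raise_for_status()
volume_id = resp.json()["id"]

# Snapshot it, again purely programmatically.
resp = requests.post(
    f"{BASE_URL}/volumes/{volume_id}/snapshots",
    headers=HEADERS,
    json={"label": "pre-deploy"},
    timeout=30,
)
resp.raise_for_status()
print(f"Volume {volume_id} created and snapshotted via REST alone")
```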

Let’s consider for a moment a web-based, web-scale application that straddles both a local data center and the cloud. Imagine this common scenario: the company that developed the application experiences occasional surges in its use. Perhaps the application supports an ecommerce site, and holidays drive additional traffic. During such surges, the resources assigned to support the application must be continually adjusted. Without automation and orchestration driven by a clean API, this work has to be handled manually.

In other words, someone must keep constant watch over the application’s performance levels and storage capacity usage and then make reactive adjustments to resource levels. This reactive, manual method all but guarantees that some customers will experience poor results while they wait for IT to assign new resources.

Now, let’s assume that the organization deployed its application atop an API-rich system that embraces a modern virtualization-specific architecture. As part of the application development process, developers can integrate deeply with the storage environment. They can write routines that watch storage latency and capacity and, as latency rises under load or as capacity dwindles, enable the application to proactively deploy new virtual machines and storage services without an IT staff person having to be involved. A minimal sketch of such a routine appears below.
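This sketch assumes the same hypothetical REST API as the earlier example; the stats endpoint, the thresholds, and the scale-out call are illustrative stand-ins for whatever the storage platform and orchestration layer actually expose.

```python
import time
import requests

BASE_URL = "https://storage.example.com/api/v1"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <api-token>"}

LATENCY_THRESHOLD_MS = 5.0    # assumed SLA ceiling for this application
CAPACITY_THRESHOLD = 0.85     # scale out when 85% of capacity is used


def provision_vm_and_storage():
    """Hypothetical orchestration call: clone a VM template and attach
    fresh storage. A real implementation would call the hypervisor and
    storage APIs here."""
    requests.post(f"{BASE_URL}/orchestration/scale-out",
                  headers=HEADERS, json={"template": "web-tier"},
                  timeout=30)


while True:
    stats = requests.get(f"{BASE_URL}/stats/vm/web-tier",
                         headers=HEADERS, timeout=30).json()

    # Scale proactively instead of waiting for a human to notice.
    if (stats["latency_ms"] > LATENCY_THRESHOLD_MS
            or stats["used_fraction"] > CAPACITY_THRESHOLD):
        provision_vm_and_storage()

    time.sleep(60)  # poll once a minute
```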

That’s the potential power behind the software-centric nature of today’s virtualization-aware storage products.

Unified Model Across Different Components

Virtual machines are just the tip of the iceberg in the modern data center. Today, containers are making a big splash in the data center pond, and with good reason: They are far more efficient than virtual machines.

The original reason people switched from physical servers to virtual ones was resource utilization. With physical servers, you had a ton of hardware to deploy, and you only ended up using between 5% and 15% of that hardware’s potential, on average. There was an enormous amount of wasted capacity. With virtual machines, we used the same hardware, but we could cram more workloads onto it because we abstracted the hardware and turned each server into a software construct, eliminating much of the overhead and massively increasing utilization.

Virtualization brought relatively high levels of efficiency and flexibility to IT, particularly when compared to the world of the physical workload. With virtualization, companies can run operating systems from many vendors and different versions of each of those operating systems on the same hardware. Those workloads can be easily shifted between hosts, between data centers, and even to and from cloud providers.

But, for many workloads, virtualization is still inefficient. Every single virtual machine gets its own operating system. Think about that for a second. If you install 100 identical virtual machines to support an application, that’s 100 individual operating systems running in your environment. Every operating system imposes steep overhead by consuming CPU, memory, and storage resources.

To combat this inefficiency, containers have hit the market. Containers rest on the assumption that the operating system for many workloads is, in fact, the same and can therefore be shared safely, eliminating the overhead of running an individual operating system per workload. This has the immediate effect of increasing density in the environment, since you can cram even more workloads onto the hardware.

Virtual machines vs. containers

In terms of hybrid cloud storage, it’s important for this resource to support containers just as well as it supports virtual machines, while continuing to enable deep understanding of performance bottlenecks. Containers will continue to grow in usage, particularly for web-scale applications and for tools that leverage web services, which we discussed in another article. They provide a more efficient operational framework than virtual machine paradigms offer. A unified model might look something like the sketch below.
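As an illustration of such a unified model, the following sketch walks both VMs and containers through a single hypothetical stats interface; the endpoint paths and the host/network/storage latency fields are assumptions for the sake of the example, not any real product’s API.

```python
import requests

BASE_URL = "https://storage.example.com/api/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <api-token>"}

# A unified data model would expose VMs and containers through the same
# stats interface; the paths and field names here are illustrative only.
for obj_type in ("vms", "containers"):
    objects = requests.get(f"{BASE_URL}/{obj_type}",
                           headers=HEADERS, timeout=30).json()
    for obj in objects:
        stats = requests.get(f"{BASE_URL}/{obj_type}/{obj['id']}/latency",
                             headers=HEADERS, timeout=30).json()
        # Per-object latency broken down by layer makes the actual
        # bottleneck (host, network, or storage) visible at a glance.
        print(f"{obj['name']}: host={stats['host_ms']}ms "
              f"network={stats['network_ms']}ms "
              f"storage={stats['storage_ms']}ms")
```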

Containers in the Real World
Containers are still in their infancy and will undergo the same interest, scrutiny, and deployment challenges that initially faced virtualization, but with some twists. Whereas virtualization generally enabled companies to simply drag and drop applications from the physical environment to the virtual one, with containers this is not the case. Applications must be purpose-built for containers, at least for now. Over time, expect more tools to emerge that enable more seamless migration of workloads to containers. For now, however, understand that the market is still digesting exactly how containers will ultimately fit into the IT landscape of the future.
