Operations trends are one thing, but there are several overall industry trends that are important to understand as organizations make their way to the hybrid cloud.
A portmanteau of “development” and “operations,” DevOps is a movement that unites development processes and infrastructure operations in pursuit of better product outcomes. Under a DevOps-driven methodology, the infrastructure becomes part of the development process, which streamlines the lifecycle and leads to more frequent deployments with higher rates of success. The ultimate goal of DevOps is to improve an organization’s overall business performance.
According to Puppet Labs’ 2016 State of DevOps Report, high-performing organizations realize a number of benefits, including:
- 200 times more frequent code deployments
- 2,555 times faster lead times
- 24 times faster mean time to recover
- 3 times lower change failure rate
All of this leads to lower costs, increased efficiency, much faster time-to-value, and increased revenue.
In a DevOps world, you may hear the phrase programmable infrastructure. Programmable infrastructure is generally software-defined and enables developers to treat infrastructure components just as they would treat software objects. They can treat infrastructure as code, enabling their software creations to fully manipulate the very foundation on which those creations operate.
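The idea of infrastructure as code can be sketched in a few lines. The `Server` class and `provision()` function below are hypothetical stand-ins for a real provisioning API (a cloud SDK or orchestration tool); they exist only to illustrate treating infrastructure as ordinary software objects.

```python
# Minimal infrastructure-as-code sketch. Server and provision() are
# hypothetical placeholders, not a real provisioning API.
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    cpus: int
    memory_gb: int


def provision(servers):
    """Pretend to provision each server and report what was 'built'."""
    return [f"provisioned {s.name}: {s.cpus} vCPU / {s.memory_gb} GB"
            for s in servers]


# Infrastructure described as code: you can version it, review it,
# and test it like any other software artifact.
web_tier = [Server(f"web-{i}", cpus=2, memory_gb=4) for i in range(3)]
print("\n".join(provision(web_tier)))
```

Because the whole web tier is just data in a program, scaling it up is a one-character change to the `range()` call rather than a manual build-out.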
Because there were not enough confusing terms in the IT landscape, someone decided to throw another one into the mix. We’re used to big, monolithic architectures, but we also know that such environments are far from flexible and agile. As we make changes to such systems, we risk unintended consequences. Here’s a little rhyme for you:
99 little bugs in the code.
Take one down, patch it around,
127 little bugs in the code…
Unfortunately, that’s a reality for many developers of big systems. To combat this and to make systems easier to develop, web-services have entered the equation.
Loosely coupled with one another, web-services are intended to enable modularity and separation of core functionality in order to improve security and to vastly accelerate development efforts. Web-services depend heavily on APIs that enable deep interfacing between components. They don’t all need to be built and managed by a single company, either. Many of today’s applications and services have rich APIs that enable inter-product communication, so many web-services from many sources can interoperate quite nicely. Of course, a single company can choose to leverage web-services for all its development, too. The beauty of a web-services framework is that it enables discrete development on small components. This is what makes it possible to iterate very quickly and accelerate the introduction of new functionality into a development project. There is less testing to do and less that can go wrong.
If you like virtualization because it allows you to cram more workloads onto a single server than was possible in the glory days of the physical server, then you’re gonna love containers! And, if you’re hooked on web-services, you’ll quickly see why containers have become so popular at DevOps parties.
Imagine, if you will, a world in which you were forced to deploy a new virtual machine for every web-service component. After all, just like you do with monolithic applications, you might want to keep your individual web-services running individually as well, to improve security and performance. With the need to create a software-based server (a virtual machine) and install an operating system, patch it, and then deploy the web-service, you’re looking at a lot of overhead, as you can see in the diagram below.
With small web-services, the overhead gets pretty intense. You need a more granular way to deploy these services. That’s where containers come into the picture. Rather than a separate operating system deployment for every virtual machine, containers all run atop the same operating system instance. On top of that operating system runs a container engine—Docker is the most popular and well-known such engine—and each individual application or web-service gets its own access to the binaries and libraries that support it. The following diagram gives you a look at how virtual machines and containers compare.
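To give a feel for how lightweight this is, here is a sketch of a Dockerfile for containerizing a small Python web-service. The file name `app.py` and the base-image tag are illustrative choices, not requirements; the instructions themselves (`FROM`, `COPY`, `CMD`, and so on) are standard Dockerfile syntax.

```dockerfile
# Illustrative container image for a small Python web-service.
# app.py and the base-image tag are placeholder choices.
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
EXPOSE 8080
CMD ["python", "app.py"]
```

Compare this handful of lines to provisioning a full virtual machine: no guest operating system to install or patch, just the application and its immediate dependencies layered on a shared kernel.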
Everything you’ve heard so far is great, but for one thing. You need a place to run all this stuff. Further, you want that place to be something that gives you cloud-like capability and resources, including compute, storage, and networking, among other components and services. Well, back in 2010, NASA and Rackspace walked into a lab one day and walked out the next day having created the platform known as OpenStack.
OpenStack provides organizations with an infrastructure-as-a-service offering that is fully manageable via a command line, a GUI, and powerful APIs. Every six months, the OpenStack development community releases new updates to correct problems and add new functionality. OpenStack is a highly componentized system, enabling fast development on individual modules without the risk of those development efforts impacting other modules.
Since this is a site about storage, let’s focus there for a minute. OpenStack’s Cinder service provides shared block storage functionality to the platform, along with the capability for third-party storage vendors to supply plugin drivers. These drivers enable OpenStack to leverage a vendor’s storage assets as if they were a core part of OpenStack itself.
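As a rough sketch of how this plugs together, a Cinder back end is enabled in `cinder.conf`. The `enabled_backends`, `volume_driver`, and `volume_backend_name` options are real Cinder configuration keys; the `vendor_backend` name and the vendor driver class shown here are placeholders for whatever a storage vendor actually ships.

```ini
# Illustrative cinder.conf excerpt: a vendor back end alongside the
# stock LVM driver. "vendor_backend" and the driver class are placeholders.
[DEFAULT]
enabled_backends = lvm, vendor_backend

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes

[vendor_backend]
volume_driver = <vendor-supplied driver class>
volume_backend_name = vendor_backend
```

Once a back end like this is configured, OpenStack can schedule volumes onto the vendor’s storage just as it would onto its own built-in drivers.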
Why is this important? In an open world, one size does not fit all. The people behind OpenStack understand that other solutions can provide far more functionality than the platform itself ever could. For example, through a Cinder driver, OpenStack adopters can deploy a VM-aware storage solution, which operates directly at the virtual machine level, eliminating the need for complex mapping to LUNs and volumes. This dramatically simplifies and improves how you manage storage in an OpenStack cloud deployment.
Finally, let’s talk about the trend known as serverless computing. Let’s get one thing out of the way, though: servers are still involved, but apparently some marketing person won the battle of the moniker. In reality, serverless computing is much more about abstraction than about actually eliminating servers. The goal is to enable developers to focus on their code rather than on infrastructure, which includes both physical and virtual servers. Today, developers often have to worry about these elements, but they shouldn’t have to.
Let’s look at a real-world serverless computing service—AWS Lambda. With Lambda, you’re able to upload code to the service and execute this code without having to provision the underlying resources, such as virtual machines. The service itself silently and automatically provisions whatever resources are necessary to carry out your will. And then, once the code is done executing, all the virtual infrastructure that was provisioned is destroyed. Under such a service, you’re only paying for the resources that were in use while your code was running.
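A Lambda function boils down to a handler that the service invokes with an event and a context object; that two-argument handler signature is the real Lambda convention, while the `"name"` field in the event below is an invented example input.

```python
# A minimal AWS Lambda-style handler. Lambda calls the function you
# designate with an event (here a dict) and a context object.
# The "name" field in the event is an assumption for this example.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}


# Locally you can call the handler directly; in AWS, the service
# provisions resources, invokes it, and tears everything down afterward.
print(lambda_handler({"name": "hybrid cloud"}, None))
```

Notice what is missing: no server setup, no operating system, no capacity planning — just the code, which only consumes (and bills for) resources while it runs.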
Why is this important? Think about the model of simply shifting your virtual machines to a cloud provider. If those virtual machines are running 24/7, you’re paying 24/7, even if you only use some of them once a week. Under a serverless computing model, you move ever closer to a true “pay as you go” economic model for workload operations.