The data center is a trendy place right now (it feels kind of weird and awesome to write that). Several data center trends deserve to be called out for their role in helping organizations bring cloud-like characteristics to their own facilities.
Virtualization does not equal cloud. Yes, we’ve said this before, and it’s still true. However, virtualization is perhaps the most critical element in getting to an environment that can be considered cloud-like. As you look at cloud providers out there, one fact becomes abundantly clear: All of them run workloads that are virtualized in some way. Amazon’s workloads run on Xen; Microsoft Azure’s workloads run on a customized version of Hyper-V.
Why is this?
There are a whole lot of reasons, and here are the most important:
- Hardware efficiency. Virtualization has made it possible to push hardware to its very limits. This reduces overall hardware cost, since you no longer need to buy an individual server for every workload.
- Abstraction. From an operations perspective, the ability to abstract operating systems and applications from hardware has made it possible to treat the underlying hardware as almost an afterthought. As long as the hardware can support its workloads, it doesn’t matter which system those workloads actually run on. Here’s the key item to remember: abstraction turns hardware-based servers and components into software. Software is far more easily managed and manipulated than hardware, which brings us to…
- Workload mobility. Enabled by abstraction, mobility is the ultimate outcome of virtualizing workloads: workloads can be shifted to new platforms and data centers for availability, and copied to other locations for disaster recovery purposes.
Continuously increasing the level of virtualization for new workloads enables more and more automation and efficiency in data center operations.
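The hardware-efficiency argument above is easy to see with a little back-of-the-envelope math. The sketch below uses purely illustrative numbers (60 lightly used dedicated servers, 10% average utilization, an 80% utilization target per virtualization host); they are assumptions for the example, not figures from the text.

```python
import math

# Illustrative consolidation math: many lightly used dedicated servers
# consolidated onto a few well-utilized virtualization hosts.
# All figures below are hypothetical.
physical_servers = 60
avg_utilization = 0.10    # 10% average CPU use per dedicated server
host_target = 0.80        # target utilization per virtualization host

# Total demand expressed in "one fully used server" units.
total_demand = physical_servers * avg_utilization

# Hosts needed to absorb that demand at the target utilization.
hosts_needed = math.ceil(total_demand / host_target)

print(f"Demand: {total_demand:.1f} server-equivalents")
print(f"Virtualization hosts needed: {hosts_needed}")
```

Under these (made-up) assumptions, 60 physical boxes collapse onto 8 hosts, which is where the hardware-cost savings come from.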
Analytics-Driven Monitoring and Planning
You can’t make good decisions without good data! Executive teams have known this forever and they’re always looking for good organizational metrics. In more recent years, however, this kind of thinking has come to IT infrastructure as well. Today, IT decision makers consider what’s actually happening in their infrastructure environments in order to plan for the future and to react to operational performance issues. The ability to access key performance indicators at every level of the infrastructure and application stack is absolutely critical to maintaining a high level of performance and avoiding downtime.
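To make the monitoring idea concrete, here is a minimal sketch of the analytics side: flag a KPI sample when it strays too far from its recent rolling average. The metric values, window size, and deviation threshold are all illustrative assumptions, not taken from any real monitoring product.

```python
from collections import deque

def detect_anomalies(samples, window=5, threshold=1.5):
    """Return (index, value) pairs where a sample exceeds
    threshold x the rolling mean of the previous `window` samples."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(recent) == recent.maxlen:
            baseline = sum(recent) / len(recent)
            if value > threshold * baseline:
                anomalies.append((i, value))
        recent.append(value)
    return anomalies

# Simulated response-time KPI (milliseconds) with one spike.
latency_ms = [20, 22, 21, 19, 23, 21, 95, 22, 20, 21]
print(detect_anomalies(latency_ms))  # → [(6, 95)]
```

Real analytics platforms use far more sophisticated baselining, but the principle is the same: decisions come from observed data, not guesswork.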
The traditional data center requires a ton of attention from a variety of people, each with vastly different, and often expensive, skill sets. The ultimate destination for the local data center, however, is one that manages itself without the need for a lot of people.
This is not to say that there won’t be an IT staff that has to take some part in managing the data center, but it does mean that the skills these IT pros bring to bear may look very different from the ones we see today.
Autonomous data center operations are actually viable, too… when you fill your data center chock full of virtualization and then sprinkle on a dash of analytics. With the right virtualized foundation combined with the right level of analytics, it is possible to have a data center that handles its own routine operations. For example, is your CRM application being overloaded by too many new connections from potential customers? Allow your autonomous data center to spin up a series of new virtual web servers to help handle the load. Is your key customer-facing application beginning to run out of storage capacity? Allow your autonomous data center to provision temporary capacity in the public cloud, or move low-value data from the local data center to a cloud provider for archiving to free up local capacity.
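The CRM scale-out scenario above can be sketched as a simple control loop. The provisioning call here is a hypothetical stand-in; a real autonomous system would invoke a virtualization or cloud provider API at that point, and the per-server connection limit is an assumed number for the example.

```python
def provision_web_server(pool):
    """Hypothetical stand-in for an API call that spins up a new VM."""
    pool.append(f"web-{len(pool) + 1:02d}")

def autoscale(pool, active_connections, per_server_limit=200):
    """Add virtual web servers until the pool can absorb current load.
    per_server_limit is an assumed capacity per web server."""
    while len(pool) * per_server_limit < active_connections:
        provision_web_server(pool)
    return pool

# Two servers can handle 400 connections; 900 arrive, so the loop
# grows the pool until capacity covers demand.
servers = ["web-01", "web-02"]
autoscale(servers, active_connections=900)
print(servers)  # → ['web-01', 'web-02', 'web-03', 'web-04', 'web-05']
```

The same shape works in reverse for the storage example: a loop that watches free capacity and triggers an archive-to-cloud action when it drops below a threshold.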