Scalability

One of the key characteristics of cloud, whether it’s public, private, or hybrid, is the ability to scale in whatever dimension makes sense for your workloads’ needs. The traditional direction for storage scaling was up: adding more shelves of capacity as needed. But that capacity now generally comes with a lot of all-flash I/O, so the controller becomes the bottleneck. And piling up shelves of capacity only creates a growing failure domain; lose the controllers and you lose the data on ALL the shelves to which they are attached.

Inside the Data Center: Scale Up vs. Scale Out
For years, a silent war has been waged inside your data center… one with the hallmarks of some of the greatest historical matchups: Coke vs. Pepsi, Mac vs. PC, Kirk vs. Picard. In this war, however, unsuspecting organizations are the losers.

Scale up vs. Scale out: To the untrained eye, both are just ways to add more capacity to an existing storage environment.

Scale-up architectures allow you to add more disks to an existing array, or to add expansion shelves of disks atop a controller, which contains the processing and memory resources for the storage environment.

In a scale-out system, every time you add capacity, you also add the underlying resources upon which that capacity depends. Add a shelf of disks and you also add more processors, RAM, and network/storage fabric ports.

This is really important and here’s why: Predictability. As you add storage capacity to the data center, you shouldn’t have to give up performance, but that’s exactly what can happen in a scale-up environment. Eventually, you overtax the shared processors and fabric connections, and you begin to suffer from storage performance problems. When that happens, you must start being careful about where you place workloads and virtual machines so that you avoid the hot spots.
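
A back-of-the-envelope sketch makes the difference concrete. All of the IOPS and capacity figures below are invented for illustration; the point is the trend, not the numbers:

```python
# Toy model: how IOPS per TB changes as capacity grows.
# All figures are illustrative assumptions, not real vendor specs.

CONTROLLER_IOPS = 200_000   # fixed controller ceiling in a scale-up array
SHELF_TB = 100              # capacity added per expansion shelf (scale up)
NODE_IOPS = 50_000          # each scale-out node brings its own controllers...
NODE_TB = 100               # ...and its own capacity

def scale_up_iops_per_tb(shelves: int) -> float:
    """IOPS per TB when every shelf shares one fixed controller."""
    return CONTROLLER_IOPS / (shelves * SHELF_TB)

def scale_out_iops_per_tb(nodes: int) -> float:
    """IOPS per TB when every node adds compute alongside capacity."""
    return (nodes * NODE_IOPS) / (nodes * NODE_TB)

for n in (1, 2, 4, 8):
    print(f"{n} units: scale up {scale_up_iops_per_tb(n):6.0f} IOPS/TB | "
          f"scale out {scale_out_iops_per_tb(n):6.0f} IOPS/TB")
```

In the scale-up column, performance per terabyte halves every time capacity doubles; in the scale-out column, it holds steady no matter how many nodes you add.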

In a properly built scale-out storage environment, you don’t run into such issues, since you’re adding all of the resources your storage needs to function. You can scale workloads and maintain predictable levels of performance. Even better, as you add new workloads and virtual machines, the right system can optimize the placement of these items without worry that necessary resources won’t be available.

Happy users, happy business.

And so more organizations have shifted the direction of their scaling plans from up to out. This entails adding more nodes: interconnecting arrays to join their controllers together, adding both capacity and performance. Doing so adds controller redundancy and increases your ability to put all that performance to effective use.

The challenge here is to keep storage scale-out as simple as, say, scaling compute. If you need more compute resources for your cloud workloads, you simply install a new virtualized server, add it to the resource pool, and let automatic live migration optimize the VMs across the pool.

Fortunately, the previously mentioned category of VM-aware storage has this visibility across your pool and the capability to optimize the placement of every VM. Simply add another node of VM-aware storage, and intelligent algorithms redistribute VMs to best balance capacity and performance requirements. Best of all, it happens automatically, without admin intervention.
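
As a rough sketch of the kind of rebalancing at work, here’s a minimal greedy version; the node names, VM sizes, and move-to-least-loaded heuristic are all simplifying assumptions, not any vendor’s actual placement algorithm:

```python
# Minimal sketch of capacity-based VM rebalancing after adding a node.
# Real VM-aware storage also weighs performance (IOPS, latency), not
# just capacity; this greedy heuristic is an illustrative assumption.

def rebalance(nodes: dict[str, list[int]]) -> None:
    """Migrate VMs (sized in TB) from the fullest node to the emptiest
    until no single move would reduce the imbalance."""
    def used(name: str) -> int:
        return sum(nodes[name])

    while True:
        fullest = max(nodes, key=used)
        emptiest = min(nodes, key=used)
        if not nodes[fullest]:
            break                          # pool is empty; nothing to move
        vm = min(nodes[fullest])           # smallest VM is cheapest to move
        if used(fullest) - used(emptiest) <= vm:
            break                          # moving it wouldn't help
        nodes[fullest].remove(vm)
        nodes[emptiest].append(vm)

pool = {"node1": [4, 6, 8, 2], "node2": [5, 7, 3]}
pool["node3"] = []                         # the newly added scale-out node
rebalance(pool)
for name, vms in pool.items():
    print(f"{name}: {sum(vms)} TB used {vms}")
```

Adding node3 to the pool is the only admin action; the rebalance loop then moves the smallest VMs off the hottest node until utilization across the pool is roughly even.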

Now remember that scale also means spinning workloads up and down. When we’re talking about scale in the context of cloud, bear in mind that it should be possible to expand the size of a resource pool as well as to shrink it when workload demands begin to diminish.
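
To make the expand-and-contract idea concrete, here’s a hypothetical elastic pool; the 80 percent and 30 percent utilization thresholds are invented for illustration:

```python
# Hypothetical elastic resource pool: provisions nodes when utilization
# runs hot and deprovisions them when demand diminishes. The thresholds
# are illustrative assumptions, not recommendations.

class ResourcePool:
    def __init__(self, nodes: int, node_capacity: int):
        self.nodes = nodes
        self.node_capacity = node_capacity
        self.demand = 0                    # current aggregate workload demand

    @property
    def utilization(self) -> float:
        return self.demand / (self.nodes * self.node_capacity)

    def rescale(self) -> None:
        """Grow above 80% utilization; shrink while a smaller pool
        would still sit below 30%."""
        while self.utilization > 0.80:
            self.nodes += 1                # provision (buy and deploy, if private)
        while (self.nodes > 1
               and self.demand / ((self.nodes - 1) * self.node_capacity) < 0.30):
            self.nodes -= 1                # deprovision; free for other workloads

pool = ResourcePool(nodes=2, node_capacity=100)
for demand in (150, 320, 80):              # workload spins up, peaks, winds down
    pool.demand = demand
    pool.rescale()
    print(f"demand={demand} -> nodes={pool.nodes}, "
          f"utilization={pool.utilization:.0%}")
```

On the public side, the grow and shrink steps map to API calls; on the private side, growing may mean buying hardware, which is exactly the difference the next paragraph describes.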

But there is a real difference in achieving scale in a public cloud as opposed to a private cloud, which means that the two sides of your hybrid cloud will have somewhat different capabilities here. On the public cloud front, you can temporarily provision a resource and then, when you’re done with it, deprovision that resource on the fly without incurring any capital expenditure. On the private side of your hybrid cloud, if you run out of a resource, you’ll need to buy more and deploy it before you can provision it for use. If you have a temporary workload, even if you deprovision the resource when you’re done with it, the resource still physically exists in your data center and you’ve paid for it. In a private data center, deprovisioning resources simply makes them available for use by other workloads.

Put concisely, on the public side you get the full cloud experience, which includes both the economics and the operational efficiency. On the private side of your hybrid cloud, the focus is on making things easier to expand and contract and on making more efficient use of your resources.
