The Lightcrest Blog

Let's talk about fluid computing, hyperconverged infrastructure, and hybrid cloud technology.

5 Cloud Trends in 2015 You Need to Track

Last update on Sept. 9, 2015.

As product cycles from 2014 bleed into the present, there will inevitably be extensions of last year’s computing trends that appear to the casual observer as “merely incremental” rather than all-out paradigm shifts. But some of the groundswells from last year’s product cycles may very well culminate in transformative innovations for the modern datacenter.

So what should we be tracking? Let’s take a look.

1. Private Cloud Adoption

Private cloud is taking off, and adoption is accelerating. It’s interesting, because on the surface private cloud seems kind of boring. Empty a bag of servers into a cabinet, throw on some KVM or Xen, and you’re off to the races, right? What’s the fun in that? You have to commit to the cost of the hardware stack, and unlike your friends running on Amazon or Azure, you can’t momentarily spin your entire stack down and bring your effective MRC (monthly recurring cost) to $0.00.

Well, as it turns out, if you have a relatively mature business there is a loftier boast than instant scale-down and “theoretically infinite elasticity.” The public cloud is a great place to start your business or host small web applications – and yes, it can be a good place to host big ones – but you’re going to pay for it in time and silicon. As you scale up and reach a steady baseline of traffic, public cloud costs start to mount. As this Cloud Adoption Study shows, enterprises are starting to adopt a multi-cloud approach, which makes sense: steady-state projects go on private clouds, while new projects start out on public. One reason is that on multi-tenant infrastructure, you need more VMs to handle the same amount of load. For those of you who are new to the space, multi-tenant infrastructure means sharing resources with other customers – i.e. on Amazon or Azure’s public cloud, you’re sharing computing resources with other users who are just as susceptible to load spikes and DoS attacks as you are. In the final analysis, if you’re spending over $10k/mo on these clouds, chances are you’re overpaying and getting less capacity, CPU time, and I/O performance than you should be. And if you want the best possible experience for your users at the lowest possible cost, ironically, the public cloud is not the ideal choice.
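To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (the per-VM public cloud price, the private cloud fixed monthly cost, the multi-tenancy overprovisioning factor) is a hypothetical assumption for illustration, not a quote from any provider:

```python
# Hypothetical back-of-the-envelope comparison. The figures below are
# illustrative assumptions, not benchmarks or published pricing.

PUBLIC_COST_PER_VM = 250.0     # assumed monthly cost of one public-cloud VM
PRIVATE_FIXED_COST = 6000.0    # assumed monthly cost of a private-cloud stack
                               # (hardware amortization, colo, support)
OVERPROVISION_FACTOR = 1.5     # assume multi-tenancy means ~1.5x the VMs
                               # for the same steady-state load

def monthly_cost(baseline_vms: int) -> tuple[float, float]:
    """Return (public, private) monthly cost for a steady baseline of VMs."""
    public = baseline_vms * OVERPROVISION_FACTOR * PUBLIC_COST_PER_VM
    private = PRIVATE_FIXED_COST   # fixed for capacity covering the baseline
    return public, private

for vms in (10, 20, 40, 80):
    pub, priv = monthly_cost(vms)
    cheaper = "private" if priv < pub else "public"
    print(f"{vms:>3} steady VMs: public ${pub:,.0f}/mo vs private ${priv:,.0f}/mo -> {cheaper}")
```

The point is not the specific numbers; it is that a fixed private-cloud cost eventually undercuts a per-VM public-cloud bill once your baseline load stops fluctuating.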

2. Software Defined Storage (SDS)

SDS implies separating the control plane from the data plane of your storage tier – in other words, the nodes doing the physical storing of data need not know anything about the controls, policies, or mechanisms governing the distribution, reading, or writing of that data. That’s the basic framework – it’s up to innovators to work within those constraints. What you’ll find are many vendors claiming to provide SDS, when in fact they’re simply offering up a modified shared filesystem and associated protocol. Others are in fact doing hard work and building interesting things – such as granularly distributing I/O, allowing for custom data durability policies, and automating I/O scaling on a per-VM basis.
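As a toy illustration of that separation, here is a minimal Python sketch. The classes and policy fields are hypothetical, not any vendor’s API: the data nodes only store blocks, while the control plane applies a per-volume durability policy and decides placement.

```python
# Minimal sketch of the control-plane / data-plane split (illustrative only).
import hashlib
from dataclasses import dataclass

@dataclass
class DurabilityPolicy:
    replicas: int = 3          # how many copies the control plane must place

class DataNode:
    """Data plane: stores blocks; knows nothing about policy or placement."""
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[str, bytes] = {}

    def put(self, block_id: str, data: bytes) -> None:
        self.blocks[block_id] = data

class ControlPlane:
    """Decides where data lives and enforces the per-volume durability policy."""
    def __init__(self, nodes: list[DataNode]):
        self.nodes = nodes

    def write(self, block_id: str, data: bytes, policy: DurabilityPolicy) -> list[str]:
        # Hash the block id to pick a starting node, then place N replicas.
        start = int(hashlib.md5(block_id.encode()).hexdigest(), 16) % len(self.nodes)
        targets = [self.nodes[(start + i) % len(self.nodes)]
                   for i in range(policy.replicas)]
        for node in targets:
            node.put(block_id, data)   # data nodes just store what they're told
        return [n.name for n in targets]

cluster = ControlPlane([DataNode(f"node{i}") for i in range(5)])
print(cluster.write("vm42/vol0/blk7", b"...", DurabilityPolicy(replicas=2)))
```

Swap the policy object and the placement logic changes, while the data nodes stay dumb – that is the property SDS vendors are competing on.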

You need to track this category because the innovators in this space will be displacing all the legacy technology associated with RAID, NAS, and DAS. The promise of SDS is to simplify data management, eliminate the high costs associated with big-iron storage (think EMC and NetApp SANs), improve data durability, and increase I/O performance. The consolidation opportunities, fully realized, are going to result in a drastic decrease in costs for any CIO with a reasonable datacenter footprint. If you are exploring the cost benefits of the private cloud, SDS should be on your watch list if not your to-do list, as you’ll gain more elasticity at a lower TCO than you would on traditional RAID or SAN deployments.

3. Hyperconvergence

Hyperconvergence entails the delivery of compute and storage resources as a single, condensed unit, and is essentially an evolution of “converged” infrastructure (i.e. Cisco UCS, IBM PureFlex), whereby one deploys a centrally managed chassis brimming with network, storage, and compute blades. Converged infrastructure promises vendor unification and reduced management overhead by consolidating network, storage, and compute into a centralized system; unfortunately, this often comes with a hefty price tag and does not scale out as cost-effectively as commodity hardware driven by an intelligent control plane.

In the case of hyperconvergence, instead of deploying a monolithic chassis, you deploy a single node that provides both compute and storage as a single module, usually a commodity server with some proprietary software rolled on top. The goal of hyperconvergence is to simplify IT management and lower costs by eliminating the need to support multiple vendors (and thus skill sets) – and to provide the same unit of marginal scale for both compute and storage. Need more I/O capacity? Add another node. Need more storage? Add another node. So instead of managing, say, a NetApp SAN, a cluster of IBM System x servers, and multiple hypervisor technologies, you can manage one platform to service all your potential storage and computing needs.
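In capacity-planning terms, the “add another node” model looks roughly like the sketch below. The per-node numbers are made-up assumptions, purely to show that compute, storage, and I/O all grow with the same unit of scale:

```python
# Illustrative only: the per-node figures are assumptions, not real hardware specs.
from dataclasses import dataclass

@dataclass
class HyperconvergedNode:
    vcpus: int = 32            # assumed compute per node
    storage_tb: float = 24.0   # assumed raw storage per node
    iops: int = 50_000         # assumed I/O capacity per node

def cluster_capacity(node_count: int, node: HyperconvergedNode = HyperconvergedNode()):
    """The node is the unit of marginal scale: everything grows together."""
    return {
        "vcpus": node_count * node.vcpus,
        "storage_tb": node_count * node.storage_tb,
        "iops": node_count * node.iops,
    }

for n in (3, 4, 8):
    print(n, "nodes ->", cluster_capacity(n))
```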

The differentiator in these products is the software – the control plane that determines how your data is distributed, how volumes are managed, and how caching and intelligent data locality are leveraged to improve performance. Expect a lot of variance across vendor software and not much difference across the underlying hardware. Like SDS, hyperconvergence done right can transform your private cloud into the high-octane Swiss Army knife it should be – but be wary of glossy marketing material and put your faith in real-world evaluations before making a purchasing decision.
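To show why that software layer matters, here is a toy read path that prefers a local cache, then a local replica, then any remote replica. The function and its arguments are hypothetical, purely to illustrate the data-locality idea:

```python
# Toy read path illustrating cache and data-locality preference (illustrative only).
def read_block(block_id: str, replica_map: dict[str, list[str]],
               local_node: str, cache: dict[str, bytes], fetch) -> bytes:
    """Prefer the local cache, then a local replica, then any remote replica."""
    if block_id in cache:                       # hot data served from the local cache
        return cache[block_id]
    replicas = replica_map[block_id]
    node = local_node if local_node in replicas else replicas[0]
    data = fetch(node, block_id)                # 'fetch' stands in for the real I/O call
    cache[block_id] = data                      # warm the cache for the next read
    return data

# Usage with a fake fetch function:
replicas = {"blk7": ["node1", "node3"]}
print(read_block("blk7", replicas, local_node="node3", cache={},
                 fetch=lambda node, b: f"data-from-{node}".encode()))
```

Two vendors can run the same commodity servers and still deliver very different performance depending on how well this kind of logic is implemented.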

4. Software Defined Networks

The incumbent networking titans (Juniper, Cisco) are steeling themselves for a fight with vendors of Software Defined Networking (SDN). Similar in the abstract to SDS, SDN entails decoupling proprietary network operating systems from their hardware stacks and delivering the control plane as software that can run on a myriad of commodity chipsets. Some of these vendors (i.e. Cumulus) are delivering their software on top of Linux, allowing network engineers and systems administrators alike to manage their routers and switches much like they would their application and database servers. The implications are quite powerful: staff who are already well versed in languages such as bash, Python, or Ruby can apply their server automation skills to the network, and write their own orchestration platforms that tightly integrate the network with their cloud.
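As a small sketch of what “manage the switch like a server” can look like, the script below assumes it is running on a Linux-based switch with Cumulus-style interface names (swp1, swp2, and so on – the names are assumptions); the commands themselves are standard iproute2:

```python
# Minimal sketch: configure switch ports with ordinary Linux tooling.
# Assumes it runs on a Linux-based switch; interface names are hypothetical.
import subprocess

UPLINKS = ["swp1", "swp2"]            # hypothetical uplink interface names

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # fail loudly, just like a deploy script would

def bring_up_uplinks() -> None:
    for iface in UPLINKS:
        run(["ip", "link", "set", iface, "up"])
        run(["ip", "link", "set", iface, "mtu", "9000"])   # jumbo frames for east-west traffic

if __name__ == "__main__":
    bring_up_uplinks()
```

The same script could just as easily be driven by the configuration management tool you already use for your servers, which is exactly the point.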

As SDS takes hold and the switch fabric becomes the de facto storage backplane, SDN will become increasingly relevant. Moving away from centralized SANs to decentralized SDS clusters requires tight integration of the storage control plane with the network control plane, and high performance will require tuning key network attributes such as QoS and Ethernet bonding.
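For example, on a Linux-based switch the bonding half of that tuning can be scripted with the same iproute2 commands shown above. The bond and port names below are hypothetical; this is a sketch, not a production configuration:

```python
# Illustrative only: build an LACP bond from two switch ports for storage traffic.
import subprocess

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)

def make_storage_bond(bond: str = "bond0", members: tuple = ("swp3", "swp4")) -> None:
    run(["ip", "link", "add", bond, "type", "bond", "mode", "802.3ad"])
    for iface in members:
        run(["ip", "link", "set", iface, "down"])        # members must be down to enslave
        run(["ip", "link", "set", iface, "master", bond])
    run(["ip", "link", "set", bond, "up"])
```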

5. DevOps

More of a methodology than a product, DevOps is changing the way people think about and manage infrastructure – and it’s infecting everyone, including Starbucks. Traditionally, firms built their own in-house teams to manage the operations of their network and systems infrastructure. It was common to provision servers on a piecemeal basis, whereby servers running different operating systems would require different deployment procedures. Furthermore, systems administrators often had to compile lengthy “run books” to document their specific approach to configuration and code deployment. DevOps breaks from this lineage by giving administrators and developers alike the ability to treat their infrastructure as an application framework. In other words, instead of having to hire four different people to manage four different operating system deployments, firms can now hire one person (or a managed services provider) to roll out many disparate operating systems, very rapidly, across heterogeneous infrastructure through a common REST API (often exposed through a web service or set of CLI tools).

The result of DevOps is automation of provisioning and deployment of systems, servers, and code through a common abstraction layer. Need to roll out a new version control system integrated with your favorite CI/CD framework? No problem, use this API call. Need to roll out 100 new VMs, 50% of them running CentOS, the other 50% running Windows? No problem, use this API call. Need to roll back that botched nginx configuration that just took your site down? No problem, use this – okay, enough already, you get the picture.
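Here is what the 100-VM example might look like against a provisioning API. The endpoint, payload fields, and token are hypothetical stand-ins; substitute whatever your cloud or orchestration platform actually exposes:

```python
# Sketch of the "one API call" idea; the endpoint and fields are hypothetical.
import requests

API = "https://cloud.example.com/api/v1/vms"     # hypothetical provisioning endpoint
HEADERS = {"Authorization": "Bearer <token>"}    # placeholder credential

def provision(count: int, image: str, size: str = "medium") -> None:
    for i in range(count):
        payload = {"name": f"{image}-{i:03d}", "image": image, "size": size}
        resp = requests.post(API, json=payload, headers=HEADERS, timeout=30)
        resp.raise_for_status()                  # treat a failed provision as a hard error

# 100 new VMs: half CentOS, half Windows, all through the same abstraction layer.
provision(50, "centos-7")
provision(50, "windows-server-2012r2")
```

Whether the abstraction is a REST API, a CLI, or a configuration management tool, the win is the same: one repeatable code path instead of four sets of run books.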

There is no question about the efficacy of DevOps – the prudent queries are more about staff resources and technology – should you do DevOps in-house? Should you build it or buy it? In many cases, it makes a ton of sense to outsource DevOps to the provider of your cloud so you can focus on your business. In other cases, if you already have your own datacenter(s), it may be wise to build a center of excellence in-house, and leverage some of the aforementioned technologies to facilitate your automation strategies.

Conclusion

As the next crests of innovation roll our way, it will be imperative to evaluate these technologies and trends in real-world use cases to ascertain their impact on your organization’s efficiency and bottom line. And as always, the mantra should be: lower your costs, get more out of your existing infrastructure, and work smarter, not just harder.

More to come.