Hypervisor myopia limits the promise of a software-defined datacenter

Myopia pic

I hear again and again from customers that they’d like to move to the cloud. Although the economics might not justify migration today, they want to eventually be free of the challenges in acquiring, provisioning and managing infrastructure.

Public cloud offers potential benefits, but reducing infrastructure complexity should not be counted among them. Hyper-converged infrastructure (HCI) can provide the simplicity of public cloud in customers’ own datacenters. And by facilitating a hybrid cloud strategy determined by workload needs, it can enable the same type of agility, efficiency and risk management as public cloud.

Seamless infrastructure requires not just abstraction of storage, but abstraction of cloud computing. Infrastructure should be intelligent enough to run applications on the most appropriate platform, whether on-premises or in the public cloud. This requires an HCI vision that goes far beyond dependency upon a single hypervisor.

The Hypervisor Is No Longer the Center of the IT Universe

If you are running a virtualized datacenter, the odds are that you already have more than one hypervisor.

A September 2014 IDC market analysis states, “Over half of the enterprises (51%) have more than one type of hypervisor installed…VMware still leads the pack in terms of installed production deployments, but Microsoft is closing the gap. Other hypervisors are increasing their share, primarily by stealing from VMware’s historically predominant share.”

This statistic is corroborated by a Gartner poll showing that by July 2014, 48% of VMware customers were already using Hyper-V as their secondary hypervisor. The poll also indicated that Microsoft is gaining share of new virtualized workloads.

Tightly integrating HCI with the kernel of a single hypervisor may bind a customer to the manufacturer’s product suite, but it disregards the industry trends of openness, agility and choice (not to mention resulting in a much fatter hypervisor). Senior Wikibon analyst Steve Chambers recently poked fun at this type of hypervisor myopia by comparing how the datacenter solar system would have looked pre- and post-Copernicus.

Copernicus pic

Operating System Centricity

A VMware spokesperson for its Storage and Availability group recently stated, “The harsh market reality is that there’s just not a lot of demand for non-vSphere-based hyperconverged solutions…I would argue that it’s hard to compete with features that are simple extensions of the hypervisor.”

IDC Mkt Share pic

This argument resembles the one Microsoft used to make in the late 2000s: “Virtualization is simply a role within the Windows operating environment.” Many industry analysts believed the messaging and told VMware that it needed to be more price-competitive.

“If I were VMware, I would be looking to lower my prices.”

    - Laura DiDio, an analyst with ITIC (Reuters, July 6, 2009)

 

Despite the analyst warnings and all of Microsoft’s marketing muscle, VMware continued to dominate the industry for years. IT leaders knew that virtualization could save them a vast amount of money – but only if it worked flawlessly. An IT manager would look pretty foolish telling her users that all of the VMs might be down, but the company saved several thousand dollars on a less expensive hypervisor.

Today, Microsoft has significantly decreased its operating system centricity. It has also reversed its opposition to open source and has become one of the larger corporate contributors to the Linux kernel. The company no longer pitches virtualization based upon lower cost, but instead emphasizes enterprise-class virtualization, IT agility and flexibility.

Hyper-V still lags vSphere in management, and Microsoft has not developed the virtualization focus and community support that VMware has built over the years. But customers understand that Microsoft is striving to give them what they want, and they’re bringing Hyper-V into their datacenters.

Hypervisor Dependency is Contrary to a Software-Defined Datacenter

The term “software-defined datacenter” (SDDC) was coined by VMware’s former CTO, Steve Herrod, but it has taken on a life of its own. Multi-hypervisor demand belies the concept of SDDC as merely an extension of vSphere.

VMware’s NSX team understands this new reality. VMware promotes multi-hypervisor support as a “key feature…instrumental to the value NSX delivers.” In response to Cisco claims of hypervisor dependency, VMware fired back that some NSX environments don’t use VMware hypervisors at all.

A software-defined datacenter demands more than a single-hypervisor HCI strategy. What if, for example, customers determine that KVM-based HCI enhances the availability and performance of containers running in production? Or perhaps they want to run Hyper-V to lower the cost of their Citrix VDI environment. Or maybe deploying a combination of KVM and vSphere optimizes the application lifecycle from test/dev to production.

Multi-hypervisor HCI not only gives customers choice, it can also provide them with superior capabilities. Nutanix, for example, increases flexibility by supporting not just multiple hypervisors, but multiple versions of hypervisors. And these versions can run on the same cluster, potentially even in multiple datacenters.

Separating the operating system from the hypervisor enables non-disruptive 1-click upgrades of the Nutanix operating system, the hardware firmware and even the hypervisor – whether ESXi, Hyper-V or KVM. Storage-layer release cycles are more frequent, bringing improved performance, security and functionality with each release.

The Future of the Software-Defined Datacenter

If customers had a way to efficiently run and manage multiple hypervisors in the same environment – and to seamlessly meet their business and application needs; if they had training and certifications geared to a multi-hypervisor datacenter; if they had community support for their efforts to optimize performance while reducing cost – then the rapidly growing landscape of multi-hypervisor environments would undoubtedly accelerate faster still.

Nutanix Next pic

At Nutanix.NEXT in Miami in June, Nutanix is unveiling our Act II. We will reveal our plans to take multi-hypervisor capabilities to a new level. I hope that I will see you and your customers there to participate in the future of the software-defined datacenter.

 

Thanks to Prabu Rambadran (@_praburam), Steve Dowling, Payam Farazi (@farazip) and Angelo Luciani (@AngeloLuciani) for suggestions and edits.

2 thoughts on “Hypervisor myopia limits the promise of a software-defined datacenter”

  1. Just wanted to pass along a note of appreciation, as I’ve recently come across your blog. As an employee of Veeam Software and a fan of all things Moore’s law related, I definitely appreciate your insights. I do have a couple of questions if you don’t mind. What obstacles does Nutanix face in gaining market share from EMC and NetApp? In your opinion, what other technology sectors are ripe for disruption? I appreciate your time.


    • Thanks for the comment, Braden. Both EMC and NetApp have been phenomenally successful over the years and have been major contributors to the modern datacenter. But even though they both now have HCI solutions, they have a tremendous dependency upon, and are structured to sell, legacy storage arrays.

      Hyper-converged infrastructure (HCI) is a far better approach to hosting a virtualized datacenter than traditional 3-tier compute + storage arrays. HCI incorporates significant advantages including scalability, resiliency (self-healing), simplicity and more compelling economics. Of course, not everyone agrees with this statement – just as many didn’t believe virtualization was a superior approach to the datacenter in the early days. But two industry observations should remove all doubt:

      1. All of the leading cloud providers have abandoned arrays for their primary hosting businesses, and utilize a distributed file system along with commodity server hardware. The cloud providers have much more demanding datacenter requirements than most enterprises; since HCI has already won in that space, it will certainly win in the enterprise as well.

      2. All of the leading datacenter storage players are already providing, or have announced, HCI solutions. NetApp offers an EVO:RAIL appliance, while EMC has both ScaleIO and VSPEX BLUE. Both VMware and Citrix also have HCI solutions. VMware’s endorsement, in particular, of HCI as the optimal hosting architecture for a virtualized datacenter provides tremendous validation.

      In terms of obstacles to gaining market share from EMC and NetApp, I think the primary one is simply overcoming inertia. A lot of that inertia has to do with channel partners who have strong relationships with, and obligations to, the legacy manufacturers. That’s the reason I started this blog site last year.

      What other technology sectors are ripe for disruption? Networking is certainly one. I think we still have lots of room for innovation in the user productivity space (going beyond just VDI). The traditional reseller business model, IT resource optimization (i.e. better chargeback/showback models), and technology support are additional areas of opportunity.

