Hypervisor myopia limits the promise of a software-defined datacenter

[Image: myopia illustration]

I hear again and again from customers that they’d like to move to the cloud. Although the economics might not justify migration today, they want to eventually be free of the challenges in acquiring, provisioning and managing infrastructure.

Public cloud offers potential benefits, but reducing infrastructure complexity should not be counted among them. Hyper-converged infrastructure (HCI) can provide the simplicity of public cloud in customers’ own datacenters. And by facilitating a hybrid cloud strategy determined by workload needs, it can enable the same type of agility, efficiency and risk management as public cloud.

Seamless infrastructure requires not just abstraction of storage, but abstraction of cloud computing. Infrastructure should be intelligent enough to run applications on the most appropriate platform, whether on-premises or public cloud. This requires an HCI vision that goes far beyond dependency upon a single hypervisor.

The Hypervisor Is No Longer the Center of the IT Universe

If you are running a virtualized datacenter, the odds are that you already have more than one hypervisor.

A September 2014 IDC market analysis states, “Over half of the enterprises (51%) have more than one type of hypervisor installed…VMware still leads the pack in terms of installed production deployments, but Microsoft is closing the gap. Other hypervisors are increasing their share, primarily by stealing from VMware’s historically predominant share.”

This statistic is corroborated by a Gartner poll showing that by July of 2014, 48% of VMware customers were already using Hyper-V as their secondary hypervisor. The poll also indicated that Microsoft’s share of new virtualized workloads is growing.

Tightly integrating HCI with the kernel of a single hypervisor may bind a customer to the manufacturer’s product suite, but it disregards the trends of openness, agility and choice (not to mention resulting in a much fatter hypervisor). Senior Wikibon analyst Steve Chambers recently poked fun at this type of hypervisor myopia by comparing how the datacenter solar system would have looked pre- and post-Copernicus.

[Image: the datacenter solar system, pre- and post-Copernicus]

Operating System Centricity

A VMware spokesperson for its Storage and Availability group recently stated, “The harsh market reality is that there’s just not a lot of demand for non-vSphere-based hyperconverged solutions…I would argue that it’s hard to compete with features that are simple extensions of the hypervisor.”

[Image: IDC hypervisor market share]

This argument resembles the one Microsoft used to make in the late 2000s: “Virtualization is simply a role within the Windows operating environment.” Many industry analysts believed the messaging and told VMware that it needed to be more price-competitive.

“If I were VMware, I would be looking to lower my prices.”

    – Laura DiDio, an analyst with ITIC (Reuters, July 6, 2009)

 

Despite the analyst warnings and all of Microsoft’s marketing muscle, VMware continued to dominate the industry for years. IT leaders knew that virtualization could save them a vast amount of money – but only if it worked flawlessly. An IT manager would look pretty foolish telling her users that all of the VMs might be down, but the company saved several thousand dollars on a less expensive hypervisor.

Today, Microsoft has significantly decreased its operating system centricity. It has also reversed its opposition to open source and has become one of the leading corporate contributors to the Linux kernel. The company no longer pitches virtualization based upon lower cost, but instead emphasizes enterprise-class virtualization, IT agility and flexibility.

Hyper-V still lags vSphere in management, and Microsoft has not developed the virtualization focus and community support that VMware has built over the years. But customers understand that Microsoft is striving to give them what they want, and they’re bringing Hyper-V into their datacenters.

Hypervisor Dependency Is Contrary to a Software-Defined Datacenter

The term “software-defined datacenter” (SDDC) was coined by VMware’s former CTO, Steve Herrod, but it’s taken on a life of its own. Multi-hypervisor demand belies the concept of SDDC as merely an extension of vSphere.

VMware’s NSX team understands this new reality. VMware promotes multi-hypervisor support as a “key feature…instrumental to the value NSX delivers.” In response to Cisco claims of hypervisor dependency, VMware fired back that some NSX environments don’t use VMware hypervisors at all.

A software-defined datacenter demands more than a single-hypervisor HCI strategy. What if, for example, customers determine that KVM-based HCI enhances the availability and performance of containers in production? Or perhaps they want to run Hyper-V to lower the cost of their Citrix VDI environment. Or maybe deploying a combination of KVM and vSphere optimizes the application lifecycle from test/dev to production.

Multi-hypervisor HCI not only gives customers choice, it can also provide them with superior capabilities. Nutanix, for example, increases flexibility by supporting not just multiple hypervisors, but multiple versions of hypervisors. And these versions can run on the same cluster, potentially even in multiple datacenters.

Separating the operating system from the hypervisor enables non-disruptive 1-click upgrades of the Nutanix operating system, the hardware firmware and even the hypervisor – whether ESXi, Hyper-V or KVM. And because the storage layer releases on a more frequent cycle, each release brings improved performance, security and functionality.

The Future of the Software-Defined Datacenter

If customers had a way to efficiently run and manage multiple hypervisors in the same environment – and to seamlessly meet their business and application needs; if they had training and certifications geared to a multi-hypervisor datacenter; if they had community support for their efforts to optimize performance while reducing cost – then the rapidly growing landscape of multi-hypervisor environments would undoubtedly accelerate faster still.

[Image: Nutanix .NEXT conference]

At the Nutanix .NEXT conference in Miami in June, we are unveiling our Act II. We will reveal our plans to take multi-hypervisor capabilities to a new level. I hope that I will see you and your customers there to participate in the future of the software-defined datacenter.

 

Thanks to Prabu Rambadran (@_praburam), Steve Dowling, Payam Farazi (@farazip) and Angelo Luciani (@AngeloLuciani) for suggestions and edits.

The 10 reasons why Moore’s Law is accelerating hyper-convergence

SAN manufacturers are in trouble.

IDC says that array vendor revenue is flat despite continued massive growth in storage demand.

[Image: IDC storage market share]

Hyper-convergence (HC) contributes to SAN manufacturer woes. The March 23, 2015 Piper Jaffray research report states, “We believe EMC is losing share in the converged infrastructure market to vendors such as Nutanix.”

One of the most compelling advantages of HC is the cost savings. This is particularly evident when evaluated within the context of Moore’s Law.

Moore’s Law – Friend to Hyper-Convergence, Enemy to SAN

Moore’s Law, which states that the number of transistors on a processor doubles every 18 months, has long powered the IT industry. Laptops, the World Wide Web, the iPhone and cloud computing are all examples of technologies enabled by ever-faster CPUs.
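To put that doubling rate in perspective, here is a quick back-of-the-envelope calculation in Python (a minimal sketch using the 18-month figure above):

# Rough Moore's Law projection: capacity doubling every 18 months.
# Illustrative only -- uses the 18-month doubling period cited above.

def transistor_multiple(years: float, doubling_period_years: float = 1.5) -> float:
    """Growth multiple after `years` if capacity doubles every doubling period."""
    return 2 ** (years / doubling_period_years)

for years in (1.5, 3, 5, 10):
    print(f"{years:>4} years -> {transistor_multiple(years):,.1f}x the transistors")

Ten years at that pace is roughly a hundredfold increase – the backdrop for everything that follows.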

[Image: Moore’s Law in action (via Imgur)]

Innovative CPU approaches such as increasing the number of cores, photonics and memristors should continue the Moore’s Law trajectory for a long time to come. The newly released Intel Haswell E5-2600 CPUs, for example, show performance gains of 18% – 30% over their predecessors.

Here are the 10 reasons why Moore’s Law is an essential consideration when evaluating hyper-convergence versus traditional 3-tier infrastructure:

1.  SANs were built for physical, not virtual infrastructure.

Virtualization is an example of an IT industry innovation made possible by Moore’s Law. But while higher-performing servers, particularly Cisco UCS, helped optimize virtualization capabilities, arrays remained mired in the physical world for which they were designed. Even all-flash arrays are constrained by the transport latency between storage and compute, which does not evolve as quickly.

The following image from Chad Sakac’s post, VMware I/O queues, “micro-bursting”, and multipathing, shows the complexity (meaning higher costs) of supporting virtual machines with a SAN architecture.

[Image: VM I/O queues and multipathing complexity with a SAN, via Chad Sakac]

HC: Hyper-convergence was built from the ground up to host a virtualized datacenter (“Hyper” in hyper-convergence refers to “hypervisor”, not to “ultra”). The image below from Andre Leibovici’s post, Nutanix Traffic Routing: Setting the Story Straight, shows the much more elegant and efficient access to data enabled by HC.

[Image: Nutanix data path, via Andre Leibovici]

2.  Customers are stuck with old SAN technology even as server performance quickly improves.

A SAN’s firmware is tightly coupled with the processors; new CPUs can’t simply be plugged in. And proprietary SANs are produced on an assembly line basis in any case – quick retooling is not possible. When a customer purchases a brand new SAN, the storage controllers are probably at least one generation behind.

HC: HC decouples the storage code from the processors. As new nodes are added to the environment, customers benefit from the performance increases of the latest technology in CPU, memory, flash and disk.

Table 1 shows an example of an organization projecting a 20% increase in server workloads per year. The table also reflects a 20% annual increase in VM density per Nutanix node – conservative by historical trends.

Fourteen nodes are required to support 700 VMs in Year 1, but only 8 more nodes support the 1,452 workloads in Year 5. And the total rack unit space required increases only 50% – from 8U to 12U.

Table 1:  Example of decreasing number of nodes required to host increasing VMs
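For readers who want to kick the tires on this math, here is a minimal Python sketch of the compounding at work. The inputs are assumptions consistent with the example above (700 VMs and 50 VMs per node in Year 1, with workloads and per-node density each growing 20% per year); exact node counts depend on rounding and purchase-timing assumptions, so the output approximates rather than reproduces Table 1.

import math

# Assumed inputs: 700 VMs in Year 1 growing 20%/year, 50 VMs per node in
# Year 1 with per-node density also improving 20%/year as newer hardware
# arrives. New nodes are bought at the density available in the year of need.

vms, density, capacity, nodes = 700.0, 50.0, 0.0, 0

for year in range(1, 6):
    shortfall = vms - capacity
    if shortfall > 0:
        new_nodes = math.ceil(shortfall / density)  # buy at this year's density
        nodes += new_nodes
        capacity += new_nodes * density
    print(f"Year {year}: {vms:7.0f} VMs, {nodes} nodes total")
    vms *= 1.2       # workload growth
    density *= 1.2   # newer nodes host more VMs each

The key point survives any rounding details: because per-node density compounds at the same rate as demand, incremental purchases stay small even as the VM count more than doubles.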

3.  A SAN performs best on the day it is installed. After that it’s downhill.

Josh Odgers wrote about how a SAN’s performance degrades as it scales. Adding more servers to the environment, or even more storage shelves to the SAN, reduces the IOPS per virtualization host. Table 2 (from Odgers’ post) shows how IOPS per server decrease as additional servers are added to the environment.


Table 2:  IOPS Per Server Decline when Connected to a SAN

HC: As nodes are added, storage controllers (which are virtual), read cache and read/write cache (flash storage) all scale linearly or better (thanks to Moore’s Law enhancements).
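A toy model makes the contrast concrete. The IOPS figures below are hypothetical, not from Odgers’ table; the point is simply that a fixed controller pool gets divided as hosts are added, while per-node resources multiply:

# Hypothetical numbers for illustration: a SAN's controllers serve a roughly
# fixed IOPS pool divided among hosts, while each HCI node contributes its
# own controller, cache and flash to the cluster.

SAN_POOL_IOPS = 200_000      # assumed fixed controller capability
IOPS_PER_HCI_NODE = 25_000   # assumed per-node contribution

for hosts in (4, 8, 16, 32):
    san_per_host = SAN_POOL_IOPS / hosts
    hci_total = IOPS_PER_HCI_NODE * hosts
    print(f"{hosts:>2} hosts: SAN {san_per_host:>8,.0f} IOPS/host | "
          f"HCI cluster {hci_total:>9,} IOPS total")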

4.  Customers must over-purchase SAN capacity.

When SAN customers fill up an array or reach the limit on controller performance, they must upgrade to a larger model to facilitate additional expansion. Besides the cost of the new SAN, the upgrade itself is no easy feat. Wikibon estimates that the migration cost to a new array is 54% of the original array cost.

To try to avoid this expense and complexity, customers buy extra capacity/headroom up-front that may not be utilized for two to five years. This high initial investment hurts the project ROI. Moore’s Law then ensures the SAN technology becomes increasingly archaic (and therefore less cost-effective) by the time it’s utilized.

Even buying lots of extra headroom up-front is no guarantee of avoiding a forklift upgrade. Faster-than-anticipated growth, new applications, new use cases, the purchase of another company, and so on can all, and all too frequently do, lead to under-purchasing SAN capacity. A Gartner study, for example, showed that 90% of the time organizations under-buy storage for VDI deployments.

HC: HC nodes are consumed on a fractional basis – one node at a time. As customers expand their environments, they incorporate the latest technology. Fractional consumption eliminates the penalty for under-buying. In fact, it is economically advantageous for customers to start out with only what they need, because Moore’s Law quickly ensures higher VM-per-node density in future purchases.

5.  A SAN incurs excess depreciation expense.

The extra array capacity a customer purchases up-front starts depreciating on day one. By the time the capacity is fully utilized down the road, the customer has absorbed a lot of depreciation expense along with the extra rack space, power and cooling costs.

Table 3 shows an example of excess array/controller capacity purchased up front that depreciates over the next several years.


Table 3:  Excess Capacity Depreciation
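Here is a minimal sketch of that effect, using assumed figures (a $500K array, five-year straight-line depreciation, and a gradual utilization ramp) rather than the actual numbers behind Table 3:

# Hypothetical example of paying depreciation on unused headroom: a $500K
# array bought in Year 1, straight-line depreciated over 5 years, while
# utilization only ramps up gradually. All figures are assumptions.

ARRAY_COST = 500_000
YEARS = 5
annual_depreciation = ARRAY_COST / YEARS
utilization = [0.40, 0.55, 0.70, 0.85, 1.00]  # assumed utilization ramp

for year, used in enumerate(utilization, start=1):
    idle_share = annual_depreciation * (1 - used)
    print(f"Year {year}: {used:.0%} utilized -> "
          f"${idle_share:,.0f} of depreciation on idle capacity")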

HC: Fractional consumption eliminates the requirement to buy extra capacity up-front, minimizing depreciation expense.

6.  SAN “lock-in” accelerates its decline in value.

The proprietary nature of a SAN further accelerates its depreciation. A Nutanix customer, a mortgage company, had purchased a Vblock 320 (list price $885K) one year before deciding to migrate to Nutanix. A leading refurbished-equipment specialist was only willing to give them $27,000 for their one-year-old Vblock.

While perhaps not a common problem, in some cases modest array upgrades are difficult or impossible because of an inability to get the required components.

HC: An HC solution utilizing commodity hardware also depreciates quickly due to Moore’s Law, but there are a few mitigating factors:

  • In a truly software-defined HC solution, enhancements in the OS can be applied to the older nodes. This increases performance while enabling the same capabilities and features as newer nodes.
  • Since an organization typically purchases nodes over time, the older nodes can easily be redeployed for other use cases.
  • If an organization wanted to abandon HC, it could simply vMotion/live migrate VMs off of the nodes, erase them and then re-purpose the hardware as basic servers with SSD/HDDs ready to go.

[Image: Tesla]

7.  SANs require a staircase purchase model.

A SAN is typically upgraded by adding new storage shelves until the controllers, array or expansion cabinets reach capacity. A new SAN is then required. This is an inefficient way to spend IT dollars.

It is also anathema to private cloud. As resources reach capacity, IT has no option but to ask the next service requestor to bear the burden of required expansion. Pity the business unit with a VM request just barely exceeding existing capacity. IT may ask it to fund a whole new blade chassis, SAN or Nexus 7000 switch.

Table 4 shows an example, based upon a Nutanix customer, of a comparison in purchasing costs of a SAN vs. HC – assuming a SAN refresh takes place in year 4.


 Table 4: Staircase Purchase of a SAN vs. Fractional Consumption of HC

HC: The unit of purchase is simply a node which, in the case of an HC solution such as Nutanix, is self-discovered once attached to the network and then automatically added to the cluster. Fractional consumption makes it much less expensive to expand private cloud as needed. It also makes it easier to implement meaningful charge-back policies.
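The contrast between the two spending patterns can be sketched in a few lines of Python. All dollar figures below are hypothetical placeholders, not the customer numbers behind Table 4: a large up-front SAN purchase plus a Year 4 refresh versus steady node-at-a-time spending.

# Toy model of the two purchasing patterns (all figures hypothetical):
# a SAN bought with headroom up front and refreshed in Year 4, versus
# HCI nodes bought one at a time as capacity is actually needed.

san_spend_by_year = [400_000, 0, 0, 450_000, 0]                # staircase
hci_spend_by_year = [120_000, 60_000, 60_000, 60_000, 60_000]  # fractional

san_total = hci_total = 0
for year in range(5):
    san_total += san_spend_by_year[year]
    hci_total += hci_spend_by_year[year]
    print(f"Year {year + 1}: SAN cumulative ${san_total:>9,} | "
          f"HCI cumulative ${hci_total:>9,}")

The staircase pattern front-loads spend into big steps; the fractional pattern tracks actual demand, which is what makes chargeback policies workable.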

8.  SANs have a much higher total cost of ownership.

When evaluating the likely technology winner, bet on the economics. This means full total cost of ownership (TCO), not just acquisition cost.

SANs lock customers into old technology for several years. This has implications beyond slower performance and fewer capabilities; it means ongoing higher operating costs for rack space, power, cooling and administration. Table 5 shows a schematic from the mortgage company mentioned above that replaced a Vblock 320 with two Nutanix NX-6260 nodes.


Table 5: Vblock 320 vs. Nutanix NX-6260 – Rack Space

Rack space, power and cooling costs are easy to calculate based upon model specifications. They, along with costs of associated products such as switching fabrics, should be projected for each solution over the next several years.
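As a sketch of how such a projection works – the wattages, electricity rate and PUE below are assumptions for illustration, not vendor specifications:

# Power and cooling cost from model specs -- a minimal sketch with assumed
# inputs (wattage, electricity rate, PUE). PUE grosses up IT load to include
# cooling overhead.

HOURS_PER_YEAR = 24 * 365

def annual_power_cost(watts: float, dollars_per_kwh: float = 0.10,
                      pue: float = 1.8) -> float:
    """Electricity cost per year, grossed up by PUE to include cooling."""
    kwh = watts / 1000 * HOURS_PER_YEAR
    return kwh * dollars_per_kwh * pue

# e.g., a legacy 3-tier stack drawing ~3,000W vs an HCI block drawing ~1,100W
for label, watts in (("legacy 3-tier (assumed 3,000W)", 3000),
                     ("HCI block (assumed 1,100W)", 1100)):
    print(f"{label}: ${annual_power_cost(watts):,.0f}/year power + cooling")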

Administrative costs also need to be considered, but they are typically more difficult to gauge. They can also vary widely depending upon the type of compute and storage infrastructure utilized.

Some of the newer arrays, such as Pure Storage, do an excellent job of simplifying administration, but even Pure still requires storage tasks related to LUNs, zoning, masking, FC, multipathing, etc. And this doesn’t include all the work of administering the server side. Here’s my recent post comparing firmware upgrades on Nutanix and Cisco UCS.

Table 6 shows the 5-year TCO chart for the mortgage customer including a conservative estimate of reduced administrative cost.


Table 6: TCO of Vblock 320 vs. Nutanix NX-6260

HC: In addition to slashed costs for rack space, power and cooling, HC is managed entirely by the virtualization team – no need for specialized storage administration tasks.

9.  SANs have a higher risk of downtime and lost productivity.

RAID is, by today’s standards, an ancient technology. Invented in 1987, RAID still leaves a SAN vulnerable to failure. In some configurations, such as RAID 5, two lost drives can mean downtime or even data loss.

Both disks and RAID sets are getting larger. Disk failures require longer rebuilds, increasing both the impact on performance and the risk of another failure taking out the set.
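The rebuild-window arithmetic is simple enough to sketch in a few lines. The 50MB/s sustained rebuild rate below is an assumed figure for illustration; real-world rates vary with array load and configuration:

# Why bigger disks stretch the RAID risk window: rebuild time scales with
# capacity divided by rebuild rate. The rate here is an assumption.

def rebuild_hours(disk_tb: float, rebuild_mb_per_sec: float = 50.0) -> float:
    """Hours to rebuild one failed disk at a sustained rebuild rate."""
    seconds = (disk_tb * 1_000_000) / rebuild_mb_per_sec  # TB -> MB
    return seconds / 3600

for tb in (1, 4, 8):
    print(f"{tb}TB drive at 50MB/s: ~{rebuild_hours(tb):,.0f} hours exposed")

During those hours, a RAID 5 set is one additional failure away from data loss.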

And regardless of RAID type, a failed storage controller cuts SAN performance in half (assuming two controllers). Lose two controllers, and it’s game over.

[Image: tweet from @BEarena]

Sometimes unexpected events such as a water main breaking on the floor directly above the SAN can create failure. And firmware upgrades, in addition to being a laborious process, carry additional risk of downtime. Then there’s human error. Array complexity makes this a realistic concern.

As demands on the array increase over time, the older SAN technology becomes still more vulnerable to disruption or outright failure. Even temporary downtime can be very expensive.

HC: Rather than RAID striping, an HC solution such as Nutanix replicates virtual machine data across two or three nodes. A lost drive or even an entire node has minimal impact, as the remaining nodes rebuild the failed unit non-disruptively in the background. And the more nodes in the environment, the faster a failed node is rebuilt.

10.  The downsizing penalty.

Growth is not the only source of SAN inefficiency; downsizing can be a problem as well. Downsizing can result from decreased business, but also from a desire to move workloads to the cloud. The high fixed acquisition and operating costs of a SAN are difficult to justify for a shrinking workload.

HC: Customers can sell off or redeploy their older, slower nodes. This minimizes rack space, power and cooling expenses by only running the newest, highest-performance nodes. The software-defined nature of HC makes it easy to add new capabilities such as Nutanix’s “Cloud Connect” which enables automatic backup to public cloud providers.

The Inevitable Transition from SANs to HC

SANs were designed for the physical world, not for virtualized datacenters. The reason they proliferate today is that when VMware launched vMotion in 2003, it mandated, “The hosts must share a storage area network”.

But Moore’s Law marches relentlessly on. Hyper-convergence takes advantage of faster CPU, memory, disk and flash to provide a significantly superior infrastructure for hosting virtual machines. It will inevitably replace the SAN as the standard of the modern datacenter.

Thanks to Josh Odgers (@josh_odgers), Scott Drummonds (@drummonds), Cameron Stockwell (@ccstockwell), James Pung (@james_nutanix), Steve Dowling and George Fiffick for ideas and edits.

IBM jumps on the hyper-converged bandwagon


Last week’s announcements further show that HCI has gone mainstream.

One of the world’s largest and most storied legacy players, IBM, said it is investing $1 billion in software-defined storage (SDS). This Business Insider article, In another brilliant move, IBM just budgeted $1 billion to take down EMC, discusses IBM’s strategy. It also features Nutanix as the “poster child for this new market”.

In the introductory article to this blog site, I described how seven legacy datacenter manufacturers control $56 billion of the annual $73 billion server and storage market. Here’s an updated status of their participation in the HCI space:

HP:          StoreVirtual & EVO:Rail

IBM:         Announced HCI strategy

Dell:        Dell XC (Nutanix OEM) & EVO:Rail

Oracle:      —

Hitachi:     EVO:Rail

Cisco:       Teamed with Maxta & SimpliVity; investment in Stratoscale

EMC:         VSPEX Blue & ScaleIO

NetApp:      ONTAP EVO:Rail

As IBM’s server business transitions to Lenovo, the Chinese giant should replace IBM on the list. Lenovo hasn’t yet announced an HCI offering – but undoubtedly it will.

Springpath

Springpath, another HCI start-up, came out of stealth mode last week. Formerly known as Storvisor, Springpath was founded by a couple of VMware veterans (maybe they decided to grab the new name since VMware spun off SpringSource to Pivotal?).

Springpath has $34 million in funding from Sequoia, Redpoint and other VCs. VMware’s Duncan Epping wrote a complimentary piece about the company on Yellow-Bricks, though Forbes was somewhat critical. I do think that their subscription model is intriguing.

FalconStor

Last week also saw an erroneous headline claiming that FalconStor, a small but 15-year-old IT company, had announced a hyper-converged solution. The new FreeStor is actually a “horizontal converged data services platform”, but the mistake just goes to show how top-of-mind hyper-convergence has become.

Other HCI Players

In addition to industry leader Nutanix and the large legacy players and their partners mentioned above, the other manufacturers who have shipped or announced HCI solutions include:

Atlantis Computing

Citrix (via its recent Sanbolic acquisition)

DataCore

Huawei (partnered with DataCore)

NIMBOXX

Pivot3

Pure Storage

Scale Computing

StorMagic

VMware VSAN

Citrix’s Sanbolic acquisition: The s**t has hit the SAN

Citrix announced its acquisition of Sanbolic at its partner conference last week, giving it a hyper-converged solution to compete with VMware’s VSAN. But reading between the lines, the acquisition, along with VSAN, further validates that the SAN is dead for End User Computing (EUC).

The Hyper-converged Bandwagon

Citrix and VMware, of course, dominate the EUC market. Both organizations have been partnering with Nutanix for some time. They are enthusiastic about the ability of hyper-converged infrastructure to dramatically accelerate VDI by slashing the cost, complexity, risk and performance inconsistency of a virtual desktop deployment. With the advantages of hyper-converged infrastructure, 2015 may well finally be the year of VDI.

[Image: IDC hyper-converged market share chart]

A first-ever market share report for hyperconverged solutions from IDC ranks the contenders as of August 2014 based on execution, strategy, and market share. The size of a vendor’s bubble reflects that vendor’s market share. Nutanix is firmly in the leader position with 52% of the entire market. IDC’s next report will presumably include Citrix along with other datacenter incumbents and yet more start-ups.

What about Sanbolic vs. Nutanix?

Not unexpectedly, several partners at Citrix Summit asked me how the Sanbolic acquisition is going to affect Citrix’s relationship with Nutanix. I do not anticipate any disruption to the momentum between our companies, for the following four reasons:

  1. Sanbolic Technology Still Needs to be Improved and Integrated. Sanbolic is a 14-year-old company with approximately 30 employees. Sanbolic has been partnering with Citrix since at least 2010. The Nutanix and Citrix relationship is much more recent, but is rapidly building great traction, including the only hyper-converged CVS (Citrix Validated Solution). Nutanix already enables the software-defined simplicity, elegance and automation that Citrix CEO, Mark Templeton, spoke about in his keynote.
  2. VSAN Parallel. A similar situation has been taking place with VMware VSAN. Even though Nutanix competes with VMware in the hyper-converged space, we still have a strong partnership – particularly with VMware EUC. This bodes well for continued momentum with Citrix.
  3. Citrix-Specific Innovation. Nutanix continues to innovate to add further value for Citrix customers. At Summit, we formally announced the Nutanix Plugin for Citrix XenDesktop. This patented capability enables the desktop folks to handle the infrastructure tasks. The Plugin for XenDesktop provides full SLA management direct from the XenDesktop Studio Console. No one else does this.
  4. Citrix Partners. Nutanix continues to work closely with leading Citrix partners across the globe. At Summit, the most commonly expressed Citrix partner description of Nutanix was that it is a “game-changer.” As Jim Steinlage of Choice Solutions remarked, “[Nutanix] allows us to have the applications up and running much more timely and with more predictable results. And Nutanix enables us to achieve our goal of providing users with a better experience than their physical desktop from day one.”

The Rapidly Growing Competitive Landscape

Beyond the contenders featured in the IDC hyper-converged report, five of the seven leading datacenter hardware manufacturers have now launched or announced hyper-converged solutions (not even counting EVO:Rail offerings): Dell, HP, EMC, Cisco and NetApp. This is expected. Nutanix is not going to turn the $73B server and storage market on its head without lots of competition.

As the competition starts to mature and improve, the onus will be on Nutanix to continue innovating and raising the bar in areas such as performance, simplicity and scalability as well as in capabilities such as hybrid cloud enablement and management. This is the only way we will maintain our leadership position. I believe Nutanix is up to the challenge.

 

Channel partners rally behind Nutanix Web-scale converged infrastructure

“Really?!”

That was the one-word email I received from Nutanix’s Sr. VP of Sales (and my boss), Sudheesh Nair, in response to the Q4 2013 Piper Jaffray Storage VAR Survey. The surveyed partners ranked Nutanix second to last in terms of sales performance relative to plan.

Needless to say, I was frustrated. The channel perception of Nutanix was out of sync with Nutanix’s record-setting sales in 2013 as the fastest-growing infrastructure company of at least the past ten years.

But understanding and successfully positioning Nutanix has been a learning process for the channel. When Nutanix CEO Dheeraj Pandey first approached Lightspeed Venture Partners almost five years ago, he made it clear that his new company would disrupt the storage industry – including the venture capitalists’ existing investments. Unlike at most entrants into the suddenly popular hyper-converged space, this revolutionary vision is integral to everything we do at Nutanix.

Partners can’t simply pitch a “faster, cheaper, better” storage array as they can with the other early stage companies in the survey. Partners need to be able to articulate and evangelize to their clients how Web-scale is a sea change that is fundamentally altering the infrastructure of the modern, virtualized datacenter.

The Difference a Year Makes

2014 continued the trajectory of rocketing sales and, gratifyingly, a much broader spectrum of channel partners caught the Web-scale fever as well. From small partners building their businesses around Nutanix to multi-billion dollar channel organizations moving Fortune 500 clients over to Web-scale, Nutanix is changing the channel landscape.

According to the latest Piper Jaffray report, channel partners now rank Nutanix sales performance in the #1 position – ahead of CommVault, Dell Storage, EMC, HP Storage, NetApp, Nimble, Pure Storage, Veeam and VMware.

[Image: Piper Jaffray survey results]

 

The Sterne Agee Channel Survey similarly shows a huge improvement in channel recognition of Nutanix. Channel partners listed Nutanix as the second-leading company disrupting the established storage sector – right behind Pure Storage (and quickly catching up). Nutanix is ranked ahead of Nimble (and rapidly increasing the spread), and far ahead of Tintri, Violin Memory, Nimbus Data, Nexenta, SolidFire and everyone else.

[Image: Sterne Agee survey results]

 

Looking Forward to 2015

It’s exciting to see Nutanix partners across the world enthusiastically embrace the Web-scale opportunity. They’re leveraging Nutanix to differentiate their companies, gain new customers, increase sales and shorten sales cycles.

I want to thank all of our partners for your continued faith and trust. The good news is that Nutanix is really just getting started. New capabilities such as one-click hypervisor upgrades, metro availability, connectivity to AWS and Microsoft Azure, among many others, mean extraordinary continued opportunity in the year ahead.


When a channel partner looks in the mirror, does a trusted advisor look back?

In my former position as VP of Cloud and Virtualization at Presidio, I frequently used financial modeling to assist our reps, but did not drive sales on my own. That changed after I learned about Nutanix.

I loved the no-SAN concept and was curious to see how it would actually play in Peoria. I pitched a savvy CIO who had participated in an EDUCAUSE panel I moderated, and she was immediately intrigued. But the Chicago office of Presidio was reluctant to work with a new manufacturer, so I made the sale myself and convinced another region with which I had stronger ties to process the paperwork.

The experience should have tipped me off as to the type of situation I would face in my dual channel and strategic sales role at Nutanix. While it’s been surprisingly easy to sell web-scale converged infrastructure to former clients who have called me, or vice versa (always running the deals through partners, of course), it’s often difficult to get buy-in from VARs – especially from large ones.

[Image: manufacturer rep cartoon, part 1]

The Channel Partner Perspective

I had dinner a few days ago with the VP of Sales of a sizable regional VAR. He asked me how much business our top partner would do with us this year. I told him that one organization had a plan in place to sell $50M in our new fiscal year, though internally we pared it down to be conservative. The VP told me that his company will do $90M this year with EMC alone.

As enamored as he and his team were with our technology, I could tell he was thinking about how he could realistically present it internally. Even matching the sales of Nutanix’s largest partner wouldn’t come anywhere near the business he’s driving with EMC and Cisco. How could he convince his executive team that they should risk the wrath of their two largest vendors by promoting Nutanix?

And, suppose he did manage to persuade the executive team to go all in with web-scale; they still would have to get their sales reps on-board. The reps have established relationships with legacy manufacturers, are trained and experienced in selling their products and depend upon them for opportunities. These “coin-operated” reps do not readily gravitate toward promoting new technologies.

[Image: manufacturer rep cartoon, part 2]

The Customer Perspective

If I were a CIO, I would not want a solutions provider who simply brought me different product configurations from a leading datacenter manufacturer – I could find that information myself on the Web. I’d want to work with a partner who was diligent enough to constantly investigate new promising technologies, and who was astute enough to discern which ones could have a positive impact on my organization. I’d expect the partner to bring those options and his recommendations to me for review.

VARs that close-mindedly mimic their vendor perspectives risk becoming, in the eyes of customers, glorified manufacturer reps. An EMC partner, for example, might feel confident today in leveraging a trusted relationship with a CIO to advocate Vblock as the best option for a VDI deployment. But the probability is increasing that the CIO will learn on her own that she could have implemented a similar project at a fraction of the cost and with none of the risk by utilizing web-scale. She will consequently feel her partner is either uninformed or, worse, acting in EMC’s rather than in her best interest.

[Image: manufacturer rep cartoon, part 3]

Preserving the Customer Relationship

Channel partners tell me that large enterprises move very slowly – the implication being that they have plenty of time to continue making lots of money by promoting legacy 3-tier infrastructure. Perhaps they’re correct, but it’s a dangerous way to conduct business.

Henry Ford famously said, “If I had asked people what they wanted, they would have said faster horses.” Just because a customer asks for more storage doesn’t mean a solutions provider should limit the conversation to arrays. Providers can take the opportunity to educate their clients about how Google and the leading cloud providers have moved away from SANs and ancient (1987) RAID technology. They can discuss the advantages of web-scale converged infrastructure and whether or not the architecture might be appropriate for the customer’s environment.

Even if the customer decides, for whatever reason, to go with traditional 3-tier infrastructure, at least the channel partner looked out for the customer’s best interest. Over time, as web-scale/hyper-converged infrastructure becomes the virtualized datacenter standard, the customer will appreciate the effort and integrity of the partner for introducing it.

The Playing Field has Already Changed

I don’t agree with the premise that big enterprises will continue to move slowly. External pressures from public cloud and internal pressures from much more rapidly changing technologies will force enterprises to change more quickly as well.

Just look at web-scale. Almost overnight it has jumped solidly into the mainstream. VMware’s endorsement of hyper-converged infrastructure as the platform of choice for hosting virtual machines leaves no doubt as to the future direction of virtualized datacenter architecture.

Then there’s Dell – one of the “big seven” who collectively drive 76% ($56B) of the annual server and storage business. Dell also blessed hyper-converged architecture last week with its launch of the Dell XC Series: Web-scale Converged Appliances. Yet another of the “big seven”, EMC, has said it will develop its own EVO:Rail offering. Even HP is weighing in, both with an EVO:Rail solution and with its own StoreVirtual product. Cisco is showing signs of making the leap as well.

This massive validation during the past few months by the leading datacenter players enables solution providers to bring up web-scale without concern of appearing “bleeding edge”. It also means that they should, with at least some degree of impunity, be able to focus on hyper-converged solutions by creating a separate division explicitly for this purpose.

However they do it, I strongly encourage channel partners to figure out a way to get engaged with web-scale. Nutanix continues, and is even accelerating, our trajectory as the fastest-growing infrastructure company of the past decade. This provides an extraordinary opportunity for forward-thinking partners to grow along with us.

The VCE dissolution: here comes channel disruption

“Partnering is more difficult than acquisitions…Most strategic coalitions have a very high failure rate, worse than acquiring, and yet as a company, we’ve all three been able to do this.”
-John Chambers, 2009

The waves of disruption are starting to break.

Several publications (see Sources) reported today that EMC is folding VCE into its business and buying out most of Cisco’s stake. Assuming this is true (EMC is holding a big announcement tomorrow morning), it is another huge indicator of the massive datacenter disruption that’s coming. While I think that EMC will certainly still promote Vblocks, it’s hard to imagine that they’ll do so as enthusiastically as they did in conjunction with VCE.

Selling Vblocks

VCE has been on a $1.8 billion run rate. Vblock partners tend to love the product because they make a lot of money from selling, installing and upgrading it. One VCE partner told me that every time VMware upgrades vSphere, customers have to upgrade their Vblocks. This is a laborious process often requiring a team of consultants working up to three days to accomplish. It translates to great services business.

I have to admit, I was surprised at how well VCE has done during the past five years. When I first heard about Acadia (as VCE was originally called), I thought that there was no way this product was going to sell. The idea of getting the server, storage and networking folks to all come together at the same time and agree upon a common platform purchase seemed to be an insurmountable challenge. A year later I was still somewhat skeptical. I even wrote a blog post about the sales incentive problems with misaligned quarter endings.

But I misjudged the desperation many IT staffs felt as they increasingly virtualized their datacenters. They faced huge challenges in deployment time, finger-pointing between server and storage manufacturers, and functional group collaboration. The top-notch salespeople from VCE, along with channel partner support, convinced many of them that “the world’s most advanced converged infrastructure” was an answer to their struggles.


Working for a channel partner that moved a whole lot of both Cisco UCS and EMC, I jumped on the Vblock bandwagon and helped facilitate a fair number of sales utilizing ROI analysis. But I always felt that there had to be a more elegant solution to the challenges of hosting a virtualized datacenter than simply integrating separate products as a single SKU. Once I learned about the Nutanix web-scale architecture, I became convinced that it was a vastly superior alternative.

Channel Implications

A Taneja Group report comparing Nutanix with VCE (sponsored by Nutanix) was just released today. The report states, “Taneja has found that the majority of VCE customers adopted Vblocks because they already had active VCE or VCE-partner sales teams coming at them.”

It will be interesting to see how Vblock partners fare without the huge VCE focus and assistance. On the plus side, it’s almost a certainty that they’ll no longer have to promote Cisco’s ACI over VMware’s NSX. On the negative, they can say goodbye to the monetary incentives from the recently established joint channel program between EMC and Cisco.

Not surprisingly, I’d like to see more Vblock partners get on board with web-scale. In my opinion, selling Vblocks in comparison to selling Nutanix is like pushing rope. Even some die-hard very large Vblock customers have now started migrating to web-scale.

The Taneja Group report says it well, “We believe that even data centers that are happy enough with converged systems today will look to hyperconverged systems tomorrow. Better yet, instead of investing in a traditional convergence solution, businesses should consider going directly to a next-generation solution like Nutanix.”

Sources:

Cisco Said to be Selling Most of VCE Stake to EMC. 10/22/2014. Bob Brown. Computerworld.

Report: EMC to Take a Bigger Role in VCE as Cisco Reduces Stake. 10/21/2014. Barb Darrow. Gigaom.

The End of Pretend? Cisco Looks to Partially Exit VCE Joint Venture. 10/21/2014. Ben Kepes. Forbes.

EMC Said to Absorb VCE Joint Venture as Cisco Reduces Stake. 10/21/2014. Dina Bass & Peter Burrows. Bloomberg.

Tech Titans Unite for Private Cloud Push. 11/05/2009. Jennifer Kavur. IT World Canada. http://www.itworldcanada.com/article/tech-titans-unite-for-private-cloud-push/40087

Channel Disrupt – an Introduction

“It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.”
-Charles Darwin

I’ve headed up channel sales for the Americas at Nutanix for 19 months now, but previously worked on the partner side of the IT infrastructure channel for 25 years. During that time, I saw a lot of disruptive technologies such as Ethernet, VoIP, the Internet, etc. But from a datacenter standpoint, nothing came close to the impact of virtualization.

VMware’s introduction of vMotion 11 years ago turned out to be an incredible boon to the IT channel. Solutions providers grew in both number and size as they helped organizations across the globe virtualize their datacenters through the acquisition of high-end hosting servers, SANs and switch fabrics.

Many VARs, especially the larger ones, were reluctant to fully embrace VMware’s revolutionary technology. But they were fortunate in that virtualization spread surprisingly slowly; they generally had time to adapt. The VMware consultancy I ran, for example, was purchased after only three years in business by a much larger publicly-traded Cisco partner desperate to quickly acquire virtualization expertise.

Today’s disruptive climate is dramatically different. Rather than enjoying an exploding increase in purchases of traditional datacenter hardware, solutions providers are going to see the opposite take place. And the timeframe is going to be very fast.

Consider:

  • Web-scale technology, as introduced to the enterprise by Nutanix three years ago, has gained exceptionally fast mind share across the globe, including an OEM offering from Dell.
  • VMware has introduced its own hyper-converged offering, EVO:Rail, which competes with both its long-term storage partners and its parent company, EMC.
  • HP is jumping on the hyper-converged bandwagon both with EVO:Rail and with an updated version of its LeftHand Networks technology. And it’s splitting off half of its business in order to focus on the enterprise space.
  • EMC, the leading storage manufacturer, has been seeking to be acquired.
  • VCE, the leading player in the so-called “converged infrastructure” space, is seeing increasing animosity between two of its primary partners, and Cisco is rumored to have turned off the financial spigot.
  • AWS is making big strides as it works to take away everyone’s hardware business.

According to IDC, the total server and storage market is now over $70B annually. An incredible $56B of this business is done by only seven vendors (eight once Lenovo’s purchase of IBM’s server business is reflected): EMC, HP, IBM, Dell, NetApp, Oracle and Cisco. Most of this revenue flows through channel partners. As the status quo business decreases, resellers are going to have to react very quickly to compensate.

Manufacturer    2014 Server Revenue     2014 Storage Revenue    Total
                (millions, annualized)  (millions, annualized)  (millions)
HP              $12,776                 $2,384                  $15,160
IBM             $11,888                 $2,848                  $14,736
Dell            $8,332                  $1,700                  $10,032
EMC             —                       $7,056                  $7,056
NetApp          —                       $3,060                  $3,060
Oracle          $2,948                  —                       $2,948
Cisco           $2,908                  —                       $2,908
Subtotal        $38,852                 $17,048                 $55,900
Hitachi         —                       $1,492                  $1,492
ODM Direct      $3,340                  —                       $3,340
Others          $8,084                  $4,940                  $13,024
Total           $50,276                 $23,480                 $73,756

Putting the Customer Second

All solutions providers say that they have their customers’ best interests at heart. But infrastructure VARs are typically dependent upon just one or a few of the handful of leading datacenter manufacturers for most of their business.

The VARs make significant investments in training for both salespeople and technical folks. They work to obtain both individual and organizational certifications. They attend manufacturer conferences, engage in manufacturer-led demand-generation events, and develop close relationships with their manufacturer partners. In return, the resellers receive Marketing Development Funds, access to sales and engineering resources and, most importantly, opportunities.

This channel structure has worked quite well since the early days of IBM-initiated solutions integrators, and many reseller organizations are now doing hundreds of millions or even billions of dollars in annual revenues. But the channel structure can often make it challenging to put customer advocacy ahead of manufacturer loyalty.

When a manufacturer, for example, introduces a partner into a new account, that partner has to push the manufacturer’s products regardless of whether or not the technology is the best fit. Introducing a competitive product would sound the death knell for future opportunities.

Partners of the leading datacenter incumbents need to be careful about mentioning one of the newer disruptive technologies as an option, even in their existing accounts. The partner will typically position it, if at all, only in situations where the incumbent either lacks a competitive product or is not likely to notice. The alternative is to risk the potential wrath of the datacenter giant.

So while a channel partner will often privately concede that, say, web-scale infrastructure makes a lot more sense for a virtualized datacenter than a Vblock, it can’t even bring up this option to the customer for fear of jeopardizing its large Cisco and EMC revenue streams. The manufacturer relationships supersede the customer’s best interest.

In Boldness There’s Opportunity

Ray Noorda of Novell used to tell his partners, “In mystery, there’s margin”. Today’s corollary might be, “In boldness, there’s opportunity”. The handful of infrastructure giants are selling virtualized organizations tens of billions of dollars’ worth of equipment that was designed and optimized for the physical datacenter.

Enterprising channel partners have an extraordinary opportunity to educate clients and prospects about the advantages of web-scale converged infrastructure. Not only is the revenue and profit potential vast, but they can also differentiate themselves from the pack as innovative, forward-thinking, and as leaders in integrating cloud technologies.

Sources
Weak Demand for Storage Systems…as Worldwide External Disk Storage Systems Revenue Falls for Second Consecutive Quarter. 09/05/2014. IDC Press Release.
Server Refresh Cycle Propels Industry Forward in Q2. 08/27/2014. Charlie Osborne. ZDNet.