Hyperconvergence players

[Author note: This post has been updated and moved to By The Bell: http://bythebell.com/2016/01/hyperconverged-players-index.html]

While, according to IDC (via SiliconANGLE), “Nutanix generated 52 percent of all global hyperconverged revenue during the first half of 2014”, many other legacy datacenter players and startups have introduced hyper-converged infrastructure (HCI) offerings. The following is a list of all the known (to me) hyperconvergence players:

1. Atlantis Computing – Atlantis HyperScale
2. Breqwatr – All-flash appliance
3. Cisco – Investment in Stratoscale; selling arrangements with Maxta & SimpliVity
4. Citrix – Sanbolic
5. DataCore – DataCore Hyper-Converged Virtual SAN
6. Dell – Dell XC (Nutanix OEM) & EVO:RAIL
7. EMC – VSPEX BLUE, ScaleIO & VxRack (VCE)
8. Fujitsu – EVO:RAIL
9. Gridstore – Private cloud in a box
10. HPE – StoreVirtual & EVO:RAIL
11. Hitachi Data Systems – Unified Compute Platform 1000 for VMware EVO:RAIL
12. HTBase – HTVCenter
13. Huawei – FusionCube
14. Idealstor – Idealstor IHS
15. IBM – Announced HCI strategy
16. Lenovo – Nutanix OEM; EVO:RAIL; selling arrangements with StorMagic, Maxta and SimpliVity
17. Maxta – Hyper-convergence for OpenStack
18. NetApp – NetApp Integrated VMware EVO:RAIL Solution
19. NIMBOXX – Hyperconverged infrastructure solutions
20. NodeWeaver – NodeWeaver Appliance Series
21. Nutanix – Xtreme Computing Platform
22. Pivot3 – Enterprise HCI all-flash appliance
23. Pure Storage – Possible HCI solution coming
24. Rugged Cloud – HCI
25. Scale Computing – HC3
26. SimpliVity – OmniCube (hardware-assisted SDS)
27. Sphere 3D – V3 VDI
28. Springpath – Independent IT infrastructure
29. StarWind – StarWind Hyper-Converged Platform
30. Stratoscale – The Data Center Operating System
31. StorMagic – SvSAN
32. Supermicro – EVO:RAIL
33. VMware – EVO:RAIL, VSAN, EVO:RACK
34. Yottabyte – yStor
35. ZeroStack – ZeroStack Cloud Platform

The 10 ways in which Nutanix is Uberizing the datacenter


Millennials today probably chuckle at how taxi drivers once drove around randomly and aimlessly looking for fares while would-be passengers stood on street corners trying to hail a cab.

With the exception of two-way radio and computer-assisted dispatching innovations, the taxi business was stagnant for 100 years. Uber applied new technologies to vastly improve the customer experience and, in the process, turned the industry upside down. Other companies such as Netflix, Apple and Amazon similarly used new technology to shake up the video store, record company, newspaper, and book store businesses, among many others.

The traditional datacenter is not just more inefficient than the taxi industry; it’s dysfunctional. Proprietary storage arrays, dedicated switch fabrics and storage-specific administrative requirements inhibit simplicity, scalability and resiliency. Inflexible silos of specialized IT skills and technology islands of different equipment compound the wastefulness and high cost.

Young people starting work in IT are often flabbergasted by the processes and complexity. They’re used to being able to instantly download a new app to their iPhone with a few swipes. Now they have to wait weeks, if not months, for server, storage and networking components to be ordered and configured before they can stand up their applications.

The datacenter has long been primed for an Uber-like disruption. Here are the ten ways in which Nutanix is making it happen by transforming the IT customer experience:

1.  Leveraging New Technology to Simplify the Environment

Imagine going back in time 25 years and trying to explain to a taxi patron (probably standing in the rain, fruitlessly trying to hail a cab) that combining future Web, GPS and smartphone technologies would alleviate her transportation struggles. Skepticism would be the likely outcome.

But Uber streamlined the taxi “transportation stack” from driver to dispatcher to consumer. This disintermediation replaced complexity and anxiety with simplicity and certainty. As would be expected, traditional taxi sales and medallion prices have plummeted.

[Chart: taxi medallion prices]

Nutanix similarly converges the infrastructure stack to build what CEO Dheeraj Pandey calls, “The iPhone of the datacenter.” Intuitive VM-centric storage management combined with Web-scale technologies eliminates the complexity of buying, deploying and administering datacenter infrastructure.

This simplicity extends to all areas of the virtualized environment including seamless business continuity, GUI-driven disaster recovery schema, test and development, private clouds, backup, branch office management and more. Nutanix even enables one-click non-disruptive upgrades not only of the Nutanix OS, but of the underlying hypervisors and disk firmware.

2.  Software Defined Innovation

There was a time when a storage manufacturer could build an empire on a single feature such as deduplication, but today’s fiercely competitive environment penalizes the lack of innovation. Seeking Alpha recently observed, “These aren’t great times for legacy storage companies.” The storage leaders are seeing declining sales, and based upon the shrinking gross margins of EMC and NetApp, even lower prices aren’t helping.

Nutanix jolted the status quo over three years ago with the first storage and compute platform built specifically for hosting a virtualized datacenter. The exceptional popularity of its hyperconverged approach quickly reverberated throughout the industry. Today every leading storage manufacturer, a whole slew of start-ups, and even VMware, Citrix and Microsoft have either introduced or announced a hyperconverged solution.

But Nutanix continues to innovate at a furious pace. Its engineering department doesn’t have a lot of ex-storage folks. Instead, engineers with backgrounds from Web-scale companies such as Google, Facebook and Twitter build massively scalable, very simple and low-cost infrastructure. It’s a completely different mindset, and it leads to rapid development in response to customer and partner requests.

While some innovations, such as the industry’s first hyperconverged all-flash node, utilize commodity hardware form factors, most are delivered strictly via software (Tesla-style). Recent examples include Metro Availability (for active/active datacenters), MapReduce Deduplication, Cloud Connect, shadow volumes, Plugin for Citrix XenDesktop, among many others.

3.  Automation and Analytics


Chris Matys of Georgian Partners wrote, “Uber’s use of data science is perhaps the most disruptive – and therefore awe-inspiring – aspect of what it does.” Matys describes how Uber uses applied analytics to “drive efficiency and create positive user experiences.”

Nutanix also utilizes extensive automation and rich system-wide monitoring for data-driven efficiency, combined with REST-based programmatic interfaces for integration with datacenter management tools. Rich data analytics such as Cluster Health enable administrators to receive real-time alerts as the Nutanix system monitors itself for potential problems, investigates and determines root cause, and then proactively resolves issues to restore system health and maintain application uptime.
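As a sketch of what such REST-based integration looks like, the snippet below builds an alert query and filters the decoded response. The endpoint path, port, and field names here are hypothetical illustrations for the pattern, not the documented Prism API.

```python
# Sketch of polling a hyperconverged cluster's REST API for health alerts.
# NOTE: the endpoint path, port, and response fields below are hypothetical
# illustrations of the integration pattern, not the documented Prism API.

def alerts_url(host: str, severity: str = "critical") -> str:
    """Build the (hypothetical) alerts endpoint URL for a cluster."""
    return f"https://{host}:9440/api/v1/alerts?severity={severity}"

def unresolved(alerts: list) -> list:
    """Filter a decoded JSON alert list down to unresolved entries."""
    return [a for a in alerts if not a.get("resolved", False)]

# In a real integration you would GET alerts_url(...) with an HTTP client
# and pass the decoded JSON to unresolved(); here we use sample data.
sample = [
    {"id": "A1", "resolved": False, "message": "disk latency high"},
    {"id": "A2", "resolved": True,  "message": "node rejoined"},
]
print(alerts_url("prism.example.com"))
print([a["id"] for a in unresolved(sample)])
```

A monitoring tool would simply poll such an endpoint on a schedule and raise tickets for whatever comes back unresolved.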

Many customers and partners say that Nutanix’s management interface is the most intuitive in the industry. Prism Central dashboards aggregate multi-cluster hardware, VM and health statistics into a single management window.


4.  Predictability

In most major cities, a limited number of taxi medallions makes hailing a cab a hit-or-miss proposition – especially when a local convention or pouring rain increases demand. The same is true of 3-tier infrastructure. Faster growth than anticipated, new applications or use cases, the purchase of another company – all can, and all too frequently do, overwhelm a SAN and its dedicated network, causing inconsistent and degraded performance.

When SAN customers fill up an array or reach the limit on controller performance, they must upgrade to a larger model to facilitate additional expansion. Besides the cost of the new SAN, the upgrade itself is no easy feat. Wikibon estimates that the migration cost to a new array is 54% of the original array cost.

The Nutanix Controller VM lives on every node and distributes all data, metadata and operations across the entire cluster, eliminating performance bottlenecks. Add linear scalability, and the result is predictable performance and precise budgeting.

5.  Reliability

Uncertainty about taxi availability or a functioning credit card machine makes taxicabs a less reliable mode of transportation than Uber or Lyft. Uber provides both visibility and predictability while finding the best vehicle fit for a transportation request and directing it to the customer.

SANs are more like taxis than ride-sharing services. Most use RAID technology, which was invented in 1987 and is archaic by today’s standards. Loss of a storage controller can cut available performance in half. Losing two drives in a RAID 5 configuration, user errors, power failures and many other issues can cause unplanned downtime.

Nutanix keeps multiple copies of data and metadata both local to the VM running the active workload as well as throughout the cluster. In the event of failure, MapReduce technology is leveraged to deliver non-disruptive and quick rebuilds.

The Nutanix Distributed File System is designed for hardware failure and is self-healing. Always-on operation includes detection of silent data corruption and repair of errors around data consistency, automatic data integrity checks during reads, and automatic isolation and recovery during drive failures.

6.  Fractional Consumption

Uber patrons love the way it makes payments invisible. They no longer have to contend with slow or broken credit card machines or calculating the tip at the end of the ride.


Purchasing traditional infrastructure tends to require large outlays for storage arrays, blade chassis and expensive networking switches. Because the entire cost is often borne by the business unit whose VM request happens to exceed existing capacity, this “staircase” purchasing model inhibits a completely virtualized datacenter.

Nutanix reduces budgeting challenges by enabling purchases in bite-sized increments only as needed – including mixing compute heavy and storage heavy nodes. This fractional consumption model also facilitates private cloud by simplifying development of a meaningful charge-back/show-back system.

7.  Lower Cost

Uber is generally, albeit not always, less expensive than taxis. But when taking account of the vastly improved user experience and other benefits, many riders still gladly pay a higher price.


Nutanix similarly may not always be less expensive than 3-tier infrastructure in terms of up-front acquisition cost. But even in those cases, factoring in other important variables typically yields a lower total cost than the three-tier competition. These variables include (but are not limited to) rack space, power, cooling, switching fabric, planned and unplanned downtime, administrative cost and the effects of Moore’s Law.

8.  Egalitarianism

While certainly not immune from controversy, Uber tends to have an egalitarian feel. All passengers enjoy the same type of limo-like service previously reserved for the rich.

The storage manufacturers have long been able to get away with complex solutions, high maintenance costs and mandatory forklift refreshes because of the proprietary nature of their products. In response to demands from virtualized customers for an infrastructure solution that is faster to deploy and troubleshoot, they came up with so-called “converged infrastructure.”

“Converged infrastructure” is the mother of all misnomers; there may be added cost and still less flexibility compared with buying individual components, but there is not a molecule of converged infrastructure in “converged infrastructure.” Convergence implies, as was the case with VoIP, consolidation of redundant hardware and elimination of multiple management tiers. Neither is true with converged infrastructure which has thrived by addressing customer pain with prepackaged legacy servers, storage and network.

Nutanix, on the other hand, takes the Web-scale approach of moving all of the intelligence out of the hardware and into software, eliminating redundant equipment and management tiers. A low entry cost and simple administration without requiring storage and networking specialists enables world-class infrastructure for the world’s largest enterprises as well as for SMBs.

9.  Passion

The typical taxi experience is rarely associated with passion. Uber users, on the other hand, tend to be quite vocal about their enthusiasm for the service.

Nutanix has a singular focus on revolutionizing the virtualized datacenter. Contrast this passion with the legacy players’ challenge of selling archaic array technologies side-by-side with their hyperconverged offerings.


Nutanix customers tend to be huge fans of both the technology and of the organization. This is reflected in Nutanix’s astounding Net Promoter Score of 90 and in winning the prestigious Omega NorthFace Award for exceptional customer satisfaction and loyalty for the last two years in a row.

10.  Transparency

Taxi riders generally have an idea about the cost of the trip, but traffic jams, toll fees and other unexpected charges can significantly increase the total expense. Uber enables riders to know exactly what their ultimate cost, including tips, will be. In response to widespread customer complaints, Uber even made its surge-pricing transparent.

Lack of transparency is a sore point in the IT infrastructure industry. And the complexity of three-tier infrastructure, particularly storage arrays, promotes functional isolation and lack of visibility.

[Schematic: Transparency]

Architectures built on Nutanix Web-scale infrastructure are simple to deploy, administer and scale. And Nutanix is transparent about how our technology works. No secrets, no politics, no misleading claims. The schematic above is an example of the type of product functionality detailed on www.nutanixbible.com.

What’s Next?

Whether Uber or taxi, the goal is to arrive at a destination for some sort of purpose; perhaps a job interview or meeting a spouse for dinner. The ride that takes you there is really immaterial. It should be pleasant but seamless and predictable.

Datacenter infrastructure exists only to support enterprise applications and the business objectives they facilitate. Nutanix’s ACT I, hyperconvergence, set the stage for making infrastructure invisible.

At the Nutanix.NEXT user conference next week in Miami, we’ll be unveiling our ACT II. We’ll show how we’re transforming the datacenter to put the emphasis on applications rather than on infrastructure.

Related Articles

The Ten Reasons Why Moore’s Law is Accelerating Hyper-Convergence. Steve Kaplan. 04/06/2015. ChannelDisrupt.

This is the Financial Proof that Uber is Destroying Taxi Companies. Jim Edwards. 02/27/2015. Business Insider.

After Getting Crushed by Uber, NYC Taxi Mogul Demands a Government Bailout. Brad Reed. 04/14/2014. BGR.

Thanks to Sudheesh Nair (@sudheenair), Prabu Rambadran (@_praburam), Payam Farazi (@farazip), James Pung (@james_nutanix) and Ryan Hesson (@RyanHesson1) for suggestions.

The 10 reasons why Moore’s Law is accelerating hyper-convergence

SAN manufacturers are in trouble.

IDC says that array vendor market share is flat despite continued massive growth in storage.

[Chart: IDC storage market share]

Hyper-convergence (HC) contributes to SAN manufacturer woes. The March 23, 2015 PiperJaffray research report states, “We believe EMC is losing share in the converged infrastructure market to vendors such as Nutanix.”

One of the most compelling advantages of HC is the cost savings. This is particularly evident when evaluated within the context of Moore’s Law.

Moore’s Law – Friend to Hyper-Convergence, Enemy to SAN

Moore’s Law – the observation that the number of transistors on a processor doubles roughly every 18 months to two years – has long powered the IT industry. Laptops, the World Wide Web, the iPhone and cloud computing are examples of technologies enabled by ever faster CPUs.

Moore’s Law in Action (via Imgur)

Innovative CPU manufacturing approaches such as increasing the number of cores, photonics and memristors should continue the Moore’s Law trajectory for a long time to come. The newly released Intel Haswell E5-2600 CPUs, for example, show performance gains of 18%–30% over their Sandy Bridge predecessors.

Here are the 10 reasons why Moore’s Law is an essential consideration when evaluating hyper-convergence versus traditional 3-tier infrastructure:

1.  SANs were built for physical, not virtual infrastructure.

Virtualization is an example of an IT industry innovation made possible by Moore’s Law. But while higher-performing servers, particularly Cisco UCS, helped optimize virtualization capabilities, arrays remained mired in the physical world for which they were designed. Even all-flash arrays are constrained by the transport latency between the storage and compute which does not evolve as quickly.

The following image from Chad Sakac’s post, VMware I/O queues, “micro-bursting”, and multipathing, shows the complexity (meaning higher costs) of supporting virtual machines with a SAN architecture.

[Image from Chad Sakac’s post]

HC: Hyper-convergence was built from the ground up to host a virtualized datacenter (“Hyper” in hyper-convergence refers to “hypervisor”, not to “ultra”). The image below from Andre Leibovici’s post, Nutanix Traffic Routing: Setting the Story Straight, shows the much more elegant and efficient access to data enabled by HC.

[Image from Andre Leibovici’s post]

2.  Customers are stuck with old SAN technology even as server performance quickly improves.

A SAN’s firmware is tightly coupled with the processors; new CPUs can’t simply be plugged in. And proprietary SANs are produced on an assembly line basis in any case – quick retooling is not possible. When a customer purchases a brand new SAN, the storage controllers are probably at least one generation behind.

HC: HC decouples the storage code from the processors. As new nodes are added to the environment, customers benefit from the performance increases of the latest technology in CPU, memory, flash and disk.

Table 1 shows an example of an organization projecting a 20% increase in server workloads per year. The table also reflects a 20% density increase of VMs per Nutanix node – conservative by historical trends.

Fourteen nodes are required to support 700 VMs in Year 1, but only 8 more nodes support the 1,452 workloads in Year 5. And the total rack unit space required increases only 50% – from 8U to 12U.

Table 1:  Example of decreasing number of nodes required to host increasing VMs
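The arithmetic behind a Table 1-style projection can be sketched in a few lines. The 700-VM starting point and the 20% growth and density figures come from the text; the 50-VMs-per-node starting density is inferred from 700 VMs on 14 nodes, and the exact purchase schedule will differ from the table depending on rounding assumptions.

```python
# Back-of-the-envelope version of a Table 1-style projection: VM count grows
# 20%/yr while per-node density (VMs a newly purchased node can host) also
# improves 20%/yr. The 50 VMs/node starting density is inferred from
# 700 VMs on 14 nodes; exact figures depend on rounding assumptions.
def project_nodes(start_vms=700, growth=0.20, start_density=50.0,
                  density_gain=0.20, years=5):
    result = []        # (year, total_vms, nodes_bought_that_year)
    capacity = 0.0     # VMs the cluster can currently host
    for year in range(1, years + 1):
        vms = start_vms * (1 + growth) ** (year - 1)
        density = start_density * (1 + density_gain) ** (year - 1)
        bought = 0
        while capacity < vms:   # buy nodes at this year's density
            capacity += density
            bought += 1
        result.append((year, round(vms), bought))
    return result

for year, vms, bought in project_nodes():
    print(f"Year {year}: {vms:>5} VMs, buy {bought} nodes")
```

Because each incremental node is denser than the last, the purchase count per year shrinks relative to VM growth – the effect the table is illustrating.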

3.  A SAN performs best on the day it is installed. After that it’s downhill.

Josh Odgers wrote about how a SAN’s performance degrades as it scales. Adding more servers to the environment, or even more storage shelves to the SAN, reduces the IOPS per virtualization host. Table 2 (from Odgers’ post) shows how IOPS per server decrease as additional servers are added to the environment.


Table 2:  IOPS Per Server Decline when Connected to a SAN

HC: As nodes are added, storage controllers (which are virtual), read cache and read/write cache (flash storage) all scale either linearly or better (because of Moore’s Law enhancements).

4.  Customers must over-purchase SAN capacity.

When SAN customers fill up an array or reach the limit on controller performance, they must upgrade to a larger model to facilitate additional expansion. Besides the cost of the new SAN, the upgrade itself is no easy feat. Wikibon estimates that the migration cost to a new array is 54% of the original array cost.

To try to avoid this expense and complexity, customers buy extra capacity/headroom up-front that may not be utilized for two to five years. This high initial investment cost hurts the project ROI. Moore’s Law then ensures the SAN technology becomes increasingly archaic (and therefore less cost effective) by the time it’s utilized.

Even buying lots of extra headroom up-front is no guarantee of avoiding a forklift upgrade. Faster growth than anticipated, new applications, new use cases, purchase of another company, etc. all can, and all too frequently do, lead to under-purchasing SAN capacity. A Gartner study, for example, showed that 90% of the time organizations under-buy storage for VDI deployments.

HC: HC nodes are consumed on a fractional basis – one node at a time. As customers expand their environments, they incorporate the latest technology. Fractional consumption makes under-buying a non-issue. In fact, it is economically advantageous for customers to start out with only what they need up-front, because Moore’s Law quickly ensures higher VM-per-node density in future purchases.

5.  A SAN incurs excess depreciation expense

The extra array capacity a customer purchases up-front starts depreciating on day one. By the time the capacity is fully utilized down the road, the customer has absorbed a lot of depreciation expense along with the extra rack space, power and cooling costs.

Table 3 shows an example of excess array/controller capacity purchased up front that depreciates over the next several years.


Table 3:  Excess Capacity Depreciation

HC: Fractional consumption eliminates the requirement to buy extra capacity up-front, minimizing depreciation expense.
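The excess-depreciation argument in Table 3 boils down to straight-line depreciation on capacity bought up front but not yet used. This sketch uses illustrative numbers, not the table's actual figures.

```python
# Straight-line depreciation on up-front SAN headroom that sits unused.
# All figures are illustrative, not Table 3's actual numbers.
def excess_depreciation(array_cost, utilization_by_year, life_years=5):
    """Sum the depreciation expense attributable to unused capacity.

    utilization_by_year: fraction of capacity actually in use each year.
    """
    annual_dep = array_cost / life_years
    return sum(annual_dep * (1 - used) for used in utilization_by_year)

# Example: a $500K array that ramps from 40% to 100% utilized over 5 years.
wasted = excess_depreciation(500_000, [0.4, 0.55, 0.7, 0.85, 1.0])
print(f"Depreciation on unused capacity: ${wasted:,.0f}")
```

With these assumed numbers, roughly $150K of the array's depreciation is charged against capacity nobody was using – the cost that fractional consumption avoids.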

6.  SAN “lock-in” accelerates its decline in value

The proprietary nature of a SAN further accelerates its depreciation. A Nutanix customer, a mortgage company, had purchased a Vblock 320 (list price $885K) one year before deciding to migrate to Nutanix. A leading refurbished-equipment specialist was only willing to give them $27,000 for their one-year-old Vblock.

While perhaps not a common problem, in some cases modest array upgrades are difficult or impossible because of an inability to get the required components.

HC: An HC solution utilizing commodity hardware also depreciates quickly due to Moore’s Law, but there are a few mitigating factors:

  • In a truly software-defined HC solution, enhancements in the OS can be applied to the older nodes. This increases performance while enabling the same capabilities and features as newer nodes.
  • Since an organization typically purchases nodes over time, the older nodes can easily be redeployed for other use cases.
  • If an organization wanted to abandon HC, it could simply vMotion/live migrate VMs off of the nodes, erase them and then re-purpose the hardware as basic servers with SSD/HDDs ready to go.


7.  SANs Require a Staircase Purchase Model

A SAN is typically upgraded by adding new storage shelves until the controllers, or the array or expansion cabinets, reach capacity. A new SAN is then required. This is an inefficient way to spend IT dollars.

It is also anathema to private cloud. As resources reach capacity, IT has no option but to ask the next service requestor to bear the burden of required expansion. Pity the business unit with a VM request just barely exceeding existing capacity. IT may ask it to fund a whole new blade chassis, SAN or Nexus 7000 switch.

Table 4 shows an example, based upon a Nutanix customer, of a comparison in purchasing costs of a SAN vs. HC – assuming a SAN refresh takes place in year 4.


 Table 4: Staircase Purchase of a SAN vs. Fractional Consumption of HC

HC: The unit of purchase is simply a node which, in the case of an HC solution such as Nutanix, is self-discovered once attached to the network and then automatically added to the cluster. Fractional consumption makes it much less expensive to expand private cloud as needed. It also makes it easier to implement meaningful charge-back policies.

8.  SANs have a Much Higher Total Cost of Ownership

When evaluating the likely technology winner, bet on the economics. This means full total cost of ownership (TCO), not just product acquisition cost.

SANs lock customers into old technology for several years. This has implications beyond slower performance and fewer capabilities; it means on-going higher operating costs for rack space, power, cooling and administration. Table 5 shows a schematic from the mortgage company mentioned above that replaced a Vblock 320 with two Nutanix NX-6260 nodes.


Table 5: Vblock 320 vs. Nutanix NX-6260 – Rack Space

Rack space, power and cooling costs are easy to calculate based upon model specifications. They, along with costs of associated products such as switching fabrics, should be projected for each solution over the next several years.
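A minimal sketch of that projection follows; the wattages, electricity rate and cooling multiplier are placeholder assumptions, not measured figures for either platform.

```python
# Project multi-year power + cooling cost from nameplate specifications.
# Wattage, $/kWh rate, and cooling overhead are placeholder assumptions,
# not measured figures for any particular product.
def power_cooling_cost(watts, years=5, usd_per_kwh=0.10, cooling_factor=2.0):
    """Cost of powering (and cooling) equipment drawing `watts` 24x7.

    cooling_factor=2.0 doubles the power bill to approximate cooling load.
    """
    kwh = watts / 1000 * 24 * 365 * years
    return kwh * usd_per_kwh * cooling_factor

# Example: compare an assumed 6 kW rack of 3-tier gear with an assumed
# 1.5 kW hyperconverged footprint over five years.
print(f"3-tier: ${power_cooling_cost(6000):,.0f}")
print(f"HC:     ${power_cooling_cost(1500):,.0f}")
```

The same function can be run per solution and per year to build the multi-year operating-cost rows of a TCO comparison.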

Administrative costs need to also be considered, but they are typically more difficult to gauge. They can also vary widely depending upon the type of compute and storage infrastructure utilized.

Some of the newer arrays, such as Pure Storage’s, do an excellent job of simplifying administration, but even Pure still requires storage tasks related to LUNs, zoning, masking, FC, multipathing, etc. And this doesn’t include all the work of administering the server side. Here’s my recent post comparing firmware upgrades between Nutanix and Cisco UCS.

Table 6 shows the 5-year TCO chart for the mortgage customer including a conservative estimate of reduced administrative cost.


Table 6: TCO of Vblock 320 vs. Nutanix NX-6260

HC: In addition to slashed costs for rack space, power and cooling, HC is managed entirely by the virtualization team – no need for specialized storage administration tasks.

9.  SANs have a higher risk of downtime / lost productivity

RAID is, by today’s standards, an ancient technology. Invented in 1987, RAID still leaves a SAN vulnerable to failure. In some configurations, such as RAID 5, two lost drives can mean downtime or even data loss.

Both disks and RAID sets are getting larger. Disk failures require longer rebuilds, increasing both the risk to performance and the chance that another failure takes out the set.

And regardless of RAID type, a failed storage controller cuts SAN performance in half (assuming two controllers). Lose two controllers, and it’s game over.


Sometimes unexpected events such as a water main breaking on the floor directly above the SAN can create failure. And firmware upgrades, in addition to being a laborious process, carry additional risk of downtime. Then there’s human error. Array complexity makes this a realistic concern.

As demands on the array increase over time, the older SAN technology becomes still more vulnerable to disruption or outright failure. Even temporary downtime can be very expensive.

HC: Rather than RAID striping, an HC solution such as Nutanix includes replication of virtual machines onto two or three nodes. A lost drive or even entire node has minimal impact as the remaining nodes rebuild the failed unit non-disruptively in the background. And the more nodes that are added to the environment, the faster the failed node is restored in the background.
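The "more nodes, faster rebuild" claim follows from every surviving node re-replicating a share of the failed node's data in parallel. A rough model of this, with an illustrative per-node throughput figure:

```python
# Rough model: after a node fails, the surviving nodes re-replicate its
# data in parallel, so rebuild time shrinks as the cluster grows.
# The per-node rebuild throughput is an illustrative assumption.
def rebuild_hours(node_data_tb, cluster_nodes, per_node_mbps=200):
    """Hours to re-protect one failed node's data.

    per_node_mbps: MB/s each surviving node contributes to the rebuild.
    """
    surviving = cluster_nodes - 1
    total_mbps = surviving * per_node_mbps
    return node_data_tb * 1_000_000 / total_mbps / 3600

for n in (4, 8, 16, 32):
    print(f"{n:>2} nodes: {rebuild_hours(10, n):.1f} h")
```

Contrast this with a RAID rebuild, where a single spare drive (or small set of drives) is the bottleneck regardless of how many servers share the array.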

10.  Downsizing Penalty

Growth is not the only source of SAN inefficiency; downsizing can be a problem as well. Downsizing can result from decreased business, but also from a desire to move workloads to the cloud. The high cost and fixed operating expenses of a SAN make it difficult to justify reduced workloads.

HC: Customers can sell off or redeploy their older, slower nodes. This minimizes rack space, power and cooling expenses by only running the newest, highest-performance nodes. The software-defined nature of HC makes it easy to add new capabilities such as Nutanix’s “Cloud Connect” which enables automatic backup to public cloud providers.

The Inevitable Transition from SANs to HC

SANs were designed for the physical world, not for virtualized datacenters. The reason they proliferate today is that when VMware launched vMotion in 2003, it mandated, “The hosts must share a storage area network”.

But Moore’s Law marches relentlessly on. Hyper-convergence takes advantage of faster CPU, memory, disk and flash to provide a significantly superior infrastructure for hosting virtual machines. It will inevitably replace the SAN as the standard of the modern datacenter.

Thanks to Josh Odgers (@josh_odgers), Scott Drummonds (@drummonds), Cameron Stockwell (@ccstockwell), James Pung (@james_nutanix), Steve Dowling and George Fiffick for ideas and edits.

EMC, Pure and NetApp weigh in on Hyper-converged infrastructure

Nearly every leading legacy and startup datacenter hardware player has, or has announced, a Hyper-Converged Infrastructure (HCI) solution. But how do they really see HCI?

Yesterday provided some clues: An article from The Register discusses declining array sales; a blog post from EMC President of Global Systems Engineering, Chad Sakac, covers the new VCE HCI announcements; and a post from Pure Storage Chief Evangelist, Vaughn Stewart, makes a case for why HCI won’t replace storage arrays.

Disk Array Disarray

Chris Mellor’s article in The Register, Disk array devastation: New-tech onslaught tears guts from trad biz, reveals what is perhaps a significant reason that the storage manufacturers are entering the HCI market: “An EMC chart shows a steep decline in legacy SAN drive array sales.” The article goes on to say, “EMC sees the market moving ‘toward converged and hyperconverged systems, all-flash arrays and purpose-built back-up appliances.’”


Chad Sakac’s post, “A big day in converged infrastructure,” discusses how EMC’s Vblock is helping the company address the sea change in storage. The post was not clear (at least to me) about how Vblocks will incorporate HCI – but Sakac left no doubt that they will, “This is the experience of an ‘engineered system’ like a Vblock or a VxBlock – whether it’s converged, or hyper-converged.”

Sakac also references both VSPEX Blue and EVO:Rack – both of which, along with Vblock, are now part of EMC’s VSPEX converged infrastructure division.

Pure Storage

Vaughn Stewart, former Cloud Evangelist at NetApp, wrote an interesting post yesterday about HCI, Hyper-Converged Infrastructures are not Storage Arrays. Stewart starts off endorsing HCI – “I’m a Huge Fan of Hyper-Converged Infrastructures” – but then quickly changes course and relegates the technology to “the low end storage array market.”

Stewart goes on to outright bash HCI – making an argument that data mirroring on a virtual disk basis is inferior to RAID (a technology invented in 1987). Stewart also presents lots of calculations claiming low storage utilization and other supposed HCI limitations.


I’m not going to address Stewart’s claims in this post; they may very well be applicable to other HCI players. They do not apply to Nutanix. Josh Odgers (aka FUDbuster) is writing a post in response to Vaughn’s piece.

Stewart made no mention in his article about Pure’s own apparent plans to introduce an HCI solution.

NetApp

Since NetApp’s Mike Riley wrote the post, VSAN and Hyper-Converged will Hyper-Implode, last June, it’s unfair to assume that it reflects NetApp’s current perspective on HCI. On the other hand, even when NetApp unveiled its ONTAP EVO:RAIL solution a few months ago, the company made it clear that HCI without NetApp storage is not suitable for the enterprise.


A Question of Mindset

Sakac, Stewart and Riley are among the most respected technologists in our industry. But they also work for array manufacturers and naturally see the world through the lens of protecting legacy business.

The tremendous gain in mind share of HCI is driving the storage players to enter the market. This further validates the technology even though the array manufacturers position HCI as a low-end alternative to disk or flash arrays.

Nutanix, on the other hand, eats, breathes and sleeps Web-scale HCI in all that we do. It’s a question of mindset. The array manufacturers offer customers yet another storage option. Nutanix is revolutionizing the virtualized datacenter.


Why Nutanix isn’t singing the VSPEX BLUEs

Does EMC’s announcement of VSPEX BLUE pose a roadblock to Nutanix’s record-setting momentum?  It’s actually the opposite. Nutanix is not going to revolutionize the $73B server and storage market without a lot of good competitors. And there is no hardware manufacturer more important than EMC to validating hyper-converged infrastructure (HCI) as the future of the virtualized datacenter.

The Clout of EMC

EMC started the whole storage array industry in 1990 with its introduction of Symmetrix. The company continues to dominate with a 30% share of the $23.5B storage market. And it has augmented its storage business with many other very successful acquisitions over the years including VMware, Data Domain, Avamar, RSA and Isilon.

The Hopkinton giant has also done an admirable job in developing channel partner loyalty despite selling directly to certain customers. Partners appreciate both the leads EMC brings them and the help it extends in closing deals. They also like the distinction they earn by acquiring EMC certifications. These certs translate into back-end services revenues for integrating EMC’s complex stable of storage products.

But all is not roses. “The Federation” has stumbled a bit the past few years as its revenue growth rate has declined. EMC recently had to absorb the highly unprofitable VCE partnership, and the company was known to have shopped itself out to HP, and possibly others, late last year.

Despite these setbacks, EMC continues to be one of the most influential companies in the datacenter. Customers and partners across the globe take note of its vision and purchase its products. As a recent example, even all of the pain of the disruptive XtremIO upgrade didn’t keep it from becoming the fastest-growing EMC product ever (albeit a lot of this growth is likely coming at the expense of declining VMAX sales).

Positioning of VSPEX BLUE

EMC is going to market with an EVO:Rail solution as part of its VSPEX group, which now also includes VCE. VSPEX, of course, is a converged infrastructure reference architecture including servers, storage and network, while Vblock is a manufacturer-integrated solution. In neither case is there any actual convergence of infrastructure: customers still face the same extensive rack-space requirements, management challenges and scalability issues as when purchasing the products individually.

VMware’s EVO:Rail, on the other hand, is genuinely hyper-converged infrastructure. It includes consolidation of redundant hardware and elimination of multiple management tiers. (As an aside, “hyper” in hyper-convergence stands for “hypervisor”, not for “excessive”. Hyper-converged products only work, at least today, with virtualized workloads).

VSPEX BLUE’s product name and category grouping indicate that EMC considers hyper-convergence to be just another offering in its vast array of storage-oriented products. EMC Chairman, Joe Tucci, reinforced this perspective in the company’s Q4 2014 earnings call: “Let me add a little color. When our sales force goes in they don’t think about [deciding] what’s declining, what’s growing, what they think about is, what are the customers’ needs and then we have a whole portfolio of products and as you can see, that’s our strength and as we are doing that, you can also note that our gross margins are doing well.”

Nutanix: One Mission

While Nutanix describes its offering as “Web-scale” in reference to the Google-like infrastructure it introduced to the enterprise, the overall industry increasingly recognizes the broad category as “hyper-converged infrastructure”. Nutanix, with a 52% market share, is the clear leader in the hyper-converged space.

IDC HCI chart

Unlike EMC, Nutanix does not consider hyper-converged infrastructure to be a storage line-item. We live, eat and breathe Web-scale as not only a vastly superior platform for hosting a virtualized datacenter, but as the inevitable future.

If you go to Nutanix’s engineering department, you don’t find a lot of ex-storage folks. Instead, engineers from companies such as Google, Facebook and Twitter work to enable massively scalable, very simple and low-cost infrastructures for government and enterprise customers. It’s a completely different mindset.

This same scale-out mindset is pervasive in marketing, finance, channels, operations, HR, professional services, alliances and sales. Sr. VP of Sales, Sudheesh Nair, recently commented in a blog post, “EMC is a $60B company with one of the fiercest and meanest enterprise sales engines ever assembled on the face of the earth (I say this as a compliment with full admiration).”

But as good as EMC’s sales force may be, it will be hard to pitch hyper-convergence as just another approach to a virtualized data center with the same conviction that Nutanix’s sales folks bring. Nutanix is focused on revolutionizing the data center – or as our federal team likes to say, #OneMission.

So Who Will Win, Nutanix or EMC?

A big answer to this question, of course, is dependent upon the channel. Channel partners hold a lot of sway over their customers and are instrumental in helping them select the best technology for their requirements.

Fortunately, we’re seeing a rapidly increasing number of channel partners adopt the same Web-scale passion as our own sales teams. Partners are realizing that while they may not be able to charge their customers for the same back-end integration services that EMC products require, they develop a deeper trust and many more higher-margin services opportunities in areas such as hybrid and private cloud enablement, big data, Splunk, metro cluster, VDI and so on.

The VSPEX BLUE launch, paradoxically, is going to help Nutanix partners make a huge leap forward. Marketing gurus Al Ries and Jack Trout describe “Law #1: The Law of Leadership” in their book, The 22 Immutable Laws of Marketing. This law states, “The leading brand in any category is almost always the first brand into the prospect’s mind.”

In other words, by promoting VSPEX BLUE, EMC sets the stage to win against the real competition – the $73B of servers and storage sold every year. Both Nutanix partners and their customers will win as a result.

See Also

EMC’s VSPEX BLUE Joins the VMware EVO:RAIL Family of Systems. 02/03/2015. Mornay Van Der Walt. VMware Blogs.

EMC’s Joe Tucci on Q4 2014 Results – Earnings Call Transcript. 01/30/2015. Seeking Alpha.

EMC Combines VCE, VSPEX into New $1B-plus Converged Infrastructure Business. 01/28/2015. Joe Kovar. CRN.

IDC MarketScape: Worldwide Hyperconverged Systems. 01/26/2015. Storage Newsletter.

On Classless Winners and Classy Losers. 01/26/2015. Sudheesh Nair. LinkedIn.

EMC said to Explore Options Ahead of CEO’s Retirement. 09/22/2014. Beth Jinks. Bloomberg.

XtremIO Craps on EMC Badge. 09/18/2014. Nigel Poulton. Nigelpoulton.com


Channel partners rally behind Nutanix Web-scale converged infrastructure

“Really?!”

That was the one-word email I received from Nutanix’s Sr. VP of Sales (and my boss), Sudheesh Nair, in response to the Q4 2013 Piper Jaffray Storage VAR Survey. The surveyed partners ranked Nutanix second to last in terms of sales performance relative to plan.

Needless to say, I was frustrated. The channel’s perception of Nutanix was out of sync with Nutanix’s record-setting sales in 2013 as the fastest-growing infrastructure company of at least the past ten years.

But understanding and successfully positioning Nutanix has been a learning process for the channel. When Nutanix CEO, Dheeraj Pandey, first approached Lightspeed Venture Partners almost five years ago, he made it clear that his new company would disrupt the storage industry – including the venture capitalists’ existing investments. Unlike most entrants into the suddenly popular hyper-converged space, this revolutionary vision is integral to everything we do at Nutanix.

Partners can’t simply pitch a “faster, cheaper, better” storage array as they can with the other early stage companies in the survey. Partners need to be able to articulate and evangelize to their clients how Web-scale is a sea change that is fundamentally altering the infrastructure of the modern, virtualized datacenter.

The Difference a Year Makes

2014 continued the trajectory of rocketing sales and, gratifyingly, a much broader spectrum of channel partners caught the Web-scale fever as well. From small partners building their businesses around Nutanix to multi-billion dollar channel organizations moving Fortune 500 clients over to Web-scale, Nutanix is changing the channel landscape.

According to the latest Piper Jaffray report, channel partners now rank Nutanix sales performance in the #1 position – ahead of CommVault, Dell Storage, EMC, HP Storage, NetApp, Nimble, Pure Storage, Veeam and VMware.

Piper Jaffray


The Sterne Agee Channel Survey similarly shows a huge improvement in channel recognition of Nutanix. Channel partners ranked Nutanix as the second leading company disrupting the established storage sector – right behind Pure Storage (but quickly catching up). Nutanix is ranked ahead of Nimble (and rapidly widening the spread), and far ahead of Tintri, Violin Memory, Nimbus Data, Nexenta, SolidFire and everyone else.

Sterne Agee chart


Looking Forward to 2015

It’s exciting to see Nutanix partners across the world enthusiastically embrace the Web-scale opportunity. They’re leveraging Nutanix to differentiate their companies, gain new customers, increase sales and shorten sales cycles.

I want to thank all of our partners for your continued faith and trust. The good news is that Nutanix is really just getting started. New capabilities such as one-click hypervisor upgrades, metro availability, connectivity to AWS and Microsoft Azure, among many others, mean extraordinary continued opportunity in the year ahead.


EMC implies that SANs may not be so great for hosting virtual machines after all

The inventor of the storage array, EMC, has indicated that a hardware-centric architecture is perhaps no longer the best solution for hosting a virtualized datacenter. The Register reported today that EMC will deliver ScaleIO as a VMware kernel module.

As I pointed out in the introductory post to this site less than two months ago, IDC says that $56B of annual server and storage sales go through just seven datacenter manufacturers: HP, IBM, EMC, Dell, Cisco, Oracle and NetApp. EMC’s announcement means that the majority now have a certified hyper-converged solution (not even counting EVO:Rail):

  • EMC: ScaleIO
  • Cisco: Maxta (plus an investment in Stratoscale)
  • HP: StoreVirtual
  • Dell: XC Series web-scale converged appliances, powered by Nutanix software

Despite their dependency upon legacy 3-tier infrastructure for tens of billions in revenues, these datacenter giants recognize the necessity of joining the hyper-converged revolution. The threat of public cloud, combined with much faster access to information, is driving an astounding pace of hyper-converged adoption.

SAN Huggers

Back in the aughts, we had to contend with the server huggers who staunchly refused to believe that their applications could run as well as, let alone better than, they did on physical hardware once virtualized. But the financial and other advantages were too compelling to resist, and datacenters are now approaching an 80% virtualization rate.

Today, server huggers have been replaced by SAN huggers. These are the folks who insist that it is preferable to move flash and disk away from the compute and put them into proprietary arrays that must be accessed across the network. Never mind the issues around complexity, performance, resiliency, time-to-market and cost.

But just as virtualization provided an enormous opportunity for forward-thinking channel partners last decade, Web-scale has even more potential over the next several years. The key is introducing the concept in a way that will resonate with customers steeped in years of 3-tier infrastructure tradition.

Financial Modeling

It is natural for technologists, including channel partners, to jump into speeds and feeds and attributes and deficiencies. But I suggest taking a different tack. Help customers see a bigger picture, and consequently adopt a more strategic approach, with the aid of financial modeling.

IT leaders are realizing that to remain relevant, they need to run their internal operations with the same type of efficiency, responsiveness and accountability as the public cloud providers. This necessitates a more comprehensive process for selecting infrastructure than simply comparing up-front costs of similar solutions.

Cloud providers ruthlessly evaluate all of their on-going costs to ensure they are maximizing every square meter of datacenter space. Transitioning to ITaaS requires evaluating not only the equipment purchase price, but also expenses such as power, cooling, rack space, support, administration and associated hardware and software requirements.

One approach is to boil everything down to a lifecycle cost metric that can be easily applied to competing solutions. I describe a TCO per VM model in a recent Wikibon article. But regardless of how partners present the results, financial modeling on its own is insufficient for optimally determining an organization’s datacenter future.
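A TCO-per-VM metric of the kind described above simply divides total lifecycle cost by the number of VMs hosted. The sketch below is a minimal illustration of that idea; the cost categories mirror the expense list above, but the field names and dollar figures are assumptions for demonstration, not the Wikibon model itself:

```python
from dataclasses import dataclass

@dataclass
class LifecycleCosts:
    # All figures span the solution's full lifecycle; values are illustrative
    purchase: float        # up-front equipment and software price
    power_cooling: float
    rack_space: float
    support: float
    administration: float

    def total(self) -> float:
        return (self.purchase + self.power_cooling + self.rack_space
                + self.support + self.administration)

def tco_per_vm(costs: LifecycleCosts, vm_count: int) -> float:
    """Lifecycle cost per hosted VM, for comparing competing solutions."""
    return costs.total() / vm_count

# Hypothetical legacy 3-tier quote hosting 400 VMs
legacy = LifecycleCosts(500_000, 60_000, 40_000, 75_000, 125_000)
print(f"${tco_per_vm(legacy, 400):,.0f} per VM")  # $2,000 per VM
```

Running the same calculation against a competing quote with its own expense lines yields a single, directly comparable number per solution.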

Financial modeling is the hook to capture a prospect’s attention and to guarantee an audience with decision-makers. It is the key for partners to really understand their client’s pain points and objectives. They can then incorporate other vital variables such as risk, expandability, agility, reliability, resiliency, and so on within a framework that will resonate with their customers.

Going through this process positions a solutions provider to help its customers begin the datacenter migration process. It also provides the opportunity to incorporate private cloud, active/active datacenters, virtual desktops and other use cases made economically feasible by a hyper-converged infrastructure.

Disruption Made Easy

Even a compelling Web-scale evaluation can still leave a partner challenged to disrupt existing buying habits, processes and governance policies. But now that EMC has joined VMware and three of the other leading hardware manufacturers in validating hyper-converged infrastructure, it is easier for partners to initiate a conversation around datacenter strategy.

The winners in the new software-defined era will be those solutions providers who help their customers understand, select and implement the best architecture for their environments. The losers will be the VARs who continue to push legacy solutions without even bringing the Web-scale options to the table.

Happy birthday VMware vMotion

On this day 11 years ago in 2003, VMware introduced vMotion, and the datacenter was never the same again.

Windfall for Storage Manufacturers and for Solutions Providers

If you were involved in IT, you probably still remember the first time you saw vMotion – moving a live running virtual machine between physical hosts seemed like magic at the time. In my case, a friend’s demonstration of vMotion convinced me to start an integrator business with him focused on enterprise virtualization.

The introduction of vMotion was also the birth of the modern datacenter. It was the feature that made IT organizations really take notice of virtualization and of what it could do to improve their operations. And because vMotion required a SAN, it prompted organizations across the globe to begin purchasing shared storage arrays in massive quantities.

vMotion

VMware vMotion was, of course, a huge bonanza for the young storage manufacturers whose sales had been hit hard by the dot com bubble burst. EMC recognized a good thing when it saw it, and the next month announced its intent to purchase VMware for $625 million (VMware’s market cap today is $36 billion – so quite an astute acquisition).

VMware vMotion also turned out to be quite a boon for solutions providers – many of whom were still struggling themselves from the dot com bubble aftermath. Their services were in strong demand for helping organizations decide what arrays to buy, and how to design and implement the complex products and switching fabrics.

SAN Huggers

In the early days of virtualization, server huggers were common. We used to joke with IT staffs about putting in a façade of servers and blinking lights so that they could make the application owners feel comfortable. And we really did use to hide the ESX tools from the Windows task bar so that the software manufacturer, when troubleshooting its product, wouldn’t see that it was running as a virtual machine.

Today, the server huggers are nearly an extinct species. Organizations are commonly virtualizing even large SQL Server, Oracle and Exchange applications. But a new group has arisen to take their place: SAN huggers.

As the name implies, SAN huggers don’t want anyone to replace their arrays with the new breed of hyper-converged or web-scale infrastructure products. They’re very comfortable with LUN snapshot management, balancing virtual machines across different physical volumes to get around LUN limitations, maintaining aggregates/meta-volumes, and the many other storage administration tasks.

The ironic thing is that storage arrays were built for a physical “scale-up” datacenter. Although they satisfied vMotion’s requirement for shared storage, they’re simply not a good fit for a highly virtualized “scale-out” datacenter. Take RAID, invented in 1987: it is a really old technology that requires lengthy rebuild times and that can be disastrous if multiple drives fail simultaneously. The same is true if a SAN loses both of its storage controllers, and losing just one controller significantly reduces performance.
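The rebuild-time concern is easy to quantify with back-of-the-envelope math: rebuild time is roughly drive capacity divided by effective rebuild throughput. The throughput figure below is an assumed value (rebuilds often run slowly to preserve production I/O), chosen purely for illustration:

```python
def rebuild_hours(drive_tb: float, rebuild_mbps: float = 50) -> float:
    # Hours to reconstruct one failed drive: capacity / effective throughput.
    # 50 MB/s is an assumed throttled rebuild rate, not a vendor figure.
    megabytes = drive_tb * 1_000_000
    return megabytes / rebuild_mbps / 3600

print(f"{rebuild_hours(4):.1f} hours")  # ≈ 22.2 hours for a 4 TB drive
```

During that window the group runs degraded, which is exactly why a second concurrent drive failure can be disastrous.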

SANs take the disk and the flash away from the CPU and stick them in proprietary arrays at the end of networks where they’re subject to latency and network hops. They scale very poorly, are expensive, in many cases require separate switching fabrics, and are complex to manage.

Web-Scale Converged Infrastructure

When Google came on the scene in the late 1990s, co-founder Sergey Brin refused to buy SANs and instead hired a group of scientists to rethink datacenter infrastructure. They invented the Google File System, MapReduce and NoSQL databases and put all of the intelligence into software rather than into proprietary hardware. The result was a very inexpensive infrastructure that is also highly resilient, scalable and simple to manage.

The lead Google scientist, along with two other Nutanix co-founders, brought this same type of architecture to the enterprise datacenter by leveraging the hypervisor to virtualize the storage controllers. The result is a low-cost, self-healing, linearly scalable and very simple to manage infrastructure.

Although still very small by datacenter incumbent standards, Nutanix has already made a big impact in the industry. VMware introduced VSAN and now EVO:Rail as the recommended path to a software-defined datacenter. And hardware leaders EMC, Dell, HP and Cisco all have existing solutions, or planned entries, in the web-scale/hyper-converged infrastructure space.

While it may seem highly unlikely today, my guess is that the SAN huggers are going to have a much shorter reign than the server huggers did before them.

Today’s vMotion-like Moment

Nutanix’s management interface, Prism, is simple, elegant and comprehensive. When partners and customers see it for the first time, many report having the same type of “wow!” experience that they had the first time they saw vMotion.

Thanks to @vmmike130 for editing.

When a channel partner looks in the mirror, does a trusted advisor look back?

In my former position as VP of Cloud and Virtualization at Presidio, I frequently used financial modeling to assist our reps, but did not drive sales on my own. That changed after I learned about Nutanix.

I loved the no-SAN concept and was curious to see how it would actually play in Peoria. I pitched a savvy CIO who had participated in an EDUCAUSE panel I moderated, and she was immediately intrigued. But the Chicago office of Presidio was reluctant to work with a new manufacturer. So I made the sale myself and convinced another region with which I had stronger ties to process the paperwork.

The experience should have tipped me off as to the type of situation I would face in my dual channel and strategic sales role at Nutanix. While it’s been surprisingly easy to sell web-scale converged infrastructure to former clients who have called me, or vice-versa (always running the deals through partners, of course), it’s often difficult to get buy-in from VARs – especially from large ones.

mfg rep 1

The Channel Partner Perspective

I had dinner a few days ago with the VP of Sales of a sizable regional VAR. He asked me how much business our top partner would do with us this year. I told him that one organization had a plan in place to sell $50M in our new fiscal year, though internally we pared it down to be conservative. The VP told me that his company will do $90M this year with EMC alone.

As enamored as he and his team were with our technology, I could tell he was thinking about how he could realistically present it internally. Even matching the sales of Nutanix’s largest partner wouldn’t come anywhere near the business he’s driving with EMC and Cisco. How could he convince his executive team that they should risk the wrath of their two largest vendors by promoting Nutanix?

And, suppose he did manage to persuade the executive team to go all in with web-scale; they still would have to get their sales reps on-board. The reps have established relationships with legacy manufacturers, are trained and experienced in selling their products and depend upon them for opportunities. These “coin-operated” reps do not readily gravitate toward promoting new technologies.

mfg rep 2

The Customer Perspective

If I were a CIO, I would not want a solutions provider who simply brought me different product configurations from a leading datacenter manufacturer – I could find that information myself on the Web. I’d want to work with a partner who was diligent enough to constantly investigate new promising technologies, and who was astute enough to discern which ones could have a positive impact on my organization. I’d expect the partner to bring those options and his recommendations to me for review.

VARs that close-mindedly mimic their vendor perspectives risk becoming, in the eyes of customers, glorified manufacturer reps. An EMC partner, for example, might feel confident today in leveraging a trusted relationship with a CIO to advocate Vblock as the best option for a VDI deployment. But the probability is increasing that the CIO will learn on her own that she could have implemented a similar project at a fraction of the cost and with none of the risk by utilizing web-scale. She will consequently feel her partner is either uninformed or, worse, acting in EMC’s rather than in her best interest.

mfg rep 3

Preserving the Customer Relationship

Channel partners tell me that large enterprises move very slowly – the implication being that they have plenty of time to continue making lots of money by promoting legacy 3-tier infrastructure. Perhaps they’re correct, but it’s a dangerous way to conduct business.

Henry Ford famously said, “If I had asked people what they wanted, they would have said faster horses.”  Just because a customer asks for more storage doesn’t mean a solutions provider should limit the conversation to arrays. They can take the opportunity to educate their client about how Google and the leading cloud providers have moved away from using SANs and ancient (1987) RAID technology. They can discuss the advantages of web-scale converged infrastructure and about whether or not the architecture might be appropriate for the customer’s environment.

Even if the customer decides, for whatever reason, to go with traditional 3-tier infrastructure, at least the channel partner looked out for the customer’s best interest. Over time, as web-scale/hyper-converged infrastructure becomes the virtualized datacenter standard, the customer will appreciate the effort and integrity of the partner for introducing it.

The Playing Field has Already Changed

I don’t agree with the premise that big enterprises will continue to move slowly. External pressures from public cloud and internal pressures from much more rapidly changing technologies will force enterprises to change more quickly as well.

Just look at web-scale. Almost overnight it has jumped solidly into the mainstream. VMware’s endorsement of hyper-converged infrastructure as the platform of choice for hosting virtual machines leaves no doubt as to the future direction of virtualized datacenter architecture.

Then there’s Dell – one of the “big seven” who collectively drive 76% ($56B) of the annual server and storage business. Dell also blessed hyper-converged architecture last week with its launch of the Dell XC Series: Web-scale Converged Appliances. Yet another of the “big seven”, EMC, has said it will develop its own EVO:Rail offering. Even HP is weighing in, both with an EVO:Rail solution and with its own StoreVirtual product. And Cisco is showing signs of making the leap as well.

This massive validation during the past few months by the leading datacenter players enables solution providers to bring up web-scale without concern of appearing “bleeding edge”. It also means that they should, with at least some degree of impunity, be able to focus on hyper-converged solutions by creating a separate division explicitly for this purpose.

However they do it, I strongly encourage channel partners to figure out a way to get engaged with web-scale. Nutanix continues, and is even accelerating, our trajectory as the fastest-growing infrastructure company of the past decade. This provides an extraordinary opportunity for forward-thinking partners to grow along with us.

Dell XC Series Launches – and Nutanix partners benefit

Nutanix partners will benefit from the rising tide of web-scale mindshare as Dell launches its XC Series: Web-scale Converged Appliances across the world. But solutions providers currently selling Nutanix-branded appliances through Dell are already discovering other advantages. Choice Solutions, for example, beat out an entrenched Vblock incumbent with the assistance of Dell’s existing server business and its financing capabilities.

Disruption with Dell

Nutanix’s Sr. VP of Sales, Sudheesh Nair, likes to talk about the compressing disruption cycle in our industry. Disruption used to take place over a roughly ten-year period, but now it occurs on a two-to-four-year cycle.

Large companies tend to be very, very good at running marathons.  They spot emerging patterns and then maintain their leadership through acquisition or internal development. Only rarely have exceptional companies such as NetApp and VMware been able to emerge through the disruptive cycles as large leaders themselves.

The datacenter infrastructure landscape is long overdue for disruption. According to IDC, just four manufacturers – EMC, NetApp, IBM and HP, command a 65% market share of storage today. EMC, with a 30% share alone, has been particularly adept at acquiring innovative companies such as Data Domain* and Isilon. IBM and HP acquired XIV and 3Par respectively.

Dell’s acquisitions of Compellent and EqualLogic have enabled it to attain a 7% share of the storage market. In contrast, Dell has a 17% share of the server business which is also dominated by a handful of manufacturers:  HP, IBM, Dell, Oracle and Cisco own a 77% market share between them.

This discrepancy between server and storage market share creates a huge incentive for Dell to leverage its server base to more deeply penetrate the enterprise storage market. While Dell knew that EVO:Rail was coming, it also recognized that EVO would be a 1.0 product lacking the necessary enterprise attributes for wide scale adoption. Enter Nutanix.

Nutanix Web-Scale

Nutanix pioneered the hyper-converged infrastructure era just three years ago, but legacy datacenter players have already been scrambling to claim a stake. After EMC’s mid-2013 acquisition of ScaleIO went nowhere, the company now is counting on subsidiary VMware’s EVO:Rail. HP has resurrected its LeftHand Networks product as StoreVirtual, with EVO:Rail positioned as a back-up in situations where it can’t sell its own product. Even the leading all-flash array start-up, Pure Storage, has announced it will be coming out with a hyper-converged offering.

Dell, however, took a different tack. It looked at all the potential hyper-converged products, including one that already utilized its hardware, and quickly realized that Nutanix’s innovative vision, enterprise capabilities and exceptional support could enable it to make the same type of inroads into enterprise storage that it already enjoys with servers.

Dell’s recognition of the huge opportunity Nutanix presented led to an OEM agreement signing that is reportedly the fastest that the company has ever done. The OEM agreement includes unique terms that ensure Dell will not have a price advantage over other partners selling Nutanix-branded appliances. And Dell is subject to the same stringent rules as all of Nutanix partners in terms of forecasting and registering opportunities.

Synergies with Nutanix Channel Partners

Dell partners cannot sell Nutanix without first meeting the requirements of, and enrolling in, the Nutanix Partner Network. And Nutanix-only partners cannot sell the Dell XC Series. But partners of both companies can sell either product depending upon customer technical, environmental and purchasing requirements.

Dell brings a great deal to the table. Customers have a lot of trust in the Dell brand, and Dell already has a pervasive footprint in datacenters across the globe. Its extensive purchasing agreements and contracts with both governmental and commercial entities make procurement much easier. And Dell Financial Services can significantly shorten sales cycles.

It’s already been established that once Nutanix gets a foot in the door for a particular use case, customers quickly come to love the simplicity and elegance of the solution. As a result, Nutanix becomes an almost annuity-like business for partners as customers, now unencumbered by the cost and difficulty of scaling arrays, expand their environments.

But beyond increased revenues and shortened sales cycles, partners of both Nutanix and Dell also benefit from the tendency of web-scale to expand to more specialized use cases such as VDI, private and hybrid cloud, big data, disaster recovery and remote branch infrastructure. Partners consequently have an opportunity to increase their services business, and to provide more specialized services at higher rates.

Datacenter Infrastructure Under Siege

Between web-scale and public cloud, the $73 billion annual server and storage business is under siege for the first time. As Choice Solutions and other Nutanix partners are already learning, working together with Dell enables them to grow their businesses by grabbing a piece of the massive low-hanging status quo fruit.

“Choice Solutions has already had a great experience with the Dell team even before the XC Series has shipped. We have seen first-hand that the partnership will amplify Nutanix’s footprint in the Data Center. Being Part of the Nutanix Channel Advisory Council, I have seen the commitment from Nutanix to protect the interest of the channel, and know that Nutanix has been diligent about the joint opportunity registration program. We already had our first Nutanix/Dell marketing event in Dallas, and the Dell team was both enthusiastic and successful in helping drive attendance to the event. Similar events in other cities are already planned.”    

                Jim Steinlage, President and CEO, Choice Solutions

____

*The Data Domain acquisition did not go uncontested. NetApp still has a press release on its Web site proclaiming its acquisition of the same company.