The 10 reasons why Moore’s Law is accelerating hyper-convergence

SAN manufacturers are in trouble.

IDC says that array vendor market share is flat despite continued massive growth in storage.

[Chart: IDC array vendor market share]

Hyper-convergence (HC) contributes to SAN manufacturer woes. A March 23, 2015 Piper Jaffray research report states, "We believe EMC is losing share in the converged infrastructure market to vendors such as Nutanix."

One of the most compelling advantages of HC is its cost savings, which become particularly evident when evaluated in the context of Moore's Law.

Moore’s Law – Friend to Hyper-Convergence, Enemy to SAN

Moore's Law, which observes that the number of transistors on a processor doubles approximately every two years (often quoted as every 18 months), has long powered the IT industry. Laptops, the World Wide Web, the iPhone and cloud computing are all examples of technologies enabled by ever-faster CPUs.

[Animation: Moore's Law in action (via Imgur)]
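To put numbers on that doubling cadence, here is a quick back-of-envelope sketch (mine, not from any vendor data) of the transistor-count multiple implied over time:

```python
# Transistor-count multiple implied by Moore's Law after a span of years,
# assuming a 2-year doubling period (use 1.5 for the popular 18-month figure).
def moores_law_multiple(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

for years in (2, 4, 6, 10):
    print(f"{years:2} years -> {moores_law_multiple(years):5.1f}x transistors")
# 10 years -> 32.0x at a 2-year cadence; ~101.6x at an 18-month cadence
```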

Innovations in CPU design and manufacturing, such as increasing core counts, photonics and memristors, should continue the Moore's Law trajectory for a long time to come. The newly released Intel Haswell-based Xeon E5-2600 v3 CPUs, for example, show performance gains of 18% to 30% over their Ivy Bridge predecessors.

Here are the 10 reasons why Moore’s Law is an essential consideration when evaluating hyper-convergence versus traditional 3-tier infrastructure:

1.  SANs were built for physical, not virtual infrastructure.

Virtualization is itself an IT industry innovation made possible by Moore's Law. But while higher-performing servers, particularly Cisco UCS, helped optimize virtualization capabilities, arrays remained mired in the physical world for which they were designed. Even all-flash arrays are constrained by the transport latency between storage and compute, which does not improve nearly as quickly.

The following image from Chad Sakac’s post, VMware I/O queues, “micro-bursting”, and multipathing, shows the complexity (meaning higher costs) of supporting virtual machines with a SAN architecture.

[Diagram: VM I/O queues and multipathing in a SAN environment, via Chad Sakac]

HC: Hyper-convergence was built from the ground up to host a virtualized datacenter (“Hyper” in hyper-convergence refers to “hypervisor”, not to “ultra”). The image below from Andre Leibovici’s post, Nutanix Traffic Routing: Setting the Story Straight, shows the much more elegant and efficient access to data enabled by HC.

[Diagram: Nutanix data path, via Andre Leibovici]

2.  Customers are stuck with old SAN technology even as server performance quickly improves.

A SAN's firmware is tightly coupled to its processors; new CPUs can't simply be plugged in. And proprietary SANs are produced on an assembly-line basis in any case – quick retooling is not possible. By the time a customer purchases a brand-new SAN, its storage controllers are probably already at least one generation behind.

HC: HC decouples the storage code from the processors. As new nodes are added to the environment, customers benefit from the performance increases of the latest technology in CPU, memory, flash and disk.

Table 1 shows an example of an organization projecting a 20% increase in server workloads per year. The table also reflects a 20% annual increase in VM density per Nutanix node – conservative by historical trends.

Fourteen nodes are required to support 700 VMs in Year 1, but only eight more nodes are needed to support the 1,452 workloads in Year 5. And the total rack-unit space required increases only 50% – from 8U to 12U.

Table 1: Example of the decreasing number of incremental nodes required to host an increasing number of VMs
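The arithmetic behind Table 1 can be sketched in a few lines of Python (a hypothetical model assuming 50 VMs per node in Year 1; the exact node counts depend on rounding, so this approximates rather than reproduces the table):

```python
import math

vms, capacity, nodes = 700.0, 700.0, 14   # Year 1: 14 nodes at 50 VMs each (assumed)
density = 50.0                            # VMs per node for Year-1 hardware

for year in range(2, 6):
    vms *= 1.20            # 20% annual workload growth
    density *= 1.20        # each year's new nodes pack 20% more VMs
    shortfall = vms - capacity
    if shortfall > 0:
        new_nodes = math.ceil(shortfall / density)
        nodes += new_nodes
        capacity += new_nodes * density
    print(f"Year {year}: {vms:,.0f} VMs on {nodes} nodes")
```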

3.  A SAN performs best on the day it is installed. After that it’s downhill.

Josh Odgers has written about how a SAN's performance degrades as the environment scales. Adding more servers – or even more storage shelves to the SAN – reduces the IOPS available per virtualization host. Table 2 (from Odgers' post) shows how IOPS per server decline as additional servers are added to the environment.

Table 2: IOPS per server decline when connected to a SAN

HC: As nodes are added, the storage controllers (which are virtual machines), read cache and read/write cache (flash) all scale linearly or better – better because each new node benefits from the latest Moore's Law enhancements.
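To make the contrast concrete, consider a toy comparison (the IOPS figures below are invented for illustration, not taken from Odgers' table):

```python
SAN_IOPS = 100_000    # fixed controller-pair ceiling shared by all hosts (hypothetical)
NODE_IOPS = 10_000    # storage IOPS each HC node brings with it (hypothetical)

print("hosts | SAN IOPS/host | HC IOPS/host")
for hosts in (4, 8, 16, 32):
    print(f"{hosts:5} | {SAN_IOPS // hosts:13,} | {NODE_IOPS:12,}")
# The SAN divides a fixed ceiling across more hosts; each HC node adds
# its own controller, so per-host IOPS hold steady (or rise with newer gear).
```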

4.  Customers must over-purchase SAN capacity.

When SAN customers fill up an array or hit the limit of controller performance, they must upgrade to a larger model to allow further expansion. Besides the cost of the new SAN, the upgrade itself is no easy feat. Wikibon estimates that the cost of migrating to a new array runs 54% of the original array's cost.

To avoid this expense and complexity, customers buy extra capacity/headroom up-front that may not be utilized for two to five years. The high initial investment hurts the project's ROI, and Moore's Law ensures the SAN technology becomes increasingly archaic (and therefore less cost-effective) by the time it is fully utilized.

Even buying lots of extra headroom up-front is no guarantee against a forklift upgrade. Faster-than-anticipated growth, new applications, new use cases, the acquisition of another company and so on all can, and all too frequently do, lead to under-purchased SAN capacity. A Gartner study, for example, showed that organizations under-buy storage for VDI deployments 90% of the time.

HC: HC nodes are consumed on a fractional basis – one node at a time. As customers expand their environments, they incorporate the latest technology. Fractional consumption makes under-buying a non-issue; on the contrary, it is economically advantageous to start with only what is needed up-front, because Moore's Law quickly ensures higher VM-per-node density in future purchases.

5.  A SAN incurs excess depreciation expense

The extra array capacity a customer purchases up-front starts depreciating on day one. By the time the capacity is fully utilized down the road, the customer has absorbed a lot of depreciation expense along with the extra rack space, power and cooling costs.

Table 3 shows an example of excess array/controller capacity purchased up front that depreciates over the next several years.

Table 3: Excess capacity depreciation

HC: Fractional consumption eliminates the requirement to buy extra capacity up-front, minimizing depreciation expense.
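A minimal straight-line depreciation sketch (all figures hypothetical) shows how much expense accrues on capacity that was bought up-front but sits idle:

```python
ARRAY_COST = 500_000                            # up-front purchase (hypothetical)
LIFE_YEARS = 5                                  # straight-line schedule
utilization = [0.40, 0.55, 0.70, 0.85, 1.00]    # assumed ramp to full use

annual_dep = ARRAY_COST / LIFE_YEARS
idle_dep = sum(annual_dep * (1 - used) for used in utilization)
print(f"Depreciation booked on idle capacity: ${idle_dep:,.0f}")
# -> $150,000 of the $500,000 is expensed before the capacity is ever used
```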

6.  SAN “lock-in” accelerates its decline in value

The proprietary nature of a SAN further accelerates its depreciation. A Nutanix customer, a mortgage company, had purchased a Vblock 320 (list price $885K) one year before deciding to migrate to Nutanix. A leading refurbished-equipment specialist was only willing to give them $27,000 for their one-year-old Vblock.

While perhaps not a common problem, in some cases even modest array upgrades are difficult or impossible because the required components cannot be obtained.

HC: An HC solution utilizing commodity hardware also depreciates quickly due to Moore’s Law, but there are a few mitigating factors:

  • In a truly software-defined HC solution, enhancements in the OS can be applied to the older nodes. This increases performance while enabling the same capabilities and features as newer nodes.
  • Since an organization typically purchases nodes over time, the older nodes can easily be redeployed for other use cases.
  • If an organization wanted to abandon HC, it could simply vMotion/live migrate VMs off of the nodes, erase them and then re-purpose the hardware as basic servers with SSD/HDDs ready to go.


7.  SANs Require a Staircase Purchase Model

A SAN is typically upgraded by adding new storage shelves until the controllers, the array or the expansion cabinets reach capacity. A new SAN is then required. This is an inefficient way to spend IT dollars.

It is also anathema to private cloud. As resources reach capacity, IT has no option but to ask the next service requestor to bear the burden of the required expansion. Pity the business unit whose VM request just barely exceeds existing capacity – IT may ask it to fund a whole new blade chassis, SAN or Nexus 7000 switch.

Table 4 shows an example, based upon a Nutanix customer, comparing the purchase costs of a SAN vs. HC – assuming a SAN refresh takes place in year 4.

Table 4: Staircase purchase of a SAN vs. fractional consumption of HC

HC: The unit of purchase is simply a node which, in the case of an HC solution such as Nutanix, is self-discovered once attached to the network and automatically added to the cluster. Fractional consumption makes it much less expensive to expand a private cloud as needed. It also makes it easier to implement meaningful charge-back policies.
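The staircase-versus-fractional contrast in Table 4 boils down to a simple cash-flow comparison. Here is a sketch with hypothetical prices (a real analysis would use quoted costs and the actual refresh schedule):

```python
# Hypothetical 5-year spend: a SAN bought big up-front and refreshed in
# year 4, versus HC nodes purchased one increment at a time.
san_spend = [600_000, 0, 0, 700_000, 0]
hc_spend = [200_000, 120_000, 120_000, 120_000, 120_000]

san_total = hc_total = 0
for year, (san, hc) in enumerate(zip(san_spend, hc_spend), start=1):
    san_total += san
    hc_total += hc
    print(f"Year {year}: SAN cumulative ${san_total:>9,} | HC cumulative ${hc_total:>9,}")
```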

8.  SANs have a Much Higher Total Cost of Ownership

When evaluating the likely technology winner, bet on the economics. This means full total cost of ownership (TCO), not just product acquisition.

SANs lock customers into old technology for several years. This has implications beyond slower performance and fewer capabilities; it means ongoing higher operating costs for rack space, power, cooling and administration. Table 5 shows a schematic from the mortgage company mentioned above, which replaced a Vblock 320 with two Nutanix NX-6260 nodes.

Table 5: Vblock 320 vs. Nutanix NX-6260 – rack space

Rack space, power and cooling costs are easy to calculate from model specifications. They, along with the costs of associated products such as switching fabrics, should be projected for each solution over the next several years.
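As a concrete illustration, annual power and cooling spend per system can be estimated in a few lines (the wattage, PUE and utility rate below are assumptions, not figures from the tables above):

```python
WATTS = 1_200   # average draw per system, from the spec sheet (assumed)
PUE = 1.8       # power usage effectiveness: cooling/overhead multiplier (assumed)
RATE = 0.10     # utility cost in $ per kWh (assumed)

annual_kwh = WATTS / 1000 * 24 * 365 * PUE
print(f"Annual power + cooling: ${annual_kwh * RATE:,.0f}")
# 1.2 kW * 8,760 h * 1.8 PUE ≈ 18,922 kWh -> roughly $1,892 per system per year
```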

Administrative costs also need to be considered, but they are typically more difficult to gauge and can vary widely depending upon the type of compute and storage infrastructure in use.

Some of the newer arrays, such as Pure Storage's, do an excellent job of simplifying administration, but even Pure still requires storage tasks related to LUNs, zoning, masking, Fibre Channel, multipathing, etc. And this doesn't include the work of administering the server side. Here's my recent post comparing firmware upgrades between Nutanix and Cisco UCS.

Table 6 shows the 5-year TCO chart for the mortgage customer, including a conservative estimate of reduced administrative cost.

Table 6: TCO of Vblock 320 vs. Nutanix NX-6260

HC: In addition to slashed costs for rack space, power and cooling, HC is managed entirely by the virtualization team – no need for specialized storage administration tasks.

9.  SANs have a higher risk of downtime / lost productivity

RAID is, by today’s standards, an ancient technology. Invented in 1987, RAID still leaves a SAN vulnerable to failure. In some configurations, such as RAID 5, two lost drives can mean downtime or even data loss.

Both disks and RAID sets are getting larger. Larger disks mean longer rebuilds, increasing both the risk to performance and the odds that another failure takes out the entire set.
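The widening rebuild window can be approximated with a simple exponential-failure model (a sketch with an assumed drive MTBF; real drive failures are not perfectly independent, so treat the outputs as directional):

```python
import math

MTBF_HOURS = 1_000_000   # assumed mean time between failures per drive

def p_second_failure(surviving_drives, rebuild_hours):
    """Chance another drive in the RAID set fails during the rebuild,
    assuming independent exponential failure times."""
    return 1 - math.exp(-surviving_drives * rebuild_hours / MTBF_HOURS)

# Larger drives take longer to rebuild, widening the window of exposure.
for rebuild in (12, 24, 48):  # hours
    print(f"{rebuild:2}h rebuild, 11 surviving drives: {p_second_failure(11, rebuild):.3%}")
```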

And regardless of RAID type, a failed storage controller cuts SAN performance in half (assuming two controllers). Lose two controllers, and it’s game over.

[Tweet from @BEarena]

Sometimes unexpected events – such as a water main breaking on the floor directly above the SAN – can cause failure. Firmware upgrades, in addition to being a laborious process, carry additional risk of downtime. Then there's human error; array complexity makes it a realistic concern.

As demands on the array increase over time, the older SAN technology becomes still more vulnerable to disruption or outright failure. Even temporary downtime can be very expensive.

HC: Rather than RAID striping, an HC solution such as Nutanix replicates virtual machine data across two or three nodes. A lost drive, or even an entire node, has minimal impact because the remaining nodes rebuild the failed unit non-disruptively in the background. And the more nodes in the environment, the faster a failed node is re-protected.
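The claim that bigger clusters rebuild faster follows from spreading the rebuild work across every surviving node. A simplified model (the per-node throughput and data size are assumptions):

```python
NODE_DATA_TB = 10    # data to re-protect after a node failure (assumed)
REBUILD_TBPH = 0.5   # TB/hour each surviving node contributes (assumed)

for cluster_size in (4, 8, 16, 32):
    hours = NODE_DATA_TB / (REBUILD_TBPH * (cluster_size - 1))
    print(f"{cluster_size:2}-node cluster: ~{hours:4.1f} h to re-protect")
# 4 nodes -> ~6.7 h; 32 nodes -> ~0.6 h. A RAID set, by contrast, rebuilds
# through a single controller at a roughly fixed rate regardless of scale.
```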

10.  Downsizing Penalty

Growth is not the only source of SAN inefficiency; downsizing can be a problem as well. Downsizing can result from a business contraction, but also from a desire to move workloads to the cloud. Either way, the high cost and fixed operating expenses of a SAN are difficult to justify as workloads shrink.

HC: Customers can sell off or redeploy their older, slower nodes, minimizing rack space, power and cooling expenses by running only the newest, highest-performance nodes. The software-defined nature of HC also makes it easy to add new capabilities such as Nutanix's "Cloud Connect", which enables automatic backup to public cloud providers.

The Inevitable Transition from SANs to HC

SANs were designed for the physical world, not for virtualized datacenters. The reason they proliferate today is that when VMware launched vMotion in 2003, it mandated, “The hosts must share a storage area network”.

But Moore’s Law marches relentlessly on. Hyper-convergence takes advantage of faster CPU, memory, disk and flash to provide a significantly superior infrastructure for hosting virtual machines. It will inevitably replace the SAN as the standard of the modern datacenter.

Thanks to Josh Odgers (@josh_odgers), Scott Drummonds (@drummonds), Cameron Stockwell (@ccstockwell), James Pung (@james_nutanix), Steve Dowling and George Fiffick for ideas and edits.

EMC, Pure and NetApp weigh in on Hyper-converged infrastructure

Nearly every leading legacy and startup datacenter hardware player has, or has announced, a Hyper-Converged Infrastructure (HCI) solution. But how do they really see HCI?

Yesterday provided some clues: an article from The Register discusses declining array sales; a blog post from EMC President of Global Systems Engineering, Chad Sakac, covers the new VCE HCI announcements; and a post from Pure Storage Chief Evangelist, Vaughn Stewart, makes a case for why HCI won't replace storage arrays.

Disk Array Disarray

Chris Mellor's article in The Register, Disk array devastation: New-tech onslaught tears guts from trad biz, reveals what is perhaps a significant reason the storage manufacturers are entering the HCI market: "An EMC chart shows a steep decline in legacy SAN drive array sales." The article goes on to say that EMC sees the market moving "toward converged and hyperconverged systems, all-flash arrays and purpose-built back-up appliances."

[Tweet from Chad Sakac]

Chad Sakac's post, "A big day in converged infrastructure," discusses how EMC's Vblock is helping the company address the sea change in storage. The post was not clear (at least to me) about how Vblocks will incorporate HCI – but Sakac left no doubt that they will: "This is the experience of an 'engineered system' like a Vblock or a VxBlock – whether it's converged, or hyper-converged."

Sakac also references VSPEX Blue and EVO:Rack – both of which, along with Vblock, are now part of EMC's VSPEX converged infrastructure division.

Pure Storage

Vaughn Stewart, Pure Storage's Chief Evangelist and formerly of NetApp, wrote an interesting post yesterday about HCI, Hyper-Converged Infrastructures are not Storage Arrays. Stewart starts off endorsing HCI ("I'm a Huge Fan of Hyper-Converged Infrastructures") but then quickly changes course and relegates the technology to "the low end storage array market."

Stewart goes on to outright bash HCI, arguing that data mirroring on a per-virtual-disk basis is inferior to RAID (a technology invented in 1987). He also presents numerous calculations claiming low storage utilization and other supposed HCI limitations.

[Tweet from Vaughn Stewart]

I'm not going to address Stewart's claims in this post; they may very well apply to other HCI players, but they do not apply to Nutanix. Josh Odgers (aka FUDbuster) is writing a post in response to Vaughn's piece.

Stewart made no mention in his article of Pure's own apparent plans to introduce an HCI solution.

NetApp

Since NetApp's Mike Riley wrote the post, VSAN and Hyper-Converged will Hyper-Implode, last June, it would be unfair to assume it reflects NetApp's current perspective on HCI. On the other hand, even when NetApp unveiled its ONTAP EVO:RAIL solution a few months ago, the company made it clear that HCI without NetApp storage is not suitable for the enterprise.

[Tweet from Duncan]

A Question of Mindset

Sakac, Stewart and Riley are among the most respected technologists in our industry. But they also work for array manufacturers and naturally see the world through the lens of protecting legacy business.

HCI's tremendous gain in mind share is driving the storage players to enter the market. This further validates the technology, even though the array manufacturers position HCI as a low-end alternative to disk or flash arrays.

Nutanix, on the other hand, eats, breathes and sleeps web-scale HCI in all that we do. It's a question of mindset: the array manufacturers offer customers yet another storage option; Nutanix is revolutionizing the virtualized datacenter.


Cisco jumps into the hyper-converged game

Cisco changed the datacenter game with UCS – the only server designed from top to bottom for virtualization. Despite widespread skepticism that the networking giant knew nothing about servers and would fail miserably, in less than five years UCS became the number one blade seller in the Americas.

In our new compressed disruptive-cycle world, Cisco itself has now fallen far behind when it comes to optimally hosting a virtualized datacenter. The company is, however, scurrying to catch up. Within the past few weeks, it has been revealed that Cisco both invested in hyper-converged startup Stratoscale and blessed Maxta as the first, and so far only, certified hyper-converged solution to run on Cisco UCS.

How UCS Thumped the Server Leaders

I’ve been a vocal fan of UCS from the beginning. In late 2009, when “over 100 companies” were using UCS, I wrote a blog post comparing UCS vs. the HP Matrix. While competitors scoffed at UCS as a “one-size-fits-all product”, I maintained that it would revolutionize datacenter virtualization.

The dominant server manufacturers of the day were perfectly happy with the status quo. But Cisco realized that virtualization would become the datacenter standard and that a new type of server was required. Cisco initially approached IBM and HP to jointly develop a product, but both companies declined. So Cisco instead funded VMware cofounder Ed Bugnion and a team of engineers to spend three years building UCS.

UCS helped mitigate virtualization challenges with capabilities such as FCoE (Fibre Channel over Ethernet), hypervisor bypass, extended memory, service profiles and a GUI that helps the server, storage and network teams collaborate more effectively.

But UCS's Achilles heel is that it addresses only a small part of the virtualized datacenter's issues – the compute. By far the majority of the pain in the modern datacenter has to do with storage. Not surprisingly, four storage manufacturers – EMC/VCE, NetApp, Hitachi and Nimble – all incorporate UCS as an integral component of their so-called "converged infrastructure" solutions.

Channel partners across the globe, such as the one I worked for, understood that as customers increasingly virtualized their datacenters, they would want the enterprise capabilities and features that UCS offered. These partners worked with Cisco to make UCS the number two blade seller in the world.

Descending into Irrelevance

Ah, but all things must change – especially in a software-defined world. While Cisco was promoting the superiority of custom-designed ASICs, Nutanix was bringing the advantages of commodity-driven web-scale architecture to the enterprise. The impressive innovations that Cisco unveiled over five years ago are now not just obsolete, but superfluous.

  • Fibre Channel over Ethernet (FCoE): Unlike the converged infrastructure offerings built around UCS, FCoE is an example of true convergence of the network stack – melding Fibre Channel and IP Ethernet networks. But today, web-scale eliminates the requirement for SANs and switching fabrics entirely.
  • UCS Manager GUI: Lets storage and server teams collaborate more effectively. Not so useful when separate storage administrators are no longer necessary.
  • Custom ASICs: Cisco boasts a 12% performance increase from proprietary hardware. Nice, but inconsequential when Moore's Law roughly doubles performance every couple of years anyway. Nutanix utilizes commodity hardware, yet increases performance nonetheless with regular software updates that improve hardware effectiveness.
  • Service profiles and templates: These were great in their day for relatively fast provisioning of ESX hosts. Nutanix Foundation is much faster and doesn't require zoning, masking or manual hypervisor installs.
  • Integrating the Cisco Nexus switch: Making the network the management center was key to Cisco gaining traction with its network administrator constituency. But web-scale eliminates the requirement for complex, intelligent and expensive converged network switches.

The leading converged infrastructure manufacturer, VCE, proudly advertises that it takes only 45 days to order and put a Vblock into production – five times faster than with conventional servers and storage. In contrast, Nutanix can be ordered, received, installed and in production in around five days.

[Image: VCE advertisement]

Upgrading VMware vSphere requires a corresponding upgrade to the entire Vblock – a process that can easily take a team of consultants several days to accomplish. And even then there are risks involved. A former Vblock customer that recently migrated to Nutanix was still running vSphere three versions back because it didn't want to deal with the associated Vblock upgrade.

Contrast all of this time, expense and risk with doing a vSphere (or Hyper-V or KVM) upgrade on Nutanix. The process is literally just a single click. No cost, no downtime and no risk.

Lesson for Nutanix

UCS and Nutanix both target the same customers – virtualized enterprise environments. I’ve heard from multiple partners that despite our relatively tiny size, Cisco has declared Nutanix to be its number one competitor. Not HP. Not VMware. Nutanix. Cisco’s announcements around Maxta and Stratoscale reflect its determination to, albeit belatedly, get into the game.

Cisco is one of the most successful and well-run companies of all time. While known for its innovations in areas such as routing, switching, VoIP and collaboration – perhaps nothing has been as impressive as Cisco’s accomplishment in the datacenter. Cisco upended all of the existing dominant server players by developing UCS to fulfill the computing requirements of the virtualized datacenter.

The lesson here for Nutanix is that if Cisco can fall into complacency, anyone can. We’ve got to keep our heads down, be humble, stay hungry and keep innovating – even if we have to eventually disrupt our own technologies.

Lesson for Channel Partners

Cisco, VMware, Nutanix, Dell and HP, in addition to the other EVO:RAIL partners and lots of startups, validate that hyper-converged infrastructure/web-scale is the future of the virtualized datacenter. There's a $50 billion-plus annual server and storage market out there just begging to be disrupted by those channel partners with both the vision and the desire to execute.

Thanks to @vmmike130, @langonej, @evolvingneurons and to @richardarsenian for input.

When a channel partner looks in the mirror, does a trusted advisor look back?

In my former position as VP of Cloud and Virtualization at Presidio, I frequently used financial modeling to assist our reps, but did not drive sales on my own. That changed after I learned about Nutanix.

I loved the no-SAN concept and was curious to see how it would actually play in Peoria. I pitched a savvy CIO who had participated in an EDUCAUSE panel I moderated, and she was immediately intrigued. But the Chicago office of Presidio was reluctant to work with a new manufacturer, so I made the sale myself and convinced another region with which I had stronger ties to process the paperwork.

The experience should have tipped me off to the type of situation I would face in my dual channel and strategic sales role at Nutanix. While it's been surprisingly easy to sell web-scale converged infrastructure to former clients who have called me, or vice versa (always running the deals through partners, of course), it's often difficult to get buy-in from VARs – especially from large ones.

[Image: manufacturer rep cartoon, part 1]

The Channel Partner Perspective

I had dinner a few days ago with the VP of Sales of a sizable regional VAR. He asked me how much business our top partner would do with us this year. I told him that one organization had a plan in place to sell $50M in our new fiscal year, though internally we had pared that figure down to be conservative. The VP told me that his company will do $90M this year with EMC alone.

As enamored as he and his team were with our technology, I could tell he was thinking about how he could realistically present it internally. Even matching the sales of Nutanix’s largest partner wouldn’t come anywhere near the business he’s driving with EMC and Cisco. How could he convince his executive team that they should risk the wrath of their two largest vendors by promoting Nutanix?

And suppose he did manage to persuade the executive team to go all-in with web-scale; they would still have to get their sales reps on board. The reps have established relationships with legacy manufacturers, are trained and experienced in selling their products, and depend upon them for opportunities. These "coin-operated" reps do not readily gravitate toward promoting new technologies.

[Image: manufacturer rep cartoon, part 2]

The Customer Perspective

If I were a CIO, I would not want a solutions provider who simply brought me different product configurations from a leading datacenter manufacturer – I could find that information myself on the Web. I’d want to work with a partner who was diligent enough to constantly investigate new promising technologies, and who was astute enough to discern which ones could have a positive impact on my organization. I’d expect the partner to bring those options and his recommendations to me for review.

VARs that close-mindedly mimic their vendors' perspectives risk becoming, in the eyes of customers, glorified manufacturer reps. An EMC partner, for example, might feel confident today in leveraging a trusted relationship with a CIO to advocate Vblock as the best option for a VDI deployment. But the probability is increasing that the CIO will learn on her own that she could have implemented a similar project at a fraction of the cost, and with none of the risk, by utilizing web-scale. She will consequently feel her partner is either uninformed or, worse, acting in EMC's interest rather than hers.

[Image: manufacturer rep cartoon, part 3]

Preserving the Customer Relationship

Channel partners tell me that large enterprises move very slowly – the implication being that they have plenty of time to continue making lots of money by promoting legacy 3-tier infrastructure. Perhaps they’re correct, but it’s a dangerous way to conduct business.

Henry Ford famously said, "If I had asked people what they wanted, they would have said faster horses." Just because a customer asks for more storage doesn't mean a solutions provider should limit the conversation to arrays. They can take the opportunity to educate their client about how Google and the leading cloud providers have moved away from SANs and ancient (1987) RAID technology. They can discuss the advantages of web-scale converged infrastructure and whether the architecture might be appropriate for the customer's environment.

Even if the customer decides, for whatever reason, to go with traditional 3-tier infrastructure, at least the channel partner looked out for the customer’s best interest. Over time, as web-scale/hyper-converged infrastructure becomes the virtualized datacenter standard, the customer will appreciate the effort and integrity of the partner for introducing it.

The Playing Field has Already Changed

I don’t agree with the premise that big enterprises will continue to move slowly. External pressures from public cloud and internal pressures from much more rapidly changing technologies will force enterprises to change more quickly as well.

Just look at web-scale. Almost overnight it has jumped solidly into the mainstream. VMware’s endorsement of hyper-converged infrastructure as the platform of choice for hosting virtual machines leaves no doubt as to the future direction of virtualized datacenter architecture.

Then there's Dell – one of the "big seven" who collectively drive 76% ($56B) of the annual server and storage business. Dell also blessed hyper-converged architecture last week with its launch of the Dell XC Series: Web-scale Converged Appliances. Yet another of the "big seven", EMC, has said it will develop its own EVO:RAIL offering. Even HP is weighing in, both with an EVO:RAIL solution and with its own StoreVirtual product. And Cisco is showing signs of making the leap as well.

This massive validation during the past few months by the leading datacenter players enables solution providers to bring up web-scale without concern of appearing "bleeding edge". It also means that they should, with at least some degree of impunity, be able to focus on hyper-converged solutions by creating a separate division explicitly for that purpose.

However they do it, I strongly encourage channel partners to figure out a way to get engaged with web-scale. Nutanix continues, and is even accelerating, our trajectory as the fastest-growing infrastructure company of the past decade. This provides an extraordinary opportunity for forward-thinking partners to grow along with us.

The VCE dissolution: here comes channel disruption

“Partnering is more difficult than acquisitions…Most strategic coalitions have a very high failure rate, worse than acquiring, and yet as a company, we’ve all three been able to do this.”
-John Chambers, 2009

The waves of disruption are starting to break.

Several publications (see Sources) reported today that EMC is folding VCE into its business and buying out most of Cisco's stake. Assuming this is true (EMC is holding a big announcement tomorrow morning), it is another huge indicator of the massive datacenter disruption that's coming. While I think EMC will certainly still promote Vblocks, it's hard to imagine it will do so as enthusiastically as it did in conjunction with VCE.

Selling Vblocks

VCE has been on a $1.8 billion run rate. Vblock partners tend to love the product because they make a lot of money selling, installing and upgrading it. One VCE partner told me that every time VMware upgrades vSphere, customers have to upgrade their Vblocks. This is a laborious process, often requiring a team of consultants working up to three days. It translates to great services business.

I have to admit, I was surprised at how well VCE has done during the past five years. When I first heard about Acadia (as VCE was initially called), I thought there was no way the product would sell. The idea of getting the server, storage and networking folks to all come together at the same time and agree upon a common platform purchase seemed an insurmountable challenge. A year later I was still somewhat skeptical; I even wrote a blog post about the sales-incentive problems with misaligned quarter endings.

But I misjudged the desperation many IT staffs felt as they increasingly virtualized their datacenters. They faced huge challenges in deployment time, in finger-pointing between server and storage manufacturers, and in functional-group collaboration. The top-notch salespeople from VCE, along with channel partner support, convinced many of them that "the world's most advanced converged infrastructure" was an answer to their struggles.


Working for a channel partner that moved a whole lot of both Cisco UCS and EMC, I jumped on the Vblock bandwagon and helped facilitate a fair number of sales utilizing ROI analysis. But I always felt there had to be a more elegant solution to the challenges of hosting a virtualized datacenter than simply integrating separate products as a single SKU. Once I learned about the Nutanix web-scale architecture, I became convinced that it was a vastly superior alternative.

Channel Implications

A Taneja Group report comparing Nutanix with VCE (sponsored by Nutanix) was released today. The report states, "Taneja has found that the majority of VCE customers adopted Vblocks because they already had active VCE or VCE-partner sales teams coming at them."

It will be interesting to see how Vblock partners fare without the huge VCE focus and assistance. On the plus side, it's almost a certainty that they'll no longer have to promote Cisco's ACI over VMware's NSX. On the negative side, they can say goodbye to the monetary incentives from the recently established joint channel program between EMC and Cisco.

Not surprisingly, I’d like to see more Vblock partners get on board with web-scale. In my opinion, selling Vblocks in comparison to selling Nutanix is like pushing rope. Even some die-hard very large Vblock customers have now started migrating to web-scale.

The Taneja Group report says it well, “We believe that even data centers that are happy enough with converged systems today will look to hyperconverged systems tomorrow. Better yet, instead of investing in a traditional convergence solution, businesses should consider going directly to a next-generation solution like Nutanix.”

Sources:

Cisco Said to be Selling Most of VCE Stake to EMC. 10/22/2014. Bob Brown. Computerworld.

Report: EMC to Take a Bigger Role in VCE as Cisco Reduces Stake. 10/21/2014. Barb Darrow. Gigaom.

The End of Pretend? Cisco Looks to Partially Exit VCE Joint Venture. 10/21/2014. Ben Kepes. Forbes.

EMC Said to Absorb VCE Joint Venture as Cisco Reduces Stake. 10/21/2014. Dina Bass & Peter Burrows. Bloomberg.

Tech Titans Unite for Private Cloud Push. 11/05/2009. Jennifer Kavur. IT World Canada. http://www.itworldcanada.com/article/tech-titans-unite-for-private-cloud-push/40087