Citrix’s Bill Burley on building a disruptive channel business

I used to run a Citrix partner business in the San Francisco East Bay. We weren’t in the first group of Platinum partners, but we were close behind. I remember the first Citrix Platinum Partner meeting I attended in the late 1990s in Florida. As sometimes happens with channel partners, we were complaining a bit. I remember Citrix’s head of North American Sales, Bill Burley, berating us – saying, “You should be bleeding Citrix colors!”

Burley

I thought about what Bill said, and realized he was absolutely right. It was the ability of Citrix to open new doors and to differentiate our organizations that was the engine powering most of our growth. Bill has long since moved on to other duties at Citrix (he is currently VP, Business Integration of Acquisitions), but with Citrix Summit approaching next week, I thought it would be appropriate to seek his perspective on channel partners and disruptive technology:

SK:  Can you give me some personal background about how you came to Citrix?

BB:  I ran sales at LANSystems, which is where I first met [Citrix CEO] Mark Templeton. After Intel acquired the company, I joined Citrix to head up North American sales, which included running channels.

SK:  What do you see as the importance of channel from a manufacturer perspective?

BB:  The channel allows manufacturers to touch many customers. This allows manufacturers to scale far faster and more easily than they could with a direct sales force. Back when software came in a box, it was a boat anchor unless someone integrated it and made it a solution. The channel was the perfect vehicle for accomplishing this then, and it is also instrumental today in helping customers deploy enterprise solutions.

SK:  How did you build out Citrix’s channel?

BB:  Coming from a leveraged channel model at LANSystems, I established a similar motion while at Citrix. We started out recruiting Novell resellers. Before long, we helped the resellers get NT certified because our platform was NT-based. This had the side benefit of gaining Microsoft’s deep appreciation, which in turn helped us recruit more partners.

SK:  When Citrix Presentation Server was introduced, it was a radically different approach from other products then available.  How did you help partners be successful in selling it?

BB:  When I first met [Citrix founder] Ed Iacobucci, he told me, “Bill, we’re going to change the way the world computes!”  From the beginning, we made sure that the partner community knew we were not just another “me too” product – but that we had a solution which would help their customers lower costs and increase employee productivity. The smart partners understood this opportunity – they could smell it. Like many new technologies, we started off small – targeting individual departments. Eventually, we took over the whole enterprise.

SK:   How did you build the great channel loyalty for which Citrix is famous?

BB:  We weren’t selling widgets or math coprocessor chips. We were selling business transformation. This entailed a lot of partner investment and commitment – if we had emphasized software margin alone, the partners would have gotten up and left the room. Citrix was probably the first manufacturer to show the “drag” of other products and services that our technology pulled along. At one point, for every dollar of Citrix software, partners were realizing an average of $7 of associated pull-through. This drag, along with the ability of Citrix technology to enhance our partners’ role as trusted advisor with their customers, was very compelling to the channel.

SK:  What advice do you have for partners on how to capitalize on disruption?

BB:  It is important for partners to understand where they came from. They should be truthful about what their business is, what their customer base is, and how they serve those customers. Customers always rule. A partner’s job is to invest. I believe that partners should either commit or get out of the game. Dabbling is a mistake. Partners who don’t invest in training, education and understanding of the opportunity won’t be successful.

Partners can, of course, be smart about embracing new technologies. They should run the technology by their core customers – the ones who will give them honest feedback. But if those good customers tell partners that the technology is golden, partners need to pull out all the stops.

Citrix Summit 2015

Bill, unfortunately, is not going to be able to make Citrix Summit this year, but Nutanix is a gold sponsor, and in addition to our booth, we have a meeting room, hospitality suite, presentations and more. If any partners would like to meet with me or with other Nutanix executives, please shoot me an email: kap@nutanix.com.

Channel partners rally behind Nutanix Web-scale converged infrastructure

“Really?!”

That was the one-word email I received from Nutanix’s Sr. VP of Sales (and my boss), Sudheesh Nair, in response to the Q4 2013 Piper Jaffray Storage VAR Survey. The surveyed partners ranked Nutanix second to last in terms of sales performance relative to plan.

Needless to say, I was frustrated. The channel’s perception of Nutanix was out of sync with Nutanix’s record-setting 2013 sales as the fastest-growing infrastructure company of at least the past ten years.

But understanding and successfully positioning Nutanix has been a learning process for the channel. When Nutanix CEO Dheeraj Pandey first approached Lightspeed Venture Partners almost five years ago, he made it clear that his new company would disrupt the storage industry – including the venture capitalists’ existing investments. Unlike most entrants into the suddenly popular hyper-converged space, Nutanix has made this revolutionary vision integral to everything we do.

Partners can’t simply pitch a “faster, cheaper, better” storage array as they can with the other early stage companies in the survey. Partners need to be able to articulate and evangelize to their clients how Web-scale is a sea change that is fundamentally altering the infrastructure of the modern, virtualized datacenter.

The Difference a Year Makes

2014 continued the trajectory of rocketing sales and, gratifyingly, a much broader spectrum of channel partners caught the Web-scale fever as well. From small partners building their businesses around Nutanix to multi-billion dollar channel organizations moving Fortune 500 clients over to Web-scale, Nutanix is changing the channel landscape.

According to the latest Piper Jaffray report, channel partners now rank Nutanix sales performance in the #1 position – ahead of CommVault, Dell Storage, EMC, HP Storage, NetApp, Nimble, Pure Storage, Veeam and VMware.

Piper Jaffray


The Sterne Agee Channel Survey similarly shows a huge improvement in channel recognition of Nutanix. Channel partners ranked Nutanix as the second leading company disrupting the established storage sector – right behind Pure Storage (but quickly catching up). Nutanix ranks ahead of Nimble (and is rapidly widening the gap), and far ahead of Tintri, Violin Memory, Nimbus Data, Nexenta, SolidFire and everyone else.

Sterne Agee


Looking Forward to 2015

It’s exciting to see Nutanix partners across the world enthusiastically embrace the Web-scale opportunity. They’re leveraging Nutanix to differentiate their companies, gain new customers, increase sales and shorten sales cycles.

I want to thank all of our partners for your continued faith and trust. The good news is that Nutanix is really just getting started. New capabilities such as one-click hypervisor upgrades, metro availability, connectivity to AWS and Microsoft Azure, among many others, mean extraordinary continued opportunity in the year ahead.


NetApp joins the hyper-converged froth

I’m surprised that The Register, with its humorous yet pointed headlines, didn’t run an article titled something along the lines of:

NetApp to VMware: “EVO is nice for branch offices and stuff, but leave the heavy lifting to us”

Apparently, the whole NetApp EVO:Rail announcement took VMware by surprise. Duncan Epping, Chief Technologist in the VMware CTO office, commented on his blog, “Although I have been part of the EVO:RAIL team, it is not something I would have seen coming.”

From a datacenter disruption standpoint, the EVO:Rail partnership is important because it means yet another of the “big 7” has now announced its own hyper-converged solution; only IBM and Hitachi remain without an offering (not counting standard EVO:Rail for Hitachi). But I have my doubts about how serious NetApp actually is:

  1. NetApp has publicly stated, “FlexPod works more in the enterprise data center and large offices, while EVO: RAIL is more for department and branch office deployment outside the core data center.” I can just imagine that the VMware folks are grinding their teeth about that quote.
  2. Adding a Filer, or any SAN/NAS storage,  kills the EVO:Rail scale-out story – one of the most powerful attributes of a hyper-converged architecture. In other words, once customers fill up the Filer, they’ll need to purchase another Filer.
  3. EVO:Rail isn’t cheap. And even if an organization has a VMware ELA, it must still purchase the EVO:Rail licensing on an OEM basis from the manufacturer. When it is time to upgrade the hardware, the licensing must be purchased again. Adding NetApp will, of course, make the solution still more expensive.
  4. There is confusion about what the offering really is. No one even knows which servers will be used (best guess: Lenovo or Fujitsu). One thing is almost certain: it will be complex. NetApp and VMware are probably banking on VVols with policy management to help administer the environment, but VVols itself is not yet proven.
  5. Since NetApp cannot compete with a truly hyper-converged solution, it is trying to move the EVO:Rail architecture back toward the FlexPod/Vblock architectures by adding capabilities such as data deduplication, compression, cloning, replication, etc. But it will be difficult to position the NetApp EVO offering with respect to FlexPod. Support will likely be challenging (is it a VMware EVO issue or a NetApp issue?), flexibility will be limited, and resiliency will be constrained by RAID and the other archaic options of an array-based solution.

NetApp appears to have rushed this announcement to market – it didn’t want to be left out of the hyper-converged revolution. I suspect that while NetApp may use its EVO:Rail offering to open doors, its reps will still primarily be pushing FlexPod.

Time, of course, will tell whether I’m right or totally off-base. In the interim, I would be very interested in hearing from readers, especially from channel partners and potential customers, about your take on the NetApp EVO:Rail announcement.

EMC implies that SANs may not be so great for hosting virtual machines after all

The inventor of the storage array, EMC, has indicated that a hardware-defined architecture is perhaps no longer the best solution for hosting a virtualized datacenter. The Register reported today that EMC will utilize ScaleIO as a VMware kernel module.

As I pointed out in the introductory post to this site less than two months ago, IDC says that $56B of annual server and storage sales go through just seven datacenter manufacturers: HP, IBM, EMC, Dell, Cisco, Oracle and NetApp. EMC’s announcement means that the majority now have a certified hyper-converged solution (not even counting EVO:Rail):

  • EMC: ScaleIO
  • Cisco: Maxta (Cisco has also invested in Stratoscale)
  • HP: StoreVirtual
  • Dell: XC Series web-scale converged appliances, powered by Nutanix software

Despite their dependence on legacy 3-tier infrastructure for tens of billions in revenue, these datacenter giants recognize the necessity of joining the hyper-converged revolution. The threat of the public cloud, combined with much faster access to information, is driving hyper-converged adoption at an astounding pace.

SAN Huggers

Back in the aughts, we had to contend with the server huggers who staunchly refused to believe that their applications could run just as well, if not better, as virtual machines. But the financial and other advantages were too compelling to resist, and datacenters are now approaching an 80% virtualization rate.

Today, server huggers have been replaced by SAN huggers. These are the folks who insist that it is preferable to move flash and disk away from the compute and put them into proprietary arrays that must be accessed across the network. Never mind the issues around complexity, performance, resiliency, time-to-market and cost.

But just as virtualization provided an enormous opportunity for forward-thinking channel partners last decade, Web-scale has even more potential over the next several years. The key is introducing the concept in a way that will resonate with customers steeped in years of 3-tier infrastructure tradition.

Financial Modeling

It is natural for technologists, including channel partners, to jump into speeds and feeds and attributes and deficiencies. But I suggest taking a different tack. Help customers see a bigger picture, and consequently adopt a more strategic approach, with the aid of financial modeling.

IT leaders are realizing that to remain relevant, they need to run their internal operations with the same type of efficiency, responsiveness and accountability as the public cloud providers. This necessitates a more comprehensive process for selecting infrastructure than simply comparing up-front costs of similar solutions.

Cloud providers ruthlessly evaluate all of their on-going costs to ensure they are maximizing every square meter of datacenter space. Transitioning to ITaaS requires evaluating not only the equipment purchase price, but also expenses such as power, cooling, rack space, support, administration and associated hardware and software requirements.

One approach is to boil everything down to a lifecycle cost metric that can be easily applied to competing solutions. I describe a TCO per VM model in a recent Wikibon article. But regardless of how partners present the results, financial modeling on its own is insufficient for optimally determining an organization’s datacenter future.
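To make this concrete, here is a minimal sketch of how a partner might roll lifecycle costs up into a single TCO-per-VM number. It is illustrative only – the cost categories and dollar figures below are hypothetical assumptions for demonstration, not values taken from the Wikibon model.

    # Illustrative TCO-per-VM roll-up. All categories and dollar figures are
    # hypothetical assumptions used for demonstration only.

    def tco_per_vm(capex, annual_opex, vm_count, years=5):
        """Lifecycle cost per VM over the evaluation period.

        capex       -- one-time costs (hardware, software licenses, install services)
        annual_opex -- recurring yearly costs (power, cooling, rack space,
                       support contracts, administration time)
        vm_count    -- number of virtual machines the solution will host
        years       -- lifecycle length used for the comparison
        """
        lifecycle_cost = capex + annual_opex * years
        return lifecycle_cost / vm_count

    # Example: compare two hypothetical solutions hosting 400 VMs over five years.
    solution_a = tco_per_vm(capex=900_000, annual_opex=120_000, vm_count=400)
    solution_b = tco_per_vm(capex=650_000, annual_opex=80_000, vm_count=400)

    print(f"Solution A: ${solution_a:,.0f} per VM")   # $3,750 per VM
    print(f"Solution B: ${solution_b:,.0f} per VM")   # $2,625 per VM

The point is not the arithmetic, which is simple, but the discipline of putting every competing solution through the same lifecycle lens before the conversation moves on to features.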

Financial modeling is the hook to capture a prospect’s attention and to guarantee an audience with decision-makers. It is the key for partners to really understand their client’s pain points and objectives. They can then incorporate other vital variables such as risk, expandability, agility, reliability, resiliency, and so on within a framework that will resonate with their customers.

Going through this process positions a solutions provider to help its customers begin the datacenter migration process. It also provides the opportunity to incorporate private cloud, active/active datacenters, virtual desktops and other use cases made economically feasible by a hyper-converged infrastructure.

Disruption Made Easy

Even a compelling Web-scale evaluation can still leave a partner challenged to disrupt existing buying habits, processes and governance policies. But now that EMC has joined VMware and three of the other leading hardware manufacturers in validating hyper-converged infrastructure, it is easier for partners to initiate a conversation around datacenter strategy.

The winners in the new software-defined era will be those solutions providers who help their customers understand, select and implement the best architecture for their environments. The losers will be the VARs who continue to push legacy solutions without even bringing the Web-scale options to the table.

Cisco jumps into the hyper-converged game

Cisco changed the datacenter game with UCS – the only server designed from top to bottom for virtualization. Despite widespread skepticism that the networking giant knew nothing about servers and would fail miserably, in less than five years UCS became the number one blade seller in the Americas.

In our new compressed disruptive-cycle world, Cisco itself has now fallen far behind when it comes to optimally hosting a virtualized datacenter. The company is, however, scurrying to catch up. Within the past few weeks, it’s been revealed that Cisco has both invested in hyper-converged startup Stratoscale and blessed Maxta as the first, and so far only, certified hyper-converged solution to run on Cisco UCS.

How UCS Thumped the Server Leaders

I’ve been a vocal fan of UCS from the beginning. In late 2009, when “over 100 companies” were using UCS, I wrote a blog post comparing UCS vs. the HP Matrix. While competitors scoffed at UCS as a “one-size-fits-all product”, I maintained that it would revolutionize datacenter virtualization.

The dominant server manufacturers of the day were perfectly happy with the status quo. But Cisco realized that virtualization would become the datacenter standard and that a new type of server was required. Cisco initially approached IBM and HP to jointly develop a product, but both companies declined. So Cisco instead funded VMware cofounder, Ed Bugnion, and a team of engineers to spend three years building UCS.

UCS helped mitigate virtualization challenges with capabilities such as FCoE (Fibre Channel over Ethernet), hypervisor bypass, extended memory, service profiles and a GUI that helps the server, storage and network teams collaborate more effectively.

But UCS’s Achilles’ heel is that it addresses only a small part of the virtualized datacenter’s issues – the compute. By far the majority of the pain in the modern datacenter has to do with storage. Not surprisingly, four storage manufacturers – EMC/VCE, NetApp, Hitachi and Nimble – all incorporate UCS as an integral component of their so-called “converged infrastructure” solutions.

Channel partners across the globe, such as the one I worked for, understood that as customers increasingly virtualized their datacenters, they would want the enterprise capabilities and features that UCS offered. These partners worked with Cisco to make UCS the number two blade seller in the world.

Descending into Irrelevance

Ah, but all things must change – especially in a software-defined world. While Cisco was promoting the superiority of custom-designed ASICs, Nutanix was bringing the advantages of commodity-driven web-scale architecture to the enterprise. The impressive innovations that Cisco unveiled over five years ago are now not just obsolete, but superfluous.

  • Fibre Channel over Ethernet (FCoE): Unlike the converged infrastructure offerings built around UCS, FCoE is an example of true convergence of the network stack – melding fibre channel and IP Ethernet networks. But today, Web-scale eliminates the requirement for SANs and switching fabrics entirely.
  • UCS Manager GUI: Lets storage and server teams collaborate more effectively. Not so useful when separate storage administrators are no longer necessary.
  • Custom ASICs: Cisco boasts 12% increased performance from proprietary hardware. Nice but inconsequential when Moore’s Law doubles performance every 18 months anyway. Nutanix utilizes commodity hardware, but increases performance nonetheless with regular software updates that improve hardware effectiveness.
  • Service profiles and templates: These were great in their day for relatively fast provisioning of ESX hosts. Nutanix Foundation is much faster and doesn’t require zoning, LUN masking or manual hypervisor installs.
  • Integrating the Cisco Nexus switch: Making the network the management center was key to Cisco gaining traction with its network administrator constituency. But Web-scale eliminates the requirement for complex, intelligent and expensive converged network switches.

The leading converged infrastructure manufacturer, VCE, proudly advertises that it takes only 45 days to order a Vblock and put it into production – 5X faster than with conventional servers and storage. In contrast, Nutanix can be ordered, received, installed and in production in around five days.

VCE ad

Upgrading VMware vSphere requires a corresponding upgrade to the entire Vblock – a process that can easily require a team of consultants several days to accomplish. And even then there are risks involved. A former Vblock customer that recently migrated to Nutanix was still running a version of vSphere three releases back because it didn’t want to deal with the associated Vblock upgrade.

Contrast all of this time, expense and risk with doing a vSphere (or Hyper-V or KVM) upgrade on Nutanix. The process is literally just a single click. No cost, no downtime and no risk.

Lesson for Nutanix

UCS and Nutanix both target the same customers – virtualized enterprise environments. I’ve heard from multiple partners that despite our relatively tiny size, Cisco has declared Nutanix to be its number one competitor. Not HP. Not VMware. Nutanix. Cisco’s announcements around Maxta and Stratoscale reflect its determination to, albeit belatedly, get into the game.

Cisco is one of the most successful and well-run companies of all time. While known for its innovations in areas such as routing, switching, VoIP and collaboration – perhaps nothing has been as impressive as Cisco’s accomplishment in the datacenter. Cisco upended all of the existing dominant server players by developing UCS to fulfill the computing requirements of the virtualized datacenter.

The lesson here for Nutanix is that if Cisco can fall into complacency, anyone can. We’ve got to keep our heads down, be humble, stay hungry and keep innovating – even if we have to eventually disrupt our own technologies.

Lesson for Channel Partners

Cisco, VMware, Nutanix, Dell and HP, in addition to the other EVO:Rail partners and lots of startups, validate that hyper-converged infrastructure/web-scale is the future of the virtualized datacenter. There’s a $50 billion-plus annual server and storage market out there just begging to be disrupted by those channel partners with both the vision and the desire to execute.

Thanks to @vmmike130, @langonej, @evolvingneurons and to @richardarsenian for input.

Happy birthday VMware vMotion

On this day 11 years ago in 2003, VMware introduced vMotion, and the datacenter was never the same again.

Windfall for Storage Manufacturers and for Solutions Providers

If you were involved in IT, you probably still remember the first time you saw vMotion – moving a live running virtual machine between physical hosts seemed like magic at the time. In my case, a friend’s demonstration of vMotion convinced me to start an integrator business with him focused on enterprise virtualization.

The introduction of vMotion was also the birth of the modern datacenter. It was the feature that made IT organizations really take notice of virtualization and of what it could do to improve their operations. And because vMotion required a SAN, it prompted organizations across the globe to begin purchasing shared storage arrays in massive quantities.

vMotion

VMware vMotion was, of course, a huge bonanza for the young storage manufacturers whose sales had been hit hard by the dot com bubble burst. EMC recognized a good thing when it saw it, and the next month announced its intent to purchase VMware for $625 million (VMware’s market cap today is $36 billion – so quite an astute acquisition).

VMware vMotion also turned out to be quite a boon for solutions providers – many of whom were still struggling themselves from the dot com bubble aftermath. Their services were in strong demand for helping organizations decide what arrays to buy, and how to design and implement the complex products and switching fabrics.

SAN Huggers

In the early days of virtualization, server huggers were common. We used to joke with IT staffs about putting in a façade of servers and blinking lights so they could make the application owners feel comfortable. And we really did hide the ESX tools from the Windows taskbar so that the software manufacturer, when troubleshooting its product, wouldn’t see that it was running as a virtual machine.

Today, the server huggers are nearly an extinct species. Organizations are commonly virtualizing even large SQL Server, Oracle and Exchange applications. But a new group has arisen to take their place: SAN huggers.

As the name implies, SAN huggers don’t want anyone to replace their arrays with the new breed of hyper-converged or web-scale infrastructure products. They’re very comfortable with LUN snapshot management, balancing virtual machines across different physical volumes to get around LUN limitations, maintaining aggregates/meta-volumes, and the many other storage administration tasks.

The ironic thing is that storage arrays were built for a physical “scale-up” datacenter. Although they satisfied vMotion’s requirement for shared storage, they’re simply not a good fit for a highly virtualized “scale-out” datacenter. Take RAID, which was invented in 1987. It is a really old technology that requires lengthy rebuild times and that can be disastrous if multiple drives fail simultaneously. The same is true if a SAN loses both of its storage controllers. Losing just one controller significantly reduces performance.

SANs take the disk and the flash away from the CPU and stick them in proprietary arrays at the end of networks where they’re subject to latency and network hops. They scale very poorly, are expensive, in many cases require separate switching fabrics, and are complex to manage.

Web-Scale Converged Infrastructure

When Google came on the scene in the late 1990s, co-founder Sergey Brin refused to buy SANs and instead hired a group of scientists to rethink datacenter infrastructure. They invented the Google File System, MapReduce and NoSQL databases, and put all of the intelligence into software rather than into proprietary hardware. The result was a very inexpensive infrastructure that is also highly resilient, scalable and simple to manage.

The lead Google scientist and two other Nutanix co-founders brought this same type of architecture to the enterprise datacenter by leveraging the hypervisor to virtualize the storage controllers. The result is a low-cost, self-healing, linearly scalable and very simple to manage infrastructure.

Although still very small by datacenter incumbent standards, Nutanix has already made a big impact in the industry. VMware introduced VSAN and now EVO:Rail as the recommended path to a software-defined datacenter. And hardware leaders EMC, Dell, HP and Cisco all have existing solutions, or planned entries, in the web-scale/hyper-converged infrastructure space.

While it may seem highly unlikely today, my guess is that the SAN huggers are going to have a much shorter reign than the server huggers did before them.

Today’s vMotion-like Moment

Nutanix’s management interface, Prism, is simple, elegant and comprehensive. When partners and customers see it for the first time, many report having the same type of “wow!” experience that they had the first time they saw vMotion.

Thanks to @vmmike130 for editing.