Executives of the Round Table

The news of top executive role shifts at EMC & VMware and the accompanying rumors of a new cloud-oriented spinoff have certainly spiced up the summer. One thing is abundantly clear: EMC and VMware should be thought of as the same corporate entity, and there is no point in pretending otherwise. It is also clear that the cloud is the irresistible force in this story. The new roles for both Mr. Gelsinger and Mr. Maritz were described in the context of cloud:

“Pat will now lead Cloud Infrastructure at VMware, and Paul will look across our technology strategy with a particular focus on Big Data and the next generation of cloud-oriented applications that will be built on top of these foundations.”

It doesn’t really matter what VMware’s legal structure is; it operates as part of EMC. The swapping of logos on Gelsinger’s and Maritz’s business cards is powerful evidence. The two companies are co-managed and will continue to be co-managed and cross-pollinated for many years. Employees moving up the ranks of either company will see opportunities coming from both, which will make it easier to retain top talent. M&A activity will continue at both companies, with the determination of which one will be the final owner of an acquired technology decided later. The ability of EMC/VMware to acquire and develop technologies in different ways and in parallel could turn out to be a huge advantage for successful technology and corporate integration.

It’s also clear that EMC is not really a storage company anymore; it is a data infrastructure company that designs and sells both hardware and software. The EMC brand implies a storage focus, while the VMware brand implies a compute focus. Together they are twin gorillas of infrastructure that will drive their business results independently. They seem to have found an optimal way to manage through the market swings that inevitably come with a paradigm shift. It certainly wasn’t part of the plan when EMC bought VMware, but it certainly looks like genius today.

The rumored spinout of a new company to address platform opportunities with Cloud Foundry and Greenplum will be interesting to watch. Some think Mr. Maritz will be named to run the company because his new job description specifically mentions Big Data and cloud-oriented applications. But I’m not so sure. His tenure at VMware was immensely successful. He vastly expanded VMware’s opportunities by creating a platform software business that meshes perfectly with EMC’s core business. I suspect the company will reward him by placing him in an over-arching strategic planning role for the entire EMC universe, something he is uniquely suited for, rather than building a new spinout from the ground up. There are other operations-oriented execs in the company who can do that. Why wouldn’t EMC/VMware want to show off more of their talent pool? And with that thought, seeing how many things have changed at EMC over the last decade, they might even elevate somebody who isn’t a Caucasian male. That would be a switch.

Dedupe is coming fast for primary storage – and the cloud will follow

With EMC’s acquisition of XtremIO today, the landscape for storage products appears destined to change again to include a new segment for all-flash arrays. One of the technologies that will go mainstream along with flash arrays is primary dedupe: when you have all the read performance that flash provides, there isn’t any reason not to do it. A number of smaller vendors, including StorSimple (the company I work for) and Pure Storage, already pair dedupe with flash SSDs.

Chris Evans, in his blog The Storage Architect, wrote a couple of weeks ago about the potential future for primary dedupe, pointing out that NetApp’s A-SIS product has produced good results for customers since it was introduced in 2007. He then discusses the symbiotic relationship between flash SSDs and dedupe before posing the question of when dedupe will become mainstream for primary storage, saying:

That brings us to the SSD-based array vendors.  These companies have a vested interest in implementing de-duplication as it is one of the features they need to help make the TCO for all-SSD arrays work.  Out of necessity dedupe is a required feature, forcing it to be part of the array design.

Solid state is also a perfect technology for deduplicated storage.  Whether using inline or post-processing, de-duplication causes subsequent read requests to be more random in nature as the pattern of deduplicated data is unpredictable.  With fixed latency, SSDs are great at delivering this type of read request that may be trickier for other array types.

Will de-duplication become a standard mainstream feature?  Probably not in current array platforms but definitely for the new ones where legacy history isn’t an issue.  There will come a time when those legacy platforms should be put out to pasture and by then de-duplication will be a standard feature.

In a post I wrote last week about using deduplication technology for data stored in the cloud, I described the benefits of dedupe for reducing cloud storage transaction and capacity costs. As the wheels of the industry continue to turn, it’s also inevitable that the systems that access cloud storage will dedupe data. There isn’t any reason not to do it. The technology is available today and it’s working. Check it out.
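To make the mechanics concrete, here is a minimal sketch of fixed-block deduplication (the block size and function names are my own illustrative choices; real systems add variable-length chunking, persistent indexes, and collision handling). Each block is fingerprinted, and only previously unseen blocks would need to be uploaded and stored, which is where the transaction and capacity savings come from:

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep only unique ones,
    keyed by SHA-256 fingerprint."""
    store = {}   # fingerprint -> block payload (stored/uploaded once)
    recipe = []  # ordered fingerprints needed to rebuild the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:
            store[fp] = block  # only new blocks incur storage/upload cost
        recipe.append(fp)
    return store, recipe

def reassemble(store, recipe):
    """Rebuild the original byte stream from fingerprints."""
    return b"".join(store[fp] for fp in recipe)

# A file with heavy repetition: 100 logical blocks, only 2 distinct.
data = (b"A" * 4096 + b"B" * 4096) * 50
store, recipe = dedupe_blocks(data)
print(len(recipe), "logical blocks,", len(store), "unique blocks stored")
assert reassemble(store, recipe) == data
```

With highly redundant data, the physical footprint (and the number of cloud PUT operations) drops to the count of unique blocks rather than logical blocks.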

Speed excites, but how about cool, tight storage?

In 2011, Fusion-io disrupted the enterprise storage industry with its high-performance PCIe flash memory cards. EMC responded this week by announcing its own VFCache product. Suddenly there is a hotly contested race, and it’s up to the rest of the industry to respond.

I love going fast on bikes and skis, but fast is a relative thing. There are always people who will go a lot faster than me, but I don’t need to keep up with them to be happy. The same is true for driving – I drive a 4-cylinder Ford Fusion because I love Sync and I don’t care if it’s not fast.

Cold will be hot

The “good enough for me” principle works for storage too. Most companies have a lot of data that doesn’t need high performance I/O. Server flash products address the hottest data a company has, but what about all the “cool” or “cold” data that is infrequently or never accessed? It typically occupies storage systems that were built to compete based on performance. Even the lowest-performing tiers of most enterprise storage systems significantly over-serve customers by providing service levels far beyond what is needed.  At some point another industry disruption will occur as new products emerge that significantly reduce the cost of storing cool data.

A difficult problem for storage administrators is that there is no way to separate cool data that will be accessed again in the future from cold data that will never be accessed again. One approach is to archive cool data to tape, but the delays and difficulties in locating and restoring cool data when it reheats are not all that comforting. Another approach is to send cool data to an online cloud storage tier provided by enterprise vendors such as Microsoft, Amazon, Rackspace, AT&T, and HP. Cool data in the cloud that reheats is transparently moved back to a local, warmer tier until it cools off again. Data stored in a cloud tier does not require the power, cooling and footprint overhead of data stored in the corporate data center storage and it also reduces the cost and impact of storage system end-of-life events.
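The transparent recall behavior described above can be sketched as a toy two-tier store (the class, its names, and the 30-day idle threshold are my own assumptions, not any vendor's policy):

```python
import time

COOL_AFTER = 30 * 24 * 3600  # assumed policy: demote after 30 idle days

class TieredStore:
    """Toy two-tier store: a 'local' warm tier and a 'cloud' cool tier.
    Reads of cool data transparently recall it back to the local tier."""

    def __init__(self):
        self.local = {}  # key -> (value, last_access_time)
        self.cloud = {}  # key -> value

    def put(self, key, value, now=None):
        self.local[key] = (value, now if now is not None else time.time())

    def get(self, key, now=None):
        now = now if now is not None else time.time()
        if key in self.cloud:  # reheated: recall to the local tier
            self.local[key] = (self.cloud.pop(key), now)
        value, _ = self.local[key]
        self.local[key] = (value, now)  # refresh last-access time
        return value

    def demote_cool(self, now=None):
        """Move anything idle longer than COOL_AFTER to the cloud tier."""
        now = now if now is not None else time.time()
        idle = [k for k, (_, t) in self.local.items() if now - t > COOL_AFTER]
        for key in idle:
            value, _ = self.local.pop(key)
            self.cloud[key] = value

# After 31 idle days the object lives in the cloud tier; a read recalls it.
store = TieredStore()
store.put("report.pdf", b"...", now=0)
store.demote_cool(now=31 * 24 * 3600)
assert "report.pdf" in store.cloud
assert store.get("report.pdf", now=31 * 24 * 3600) == b"..."
assert "report.pdf" in store.local
```

The point of the sketch is the access pattern: demotion is a background policy decision, while recall happens inline on the read path, so applications never see the tier boundary.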

"Tight" storage looks good

But cloud storage tiers are not the whole answer. Customers will want to ensure that cool/cold data doesn’t consume any unnecessary storage capacity. Cloud storage products that incorporate data reduction technologies such as thin provisioning, deduplication, and compression will provide the “tightest” fit for this data by running at capacity utilizations that are unheard of in primary enterprise storage today. In addition to reducing storage costs, these products will increase return on investment by reducing the bandwidth and transaction fees that some cloud service providers charge. Keeping a tight grip on storage expenses will become synonymous with using the tightest, most efficient cloud-integrated storage systems.
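The reason those utilization numbers can look so different from primary storage is that the three technologies stack multiplicatively. A back-of-the-envelope sketch (the ratios and the 60% written fraction are illustrative assumptions, not measurements):

```python
def physical_gb_needed(provisioned_gb, written_fraction,
                       dedupe_ratio, compression_ratio):
    """Estimate physical capacity after stacking data reduction.
    Thin provisioning means only written data consumes space;
    dedupe and compression then shrink the written data further."""
    written = provisioned_gb * written_fraction
    return written / dedupe_ratio / compression_ratio

# 100 TB provisioned, 60% actually written, 4:1 dedupe, 2:1 compression
print(physical_gb_needed(100_000, 0.6, 4, 2))  # 7500.0 GB physical
```

Under these assumed ratios, 100 TB of provisioned capacity lands on 7.5 TB of physical media, and the same reduction applies to the bytes shipped over the wire and the per-operation fees a cloud provider charges.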