Flash is Disrupting the Benchmark Game too

A couple of days ago, Lou Lydiksen of Pure Storage posted a blog titled “What’s wrong with using 100% non-reducible data?” The post is better than its title suggests because it exposes a real problem storage buyers face today: validating the performance of flash storage systems with benchmarks that were developed for disk-based systems.

Independent storage performance benchmarks have been nerdy infotainment for many years, thanks to the work that often goes into getting the best possible results. Storage systems are usually configured and tuned specifically to produce them. (An example of a benchmark that appears to break this rule is the recent SPC benchmark from Kaminario.) There is nothing wrong with optimizing application performance, even if that application is a benchmark, but it obviously makes sense to use benchmarks that model realistic, production workloads.

The larger question is what happens when new technologies with new architectures come to market. For example, the SPC-1 benchmark for transaction processing accommodates flash storage technology, but it does not allow the use of compression and deduplication. That’s a big deal because both are important cost-saving features of enterprise flash systems that most customers want, yet customers have no way to predict if, or how, those features will impact performance. The lack of a feature-comprehensive benchmark does not appear to be slowing the growth of flash storage systems, but that is beside the point – customers would benefit. Other realistic tests would include running the benchmark in a virtual environment alongside simulated background processes, and pinning the benchmark application in flash while background processes run on disk or on some combination of disk and flash.
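The point about reducibility is easy to demonstrate: an array with inline data reduction does far less physical work when the benchmark data compresses well, so the choice of test data changes what you are actually measuring. Here is a minimal sketch (my own illustration, not anything from SPC or Pure Storage) using Python’s zlib as a stand-in for whatever reduction algorithm an array might run inline:

```python
import os
import zlib

def reduction_ratio(buf: bytes) -> float:
    """Compressed size divided by original size (lower = more reducible)."""
    return len(zlib.compress(buf)) / len(buf)

# 100% non-reducible data: cryptographically random bytes do not compress,
# so the array must physically write every byte the benchmark sends.
random_block = os.urandom(1 << 20)  # 1 MiB

# Reducible data: a repeated record pattern, closer to many real datasets.
patterned_block = (b"customer-record-0042;" * 50000)[: 1 << 20]

print(f"random:    {reduction_ratio(random_block):.3f}")
print(f"patterned: {reduction_ratio(patterned_block):.3f}")
```

Running something like this shows the random block staying at roughly its full size while the patterned block shrinks to a tiny fraction of it – which is exactly why a benchmark restricted to non-reducible data can’t tell a customer how a compressing flash array will behave on their workload.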

The SPC has indicated that it is sorting some of this out and is developing a way to include compression (first) and deduplication (later) in its benchmarks. I don’t know where they are with respect to virtualization and hybrid designs, but in my opinion storage benchmarking is suddenly a green-field opportunity again – brought about by the rise of enterprise flash storage systems. Will we see enterprising analyst/entrepreneurs rise up to grab the brass ring and establish clear thought leadership amidst the chaos? Are there any takers?

The Return of the Steering Wheel Camera Society of America

Yes! It's back – The Steering Wheel Camera (LG 3) is now bungee-corded to the rearview mirror of my Infiniti G37 instead of the wheel of my Ford Fusion. It's a good setup that provides minimal distraction from the potholes and fender bending of … [Continue reading]

The next big thing for me – Dub Storage at Tegile


My long run of good luck just seems to keep on going. Case in point: on June 5th I was laid off from Quaddra and within two weeks I had signed on the dotted line as an Evangelist at Tegile. Woot!  I always liked their fundamentals and I am now … [Continue reading]

Rubrik in the clear: clusters, clouds and automation for data protection


When I posted about Rubrik back at the end of March, I didn't expect it to be Part 1 of a two-part blog. But that's blogging for you – and today Rubrik announced its product and a big B round of funding ($41M). I also know a lot more about Rubrik, … [Continue reading]

Will Rubrik’s time machine fix the mess of data protection?


A couple of weeks ago on the Speaking in Tech podcast, I had a déjà vu experience when the topic turned to a statement attributed to Microsoft that "backup software deserves to die." This came from an article by Simon Sharwood in The Register that quoted … [Continue reading]

Qumulo Emerges and Nudges the Awareness of Data Awareness


Last week was good for the storage universe, as Qumulo announced its data-aware NAS system, Qumulo Core. Qumulo stole a page from DataGravity's playbook by positioning Qumulo Core as data-aware storage, but the two products … [Continue reading]