Our fearless leader takes the stage tomorrow in New York with BMC Software

The Amazon Web Services Cloud Storage for the Enterprise event takes place tomorrow at the Hilton Hotel in New York City. Our CEO Ursheet Parikh will present the mid-day keynote with a customer, BMC Software, about how they are using StorSimple to deal with their VM sprawl problems. It’s a terrific application of enterprise cloud storage to solve an enterprise-level problem that is common in development organizations.

The cloud wants your junk data

What do you think about when you think about cloud? A lot of people picture shiny, new technology: all new APIs and hypervisors and mobile devices and cutting-edge code and things that only the next generation will understand. And for a lot of cloud customers, that’s reality. New, new, new.

What you probably didn’t know, however, is that the storage side of the cloud service provider business isn’t hung up on new. In fact, providers are ecstatic about old. Old junk data that you would rather forget about, get out of your life and out of your data center. Data that you know you shouldn’t just delete because an attorney somewhere will ask for it. But data that’s taking up expensive tier 1 storage, the digital equivalent of engine sludge.

Cloud storage services want it – even if you end up deleting it later. It doesn’t matter to them. You might be thinking they just want to mine your data. Nope. They are perfectly fine storing encrypted data that they will never be able to read. To them, it’s all the same flavor of money at whatever the going rate is. They don’t care if the data was a lot bigger before it was deduped or compressed or whatever you have done to it to reduce the cost. Why should they care if you send them 100 GB of data that was originally 1 TB? They don’t.

It’s good business for them – they’ll even replicate it numerous times to prevent data loss. You might be thinking “but it’s garbage data, I’d never replicate it”. True, but if it’s garbage data, then why do you have so many backup copies of it on tape and possibly in other locations? Why are you managing garbage over and over again?

It’s a double win. They want it and you don’t. All you need is the equivalent of a pump to move it from your expensive tier 1 storage to their data storage services. There are a number of ways this can be done, including using products from StorSimple, the company I work for. A StorSimple system ranks data based on usage, compacts it, tags it (in metadata), encrypts it and migrates it to a storage tier in the cloud where it can be downloaded or deleted later if that’s what you decide to do with it. How much money do you think your company is wasting taking care of junk?
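To make the “pump” idea concrete, here is a minimal sketch of a usage-based migration pipeline like the one described above: rank blocks by how recently they were used, then compact and tag the coldest ones before sending them to a cloud tier. This is illustrative only, not StorSimple’s actual algorithm; the block format, 30-day threshold, and function names are assumptions, and encryption is deliberately stubbed out.

```python
import zlib

# Hypothetical threshold: blocks idle longer than this are "cold" (assumption).
COLD_AFTER_SECONDS = 30 * 24 * 3600

def rank_blocks(blocks):
    """Sort blocks coldest-first by last access time."""
    return sorted(blocks, key=lambda b: b["last_access"])

def prepare_for_cloud(block):
    """Compact the block and tag it with metadata for later retrieval."""
    compacted = zlib.compress(block["data"])
    meta = {
        "block_id": block["id"],
        "original_size": len(block["data"]),
        "stored_size": len(compacted),
    }
    # Encryption would happen here before upload; omitted in this sketch.
    return meta, compacted

def migrate_cold_blocks(blocks, now):
    """Return (metadata, payload) pairs for every block cold enough to migrate."""
    migrated = []
    for block in rank_blocks(blocks):
        if now - block["last_access"] > COLD_AFTER_SECONDS:
            migrated.append(prepare_for_cloud(block))
    return migrated
```

The metadata tag travels with the data so the system can later locate, download, or delete the block without reading its (encrypted) contents.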

Speed excites, but how about cool, tight storage?

In 2011, Fusion-io disrupted the enterprise storage industry with their high-performance PCIe flash memory cards. EMC responded this week by announcing its own VFCache product. Suddenly there is a hotly contested race, and it’s up to the rest of the industry to respond.

I love going fast on bikes and skis, but fast is a relative thing. There are always people who will go a lot faster than me, but I don’t need to keep up with them to be happy. The same is true for driving – I drive a 4-cylinder Ford Fusion because I love Sync and I don’t care if it’s not fast.

Cold will be hot

The “good enough for me” principle works for storage too. Most companies have a lot of data that doesn’t need high performance I/O. Server flash products address the hottest data a company has, but what about all the “cool” or “cold” data that is infrequently or never accessed? It typically occupies storage systems that were built to compete based on performance. Even the lowest-performing tiers of most enterprise storage systems significantly over-serve customers by providing service levels far beyond what is needed.  At some point another industry disruption will occur as new products emerge that significantly reduce the cost of storing cool data.

A difficult problem for storage administrators is that there is no way to separate cool data that will be accessed again in the future from cold data that will never be accessed again. One approach is to archive cool data to tape, but the delays and difficulties in locating and restoring cool data when it reheats are not all that comforting. Another approach is to send cool data to an online cloud storage tier provided by enterprise vendors such as Microsoft, Amazon, Rackspace, AT&T, and HP. Cool data in the cloud that reheats is transparently moved back to a local, warmer tier until it cools off again. Data stored in a cloud tier does not require the power, cooling and footprint overhead of data stored in the corporate data center storage and it also reduces the cost and impact of storage system end-of-life events.
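The transparent “reheat” behavior described above can be sketched as a simple two-tier store: reads of cloud-tiered data pull it back to the local tier automatically. This is a toy model under assumed names, not any vendor’s actual design.

```python
class TieredStore:
    """Toy two-tier store: a warm local tier and a cool cloud tier (sketch)."""

    def __init__(self):
        self.local = {}   # warm tier: fast, expensive
        self.cloud = {}   # cool tier: slow, cheap

    def write(self, key, value):
        # New data always lands in the local tier.
        self.local[key] = value

    def demote_cool(self, keys):
        """Move data that has cooled off out to the cloud tier."""
        for key in keys:
            self.cloud[key] = self.local.pop(key)

    def read(self, key):
        if key in self.local:
            return self.local[key]
        # Reheat: transparently move the data back to the local tier,
        # where it stays until it cools off again.
        value = self.cloud.pop(key)
        self.local[key] = value
        return value
```

The caller never sees which tier served the read; that transparency is what makes a cloud tier usable for cool data that might still reheat.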

"Tight" storage looks good

But cloud storage tiers are not the whole answer. Customers will want to ensure that cool/cold data doesn’t consume any unnecessary storage capacity. Cloud storage products that incorporate the various forms of data reduction technologies such as thin provisioning, deduplication and compression will provide the “tightest” fit for this data by running at capacity utilizations that are unheard of with primary enterprise storage today. In addition to saving customers on storage costs, these products will also increase the return on investment by saving on the bandwidth and transaction fees that some cloud service providers charge. Keeping a tight grip on storage expenses will become synonymous with using the tightest, most efficient cloud-integrated storage systems.
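Of the data reduction techniques mentioned above, deduplication is easy to illustrate: identical chunks are stored once, and each file keeps only a lightweight reference to the shared copy. A minimal content-addressed sketch (the chunking scheme and sample data are assumptions for illustration):

```python
import hashlib

def dedupe(chunks):
    """Store each unique chunk once; return the store plus per-chunk references."""
    store = {}   # digest -> unique chunk data
    refs = []    # one reference per logical chunk, in original order
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # keep only the first copy
        refs.append(digest)
    return store, refs

# Hypothetical workload: 50 cloned VM images plus one unique chunk.
chunks = [b"boot image"] * 50 + [b"user data"]
store, refs = dedupe(chunks)
logical = sum(len(c) for c in chunks)            # capacity before dedup
physical = sum(len(c) for c in store.values())   # capacity actually stored
```

Because the VM clones share one stored copy, physical capacity is a small fraction of logical capacity, which is exactly why dedup pays off twice in the cloud: less capacity billed, and less bandwidth consumed on upload.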