The new world of DR: cloud-integration

When most people think about disaster recovery, they automatically assume it requires a complicated configuration with replicated data on redundant storage systems in two locations some distance apart. There are many details to pay attention to, including storage performance, network performance, available bandwidth and data growth. It costs a lot and takes a long time to implement.

But more and more customers are discovering that it doesn’t have to be that way. Just as cloud technology is changing how developers think about structuring and deploying applications, it is also changing the face of business continuity.

One of the biggest ways DR is changing with cloud technology is by removing the requirement for a separate DR site with all of its networking, storage and server equipment. Customers are starting to realize that backup and recovery data can instead be automatically stored with one or more cloud storage service providers, such as AWS, Azure, EMC Atmos, Google, HP, Nirvanix and Rackspace. Using the cloud for DR provides the following key benefits:

  1. Transfers infrastructure costs to cloud service providers
  2. Facilitates DR testing and validation
  3. Eliminates physical tapes and tape management
  4. Provides flexibility for the recovery location
  5. Centralizes DR storage from multiple sites, including ROBOs
  6. Improves RTO
  7. Enables recovery-in-cloud

StorSimple makes Cloud-integrated enterprise storage that does all of these things by automating data protection between on-premises storage and cloud storage services.

Transfer infrastructure costs

Equipment and resources for DR are costs with very little chance of generating a return on the investment. There is no point in owning resources such as storage, networking, servers, racks, power and cabling that you hope to never use. Clearly, the cloud mantra of paying only for what is used applies here. Don't overpay for insurance.

Facilitate testing

Of course everything has to work when you need it to. The interesting thing about cloud DR is that it is even easier to test and validate than traditional DR because it can be done without interrupting production systems. Many of our customers at StorSimple cite this as a very important benefit.

Eliminate tapes

One of the worst parts of any recovery operation is anything and everything involving tapes. Naming tapes, loading tapes, unloading tapes, moving tapes, retensioning tapes, copying tapes, deleting tapes, disposing of tapes, and all things tape-related. They aren't needed with cloud DR.

Recovery location flexibility

Cloud-based recovery can happen at any site with a reasonably good Internet connection. Moreover, it can happen at multiple sites, which means it is easier to make contingency plans for multiple-site complications as well as being able to spread the recovery load over more resources.

Centralize DR storage

Another aspect of location flexibility with DR is the ability for companies to store DR data in the cloud from many sites or remote branch offices (ROBOs). While each site or branch office will have a unique URL to store its data, access to this data is centralized in the cloud, where it can all be easily reached from a single Internet connection in the primary data center. In other words, the DR data from any ROBO can be instantly accessed at headquarters.

Improve RTO

The data that is downloaded to resume operations after a disaster can be limited to only what applications actually need – as opposed to downloading multiple tape images in full and restoring data from them. This can save weeks during a large-scale recovery. Data that is not needed immediately does not consume any bandwidth or other resources that would interfere with the restore process. This approach to DR uses a concept called "the working set", which is the collection of data that is actively being used by applications. Working-set-based DR is the most efficient way to recover data.
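The working-set idea can be sketched in a few lines. This is a hypothetical illustration, not StorSimple's actual implementation: given a catalog of backed-up items with last-access times, only recently used data is downloaded during recovery, and everything else stays in the cloud until an application asks for it.

```python
from datetime import datetime, timedelta

def working_set(catalog, window_days=30, now=None):
    """Return the items accessed within the window; cold data stays in the cloud."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    return [item for item in catalog if item["last_access"] >= cutoff]

# A toy catalog: one hot database, one ancient log archive.
catalog = [
    {"name": "orders.db", "last_access": datetime(2012, 5, 20)},
    {"name": "logs-2009", "last_access": datetime(2009, 1, 3)},
]

# Only orders.db is downloaded up front; logs-2009 is fetched on demand later.
hot = working_set(catalog, now=datetime(2012, 6, 1))
```

The win is that recovery time scales with the size of the working set rather than with the total amount of data under protection.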

Recovery in-cloud

Related to recovery flexibility is the ability to resume operations in the cloud by using cloud compute services. In this case, the DR data stays in the cloud, where it is accessed by cloud-resident applications. Application users connect to the application through a connection to their cloud service provider. The data that stays in the cloud needs to be presented to the application in its usual fashion – as a file share, for instance.

 

Joint webinar with Amazon Web Services

StorSimple and Amazon Web Services are holding a joint webinar this morning at 10:00 AM Pacific time for enterprise storage professionals who are looking for ways to leverage the cloud for storage and data management.

The combination of Amazon S3 storage and StorSimple Cloud-integrated enterprise storage helps customers solve some of their biggest storage problems such as data growth, backup management, expanding DR coverage, data archiving and successfully managing unstructured data.

Stelio D’Alo from AWS and Marc Farley from StorSimple will be presenting together, discussing their companies’ respective solutions and how enterprise customers can rely on them to be:

  • Secure with strong encryption of data in flight and at rest
  • Highly available, with engineered redundancy, replication and DR
  • Automated to free administrators from storage drudgery
  • Efficient, through deduplication and compression

Attendees will learn what cloud-integrated solutions exist and have the opportunity to pose questions to Marc and Stelio.

(This post is also on the StorSimple corporate blog here)

Crazy Larry’s Cloud Odyssey

I thought I’d put together a little montage of Oracle’s fearless leader Larry Ellison talking about cloud. It’s not that Larry has been wishy-washy – he’s never wishy-washy – but he has certainly changed his tune over the years as the business imperatives for cloud became more obvious to him.

When cloud started taking off, it was pretty clear it took Ellison by surprise. Now his company is playing catch-up, which is something he is comfortable with, and it wouldn’t surprise me to see Oracle make a few acquisitions, file a few lawsuits and pull whatever other late-to-the-party shenanigans they do. But will it work this time? Will people who feel trapped by their Oracle licenses want more of the same from their cloud operations?

Our fearless leader takes the stage tomorrow in New York with BMC Software

The Amazon Web Services Cloud Storage for the Enterprise event will take place in New York City tomorrow at the Hilton Hotel, New York. Our CEO Ursheet Parikh will be presenting at the mid-day keynote with a customer, BMC Software, about how they are using StorSimple to deal with their VM sprawl problems. It’s a terrific application of enterprise cloud storage to solve an enterprise-level problem that is common in development organizations.

Google+ Hangout #fail, but I got the peach!

Made a trip to Google HQ today in Mountain View, CA. A quest of sorts – trying to see if we could do a Google+ hangout on-air from the WiFi network on Google’s campus. It sort of worked. The video describes what happened.

If you want to see the video that was created from the hangout, it’s below. Even though our attempt to do something goofy from Google’s campus didn’t work today, I am extremely stoked by Google+ hangouts on air. I only wish they had a shorter name.

The cloud wants your junk data

What do you think about when you think about cloud?   A lot of people think of shiny, new technology made of all new APIs and hypervisors and mobile devices and cutting edge code and things that only the next generation will understand. And for a lot of cloud customers, that’s reality. New, new, new.

What you probably didn’t know, however, is that the storage side of cloud service providers’ businesses isn’t hung up on new. In fact, they are ecstatic about old. Old junk data that you would rather forget about, get out of your life and out of your data center. Data that you know you shouldn’t just delete because an attorney somewhere will ask for it. But data that’s taking up expensive tier 1 storage – the digital equivalent of engine sludge.

Cloud storage services want it – even if you end up deleting it later. It doesn’t matter to them. You might be thinking they just want to mine your data. Nope. They are perfectly fine storing encrypted data that they will never be able to read. To them, it’s all the same flavor of money at whatever the going rate is. They don’t care if the data was a lot bigger before it was deduped or compressed or whatever you have done to it to reduce the cost. Why should they care if you send them 100 GB of data that was originally 1 TB? They don’t.

It’s good business for them – they’ll even replicate it numerous times to prevent data loss. You might be thinking “but it’s garbage data, I’d never replicate it”. True, but if it’s garbage data, then why do you have so many backup copies of it on tape and possibly in other locations? Why are you managing garbage over and over again?

It’s a double win. They want it and you don’t. All you need is the equivalent of a pump to move it from your expensive tier 1 storage to their data storage services. There are a number of ways this can be done, including using products from StorSimple, the company I work for. A StorSimple system ranks data based on usage, compacts it, tags it (in metadata), encrypts it and migrates it to a storage tier in the cloud where it can be downloaded or deleted later if that’s what you decide to do with it. How much money do you think your company is wasting taking care of junk?
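The rank-compact-tag-encrypt-migrate sequence described above can be sketched as a simple pipeline. This is purely illustrative (not StorSimple code), and the XOR "encryption" is a stand-in for real AES-class encryption, used here only to keep the example self-contained:

```python
import zlib, hashlib, json

def tier_to_cloud(blob, name, key):
    """Illustrative tiering pipeline: compact, fingerprint, tag, 'encrypt'."""
    compressed = zlib.compress(blob)                  # compact the data
    digest = hashlib.sha256(compressed).hexdigest()   # fingerprint (usable for dedupe)
    metadata = json.dumps({"name": name, "sha256": digest,
                           "bytes": len(compressed)})  # tag in metadata
    # Toy XOR cipher for illustration only; a real system would use AES.
    encrypted = bytes(b ^ key[i % len(key)] for i, b in enumerate(compressed))
    return metadata, encrypted                        # ready to migrate to cloud

# Example: tier a block of cold data to the (imaginary) cloud bucket.
meta, payload = tier_to_cloud(b"cold project files" * 100, "archive-01", b"secret")
```

The point of the ordering matters: compressing before encrypting is essential, because well-encrypted data looks random and will not compress afterward.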

Doing their very best to stall cloud storage

The annual trip to storage Mecca occurred this week with EMC World in Las Vegas and from what I could tell from 500 miles away in San Jose, things went according to plan. Lots of people came from all over the world to see what the biggest company in the storage industry had to say.

EMC made a slew of announcements across the breadth of their product lines and I’m sure if I had been there I would have had a difficult time keeping my eyes open for all of them – especially the Documentum news. Something about sitting in a dark room listening to tech marketing lingo without sufficient access to stimulants. This isn’t a comment about EMC, by the way – it’s a trade show disease I’ve developed over the years.

Not surprisingly, the Cloud was a big theme.  EMC announced they were going to make improvements to Atmos to make it more acceptable as a cloud storage platform.  Every product needs development to stay competitive and Atmos is a fine example of a product that needs improving.  It will be interesting to see if it gets what it needs to keep up with the leading cloud storage services like Amazon’s S3 and Microsoft’s Windows Azure Storage.  I don’t want to rule Atmos out quite yet, but so much of what makes cloud storage services work is the environment they run in and nobody is keeping up with the cloud data center technology being developed by Amazon, Google and Microsoft.

Mike Fratto from Network Computing reporting on Joe Tucci’s keynote wrote:

  “There will be hundreds of thousands of private clouds, and thousands of public clouds. The future of cloud is an and, not an or,” declared Joe Tucci, EMC CEO, during his keynote. Tucci wants that cloud to be EMC-powered.

It is certainly true that Tucci and EMC want the cloud to be EMC-powered, but it’s not going to work that way. There will not be hundreds of thousands of private clouds; there might be a few thousand. There will not be thousands of public clouds; there might be a few hundred. And most of them won’t use EMC storage products, because they will build their own storage as Amazon, Google and Microsoft do today. The fact is, the cloud companies building large-scale cloud data centers are rolling their own storage nodes that cost a lot less than anything they can buy from infrastructure vendors – and that doesn’t make guys like Joe Tucci very happy. It’s not surprising they want private clouds to succeed, but that’s just wishful thinking.

The problem is that cloud service providers (CSPs) are making it very easy to start development projects with a minimal investment. You could start a project today using an existing flexible and scalable cloud infrastructure or spend a lot of money to build a private cloud with 20% of the functionality and get it 12 months from now.  As corporate leaders wake up to that reality, the vendor pipe dreams for private clouds are going to blow away. Spending far less to get much more today will always be a better option than investing a lot more to get a lot less a year from now.

But what about the data?  This is where there will be a lot of teeth gnashing. Traditional infrastructure storage vendors don’t want data moving to the cloud because it means customers will no longer need as many arrays, dedupe systems, management software and anything else they sell today. That’s a scary thing. The irony of the situation though is that the CSPs tend to think that corporate data will just appear somehow like magic, which is an obvious recipe for failure on their part. Meanwhile customers, who will have the final say about how it all turns out, need to be assured that their data will be safe and secure in the cloud – and that they will be able to change their cloud service providers if they want to. The paranoid notion that data is safer in a corporate data center is not well-founded and there are likely going to be many advantages to putting data in a cloud that has the best security technology and practices. Data portability between cloud service providers is not hard if you have the tools to access it and move it.

StorSimple, the company I work for, is part of a small number of startups that are working on technology to move data between corporate data centers and cloud storage for archive, backup and applications running in cloud-compute environments. It’s really all about data management between the data center and the cloud. It’s called cloud-integrated enterprise storage, but the real focus is on the data and the things we can do with it to make cloud storage as affordable, flexible, safe and secure as possible.

I don’t think it’s going to turn out like iSCSI did, where the enterprise storage industry was able to position the upstart technology as inadequate for most of their customers and maintain higher margins in the process. This time, CSPs are the competition and they are changing the way the world thinks about computing. Traditional infrastructure vendors won’t control the final outcome because they don’t control the cloud, just as they don’t control the mobile devices that are accessing the cloud a zillion times a day. I’d like to say that traditional storage infrastructure vendors will have a long slow decline but technology transitions tend to be swift and cruel.  It mostly depends on the motivation of the CSPs and whether they can get their act together to make it happen.

Are you feeling lucky, or just confident?

Chris Mellor wrote an article for The Register yesterday on cloud storage. At the end of it all, Chris misappropriated the famous soliloquy from the movie Dirty Harry:

“Being this is a .44 Magnum, the most powerful cloud storage service in the world, and would blow your SAN head clean off, you’ve got to ask yourself one question: ‘Do I feel lucky?’ Well, do ya, punk?”

For those unfamiliar with the movie, the context here is that a violent detective (Dirty Harry) has caught a psychotic serial killer and asks him the ultimate question about his fate.  Tension builds with the realization that Harry is asking himself the same question because he is unsure if there are any bullets left in his gun.  He obviously wants to find out, but struggles with a good cop vs evil cop dichotomy. He needs his psychopathic adversary to make the first move, but he seems awfully confident.

It doesn’t have much to do with cloud storage, other than suggesting the question of fate – something that storage administrators think about with regards to data more often than they think about their own.

So what is the fate of data stored in the cloud, and what steps do cloud service providers take to reassure customers that their data is safe? You can’t plan for everything, but you can plan to cover an awful lot of the mayhem that can occur.

For starters, you can store data in multiple locations to protect against being unable to access data from a single cloud site. As Chris’ article pointed out, StorSimple allows customers to do that. They can store data in discrete regions run by a single service provider, or they can store data in cloud data centers run by different cloud service providers. Different customers will have different comfort levels where cloud redundancy is concerned.

But it’s important to know that cloud storage service providers already store data in multiple locations anyway to protect against an outage at a single site that could cause a data loss. Data in the cloud is typically stored multiple times at the site where it is first uploaded and then stored again at other sites in the cloud service provider’s network.  Customers who are concerned about the fate of their data should discuss how this is done with the storage service providers they are considering because they are all a little different.
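The multi-location idea is easy to picture in code. Here is a minimal, hypothetical sketch (the `Region` class is an invented stand-in for a provider's object store, not any real API): the same object is written to more than one region, and each copy is verified by checksum before the write is considered durable.

```python
import hashlib

class Region:
    """Invented stand-in for one cloud region's object store (illustration only)."""
    def __init__(self, name):
        self.name, self.objects = name, {}
    def put(self, key, data):
        self.objects[key] = data
    def checksum(self, key):
        return hashlib.md5(self.objects[key]).hexdigest()

def replicate(key, data, regions):
    """Write the object everywhere, then confirm every copy landed intact."""
    expected = hashlib.md5(data).hexdigest()
    for region in regions:
        region.put(key, data)
    return all(region.checksum(key) == expected for region in regions)

# Two copies in two regions: losing one site does not lose the data.
us, eu = Region("us-east"), Region("eu-west")
ok = replicate("backup-2012-06", b"snapshot bytes", [us, eu])
```

Real providers do this replication internally as well, which is why the conversation with a prospective provider should cover how many copies are kept and where.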

There is an awful lot of technology that has gone into cloud storage. We tend to think of it like a giant disk drive in the sky, but that is only the easiest way to think about it. Cloud storage – especially object storage of the kind StorSimple uses, based on RESTful protocols – has been amazingly reliable. There have been problems with other aspects of the cloud, including block storage, but object storage has been rock solid. It’s not really about feeling lucky, as Dirty (Chris) Harry suggested; it’s about the scalable and resilient architectures that have been built.

We would love to talk to you about cloud storage and how you can start using it. If you have a cloud service provider in mind, we are probably already working with them.

HP finds new gravity with their Cloud Services

There is certainly nothing unexpected about HP’s Cloud Services announcement today, considering they have been very open with their private cloud beta program. HP and cloud computing are almost a perfect match: the cloud has enormous business potential and demands immense scalability. HP is a huge business and needs to get into large growth businesses. The company has a lot of resources and a lot of talent – the only question is whether they have the flexibility to change with the rest of the industry as it goes through the inevitable fits and starts along the way.

StorSimple is very happy to be a part of the announcement. We’ve been working with HP for some time on our Cloud Services integration and they have been a good development partner. Speaking of partners, HP appears to have done an excellent job lining up a long list of them for HP Cloud Services. Their involvement in the OpenStack Foundation looks very solid. One of the things HP does very well is engage with industry groups to build an ecosystem of technology players. The fact that HP has rolled up its sleeves and joined the OpenStack community is important to everybody involved with the movement because of the visibility and resources they bring.

It’s always tempting to compare different vendors and predict which will have the most success, but that misses the point where HP is concerned. HP is probably less concerned about their cloud competitors right now than they are about getting satisfied customers using their Cloud Services. They are not rushing into cloud; instead, they are working with customers to figure out what they need to do to make them happy. If cloud computing someday has an adverse impact on their ability to sell servers, storage and networking products, they will want to offset any decreases with increases in HP Cloud Services.

Will HP eat their own dog food when it comes to cloud?

It will be interesting to see whether or not HP finds ways to use HP Cloud Services for their internal needs. They have a history of using their own products internally, and one would expect that they would do the same with their Cloud Services. I know from being a recent employee that they also tend to discourage shadow IT and BYOD because doing so is easier to manage and costs less. That was part of Mark Hurd’s legacy that was not very popular inside the company. People who work with technology expect to have the freedom to do the things they believe are necessary to do their jobs. Simply by being in the cloud business, HP is sending a message to its approximately 320,000 employees that doing things in a cloud-like way is good. For all my friends at HP, if you read this – pay attention to what Cloud Services is doing and look for ways to make it work for you.

Integration with StorSimple

StorSimple’s Cloud-integrated Enterprise Storage systems work with HP Cloud Services the same as they do with other cloud service providers.  Data that is stored on-premises in a StorSimple system is first reduced in size by deduplication and compression before it is encrypted and sent to the cloud.  The cloud functions as a repository for backup and archive data as well as an online disaster recovery facility.  The cloud also acts as a “cold” tier for data that is no longer being accessed by the system. The working set stays on-premises and inactive data is moved to the cloud tier.  This means that restore operations from the cloud with StorSimple proceed much faster than other cloud backup systems because StorSimple only requires that the working set of data be downloaded.

To find out more about StorSimple and its products, click here

A customer’s journey to real-world cloud storage

Dan Streufert, the IT Director for MedPlast, recently presented at the SNW Spring conference about his experience with a StorSimple enterprise cloud storage solution. In this video he talks a lot about their requirements and what they were looking for cloud storage to do for them. Dan’s approach to running IT is based on leveraging cloud services as much as possible; as he says, “It lets us keep pretty lean and focused on what our business does best, which is making medical products, not IT.”