Oracle buys Xsigo to boost Sun ROI

 

The announcement this week of Oracle acquiring Xsigo did one thing: it put the focus of enterprise computing back onto Oracle for a brief moment. A number of excellent posts were written about it.

The dollar amount was undisclosed; Chris Mellor from The Register figures it was around $800 million, but I’m not so sure. Xsigo didn’t have many customers, which suggests a financial crisis was looming and the company needed to get the best deal it could before running out of time.

The acquisition is more opportunistic than strategic. Assuming Xsigo was looking for an acquirer, Oracle may have felt it needed to act early rather than miss out on something it would regret later. That would explain the confused messaging around the acquisition, with some calling it an SDN play and others saying it was part of Oracle’s cloud strategy. I can assure you that Larry Ellison laughed out loud reading comments about this being a cloud play.

Oracle couldn’t care less about Xsigo’s customers – it already has enough of its own. It’s all about Oracle now. Xsigo’s products will be discontinued, and Xsigo employees not involved with technology development will be let go.

As for this acquisition being a cloud play – it’s not. There are still things that happen in this industry that have nothing to do with the cloud. As others have written, Xsigo technology would most likely be used with Oracle’s Exasystems as a virtual data center fabric component. If Oracle wants to sell more Exasystems, it needs to make them more flexible and easier to manage, and Xsigo can help do that. The problem is that most people building clouds are doing it with technologies other than Oracle’s expensive Exasystems.

So the cloud is not the opportunity; data centers are. The ROI from this acquisition depends completely on Oracle’s ability to sell more Exasystem products to people running data centers. Xsigo’s role is to help generate a faster ROI from the Sun acquisition. Maybe Oracle will break even on Sun, but it has a long way to go. Nonetheless, this is something Oracle is extremely good at – selling high-margin products to large, captive, mature markets.

Where the cloud is concerned, Oracle needs to figure out what to do about Hadoop, the open source compute/storage technology built on work pioneered by Google, set free by Yahoo, and now available in several competing distributions. Hadoop is the technology of choice for most IT professionals working on Big Data analytics applications, and it is highly unlikely there is much Oracle can do to compete with it. Cloud customers can create huge Hadoop applications on Amazon and Google today for far less money than they can build out an Exasystem data center. Hadoop won’t be used for every application in the cloud, but it’s going to be used for a lot of things people have associated with traditional databases over the years. People wondering about Oracle’s business in the cloud are advised to watch the Hadoop space and any moves Oracle makes related to it.

The new world of DR: cloud-integration

When most people think about disaster recovery they automatically assume it requires a complicated configuration with replicated data on redundant storage systems in two locations some distance apart from each other.  There are many details to pay attention to, including storage performance, network performance, available bandwidth and data growth. It costs a lot and takes a long time to implement.

But more and more customers are discovering that it doesn’t have to be that way. Just as cloud technology is changing how developers think about structuring and deploying applications, it is also changing the face of business continuity.

One of the biggest ways DR is changing with cloud technology is by removing the requirement for a separate DR site with all the networking, storage and server equipment. Customers are starting to realize instead that backup and recovery data can be automatically stored at one or more cloud storage service providers, such as AWS, Azure, EMC/ATMOS, Google, HP, Nirvanix and Rackspace. Using the cloud for DR provides the following key benefits:

  1. Transfers infrastructure costs to cloud service providers
  2. Facilitates DR testing and validation
  3. Eliminates physical tapes and tape management
  4. Provides flexibility for the recovery location
  5. Centralizes DR storage from multiple sites, including ROBOs
  6. Improves RTO
  7. Enables recovery-in-cloud

StorSimple makes Cloud-integrated enterprise storage that does all of these things by automating data protection between on-premises storage and cloud storage services.
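
For readers who like to see the idea rather than just read about it, here is a minimal sketch of the concept – not StorSimple’s implementation – that pushes a nightly backup image to a cloud object store (AWS S3 via the boto3 library). The bucket and file names are made up for illustration.

```python
import datetime
import boto3

s3 = boto3.client("s3")

backup_file = "/backups/exchange-2012-08-01.img"   # hypothetical local backup image
bucket = "example-dr-bucket"                        # hypothetical bucket name

# Put each night's backup in its own dated object in the cloud instead of
# shipping it to a second physical site.
key = "dr/{}/exchange.img".format(datetime.date.today().isoformat())
s3.upload_file(backup_file, bucket, key)
print("DR copy stored at s3://{}/{}".format(bucket, key))
```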

Transfer infrastructure costs

Equipment and resources for DR have costs with a very small chance of generating a return on the investment. There is no point in owning resources such as storage, networking, servers, racks, power and cabling that you hope to never use. Clearly, the cloud mantra of paying only for what is used applies here.  Don’t overpay for insurance.

Facilitate testing

Of course everything has to work when you need it to. The interesting thing about cloud DR is that it is even easier to test and validate than traditional DR because it can be done without interrupting production systems. Many of our customers at StorSimple cite this as a very important benefit.

Eliminate tapes

One of the worst parts of any recovery operation is anything and everything involving tapes. Naming tapes, loading tapes, unloading tapes, moving tapes, retensioning tapes, copying tapes, deleting tapes, disposing of tapes, and all things tape-related. They aren’t needed with cloud DR.

Recovery location flexibility

Cloud-based recovery can happen at any site with a reasonably good Internet connection. Moreover, it can happen at multiple sites, which means it is easier to make contingency plans for multiple-site complications as well as being able to spread the recovery load over more resources.

Centralize DR storage

Another aspect of location flexibility with DR is the ability for companies to store DR data in the cloud from many sites or remote branch offices (ROBOs). While each site or branch office has a unique URL for storing its data, all of that data is centralized in the cloud, where it can be reached over a single Internet connection from the primary data center. In other words, the DR data from any ROBO can be instantly accessed at headquarters.
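
To picture what centralized access means in practice, here is a hypothetical sketch that enumerates each branch office’s DR data from the primary data center. The bucket name and prefixes are invented for the example; a real deployment would use whatever naming its storage system assigns.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-dr-bucket"                                          # hypothetical central DR bucket
branch_offices = ["robo-chicago/", "robo-denver/", "robo-austin/"]    # made-up ROBO prefixes

for prefix in branch_offices:
    # First page of results only, to keep the example short.
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    objects = resp.get("Contents", [])
    total_bytes = sum(obj["Size"] for obj in objects)
    print("{}: {} objects, {:.1f} GiB of DR data".format(prefix, len(objects), total_bytes / 2**30))
```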

Improve RTO

The data that is needed to resume operations after a disaster can be limited to only the data that is needed by applications – as opposed to downloading multiple tape images in full and restoring data from them. This can save weeks during a large-scale recovery. Data that is not needed immediately does not consume any bandwidth or other resources that would interfere with the restore process. This approach to DR uses a concept called the “working set”, which is the collection of data actively being used by applications. Working-set-based DR is the most efficient way to recover data.
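
Here is a rough illustration of working-set-first recovery, with an invented manifest standing in for whatever metadata a real system keeps about which data applications have touched recently. It is a sketch of the concept, not how any particular product does it.

```python
import datetime
import boto3

s3 = boto3.client("s3")
bucket = "example-dr-bucket"    # hypothetical

# Hypothetical manifest: object key -> when an application last touched it.
manifest = {
    "db/orders.mdf":     datetime.date(2012, 7, 30),
    "db/orders_log.ldf": datetime.date(2012, 7, 30),
    "archive/2009.zip":  datetime.date(2010, 1, 4),
}

cutoff = datetime.date(2012, 7, 1)
working_set = [key for key, last_used in manifest.items() if last_used >= cutoff]
cold_data = [key for key in manifest if key not in working_set]

# Pull the working set down immediately so applications can restart...
for key in working_set:
    s3.download_file(bucket, key, "/restore/" + key.replace("/", "_"))

# ...and leave everything else in the cloud to be fetched on demand.
print("{} hot objects restored, {} cold objects deferred".format(len(working_set), len(cold_data)))
```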

Recovery in-cloud

Related to recovery flexibility is the ability to resume operations in the cloud by using cloud compute services. In this case, the DR data stays in the cloud, where it is accessed by cloud-resident applications. Application users connect to the application through a connection to their cloud service provider. The data that stays in the cloud needs to be presented to the application in its usual fashion – as a file share, for instance.

 

Executives of the Round Table

The news of top executive role shifts at EMC & VMware and the accompanying rumors of a new cloud-oriented spinoff  have certainly spiced up the summer. One thing is abundantly clear – EMC and VMware should be thought of as the same corporate entity and there is no point in pretending otherwise. It is also clear that the cloud is the irresistible force in this story.  The new roles for both Mr. Gelsinger and Mr. Maritz were described in the context of cloud:

“Pat will now lead Cloud Infrastructure at VMware, and Paul will look across our technology strategy with a particular focus on Big Data and the next generation of cloud-oriented applications that will be built on top of these foundations.”

It doesn’t really matter what VMware’s legal structure is; it operates as a part of EMC. The swapping of logos on Gelsinger’s and Maritz’s business cards is powerful evidence. The two companies are co-managed and will continue to be co-managed and cross-pollinated for many years. Employees moving up the ranks of either company will see opportunities coming from both, which will make it easier to retain top talent. M&A activity will continue at both companies, with the question of which one ends up owning a given technology settled later. The ability of EMC/VMware to acquire and develop technologies in different ways and in parallel could turn out to be a huge advantage for successful technology and corporate integration.

It’s also clear that EMC is not really a storage company anymore; it is a data infrastructure company that designs and sells both hardware and software. The EMC brand implies a storage focus, while the VMware brand implies a compute focus. Together they are twin gorillas of infrastructure that will drive their business results independently. They seem to have found an optimal way to manage through the market swings that inevitably come with a paradigm shift. It wasn’t part of the plan when EMC bought VMware, but it certainly looks like genius today.

The rumored spinout of a new company to address platform opportunities with Cloud Foundry and Greenplum will be interesting to watch. Some think Mr. Maritz will be named to run the company because his new job description specifically mentioned Big Data and cloud-oriented applications. But I’m not so sure. His tenure at VMware was immensely successful: he vastly expanded VMware’s opportunities by creating a platform software business that meshes perfectly with EMC’s core business. I suspect the company will reward him by placing him in an over-arching strategic planning role for the entire EMC universe, something he is uniquely suited for, rather than building a new spinout from the ground up. There are other operations-oriented execs in the company who can do that. Why wouldn’t EMC/VMware want to show off more of their talent pool? And with that thought, seeing how many things have changed at EMC over the last decade, they might even elevate somebody who isn’t a Caucasian male. That would be a switch.

Joint webinar with Amazon Web Services

StorSimple and Amazon Web Services are holding a joint webinar this morning at 10:00 AM Pacific time for enterprise storage professionals who are looking for ways to leverage the cloud for storage and data management.

The combination of Amazon S3 storage and StorSimple Cloud-integrated enterprise storage helps customers solve some of their biggest storage problems such as data growth, backup management, expanding DR coverage, data archiving and successfully managing unstructured data.

Stelio D’Alo from AWS and Marc Farley from StorSimple will be presenting together, discussing their companies’ respective solutions and how enterprise customers can rely on them to be:

  • Secure, with strong encryption of data in flight and at rest (a rough sketch of the idea follows this list)
  • Highly available, with engineered redundancy, replication and DR
  • Automated to free administrators from storage drudgery
  • Efficient, through deduplication and compression
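
As a rough sketch of what the encryption bullet can look like from the client side – an illustration under assumed names, not the actual StorSimple or AWS implementation – data can be encrypted before it ever leaves the building and then sent to the cloud over HTTPS:

```python
import boto3
from cryptography.fernet import Fernet   # pip install cryptography

# In practice the key lives in a key-management system, not in the script.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("/backups/payroll.bak", "rb") as f:     # hypothetical backup file
    ciphertext = cipher.encrypt(f.read())         # encrypted at rest before it leaves the site

s3 = boto3.client("s3")                           # boto3 talks HTTPS, so the transfer is encrypted in flight
s3.put_object(Bucket="example-backup-bucket",
              Key="backups/payroll.bak.enc",
              Body=ciphertext)
```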

Attendees will learn what cloud-integrated solutions exist and  have the opportunity to pose questions to Marc and Stelio.

(This post is also on the StorSimple corporate blog here.)

Doing their very best to stall cloud storage

The annual trip to storage Mecca occurred this week with EMC World in Las Vegas and from what I could tell from 500 miles away in San Jose, things went according to plan. Lots of people came from all over the world to see what the biggest company in the storage industry had to say.

EMC made a slew of announcements across the breadth of their product lines and I’m sure if I had been there I would have had a difficult time keeping my eyes open for all of them – especially the Documentum news. Something about sitting in a dark room listening to tech marketing lingo without sufficient access to stimulants. This isn’t a comment about EMC, by the way – it’s a trade show disease I’ve developed over the years.

Not surprisingly, the Cloud was a big theme.  EMC announced they were going to make improvements to Atmos to make it more acceptable as a cloud storage platform.  Every product needs development to stay competitive and Atmos is a fine example of a product that needs improving.  It will be interesting to see if it gets what it needs to keep up with the leading cloud storage services like Amazon’s S3 and Microsoft’s Windows Azure Storage.  I don’t want to rule Atmos out quite yet, but so much of what makes cloud storage services work is the environment they run in and nobody is keeping up with the cloud data center technology being developed by Amazon, Google and Microsoft.

Mike Fratto of Network Computing, reporting on Joe Tucci’s keynote, wrote:

  “There will be hundreds of thousands of private clouds, and thousands of public clouds. The future of cloud is an and, not an or,” declared Joe Tucci, EMC CEO, during his keynote. Tucci wants that cloud to be EMC-powered.

It is certainly true that Tucci and EMC want the cloud to be EMC-powered, but it’s not going to work that way. There will not be hundreds of thousands of private clouds; there might be a few thousand. There will not be thousands of public clouds; there might be a few hundred – and most of them won’t use EMC storage products, because they will build their own storage the way Amazon, Google and Microsoft do today. The fact is, the cloud companies building large-scale cloud data centers are rolling their own storage nodes that cost a lot less than anything they can buy from infrastructure vendors – and that doesn’t make guys like Joe Tucci very happy. It’s not surprising they want private clouds to succeed, but that’s just wishful thinking.

The problem is that cloud service providers (CSPs) are making it very easy to start development projects with a minimal investment. You could start a project today using an existing flexible and scalable cloud infrastructure or spend a lot of money to build a private cloud with 20% of the functionality and get it 12 months from now.  As corporate leaders wake up to that reality, the vendor pipe dreams for private clouds are going to blow away. Spending far less to get much more today will always be a better option than investing a lot more to get a lot less a year from now.

But what about the data?  This is where there will be a lot of teeth gnashing. Traditional infrastructure storage vendors don’t want data moving to the cloud because it means customers will no longer need as many arrays, dedupe systems, management software and anything else they sell today. That’s a scary thing. The irony of the situation though is that the CSPs tend to think that corporate data will just appear somehow like magic, which is an obvious recipe for failure on their part. Meanwhile customers, who will have the final say about how it all turns out, need to be assured that their data will be safe and secure in the cloud – and that they will be able to change their cloud service providers if they want to. The paranoid notion that data is safer in a corporate data center is not well-founded and there are likely going to be many advantages to putting data in a cloud that has the best security technology and practices. Data portability between cloud service providers is not hard if you have the tools to access it and move it.

StorSimple, the company I work for, is one of a small number of startups working on technology to move data between corporate data centers and cloud storage for archive, backup and applications running in cloud-compute environments. It’s really all about data management between the data center and the cloud. It’s called cloud-integrated enterprise storage, but the real focus is on the data and the things we can do with it to make cloud storage as affordable, flexible, safe and secure as possible.

I don’t think it’s going to turn out like iSCSI did, where the enterprise storage industry was able to position the upstart technology as inadequate for most of their customers and maintain higher margins in the process. This time, CSPs are the competition and they are changing the way the world thinks about computing. Traditional infrastructure vendors won’t control the final outcome because they don’t control the cloud, just as they don’t control the mobile devices that are accessing the cloud a zillion times a day. I’d like to say that traditional storage infrastructure vendors will have a long slow decline but technology transitions tend to be swift and cruel.  It mostly depends on the motivation of the CSPs and whether they can get their act together to make it happen.

Are you feeling lucky, or just confident?

Chris Mellor wrote an article for The Register yesterday on cloud storage. At the end of it all, Chris misappropriated the famous monologue from the movie Dirty Harry:

“Being this is a .44 Magnum, the most powerful cloud storage service in the world, and would blow your SAN head clean off, you’ve got to ask yourself one question: ‘Do I feel lucky?’ Well, do ya, punk?”

For those unfamiliar with the movie, the context here is that a violent detective (Dirty Harry) has caught a psychotic serial killer and asks him the ultimate question about his fate.  Tension builds with the realization that Harry is asking himself the same question because he is unsure if there are any bullets left in his gun.  He obviously wants to find out, but struggles with a good cop vs evil cop dichotomy. He needs his psychopathic adversary to make the first move, but he seems awfully confident.

It doesn’t have much to do with cloud storage, other than suggesting the question of fate – something that storage administrators think about more often with regard to their data than their own.

So what is the fate of data stored in the cloud, and what steps do cloud service providers take to reassure customers that their data is safe? You can’t plan for everything, but you can plan to cover an awful lot of the mayhem that can occur.

For starters, you can store data in multiple locations to protect against being unable to access data from a single cloud site. As Chris’ article pointed out, StorSimple allows customers to do that. They can store data in discrete regions run by a single service provider, or they can store data in cloud data centers run by different cloud service providers. Different customers will have different comfort levels where cloud redundancy is concerned.
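
As a toy example of what multi-site redundancy looks like from the customer side – the names and regions are hypothetical, and this is not a description of StorSimple’s mechanism – the same DR object can simply be written to buckets in two different regions, or to two different providers:

```python
import boto3

west = boto3.client("s3", region_name="us-west-2")
east = boto3.client("s3", region_name="us-east-1")
# A second provider with an S3-compatible API could be added the same way, e.g.:
#   other = boto3.client("s3", endpoint_url="https://objects.example-provider.com")

with open("/backups/finance.img", "rb") as f:     # hypothetical backup image
    data = f.read()

# Write the same object to buckets in two different regions so the loss of
# one site can't take the DR copy with it.
for client, bucket in [(west, "example-dr-west"), (east, "example-dr-east")]:
    client.put_object(Bucket=bucket, Key="dr/finance.img", Body=data)
```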

But it’s important to know that cloud storage service providers already store data in multiple locations anyway to protect against an outage at a single site that could cause a data loss. Data in the cloud is typically stored multiple times at the site where it is first uploaded and then stored again at other sites in the cloud service provider’s network.  Customers who are concerned about the fate of their data should discuss how this is done with the storage service providers they are considering because they are all a little different.

There is an awful lot of technology that has gone into cloud storage. We tend to think of it like a giant disk drive in the sky, but that is only the easiest way to think about it. Cloud storage – especially object storage of the kind StorSimple uses, based on RESTful protocols – has been amazingly reliable. There have been problems with other aspects of the cloud, including block storage, but object storage has been rock solid. It’s not really about feeling lucky, as Dirty (Chris) Harry suggested; it’s about the scalable and resilient architectures that have been built.

We would love to talk to you about cloud storage and how you can start using it. If you have a cloud service provider in mind, we are probably already working with them.

Dedupe is coming fast for primary storage – and the cloud will follow

With EMC’s acquisition of XtremIO today, the storage product landscape appears destined to change again to include a new segment for all-flash arrays. One of the technologies that will go mainstream along with flash arrays is primary dedupe. When you have all the read performance that flash provides, there isn’t any reason not to do it. A number of smaller vendors, including StorSimple (the company I work for) and Pure Storage, are already pairing dedupe with flash SSDs.

Chris Evans, in his blog The Storage Architect, wrote a couple of weeks ago about the potential future for primary dedupe, pointing out that NetApp’s A-SIS product has produced good results for customers since it was introduced in 2007. He then goes on to discuss the symbiotic relationship between flash SSDs and dedupe before posing the question of when dedupe will become mainstream for primary storage, saying:

That brings us to the SSD-based array vendors.  These companies have a vested interest in implementing de-duplication as it is one of the features they need to help make the TCO for all-SSD arrays work.  Out of necessity dedupe is a required feature, forcing it to be part of the array design.

Solid state is also a perfect technology for deduplicated storage.  Whether using inline or post-processing, de-duplication causes subsequent read requests to be more random in nature as the pattern of deduplicated data is unpredictable.  With fixed latency, SSDs are great at delivering this type of read request that may be trickier for other array types.

Will de-duplication become a standard mainstream feature?  Probably not in current array platforms but definitely for the new ones where legacy history isn’t an issue.  There will come a time when those legacy platforms should be put out to pasture and by then de-duplication will be a standard feature.

In a post I wrote last week about using deduplication technology for data that is stored in the cloud, I described the benefits of dedupe for reducing cloud storage transaction and capacity costs. As the wheels of the industry continue to turn, it’s inevitable that the systems accessing cloud storage will dedupe data too. There isn’t any reason not to do it. The technology is available today and it’s working. Check it out.
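
For anyone curious what dedupe looks like in its simplest form, here is a bare-bones sketch of content-based deduplication against cloud storage: hash fixed-size chunks and upload only the chunks the cloud hasn’t seen before, which cuts both PUT transactions and stored capacity. Real products, StorSimple’s included, are far more sophisticated; the bucket, chunk size and file names are invented for the example.

```python
import hashlib
import boto3

s3 = boto3.client("s3")
bucket = "example-dedupe-bucket"        # hypothetical
CHUNK_SIZE = 64 * 1024                  # 64 KiB chunks, chosen arbitrarily

def store_deduped(path, seen_hashes):
    """Upload only the chunks whose content hasn't already been stored."""
    uploaded = skipped = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in seen_hashes:
                skipped += 1            # duplicate chunk: no PUT request, no extra capacity
                continue
            s3.put_object(Bucket=bucket, Key="chunks/" + digest, Body=chunk)
            seen_hashes.add(digest)
            uploaded += 1
    return uploaded, skipped

seen = set()                            # a real system persists this index
print(store_deduped("/backups/vm-image.vhd", seen))   # hypothetical file
```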

 

Skimming content from Calvin’s blog – my homies!

 

Two of my buddies from HP, Calvin Zito and Michael Haag, recorded a podcast at SNW last week.  The topic is Building Storage for Virtualization.  In addition to having good content, this podcast is very well recorded – good job, Calvin.  Of course, the discussion moved to 3PAR systems, and although I no longer work for HP/3PAR, I still like the 3PAR system and its architecture.  I’m glad that StorSimple does not compete with 3PAR because I have so many good friends over there.

At one point Calvin suggests that Michael sounds like JR (Jim Richardson, legendary 3PAR employee).  No, Michael you don’t, thank goodness.  I can’t imagine two JRs breathing the same air – something would have to give.

At the end of the podcast, they talk about storage efficiency for virtual servers. The two technologies mentioned were thin provisioning and deduplication.  While StorSimple is now becoming known for its cloud integration, I always like reminding people that our systems use both thin provisioning and deduplication technologies with primary storage.

Is the new Netapp the old IBM?

When you hang out with car salesmen, you start thinking station wagons are cool…

When you think station wagons are cool, you watch the big screen through your windshield…

And when you watch the big screen through your windshield, you watch educational movies about mainframe computers…

And when you watch educational movies about mainframes, you get lost in mainframeland…

And when you get lost in mainframeland, you start seeing Netapp filers…

 

And when you start seeing Netapp filers, you take a trip through a wormhole…

 

And when you travel through a wormhole, you become a supernova baby…

Don’t become a supernova baby.

StorSimple grabs the gold from Storage Magazine/SearchStorage in Storage Systems category

Cock-a-doodle-doo! It’s crowing time.

We just learned that we won the top prize (Product of the Year – Gold) from Storage Magazine/SearchStorage in the category of Storage Systems.

Thanks to our customers, who are so important to our success!