Nespresso bliss coffee tech

There are many ways to bliss out on coffee in this world and my newest favorite way to do it is pictured below: the Nespresso Pixie.

The Nespresso makes terrific shots of espresso that you can have first thing in the morning to shake the cobwebs, mid-afternoon when things get dull or in the evening when you are just trying to hang in there. Just stumble into the kitchen, drop in a capsule, press the button and watch it do its magic. As they say, it’s all good. The shots are hot, but not tongue searing, and they have a nice layer of crema (aerated coffee foam) riding on top. There are a number of machines that will do this for you, but most of them cost a lot more or require you to mess around with the coffee a lot more. Nespresso is flat out the easiest way to make a great cup of espresso.

Nespresso is made by Nestle and it has a definite European mojo to it.  The unit is small, the capsules are small, the cups it pours are small and the buzz is definite but not harsh. You get a decent lift for a small amount of java and if you are like me, that’s important. What the heck, I wouldn’t bother  if it was just for decaf - I’d go for beer instead, but beer and coffee point me in opposite directions and there you have it.

The only hitch with Nespresso is that you have to get the capsules from Nestle, either over the web, or if you are lucky enough like we are to live near a boutique that sells them, you can walk in and pretend you are in Switzerland. As far as I know, nobody else makes them. The business angle is pretty clear: get a customer to buy one and they buy coffee from Nestle the rest of their lives. The capsules look like little space ships – they are metallic in a mix of gemstone colors and there are 16 regular flavors, all with ridiculous European names you will never be able to remember if you are over 45 – regardless of how much coffee you rev up your head with. Ristretto, Livanto, Volluto, Indriya, Rosabaya – they sound like Uruguayan reindeer. Anyway, you can buy these things by the hundreds and get fancy accessories like they probably had on the Orient Express. Here’s a picture of the one we have to help us pick out the perfect capsule:

They also have limited-time special flavors that you need to order before they run out. The corner drug pusher has nothing on these guys.

The idea is that you drop the capsule down into the miniature bowels of the machine like you are making a tiny orc or something. The mechanism for opening the front is cantilevered: you rotate the handle up and the front of the machine pushes outward where the handle had been. This in itself is a really cool little design detail that I appreciate every time. When you pull the handle back down, the capsule seats in the machine where it is ready to have hot, pressurized water blown through it. Three small holes are made in one end of the capsule and a grid of punctures goes on the other end (the cup end). These apparently are the holes that the hot water gets blasted through on the way to your cup. You push the button and the pump kicks in. It doesn’t shake the house but it could wake up the dog, and shortly thereafter the desired result is in your hands. The next time you open the handle, the expired capsule drops into the receptacle of used capsules below. It took either Swiss or German engineers to come up with this compact mechanical wonder.

In addition to the espresso machine, Nestle also makes Nespresso milk frothers for making lattes, cappuccinos and any other drink where foamy milk is used. Darn, if they don’t turn out great too. In addition to coffee, I’ve been using it to make milk for chai lattes and the results have been excellent.

We’ve had our Nespresso about a month so we don’t know how long it will last, but the quality of the coffee drinks has been top notch. Highly Recommended.

This blog post was 100% produced on my Microsoft Surface – all words and pictures.




My Ubuntu epiphany – that old Dell lives again

I’ve always been a Windows user. I’ve never had problems finding the apps I needed to get the job done – and that includes audio and video production, so I’ve stuck with it and Windows has treated me well in return. FWIW, I’m definitely looking forward to Windows 8, although I probably won’t be an early adopter, preferring to wait for the first round of gotchas to get ironed out.

But in the last couple days I had a situation come up that drove me off Windows for a solution. A family member is having problems with their aging Mac and asked what they should do. My technology-tired spouse unit piped up: “Ask Marc, he has a bunch of machines lying around, he should be able to help you out”.  And of course, like a moron, I said I did and I would as long as they could use a Windows system. They were desperate and finally caved.

So I pulled the door open to the closet from hell and extracted a Dell bag from the bottom containing an old corporate system (Dell D630). I was supposed to have turned it in at work some years ago, but it had been a good friend and it looked so sad sitting there – I couldn’t have just given it to the grim reaper of corporate transition. Besides, it had a bunch of source files on it from various ongoing blog concepts that I thought might be useful. Of course, once it hit the closet, it was never seen again.

All I needed to do was fire it up, clear out the old data and give it away.  As if.

CTRL+ALT+DEL and the prompt for credentials appeared. 15 minutes later I could see this was going nowhere, so I turned to the Internet, having seen references years ago for recovering XP passwords. There’s nothing quite like getting your hopes up with an Internet search to find that everything written is an insult to your intelligence. “Here’s a great tip – try logging in as Administrator!” Uhhh, yeah, I already did, as well as trying guest, admin, petrock and several other favorites. I finally tried Ophcrack, which involves downloading an ISO file on another system, burning a CD and booting the locked system. At this point I was in geek heaven, but it turns out that Ophcrack didn’t reveal anything on the first pass and I didn’t want to take 5 hours figuring out which hash tables I needed. It was becoming clear to me that this would cost me money if I had to replace the hard drive and buy another Windows OS license. Spending $$ was never a part of my good-family plan.

Then it hit me. Linux. Reinstall over XP and get it over with. The only problem was that I had never really worked with the stuff. The last time I tried, I got it installed but never really did anything with it. The learning curve seemed too steep for something I didn’t need. FWIW, it was experiences like that that had me questioning my geek status. Anyway, I had enough awareness to know that Ubuntu had some popularity, so after verifying that with Google, I downloaded an ISO for Ubuntu, made the CD, booted it and started the installation.

OMG – was this ever the easiest installation for anything, or what? It went flawlessly and quickly. It found all my hardware, like my wireless card, and gave me a list of networks to use. I had to adjust my touchpad settings to my liking, but that was all I had to do. Damn! The distribution came with Firefox and software called LibreOffice for word processing, spreadsheets and presentations. I haven’t tried them, but they look reasonable – especially for somebody who doesn’t need them for work.

It also came with an app called the Ubuntu Software Center, which is like an app store, but a lot of the apps in it are open source freeware apps and utilities.  There is a ton of stuff in there and after 30 minutes or so of dorking around, I figured out how it was organized and could search it with some effectiveness.

In short order I had a basic working system that looks good, performs well and does a lot of things a lot of people need a system to do. And it didn’t cost me anything except for the time it took – most of which was wrapped up in futile Windows password cracking attempts.

A pleasant surprise happened this morning when my wife mentioned she was up early and saw some stars that she wondered about. I fired up an application called Stellarium (Linux is great for scientific and educational software) that renders a picture of the night sky for any time and location. Amazing! A screencap from Stellarium is below.

Of course it looks better on a full screen, but you can get the idea by clicking it. Ultra-coolness.

So it turns out that this machine is suddenly fun again and I’m liking it too much to give it up to somebody who can’t possibly appreciate it. It will be back to the closet for me to find another orphaned system. FWIW, this blog post was written on my new/old Ubuntu machine. I couldn’t really tell the difference from my Windows 7 system while doing this (working in Firefox).




Oracle buys Xsigo to boost Sun ROI


The announcement this week of Oracle acquiring Xsigo did one thing: it put the focus of enterprise computing back onto Oracle for a brief moment. A number of excellent posts were written on it.

The dollar amount was undisclosed; Chris Mellor from The Register figures it was around $800 million, but I’m not so sure. Xsigo didn’t have many customers, which means the company probably had a financial crisis looming and needed to get the best deal it could before running out of time.

The acquisition is more opportunistic than strategic. Assuming Xsigo was looking for an acquirer, Oracle may have felt it had to decide quickly or miss out on something it might regret later. That would explain the confused messaging around the acquisition, where some thought it was an SDN play and others said it was part of Oracle’s cloud strategy. I can assure you that Larry Ellison laughed out loud reading comments about this being a cloud play.

Oracle couldn’t care less about Xsigo’s customers – they already have enough of their own. It’s all about Oracle now. Xsigo’s products will be discontinued and Xsigo employees not involved with technology development will be let go.

As for this acquisition being a cloud play – it’s not. There are still things that happen in this industry that have nothing to do with the cloud. As others have written, Xsigo technology would most likely be used with Oracle’s Exasystems as a virtual data center fabric component. If Oracle wants to sell more Exasystems they need to make them more flexible and easier to manage and Xsigo can help do that.  The problem is that most people building clouds are doing it with different technologies than Oracle’s expensive Exasystems.

So cloud is not the opportunity, data centers are. The ROI from this acquisition depends completely on Oracle’s ability to sell more Exasystem products to people running data centers. Xsigo’s role is to help generate a faster ROI from the Sun acquisition. Maybe Oracle will break even on Sun, but they have a long way to go. Nonetheless, this is something Oracle is extremely good at – selling high-margin products to large, captive, mature markets.

Where cloud is concerned, Oracle needs to figure out what to do about Hadoop, the open source compute/storage technology that grew out of Google’s research, was set free by Yahoo and now has several competitive distributions. Hadoop is the technology of choice for most IT professionals working on Big Data analytics applications and it is highly unlikely there is much Oracle can do to compete with it. Cloud customers can create huge Hadoop applications on Amazon and Google today for far less money than they can build out an Exasystem data center. Hadoop won’t be used for every application in the cloud, but it’s going to be used for a lot of things people have associated with traditional databases over the years. People wondering about Oracle’s business in the cloud are advised to watch the Hadoop space and any moves Oracle makes related to it.
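For anyone who hasn’t crossed paths with Hadoop, the programming model behind it is simple enough to sketch in a few lines. This is a toy, pure-Python illustration of the map/reduce idea (not Hadoop’s actual Java API): a map step emits key/value pairs and a reduce step folds together the values for each key – the classic word-count example.

```python
from collections import defaultdict

def map_phase(lines):
    """Map step: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce step: sum the counts emitted for each distinct word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["the cloud is big", "big data in the cloud"]
word_counts = reduce_phase(map_phase(lines))
# word_counts["the"] == 2, word_counts["cloud"] == 2, word_counts["data"] == 1
```

Hadoop’s contribution isn’t the model itself so much as running it reliably across thousands of commodity compute/storage nodes – which is exactly why it undercuts the economics of a big database machine.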

Doing their very best to stall cloud storage

The annual trip to storage Mecca occurred this week with EMC World in Las Vegas and from what I could tell from 500 miles away in San Jose, things went according to plan. Lots of people came from all over the world to see what the biggest company in the storage industry had to say.

EMC made a slew of announcements across the breadth of their product lines and I’m sure if I had been there I would have had a difficult time keeping my eyes open for all of them – especially the Documentum news. Something about sitting in a dark room listening to tech marketing lingo without sufficient access to stimulants. This isn’t a comment about EMC, by the way – it’s a trade show disease I’ve developed over the years.

Not surprisingly, the Cloud was a big theme.  EMC announced they were going to make improvements to Atmos to make it more acceptable as a cloud storage platform.  Every product needs development to stay competitive and Atmos is a fine example of a product that needs improving.  It will be interesting to see if it gets what it needs to keep up with the leading cloud storage services like Amazon’s S3 and Microsoft’s Windows Azure Storage.  I don’t want to rule Atmos out quite yet, but so much of what makes cloud storage services work is the environment they run in and nobody is keeping up with the cloud data center technology being developed by Amazon, Google and Microsoft.

Mike Fratto from Network Computing reporting on Joe Tucci’s keynote wrote:

  “There will be hundreds of thousands of private clouds, and thousands of public clouds. The future of cloud is an and, not an or,” declared Joe Tucci, EMC CEO, during his keynote. Tucci wants that cloud to be EMC-powered.

It is certainly true that Tucci and EMC want the cloud to be EMC-powered, but it’s not going to work that way. There will not be hundreds of thousands of private clouds – there might be a few thousand. There will not be thousands of public clouds – there might be a few hundred – and most of them won’t use EMC storage products because they will build their own storage like Amazon, Google and Microsoft do today. The fact is, the cloud companies building large-scale cloud data centers are rolling their own storage nodes that cost a lot less than anything they can buy from infrastructure vendors – and that doesn’t make guys like Joe Tucci very happy. It’s not surprising they want private clouds to succeed, but that’s just wishful thinking.

The problem is that cloud service providers (CSPs) are making it very easy to start development projects with a minimal investment. You could start a project today using an existing flexible and scalable cloud infrastructure or spend a lot of money to build a private cloud with 20% of the functionality and get it 12 months from now.  As corporate leaders wake up to that reality, the vendor pipe dreams for private clouds are going to blow away. Spending far less to get much more today will always be a better option than investing a lot more to get a lot less a year from now.

But what about the data? This is where there will be a lot of teeth gnashing. Traditional infrastructure storage vendors don’t want data moving to the cloud because it means customers will no longer need as many arrays, dedupe systems and management licenses – or anything else they sell today. That’s a scary thing. The irony of the situation, though, is that the CSPs tend to think that corporate data will just appear somehow like magic, which is an obvious recipe for failure on their part. Meanwhile customers, who will have the final say about how it all turns out, need to be assured that their data will be safe and secure in the cloud – and that they will be able to change their cloud service providers if they want to. The paranoid notion that data is safer in a corporate data center is not well-founded and there are likely going to be many advantages to putting data in a cloud that has the best security technology and practices. Data portability between cloud service providers is not hard if you have the tools to access it and move it.

StorSimple, the company I work for, is part of a small number of startups that are working on technology to move data between corporate data centers and cloud storage for archive, backup and applications running in cloud-compute environments. It’s really all about data management between the data center and the cloud. It’s called cloud-integrated enterprise storage, but the real focus is on the data and the things we can do with it to make cloud storage as affordable, flexible, safe and secure as possible.

I don’t think it’s going to turn out like iSCSI did, where the enterprise storage industry was able to position the upstart technology as inadequate for most of their customers and maintain higher margins in the process. This time, CSPs are the competition and they are changing the way the world thinks about computing. Traditional infrastructure vendors won’t control the final outcome because they don’t control the cloud, just as they don’t control the mobile devices that are accessing the cloud a zillion times a day. I’d like to say that traditional storage infrastructure vendors will have a long slow decline but technology transitions tend to be swift and cruel.  It mostly depends on the motivation of the CSPs and whether they can get their act together to make it happen.

Some gigabytes are worth more than others

Getting clarity on the cost and relative worth of enterprise technology has always been a challenge because of the complex environments and diverse requirements involved. For every good question about which product is better, there is the almost universal answer – “it depends”. One product might have more capacity than its competitors, while another might have a unique feature that supports a new application, and another might have a new operating or management approach that increases productivity. Beauty is in the eye of the beholder and enterprise customers dig a lot deeper than what appears in competitors’ spec sheets. In some respects, it’s like comparing real estate properties where location and design trump square footage.

One of the traps people fall into when comparing the value of cloud services to legacy infrastructure technologies is limiting their analysis to a direct cost-per-capacity comparison. This article in Information Week did that in a painstaking way: the author, Art Wittman, made a commendable effort to make a level cost comparison, but he left out the location and design elements. He concludes that IaaS services are not worthwhile because the costs per capacity are not following the same cost curve as legacy components and systems. There is certainly some validity to his approach – if the capacity cost of disk drives has dropped an order of magnitude in four years, why should the cost of Amazon’s S3 service be approximately 39% higher?
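To see what gets left out of a direct cost-per-capacity comparison, here is a back-of-the-envelope sketch. Every number in it is invented for illustration – the drive price, the service rate and the overhead multiplier are my assumptions, not real figures – but it shows how folding operational overhead into a raw drive price changes the picture.

```python
def owned_disk_cost_per_gb_month(drive_price, capacity_gb,
                                 lifetime_months, overhead_multiplier):
    """Amortize a drive purchase over its service life, then inflate it
    by a multiplier standing in for power, cooling, redundant copies
    and administrator time (all figures hypothetical)."""
    hardware_only = drive_price / capacity_gb / lifetime_months
    return hardware_only, hardware_only * overhead_multiplier

# Invented numbers: a $100, 2 TB drive kept for 3 years, with overhead
# assumed to multiply the true cost of ownership by 8x.
hardware_only, all_in = owned_disk_cost_per_gb_month(100.0, 2000, 36, 8.0)
service_rate = 0.01   # imaginary per-GB-month price for a storage service

# On hardware alone the drive looks far cheaper than the service;
# with overhead folded in, the comparison can flip the other way.
```

The point isn’t the particular numbers – it’s that a spec-sheet comparison quietly assumes the overhead multiplier is 1.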

Conceding that productivity gains can be realized from cloud services, he limits their value to application services and summarily rejects that they could apply to IaaS. After all the work he had done to make a storage capacity cost comparison, he refused to factor in the benefits of using a service.  Given that omission, Mr. Wittman concludes there is no way for an IaaS business model to succeed.

I agree with Mr. Wittman in one respect, if a service can’t be differentiated from on-site hardware, then it will fail.  But that is not the case with enterprise  cloud storage and it is especially not true with cloud storage that is integrated with local enterprise storage. Here’s why:

Storage is an infrastructure element, but it has specialized applications, such as backup and archiving, that require significant expense to manage media (tapes). Moving tapes on and off-site for disaster recovery purposes is time-consuming and error-prone. While the errors are usually not damaging, they can result in lost data or make it impossible to recover versions of files that the business might need. The cost of lost data is one of those things that is very difficult to measure, but it can be very expensive if it involves data needed for legal or compliance purposes. Using cloud storage as virtual tape media for backup kills two birds with one stone by eliminating physical tapes and the need for off-site tape rotations. It still takes time to complete the backup job and move data to the cloud, but many hours a month of media management can be recaptured, along with tape-related costs.

There are even greater advantages available with backup if it can be integrated from primary storage all the way to the cloud, as it is with StorSimple’s cloud-integrated enterprise storage (CIES). Using snapshot techniques on CIES storage, the amount of backup data generated is kept to a minimum, which means the amount of capacity consumed from the cloud storage service provider is far less than if a customer used the cloud for virtual tape backup storage. Cloud-resident data snapshots have a huge capacity advantage over backup storage where the retention of files for legal and compliance purposes is concerned, and it demonstrates how the design of a cloud appliance can deliver even more value from cloud storage.
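A rough sketch of the arithmetic (all of the numbers are invented) shows why retaining recovery points as snapshots consumes so much less cloud capacity than retaining them as full virtual-tape copies:

```python
def full_backup_capacity(dataset_gb, recovery_points):
    """Virtual-tape style: every retained recovery point is a full copy."""
    return dataset_gb * recovery_points

def snapshot_capacity(dataset_gb, recovery_points, daily_change_rate):
    """Snapshot style: one baseline plus only the blocks that changed
    before each subsequent recovery point."""
    return dataset_gb + dataset_gb * daily_change_rate * (recovery_points - 1)

# Hypothetical: a 1 TB dataset, 30 daily recovery points, 2% daily change.
full = full_backup_capacity(1000, 30)        # 30,000 GB consumed
snap = snapshot_capacity(1000, 30, 0.02)     # 1,580 GB consumed
```

Under these made-up assumptions, the snapshot approach consumes roughly one-twentieth of the cloud capacity for the same 30 recovery points.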

The next increase in cloud storage value comes from integrating deduplication, or dedupe, technology with cloud storage. Dedupe minimizes the amount of storage capacity consumed by eliminating redundant information within the data itself. Sometimes the amount of deduped data can be quite large – as occurs with virtualized systems. StorSimple’s CIES systems automatically apply dedupe to the data stored in the cloud and squish capacity consumption to its minimum level – which also minimizes the amount of data that is transferred to and from the cloud. With the help of a cloud-integrated enterprise storage system, the capacity of cloud storage increases in value a lot because so much less of it is consumed.
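At its core, dedupe is content-addressing: chunk the data, hash each chunk, and store each unique chunk only once, keeping a recipe of hashes to rebuild the original. The sketch below is my own toy illustration of that idea (tiny fixed-size chunks, in-memory dict), not StorSimple’s implementation:

```python
import hashlib

def dedupe_store(data, chunk_size=4):
    """Split data into fixed-size chunks and keep only one copy of each
    unique chunk, addressed by its content hash."""
    store = {}                      # hash -> chunk bytes
    recipe = []                     # ordered hashes to rebuild the data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe):
    """Reassemble the original data from the chunk store and recipe."""
    return b"".join(store[d] for d in recipe)

data = b"ABCDABCDABCDXYZ!"
store, recipe = dedupe_store(data)
# Four logical chunks in the recipe, but only two unique chunks stored.
assert rebuild(store, recipe) == data
```

Real systems use larger or variable-size chunks and persistent indexes, but the capacity math is the same: four logical chunks, two stored.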

But the worth of cloud storage is not all about consuming capacity; it’s also about accessing data faster than you can from legacy data archives. Data stored in the cloud with a CIES system is online and can be accessed by workers and administrators without the need to find it in a separate archive pool of storage. If you don’t work in IT, you might not know how much time that can save the IT staff, but if you do work in IT, you know this is a huge advantage that returns a lot of administrator time for other projects.

The access to data in cloud storage is probably most valuable when it occurs following a disaster.  Cloud storage provides the ultimate flexibility in recovery by being location-independent.  Backup or snapshot data stored in the cloud can be accessed from almost any location with an Internet connection to the cloud storage service provider.  Again, cloud-integrated storage has some important advantages that further increase the value of cloud storage by requiring only a small subset of the data to be downloaded before application systems can resume production work. This is much faster than downloading multiple virtual tapes and then restoring data to application servers.

I could go on – and I will in future blog posts. This one is long enough already. There are numerous ways that cloud storage is worth more than its raw capacity. Some of this worth comes from its role in disaster recovery, but a lot of it comes from how it is used as part of an integrated storage stack that incorporates primary, backup, archive and cloud storage.

Speed excites, but how about cool, tight storage?

In 2011, Fusion-io disrupted the enterprise storage industry with their high-performance PCIe flash memory cards. EMC responded this week announcing its own VFCache product.  Suddenly there is a hotly contested race and  it’s up to the rest of the industry to respond.

I love going fast on bikes and skis, but fast is a relative thing. There are always people who will go a lot faster than me, but I don’t need to keep up with them to be happy. The same is true for driving – I drive a 4-cylinder Ford Fusion because I love Sync and I don’t care if it’s not fast.

Cold will be hot

The “good enough for me” principle works for storage too. Most companies have a lot of data that doesn’t need high performance I/O. Server flash products address the hottest data a company has, but what about all the “cool” or “cold” data that is infrequently or never accessed? It typically occupies storage systems that were built to compete based on performance. Even the lowest-performing tiers of most enterprise storage systems significantly over-serve customers by providing service levels far beyond what is needed.  At some point another industry disruption will occur as new products emerge that significantly reduce the cost of storing cool data.

A difficult problem for storage administrators is that there is no way to separate cool data that will be accessed again in the future from cold data that will never be accessed again. One approach is to archive cool data to tape, but the delays and difficulties in locating and restoring cool data when it reheats are not all that comforting. Another approach is to send cool data to an online cloud storage tier provided by enterprise vendors such as Microsoft, Amazon, Rackspace, AT&T, and HP. Cool data in the cloud that reheats is transparently moved back to a local, warmer tier until it cools off again. Data stored in a cloud tier does not require the power, cooling and footprint overhead of data stored in the corporate data center storage and it also reduces the cost and impact of storage system end-of-life events.
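A tiering policy like the one described above can be sketched as a simple age-based placement rule. The tier names and thresholds below are made up for illustration; real systems also weigh access frequency, not just recency:

```python
import time

def pick_tier(last_access_epoch, now_epoch,
              warm_cutoff_days=30, cold_cutoff_days=365):
    """Toy placement policy: recently touched data stays local,
    older data moves out to a cloud tier. Thresholds are invented."""
    age_days = (now_epoch - last_access_epoch) / 86400.0
    if age_days < warm_cutoff_days:
        return "local-disk"
    if age_days < cold_cutoff_days:
        return "cloud-cool"
    return "cloud-cold"

now = time.time()
day = 86400
assert pick_tier(now - 2 * day, now) == "local-disk"
assert pick_tier(now - 90 * day, now) == "cloud-cool"
assert pick_tier(now - 400 * day, now) == "cloud-cold"
```

When cool data in the cloud tier “reheats” – that is, gets accessed – its timestamp resets and the same rule pulls it back to the local tier until it cools off again.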

"Tight" storage looks good

But cloud storage tiers are not the whole answer. Customers will want to ensure that cool/cold data doesn’t consume any unnecessary storage capacity. Cloud storage products that incorporate the various forms of data reduction technology, such as thin provisioning, deduplication and compression, will provide the “tightest” fit for this data by running at capacity utilizations that are unheard of with primary enterprise storage today. In addition to saving customers on storage costs, these products will also increase the return on investment by reducing the bandwidth and transaction costs that some cloud service providers charge. Keeping a tight grip on storage expenses will become synonymous with using the tightest, most efficient cloud-integrated storage systems.
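Of the data reduction technologies mentioned, compression is the easiest to demonstrate. Using Python’s standard zlib as a stand-in for whatever a given storage product actually implements, highly repetitive cool data – months of near-identical log lines, say – squeezes down dramatically:

```python
import zlib

# Repetitive "cold" data, such as months of near-identical log lines.
cold_data = b"2012-08-01 appsrv3 INFO health check passed\n" * 5000

compressed = zlib.compress(cold_data, 9)
savings = 1 - len(compressed) / float(len(cold_data))

# The original is a couple hundred kilobytes; the compressed copy is a
# tiny fraction of that, and it round-trips losslessly.
assert zlib.decompress(compressed) == cold_data
```

Every byte squeezed out is a byte that never incurs capacity, bandwidth or transaction charges from the service provider.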




New storage soap opera gets rave reviews

EMC’s introduction of VFCache this week and the sudden mutual slamming between it and Fusion-io is classic storage theater. We should all enjoy the moment as an online event featuring technology marketing as entertainment for uber geeks.

The plot is well-worn: a scintillating upstart (Fusion-io) starts setting the world on fire with its technology and bravado. The established front runner (EMC) becomes fiendishly jealous of all the attention and dollars the little upstart is amassing. Actions that are only hinted at publicly but occur invisibly behind closed doors are speculated upon by pundits. Details of these liaisons are never released, but some people say they have pictures. Then…. nothing – all is silent.

At some point the front-runner decides to make and not buy after years of market and technology analysis. The process of birthing the new knock-off begins in earnest and the company starts preparing for the delivery. At the special pre-ordained moment, the front runner announces the birth of its creation with all the pomp and circumstance it can muster, including body punches at the upstart intended to demonstrate its power and might.

At this point, a virtual abyss opens beneath the upstart as storage soap opera fans begin to wonder if this could actually be the death of the fair-haired and wild bad-child upstart. There is a great inhalation of air as the audience anxiously waits for the next morsel of information that the Internet can produce. Thank god a posse of bloggers shows up to set the record straight and defend the honor, if not the longevity, of the upstart.

This is as good as it gets in our industry and I’m looking forward to all the installments of this series.  Stay Tuned!

Hooking up in the cloud

It’s a bird, it’s a plane, no…. it’s cloud storage!

This week marks the beginning of my journey into cloud storage – and I expect this journey to be incredibly interesting, with many things to learn about cloud services and unlearn about legacy storage. In one respect, the leap to the cloud is not very far because the company I joined, StorSimple, makes storage appliances that integrate enterprise storage with cloud storage services. The enterprise storage side of these appliances is something I understand fairly well, but legacy storage applications in the cloud can have fascinating surprises and power – as well as limitations. The trick, as always, is to ameliorate the weaknesses and enhance the strengths.

For instance, one of the obvious issues with cloud storage is that its base-level functions are not nearly as reliable or fast as enterprise storage arrays. The StorSimple design addresses these weaknesses with rich metadata that guarantees data integrity and with data reduction technologies (dedupe and compression) that reduce the amount of data that needs to be transferred and stored by the storage service. This blog will explore these technologies in depth as I get up to speed on them. I am going to have a lot of fun figuring out how to explain how it all works in whiteboard animations, graphics and words.
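As a concrete (and highly simplified) sketch of the metadata-for-integrity idea – this is my own toy illustration, not StorSimple’s actual design – a checksum recorded at upload time lets the appliance detect silent corruption when an object comes back from the cloud:

```python
import hashlib

def upload(object_store, name, data):
    """Store the data and return a checksum to keep in local metadata."""
    object_store[name] = data
    return hashlib.sha256(data).hexdigest()   # retained by the appliance

def download_verified(object_store, name, expected_digest):
    """Fetch the object and verify it against the recorded checksum,
    catching silent corruption in transit or at rest."""
    data = object_store[name]
    if hashlib.sha256(data).hexdigest() != expected_digest:
        raise IOError("integrity check failed for %s" % name)
    return data

cloud = {}                                    # stand-in for a cloud bucket
digest = upload(cloud, "vol1/block42", b"payload")
assert download_verified(cloud, "vol1/block42", digest) == b"payload"
```

If anything between the appliance and the service flips a bit, the hash comparison fails loudly instead of quietly returning bad data.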

Rest assured, as it is with any form of enterprise storage, there is a lot of heavy lifting to do – and all the various corner cases that need flattening. The engineering team here is made up of very experienced enterprise storage people who bring the intensity that hard core storage development demands.

I am overjoyed to be here with them.