Animoto is no infrastructure flexibility benchmark

I have nothing against Animoto. From what I know about them (mostly from John’s podcast with Brad Jefferson) they built their system, using EC2, in a very smart way.

But I do have something against their story being used to set the benchmark for infrastructure flexibility. For those who haven’t heard it five times already, the summary of “their story” is ramping up from 50 to 5000 machines in a week (according to the podcast). Or from 50 to 3500 (according to this AWS blog entry). Whatever. If I auto-generate my load (which is mostly what they did when they decided to auto-create a custom video for each new user) I too can create the need for thousands of machines.

This was probably a good business decision for Animoto. They got plenty of visibility at a low cost. Plus the extra publicity from being an EC2 success story (I for one would never have heard of them through their other channels). Good for them. Good for Amazon who made it possible. And who got a poster child out of it. Good for the facebookers who got to waste another 30 seconds of their time straining their eyes. Everyone is happy, no animal got hurt in the process, hurray.

That’s all good but it doesn’t mean that from now on any utility computing solution needs to support ramping up by a factor of 100 in a week. What if Animoto had been STD’ed (slashdotted, technoratied and dugg) at the same time as the Facebook burst, resulting in the need for 50,000 servers? Would 1,000X be the new benchmark? What if a few of the sites that target the “lonely guy” demographic decided to use Animoto for… ok let’s not go there.

There are three types of user requirements. The Animoto use case is clearly not in the first category but I am not convinced it’s in the third one either.

  1. The “pulled out of thin air” requirements that someone makes up on the fly to justify a feature that they’ve already decided needs to be there. Most frequently encountered in standards working groups.
  2. The “it happened” requirements that assume that because something happened sometimes somewhere it needs to be supported all the time everywhere.
  3. The “it makes business sense” requirements that include a cost-value analysis. The kind that comes not from asking a customer “would you like this” but rather “how much more would you pay for this” or “what other feature would you trade for this”.

When cloud computing succeeds (i.e. when you stop hearing about it all the time and, hopefully, we go back to calling it “utility computing”), it will be because the third category of requirements will have been identified and met. Best exemplified by the attitude of Tarus (from OpenNMS) in the latest Redmonk podcast (paraphrased): sure we’ll customize OpenNMS for cloud environments; as soon as someone pays us to do it.

Filed under Amazon, Business, CMDB Federation, Everything, Mgmt integration, Specs, Tech, Utility computing

4 Responses to Animoto is no infrastructure flexibility benchmark

  1. The key to the Animoto story is that they had 25k customers and, because of the change they made to their Facebook interface, they had 700k wanting to use their service and become customers. That type of growth without a utility/cloud designed infrastructure would not have happened for years. This was completely different from being slashdotted. Based on the design of their business model they needed to add thousands of servers in that week to support the creation of Animoto videos. I would have to think that Animoto would not have spun up twice as many just for people wanting to browse their site (i.e., slashdotted). No matter which way you spin it, supporting an infrastructure that allows your business to grow by 700k new customers in a week is a good thing (benchmark or no benchmark).

    IMHO, customers that expect huge success will design their business model to accept this kind of dynamic growth. Of course a smart business will weigh the cost and benefit of when to throttle. In fact SmugMug has a good example of how they first started putting everything on a queue and then found sometimes the queue was too large to process (economically). They now have monitors that throttle queues (based on what I have read).

    As for calling it a cloud, I agree, that term is killing all of us. However, unless you use that term you can’t speak a common language.

    my .2 cents.
    John

  2. Best exemplified by the attitude of Tarus (from OpenNMS) in the latest Redmonk podcast (paraphrased): sure we’ll customize OpenNMS for cloud environments; as soon as someone pays us to do it.

    This, IMHO, is why he is sitting on the sidelines of the “Little 4”/“Mighty Two” lists…

  3. Pingback: What’s my motivation? « Identity Blogger

  4. Pingback: William Vambenepe’s blog » Blog Archive » Grid cloudification