Moving towards utility/cloud computing standards?

This Forbes article (via John) channels 3Tera’s Bert Armijo’s call for standardization of utility computing. He calls it “Open Cloud” and it would “allow a company’s IT systems to be shared between different cloud computing services and moved freely between them”. Bert talks a bit more about it on his blog and, while he doesn’t reference the Forbes interview (too modest?), he points to Cloudscape as the vision.

A few early thoughts on all this:

  • No offense to Forbes but I wouldn’t read too much into the article. Being Forbes, they get quotes from a list of well-known people/companies (Google and Amazon spokespeople, Forrester analyst, Nick Carr). But these quotes all address the generic idea of utility computing standards, not the specifics of Bert’s project.
  • Saying that “several small cloud-computing firms including Elastra and Rightscale are already on board with 3Tera’s standards group” is ambiguous. Are they on-board with specific goals and a candidate specification? Or are they on board with the general idea that it might be time to talk about some kind of standard in the general area of utility computing?
  • IEEE and W3C are listed as possible hosts for the effort, but they don’t seem like a very good match for this area. I would have thought of DMTF, OASIS or even OGF first. On the face of it, DMTF might be the best place but I fear that companies like 3Tera, Rightscale and Elastra would be eaten alive by the board member companies there. It would be almost impossible for them to drive their vision to completion, unlike what they can do in an OASIS working group.
  • A new consortium might be an option, but a risky and expensive one. I have sometimes wondered (after seeing sad episodes of well-meaning and capable start-ups being ripped apart by entrenched large vendors in standards groups) why VCs don’t play a more active role in standards. Standards sound like the kind of thing VCs should be helping their companies with. VC firms are pretty used to working together, jointly investing in companies. Creating a new standard consortium might be too hard for 3Tera, but if the VCs behind 3Tera, Elastra and Rightscale got together and looked at the utility computing companies in their portfolios, it might make sense to join forces on some well-scoped standardization effort that may not otherwise be given a chance in existing groups.
  • I hope Bert will look into the history of DCML, a similar effort (it was about data center automation, which utility computing is not that far from once you peel away the glossy pictures) spearheaded by a few best-of-breed companies but ignored by the big boys. It didn’t really take off. If it had, utility computing standards might now be built as an update/extension of that specification. Of course DCML started as a new consortium and ended as an OASIS “member section” (a glorified working group), so this puts a grain of salt on my “create a new consortium and/or OASIS group” suggestion above.
  • The effort can’t afford to be disconnected from other standards in the virtualization and IT management domains. How does the effort relate to OVF? To WS-Management? To existing modeling frameworks? That’s the main draw towards DMTF as a host.
  • What’s the open source side of this effort? As John mentions during the latest Redmonk/Willis IT management podcast (starting around minute 24), there needs to be an open source side to this. Actually, John thinks all you need is the open source side. Coté brings up Eucalyptus. BTW, if you want an existing combination of standards and open source, have a look at CDDLM (the standard) and SmartFrog (the implementation, now with EC2/S3 deployment).
  • There seems to be some solid technical raw material to start from. 3Tera’s ADL, combined with Elastra’s ECML/EDML, presumably captures a fair amount of field expertise already. But when you think of them as a starting point to standardization, the mindset needs to switch from “what does my product need to work” to “what will the market adopt that also helps my product to work”.
  • One big question (at least from my perspective) is that of the line between infrastructure and applications. Call me biased, but I think this effort should focus on the infrastructure layer. And provide hooks to allow application-level automation to drive it.
  • The other question is with regard to the management aspect of the resulting system and the role management plays in whatever standard specification comes out of Bert’s effort.

Bottom line: I applaud Bert’s efforts but I couldn’t sleep well tonight if I didn’t also warn him that “there be dragons”.

And for those who haven’t seen it yet, here is a very good document on the topic (but it is focused on big vendors, not on how smaller companies can play the standards game).

[UPDATED 2008/6/30: A couple hours after posting this, I see that Coté has just published a blog post that elaborates on his view of cloud standards. As an addition to the podcast I mentioned earlier.]

[UPDATED 2008/7/2: If you read this in your feed viewer (rather than directly on vambenepe.com) and you don't see the comments, you should go have a look. There are many clarifications and some additional insight from the best authorities on the topic. Thanks a lot to all the commenters.]


Filed under Amazon, Automation, Business, DMTF, Everything, Google, Google App Engine, Grid, HP, IBM, IT Systems Mgmt, Mgmt integration, Modeling, OVF, Portability, Specs, Standards, Utility computing, Virtualization

20 Responses to Moving towards utility/cloud computing standards?

  1. Stu

    I said it on John’s blog, and I’ll say it here — someone got way ahead of themselves.

    Other than comments on blog posts and podcasts agreeing that it would be a good thing to do one day, none of these companies (at least not Elastra) have spoken about forming a standard any time soon. Now is the time for some innovation, and a standards committee is probably not the place to do it. All of your above comments apply. I’d much rather see our approaches broadly market-tested and battered beforehand; without this, there’s not much to counteract the usual power politics.

    There is, of course, the alternative where one creates a market out of a standard — putting the cart before the horse. This usually works in “dark horse” opportunities; something like RSS or Atom, or in the enterprise, J2EE (which no one expected to be successful in the ’90s – but customers demanded it!) In the case of cloud computing, there’s already too much attention on the topic to be able to pull it off.

    BTW – regarding the “line between infrastructure and applications”. Though perhaps I need you to elaborate more on what you mean, I think the missing link with infrastructure-delimited standards is that they don’t take architecture into account, and that’s the tie that binds, at least in the domains of provisioning and scaling.

  2. Re: “why VCs don’t play a more active role in standards”

    VCs don’t breathe unless they can make money on it. Unless standards are a sure-fire path to $moola$, I can’t imagine them even paying lip service.

    Also, I have a lengthy discussion with Bert about Cloudware on my Cloud Cafe #4 podcast.

    Cloud Cafe Podcast #4

    John

  3. Apologies to Elastra, I agree we’re currently not working with them on the standards effort (something did get lost in the transmission here). Thanks, Stu.

    Whether it is too early to start establishing a standard: maybe. We believe that the time has come to start putting together aspects of cloud computing that will allow interoperability between systems. Someone’s got to start. We at 3Tera have demonstrated single-command migration of complex applications between datacenters on different continents (from running state to running state, with no manual intervention) — so we have existence proof of some things that were only dreamed of in OVF.

    Wrt OVF, it may be a nice standard; however, it is a standard for virtual machines, not for cloud computing. In addition, I believe XML has gotten in the way of defining something simple — after all, VMs have existed for 10+ years on PCs alone, so they are no longer rocket science (OK, on second thought they still are, but we have quite a few rocket scientists out there).

    Will it be easy: I agree with William — unlikely. But that doesn’t mean we shouldn’t try and work on it. It may take 12-18 months to get to something where all sides are considered, and I would think Stu may also get on board. One thing to consider for such a standard is that it should be like HTML and SMTP — simple to expand with vendor-specific options (as long as these are just this, options, not Microsoft-like tie-ins).

    Finally, I agree with John (which I frequently do) that VCs will not be able to do anything here, nor will want to. Good standards are technical endeavors that take business into account, not the other way around (HTML, TCP/IP, Ethernet, need I list more). Bert is a veteran of IEEE standards battles and knows full well what is ahead of us.

    Cheers,
    – Peter
    http://www.3tera.com

  4. Thanks Stu, John and Peter for the good comments and clarifications.

    Stu, you’re right that I am not very specific about what I mean by having the app model drive infrastructure automation. Part of this is that, unlike most other things I write about, this is very directly linked to my day job at Oracle (to some extent it *is* my day job). For the moment, I am focusing more on internal evangelism than external evangelism on this. In the meantime, I write about standards because it’s not my day job anymore, and yet I have enough context/insight into it that I can (I think) write about it semi-intelligently.

    WRT the “VC and standards” idea, it’s just a thought. I never worked in a VC-backed company so I may be extremely naive. But don’t VCs typically help with legal advice, hiring, partnerships, etc? At least, aren’t they supposed to if they are really more than just gamblers? It seems to me that standards may, in some cases, be in the same category of make-or-break efforts. A well-executed standards plan might immensely increase the value of a start-up whose offering is suddenly seen in a “standard” light. To John’s point, that’s where the $$$ signs come in that would motivate a VC to look into this. And just to be clear, I am not suggesting having VCs sit in standards bodies (the horror! it’s painful enough when the lawyers get involved…). Just helping craft a strategy, execution plan and recruiting/networking effort around it. Don’t you think that the existence of UDDI increased the valuation of Systinet when Mercury bought it? Anyway, just a thought. Some fresh blood in standards wouldn’t hurt.

    John, thanks for the podcast link but you really need to offer a feed that’s dedicated to podcasts. It’s a pain to get all your non-podcast posts in my podcast manager. And downloading individual podcasts by hand is tiresome too. Please…

    Peter, I am not sure how much XML really gets in the way. Maybe I have spent too much time in XML-land. But it seems to me that if you look at the whole package of technical value, tooling and market acceptance, it’s a pretty compelling deal, if not the optimal technology. Especially when you then go on to say that the format should be “simple to expand with vendor-specific options”. XML (with namespaces) offers something here that simpler formats can’t. Unless you are looking at RDF as the alternative (with or without XML serialization). Which, BTW, I encourage you to do.

    The main problem with XML for management models is not XML per se, it’s XSD. And SML is paying that price, which is why I didn’t bring up SML as something that a Cloud standard effort should take into account, even though on paper it would be.
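    To make the namespace point concrete, here is a minimal sketch (the descriptor format, element names and namespace URIs are entirely made up for illustration) of how vendor-specific options can live in their own namespace, so a conforming consumer reads the core elements and simply skips extensions it doesn’t recognize:

```python
import xml.etree.ElementTree as ET

# Hypothetical cloud descriptor: core elements in one namespace,
# a vendor ("acme") extension in another. All names are invented.
doc = """<appliance xmlns="urn:example:cloud"
           xmlns:acme="urn:example:acme">
  <cpu>2</cpu>
  <memory unit="GB">4</memory>
  <acme:burst-mode>enabled</acme:burst-mode>
</appliance>"""

CORE = "{urn:example:cloud}"  # ElementTree's Clark notation for the namespace

root = ET.fromstring(doc)

# A consumer reads the core elements it knows about...
cpu = root.find(CORE + "cpu").text

# ...and can identify (and ignore) elements from foreign namespaces.
extensions = [el.tag for el in root if not el.tag.startswith(CORE)]

print(cpu)         # 2
print(extensions)  # ['{urn:example:acme}burst-mode']
```

    The point being that a vendor can add its own elements without colliding with the core vocabulary or breaking other consumers — which is harder to get with flat, prefix-free formats.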

  5. Hate to be, as Coté says, “The Cloud in Everyone’s Silver Lining” … However …

    RightScale Comments…

  6. Stu

    VCs are primarily focused on building a business and an investment exit strategy. Standards are more about building an industry – i.e. they may be a viable business strategy, but it’s a rare art form to pull it off. So, VCs are likely to stick to advising on the former. I’d also note that most VCs are rather hands-off from the technical and market direction — they evaluate, and give feedback, yes, but largely it’s the job of management to make the company successful. If the VC is heavily involved with setting your direction, that might mean you haven’t been doing your job. ;-)

    To use the UDDI example, there was a specification that was too far ahead of its time. The vision was to be “public yellow pages”. But almost all public UDDI servers are gone now. The real problem was “governance infrastructure” and “interface interoperability”. UDDI had little to do with governance other than enabling categorization — it didn’t help processes. And we’re still running into that interface interoperability fence. So, UDDI devolved into becoming a plugin to look up services via developer IDEs or service buses. Yet most developers (and servers) can also just use RESTful discovery (i.e. the ?WSDL query string at a service endpoint) – no specialized protocol required!

    Did it increase Systinet’s valuation? Certainly, yes, in the early days. By many millions by the time Mercury bought them? Likely not. By that time, UDDI was, at best, a checkbox feature to provide a modicum of comfort that a customer wasn’t totally locked in to some kind of proprietary metadata repository (which they were, if it was to do anything useful). Systinet’s value was mostly that it built useful, extensible, and popular management applications — completely independent of UDDI, except, again, as a checkbox of modest “interop? let’s not lift that rock kthx!” comfort.

    It’s one thing to be open and flexible — the point behind ECML, EDML, [insert your description or markup language here] etc. It’s another thing to formalize agreement around these things — hard when we participants don’t even necessarily agree on our shared concerns, since the area is so new. :-)

  7. Alright, you guys win. VCs won’t help here. I guess I was just looking for a Zorro to come rescue the little start-ups bullied by large companies in standards bodies. Maybe open source is that Zorro, as John was implying in the podcast.

    But back in the days of e-speak, we had a fully open-source implementation and we thought that freed us from bothering to standardize the communication protocols. Then came SOAP/WSDL/UDDI and e-speak died… But that may not prove anything. For one thing, we had the code out but we failed to build a development community around it, which is a big difference.

    On the Systinet/UDDI front, I still think UDDI’s mindshare and the fact that Systinet was the leading provider enhanced their valuation a lot. Even if at the product level the UDDI support was a small portion of the actual feature set. But what do I know. Luc, are you reading? Any comment? Any advice to Bert and his colleagues?

  8. Pingback: June 2008 Review Post | IT Management and Cloud Blog

  9. Pingback: Do we need a cloud standard or just one good old IT management standard? | IT Management and Cloud Blog

  10. Pingback: People Over Process » links for 2008-07-02

  11. I’d push back against the “standards need OSS implementations” idea. Turn the question around. Do OSS implementations need standards? Not really, because they are de facto standards that are freely reusable. Why go into the committee, get lost in the mire of competing ideas, when you can build stuff that works?

  12. Hi Steve. That approach didn’t serve us too well with e-speak. But then again, I don’t want to read too much into the e-speak experience because there are lots of things we could have done better. Failure to standardize might not have been a major contributor.
    In a way, standards are to OSS what a SOA governance registry is to a set of Web services. But it is as easy to take a cynical view of SOA governance as it is of standards, so I am probably not building much of a case here.
    When you ask “do OSS implementations need standards?”, I want to ask back: for what? They surely don’t need standards to deliver useful features. They may need standards to generate large business benefits to the users.
    Still hard to picture a world where Jigsaw replaces the HTTP spec…

  13. Stu

    An open source project becoming a de facto standard is the exception, not the rule. In any popular space, there will be many, many alternatives. Look at the number of open source Java web frameworks there are: has this helped the Java world? It certainly hasn’t helped “standardize” much. If anything, it’s made it more confusing to a newcomer, even though there’s a lot of “stuff that works”.

    I would note, however, that those innovations rely on a standard that had “available source” but not “open source” until very recently. Those standards were the Java SE libraries and Java Servlet APIs. Interestingly, these were not hammered out by committee, but were one of those “right time & place” successes that a) had working code behind them, b) had Sun’s force of will behind them to keep them stable, and c) were eventually adopted and managed by committee (the JCP) when they had matured enough. I see this approach being taken increasingly by Internet standards like Atom, Atompub, OpenID, OAuth, etc, and it seems to be reasonable.

    At Jazoon, Roy Fielding gave a good keynote on the conditions for fostering a thriving community and “de facto standards”. His major point was that they require “an architecture for decentralized evolution”. He made the bold claim that the success of Linux, Apache httpd, and Eclipse actually had little to do with their open source nature, and much more to do with their architecture enabling modules, plugins, and the enablement of isolated evolution that didn’t require any influence or time commitment from the core team. Slides are available here.

  14. “Right time & place” was certainly part of it for the servlet API. But why didn’t this apply to NSAPI, (Fast-)CGI, ISAPI and the Apache API? All doing similar things and backed by code. Several of them backed by influential organizations.

    I am not debating your point (with which I tend to agree) as much as pointing out that history is a messy business… Many factors interlock (like the differing fortunes of programming languages in this example). Did Java rise because of servlets, or did servlets rise because of Java?

  15. Luc Clement

    Re: “On the Systinet/UDDI front, I still think UDDI’s mindshare and the fact that Systinet was the leading provider enhanced their valuation a lot. Even if at the product level the UDDI support was a small portion of the actual feature set. But what do I know. Luc, are you reading? Any comment? Any advice to Bert and his colleagues?”

    If Systinet was sold for 105M it wasn’t because of UDDI, but rather because of 1) the ecosystem of vendors we built around the Systinet Governance Interoperability Framework (GIF), for which UDDI served as a key building block; 2) by being the leading registry vendor (we OEM’d to BEA, Oracle and Tibco); 3) by defining and being the first to deliver a SOA Governance framework using GIF, UDDI, and great product capabilities in the areas of contract management, policy management and the like; and 4) GREAT marketing. That being said, we WOULD NOT have been successful had we not emphasized openness and standardization and executed accordingly. For more on this see “http://soaguidebook.com/chapters.html#chapter3” (shameless plug).

    On the matter of “standards need OSS implementations”: these issues are entirely and utterly orthogonal. OSS has very little to do with standardization – quite the contrary! I’ve seen too many exploit OSS as a way to get around standardization. It’s not to say that OSS cannot contribute to standardization, but as is the case for BPEL, BPMN, BPEL Extensions for People and WS-HumanTask – areas of standardization I’m currently involved in – OSS has been contributing very little. OSS is a business model for 95%+ of participants. Please don’t conflate OSS with standards – that would be a disservice to the standards community.

  16. Stu

    Re: ““Right time & place” was certainly part of it for the servlet API. But why didn’t this apply to NSAPI, (Fast-)CGI, ISAPI and the Apache API? All doing similar things and backed by code. Several of them backed by influential organizations.”

    Answer: None of them had Java bindings. Who the heck wants to program Web sites in C? Those were bindings to Web Server Extensions, not a database-backed website.

    While we’re tripping down memory lane, here’s my recollection of how stuff shook out (I was a fresh college student in those days). Servlets, if anything, were a reflection of NetDynamics’ approach to application servers circa 1997. Netscape’s LiveWire was another useful attempt (they even had Comet-like push HTTP!) and could have grown if JavaScript interpreters had been faster and more stable and they had wanted to build a multi-vendor community around it (they didn’t). Then Netscape bought Kiva and, well, did nothing to drive adoption of either.

    FastCGI could have been the answer, but I think the problem was twofold: a) CGI generally was felt to be hackish; scripting languages weren’t taken anywhere near as seriously then as they are now, b) implementation stability; FastCGI had (and still has!) a habit of taking down the web server when the module crashes. I surmise this is because it doesn’t have the isolation that, for example, Apache mod_* memory pools give you, but I’m not completely sure.

    Java was the fastest adopted language in computing history, as it got wrapped up in the whole Internet wave. Not that Java was perfect, but more what it represented: the first sign that the web browser was the new “everything”, emancipation for developers from Win32, MFC or Borland OWL, and VB….

  17. I can’t believe that I missed this thread when it first occurred. Thanks to William (who commented on my rant, triggered by Rich Miller’s) for pointing it out.

    As I often like to point out, what we have here is a complex adaptive system doing its thing. The overall software ecosystem, and cloud computing markets specifically, require many iterations to produce an agent that survives, much less thrives, in this highly competitive environment. Traditional standards bodies play a role in all of this, as does open source.

    Perhaps the big winners in the “de facto standard” world are the finished, tested protocols (including programming languages and their runtimes) that anyone can use and develop to from the get go. See RSS, REST, SOAP, JavaScript, Java, etc.

    That being said, what bothers me most about Cloudware is the seemingly blatant attempt by 3TERA to base the standard around their architecture and protocols. I am concerned that Cloudware adoption will actually–for example–force Amazon to do a ton of work to comply, which seems odd to me since AWS has many times the mindshare and market share for HaaS that 3TERA has.

    I applaud the idea that someone should get started, but pre-announcing the effort, then freezing out other vendors until the reference implementation is based on your product, seems a little self-serving to me. As I wrote in my post, either open source AppLogic (and share the implementation), or create Cloudware as a framework for AppLogic itself, and *then* begin the discussion about how to create a standard based on any useful protocols, not the implementation itself.

    (By the way, you may not be able to tell from this comment, but I am actually generally a big fan of what 3TERA has accomplished from a marketing perspective. I just have issues with the Cloudware initiative.)

    On another note, I agree with William that should a standards body get to work on anything in this space (even something wrapped up nicely in a pretty bow), the big guys are going to throw their entire “diplomatic” muscle at that body, and the discussion will quickly become a chess game of competing interests, permanently morphing such a proposal beyond recognition. It will take years to create something agreed upon and generally implemented, à la WS-*.

  18. James, I don’t really share your exasperation with 3Tera’s approach to standards. Of course they are trying to create a standard that is as close as possible to their implementation, but who doesn’t? That’s the (potential and often elusive) reward that motivates people to do the work to get a standard ratified. I don’t see 3Tera’s actions as any sneakier than anybody else’s.

    The real sneaky part is to try to get a standard ratified that requires a patent that you secretly hold. So that you can milk the implementers once the standard is adopted. Standards organizations have IP policies that try to prevent this, but it’s very hard to do. But I don’t see any sign that 3Tera is doing this.

    It’s kind of a “damned if you do, damned if you don’t” situation for 3Tera. If they don’t have an implementation, they are accused of “design by committee” and standardizing too early. If they do, they are accused of trying to game the system towards their solution.

    In addition, seeing how stacked that deck is against small companies in standards bodies, I am willing to give them plenty of slack. It’s one thing to criticize Microsoft for trying to impose their proprietary specs as standards, it’s another to go after 3Tera for the same.

    Note that I don’t know enough about Cloudware and ADL to actually have a strong opinion about the technical value of their proposal. But the fact that they base it on their implementation seems natural to me.

    And don’t worry, they won’t get away with it anyway. Some technologies take the world by surprise such that the early, obscure standard becomes dominant (because no one was paying attention when it got set; e.g. by the time big vendors realized the web was a big deal, it was too late to replace HTTP with a “better” protocol). But utility computing is too high-profile by now for this to happen. Not to mention that it is too tightly linked to existing IT management technologies (mainly IT automation).

  19. Why do I get the feeling that instead of one standard, there will emerge a set of trimmed-down standards that will, for a 12-month period, be under scrutiny among all the enterprises that get to march into the clouds? Amazon may be the front-runner now, but 3Tera surely isn’t sleeping on its heels. And from these will emerge a set of aspects judged on everything from interoperability to reliability, until finally it comes down to those coming from open-source implementations.

    Then most will say it was a pretty obvious move, but ‘we were already tied up with ours’.

    Best.
    alain

  20. Pingback: The Standard Cloud « My missives