Can Cloud standards be saved?

Then: Web services standards

One of the most frustrating aspects of how Web services standards shot themselves in the foot via unchecked complexity is that plenty of people were pointing out the problem as it happened. Mark Baker (to whom, I noticed, Don Box also paid tribute recently) is the poster child. I remember Tom Jordahl tirelessly arguing for keeping it simple in the WSDL working group. AmberPoint’s Fred Carter did it in WSDM (in the post announcing the recent AmberPoint acquisition, I mentioned that “their engineers brought to the [WSDM] group a unique level of experience and practical-mindedness” but I could have added “… which we, the large companies, mostly ignored.”)

The common thread between all these voices is that they didn’t come from the large companies. Instead they came from the “specialists” (independent contractors and representatives from small, specialized companies). Many of the WS-* debates were fought along alliance lines. Depending on the season it could be “IBM vs. Microsoft”, “IBM+Microsoft vs. Oracle”, “IBM+HP vs. Microsoft+Intel”, etc… They’d battle over one another’s proposals but tacitly agreed to brush off proposals from the smaller players, at least when those proposals contained anything radically different from the large companies’ submissions. And simplicity is radical.

Now: Cloud standards

I do not reminisce about the WS-* standards wars just for old times’ sake or for the joy of self-flagellation. I also hope that the current (and very important) wave of standards, related to all things Cloud, can do better than the Web services wave did with regard to involving on-the-ground experts.

Even though I still work for a large company, I’d like to see this fixed for Cloud standards. Not because I am a good guy (though I hope I am), but because I now realize that in the long run this lack of perspective even hurts the large companies themselves. We (and that includes IBM and Microsoft, the ringleaders of the WS-* effort) would be better off now if we had paid more attention then.

Here are two reasons why the need to involve and include specialists is even more pressing for Cloud standards than it was for Web services.

First, there are many more individuals (or small companies) today with a lot of practical Cloud experience than there were small players with practical Web services experience when the WS-* standardization started (Shlomo Swidler, Mitch Garnaat, Randy Bias, John M. Willis, Sam Johnston, David Kavanagh, Adrian Cole, Edward M. Goldberg, Eric Hammond, Thorsten von Eicken and Guy Rosen come to mind, though this is nowhere near an exhaustive list). Which means there is even more to gain by ensuring that the Cloud standard process is open to them, should they choose to engage in some form.

Second, there is a transparency problem much larger than with Web services standards. For all their flaws, W3C and OASIS, where most of the WS-* work took place, are relatively transparent. Their processes and IP policies are clear and, most importantly, their mailing list archives are open to the public. DMTF, where VMware, Fujitsu and others have submitted Cloud specifications, is at the other end of the transparency spectrum. A few examples of what I mean by that:

  • I can tell you that VMware and Fujitsu submitted specifications to DMTF, because the two companies each issued a press release to announce it. I can’t tell you which others did (and you can’t read their submissions) because these companies didn’t think it worthy of a press release. And DMTF keeps the submissions confidential. That’s why I blogged about the vCloud submission and the Fujitsu submission but couldn’t provide equivalent analysis for the others.
  • The mailing lists of DMTF working groups are confidential. Even a DMTF member cannot see the message archive of a group unless he/she is a member of that specific group. The general public cannot see anything at all. And unless I missed it on the site, they cannot even know what DMTF working groups exist. It makes you wonder whether Dick Cheney decided to call his social club of energy company executives a “Task Force” because he was inspired by the secrecy of the DMTF (“Distributed Management Task Force”). Even when the work is finished and the standard published, the DMTF won’t release the mailing list archive, even though these discussions can be a great reference for people who later use the specification.
  • Working documents are also confidential. Working groups can decide to publish some intermediate work, but this needs to be an explicit decision of the group, then approved by its parent group, and in practice it happens rarely (mileage varies depending on the groups).
  • Even when a document is published, the process to provide feedback from the outside seems designed to thwart any attempt. Or at least that’s what it does in practice. Having blogged a fair amount on technical details of two DMTF standards (CMDBf and WS-Management) I often get questions and comments about these specifications from readers. I encourage them to bring their comments to the group and point them to the official feedback page. Not once have I, as a working group participant, seen the comments come out on the other end of the process.

So let’s recap. People outside of DMTF don’t know what work is going on (even if they happen to know that a working group called “Cloud this” or “Cloud that” has been started, the charter documents and therefore the precise scope and list of deliverables are also confidential). Even if they knew, they couldn’t get to see the work. And even if they did, there is no convenient way for them to provide feedback (which would probably arrive too late anyway). And joining the organization would be quite a selfless act, because they would then have to pay for the privilege of sharing their expertise while still being excluded from the real deciding circles (unless they are ready to pony up for the top membership levels). That’s because of the unclear and unstable processes as well as the inordinate influence of board members and officers, all of whom are also company representatives (in W3C, the strong staff balances the influence of the sponsors; in OASIS, the bylaws limit arbitrariness by the board members).

What we are missing out on

Many in the standards community have heard me rant on this topic before. What pushed me over the edge and motivated me to write this entry was stumbling on a crystal clear illustration of what we are missing out on. I submit to you this post by Adrian Cole and the follow-up (twice) by Thorsten von Eicken. After spending two days at a face to face meeting of the DMTF Cloud incubator (in an undisclosed location) this week, I’ll just say that these posts illustrate a level of practicality and a grounding in real-life Cloud usage that was not evident in all the discussions of the incubator. You don’t see Adrian and Thorsten arguing about the meaning of the word “infrastructure”, do you? I’d love to point you to the DMTF meeting minutes so you can judge for yourself, but by now you should understand why I can’t.

So instead of helping in the forum where big vendors submit their specifications, the specialists (some of them at least) go work in OGF, and produce OCCI (here is the mailing list archive). When Thorsten von Eicken blogs about his experience using Cloud APIs, they welcome the feedback and engage him to look at their work. The OCCI work is nice, but my concern is that we are now going to end up with at least two sets of standard specifications (in addition to the multitude of company-controlled specifications, like the ubiquitous EC2 API). One from the big companies and one from the specialists. And if you think that the simplest, clearest and most practical one will automatically win, well I envy your optimism. Up to a point. I don’t know if one specification will crush the other, if we’ll have a “reconciliation” process, if one is going to be used in “private Clouds” and the other in “public Clouds” or if the conflict will just make both mostly irrelevant. What I do know is that this is not what I want to see happen. Rather, the big vendors (whose imprimatur is needed) and the specialists (whose experience is indispensable) should work together to make the standard technically practical and widely adopted. I don’t care where it happens. I don’t know whether now is the right time or too early. I just know that when the time comes it needs to be done right. And I don’t like the way it’s shaping up at the moment. Well-meaning but toothless efforts like cloud-standards.org don’t make me feel better.

I know this blog post will be read both by my friends in DMTF and by my friends in Clouderati. I just want them to meet. That could be quite a party.

IBM was on to something when it produced this standards participation policy (which I commented on in a cynical-yet-supportive way – and yes I realize the same cynicism can apply to me). But I haven’t heard of any practical effect of this policy change. Has anyone seen any? Isn’t the Cloud standard wave the right time to translate it into action?

Transparency first

I realize that it takes more than transparency to convince specialists to take a look at what a working group is doing and share their thoughts. Even in a fully transparent situation, specialists will eventually give up if they are stonewalled by process lawyers or just ignored and marginalized (many working group participants have little bandwidth and typically take their cues from the big vendors even in the absence of explicit corporate alignment). And this is hard to fix. Processes serve a purpose. While they can be used against the smaller players, they also in many cases protect them. Plus, for every enlightened specialist who gets discouraged, there is a nutcase who gets neutralized by the need to put up a clear proposal and follow a process. I don’t see a good way to prevent large vendors from using the process to pressure smaller ones if that’s what they intend to do. Let’s at least prevent this from happening unintentionally. Maybe some of my colleagues from large companies will also ask themselves whether it wouldn’t be to their own benefit to actually help qualified specialists to contribute. Some “positive discrimination” might be in order, to lighten the process burden in some way for those with practical expertise, limited resources, and the willingness to offer some could-otherwise-be-billable hours.

In any case, improving transparency is the simplest, fastest and most obvious step that needs to be taken. Not doing it because it won’t solve everything is like not doing CPR on someone on the pretext that it would only restart his heart but not cure his rheumatism.

What’s at risk if we fail to leverage the huge amount of practical Cloud expertise from smaller players in the standards work? Nothing less than an impractical set of specifications that will fail to realize the promises of Cloud interoperability. And quite possibly even delay them. We’ve seen it before, haven’t we?

Notice how I haven’t mentioned customers? It’s a typical “feel-good” line in every lament about standards to say that “we need more customer involvement”. It’s true, but the lament is old and hasn’t, in my experience, solved anything. And today’s economic climate makes me even more dubious that direct customer involvement is going to keep us on track for this standardization wave (though I’d love to be proven wrong). Opening the door to on-the-ground-working-with-customers experts with a very neutral and pragmatic perspective has a better chance of success in my mind.

As a point of clarification, I am not asking large companies to pick a few small companies out of their partner ecosystem and give them a 10% discount on their alliance membership fee in exchange for showing up in the standards groups and supporting their friendly sponsor. This is a common trick, used to pack a committee, get the votes and create an impression of overwhelming industry support. Nobody should pick who the specialists are. We should do all we can to encourage them to come. It will be pretty clear who they are when they start to ask pointed questions about the work.

Finally, from the archives, a more humorous look at how various standards bodies compare. And the proof that my complaints about DMTF secrecy aren’t new.


Filed under Cloud Computing, CMDBf, DMTF, Everything, HP, IBM, Mgmt integration, Microsoft, Oracle, People, Protocols, Specs, Standards, Utility computing, VMware, W3C, Web services, WS-Management

12 Responses to Can Cloud standards be saved?

  1. Agreed, but I was once with the large three-letter company, just as you are with “Oracle” today, and others before. So you do have a conflict of interest, since the man pays you. As for the “specialists”, they too have possible conflicts of interest in undisclosed alliances, in addition to switching sides, so to speak. It is indeed a slippery catch-22 (or 66), since I am indeed independent (hmm, maybe even a plain cloud specialist) and could switch sides (ethically, of course). As for the 10% discounts on alliance memberships, I believe large companies have simply bought out small companies and specialists at more than a 10% premium. All the same, keep up the good comments.

  2. Let’s start with changing the marketing message of your employer from “Innovation Happens Here” to “Innovation Happens Elsewhere [w/ Specialists]” or “Innovation Happens Here, Here and Here”. That will create the right mindset in the industry. ;-)

  3. Hear, hear.

    One reason why we don’t see people calling themselves “customers” in these groups is because the customers of cloud APIs are we, the large and small shops who use the cloud to build systems and applications – independent consultants included.

    To expand on @lmacvittie’s comment :
    Users use Applications
    Applications use Clouds
    Application Developers use Cloud Standards

  4. Well said, William.

    I’ve been doing cloud development for four years, so I think I literally was cloud before it was cool. Having a standards process for cloud computing that is opaque to me is kind of like having my marriage arranged by complete strangers: it’s vitally important and unlikely to work out well for me.

    I hope your appeal for involvement and transparency doesn’t fall on deaf ears.

    Mitch

  5. Pingback: Cloud API requirements « すでにそこにある雲

  6. Pingback: GIS-Lab Blog» Архив блога » Новости вокруг 35

  7. Pingback: William, Chill out, please a.k.a. Irrational Exercuberance in the world of Cloud standards « My missives

  8. Pingback: William Vambenepe — Waiting for events (in Cloud APIs)

  9. Pingback: William Vambenepe — HP has submitted a specification to the DMTF Cloud incubator

  10. One issue is trend following. The CTO or VP of Engineering gets his head around a trendy new approach to doing something — like, say, RESTful interfaces to web services — and even if it doesn’t map well to your specific problem domain, that’s how it has to be. I had to fight hard to get XML-RPC as the web services standard for controlling VPEP, because XML-RPC is viewed as “quaint” and “outdated” by “people who matter”, but the VPEP API mapped straight to XML-RPC and the proof-of-concept service exporting the API took me literally two hours to write (thanks to Python’s excellent built-in XML-RPC server class) so why use anything more complicated for this application? Other than a desire to not see Dave Winer’s head get any bigger lest it alter the gravitational pull of Earth, but that’s a lost battle already. ;) But the point is, those of us with practical experience want simple standards that model well what we *know* are the set of tasks that need to be done now and in the future, while there’s a significant set of people who want complex standards that fulfill a need to put in every trendy buzzword under the sun in order to appear “hip” and “with it”. And unfortunately, the second class of people are too often the decision makers on these things :(.
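
    (For illustration, here is a minimal sketch of the kind of XML-RPC service described above, using only Python’s standard-library `xmlrpc.server` module; the `add` method is a placeholder for illustration, not the actual VPEP API:)

```python
# Minimal XML-RPC service sketch using only the Python standard library.
# The "add" method is an illustrative placeholder, not the real VPEP API.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    """Trivial method exposed over XML-RPC."""
    return a + b

# Bind to an ephemeral port so the sketch runs anywhere without conflicts.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]

# Serve a single request in a background thread, then call it as a client.
t = threading.Thread(target=server.handle_request, daemon=True)
t.start()

client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
print(result)  # 5
t.join()
server.server_close()
```

    (Exposing an existing API this way really is a matter of registering the functions and starting the server, which is the point about simplicity.)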

    The other issue, of course, is one of large organizations *wanting* standards to be complex. The more complex the standard (and the more BigCorp-patented technologies included in it of course!), the more resources it will take to fully implement it. The goal is to make the resources and patent licenses needed to fully implement the standard so onerously huge that only large organizations will have the resources to do so, meaning they are the only ones who are “standards-compliant” and they can slam any potential upstart competitors as not being “standards-compliant”. Not going to name names here, but I’ll just point out that simpler standards tend to drive out the more complex standards, thereby leaving the big companies high and dry with a product that nobody wants to buy. Has anybody here used the complex X.25 protocol lately? What, you’re using the simpler TCP/IP protocol instead? Exactly.

    So you’re correct that this behavior isn’t even in the interest of large corporations, since it tends to create “standards” that nobody buys into, and a “standard” that nobody uses — or that only customers of a few large corporations use — is hardly a real standard. And keeping the standards discussions secretive is hardly in the best interests of anybody either; it means that real problems with “standards” will be overlooked until the “standard” is actually published, at which point all the effort used to produce the “standard” is wasted because nobody will create products that implement it (thus rendering it *not* a standard). Yet we still see this sort of rent-seeking behavior on the part of certain large corporations that seem convinced that it actually works. Inexplicable…

  11. Pingback: andy.edmonds.be › links for 2010-03-01

  12. Pingback: William Vambenepe — Standards Disconnect at Cloud Connect