Separating model from protocol in Cloud APIs

What happened to the separation between the model and the protocol in management APIs? For all the arguments we had in the design of WSDM and WS-Management, this was one fundamental concept that took little discussion before everyone agreed: that the protocol (the interaction model and the on-the-wire shape of the messages used) should be defined in a way that is agnostic to the type of resource being managed (computers, elevators or toasters — the perennial silly example). To this end, WSDM took pains to release MUWS (Management Using Web Services) and MOWS (Management Of Web Services) as two different specifications.

Contrast that with the various Cloud APIs (a new one seems to be released every other day). If they have one thing in common, it is that they happily ignore this principle and tackle protocol concerns alongside the resource model. Here are my guesses as to why that is:

1) It’s a land grab

The goal is not to produce the best long-term API; it’s to be out early, to stake your claim and to gain leverage, so that you can steer the final standard close to your implementation. Editorial niceties like properly factoring the specification are not major concerns; there will be plenty of time for this during the standardization process. In fact, leaving such improvements for the standardization phase is a nice way to make it look like the group is not just rubber-stamping, while not changing much that actually impacts your implementation. The good old “give them something insignificant to argue about” trick. It works, BTW.

As an example of how rushed some of these submissions can be, did you notice that what VMWare submitted to DMTF this week is the vCloud API Specification v0.8 (a 7-page document that is simply a list of operations), not the accompanying vCloud API Programming Guide v0.8? The guide is ten times longer and is the real specification: it is where the operation semantics, payload formats and protocol considerations are actually described, and without it the shorter document cannot possibly be implemented. Presumably the VMWare team was pressed to release in time for a VMWorld announcement and came up with this split to be able to submit without finishing all the needed editorial work. I assume the guide will follow soon; in the meantime, DMTF members will have to retrieve it from the VMWare site in order to make sense of what was submitted to them.

This kind of rush is not rare in the history of specification submissions, even for specifications that have been in the works for a long time. For example, the initial CBE submission by IBM had “IBM Confidential” all over the specification and a mention that one should retrieve the most up-to-date version from the “Autonomic Computing Problem Determination Offering Team Notes Database” (presumably non-IBMers were supposed to break into the server).

If lack of time is the main reason why all these APIs do not factor out the protocol aspects, then I have no problem: there is plenty of time to address it. But I suspect that there may be other reasons, and that some may see the current state as a feature rather than a bug. For example:

2) Anything but WS-*

SOAP-based interfaces (WS-* or “WS-DeathStar”) have a bad rap, and doing anything the opposite way is a crowd-pleaser (well, in the blogosphere at least). Modularity and composition of specifications are a major driving force behind the WS-* work; therefore they must be bad, and we should make all specifications of the new REST order stand-alone.

3) Keep it simple

A more benevolent way to put it is the concern to keep things simple. If you factor specifications out, you put the burden of assembling the complete documentation on the developer, and you introduce versioning issues between the parts. One API document that fully describes the contract is simpler.

4) We don’t need no stinkin’ protocol, we have HTTP

Isn’t HTTP the protocol? Through the magic of REST, all that’s needed is a resource model, right? But if you look inside the specifications, you see sections about authentication, fault handling, long-lived operations, enumeration of long result sets, etc.: things that have nothing to do with the resource model.
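To make this concrete, here is a minimal sketch (in Python, against a hypothetical endpoint; the URL, header and payload field names are invented for illustration) of the enumeration logic a client would have to implement. Nothing in it depends on whether the resources are virtual machines or toasters:

    import requests

    def enumerate_all(url, auth_token):
        """Walk a paginated collection by following 'next' links.

        Pagination is a pure protocol concern: the same loop works
        whether the items are VMs, networks or storage volumes.
        """
        headers = {"Authorization": "Bearer " + auth_token}
        items = []
        while url:
            response = requests.get(url, headers=headers)
            response.raise_for_status()  # fault handling, also model-agnostic
            page = response.json()
            items.extend(page["items"])  # hypothetical payload field
            url = page.get("next")       # absent on the last page
        return items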

So what?

Why is this confluence of model and protocol in one specification bad? If nothing else, the “keep it simple” argument (#3) above has plenty of merit, doesn’t it? Aren’t WSDM and WS-Management just over-engineered?

They may be, but not because they offer this separation. Consider the following practical benefits of separating the protocol from the model:

1) We can at least agree on one part

Thanks to the “REST is the new black” attitude in Cloud circles, there are lots of commonalities between these various Cloud APIs. Especially the more recent ones, those that I think of as “second generation” APIs: vCloud, the Sun API, GoGrid and OCCI (Amazon EC2 is the main “1st generation” Cloud API, from back when people weren’t too self-conscious about not just using HTTP but really “doing REST”). As an example of convergence between second-generation specifications, see how vCloud and the Sun API both use “202 Accepted” and a dedicated “status” resource to handle long-lived operations. More comparisons here.
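For readers who haven’t looked at those specifications, the pattern goes roughly as follows (a sketch in Python; the URL, field names and state values are made up for illustration, not taken from either vCloud or the Sun API):

    import time
    import requests

    def deploy_and_wait(vapp_url, auth_token, poll_seconds=5):
        """Start a long-lived operation, then poll its status resource."""
        headers = {"Authorization": "Bearer " + auth_token}
        response = requests.post(vapp_url + "/action/deploy", headers=headers)
        if response.status_code != 202:
            response.raise_for_status()
            return response.json()  # the operation completed synchronously
        # "202 Accepted" means the request was taken on but not completed;
        # a header points at a status resource that the client polls.
        status_url = response.headers["Location"]
        while True:
            status = requests.get(status_url, headers=headers).json()
            if status["state"] in ("success", "error"):  # hypothetical states
                return status
            time.sleep(poll_seconds)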

Where they differ on such protocol matters, it wouldn’t be hard to modify one’s implementation to use an alternative approach. Things become a lot more sensitive when you touch the resource model, which reflects the actual capabilities of the Cloud management infrastructure. How much flexibility in the network setup? What kind of application provisioning? What affinity/anti-affinity control level? Can I get block-level storage? Etc. Having to implement the other guy’s interface in these matters is not just a question of glue code; it touches major product features. As a result, the resource model is a much more strategic control point than the protocol. Would you rather dictate the terms of a contract or the color of the ink in which it is printed?

That being the case, I suspect that there could be relatively quick and painless agreement on that first layer of the Cloud API: a set of protocol considerations, based on HTTP and REST, that provides a resource control framework with support for security, events, long-running operations, faults, many-as-one semantics, enumeration, etc. Or rather, that if there is to be a “quick and painless” agreement on anything related to Cloud computing standards, it can only be on something that is limited to protocol concerns. It doesn’t have to be long and complex. It doesn’t have to be factored into 8 different specifications the way WS-* was. It can be just one specification. Keep it simple, and ignore all use cases that aren’t related to Cloud computing. In the end, please call it MUR (Management Using REST)… ;-)
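As an illustration of what “limited to protocol concerns” could mean for just the fault-handling part (the payload format below is invented; the point is that nothing in it depends on the type of resource being managed):

    import requests

    class CloudFault(Exception):
        """A protocol-level fault, independent of the resource model."""
        def __init__(self, status_code, code, message):
            super().__init__("%s %s: %s" % (status_code, code, message))
            self.status_code = status_code
            self.code = code
            self.message = message

    def checked(response):
        """Turn an HTTP error into a structured fault.

        If a shared protocol spec fixed the fault payload format once,
        the same error handling would work against every Cloud API.
        """
        if response.status_code >= 400:
            fault = response.json()  # hypothetical: {"code": ..., "message": ...}
            raise CloudFault(response.status_code, fault["code"], fault["message"])
        return response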

2) Many Clouds, one protocol to rule them all

Whichever Cloud taxonomy strikes your fancy (I am so disappointed that SADIST-PIMP hasn’t caught on), it’s pretty clear that there will not be one kind of Cloud. There will be at least some IaaS, some PaaS and plenty of SaaS. There will not be one API that provides control of them all, but they can share a base protocol that will make life a lot easier for developers. These Clouds won’t be isolated; developers will use them as a continuum.

3) Not just one access model

As much as it makes sense to start from simple and mostly synchronous operations, there will be many different interaction models for Cloud computing. In addition to the base operations, we may get more of a desired-state/blueprint interaction pattern, based on the same resource model. Or, somewhere in between, some kind of stored execution flow where modules are passed around rather than individual operations. Also, as the level of automation increases, you may want a base framework that is more event-friendly for rapid closed-loop management. And there are other considerations (like resource monitoring, policies…) not currently covered by these specifications but that can surely reuse the protocol aspects. By factoring out the resource model, you make it possible for these other interaction patterns to emerge in a compatible way.
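As a rough illustration of the contrast between the base operations and a desired-state pattern sitting on the same resource model (the URL and fields below are hypothetical):

    import requests

    AUTH = {"Authorization": "Bearer <token>"}
    VM_URL = "https://cloud.example.com/vms/42"

    # Imperative style: one operation per state transition.
    requests.post(VM_URL + "/action/start", headers=AUTH)

    # Desired-state style: declare the target state and let the
    # infrastructure converge. Same resource model, same protocol
    # machinery (auth, faults, long-running operations), different
    # interaction pattern.
    requests.put(
        VM_URL,
        headers=AUTH,
        json={"cpu": 4, "memory_gb": 16, "power_state": "on"},
    )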

The current Cloud APIs are not far from this clean factoring. It would be an easy task to extract the protocol considerations as a separate document, in large part because REST prevents you from burying the resource model inside convoluted operation semantics. To some extent it’s just a partitioning issue, but the same can be said of many intractable and bloody armed conflicts around the world… Good fences make good neighbors in the world of IT specs too.

[UPDATE: Soon after this entry went to “press” (meaning soon after I pressed the publish button), I noticed this report of a “REST-*” proposal by Mark Little of RedHat/JBOSS. I will reserve judgment until Mark has blogged about it or I have seen some other authoritative description. We may be talking about the same thing here. Or maybe not. The REST-* name surprises me a bit as I would expect opponents of such a proposal to name it just this way. We’ll see.]

[UPDATE 2009/9/6: Apparently I am something like the 26th person to think of the “one protocol/API to rule them all” sentence. We geeks have such a shallow set of shared cultural references it’s scary at times.]

[UPDATE 2009/11/12: Lori MacVittie has a very nice follow-up on this, with examples and interesting analogies. Check it out.]


2 Responses to Separating model from protocol in Cloud APIs

  1. William,

    Thanks for another enthralling post. We agree on pretty much every point… especially about HTTP as the “universal interface”. I jumped on OCCI during the formation discussions (it was called CAPI at the time) and had a clear vision of how it could/should/would look. My plan was to follow in the footsteps of Google GData (at least v2) by taking advantage of AtomPub as the meta-model. It has links and categories which are very nice, and as a bonus “link relations” are already part of HTTP and HTML too. Atom’s categories are also quite cool (no forcing people to use fugly terms like “VPDC”) but they didn’t work outside of Atom, so I wrote an I-D (Web Categories, heavily inspired by Web Linking) to bring them to HTTP and will see about something similar for HTML in due course (assuming we include an HTML rendering, which I would like for us mere mortals – thus combining the machine and user interfaces). The theme is to be as compliant with existing [mostly IETF] standards as possible, contributing to them where we can (in addition to IETF I’m in the HTML 5 WG and have been keeping a close eye on those developments too, if not contributing to them).

    I had originally argued for a tight schedule, bringing the due date in from 2011 to this year – after all, there was little to do (see the GData specs, which cover, what, 16 different data types from contacts to calendars to documents, blogs and videos, for proof). Fortunately or unfortunately, I ran into problems gaining consensus regarding Atom, XML, or indeed any one particular format. Perhaps I didn’t do a good enough job of explaining, but then again, with numerous people flat out refusing to touch anything but their preferred format, I wonder how successful any protocol based on XML/JSON/etc. will ever be. We’ve been forced to separate the model from the protocol as a result, though we are avoiding going for a pure “model then render” type approach that can result in a domain-specific language with a steep learning curve. Instead we are relying on existing standards like HTTP as a meta-model where we can. For example, the web is built on linking, so why would we want to reinvent the wheel? Indeed, HTTP included linking functionality from the outset (Link: headers and poorly specified LINK and UNLINK verbs) but its thunder was stolen [for a few decades] by hypertext (e.g. HTML) with its in-band linking. Link relations and [Ss]emantic web technologies (like RDFa) are now enabling us to link anything to anything in a standard rather than home-grown fashion, which is IMO a good (and necessary) thing for cloud computing. We’ll be doing our best to use these standard approaches to associating and linking resources.

    Basically the idea is to accept the various formats-du-jour (OVF, VMX, Xen, Hyper-V) and add metadata for linking (e.g. associating storage, compute and network resources with one another), related resources (documentation, console access/screenshots), categorisation (with a flexible, proven categorisation model lifted directly from Atom) and of course animation (start, stop, restart, resize). This metadata will be exposed out-of-band, either in the HTTP headers or a separate metadata resource (or both), which allows us to cater for any format (including non-hypertext formats like machine images) – rather than relying on any one in particular (e.g. OVF). Nobody bestowed a particular image format as “the one” for the Internet, and now we have both choice (jpeg for pictures, png for logos, etc.) and interoperability (browsers support a number of common formats). The question came up again in the context of video codecs, and despite being a free software advocate I’m happy to see that neither commercial nor free formats got a leg up here – markets are good at deciding such things.
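    To make that concrete, here is a rough sketch (in Python; the resource URL and all header values are illustrative, not taken from any spec) of what reading that out-of-band metadata could look like:

        import requests

        # Hypothetical compute resource. The Link header follows the Web
        # Linking draft; the Category header follows the Web Categories
        # I-D mentioned above. All values are invented for illustration.
        response = requests.get("https://cloud.example.com/compute/42",
                                headers={"Authorization": "Bearer <token>"})

        # The metadata travels out-of-band in the headers, so the entity
        # body itself can be any format (OVF, VMX, even a machine image):
        print(response.headers.get("Link"))
        # e.g. </storage/7>; rel="related"; title="boot volume"
        print(response.headers.get("Category"))
        # e.g. compute; scheme="http://example.org/occi/kinds#"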

    Anyway I’m working [almost] 24×7 on this now and canceled a family trip this week to get it finished as soon as possible. Following much discussion we now have a clean and simple protocol that I should have documented for public comment directly. Watch this space.

    Sam

  2. Mark Hapner

    Hi William,

    Well-defined REST APIs are fully capable of providing the separation of model and protocol whose absence you are decrying. For a practical example of this, see Atom. The model is defined by the Atom feed spec and the protocol is defined by the Atom Publishing Protocol spec. The IETF clearly understands the difference between model and protocol – let’s hope the DMTF does as well.

    This goes back to REST basics: the model is the hypertext representations and their links; a protocol is the specifics of how HTTP is used to interact with this hypertext. You could define some other protocol for interacting with the hypertext. For instance, you could define a WS-* protocol for interacting with Atom resources – it would define some mapping of ‘logical’ hypertext operations to some specific WSDL interface. The beauty of HTTP is that its entire role in life is to supply such a mapping.

    So, if vCloud were well documented, it would emulate the example of Atom and specify its model and publishing protocol as separate elements. To be fair, it isn’t too difficult to tease this out of the current vCloud Programmers Doc, but you are right to note that those who intermingle model and protocol in the name of REST are making a mistake. Users of the API (and programmer docs) don’t have to make this distinction; however, formal definitions of REST APIs should.
