REST in practice for IT and Cloud management (part 1: Cloud APIs)

In this entry I compare four public Cloud APIs (AWS EC2, GoGrid, Rackspace and Sun Cloud) to see what practical benefits REST provides for resource management protocols.

As someone who was involved with the creation of the WS-* stack (especially the parts related to resource management) and who genuinely likes the SOAP processing model, I have a tendency to be a little defensive about REST, which is often defined in opposition to WS-*. On the other hand, as someone who started writing web apps when the state of the art was a CGI Perl script, who loves on-the-wire protocols (e.g. this recent exploration of the Windows management stack from an on-the-wire perspective), who is happy to deal with raw XML (as long as I get to do it with a good library), who appreciates the semantic web, and who values models over protocols, the REST principles are very natural to me.

I have read the introduction and the bible but beyond this I haven’t seen a lot of practical and profound information about using REST (by “profound” I mean something that is not obvious to anyone who has written web applications). I had high hopes when Pete Lacey promised to deliver this through a realistic example, but it seems to have stalled after two posts. Still, his conversation with Stefan Tilkov (video + transcript) remains the most informed comparison of WS-* and REST.

The domain I care the most about is IT resource management (which includes “Cloud” in my view). I am familiar with most of the remote API mechanisms in this area (SNMP to WBEM to WMI to JMX/RMI to OGSI, to WSDM/WS-Management to a flurry of proprietary interfaces). I can think of ways in which some REST principles would help in this area, but they are mainly along the lines of “any consistent set of principles would help” rather than anything specific to REST. For a while now I have been wondering if I am missing something important about REST and its applicability to IT management or if it’s mostly a matter of “just pick one protocol and focus on the model” (as well as simply avoiding the various drawbacks of the alternative methods, which is a valid reason but not an intrinsic benefit of REST).

I have been trying to learn from others, by looking at how they apply REST to IT/Cloud management scenarios. The Cloud area has been especially fecund in such specifications so I will focus on this for part 1. Here is what I think we can learn from this body of work.

Amazon EC2

When it came out a few years ago, the Amazon EC2 API, with its equivalent SOAP and plain-HTTP alternatives, did nothing to move me from the view that it’s just a matter of picking a protocol and being consistent. They give you the choice of plain HTTP versus SOAP, but it’s just a matter of tweaking how the messages are serialized (URL parameters versus a SOAP message in the input; whether or not there is a SOAP wrapper in the output). The operations are the same whether you use SOAP or not. The responses don’t even contain URLs. For example, “RunInstances” returns the IDs of the instances, not a URL for each of them. You then call “TerminateInstances” and pass these instance IDs as parameters rather than doing a “delete” on an instance URL. This API seems to have served Amazon (and their ecosystem) well. It’s easy to understand, easy to use and it provides a convenient way to handle many instances at once. Since no SOAP header is supported, the SOAP wrapper adds no value (I remember reading that the adoption rate for the EC2 SOAP API reflects this, though I don’t have a link handy).
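For illustration, here is roughly what that query-style interaction looks like. This is a sketch in Python; the endpoint is made up, though the RunInstances/TerminateInstances action names and the InstanceId.n parameter style follow the EC2 query API as I recall it.

```python
from urllib.parse import urlencode

ENDPOINT = "https://ec2.example.com/"  # hypothetical endpoint

def query_call(action, **params):
    # The operation is just another parameter; resources are opaque IDs,
    # not URLs, so there is nothing to "follow" in the response.
    query = {"Action": action, **params}
    return ENDPOINT + "?" + urlencode(sorted(query.items()))

launch = query_call("RunInstances", ImageId="ami-12345", MinCount="1")
# The response gives you instance IDs; you pass them right back:
terminate = query_call("TerminateInstances", **{"InstanceId.1": "i-abc123"})
```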

Overall, seeing the EC2 API did not weaken my suspicion that there was no fundamental difference between REST and SOAP in the IT/Cloud management field. But I was very aware that Amazon didn’t really “do” REST in the EC2 API, so the possibility remained that someone would, in a way that would open my eyes to the benefits of true REST for IT/Cloud management.

Fast forward to 2009 and many people have now created and published RESTful APIs for Cloud computing. APIs that are backed by real implementations and that explicitly claim RESTfulness (unlike Amazon). Plus, their authors have great credentials in datacenter automation and/or REST design. First came GoGrid, then the Sun Cloud API and recently Rackspace. So now we have concrete specifications to analyze to understand what REST means for resource management.

I am not going to do a detailed comparative review of these three APIs, though I may get to that in a future post. Overall, they are pretty similar in many dimensions. They let you do similar things (create server instances based on images, destroy them, assign IPs to them…). Some features differ: GoGrid supports more load balancing features, Rackspace gives you control of backup schedules, Sun gives you clusters (a way to achieve the kind of manage-as-group features inherent in the EC2 API), etc. Leaving aside the feature-per-feature comparison, here is what I learned about what REST means in practice for resource management from each of the three specifications.


GoGrid

Though it calls itself “REST-like”, the GoGrid API is actually more along the lines of EC2. The first version of their API claimed that “the API is a REST-like API meaning all API calls are submitted as HTTP GET or POST requests”, which is the kind of “HTTP ergo REST” declaration that makes me cringe. It’s been somewhat rephrased in later versions (thank you), though they still use the undefined term “REST-like”. Maybe it refers to their use of “call patterns”. The main difference with EC2 is that they put the operation name in the URI path rather than in the query arguments. For example, EC2 uses…(auth-parameters)…

while GoGrid uses…(auth-parameters)…

So they have action-specific endpoints rather than a do-everything endpoint. It’s unclear to me that this changes anything in practice. They don’t pass resource-specific URLs around (especially since, like EC2, they include the authentication parameters in the URL); they simply pass IDs, again like EC2 (though unlike EC2 they only let you delete one server at a time). So whatever “REST-like” means in their mind, it doesn’t seem to be “RESTful”. Again, the EC2 API gets the job done and I have no reason to think that GoGrid’s doesn’t also. My comments are not necessarily a criticism of the API; it’s just that it doesn’t move the needle for my appreciation of REST in the context of IT management. But then again, “instruct William Vambenepe” was probably not a goal in their functional spec.
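Side by side, the two addressing styles look like this (a sketch with invented hostnames and paths, loosely modeled on the two styles; GoGrid’s real paths differ):

```python
def ec2_style(action, instance_id):
    # One do-everything endpoint; the operation travels as a query parameter.
    return f"https://api.ec2.example.com/?Action={action}&InstanceId.1={instance_id}"

def gogrid_style(action, server_id):
    # One endpoint per action; the operation is a path segment.
    return f"https://api.gogrid.example.com/grid/server/{action}?id={server_id}"
```

Either way, what travels is an ID, not a URL that the client could simply follow.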


Rackspace

In this “interview” to announce the release of the Rackspace “Cloud Servers” API, lead architects Erik Carlin and Jason Seats make a big deal of their goal to apply REST principles: “We wanted to adhere as strictly as possible to RESTful practice. We iterated several times on the design to make it more and more RESTful. We actually did an update this week where we made some final changes because we just didn’t feel like it was RESTful enough”. So presumably this API should finally show me the benefits of true REST in the IT resource management domain. And, to be sure, it does a better job than EC2 and GoGrid at applying REST principles. The authentication uses HTTP headers, keeping URLs clean. They use the different HTTP verbs the way they are intended. Well, mostly, as some of the logic escapes me: doing a GET on /servers/id (where id is the server ID) returns the details of the server configuration, doing a DELETE on it terminates the server, but doing a PUT on the same URL changes the admin username/password of the server. Weird. I understand that the output of a GET can’t always have the same content as the input of a PUT on the same resource, but here they are not even similar. For non-CRUD actions, the API introduces a special URL (/servers/id/action) to which you can POST. The type of the payload describes the action to execute (reboot, resize, rebuild…). This is very similar to Sun’s “controller URLs” (see below).
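Paraphrased in code, the verb mapping looks roughly like this (paths and action names are paraphrased from the description above, not copied from the spec):

```python
def cloud_servers_request(intent, server_id, payload=None):
    # Map a management intent to an (HTTP verb, path, body) triple.
    base = f"/servers/{server_id}"
    if intent == "details":
        return ("GET", base, None)
    if intent == "terminate":
        return ("DELETE", base, None)
    if intent == "update":  # e.g. change the server name or admin password
        return ("PUT", base, payload)
    # Everything non-CRUD is POSTed to the action controller,
    # with the payload naming the action (reboot, resize, rebuild...).
    return ("POST", base + "/action", {intent: payload or {}})
```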

I came out thinking that this is a nice on-the-wire interface that should be easy to use. But it’s not clear to me what REST-specific benefit it exhibits. For example, how would this API be less useful if “delete” was another action POSTed to /servers/id/action rather than being a DELETE on /servers/id? The authors carefully define the HTTP behavior (content compression, caching…) but I fail to see how the volume of data involved in using this API necessitates this (we are talking about commands here, not passing disk images around). Maybe I am a lazy pig, but I would systematically bypass the cache because I suspect that the performance benefit would be nothing in comparison to the cost of having to handle in my code the possibility of caching taking place (“is it ok here that the content might be stale? what about here? and here?”).


Sun Cloud API

Like Rackspace, the Sun Cloud API is explicitly RESTful. And, by virtue of Tim Bray being on board, we benefit from not just seeing the API but also reading, in well-explained detail, the issues, alternatives and choices that went into it. It is pretty similar to the Rackspace API (e.g. the “controller URL” approach mentioned above) but I like it a bit better, and not just because the underlying model is richer (and getting richer every day, as I just realized by re-reading it tonight). It handles many-as-one management through clusters in a way that is consistent with the direct resource access paradigm. And what you PUT on a resource is closely related to what you GET from it.

I have commented before on the Sun Cloud API (though the increasing richness of their model is starting to make my comments less understandable; maybe I should look into changing the links to point to a point-in-time version of Kenai). It shows that in the end it’s the model, not the protocol, that matters. And Tim is right to see REST in this case as more of a set of hygiene guidelines for on-the-wire protocols than as the enabler for some unneeded scalability (which takes me back to wondering why the Rackspace guys care so much about caching).

Anything learned?

So, what do these APIs teach us about the practical value of REST for IT/Cloud management?

I haven’t written code against all of them, but I get the feeling that the Sun and Rackspace APIs are those I would most enjoy using (Sun because it’s the most polished, Rackspace because it doesn’t force me to use JSON). The JSON part has two components. One is simply my lack of familiarity with using it compared to XML, but I assume I’ll quickly get over this when I start using it. The second is my concern that it will be cumbersome when the models handled get more complex, heterogeneous and versioned, chiefly from the lack of namespace support. But this is a topic for another day.

I can’t tell if it’s a coincidence that the APIs most attractive to me happen to be the most explicitly RESTful. On the one hand, I don’t think they would be any less useful if all the interactions were replaced by XML RPC calls in which the payloads of the requests and responses correspond to the parameters the APIs define for the different operations. The Sun API could still return resource URLs to me (e.g. a VM URL as a result of creating a VM) and I would send reboot/destroy commands to this VM via XML RPC messages to this URL. How would it matter that everything goes over HTTP POST instead of skillfully choosing the right HTTP verb for each operation? BTW, whether the XML RPC is SOAP-wrapped or not is only a secondary concern.
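The thought experiment above can be made concrete. Assuming the server still hands back a resource URL on creation, the all-POST alternative would look something like this (invented URL and payload shape):

```python
def rpc_style_request(command, resource_url, params=None):
    # Keep the resource URLs, but tunnel every operation through POST
    # with the command named in the payload instead of in the HTTP verb.
    return ("POST", resource_url, {"command": command, "params": params or {}})

vm_url = "https://cloud.example.com/vms/1234"  # hypothetically returned by a create call
reboot = rpc_style_request("reboot", vm_url)
destroy = rpc_style_request("destroy", vm_url)
```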

On the other hand, maybe the process of following REST alone forces you to come up with a clear resource model that makes for a clean API, independently of many of the other REST principles. In this view, REST is to IT management protocol design what classical music training is to a rock musician.

So, at least for the short-term expected usage of these APIs (automating deployments, auto-scaling, cloudbursting, load testing, etc.) I don’t think there is anything inherently beneficial in REST for IT/Cloud management protocols. What matters is the amount of thought you put into it and that it has a clear on-the-wire definition.

What about longer-term scenarios? Wouldn’t it be nice to just use a Web browser to navigate HTML pages representing the different Cloud resources? Could I use these resource representations to create mashups tying together current configuration, metrics history and events from wherever they reside? In other words, could I throw away my IT management console because all the pages it laboriously generates today would already exist in the ether, served by the controllers of the resources? Or rather as a mashup of what is served by these controllers, such that my IT management console is really “in the cloud”, meaning not just running in somebody else’s datacenter but rather assembled on the fly from scattered pieces of information that live close to the resources managed. And wouldn’t this be especially convenient if/when I use a “federated” cloud, one that spans my own datacenter and/or multiple Cloud providers? The scalability of REST could then become more relevant, but more importantly its mashup-friendliness and location transparency would be essential.

This, to me, is the intriguing aspect of using REST for IT/Cloud management. This is where the Sun Cloud API would beat the EC2 API. Tim says that in the Sun Cloud “the router is just a big case statement over URI-matching regexps”. Tomorrow this router could turn into five different routers deployed in different locations and it wouldn’t change anything for the API user, because they’d still just follow URLs. Unlike all the other APIs listed above, where you know the instance ID but still need to somehow know which controller to talk to about this instance. Today it doesn’t matter because there is one controller per Cloud and you use one Cloud at a time. Tomorrow? As Tim says, “the API doesn’t constrain the design of the URI space at all” and this, to me, is the most compelling long-term reason to use REST. But it only applies if you use it properly, rather than just calling your whatever-over-HTTP interface RESTful. And it won’t differentiate you in the short term.
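For what it’s worth, a toy version of such a router fits in a few lines (handler names and URI shapes invented). The point is that the dispatch logic only cares about URI patterns, not about where the URIs live:

```python
import re

ROUTES = [
    (re.compile(r"^/vms/(?P<id>\d+)$"), "vm_handler"),
    (re.compile(r"^/clusters/(?P<id>\d+)$"), "cluster_handler"),
]

def route(path):
    # "A big case statement over URI-matching regexps": try each pattern
    # in order and dispatch to the first handler that matches.
    for pattern, handler in ROUTES:
        match = pattern.match(path)
        if match:
            return handler, match.groupdict()
    return None, {}
```

Split this router into five routers in five locations and a client that just follows URLs never notices.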

The second part in the “REST in practice for IT and Cloud management” series will be about the use of REST for configuration management and especially federation, where you can expect to read more about the benefits of links (I mean “hypermedia”).

[UPDATE: Part 2 is now available. Also make sure to read the comments below.]


Filed under Amazon, API, Cloud Computing, Everything, IT Systems Mgmt, Manageability, Mgmt integration, REST, SOA, SOAP, SOAP header, Specs, Utility computing, Virtualization

35 Responses to REST in practice for IT and Cloud management (part 1: Cloud APIs)

  1. Great article!

    It seems that namespaces would be a crucial feature for supporting federation of multiple providers behind a single service. Lacking namespaces, how can we make sense of the different semantics used by each implementation for the same HTTP verbs and HTTP headers?

  2. Thanks for the analysis. This is helpful. Could you take a look at the latest OCCI cloud proposal and include that in any further writings?



  3. Shlomo,

    I am with you on the need for namespaces, but I am confused by how you then want to apply them to HTTP verbs. The whole point is that these are common across all implementations, so this is not something you want to namespace-qualify. It does make sense for HTTP headers on the other hand (this is one of the things that SOAP addresses with its namespace-qualified headers).

  4. Randy,

Thanks for the pointer. I had a quick look and it looks interesting (it seems to make use of all the stuff I like in the Sun API). But what I see is just a list of design principles, not a spec. And, unlike the others listed in the blog, it is not something that has been implemented in front of a real Cloud. So I’ll put this in my “to watch” list. Is there a better link you can provide with a more fleshed-out version? Or is there a more advanced version that is internal to OGF?

  5. William,

    As you say, different implementations have different semantics for GET and PUT to the same URL (e.g. changing the admin password). That needs to be ironed out either at the namespace level (not sure how) or in some other metadata.

    Better yet, standardize a given behavior.

There is more recent activity going on regarding OCCI; the link you have is outdated – I’ll try to remove it.

There is no implementation yet, but it’s not far away. We are currently writing up the API spec.

  7. Thanks for the clarification Thijs. I left my GGF/OGF account behind when I came to Oracle, so I’ll just wait for the spec to be publicly available.

  8. Mark Hapner

    Hi William,

    Nice post, I wanted to comment on the REST vs WS* point because there actually is a concrete, technical reason why REST is better for ‘services’ that is getting lost in the general debate. This is a little long for a comment but hopefully it’s worth reading …

Many look at REST as more-or-less a synonym for HTTP. In fact, what REST describes is a hypertext architecture. When you define a hypertext service (aka a REST service) the first thing you need to define is the hypertext information model the service presents to its users. The second thing you need to define is how HTTP is used to interact with this hypertext. There are some REST services such as AWS that also define how Soap is used to interact with their hypertext. While Soap can be used as a means of operating on hypertext, it wasn’t designed for this job and that is why it’s cumbersome to use for this purpose. AWS is a hypertext service regardless of what protocol is used to interact with it.

    The Atom IETF standard is a good example of a ‘well defined’ hypertext service. It is composed of RFC 4287 – The Atom Syndication Format and RFC 5023 – The Atom Publishing Protocol. RFC 4287 defines Atom’s hypertext. RFC 5023 defines how HTTP is used to operate on the Atom Feed hypertext.

    Many of the existing REST service descriptions interleave their description of their hypertext and how to operate on it with HTTP. This gives the impression that their hypertext information model (their resource and resource link information model) is less important than how it’s operated on with HTTP. This likely makes it simpler for developers to wrap their heads around but it obscures the fact that hypertext is the core service abstraction.

    Hypertext is more ‘workable’ because it puts an information model directly on the web as resources and their embedded links (rather than hiding these resources behind some unique set of RPC signatures and superimposing some service specific ‘linking’ concept). Developers understand RPC and find hypertext a bit of a challenge to grasp but once they do, they intuitively understand why it is more ‘workable’ as a web service model.

Applying this to the cloud services you have surveyed, it becomes clear that exposing cloud resources via hypertext is more workable than trying to expose them as an RPC ‘library’. Why? Because the state and function of the cloud are easily modeled as ‘resources’ and ‘links’, and as hypertext operations on these.

  9. Hi Mark,

Great comment, thanks for taking the time. What you describe (much more thoroughly) is the conclusion I came to: the one real difference that’s potentially meaningful for IT management protocols that follow REST is the embedded linkage (hypermedia) inherent to the architecture. And interestingly, it’s not currently being taken advantage of in the RESTful (or REST-like) implementations of the different Cloud APIs.

    In one of the future installments in this series, I plan to look at how this relates to the importance of relationships/associations in IT models (the IT management domain is a lot more explicit in modeling relationships than other domains).

    In another one (I have so many notes jotted down that I didn’t have room to capture in this blog entry that it could occupy me for months) I plan to examine if/why SOAP is actually a barrier to this style.

  10. Mark Hapner

    There are quite a few links in the AWS and Sun hypertext.

    It might be interesting for you to extract the hypertext one or two of these cloud services has defined to see if it makes sense.

It is interesting to note that HTML does not provide namespaces but this hasn’t limited its capabilities. The reason is that links are a very effective mechanism for composing resources. Rather than composition via complicated “embedding” mechanisms such as namespaces, the web composes resources via links. If HTML hadn’t provided open-ended, embeddable links there would be no web.

  11. Jorge L. Williams

    Hello William,

    I’m one of the software engineers at The Rackspace Cloud who worked on the design and development of the Cloud Servers API. Thanks for your comments on the API. We’re always open to review and critique as it helps keep us honest and exposes areas for improvement. I’d like to make some clarifications and to lend a little perspective on the API from a ReST services point of view.

    Before going further, let me first clarify that a GET on /servers/{id} and a PUT on /servers/{id} both operate on a representation of a server. Thus, it is possible to GET a server, modify the name of that server, then PUT the server to have the change take effect. The PUT operation is simply doing an update on a particular server and does not violate ReST semantics. In the spec, we showcase the fact that you can use this PUT operation to modify the name of the server and its administrative password because those are the only aspects of the server that are currently modifiable via the API. We’ll work to clarify this confusion in future revisions.
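The read-modify-write cycle being described can be sketched like this (an in-memory toy with invented field names; the real API of course speaks HTTP):

```python
servers = {42: {"name": "web01", "adminPass": "secret"}}  # stand-in for the service

def get_server(server_id):
    representation = dict(servers[server_id])
    representation.pop("adminPass")  # write-only: never shows up in a GET
    return representation

def put_server(server_id, representation):
    servers[server_id].update(representation)  # update-in-place semantics

# GET a server, modify the name, PUT it back:
rep = get_server(42)
rep["name"] = "web02"
put_server(42, rep)
```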

I’d also like to note that you may have missed one of the things that makes us unique (as far as we know) in the world of Cloud management APIs and which may be particularly appealing to those of us who are used to working with WS-*: While the Cloud Servers API adheres diligently to ReST principles, it also makes strong use of XML Schema and is described in a machine processable manner (via a WADL). Some of our reasons for using XML Schema are purely practical: we are coding primarily in Java and Java has a host of tools that allow us to work easily with the schema language. More importantly, however, XML Schema is valuable because it is rich enough to allow us to specify our data model in a machine processable and verifiable manner, and flexible enough to customize to our needs. In a very real sense, we use XML Schema to describe not just the structure of our XML request/response pairs, but also as a means to provide a definition for our underlying entities. The XML schema forms a “source of truth” for those entities and provides a guide when supporting other representations. That is not to say, however, that we take a singularly XML-focused approach. One of the mistakes we made early on was to attempt to auto-generate JSON based solely on XML instance documents. This created JSON “objects” that were difficult to work with and seemed overly verbose. We now take a more “hands on” approach whereby the XML Schema defines the facets (or restrictions) of an underlying entity (a list of servers cannot contain more than 1000 servers, uploaded files must be encoded in Base64 and may not exceed 10 KB) but does not entirely define the structure of the representation when we’re not dealing with XML. Our goal is to take a best-of-both-worlds approach whereby both JSON and XML are first class citizens that are both easy to work with. Additionally, we are also leaving the door open to support other representations in future releases of the API.

This leads me to a very important reason why we considered ReST for our public API in the first place. ReST provides a many-to-one relationship between representations and entities. In ReST, it is the responsibility of the service to provide support for conversions. In SOAP there is only a single XML representation and it’s the client’s responsibility to convert. Public services have different kinds of clients and each typically wants to deal with a representation that makes sense to them. Internal, middleware, and most Java-based clients typically want XML, most external clients and particularly web-based clients want JSON, and individuals typically want HTML pages. HTTP provides a well established content negotiation protocol by which we can serve all of these clients in a consistent manner. Our goal is to keep the number of barriers to entry low, and ReST allows us to do this in a manner that SOAP cannot.
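The negotiation itself is ordinary Accept-header handling. A minimal sketch (ignoring q-values and wildcards, which real HTTP negotiation handles; the supported list mirrors the representations mentioned above):

```python
SUPPORTED = ["application/json", "application/xml", "text/html"]

def negotiate(accept_header, default="application/json"):
    # Return the first media type in the Accept header that we can serve.
    for item in accept_header.split(","):
        media_type = item.split(";")[0].strip()
        if media_type in SUPPORTED:
            return media_type
    return default
```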

    Finally, you mentioned caching in your article. While I agree that it may be overkill from the perspective of a single request or even customer, it makes sense when talking about tens of thousands of accounts and servers. In addition, there are a number of partner use cases that require heavy polling of many servers across many accounts. We designed the caching features based on feedback and input from some of these partners. The current design provides efficient mechanisms for polling and allows us to scale the API service, which is a win-win. Also, please be aware that it is not possible for a client to obtain stale data from our services as we utilize purging mechanisms to ensure that objects served out of cache are always accurate and up to date. Again, thanks for the feedback and we’ll work to emphasize this in future revisions of the spec.


    Jorge L. Williams, Ph.D.
    Senior Software Engineer
    The Rackspace Cloud

  12. Mark,

I see the links in the Sun API, though not in the AWS EC2 API. Or are you referring to an AWS service other than EC2?

So the potential is there for the Sun API, and your colleagues (and you, if you are involved in the effort) have done their share. The next question is whether people will choose to exercise its hypertext capabilities or not. I don’t know what code has been written yet to use the Sun API, if any, but if it is just a port of code that was designed for EC2, GoGrid or others, then they’ll just glance over this and treat it as just a somewhat peculiar form of RPC, so that it fits the mold.

Because an API can be RESTful in the sense that it allows and encourages systems that use it to follow REST principles, but in the end it’s the actual application making use of the API which is RESTful or not. That’s where the moment of truth is. Don’t you agree?

    Again, I think I agree with you on the preeminence of hypermedia among REST principles and the fact that the Sun API recognizes it.

Your point about HTML not needing namespaces because it uses links is very interesting to me. I don’t think it’s quite an apples-to-apples comparison, but there definitely is something there that I need to think more about. I agree that if we have a more RESTful (or at least more link-friendly) system, then we’ll do less embedding and more referencing. But the fact remains that different people will want to define structures with different semantics, and that’s something you need to take care of whether these new structures appear via embedding or via referencing. At this point, we are in the realm of the semantic web and Linked Data more than REST, really. But again, I think there is a warning against over-engineering that is worth heeding in your remark about the lack of namespaces in HTML.

  13. Hi Jorge,

Thanks for the long and comprehensive comment. First let me make clear that my post was not meant as a review/critique of the APIs as much as an exploration to see what I can learn about the benefits of REST for IT mgmt protocols. But I did drop some criticism here and there and it’s great to have your clarifications/corrections on them.

    On the GET/PUT on /servers/{id} question I see what you mean. Indeed it would be great to clarify the doc to say that the PUT allows you to update the details and it just so happens that only the server name and the admin pwd are updatable right now. Right now in your doc, the title of the section that introduces this PUT is “Update Server Name / Administrative Password” so I don’t feel too stupid for my interpretation. On the other hand, I somehow turned “server name” into “admin username” and that’s just my mistake.

Still, of these two properties only one (server name) actually appears in the GETable description. The password doesn’t (for obvious reasons). Now that I see there is a partial overlap (and that the intent is to have the maximum possible overlap, subject to practicality/feasibility), it doesn’t shock me anymore. Just for old times’ sake, here is what WSDM defined as properties metadata: Mutability, Modifiability, Valid Values, Valid Range, Static Values, Notifiability. Then WSRF went on to create WS-ResourceMetadata which, as far as I can remember, never made it to a standard. At this point WSRF was so far into over-engineering that I think even the committee members had realized it (it took me a while, but by then I, for one, had). Interestingly, that spec defines “modifiability” as either read-only or read-write, meaning that it failed to capture your write-only use case, which is applicable for passwords.

On your use of XSD, I didn’t touch on this because it wasn’t really the topic. Again, this isn’t a general review of the API, it’s just an investigation of the REST-related benefits that it exhibits. On this topic, though, I’ll just note that an explicit model is a good thing but that I don’t believe (anymore) that XSD has much to offer. I’ve touched on this in this entry (see the SML/XSD part). The “we use XSDs because it has tools” argument is one that scares me. But again, this is another topic.

WRT content negotiation, I’ll grant you that it’s native in HTTP in a way that SOAP does not leverage (or replicate). That being said, it is not an overwhelming argument because there is nothing in SOAP that prevents this from happening. For example, WS-Management has a wsman:Locale SOAP header that you can include in your request to ask for the response to be in a given language, which is (at a higher level of the stack than the representation) a form of content negotiation. With the “must-understand” mechanism, SOAP headers are actually a better foundation for this than HTTP headers. But in HTTP the header is baked into the spec, while in SOAP it hasn’t even been defined generically, so I am not arguing that practicality for this is on the side of SOAP.

For caching, I still don’t get your explanation. Are you sharing the cache across customers? I would expect that security concerns would drastically constrain the universe of documents that can be cached across customers. More importantly, I am very confused by the assertion that “it is not possible for a client to obtain stale data from our services as we utilize purging mechanisms to ensure that objects served out of cache are always accurate and up to date”. If that’s the case, then there is no caching concern in the API, period. As far as I (as a client) am concerned, there is no caching. Your implementation may cache behind the scenes, but that’s irrelevant to me. But if that’s the case, then why does your API doc return a 203/Cached code and a “Last-Modified” header? Why would I care if this supposedly cached content is guaranteed to be up to date?
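For reference, the “never stale” claim makes sense under a validation model, where the client revalidates its copy and gets a cheap 304 back when nothing changed. A sketch (resource shape and timestamps invented; status codes are standard HTTP):

```python
def conditional_get(resource, if_modified_since=None):
    # If the client's copy is still current, say so without resending the body.
    if if_modified_since is not None and resource["last_modified"] <= if_modified_since:
        return (304, None)  # Not Modified: reuse the cached representation
    return (200, resource["body"])

server = {"last_modified": 1_700_000_000, "body": "<server name='web01'/>"}
```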

  14. Mark Hapner

    I have to admit that AWS is a bit light on its use of hypertext; however, it does provide various ‘list’ GETs that return resources that contain sets of links. This hypertext is hidden away in its ‘API’ documentation. AWS structures its services as a set of unique query actions and a number of these actions are not idempotent. The result is a service that is a mixture of hypertext and procedural. At its core, it is hypertext because it creates resources and offers them as URIs even if the resource is created with a GET instead of a POST.

    As you start looking at these services from a hypertext rather than an HTTP perspective, you begin to understand them better and it provides a more practical basis for comparing them. On the other hand, if you think of the representations that they produce in responses as just artifacts of their ‘API’ they start to look like an arbitrary mishmash of HTTP. People see and understand the web as hypertext. This is also the best way for programs and programmers to understand REST services.

    On the namespace subject, you might want to look at Atom Categories as an example of how to support specialization of hypertext. Categories allow a Feed to define a set of specialized Feed Entry types that extend the standard Entry hypertext. This is a very simple extension mechanism defined in the Atom Publishing Protocol. It is another example of where simple hypertext facilities often solve information integration problems better than the complexities of XML Schema.
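[Editor’s sketch: the Atom Category mechanism mentioned above can be illustrated in a few lines. This builds a hypothetical Atom entry typed via a category: the scheme/term pair marks a generic entry as representing a “server” resource, with no XML Schema extension machinery. The scheme URI and resource names are made up for illustration.]

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

entry = ET.Element(f"{{{ATOM_NS}}}entry")
ET.SubElement(entry, f"{{{ATOM_NS}}}title").text = "web-server-01"
# The category element specializes this standard entry into a "server" entry.
ET.SubElement(
    entry,
    f"{{{ATOM_NS}}}category",
    scheme="http://example.org/cloud/resource-types",  # hypothetical scheme
    term="server",
)
# Hypermedia link back to the resource itself.
ET.SubElement(entry, f"{{{ATOM_NS}}}link", rel="edit",
              href="http://example.org/servers/web-server-01")

xml_text = ET.tostring(entry, encoding="unicode")
print(xml_text)
```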

  15. Mark,

    Your second paragraph is great. It’s more or less what I was trying to say in my response to your previous comment, when I wrote “in the end it’s the actual application making use of the API which is RESTful or not. That’s where the moment of truth is.”

    For all the talk about whether an API is RESTful or not, what really matters is how it is used rather than the details of what it spits out. It’s the usage model that you apply to it that counts. The actual protocol may be more or less conducive to this model, but it is not the model by itself.

  16. Pingback: William Vambenepe’s blog » Blog Archive » Anthology of blog posts about protocols and data formats

  17. Pingback: William Vambenepe’s blog » Blog Archive » REST in practice for IT and Cloud management (part 2: configuration management)

  18. Pingback: People Over Process » “There’s a cloud for that.” - IT Management & Cloud Podcast #49

  19. William,

    Thanks for another intriguing read – I know how much time it takes to write posts like this and I don’t know where you get it all from.

    I’ve updated the temporary OCCI site to point at and would like to hear your thoughts once we’ve got the current thinking down on paper.


  20. Sam,

    I’ll gladly take a look. But once again I am not giving thumbs up or down to any spec (though I may drop a few pointed comments here and there). I am mainly going through them in order to learn the industry state of the art.


  21. I think we should learn and use from BC (Before Cloud) to get to AC (After Cloud). In my world, BC is/was SNA, OSI, TCP/IP, Web.*, Java.*, Grid.*, SOA.*, Cloud.* … It seems that each vendor comes to the game of standardization with their own agenda. Once I was a vendor (so am guilty too). Ultimately, I think, we do need a Cloud stack (seems like we have 3 … IaaS, PaaS, AaaS). Maybe we need 5 (hmm, just like TCP/IP) or maybe we need 7 (after all, it seems that Nature.* happened in 7 layers along with OSI). Or maybe we just let the vendors all roll the cloud out and interoperate later. Obviously, we may end up having 7 clouds (with 7 powerful vendors like the 7 oil companies or Bells). All the same, your discussion seems very much in the SMaaS (ahh, see I created that … Service Management aaS) vein and somewhat towards IaaS. Anyhow, I think all the wisemen (WSDM, WSMF, WS-something etc.) seem to make sense. Then all the suds (SOAP, REST and such) things move into more of a “how”. It is interesting that in the beginning (my world) there were command line things. It seems command line things are still here (I like that since I only remember my long-term memories of command lines). I believe the SEaaS (SEcurity aaS) may end up causing a small implosion in this whole Cloud.* (I think splat and stack make sense). Obviously, I just created 5 layers from 3. Maybe we should stick with 3 and then have some sub-layers below each main layer (hmm, objectspeak or something). Now, if one were a developer (let’s say Java or Python or something), then we have some type of frameworks, models or patterns. Now that gets even more interesting since those “seem” to be higher-level things (like all this stuff is really for some business service or process to make some money). Hmm, that seems like an Enterprise Architecture type of thing. Whoops, maybe we do need some kind of “Architecture” and rules for Cloud. Then again, each architecture will have that vendor tilt.
    All the same, it certainly is nice after all these years (from BC) to be able to read something and see AC happening. I do believe there will always be something beyond something else. Maybe we should just go from Cloud.* to Cloud 6.0 (skip the releases in between). I think Web 3.0 seems so numerical. Anyhow, I think we are all collectively getting somewhere. Obviously, when some standard does not seem to enhance the cloud it will surely just disappear (go extinct like the dinos). Finally, we do have to think about the OS.* (operating system animal): Linux.*, Unix.*, W.* … i.e. OS.*. But that is another animal anyhow.

  22. Pingback: William Vambenepe’s blog » Blog Archive » VMWare publishes (and submits) vCloud API

  23. Pingback: William Vambenepe’s blog » Blog Archive » Separating model from protocol in Cloud APIs

  24. reading your article i realized again how fundamentally IT is changing. a clear and concise interface (REST or whatever) allows for automating most of the problems in IT resource management.

    i don’t have experience with large scale clouds; we mainly use one of the clouds to help our customers with the infrastructure for getting their applications up and keeping them available to their users.

    there is another benefit to a clear and concise interface to your cloud, big or small: you can manage your resources through different interfaces. and to keep your cloud ‘up’ you can use this feature.

    we have created a native android application called decaf, for managing amazon ec2 accounts. apart from operating your cloud artifacts you can now see the health of your own cloud graphically, for example. with a native android application we are also able to monitor instances and receive/process sms alerts in a way that is meaningful.

    with an application like decaf we could do other interesting things as well. you could offload resource intensive tasks to amazon ec2 from your phone. although i don’t see the immediate problem this solution solves, it is interesting. but i think google apps is a more appropriate cloud for this model.

  25. Hi Guys

    Very interesting and technical article and related list of posts …
    Having done software for a long time, I think it would be great if all software vendors, when publishing their API, took the time to list their requirements, the non-functional requirements and constraints, a high-level UML model of the concepts, and then finally the API. When I was doing research it was also good practice to position your work relative to existing efforts in the same area.
    As a customer, my only feeling is that today there exist around 5 different specs, each of which will again lock me in with a specific provider. It will also force me to “program” access to this API …
    Hey guys, do you think our admins can program in Java and are fluent in JSON and XML Schema?
    At least we have some bloggers who take the time to write reviews.
    Anyway, do you think my procurement manager will choose a cloud vendor based on its API? So if you want to be really user-oriented, try to make it simple AND interoperable…

  26. William Vambenepe,

    I’m an architect at the Rackspace Cloud and I work with Jorge Williams. I’m responsible for the part of the API system design that utilizes caching. The system will indicate to you that a cached entry was returned by using a 203/Cached response in cases where you repeatedly poll for something that has not yet changed state, or in cases where you are polling at a rate that’s impossibly fast. For example, if you ask to reboot a server and then ask 1000 times over the next minute what its status is, before it’s even possible that the server has rebooted and changed state. It’s a way we can gracefully tell the client to slow down its polling: by consulting the max-age of the Cache-Control header along with the Last-Modified header value, you know when to poll again.
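[Editor’s sketch: a minimal illustration of what a well-behaved client might do with these signals, backing off its polling interval when it sees a 203 plus a Cache-Control max-age. The helper name is hypothetical and the header parsing is simplified; a real client would use a full HTTP caching library.]

```python
import re

def next_poll_delay(status_code, headers, default_delay=2.0):
    """Pick the next polling interval (seconds) from a response.

    On a 203 (cached) response, honor the server's Cache-Control max-age:
    polling again sooner cannot return fresher data, so wait at least
    that long. Otherwise fall back to a default interval.
    """
    if status_code == 203:
        cache_control = headers.get("Cache-Control", "")
        m = re.search(r"max-age=(\d+)", cache_control)
        if m:
            return float(m.group(1))
    return default_delay

# Example: the server says the cached answer is valid for 30 more seconds,
# so the client should not bother asking again before then.
delay = next_poll_delay(203, {"Cache-Control": "max-age=30"})
print(delay)  # 30.0
```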

    >As far as I (as a client) am concerned, there is no caching. Your
    > implementation may cache behind the scenes, but it’s irrelevant to me.

    That’s right, but we still respectfully indicate to you that we are doing that, so that you have an indication of how to tune your polling interval for optimal results.

    > But if that’s the case, then why does your API doc return a 203/Cached code
    > and a “last modified” header? Why would I care if this supposedly cached
    > content is guaranteed to be up to date?

    Functionally you don’t care, but operationally you do. You’ll care when other users of the API service start sending in thousands of polling requests per second on operations that do database lookups or back-end system API calls to produce the answer. That usage pattern would slow the system down considerably for neighboring users, including you. We use the caching as a coarse method of QoS admission control to determine how much back-end work we will do and at what rate.

    I’m also responsible for the LIMITS design, which further defines how often you should expect to run operations of various sorts. This is designed to control the rate at which we accept API calls that result in cache misses. By using these approaches in combination, we were able to produce an efficient system that gracefully handles most unintentional abuse of the API service. The neighboring users are efficiently spared from the unwanted work pattern, and very busy clients are minimally impacted by error conditions.

    Rackspace believes deeply in transparency in everything we do. We produced a system design with a supporting API specification that allows us to be as transparent as possible about how we are handling things on the back-end of the API service. The limits are clearly available for the client to see, and by using our 203/Cached return code, you know when you’re asking for something too frequently.


    Adrian Otto

  27. Pingback: William Vambenepe — REST in practice for IT and Cloud management (part 3: wrap-up)

  28. Pingback: What Language Does the Cloud Speak, Now and In the Future?

  29. Pingback: Developing API Server – Practical Rules of Thumb

  30. Pingback: William Vambenepe — Introducing the Oracle Cloud API

  31. Pingback: - A Rackspace Hailmary Pass? | CloudAve

  32. Pingback: Updated: – A Rackspace Hailmary Pass?

  33. Pingback: William Vambenepe — Amazon proves that REST doesn’t matter for Cloud APIs

  34. R. Est

    I admire your courage in admitting that you were involved in some of the WS-* protocol design. That was some of the worst, most obtuse and inaccessible technology ever foisted upon a naive group of IT consumers. It represented a classic failure of “design by a committee of lifelong standards-meeting goers”, and a classic success of making a design so unnecessarily complex as to create a wall around the implementation knowledge, so as to extort untold amounts of consulting dollars from terrified CIOs frightened of missing the “next big thing” that their 4-handicap salesperson hypnotized them with in his PowerPoints.