REST in practice for IT and Cloud management (part 3: wrap-up)

[Preface: a few months ago I shared some thoughts about how REST was (or could) be applied to IT and Cloud management. Part 1 was a comparison of the RESTful aspects of four well-known IaaS Cloud APIs and part 2 was an analysis of how REST applies to configuration management. Both of these entries received well-informed reader comments BTW, so if you read the posts but didn’t come back for the comments you really owe it to yourself to do so now. At the time, I jotted down thoughts for subsequent entries in this series, but I never got around to posting them. Since the topic seems to be getting a lot of attention these days (especially in DMTF) I decided to go back to these notes and see if I could extract a few practical recommendations in the form of a wrap-up.]

The findings listed below should be relevant whether your protocol is trying to be truly RESTful, just HTTP-centric or even zen-SOAPy. Many of the issues that arise when creating a protocol that maps well to IT management use cases should transcend these variations and that’s what I try to cover.

Finding #1: Relationships (links) are first-class entities (a.k.a. “hypermedia”)

The clear conclusion of both part 1 and part 2 was that the most relevant part of REST for IT and Cloud management is the use of hypermedia. IT management enjoys a head start on this compared to other domains, because its models are already rich in explicit relationships (e.g. CIM associations), as opposed to other business domains in which relationships are more implicit (to the end user at least). But REST teaches us that just having relationships in your model is not enough. They need to be exposed in a way that maps directly to the protocol, so that following a relationship is an infrastructure-level task, not an application-level task: passing an ID as a parameter for some domain-specific function is not it.

This doesn’t violate the rule to not mix the protocol and the model because the alignment should take place in the metamodel. XML is famously weak in that respect, but that’s where Atom steps in, handling relationships in a generic way. Similarly, support for references is, in addition to its accolade to Schematron, one of the main benefits of SML (extra kudos for apparently dropping the “EPR” reference scheme between submission and standardization, in favor of just the “URI” scheme). Not to mention RDFa and friends. Or HTTP Link headers (explained) for link-challenged types.
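
To make this concrete, here is a minimal sketch (in Python, with a hypothetical URL, JSON shape and link relation, so treat it as an illustration rather than any particular API) of what "following a relationship as an infrastructure-level task" looks like: the client never constructs a URL from an ID, it just picks the link by relation name and dereferences it.

    # Minimal sketch of hypermedia link-following; the URL, the JSON
    # shape and the "host" relation are all hypothetical.
    import json
    import urllib.request

    def get_json(url):
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read())

    def follow(representation, rel):
        # Infrastructure-level task: find the link by relation name and
        # dereference it. No ID passed to a domain-specific function.
        for link in representation.get("links", []):
            if link["rel"] == rel:
                return get_json(link["href"])
        raise KeyError("no link with rel=%r" % rel)

    vm = get_json("https://cloud.example.com/vms/42")
    host = follow(vm, "host")  # wherever the server says the host lives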

Finding #2: Put IDs on steroids

There is little to argue about the value of clearly identifying things of interest and we didn’t wait for the Web to realize this. But it is also one of the most vexing and complex problems in many areas of computing (including IT management). Some of the long-standing questions include:

  • Use an opaque ID (some random-looking string of characters) or an ID grounded in “unique” properties of the resource (if you can find any)?
  • At what point does a thing stop being the same (typical example: if I replace each hardware component of a server one after the other, at which point is it not the same server anymore? Does it make sense for the IT guys to slap an “asset id” sticker on the plastic box around it?)
  • How do you deal with reconciling two resources (with their own IDs) when you realize they represent the same thing?

REST guidelines don’t help with these questions. There often is an assumption, which is true for many web apps, that the application “owns” the resource. My “inbox” only exists as a resource within the mail server application (e.g. Gmail or an Exchange server). Whatever URI Gmail assigns for it is the URI for my inbox, period. Things are not as simple when the resources exist outside of any specific application. Take a server, for example: the baseboard management controller (or the hypervisor, in the case of a VM), the OS management layer and the management agent installed on the machine all have a claim to report on the machine (and therefore a need to identify it).

To some extent, Cloud computing simplifies many of these issues by providing controllers that “own” infrastructure resources and can authoritatively identify them. But it really is only pushing the problem to the next level of the stack.

Making the ID a URI doesn’t magically answer these questions. Though it helps in that it lets you leverage reconciliation mechanisms developed around URIs (such as <atom:link rel="alternate"> or owl:sameAs). What REST does is add another constraint to this ID mechanism: make the IDs dereferenceable URLs rather than just URIs.

I buy into this. A simple GET on a resource URI doesn’t solve everything but it has so many advantages that it should be attempted in all cases. And make this HTTP GET please (see finding #6).
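
As a small illustration of what this buys you, here is a sketch (Python; the URLs, the JSON shape and the sameAs convention are hypothetical, loosely modeled on owl:sameAs) of reconciling two agents' IDs for the same machine once both IDs are dereferenceable:

    # Sketch: dereferenceable IDs make reconciliation a plain GET away.
    import json
    import urllib.request

    def dereference(resource_id):
        # The ID *is* a URL: an HTTP GET yields a representation.
        with urllib.request.urlopen(resource_id) as resp:
            return json.loads(resp.read())

    bmc_view = dereference("https://bmc.example.com/machines/m-17")
    agent_view = dereference("https://agent.example.com/hosts/web-07")

    # Does the BMC's representation claim (owl:sameAs-style) that it
    # describes the same thing as the agent's resource?
    same = "https://agent.example.com/hosts/web-07" in bmc_view.get("sameAs", [])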

In this adoption of GET, we just have to deal with small details such as:

  • What URL do I use for resources that have more than one agent/controller?
  • How close to the resource do I point this URL? If it’s too close to it then it may change as the resource evolves (e.g. network changes) or be affected by the resource performance (e.g. a crashed machine or application that does not respond to its management API). If it’s removed from the resource, then I introduce a scope (e.g. one controller) within which the resource has to remain, which may cause scalability concerns (how many VMs can/should one controller handle, what if I want to migrate a VM across the ocean…).

These are somewhat corner cases (and the more automation and virtualization you get, the fewer possible controllers you have per resource). While they need to be addressed, they don’t come close to negating the value of dereferenceable IDs. In addition, there are plenty of mechanisms to help with the issues above, from links in the representations (obviously) to RDDL-style lightweight directory to a last resort “give Saint Peter a call” mechanism (the original WSRF proposal had a sub-specification called WS-RenewableReferences that would let you ask for a new version of an expired EPR but it was never published — WS-Naming in then-GGF also touched on that with its reference resolvers — showing once again that the base challenges don’t change as fast as technology flavors).

Implicit in this is the fact that URIs are vastly superior to EPRs. The latter were just a band-aid on a broken system (which may have started back when WSDL 1.1 decided to define “ports” as message aggregators that can have only one URL) and they’ve been more debilitating to SOAP than any other interoperability issue. Web services containers internalized this assumption to the point of providing a stunted dispatch mechanism that made it very hard to assign distinct URLs to resources.

Finding #3: If REST told you to jump off a bridge, would you do it?

Adherence to REST is not required to get the benefits I describe in this series. There is a lot to be inspired by in REST, but it shouldn’t be a religion. Sure, if you squint hard enough (and poke it here and there) you can call your interface RESTful, but why bother with the contortions? It’s fine if some parts are not RESTful, as long as they don’t detract from the value of REST in the other parts. As in all conversions, the most fervent adepts of RPC will likely be tempted to become its most violent denunciators once they’re born again. This is a tired scenario that we don’t need to repeat. Don’t think of it as a conversion but as a new perspective.

Look at the “RESTful with many parameters?” comment thread on Stefan Tilkov’s excellent InfoQ introduction to REST. It starts with some shared distaste for parameter-laden URIs and a search for a more RESTful approach. This gets suggested:

You could do a post on some URI like ./query/product_dep which would create a query resource. Now you “add” products to the query either by sending a product uri list with the initial post or by calling post on ./query/product_dep/{id}. With every post to the query resource the get on the query resource would change.

Yeah, you could. But how about an RPC-like query operation rather than having yet another resource lifecycle to manage just for the sake of being REST-compliant? And BTW, how do you think any sane consumer of your API is going to handle this? You guessed it, by packaging the POST/POST/GET/DELETE in one convenient client-side library function called “query”. As much as I criticize RPC-centric toolkits (see finding #5 below), it would be justified in this case.
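
In case it isn’t obvious what that client-side “query” function ends up looking like, here is a sketch (Python; the endpoints and payloads are hypothetical, following the commenter’s ./query/product_dep proposal) of the lifecycle-hiding wrapper:

    # Sketch: the POST/GET/DELETE lifecycle disappears behind one call.
    import json
    import urllib.request

    def query(base_url, product_uris):
        # POST creates the query resource...
        req = urllib.request.Request(
            base_url + "/query/product_dep",
            data=json.dumps({"products": product_uris}).encode(),
            headers={"Content-Type": "application/json"},
            method="POST")
        with urllib.request.urlopen(req) as resp:
            query_url = resp.headers["Location"]
        # ...GET retrieves the result...
        with urllib.request.urlopen(query_url) as resp:
            result = json.loads(resp.read())
        # ...and DELETE cleans up the resource nobody wanted to manage.
        urllib.request.urlopen(urllib.request.Request(query_url, method="DELETE"))
        return result  # the caller never sees the lifecycle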

Either you understand why/how REST principles benefit you or you don’t. If you do, then use this understanding to interpret the REST principles to best fit your needs. If you don’t, then no amount of CONTENT-TYPE-pixie-dust-spreading, GET-PUT-POST-DELETE-golden-rule-following and HATEOAS-magical-incantation-reciting will help you. That’s the whole point, for me at least, of this three-part investigation. Stefan says essentially the same, but in a converse way, in his article: “there are often reasons why one would violate a REST constraint, simply because every constraint induces some trade-off that might not be acceptable in a particular situation. But often, REST constraints are violated due to a simple lack of understanding of their benefits.” He says “understand why you violate” and I say “understand why you obey”. It is essentially the same (if you’re into stereotypes you can attribute the difference to his Germanic heritage and my Gallic blood).

Even worse than bending your interface to appear RESTful is cherry-picking your use cases to keep only those you feel you can properly address via REST, leaving the others aside. Conversely, don’t add requirements just because REST makes them easy to support (interesting how quickly “why do you force me to manage the lifecycle of yet another resource just to run a query” turns into “isn’t this great, you can share queries among users and you can handle long-running queries, I am sure we need this”).

This is not to say that you should not create a fully RESTful system. Just that you don’t necessarily have to and you can still get many benefits as long as you open your eyes to the cost/benefits trade-off involved.

Finding #4: Learn humility from REST

Beyond the technology, there is a vibe behind REST design. You can copy the technology and still miss it. I described it in 2005 as Humble Architecture, and applied it to SOA at the time. But it describes REST just as well:

More practically, this means that the key things to keep in mind when creating a service are that you are not at the center of the universe, that you don’t know who is going to consume your service, that you don’t know what they are going to do with it, that you are not necessarily the one who can make the best use of the information you have access to and that you should be willing to share it with others openly…

The SOA Manifesto recently called this “intrinsic interoperability”.

In IT management terms, it means that you can RESTify your CMDB and your event console and your asset management software and your automation engine all you want: if you see your code as the ultimate consumer and the one that knows best, as the UI that users have to go through, as the “ultimate source of truth” and the “manager of managers”, then it doesn’t matter how well you use HTTP.

Finding #5: Beware of tools bearing gifts

To a large extent, the great thing about REST is how few tools there are to take it away from you. So you’re pretty much forced to understand what is going on in your contract as opposed to being kept ignorant by a wsdl2java type of toolkit. Sure, Java (and .NET) have improved in that regard, but really the cultural damage is done and the expectations have been set. Contrast this to “the ‘router’ is just a big case statement over URI-matching regexps”, from Tim Bray’s post on the Sun Cloud API, one of my main inspirations for this investigation.

REST is not inherently immune to the tool-controlling-the-hand syndrome. It’s just a matter of time until such tools try to make REST “accessible” to the “normal” developer (who can supposedly prevent thread deadlocks but not parse XML). Joe Gregorio warns about this in the context of WADL (to summarize: WADL brings XSD which leads to code generation). Keep this in mind next time someone states that REST is more “loosely coupled” than SOAP. It’s how you use it that matters.

Finding #6: Use screws, not glue, so we can peer inside and then close the lid again

The “view source” option is how I and many others learned HTML. It unfortunately created a generation of HTML monsters who never went past version 3.2 (the marbled background makes me feel young again). But it also fueled the explosion of the Web. On-the-wire inspection through soapUI is what allowed me to perform this investigation and report on it (WMI has allowed this for years, but WS-Management is what made it accessible and usable for anyone on any platform). This was, of course, in the context of SOAP which is also inspectable. Still, in that respect nothing beats plain HTTP which is why I recommend HTTP GET in finding #2 (make IDs dereferenceable) even though I don’t expect that the one-page-per-resource view is going to be the only way to access it in the finished product.

These (HTML source, on-the-wire XML and resource-description pages) rarely hit the human eye and yet their presence enables the development of the more commonly used views. Making it as easy as possible to see what is going on under the covers helps with learning, with debugging, with extending and with innovating. In the same way, 99% of web users don’t look at the HTML source (and 99.99% of them never see the HTTP requests), but the Web would not be what it is to them if this inspectability hadn’t been there to fuel its development.
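
If you want to see just how low the barrier is, here is a sketch (Python standard library only; the host and path are hypothetical) of watching the raw exchange without any special tooling:

    # Sketch: dump the on-the-wire HTTP conversation to stdout.
    import http.client

    conn = http.client.HTTPConnection("management.example.com")
    conn.set_debuglevel(1)  # print the request and response headers
    conn.request("GET", "/vms/42")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])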

Along the same line, make as few assumptions as possible about the consumers in your interfaces. Which, in practice, often means documenting what goes on the wire. WSDL/WADL can be used as a format, but they are at most one small component. Human-readable semantics are much more important.

Finding #7: Nothing is free

Part of what was so attractive about SOAP is everything you were going to get “for free” by using it. Message-level security (for all these use cases where your message starts over HTTP, then hops onto a train, then gets delivered by a carrier pigeon). Reliable messaging. Transactionality. Intermediaries (they were going to be a big deal in SOAP, as you can see in vestigial form today in the Nodes/Roles left in the spec – also, do you remember WS-Routing? I do.)

And it’s true that by now there is a body of specifications that support this as composable SOAP headers. But the lack of usage of these features contrasts with how often they were bandied in the early days of SOAP.

Well, I am detecting some of the same in the REST camp. How often have you heard about how REST enables caching? Or about how content types allow an ISP to compress images on the fly to speed up delivery over dial-up? Like in the SOAP case, these are real features and sometimes useful. It doesn’t mean that they are valuable to you. And if they are not, then don’t let them be used as justifications. Especially since they are not free. If caching doesn’t help me (because of low volume, because security considerations prevent a shared cache, etc.) then its presence actually adds a cost to me, since I now have to worry about whether something is cached or not and deal with ETags. Or I have to consistently remember to request that the cache be bypassed.
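
Here is what that bookkeeping looks like from the client side, as a sketch (Python; the URL is hypothetical): carry the ETag around, send If-None-Match, handle the 304, or explicitly opt out with Cache-Control.

    # Sketch of the cost caching imposes on a client that didn't ask for it.
    import urllib.error
    import urllib.request

    url = "https://cloud.example.com/vms/42"
    cached_body, cached_etag = None, None

    req = urllib.request.Request(url)
    if cached_etag:
        req.add_header("If-None-Match", cached_etag)
    # req.add_header("Cache-Control", "no-cache")  # the bypass variant
    try:
        with urllib.request.urlopen(req) as resp:
            cached_body = resp.read()
            cached_etag = resp.headers.get("ETag")
    except urllib.error.HTTPError as err:
        if err.code != 304:  # 304 means the cached copy is still good
            raise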

Finding #8: Start by sweeping your front door

Before you agonize about how RESTful your back-end management protocol is, how about you make sure that your management application (the user front-end) is a decent Web application? One with cool URIs, where the back button works, where bookmarks work, where the data is not hidden in some over-encompassing Flash/Silverlight thingy. Just saying.

***

Now for some questions still unanswered.

Question #1: Is this a flea market?

I am highly dubious of content negotiation and yet I can see many advantages to it. Mostly along the lines of finding #6: make it easy for people to look under the hood and get hold of the data. If you let them specify how they want to see the data, it’s obviously easier.

But there is no free lunch. Even if your infrastructure takes care of generating these different views for you (“no coding, just check the box”), you are expanding the surface of your contract. This means more documentation, more testing, more interoperability problems and more friction when time comes to modify the interface.

I don’t have enough experience with format negotiation to define the sweet spot of this practice. Is it one XML representation and one HTML, period (everything else gets produced by the client by transforming the XML)? But is the XML Atom-wrapped or not? What about RDF? What about JSON? Not to forget that SOAP wrapper; how hard can it be to add? But soon enough we are in legacy hell.
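
Here is what that growing menu looks like from the consumer side, as a sketch (Python; the URL and the honored types are hypothetical). Each line of the probe is one more representation the server has to produce, document and test:

    # Sketch: every honored Accept header is contract surface.
    import urllib.request

    url = "https://cloud.example.com/vms/42"
    for media_type in ("application/xml", "application/atom+xml",
                       "application/json", "text/html"):
        req = urllib.request.Request(url, headers={"Accept": media_type})
        with urllib.request.urlopen(req) as resp:
            print(media_type, "->", resp.headers.get("Content-Type"))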

Question #2: Mime-types?

The second part of Joe Gregorio’s WADL entry is all about Mime types and I have a harder time following him there. For one thing, I am a bit puzzled by the different directions in which Mime types go at the same time. For example, we have image formats (e.g. “image/png”), packaging/compression formats (e.g. “application/zip”) and application formats (e.g. “application/vnd.oasis.opendocument.text” or “application/msword”). But what if I have a zip full of PNG images? And aren’t modern word processing formats basically a zip of XML files? If I don’t have the appropriate viewer, maybe I’d like them to be at least recognized as ZIP files. I don’t see support for such composition and taxonomy in these types.
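
The fallback I am wishing for might look something like this sketch (Python; the list of “known” types is illustrative): when the declared type is unrecognized, at least detect the container format from the payload, since ODF and friends really are zips of XML files.

    # Sketch: degrade an unknown composite type to its container format.
    KNOWN_TYPES = {"image/png", "application/zip"}  # illustrative list

    def classify(content_type, payload):
        if content_type in KNOWN_TYPES:
            return content_type
        if payload[:4] == b"PK\x03\x04":  # the ZIP magic number
            return "application/zip"  # composite formats degrade to zip
        return "application/octet-stream"

    # e.g. classify("application/vnd.oasis.opendocument.text", blob)
    # yields "application/zip" when we have no ODF viewer installed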

And even within one type, things seem a bit messy in practice. Looking at the registered applications in the “options” menu of my Firefox browser, I see plenty of duplication:

  • application/zip vs. application/x-zip-compressed
  • application/ms-powerpoint vs. application/vnd.ms-powerpoint
  • application/sdp vs. application/x-sdp
  • audio/mpeg vs. audio/x-mpeg
  • video/x-ms-asf vs. video/x-ms-asf-plugin

I also wonder at what level of depth I want to take my Mime types. Sure I can use Atom as a package but if the items I am passing around happen to be CIM classes (serialized to XML), doesn’t it make sense to advertise this? And within these classes, can I let you know which domain (e.g. which namespace) my resources are in (virtual machines versus support tickets)?

These questions may simply be a reflection of my lack of maturity in the fine art of using Mime types as part of protocol design. My experience with them is more of the “find the type that works through trial and error and then leave it alone” kind.

[Side note: the first time I had to pay attention to Mime types was back in 1995/1996, playing with non-parsed headers and the multipart/x-mixed-replace type to bring some dynamism to web pages (that was before JavaScript or even animated GIFs). The site is still up, but the admins have messed up the Apache config so that the CGIs aren’t executed anymore but return the Python code. So, here are some early Python experiments from yours truly: this script was a “pushed” countdown and this one was a “pushed” image animation. Cool stuff at the time, though not in a “get a date” kind of way.]

On the other hand, I very much agree with Joe’s point that “less is more”, i.e. that by not dictating how the semantics of a Mime type are defined the system forces you to think about the proper way to define them (e.g. an English-language RFC). As opposed to WSDL/XSD which gives the impression that once your XML validator turns green you’re done describing your interface. These syntactic validations are a complement at best, and usually not a very useful one (see “fat-bottomed specs”).

In comments on previous posts, Stu Charlton also emphasizes the value that Mime types bring. “Hypermedia advocates exposing a variety of links for such state-transitions, along with potentially unique media types to describe interfaces to those transitions.” I get the hypermedia concept, the HATEOAS approach and its very practical benefits. But I am still dubious about the role of Mime types in achieving them and I am not the only one with such qualms. I have too much respect for Joe and Stu to dismiss it entirely, but until I get an example that makes it “click” in practice for me I won’t sweat about Mime types too much.

Question #3: Riding the Zeitgeist?

That’s a practical question rather than a technical one, but as a protocol creator/promoter you are going to have to decide whether you market it as “RESTful”. If I have learned one thing in my past involvement with standards it is that marketing/positioning/impressions matter for standards as much as for products. To a large extent, for Clouds, Linked Data is a more appropriate label. But that provides little marketing/credibility oomph with CIOs compared to REST (and less buzzword-compliance for the tech press). So maybe you want to write your spec based on Linked Data and then market it with a REST ribbon (the two are very compatible anyway). Just keep in mind that REST is the obvious choice for protocols in 2009 in the same way that SOAP was a few years ago.

Of course this is not an issue if your specification is truly RESTful. But none of the current Cloud “RESTful” APIs is, and I don’t expect this to change. At least if you go by Roy Fielding’s definition (or Paul’s handy summary):

A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of-band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC’s functional coupling].

And (in a comment) Mark Baker adds:

I’ve reviewed lots of “REST APIs”, many of them privately for clients, and a common theme I’ve noticed is that most folks coming from a CORBA/DCE/DCOM/WS-* background, despite all the REST knowledge I’ve implanted into their heads, still cannot get away from the need to “specify the interface”. Sometimes this manifests itself through predefined relationships between resources, specifying URI structure, or listing the possible response codes received from different resources in response to the standard 4 methods (usually a combination of all those). I expect it’s just habit. But a second round of harping on the uniform interface – that every service has the same interface and so any service-specific interface specification only serves to increase coupling – sets them straight.

So the question of whether you want to market yourself as RESTful (rather than just as “inspired by the proper use of HTTP illustrated by REST”) is relevant, if only because you may find the father of REST throwing (POSTing?) tomatoes at you. There is always a risk in wearing clothes that look good but don’t quite fit you. The worst time for your pants to fall off is when you suddenly have to start running.

For more on this, refer to Ted Neward’s excellent Roy decoder ring where he not only explains what Roy means but more importantly clarifies that “if you’re not doing REST, it doesn’t mean that your API sucks” (to which I’d add that it is actually more likely to suck if you try to ape REST than if you allow yourself to be loosely inspired by it).

***

Wrapping up the wrap-up

There is one key topic that I had originally included in this wrap-up but decided to remove: extensibility. Mark Hapner brings it up in a comment on a previous post:

It is interesting to note that HTML does not provide namespaces but this hasn’t limited its capabilities. The reason is that links are a very effective mechanism for composing resources. Rather than composition via complicated ‘embedding’ mechanisms such as namespaces, the web composes resources via links. If HTML hadn’t provided open-ended, embeddable links there would be no web.

I am the kind of guy who would have namespace-qualified his children when naming them (had my wife not stepped in) so I don’t necessarily see “extension via links” as a negation of the need for namespaces (best example: RDF). The whole topic of embedding versus linking is a great one but this post doesn’t need another thousand words and the “REST in practice” umbrella is not necessarily the best one for this discussion. So I hereby conclude my “REST in practice for IT and Cloud management” series, with the intent to eventually start a “Linked Data in practice for IT and Cloud management” series in which extensibility will be properly handled. And we can also talk about querying (conspicuously absent from Cloud APIs, unless CMDBf is now a Cloud API) and versioning. As a teaser for the application of Linked Data to IT/Cloud, I will leave you with what Vint Cerf has to say.

[UPDATED 2010/1/27: I still haven’t written the promised “Linked Data in practice for IT and Cloud management” post, but this explanation of the usage of Linked Data for data.gov.uk pretty much says it all. I may still write a post describing how what Jeni says about government data applies to Cloud management APIs, but it’s almost too obvious to bother. Actually, there may be reasons why Cloud management benefits even more from Linked Data than UK government data, so it may still be worth a post. At some point. When I convince myself that it may influence things rather than be background noise.]

15 Responses to REST in practice for IT and Cloud management (part 3: wrap-up)

  1. Great post. After I’ve read it a couple more times, I’m sure I’ll have something clever to say 8^). BTW, s/brandied/bandied/

    Mitch

  2. There are really two things that have aggravated me about the REST community.

    1) This idea of self-describing messages just doesn’t work very well for machine-based clients. Self-describing messages work very well for browsers because a human is driving state transitions. A human is capable of making decisions on the fly based on rendered content. HTML has worked so well with so few constraints because a human is driving the interaction.

    A machine client is not capable of making decisions on the fly. Things have to be pre-defined up-front for a machine-based client to work. This is especially true of error conditions. This doesn’t mean REST principles aren’t useful for machine-based clients, it just means that these core principles are applied a little bit differently. For instance, I’ve found that links are more useful as a “Naming Service” replacement than as a way to give a client options for state transition. A link’s real purpose in machine-based clients is to make URIs opaque. Maybe I’m just horrible at REST, but I don’t see the “dynamic decision making” capabilities of links making such a big impact in machine-based clients.

    I think a huge arrogance of the REST community is not thinking about machine-based clients, because much of the success of the web has been human-client driven. There’s going to be a bit more up-front predefinition for machine-based client consumption of REST services. Which brings me to my 2nd point…

    2) There’s too much “black and white” going on in the REST community. It seems you either are or aren’t REST and you better not dare call your application or API RESTful if you aren’t 100%. What drives me crazy is that not even the Web itself is 100% REST. The vast majority of applications (I’m not talking about websites but applications) on the web do not conform 100% to REST principles. They use GET to change resource state. They rely on having a session with the server. They overload POST by tunneling mini-RPCs (think of the action URIs in Struts for example). If you truly look at the success of the web, it’s the hypermedia aspect of it that is the underlying cause, not a uniform interface, and surely not stateless interaction with resources. What boggles my mind is that these same fanbois that call your interface unRESTful if you’ve had to bend a principle are the same people that promote REST as being the architecture of the Web. This is pretty much why I’ve completely unsubscribed from the rest-discuss list. It is just too aggravating to listen to all these people describe web services in terms of black and white and not grays…

    Anyways, sorry to rant. Interesting blog post.

  3. Roger Menday

    Great post. Looking forward to the next series. I think the REST/RDF (or REST/LinkedData) combination is a superb one! One which can tackle the Cloud, Management type scenarios – and many others.

    Expose the graph of resources as LinkedData, leverage lots of existing semantic web tools to drive the domain modeling, i.e. OWL, manipulate the graph RESTfully, and add a SPARQL endpoint to allow the graph to be queried. A small illustration of this for IaaS is at http://fujocci.appspot.com/iamsecret

    A small point on your question #1: as RDF is an infoset kind of thing, there are many different serialisations of it: XML, text, JSON … all of which are the same list of triples when it comes down to it. I’m not sure that is a path to legacy hell anymore (?)

  4. Joshalot

    REST in peace!

  5. Pingback: GIS-Lab Blog» Архив блога » Новости вокруг

  6. Stu

    I’ve been meaning to respond to this with a blog entry, but of course Movable Type has been acting up on me since I upgraded back in November…

    Anyway, yes, I agree with most of your points, and think you’ve really hit it with #1 (hypermedia), #2 (the importance of URIs, but also that we haven’t solved all problems here), and #4 (which I would term a restatement of ‘design for serendipity’).

    I also think that for Clouds, and IT Management, it’s all about linked data.

    Regarding MIME types, well, there are limitations to them, and we run into them especially when we start applying REST to more and more data variants or custom (i.e. business) data structures. They were designed in a different (pre-Web!) era, and it’s wonderful they’ve lasted this long, but surely something new must come along to supplant them.

    And a response to Bill Burke, whom I’m not sure will be reading this:

    “A machine client is not capable of making decisions on the fly. Things have to be pre-defined up-front for a machine-based client to work.”

    This is true to a point — but just a point, and an evolving one. The tendency among the integration crowd has been to pre-define things way, way too much, in a way that drastically limits the amount of forward compatibility.

    Think through the evolution of data integration, from fixed-format data files with zero self-description, through comma-delimited files, through labeled delimited files, RPC formats, DII or Reflective calls, tagged data, and SQL or ETL-based integration. ETL tools out there weren’t coded against a particular schema, and are capable of doing backflips with one’s data.

    “Maybe I’m just horrible at REST, but I don’t see the “dynamic decision making” capabilities of links making such a big impact in machine-based clients.”

    See, this I disagree with, as it’s one of the major drivers behind REST’s loose coupling. There is plenty of evidence abounding that a machine can make a dynamic decision based on a link. Your web browser does it all the time when it interprets DIV tags relative to CSS, or what algorithm to use to render an IMG when it gets the MIME type of that image (or sniffs it ;). RSS/Atom aggregators do it all the time as well — what do they do with alternates? How do they render them? It depends.

    “There’s too much “black and white” going on in the REST community. It seems you either are or aren’t REST and you better not dare call your application or API RESTful if you aren’t 100%.”

    The main problem is in devaluing the term. If someone puts together a crap API, and it’s objectively bad (i.e. breaks the semantics of HTTP), then it’s worth calling that out.

    On the other hand, the constant bickering about “purity” through PUT vs. POST or URI syntax and parameter-laden URIs are IMO games for chumps. They matter, but not much.

  7. @Stu:

    Yeah, web browsers do a lot of nice things, but what they don’t do is make decisions. Humans do that. Machine-based clients need to make decisions based on a preset finite set of rules which are decided upon ahead of time. A human using a browser can basically follow any link rendered by the browser they want. I think this is a fundamental disconnect that a lot of RESTafarians make. Including Roy.

  8. @Stu:

    Not sure I articulated it well enough. But, again, IMO, browsers are a horrible example as they don’t drive state transitions, which is a huge point that many RESTafarians miss. They only render, which basically means they are a transformation engine changing (X)HTML into pixels.

    I also agree that too much definition can be done at times. But, as in regular software development, bad engineers will never worry about backward compatibility or fluidity. The same applies to custom media types.

  9. Stu

    @Bill:

    My first reaction is that this feels almost like we are arguing over moving goal posts. I don’t think there’s a disconnect, or any disagreement that machine-based decisions are based on a set of finite rules, even from Roy, at least I’ve never read any such thing from him. A browser makes dynamic decisions to transform and render multiple media types onto a screen, which renders the state of the hypertext application. But it does not normally make decisions that enact state transitions on the resources themselves, because that’s what the human is for.

    On the other hand, can we think of machine agents that do make transitions on the resources themselves? There are a few, usually mashups and/or thick front-ends to RESTful APIs, but they’re certainly not as pervasive. I think the issue may be that you see limitations of the current Web architecture for the problems you’re interested in (e.g. classic enterprise integration scenarios), and there is a frustrating lack of admission of the amount of work that remains to address them.

    To me, a RESTful interface for changing resource state is not very different from dynamic interfaces such as CORBA DII, or COM Automation, or RMI with Reflection, or even SOAP sans WSDL — with the exception of uniform method & return code semantics, and that there’s a limited amount of out-of-band information required to access the interface description – GET often suffices. But there is no magic here, it’s just an interface, little different from those we use in other technologies. The main benefit is that it presumes a dynamic interface by default, one that can be introspected, instead of hard-coded against. A RESTful toolkit presumes a dynamic interface by default instead of as a bag on the side, as with normal CORBA stubs/skeletons, or COM vtables, or RMI Remote interfaces.

    When you presume a dynamic interface by default, it enables freedom for an agent to have much more varied decision making behaviour than just if-then-else statements. We’ve seen plenty of evidence for this on the agent-side with the use of GET, we’ve seen less evidence of this on the resource-side with automated use of POST/PUT/DELETE, mainly because it’s so dependent on building a modular set of media types to achieve broad interoperability. Comprehension and standardization here has been glacial, and the REST community does often downplay the large amount of effort still required to bring the “write” side of the read/write web up to par. This remains frustrating.

  10. @Stu:

    You’re correct. I’m thinking of classic enterprise integration scenarios. I think REST can really shine here as well, but there is still *a lot* of work to do. We need to take techniques, patterns, and services defined in these traditional environments and iterate on them over and over again to see where REST can improve things. This is the main reason REST-* (rest-star.org) was created. To figure out where these classic enterprise idioms intersect with REST.

  11. Pingback: William Vambenepe — REST in practice for IT and Cloud management (part 2: configuration management)

  12. Pingback: William Vambenepe — Square peg, REST hole

  13. Pingback: William Vambenepe — Two versions of a protocol is one too many

  14. Pingback: William Vambenepe — Amazon proves that REST doesn’t matter for Cloud APIs

  15. Pingback: » REST + RDF finaly a practical solution? Cloud Comedy, Cloud Tragedy