Category Archives: Specs

A post-mortem on the previous IT management revolution

Before rushing to standardize “Cloud APIs”, let’s take a look back at the previous attempt to tackle the same problem, namely IT management integration and automation. I am referring to the definition of specifications that attempted to use the then-emerging SOAP-based Web services framework to easily integrate IT management systems and their targets.

Leaving aside the “Cloud” spin of today and the “Web services” frenzy of yesterday, the underlying problem remains to provide IT services (mostly applications) in a way that offers the best balance of performance, availability, security and economy. Concretely, it is about being able to deploy whatever IT infrastructure and application bits need to be deployed, configure them and take any required ongoing action (patch, update, scale up/down, optimize…) to keep them humming so customers don’t notice anything bothersome and you don’t break any regulation. Or rather so that any disruption a customer sees and any mandate you violate cost you less than it would have cost to avoid them.

The realization that IT systems are moving more and more towards distributed/connected applications was the primary reason that pushed us towards the definition of Web services protocols geared towards management interactions. By providing a uniform and network-friendly interface, we hoped to make it convenient to integrate management tasks vertically (between layers of the IT stack) and horizontally (across distributed applications). The latter is why we focused so much on managing new entities such as Web services, their execution environments and their conversations. I’ll refer you to the WSMF submission that my HP colleagues and I made to OASIS in 2003 for the first consistent definition of such a management framework. The overview white paper even has a use case called “management as a service” if you’re still not convinced of the alignment with today’s Cloud-talk.

Of course there are some differences between Web service management protocols and Cloud APIs. Virtualization capabilities are more advanced than when the WS effort started. The prospect of using hosted resources is more realistic (though still unproven as a mainstream business practice). Open source components are expected to play a larger role. But none of these considerations fundamentally changes the task at hand.

Let’s start with a quick round-up and update on the most relevant efforts and their status.

Protocols

WSMF (Web Services Management Framework): an HP-created set of specifications, submitted to the OASIS WSDM working group (see below). Was subsumed into WSDM. Not only a protocol BTW, it includes a basic model for Web services-related artifacts.

WS-Manageability: An IBM-led alternative to parts of WSDM, also submitted to OASIS WSDM.

WSDM (Web Services Distributed Management): An OASIS technical committee. Produced two standards (a protocol, “Management Using Web Services” and a model of Web services, “Management Of Web Services”). Makes use of WSRF (see below). Saw a few implementations but never achieved real adoption.

OGSI (Open Grid Services Infrastructure): A GGF (the organization now known as OGF) standard to provide a service-oriented resource manipulation infrastructure for Grid computing. Replaced with WSRF.

WSRF: An OASIS technical committee which produced several standards (the main one is WS-ResourceProperties). Started as an attempt to align the GGF/OGSI approach to resource access with the IT management approach (represented by WSDM). Saw some adoption and is currently quietly in use under the covers in the GGF/OGF space. Basically replaced OGSI but didn’t make it in the IT management world because its vehicle there, WSDM, didn’t.

WS-Management: A DMTF standard, based on a Microsoft-led submission. Similar to WSDM in many ways. Won the adoption battle against it. Based on WS-Transfer and WS-Enumeration.

WS-ResourceTransfer (aka WS-RT): An attempt to reconcile the underlying foundations of WSDM and WS-Management. Stalled as a private effort (IBM, Microsoft, HP, Intel). Was later submitted to the W3C WS-RA working group (see below).

WSRA (Web Services Resource Access): A W3C working group created to standardize the specifications that WS-Management is built on (WS-Transfer etc) and to add features to them in the form of WS-RT (which was also submitted there, in order to be finalized). This is (presumably) the last attempt at standardizing a SOAP-based access framework for distributed resources. Whether the window of opportunity to do so is still open is unclear. Work is ongoing.

WS-ResourceCatalog: A discovery helper companion specification to WS-Management. Started as a Microsoft document, went through the “WSDM/WS-Management reconciliation” effort, emerged as a new specification that was submitted to DMTF in May 2007. Not heard of since.

CMDBf (Configuration Management Database Federation): A DMTF working group (and soon to be standard) that mainly defines a SOAP-based protocol to query repositories of configuration information. Not linked with (or dependent on) any of the specifications listed above (it is debatable whether it belongs in this list or is part of a new breed).

Modeling

DCML (Data Center Markup Language): The first comprehensive effort to model key elements of a data center, their relationships and their policies. Led by EDS and Opsware. Never managed to attract the major management vendors. Transitioned to an OASIS member section and died of being ignored.

SDM (System Definition Model): A Microsoft specification to model an IT system in a way that includes constraints and validation, with the goal of improving automation and better linking the different phases of the application lifecycle. Was the starting point for SML.

SML (Service Modeling Language): Currently a W3C “proposed recommendation” (soon to be a recommendation, I assume) with the same goals as SDM. It was created, starting from SDM, by a consortium of companies that eventually submitted it to W3C. No known adoption other than the Eclipse COSMOS project (Microsoft was supposed to use it, but there hasn’t been any news on that front for a while). Technically, it is a combination of XSD and Schematron. It appears dead, unless it turns out that Microsoft is indeed using it (I don’t know whether System Center is still using SDM, whether they are adopting SML, whether they are moving towards M or whether they have given up on the model-centric vision).

CML (Common Model Library): An effort by the SML authors to create a set of model elements using the SML metamodel. Appears to be dead (no news in a long time and the cml-project.org domain name that was used seems abandoned).

SDD (Solution Deployment Descriptor): An OASIS standard to define a packaging mechanism meant to simplify the deployment and configuration of software units. It is to an application archive what OVF is to a virtual disk. Little adoption that I know of, but maybe I have a blind spot on this.

OVF (Open Virtualization Format): A recently released DMTF standard. Defines a packaging and descriptor format to distribute virtual machines. It does not define a common virtual machine format, but a wrapper around it. Seems to have some momentum. Like CMDBf, it may be best thought of as part of a new breed rather than directly associated with WS-Management and friends.
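Since the “wrapper, not VM format” distinction is easy to miss, here is a rough sketch of what such a descriptor looks like, held in a Python string and parsed with the standard library. The element names, attributes and namespace are approximations of the OVF 1.0 schema written from memory, and the file names are invented; don’t treat this as a conformant OVF document.

```python
import xml.etree.ElementTree as ET

# A stripped-down, OVF-style descriptor sketch: the disk image itself stays opaque
# (any hypervisor-specific format); the envelope only describes and references it.
# The namespace and element names approximate the OVF 1.0 schema from memory.
OVF_SKETCH = """
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File id="disk-file-1" href="webtier-disk1.vmdk"/>
  </References>
  <DiskSection>
    <Info>Virtual disks used by this package</Info>
    <Disk diskId="disk-1" fileRef="disk-file-1" capacity="20"/>
  </DiskSection>
  <VirtualSystem id="web-tier">
    <Info>One VM, wrapped with deployment metadata</Info>
    <VirtualHardwareSection>
      <Info>2 vCPUs, 4GB of memory, one disk (details elided)</Info>
    </VirtualHardwareSection>
  </VirtualSystem>
</Envelope>
"""

# The wrapper is plain XML that any tool can inspect, independently of the
# hypervisor that will eventually consume the referenced disk image.
envelope = ET.fromstring(OVF_SKETCH)
ns = {"ovf": "http://schemas.dmtf.org/ovf/envelope/1"}
for f in envelope.findall("./ovf:References/ovf:File", ns):
    print("referenced disk image:", f.get("href"))
```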

This is not an exhaustive list. I have left aside the eventing aspects (WS-Notification, WS-Eventing, WS-EventNotification) because, while relevant, they are a larger discussion and this entry is too long already (see here and here for some updates from late last year on the eventing front). It also does not cover the Grid work (other than OGSI/WSRF to the extent that they intersect with the IT management world), even though a lot of the work that took place there is just as relevant to Cloud computing as the IT management work listed above. Especially CDDLM/CDL, an abandoned effort to port SmartFrog to the then-hot XML standards, from which there are plenty of relevant lessons to extract.

The lessons

What does this inventory tell us that’s relevant to future Cloud API standardization work? The first lesson is that protocols are easy and models are hard. WS-Management and WSDM technically get the job done. CMDBf will be a good query language. But none of the model-related efforts listed above seem to have hit the mark of “doing the job”. With the possible exception of OVF, which is promising (though the current expectations on it are often beyond what it really delivers). In general, the more focused and narrow a modeling effort is, the more successful it seems to be (with OVF as the most focused of the list and CML as the other extreme). That’s lesson learned number two: models that encompass a wide range of systems are attractive, but impossible to deliver. Models that focus on a small sub-area are the way to go. The question is whether these specialized models can at least share a common metamodel or other base building blocks (a type system, a serialization, a relationship model, a constraint mechanism, etc), which would make life easier for orchestrators. SML tries (tried?) to be all that, with no luck. RDF could be all that, but hasn’t managed to get noticed in this context. The OVF and SDD examples seem to point out that the best we’ll get is XML as a shared foundation (a type system and a serialization). At this point, I am ready to throw in the towel on achieving more modeling uniformity than XML provides, and ready to do the needed transformations in code instead. At least until the next window of opportunity arrives.

I wish that rather than being 80% protocols and 20% models, the effort in the WS-based wave of IT management standards had been the other way around. So we’d have a bit more to show for our work, for example a clear, complete and useful way to capture the operational configuration of application delivery services (VPN, cache, SSL, compression, DoS protection…). Even if the actual specification turns out to not make it, its content should be able to inform its successor (in the same way that even if you don’t use CIM to model your server it is interesting to see what attributes CIM has for a server).

It’s less true with protocols. Either you use them (and they’re very valuable) or you don’t (and they’re largely irrelevant). They don’t capture domain knowledge that’s intrinsically valuable. What value does WSDM provide, for example, now that it’s collecting dust? How much will the experience inform its successor (other than trying to avoid the WS-Addressing disaster)? The trend today seems to be that a more direct use of HTTP (“REST”) will replace these protocols. Sure. Fine. But anyone who expects this break from the past to be a vaccination against past problems is in for a nasty surprise. Because, and I am repeating myself, it’s the model, stupid. Not the protocol. Something I (hopefully) explained in my comments on the Sun Cloud API (before I knew that caring about this API might actually become part of my day job) and something I’ll come back to in a future post.

Another lesson is the need for clear use cases. Yes, it feels silly to utter such an obvious statement. But trust me, standards groups still haven’t gotten this. It wasn’t until years spent on WSDM and then WS-Management that I realized that most people were not going after management integration, as I was, but rather manageability. “Manageability” is concerned with discovering and monitoring individual resources, while “management integration” is concerned with providing a systematic view of the environment, with automation as the goal. In other words, manageability standards can allow you to get a traditional IT management console without the need for agents. Management integration standards can allow you to coordinate your management systems and automate their orchestration. WS-Management is for manageability. CMDBf is in the management integration category. Many of the (very respectful and civilized) head-butting sessions I engaged in during the WSDM effort can be traced back to the difference between these two sets of use cases. And there is plenty of room for such disconnect in the so-loosely-defined “Cloud” world.

We have also learned (or re-learned) that arbitrary non-backward compatible versioning, e.g. for political or procedural reasons as with WS-Addressing, is a crime. XML namespaces (of the XSD and WSDL types, as well as URIs used in similar ways in specifications, e.g. to identify a dialect or profile) are tricky, because they don’t have backward compatibility metadata and because of the practice of using organizations’ domain names in the URI (as opposed to specification-specific names that can be easily transferred, e.g. cmdbf.org versus dmtf.org/cmdbf). In the WS-based management world, we inherited these problems at the protocol level from the generic WS stack. Our hands are more or less clean, but only because we didn’t have enough success/longevity to generate our own versioning problems at the model level. But those would have been there had these models been able to see the light of day (CML) or see adoption (DCML).
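To make the “no backward compatibility metadata” point concrete, here is a small sketch. The example.org namespaces, element names and dates are invented for illustration, not taken from any actual specification: a consumer pinned to one namespace URI cannot tell a compatible revision from a breaking one, so any change of URI looks like a breaking change.

```python
import xml.etree.ElementTree as ET

# Hypothetical namespaces for two revisions of the same management payload.
# The URIs carry no compatibility metadata, so a consumer pinned to V1 has no
# way to tell a compatible revision from a breaking one; it just fails to match.
NS_V1 = "http://example.org/mgmt/2007/06/resource"
NS_V2 = "http://example.org/mgmt/2009/03/resource"   # same element, new URI

def read_cpu_count(xml_text: str):
    doc = ET.fromstring(xml_text)
    elem = doc.find(f"{{{NS_V1}}}CpuCount")           # consumer only knows V1
    return None if elem is None else int(elem.text)

v1_doc = f'<Server xmlns="{NS_V1}"><CpuCount>8</CpuCount></Server>'
v2_doc = f'<Server xmlns="{NS_V2}"><CpuCount>8</CpuCount></Server>'

print(read_cpu_count(v1_doc))   # 8
print(read_cpu_count(v2_doc))   # None: identical content, "incompatible" document
```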

There are also practical lessons that can be learned about the tactics and strategies of the main players, because it looks like they may not change very much, as corporations or even as individuals. Karla Norsworthy speaks for IBM on Cloud interoperability standards in this article. Andrew Layman represented Microsoft in the post-Manifestogate Cloud patch-up meeting in New York. Winston Bumpus is driving the standards strategy at VMware. These are all veterans of the WS-Management, WSDM and related wars, er, collaborations (and more generally the whole WS-* effort for the first two). For the details of what there is to learn from the past in that area, you’ll have to corner me in a hotel bar and buy me a few drinks though. I am pretty sure you’d get your money’s worth (I am not a heavy drinker)…

In summary, here are my recommendations for standardizing Cloud API, based on lessons from the Web services management effort. The theme is “focus on domain models”. The line items:

  • Have clear goals for each effort. E.g. is your use case to deploy and run an existing application in a Cloud-like automated environment, or is it to create new applications that efficiently take advantage of the added flexibility? Very different problems.
  • If you want to use OVF, then beef it up to better apply to Cloud situations, but keep it focused on VM packaging: don’t try to grow it into the complete model for the entire data center (e.g. a new DCML).
  • Complement OVF with similar specifications for other domains, like the application delivery systems listed above. Informally try to keep these different specifications consistent, but don’t over-engineer it by repeating the SML attempt. It is more important to have each specification map well to its domain of application than it is to have perfect consistency between them. Discrepancies can be bridged in code, or in a later incarnation.
  • As you segment by domain, as suggested in the previous two bullets, don’t segment the models any further within each domain. Handle configuration, installation and monitoring issues as a whole.
  • Don’t sweat the protocols. HTTP, plain old SOAP (don’t call it POS) or WS-* will meet your needs. Pick one. You don’t have a scalability challenge as much as you have a model challenge so don’t get distracted here. If you use REST, do it in the mindset that Tim Bray describes: “If you’re going to do bits-on-the-wire, Why not use HTTP? And if you’re going to use HTTP, use it right. That’s all.” Not as something that needs to scale to Web scale or as a rebuff of WS-*.
  • Beware of versioning. Version for operational changes only, not organizational reasons. Provide metadata to assert and encourage backward compatibility.

This is not a recipe for the ideal result but it is what I see as practically achievable. And fault-tolerant, in the sense that the failure of one piece would not negate the value of the others. As much as I have constrained expectations for Cloud portability, I still want it to improve to the extent possible. If we can’t get a consistent RDF-based (or RDF-like in many ways) modeling framework, let’s at least apply ourselves to properly understanding and modeling the important areas.

In addition to these general lessons, there remains the question of what specific specifications will/should transition to the Cloud universe. Clearly not all of them, since not all of them even made it in the “regular” IT management world for which they were designed. How many then? Not surprisingly (since IBM had a big role in most of them), Karla Norsworthy, in the interview mentioned above, asserts that “infrastructure as a service, or virtualization as a paradigm for deployment, is a situation where a lot of existing interoperability work that the industry has done will surely work to allow integration of services”. And just as unsurprisingly, Amazon’s Adam Selipsky, whose company has nothing to do with the previous wave but finds itself in a leadership position with regard to Cloud computing, is a lot more circumspect: “whether existing standards can be transferred to this case [of cloud computing] or if it’s a new topic is [too] early to say”. OVF is an obvious candidate. WS-Management is by far the most widely implemented of the bunch, so that gives it an edge too (it is apparently already in use for Cloud monitoring, according to this press release by an “innovation leader in automated network and systems monitoring software” that I had never heard of). Then there is the question of what IBM has in mind for WS-RT (and other specifications that the WS-RA working group is toiling on). If it’s not used as part of a Cloud API then I really don’t know what it will be used for. But selling it as such is going to be an uphill battle. CMDBf is a candidate too, as a model-neutral way to manage the configuration of a distributed system. But here I am, violating two of my own recommendations (“focus on models” and “don’t isolate config from other modeling aspects”). I guess it will take another pass to really learn…

[UPDATED 2009/5/7: Senior moment! When writing this entry I forgot that I wrote an earlier entry (in late 2007) specifically to describe the difference between “manageability” and “management integration”. So here it is, if you care for more details on this topic.]


Reality check on Cloud portability

SD Times recently published an interesting article about “cloud interoperability”. It has some well-informed opinions. But, like all Cloud-related discussions, it also suffers from mixing a bunch of things. The word “interoperability” is alternatively applied to the Cloud infrastructure services (in which case this “interoperability” is a way to provide application “portability”) and to the Cloud-hosted applications themselves.

Application-level interoperability (“look, my GAE-hosted app successfully sent an HTTP request to an Azure-hosted app, open the champagne”) is not very new or exciting anymore and is often used as an interoperability smokescreen (hello Salesforce.com). Many of these interop concerns are long solved and the others (like authentication and data migration) need to be solved in ways that don’t care whether the application is hosted in your Silicon Valley garage or near the Columbia river.

Cloud infrastructure compatibility (in other words application portability) is the more interesting discussion. I keep reading that it is needed (“no vendor lock-in, not ever again”) for enterprises to move to the Cloud. Being a natural-born cynic, I always ask myself whether those asking for it are naive (sometimes) or have ulterior motives (e.g. trying to catch up with Amazon by entangling them in the standards net – some of my fellow cynics see the Open Cloud Manifesto as just this).

Because the reality is that, Manifesto or no Manifesto, you are not going to get application portability across IaaS-type Cloud providers. At least for production applications. Sorry. As a consolation prize, you may get some runtime portability such that we’ll be shown nice demos of prototype apps moving from one provider to another (either as applications or as virtual machines). Clap clap until you realize that they left behind their monitoring capabilities, or that their configuration rules don’t validate anything anymore. And that your printer ran out of red ink when printing the latest compliance report. Oops.

Maybe I am biased because they are both my friends and ex-colleagues, but the HP guys make the most sense in the SD Times article. Tim Hall has it right when he suggests “that the industry should focus on specific problems that it is going to solve around deployment and standardized monitoring”. And the other HP Tim, Mr. van Ash, rightly points out that we should “stop promising miracles”, which Forrester’s Jeffrey Hammond echoes, saying that there is a difference between a standard and “plug-and-play in reality”.

Tim Hall uses SQL as an example of a realistic common baseline. J2EE would be another one. They provide a good reality check. Standards are always supposed to prevent vendor lock-in. And there is a need for some of that, of course. But look at the track records. How many applications do you know that are certified and supported on any SQL database, any Unix operating system and any J2EE app server? And yet, standardizing queries on relational data and standardizing an enterprise-class runtime environment for one programming language are pretty constrained scopes in the grand scheme of things. At least compared to all the aspects that you need to standardize to provide real Cloud portability (security, monitoring, provisioning, configuration, language runtime and/or OS, data storage/retrieval, network configuration, integration with local apps, metering/billing, etc). And we’re supposed to put together a nice bundle of standards that will guarantee drag-and-drop portability across all these concerns? In how many lifetimes? By then, Cloud computing will have been replaced by the next big thing (galaxycomputing.com is still available BTW).

Not to mention that this standardization comes hand in hand with constraints on what you can do. That’s why I read Amazon’s Adam Selipsky’s comment that allowing customers to do “whatever they want” is vital as a way to say “get real” to requests for application portability, while allowing him to sound helpful rather than obstructionist.

This doesn’t mean that these standards are not useful. They make application portability possible if not free. They make for much improved productivity through generic tools and reusable developer knowledge. We still need all this.

Here is the best that can realistically happen in the “application portability across IaaS providers” area for at least 10 years:

  • a set of partial standards for small parts of the Cloud computing domain (see list above), many of which already exist.
  • a set of RightScale-like tools that do a lot of the grunt work of mapping/hiding/transforming between providers, with various degrees of success.
  • the need for application providers to certify their applications on Cloud providers one by one anyway and to provide cloning/migration as a feature of the application rather than an infrastructure-level task.

That’s assuming that IaaS providers become a major business and that there remains a difference between service providers and software providers. The other option is that the whole Cloud excitement goes back to SaaS only, that application creators are also hosting providers, that the only resource you get in a “utility” fashion is the application itself. At which point application portability is not a concern anymore and we go back to “only” worrying about data portability and application interoperability, an easier problem and one on which we have come a long way already. If this is what comes to pass then the challenge of Cloud portability may well be one of the main reasons. Along with the lack of revenue/margin potential for many of the actors in an IaaS world, as my CEO is fond of pointing out.

[UPDATED 2009/4/22: F5’s Lori MacVittie provides a very nice illustration of the same point, in her explanation of why OVF is not a cloud portability silver bullet.]

[UPDATED 2009/6/1: Soon after posting this entry I was contacted by people at SD Times about turning it into a “guest view” article in the June issue. It has just been published. It’s also in the paper version.]


“Federationing”

I am glad to see that, as it inches towards standardization in the DMTF, the CMDBf specification is getting more visibility. Forrester’s Glenn O’Donnell recently wrote very positively about it on his blog, presenting it as a key enabler for a federation of MDRs (Management Data Repositories, a term introduced by the CMDBf specification so don’t look for it in ITIL). He argues this is the only way (rather than a single data store) to fulfill the ITIL-defined role of a CMS. Rob England (the IT Skeptic) has also shared his thoughts about CMDBf and they were noticeably less enthusiastic, to say the least. While Glenn calls the specification “profound”, Rob calls it “the most over-hyped vendor marketing smokescreen ever”. There is plenty of room in between them, which is where I sit. As I explained before, it does have real value (as a query language/protocol for system integration) but is nowhere near providing “federation” capabilities.

I am happy to see Glenn approve of CMDBf and I agree with him that accurate specialized MDRs are more useful than a single store that attempts to capture all the relevant data. As Glenn puts it, “pockets of the truth are far superior to unified ambiguity”. But I wasn’t very comfortable with the tone of his article, which seemed to almost encourage the proliferation of these MDRs. Maybe he was just trying to present a clean break with the “one big CMDB” approach and overreached. Or maybe I am just not reading properly.

Because while I agree that the answer is not “one and only one store” I also don’t want to lose the value of having as much unification of the IT model as possible. Both at the data level (i.e. same metamodel/model, consistent retention/roll-up policies…) and the access level (i.e. in the same physical store, with shared access control, accessible using a well-known DSL for data manipulation…). Metamodel transformation and model bridging are costly (in accuracy, maintenance, reliability). If your CMS does more than just support a “model navigation” GUI it may then need to run large queries that go across several portions of your IT model, including multiple different domains (e.g. a compliance rule kicked-off at the app level based on the type of data it manipulates that ends up having to look at the physical location of the servers running the hypervisors for the virtual machines that power the app). Through such global queries you can apply configuration rules, do impact analysis, event correlation, provide context to your transaction tracing, etc. No consolidation means no such queries (or a very limited subset). Considering the current state of federation, there is a lot more that you can do with your CMS if you have a very small number of MDRs rather than a sea of “federated” MDRs. This is why, as Oracle acquires IT management companies, we deliberately integrate their repositories with Enterprise Manager.
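To make the “global query” point more tangible, here is a minimal sketch in plain Python over an invented in-memory model (the item names, relationship types and attributes are all made up; this is not any product’s API): the kind of traversal a consolidated CMS can run from an application all the way down to physical location.

```python
# A toy, consolidated IT model: items keyed by id, relationships as (source, type, target).
# All names and attributes are invented for illustration.
items = {
    "app:billing":   {"type": "application", "dataClass": "pii"},
    "vm:billing-1":  {"type": "vm"},
    "hv:esx-042":    {"type": "hypervisor"},
    "srv:rack12-07": {"type": "server", "location": "eu-datacenter-1"},
}
relationships = [
    ("app:billing",  "runsOn",      "vm:billing-1"),
    ("vm:billing-1", "hostedBy",    "hv:esx-042"),
    ("hv:esx-042",   "installedOn", "srv:rack12-07"),
]

def targets(source, rel_type):
    return [t for (s, r, t) in relationships if s == source and r == rel_type]

def server_locations(app_id):
    """Walk app -> VM -> hypervisor -> physical server and collect locations."""
    locations = set()
    for vm in targets(app_id, "runsOn"):
        for hv in targets(vm, "hostedBy"):
            for srv in targets(hv, "installedOn"):
                locations.add(items[srv].get("location"))
    return locations

# A compliance rule kicked off at the application level, answered by a query
# that spans the application, virtualization and physical layers.
if items["app:billing"]["dataClass"] == "pii":
    print("PII app runs in:", server_locations("app:billing"))
```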

[UPDATED 2009/4/8: More, along the same line, from Glenn and his co-author Carlos Casanova available here. And my CMDBf partner-in-crime Van Wiles also responded to Glenn, bringing a BMC perspective.]


Open Cloud Manifesto, circa 2004

The mini-scandal of last week was the manifesto-gate. The mini-scandal of this week is shaping up to be the Ulitzer-gate (if you want to make sure not to miss next week’s IT scandal, subscribe to the Register feed; ferreting these out and adding a bass-heavy soundtrack is their specialty).

Turns out I am one of these Ulitzer “unaware authors” through two articles I wrote a while ago for the Web services Journal, a paper publication by Sys-con (based on a request from HP PR) and a blog post I allowed Sys-con to republish. Looks like Ulitzer and Sys-con are one and the same. Three articles, spaced two years apart. That’s enough to earn me a dedicated home page at Ulitzer and a rank of 1,000 among their more than 6,000 authors. Makes you wonder how much the 5,000 “authors” behind me have (unknowingly) produced… Whatever. At least it’s all content that I authorized Sys-con to use, not something that was lifted from my blog as apparently happened to others.

Turns out the oldest of these articles (“From Web Services Management to Utility Computing”, from 2004) is not that different from the recently-published (and amply maligned) Open Cloud Manifesto. I described my article at the time as “an attempt to explain how the different efforts going on in the industry around Web services, grid, SOA management, virtualization, utility computing, <insert your favorite buzzword>, fit together to provide organizations with the flexibility and efficiency they need from their IT in order to thrive.”

It ends with “while it would be easier to develop an end-to-end model specific to one company’s offering, standardization allows the integration of the management capabilities of all the components that compose enterprise services. We must keep the pressure on vendors to deliver modular and composable specifications (for format, function, and protocol) that expose management capabilities of infrastructure services, applications, and business processes in such a way that these capabilities can be composed by the next generation of management applications.”

Sure it has a lot more emphasis on WS-* specs than is compatible with the current zeitgeist, and it uses the now-obsolete term of “utility computing” rather than the nebulous alternative currently en vogue, but isn’t the main message there?

Just to be clear, I am not laying pretentious claims of prescience and vision (at least not in this entry). There are plenty of documents (e.g. from the Grid community) that make the same points in more eloquent terms and starting many years prior. It’s just fun to see this link from today’s scandal to the one from last week.

For old times’ sake, here is the content of the 2004 article:

From Web Services Management to Utility Computing
by William Vambenepe

Enterprise services are created by combining infrastructure services, applications, and business processes. To be able to adapt quickly to business changes, enterprise IT must evolve from management of individual resources to management of interrelated services. This will be achieved through the development of composable and modular standards that expose the management capabilities of the building blocks of enterprise services. The Web services platform is an enabler of this transformation: a Web services-based management infrastructure provides a channel that is appropriate for dynamic resource provisioning, allocation, and configuration – often called utility computing.

We can consider this management infrastructure as a four-layered architecture. Starting at the foundation layer, the work on the base Web services infrastructure is far from over. First, until WSDL 2.0 is widely deployed, designers have to compose around the deficiencies of WSDL 1.1, such as the lack of portType inheritance. Second, there is still no standard for referencing Web services. Finally, key specifications such as WSRF (Web Services Resource Framework) and WSN (Web Services Notification), without which people were left to reinvent Web services interfaces to access stateful resources, have only recently reached the standards community. These issues are being resolved and a set of building blocks for accessing resources through an SOA (service-oriented architecture) is shaping up. It is critical that these building blocks be modular and composable to allow incremental adoption and separation of concerns.

Moving from the foundation to the management protocol layer, the OASIS WSDM (Web Services Distributed Management) technical committee, through its MUWS (Management Using Web Services) specification, is the key articulation point between the base Web services architecture and utility computing. Both the IT management community and the Grid community rely on MUWS. It defines how to express and exercise manageability capabilities through Web services, putting in place a management channel that is more interoperable and accessible than ever before.

Next is the modeling layer. Information models need to be composed so that a service can be represented based on the services that it is assembled from, be they peer or infrastructure services. Since these will be described by different models, the management channel (MUWS) needs to be model-agnostic in order to support a model-centric architecture. For example, CIM (Common Information Model) is a model that focuses on concrete resources. The DMTF WS-CIM subgroup must now open CIM to the Web services platform by developing a standard way to expose CIM-modeled resources through MUWS. Other models provide representations for service security, service-level agreements (SLA), etc. Only by composing these models will, for example, an auction service SLA be adequately managed as it depends on a combination of the performance of the servers on which the service runs, the application server that hosts it, the other services (authentication, billing, etc.) that it makes use of, and the business process engine that controls the bidding. Once this model-centric architecture is in place, management actions can be policy-driven through explicit constraints.

Finally, at the top layer, the architecture includes a set of common services for utility computing. They are being defined collaboratively by DMTF (Utility Computing working group) and GGF (OGSA working group).

All the pieces are falling into place but much remains to be done to allow comprehensive management of enterprise services in a model-centric way through Web services standards. While it would be easier to develop an end-to-end model specific to one company’s offering, standardization allows the integration of the management capabilities of all the components that compose enterprise services. We must keep the pressure on vendors to deliver modular and composable specifications (for format, function, and protocol) that expose management capabilities of infrastructure services, applications, and business processes in such a way that these capabilities can be composed by the next generation of management applications. These applications will use this to synchronize business and IT and to capitalize on change.


OVF 1.0 and beyond

OVF 1.0 just got released as a DMTF standard. Here is the specification and its companion white paper. After a quick scan I didn’t see any major change from the submitted version, which is consistent with the content of the “preliminary standard” from last year.

The interesting question is what comes next, especially with regards to VMware’s vCloud. The VMware press release stated that “as one of the original authors of the Open Virtualization Format (OVF) standard now released from the Distributed Management Task Force (DMTF), VMware will build upon that work by submitting a draft of its VMware vCloud API to enable consistent mobility, provisioning, management, and service assurance of applications running in internal and external clouds” and Drue Reeves at the Burton group commented on this (Drue, we’re still waiting for part II). I see no reason to believe that VMware is going to stop playing by the Microsoft playbook in DMTF as it appears to be quite successful so far (I’ll pat myself on the back for predicting over a year ago that “OVF might only be the beginning” for VMware at DMTF).

This results in what looks like a landgrab from DMTF in Cloud standards. Meanwhile, in Washington DC yesterday, the Strategies and Technologies for Cloud Computing Interoperability (SATCCI) workshop took place. At this point all I know about it is the report from Reuven Cohen that I just read (hopefully Stu, Krishna and other bloggers who participated will provide additional perspectives). From Reuven’s report, Winston Bumpus (Director of Standards Architecture at VMware and President of the DMTF) described OVF as “an ideal cloud migration and deployment package”. Which may be true but is a pretty recent repurposing (the spec and the white paper don’t even mention this application). And while the DMTF is going full speed ahead on this, Reuven reports that “Craig Lee, President of the Open Grid Forum suggested that we need to take more time to examine the overlap between various standards groups, mapping the opportunities for collaboration”. Sure thing. The old timers might remember that when the DMTF decided to run with Microsoft’s WS-Management, it wasn’t just OASIS (where WSDM was created) that eventually got hosed but also OGF (then called GGF), which relied on the WSRF/WSDM stack. At the time, too, there were discussions to identify and reconcile the overlap, for all the good they did (disclosure: I have some history there).

We’ve seen this in the WS-* game before. At the end it’s not so much a matter of what the standards bodies do (and even less of what they say), it’s a matter of what the big players do and where they choose to take their marbles. To the extent that you can separate the two, which becomes tricky in the case of vendor-run bodies like WS-I and DMTF. As I have written before, “at the end, it comes down to what [you think] a standard should be”.

[UPDATED 2009/3/26: Stu has now written a report on the SATCCI meeting.]


Cloud computing: would you like flexibility with your simplicity?

The recent announcement of the Sun Cloud, and more specifically its API, is a good occasion to think about how much simplicity we really want in our datacenter automation mechanisms. The Sun API is very simple and its authors are proud of that fact. Indeed they should be proud of avoiding unneeded complexity. They have probably also kept out (at least so far) some needed complexity.

First, let’s focus on the important part:

It’s not REST that matters, it’s the rest

Most of the comments on the API focus on the fact that it’s RESTful. The authoritative source on this is Tim Bray’s description of the API, which he helped shape. But Tim is very down-to-earth about the reasons to use REST:

Why REST? · It’s a sensible question. The chief virtue of RESTful interfaces is massive scaling. But gimme a break, these are data-center management operations; a typical transaction frequency would be a single-digit number per week, with the single digit often being “0”, and it wouldn’t be surprising if a big multi-cluster staged-boot operation had a latency of minutes. The data-center controls are unlikely to be a bottleneck.

Why, then? Simply because we wanted a bits-on-the-wire interface. APIs, in the general case, suck; and are really hard to make portable. Bits-on-the-wire are ultimately flexible and interoperable. If you’re going to do bits-on-the-wire, Why not use HTTP? And if you’re going to use HTTP, use it right. That’s all.

The use of REST is not a fundamental characteristic of the API. In other words, if this API turns out to be useful I can rewrite it as a SOAP API and it would still be useful. Unless the SOAP API is made purposely complicated, it would only be marginally harder to use, not fundamentally less useful.

In fact, we may find out. If the rumor is confirmed and IBM decides to Tivolify (rather than kill) the Sun Cloud, the whole thing can be refactored as WS-RT/XML/XQuery (and maybe WS-ResourceCatalog) in five days, four of which would be spent capturing, sedating and restraining Tim Bray (and his “spec machete”) with the last one used for coding.

In the case of the Sun Cloud API, REST makes the API simpler in the same way that a keyless system makes a car easier to operate. You don’t have to fumble for the key, but you still need to know how to parallel park, change a tire and operate the stereo.

By using REST, the Sun team has kept away some arbitrary complexity (e.g. fine-grained PUT; instead Sun decides what are the two valid sets of input parameters to create a cluster). But that’s only a small percentage of the potential complexity of the system. Not to mention that most developers will use libraries rather than on-the-wire protocols, so they won’t see any difference. Instead, the real deal is:

The model

By “the model” I mean both the resource model and the capabilities of the resources. For capabilities, I don’t care whether a virtual machine can be started via an HTTP GET request on a URL that ends with ?control=start, or via a SOAP message with the wsa:Action header set to http://iloveclouds.com/vm/start or via an RPC call to a Start(…) method. I just care that the model includes the capability to start a VM. And the list of states a VM can be in.
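Here is a small sketch of that point, using the three bindings mentioned in the paragraph above (the endpoint URL is made up, the requests are only constructed, never sent, and none of this is an actual Cloud API): the capability "start a VM" is the same in all three cases, only its wire representation changes.

```python
from urllib.request import Request

VM_URL = "http://cloud.example.com/vms/vm-123"   # hypothetical endpoint, not a real API

# 1. REST-style binding: the capability is a control parameter on the resource URL
#    (mirroring the "?control=start" example in the text).
rest_start = Request(VM_URL + "?control=start", method="GET")

# 2. SOAP binding: the same capability, identified by a wsa:Action header
#    (using the illustrative action URI from the text).
soap_start = Request(
    VM_URL,
    data=b"""<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
                         xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <s:Header><wsa:Action>http://iloveclouds.com/vm/start</wsa:Action></s:Header>
  <s:Body/>
</s:Envelope>""",
    headers={"Content-Type": "application/soap+xml"},
    method="POST",
)

# 3. RPC binding: the same capability surfaced as a Start() method on a client object.
class VmClient:
    def __init__(self, url):
        self.url = url
    def Start(self):
        return Request(self.url + "?control=start", method="GET")

# Three wire representations, one model: "a VM can be started".
print(rest_start.full_url)
print(soap_start.get_method(), soap_start.full_url)
print(VmClient(VM_URL).Start().full_url)
```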

Look at a datacenter today. Make an inventory of all the networking equipment, storage, servers, hypervisors, operating systems and infrastructure services that it contains. Consider all the configuration settings of all these resources (as they would be represented in a complete, authoritative and consistent CMDB, that most elusive creature). Add to it all the controls and APIs they expose. That’s a lot of data, even if you don’t consider the applications layer. That’s a few orders of magnitude larger than what the model in the Sun Cloud API can describe. That gap (between our CMDB model and the Sun Cloud model) is what we should look at and analyze. Why are they so far apart? How big is the ideal datacenter automation and virtualization model?

Among other things, these hundreds of configuration settings in your current datacenter are used to optimize deployments. No-one would miss the pain of dealing with the optimizations if they went away, but we would miss the performance benefits they bring. So what replaces them if the model is too simple to support any tweak? Is the infrastructure behind the API auto-optimized, based on actual application patterns? Now that would be real progress towards simplicity and may allow us to rely on an API as simple as the Sun API. But the industry has been trying to do this with little success for a long time. I expect incremental, not radical, progress on this. Alternatively, does Cloud Computing change the economics to the point where performance optimizations through configurations are no longer cost-efficient, where scaling out is the answer? Hard to make this a general statement, considering how difficult it remains for many applications to scale out. And this sounds very SUV-like in these footprint-aware times (we see how well the “stretch the hood and add two cylinders to the engine” approach worked for Detroit).

Sun might very well have this covered under the hood. But I don’t know that I want to assume that they have an auto-optimizing system just because they produced an API that would benefit from having it underneath.

Not to mention that not all configuration tweaks have to do with performance optimization. Some of them are driven by licensing, organizational, risk and compliance considerations. If auto-detecting an application performance profile is hard, try auto-detecting its regulatory requirements.

Complexity with a purpose

The right place to be, between the “omniscient CMDB model” and the “Sun Cloud model”, is somewhere in the middle, with a couple of incrementally complex layers. Of course they are so far apart that saying “somewhere in the middle” is a cop-out. The current level of complexity is very hard to manage by humans (assisted by processes and tools, e.g. ITIL) and impossible to really automate. A lot of the complexity and variability is arbitrary rather than flexibility-inducing. We need to reduce this (all-out standardization is one way, stack integration is another). But the simplicity of the model in the Sun Cloud API is too extreme. Look at Amazon EC2. Everyone lauds the simplicity of the APIs and everyone, in the same breath, asks for more options (different instance types, availability zones, reserved instances…). Amazon (and Sun too, I assume) is taking the eminently rational approach of starting from simple and adding complexity (sorry, flexibility) as needed. That’s great. Just don’t get too enamored with the initial simplicity.

[UPDATED 2009/3/20: James Governor lauds the simplicity of Amazon’s cloud offering. If I understand him correctly, he sees simplicity as coming not just from “few options” but also from backward compatibility with current app infrastructure. That second part is what William Louth criticizes in his comment below. At the very least I like to keep the two separated: “how intrinsically simple is it” and “how backward compatible is it” even though both can be seen as providing the benefit of simplicity.]


WS-Discovery and WS-DeviceProfile public review

We are in the middle of the public review period for the three specifications coming out of the “Web Services Discovery and Web Services Devices Profile (WS-DD)” OASIS technical committee. The specifications are:

  • WS-Discovery (Web Services Dynamic Discovery)
  • Devices Profile for Web Services (DPWS)
  • SOAP-over-UDP

This trio has been swirling around for many years. Mostly in the USB/PnP/UPnP community (devices physically attached to Windows boxes) but they have popped up on the enterprise WS-* radar screen from time to time, for two reasons:

  • WS-DD as another specification (in addition to WS-Management and the later incarnations of WS-MeX) that uses WS-Transfer, and therefore as a (poor, in my view) justification for making WS-Transfer a stand-alone specification
  • WS-Discovery as potentially useful for datacenter management (for resource discovery, obviously)

Looking at the members list for the technical committee seems to confirm the suspicion that many members would be more interested in the potential datacenter-related applications (of WS-Discovery, presumably) than the USB gadgets use cases: CA, IBM, Novell, Progress, RedHat, Software AG, WSO2. Well, I guess Novell and RedHat could be in the game from the perspective of plugging devices into desktops running their version of Linux rather than from a datacenter perspective. CA and IBM could be in it from an Asset Management (for devices) perspective too. So who knows (if you had no tolerance for idle speculations you wouldn’t be reading blogs, would you?).

In any case, no Apple, Palm, RIM, Google (Android) or Sun (JavaFX). I guess handheld devices aren’t the devices targeted here. Rather it’s more printers and cameras. In which case you have to wonder why HP isn’t there but that’s another question (I’ll pretend not to know).

No point dwelling on this, especially since the member list is a very imperfect representation of the actual participation. As an alternative we can scan the mailing list archive to see who is active. Judging from the last two months (February 2009, March 2009), it’s Microsoft, Microsoft and that company in Redmond. Microsoft, as we know, created these specifications from the start and they seem to still be the engine.

In the end it remains to be seen if WS-Discovery has any role to play in datacenter infrastructure discovery. That environment is clearly becoming more dynamic, but the most frequent changes will take place at the level of the guest VM (creation of new instances) and above (applications, services…) and all these creations will presumably be orchestrated by some controller so there is little need for a “hello world” resource-initiated discovery mechanism for them.

Which leaves the physical equipment. Do we need WS-Discovery to get from the point of a server being plugged to the point where it has been provisioned? There are alternatives in this somewhat homogenous environment. And UDP multicast only takes you so far in a datacenter (at least without extensions). So my guess is no. But who knows. The drafts are here for you to review if you think it may be relevant.
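For readers who have never looked at it, the flavor of the mechanism is easy to picture with a sketch: ad-hoc WS-Discovery boils down to multicasting SOAP-over-UDP Probe/Hello messages and listening for responses. The payload below is a placeholder, not a conformant Probe envelope, and the multicast group and port are the ones commonly quoted for WS-Discovery, written from memory.

```python
import socket

# Ad-hoc WS-Discovery works by multicasting SOAP-over-UDP Probe/Hello messages.
# The group/port below are the ones usually quoted for WS-Discovery (from memory),
# and the payload is a placeholder, not a conformant Probe envelope.
MULTICAST_GROUP = "239.255.255.250"
MULTICAST_PORT = 3702

probe_placeholder = b"<!-- a SOAP-over-UDP Probe message would go here -->"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the local link
sock.sendto(probe_placeholder, (MULTICAST_GROUP, MULTICAST_PORT))
sock.close()

# Responders on the same multicast scope answer with ProbeMatch messages; that
# scope limitation is the "UDP multicast only takes you so far" issue noted above.
```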


CMDBf is a lot more and a lot less than you think

The DMTF CMDBf working group has recently published an updated draft of its specification. The final version should follow soon and I don’t expect major changes so now is not a bad time to start thinking about what this baby can do.

Since CMDBf stands for “configuration management database federation”, you might think the obvious answer to the “what can it do” question is “build a federation of configuration management databases”. Except it’s not. Despite its name, CMDBf provides little support for federation unless you take a very loose definition of the term. The specification gives you a query language and a very simple registration interface, with a sprinkle of metadata to improve interoperability. The query language lets you talk to a CMDB to retrieve information on configuration items (CIs) that it knows about. The registration interface lets you keep a CMDB informed of changes to CIs that it may care about. If you want to build a real federation on top of this, one that scales to the type of environment that CMDBs are used for today, you have to go further than what the specification provides. What CMDBf does give you is some amount of integration between CMDBs (at the protocol level at least, not at the model level). It may not sound like much but it is a lot of progress on the current situation and the right incremental step, whether you are aiming for true federation as the end goal or not.

That’s the “a lot less than you think” part. So, what’s the “a lot more than you think” part? Good stuff all around:

CMDBf provides a metamodel that is well-suited for complex IT systems and it provides an elegant graph-oriented query language on top of it. The most convenient representation for an IT system is neither “one big XML document” nor “a sea of nodes and edges”. CMDBf gives you a middle ground: a graph model with XML leaf nodes. So you can precisely model the relationships between your IT elements using explicit relationships (with their own records), but you can also attach a well-understood piece of XML to an item as a record without having to break that XML into a bunch of tiny relationships.
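For readers who have not seen the query language, here is a rough sketch of its shape (element and attribute names are reconstructed from memory and simplified, with namespaces omitted; do not use this as a reference for the actual CMDBf syntax): item templates for the nodes you want, relationship templates for the edges between them, and constraints such as record types or XPath expressions scoped to an individual item's XML records.

```python
import xml.etree.ElementTree as ET

# A sketch of a CMDBf-style graph query: "all databases hosted on Linux servers".
# Element and attribute names approximate the spec's query structure from memory;
# the XPath constraint applies only to the XML record of that one item.
QUERY_SKETCH = """
<query>
  <itemTemplate id="db">
    <recordConstraint>
      <recordType name="DatabaseInstance"/>
    </recordConstraint>
  </itemTemplate>
  <itemTemplate id="server">
    <recordConstraint>
      <xpathExpression>.//osName[text()='Linux']</xpathExpression>
    </recordConstraint>
  </itemTemplate>
  <relationshipTemplate id="hosting">
    <recordConstraint>
      <recordType name="HostedOn"/>
    </recordConstraint>
    <sourceTemplate ref="db"/>
    <targetTemplate ref="server"/>
  </relationshipTemplate>
</query>
"""

query = ET.fromstring(QUERY_SKETCH)
print("item templates:", [t.get("id") for t in query.findall("itemTemplate")])
print("relationship:  ", query.find("relationshipTemplate").get("id"))
```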

I am pretty sure there are other domains, beyond IT systems, for which this would be useful. It will be interesting to see if the CMDBf specification gets considered outside of its intended scope. But these domains are more likely to end up using RDF/OWL/SPARQL instead. Not everyone has made the leap from XML as a tool to XML as a religion, which made CMDBf necessary for us. But let’s not veer into another rant.

Let’s go back instead to describing how useful CDMBf can be to IT systems management, independently of any “federation” objective. Let me put it this way: if one was to create from scratch a configuration store for IT systems they should strongly consider the CMDBf conceptual model as the base metamodel. And something along the lines of the CMDBf Query (though not necessarily through its XML serialization) as the native query language for it. Most CMDBf implementers of course are not in this situation. Rather than writing the store from scratch they will create a CMDBf wrapper/interface on their current CMDB. And that’s fine too. CMDBf will work well as an interoperability protocol. Putting aside my gripes about XPath overuse, CMDBf strikes a reasonable balance that makes it implementable on top of any back-end technology (relational, XML, RDF, in-memory objects, bags of name-value pairs…). And the query patterns it supports map well to CMDB-to-CMDB integration use cases. But it is underselling it, in my view, to restrict it to this over-the-wire interoperability scenario. CMDBf also provides a very useful foundation for local access to the CMDB. CMDBf graph queries can support powerful visualization of the content of the CMDB. They can support the definition of configuration rules. They can support in-depth inspection of relationships (e.g. fault tree).

And that may just be the beginning. It could take three directions after v1:

The first one, as always for a standard, is that it is ignored and becomes irrelevant. I have to reluctantly list this one first, because it is statistically the most likely for a new standard. Especially one that is not a ratification of an existing de facto standard. And one that threatens an important control point for vendors. A slight variation on this scenario is for CMDBf to succeed from a marketing perspective, as a checkmark that most vendors tick, but not as a true technology. This is the “smokescreen” scenario from Mr. Skeptic. One scenario that worries me is that CMDBf could fail because of the poor models of the CMDBs that implement it. If your IT model is not granular enough or if it matches the UI of your application more than the semantics of the IT components, then CMDBf will expose these shortcomings and probably be blamed for them (with bad models, “shoot the messenger” becomes “shoot the protocol”).

The second possible direction is that CMDBf provides enough value in integrating CMDBs that people want more and challenge the group to deliver on the “f” part, federation. That could take the form of a combination of:

  • better integration with other protocols (mostly from the WS-Management family, like WS-Enumeration and WS-Eventing),
  • reconciliation support (here are ways to address it),
  • some model transformations or canonical models,
  • some optimizations in the query mechanism for distributed queries (e.g. data partition rules).

The third possible direction (not exclusive) is for CMDBf to become the basis for a standard rule language for IT models. Yeah, another one (remember SML?). SPIN and SML show us how a generic query language can be used to support configuration rules. I very much like SPIN but it requires adopting RDF as a metamodel, which is a hard sell in XML-land. SML suffers technically from being too reliant on an inappropriate validation tool (XSD) and treating relationships as a second thought rather than an integral part of the model. Which is fine in many areas (EMF does it too), but not, in my view, when modeling IT systems.

If we are not going to use RDF/SPIN then let’s copy them. We can use the CMDBf metamodel (graph-based) where SPIN uses RDF. We can use the CMDBf query language (graph-oriented) where SPIN uses SPARQL. Since CMDBf queries use XPath, we see some commonalities with SML (which uses XPath through Schematron). But in CMDBf XPath is scoped to the leaf nodes of the graph, not the entire model as it is in SML. In other words, SML adds relationship traversal to XPath, while CMDBf adds XPath to its relationship-aware queries. It’s a matter of who’s on top. It sounds academic but it isn’t.
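A caricatured sketch of that difference, with both pseudo-queries invented for illustration (neither is valid SML nor valid CMDBf syntax): in the SML style, XPath is the outer language and relationship traversal is spliced into the path expression; in the CMDBf style, the graph query is the outer language and XPath only shows up inside the record constraint of one item template.

```python
# Two invented pseudo-queries for "databases hosted on Linux servers", just to
# show which language wraps which. Neither is valid SML nor valid CMDBf syntax.

# SML/Schematron style: XPath on top, with relationship traversal spliced into
# the path expression (via a dereference-style function).
sml_style = "deref(/DatabaseInstance/hostedOn)/Server[osName = 'Linux']"

# CMDBf style: the graph query (item and relationship templates) on top, with
# XPath confined to the leaf record of a single item template.
cmdbf_style = {
    "items": {
        "db":     {"recordType": "DatabaseInstance"},
        "server": {"recordXPath": ".//osName[text()='Linux']"},
    },
    "relationships": [("db", "hostedOn", "server")],
}

print(sml_style)
print(cmdbf_style)
```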

Does the industry really want standardized, re-usable configuration rules? SML/CML seem to say no. The push towards Cloud interop, on the other hand, begs for it. At least if you believe in programming your environment in a way that is partially declarative rather than entirely procedural.

[UPDATED 2009/3/5: Rob England (a.k.a. Mr. Skeptic as I refer to him above) provides a geek-to-English translation for this post. Neat!]

2 Comments

Filed under CMDB, CMDB Federation, CMDBf, DMTF, Everything, Graph query, IT Systems Mgmt, Mgmt integration, Modeling, RDF, SML, Specs, Standards, Tech, XPath

Analyzing the DMTF incubator process

Depending on how you choose to look at it, either the DMTF has streamlined the process of defining standards or it has created a rubberstamping machine. I am referring to the “DMTF Standards Incubation Process”. It is recent, but not brand new (the DSP that defines it is dated April 6, 2007). I had heard about it but never really looked into it. Until this weekend, when I finally got motivated to investigate a bit. AFAIK this process has not yet produced any specification.

As I understand it, the goal of this incubation process is to allow a group of like-minded companies to get together in the DMTF and produce an “informational specification”, which is typically a refinement of a vendor submission. The informational specification would then go through a normal DMTF working group but often in an expedited fashion, only allowing limited changes. That’s not a guaranteed outcome, but it seems to be the “normal” case as envisioned in this process.

This overview should make it clear why this can be seen as a rubberstamping machine. Here are a few key points (the quotes come from the process description):

  • “Standards Incubators are often formed in conjunction with an initial baseline contribution by the founding members with the expectation that the group will serve to evolve and finalize that contribution” and later it says that leadership members should have a “commitment to maintaining alignment with the input submission”.
  • Only leadership-level DMTF members get to be on the review board (the part of the incubator that makes decisions). They have to fairly consider comments from the other companies, whatever “fairly” means.
  • Being a DMTF leadership-level company is necessary, but not sufficient, to be on the review board of an incubator. If you are not there when the incubator is created then you have to be approved by the current leadership members to join. It is unclear to me whether any DMTF leadership-level company can join the “review board” of an incubator at the start or whether those who propose the incubators get to choose whom they let in.

There is nothing sneaky here: the incubator process is pretty upfront about being designed to allow a vendor-provided specification to be considered/reviewed/improved in a friendly environment, in which opponents are kept away. The question is what happens next. There are four possible dispositions once an incubator finishes its work and has produced an informational specification.

The first two, “bootstrap / expedited delivery” and “finalization”, are pretty close in practice. A working group is created that is restricted to not making any significant change to the “informational specification”. The “bootstrap” approach only allows small corrections, while “finalization” also allows some additions (but no change of what is already there). In other words, the working group is mandated to pretty much rubberstamp the informational specification: with these two dispositions, there is no opportunity (in the incubator or in the ratification group) for people to suggest technical approaches that are significantly different from those in the initial submission.

Technical alternatives can only be considered if a competing incubator is created. At this point, the DMTF board may decide that a working group should be created to reconcile the two. Even then, the board may pick a winner (in which case the reconciliation amounts to adding in the selected specification some features that are only present in the other specification, in effect protecting existing implementations of the winner). And if this is the path taken, the process makes it clear that this should be driven not by technical merits but by “adoption and momentum”. Which implies that the companies that ship the most products get to pick since they can single-handedly create “adoption and momentum”.

And finally, the last possible disposition is “termination”, in which no further work on the informational specification takes place in the DMTF. But the barrier is pretty high for this direction to be taken: the specification has to have “little adoption or industry interest”. It seems reasonable to interpret this to mean that this would only apply if the initial proponents (who created the incubator) have lost interest themselves, otherwise they alone would provide sufficient “industry interest” for the termination to not take place and force another outcome (which can only be one of the first two, the rubberstamping options,  if there is no competing incubation group). And even if the work is indeed terminated, the specification remains available indefinitely as a “DMTF informational specification” which people can (and will, if it serves their purpose) simplify to “a DMTF specification”. The difference between this and a DMTF standard will be lost on 99% of IT writers and IT buyers. I submit as evidence all the confusion about the status of a “W3C note” (the confusion prompted W3C to eventually rename this to “submission”). It will be even less clear with DMTF “informational specifications” because, unlike W3C notes, they did go through some modifications in the DMTF and they are indeed a product of the DMTF. I would not be surprised if some “DMTF informational specifications” stayed just that and lived a happy life as a pseudo-standard. One thing that remains unclear is whether such a terminated informational specification can be taken over by another standards organization.

Bottom line, if you don’t like a submission to an incubator group you’d better put together an alternative (quickly) to stay in the game. And if your position is not “this is a bad technical approach” but rather “this is something we should do in a more open and deliberative way, or maybe later” then you only get one chance to make your case: you’d better make a convincing argument to the board to not allow the incubator to be created because after that there is little chance to stop the train. And how many standard organization boards do you know who say “no” to submissions from large vendors?

Having said all this, is this really a bad thing? Let’s look at it from the “glass half full” side.

First let’s realize that this happens anyway. Companies get together (often around an initial document created by a single leader) to create a specification and then look for a standard organization to ratify it with as few changes as possible. Other than CIM, it seems that all recent DMTF efforts started out this way (WS-Management, CMDBf, OVF). This is how Microsoft (sometimes with IBM) built the whole WS-* stack. They even had a name for it (the “workshop process”) to try to make it sound more open than it was. I’ve been on the inside (SML, CMDBf, WSRF/WSRT) and outside (WS-Management and other WS-* specifications) of it and it’s a pain whether you’re inside or out. It’s very opaque. Efforts may die and nobody ever knows (for example, does anyone know what’s up with CML)? Even when those inside want to get feedback and share their work they have to deal with a tangle of legal agreements that make it unnecessarily hard for everyone. In addition, all the work and discussions that go into the submitted specification usually get lost as the work transitions to a standards body (e.g. no email archive and unclear IP/confidentiality rules in re-using them). And the fact that these efforts are private does not prevent companies from demanding guarantees that their submissions won’t be changed too much. For example, the WS-Management working group had an explicit goal in its charter to maintain compatibility with the submission and the same debate was played over and over again in drafting the charters of several OASIS and W3C groups. Companies play one standard organization against another if necessary to get this guarantee.

Anything that can take us away from this mess merits consideration even if it is not a perfect alternative (there isn’t any). The DMTF incubator process doesn’t seem to relax the control of the sponsor companies, but it provides some level of transparency (at least for DMTF members) and, presumably, some continuity between the incubation and ratification phases.

Standards organizations constantly get blamed for either being too slow/procedural (e.g. HTML at W3C) or being rubberstamping machines (e.g. OOXML at ISO). Or both at the same time (most WS-* work). Most steps an organization can take to address one criticism make the other worse. This “incubator” process is an example.

Everyone complains about “design by committee” and how inconsistent and bloated specifications become when everyone is listened to and made to feel included. The specifications end up with too many options (a killer of interoperability) and no guiding vision. A more constrained set of authors usually produce a simpler and more consistent specification. Has anyone ever seen a standard that is shorter than the submission that started it?

Not to mention the fact that working group chairs are often in an uncomfortable position, forced to choose between, on one hand, accusations of being dictatorial and, on the other hand, seeing their working group drift from one rathole to the next, with no end in sight. The standards world has its fair share of obstructionists and pontificators. Some do it on purpose (they have been mandated by their employer to prevent progress in a group), most do it just because of their personality (and the fact that their employer has no real interest in what the person does in this group, as long as his/her presence allows the employer to claim to be part of the game). Forcing people who want an alternative approach to actually put together a proposal is a way to keep pontificators at bay. Unfortunately, it also shuts out qualified people who know a domain well and want to share their knowledge but don’t have the time (or employer sponsorship) to put together an entire alternative specification around their proposal.

At the end, it comes down to what a standard should be. If you think a standard should capture the knowledge of most experts in the industry and give an equal voice to all organizations, then this is a step in the wrong direction. If, on the other hand, your position is that the big guys will effectively set standards anyway so it might as well be done in a way that is fast, relatively transparent and consistent with their implementation, then you’ll applaud this initiative.

In creating this process, the DMTF made a clear (though grammatically challenged) statement on this topic: “adoption and momentum may outweigh technical issue regarding success”.

2 Comments

Filed under DMTF, Everything, Mgmt integration, Specs, Standards

Is notification wrapping getting a bum rap?

Looks like the question of whether to wrap SOAP-based notifications is back. Like Gil I prefer to stay away from wrapping notifications but my reasons are somewhat different.

I am not convinced by WSDL-centric arguments one way or the other. Proponents of wrapping say that it gives them a WSDL they can use for creating a generic listener, while opponents say that avoiding wrapping gives them a WSDL that generates useful code (payload-aware). I am not a big fan of WSDL-based code generation, but even if you are going to do it nobody says that you have to do it based on the WSDL document that ships with the specification. You’re free to modify the WSDL any way you want before feeding it to your code generation tool, as long as the result correctly describes the messages. One can write an infinity of WSDL documents for a given set of messages, some more precise and others more high-level (in which you quickly hit an xs:any). So, if the spec gives you a WSDL where the payload is xs:any and you know that in your case the payload is going to be sec:intrusionDetected, feel free to insert that element in the WSDL before running wsdl2java or whatever.
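Concretely, using the sec:intrusionDetected element just mentioned (and a deliberately simplified, made-up types section, so don’t go looking for these exact names in any spec), the tweak amounts to this:

<!-- as shipped with the notification spec: the payload is anything -->
<xs:element name="notification" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType>
    <xs:sequence>
      <xs:any namespace="##other" processContents="lax"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>

<!-- your locally tweaked copy, fed to wsdl2java: the payload is the
     element you know you will receive -->
<xs:element name="notification" xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:sec="http://example.com/security">
  <xs:complexType>
    <xs:sequence>
      <xs:element ref="sec:intrusionDetected"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>

As long as the messages you actually receive still match the tweaked description, the code generation tool is none the wiser.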

At the end, the question is not about what the WSDL in the specification looks like. The question is simply to what extent you know ahead of time the payload of the events you are going to have to handle. And you’d better know enough about the payload to create whatever logic your event consumer has to apply to the notification. Whether that’s through WSDL or some other means. If you are not going to apply any payload-dependent logic (“generic sink”) then you don’t need to know anything about the payload. And I don’t see why someone needs a wrapper to create a generic sink.

Rather, what I don’t like about wrapping notifications is that you force them to be handled only as notifications, not as regular SOAP messages. You put them in a separate world and you make it hard for someone to create a service that can be invoked either in a subscription-driven way or in a direct way.

Here is a made-up example: consider a message to indicate that a physical intrusion has been detected in a building. There are many possible consumers for this message (local security staff, private security company, police, sound alarm, the cell phone of the owner, audit log, etc…). There are many possible sources for the message. In some cases, the message does not come from a subscription (e.g. a homeowner calls the security company and the operator enters data in a system that produces the message, or the sensor is hard-coded to sound the alarm). In others, there is a subscription (e.g. a home alarm system allows someone to register phone numbers and email addresses to which to send intrusion alerts). Sometimes something that starts as a subscription-based notification gets forwarded to someone who did not register for anything. It’s a good thing if web services that consume this message do not have to know (if they don’t care) whether this message originated because of a subscription or not. All they need to worry about is that there is a message that they have to respond to (e.g. by dispatching a patrol of clowns with orange lights on their car).

Here is a simpler analogy. Imagine that you have a filter in your email client to move all messages from Joe to a given folder. How much would you like to have to write the rule twice, one for messages that Joe sends to you directly and one for messages that Joe sends to a mailing list to which you are subscribed? Not very much I imagine.

At the same time, most notification systems are aware that they are processing notifications and there may be notification-related data that you’d like to have available in a consistent way (e.g. enough information to manage the subscription that resulted in you receiving this message). That’s fine but you don’t need an intrusive wrapper for this. Just use a SOAP header. It’s out of the way if you don’t care about it and it’s right there if you do (if you want to subject yourself to a two-year-old rant about how the SOAP processing model is unfortunately underutilized, be my guest).
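Sticking with the intrusion example, here is the shape I have in mind. The sub: header below is entirely made up; what matters is where the information goes, not what it is called:

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:sub="http://example.com/subscription"
            xmlns:sec="http://example.com/security">
  <s:Header>
    <!-- subscription-related metadata rides in a header: there if you
         want it, out of the way if you don't -->
    <sub:subscriptionInfo>
      <sub:subscriptionId>urn:uuid:1a2b3c</sub:subscriptionId>
      <sub:subscriptionManager>https://alarms.example.com/subscriptions</sub:subscriptionManager>
    </sub:subscriptionInfo>
  </s:Header>
  <s:Body>
    <!-- the payload is a plain SOAP body, processable whether or not it
         originated from a subscription -->
    <sec:intrusionDetected>
      <sec:building>B2</sec:building>
      <sec:sensor>door-14</sec:sensor>
      <sec:time>2009-01-10T03:12:45Z</sec:time>
    </sec:intrusionDetected>
  </s:Body>
</s:Envelope>

A consumer that only cares about intrusions processes the Body like any other message; a consumer that wants to manage its subscription reads the header. Nobody gets forced through a wrapper.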

One place where you need some kind of wrapping is when delivering several events at a time (either because you use pull-style retrieval or because you find it more efficient to push them in batches). If that’s what you’re after (and you want to handle it within one SOAP message rather than boxcarring a set of SOAP messages) then go ahead and define a wrapper, but make it a specialized wrapper that serves this purpose: collecting notifications and properly attaching the relevant metadata to each. That’s a real purpose, not some WSDL make-believe.

Another use case is if you apply some transformation to the notification before sending it. Say that instead of returning a large notification you filter it by running an XPath on it and returning a serialization of the resulting node set (assuming you first solve the XPath serialization conundrum). You’d need some kind of wrapper to contain the result and put it in context, but again that should be a specialized wrapper for your filter mechanism. Not a generic wrapper.

It’s been a while since I really thought about this. My recollection may be flawed but I think I was already holding this position in the OASIS WS-Notification technical committee (which completed its work by publishing three standards in October 2006). I remember David Hull making a very eloquent case in the same direction (“wrapping” as a policy-advertised option, not a part of the base framework), and strong pushback from IBM. I learned a lot about pub/sub systems from my WS-Notification committee co-chair, IBM’s Peter Niblett (a leading expert on the topic) while working on WS-Notification, but this is one area in which he did not convert me.

Comments Off on Is notification wrapping getting a bum rap?

Filed under Everything, Mashup, Mgmt integration, Middleware, SOAP, SOAP header, Specs, Standards, Tech

Sorry, CMDBf doesn’t make coffee either

The IT Skeptic is writing to us from his mountain retreat (via a time-delayed post on his blog), and the topic he felt safe to cover in such fashion (what journalists call an “evergreen”) is the fact that CMDBf is an orchestrated sham, brilliantly executed by IT management vendors.

I’d love to be part of something that’s brilliantly executed for once, even if it is a sham, but I am afraid this is not it. But first I should state the obvious, clarifying that even though I am a member of the CMDBf group at DMTF (and also an author of the original version, under my previous employer) I do not speak for the group or DMTF (or my employer for that matter). Just as myself, as always on this blog.

The problem that Rob England, Mr. Skeptic, has with the CMDBf specification is that it doesn’t do a bunch of things that he’d like it to do, such as specifying how data sources acquire data for their domain, how they store the data, how the underlying resources are reconfigured, what processes are followed etc. See the full list from his post. The list is a copy/paste from the CMDBf specification, with some comments added, so at the very least he has to admit that as far as “smokescreens” go this one is pretty upfront about its limitations…

He concludes that “this is once again a geeky technical solution to a cultural, organizational and procedural problem.” I have to ask: who expects DMTF specifications to solve “cultural, organizational and procedural” problems? Does CIM solve such problems? Does WBEM?

Human-to-human communication is a “cultural, organizational and procedural” problem and SMTP/POP/IMAP/etc (the interoperable protocols used by email systems) are just as geeky as CMDBf. They don’t solve the larger problem, only contribute to the solution. If CMDBf can contribute as much to datacenter management as SMTP/POP/IMAP contribute to human communication (minus the SPAM if possible), I’d call that a success.

And then there is this warning:

“WARNING: vendors will waive this white paper around to overcome buyer resistance to a mixed-vendor solution. For example if you already have availability monitoring from one of them, one of the other vendors will try to sell you their service desk and use this paper as a promise that the two will play nicely.”

Has anyone actually seen this happen? I am asking because so far, both at HP and Oracle, the only sales reps I have ever met who know of CMDBf heard about it from their customers. When asked about it, the sales person (or solutions engineer) sends an email to some internal mailing list asking “customer asking about something called cmdbf, do we do that?” and that’s how I get in touch with them. Not the other way around.

Also, if the objective really was to trick customers into “mixed-vendor solutions” then I also don’t really understand why vendors would go through the effort of collaborating on such a scheme since it’s a zero-sum game between them at the end.

As far as the glacial pace of progress (“Glacial advance. That’s the way the vendors want it” from an earlier post by the Skeptic), CMDBf is no race horse but I don’t see it going any slower than other standards. Slowness (I mean, deliberation) is part of the landscape. I would submit a slight twist on Hanlon’s razor: “Never attribute to malice that which can be adequately explained by legal, procedural and organizational inertia.”

Having said all this, some of Rob’s criticism is perfectly justified, such as his sarcasm about this sentence from the specification:

“The Federated CMDB operates in a closed environment, in which some security issues are less critical than in open access or public systems.”

OK, that’s stupid indeed. Especially in a public cloud environment where you don’t know who is renting the VM next door. I’ll ask the group to remove this. Actually, that whole appendix is useless and I pointed this out in my earlier review of CMDBf 1.0 (look for the “security boilerplate” section at the bottom of the review).

Rob could also have pointed out that this specification only addresses “federation” if you accept a very scaled-down definition of the term. What it does do is help with CMDB query and synchronization. Not the holy grail, but nothing to sneer at either.

Rob, next time you want to throw tomatoes at CMDBf while you’re on holiday, just give me the password to the site and I’ll do it for you… :-)

[UPDATED 2009/1/21: Rob responds via a comment on his original blog entry.]

2 Comments

Filed under BSM, CMDB Federation, CMDBf, DMTF, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Security, Specs, Standards

A new SPIN on enriching a model with domain knowledge (constraints and inferences)

Back when I was at HP and we got involved with what turned into SML (now a W3C candidate recommendation), we tried to make a case for the specification to be based on RDF/OWL rather than XML/XSD/Schematron. It was a strange situation from a technical perspective because RDF is a better foundation for an IT model than XML, but on the other hand XSD/Schematron is a better choice for validation than OWL. OWL is focused on inference, not validation (because of both fundamental design choices, e.g. the open world assumption, and language expressiveness limitations).

So our options were to either use the right way to represent the system (RDF) combined with the wrong way to capture constraints (OWL) or to use the wrong way to represent the system (XML) combined with the right way to constrain it (mostly Schematron, with some limited help from XSD). At the end, of course, this subtle technical debate was crushed under the steamroller of vendor politics and RDF never got a fair chance anyway.

The point of this little background story is to describe the context in which I read this announcement from Holger Knublauch of TopQuadrant: the new version of their TopBraid Composer tool introduces SPIN, a way to complement OWL with a SPARQL-based constraint checking and inference mechanism.

This relates to SML in two ways.

First, there are similarities in the approach: Schematron leverages the XPath language, used to query XML, to create validation rules. SML then marries Schematron with XSD, for a more powerful validation mechanism. Compare this to SPIN: SPIN leverages the SPARQL query language, used to query RDF, to create validation/inference rules. SPIN also marries this with OWL, for a more powerful validation/inference mechanism.
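To make the parallel a bit more tangible, here is what the Schematron half of the analogy looks like (the app: vocabulary is made up). SPIN expresses the equivalent constraint as a SPARQL query attached to the corresponding OWL class:

<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:ns prefix="app" uri="http://example.com/model"/>
  <sch:pattern>
    <!-- an XPath-based validation rule: every server needs enough memory -->
    <sch:rule context="app:Server">
      <sch:assert test="number(app:memoryGB) >= 4">
        A server must have at least 4GB of memory.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>

Same pattern in both cases: take the query language people already use against the metamodel and repurpose it to express what “valid” means.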

But beyond the mirroring structures of SPIN and SML, the most interesting thing is that it looks like SPIN could nicely solve the conundrum, described above, of RDF being the right foundation for modeling IT systems but OWL being the wrong constraint mechanism. SPIN may do a better job than SML at what SML is aiming to do (validation rules). And at the same time, you get “for free” (or as close to “for free” as you can get with software, which is still far from “free”) a pretty powerful inference mechanism. The most powerful I know of, short of using a general programming language to capture your inference rules (and good luck with maintaining these rules).

This may sound like sci-fi, but it’s the next logical step for IT configuration standardization. Let’s look at where we are today:

  • SML (at W3C) is an attempt to standardize the expression of constraints.
  • CMDBf (at DMTF) is standardizing how the model content is queried (and, to some limited extent at this point, federated).
  • And recently IBM authored a proposal for a reconciliation specification for items in the model and sent it to an Eclipse group (COSMOS).

But once you tackle reconciliation, you are already half-way into inferencing territory. At least if you want to reconcile between models, not just between instances expressed in the same model. Because the models may not be defined at the same level of granularity, and before you can reconcile items you need to infer finer-grained entities in your coarser-grained model (or vice-versa) so that you can reconcile apples with apples.

Today, inferencing for IT models is done as part of the “discovery packs” that you can buy along with your IT management model repository. But not very well, in general. Because the way you write such a discovery module for the HP Universal CMDB is very different from how you write it for the BMC CMDB, IBM’s CCMDB or as a plug-in for Oracle Enterprise Manager or Microsoft System Center. Not to mention the smaller, more specialized, players. As a result, there is little incentive for 3rd party domain experts to put work into capturing inference rules since the work cannot be widely leveraged.

I am going a bit off-topic here, but one interesting thing about standardization of inferencing for IT management, if it happens, is that it is going to be very hard to not use RDF, OWL and some flavor of SPARQL (SPIN or equivalent) there. And once you do that, the XML-based constraint mechanisms (SML or others) are going to be in for a rough ride. After resisting the RDF stack for constraints, queries and basic reconciliation (because the added value was supposedly not “worth the cost” for each of these separately), the XML dam might get a crack for inferencing. And once RDF starts to trickle through that crack, the whole dam is going to come down in a big wave. Just to be clear, this is a prophetic long-term vision, not a prediction for 2009 (unfortunately).

In the meantime, I’d like to take this SPIN feature a… spin (sorry) when I find some time. We’ll see if I can install the new beta of TopBraid Composer despite having used up, a year ago, my evaluation license of the earlier version of the product. Despite what I had hoped at some point, this is not directly applicable to my current work, so I am not sure I want to buy a license. But who knows, SPIN may turn out to be the change that eventually puts RDF back on my “day job” list (one can dream)…

It’s also nice that Holger took the trouble to deliver SPIN not just as a feature of his product but also as a stand-alone specification, which should make it pretty easy for anyone who has a SPARQL engine handy to support it. Hopefully the next step will be for him to clarify the IP terms for the specification and to decide whether or not he wants to eventually submit it for standardization. Maybe to the W3C SML working group? :-) I’d have a hard time resisting joining if he did.

12 Comments

Filed under CMDB, CMDBf, Everything, IT Systems Mgmt, Modeling, RDF, Semantic tech, SML, SPARQL, Specs, Tech, W3C

Less is more: inventory of XPath subsets

Many specifications that manipulate XML content have taken the step to create their own subset of XPath. Typically, they need an XML query/pointer language but full XPath (or XPointer) is overkill for their purpose (I am talking about XPath 1.0 here, XPath 2.0 is usually over-over-kill-and-then-some). Defining a subset of XPath rather than inventing a query language from scratch is attractive because:

  • people are relatively familiar with XPath (at least the most common parts)
  • it is already specified so you can leverage the W3C spec-writing work
  • implementers who have access to an XPath engine get an implementation of the subset “for free” since the XPath engine will process statements from any XPath subset

Here is a quick inventory of spec-defined XPath subsets that I am aware of.

XML Schema

Section 3.11.6 of the XML Schema specification (part 1) defines a subset of XPath used to point to a set of elements that are the target of an identity constraint. Here is the BNF of the abbreviated form of the subset (you can also use the functionally equivalent full-length notation):

Selector   ::=    Path ( '|' Path )*
Path       ::=    ('.//')? Step ( '/' Step )*
Step       ::=    '.' | NameTest
NameTest   ::=    QName | '*' | NCName ':' '*'

Actually, there is a second subset defined, to point to the identifying key from the identified element. It is very similar to the previous one but it allows attribute nodes to be selected, via this modification:

Path       ::=    ('.//')? ( Step '/' )* ( Step | '@' NameTest )

According to the specification, these subsets were defined “in order to reduce the burden on implementers, in particular implementers of streaming processors”. As this article points out, stream-friendliness of an XPath subset is a relative notion. But the subsets above seem, indeed, to fit the bill. They also include many simplifications that “reduce the burden on implementers” but have nothing to do with streaming, such as removing functions and all predicates (rather than simply restricting the content of predicates).
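For those who haven’t bumped into these subsets before, this is their natural habitat: the selector and field XPaths of an identity constraint (plain XSD, nothing exotic about it):

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="catalog">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="part" maxOccurs="unbounded">
          <xs:complexType>
            <xs:attribute name="number" type="xs:string"/>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
    <xs:unique name="uniquePartNumber">
      <!-- first subset: selects the elements the constraint ranges over -->
      <xs:selector xpath=".//part"/>
      <!-- second subset: may also select an attribute, but only at the end -->
      <xs:field xpath="@number"/>
    </xs:unique>
  </xs:element>
</xs:schema>

No predicates, no functions, and attribute selection only at the very end of the field path, which maps exactly to the two BNFs above.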

WS-Management and WS-ResourceTransfer

WS-Management also defines two subsets (it calls them dialects) of XPath. I won’t copy the BNF definitions here, as they are pretty long. You can find them in Appendix D of the specification. They are called “XPath level 1” and “XPath level 2”.

The main use cases driving these dialects had to do with implementing WS-Management in resource-constrained environments, e.g. the board management controller of a server.

WS-ResourceTransfer took this idea from WS-Management and it too defines an “XPath level 1” dialect (see Appendix I of the specification).

Windows EventLog Remoting Protocol v6

Version 6.0 of the Microsoft EventLog Remoting Protocol, new in Vista, adds a filter mechanism (to select events in the log) that is based on a subset of XPath. Streaming appears to be a concern there too (“evaluation of each event MUST be restricted to forward-only, in-order, depth-first traversal of the XML”). And, being Microsoft, they also add some extensions to XPath.

CMDBf (coming soon)

CMDBf also defines a subset of XPath. It is not defined via a restricted syntax but via a limitation of the type of objects returned. There is no BNF provided. You can write your XPath any way you want as long as it only returns objects of the right types (e.g. nodesets containing comment nodes are out of luck). Think of it as “management by objectives” rather than micromanagement. The main driver here is not support for streaming. It’s that XPath nodeset serialization is a pain, so we only want to require it where there is a compelling use case. There is no point creating interoperability challenges for no practical benefit.
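If a BNF-less restriction sounds fuzzy, here is what it means in practice against a made-up record (all names invented):

<app:WebServer xmlns:app="http://example.com/model">
  <!-- installed by provisioning run #42 -->
  <app:port>443</app:port>
</app:WebServer>

An expression like /app:WebServer/app:port is in bounds: it returns element nodes, which have an obvious serialization. An expression like /app:WebServer/comment() is out of luck: it returns a comment node, and nobody wants to have to specify how that comes back on the wire.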

Others?

Except for XSD, all the examples above come from the IT management world, because that’s where I live. There are probably plenty of specifications in other domains which took similar steps, such as the Digital Talking Book ANSI standard (see this section).

And those are only XPath subsets defined as part of specifications. There are also XPath subsets defined by implementations (e.g. ElementTree’s limited XPath support). And others defined for research purposes (e.g. “Univariate XPath”, from this ACM article, that is quickly described in the previously-mentioned post about stream-friendly XPath subsets).

If you know of other interesting XPath subsets, please leave a comment.

Comments Off on Less is more: inventory of XPath subsets

Filed under Everything, Specs, Standards, WS-Management, WS-ResourceTransfer, XPath

Here comes WSTF

A new Web services-related industry body has been announced today: WSTF (Web Services Test Forum). More details about it from Infoworld. My employer (Oracle) seems to be one of the drivers (along with IBM) but I am not personally involved.

A lot of hand-wringing, of course, about its relationship with WS-I. Which is understandable if you consider what WS-I was originally supposed to deliver (profiles, sample applications and testing tools). But not if you consider what it has actually delivered that is relevant (a couple of profiles, some time ago). WSTF could also be compared with the SOAPBuilders Yahoo group, but since that group has seen only two email messages so far in 2008 (last one dated April 2nd), it seems safe to consider it dead. It would be interesting to know why that is though (it used to be pretty active in the early days) and what lesson WSTF may learn from it. Another effort you may want to compare this to is the Microsoft Web Services Protocol Workshops Process. It’s too early to tell, but they may turn out to be more closely related than meets the eye.

I noticed this innocuous-sounding sentence in the press release (warning, PDF): “As an open community, WSTF has made it easy to introduce new interoperability scenarios and approve work through simple majority governance”. You may wonder why this is important enough to figure in the press release.

I interpret it as a dog whistle call (heard only by those for whom it is intended) for the WS-I board. Microsoft’s Paul Cotton responds to it in his quote for Infoworld: “WS-I provides a proven and open organization and process that best suits our customers’ needs”. He also talks about “the incredible industry-wide momentum and leadership of WS-I”, which is indeed not very credible (especially the momentum part). The WS-I process, associated board politics and resulting inaction are what I was talking about in this entry (“veto rules being commonly invoked, stopping most of the activities that the resort was originally planning to offer”).

Speaking of this “Standardstown” blog entry, I should probably soon update it to include WSTF. What should it be? Maybe a trailer park in which customers bring their own lodging, put them side by side and see how they line up?

The current test scenarios seem to focus on fixing the interoperability mess that is WS-Addressing. I assume more will soon be added to test the different WS-* specifications out there. It will be interesting to see what direction WSTF takes after that. Will the payloads of the test messages be obvious dummy payloads (so that the focus is on testing the implementation of the WS-* protocols)? Or will they start to include real payloads (e.g. real purchase orders from real enterprise applications)? How about this: “dear vendor, I will only buy your wonderfully open, standard and interoperable Web services-based application when it is available as a WSTF endpoint and there are three other real-life products (including one from your main competitor) that successfully interoperate with the exact same SOAP messages I will be using”. This could become an interesting tussle between vendors as well as between vendors and buyers.

Alternatively, of course, WSTF could turn into a test of how much difference there is between a “standard” and a publicly specified and interop-tested interaction scenario…

A quick (and unsuccessful) Technorati search for some blog comments returns the “WSTF Dark Retribution Dinorobots Limited Giftset” which “includes all five Dinorobots in their sinister evil incarnation”. Can’t say you were not warned…

[UPDATED 2008/12/10: Gilbert Pilz, who was involved with WSTF from the start (and also left a comment on this entry, see below), wrote a detailed description of the problem WSTF tries to address and how Gilbert and others have structured WSTF to solve it.]

[UPDATED 2008/12/15: Via InfoQ, another long description of the goals and processes of WSTF, this time from Doug Davis.]

[UPDATED 2009/1/5: Chris Ferris also weighs in, including his view on the relationship with WS-I. Having participated in several of the early WS-I plenary meetings, I have to wonder if Chris had any double-entente in mind when he wrote that WS-I helps “the community understand where the bar is”.]

[UPDATED 2009/2/17: A response from Redmond.]

1 Comment

Filed under Everything, IBM, Implementation, Microsoft, Oracle, SOAP, Specs, Standards, Testing

Who said WS-Transfer is for REST?

One more post on the “REST over SOAP” topic, recently revived by the birth of the W3C WS Resource Access working group. Then I’ll go quiet for a bit and let people actually working on it show me why I am wrong to worry about WS-RT.

Before that, I just want to clarify one thing. People seem to assume that WS-Transfer was created as a way to support the creation of RESTful systems that communicate over SOAP. As far as I can tell, this is simply not true.

I never worked for Microsoft and I was not in the room when WS-Transfer was created. But I know what WS-Transfer was created to support: chiefly, it was WS-Management and the Devices Profile for Web Services, neither of which claims to have anything to do with REST. It’s just that they both happen to deal with resources (that word again!) that have properties and they want to access (mostly retrieve, really) the values of these properties. But in both cases, these resources have a lot more than just state. You can call all sorts of type-specific operations on them. No uniform interface. It’s not REST and it’s not trying to be REST. The Devices Profile also happens to make heavy use of WS-Discovery and I am pretty sure that UDP broadcasts aren’t a recommended Web-scale design pattern. And no “hypermedia” in sight in either spec.
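For anyone who has never seen it on the wire, this is the kind of exchange WS-Transfer exists for: a WS-Management Get of one resource instance. I am writing the headers and URIs from memory (and making up the ResourceURI and selector values), so double-check everything against the specifications before reusing it:

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:wsman="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
  <s:Header>
    <wsa:To>http://server.example.com/wsman</wsa:To>
    <wsa:Action>http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</wsa:Action>
    <wsa:MessageID>urn:uuid:d9726315-bc91-430b-9ed8-ce5ffb858a91</wsa:MessageID>
    <!-- the "resource" is identified by a type URI plus selectors,
         not by a plain old URI -->
    <wsman:ResourceURI>http://example.com/resources/logicalDisk</wsman:ResourceURI>
    <wsman:SelectorSet>
      <wsman:Selector Name="DeviceID">disk0</wsman:Selector>
    </wsman:SelectorSet>
  </s:Header>
  <s:Body/>
</s:Envelope>

The response carries the XML representation of the disk in its Body. Retrieving state, yes; uniform interface and hypermedia, no. Next to this Get sits a list of resource-specific custom actions, which is the whole point.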

A specification is not RESTful. An application system is. And most application systems that use WS-Transfer don’t even try to be RESTful. Mocking WS-Transfer for not being as good as HTTP to support REST systems is like mocking an airplane for not being as good as your hatchback for grocery shopping. It’s true, but who cares.

So let’s not reflexively attack WS-Transfer for assumed purposes. And similarly, let’s not reflexively defend WS-Transfer as a good way to build RESTful systems.

Just to clarify, this is not meant as a defense of WS-Transfer. I think that, at least in the context of its original purpose, it should be gutted to only its GET operation. The PUT and DELETE tasks should be handled by domain-specific operations. Which would have the consequence of making it look less like a REST wannabe. But my recommendation aims at improving its applicability to the management domain, not at making it comply to an architecture style that is not (at least currently) used in that domain.

4 Comments

Filed under Everything, IT Systems Mgmt, Manageability, Mgmt integration, REST, SOAP, Specs, WS-Transfer

WS Resource Access working group starting at W3C

Things went quiet for a while, but the W3C Web Services Resource Access Working Group has finally taken life, as was announced last week. It’s a well-known PR trick to announce bad news on a Friday so that it goes undetected; is it a coincidence that W3C picked a Friday for this announcement?

As you can tell by this last remark, I have no trouble containing my enthusiasm about this new group. Which should not come as a surprise to regular readers of this blog (see this, this, this and this, chronologically).

The most obvious potential pushback against this effort is the questionable architectural need to redo over SOAP what can be done over simple HTTP. Along the lines of Erik Wilde’s “HTTP over SOAP over HTTP” post. But I don’t expect too much noise about this aspect, because even in the blogosphere people eventually get tired of repeating the same arguments. If anyone really wanted to put up a fight against this, it would have happened when the group was first announced, not now. That resource modeling party is over.

While I understand the “WS-Transfer is just HTTP over SOAP over HTTP” argument, this is not my problem with this group. For one thing, this group is not really about WS-Transfer, it’s about WS-ResourceTransfer (WS-RT) which adds fine-grained resource access on top of WS-Transfer. Which is not something that HTTP gives you out of the box. You may argue that this is not needed (just model your addressable resources in a fine-grained way and use “hypermedia” to navigate between them) but I don’t really buy this. At least not in the context of IT management models, which is where the whole thing started. You may be able to architect an IT management system in such a RESTful way, but even if you can it’s too far away from current IT modeling practices to be practical in many scenarios (unfortunately, as it would be a great complement to an RDF-based IT model). On the other hand, I am not convinced that this fine-grained access needs to go beyond “read” (i.e. no need for “fine-grained write”).

The next concern along that “HTTP over SOAP over HTTP” line of thought might then be why build this on top of SOAP rather than on top of HTTP. I don’t really buy this one either. SOAP, through the SOAP processing model (mainly the use of headers, something that WS-RT unfortunately butchers) is better suited than HTTP for such extensions. And enough of them have already been defined that you may want to piggyback on. The main problem with SOAP is the WS-Addressing tumor that grew on it (first I thought it was just a wart, but then it metastasized). WS-RT is affected by it, but it’s not intrinsic to WS-RT.

Finally, it would be a little hard for me to reject SOAP-based resource access altogether, having been associated with many such systems: WSMF, WSDM/WSRF, WS-Management and even WS-RT in its pre-submission days (and my pre-Oracle days). Not that I have signed away my rights to change my mind.

So my problem with WS-RAWG is not a fundamental architectural problem. It’s not even a problem with the defects in the current version of WS-RT. They are fixable and the alternative specifications aren’t beauty queens either.

Rather, my concerns are focused on the impact on the interoperability landscape.

When WS-RT started (when I was involved in it), it was as part of a convergence effort between HP, IBM, Intel and Microsoft. With the plan to use this to unify the competing WS-Management and WSDM/WSRF stacks. Sure it was also an opportunity to improve things a bit, but 90% of the value came from the convergence/unification aspect, not technical improvements.

With three of the four companies having given up on this, it isn’t much of a convergence anymore. Rather than paring down the number of conflicting options that developers have to choose from (a choice that usually results in “I won’t pick either since there is no consensus, I’ll just do it my own way”), this effort is going to increase it. One more candidate. WS-Management is not going to go away, and it’s pretty likely that in W3C WS-RT will move further away from it.

Not to mention the fact that CMDBf (and its SOAP-based graph-oriented query protocol) has since emerged and is progressing towards standardization. At this point, my (notoriously buggy) crystal ball shows a mix of WS-Management and CMDBf taking the prize overall. With WS-Management used to access individual resources and CMDBf used to access any kind of overall system view. Which, as a side note, means that DMTF has really taken this game over (at least in the IT management domain) from W3C and OASIS. Not that W3C really wanted to be part of the game in the first place…

11 Comments

Filed under CMDBf, DMTF, Everything, HP, IBM, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Query, REST, SOAP, SOAP header, Specs, Standards, W3C, WS-Management, WS-ResourceTransfer, WS-Transfer

First in-depth look at Microsoft’s Oslo and the “M” modeling language

Microsoft’s PDC is taking place this week and more details were shared with the attendees about project Oslo, an effort announced last year to drastically improve the use of models across the application lifecycle. Some code is available (I think the Quadrant code is only for PDC attendees but the Oslo SDK is available to everyone). I am not at PDC, I didn’t see any presentation and I didn’t download any code. But Microsoft has also posted technical details on MSDN and, as far as I am concerned, that’s the most time-effective way to spend a couple of hours learning about Oslo. BTW, the way they share these early design descriptions and agree to make their evolution public is admirable.

For those who only want to spend 10 minutes rather than 2 hours, here are the thoughts that came to my mind as I was reading.

Overall I am somewhat underwhelmed, but not necessarily in a bad way. I know that’s a little schizophrenic so let me explain. After hearing a lot about how Oslo was the next big thing in modeling, it is a little surprising to read a document that can be summarized as “modeling is good, so go create some SQL tables and store them in an RDBMS”. That’s the underwhelming part. But on the other hand, it is more down to earth and practically-minded than I feared. And this is just a summary, in truth there is more than just “use SQL”.

Half of the MSDN documentation basically explains how to use SQL Server to store application models (as of today, the “Developing Models for the Metadata Store” section has only one sub-section, “SQL Server Guidelines for Modeling in the Oslo Repository”). Does this mean that all .NET applications will eventually have to carry with them a deployment of SQL Server 2008 even if they don’t use it to store their operational data? Sure there are a few extra repository services (e.g. finer-grained change auditing) but most Oslo repository services are generic SQL Server features. That section has quite a lot of T-SQL, but it’s pretty readable. It also has a lot of dependencies on following naming conventions which makes me think that directly creating T-SQL code is not the best approach.

Fortunately there is an alternative, the “M” language. It’s a schema language with a built-in constraint mechanism. I found it more data-oriented (as opposed to resource-oriented) than I expected. Even though “each model is really a set of data structures, relationships, and constraints in serialized form“, there is a lot more support for data structures and constraints than for relationships. It’s just a foreign key. Relationships aren’t items and don’t have any property (or “field” as they’re called in “M”). For example, the relationship between a student’s enrollment record and a given class can’t have, as property, the grade that the student got for that class (as in the example in section 4.1.4 of the second LC of SML). To model this in “M” you need to create another item (e.g. “courseEnrollment”) and have a relationship from the student to that item and another one from that item to the “course” item itself. Or to replace the foreign key in the student table with a complex structure that contains both the foreign key and the properties of the relationship. At the end it has the same expressiveness potential, but in a less streamlined form. I assume Microsoft took this approach for performance reasons.

I am going out on a limb here, but it may also be a difference between development-time concerns and operation-time concerns. During development (all the way to testing and packaging), you can still mostly get away with a relatively simple containment structure. You care about the components of your application and how they are packaged inside or next to one another. Sure you care about who calls who outside of the deployment unit but that’s not as core a concern as getting your class dependencies right, your tests in order and your installer configured. In fact, some of the “who calls who” bindings will only be realized at runtime. Oslo, at least so far, clearly seems more focused on development time than operations so support for a relationship-rich model may not seem critical. At operations time, on the other hand, you don’t really care so much about how things were packaged before installation. You care a lot more about who invokes who (especially for modern distributed applications), what the network layout is, what resources a ticket is attached to, etc. The model looks a lot more like a graph with complex relationships. Something that “M” doesn’t seem ideally suited for.

Except for this caveat, I like “M”. It’s not anti-XML (you can represent values as XML if you’d like) but it avoids the “the answer is XML/XSD what is the question” approach to modeling that is sometimes a little too prevalent. “M” is a much better schema language for IT systems than XSD. I especially like its approach to types. A value is not intrinsically of a given type. A type is a condition that you happen to meet or not at the current time (“take heart little field, you can be anything you want when you grow up”). As such, you can be of several types at the same time. Refined types are potatoes inside potatoes (not sure if “M” supports definition of types as unions and/or intersection of existing types, for intersection I want to write something like “type NewType : OldType1 where this in OldType2” but there is no “this” in “M”). That approach to types (and the way constraints leverage types) is reminiscent of RDF/OWL. It’s a classification more than a typification, but I understand why they didn’t want to call it “class”. The similarities with RDF/OWL don’t go any further. As I wrote earlier, “M” is very data-focused and not resource-focused: as far as I can tell “M” types are defined syntactically, not semantically (the semantics come as a consequence). For example, I don’t think that you can assert that a given item representing a person is of type “friendly” if there is no corresponding data in the item. You’d have to first create a boolean field called “friendly” and define that those that have that field set to “true” are of type “friendly”. Unlike in RDF/OWL where you can just assert that a subject is “friendly”.

Here is another reason why you can’t have “semantics-only” types: “if you do not specify the type of a field or value, M infers a type for it”. Two things don’t sound quite right to me here. First a detail: the sentence (like others in the doc) talks about “the” type of a field or value, while there can be more than one. More importantly, what’s the point of this feature? How does it help me to have my IRC nickname classified as a post code or as a password just because it happens to be made of a compatible combination of letters and numbers? Maybe it makes sense as a storage optimization, but why does it make sense to expose this to the user?

I also like the way “extents” work. The current description of that feature is pretty limited, but based on how it is used in other parts I think one of its usages is to support a non-OO equivalent to inheritance: create two extents, one for the “superclass” and one for the “subclass” where each only contains the properties/fields defined at that level. You should get both of them in order to have the full picture (all the fields). This is, if I understand it correctly, similar to something I have been (unsuccessfully so far because “XML doesn’t do it this way”) trying to sell to the DMTF CMDBf working group: model inheritance through a set of non-overlapping records rather than dealing with a type hierarchy on record types. It’s not just that it makes relational storage easier (even though it does and that’s probably why “M” does it this way), it also makes your query/select operations a lot easier to specify and implement.

All in all (and without having gone through the exercise of defining actual models in “M”), it seems like a fine schema language (except that its dependency on the CLR base types is impractical for users outside of the Microsoft universe) but I am not sure if it is beefy enough to be a good IT management metamodel. When the document says that “the Oslo repository provides open and flexible access to the data it contains, which enables direct access to SQL Server views of the underlying data. There are no complex data access layers or APIs” it sounds better than saying “it’s just SQL, so map your model to it and if you want relationships or type inheritance just build it on top of it and quit whining”. But it is an admission of limitation at the same time as a claim of simplicity. I also smell an assumption that LINQ will provide enough hand-holding that non-SQL-savvy developers will be ok. We’ll see.

And then there is MGrammar. Things get a little confusing at that point if you try to relate MGrammar to “M”. Actually, the FAQ states that “the M language consists of three parts: MGraph, MSchema and MGrammar”. This came as a bit of a surprise to me since at that point I had finished reading (not in detail but not too quickly either) the “M” documentation and I hadn’t seen these names mentioned once. Looks like there are some documentation consistency issues here, but that’s hardly surprising considering this is a “hyper-early (pre-alpha)” release as Doug Purdy puts it.

I think that everything that I have referred to as “M” above is MSchema.

MGrammar is something different altogether: it’s the source of the Domain Specific Language (DSL) references we’ve been hearing in relationship with Oslo. Technically, MGrammar is a BNF on steroids plus an automatically generated parser for your syntax. Cute. I assume that “M” (i.e. MSchema) is built as MGrammar-defined DSL but I am not sure why I would care. I am all for reuse and if someone at Microsoft thought that there was something reusable in the way they defined MSchema then it’s a good thing to expose this tool. But where does it come into play in application modeling? The last thing I want is people inventing completely independent languages to describe different domains. I am all for specialization, but a common underlying metamodel is pretty nice when you have to make sense of a whole system. I don’t see any such commonality in MGrammar: as far as I can tell it can be used to define anything from PostScript to sonnets.

From the FAQ, the connection point between MGrammar and MSchema is MGraph (MGrammar languages are parsed into an MGraph, MSchema “builds on MGraph”). That’s nice, but since neither the MSchema nor the MGrammar documentation mention MGraph I don’t really know what to make of this. David Chappell’s white paper also mentions MSchema and MGrammar but not MGraph. The introduction to the MGrammar Language Specification states that “the data that results from Mg [a.k.a. MGrammar] processing is compatible with Mg’s sister language, The Oslo Modeling Language, M, which provides a SQL-compatible schema and query language that can be used to further process the underlying information“. Compatible? I need more information here. In any case, MGrammar sounds like a fun project for a techie. Who am I to deny Microsoft engineers their fun. Jokes aside, I am probably missing something here seeing how prevalent the DSL message is in all discussions of Oslo. Look at the “highlights of this book” section for the upcoming Oslo/M book from the creators of the “M” language: half of it is about the DSL support and there must be a reason beyond pure geekery. As a side note, if you buy this book you need to understand what little shelf life it will have (I can give you a good price on a lightly-used Hailstorm/”.Net my services” specification book).

Aside from the “M” language itself, there are a few models described in the documentation. One corresponds to BPMN (actually, it says that it “closely aligns with” BPMN 1.1; does this imply that they are not quite the same?). The fact that this model supports imports from Visio is a nice feature.

The Application model (one of the places where you can see “extents” in action) scares me a little bit because I doubt that two different people would use the same “extents” to describe the same software elements. Unless of course that’s being done for them by a pre-defined mapping to their development framework (.NET) enacted by their common development tool (Visual Studio). Which may be the assumption. Yet, the Application model is defined in generic terms, not Microsoft-specific (with a couple of slip-ups, like a WebApplicationModule being defined as a “Web application (module) implemented by IIS or WAS”). Maybe I’ll feel better about the generic applicability of this Application model when I see a full-fledged description (e.g. including relationship semantics as captured in foreign key field names) and an example.

At the bottom of that Application model, there is a lonely “Manageable” type to use if you have a LifecycleState field. This reinforces my impression that despite the claims to link development time with operational time, a lot of the focus to date has been on the former rather than the latter.

The ServiceModel model will look familiar to anyone who knows SCA and is presumably complementary to the WorkflowModel and WorkflowServiceModel models, both of which are directly mapped to Windows Workflow Foundation. I guess that’s where Oslo and Dublin touch one another. I am still glad they are now clearly separated.

There is also a “Quadrant” model which concerns me a bit (it seems to be used to store customization of the Quadrant UI which, while convenient to store straight in the repository, doesn’t strike me as necessarily belonging there).

At this point, the question is not whether Microsoft can build Oslo as it is currently defined. SQL Server 2008 already exists, the usage guidelines aren’t unrealistic and even the “M-to-T-SQL” translation doesn’t seem too hard for Microsoft to implement (the SDK presumably already contains an implementation). I have no doubt they can deliver the system they describe. What I don’t know is whether and how it will actually be useful.
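
To give a sense of why the “M-to-T-SQL” part doesn’t look like the hard part, here is a minimal Python sketch of what such a translation could produce. This is my guess at the general shape, not the mapping actually used by the Oslo SDK, and the “M” type names in it are assumptions.

```python
# My guess at the general shape of an "M-to-T-SQL" translation, not the actual
# mapping used by the Oslo SDK. A toy type declaration becomes a table and the
# extent name becomes the table name.
M_TO_SQL_TYPES = {"Text": "nvarchar(max)", "Integer32": "int", "Logical": "bit"}

def type_to_table(extent_name, fields):
    """fields: list of (field_name, m_type, nullable) tuples."""
    columns = []
    for field_name, m_type, nullable in fields:
        sql_type = M_TO_SQL_TYPES[m_type]
        null_clause = "NULL" if nullable else "NOT NULL"
        columns.append("    [%s] %s %s" % (field_name, sql_type, null_clause))
    return "CREATE TABLE [%s] (\n%s\n);" % (extent_name, ",\n".join(columns))

# Something like  type Application { Name : Text; InstanceCount : Integer32; }
# with an extent "Applications" might come out as:
print(type_to_table("Applications",
                    [("Name", "Text", False), ("InstanceCount", "Integer32", True)]))
```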

Describing “M” in detail is good. Describing how the repository is implemented on top of SQL Server 2008 is interesting but not so relevant. What I’d like to see is a description of how all this gets used. How does it change the Visual Studio experience? How does it change the installation process/format? How does it support round-tripping between lifecycle stages (e.g. if the developer changes the workflow model, does the original BPMN model get updated accordingly)? How does it relate to SLAs and policies? How does it apply to application monitoring? How does it apply to configuration management, to the change process? Etc. In short, what’s the Oslo ecosystem going to be?

These questions aren’t completely ignored in the MSDN documentation, but they are dispensed with in a couple of pages: “Application Development and Lifecycle Improvements” and “IT Operations Benefits“. The former states, for example, that “having the Oslo repository act as a central location for these models also enables a connection between the design and implementation models. This connection helps prevent these models from becoming disconnected during the development process“. Which all sounds good but is just a set of assertions that we have heard many times before (not just from Microsoft). How do “M” and the Oslo repository really make this true?

On the “IT Operations Benefits” side, things are equally blurry: “the Oslo repository can store all types of machine and application configuration data. When consistently updated, this configuration data is a catalog of the current state of all monitored machines and applications in the environment“. Notice the “when consistently updated” hand wave. That’s kind of the crux if you really want to manage across the lifecycle. How will they achieve this consistency? By centralizing all changes through a model-driven controller a la SDM/SML? Through ongoing discovery and/or change notifications? By relying on good old ITIL/MOF processes?

The FAQ declares that “having a common approach does not necessarily correlate to one physical store, but more of a federated model and we believe that some of the new Repository, along with existing investments in both Configuration Management Database (CMDB) and Team Foundation Server (TFS), will form the foundation for a common Microsoft metadata strategy and should be supported across our set of products“. OK, but who is the source of truth for application configuration data? The Oslo repository or the CMDB? Is one the desired state and the other the observed state? Does the CMDB go back to simply being a Service Desk (and if so, does the Oslo repository take on the responsibility to enforce change processes, something that requires more than the security model in Oslo)? If the CMDB is still going to use SML as its metamodel, how do you efficiently federate across such different metamodels as SML (i.e. XSD + schematron + relationships) and “M”?

Lots of questions remain. What will Oslo have turned into in a few years? A business process design/implementation/monitoring suite (there is a strong workflow feel to many parts)? A generic drag-and-drop programming environment (“the fact that entire features are already described by models means that for a wide array of application and component categories you can start using visual tools to design and implement your components“)? A control center for end-to-end application management? All of the above? Nothing?

This was just a quick brain dump after reading the documents. Actually, I just realized it somehow got pretty long (congrats if you’re still reading). I hope this post is not too disorganized. Oslo is an interesting effort but, as Microsoft is the first to admit, it’s at a very early stage. I am just surprised that this first release spends so much time on the “how” rather than the “what”. Maybe it’s just because I only got my information from the MSDN documentation. We’ll see when more content from PDC finds its way online. I just want the slides; watching recorded presentations is rarely time-efficient (and you can expect them to require Silverlight).

Speaking of Silverlight, there is this new site on Oslo if you think watching some videos is worth installing Silverlight. Those screenshots don’t motivate me sufficiently.

[UPDATED 2008/10/30: Rather than going to bed I Googled around a bit and found a post by Martin Fowler that answers some of my questions about MGrammar, MGraph and MSchema. MGraph is for instances, MSchema is for types. It answers some plumbing questions, but I still have questions about expected usage and relevance to application modeling.]

[UPDATED 2008/10/30: I also found the recordings and slides from past PDC sessions. Nice job Microsoft for this quick turnaround time, even if you require Silverlight and/or the PPTX viewer. The sessions are:

  • TL23 A Lap around “Oslo” (Doug Purdy, Vijaye Raji)
  • TL27 “Oslo”: The Language (Don Box, David Langworthy)
  • TL18 “Oslo”: Customizing and Extending the Visual Design Experience (Don Box, Florian Voss)
  • TL28 “Oslo”: Repository and Models (Chris Sells)

The first two sessions (delivered on Tuesday) have a replay and slides; the others should, I assume, follow soon.]

[UPDATED 2008/11/3: A nice overview of Oslo by Aaron Skonnard. Unlike most other Oslo articles over the last week, this one tries to paint the (yet-to-be-realized) full picture of the Oslo ecosystem. He mentions that “other Microsoft products and technologies are expected to build on Oslo to provide other runtimes. A few that have already been announced include Microsoft System Center (Operations Manager) and Team Foundation Server (TFS) in Visual Studio Team System”. It’s interesting that he qualifies System Center as, more specifically, “Operations Manager” rather than “Configuration Manager”, but I wouldn’t read too much into it at this point.]

5 Comments

Filed under Application Mgmt, BPM, Business Process, CMDB, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Middleware, Modeling, Oslo, SML, Specs, Tech

CMDBf work in progress

The DMTF CMDBf working group (of which I am part) has released a work in progress version of the CMDBf specification. The changes from the submitted version are minor. It’s mostly a move to the DMTF template. More important (but not drastic) changes should appear in the next release.

Comments Off on CMDBf work in progress

Filed under CMDB Federation, CMDBf, DMTF, Everything, Graph query, Specs, Standards

Reviewing DMTF OVF as a “preliminary standard”

OVF 1.0.0d is out as a “preliminary standard” so I gave it a quick read over the weekend. Things have not changed much since the “work in progress” document published this summer, which itself wasn’t a big change from the original specification. As I wrote in the review of the “work in progress”, the DMTF tightened the language of the specification more than it added features.

Since there aren’t too many technical changes (see the end of this post if you’re interested in a few), the interesting discussion is about the marketing of this specification. And boy does it have wings on that front. The level of visibility the specification has received is pretty amazing, especially considering that it doesn’t really do that much technically. But you wouldn’t know it by reading all the announcements about OVF:

  • VMware supports OVF packaging (which version?) with its new VMware Studio.
  • Citrix uses OVF in Kensho to deliver platform-agnostic VM management.
  • An Open Source “implementation” of OVF has been created. I put “implementation” in quotes because, since OVF per se doesn’t do much, its implementation is mostly a specialized command line editor for its XML descriptor (see the sketch after this list). It requires a vendor-specific runtime for deployment/activation. This is not a criticism of the open source project BTW, just a statement of fact about the spec.
  • Enomaly lists “OVF format support” on its roadmap for Q1 2009.
  • Microsoft support for OVF in products is supposedly “on the board”, which doesn’t mean very much, but their overall marketing/PR response to OVF has been surprisingly positive for a standard that they don’t control.

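As promised above, here is what I mean by “a specialized editor for an XML descriptor”: most of the work an OVF tool does at packaging time is reading and tweaking the envelope. A minimal Python sketch follows; the namespace and element/attribute names are from my reading of the 1.0.0d document, so treat them as assumptions to check against the spec.

```python
# A minimal sketch of the kind of descriptor manipulation an OVF tool spends
# its time on. The namespace and element/attribute names below are from my
# reading of OVF 1.0.0d; double-check them against the spec before reuse.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"  # assumed OVF 1.0 envelope namespace

def list_virtual_systems(descriptor_path):
    """Return the ovf:id of every VirtualSystem child of the Envelope."""
    root = ET.parse(descriptor_path).getroot()
    systems = root.findall("{%s}VirtualSystem" % OVF_NS)
    return [vs.get("{%s}id" % OVF_NS) for vs in systems]

print(list_virtual_systems("appliance.ovf"))  # e.g. ['webtier', 'dbtier']
```
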
I have criticized the DMTF marketing efforts in the past (“give away pens and key chains”) but I must admit that, to the extent that DMTF had a significant role in promoting OVF adoption (in addition to marketing efforts directly from the vendors), it is a very nice marketing success. Well done, and so much for my cynicism. OVF may also have benefited from all the interest in the general topic of virtualization/cloud standards (the “cloud” association is silly, of course, but as we’ve just seen I am not a marketing genius) and the fact that there isn’t much else to talk about on these topics. So by default OVF becomes the name to put on your “standards” banner. Right place at the right time for the vendors behind it.

Speaking of the vendors, I have no insight into the functioning of the OVF working group, but judging by the specification’s foreword VMware is throwing plenty of resources at DMTF: it employs the working group chair and both co-editors, which is pretty atypical in my experience with standards efforts. People are usually sensitive to appearances of one company having disproportionate influence and try to distribute responsibilities around, at least on paper. Add to this VMware’s recent ramp-up at the DMTF board level. They seem to know what they want. And indeed I can see how the industry leader would want some basic level of standardization, but not too much, which is currently just what OVF offers. We’ll see what’s next in store, if anything.

The specification itself is not marketing-free. According to line 122, “it supports the full range of virtual hard disk formats used for hypervisors today, and it is extensible, which will allow it to accommodate formats that may arise in the future”. Sure, in the same way that my car fully supports passengers of all nationalities (and is extensible enough to transport citizens of yet-to-be created countries – and maybe even other planets, as long as they come with buttocks to sit on). Since OVF doesn’t really do anything with the virtual hard disk formats, it can “support” pretty much any such format.

Speaking of extensibility, OVF clearly tries to have a good story there. Section 7.3 tries to move away from the usual “hey, it’s XML, you can add elements/attributes anywhere” approach towards the definition of new “sections”. This seems a bit drastic. Time will tell if this is visionary or short-sighted. OVF also plans to move towards “an extension model based on the design of the open content model in XML Schema 1.1”. I am not following XSD 1.1 too closely, but it is wise for OVF not to build too much dependency on it, at least for now. And it seems to me that an extension model is not something that you “plan […] to add” later but rather something you need to define from the start (sounds like the good old “the next version will add versioning support”, or “no keyboard detected, press F8 to continue”).

But after all this comes what looks to me, from an extensibility perspective, like a big no-no: using (section 8.1) simple strings (e.g. “vmx-4”, “xen-3”) to represent types of virtual systems. You’d think that in 2008 people would have heard about URIs as a way to allow extensibility and prevent name clashes. On further reading, this doesn’t seem to be the fault of OVF as they get this property (vssd:VirtualSystemType) straight out of the politely named DMTF SVP (System Virtualization Profile) specification, itself a preliminary standard. But that’s not much of an excuse, because I suspect a large overlap of participation between the two groups and, in any case, you don’t have to take dependencies on something that’s not right (speaking as someone who authored several specs that took a dependency on WS-Addressing, I shouldn’t give lessons). Either way, I am not on top of all virtualization-related work in DMTF, but it seems to me that if they are not going to use URIs then someone should step up and maintain a registry of these identifying “virtual system type” strings.

BTW, when left to its own devices OVF does a better job. For example, it properly uses URIs to identify the virtual disk format (section 5.2).

One of the few new features is the addition of the ovf:bound attribute on virtual hardware element items (section 8.3) to specify whether the item description represents the normal, minimal or maximal allocation. My head spins a bit when trying to apply this metadata to the rasd:Limit property (with ovf:bound=”min” the value of the rasd:Limit element would represent the minimal value of the maximum quantity of resources that will be granted, which takes some parsing effort), but I think it more or less squares out.
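
For what it’s worth, here is my reading of that tongue-twister spelled out in a small Python sketch with made-up numbers (the interpretation is mine, not normative text from the spec):

```python
# Made-up numbers to spell out my reading of ovf:bound applied to rasd:Limit.
# rasd:Limit is the maximum quantity of the resource that will be granted;
# ovf:bound says whether an Item carries the normal, minimal or maximal value
# for that maximum (units per rasd:AllocationUnits, assumed to be MB here).
memory_items = [
    {"bound": "normal", "Limit": 4096},  # the default cap
    {"bound": "min",    "Limit": 2048},  # the lowest acceptable value for the cap
    {"bound": "max",    "Limit": 8192},  # the highest useful value for the cap
]

# So a deployment tool could tune the cap anywhere between the "min" and "max"
# Items and fall back to the "normal" one by default.
low = next(i["Limit"] for i in memory_items if i["bound"] == "min")
high = next(i["Limit"] for i in memory_items if i["bound"] == "max")
print("memory cap can be tuned between %d and %d MB" % (low, high))
```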

The final standard should not differ greatly from this version, so at this point we pretty much know what OVF will be technically. The real question is how it will be used and what, if anything, is going to come to complement it.

[UPDATED 2008/10/14: Good timing. OVF-loving Kensho just launched.]

3 Comments

Filed under DMTF, Everything, IT Systems Mgmt, Manageability, Open source, OVF, Specs, Standards, Tech, Utility computing, Virtualization, VMware

State modeling: party over, go home now.

Is the Northwest weather softening Savas? Is it the food? I just read the “how do I model state? let me count the ways” article that he, Ian Foster, Paul Watson and Mark McKeown published in the September 2008 Communications of the ACM. In the article, the authors attempt to recap (and advance?) the 5-year-old debate between the WSRF, HTTP-only and “no convention” (e.g. Zen-SOAP as used in CMIS) approaches to interacting with stateful resources over the Web. If you were anywhere near OGF (then called GGF) around 2003, you know what I am talking about. And you remember how heated the arguments were. There was something about this subject (or maybe it was the people involved) that consistently generated great showmanship (and some bruised egos) in the debates.

With that in mind, reading this article felt like watching a Chinese opera adaptation of Apocalypse Now. Or listening to Heavy Metal with the bass dialed down to zero.

This would have been a very useful article to have in 2003. At the time, it would have clearly framed the question, shown the overwhelming similarities and small differences between the approaches and allowed people to see that there wasn’t actually that much to debate at a fundamental level, but mainly practical considerations to juggle. It may have prevented the quasi-religious war that erupted.

It took a while, but that period of religious war is well over now and we are firmly in the “I’ve heard you, you’ve heard me, do what you want, I’ll do what I want” stage. WSRF people are still doing WSRF (or equivalents like WSRT). REST people are HTTPing right and left. They don’t meet much, but when they do they don’t bump shoulders anymore. And in a way this article is a good illustration of this much more dispassionate environment.

So why am I complaining? Because these fights were fun! At least from a spectator’s point of view, but I suspect that Savas and the gang had plenty of fun too (not sure about the other side who, at least at first, expected a “why are you throwing away OGSI” kind of pushback rather than this more radical-sounding response).

I printed this ACM article partly on the off chance that it would provide some new way to look at the problem, one that hadn’t emerged in the past five years. But in retrospect I think my true motivation was that I expected it to capture, like in the old days, some of the entertainment value of a radio talk show. Instead, the excitement level in this article is in the league of NPR’s StarDate astronomy report.

I feel cheated. I haven’t learned anything new and I haven’t been entertained either. This article feels like the end of the party, when the bottles are being put away, the lights are flickering and bad music is playing to nudge the last guests out of the house.

Now that I am grumpy, I guess I have to point out a few highly questionable statements in the article in retribution:

“Fortunately, there seems to be industry support for an integration of the WS-Transfer and WS-RF approaches, based on a WS-Transfer substrate – the WS-ResourceTransfer specification.” See the last two paragraphs of this entry.

“Support for WS-Addressing has since become quasi-universal, and now few find its use objectionable.” Time to pull out the Victor Hugo quote I have been saving for a special occasion: “Et s’il n’en reste qu’un, je serai celui-là“ (“and if only one remains, I shall be that one”). But frankly I very much doubt that I am the only one still shaking his head sadly in contemplation of WS-Addressing.

In fact, Stu agrees with me on this (see item #6a in his list of disagreements with the article). Looks like he too was made a bit grumpy by the article, for different reasons.

There is one more debatable choice in this article, and it’s more serious than the two above. It introduces an arbitrary difference between the WS-Transfer and HTTP approaches. Compare the third lines of tables 4 and 5 (retrieving the status of a specific job). According to the article, WS-Transfer gives you the choice between two options:

  • retrieve the entire state of the job and fish for the status field inside of it (the approach in table 4), or
  • “a new operation (for example GetEPRtoPart) is defined that requests that a new state representation be exposed, through a different EPR, representing parts of the original state representation”

The way it works for HTTP, on the other hand, is through an “application-specific convention” (in this example, appending “/status” at the end of the URL).

Except there is no reason why this third approach cannot be used in the WS-Transfer scenario. The article says that “in WS-Transfer, the same effect [accessing a subset of the resource state] can be achieved, but only by defining an auxiliary operation that returns an EPR to a desired subset”. What, pray tell, prevents a WS-Transfer implementation from having an “application-specific convention” just like the HTTP kids next door? It can be at the URL level (e.g. adding “/status”). Or at the EPR reference parameter level. The latter is actually exactly what WS-Management does, using the wsman:SelectorSet header. It does not, as the article claims, define a special operation to get these fine-grained EPRs. It uses an application convention to do so (which, in the case of WS-Management, happens to be “whatever Windows implements”, but that’s a different debate).
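
To spell out how thin the difference really is, here is a small Python sketch of the HTTP side of the table (the job URL and field names are made up): whether the client fishes the status out of the full representation or relies on the “/status” suffix, it is leaning on an application-specific agreement either way.

```python
# The job URL and field names are made up; this just spells out the two HTTP
# options from the article, both of which rest on an application-specific
# agreement between client and service.
import json
from urllib.request import urlopen

JOB_URL = "http://example.org/jobs/42"  # hypothetical job resource

# Option 1: retrieve the entire job representation and fish for the status field.
with urlopen(JOB_URL) as response:
    job = json.load(response)
status = job["status"]  # the client has to know the field name

# Option 2: rely on the convention that appending "/status" exposes just that part.
with urlopen(JOB_URL + "/status") as response:
    status = response.read().decode().strip()  # the client has to know the convention

print(status)
```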

By the way, this question of “convention over specification” is where I don’t quite follow Stu (see point #4 in his aforementioned list of disagreements) and his invocation of the “hypermedia constraint”. I don’t see how any of the four specifications he calls to the rescue (HTML form submission, XForms submission options, Atompub service documents and URI templates) would spare me from needing an application-specific agreement about how to retrieve the state (as opposed to another subset of the representation, like the creation date). URI templates, for example, might help express this agreement, but they don’t replace it.

The article does a pretty good job at showing how close the alternatives are (even though, as illustrated above, it still portrays them as more different than they need to be). I am not saying it’s a bad article for the Communications of the ACM. I am saying that the Communications of the ACM is a bad medium for one of the few nerdy debates that have genuine entertainment value.

[UPDATED 2008/10/2: Jim Webber, Savas Parastatidis and Ian Robinson provide a full REST example for InfoQ: how to GET a cup of coffee. Includes state considerations discussed in the ACM article.]

2 Comments

Filed under Articles, Everything, Grid, People, REST, SOAP, SOAP header, Specs, Standards, Tech, WS-Management, WS-ResourceTransfer, WS-Transfer