Category Archives: Standards

WS convergence for the visually oriented

Since the convergence roadmap was introduced last week I have explained it to a few people. I found that a graphical representation of how all the specifications mentioned in the roadmap relate to one another helps a lot. So, in case other people can use it, here is the animated powerpoint description of the proposed converged stack. It has to be shown in slideshow mode so the animations work.

Creating this slide reminded me of the (much nicer) animated slides that Jay Unger from IBM created for the introduction of WSRF. Those who were around at the time will surely remember them. For all the luck they brought to WSRF. Fortunately, I am not superstitious.

[UPDATE on 2007/11/27: the  link to the roadmap on hp.com doesn’t work anymore, but you can still find the roadmap on the IBM site. Also, it was brought to my attention that the animations in the powerpoint slide don’t work with powerpoint 2000 (i.e. version 9.0 that is part of Office 2000). I know they work on powerpoint 2003 (version 11.0, part of Office 2003) since it’s what I used to create it. Not sure about powerpoint 2002 (aka version 10.0 that was part of Office XP). Without the animations, this slide doesn’t make much sense.]

Comments Off on WS convergence for the visually oriented

Filed under Everything, Standards

HP/IBM/Intel/Microsoft roadmap

HP, IBM, Intel and Microsoft just released a roadmap to describe how we plan to converge the myriad of Web services specs currently used for resource access, eventing and management. Basically, it is about converging WS-Management and the stack under it with WSDM and the stack under it. This roadmap should make users of these specs feel a lot more at ease. It is also specific enough to give a good indication of the smart way to architect systems today so that they will align with the reconciled version. Even though we don’t have spec candidates ready to share at this point, we thought it would be valuable to let people know of the direction we are heading in. The resulting set of specifications will be based on the currently existing WS-Transfer, WS-Eventing and WS-Enumeration. Which, as it happens, are published as member submissions by the W3C today.

Comments Off on HP/IBM/Intel/Microsoft roadmap

Filed under Business, Everything, Standards

Updating an EPR

The question of whether Reference Parameters can/should be used as the SOAP equivalent of cookies recently came back on the WS-Addressing mailing list. This is something more along the lines of session management than addressing. See Peter Hendry’s email for a clear description of his use case. The use case is reasonable but I don’t think this is what WS-Addressing is really for, as I explain in bullet #3 of this post. What interested me more was the response that came from Conor Cahill and his statement that AOL is implementing an “EndpointReferenceUpdate” element that can be returned in the response to tell the sender to update the EPR. I am not fond of this as a mechanism for session management, but I can see one important benefit to it: getting hold of a “good” EPR for more efficient addressing. Here is an example application:

Imagine a Web service that represents the management interface of a business process engine. That Web service provides access to all the currently running business process instances in the engine (think Service Group if you’re into WSRF). Imagine that this Web service supports a SOAP header called “target” and that this header is defined to contain an XPath statement. When receiving a message containing a “target” header, the Web service will look for the business process instance (for the sake of simplicity let’s assume there can only be one) for which this XPath statement returns “true” when evaluated on the XML representation of the state of the business process instance. The Web service will then interpret the message to be targeted at that business process instance. This is somewhat similar to WS-Management’s “SelectorSet”. A sender can use this mechanism to address a specific business process instance based on the characteristics of that instance (side note: whether the sender understands and builds this header itself or whether it gets it as a Reference Parameter from an EPR is orthogonal). But this can be a very expensive dispatching mechanism. The basic implementation would require the Web service to evaluate an XPath statement on each and every business process instance state document. Far from optimal. This is where Conor’s “EndpointReferenceUpdate” can come in handy. After doing the XPath evaluation work once to find out which business process instance the sender wants to address, the Web service can return a more optimized EPR to be used to address that instance, one that is a lot easier to dispatch on. This kind of scenario is, from my perspective, a lot more relevant to the work of the WS-Addressing working group than the session example.
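Here is a rough sketch of what such an exchange could look like (everything in the bpm namespace is made up, prefixes are left undeclared, and EndpointReferenceUpdate is Conor’s proposed extension, not something defined by WS-Addressing):

  <!-- Request: the sender addresses the instance by a characteristic of its state -->
  <soap:Header>
    <wsa:To>http://example.com/bpm-engine</wsa:To>
    <wsa:Action>http://example.com/bpm/Suspend</wsa:Action>
    <!-- hypothetical header: an XPath evaluated against each instance's state document -->
    <bpm:target>/process[order/@number='4321']</bpm:target>
  </soap:Header>

  <!-- Response: along with the result, the engine hands back a cheaper-to-dispatch EPR -->
  <soap:Header>
    <wsa:Action>http://example.com/bpm/SuspendResponse</wsa:Action>
    <!-- Conor's proposed element; placement and content are illustrative only -->
    <bpm:EndpointReferenceUpdate>
      <wsa:Address>http://example.com/bpm-engine</wsa:Address>
      <wsa:ReferenceParameters>
        <bpm:InstanceId>instance-5047</bpm:InstanceId>
      </wsa:ReferenceParameters>
    </bpm:EndpointReferenceUpdate>
  </soap:Header>

Follow-up messages to that instance can then carry the InstanceId header directly, and the engine dispatches on a simple identifier instead of evaluating an XPath against every state document.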

An important consequence of a mechanism such as “EndpointReferenceUpdate” is that it makes it critical for the Web service to be able to tell which SOAP headers are in the message as a result of being in the EPR used by the sender and which ones were added by the sender on purpose. For example, if a SOAP message comes in with headers “a”, “b” and “c” and the Web service assumes that “a” and “b” were in the EPR and “c” was added by the invoker, then the new EPR returned as part of “EndpointReferenceUpdate” will only be a replacement for “a” and “b” and the Web service will still expect “c” to be added by the sender. But if in fact “c” also came from a reference parameter in the EPR used by the sender, then follow-up messages will be incomplete. This puts more stress and responsibility on the already weak @isReferenceParameter attribute. And, by encouraging people to accept EPRs from more and more sources, it puts EPR consumers at even greater risk of the problems described in bullet (1) of this objection.
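To make the ambiguity concrete, here is a sketch (element names invented, prefixes undeclared, and the attribute written the way I refer to it above) of a message in which the service has no reliable way of knowing which headers were copied out of the EPR:

  <soap:Header>
    <!-- came from the EPR's reference parameters, and marked as such -->
    <ex:a wsa:isReferenceParameter="true">alpha</ex:a>
    <ex:b wsa:isReferenceParameter="true">beta</ex:b>
    <!-- added deliberately by the sender... or copied from the EPR by a toolkit that
         forgot to set the attribute? The service cannot tell, and the replacement EPR
         it returns via "EndpointReferenceUpdate" will be built on that guess. -->
    <ex:c>gamma</ex:c>
  </soap:Header>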

2 Comments

Filed under Everything, Security, Standards, Tech

Submission of WS-Management to the DMTF

The absence of new messages on this blog over the last few weeks does not correspond to a lack of new developments in the Web services and management domain. It has more to do with the arrival of a baby at home and just being very busy overall. In case you haven’t been following closely, the main industry development recently has been the submission of WS-Management to the DMTF and the WSDM/WS-Management interop demos at Enterprise Management World. The submission of WS-Management is great news because it is finally possible to work openly on this important piece of the infrastructure and on bringing alignment to the industry. I am not thrilled that the DMTF is the place where this happens because the industry needs a protocol that is not tied to CIM and work in the DMTF naturally tends to be CIM-centric. We’ll see how we can navigate around this iceberg. In addition, while WS-Management has been submitted, it has crucial dependencies on specifications which at this point are still proprietary (WS-Transfer, WS-Eventing, WS-Enumeration). This too is a major problem, hopefully not for much longer. All in all, this is not the ideal configuration but nevertheless a huge step forward.

Comments Off on Submission of WS-Management to the DMTF

Filed under Everything, Standards

Webcast on management roadmap

Some of the authors of the HP/IBM/CA management roadmap (namely Heather from IBM, Kirk from CA and me) are hosting a Webcast to present the roadmap and answer questions. The Webcast starts at 9:00AM Pacific on Tuesday August 30th. More info about the Webcast and registration (it’s free) is available at http://www.presentationselect.com/hpinvent/detailsl.asp#977. Talk to you on Tuesday…

Comments Off on Webcast on management roadmap

Filed under Articles, Everything, Standards

EPR redefining the difference between SOAP body and SOAP header

The use of WS-Addressing EPRs is redefining the difference between SOAP body and SOAP headers. The way the SOAP spec looks at it, the difference is that a header element can be targeted at an intermediary, while the body is meant only for the ultimate receiver. But very often, contract designers seem to decide what to put in headers versus body less based on SOAP intermediaries than on the ability to create EPRs. Basically, parts of the message are put in headers just so that an EPR can be built that constrains that message element. To the point sometimes of putting the entire content of the message in headers and leaving an empty body (as Gudge points out and as several specs from his company do). Conversely, a wary contract designer might very well put info in the body rather than a header just for the sake of “protecting” it from being hard-coded in an EPR (the contract requires that the sender understand this element; it can’t be sent just because “an EPR told me to”).
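As a sketch of the trade-off (all element names made up, prefixes undeclared), the same order number can live in the body, where the contract forces the sender to knowingly provide it, or be pushed into a header precisely so that an EPR can hard-code it:

  <!-- Order number in the body: the sender has to understand what it is sending -->
  <soap:Body>
    <po:GetOrderStatus>
      <po:OrderNumber>4321</po:OrderNumber>
    </po:GetOrderStatus>
  </soap:Body>

  <!-- Order number moved into a header so that this EPR can pin it down;
       taken to the extreme, the body ends up empty -->
  <wsa:EndpointReference>
    <wsa:Address>http://example.com/orders</wsa:Address>
    <wsa:ReferenceParameters>
      <po:OrderNumber>4321</po:OrderNumber>
    </wsa:ReferenceParameters>
  </wsa:EndpointReference>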

This brings up the question: rather than twisting SOAP messages to accommodate the EPR mechanism, should the EPR mechanism be made more flexible in the way it constrains the content of a SOAP message?

Comments Off on EPR redefining the difference between SOAP body and SOAP header

Filed under Everything, Standards, Tech

WSRF and WS-Notification public review

The WSRF TC has approved a set of committee drafts and the corresponding documents are now submitted to public review, a step towards standard status in the OASIS process. The documents in this public review are:

  • WS-Resource
  • WS-ResourceProperties
  • WS-ResourceLifetime
  • WS-ServiceGroup
  • WS-BaseFaults
  • WSRF Application Notes

All the docs (and associated XSD and WSDL documents) can be accessed in one zip file. Now is the time to send your comments. I know I will. There has been a lot of progress since the TC started a bit over a year ago and the actual SOAP messages defined by these specifications are useful, but unfortunately one needs a decoder ring to understand how to use the framework in a general way. And the WS-Resource document is NOT this decoder ring, it’s more the contrary. More on this later.

The WS-Notification TC is not far behind. Last Thursday the TC approved new committee drafts of WS-BaseNotification and WS-BrokeredNotification and asked OASIS to start a public review period on these two. So the official public review hasn’t started yet (we are waiting for the OASIS staff to start it) but hopefully it will very soon, and you can already access the documents at the URLs provided in this email.

Comments Off on WSRF and WS-Notification public review

Filed under Everything, Standards

Discovery of resource capabilities with WSDM

In his first article in the “WSDM wisdom” series, Bryan explained how to discover WSDM resources. The second article addresses the next step: once you’ve discovered resources, what are the different ways to discover their capabilities?

Comments Off on Discovery of resource capabilities with WSDM

Filed under Everything, Standards

So you want to build an EPR?

EPRs (Endpoint References, from WS-Addressing) are a shiny and exciting toy. But a sharp one too. So here is my contribution towards preventing fingers from being cut and eyes from being poked out.

So far I have seen EPRs used for five main reasons, not all of them very inspired:

1) “Dispatching on URIs is not cool”

Some tools make it hard to dispatch on URIs. As a result, when you have many instances of the same service, it is easier to write the service if the instance ID is in the message rather than in the endpoint URI. Fix the tools? Nah, let’s modify the messages instead. I guess that’s what happens when tool vendors drive the standards: you see specifications that fit the tools rather than the contrary. So EPRs are used to put information that should be in the URI in headers instead. REST-heads see this as a capital crime. I am not convinced it is so harmful in practice, but it is definitely not a satisfying justification for EPRs.
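The two addressing styles side by side (addresses and element names invented): the instance identifier either lives in the endpoint URI or gets pulled into a header by way of a reference parameter.

  <!-- Instance identified by the URI itself -->
  <wsa:EndpointReference>
    <wsa:Address>http://example.com/orders/4321</wsa:Address>
  </wsa:EndpointReference>

  <!-- Instance identified by a reference parameter, because the tooling
       dispatches more easily on headers than on URIs -->
  <wsa:EndpointReference>
    <wsa:Address>http://example.com/orders</wsa:Address>
    <wsa:ReferenceParameters>
      <ex:OrderId>4321</ex:OrderId>
    </wsa:ReferenceParameters>
  </wsa:EndpointReference>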

2) “I don’t want to send a WSDL doc around for just the endpoint URI”

People seem to have this notion that the WSDL is a “big and static” document and the EPR is a “small and dynamic” document. But WSDL was designed to allow design-time and run-time elements to be separated if needed. If all you want to send around is the URI at which the service is available, you can just send the URI. Or, if you want it wrapped, why not send a soap:address element (assuming the binding is well-known)? After all, in many cases EPRs don’t contain the optional service element and its port attribute. If the binding is not known and you want to specify it, send around a wsdl:port element, which contains the soap:address as well as the QName of the binding. And if you want to be able to include several ports (for example to offer multiple transports) or use the wsdl:import mechanism to point to the binding and portType, then ship around a simplified WSDL document with only one service that itself contains the port(s) (if I remember correctly, WS-MessageDelivery tried to formalize this approach by calling a WSRef a wsdl:service element where all the ports use the same portType). And you can hang metadata off of a service element just as well as off of an EPR.

For some reason people are happy sending an EPR that contains only the address of the endpoint but not comfortable with sending a piece of WSDL of the same size that says the same thing. Again, not a huge deal now that people seem to have settled on using EPRs rather than service elements, but clearly not a satisfying justification for inventing EPRs in the first place.
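For comparison, here is roughly what the two “address-only” variants look like (binding QName and address made up, namespace declarations omitted); they carry the same information in about the same number of bytes:

  <!-- The EPR version -->
  <wsa:EndpointReference>
    <wsa:Address>http://example.com/quote</wsa:Address>
  </wsa:EndpointReference>

  <!-- The WSDL version: a port pointing at the same address,
       plus the QName of a (presumably well-known) binding -->
  <wsdl:port name="QuotePort" binding="tns:QuoteSoapBinding">
    <soap:address location="http://example.com/quote"/>
  </wsdl:port>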

3) “I can manage contexts without thinking about it”

Dynamically generated EPRs can be used as a replacement for an explicit context mechanism, such as those provided by WS-Context and WS-Coordination. By using EPRs for this, you save yourself the expense of supporting yet-another-spec. What do you lose? This paper gives you a detailed answer (it focuses on comparing EPRs to WS-Context rather than WS-Coordination for pretty obvious reasons, but I assume that on a purely technical level the authors would also recommend WS-Coordination over EPRs, right Greg?). In a shorter and simplified way, my take on why you want to be careful using dynamic EPRs for context is that by doing so you merge the context identifier on the one hand and the endpoint with which you use this context on the other hand into one entity. Once this is done you can’t reliably separate them and you lose potentially valuable information. For example, assume that your company buys from a bunch of suppliers and for each purchase you get an EPR that allows you to track the purchase as it is shipped. These EPRs are essentially one blob to you and the only way to know which one comes through FedEx versus UPS is to look at the address and try to guess based on the domain name. But you are at the mercy of any kind of redirection or load-balancing or other infrastructure reason that might modify the address. That’s not a problem if all you care about is checking the ETA on the shipment; each EPR gives you enough information to do that. But if you also want to consolidate the orders that UPS is delivering to you, or if you read in the paper about a potential UPS drivers’ strike and want to see how it would impact you, it would be nice to have each shipment be an explicit context ID associated with a real service (UPS or FedEx), rather than a mix of both at the same time. This way you can also go to UPS.com, ask about your shipments and easily map each entry returned to an existing shipment you are tracking. With EPRs rather than explicit context you can’t do this without additional agreements.
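A sketch of the difference (carriers, identifiers and element names all invented): with an explicit context the carrier’s endpoint and the shipment identifier remain two separate pieces of data, while a dynamically minted EPR fuses them into one opaque blob.

  <!-- Explicit context: the endpoint is clearly the carrier's, the shipment is a separate ID -->
  <ship:TrackingContext>
    <ship:CarrierEndpoint>http://tracking.ups.example.com/service</ship:CarrierEndpoint>
    <ship:ShipmentId>1Z-999-AA1</ship:ShipmentId>
  </ship:TrackingContext>

  <!-- Dynamic EPR: the same information is in there somewhere, but you can no longer
       reliably tell which carrier this is, or group your shipments by carrier -->
  <wsa:EndpointReference>
    <wsa:Address>http://lb7.shipping-gw.example.com/track</wsa:Address>
    <wsa:ReferenceParameters>
      <x:Opaque>8f3c90d2a7</x:Opaque>
    </wsa:ReferenceParameters>
  </wsa:EndpointReference>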

The ironic thing is that the kind of mess one can get into by using dynamic EPRs too widely instead of explicit context is very similar in nature to the management problems HP OpenView software solves. Discovery of resources, building relationship trees, impact analysis, event correlation, etc. We do it by using both nicely-designed protocols/models (the clean way) and heuristics and other hacks when needed. We do what it takes to make sense of the customer’s system. So we could just as well help you manage your shipments even if they were modeled as EPRs (in this example). But we’d rather work on solving existing problems and opening new possibilities than on fixing problems that can be avoided. And BTW using dynamic EPRs is not always bad. Explicit contexts are sometimes overkill. But keep in mind that you are losing data by bundling the context with the endpoint. Actually, more than losing data, you are losing structure in your data. And these days the gold is less in the raw data than in its structure and the understanding you have of it.

4) “I use reference parameters to create new protocols, isn’t that cool!”

No, it’s not. If you want to define a SOAP header, go ahead: define an XML element and then describe the semantics associated with this element when it appears as a SOAP header. But why oh why define it as a “reference parameter” (or “reference property” depending on your version of WS-A)? The whole point of an EPR is to be passed around. If you are going to build the SOAP message locally, you don’t need to first build an EPR and then deconstruct it to extract the reference parameters out of it and insert them as SOAP headers. Just build the SOAP message by putting in the SOAP headers you know are needed. If your tooling requires going through an EPR to build the SOAP message, fine, that’s your problem, but don’t force this view on people who may want to use your protocol. For example, one can argue for or against the value of WS-Management‘s System and SelectorSet as SOAP headers, but it doesn’t make sense to define those as reference parameters rather than just SOAP headers (readers of this blog already know that I am the editor of the WSDM MUWS OASIS standard with which WS-Management overlaps, so go ahead and question my motives for picking on WS-Management). Once they are defined as SOAP headers, one can make the adventurous decision to hard-code them in EPRs and to send the EPRs to someone else. But that’s a completely orthogonal decision (and the topic of the fifth way EPRs are used – see below). But using EPRs to define protocols is definitely not a justification for EPRs, and one would have a strong case to argue that it violates the opacity of reference parameters specified in WS-Addressing.
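In other words (all names hypothetical): define the header on its own and document its semantics. Whether someone later bakes it into an EPR is a separate decision, made by whoever hands out that EPR.

  <!-- The protocol definition: a plain SOAP header with documented semantics -->
  <soap:Header>
    <ex:Selector Name="ProcessId">4711</ex:Selector>
  </soap:Header>

  <!-- Orthogonal (and riskier) decision: hard-coding that same header into an EPR
       that gets passed around to parties who may not understand it -->
  <wsa:EndpointReference>
    <wsa:Address>http://example.com/agent</wsa:Address>
    <wsa:ReferenceParameters>
      <ex:Selector Name="ProcessId">4711</ex:Selector>
    </wsa:ReferenceParameters>
  </wsa:EndpointReference>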

5) “Look what I can do by hard-coding headers!”

The whole point of reference parameters is to make people include elements that they don’t understand in their SOAP headers (I don’t buy the multi-protocol aspect of WS-Addressing; as far as I am concerned it’s a SOAP thing). This mechanism is designed to open a door to hacking. Both in the good sense of the term (hacking as a clever use of technology, such as displaying Craigslist rental data on top of Google Maps without Craigslist or Google having to know about it), and in the bad sense of the term (getting things to happen that you should not be able to make happen). Here is an example of good use for reference parameters: if the Google search SOAP input message accepted a header specifying what site to limit the search to (equivalent to adding “site:vambenepe.com” in the Google text box on Google.com), I could distribute an EPR for a “vambenepe.com search service” simply by pointing the EPR at the Google search service and adding a reference parameter that corresponds to the header instructing Google to limit the search to vambenepe.com.
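Under that assumption (a hypothetical gs:site header that the real Google search service does not define), the EPR I would hand out for the “vambenepe.com search service” would look something like this:

  <wsa:EndpointReference>
    <wsa:Address>http://api.google.example.com/search</wsa:Address>
    <wsa:ReferenceParameters>
      <!-- hypothetical header restricting the search scope,
           equivalent to typing "site:vambenepe.com" in the search box -->
      <gs:site>vambenepe.com</gs:site>
    </wsa:ReferenceParameters>
  </wsa:EndpointReference>

Anyone using that EPR gets a search scoped to vambenepe.com, without Google having to know anything about me.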

Some believe this is inherently evil and should be stopped, as expressed in this formal objection. I think this is a useful mechanism (to be used rarely and carefully) and I would like to see it survive. But there are two risks associated with this mechanism that people need to understand.

The first risk is that EPRs allow people to trick others into making statements that they don’t know they are making. This is explained in the formal objection from Anish and friends as their problem #1 (“Safety and Security”) and I agree with their description. But I don’t agree with the proposed solutions, as they prevent reference parameters from being treated by the service like any other SOAP header. Back last November I made an alternative proposal, using a wsa:CoverMyRearside element, that would not have this drawback and I know other people have made similar proposals. In any case, this risk can and should be addressed by the working group before the specification becomes a Recommendation, or people will simply stop agreeing to process reference parameters after a few high-profile hacks. Reference parameters will become the ActiveX of SOAP.

The second risk is more subtle and that one cannot be addressed by the specification. It is the fragility that will result from applications that share too many assumptions. I get suspicious when someone gives me directions to their house with instructions such as “turn left after the blue van” or “turn right after the barking dog”, don’t you? “We’re the house after the green barn” is a little better, but what if I want to re-use these directions a few years later? What’s the chance that the barn will be replaced or repainted? EPRs that contain reference parameters pose the same problem. Once you’ve sent the EPR, you don’t know how long it will be around, you don’t know who it will get forwarded to, you don’t know what the consumer will know. You need to spend at least as much effort picking what data you use as a reference parameter (if anything) as you spend designing schemas and WSDL documents. If your organization is smart enough to have a process to validate schemas (and you need that), that same process should approve any element that is put in a reference parameter.

Or you’ll poke your eye out.

2 Comments

Filed under Everything, Implementation, Security, Standards, Tech

HP/IBM/CA roadmap white paper

HP, IBM and CA recently released a white paper describing how we see the different efforts in the area of management for the adaptive enterprise coming together and, more importantly, what else is needed to fulfill the vision. Being a co-author I am arguably more than a little biased, but I recommend the read as an explanatory map of the standards/specifications landscape, from the low levels of the Web services stack all the way up to model transformations and policy-based automated management: http://devresource.hp.com/drc/resources/muwsarch/index.jsp

Comments Off on HP/IBM/CA roadmap white paper

Filed under Articles, Everything, Standards, Tech

Someone is paying attention

It’s nice to see that, while most of the tech press seems happy to copy/paste from misleading press briefing documents rather than do any checking of their own, some analysts take a little bit more time to look through the smoke. So, when Gartner looks into the recent Microsoft/Sun announcement (see “Progress Report on Sun/Microsoft Initiative Lacks Substance”) their recommendation is to “view the latest Sun/Microsoft announcement as primarily public-relations-oriented”. Similar take from Jason Bloomberg from ZapThink who thinks that this “doesn’t do anything to contradict the fact that Microsoft is the big gorilla in this relationship”. And Forrester’s Randy Heffner (quoted in “Analysts Question Microsoft-Sun Alliance”) thinks that “Bottom line: Web services interoperability is not yet part of the picture”. Oh, and by the way “the WS-Management group has yet to come clean on how they will work with the WSDM standard approved by OASIS,” Heffner also says. “Again, WS-Management is still just a specification in the hands of vendors”. Very much so. But in PR-land everything looks different. As tech journalists write these articles including insight from analysts that contradict what the tech press reported a couple days earlier, I wonder if they ever think “hum, maybe I should be the one doing reality checks on the content of press releases rather than going around collecting quotes and then the analysts would focus on real in-depth analysis rather than just doing the basic debunking work…”

Comments Off on Someone is paying attention

Filed under Business, Everything, Standards, Tech

Reality check on Microsoft/Sun claims about single sign-on

This morning I learned that Microsoft and Sun had a public event where the CEOs reported on a year of working together. This is a follow-up to Greg Papadopoulos’ report on the progress of the “technical collaboration”. In that post, Greg told us about the amazing technical outcomes of the work between the two companies and, being very familiar with the specs he was referring to, I couldn’t help but point out that the result of the “technical collaboration” he was talking about looked a lot like Sun rubber-stamping a bunch of Microsoft specifications without much input from Sun engineers.

So when I heard this morning that the two companies were coming out publicly with the result of their work, I thought it would be fair for me to update my blog and include this information.

Plus, reading the press release and Greg’s Q&A session, it sounded pretty impressive and it would have been bad faith on my part not to acknowledge that Greg actually had something to brag about; it just wasn’t yet public at the time. In effect, it sounded like they had found a way to make the Liberty Alliance specs and WS-Federation interoperate with one another.

From Greg’s Q&A: “In a nutshell, we resolved and aligned what Microsoft was trying to accomplish with Passport and the WS-Federation with what we’ve been doing with the Liberty Alliance. So, we’ve agreed upon a way to enable single sign-on to the Internet (whether through a .NET service or a Java Enterprise System service), and federate across those platforms based on service-level agreements and/or identity agreements between those services. That’s a major milestone.”

Yes Greg, it would have been. Except this is not what is delivered. The two specs that are supposed to support these claims are Web SSO MEX and Web SSO Interop Profile. Which are 14 and 9 pages long respectively. Now I know better than to equate length of a spec with value, but when you cut the boilerplate content out of these 14 and 9 pages, there is very little left for delivering on ambitious claims such as those Greg makes.

The reason is that these specs in no way provide interop between a system built using Liberty Alliance and a system built using WS-Federation. All they do is allow each system to find out which spec the other uses.

One way to think about it is that we have an English speaker and a Korean speaker in the same room and they are not able to talk. What the two new specs do is put a lapel pin with a British flag on the English speaker and a lapel pin with a Korean flag on the Korean speaker. Yes, this helps a bit. At least now the Korean speaker will know what weird language the other guy is speaking, and he can go to school and learn it. But just finding out what language the other guy speaks is a far cry from actually being able to communicate with him.

Even with these specs, a system based on Liberty Alliance and one based on WS-Federation are still incompatible and you cannot single sign-on from one to the other. Or rather, you can only if your client implements both. This is said explicitly in the Web SSO Interop Profile spec (look for the first line of page 5): “A compliant identity provider implementation MUST support both protocol suites”. Well, this isn’t interop, it’s duplication. Otherwise I could claim I have solved the problem of interoperability between English and Korean just by asking everyone to learn both languages. Not very convincing…

But of course Microsoft and Sun knew that they could get away with that in the press. For example, CNet wrote “The Web Single Sign-On Metadata Exchange Protocol and Web Single Sign-On Interoperability Profile will bridge Web identity management systems based on the Liberty and Web services specifications, the companies said”. As the Columbia Journalism Review keeps pointing out, real journalists don’t just report what people say, they check if it’s true. And in this case, it simply isn’t.

1 Comment

Filed under Business, Everything, Security, Standards, Tech

Greg Papadopoulos on “collaborating” with Microsoft

Greg Papadopoulos (Sun’s CTO) recently posted a blog entry to tell us, a year later, what it’s been like working with Microsoft. For those who forgot, a year ago Microsoft sent a $2 billion check to Sun to settle some legal disputes and turn Sun into a technical partner. So what kind of technical partnership is that? Well, according to Greg they’ve been making “some real architectural progress”. And he gives us four examples: WS-Addressing, WS-Management, WS-Eventing, WS-MetadataExchange. The funny thing is that Microsoft had written and publicized each of these specifications before Sun became a partner, and then just put out a slightly updated version with Sun and other companies added as authors. Go ahead and check for yourself:

  • WS-Addressing: the “before Sun” version (March 2004) and the “after Sun” version (August 2004)
  • WS-Management: the “before Sun” version was called WMX but I can’t find a URI for it, only an overview document, so on this one you’re on your own to find the “before Sun” document to compare (hint: call Microsoft, not Sun, for this doc). Here is the “after Sun” version (October 2004)
  • WS-Eventing: the “before Sun” version (January 2004) and the “after Sun” version (August 2004)
  • WS-MetadataExchange: the “before Sun” version (March 2004) and the “after Sun” version (September 2004)

There might be a lot of in-depth technical collaboration going on between Sun and Microsoft that we are not allowed to see, but the only examples Greg has for us in his “one year later” piece make it sound a lot more like a business deal than technical collaboration. Maybe they have the CTO write about it because the CFO doesn’t have a blog?

In that same piece, Greg also tells us that “the ‘interoperate’ message is louder than even the ‘standardize’ one”. This is probably why 3 of the 4 specs he brings up are proprietary specs. This explains a lot about what to expect from Sun in terms of standard support. I agreed when Sun used to say that standards are the best way to provide specifications that can be safely implemented, including by small companies and open-source projects (in financial terms, legal terms and control terms) and that this is a key promise of Web services. Simon Phipps (Sun’s chief technology evangelist) explained it well. But this was in year 1BC (Before Check). How things change.

Comments Off on Greg Papadopoulos on “collaborating” with Microsoft

Filed under Business, Everything, Standards

Kolkhoz to jungle

In a recent post on his blog, Mark responded to the critics who think that making WSDM 1.0 an OASIS standard was premature. Go read his entry for the point-by-point refutation of the arguments against WSDM standardization (which, based on this entry, doesn’t seem to quite convince Savas). The key point I take away from Mark’s reply is that it is ok for people to say that they think a condition for being a standard is to be entirely based on approved standards, but this is not the only view of the world. As Mark pointed out, it’s doubtful that the OASIS organization would have just “forgotten” to mention this in its bylaws and process if that was its intent. Those of us who welcome WSDM as a standard take the view that this requirement is too strong and that the real requirements are that the spec is implementable in an interoperable way, that it is royalty-free and that it addresses an industry need.

I respect the opinion of those who would rather have waited for WSRF, WSN and WS-Addressing to be standards before WSDM shipped as one. The thing is, if this is very important to them they do not have to implement WSDM right now. They can wait. And the fact that WSDM 1.0 was released ahead of WSRF, WSN and WS-Addressing doesn’t mean that the version of WSDM that the “wait and see” crowd wants to see won’t arrive. In fact, the WSDM TC is committed to updating the specs when these dependencies reach standard status. It will just be called WSDM 1.1 or WSDM 1.5 or WSDM 2.0 or WSDM IsOracleHappyNow Edition (yes I should be in marketing). People who don’t have an urgent need for a Web services-based management infrastructure can very reasonably decide to wait until then. The fact is, many people have this need now and they need the best standard they can get. DMTF is not waiting to provide Web services access to CIM-modeled resources. GGF is not waiting to manage Grid resources. Device vendors are not waiting to Web services-enable their products. The JCP is not waiting to provide a Web services interface to JMX for app management (see JSR 262). And those are only broad industry segments, not specific customers who want to build an adaptive infrastructure. These are the people that WSDM 1.0 is trying to help. If you don’t have the need then yes, you might rather wait for a better later solution. But let’s respect those who have the need. We worked hard in WSDM to come up with the best compromise for the present time.

Which takes me back to the title of this entry. The kolkhoz approach to standards is what I see as the claim that the entire stack, across all standards organizations, has to be completely clean at any point in time. Nice, but really hard to do in the environment we live in. And it carries a high risk of resulting in proprietary solutions taking over long before anything useful comes out as a standard. On the other side of the spectrum, there is the jungle approach, where de-facto standards rule or where specifications are written by a group of people who are only looking for rubber-stamps from standards organizations. What we are trying to do with WSDM is find the right balance between kolkhoz and jungle. This is why, for example, we make use of specs that are not yet standards but are being standardized, while we stay away from specs that are not even in the standards process even though it might sometimes be technically tempting to use them (think policy and metadata). To the jungle people, we are too slow and too committee-driven, which they love to sneer at. To the kolkhoz people we are barbarians who trample the rules and processes, and they love to stone us for it. We don’t try to be heroes to either; we try to be heroes to the customers.

1 Comment

Filed under Everything, Standards

The documents that compose the WSDM 1.0 OASIS standard

Before writing about the final approval of WSDM 1.0 as an OASIS standard I was waiting for all the documents to be posted at the official and final URLs on the OASIS web site. But this seems to be taking a long time for the OASIS webmaster to do, so here are the links to the documents in the OASIS repository. These are the exact documents that have been approved as a standard, and these links are not going to stop working; it’s just that they are not the nice and user-friendly URLs at which the specs will eventually be available. For example, http://www.oasis-open.org/committees/download.php/11819/wsdm-muws-part1-1.0.pdf (the URL corresponding to MUWS Part 1 in the OASIS repository) is not as nice as http://docs.oasis-open.org/wsdm/2004/12/wsdm-muws-part1-1.0.pdf (the official URL at which the same document will soon be available).

Anyway, here are the documents that compose the WSDM 1.0 specification:

  • Management Using Web Services (MUWS 1.0) Part 1
  • Management Using Web Services (MUWS 1.0) Part 2
  • Management of Web Services (MOWS 1.0)

In order to help implementers, stand-alone versions of the XSD, WSDL and event XML files are also available, at the URLs that correspond to their namespaces.

A great source of information about WSDM is the WSDM page on HP’s Dev Resource site, including the “WSDM wisdom” articles (the first article is about discovery of resources) by Bryan, who is also the editor of the WSDM primer, so you can look forward to clearer explanations and examples when the primer comes out. Our fearless and inspiring TC chair Heather also provides a very good introduction to WSDM on the IBM developerworks site.

More about the discussions that took place during and after the vote in the next post…

Comments Off on The documents that compose the WSDM 1.0 OASIS standard

Filed under Everything, Standards

Resource discovery with WSDM MUWS

Bryan has just published an article describing options for discovering resources using WSDM MUWS. A highly recommended read.

2 Comments

Filed under Everything, Standards, Tech

The mnot standard geek index

Sometimes Amazon scares me. Last night I was browsing the site looking at some novels (nothing whatsoever to do with technology) and here is what I saw on the left sidebar: a suggestion for an advice list called “So you’d like to… be a standards geek” by an Amazon user called mnotting, who of course turns out to be Mark Nottingham. The scary part is that I know for a fact that I wasn’t logged on to the Amazon site and there was no Amazon cookie on my disk. So either this was a complete (and unlikely) coincidence or Amazon uses the not-so-dynamic IP address provided by my DSL provider to try to recognize me. And even then, my Amazon profile clearly flags me as someone interested in technology among other things, but I don’t see how it would flag me as a standards person unless it reads my email…

In any case, this tempted me to measure my level of standards geekiness and the result is that I rank a 3 out of 8. To get to this ranking, I only looked at the list of books. I ignored the travel gadgets such as battery chargers and cell phones because there are so many of these that the chance of having a match is pretty slim (my personal recommendation for those who work a lot in airplanes is a Tablet PC).

So, focusing on the books, my three points on the mnot standards geek index come from:

  • Machiavelli’s “The Prince”. I read it in French but I assume it still counts.
  • Robert’s Rules of Order. I can’t say I read every single page but I’ve browsed it enough to know where to look for things. I received my copy (in a different edition than Mark’s) from the hands of OASIS’ Jamie Clark when the WS-Notification Technical Committee, which I co-chair with Peter Niblett, was created.
  • TBL’s “Weaving the Web”, which I talked about in a previous blog entry (BTW Mark, you might want to check the URL you provided for this book in your list; it is incorrect and causes Amazon to not list this book in the recap of all products you recommend).

I don’t really know what to think of my score of 3/8. So I’d like the other standards geeks out there (Chris, DaveC, DaveO, Glen, Jeff, Marc, Mark, Gudge, Sanjiva, Jorgen, Tom and many others) to take the test and report their results so I know how serious my case is.

2 Comments

Filed under Everything, Off-topic, Standards

Services vs. Resources: the WSDM case

In an SOA, a service should not be tied to the resources that allow the service to be delivered. WSDM MUWS closely ties services with resources and in doing so it does not violate any SOA principle. I will show in this entry that these two sentences do not contradict each other.

Resources come and go and creating information systems that directly connect resources to one another results in brittle systems that don’t scale. Service-orientation, when well used, addresses this problem. Many have said this better than me before, like Jim: “Web Services are about hiding resources and exposing processes which operate on those resources”.

WSDM MUWS exposes a ResourceId property and manageability capabilities that are specifically tied to a given resource. But this “resource” is not the resource that makes it possible to deliver the MUWS service. It is the resource that the service has been created specifically to represent. Let’s illustrate this by contrasting two examples:

Service1 is a storage service. The “operational value” of this service is to store data. The right way to represent Service1 is in a way that is separated from the resource (in this case a storage array) that is used to provide the service. The service should expose its capabilities in terms of reading and writing data, not in terms of what SCSI disks are used. So that tomorrow I can replace the storage array with another one (or maybe with two smaller ones) and, assuming I replicate data correctly, the users of my service will not notice the change. A basic example of service-orientation. Now let’s look at Service2. Service2 is a management service (in MUWS terms, a “manageability endpoint”) used to manage the storage array from the previous example. The “operational value” of this service is not to store data in the array, it is to manage the array. And not just any array, this specific array sitting in my machine room. The resource used to provide the service is the Web services engine in which Service2 runs and whatever mechanism allows it to manage the storage array. In this case too, Service2 should be exposed in a way that is independent from the resource(s) that it relies on (like the Web services engine it runs on). But not having it be tied to the storage array would negate the very value this service provides, namely managing a given storage array.
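Concretely, Service2 would advertise which resource it represents through the MUWS ResourceId property, retrieved via the WS-ResourceProperties operations; a rough sketch (prefixes and the identifier format are illustrative):

  <!-- Illustrative response from Service2's manageability endpoint: the ResourceId
       names the managed storage array, not the Web services engine the service runs on -->
  <wsrp:GetResourcePropertyResponse>
    <muws:ResourceId>urn:example:storage-array:SA-0042</muws:ResourceId>
  </wsrp:GetResourcePropertyResponse>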

Of course in some cases it makes sense to embed the manageability endpoint inside the resource being managed, in which case the resource being managed is also the resource that provides the service. But this is a corner case and in no way something required by MUWS.

Separating the service from the resources that compose it is a good thing, but when the operational value of the service is exposed in terms of specific resources it is fine to explicitly attach the service to the resource. When deciding whether it is ok to let a resource show through a Web service, one needs to clearly understand whether it is a Service1 or Service2 type of situation.

Comments Off on Services vs. Resources: the WSDM case

Filed under Everything, Standards, Tech

Vote for approval of WSDM 1.0 as OASIS standard

The vote has now started to approve WSDM 1.0 (both MUWS and MOWS) as an OASIS standard. The vote will close at the end of the month and this is a short month, so don’t waste any time in making sure your company casts its vote.

Comments Off on Vote for approval of WSDM 1.0 as OASIS standard

Filed under Everything, Standards

First we think the Web is HTML; then HTTP; then we realize it’s URL.

As I was sitting in my car recently, listening to KQED on the way back from work, I remembered an interview of Tim Berners-Lee by Terry Gross on Fresh Air that took place in 1999. TBL was promoting his book, Weaving the Web. At that time I was very familiar with Web technologies (first Web site in 1994, and I had been writing Web applications as CGIs more or less non-stop since 1995) but I hadn’t realized that the URL was the key building block of the Web, way ahead of HTTP and even more ahead of HTML. I don’t think I had ever asked myself the question, but if I had I would probably have sorted them backward. Hearing TBL in this interview describe how, before the Web, people would create small files that described where to find information in a human-readable way (I assume it must have been something like “telnet to this machine, use this logon/pwd, go to this directory, start this application, load this file”) really made me understand the importance of this URL thing I had taken for granted for many years. To this day I vividly remember this interview and the Eureka feeling when I realized the importance of URLs as an enabler for the Web.

I don’t know if the fact that this interview, which was targeted at the general audience of Fresh Air (more used to hearing Jazzmen interviewed than geeks), taught a Web-head like me something important is a testament to TBL’s vision, Terry Gross’ skills as an interviewer or my stupidity for not having grasped such a basic concept earlier.

Going back to WS-Addressing EPRs for a minute, what I was thinking recently is that these EPRs look a bit like the old “do this, do that” files that TBL talked about and that were replaced with URIs. Where “do this” becomes “put this header in your SOAP message”. Unlike the “do this” files, the instructions in the EPR can be machine-processed and that’s a key difference. But still, I can’t help getting this deja-vu feeling. Not that I have ever encountered these “do this” files myself but TBL made me see them one day in 1999.

[UPDATED 2011/9/27: Before pointing to this piece on Twitter (in response to this post by Joe Hewitt) I just have to change the awful title (for the record, the original title was “Thinking about EPRs like Proust”; yeah, I know). Whenever someone uses “la petite madeleine” (or Schrödinger’s cat for that matter) to illustrate a point you know it’s going to suck, so I removed it. And while I’m at it, I replaced all the references to “URI” with “URL”, which is less pedantic (and more accurate in this context). There. I usually don’t like to edit old entries, but this one was so bad it made almost no sense (not to mention the fact that no-one cares much about EPRs anymore).]

1 Comment

Filed under Everything, Standards, Tech