Registry or not?

I recently had a meeting with people who practically could not imagine a form of discovery that didn’t involve a god-like central registry. Notifications and peer-to-peer relationships were heretical ideas on this call. Of course registries are good, and repositories are even better. But a registry is not the only way to discover services, and it shouldn’t be. The delicious irony is that the meeting used NetMeeting and that we spent the first five minutes of the call repeating the IP address of the person hosting the NetMeeting to every single new participant upon joining, instead of simply using the registry that was available (NetMeeting’s directory).

Comments Off on Registry or not?

Filed under Everything, Off-topic

Keeping track

After Systinet, it’s now Actional’s turn to take the plunge. For those trying to keep track, Jeff Schneider has a useful recap of SOA-related acquisitions and mergers. It’s only missing the name changes to be complete (e.g. Corporate Oxygen to Confluent, Digital Evolution to SOA Software…).

Comments Off on Keeping track

Filed under Business, Everything

Humble Architecture

In many respects, the principles of Service-Oriented Architecture (SOA) can be summarized as “be humble”. “Service” comes from “servus”, Latin for “slave”. It doesn’t get any more humble.

More practically, this means that the key things to keep in mind when creating a service are that you are not at the center of the universe, that you don’t know who is going to consume your service, that you don’t know what they are going to do with it, and that you are not necessarily the one who can make the best use of the information you have access to, so you should be willing to share it with others openly (instead of the all-too-familiar syndrome where everyone wants to consume other people’s services but no one sees the need to expose themselves as a service, because they think they “own” the connection to the human or they “own” the business process). You also shouldn’t assume that some human needs to come to you and ask for permission to use your service; instead, you should provide machine-readable descriptions of it as well as quality documentation. And don’t assume that everyone speaks the same language you speak. When in doubt while designing a service-oriented system, ask yourself “what would a slave do?”

Focused, standards-based services are humble. Portlets are humble. RSS feeds are humble. Giant software suites and all-encompassing frameworks are not humble.

Successful open source projects are humble almost by definition. Large software companies rarely have humility genes in their DNA, unless it’s been beaten into them by customers.

2 Comments

Filed under Everything, Tech

Updating an EPR

The question of whether Reference Parameters can/should be used as the SOAP equivalent of cookies recently came back on the WS-Addressing mailing list. That is something more along the lines of session management than addressing. See Peter Hendry’s email for a clear description of his use case. The use case is reasonable, but I don’t think this is what WS-Addressing is really for, as I explain in bullet #3 of this post. What interested me more was the response that came from Conor Cahill and his statement that AOL is implementing an “EndpointReferenceUpdate” element that can be returned in the response to tell the sender to update the EPR. I am not fond of this as a mechanism for session management, but I can see one important benefit of it: getting hold of a “good” EPR for more efficient addressing. Here is an example application:

Imagine a Web service that represents the management interface of a business process engine. That Web service provides access to all the currently running business process instances in the engine (think Service Group if you’re into WSRF). Imagine that this Web service supports a SOAP header called “target” and that this header is defined to contain an XPath statement. When receiving a message containing a “target” header, the Web service will look for the business process instance (for the sake of simplicity let’s assume there can only be one) for which this XPath statement returns “true” when evaluated on the XML representation of the state of the business process instance. The Web service will then interpret the message to be targeted at that business process instance. This is somewhat similar to WS-Management’s “SelectorSet”. A sender can use this mechanism to address a specific business process instance based on the characteristics of that instance (side note: whether the sender understands and builds this header itself or whether it gets it as a Reference Parameter from an EPR is orthogonal). But this can be a very expensive dispatching mechanism. The basic implementation would require the Web service to evaluate an XPath statement on each and every business process instance state document. Far from optimal. This is where Conor’s “EndpointReferenceUpdate” can come in handy. After doing the XPath evaluation work once to find out which business process instance the sender wants to address, the Web service can return a more optimized EPR to be used to address that instance, one that is a lot easier to dispatch on. In my view, this kind of scenario is a lot more relevant to the work of the WS-Addressing working group than the session example.
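
Here is a rough sketch of what this exchange could look like on the wire. The “target” header is the hypothetical header described above, and the shape of “EndpointReferenceUpdate” is my guess at what AOL’s element might look like; all element names, namespaces and values are illustrative assumptions, not taken from any spec:

    <!-- Request: an XPath statement in the hypothetical "target" header
         selects the business process instance (expensive to dispatch) -->
    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
                xmlns:ex="urn:example:bp-engine">
      <s:Header>
        <ex:target>/ex:instance[ex:orderId='PO-4711']</ex:target>
      </s:Header>
      <s:Body>
        <ex:Suspend/>
      </s:Body>
    </s:Envelope>

    <!-- Response: a cheaper-to-dispatch EPR for follow-up messages -->
    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
                xmlns:wsa="http://www.w3.org/2005/08/addressing"
                xmlns:ex="urn:example:bp-engine">
      <s:Header>
        <ex:EndpointReferenceUpdate>
          <wsa:Address>http://example.com/bp-engine</wsa:Address>
          <wsa:ReferenceParameters>
            <ex:instanceKey>42</ex:instanceKey>
          </wsa:ReferenceParameters>
        </ex:EndpointReferenceUpdate>
      </s:Header>
      <s:Body>
        <ex:SuspendResponse/>
      </s:Body>
    </s:Envelope>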

An important consequence of a mechanism such as “EndpointReferenceUpdate” is that it makes it critical for the Web service to be able to tell which SOAP headers are in the message as a result of being in the EPR used by the sender and which ones were added by the sender on purpose. For example, if a SOAP message comes in with headers “a”, “b” and “c” and the Web service assumes that “a” and “b” were in the EPR and “c” was added by the invoker, then the new EPR returned as part of “EndpointReferenceUpdate” will only be a replacement for “a” and “b”, and the Web service will still expect “c” to be added by the sender. But if in fact “c” also came from a reference parameter in the EPR used by the sender, then follow-up messages will be incomplete. This puts more stress and responsibility on the already weak @isReferenceParameter attribute. And, by encouraging people to accept EPRs from more and more sources, it puts EPR consumers at even greater risk for the problems described in bullet (1) of this objection.
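
A sketch of the provenance problem, assuming EPR-supplied headers are marked with the @isReferenceParameter attribute (names and namespaces illustrative):

    <s:Header xmlns:s="http://www.w3.org/2003/05/soap-envelope"
              xmlns:wsa="http://www.w3.org/2005/08/addressing"
              xmlns:ex="urn:example:headers">
      <!-- "a" and "b" were copied out of the EPR and are marked as such -->
      <ex:a wsa:isReferenceParameter="true">abc</ex:a>
      <ex:b wsa:isReferenceParameter="true">def</ex:b>
      <!-- "c" carries no marking, so the service assumes the sender added it
           on purpose; if "c" in fact also came from the EPR, the replacement
           EPR will not cover it and follow-up messages will be incomplete -->
      <ex:c>ghi</ex:c>
    </s:Header>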

2 Comments

Filed under Everything, Security, Standards, Tech

Submission of WS-Management to the DMTF

The absence of new messages on this blog over the last few weeks does not correspond to a lack of new developments in the Web services and management domain. It has more to do with the arrival of a baby at home and just being very busy overall. In case you haven’t been following closely, the main industry development recently has been the submission of WS-Management to the DMTF and the WSDM/WS-Management interop demos at Enterprise Management World. The submission of WS-Management is great news because it finally makes it possible to work openly on this important piece of the infrastructure and on bringing alignment to the industry. I am not thrilled that the DMTF is the place where this happens: the industry needs a protocol that is not tied to CIM, and work in the DMTF naturally tends to be CIM-centric. We’ll see how we can navigate around this iceberg. In addition, while WS-Management has been submitted, it has crucial dependencies on specifications which at this point are still proprietary (WS-Transfer, WS-Eventing, WS-Enumeration). This too is a major problem, hopefully not for much longer. All in all, this is not the ideal configuration, but it is nevertheless a huge step forward.

Comments Off on Submission of WS-Management to the DMTF

Filed under Everything, Standards

Webcast on management roadmap

Some of the authors of the HP/IBM/CA management roadmap (namely Heather from IBM, Kirk from CA and me) are hosting a Webcast to present the roadmap and answer questions. The Webcast starts at 9:00AM Pacific on Tuesday August 30th. More info about the Webcast and registration (it’s free) at http://www.presentationselect.com/hpinvent/detailsl.asp#977. Talk to you on Tuesday…

Comments Off on Webcast on management roadmap

Filed under Articles, Everything, Standards

Apache WSRF, Pubscribe and Muse v1.0 Releases

The WSRF, Pubscribe and Muse teams at Apache have reached a major milestone in their work: version 1.0 release. Congrats to the teams! Binary and source distributions can be downloaded from:

Comments Off on Apache WSRF, Pubscribe and Muse v1.0 Releases

Filed under Everything, Implementation

Bridging the gap between business and IT: application to software pricing

With the ongoing virtualization of the computing infrastructure as well as the proliferation of multi-core processors, revising software pricing strategies (often based on number of processors) is a hot topic. The usual spin is: we can’t keep using the current model (as “number of processors” doesn’t mean much anymore), so we have to think of a new one. But there is another way to look at it: revising the pricing strategy not because we have to but because we can.

Pricing software based on the number of processors only makes sense because we are used to it. We are used to it because it is prevalent. It is prevalent because it is easy to measure and apply (or was until recently). It’s hard to measure the value a piece of software brings to the business, but it is easy to measure how many processors run it. So we use the number of processors as an approximation of the value. This approach to pricing is very similar to the approach of policy-driven IT management that creates SLAs at different levels of the architecture. The IT administrator is told to make sure that a certain server stays up 99.9% of the time. Does the business really care that the server is up? No, what it cares about is that the business processes can progress, and these processes happen to use applications running on the server. But if we told the IT admin “make sure the business processes can progress”, he wouldn’t know what to do in practical terms. He wouldn’t know whether the downtime to patch the server is worth it or not. Given a more measurable metric (uptime), the IT admin is able to make the necessary decisions to meet the specific uptime SLA. Just as the number of processors is used as a convenient approximation of the business value of the software, the uptime SLA is used as a convenient approximation of the business need. Like any approximations, they are not perfect, and making decisions based on them rarely leads to optimal decisions. But when that’s all you can do, you call it good enough and you go with it.

One of the key promises of the effort to “bridge the gap between business and IT” is to better align infrastructure-level decisions with the real business impact. Products like OpenView’s Business Process Insight allow you to map business processes to the IT infrastructure that powers the steps of the process, so that you can make decisions on managing the IT elements based on their real impact on the business rather than on fixed SLAs. We are seeing a huge amount of interest in this, and there is a lot of room for optimization once this correlation is established. At this point, the focus is on using this to automate and optimize IT management. But it is so similar to the software pricing issue that one has to wonder whether these technologies won’t eventually allow us to price software in a way better aligned with the real business value it provides. And who knows, maybe one day management software will be used to tie salaries to business value rather than to approximations such as “number of hours worked”, “number of bugs fixed”, “uptime of the server” or “number of specs produced”.

Comments Off on Bridging the gap between business and IT: application to software pricing

Filed under Business, Everything

A map to federated IT model repositories

Using scissors and tape, one can stitch street maps and road maps together to obtain an aggregated map showing how to go from downtown Palo Alto to downtown San Francisco. The equivalent in IT management is to stitch together different model repositories by federating them, as a way to get a complete view of an IT system of interest. As we go about creating the infrastructure for model federation, there is a lot to be learned from the evolution of street maps.

Let’s go back to paper maps for a minute. A map of the Bay Area will tell me what highways to take to go from Palo Alto to SF. But it won’t help me get from a specific house in Palo Alto to the highway, and once in SF it won’t help me get from the highway to a specific restaurant. For this, I need to find maps of downtown Palo Alto and downtown SF and somehow stitch the three maps together for an end-to-end view. Of course all these maps have different orientations, different scales, partial overlap, different legends, etc. Compare this to using Google Maps, which covers the entire itinerary and allows the user to zoom in and out at will.

Let’s now go back to IT management. In order to make IT systems more adaptable, the level of automation in their management must drastically increase. This requires simplification. Trying to capture all the complexity of a system in one automation point is neither scalable nor maintainable. But one cannot simply wave a wand and make a system simpler. The basic building blocks of IT are not getting simpler: the number of transistors on a chip is going up, the number of lines of code in an application is going up, the number of data items in a customer record is going up. Literal simplification would mean going back to mechanical calculators and paper records… What I really mean by simplification is decomposing the system into decision points (or control points) that process information and take action at a certain level of granularity. For example, an “employee provisioning” control point is written in terms of “mail account provisioning” and “payroll addition”, not in terms of “increasing the size of a DB table”. That’s simplification. Of course, someone still needs to worry about allocating enough space in the database; there is another control point at that lower level of granularity. The challenge in front of us is to find a way to seamlessly integrate the models at these different levels of granularity, because they are obviously linked. The performance and reliability of the “employee provisioning” service is affected by the performance and reliability of the database. Management services need to be able to navigate across these models. We need to do this in a way inspired by Google Maps, not by stitching paper maps together. Let’s use the difference between these two types of maps to explore the requirements of an infrastructure for IT model federation.
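
Before getting to the requirements, here is a purely hypothetical illustration (made-up namespace and elements, implying no existing spec) of how an “employee provisioning” control point could be linked, across repositories, to a model at a lower level of granularity:

    <map:systemMap xmlns:map="urn:example:system-map">
      <map:service id="employee-provisioning">
        <!-- expressed in terms of services at the same level of granularity -->
        <map:dependsOn ref="mail-account-provisioning"/>
        <map:dependsOn ref="payroll-addition"/>
      </map:service>
      <map:service id="payroll-addition">
        <!-- federation link to a model element held in another repository,
             at the database level of granularity -->
        <map:dependsOn href="http://models.example.com/hr-db#payroll-tablespace"/>
      </map:service>
    </map:systemMap>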

Right level of granularity

The publishers of a paper map decide, based on space constraints, which streets are shown. With Google Maps, as you zoom in and out, smaller streets show up and disappear. Similarly, an IT model should be exposed in a way that allows the consumer to decide what level of granularity is presented.

Machine-readable

Paper maps are for people; Google Maps can be used by people and by programs. IT models must be exposed in a way that doesn’t assume a human sitting in front of a console is the consumer of the information.

Open to metadata and additional info

To add information to a paper map, you have to retrieve the information, find out where on the map it belongs and manually add it there. Google Maps lets you overlay any information directly on top of the map (see Housingmaps.com). Similarly, IT model federation requires the ability to attach metadata and additional information to the representation of model elements, even if that information resides outside the model repository.

Standards-based

Google provides documentation for its maps service. It’s not a standard, but at least it’s documented and publicly accessible. Presumably they are not going to sue their users for patent violation. Time will tell whether this is good enough for the mapping world. In the IT management world, it will not be enough. Customers demand real standards to protect their investment, speed up deployment and prevent unneeded integration costs. Vendors need them as protection (however imperfect) against patent threats, as a way to focus their energy on value-added products rather than plumbing, and simply because smart customers demand them.

Seamless integration

I don’t know whether Google gets all its mapping information from one source or from several, and I don’t need to know. As I move north, south, east or west and zoom in and out, it is a seamless experience. The same needs to be true of the way federated models are exposed. The framework through which this takes place should provide seamless integration across sources, and it should simplify as much as possible the discovery of the right source for the information needed.

Support for different metamodels

Not all maps use the same classification and legend. Similarly, not all model repositories use the same metamodel. Two metamodels might both have the notion of the “owner” of a resource but name it differently and provide different information about the owner. Seamless integration requires support for metamodel bridging.

Searchable

Federated model repositories need to be efficiently searchable.

Up to date

Paper maps age quickly. Google Maps is more likely (but not guaranteed) to be up to date. Federated models must be as close a representation of the real state of the system as possible.

Secure

As you compose information from different sources, the seamless navigation among these sources needs to be matched by similarly seamless integration in the way access is secured, using security federation.

Note 1: When I talk about navigating “models” in this entry, I am referring to an instance model that describes a system. For example, such a “model” can be a set of applications along with the containers in which they live, the OS these containers run on and the servers that host them. That’s one “model”. If the information is distributed among a set of MBean servers, CIMOMs, etc., then this is a federated model. I know some people don’t call this a “model” and I am not married to the word. Based on the analogy used in this entry, “system map” and “federated system map” would work just as well.

Note 2: This entry corresponds to the presentation I gave when participating in a panel (which I also moderated) on “Quality of Manageability of Web Services” at the IEEE ICWS 2005 conference in Orlando last week. The other speakers were Dr. Hemant Jain (UW Milwaukee), Dr. Hai Jin (Huazhong University of Science and Technology), Heather Kreger (IBM) and Dr. Geng Lin (Cisco). Unfortunately, the presentation was made quite challenging when (1) the microphone stopped working (it was in a large ballroom), (2) a rainstorm had us competing with the sound of thunder, (3) torrential rain started to fall on the roof of our one-story building, turning the room into a resonance box, and, to top it off, (4) the power went off completely in the entire hotel, leaving me to try to continue talking by the light of the laptop screen and the emergency exit lights… With all this plus time constraints, I am not sure I did a good job making my point clear. This entry hopefully does a better job than the presentation. The conference was quite interesting. In addition to the panel, I also presented a co-authored paper based on an HP Labs project, titled “Dealing with Scale and Adaptation of Global Web Services Management”. The conference also allowed me to finally meet Steve Loughran face to face. Congrats to Steve and Ed Smith for being awarded the “Best paper” award for “Rethinking the Java SOAP stack”, also known as “the Alpine paper”. When a paper gets a nickname you know it is having an impact…

1 Comment

Filed under Everything, Research, Tech

EPR redefining the difference between SOAP body and SOAP header

The use of WS-Addressing EPRs is redefining the difference between SOAP body and SOAP headers. The way the SOAP spec looks at it, the difference is that a header element can be targeted at an intermediary, while the body is meant only for the ultimate receiver. But very often, contract designers seem to decide what goes in headers versus the body based less on SOAP intermediaries than on the ability to create EPRs. Basically, parts of the message are put in headers just so that an EPR can be built that constrains those message elements. Sometimes to the point of putting the entire content of the message in headers and leaving an empty body (as Gudge points out, and as several specs from his company do). Conversely, a wary contract designer might very well put info in the body rather than in a header just for the sake of “protecting” it from being hard-coded in an EPR (the contract requires that the sender understand this element; it can’t be sent just because “an EPR told me to”).
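
As a sketch, using a made-up contract, here is the same logical message in the two styles:

    <!-- Payload in the body, where content for the ultimate receiver belongs -->
    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
                xmlns:ex="urn:example:quotes">
      <s:Body>
        <ex:GetQuote>
          <ex:symbol>HPQ</ex:symbol>
        </ex:GetQuote>
      </s:Body>
    </s:Envelope>

    <!-- Same information pushed into headers, with an empty body: now an EPR
         can "freeze" ex:operation and ex:symbol as reference parameters -->
    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
                xmlns:ex="urn:example:quotes">
      <s:Header>
        <ex:operation>GetQuote</ex:operation>
        <ex:symbol>HPQ</ex:symbol>
      </s:Header>
      <s:Body/>
    </s:Envelope>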

This brings up the question: rather than twisting SOAP messages to accommodate the EPR mechanism, should the EPR mechanism be made more flexible in the way it constrains the content of a SOAP message?

Comments Off on EPR redefining the difference between SOAP body and SOAP header

Filed under Everything, Standards, Tech

WSRF and WS-Notification public review

The WSRF TC has approved a set of committee drafts and the corresponding documents are now submitted to public review, a step towards standard status in the OASIS process. The documents in this public review are:

  • WS-Resource
  • WS-ResourceProperties
  • WS-ResourceLifetime
  • WS-ServiceGroup
  • WS-BaseFaults
  • WSRF Application Notes

All the docs (and associated XSD and WSDL documents) can be accessed in one zip file. Now is the time to send your comments. I know I will. There has been a lot of progress since the TC started a bit over a year ago, and the actual SOAP messages defined by these specifications are useful, but unfortunately one needs a decoder ring to understand how to use the framework in a general way. And the WS-Resource document is NOT this decoder ring; it’s more the contrary. More on this later.

The WS-Notification TC is not far behind. Last Thursday the TC approved new committee drafts of WS-BaseNotification and WS-BrokeredNotification and asked OASIS to start a public review period on these two. So the official public review hasn’t started yet (we are waiting for the OASIS staff to start it), but hopefully it will very soon, and you can already access the documents at the URLs provided in this email.

Comments Off on WSRF and WS-Notification public review

Filed under Everything, Standards

Spreading the word of SOA and SOA management

Over the last couple days, a few articles came up that help explain HP’s vision for Management of the Adaptive Enterprise, so here are the links.

Yesterday, Mark Potts published an article describing the value of SOA for enterprises and more specifically the management aspects of SOA (security, life cycle and configuration, management of infrastructure services and business services, governance, etc). BTW, the SOA practice from HP Consulting and Integration that Mark refers to at the end of his article is what I mentioned in my previous post.

Another interesting article is Alan Weissberger’s enthusiastic report from GGF 14. Alan follows GGF and related OASIS activities very closely, doesn’t fall for fluff and is not easily impressed, so this is a testament to the great work that Heather, Bryan, Bill and Barry did there, presenting a WSDM deep dive, the HP/IBM WSDM demos (which they also showed at IEEE ICAC in Seattle) and talking about the recently released HP/IBM/CA roadmap for management using Web services. These four should call themselves “Heather and the Bs”, or “HB3” for short, if they keep touring the world showing their cool demos. Can’t wait to see them at the Shoreline Amphitheatre. Of course, Alan’s positive comments also and mainly come out of all the hard technical work that led to this successful GGF14, including the OGSA WSRF Basic Profile.

Two more articles to finish, both about the HP/IBM/CA roadmap. I talked to the journalists for both of these articles, one from Computerworld and one from the Computer Business Review.

Four good articles in two days. It is very encouraging to see understanding grow of how we are unleashing the power of SOAs through adaptive management. This is what the roadmap is all about: explaining the objectives to people and inviting them on board.

Comments Off on Spreading the word of SOA and SOA management

Filed under Business, Everything, Tech

Sea, Services and Sun

There is a lot to like about HP’s announcement today that the company’s consulting arm is now offering seven new SOA services (including, of course, SOA Management) and opening four SOA competency centers (see the press release and InternetNews.com’s report). I must admit that the idea of one day moving from the software group to HP Services and working on SOA solutions on the French Riviera at Sophia Antipolis (one of the four competency centers) is not without appeal. I am now spending a lot more time with customers than I used to anyway, so it wouldn’t be too wide a chasm in that respect.

Even putting aside my bias for the good life in the “Côte d’Azur”, this is very good news. Good news of course for OpenView, including our SOA Manager product, but HP Services actually only represents a relatively small portion of OpenView sales.

More importantly, the SOA specialists in HP Services can help customers build an SOA by putting together parts from all our partners (Oracle, SAP, BEA, Microsoft, etc) as well as open source. Which is how you really want to go about building an SOA. In theory it is possible to build an SOA using homogeneous products from the same vendor, but in practice this is as likely as designing a reusable, well-factored interface while having only one use case and knowing about only one client for your service. In both cases, assumptions creep unnoticed into your contracts and abstractions. And you end up with a more tightly coupled system, which comes back to bite you as the number of participants grows.

Comments Off on Sea, Services and Sun

Filed under Business, Everything

Discovery of resource capabilities with WSDM

In his first article in the “WSDM wisdom” series, Bryan explained how to discover WSDM resources. The second article addresses the next step: once you’ve discovered resources, what are the different ways to discover their capabilities?

Comments Off on Discovery of resource capabilities with WSDM

Filed under Everything, Standards

New names for Apache projects

As part of the move out of incubation into full-fledged Apache projects, the WSRF, WS-Notif and WSDM MUWS implementations in Apache have seen some name and URL changes. So here is the new list with the correct links:

Comments Off on New names for Apache projects

Filed under Everything, Implementation

So you want to build an EPR?

EPRs (Endpoint References, from WS-Addressing) are a shiny and exciting toy. But a sharp one too. So here is my contribution to trying to prevent fingers from being cut and eyes from being poked out.

So far I have seen EPRs used for five main reasons, not all of them very inspired:

1) “Dispatching on URIs is not cool”

Some tools make it hard to dispatch on URIs. As a result, when you have many instances of the same service, it is easier to write the service if the instance ID is in the message rather than in the endpoint URI. Fix the tools? Nah, let’s modify the messages instead. I guess that’s what happens when tool vendors drive the standards: you see specifications that fit the tools rather than the contrary. So EPRs are used to put information that should be in the URI in headers instead. REST-heads see this as a capital crime. I am not convinced it is so harmful in practice, but it is definitely not a satisfying justification for EPRs.
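
As a sketch of what this first point describes (service and names made up), here is the same instance addressed through the URI and then through a reference parameter:

    <!-- Instance ID in the endpoint URI -->
    <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <wsa:Address>http://example.com/orders/4711</wsa:Address>
    </wsa:EndpointReference>

    <!-- Instance ID pushed into a header instead, to accommodate the tools -->
    <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing"
                           xmlns:ex="urn:example:orders">
      <wsa:Address>http://example.com/orders</wsa:Address>
      <wsa:ReferenceParameters>
        <ex:orderId>4711</ex:orderId>
      </wsa:ReferenceParameters>
    </wsa:EndpointReference>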

2) “I don’t want to send a WSDL doc around for just the endpoint URI”

People seem to have this notion that the WSDL is a “big and static” document and the EPR is a “small and dynamic” document. But WSDL was designed to allow design-time and run-time elements to be separated if needed. If all you want to send around is the URI at which the service is available, you can just send the URI. Or, if you want it wrapped, why not send a soap:address element (assuming the binding is well-known)? After all, in many cases EPRs don’t contain the optional service element and its port attribute. If the binding is not known and you want to specify it, send around a wsdl:port element which contains the soap:address as well as the QName of the binding. And if you want to be able to include several ports (for example to offer multiple transports) or use the wsdl:import mechanism to point to the binding and portType, then ship around a simplified wsdl:definitions with only one service that itself contains the port(s) (if I remember correctly, WS-MessageDelivery tried to formalize this approach by calling a WSRef a wsdl:service element where all the ports use the same portType). And you can hang metadata off of a service element just as well as off of an EPR.

For some reason people are happy sending an EPR that contains only the address of the endpoint but not comfortable with sending a piece of WSDL of the same size that says the same thing. Again, not a huge deal now that people seem to have settled on using EPRs rather than service elements, but clearly not a satisfying justification for inventing EPRs in the first place.
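
For illustration, here are the two same-size alternatives side by side (all names and namespaces are illustrative; the binding is assumed well-known for the bare EPR):

    <!-- A minimal EPR: just an address -->
    <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <wsa:Address>http://example.com/quotes</wsa:Address>
    </wsa:EndpointReference>

    <!-- A wsdl:port carrying the same address, plus the binding QName -->
    <wsdl:port name="QuotePort" binding="ex:QuoteBinding"
               xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
               xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
               xmlns:ex="urn:example:quotes">
      <soap:address location="http://example.com/quotes"/>
    </wsdl:port>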

3) “I can manage contexts without thinking about it”

Dynamically generated EPRs can be used as a replacement for an explicit context mechanism, such as those provided by WS-Context and WS-Coordination. By using EPRs for this, you save yourself the expense of supporting yet another spec. What do you lose? This paper gives you a detailed answer (it focuses on comparing EPRs to WS-Context rather than WS-Coordination for pretty obvious reasons, but I assume that on a purely technical level the authors would also recommend WS-Coordination over EPRs, right Greg?). In a shorter and simplified way, my take on why you want to be careful using dynamic EPRs for context is that by doing so you merge the context identifier on the one hand and the endpoint with which you use this context on the other hand into one entity. Once this is done you can’t reliably separate them, and you lose potentially valuable information. For example, assume that your company buys from a bunch of suppliers and for each purchase you get an EPR that allows you to track the purchase as it is shipped. These EPRs are essentially one blob to you, and the only way to know which one comes through FedEx versus UPS is to look at the address and try to guess based on the domain name. But you are at the mercy of any kind of redirection or load-balancing or other infrastructure reason that might modify the address. That’s not a problem if all you care about is checking the ETA on the shipment; each EPR gives you enough information to do that. But if you also want to consolidate the orders that UPS is delivering to you, or if you read in the paper about a potential UPS drivers’ strike and want to see how it would impact you, it would be nice to have each shipment be an explicit context ID associated with a real service (UPS or FedEx), rather than a mix of both at the same time. This way you can also go to UPS.com, ask about your shipments and easily map each entry returned to an existing shipment you are tracking. With EPRs rather than explicit context you can’t do this without additional agreements.
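
A sketch of the difference, with made-up names (the explicit context header is only an approximation of the WS-Context style; exact element names vary by spec):

    <!-- Dynamic EPR: context and endpoint fused into one opaque blob -->
    <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing"
                           xmlns:t="urn:example:carrier">
      <wsa:Address>http://tracking.example.com/service</wsa:Address>
      <wsa:ReferenceParameters>
        <t:token>8f3a71</t:token>  <!-- which carrier? which shipment? -->
      </wsa:ReferenceParameters>
    </wsa:EndpointReference>

    <!-- Explicit context: the shipment identifier and the carrier's service
         remain two separately usable pieces of information -->
    <s:Header xmlns:s="http://www.w3.org/2003/05/soap-envelope"
              xmlns:wsa="http://www.w3.org/2005/08/addressing"
              xmlns:ctx="urn:example:context">
      <wsa:To>http://tracking.ups.example.com/service</wsa:To>
      <ctx:context>
        <ctx:context-identifier>urn:shipment:8f3a71</ctx:context-identifier>
      </ctx:context>
    </s:Header>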

The ironic thing is that the kind of mess one can get into by using dynamic EPRs too widely instead of explicit context is very similar in nature to the management problems HP OpenView software solves. Discovery of resources, building relationship trees, impact analysis, event correlation, etc. We do it by using both nicely-designed protocols/models (the clean way) and heuristics and other hacks when needed. We do what it takes to make sense of the customer’s system. So we could just as well help you manage your shipments even if they were modeled as EPRs (in this example). But we’d rather work on solving existing problems and opening new possibilities than fix problems that can be avoided. And BTW, using dynamic EPRs is not always bad. Explicit contexts are sometimes overkill. But keep in mind that you are losing data by bundling the context with the endpoint. Actually, more than losing data, you are losing structure in your data. And these days the gold is less in the raw data than in its structure and the understanding you have of it.

4) “I use reference parameters to create new protocols, isn’t that cool!”

No it’s not. If you want to define a SOAP header, go ahead: define an XML element and then describe the semantics associated with this element when it appears as a SOAP header. But why oh why define it as a “reference parameter” (or “reference property” depending on your version of WS-A)? The whole point of an EPR is to be passed around. If you are going to build the SOAP message locally, you don’t need to first build an EPR and then deconstruct it to extract the reference parameters out of it and insert them as SOAP headers. Just build the SOAP message by putting in the SOAP headers you know are needed. If your tooling requires going through an EPR to build the SOAP message, fine, that’s your problem, but don’t force this view on people who may want to use your protocol. For example, one can argue for or against the value of WS-Management‘s System and SelectorSet as SOAP headers, but it doesn’t make sense to define those as reference parameters rather than just as SOAP headers (readers of this blog already know that I am the editor of the WSDM MUWS OASIS standard, with which WS-Management overlaps, so go ahead and question my motives for picking on WS-Management). Once they are defined as SOAP headers, one can make the adventurous decision to hard-code them in EPRs and to send those EPRs to someone else. But that’s a completely orthogonal decision (and the topic of the fifth way EPRs are used – see below). Using EPRs to define protocols is definitely not a justification for EPRs, and one would have a strong case arguing that it violates the opacity of reference parameters specified in WS-Addressing.
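
To be clear about the alternative being suggested, here is a sketch of a message where the dispatching information is simply a SOAP header the sender builds directly, no EPR round-trip required (WS-Management-flavored names for illustration only; the namespace is made up):

    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
                xmlns:wsman="urn:example:wsman">
      <s:Header>
        <!-- the sender understands this header and builds it itself -->
        <wsman:SelectorSet>
          <wsman:Selector Name="InstanceId">4711</wsman:Selector>
        </wsman:SelectorSet>
      </s:Header>
      <s:Body>
        <!-- operation payload goes here -->
      </s:Body>
    </s:Envelope>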

5) “Look what I can do by hard-coding headers!”

The whole point of reference parameters is to make people include elements that they don’t understand in their SOAP headers (I don’t buy the multi-protocol aspect of WS-Addressing; as far as I am concerned it’s a SOAP thing). This mechanism is designed to open a door to hacking. Both in the good sense of the term (hacking as a clever use of technology, such as displaying Craigslist rental data on top of Google Maps without Craigslist or Google having to know about it), and in the bad sense of the term (getting things to happen that you should not be able to make happen). Here is an example of a good use for reference parameters: if the Google search SOAP input message accepted a header that specifies what site to limit the search to (equivalent to adding “site:vambenepe.com” in the Google text box on Google.com), I could distribute an EPR to the vambenepe.com search service by just giving people an EPR pointing to the Google search service and adding a reference parameter that corresponds to the header instructing Google to limit the search to vambenepe.com.
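
Here is what such an EPR could look like (everything in this sketch is hypothetical, starting with the site-restriction header and the address, since Google defines no such header):

    <!-- An EPR that "is" the vambenepe.com search service: the search
         endpoint plus a header the recipient echoes back without
         understanding it -->
    <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing"
                           xmlns:g="urn:example:google-search">
      <wsa:Address>http://search.example.com/soap</wsa:Address>
      <wsa:ReferenceParameters>
        <g:site>vambenepe.com</g:site>
      </wsa:ReferenceParameters>
    </wsa:EndpointReference>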

Some believe this is inherently evil and should be stopped, as expressed in this formal objection. I think this is a useful mechanism (to be used rarely and carefully) and I would like to see it survive. But there are two risks associated with this mechanism that people need to understand.

The first risk is that EPRs allow people to trick others into making statements that they don’t know they are making. This is explained in the formal objection from Anish and friends as their problem #1 (“Safety and Security”), and I agree with their description. But I don’t agree with the proposed solutions, as they prevent reference parameters from being treated by the service like any other SOAP header. Back last November I made an alternative proposal, using a wsa:CoverMyRearside element, that would not have this drawback, and I know other people have made similar proposals. In any case, this risk can and should be addressed by the working group before the specification becomes a Recommendation, or people will refuse to process reference parameters after a few high-profile hacks. Reference parameters will become the ActiveX of SOAP.

The second risk is more subtle, and that one cannot be addressed by the specification. It is the fragility that results from applications that share too many assumptions. I get suspicious when someone gives me directions to their house with instructions such as “turn left after the blue van” or “turn right after the barking dog”, don’t you? “We’re the house after the green barn” is a little better, but what if I want to re-use these directions a few years later? What’s the chance that the barn will be replaced or repainted? EPRs that contain reference parameters pose the same problem. Once you’ve sent the EPR, you don’t know how long it will be around, you don’t know who it will get forwarded to, you don’t know what the consumer will know. You need to spend at least as much effort picking what data you use as a reference parameter (if anything) as you spend designing schemas and WSDL documents. If your organization is smart enough to have a process to validate schemas (and you need that), that same process should approve any element that is put in a reference parameter.

Or you’ll poke your eye out.

2 Comments

Filed under Everything, Implementation, Security, Standards, Tech

HP/IBM/CA roadmap white paper

HP, IBM and CA recently released a white paper describing how we see the different efforts in the area of management for the adaptive enterprise coming together and, more importantly, what else is needed to fulfill the vision. Being a co-author, I am arguably more than a little biased, but I recommend the read as an explanatory map of the standards/specifications landscape, from the low levels of the Web services stack all the way up to model transformations and policy-based automated management: http://devresource.hp.com/drc/resources/muwsarch/index.jsp

Comments Off on HP/IBM/CA roadmap white paper

Filed under Articles, Everything, Standards, Tech

Apollo, Hermes, Muse out of incubation at Apache

Apollo (WS-ResourceProperties open source implementation), Hermes (WS-Notification open source implementation) and Muse (WSDM MUWS open source implementation) are now full Apache projects, out of incubation mode. Congrats Ian and Sal!

Comments Off on Apollo, Hermes, Muse out of incubation at Apache

Filed under Everything, Implementation

Someone is paying attention

It’s nice to see that, while most of the tech press seems happy to copy/paste from misleading press briefing documents rather than do any checking of its own, some analysts take a little more time to look through the smoke. So, when Gartner looks into the recent Microsoft/Sun announcement (see “Progress Report on Sun/Microsoft Initiative Lacks Substance”), their recommendation is to “view the latest Sun/Microsoft announcement as primarily public-relations-oriented”. Similar take from Jason Bloomberg of ZapThink, who thinks that this “doesn’t do anything to contradict the fact that Microsoft is the big gorilla in this relationship”. And Forrester’s Randy Heffner (quoted in “Analysts Question Microsoft-Sun Alliance”) thinks that “Bottom line: Web services interoperability is not yet part of the picture”. Oh, and by the way, “the WS-Management group has yet to come clean on how they will work with the WSDM standard approved by OASIS,” Heffner also says. “Again, WS-Management is still just a specification in the hands of vendors”. Very much so. But in PR-land everything looks different. As tech journalists write these articles, including insight from analysts that contradicts what the tech press reported a couple of days earlier, I wonder if they ever think: “Hmm, maybe I should be the one doing reality checks on the content of press releases rather than going around collecting quotes; then the analysts could focus on real in-depth analysis rather than just doing the basic debunking work…”

Comments Off on Someone is paying attention

Filed under Business, Everything, Standards, Tech

Reality check on Microsoft/Sun claims about single sign-on

This morning I learned that Microsoft and Sun had a public event where the CEOs reported on a year of working together. This is a follow-up to Greg Papadopoulos’ report on the progress of the “technical collaboration”. In that post, Greg told us about the amazing technical outcomes of the work between the two companies and, being very familiar with the specs he was referring to, I couldn’t help but point out that the result of the “technical collaboration” he was talking about looked a lot like Sun rubber-stamping a bunch of Microsoft specifications without much input from Sun engineers.

So when I heard this morning that the two companies were coming out publicly with the result of their work, I thought it would be fair for me to update my blog and include this information.

Plus, reading the press release and Greg’s Q&A session, it sounded pretty impressive, and it would have been bad faith on my part not to acknowledge that Greg actually had something to brag about; it just wasn’t yet public at the time. In effect, it sounded like they had found a way to make the Liberty Alliance specs and WS-Federation interoperate with one another.

From Greg’s Q&A: “In a nutshell, we resolved and aligned what Microsoft was trying to accomplish with Passport and the WS-Federation with what we’ve been doing with the Liberty Alliance. So, we’ve agreed upon a way to enable single sign-on to the Internet (whether through a .NET service or a Java Enterprise System service), and federate across those platforms based on service-level agreements and/or identity agreements between those services. That’s a major milestone.”

Yes Greg, it would have been. Except this is not what is delivered. The two specs that are supposed to support these claims are Web SSO MEX and Web SSO Interop Profile, which are 14 and 9 pages long respectively. Now, I know better than to equate the length of a spec with its value, but when you cut the boilerplate content out of these 14 and 9 pages, there is very little left to deliver on claims as ambitious as Greg’s.

The reason is that these specs in no way provide interop between a system built using Liberty Alliance and a system built using WS-Federation. All they do is allow each system to find out which spec the other uses.

One way to think about it is that we have an English speaker and a Korean speaker in the same room and they are not able to talk. What the two new specs do is put a lapel pin with a British flag on the English speaker and a lapel pin with a Korean flag on the Korean speaker. Yes, this helps a bit. At least now the Korean speaker will know what the weird language the other guy is speaking is, and he can go to school and learn it. But just finding out what language the other guy speaks is a far cry from actually being able to communicate with him.

Even with these specs, a system based on Liberty Alliance and one based on WS-Federation are still incompatible and you cannot single sign-on from one to the other. Or rather, you can only if your client implements both. This is said explicitly in the Web SSO Interop Profile spec (look for the first line of page 5): “A compliant identity provider implementation MUST support both protocol suites”. Well, this isn’t interop, it’s duplication. Otherwise I could claim I have solved the problem of interoperability between English and Korean just by asking everyone to learn both languages. Not very convincing…

But of course Microsoft and Sun knew that they could get away with that in the press. For example, CNet wrote “The Web Single Sign-On Metadata Exchange Protocol and Web Single Sign-On Interoperability Profile will bridge Web identity management systems based on the Liberty and Web services specifications, the companies said”. As the Columbia Journalism Review keeps pointing out, real journalists don’t just report what people say; they check whether it’s true. And in this case, it simply isn’t.

1 Comment

Filed under Business, Everything, Security, Standards, Tech