Category Archives: Tech

Omri on SOAP and WCF

Omri Gazitt presumably couldn’t sleep last Friday night, so he wrote a well thought-out blog post instead. Well worth the read. His view on this is both broad and practical. There is enough in this post to, once again, take me within inches of trying WCF. But the fact that for all practical purposes the resulting code can only be deployed on Windows stops me from making this investment.

And since he still couldn’t sleep he penned another entry shortly after. That one is good but a bit less convincing. Frankly, I don’t think the technical differences between Java/C# and “dynamic languages” have much to do with the fact that stubs hurt you more often than not when developing code to process XML messages. With a sentence like “in a typed language, the conventional wisdom is that generating a proxy for me based on some kind of description of the service will make it easier for me to call that service using my familiar language semantics” Omri takes pains to avoid saying whether he agrees with this view. But if he doesn’t (and I don’t think he does), you’d think that he’d be in a pretty good position (at least on the .NET side) to change the fact that, as he says, “the way WSDL and XSD are used in platforms like J2EE and .NET tends to push you towards RPC”…

I haven’t used .NET since writing C# code back when HP was selling the Bluestone J2EE server and I was in charge of Web services interoperability, so I have limited expertise there. But Java has the exact same problem with its traditional focus on RPC (just ask Steve). I am currently writing a prototype in Java for the CMDB Federation specification, which is still at an early stage. It’s all based on directly processing the XML (mostly through a bunch of XPath queries), which makes it a breeze to evolve as the draft spec changes. Thank you XOM (and soon Nux).
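
To give an idea of what this style of processing looks like, here is a minimal sketch using XOM’s XPath support. The file name, namespace URI and element names are placeholders (nothing here is taken from the actual draft); only the XOM API calls are meant to be read literally.

```java
import java.io.File;
import nu.xom.Builder;
import nu.xom.Document;
import nu.xom.Nodes;
import nu.xom.XPathContext;

public class DraftSpecReader {
    public static void main(String[] args) throws Exception {
        // Parse the incoming XML message (from a file, for simplicity).
        Document doc = new Builder().build(new File("query-response.xml"));

        // Bind the prefix used in the XPath expressions to a (placeholder) namespace.
        XPathContext ctx = new XPathContext("cmdbf", "http://example.org/cmdbf-draft");

        // Pull out only the parts this code cares about. When the draft spec
        // moves an element around, the fix is a one-line change to the XPath.
        Nodes ids = doc.query("//cmdbf:item/cmdbf:id", ctx);
        for (int i = 0; i < ids.size(); i++) {
            System.out.println("item id: " + ids.get(i).getValue());
        }
    }
}
```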

I very much agree with the point Omri is making (that relying on metadata in order to remove complexity often just ends up adding it), but it’s not just an issue for dynamic languages.

Filed under Everything, Implementation, SOAP, Tech, XOM

A new management catalog proposal

As part of the work around the convergence of WS-Management and WSDM, HP, IBM, Intel and Microsoft just published a first version of a specification called WS-ResourceCatalog. This specification provides a way to list management endpoints for resources. For example, the BMC (baseboard management controller) on a server motherboard could host a catalog that lists the management endpoints for its different components (network card, CPU, disk, etc). This is an attempt to bring more consistency to discovery scenarios.

The spec has been submitted to the DMTF for its consideration as part of its Web services-based management protocol efforts. The submission includes a list of issues related to the spec, so it’s pretty clear that it’s nowhere near done. Rather than hammering things out even longer (trust me, it’s been too long already), we decided to hand it over as is to the DMTF and let its members decide how to handle the issues. And any other change they wish to make.

Filed under Specs, Standards, Tech, WS-ResourceCatalog

Want to play a minesweeper game?

Since I am on a roll with off-topic posts…

I accidentally ran into some Web pages and scripts I wrote between 1994 and 1996. Mostly experiments with Web technologies that were emerging at the time. Some have pretty much disappeared (VRML), some are still pretty useful but slowly on their way out (CGI), and many are very prominent now. I found a bunch of Python scripts I wrote back then, some Java apps and applets and even a Minesweeper game written in JavaScript. And the impressive thing is that even though those were all pretty early technologies at the time, these programs seem to run just fine today with the latest virtual machines and interpreters for their respective languages. Kudos to the people who have been growing these technologies while maintaining backward compatibility. Speaking of technologies that were emerging at the time and have made it big since then, all these were served from a Linux server and the Python stuff was developed on a Linux desktop (Slackware was the distribution of choice).

Filed under Everything, Game, JavaScript, Minesweeper, Off-topic, Tech

SML submitted to W3C

The previously released SML 1.0 and SML-IF 1.0 specifications have been submitted to W3C for standardization (the submission actually happened on 2/28 but W3C acknowledged it today). My guess is that this announcement landing on the same day that SCA 1.0 is released is not going to reduce the confusion between the two efforts.

Filed under Everything, SML, Standards, Tech

Coming up: SCA 1.0

A look at the “specifications” page of the “Open SOA” web site (the site used by the companies that created the SCA and SDO specifications) reveals a long list of specs with a release date of tomorrow. It’s like stumbling on the quarterly results of a publicly traded company the day before they are announced… except without the profit potential.

There is no link at this point, so no luck accessing the specifications themselves (unless one feels lucky and wants to try guessing the URLs based on those used for previously posted documents…) but we now know what they are and that they are coming out tomorrow:

  • SCA Assembly Model V1.00
  • SCA Policy Framework V1.00
  • SCA Java Common Annotations and APIs V1.00
  • SCA Java Component Implementation V1.00
  • SCA Spring Component Implementation V1.00
  • SCA BPEL Client and Implementation V1.00
  • SCA C++ Client and Implementation V1.00
  • SCA Web Services Binding V1.00
  • SCA JMS Binding V1.00
  • SCA EJB Session Bean Binding V1.00

The second one is the one I’ll read first.

Filed under Everything, SCA, Standards, Tech

SML 1.0 is out

After taking the form of two early drafts (versions 0.5 and 0.65), the SML specification has now reached v1.0, along with its sidekick the SML-IF specification. You can find both of them at serviceml.org. This is where the happy bunch that assembled to create these specs would normally part ways. Not quite true in this case since there is related work about to be tackled by a very similar set of people (more on this later), but at least we are not going to touch SML and SML-IF anymore. They are ready for submission to a standards body where further modifications will take place (more on this later too).

Filed under Everything, SML, Standards, Tech

WS-ResourceTransfer article

Network World recently published a “technology update” column I wrote for them on WS-ResourceTransfer. It was supposed to come out soon after the release of WS-ResourceTransfer (in August 2006) but got postponed a few times. In the process, the editors requested that I make some improvements, but they also made some changes to the article that I hadn’t seen until it was published. The title is theirs, for example, as is this statement, which I don’t actually agree with: “Models can be easily translated from one modeling language to another, so the invoker of the model and the service providers don’t need to use the same modeling language. Service Modeling Language, for example, was designed for that purpose.” SML was not designed for the purpose of doing model translation (even though you can of course transform to and from SML) and unfortunately model translation is not always easy. I guess the lesson is that if I had written the article more clearly to start with they wouldn’t have felt the need to make such modifications.

I think the article is still helpful in describing the potential role of WS-ResourceTransfer at the intersection of SOA and model-based management.

Filed under Articles, Everything, Standards, Tech, WS-ResourceTransfer

SML versus the fat-bottomed specs

SML is, if I simplify, XSD augmented with Schematron. For those, like me, who aren’t fond of XSD, this is not very exciting… until you try to look at things in a different light. Instead of another spec that forces you towards the use of XSD (like WSDL), maybe the fact that SML uses XSD is your ticket *out* of XSD-hell. Let me explain.

I wrote above that I am not fond of XSD, and yet I see the value of having SML make use of it. Like it or not, many people and organizations have made heavy use of XSD to define well-known and reusable XML elements. And there is a lot of tooling (design time and runtime) for it. Breaking away from XSD altogether is possible (and advisable in many cases), but hard to do in places like systems management that have already invested heavily in using XSD.

The problem is that XSD is a document description language. It works well when the “document” abstraction is a good match. So, when I retrieve an XHTML page from a Web site, I want the paragraphs to be in the right order. The “document” abstraction is a good match. On the other hand, when I retrieve the configuration of a server, I don’t necessarily care if the description of the CPU comes before or after the description of the network card. I am still retrieving a document though (because XML forces this abstraction). But I don’t have the same requirements on its structure that I have on a document meant for publishing (like a Web page). For the non-publishing kind of interaction, a contract (a bullet list of things you can count on) is a better abstraction than a document.

XSD works better for the publishing kind of scenario, where you want to control all aspects of the document. It doesn’t work as well in situations where you just have some constraints that need to be met (e.g., the memory size must be a number) but other things are not important to you (order of some of the elements). As a result of XSD quirks, people often end up arbitrarily fixing the order of elements where it’s not needed (using xsd:sequence) and even have to introduce unneeded elements (to escape the dreaded UPA rule). And things become even worse when you have to extend and/or version existing XSD because of all the arbitrary constraints. Other metamodels like RDF avoid a lot of these problems by focusing on the assertion, rather than the document, as the base concept, but that’s a topic for another post.

One nice thing about the syntax constraints usually imposed by an XSD is that they can make turning a piece of XML into a Java (or other language) object more efficient. It doesn’t really matter semantically if the zip code is before or after the city name. In the US the zip code typically comes after (in postal addresses), in France it’s the opposite. And for this (unlike for the stupid MMDDYY date format, don’t get me started on that) you can make a case either way, since in some places a zip code spans several cities and in others a city contains several zip codes. But whichever way you choose, you may be able to write a faster parser if you know in what order to expect them.

So I don’t mind at all having an XSD that describes a reusable type for elements that are very often used as an information atom, like an address (on the other hand, turning an entire XML document into a Java object is often the wrong way to handle it).

By now you are getting an idea of what I want as an XML contract language. I want reusable elements that are small and potentially tightly defined (XSD definitions for a set of GEDs, i.e. global element declarations). And I want assertions that describe the rules that a set of such elements needs to obey in order to be valid as a unit per the contract. Which is where SML comes in. Because it provides a way to package XSD and Schematron, I can’t help thinking of it as a possible alternative to an all-XSD view of the world, provided people have the discipline to use the XSD part only to describe small reusable elements and to rely on the XPath-driven Schematron constraints for the contract rules that tie these GEDs into a meaningful unit.
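
To make the split concrete, here is a minimal Java sketch of what that discipline could look like at runtime: the XSD is used only to check a small reusable element, and the contract itself is a handful of Schematron-style XPath assertions. The file names, element names and rules are made up for illustration.

```java
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class ContractCheck {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new File("server-config.xml"));
        XPath xpath = XPathFactory.newInstance().newXPath();

        // Step 1: the XSD only covers a small "information atom" (the address
        // element, declared globally in address.xsd), not the whole document.
        Schema atom = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                .newSchema(new File("address.xsd"));
        Node address = (Node) xpath.evaluate("//address", doc, XPathConstants.NODE);
        atom.newValidator().validate(new DOMSource(address));

        // Step 2: the contract proper is a set of XPath assertions (what
        // Schematron rules boil down to). No element order is imposed, and
        // unknown extra elements don't break anything.
        String[] rules = {
            "string(number(//memory/@sizeMB)) != 'NaN'", // memory size must be a number
            "count(//cpu) >= 1"                          // at least one CPU element
        };
        for (String rule : rules) {
            boolean ok = (Boolean) xpath.evaluate(rule, doc, XPathConstants.BOOLEAN);
            System.out.println((ok ? "PASS  " : "FAIL  ") + rule);
        }
    }
}
```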

A few notes:

– I am fully aware (having been part of the effort) that SML wasn’t created as a generic contract language for XML-based interaction, but as a desired-state modeling language. The usage I am suggesting here is clearly a hack that abuses the syntax provided by SML (actually SML-IF). And I am not even sure that the SML-IF packaging would be an entirely convenient vehicle for this approach. I haven’t done the experimentation needed to validate that. It just seems to be in the ballpark of the requirements.

– I find it ironic that the approach to an XML contract language that I described above is already how many XML specs are defined in their human-readable sections (at least in the SOAP world): a list of pseudo-XPath statements with a description of what to expect at the end of each one. But somehow, at the bottom of each of these specs, we get a huge XSD that imposes a lot of extra constraints that have no justification in the semantics of the spec, rather than a set of XPath-driven Schematron statements that would provide a machine-readable equivalent of the human-readable rules described using pseudo-XPath. Like the Queen song (almost) says, “Fat bottomed specs you make the SOAPin world go dumb”.

Filed under Everything, SML, Standards, Tech

Come for the XML, stay for the desired-state approach

What would you think of programmers who switch from C to Java in order to be able to use Javadoc for interface documentation? On the one hand, if the benefits of Javadoc alone justify the effort to switch then why not? On the other hand, you can’t help thinking that it’s a pity that they don’t realize (and take advantage of) all the other improvements they are getting for switching to Java. Especially if they start to rewrite some of their existing C code in Java in a way that smells more like C than real Java. Wouldn’t you want to sit with them for a talk?

Well, I am seeing early signs of this happening with SML. As I wrote earlier, the main difference between CIM and SML is one of usage model. Unlike CIM, SML is designed to enable desired-state management. That’s the real difference. But it also happens that SML is XML-based (and naturally compatible with document-exchange styles of interaction, be they Web services or REST) while CIM is not (and its XML form is unusable in practice for anything other than RPC). And the difficulty of doing XML document exchange with CIM happens to be a more immediate problem for many people than desired-state management. As a result, it is tempting to look at SML as a solution to CIM’s lack of XML friendliness. But moving to SML for this reason, while keeping the same level of granularity and the same usage model, is just like moving from C to Java for the Javadoc.

Moving to SML because it is defined around XML documents is hard to justify. BTW, moving to SML because it’s based on XSD is even worse, as I’ll explain in the next post.

Filed under Desired State, Everything, SML, Standards, Tech

Give and take

I wasn’t looking for yet another “REST vs. Web Services” thread, but Pete Lacey sucked me in (and many others) by hooking us with a hilarious bait post, and since then he’s been pulling strongly on the line with very serious discussions on the topic, so we haven’t been able to let go. The latest one left me a little puzzled though. In the security section, Pete writes that it would indeed make sense to use WS-Security (and the SOAP envelope as a wrapper) if there were a need for message-level security rather than simply transport-level security. And then, barely catching his breath, he dismisses WS-Transfer and WS-Enumeration in the following paragraph on the basis that “these specifications effectively re-implement HTTP” (not really true for WS-Enumeration, but let’s leave that aside). More importantly, how am I to reconcile this with the previous paragraph? Once I use WS-Security and the SOAP envelope, I can’t use pure HTTP anymore. But the patterns supported by HTTP are still very useful. That’s what WS-Transfer is for. That’s what SOAP is for more generally: providing a hook-up point for things like WS-Security that compose with the rest of the message. I don’t understand how Pete can concede that in some cases message-level security is useful but then take away the possibility of doing a GET in those circumstances. Is he saying that, for some reason, the scenarios that justify message-level security are scenarios in which REST-style interactions don’t apply?

Filed under Everything, SOAP, Standards, Tech

The S stands for satire

The cynical view of SOAP is not new, but this piece (“The S stands for Simple” by Pete Lacey) puts it down in the best form I’ve seen so far. What makes it such a good satire is not the funny writing (“Saints preserve us! Alexander the Great couldn’t unravel that” on reading the XSD spec) but how true it is to what really took place. There was plenty of room for exaggeration to get additional comic effect, but Pete steered clear of that and the resulting piece is much more powerful for it.

I am impatiently waiting for the second installment, when the poor developer gets introduced to the WS-Addressing disaster.

I love the piece, but that doesn’t mean I have given up on SOAP. The fact that there was a lot of bumping around trying to find out how SOAP is most useful is not bad per se, even if the poor developer left a few handfuls of hair on the floor in the process (that’s the joy of being an early adopter, right?). Many other good technologies go through that; in fact, figuring out what they should not do is part of what makes them good.

SOAP is indeed for doc exchange (not “wrapped-doc/lit”). If you need end-to-end security, reliability or transactions, then it helps you with that. If you don’t need them but think you might need them someday, then the cost of putting your message in a SOAP envelope is pretty low, so do it. If you know you won’t need that, then by all means POX all you want. And BTW, while the “role” attribute is indeed useless, “mustUnderstand” is very important. In fact, it would be very nice to have something like it for any portion of the message, not just headers. And speaking of extending header goodies to the body, EPRs would be useful if they were a real mechanism for templatizing SOAP messages (any part of the message, with a way to indicate which portions are there because of the template) instead of a dispatching crutch for sub-standard SOAP stacks. And since I have switched into “Santa Claus list” mode, the other piece we need is a non-brittle XML contract language. That’s for a future blog entry.
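
As a concrete illustration of what “putting your message in a SOAP envelope” and flagging a header with mustUnderstand amounts to, here is a minimal sketch using the SAAJ API (javax.xml.soap). The header block and the payload are just examples, not a recommendation of any particular stack.

```java
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBodyElement;
import javax.xml.soap.SOAPConstants;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.soap.SOAPMessage;

public class EnvelopeSketch {
    public static void main(String[] args) throws Exception {
        // A SOAP 1.2 envelope around an otherwise plain XML payload.
        SOAPMessage msg = MessageFactory.newInstance(SOAPConstants.SOAP_1_2_PROTOCOL)
                .createMessage();

        // A header block the receiver is not allowed to silently ignore.
        SOAPHeaderElement security = msg.getSOAPHeader().addHeaderElement(new QName(
                "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd",
                "Security", "wsse"));
        security.setMustUnderstand(true);

        // The document being exchanged goes in the body, unchanged.
        SOAPBodyElement order = msg.getSOAPBody().addBodyElement(
                new QName("http://example.org/orders", "purchaseOrder", "po"));
        SOAPElement item = order.addChildElement("item", "po");
        item.addTextNode("widget");

        msg.writeTo(System.out);
    }
}
```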

Filed under Everything, SOAP, Standards, Tech

Is SML to CIM what WS is to RPC?

The question I hear most often when talking about SML is how it relates to CIM. The easy part of the answer is to explain that SML is a metamodel, like MOF, not a set of model elements/classes like CIM. SML per se doesn’t define what a blade server or a three-tier application looks like. So the question usually gets refined to comparing SML to MOF, or comparing an SML-based model to the CIM model. Is it a replacement, people want to know.

Well, you can look at it this way, but whether this is useful depends on your usage model. Does your usage model include a distinction between observed state and desired state? You can use SML to model the laptop in front of you, but if all you’re doing is reading/writing properties of the laptop directly, you don’t get much out of using SML rather than CIM, and definitely not enough to justify modifying an existing (and tested) manageability infrastructure. But if you change your interaction model towards one with more automation and intermediation between you and the resource (at the very least by validating the requested changes before applying them), then SML starts to provide additional value.

The question is therefore whether you would benefit from the extra expressiveness of constraints (through Schematron) and the extra transformability/validation/extensibility (through the use of XML) that SML buys you. If you’re just going to assign specific values to specific properties, then CIM is just as good. And of course keep in mind that a lot of work has already gone into defining the domain-specific semantics of many properties in CIM, and that work should, to the extent possible, be leveraged: either directly by using CIM where SML doesn’t provide additional value, or indirectly by carefully surfacing CIM-defined elements inside an SML model. Finally, CIM defines operations while SML has no such concept. The most natural way to do a “start” using SML is not to invoke a “start” operation, as it is in CIM; it is to request that the configuration be changed to a state in which the resource is started.

In conclusion, there is a lot more to the “CIM versus SML” question than a direct replacement. Those who just look for ways to do syntactical translation between the two approaches will repeat the same errors that created (and still create) so many problems for those who saw Web services as just another way to do object-to-object RPC. The usage model (some would say the architecture) is what matters, not the syntax.
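
The “start” example boils down to a few lines of entirely made-up Java; neither CIM nor SML defines interfaces like these, this is only meant to show who drives the state transition.

```java
public class StartExample {
    // Operation-centric, CIM-like: the caller drives each step explicitly.
    interface OperationStyle {
        void invoke(String resourceId, String operation);
    }

    // Desired-state, SML-like: the caller submits the state it wants; the
    // management infrastructure validates it and figures out how to get there.
    interface DesiredStateStyle {
        void submitDesiredState(String resourceId, String desiredStateXml);
    }

    public static void main(String[] args) {
        OperationStyle cimLike = (id, op) -> System.out.println("invoke " + op + " on " + id);
        DesiredStateStyle smlLike = (id, xml) -> System.out.println("reconcile " + id + " to " + xml);

        // "Start the server", expressed both ways.
        cimLike.invoke("server42", "Start");
        smlLike.submitDesiredState("server42", "<server><state>started</state></server>");
    }
}
```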

Filed under Everything, SML, Standards, Tech

Search engine for XML documents

One of the entries that has been collecting dust in the “draft” folder for this blog was about how it would be nice to have a search engine for XML documents. So, when the announcement of Google Code Search came out, I thought it was finally done and I could delete the never-published entry. Well, it turns out it doesn’t support searching XML documents. I don’t care to debate whether XML (or some XML dialects) is code or not; all I know is that it would be very nice to be able to do things such as:

  • look for instances of a specific GED
  • compare how often different XSD constructs are used (choice, sequence…)
  • look for all wsdl:binding elements that implement a given portType
  • look for all wsdl:port elements and all the WS-A EPRs that have an address in the hp.com domain
  • look for all XML documents for which a given XPath query evaluates to “true”
  • look at the entire Web (or a subset of it) as one giant SML model and query it
  • even for good old HTML/XHTML documents, it would be nice to search them as XML documents and be able to look for pages that contain a certain string as part of the title element or as part of a list.
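
As an illustration of the kind of query I have in mind, here is a minimal sketch of the third and fourth items, run against a single WSDL document with XOM. The URL is a placeholder; a real XML search engine would run this across everything it has crawled.

```java
import nu.xom.Builder;
import nu.xom.Document;
import nu.xom.Nodes;
import nu.xom.XPathContext;

public class WsdlGrep {
    public static void main(String[] args) throws Exception {
        Document wsdl = new Builder().build("http://example.org/some-service.wsdl");

        XPathContext ctx = new XPathContext("wsdl", "http://schemas.xmlsoap.org/wsdl/");
        ctx.addNamespace("soap", "http://schemas.xmlsoap.org/wsdl/soap/");

        // wsdl:port elements whose SOAP address lives in the hp.com domain.
        Nodes hits = wsdl.query(
                "//wsdl:port[contains(soap:address/@location, 'hp.com')]/@name", ctx);
        for (int i = 0; i < hits.size(); i++) {
            System.out.println("port in hp.com domain: " + hits.get(i).getValue());
        }
    }
}
```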

In the meantime, people are going to have fun searching for passwords embedded in source code and other vulnerabilities.

Filed under Everything, Tech

Announcing SML

BEA, BMC, Cisco, Dell, EMC, HP, IBM, Intel, Microsoft and Sun just published a new modeling specification called SML (Service Modeling Language). This is the next step in the ongoing drive towards more automation in the management of IT resources. The specification makes this possible by providing a more powerful way (using Schematron) to express system constraints in a machine-readable (and, more importantly, machine-actionable) form. It also has the advantage (being based on XSD) of aligning very well with XML document exchange protocols and the Web services infrastructure.

Here is the SML spec on the HP site. Very soon there will be an HTML version of the spec there in addition to the PDF. The serviceml.org Web site also provides a basic but vendor-neutral home for the spec.

Those familiar with the QuarterMaster work will see a lot of commonality and know that HP has a lot of experience to contribute in this domain: paper 1, paper 2 and paper 3.

This is an initial draft, not a final specification. To my mind, the major hole at this time is the lack of support for versioning. Something to address soon.

There are many good things about this specification, but unfortunately not the name. Just for kicks, here are some better candidates:

  • ITSOK (IT Systems Operational Knowledge) “it’s ok”
  • ITSON (IT Systems Open Notation) “it’s on”
  • ITSUP (IT Systems Upkeep Profile) “it’s up”

Filed under Everything, SML, Standards, Tech

There is no control-Z on this thing!

Seen in today’s issue of the RISKS digest:

In the process of upgrading its storage management, PlusNet deleted more than 700GB of its customers’ e-mail and disabled the ability of about half its 140,000 users to send and receive new e-mail. “At the time of making this change the engineer had two management console sessions open one to the backup storage system and one to live storage. These both have the same interface, and until [then] it was impossible to open more than one connection to any part of the storage system at once.” Patches were installed, but the engineer assumed he was working with the backup rather than the live server. Thus, “the command to reconfigure the disk pack and remove all data therein was made to the wrong server.”

It’s for things like this that the RISKS digest should be required reading for software professionals, especially in enterprise software. Tools make it easier to do useful things; they also make it easier to do very stupid things. Additional automation (which we are working on right now) can help prevent these problems. But it has corner cases too that may open the door to even bigger failures.

I have no idea what vendor console is involved in this specific incident. Could well be HP. Or Veritas. Or Tivoli.

[UPDATE: turns out it was Sun.]

Filed under Everything, Tech

A look at Web services support at Microsoft

A “high-level overview of Microsoft support for Web services across its product offerings” was recently published at MSDN. At times it sounds a bit like a superficial laundry list of specs (it really doesn’t mean much to say that a product “supports” a spec, even though we all often resort to such vague statements). Also, the screen captures are not very informative. But considering the breadth of material and the stated goal of providing a high-level overview, this is a nice document. And if I imagine myself in the position of trying to write a similar document for HP, my appreciation for the work that went into it rises quickly.

Having all this listed in one place points out one disappointing aspect: the disconnect between Web services usage for “management and devices” and everything else Web services. If you look at the table at the bottom of the MSDN article, there is no checkbox for any management or device Web service technology in the ASMX 2.0, WSE 2.0, WSE 3.0 or WCF columns. These technologies only appear in the guts of the OS. So Vista can interact with devices using WS-Eventing, WS-Discovery and the Web services device profile, but I guess Visual Studio developers are not supposed to want to do anything with these interfaces. Similarly, Windows Server R2 provides access to its manageability information using WS-Management (actually, AFAIK it’s still an older, pre-standard version of WS-Management, but we’ll pass on that), WS-Transfer, WS-Enumeration and WS-Eventing, but Visual Studio developers are again on their own if they want to write applications that take advantage of these capabilities.

This is disappointing because one of the most interesting aspects of using Web services for management is to ease integration between IT management and business applications, a necessary condition for making the former more aligned with the latter. By using SOAP for both we are getting a bit closer than when one was SNMP and the other was RMI, but we won’t get to the end goal if there is no interoperability above the SOAP layer.

Hopefully this is only a “point in time” problem and we will soon see better support for Web services technologies used in management in the general Web services stack.

The larger question of course is that of the applicability (or lack of applicability) of generic XML transfer mechanisms (like WS-Transfer, WS-Eventing and WS-Enumeration) outside of the resource management domain. That’s a topic for a later post.

Filed under Everything, Implementation, Tech

RDF to XML tools for everyone in 2010?

As has been abundantly commented on, a lot of the tool/runtime support for XML development is centered on mapping XML to objects. The assumption is that developers are familiar with OO and not XML, so tools provide value by allowing developers to work in the environment they are most comfortable in. Of course, little by little it becomes obvious that this “help” is not necessarily that helpful, and that if the processing of XML documents is core to the application then the developer is much better off learning XML concepts and working with them.

The question is: what will happen to the tools once we move beyond XML as the key representation? XML might still very well be around as the serialization (it sure makes transformations easy), but in many domains (IT management for one) we’ll have to go beyond XML for the semantics. Relying on containment and order is a very crude mechanism, and one that can’t be extended very well despite what the X in XML is supposed to stand for. Let’s assume that we move to something like RDF and that by that point most developers are comfortable with XML. Who wants to bet that tools will show up that try to prevent developers from having to learn RDF concepts and instead find twisted ways to process RDF statements as one big XML document, using the likes of XPath, XSLT and XQuery instead of SPARQL?
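
For contrast, here is a minimal sketch of the query done the RDF way, with SPARQL through a toolkit like Apache Jena (the data file and the vocabulary are made up). The point is that the query is about the statements themselves, not about whichever XML layout a particular RDF/XML serializer happened to pick.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class SparqlNotXPath {
    public static void main(String[] args) {
        // Load some RDF (file name and vocabulary are placeholders).
        Model model = ModelFactory.createDefaultModel();
        model.read("file:datacenter.rdf");

        // Which servers host an application that depends on the payroll database?
        String q = "PREFIX ex: <http://example.org/it#> "
                 + "SELECT ?server WHERE { ?server ex:hosts ?app . "
                 + "                       ?app ex:dependsOn ex:payrollDb }";

        try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println(results.next().get("server"));
            }
        }
    }
}
```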

All the trials and errors around Web services tools in Java (especially) made it very clear how tools can hold you back just as much as they can help you move forward.

Filed under Everything, Implementation, Tech

Looking at AMQP

If you spend a large part of your day reading and writing XML-related specs and artifacts, what do you do in the evening for a change? You read a non-XML spec. That’s what I did tonight, with the newly-released AMQP specification.

The clear aim of AMQP is to open up and commoditize the message queuing middleware space. Some customers in the financial services industry and smaller software vendors are fed up with, respectively, IBM/Tibco fees and IBM/Tibco market dominance. And this is how they plan to bring change. Definitely something to watch. And not just if you’re in banking: many other domains have demanding messaging needs, including IT management. So they have my attention.

I am not a messaging guru, so my comments are not about the core content of the spec (it seems very well thought-through though) but mostly around the interconnection between this spec and the domain I focus on. Basically, how this would fit in the landscape of standards-based management integration for increased automation and flexibility.

So how would this map to WS-Eventing, which is the basis for the eventing part of the ongoing Web services management convergence? To a large extent, they are orthogonal and complementary. WS-Eventing is about creating and managing subscriptions, not how the notifications get delivered. As long as you can create an EPR that indicates that AMQP is the delivery mechanism, WS-Eventing will be ok with that protocol. And conversely, AMQP has limited support for the act of subscribing. The closest thing is the ability, defined by AMQP, for a consumer application to create message queues and then to pass bindings to the server to drive messages into these queues. But in AMQP there is no concept of notifying the publisher application that there is now interest from a consumer in some type of message. So, in the general case, a mapping of a WS-Eventing subscribe call to AMQP would require two steps: one to create the appropriate queue and binding on the server, and another (if applicable to the system) to notify (through a mechanism not specified by AMQP) the publisher application that there is interest in a specific type of message. In some systems the publisher app always sends the same notifications and lets the messaging infrastructure deal with dropping those that no one cares to get. But in other use cases (including some in the management space), the system that plays the role of publisher application is able to vary the notifications it sends depending on its configuration. This is not addressed by AMQP.

Another interesting aspect is to look at how something like WS-Topics could be used with AMQP. AMQP does have a built-in “Topic” exchange type just for this. But it specifies that “the routing key used for a topic exchange MUST consist of words delimited by dots. Each word may contain the letters A-Z and a-z and digits 0-9”. This wouldn’t allow a direct mapping from WS-Topics, since topics in WS-Topics are XML elements and their names are therefore of type NCName, which allows the “dot” character.
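
To make the consumer-driven nature of AMQP subscriptions (and those dot-delimited routing keys) concrete, here is a rough sketch using the RabbitMQ Java client. The broker host, exchange name and routing keys are made up, and this is just one vendor’s client library, not anything mandated by the AMQP spec itself.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class TopicSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // A topic exchange routes on dot-delimited keys like STOCK.USD.IBM.
            channel.exchangeDeclare("market-data", "topic");

            // The "subscription": the consumer creates a queue and binds it with a
            // routing-key pattern. Nothing tells the publisher that interest now exists.
            String queue = channel.queueDeclare().getQueue();
            channel.queueBind(queue, "market-data", "STOCK.USD.*");

            DeliverCallback onMessage = (consumerTag, delivery) ->
                    System.out.println(delivery.getEnvelope().getRoutingKey()
                            + ": " + new String(delivery.getBody()));
            channel.basicConsume(queue, true, onMessage, consumerTag -> { });

            // The publisher just sends to the exchange with a routing key.
            channel.basicPublish("market-data", "STOCK.USD.IBM", null,
                    "IBM at 92.50".getBytes());

            Thread.sleep(500); // give the delivery callback a moment before closing
        }
    }
}
```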

What I find more puzzling is the claim (not in the spec but in the announcement) that “The AMQProtocol can be used with most of the current messaging and Web Service Specifications such as JMS, SOAP, WS-Security, WS-Transactions, and many more, complimenting (sic) the existing work in the industry.” I can easily see how AMQP can be used with JMS and as I described above it can also be used with the WS specs that manage subscriptions. But I don’t understand how it relates to things like WS-Security and WS-Transaction. AMQP defines a binary protocol that doesn’t conform to the SOAP model, so how does it compose with SOAP-based specifications? This is especially important because AMQP is very light on security. And when it comes to transactionality and reliability it just says that “there are no confirmations in AMQP. Success is silent, and failure is noisy. When applications need explicit tracking of success and failure, they should use transactions.” And leaves it at that.

The AMQP announcement is covered in this eWeek story. BTW, unless he is quoted out of context, ZapThink’s Jason Bloomberg seems to miss the point when he compares AMQP with JMS and its “vendor-specific implementations”. The fact that JMS is not an on-the-wire protocol and therefore has vendor-specific implementations is actually the very reason why something like AMQP is needed. AMQP asserts that the protocol can be mapped to JMS, and indeed I didn’t see anything in the spec that would prevent this from being true.

I also wonder whether it’s for the irony or a Freudian slip that the topic example they use in the spec happens to be STOCK.USD.IBM.

[UPDATED 2008/10/27: Microsoft announced that they will join the group.]

[UPDATED 2009/3/17: There is some debate around RedHat patents related to AMQP. Kirk is not happy. Matt is not worried. RedHat says it’s pure.]

Filed under Everything, Standards, Tech

Just-in-scope (or just-if-i-care) validation

I started my Christmas wish list early this year. In January I described how I would like a development tool that tests XPath statements not just against instance documents but also against the schema. Since I am planning to be good this year for a change, I might get away with a second item on my wish list, so here it is. It’s the runtime companion of my January design-time request.

Once I have written an application that consumes messages by providing XPath expressions to retrieve the parts I care about, I would like to have the runtime validate that the incoming messages are compatible with the app. But not reject messages that I could process. One approach would be to schema-validate all incoming messages and reject the messages that fail. Assuming that I validated my XPath statements against the schema using the tool from my January wish, this should protect me. But this might also reject messages that I would be able to process. For example, even if the schema does not allow element extensibility at the end of the document, it shouldn’t break my application if the incoming message does contain an extra element at the end, if all I do with the message is retrieve the value of a well-defined foo/bar element. So what I would like is a runtime that is able to compare the incoming message with the schema I used and reject it only if the deviations from the schema are in locations that can possibly be reached by my application through the XPath statements it uses to access the message. Otherwise allow the message to be processed.
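
In the absence of such a runtime, a crude stopgap (much weaker than what I’m asking for, since it ignores the schema entirely) is to at least check that every XPath the application depends on resolves in the incoming message before processing it. The paths and file name below are made up.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class JustIfICareCheck {
    // The XPath expressions the application actually relies on.
    private static final String[] APP_PATHS = {
        "/order/foo/bar",
        "/order/customer/@id"
    };

    public static void main(String[] args) throws Exception {
        Document msg = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("incoming-message.xml"));

        // Accept the message as long as everything the app will dereference is
        // there, no matter what extra elements a strict schema would reject.
        XPath xpath = XPathFactory.newInstance().newXPath();
        boolean usable = true;
        for (String path : APP_PATHS) {
            NodeList hits = (NodeList) xpath.evaluate(path, msg, XPathConstants.NODESET);
            if (hits.getLength() == 0) {
                System.out.println("reject: nothing at " + path);
                usable = false;
            }
        }
        System.out.println(usable ? "message accepted for processing" : "message rejected");
    }
}
```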

Steve, could Alpine do that?

Filed under Everything, Implementation, Tech

Federated CMDB, one more step towards “Google maps for IT”

In July last year I gave a short presentation at the IEEE ICWS 2005 conference in Miami in which I used an analogy with Google Maps (since then assimilated into Google Local) to explain that we needed to do a better job at federating disparate instance model repositories for management. After the conference, I wrote up this blog entry to summarize my message. I got mostly positive feedback on this, with the one caveat that people were confused by the terminology. When I told them to replace “model instance” with “configuration”, things went a lot better. I realized I was guilty of that cardinal sin in our industry, lack of buzzword compliance. So here it is: I should have called the whole thing a Federated CMDB.

Between then and now, a bunch of major players in IT management got together to address this objective. Today we announced (along with our partners BMC, Fujitsu and IBM) a collaboration to produce a specification to federate configuration data repositories. And this time we are fully buzzword-compliant, so the work is described in terms of CMDB and support for ITIL best practices. Lesson learned. And of course you can expect plenty of SOA goodness sprinkled in the spec.

Stay tuned for more specifics on this soon. Before anyone sarcastically points it out, yes, this is the second announcement that we put out in a few weeks that is not backed by publicly available work (the other one is the WS-Management/WSDM convergence roadmap). And it might not even be over quite yet. Clearly, announcements are cheap (actually not so cheap if you see the work they take) compared to doing the real work. But there is real work going on behind this.

[UPDATE: a few days after I wrote this, Google went back to using the “Google maps” name instead of “Google local”.]

Filed under CMDB Federation, CMDBf, Everything, Tech