Category Archives: SPARQL

REST + RDF, finally a practical solution?

The W3C has recently approved the creation of the Linked Data Platform (LDP) Working Group. The charter contains its official marching orders. Its co-chair Erik Wilde shared his thoughts on the endeavor.

This is good. Back in 2009, I concluded a series of three blog posts on “REST in practice for IT and Cloud management” with:

I hereby conclude my “REST in practice for IT and Cloud management” series, with the intent to eventually start a “Linked Data in practice for IT and Cloud management” series.

I never wrote that latter part, because my work took me away from that pursuit and there wasn’t much point writing down ideas I hadn’t put to the test. But if this W3C working group is successful, it will give us just that.

That’s a big “if” though. Religious debates and paralyzing disconnects between theorists and practitioners are all too common in tech, and REST and the Semantic Web (of which RDF is the foundation) are especially vulnerable to them. Bringing these two together and trying to tame both sets of daemons at the same time is a daring proposition.

On the other hand, there is already a fair amount of relevant real-life experience (e.g. data.gov.uk – read Jeni Tennison on the choice of Linked Data). Plus, Erik is a great pick to lead this effort (I haven’t met his co-chair, IBM’s Arnaud Le Hors). And maybe REST and RDF have reached the mythical point where even the trolls are tired and practicality can prevail. One can always dream.

Here are a few different ways to think about this work:

The “REST doesn’t provide interoperability” perspective

RESTful API authors too often think they can do without a metamodel. Or that a format (like XML or JSON) can be used as a metamodel. Or they punt the problem to defining a multitude of MIME types. None of this will ever buy you interoperability. Stu explained it before. Most problems I see addressed via RESTful APIs, in the IT/Cloud management realm, are modeling problems first and only secondarily protocol/interaction problems. And their failures are failures of modeling. LDP should bring modeling discipline to REST-land.

The “RDF was too much, too soon” perspective

The RDF stack is mired in complexity. By the time people outside of academia had formed a set of modeling requirements that cried for RDF, the Semantic Web community was already deep in the weeds and had overloaded its basic metamodel with enough classification and inference technology to bury its core value as a simple graph-oriented and web-friendly metamodel. What XSD-fever was to making XML seem overly complex, OWL-fever was to RDF. Tenfold.

Everything that the LDP working group is trying to achieve can be achieved today with existing Semantic Web technologies. Technically speaking, no new work is needed. But only a handful of people understand these technologies well enough to know what to use and what to ignore, so this application of them doesn’t stand a chance of materializing on its own. Which is why the LDP group is needed. There’s a reason why its starting point document is called a “profile”: no new technology is needed, only clarity and agreement.

For the record, I like OWL. It may be the technology that most influenced the way I think about modeling. But the predominance of RDFS and OWL (along with ugly serializations) in Semantic Web discussions kept RDF safely out of sight of those in industry who could have used it. Who knows what would have happened if a graph query language (SPARQL) had been prioritized ahead of inference technology (OWL)?

The Cloud API perspective

The scope of the LDP group is much larger than Cloud APIs, but my interest in it is mostly grounded in Cloud API use cases. And I see no reason why the requirements of Cloud APIs would not be 100% relevant to this effort.

What does this mean for the Cloud API debate? Nothing in the short term, but if this group succeeds, the result will probably be the best technical foundation for large parts of the Cloud management landscape. Which doesn’t mean it will be adopted, of course. The LDP timeline calls for completion in 2014. Who knows what the actual end date will be and what the Cloud API situation will be at that point. AWS APIs might be entrenched de-facto standards, or people may be accustomed to using several APIs (via libraries that abstract them away). Or maybe the industry will be clamoring for reunification and LDP will arrive just in time to underpin it. Though the track record is not good for such “reunifications”.

The “ghost of WS-*” perspective

Look at the 16 “technical issues” in the LDP working group charter. I can map each one to the relevant WS-* specification. E.g. see this as it relates to #8. As I’ve argued many times on this blog, the problems that WSMF/WSDM/WS-Mgmt/WS-RA and friends addressed didn’t go away with the demise of these specifications. Here is yet another attempt to tackle them.

The standards politics perspective

Another “fun” part of WS-*, beyond the joy of wrangling with XSD and dealing with multiple versions of foundational specifications, was the politics, which mostly revolved around IBM and Microsoft. Well, guess what the primary competition to LDP is? OData, from Microsoft. I don’t know what the dynamics will be this time around; Microsoft and IBM alone don’t command nearly as much influence over the Cloud infrastructure landscape as they did over the XML middleware standardization effort.

And that’s just the corporate politics. The politics between standards organizations (and those who make their living in them) can be just as hairy; you can expect that DMTF will fight W3C, and any other organization which steps up, for control of the “Cloud management” stack. Not to mention the usual co-opetition between de facto and de jure organizations.

The “I told you so” perspective

When CMDBf started, I lobbied hard to base it on RDF. I explained that you could use just the graph-based metamodel and ignore the ontology and inference parts of the stack. Which is pretty much what LDP is doing today. But I failed to convince the group, so we created our own metamodel (at least we explicitly defined one) and our own graph query language, and that became CMDBf v1. Of course it was also SOAP-based.

KISS and markup

In closing, I’ll just make a plea for practicality to drive this effort. It’s OK to break REST orthodoxy. And not everything needs to be in RDF either. An overarching graph model is needed, but the detailed description of the nodes can very well remain in JSON, XML, or whatever format does the job for that node type.

All the best to LDP.


A new SPIN on enriching a model with domain knowledge (constraints and inferences)

Back when I was at HP and we got involved with what turned into SML (now a W3C candidate recommendation), we tried to make a case for the specification to be based on RDF/OWL rather than XML/XSD/Schematron. It was a strange situation from a technical perspective because RDF is a better foundation for an IT model than XML, but on the other hand XSD/Schematron is a better choice for validation than OWL. OWL is focused on inference, not validation (because of both fundamental design choices, e.g. the open world assumption, and language expressiveness limitations).

So our options were to either use the right way to represent the system (RDF) combined with the wrong way to capture constraints (OWL) or to use the wrong way to represent the system (XML) combined with the right way to constrain it (mostly Schematron, with some limited help from XSD). In the end, of course, this subtle technical debate was crushed under the steamroller of vendor politics and RDF never got a fair chance anyway.

The point of this little background story is to describe the context in which I read this announcement from Holger Knublauch of TopQuadrant: the new version of their TopBraid Composer tool introduces SPIN, a way to complement OWL with a SPARQL-based constraint checking and inference mechanism.

This relates to SML in two ways.

First, there are similarities in the approach: Schematron leverages the XPath language, used to query XML, to create validation rules. SML then marries Schematron with XSD, for a more powerful validation mechanism. Compare this to SPIN: SPIN leverages the SPARQL query language, used to query RDF, to create validation/inference rules. SPIN also marries this with OWL, for a more powerful validation/inference mechanism.

But beyond the mirroring structures of SPIN and SML, the most interesting thing is that it looks like SPIN could nicely solve the conundrum, described above, of RDF being the right foundation for modeling IT systems but OWL being the wrong constraint mechanism. SPIN may do a better job than SML at what SML is aiming to do (validation rules). And at the same time, you get “for free” (or as close to “for free” as you can get with software, which is still far from “free”) a pretty powerful inference mechanism. The most powerful I know of, short of using a general programming language to capture your inference rules (and good luck with maintaining these rules).
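
To make the parallel concrete, here is a minimal sketch of the kind of rule SPIN lets you capture: a plain SPARQL ASK query (all class and property names below are invented for the illustration) that flags a server claiming more memory than its host provides. In SPIN, such a query gets attached to the relevant class via spin:constraint and evaluated against each instance, with ?this bound to the instance being validated; constraints can also be written as CONSTRUCT queries that build explicit constraint-violation resources.

PREFIX ex: <http://example.com/itmodel#>
# Hypothetical vocabulary: ex:memoryAllocated, ex:hostedOn, ex:memoryCapacity.
# Attached to a class via spin:constraint; a "true" result means the
# constraint is violated for the instance bound to ?this.
ASK WHERE {
  ?this ex:memoryAllocated ?allocated ;
        ex:hostedOn         ?host .
  ?host ex:memoryCapacity   ?capacity .
  FILTER (?allocated > ?capacity)
}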

This may sound like sci-fi, but it’s the next logical step for IT configuration standardization. Let’s look at where we are today:

  • SML (at W3C) is an attempt to standardize the expression of constraints.
  • CMDBf (at DMTF) is standardizing how the model content is queried (and, to some limited extent at this point, federated).
  • And recently IBM authored a proposal for a reconciliation specification for items in the model and sent it to an Eclipse group (COSMOS).

But once you tackle reconciliation, you are already half-way into inferencing territory. At least if you want to reconcile between models, not just between instances expressed in the same model. Because the models may not be defined at the same level of granularity, and before you can reconcile items you need to infer finer-grained entities in your coarser-grained model (or vice-versa) so that you can reconcile apples with apples.
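
As a hypothetical illustration (vocabulary invented for the example), here is the sort of SPARQL CONSTRUCT rule, of the kind SPIN can attach to a model, that performs such a granularity-bridging inference: a coarse-grained model only records that an application is deployed on a cluster, a finer-grained model lists the hosts that make up the cluster, and the rule materializes the host-level deployment so the two models can be reconciled item by item.

PREFIX ex: <http://example.com/itmodel#>
# Hypothetical vocabulary. Derive fine-grained "runs on host" facts from a
# coarse-grained "deployed on cluster" record plus the cluster's membership.
CONSTRUCT { ?app ex:runsOnHost ?host . }
WHERE {
  ?app     ex:deployedOn ?cluster .
  ?cluster ex:consistsOf ?host .
}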

Today, inferencing for IT models is done as part of the “discovery packs” that you can buy along with your IT management model repository. But not very well, in general. Because the way you write such a discovery module for the HP Universal CMDB is very different from how you write it for the BMC CMDB, IBM’s CCMDB or as a plug-in for Oracle Enterprise Manager or Microsoft System Center. Not to mention the smaller, more specialized, players. As a result, there is little incentive for 3rd party domain experts to put work into capturing inference rules since the work cannot be widely leveraged.

I am going a bit off-topic here, but one interesting thing about standardization of inferencing for IT management, if it happens, is that it is going to be very hard to not use RDF, OWL and some flavor of SPARQL (SPIN or equivalent) there. And once you do that, the XML-based constraint mechanisms (SML or others) are going to be in for a rough ride. After resisting the RDF stack for constraints, queries and basic reconciliation (because the added value was supposedly not “worth the cost” for each of these separately), the XML dam might get a crack for inferencing. And once RDF starts to trickle through that crack, the whole dam is going to come down in a big wave. Just to be clear, this is a prophetic long-term vision, not a prediction for 2009 (unfortunately).

In the meantime, I’d like to take this SPIN feature for a… spin (sorry) when I find some time. We’ll see if I can install the new beta of TopBraid Composer despite having used up, a year ago, my evaluation license of the earlier version of the product. Despite what I had hoped at some point, this is not directly applicable to my current work, so I am not sure I want to buy a license. But who knows, SPIN may turn out to be the change that eventually puts RDF back on my “day job” list (one can dream)…

It’s also nice that Holger went to the trouble of delivering SPIN not just as a feature of his product but also as a stand-alone specification, which should make it pretty easy for anyone who has a SPARQL engine handy to support it. Hopefully the next step will be for him to clarify the IP terms for the specification and to decide whether or not he wants to eventually submit it for standardization. Maybe to the W3C SML working group? :-) I’d have a hard time resisting joining if he did.


Oslo, blog posts and my crystal ball

There is more and more information coming out about Oslo in anticipation of the Microsoft PDC in October.

David Chappell recorded a video about it last month. More recently Doug Purdy and Don Box each posted a short description of Oslo. Don describes the goal of Oslo as being to “simplify the process of developing, deploying, and managing software”. But when he lists ancestor technologies to illustrate that “Microsoft has been moving in this direction for over a decade now”, they are all about development, not management: COM type libraries, .NET metadata attributes, XAML. Interesting that neither SDM nor SML gets a mention. Neither does SCA, by the way, but I wasn’t really expecting that one… :-)

Maybe I am the only one looking for an SDM/SML echo here, just because I came to hear of Oslo through the DSI angle. Am I wrong to see Oslo as an enabler for DSI? This eWeek article doesn’t have anything to do with IT management. Reading it, Oslo is all about allowing people to write code through drag and drop. Yawn. And Don Box endorses the article.

Maybe it’s just me (an IT management guy more than a software development guy) but I don’t care so much about how the application model is created. I care a lot more about what it allows you to do in terms of IT management. Please don’t make me pull out the often-quoted figure about the percentage of IT budget spent on operations versus development/licensing. The eWeek piece fails to excite me, but fortunately David Chappell’s video interview is a lot more aligned with my thinking, so I still hold hopes for Oslo as an IT management enabler. Here is my approximate transcript of an example that David provides (at around 4:20) in the video:

“If someone comes to you and says I’ve got this business process and the SLA is not being met, what do you do? You’ve got to trace this through the right business process and the right application that supports that part of the process and find the machine it runs on and maybe look at the workflow that implements it and maybe look at the services that it provides. This involves talking to business analysts, or the IT pros or the architect or the developer, all of whom have their own view of the world, their own tools, their own perspective. The repository provides a common place to store all this stuff, to link it all together, and with a visual editor to have a common tool that lets you actually go through and answer this kind of question.”

Now you’re talking.
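
For the record, here is a minimal SPARQL sketch of the kind of traversal David describes (the vocabulary is entirely invented; I have no idea what the actual repository schema looks like): starting from the business process whose SLA is breached, walk down to the applications, machines and services that support it.

PREFIX ex: <http://example.com/repository#>
# Hypothetical schema: from a named business process, find the applications
# that support it, the machines they run on and the services they expose.
SELECT ?application ?machine ?service
WHERE {
  ?process     ex:name            "order fulfillment" ;
               ex:supportedBy     ?application .
  ?application ex:runsOn          ?machine ;
               ex:providesService ?service .
}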

And if Oslo is not the new blood of DSI, then what is? The DSI story is getting dated, SML is fading in our memories and of the three parts that supposedly compose DSI (“virtualized infrastructure, design for operations, and knowledge-driven management”), only virtualization is actually represented on the list of technologies on the DSI home page. Has DSI turned into just allowing System Center to manage a hypervisor? I still hold hopes that the Oslo data is going to spice things up there. It would be good for the industry at large, not just Microsoft.

I won’t be at the PDC but it will be interesting to see what filters out of these sessions. The first session in the list adds management of hybrid application systems (hybrid as in “cloud/on-premise combination”, or “software+services” as Microsoft calls it) to the long “can do” list for Oslo. Impressive, if there is some meat behind the abstract. I think this task is often overlooked in discussions around management aspects of Cloud computing (see “the new, interesting thing is going to be the IT infrastructure to manage your usage of utility computing services as well as their interactions with your in-house software” in this previous entry).

Yes, I am reading way too much into session abstracts, but while I am at it I can’t help noticing that there is a lot of SQL and very little XML/XSD/XPath mentioned there. Even though one of the presenters is Gudge, the only person I have ever met who fully understands XSD (actually even he doesn’t, I’ve seen him in the WS-I days have to refer to… his book).

Even though I am sure we’ll be told that SML can be built on top of Oslo, the SQL orientation won’t make that so easy (I want to see how to build XSD+Schematron validation on top of a relational store using Oslo’s drag and drop development tool). And it puts Microsoft on a different architectural direction from IBM, who, as far as I can tell, thinks that the world is a big XML document. Neither is the most appropriate for IT management models. I prefer a graph model and associated graph queries along the lines of SPARQL or CMDBf.

But that’s just late-night idle speculations on my part (aka “blogging”). Let’s see what comes out in October.

[UPDATED 2008/9/10: Interesting timing. Microsoft is joining OMG, home of UML and BPMN. Coming next: a submission of a “new version” of UML and BPMN that happens to contain the extensions and tweaks that Microsoft made to them in the process of implementing Oslo. This, BTW, is the final nail in the SML coffin (SML isn’t even mentioned in the press release).]


I have seen the future of CMDBf

I got a sneak peek at CMDBf v2 today.

I am calling it v2 based on the assumption that the one being currently standardized in DMTF will end up being called 1.0 (because it’s the first one out of DMTF) or 1.1 (to prevent confusion with the submitted version).

At the Semantic Technology Conference, David Booth from HP presented his work (along with his partner, Steve Battle from HP Labs) to provide a SPARQL front-end to HP’s Universal CMDB (the engine under what was the Mercury MAM product). Here are the slides.

The mapping from SPARQL to TQL (the native query interface for UCMDB) was made pretty easy by the fact that TQL is a graph-oriented query language. How much harder would it be to similarly transform a CMDBf (v1) query interface into a SPARQL query interface (and vice-versa)? Not much. The only added difficulty would come from the CMDBf XPath constraints. TQL has a property value mechanism that is very similar to CMDBf’s “propertyValue” constraint and maps well to SPARQL functions. The introduction of XPath as a constraint language in CMDBf makes things harder. It could be handled by adding XPath support to the SPARQL engine using function extensibility. Or by turning the entire XML into RDF and emulating XPath in SPARQL. But in either case, you’ll have an impedance mismatch at some point, because concepts such as element order that exist in XPath have no native equivalent in RDF.
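
For what it’s worth, here is roughly what the easy part of that mapping looks like: a CMDBf “propertyValue” equality constraint on an item template becomes a triple pattern plus a FILTER in SPARQL (the property names below are made up for the illustration; the FILTER form also accommodates the other CMDBf comparison operators).

PREFIX ex: <http://example.com/cmdb#>
# Rough SPARQL equivalent of a CMDBf itemTemplate carrying a propertyValue
# constraint such as <equal>Linux</equal> on a hypothetical "osType" property.
SELECT ?item
WHERE {
  ?item ex:osType ?os .
  FILTER (?os = "Linux")
}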

The use of XPath in selectors on the other hand is not a problem. HP’s prototype uses Gloze (available as a Jena package) to turn the XML returned by UCMDB into RDF. An XSLT transform could turn that same XML into a CMDBf-valid XML response instead and that XSLT could easily handle the XPath selectors from the query request. This is another reason why constraints and selectors should remain separate in CMDBf (fortunately the specification is back to doing this properly).

Here is why I call this prototype CMDBf v2: The CMDBf effort (v1 or 1.1), in its current form of re-inventing a graph query, can succeed. Let’s assume the working group strikes a reasonable balance between completeness and complexity, and vendors choose to compete on innovation and execution rather than lock-in (insert cynical comment here). CMDBf may then end up being supported by the main CMDB vendors. It wouldn’t provide federation capabilities, but having a common CMDB query interface supported by the Big Four would help with management integration. And yet, while the value would be real, it would only provide a little help to solve a larger problem:

  • As a technology limited to IT systems management, it would be unlikely to see widely available tools (e.g. user consoles and language-specific libraries).
  • It wouldn’t get the kind of robustness and interoperability that comes from wide adoption. The various implementations would be pretty similar, but there might be some minor differences between them. Once your implementation has been tweaked to work with the implementations from the Big Four, you’ll call it done. Just like SNMP, another technology that is specific to IT systems management (see it happen here).
  • Even if it works perfectly at the query level, it will just hasten the time when developers run into the real problem, model interoperability. CMDBf doesn’t help at all with this. In fact, it makes it harder by hard-coding some dependencies on an XML back-end (the XPath constraints).

In the long run, IT management has to become more automated and integrated. That’s a given. The way it happens may or may not go through CMDB-like configuration stores. But if it does, we’ll have to eventually move beyond CMDBf (v1) towards something that addresses the three requirements above. And federation. I don’t know if it will be called CMDBf v2, and/or if it will come from the DMTF (by then, the CMDBf brand might be an asset or a liability depending on developer experience with the specification). But I strongly suspect (“probability 0.8” as a Gartner analyst might put it) that it will use semantic technologies. Because the real, hard, underlying problem is a problem of semantic integration. In that sense, David and Steve’s prototype is a sneak peek at what will come after CMDBf v1/1.1.

Pretty much since the beginning of CMDBf I have been pushing for it to ideally embrace SPARQL (with no success) or to at least stay close to it conceptually in order to make the eventual mapping/evolution smooth (with a bit more success). This includes pushing for a topological query language, trying to keep XML idiosyncrasies at bay and keeping constraints and selectors cleanly separated. Rather than working within the CMDBf group, David took the alternative approach of simply doing it. Hopefully this will help convince people of the value of re-using semantic web technology for IT systems management. Yes semantic technologies have been designed for a much more general use case. But the use cases that CMDB systems address are a subset of the use cases addressed by semantic technologies. It’s hard for domain experts to see their domain as just a subset of a larger problem, but this is the case here. Isn’t HTTP serving the IT management community better than a systems management-specific alternative would?

By the way, there is no inferencing taking place in the HP prototype. We are just talking about re-using an existing, well thought-through graph query language. Sure, OWL inferencing and some rules could be seamlessly layered on top of this. But this is in no way required to do (better) what CMDBf v1 tries to do.

And then there is the “federation” question. Who do you trust more to deliver this? A bunch of IT systems management architects at DMTF, or the web and query experts at W3C, HP Labs etc. who designed and implemented SPARQL over many years? BTW, it sounds like SPARQL federation was discussed at WWW 2008, based on these meeting notes (search for “federation”).


XPath brain teasers: graph queries in XPath 1.0

Consider this piece of XML, in which the <g> elements represent groups that the people are part of (groups can have several members and people can be members of several groups). For example, “paul” is a member of groups 2, 3 and 4.

<doc>
  <person name="alan"><g>1</g><g>2</g><g>4</g></person>
  <person name="marc"><g>1</g><g>2</g><g>3</g></person>
  <person name="paul"><g>2</g><g>3</g><g>4</g></person>
  <person name="ivan"><g>2</g><g>4</g></person>
  <person name="eric"><g>4</g></person>
</doc>

This is essentially a graph structure, represented as a tree because of the constraints of XML.


Using a graph query language like SPARQL, answering questions such as “which groups contain alan, paul and ivan” would be trivial. In SPARQL that would be something like:

SELECT ?group
WHERE {
  [ ns:hasName "alan" ] ns:partOf ?group .
  [ ns:hasName "paul" ] ns:partOf ?group .
  [ ns:hasName "ivan" ] ns:partOf ?group . }

In the CMDBf query language, another graph query language, it would be more verbose but just as straightforward to express:

<query>
  <itemTemplate id="alan">
    <recordConstraint>
      <propertyValue namespace="http://example.com/people" localName="name">
        <equal>alan</equal>
      </propertyValue>
    </recordConstraint>
  </itemTemplate>
  <itemTemplate id="paul">
    <recordConstraint>
      <propertyValue namespace="http://example.com/people" localName="name">
        <equal>paul</equal>
      </propertyValue>
    </recordConstraint>
  </itemTemplate>
  <itemTemplate id="ivan">
    <recordConstraint>
      <propertyValue namespace="http://example.com/people" localName="name">
        <equal>ivan</equal>
      </propertyValue>
    </recordConstraint>
  </itemTemplate>
  <itemTemplate id="group"/>
  <relationshipTemplate id="alan-in-group">
    <recordConstraint>
      <recordType namespace="http://example.com/people" localName="partOf"/>
    </recordConstraint>
    <sourceTemplate ref="alan"/>
    <targetTemplate ref="group"/>
  </relationshipTemplate>
  <relationshipTemplate id="paul-in-group">
    <recordConstraint>
      <recordType namespace="http://example.com/people" localName="partOf"/>
    </recordConstraint>
    <sourceTemplate ref="paul"/>
    <targetTemplate ref="group"/>
  </relationshipTemplate>
  <relationshipTemplate id="ivan-in-group">
    <recordConstraint>
      <recordType namespace="http://example.com/people" localName="partOf"/>
    </recordConstraint>
    <sourceTemplate ref="ivan"/>
    <targetTemplate ref="group"/>
  </relationshipTemplate>
</query>

But using the right tool for the job is just no fun. How can we answer this question using XPath 1.0? Your first response might be “this is the wrong XML format”. And yes, we could switch things around and make people children of groups rather than the other way around, as in:

<invertedDoc>
  <group number="1"><p>alan</p><p>marc</p></group>
  <group number="2"><p>alan</p><p>marc</p><p>paul</p></group>
  <group number="3"><p>marc</p><p>paul</p></group>
  <group number="4"><p>alan</p><p>paul</p><p>ivan</p><p>eric</p></group>
</invertedDoc>

That would make the “is there a group that contains alan, paul and ivan” question very easy to answer in XPath 1.0, but then I would ask you “which persons are part of groups 1, 2 and 4” and you’d be back to the same problem. You won’t get off the hook that easily.

So, XPath brain teaser #1 is: how to answer “which groups contain alan, paul and ivan” using XPath 1.0 on the first XML document (<doc>, not <invertedDoc>)?

The answer is:

/doc/person/g[../@name="alan" and text()=/doc/person/g[../@name="paul"
  and text()=/doc/person/g[../@name="ivan"]]]

Which returns:

<g>2</g>
<g>4</g>

It doesn’t look like much, but go through it carefully and you’ll see that we have somewhat of a recursive loop (as close as XPath can get to recursion). With these loops, we go through the entire document n^m times, where n is the number of <person> elements and m is the number of names that we need to look for in each group (3 in the present case: alan, paul and ivan). In our simple example, that’s 5^3=125. Not very efficient for a query that could, with the right language, be answered in one pass through the document (I am assuming a basic XPath engine, not one that may be able to pre-analyze the query and optimize its execution).

Which takes us to XPath brain teaser #2: can you find a way to answer that same question with fewer passes through the doc?

There is an answer, but it requires the document to adopt a convention that turns all group IDs into powers of 10: 1 stays 1, 2 becomes 10, 3 becomes 100, etc.

The document that we are querying against now looks like this:

<?xml version="1.0" encoding="iso-8859-1"?>
<doc>
  <person name="alan"><g>1</g><g>10</g><g>1000</g></person>
  <person name="marc"><g>1</g><g>10</g><g>100</g></person>
  <person name="paul"><g>10</g><g>100</g><g>1000</g></person>
  <person name="ivan"><g>10</g><g>1000</g></person>
  <person name="eric"><g>1000</g></person>
</doc>

On this document, the following XPath:

sum(((/doc/person[@name="alan"]) | (/doc/person[@name="paul"])
  | (/doc/person[@name="ivan"]) )/g)

returns: 3131

Which is the answer to our question. It doesn’t look like it? Well, here is the key to decode this answer: every “3” digit that appears in this number represents a group that contains all three required members (alan, paul and ivan). In this example, we have a “3” in the “thousands” position (so group 1000 qualifies) and a “3” in the “tens” position (so group 10 qualifies).

How do we get the 3131 result? In that XPath statement, the processor simply picks out the <person> elements that correspond to alan, paul and ivan. Then it simply adds up the value of all the <g> elements contained in all these selected <person> elements. And that’s our 3131.

The transformation of group values from n to 10^(n-1) is what allows us to turn a recursive loop into a simple addition of group values. Each column in the running sum keeps track of the number of people who are in the group that corresponds to that column (the “units” column corresponds to group 1, the “tens” column corresponds to group 10, the “hundreds” column corresponds to group 100, etc). This is why we had to turn the group IDs into powers of 10.

Does this approach meet our goal of requiring fewer passes through the document than the XPath that is the solution to brain teaser #1? Yes, because we only scan the content of the <person> elements we are interested in (and we only scan each of them once). We don’t care how many groups there are. So we go from n^m passes through the entire document to m passes (one for each <person> element that we need to locate). In our example, it means 125 versus 3.

One potential gotcha is that we are assuming that a given group only appears once inside a given <person> element. Which seems logical. But what if the maintainer of the document is sloppy and we suspect that he may sometimes add a group inside a <person> element without first checking whether that <person> element already contains that group? We can protect ourselves against this by filtering out the redundant <g> elements inside a <person>. To do so, we replace:

sum(((/doc/person[@name="alan"]) | (/doc/person[@name="paul"])
  | (/doc/person[@name="ivan"]) )/g)

with:

sum(((/doc/person[@name="alan"]) | (/doc/person[@name="paul"])
  | (/doc/person[@name="ivan"]) )/g[not(text()=preceding-sibling::g)])

The [not(text()=preceding-sibling::g)] part removes <g> elements that have a preceding sibling with the same value. At little processing cost.

If you don’t like the looks of this “3131” result, you can add a simple transformation into the XPath to turn it into 1010, which can be interpreted as the sum of the numbers corresponding to all the groups that satisfy our request (again, groups 1000 and 10 in this case):

translate(sum(((/doc/person[@name="alan"]) | (/doc/person[@name="paul"])
  | (/doc/person[@name="ivan"]) )/g), "123456789", "001000000")

Returns: 1010.

If you are still not satisfied, we can actually extract the <g> elements (basically the same result as in the XPath statement that corresponds to brain teaser #1), but at the cost of a bit more work for the XPath processor: instead of calculating the 3131 result once, you do it once for each group that alan is a member of (why alan? it doesn’t matter, pick paul or ivan if you want). The corresponding XPath is:

/doc/person[@name="alan"]/g[floor(sum(((/doc/person[@name="alan"])
  | (/doc/person[@name="paul"])
  | (/doc/person[@name="ivan"]) )/g) div text()) mod 10 = 3]

Which returns:

<g>10</g>
<g>1000</g>

And here too, if you are concerned that the same group may appear more than once inside the <person name=”alan”> element and you don’t want that to appear in the result, you can remove the <g> elements that have a preceding sibling with the same value (you have to remove them twice, once in the sum calculation and once in the selection of the <g> elements for display, which is why [not(text()=preceding-sibling::g)] appears twice below):

/doc/person[@name="alan"]/g[floor(sum(((/doc/person[@name="alan"])
  | (/doc/person[@name="paul"])
  | (/doc/person[@name="ivan"]))/g[not(text()=preceding-sibling::g)])
  div text()) mod 10 = 3][not(text()=preceding-sibling::g)]

BTW, a practical advantage of presenting the result as a set of element nodes rather than as a number is that many interactive XPath engines (including many on-line ones, as well as JDeveloper 10.1.3.2) aren’t happy with results that are not nodesets of element nodes. Of course XPath APIs don’t have that problem.

We have already acknowledged one limitation of our approach, the need to transform the XML doc (by turning “2” into “10”, “3” into “100”, etc). Now comes XPath brain teaser #3: what are the other limitations of this approach?

The first one is obvious (and doesn’t have much to do with XPath per se): what happens when there is a carry-over in the computation of the sum() function? Bad stuff is the answer. Basically, we can’t have this. Which means that since our calculations take place in base 10 (the only one XPath supports) we are limited to a maximum of 9 persons in a group. We can look for groups that contain alan, paul and ivan, but not for those that contain all 15 members of a rugby team.

The second limitation requires a bit more XPath wonkery. Or rather IEEE 754 wonkery, since numbers in XPath are defined as using the IEEE 754 double-precision (64-bit) format. Which has a 52-bit mantissa. The format normalizes the mantissa such that it only has one significant bit before the decimal point. And since that bit can only be “1” it is ignored in the representation, which means we actually get 53 bits worth of precision. I would have thought that this would give us 16 significant digits in decimal form, but when I test this by converting 9999999999999999 into 64-bit representation I get 0100001101000001110000110111100100110111111000001000000000000000 or 4341C37937E08000 in hex, which gets turned back into the decimal value 10000000000000000. Looks like we can only count on 15 digits worth of precision for a decimal integer in XPath.

What does it mean for our application? It means that we can only track 15 groups in our sum(). So if the document has more than 15 different groups we are out of luck. In the spirit of a “glass half full”, let’s count (no pun intended) ourselves lucky that XPath chose double precision (64-bit) and not single precision (32-bit)…

It would be nice if we could free ourselves of the constraint of having group IDs be powers of 10. Maybe we can turn them into powers of 10 as we go, by calculating 10^(n-1) whenever we hit such an ID? The first problem with this is that XPath does not have an exponentiation (^) operator. But this one is surmountable, because we don’t need a generic exponentiation operator, we just need to be able to calculate 10^n for n ranging from 0 to 14 (remember, we are limited to 15 digits of precision). We can simply seed our XPath with an enumerated result list. Sure it’s ugly, but by now it should be clear that we are far removed from any practical application anyway (practically-minded people would have long moved to another query language, or at least to version 2.0 of XPath). If you’re still reading you must admit to yourself that your inner geek is intrigued by this attempt to push XPath where it was never meant to go. Our poor man’s exponentiation function looks like this:

substring-before(substring-after("A0:1 A1:10 A2:100 A3:1000 A4:10000
  A5:100000 A6:1000000 A7:10000000 A8:100000000 A9:1000000000
  A10:10000000000 A11:100000000000 A12:1000000000000
  A13:10000000000000 A14:100000000000000", concat("A", 12, ":")), " ")

When you execute this XPath (on whatever document), it returns: “1000000000000”. Replace the 12 with any other integer between 0 and 14 and the XPath will return 10 to the power of your integer. So in effect, we have emulated the exponentiation function for all needed values.

Unfortunately, this doesn’t take us very far. It would be tempting to plug this ad-hoc exponentiation function into our previous XPath (at the place where we retrieve the value of the <g> elements), as in:

sum(substring-before(substring-after("A1:1 A2:10 A3:100 A4:1000
  A5:10000 A6:100000 A7:1000000 A8:10000000 A9:100000000
  A10:1000000000 A11:10000000000 A12:100000000000
  A13:1000000000000 A14:10000000000000 A15:100000000000000",
  concat("A", ((/doc/person[@name="alan"])
  | (/doc/person[@name="paul"])
  | (/doc/person[@name="ivan"]) )/g, ":")), " "))

And to hope that our 3131 result pops out again. But this is not to be.

There are two problems. First, this is not valid XPath, because the sum() function can only apply to a nodeset, not to strings (or numbers for that matter). Second, even if sum() were more forgiving, what we are sending to it is not several strings. It’s one string. That’s because the insertion of the ((/doc/person[@name="alan"]) | (/doc/person[@name="paul"]) | (/doc/person[@name="ivan"]) )/g nodeset as an operand to a function that expects a string (in this case, our ad-hoc exponentiation function) doesn’t generate a set of text nodes containing the result of running the function on all nodes in the nodeset. Rather, it generates the result of evaluating the function on the one string that corresponds to the string-value of the nodeset (which is the string value of its first node). Feel free to re-read this slowly.

You can’t modify nodesets in XPath, just numbers and strings. Once you’ve turned your nodeset into another object, you’re out of the loop. Literally.

Sorry to end with a downer. At least I hope this entertained you, helped you better understand XPath or illuminated the difference between a graph query language and a tree query language.

[UPDATED 2008/3/27: For more XPath fun, Dare Obasanjo provides a guided walk through some tricky aspects of the XPath syntax. Unlike me, his focus is on understanding the syntax, not abusing it… ;-)]


Of graphs and trees: Kingsley Idehen to the rescue

I just read the transcript of Jon Udell’s podcast interview of Kingsley Idehen. It’s almost two years old but it contains something that I have tried (and mostly failed) to explain for a while now, so maybe borrowing someone else’s words (and credibility) would help.

Kingsley says:

“A graph model, ideally, will allow you to explore almost all the comprehensible dimensions of the nodes in that network. So you can traverse that network in a myriad of different ways and it will give you much more flexibility than if you’re confined to a tree, in effect, the difference between XQuery and SPARQL. I always see the difference between these two things as this. If you visualize nodes on a network, SPARQL is going to get you to the right node. Your journey to what you want is facilitated by SPARQL, and then XQuery can then take you deeper into this one node, which has specific data that the graph traversal is taking you to.”

Nicely said, especially considering that this is not a prepared statement but a transcript of a (presumably) unscripted interview.

He later provides an example:

“Let’s take a microformat as an example. HCard, or an hCalendar, is a well-formed format. In a sense, it’s XML. You can locate the hCard in question, so if you had a collection of individuals who had full files on the network in the repository, it could be a graph of a social network or a group of people. Now, through that graph you could ultimately locate common interests. And eventually you may want to set up calendars but if the format of the calendar itself is well formed, with XQuery you can search a location, with XPath it’s even more specific. Here you simply want to get to a node in the content and to get a value. Because the content is well formed you can traverse within the content, but XQuery doesn’t help you find that content as effectively because in effect XQuery is really all about a hierarchical model.”

Here is one way to translate this to the IT management domain. Replace hCard with an XML-formated configuration record. Replace the graph of social relationships with a graph of IT-relevant relationships (dependency, ownership, connections, containment…). Rather than attempt to XQuery across an entire CMDB (or, even worse, an entire CMDB federation), use a graph query (ideally SPARQL) to find the items of interest and then use XPath/XQuery to drill into the content of the resulting records. The graph query language in CMDBf is an attempt to do that, but it has to constantly battle attempts to impose a tree-based view of the world.
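
In practice, that two-step approach could look something like this (a sketch with an invented vocabulary): a SPARQL query follows the relationships to locate the configuration records of interest, and the XML payload of each returned record is then handed to XPath/XQuery for detailed inspection.

PREFIX ex: <http://example.com/itmodel#>
# Step 1 (graph): find the configuration records of everything the "billing"
# application depends on (vocabulary is hypothetical).
# Step 2 (tree): each ?record returned is an XML document that an XPath such
# as /record/os/patchLevel can then drill into.
SELECT ?component ?record
WHERE {
  ?app       ex:name      "billing" ;
             ex:dependsOn ?component .
  ?component ex:hasRecord ?record .
}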

This also helps illustrate why SPARQL is superior to the CMDBf query language. It’s not just that it’s a better graph query language, one that has received much more review and validation by people more experienced in graph theory and queries, and one that is already widely implemented. It also does something that CMDBf doesn’t attempt to do: it lets you navigate the graph based on the semantics appropriate for the task at hand (dependency relationships, governance rules, distributed performance management…), something that CMDBf cannot do. There is more to classification than simply class inheritance. I think this is what Kingsley refers to when he says “in a myriad of different ways” in the quote above.

Here is a way to summarize the larger point (that tree and graph views are complementary):

Me Tarzan, you Jena

Where Tarzan (appropriately) represents the ability to navigate trees and Jane/Jena represents the ability to navigate graphs (Jena, from HP Labs, is the leading open source RDF/OWL/SPARQL framework). As in the movie, they complement each other (to the point of saving one another’s life and falling in love, but I don’t ask quite that much of SPARQL and XQuery).

On a related topic, I recently saw some interesting news from TopQuadrant. Based on explicit requests from the majority of their customers, they have added capabilities to their TopBraid Composer product to better make use of the RDF/OWL support in the Oracle database. TopQuadrant is at the forefront of many semantic web applications and the fact that they see Oracle being heavily used by their customers is an interesting external validation.

[UPDATED 2008/03/05: more related news! The W3C RDB2RDF incubator group has started its life at W3C, chaired by my colleague Ashok Malhotra, to work on mappings between RDF/OWL and relational data.]


SPARQL is a W3C Recommendation

SPARQL is now a W3C Recommendation (which is what the W3C calls its approved standard specifications). Congratulations to those who made it happen, including my esteemed ex-colleagues at HP Labs Bristol. Just in time for the DMTF CMDBf working group to consider it as a candidate for its query language… :-)

And just below that SPARQL announcement we see a notice that the SML working group has released a third set of working drafts (SML, SML-IF). Just in time for the DMTF to be reminded of the goodness of open access to developing standards… :-)
