Category Archives: DMTF

CMDBf is a lot more and a lot less than you think

The DMTF CMDBf working group has recently published an updated draft of its specification. The final version should follow soon and I don’t expect major changes so now is not a bad time to start thinking about what this baby can do.

Since CMDBf stands for “configuration management database federation”, you might think the obvious answer to the “what can it do” question is “build a federation of configuration management databases”. Except it’s not. Despite its name, CMDBf provides little support for federation unless you take a very loose definition of the term. The specification gives you a query language and a very simple registration interface, with a sprinkle of metadata to improve interoperability. The query language lets you talk to a CMDB to retrieve information on configuration items (CIs) that it knows about. The registration interface lets you keep a CMDB informed of changes to CIs that it may care about. If you want to build a real federation on top of this, one that scales to the type of environment that CMDBs are used for today, you have to go further than what the specification provides. What CMDBf does give you is some amount of integration between CMDBs (at the protocol level at least, not at the model level). It may not sound like much, but it is real progress over the current situation and the right incremental step, whether you are aiming for true federation as the end goal or not.

That’s the “a lot less than you think” part. So, what’s the “a lot more than you think” part? Good stuff all around:

CMDBf provides a metamodel that is well-suited for complex IT systems and it provides an elegant graph-oriented query language on top of it. The most convenient representation for an IT system is neither “one big XML document” nor “a sea of nodes and edges”. CMDBf gives you a middle ground: a graph model with XML leaf nodes. So you can precisely model the relationships between your IT elements using explicit relationships (with their own records), but you can also attach a well-understood piece of XML to an item as a record without having to break that XML into a bunch of tiny relationships.
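
To make that middle ground concrete, here is a minimal sketch of what the metamodel boils down to (in Python, with made-up names; the CMDBf specification defines an XML serialization and a protocol, not an API): items and explicit relationships, each able to carry XML records.

```python
from dataclasses import dataclass, field
from typing import List
import xml.etree.ElementTree as ET

@dataclass
class Item:
    """A node in the graph: a configuration item (CI)."""
    id: str
    records: List[ET.Element] = field(default_factory=list)  # XML leaf data

@dataclass
class Relationship:
    """An explicit edge: a first-class citizen with its own records."""
    id: str
    type: str          # hypothetical record type name, e.g. "dependsOn"
    source: Item
    target: Item
    records: List[ET.Element] = field(default_factory=list)
```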

I am pretty sure there are other domains, beyond IT systems, for which this would be useful. It will be interesting to see if the CMDBf specification gets considered outside of its intended scope. But these domains are more likely to end up using RDF/OWL/SPARQL instead. Not everyone has made the leap from XML as a tool to XML as a religion, which made CMDBf necessary for us. But let’s not veer into another rant.

Let’s go back instead to describing how useful CMDBf can be to IT systems management, independently of any “federation” objective. Let me put it this way: if one were to create a configuration store for IT systems from scratch, one should strongly consider the CMDBf conceptual model as the base metamodel. And something along the lines of the CMDBf Query (though not necessarily through its XML serialization) as the native query language for it. Most CMDBf implementers, of course, are not in this situation. Rather than writing the store from scratch, they will create a CMDBf wrapper/interface on top of their current CMDB. And that’s fine too. CMDBf will work well as an interoperability protocol. Putting aside my gripes about XPath overuse, CMDBf strikes a reasonable balance that makes it implementable on top of any back-end technology (relational, XML, RDF, in-memory objects, bags of name-value pairs…). And the query patterns it supports map well to CMDB-to-CMDB integration use cases. But it is underselling it, in my view, to restrict it to this over-the-wire interoperability scenario. CMDBf also provides a very useful foundation for local access to the CMDB. CMDBf graph queries can support powerful visualization of the content of the CMDB. They can support the definition of configuration rules. They can support in-depth inspection of relationships (e.g. fault tree).
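
As a toy illustration of that last point, here is how a fault-tree style question (“if this item fails, what else is impacted?”) could be answered by walking the relationship graph. This hand-rolled sketch reuses the hypothetical Item and Relationship types from the sketch above, and stands in for what a CMDBf graph query would express declaratively:

```python
from typing import List, Set

def impacted_by(failed: Item, relationships: List[Relationship]) -> Set[str]:
    """Transitively follow 'dependsOn' edges backwards from the failed item:
    whatever depends (directly or not) on a failed item is impacted too."""
    impacted = {failed.id}
    changed = True
    while changed:
        changed = False
        for rel in relationships:
            if (rel.type == "dependsOn"
                    and rel.target.id in impacted
                    and rel.source.id not in impacted):
                impacted.add(rel.source.id)
                changed = True
    return impacted
```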

And that may just be the beginning. It could take three directions after v1:

The first one, as always for a standard, is that it is ignored and becomes irrelevant. I have to reluctantly list this one first, because it is statistically the most likely for a new standard. Especially one that is not a ratification of an existing de facto standard. And one that threatens an important control point for vendors. A slight variation on this scenario is for CMDBf to succeed from a marketing perspective, as a checkmark that most vendors tick, but not as a true technology. This is the “smokescreen” scenario from Mr. Skeptic. One scenario that worries me is that CMDBf could fail because of the poor models of the CMDBs that implement it. If your IT model is not granular enough or if it matches the UI of your application more than the semantics of the IT components, then CMDBf will expose these shortcomings and probably be blamed for them (with bad models, “shoot the messenger” becomes “shoot the protocol”).

The second possible direction is that CMDBf provides enough value in integrating CMDBs that people want more and challenge the group to deliver on the “f” part, federation. That could take the form of a combination of:

  • better integration with other protocols (mostly from the WS-Management family, like WS-Enumeration and WS-Eventing),
  • reconciliation support (here are ways to address it),
  • some model transformations or canonical models,
  • some optimizations in the query mechanism for distributed queries (e.g. data partition rules).

The third possible direction (not exclusive) is for CMDBf to become the basis for a standard rule language for IT models. Yeah, another one (remember SML?). SPIN and SML show us how a generic query language can be used to support configuration rules. I very much like SPIN but it requires adopting RDF as a metamodel, which is a hard sell in XML-land. SML suffers technically from being too reliant on an inappropriate validation tool (XSD) and from treating relationships as an afterthought rather than an integral part of the model. Which is fine in many areas (EMF does it too), but not, in my view, when modeling IT systems.

If we are not going to use RDF/SPIN then let’s copy them. We can use the CMDBf metamodel (graph-based) where SPIN uses RDF. We can use the CMDBf query language (graph-oriented) where SPIN uses SPARQL. Since CMDBf queries use XPath, we see some commonalities with SML (which uses XPath through Schematron). But in CMDBf XPath is scoped to the leaf nodes of the graph, not the entire model as it is in SML. In other words, SML adds relationship traversal to XPath, while CMDBf adds XPath to its relationship-aware queries. It’s a matter of who’s on top. It sounds academic but it isn’t.
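
The difference shows up clearly in code. In a CMDBf-style query, the engine owns the graph traversal, and an XPath expression is only ever evaluated inside a single item’s XML record; it can never reach across item boundaries. Here is a sketch, reusing the hypothetical Item type from the sketch above (Python’s ElementTree supports a small XPath subset, which is enough to make the point; the example path is made up):

```python
from typing import List

def matching_items(items: List[Item], record_xpath: str) -> List[Item]:
    """Graph scoping on the outside, XPath on the inside (the CMDBf way).
    The XPath is applied to each record separately and never sees the graph."""
    return [item for item in items
            if any(record.find(record_xpath) is not None
                   for record in item.records)]

# Hypothetical usage: items whose record declares a Linux OS
# linux_items = matching_items(all_items, "./os[@name='Linux']")
```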

Does the industry really want standardized, re-usable configuration rules? SML/CML seem to say no. The push towards Cloud interop, on the other hand, begs for it. At least if you believe in programming your environment in a way that is partially declarative rather than entirely procedural.

[UPDATED 2009/3/5: Rob England (a.k.a. Mr. Skeptic as I refer to him above) provides a geek-to-English translation for this post. Neat!]

Analyzing the DMTF incubator process

Depending on how you choose to look at it, either the DMTF has streamlined the process of defining standards or it has created a rubberstamping machine. I am referring to the “DMTF Standards Incubation Process”. It is recent, but not brand new (the DSP that defines it is dated April 6, 2007). I had heard about it but never really looked into it. Until this weekend, when I finally got motivated to investigate a bit. AFAIK this process has not yet produced any specification.

As I understand it, the goal of this incubation process is to allow a group of like-minded companies to get together in the DMTF and produce an “informational specification”, which is typically a refinement of a vendor submission. The informational specification would then go through a normal DMTF working group but often in an expedited fashion, only allowing limited changes. That’s not a guaranteed outcome, but it seems to be the “normal” case as envisioned in this process.

This overview should make it clear why this can be seen as a rubberstamping machine. Here are a few key points (the quotes come from the process description):

  • “Standards Incubators are often formed in conjunction with an initial baseline contribution by the founding members with the expectation that the group will serve to evolve and finalize that contribution” and later it says that leadership members should have a “commitment to maintaining alignment with the input submission”.
  • Only leadership-level DMTF members get to be on the review board (the part of the incubator that makes decisions). They have to fairly consider comments from the other companies, whatever “fairly” means.
  • Being a DMTF leadership-level company is necessary, but not sufficient, to get on the review board of an incubator. If you are not there when the incubator is created then you have to be approved by the current leadership members to join. It is unclear to me whether any DMTF leadership-level company can join the “review board” of an incubator at the start or whether those who propose the incubator get to choose who they let in.

There is nothing sneaky here, the incubator process is pretty upfront about being designed to allow a vendor-provided specification to be considered/reviewed/improved in a friendly environment, in which opponents are kept away. The question is what happens next. There are four possible dispositions once an incubator finishes its work and has produced an informational specification.

The first two, “bootstrap / expedited delivery” and “finalization”, are pretty close in practice. A working group is created that is restricted from making any significant change to the “informational specification”. The “bootstrap” approach only allows small corrections; “finalization” also allows some additions (but no change of what is already there). In other words, the working group is mandated to pretty much rubberstamp the informational specification: with these two dispositions, there is no opportunity (in the incubator or in the ratification group) for people to suggest technical approaches that are significantly different from those in the initial submission.

The place where technical alternatives can be considered is if a competing incubator is created. At this point, the DMTF board may decide that a working group should be created to reconcile the two. Even then, the board may pick a winner (in which case the reconciliation amounts to adding to the selected specification some features that are only present in the other specification, in effect protecting existing implementations of the winner). And if this is the path taken, the process makes it clear that this should be driven not by technical merits but by “adoption and momentum”. Which implies that the companies that ship the most products get to pick, since they can single-handedly create “adoption and momentum”.

And finally, the last possible disposition is “termination”, in which no further work on the informational specification takes place in the DMTF. But the barrier is pretty high for this direction to be taken: the specification has to have “little adoption or industry interest”. It seems reasonable to interpret this to mean that it would only apply if the initial proponents (who created the incubator) have lost interest themselves; otherwise they alone would provide sufficient “industry interest” for the termination to not take place and force another outcome (which can only be one of the first two, the rubberstamping options, if there is no competing incubation group). And even if the work is indeed terminated, the specification remains available indefinitely as a “DMTF informational specification”, which people can (and will, if it serves their purpose) simplify to “a DMTF specification”. The difference between this and a DMTF standard will be lost on 99% of IT writers and IT buyers. I submit as evidence all the confusion about the status of a “W3C note” (confusion that prompted W3C to eventually rename it to “submission”). It will be even less clear with DMTF “informational specifications” because, unlike W3C notes, they did go through some modifications in the DMTF and they are indeed a product of the DMTF. I would not be surprised if some “DMTF informational specifications” stayed just that and lived a happy life as pseudo-standards. One thing that remains unclear is whether such a terminated informational specification can be taken over by another standards organization.

Bottom line, if you don’t like a submission to an incubator group you’d better put together an alternative (quickly) to stay in the game. And if your position is not “this is a bad technical approach” but rather “this is something we should do in a more open and deliberative way, or maybe later”, then you only get one chance to make your case: you’d better make a convincing argument to the board to not allow the incubator to be created, because after that there is little chance to stop the train. And how many standards organization boards do you know that say “no” to submissions from large vendors?

Having said all this, is this really a bad thing? Let’s look at it from the “glass half full” side.

First let’s realize that this happens anyway. Companies get together (often around an initial document created by a single leader) to create a specification and then look for a standards organization to ratify it with as few changes as possible. Other than CIM, it seems that all recent DMTF efforts started out this way (WS-Management, CMDBf, OVF). This is how Microsoft (sometimes with IBM) built the whole WS-* stack. They even had a name for it (the “workshop process”) to try to make it sound more open than it was. I’ve been on the inside (SML, CMDBf, WSRF/WSRT) and outside (WS-Management and other WS-* specifications) of it and it’s a pain whether you’re inside or out. It’s very opaque. Efforts may die and nobody ever knows (for example, does anyone know what’s up with CML?). Even when those inside want to get feedback and share their work they have to deal with a tangle of legal agreements that make it unnecessarily hard for everyone. In addition, all the work and discussions that go into the submitted specification usually get lost as the work transitions to a standards body (e.g. no email archive and unclear IP/confidentiality rules for re-using them). And the fact that these efforts are private does not prevent companies from demanding guarantees that their submissions won’t be changed too much. For example, the WS-Management working group had an explicit goal in its charter to maintain compatibility with the submission, and the same debate was played over and over again in drafting the charters of several OASIS and W3C groups. Companies play one standards organization against another if necessary to get this guarantee.

Anything that can take us away from this mess merits consideration even if it is not a perfect alternative (there isn’t any). The DMTF incubator process doesn’t seem to relax the control of the sponsor companies, but it provides some level of transparency (at least for DMTF members) and, presumably, some continuity between the incubation and ratification phases.

Standards organizations constantly get blamed for either being too slow/procedural (e.g. HTML at W3C) or being rubberstamping machines (e.g. OOXML at ISO). Or both at the same time (most WS-* work). Most steps an organization can take to address one criticism make the other worse. This “incubator” process is an example.

Everyone complains about “design by committee” and how inconsistent and bloated specifications become when everyone is listened to and made to feel included. The specifications end up with too many options (a killer of interoperability) and no guiding vision. A more constrained set of authors usually produce a simpler and more consistent specification. Has anyone ever seen a standard that is shorter than the submission that started it?

Not to mention the fact that working group chairs are often in an uncomfortable position, forced to choose between, on one hand, accusations of being dictatorial and, on the other hand, seeing their working group drift from one rathole to the next, with no end in sight. The standards world has its fair share of obstructionists and pontificators. Some do it on purpose (they have been mandated by their employer to prevent progress in a group), most do it just because of their personality (and the fact that their employer has no real interest in what the person does in this group, as long as his/her presence allows the employer to claim to be part of the game). Forcing people who want an alternative approach to actually put together a proposal is a way to keep pontificators at bay. Unfortunately, it also shuts off qualified people who know a domain well and want to share their knowledge but don’t have the time (or employer sponsorship) to put together an entire alternative specification around their proposal.

In the end, it comes down to what a standard should be. If you think a standard should capture the knowledge of most experts in the industry and give an equal voice to all organizations, then this is a step in the wrong direction. If, on the other hand, your position is that the big guys will effectively set standards anyway so it might as well be done in a way that is fast, relatively transparent and consistent with their implementations, then you’ll applaud this initiative.

In creating this process, the DMTF made a clear (though grammatically challenged) statement on this topic: “adoption and momentum may outweigh technical issue regarding success”.

Sorry, CMDBf doesn’t make coffee either

The IT Skeptic is writing to us from his mountain retreat (via a time-delayed post on his blog), and the topic he felt safe to cover in such fashion (what journalists call an “evergreen”) is the fact that CMDBf is an orchestrated sham, brilliantly executed by IT management vendors.

I’d love to be part of something that’s brilliantly executed for once, even if it is a sham, but I am afraid this is not it. But first I should state the obvious, clarifying that even though I am a member of the CMDBf group at DMTF (and also an author of the original version, under my previous employer) I do not speak for the group or DMTF (or my employer for that matter). Just as myself, as always on this blog.

The problem that Rob England, Mr. Skeptic, has with the CMDBf specification is that it doesn’t do a bunch of things that he’d like it to do, such as specifying how data sources acquire data for their domain, how they store the data, how the underlying resources are reconfigured, what processes are followed etc. See the full list from his post. The list is a copy/paste from the CMDBf specification, with some comments added, so at the very least he has to admit that as far as “smokescreens” go this one is pretty upfront about its limitations…

He concludes that “this is once again a geeky technical solution to a cultural, organizational and procedural problem.” I have to ask: who expects DMTF specifications to solve “cultural, organizational and procedural” problems? Does CIM solve such problems? Does WBEM?

Human-to-human communication is a “cultural, organizational and procedural” problem and SMTP/POP/IMAP/etc (the interoperable protocols used by email systems) are just as geeky as CMDBf. They don’t solve the larger problem, only contribute to the solution. If CMDBf can contribute as much to datacenter management as SMTP/POP/IMAP contribute to human communication (minus the SPAM if possible), I’d call that a success.

And then there is this warning:

“WARNING: vendors will waive this white paper around to overcome buyer resistance to a mixed-vendor solution. For example if you already have availability monitoring from one of them, one of the other vendors will try to sell you their service desk and use this paper as a promise that the two will play nicely.”

Has anyone actually seen this happen? I am asking because so far, both at HP and Oracle, the only sales reps I have ever met who know of CMDBf heard about it from their customers. When asked about it, the sales person (or solutions engineer) sends an email to some internal mailing list asking “customer asking about something called cmdbf, do we do that?” and that’s how I get in touch with them. Not the other way around.

Also, if the objective really were to trick customers into “mixed-vendor solutions”, then I don’t really understand why vendors would go through the effort of collaborating on such a scheme, since it’s a zero-sum game between them in the end.

As for the glacial pace of progress (“Glacial advance. That’s the way the vendors want it”, from an earlier post by the Skeptic), CMDBf is no race horse but I don’t see it going any slower than other standards. Slowness (I mean, deliberation) is part of the landscape. I would submit a slight twist on Hanlon’s razor: “Never attribute to malice that which can be adequately explained by legal, procedural and organizational inertia.”

Having said all this, some of Rob’s criticism is perfectly justified, such as his sarcasm about this sentence from the specification:

“The Federated CMDB operates in a closed environment, in which some security issues are less critical than in open access or public systems.”

OK, that’s stupid indeed. Especially in a public cloud environment where you don’t know who is renting the VM next door. I’ll ask the group to remove this. Actually, that whole appendix is useless and I pointed this out in my earlier review of CMDBf 1.0 (look for the “security boilerplate” section at the bottom of the review).

Rob could also have pointed out that this specification only addresses “federation” if you accept a very scaled-down definition of the term. What it does do is help with CMDB query and synchronization. Not the holy grail, but nothing to sneer at either.

Rob, next time you want to throw tomatoes at CMDBf while you’re on holiday, just give me the password to the site and I’ll do it for you… :-)

[UPDATED 2009/1/21: Rob responds via a comment on his original blog entry.]

WS Resource Access working group starting at W3C

Things went quiet for a while, but the W3C Web Services Resource Access Working Group has finally taken life, as was announced last week. It’s a well-known PR trick to announce bad news on a Friday so that it goes undetected; is it a coincidence that W3C picked a Friday for this announcement?

As you can tell by this last remark, I have no trouble containing my enthusiasm about this new group. Which should not come as a surprise to regular readers of this blog (see this, this, this and this, chronologically).

The most obvious potential pushback against this effort is the questionable architectural need to redo over SOAP what can be done over simple HTTP. Along the lines of Erik Wilde’s “HTTP over SOAP over HTTP” post. But I don’t expect too much noise about this aspect, because even on the blogosphere people eventually get tired of repeating the same arguments. If people really wanted to put up a fight against this, it would have been done when the group was first announced, not now. That resource modeling party is over.

While I understand the “WS-Transfer is just HTTP over SOAP over HTTP” argument, this is not my problem with this group. For one thing, this group is not really about WS-Transfer, it’s about WS-ResourceTransfer (WS-RT), which adds fine-grained resource access on top of WS-Transfer. Which is not something that HTTP gives you out of the box. You may argue that this is not needed (just model your addressable resources in a fine-grained way and use “hypermedia” to navigate between them) but I don’t really buy this. At least not in the context of IT management models, which is where the whole thing started. You may be able to architect an IT management system in such a RESTful way, but even if you can it’s too far away from current IT modeling practices to be practical in many scenarios (unfortunately, as it would be a great complement to an RDF-based IT model). On the other hand, I am not convinced that this fine-grained access needs to go beyond “read” (i.e. no need for “fine-grained write”).

The next concern along that “HTTP over SOAP over HTTP” line of thought might then be why build this on top of SOAP rather than on top of HTTP. I don’t really buy this one either. SOAP, through the SOAP processing model (mainly the use of headers, something that WS-RT unfortunately butchers) is better suited than HTTP for such extensions. And enough of them have already been defined that you may want to piggyback on. The main problem with SOAP is the WS-Addressing tumor that grew on it (first I thought it was just a wart, but then it metastasized). WS-RT is affected by it, but it’s not intrinsic to WS-RT.

Finally, it would be a little hard for me to reject SOAP-based resource access altogether, having been associated with many such systems: WSMF, WSDM/WSRF, WS-Management and even WS-RT in its pre-submission days (and my pre-Oracle days). Not that I have signed away my rights to change my mind.

So my problem with WS-RAWG is not a fundamental architectural problem. It’s not even a problem with the defects in the current version of WS-RT. They are fixable and the alternative specifications aren’t beauty queens either.

Rather, my concerns are focused on the impact on the interoperability landscape.

When WS-RT started (when I was involved in it), it was as part of a convergence effort between HP, IBM, Intel and Microsoft. With the plan to use this to unify the competing WS-Management and WSDM/WSRF stacks. Sure it was also an opportunity to improve things a bit, but 90% of the value came from the convergence/unification aspect, not technical improvements.

With three of the four companies having given up on this, it isn’t much of a convergence anymore. Rather than paring down the number of conflicting options that developers have to choose from (a choice that usually results in “I won’t pick either since there is no consensus, I’ll just do it my own way”), this effort is going to increase it. One more candidate. WS-Management is not going to go away, and it’s pretty likely that in W3C WS-RT will move further away from it.

Not to mention the fact that CMDBf (and its SOAP-based graph-oriented query protocol) has since emerged and is progressing towards standardization. At this point, my (notoriously buggy) crystal ball shows a mix of WS-Management and CMDBf taking the prize overall. With WS-Management used to access individual resources and CMDBf used to access any kind of overall system view. Which, as a side note, means that DMTF has really taken this game over (at least in the IT management domain) from W3C and OASIS. Not that W3C really wanted to be part of the game in the first place…

CMDBf work in progress

The DMTF CMDBf working group (of which I am part) has released a work in progress version of the CMDBf specification. The changes from the submitted version are minor. It’s mostly a move to the DMTF template. More important (but not drastic) changes should appear in the next release.

Reviewing DMTF OVF as a “preliminary standard”

OVF 1.0.0d is out as a “preliminary standard” so I gave it a quick read over the weekend. Things have not changed much since the “work in progress” document published this summer, which itself wasn’t a big change from the original specification. As I wrote in the review of the “work in progress”, the DMTF tightened the language of the specification more than it added features.

Since there aren’t too many technical changes (see the end of this post if you’re interested in a few), the interesting discussion is about the marketing of this specification. And boy does it have wings on that front. The level of visibility the specification has received is pretty amazing, especially considering that it doesn’t really do that much technically. But you wouldn’t know it by reading all the announcements about OVF:

  • VMWare supports OVF packaging (which version?) with its new VMWare Studio.
  • Citrix uses OVF in Kensho to create platform-agnostic VM management.
  • An Open Source “implementation” of OVF has been created. I put “implementation” between quotes because, since OVF per se doesn’t do much, its implementation is mostly a specialized command line editor for its XML descriptor. It requires a vendor-specific runtime for deployment/activation. This is not a criticism of the open source project BTW, just a statement of fact about the spec.
  • Enomaly lists “OVF format support” on its roadmap for Q1 2009.
  • Microsoft support for OVF in products is supposedly “on the board” which doesn’t mean very much but their overall marketing/PR response to OVF has been surprisingly positive for a standard that they don’t control.

I have criticized the DMTF marketing efforts in the past (“give away pens and key chains”) but I must admit that, to the extent that DMTF had a significant role in promoting OVF adoption (in addition to marketing efforts directly from the vendors), it is a very nice marketing success. Well done, and so much for my cynicism. OVF may also have benefited from all the interest in the general topic of virtualization/cloud standards (the “cloud” association is silly, of course, but as we’ve just seen I am not a marketing genius) and the fact that there isn’t much else to talk about on these topics. So by default OVF becomes the name to put on your “standards” banner. Right place at the right time for the vendors behind it.

Speaking of the vendors, I have no insight into the functioning of the OVF working group, but judging by the specification’s foreword VMware is throwing plenty of resources at DMTF: it employs the working group chair and both co-editors, which is pretty atypical in my experience in standards efforts. People are usually sensitive to appearances of one company having disproportionate influence and try to distribute responsibilities around, at least on paper. Add to this VMWare’s recent ramp-up at the DMTF board level. They seem to know what they want. And indeed I can see how the industry leader would want some basic level of standardization, but not too much, which is currently just what OVF offers. We’ll see what’s next in store, if anything.

The specification itself is not marketing-free. According to line 122, “it supports the full range of virtual hard disk formats used for hypervisors today, and it is extensible, which will allow it to accommodate formats that may arise in the future”. Sure, in the same way that my car fully supports passengers of all nationalities (and is extensible enough to transport citizens of yet-to-be created countries – and maybe even other planets, as long as they come with buttocks to sit on). Since OVF doesn’t really do anything with the virtual hard disk formats, it can “support” pretty much any such format.

Speaking of extensibility, OVF clearly tries to have a good story there. Section 7.3 tries to move away from the usual “hey, it’s XML, you can add elements/attributes anywhere” approach towards the definition of new “sections”. This seems a bit drastic. Time will tell if this is visionary or short-sighted. OVF also plans to move towards “an extension model based on the design of the open content model in XML Schema 1.1”. I am not following XSD 1.1 too closely, but it is wise for OVF to not build too much dependency on it, at least for now. And it seems to me that an extension model is not something that you “plan […] to add” but rather something you need to define from the start (sounds like the good old “the next version will add versioning support”, or “no keyboard detected, press F8 to continue”).

But after all this comes what looks to me, from an extensibility perspective, like a big no-no: using (section 8.1) simple strings (e.g. “vmx-4”, “xen-3”) to represent types of virtual systems. You’d think that in 2008 people would have heard about URIs as a way to allow extensibility and prevent name clashes. On further reading, this doesn’t seem to be the fault of OVF as they get this property (vssd:VirtualSystemType) straight out of the politely named DMTF SVP (System Virtualization Profile) specification, itself a preliminary standard. But that’s not much of an excuse because I suspect large overlap of participation between the two groups and in any case you don’t have to take dependencies on something that’s not right (speaking as someone who authored several specs that took a dependency on WS-Addressing, I shouldn’t give lessons). In any case, I am not on top of all virtualization-related work in DMTF but it seems to me that if they are not going to use URIs then someone should step up and maintain a registry of these identifying “virtual system type” strings.

BTW, when left to its own devices OVF does a better job. For example, it properly uses URIs to identify the virtual disk format (section 5.2).

One of the few new features is the addition of the ovf:bound attribute on virtual hardware element items (section 8.3) to specify whether the item description represents the normal, minimal or maximal allocation. My head spins a bit when trying to apply this metadata to the rasd:Limit property (with ovf:bound=”min” the value of the rasd:Limit element would represent the minimal value of the maximum quantity of resources that will be granted, which takes some parsing effort), but I think it more or less squares out.
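
For what it’s worth, here is my parsing of that combination as a simplified sketch (namespaces stripped and names abbreviated, so this is not literal OVF markup):

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical hardware items (real OVF uses ovf:/rasd: namespaces)
section = ET.fromstring("""
<VirtualHardwareSection>
  <Item><Limit>4096</Limit></Item>
  <Item bound="min"><Limit>2048</Limit></Item>
  <Item bound="max"><Limit>8192</Limit></Item>
</VirtualHardwareSection>
""")

for item in section.findall("Item"):
    bound = item.get("bound", "normal")
    limit = item.findtext("Limit")
    # "normal": the resource cap is 4096; "min"/"max": whoever tunes the
    # deployment may not move that cap below 2048 or above 8192
    print(f"bound={bound}: Limit={limit}")
```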

The final standard should not differ greatly from this version, so at this point we pretty much know what OVF will be technically. The real question is how it will be used and what, if anything, is going to come to complement it.

[UPDATED 2008/10/14: Good timing. OVF-loving Kensho just launched.]

CMDBf interop demo

IBM and CA are apparently showing an interoperability demo between their respective CMDBs at itSMF Fusion this week. I am not there to see it, but they describe it (it’s a corporate merger scenario) in this press release. It is presumably based on the version of the specification that was submitted to DMTF.

More information about CMDBf, along with another demonstration, will be available in a couple of months for ManDevCon attendees. Three sessions are on the agenda, all in a row and in the same room (so make sure to get a good seat, i.e. one close to a power plug, from the start):

  • CMDB Federation Overview (Vince Kowalski, BMC and Marv Waschke, CA)
  • CMDB Federation Technical Description (Mark Johnson, IBM and Marv Waschke, CA)
  • CMDB Federation Demonstration (Mark Johnson, IBM and Dave Snelling, Fujitsu)

A nice place to stay in Standardstown

You’ve just driven into Standardstown. It’s getting late and you need a place to stay. Your GPS navigation system has five listings under “accommodations”, with the following descriptions:

W3C campground

This campground provides well-equipped tents (free wi-fi throughout the camp). It has the most developed community feeling of all nearby accommodations. Every evening residents gather around a bonfire and the camp elders sing cryptic songs. At the end, the elders nod in approval of the moral of the song. Most campers don’t understand the lyrics but they like the melody. There is a recurring argument about how much soap the campground management should provide to guests. Old timers want to do away with this practice, but management is afraid that business travelers won’t patronize the camp if they are not provided with plenty of soap. The camp is located along a river, downstream from a large factory. When stuff floats down from the factory and lands on the shore of the camp, they call it a submission and thank the factory. So far, the attempts to build a clubhouse from the factory rubbish have mainly created eyesores.

OASIS housing development

This housing development is an option for accommodation because its management will give a plot of land to almost anyone who asks. More specifically, there needs to be at least three of you in the car. If you’re on your own, a common trick is to go pick up the village drunk (offer him a drink) and the village idiot (tell him you want his advice). They can usually be found on the main plaza, arguing about the requirements of imaginary users. Once you have your plot of land, the OASIS management maintains electric power, water and sewer but you can do pretty much what you want otherwise. If you just need temporary housing, you can just pitch a tent. As a result of this approach, there are several houses abandoned half-way through construction. This can make it hard to find your way to the house you are looking for. Residents typically don’t know anything about what’s going on in the house next door. You’ll find nice families living next to a crack house.

Motel DMTF

This motel is hard to find because it hides behind high walls. Even once you’re inside, there are segregated areas. Chances are your room card will give you access to the pool deck but not the clubhouse. Make a mental note of the way to the emergency exits, because there is no evacuation map on the wall (the map exists, but it’s considered confidential). We’ve heard that the best suites have a special door for direct access to the management office. After you leave, you can’t tell your friends what happened there. This review itself probably breaks some confidentiality rule.

WS-I Resort

This time-share resort is the newest development in town. By the time it got built, all the good land was taken so they had to build on land fragments leased from other hotels. The facilities are new and nice, but the owners association is dysfunctional. We’ve been told the feud started when a co-owner tried to organize a private mime show on shared land. Whatever the origin of the disagreement it has resulted in veto rules being commonly invoked, stopping most of the activities that the resort was originally planning to offer. But it remains a good option if you just need a place to sleep. The resort marketing has been pretty efficient: before doing business with you, many local companies will demand to see a receipt to show that you slept there.

Hotel ISO

Just getting a reservation there is a month-long process, so this is not an option if you’re already in town. Unfortunately, if you plan to do business with the local government you are expected to patronize this hotel. If that’s your case, the solution is to sleep in one of the other places in town and just go to this hotel for breakfast. Once there, order their breakfast special (called the “fast-track rubber stamp” which, unfortunately, tastes as bad as it sounds) and staple the breakfast receipt to your hotel bill. That should satisfy the city hall staff that they can do business with you.

Moving towards utility/cloud computing standards?

This Forbes article (via John) channels 3Tera’s Bert Armijo’s call for standardization of utility computing. He calls it “Open Cloud” and it would “allow a company’s IT systems to be shared between different cloud computing services and moved freely between them”. Bert talks a bit more about it on his blog and, while he doesn’t reference the Forbes interview (too modest?), he points to Cloudscape as the vision.

A few early thoughts on all this:

  • No offense to Forbes but I wouldn’t read too much into the article. Being Forbes, they get quotes from a list of well-known people/companies (Google and Amazon spokespeople, Forrester analyst, Nick Carr). But these quotes all address the generic idea of utility computing standards, not the specifics of Bert’s project.
  • Saying that “several small cloud-computing firms including Elastra and Rightscale are already on board with 3Tera’s standards group” is ambiguous. Are they on-board with specific goals and a candidate specification? Or are they on board with the general idea that it might be time to talk about some kind of standard in the general area of utility computing?
  • IEEE and W3C are listed as possible hosts for the effort, but they don’t seem like a very good match for this area. I would have thought of DMTF, OASIS or even OGF first. On the face of it, DMTF might be the best place but I fear that companies like 3Tera, Rightscale and Elastra would be eaten alive by the board member companies there. It would be almost impossible for them to drive their vision to completion, unlike what they can do in an OASIS working group.
  • A new consortium might be an option, but a risky and expensive one. I have sometimes wondered (after seeing sad episodes of well-meaning and capable start-ups being ripped apart by entrenched large vendors in standards groups) why VCs don’t play a more active role in standards. Standards sound like the kind of thing VCs should be helping their companies with. VC firms are pretty used to working together, jointly investing in companies. Creating a new standard consortium might be too hard for 3Tera, but if the VCs behind 3Tera, Elastra and Rightscale got together and looked at the utility computing companies in their portfolios, it might make sense to join forces on some well-scoped standardization effort that may not otherwise be given a chance in existing groups.
  • I hope Bert will look into the history of DCML, a similar effort (it was about data center automation, which utility computing is not that far from once you peel away the glossy pictures) spearheaded by a few best-of-breed companies but ignored by the big boys. It didn’t really take off. If it had, utility computing standards might now be built as an update/extension of that specification. Of course DCML started as a new consortium and ended as an OASIS “member section” (a glorified working group), so this puts a grain of salt on my “create a new consortium and/or OASIS group” suggestion above.
  • The effort can’t afford to be disconnected from other standards in the virtualization and IT management domains. How does the effort relate to OVF? To WS-Management? To existing modeling frameworks? That’s the main draw towards DMTF as a host.
  • What’s the open source side of this effort? As John mentions during the latest Redmonk/Willis IT management podcast (starting around minute 24), there needs to be an open source side to this. Actually, John thinks all you need is the open source side. Coté brings up Eucalyptus. BTW, if you want an existing combination of standards and open source, have a look at CDDLM (standard) and SmartFrog (implementation, now with EC2/S3 deployment).
  • There seems to be some solid technical raw material to start from. 3Tera’s ADL, combined with Elastra’s ECML/EDML, presumably captures a fair amount of field expertise already. But when you think of them as a starting point to standardization, the mindset needs to switch from “what does my product need to work” to “what will the market adopt that also helps my product to work”.
  • One big question (at least from my perspective) is that of the line between infrastructure and applications. Call me biased, but I think this effort should focus on the infrastructure layer. And provide hooks to allow application-level automation to drive it.
  • The other question is with regards to the management aspect of the resulting system and the role management plays in whatever standard specification comes out of Bert’s effort.

Bottom line: I applaud Bert’s efforts but I couldn’t sleep well tonight if I didn’t also warn him that “there be dragons”.

And for those who haven’t seen it yet, here is a very good document on the topic (but it is focused on big vendors, not on how smaller companies can play the standards game).

[UPDATED 2008/6/30: A couple hours after posting this, I see that Coté has just published a blog post that elaborates on his view of cloud standards. As an addition to the podcast I mentioned earlier.]

[UPDATED 2008/7/2: If you read this in your feed viewer (rather than directly on vambenepe.com) and you don’t see the comments, you should go have a look. There are many clarifications and some additional insight from the best authorities on the topic. Thanks a lot to all the commenters.]

OVF in action: Kensho

Simon Crosby recently wrote about an upcoming Citrix product (I think that’s what it is, since he doesn’t mention open source anywhere) called Kensho. The post is mostly a teaser (the Wikipedia link in his post will improve your knowledge of oriental philosophy but not your IT management expertise) but it makes interesting claims of virtualization infrastructure interoperability.

OVF gets a lot of credit in Simon’s story. But, unless things have changed a lot since the specification was submitted to DMTF, it is still a wrapper around proprietary virtual disk formats (as previously explained). That wrapper alone can provide a lot of value. But when Simon explains that Kensho can “create VMs from VMware, Hyper-V & XenServer in the OVF format” and when he talks about “OVF virtual appliances” it tends to create the impression that you can deploy any OVF-wrapped VM into any OVF-compliant virtualization platform. Which, AFAIK, is not the case.

For the purpose of a demo, you may be able to make this look like a detail by having a couple of equivalent images and picking one or the other depending on the target hypervisor. But from the perspective of the complete lifecycle management of your virtual machines, having a couple of “equivalent” images in different formats is a bit more than a detail.

All in all, this is an interesting announcement and I take it as a sign that things are progressing well with OVF at DMTF.

[UPDATED 2008/6/29: Chris Wolf (whose firm, the Burton Group, organized the Catalyst conference at which Simon Crosby introduced Kensho) has a nice write-up about what took place there. Plenty of OVF-love in his post too, and actually he gives higher marks to VMWare and Novell than Citrix on that front. Chris makes an interesting forecast: “Look for OVF to start its transition from a standardized metadata format for importing VM appliances to the industry standard format for VM runtime metadata. There’s no technical reason why this cannot happen, so to me runtime metadata seems like OVF’s next step in its logical evolution. So it’s foreseeable that proprietary VM metadata file formats such as .vmc (Microsoft) and .vmx (VMware) could be replaced with a .ovf file”. That would be very nice indeed.]

[2008/7/15: Citrix has hit the “PR” button on Kensho, so we get a couple of articles describing it in a bit more details: Infoworld and Sysmannews (slightly more detailed, including dangling the EC2 carrot).]

Mapping CIM associations to CMDBf relationships

This post started as a comment on the blog of Van Wiles. When it became too long (and turned into a therapeutic rant at the end) I turned it into a blog post of its own. Please, read Van’s post first. Here is my response to him:

Hi Van. Sounds like what you are after is not a mapping of the CIM_Dependency association to a CMDBf record type (anyone can make up such a mapping as you point out), but a generic algorithm to map any CIM association to a corresponding CMDBf relationship record type. Correct? That algorithm needs to handle the fact that the CIM metamodel has the concept of relationship roles while the CMDBf metamodel doesn’t.

Here is a possible such mapping:

  1. Take a CIM association (called “myAssociation”) that has two roles (called “thisOne” and “theOtherOne”).
  2. Take the item whose role name comes first alphabetically and make it the source (in this example, it is “theOtherOne”).
  3. Take the item whose role name comes second alphabetically and make it the target (in this example, it is “thisOne”).
  4. Generate a CMDBf record type called “{associationName}_from_{firstRoleNameAlphabetically}_to_{secondRoleNameAlphabetically}”.

You’re done. The new CMDBf record type is “myAssociation_from_theOtherOne_to_thisOne”, the source is the item with the role “theOtherOne” and the target is the item with the role “thisOne”. Everyone who follows this algorithm (of course it needs to be formally defined and evangelized, there is no guarantee here unless we bake CIM-specific concepts in the core CMDBf specification, which would be a mistake) will produce the same CMDBf relationship record type for a given CIM association.

Applied to the CIM_Dependency example, this would generate a “CIM_Dependency_from_Antecedent_to_Dependent” CMDBf record type, in which the source is the CIM Antecedent and the target is the CIM Dependent.
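
For illustration, here is what that mapping algorithm could look like in code. This is just a sketch of the naming convention described above, not anything the specification defines:

```python
def map_cim_association(association_name: str, role_a: str, role_b: str) -> dict:
    """Deterministically derive a CMDBf relationship record type from a
    two-role CIM association: the alphabetically-first role is the source."""
    source_role, target_role = sorted([role_a, role_b])
    return {
        "record_type": f"{association_name}_from_{source_role}_to_{target_role}",
        "source_role": source_role,
        "target_role": target_role,
    }

print(map_cim_association("CIM_Dependency", "Antecedent", "Dependent"))
# -> {'record_type': 'CIM_Dependency_from_Antecedent_to_Dependent',
#     'source_role': 'Antecedent', 'target_role': 'Dependent'}
```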

Alternatively, you can have the algorithm generate two CMDBf relationship record types (one going in each direction) for each CIM association. So you don’t have to arbitrarily pick the first one (alphabetically) as the source. But then you need to have model metadata to capture the fact that these relationships are the inverse of one another (and imply one another). As you well know, I have been advocating for the use of RDF/RDFS/OWL in CMDBf for a while. :-)

In the end, there are three potential approaches:

1) Someone (the CMDBf group or someone else) creates an authoritative mapping for all CIM associations (or at least all the useful ones) and we expect anyone who uses the CIM model with CMDBf to use that mapping.

2) Someone (again, the CMDBf group or someone else) defines a normative CIM to CMDBf mapping, e.g. the one above, and we expect anyone who generates a CMDBf relationship record type from a CIM association to use this mapping algorithm. From a pure logical perspective, it is the same as defining a CMDBf record type for each CIM association (approach 1), but it is less work and it doesn’t have to be updated every time a CIM association is created/versioned. At the cost of uglier (more arbitrary) CMDBf record types being defined.

3) We let people define the relationships in whatever way they choose and we provide a model metadata framework (aka ontology language) to allow mappings between these approaches. For example, you define, in your namespace, a van:CIM-inspired-dependency CMDBf record type that goes from antecedent to dependent. Separately, I define, in my namespace, a william:CIM-like-dependency CMDBf record type that carries the same semantics (defined, not so precisely BTW but that’s a different topic, by CIM) except that its source is the dependent and its target is the antecedent. The inverse of yours. A suitable ontology language would allow someone (you, me, or a third party who has to assemble a system that uses both relationship types) to assert that mine is the inverse of yours. Once this assertion is captured, a request for any [A]—(van:CIM-inspired-dependency)—>[B] would also return the instances of [B]—(william:CIM-like-dependency)—>[A] because they are known to be the same. And you know how I am going to conclude, of course: OWL (specifically owl:inverseOf) provides just this.
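
To make the owl:inverseOf point concrete, here is a sketch using rdflib (the namespaces and item names are made up; also note that plain rdflib stores triples but does not reason, so the inverse is materialized by hand below, a step an OWL reasoner would perform automatically):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

VAN = Namespace("http://example.org/van#")
WILLIAM = Namespace("http://example.org/william#")
EX = Namespace("http://example.org/items#")

g = Graph()
# The ontology assertion: the two record types are inverses of one another
g.add((VAN["CIM-inspired-dependency"], OWL.inverseOf,
       WILLIAM["CIM-like-dependency"]))
# One instance asserted with van's type: source = antecedent (the database),
# target = dependent (the app server that depends on it)
g.add((EX.database, VAN["CIM-inspired-dependency"], EX.appServer))

# Materialize inverses (a reasoner would automate this step)
for p1, _, p2 in list(g.triples((None, OWL.inverseOf, None))):
    for s, _, o in list(g.triples((None, p1, None))):
        g.add((o, p2, s))

# A query against william's record type now also finds van's assertion
print((EX.appServer, WILLIAM["CIM-like-dependency"], EX.database) in g)  # True
```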

BTW, approach 3 is not incompatible with 1 or 2. Whether or not we define mappings for CIM relationships and whether or not that mapping gets adopted, there will be plenty of cases in a federated scenario in which you need to reconcile models (CIM-based or not). Model metadata (aka an ontology language) is useful anyway.

Readers who only care about the technical aspects and have little time for rants can stop reading here. But, since I haven’t addressed any constructive criticism to the DMTF in a while, I can’t resist the opportunity to point out that if the mailing list archives for the DMTF working groups were publicly available, we wouldn’t have to have these discussions on our personal blogs. I am very glad that Van posted this on his blog because it is a question that many people will have. Whatever the CMDBf specification ends up doing, developers and architects who make use of it will benefit from having access to the deliberations and considerations that resulted in the specification being what it is. There are many emails in the CMDBf mailing list private archive that I am sure would be useful to future CMDBf implementers, but if they don’t show up on Google they don’t exist for any practical purpose. When grappling with the finer points of some specification or programming language I have often Googled my way into email archives (or old specification drafts) of the working groups that designed them. Sometimes I come out thinking “oh, ok, now I understand why they chose that approach” and other times it’s “ok, that’s what I suspected, these guys were high”. Either way, it’s useful to me as a user of the specification. W3C is the best example (of making working group records available, not of being high): not only is the mailing list available but the phone meetings often have a supporting IRC channel in which key points of the discussion get captured and archived. Here is an example. Making life easier for implementers is probably the single most important thing to make a specification successful. And ultimately, that’s the DMTF’s success too.

And it’s not just for developers and architects. It also impacts industry observers and pundits. Like the IT Skeptic who looked into CMDBf and reported “nothing on the DMTF website but press releases. try to find anything by navigating from the homepage”. And you wonder why his article is titled “the CMDB Federation proceeeds (sic) at its usual glacial pace”. There is good work going on, but there is no way for him to see it. This too is bad for the adoption and credibility of DMTF specifications.

Isn’t it ironic that the DMTF expends resources to sponsor a “hospitality suite” at the Burton Group Catalyst conference (presumably to spread the word about the good work taking place in the organization) but fails to make it easy for the industry to see that same good work taking place? It’s like a main street retail shop that advertises in the newspaper but covers its store window with cardboard, preventing passersby from seeing what’s on offer. I notice that all the other “hospitality suites” seem to be staffed by for-profit vendors (Oracle, IBM, Cisco, Microsoft etc are all there). Somehow W3C and OASIS (whose work is very relevant to some of the conference themes, like identity management and SOA) don’t feel the need to give away pens and key chains at the conference.

Dear DMTF, open source is not just good for code.

Recent IT management announcements

There were a few announcements relevant to the evolution of IT management over the last week. The most interesting is VMware’s release of the open-source (BSD license) VI SDK, a Java API to manage a host system and the virtual machines that run on it. Interesting that they went the way of a language-specific API. The alternatives, to complement/improve their existing web services SDK, would have been: define CIM classes and implement a WBEM provider (using CIM-HTTP and/or WS-Management), use WS-Management but without the CIM part (define the model as native XML, not XML-from-CIM), use a RESTful HTTP-driven interface to that same native XML model or, on the more sci-fi side, go the MDA way with a controller from which you retrieve the observed state and to which you specify the desired state. The Java API approach is the easiest one for developers to use, as long as they can access the Java ecosystem and they are mainly concerned with controlling the VMWare entities. If the management application also deals with many other resources (like the OS that runs in the guest machines or the hardware under the host, both of which are likely to have CIM models), a more model-centric approach could be more handy. The Java API of course has an underlying model (described here), but the interface itself is not model-centric. So, what about all the DMTF-love that VMWare has been displaying lately (OVF submission, board membership, hiring of the DMTF president…)? Should we expect a more model-friendly version of this API in the future? How does this relate to the DMTF SVPC working group that recently released some preliminary profiles? The choice to focus on beefing-up the Java-centric management story (which includes Jython, as VMWare was quick to point out) rather than the platform-agnostic, on-the-wire-interop side might be seen by the more twisted minds as a way to not facilitate Microsoft’s “manage VMWare today to replace it tomorrow” plan any more than necessary.

Speaking of Microsoft, in unrelated news we also got a heartbeat from them on the Oslo project: a tech preview of some of the components is scheduled for October. When Oslo was announced, there was a mix of “next gen BizTalk” aspects and “developer-driven DSI” aspects. From this report, the BizTalk part seems to be dominating. No word on use of SML.

And finally, SOA Software (who was previously called Digital Evolution and who acquired Blue Titan, Flamenco and LogicLibrary, in case you’re trying to keep track) has released a “SOA Development Governance Product”. Nothing too exciting from what I can see on InfoQ about it, but that’s a pretty superficial evaluation so don’t let me stop you. Am I the only one who twitches whenever “federation” is used to mean at worst “import” or at best “synchronization”? Did CMDBf start that trend? BTW, is it just an impression or did SOA Software give InfoQ a list of the questions they wanted to be asked?

4 Comments

Filed under DMTF, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Open source, Oslo, OVF, SML, Standards, Tech, Virtualization, VMware, WS-Management

I have seen the future of CMDBf

I got a sneak peek at CMDBf v2 today.

I am calling it v2 on the assumption that the version currently being standardized at the DMTF will end up being called 1.0 (because it’s the first one out of the DMTF) or 1.1 (to prevent confusion with the submitted version).

At the Semantic Technology Conference, David Booth from HP presented his work (along with his partner, Steve Battle from HP Labs) to provide a SPARQL front-end to HP’s Universal CMDB (the engine under what was the Mercury MAM product). Here are the slides.

The mapping from SPARQL to TQL (the native query interface for UCMDB) was made fairly easy by the fact that TQL is a graph-oriented query language. How much harder would it be to similarly transform a CMDBf (v1) query interface into a SPARQL query interface (and vice versa)? Not much. The only added difficulty would come from the CMDBf XPath constraints. TQL has a property value mechanism that is very similar to CMDBf’s “propertyValue” constraint and maps well to SPARQL filter functions. The introduction of XPath as a constraint language in CMDBf makes things harder. It could be handled by adding XPath support to the SPARQL engine through function extensibility, or by turning the entire XML into RDF and emulating XPath in SPARQL. But either way you’ll hit an impedance mismatch at some point, because concepts such as element order that exist in XPath have no native equivalent in RDF.
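
Here is, roughly, what a “propertyValue”-style constraint looks like on the SPARQL side. This is a sketch using Jena (the toolkit that Gloze plugs into), not code from the HP prototype; the cmdb vocabulary, the file name and the assumption of numeric version literals are all invented for illustration:

import com.hp.hpl.jena.query.*;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class PropertyValueConstraint {
    public static void main(String[] args) {
        // RDF rendition of CMDB records, e.g. as produced by a Gloze-style XML-to-RDF pass.
        Model model = ModelFactory.createDefaultModel();
        model.read("file:items.rdf");

        String q =
            "PREFIX cmdb: <http://example.org/cmdb#>\n" +
            "SELECT ?item ?version WHERE {\n" +
            "  ?item cmdb:type 'OperatingSystem' ;\n" +
            "        cmdb:version ?version .\n" +
            "  FILTER (?version >= 10)\n" +   // plays the role of a CMDBf propertyValue constraint
            "}";

        QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), model);
        try {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.nextSolution();
                System.out.println(row.get("item") + "\t" + row.get("version"));
            }
        } finally {
            qe.close();
        }
    }
}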

The use of XPath in selectors on the other hand is not a problem. HP’s prototype uses Gloze (available as a Jena package) to turn the XML returned by UCMDB into RDF. An XSLT transform could turn that same XML into a CMDBf-valid XML response instead and that XSLT could easily handle the XPath selectors from the query request. This is another reason why constraints and selectors should remain separate in CMDBf (fortunately the specification is back to doing this properly).
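
Selectors are easy precisely because they only prune the XML that comes back, which any XPath engine can do after the fact. A toy illustration with the JDK’s built-in XPath API; the file name and the os/version record structure are invented:

import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class ApplySelector {
    public static void main(String[] args) throws Exception {
        // "record.xml" stands for an item record returned by the CMDB.
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList nodes = (NodeList) xpath.evaluate(
            "//os/version", new InputSource("record.xml"), XPathConstants.NODESET);
        for (int i = 0; i < nodes.getLength(); i++) {
            System.out.println(nodes.item(i).getTextContent());
        }
    }
}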

Here is why I call this prototype CMDBf v2: The CMDBf effort (v1 or 1.1), in its current form of re-inventing a graph query, can succeed. Let’s assume the working group strikes a reasonable balance between completeness and complexity, and vendors choose to compete on innovation and execution rather than lock-in (insert cynical comment here). CMDBf may then end up being supported by the main CMDB vendors. It wouldn’t provide federation capabilities, but having a common CMDB query interface supported by the Big Four would help with management integration. And yet, while the value would be real, it would only provide a little help to solve a larger problem:

  • As a technology limited to IT systems management, it would be unlikely to see widely available tools (e.g. user consoles and language-specific libraries).
  • It wouldn’t get the kind of robustness and interoperability that comes from wide adoption. The various implementations, while pretty similar, would likely differ in minor ways. Once your implementation has been tweaked to work with the implementations from the Big Four, you’ll call it done. Just like SNMP, another technology that is specific to IT systems management (see it happen here).
  • Even if it works perfectly at the query level, it will just hasten the time when developers run into the real problem, model interoperability. CMDBf doesn’t help at all with this. In fact, it makes it harder by hard-coding some dependencies on an XML back-end (the XPath constraints).

In the long run, IT management has to become more automated and integrated. That’s a given. The way it happens may or may not go through CMDB-like configuration stores. But if it does, we’ll have to eventually move beyond CMDBf (v1) towards something that addresses the three requirements above. And federation. I don’t know if it will be called CMDBf v2, and/or if it will come from the DMTF (by then, the CMDBf brand might be an asset or a liability depending on developer experience with the specification). But I strongly suspect (“probability 0.8” as a Gartner analyst might put it) that it will use semantic technologies. Because the real, hard, underlying problem is a problem of semantic integration. In that sense, David and Steve’s prototype is a sneak peek at what will come after CMDBf v1/1.1.

Pretty much since the beginning of CMDBf I have been pushing for it to ideally embrace SPARQL (with no success) or to at least stay close to it conceptually in order to make the eventual mapping/evolution smooth (with a bit more success). This includes pushing for a topological query language, trying to keep XML idiosyncrasies at bay and keeping constraints and selectors cleanly separated. Rather than working within the CMDBf group, David took the alternative approach of simply doing it. Hopefully this will help convince people of the value of re-using semantic web technology for IT systems management. Yes, semantic technologies have been designed for a much more general use case. But the use cases that CMDB systems address are a subset of the use cases addressed by semantic technologies. It’s hard for domain experts to see their domain as just a subset of a larger problem, but that is the case here. Isn’t HTTP serving the IT management community better than a systems management-specific alternative would?

By the way, there is no inferencing taking place in the HP prototype. We are just talking about re-using an existing, well thought-through graph query language. Sure, OWL inferencing and some rules could be seamlessly layered on top of this. But that is in no way required to do (better) what CMDBf v1 tries to do.

And then there is the “federation” question. Who do you trust more to deliver this? A bunch of IT systems management architects at the DMTF, or the web and query experts at W3C, HP Labs, etc. who designed and implemented SPARQL over many years? BTW, it sounds like SPARQL federation was discussed at WWW 2008, based on these meeting notes (search for “federation”).

2 Comments

Filed under Automation, CMDB, CMDB Federation, CMDBf, Conference, DMTF, Everything, Graph query, HP, IT Systems Mgmt, Query, RDF, Semantic tech, SPARQL, Standards, W3C, XPath

Oracle/BEA, WS-Management and MMS: announcements of the day

A few announcements came out today.

The good news: Oracle’s acquisition of BEA closes. Unobstructed technical work can start.

The conveniently-timed news: WS-Management officially a standard.

Speaking of MMS 2008, any announcement there? Not much so far, as explained by Ian Blyth. If I parse the cross-platform part of the press release correctly, it says that management of non-Windows resources by Operations Manager is based on WS-Management, but that WS-Management alone is not enough, so Microsoft is providing a development kit for several non-Microsoft operating systems. It will be interesting to see what exactly is produced by these management packs. Can they be called on by management tools other than Operations Manager, or is the stuff that rides on top of WS-Management too proprietary to allow this? No word on SML/CML.

By the end of the week we may have a clearer picture, including what’s going on with the previously-announced reset on System Center Service Manager. Coté is on the scene and will undoubtedly share his thoughts.

As a side note, the way the MMS main page loads betrays the fact that, in 2008, Microsoft (or more likely its event marketing contractor) is using the same clueless HTML design approach that I first saw in 1995 and recently wrote about. All the text in the center of the MMS home page is contained in one large picture (available here). They didn’t even bother with an “alt” attribute, so good luck to blind users. The part that says “Registration Overview Page” was made blue and underlined to suggest that it is a link, but it is just part of the picture. Which, presumably, was supposed to be turned into a link using an image map. Well, it turns out they can’t even get that right.

They tried to use a client-side image map (not available in 1995) but somehow the actual map code is commented out in the HTML source:

<!--<map name=Map>
  <area shape=RECT coords=18,549,210,572 href="registrationoverview.aspx">
  <area shape=RECT coords=17,596,222,634 href="registrationoverview.aspx">
</map>-->

As a result, the single most prominent link on the home page is dead. And there is no server-side image map mechanism as a backup (which I remember used to be best practice back when client support for client-side image maps was spotty).

Looking at the HTML source also reveals that tables are over-used. That’s the kind of HTML I can write, and I don’t mean that as a compliment.

[UPDATED 2008/5/5: As expected/hoped, Coté did share his thoughts on this “cross-platform” move from the MMS floor.]

Comments Off on Oracle/BEA, WS-Management and MMS: announcements of the day

Filed under CMDB, DMTF, Everything, IT Systems Mgmt, Linux, Manageability, Microsoft, Oracle, Standards, Trade show

Comparing the “openness” of standards bodies

Via James Governor, a link to an IDC report that attempts to compare the “openness” of ten standards bodies: CEN, Ecma, ETSI, IETF, ISO, ITU, NIST, OASIS, OMG, and W3C. The report is 92 pages long, which is 91 more than I really want to read on this topic. I skimmed the report until I got to the “concluding remarks” at the end. The bottom line:

“However, there are differences between standard setting organizations in terms of ‘openness’ and certainly in terms of how ‘openness’ is implemented. It can be difficult to make a distinction of which form of ‘openness’ is the most appropriate.”

Sure, but after 92 pages maybe the author could at least propose some useful way to organize the problem rather than just making a laundry list of possible interpretations of “openness”.

Still, if you are in the business of running (or selecting) standards organizations it might be worth your time to read this report.

Bad news for DMTF: you are not important enough to be included. Good news for DMTF: your lack of transparency is not exposed by this report.

Comments Off on Comparing the “openness” of standards bodies

Filed under DMTF, Everything, Standards

DMTF members as primary voters?

I just noticed this result from the 2007 DMTF member survey (taken a year ago, but as far as I can tell just released now). When asked what their “most important interoperability priority” is, members made it pretty clear that they want the current CIM/WBEM infrastructure fixed and polished. They seem a lot less interested in these fancy new SOAP-based protocols and even less in using any model other than CIM.

It will be interesting to see what this means for new DMTF activities, such as CMDBf or WS-RC, that are supposed to be model-neutral. A few possibilities:

  • the priorities of the members change over time to make room for these considerations
  • turn-over (or increase) in membership brings in members with a different perspective
  • the model-neutral activities slowly get more and more CIM-influenced
  • rejection by the DMTF auto-immune system

My guess is that the DMTF leadership is hoping for #1 and/or #2 while the current “base” (to borrow from the US election-season language) wouldn’t mind #3 or #4. I am expecting some mix of #2 and #3.

Pushing the analogy with current US political events further than is reasonable, one can see a correspondence with the Republican primary:

  • CIM/WBEM is Huckabee, favored by the base
  • CMDBf/WS-RC/WS-Management etc. is Romney, the choice of the party leadership
  • In the end, some RDF and HTTP-based integration-friendly approach comes from behind and takes the prize (McCain)

Then you still have to win the general election (i.e. industry adoption of whatever the DMTF cooks up).

[UPDATED 2008/2/7: the day after I wrote this entry, Romney quit the race. Bad omen for CMDBf and WS-RC? ;-) ]

Comments Off on DMTF members as primary voters?

Filed under CMDB Federation, CMDBf, DMTF, Everything, Standards, WS-Management

SPARQL is a W3C Recommendation

SPARQL is now a W3C Recommendation (which is what the W3C calls its approved standard specifications). Congratulations to those who made it happen, including my esteemed ex-colleagues at HP Labs Bristol. Just in time for the DMTF CMDBf working group to consider it as a candidate for its query language… :-)

And just below that SPARQL announcement we see a notice that the SML working group has released a third set of working drafts (SML, SML-IF). Just in time for the DMTF to be reminded of the goodness of open access to standards as they develop… :-)

Comments Off on SPARQL is a W3C Recommendation

Filed under DMTF, Everything, RDF, SML, SPARQL, W3C

The window of opportunity for WS-Management

There is a narrow window of opportunity for WS-Management to become a unifying force that lowers the need for management agents. Right now, WS-Management is still just “yet another manageability protocol”. Its adoption is growing, but there isn’t much you can do with it that you can’t do some other way (what resources today are manageable only through WS-Management?), and it is not so widely supported that you can get away with supporting WS-Management alone.
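
For concreteness, here is what the protocol amounts to on the wire: a SOAP envelope whose headers identify the operation (borrowed from WS-Transfer) and the resource. A minimal sketch with the JDK’s SAAJ API; it leaves out headers a real request also needs (wsa:MessageID, wsa:ReplyTo), and the endpoint and resource URI are just examples:

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPConstants;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPMessage;

public class WsManGet {
    public static void main(String[] args) throws Exception {
        String WSA = "http://schemas.xmlsoap.org/ws/2004/08/addressing";
        String WSMAN = "http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd";

        SOAPMessage msg = MessageFactory.newInstance(
            SOAPConstants.SOAP_1_2_PROTOCOL).createMessage();
        SOAPHeader header = msg.getSOAPHeader();

        // A WS-Transfer "Get", addressed to a CIM-modeled resource.
        header.addHeaderElement(new QName(WSA, "Action", "wsa"))
              .addTextNode("http://schemas.xmlsoap.org/ws/2004/09/transfer/Get");
        header.addHeaderElement(new QName(WSA, "To", "wsa"))
              .addTextNode("http://server.example.com/wsman");
        header.addHeaderElement(new QName(WSMAN, "ResourceURI", "wsman"))
              .addTextNode("http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_OperatingSystem");

        msg.saveChanges();
        msg.writeTo(System.out); // hand the envelope to your HTTP stack from here
    }
}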

I see two main reasons keeping pragmatic creators of IT resources (hardware and software) from more widely using WS-Management to expose the manageability capabilities of their resources. The first, which I cover here, is the fear of wasting development resources (and the lack of customer demand). The second, which I will cover in a later post, is the complexity introduced by some of the technical choices in WS-Management.

There is plenty of uncertainty around the status and future of WS-Management. This means that any investment in implementing the specification is at risk of having to be later thrown away. It also means that customers, while they often mention it as part of a check-list, understand that at this point WS-Management doesn’t necessarily give them the investment protection that widely-supported stable standards provide. And as such they are receptive when vendors explain that at this point there really isn’t a stable standard for manageability that goes across domains and the best they can get is support for a patchwork of established specifications like SNMP, JMX, CIM/HTTP, WMI, etc.

One source of this uncertainty about WS-Management comes from the fact that there is an equivalent standard, WSDM, that came out of OASIS. But at this point, it is pretty clear that WSDM is going nowhere. Good metrics are hard to come by, but if you compare the dates of the last commit activity in the three open-source WS-Management implementations that I know of (Openwsman, Wiseman and the WS-Management module of SOA4D) to that of the Muse implementation of WSDM, you are comparing ages in hours/days to ages in months. Another way is to look at the sessions in the web services track at the recent Management Developers Conference: six presentations around WS-Management (including an intriguing Ruby on Rails module) compared to one for WSDM. Unless your company is an IBM-only account, WSDM isn’t a useful alternative to WS-Management (and not because of technical inferiority; I still prefer WSDM MUWS to WS-Management on that score, but it’s largely irrelevant).

The more serious concern is that, back when it wasn’t clear whether the industry would pick WS-Management over WSDM, an effort was launched to reconcile the two specifications. That effort, often referred to as the WS-Management/WSDM convergence, is private, so no one outside of the four companies involved knows what is happening. The only specification that has come out at this point is a draft of WS-ResourceTransfer in the summer of 2006 (I don’t include WS-ResourceCatalog because, even though it came out of the same group, it provides features that are in neither WS-Management nor WSDM, so it is not really part of converging them). What is happening now? The convergence effort may have died silently. Or it may be on the brink of releasing a complete new set of specifications. Or it may have focused on a more modest set of enhancements to WS-Management. Even though I was on the inside until a few months ago, I am not feigning ignorance here. There is enough up in the air that I can visualize any of these options being realized.

This is not encouraging to people looking to invest their meager development resources in improving the manageability interfaces of their products. What if they put work into WS-Management and soon after that Microsoft, IBM, HP and Intel come out with a new set of specifications and try to convince the industry to move from WS-Management to it? Much safer to stay on the sidelines for now. The convergence is a source of FUD preventing adoption of WS-Management. It is, on the other hand, a lifeline for WSDM, because it gives those who went with WSDM a reason to wait and see what happens with the convergence before moving away from WSDM.

Even before leaving HP, I had come to the conclusion that it was too late for the convergence to succeed. This doesn’t imply anything about HP’s current position on the topic, which I am of course not qualified to represent. But I just noticed that the new HP BTO chief architect doesn’t seem too fond of WS-*.

Even if the convergence effort manages to deliver the specifications it promised (including an update of WS-ResourceTransfer, which is currently flawed, especially its “partial put” functionality), it will be years before they get published, interop-tested, submitted and standardized. Will there be appetite for a new set of WS-* specifications at that point? Very doubtful. SOAP will be around for a long time, but the effort in the SOAP community is now about using the existing set of specifications to address already-identified enterprise integration problems. The final stage in the production of any good book, article or even blog post (not that this blog is a shining example) is to pare down the content, to remove anything that is not essential. This is the stage the SOAP world is in: sorting through the deluge of specifications to extract and polish the productive core. New multi-spec frameworks need not apply.

If there is to emerge a new, comprehensive, framework for web-based manageability, it won’t be the WS-Management/WSDM convergence. It probably won’t use SOAP (or at least not in its WS-Addressing-infected form). It may well use RDF. But it is not in sight at this point. So for now the choice is whether to seize the opportunity to create a widely-adopted standard on the basis of WS-Management (with all its flaws) or to let the window of opportunity close, to treat WS-Management as just another manageability tool in the toolbox and go on with life. Until the stars line up in a few years and the industry can maybe take another stab at the effort. To a large extent, this is in the hands of Microsoft, IBM, HP and Intel. Ironically, the best way for those who want nothing to do with SOAP to prevent SOAP from being used too much for manageability (beyond where WS-Management is already used) is to keep pushing the convergence (which is very much SOAP based) in order to keep WS-Management contained.

3 Comments

Filed under DMTF, Everything, IT Systems Mgmt, Standards, WS-Management, WS-ResourceTransfer

CMDBf now in the hands of the DMTF

It’s now official, the CMDBf specification has been submitted to the DMTF and will be standardized there. Here is the press release and here is the specification (unchanged) republished on the DMTF site. The CMDBf working group was created a while ago at the DMTF but I didn’t report it since it wasn’t clear to me whether that was public information or not. The press release makes this clear now.

As a side note, this is one of my ongoing frustrations with the DMTF. Almost everything happens in private, with no publicly-accessible URL until a press release comes out; and of course lots of interesting things happen that never get a press release. I have heard many times that the DMTF is working on opening up the process, but I still haven’t seen much change. If this had been OASIS or W3C, the call for formation of the new working group would have been publicly accessible even before the group was created. OK, end of ranting.

As always, there isn’t much useful information to be gleaned from the text of the press release. Only that, as expected, the authors addressed the question of how this relates to CIM, since for many people DMTF equals CIM. So the press release proactively declares that the CMDBf work will not be limited to CIM-modeled configuration data. What this means in practice remains to be seen (e.g. will there be CIM-specific extensions?).

Having seen how executive quotes for press releases get generated, I hate to read too much into them, but another thing I can’t help noticing in this one is that none of the quotes from the companies submitting the specification tout federation; they simply mention “integration” or “sharing”. For example: “integration and interoperability” (BMC), “share data” (CA), “sharing of information” (HP), “view, track and change information” (IBM), “exchange data” (Microsoft). This more realistic assessment of what the specification does stands in contrast to the way the DMTF presents it in the press release: “this specification provides a standard way to federate management data stored in multiple different data models”. At this point, it doesn’t really provide federation, and especially not across different models.

All in all, it’s a good thing for this work to move to a standards organization. I may join the CMDBf group at the DMTF to track it, but I don’t plan to engage very much, as this area isn’t my focus anymore now that I am at Oracle. But of course everything in the management field is linked at some level.

[UPDATED 2007/11/30: two days after posting this entry, I got the monthly DMTF newsletter, which touches on points I raise here. So here are the relevant links. First, Mike Baskey, DMTF Chairman, shares his view on what CMDBf means for DMTF. Second, as if to respond to my rant on the opacity of the DMTF, Josh Cohen, DMTF Vice-chairman, gives an update on process improvements. Some progress indeed, but still a far cry from opening up the mailing list archives so that observers can see in real time which issues are being addressed, and can go back in time to understand how a specific technical decision was made and what the considerations were.]

Comments Off on CMDBf now in the hands of the DMTF

Filed under CMDB, CMDB Federation, CMDBf, DMTF, Everything, IT Systems Mgmt, ITIL, Specs, Standards

A review of OVF from a systems management perspective

I finally took a look at OVF, the virtual machine distribution specification that was recently submitted to DMTF. The document is authored by VMware and XenSource, but they are joined in the submission to DMTF by some other biggies, namely Microsoft, HP, IBM and Dell.

Overall, the specification does a good job of going after the low-hanging fruit of VM distribution/portability. And the white paper is very good. I wish I could say that all the specifications I have been associated with came accompanied by such a clear description of what they are about.

I am not a virtualization, operating system or hardware expert. I am mostly looking at this specification from the systems management perspective. More specifically I see virtualization and standardization as two of the many threads that create a great opportunity for increased automation of IT management and more focus on the application rather than the infrastructure (which is part of why I am now at Oracle). Since OVF falls in both the “virtualization” and “standardization” buckets, it got my attention. And the stated goal of the specification (“facilitate the automated, secure management not only of virtual machines but the appliance as a functional unit”, see section 3.1) seems to fit very well with this perspective.

On the other hand, the authors explicitly state that in the first version of the specification they are addressing the package/distribution stage and the deployment stage, not the earlier stage (development) or the later ones (management and retirement). This sidesteps many of the harder issues, which is part of why I write that the specification goes after the low-hanging fruit (nothing wrong with starting that way, BTW).

The other reason for the “low-hanging fruit” statement is that OVF is just a wrapper around proprietary virtual disk formats; it is not a common virtual disk format. I’ve read in several news reports that this specification provides portability across VM platforms. It’s sad but almost expected that the IT press would get this important nuance wrong; it’s more disappointing when analysts (who should know better) do too. For example, the Burton Group writes in its analysis: “so when OVF is supported on Xen and VMware virtualization platforms for example, a VM packaged on a VMware hypervisor can run on a Xen hypervisor, and vice-versa”. That’s only true if someone at some point in the chain translates from the Xen virtual disk format to the VMware one. OVF will provide deployment metadata and will let you package both virtual disks in a TAR if you so desire, but it will not do the translation for you. And the OVF authors are pretty up front about this (for example, the white paper states that “the act of packaging a virtual machine into an OVF package does not guarantee universal portability or install-ability across all hypervisors”). On a side note, this reminds me a bit of how the Sun/Microsoft Web SSO MEX and Web SSO Interop Profile specifications were supposed to bridge Passport with WS-Federation, which was a huge overstatement. Except that in that case the vendors were encouraging the misconception (which the IT press happily picked up), while in the OVF case the vendors are upfront about the limitations.

There is nothing rocket-science about OVF, and even as a non-virtualization expert it makes sense to me. I was very intrigued by the promise that the specification “directly supports the configuration of multi-tier applications and the composition of virtual machines to deliver composed services”, but this turns out to be a bit of an overstatement. Basically, you can distribute the VMs across networks by specifying a network name for each VM. I can easily understand the simple case, where all the VMs are on the same network and talking to one another. But there is no way (that I can see) to specify the network topology that joins different networks together, e.g. saying that there is a firewall between networks “blue” and “red” that only allows traffic on port 80. So why would I create an OVF file that composes several virtual machines if they are going to be deployed on networks that have no relationship to one another? I guess the one use case I can think of would be if one of the virtual machines were assigned to two networks and acted as a gateway/firewall between them. But that’s not a very common or scalable way to run your networks. There is a reason why Cisco sells $30 billion of networking gear every year. So what’s the point of this lightweight distributed deployment? Is it just for the use case where the network gear is also virtualized, in the expectation of future progress in that domain? Is it just a common anchor point to be later extended with more advanced network topology descriptions? This looks to me like an attempt to pick a low-hanging fruit that wasn’t there.

Departing from usual practice, this submission doesn’t seem to come with any license grant, which must have greatly facilitated its release and the recruitment of supporters for the submission. But it should be a red flag for adopters. It’s worth keeping track of its IP status as the work progresses. Unless things have changed recently, DMTF’s IP policy is pretty weak, so the fact that work happens there doesn’t per se guarantee much protection to adopters. Interestingly, there are two sections (6.2 about the virtual disk format and 11.3 about the communication between the guest software and the deployment platform) where the choice of words suggests the intervention of patent lawyers: phrases like “unencumbered specification” (presumably unencumbered with licensing requirements) and “someone skilled in the art”. Which is not surprising, since this is the part where the VMware-specific, Xen-specific or Microsoft-specific specifications would plug in.

Speaking of lawyers, the section that allows the EULA to be shipped with the virtual appliance is very simplistic. It’s just a human-readable piece of text in the OVF file. The specification somewhat naively mentions that “if unattended installs are allowed, all embedded license sections are implicitly accepted”. Great, thanks, enterprises love to implicitly accept licensing terms. I would hope that the next version will provide, at least, a way to have a URI identify the EULA, so that I can maintain a list of pre-approved EULAs for which unattended deployment is possible. Automation of IT management is supposed to make things faster and cheaper. Having a busy and expensive lawyer read a EULA as part of my deployment process goes against both objectives.

It’s nice of the authors to do the work of formatting the specification in the DMTF-approved DSPxxxx format before submitting it to the organization. But using a target namespace in the dmtf.org domain when the specification is just a submission seems pretty tacky to me, unless they got a green light from the DMTF ahead of time. It also looks a little crass on the part of VMware to wrap the specification inside its corporate white paper template (cover page and back page) if this is a joint publication. See the links at http://www.vmware.com/appliances/learn/ovf.html. Even though, for all I know, VMware might have done most of the actual work. That’s why the links I used for the white paper and the specification are those at XenSource, which offers the plain version. But then again, this specification is pretty much a wrapper around a virtual disk file, so graphically wrapping it may have seemed appropriate…

OK, now for some XML nitpicking.

I am not a fan of leaving elementFormDefault set to “unqualified”, but that’s their right. The problem is that they then qualify all the attributes in the specification examples. That looks a little awkward to me (I tend to do the opposite: qualify the elements but not the attributes) and, more importantly, it violates the schema in the appendix, since the schema leaves attributeFormDefault at its default value (unqualified). I would rather run a validation before making this accusation, but where are the stand-alone XSD files? The white paper states that “it is the intention of the authors to ensure that the first version of the specification is implemented in their products, and so the vendors of virtual appliances and other ISV enablement, can develop to this version of the specification”, but do you really expect us to copy/paste from the PDF and then manually remove the line numbers and header/footer content that come along? Sorry, I have better things to do (like whining about it on this blog), so I haven’t run the validation to verify that the examples are indeed in violation. But that’s at least how they look to me.
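
Had the XSD been published stand-alone, the check would be a few lines with the JDK’s validation API. A sketch, with placeholder file names for the schema as it would look once extracted from the PDF appendix and for one of the specification’s examples:

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class ValidateOvf {
    public static void main(String[] args) throws Exception {
        Schema schema = SchemaFactory
            .newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
            .newSchema(new File("ovf-envelope.xsd"));
        Validator validator = schema.newValidator();
        // Throws SAXException pointing at the offending line if, e.g., the examples
        // qualify attributes that the schema declares as unqualified.
        validator.validate(new StreamSource(new File("example1.ovf.xml")));
        System.out.println("valid");
    }
}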

I also have a problem with the Section and Content elements that are just shells defined by the value of their xsi:type attribute. The authors claim it’s for extensibility (“the use of xsi:type is a core part of making the OVF extensible, since additional type definitions for sections can be added”) but there are better ways to do extensibility in XML (remember, that’s what the X stands for). It would be better to define an element per type (disk, network…). They could possibly be based on the same generic type in XSD. And this way you get more syntactic flexibility and you get the option to have sub-types of sub-types rather than a flat list. Interestingly, there is a comment inside the XSD that defines the Section type that reads “the base class for a section. Subclassing this is the most common form of extensibility”. That’s the right approach, but somehow it got dropped at some point.

Finally, the specification seems to have been formatted based on WS-Management (the first specification that mixed the traditional WS-spec conventions with the DMTF DSPxxxx format), which may explain why WS-Management is listed as a reference at the end even though it is not used anywhere in the specification. That’s fine, but it shows in a few places where more editing is needed. For example, requirement R1.5-1 states that “conformant services of this specification MUST use this XML namespace Universal Resource Identifier (URI): http://schemas.dmtf.org/ovf”. I know what a conformant service is for WS-Management, but I don’t know what it is for this specification. Also, the namespace that this requirement mentions is not actually defined or used by this specification, so the requirement is pretty meaningless. The table of namespaces that follows just after is missing some namespaces. For example, the prefix “xsi” is used on line 457 (xsi:any and xsi:AnyAttribute) and I want to say it’s the wrong one, as xsi is usually assigned to “http://www.w3.org/2001/XMLSchema-instance” and not “http://www.w3.org/2001/XMLSchema”, but since the prefix is not in the table it’s anyone’s guess (and BTW, it’s “anyAttribute”, not “AnyAttribute”).

By this point I may sound like I don’t like the specification. Not at all. I still stand by what I wrote in the second paragraph. It’s a good specification, and the subset of problems it addresses is a useful subset. There are a few things to fix in the current content and several more specifications to write to complement it, but it’s a very good first step and I am glad to see VMware and XenSource collaborating on this. Microsoft is nominally in support at this point, but it remains to be seen to what extent. I haven’t seen them in the past being very interested in standards efforts they are not driving, and so far this doesn’t appear to be something they are driving.

6 Comments

Filed under DMTF, Everything, IT Systems Mgmt, OVF, Specs, Standards, Tech, Virtualization, VMware, XenSource