Category Archives: IT Systems Mgmt

Forrester report on Oracle’s Enterprise Manager

Forrester’s Jean-Pierre Garbani wrote a short report last month about the current offering and future plans of Oracle’s IT management group (where I work).

As the report points out, Oracle’s IT management products don’t always enjoy a level of industry attention commensurate with the value they deliver. Hopefully this report will help change that.

Forrester: “Oracle Focuses On Business Value”.

1 Comment

Filed under Application Mgmt, BSM, Everything, IT Systems Mgmt, Oracle

Did someone at EDS miss the memo?

Two months ago, HP announced the acquisition of EDS.

One month later, HP Software announced a slew of new service management products, including an updated version (7.5) of Universal CMDB (from the Mercury acquisition).

One month later (today), according to BMC (with supporting quote from an EDS exec), “EDS Asia Pacific Standardises on BMC Software Atrium CMDB to Improve Service Delivery”.

As an ex-colleague pointed out to me, the acquisition isn’t closed yet. Still.

6 Comments

Filed under BSM, CMDB, Everything, HP, IT Systems Mgmt

WS Resource Access at W3C: the good, the bad and the ugly

As far as I know, the W3C is still reviewing the proposal that was made to them to create a new working group to standardize WS-Transfer, WS-ResourceTransfer, WS-Enumeration and WS-MetadataExchange. The suggested name, “Web Services Resource Access Working Group” or WS-RAWG is likely, if it sticks, to end up being shortened to WS-RAW. Which is a bit more cruel than needed. I’d say it’s simply half-baked.

There are many aspects to the specifications and features covered by the proposal. Some goodness, some badness and some ugliness. This post analyzes the good, points at the bad and hints at the ugly. Like your average family-oriented summer movie.

The good

The specifications proposed for W3C standardization describe a way to provide some generally useful features for SOAP messages. Some SOAP messages can get very long. In some cases, I know ahead of time what portion of the long messages promised by the contract (e.g. WSDL) I want. Wouldn’t it be nice, as an optimization, to let the message sender know about this so they can, if they are able to, filter down the message to just the part I want? Alternatively, maybe I do want the full response but I can’t consume it as one big message so I would like to get it in chunks.

You’ll notice that the paragraph above says nothing about “resources”. We are just talking about messaging features for SOAP messages. There are precedents for this. WS-Security can be used to encrypt a message. Any message. WS-ReliableMessaging can be used to ensure delivery of a message. Any message. These “quality of service” specifications are mostly orthogonal to the message content.

WS-RT and WS-Enumeration provide solutions to the “message filtering” and “message chunking” problems, respectively. But they only address them in the context of a GET-like operation. They can’t be layered on top of any SOAP message. How useful would WS-Security and WS-ReliableMessaging be if they had such a restriction?

If W3C takes on part of the work listed in the proposal, I hope they’ll do so in a way that extends the utility of these features to all SOAP messages.

And just like WS-Security and WS-ReliableMessaging, these features should be provided in a way that leverages the SOAP processing model. Such that I can judiciously use the soap:mustUnderstand attribute to not break existing services. If I’d like the message to be pared down but I can handle the complete message if need be, I’ll set this attribute to false. If I can’t handle the full message, I’ll set the attribute to true and I’ll get an error if the other party doesn’t understand this extension. At which point I can pick an alternative way to get the task accomplished. Sounds pretty basic, but it’s amazing how often this important feature of SOAP (which hails from and extends XML’s must-ignore semantics) is neglected and obstructed by designers of SOAP messages.
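To make this a bit more concrete, here is what setting that flag could look like in code. This is a minimal SAAJ sketch; the “Filter” header and its namespace are made up for illustration, only the mustUnderstand mechanics are the point:

```java
import javax.xml.namespace.QName;
import javax.xml.soap.*;

public class MustUnderstandDemo {
    public static void main(String[] args) throws Exception {
        SOAPMessage msg = MessageFactory.newInstance().createMessage();
        SOAPHeader header = msg.getSOAPHeader();
        // Hypothetical filtering extension header; the QName is made up.
        SOAPHeaderElement filter = header.addHeaderElement(
                new QName("http://example.com/ws-filtering", "Filter", "f"));
        // false: the receiver may ignore the header and return the full message.
        // true: the receiver must fault if it doesn't understand the extension.
        filter.setMustUnderstand(false);
        filter.addTextNode("//book/title"); // the part of the message I want
        msg.saveChanges();
        msg.writeTo(System.out);
    }
}
```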

And then there is WS-MetadataExchange. While I am not a huge fan of this specification, I agree with the need for a simple, reliable way to retrieve different types of metadata for an endpoint.

So that’s the (potential) good. A flexible and generally useful way to pare down long SOAP messages, to chunk them and to retrieve metadata for SOAP endpoints.

The bad

The bad is the whole “resource access” spin. It is not actually intrinsically bad. There are scenarios where such a pattern actually fits. But the way that pattern is being addressed by WS-RT and friends is overly generalized and overly XML-centric. By the latter I mean that it takes XML from an agreed-upon on-the-wire interchange format to an implicit metamodel (e.g. it assumes not just that you agree to exchange XML-formatted data but that your model and your business logic are organized and implemented around an XML representation of the domain, which is a much more constraining requirement). I could go on and on about this, especially the use of XPath in the PUT operation. In fact I did go on and on about it, but I spun that off as a separate entry.

In the context of the W3C proposal at hand, this is bad because it burdens the generally useful features (see the “good” section above) with an unneeded and limiting formalism. Not to mention the fact that W3C kind of already has its resource access mechanism, but I’ll leave that aspect of the question to Mark and various bloggers (see a short list of relevant posts at the end of this entry).

The resource access part might be worth doing (one more time), but probably not in the same group as things like metadata discovery, message filtering and message chunking, which are not specific to “resource access” situations. And if someone is going to do this again, rather than repeating the not too useful approaches of the past, it may be good to consider alternatives.

The ugly

That’s the politics around this whole deal. There is, as you would expect, a lot more to it than meets the eye. The underlying drivers for all this have little to do with REST/WS or other architecture considerations. They have a lot to do with control. But that’s a topic for another post (maybe) when more of it can be publicly discussed.

A lot of what I describe in this post was already explained in the WS-ManagementHammer post from a couple of months ago. But that was before the W3C proposal and before WS-MetadataExchange was dragged into the deal. So I thought it might be useful to put the analysis in the context of that proposal. And BTW, this is a personal opinion, not an Oracle position (which is true in general for everything on this blog but is worth repeating specifically for this post).

2 Comments

Filed under Everything, Grid, IT Systems Mgmt, Manageability, Mgmt integration, Modeling, SOAP, SOAP header, Specs, Standards, Tech, W3C, WS-Management, WS-ResourceTransfer, WS-Transfer, XMLFrag, XPath

Who needs XPath fragment-level PUT?

WS-Management and WS-ResourceTransfer (WS-RT) both provide a mechanism to modify the XML representation of the state of a resource in a fine-grained way. The mechanisms differ a bit: WS-Management defines a SOAP header and distinguishes PUT from DELETE at the WS-Transfer operation level, while WS-RT uses the SOAP body and tunnels “modes” (remove, modify, insert) on top of the PUT WS-Transfer operation. But in their complete form both use XPath to point to any arbitrary nodeset and update it.

WS-ResourceProperties (WS-RP) takes a simpler approach. While it too supports XPath-driven retrieval of the content, it doesn’t attempt to provide an XPath-like level of flexibility when it comes to updating the content. All it offers is SET, INSERT, UPDATE and DELETE operations at the level of a property (a top-level child of the XML representation) and nothing more granular.

In this respect at least, WS-RP makes a better choice than its competitor and its aspiring successor.

First, XPath-driven updates sound easy but in fact are hard to specify. Not surprisingly, the current specifications do a pretty incomplete job of it. They often seem to assume that the XPath used to target the value to change returns only one node, but nothing guarantees this. If it picks up more than one node, do you replace all these nodes by the new values as a block (the new values get inserted once, presumably at the location of the first selected node) or do you replace each selected node by all the new values (in which case they get duplicated as needed)? Also, the specifications say nothing about what constitutes compatibility between the targeted nodes and the replacement nodes. One might assume that a “don’t be stupid” approach is all that’s needed. But there is no obvious line between “stupid” and “useful”. Does a request to replace a text node by an attribute node make sense? Not in a strongly-typed world, but a more forgiving implementation might just insert the text value of the attribute in the place of the text node to get to a valid result. What about replacing an element by a text node? Some may reject it for incompatible types but, unless the schema prevents mixed content, it may well result in a perfectly valid document. All in all, specifying a reliable way to edit XML is a pretty hairy task. Much harder than reading XML. It requires very careful consideration that has very little to do with on-the-wire protocol considerations. Which is why doing this as part of a SOAP specification is a strange choice. The XQuery group is much more qualified for this. There must be a reason why that group decided to punt on this until they had taken care of the easier “read” case.
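A few lines of code show how quickly the ambiguity surfaces. The document and the XPath expression below are made up, but any implementer of a fragment-level PUT has to answer the question in the comments:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class FragmentPutAmbiguity {
    public static void main(String[] args) throws Exception {
        String xml = "<server><disk id='1'/><disk id='2'/></server>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // The XPath a fragment PUT might use to target the value to change.
        // Nothing guarantees it returns a single node:
        NodeList targets = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("/server/disk", doc, XPathConstants.NODESET);
        System.out.println(targets.getLength() + " nodes selected");
        // With two nodes selected, does a PUT of <disk id='3'/> replace them
        // as a block (inserted once) or replace each one (duplicating the
        // new value)? The specifications don't say.
    }
}
```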

Second, it’s usually not all that useful anyway. Which is why the lack of precision in WS-Management’s specification of the fragment PUT hasn’t really been a problem so far: people haven’t fully implemented that feature. A lot of the implementations are backed by a CIMOM, an MBean or some other OO store. In these stores, the exposed granularity is typically at the attribute level. The interactions used by programmers and consoles are also at that level. The XPath-driven update is then only used as a mechanism to update many properties at once (rather than going deep into individual properties), but that’s using a machine gun to kill a fly. The WS-RP approach supports these use cases without calling on XPath.

Third, XPath-driven PUT is really hard to implement unless your back-end store happens to be an XML database. You may end up having to write your own XPath parser and interpreter, an exercise during which you will face some impedance mismatches. Your back-end store may not have notions of property order for example, or attribute versus element. How do you handle these XPath instructions? And what kind of interoperability results from implementers having to make these decisions on their own? Implementing XPath selection on a GET is a lot simpler. All it assumes is that there is an XML serialization of the result, on which you can run the XPath expression before shipping it out. That XML serialization is a given in the SOAP world already. But doing an XPath-driven PUT injects XML considerations in your store itself, not just in the communication path.

Those are the practical reasons. In short, it makes the specifications at best complex and at worst non-interoperable, for a feature that is rarely needed. That should be enough already, but there are some architectural reasons to stay away too.

WS-Transfer is sometimes sold as REST over SOAP. And fragment-level WS-Transfer (what WS-Management and WS-RT do) is then REST on steroids. Sorry, not true. REST on crack if anything.

I am not a REST expert, but I know enough to understand that “everything has a URI” really means “anything meaningful has a URI”. It’s the difference between a crystal structure and a pile of mud. REST lets you interact directly with any node in the crystal, but there is a limited number of entities that are considered worthy of being a node. There is design involved (sorry, you can’t suddenly fire your architects, as attractive as that sounds). You can’t point to the space between two nodes in the crystal. XPath-on-top-of-WS-Transfer, on the other hand, lets you plunge your spoon anywhere in the pile of mud and scoop out whatever happens to be there.

Let’s take a look at WS-Federation (here is the latest draft), the only specification in a standard body that I know of that is currently using WS-RT. Whether it’s a wise choice or not for them, from a governance perspective, is a separate topic that I won’t cover here (answer: no. oops).

From a technical perspective, it is interesting to see how they went about using WS-RT PUT. They use it to update pseudonyms. But even though there is an XML representation for the pseudonyms, they don’t want to allow users to update any arbitrary part of that XML. So they create a specific dialect (the fed:FilterPseudonyms defined in section 6.1) that lets you, based on semantics that are meaningful in the specific domain covered by the specification, point to pseudonyms.

I believe most potential users of WS-RT PUT are in the same case as WS-Federation and are better served by a domain-specific way to identify entities of interest. At least the WS-Federation authors realized it, rather than saying “great, WS-RT XPath fragment PUT gives us all this flexibility for free” and saddling their implementers with the impossible task of producing interoperable implementations. Of course this begs the question of why WS-Federation uses WS-RT in the first place. A charitable interpretation is to pin this on overzealous re-use of all things WS-*. A more cynical interpretation sees this as a contrived precedent manufactured in an attempt to “prove” that WS-RT provides features of general use rather than specific to the management domain.

Having described at length why XPath-driven updates aren’t as useful as they may seem, I can still think of two cases where such a generic mechanism to modify an XML document could be useful. One is if the resource actually is a document (as opposed to having its state represented by a document). For example, a wiki page. But I haven’t exactly noticed wiki creators and users clamoring for wiki-over-SOAP, have you? The other situation is if you have a true model-driven system that is supported by a comprehensive system description and validation framework. The kind of thing that SML is trying to deliver. By using Schematron (rather than just XSD, which is very limited in its expressivity beyond mere syntactical validation) to provide model validation. This would, in theory, allow the requester to validate the updated model before sending the change request. The change would still be validated on the receiver side (either explicitly or implicitly because a non-valid new model would simply fail when applied to the system), but the existence of the validation framework guarantees a high rate of success (the sender would rarely send non-valid change requests). That’s very nice and exciting, but we don’t have this. SML is, as far as I can see, going nowhere fast in terms of adoption. Standardizing a model exchange protocol for that use case is, at this point in time, premature. Maybe one day.

5 Comments

Filed under Everything, IT Systems Mgmt, Mgmt integration, Modeling, REST, SML, Specs, Standards, WS-Management, WS-ResourceTransfer, WS-Transfer, XPath, XQuery

Moving towards utility/cloud computing standards?

This Forbes article (via John) channels 3Tera’s Bert Armijo’s call for standardization of utility computing. He calls it “Open Cloud” and it would “allow a company’s IT systems to be shared between different cloud computing services and moved freely between them”. Bert talks a bit more about it on his blog and, while he doesn’t reference the Forbes interview (too modest?), he points to Cloudscape as the vision.

A few early thoughts on all this:

  • No offense to Forbes but I wouldn’t read too much into the article. Being Forbes, they get quotes from a list of well-known people/companies (Google and Amazon spokespeople, a Forrester analyst, Nick Carr). But these quotes all address the generic idea of utility computing standards, not the specifics of Bert’s project.
  • Saying that “several small cloud-computing firms including Elastra and Rightscale are already on board with 3Tera’s standards group” is ambiguous. Are they on board with specific goals and a candidate specification? Or are they on board with the general idea that it might be time to talk about some kind of standard in the general area of utility computing?
  • IEEE and W3C are listed as possible hosts for the effort, but they don’t seem like a very good match for this area. I would have thought of DMTF, OASIS or even OGF first. On the face of it, DMTF might be the best place but I fear that companies like 3Tera, Rightscale and Elastra would be eaten alive by the board member companies there. It would be almost impossible for them to drive their vision to completion, unlike what they can do in an OASIS working group.
  • A new consortium might be an option, but a risky and expensive one. I have sometimes wondered (after seeing sad episodes of well-meaning and capable start-ups being ripped apart by entrenched large vendors in standards groups) why VCs don’t play a more active role in standards. Standards sound like the kind of thing VCs should be helping their companies with. VC firms are pretty used to working together, jointly investing in companies. Creating a new standard consortium might be too hard for 3Tera, but if the VCs behind 3Tera, Elastra and Rightscale got together and looked at the utility computing companies in their portfolios, it might make sense to join forces on some well-scoped standardization effort that may not otherwise be given a chance in existing groups.
  • I hope Bert will look into the history of DCML, a similar effort (it was about data center automation, which utility computing is not that far from once you peel away the glossy pictures) spearheaded by a few best-of-breed companies but ignored by the big boys. It didn’t really take off. If it had, utility computing standards might now be built as an update/extension of that specification. Of course DCML started as a new consortium and ended as an OASIS “member section” (a glorified working group), so this puts a grain of salt on my “create a new consortium and/or OASIS group” suggestion above.
  • The effort can’t afford to be disconnected from other standards in the virtualization and IT management domains. How does the effort relate to OVF? To WS-Management? To existing modeling frameworks? That’s the main draw towards DMTF as a host.
  • What’s the open source side of this effort? As John mentions during the latest Redmonk/Willis IT management podcast (starting around minute 24), there needs to be an open source side to this. Actually, John thinks all you need is the open source side. Coté brings up Eucalyptus. BTW, if you want an existing combination of standards and open source, have a look at CDDLM (standard) and SmartFrog (implementation, now with EC2/S3 deployment).
  • There seems to be some solid technical raw material to start from. 3Tera’s ADL, combined with Elastra’s ECML/EDML, presumably captures a fair amount of field expertise already. But when you think of them as a starting point to standardization, the mindset needs to switch from “what does my product need to work” to “what will the market adopt that also helps my product to work”.
  • One big question (at least from my perspective) is that of the line between infrastructure and applications. Call me biased, but I think this effort should focus on the infrastructure layer. And provide hooks to allow application-level automation to drive it.
  • The other question is with regard to the management aspect of the resulting system and the role management plays in whatever standard specification comes out of Bert’s effort.

Bottom line: I applaud Bert’s efforts but I couldn’t sleep well tonight if I didn’t also warn him that “there be dragons”.

And for those who haven’t seen it yet, here is a very good document on the topic (but it is focused on big vendors, not on how smaller companies can play the standards game).

[UPDATED 2008/6/30: A couple hours after posting this, I see that Coté has just published a blog post that elaborates on his view of cloud standards. As an addition to the podcast I mentioned earlier.]

[UPDATED 2008/7/2: If you read this in your feed viewer (rather than directly on vambenepe.com) and you don’t see the comments, you should go have a look. There are many clarifications and some additional insight from the best authorities on the topic. Thanks a lot to all the commenters.]

20 Comments

Filed under Amazon, Automation, Business, DMTF, Everything, Google, Google App Engine, Grid, HP, IBM, IT Systems Mgmt, Mgmt integration, Modeling, OVF, Portability, Specs, Standards, Utility computing, Virtualization

Progress Software acquires IONA… and MindReef too

The acquisition of IONA by Progress Software has been pretty widely written about (sometimes ironically). But that’s not the only thing happening in that neck of the woods. Less widely reported (but still covered on ZDNet, here and here) was their acquisition of MindReef. For the inside perspective, head over to Dan Foody’s blog.

This is yet another confirmation of the fact that testing and IT management are getting ever closer together. And for good reasons, if you want to better integrate application management tasks across the application’s lifecycle. Other signs of this were the recent acquisition of the e-Test suite from Empirix by Oracle (driven by Oracle’s application management team, not by the JDev team) and, some time ago, the fact that HP decided to hang on to Mercury’s testing business rather than spinning it off.

From the Progress perspective, the IT management side of course comes from the earlier Actional acquisition (that company itself having previously merged with WestBridge). From my earlier standards work I have worked with people from Actional, WestBridge and MindReef and there is an impressive wealth of SOAP and WS experience in these teams. What remains to be seen is how much management value can be derived from a very “on the wire” (as opposed to deep in the implementation) view of the interaction. Another challenge for SOAP-centric vendors, which might have been a driver for these acquisitions, is the realization that SOAP is going to be only one component of the integration landscape, not its foundation. It may be the JMS of B2B integration, but it won’t be its TCP/IP as was once assumed.

2 Comments

Filed under Everything, IT Systems Mgmt, SOAP, Testing

OVF in action: Kensho

Simon Crosby recently wrote about an upcoming Citrix product (I think that’s what it is, since he doesn’t mention open source anywhere) called Kensho. The post is mostly a teaser (the Wikipedia link in his post will improve your knowledge of oriental philosophy but not your IT management expertise) but it makes interesting claims of virtualization infrastructure interoperability.

OVF gets a lot of credit in Simon’s story. But, unless things have changed a lot since the specification was submitted to DMTF, it is still a wrapper around proprietary virtual disk formats (as previously explained). That wrapper alone can provide a lot of value. But when Simon explains that Kensho can “create VMs from VMware, Hyper-V & XenServer in the OVF format” and when he talks about “OVF virtual appliances” it tends to create the impression that you can deploy any OVF-wrapped VM into any OVF-compliant virtualization platform. Which, AFAIK, is not the case.

For the purpose of a demo, you may be able to make this look like a detail by having a couple of equivalent images and picking one or the other depending on the target hypervisor. But from the perspective of the complete lifecycle management of your virtual machines, having a couple of “equivalent” images in different formats is a bit more than a detail.

All in all, this is an interesting announcement and I take it as a sign that things are progressing well with OVF at DMTF.

[UPDATED 2008/6/29: Chris Wolf (whose firm, the Burton Group, organized the Catalyst conference at which Simon Crosby introduced Kensho) has a nice write-up about what took place there. Plenty of OVF-love in his post too, and actually he gives higher marks to VMWare and Novell than Citrix on that front. Chris makes an interesting forecast: “Look for OVF to start its transition from a standardized metadata format for importing VM appliances to the industry standard format for VM runtime metadata. There’s no technical reason why this cannot happen, so to me runtime metadata seems like OVF’s next step in its logical evolution. So it’s foreseeable that proprietary VM metadata file formats such as .vmc (Microsoft) and .vmx (VMware) could be replaced with a .ovf file”. That would be very nice indeed.]

[2008/7/15: Citrix has hit the “PR” button on Kensho, so we get a couple of articles describing it in a bit more details: Infoworld and Sysmannews (slightly more detailed, including dangling the EC2 carrot).]

Comments Off on OVF in action: Kensho

Filed under DMTF, Everything, IT Systems Mgmt, Manageability, Mgmt integration, OVF, Standards, Virtualization, Xen, XenSource

WS-Transfer, WS-ResourceTransfer, WS-Enumeration and WS-MetadataExchange on their way to W3C

A bit over a month ago, I mentioned my hope that WS-ResourceTransfer (WS-RT) would be allowed to rest in peace. This is apparently not to be and the specification is now on its way to W3C, along with WS-Transfer, WS-MetadataExchange and WS-Enumeration. This is not all that surprising and I had even hazarded a guess of who would join IBM in doing this. My list was IBM, CA, Fujitsu and Cisco. I got three out of four right, but Oracle replaced Cisco. The fact that the company I got wrong happens to be my employer is something I can’t really comment on, other than acknowledging the irony…

This is a very important development in the area of management standards. Some of the specifications listed here are used by WS-Management. They are also clearly intended to replace the WS-ResourceFramework stack that underpins WSDM. This is especially true of WS-RT which almost directly overlaps with WS-ResourceProperties. Users of both WS-Management and WSDM will take notice. As will those who have been standing on the side, waiting for things to stabilize…

If you are trying to relate this announcement to the WS-Management/WSDM convergence previously going on between Microsoft, IBM, HP and Intel (which is the forum in which WS-RT was originally produced), it looks like this is what the “convergence” has turned into. Except that three of the four vendors seem to have dropped out, thus my quotation marks around the word “convergence”.

The applicability of these specifications outside of the management domain seems to be assumed in this submission. It’s been often asserted but, in my mind, not yet proven. I don’t see the use of WS-RT by WS-Federation as a proof of this relevance (one of these days I’ll write a post to explain why).

It will be interesting to see how the W3C responds to this offer. The expected retort didn’t take long. If WS-RT wasn’t allowed to rest in peace, it won’t be allowed to REST in peace either. You can expect the blogosphere to light up with “WS-Transfer for RESTful applications” discussions (mostly making fun of WS-Transfer’s HTTP envy) very soon. Even though that’s just one of the many angles from which you can view this development, and not the most interesting one.

[UPDATED 2008/7/6: It took a little longer than expected, but the snarky/ironic blog posts have started: Steve, Mark, Tim, Bill, Stefan]

3 Comments

Filed under Everything, IT Systems Mgmt, Mgmt integration, SOAP, Specs, Standards, W3C, WS-Management, WS-ResourceTransfer, WS-Transfer

SaaS management: it’s MUWS and MOWS all over again

One of the most repetitive tasks when I was evangelizing WSDM was to explain the difference between the MUWS and MOWS specifications (the sum of which composes the entire WSDM body of work). MUWS (management using web services) describes how to use Web services to expose manageability capabilities of potentially any resource (a server, an application, a toaster…). MOWS (management of web services) defines a monitoring and control model for resources that are Web services themselves (so you can measure the number of messages received for example).

I ended up sounding like a cow when I was presenting. A retarded cow even, since my French accent forced me to say it slowly so people could hear the difference.

In retrospect, we should not have tried to tackle both in the same group. And not just because my dignity was bruised. It was a distraction inside the working group, and a source of confusion outside of it. We should have focused on MUWS (as WS-Management did) and possibly created a protocol-independent monitoring/control model for Web services separately. Something that, BTW, is still missing today.

I am being reminded of this MUWS vs. MOWS state of affairs these days, when the topics of SaaS and IT management meet, often under the term “SaaS management”. By that, some people mean “delivering IT management as a hosted service, rather than running the management software in the same datacenter as the application”. Others mean “managing, using an on-premise deployment of the management software, a business application that is being delivered as a service (e.g. Oracle CRM On Demand), along with other local IT resources”. The latter is what I was talking about in this post. And sometimes it’s both at the same time (the business application is delivered as a service along with a hosted management console for status/issues/requests…). Not to mention the extra dimension of providing IT management to the administrators in charge of running a multi-tenant application in a SaaS scenario (instead of meeting the needs of their customer’s administrators).

All of these scenarios are valid. So far, we don’t have good names for them. And the MUWS/MOWS experience shows that good names matter. IMaaS (IT Management as a Service) and MoSaaS (Management of Software as a Service) won’t cut it.

[UPDATED 2008/6/23: This seems to be an example of MoSaaS (or rather MoIaaS) delivered through IMaaS. I am subjecting you to such an awful-sounding sentence as a way to drive home the need for better names. The real value of course will come when these capabilities are delivered alongside (and integrated with) all your IT management capabilities. John has a nice analysis that lets some air out of the fluff.]

2 Comments

Filed under Application Mgmt, Everything, IT Systems Mgmt, SaaS, Standards, Utility computing

BMC acquires ITM Software

Another BMC acquisition today: ITM Software. Their software suite is designed to help drive IT decisions from the point of view of their business impact.

This is important, of course, for all the reasons that BMC, HP, Oracle and others have been explaining for a while (how often have you heard the word “alignment” over the last three years, compared to the previous thirty?). It’s becoming even more important now, as the options for IT sourcing (from the traditional “give it all to Unisys”, to SaaS, to running your own apps in a utility computing environment…) are multiplying. Choosing between Intel and AMD CPUs in your datacenter is a technical decision, but choosing between an on-premise application, a SaaS application and running your application on EC2 is driven by business considerations of cost, risks, control, flexibility, etc. And it’s not just a one-time decision, it’s the day to day management that follows these decisions.

I don’t know much about the current ITM offering, but it was never clear to me how much they could deliver as a narrow layer, separate from the heavy-duty IT management stack (I can see how they would deliver financial and project management tools, but what about *really* linking day to day IT administration decisions to the business impact?). Being part of BMC, presumably allowing deeper integration into real IT management operations, seems to make sense.

I just wish they didn’t make it sound so easy: “BMC’s purchase of ITM Software creates a unique, integrated solution that provides customers with a single comprehensive view into…”. So just signing the check creates the integration? Now I am going to get calls from our execs asking why it takes so much work to integrate acquired products, if BMC can do it the same day they sign the deal…

While I am at it, here is the press release that HP put out to list the announcements at their Software Universe conference this week. I notice that it’s all about new versions of ex-Mercury products. No OpenView, Peregrine or Opsware content, as far as I can tell. Without looking at it in more detail I don’t know how different these new versions really are. What appears pretty new is the SaaS offering (also based on Mercury products) at the end of the press release. On the nitpicking side, can anyone tell me what these “static configuration management databases” are that are “unable to support the real-time needs of today’s complex technology environments”? I can see how a “static” database would be hard-pressed to help, but I haven’t noticed any vendor selling read-only config stores.

[UPDATED 2008/6/18: More details about the HP announcement at InfoWorld. Including quotes from my ex-boss Ramin. Congrats on getting UCMDB 7.5 out of the door!]

2 Comments

Filed under Application Mgmt, BSM, Business, CMDB, Everything, HP, IT Systems Mgmt, ITIL, Mgmt integration

More clues on the Oslo/SCA/SML trail: it’s “D”

I just found out that I completely missed some interesting information about Oslo-related efforts at Microsoft. Back in February, Mary-Jo Foley reported on a new modeling language (code-name “D”, apparently) that is part of this initiative. And more recently she reported that David Chappell gave a presentation about Oslo (and more generally Microsoft’s SOA plans) at TechEd. He reportedly said that we should expect a new “schema language” (which Mary-Jo thinks is “D”). What I want to know is what its relationship is with SML/SDM and SCA.

Mary-Jo might not know about SCA and SML but I know that David does. He wrote this white paper about SCA and an article arguing that “Microsoft Should Not Support SCA” (based on a questionable assessment that SCA is only about portability). He and I also had a little back-and-forth about SCA, SML and Microsoft in the comments section of his post. Unfortunately, David hasn’t blogged about Microsoft’s SOA strategy for a while for us non-TechEd people.

In addition to Mary-Jo’s report, the only information I was able to quickly dig out about David’s presentation is this blog post on Microsoft’s Israel site. Looks like David gave the same presentation at TechEd Israel 2008. Anyone who understands Hebrew care to translate the post? Fortunately there is a two-minute video (also available here) in which we can hear David talk (in English). During the second of the two minutes you’ll hear and see something that could come straight out of a SCA presentation…

For some reason, David’s TechEd Israel presentation doesn’t seem to be listed here and TechEd online tells me that “Featured videos are unavailable at this time”. That’s both for IT Professionals and Developers. But of course they forced me to install Silverlight before telling me that.

[UPDATED 2008/8/11: Here is a 14 minutes video interview of David Chappell providing an update on Oslo.]

3 Comments

Filed under Application Mgmt, Automation, Conference, Desired State, Everything, IT Systems Mgmt, Mgmt integration, Microsoft, Modeling, Oslo, SCA, SML, Tech

Mapping CIM associations to CMDBf relationships

This post started as a comment on the blog of Van Wiles. When it became too long (and turned into a therapeutic rant at the end) I turned it into a blog post of its own. Please, read Van’s post first. Here is my response to him:

Hi Van. Sounds like what you are after is not a mapping of the CIM_Dependency association to a CMDBf record type (anyone can make up such a mapping as you point out), but a generic algorithm to map any CIM association to a corresponding CMDBf relationship record type. Correct? That algorithm needs to handle the fact that the CIM metamodel has the concept of relationship roles while the CMDBf metamodel doesn’t.

Here is a possible such mapping:

  1. Take a CIM association (called “myAssociation”) that has two roles (called “thisOne” and “theOtherOne”).
  2. Take the item whose role name comes first alphabetically and make it the source (in this example, it is “theOtherOne”)
  3. Take the item whose role name comes second alphabetically and make it the target (in this example, it is “thisOne”)
  4. Generate a CMDBf record type called “{associationName}_from_{firstRoleNameAlphabetically}_to_{secondRoleNameAlphabetically}”

You’re done. The new CMDBf record type is “myAssociation_from_theOtherOne_to_thisOne”, the source is the item with the role “theOtherOne” and the target is the item with the role “thisOne”. Everyone who follows this algorithm (of course it needs to be formally defined and evangelized, there is no guarantee here unless we bake CIM-specific concepts in the core CMDBf specification, which would be a mistake) will produce the same CMDBf relationship record type for a given CIM association.

Applied to the CIM_Dependency example, this would generate a “CIM_Dependency_from_Antecedent_to_Dependent” CMDBf record type, in which the source is the CIM Antecedent and the target is the CIM Dependent.
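For what it’s worth, here is a minimal Java sketch of the algorithm above (the class and method names are mine, the inputs come from the examples in this post):

```java
import java.util.Arrays;

public class CimToCmdbfMapping {
    // The algorithm above: order the two CIM role names alphabetically and
    // derive a deterministic CMDBf relationship record type name from them.
    static String recordType(String association, String role1, String role2) {
        String[] roles = {role1, role2};
        Arrays.sort(roles); // roles[0] is the source role, roles[1] the target
        return association + "_from_" + roles[0] + "_to_" + roles[1];
    }

    public static void main(String[] args) {
        // prints: myAssociation_from_theOtherOne_to_thisOne
        System.out.println(recordType("myAssociation", "thisOne", "theOtherOne"));
        // prints: CIM_Dependency_from_Antecedent_to_Dependent
        System.out.println(recordType("CIM_Dependency", "Antecedent", "Dependent"));
    }
}
```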

Alternatively, you can have the algorithm generate two CMDBf relationship record types (one going in each direction) for each CIM association. So you don’t have to arbitrarily pick the first one (alphabetically) as the source. But then you need to have model metadata to capture the fact that these relationships are the inverse of one another (and imply one another). As you well know, I have been advocating for the use of RDF/RDFS/OWL in CMDBf for a while. :-)

In the end, there are three potential approaches:

1) Someone (the CMDBf group or someone else) creates an authoritative mapping for all CIM associations (or at least all the useful ones) and we expect anyone who uses the CIM model with CMDBf to use that mapping.

2) Someone (again, the CMDBf group or someone else) defines a normative CIM to CMDBf mapping, e.g. the one above, and we expect anyone who generates a CMDBf relationship record type from a CIM association to use this mapping algorithm. From a pure logical perspective, it is the same as defining a CMDBf record type for each CIM association (approach 1), but it is less work and it doesn’t have to be updated every time a CIM association is created/versioned. At the cost of uglier (more arbitrary) CMDBf record types being defined.

3) We let people define the relationships in whatever way they choose and we provide a model metadata framework (aka ontology language) to allow mappings between these approaches. For example, you define, in your namespace, a van:CIM-inspired-dependency CMDBf record type that goes from antecedent to dependent. Separately, I define, in my namespace, a william:CIM-like-dependency CMDBf record type that carries the same semantics (defined, not so precisely BTW but that’s a different topic, by CIM) except that its source is the dependent and its target is the antecedent. The inverse of yours. A suitable ontology language would allow someone (you, me, or a third party who has to assemble a system that uses both relationship types) to assert that mine is the inverse of yours. Once this assertion is captured, a request for any [A]—(van:CIM-inspired-dependency)—>[B] would also return the instances of [B]—(william:CIM-like-dependency)—>[A] because they are known to be the same. And you know how I am going to conclude, of course: OWL (specifically owl:inverseOf) provides just this.
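Here is what that single assertion could look like using Jena’s ontology API (a sketch; the namespace URIs are placeholders standing in for the hypothetical van: and william: namespaces):

```java
import com.hp.hpl.jena.ontology.ObjectProperty;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Resource;

public class InverseRecordTypes {
    public static void main(String[] args) {
        // Placeholder namespaces standing in for the van: and william: prefixes.
        String VAN = "http://example.com/van#";
        String WILLIAM = "http://example.com/william#";
        // OWL Micro is enough: it supports owl:inverseOf entailment.
        OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
        ObjectProperty vanDep = m.createObjectProperty(VAN + "CIM-inspired-dependency");
        ObjectProperty williamDep = m.createObjectProperty(WILLIAM + "CIM-like-dependency");
        vanDep.addInverseOf(williamDep); // the one assertion that ties them together

        Resource a = m.createResource(VAN + "A");
        Resource b = m.createResource(VAN + "B");
        a.addProperty(vanDep, b);
        // The inverse statement is now entailed, so a query for either
        // relationship record type finds the same instances:
        System.out.println(m.contains(b, williamDep, a)); // true
    }
}
```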

BTW, approach 3 is not incompatible with 1 or 2. Whether or not we define mappings for CIM relationships and whether or not that mapping gets adopted, there will be plenty of cases in a federated scenario in which you need to reconcile models (CIM-based or not). Model metadata (aka an ontology language) is useful anyway.

Readers who only care about the technical aspects and have little time for rants can stop reading here. But, since I haven’t addressed any constructive criticism to the DMTF in a while, I can’t resist the opportunity to point out that if the mailing list archives for the DMTF working groups were publicly available, we wouldn’t have to have these discussions on our personal blogs. I am very glad that Van posted this on his blog because it is a question that many people will have. Whatever the CMDBf specification ends up doing, developers and architects who make use of it will benefit from having access to the deliberations and considerations that resulted in the specification being what it is. There are many emails in the CMDBf mailing list private archive that I am sure would be useful to future CMDBf implementers, but if they don’t show up on Google they don’t exist for any practical purpose. When grappling with the finer points of some specification or programming language I have often Googled my way into email archives (or old specification drafts) of the working groups that designed them. Sometimes I come out thinking “oh, ok, now I understand why they chose that approach” and other times it’s “ok, that’s what I suspected, these guys were high”. Either way, it’s useful to me as a user of the specification. W3C is the best example (of making working group records available, not of being high): not only is the mailing list available but the phone meetings often have a supporting IRC channel in which key points of the discussion get captured and archived. Here is an example. Making life easier for implementers is probably the single most important thing to make a specification successful. And ultimately, that’s the DMTF’s success too.

And it’s not just for developers and architects. It also impacts industry observers and pundits. Like the IT Skeptic who looked into CMDBf and reported “nothing on the DMTF website but press releases. try to find anything by navigating from the homepage”. And you wonder why his article is titled “the CMDB Federation proceeeds (sic) at its usual glacial pace”. There is good work going on, but there is no way for him to see it. This too is bad for the adoption and credibility of DMTF specifications.

Isn’t it ironic that the DMTF expends resources to sponsor a “hospitality suite” at the Burton Group Catalyst conference (presumably to spread the word about the good work taking place in the organization) but fails to make it easy for the industry to see that same good work taking place? It’s like a main street retail shop that advertises in the newspaper but covers its store window with cardboard, preventing passersby from seeing what’s on offer. I notice that all the other “hospitality suites” seem to be staffed by for-profit vendors (Oracle, IBM, Cisco, Microsoft etc are all there). Somehow W3C and OASIS (whose work is very relevant to some of the conference themes, like identity management and SOA) don’t feel the need to give away pens and key chains at the conference.

Dear DMTF, open source is not just good for code.

2 Comments

Filed under CA, CMDB Federation, CMDBf, Conference, DMTF, Everything, IT Systems Mgmt, Mgmt integration, Modeling, RDF, Semantic tech, Specs, Standards, Trade show, W3C

Oracle is now a leading vendor of application testing tools

The e-TEST suite (previously from Empirix) has turned into a set of Oracle products for application testing (sorry, application quality management). Ever since the announcement of the deal, a couple of months ago, Oracle sales reps have received many unsolicited requests for that product, validating its good reputation in the market. If you have spent any time around sales reps, you know that for them to tell their customers that for the time being they should purchase from Empirix was about as pleasant as for Hillary Clinton to endorse Barack Obama. Fortunately, this awkward period is over. Not only can people buy the products from Oracle, all the technical support (even for people who bought from Empirix) is now provided by Oracle.

I probably don’t need to say it (since the products were created by an independent company) but just to be clear, nothing in these products requires the target Web applications to use Oracle infrastructure (e.g. DB, Middleware) or to be otherwise managed by Oracle Enterprise Manager. They will work happily with your Ruby-on-Rails application hosted on RedHat or your .NET Web application.

And it’s not just for HTML-driven, user-facing applications: XML-producing web services can also be tested that way.

Comments Off on Oracle is now a leading vendor of application testing tools

Filed under Application Mgmt, Everything, IT Systems Mgmt, Manageability, Oracle, Testing

Recent IT management announcements

There were a few announcements relevant to the evolution of IT management over the last week. The most interesting is VMware’s release of the open-source (BSD license) VI SDK, a Java API to manage a host system and the virtual machines that run on it. Interesting that they went the way of a language-specific API. The alternatives, to complement/improve their existing web services SDK, would have been to:

  • define CIM classes and implement a WBEM provider (using CIM-HTTP and/or WS-Management),
  • use WS-Management but without the CIM part (define the model as native XML, not XML-from-CIM),
  • use a RESTful HTTP-driven interface to that same native XML model, or
  • on the more sci-fi side, go the MDA way with a controller from which you retrieve the observed state and to which you specify the desired state.

The Java API approach is the easiest one for developers to use, as long as they can access the Java ecosystem and they are mainly concerned with controlling the VMWare entities. If the management application also deals with many other resources (like the OS that runs in the guest machines or the hardware under the host, both of which are likely to have CIM models), a more model-centric approach could be more handy. The Java API of course has an underlying model (described here), but the interface itself is not model-centric. So what of all the DMTF-love that VMWare has been displaying lately (OVF submission, board membership, hiring of the DMTF president…)? Should we expect a more model-friendly version of this API in the future? How does this relate to the DMTF SVPC working group that recently released some preliminary profiles? The choice to focus on beefing up the Java-centric management story (which includes Jython, as VMWare was quick to point out) rather than the platform-agnostic, on-the-wire-interop side might be seen by the more twisted minds as a way to not facilitate Microsoft’s “manage VMWare today to replace it tomorrow” plan any more than necessary.

Speaking of Microsoft, in unrelated news we also got a heartbeat from them on the Oslo project: a tech preview of some of the components is scheduled for October. When Oslo was announced, there was a mix of “next gen BizTalk” aspects and “developer-driven DSI” aspects. From this report, the BizTalk part seems to be dominating. No word on use of SML.

And finally, SOA Software (who was previously called Digital Evolution and who acquired Blue Titan, Flamenco and LogicLibrary, in case you’re trying to keep track) has released a “SOA Development Governance Product”. Nothing too exciting from what I can see on InfoQ about it, but that’s a pretty superficial evaluation so don’t let me stop you. Am I the only one who twitches whenever “federation” is used to mean at worst “import” or at best “synchronization”? Did CMDBf start that trend? BTW, is it just an impression or did SOA Software give InfoQ a list of the questions they wanted to be asked?

4 Comments

Filed under DMTF, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Open source, Oslo, OVF, SML, Standards, Tech, Virtualization, VMware, WS-Management

RESTful JMX access from someone who knows both sides

Anyone interested in application manageability and/or management integration should read about Jean-Francois Denise’s prototype for RESTful Access to JMX Instrumentation. Not (at least for now) as something to make use of, but to force us to think pragmatically about the pros and cons of the WS-* stack when used for management integration.

The interesting question is: which of these two interfaces (the WS-Management-based interface being standardized or the HTTP-centric interface that Jean-Francois prototyped) makes it easier to write a cross-platform management application such as the poker-cheating demo at JavaOne 2008?

Some may say that he cheated in that demo by using the Microsoft-provided WinRM implementation of WS-Management on the VBScript side. Without it, it would have clearly been a lot harder to implement the WS-Management based protocol in VBScript than the REST approach. True, but that’s the exact point of standards, that they allow such libraries to be made available to assist implementers. The question is whether such a library is available for your platform/language, how good and interoperable that library is (it could actually hinder rather than help) and what is the cost to the project of depending on it. Which is why the question is hard to answer in the absolute. I suspect that, even with WinRM, the simple use case demonstrated at JavaOne would have been easier to implement using straight HTTP but that things change quickly when you run into more demanding use cases (e.g. event notification with filters, sequencing of large responses into an enumeration…). Which is why I still think that the sweet spot would be a simplified WS-Management specification (freed of the WS-Addressing crud for example) that makes it easy (almost as easy as the HTTP-based interface) to implement simple use cases (like a GET) by hand but is still SOAP-based, which lets it seamlessly enter library-driven territory when more advanced features are added (e.g. WS-Security, WS-Enumeration…). Rather than the current situation in which there is a protocol-level disconnect between the HTTP interface (easy to implement by hand) and the WS-Management interface (for which manual implementation is a cruel – and hopefully unusual – punishment).
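For comparison, here is roughly all it takes to do a simple GET by hand against an HTTP-centric interface. The URL shape is hypothetical (don’t take it as Jean-Francois’ actual scheme), but the point is the absence of any SOAP/WS-Addressing machinery:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestJmxGet {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL shape: one resource per MBean attribute.
        URL url = new URL("http://localhost:8080/jmx/java.lang:type=Memory/HeapMemoryUsage");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // the attribute's serialized representation
        }
        in.close();
        // No SOAP stack, no WS-Addressing headers: a dozen lines by hand.
    }
}
```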

So, Jean-Francois, where is this JMX-REST work going now?

While you’re on Jean-Francois’ blog, another must-read is his account of the use of Wiseman and Metro in the WS Connector for JMX Agent RI.

As a side note (that runs all the way to the end of this post), Jean-Francois’ blog is a perfect illustration of the kind of blogs I like to subscribe to. He doesn’t feel the need to post all the time. But when he does (only four entries so far this year, three of them “must read”), he provides a lot of insight on a topic he really understands. That’s the magic of RSS/Atom. There is zero cost to me in keeping his feed in my reader (it doesn’t even appear until he posts something). The opposite of what used to be conventional wisdom (that you need to post often to “keep your readers engaged”, as the HP guidelines for bloggers used to say). Leaving the technology aside (there is nothing to RSS/Atom technologically other than the fact that they happen to be agreed-upon formats), my biggest hope for these specifications is that they promote that more thoughtful (and occasional) style of web publishing. In my grumpy days (are there others?), an “I can’t believe United lost my luggage again” or “look at the nice flowers in my backyard” post is an almost-automatic cause for unsubscribing (the “no country for old IT guys” series gets a free pass though).

And Jean-Francois even manages to repress his Frenchness enough to not take snipes at people just for the fun of it. Another thing I need to learn from him. For example, look at this paragraph from the post that describes his use of Wiseman and Metro:

“The JAX-WS Endpoint we developed is a Provider<SOAPMessage>. Simply annotating with @WebService was not possible. WS-Addressing makes intensive use of SOAP headers to convey part of the protocol information. To access to such headers, we need full access to the SOAP Message. After some redesigning of the existing code we extracted a WSManAgent Class that is accessible from a JAX-WS Endpoint or a Servlet.”

In one paragraph he describes how to do something that IBM has been claiming for years can’t be done (implement WS-Management on top of JAX-WS). And he doesn’t even rub it in. Is he a saint? Good thing I am here to do the dirty work for him.

BTW, did anyone notice the irony that this diatribe (which, by now, is taking as much space as the original topic of the post) is an example of the kind of text that I am glad Jean-Francois doesn’t post? You can take the man out of standards, but you can’t take the double standard out of the man.

[UPDATED 2008/6/3: Jean-Francois now has a second post to continue his exploration of marrying the Zen philosophy with the JMX technology.]

2 Comments

Filed under Application Mgmt, CMDB Federation, Everything, Implementation, IT Systems Mgmt, JMX, Manageability, Mgmt integration, Open source, SOAP, SOAP header, Specs, Standards, WS-Management

Oedipus meets IT management?

Having received John’s approval to reclaim the “mighty” adjective, I am going to have a bit of fun with it. More specifically, I am toying with adding VMWare to the list. Clearly, VMWare doesn’t want to go the way Sun did with Solaris (nice technology, right place at the right time, but commoditized in the long term). They have supposedly surrounded themselves with a pretty good patent minefield to slow the commoditization trend, but it will happen anyway and they know it. Especially with improved virtualization support in hardware making some of these patents less relevant. For this reason, they are putting a lot of effort into developing the IT management side of their portfolio.

One illustration of this is the fact that VMWare recently recruited the Senior VP of systems management at Oracle to become its Executive VP of R&D (incidentally, this happened a couple months after I joined his team at Oracle; maybe the knowledge that he wouldn’t have to deal with my bad sense of humor for too long made it easier for him to approve my hiring). I don’t think it’s a coincidence that they chose someone who is not a virtualization expert but an enterprise infrastructure expert (namely database performance and management software).

So, do we have the “Mighty Four” (Oracle, Microsoft, EMC and VMWare) for a nice symmetry with the “Big Four” (HP, IBM, BMC and CA)? Or does the fact that EMC owns most of VMWare make us pause here? Might a mighty mother a mighty? How do you run an 85%-owned company whose strategic direction takes it toward direct competition with its corporate owner? EMC and VMWare are attacking IT management from different directions (EMC is actually going at it from several directions at the same time, based on its historical storage products, plus new software from acquisitions, plus hiring a few smart people away from IBM to put the whole thing together), so on paper their portfolios look pretty complementary. But while aligning and collaborating more closely may make sense from a product engineering perspective, it doesn’t make sense from a financial engineering perspective. At least as long as investors are so hungry for the few VMWare shares available on the open market (as a side issue, I wonder if they like it so much because of the virtualization market per se or because they see VMWare’s position in that market as a beachhead for the larger enterprise IT infrastructure software market). And, as should not be surprising, the financial view is likely to prevail, which will keep the companies at arm’s length. But if both VMWare and EMC are successful in assembling a comprehensive enterprise infrastructure management system, things will get interesting.

[UPDATED 2008/5/28: The day after I write this, VMWare buys application performance management vendor B-hive. I am pretty lucky with my timing on this one.]

2 Comments

Filed under Everything, IT Systems Mgmt, Patents, People, Virtualization, VMware

I have seen the future of CMDBf

I got a sneak peek at CMDBf v2 today.

I am calling it v2 based on the assumption that the one currently being standardized in DMTF will end up being called 1.0 (because it’s the first one out of DMTF) or 1.1 (to prevent confusion with the submitted version).

At the Semantic Technology Conference, David Booth from HP presented his work (along with his partner, Steve Battle from HP Labs) to provide a SPARQL front-end to HP’s Universal CMDB (the engine under what was the Mercury MAM product). Here are the slides.

The mapping from SPARQL to TQL (the native query interface for UCMDB) was made pretty easy by the fact that TQL is a graph-oriented query language. How much harder would it be to similarly transform a CMDBf (v1) query interface into a SPARQL query interface (and vice versa)? Not much. The only added difficulty would come from the CMDBf XPath constraints. TQL has a property value mechanism that is very similar to CMDBf’s “propertyValue” constraint and maps well to SPARQL functions. The introduction of XPath as a constraint language in CMDBf makes things harder. It could be handled by adding XPath support to the SPARQL engine using function extensibility. Or by turning the entire XML into RDF and emulating XPath in SPARQL. But in either case, you’ll have an impedance mismatch at some point, because concepts such as element order that exist in XPath have no native equivalent in RDF.
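To make this a bit more concrete, here is a minimal sketch, using the Jena/ARQ API (the toolkit the HP prototype builds on), of what a CMDBf “propertyValue” equality constraint could look like once rendered as a SPARQL FILTER. The namespace, property name and input file are hypothetical illustrations of mine, not taken from the actual prototype:

  // Minimal sketch: a CMDBf-style "propertyValue" equality constraint
  // expressed as a SPARQL FILTER over an RDF rendering of CMDB content.
  // The ex: namespace, ipAddress property and input file are hypothetical.
  import com.hp.hpl.jena.query.*;
  import com.hp.hpl.jena.rdf.model.Model;
  import com.hp.hpl.jena.rdf.model.ModelFactory;

  public class PropertyValueAsSparql {
      public static void main(String[] args) {
          Model model = ModelFactory.createDefaultModel();
          model.read("file:cmdb-export.rdf"); // hypothetical RDF export of CMDB content

          String sparql =
              "PREFIX ex: <http://example.org/cmdb#>\n" +
              "SELECT ?item WHERE {\n" +
              "  ?item ex:ipAddress ?ip .\n" +
              "  FILTER (?ip = \"10.0.0.1\")\n" + // the propertyValue "equal" operator
              "}";

          QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(sparql), model);
          ResultSet results = qe.execSelect();
          while (results.hasNext()) {
              System.out.println(results.nextSolution().get("item"));
          }
          qe.close();
      }
  }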

The use of XPath in selectors on the other hand is not a problem. HP’s prototype uses Gloze (available as a Jena package) to turn the XML returned by UCMDB into RDF. An XSLT transform could turn that same XML into a CMDBf-valid XML response instead and that XSLT could easily handle the XPath selectors from the query request. This is another reason why constraints and selectors should remain separate in CMDBf (fortunately the specification is back to doing this properly).
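For illustration, here is a sketch of that alternative path using nothing more than the standard JAXP transform API. The stylesheet and file names are hypothetical placeholders, not HP’s actual artifacts:

  // Sketch: turn UCMDB's XML output into a CMDBf-valid response via XSLT.
  // "ucmdb-to-cmdbf.xsl" is a hypothetical stylesheet; it would also be the
  // natural place to apply the XPath selectors from the CMDBf query request.
  import javax.xml.transform.Transformer;
  import javax.xml.transform.TransformerFactory;
  import javax.xml.transform.stream.StreamResult;
  import javax.xml.transform.stream.StreamSource;

  public class UcmdbToCmdbf {
      public static void main(String[] args) throws Exception {
          Transformer t = TransformerFactory.newInstance()
              .newTransformer(new StreamSource("ucmdb-to-cmdbf.xsl"));
          t.transform(new StreamSource("ucmdb-response.xml"),
                      new StreamResult("cmdbf-response.xml"));
      }
  }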

Here is why I call this prototype CMDBf v2: the CMDBf effort (v1 or 1.1), in its current form of re-inventing a graph query language, can succeed. Let’s assume the working group strikes a reasonable balance between completeness and complexity, and vendors choose to compete on innovation and execution rather than lock-in (insert cynical comment here). CMDBf may then end up being supported by the main CMDB vendors. It wouldn’t provide federation capabilities, but having a common CMDB query interface supported by the Big Four would help with management integration. And yet, while the value would be real, it would only provide a little help to solve a larger problem:

  • As a technology limited to IT systems management, it would be unlikely to see widely available tools (e.g. user consoles and language-specific libraries).
  • It wouldn’t get the kind of robustness and interoperability that comes from wide adoption. While pretty similar, there might be some minor differences in the various implementations. Once your implementation has been tweaked to work with the implementations from the Big Four, you’ll call it done. Just like SNMP, another technology that is specific to IT systems management (see it happen here).
  • Even if it works perfectly at the query level, it will just hasten the time when developers run into the real problem, model interoperability. CMDBf doesn’t help at all with this. In fact, it makes it harder by hard-coding some dependencies on an XML back-end (the XPath constraints).

In the long run, IT management has to become more automated and integrated. That’s a given. The way it happens may or may not go through CMDB-like configuration stores. But if it does, we’ll have to eventually move beyond CMDBf (v1) towards something that addresses the three requirements above. And federation. I don’t know if it will be called CMDBf v2, and/or if it will come from the DMTF (by then, the CMDBf brand might be an asset or a liability depending on developer experience with the specification). But I strongly suspect (“probability 0.8” as a Gartner analyst might put it) that it will use semantic technologies. Because the real, hard, underlying problem is a problem of semantic integration. In that sense, David and Steve’s prototype is a sneak peek at what will come after CMDBf v1/1.1.

Pretty much since the beginning of CMDBf I have been pushing for it to ideally embrace SPARQL (with no success) or at least to stay close to it conceptually in order to make the eventual mapping/evolution smooth (with a bit more success). This includes pushing for a topological query language, trying to keep XML idiosyncrasies at bay and keeping constraints and selectors cleanly separated. Rather than working within the CMDBf group, David took the alternative approach of simply doing it. Hopefully this will help convince people of the value of re-using semantic web technology for IT systems management. Yes, semantic technologies have been designed for a much more general use case. But the use cases that CMDB systems address are a subset of the use cases addressed by semantic technologies. It’s hard for domain experts to see their domain as just a subset of a larger problem, but this is the case here. Isn’t HTTP serving the IT management community better than a systems management-specific alternative would?

By the way, there is no inferencing taking place in the HP prototype. We are just talking about re-using an existing, well-thought-through graph query language. Sure, OWL inferencing and some rules could be seamlessly layered on top of this. But this is in no way required to do (better) what CMDBf v1 tries to do.
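To show just how optional that layer is, here is a hedged sketch (hypothetical data again, and not something the HP prototype does) of wrapping the same model in Jena’s OWL reasoner. The query code sitting on top would not change at all; it would simply see entailed statements in addition to asserted ones:

  // Sketch: optional OWL inferencing layered on top of the same model.
  // Queries run against "inferred" instead of "base"; nothing else changes.
  import com.hp.hpl.jena.rdf.model.InfModel;
  import com.hp.hpl.jena.rdf.model.Model;
  import com.hp.hpl.jena.rdf.model.ModelFactory;
  import com.hp.hpl.jena.reasoner.ReasonerRegistry;

  public class OptionalInference {
      public static void main(String[] args) {
          Model base = ModelFactory.createDefaultModel();
          base.read("file:cmdb-export.rdf"); // hypothetical, as in the earlier sketch
          InfModel inferred = ModelFactory.createInfModel(
              ReasonerRegistry.getOWLReasoner(), base);
          System.out.println("statements including entailments: " + inferred.size());
      }
  }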

And then there is the “federation” question. Who do you trust more to deliver this? A bunch of IT systems management architects in DMTF, or the web and query experts at W3C, HP Labs etc. who designed and implemented SPARQL over many years? BTW, it sounds like SPARQL federation was discussed at WWW 2008, based on these meeting notes (search for “federation”).
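As a back-of-the-envelope illustration of how much of the plumbing already exists, here is a sketch of the most naive form of federation: running one query against two hypothetical CMDB SPARQL endpoints through ARQ’s remote query support and merging the bindings client-side. Real federated query planning is of course much harder than this, which is exactly why I’d rather see it done by people who have been at it for years:

  // Sketch: naive client-side federation over two hypothetical SPARQL
  // endpoints, each fronting a CMDB. No query planning, just merged results.
  import com.hp.hpl.jena.query.*;

  public class NaiveFederation {
      public static void main(String[] args) {
          Query query = QueryFactory.create(
              "PREFIX ex: <http://example.org/cmdb#> " +
              "SELECT ?item ?ip WHERE { ?item ex:ipAddress ?ip }");
          String[] endpoints = {
              "http://cmdb-a.example.com/sparql",   // hypothetical
              "http://cmdb-b.example.com/sparql" }; // hypothetical
          for (String endpoint : endpoints) {
              QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query);
              ResultSet results = qe.execSelect();
              while (results.hasNext()) {
                  System.out.println(endpoint + " -> " + results.nextSolution());
              }
              qe.close();
          }
      }
  }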

2 Comments

Filed under Automation, CMDB, CMDB Federation, CMDBf, Conference, DMTF, Everything, Graph query, HP, IT Systems Mgmt, Query, RDF, Semantic tech, SPARQL, Standards, W3C, XPath

JSR262 public review ballot

The Public Review Ballot for JSR #262 that took place in the Executive Committee for SE/EE has closed. I am not familiar enough with the JCP process to know exactly what this milestone represents. But the results are interesting in any case.

The vote narrowly passed with 6 yes, 5 no and 1 abstain.

The overriding concern listed by the “no” voters (and several of the “yes” voters) is the fact that JSR262 uses WS-Management (a DMTF standard) which itself makes use of specifications that have been submitted to W3C but are not currently in the process of standardization (WS-Transfer, WS-Eventing, WS-Enumeration). And that it uses an older version of a now-standard specification (WS-Addressing).

SAP makes the most insightful comment: that this is not really a JCP problem but a DMTF problem. Hopefully the DMTF (and Microsoft, since it controls the fate of the specifications in question) will step up to the plate on this. This is likely to happen. Even if the DMTF and Microsoft didn’t care about making the JCP happy (but they do, don’t they?), they will run into similar issues if/when they push WS-Management towards ANSI/ISO standardization.

Next to this “non-standard dependencies” issue, there is only one technical issue mentioned. As you guessed, it’s IBM whining about the lack of a WSDL to feed their tools. This is becoming so repetitive that I may eventually stop making fun of it (but don’t hold your breath, I am not known for being very good at ending long-running jokes). It is pretty ironic to hear IBM claim that without that WSDL you can’t implement the spec on JAX-WS when you know that the Wiseman reference implementation by Sun and HP is based on JAX-WS…
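For those keeping score at home, here is roughly what a JSR262 client looks like (host, port and path are made up). Note the absence of any WSDL: just the standard JMX remote API with a “ws” service URL, and WS-Management doing the work under the covers:

  // Sketch of a JSR262 client: standard JMX remote API, "ws" connector.
  // The service URL is a hypothetical example.
  import javax.management.MBeanServerConnection;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class Jsr262Client {
      public static void main(String[] args) throws Exception {
          JMXServiceURL url = new JMXServiceURL("service:jmx:ws://example.com:8080/jmxws");
          JMXConnector connector = JMXConnectorFactory.connect(url); // WS-Management under the hood
          MBeanServerConnection mbsc = connector.getMBeanServerConnection();
          System.out.println("MBean count: " + mbsc.getMBeanCount());
          connector.close();
      }
  }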

Comments Off on JSR262 public review ballot

Filed under Application Mgmt, Everything, IBM, Implementation, ISO, IT Systems Mgmt, JMX, Manageability, Microsoft, Specs, Standards, WS-Management, WS-Transfer

At the Semantic Technology Conference this week

I am going to spend some time, starting tomorrow, at the 2008 Semantic Technology Conference in San Jose. I have a few ideas (on ways to use semantic technologies for IT management) that I am trying to validate, improve or kill, as appropriate. I will of course be interested in any discussion/presentation related to applying semantic technologies to IT management. But I am going there in a pretty open frame of mind. Peripheral vision is important when considering technology that is so generic in nature and that has been used in many different fields. In general, I am more interested in concrete projects, lessons learned, best practices etc than new raw technology. There is already a lot more raw semantic web technology available than we can hope to make use of in IT management anytime soon. The question is more around the best ways to practically and opportunistically deliver value in a field that has plenty of existing relational schemas, XML descriptors, semi-standard class models, semi-structured events and in-Joe’s-head-only domain knowledge nuggets floating around.

If you are going to go there and have any interest in the application of semantic technologies to IT management, I’d be happy to hook up.

Comments Off on At the Semantic Technology Conference this week

Filed under Conference, Everything, IT Systems Mgmt, OWL, RDF, Semantic tech

Various IT management stories

Apparently Coté’s upstairs neighbors were having a party last night and he could not sleep. That’s good for us because as a result he bookmarked a long list of IT systems management stories. Several of those piqued my interest:

2 Comments

Filed under Application Mgmt, Articles, Everything, HP, IT Systems Mgmt, Manageability, Microsoft, Open source, Oracle