Oedipus meets IT management?

Having received John’s approval to reclaim the “mighty” adjective, I am going to have a bit of fun with it. More specifically, I am toying with adding VMware to the list. Clearly, VMware doesn’t want to go the way Sun did with Solaris (nice technology, right place at the right time, but commoditized in the long term). They have supposedly surrounded themselves with a pretty good patent minefield to slow the commoditization trend, but it will happen anyway and they know it. Especially with improved virtualization support in hardware making some of these patents less relevant. For this reason, they are putting a lot of effort into developing the IT management side of their portfolio.

One illustration of this is the fact that VMware recently recruited the Senior VP of systems management at Oracle to become its Executive VP of R&D (incidentally, this happened a couple of months after I joined his team at Oracle; maybe the knowledge that he wouldn’t have to deal with my bad sense of humor for too long made it easier for him to approve my hiring). I don’t think it’s a coincidence that they chose someone who is not a virtualization expert but an enterprise infrastructure expert (namely in database performance and management software).

So, do we have the “Mighty Four” (Oracle, Microsoft, EMC and VMware) for a nice symmetry with the “Big Four” (HP, IBM, BMC and CA)? Or does the fact that EMC owns most of VMware make us pause here? Might a mighty mother a mighty? How do you run an 85%-owned company whose strategic direction takes it toward direct competition with its corporate owner? EMC and VMware are attacking IT management from different directions (EMC is actually going at it from several directions at the same time, based on its historical storage products, plus new software from acquisitions, plus hiring a few smart people away from IBM to put the whole thing together), so on paper their portfolios look pretty complementary. But while aligning and collaborating more closely may make sense from a product engineering perspective, it doesn’t make sense from a financial engineering perspective. At least as long as investors are so hungry for the few VMware shares available on the open market (as a side issue, I wonder if they like it so much because of the virtualization market per se or because they see VMware’s position in that market as a beachhead for the larger enterprise IT infrastructure software market). And, as should not be surprising, the financial view is likely to prevail, which will keep the companies at arm’s length. But if both VMware and EMC are successful in assembling a comprehensive enterprise infrastructure management system, things will get interesting.

[UPDATED 2008/5/28: The day after I wrote this, VMware bought application performance management vendor B-hive. I am pretty lucky with my timing on this one.]

2 Comments

Filed under Everything, IT Systems Mgmt, Patents, People, Virtualization, VMware

I have seen the future of CMDBf

I got a sneak peek at CMDBf v2 today.

I am calling it v2 based on the assumption that the one currently being standardized in the DMTF will end up being called 1.0 (because it’s the first one out of the DMTF) or 1.1 (to prevent confusion with the submitted version).

At the Semantic Technology Conference, David Booth from HP presented his work (along with his partner, Steve Battle from HP Labs) to provide a SPARQL front-end to HP’s Universal CMDB (the engine under what was the Mercury MAM product). Here are the slides.

The mapping from SPARQL to TQL (the native query interface for UCMDB) was made pretty easy by the fact that TQL is a graph-oriented query language. How much harder would it be to similarly transform a CMDBf (v1) query interface into a SPARQL query interface (and vice versa)? Not much. The only added difficulty would come from the CMDBf XPath constraints. TQL has a property value mechanism that is very similar to CMDBf’s “propertyValue” constraint and maps well to SPARQL functions. The introduction of XPath as a constraint language in CMDBf makes things harder. It could be handled by adding XPath support to the SPARQL engine through function extensibility. Or by turning the entire XML into RDF and emulating XPath in SPARQL. But in either case, you’ll hit an impedance mismatch at some point, because concepts such as element order that exist in XPath have no native equivalent in RDF.
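To make the parallel concrete, here is a sketch of what a CMDBf-style topological query (an item template carrying a property-value constraint, linked to another item by a relationship template) might look like once translated to SPARQL. The class and property names are invented for illustration; they are not UCMDB’s or CMDBf’s:

PREFIX ex: <http://example.com/model#>
SELECT ?app ?port
WHERE {
  # “item template”: application servers, with a propertyValue-style constraint
  ?app a ex:ApplicationServer ;
       ex:listensOnPort ?port .
  FILTER (?port < 1024)
  # “relationship template”: the application server runs on a known host
  ?app ex:runsOn ex:host42 .
}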

The use of XPath in selectors on the other hand is not a problem. HP’s prototype uses Gloze (available as a Jena package) to turn the XML returned by UCMDB into RDF. An XSLT transform could turn that same XML into a CMDBf-valid XML response instead and that XSLT could easily handle the XPath selectors from the query request. This is another reason why constraints and selectors should remain separate in CMDBf (fortunately the specification is back to doing this properly).
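For illustration, such a transform doesn’t need to be fancy. Here is a minimal sketch, with invented element names (the real UCMDB output and the CMDBf response format are both richer) and with the selector hard-coded where a real implementation would generate it from the query request:

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:u="http://example.com/ucmdb-output">
  <xsl:template match="/">
    <queryResponse>
      <!-- keep only the nodes that the query's XPath selector points at -->
      <xsl:copy-of select="/u:results/u:host/u:operatingSystem"/>
    </queryResponse>
  </xsl:template>
</xsl:stylesheet>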

Here is why I call this prototype CMDBf v2: The CMDBf effort (v1 or 1.1), in its current form of re-inventing a graph query, can succeed. Let’s assume the working group strikes a reasonable balance between completeness and complexity, and vendors choose to compete on innovation and execution rather than lock-in (insert cynical comment here). CMDBf may then end up being supported by the main CMDB vendors. It wouldn’t provide federation capabilities, but having a common CMDB query interface supported by the Big Four would help with management integration. And yet, while the value would be real, it would only provide a little help to solve a larger problem:

  • As a technology limited to IT systems management, it would be unlikely to see widely available tools (e.g. user consoles and language-specific libraries).
  • It wouldn’t get the kind of robustness and interoperability that comes from wide adoption. While pretty similar, there might be some minor differences in the various implementations. Once your implementation has been tweaked to work with the implementations from the Big Four, you’ll call it done. Just like SNMP, another technology that is specific to IT systems management (see it happen here).
  • Even if it works perfectly at the query level, it will just hasten the time when developers run into the real problem, model interoperability. CMDBf doesn’t help at all with this. In fact, it makes it harder by hard-coding some dependencies on an XML back-end (the XPath constraints).

In the long run, IT management has to become more automated and integrated. That’s a given. The way it happens may or may not go through CMDB-like configuration stores. But if it does, we’ll have to eventually move beyond CMDBf (v1) towards something that addresses the three requirements above. And federation. I don’t know if it will be called CMDBf v2, and/or if it will come from the DMTF (by then, the CMDBf brand might be an asset or a liability depending on developer experience with the specification). But I strongly suspect (“probability 0.8” as a Gartner analyst might put it) that it will use semantic technologies. Because the real, hard, underlying problem is a problem of semantic integration. In that sense, David and Steve’s prototype is a sneak peek at what will come after CMDBf v1/1.1.

Pretty much since the beginning of CMDBf I have been pushing for it to ideally embrace SPARQL (with no success) or at least to stay close to it conceptually in order to make the eventual mapping/evolution smooth (with a bit more success). This includes pushing for a topological query language, trying to keep XML idiosyncrasies at bay and keeping constraints and selectors cleanly separated. Rather than working within the CMDBf group, David took the alternative approach of simply doing it. Hopefully this will help convince people of the value of re-using semantic web technology for IT systems management. Yes, semantic technologies have been designed for a much more general use case. But the use cases that CMDB systems address are a subset of the use cases addressed by semantic technologies. It’s hard for domain experts to see their domain as just a subset of a larger problem, but that is the case here. Isn’t HTTP serving the IT management community better than a systems management-specific alternative would?

By the way, there is no inferencing taking place in the HP prototype. We are just talking about re-using an existing, well thought-through graph query language. Sure, OWL inferencing and some rules could be seamlessly layered on top of this. But this is in no way required to do (better) what CMDBf v1 tries to do.

And then there is the “federation” question. Who do you trust more to deliver this? A bunch of IT systems management architects in the DMTF, or the web and query experts at W3C, HP Labs etc. who designed and implemented SPARQL over many years? BTW, it sounds like SPARQL federation was discussed at WWW 2008, based on these meeting notes (search for “federation”).

2 Comments

Filed under Automation, CMDB, CMDB Federation, CMDBf, Conference, DMTF, Everything, Graph query, HP, IT Systems Mgmt, Query, RDF, Semantic tech, SPARQL, Standards, W3C, XPath

JSR262 public review ballot

The Public Review Ballot for JSR #262 that took place in the Executive Committee for SE/EE has closed. I am not familiar enough with the JCP process to know exactly what this milestone represents. But the results are interesting in any case.

The vote narrowly passed, with 6 “yes” votes, 5 “no” votes and 1 abstention.

The overriding concern listed by the “no” voters (and several of the “yes” voters) is the fact that JSR262 uses WS-Management (a DMTF standard), which itself makes use of specifications that have been submitted to W3C but are not currently in the process of standardization (WS-Transfer, WS-Eventing, WS-Enumeration). And that it uses an older version of a now-standard specification (WS-Addressing).

SAP makes the most insightful comment: that this is not really a JCP problem but a DMTF problem. Hopefully the DMTF (and Microsoft, since it controls the fate of the specifications in question) will step up to the plate on this. This is likely to happen. Even if the DMTF and Microsoft didn’t care about making the JCP happy (but they do, don’t they?), they will run into similar issues if/when they push WS-Management towards ANSI/ISO standardization.

Next to this “non-standard dependencies” issue, there is only one technical issue mentioned. As you guessed, it’s IBM whining about the lack of a WSDL to feed their tools. This is becoming so repetitive that I may eventually stop making fun of it (but don’t hold your breath, I am not known for being very good at ending long-running jokes). It is pretty ironic to hear IBM claim that without that WSDL you can’t implement the spec on JAX-WS, when you know that the Wiseman reference implementation by Sun and HP is based on JAX-WS…

Comments Off on JSR262 public review ballot

Filed under Application Mgmt, Everything, IBM, Implementation, ISO, IT Systems Mgmt, JMX, Manageability, Microsoft, Specs, Standards, WS-Management, WS-Transfer

At the Semantic Technology Conference this week

I am going to spend some time, starting tomorrow, at the 2008 Semantic Technology Conference in San Jose. I have a few ideas (on ways to use semantic technologies for IT management) that I am trying to validate, improve or kill, as appropriate. I will of course be interested in any discussion/presentation related to applying semantic technologies to IT management. But I am going there in a pretty open frame of mind. Peripheral vision is important when considering technology that is so generic in nature and that has been used in many different fields. In general, I am more interested in concrete projects, lessons learned, best practices etc. than in new raw technology. There is already a lot more raw semantic web technology available than we can hope to make use of in IT management anytime soon. The question is more about the best ways to practically and opportunistically deliver value in a field that has plenty of existing relational schemas, XML descriptors, semi-standard class models, semi-structured events and in-Joe’s-head-only domain knowledge nuggets floating around.

If you are going to go there and have any interest in the application of semantic technologies to IT management, I’d be happy to hook up.

Comments Off on At the Semantic Technology Conference this week

Filed under Conference, Everything, IT Systems Mgmt, OWL, RDF, Semantic tech

Various IT management stories

Apparently Coté’s upstairs neighbors were having a party last night and he could not sleep. That’s good for us because as a result he bookmarked a long list of IT systems management stories. Several of those piqued my interest:

2 Comments

Filed under Application Mgmt, Articles, Everything, HP, IT Systems Mgmt, Manageability, Microsoft, Open source, Oracle

WS-ManagementHammer: don’t do it but if you are going to do it anyway then…

With the IBM/Microsoft/Intel/HP WSDM/WS-Management convergence now implicitly (if not yet officially) dead, it will be interesting to see what IBM is going to do with WSRF. WSRF is being used today, rarely explicitly but rather in an embedded fashion. People who use WSDM use it, people who use CDDLM use it, people who use the Globus Toolkit use it, etc. IBM could write off the convergence work (WS-ResourceTransfer, which was published as a draft, and WS-ResourceEnumeration and WS-EventNotification which were never published) and stick to using the existing WSRF specifications when they need the corresponding functionality. That’s what I hope they do.

Alternatively, they could decide to get the forceps out of the drawer. They can create a new, IBM-friendly (e.g. Fujitsu, CA, Cisco…) private consortium to take over the unfinished drafts (if the IBM/Microsoft/Intel/HP legal agreement allows this) or start new ones. Or they could go directly to W3C, OASIS or OGF and push for a new working group to do the work in the open (and since no one else would really care about this work, IBM should have relatively free hands there, the way Microsoft did in the DMTF when IBM chose to boycott WS-Management). Why W3C would care, and why OASIS or OGF would want to start committees to obsolete their existing work, is a separate question.

While I hope that IBM doesn’t try to push another pile of WS-* resource management specifications on an industry that already has too many, if they do, I hope that at least they’ll do it right. And that means doing away with the approach embedded in WS-ResourceTransfer. Having personally been involved in many iterations on this problem, I hope to have some insight to contribute.

Along the lines of the age-old parental advice “don’t do it but if you are going to do it then use a condom”, here is my advice to anyone thinking of doing another iteration on the WSRF question: don’t do it but if you are going to do it then be specific about what problem you are addressing.

First, let’s separate three scenarios.

Database query

WS-ResourceTransfer should not be seen as a way to query an XML database. Use XQuery for this.

REST

While architecturally it should be possible to build RESTful applications on top of WS-Transfer‘s operations, this is simply not what is happening. WS-Transfer is being used either by CIM people (who get to it via WS-Management) or by big-SOA people (who get it as part of the whole WS-* stack), and neither of them is doing anything remotely RESTful. So just leave that aside and don’t see WS-ResourceTransfer as a way to do “fine-grained REST”. No REST user is losing sleep over WS-ResourceTransfer being in limbo.

A flexible way to interact with a complex system

This is the use case that you should focus on. You have a system made up of many parts (e.g. a composite application or a server that is made of many components) that you can represent as an XML document. The XML representation contains some important information about the system, but it isn’t the system. There are identified resources within the system that have lifecycles, management capabilities and internal parameters. Not everything relevant is captured in the XML model. This is why it is different from an XML database.

In general, I don’t think that XML is the best way to represent complex IT systems. It has plenty of complications that are not relevant to IT management and it doesn’t elegantly support the representation of graphs, often the most natural way to represent such a system (more on this here). CMDBf, with its graph-oriented approach, is a better choice in general. But there are plenty of areas (especially smaller, well-defined, sub-systems) in which XML formats have been defined to represent systems. SCA and SML for example.

In the case where you are dealing with such an XML-described system, there is value in standard ways to simplify interactions with the system and its parts. But here too, we need to distinguish different patterns rather than trying to handle them all in the same way.

Filtering/sequencing of returned data

Complex IT systems can generate a lot of configuration and/or monitoring data, and often you only care about a small subset. For example, an asset record has dozens of elements (lease terms, owner, assigned user…) but you may only care to retrieve the date the lease expires. When you do a GET on the record, you want to qualify it by specifying that only that date needs to be returned. That’s what WS-RP, WS-RT and the WS-Management wsman:FragmentTransfer header allow. In a variation of this, you want all the data but you don’t want it in one go, you want to pull it piece by piece. That’s what WS-Enumeration gives you. The problem with all these specifications is that they only offer that feature when you are retrieving the resource representation (a WS-Transfer GET or equivalent), not for other operations. But how is this different from invoking an AirlineBooking operation and saying that you only want to be sent the confirmation code, not the full itinerary, equipment type, assigned seat, etc.? Bundling this inside WS-RT (or equivalent) is not helpful. A generic SOAP header that can go on any message would be more appropriate (the definition of this header would need to pay special attention to security considerations, especially if the response is signed, because it could be abused to trick the server into sending, and signing, specifically-crafted messages).
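For illustration, here is a sketch of what such a generic header could look like. The header name and its namespace are made up (that’s the point, no such specification exists today), and the security considerations mentioned above are ignored:

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">
  <s:Header>
    <!-- hypothetical generic header: trim the response to whatever
         operation is in the body down to the selected nodes -->
    <hyp:ResponseFilter xmlns:hyp="http://example.com/response-filter"
                        xmlns:rec="http://example.com/asset-record"
                        dialect="http://www.w3.org/TR/1999/REC-xpath-19991116">
      /rec:assetRecord/rec:lease/rec:expirationDate
    </hyp:ResponseFilter>
  </s:Header>
  <s:Body>
    <!-- any operation: a WS-Transfer Get, an AirlineBooking request... -->
  </s:Body>
</s:Envelope>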

Interacting with a sub-element of the system

If you have a handle to a computer system resource and you know that it has one CPU and that this CPU is represented by the /comp:CPU element of the system, why would you need to use some out-of-band discovery mechanism to interact with that CPU? It’s right there, you can see it, you can point to it. Surely there must be a way to address operations to it directly, right? WS-Management tries to do it with its wsman:Selector mechanism, but the selectors are not tied to the model and require, effectively, a separate out-of-band agreement for addressing. There shouldn’t be a need for such an additional agreement once an agreement has already been reached on the model.

What is needed is a way, for systems that have a known XML model, to address messages to a subpart by using the model itself to support that addressing. Call it a SOAPy mashup if you want to feel like you are part of the cool kids. I described such a mechanism a while ago. In effect, it is an improvement on wsman:Selector that an eventual new iteration of WSRF should at least consider.
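Here is a sketch of the idea (the hyp:ModelPath header and all the example namespaces are invented; this illustrates my proposal, not anything standardized): a WS-Transfer Get scoped down to the CPU element of the system’s model.

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <s:Header>
    <wsa:Action>http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</wsa:Action>
    <wsa:To>http://example.com/systems/server47</wsa:To>
    <!-- hypothetical header: use the agreed-upon model itself to address
         the message to a sub-element, instead of an out-of-band selector -->
    <hyp:ModelPath xmlns:hyp="http://example.com/model-addressing"
                   xmlns:comp="http://example.com/computer-system">
      /comp:ComputerSystem/comp:CPU
    </hyp:ModelPath>
  </s:Header>
  <!-- a WS-Transfer Get request has an empty body; the wsa:Action
       header identifies the operation -->
  <s:Body/>
</s:Envelope>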

In some cases, namely when the operation is a WS-Transfer GET, this capability overlaps with the “filtering of returned data” capability. One way to look at it is that you are doing a GET at the level of the overall computer system and filtering the results down to the part that represents the CPU. Another way to look at it is that you are pinpointing the message to a subset of the model (the CPU part) and doing an unmodified GET on it. It doesn’t matter how you choose to think about it. In my proposal, these two ways produce the same message. Like the wave view and particle view of a photon, they describe, in the end, the same physical entity, with each view being the best representation in a given set of situations.

The problem with WS-RT and its predecessors is that they don’t recognize that this is just the intersection of two orthogonal concerns (filtering of output versus addressing of sub-elements), and they only handle that intersection.

Interacting with a set of resources as a set

The same kind of expression (typically XPath) that lets you point at a sub-element inside of a system also lets you point at a set of such sub-elements. But even though from an XPath perspective there isn’t much of a difference (the first one just happens to return a nodeset that contains only one node), from an architectural perspective it is a very different use case. If you want to support such a use case then you have to handle it as such and define all the associated semantics (sequential/parallel execution, fault handling, partial completion, resource-specific permissions…). You can’t just cross your fingers and assume that you get such features “for free” just because XPath can return a nodeset.

I know that this post illustrates a way of giving free advice that virtually ensures that it gets ignored. Similar (if you’ll allow the big stretch) to the way Chirac and Villepin argued against an Iraq invasion in ways that probably reinforced the Bush administration’s determination to do it. When will the world finally learn to appreciate the oh-so-slightly obnoxious undertone that is inherently French (because, let me tell you, we’re not about to lose it)? At least, when my grandchildren ask me “where were you when IBM invented WS-ManagementHammer?” I can point to this post and say “I tried to stop it, I tried”.

[UPDATED 2008/5/15: How timely! Just after publishing this I find, via Coté, what looks like another example of French abrasiveness in the systems management world: the attitude, name and the way Jeff ends with a French-language quote make it quite likely that the “Jacques” person discounting the fact that his company’s SNMP agent is broken is indeed a compatriot. French obnoxiousness aside, and despite my respect for standards, my advice to Jeff is that if a given SNMP agent works with HP, IBM, BMC and CA you will probably save yourself time in the long run by finding a way to support it (even if it is not spec-compliant) rather than getting the vendor to change. There are lots of sites out there that work fine with Firefox and IE but are not compliant with Web standards. Good luck getting them all fixed.]

[UPDATED 2008/7/14: I don’t really plan to turn this post into a ongoing set of updates about “French attitude” but since today is Bastille Day I’ll point to this map of the world as seen from Paris. If I wasn’t on strike right now, I’d explain why the commenter is wrong to assert that “French self-deprecating humour” is rare.]

4 Comments

Filed under Everything, HP, IBM, IT Systems Mgmt, Mgmt integration, Microsoft, SCA, SML, SOAP, SOAP header, Specs, Standards, WS-Management, WS-ResourceTransfer, WS-Transfer, XMLFrag, XPath

Oracle Enterprise Manager in the news

I missed this good review of Oracle Enterprise Manager (OEM) by eWeek’s Cameron Sturdevant that came out almost two months ago. It is “good” in the sense that it is well researched and well written, but it is also “good” in the sense that it is a very positive review. The only drawback listed is the price of some of the features. But you have to evaluate these numbers in comparison to the productivity gains of your IT management staff. Or, even more compellingly, in comparison to the cost of business disruption that can result from insufficient management insight into the applications.

I got to this review through this very nice blog post in which my colleague Chung Wu (a director of product management for OEM) describes step by step the key role that OEM plays in effectively managing Oracle technologies and in allowing a smooth and controlled evolution of the deployed portfolio.

Comments Off on Oracle Enterprise Manager in the news

Filed under Application Mgmt, Articles, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Oracle

Management product releases

A couple of product updates related to applications management were announced over the last couple of weeks:

  • My ex-colleagues at HP working on SOA management have released a new version of SOA Manager (the product that originated with the TalkingBlocks acquisition, when coolness first entered the gloomy 42-Lower floor of HP Cupertino) plus some SOA-buzzword-compliant improvements to Mercury-inherited products (testing tools and BAC). Or so at least says this article (I couldn’t easily find any specifics on the HP site).
  • The JBoss guys announced last week version 2.0 of JBoss ON (Operations Network), their application management console. I assume it is a follow-on to the previously announced work with Hyperic, even though the press release does not mention anything about it.

1 Comment

Filed under Everything, HP, IT Systems Mgmt

The elusive XPath nodeset serialization

I have been involved, in various capacities, with five different specifications that define a GET (or GET-like) operation that takes as input an XPath expression used to pinpoint the subset of the XML document that should be retrieved (here is a quick history as of a couple of years ago; more has happened since). And I must shamefully admit that all but one are simply impossible to implement in an interoperable way.

That’s because they instruct implementers to return an XPath nodeset in the response SOAP message but say nothing about how to serialize the nodeset. While an XPath nodeset contains the kind of things that make up an XML document, it is not an XML document by itself. There is an infinite number of possible ways to serialize an XPath nodeset into XML. To have any hope of interoperability on this, a serialization algorithm has to be clearly described by the specification. Which hasn’t happened.
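A quick example makes the ambiguity concrete. Take this document (invented for illustration, not from any of these specifications) and the XPath expression /employees/employee/name:

<employees>
  <employee id="1"><name>Joe Smith</name></employee>
  <employee id="2"><name>Kathy O’Connor</name></employee>
</employees>

The expression returns a nodeset made of two element nodes. Both of the following are plausible XML serializations, and two implementations that pick different ones won’t interoperate:

<!-- serialization 1: the bare elements (not even a well-formed
     document, since there is no single root element) -->
<name>Joe Smith</name>
<name>Kathy O’Connor</name>

<!-- serialization 2: an invented wrapper element -->
<queryResult>
  <name>Joe Smith</name>
  <name>Kathy O’Connor</name>
</queryResult>

And an expression like /employees/employee/@id returns attribute nodes, for which there is no obvious XML serialization at all.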

Let’s start with WS-ResourceProperties (WS-RP). It has a QueryResourceProperties operation that takes an XPath expression as input. The specification says that “the response MUST contain an XML serialization of the results of evaluating the QueryExpression against the resource properties document”. Great, thanks. The example provided happens to return a nodeset with only one node (a boolean), which is implicitly serialized into the text representation of that boolean. What if there is more than one node in the nodeset? What about other types of nodes?

Moving on to WS-Management, which defines a SOAP header that uses XPath to qualify a WS-Transfer GET request such that it only retrieves a subset of the target XML document. While it does a better job than WS-RP at describing the input (e.g. it specifies the context node and what namespace declarations are in scope for the XPath evaluation), it is even more cavalier than WS-RP in describing the output: “the output (lines 53-55) is like that supplied by a typical XPath processor and might or might not contain XML namespace information or attributes”. By “a typical XPath processor” we should understand MSXML, I suppose. But as far as I know a typical XPath processor doesn’t return XML, it returns language-specific data structures (e.g. a C# or Java object, like a nu.xom.Nodes instance). And here too, the examples only use single-node nodesets.

WS-ResourceTransfer (WS-RT) was supposed to be the convergence of these two efforts, so presumably it would have learned from their mistakes. While it is better written in general than its predecessors, it fails just as badly with regard to specifying the nodeset serialization. And once again, the example provided uses a nodeset with just one node.

And then came the CMDBf query operation which, for some unclear reason, was deemed in need of a built-in XPath transformation of records. As I pointed out in my review of CMDBf 1.0 at the time, this feature was added without taking the trouble to define the XML serialization of the resulting nodeset. And there isn’t even an example of the XPath serialization.

It is sad in a way, but the only specification that acknowledges the problem and addresses it came before any of the four above even got started. It is the WSMF (Web Services Management Framework) work that we did at HP, and more specifically the “note on dynamic attributes and meta information” (not available at HP anymore but available from archive.org). This specification was the first one to define a GET operation that is qualified by an XPath expression. Unlike its successors it also explicitly narrowed down the types of nodes that could be selected (“The manager MUST NOT send as input an XPath statement that returns a nodeset containing nodes other than element, attribute and namespace nodes”). And for those valid types it described how to serialize them in XML (“When a node in the result nodeset is an attribute node, for the sake of the response it is serialized as an element node which has the same name as the name of the original attribute (see example 4 for an illustration). The element is in the same namespace as the namespace the attribute it represents is in. This applies to namespace nodes as well, they are serialized like an attributes in the xmlns namespace”). Turning an attribute into an element of the same QName might not be the smartest thing in retrospect (after all there may be an element by that QName already) but at least we recognized and addressed the problem.

But all is good now, I am told, because XPath 2.0 is here, along with a clean data model and a well-described serialization.

Not so. Anyone wanting to use XPath for a SOAP-based query language would still have to specify a serialization.

The first problem with the W3C serialization is that the XML output method doesn’t work for all nodesets. Try to use it on a nodeset that contains a top-level attribute node and you get error err:SENR0001. And even for the nodesets it accepts, it sometimes returns less-than-useful results. For example, if your XPath is of the form /employee/name/text() and you have four employees, the result will look something like this:

“Joe SmithKathy O’ConnorHelen MartinBrian Jones”

Concatenated text values without separators. I guess W3C is like a department store, they don’t offer complimentary wrapping anymore…

That’s why the nux.xom.xquery.ResultSequenceSerializer class had to define its own wrapping mechanism to produce a useful XML serialization. The API gives you the choice between the W3C_ALGORITHM and the WRAP_ALGORITHM.
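For illustration, here is the shape of the output a wrapping serialization produces for the four-employee example above (the actual wrapper element names used by the Nux algorithm are its own; this sketch just shows the idea):

<results>
  <item>Joe Smith</item>
  <item>Kathy O’Connor</item>
  <item>Helen Martin</item>
  <item>Brian Jones</item>
</results>

Each node gets its own wrapper element, so the consumer can tell where one text value ends and the next one starts.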

Bottom line, and however much some would like to think of it that way, XPath (1 or 2) is not an XML subsetting/transformation mechanism. It could be used to create one (as XSLT does), but you have to do your own plumbing.

In addition to the technical aspects of this discussion, what else can be learned from this sad state of things? The fact that all these specifications define an XPath-driven query mechanism that is simply broken (beyond the simplest use cases) without anyone even noticing tells me that there isn’t a real need for full XPath query over SOAP (and I am talking about XPath 1.0; the introduction of XPath 2.0 in CMDBf is even more out there). A way to retrieve individual elements (and maybe text values) is all that is needed for 99% of the use cases addressed by these specifications. Users would be better served (especially in a version 1.0) by specifications that cover the simple case correctly than by overly generic, complex and poorly documented features. There is always time to add features later if the initial specification is successful enough that users encounter its limitations.

3 Comments

Filed under CMDB Federation, CMDBf, Everything, SOAP, Specs, Standards, Tech, W3C, WS-Management, WS-ResourceTransfer, XPath

System Center “Cross Platform Extension”: too many distractions

I was hoping that by the time MMS was over there would be more clarity about the “Cross Platform Extension” to System Center that Microsoft announced there. But most of the comments I have seen have focused on two non-technical aspects: Microsoft is interested in heterogeneous management and Microsoft makes use of open source. That’s also the focus of Coté’s coverage.

So what? Is it still that exciting, in 2008, to learn that Microsoft recognizes that Linux and OSS are major players in enterprise computing? If Steve Ballmer eventually gets hold of Yahoo, do you think his first priority will be to move all the servers to Windows or to build up its search and advertising audience? It has now been 10 years since the Halloween documents came out. They can be seen as the start of Microsoft’s realization that Linux/OSS are here for good. It is not surprising to see that one of their main authors is now the driving force behind WS-Management, an effort that illustrates the acceptance of heterogeneity and the need to deal with it (on Microsoft’s terms if possible, of course). The WS-Management effort started years ago and it was a clear sign that Microsoft knew it had to tackle heterogeneous management (despite the reassuring talk to HP and others that “it’s all about making Windows the most manageable platform”). Basically, Microsoft is using WS-Management to support heterogeneity without having to do too much work: by creating an industry standard that everyone writes to and that Microsoft uses internally. Heterogeneous management is intrinsic to DSI, if DSI is to be anything more than a demo.

But all of this was known before MMS 2008 to anyone who was paying attention. Instead of all this Microsoft/OSS/heterogeneous talk, I am a lot more interested in the technical aspects of the “Cross Platform Extension”.

OpenPegasus has been around for a long time as a C++ CIMOM, with a bunch of associated providers and CIM-XML interoperability over HTTP with CIM clients. I don’t know where WS-Management support was on the OpenPegasus development timeline, but even without Microsoft getting involved it would have eventually happened. And this should have been sufficient for System Center to access the CIMOM (BTW, does System Center support CIM-XML when WS-Management is not present? And if it does, what difference does WS-Management make in practice?).

I can see how Microsoft would bring some extra (and much welcome) development resources for the WS-Management implementation (BTW the guys at Intel already have an open-source C implementation of WS-Management) as well as some extra marketing/visibility/distribution. Nice, but not earth-shattering. Do they bring anything else to OpenPegasus?

And what else is in the “Cross Platform Extension” in addition to an OpenPegasus WS-Management-capable CIMOM? Is there any extra modeling capability beyond CIM? Any Microsoft-specific classes? Any discovery/reconciliation capability? How much actual configuration management versus just monitoring? Security? Health models? Desired state management? Or is it just a WS-Management CIMOM? Any pointer to specific information is welcome.

Of course the underlying question is whether others than Microsoft can manage resources that have an OpenPegasus-based System Center management pack on them. The Open Management Consortium guys have talked about an open management agent. Could, against all expectations, Microsoft be the one delivering it?

In the IT management world, there are the big 4 (HP, BMC, CA and IBM), the little 4 (Zenoss, Hyperic, GroundWork and openQRM) and the mighty 3 (Oracle, Microsoft and EMC). Sorry John, I am reclaiming the use of the “mighty” term: your “mighty 2” (or 2.5) are really still the “little 2” (or 2.5). At least for now.

The interesting thing is that in that industry configuration there are topics on which the little ones and the mighty ones share common interests. For example, the big 4 have a lot more management packs for all kinds of resources, built up over the years. Some standards-based mechanism that partially resets the stage helps the little ones and the mighty ones better compete against the big 4. Even better if it has an attractive (and extensible) implementation ready in the form of an agent. But let’s be clear that it takes more than a CIMOM to make a management pack. You need domain-specific expertise in the form of health models, deployment/configuration scripts and/or descriptors, configuration validation, role management etc. Thus my questions about what else (beyond CIM over WS-Management) Microsoft is bringing to the table. SML and CML are supposed to address this space, but I didn’t hear them mentioned once in the MMS coverage.

[UPDATED on 2008/5/7: Another perspective on Microsoft and open source: Microsoft Ex-Pats Developing Open Source Software Outside of Redmond]

[UPDATED 2008/5/7: I got an answer to the question about System Center support for CIM-XML: it doesn’t have it. So indeed it’s either WS-Management or WMI. If you’re a Linux box, that means it’s WS-Management.]

1 Comment

Filed under CA, Everything, HP, IBM, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Open source, Oracle, SML, Standards, WS-Management, Yahoo

Oracle/BEA, WS-Management and MMS: announcements of the day

A few announcements came out today.

The good news: Oracle’s acquisition of BEA closes. Unobstructed technical work can start.

The conveniently-timed news: WS-Management officially a standard.

Speaking of MMS 2008, any announcement there? Not much so far, as explained by Ian Blyth. If I parse the cross-platform part of the press release correctly, it says that management of non-Windows resources by Operations Manager is based on WS-Management, but WS-Management alone is not enough, so Microsoft is providing a development kit for several non-Microsoft operating systems. It will be interesting to see what exactly is produced by these management packs. Can they be called on by management tools other than Operations Manager, or is the stuff that rides on top of WS-Management too proprietary to allow this? No word on SML/CML.

By the end of the week we may have a clearer picture, including what’s going on with the previously-announced reset on System Center Service Manager. Coté is on the scene and will undoubtedly share his thoughts.

As a side note, the way the MMS main page loads betrays the fact that, in 2008, Microsoft (or more likely its event marketing contractor) is using the same clueless HTML design approach that I first saw in 1995 and recently wrote about. All the text in the center of the MMS home page is contained in one large picture (available here). They didn’t even bother with an “alt” attribute, so good luck to blind users. The part that says “Registration Overview Page” was made blue and underlined to suggest that it is a link, but it is just a part of the picture. Which, presumably, was supposed to be turned into a link using an image map. Well, it turns out they can’t even get that right.

They tried to use a client-side image map (not available in 1995) but somehow the actual map code is commented out in the HTML source:

<!--<map name=Map>
  <area shape=RECT coords=18,549,210,572 href="registrationoverview.aspx">
  <area shape=RECT coords=17,596,222,634 href="registrationoverview.aspx">
</map>-->

As a result, the single most prominent link on the home page is dead. And there is no server-side image map mechanism as a backup (which I remember used to be best practice when client support for client-side image maps was spotty).

Looking at the HTML source also reveals that tables are over-used. That’s the kind of HTML I can write, and I don’t mean that as a compliment.

[UPDATED 2008/5/5: As expected/hoped, Coté did share his thoughts on this “cross-platform” move from the MMS floor.]

Comments Off on Oracle/BEA, WS-Management and MMS: announcements of the day

Filed under CMDB, DMTF, Everything, IT Systems Mgmt, Linux, Manageability, Microsoft, Oracle, Standards, Trade show

Unhealthy fun with IP aspects of optionality in specifications

The previous blog post has re-awakened the spec lawyer in me (on the hobby glamor scale, spec lawyering ranks just below collecting dead bugs). Which brought back to my mind a peculiar aspect of the “Microsoft Open Specification Promise”.

The promise was published to address fears some people had that adopting Microsoft-created specifications (especially non-standard ones) would put them at risk of patent claims from Microsoft. The core of the promise is only two paragraphs long. The first one contains this section:

“To clarify, ‘Microsoft Necessary Claims’ are those claims of Microsoft-owned or Microsoft-controlled patents that are necessary to implement only the required portions of the Covered Specification that are described in detail and not merely referenced in such Specification.”

That seems to pretty clearly state that only the required portions of a specification are covered by this promise. Which is a very significant limitation, as specifications often tend to (over-)use optional features. But if you read further, the list of “Covered Specifications” (those to which the promise applies) contains this statement:

“this Promise also applies to the required elements of optional portions of such specifications.”

I find this very puzzling because it seems to contradict the previous statement. And more importantly, it’s hard to understand what it really means. That’s where the fun starts:

For example, if my spec defines a document <a> with an optional element <b> that itself has an optional sub-element <c>, as in:

<a>
  ...
  <b>
    ...
    <c>...</c>
  </b>
</a>

The <b> element is a required part of the “b” optional portion of the spec (the portion of the spec that defines that element), so I guess it is covered, but is <c>? That’s an optional element of an optional portion (the “b” portion) of the spec, so it isn’t. Unless you consider the portion of the spec that defines <c> (the “c” portion of the spec) to be an optional portion of the spec itself. In which case the <c> element is covered.

But if you take that second line of reasoning, then everything in the spec is covered because for any feature, no matter how “optional” it is, there is a portion (optional or not) of the specification that describes this feature. And if you are implementing that portion, for example the portion that defines element <foo>, by definition element <foo> is required for it (how can an element not be a required part of its own definition?). But if Microsoft intended to cover all parts of the specification, why not say so rather than this recursion-inducing “required elements of optional portions” statement? And if not, why do they choose to only cover optional elements that are one degree removed from the base of the specification?

Wouldn’t it be fun to see a court of law deal with a suit that hinges on this statement (provided that you’re not a party in the suit, of course)?

When a real spec lawyer took a look at this promise, he didn’t comment on the second statement, the one that raises the most questions in my mind.

[UPDATED 2008/4/29: The “promise” has seen many updates. The original (which is the one Andy Updegrove reviewed at the previous link) came out on 2006/9/12. The one I reviewed is dated 2008/3/25. There is no change history on the Microsoft site, but the Wayback machine has archived some older versions. The oldest one I can find is dated 2006/10/23 and it does not contain the sentence about “required elements of optional portions” that puzzles me. So it’s likely that the version Andy reviewed didn’t include this either and as such was clearly limited to required portions of the specifications (something that Andy pointed out).]

Comments Off on Unhealthy fun with IP aspects of optionality in specifications

Filed under Business, Everything, Microsoft, Patents, Specs, Standards

WS-Transfer, its WSDL and its WS-I compliance: the art of engineered uselessness

Several years ago, Chris Ferris wrote a blog entry in which he explains that WS-Transfer is not WS-I Basic Profile (BP) compliant.

Chris’ main point is correct: the WSDL document in appendix II of the WS-Transfer specification is not compliant with the WS-I Basic Profile. But what does this mean and why should one care?

If you search for the word “wsdl” in WS-Transfer, you first find it in the table that declares namespace prefixes used in the specification. But the prefix is not used in the specification, so it could just as well be removed from that table.

We see it next mentioned in the “compliance” boilerplate where it is declared to be the least authoritative of all information in the specification.

The next occurrence is all the way down in section 8, as a reference to the WSDL 1.1 W3C note. The only place where that reference is used is further below, in Appendix II.

In short, for all practical purposes there is no mention of WSDL in WS-Transfer except for this one appendix that contains a WSDL document. Since there is no MUST or REQUIRED statement that refers to it, it is at best a testing tool that one can use to validate WS-Transfer messages produced. There is no requirement at all that the implementation produces that WSDL (e.g. as a response to a WS-MeX request) or consumes it.

And if you look at the content of the WSDL, it is mostly XML gymnastics aimed at creating “empty” and “any” types to express almost nothing useful about the messages sent and received.

You don’t have to take my statement that the WS-Transfer WSDL is useless at face value. Here are two other proofs:

  • Chris doesn’t just point out the WS-I BP violation in the WS-Transfer WSDL, he also proposes a way to fix it. He writes: “I actually think that a more appropriate approach to handling WS-Transfer’s ‘Get’ would be to specify the output message as you would any doc-literal operation and merely annotate the operation with the appropriate wsa:Action attribute values” (he also provides an example). And he is perfectly right. If you really want a WSDL for your WS-Transfer operations, create one that is specific to the resource type (server, toaster…) that you are dealing with (see the sketch after this list). By definition that WSDL can’t be baked into the model-agnostic WS-Transfer specification. While Chris doesn’t say it, the natural conclusion of his remark is that there is no point in having a WSDL in WS-Transfer (because any resource-agnostic WSDL is useless).
  • The WS-Transfer XSD and WSDL have been modified, sometimes in backward-incompatible ways, without changing the target namespace. From the original version to the first W3C submission, some minor changes (message names, introduction of WS-Addressing). From the first W3C submission to the current submission, some potentially backward-incompatible changes (the GET input can now be non-empty, the CREATE response can now contain anything as a result of trying to support different versions of WS-Addressing). On top of that, all these XSD and WSDL documents embedded in various versions of the spec are “non-normative”. The normative versions are said to be the ones at xmlsoap.org (XSD, WSDL). Those have not changed, which means that both versions on the W3C web site contain an incorrect version of the XSD/WSDL in the spec. Shouldn’t that lack of XML hygiene be a big deal for a specification that is implemented (via WS-Management, which references the W3C submission) in resources with long product development cycles, such as servers from Dell, HP and others that have WS-Management support directly on the motherboard? It would, if the XSD and WSDL had any relevance for the implementers. The fact that there was no outcry is yet another proof that the WS-Transfer XSD and the WSDL are irrelevant.
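To illustrate the first bullet, here is a rough sketch of what a resource-specific Get could look like in WSDL, along the lines Chris suggests (the toaster-flavored names are invented and the message definitions are omitted):

<!-- sketch only: a doc-literal Get for a specific resource type -->
<wsdl:operation name="Get">
  <wsdl:input message="tns:EmptyMessage"
      wsa:Action="http://schemas.xmlsoap.org/ws/2004/09/transfer/Get"/>
  <wsdl:output message="tns:ToasterRepresentation"
      wsa:Action="http://schemas.xmlsoap.org/ws/2004/09/transfer/GetResponse"/>
</wsdl:operation>

The output message actually describes a toaster representation, which is exactly what the model-agnostic WS-Transfer specification can never do.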

So yes, Chris is right that the WS-Transfer WSDL (BTW all versions have the problem that Chris describes even though it could have been fixed in a backward-compatible way when the WSDL was altered) is not WS-I BP compliant. But since that WSDL is useless anyway, this shouldn’t keep anyone up at night. The WS-Transfer WSDL serves no purpose other than to annoy people who like things to be WS-I BP compliant.

But is it just the WS-Transfer WSDL that’s useless, or it is all of WS-Transfer?

I am not planning to go into WS-* vs. REST territory here. To those who are confused by the similarity between the names of WS-Transfer operations and HTTP methods and see WS-Transfer as a way to do “REST over SOAP” I’ll just point out that WS-Transfer is rarely used on its own but rather in conjunction with many other SOAP messages (like those defined by WS-Eventing and WS-Enumeration, plus countless custom operations). So much for uniform interfaces. WS-Transfer, at least as it is used today, is not about REST.

Rather, the reasons why I question the usefulness of WS-Transfer are more pragmatic than architectural. I can think of three potential justifications to carve out WS-Transfer as a separate specification, none of which is really convincing at this point in time.

The first reason is simply to avoid repeating the same text over and over again. If many specifications are going to describe the same SOAP message, just describe it once and refer to that description. Sounds good. But I know of three specifications that use WS-Transfer: WS-Management, WS-MeX and the Devices Profile for Web Services.

WS-MeX and the Devices Profile only use the GET operation. Which means that the only specification text that they can re-use from WS-Transfer is something like “send an empty get request and get something back”. WS-Transfer can’t say what that something is, only the domain-specific specifications can. As a result, you are spending as much time referencing WS-Transfer as would be spent defining a simple GET operation. For all practical purposes, you can implement WS-MeX and the Devices Profile without ever reading WS-Transfer.

The second potential reason is to provide a stand-alone piece of functionality that can be implemented once (e.g. as a library/module) and re-used for different purposes. Something that automatically kicks in when a WS-Transfer wsa:Action is detected. Think of a stand-alone encryption/decryption library for example, that looks for specific SOAP headers. Or WS-Eventing, for which a library can take over the task of managing the subscription lifecycle. Except WS-Transfer defines so little that it’s not clear what a stand-alone WS-Transfer implementation would do. Receive messages and do what with them? It is so tied to the back-end that there isn’t much you can do in a general fashion. Unless you are creating a library for a database product and you see WS-Transfer as a query interface for your database. But this only makes sense if you want to provide more fine-grained access to the XML content, which WS-Transfer does not do.

Which takes us to the third potential value of WS-Transfer, as a foundational specification on which to build extensions. Of the three this is the only one that I believed in at some point. WS-ResourceTransfer (WS-RT) was the main attempt at doing this. Any service that uses WS-Transfer could, via the magic of the SOAP processing model, offer a more precise/powerful access to the resources. But while this was possible in theory it hasn’t really panned out in practice for many reasons:

  • Some people (hints: Armonk; Blue) pushed hard to put WS-RT instructions in the body rather than in headers, seriously compromising its ability to seamlessly compose with existing SOAP messages.
  • WS-MeX and the Devices Profile typically deal with documents small enough that manipulating them as a whole is rarely a problem. This only leaves WS-Management which has its own “fragment transfer” mechanism so it doesn’t really need a stand-alone mechanism.
  • XQuery is now developing support for an update capability.

What then is left, in the spring of 2008, to justify the need for WS-Transfer as a separate layer, rather than considering it an integral part of WS-Management? Not much. WS-MeX, in an earlier version, used to define its own GET operation and it wouldn’t be any worse off if it had stayed that way (or returned to it). Ditto for the Devices Profile. At this point, it’s mostly a matter of pragmatically cleaning up the mess without creating another one.

In retrospect (color me partially guilty), maybe one shouldn’t use the same architectural rules when attempting to design an interoperable standards stack for an industry as when refactoring a software project. Maybe one should resist the urge to refactor the “code” (or rather the PowerPoint stack) every time one detects the smallest conceptual redundancy. There is a cost in constant changes. There is a cost in specification cross-dependencies. WSDM experienced it first hand with the different versions of WS-Addressing (another dependency that didn’t need to be). WS-Management is seeing it from the perspective of standardization.

1 Comment

Filed under Everything, Microsoft, SOAP, Specs, Standards, WS-Management, WS-ResourceTransfer, WS-Transfer, XQuery

Windows XP Service Pack 3

Microsoft announced SP3 for Windows XP today. This white paper gives an overview of its content. It will ship on April 29th through Windows Update. Many of the updates are related to improved management, which makes sense at this stage of the game for the OS. It also makes sense as an attempt to position the OS against the rising desktop Linux threat. I wanted to see what specific management-related updates were contained. They are:

This is all good but it seems to take a very System Center-centric view of Windows management. There may be some more third-party-friendly improvements in the complete list of updates contained, but the link provided by the white paper (Knowledge Base article 936929) doesn’t seem to work at this time.

This was announced by Chris Keroack, the release manager for SP3, on this forum.

I can’t resist the temptation to translate a few selected sentences from the white paper into common English:

“For customers with existing Windows XP installations, Windows XP SP3 fills gaps in the updates they might have missed—for example, by declining individual updates when using Automatic Updates…”

Translation: You obviously did not know what you were doing when you refused that update last year so we will now force it on you inside a bundle.

“Developing service packs for operating systems like Windows XP, which is nearing its end-of-sales period, is a standard practice, and Microsoft does this for the convenience of its customers and partners.”

Translation: Don’t get too excited you will still eventually have to move to Vista.

“With few exceptions, Microsoft is not adding Windows Vista features to Windows XP through SP3. As noted earlier, one exception is the addition of NAP to Windows XP to help organizations running Windows XP to take advantage of new features in Windows Server 2008.”

Translation: We are not going to let this cut into the sales of Vista, except when not doing it cuts into the sales of Windows Server 2008.

[UPDATED 2008/4/29: Turns out it’s not shipping today on Windows Update, as previously announced, because SP3 breaks Microsoft’s own Retail Management System (RMS) application. As does Vista SP1. I wonder if other vendors can also ask Microsoft to hold a service pack if it breaks their application…]

[UPDATED 2008/5/12: And when it does ship it causes some computers to fail to boot. This page has a lot of information. One of my machines is one of the affected AMD-based HP desktops with an OEM OS, so I am very happy to have seen this before applying the service pack. I think I’ll take my time applying it, in case other things show up in addition to the need to disable the Intel drivers. This KB article covers the same issue.]

Comments Off on Windows XP Service Pack 3

Filed under Everything, Microsoft

It is now safe to steal my identity

Note to whoever stole the laptop of a Fidelity employee two years ago, with personal information (SSN and more) for everyone enrolled in HP’s retirement plan: it is now safe to make use of the information. Congratulations on being patient.

I received an email telling me that the “credit watch” service in which all affected HP employees (and ex-employees) were enrolled for free has expired. Of course, we are invited to start paying Equifax to keep it running. $65 per year (and that’s supposedly a discounted rate, mind you, half the “normal” price) to run a DB query once a week on my behalf. Not bad. I should be in that business.

In what ways is the lost data less dangerous two years later? The “1 or 2 years of free credit watch” offer that is typical after events such as security violations is obviously just a PR move to allow the guilty party to look like they are taking responsibility for their embarrassing display of incompetence. And it probably costs them very little, if anything, to provide this, considering how good a customer acquisition strategy it is for the “credit watch” department of the credit agencies. The fact that Fidelity and their peers don’t have to bear any real cost for this is the reason why it keeps happening.

If I sound a bit detached about this, it’s not that I am not worried about someone impersonating me by using my SSN and birth date. It’s just that I am not more worried about that specific laptop theft than I am about the hundreds of employees at medical offices, dental offices, insurance companies, banks, etc. that already have access to this information.

The solution is to publish every single SSN on a web site and stop pretending they can be used for authentication.

[UPDATED 2008/7/7: One more name in the long list of companies that have (often through a subcontractor) leaked so-called “personal” information about their employees. It’s only news because the employer is Google and anything Google-related is for some reason considered newsworthy. Danny is kind to be appreciative for the one year of free credit monitoring. It probably costs Google close to nothing. Which is why Google and the others don’t really care about the problem.]

1 Comment

Filed under Everything, HP, Identity theft, Security, SSN

Less bloat, more oxygen

I follow Coté for his coverage of the IT management market. He also covers the so-called RIA (“Rich Internet Application”) playground, so through his blog (e.g. this post today) I involuntarily get news and comments about Flash, AIR, Silverlight and other I-hate-the-Web technologies. And I keep thinking “I hope they won’t mess up the Web too much for the rest of us on their way down to failure”.

Every time I run into a “no Flash, no service” site, I have a flashback (if you think the pun is funny then consider it intended) to 1995. That’s when Jean-Michel Jarre (the French musician, of Oxygène fame) launched his first web site, jarre.net (now decommissioned). As a pioneer of electronic music, he was unsurprisingly one of the first artists to use the Web. As someone who likes to illuminate entire cities with laser beams, he was just as unsurprisingly drawn to overkill technology. So his Photoshop-wielding consultant created an entire site where each page was just one big image, with embedded text. It took forever to load and the stupidity of the approach shocked me so much that I remember it 13 years later. All the links were based on server-side image maps: the x/y coordinates of the pixel that you clicked on get sent to the server, where a map links these coordinates to a target URL. The way HTML was at the time, you couldn’t use fancy fonts, colored text and elaborate wrapping (but you could blink!). And we all know that you simply can’t provide dates and locations of upcoming concerts without colored text, twisted fonts and a fancy layout.
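
For readers who never dealt with server-side image maps, here is a minimal sketch of what the handler on the server does. I am writing it as a Java servlet for readability (in 1995 it would have been a CGI script), and the pixel regions and target URLs below are made up for illustration:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // The HTML page contains: <a href="/map"><img src="page.gif" ismap></a>
    // Clicking the image at pixel (x,y) makes the browser request /map?x,y
    public class ImageMapServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            int x = -1;
            int y = -1;
            String query = req.getQueryString(); // e.g. "132,48"
            if (query != null && query.matches("\\d+,\\d+")) {
                String[] xy = query.split(",");
                x = Integer.parseInt(xy[0]);
                y = Integer.parseInt(xy[1]);
            }
            // Map pixel regions of the big image to target URLs (made-up coordinates).
            if (x >= 20 && x < 200 && y >= 50 && y < 90) {
                resp.sendRedirect("/concerts.html");
            } else {
                resp.sendRedirect("/index.html");
            }
        }
    }

All the navigation logic lives on the server, invisible to the browser and to search engines, which is a good part of why the approach aged so badly.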

The Internet Archive doesn’t have a copy of this original Jarre site; I don’t know if it has survived anywhere other than in my scarred-for-life brain. And if you go to JM Jarre’s current site, guess what? It is a Flash-only site. With my non-Flash Firefox, all I get is a black page with a sentence (in French only, and not even grammatically correct) pointing me to the Flash download page. Looking at it with my Flash-enabled IE confirms (after a long wait for the Flash content to download) what I expected: other than a few videos (which could indeed use a simple Flash player embedded in the HTML page), there is no value whatsoever in using Flash for this site. The photos of his 80s haircut would look just as good/bad in HTML.

Just like there are some usages for which image maps are appropriate, there are some for which Flash and friends are the right tool. But if they were only used where they belong, there wouldn’t be nearly as much hype around them. Poor Coté would have to spend more time with boring IT management geeks and less with Flash hipsters.

6 Comments

Filed under Everything, Flash, Off-topic

IGF and GIF: it’s not a typo

With the Oracle announcements at the RSA conference this month (things like Oracle Role Manager and this white paper), the Identity Governance Framework (IGF) is back in the news. And since HP publicly released the Governance Interoperability Framework (GIF) earlier this year, there is some potential for confusion between the two (akin to the OSGi/OGSI confusion). I am not an author or even an expert in either, but I know enough about both that I can at least help reduce the confusion.

They are both frameworks, they are both about governance, they both try to enable interoperability, they both define XML formats, they were both privately designed and they are both pushed by their authors (and supporters) towards standardization. To add to the confusion, Oracle is listed as a supporter of HP’s GIF and HP is listed as a supporter of Oracle’s IGF.

And yet they are very different.

GIF is an attempt to address SOA governance, which mostly relates to the lifecycle of services and their artifacts (like WSDL, XSD and policies), so you can track versions, deployment status, ownership, dependencies, etc. HP is making the specification available to all (here, but you need to register) and has talked about submission to a standards body, but as far as I know this hasn’t happened yet.

IGF is a set of specifications and APIs that pull access policy for identity-related information out of the application logic and into well-understood XML declarations, with the goal of better controlling the flow of such information. The keystones are the CARML specification, used to describe what identity-related information an application needs, and its counterpart, the AAPML specification, used to describe the rules and constraints that an application puts on usage of the identity-related information it owns. The framework also defines relevant roles and service interfaces. Unlike GIF, which is still controlled by HP, IGF is now under the control of the Liberty Alliance Project. Oracle is just one participant (albeit a leading one).

Could they ever meet?

A Web service managed through a GIF-like SOA governance system could have policies related to accessing identity-related information, as addressed by IGF (and realized through CARML and AAPML elements). GIF doesn’t really care about the content of the policies. Studying the positions of the IGF and GIF specifications relative to WS-Policy would be a good way to concretely understand how they operate at different levels. While there could theoretically be situations in which IGF and GIF are both involved, they do not do the same thing and have no interdependency whatsoever.

[UPDATED 2008/4/18: Phil Hunt (co-author of IGF) has a blog where he often writes about IGF. He also wrote a good overview of IGF and its applicability to governance and SOX-style compliance.]

Comments Off on IGF and GIF: it’s not a typo

Filed under Everything, Governance, Identity theft, Oracle, Security, Specs, Standards

Between skinny and bloated

Spring’s Rod Johnson writes today about the future he sees for Java Bloatware (his unkind term for Java EE middleware). Of course, as Mr. Spring, he is far from neutral. Of course he is focusing on a certain class of applications (web-centered, mostly greenfield, which is a huge – and sexy – segment, but not in any way the only kind of application). Of course he underestimates how established technology that works remains in use long after it has ceased to be the optimal solution for new developments. But even taking all that into account, he makes some good points about the proliferation of rarely-used capabilities in Java EE and the associated cost. Most of those points are well understood and are driving the more modular approach taken by Java EE 6, as well as the adoption of OSGi (see here and here for BEA’s example). In addition, as Rod mentions, the JCP now has to share the playground with other framework standardization efforts like SCA.

The most interesting part of Rod’s post from my perspective, is this prediction:

“The market will need to address the gap between Tomcat and WebLogic/WebSphere. Currently an important part of the market is neglected. The majority of Java web applications are most at home on Tomcat. A minority actually want some of the more esoteric functionality of a full-blown application server, such as JCA, or specialized capabilities such as distributed transaction management. But a larger minority need some of the operational and management features of those products, but are not interested in the esoteric APIs and the bloat they bring along with them. As more and more end user companies look to phase out legacy application servers in favor of better suited technologies, there will inevitably be a response to market demand, with products that hit the sweet spot and bridge this gap.”

Right on. This is the second time in a week that we’ve seen an acknowledgment of the importance of application manageability coming from SpringSource. Whether this mid-point demand will be met from the top down, by a more modular Java EE stack, or from the bottom up, by building on top of Tomcat (or some non-Java HTTP server), remains to be seen. The two approaches aren’t mutually exclusive either.

I expect that hosted application frameworks like the recently announced Google App Engine will also aim at that “more than Tomcat, less than J2EE” sweet spot. But the cost/benefit formula of a more full-featured (or “bloated”, if you prefer Rod’s terminology) environment might turn out to be different in a “hosted framework” situation.

1 Comment

Filed under Application Mgmt, Everything, IT Systems Mgmt, JMX, Manageability, Spring, Tech

Comparing the “openness” of standards bodies

Via James Governor, a link to an IDC report that attempts to compare the “openness” of ten standards bodies: CEN, Ecma, ETSI, IETF, ISO, ITU, NIST, OASIS, OMG, and W3C. The report is 92 pages long, which is 91 more than I really want to read on this topic. I skimmed the report until I got to the “concluding remarks” at the end. The bottom line:

“However, there are differences between standard setting organizations in terms of ‘openness’ and certainly in terms of how ‘openness’ is implemented. It can be difficult to make a distinction of which form of ‘openness’ is the most appropriate.”

Sure, but after 92 pages maybe the author could at least propose some useful way to organize the problem rather than just making a laundry list of possible interpretations of “openness”.

Still, if you are in the business of running (or selecting) standards organizations it might be worth your time to read this report.

Bad news for DMTF: you are not important enough to be included. Good news for DMTF: your lack of transparency is not exposed by this report.

Comments Off on Comparing the “openness” of standards bodies

Filed under DMTF, Everything, Standards

SpringSource Application Management Suite

SpringSource has made some recent announcements in an effort to build up its commercial offering on top of the open source Spring framework. There is now a SpringSource Enterprise subscription, which gives you access to an “enterprise” edition of the framework, some support, and the SpringSource Performance Suite.

The first two components (enterprise edition and support) are common approaches to commercial open source.

The Performance Suite is a new product comprising the Tool Suite (for development), an Advanced Pack for Oracle (for better use of Oracle RAC features) and the Application Management Suite (AMS). Application and middleware management is what I care most about, so AMS is the part of the announcement that caught my attention.

The only publicly-accessible source of meaningful information about AMS that I could find is this blog post by Jennifer Hickey. AMS is built on Hyperic. The monitoring is based on collecting, through instrumentation, entry and exit times for monitored methods. The agents then report this to a server. Add to this some discovery capabilities, and the console can then report observed metrics on the discovered/selected resources.
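
To make that mechanism concrete, here is a minimal sketch of entry/exit timing in the Spring world, written as a Spring AOP aspect. This is my own illustration, not AMS code: the pointcut is an assumption, and a real agent would report the measurements to a server rather than print them locally.

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    // Hypothetical timing aspect; the monitored package is made up for illustration.
    @Aspect
    public class MethodTimingAspect {
        @Around("execution(* com.example.service..*(..))") // pointcut is an assumption
        public Object time(ProceedingJoinPoint pjp) throws Throwable {
            long start = System.nanoTime();
            try {
                return pjp.proceed(); // invoke the monitored method
            } finally {
                long micros = (System.nanoTime() - start) / 1000;
                // A real agent would buffer this and ship it to the management server.
                System.out.printf("%s took %d microseconds%n",
                        pjp.getSignature().toShortString(), micros);
            }
        }
    }

Even this toy version hints at where the overhead questions come from: every monitored call pays for the interception and the clock reads, which is why the “very slight” claim deserves real numbers.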

The blog post ends by saying that “we’d like to make it as powerful and easy to use as possible for both Developers and Operations staff”. At this stage, I think it’s a lot more likely to be used for development than for operations. The instrumentation overhead is supposed to be “very slight” but, as always with monitoring, that claim warrants more precise data. Also, it is not clear if/how AMS can integrate with other management tools.

In any case, it’s encouraging to see an open source application development framework which doesn’t entirely focus on ease of development but also acknowledges the full lifecycle of an application (and concerns such as monitoring, as addressed here, but also configuration management, governance, business activity management…). That’s the difference between “the best framework to create an application” and “the best framework to create an application that is expected to be used”. Before open source became a business strategy, a defining characteristic was that the developers were also users of the product, which naturally meant that the tools were heavily biased towards developers and development tasks.

From an operations perspective, the AMS team should focus its efforts on application modeling, metric collection and management integration rather than the dashboard. A simple specialized console is great for application developers. The ability to discover, model, configure and monitor applications in conjunction with the other elements of the IT system (e.g. underlying infrastructure, end-user experience, business processes and other forms of application integration) is what operators really need.

In any case, it will be interesting to test the practical value of “Spring-aware” application management, above and beyond generic Java application management.

Bonus question: the enterprise edition of the Spring framework is “warranted to be virus-free”. Since the enterprise version includes the base framework, if the enterprise version is virus-free, mustn’t the base logically be “virus-free” as well? And what does “virus-free” mean anyway?

6 Comments

Filed under Application Mgmt, Everything, IT Systems Mgmt, JMX, Mgmt integration, Spring