Category Archives: Modeling

Oslo, blog posts and my crystal ball

There is more and more information coming out about Oslo in anticipation of the Microsoft PDC in October.

David Chappell recorded a video about it last month. More recently Doug Purdy and Don Box each posted a short description of Oslo. Don describes the goal of Oslo as “simplify the process of developing, deploying, and managing software”. But when he lists ancestor technologies to illustrate that “Microsoft has been moving in this direction for over a decade now”, they are all about development, not management: COM type libraries, .NET metadata attributes, XAML. Interesting that neither SDM nor SML gets a mention. Neither did SCA by the way, but I wasn’t really expecting that one… :-)

Maybe I am the only one looking for a SDM/SML echo here, just because I came to hear of Oslo through the DSI angle. Am I wrong to see Oslo as an enabler for DSI? This eWeek article doesn’t have anything to do with IT management. Reading it, Oslo is all about allowing people to write code through drag and drop. Yawn. And Don Box endorses the article.

Maybe it’s just me (an IT management guy more than a software development guy) but I don’t care so much about how the application model is created. I care a lot more about what it allows you to do in terms of IT management. Please don’t make me pull out the often-quoted figure about the percentage of IT budget spent on operations versus development/licensing. The eWeek piece fails to excite me, but fortunately David Chappell’s video interview is a lot more aligned with my thinking, so I still hold hopes for Oslo as an IT management enabler. Here is my approximate transcript of an example that David provides (at around 4:20) in the video:

“If someone comes to you and says I’ve got this business process and the SLA is not being met, what do you do? You’ve got to trace this through the right business process and the right application that supports that part of the process and find the machine it runs on and maybe look at the workflow that implements it and maybe look at the services that it provides. This involves talking to business analysts, or the IT pros or the architect or the developer, all of whom have their own view of the world, their own tools, their own perspective. The repository provides a common place to store all this stuff, to link it all together, and with a visual editor to have a common tool that lets you actually go through and answer these kinds of questions.”

Now you’re talking.

And if Oslo is not the new blood of DSI, then what is? The DSI story is getting dated, SML is fading in our memories and of the three parts that supposedly compose DSI (“virtualized infrastructure, design for operations, and knowledge-driven management”), only virtualization is actually represented on the list of technologies on the DSI home page. Has DSI turned into just allowing System Center to manage a hypervisor? I still hold hopes that the Oslo data is going to spice things up there. It would be good for the industry at large, not just Microsoft.

I won’t be at the PDC but it will be interesting to see what filters out of these sessions. The first session in the list adds management of hybrid application systems (hybrid as in “cloud/on-premise combination” or “software+services” as Microsoft calls it) to the long “can do” list for Oslo. Impressive, if there is some meat behind the abstract. I think this task is often overlooked in discussions around management aspects of Cloud computing (see “the new, interesting thing is going to be the IT infrastructure to manage your usage of utility computing services as well as their interactions with your in-house software” in this previous entry).

Yes, I am reading way too much into session abstracts, but while I am at it I can’t help noticing that there is a lot of SQL and very little XML/XSD/XPath mentioned there. Even though one of the presenters is Gudge, the only person I have ever met who fully understands XSD (actually even he doesn’t, I’ve seen him in the WS-I days have to refer to… his book).

Even though I am sure we’ll be told that SML can be built on top of Oslo, the SQL orientation won’t make that so easy (I want to see how to build XSD+Schematron validation on top of a relational store using Oslo’s drag and drop development tool). And it puts Microsoft on a different architectural direction from IBM, who, as far as I can tell, thinks that the world is a big XML document. Neither is the most appropriate for IT management models. I prefer a graph model and associated graph queries along the lines of SPARQL or CMDBf.
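
To make that preference concrete, here is a minimal sketch (Python, using the rdflib package; the namespace and item names are made up) of the kind of graph query I have in mind, along the lines of what SPARQL or a CMDBf graph query would express:

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/itmodel/")  # made-up namespace

    g = Graph()
    # A toy IT model: a process, the servers it touches, a service it calls
    g.add((EX.orderProcess, RDF.type, EX.BPELProcess))
    g.add((EX.orderProcess, EX.runsOn, EX.appServer1))
    g.add((EX.orderProcess, EX.invokes, EX.creditService))
    g.add((EX.creditService, EX.runsOn, EX.appServer2))

    # The graph query: everything the process depends on, at any depth
    results = g.query("""
        PREFIX ex: <http://example.org/itmodel/>
        SELECT DISTINCT ?dep WHERE {
            ex:orderProcess (ex:runsOn|ex:invokes)+ ?dep .
        }""")
    for row in results:
        print(row.dep)  # appServer1, creditService, appServer2

The point is the shape of the query: start from an item and follow typed relationships to any depth, something that is painful to express against either a relational schema or one big XML document.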

But that’s just late-night idle speculations on my part (aka “blogging”). Let’s see what comes out in October.

[UPDATED 2008/9/10: Interesting timing. Microsoft is joining OMG, home of UML and BPMN. Coming next: a submission of a “new version” of UML and BPMN that happens to contain the extensions and tweaks that Microsoft made to them in the process of implementing Oslo. This, BTW, is the final nail in the SML coffin (SML isn’t even mentioned in the press release).]


Filed under Application Mgmt, CMDBf, Conference, Desired State, Everything, Graph query, IT Systems Mgmt, Mgmt integration, Microsoft, Middleware, Modeling, Oslo, Query, SaaS, SCA, SML, SPARQL, Specs, Tech, Trade show, Utility computing, Virtualization

Oracle acquires ClearApp for composite application management

Oracle (and more specifically the middleware and applications management part of Oracle Enterprise Manager) has just acquired ClearApp. The company is based in Mountain View (California) and their QuickVision product is a very advanced management tool for composite applications, especially BPEL-based and Portal-based applications.

More information about the acquisition is available from this page and the press release. Information about the QuickVision product can be found on the ClearApp site.

QuickVision is a very complementary addition to our existing products and the acquisitions that we have made over the last year in the application management domain. Let’s take a performance management use case to see how they relate to one another conceptually (this is not an integration roadmap, just a comparison of the features of the existing products): Oracle Real User Experience Insight (from the Moniforce acquisition) will tell you that your users are seeing a performance degradation for a specific function of your Web application. If this is a stand-alone Java application, you can go straight into the Enterprise Manager App Server Diagnostic Pack to start from a URL and analyze where processing time is spent (servlet, JSP, EJB, JDBC…). AD4J (from the Auptyma acquisition) provides deep insight into the JVM. It will give you the line number and call stack of the slow methods. For example, it might lead you to a specific database call that is taking a long time to return. You can then follow the trail deep into the database using the Oracle Database Diagnostic and Tuning packs.

But if your application is a composite application (for example one that makes use of a BPEL process to orchestrate services deployed on different application servers), then you would have a hard time finding which application server to focus on. The QuickVision product fills that gap, taking a BPEL process from its invocation point into all its successive steps and into the code that the different steps invoke. So you can see if the problem is within the BPEL execution (e.g. you loop too many times) or inside an invoked Web service. In that case, QuickVision will lead you to the class that implements that service, at which point you have all the context that you need to fire off AD4J and do a fine-grained analysis of the problematic Java code as described above.

In this scenario (and assuming that the root cause is the slowness of a database query executed by a web service that has been invoked through a BPEL process), the chain of management capabilities goes something like this:

User Experience Insight
    -> QuickVision
        -> App Server Diagnostic Pack
            -> Database management packs

A variation on this would be if the service monitored was a SOAP service as opposed to a Web page. Oracle Web Services Manager could then be used as an alternative to Real User Experience Insight to alert you that something was amiss with the application performance. The rest of the flow would be the same.

In the end, it’s not just about managing Web services or Web sites, it’s about managing the whole SOA application.

Of course, QuickVision is not limited to performance analysis, even though that’s my favorite feature. For example, I could have picked a dependency analysis scenario.

To my new colleagues joining us from ClearApp, welcome!

[UPDATED 2008/9/9: InfoQ coverage of the acquisition by Dilip Krishnan.]


Filed under Application Mgmt, BPEL, Business Process, Everything, IT Systems Mgmt, Manageability, Mashup, Mgmt integration, Modeling, Oracle

BPEL as a source of application management metadata

Let’s put aside for now all the discussions about whether BPEL is an appropriate tool to capture a “true” business process, i.e. to implement the business logic understood by a business analyst (a topic that has been discussed at length already, including here, here, here, here, here, here and around the 5 minute mark of this podcast). Today, let’s look at it as simply another resource in a developer’s toolbox, alongside things like servlets and XML parsers. It’s a tool that can simplify the invocation of remote services (especially asynchronously), the parallelization of tasks, the definition of scoped compensation handlers, the transformation of XML, the encapsulation of key business logic and, most importantly, the reliable implementation of long-lived processes. If you need a few of these features, you might find BPEL a suitable programming tool. Plus, it refreshingly encourages handling of XML as XML (e.g. via XPath) rather than mindless code generation.

In addition to whatever developer productivity benefit you see in BPEL, there are other potential benefits from using it. They are the topic of this post and they relate to application management.

We all know that in an ideal world, no developer would release an application without providing a set of management capabilities that are carefully crafted to reflect the business logic of the application. Such that IT administrators can monitor, configure, optimize and troubleshoot the application in ways that are related to what the application really does (as opposed to generic metrics like memory, CPU and I/O…).

Back in the real world, this is of course rarely the case. Enter BPEL. Just by virtue of using it in a reasonable way, and without any “just for the ops guys” metadata, BPEL provides a management model for the application. Sure, it’s not as good as a hand-crafted management model, but at least it’s there. And it has some pretty compelling properties:

  • It feeds directly from the metadata used by the runtime, so it is guaranteed to be accurate (unlike metadata that is created specifically for management but has no role in the actual runtime).
  • It shows what external services the application depends on. Of course there is no guarantee that all remote invocations will be represented in the BPEL process, but since that’s a strength of BPEL it is reasonable to expect that it provides a good view of application dependencies (to be complemented, of course, by the application infrastructure dependencies like the database and the BPEL engine itself…). Remote invocations are a common point of failure and/or performance problems so they are a first-class citizen of an application management model.
  • It explicitly captures process instances. No more jumping from one database table to another (assuming you even know where to look) to try to get a sense of the current overall status. The BPEL instances show the number of in-flight transactions in the application. It is also easy to compare the initialization and termination rates to see the trend.
  • It provides a horizontal segmentation of the processing tasks (via the BPEL activities) that is a good complement to the vertical segmentation often offered by application management tools (e.g. time spent in the database, time spent waiting on I/O, etc…).
  • It makes explicit certain exception conditions.

All these only make use of very basic aspects of BPEL: the enumeration of PartnerLinks, the notion of a process instance, the existence of activities, the fault/compensation/termination handlers. A fair amount of visibility into the health of the application can be derived from this alone. I am not making fancy assumptions about the management tool being able to make sense of the routing logic in the process or of the correlation rules. I am not assuming that the BPEL engine provides ways to control individual process instances. I am not assuming that the name attributes of certain elements (e.g. PartnerLink, variable) convey semantics that could help the administrator understand some of the semantics of the application.
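
As an illustration of how little BPEL machinery this requires, here is a rough sketch (Python; it assumes a WS-BPEL 2.0 process document and the function name is mine) of a management tool scraping those basic aspects out of a process definition:

    import xml.etree.ElementTree as ET

    # WS-BPEL 2.0 executable process namespace
    BPEL = "{http://docs.oasis-open.org/wsbpel/2.0/process/executable}"

    def management_model(bpel_file):
        """Derive a crude application management model from a BPEL definition."""
        root = ET.parse(bpel_file).getroot()

        # External dependencies: one per PartnerLink
        dependencies = [pl.get("name") for pl in root.iter(BPEL + "partnerLink")]

        # Horizontal segmentation: the remote invocations, i.e. the usual
        # suspects for failures and performance problems
        remote_calls = [(i.get("name"), i.get("partnerLink"))
                        for i in root.iter(BPEL + "invoke")]

        # Explicit exception conditions
        fault_handlers = sum(1 for _ in root.iter(BPEL + "catch"))

        return {"dependencies": dependencies,
                "remote_calls": remote_calls,
                "fault_handlers": fault_handlers}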

In the end, it’s not about managing BPEL, it’s about managing an application that uses BPEL.

My point is not to push everyone to write any application as a BPEL process (or a set of them) as a way to get a great management infrastructure for free. But if BPEL is a potential choice for the application, then it’s worth considering those extra benefits in the “pros and cons” analysis. And if you have already decided to use BPEL, it may be worth looking into what management dividends you can harvest from this choice. Of course your mileage may vary depending on how manageable your BPEL infrastructure is. Hint, hint.

A few related links. Todd Biske has also written about the management value of BPEL, here and here. A similar analysis can be applied to SCA, but at this point in time there are many more applications out there that use BPEL than SCA, making the former more relevant. I briefly described the SCA side of the equation in an earlier exchange with David Chappell. That discussion is summarized here (including a pointer to David’s original piece). In an earlier post, I touched on the manageability potential of other sources of application metadata, like OSGi and Spring (in addition to SCA and BPEL). Jean-Jacques Dubray provided additional context at InfoQ.

[UPDATED 2008/9/2: Based on this announcement, I can add one more hint.]


Filed under Application Mgmt, BPEL, Business Process, Everything, IT Systems Mgmt, Manageability, Modeling

WS Resource Access at W3C: the good, the bad and the ugly

As far as I know, the W3C is still reviewing the proposal that was made to them to create a new working group to standardize WS-Transfer, WS-ResourceTransfer, WS-Enumeration and WS-MetadataExchange. The suggested name, “Web Services Resource Access Working Group” or WS-RAWG, is likely, if it sticks, to end up being shortened to WS-RAW. Which is a bit more cruel than needed. I’d say it’s simply half-baked.

There are many aspects to the specifications and features covered by the proposal. Some goodness, some badness and some ugliness. This post analyzes the good, points at the bad and hints at the ugly. Like your average family-oriented summer movie.

The good

The specifications proposed for W3C standardization describe a way to provide some generally useful features for SOAP messages. Some SOAP messages can get very long. In some cases, I know ahead of time what portion of the long messages promised by the contract (e.g. WSDL) I want. Wouldn’t it be nice, as an optimization, to let the message sender know about this so they can, if they are able to, filter down the message to just the part I want? Alternatively, maybe I do want the full response but I can’t consume it as one big message so I would like to get it in chunks.

You’ll notice that the paragraph above says nothing about “resources”. We are just talking about messaging features for SOAP messages. There are precedents for this. WS-Security can be used to encrypt a message. Any message. WS-ReliableMessaging can be used to ensure delivery of a message. Any message. These “quality of service” specifications are mostly orthogonal to the message content.

WS-RT and WS-Enumeration provide solutions to “message filtering” and “message chunking”, respectively. But they only address them in the context of a GET-like operation. They can’t be layered on top of any SOAP message. How useful would WS-Security and WS-ReliableMessaging be if they had such a restriction?
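
To make the filtering feature concrete, here is a toy sketch (Python with lxml; the message content is invented) of the pare-down step that, today, can only happen on the receiving end after the full message has crossed the wire:

    from lxml import etree

    # A long response of which the requester only wants one element
    full_response = etree.fromstring(b"""
    <inventory>
      <server name="app1"><os>Linux</os><patches>(long list)</patches></server>
      <server name="app2"><os>Windows</os><patches>(long list)</patches></server>
    </inventory>""")

    # The proposed extension would carry an expression like this in the
    # request, so the *sender* could trim the message before shipping it
    wanted = full_response.xpath('/inventory/server[@name="app1"]/os')
    print(etree.tostring(wanted[0]).decode())  # <os>Linux</os>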

If W3C takes on part of the work listed in the proposal, I hope they’ll do so in a way that extends the utility of these features to all SOAP messages.

And just like WS-Security and WS-ReliableMessaging, these features should be provided in a way that leverages the SOAP processing model. Such that I can judiciously use the soap:mustUnderstand header to not break existing services. If I’d like the message to be pared down but I can handle the complete message if need be, I’ll set this attribute to false. If I can’t handle the full message, I’ll set the attribute to true and I’ll get an error if the other party doesn’t understand this extension. At which point I can pick an alternative way to get the task accomplished. Sounds pretty basic but it’s amazing how often this important feature of SOAP (which hails from and extends XML’s must-ignore semantics) is neglected and obstructed by designers of SOAP messages.
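
And for the record, here is roughly what such a header could look like on the wire, sketched in Python with lxml (the Filter header and its namespace are hypothetical; the soap:mustUnderstand attribute is not):

    from lxml import etree

    SOAP = "http://www.w3.org/2003/05/soap-envelope"  # SOAP 1.2
    EXT = "http://example.org/filter"                 # hypothetical extension

    env = etree.Element(f"{{{SOAP}}}Envelope", nsmap={"soap": SOAP, "f": EXT})
    header = etree.SubElement(env, f"{{{SOAP}}}Header")
    etree.SubElement(env, f"{{{SOAP}}}Body")

    # Hypothetical filtering header. mustUnderstand="false" means "pare the
    # response down if you can, send it whole otherwise"; "true" means
    # "fault if you don't support this, I can't take the full message".
    filt = etree.SubElement(header, f"{{{EXT}}}Filter")
    filt.set(f"{{{SOAP}}}mustUnderstand", "false")
    filt.text = '/inventory/server[@name="app1"]/os'

    print(etree.tostring(env, pretty_print=True).decode())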

And then there is WS-MetadataExchange. While I am not a huge fan of this specification, I agree with the need for a simple, reliable way to retrieve different types of metadata for an endpoint.

So that’s the (potential) good. A flexible and generally useful way to pare down long SOAP messages, to chunk them and to retrieve metadata for SOAP endpoints.

The bad

The bad is the whole “resource access” spin. It is not actually intrinsically bad. There are scenarios where such a pattern actually fits. But the way that pattern is being addressed by WS-RT and friends is overly generalized and overly XML-centric. By the latter I mean that it takes XML from an agreed-upon on-the-wire interchange format to an implicit metamodel (e.g. it assumes not just that you agree to exchange XML-formatted data but that your model and your business logic are organized and implemented around an XML representation of the domain, which is a much more constraining requirement). I could go on and on about this, especially the use of XPath in the PUT operation. In fact I did go on and on with it, but I spun that off as a separate entry.

In the context of the W3C proposal at hand, this is bad because it burdens the generally useful features (see the “good” section above) with an unneeded and limiting formalism. Not to mention the fact that W3C kind of already has its resource access mechanism, but I’ll leave that aspect of the question to Mark and various bloggers (see a short list of relevant posts at the end of this entry).

The resource access part might be worth doing (one more time), but probably not in the same group as things like metadata discovery, message filtering and message chunking, which are not specific to “resource access” situations. And if someone is going to do this again, rather than repeating the not too useful approaches of the past, it may be good to consider alternatives.

The ugly

That’s the politics around this whole deal. There is, as you would expect, a lot more to it than meets the eye. The underlying drivers for all this have little to do with REST/WS or other architecture considerations. They have a lot to do with control. But that’s a topic for another post (maybe) when more of it can be publicly discussed.

A lot of what I describe in this post was already explained in the WS-ManagementHammer post from a couple of months ago. But that was before the W3C proposal and before WS-MetadataExchange was dragged into the deal. So I thought it might be useful to put the analysis in the context of that proposal. And BTW, this is a personal opinion, not an Oracle position (which is true in general for everything on this blog but is worth repeating specifically for this post).


Filed under Everything, Grid, IT Systems Mgmt, Manageability, Mgmt integration, Modeling, SOAP, SOAP header, Specs, Standards, Tech, W3C, WS-Management, WS-ResourceTransfer, WS-Transfer, XMLFrag, XPath

Who needs XPath fragment-level PUT?

WS-Management and WS-ResourceTransfer (WS-RT) both provide a mechanism to modify the XML representation of the state of a resource in a fine-grained way. The mechanisms differ a bit: WS-Management defines a SOAP header and distinguishes PUT from DELETE at the WS-Transfer operation level, while WS-RT uses the SOAP body and tunnels “modes” (remove, modify, insert) on top of the PUT WS-Transfer operation. But in their complete form both use XPath to point to any arbitrary nodeset and update it.

WS-ResourceProperties (WS-RP) takes a simpler approach. While it too supports XPath-driven retrieval of the content, it doesn’t attempt to provide an XPath-like level of flexibility when it comes to updating the content. All it offers is SET, INSERT, UPDATE and DELETE operations at the level of a property (a top-level child of the XML representation) and nothing more granular.

In this respect at least, WS-RP makes a better choice than its competitor and its aspiring successor.

First, XPath-driven updates sound easy but in fact are hard to specify. Not surprisingly, the current specifications do a pretty incomplete job at it. They often seem to assume that the XPath used to target the value to change returns only one node, but nothing guarantees this. If it picks up more than one node, do you replace all these nodes by the new values as a block (the new values get inserted once, presumably at the location of the first selected node) or do you replace each selected node by all the new values (in which case they get duplicated as needed)? Also, the specifications say nothing about what constitutes compatibility between the targeted nodes and the replacement nodes. One might assume that a “don’t be stupid” approach is all that’s needed. But there is no obvious line between “stupid” and “useful”. Does a request to replace a text node by an attribute node make sense? Not in a strongly-typed world, but a more forgiving implementation might just insert the text value of the attribute in the place of the text node to get to a valid result. What about replacing an element by a text node? Some may reject it for incompatible types but, unless the schema prevents mixed content, it may well result in a perfectly valid document. All in all, specifying a reliable way to edit XML is a pretty hairy task. Much harder than reading XML. It requires very careful consideration that has very little to do with on-the-wire protocol concerns. Which is why doing this as part of a SOAP specification is a strange choice. The XQuery group is much more qualified for this. There must be a reason why that group decided to punt on this until they had taken care of the easier “read” case.
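
If that sounds theoretical, here is the ambiguity in a dozen lines (Python with lxml; the document is invented). Nothing in the specifications says which of the two readings is the right one:

    from lxml import etree

    doc = etree.fromstring(
        b"<config><timeout>30</timeout><timeout>60</timeout></config>")
    targets = doc.xpath("//timeout")  # surprise: TWO nodes, not one
    replacement = b"<timeout>45</timeout>"

    # Reading 1: the selected nodes are replaced, as a block, by one copy
    # of the new value -> <config><timeout>45</timeout></config>
    # Reading 2: EACH selected node is replaced by the new value
    # -> <config><timeout>45</timeout><timeout>45</timeout></config>

    # Both are perfectly implementable; here is reading 2:
    for t in targets:
        t.getparent().replace(t, etree.fromstring(replacement))
    print(etree.tostring(doc).decode())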

Second, it’s usually not all that useful anyway. Which is why the lack of precision in WS-Management’s specification of the fragment PUT hasn’t really been a problem so far: people haven’t fully implemented that feature. A lot of the implementations are backed by a CIMOM, an MBean or some other OO store. In these stores, the exposed granularity is typically at the attribute level. The interactions used by programmers and consoles are also at that level. The XPath-driven update is then only used as a mechanism to update many properties at once (rather than going deep into individual properties) but that’s using a machine gun to kill a fly. The WS-RP approach supports these use cases without calling on XPath.

Third, XPath-driven PUT is really hard to implement unless your back-end store happens to be an XML database. You may end up having to write your own XPath parser and interpreter, an exercise during which you will face some impedance mismatches. Your back-end store may not have notions of property order for example, or attribute versus element. How do you handle these XPath instructions? And what kind of interoperability results from implementers having to make these decisions on their own? Implementing XPath selection on a GET is a lot simpler. All it assumes is that there is an XML serialization of the result, on which you can run the XPath expression before shipping it out. That XML serialization is a given in the SOAP world already. But doing an XPath-driven PUT injects XML considerations in your store itself, not just in the communication path.

Those are the practical reasons. In short, it makes the specifications at best complex and at worst non-interoperable, for a feature that is rarely needed. That should be enough already, but there are some architectural reasons to stay away too.

WS-Transfer is sometimes sold for REST over SOAP. And fragment-level WS-Transfer (what WS-Management and WS-RT do) is then REST on steroids. Sorry, not true. REST on crack if anything.

I am not a REST expert, but I know enough to understand that “everything has a URI” really means “anything meaningful has a URI”. It’s the difference between a crystal structure and a pile of mud. REST lets you interact directly with any node in the crystal, but there is a limited number of entities that are considered worthy of being a node. There is design involved (sorry, you can’t suddenly fire your architects, as attractive as that sounds). You can’t point to the space between two nodes in the crystal. XPath-on-top-of-WS-Transfer, on the other hand, lets you plunge your spoon anywhere in the pile of mud and scoop out whatever happens to be there.

Let’s take a look at WS-Federation (here is the latest draft), the only specification in a standard body that I know of that is currently using WS-RT. Whether it’s a wise choice or not for them, from a governance perspective, is a separate topic that I won’t cover here (answer: no. oops).

From a technical perspective, it is interesting to see how they went about using WS-RT PUT. They use it to update pseudonyms. But even though there is an XML representation for the pseudonyms, they don’t want to allow users to update any arbitrary part of that XML. So they create a specific dialect (the fed:FilterPseudonyms defined in section 6.1) that lets you, based on semantics that are meaningful in the specific domain covered by the specification, point to pseudonyms.

I believe most potential users of WS-RT PUT are in the same case as WS-Federation and are better served by a domain-specific way to identify entities of interest. At least the WS-Federation authors realized it rather than saying “great, WS-RT XPath fragment PUT gives us all this flexibility for free” and saddling their implementers with the impossible task of producing interoperable implementations. Of course this begs the question of why WS-Federation uses WS-RT in the first place. A charitable interpretation is to pin this on overzealous re-use of all things WS-*. A more cynical interpretation sees this as a contrived precedent manufactured in an attempt to “prove” that WS-RT provides features of general use rather than specific to the management domain.

Having described at length why XPath-driven updates aren’t as useful as they may seem, I can still think of two cases where such a generic mechanism to modify an XML document could be useful. One is if the resource actually is a document (as opposed to having its state represented by a document). For example, a wiki page. But I haven’t exactly noticed wiki creators and users clamoring for wiki-over-SOAP, have you? The other situation is if you have a true model-driven system that is supported by a comprehensive system description and validation framework. The kind of thing that SML is trying to deliver. By using Schematron (rather than just XSD, which is very limited in its expressivity beyond mere syntactical validation) to provide model validation. This would, in theory, allow the requester to validate the updated model before sending the change request. The change would still be validated on the receiver side (either explicitly or implicitly because a non-valid new model would simply fail when applied to the system), but the existence of the validation framework guarantees a high rate of success (the sender would rarely send non-valid change requests). That’s very nice and exciting, but we don’t have this. SML is, as far as I can see, going nowhere fast in terms of adoption. Standardizing a model exchange protocol for that use case is, at this point in time, premature. Maybe one day.
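
For what it’s worth, the validation half of that story can be demonstrated today. Here is a small sketch (Python, using lxml’s ISO Schematron support; the model and the constraint are made up) of the check a sender could run before issuing a change request:

    from lxml import etree
    from lxml.isoschematron import Schematron

    # A constraint XSD alone cannot express: a cluster must contain at
    # least as many nodes as its declared minimum size
    sch = Schematron(etree.fromstring(b"""
    <schema xmlns="http://purl.oclc.org/dsdl/schematron">
      <pattern>
        <rule context="cluster">
          <assert test="count(node) &gt;= number(@minSize)">
            a cluster needs at least minSize nodes
          </assert>
        </rule>
      </pattern>
    </schema>"""))

    proposed_model = etree.fromstring(b'<cluster minSize="2"><node/></cluster>')
    print(sch.validate(proposed_model))  # False: caught before sending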


Filed under Everything, IT Systems Mgmt, Mgmt integration, Modeling, REST, SML, Specs, Standards, WS-Management, WS-ResourceTransfer, WS-Transfer, XPath, XQuery

Moving towards utility/cloud computing standards?

This Forbes article (via John) channels 3Tera’s Bert Armijo’s call for standardization of utility computing. He calls it “Open Cloud” and it would “allow a company’s IT systems to be shared between different cloud computing services and moved freely between them”. Bert talks a bit more about it on his blog and, while he doesn’t reference the Forbes interview (too modest?), he points to Cloudscape as the vision.

A few early thoughts on all this:

  • No offense to Forbes but I wouldn’t read too much into the article. Being Forbes, they get quotes from a list of well-known people/companies (Google and Amazon spokespeople, Forrester analyst, Nick Carr). But these quotes all address the generic idea of utility computing standards, not the specifics of Bert’s project.
  • Saying that “several small cloud-computing firms including Elastra and Rightscale are already on board with 3Tera’s standards group” is ambiguous. Are they on-board with specific goals and a candidate specification? Or are they on board with the general idea that it might be time to talk about some kind of standard in the general area of utility computing?
  • IEEE and W3C are listed as possible hosts for the effort, but they don’t seem like a very good match for this area. I would have thought of DMTF, OASIS or even OGF first. On the face of it, DMTF might be the best place but I fear that companies like 3Tera, Rightscale and Elastra would be eaten alive by the board member companies there. It would be almost impossible for them to drive their vision to completion, unlike what they can do in an OASIS working group.
  • A new consortium might be an option, but a risky and expensive one. I have sometimes wondered (after seeing sad episodes of well-meaning and capable start-ups being ripped apart by entrenched large vendors in standards groups) why VCs don’t play a more active role in standards. Standards sound like the kind of thing VCs should be helping their companies with. VC firms are pretty used to working together, jointly investing in companies. Creating a new standard consortium might be too hard for 3Tera, but if the VCs behind 3Tera, Elastra and Rightscale got together and looked at the utility computing companies in their portfolios, it might make sense to join forces on some well-scoped standardization effort that may not otherwise be given a chance in existing groups.
  • I hope Bert will look into the history of DCML, a similar effort (it was about data center automation, which utility computing is not that far from once you peel away the glossy pictures) spearheaded by a few best-of-breed companies but ignored by the big boys. It didn’t really take off. If it had, utility computing standards might now be built as an update/extension of that specification. Of course DCML started as a new consortium and ended as an OASIS “member section” (a glorified working group), so this puts a grain of salt on my “create a new consortium and/or OASIS group” suggestion above.
  • The effort can’t afford to be disconnected from other standards in the virtualization and IT management domains. How does the effort relate to OVF? To WS-Management? To existing modeling frameworks? That’s the main draw towards DMTF as a host.
  • What’s the open source side of this effort? As John mentions during the latest Redmonk/Willis IT management podcast (starting around minute 24), there needs to be an open source side to this. Actually, John thinks all you need is the open source side. Coté brings up Eucalyptus. BTW, if you want an existing combination of standards and open source, have a look at CDDLM (standard) and SmartFrog (implementation, now with EC2/S3 deployment).
  • There seems to be some solid technical raw material to start from. 3Tera’s ADL, combined with Elastra’s ECML/EDML, presumably captures a fair amount of field expertise already. But when you think of them as a starting point to standardization, the mindset needs to switch from “what does my product need to work” to “what will the market adopt that also helps my product to work”.
  • One big question (at least from my perspective) is that of the line between infrastructure and applications. Call me biased, but I think this effort should focus on the infrastructure layer. And provide hooks to allow application-level automation to drive it.
  • The other question is with regards to the management aspect of the resulting system and the role management plays in whatever standard specification comes out of Bert’s effort.

Bottom line: I applaud Bert’s efforts but I couldn’t sleep well tonight if I didn’t also warn him that “there be dragons”.

And for those who haven’t seen it yet, here is a very good document on the topic (but it is focused on big vendors, not on how smaller companies can play the standards game).

[UPDATED 2008/6/30: A couple hours after posting this, I see that Coté has just published a blog post that elaborates on his view of cloud standards. As an addition to the podcast I mentioned earlier.]

[UPDATED 2008/7/2: If you read this in your feed viewer (rather than directly on vambenepe.com) and you don’t see the comments, you should go have a look. There are many clarifications and some additional insight from the best authorities on the topic. Thanks a lot to all the commenters.]


Filed under Amazon, Automation, Business, DMTF, Everything, Google, Google App Engine, Grid, HP, IBM, IT Systems Mgmt, Mgmt integration, Modeling, OVF, Portability, Specs, Standards, Utility computing, Virtualization

More clues on the Oslo/SCA/SML trail: it’s “D”

I just found out that I completely missed some interesting information about Oslo-related efforts at Microsoft. Back in February, Mary-Jo Foley reported on a new modeling language (code-name “D”, apparently) that is part of this initiative. And more recently she reported that David Chappell gave a presentation about Oslo (and more generally Microsoft’s SOA plans) at TechEd. He reportedly said that we should expect a new “schema language” (which Mary-Jo thinks is “D”). What I want to know is what its relationship is with SML/SDM and SCA.

Mary-Jo might not know about SCA and SML but I know that David does. He wrote this white paper about SCA and an article arguing that “Microsoft Should Not Support SCA” (based on a questionable assessment that SCA is only about portability). He and I also had a little back-and-forth about SCA, SML and Microsoft in the comments section of his post. Unfortunately, David hasn’t blogged about Microsoft’s SOA strategy for a while for us non-TechEd people.

In addition to Mary-Jo’s report, the only information I was able to quickly dig out about David’s presentation is this blog post on Microsoft’s Israel site. Looks like David gave the same presentation at TechEd Israel 2008. Anyone who understands Hebrew care to translate the post? Fortunately there is a two-minute video (also available here) in which we can hear David talk (in English). During the second of the two minutes you’ll hear and see something that could come straight out of a SCA presentation…

For some reason, David’s TechEd Israel presentation doesn’t seem to be listed here and TechEd online tells me that “Featured videos are unavailable at this time”. That’s both for IT Professionals and Developers. But of course they forced me to install Silverlight before telling me that.

[UPDATED 2008/8/11: Here is a 14 minutes video interview of David Chappell providing an update on Oslo.]


Filed under Application Mgmt, Automation, Conference, Desired State, Everything, IT Systems Mgmt, Mgmt integration, Microsoft, Modeling, Oslo, SCA, SML, Tech

Mapping CIM associations to CMDBf relationships

This post started as a comment on the blog of Van Wiles. When it became too long (and turned into a therapeutic rant at the end) I turned it into a blog post of its own. Please, read Van’s post first. Here is my response to him:

Hi Van. Sounds like what you are after is not a mapping of the CIM_Dependency association to a CMDBf record type (anyone can make up such a mapping as you point out), but a generic algorithm to map any CIM association to a corresponding CMDBf relationship record type. Correct? That algorithm needs to handle the fact that the CIM metamodel has the concept of relationship roles while the CMDBf metamodel doesn’t.

Here is a possible such mapping:

  1. Take a CIM association (called “myAssociation”) that has two roles (called “thisOne” and “theOtherOne”).
  2. Take the item whose role name comes first alphabetically and make it the source (in this example, it is “theOtherOne”)
  3. Take the item whose role name comes second alphabetically and make it the target (in this example, it is “thisOne”)
  4. Generate a CMDBf record type called “{associationName}_from_{firstRoleNameAlphabetically}_to_{secondRoleNameAlphabetically}”

You’re done. The new CMDBf record type is “myAssociation_from_theOtherOne_to_thisOne”, the source is the item with the role “theOtherOne” and the target is the item with the role “thisOne”. Everyone who follows this algorithm (of course it needs to be formally defined and evangelized, there is no guarantee here unless we bake CIM-specific concepts in the core CMDBf specification, which would be a mistake) will produce the same CMDBf relationship record type for a given CIM association.

Applied to the CIM_Dependency example, this would generate a “CIM_Dependency_from_Antecedent_to_Dependent” CMDBf record type, in which the source is the CIM Antecedent and the target is the CIM Dependent.
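
In code, the mapping is as trivial as it sounds. A minimal sketch (Python; the function name is mine):

    def cim_to_cmdbf(association, role_a, role_b):
        """Map a two-role CIM association to a deterministic CMDBf
        relationship record type (the role that sorts first is the source)."""
        source_role, target_role = sorted([role_a, role_b])
        record_type = f"{association}_from_{source_role}_to_{target_role}"
        return record_type, source_role, target_role

    print(cim_to_cmdbf("CIM_Dependency", "Antecedent", "Dependent"))
    # ('CIM_Dependency_from_Antecedent_to_Dependent', 'Antecedent', 'Dependent')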

Alternatively, you can have the algorithm generate two CMDBf relationship record types (one going in each direction) for each CIM association. So you don’t have to arbitrarily pick the first one (alphabetically) as the source. But then you need to have model metadata to capture the fact that these relationships are the inverse of one another (and imply one another). As you well know, I have been advocating for the use of RDF/RDFS/OWL in CMDBf for a while. :-)

In the end, there are three potential approaches:

1) Someone (the CMDBf group or someone else) creates an authoritative mapping for all CIM associations (or at least all the useful ones) and we expect anyone who uses the CIM model with CMDBf to use that mapping.

2) Someone (again, the CMDBf group or someone else) defines a normative CIM to CMDBf mapping, e.g. the one above, and we expect anyone who generates a CMDBf relationship record type from a CIM association to use this mapping algorithm. From a pure logical perspective, it is the same as defining a CMDBf record type for each CIM association (approach 1), but it is less work and it doesn’t have to be updated every time a CIM association is created/versioned. At the cost of uglier (more arbitrary) CMDBf record types being defined.

3) We let people define the relationships in whatever way they choose and we provide a model metadata framework (aka ontology language) to allow mappings between these approaches. For example, you define, in your namespace, a van:CIM-inspired-dependency CMDBf record type that goes from antecedent to dependent. Separately, I define, in my namespace, a william:CIM-like-dependency CMDBf record type that carries the same semantics (defined, not so precisely BTW but that’s a different topic, by CIM) except that its source is the dependent and its target is the antecedent. The inverse of yours. A suitable ontology language would allow someone (you, me, or a third party who has to assemble a system that uses both relationship types) to assert that mine is the inverse of yours. Once this assertion is captured, a request for any [A]—(van:CIM-inspired-dependency)—>[B] would also return the instances of [B]—(william:CIM-like-dependency)—>[A] because they are known to be the same. And you know how I am going to conclude, of course: OWL (specifically owl:inverseOf) provides just this.
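
Here is that scenario end to end, sketched in Python (assuming the rdflib package plus the owlrl OWL-RL reasoner; the namespaces and items are made up):

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL
    from owlrl import DeductiveClosure, OWLRL_Semantics

    VAN = Namespace("http://example.org/van#")          # your vocabulary
    WILLIAM = Namespace("http://example.org/william#")  # mine
    EX = Namespace("http://example.org/items#")         # the items

    g = Graph()
    # The ontology-level assertion: the two record types are inverses
    g.add((WILLIAM["CIM-like-dependency"], OWL.inverseOf,
           VAN["CIM-inspired-dependency"]))
    # One relationship instance, expressed in *my* vocabulary: [B] -> [A]
    g.add((EX.itemB, WILLIAM["CIM-like-dependency"], EX.itemA))

    # Run OWL-RL inference: the inverse triple gets materialized
    DeductiveClosure(OWLRL_Semantics).expand(g)

    # A query in *your* vocabulary now finds it: [A] -> [B]
    print((EX.itemA, VAN["CIM-inspired-dependency"], EX.itemB) in g)  # True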

BTW, approach 3 is not incompatible with 1 or 2. Whether or not we define mappings for CIM relationships and whether or not that mapping gets adopted, there will be plenty of cases in a federated scenario in which you need to reconcile models (CIM-based or not). Model metadata (aka an ontology language) is useful anyway.

Readers who only care about the technical aspects and have little time for rants can stop reading here. But, since I haven’t addressed any constructive criticism to the DMTF in a while, I can’t resist the opportunity to point out that if the mailing list archives for the DMTF working groups were publicly available, we wouldn’t have to have these discussions on our personal blogs. I am very glad that Van posted this on his blog because it is a question that many people will have. Whatever the CMDBf specification ends up doing, developers and architects who make use of it will benefit from having access to the deliberations and considerations that resulted in the specification being what it is. There are many emails in the CMDBf mailing list private archive that I am sure would be useful to future CMDBf implementers, but if they don’t show up on Google they don’t exist for any practical purpose. When grappling with the finer points of some specification or programming language I have often Googled my way into email archives (or old specification drafts) of the working groups that designed them. Sometimes I come out thinking “oh, ok, now I understand why they chose that approach” and other times it’s “ok, that’s what I suspected, these guys were high”. Either way, it’s useful to me as a user of the specification. W3C is the best example (of making working group records available, not of being high): not only is the mailing list available but the phone meetings often have a supporting IRC channel in which key points of the discussion get captured and archived. Here is an example. Making life easier for implementers is probably the single most important thing to make a specification successful. And ultimately, that’s the DMTF’s success too.

And it’s not just for developers and architects. It also impacts industry observers and pundits. Like the IT Skeptic who looked into CMDBf and reported “nothing on the DMTF website but press releases. try to find anything by navigating from the homepage”. And you wonder why his article is titled “the CMDB Federation proceeeds (sic) at its usual glacial pace”. There is good work going on, but there is no way for him to see it. This too is bad for the adoption and credibility of DMTF specifications.

Isn’t it ironic that the DMTF expends resources to sponsor a “hospitality suite” at the Burton Group Catalyst conference (presumably to spread the word about the good work taking place in the organization) but fails to make it easy for the industry to see that same good work taking place? It’s like a main street retail shop that advertises in the newspaper but covers its store window with cardboard, preventing passersby from seeing what’s on offer. I notice that all the other “hospitality suites” seem to be staffed by for-profit vendors (Oracle, IBM, Cisco, Microsoft etc are all there). Somehow W3C and OASIS (whose work is very relevant to some of the conference themes, like identity management and SOA) don’t feel the need to give away pens and key chains at the conference.

Dear DMTF, open source is not just good for code.


Filed under CA, CMDB Federation, CMDBf, Conference, DMTF, Everything, IT Systems Mgmt, Mgmt integration, Modeling, RDF, Semantic tech, Specs, Standards, Trade show, W3C