WS Resource Access at W3C: the good, the bad and the ugly

As far as I know, the W3C is still reviewing the proposal that was made to them to create a new working group to standardize WS-Transfer, WS-ResourceTransfer, WS-Enumeration and WS-MetadataExchange. The suggested name, “Web Services Resource Access Working Group” or WS-RAWG is likely, if it sticks, to end up being shortened to WS-RAW. Which is a bit more cruel than needed. I’d say it’s simply half-baked.

There are many aspects to the specifications and features covered by the proposal. Some goodness, some badness and some ugliness. This post analyzes the good, points at the bad and hints at the ugly. Like your average family-oriented summer movie.

The good

The specifications proposed for W3C standardization describe a way to provide some generally useful features for SOAP messages. Some SOAP messages can get very long. In some cases, I know ahead of time what portion of the long messages promised by the contract (e.g. WSDL) I want. Wouldn’t it be nice, as an optimization, to let the message sender know about this so they can, if they are able to, filter down the message to just the part I want? Alternatively, maybe I do want the full response but I can’t consume it as one big message so I would like to get it in chunks.

You’ll notice that the paragraph above says nothing about “resources”. We are just talking about messaging features for SOAP messages. There are precedents for this. WS-Security can be used to encrypt a message. Any message. WS-ReliableMessaging can be used to ensure delivery of a message. Any message. These “quality of service” specifications are mostly orthogonal to the message content.

WS-RT and WS-Enumeration provide solutions to the “message filtering” and “message chunking” needs, respectively. But they only address them in the context of a GET-like operation. They can’t be layered on top of any SOAP message. How useful would WS-Security and WS-ReliableMessaging be if they had such a restriction?

If W3C takes on part of the work listed in the proposal, I hope they’ll do so in a way that extends the utility of these features to all SOAP messages.

And just like WS-Security and WS-ReliableMessaging, these features should be provided in a way that leverages the SOAP processing model, such that I can judiciously use the soap:mustUnderstand header to avoid breaking existing services. If I’d like the message to be pared down but I can handle the complete message if need be, I’ll set this attribute to false. If I can’t handle the full message, I’ll set the attribute to true and I’ll get an error if the other party doesn’t understand this extension. At which point I can pick an alternative way to get the task accomplished. Sounds pretty basic but it’s amazing how often this important feature of SOAP (which derives from and extends XML’s must-ignore semantics) is neglected and obstructed by designers of SOAP messages.
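To make this a bit more concrete, here is a minimal sketch (Python, using lxml) of what such a request could look like. The extension namespace and the “Filter” header element are made up for illustration; only the soap:mustUnderstand attribute itself comes from the SOAP specification.

# Minimal sketch of a hypothetical "filter the response for me" extension header.
# The EXT_NS namespace and the Filter element are invented for illustration;
# soap:mustUnderstand is the real SOAP 1.2 attribute.
from lxml import etree

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"  # SOAP 1.2 envelope namespace
EXT_NS = "urn:example:message-filtering"             # hypothetical extension namespace

def build_request(filter_expression, can_handle_full_message):
  envelope = etree.Element("{%s}Envelope" % SOAP_NS, nsmap={"soap": SOAP_NS, "ext": EXT_NS})
  header = etree.SubElement(envelope, "{%s}Header" % SOAP_NS)
  etree.SubElement(envelope, "{%s}Body" % SOAP_NS)
  filter_header = etree.SubElement(header, "{%s}Filter" % EXT_NS)
  filter_header.text = filter_expression
  # If I can live with the full message, the receiver is free to ignore this header.
  # If I can't, mustUnderstand="true" forces a fault from receivers that don't support it.
  filter_header.set("{%s}mustUnderstand" % SOAP_NS,
                    "false" if can_handle_full_message else "true")
  return etree.tostring(envelope, pretty_print=True)

print(build_request("//inventory/disk", can_handle_full_message=True))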

And then there is WS-MetadataExchange. While I am not a huge fan of this specification, I agree with the need for a simple, reliable way to retrieve different types of metadata for an endpoint.

So that’s the (potential) good. A flexible and generally useful way to pare down long SOAP messages, to chunk them and to retrieve metadata for SOAP endpoints.

The bad

The bad is the whole “resource access” spin. It is not actually intrinsically bad. There are scenarios where such a pattern actually fits. But the way that pattern is being addressed by WS-RT and friends is overly generalized and overly XML-centric. By the latter I mean that it takes XML from an agreed-upon on-the-wire interchange format to an implicit metamodel (e.g. it assumes not just that you agree to exchange XML-formatted data but that your model and your business logic are organized and implemented around an XML representation of the domain, which is a much more constraining requirement). I could go on and on about this, especially the use of XPath in the PUT operation. In fact I did go on and on about it, but I spun that off as a separate entry.

In the context of the W3C proposal at hand, this is bad because it burdens the generally useful features (see the “good” section above) with an unneeded and limiting formalism. Not to mention the fact that W3C kind of already has its resource access mechanism, but I’ll leave that aspect of the question to Mark and various bloggers (see a short list of relevant posts at the end of this entry).

The resource access part might be worth doing (one more time), but probably not in the same group as things like metadata discovery, message filtering and message chunking, which are not specific to “resource access” situations. And if someone is going to do this again, rather than repeating the not too useful approaches of the past, it may be good to consider alternatives.

The ugly

That’s the politics around this whole deal. There is, as you would expect, a lot more to it than meets the eye. The underlying drivers for all this have little to do with REST/WS or other architecture considerations. They have a lot to do with control. But that’s a topic for another post (maybe) when more of it can be publicly discussed.

A lot of what I describe in this post was already explained in the WS-ManagementHammer post from a couple of months ago. But that was before the W3C proposal and before WS-MetadataExchange was dragged into the deal. So I thought it might be useful to put the analysis in the context of that proposal. And BTW, this is a personal opinion, not an Oracle position (which is true in general for everything on this blog but is worth repeating specifically for this post).

2 Comments

Filed under Everything, Grid, IT Systems Mgmt, Manageability, Mgmt integration, Modeling, SOAP, SOAP header, Specs, Standards, Tech, W3C, WS-Management, WS-ResourceTransfer, WS-Transfer, XMLFrag, XPath

Who needs XPath fragment-level PUT?

WS-Management and WS-ResourceTransfer (WS-RT) both provide a mechanism to modify the XML representation of the state of a resource in a fine-grained way. The mechanisms differ a bit: WS-Management defines a SOAP header and distinguishes PUT from DELETE at the WS-Transfer operation level, while WS-RT uses the SOAP body and tunnels “modes” (remove, modify, insert) on top of the PUT WS-Transfer operation. But in their complete form both use XPath to point to any arbitrary nodeset and update it.

WS-ResourceProperties (WS-RP) takes a simpler approach. While it too supports XPath-driven retrieval of the content, it doesn’t attempt to provide an XPath-like level of flexibility when it comes to updating the content. All it offers is SET, INSERT, UPDATE and DELETE operations at the level of a property (a top-level child of the XML representation) and nothing more granular.

In this respect at least, WS-RP makes a better choice than its competitor and its aspiring successor.

First, XPath-driven updates sound easy but in fact are hard to specify. Not surprisingly, the current specifications do a pretty incomplete job of it. They often seem to assume that the XPath used to target the value to change returns only one node, but nothing guarantees this. If it picks up more than one node, do you replace all these nodes by the new values as a block (the new values get inserted once, presumably at the location of the first selected node) or do you replace each selected node by all the new values (in which case they get duplicated as needed)? Also, the specifications say nothing about what constitutes compatibility between the targeted nodes and the replacement nodes. One might assume that a “don’t be stupid” approach is all that’s needed. But there is no obvious line between “stupid” and “useful”. Does a request to replace a text node by an attribute node make sense? Not in a strongly-typed world, but a more forgiving implementation might just insert the text value of the attribute in the place of the text node to get to a valid result. What about replacing an element by a text node? Some may reject it for incompatible types but, unless the schema prevents mixed content, it may well result in a perfectly valid document. All in all, specifying a reliable way to edit XML is a pretty hairy task. Much harder than reading XML. It requires very careful consideration that has very little to do with on-the-wire protocol concerns. Which is why doing this as part of a SOAP specification is a strange choice. The XQuery group is much more qualified for this. There must be a reason why that group decided to punt on this until they had taken care of the easier “read” case.
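A tiny illustration of that first ambiguity (the document and the XPath are made up, and lxml only plays the role of “some XPath engine” here):

# The XPath that is supposed to target "the" node to update can select several nodes.
from lxml import etree

doc = etree.fromstring(
  "<server>"
  "<listener><port>80</port></listener>"
  "<listener><port>443</port></listener>"
  "</server>")

targets = doc.xpath("//listener/port")
print(len(targets))  # 2 -- so what does "replace the selected node with <port>8080</port>" mean?
# Insert the new value once, at the position of the first match? Replace each match,
# duplicating the new value? The specifications don't really say.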

Second, it’s usually not all that useful anyway. Which is why the lack of precision in WS-Management’s specification of the fragment PUT hasn’t really been a problem so far: people haven’t fully implemented that feature. A lot of the implementations are backed by a CIMOM, an MBean or some other OO store. In these stores, the exposed granularity is typically at the attribute level. The interactions used by programmers and consoles are also at that level. The XPath-driven update is then only used as a mechanism to update many properties at once (rather than going deep into individual properties) but that’s using a machine gun to kill a fly. The WS-RP approach supports these use cases without calling on XPath.

Third, XPath-driven PUT is really hard to implement unless your back-end store happens to be an XML database. You may end up having to write your own XPath parser and interpreter, an exercise during which you will face some impedance mismatches. Your back-end store may not have notions of property order for example, or attribute versus element. How do you handle these XPath instructions? And what kind of interoperability results from implementers having to make these decisions on their own? Implementing XPath selection on a GET is a lot simpler. All it assumes is that there is an XML serialization of the result, on which you can run the XPath expression before shipping it out. That XML serialization is a given in the SOAP world already. But doing an XPath-driven PUT injects XML considerations in your store itself, not just in the communication path.
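By contrast, here is roughly all it takes to support XPath selection on a GET (a sketch, with a made-up resource class and XML shape): serialize the state, run the expression, ship the result. The store itself never has to understand XPath.

# Sketch: XPath filtering on a read only requires an XML view of the state.
from lxml import etree

class Server(object):
  def __init__(self, name, cpus, disks):
    self.name, self.cpus, self.disks = name, cpus, disks

  def to_xml(self):
    root = etree.Element("server", name=self.name)
    etree.SubElement(root, "cpus").text = str(self.cpus)
    for disk in self.disks:
      etree.SubElement(root, "disk").text = disk
    return root

def get_with_filter(resource, xpath_expression):
  view = resource.to_xml()                # serialize, as you would for a full GET anyway
  return [etree.tostring(node) for node in view.xpath(xpath_expression)]

print(get_with_filter(Server("db1", 4, ["sda", "sdb"]), "//disk"))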

Those are the practical reasons. In short, it makes the specifications at best complex and at worst non-interoperable, for a feature that is rarely needed. That should be enough already, but there are some architectural reasons to stay away too.

WS-Transfer is sometimes sold as REST over SOAP. And fragment-level WS-Transfer (what WS-Management and WS-RT do) is then REST on steroids. Sorry, not true. REST on crack if anything.

I am not a REST expert, but I know enough to understand that “everything has a URI” really means “anything meaningful has a URI”. It’s the difference between a crystal structure and a pile of mud. REST lets you interact directly with any node in the crystal, but there is a limited number of entities that are considered worthy of being a node. There is design involved (sorry, you can’t suddenly fire your architects, as attractive as that sounds). You can’t point to the space between two nodes in the crystal. XPath-on-top-of-WS-Transfer, on the other hand, lets you plunge your spoon anywhere in the pile of mud and scoop out whatever happens to be there.

Let’s take a look at WS-Federation (here is the latest draft), the only specification in a standard body that I know of that is currently using WS-RT. Whether it’s a wise choice or not for them, from a governance perspective, is a separate topic that I won’t cover here (answer: no. oops).

From a technical perspective, it is interesting to see how they went about using WS-RT PUT. They use it to update pseudonyms. But even though there is an XML representation for the pseudonyms, they don’t want to allow users to update any arbitrary part of that XML. So they create a specific dialect (the fed:FilterPseudonyms defined in section 6.1) that lets you, based on semantics that are meaningful in the specific domain covered by the specification, point to pseudonyms.

I believe most potential users of WS-RT PUT are in the same situation as WS-Federation and are better served by a domain-specific way to identify entities of interest. At least the WS-Federation authors realized it rather than saying “great, WS-RT XPath fragment PUT gives us all this flexibility for free” and saddling their implementers with the impossible task of producing interoperable implementations. Of course this begs the question of why WS-Federation uses WS-RT in the first place. A charitable interpretation is to pin this on overzealous re-use of all things WS-*. A more cynical interpretation sees this as a contrived precedent manufactured in an attempt to “prove” that WS-RT provides features of general use rather than specific to the management domain.

Having described at length why XPath-driven updates aren’t as useful as they may seem, I can still think of two cases where such a generic mechanism to modify an XML document could be useful. One is if the resource actually is a document (as opposed to having its state represented by a document). For example, a wiki page. But I haven’t exactly noticed wiki creators and users clamoring for wiki-over-SOAP, have you? The other situation is if you have a true model-driven system that is supported by a comprehensive system description and validation framework. The kind of thing that SML is trying to deliver by using Schematron (rather than just XSD, which is very limited in its expressivity beyond mere syntactical validation) to provide model validation. This would, in theory, allow the requester to validate the updated model before sending the change request. The change would still be validated on the receiver side (either explicitly or implicitly because a non-valid new model would simply fail when applied to the system), but the existence of the validation framework guarantees a high rate of success (the sender would rarely send non-valid change requests). That’s very nice and exciting, but we don’t have this. SML is, as far as I can see, going nowhere fast in terms of adoption. Standardizing a model exchange protocol for that use case is, at this point in time, premature. Maybe one day.

5 Comments

Filed under Everything, IT Systems Mgmt, Mgmt integration, Modeling, REST, SML, Specs, Standards, WS-Management, WS-ResourceTransfer, WS-Transfer, XPath, XQuery

A nice place to stay in Standardstown

You’ve just driven into Standardstown. It’s getting late and you need a place to stay. Your GPS navigation system has five listings under “accommodations”, with the following descriptions:

W3C campground

This campground provides well-equipped tents (free wi-fi throughout the camp). It has the most developed community feeling of all nearby accommodations. Every evening residents gather around a bonfire and the camp elders sing cryptic songs. At the end, the elders nod in approval of the moral of the song. Most campers don’t understand the lyrics but they like the melody. There is a recurring argument about how much soap the campground management should provide to guests. Old timers want to do away with this practice, but management is afraid that business travelers won’t patronize the camp if they are not provided with plenty of soap. The camp is located along a river, downstream from a large factory. When stuff floats down from the factory and lands on the shore of the camp, they call it a submission and thank the factory. So far, the attempts to build a clubhouse from the factory rubbish have mainly created eyesores.

OASIS housing development

This housing development is an option for accommodation because its management will give a plot of land to almost anyone who asks. More specifically, there needs to be at least three of you in the car. If you’re on your own, a common trick is to go pick up the village drunk (offer him a drink) and the village idiot (tell him you want his advice). They can usually be found on the main plaza, arguing about the requirements of imaginary users. Once you have your plot of land, the OASIS management maintains electric power, water and sewer but you can do pretty much what you want otherwise. If you just need temporary housing, you can just pitch a tent. As a result of this approach, there are several houses abandoned half-way through construction. This can make it hard to find your way to the house you are looking for. Residents typically don’t know anything about what’s going on in the house next door. You’ll find nice families living next to a crack house.

Motel DMTF

This motel is hard to find because it hides behind high walls. Even once you’re inside, there are segregated areas. Chances are your room card will give you access to the pool deck but not the clubhouse. Make a mental note of the way to the emergency exits, because there is no evacuation map on the wall (the map exists, but it’s considered confidential). We’ve heard that the best suites have a special door for direct access to the management office. After you leave, you can’t tell your friends what happened there. This review itself probably breaks some confidentiality rule.

WS-I Resort

This time-share resort is the newest development in town. By the time it got built, all the good land was taken so they had to build on land fragments leased from other hotels. The facilities are new and nice, but the owners association is dysfunctional. We’ve been told the feud started when a co-owner tried to organize a private mime show on shared land. Whatever the origin of the disagreement it has resulted in veto rules being commonly invoked, stopping most of the activities that the resort was originally planning to offer. But it remains a good option if you just need a place to sleep. The resort marketing has been pretty efficient: before doing business with you, many local companies will demand to see a receipt to show that you slept there.

Hotel ISO

Just getting a reservation there is a month-long process, so this is not an option if you’re already in town. Unfortunately, if you plan to do business with the local government you are expected to patronize this hotel. If that’s your case, the solution is to sleep in one of the other places in town and just go to this hotel for breakfast. Once there, order their breakfast special (called the “fast-track rubber stamp” which, unfortunately, tastes as bad as it sounds) and staple the breakfast receipt to your hotel bill. That should satisfy the city hall staff that they can do business with you.

11 Comments

Filed under DMTF, Everything, Standards, W3C

Three non-muppets walk into a bar…

I can’t shake the feeling that if Steve Vinoski, Steve Jones and Stuart Charlton had a drink together they’d actually agree on pretty much any distributed computing question that is worded in specific and unambiguous terms.

If you are not subscribed to their three blogs (and I don’t understand why you would not be if you have enough free time to read mine), here is a quick summary of the discussion so far:

Steve Vinoski writes an article critical of RPC approaches. Steve Jones doesn’t agree and explains why in a review of the article. Steve Vinoski is not impressed by the content of the review and even less by the tone. Stu sides with Steve Vinoski.

I think they all agree that, all other things equal, it is a good thing to facilitate the task of developers by providing them with intuitive interfaces. They also all agree that you can’t write distributed applications that shield the developer from the existence of a network. The key questions then boil down to:

  • what degree of network awareness do you require from developers (or what degree do you award them, for a more positive spin)?
  • what are the most appropriate programming constructs to expose that “optimal” degree of network awareness to the developers?

These questions don’t necessarily require words like “REST”, “RPC” and “JAXM” to be thrown around, other than merely as illustrative examples. In fact, the discussion so far seems to indicate that the questions are less likely to be resolved as long as these words are involved.

Once these questions are answered, we can compare the existing toolkits/frameworks (and yes, even architectural styles) to see which ones come closer to the ideal level of network-awareness and which ones present the most useful abstractions for that level. Or how each one can be improved to come closer to the sweet spot. Of course, there isn’t one level of network-awareness that is ideal for all cases, but my guess is that most enterprise applications are not too far apart on this.

[UPDATED 2008/7/27: Eric Newcomer explains it best. It’s just about finding a useful level of abstraction.]

1 Comment

Filed under Everything, REST, SOAP

Oracle/BEA Middleware go-forward plan

The landscape for the post-BEA-acquisition Oracle Fusion Middleware portfolio has been publicly released. You can read a list of all the components. Tracing the history of each back to Oracle-internal developments or acquired companies is left as an exercise for the reader. There are plenty of hints, starting with some of the product names (WebLogic, Tuxedo…). The components with more generic names (SOA Suite, SOA Governance) require a little bit more digging. While the filiation might be of interest to people as a way to map the go-forward plan to current products, in the long term it doesn’t matter as much as the overall quality, consistency and integration. Which is what the stack is optimized for.

The announcement also contains enough podcasts to keep the whole family entertained during the long drive to your campground of choice over the July 4 weekend. If the kids complain, tell them it’s that or Prairie Home Companion and they’ll surrender.

[UPDATED 2008/7/15: The folks at MWD just published their analysis of the go-forward plan (free sign-up required).]

[UPDATE 2008/8/1: A nice bullet-list summary by Ashutossh Pewekar.]

Comments Off on Oracle/BEA Middleware go-forward plan

Filed under Everything, Middleware, Oracle

Moving towards utility/cloud computing standards?

This Forbes article (via John) channels 3Tera’s Bert Armijo’s call for standardization of utility computing. He calls it “Open Cloud” and it would “allow a company’s IT systems to be shared between different cloud computing services and moved freely between them“. Bert talks a bit more about it on his blog and, while he doesn’t reference the Forbes interview (too modest?), he points to Cloudscape as the vision.

A few early thoughts on all this:

  • No offense to Forbes but I wouldn’t read too much into the article. Being Forbes, they get quotes from a list of well-known people/companies (Google and Amazon spokespeople, Forrester analyst, Nick Carr). But these quotes all address the generic idea of utility computing standards, not the specifics of Bert’s project.
  • Saying that “several small cloud-computing firms including Elastra and Rightscale are already on board with 3Tera’s standards group” is ambiguous. Are they on-board with specific goals and a candidate specification? Or are they on board with the general idea that it might be time to talk about some kind of standard in the general area of utility computing?
  • IEEE and W3C are listed as possible hosts for the effort, but they don’t seem like a very good match for this area. I would have thought of DMTF, OASIS or even OGF first. On the face of it, DMTF might be the best place but I fear that companies like 3Tera, Rightscale and Elastra would be eaten alive by the board member companies there. It would be almost impossible for them to drive their vision to completion, unlike what they can do in an OASIS working group.
  • A new consortium might be an option, but a risky and expensive one. I have sometimes wondered (after seeing sad episodes of well-meaning and capable start-ups being ripped apart by entrenched large vendors in standards groups) why VCs don’t play a more active role in standards. Standards sound like the kind of thing VCs should be helping their companies with. VC firms are pretty used to working together, jointly investing in companies. Creating a new standard consortium might be too hard for 3Tera, but if the VCs behind 3Tera, Elastra and Rightscale got together and looked at the utility computing companies in their portfolios, it might make sense to join forces on some well-scoped standardization effort that may not otherwise be given a chance in existing groups.
  • I hope Bert will look into the history of DCML, a similar effort (it was about data center automation, which utility computing is not that far from once you peel away the glossy pictures) spearheaded by a few best-of-breed companies but ignored by the big boys. It didn’t really take off. If it had, utility computing standards might now be built as an update/extension of that specification. Of course DCML started as a new consortium and ended as an OASIS “member section” (a glorified working group), so this puts a grain of salt on my “create a new consortium and/or OASIS group” suggestion above.
  • The effort can’t afford to be disconnected from other standards in the virtualization and IT management domains. How does the effort relate to OVF? To WS-Management? To existing modeling frameworks? That’s the main draw towards DMTF as a host.
  • What’s the open source side of this effort? As John mentions during the latest Redmonk/Willis IT management podcast (starting around minute 24), there needs to be an open source side to this. Actually, John thinks all you need is the open source side. Coté brings up Eucalyptus. BTW, if you want an existing combination of standards and open source, have a look at CDDLM (standard) and SmartFrog (implementation, now with EC2/S3 deployment).
  • There seems to be some solid technical raw material to start from. 3Tera’s ADL, combined with Elastra’s ECML/EDML, presumably captures a fair amount of field expertise already. But when you think of them as a starting point to standardization, the mindset needs to switch from “what does my product need to work” to “what will the market adopt that also helps my product to work”.
  • One big question (at least from my perspective) is that of the line between infrastructure and applications. Call me biased, but I think this effort should focus on the infrastructure layer. And provide hooks to allow application-level automation to drive it.
  • The other question is with regards to the management aspect of the resulting system and the role management plays in whatever standard specification comes out of Bert’s effort.

Bottom line: I applaud Bert’s efforts but I couldn’t sleep well tonight if I didn’t also warn him that “there be dragons”.

And for those who haven’t seen it yet, here is a very good document on the topic (but it is focused on big vendors, not on how smaller companies can play the standards game).

[UPDATED 2008/6/30: A couple hours after posting this, I see that Coté has just published a blog post that elaborates on his view of cloud standards. As an addition to the podcast I mentioned earlier.]

[UPDATED 2008/7/2: If you read this in your feed viewer (rather than directly on vambenepe.com) and you don’t see the comments, you should go have a look. There are many clarifications and some additional insight from the best authorities on the topic. Thanks a lot to all the commenters.]

20 Comments

Filed under Amazon, Automation, Business, DMTF, Everything, Google, Google App Engine, Grid, HP, IBM, IT Systems Mgmt, Mgmt integration, Modeling, OVF, Portability, Specs, Standards, Utility computing, Virtualization

Progress Software acquires IONA… and MindReef too

The acquisition of IONA by Progress Software has been pretty widely written about (sometimes ironically). But that’s not the only thing happening in that neck of the woods. Less widely reported (but still covered on ZDNet, here and here) was their acquisition of MindReef. For the inside perspective, head over to Dan Foody’s blog.

This is yet another confirmation of the fact that testing and IT management are getting ever closer together. And for good reasons, if you want to better integrate application management tasks across the application’s lifecycle. Other signs of this were the recent acquisition of the e-Test suite from Empirix by Oracle (driven by Oracle’s application management team, not by the JDev team) and, some time ago, the fact that HP decided to hang on to Mercury’s testing business rather than spinning it off.

From the Progress perspective, the IT management side of course comes from the earlier Actional acquisition (that company itself having previously merged with WestBridge). From my earlier standards work I have worked with people from Actional, WestBridge and MindReef and there is an impressive wealth of SOAP and WS experience in these teams. What remains to be seen is how much management value can be derived from a very “on the wire” (as opposed to deep in the implementation) view of the interaction. Another challenge for SOAP-centric vendors, which might have been a driver for these acquisitions, is the realization that SOAP is going to be only one component of the integration landscape, not its foundation. It may be the JMS of B2B integration, but it won’t be its TCP/IP as was once assumed.

2 Comments

Filed under Everything, IT Systems Mgmt, SOAP, Testing

OVF in action: Kensho

Simon Crosby recently wrote about an upcoming Citrix product (I think that’s what it is, since he doesn’t mention open source anywhere) called Kensho. The post is mostly a teaser (the Wikipedia link in his post will improve your knowledge of oriental philosophy but not your IT management expertise) but it makes interesting claims of virtualization infrastructure interoperability.

OVF gets a lot of credit in Simon’s story. But, unless things have changed a lot since the specification was submitted to DMTF, it is still a wrapper around proprietary virtual disk formats (as previously explained). That wrapper alone can provide a lot of value. But when Simon explains that Kensho can “create VMs from VMware, Hyper-V & XenServer in the OVF format” and when he talks about “OVF virtual appliances” it tends to create the impression that you can deploy any OVF-wrapped VM into any OVF-compliant virtualization platform. Which, AFAIK, is not the case.

For the purpose of a demo, you may be able to make this look like a detail by having a couple of equivalent images and picking one or the other depending on the target hypervisor. But from the perspective of the complete lifecycle management of your virtual machines, having a couple of “equivalent” images in different formats is a bit more than a detail.

All in all, this is an interesting announcement and I take it as a sign that things are progressing well with OVF at DMTF.

[UPDATED 2008/6/29: Chris Wolf (whose firm, the Burton Group, organized the Catalyst conference at which Simon Crosby introduced Kensho) has a nice write-up about what took place there. Plenty of OVF-love in his post too, and actually he gives higher marks to VMWare and Novell than Citrix on that front. Chris makes an interesting forecast: “Look for OVF to start its transition from a standardized metadata format for importing VM appliances to the industry standard format for VM runtime metadata. There’s no technical reason why this cannot happen, so to me runtime metadata seems like OVF’s next step in its logical evolution. So it’s foreseeable that proprietary VM metadata file formats such as .vmc (Microsoft) and .vmx (VMware) could be replaced with a .ovf file”. That would be very nice indeed.]

[2008/7/15: Citrix has hit the “PR” button on Kensho, so we get a couple of articles describing it in a bit more details: Infoworld and Sysmannews (slightly more detailed, including dangling the EC2 carrot).]

Comments Off on OVF in action: Kensho

Filed under DMTF, Everything, IT Systems Mgmt, Manageability, Mgmt integration, OVF, Standards, Virtualization, Xen, XenSource

WS-Transfer, WS-ResourceTransfer, WS-Enumeration and WS-MetadataExchange on their way to W3C

A bit over a month ago, I mentioned my hope that WS-ResourceTransfer (WS-RT) would be allowed to rest in peace. This is apparently not to be and the specification is now on its way to W3C, along with WS-Transfer, WS-MetadataExchange and WS-Enumeration. This is not all that surprising and I had even hazarded a guess of who would join IBM in doing this. My list was IBM, CA, Fujitsu and Cisco. I got three out of four right, but Oracle replaced Cisco. The fact that the company I got wrong happens to be my employer is something I can’t really comment on, other than acknowledging the irony…

This is a very important development in the area of management standards. Some of the specifications listed here are used by WS-Management. They are also clearly intended to replace the WS-ResourceFramework stack that underpins WSDM. This is especially true of WS-RT which almost directly overlaps with WS-ResourceProperties. Users of both WS-Management and WSDM will take notice. As will those who have been standing on the side, waiting for things to stabilize…

If you are trying to relate this announcement to the WS-Management/WSDM convergence previously going on between Microsoft, IBM, HP and Intel (which is the forum in which WS-RT was originally produced), it looks like this is what the “convergence” has turned into. Except that three of the four vendors seem to have dropped out, thus my quotation marks around the word “convergence”.

The applicability of these specifications outside of the management domain seems to be assumed in this submission. It’s been often asserted but, in my mind, not yet proven. I don’t see the use of WS-RT by WS-Federation as a proof of this relevance (one of these days I’ll write a post to explain why).

It will be interesting to see how the W3C responds to this offer. The expected retort didn’t take long. If WS-RT wasn’t allowed to rest in peace, it won’t be allowed to REST in peace either. You can expect the blogosphere to light up with “WS-Transfer for RESTful applications” discussions (mostly making fun of WS-Transfer’s HTTP envy) very soon. Even though that’s just one of the many angles from which you can view this development, and not the most interesting one.

[UPDATED 2008/7/6: It took a little longer than expected, but the snarky/ironic blog posts have started: Steve, Mark, Tim, Bill, Stefan]

3 Comments

Filed under Everything, IT Systems Mgmt, Mgmt integration, SOAP, Specs, Standards, W3C, WS-Management, WS-ResourceTransfer, WS-Transfer

SaaS management: it’s MUWS and MOWS all over again

One of the most repetitive tasks when I was evangelizing WSDM was to explain the difference between the MUWS and MOWS specifications (the sum of which composes the entire WSDM body of work). MUWS (management using web services) describes how to use Web services to expose manageability capabilities of potentially any resource (a server, an application, a toaster…). MOWS (management of web services) defines a monitoring and control model for resources that are Web services themselves (so you can measure the number of messages received for example).

I ended up sounding like a cow when I was presenting. A retarded cow even, since my French accent forced me to say it slowly so people could hear the difference.

In retrospect, we should not have tried to tackle both in the same group. And not just because my dignity was bruised. It was a distraction inside the working group, and a source of confusion outside of it. We should have focused on MUWS (as WS-Management did) and possibly created a protocol-independent monitoring/control model for Web services separately. Something that, BTW, is still missing today.

I am being reminded of this MUWS vs. MOWS state of affairs these days, when the topics of SaaS and IT management meet, often under the term “SaaS management”. By that, some people mean “delivering IT management as a hosted service, rather than running the management software in the same datacenter as the application”. Others mean “managing, using an on-premise deployment of the management software, a business application that is being delivered as a service (e.g. Oracle CRM On Demand), along with other local IT resources”. The latter is what I was talking about in this post. And sometimes it’s both at the same time (the business application is delivered as a service along with a hosted management console for status/issues/requests…). Not to mention the extra dimension of providing IT management to the administrators in charge of running a multi-tenant application in a SaaS scenario (instead of meeting the needs of their customers’ administrators).

All of these scenarios are valid. So far, we don’t have good names for them. And the MUWS/MOWS experience shows that good names matter. IMaaS (IT Management as a Service) and MoSaaS (Management of Software as a Service) won’t cut it.

[UPDATED 2008/6/23: This seems to be an example of MoSaaS (or rather MoIaaS) delivered through IMaaS. I am subjecting you to such an awful-sounding sentence as a way to drive home the need for better names. The real value of course will come when these capabilities are delivered alongside (and integrated with) all your IT management capabilities. John has a nice analysis that lets some air out of the fluff.]

2 Comments

Filed under Application Mgmt, Everything, IT Systems Mgmt, SaaS, Standards, Utility computing

BMC acquires ITM Software

Another BMC acquisition today: ITM Software. Their software suite is designed to help drive IT decisions from the point of view of their business impact.

This is important, of course, for all the reasons that BMC, HP, Oracle and others have been explaining for a while (how often have you heard the word “alignment” over the last three years, compared to the previous thirty?). It’s becoming even more important now, as the options for IT sourcing (from the traditional “give it all to Unisys”, to SaaS, to running your own apps in a utility computing environment…) are multiplying. Choosing between Intel and AMD CPUs in your datacenter is a technical decision, but choosing between an on-premise application, a SaaS application and running your application on EC2 is driven by business considerations of cost, risks, control, flexibility, etc. And it’s not just a one-time decision, it’s the day to day management that follows these decisions.

I don’t know much about the current ITM offering, but it was never clear to me how much they could deliver as a narrow layer, separate from the heavy-duty IT management stack (I can see how they would deliver financial and project management tools, but what about *really* linking day to day IT administration decisions to the business impact). Being part of BMC, presumably allowing deeper integration into real IT management operations, seems to make sense.

I just wish they didn’t make it sound so easy: “BMC’s purchase of ITM Software creates a unique, integrated solution that provides customers with a single comprehensive view into…”. So just signing the check creates the integration? Now I am going to get calls from our execs asking why it takes so much work to integrate acquired products, if BMC can do it the same day they sign the deal…

While I am at it, here is the press release that HP put out to list the announcements at their Software Universe conference this week. I notice that it’s all about new versions of ex-Mercury products. No OpenView, Peregrine or Opsware content, as far as I can tell. Without looking at it in more detail I don’t know how different these new versions really are. What appears pretty new is the SaaS offering (also based on Mercury products) at the end of the press release. On the nitpicking side, can anyone tell me what these “static configuration management databases” are that are “unable to support the real-time needs of today’s complex technology environments”? I can see how a “static” database would be hard-pressed to help, but I haven’t noticed any vendor selling read-only config stores.

[UPDATED 2008/6/18: More details about the HP announcement at InfoWorld. Including quotes from my ex-boss Ramin. Congrats on getting UCMDB 7.5 out of the door!]

2 Comments

Filed under Application Mgmt, BSM, Business, CMDB, Everything, HP, IT Systems Mgmt, ITIL, Mgmt integration

More clues on the Oslo/SCA/SML trail: it’s “D”

I just found out that I completely missed some interesting information about Oslo-related efforts at Microsoft. Back in February, Mary-Jo Foley reported on a new modeling language (code-name “D”, apparently) that is part of this initiative. And more recently she reported that David Chappell gave a presentation about Oslo (and more generally Microsoft’s SOA plans) at TechEd. He reportedly said that we should expect a new “schema language” (which Mary-Jo thinks is “D”). What I want to know is what its relationship is with SML/SDM and SCA.

Mary-Jo might not know about SCA and SML but I know that David does. He wrote this white paper about SCA and an article arguing that “Microsoft Should Not Support SCA” (based on a questionable assessment that SCA is only about portability). He and I also had a little back-and-forth about SCA, SML and Microsoft in the comments section of his post. Unfortunately, David hasn’t blogged about Microsoft’s SOA strategy for a while for us non-TechEd people.

In addition to Mary-Jo’s report, the only information I was able to quickly dig up about David’s presentation is this blog post on Microsoft’s Israel site. Looks like David gave the same presentation at TechEd Israel 2008. Anyone who understands Hebrew care to translate the blog? Fortunately there is a two-minute video (also available here) in which we can hear David talk (in English). During the second of the two minutes you’ll hear and see something that could come straight out of a SCA presentation…

For some reason, David’s TechEd Israel presentation doesn’t seem to be listed here and TechEd online tells me that “Featured videos are unavailable at this time”. That’s both for IT Professionals and Developers. But of course they forced me to install Silverlight before telling me that.

[UPDATED 2008/8/11: Here is a 14 minutes video interview of David Chappell providing an update on Oslo.]

3 Comments

Filed under Application Mgmt, Automation, Conference, Desired State, Everything, IT Systems Mgmt, Mgmt integration, Microsoft, Modeling, Oslo, SCA, SML, Tech

Some breathing room for Google App Engine requests

As promised to Felix here is the code that shows how to give extra breathing room to Google App Engine (GAE) requests that may otherwise be killed for taking too long to complete. The approach is similar to the one previously described. But rather than trying to emulate a long-running process, I am simply allowing a request to spread its work over a handful of invocations, thus getting several 9 seconds slots (since this seems to be how much time GAE gives you per request right now).

If all your requests need this then you are going to run into the same “high CPU requests have a small quota, and if you exceed this quota, your app will be temporarily disabled” problem seen in the previous experiment. But if 90% of your requests complete in a normal time and only 10% of the requests need more time, then this approach can help prevent your users from getting an error for 1 out of every 10 requests. And you should fly under the radar of the GAE resource cop.

The way it works is simply that if your request is interrupted for having run too long the client gets a redirect to a new instance of the same handler. Because the code saves its results incrementally in the datastore, the new instance can build on the work of the previous one.

This specific example retrieves the ubuntu-8.04-server-i386.jigdo file (98K) from a handful of Ubuntu mirrors and returns the average/min/max download times (without checking if the transfer was successful or not). I also had to add a 1 second sleep after each fetch in order to trigger the DeadlineExceededError because the fetch operations go too quickly when running on GAE rather than on my machine (I guess Google has better connectivity than my mediocre AT&T-provided DSL line, who would have thought).

#!/usr/bin/env python
#
# Copyright 2008 William Vambenepe
#

import wsgiref.handlers
import os
import logging
import time

from google.appengine.ext import db
from google.appengine.ext.webapp import template
from google.appengine.ext import webapp
from google.appengine.api import urlfetch
from google.appengine.runtime import DeadlineExceededError

targetUrls = ["http://mirror.anl.gov/pub/ubuntu-iso/CDs/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://ubuntu.mirror.ac.za/ubuntu-release/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://mirrors.cytanet.com.cy/linux/ubuntu/releases/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://ftp.kaist.ac.kr/pub/ubuntu-cd/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://ftp.itu.edu.tr/Mirror/Ubuntu/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://ftp.belnet.be/mirror/ubuntu.com/releases/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://ubuntu-releases.sh.cvut.cz/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://ftp.crihan.fr/releases/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://ftp.uni-kl.de/pub/linux/ubuntu.iso/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://ftp.duth.gr/pub/ubuntu-releases/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://no.releases.ubuntu.com/hardy/ubuntu-8.04-server-i386.jigdo",
              "http://neacm.fe.up.pt/pub/ubuntu-releases/hardy/ubuntu-8.04-server-i386.jigdo"]

class MeasurementSet(db.Model):
  iteration = db.IntegerProperty()
  measurements = db.ListProperty(float)

class MainHandler(webapp.RequestHandler):
  def get(self):
    try:
      key = self.request.get("key")
      set = MeasurementSet.get(key)
      if (set == None):
        raise ValueError
      set.iteration = set.iteration + 1
      set.put()
      logging.debug("Resuming existing set, with key " + str(key))
    except:
      set = MeasurementSet()
      set.iteration = 1
      set.measurements = []
      set.put()
      logging.debug("Starting new set, with key " + str(set.key()))
    try:
      # Dereference remaining URLs
      for target in targetUrls[len(set.measurements):]:
        startTime = time.time()
        urlfetch.fetch(target)
        timeElapsed = time.time() - startTime
        time.sleep(1)
        logging.debug(target + " dereferenced in " + str(timeElapsed) + " sec")
        set.measurements.append(timeElapsed)
        set.put()
      # We're done dereferencing URLs, let's publish the results
      longestIndex = 0
      shortestIndex = 0
      totalTime = set.measurements[0]
      for i in range(1, len(targetUrls)):
        totalTime = totalTime + set.measurements[i]
        if set.measurements[i] < set.measurements[shortestIndex]:
          shortestIndex = i
        elif set.measurements[i] > set.measurements[longestIndex]:
          longestIndex = i
      template_values = {"iterations": set.iteration,
                         "longestTime": set.measurements[longestIndex],
                         "longestTarget": targetUrls[longestIndex],
                         "shortestTime": set.measurements[shortestIndex],
                         "shortestTarget": targetUrls[shortestIndex],
                         "average": totalTime/len(targetUrls)}
      path = os.path.join(os.path.dirname(__file__), "steps.html")
      self.response.out.write(template.render(path, template_values))
      logging.debug("Set with key " + str(set.key()) + " has returned")
    except DeadlineExceededError:
      logging.debug("Set with key " + str(set.key())
                    + " interrupted during iteration "+ str(set.iteration)
                    + " with " + str(len(set.measurements)) + " URLs retrieved")
      self.redirect("/steps?key=" + str(set.key()))
      logging.debug("Set with key " + str(set.key()) + " sent redirection")

def main():
  application = webapp.WSGIApplication([("/steps", MainHandler)], debug=True)
  wsgiref.handlers.CGIHandler().run(application)

if __name__ == "__main__":
  main()

I can’t guarantee I will keep it available, but at the time of this writing the application is deployed here if you want to give it a spin. A typical run produces this kind of log:

06-13 01:44AM 36.814 /steps
XX.XX.XX.XX - - [13/06/2008:01:44:45 -0700] "GET /steps HTTP/1.1" 302 0 - -
  D 06-13 01:44AM 36.847
    Starting new set, with key agN2YnByFAsSDk1lYXN1cmVtZW50U2V0GBoM
  D 06-13 01:44AM 37.870
    http://mirror.anl.gov/pub/ubuntu-iso/CDs/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.022078037262 sec
  D 06-13 01:44AM 38.913
    http://ubuntu.mirror.ac.za/ubuntu-release/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.0184168815613 sec
  D 06-13 01:44AM 39.962
    http://mirrors.cytanet.com.cy/linux/ubuntu/releases/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.0166189670563 sec
  D 06-13 01:44AM 41.12
    http://ftp.kaist.ac.kr/pub/ubuntu-cd/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.0205371379852 sec
  D 06-13 01:44AM 42.103
    http://ftp.itu.edu.tr/Mirror/Ubuntu/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.0197179317474 sec
  D 06-13 01:44AM 43.146
    http://ftp.belnet.be/mirror/ubuntu.com/releases/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.0171189308167 sec
  D 06-13 01:44AM 44.215
    http://ubuntu-releases.sh.cvut.cz/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.0160200595856 sec
  D 06-13 01:44AM 45.256
    http://ftp.crihan.fr/releases/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.015625 sec
  D 06-13 01:44AM 45.805
    Set with key agN2YnByFAsSDk1lYXN1cmVtZW50U2V0GBoM interrupted during iteration 1 with 8 URLs retrieved
  D 06-13 01:44AM 45.806
    Set with key agN2YnByFAsSDk1lYXN1cmVtZW50U2V0GBoM sent redirection
  W 06-13 01:44AM 45.808
    This request used a high amount of CPU, and was roughly 28.5 times over the average request CPU limit.
    High CPU requests have a small quota, and if you exceed this quota, your app will be temporarily disabled.

Followed by:

06-13 01:44AM 46.72 /steps?key=agN2YnByFAsSDk1lYXN1cmVtZW50U2V0GBoM
XX.XX.XX.XX - - [13/06/2008:01:44:50 -0700] "GET /steps?key=agN2YnByFAsSDk1lYXN1cmVtZW50U2V0GBoM HTTP/1.1" 200 472
  D 06-13 01:44AM 46.110
    Resuming existing set, with key agN2YnByFAsSDk1lYXN1cmVtZW50U2V0GBoM
  D 06-13 01:44AM 47.128
    http://ftp.uni-kl.de/pub/linux/ubuntu.iso/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.016991853714 sec
  D 06-13 01:44AM 48.177
    http://ftp.duth.gr/pub/ubuntu-releases/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.0238039493561 sec
  D 06-13 01:44AM 49.318
    http://no.releases.ubuntu.com/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.0177929401398 sec
  D 06-13 01:44AM 50.378
    http://neacm.fe.up.pt/pub/ubuntu-releases/hardy/ubuntu-8.04-server-i386.jigdo dereferenced in 0.0226020812988 sec
  D 06-13 01:44AM 50.410
    Set with key agN2YnByFAsSDk1lYXN1cmVtZW50U2V0GBoM has returned
  W 06-13 01:44AM 50.413
  This request used a high amount of CPU, and was roughly 13.4 times over the average request CPU limit.
  High CPU requests have a small quota, and if you exceed this quota, your app will be temporarily disabled.

I believe we can optimize the performance by taking advantage of the fact that successive requests are likely (but not guaranteed) to hit the same instance, allowing global variables to be re-used rather than always going to the datastore. My code is a proof of concept, not an optimized implementation.
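For what it’s worth, here is roughly what that optimization might look like (a sketch, not something the code above does): a module-level dictionary acts as a best-effort cache, with the datastore remaining the source of truth.

# Hypothetical optimization: reuse a module-level cache when successive requests
# happen to hit the same instance; fall back to the datastore otherwise.
_local_sets = {}  # key string -> MeasurementSet

def load_set(key):
  cached = _local_sets.get(key)
  if cached is not None:
    return cached
  fetched = MeasurementSet.get(key)  # authoritative copy lives in the datastore
  if fetched is not None:
    _local_sets[key] = fetched
  return fetched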

Of course, the alternative is to drive things from the client, using JavaScript HTTP requests (rather than HTTP redirect) to repeat the HTTP request until the work has been completed. The list of pros and cons of each approach is left as an exercise to the reader.

[UPDATED 2008/6/13: Added log output. Removed handling of “OverQuotaError” which was not useful since, unlike “DeadlineExceededError”, quotas are not per-request. As a result, splitting the work over multiple requests doesn’t help. Slowing down a request might help, at which point the approach above might come in handy to prevent this slowdown from triggering “DeadlineExceededError”.]

[UPDATED 2008/6/30: Steve Jones provides an interesting analysis of the cut-off time for GAE. Confirms that it’s mainly based on wall-clock time rather than CPU time. And that you can sometimes go just over 9 seconds but never up to 10 seconds, which is consistent with my (much less detailed and rigorous) observations.]

2 Comments

Filed under Everything, Google, Google App Engine, Implementation, Utility computing

Mapping CIM associations to CMDBf relationships

This post started as a comment on the blog of Van Wiles. When it became too long (and turned into a therapeutic rant at the end) I turned it into a blog post of its own. Please, read Van’s post first. Here is my response to him:

Hi Van. Sounds like what you are after is not a mapping of the CIM_Dependency association to a CMDBf record type (anyone can make up such a mapping as you point out), but a generic algorithm to map any CIM association to a corresponding CMDBf relationship record type. Correct? That algorithm needs to handle the fact that the CIM metamodel has the concept of relationship roles while the CMDBf metamodel doesn’t.

Here is a possible such mapping:

  1. Take a CIM association (called “myAssociation”) that has two roles (called “thisOne” and “theOtherOne”).
  2. Take the item whose role name comes first alphabetically and make it the source (in this example, it is “theOtherOne”)
  3. Take the item whose role name comes second alphabetically and make it the target (in this example, it is “thisOne”)
  4. Generate a CMDBf record type called “{associationName}_from_{firstRoleNameAlphabetically}_to_{secondRoleNameAlphabetically}”

You’re done. The new CMDBf record type is “myAssociation_from_theOtherOne_to_thisOne”, the source is the item with the role “theOtherOne” and the target is the item with the role “thisOne”. Everyone who follows this algorithm (of course it needs to be formally defined and evangelized, there is no guarantee here unless we bake CIM-specific concepts in the core CMDBf specification, which would be a mistake) will produce the same CMDBf relationship record type for a given CIM association.

Applied to the CIM_Dependency example, this would generate a “CIM_Dependency_from_Antecedent_to_Dependent” CMDBf record type, in which the source is the CIM Antecedent and the target is the CIM Dependent.
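If it helps, here is the same algorithm as a few lines of illustrative Python (association and role names are plain strings here; a real implementation would read them from the CIM metamodel):

# Sketch of the proposed CIM association to CMDBf relationship record type mapping.
def map_cim_association(association_name, role_a, role_b):
  source_role, target_role = sorted([role_a, role_b])  # alphabetical order picks the direction
  record_type = "%s_from_%s_to_%s" % (association_name, source_role, target_role)
  return {"recordType": record_type, "source": source_role, "target": target_role}

print(map_cim_association("CIM_Dependency", "Antecedent", "Dependent"))
# -> record type "CIM_Dependency_from_Antecedent_to_Dependent", source Antecedent, target Dependent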

Alternatively, you can have the algorithm generate two CMDBf relationship record types (one going in each direction) for each CIM association. So you don’t have to arbitrarily pick the first one (alphabetically) as the source. But then you need to have model metadata to capture the fact that these relationships are the inverse of one another (and imply one another). As you well know, I have been advocating for the use of RDF/RDFS/OWL in CMDBf for a while. :-)

In the end, there are three potential approaches:

1) Someone (the CMDBf group or someone else) creates an authoritative mapping for all CIM associations (or at least all the useful ones) and we expect anyone who uses the CIM model with CMDBf to use that mapping.

2) Someone (again, the CMDBf group or someone else) defines a normative CIM to CMDBf mapping, e.g. the one above, and we expect anyone who generates a CMDBf relationship record type from a CIM association to use this mapping algorithm. From a pure logical perspective, it is the same as defining a CMDBf record type for each CIM association (approach 1), but it is less work and it doesn’t have to be updated every time a CIM association is created/versioned. At the cost of uglier (more arbitrary) CMDBf record types being defined.

3) We let people define the relationships in whatever way they choose and we provide a model metadata framework (aka ontology language) to allow mappings between these approaches. For example, you define, in your namespace, a van:CIM-inspired-dependency CMDBf record type that goes from antecedent to dependent. Separately, I define, in my namespace, a william:CIM-like-dependency CMDBf record type that carries the same semantics (defined, not so precisely BTW but that’s a different topic, by CIM) except that its source is the dependent and its target is the antecedent. The inverse of yours. A suitable ontology language would allow someone (you, me, or a third party who has to assemble a system that uses both relationship types) to assert that mine is the inverse of yours. Once this assertion is captured, a request for any [A]—(van:CIM-inspired-dependency)—>[B] would also return the instances of [B]—(william:CIM-like-dependency)—>[A] because they are known to be the same. And you know how I am going to conclude, of course: OWL (specifically owl:inverseOf) provides just this.
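To illustrate (with made-up namespace URIs, and rdflib standing in for whatever RDF machinery one would actually use), capturing that assertion is a one-triple affair:

# Sketch: assert that the two record types are inverses of each other, using owl:inverseOf.
from rdflib import Graph, Namespace

OWL = Namespace("http://www.w3.org/2002/07/owl#")
VAN = Namespace("http://example.org/van#")          # hypothetical namespace for the "van" record types
WILLIAM = Namespace("http://example.org/william#")  # hypothetical namespace for the "william" record types

g = Graph()
g.add((WILLIAM["CIM-like-dependency"], OWL["inverseOf"], VAN["CIM-inspired-dependency"]))

# A reasoner (or a query layer aware of this triple) now knows that
# [A] --van:CIM-inspired-dependency--> [B] implies [B] --william:CIM-like-dependency--> [A].
print(g.serialize(format="n3"))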

BTW, approach 3 is not incompatible with 1 or 2. Whether or not we define mappings for CIM relationships and whether or not that mapping gets adopted, there will be plenty of cases in a federated scenario in which you need to reconcile models (CIM-based or not). Model metadata (aka an ontology language) is useful anyway.

Readers who only care about the technical aspects and have little time for rants can stop reading here. But, since I haven’t addressed any constructive criticism to the DMTF in a while, I can’t resist the opportunity to point out that if the mailing list archives for the DMTF working groups were publicly available, we wouldn’t have to have these discussions on our personal blogs. I am very glad that Van posted this on his blog because it is a question that many people will have. Whatever the CMDBf specification ends up doing, developers and architects who make use of it will benefit from having access to the deliberations and considerations that resulted in the specification being what it is. There are many emails in the CMDBf mailing list private archive that I am sure would be useful to future CMDBf implementers, but if they don’t show up on Google they don’t exist for any practical purpose. When grappling with the finer points of some specification or programming language I have often Googled my way into email archives (or old specification drafts) of the working groups that designed them. Sometimes I come out thinking “oh, ok, now I understand why they chose that approach” and other times it’s “ok, that’s what I suspected, these guys were high”. Either way, it’s useful to me as a user of the specification. W3C is the best example (of making working group records available, not of being high): not only is the mailing list available but the phone meetings often have a supporting IRC channel in which key points of the discussion get captured and archived. Here is an example. Making life easier for implementers is probably the single most important thing to make a specification successful. And ultimately, that’s the DMTF’s success too.

And it’s not just for developers and architects. It also impacts industry observers and pundits. Like the IT Skeptic who looked into CMDBf and reported “nothing on the DMTF website but press releases. try to find anything by navigating from the homepage”. And you wonder why his article is titled “the CMDB Federation proceeeds (sic) at its usual glacial pace”. There is good work going on, but there is no way for him to see it. This too is bad for the adoption and credibility of DMTF specifications.

Isn’t it ironic that the DMTF expends resources to sponsor a “hospitality suite” at the Burton Group Catalyst conference (presumably to spread the word about the good work taking place in the organization) but fails to make it easy for the industry to see that same good work taking place? It’s like a main street retail shop that advertises in the newspaper but covers its store window with cardboard, preventing passersby from seeing what’s on offer. I notice that all the other “hospitality suites” seem to be staffed by for-profit vendors (Oracle, IBM, Cisco, Microsoft etc are all there). Somehow W3C and OASIS (whose work is very relevant to some of the conference themes, like identity management and SOA) don’t feel the need to give away pens and key chains at the conference.

Dear DMTF, open source is not just good for code.

2 Comments

Filed under CA, CMDB Federation, CMDBf, Conference, DMTF, Everything, IT Systems Mgmt, Mgmt integration, Modeling, RDF, Semantic tech, Specs, Standards, Trade show, W3C

Oracle is now a leading vendor of application testing tools

The e-TEST suite (previously from Empirix) has turned into a set of Oracle products for application testing (sorry, application quality management). Ever since the announcement of the deal, a couple of months ago, Oracle sales reps have received many unsolicited requests for that product, validating its good reputation in the market. If you have spent any time around sales reps, you know that for them to tell their customers that for the time being they should purchase from Empirix was about as pleasant as for Hillary Clinton to endorse Barack Obama. Fortunately, this awkward period is over. Not only can people buy the products from Oracle, all the technical support (even for people who bought from Empirix) is now provided by Oracle.

I probably don’t need to say it (since the products were created by an independent company) but just to be clear, nothing in these products requires the target Web applications to use Oracle infrastructure (e.g. DB, Middleware) or to be otherwise managed by Oracle Enterprise Manager. They will work happily with your Ruby-on-Rails application hosted on RedHat or your .NET Web application.

And it’s not just for HTML-driven, user-facing applications: XML-producing web services can also be tested that way.

Comments Off on Oracle is now a leading vendor of application testing tools

Filed under Application Mgmt, Everything, IT Systems Mgmt, Manageability, Oracle, Testing

Recent IT management announcements

There were a few announcements relevant to the evolution of IT management over the last week. The most interesting is VMware’s release of the open-source (BSD license) VI SDK, a Java API to manage a host system and the virtual machines that run on it. Interesting that they went the way of a language-specific API. The alternatives, to complement/improve their existing web services SDK, would have been: define CIM classes and implement a WBEM provider (using CIM-HTTP and/or WS-Management), use WS-Management but without the CIM part (define the model as native XML, not XML-from-CIM), use a RESTful HTTP-driven interface to that same native XML model or, on the more sci-fi side, go the MDA way with a controller from which you retrieve the observed state and to which you specify the desired state. The Java API approach is the easiest one for developers to use, as long as they can access the Java ecosystem and they are mainly concerned with controlling the VMWare entities. If the management application also deals with many other resources (like the OS that runs in the guest machines or the hardware under the host, both of which are likely to have CIM models), a more model-centric approach could be more handy. The Java API of course has an underlying model (described here), but the interface itself is not model-centric. So, what with all the DMTF-love that VMWare has been displaying lately (OVF submission, board membership, hiring of the DMTF president…), should we expect a more model-friendly version of this API in the future? How does this relate to the DMTF SVPC working group that recently released some preliminary profiles? The choice to focus on beefing-up the Java-centric management story (which includes Jython, as VMWare was quick to point out) rather than the platform-agnostic, on-the-wire-interop side might be seen by the more twisted minds as a way to not facilitate Microsoft’s “manage VMWare today to replace it tomorrow” plan any more than necessary.

Speaking of Microsoft, in unrelated news we also got a heartbeat from them on the Oslo project: a tech preview of some of the components is scheduled for October. When Oslo was announced, there was a mix of “next gen BizTalk” aspects and “developer-driven DSI” aspects. From this report, the BizTalk part seems to be dominating. No word on use of SML.

And finally, SOA Software (who was previously called Digital Evolution and who acquired Blue Titan, Flamenco and LogicLibrary, in case you’re trying to keep track) has released a “SOA Development Governance Product”. Nothing too exciting from what I can see on InfoQ about it, but that’s a pretty superficial evaluation so don’t let me stop you. Am I the only one who twitches whenever “federation” is used to mean at worst “import” or at best “synchronization”? Did CMDBf start that trend? BTW, is it just an impression or did SOA Software give InfoQ a list of the questions they wanted to be asked?

4 Comments

Filed under DMTF, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Open source, Oslo, OVF, SML, Standards, Tech, Virtualization, VMware, WS-Management

Emulating a long-running process (and a scheduler) in Google App Engine

As previously described, Google App Engine (GAE) doesn’t support long running processes. Each process lives in the context of an HTTP request handler and needs to complete within a few seconds. If you’re trying to get extra CPU cycles for some task then Amazon EC2, not GAE, is the right tool (including the option to get high-CPU instances for the CPU-intensive tasks).

More surprising is the fact that GAE doesn’t offer a scheduler. Your app can only get invoked when someone sends it an HTTP request and you can’t ask GAE to generate a canned request every so often (crontab-style). That seems both limiting and arbitrary. In fact, I would be surprised if GAE didn’t soon add support for this.

In the meantime, your best bet is to get an account on a separate server that lets you schedule jobs, at which point you can drive your GAE application from that external scheduler (through HTTP requests to your GAE app). But just for the intellectual exercise, how would one meet the need while staying entirely within the confines of the Google-provided infrastructure?

  • The most obvious option is to piggyback on HTTP requests from your visitors. But:
    • this assumes that you consistently get visitors at intervals shorter than your scheduler’s interval,
    • since you can’t launch sub-processes in GAE, this delays your responses to the visitor,
    • more worrisome, if your scheduled task takes more than a few seconds this means your application might be interrupted by GAE before you respond to the visitor, resulting in a failed request from their perspective.
  • You can try to improve a bit on this by doing this processing not as part of the main request from your visitor but rather by putting in the response HTML some JavaScript that will asynchronously send you HTTP requests in the background (typically not visible to the user). This way, a given visitor will give you repeated invocations for as long as the page is open in the browser. And you can set the invocation interval. You can even create some kind of server-controlled auto-modulation of the interval (increasing it as your number of concurrent visitors increases) so that you don’t eat all your Google-allocated incoming HTTP quota with these XMLHttpRequest invocations. This would probably be a very workable way to do it in practice even though:
    • it only works if your application has visitors who use web browsers, not if it is only consumed by programs (e.g. through RSS feeds or other XML formats),
    • it puts the burden on your visitors who may or may not appreciate it, assuming they realize it is happening (how would you feel if your real estate agent had to borrow your cell phone to arrange home visits for you and their other customers?).
  • While GAE doesn’t offer a scheduler, another Google service, Google Reader, offers one of sorts. If you register a feed there, Google’s FeedReader will retrieve it once in a while (based on my logs, it happens approximately every hour for each of the two feeds for this blog). You can create multiple URLs that all map to the same handler and return some dummy RSS. If you register these feeds with Google Reader, they’ll get pulled once in a while. Of course there is no guarantee that the pulling of the different feeds will be nicely spread out, but if you register enough of them you should manage to get invoked with a frequency compatible with your desired scheduler’s frequency (a sketch of such a dummy feed handler follows).
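
Here is what such a dummy feed handler could look like, as a rough sketch (the handler name, the URL it would be mapped to and the RSS payload are all placeholders):

from google.appengine.ext import webapp

DUMMY_RSS = ('<?xml version="1.0"?><rss version="2.0"><channel>'
             '<title>dummy</title><link>http://myGaeApp.appspot.com/</link>'
             '<description>scheduler ping</description></channel></rss>')

class FeedPingHandler(webapp.RequestHandler):
  def get(self):
    # Do the scheduled work here (it still has to fit within a normal request)
    self.response.headers["Content-Type"] = "application/rss+xml"
    self.response.out.write(DUMMY_RSS)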

That’s all nice, but it doesn’t entirely live within the GAE application. It depends on either the visitors or Google Reader. Can we do this entirely within GAE?

The idea is that since a GAE app can only execute within an HTTP request handler, which only runs for a few seconds, you can emulate a long-running process by automatically starting a successor request when the previous one is killed. This is made possible by two characteristics of the GAE runtime:

  • When an HTTP request is canceled on the client side, the request execution on the server is permitted to continue (until it returns or GAE kills it for having run too long).
  • When GAE kills a request for having run too long, it does it through an exception that you have a chance to handle (at least for a few seconds, until you get killed for good), which is when you initiate the HTTP request that spawns the successor process.

If you’ve watched (or played) Rugby, this is equivalent to passing the ball to a teammate during that short interval between when you’re tackled and when you hit the ground (I have no idea whether the analogy also applies to Rugby’s weird cousin called American Football).

In practice, all you have to do is structure your long running task like this:

import logging

from google.appengine.api import urlfetch
from google.appengine.ext import db, webapp

class StopExec(db.Model):
  # Datastore "kill switch": create one entity of this type to stop the chain of requests
  pass

class StartHandler(webapp.RequestHandler):
  def get(self):
    if (StopExec.all().count() == 0):
      try:
        id = int(self.request.get("id"))
        logging.debug("Request " + str(id) + " is starting its work.")
        # This is where you do your work
      finally:
        # Runs both on normal completion and when GAE interrupts the request
        logging.debug("Request " + str(id) + " has been stopped.")
        # Save state to the datastore as needed
        logging.debug("Launching successor request with id=" + str(id+1))
        res = urlfetch.fetch("http://myGaeApp.appspot.com/start?id=" + str(id+1))
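
For completeness, the handler still has to be wired to the /start URL; a minimal sketch using the same webapp framework (the usual app.yaml mapping to this script is assumed):

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

application = webapp.WSGIApplication([("/start", StartHandler)], debug=True)

def main():
  run_wsgi_app(application)

if __name__ == "__main__":
  main()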

Once you have deployed this app, just point your browser to http://myGaeApp.appspot.com/start?id=0 (assuming of course that your GAE app is called “myGaeApp”) and the long-running process is started. You can hit the “stop” button on your browser and turn off your computer; the process (or more exactly the succession of processes) has a life of its own entirely within the GAE infrastructure.

The “if (StopExec.all().count() == 0)” statement is my way of keeping control over the beast (if only Dr. Frankenstein had as much foresight). StopExec is an entity type in the datastore for my app. If I want to kill this self-replicating process, I just need to create an entity of this type and the process will stop replicating. Without this, the only way to stop it would be to delete the whole application through the GAE dashboard. In general, using the datastore as shared memory is the way to communicate with this emulation of a long-running process.
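
In practice, stopping the beast is a one-liner from any other handler or console session (assuming the StopExec model sketched above):

# Creating a single StopExec entity is enough to break the chain of successor requests
StopExec().put()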

A scheduler is an obvious example of a long-running process that could be implemented that way. But there are other examples. The only constraint is that your long-running process should expect to be interrupted (approximately every 9 seconds based on what I have seen so far). It will then re-start as part of a new instance of the same request handler class. You can communicate state between one instance and its successor either via the request parameters (like the “id” integer that I pass in the URL) or by writing to the datastore (in the “finally” clause) and reading from it (at the beginning of your task execution).
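
As a sketch of the datastore option (the TaskState model and the current_position variable are made up for the illustration):

from google.appengine.ext import db

class TaskState(db.Model):
  cursor = db.IntegerProperty(default=0)

# In the "finally" clause: record how far the work got
state = TaskState.get_or_insert("singleton")
state.cursor = current_position
state.put()

# At the start of the successor request: pick up where the predecessor left off
state = TaskState.get_or_insert("singleton")
current_position = state.cursor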

By the way, you can’t really test such a system using the toolkit Google provides for local testing, because that toolkit behaves very differently from the real GAE infrastructure in the way it controls long-running processes. You have to run it in the real GAE environment.

Does it work? For a while. The first time I launched it, it worked for almost 30 minutes (that’s a lot of 9 second-long processes). But I started to notice these worrisome warnings in the logs: “This request used a high amount of CPU, and was roughly 21.7 times over the average request CPU limit. High CPU requests have a small quota, and if you exceed this quota, your app will be temporarily disabled.”

And indeed, after 30 minutes of happiness my app was disabled for a bit.

My quota figures on the dashboard actually looked pretty good. This was not a very busy application.

CPU Used 175.81 of 199608.00 Gigacycles (0%)
Data Sent 0.00 of 2048.00 Megabytes (0%)
Data Received 0.00 of 2048.00 Megabytes (0%)
Emails Sent 0.00 of 2000.00 Emails (0%)
Megabytes Stored 0.05 of 500.00 Megabytes (0%)

But the warning in the logs points to some other restriction. Google doesn’t mind if you use a given number of CPU cycles through a lot of small requests, but it complains if you use the same number of cycles through a few longer requests. Which is not really captured in the “understanding application quotas” page. I also question whether my long requests actually consume more CPU than normal (shorter) requests. I stripped the application down to the point where the “this is where you do your work” part was doing nothing. The only actual work, in the “finally” clause, was to open an HTTP connection and wait for it to return (which never happens) until the GAE runtime kills the request completely. Hard to see how this would actually use much CPU. Yet, same warning. The warning text is probably not very reflective of the actual algorithm that flags my request as a hog.

What this means is that no matter how small and slim the task is, the last line (with the urlfetch.fetch() call) by itself is enough to get my request identified as a hog. Which means that eventually the app is going to get disabled. Which is silly really because by the time fetch() gets called nothing useful is happening in this request (the work has transitioned to the successor request) and I’d be happy to have it killed as soon as the successor has been spawned. But GAE doesn’t give you a way to set a client-side timeout on outgoing HTTP requests. Neither can you configure the GAE cop to kill you early so that you don’t enter the territory of “this request used a high amount of CPU”.

I am pretty confident that the ability to set a client-side HTTP timeout will be added to the urlfetch API. Even Google’s documentation acknowledges this limitation: “Note: Since your application must respond to the user’s request within several seconds, a URL fetch action to a slow remote server may cause your application to return a server error to the user. There is currently no way to specify a time limit to the URL fetch action.” Of course, by the time they fix this they may also have added a real scheduler…

In the meantime, this was a fun exploration of the GAE environment. It makes it clear to me that this environment is still a toy. But a very interesting and promising one.

[UPDATED 2009/28: Looks like a real GAE scheduler is coming.]

15 Comments

Filed under Brain teaser, Everything, Google, Google App Engine, Implementation, Testing, Utility computing

Small brain teaser: my work phone number

My work phone number is a typical US 10 digits number. In addition:

  • a) My office is in the same area code as Stanford University.
  • b) The area code appears twice in my phone number.
  • c) The number of the beast doesn’t appear in my phone number.
  • d) An even number can only be in an even-numbered position if the value of that number is also its position (the leftmost digit is in position 1; 0 is an even number).
  • e) The number of occurrences of non-zero numbers is always less than the value of the number.
  • f) The answer uses as few different numbers as possible to meet all these conditions. For example, if 123-123-1231 and 123-412-3412 both met all the constraints above (which they obviously don’t), then the answer would be 123-123-1231 because it only uses the numbers 1, 2 and 3, while the other uses an additional number (4).

Asking your Oracle-employed brother-in-law to look me up in the employee phone book is considered cheating…

[UPDATED 2009/1/23: Clarified last bullet with an example, based on reader feedback.]

[UPDATED 2012/10/1: I have now left Oracle, so this is not my number anymore. You can still try to solve the puzzle and email me the answer for verification.]

Comments Off on Small brain teaser: my work phone number

Filed under Brain teaser, Everything, Off-topic

Google App Engine: less is more

“If you have a stove, a saucepan and a bottle of cold water, how can you make boiling water?”

If you ask this question to a mathematician, they’ll think about it a while, and finally tell you to pour the water in the saucepan, light up the stove and put the saucepan on it until the water boils. Makes sense. Then ask them a slightly different question: “if you have a stove and a saucepan filled with cold water, how can you make boiling water?”. They’ll look at you and ask “can I also have a bottle”? If you agree to that request they’ll triumphantly announce: “pour the water from the saucepan into the bottle and we are back to the previous problem, which is already solved.”

In addition to making fun of mathematicians, this is a good illustration of the “fake machine” approach to utility computing embodied by Amazon’s EC2. There is plenty of practical value in emulating physical machines (either in your data center, using VMWare/Xen/OVM or at a utility provider’s site, e.g. EC2). These approaches are all rooted in the fact that there is a huge amount of code written with the assumption that it is running on an identified physical machine (or set of machines), and you want to keep using that code. This will remain true for many, many years to come, but is it the future of utility computing?

Google’s App Engine is a clear break from this set of assumptions. From this perspective, the App Engine is more interesting for what it doesn’t provide than for what it provides. As the description of the Sandbox explains:

“An App Engine application runs on many web servers simultaneously. Any web request can go to any web server, and multiple requests from the same user may be handled by different web servers. Distribution across multiple web servers is how App Engine ensures your application stays available while serving many simultaneous users [not to mention that this is also how they keep their costs low — William]. To allow App Engine to distribute your application in this way, the application runs in a restricted ‘sandbox’ environment.”

The page then goes on to succinctly list the limitations of the sandbox (no filesystem, limited networking, no threads, no long-lived requests, no low-level OS functions). The limitations are better described and commented upon here, but even that article misses one major limitation, mentioned here: the lack of a scheduler/cron.

Rather than a feature-by-feature comparison between the App Engine and EC2 (which Amazon would win handily at this point), what is interesting is to compare the underlying philosophies. Even with Amazon EC2, you don’t get every single feature your local hardware can deliver. For example, in its initial release EC2 didn’t offer a filesystem, only a storage-as-a-service interface (S3 and then SimpleDB). But Amazon worked hard to fix this as quickly as possible in order to appear as similar to a physical infrastructure as possible. In this entry, announcing persistent storage for EC2, Amazon’s CTO takes pains to highlight this achievement:

“Persistent storage for Amazon EC2 will be offered in the form of storage volumes which you can mount into your EC2 instance as a raw block storage device. It basically looks like an unformatted hard disk. Once you have the volume mounted for the first time you can format it with any file system you want or if you have advanced applications such as high-end database engines, you could use it directly.”

and

“And the great thing is it that it is all done with using standard technologies such that you can use this with any kind of application, middleware or any infrastructure software, whether it is legacy or brand new.”

Amazon works hard to hide (from the application code) the fact that the infrastructure is a huge, shared, distributed system. The beauty (and business value) of their offering is that while the legacy code thinks it is running in a good old data center, the paying customer derives benefits from the fact that this is not the case (e.g. fast/easy/cheap provisioning and reduced management responsibilities).

Google, on the other hand, embraces the change in underlying infrastructure and requires your code to use new abstractions that are optimized for that infrastructure.

To use an automotive analogy, Amazon is offering car drivers a gas/electric hybrid that refuels at today’s gas stations, while Google is pushing for a direct jump to hydrogen fuel cells.

History is rarely kind to promoters of radical departures. The software industry is especially fond of layering the new on top of the old (a practice that has been enabled by the constant increase in underlying computing capacity). If you are wondering why your command prompt, shell terminal or text editor opens with a default width of 80 characters, take a trip back to 1928, when IBM defined its 80-column punch card format. Will Google beat the odds or be forced to be more accommodating of existing code?

It’s not the idea of moving to a more abstracted development framework that worries me about Google’s offering (JEE, Spring and Ruby on Rails show that developers want this move anyway, for productivity reasons, even if there is no change in the underlying infrastructure to further motivate it). It’s the fact that by defining their offering at the level of this framework (as opposed to one level below, like Amazon), Google puts itself in the position of having to select the right framework. Sure, they can support more than one. But the speed of evolution in that area of the software industry shows that it’s not mature enough (yet?) for any party to guess where application frameworks are going. Community experimentation has been driving application frameworks, and Google App Engine can’t support this. It can only select and freeze a few frameworks.

Time will tell which approach works best, whether they should exist side by side or whether they slowly merge into a “best of both worlds” offering (Amazon already offers many features, like snapshots, that aim for this “best of both worlds”). Unmanaged code (e.g. C/C++ compiled programs) and managed code (JVM or CLR) have been coexisting for a while now. Traditional applications and utility-enabled applications may do so in the future. For all I know, Google may decide that it makes business sense for them too to offer a Xen-based solution like EC2 and Amazon may decide to offer a more abstracted utility computing environment along the lines of the App Engine. But at this point, I am glad that the leaders in utility computing have taken different paths as this will allow the whole industry to experiment and progress more quickly.

The comparison is somewhat blurred by the fact that the Google offering has not reached the same maturity level as Amazon’s. It has restrictions that are not directly related to the requirements of the underlying infrastructure. For example, I don’t see how the distributed infrastructure prevents the existence of a scheduling service for background jobs. I expect this to be fixed soon. Also, Amazon has a full commercial offering, with a price list and an ecosystem of tools, while Google only offers a very limited beta environment for which you can’t buy extra capacity (but this too is changing).

2 Comments

Filed under Amazon, Everything, Google, Google App Engine, OVM, Portability, Tech, Utility computing, Virtualization, VMware

RESTful JMX access from someone who knows both sides

Anyone interested in application manageability and/or management integration should read about Jean-Francois Denise’s prototype for RESTful Access to JMX Instrumentation. Not (at least for now) as something to make use of, but to force us to think pragmatically about the pros and cons of the WS-* stack when used for management integration.

The interesting question is: which of these two interfaces (the WS-Management-based interface being standardized or the HTTP-centric interface that Jean-Francois prototyped) makes it easier to write a cross-platform management application such as the poker-cheating demo at JavaOne 2008?

Some may say that he cheated in that demo by using the Microsoft-provided WinRM implementation of WS-Management on the VBScript side. Without it, it would have clearly been a lot harder to implement the WS-Management based protocol in VBScript than the REST approach. True, but that’s the exact point of standards, that they allow such libraries to be made available to assist implementers. The question is whether such a library is available for your platform/language, how good and interoperable that library is (it could actually hinder rather than help) and what is the cost to the project of depending on it. Which is why the question is hard to answer in absolute terms. I suspect that, even with WinRM, the simple use case demonstrated at JavaOne would have been easier to implement using straight HTTP but that things change quickly when you run into more demanding use cases (e.g. event notification with filters, sequencing of large responses into an enumeration…). Which is why I still think that the sweet spot would be a simplified WS-Management specification (freed of the WS-Addressing crud for example) that makes it easy (almost as easy as the HTTP-based interface) to implement simple use cases (like a GET) by hand but is still SOAP-based, which lets it seamlessly enter library-driven territory when more advanced features are added (e.g. WS-Security, WS-Enumeration…). Rather than the current situation in which there is a protocol-level disconnect between the HTTP interface (easy to implement by hand) and the WS-Management interface (for which manual implementation is a cruel – and hopefully unusual – punishment).

So, Jean-Francois, where is this JMX-REST work going now?

While you’re on Jean-Francois’ blog, another must-read is his account of the use of Wiseman and Metro in the WS Connector for JMX Agent RI.

As a side note (that runs all the way to the end of this post), Jean-Francois’ blog is a perfect illustration of the kind of blogs I like to subscribe to. He doesn’t feel the need to post all the time. But when he does (only four entries so far this year, three of them “must read”), he provides a lot of insight on a topic he really understands. That’s the magic of RSS/Atom. There is zero cost to me in keeping his feed in my reader (it doesn’t even appear until he posts something). The opposite of what used to be conventional knowledge (that you need to post often to “keep your readers engaged” as the HP guidelines for bloggers used to say). Leaving the technology aside (there is nothing to RSS/Atom technologically other than the fact that they happen to be agreed upon formats), my biggest hope for these specifications is that they promote that more thoughtful (and occasional) style of web publishing. In my grumpy days (are there others?), a “I can’t believe United lost my luggage again” or “look at the nice flowers in my backyard” post is an almost-automatic cause for unsubscribing (the “no country for old IT guys” series gets a free pass though).

And Jean-Francois even manages to repress his Frenchness enough to not take snipes at people just for the fun of it. Another thing I need to learn from him. For example, look at this paragraph from the post that describes his use of Wiseman and Metro:

“The JAX-WS Endpoint we developed is a Provider<SOAPMessage>. Simply annotating with @WebService was not possible. WS-Addressing makes intensive use of SOAP headers to convey part of the protocol information. To access to such headers, we need full access to the SOAP Message. After some redesigning of the existing code we extracted a WSManAgent Class that is accessible from a JAX-WS Endpoint or a Servlet.”

In one paragraph he describes how to do something that IBM has been claiming for years can’t be done (implement WS-Management on top of JAX-WS). And he doesn’t even rub it in. Is he a saint? Good thing I am here to do the dirty work for him.

BTW, did anyone notice the irony that this diatribe (which, by now, is taking as much space as the original topic of the post) is an example of the kind of text that I am glad Jean-Francois doesn’t post? You can take the man out of standards, but you can’t take the double standard out of the man.

[UPDATED 2008/6/3: Jean-Francois now has a second post to continue his exploration of marrying the Zen philosophy with the JMX technology.]

2 Comments

Filed under Application Mgmt, CMDB Federation, Everything, Implementation, IT Systems Mgmt, JMX, Manageability, Mgmt integration, Open source, SOAP, SOAP header, Specs, Standards, WS-Management