Category Archives: Everything

All I know about RDF/OWL I learned in preschool

I don’t want to seem pretentious, but back in preschool I was a star student. At least when it came to potatoes. I am not sure what it’s called in US preschools, but what we meant by a potato, in my French classroom, was an oval shape in which you put objects. The typical example had two overlapping ovals, one for green things and the other for animals. A green armchair goes in the non-overlapping part of the “green” oval. A lion goes in the non-overlapping part of the “animal” oval. A green frog goes in the intersection. A non-green bus goes outside of both ovals. Etc.

As you probably remember, there are many variations on this, including cases where more than two ovals overlap. The hardest part was when we had to draw the ovals ourselves as opposed to positioning objects in pre-drawn ovals: we had to decide whether to make these ovals overlap or not. Typically they would first be drawn separately until an object that belonged to both would come up, prompting some head-scratching and, hopefully, a redrawing of the boundaries. Some ovals were even entirely contained within a larger oval! Hours of fun! I loved it.

[Side note: meanwhile, of course, the cool kids were punching one another in the face or stealing somebody’s lunch money. But they are now stuck with boring million-dollar-a-year jobs as cosmetic surgeons or Wall Street bankers (respectively) while I enjoy the glamorous occupation of modeling IT systems. Who’s laughing now?]

To a large extent, these potatoes really are all you need to understand about RDFS and OWL classes. OO people, especially, are worried about “multiple inheritance”. But we are not talking about programmatic objects here, in which inheritance brings methods with it. Just about intersecting potatoes. Subclassing is just putting a potato inside another one. Unions and intersections are just misshapen potatoes made by following the contours of existing potatoes. How hard can all that be?

Sure there are these “properties” you’ve heard about, but that’s just adding an arrow to show that the lion is sitting on the armchair. Or eating the frog.
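For readers who prefer triples to vegetables, here is a minimal sketch of the potatoes and arrows above, using Python’s rdflib (the example.org names are made up for illustration, and the formal owl:unionOf/owl:intersectionOf constructs are left out to keep it short):

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/preschool#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Two potatoes (classes)...
g.add((EX.GreenThing, RDF.type, RDFS.Class))
g.add((EX.Animal, RDF.type, RDFS.Class))
# ...and a smaller potato drawn entirely inside the "Animal" one.
g.add((EX.Frog, RDFS.subClassOf, EX.Animal))

# The green frog lands in the intersection simply by being typed as both.
g.add((EX.kermit, RDF.type, EX.Frog))
g.add((EX.kermit, RDF.type, EX.GreenThing))

# An arrow between two objects: the lion is sitting on the armchair.
g.add((EX.lion, EX.sittingOn, EX.greenArmchair))

print(g.serialize(format="turtle"))
```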

Just don’t bring up the fact that these arrows can themselves be classified inside their own potatoes, or the school bully (Alex Emmel) will get you.

4 Comments

Filed under Everything, OWL, RDF, Semantic tech

It’s party time again for the tinkerers

Around 1995 and 1996, if you knew how to set up an HTTP server on a Solaris box, hand-write a few HTML pages and create a simple CGI script to save the content of a form into a file (extra credit if you remembered to append to the file rather than overwriting it every time), then you were a world-class web designer. At least in my neck of the woods, which wasn’t Silicon Valley at the time. These people were self-trained, of course. I made some side money back then, creating a few web sites with just these limited skills. I am sure there were already people who had really thought about web design and could create useful and attractive sites (rather than simply functional ones). But all twelve of them were busy elsewhere and I would guess that none of them spoke French anyway. They were not my competition in Paris, when talking, for example, to a large French bank who wanted to create a web site to hire college students. My only competition was a bunch of Photoshop clowns whose idea of web design was to create a brochure in Photoshop/Framemaker and make the whole web page one big JPEG file.

Compare this to utility computing (aka clouds) today. Any Linux sysadmin who has, over the last year, made the effort to read and experiment with cloud computing (typically Amazon EC2), to survey available tools and to write a few scripts to tie them together is now an IT rock star, a potential catalyst for operations as a competitive advantage.
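To give an idea of how low the technical bar still is, here is the kind of short provisioning script we are talking about. This is just an illustrative sketch using the boto3 library that exists today (the 2008-era equivalent would have been the EC2 command-line API tools); the region, AMI ID and key pair name are placeholders:

```python
import boto3

# Placeholders: pick your own region, AMI and key pair.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-12345678",   # the machine image you built or chose
    InstanceType="m1.small",  # instance size
    KeyName="my-keypair",     # SSH key pair registered with EC2
    MinCount=1,
    MaxCount=1,
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])
```

A dozen lines like these, plus a bit of glue to configure what comes up, and you were ahead of most of the market.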

Just like self-taught HTML dilettantes didn’t keep control of the web design playground for long, early cloud adopters among sysadmins won’t enjoy their differentiation forever. But I would guess that they do today. Does anyone have statistics on how these skills are currently valued on the job market?

Of course the Photoshop crowd eventually got their FrontPage, Dreamweaver, etc. to let them claim that they could create web sites. These tools were pretty bad at first because they tried to make things look familiar to graphic designers (image maps galore!). They slowly got better.

The same thing is likely to happen in utility computing. Traditional IT management tools will soon get cloud features. Like the HTML WYSIWYG tools, they’ll probably tend to be too influenced by current IT management concepts and methods. For example, all the ITIL cheerleaders out there are probably going to bend cloud features to fit ITIL rather than the other way around. Even though utility computing might well invalidate some pretty fundamental assumptions/requirements of parts of ITIL.

The productivity increases created by utility computing are probably large enough that even these tools will provide great value. And they’ll improve. In the same way that the Web was a major enough improvement that even poorly designed web sites were way ahead of the alternatives.

Today, you obviously can’t make a living as an “HTML in notepad” developer. You must either be a real graphic designer and use tools to turn your designs into Web artifacts or be deep in Web technologies. Or both. Similarly, you soon won’t be providing much value if you just know how to start and provision EC2 instances. You’ll need to either be a real IT admin who can manage the utility resources as part of a larger system (like the applications) or be a hard-core utility computing expert who tackles hard problems like optimizing your resource consumption across cloud providers or securing and ensuring the compliance of your distributed IT system.

But for now, the party is raging and the dress code is still pretty lax.

Comments Off on It’s party time again for the tinkerers

Filed under Everything, IT Systems Mgmt, Utility computing

Sorry, no server for you today

Imagine that you are leasing a new car. Of course you plan to stay current on your lease payments. When you take delivery of the car, it comes with a loaded gun mounted on the dashboard and pointed at the driver’s head. The sales guy assures you that the gun has been programmed to only discharge if you fall behind in your payments. As long as you keep paying, what could go wrong, he asks?

Ask this poor VMware customer (whose virtual machines suddenly refused to power up) what could go wrong. According to a company spokesman, “an issue has been uncovered with ESX 3.5 Update 2 and ESXi 3.5 that causes the product license to expire on August 12”.

Why does anyone agree to use mission-critical infrastructure software that has such a kill switch? Enough things can go wrong with complex software that we don’t need to engineer additional causes of failure.

[UPDATED 2008/8/15: A less dramatic but related example: a Microsoft employee has his Win Server 2008 release candidate license expire on him. Sure it’s an RC so you shouldn’t have production-quality expectations of it, but that means that the “kill switch” code is there. Even if you plan to free the final release from this constraint, the fact that the code was there at one point means that things can go wrong. This is what happened with VMware BTW: “the problem is caused by a build timeout that was mistakenly left enabled for the release build”.]

[UPDATED 2008/9/2: A more thorough analysis of the importance of asking “why is this (license enforcement) in the code in the first place” rather than “how did this bug slip through”.]

3 Comments

Filed under Everything, Virtualization, VMware

OVF work in progress published

The DMTF has recently released a draft of the OVF specification. The organization’s newsletter says it’s “available (…) for a limited period as a Work In Progress” and the document itself says that it “expires September 30, 2008”. I am not sure what either means exactly, but I guess if my printed copy bursts into flames on October 1st then I’ll know.

From a very quick scan, there don’t seem to be many changes. Implementers of the original specification are sitting pretty. The language seems to have been tightened. The original document made many of its points by example only, while the new one tries to define rules more rigorously, e.g. by using some version of the BNF metasyntax. Also, there is now an internationalization section, one of the typical signs that a specification is growing up.

The old and new documents occupy a similar number of pages, but that’s a bit misleading because the old one inlined the XSD and MOF files, while the new one omits them. Correcting for this, the specification has grown significantly but it seems that most of the added bulk comes from more precise descriptions of existing features rather than new features.

For what it’s worth, I reviewed the original OVF specification from an IT management perspective when it was first released.

For now, I’ll use the DMTF-advertised temporary nature of this document as a justification for not investing the time in doing a better review. If you know of one, please let me know and I’ll link to it.

[UPDATED 2008/10/14: It’s now a preliminary standard, and here is a longer review.]

4 Comments

Filed under Everything, OVF, Specs, Standards, Virtualization, VMware, Xen, XenSource

ITIL certification for Oracle IT Service Management Suite (Pink Elephant)

The Oracle IT Service Management Suite (meaning the combination of Oracle Enterprise Manager and Siebel Service Desk) has earned a V2 certification for ITIL from Pink Elephant. More specifically, the Suite covers six of the seven processes: Incident, Problem, Change, Configuration, Release and SLM.

Here is the “Pink Verified” list.

[UPDATED 2008/9/9: Here is the corresponding press release.]

Comments Off on ITIL certification for Oracle IT Service Management Suite (Pink Elephant)

Filed under Everything, IT Systems Mgmt, ITIL, Oracle

Oracle VM template for Grid Control

Oracle recently released a set of VM templates (aka images) for OVM (Oracle Virtual Machine). In addition to being interesting news for OVM users, it’s also potentially useful for EM (Enterprise Manager) users: one of the images contains a full install of Enterprise Manager Grid Control. It is a patched Grid Control 10.2.0.4 installation, pre-configured with its associated 10.2.0.4 database repository, running on Oracle Enterprise Linux. It also includes a local Oracle Enterprise Linux 4 and 5 Yum repository for Grid Control use.

You can get the files through the Linux side of edelivery.oracle.com (select “Oracle VM templates” as the “product pack”).

More templates are available here. You can now impress your friends and family with a full Oracle demo/development environment and they won’t need to know that you didn’t have to install or configure any application.

Comments Off on Oracle VM template for Grid Control

Filed under Everything, IT Systems Mgmt, Linux, Oracle, OVM

Grid cloudification #2

On a recent drive to work, I heard another echo of the Grid world in the context of Cloud computing: I was listening to the Cloud Cafe podcast with Enomaly’s Reuven Cohen when he mentioned (near the 27 minute mark) that they use Ganglia for monitoring their environment.

I am familiar with Ganglia from some HP Labs projects around PlanetLab that I was involved in. Ganglia is used quite a lot for monitoring in the PlanetLab environment.

So Ganglia is one. Is any other project/tool/product coming from the Grid/HPC efforts of the last 10 years now used by the cool Cloud kids? Globus? SmartFrog? Platform? Condor? Others?

A few seconds later in the podcast, Reuven provides this juicy quote: “is the cloud an excuse for bad code”. But that’s a topic for another post.

1 Comment

Filed under Everything, Grid, IT Systems Mgmt, Manageability, Utility computing

Grid cloudification

Grid computing is moulting and, to no one’s surprise, the new skin has “cloud” written all over it.

That’s one way to interpret the announcement today that HP, Intel and Yahoo are going to launch a compute cloud. Seeing Intel and HP work together on this is no surprise. Back at HP I had some involvement with the collaboration between HP Labs and Intel on PlanetLab.

I have only read the Gigaom article and Steve’s, so this post is not an analysis of the announcement. Just a few questions that come to mind. They can be most concisely expressed by trying to understand how this differs from Amazon’s EC2. The quotes below all come from the Gigaom article.

“six physical locations” -> Amazon has availability zones, including the choice of three geographies.

“between 1,000 and 4,000 mostly Intel cores” -> According to this well-publicized story, Amazon can deliver 5,000 servers (each linked to at least one physical core) to one customer without breaking a sweat.

“We want, unlike other partnerships including Google and IBM’s where the lower-level stacks are not provided in a open manner to the world, open access to all levels of the hardware” -> The quote seems to conveniently avoid comparison with EC2 which provides a much lower abstraction level: virtual machines with mountable raw block storage devices. How much lower can you go without handing out access cards to physically walk into the datacenter? Access to the BMC on the motherboard? Access to some internal bus? Remote-controlled little robots that will slide cards in and out of a chassis?

“researchers will be able to access the cloud through a proposal process later this year” -> EC2 offers pay-as-you-go pricing, which tends to be a good driver for people to use the infrastructure efficiently. And of course someone can always give researchers a grant in the form of EC2 rent money.

Just to be clear, I am not belittling the announcement, because for one thing I haven’t read much about it and for another I probably know many of the HP Labs people involved, and they are part of the “mucho sapiens” branch of “homo sapiens”. I know they wouldn’t bother putting this out if it was nothing more than giving researchers some free EC2 time.

But these are the questions I’ll be trying to answer for myself as I read more about this project.

[UPDATED 2008/9/19: Russ Daniels (who was HP Software CTO when I was at HP and is now CTO of Cloud Services Strategy) comments on the announcement.]

Comments Off on Grid cloudification

Filed under Amazon, Everything, Grid, HP, Manageability, Tech, Utility computing, Virtualization, Yahoo

WS-Eventing joins the WS-Thingy working group proposal

The original proposal for a “WS Resource Access Working Group” mentioned that WS-Eventing might later join the party. It’s now done, and the proposed name for this expanded W3C working group is “WS Resource Interaction Working Group”.

It takes me no effort to imagine the discussions that turned “access” into “interaction”. Which means I am not cured yet, after a year of post-standards therapy.

IBM hurried to “clarify” how, in their view, this proposal relates to the existing WS-Notification standard. The logic seems to be: WS-Notification is a great general-purpose pub/sub spec, WS-Eventing is a pub/sub spec used in the device management spec, to prevent confusion we will make them overlap completely by making WS-Eventing another general-purpose pub/sub spec.

Someone who’s been paying attention asks how this relates to the WSDM/WS-Management convergence. IBM’s answer is a model of understatement: “other activities in the WS community should not delay their work in anticipation of new documents being produced”.

As the sign at New York’s pier 59 might have read in 1912: “visitors expecting to greet RMS Titanic passengers should not delay their activities in anticipation of the boat arriving in the harbor”.

2 Comments

Filed under Everything, IBM, IT Systems Mgmt, SOAP, Specs, Standards, W3C

Cloud Computing trivia

A few silly trivia questions for everyone out there who has drunk the Kloud-Aid.

Q) When was the cloudcomputing.com domain registered?

A) February 28, 2007. Yes, less than a year and a half ago it could have been yours for 10 bucks. A nice reminder of how quickly the buzzword took over. For comparison, utilitycomputing.com was registered in July 2002 and gridcomputing.com in February 2000. By the way, fogcomputing.com got snapped up a month ago today and is currently parked…

Q) Who owns cloudcomputing.com?

A) Dell. Ironically, one of the companies that has the most to lose from it… Of course they don’t see it that way and they redirect that domain to a dell.com page that explains all they have to offer in this area.

Q) Where does the name come from?

A) According to Wikipedia, “the term cloud computing derives from the common depiction in most technology architecture diagrams, of the Internet or IP availability, using an illustration of a cloud”. OK, then are databases now called Cylinder Computing?

Q) How does one make money in Cloud Computing?

A) By registering the domain name and re-selling it at the peak of the hype. CylinderComputing.com is still available…

[UPDATED 2008/8/3: For the record, that last answer was supposed to be a joke. It seemed pretty obvious at the time, but one week later the news comes out that Dell is trying to get a trademark on the term “cloud computing”… More analysis here.]

1 Comment

Filed under Everything, Utility computing

Animoto is no infrastructure flexibility benchmark

I have nothing against Animoto. From what I know about them (mostly from John’s podcast with Brad Jefferson) they built their system, using EC2, in a very smart way.

But I do have something against their story being used to set the benchmark for infrastructure flexibility. For those who haven’t heard it five times already, the summary of “their story” is ramping up from 50 to 5000 machines in a week (according to the podcast). Or from 50 to 3500 (according to this AWS blog entry). Whatever. If I auto-generate my load (which is mostly what they did when they decided to auto-create a custom video for each new user) I too can create the need for thousands of machines.

This was probably a good business decision for Animoto. They got plenty of visibility at a low cost. Plus the extra publicity from being an EC2 success story (I for one would never have heard of them through their other channels). Good for them. Good for Amazon who made it possible. And who got a poster child out of it. Good for the facebookers who got to waste another 30 seconds of their time straining their eyes. Everyone is happy, no animal got hurt in the process, hurray.

That’s all good but it doesn’t mean that from now on any utility computing solution needs to support ramping up by a factor of 100 in a week. What if Animoto had been STD’ed (slashdotted, technoratied and dugg) at the same time as the Facebook burst, resulting in the need for 50,000 servers? Would 1,000x be the new benchmark? What if a few of the sites that target the “lonely guy” demographic decided to use Animoto for… ok let’s not go there.

There are three types of user requirements. The Animoto use case is clearly not in the first category but I am not convinced it’s in the third one either.

  1. The “pulled out of thin air” requirements that someone makes up on the fly to justify a feature that they’ve already decided needs to be there. Most frequently encountered in standards working groups.
  2. The “it happened” requirements that assume that because something happened sometimes somewhere it needs to be supported all the time everywhere.
  3. The “it makes business sense” requirements that include a cost-value analysis. The kind that comes not from asking a customer “would you like this” but rather “how much more would you pay for this” or “what other feature would you trade for this”.

When cloud computing succeeds (i.e. when you stop hearing about it all the time and, hopefully, we go back to calling it “utility computing”), it will be because the third category of requirements will have been identified and met. Best exemplified by the attitude of Tarus (from OpenNMS) in the latest Redmonk podcast (paraphrased): sure we’ll customize OpenNMS for cloud environments; as soon as someone pays us to do it.

4 Comments

Filed under Amazon, Business, CMDB Federation, Everything, Mgmt integration, Specs, Tech, Utility computing

Forrester report on Oracle’s Enterprise Manager

Forrester’s Jean-Pierre Garbani wrote a short report last month about the current offering and future plans of Oracle’s IT management group (where I work).

As the report points out, Oracle’s IT management products don’t always enjoy a level of industry attention commensurate with the value they deliver. This report will hopefully help fix this.

Forrester: “Oracle Focuses On Business Value”.

1 Comment

Filed under Application Mgmt, BSM, Everything, IT Systems Mgmt, Oracle

Did someone at EDS miss the memo?

Two months ago, HP announced the acquisition of EDS.

One month later, HP Software announced a slew of new service management products, including an updated version (7.5) of Universal CMDB (from the Mercury acquisition).

One month later (today), according to BMC (with supporting quote from an EDS exec), “EDS Asia Pacific Standardises on BMC Software Atrium CMDB to Improve Service Delivery”.

As an ex-colleague pointed out to me, the acquisition isn’t closed yet. Still.

6 Comments

Filed under BSM, CMDB, Everything, HP, IT Systems Mgmt

WS Resource Access at W3C: the good, the bad and the ugly

As far as I know, the W3C is still reviewing the proposal that was made to them to create a new working group to standardize WS-Transfer, WS-ResourceTransfer, WS-Enumeration and WS-MetadataExchange. The suggested name, “Web Services Resource Access Working Group” or WS-RAWG is likely, if it sticks, to end up being shortened to WS-RAW. Which is a bit more cruel than needed. I’d say it’s simply half-baked.

There are many aspects to the specifications and features covered by the proposal. Some goodness, some badness and some ugliness. This post analyzes the good, points at the bad and hints at the ugly. Like your average family-oriented summer movie.

The good

The specifications proposed for W3C standardization describe a way to provide some generally useful features for SOAP messages. Some SOAP messages can get very long. In some cases, I know ahead of time what portion of the long messages promised by the contract (e.g. WSDL) I want. Wouldn’t it be nice, as an optimization, to let the message sender know about this so they can, if they are able to, filter down the message to just the part I want? Alternatively, maybe I do want the full response but I can’t consume it as one big message so I would like to get it in chunks.

You’ll notice that the paragraph above says nothing about “resources”. We are just talking about messaging features for SOAP messages. There are precedents for this. WS-Security can be used to encrypt a message. Any message. WS-ReliableMessaging can be used to ensure delivery of a message. Any message. These “quality of service” specifications are mostly orthogonal to the message content.

WS-RT and WS-Enumeration provide a solution to the “message filtering” and “message chunking” problems, respectively. But they only address them in the context of a GET-like operation. They can’t be layered on top of any SOAP message. How useful would WS-Security and WS-ReliableMessaging be if they had such a restriction?

If W3C takes on part of the work listed in the proposal, I hope they’ll do so in a way that extends the utility of these features to all SOAP messages.

And just like WS-Security and WS-ReliableMessaging, these features should be provided in a way that leverages the SOAP processing model. Such that I can judiciously use the soap:mustUnderstand header attribute to avoid breaking existing services. If I’d like the message to be pared down but I can handle the complete message if need be, I’ll set this attribute to false. If I can’t handle the full message, I’ll set the attribute to true and I’ll get an error if the other party doesn’t understand this extension. At which point I can pick an alternative way to get the task accomplished. Sounds pretty basic but it’s amazing how often this important feature of SOAP (which derives from and extends XML’s must-ignore semantics) is neglected and obstructed by designers of SOAP messages.
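To make this concrete, here is a hedged sketch (Python with lxml) of what honoring the SOAP processing model looks like on the wire. The “Filter” header block and its namespace are invented for illustration; they are not taken from any of the proposed specifications:

```python
from lxml import etree

SOAP = "http://www.w3.org/2003/05/soap-envelope"  # SOAP 1.2 envelope namespace
FILT = "http://example.org/hypothetical-filter"   # made-up extension namespace

envelope = etree.Element("{%s}Envelope" % SOAP, nsmap={"soap": SOAP, "flt": FILT})
header = etree.SubElement(envelope, "{%s}Header" % SOAP)
body = etree.SubElement(envelope, "{%s}Body" % SOAP)

# Hypothetical "only send me this fragment" header block.
flt = etree.SubElement(header, "{%s}Filter" % FILT)
flt.text = "/order/items/item[1]"

# mustUnderstand="true": the receiver must fault if it doesn't support the
# extension; set it to "false" and the worst case is getting the full message.
flt.set("{%s}mustUnderstand" % SOAP, "true")

print(etree.tostring(envelope, pretty_print=True).decode())
```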

And then there is WS-MetadataExchange. While I am not a huge fan of this specification, I agree with the need for a simple, reliable way to retrieve different types of metadata for an endpoint.

So that’s the (potential) good. A flexible and generally useful way to pare down long SOAP messages, to chunk them and to retrieve metadata for SOAP endpoints.

The bad

The bad is the whole “resource access” spin. It is not actually intrinsically bad. There are scenarios where such a pattern actually fits. But the way that pattern is being addressed by WS-RT and friends is overly generalized and overly XML-centric. By the latter I mean that it takes XML from an agreed-upon on-the-wire interchange format to an implicit metamodel (e.g. it assumes not just that you agree to exchange XML-formatted data but that your model and your business logic are organized and implemented around an XML representation of the domain, which is a much more constraining requirement). I could go on and on about this, especially the use of XPath in the PUT operation. In fact I did go on and on about it, but I spun that off as a separate entry.

In the context of the W3C proposal at hand, this is bad because it burdens the generally useful features (see the “good” section above) with an unneeded and limiting formalism. Not to mention the fact that W3C kind of already has its resource access mechanism, but I’ll leave that aspect of the question to Mark and various bloggers (see a short list of relevant posts at the end of this entry).

The resource access part might be worth doing (one more time), but probably not in the same group as things like metadata discovery, message filtering and message chunking, which are not specific to “resource access” situations. And if someone is going to do this again, rather than repeating the not too useful approaches of the past, it may be good to consider alternatives.

The ugly

That’s the politics around this whole deal. There is, as you would expect, a lot more to it than meets the eye. The underlying drivers for all this have little to do with REST/WS or other architecture considerations. They have a lot to do with control. But that’s a topic for another post (maybe) when more of it can be publicly discussed.

A lot of what I describe in this post was already explained in the WS-ManagementHammer post from a couple of months ago. But that was before the W3C proposal and before WS-MetadataExchange was dragged into the deal. So I thought it might be useful to put the analysis in the context of that proposal. And BTW, this is a personal opinion, not an Oracle position (which is true in general for everything on this blog but is worth repeating specifically for this post).

2 Comments

Filed under Everything, Grid, IT Systems Mgmt, Manageability, Mgmt integration, Modeling, SOAP, SOAP header, Specs, Standards, Tech, W3C, WS-Management, WS-ResourceTransfer, WS-Transfer, XMLFrag, XPath

Who needs XPath fragment-level PUT?

WS-Management and WS-ResourceTransfer (WS-RT) both provide a mechanism to modify the XML representation of the state of a resource in a fine-grained way. The mechanisms differ a bit: WS-Management defines a SOAP header and distinguishes PUT from DELETE at the WS-Transfer operation level, while WS-RT uses the SOAP body and tunnels “modes” (remove, modify, insert) on top of the PUT WS-Transfer operation. But in their complete form both use XPath to point to any arbitrary nodeset and update it.

WS-ResourceProperties (WS-RP) takes a simpler approach. While it too supports XPath-driven retrieval of the content, it doesn’t attempt to provide an XPath-like level of flexibility when it comes to updating the content. All it offers is SET, INSERT, UPDATE and DELETE operations at the level of a property (a top-level child of the XML representation) and nothing more granular.

In this respect at least, WS-RP makes a better choice than its competitor and its aspiring successor.

First, XPath-driven updates sound easy but in fact are hard to specify. Not surprisingly, the current specifications do a pretty incomplete job at it. They often seem to assume that the XPath used to target the value to change returns only one node, but nothing guarantees this. If it picks up more than one node, do you replace all these nodes by the new values as a block (the new values get inserted once, presumably at the location of the first selected node) or do you replace each selected node by all the new values (in which case they get duplicated as needed)? Also, the specifications say nothing about what constitutes compatibility between the targeted nodes and the replacement nodes. One might assume that a “don’t be stupid” approach is all that’s needed. But there is no obvious line between “stupid” and “useful”. Does a request to replace a text node by an attribute node make sense? Not in a strongly-typed world, but a more forgiving implementation might just insert the text value of the attribute in the place of the text node to get to a valid result. What about replacing an element by a text node? Some may reject it for incompatible types but, unless the schema prevents mixed content, it may well result in a perfectly valid document. All in all, specifying a reliable way to edit XML is a pretty hairy task. Much harder than reading XML. It requires very careful consideration that has very little to do with on-the-wire protocol concerns. Which is why doing this as part of a SOAP specification is a strange choice. The XQuery group is much more qualified for this. There must be a reason why that group decided to punt on this until they had taken care of the easier “read” case.
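A quick way to see the first problem (nothing forces the targeting XPath to select a single node) is to run one against a small, hypothetical resource representation; the sketch below uses Python and lxml:

```python
from lxml import etree

# Hypothetical XML representation of a resource's state.
state = etree.fromstring(
    "<server><disk size='10'/><disk size='20'/><name>web01</name></server>"
)

# The XPath a fragment-level PUT might use to point at "the" node to replace.
targets = state.xpath("//disk")
print(len(targets))  # 2 -- replace each selected node? the whole set as a block?
```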

Second, it’s usually not all that useful anyway. Which is why the lack of precision in WS-Management’s specification of the fragment PUT hasn’t really been a problem so far: people haven’t fully implemented that feature. A lot of the implementations are backed by a CIMOM, an MBean or some other OO store. In these stores, the exposed granularity is typically at the attribute level. The interactions used by programmers and consoles are also at that level. The XPath-driven update is then only used as a mechanism to update many properties at once (rather than going deep into individual properties) but that’s using a machine gun to kill a fly. The WS-RP approach supports these use cases without calling on XPath.

Third, XPath-driven PUT is really hard to implement unless your back-end store happens to be an XML database. You may end up having to write your own XPath parser and interpreter, an exercise during which you will face some impedance mismatches. Your back-end store may not have notions of property order for example, or attribute versus element. How do you handle these XPath instructions? And what kind of interoperability results from implementers having to make these decisions on their own? Implementing XPath selection on a GET is a lot simpler. All it assumes is that there is an XML serialization of the result, on which you can run the XPath expression before shipping it out. That XML serialization is a given in the SOAP world already. But doing an XPath-driven PUT injects XML considerations in your store itself, not just in the communication path.
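By contrast, here is roughly all it takes to support XPath selection on a GET, assuming you already have the XML serialization in hand (again a hypothetical example with Python and lxml; the document shape is made up):

```python
from lxml import etree

# Full XML serialization of the resource state (needed for the SOAP response anyway).
full_state = etree.fromstring(
    "<server><name>web01</name><os>Linux</os><disk size='10'/><disk size='20'/></server>"
)

# Fragment-level GET: evaluate the requester's XPath over the serialized form
# and ship back only the selected nodes. The back-end store never sees the XPath.
requested = full_state.xpath("/server/disk")
print(b"".join(etree.tostring(node) for node in requested).decode())
```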

Those are the practical reasons. In short, it makes the specifications at best complex and at worst non-interoperable, for a feature that is rarely needed. That should be enough already, but there are some architectural reasons to stay away too.

WS-Transfer is sometimes sold as REST over SOAP. And fragment-level WS-Transfer (what WS-Management and WS-RT do) is then REST on steroids. Sorry, not true. REST on crack if anything.

I am not a REST expert, but I know enough to understand that “everything has a URI” really means “anything meaningful has a URI”. It’s the difference between a crystal structure and a pile of mud. REST lets you interact directly with any node in the crystal, but there is a limited number of entities that are considered worthy of being a node. There is design involved (sorry, you can’t suddenly fire your architects, as attractive as that sounds). You can’t point to the space between two nodes in the crystal. XPath-on-top-of-WS-Transfer, on the other hand, lets you plunge your spoon anywhere in the pile of mud and scoop out whatever happens to be there.

Let’s take a look at WS-Federation (here is the latest draft), the only specification in a standard body that I know of that is currently using WS-RT. Whether it’s a wise choice or not for them, from a governance perspective, is a separate topic that I won’t cover here (answer: no. oops).

From a technical perspective, it is interesting to see how they went about using WS-RT PUT. They use it to update pseudonyms. But even though there is an XML representation for the pseudonyms, they don’t want to allow users to update any arbitrary part of that XML. So they create a specific dialect (the fed:FilterPseudonyms defined in section 6.1) that lets you, based on semantics that are meaningful in the specific domain covered by the specification, point to pseudonyms.

I believe most potential users of WS-RT PUT are in the same case as WS-Federation and are better served by a domain-specific way to identify entities of interest. At least the WS-Federation authors realized it rather than saying “great, WS-RT XPath fragment PUT gives us all this flexibility for free” and saddling their implementers with the impossible task of producing interoperable implementations. Of course this begs the question of why WS-Federation uses WS-RT in the first place. A charitable interpretation is to pin this on overzealous re-use of all things WS-*. A more cynical interpretation sees this as a contrived precedent manufactured in an attempt to “prove” that WS-RT provides features of general use rather than ones specific to the management domain.

Having described at length why XPath-driven updates aren’t as useful as they may seem, I can still think of two cases where such a generic mechanism to modify an XML document could be useful. One is if the resource actually is a document (as opposed to having its state represented by a document). For example, a wiki page. But I haven’t exactly noticed wiki creators and users clamoring for wiki-over-SOAP, have you? The other situation is if you have a true model-driven system that is supported by a comprehensive system description and validation framework. The kind of thing that SML is trying to deliver. By using Schematron (rather than just XSD, which is very limited in its expressivity beyond mere syntactical validation) to provide model validation. This would, in theory, allow the requester to validate the updated model before sending the change request. The change would still be validated on the receiver side (either explicitly or implicitly because a non-valid new model would simply fail when applied to the system), but the existence of the validation framework guarantees a high rate of success (the sender would rarely send non-valid change requests). That’s very nice and exciting, but we don’t have this. SML is, as far as I can see, going nowhere fast in terms of adoption. Standardizing a model exchange protocol for that use case is, at this point in time, premature. Maybe one day.

5 Comments

Filed under Everything, IT Systems Mgmt, Mgmt integration, Modeling, REST, SML, Specs, Standards, WS-Management, WS-ResourceTransfer, WS-Transfer, XPath, XQuery

A nice place to stay in Standardstown

You’ve just driven into Standardstown. It’s getting late and you need a place to stay. Your GPS navigation system has five listings under “accommodations”, with the following descriptions:

W3C campground

This campground provides well-equipped tents (free wi-fi throughout the camp). It has the most developed community feeling of all nearby accommodations. Every evening residents gather around a bonfire and the camp elders sing cryptic songs. At the end, the elders nod in approval of the moral of the song. Most campers don’t understand the lyrics but they like the melody. There is a recurring argument about how much soap the campground management should provide to guests. Old timers want to do away with this practice, but management is afraid that business travelers won’t patronize the camp if they are not provided with plenty of soap. The camp is located along a river, downstream from a large factory. When stuff floats down from the factory and lands on the shore of the camp, they call it a submission and thank the factory. So far, the attempts to build a clubhouse from the factory rubbish have mainly created eyesores.

OASIS housing development

This housing development is an option for accommodation because its management will give a plot of land to almost anyone who asks. More specifically, there needs to be at least three of you in the car. If you’re on your own, a common trick is to go pick up the village drunk (offer him a drink) and the village idiot (tell him you want his advice). They can usually be found on the main plaza, arguing about the requirements of imaginary users. Once you have your plot of land, the OASIS management maintains electric power, water and sewer but you can do pretty much what you want otherwise. If you just need temporary housing, you can just pitch a tent. As a result of this approach, there are several houses abandoned half-way through construction. This can make it hard to find your way to the house you are looking for. Residents typically don’t know anything about what’s going on in the house next door. You’ll find nice families living next to a crack house.

Motel DMTF

This motel is hard to find because it hides behind high walls. Even once you’re inside, there are segregated areas. Chances are your room card will give you access to the pool deck but not the clubhouse. Make a mental note of the way to the emergency exits, because there is no evacuation map on the wall (the map exists, but it’s considered confidential). We’ve heard that the best suites have a special door for direct access to the management office. After you leave, you can’t tell your friends what happened there. This review itself probably breaks some confidentiality rule.

WS-I Resort

This time-share resort is the newest development in town. By the time it got built, all the good land was taken so they had to build on land fragments leased from other hotels. The facilities are new and nice, but the owners association is dysfunctional. We’ve been told the feud started when a co-owner tried to organize a private mime show on shared land. Whatever the origin of the disagreement it has resulted in veto rules being commonly invoked, stopping most of the activities that the resort was originally planning to offer. But it remains a good option if you just need a place to sleep. The resort marketing has been pretty efficient: before doing business with you, many local companies will demand to see a receipt to show that you slept there.

Hotel ISO

Just getting a reservation there is a month-long process, so this is not an option if you’re already in town. Unfortunately, if you plan to do business with the local government you are expected to patronize this hotel. If that’s your case, the solution is to sleep in one of the other places in town and just go to this hotel for breakfast. Once there, order their breakfast special (called the “fast-track rubber stamp” which, unfortunately, tastes as bad as it sounds) and staple the breakfast receipt to your hotel bill. That should satisfy the city hall staff that they can do business with you.

11 Comments

Filed under DMTF, Everything, Standards, W3C

Three non-muppets walk into a bar…

I can’t shake the feeling that if Steve Vinoski, Steve Jones and Stuart Charlton had a drink together they’d actually agree on pretty much any distributed computing question that is worded in specific and unambiguous terms.

If you are not subscribed to their three blogs (and I don’t understand why you would not be if you have enough free time to read mine), here is a quick summary of the discussion so far:

Steve Vinoski writes an article critical of RPC approaches. Steve Jones doesn’t agree and explains why in a review of the article. Steve Vinoski is not impressed by the content of the review and even less by the tone. Stu sides with Steve Vinoski.

I think they all agree that, all other things equal, it is a good thing to facilitate the task of developers by providing them with intuitive interfaces. They also all agree that you can’t write distributed applications that shield the developer from the existence of a network. The key questions then boil down to:

  • what degree of network awareness do you require from developers (or what degree do you award them, for a more positive spin)?
  • what are the most appropriate programming constructs to expose that “optimal” degree of network awareness to the developers?

These questions don’t necessarily require words like “REST”, “RPC” and “JAXM” to be thrown around, other than merely as illustrative examples. In fact, the discussion so far seems to indicate that the questions are less likely to be resolved as long as these words are involved.

Once these questions are answered, we can compare the existing toolkits/frameworks (and yes, even architectural styles) to see which ones come closer to the ideal level of network-awareness and which ones present the most useful abstractions for that level. Or how each one can be improved to come closer to the sweet spot. Of course, there isn’t one level of network-awareness that is ideal for all cases, but my guess is that most enterprise applications are not too far apart on this.

[UPDATED 2008/7/27: Eric Newcomer explains it best. It’s just about finding a useful level of abstraction.]

1 Comment

Filed under Everything, REST, SOAP

Oracle/BEA Middleware go-forward plan

The landscape for the post-BEA-acquisition Oracle Fusion Middleware portfolio has been publicly released. You can read a list of all the components. Tracing the history of each back to Oracle-internal developments or acquired companies is left as an exercise for the reader. There are plenty of hints, starting with some of the product names (WebLogic, Tuxedo…). The components with more generic names (SOA Suite, SOA Governance) require a little bit more digging. While the filiation might be of interest to people as a way to map the go-forward plan to current products, in the long term it doesn’t matter as much as the overall quality, consistency and integration. Which is what the stack is optimized for.

The announcement also contains enough podcasts to keep the whole family entertained during the long drive to your campground of choice over the July 4 weekend. If the kids complain, tell them it’s that or Prairie Home Companion and they’ll surrender.

[UPDATED 2008/7/15: The folks at MWD just published their analysis of the go-forward plan (free sign-up required).]

[UPDATED 2008/8/1: A nice bullet-list summary by Ashutossh Pewekar.]

Comments Off on Oracle/BEA Middleware go-forward plan

Filed under Everything, Middleware, Oracle

Moving towards utility/cloud computing standards?

This Forbes article (via John) channels 3Tera’s Bert Armijo’s call for standardization of utility computing. He calls it “Open Cloud” and it would “allow a company’s IT systems to be shared between different cloud computing services and moved freely between them“. Bert talks a bit more about it on his blog and, while he doesn’t reference the Forbes interview (too modest?), he points to Cloudscape as the vision.

A few early thoughts on all this:

  • No offense to Forbes but I wouldn’t read too much into the article. Being Forbes, they get quotes from a list of well-known people/companies (Google and Amazon spokespeople, Forrester analyst, Nick Carr). But these quotes all address the generic idea of utility computing standards, not the specifics of Bert’s project.
  • Saying that “several small cloud-computing firms including Elastra and Rightscale are already on board with 3Tera’s standards group” is ambiguous. Are they on-board with specific goals and a candidate specification? Or are they on board with the general idea that it might be time to talk about some kind of standard in the general area of utility computing?
  • IEEE and W3C are listed as possible hosts for the effort, but they don’t seem like a very good match for this area. I would have thought of DMTF, OASIS or even OGF first. On the face of it, DMTF might be the best place but I fear that companies like 3Tera, Rightscale and Elastra would be eaten alive by the board member companies there. It would be almost impossible for them to drive their vision to completion, unlike what they can do in an OASIS working group.
  • A new consortium might be an option, but a risky and expensive one. I have sometimes wondered (after seeing sad episodes of well-meaning and capable start-ups being ripped apart by entrenched large vendors in standards groups) why VCs don’t play a more active role in standards. Standards sound like the kind of thing VCs should be helping their companies with. VC firms are pretty used to working together, jointly investing in companies. Creating a new standard consortium might be too hard for 3Tera, but if the VCs behind 3Tera, Elastra and Rightscale got together and looked at the utility computing companies in their portfolios, it might make sense to join forces on some well-scoped standardization effort that may not otherwise be given a chance in existing groups.
  • I hope Bert will look into the history of DCML, a similar effort (it was about data center automation, which utility computing is not that far from once you peel away the glossy pictures) spearheaded by a few best-of-breed companies but ignored by the big boys. It didn’t really take off. If it had, utility computing standards might now be built as an update/extension of that specification. Of course DCML started as a new consortium and ended as an OASIS “member section” (a glorified working group), so this puts a grain of salt on my “create a new consortium and/or OASIS group” suggestion above.
  • The effort can’t afford to be disconnected from other standards in the virtualization and IT management domains. How does the effort relate to OVF? To WS-Management? To existing modeling frameworks? That’s the main draw towards DMTF as a host.
  • What’s the open source side of this effort? As John mentions during the latest Redmonk/Willis IT management podcast (starting around minute 24), there needs to be an open source side to this. Actually, John thinks all you need is the open source side. Coté brings up Eucalyptus. BTW, if you want an existing combination of standards and open source, have a look at CDDLM (standard) and SmartFrog (implementation, now with EC2/S3 deployment).
  • There seems to be some solid technical raw material to start from. 3Tera’s ADL, combined with Elastra’s ECML/EDML, presumably captures a fair amount of field expertise already. But when you think of them as a starting point to standardization, the mindset needs to switch from “what does my product need to work” to “what will the market adopt that also helps my product to work”.
  • One big question (at least from my perspective) is that of the line between infrastructure and applications. Call me biased, but I think this effort should focus on the infrastructure layer. And provide hooks to allow application-level automation to drive it.
  • The other question is with regards to the management aspect of the resulting system and the role management plays in whatever standard specification comes out of Bert’s effort.

Bottom line: I applaud Bert’s efforts but I couldn’t sleep well tonight if I didn’t also warn him that “there be dragons”.

And for those who haven’t seen it yet, here is a very good document on the topic (but it is focused on big vendors, not on how smaller companies can play the standards game).

[UPDATED 2008/6/30: A couple hours after posting this, I see that Coté has just published a blog post that elaborates on his view of cloud standards. As an addition to the podcast I mentioned earlier.]

[UPDATED 2008/7/2: If you read this in your feed viewer (rather than directly on vambenepe.com) and you don’t see the comments, you should go have a look. There are many clarifications and some additional insight from the best authorities on the topic. Thanks a lot to all the commenters.]

20 Comments

Filed under Amazon, Automation, Business, DMTF, Everything, Google, Google App Engine, Grid, HP, IBM, IT Systems Mgmt, Mgmt integration, Modeling, OVF, Portability, Specs, Standards, Utility computing, Virtualization

Progress Software acquires IONA… and MindReef too

The acquisition of IONA by Progress Software has been pretty widely written about (sometimes ironically). But that’s not the only thing happening in that neck of the woods. Less widely reported (but still covered on ZDNet, here and here) was their acquisition of MindReef. For the inside perspective, head over to Dan Foody’s blog.

This is yet another confirmation of the fact that testing and IT management are getting ever closer together. And for good reasons, if you want to better integrate application management tasks across the application’s lifecycle. Other signs of this were the recent acquisition of the e-Test suite from Empirix by Oracle (driven by Oracle’s application management team, not by the JDev team) and, some time ago, the fact that HP decided to hang on to Mercury’s testing business rather than spinning it off.

From the Progress perspective, the IT management side of course comes from the earlier Actional acquisition (that company itself having previously merged with WestBridge). Through my earlier standards work I have worked with people from Actional, WestBridge and MindReef, and there is an impressive wealth of SOAP and WS experience in these teams. What remains to be seen is how much management value can be derived from a very “on the wire” (as opposed to deep in the implementation) view of the interaction. Another challenge for SOAP-centric vendors, which might have been a driver for these acquisitions, is the realization that SOAP is going to be only one component of the integration landscape, not its foundation. It may be the JMS of B2B integration, but it won’t be its TCP/IP as was once assumed.

2 Comments

Filed under Everything, IT Systems Mgmt, SOAP, Testing