Who said WS-Transfer is for REST?

One more post on the “REST over SOAP” topic, recently revived by the birth of the W3C WS Resource Access working group. Then I’ll go quiet for a bit and let people actually working on it show me why I am wrong to worry about WS-RT.

Before that, I just want to clarify one thing. People seem to assume that WS-Transfer was created as a way to support the creation of RESTful systems that communicate over SOAP. As far as I can tell, this is simply not true.

I never worked for Microsoft and I was not in the room when WS-Transfer was created. But I know what WS-Transfer was created to support: chiefly WS-Management and the Devices Profile for Web Services, neither of which claims to have anything to do with REST. It’s just that they both happen to deal with resources (that word again!) that have properties and they want to access (mostly retrieve, really) the values of these properties. But in both cases, these resources have a lot more than just state. You can call all sorts of type-specific operations on them. No uniform interface. It’s not REST and it’s not trying to be REST. The Devices Profile also happens to make heavy use of WS-Discovery, and I am pretty sure that UDP broadcasts aren’t a recommended Web-scale design pattern. And there is no “hypermedia” in sight in either spec.

A specification is not RESTful. An application system is. And most application systems that use WS-Transfer don’t even try to be RESTful. Mocking WS-Transfer for not being as good as HTTP at supporting REST systems is like mocking an airplane for not being as good as your hatchback for grocery shopping. It’s true, but who cares.

So let’s not reflexively attack WS-Transfer for assumed purposes. And similarly, let’s not reflexively defend WS-Transfer as a good way to build RESTful systems.

Just to clarify, this is not meant as a defense of WS-Transfer. I think that, at least in the context of its original purpose, it should be gutted to only its GET operation. The PUT and DELETE tasks should be handled by domain-specific operations. Which would have the consequence of making it look less like a REST wannabe. But my recommendation aims at improving its applicability to the management domain, not at making it comply with an architectural style that is not (at least currently) used in that domain.

4 Comments

Filed under Everything, IT Systems Mgmt, Manageability, Mgmt integration, REST, SOAP, Specs, WS-Transfer

IT management and Cloud: now some products

Many of us have been thinking (a bit) and talking (a lot) about the relationship between Clouds and good old IT management. John understands both sides and produced a few good posts (like this one).

Maybe it’s just a coincidence that both Hyperic and CA recently made such announcements. In any case, it gives the impression that time has come for some actual product capabilities in the area of managing Cloud-based systems.

I haven’t investigated either, so keep your slideware shields up, but this is what I read:

From Javier Soltero’s “Announcing HQ 4.0”: “It also provides the first cloud-friendly management agent which allows users to manage cloud based virtual machines securely and reliably from either inside the cloud, or from HQ 4.0 installations inside your datacenter”. John approves.

And at CA World, according to InformationWeek, CA will announce a partnership with Amazon to provide management capabilities around Amazon’s EC2 utility computing platform, potentially including discovery of software running on EC2 instances, performance monitoring, configuration management, software deployment capabilities and provisioning.

When someone looks into these two products (and others, soon to follow or already out, that I have missed), it will be interesting to see how these Cloud-friendly capabilities relate to the good old capabilities of management products: “software discovery”, “perf monitoring”, “config management”, “software deployment”, “provisioning”. That all sounds pretty familiar. Is it just a matter of pointing the old tools to an EC2 IP address? Is it all new capabilities, done in a new way? Or, more realistically, where does it land between these extremes? Where do you want them to land? It’s not so obvious.

Utility computing comes with an expectation of additional flexibility (now that is obvious). When tweaking IT management tools to address the domain, do you leave “in datacenter” capabilities the same and branch off to do cool things in the new land? Or do you raise the level of flexibility across the board?

In other words, rather than snickering at them, maybe we should praise IT management vendors for whom the “look, I do Clouds” marketing spiel is just a repackaging of normal IT management features. Because it may mean that they’ve raised the bar on “in datacenter” automation capabilities. These Opsware and BladeLogic acquisitions have to come in somewhere, don’t they?

BTW, both of the announcements above also perpetuate the confusion between providing utility services (CA’s extended SaaS offering, Hyperic’s release of a pre-packaged Hyperic AMI) and the ability to manage Cloud-based systems. It’s all crammed in the same announcement/article because, hey, it’s all Cloud stuff.

Speaking of CA World, if I were there I would go to this session. At least for old times’ sake, and maybe to get some interesting ideas. Hopefully Don will blog about it after he is done presenting later today.

5 Comments

Filed under Amazon, Application Mgmt, Articles, CA, Conference, Everything, IT Systems Mgmt, Open source, Standards, Utility computing

WS Resource Access working group starting at W3C

Things went quiet for a while, but the W3C Web Services Resource Access Working Group has finally taken life, as was announced last week. It’s a well-known PR trick to announce bad news on a Friday so that it goes undetected. Is it a coincidence that W3C picked a Friday for this announcement?

As you can tell by this last remark, I have no trouble containing my enthusiasm about this new group. Which should not come as a surprise to regular readers of this blog (see this, this, this and this, chronologically).

The most obvious potential pushback against this effort is the questionable architectural need to redo over SOAP what can be done over simple HTTP. Along the lines of Erik Wilde’s “HTTP over SOAP over HTTP” post. But I don’t expect too much noise about this aspect, because even on the blogosphere people eventually get tired of repeating the same arguments. If anyone really wanted to put up a fight against this, it would have been done when the group was first announced, not now. That resource modeling party is over.

While I understand the “WS-Transfer is just HTTP over SOAP over HTTP” argument, this is not my problem with this group. For one thing, this group is not really about WS-Transfer, it’s about WS-ResourceTransfer (WS-RT) which adds fine-grained resource access on top of WS-Transfer. Which is not something that HTTP gives you out of the box. You may argue that this is not needed (just model your addressable resources in a fine-grained way and use “hypermedia” to navigate between them) but I don’t really buy this. At least not in the context of IT management models, which is where the whole thing started. You may be able to architect an IT management system in such a RESTful way, but even if you can it’s too far away from current IT modeling practices to be practical in many scenarios (unfortunately, as it would be a great complement to an RDF-based IT model). On the other hand, I am not convinced that this fine-grained access needs to go beyond “read” (i.e. no need for “fine-grained write”).

The next concern along that “HTTP over SOAP over HTTP” line of thought might then be why build this on top of SOAP rather than on top of HTTP. I don’t really buy this one either. SOAP, through the SOAP processing model (mainly the use of headers, something that WS-RT unfortunately butchers) is better suited than HTTP for such extensions. And enough of them have already been defined that you may want to piggyback on. The main problem with SOAP is the WS-Addressing tumor that grew on it (first I thought it was just a wart, but then it metastasized). WS-RT is affected by it, but it’s not intrinsic to WS-RT.

Finally, it would be a little hard for me to reject SOAP-based resource access altogether, having been associated with many such systems: WSMF, WSDM/WSRF, WS-Management and even WS-RT in its pre-submission days (and my pre-Oracle days). Not that I have signed away my rights to change my mind.

So my problem with WS-RAWG is not a fundamental architectural problem. It’s not even a problem with the defects in the current version of WS-RT. They are fixable and the alternative specifications aren’t beauty queens either.

Rather, my concerns are focused on the impact on the interoperability landscape.

When WS-RT started (when I was involved in it), it was as part of a convergence effort between HP, IBM, Intel and Microsoft. With the plan to use this to unify the competing WS-Management and WSDM/WSRF stacks. Sure it was also an opportunity to improve things a bit, but 90% of the value came from the convergence/unification aspect, not technical improvements.

With three of the four companies having given up on this, it isn’t much of a convergence anymore. Rather than paring down the number of conflicting options that developers have to choose from (a choice that usually results in “I won’t pick either since there is no consensus, I’ll just do it my own way”), this effort is going to increase it. One more candidate. WS-Management is not going to go away, and it’s pretty likely that in W3C WS-RT will move further away from it.

Not to mention the fact that CMDBf (and its SOAP-based graph-oriented query protocol) has since emerged and is progressing towards standardization. At this point, my (notoriously buggy) crystal ball shows a mix of WS-Management and CMDBf taking the prize overall. With WS-Management used to access individual resources and CMDBf used to access any kind of overall system view. Which, as a side note, means that DMTF has really taken this game over (at least in the IT management domain) from W3C and OASIS. Not that W3C really wanted to be part of the game in the first place…

11 Comments

Filed under CMDBf, DMTF, Everything, HP, IBM, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Query, REST, SOAP, SOAP header, Specs, Standards, W3C, WS-Management, WS-ResourceTransfer, WS-Transfer

Barack Obama’s first day on the job

A phone conversation.

– White House IT support.

– Hi, it’s Barack Obama.

– Good morning Mr. President and welcome to the White House.

– Thanks. Hey, I have a problem with the computer on my desk.

– Is it the screensaver? I know, it’s pretty embarrassing. President Bush got it from the vice-president and he really liked it. I was planning to remove it before you arrived this morning, but you got here before me. Sorry about that.

– Forget the screensaver. It’s the keyboard.

– Pretzel crumbs again, I am sure. Just shake it upside down.

– No it’s just the “Z” key.

– What about it?

– I’ve been pressing “control-Z” all morning. The economy is still a mess, the deficit is still huge, we’re still stuck in Iraq and Guantanamo is still open. And now my hand hurts. What gives?

– …

– Can you help?

– I am sorry Mr. President, I am afraid you cannot undo the work of the previous administration that easily.

– Really? Well, how on earth am I going to do it?

– I think it will take a lot more work.

– You’re positive I really can’t use “control-Z”?

– No you can’t.

[UPDATED 2008/11/9: Looks like he is not deterred: “Obama Weighs Quick Undoing of Bush Policy” (New York Times article, November 9, 2008)]

5 Comments

Filed under Everything, Off-topic

First in-depth look at Microsoft’s Oslo and the “M” modeling language

Microsoft’s PDC is taking place this week and more details were shared with the attendees about project Oslo, an effort announced last year to drastically improve the use of models across the application lifecycle. Some code is available (I think the Quadrant code is only for PDC attendees but the Oslo SDK is available to everyone). I am not at PDC, I didn’t see any presentation and I didn’t download any code. But Microsoft has also posted technical details on MSDN and, as far as I am concerned, that’s the most time-effective way to spend a couple of hours learning about Oslo. BTW, the way they share these early design descriptions and agree to make their evolution public is admirable.

For those who only want to spend 10 minutes rather than 2 hours, here are the thoughts that came to my mind as I was reading.

Overall I am somewhat underwhelmed, but not necessarily in a bad way. I know that’s a little schizophrenic so let me explain. After hearing a lot about how Oslo was the next big thing in modeling, it is a little surprising to read a document that can be summarized as “modeling is good, so go create some SQL tables and store them in a RDBMS”. That’s the underwhelming part. But on the other hand, it is more down to earth and practically-minded than I feared. And this is just a summary, in truth there is more than just “use SQL”.

Half of the MSDN documentation basically explains how to use SQL Server to store application models (as of today, the “Developing Models for the Metadata Store” section has only one sub-section, “SQL Server Guidelines for Modeling in the Oslo Repository”). Does this mean that all .NET applications will eventually have to carry with them a deployment of SQL Server 2008 even if they don’t use it to store their operational data? Sure there are a few extra repository services (e.g. finer-grained change auditing) but most Oslo repository services are generic SQL Server features. That section has quite a lot of T-SQL, but it’s pretty readable. It also has a lot of dependencies on following naming conventions, which makes me think that directly creating T-SQL code is not the best approach.

Fortunately there is an alternative, the “M” language. It’s a schema language with a built-in constraint mechanism. I found it more data-oriented (as opposed to resource-oriented) than I expected. Even though “each model is really a set of data structures, relationships, and constraints in serialized form”, there is a lot more support for data structures and constraints than for relationships. It’s just a foreign key. Relationships aren’t items and don’t have any property (or “field” as they’re called in “M”). For example, the relationship between a student’s enrollment record and a given class can’t have, as a property, the grade that the student got for that class (as in the example in section 4.1.4 of the second LC of SML). To model this in “M” you need to create another item (e.g. “courseEnrollment”) and have a relationship from the student to that item and another one from that item to the “course” item itself. Or you can replace the foreign key in the student table with a complex structure that contains both the foreign key and the properties of the relationship. In the end it has the same expressive potential, but in a less streamlined form. I assume Microsoft took this approach for performance reasons.
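
Here is roughly what the intermediate-item workaround looks like. This is M-flavored pseudocode based on my quick read of the pre-alpha documentation, so treat the details (in particular the way I spell the foreign keys) as my assumptions rather than gospel:

    module University {
        type Student {
            Id : Integer32;
            Name : Text;
        }
        type Course {
            Id : Integer32;
            Title : Text;
        }
        // No relationship construct that can carry fields, so the grade lives
        // on an intermediate "courseEnrollment" item that points at both ends
        type CourseEnrollment {
            StudentId : Integer32;  // foreign key into Students (my naming)
            CourseId : Integer32;   // foreign key into Courses (my naming)
            Grade : Text;           // the "property of the relationship"
        }
        Students : Student*;
        Courses : Course*;
        CourseEnrollments : CourseEnrollment*;
    }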

I am going out on a limb here, but it may also be a difference between development-time concerns and operation-time concerns. During development (all the way to testing and packaging), you can still mostly get away with a relatively simple containment structure. You care about the components of your application and how they are packaged inside or next to one another. Sure you care about who calls who outside of the deployment unit but that’s not as core a concern as getting your class dependencies right, your tests in order and your installer configured. In fact, some of the “who calls who” bindings will only be realized at runtime. Oslo, at least so far, clearly seems more focused on development time than operations so support for a relationship-rich model may not seem critical. At operations time, on the other hand, you don’t really care so much about how things were packaged before installation. You care a lot more about who invokes who (especially for modern distributed applications), what the network layout is, what resources a ticket is attached to, etc. The model looks a lot more like a graph with complex relationships. Something that “M” doesn’t seem ideally suited for.

Except for this caveat, I like “M”. It’s not anti-XML (you can represent values as XML if you’d like) but it avoids the “the answer is XML/XSD, what is the question” approach to modeling that is sometimes a little too prevalent. “M” is a much better schema language for IT systems than XSD. I especially like its approach to types. A value is not intrinsically of a given type. A type is a condition that you happen to meet or not at the current time (“take heart little field, you can be anything you want when you grow up”). As such, you can be of several types at the same time. Refined types are potatoes inside potatoes (I am not sure if “M” supports defining types as unions and/or intersections of existing types; for intersection I want to write something like “type NewType : OldType1 where this in OldType2” but there is no “this” in “M”). That approach to types (and the way constraints leverage types) is reminiscent of RDF/OWL. It’s a classification more than a typification, but I understand why they didn’t want to call it “class”. The similarities with RDF/OWL don’t go any further. As I wrote earlier, “M” is very data-focused and not resource-focused: as far as I can tell “M” types are defined syntactically, not semantically (the semantics come as a consequence). For example, I don’t think that you can assert that a given item representing a person is of type “friendly” if there is no corresponding data in the item. You’d have to first create a boolean field called “friendly” and define that those that have that field set to “true” are of type “friendly”. Unlike in RDF/OWL where you can just assert that a subject is “friendly”.
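
To make the “classification rather than typification” point concrete, here is a hedged sketch (M-flavored pseudocode again, not validated against the SDK, and the “Logical” type name is my assumption for the boolean): a refined type is just a condition, and a value that happens to satisfy several conditions is of all those types at once:

    type Person {
        Name : Text;
        Friendly : Logical;  // the data has to exist; you can't just assert friendliness
    }
    // A refined type: any Person whose Friendly field is true. No item is
    // "declared" to be a FriendlyPerson; a value simply meets the condition or not.
    type FriendlyPerson : Person where value.Friendly == true;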

Here is another reason why you can’t have “semantics-only” types: “if you do not specify the type of a field or value, M infers a type for it“. Two things don’t sound quite right to me here. First a detail: the sentence (like others in the doc) talks about “the” type of a field or value, while there can be more than one. More importantly, what’s the point of this feature? How does it help me to have my IRC nickname classified as a post code or as a password just because it happens to be made of a compatible combination of letters and numbers? Maybe it makes sense as a storage optimization, but why does it make sense to expose this to the user?

I also like the way “extents” work. The current description of that feature is pretty limited, but based on how it is used in other parts I think one of its usages is to support a non-OO equivalent to inheritance: create two extents, one for the “superclass” and one for the “subclass” where each only contains the properties/fields defined at that level. You should get both of them in order to have the full picture (all the fields). This is, if I understand it correctly, similar to something I have been (unsuccessfully so far because “XML doesn’t do it this way”) trying to sell to the DMTF CMDBf working group: model inheritance through a set of non-overlapping records rather than dealing with a type hierarchy on record types. It’s not just that it makes relational storage easier (even though it does and that’s probably why “M” does it this way), it also makes your query/select operations a lot easier to specify and implement.
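
If I understand the pattern correctly, it looks something like this (same caveat as above: M-flavored pseudocode, and all the names are mine):

    // "Superclass" extent: only the fields common to all resources
    type Resource {
        Id : Integer32;
        Name : Text;
    }
    Resources : Resource*;

    // "Subclass" extent: only the fields added at that level, keyed by the same Id.
    // Retrieve the record from both extents and merge them to get the full picture.
    type ComputerSystemExtension {
        Id : Integer32;  // same Id as the corresponding Resources record
        CpuCount : Integer32;
        OsName : Text;
    }
    ComputerSystems : ComputerSystemExtension*;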

All in all (and without having gone through the exercise of defining actual models in “M”), it seems like a fine schema language (except that its dependency on the CLR base types is impractical for users outside of the Microsoft universe) but I am not sure if it is beefy enough to be a good IT management metamodel. When the document says that “the Oslo repository provides open and flexible access to the data it contains, which enables direct access to SQL Server views of the underlying data. There are no complex data access layers or APIs” it sounds better than saying “it’s just SQL, so map your model to it and if you want relationships or type inheritance just build it on top of it and quit whining”. But it is an admission of limitation at the same time as a claim of simplicity. I also smell an assumption that LINQ will provide enough hand-holding that non-SQL-savvy developers will be ok. We’ll see.

And then there is MGrammar. Things get a little confusing at that point if you try to relate MGrammar to “M”. Actually, the FAQ states that “the M language consists of three parts: MGraph, MSchema and MGrammar“. This came as a bit of a surprise to me since at that point I had finished reading (not in detail but not too quickly either) the “M” documentation and I hadn’t seen these names mentioned once. Looks like there are some documentation consistency issues here, but that’s hardly surprising considering this is a “hyper-early (pre-alpha)” release as Doug Purdy puts it.

I think that everything that I have referred to as “M” above is MSchema.

MGrammar is something different altogether: it’s the source of the Domain Specific Language (DSL) references we’ve been hearing in connection with Oslo. Technically, MGrammar is a BNF on steroids plus an automatically generated parser for your syntax. Cute. I assume that “M” (i.e. MSchema) is built as an MGrammar-defined DSL but I am not sure why I would care. I am all for reuse and if someone at Microsoft thought that there was something reusable in the way they defined MSchema then it’s a good thing to expose this tool. But where does it come into play in application modeling? The last thing I want is people inventing completely independent languages to describe different domains. I am all for specialization, but a common underlying metamodel is pretty nice when you have to make sense of a whole system. I don’t see any such commonality in MGrammar: as far as I can tell it can be used to define anything from PostScript to sonnets.

From the FAQ, the connection point between MGrammar and MSchema is MGraph (MGrammar languages are parsed into an MGraph, MSchema “builds on MGraph”). That’s nice, but since neither the MSchema nor the MGrammar documentation mentions MGraph I don’t really know what to make of this. David Chappell’s white paper also mentions MSchema and MGrammar but not MGraph. The introduction to the MGrammar Language Specification states that “the data that results from Mg [a.k.a. MGrammar] processing is compatible with Mg’s sister language, The Oslo Modeling Language, M, which provides a SQL-compatible schema and query language that can be used to further process the underlying information“. Compatible? I need more information here. In any case, MGrammar sounds like a fun project for a techie. Who am I to deny Microsoft engineers their fun. Jokes aside, I am probably missing something here seeing how prevalent the DSL message is in all discussions of Oslo. Look at the “highlights of this book” section for the upcoming Oslo/M book from the creators of the “M” language: half of it is about the DSL support and there must be a reason beyond pure geekery. As a side note, if you buy this book you need to understand what little shelf life it will have (I can give you a good price on a lightly-used Hailstorm/”.Net my services” specification book).

Aside from the “M” language itself, there are a few models described in the documentation. One corresponds to BPMN (actually, it says that it “closely aligns with” BPMN 1.1, does this imply that they are not quite the same?). The fact that this model supports imports from Visio is a nice feature.

The Application model (one of the places where you can see “extents” in action) scares me a little bit because I doubt that two different people would use the same “extents” to describe the same software elements. Unless of course that’s being done for them by a pre-defined mapping to their development framework (.NET) enacted by their common development tool (Visual Studio). Which may be the assumption. Yet, the Application model is defined in generic terms, not Microsoft-specific (with a couple of slip-ups, like a WebApplicationModule being defined as a “Web application (module) implemented by IIS or WAS”). Maybe I’ll feel better about the generic applicability of this Application model when I see a full-fledged description (e.g. including relationship semantics as captured in foreign key field names) and an example.

At the bottom of that Application model, there is a lonely “Manageable” type to use if you have a LifecycleState field. This reinforces my impression that despite the claims to link development time with operational time, a lot of the focus to date has been on the former rather than the latter.

The ServiceModel model will look familiar to people who know SCA and is presumably complementary to the WorkflowModel and WorkflowServiceModel models, both of which are directly mapped to Windows Workflow Foundation. I guess that’s where Oslo and Dublin touch one another. I am still glad they are now clearly separated.

There is also a “Quadrant” model which concerns me a bit (it seems to be used to store customization of the Quadrant UI which, while convenient to store straight in the repository, doesn’t strike me as necessarily belonging there).

At this point, the question is not whether Microsoft can build Oslo as it is currently defined. SQL Server 2008 already exists, the usage guidelines aren’t unrealistic and even the “M-to-T-SQL” translation doesn’t seem too hard for Microsoft to implement (the SDK presumably already contains an implementation). I have no doubt they can deliver the system they describe. What I don’t know is whether and how it will be actually useful.

Describing “M” in detail is good. Describing how the repository is implemented on top of SQL Server 2008 is interesting but not so relevant. What I’d like to see is a description of how all this gets used. How does it change the Visual Studio experience? How does it change the installation process/format? How does it support round-tripping between lifecycle stages (e.g. if the developer changes the workflow model, does that original BPMN model get consequently updated)? How does it relate to SLAs and policies? How does it apply to application monitoring? How does it apply to configuration management, to the change process? Etc. In short, what’s the Oslo ecosystem going to be.

These questions aren’t completely ignored in the MSDN documentation, but they are dispensed with in a couple of pages: “Application Development and Lifecycle Improvements” and “IT Operations Benefits“. The former states, for example, that “having the Oslo repository act as a central location for these models also enables a connection between the design and implementation models. This connection helps prevent these models from becoming disconnected during the development process“. Which all sounds good but is just a set of assertions that we have heard many times before (not just from Microsoft). How do “M” and the Oslo repository really make this true?

On the “IT Operations Benefits” side, things are equally blurry: “the Oslo repository can store all types of machine and application configuration data. When consistently updated, this configuration data is a catalog of the current state of all monitored machines and applications in the environment“. Notice the “when consistently updated” hand wave. That’s kind of the crux if you really want to manage across the lifecycle. How will they achieve this consistency? By centralizing all changes through a model-driven controller a la SDM/SML? Through ongoing discovery and/or change notifications? By relying on good old ITIL/MOF processes?

The FAQ declares that “having a common approach does not necessarily correlate to one physical store, but more of a federated model and we believe that some of the new Repository, along with existing investments in both Configuration Management Database (CMDB) and Team Foundation Server (TFS), will form the foundation for a common Microsoft metadata strategy and should be supported across our set of products“. OK, but who is the source of truth for application configuration data? The Oslo repository or the CMDB? Is one the desired state and the other the observed state? Does the CMDB go back to simply being a Service Desk (and if so, does the Oslo repository take on the responsibility to enforce change processes, something that requires more than the security model in Oslo)? If the CMDB is still going to use SML as its metamodel, how do you efficiently federate across such different metamodels as SML (i.e. XSD + schematron + relationships) and “M”?

Lots of questions remaining. What will Oslo have turned into in a few years? A business process design/implementation/monitoring suite (there is a strong workflow feel to many parts)? A generic drag-and-drop programming environment (“the fact that entire features are already described by models means that for a wide array of application and component categories you can start using visual tools to design and implement your components“)? A control center for end to end application management? All of the above? Nothing?

This was just a quick brain dump after reading the documents. Actually, I just realized it somehow got pretty long (congrats if you’re still reading). I hope this post is not too disorganized. Oslo is an interesting effort, but, as Microsoft is first to admit, it’s at a very early stage. I am just surprised that this first release spends so much time on the “how” rather than the “what”. Maybe it’s just because I only got my information from the MSDN documentation. We’ll see when more content from PDC finds its way online. I just want the slides, watching recorded presentations is rarely time-efficient (and you can expect them to require Silverlight).

Speaking of Silverlight, there is this new site on Oslo if you think watching some videos is worth installing Silverlight. Those screenshots don’t motivate me sufficiently.

[UPDATED 2008/10/30: Rather than going to bed I Googled around a bit and found a post by Martin Fowler that answers some of my questions about MGrammar, MGraph and MSchema. MGraph is for instances, MSchema is for types. It answers some plumbing questions, but I still have questions about expected usage and relevance to applications modeling.]

[UPDATED 2008/10/30: I also found the recordings and slides from past PDC sessions. Nice job Microsoft for this quick turnaround time, even if you require Silverlight and/or the PPTX viewer. The sessions are:

  • TL23 A Lap around “Oslo” (Doug Purdy, Vijaye Raji)
  • TL27 “Oslo”: The Language (Don Box, David Langworthy)
  • TL18 “Oslo”: Customizing and Extending the Visual Design Experience (Don Box, Florian Voss)
  • TL28 “Oslo”: Repository and Models (Chris Sells)

The first two sessions (delivered Tuesday) have a replay and slides, the others should, I assume, follow soon.]

[UPDATED 2008/11/3: A nice overview of Oslo by Aaron Skonnard. Unlike most other Oslo articles over the last week, this one tries to paint the (yet-to-be-realized) full picture of the Oslo ecosystem. He mentions that “other Microsoft products and technologies are expected to build on Oslo to provide other runtimes. A few that have already been announced include Microsoft System Center (Operations Manager) and Team Foundation Server (TFS) in Visual Studio Team System”. It’s interesting that he qualifies System Center to be more specifically “operations manager” rather than “configuration manager” but I wouldn’t read too much into it at this point.]

5 Comments

Filed under Application Mgmt, BPM, Business Process, CMDB, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Middleware, Modeling, Oslo, SML, Specs, Tech

CMDBf work in progress

The DMTF CMDBf working group (of which I am part) has released a work in progress version of the CMDBf specification. The changes from the submitted version are minor. It’s mostly a move to the DMTF template. More important (but not drastic) changes should appear in the next release.

Comments Off on CMDBf work in progress

Filed under CMDB Federation, CMDBf, DMTF, Everything, Graph query, Specs, Standards

Dear Microsoft, here is my $0.25 Windows license fee for the month

Pricing is now available for Windows instances on Amazon EC2. More than the technical availability of Windows AMIs, the fact that you get to pay your Windows license fee based on usage is a major change. This is where Microsoft’s announcement goes beyond Oracle’s EC2 announcement at Oracle Open World.

But why stop at EC2 instances? If I can do it there, why can’t I do it at home? Considering how rarely my home desktop is booted to Windows, I would love to pay my Windows license in a metered way. It would basically be limited to time spent editing video and participating in family Skype videoconferences (at least until I manage to get Skype full-screen video to work on Ubuntu).

After all, why only Amazon and not other Cloud providers? And when this happens, I think I may become a cloud provider myself. It would be a small-scale operation. One physical CPU (my desktop). And one user (me). I would meter my usage and dutifully pay Microsoft every month based on the number of hours during which I was running Windows.

How much would that be? Well, a Linux Small Standard Image EC2 instance (the closest thing to my aging desktop) costs $0.10 per hour. The Windows version costs $0.125 per hour, so the Windows license on this machine costs 2.5 cents per hour. On a given month, I don’t use it for more than 10 hours (edit/render one DVD plus a few hours on Skype). That’s 25 cents. Does Microsoft take PayPal? Is the Microsoft tax about to get more progressive?

It will be interesting to see how Microsoft manages to be flexible on server OS licensing (where it has plenty of competition) while keeping its highly profitable (and unfairly front-loaded and restrictive) desktop OS licensing intact.

[UPDATED 2009/1/19: What do you know, here is a Microsoft patent for a “Metered Pay-As-You-Go Computing Experience”, found through this article.]

1 Comment

Filed under Amazon, Business, Everything, Microsoft, Utility computing

A flash of anti-genius

Just this week, I saw two emails that painfully illustrate what is maybe the single worst thing about the way Flash is used on many web sites: the lack of addressability.

The first email was a request for help about finding a specific view in a Flash-based app (one that, I must shamefully admit, was created by Oracle). The answer came quickly, in the form of a screen capture of the Flash app with the multi-level menu open and pointed at the menu entry that produces the requested view. Does anything about this strike you as wrong?

If not, look at the email that arrived the following day. A fellow Oracle employee wanted to advertise for rent an apartment he owns in the new One Rincon Hill tower in San Francisco. In order to provide a link to the floor plan, here is what he had to put in the email:

Plan 5 – see http://www.onerinconhill.com (Lower right “Skip intro”, then follow the link on Residences and Views -> Condominiums -> Tower One -> 1 Bedroom -> Unit 05)

No need to comment on the “skip intro” part. We all know how stupid these “intros” are. BTW, it would be nice if you didn’t have to download the entire Flash file before clicking on “skip”. But this is a “no Flash, no service” site. There is no alternative. Ironic for a tower in which 95% of occupants own an iPhone (the remaining 5% are Android-wielding Google employees, also Flash-challenged).

Even more ironic is the fact that Flash is used on this site to navigate menus (usefulness: zero) and when you get to the floor plan it’s a plain static image. Even though that’s the place where you could provide innovative features in Flash (like having a list of typical furniture items that people can drag and drop to see how to use the space).

You could say, NRA-style, “Flash apps don’t screw up web sites, bad Flash designers screw up web sites”. Sure. It’s not Flash per se, it’s the way it’s used. There is a good case to be made for small areas of web pages being delivered through Flash for increased interactivity (rather than having Flash become a navigation mechanism). But just like with the gun, when you are on the receiving end the difference seems pretty academic.

In a blog entry three and a half years ago (an entry which, in retrospect, is a strong contender for “most obscure, pretentious title”), I recalled hearing Tim Berners-Lee explain in 1999 on the radio how he came up with the idea of a URL: before the Web, people would create small files that describe where to find information in a human-readable way. TBL wrapped this in a consistent format, the URL.

And now, more than 15 years after TBL’s invention, Flash-drunk nitwits are recreating the problem he solved and forcing people to again “create small files that describe where to find information in a human-readable way”. When WS-Addressing decided to deprecate URLs, they at least provided a replacement (the EPR). What is the Flash equivalent going to be? Who wants to write the DARC (Distributable Addressing for Rich Clients) specification?

[UPDATED 2008/10/3: Someone pointed me at the “solution” for this problem: SWFAddress. Looks interesting. Except that this is an extra step that the Flash developer needs to know about and implement. If your Flash developer has that state of mind and level of competency, you’ve already solved 95% of the problem. For starters, s/he won’t create your whole site as a Flash movie, s/he will just use Flash judiciously on the site. I don’t see how SWFAddress is going to help with the thousands of mostly clueless Flash developers who keep banging out Flash-only sites. If you really want a technology solution to the general problem, it would probably require something like a click tracker that generates a trail of crumbs and packages it in a URL. But I don’t think the solution here is a technology solution. It’s more a “get a clue” solution. After all, almost no web site has an empty, pretty-looking entry page anymore (except Flash sites of course), even though those were pretty common at one time.]

4 Comments

Filed under Everything, Flash, Off-topic, Tech

BPM origami

Tom Baeyens (leader of JBoss jBPM) recently wrote a DZone article titled “Seven Forms of Business Process Management With JBoss jBPM”. It’s an interesting article. It does a good job of illustrating the difference between using BPM tools to capture/communicate business intent versus using them to implement asynchronous interactions, especially with Web services.

While it is very much worth reading, the article is not a good reference document for defining/explaining BPM, because it is much too tied to the jBPM product. This happens in two ways, one harmless and one more consequential.

The harmless tie-in is that each flavor of BPM comes with a description of the corresponding jBPM features. Not something you want to see in a generic reference document but Tom is very upfront about the fact that the article is going to cover the jBPM product (it’s even in the title) and about his affiliation with jBPM. No problem there.

What bothers me more is a distinct feeling that the choice of these seven use cases is mainly driven by the availability of these supporting jBPM features. It’s not just that the use cases are illustrated through jBPM features. What we are seeing is the meaning of BPM being redefined to match exactly what jBPM offers.

The most egregious example is use case 6, “thread control language”. Yes, threads are hard. It sounds like Tom and team are planning to make this easier by adding some Erlang-like features in jBPM (at this point the tense changes to the future “we’ll develop a thread control language…” so there aren’t many specifics). Great. Sounds interesting, I am looking forward to seeing it. But if this is BPM then are threads a BPM feature of the various programming languages? Are OS processes a BPM feature? Are multicore CPUs part of BPM while we’re at it?

Use cases 5 (“visual programming”) and 7 (“easy creation of DSLs”) are treading in the same waters. I have the feeling that if jBPM was able to synchronize the podcasts on my MP3 player, we would have had an 8th use case for BPM.

Tom is right to write that “the term BPM is highly overloaded and used for many different things resulting in a lot of confusion”. By adding a few more use cases that nobody, as far as I know, had previously attached to the BPM bandwagon, he is creating more, not less, confusion.

This is especially glaring if you notice that one of the most important BPM use cases, monitoring, is not even mentioned. Maybe it’s just me and my “operations time” bias versus Tom’s “development time” bias. But it seems that he is pulling the BPM blanket a bit far towards his side of the bed (don’t read too much in the analogy, I have never met Tom).

Rather than saying that “these use cases give concrete descriptions for the different interpretations of the term BPM”, it would be more accurate to say “these use cases give concrete descriptions for some of the different interpretations of the term BPM, ignore others and add a few new ones”.

I didn’t learn a lot about BPM, but the article did make me interested in learning more about jBPM, which is probably its primary objective. There seem to be some interesting design goals towards providing a flexible set of orchestration-related tools to application developers. Some of it reminds me of the workflow efforts at Microsoft (some already shipping and some to be revealed at PDC).

1 Comment

Filed under Articles, BPEL, BPM, Business Process, Everything, JBoss, Middleware, Open source

Reviewing DMTF OVF as a “preliminary standard”

OVF 1.0.0d is out as a “preliminary standard” so I gave it a quick read over the weekend. Things have not changed much since the “work in progress” document published this summer, which itself wasn’t a big change from the original specification. As I wrote in the review of the “work in progress”, the DMTF tightened the language of the specification more than it added features.

Since there aren’t too many technical changes (see the end of this post if you’re interested in a few), the interesting discussion is about the marketing of this specification. And boy does it have wings on that front. The level of visibility the specification has received is pretty amazing, especially considering that it doesn’t really do that much technically. But you wouldn’t know it by reading all the announcements about OVF:

  • VMware supports OVF packaging (which version?) with its new VMware Studio.
  • Citrix uses OVF in Kensho to create a platform-agnostic VM management.
  • An Open Source “implementation” of OVF has been created. I put “implementation” between quotes because, since OVF per se doesn’t do much, its implementation is mostly a specialized command line editor for its XML descriptor. It requires a vendor-specific runtime for deployment/activation. This is not a criticism of the open source project BTW, just a statement of fact about the spec.
  • Enomaly lists “OVF format support” on its roadmap for Q1 2009.
  • Microsoft support for OVF in products is supposedly “on the board” which doesn’t mean very much but their overall marketing/PR response to OVF has been surprisingly positive for a standard that they don’t control.

I have criticized the DMTF marketing efforts in the past (“give away pens and key chains”) but I must admit that, to the extent that DMTF had a significant role in promoting OVF adoption (in addition to marketing efforts directly from the vendors), it is a very nice marketing success. Well done, and so much for my cynicism. OVF may also have benefited from all the interest in the general topic of virtualization/cloud standards (the “cloud” association is silly, of course, but as we’ve just seen I am not a marketing genius) and the fact that there isn’t much else to talk about on these topics. So by default OVF becomes the name to put on your “standards” banner. Right place at the right time for the vendors behind it.

Speaking of the vendors, I have no insight into the functioning of the OVF working group, but judging by the specification’s foreword VMware is throwing plenty of resources at DMTF: it employs the working group chair and both co-editors, which is pretty atypical in my experience in standards efforts. People are usually sensitive to appearances of one company having disproportionate influence and try to distribute responsibilities around, at least on paper. Add to this VMware’s recent ramp-up at the DMTF board level. They seem to know what they want. And indeed I can see how the industry leader would want some basic level of standardization, but not too much, which is currently just what OVF offers. We’ll see what’s next in store, if anything.

The specification itself is not marketing-free. According to line 122, “it supports the full range of virtual hard disk formats used for hypervisors today, and it is extensible, which will allow it to accommodate formats that may arise in the future”. Sure, in the same way that my car fully supports passengers of all nationalities (and is extensible enough to transport citizens of yet-to-be-created countries – and maybe even other planets, as long as they come with buttocks to sit on). Since OVF doesn’t really do anything with the virtual hard disk formats, it can “support” pretty much any such format.

Speaking of extensibility, OVF clearly tries to have a good story there. Section 7.3 tries to move away from the usual “hey, it’s XML, you can add elements/attributes anywhere” approach towards the definition of new “sections”. This seems a bit drastic. Time will tell if this is visionary or short-sighted. OVF also plans to move towards “an extension model based on the design of the open content model in XML Schema 1.1”. I am not following XSD 1.1 too closely, but it is wise for OVF to not build too much dependency on it at least for now. And it seems to me that an extension model is not something that you “plan […] to add” but rather something you need to define from the start (sounds like the good old “the next version will add versioning support”, or “no keyboard detected, press F8 to continue”).

But after all this comes what looks to me, from an extensibility perspective, like a big no-no: using (section 8.1) simple strings (e.g. “vmx-4”, “xen-3”) to represent types of virtual systems. You’d think that in 2008 people would have heard about URIs as a way to allow extensibility and prevent name clashes. On further reading, this doesn’t seem to be the fault of OVF as they get this property (vssd:VirtualSystemType) straight out of the politely named DMTF SVP (System Virtualization Profile) specification, itself a preliminary standard. But that’s not much of an excuse because I suspect a large overlap of participation between the two groups and in any case you don’t have to take dependencies on something that’s not right (speaking as someone who authored several specs that took a dependency on WS-Addressing, I shouldn’t give lessons). In any case, I am not on top of all virtualization-related work in DMTF but it seems to me that if they are not going to use URIs then someone should step up and maintain a registry of these identifying “virtual system type” strings.

BTW, when left to its own devices OVF does a better job. For example, it properly uses URIs to identify the virtual disk format (section 5.2).

One of the few new features is the addition of the ovf:bound attribute on virtual hardware element items (section 8.3) to specify whether the item description represents the normal, minimal or maximal allocation. My head spins a bit when trying to apply this metadata to the rasd:Limit property (with ovf:bound=”min” the value of the rasd:Limit element would represent the minimal value of the maximum quantity of resources that will be granted, which takes some parsing effort), but I think it more or less squares out.

The final standard should not differ greatly from this version, so at this point we pretty much know what OVF will be technically. The real question is how it will be used and what, if anything, is going to come to complement it.

[UPDATED 2008/10/14: Good timing. OVF-loving Kensho just launched.]

3 Comments

Filed under DMTF, Everything, IT Systems Mgmt, Manageability, Open source, OVF, Specs, Standards, Tech, Utility computing, Virtualization, VMware

HP Systinet 3.00: now with more significant digits!

My ex-colleagues at HP have just released a new version of the HP Systinet SOA governance product. Congrats guys.

Just a question. What’s up with the “version 3.00” thing? We used to talk about “v1” and “v2”. Then came the whole “Web 2.0” silliness and we all replaced the “v” prefix with a “dot oh” suffix. Fine. But am I now supposed to say “dot oh oh”? And, more important, where will it stop? Is Santa Claus going to be bellowing “dot oh oh oh” later this year?

Or is it the price? Three dollars?

Since versioning is a big part of SOA management, I guess HP wanted to show that they had thought extra hard about the question and reflect this in their product name. In any case, no-one beats Oracle for granular version numbers (for example, JDeveloper 10.1.1.0.0 was released today).

More seriously, I noted with interest mentions of BPEL and SCA support in Systinet 3.00, but I couldn’t find any specifics about what this means on the HP site. Anyone have more info? Also, no mention of GIF in the release announcement?

Comments Off on HP Systinet 3.00: now with more significant digits!

Filed under Application Mgmt, Everything, Governance, HP

Oslo name clarification

Good news. The Oslo code name now specifically refers to Microsoft’s new modeling technologies (the part that I and, presumably, readers of this blog care about) and not the workflow/biztalk stuff that was always mixed in (to the point where some Oslo stories only mentioned workflow).

[UPDATED 2008/10/10: Now this is getting silly. Yet another name change. It’s not “D”, it’s “M”. Whatever. Isn’t the whole point of code names that it doesn’t matter what they are: just pick one and stick with it until you release and then you can come up with the final name? I am not going to do another post just for this like a groupie tracking every news item, however irrelevant, about his/her favorite band. Which, for the record, is not the position I am in wrt Oslo (at least until I know what it really is). Oh, and their graphical modeling tool is now called Quadrant. I am sure the TopQuadrant folks (creators of the TopBraid RDF/OWL/SPARQL editor, which is in a very related domain) will appreciate it.]

2 Comments

Filed under Everything, Microsoft, Modeling, Oslo

Go Big Blue, go! Show them who’s the true friend of the little guy.

IBM’s well-publicized new policy for technology standards is an interesting development. The first image it conjured for cynical me is that of an aging Heavy Metal singer ranting against the rudeness of rap lyrics.

Like Charles, I don’t see IBM as an angel in this domain and yet I too think this is a commendable move on their part. Who better to stop a burglar than a (presumably) reformed burglar anyway? I hope this effort will succeed and I am glad to see that my colleague Jim Melton was involved in the discussion facilitated by IBM and that Trond supports it too.

My experience in standards (mostly from back in my HP days) only covers a small portion of IBM’s technology standards involvement of course. But in all instances, both IBM and Microsoft were key players (either through their participation or through their glaring refusal to participate). And within that sample (which does not include OOXML) my impression is that IBM did indeed play more cleanly than Microsoft.

They also mostly lost, while Microsoft mostly won. Whether there is a causality here is possible but not proven. IBM seems to have an ability to lose by winning: because they assign so many people to standards they wear out everybody else and in the end, they get the final document to be the way they want it (through the normal process, just by being relentless). But the specification is by then so over-engineered, so IBM-like in its approach and so late that it’s usually a Pyrrhic victory. Everybody else has moved on and IBM has on their hands something that’s a standard on paper but that only players in the IBM ecosystem implement. Pushing IBM’s CBE event format in WSDM, over-complicating aspects of WSRF like WS-ServiceGroup and butchering the use of SOAP headers in WS-ResourceTransfer to play nice with WebSphere are, in my mind, such examples. They can’t blame Microsoft for those.

Also, nobody forced them to tango with the devil in that whole WS-* saga. What they are saying now is similar in many ways to what Oracle was saying (about openness and fairness) throughout this decade while Microsoft and IBM were privately defining machine-to-machine interoperability protocols for the enterprise. And they can’t blame standards for the way Microsoft eventually took advantage of them there, because they *chose* to do this outside of standards. I wish I had been a fly on the wall when this conversation took place:

IBM: We’re going to need a neutral DNS name for all these new XML namespaces. It wouldn’t be right to do it under ibm.com or microsoft.com.
Microsoft: You’re right. Hey, I just registered xmlsoap.org last week with the intent to launch a B2B forum for the detergent industry, but if you want we can use it for our Web services specs.
IBM: Man, that’s perfect. Let me give you twenty bucks to help pay the registration.
Microsoft: No, really, no big deal. It’s on me.
IBM: You’re too cool man.

But here I am, IBM-bashing again while the point of this post is to salute and support their attempt at reform. Bad, bad William.

OK, so now for some (hopefully) constructive remarks and suggestions.

I think commentaries and reports on the news have focused too much on the OOXML/ISO story. Sure it’s probably a big part of the motivation. But how much leverage does IBM really have on ISO? Technology standards are just a portion of what ISO does. And it’s not like ISO has much competition anyway, with its de jure international standing. Organizations like the JCP, DMTF and W3C have a lot more to lose if IBM really gets mad at them.

I think it’s clear that Microsoft is the target, but if ISO reform was the main prize, I don’t think IBM would go at it that way. ISO will only change in response to government pressure. If government influence is a necessary step, isn’t it cheaper and more direct for IBM to hire a couple more lobbyists than to try to rally the blogosphere? I think they really want to impact all standards setting organizations at the same time. If ISO happens to be one of those improved in the process, that’s gravy.

IBM calls its report “standards for standards” (at least that’s the file name). I think (and hope) the double entendre is voluntary. It’s not just a matter of raising the (moral and operational) standards of standards organizations. It should also be an occasion to standardize how they work, to make them more similar to one another.

Follow me for a second here. One of the main problems with many organizations is their opacity. They have boards, task forces, strategic committees, etc. Membership in the organization is stratified, based mostly on how much you are willing to pay. I would guess that most organizations couldn’t make ends meet if all member companies paid the “base membership” fee. They need a dozen companies to pay the “leadership” fee to fund their operations. For these companies to agree to the higher price of participation, they need something in return. They need to have more access than the others. Therefore, some level of access must be denied to the base members (and even more to the non-members, which is why many such organizations make almost no information publicly available).

They are not opaque by accident, they are opaque by design because they need to be in order to be funded. There are two ways to fix this. One is to have fewer organizations, such that the fixed costs of running an organization can be more widely spread. But technology is very specialized and there is value in having organizations that are focused and populated by domain experts. The other way is to drastically reduce the cost of running a standards organization. That’s where standardization of standards organizations comes in. If the development processes, IP policies, bylaws and tools were commonly shared among standards organizations, it would be a lot cheaper to run one.

Today, I can start a new open source project for free on Sourceforge. I can pick one of the clearly-identified open source licenses that have been pre-defined. I can use the usual source control, collaboration and bug reporting tools. Not only is it almost free, my users will know right away how to participate. Why isn’t it the same for standards organizations? Or at best only partially so. I know that Kavi is used by many standards organizations. I’ve used their tool both as a DMTF participant and an OASIS participant. And it doesn’t really fit either perfectly, because the processes are slightly different. Ballots are conducted differently, attendance rules are different, document visibility rules are different, roles are different, etc.

It sounds superficial, but I am convinced that a more standardized approach to IP policies, organization bylaws and specification development processes would result in big savings that would open the door to much more transparency.

Oh yeah, you’d also have to drop the boondoggle plenary sessions in resorts all over the world. Painful, I know.

Sure, there are other costs, such as marketing costs. But fully transparent organizations, by making their products more easily accessible to users, have a much lower need for traditional marketing to get the word out, in the same way that open source software companies get most of their marketing via their user communities. Consistency among standards organizations would also make it a lot easier for small companies to participate, since anyone who has learned the rules once can be effective right away in a new organization.

I want to end with a note of caution directed at IBM. You have responsibilities. I hope you realize that at this point, approximately 20% of all airplane seats are occupied by IBM employees going to or coming back from some standards-related meeting. The airlines are hurting already; you can’t pull out all at once. And who will drive all these rental Chevys? Who will eat all the bad sushi in airport food courts and Benihana restaurants?

[UPDATED 2008/10/20: From Tim Bray, another example of IBM losing by winning in standards: “Unfortunately, that spec [XML 1.1] came with excess baggage, namely changed rules on what constitutes white-space, rammed through by IBM for the convenience of their mainframe customers. In any case, XML 1.1 has been widely ignored”.]

3 Comments

Filed under Conference, Everything, Governance, IBM, ISO, Microsoft, OOXML, Open source, Standards

State modeling: party over, go home now.

Is the Northwest weather softening Savas? Is it the food? I just read the “how do I model state? let me count the ways” article that he, Ian Foster, Paul Watson and Mark McKeown published in the September 2008 Communications of the ACM. In the article, the authors attempt to recap (and advance?) the five-year-old debate between the WSRF, HTTP-only and “no convention” (e.g. Zen-SOAP as used in CMIS) approaches to interacting with stateful resources over the Web. If you were anywhere near OGF (then called GGF) around 2003, you know what I am talking about. And you remember how heated the arguments were. There was something about this subject (or maybe it was the people involved) that consistently generated great showmanship (and some bruised egos) in the debates.

With that in mind, reading this article felt like watching a Chinese opera adaptation of Apocalypse Now. Or listening to Heavy Metal with the bass dialed down to zero.

This would have been a very useful article to have in 2003. At the time, it would have clearly framed the question, shown the overwhelming similarities and small differences between the approaches and allowed people to see that there wasn’t actually that much to debate at a fundamental level, but mainly practical considerations to juggle. It may have prevented the quasi-religious war that erupted.

It took a while, but that period of religious war is well over now and we are firmly in the “I’ve heard you, you’ve heard me, do what you want, I’ll do what I want” stage. WSRF people are still doing WSRF (or an equivalent, like WS-RT). REST people are HTTPing right and left. They don’t meet much, but when they do they don’t bump shoulders anymore. And in a way this article is a good illustration of this much more dispassionate environment.

So why am I complaining? Because these fights were fun! At least from a spectator’s point of view, but I suspect that Savas and the gang had plenty of fun too (not sure about the other side who, at least at first, expected “why are you throwing away OGSI” kind of pushback rather than this more radical-sounding response).

I printed this ACM article partly on the off chance that it would provide some new way to look at the problem, one that hadn’t emerged in the past five years. But in retrospect I think my true motivation was that I expected it to capture, like in the old days, some of the entertainment value of a radio talk show. Instead, the excitement level in this article is in the league of NPR’s StarDate astronomy report.

I feel cheated. I haven’t learned anything new and I haven’t been entertained either. This article feels like the end of the party, when the bottles are being put away, the lights are flickering and bad music is playing to nudge the last guests out of the house.

Now that I am grumpy, I guess I have to point out a few highly questionable statements in the article in retribution:

“Fortunately, there seems to be industry support for an integration of the WS-Transfer and WS-RF approaches, based on a WS-Transfer substrate – the WS-ResourceTransfer specification.” See the last two paragraphs of this entry.

“Support for WS-Addressing has since become quasi-universal, and now few find its use objectionable.” Time to pull out the Victor Hugo quote I have been saving for a special occasion: “Et s’il n’en reste qu’un, je serai celui-là” (“and if only one remains, I will be that one”). But frankly, I very much doubt that I am the only one still shaking his head sadly in contemplation of WS-Addressing.

In fact, Stu agrees with me on this (see item #6a in his list of disagreements with the article). Looks like he too was made a bit grumpy by the article, for different reasons.

There is one more debatable choice in this article, and it’s more serious than the two above. It introduces an arbitrary difference between the WS-Transfer and HTTP approaches. Compare the third lines of tables 4 and 5 (retrieving the status of a specific job). According to the article, WS-Transfer gives you the choice between two options:

  • retrieve the entire state of the job and fish for the status field inside of it (the approach in table 4), or
  • “a new operation (for example GetEPRtoPart) is defined that requests that a new state representation be exposed, through a different EPR, representing parts of the original state representation”

The way it works for HTTP, on the other hand, is through an “application-specific convention” (in this example, appending “/status” at the end of the URL).

Except there is no reason why this third approach cannot be used in the WS-Transfer scenario. The article says that “in WS-Transfer, the same effect [accessing a subset of the resource state] can be achieved, but only by defining an auxiliary operation that returns an EPR to a desired subset”. What, pray tell, prevents a WS-Transfer implementation from having an “application-specific convention” just like the HTTP kids next door? It can be at the URL level (e.g. adding “/status”). Or at the EPR reference parameter level. The latter is actually exactly what WS-Management does, using the wsman:SelectorSet header. It does not, as the article claims, define a special operation to get these fine-grained EPRs. It uses an application convention to do so (which, in the case of WS-Management, happens to be “whatever Windows implements”, but that’s a different debate).
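To make the comparison concrete, here is a sketch of what the two conventions could look like side by side. The skeleton follows WS-Transfer and WS-Management (the wsa:Action URI and the wsman:SelectorSet header come straight from those specs), but the resource URI, the selector names and the “/status” path are made up for illustration, and a real envelope would also carry wsa:MessageID, wsa:ReplyTo and friends, omitted here for brevity.

The HTTP convention:

  GET /jobs/1234/status HTTP/1.1
  Host: example.org

The same kind of convention carried in EPR reference parameters, WS-Management style:

  <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
              xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
              xmlns:wsman="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
    <s:Header>
      <wsa:Action>http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</wsa:Action>
      <wsa:To>http://example.org/wsman</wsa:To>
      <wsman:ResourceURI>http://example.org/schemas/job</wsman:ResourceURI>
      <!-- a made-up application-specific convention: these selectors are
           read as "the status part of job 1234", no auxiliary operation -->
      <wsman:SelectorSet>
        <wsman:Selector Name="JobID">1234</wsman:Selector>
        <wsman:Selector Name="Part">status</wsman:Selector>
      </wsman:SelectorSet>
    </s:Header>
    <s:Body/>
  </s:Envelope>

In both cases, client and service have to agree out of band that “status” names the subset being requested. Neither HTTP’s GET nor WS-Transfer’s Get defines that agreement for them; they just give it a place to live.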

By the way, this question of “convention over specification” is where I don’t quite follow Stu (see his point #4 in his aforementioned list of disagreements) and his invocation of the “hypermedia constraint”. I don’t see how any of the four specifications he calls to the rescue (HTML form submission, XForms submission options, Atompub service documents and URI templates) would prevent me from having to have an application-specific agreement about how to retrieve the state (as opposed to another subset of the representation, like the creation date). URI templates, for example, might support how this agreement is expressed (a template like http://example.org/jobs/{jobid}/status, to make one up, tells me how to mint the URI) but it doesn’t replace the agreement that “status” names the part I am after.

The article does a pretty good job at showing how close the alternatives are (even though, as illustrated above, it still portrays them as more different than they need to be). I am not saying it’s a bad article for the Communications of the ACM. I am saying that the Communications of the ACM is a bad medium for one of the few nerdy debates that have genuine entertainment value.

[UPDATED 2008/10/2: Jim Webber, Savas Parastatidis and Ian Robinson provide a full REST example for InfoQ: how to GET a cup of coffee. Includes state considerations discussed in the ACM article.]

2 Comments

Filed under Articles, Everything, Grid, People, REST, SOAP, SOAP header, Specs, Standards, Tech, WS-Management, WS-ResourceTransfer, WS-Transfer

Running Oracle in Amazon’s cloud

The announcement finally came out. Users can now run supported versions of Oracle Enterprise Linux, 11g Database, Fusion Middleware and Enterprise Manager on Amazon EC2 instances. You can create your own AMI or use any of the pre-packaged AMIs with the above-mentioned products. And you don’t have to purchase new licenses; you can transfer existing ones to run on Amazon’s infrastructure.

A separate but related announcement is the possibility to simply and securely back up your databases to Amazon S3 instead of (or in addition to) tape. I hope BNY Mellon will take notice.

The Amazon AWS blog has a good overview of the news. Forrester covers it with a focus on data warehousing.

This comes in addition to the existing SaaS offering (“On Demand”) from Oracle and the SaaS platform (for others to provide SaaS on top of Oracle’s software). It is a major milestone for utility computing.

[UPDATED 2008/9/21: This is the home page for the Oracle Cloud Computing Center and this is the FAQ.]

[UPDATED 2008/9/23: More Cloud love, this time with Intel. I have no insight into that partnership.]

[UPDATED 2009/2/10: More on WebLogic Server on EC2, from Erik Bergenholtz.]

1 Comment

Filed under Amazon, Conference, Everything, IT Systems Mgmt, Linux, Middleware, Oracle, Oracle Open World, SaaS, Trade show, Utility computing, Virtualization

Application management roundtable

The Oracle Enterprise Manager team is inviting customers to an application management roundtable next week in San Francisco. You’ll learn about recent application management acquisitions (Moniforce, ClearApp and e-TEST), product direction and integration strategy. What we’d like to learn in return are your thoughts, needs and requirements for application management. To that end, we’ll need you to RSVP and to prepare a 5-10 minute presentation about your application management challenges.

Here is the agenda:

  • Introduction
  • Customer Presentations on Application Management
  • Oracle’s Approach to Application Management
    • Real User Monitoring (Moniforce)
    • End2end Performance Monitoring (ClearApp)
    • Application Quality Management (e-TEST)
  • Breakout Sessions
    • Composite & SOA Application Management
    • E-Business Suite Application Management
    • Siebel Application Management
    • BRM Application Management
    • PeopleSoft Application Management

It will take place at the Four Seasons Hotel (757 Market St) from 9:00AM to 1:00PM (but don’t forget to RSVP before showing up).

You don’t have to be registered for Oracle Open World (OOW) to attend, but of course it’s been timed to be convenient for people who come to OOW.

Speaking of OOW, here is a list of all the sessions about Enterprise Manager from the conference agenda search engine. Also packaged as a nicely-formatted and chronologically-ordered PDF. For those interested in the recent application management acquisitions, check out these sessions:

About Moniforce

  • S298518 (Improve Performance of Your Oracle E-Business Suite and Siebel Applications with Oracle’s Real User Experience Insight)
  • S298536 (Go Beyond Web Analytics: Build Business Intelligence with Oracle Real User Experience Insight)
  • S298516 (How Real User Monitoring Can Improve Application Performance: Go Beyond Web Analytics and Systems Monitoring)

About ClearApp

  • S298534 (Application Transaction Management with Oracle Enterprise Manager: The Key to End-to-End Monitoring)

About e-TEST

  • S298707 (Application Testing Best Practices: Real-World Customer Testimonials)
  • S298706 (Optimizing Application Performance: Application Testing Suite to the Rescue)

About Auptyma

  • S298534 (Application Transaction Management with Oracle Enterprise Manager: The Key to End-to-End Monitoring)
  • S298524 (Application Diagnostics for DBAs: Visibility into Your Application That the Middle-Tier Administrator Cannot Provide You)
  • S298525 (Diagnosing Java Application Issues in Production: Gaining Performance Insight That Even Developers Do Not Have)
  • S300236 (Oracle Enterprise Manager Hands-on Lab: SOA Management and Java Application Diagnostics)

Just for fun, check out Chris Muir’s “10 things we probably won’t see at OOW08”. The scary part is that, of these ten unlikely things, the least unlikely is item #1…

BTW, I’ll be at OOW next week (probably Wednesday and Thursday), so if you plan to be there and would like to meet, let me know.

Comments Off on Application management roundtable

Filed under Application Mgmt, Conference, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Middleware, Oracle, Oracle Open World, Trade show

Last call for SML and SML-IF

The SML working group at W3C has published the “last call” working draft of version 1.1 of the SML and SML-IF (“IF” stands for “interchange format”) specifications. You have until October 3rd to tell them what you think.

With all the Oslo fun, the OMG embrace and the silence from System Center, there are more questions than answers about the use of SML at Microsoft. But the Eclipse COSMOS project (IBM and friends) is, as far as I know, valiantly going forward with the store/validator implementation. Which may or may not be the same codebase as what was used for the recent CMDBf interop demo (I am not sure how the SML and CMDBf implementations in COSMOS are articulated).

The COSMOS group also recently published an overview of SML. It doesn’t try to tell you why you’d want to use SML, but it’s a good and succinct description of what SML is technically (from an XML developer’s perspective).

Comments Off on Last call for SML and SML-IF

Filed under CMDB Federation, CMDBf, Desired State, Everything, IBM, Implementation, IT Systems Mgmt, Mgmt integration, Microsoft, Modeling, Open source, Oslo, SML, Specs, Standards, Tech, W3C

Here be (XML) dragons

Spoiler alert: if you like to learn things the hard way, don’t follow this link. It points to a clear description of all the problems, frustrations, disillusions and “aha!” moments that are ahead of you as you start to use XML and grow into an expert.

If, on the other hand, you like to be fully prepared and informed when you choose a technology and if you don’t mind sacrificing some adventure and excitement in the process, then you owe it to yourself to read Erik Wilde and Robert Glushko’s XML Fever article. Even if you already consider yourself an XML expert. Especially if you do.

I knew I would like it when I read this in the introduction:

Advanced strains of XML fever often take hold after exposure to the proliferation of more complex and esoteric XML-based technologies layered on top of it. These advanced diseases are harder to catch, but they are also harder to remedy because people who have caught these advanced strains tend to congregate with others with the same diseases and they are continually reinfecting each other.

Oh yes they do. And they speak with such authority that they infect others around them. People who don’t even understand these “more complex and esoteric XML-based technologies” end up being convinced of their magical properties and the need to use them.

I am not going to attempt to summarize the article, because it is too tightly packed with great content to be summarized without being butchered. The “tree trauma” section alone could probably save the world billions of dollars in lost productivity if it were widely read. I’ll just quote a few sections to motivate you to go read the whole thing.

Tree tremors. Whereas tree trauma (discussed earlier) is a basic strain of XML fever caused by the various flavors of trees in XML technologies, tree tremors are a more serious condition afflicting victims trying to manage data in XML that is not inherently tree-structured. The most common causes are data models requiring nontree graph structures and document models needing overlapping structures. In both cases, mapping these models to XML’s tree model results in XML structures that cannot conveniently represent the application-level model.

(…)

The choice of schema languages, however, is more often determined by available tool support and acquired habits than by a thorough analysis of what would be the most appropriate language.

(…)

Triple shock. While RDF itself is simple, large datasets easily contain millions of triples (for truly large datasets this can go up to billions), and managing and querying such a big dataset can become a considerable challenge. If the schema of these large datasets is simple, but ontology overkill has set in and it has been reformulated as an ontology, handling this dataset may become considerably harder, without any immediate benefit.

This is true not just for RDF (a graph model that can be serialized in XML) but for any non-tree model that can be serialized in XML, which is to say any model one can think of, including every graph model.

Maybe it would help if the article stated more clearly that it’s OK to serialize such a model as XML (e.g. for transmission) as long as you don’t process it (at the application level) as XML, and as long as it gets accessed through an API and concepts that are aligned with the semantics of the model.

Imagine that you are receiving an RDF dataset over the wire. You could (if your app runs on the network card rather than in the CPU) process it as a bunch of electrical impulses, but that wouldn’t be very convenient. You could process it as a bunch of bits, but that’s still hard. You could process it as a character stream, but that’s not much better. You could process it as XML, but that’s still not great. Or you could process it as RDF triples and be home on time to have dinner with your family. It’s not the fact that it is represented as XML at some point that’s the problem, it’s the fact that your application processes it as XML. Said another way, just because it makes sense to store it or to send it over the network as XML doesn’t mean that you have to process it as XML in your application.
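To illustrate (with made-up URIs, and the triple written in N-Triples notation just because it is compact), here is the same piece of data seen at both levels:

What goes over the wire (RDF/XML):

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:ex="http://example.org/">
    <rdf:Description rdf:about="http://example.org/server42">
      <ex:status>running</ex:status>
    </rdf:Description>
  </rdf:RDF>

What the application should see (one triple):

  <http://example.org/server42> <http://example.org/status> "running" .

The elements and attributes are a transport detail. The same triple could just as well have arrived in a very different XML shape (or with no XML at all), which is why fishing for it with XPath, rather than asking an RDF-level API for it, is asking for trouble.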

There is at least one more problem (not covered by the article) that people will eventually run into. You’d think that XML technologies are a consistent and complementary set. Not true. The lack of consistency is illustrated by the “tree trauma” section of the article. But there is also a complementarity problem, in the sense that there are large gaps between the specifications, as anyone who has tried to serialize an XPath nodeset has found out.
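To make up a small example of that particular gap: given the document

  <job id="1234"><status>running</status></job>

the XPath expression /job/@id | /job/status selects a nodeset made of one attribute node and one element node. No specification tells you how to write that nodeset back out as XML: an attribute node can’t stand on its own as a document and the union has no single root, so every toolkit improvises its own answer.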

As the article points out, all this doesn’t mean that XML is bad or useless. XML technologies can be very useful, but not for all tasks.

3 Comments

Filed under Everything, Graph query, Modeling, Query, RDF, Specs, Standards, Tech, XPath, XQuery

The circus continues…

Here we go again. Yet another institution that “takes the protection of [my] personal information very seriously” wrote to me to let me know that they lost some unencrypted backup tapes with my SSN and everything. In a way, I’d prefer it if they said that they don’t take the protection of my personal information seriously. Because now I have to assume that they are incompetent even at the tasks they take seriously, which presumably also include performing financial transactions (it’s a bank). That they plead dumbness rather than carelessness kind of scares me.

Well, not really. This letter is just damage control, of course, and whatever reassuring verbiage they put in it doesn’t mean anything. Everyone is just playing pretend, which is how this whole “identity theft” problem started (“we’ll pretend that the SSN is confidential information and that we can use it to authenticate people”).

A few months ago I wrote that it is now safe to steal my identity because the credit watch service provided by Fidelity following their similar screw-up (laptop stolen from a car that time) had expired. Of course the new breach comes with two years of credit monitoring, courtesy of the incompetent bank.

So here is yet another reason to not buy credit monitoring services (in addition to the fact that they don’t work and that you can get the same thing for free): it’s only a matter of months before the next breach and the free two years of credit monitoring that will ensue.

2 Comments

Filed under Everything, Identity theft, Off-topic, Security, SSN

Dell is the best friend of Cloud Computing

Dell took quite a beating last month for (unsuccessfully) trying to trademark the term “Cloud Computing”. This has earned them a reputation as a clown in the Cloud Computing community.

I think it’s unfair. In my experience, the most compelling arguments for Cloud Computing come from Dell. Dell doesn’t make the move to Cloud Computing simply desirable, it makes it indispensable.

How? Not with its “Dell Cloud Computing Solutions” consultants. Not with its XS23 Cloud Server.

With a laptop. The Latitude D420. More specifically, the D420 that I am writing on right now.

I have been using laptops as my primary work machine for over 10 years. This one is by far the worst in terms of stability.

For months, I grappled with undiagnosable crashes. A motherboard replacement fixed those (I think). But the machine still fails to hibernate 20% of the time (sometimes even fresh out of a reboot). And the docking/undocking process is still a roll of the dice. It only works more or less reliably if the laptop is hibernated (but going to hibernation itself is not reliable, see above). If the machine is either turned on or in stand-by, all bets are off. And I am not talking about ending up with a messed up screen resolution. I consider that a successful docking. I am talking about blank screens (laptop and monitor), an unresponsive machine and eventually a hard reboot. By now, the colleagues sitting in the nearby offices must have learned quite a few French swear words.

And please don’t blame Windows XP. It’s not perfect but I’ve had some rock-solid Windows XP laptops, that could go through dozens of hibernate/wake-up cycles and not need a reboot until some OS security patch had to be installed. The NC6400 that I left behind when I quit HP was such an example. More stable than my home Linux laptop.

Anytime my Dell crashes, I risk losing data in whatever files were open at the time. I’ve become pretty good at rebuilding a corrupted Thunderbird profile and importing the old emails and filters. I’ve learned to appreciate Firefox’s practice of regularly creating a backup copy of the bookmarks. I know how to set up auto-save in any application that has the feature. My left hand does the “Ctrl-S” motion on my pillow a hundred times each night.

But above all, I have come to realize how good life will be when all my data, configuration and preferences are in the Cloud. When all my emails, documents, bookmarks, contacts, RSS subscriptions, calendar items are safely removed from this productivity-preventing machine. When recovering from another temperamental bout from this enemy (that I still carry home every day) will only be a matter of logging back onto whatever SaaS application I was using.

Dell has made me a true believer in Cloud Computing.

The first draft of this entry was written (on the aforementioned Linux laptop) during the 13 minutes it takes for the chkdsk.exe process to scan an 80GB hard drive after yet another crash.

2 Comments

Filed under Everything, SaaS, Utility computing