Category Archives: Articles

Review of reviews of iPad reviews

Since we’re talking about the third generation of iPads, it seems silly to stop at the “review of reviews” level; we should go meta-to-the-cube. So here’s a review of reviews of iPad reviews. Because no one asked for it.

Forbes and Barron’s reviews of reviews are pretty much indistinguishable from one another and cover the same original reviews (the only difference being that Forbes adds a quick quote from John Gruber). And of course, since they both cater to people who see significance in daily stock prices, they both end by marveling that Apple’s stock flirted with the $600 mark today.

Om Malik’s review of reviews wins the “most obvious laziness” prize (in a category that invites laziness), but if you really want to know ahead of time how many words each review will inflict on you then he’s got the goods.

Daniel Ionescu’s review of reviews for PCWorld is the most readable of the lot and manages to find a narrative flow among the links and quotes.

The CNET review of reviews comes a close second and organizes the piece by feature rather than by reviewer. Which makes it more a “review-based review” than a “review of reviews” if you’re into that kind of distinction.

The Washington Post’s review of reviews just slaps quote after quote and can be safely avoided.

You know what you have to do now, don’t you? No, I don’t mean write something original, are you crazy? I mean produce a review of reviews of reviews of reviews, of course. This is 2012.

1 Comment

Filed under Apple, Articles, Off-topic

Everything is PaaSible

That’s the title of an article I wrote for InfoQ and which went live today.

If you can get past the punny title you’ll read about the following points:

  • In traditional (and IaaS) environments, many available application infrastructure features remain rarely used because of the cost (perceived or real) of adding them to the operational environment.
  • Most PaaS environments of today don’t even let you make use of these features, at any cost, because of constraints imposed by PaaS providers for the sake of simplifying and streamlining their operations.
  • In the future, PaaS will not only make these available but available at a negligible incremental operational cost.
  • Even beyond that, PaaS will make available application services that are, in traditional settings, completely out of scope for the application programmer. Early examples include CDN, DNS and load balancing services offered, for example, by Amazon. An application developer in most traditional data centers would have to jump through endless hoops if she wanted to control these services within the application. I believe that these network-related services are just the low-hanging fruit and many more once-unthinkable infrastructure services will become programmable as part of the application (see the sketch after this list for what that could look like).
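
To make that last point concrete, here is a minimal sketch of application code driving infrastructure services through a PaaS API. Everything below (the endpoint, paths and JSON fields) is hypothetical, invented for illustration rather than taken from any specific provider; Amazon and others expose equivalent operations under their own names.

import requests

# Hypothetical PaaS management endpoint and credentials (illustration only).
PAAS = "https://api.paas-provider.example/v1"
AUTH = {"Authorization": "Bearer <token>"}

# Create a DNS record for a new tenant, from inside the application.
requests.post(f"{PAAS}/dns/records", headers=AUTH, json={
    "zone": "example.com", "name": "tenant42",
    "type": "CNAME", "target": "app-pool.paas-provider.example"})

# Register the instance serving this tenant with a load-balancing pool.
requests.post(f"{PAAS}/loadbalancers/app-pool/members", headers=AUTH, json={
    "instance": "i-1234", "weight": 10})

# Publish the tenant's static content through the CDN.
requests.post(f"{PAAS}/cdn/distributions", headers=AUTH, json={
    "origin": "https://tenant42.example.com/static/", "ttl": 3600})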

PaaS will become less about “hosting” and more about offering application services. In other words, going back to the formula I proposed on Twitter:

Cloud = Hosting + SOA

IaaS is a lot more “hosting” than SOA; PaaS is a lot more “SOA” (application infrastructure services available via APIs) than “hosting”.

You can read the full article for more.

Comments Off on Everything is PaaSible

Filed under API, Application Mgmt, Articles, Cloud Computing, Everything, Manageability, Mgmt integration, Middleware, PaaS, Utility computing

Anthology of blog posts about protocols and data formats

I just finished reading or re-reading a half-dozen great short texts about data formats and protocols, in the XML/RDF space.

I started with this “do we need WADL” post by Joe Gregorio (since the previous entry made me go back to WADL, which is used by Rackspace). Under the guise of a Q&A about WADL, Joe’s post disposes of the notion that IDL-based code generation is any good (of course the reference on this topic is Steve’s Alpine paper, but Joe very elegantly captures the gist of it in a few sentences). He then explains what it really takes to specify a protocol (hint: it’s not just a syntax). This is about WSDL and XSD as much as about WADL.

When I reached the point in Joe’s Q&A where he discusses whether one should ever create a new protocol, I remembered a post on this very topic from Tim Bray, which I easily Googled back to life. Two of them actually, one about why you shouldn’t do it and the other about how to do it since he knows his advice will be ignored. There are so many lessons in these that I won’t even attempt to summarize.

Tim’s second piece then delivered me to this excellent article about the various facets of RDF. It’s six years old but still true. Though if it were written today I expect it would add “graph query language” and possibly even “constraint language” as facets of RDF.

While I am at it, I should add to the list this bird’s-eye view of all the XML obstacles that pedestrians run into (I have highlighted this entry in a previous post).

These are all very well written articles by people who think very clearly about the domain. None of them technically taught me anything I didn’t know before, but they definitely helped me clarify my thoughts (and find the words to explain them to others).

We’re not artists. We’re not scientists. We’re not mathematicians. But there is some beauty in computer protocol design too. These writings are museum pieces, in the “lasting/worthwhile” sense of the term (not the “old/outdated” sense that it often has in the computing world). Don’t rush to read them, they are all several years old and have aged very well. Wait until you have the time to read them carefully.

I didn’t set out to create a best-of compilation of writings about protocols and data formats. I just happened to run into these great entries within a 30-minute period and I was impressed by how much “above average” they all are. Is it luck? Does the topic of computer protocols naturally attract good thinkers and writers? Am I just in a good mood tonight? Who knows.

There must be others, possibly even better. Elliotte Rusty Harold occasionally surfaces one through his not-so-daily “quote of the day”. Suggestions for more articles of this caliber are welcome. A thousand monkeys may not be able to produce Hamlet, but a thousand bloggers may come close to an equivalent of Feynman’s lectures.

[UPDATED 2010/11/12: Over a year later, here’s an addition to this anthology. Stu’s succinct and beautiful explanation of the underlying issues with partial resource update (and REST in general).]

1 Comment

Filed under Articles, Everything, Protocols, RDF, Specs, Standards

With M (Oslo), is Microsoft on the path to reinventing RDF?

I have given up, at least for now, on understanding what Microsoft wants Oslo (and more specifically the “M” part) to be. I used to pull my hair out reading inconsistent articles and interviews about what M tries to be (graphical programming! DSL! IT models! generic parser! application components! workflow! SOA framework! generic data layer! SQL/T-SQL for dummies! JSON replacement! all of the above!). Douglas Purdy makes a valiant 4-part effort (1, 2, 3, 4) but it’s still not crisp enough for my small brain. Even David Chappell, explainer extraordinaire, seems to throw up his hands (“a modeling platform that can be applied in lots of different ways”, which BTW is the most exact, if vague, description I’ve heard). Rather than articles, I now mainly look at the base specifications and technical documents that show what it actually is. That’s what I did when the Oslo SDK first came out last year. A new technical document came out recently, an update to the MGraph Object Model, so I took another look.

And it turns out that MGraph is… RDF. Or rather, “RDF minus entailment”. And with Turtle as the base representation rather than as an add-on.

Look at section 3 (“RDF concepts”) in this table of content from W3C. It describes the core RDF concepts. Keep the first five concepts (sections 3.1 to 3.5) and drop the last one (“3.6: Entailment”). You have MGraph, a graph-oriented object model.

On top of this, the RDF community adds reasoning capabilities with RDF entailment, RDFS, OWL, SWRL, SPIN, etc., and a variety of engines that implement these different levels of reasoning.

Microsoft, on the other hand, seems to ignore that direction. Instead, it focuses on creating a good mapping from this graph object model to programming languages. In two directions:

  • from programming languages to the graph model: they make it easy for you to create a domain-specific language (DSL) that can easily be turned into M instances.
  • from the graph model to programming languages: they make it easy for you to work on these M instances (including storing them) using the .NET technology stack.

So, if Microsoft is indeed reinventing RDF as the title of this entry provocatively suggests, then they are taking an interesting detour on the way. Rather than going straight to “model-based inferencing”, they are first focusing on mapping the core MGraph concepts to programming (by regular developers) and user interactions (with regular users). Something that for a long time had not gotten much attention in the RDF world beyond pointing developers to Jena (though it seemed to have improved over the last few years with companies like TopQuadrant; ironically, the Oslo model browser/editor is code-named “Quadrant”).

Whether the Oslo team sees the inferencing fun as a later addition or something that’s not needed is another question, on which I don’t see any hint at this point.

I hope they eventually get to it. But I like the fact that they cleanly separate the ability to represent and manipulate the graph model from the question of whether instances can be inferred. We could use such a reusable graph representation mechanism. Did CMDBf, for example, really have to create a new graph-oriented metamodel and query language? I failed to convince the group to adopt RDF/SPARQL, but I may have been more successful if there had been a cleanly-separated “static” version of RDF/SPARQL, a way to represent and query a graph independently of whether the edges and nodes in the graph (and their types) are declared or inferred. Instead, the RDF stack has entailment deeply embedded and that’s very scary to many.
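
For the record, this is what such a “static” use of RDF/SPARQL looks like: triples asserted explicitly, then queried as a plain graph, with entailment nowhere in sight. A minimal sketch in Python with rdflib (the namespace and data are mine, echoing the village example below):

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/village/")
g = Graph()
# Assert the triples explicitly; nothing in this graph is inferred.
g.add((EX.Jenn, EX.name, Literal("Jennifer")))
g.add((EX.Jenn, EX.spouse, EX.Rich))
g.add((EX.Rich, EX.name, Literal("Richard")))

# Query the graph exactly as declared: a graph query, no reasoning involved.
q = """PREFIX ex: <http://example.org/village/>
       SELECT ?name WHERE { ?p ex:spouse ?s . ?p ex:name ?name }"""
for row in g.query(q):
    print(row[0])  # Jennifer (only the Jenn->Rich edge was declared)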

But as much as I like this separation, I can’t help squirming when I see the first example in the MGraph document:

// Populate a small village with some people
Villagers => {
  Jenn => Person { Name => 'Jennifer', Age => 28, Spouse => Rich },
  Rich => Person { Name => 'Richard', Age => 26, Spouse => Jenn },
  Charly => Person { Name => 'Charlotte', Age => 12 }
},
HaveSpouses => { Villagers.Rich, Villagers.Jenn }

That last line is an eyesore to anyone who has been anywhere near RDF. I have just declared that Rich and Jenn are one another’s spouse; why do I have to add a line that says that they have spouses? What I want is to say that participation in a “Spouse” relationship entails membership in the “hasSpouse” class. And BTW, I also want to mark the “Spouse” relationship as symmetric so I only have to declare it one way and the inverse can be inferred.
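
Here is roughly what I mean, sketched in Python with rdflib under my own namespace; the hand-rolled loop stands in for what a real OWL reasoner would do with owl:SymmetricProperty:

from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF

EX = Namespace("http://example.org/village/")
g = Graph()
g.parse(format="turtle", data="""
    @prefix ex: <http://example.org/village/> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    ex:spouse a owl:SymmetricProperty .
    ex:Jenn ex:spouse ex:Rich .    # declared one way only
""")

# Stand-in for a reasoner: for every symmetric property, entail the inverse.
for prop in g.subjects(RDF.type, OWL.SymmetricProperty):
    for s, o in list(g.subject_objects(prop)):
        g.add((o, prop, s))

print((EX.Rich, EX.spouse, EX.Jenn) in g)  # True: inferred, never declared

Membership in a “HaveSpouses” class would be derived the same way, from participation in the relationship, rather than declared by hand.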

So maybe I don’t really know what I want on this. I want the graph model to be separated from the inference logic and yet I want the syntactic simplicity that derives from base entailments like the example above. Is June too early to start a Christmas wish list?

While I am at it, can we please stop putting people’s ages in the model rather than their dates of birth? I know it’s just an example, but I see it over and over in so many modeling examples. And it’s so wrong in 99% of cases. It just hurts.
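
The fix costs one line in any language; a trivial Python sketch:

from datetime import date

date_of_birth = date(1982, 3, 14)  # model this: a fact that never changes
today = date.today()
# ...and derive the age, a value that changes every year, when needed:
age = today.year - date_of_birth.year - (
    (today.month, today.day) < (date_of_birth.month, date_of_birth.day))
print(age)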

There are other things about MGraph edges that look strange if you are used to RDF. For example, edges can be labeled or not, as illustrated by the first example of the graph model in the MGraph document.

In this example, “Age” is a labeled edge that points to the atomic node “42”, while the credit score is modeled as a non-atomic node linked from the person via an unlabeled edge. Presumably the “credit score” node is also linked to an atomic node (not shown) that contains the actual score value (e.g. “800”). I can see why one would want to call out the credit score as a node rather than having an edge (labeled “credit score”) that goes to an atomic node containing the actual credit score value (similar to how “age” is handled). For one thing, you may want to attach additional data to that “credit score” node (when it was calculated, which reporting agency provided it, etc.), so it helps to have it be a node. But making this edge unlabeled worries me. Originally you may only think of one possible relationship type between a person and a credit score (the person has a credit score). But others may pop up further down the road, e.g. the person could be a loan agent who orders the credit score but the score is about a customer. So now you create a new edge label (“orders”) to link the loan agent person to the credit score. But what happens to all the code that was written previously and navigates the relationship from the person to the score with the expectation that the score is about the person? Do you think that code was careful to only navigate “unlabeled” edges? Unlikely. Most likely it just grabbed whatever credit score was linked to the person. If that code is applied to a person who happens to also be a loan agent, it might well grab a credit score about someone else that happened to be ordered by the loan agent. These unlabeled edges remind me of the practice of not bothering with a “version” field in the first version of your work because, hey, there is only one version so far.
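
A toy illustration of that trap, with plain Python dicts standing in for graph nodes (all names invented):

# Nodes of a toy graph; edges are (label, target) pairs, None = unlabeled.
alice = {"name": "Alice"}
alices_score = {"value": 800, "agency": "AgencyA"}
customer_score = {"value": 650, "agency": "AgencyB"}
edges = {id(alice): [(None, alices_score)]}

def credit_score_of(person):
    # Written back when "a score linked to a person" had only one meaning:
    # grab whatever credit score node the person is linked to.
    for label, target in edges[id(person)]:
        if "value" in target:
            return target["value"]

print(credit_score_of(alice))  # 800, as intended

# Later, Alice becomes a loan agent who orders a score about a customer:
edges[id(alice)].insert(0, ("orders", customer_score))
print(credit_score_of(alice))  # 650: a score about someone else entirely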

The restriction that a node can have at most one edge with a given label coming out of it is another one that puzzles me. Though it may explain why an unlabeled edge is used for the credit score (since you can get several credit scores for the same person, if you ask different rating agencies). But if unlabeled edges are just a way to free yourself from this restriction then it would be better to remove the restriction rather than work around it. Let’s take the “Spouse” label as an example. For one thing, in some countries/cultures having more than one such edge might be possible. And having several ex-spouses is possible in many places. Why would the “ex-spouse” relationship have to be defined differently from “spouse”? What about children? How is this modeled? Would we be forced to have a chain of edges from parent to 1st child to next sibling to next sibling, etc.? Good luck dealing with half-siblings. And my model may not care so much about capturing the order (especially if the date of birth is already captured anyway). This reminds me of how most XML document formats force element order in places where it is not semantically meaningful, just because of XSD’s bias towards “sequence”.

Having started this entry by declaring that I don’t understand what M tries to be, I really shouldn’t be criticizing its design choices. The “weird” aspects I point out are only weird in the context of a certain usage but they may make perfect sense in the usage that the Oslo team has in mind. So I’ll stop here. The bottom line is that there are traces, in M, of a nice, reusable, graph-oriented data model with strong bridges (in both directions) to programming languages and user interfaces. That is appealing to me. There are also some strange restrictions that puzzle me. We’ll see where this goes (hopefully this article, “Designing Domains and Models Using M”, will soon contain more than “to be submitted” and I can better understand the M approach). In any case, kudos to the team for being so open about their work and the evolution of their design.

8 Comments

Filed under Application Mgmt, Articles, Everything, Graph query, Mgmt integration, Microsoft, Modeling, RDF, Semantic tech

Reality check on Cloud portability

SD Times recently published an interesting article about “cloud interoperability”. It has some well-informed opinions. But, like all Cloud-related discussions, it also suffers from mixing a bunch of things. The word “interoperability” is alternatively applied to the Cloud infrastructure services (in which case this “interoperability” is a way to provide application “portability”) and to the Cloud-hosted applications themselves.

Application-level interoperability (“look, my GAE-hosted app successfully sent an HTTP request to an Azure-hosted app, open the champagne”) is not very new or exciting anymore and is often used as an interoperability smokescreen (hello Salesforce.com). Many of these interop concerns are long solved and the others (like authentication and data migration) need to be solved in ways that don’t care whether the application is hosted in your Silicon Valley garage or near the Columbia river.

Cloud infrastructure compatibility (in other words application portability) is the more interesting discussion. I keep reading that it is needed (“no vendor lock-in, not ever again”) for enterprises to move to the Cloud. Being a natural-born cynic, I always ask myself whether those asking for it are naive (sometimes) or have ulterior motives (e.g. trying to catch up with Amazon by entangling them in the standards net – some of my fellow cynics see the Open Cloud Manifesto as just this).

Because the reality is that, Manifesto or no Manifesto, you are not going to get application portability across IaaS-type Cloud providers. At least for production applications. Sorry. As a consolation prize, you may get some runtime portability such that we’ll be shown nice demos of prototype apps moving from one provider to another (either as applications or as virtual machines). Clap clap until you realize that they left behind their monitoring capabilities, or that their configuration rules don’t validate anything anymore. And that your printer ran out of red ink when printing the latest compliance report. Oops.

Maybe I am biased because they are both my friends and ex-colleagues, but the HP guys make the most sense in the SD Times article. Tim Hall has it right when he suggests “that the industry should focus on specific problems that it is going to solve around deployment and standardized monitoring”. And the other HP Tim, Mr. van Ash, rightly points out that we should “stop promising miracles”, which Forrester’s Jeffrey Hammond echoes, saying that there is a difference between a standard and “plug-and-play in reality”.

Tim Hall uses SQL as an example of a realistic common baseline. J2EE would be another one. They provide a good reality check. Standards are always supposed to prevent vendor lock-in. And there is a need for some of that, of course. But look at the track records. How many applications do you know that are certified and supported on any SQL database, any Unix operating system and any J2EE app server? And yet, standardizing queries on relational data and standardizing an enterprise-class runtime environment for one programming language are pretty constrained scopes in the grand scheme of things. At least compared to all the aspects that you need to standardize to provide real Cloud portability (security, monitoring, provisioning, configuration, language runtime and/or OS, data storage/retrieval, network configuration, integration with local apps, metering/billing, etc). And we’re supposed to put together a nice bundle of standards that will guarantee drag-and-drop portability across all these concerns? In how many lifetimes? By then, Cloud computing will have been replaced by the next big thing (galaxycomputing.com is still available BTW).

Not to mention that this standardization comes hand in hand with constraints on what you can do. That’s why I read Amazon’s Adam Selipsky’s comment that allowing customers to do “whatever they want” is vital as a way to say “get real” to requests for application portability, while allowing him to sound helpful rather than obstructionist.

This doesn’t mean that these standards are not useful. They make application portability possible if not free. They make for much improved productivity through generic tools and reusable developer knowledge. We still need all this.

Here is the best that can realistically happen in the “application portability across IaaS providers” area for at least 10 years:

  • a set of partial standards for small parts of the Cloud computing domain (see list above), many of which already exist.
  • a set of RightScale-like tools that do a lot of the grunt work of mapping/hiding/transforming between providers, with various degrees of success.
  • the need for application providers to certify their applications on Cloud providers one by one anyway and to provide cloning/migration as a feature of the application rather than an infrastructure-level task.

That’s assuming that IaaS providers become a major business, that there remains a difference between service providers and software providers. The other option is that the whole Cloud excitement goes back to SaaS only, that application creators are also hosting providers, that the only resource you get in a “utility” fashion is the application itself. At which point application portability is not a concern anymore and we go back to “only” worrying about data portability and application interoperability, an easier problem and one on which we have come a long way already. If this is what comes to pass, then the challenge of Cloud portability may well be one of the main reasons. Along with the lack of revenue/margin potential for many of the actors in an IaaS world, as my CEO is fond of pointing out.

[UPDATED 2009/4/22: F5’s Lori MacVittie provides a very nice illustration of the same point, in her explanation of why OVF is not a cloud portability silver bullet.]

[UPDATED 2009/6/1: Soon after posting this entry I was contacted by people at SD Times about turning it into a “guest view” article in the June issue. It has just been published. It’s also in the paper version.]

5 Comments

Filed under Amazon, Application Mgmt, Articles, Cloud Computing, Everything, Google App Engine, HP, IT Systems Mgmt, Mgmt integration, People, Portability, Specs, Standards, Utility computing

Open Cloud Manifesto, circa 2004

The mini-scandal of last week was the manifesto-gate. The mini-scandal of this week is shaping up to be the Ulitzer-gate (if you want to make sure not to miss next week’s IT scandal, subscribe to the Register feed; ferreting these out and adding a bass-heavy soundtrack is their specialty).

Turns out I am one of these Ulitzer “unaware authors” through two articles I wrote a while ago for the Web services Journal, a paper publication by Sys-con (based on a request from HP PR) and a blog post I allowed Sys-con to republish. Looks like Ulitzer and Sys-con are one and the same. Three articles, spaced two years apart. That’s enough to earn me a dedicated home page at Ulitzer and a rank of 1,000 among their more than 6,000 authors. Makes you wonder how much the 5,000 “authors” behind me have (unknowingly) produced… Whatever. At least it’s all content that I authorized Sys-con to use, not something that was lifted from my blog as apparently happened to others.

Turns out the oldest of these articles (“From Web Services Management to Utility Computing”, from 2004) is not that different from the recently-published (and amply maligned) Open Cloud Manifesto. I described my article at the time as “an attempt to explain how the different efforts going on in the industry around Web services, grid, SOA management, virtualization, utility computing, <insert your favorite buzzword>, fit together to provide organizations with the flexibility and efficiency they need from their IT in order to thrive.”

It ends with “while it would be easier to develop an end-to-end model specific to one company’s offering, standardization allows the integration of the management capabilities of all the components that compose enterprise services. We must keep the pressure on vendors to deliver modular and composable specifications (for format, function, and protocol) that expose management capabilities of infrastructure services, applications, and business processes in such a way that these capabilities can be composed by the next generation of management applications.”

Sure it has a lot more emphasis on WS-* specs than is compatible with the current zeitgeist, and it uses the now-obsolete term “utility computing” rather than the nebulous alternative currently en vogue, but isn’t the main message there?

Just to be clear, I am not laying pretentious claims of prescience and vision (at least not in this entry). There are plenty of documents (e.g. from the Grid community) that make the same points in more eloquent terms and starting many years prior. It’s just fun to see this link from today’s scandal to the one from last week.

For old times’ sake, here is the content of the 2004 article:

From Web Services Management to Utility Computing
by William Vambenepe

Enterprise services are created by combining infrastructure services, applications, and business processes. To be able to adapt quickly to business changes, enterprise IT must evolve from management of individual resources to management of interrelated services. This will be achieved through the development of composable and modular standards that expose the management capabilities of the building blocks of enterprise services. The Web services platform is an enabler of this transformation: a Web services-based management infrastructure provides a channel that is appropriate for dynamic resource provisioning, allocation, and configuration – often called utility computing.

We can consider this management infrastructure as a four-layered architecture. Starting at the foundation layer, the work on the base Web services infrastructure is far from over. First, until WSDL 2.0 is widely deployed, designers have to compose around the deficiencies of WSDL 1.1, such as the lack of portType inheritance. Second, there is still no standard for referencing Web services. Finally, key specifications such as WSRF (Web Services Resource Framework) and WSN (Web Services Notification), without which people were left to reinvent Web services interfaces to access stateful resources, have only recently reached the standards community. These issues are being resolved and a set of building blocks for accessing resources through an SOA (service-oriented architecture) is shaping up. It is critical that these building blocks be modular and composable to allow incremental adoption and separation of concerns.

Moving from the foundation to the management protocol layer, the OASIS WSDM (Web Services Distributed Management) technical committee, through its MUWS (Management Using Web Services) specification, is the key articulation point between the base Web services architecture and utility computing. Both the IT management community and the Grid community rely on MUWS. It defines how to express and exercise manageability capabilities through Web services, putting in place a management channel that is more interoperable and accessible than ever before.

Next is the modeling layer. Information models need to be composed so that a service can be represented based on the services that it is assembled from, be they peer or infrastructure services. Since these will be described by different models, the management channel (MUWS) needs to be model-agnostic in order to support a model-centric architecture. For example, CIM (Common Information Model) is a model that focuses on concrete resources. The DMTF WS-CIM subgroup must now open CIM to the Web services platform by developing a standard way to expose CIM-modeled resources through MUWS. Other models provide representations for service security, service-level agreements (SLA), etc. Only by composing these models will, for example, an auction service SLA be adequately managed as it depends on a combination of the performance of the servers on which the service runs, the application server that hosts it, the other services (authentication, billing, etc.) that it makes use of, and the business process engine that controls the bidding. Once this model-centric architecture is in place, management actions can be policy-driven through explicit constraints.

Finally, at the top layer, the architecture includes a set of common services for utility computing. They are being defined collaboratively by DMTF (Utility Computing working group) and GGF (OGSA working group).

All the pieces are falling into place but much remains to be done to allow comprehensive management of enterprise services in a model-centric way through Web services standards. While it would be easier to develop an end-to-end model specific to one company’s offering, standardization allows the integration of the management capabilities of all the components that compose enterprise services. We must keep the pressure on vendors to deliver modular and composable specifications (for format, function, and protocol) that expose management capabilities of infrastructure services, applications, and business processes in such a way that these capabilities can be composed by the next generation of management applications. These applications will use this to synchronize business and IT and to capitalize on change.

Comments Off on Open Cloud Manifesto, circa 2004

Filed under Application Mgmt, Articles, Automation, Business Process, Cloud Computing, Everything, IT Systems Mgmt, Mgmt integration, Modeling, Specs, Standards, Utility computing, Virtualization

IT management and Cloud: now some products

Many of us have been thinking (a bit) and talking (a lot) about the relationship between Clouds and good old IT management.  John understands both sides and produced a few good posts (like this one).

Maybe it’s just a coincidence that both Hyperic and CA recently made such announcements. In any case, it gives the impression that time has come for some actual product capabilities in the area of managing Cloud-based systems.

I haven’t investigated either, so keep your slideware shields up, but this is what I read:

From Javier Soltero’s “Announcing HQ 4.0”: “It also provides the first cloud-friendly management agent which allows users to manage cloud based virtual machines securely and reliably from either inside the cloud, or from HQ 4.0 installations inside your datacenter”. John approves.

And at CA World, according to InformationWeek, CA will announce a partnership with Amazon to provide “management capabilities around Amazon’s EC2 utility computing platform, potentially including discovery of software running on EC2 instances, performance monitoring, configuration management, software deployment capabilities and provisioning”.

When someone looks into these two products (and others, soon to follow or already out, that I have missed), it will be interesting to see how these Cloud-friendly capabilities relate to the good old capabilities of management products: “software discovery”, “perf monitoring”, “config management”, “software deployment”, “provisioning”. That all sounds pretty familiar. Is it just a matter of pointing the old tools at an EC2 IP address? Is it all new capabilities, done in a new way? Or, more realistically, where does it land between these extremes? Where do you want them to land? It’s not so obvious.

Utility computing comes with an expectation of additional flexibility (now that is obvious). When tweaking IT management tools to address the domain, do you leave “in datacenter” capabilities the same and branch off to do cool things in the new land? Or do you raise the level of flexibility across the board?

In other words, rather than snickering at them, maybe we should praise IT management vendors for whom the “look, I do Clouds” marketing spiel is just a repackaging of normal IT management features. Because it may mean that they’ve raised the bar on “in datacenter” automation capabilities. These Opsware and BladeLogic acquisitions have to come in somewhere, don’t they?

BTW, both of the announcements above also perpetuate the confusion between providing utility services (CA’s extended SaaS offering, Hyperic’s release of a pre-packaged Hyperic AMI) and the ability to manage Cloud-based systems. It’s all crammed in the same announcement/article because, hey, it’s all Cloud stuff.

Speaking of CA World, if I were there I would go to this session. At least for old times’ sake, and maybe to get some interesting ideas. Hopefully Don will blog about it after he is done presenting later today.

5 Comments

Filed under Amazon, Application Mgmt, Articles, CA, Conference, Everything, IT Systems Mgmt, Open source, Standards, Utility computing

BPM origami

Tom Baeyens (leader of JBoss jBPM) recently wrote a DZone article titled “Seven Forms of Business Process Management With JBoss jBPM”. It’s an interesting article. It does a good job of illustrating the difference between using BPM tools to capture/communicate business intent versus using them to implement asynchronous interactions, especially with Web services.

While it is very much worth reading, the article is not a good reference document for defining/explaining BPM, because it is much too tied to the jBPM product. This happens in two ways, one harmless and one more consequential.

The harmless tie-in is that each flavor of BPM comes with a description of the corresponding jBPM features. Not something you want to see in a generic reference document but Tom is very upfront about the fact that the article is going to cover the jBPM product (it’s even in the title) and about his affiliation with jBPM. No problem there.

What bothers me more is a distinct feeling that the choice of these seven use cases is mainly driven by the availability of these supporting jBPM features. It’s not just that the use cases are illustrated through jBPM features. What we are seeing is the meaning of BPM being redefined to match exactly what jBPM offers.

The most egregious example is use case 6, “thread control language”. Yes, threads are hard. It sounds like Tom and team are planning to make this easier by adding some Erlang-like features to jBPM (at this point the tense changes to the future, “we’ll develop a thread control language…”, so there aren’t many specifics). Great. Sounds interesting, I am looking forward to seeing it. But if this is BPM, then are threads a BPM feature of the various programming languages? Are OS processes a BPM feature? Are multicore CPUs part of BPM while we’re at it?

Use cases 5 (“visual programming”) and 7 (“easy creation of DSLs”) are treading in the same waters. I have the feeling that if jBPM was able to synchronize the podcasts on my MP3 player, we would have had an 8th use case for BPM.

Tom is right to write that “the term BPM is highly overloaded and used for many different things resulting in a lot of confusion”. By adding a few more use cases that nobody, as far as I know, had previously attached to the BPM bandwagon, he is creating more, not less, confusion.

This is especially glaring if you notice that one of the most important BPM use cases, monitoring, is not even mentioned. Maybe it’s just me and my “operations time” bias versus Tom’s “development time” bias. But it seems that he is pulling the BPM blanket a bit far towards his side of the bed (don’t read too much in the analogy, I have never met Tom).

Rather than saying that “these use cases give concrete descriptions for the different interpretations of the term BPM”, it would be more accurate to say “these use cases give concrete descriptions for some of the different interpretations of the term BPM, ignore others and add a few new ones”.

I didn’t learn a lot about BPM, but the article did make me interested in learning more about jBPM, which is probably its primary objective. There seem to be some interesting design goals towards providing a flexible set of orchestration-related tools to application developers. Some of it reminds me of the workflow efforts at Microsoft (some already shipping and some to be revealed at PDC).

1 Comment

Filed under Articles, BPEL, BPM, Business Process, Everything, JBoss, Middleware, Open source

State modeling: party over, go home now.

Is the Northwest weather softening Savas? Is it the food? I just read the “how do I model state? let me count the ways” article that he, Ian Foster, Paul Watson and Mark McKeown published in the September 2008 Communications of the ACM. In the article, the authors attempt to recap (and advance?) the five-year-old debate between the WSRF, HTTP-only and “no convention” (e.g. Zen-SOAP as used in CMIS) approaches to interacting with stateful resources over the Web. If you were anywhere near OGF (then called GGF) around 2003, you know what I am talking about. And you remember how heated the arguments were. There was something about this subject (or maybe it was the people involved) that consistently generated great showmanship (and some bruised egos) in the debates.

With that in mind, reading this article felt like watching a Chinese opera adaptation of Apocalypse Now. Or listening to Heavy Metal with the bass dialed down to zero.

This would have been a very useful article to have in 2003. At the time, it would have clearly framed the question, shown the overwhelming similarities and small differences between the approaches and allowed people to see that there wasn’t actually that much to debate at a fundamental level, but mainly practical considerations to juggle. It may have prevented the quasi-religious war that erupted.

It took a while, but that period of religious war is well over now and we are firmly in the “I’ve heard you, you’ve heard me, do what you want I’ll do what I want” stage. WSRF people are still doing WSRF (or equivalent like WSRT). REST people are HTTPing right and left. They don’t meet much but when they do they don’t bump shoulders anymore. And in a way this article is a good illustration of this much more dispassionate environment.

So why am I complaining? Because these fights were fun! At least from a spectator’s point of view, but I suspect that Savas and the gang had plenty of fun too (not sure about the other side who, at least at first, expected “why are you throwing away OGSI” kind of pushback rather than this more radical-sounding response).

I printed this ACM article partly on the off chance that it would provide some new way to look at the problem, one that hadn’t emerged in the past five years. But in retrospect I think my true motivation was that I expected it to capture, like in the old days, some of the entertainment value of a radio talk show. Instead, the excitement level in this article is in the league of NPR’s StarDate astronomy report.

I feel cheated. I haven’t learned anything new and I haven’t been entertained either. This article feels like the end of the party, when the bottles are being put away, the lights are flickering and bad music is playing to nudge the last guests out of the house.

Now that I am grumpy, I guess I have to point out a few highly questionable statements in the article in retribution:

“Fortunately, there seems to be industry support for an integration of the WS-Transfer and WS-RF approaches, based on a WS-Transfer substrate – the WS-ResourceTransfer specification.” See the last two paragraphs of this entry.

“Support for WS-Addressing has since become quasi-universal, and now few find its use objectionable.” Time to pull out the Victor Hugo quote I have been saving for a special occasion: “Et s’il n’en reste qu’un, je serai celui-là” (“and if only one remains, I shall be that one”). But frankly I very much doubt that I am the only one still shaking his head sadly in contemplation of WS-Addressing.

In fact, Stu agrees with me on this (see item #6a in his list of disagreements with the article). Looks like he too was made a bit grumpy by the article, for different reasons.

There is one more debatable choice in this article, and it’s more serious than the two above. It introduces an arbitrary difference between the WS-Transfer and HTTP approaches. Compare the third lines of tables 4 and 5 (retrieving the status of a specific job). According to the article, WS-Transfer gives you the choice between two options:

  • retrieve the entire state of the job and fish for the status field inside of it (the approach in table 4), or
  • “a new operation (for example GetEPRtoPart) is defined that requests that a new state representation be exposed, through a different EPR, representing parts of the original state representation”

The way it works for HTTP, on the other hand is through an “application-specific convention” (in this example, appending “/status” at the end of the URL).

Except there is no reason why this third approach cannot be used in the WS-Transfer scenario. The article says that “in WS-Transfer, the same effect [accessing a subset of the resource state] can be achieved, but only by defining an auxiliary operation that returns an EPR to a desired subset”. What, pray tell, prevents a WS-Transfer implementation from having an “application-specific convention” just like the HTTP kids next door? It can be at the URL level (e.g. adding “/status”). Or at the EPR reference parameter level. The latter is actually exactly what WS-Management does, using the wsman:SelectorSet header. It does not, as the article claims, define a special operation to get these fine-grained EPRs. It uses an application convention to do so (which, in the case of WS-Management, happens to be “whatever Windows implements”, but that’s a different debate).
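
To spell it out, here is a hedged sketch of the two conventions side by side in Python. The job URL is made up; the SOAP envelope is a skeleton modeled on WS-Management’s wsman:SelectorSet (WS-Addressing headers omitted, and the selector name is my invention):

import requests

JOB = "http://example.org/jobs/42"  # hypothetical job resource

# HTTP style: an application-specific convention (append "/status").
status = requests.get(JOB + "/status").text

# WS-Transfer style: the same kind of convention, carried as a selector-like
# EPR reference parameter instead of a URL suffix.
envelope = """<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
    xmlns:wsman="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
  <s:Header>
    <wsman:SelectorSet>
      <wsman:Selector Name="part">status</wsman:Selector>
    </wsman:SelectorSet>
  </s:Header>
  <s:Body/>
</s:Envelope>"""
requests.post(JOB, data=envelope,
              headers={"Content-Type": "application/soap+xml"})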

By the way, this question of “convention over specification” is where I don’t quite follow Stu (see his point #4 in his aforementioned list of disagreements) and his invocation of the “hypermedia constraint”. I don’t see how any of the four specifications he calls to the rescue (HTML form submission, XForms submission options, Atompub service documents and URI templates) would prevent me from having to have an application-specific agreement about how to retrieve the state (as opposed to another subset of the representation, like the creation date). URI templates, for example, might support how this agreement is expressed but it doesn’t replace it.

The article does a pretty good job at showing how close the alternatives are (even though, as illustrated above, it still portrays them as more different than they need to be). I am not saying it’s a bad article for the Communications of the ACM. I am saying that the Communications of the ACM is a bad medium for one of the few nerdy debates that have genuine entertainment value.

[UPDATED 2008/10/2: Jim Webber, Savas Parastatidis and Ian Robinson provide a full REST example for InfoQ: how to GET a cup of coffee. Includes state considerations discussed in the ACM article.]

2 Comments

Filed under Articles, Everything, Grid, People, REST, SOAP, SOAP header, Specs, Standards, Tech, WS-Management, WS-ResourceTransfer, WS-Transfer

Various IT management stories

Apparently Coté’s upstairs neighbors were having a party last night and he could not sleep. That’s good for us because as a result he bookmarked a long list of IT systems management stories. Several of those piqued my interest.

2 Comments

Filed under Application Mgmt, Articles, Everything, HP, IT Systems Mgmt, Manageability, Microsoft, Open source, Oracle

Oracle Enterprise Manager in the news

I missed this good review of Oracle Enterprise Manager (OEM) by eWeek’s Cameron Sturdevant that came out almost two months ago. It is “good” in the sense that it is well researched and well written but it is also “good” in the sense that it is a very positive review. The only drawback listed is the price of some of the features. But you have to evaluate these numbers against the productivity gains of your IT management staff. Or, even more compellingly, against the cost of business disruption that can result from insufficient management insight into the applications.

I got to this review through this very nice blog post in which my colleague Chung Wu (a director of product management for OEM) describes step by step the key role that OEM plays in effectively managing Oracle technologies and in allowing a smooth and controlled evolution of the deployed portfolio.

Comments Off on Oracle Enterprise Manager in the news

Filed under Application Mgmt, Articles, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Oracle

Grid-enabled SOA article

Dave Chappell and David Berry have recently published an article in SOA Magazine titled “The Next-Generation, Grid-Enabled Service-Oriented Architecture”. I had unexpectedly gotten a quick overview of this work a few weeks ago, when I ran into Dave Chappell at the Oracle gym (since I was coming out of an early morning swim it took Dave a couple of seconds to recognize me, as I walked through the weight room leaving a trail of water behind me). Even if you are more interested in systems management than middleware, this article is worth your reading time because it describes a class of problems (or rather, opportunities) that cut across middleware and IT management. Namely, providing the best environment for scalable, reliable and flexible SOA applications. In other words, making the theoretical promises of SOA practically achievable in real scenarios. The article mentions “self-healing management and SLA enforcement” and it implies lots of capabilities to automatically provision, configure and manage the underlying elements of the Grid as well as the SOA applications that make use of it. Those are the capabilities that I am now working on as part of the Oracle Enterprise Manager team. And the beauty of doing this at Oracle is that we can work on this hand in hand with people like Dave to make sure that we don’t create artificial barriers between middleware and systems management.

1 Comment

Filed under Articles, Everything, Grid, Grid-enabled SOA, IT Systems Mgmt, Tech

WS-ResourceTransfer article

Network World recently published a “technology update” column I wrote for them on WS-ResourceTransfer. It was supposed to come out soon after the release of WS-ResourceTransfer (in August 2006) but got postponed a few times. In the process, the editors requested that I make some improvements but also made some changes to the article that I hadn’t seen until it was published. The title is from them, for example, as is this statement, which I don’t actually agree with: “Models can be easily translated from one modeling language to another, so the invoker of the model and the service providers don’t need to use the same modeling language. Service Modeling Language, for example, was designed for that purpose.” SML was not designed for the purpose of doing model translation (even though you can of course transform to and from SML) and unfortunately model translation is not always easy. I guess the lesson is that if I had written the article more clearly to start with, they wouldn’t have felt the need to make such modifications.

I think the article is still helpful in describing the potential role of WS-ResourceTransfer at the intersection of SOA and model-based management.

Comments Off on WS-ResourceTransfer article

Filed under Articles, Everything, Standards, Tech, WS-ResourceTransfer

Network World article about SML

I wrote a short article on SML for Network World that was just published on-line and in the paper edition.

1 Comment

Filed under Articles, Everything, Standards

Webcast on management roadmap

Some of the authors of the HP/IBM/CA management roadmap (namely Heather from IBM, Kirk from CA and me) are hosting a Webcast to present the roadmap and answer questions. The Webcast starts at 9:00AM Pacific on Tuesday August 30th. More info about the Webcast and (free) registration at http://www.presentationselect.com/hpinvent/detailsl.asp#977. Talk to you on Tuesday…

Comments Off on Webcast on management roadmap

Filed under Articles, Everything, Standards

HP/IBM/CA roadmap white paper

HP, IBM and CA recently released a white paper describing how we see the different efforts in the area of management for the adaptive enterprise coming together and, more importantly, what else is needed to fulfill the vision. Being a co-author I am arguably more than a little biased, but I recommend the read as an explanatory map of the standards/specifications landscape, from the low levels of the Web services stack all the way up to model transformations and policy-based automated management: http://devresource.hp.com/drc/resources/muwsarch/index.jsp

Comments Off on HP/IBM/CA roadmap white paper

Filed under Articles, Everything, Standards, Tech

Building blocks of an “adaptive enterprise”

Call it “laziness” or “smart reuse”: here is a pointer to a Web services journal opinion piece I wrote a few months back in an attempt to explain how the different efforts going on in the industry around Web services, grid, SOA management, virtualization, utility computing, <insert your favorite buzzword>, fit together to provide organizations with the flexibility and efficiency they need from their IT in order to thrive. This is how it starts:

Enterprise services are created by combining infrastructure services, applications, and business processes. To be able to adapt quickly to business changes, enterprise IT must evolve from management of individual resources to management of interrelated services. [more…]

1 Comment

Filed under Articles, Business, Everything, Tech