Category Archives: Everything

A flash of anti-genius

Just this week, I saw two emails that painfully illustrate what is maybe the single worst thing about the way Flash is used on many web sites: the lack of addressability.

The first email was a request for help finding a specific view in a Flash-based app (one that, I must shamefully admit, was created by Oracle). The answer came quickly, in the form of a screen capture of the Flash app with the multi-level menu open and pointed at the menu entry that produces the requested view. Does anything about this strike you as wrong?

If not, look at the email that arrived the following day. A fellow Oracle employee wanted to advertise for rent an apartment he owns in the new One Rincon Hill tower in San Francisco. In order to provide a link to the floor plan, here is what he had to put in the email:

Plan 5 – see http://www.onerinconhill.com (Lower right “Skip intro”, then follow the link on Residences and Views -> Condominiums -> Tower One -> 1 Bedroom -> Unit 05)

No need to comment on the “skip intro” part. We all know how stupid these “intros” are. BTW, it would be nice if you didn’t have to download the entire Flash file before clicking on “skip”. But this is a “no Flash, no service” site. There is no alternative. Ironic for a tower in which 95% of occupants own an iPhone (the remaining 5% are Android-wielding Google employees, also Flash-challenged).

Even more ironic is the fact that Flash is used on this site to navigate menus (usefulness: zero), yet when you get to the floor plan it’s a plain static image. That’s exactly the place where Flash could provide innovative features (like a list of typical furniture items that people can drag and drop to see how to use the space).

You could say, NRA-style, “Flash apps don’t screw up web sites, bad Flash designers screw up web sites”. Sure. It’s not Flash per se, it’s the way it’s used. There is a good case to be made for small areas of web pages being delivered through Flash for increased interactivity (rather than having Flash become a navigation mechanism). But just like with the gun, when you are on the receiving end the difference seems pretty academic.

In a blog entry three and a half years ago (an entry which, in retrospect, is a strong contender for “most obscure, pretentious title”), I recalled hearing Tim Berners-Lee explain in 1999 on the radio how he came up with the idea of a URL: before the Web, people would create small files that describe where to find information in a human-readable way. TBL wrapped this in a consistent format, the URL.

And now, more than 15 years after TBL’s invention, Flash-drunk nitwits are recreating the problem he solved and forcing people to again “create small files that describe where to find information in a human-readable way”. When WS-Addressing decided to deprecate URLs, they at least provided a replacement (the EPR). What is the Flash equivalent going to be? Who wants to write the DARC (Distributable Addressing for Rich Clients) specification?

[UPDATED 2008/10/3: Someone pointed me at the “solution” for this problem: SWFAddress. Looks interesting. Except that this is an extra step that the Flash developer needs to know about and implement. If your Flash developer has that state of mind and level of competency, you’ve already solved 95% of the problem. For starters, s/he won’t create your whole site as a Flash movie, s/he will just use Flash judiciously on the site. I don’t see how SWFAddress is going to help with the thousands of mostly clueless Flash developers who keep banging out Flash-only sites. If you really want a technology solution to the general problem, it would probably require something like a click tracker that generates a trail of crumbs and packages it in a URL. But I don’t think the solution here is a technology solution. It’s more a “get a clue” solution. After all, almost no web site has an empty, pretty-looking entry page anymore (except Flash sites of course), even though those were pretty common at one time.]

4 Comments

Filed under Everything, Flash, Off-topic, Tech

BPM origami

Tom Baeyens (leader of JBoss jBPM) recently wrote a DZone article titled “Seven Forms of Business Process Management With JBoss jBPM”. It’s an interesting article. It does a good job of illustrating the difference between using BPM tools to capture/communicate business intent versus using them to implement asynchronous interactions, especially with Web services.

While it is very much worth reading, the article is not a good reference document for defining/explaining BPM, because it is much too tied to the jBPM product. This happens in two ways, one harmless and one more consequential.

The harmless tie-in is that each flavor of BPM comes with a description of the corresponding jBPM features. Not something you want to see in a generic reference document but Tom is very upfront about the fact that the article is going to cover the jBPM product (it’s even in the title) and about his affiliation with jBPM. No problem there.

What bothers me more is a distinct feeling that the choice of these seven use cases is mainly driven by the availability of these supporting jBPM features. It’s not just that the use cases are illustrated through jBPM features. What we are seeing is the meaning of BPM being redefined to match exactly what jBPM offers.

The most egregious example is use case 6, “thread control language”. Yes, threads are hard. It sounds like Tom and team are planning to make this easier by adding some Erlang-like features in jBPM (at this point the tense changes to the future, “we’ll develop a thread control language…”, so there aren’t many specifics). Great. Sounds interesting, I am looking forward to seeing it. But if this is BPM, are the threading features of the various programming languages BPM features too? Are OS processes a BPM feature? Are multicore CPUs part of BPM while we’re at it?

Use cases 5 (“visual programming”) and 7 (“easy creation of DSLs”) are treading in the same waters. I have the feeling that if jBPM was able to synchronize the podcasts on my MP3 player, we would have had an 8th use case for BPM.

Tom is right to write that “the term BPM is highly overloaded and used for many different things resulting in a lot of confusion”. By adding a few more use cases that nobody, as far as I know, had previously attached to the BPM bandwagon, he is creating more, not less, confusion.

This is especially glaring if you notice that one of the most important BPM use cases, monitoring, is not even mentioned. Maybe it’s just me and my “operations time” bias versus Tom’s “development time” bias. But it seems that he is pulling the BPM blanket a bit far towards his side of the bed (don’t read too much into the analogy, I have never met Tom).

Rather than saying that “these use cases give concrete descriptions for the different interpretations of the term BPM”, it would be more accurate to say “these use cases give concrete descriptions for some of the different interpretations of the term BPM, ignore others and add a few new ones”.

I didn’t learn a lot about BPM, but the article did make me interested in learning more about jBPM, which is probably its primary objective. There seem to be some interesting design goals towards providing a flexible set of orchestration-related tools to application developers. Some of it reminds me of the workflow efforts at Microsoft (some already shipping and some to be revealed at PDC).

1 Comment

Filed under Articles, BPEL, BPM, Business Process, Everything, JBoss, Middleware, Open source

Reviewing DMTF OVF as a “preliminary standard”

OVF 1.0.0d is out as a “preliminary standard” so I gave it a quick read over the weekend. Things have not changed much since the “work in progress” document published this summer, which itself wasn’t a big change from the original specification. As I wrote in the review of the “work in progress”, the DMTF tightened the language of the specification more than it added features.

Since there aren’t too many technical changes (see the end of this post if you’re interested in a few), the interesting discussion is about the marketing of this specification. And boy does it have wings on that front. The level of visibility the specification has received is pretty amazing, especially considering that it doesn’t really do that much technically. But you wouldn’t know it by reading all the announcements about OVF:

  • VMWare supports OVF packaging (which version?) with its new VMWare Studio.
  • Citrix uses OVF in Kensho to create a platform-agnostic VM management offering.
  • An Open Source “implementation” of OVF has been created. I put “implementation” between quotes because, since OVF per se doesn’t do much, its implementation is mostly a specialized command line editor for its XML descriptor. It requires a vendor-specific runtime for deployment/activation. This is not a criticism of the open source project BTW, just a statement of fact about the spec.
  • Enomaly lists “OVF format support” on its roadmap for Q1 2009.
  • Microsoft support for OVF in products is supposedly “on the board” which doesn’t mean very much but their overall marketing/PR response to OVF has been surprisingly positive for a standard that they don’t control.

I have criticized the DMTF marketing efforts in the past (“give away pens and key chains”) but I must admit that, to the extent that DMTF had a significant role in promoting OVF adoption (in addition to marketing efforts directly from the vendors), it is a very nice marketing success. Well done, and so much for my cynicism. OVF may also have benefited from all the interest in the general topic of virtualization/cloud standards (the “cloud” association is silly, of course, but as we’ve just seen I am not a marketing genius) and the fact that there isn’t much else to talk about on these topics. So by default OVF becomes the name to put on your “standards” banner. Right place at the right time for the vendors behind it.

Speaking of the vendors, I have no insight into the functioning of the OVF working group, but judging by the specification’s foreword VMware is throwing plenty of resources at DMTF: it employs the working group chair and both co-editors, which is pretty atypical in my experience in standards efforts. People are usually sensitive to appearances of one company having disproportionate influence and try to distribute responsibilities around, at least on paper. Add to this VMWare’s recent ramp-up at the DMTF board level. They seem to know what they want. And indeed I can see how the industry leader would want some basic level of standardization, but not too much, which is currently just what OVF offers. We’ll see what’s next in store, if anything.

The specification itself is not marketing-free. According to line 122, “it supports the full range of virtual hard disk formats used for hypervisors today, and it is extensible, which will allow it to accommodate formats that may arise in the future”. Sure, in the same way that my car fully supports passengers of all nationalities (and is extensible enough to transport citizens of yet-to-be created countries – and maybe even other planets, as long as they come with buttocks to sit on). Since OVF doesn’t really do anything with the virtual hard disk formats, it can “support” pretty much any such format.

Speaking of extensibility, OVF clearly tries to have a good story there. Section 7.3 tries to move away from the usual “hey, it’s XML, you can add elements/attributes anywhere” approach towards the definition of new “sections”. This seems a bit drastic. Time will tell if this is visionary or short-sighted. OVF also plans to move towards “an extension model based on the design of the open content model in XML Schema 1.1”. I am not following XSD 1.1 too closely, but it is wise for OVF to not build too much dependency on it at least for now. And it seems to me that an extension model is not something that you “plan […] to add” but rather something you need to define from the start (sounds like the good old “the next version will add versioning support”, or “no keyboard detected, press F8 to continue”).
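
To illustrate the section-based approach, here is a rough sketch of what a vendor-defined section could look like (most namespace declarations omitted; the acme names are made up, the ovf:required flag and the Info child are how I read the spec):

  <!-- A vendor-defined extension section. ovf:required="false" tells a
       consumer that doesn't understand this section that it can skip it. -->
  <acme:LicensingSection ovf:required="false"
      xmlns:acme="http://acme.example.com/ovf/licensing">
    <Info>ACME-specific licensing hints</Info>
    <acme:LicenseServer>license.acme.example.com</acme:LicenseServer>
  </acme:LicensingSection>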

But after all this comes what looks to me, from an extensibility perspective, like a big no-no: using (section 8.1) simple strings (e.g. “vmx-4”, “xen-3”) to represent types of virtual systems. You’d think that in 2008 people would have heard about URIs as a way to allow extensibility and prevent name clashes. On further reading, this doesn’t seem to be the fault of OVF as they get this property (vssd:VirtualSystemType) straight out of the politely named DMTF SVP (System Virtualization Profile) specification, itself a preliminary standard. But that’s not much of an excuse because I suspect a large overlap of participation between the two groups and in any case you don’t have to take dependencies on something that’s not right (speaking as someone who authored several specs that took a dependency on WS-Addressing, I shouldn’t give lessons). That said, I am not on top of all virtualization-related work in DMTF, but it seems to me that if they are not going to use URIs then someone should step up and maintain a registry of these identifying “virtual system type” strings.

BTW, when left to its own devices OVF does a better job. For example, it properly uses URIs to identify the virtual disk format (section 5.2).
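
To make the contrast concrete, here is a rough side-by-side sketch of the two identification styles in an OVF descriptor (simplified, namespace declarations omitted, values made up):

  <!-- Disk format: identified by a URI, so anyone can mint new ones
       without clashing (the URI value below is illustrative) -->
  <Disk ovf:diskId="disk1" ovf:capacity="10737418240"
        ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#sparse"/>

  <!-- Virtual system type: identified by a bare string, with no
       registry and no namespace to prevent clashes -->
  <System>
    <vssd:VirtualSystemType>vmx-4</vssd:VirtualSystemType>
  </System>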

One of the few new features is the addition of the ovf:bound attribute on virtual hardware element items (section 8.3) to specify whether the item description represents the normal, minimal or maximal allocation. My head spins a bit when trying to apply this metadata to the rasd:Limit property (with ovf:bound=”min” the value of the rasd:Limit element would represent the minimal value of the maximum quantity of resources that will be granted, which takes some parsing effort), but I think it more or less squares out.
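
For the record, here is my rough reading of what such an item would look like (a sketch, with made-up values); the comment spells out the double negative that makes my head spin:

  <!-- With ovf:bound="min", rasd:Limit is the smallest acceptable value
       for the cap on this resource: the minimum of a maximum. -->
  <Item ovf:bound="min">
    <rasd:ElementName>Memory</rasd:ElementName>
    <rasd:ResourceType>4</rasd:ResourceType>
    <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
    <rasd:Limit>512</rasd:Limit>
  </Item>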

The final standard should not differ greatly from this version, so at this point we pretty much know what OVF will be technically. The real question is how it will be used and what, if anything, is going to come to complement it.

[UPDATED 2008/10/14: Good timing. OVF-loving Kensho just launched.]

3 Comments

Filed under DMTF, Everything, IT Systems Mgmt, Manageability, Open source, OVF, Specs, Standards, Tech, Utility computing, Virtualization, VMware

HP Systinet 3.00: now with more significant digits!

My ex-colleagues at HP have just released a new version of the HP Systinet SOA governance product. Congrats guys.

Just a question. What’s up with the “version 3.00” thing? We used to talk about “v1” and “v2”. Then came the whole “Web 2.0” silliness and we all replaced the “v” prefix with a “dot oh” suffix. Fine. But am I now supposed to say “dot oh oh”? And, more important, where will it stop? Is Santa Claus going to be bellowing “dot oh oh oh” later this year?

Or is it the price? Three dollars?

Since versioning is a big part of SOA management, I guess HP wanted to show that they had thought extra hard about the question and reflect this in their product name. In any case, no-one beats Oracle for granular version numbers (for example, JDeveloper 10.1.1.0.0 was released today).

More seriously, I noted with interest mentions of BPEL and SCA support in Systinet 3.00, but I couldn’t find any specifics about what this means on the HP site. Does anyone have more info? Also, no mention of GIF in the release announcement?

Comments Off on HP Systinet 3.00: now with more significant digits!

Filed under Application Mgmt, Everything, Governance, HP

Oslo name clarification

Good news. The Oslo code name now specifically refers to Microsoft’s new modeling technologies (the part that I and, presumably, readers of this blog care about) and not the workflow/biztalk stuff that was always mixed in (to the point where some Oslo stories only mentioned workflow).

[UPDATED 2008/10/10: Now this is getting silly. Yet another name change. It’s not “D”, it’s “M”. Whatever. Isn’t the whole point of code names that it doesn’t matter what they are: just pick one and stick with it until you release and then you can come up with the final name? I am not going to do another post just for this like a groupie tracking every news item, however irrelevant, about his/her favorite band. Which, for the record, is not the position I am in wrt Oslo (at least until I know what it really is). Oh, and their graphical modeling tool is now called Quadrant. I am sure the TopQuadrant folks (creators of the TopBraid RDF/OWL/SPARQL editor, which is in a very related domain) will appreciate.]

2 Comments

Filed under Everything, Microsoft, Modeling, Oslo

Go Big Blue, go! Show them who’s the true friend of the little guy.

IBM’s well-publicized new policy for technology standards is an interesting development. The first image it conjured for cynical me is that of an aging Heavy Metal singer ranting against the rudeness of rap lyrics.

Like Charles, I don’t see IBM as an angel in this domain and yet I too think this is a commendable move on their part. Who better to stop a burglar than a (presumably) reformed burglar anyway? I hope this effort will succeed and I am glad to see that my colleague Jim Melton was involved in the discussion facilitated by IBM and that Trond supports it too.

My experience in standards (mostly from back in my HP days) only covers a small portion of IBM’s technology standards involvement of course. But in all instances, both IBM and Microsoft were key players (either through their participation or through their glaring refusal to participate). And within that sample (which does not include OOXML) my impression is that IBM did indeed play more cleanly than Microsoft.

They also mostly lost, while Microsoft mostly won. A causal link here is possible but not proven. IBM seems to have an ability to lose by winning: because they assign so many people to standards they wear out everybody else and at the end, they get the final document to be the way they want it (through the normal process, just by being relentless). But the specification is by then so over-engineered, so IBM-like in its approach and so late that it’s usually a Pyrrhic victory. Everybody else has moved on and IBM has on their hands something that’s a standard on paper but that only players in the IBM ecosystem implement. Pushing IBM’s CBE event format in WSDM, over-complicating aspects of WSRF like WS-ServiceGroup and butchering the use of SOAP headers in WS-ResourceTransfer to play nice with WebSphere are, in my mind, such examples. They can’t blame Microsoft for those.

Also, nobody forced them to tango with the devil in that whole WS-* saga. What they are saying now is similar in many ways to what Oracle was saying (about openness and fairness) throughout this decade while Microsoft and IBM were privately defining machine to machine interoperability protocols for the enterprise. And they can’t blame standards for the way Microsoft eventually took advantage of them there, because they *chose* to do this outside of standards. I wish I had been a fly on the wall when this conversation took place:

IBM: We’re going to need a neutral DNS name for all these new XML namespaces. It wouldn’t be right to do it under ibm.com or microsoft.com.
Microsoft: You’re right. Hey, I just registered xmlsoap.org last week with the intent to launch a B2B forum for the detergent industry, but if you want we can use it for our Web services specs.
IBM: Man, that’s perfect. Let me give you twenty bucks to help pay the registration.
Microsoft: No, really, no big deal. It’s on me.
IBM: You’re too cool man.

But here I am, IBM-bashing again while the point of this post is to salute and support their attempt at reform. Bad, bad William.

OK, so now for some (hopefully) constructive remarks and suggestions.

I think commentaries and reports on the news have focused too much on the OOXML/ISO story. Sure it’s probably a big part of the motivation. But how much leverage does IBM really have on ISO? Technology standards are just a portion of what ISO does. And it’s not like ISO has much competition anyway, with its de jure international standing. Organizations like the JCP, DMTF and W3C have a lot more to lose if IBM really gets mad at them.

I think it’s clear that Microsoft is the target, but if ISO reform was the main prize, I don’t think IBM would go at it that way. ISO will only change in response to government pressure. If government influence is a necessary step, isn’t it cheaper and more direct for IBM to hire a couple more lobbyists than to try to rally the blogosphere? I think they really want to impact all standards setting organizations at the same time. If ISO happens to be one of those improved in the process, that’s gravy.

IBM calls its report “standards for standards” (at least that’s the file name). I think (and hope) the double entendre is voluntary. It’s not just a matter of raising the (moral and operational) standards of standards organizations. It should also be an occasion to standardize how they work, to make them more similar to one another.

Follow me for a second here. One of the main problems with many organizations is their opacity. They have boards, task forces, strategic committees, etc. Membership in the organization is stratified, based mostly on how much you are willing to pay. I would guess that most organizations couldn’t make ends meet if all member companies paid the “base membership” fee. They need a dozen companies to pay the “leadership” fee to fund their operations. For these companies to agree to the higher price of participation, they need something in return. They need to have more access than the others. Therefore, some level of access must be denied to the base members (and even more to the non-members, which is why many such organizations make almost no information publicly available).

They are not opaque by accident, they are opaque by design because they need to be in order to be funded. There are two ways to fix this. One is to have fewer organizations, such that the fixed costs of running an organization can be more widely spread. But technology is very specialized and there is value in having organizations that are focused and populated by domain experts. The other way is to drastically reduce the cost of running a standards organization. That’s where standardization of standards organizations comes in. If the development processes, IP policies, bylaws and tools were commonly shared among standards organizations, it would be a lot cheaper to run one.

Today, I can start a new open source project for free on Sourceforge. I can pick one of the clearly-identified open source licenses that have been pre-defined. I can use the usual source control, collaboration and bug reporting tools. Not only is it almost free, my users will know right away how to participate. Why isn’t it the same for standards organizations? Or only partially so. I know that Kavi is used by many standards organizations. I’ve used their tool both as a DMTF participant and an OASIS participant. And it doesn’t really fit either perfectly because the processes are slightly different. Ballots are conducted differently, attendance rules are different, document visibility rules are different, roles are different, etc.

It sounds superficial, but I am convinced that a more standardized approach to IP policies, organization bylaws and specification development processes would result in big savings that would open the door to much more transparency.

Oh yeah, you’d also have to drop the boondoggle plenary sessions in resorts all over the world. Painful, I know.

Sure there are other costs, such as marketing costs. But fully transparent organizations, by making their products more easily accessible to users, have a much lower need to use traditional marketing to get the word out. In the same way that open source software companies get most of their marketing via their user community. Consistency among standards organizations would also make it a lot easier for small companies to participate since anyone who’s learned the rules once can be effective right away in a new organization.

I want to end with a note of caution directed at IBM. You have responsibilities. I hope you realize that at this point, approximately 20% of all airplane seats are occupied by IBM employees going to or coming back from some standards-related meeting. The airlines are hurting already, you can’t pull out at once. And who will drive all these rental Chevys? Who will eat all the bad sushi in airport food courts and Benihana restaurants?

[UPDATED 2008/10/20: From Tim Bray, another example of IBM losing by winning in standards: “Unfortunately, that spec [XML 1.1] came with excess baggage, namely changed rules on what constitutes white-space, rammed through by IBM for the convenience of their mainframe customers. In any case, XML 1.1 has been widely ignored”.]

3 Comments

Filed under Conference, Everything, Governance, IBM, ISO, Microsoft, OOXML, Open source, Standards

State modeling: party over, go home now.

Is the Northwest weather softening Savas? Is it the food? I just read the “how do I model state? let me count the ways” article that he, Ian Foster, Paul Watson and Mark McKeown published in the September 2008 Communications of the ACM. In the article, the authors attempt to recap (and advance?) the 5-year-old debate between the WSRF, HTTP-only and “no convention” (e.g. Zen-SOAP as used in CMIS) approaches to interacting with stateful resources over the Web. If you were anywhere near OGF (then called GGF) around 2003, you know what I am talking about. And you remember how heated the arguments were. There was something about this subject (or maybe it was the people involved) that consistently generated great showmanship (and some bruised egos) in the debates.

With that in mind, reading this article felt like watching a Chinese opera adaptation of Apocalypse Now. Or listening to Heavy Metal with the bass dialed down to zero.

This would have been a very useful article to have in 2003. At the time, it would have clearly framed the question, shown the overwhelming similarities and small differences between the approaches and allowed people to see that there wasn’t actually that much to debate at a fundamental level, but mainly practical considerations to juggle. It may have prevented the quasi-religious war that erupted.

It took a while, but that period of religious war is well over now and we are firmly in the “I’ve heard you, you’ve heard me, do what you want I’ll do what I want” stage. WSRF people are still doing WSRF (or equivalent like WSRT). REST people are HTTPing right and left. They don’t meet much but when they do they don’t bump shoulders anymore. And in a way this article is a good illustration of this much more dispassionate environment.

So why am I complaining? Because these fights were fun! At least from a spectator’s point of view, but I suspect that Savas and the gang had plenty of fun too (not sure about the other side who, at least at first, expected “why are you throwing away OGSI” kind of pushback rather than this more radical-sounding response).

I printed this ACM article partly on the off chance that it would provide some new way to look at the problem, one that hadn’t emerged in the past five years. But in retrospect I think my true motivation was that I expected it to capture, like in the old days, some of the entertainment value of a radio talk show. Instead, the excitement level in this article is in the league of NPR’s StarDate astronomy report.

I feel cheated. I haven’t learned anything new and I haven’t been entertained either. This article feels like the end of the party, when the bottles are being put away, the lights are flickering and bad music is playing to nudge the last guests out of the house.

Now that I am grumpy, I guess I have to point out a few highly questionable statements in the article in retribution:

“Fortunately, there seems to be industry support for an integration of the WS-Transfer and WS-RF approaches, based on a WS-Transfer substrate – the WS-ResourceTransfer specification.” See the last two paragraphs of this entry.

“Support for WS-Addressing has since become quasi-universal, and now few find its use objectionable.” Time to pull out the Victor Hugo quote I have been saving for a special occasion: “Et s’il n’en reste qu’un, je serai celui-là” (“and if only one remains, I will be that one”). But frankly I very much doubt that I am the only one still shaking his head sadly in contemplation of WS-Addressing.

In fact, Stu agrees with me on this (see item #6a in his list of disagreements with the article). Looks like he too was made a bit grumpy by the article, for different reasons.

There is one more debatable choice in this article, and it’s more serious than the two above. It introduces an arbitrary difference between the WS-Transfer and HTTP approaches. Compare the third lines of tables 4 and 5 (retrieving the status of a specific job). According to the article, WS-Transfer gives you the choice between two options:

  • retrieve the entire state of the job and fish for the status field inside of it (the approach in table 4), or
  • “a new operation (for example GetEPRtoPart) is defined that requests that a new state representation be exposed, through a different EPR, representing parts of the original state representation”

The way it works for HTTP, on the other hand, is through an “application-specific convention” (in this example, appending “/status” at the end of the URL).

Except there is no reason why this third approach cannot be used in the WS-Transfer scenario. The article says that “in WS-Transfer, the same effect [accessing a subset of the resource state] can be achieved, but only by defining an auxiliary operation that returns an EPR to a desired subset”. What, pray tell, prevents a WS-Transfer implementation from having an “application-specific convention” just like the HTTP kids next door? It can be at the URL level (e.g. adding “/status”). Or at the EPR reference parameter level. The latter is actually exactly what WS-Management does, using the wsman:SelectorSet header. It does not, as the article claims, define a special operation to get these fine-grained EPRs. It uses an application convention to do so (which, in the case of WS-Management, happens to be “whatever Windows implements”, but that’s a different debate).
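
For illustration, here is a rough sketch of what such a convention looks like in WS-Management terms (namespace declarations omitted; the resource URI and selector names are made up, only the header element names come from the specs):

  <s:Header>
    <wsa:Action>http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</wsa:Action>
    <wsman:ResourceURI>http://example.org/schemas/job</wsman:ResourceURI>
    <!-- The application-specific convention lives here: these selectors
         scope the EPR down to the status of job 42, with no need for a
         separate GetEPRtoPart-style operation. -->
    <wsman:SelectorSet>
      <wsman:Selector Name="JobID">42</wsman:Selector>
      <wsman:Selector Name="Part">status</wsman:Selector>
    </wsman:SelectorSet>
  </s:Header>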

By the way, this question of “convention over specification” is where I don’t quite follow Stu (see his point #4 in his aforementioned list of disagreements) and his invocation of the “hypermedia constraint”. I don’t see how any of the four specifications he calls to the rescue (HTML form submission, XForms submission options, Atompub service documents and URI templates) would prevent me from having to have an application-specific agreement about how to retrieve the state (as opposed to another subset of the representation, like the creation date). URI templates, for example, might support how this agreement is expressed, but they don’t replace it.

The article does a pretty good job at showing how close the alternatives are (even though, as illustrated above, it still portrays them as more different than they need to be). I am not saying it’s a bad article for the Communications of the ACM. I am saying that the Communications of the ACM is a bad medium for one of the few nerdy debates that have genuine entertainment value.

[UPDATED 2008/10/2: Jim Webber, Savas Parastatidis and Ian Robinson provide a full REST example for InfoQ: how to GET a cup of coffee. Includes state considerations discussed in the ACM article.]

2 Comments

Filed under Articles, Everything, Grid, People, REST, SOAP, SOAP header, Specs, Standards, Tech, WS-Management, WS-ResourceTransfer, WS-Transfer

Running Oracle in Amazon’s cloud

The announcement finally came out. Users can now run supported versions of Oracle Enterprise Linux, 11G Database, Fusion Middleware and Enterprise Manager on Amazon EC2 instances. You can create your own AMI or use any of the pre-packaged AMIs with the above-mentioned products. And you don’t have to purchase new licenses, you can transfer existing ones to run on Amazon’s infrastructure.

A separate but related announcement is the possibility of simply and securely backing up your databases on Amazon S3 instead of (or in addition to) on tape. I hope BNY Mellon will take notice.

The Amazon AWS blog has a good overview of the news. Forrester covers it with a focus on data warehousing.

This comes in addition to the existing SaaS offering (“On Demand”) from Oracle and the SaaS platform (for others to provide SaaS on top of Oracle’s software). It is a major milestone for utility computing.

[UPDATED 2008/9/21: This is the home page for the Oracle Cloud Computing Center and this is the FAQ.]

[UPDATED 2008/9/23: More Cloud love, this time with Intel. I have no insight into that partnership.]

[UPDATED 2009/2/10: More on WebLogic Server on EC2, from Erik Bergenholtz.]

1 Comment

Filed under Amazon, Conference, Everything, IT Systems Mgmt, Linux, Middleware, Oracle, Oracle Open World, SaaS, Trade show, Utility computing, Virtualization

Application management roundtable

The Oracle Enterprise Manager team is inviting customers to an application management roundtable next week in San Francisco. You’ll learn about recent application management acquisitions (Moniforce, ClearApp and e-TEST), product direction and integration strategy. What we’d like to learn in return are your thoughts, needs and requirements for application management. To that end, we’ll need you to RSVP and to prepare a 5-10 minute presentation about your application management challenges.

Here is the agenda:

  • Introduction
  • Customer Presentations on Application Management
  • Oracle’s Approach to Application Management
    • Real User Monitoring (Moniforce)
    • End2end Performance Monitoring (ClearApp)
    • Application Quality Management (e-TEST)
  • Breakout Sessions
    • Composite & SOA Application Management
    • E-Business Suite Application Management
    • Siebel Application Management
    • BRM Application Management
    • PeopleSoft Application Management

It will take place at the Four Seasons Hotel (757 Market St) from 9:00AM to 1:00PM (but don’t forget to RSVP before showing up).

You don’t have to be registered for Oracle Open World (OOW) to attend, but of course it’s been timed to be convenient for people who come to OOW.

Speaking of OOW, here is a list of all the sessions about Enterprise Manager from the conference agenda search engine. Also packaged as a nicely-formatted and chronologically-ordered PDF. For those interested in the recent application management acquisitions, check out these sessions:

About Moniforce

  • S298518 (Improve Performance of Your Oracle E-Business Suite and Siebel Applications with Oracle’s Real User Experience Insight)
  • S298536 (Go Beyond Web Analytics: Build Business Intelligence with Oracle Real User Experience Insight)
  • S298516 (How Real User Monitoring Can Improve Application Performance: Go Beyond Web Analytics and Systems Monitoring)

About ClearApp

  • S298534 (Application Transaction Management with Oracle Enterprise Manager: The Key to End-to-End Monitoring)

About e-TEST

  • S298707 (Application Testing Best Practices: Real-World Customer Testimonials)
  • S298706 (Optimizing Application Performance: Application Testing Suite to the Rescue)

About Auptyma

  • S298534 (Application Transaction Management with Oracle Enterprise Manager: The Key to End-to-End Monitoring)
  • S298524 (Application Diagnostics for DBAs: Visibility into Your Application That the Middle-Tier Administrator Cannot Provide You)
  • S298525 (Diagnosing Java Application Issues in Production: Gaining Performance Insight That Even Developers Do Not Have)
  • S300236 (Oracle Enterprise Manager Hands-on Lab: SOA Management and Java Application Diagnostics)

Just for fun, check out Chris Muir’s 10 things we probably won’t see at OOW08. The scary part is that of these ten unlikely things the least unlikely is item #1…

BTW, I’ll be at OOW next week (probably Wednesday and Thursday) so if you plan to be there and would like to meet let me know.

Comments Off on Application management roundtable

Filed under Application Mgmt, Conference, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Middleware, Oracle, Oracle Open World, Trade show

Last call for SML and SML-IF

The SML working group at W3C has published the “last call” working draft of version 1.1 of the SML and SML-IF (“IF” stands for “interchange format”) specifications. You have until October 3rd to tell them what you think.

With all the Oslo fun, the OMG embrace and the silence from System Center there are more questions than answers about the use of SML at Microsoft. But the Eclipse COSMOS project (IBM and friends) is, as far as I know, valiantly going forward with the store/validator implementation. Which may or may not be the same codebase as what was used for the recent CMDBf interop demo (I am not sure how the SML and CMDBf implementations in COSMOS are articulated).

The COSMOS group also recently published an overview of SML. It doesn’t try to tell you why you’d want to use SML but it’s a good and succinct description of what SML is technically (from an XML developer’s perspective).

Comments Off on Last call for SML and SML-IF

Filed under CMDB Federation, CMDBf, Desired State, Everything, IBM, Implementation, IT Systems Mgmt, Mgmt integration, Microsoft, Modeling, Open source, Oslo, SML, Specs, Standards, Tech, W3C

Here be (XML) dragons

Spoiler alert: if you like to learn things the hard way, don’t follow this link. It points to a clear description of all the problems, frustrations, disillusions and “ah ah!” moments that are ahead of you as you start to use XML and grow into an expert.

If, on the other hand, you like to be fully prepared and informed when you choose a technology and if you don’t mind sacrificing some adventure and excitement in the process, then you owe it to yourself to read Erik Wilde and Robert Glushko’s XML Fever article. Even if you already consider yourself an XML expert. Especially if you do.

I knew I would like it when I read this in the introduction:

Advanced strains of XML fever often take hold after exposure to the proliferation of more complex and esoteric XML-based technologies layered on top of it. These advanced diseases are harder to catch, but they are also harder to remedy because people who have caught these advanced strains tend to congregate with others with the same diseases and they are continually reinfecting each other.

Oh yes they do. And they speak with such authority that they infect others around them. People who don’t even understand these “more complex and esoteric XML-based technologies” end up being convinced of their magical properties and the need to use them.

I am not going to attempt to summarize the article because it is too tightly packed with great content to be summarized without being butchered. The “tree trauma” section alone could probably save the world billions of dollars in lost productivity if it was widely read.  I’ll just quote a few sections to motivate you to go read the whole thing.

Tree tremors. Whereas tree trauma (discussed earlier) is a basic strain of XML fever caused by the various flavors of trees in XML technologies, tree tremors are a more serious condition afflicting victims trying to manage data in XML that is not inherently tree-structured. The most common causes are data models requiring nontree graph structures and document models needing overlapping structures. In both cases, mapping these models to XML’s tree model results in XML structures that cannot conveniently represent the application-level model.

(…)

The choice of schema languages, however, is more often determined by available tool support and acquired habits than by a thorough analysis of what would be the most appropriate language.

(…)

Triple shock. While RDF itself is simple, large datasets easily contain millions of triples (for truly large datasets this can go up to billions), and managing and querying such a big dataset can become a considerable challenge. If the schema of these large datasets is simple, but ontology overkill has set in and it has been reformulated as an ontology, handling this dataset may become considerably harder, without any immediate benefit.

This is true not just for RDF (a graph model that can be serialized in XML) but for any non-tree model that can be serialized in XML (which is to say any model one can think of). Including every graph model.

Maybe it would help if the article stated more clearly that it’s ok to serialize such a model as XML (e.g. for transmission) as long as you don’t process it (at the application level) as XML. As long as it gets accessed using an API and concepts that are aligned with the semantics of the model.

Imagine that you are receiving an RDF dataset over the wire. You could (if your app runs on the network card rather than in the CPU) process it as a bunch of electrical impulses, but that wouldn’t be very convenient. You could process it as a bunch of bits, but that’s still hard. You could process it as a character stream but that’s not that much better. You could process it as XML but that’s still not great. Or you could process it as RDF triples and be home on time to have dinner with your family. It’s not the fact that it is represented as XML at some point that’s the problem, it’s the fact that your application processes it as XML. Said in another way, just because it makes sense to store it or to send it over the network in XML doesn’t mean that you have to process it as XML in your application.
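
Here is a small illustration of why the XML level is the wrong level (namespace declarations omitted, names made up): the two fragments below are different XML trees but the exact same RDF statements about the same resource. An XPath written against the first one comes up empty on the second one; a triple-level API doesn’t even see a difference.

  <!-- Serialization 1: typed node element, property as an attribute -->
  <ex:Person rdf:about="http://example.org/alice" ex:name="Alice"/>

  <!-- Serialization 2: generic rdf:Description with nested elements -->
  <rdf:Description rdf:about="http://example.org/alice">
    <rdf:type rdf:resource="http://example.org/Person"/>
    <ex:name>Alice</ex:name>
  </rdf:Description>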

There is at least one more problem (not covered by the article) that people will eventually run into. You’d think that XML technologies are a consistent and complementary set. Not true. The lack of consistency is illustrated by the “tree trauma” section of the article. But there is also a complementarity problem, in the sense that there are large gaps between the specifications, as anyone who has tried to serialize an XPath nodeset has found out.

As the article points out, all this doesn’t mean that XML is bad or useless. XML technologies can be very useful, but not for all tasks.

3 Comments

Filed under Everything, Graph query, Modeling, Query, RDF, Specs, Standards, Tech, XPath, XQuery

The circus continues…

Here we go again. Yet another institution who “takes the protection of [my] personal information very seriously” wrote to me to let me know that they lost some unencrypted backup tapes with my SSN and everything. In a way I’d prefer if they said that they don’t take the protection of my personal information seriously. Because now I have to assume that they are incompetent even at the tasks they take seriously, which presumably also includes performing financial transactions (it’s a bank). That they plead dumbness rather than carelessness kind of scares me.

Well, not really. This letter is just damage control of course and whatever reassuring verbiage they put in doesn’t mean anything. Everyone is just playing pretend, which is how this whole “identity theft” problem started (“we’ll pretend that the SSN is confidential information and that we can use it to authenticate people”).

A few months ago I wrote that it is now safe to steal my identity because the credit watch service provided by Fidelity following their similar screw-up (laptop stolen from a car that time) had expired. Of course the new breach comes with two years of credit monitoring, courtesy of the incompetent bank.

So here is yet another reason to not buy credit monitoring services (in addition to the fact that they don’t work and that you can get the same thing for free): it’s only a matter of months before the next breach and the free two years of credit monitoring that will ensue.

2 Comments

Filed under Everything, Identity theft, Off-topic, Security, SSN

Dell is the best friend of Cloud Computing

Dell took quite a beating last month for (unsuccessfully) trying to trademark the term “Cloud Computing”. This has earned them a reputation as a clown in the Cloud Computing community.

I think it’s unfair. In my experience, the most compelling arguments for Cloud Computing come from Dell. Dell doesn’t make the move to Cloud Computing simply desirable, it makes it indispensable.

How? Not with its “Dell Cloud Computing Solutions” consultants. Not with its XS23 Cloud Server.

With a laptop. The Latitude D420. More specifically, the D420 that I am writing on right now.

I have been using laptops as my primary work machine for over 10 years. This one is by far the worst in terms of stability.

For months, I grappled with undiagnosable crashes. A motherboard replacement fixed those (I think). But the machine still fails to hibernate 20% of the time (sometimes even fresh out of a reboot). And the docking/undocking process is still a roll of the dice. It only works more or less reliably if the laptop is hibernated (but going to hibernation itself is not reliable, see above). If the machine is either turned on or in stand-by, all bets are off. And I am not talking about ending up with a messed up screen resolution. I consider that a successful docking. I am talking about blank screens (laptop and monitor), an unresponsive machine and eventually a hard reboot. By now, the colleagues sitting in the nearby offices must have learned quite a few French swear words.

And please don’t blame Windows XP. It’s not perfect but I’ve had some rock-solid Windows XP laptops, that could go through dozens of hibernate/wake-up cycles and not need a reboot until some OS security patch had to be installed. The NC6400 that I left behind when I quit HP was such an example. More stable than my home Linux laptop.

Anytime my Dell crashes, I risk losing data in whatever files were open at the time. I’ve become pretty good at rebuilding a corrupted Thunderbird profile and importing the old emails and filters. I’ve learned to appreciate Firefox’s practice of regularly creating a backup copy of the bookmarks. I know how to set up auto-save in any application that has the feature. My left hand does the “Ctrl-S” motion on my pillow a hundred times each night.

But above all, I have come to realize how good life will be when all my data, configuration and preferences are in the Cloud. When all my emails, documents, bookmarks, contacts, RSS subscriptions, calendar items are safely removed from this productivity-preventing machine. When recovering from another temperamental bout from this enemy (that I still carry home every day) will only be a matter of logging back onto whatever SaaS application I was using.

Dell has made me a true believer in Cloud Computing.

The first draft of this entry was written (on the aforementioned Linux laptop) during the 13 minutes it takes for the chkdsk.exe process to scan an 80GB hard drive after yet another crash.

2 Comments

Filed under Everything, SaaS, Utility computing

CMIS, APP, Zen-SOAP and WS-KitchenSink: some data points

The recent release of an early draft of a content management specification (CMIS, for Content Management Interoperability Services) provides an interesting perspective on not just SOAP-versus-REST but also Zen-SOAP versus WS-KitchenSink.

I know little about content management and I have no comment about the specification in that respect. Others have better informed opinions on that aspect.

What is of interest to me, and where I have some experience, is the way the spec-defined operations are bound to underlying protocols. Here is the way the specification is structured: Part I describes the data model and the operations exposed by all the services. Part II comes in two flavors: a REST binding (based on APP, the Atom Publishing Protocol) and a Web services binding (based on SOAP).

This is the first time, to my knowledge, that someone (who presumably isn’t a participant in the SOAP/REST religious war but simply wants to get something done) describes two ways to achieve a real-life task, using either APP or SOAP. I expect that this will attract a lot of attention and provide data in the SOAP versus REST debate.

But this is not what I want to write about. I’ll just point out that the REST binding specification somehow is twice as long as the SOAP binding specification, which I find intriguing but not necessarily meaningful (things are looking good for your bet Sanjiva).

What really caught my attention is how SOAP is used in CMIS. You can hardly tell it’s SOAP. CMIS just defines XML messages to be used as payload for requests and responses. You would be excused for forgetting halfway through your implementation that you’re supposed to wrap those in a SOAP envelope. Headers are a no-show. The specification says it uses SOAP faults but it actually goes out of its way to avoid the existing elements for fault code and fault message and instead invent its own. The only SOAP feature it really uses is MTOM.

Except for the MTOM part, this reminds me of what SOAP was at the beginning of the decade, before any header had been defined (other than those used as illustration in the SOAP specification itself). I want to call it Zen-SOAP, in opposition to the WS-KitchenSink approach in which even simple, synchronous, clear-text, request-response SOAP exchanges somehow get saddled with a half dozen WS-Addressing headers before they’ve even left the gate (did I mention that I don’t like WS-Addressing?).
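
To show what I mean by Zen-SOAP, here is a sketch of the kind of message CMIS traffics in (the operation and element names are illustrative, not lifted from the draft): the envelope is just a thin wrapper, and the WS-KitchenSink version of the same request would bury it under wsa:To, wsa:Action, wsa:ReplyTo, wsa:MessageID and friends.

  <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
    <!-- No headers at all: nothing to see here, move along. -->
    <s:Body>
      <getProperties xmlns="http://example.org/cmis-sketch">
        <objectId>doc-123</objectId>
      </getProperties>
    </s:Body>
  </s:Envelope>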

Another comedian in the WS-KitchenSink theater troupe is the WS-Transfer stack and especially WS-ResourceTransfer (WS-RT). Unless I read too much into this draft of CMIS, its content is devastating in two ways for WS-ResourceTransfer: in one fell swoop it shows that the specification is mostly useless and it destroys the argument that WS-ResourceTransfer needs to be stand-alone as opposed to just a part of WS-Management.

In “who needs XPath fragment-level PUT?”, I tried to make the case that the use of XPath in WS-RT to do fine-grained updates is a case of over-engineering, and that there is no real need for it. Still, in that article I tried to think of cases where the feature might be justified. I came up with two and I wrote that “one is if the resource actually is a document (as opposed to having its state represented by a document). For example, a wiki page”. But I dismissed it because wiki-land is REST country. I didn’t think of it at the time, but there is an “enterprise” version of wiki, a world in which, presumably, SOAP is well-regarded: Content Management Systems. Surely, if there is a domain that needs a fine-grained SOAP-based document editing protocol it’s the CMS world.

Today’s release of CMIS demolishes this use case with two punches to the gut:

  • They do have a query language, but it is SQL-based, not XPath-based.
  • The query is only used for reads, not for updates. Updates are done through specialized operations (addObjectToFolder, moveObject, updateProperties, createRelationship…).

This goes beyond not using a generic fine-grained update mechanism. It also goes against using any generic GET/SET operation. The blow reaches all the way to WS-Transfer. For all this, CMIS comes out a much simpler specification and it also frees itself from the web of dependencies (on specifications at different stages of standardization) that has plagued specifications that use WS-Transfer and will plague WS-Federation for using WS-RT.

It will be interesting to see what happens when the WS-* architects at Microsoft and IBM get hold of the CMIS specification and of its authors in their companies. I am especially worried about the fate of the IBM CMIS authors. The recent news about Oslo shows that the XML people at Microsoft are a lot more willing to put the XML tools back in the box when needed.

In truth, the CMIS authors do appear to need some help from the SOAP experts in their companies, if only to fix the way they use SOAP faults and to help the poor soul who put this comment in the WSDL:

<!– had to use include – .net wsdl.exe code generator doesn’t seem to like imports on the schema –>

But they might be getting more “suggestions” than they bargained for. In the same way that the WS-Federation folks were going on their own merry way until it was “suggested” to them by someone (who probably had an agenda) to use WS-RT. I’ll try to keep an eye on how CMIS evolves.

In the meantime, I find in CMIS data points that reinforce my opinion that WS-Transfer should be absorbed by WS-Management, WS-MeX and WS-Federation should return to defining their own operations and WS-RT should be left to die (or, for a more positive spin, be used as inspiration in the next version of WS-Management).

[UPDATED 2008/10/02: Roy Fielding doesn’t like the so-called-RESTful binding. Sam Ruby cautiously defends it. Links via Billy Cripe.]

[UPDATED 2009/5/1: For some reason this entry is attracting a lot of comment spam, so I am disabling comments. Contact me if you’d like to comment.]

4 Comments

Filed under Everything, IBM, Microsoft, Query, REST, SOAP, SOAP header, Specs, Standards, Tech, WS-Management, WS-ResourceTransfer, WS-Transfer, XPath

Oslo, blog posts and my crystal ball

There is more and more information coming out about Oslo in anticipation of the Microsoft PDC in October.

David Chappell recorded a video about it last month. More recently Doug Purdy and Don Box each posted a short description of Oslo. Don describes the goal of Oslo as “simplify the process of developing, deploying, and managing software”. But when he lists ancestor technologies to illustrate that “Microsoft has been moving in this direction for over a decade now”, they are all about development, not management: COM type libraries, .NET metadata attributes, XAML. Interesting that neither SDM nor SML gets a mention. Neither did SCA by the way, but I wasn’t really expecting that one… :-)

Maybe I am the only one looking for an SDM/SML echo here, just because I came to hear of Oslo through the DSI angle. Am I wrong to see Oslo as an enabler for DSI? This eWeek article doesn’t have anything to do with IT management. Reading it, Oslo is all about allowing people to write code through drag and drop. Yawn. And Don Box endorses the article.

Maybe it’s just me (an IT management guy more than a software development guy) but I don’t care so much about how the application model is created. I care a lot more about what it allows you to do in terms of IT management. Please don’t make me pull out the often-quoted figure about the percentage of IT budget spent on operations versus development/licensing. The eWeek piece fails to excite me, but fortunately David Chappell’s video interview is a lot more aligned with my thinking, so I still hold hopes for Oslo as an IT management enabler. Here is my approximate transcript of an example that David provides (at around 4:20) in the video:

“If someone comes to you and says I’ve got this business process and the SLA is not being met, what do you do? You’ve got to trace this through the right business process and the right application that supports that part of the process and find the machine it runs on and maybe look at the workflow that implements it and maybe look at the services that it provides. This involves talking to business analysts, or the IT pros or the architect or the developer, all of whom have their own view of the world, their own tools, their own perspective. The repository provides a common place to store all this stuff, to link it all together, and with a visual editor to have a common tool that lets you actually go through and answer these kinds of questions.”

Now you’re talking.

And if Oslo is not the new blood of DSI, then what is? The DSI story is getting dated, SML is fading in our memories and of the three parts that supposedly compose DSI (“virtualized infrastructure, design for operations, and knowledge-driven management”), only virtualization is actually represented on the list of technologies on the DSI home page. Has DSI turned into just allowing System Center to manage a hypervisor? I still hold hopes that the Oslo data is going to spice things up there. It would be good for the industry at large, not just Microsoft.

I won’t be at the PDC but it will be interesting to see what filters out of these sessions. The first session in the list adds management of hybrid application systems (hybrid as in “cloud/on-premise combination” or “software+services” as Microsoft calls it), to the long “can do” list for Oslo. Impressive, if there is some meat behind the abstract. I think this task is often overlooked in discussions around management aspects of Cloud computing (see “the new, interesting thing is going to be the IT infrastructure to manage your usage of utility computing services as well as their interactions with your in-house software” in this previous entry).

Yes, I am reading way too much into session abstracts, but while I am at it I can’t help noticing that there is a lot of SQL and very little XML/XSD/XPath mentioned there, even though one of the presenters is Gudge, the only person I have ever met who fully understands XSD (actually even he doesn’t; I’ve seen him, back in the WS-I days, have to refer to… his book).

Even though I am sure we’ll be told that SML can be built on top of Oslo, the SQL orientation won’t make that so easy (I want to see how to build XSD+Schematron validation on top of a relational store using Oslo’s drag-and-drop development tool). And it puts Microsoft in a different architectural direction from IBM, which, as far as I can tell, thinks that the world is a big XML document. Neither is the most appropriate for IT management models: I prefer a graph model and associated graph queries along the lines of SPARQL or CMDBf.
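For what it’s worth, here is a minimal sketch of what I mean by a graph model with graph queries. It uses the Python rdflib library and SPARQL purely for illustration; the resource names are invented and this is neither CMDBf query syntax nor anything Oslo-specific.

  # Toy IT management model as an RDF graph, queried with SPARQL.
  from rdflib import Graph, Namespace, Literal

  EX = Namespace("http://example.org/mgmt/")
  g = Graph()
  g.bind("ex", EX)

  # A business process supported by an app, which is deployed on a host.
  g.add((EX.OrderProcess, EX.supportedBy, EX.OrderApp))
  g.add((EX.OrderApp, EX.deployedOn, EX.Host42))
  g.add((EX.OrderApp, EX.invokes, EX.CreditCheckService))
  g.add((EX.Host42, EX.status, Literal("degraded")))

  # Graph query: which hosts behind OrderProcess are currently degraded?
  results = g.query("""
      PREFIX ex: <http://example.org/mgmt/>
      SELECT ?host WHERE {
          ex:OrderProcess ex:supportedBy ?app .
          ?app ex:deployedOn ?host .
          ?host ex:status "degraded" .
      }
  """)
  for row in results:
      print(row.host)   # http://example.org/mgmt/Host42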

But that’s just late-night idle speculation on my part (aka “blogging”). Let’s see what comes out in October.

[UPDATED 2008/9/10: Interesting timing. Microsoft is joining OMG, home of UML and BPMN. Coming next: a submission of a “new version” of UML and BPMN that happens to contain the extensions and tweaks that Microsoft made to them in the process of implementing Oslo. This, BTW, is the final nail in the SML coffin (SML isn’t even mentioned in the press release).]

3 Comments

Filed under Application Mgmt, CMDBf, Conference, Desired State, Everything, Graph query, IT Systems Mgmt, Mgmt integration, Microsoft, Middleware, Modeling, Oslo, Query, SaaS, SCA, SML, SPARQL, Specs, Tech, Trade show, Utility computing, Virtualization

SOA management: round-up of recent news

It started with a checkpoint on “the state of SOA monitoring and management” by Doug McClure. A good set of questions and a good list of “usual suspects” (but how much did Actional pay to be listed twice?).

Then came this good article from AMIS’ Lucas Jellema reporting on what he learned during a recent Oracle SOA Partner event. He pokes fun at Oracle/BEA for conveniently tweaking their “this is what you need” story to align with the “this is what we offer” part (I am shocked, SHOCKED to hear that a vendor would do that, let alone my employer). But the real focus of his article is to describe the importance of design-time SOA governance (integrated with the other parts of the lifecycle). He does a good job of describing some of the value of the consolidated Oracle/BEA offering.

I couldn’t help smiling when I read this paragraph:

“It struck me that most of what applies in terms of Governance to SOA assets, also applies to other assets in any software engineering process. Trying to manage reusable components for example or even implementing a good maintenance approach for a non-SOA application is a tremendous challenge, that has many parallels with SOA Governance. And to some extent could benefit from applying a tooling infrastructure such as provided by the Enterprise Repository… Well, just a thought for now. I need to know more about the ER before jumping to conclusions.”

If my memory serves me right, the original Flashline product that BEA acquired (what became the Enterprise Repository) was just that, a generic metadata repository for software assets, not something SOA-specific. It’s ironic to see Lucas look at it now and think “hey, maybe this SOA repository can be used for non-SOA apps”. Back to the future. And BTW, Lucas is right about this applicability, as Michael Stamback soon confirmed.

Still in Oracle-land, a few days later came the news that Oracle is acquiring ClearApp. Doug’s post was more about runtime governance (which he calls monitoring/management, and I tend to agree with him even though this is fighting the tide) than design-time governance. In that sense, the ClearApp announcement is more relevant to his questions than Lucas’ post. The ClearApp capabilities fit squarely with Doug’s request for “providing the right level of business visibility into the SOA environment and more importantly the e2e business services, applications, transactions, processes and activities”, as I tried to illustrate before.

More recently, Software AG announced an OEM partnership with Actional (part of Progress) to bring runtime data to its CentraSite registry (which, I assume, comes from the Infravio acquisition by WebMethods before it itself was swallowed by Software AG).

Actional’s Dan Foody of course applauds and uses the opportunity to dispel some FUD (“Actional is tightly tied with Sonic”) and also generate some new FUD (“no vendor had even a half decent offering on both sides [design-time and runtime] of the fence”).

Neil Macehiter has a more neutral commentary on the Software AG news. His analysis ends with some questions about what this means for Amberpoint. Maybe it’s time to restart the “Microsoft might acquire Amberpoint” rumor.

Speaking of Microsoft, the drum roll is getting louder in anticipation of Oslo making its debut at the upcoming PDC. That’s a topic for another post though.

This Oslo detour is a little bit off topic, but not so much. The way Don Box and team envision that giant software model shaping up, they probably picture what’s called today “SOA Governance” as just a small application that an intern can build in a week on top of the Oslo repository. Or am I exaggerating?

Unlike Dan Foody, I like the approach of keeping SOA Governance closely integrated with the development and IT management infrastructures. At the cost of quoting myself (if I don’t, who will?): “it’s not just about managing Web services or Web sites, it’s about managing the whole SOA application”.

[UPDATED 2008/9/23: It looks like the relationship between CentraSite and Infravio is a little bit more complex than I assumed.]

Comments Off on SOA management: round-up of recent news

Filed under Application Mgmt, Everything, Governance, IT Systems Mgmt, Manageability, Mgmt integration, Oracle, Oslo, SOAP

CMDBf interop demo

IBM and CA are apparently showing an interoperability demo between their respective CMDBs at itSMF Fusion this week. I am not there to see it, but they describe it (it’s a corporate merger scenario) in this press release. It is presumably based on the version of the specification that was submitted to DMTF.

More information about CMDBf, along with another demonstration, will be available in a couple of months for ManDevCon attendees. Three sessions are on the agenda, all in a row and in the same room (so make sure to get a good seat, i.e. one close to a power plug, from the start):

  • CMDB Federation Overview (Vince Kowalski, BMC and Marv Waschke, CA)
  • CMDB Federation Technical Description (Mark Johnson, IBM and Marv Waschke, CA)
  • CMDB Federation Demonstration (Mark Johnson, IBM and Dave Snelling, Fujitsu)

Comments Off on CMDBf interop demo

Filed under CA, CMDB, CMDB Federation, CMDBf, Conference, DMTF, Everything, IBM, IT Systems Mgmt, ITIL, Mgmt integration, Specs, Standards, Trade show

The boss is back

Today is full of news for Oracle Enterprise Manager. I came into the office this morning expecting the ClearApp announcement (I had even prepared a blog entry on it over the weekend). This, on the other hand, came as a (good) surprise!

Comments Off on The boss is back

Filed under Everything, IT Systems Mgmt, Oracle, People, VMware

Oracle acquires ClearApp for composite application management

Oracle (and more specifically the middleware and applications management part of Oracle Enterprise Manager) has just acquired ClearApp. The company is based in Mountain View (California) and their QuickVision product is a very advanced management tool for composite applications, especially BPEL-based and Portal-based applications.

More information about the acquisition is available from this page and the press release. Information about the QuickVision product can be found on the ClearApp site.

QuickVision is a very complementary addition to our existing products and the acquisitions that we have made over the last year in the application management domain. Let’s take a performance management use case to see how they relate to one another conceptually (this is not an integration roadmap, just a comparison of the features of the existing products): Oracle Real User Experience Insight (from the Moniforce acquisition) will tell you that your users are seeing a performance degradation for a specific function of your Web application. If this is a stand-alone Java application, you can go straight into the Enterprise Manager App Server Diagnostic Pack to start from a URL and analyze where processing time is spent (servlet, JSP, EJB, JDBC…). AD4J (from the Auptyma acquisition) provides deep insight into the JVM. It will give you the line number and call stack of the slow methods. For example, it might lead you to a specific database call that is taking a long time to return. You can then follow the trail deep into the database using the Oracle Database Diagnostic and Tuning packs.

But if your application is a composite application (for example one that makes use of a BPEL process to orchestrate services deployed on different application servers), then you would have a hard time finding which application server to focus on. The QuickVision product fills that gap, taking a BPEL process from its invocation point into all its successive steps and into the code that the different steps invoke. So you can see if the problem is within the BPEL execution (e.g. you loop too many times) or inside an invoked Web service. In the latter case, QuickVision will lead you to the class that implements that service, at which point you have all the context that you need to fire off AD4J and do a fine-grained analysis of the problematic Java code as described above.

In this scenario (and assuming that the root cause is the slowness of a database query executed by a Web service that has been invoked through a BPEL process), the chain of management capabilities goes something like this:

User Experience Insight
    -> QuickVision
        -> App Server Diagnostic Pack
            -> Database management packs

A variation on this would be if the service monitored was a SOAP service as opposed to a Web page. Oracle Web Services Manager could then be used as an alternative to Real User Experience Insight to alert you that something was amiss with the application performance. The rest of the flow would be the same.

At the end, it’s not just about managing Web services or Web sites, it’s about managing the whole SOA application.

Of course, QuickVision is not limited to performance analysis, even though that’s my favorite feature. For example, I could have picked a dependency analysis scenario.

To my new colleagues joining us from ClearApp, welcome!

[UPDATED 2008/9/9: InfoQ coverage of the acquisition by Dilip Krishnan.]

3 Comments

Filed under Application Mgmt, BPEL, Business Process, Everything, IT Systems Mgmt, Manageability, Mashup, Mgmt integration, Modeling, Oracle

BPEL as a source of application management metadata

Let’s put aside for now all the discussions about whether BPEL is an appropriate tool to capture a “true” business process, i.e. to implement the business logic understood by a business analyst (a topic that has been discussed at length already, including here, here, here, here, here, here and around the 5 minute mark of this podcast). Today, let’s look at it as simply another resource in a developer’s toolbox, alongside things like servlets and XML parsers. It’s a tool that can simplify the invocation of remote services (especially asynchronously), the parallelization of tasks, the definition of scoped compensation handlers, the transformation of XML, the encapsulation of key business logic and, most importantly, the reliable implementation of long-lived processes. If you need a few of these features, you might find BPEL a suitable programming tool. Plus, it refreshingly encourages handling of XML as XML (e.g. via XPath) rather than mindless code generation.

In addition to whatever developer productivity benefit you see in BPEL, there are other potential benefits from using it. They are the topic of this post, and they relate to application management.

We all know that in an ideal world, no developer would release an application without providing a set of management capabilities carefully crafted to reflect the business logic of the application, such that IT administrators can monitor, configure, optimize and troubleshoot the application in ways that are related to what the application really does (as opposed to generic metrics like memory, CPU and I/O…).

Back in the real world, this is of course rarely the case. Enter BPEL. Just by virtue of using it in a reasonable way, and without any “just for the ops guys” metadata, BPEL provides a management model for the application. Sure, it’s not as good as a hand-crafted management model, but at least it’s there. And it has some pretty compelling properties:

  • It feeds directly from the metadata used by the runtime, so it is guaranteed to be accurate (unlike metadata that is created specifically for management but has no role in the actual runtime).
  • It shows what external services the application depends on. Of course there is no guarantee that all remote invocations will be represented in the BPEL process, but since that’s a strength of BPEL it is reasonable to expect that it provides a good view of application dependencies (to be complemented, of course, by the application infrastructure dependencies like the database and the BPEL engine itself…). Remote invocations are a common point of failure and/or performance problems, so they are a first-class citizen of an application management model.
  • It explicitly captures process instances. No more jumping from one database table to another (assuming you even know where to look) to try to get a sense of the current overall status. The BPEL instances show the number of in-flight transactions in the application. It is also easy to compare the initialization and termination rates to see the trend.
  • It provides a horizontal segmentation of the processing tasks (via the BPEL activities) that is a good complement to the vertical segmentation often offered by application management tools (e.g. time spent in the database, time spent waiting on I/O, etc…).
  • It makes explicit certain exception conditions.

All these only make use of very basic aspects of BPEL: the enumeration of PartnerLinks, the notion of a process instance, the existence of activities, the fault/compensation/termination handlers. A fair amount of visibility into the health of the application can be derived from this alone. I am not making fancy assumptions about the management tool being able to make sense of the routing logic in the process or of the correlation rules. I am not assuming that the BPEL engine provides ways to control individual process instances. I am not assuming that the name attributes of certain elements (e.g. PartnerLink, variable) convey information that could help the administrator understand some of the semantics of the application.
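As a rough illustration of how little is needed, here is a sketch (in Python, standard library only) that pulls such a skeleton management model out of a BPEL definition. The file name is hypothetical, and a real management tool would get this from the BPEL engine’s own metadata rather than from a raw file; the point is simply that only partnerLinks, activities and fault handlers are being read.

  import xml.etree.ElementTree as ET
  from collections import Counter

  # WS-BPEL 2.0 executable process namespace.
  BPEL_NS = "http://docs.oasis-open.org/wsbpel/2.0/process/executable"

  def management_skeleton(bpel_file):
      root = ET.parse(bpel_file).getroot()

      # External dependencies: each partnerLink is a potential point of
      # failure or slowness, worth monitoring in its own right.
      dependencies = [pl.get("name")
                      for pl in root.iter(f"{{{BPEL_NS}}}partnerLink")]

      # Horizontal segmentation of the work: count activities by type.
      interesting = {"invoke", "receive", "reply", "wait", "scope"}
      activities = Counter(el.tag.split("}")[-1] for el in root.iter()
                           if el.tag.split("}")[-1] in interesting)

      # Explicit exception conditions.
      fault_handlers = sum(1 for _ in root.iter(f"{{{BPEL_NS}}}catch")) \
                     + sum(1 for _ in root.iter(f"{{{BPEL_NS}}}catchAll"))

      return {"dependencies": dependencies,
              "activities": dict(activities),
              "fault_handlers": fault_handlers}

  # Hypothetical usage:
  # print(management_skeleton("OrderProcess.bpel"))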

At the end, it’s not about managing BPEL, it’s about managing an application that uses BPEL.

My point is not to push everyone to write any application as a BPEL process (or a set of them) as a way to get a great management infrastructure for free. But if BPEL is a potential choice for the application, then it’s worth considering those extra benefits in the “pros and cons” analysis. And if you have already decided to use BPEL, it may be worth looking into what management dividends you can harvest from this choice. Of course your mileage may vary depending on how manageable your BPEL infrastructure is. Hint hint

A few related links. Todd Biske has also written about the management value of BPEL, here and here. A similar analysis can be applied to SCA, but at this point in time there are many more applications out there that use BPEL than SCA, making the former more relevant. I briefly described the SCA side of the equation in an earlier exchange with David Chappell. That discussion is summarized here (including a pointer to David’s original piece). In an earlier post, I touched on the manageability potential of other sources of application metadata, like OSGi and Spring (in addition to SCA and BPEL). Jean-Jacques Dubray provided additional context at InfoQ.

[UPDATED 2008/9/2: Based on this announcement, I can add one more hint.]

3 Comments

Filed under Application Mgmt, BPEL, Business Process, Everything, IT Systems Mgmt, Manageability, Modeling