Category Archives: Everything

Another IT event standard? I’ll believe it when I CEE it.

Looks like there is yet another attempt to standardize IT events. It’s called the Common Event Expression (CEE). My cynicism would have prevented me from paying much attention to it (how many failed attempts at this do we really need?) if I hadn’t noticed an “event taxonomy” as the first deliverable listed on the home page. These days I am a sucker for the T word. So I dug around a bit and found out that they have a publicly-archived mailing list on which we can see a working draft of a CEE white paper. It looks pretty polished but it is nonetheless a working draft and I am keeping this in mind when reading it (it wouldn’t be fair to hold the group to something they haven’t yet agreed to release).

The first reassuring thing I see (in the “prior efforts” section) is that they are indeed very aware of all the proprietary log formats and all the (mostly failed) past standardization attempts. They are going into this open-eyed (read the “why should we attempt yet another event log standard” section and see if it convinces you). I should disclose that I have some history with one of these proprietary standards (and failed standardization attempts) that probably contributes to my cynicism on the topic. It took place when IBM tried to push their proprietary CBE format into WSDM, which they partially succeeded in doing (as the WSDM Event Format). This all became a moot point when WSDM stalled, but I had become pretty familiar with CBE in the process.

The major advance in CEE is that, unlike previous efforts, it separates the semantics (which they propose to capture in a taxonomy) from the representation. The paper is a bit sloppy at times (e.g. “while the syntax is unique, it can be expressed and transmitted in a number of different ways” uses, I think, “syntax” to mean “semantics”) but that’s the sense I get. That’s nice but I am not sure it goes far enough.

The best part about having a blog is that you get to give unsolicited advice, and that’s what I am about to do. If I wanted to bring real progress to the world of standardized IT logging, I would leave aside the representation part and focus on ontologies. At two levels: first, I would identify a framework for capturing ontologies. I say “identify”, not “invent”, because it has already been invented and implemented. It’s just a matter of selecting relevant parts and explaining how they apply to expressing the semantics of IT events. Then I would define a few ontologies that are applicable to IT events. Yes, plural. There isn’t one ontology for IT events. It depends both on what the events are about (networking, applications, sensors…) and what they are used for (security audit, performance analysis, change management…).

The thing about logs is that when you collect them you don’t necessarily know what they are going to be used for. Which is why you need to collect them in a way that is as close to what really happened as possible. Any transformation towards a more abstracted/general representation loses some information that may turn out to be needed. For example, messages often have several potential ID fields (transport-level, header, application logic…) and if you pick one of them to map to the canonical messageId field you may lose the others. Let logs be captured in non-standard ways and focus on creating flexible means to attach and process common semantics on top of them.

Should I be optimistic? I look at this proposed list of CEE fields and I think “nope, they’re just going to produce another CBE” (the name similarity doesn’t help). Then I read “by eliminating subjective information, such as perceived impact or importance, sometimes seen in current log messages…” in the white paper draft and I want to kiss (metaphorically, at least until I see a photo) whoever wrote this. Because it shows an understanding of the difference between the base facts and the domain-specific interpretations. Interpretations are useful of course, but should be separated (and ideally automatically mapped to the base facts using ontology-driven rules). I especially like this example because it illustrates one of the points I tried to make during the WSDM/CBE discussions, that severity is relative. It changes based on time (e.g. a malfunction in an order-booking system might be critical towards the end of the quarter but not at the beginning) and based on the perspective of the event consumer (e.g. the disappearance of a $5 cable is trivial from an asset management perspective but critical from an operations perspective if that cable connects your production DB to the network). Not only does CBE (and, to be fair, several other log formats) consider the severity to be intrinsic to the event, it also goes out of its way to say that “it is not mutable once it is set”. Glad to see that the CEE people have a better understanding.
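To make the “ontology-driven rules” idea concrete, here is a minimal sketch using Jena, the open source RDF framework (the vocabulary, the rule and the event are all made up for the example). The base model records only what happened; the severity appears only in an inference model owned by the operations perspective:

import java.util.List;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;
import com.hp.hpl.jena.util.PrintUtil;

public class SeverityByPerspective {
    public static void main(String[] args) {
        String NS = "http://example.org/logs#"; // hypothetical vocabulary
        PrintUtil.registerPrefix("log", NS);

        // Base facts: only what happened, no interpretation baked in.
        Model base = ModelFactory.createDefaultModel();
        base.createResource(NS + "event42")
            .addProperty(base.createProperty(NS, "kind"), "link-down")
            .addProperty(base.createProperty(NS, "device"), "production-db-nic");

        // Interpretation: a rule owned by the operations perspective.
        List rules = Rule.parseRules(
            "[opsView: (?e log:device 'production-db-nic') " +
            "       -> (?e log:opsSeverity 'critical')]");
        InfModel ops = ModelFactory.createInfModel(new GenericRuleReasoner(rules), base);

        Resource event = ops.getResource(NS + "event42");
        System.out.println(event.getProperty(ops.getProperty(NS + "opsSeverity")).getString());
    }
}

A different consumer (asset management, say) would plug its own rule set into the same untouched base facts, which is exactly the separation the white paper hints at.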

Another sentence that gives me both hope and fear is “another, similar approach would be to define a pseudo-language with subjects, objects, verbs, etc along with a finite set of words”. That’s on the right track, but why re-invent? Doesn’t it sound a lot like subject/predicate/object? CEE is hosted by MITRE, which has plenty of semantic web expertise. Why not take these guys out to lunch one day and have a chat?

More thoughts on CEE (and its relationship with XDAS) on the Burton Group blog.

Let’s finish on a hopeful note. The “CEE roadmap” sees three phases of adoption for the taxonomy work. The second one is “publish a taxonomy and talk to software vendors for adoption”. The third one is “increase adoption of taxonomy across various logs; have vendors map all new log messages to a taxonomy”. Wouldn’t it be beautiful if it was that simple and free of politics? I wonder if there is a chapter about software standards in The Audacity of Hope.

4 Comments

Filed under Everything, IT Systems Mgmt, Semantic tech, Standards

BPMN to BPEL: going to battle with one hand tied?

I have been looking at business process modeling and I am a bit puzzled about the connections between the different goals (strategy support, process documentation, automated execution…), audiences (LOB, business analysts, developers…) and tools (process editor, registry, simulation bench, IDE…). I see how it would be nice for all these to play well together. What I don’t quite see is exactly how the current tools achieve that.

One example is the goal of improving communications between business analysts and developers by allowing analysts to capture as much of the intended process as possible in a way that can be easily consumed by developers. That is a worthy goal and it should be eventually achievable (though maybe in a reformulated form) based on industry trends (who would have thought that one day business people would use their own computers to retrieve business data rather than having an operator print documents for them). But it is still a very difficult goal, for which many inherent barriers (in terms of shared vocabulary, skills and mindset) must be overcome. My concern is that the current approaches add many artificial barriers to those intrinsic to the problem.

One source of such artificial barriers is that incompatible business process description languages come into play. One common example is the use of BPMN for analyst-level modeling followed by a translation to BPEL for development tasks. I ran into an example of an incompatibility between the two very early in my experiments with BPMN, in the form of the “inclusive OR” (the diamond with a circle inside in BPMN).

It lets you express something like this: “The customer quote can be reviewed by the region manager, the country manager or the VP of sales. At least one of them must review the quote. More than one may review the quote”. My first thought when encountering this construct was “how does this get mapped to BPEL”, since there is no equivalent BPEL construct. After scratching my head, I could think of two ways to do it, neither of which is very pretty. One is to turn this into a brute-force enumeration of all the legal combinations (“1”, “1 and 2”, “1, 2 and 3”, “2”, “2 and 3”, “3”) which can get out of hand pretty quickly if you have more than three branches. The other relies on event handlers. In both cases, you end up with a BPEL process definition that should be correctly interpreted by a BPEL execution engine but that is hard to read for developers and almost impossible to round-trip back into a nice BPMN description.
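To see how quickly the brute-force approach gets out of hand, here is a small sketch (in Java, just for illustration) that enumerates the legal combinations: n optional branches yield 2^n - 1 non-empty subsets, each of which becomes its own path in the generated BPEL:

public class InclusiveOrBlowup {
    public static void main(String[] args) {
        String[] reviewers = {"region manager", "country manager", "VP of sales"};
        int n = reviewers.length;
        for (int mask = 1; mask < (1 << n); mask++) { // every non-empty subset
            StringBuilder combo = new StringBuilder();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) {
                    if (combo.length() > 0) combo.append(" + ");
                    combo.append(reviewers[i]);
                }
            }
            System.out.println(combo);
        }
        // 3 branches -> 7 paths; 5 -> 31; 10 -> 1023 cases in the BPEL.
    }
}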

Several similar corner cases in BPMN to BPEL translations are described in this paper, in which the authors also have to resort to BPEL event handlers. You can find approaches to improve the BPMN to BPEL mapping in this paper (also check out the list of references, including this paper, for more research on the problem).

Out of curiosity, I ran a translation of a BPMN “inclusive OR” with the Aris tool for Oracle and examined the resulting BPEL fragment in JDeveloper.

Here is a cleaned-up representation of the resulting BPEL (the optional tasks are only represented by “empty” elements because they are waiting to be filled in with real processing instructions):

<flow name="OR__inclusive_">
  <sequence>
    <switch name="OR__inclusive_">
      <case>
        <sequence>
          <scope name="Optional_task_3">
            <sequence>
              <empty name="Optional_task_3"/>
            </sequence>
          </scope>
        </sequence>
      </case>
      <case>
        <sequence>
          <empty name="Empty"/>
        </sequence>
      </case>
    </switch>
  </sequence>
  <sequence>
    <switch name="OR__inclusive_">
      <case>
        <sequence>
          <scope name="Optional_task_1">
            <sequence>
              <empty name="Optional_task_1"/>
            </sequence>
          </scope>
        </sequence>
      </case>
      <case>
        <sequence>
          <empty name="Empty"/>
        </sequence>
      </case>
    </switch>
  </sequence>
  <sequence>
    <switch name="OR__inclusive_">
      <case>
        <sequence>
          <empty name="Empty"/>
        </sequence>
      </case>
      <case>
        <sequence>
          <scope name="Optional_task_2">
            <sequence>
              <empty name="Optional_task_2"/>
            </sequence>
          </scope>
        </sequence>
      </case>
    </switch>
  </sequence>
</flow>

This tool makes the choice to favor BPEL readability at the expense of precision. The BPEL is much nicer but it fails to capture the fact that at least one of the optional tasks must be performed (a BPEL execution engine would allow an instance to go through all three “empty” constructs, bypassing all the optional tasks). In our example, it means that the customer quote could go out without any management review, even though the business requirement is to have at least one.

This is potentially worse than not allowing the analyst to specify the “at least one” business requirement: the analyst assumes that the requirement is captured (since it is conveyed in the BPMN flow) but the developer never sees it (assume the developer only gets hold of the generated BPEL). If the analyst was not able to input the requirement in BPMN, s/he would at least be more likely to add this as an off-line comment for the developer to take into account.

As all the research papers I previously linked to illustrate, this disconnect between BPMN and BPEL is a known problem that people have spent a lot of effort trying to fix. But in the absence of a satisfying solution, I keep asking myself whether this problem is better circumvented than fixed.

I am not one to shy away from model translations (otherwise I would be in the wrong business) but I see model translation as a tool that can be overused. In the current state, putting myself in the shoes of a BPEL developer, I’d rather get a nice BPMN flow than a weird BPEL process that was auto-generated.

I don’t have a solution to the problem. Maybe it’s to define an implementable subset of BPMN (or an analyst-friendly subset of BPEL, which may be essentially the same). Or maybe not everything goes through explicit business process modeling. The developer will need test cases anyway, so maybe the right approach is to provide a high-level overview of the process followed by a bunch of tests. I can see a system where the business process modeling engine would be used to generate test messages and the analyst would tell, step by step, what happens to each message. The UI could be designed such that the tool could know what element of the message/context the analyst observes in order to choose the next step. And a strawman implementation flow may even be generated based on how the analyst dispatches the messages. At least the messages can be used to drive unit tests. Business process analysis tools already know how to run process simulations (except they are driven by statistical rules to decide what branches are taken, rather than by interactions with the analyst).

Just wondering.

[UPDATED 2008/10/22: This InfoQ article provides more data about the problems of mapping from BPMN to BPEL.]

[UPDATED 2008/12/11: If this topic is of interest to you, you should read Bruce Silver’s opinion about how to address this in BPMN 2.0.]

13 Comments

Filed under Business Process, Everything

An interesting move

I have been keeping an eye on Don Ferguson’s blog with the hope of one day reading a bit about Microsoft’s Oslo project and maybe the application management aspects of it. Instead, what I saw tonight is that Don is leaving Microsoft, after a short stay, to join CA. Welcome to the fun world of IT management Don! It seems like a safe bet to assume that he will work on application management (sorry, I am supposed to say “service management”), which is what I focus on at Oracle. So forget Oslo, now I have another reason to keep an eye on Don. Microsoft has hired quite a few people out of CA (including Anders Vinberg, a while ago, and my WSDM co-conspirator Igor Sedukhin), so I guess it’s only fair to see some movement the other way.

Since this has turned into a “people magazine” edition of this blog, IT management observers who don’t know it yet might be interested to learn that DMTF president Winston Bumpus left Dell to join VMWare several months ago. Leaving aside the superiority of the SF Bay Area over Round Rock TX for boating purposes, this can also be seen as a clear signal of interest from VMWare for standards and especially DMTF. OVF might only be the beginning.

If anyone who matters in IT management adopts a baby, checks into rehab or gets into a brawl, you’ll read about it first on this blog. Coming next week: exclusive photos from the beach-side retreat of the itSMF board. We’ll compare to photos from last year to find out whose six-pack shows the most impressive “continual service improvement”. And the following week, you’ll learn what really happened in that Vegas meeting room filled with IT management analysts. On the other hand, I do not cover fashion faux-pas because there are just too many of those in our industry.

1 Comment

Filed under CA, Everything, Microsoft, People

Oracle semantic technologies resources

I have started to look at the semantic technologies available in Oracle’s portfolio and so far I like what I see. At HP, I had access to top experts on semantic technologies (mostly from HP Labs) but no special product (not counting Jena which is available to everyone). At Oracle, I find both top experts and very robust products. If you too are looking into Oracle’s offering related to semantic technologies, here are a few links to publicly-available resources that I have found useful. This is filtered based on my interests (yours may be different, for example I skip content related to life sciences applications).

The main page (and what should be your starting point on that topic) is the Semantic Technologies Center on OTN. Most of the other resources listed below are only a click or two away from there. The Semantic Technologies Forum is the right place for questions. The Semantic Web page on the Oracle Wiki doesn’t contain much right now but that may change.

For an overview of the semantic technology capabilities and their applicability, start with Semantic Data Integration for the Enterprise (white paper) and Why, When, and How to Use Oracle Database 11g Semantic Technologies (slides). Then look at Enterprise Semantic Web in Practice (slides) for many real-life examples.

When you are ready to take advantage of the Oracle semantic technologies capabilities, start with The Semantic Web for Application Developers (slides) followed by RDF Support in Oracle RDBMS (more detailed slides, but they seem to be based on 10gR2 rather than 11g and are therefore less complete; no OWL, for example). Then grab a thermos of coffee and lock yourself in the basement for a while with the Oracle Database Semantic Technologies Developer’s Guide (also available as a hundred-page PDF).

At that point, you may choose to look into the design choices (with performance analysis) that were made in the Oracle implementation by reading A Scalable RDBMS-Based Inference Engine for RDFS/OWL. There is also a Wiki page on OWLPrime to describe the subset of OWL supported in 11g. Finally, you can turn to the Inference Best Practices with RDFS/OWL white paper for tuning tips on 11g.

To get the actual bits, you can download the Oracle 11g Database on OTN. The semantic technologies support is in the Spatial option for the database, which is included in the base download.

I will keep updating this page as interesting new resources are created (or as I discover existing ones). For resources on semantic technologies in general (non-Oracle-specific) good sources are Dave Beckett’s page, the W3C (list of resources or standardization activities) or the Cover Pages.

1 Comment

Filed under Everything, Oracle, OWL, RDF, Semantic tech

If we are not at the table we are on the menu

Earlier this evening I was listening to a podcast from the Commonwealth Club of California. The guest was Frances Beinecke, President of the Natural Resources Defense Council. It wasn’t captivating and my mind had wandered off to another topic (a question related to open source) when I caught a sentence that made me think that the podcast had followed me on that topic:

“If we are not at the table we are on the menu”

In fact, she was quoting an energy industry executive explaining why he welcomes upfront discussions w/ NRDC about global warming. But isn’t this also very applicable to what open source means for many companies?

Everything below is off-topic for this blog.

To be fair, I should clarify that not all Commonwealth Club podcasts (here is the RSS feed) fail to keep my attention. While I am at it, here is a quick listener’s guide to recent recordings (with links to the MP3 files) in case some of you also have a nasty commute and want to give the CCC (no, not that one) a try. Contrary to what I expected, I have found panel discussions generally less interesting than talks by individuals. The panel on reconstructing health care was good though. The one on reconciling science and religion was not (in the absence of a more specifically framed question everyone on the panel agreed on everything). They invite speakers from both sides of the aisle: recently Ben Stein (can’t be introduced in a few words) and Tom Campbell (Dean of the Haas business school at Berkeley) on the conservative side and Madeleine Albright (no introduction needed) on the progressive side. All three of these were quite good. As I mentioned, the one with Frances Beinecke (NRDC president) wasn’t (it quickly morphed into self-praise for her organization’s work, including taking a surprising amount of credit for Intel’s work towards lower power consumption). Deborah Rodriguez (director of the “Kabul Beauty School”) was the worst (at least for the first 20 minutes; I wasn’t paid enough to keep listening). Thomas Fingar (Chairman of the National Intelligence Council) was ok but could have been much better (he shared all the truth that couldn’t embarrass or anger anyone, which isn’t much when the topic is the Iraq and Iran intelligence reports on WMD). In the process he explained what the intelligence community calls “open source intelligence” and he wasn’t referring to the RedMonk model. Enjoy…

Comments Off on If we are not at the table we are on the menu

Filed under Everything, Off-topic, Open source

Of graphs and trees: Kingsley Idehen to the rescue

I just read the transcript of Jon Udell’s podcast interview of Kingsley Idehen. It’s almost two years old but it contains something that I have tried (and mostly failed) to explain for a while now, so maybe borrowing someone else’s words (and credibility) would help.

Kingsley says:

“A graph model, ideally, will allow you to explore almost all the comprehensible dimensions of the nodes in that network. So you can traverse that network in a myriad of different ways and it will give you much more flexibility than if you’re confined to a tree, in effect, the difference between XQuery and SPARQL. I always see the difference between these two things as this. If you visualize nodes on a network, SPARQL is going to get you to the right node. Your journey to what you want is facilitated by SPARQL, and then XQuery can then take you deeper into this one node, which has specific data that the graph traversal is taking you to.”

Nicely said, especially considering that this is not a prepared statement but a transcript of a (presumably) unscripted interview.

He later provides an example:

“Let’s take a microformat as an example. HCard, or an hCalendar, is a well-formed format. In a sense, it’s XML. You can locate the hCard in question, so if you had a collection of individuals who had full files on the network in the repository, it could be a graph of a social network or a group of people. Now, through that graph you could ultimately locate common interests. And eventually you may want to set up calendars but if the format of the calendar itself is well formed, with XQuery you can search a location, with XPath it’s even more specific. Here you simply want to get to a node in the content and to get a value. Because the content is well formed you can traverse within the content, but XQuery doesn’t help you find that content as effectively because in effect XQuery is really all about a hierarchical model.”

Here is one way to translate this to the IT management domain. Replace hCard with an XML-formatted configuration record. Replace the graph of social relationships with a graph of IT-relevant relationships (dependency, ownership, connections, containment…). Rather than attempt to XQuery across an entire CMDB (or, even worse, an entire CMDB federation), use a graph query (ideally SPARQL) to find the items of interest and then use XPath/XQuery to drill into the content of the resulting records. The graph query language in CMDBf is an attempt to do that, but it has to constantly battle attempts to impose a tree-based view of the world.
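Here is a minimal sketch of that two-step pattern, using Jena and the JDK’s XPath support (the vocabulary, file name and record format are all invented for the example): SPARQL navigates the relationships to locate the right configuration record, then XPath drills into its content:

import java.io.StringReader;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class GraphThenTree {
    public static void main(String[] args) throws Exception {
        Model cmdb = ModelFactory.createDefaultModel();
        cmdb.read("file:cmdb.rdf"); // hypothetical graph of items and relationships

        // Step 1: graph query. Follow dependency relationships to find the
        // configuration record of the database a given application relies on.
        String sparql =
            "PREFIX ex: <http://example.org/cmdb#>\n" + // hypothetical vocabulary
            "SELECT ?config WHERE {\n" +
            "  ex:orderApp ex:dependsOn ?db .\n" +
            "  ?db ex:configRecord ?config .\n" +
            "}";
        QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(sparql), cmdb);
        ResultSet results = qe.execSelect();
        while (results.hasNext()) {
            String record = results.nextSolution().getLiteral("config").getString();
            // Step 2: tree query. Drill into the XML record SPARQL located.
            String port = XPathFactory.newInstance().newXPath().evaluate(
                "/record/listener/port", new InputSource(new StringReader(record)));
            System.out.println("DB listener port: " + port);
        }
        qe.close();
    }
}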

This also helps illustrate why SPARQL is superior to the CMDBf query language. It’s not just that it’s a better graph query language, one that has received much more review and validation by people more experienced in graph theory and queries, and one that is already widely implemented. It also does something that CMDBf doesn’t attempt to do: it lets you navigate the graph based on the semantics appropriate for the task at hand (dependency relationships, governance rules, distributed performance management…), something that CMDBf cannot do. There is more to classification than simply class inheritance. I think this is what Kingsley refers to when he says “in a myriad of different ways” in the quote above.

Here is a way to summarize the larger point (that tree and graph views are complementary):

Me Tarzan, you Jena

Where Tarzan (appropriately) represents the ability to navigate trees and Jane/Jena represents the ability to navigate graphs (Jena, from HP Labs, is the leading open source RDF/OWL/SPARQL framework). As in the movie, they complement each other (to the point of saving one another’s life and falling in love, but I don’t ask quite that much of SPARQL and XQuery).

On a related topic, I recently saw some interesting news from TopQuadrant. Based on explicit requests from the majority of their customers, they have added capabilities to their TopBraid Composer product to better make use of the RDF/OWL support in the Oracle database. TopQuadrant is at the forefront of many semantic web applications and the fact that they see Oracle being heavily used by their customers is an interesting external validation.

[UPDATED 2008/03/05: more related news! The W3C RDB2RDF incubator group has started its life at W3C, chaired by my colleague Ashok Malhotra, to work on mappings between RDF/OWL and relational data.]

1 Comment

Filed under CMDB Federation, CMDBf, Everything, Graph query, Query, RDF, SPARQL, Standards, W3C, XPath, XQuery

HP is starting to pull out of Identity Management

Rumors prompted me to do a Google search on << HP “identity management” exit >>. The second resulting link brought confirmation in the form of this Burton Group article.

From the article, HP is not declaring “end of life” on its IDM products (the “Select” family, made up of Select Access, Select Audit, Select Federation and Select Identity) but they are restricting them to the current customers rather than going after new ones. Which sounds like an end of life, albeit a slow one that gives customers plenty of time to plan and execute their transition. Good for HP, because that’s one area in which you really don’t want to make precipitous decisions (as sometimes happens when an IDM effort is kicked off as a result of a negative security event).

My first reaction is to wonder what this means for my ex-colleagues, including the IDM people I sat next to (most of them from the Trustgenix acquisition) and the remote ones I interacted with in the context of HP’s software standards strategy (Jason Rouault and Archie Reed, both well-known and respected in the corresponding standards efforts). These are all smart people so I am sure they’ll find productive work either in HP or outside (the IDM domain is booming).

My second reaction is puzzlement. This move is not very surprising from the point of view of the market success and financial returns of HP’s IDM suite so far. But it is a lot more surprising in the context of HP’s BTO strategy. I am sure they realize the importance of IDM in that context, so I guess they must have decided that they can do it based on partner products rather than HP products. Hopefully they can maintain the expertise even without further developing products.

The Burton Group article quotes Eric Vishria, “HP Software Vice President of Products”. Based on his title I would have been in his organization so I would have known him if he had been there when I was at HP. Which tells me that he probably came from the Opsware acquisition, soon after I left. The Opsware people now have a lot of influence in HP Software and it looks like they are not shying away from bold moves.

[UPDATED 2008/5/22: HP appears to have struck a deal to migrate its IDM users to Novell.]

1 Comment

Filed under Everything, HP, Security

Unintentional comedy

With these two words, “unintentional comedy”, Damon captures a lot of what goes on in the world of IT management standards:

  • the predictability,
  • the unstated rules of the genre,
  • the stereotypical roles that keep reappearing: the bully, the calculator, the rambler, the simple-minded (that’s the one I used to play),
  • the pretentiousness,
  • the importance of appearances,
  • the necessity of conflict and tension,
  • the repetitiveness,
  • and the fact that after a while people tend to behave as caricatures of themselves.

I don’t mind being (with many others) the butt of the joke when the joke is right on. Plus, I made a similar analogy in the past: Commedia dell (stand)arte (once there, make sure you also follow the link to Umit’s verses).

To be fair, I don’t think this is limited to IT management standards. Other standard areas behave alike (OOXML vs. ODF anyone?). You can also see the bullet points above in action in many open source mailing lists. And most of all in the blogosphere. BTW Damon, why do you think the server for this blog is stage.vambenepe.com and not a more neutral blog.vambenepe.com? It’s not that I got mixed up between my staging server and my production server. It’s that I see a lot of comedy aspects to our part of the blogosphere and I wanted to acknowledge that I, like others, assume a persona on my blog and through it I play a role in a big comedy. Which is not as dismissive as it sounds: comedy can be an excellent vehicle to convey important and serious ideas. But we need people like you to remind us, from time to time, that comedy it is.

5 Comments

Filed under Everything, IT Systems Mgmt, Specs, Standards

MicroSAP scarier than Microhoo

Here are the first three thoughts that came to my mind when I heard about Microsoft’s bid to acquire Yahoo (in order, to the extent that I can remember):

  • After XBox this will take their focus further away from enterprise software. Good for Oracle.
  • I wonder how my friends at Yahoo (none of whom I know to be great fans of Microsoft’s software) feel about this (on the other hand the stock price rise can’t be too unpleasant for them)
  • Time to get ready to move away from Yahoo Mail

Turns out I should have added an additional piece of good news to the first bullet: after this they won’t be able to afford SAP for a while. This I just realized after reading this New York Times column which argues, in short, that Microsoft should acquire SAP rather than Yahoo.

A few quotes from the article:

  • “you’ve probably never heard of BEA”: this obviously doesn’t apply to readers of this blog.
  • “it’s not much fun hanging out on the enterprise side of the software business”: ouch. If it’s fun you’re after, try the IT management segment of the enterprise software business.
  • “to find the best acquisition strategy, ask, ‘What would Larry do?’”: does this come as a bumper sticker?

Of course if Microsoft gets Yahoo and things go really badly, then it could be SAP who acquires Microsoft…

Comments Off on MicroSAP scarier than Microhoo

Filed under Business, Everything, Microsoft, Off-topic, Oracle, SAP, Yahoo

SCA, OSGi and Spring from an IT management perspective

March starts next week and the middleware blogging bees are busy collecting OSGi-nectar, Spring-nectar, SCA-nectar, bringing it all back to the hive and seeing what kind of honey they can make from it.

Like James Governor, I had to train myself to stop associating OSGi with OGSI (which was the framework created by GGF, now OGF, to implement OGSA, and was – not very successfully – replaced with OASIS’s WSRF; want more acronyms?). Having established that OSGi does not relate to OGSI, how does it relate to SCA and Spring? What with the Spring-OSGi integration and this call to integrate OSGi and SCA (something Paremus says they already do)? The third leg of the triangle (SCA-Spring integration) is included in the base SCA framework. Call this a disclosure or a plug as you prefer; I’ll note that many of my Oracle colleagues on the middleware side of the house are instrumental in these efforts (Hal, Greg, Khanderao, Dave…).

There is also a white paper (getting a little dated but still very much worth reading) that describes the potential integrations in this triangle in very clear and concrete terms (a rare achievement for this kind of exercise). It ends with “simplicity, flexibility, manageability, testability, reusability. A key combination for enterprise developers”. I am happy to grant the “flexibility” (thanks OSGi), “testability” (thanks Spring) and “reusability” (thanks SCA) claims. Not so for simplicity at this point unless you are one of the handful of people involved in all three efforts. As for the “manageability”, let’s call it “manageability potential” and remain friends.

That last part, manageability, is of course what interests me the most in this area. I mentioned this before in the context of SCA alone but the conjunction of SCA with Spring and/or OSGi only increases the potential. What happened with BPEL adoption provides a good illustration of this:

There are lots of JEE management tools and technologies out there, with different levels of impact on application performance (ideally low enough that they are suitable for production systems). The extent to which enterprise Java has been instrumented, probed and analyzed is unprecedented. These tools are often focused on the performance more than the configuration/dependency aspects of the application, partly because that’s easier to measure. And while they are very useful, they struggle with the task of relating what they measure to a business view of the application, especially in the case of composite applications with many shared components. Enter BPEL. Like SCA, BPEL wasn’t designed for manageability. It was meant for increased productivity, portability and flexibility. It was designed to support the SOA vision of service re-use and to allow more tasks to be moved from Java coding to infrastructure configuration. All this it helps with indeed. But at the same time, it also provides very useful metadata for application management. Both in terms of highlighting the application flow (through activities) and in terms of clarifying the dependencies and associated policies (through partner links). This allowed a new breed of application management tools to emerge that hungrily consume BPEL process definitions and use them to better relate application management to the user-visible aspects of the application.

But the visibility provided by BPEL only goes so far, and soon the application management tools are back in bytecode instrumentation, heap analysis, transaction tracing, etc. Using a mix of standard mechanisms and “top secret”, “patent pending” tricks. In addition to all of their well-known benefits, SCA, OSGi and Spring also help fill that gap. They provide extra application metadata that can be used by application management tools to provide more application context to management tasks. A simple example is that SCA’s service/reference mechanism extends BPEL partner links to components not implemented with BPEL (and provides a more complete policy framework). Of course, all this metadata doesn’t just magically organize itself in an application management framework and there is a lot of work to harness its value (thus the “potential” qualifier I added to “manageability”). But SCA, OSGi and Spring can improve application management in ways similar to what BPEL does.
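As an illustration of the kind of metadata I am talking about, here is a hedged sketch using the SCA Java annotations (OSOA 1.0); the component and its interfaces are invented for the example:

import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Service;

// Hypothetical business interfaces, just for the example.
interface QuoteApproval { boolean approve(String quoteId); }
interface CreditCheck { boolean verify(String quoteId); }

// @Service and @Reference are declarative dependency metadata: a management
// tool can read "this component depends on a CreditCheck service" without
// bytecode instrumentation or transaction tracing.
@Service(QuoteApproval.class)
public class QuoteApprovalImpl implements QuoteApproval {

    @Reference
    protected CreditCheck creditCheck;

    public boolean approve(String quoteId) {
        return creditCheck.verify(quoteId);
    }
}

The same dependency also surfaces in the .composite file, where a management tool can pick it up without even loading the code.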

Here I am again, taking exciting middleware technologies and squeezing them to extract boring management value. But if you can, like me, get excited about these management aspects then you want to follow the efforts around the conjunction of these three technologies. I understand SCA, but I need to spend more time on OSGi and Spring. Maybe this post is my way of motivating myself to do it (I wish my mental processes were instrumented with better metadata so I could answer this question with more certainty – oh please shoot me now).

And while this is all exciting, part of me also wonders whether it’s not too early to risk connecting these specifications too tightly. I have seen too many “standards framework” kind of powerpoint slides that show how a bunch of under-development specifications would precisely work together to meet all the needs of the world. I may have even written one myself. If one thing is certain in that space, it’s that the failure rate is high and over-eager re-use and linkage between specifications kills. That was one of the errors of WSDM. For a contemporary version, look at this “Leveraging CMDBf” plan at Eclipse. I am very supportive of the effort to create an open-source implementation of the CMDBf specification, but mixing a bunch of other unproven and evolving specifications (in addition to CMDBf, I see WS-ResourceCatalog, SML and a “TBD” WS API which I can’t imagine will be anything other than WS-ResourceTransfer) is very risky. And of course IBM’s good old CBE. Was this HTML page auto-generated from an IBM “standards strategy” powerpoint document? But I digress…

Bonus question: what’s the best acronym to refer to OSGi+SCA+Spring? OSS? Taken (twice). SOS? Taken (and too desperate-sounding). SSO? Taken (twice). OS2? Taken. S2O? Available, as far as I can tell, but who wants a name so easily confused with the stinky and acid-rain-causing sulfur dioxide (SO2)? Any suggestion? Did I hear J3EE in the back of the room?

10 Comments

Filed under Everything, IT Systems Mgmt, OSGi, SCA, Specs, Spring, Standards

JSR262 (JMX over WS-Management) public review

If you care about exposing or accessing MBeans via WS-Management, now is a good time to read the public review draft of the JSR262 spec.

JSR262 is very much on the “manageability” side of the “manageability vs. management integration” chasm, which is not the most exciting side to me. But more commonality in manageability protocols is good, I guess, and this falls inside the WS-Management window of opportunity so it may help tip the balance.

There is also a white paper that does a nice job of retracing the history from JMX to the JMX Remote API to JSR 262 and the different efforts along the way to provide access to the JMX API from outside of the local JVM. The white paper is actually too accurate for its own good: it explains well that models and protocols should be orthogonal (there is a section titled “The Holy Grail of Management: Model, Data and Protocol Independence”) which only highlights the shortcomings of JSR262 in that regard.

In what looks from the outside like a wonderful exercise of “when you have a hammer” (and also “when you work in a hammer factory”, like the JCP), this whole Java app management effort has been API-driven rather than model-driven. What we don’t get out of all this is a clearly defined metamodel and a set of model elements for Java apps with an XML serialization that can be queried and updated. What we do get is a mapping of “WS-Management protocol operations to MBean and MBean server operations” that “exposes JMX technology MBeans as WS-Management resources”.
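To illustrate, here is what the client side might look like (a sketch: the endpoint address is made up, and the exact “service:jmx:ws:” URL form is my assumption based on the draft’s connector; check the spec for the real syntax):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class Jsr262Client {
    public static void main(String[] args) throws Exception {
        // Hypothetical JSR262 endpoint address.
        JMXServiceURL url = new JMXServiceURL("service:jmx:ws://example.com:9999/jmx");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection mbsc = connector.getMBeanServerConnection();

        // The conversation is still framed in JMX API terms (ObjectName,
        // attribute names), even though WS-Management messages flow underneath.
        Object heap = mbsc.getAttribute(
            new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
        System.out.println("HeapMemoryUsage: " + heap);
        connector.close();
    }
}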

Yes it now goes over HTTP so it can more easily fool firewalls, but I have yet to see such a need in manageability scenarios (other than from hackers, who I am sure are very encouraged by the development). Yes it is easier for a non-Java endpoint to interact with a JSR262 endpoint than before, but this is an incremental improvement above the previous JMX over RMI over IIOP because the messages involved still reflect the underlying API.

Maybe that’s all ok. There may very well not be much management integration possible at the level of details provided by JMX APIs. Management integration is probably better served at the SCA and OSGi levels anyway. Having JSR262 just provide incremental progress towards easier Java manageability by HP OVO and the like may be all we should ask of it. I told some of the JSR262 guys, back when they were creating their own XML over HTTP protocol to skirt the WS-Management vs. WSDM debate, that they should build on WS-Management and I am glad they took that route (no idea how much influence my opinion had on this). I just can’t get really excited about the whole thing.

All the details on the current status of JSR262 on Jean-Francois Denise’s blog.

6 Comments

Filed under Everything, JMX, Manageability, Mgmt integration, Specs, Standards, WS-Management

Guest on the Redmonk IT management podcast

Coté and John Willis invited me as a guest on their weekly Redmonk IT management podcast, the 6th since they started. I believe I am the first to be invited on it, so if they stop having guests I’ll know what this means about my performance. Here is the MP3 (58MB, approximately 1 hour).

If you are going to listen to it, do me a favor and skip the first 5 minutes. By minute 6, the morning coffee has kicked in and I start to make a bit more sense. All in all, I’d rather you don’t listen to it and read Coté’s excellent overview instead. It’s not that I am embarrassed that I have a French accent, it’s that I sound like Arnold Schwarzenegger trying to fake a French accent. Please tell me it’s because of Skype’s compression algorithm.

Insecurities aside, I had a very good time with Coté and John. These guys’ knowledge of the IT management industry is both encyclopedic and very practical. Hopefully I was able to contribute some insights on the need for better integration in IT management and some of the current efforts to achieve it.

Thanks Coté and John for inviting me. And, at the risk of sounding like Arnold again, I hope that “I’ll be back” on your podcast.

2 Comments

Filed under Everything, IT Systems Mgmt, Podcast

Fog Computing

As happened with Salesforce.com a couple of years ago, Amazon S3 is having serious problems serving its customers today. Like Salesforce.com at the time, Amazon is criticized for not being transparent enough about it.

Right now, “cloud computing” is also “fog computing”. There is very little visibility (if any) into the infrastructure that is being consumed as a service. Part of this is a feature (a key reason for using these services is freedom from low-level administration) but part of it is a defect.

The clamor for Amazon to provide more updates about the outage on the AWS blog is a bit misplaced in that sense. Sure, that kind of visibility (“well folks, it was bring-your-hamster-to-work day at the Amazon data center today and turns out they love chewing cables. Our bad. The local animal refuge is sending us all their cats to help deal with the mess. Stay tuned”) gives a warm fuzzy (!) feeling but that’s not very actionable.

It’s not a matter for Amazon of giving access to its entire management stack (even in view-only mode) to its customers. It’s a matter of extracting customer-facing metrics that are relevant and exposing them in a way that can be consumed by the customer’s IT management tools, so they can be integrated into overall IT decisions. And it’s not just monitoring, even though that’s a good start. Saying “I don’t want to know how you run the service, all I care about is what you do for me” only takes you so far in enterprise computing. This opacity is a great way to hide single points of failure:

I predict (as usual, no date) that we will see companies that thought they were hedging their bets by using two different SaaS providers only to realize, on the day Amazon goes down again, that both SaaS providers were hosting on Amazon EC2 (or equivalent). Or, on the day a BT building catches fire, that both SaaS providers had their data centers there.

Just another version of “for diversification, I had a high yield fund and a low risk fund. I didn’t really read the prospectus. Who would have guessed that they were both loaded with mortgage debt?”
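Back to the actionable part: to be concrete about what “metrics consumable by the customer’s IT management tools” could mean, here is a sketch (the status URL, its one-metric-per-line format and the alert threshold are all hypothetical):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class ProviderHealthPoller {
    public static void main(String[] args) throws Exception {
        // Hypothetical machine-readable status feed, e.g. "storage.error-rate 0.02"
        URL status = new URL("https://status.example-provider.com/metrics.txt");
        BufferedReader in = new BufferedReader(new InputStreamReader(status.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            String[] parts = line.split(" ");
            double value = Double.parseDouble(parts[1]);
            // Feed the metric into the customer's own management tool, where it
            // can drive the same alerting and escalation as internal systems.
            if (parts[0].endsWith("error-rate") && value > 0.01) {
                System.out.println("ALERT: " + parts[0] + " = " + value);
            }
        }
        in.close();
    }
}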

More about IT management in a utility computing world in a previous entry.

[UPDATED: Things have improved a bit since writing this. Amazon now has a status panel. But it’s still limited to monitoring. Today it’s Google App Engine who is taking the heat.]

Comments Off on Fog Computing

Filed under Everything, Governance, IT Systems Mgmt, Utility computing

Comparing Joe Gregorio’s RESTful Partial Updates to WS-ResourceTransfer

Joe Gregorio just proposed a way to do RESTful partial updates. I am not in that boat anymore but, along with my then-colleagues from HP, Microsoft, IBM and Intel, I have spent a fair bit of time trying to address the same problem, albeit in a SOAP-based way. That was WS-ResourceTransfer (WS-RT) which has been out as a draft since summer 2006. In a way, Joe’s proposal is to AtomPub what WS-ResourceTransfer is to WS-Transfer, retrofitting a partial resource update on top of a “full update” mechanism. Because of this, I read his proposal with interest. I have mentioned before that WS-RT isn’t the best-looking cow in the corral so I was ready to like Joe’s presumably simpler approach.

I don’t think it fits the bill for partial update requirements in IT management scenarios.

This is not a REST versus SOAP kind of thing and I am not about to launch in a “how do you do end to end encryption and reliable messaging” tirade. I think it is perfectly possible to meet most management scenarios in a RESTful way. And BTW, I think most management scenarios do not need partial updates at all.

But for those that do, there is just too little flexibility in Joe’s proposal. Not that this makes it a bad proposal; I don’t have much of an idea of what his use cases are. The proposal might be perfectly adequate for them. But when I read his proposal, it’s IT management I was mentally trying to apply it to, and it fell short in that regard.

Joe’s proposal requires the server to annotate any element that can be updated. On the positive side, this “puts the server firmly in control of what sub-sections of a document it is willing to handle partial updates on” which can indeed be useful. On the negative side it is not very flexible. If you are interacting with a desired-state controller, the rules that govern what you can and cannot change are often a lot more complex than “you can change X, you can’t change Y”. Look at SML validation for an example.

Another aspect is that the requester has to explicitly name the elements to replace. That could make for a long list. And it creates a risk of race conditions. If I want to change all the elements that have an attribute “foo” with a value “bar” I need to retrieve them first so that I can find their xml:id. Then I need to send a message to update them. But if someone changed them in the meantime, they may not have the “bar” value anymore and I am going to end up updating elements that should not be updated. Again, not necessarily a problem in some domains. An update mechanism that lets you point at the target set via something like XPath helps prevent this round-tripping (at a significant complexity cost unfortunately, something WS-RT tries to address with simplified dialects of XPath).
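For illustration, here is a sketch of the alternative: the update is expressed as an XPath predicate and applied in one step against the current state of the document, so there is no window for the race described above (plain JDK DOM and XPath; the file and attribute names are made up):

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class XPathTargetedUpdate {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder().parse("resource.xml"); // hypothetical resource

        // One message, evaluated against current state: update every element
        // whose foo attribute is currently "bar". No need to fetch xml:ids
        // first, hence no stale-id race condition.
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList targets = (NodeList) xpath.evaluate(
            "//*[@foo='bar']", doc, XPathConstants.NODESET);
        for (int i = 0; i < targets.getLength(); i++) {
            ((Element) targets.item(i)).setAttribute("foo", "baz");
        }
        System.out.println(targets.getLength() + " elements updated");
    }
}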

Joe volunteers another obvious limitation when he writes that “this doesn’t solve all the partial update scenarios, for example this doesn’t help if you have a long sub-list that you want to append to”. Indeed. And it’s even worse if you care about element order. Not something that we normally care much about in IT management (UML, MOF, etc don’t have a notion of property order) but the overuse of XSD in system modeling has resulted in order being important to avoid validation failures (because it’s really hard to write an XSD that doesn’t care about order even though it is often not meaningful to the domain being modeled).

In early 2007, I wrote an implementation of WS-RT and in the process I found many gaps in the specification, especially in the PUT operation. It is not the ideal answer in any way. If one were to try to fix it, a good place to start might be to make the specification a bit less flexible (e.g. restricting the change granularity to the level of an element, not an attribute and not a text node). There is plenty of room to find the simplicity/flexibility sweet spot for IT management scenarios between what WS-RT tries to offer and what Joe’s proposal offers.

Comments Off on Comparing Joe Gregorio’s RESTful Partial Updates to WS-ResourceTransfer

Filed under Everything, IT Systems Mgmt, Specs, WS-ResourceTransfer

Microsoft ditches SML, returns to SDM?

I gave in to the temptation of a tabloid-style title for this post, but the resulting guilt forces me to quickly explain that it is speculation and not based on any information other than what is in the links below (none of which explicitly refers to SDM or SML). And of course I work for a Microsoft competitor, so keep your skeptic hat on, as always.

The smoke that makes me suspect an SML/SDM fire comes from this post on the Service Center team blog. In it, the product marketing manager for System Center Service Manager announces that the product will not ship until 2010. Here are the reasons given.

The relevant feedback here can be summarized as:

  • Improve performance
  • Enhance integration with the rest of the System Center product family and with the wider Microsoft product offering

To meet these requirements we have decided to replace specific components of the Service Manager infrastructure. We will also take this opportunity to align the product with the rest of the System Center family by taking advantage of proven technologies in use in those products.

Let’s rewind a little bit and bring some context. Microsoft developed the Service Definition Model (SDM) to try to capture a consistent model of IT resources. There are several versions of SDM out there, and one of them is currently used by Operations Manager. It is how you capture domain-specific knowledge in a Management Pack (Microsoft’s name for a plug-in that lets you bring a new target type to Operations Manager). In order to get more people to write management packs that Operations Manager can consume, Microsoft decided to standardize SDM. It approached companies like IBM and HP and the SDM specification became SML. Except that there was a lot in SDM that looked like XSD, so SML was refactored as an extension of XSD (pulling in additions from Schematron) rather than a more stand-alone, management-specific approach like SDM. As I’ve argued before (look for the “XSD in SML” paragraph), in retrospect this was the wrong choice. SML was submitted to W3C and is now well advanced towards completion as a standard. Microsoft was forging ahead with the transition from SDM to SML and when they announced their upcoming CMDB they made it clear that it would use SML as its native metamodel (“we’re taking SML and making it the schema for CMDB” said Kirill Tatarinov who then headed the Service Center group).

Back to the present time. This NetworkWorld article clarifies that it’s a redesign of the CMDB part of Service Center that is causing the delay: “beta testing revealed performance and scalability issues with the CMDB and Microsoft plans to rebuild its architecture using components already used in Operations Manager.” More specifically, Robert Reynolds, a “group product planner for System Center”, explains that “the core model-based data store in Operations Manager has the basic pieces that we need”. That “model-based data store” is the one that uses SDM. As a side note, I would very much like to know what part of the “performance and scalability issues” comes from using XSD (where a lot of complications come from features not relevant for systems management).

Thus the “enhance integration with the rest of the System Center product family” in the original blog post reads a lot like dumping SML as the metamodel for the CMDB in favor of SDM (or an updated version of SDM). QED. Kind of.

In addition to the problems Microsoft uncovered with the Service Center Beta, the upcoming changes around project Oslo might have further weakened the justification for using SML. In another FUD-spreading blog post, I hypothesized about what Oslo means for SML/CML. This recent development with the CMDB reinforces that view.

I understand that there is probably more to this decision at Microsoft than the SML/SDM question but this aspect is the one that may have an impact not just on Microsoft customers but on others who are considering using SML. In the larger scheme of things, the overarching technical question is whether one metamodel (be it SDM, SML, MOF or something else) can efficiently be used to represent models across the entire IT stack. I am growing increasingly convinced that it cannot.

4 Comments

Filed under CMDB, Everything, IT Systems Mgmt, Microsoft, Oslo, SML, Specs, Standards

IT management for the personal CIO

In the previous post, I described how you can easily run your own web applications to beneficially replace many popular web sites. It was really meant as background for the present article, which is more relevant to the “IT management” topic of this blog.

Despite my assertion that recent developments (and the efforts of some hosting providers) have made the proposition of running your own web apps “easy”, it is still not as easy as it should be. What IT management tools would a “personal CIO” need to manage their personal web applications? Here are a few scenarios:

  • get a catalog of available applications that can be installed and/or updated
  • analyze technical requirements (e.g. PHP version) of an application and make sure it can be installed on your infrastructure
  • migrate data and configuration between comparable applications (or different versions of the same application)
  • migrate applications from one hosting provider to another
  • back-up/snapshot data and configuration
  • central access to application stats/logs in simple format
  • uptime, response time monitoring
  • central access to user management (share users and configure across all your applications)
  • domain name management (registration, renewal)

As the CIO of my personal web applications, I don’t need to see Linux patches that need to be applied or network latency problems. If my hosting provider doesn’t take care of these without me even noticing, I am moving to another provider. What I need to see are the controls that make sense to a user of these applications. Many of the bullets listed above correspond to capabilities that are available today, but in a very brittle and hard-to-put-together form. My hosting provider has a one-click update feature but they have a limited application catalog. I wouldn’t trust them to measure uptime and response time for my sites, but there are third-party services that do it. I wouldn’t expect my hosting provider to make it easy to move my apps to a new hosting provider, but it would be nice if someone else offered this. Etc. A neutral web application management service for the “personal CIO” could bring all this together and more. While I am at it, it could also help me back up/manage my devices and computers at home and manage/monitor my DSL or cable connection.
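For the uptime/response-time item, the “personal CIO” version can be very small. A sketch, with placeholder URLs:

import java.net.HttpURLConnection;
import java.net.URL;

public class PersonalUptimeCheck {
    public static void main(String[] args) throws Exception {
        // The personal CIO's "application portfolio" (placeholder URLs).
        String[] apps = {"https://blog.example.com", "https://photos.example.com"};
        for (String app : apps) {
            long start = System.currentTimeMillis();
            HttpURLConnection conn = (HttpURLConnection) new URL(app).openConnection();
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            int code = conn.getResponseCode();
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(app + " -> HTTP " + code + " in " + elapsed + " ms");
            conn.disconnect();
        }
    }
}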

1 Comment

Filed under Everything, IT Systems Mgmt, Portability, Tech

My web apps and me

Registering a domain name: $10 per year
Hosting it with all the features you may need: $80 per year
Controlling your on-line life: priceless

To be frank, the main reason that I do not use Facebook or MySpace is that I am not very social to start with. But, believe it or not, I have a few friends and family members with whom I share photos and personal stories. Not to mention this blog, for different kinds of friends and different kinds of stories (you are missing out on the cute toddler photos).

Rather than doing so on Facebook, MySpace, BlogSpot, Flickr, Picasa or whatever the Microsoft copies of these sites are, I maintain a couple of blogs and on-line photo albums on vambenepe.com. They all provide user access control and RSS-based syndication so no-one has to come to vambenepe.com just to check on them. No annoying advertising, no selling out of privacy and no risk of being jerked around by bait-and-switch (or simply directionless) business strategies (“in order to serve you better, we have decided that you will no longer be able to download the high-resolution version of your photos, but you can use them to print with our approved print-by-mail partners”). Have you noticed how people usually do not say “I use Facebook” but rather “I am on Facebook” as if riding a mechanical bull?

The interesting thing is that it doesn’t take a computer genius to set things up in such a way. I use Dreamhost and it, like similar hosting providers, gives you all you need. From the super-easy (e.g. they run WordPress for you) to the slightly more personal (they provide a one-click install of your own WordPress instance backed by your own database) to the do-it-yourself (they give you a PHP or RoR environment to create/deploy whatever app you want). Sure you can further upgrade to a dedicated server if you want to install a servlet container or a CodeGears environment, but my point is that you don’t need to come anywhere near this to own and run your own on-line life. You never need to see a Unix shell, unless you want to.

This is not replacing Facebook lock-in with Dreamhost lock-in. We are talking about an open-source application (WordPress) backed by a MySQL database. I can move it to any other hosting provider. And of course it’s not just blogging (WordPress) but also wiki (MediaWiki), forum (phpBB), etc.

Not that every shiny new on-line service can be replaced with a self-hosted application. You may have to wait a bit. For example, there is more to Facebook than a blog plus photo hosting. But guess what: it sounds like Bob Bickel is on the case. I very much hope that Bob and the ex-Bluestone gang aren’t just going to give us a “Facebook in a box” but also something more innovative, something that makes it easy for people to run and own their side of a Facebook-like presence, with the ability to connect with other implementations for the social interactions.

We have always been able to run our own web applications, but it used to be a lot of work. My college nights were soothed by the hum of an always-running Linux server (actually a desktop used as a server) under my desk, on which I ran my own SMTP server and HTTPd. My daughter’s “soothing ocean waves” baby toy sounds just the same. There were no turnkey web apps available at the time; I wrote and ran my own Web-based calendar management application in Python. When I left campus, I could have bought some co-location service but it was a hassle and not cheap, so I didn’t bother [*].

I have a lot less time (and Linux administration skills) now than when I left university, so how come it is now attractive for me to run my own web apps again? What changed in the environment?

The main driver is the rise of the LAMP stack and especially PHP. For all the flaws of the platform and the ugliness of the code, PHP has sparked a huge ecosystem. Not just in terms of developers but also of administrators: most hosting providers are now very comfortable offering and managing PHP services.

The other driver is the rise of virtualization. Amazon hosts Xen images for you. But it’s not just the hypervisor version of virtualization. My Dreamhost server, for example, is not a Xen or VMware virtual machine. It’s just a regular server that I share with other users, but Dreamhost has created an environment that provides enough isolation from the other users to meet my needs as an individual. The poor man’s virtualization, if you will. Good enough.

These two trends (PHP and virtualization) have allowed Dreamhost and others to create an easy-to-use environment in which people can deploy and run web applications. And it becomes easier every day for someone to compete with Dreamhost on this. Their value to me is not in the hardware they run. It’s in the environment they provide, which saves me from the low-level LAMP administration I don’t have time for. Someone could create such an environment and run it on top of Amazon’s utility computing offering. Which is why I am convinced that such environments will be around for the foreseeable future, Dreamhost or no Dreamhost. Running your own web applications won’t be just for geeks anymore, any more than using a GPS is.

Of course this is not a panacea and it won’t let you capture all aspects of your on-line life. You can’t host your eBay ratings. You can’t host your Amazon reviewer rank. It takes more than just technology to break free, but technology has underpinned many business changes before. In addition to the rise of LAMP and virtualization already mentioned, I am watching with interest the different efforts around data portability: dataportability.org, OpenID, OpenSocial, the Facebook API… Except for OpenID, these efforts are driven by Web service providers hoping to channel the demand for integration. But if they succeed, they should give rise to open-source applications you can host on your own to enjoy these services without the lock-in. One should also watch tools like WSO2’s Mashup Server and JackBe Presto for their potential to rescue captive data and exploit freed data. On the “social networks” side, the RDF community has been buzzing recently with news that Google is now indexing FOAF documents and exposing the content through its OpenSocial interface.
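To give a sense of how approachable this is, here is a short sketch using the rdflib Python library to read a FOAF document and list who its owner claims to know. The document URL is a placeholder:

```python
from rdflib import Graph, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
# Placeholder URL; any publicly-reachable FOAF document works the same way.
g.parse("http://example.org/foaf.rdf", format="xml")

# For every person described in the document, print their acquaintances.
for person in g.subjects(FOAF.knows, None):
    for friend in g.objects(person, FOAF.knows):
        name = g.value(friend, FOAF.name)
        print(name if name else friend)
```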

Bottom line: when you are offered the chance to create a page, account or URL that will represent you or your data, take a second to ask yourself what it would take to do the same thing under your own domain name. You don’t need to be a survivalist freak hiding in a mountain cabin in Montana (“it’s been eight years now, I wonder if they’ve started to rebuild cities after the Y2K apocalypse…”) to see value in more self-reliance on the web, especially when it can be easily achieved.

Yes, there is a connection between this post and the topic of this blog, IT management. It will be revealed in the next post (note to self: work on your cliffhangers).

[*] Some of my graduating colleagues took their machines to the dorm basement and plugged them into a switch there. Those Linux Slackware machines had amazing uptimes of months and years. Their demise came not from bugs, hacking or component failures (even when cats made their litter inside a running computer with an open case) but from the fire marshal, and only after a couple of years (the network admins had agreed to turn a blind eye).

[UPDATED 2008/7/7: Oh yeah, another reason to run your own apps is that you won’t end up threatened with jail time for violating the terms of service. You can still get in trouble if you misbehave, but they’ll have to charge you with something real rather than take a whatever-sticks approach.]

[UPDATED 2009/12/30: Ringside (the Bob Bickel endeavor that I mention above), closed a few months after this post. Too bad. We still need what they were working on.]

2 Comments

Filed under Everything, Portability, Tech, Virtualization

David Linthicum on SaaS, enterprise architecture and management

David Linthicum from ZapThink (the world’s most prolific purveyor of analyst quotes for SOA-related press releases) recently wrote an article explaining that “Enterprise Architects Must Plan for SaaS“. A nice, succinct overview. I assume there is a lot more content in the keynote presentation that the article is based on.

The most interesting part from a management perspective is the paragraph before last:

Third, get in the mindset of SaaS-delivered systems being enterprise applications, knowing they have to be managed as such. In many instances, enterprise architects are in a state of denial when it comes to SaaS, despite the fact that these SaaS-delivered systems are becoming mission-critical. If you don’t believe that, just see what happens if Salesforce.com has an outage.

I very much agree with this view and with the resulting requirements for us vendors of IT management tools. It is of course not entirely new; in many respects it is just a variant of the existing challenges of managing distributed applications, challenges that SOA practices were designed to help address. I wrote a slightly more specific description of this requirement in an earlier post:

If my business application calls a mix of internal services, SaaS-type services and possibly some business partner services, managing SLAs and doing impact/root cause analysis works a lot better if you get some management information from these other services. Whether it is offered by the service owner directly, by a proxy/adapter that you put on your end or by a neutral third party in charge of measuring/enforcing SLAs. There are aspects of this that are ‘regular’ SOA management challenges (i.e. that apply whenever you compose services, whether you host them yourself or not) and there are aspects (security, billing, SLA, compliance, selection of partners, negotiation) that are handled differently in the situation where the service is consumed from a third party.
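Even something as crude as polling each dependency’s management endpoint gets you part of the way there. A minimal sketch in Python, with hypothetical endpoints and a made-up JSON health format:

```python
import json
import urllib.request

# Hypothetical management endpoints: one internal service, one SaaS
# dependency. Both the URLs and the JSON shape are assumptions.
ENDPOINTS = {
    "internal-billing": "http://mgmt.example.internal/billing/health",
    "saas-crm": "https://status.example-saas.com/api/health",
}

def health(name, url):
    """Fetch one dependency's health; report unreachable endpoints too."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return name, json.load(resp).get("status", "unknown")
    except Exception as exc:
        return name, f"unreachable ({exc})"

for name, url in ENDPOINTS.items():
    print(health(name, url))
```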

With regard to the first two “tricks” listed in David’s article, people should take a look at what the Oracle AIA Foundation Pack and Industry Reference Models have to offer. They address application integration in general, not specifically SaaS scenarios, but most of the semantics/interface/process concerns are not specific to SaaS. For example, the Siebel CRM On Demand Integration Pack for E-Business Suite (catchy name, isn’t it) provides integration between a hosted application (Siebel CRM On Demand) and an on-premises application (Oracle E-Business Suite). Efficiently managing such integrated systems (whether you buy, build or rent the applications and the integration) is critical.

Comments Off on David Linthicum on SaaS, enterprise architecture and management

Filed under Everything, IT Systems Mgmt, Mgmt integration, Oracle, SaaS

DMTF members as primary voters?

I just noticed this result from the 2007 DMTF member survey (taken a year ago but, as far as I can tell, just released now). When asked what their “most important interoperability priority” is, members made it pretty clear that they want the current CIM/WBEM infrastructure fixed and polished. They seem a lot less interested in the fancy new SOAP-based protocols, and even less in using any model other than CIM.

It will be interesting to see what this means for new DMTF activities, such as CMDBf or WS-RC, which are supposed to be model-neutral. A few possibilities:

  • the priorities of the members change over time to make room for these considerations
  • turn-over (or increase) in membership brings in members with a different perspective
  • the model-neutral activities slowly get more and more CIM-influenced
  • rejection by the DMTF auto-immune system

My guess is that the DMTF leadership is hoping for #1 and/or #2 while the current “base” (to borrow from the US election-season language) wouldn’t mind #3 or #4. I am expecting some mix of #2 and #3.

Pushing the analogy with current US political events further than is reasonable, one can see a correspondence with the Republican primary:

  • CIM/WBEM is Huckabee, favored by the base
  • CMDBf/WS-RC/WS-Management etc. is Romney, the choice of the party leadership
  • In the end, some RDF- and HTTP-based, integration-friendly approach comes from behind and takes the prize (McCain)

Then you still have to win the general election (i.e. industry adoption of whatever the DMTF cooks up).

[UPDATED 2008/2/7: the day after I write this entry, Romney quits the race. Bad omen for CMDBf and WS-RC? ;-) ]

Comments Off on DMTF members as primary voters?

Filed under CMDB Federation, CMDBf, DMTF, Everything, Standards, WS-Management

Going dot-postal

According to this article, the Universal Postal Union is in talks with the ICANN to get its own “.post” TLD. Because, you see, “restricting the ‘.post’ domain name to postal agencies or groups that provide postal services would instill trust in Web sites using such names“. If you’re wondering what these “groups that provide postal services” are, keep reading: “the U.N. agency also could assign names directly to mail-related industries, such as direct marketing and stamp collecting“. I have nothing against stamp collectors, but direct marketing? So much for the “trust” part. Just call it “.spam” and be done with it.

I doubt that having to use a “.com” name has ever registered as a hindrance for FedEx, DHL or UPS in providing web-based services. And these organizations have been offering on-line package tracking and other services since before many of the postal organizations even had a way to locate post offices on their web site. That being said, http://com.post/ would be a great URL for a blog.

If the UPU really wants to innovate, something more interesting than a boring TLD would be a URI scheme for postal mail. Something like post:USA/CA/94065/Redwood%20City/Oracle%20Parkway/500/William%20Vambenepe, but in a way that allows for international variations. That would be a nice complement to the “geo:” URI scheme.
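For illustration, here is what building and parsing such “post:” URIs could look like in Python. The scheme is, of course, entirely hypothetical:

```python
from urllib.parse import quote, unquote

def make_post_uri(*components):
    """Build a (hypothetical) 'post:' URI from country down to addressee."""
    # safe="" so that a "/" inside a component gets escaped too
    return "post:" + "/".join(quote(c, safe="") for c in components)

def parse_post_uri(uri):
    """Split a 'post:' URI back into its address components."""
    assert uri.startswith("post:")
    return [unquote(c) for c in uri[len("post:"):].split("/")]

uri = make_post_uri("USA", "CA", "94065", "Redwood City",
                    "Oracle Parkway", "500", "William Vambenepe")
# -> post:USA/CA/94065/Redwood%20City/Oracle%20Parkway/500/William%20Vambenepe
```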

Now, should I categorize this as “off-topic”? What would the IT management angle be? Let’s see. Maybe as a way to further integrate the handling of virtual and physical servers? Kind of a stretch (being able to represent the destination as a URI in both cases doesn’t mean that delivering a physical server to an address is the same as provisioning a new VM in a hypervisor). Maybe as an additional notification endpoint (“if the application crashes, don’t send an email, send me a letter instead”)? As if. Alright, off-topic it is.

Comments Off on Going dot-postal

Filed under Everything, Off-topic