Category Archives: Conference

Exalogic, EC2-on-OVM, Oracle Linux: The Oracle Open World early recap

Among all the announcements at Oracle Open World so far, here is a summary of the ones I was most impatient to blog about.

Oracle Exalogic Elastic Cloud

This was the largest part of Larry’s keynote; he called it “one big honkin’ cloud”. An impressive piece of hardware (360 2.93GHz cores, 2.8TB of RAM, 960GB SSD, 40TB disk for one full rack) with excellent InfiniBand connectivity between the nodes. And you can extend the InfiniBand connectivity to other Exalogic and/or Exadata racks. The whole package is optimized for the Oracle Fusion Middleware stack (WebLogic, Coherence…) and managed by Oracle Enterprise Manager.

This is really just the start of a long lineage of optimized, pre-packaged, simplified (for application administrators and infrastructure administrators) application platforms. Management will play a central role and I am very excited about everything Enterprise Manager can and will bring to it.

If “Exalogic Elastic Cloud” is too taxing to say, you can shorten it to “Exalogic” or even just “EL”. Please, just don’t call it “E2C”. We don’t want to get into a trademark fight with our good friends at Amazon, especially since the next important announcement is…

Run certified Oracle software on OVM at Amazon

Oracle and Amazon have announced that AWS will offer virtual machines that run on top of OVM (Oracle’s hypervisor). Many Oracle products have been certified in this configuration; AMIs will soon be available. There is a joint support process in place between Amazon and Oracle. The virtual machines use hard partitioning and the licensing rules are the same as those that apply if you use OVM and hard partitioning in your own datacenter. You can transfer licenses between AWS and your data center.

One interesting aspect is that there is no extra fee on Amazon’s part for this. Which means that you can run an EC2 VM with Oracle Linux on OVM (an Oracle-tested combination) for the same price (without Oracle Linux support) as some other Linux distribution (also without support) on Amazon’s flavor of Xen. And install any software, including non-Oracle, on this VM. This is not the primary intent of this partnership, but I am curious to see if some people will take advantage of it.

Speaking of Oracle Linux, the next announcement is…

The Unbreakable Enterprise Kernel for Oracle Linux

In addition to the RedHat-compatible kernel that Oracle has been providing for a while (and will keep supporting), Oracle will also offer its own Linux kernel. I am not enough of a Linux geek to get teary-eyed about the birth announcement of a new kernel, but here is why I think this is an important milestone. The stratification of the application runtime stack is largely a relic of the past, when each layer saw enough innovation to justify picking the layers separately and combining them as you saw fit. Nowadays, the innovation is not in the hypervisor, in the OS or in the JVM as much as it is in how effectively they all combine. JRockit Virtual Edition is a clear indicator of things to come. Application runtimes will eventually be highly integrated and optimized. No more scheduler on top of a scheduler on top of a scheduler. If you squint, you’ll be able to recognize aspects of a hypervisor here, aspects of an OS there and aspects of a JVM somewhere else. But it will be mostly of interest to historians.

Oracle has by far the most expertise in JVMs and over the years has built a considerable amount of expertise in hypervisors. With the addition of Solaris and this new milestone in Linux access and expertise, what we are seeing is the emergence of a company for which there will be no technical barrier to innovation in making all these pieces work efficiently together. And, unlike many competitors who derive most of their revenues from parts of this infrastructure, no revenue-protection handcuffs hampering innovation either.

Fusion Apps

Larry also talked about Fusion Apps, but I believe he plans to spend more time on this during his Wednesday keynote, so I’ll leave this topic aside for now. Just remember that Enterprise Manager loves Fusion Apps.

And what about Enterprise Manager?

We don’t have many attention-grabbing Enterprise Manager product announcements at Oracle Open World 2010, because we had a big launch of Enterprise Manager 11g earlier this year, in which a lot of new features were released. Technically this is no longer Oracle Open World news, but many attendees have not seen these features yet, so we are busy giving demos, hands-on labs and presentations. From an application and middleware perspective, we focus on end-to-end management (e.g. from user experience to BTM to SOA management to Java diagnostics to SQL) for faster resolution, application lifecycle integration (provisioning, configuration management, testing) for lower TCO, and unified coverage of all the key parts of the Oracle portfolio for productivity and reliability. We are also sharing some plans and our vision on topics such as application management, Cloud, support integration, etc. But in this post, I have chosen to only focus on new product announcements. Things that were not publicly known 48 hours ago. I am also not covering JavaOne (see Alexis). There is just too much going on this week…

Just kidding, we like it this way. And so do the customers I’ve been talking to.

Comments Off on Exalogic, EC2-on-OVM, Oracle Linux: The Oracle Open World early recap

Filed under Amazon, Application Mgmt, Cloud Computing, Conference, Everything, Linux, Manageability, Middleware, Open source, Oracle, Oracle Open World, OVM, Tech, Trade show, Utility computing, Virtualization, Xen

“Freeing SaaS from Cloud”: slides and notes from Cloud Connect keynote

I got invited to give a short keynote presentation during the Cloud Connect conference this week at the Santa Clara Convention Center (thanks Shlomo and Alistair). Here are the slides (as PPT and PDF). They are visual support for my bad jokes rather than a medium for the actual message. So here is an annotated version.

I used this first slide (a compilation of representations of the 3-layer Cloud stack) to poke some fun at this ubiquitous model of the Cloud architecture. Like all models, it’s neither true nor false. It’s just more or less useful to tackle a given task. While this 3-layer stack can be relevant in the context of discussing economic aspects of Cloud Computing (e.g. Opex vs. Capex in an on-demand world), it is useless and even misleading in the context of some more actionable topics for SaaS: chiefly, how you deliver such services, how you consume them and how you manage them.

In those contexts, you shouldn’t let yourself get too distracted by the “aaS” aspect of SaaS; focus instead on what it really is.

Which is… a web application (by which I include both HTML access for humans and programmatic access via APIs). To illustrate this point, I summarized the content of this blog entry. No need to repeat it here. The bottom line is that any distinction between SaaS and POWA (Plain Old Web Applications) is at worst arbitrary and at best concerned with the business relationship between the provider and the consumer rather than technical aspects of the application.

Which means that for most technical aspects of how SaaS is delivered, consumed and managed, what you should care about is that you are dealing with a Web application, not a Cloud service. To illustrate this, I put up the…

… guillotine slide. Which is probably the only thing people will remember from the presentation, based on the ample feedback I got about it. It probably didn’t hurt that I also made fun of my country of origin (you can never go wrong making fun of France), saying that the guillotine was our preferred way of solving any problem and also the last reliable piece of technology invented in France (no customer has ever come back to complain). Plus, enough people in the audience seemed to share my lassitude with the 3-layer Cloud stack to find its beheading cathartic.

Come to think of it, there are more similarities. The guillotine is to the axe what Cloud Computing is to traditional IT. So I may use it again in Cloud presentations.

Of course this beheading is a bit excessive. There are some aspects for which the IaaS/PaaS/SaaS continuum makes sense, e.g. around security and compliance, in areas related to multi-tenancy, the delegation of control to a third party, etc. To the extent that these concerns can be addressed consistently across the Cloud stack, they should be.

But focusing on these “Cloud” aspects of SaaS is missing the forest for the trees.

A large part of the Cloud value proposition is increased flexibility. At the infrastructure level, being able to provision a server in minutes rather than days or weeks, being able to turn one off and instantly stop paying for it, are huge gains in flexibility. It doesn’t work quite that way at the application level. You rarely have 500 new employees joining overnight who need to have their email and CRM accounts provisioned. This is not to minimize the difficulties of deploying and scaling individual applications (any improvement is welcome on this). But those difficulties are not what is crippling the ability of IT to respond to business needs.

Rather, at the application level, the true measure of flexibility is the control you maintain on your business processes and their orchestration across applications. How empowered (or scared) you are to change them (either because you want to, e.g. entering a new business, or because you have to, e.g. a new law). How well your enterprise architecture has been defined and implemented. How much visibility you have into the transactions going through your business applications.

It’s about dealing with composite applications, whether their components are on-premises or “in the Cloud”. Even applications like Salesforce.com see a large number of invocations come through their APIs rather than their HTML front-end. Which means that there are some business applications calling them (either other SaaS, custom applications or packaged applications with an integration to Salesforce). Which means that the actual business transactions go across a composite system and have to be managed as such, independently of the deployment model of each participating application.

[Side note: One joke that fell completely flat was that it was unlikely that the invocations of Salesforce through the Web services APIs were the work of sales people telnetting to port 80 and typing HTTP and SOAP headers by hand. Maybe I spoke too quickly (as I often do), or the audience was less nerdy than I expected (though I recognized some high-ranking members of the nerd aristocracy). Or maybe they all got it but didn’t laugh because I forgot to take encryption into account?]

At this point I launched into a very short summary of the benefits of SOA governance/management, real user experience monitoring, BTM and application-centric IT management in general. Which is very succinctly summarized on the slide by the “SOA” label on the receiving bucket. I would have needed another 10 minutes to do this subject justice. Maybe at Cloud Connect 2011? Alistair?

This picture of me giving the presentation at Cloud Connect is the work of Alex Dunne.

The guillotine picture is the work of Rusty Boxcars who didn’t just take the photo but apparently built the model (yes it’s a small-size model, look closely). Here it is in its original, unedited, glory. My edited version is available under the same CC license to anyone who wants to grab it.

14 Comments

Filed under Application Mgmt, Business Process, Cloud Computing, Conference, Everything, Governance, Mgmt integration, SaaS, Trade show, Utility computing

Standards Disconnect at Cloud Connect

Yesterday’s panel session on the future of Cloud standards at Cloud Connect is still resonating on Twitter tonight. Many were shocked by how acrimonious the debate turned. It didn’t have to be that way but I am not surprised that it was.

The debate was set up and moderated by Bob Marcus (ET-Strategies CTO and master standards coordinator). On stage were Krishna Sankar (Cisco and DMTF Cloud incubator), Archie Reed (HP and CSA), Winston Bumpus (VMWare and DMTF), a gentleman whose name I unfortunately forgot (and who isn’t listed on the program) and me.

If the goal was to glamorize Cloud standards, it was a complete failure. If the goal was to come out with some solutions and agreements, it was also a failure. But if the goal, as I believe, was to surface the current issues, complexities, emotions and misunderstandings surrounding Cloud standards, then I’d say it was a success.

I am not going to attempt to summarize the whole discussion. Charles Babcock, who was in the audience, does a good enough job in this InformationWeek article and, unlike me, he doesn’t have a horse in the race [side note: I am not sure why my country of origin is relevant to his article, but my guess is that this is the main thing he remembered from my presentation during the Cloud Connect keynote earlier that morning, thanks to the “guillotine” slide].

Instead of reporting on what happened during the standards discussion, I’ll just make one comment and provide one take-away.

The comment: the dangers of marketing standards

Early in the session, audience member Reuven Cohen complained that standards organizations don’t do enough to market their specifications. Winston was more than happy to address this and talk about all the marketing work that DMTF does, including trade shows and PR. He added that this is one of the reasons why DMTF needs to charge membership fees, to pay for this marketing. I agree with Winston at one level. Indeed, the DMTF does what he describes and puts a fair amount of effort into marketing itself and its work. But I disagree with Reuven and Winston that this is a good thing.

First it doesn’t really help. I don’t think that distributing pens and tee-shirts to IT admins and CIO-wannabes results in higher adoption of your standard. Because the end users don’t really care what standard is used. They just want a standard. Whether it comes from DMTF, SNIA, OGF, or OASIS is the least of their concerns. Those that you have to convince to adopt your standard are the vendors and the service providers. The Amazons, Rackspaces and GoGrids of the world. The Microsofts, Oracles, VMWares and smaller ones like… Enomaly (Reuven’s company). The highly-specialized consultants who work with them, like Randy. And also, very importantly, the open source developers who provide all the Cloud libraries and frameworks that are the lifeblood of many deployments. I have enough faith left in human nature to assume that all these guys make their strategic standards decisions on a bit more context than exhibit hall loot and press releases. Well, at least we do where I work.

But this traditional approach to marketing is worse than not helping. It’s actually actively harmful, for two reasons. The first is that the cost of these activities, as Winston acknowledges, creates a barrier to participation by requiring higher dues. To Winston it’s an unfortunate side effect; to me it’s a killer. Not necessarily because dropping the membership fee by 50% would bring that many more participants. But because the organizations become so dependent on dues that they are paranoid about making anything public, for fear of lowering the incentive for members to keep paying. Which is the worst thing you can do if you want the experts and open source developers, who are the best chance Cloud standards have to not repeat the mistakes of the past, to engage with the standard. Not necessarily as members of the group, but also from the outside. Assuming the work happens in public, which is the key issue.

The other reason why it’s harmful to have a standards organization involved in such traditional marketing is that it has a tendency to become a conduit for promoting the agenda of the board members. Promoting a given standard or organization sounds good, until you realize that it’s rarely so pure and unbiased. The trade shows in which the organization participates are often vendor-specific (e.g. Microsoft Management Summit, VMWorld…). The announcements are timed to coincide with relevant corporate announcements. The press releases contain quotes from board members who promote themselves at the same time as the organization. Officers speaking to the press on behalf of the standards organization are often also identified by their position in their company. Etc. The more a standards organization is involved in marketing, the more its low-level members are effectively subsidizing the marketing efforts of the board members. Standards have enough inherent conflicts of interest already without adding more opportunities.

Just to be clear, that issue of standards marketing is not what consumed most of the time during the session. But it came up, and since I didn’t get a chance to express my view on it while on the panel, I am using this blog instead.

My take-away from the panel, on the other hand, is focused on the heart of the discussion that took place.

The take-away: confirmation that we are going too fast, too early

Based on this discussion and other experiences, my current feeling on Cloud standards is that it is too early. If you think the practical experience we have today in Cloud Computing corresponds to what the practice of Cloud Computing will be in 10 years, then please go ahead and standardize. But let me tell you that you’re a fool.

The portion of Cloud Computing in which we have some significant experience (get a VM, attach a volume, assign an IP) will still be relevant in 10 years, but it will be a small fraction of Cloud Computing. I can tell you that much even if I can’t tell you what the whole will be. I have my ideas about what the whole will look like but it’s just a guess. Anybody who pretends to know is fooling you, themselves, or both.

I understand the pain of customers today who just want to have a bit more flexibility and portability within the limited scope of the VM/Volume/IP offering. If we really want to do a standard today, fine. Let’s do a very small and pragmatic standard that addresses this. Just a subset of the EC2 API. Don’t attempt to standardize the virtual disk format. Don’t worry about application-level features inside the VM. Don’t sweat the REST or SOA purity aspects of the interface too much either. Don’t stress about scalability of the management API and batching of actions. Just make it simple and provide a reference implementation. A few HTTP messages to provision, attach, update and delete VMs, volumes and IPs. That would be fine. Anything else (and more is indeed needed) would be vendor extensions for now.
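
To make this a bit more concrete, here is a rough sketch, in Python, of what a client of such a minimal standard could look like. The base URL, resource paths and JSON fields below are entirely made up for illustration; they are not the EC2 API nor any actual standards proposal.

```python
# Hypothetical client for a minimal VM/volume/IP provisioning standard.
# The endpoint, paths and field names are invented for illustration only.
import requests

BASE = "https://cloud.example.com/v1"   # made-up endpoint
AUTH = ("my-access-key", "my-secret")   # placeholder credentials

# Provision a VM from an image
vm = requests.post(f"{BASE}/vms",
                   json={"image": "img-12345", "size": "small"},
                   auth=AUTH).json()

# Create a volume and attach it to the VM
vol = requests.post(f"{BASE}/volumes", json={"size_gb": 100}, auth=AUTH).json()
requests.post(f"{BASE}/vms/{vm['id']}/attachments",
              json={"volume": vol["id"], "device": "/dev/sdf"}, auth=AUTH)

# Assign a public IP
requests.post(f"{BASE}/vms/{vm['id']}/ips", json={"public": True}, auth=AUTH)

# Update, then eventually delete, the VM
requests.patch(f"{BASE}/vms/{vm['id']}", json={"size": "medium"}, auth=AUTH)
requests.delete(f"{BASE}/vms/{vm['id']}", auth=AUTH)
```

A dozen operations of that sort, a clear wire format and a reference implementation would cover the portability needs most people have today; everything beyond that can stay a vendor extension for now.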

Unfortunately, neither of these (waiting, or a limited first standard) is going to happen.

Saying “it’s too early” in the standards world is the same as saying nothing. It puts you out of the game and has no other effect. Amazon, the clear leader in the space, has taken just this position. How has this been understood? Simply as “well I guess we’ll do it without them”. It’s sad, but all it takes is one significant (but not necessarily leading) company trying to capitalize on some market influence to force the standards train to leave the station. And it’s a hard decision for others not to join the pursuit at that point. In the same way that it only takes one bellicose country among pacifists to start a war.

Prepare yourself for some collateral damage.

While I would prefer for this not to proceed now (not speaking for my employer on this blog, remember), it doesn’t mean that one should necessarily stay on the sidelines rather than make lemonade out of lemons. But having opened the Cloud Connect panel session with somewhat of a mea-culpa (at least for my portion of responsibility) with regard to the failures of the previous IT management standardization wave, I am not too happy to see the seeds being sown for another collective mea-culpa, once we’ve made a mess of Cloud standards too. It’s not a given yet. Just a very high risk. As was made clear yesterday.

9 Comments

Filed under Amazon, Cloud Computing, Conference, DMTF, Everything, Mgmt integration, People, Portability, Standards, Trade show, Utility computing

OVF 1.0 and beyond

OVF 1.0 just got released as a DMTF standard. Here is the specification and its companion white paper. After a quick scan I didn’t see any major change from the submitted version, which is consistent with the content of the “preliminary standard” from last year.

The interesting question is what comes next, especially with regard to VMWare’s vCloud. The VMWare press release stated that “as one of the original authors of the Open Virtualization Format (OVF) standard now released from the Distributed Management Task Force (DMTF), VMware will build upon that work by submitting a draft of its VMware vCloud API to enable consistent mobility, provisioning, management, and service assurance of applications running in internal and external clouds” and Drue Reeves at the Burton Group commented on this (Drue, we’re still waiting for part II). I see no reason to believe that VMWare is going to stop playing from the Microsoft playbook in DMTF, as it appears to be quite successful so far (I’ll pat myself on the back for predicting over a year ago that “OVF might only be the beginning” for VMWare at DMTF).

This results in what looks like a land grab by DMTF in Cloud standards. Meanwhile, in Washington DC yesterday, the Strategies and Technologies for Cloud Computing Interoperability (SATCCI) workshop took place. At this point all I know about it is the report from Reuven Cohen that I just read (hopefully Stu, Krishna and other bloggers who participated will provide additional perspectives). From Reuven’s report, Winston Bumpus (Director of Standards Architecture at VMware and President of the DMTF) described OVF as “an ideal cloud migration and deployment package”. Which may be true but is a pretty recent repurposing (the spec and the white paper don’t even mention this application). And while the DMTF is going full speed ahead on this, Reuven reports that “Craig Lee, President of the Open Grid Forum suggested that we need to take more time to examine the overlap between various standards groups, mapping the opportunities for collaboration”. Sure thing. The old timers might remember that when the DMTF decided to run with Microsoft’s WS-Management it wasn’t just OASIS (where WSDM was created) that eventually got hosed but also OGF (then called GGF), which relied on the WSRF/WSDM stack. At the time too there were discussions to identify and reconcile the overlap, for all the good they did (disclosure: I have some history there).

We’ve seen this in the WS-* game before. At the end it’s not so much a matter of what the standards bodies do (and even less of what they say), it’s a matter of what the big players do and where they choose to take their marbles. To the extent that you can separate the two, which becomes tricky in the case of vendor-run bodies like WS-I and DMTF. As I have written before, “at the end, it comes down to what [you think] a standard should be”.

[UPDATED 2009/3/26: Stu has now written a report on the SATCCI meeting.]

5 Comments

Filed under Cloud Computing, Conference, DMTF, Everything, Grid, IT Systems Mgmt, OVF, Portability, Specs, Standards, Utility computing, Virtualization, VMware, WS-Management

The datacenter as a programmable entity

This is an exciting time for those who want to shrink the computer. They are having a field day playing with devices powered by Android, the iPhone’s Cocoa, Palm’s new WebOS, Windows Mobile, JavaFX (maybe one day) and, to a lesser extent, the Blackberry.

But times are good too for those who want to go the other way and program larger things rather than smaller ones. If you are interested in thinking about the datacenter as a programmable entity, you are in luck: for those long plane trips when you run out of battery, bring a printout of the proceedings of the research meeting organized last year in Cambridge by Microsoft and HP Labs, titled “The Rise and Rise of the Declarative Datacentre”. When you’re back online, go check the presentations on the site.

And if you liked Paul Anderson’s “Programming the Data Centre” presentation at the Cambridge meeting, you can also read his “Programming the Virtual Infrastructure” slides from LISA 08. More LISA 08 presentations here.

I got the link to Paul Anderson’s second presentation (and maybe also the first one, some time ago) from Steve Loughran, who also adds a few comments, starting with the debate between the declarative and procedural approaches. This question has plenty of down-the-road implications. There is a lot to like about the declarative approach in terms of composition, manageability and more generally as a framework to manage complexity via encapsulation.

A simple analogy for this debate is to think about driving directions. The declarative approach is for me to give you a map with a circle on it showing where my house is and let you find your way. It’s more work for you but it’s also more resilient. The procedural approach is for me to give you a set of turn-by-turn directions, based on where you are coming from. If you miss one turn or if one road happens to be blocked at the time, then you’re in trouble.

That being said, there are enough powerful and useful PowerShell or Puppet scripts out there to give you pause before discarding procedural approaches. While the declarative (aka “desired state”, “policy-driven” and sometimes “model-based”) approach looks a lot more elegant, at this point in time the real work usually gets done via scripts, deployment procedures or the like.
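
For those who prefer code to driving-direction analogies, here is a minimal sketch of the two styles applied to the same task. It is purely illustrative (this is not how PowerShell, Puppet or any real desired-state engine represents things), but it shows where the resilience of the declarative approach comes from.

```python
import subprocess

# Procedural: an ordered list of steps. If any step's assumption is wrong
# (package already present, path moved, service renamed), the run breaks --
# the "missed turn" in the driving-directions analogy.
def procedural_setup():
    subprocess.run(["apt-get", "install", "-y", "apache2"], check=True)
    subprocess.run(["cp", "site.conf", "/etc/apache2/sites-available/site.conf"], check=True)
    subprocess.run(["service", "apache2", "restart"], check=True)

# Declarative: describe the desired end state; an engine compares it to the
# observed state and acts only on the difference -- the "map with a circle on it".
desired_state = {
    "package:apache2": {"ensure": "installed"},
    "file:/etc/apache2/sites-available/site.conf": {"source": "site.conf"},
    "service:apache2": {"ensure": "running"},
}

def converge(observed_state, desired_state, apply_change):
    """Bring the observed state in line with the desired state, one resource at a time."""
    for resource, spec in desired_state.items():
        if observed_state.get(resource) != spec:
            apply_change(resource, spec)   # caller supplies the actual mechanism
```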

In addition to academia, the competition between these approaches is playing out right now between all the companies and products that want to help you automate and manage your cloud deployments (public and/or private): for example, RightScale scripts (custom scripts and RightScripts, see here and here) versus the more declarative ECML/EDML documents from Elastra. Or the very declarative approach taken by SmartFrog.

5 Comments

Filed under Automation, Cloud Computing, Conference, Desired State, Everything, Grid, Implementation, Research, Tech, Utility computing

HP introduces “Operations Manager i”

If you’ve seen a lot of news articles about HP’s IT management software this week (e.g. through Cote or Doug) it’s because the company held its Software Universe conference in Vienna this week and timed a bunch of announcements and PR events to match.

Most of the articles linked above just paraphrase the press releases and talking points. So if you’re going to get the company line, you might as well get it straight from the horse’s mouth. Which we can now do through a new HP blog about BSM. The first article was penned by Mike Shaw and that’s enough for me to want to subscribe (I worked with Mike a few times when I was at HP and he is very sharp). I think Mike also wrote the other entries but since they are not signed (and the account name, “adsey007”, is pretty opaque) I am not sure. In any case, they are pretty good. This one gives an overview of the Vienna announcements. The next one describes the OMi product in more detail. I am not in a position to know how well it works but, according to the article, OMi takes the important step of modeling and managing events in the context of the overall model in the CMDB, such that the event management features (e.g. correlation) can use the already-discovered relationships between the IT elements involved in the events (e.g. dependencies). The article also implies that the CMDB has been integrated with NNM (OpenView), Service Manager (Peregrine) and Server Automation (Opsware). Which is a lot of progress in the 16 months since I left HP, so I am taking it with a grain of salt (we all know there are different levels of integration). The press release says that the CMDB is now integrated with 17 HP BTO applications, so you may need a whole salt shaker. In any case it’s great to see that Ramin and team are forging ahead, delivering products and driving the integration of the BTO portfolio.

The last paragraph (“OMi actually sits on top of existing HP Operations Manager installations…”) is intriguing and may provide a clue about the depth of the integration. In any case, OMi is something to keep an eye on as it is positioned to leverage a lot of the key strengths of the HP BTO portfolio.

BTW, this OMi product has nothing to do with this OMI which was a precursor to WSMF, WSDM and WS-Management. And which most people currently working in HP Software have never heard of.

2 Comments

Filed under Application Mgmt, Conference, Everything, HP, IT Systems Mgmt, Mgmt integration, Modeling, People

IT management and Cloud: now some products

Many of us have been thinking (a bit) and talking (a lot) about the relationship between Clouds and good old IT management.  John understands both sides and produced a few good posts (like this one).

Maybe it’s just a coincidence that both Hyperic and CA recently made such announcements. In any case, it gives the impression that time has come for some actual product capabilities in the area of managing Cloud-based systems.

I haven’t investigated either, so keep your slideware shields up, but this is what I read:

From Javier Soltero’s “Announcing HQ 4.0”: “It also provides the first cloud-friendly management agent which allows users to manage cloud based virtual machines securely and reliably from either inside the cloud, or from HQ 4.0 installations inside your datacenter”. John approves.

And at CA World, according to InformationWeek, “CA will announce a partnership with Amazon to provide management capabilities around Amazon’s EC2 utility computing platform, potentially including discovery of software running on EC2 instances, performance monitoring, configuration management, software deployment capabilities and provisioning”.

When someone looks into these two products (and others, soon to follow or already out, that I have missed), it will be interesting to see how these Cloud-friendly capabilities relate to the good old capabilities of management products: “software discovery”, “perf monitoring”, “config management”, “software deployment”, “provisioning”. That all sounds pretty familiar. Is it just a matter of pointing the old tools to an EC2 IP address? Is it all new capabilities, done in a new way? Or, more realistically, where does it land between these extremes? Where do you want them to land? It’s not so obvious.

Utility computing comes with an expectation of additional flexibility (now that is obvious). When tweaking IT management tools to address the domain, does one leave “in datacenter” capabilities the same and branch off to do cool things in the new land? Or do you raise the level of flexibility across the board?

In other words, rather than snickering at them, maybe we should praise IT management vendors for whom the “look, I do Clouds” marketing spiel is just a repackaging of normal IT management features. Because it may mean that they’ve raised the bar on “in datacenter” automation capabilities. These Opsware and BladeLogic acquisitions have to come in somewhere, don’t they?

BTW, both of the announcements above also perpetuate the confusion between providing utility services (CA’s extended SaaS offering, Hyperic’s release of a pre-packaged Hyperic AMI) and the ability to manage Cloud-based systems. It’s all crammed in the same announcement/article because, hey, it’s all Cloud stuff.

Speaking of CA World, if I were there I would go to this session. At least for old times’ sake, and maybe to get some interesting ideas. Hopefully Don will blog about it after he is done presenting later today.

5 Comments

Filed under Amazon, Application Mgmt, Articles, CA, Conference, Everything, IT Systems Mgmt, Open source, Standards, Utility computing

Go Big Blue, go! Show them who’s the true friend of the little guy.

IBM’s well-publicized new policy for technology standards is an interesting development. The first image it conjured for cynical me is that of an aging Heavy Metal singer ranting against the rudeness of rap lyrics.

Like Charles, I don’t see IBM as an angel in this domain and yet I too think this is a commendable move on their part. Who better to stop a burglar than a (presumably) reformed burglar anyway? I hope this effort will succeed and I am glad to see that my colleague Jim Melton was involved in the discussion facilitated by IBM and that Trond supports it too.

My experience in standards (mostly from back in my HP days) only covers a small portion of IBM’s technology standards involvement of course. But in all instances, both IBM and Microsoft were key players (either through their participation or through their glaring refusal to participate). And within that sample (which does not include OOXML) my impression is that IBM did indeed play more cleanly than Microsoft.

They also mostly lost, while Microsoft mostly won. There may be causality there, but it’s not proven. IBM seems to have an ability to lose by winning: because they assign so many people to standards they wear out everybody else and at the end, they get the final document to be the way they want it (through the normal process, just by being relentless). But the specification is by then so over-engineered, so IBM-like in its approach and so late that it’s usually a Pyrrhic victory. Everybody else has moved on and IBM has on their hands something that’s a standard on paper but that only players in the IBM ecosystem implement. Pushing IBM’s CBE event format in WSDM, over-complicating aspects of WSRF like WS-ServiceGroup and butchering the use of SOAP headers in WS-ResourceTransfer to play nice with WebSphere are, in my mind, such examples. They can’t blame Microsoft for those.

Also, nobody forced them to tango with the devil in that whole WS-* saga. What they are saying now is similar in many ways to what Oracle was saying (about openness and fairness) throughout this decade, while Microsoft and IBM were privately defining machine-to-machine interoperability protocols for the enterprise. And they can’t blame standards for the way Microsoft eventually took advantage of them there, because they *chose* to do this outside of standards. I wish I had been a fly on the wall when this conversation took place:

IBM: We’re going to need a neutral DNS name for all these new XML namespaces. It wouldn’t be right to do it under ibm.com or microsoft.com.
Microsoft: You’re right. Hey, I just registered xmlsoap.org last week with the intent to launch a B2B forum for the detergent industry, but if you want we can use it for our Web services specs.
IBM: Man, that’s perfect. Let me give you twenty bucks to help pay the registration.
Microsoft: No, really, no big deal. It’s on me.
IBM: You’re too cool man.

But here I am, IBM-bashing again while the point of this post is to salute and support their attempt at reform. Bad, bad William.

OK, so now for some (hopefully) constructive remarks and suggestions.

I think commentaries and reports on the news have focused too much on the OOXML/ISO story. Sure it’s probably a big part of the motivation. But how much leverage does IBM really have on ISO? Technology standards are just a portion of what ISO does. And it’s not like ISO has much competition anyway, with its de jure international standing. Organizations like the JCP, DMTF and W3C have a lot more to lose if IBM really gets mad at them.

I think it’s clear that Microsoft is the target, but if ISO reform was the main prize, I don’t think IBM would go at it that way. ISO will only change in response to government pressure. If government influence is a necessary step, isn’t it cheaper and more direct for IBM to hire a couple more lobbyists than to try to rally the blogosphere? I think they really want to impact all standards setting organizations at the same time. If ISO happens to be one of those improved in the process, that’s gravy.

IBM calls its report “standards for standards” (at least that’s the file name). I think (and hope) the double entendre is intentional. It’s not just a matter of raising the (moral and operational) standards of standards organizations. It should also be an occasion to standardize how they work, to make them more similar to one another.

Follow me for a second here. One of the main problems with many organizations is their opacity. They have boards, task forces, strategic committees, etc. Membership in the organization is stratified, based mostly on how much you are willing to pay. I would guess that most organizations couldn’t make ends meet if all member companies paid the “base membership” fee. They need a dozen companies to pay the “leadership” fee to fund their operations. For these companies to agree to the higher price of participation, they need something in return. They need to have more access than the others. Therefore, some level of access must be denied to the base members (and even more to the non-members, which is why many such organizations make almost no information publicly available).

They are not opaque by accident, they are opaque by design because they need to be in order to be funded. There are two ways to fix this. One is to have fewer organizations, such that the fixed costs of running an organization can be more widely spread. But technology is very specialized and there is value in having organizations that are focused and populated by domain experts. The other way is to drastically reduce the cost of running a standards organization. That’s where standardization of standards organizations comes in. If the development processes, IP policies, bylaws and tools were commonly shared among standards organizations, it would be a lot cheaper to run one.

Today, I can start a new open source project for free on Sourceforge. I can pick one of the clearly-identified open source licenses that have been pre-defined. I can use the usual source control, collaboration and bug reporting tools. Not only is it almost free, my users will know right away how to participate. Why isn’t it the same for standards organizations? Or it is, but only partially. I know that Kavi is used by many standards organizations. I’ve used their tool both as a DMTF participant and an OASIS participant. And it doesn’t really fit either perfectly because the processes are slightly different. Ballots are conducted differently, attendance rules are different, document visibility rules are different, roles are different, etc.

It sounds superficial, but I am convinced that a more standardized approach to IP policies, organization bylaws and specification development processes would result in big savings that would open the door to much more transparency.

Oh yeah, you’d also have to drop the boondoggle plenary sessions in resorts all over the world. Painful, I know.

Sure there are other costs, such as marketing costs. But fully transparent organizations, by making their products more easily accessible to users, have a much lower need to use traditional marketing to get the word out. In the same way that open source software companies get most of their marketing via their user community. Consistency among standards organizations would also make it a lot easier for small companies to participate since anyone who’s learned the rules once can be effective right away in a new organization.

I want to end with a note of caution directed at IBM. You have responsibilities. I hope you realize that at this point, approximately 20% of all airplane seats are occupied by IBM employees going to or coming back from some standards-related meeting. The airlines are hurting already, you can’t pull out at once. And who will drive all these rental Chevys? Who will eat all the bad sushi in airport food courts and Benihana restaurants?

[UPDATED 2008/10/20: From Tim Bray, another example of IBM losing by winning in standards: “Unfortunately, that spec [XML 1.1] came with excess baggage, namely changed rules on what constitutes white-space, rammed through by IBM for the convenience of their mainframe customers. In any case, XML 1.1 has been widely ignored”.]

3 Comments

Filed under Conference, Everything, Governance, IBM, ISO, Microsoft, OOXML, Open source, Standards

Running Oracle in Amazon’s cloud

The announcement finally came out. Users can now run supported versions of Oracle Enterprise Linux, 11G Database, Fusion Middleware and Enterprise Manager on Amazon EC2 instances. You can create your own AMI or use any of the pre-packaged AMIs with the above-mentioned products. And you don’t have to purchase new licenses, you can transfer existing ones to run on Amazon’s infrastructure.

A separate but related announcement is the possibility to simply and securely backup your databases on Amazon S3 instead of (or in addition to) on tape. I hope BNY Mellon will take notice.

The Amazon AWS blog has a good overview of the news. Forrester covers it with a focus on data warehousing.

This comes in addition to the existing SaaS offering (“On Demand”) from Oracle and the SaaS platform (for others to provide SaaS on top of Oracle’s software). It is a major milestone for utility computing.

[UPDATED 2008/9/21: This is the home page for the Oracle Cloud Computing Center and this is the FAQ.]

[UPDATED 2008/9/23: More Cloud love, this time with Intel. I have no insight into that partnership.]

[UPDATED 2009/2/10: More on WebLogic Server on EC2, from Erik Bergenholtz.]

1 Comment

Filed under Amazon, Conference, Everything, IT Systems Mgmt, Linux, Middleware, Oracle, Oracle Open World, SaaS, Trade show, Utility computing, Virtualization

Application management roundtable

The Oracle Enterprise Manager team is inviting customers to an application management roundtable next week in San Francisco. You’ll learn about recent application management acquisitions (Moniforce, ClearApp and e-TEST), product direction and integration strategy. What we’d like to learn in return is your thoughts, needs and requirements for application management. To that end, we’ll need you to RSVP and to prepare a 5-10 minutes presentation about your application management challenges.

Here is the agenda:

  • Introduction
  • Customer Presentations on Application Management
  • Oracle’s Approach to Application Management
    • Real User Monitoring (Moniforce)
    • End2end Performance Monitoring (ClearApp)
    • Application Quality Management (e-TEST)
  • Breakout Sessions
    • Composite & SOA Application Management
    • E-Business Suite Application Management
    • Siebel Application Management
    • BRM Application Management
    • PeopleSoft Application Management

It will take place at the Four Seasons Hotel (757 Market St) from 9:00AM to 1:00PM (but don’t forget to RSVP before showing up).

You don’t have to be registered for Oracle Open World (OOW) to attend, but of course it’s been timed to be convenient for people who come to OOW.

Speaking of OOW, here is a list of all the sessions about Enterprise Manager from the conference agenda search engine. Also packaged as a nicely-formatted and chronologically-ordered PDF. For those interested in the recent application management acquisitions, check out these sessions:

About Moniforce

  • S298518 (Improve Performance of Your Oracle E-Business Suite and Siebel Applications with Oracle’s Real User Experience Insight)
  • S298536 (Go Beyond Web Analytics: Build Business Intelligence with Oracle Real User Experience Insight)
  • S298516 (How Real User Monitoring Can Improve Application Performance: Go Beyond Web Analytics and Systems Monitoring)

About ClearApp

  • S298534 (Application Transaction Management with Oracle Enterprise Manager: The Key to End-to-End Monitoring)

About e-TEST

  • S298707 (Application Testing Best Practices: Real-World Customer Testimonials)
  • S298706 (Optimizing Application Performance: Application Testing Suite to the Rescue)

About Auptyma

  • S298534 (Application Transaction Management with Oracle Enterprise Manager: The Key to End-to-End Monitoring)
  • S298524 (Application Diagnostics for DBAs: Visibility into Your Application That the Middle-Tier Administrator Cannot Provide You)
  • S298525 (Diagnosing Java Application Issues in Production: Gaining Performance Insight That Even Developers Do Not Have )
  • S300236 (Oracle Enterprise Manager Hands-on Lab: SOA Management and Java Application Diagnostics)

Just for fun, check out Chris Muir’s 10 things we probably won’t see at OOW08. The scary part is that of these ten unlikely things the least unlikely is item #1…

BTW, I’ll be at OOW next week (probably Wednesday and Thursday) so if you plan to be there and would like to meet let me know.

Comments Off on Application management roundtable

Filed under Application Mgmt, Conference, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Middleware, Oracle, Oracle Open World, Trade show

Oslo, blog posts and my crystal ball

There is more and more information coming out about Oslo in anticipation of the Microsoft PDC in October.

David Chappell recorded a video about it last month. More recently Doug Purdy and Don Box each posted a short description of Oslo. Don describes the goal of Oslo as “simplify the process of developing, deploying, and managing software”. But when he lists ancestor technologies to illustrate that “Microsoft has been moving in this direction for over a decade now”, they are all about development, not management: COM type libraries, .NET metadata attributes, XAML. Interesting that neither SDM nor SML gets a mention. Neither did SCA by the way, but I wasn’t really expecting that one… :-)

Maybe I am the only one looking for an SDM/SML echo here, just because I came to hear of Oslo through the DSI angle. Am I wrong to see Oslo as an enabler for DSI? This eWeek article doesn’t have anything to do with IT management. Reading it, Oslo is all about allowing people to write code through drag and drop. Yawn. And Don Box endorses the article.

Maybe it’s just me (an IT management guy more than a software development guy) but I don’t care so much about how the application model is created. I care a lot more about what it allows you to do in terms of IT management. Please don’t make me pull out the often-quoted figure about the percentage of IT budget spent on operations versus development/licensing. The eWeek piece fails to excite me, but fortunately David Chappell’s video interview is a lot more aligned with my thinking, so I still hold hopes for Oslo as an IT management enabler. Here is my approximate transcript of an example that David provides (at around 4:20) in the video:

“If someone comes to you and says I’ve got this business process and the SLA is not being met, what do you do? You’ve got to trace this through the right business process and the right application that supports that part of the process and find the machine it runs on and maybe look at the workflow that implements it and maybe look at the services that it provides. This involves talking to business analysts, or the IT pros or the architect or the developer, all of whom have their own view of the world, their own tools, their own perspective. The repository provides a common place to store all this stuff, to link it all together, and with a visual editor to have a common tool that lets you actually go through and answer these kinds of questions.”

Now you’re talking.

And if Oslo is not the new blood of DSI, then what is? The DSI story is getting dated, SML is fading in our memories and of the three parts that supposedly compose DSI (“virtualized infrastructure, design for operations, and knowledge-driven management”), only virtualization is actually represented on the list of technologies on the DSI home page. Has DSI turned into just allowing System Center to manage a hypervisor? I still hold hopes that the Oslo data is going to spice things up there. It would be good for the industry at large, not just Microsoft.

I won’t be at the PDC but it will be interesting to see what filters out of these sessions. The first session in the list adds management of hybrid application systems (hybrid as in “cloud/on-premise combination” or “software+services” as Microsoft calls it), to the long “can do” list for Oslo. Impressive, if there is some meat behind the abstract. I think this task is often overlooked in discussions around management aspects of Cloud computing (see “the new, interesting thing is going to be the IT infrastructure to manage your usage of utility computing services as well as their interactions with your in-house software” in this previous entry).

Yes, I am reading way too much into session abstracts, but while I am at it I can’t help noticing that there is a lot of SQL and very little XML/XSD/XPath mentioned there. Even though one of the presenters is Gudge, the only person I have ever met who fully understands XSD (actually even he doesn’t, I’ve seen him in the WS-I days have to refer to… his book).

Even though I am sure we’ll be told that SML can be built on top of Oslo, the SQL orientation won’t make that so easy (I want to see how to build XSD+Schematron validation on top of a relational store using Oslo’s drag and drop development tool). And it puts Microsoft on a different architectural direction from IBM, who, as far as I can tell, thinks that the world is a big XML document. Neither is the most appropriate for IT management models. I prefer a graph model and associated graph queries along the lines of SPARQL or CMDBf.
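
To illustrate what I mean by graph queries, here is a small sketch using rdflib and SPARQL: a toy IT management model and a query for the applications that run (through an application server) on a given host. The “ex:” vocabulary is invented for the example; it is not CMDBf, SML or CIM.

```python
# Illustrative only: a toy IT management model as an RDF graph, queried with SPARQL.
# The "ex:" vocabulary is made up for this example.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/itmodel#")
g = Graph()

g.add((EX.payroll, RDF.type, EX.Application))
g.add((EX.payroll, EX.runsOn, EX.appserver1))
g.add((EX.appserver1, EX.hostedOn, EX.host42))

results = g.query("""
    PREFIX ex: <http://example.org/itmodel#>
    SELECT ?app WHERE {
        ?app a ex:Application ;
             ex:runsOn ?server .
        ?server ex:hostedOn ex:host42 .
    }
""")
for row in results:
    print(row.app)   # -> http://example.org/itmodel#payroll
```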

But that’s just late-night idle speculations on my part (aka “blogging”). Let’s see what comes out in October.

[UPDATED 2008/9/10: Interesting timing. Microsoft is joining OMG, home of UML and BPMN. Coming next: a submission of a “new version” of UML and BPMN that happens to contain the extensions and tweaks that Microsoft made to them in the process of implementing Oslo. This, BTW, is the final nail in the SML coffin (SML isn’t even mentioned in the press release).]

3 Comments

Filed under Application Mgmt, CMDBf, Conference, Desired State, Everything, Graph query, IT Systems Mgmt, Mgmt integration, Microsoft, Middleware, Modeling, Oslo, Query, SaaS, SCA, SML, SPARQL, Specs, Tech, Trade show, Utility computing, Virtualization

CMDBf interop demo

IBM and CA are apparently showing an interoperability demo between their respective CMDBs at itSMF Fusion this week. I am not there to see it, but they describe it (it’s a corporate merger scenario) in this press release. It is presumably based on the version of the specification that was submitted to DMTF.

More information about CMDBf, along with another demonstration, will be available in a couple of months for ManDevCon attendees. Three sessions are on the agenda, all in a row and in the same room (so make sure to get a good seat, i.e. one close to a power plug, from the start):

  • CMDB Federation Overview (Vince Kowalski, BMC and Marv Waschke, CA)
  • CMDB Federation Technical Description (Mark Johnson, IBM and Marv Waschke, CA)
  • CMDB Federation Demonstration (Mark Johnson, IBM and Dave Snelling, Fujitsu)

Comments Off on CMDBf interop demo

Filed under CA, CMDB, CMDB Federation, CMDBf, Conference, DMTF, Everything, IBM, IT Systems Mgmt, ITIL, Mgmt integration, Specs, Standards, Trade show

More clues on the Oslo/SCA/SML trail: it’s “D”

I just found out that I completely missed some interesting information about Oslo-related efforts at Microsoft. Back in February, Mary-Jo Foley reported on a new modeling language (code-name “D”, apparently) that is part of this initiative. And more recently she reported that David Chappell gave a presentation about Oslo (and more generally Microsoft’s SOA plans) at TechEd. He reportedly said that we should expect a new “schema language” (which Mary-Jo thinks is “D”). What I want to know is what its relationship is with SML/SDM and SCA.

Mary-Jo might not know about SCA and SML but I know that David does. He wrote this white paper about SCA and an article arguing that “Microsoft Should Not Support SCA” (based on a questionable assessment that SCA is only about portability). He and I also had a little back-and-forth about SCA, SML and Microsoft in the comments section of his post. Unfortunately, David hasn’t blogged about Microsoft’s SOA strategy for a while, for us non-TechEd people.

In addition to Mary-Jo’s report, the only information I was able to quickly dig out about David’s presentation is this blog post on Microsoft’s Israel site. Looks like David gave the same presentation at TechEd Israel 2008. Anyone who understands Hebrew care to translate the post? Fortunately there is a two-minute video (also available here) in which we can hear David talk (in English). During the second of the two minutes you’ll hear and see something that could come straight out of an SCA presentation…

For some reason, David’s TechEd Israel presentation doesn’t seem to be listed here and TechEd online tells me that “Featured videos are unavailable at this time”. That’s both for IT Professionals and Developers. But of course they forced me to install Silverlight before telling me that.

[UPDATED 2008/8/11: Here is a 14 minutes video interview of David Chappell providing an update on Oslo.]

3 Comments

Filed under Application Mgmt, Automation, Conference, Desired State, Everything, IT Systems Mgmt, Mgmt integration, Microsoft, Modeling, Oslo, SCA, SML, Tech

Mapping CIM associations to CMDBf relationships

This post started as a comment on the blog of Van Wiles. When it became too long (and turned into a therapeutic rant at the end) I turned it into a blog post of its own. Please, read Van’s post first. Here is my response to him:

Hi Van. Sounds like what you are after is not a mapping of the CIM_Dependency association to a CMDBf record type (anyone can make up such a mapping as you point out), but a generic algorithm to map any CIM association to a corresponding CMDBf relationship record type. Correct? That algorithm needs to handle the fact that the CIM metamodel has the concept of relationship roles while the CMDBf metamodel doesn’t.

Here is a possible such mapping:

  1. Take a CIM association (called “myAssociation”) that has two roles (called “thisOne” and “theOtherOne”).
  2. Take the item whose role name comes first alphabetically and make it the source (in this example, it is “theOtherOne”)
  3. Take the item whose role name comes second alphabetically and make it the target (in this example, it is “thisOne”)
  4. Generate a CMDBf record type called “{associationName}_from_{firstRoleNameAlphabetically}_to_{secondRoleNameAlphabetically}”

You’re done. The new CMDBf record type is “myAssociation_from_theOtherOne_to_thisOne”, the source is the item with the role “theOtherOne” and the target is the item with the role “thisOne”. Everyone who follows this algorithm (of course it needs to be formally defined and evangelized, there is no guarantee here unless we bake CIM-specific concepts in the core CMDBf specification, which would be a mistake) will produce the same CMDBf relationship record type for a given CIM association.

Applied to the CIM_Dependency example, this would generate a “CIM_Dependency_from_Antecedent_to_Dependent” CMDBf record type, in which the source is the CIM Antecedent and the target is the CIM Dependent.
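
For precision, here is the same algorithm as a small Python sketch. The function name and return format are mine, not anything taken from the CIM or CMDBf specifications.

```python
# Sketch of the proposed CIM association -> CMDBf relationship mapping.
# Names are illustrative; nothing here comes from the CIM or CMDBf specs.
def map_cim_association(association_name, role_a, role_b):
    """Return (record_type, source_role, target_role) per the algorithm above."""
    source_role, target_role = sorted([role_a, role_b])   # alphabetical ordering of role names
    record_type = f"{association_name}_from_{source_role}_to_{target_role}"
    return record_type, source_role, target_role

print(map_cim_association("CIM_Dependency", "Antecedent", "Dependent"))
# -> ('CIM_Dependency_from_Antecedent_to_Dependent', 'Antecedent', 'Dependent')
```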

Alternatively, you can have the algorithm generate two CMDBf relationship record types (one going in each direction) for each CIM association. So you don’t have to arbitrarily pick the first one (alphabetically) as the source. But then you need to have model metadata to capture the fact that these relationships are the inverse of one another (and imply one another). As you well know, I have been advocating for the use of RDF/RDFS/OWL in CMDBf for a while. :-)

In the end, there are three potential approaches:

1) Someone (the CMDBf group or someone else) creates an authoritative mapping for all CIM associations (or at least all the useful ones) and we expect anyone who uses the CIM model with CMDBf to use that mapping.

2) Someone (again, the CMDBf group or someone else) defines a normative CIM to CMDBf mapping, e.g. the one above, and we expect anyone who generates a CMDBf relationship record type from a CIM association to use this mapping algorithm. From a pure logical perspective, it is the same as defining a CMDBf record type for each CIM association (approach 1), but it is less work and it doesn’t have to be updated every time a CIM association is created/versioned. At the cost of uglier (more arbitrary) CMDBf record types being defined.

3) We let people define the relationships in whatever way they choose and we provide a model metadata framework (aka an ontology language) to allow mappings between these approaches. For example, you define, in your namespace, a van:CIM-inspired-dependency CMDBf record type that goes from antecedent to dependent. Separately, I define, in my namespace, a william:CIM-like-dependency CMDBf record type that carries the same semantics (defined by CIM, not very precisely BTW, but that’s a different topic) except that its source is the dependent and its target is the antecedent: the inverse of yours. A suitable ontology language would allow someone (you, me, or a third party who has to assemble a system that uses both relationship types) to assert that mine is the inverse of yours. Once this assertion is captured, a request for any [A]—(van:CIM-inspired-dependency)—>[B] would also return the instances of [B]—(william:CIM-like-dependency)—>[A] because they are known to be the same. And you know how I am going to conclude, of course: OWL (specifically owl:inverseOf) provides just this.
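To make approach 3 a little more tangible, here is a small Python sketch using rdflib (the namespace URIs and item names are made up, and a real OWL reasoner would handle both directions of the inverse instead of the hand-rolled, one-way loop below):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

VAN = Namespace("http://example.org/van#")          # hypothetical namespaces
WILLIAM = Namespace("http://example.org/william#")
EX = Namespace("http://example.org/items#")

g = Graph()
# The model metadata: william's record type is the inverse of van's.
g.add((WILLIAM["CIM-like-dependency"], OWL.inverseOf, VAN["CIM-inspired-dependency"]))
# One relationship instance expressed with william's record type: B depends on A.
g.add((EX.B, WILLIAM["CIM-like-dependency"], EX.A))

# Poor man's inference: materialize the inverse triples.
for prop, inverse in list(g.subject_objects(OWL.inverseOf)):
    for s, o in list(g.subject_objects(prop)):
        g.add((o, inverse, s))

# A query for van:CIM-inspired-dependency now also finds the instance
# that was asserted with william's record type.
print(list(g.subject_objects(VAN["CIM-inspired-dependency"])))
```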

BTW, approach 3 is not incompatible with 1 or 2. Whether or not we define mappings for CIM relationships and whether or not that mapping gets adopted, there will be plenty of cases in a federated scenario in which you need to reconcile models (CIM-based or not). Model metadata (aka an ontology language) is useful anyway.

Readers who only care about the technical aspects and have little time for rants can stop reading here. But, since I haven’t addressed any constructive criticism to the DMTF in a while, I can’t resist the opportunity to point out that if the mailing list archives for the DMTF working groups were publicly available, we wouldn’t have to have these discussions on our personal blogs. I am very glad that Van posted this on his blog because it is a question that many people will have. Whatever the CMDBf specification ends up doing, developers and architects who make use of it will benefit from having access to the deliberations and considerations that resulted in the specification being what it is. There are many emails in the CMDBf mailing list private archive that I am sure would be useful to future CMDBf implementers, but if they don’t show up on Google they don’t exist for any practical purpose. When grappling with the finer points of some specification or programming language, I have often Googled my way into email archives (or old specification drafts) of the working groups that designed them. Sometimes I come out thinking “oh, ok, now I understand why they chose that approach” and other times it’s “ok, that’s what I suspected, these guys were high”. Either way, it’s useful to me as a user of the specification. W3C is the best example (of making working group records available, not of being high): not only is the mailing list available but the phone meetings often have a supporting IRC channel in which key points of the discussion get captured and archived. Here is an example. Making life easier for implementers is probably the single most important factor in making a specification successful. And ultimately, that’s the DMTF’s success too.

And it’s not just for developers and architects. It also impacts industry observers and pundits. Like the IT Skeptic who looked into CMDBf and reported “nothing on the DMTF website but press releases. try to find anything by navigating from the homepage”. And you wonder why his article is titled “the CMDB Federation proceeeds (sic) at its usual glacial pace”. There is good work going on, but there is no way for him to see it. This too is bad for the adoption and credibility of DMTF specifications.

Isn’t it ironic that the DMTF expends resources to sponsor a “hospitality suite” at the Burton Group Catalyst conference (presumably to spread the word about the good work taking place in the organization) but fails to make it easy for the industry to see that same good work taking place? It’s like a main street retail shop that advertises in the newspaper but covers its store window with cardboard, preventing passersby from seeing what’s on offer. I notice that all the other “hospitality suites” seem to be staffed by for-profit vendors (Oracle, IBM, Cisco, Microsoft, etc. are all there). Somehow W3C and OASIS (whose work is very relevant to some of the conference themes, like identity management and SOA) don’t feel the need to give away pens and key chains at the conference.

Dear DMTF, open source is not just good for code.

2 Comments

Filed under CA, CMDB Federation, CMDBf, Conference, DMTF, Everything, IT Systems Mgmt, Mgmt integration, Modeling, RDF, Semantic tech, Specs, Standards, Trade show, W3C

I have seen the future of CMDBf

I got a sneak peek at CMDBf v2 today.

I am calling it v2 based on the assumption that the one currently being standardized in DMTF will end up being called 1.0 (because it’s the first one out of DMTF) or 1.1 (to prevent confusion with the submitted version).

At the Semantic Technology Conference, David Booth from HP presented his work (along with his partner, Steve Battle from HP Labs) to provide a SPARQL front-end to HP’s Universal CMDB (the engine under what was the Mercury MAM product). Here are the slides.

The mapping from SPARQL to TQL (the native query interface for UCMDB) was made pretty easy by the fact that TQL is a graph-oriented query language. How much harder would it be to similarly transform a CMDBf (v1) query interface into a SPARQL query interface (and vice-versa)? Not much. The only added difficulty would come from the CMDBf XPath constraints. TQL has a property value mechanism that is very similar to CMDBf’s “propertyValue” constraint and maps well to SPARQL functions. The introduction of XPath as a constraint language in CMDBf makes things harder. It could be handled by adding XPath support to the SPARQL engine using function extensibility. Or by turning the entire XML into RDF and emulating XPath in SPARQL. But in either case, you’ll have an impedance mismatch at some point because concepts such as element order that exist in XPath have no native equivalent in RDF.
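For a rough idea of what a graph query with a propertyValue-like constraint looks like in SPARQL, here is a small, hypothetical example (plain Python with rdflib, not HP’s prototype and not TQL; the model namespace and property names are invented):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

CI = Namespace("http://example.org/ci#")  # made-up model namespace

g = Graph()
g.add((CI.app1, RDF.type, CI.Application))
g.add((CI.app1, CI.runsOn, CI.host1))
g.add((CI.host1, RDF.type, CI.Host))
g.add((CI.host1, CI.memoryGB, Literal(64)))

# Find applications and the hosts they run on, keeping only hosts with
# at least 32GB of memory (the propertyValue-like constraint).
results = g.query("""
    PREFIX ci: <http://example.org/ci#>
    SELECT ?app ?host WHERE {
        ?app a ci:Application ;
             ci:runsOn ?host .
        ?host ci:memoryGB ?mem .
        FILTER (?mem >= 32)
    }
""")
for app, host in results:
    print(app, host)
```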

The use of XPath in selectors on the other hand is not a problem. HP’s prototype uses Gloze (available as a Jena package) to turn the XML returned by UCMDB into RDF. An XSLT transform could turn that same XML into a CMDBf-valid XML response instead and that XSLT could easily handle the XPath selectors from the query request. This is another reason why constraints and selectors should remain separate in CMDBf (fortunately the specification is back to doing this properly).
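To illustrate why selectors are the easy part, here is a small Python sketch using lxml (the XML record, namespace and selector are all made up; an actual implementation would more likely fold this into the XSLT transform) that applies an XPath selector to an item record after the query has been evaluated:

```python
from lxml import etree

# A made-up XML record for one item, as a repository might return it.
item_xml = b"""
<item xmlns="http://example.org/model">
  <name>host1</name>
  <os>Linux</os>
  <memoryGB>64</memoryGB>
</item>
"""

# A selector from the query request: the caller only wants these two elements.
selector = "/m:item/m:os | /m:item/m:memoryGB"

doc = etree.fromstring(item_xml)
for node in doc.xpath(selector, namespaces={"m": "http://example.org/model"}):
    print(etree.tostring(node).decode())
```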

Here is why I call this prototype CMDBf v2: The CMDBf effort (v1 or 1.1), in its current form of re-inventing a graph query, can succeed. Let’s assume the working group strikes a reasonable balance between completeness and complexity, and vendors choose to compete on innovation and execution rather than lock-in (insert cynical comment here). CMDBf may then end up being supported by the main CMDB vendors. It wouldn’t provide federation capabilities, but having a common CMDB query interface supported by the Big Four would help with management integration. And yet, while the value would be real, it would only provide a little help to solve a larger problem:

  • As a technology limited to IT systems management, it would be unlikely to see widely available tools (e.g. user consoles and language-specific libraries).
  • It wouldn’t get the kind of robustness and interoperability that comes from wide adoption. The various implementations would be pretty similar, but there would still be minor differences between them. Once your implementation has been tweaked to work with the implementations from the Big Four, you’ll call it done. Just like SNMP, another technology that is specific to IT systems management (see it happen here).
  • Even if it works perfectly at the query level, it will just hasten the time when developers run into the real problem, model interoperability. CMDBf doesn’t help at all with this. In fact, it makes it harder by hard-coding some dependencies on an XML back-end (the XPath constraints).

In the long run, IT management has to become more automated and integrated. That’s a given. The way it happens may or may not go through CMDB-like configuration stores. But if it does, we’ll have to eventually move beyond CMDBf (v1) towards something that addresses the three requirements above. And federation. I don’t know if it will be called CMDBf v2, and/or if it will come from the DMTF (by then, the CMDBf brand might be an asset or a liability depending on developer experience with the specification). But I strongly suspect (“probability 0.8” as a Gartner analyst might put it) that it will use semantic technologies. Because the real, hard, underlying problem is a problem of semantic integration. In that sense, David and Steve’s prototype is a sneak peek at what will come after CMDBf v1/1.1.

Pretty much since the beginning of CMDBf I have been pushing for it to ideally embrace SPARQL (with no success) or to at least stay close to it conceptually in order to make the eventual mapping/evolution smooth (with a bit more success). This includes pushing for a topological query language, trying to keep XML idiosyncrasies at bay and keeping constraints and selectors cleanly separated. Rather than working within the CMDBf group, David took the alternative approach of simply doing it. Hopefully this will help convince people of the value of re-using semantic web technology for IT systems management. Yes, semantic technologies have been designed for a much more general use case. But the use cases that CMDB systems address are a subset of the use cases addressed by semantic technologies. It’s hard for domain experts to see their domain as just a subset of a larger problem, but this is the case here. Isn’t HTTP serving the IT management community better than a systems management-specific alternative would?

By the way, there is no inferencing taking place in the HP prototype. We are just talking about re-using an existing, well thought-through graph query language. Sure, OWL inferencing and some rules could be seamlessly layered on top of this. But this is in no way required to do (better) what CMDBf v1 tries to do.

And then there is the “federation” question. Who do you trust more to deliver this? A bunch of IT system management architects in DMTF or the web and query experts at W3C, HP Labs, etc. who designed and implemented SPARQL over many years? BTW, it sounds like SPARQL federation was discussed at WWW 2008, based on these meeting notes (search for “federation”).

2 Comments

Filed under Automation, CMDB, CMDB Federation, CMDBf, Conference, DMTF, Everything, Graph query, HP, IT Systems Mgmt, Query, RDF, Semantic tech, SPARQL, Standards, W3C, XPath

At the Semantic Technology Conference this week

I am going to spend some time, starting tomorrow, at the 2008 Semantic Technology Conference in San Jose. I have a few ideas (on ways to use semantic technologies for IT management) that I am trying to validate, improve or kill, as appropriate. I will of course be interested in any discussion/presentation related to applying semantic technologies to IT management. But I am going there in a pretty open frame of mind. Peripheral vision is important when considering technology that is so generic in nature and that has been used in many different fields. In general, I am more interested in concrete projects, lessons learned, best practices etc than new raw technology. There is already a lot more raw semantic web technology available than we can hope to make use of in IT management anytime soon. The question is more around the best ways to practically and opportunistically deliver value in a field that has plenty of existing relational schemas, XML descriptors, semi-standard class models, semi-structured events and in-Joe’s-head-only domain knowledge nuggets floating around.

If you are going to go there and have any interest in the application of semantic technologies to IT management, I’d be happy to hook up.

Comments Off on At the Semantic Technology Conference this week

Filed under Conference, Everything, IT Systems Mgmt, OWL, RDF, Semantic tech