Category Archives: IT Systems Mgmt

DMTF calls the ball on Cloud standards

To the surprise of no industry watcher (and especially not the small subset of them who read this blog), the DMTF announced today (warning, PDF) that it is creating its very first “incubator” group, chartered with standardizing deployment, management and portability of Cloud systems. You probably skipped it at the time (you’re forgiven), but you may now be motivated to go back and read this short analysis of the DMTF incubator process. And now you know why I bothered to look into this never-used, two-year-old process. Since it was DMTF-internal information, I couldn’t explain at the time that my motivation was the preparations under way for this Cloud computing incubator.

Since the press release talks about Cloud compatibility and since I am obviously in a very self-referencing mood today, I have to point to this “reality check on Cloud portability” for a historical perspective.

Three things to notice in the charter (warning, PDF) of the incubator:

First and foremost, it explicitly takes a very IaaS-centric view of Cloud computing. And within that, a very VM-driven view. VMWare could have written it…

“Virtualization technology and the evolution from software packages that can be created and deployed as a collection of virtual images is becoming the primary focus for delivering and managing software solutions into enterprise customers today”. I guess the “is becoming” formulation provides enough wiggle room (an interesting rhetorical twist that lets you make a prediction while using the present tense) that one can’t really call them on it and ask how many enterprise software systems are actually delivered and managed as virtual machines today (see my colleague Adam’s view of what it will take).

Let’s next look at the description of the deliverables:

Cloud taxonomy:
– Terms and definitions
Cloud Interoperability whitepaper
Informational specifications:
– Proposed OVF changes for cloud usage
– Proposed Profiles for management of resources exposed by a cloud
– Proposed changes to other DMTF standards
Requirements for trust for cloud resource management.
Work register(s) with appropriate alliance partners (See below)

We find the requisite “cloud taxonomy” (all the blog chatter about this a few months ago died without producing much alignment beyond the good old “IaaS, PaaS and SaaS”, or did I miss something?). The interesting thing to notice is the lack of any new specification in the list. Just adjustments to the current ones (including OVF) and some profiling on top. I guess we are much closer to Cloud interoperability and portability than I thought! And the lessons from the past have been learned.

The third thing to notice is the name of the “interim co-chairs”. Who happen to be from VMWare and IBM. Who also happen to be the DMTF President and DMTF Chairman. In case you had any doubt, this is very high profile in DMTF. Especially for something that’s theoretically only an “incubator”. It may just be an egg, but there is a baby T-Rex in it.

Who’s missing from the party? Two groups of people. First, DMTF members who chose not to join (Oracle, CA, BMC…). And more importantly, the non-DMTF members who may nevertheless have a few ideas about Clouds: Google, Amazon, Salesforce and all the small Cloud pure-plays. You know, the kind of people who publish their docs in HTML rather than just PDF.

[Note: this is a quick first take written over lunch. More thoughts about the choice of the “incubator process” and the prospects for collaboration with other standards groups to follow, maybe as soon as tonight. — UPDATE: done]

3 Comments

Filed under Cloud Computing, DMTF, Everything, IT Systems Mgmt, OVF, Portability, Standards, Utility computing, Virtualization, VMware

A post-mortem on the previous IT management revolution

Before rushing to standardize “Cloud APIs”, let’s take a look back at the previous attempt to tackle the same problem, which is one of IT management integration and automation. I am referring to the definition of specifications that attempted to use the then-emerging SOAP-based Web services framework to easily integrate IT management systems and their targets.

Leaving aside the “Cloud” spin of today and the “Web services” frenzy of yesterday, the underlying problem remains to provide IT services (mostly applications) in a way that offers the best balance of performance, availability, security and economy. Concretely, it is about being able to deploy whatever IT infrastructure and application bits need to be deployed, configure them and take any required ongoing action (patch, update, scale up/down, optimize…) to keep them humming so customers don’t notice anything bothersome and you don’t break any regulation. Or rather, so that any disruption a customer sees and any mandate you violate cost you less than avoiding them would have.

The realization that IT systems are moving more and more towards distributed/connected applications was the primary reason that pushed us towards the definition of Web services protocols geared towards management interactions. By providing a uniform and network-friendly interface, we hoped to make it convenient to integrate management tasks vertically (between layers of the IT stack) and horizontally (across distributed applications). The latter is why we focused so much on managing new entities such as Web services, their execution environments and their conversations. I’ll refer you to the WSMF submission that my HP colleagues and I made to OASIS in 2003 for the first consistent definition of such a management framework. The overview white paper even has a use case called “management as a service” if you’re still not convinced of the alignment with today’s Cloud-talk.

Of course there are some differences between Web service management protocols and Cloud APIs. Virtualization capabilities are more advanced than when the WS effort started. The prospect of using hosted resources is more realistic (though still unproven as a mainstream business practice). Open source components are expected to play a larger role. But none of these considerations fundamentally changes the task at hand.

Let’s start with a quick round-up and update on the most relevant efforts and their status.

Protocols

WSMF (Web Services Management Framework): an HP-created set of specifications, submitted to the OASIS WSDM working group (see below). Was subsumed into WSDM. Not only a protocol BTW, it includes a basic model for Web services-related artifacts.

WS-Manageability: An IBM-led alternative to parts of WSDM, also submitted to OASIS WSDM.

WSDM (Web Services Distributed Management): An OASIS technical committee. Produced two standards (a protocol, “Management Using Web Services” and a model of Web services, “Management Of Web Services”). Makes use of WSRF (see below). Saw a few implementations but never achieved real adoption.

OGSI (Open Grid Services Infrastructure): A GGF (the organization now known as OGF) standard to provide a service-oriented resource manipulation infrastructure for Grid computing. Replaced with WSRF.

WSRF: An OASIS technical committee which produced several standards (the main one is WS-ResourceProperties). Started as an attempt to align the GGF/OGSI approach to resource access with the IT management approach (represented by WSDM). Saw some adoption and is currently quietly in use under the covers in the GGF/OGF space. Basically replaced OGSI but didn’t make it in the IT management world because its vehicle there, WSDM, didn’t.

WS-Management: A DMTF standard, based on a Microsoft-led submission. Similar to WSDM in many ways. Won the adoption battle against it. Based on WS-Transfer and WS-Enumeration.

WS-ResourceTransfer (aka WS-RT): An attempt to reconcile the underlying foundations of WSDM and WS-Management. Stalled as a private effort (IBM, Microsoft, HP, Intel). Was later submitted to the W3C WS-RA working group (see below).

WSRA (Web Services Resource Access): A W3C working group created to standardize the specifications that WS-Management is built on (WS-Transfer etc) and to add features to them in the form of WS-RT (which was also submitted there, in order to be finalized). This is (presumably) the last attempt at standardizing a SOAP-based access framework for distributed resources. Whether the window of opportunity to do so is still open is unclear. Work is ongoing.

WS-ResourceCatalog: A discovery helper companion specification to WS-Management. Started as a Microsoft document, went through the “WSDM/WS-Management reconciliation” effort, emerged as a new specification that was submitted to DMTF in May 2007. Not heard of since.

CMDBf (Configuration Management Database Federation): A DMTF working group (and soon to be standard) that mainly defines a SOAP-based protocol to query repositories of configuration information. Not linked with (or dependent on) any of the specifications listed above (it is debatable whether it belongs in this list or is part of a new breed).

Modeling

DCML (Data Center Markup Language): The first comprehensive effort to model key elements of a data center, their relationships and their policies. Led by EDS and Opsware. Never managed to attract the major management vendors. Transitioned to an OASIS member section and died of being ignored.

SDM (System Definition Model): A Microsoft specification to model an IT system in a way that includes constraints and validation, with the goal of improving automation and better linking the different phases of the application lifecycle. Was the starting point for SML.

SML (Service Modeling Language): Currently a W3C “proposed recommendation” (soon to be a recommendation, I assume) with the same goals as SDM. It was created, starting from SDM, by a consortium of companies that eventually submitted it to W3C. No known adoption other than the Eclipse COSMOS project (Microsoft was supposed to use it, but there hasn’t been any news on that front for a while). Technically, it is a combination of XSD and Schematron. It appears dead, unless it turns out that Microsoft is indeed using it (I don’t know whether System Center is still using SDM, whether they are adopting SML, whether they are moving towards M or whether they have given up on the model-centric vision).

CML (Common Model Library): An effort by the SML authors to create a set of model elements using the SML metamodel. Appears to be dead (no news in a long time and the cml-project.org domain name that was used seems abandoned).

SDD (Solution Deployment Descriptor): An OASIS standard to define a packaging mechanism meant to simplify the deployment and configuration of software units. It is to an application archive what OVF is to a virtual disk. Little adoption that I know of, but maybe I have a blind spot on this.

OVF (Open Virtualization Format): A recently released DMTF standard. Defines a packaging and descriptor format to distribute virtual machines. It does not define a common virtual machine format, just a wrapper around one. Seems to have some momentum. Like CMDBf, it may be best thought of as part of a new breed rather than as directly associated with WS-Management and friends.

This is not an exhaustive list. I have left aside the eventing aspects (WS-Notification, WS-Eventing, WS-EventNotification) because, while relevant, it is a larger discussion and this entry is too long already (see here and here for some updates from late last year on the eventing front). It also does not cover the Grid work (other than OGSI/WSRF to the extent that they intersect with the IT management world), even though a lot of the work that took place there is just as relevant to Cloud computing as the IT management work listed above. Especially CDDLM/CDL, an abandoned effort to port SmartFrog to the then-hot XML standards, from which there are plenty of relevant lessons to extract.

The lessons

What does this inventory tell us that’s relevant to future Cloud API standardization work? The first lesson is that protocols are easy and models are hard. WS-Management and WSDM technically get the job done. CMDBf will be a good query language. But none of the model-related efforts listed above seem to have hit the mark of “doing the job”. With the possible exception of OVF, which is promising (though the current expectations on it are often beyond what it really delivers). In general, the more focused and narrow a modeling effort is, the more successful it seems to be (with OVF as the most focused of the list and CML as the other extreme). That’s lesson learned number two: models that encompass a wide range of systems are attractive, but impossible to deliver. Models that focus on a small sub-area are the way to go. The question is whether these specialized models can at least share a common metamodel or other base building blocks (a type system, a serialization, a relationship model, a constraint mechanism, etc.), which would make life easier for orchestrators. SML tries (tried?) to be all that, with no luck. RDF could be all that, but hasn’t managed to get noticed in this context. The OVF and SDD examples seem to point out that the best we’ll get is XML as a shared foundation (a type system and a serialization). At this point, I am ready to throw in the towel on achieving more modeling uniformity than XML provides, and ready to do the needed transformations in code instead. At least until the next window of opportunity arrives.
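
To make “do the needed transformations in code” concrete, here is a minimal sketch (Python; both descriptor formats and every field name are invented for illustration and correspond to no actual provider or specification) of the kind of mapping code you end up writing when two models share nothing more than XML, or here a dictionary, as a foundation:

```python
# Hypothetical VM descriptors: neither format matches a real provider.
# Every mapping decision (naming, units, defaults, dropped fields) lives
# in code like this instead of in a shared metamodel.

def to_provider_b(vm_a: dict) -> dict:
    """Translate a VM descriptor from imaginary provider A to imaginary provider B."""
    return {
        "instanceName": vm_a["name"],
        # Unit mismatch: A counts memory in MB, B wants GiB (rounded up).
        "ramGiB": -(-vm_a["memory_mb"] // 1024),
        "vcpus": vm_a["cpu_count"],
        "bootImage": vm_a["image_ref"],
        # A's free-form tags have no equivalent in B and are silently dropped,
        # exactly the kind of loss a shared model would have surfaced.
    }

if __name__ == "__main__":
    vm = {"name": "web-01", "memory_mb": 3072, "cpu_count": 2,
          "image_ref": "img-1234", "tags": ["frontend", "pci-scope"]}
    print(to_provider_b(vm))
```

Multiply this by every resource type and every pair of models and you get a sense of the accuracy and maintenance cost mentioned above.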

I wish that rather than being 80% protocols and 20% models, the effort in the WS-based wave of IT management standards had been the other way around. So we’d have a bit more to show for our work, for example a clear, complete and useful way to capture the operational configuration of application delivery services (VPN, cache, SSL, compression, DoS protection…). Even if the actual specification turns out to not make it, its content should be able to inform its successor (in the same way that even if you don’t use CIM to model your server it is interesting to see what attributes CIM has for a server).

It’s less true with protocols. Either you use them (and they’re very valuable) or you don’t (and they’re largely irrelevant). They don’t capture domain knowledge that’s intrinsically valuable. What value does WSDM provide, for example, now that it’s collecting dust? How much will the experience inform its successor (other than trying to avoid the WS-Addressing disaster)? The trend today seems to be that a more direct use of HTTP (“REST”) will replace these protocols. Sure. Fine. But anyone who expects this break from the past to be a vaccination against past problems is in for a nasty surprise. Because, and I am repeating myself, it’s the model, stupid. Not the protocol. Something I (hopefully) explained in my comments on the Sun Cloud API (before I knew that caring about this API might actually become part of my day job) and something I’ll come back to in a future post.

Another lesson is the need for clear use cases. Yes, it feels silly to utter such an obvious statement. But trust me, standards groups still haven’t gotten this. It took years spent on WSDM and then WS-Management before I realized that most people were not going after management integration, as I was, but rather manageability. Where “manageability” is concerned with discovering and monitoring individual resources, while “management integration” is concerned with providing a systematic view of the environment, with automation as the goal. In other words, manageability standards can allow you to get a traditional IT management console without the need for agents. Management integration standards can allow you to coordinate your management systems and automate their orchestration. WS-Management is for manageability. CMDBf is in the management integration category. Many of the (very respectful and civilized) head-butting sessions I engaged in during the WSDM effort can be traced back to the difference between these two sets of use cases. And there is plenty of room for such disconnect in the so-loosely-defined “Cloud” world.

We have also learned (or re-learned) that arbitrary non-backward-compatible versioning, e.g. for political or procedural reasons as with WS-Addressing, is a crime. XML namespaces (of the XSD and WSDL types, as well as URIs used in similar ways in specifications, e.g. to identify a dialect or profile) are tricky, because they don’t have backward compatibility metadata and because of the practice of using organizations’ domain names in the URI (as opposed to specification-specific names that can be easily transferred, e.g. cmdbf.org versus dmtf.org/cmdbf). In the WS-based management world, we inherited these problems at the protocol level from the generic WS stack. Our hands are more or less clean, but only because we didn’t have enough success/longevity to generate our own versioning problems at the model level. But those would have been there had these models been able to see the light of day (CML) or see adoption (DCML).
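
A concrete illustration of why this hurts: consumers typically dispatch on the exact namespace URI, so when a specification changes namespace (WS-Addressing moved from the 2004/08 submission namespace to the 2005/08 W3C namespace) existing code rejects the new version outright, and nothing in the URI tells it whether the change was backward compatible. A minimal sketch (Python; the consumer logic is made up for illustration):

```python
import xml.etree.ElementTree as ET

# The only WS-Addressing namespace this hypothetical consumer was coded against.
SUPPORTED_WSA = "http://schemas.xmlsoap.org/ws/2004/08/addressing"

def extract_action(header_xml: str) -> str:
    """Return the wsa:Action value, but only for the one namespace we know about."""
    root = ET.fromstring(header_xml)
    action = root.find(f"{{{SUPPORTED_WSA}}}Action")
    if action is None:
        # A semantically identical header in the 2005/08 W3C namespace lands here:
        # the consumer cannot tell "newer version" from "not present at all".
        raise ValueError("no wsa:Action in a supported namespace")
    return action.text

if __name__ == "__main__":
    old = f'<Header><Action xmlns="{SUPPORTED_WSA}">urn:start</Action></Header>'
    new = ('<Header><Action xmlns="http://www.w3.org/2005/08/addressing">'
           'urn:start</Action></Header>')
    print(extract_action(old))                 # works
    try:
        print(extract_action(new))             # same meaning, different URI
    except ValueError as err:
        print("2005/08 message rejected:", err)
```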

There are also practical lessons that can be learned about the tactics and strategies of the main players. Because it looks like they may not change very much, as corporations or even as individuals. Karla Norsworthy speaks for IBM on Cloud interoperability standards in this article. Andrew Layman represented Microsoft in the post-Manifestogate Cloud patch-up meeting in New York. Winston Bumpus is driving the standards strategy at VMWare. These are all veterans of the WS-Management, WSDM and related wars, er, collaborations (and more generally the whole WS-* effort for the first two). For the details of what there is to learn from the past in that area, you’ll have to corner me in a hotel bar and buy me a few drinks though. I am pretty sure you’d get your money’s worth (I am not a heavy drinker)…

In summary, here are my recommendations for standardizing Cloud APIs, based on lessons from the Web services management effort. The theme is “focus on domain models”. The line items:

  • Have clear goals for each effort. E.g. is your use case to deploy and run an existing application in a Cloud-like automated environment, or is it to create new applications that efficiently take advantage of the added flexibility? Very different problems.
  • If you want to use OVF, then beef it up to better apply to Cloud situations, but keep it focused on VM packaging: don’t try to grow it into the complete model for the entire data center (e.g. a new DCML).
  • Complement OVF with similar specifications for other domains, like the application delivery systems listed above. Informally try to keep these different specifications consistent, but don’t over-engineer it by repeating the SML attempt. It is more important to have each specification map well to its domain of application than it is to have perfect consistency between them. Discrepancies can be bridged in code, or in a later incarnation.
  • As you segment by domain, as suggested in the previous two bullets, don’t segment the models any further within each domain. Handle configuration, installation and monitoring issues as a whole.
  • Don’t sweat the protocols. HTTP, plain old SOAP (don’t call it POS) or WS-* will meet your needs. Pick one. You don’t have a scalability challenge as much as you have a model challenge, so don’t get distracted here. If you use REST, do it in the mindset that Tim Bray describes: “If you’re going to do bits-on-the-wire, Why not use HTTP? And if you’re going to use HTTP, use it right. That’s all.” Not as something that needs to scale to Web scale or as a rebuff of WS-*.
  • Beware of versioning. Version for operational changes only, not organizational reasons. Provide metadata to assert and encourage backward compatibility.

This is not a recipe for the ideal result but it is what I see as practically achievable. And fault-tolerant, in the sense that the failure of one piece would not negate the value of the others. As much as I have constrained expectations for Cloud portability, I still want it to improve to the extent possible. If we can’t get a consistent RDF-based (or RDF-like in many ways) modeling framework, let’s at least apply ourselves to properly understanding and modeling the important areas.

In addition to these general lessons, there remains the question of which specific specifications will/should transition to the Cloud universe. Clearly not all of them, since not all of them even made it in the “regular” IT management world for which they were designed. How many then? Not surprisingly (since IBM had a big role in most of them), Karla Norsworthy, in the interview mentioned above, asserts that “infrastructure as a service, or virtualization as a paradigm for deployment, is a situation where a lot of existing interoperability work that the industry has done will surely work to allow integration of services”. And just as unsurprisingly, Amazon’s Adam Selipsky, whose company has nothing to do with the previous wave but finds itself in a leadership position with regard to Cloud computing, is a lot more circumspect: “whether existing standards can be transferred to this case [of cloud computing] or if it’s a new topic is [too] early to say”. OVF is an obvious candidate. WS-Management is by far the most widely implemented of the bunch, so that gives it an edge too (it is apparently already in use for Cloud monitoring, according to this press release by an “innovation leader in automated network and systems monitoring software” that I had never heard of). Then there is the question of what IBM has in mind for WS-RT (and the other specifications that the WS-RA working group is toiling on). If it’s not used as part of a Cloud API then I really don’t know what it will be used for. But selling it as such is going to be an uphill battle. CMDBf is a candidate too, as a model-neutral way to manage the configuration of a distributed system. But here I am, violating two of my own recommendations (“focus on models” and “don’t isolate config from other modeling aspects”). I guess it will take another pass to really learn…

[UPDATED 2009/5/7: Senior moment! When writing this entry I forgot that I wrote an earlier entry (in late 2007) specifically to describe the difference between “manageability” and “management integration”. So here it is, if you care for more details on this topic.]

5 Comments

Filed under Automation, Cloud Computing, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Modeling, People, Portability, REST, SML, SOAP, Specs, Standards, Utility computing, Virtualization, WS-Management, WS-ResourceCatalog, WS-ResourceTransfer

New release of Oracle Composite Application Monitor and Modeler

Version 10.2.0.5 of OCAMM (Oracle Composite Application Monitor and Modeler) was recently released. This is the ex-QuickVision product from the ClearApp acquisition last year. It is doing very well in the Enterprise Manager portfolio. In this new release, it has grown to cover additional middleware and application products. So now, in addition to the existing support (J2EE, BPEL, Portal…), it lets you model and monitor deployments of the Oracle Service Bus (OSB) as well as AIA process integrations. Those metadata-rich environments fit very naturally into the OCAMM approach of discovering the distributed model from metadata and then collecting and reporting usage metrics across the whole system in the context of that model.

For a complete list of improvements over release 10.2.0.4.2, refer to the release notes, where we see this list of new features:

  • Oracle Service Bus Support: Oracle Service Bus (formerly called Aqualogic Service Bus) versions 2.6, 2.6.1, 3.0, and 10gR3 are now supported.
  • Oracle AIA Support: Oracle Application Integration Architecture (AIA) 2.2.1 and 2.3 are now supported.
  • WLS and WLP Support: WebLogic Server and WebLogic Portal version 10.3 are now supported.
  • Agent Deployment Added: Agent deployment added to CAMM Administration UI to streamline configuration of new resources
  • Testing Added in Resource Configuration: Connectivity parameter testing added in resource configuration
  • Testing Added in Repository Configuration: Connectivity parameter testing added for CAMM database repository configuration
  • EJB and Web Services Support: EJB and Web Services Annotation are now supported

You can get the bits on OTN. Here is the installation and configuration guide and here is the user’s guide.

Comments Off on New release of Oracle Composite Application Monitor and Modeler

Filed under Application Mgmt, BPEL, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Middleware, Modeling, Oracle

Reality check on Cloud portability

SD Times recently published an interesting article about “cloud interoperability”. It has some well-informed opinions. But, like all Cloud-related discussions, it also suffers from mixing a bunch of things. The word “interoperability” is alternately applied to the Cloud infrastructure services (in which case this “interoperability” is a way to provide application “portability”) and to the Cloud-hosted applications themselves.

Application-level interoperability (“look, my GAE-hosted app successfully sent an HTTP request to an Azure-hosted app, open the champagne”) is not very new or exciting anymore and is often used as an interoperability smokescreen (hello Salesforce.com). Many of these interop concerns are long solved and the others (like authentication and data migration) need to be solved in ways that don’t care whether the application is hosted in your Silicon Valley garage or near the Columbia river.

Cloud infrastructure compatibility (in other words application portability) is the more interesting discussion. I keep reading that it is needed (“no vendor lock-in, not ever again”) for enterprises to move to the Cloud. Being a natural-born cynic, I always ask myself whether those asking for it are naive (sometimes) or have ulterior motives (e.g. trying to catch up with Amazon by entangling them in the standards net – some of my fellow cynics see the Open Cloud Manifesto as just this).

Because the reality is that, Manifesto or no Manifesto, you are not going to get application portability across IaaS-type Cloud providers. At least for production applications. Sorry. As a consolation prize, you may get some runtime portability such that we’ll be shown nice demos of prototype apps moving from one provider to another (either as applications or as virtual machines). Clap clap until you realize that they left behind their monitoring capabilities, or that their configuration rules don’t validate anything anymore. And that your printer ran out of red ink when printing the latest compliance report. Oops.

Maybe I am biased because they are both my friends and ex-colleagues, but the HP guys make the most sense in the SD Times article. Tim Hall has it right when he suggests “that the industry should focus on specific problems that it is going to solve around deployment and standardized monitoring”. And the other HP Tim, Mr. van Ash, rightly points out that we should “stop promising miracles”, which Forrester’s Jeffrey Hammond echoes, saying that there is a difference between a standard and “plug-and-play in reality”.

Tim Hall uses SQL as an example of a realistic common baseline. J2EE would be another one. They provide a good reality check. Standards are always supposed to prevent vendor lock-in. And there is a need for some of that, of course. But look at the track records. How many applications do you know that are certified and supported on any SQL database, any Unix operating system and any J2EE app server? And yet, standardizing queries on relational data and standardizing an enterprise-class runtime environment for one programming language are pretty constrained scopes in the grand scheme of things. At least compared to all the aspects that you need to standardize to provide real Cloud portability (security, monitoring, provisioning, configuration, language runtime and/or OS, data storage/retrieval, network configuration, integration with local apps, metering/billing, etc). And we’re supposed to put together a nice bundle of standards that will guarantee drag-and-drop portability across all these concerns? In how many lifetimes? By then, Cloud computing will have been replaced by the next big thing (galaxycomputing.com is still available BTW).

Not to mention that this standardization comes hand in hand with constraints on what you can do. That’s why I read Amazon’s Adam Selipsky’s comment that allowing customers to do “whatever they want” is vital as a way to say “get real” to requests for application portability, while allowing him to sound helpful rather than obstructionist.

This doesn’t mean that these standards are not useful. They make application portability possible if not free. They make for much improved productivity through generic tools and reusable developer knowledge. We still need all this.

Here is the best that can realistically happen in the “application portability across IaaS providers” area for at least 10 years:

  • a set of partial standards for small parts of the Cloud computing domain (see list above), many of which already exist.
  • a set of RightScale-like tools that do a lot of the grunt work of mapping/hiding/transforming between providers, with various degrees of success.
  • the need for application providers to certify their applications on Cloud providers one by one anyway and to provide cloning/migration as a feature of the application rather than an infrastructure-level task.

That’s assuming that IaaS providers become a major business and that there remains a difference between service providers and software providers. The other option is that the whole Cloud excitement goes back to SaaS only, that application creators are also hosting providers, that the only resource you get in a “utility” fashion is the application itself. At which point application portability is not a concern anymore and we go back to “only” worrying about data portability and application interoperability, an easier problem and one on which we have come a long way already. If this is what comes to pass, then the challenge of Cloud portability may well be one of the main reasons. Along with the lack of revenue/margin potential for many of the actors in an IaaS world, as my CEO is fond of pointing out.

[UPDATED 2009/4/22: F5’s Lori MacVittie provides a very nice illustration of the same point, in her explanation of why OVF is not a cloud portability silver bullet.]

[UPDATED 2009/6/1: Soon after posting this entry I was contacted by people at SD Times about turning it into a “guest view” article in the June issue. It has just been published. It’s also in the paper version.]

5 Comments

Filed under Amazon, Application Mgmt, Articles, Cloud Computing, Everything, Google App Engine, HP, IT Systems Mgmt, Mgmt integration, People, Portability, Specs, Standards, Utility computing

“Federationing”

I am glad to see that, as it inches towards standardization in the DMTF, the CMDBf specification is getting more visibility. Forrester’s Glenn O’Donnell recently wrote very positively about it on his blog, presenting it as a key enabler for a federation of MDRs (Management Data Repositories, a term introduced by the CMDBf specification so don’t look for it in ITIL). He argues this is the only way (rather than a single data store) to fulfill the ITIL-defined role of a CMS. Rob England (the IT Skeptic) has also shared his thoughts about CMDBf and they were noticeably less enthusiastic, to say the least. While Glenn calls the specification “profound”, Rob calls it “the most over-hyped vendor marketing smokescreen ever”. There is plenty of room in between them, which is where I sit. As I explained before, it does have real value (as a query language/protocol for system integration) but is nowhere near providing “federation” capabilities.

I am happy to see Glenn approve of CMDBf and I agree with him that accurate specialized MDRs are more useful than a single store that attempts to capture all the relevant data. As Glenn puts it, “pockets of the truth are far superior to unified ambiguity”. But I wasn’t very comfortable with the tone of his article, which seemed to almost encourage the proliferation of these MDRs. Maybe he was just trying to present a clean break with the “one big CMDB” approach and overreached. Or maybe I am just not reading properly.

Because while I agree that the answer is not “one and only one store”, I also don’t want to lose the value of having as much unification of the IT model as possible. Both at the data level (i.e. same metamodel/model, consistent retention/roll-up policies…) and the access level (i.e. in the same physical store, with shared access control, accessible using a well-known DSL for data manipulation…). Metamodel transformation and model bridging are costly (in accuracy, maintenance, reliability). If your CMS does more than just support a “model navigation” GUI, it may then need to run large queries that go across several portions of your IT model, including multiple different domains (e.g. a compliance rule kicked off at the app level, based on the type of data the app manipulates, that ends up having to look at the physical location of the servers running the hypervisors for the virtual machines that power the app). Through such global queries you can apply configuration rules, do impact analysis, event correlation, provide context to your transaction tracing, etc. No consolidation means no such queries (or a very limited subset). Considering the current state of federation, there is a lot more that you can do with your CMS if you have a very small number of MDRs rather than a sea of “federated” MDRs. This is why, as Oracle acquires IT management companies, we deliberately integrate their repositories with Enterprise Manager.
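
To make the compliance example above a bit more tangible, here is a conceptual sketch (Python; the classes, attributes and the “PII outside allowed countries” rule are all invented for illustration, and this is not CMDBf query syntax) of a query that walks from an application’s data classification down to the physical location of the servers behind it:

```python
from dataclasses import dataclass

# Toy model spanning several domains of the IT model (application, virtualization,
# hardware, facilities). In a consolidated CMS this is one traversal; across
# federated MDRs every hop below becomes a cross-repository query plus a
# metamodel mapping.

@dataclass
class Server:
    name: str
    datacenter_country: str

@dataclass
class Hypervisor:
    host: Server

@dataclass
class VirtualMachine:
    hypervisor: Hypervisor

@dataclass
class Application:
    name: str
    data_classes: frozenset
    vms: list

def out_of_policy(apps, allowed_countries, sensitive=frozenset({"PII"})):
    """Apps handling sensitive data whose servers sit outside the allowed countries."""
    violations = []
    for app in apps:
        if not (app.data_classes & sensitive):
            continue
        for vm in app.vms:
            server = vm.hypervisor.host
            if server.datacenter_country not in allowed_countries:
                violations.append((app.name, server.name, server.datacenter_country))
    return violations

if __name__ == "__main__":
    eu = Server("srv-eu-01", "FR")
    us = Server("srv-us-07", "US")
    billing = Application("billing", frozenset({"PII"}), [VirtualMachine(Hypervisor(us))])
    wiki = Application("wiki", frozenset({"public"}), [VirtualMachine(Hypervisor(eu))])
    print(out_of_policy([billing, wiki], allowed_countries={"FR", "DE"}))
```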

[UPDATED 2009/4/8: More, along the same line, from Glenn and his co-author Carlos Casanova available here. And my CMDBf partner-in-crime Van Wiles also responded to Glenn, bringing a BMC perspective.]

1 Comment

Filed under Application Mgmt, Automation, CMDB, CMDB Federation, CMDBf, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Modeling, Query, Specs, Standards

Open Cloud Manifesto, circa 2004

The mini-scandal of last week was the manifesto-gate. The mini-scandal of this week is shaping up to be the Ulitzer-gate (if you want to make sure not to miss next week’s IT scandal, subscribe to the Register feed; ferreting these out and adding a bass-heavy soundtrack is their specialty).

Turns out I am one of these Ulitzer “unaware authors” through two articles I wrote a while ago for the Web services Journal, a paper publication by Sys-con (based on a request from HP PR) and a blog post I allowed Sys-con to republish. Looks like Ulitzer and Sys-con are one and the same. Three articles, spaced two years apart. That’s enough to earn me a dedicated home page at Ulitzer and a rank of 1,000 among their more than 6,000 authors. Makes you wonder how much the 5,000 “authors” behind me have (unknowingly) produced… Whatever. At least it’s all content that I authorized Sys-con to use, not something that was lifted from my blog as apparently happened to others.

Turns out the oldest of these articles (“From Web Services Management to Utility Computing” , from 2004) is not that different from the recently-published (and amply maligned) Open Cloud Manifesto. I described my article at the time as “an attempt to explain how the different efforts going on in the industry around Web services, grid, SOA management, virtualization, utility computing, <insert your favorite buzzword>, fit together to provide organizations with the flexibility and efficiency they need from their IT in order to thrive.”

It ends with “while it would be easier to develop an end-to-end model specific to one company’s offering, standardization allows the integration of the management capabilities of all the components that compose enterprise services. We must keep the pressure on vendors to deliver modular and composable specifications (for format, function, and protocol) that expose management capabilities of infrastructure services, applications, and business processes in such a way that these capabilities can be composed by the next generation of management applications.”

Sure it has a lot more emphasis on WS-* specs than is compatible with the current zeitgeist, and it uses the now-obsolete term of “utility computing” rather than the nebulous alternative currently en vogue, but isn’t the main message there?

Just to be clear, I am not laying pretentious claims of prescience and vision (at least not in this entry). There are plenty of documents (e.g. from the Grid community) that make the same points in more eloquent terms and starting many years prior. It’s just fun to see this link from today’s scandal to the one from last week.

For old times’ sake, here is the content of the 2004 article:

From Web Services Management to Utility Computing
by William Vambenepe

Enterprise services are created by combining infrastructure services, applications, and business processes. To be able to adapt quickly to business changes, enterprise IT must evolve from management of individual resources to management of interrelated services. This will be achieved through the development of composable and modular standards that expose the management capabilities of the building blocks of enterprise services. The Web services platform is an enabler of this transformation: a Web services-based management infrastructure provides a channel that is appropriate for dynamic resource provisioning, allocation, and configuration – often called utility computing.

We can consider this management infrastructure as a four-layered architecture. Starting at the foundation layer, the work on the base Web services infrastructure is far from over. First, until WSDL 2.0 is widely deployed, designers have to compose around the deficiencies of WSDL 1.1, such as the lack of portType inheritance. Second, there is still no standard for referencing Web services. Finally, key specifications such as WSRF (Web Services Resource Framework) and WSN (Web Services Notification), without which people were left to reinvent Web services interfaces to access stateful resources, have only recently reached the standards community. These issues are being resolved and a set of building blocks for accessing resources through an SOA (service-oriented architecture) is shaping up. It is critical that these building blocks be modular and composable to allow incremental adoption and separation of concerns.

Moving from the foundation to the management protocol layer, the OASIS WSDM (Web Services Distributed Management) technical committee, through its MUWS (Management Using Web Services) specification, is the key articulation point between the base Web services architecture and utility computing. Both the IT management community and the Grid community rely on MUWS. It defines how to express and exercise manageability capabilities through Web services, putting in place a management channel that is more interoperable and accessible than ever before.

Next is the modeling layer. Information models need to be composed so that a service can be represented based on the services that it is assembled from, be they peer or infrastructure services. Since these will be described by different models, the management channel (MUWS) needs to be model-agnostic in order to support a model-centric architecture. For example, CIM (Common Information Model) is a model that focuses on concrete resources. The DMTF WS-CIM subgroup must now open CIM to the Web services platform by developing a standard way to expose CIM-modeled resources through MUWS. Other models provide representations for service security, service-level agreements (SLA), etc. Only by composing these models will, for example, an auction service SLA be adequately managed as it depends on a combination of the performance of the servers on which the service runs, the application server that hosts it, the other services (authentication, billing, etc.) that it makes use of, and the business process engine that controls the bidding. Once this model-centric architecture is in place, management actions can be policy-driven through explicit constraints.

Finally, at the top layer, the architecture includes a set of common services for utility computing. They are being defined collaboratively by DMTF (Utility Computing working group) and GGF (OGSA working group).

All the pieces are falling into place but much remains to be done to allow comprehensive management of enterprise services in a model-centric way through Web services standards. While it would be easier to develop an end-to-end model specific to one company’s offering, standardization allows the integration of the management capabilities of all the components that compose enterprise services. We must keep the pressure on vendors to deliver modular and composable specifications (for format, function, and protocol) that expose management capabilities of infrastructure services, applications, and business processes in such a way that these capabilities can be composed by the next generation of management applications. These applications will use this to synchronize business and IT and to capitalize on change.

Comments Off on Open Cloud Manifesto, circa 2004

Filed under Application Mgmt, Articles, Automation, Business Process, Cloud Computing, Everything, IT Systems Mgmt, Mgmt integration, Modeling, Specs, Standards, Utility computing, Virtualization

OVF 1.0 and beyond

OVF 1.0 just got released as a DMTF standard. Here is the specification and its companion white paper. After a quick scan I didn’t see any major change from the submitted version, which is consistent with the content of the “preliminary standard” from last year.

The interesting question is what comes next, especially with regard to VMWare’s vCloud. The VMWare press release stated that “as one of the original authors of the Open Virtualization Format (OVF) standard now released from the Distributed Management Task Force (DMTF), VMware will build upon that work by submitting a draft of its VMware vCloud API to enable consistent mobility, provisioning, management, and service assurance of applications running in internal and external clouds” and Drue Reeves at the Burton Group commented on this (Drue, we’re still waiting for part II). I see no reason to believe that VMWare is going to stop playing from the Microsoft playbook in DMTF, as it appears to be quite successful so far (I’ll pat myself on the back for predicting over a year ago that “OVF might only be the beginning” for VMWare at DMTF).

This results in what looks like a landgrab by DMTF in Cloud standards. Meanwhile, in Washington DC yesterday, the Strategies and Technologies for Cloud Computing Interoperability (SATCCI) workshop took place. At this point all I know about it is the report from Reuven Cohen that I just read (hopefully Stu, Krishna and other bloggers who participated will provide additional perspectives). From Reuven’s report, Winston Bumpus (Director of Standards Architecture at VMware and President of the DMTF) described OVF as “an ideal cloud migration and deployment package”. Which may be true but is a pretty recent repurposing (the spec and the white paper don’t even mention this application). And while the DMTF is going full speed ahead on this, Reuven reports that “Craig Lee, President of the Open Grid Forum suggested that we need to take more time to examine the overlap between various standards groups, mapping the opportunities for collaboration”. Sure thing. The old timers might remember that when the DMTF decided to run with Microsoft’s WS-Management, it wasn’t just OASIS (where WSDM was created) that eventually got hosed but also OGF (then called GGF), which relied on the WSRF/WSDM stack. At the time too there were discussions to identify and reconcile the overlap, for all the good they did (disclosure: I have some history there).

We’ve seen this in the WS-* game before. At the end it’s not so much a matter of what the standards bodies do (and even less of what they say), it’s a matter of what the big players do and where they choose to take their marbles. To the extent that you can separate the two, which becomes tricky in the case of vendor-run bodies like WS-I and DMTF. As I have written before, “at the end, it comes down to what [you think] a standard should be”.

[UPDATED 2009/3/26: Stu has now written a report on the SATCCI meeting.]

5 Comments

Filed under Cloud Computing, Conference, DMTF, Everything, Grid, IT Systems Mgmt, OVF, Portability, Specs, Standards, Utility computing, Virtualization, VMware, WS-Management

Cloud computing: would you like flexibility with your simplicity?

The recent announcement of the Sun Cloud, and more specifically of its API, is a good occasion to think about how much simplicity we really want in our datacenter automation mechanisms. The Sun API is very simple and its authors are proud of that fact. Indeed, they should be proud of avoiding unneeded complexity. They have probably also kept out (at least so far) some needed complexity.

First, let’s focus on the important part:

It’s not REST that matters, it’s the rest

Most of the comments on the API focus on the fact that it’s RESTful. The authoritative source on this is Tim Bray’s description of the API, which he helped shape. But Tim is very down-to-earth about the reasons to use REST:

Why REST? · It’s a sensible question. The chief virtue of RESTful interfaces is massive scaling. But gimme a break, these are data-center management operations; a typical transaction frequency would be a single-digit number per week, with the single digit often being “0”, and it wouldn’t be surprising if a big multi-cluster staged-boot operation had a latency of minutes. The data-center controls are unlikely to be a bottleneck.

Why, then? Simply because we wanted a bits-on-the-wire interface. APIs, in the general case, suck; and are really hard to make portable. Bits-on-the-wire are ultimately flexible and interoperable. If you’re going to do bits-on-the-wire, Why not use HTTP? And if you’re going to use HTTP, use it right. That’s all.

The use of REST is not a fundamental characteristic of the API. In other words, if this API turns out to be useful I can rewrite it as a SOAP API and it would still be useful. Unless the SOAP API is made purposely complicated, it would only be marginally harder to use, not fundamentally less useful.

In fact, we may find out. If the rumor is confirmed and IBM decides to Tivolify (rather than kill) the Sun Cloud, the whole thing can be refactored as WS-RT/XML/XQuery (and maybe WS-ResourceCatalog) in five days, four of which would be spent capturing, sedating and restraining Tim Bray (and his “spec machete”) with the last one used for coding.

In the case of the Sun Cloud API, REST makes the API simpler in the same way that a keyless system makes a car easier to operate. You don’t have to fumble for the key, but you still need to know how to parallel park, change a tire and operate the stereo.

By using REST, the Sun team has kept away some arbitrary complexity (e.g. fine-grained PUT; instead, Sun decides what the two valid sets of input parameters to create a cluster are). But that’s only a small percentage of the potential complexity of the system. Not to mention that most developers will use libraries rather than on-the-wire protocols, so they won’t see any difference. Instead, the real deal is:

The model

By “the model” I mean both the resource model and the capabilities of the resources. For capabilities, I don’t care whether a virtual machine can be started via an HTTP GET request on a URL that ends with ?control=start, or via a SOAP message with the wsa:Action header set to http://iloveclouds.com/vm/start or via an RPC call to a Start(…) method. I just care that the model includes the capability to start a VM. And the list of states a VM can be in.
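
To illustrate why the encoding matters so little, here is a sketch (Python; the endpoint, the ?control=start URL and the wsa:Action URI simply reuse the hypothetical examples above, and none of this corresponds to a real API) of the same “start this VM” capability expressed three ways. A client library flattens the difference; what the caller really depends on is the model, i.e. that VMs can be started and what states they can be in:

```python
# Three encodings of one capability from a hypothetical model: "start a VM".
# None of these are real APIs; the point is that the wire style is
# interchangeable as long as the underlying model stays the same.

BASE = "https://cloud.example.com"  # hypothetical endpoint

def start_vm_rest(vm_id: str) -> dict:
    """REST-ish: an HTTP request on a URL ending in ?control=start."""
    return {"method": "POST", "url": f"{BASE}/vm/{vm_id}?control=start", "body": ""}

def start_vm_soap(vm_id: str) -> dict:
    """SOAP-ish: the same capability, expressed through a wsa:Action header."""
    envelope = f"""<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <s:Header>
    <wsa:Action>http://iloveclouds.com/vm/start</wsa:Action>
    <wsa:To>{BASE}/vm/{vm_id}</wsa:To>
  </s:Header>
  <s:Body/>
</s:Envelope>"""
    return {"method": "POST", "url": f"{BASE}/soap", "body": envelope}

class VmStub:
    """RPC-ish: the same capability behind a local Start() method."""
    def __init__(self, vm_id: str):
        self.vm_id = vm_id

    def start(self) -> dict:
        # Delegates to whichever wire style the library author picked.
        return start_vm_rest(self.vm_id)

if __name__ == "__main__":
    for req in (start_vm_rest("vm-42"), start_vm_soap("vm-42"), VmStub("vm-42").start()):
        print(req["method"], req["url"])
```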

Look at a datacenter today. Make an inventory of all the networking equipment, storage, servers, hypervisors, operating systems and infrastructure services that it contains. Consider all the configuration settings of all these resources (as they would be represented in a complete, authoritative and consistent CMDB, that most elusive creature). Add to it all the controls and APIs they expose. That’s a lot of data, even if you don’t consider the applications layer. That’s a few orders of magnitude larger than what the model in the Sun Cloud API can describe. That gap (between our CMDB model and the Sun Cloud model) is what we should look at and analyze. Why are they so far apart? How big is the ideal datacenter automation and virtualization model?

Among other things, these hundreds of configuration settings in your current datacenter are used to optimize deployments. No-one would miss the pain of dealing with the optimizations if they went away, but we would miss the performance benefits they bring. So what replaces them if the model is too simple to support any tweak? Is the infrastructure behind the API auto-optimized, based on actual application patterns? Now that would be real progress towards simplicity and may allow us to rely on an API as simple as the Sun API. But the industry has been trying to do this with little success for a long time. I expect incremental, not radical, progress on this. Alternatively, does Cloud Computing change the economics to the point where performance optimizations through configurations are no longer cost-efficient, where scaling out is the answer? Hard to make this a general statement, considering how difficult it remains for many applications to scale out. And this sounds very SUV-like in these footprint-aware times (we see how well the “stretch the hood and add two cylinders to the engine” approach worked for Detroit).

Sun might very well have this covered under the hood. But I don’t know that I want to assume that they have an auto-optimizing system just because they produced an API that would benefit from having it underneath.

Not to mention that not all configuration tweaks have to do with performance optimization. Some of them are driven by licensing, organizational, risk and compliance considerations. If auto-detecting an application performance profile is hard, try auto-detecting its regulatory requirements.

Complexity with a purpose

The right place to be, between the “omniscient CMDB model” and the “Sun Cloud model”, is somewhere in the middle, with a couple of incrementally complex layers. Of course they are so far apart that saying “somewhere in the middle” is a cop-out. The current level of complexity is very hard for humans to manage (assisted by processes and tools, e.g. ITIL) and impossible to really automate. A lot of the complexity and variability is arbitrary rather than flexibility-inducing. We need to reduce this (all-out standardization is one way, stack integration is another). But the simplicity of the model in the Sun Cloud API is too extreme. Look at Amazon EC2. Everyone lauds the simplicity of the APIs and everyone, in the same breath, asks for more options (different instance types, availability zones, reserved instances…). Amazon (and Sun too, I assume) is taking the eminently rational approach of starting from simple and adding complexity (sorry, flexibility) as needed. That’s great. Just don’t get too enamored with the initial simplicity.

[UPDATED 2009/3/20: James Governor lauds the simplicity of Amazon’s cloud offering. If I understand him correctly, he sees simplicity as coming not just from “few options” but also from backward compatibility with current app infrastructure. That second part is what William Louth criticizes in his comment below. At the very least I like to keep the two separated: “how intrinsically simple is it” and “how backward compatible is it”, even though both can be seen as providing the benefit of simplicity.]

7 Comments

Filed under Automation, Cloud Computing, Everything, IT Systems Mgmt, Modeling, REST, Specs, Tech, Utility computing, Virtualization

Exploring “IT management in a changing IT world”

The tagline for this blog is “IT management in a changing IT world”. Of course nobody but their authors cares about blog taglines. Still, in the unlikely event that I am asked to expand on the “changing IT world” part, I would do it as follows.

The changes currently at work in the IT world can be organized along three axes:

  • IT infrastructure and management
  • Application development and delivery
  • Business and regulation

Each of these categories is ridiculously large. It’s only through the prism of the relationships between them that they provide any value. Think about three balls linked by coil springs.

If you give one of these balls a shake, you will start a hard-to-predict dance between them. This is similar to how the three domains above relate to one another. Changes in one (say a new focus on regulatory compliance in the “business” area, the emergence of virtualization technology in the “infrastructure” area or the appearance of Web 2.0 applications in the “application” area) start a complex movement involving all three. It takes a while to achieve a new equilibrium (and in practice it is never achieved since changes occur too often, adding stimulus to an already excited system). For a visual illustration, see this little YouTube video (but imagine that the three balls are arranged in a triangle rather than linearly and that every so often one of them gets pulled in a random direction).
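
For those who prefer code to video, here is a back-of-the-envelope sketch (Python; the spring constants, the threshold and the initial shake are arbitrary) of three coupled balls obeying F=-kx. Shake ball 1 and watch how much sooner the jolt reaches ball 3 when the force constant k goes up, i.e. when the coupling gets tighter:

```python
# Three balls in a triangle, each pair linked by a spring of constant k.
# Crude Euler integration; the numbers are arbitrary, the point is qualitative:
# the stiffer the coupling, the faster a shake in one ball is felt by the others.

def time_to_feel_it(k: float, threshold: float = 0.3, dt: float = 0.001) -> float:
    """Time until ball 3 is displaced by `threshold` after ball 1 is shaken."""
    x = [1.0, 0.0, 0.0]              # ball 1 starts displaced (the "shake")
    v = [0.0, 0.0, 0.0]
    pairs = [(0, 1), (1, 2), (0, 2)]
    t = 0.0
    while abs(x[2]) < threshold and t < 60.0:
        f = [0.0, 0.0, 0.0]
        for i, j in pairs:
            force = -k * (x[i] - x[j])   # F = -kx for this pair's stretch
            f[i] += force
            f[j] -= force
        for i in range(3):
            v[i] += f[i] * dt            # unit masses
            x[i] += v[i] * dt
        t += dt
    return t

if __name__ == "__main__":
    for k in (0.5, 5.0):
        print(f"k={k}: ball 3 feels the shake after ~{time_to_feel_it(k):.1f}s")
```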

This is not new of course. There have been changes in these three areas for as long as IT has existed (starting before it was called IT) and they have always driven changes in how IT is managed. To some extent they also have always influenced one another. The “new” part is that the connections are a lot tighter now, that the springs have a much higher force constant (the “k” in “F=-kx”). So here is my attempt at mapping today’s hot buzzwords on a map organized along these areas.

Before you ask: yes of course I have a very rigorous methodology, based on very precise quantitative data, to establish with certainty the exact x, y and z coordinates of each label. Buzzword topology is a precise science.

You may notice that the buzziest buzzword (at least currently), “Cloud”, does not appear on the map. It’s because it buzzes so much that it would be all over it, engulfing what currently appears as “virtualization”, “datacenter automation”, “IaaS”, “PaaS”, “SaaS” and “opex/capex”. There are two main parts in the “Cloud” buzzword: the “Technical Cloud” and the “Business Cloud”. The “Technical Cloud” is where we take virtualization and standardization (of machines, networks and application infrastructure) and turn that mind-boggling complexity into a manageable system that can be programmed to deliver applications (Cisco recently called it “Unified Computing”; HP, IBM and others have been trying to describe and brand it for a long time). Building on these technical capabilities comes the second part of “Cloud”, the “Business Cloud”. It is the ability to use infrastructure owned by a third party (presumably one able to leverage economies of scale) and all the possibilities this opens in the business realm. That’s what “Cloud” started as, back when it was known as “Utility Computing” and before it was applied to everything under the sun. A recent illustration of the relationship between the “Technical Cloud” and the “Business Cloud” is the introduction of vCloud by VMWare (their vision includes using VMotion technology, a piece of the “Technical Cloud”, not just to move machines between neighboring hypervisors but between organizations, enabling the “Business Cloud”). Anyway, that’s why “Cloud” is not on the map. It is actually all over it.

The system displayed on the map is vibrating very intensely right now, and I don’t see this changing anytime soon. Just for fun, here are candidates for future boxes on the map:

  • In the “IT infrastructure and management” category, maybe one day we’ll get to real metadata-driven management integration across the stack (as opposed to the more limited “application modeling” area listed above), whether through RDF or not.
  • In the “application development and delivery” category, maybe Doug Purdy’s vision “to make everyone a programmer (even if they don’t know it)” will be realized, whether through Oslo or not.
  • In the “business and regulation” category, maybe one day corporations will actually start caring about the customer data they are entrusted with (but only if mishandling it finally costs them more than “sorry about that, here is a one-year credit monitoring subscription, now go away”).

In summary, the evolution of IT management is driven not only by changes in IT technology but also by changes in two other fields (“application development and delivery” and “business and regulation”) with which it is tightly connected. Both of these fields are also in a very dynamic state. And they also influence one another, resulting in a complex three-way dance. You can’t understand the trajectory and moves of one dancer without seeing the others.

That’s what I mean by “IT management in a changing IT world”. Thanks for asking.

[UPDATED 2009/6/25: For more on the “technical cloud” versus “business cloud”, go read Neil Ward-Dutton’s nice explanation. He actually breaks down the “business cloud” into two (separating the economic aspect from the strategic aspect).]

1 Comment

Filed under Application Mgmt, Automation, Big picture, BPM, BSM, Business, Cloud Computing, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Open source, Utility computing, Virtualization

Cloudman to the rescue?

GMail had a hiccup last night. Last month it had a more serious outage and so did AWS.

In the movie:
Superman: Easy, miss. I’ve got you.
Lois Lane: You’ve got me? Who’s got you?

In the IT world:
Cloud provider: Easy, CIO. I’ve got you.
CIO: You’ve got me? Who’s got you?

With apologies to Werner Vogels.

Comments Off on Cloudman to the rescue?

Filed under Amazon, Application Mgmt, Cloud Computing, Everything, Google, IT Systems Mgmt, Utility computing

WS-Discovery and WS-DeviceProfile public review

We are in the middle of the public review period for the three specifications coming out of the “Web Services Discovery and Web Services Devices Profile (WS-DD)” OASIS technical committee. The specifications are:

  • Web Services Dynamic Discovery (WS-Discovery) 1.1
  • Devices Profile for Web Services (DPWS) 1.1
  • SOAP-over-UDP 1.1

This trio has been swirling around for many years. Mostly in the USB/PnP/UPnP community (devices physically attached to Windows boxes) but they have popped up on the enterprise WS-* radar screen from time to time, for two reasons:

  • WS-DD as another specification (in addition to WS-Management and the later incarnations of WS-MeX) that uses WS-Transfer, and therefore as a (poor, in my view) justification for making it a stand-alone specification
  • WS-Discovery as potentially useful for datacenter management (for resource discovery, obviously)

Looking at the members list for the technical committee seems to confirm the suspicion that many members would be more interested in the potential datacenter-related applications (of WS-Discovery, presumably) than the USB gadgets use cases: CA, IBM, Novell, Progress, RedHat, Software AG, WSO2. Well, I guess Novell and RedHat could be in the game from the perspective of plugging devices into desktops running their version of Linux rather than from a datacenter perspective. CA and IBM could be in it from an Asset Management (for devices) perspective too. So who knows (if you had no tolerance for idle speculations you wouldn’t be reading blogs, would you?).

In any case, no Apple, Palm, RIM, Google (Android) or Sun (JavaFX). I guess handheld devices aren’t the devices targeted here. Rather it’s more printers and cameras. In which case you have to wonder why HP isn’t there but that’s another question (I’ll pretend not to know).

No point dwelling on this, especially since the member list is a very imperfect representation of the actual participation. As an alternative we can scan the mailing list archive to see who is active. Judging from the last two months (feb 2009, march 2009) it’s Microsoft, Microsoft and that company in Redmond. Microsoft, as we know, created these specifications from the start and they seem to still be the engine.

In the end it remains to be seen if WS-Discovery has any role to play in datacenter infrastructure discovery. That environment is clearly becoming more dynamic, but the most frequent changes will take place at the level of the guest VM (creation of new instances) and above (applications, services…) and all these creations will presumably be orchestrated by some controller so there is little need for a “hello world” resource-initiated discovery mechanism for them.

Which leaves the physical equipment. Do we need WS-Discovery to get from the point of a server being plugged in to the point where it has been provisioned? There are alternatives in this somewhat homogeneous environment. And UDP multicast only takes you so far in a datacenter (at least without extensions). So my guess is no. But who knows. The drafts are here for you to review if you think it may be relevant.
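
To make “resource-initiated discovery” concrete, here is the mechanism reduced to a toy UDP multicast announcement in Python. The real WS-Discovery Hello is a SOAP envelope rather than this made-up payload, and I am quoting the well-known multicast address and port from memory; the TTL of 1 is precisely the subnet-scope limitation mentioned above.

```python
# Toy illustration of resource-initiated discovery: a newly plugged-in box
# multicasts an announcement of itself. The real WS-Discovery "Hello" is a
# SOAP envelope over UDP; the payload below is a made-up placeholder and the
# address/port are WS-Discovery's well-known ones, quoted from memory.
import socket

WSD_MCAST_GROUP = "239.255.255.250"
WSD_MCAST_PORT = 3702

def announce(resource_id: str) -> None:
    msg = ("hello " + resource_id).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL of 1 keeps the announcement on the local subnet, which is exactly
    # the "UDP multicast only takes you so far" limitation noted above.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(msg, (WSD_MCAST_GROUP, WSD_MCAST_PORT))
    sock.close()

announce("urn:uuid:new-server-42")
```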

Comments Off on WS-Discovery and WS-DeviceProfile public review

Filed under Automation, Everything, IT Systems Mgmt, Manageability, Microsoft, Specs, Standards

Managing the stack from top to bottom, including virtualization

The press release for the release of Oracle Enterprise Manager 10gR5 came out yesterday, but that’s not all: the Oracle VM Management Pack for Enterprise Manager was also announced yesterday. What this illustrates is that, in addition to the commonly-cited “one neck to choke” benefit of getting the entire stack from one vendor (from the hypervisor to the application, including the OS, DB and MW), there is also the benefit of getting a unified management environment for the whole stack. Here is how my friend and Oracle colleague Adam Hawley (director of product management for Oracle VM and previously with Enterprise Manager) describes it in more detail:

So what’s so big about it and why does this give us a clear advantage over others?

  • No other company can offer management of the virtualization AND the workload that runs inside the virtualization at this depth and scale: not anyone. We now offer a single management product…Enterprise Manager Grid Control…that manages your entire data center from top-to-bottom:  from the packaged application layer (Siebel, PeopleSoft, Beehive, etc.) through all the middleware and database layers to the OS and virtualization itself. And we do that for both the physical and virtual worlds together seamlessly.

    • Other virtualization vendors either ONLY do virtualization management or to the extent they do anything else, it is typically one other category in the stack…virtualization plus the OS or virtualization plus some very specific applications (but no OS…), etc.
    • No one else can provide the entire picture the way we can with Oracle VM
  • So what does that mean for users?
    • It means Oracle VM is virtualization with a difference:
      • It is virtualization that makes application workloads faster, easier, and less error prone to deploy with Oracle VM Templates as pre-built, pre-configured VMs containing complete product solutions maintained in a central software library for easy re-use:  download from Oracle, import the VMs, use the product.  Simple.
      • It is virtualization that makes workloads easier to configure and manage:  Automate deployment of the VMs, installation of the management agent, and enable powerful, in-depth monitoring of guests and Oracle VM Servers including configuration management…
        • Set-up configuration policies to track how your VMs and servers are configured and to alert you if that configuration changes or “drifts” over time
        • What about if you have one VM running perfectly and another supposedly identical one not doing as well?  Run a configuration compare to check for differences not only in packages or application versions in the VM, but also down to OS parameter settings and other key items to rapidly identify differences and address them from the same console
      • It is virtualization that makes workloads easier to troubleshoot and support:

        • Not only is Oracle VM support very affordable compared to anyone out there, management of Oracle VM servers in Enterprise Manager makes it so much easier to rapidly track down issues across the layers of your data center from one UI. With other vendors, to troubleshoot an issue with applications or the database, you have to trace it down through your environment, possibly to the virtual machine, but then how do you get all the info about the VM itself, like its parameters and which physical server it is hosted on?  You have to jump to another tool entirely… whatever stand-alone tools you are using to manage the virtualization layer… to get the information and then go back-and-forth:  tedious and time consuming. With Enterprise Manager, it is all there in one UI.  Need to tweak the number of virtual CPUs based on your database performance analysis report indicating a CPU bottleneck?  Navigate from the performance page for the database to the home page of that virtual machine and adjust the configuration in the same UI.  Done.  Well, OK, you may have to restart the application for the new vCPU setting to take effect but you can still do that all within Enterprise Manager, saving time and minimizing risks.
        • This can dramatically reduce the time to troubleshoot as well as reduce the chances of human error navigating between multiple products with different structures and concepts to help you maximize your up-time.

So this is where it starts to get interesting. This is where the game starts to really be about not just the virtualization itself, but how it makes the rest of your overall data center better and more efficient.  The Oracle Enterprise Manager Grid Control Oracle VM Management Pack is a huge step forward for users.

[UPDATED 2009/3/21: An Oracle Virtualization blog has recently been created. So now you can hear directly from Adam and his colleagues.]

1 Comment

Filed under Application Mgmt, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Oracle, OVM, Virtualization

CMDBf is a lot more and a lot less than you think

The DMTF CMDBf working group has recently published an updated draft of its specification. The final version should follow soon and I don’t expect major changes so now is not a bad time to start thinking about what this baby can do.

Since CMDBf stands for “configuration management database federation”, you might think the obvious answer to the “what can it do” question is “build a federation of configuration management databases”. Except it’s not. Despite its name, CMDBf provides little support for federation unless you take a very loose definition of the term. The specification gives you a query language and a very simple registration interface, with a sprinkle of metadata to improve interoperability. The query language lets you talk to a CMDB to retrieve information on configuration items (CIs) that it knows about. The registration interface lets you keep a CMDB informed of changes to CIs that it may care about. If you want to build a real federation on top of this, one that scales to the type of environment that CMDBs are used for today, you have to go further than what the specification provides. What CMDBf does give you is some amount of integration between CMDBs (at the protocol level at least, not at the model level). It may not sound like much but it is a lot of progress on the current situation and the right incremental step, whether you are aiming for true federation as the end goal or not.

That’s the “a lot less than you think” part. So, what’s the “a lot more than you think” part? Good stuff all around:

CMDBf provides a metamodel that is well-suited for complex IT systems and it provides an elegant graph-oriented query language on top of it. The most convenient representation for an IT system is neither “one big XML document” nor “a sea of nodes and edges”. CMDBf gives you a middle ground: a graph model with XML leaf nodes. So you can precisely model the relationships between your IT elements using explicit relationships (with their own records), but you can also attach a well-understood piece of XML to an item as a record without having to break that XML into a bunch of tiny relationships.
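
To make that middle ground concrete, here is a minimal sketch (in Python, with class and field names of my own invention, not the CMDBf schema’s) of what a graph-with-XML-leaves looks like: items and relationships are first-class nodes and edges, and each carries opaque XML records as its payload.

```python
# Minimal sketch of a CMDBf-like metamodel: a typed graph whose items and
# relationships each carry opaque XML "records" as leaf payloads.
# Names and structure are illustrative, not taken from the CMDBf schema.
from dataclasses import dataclass, field
from typing import List
import xml.etree.ElementTree as ET

@dataclass
class Item:
    item_id: str
    records: List[ET.Element] = field(default_factory=list)   # XML leaf nodes

@dataclass
class Relationship:
    rel_id: str
    rel_type: str            # e.g. "runsOn", "dependsOn"
    source_id: str
    target_id: str
    records: List[ET.Element] = field(default_factory=list)

# A two-node graph: a database item related to the host it runs on.
db = Item("item-db-1", [ET.fromstring('<db name="orders" version="10.2.0.5"/>')])
host = Item("item-host-7", [ET.fromstring('<host os="Linux" cpus="4"/>')])
runs_on = Relationship("rel-1", "runsOn", db.item_id, host.item_id)
```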

I am pretty sure there are other domains, beyond IT systems, for which this would be useful. It will be interesting to see if the CMDBf specification gets considered outside of its intended scope. But these domains are more likely to end up using RDF/OWL/SPARQL instead. Not everyone has made the leap from XML as a tool to XML as a religion, which made CMDBf necessary for us. But let’s not veer into another rant.

Let’s go back instead to describing how useful CMDBf can be to IT systems management, independently of any “federation” objective. Let me put it this way: if one were to create from scratch a configuration store for IT systems they should strongly consider the CMDBf conceptual model as the base metamodel. And something along the lines of the CMDBf Query (though not necessarily through its XML serialization) as the native query language for it. Most CMDBf implementers of course are not in this situation. Rather than writing the store from scratch they will create a CMDBf wrapper/interface on their current CMDB. And that’s fine too. CMDBf will work well as an interoperability protocol. Putting aside my gripes about XPath overuse, CMDBf strikes a reasonable balance that makes it implementable on top of any back-end technology (relational, XML, RDF, in-memory objects, bags of name-value pairs…). And the query patterns it supports map well to CMDB-to-CMDB integration use cases. But it is underselling it, in my view, to restrict it to this over-the-wire interoperability scenario. CMDBf also provides a very useful foundation for local access to the CMDB. CMDBf graph queries can support powerful visualization of the content of the CMDB. They can support the definition of configuration rules. They can support in-depth inspection of relationships (e.g. fault tree).
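
To give a feel for what such a graph query looks like, here is a rough sketch that continues the toy structures above. It mimics the pattern of CMDBf item and relationship templates, not their actual XML serialization: the item tests only look at the leaf XML records (which is where an XPath predicate would be scoped in CMDBf), and the relationship template constrains how the matched items are connected.

```python
# Continues the toy structures from the previous sketch. "Find every database
# item that runs on a Linux host": item tests look only at the leaf XML
# records; the relationship constraint ties the matched items together.
def has_record(item, tag=None, **attrs):
    """True if one of the item's XML records has the given tag and attribute values."""
    for rec in item.records:
        if tag is not None and rec.tag != tag:
            continue
        if all(rec.get(k) == v for k, v in attrs.items()):
            return True
    return False

def graph_query(items, relationships, rel_type, source_test, target_test):
    by_id = {i.item_id: i for i in items}
    for rel in relationships:
        if rel.rel_type == rel_type:
            src, tgt = by_id[rel.source_id], by_id[rel.target_id]
            if source_test(src) and target_test(tgt):
                yield src, rel, tgt

for src, rel, tgt in graph_query(
        items=[db, host], relationships=[runs_on], rel_type="runsOn",
        source_test=lambda i: has_record(i, tag="db"),
        target_test=lambda i: has_record(i, tag="host", os="Linux")):
    print(src.item_id, "--runsOn-->", tgt.item_id)
```

A configuration rule or a fault-tree inspection is just a richer instance of the same pattern: more templates, more relationship constraints, and real XPath instead of the attribute test above.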

And that may just be the beginning. It could take three directions after v1:

The first one, as always for a standard, is that it is ignored and becomes irrelevant. I have to reluctantly list this one first, because it is statistically the most likely for a new standard. Especially one that is not a ratification of an existing de facto standard. And one that threatens an important control point for vendors. A slight variation on this scenario is for CMDBf to succeed from a marketing perspective, as a checkmark that most vendors tick, but not as a true technology. This is the “smokescreen” scenario from Mr. Skeptic. One scenario that worries me is that CMDBf could fail because of the poor models of the CMDBs that implement it. If your IT model is not granular enough or if it matches the UI of your application more than the semantics of the IT components, then CMDBf will expose these shortcomings and probably be blamed for them (with bad models, “shoot the messenger” becomes “shoot the protocol”).

The second possible direction is that CMDBf provides enough value in integrating CMDBs that people want more and challenge the group to deliver on the “f” part, federation. That could take the form of a combination of:

  • better integration with other protocols (mostly from the WS-Management family, like WS-Enumeration and WS-Eventing),
  • reconciliation support (here are ways to address it),
  • some model transformations or canonical models,
  • some optimizations in the query mechanism for distributed queries (e.g. data partition rules).

The third possible direction (not exclusive) is for CMDBf to become the basis for a standard rule language for IT models. Yeah, another one (remember SML?). SPIN and SML show us how a generic query language can be used to support configuration rules. I very much like SPIN but it requires adopting RDF as a metamodel, which is a hard sell in XML-land. SML suffers technically from being too reliant on an inappropriate validation tool (XSD) and treating relationships as a second thought rather than an integral part of the model. Which is fine in many areas (EMF does it too), but not, in my view, when modeling IT systems.

If we are not going to use RDF/SPIN then let’s copy them. We can use the CMDBf metamodel (graph-based) where SPIN uses RDF. We can use the CMDBf query language (graph-oriented) where SPIN uses SPARQL. Since CMDBf queries use XPath, we see some commonalities with SML (which uses XPath through Schematron). But in CMDBf XPath is scoped to the leaf nodes of the graph, not the entire model as it is in SML. In other words, SML adds relationship traversal to XPath, while CMDBf adds XPath to its relationship-aware queries. It’s a matter of who’s on top. It sounds academic but it isn’t.

Does the industry really want standardized, re-usable configuration rules? SML/CML seem to say no. The push towards Cloud interop, on the other hand, begs for it. At least if you believe in programming your environment in a way that is partially declarative rather than entirely procedural.

[UPDATED 2009/3/5: Rob England (a.k.a. Mr. Skeptic as I refer to him above) provides a geek-to-English translation for this post. Neat!]

2 Comments

Filed under CMDB, CMDB Federation, CMDBf, DMTF, Everything, Graph query, IT Systems Mgmt, Mgmt integration, Modeling, RDF, SML, Specs, Standards, Tech, XPath

Oracle Enterprise Manager 10g R5

Oracle Enterprise Manager 10g R5 (a.k.a. 10.2.0.5) is coming out. It will be announced by our SVP on Tuesday (subscribe to the Webcast). InfoWorld got a preview. From the application management perspective, it includes new management capabilities for middleware products that came from BEA (WL and OSB) and for several Oracle applications (Siebel, EBS, PeopleSoft, Beehive, BRM). And lots of goodies in other areas (virtualization, database, automation…).

[UPDATED 2009/3/3: bits! bits! bits!]

[UPDATED 2009/3/4 (and later): My colleague Chung Wu provides more details, followed by drill-downs into specific areas. I’ll add links here as they become available:

Thanks Chung!]

Comments Off on Oracle Enterprise Manager 10g R5

Filed under Application Mgmt, Everything, IT Systems Mgmt, Oracle

Towards making Cloud services more consumable by enterprises

If you are a hard core network/system management person who has been very suspicious of all the ITIL/ITSM bullshit from the boss, and even more suspicious of the “Cloud” nonsense that occupies the interns while they should be monitoring the event console, then I have bad news for you: they are mating. Not the interns (as far as I know), I am talking about ITSM and the Cloud.

If, on the other hand, you are an ITSM and Cloud enthusiast who sees himself/herself leveraging all these nifty ITSM/BSM tools and shiny Cloud services to become the ultimate CIO, delivering unprecedented business value, compliance and IT efficiency from your iPhone at the beach, then you’ll see this marriage as good news, a sign that your move to Hawaii is approaching.

I am referring to the announcement by Rodrigo Flores that his company, newScale, has released a new product, newScale FrontOffice for Virtual Data Centers, to incorporate Cloud-based services in their IT Service Catalog.

This is not sexy but it’s the kind of support that some classes of enterprises will need in order to really make use of Cloud services. Eventually, Cloud providers are going to have to move their focus away from cool technology and developer evangelism towards making their services easily consumable by corporations. Otherwise they’ll be like an office supply store that doesn’t take American Express.

While the direction is very interesting I can’t comment on how much value this new product actually provides because the company seems to engage in anti-marketing activities. If you want to dig a bit deeper than the press release, you get redirected to this page which requires your info in order to download the “complimentary information brief”. The confirmation page promises an email “containing the information you requested within the next 30 minutes”. I thought that the info I requested was the brief. What I got instead was an email asking me to call them. I don’t know if it’s my unpronounceable last name or the oracle.com in my email address that scares them (I’d guess the latter) but that seems like a lot of precautions for a “complimentary information brief”. Unless this is an attempt to grab the “Cloud” buzzword with little meat to back it up (not that anyone would do this, of course). Hopefully they’ll make more information publicly available when they get around to it.

[UPDATED 2009/2/25: Via Coté, I just saw what I consider supporting evidence from Gartner that, once we’re out of the early adopter phase, Cloud firms will need to focus less on pleasing developers and more on pleasing IT people: “But my observation from client interactions is that cloud adoption in established, larger organizations (my typical client is $100m+ in revenue) is, and will be, driven by Operations, and not by Development.”]

[UPDATED 2009/3/3: I got the PDF brief, thanks newScale (Ken and Mark). Many of the benefits it describes assume that there is a pretty robust automation/provisioning infrastructure to back it up, in addition to the Catalog itself. E.g. the Catalog alone will not allow you to “shorten the provisioning cycle time to minutes instead of months”. The brief lists adapter kits to VMWare/EC2 and more internal-minded tools (HP and BMC, presumably through their Opsware and BladeLogic acquisitions respectively). So on the “public cloud” side it’s EC2 for now, not surprisingly. Integration with many of the Cloud tools (like RightScale) could be tricky since these tools bundle a catalog with the automation engine. If we ever do get a useful ontology of Cloud services this catalog would be a natural user for it, when it expands to other services beyond EC2 and tries to help you compare them. I guess newScale wouldn’t appreciate it if I provided a direct link to the PDF, so go request it to see for yourself.]

[UPDATED 2009/3/23: Speaking of managing your IT systems from your cell phone at the beach: VMWare vCenter Mobile Access.]

3 Comments

Filed under Cloud Computing, Everything, Governance, IT Systems Mgmt, ITIL, Mgmt integration, Utility computing

Oracle acquires mValent for application config management

mValent will become part of Enterprise Manager. It comes to complement other recent acquisitions in the application management space: Auptyma, Moniforce, Empirix, ClearApp.

The announcement and FAQ are here.

More details about the acquired product and technology are on the mValent site, including here and here.

1 Comment

Filed under Application Mgmt, CMDB, Everything, IT Systems Mgmt, Mgmt integration, Modeling, Oracle

The Hippocratic Oath for IT management systems

The first feature of any IT management application should be “primum non nocere”, or in English “first, do no harm”. Your IT management application should not ever:

  • crash the system it manages
  • materially degrade its performance
  • introduce security vulnerabilities to the system

It’s appropriate that the “primum non nocere” expression comes from a medical background, since IT management software acts, to a large extent, like the doctor of the datacenter. In that spirit, I wondered whether the Hippocratic Oath could also provide good guidelines to those designing IT management software.

Turns out, the original oath hasn’t aged so well (though it cannot hurt to remind IT consultants not to have sex with the IT staff of their customers or to attempt to remove their kidney stones).

On the other hand, there is a modern version (written by Louis Lasagna in 1964) that maps quite well to our domain. Here it is, with each paragraph followed (in bold font) by its translation for IT management. Just food for thought.

I swear to fulfill, to the best of my ability and judgment, this covenant:

I will respect the hard-won scientific gains of those physicians in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.

Follow standards, publish configuration best practices in your domain of expertise, contribute to open source projects.

I will apply, for the benefit of the sick, all measures [that] are required, avoiding those twin traps of overtreatment and therapeutic nihilism.

Provide useful features that fully address customer needs. Not a superficial product, but not a boatload of useless features and meaningless metrics either.

I will remember that there is art to medicine as well as science, and that warmth, sympathy, and understanding may outweigh the surgeon’s knife or the chemist’s drug.

Good technology is important, but so are support, documentation and user interface.

I will not be ashamed to say “I know not,” nor will I fail to call in my colleagues when the skills of another are needed for a patient’s recovery.

Don’t push “consulting engagements” to stretch your application to deliver features it is not designed for. Instead, integrate with products that perform the function well.

I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know. Most especially must I tread with care in matters of life and death. If it is given me to save a life, all thanks. But it may also be within my power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty. Above all, I must not play at God.

Implement roles, permissions and auditing. Be very careful not to inadvertently crash, slow or compromise the target system. Keep in mind, in your design, that you and your developers are fallible.

I will remember that I do not treat a fever chart, a cancerous growth, but a sick human being, whose illness may affect the person’s family and economic stability. My responsibility includes these related problems, if I am to care adequately for the sick.

You are not managing just a server, a JVM or a database. Consider the entire IT system and the business value it delivers.

I will prevent disease whenever I can, for prevention is preferable to cure.

Proactively monitor the system, detect problems before they impact the system and, when possible, automate resolution.

I will remember that I remain a member of society, with special obligations to all my fellow human beings, those sound of mind and body as well as the infirm.

Don’t cheat, or abuse your position. Be honest and decent with customers, partners and competitors.

If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of healing those who seek my help.

If you do all this, customers will keep buying from you and may even like you.

1 Comment

Filed under Business, Everything, IT Systems Mgmt

Sorry, CMDBf doesn’t make coffee either

The IT Skeptic is writing to us from his mountain retreat (via a time-delayed post on his blog), and the topic he felt safe to cover in such fashion (what journalists call an “evergreen”) is the fact that CMDBf is an orchestrated sham, brilliantly executed by IT management vendors.

I’d love to be part of something that’s brilliantly executed for once, even if it is a sham, but I am afraid this is not it. But first I should state the obvious, clarifying that even though I am a member of the CMDBf group at DMTF (and also an author of the original version, under my previous employer) I do not speak for the group or DMTF (or my employer for that matter). Just as myself, as always on this blog.

The problem that Rob England, Mr. Skeptic, has with the CMDBf specification is that it doesn’t do a bunch of things that he’d like it to do, such as specifying how data sources acquire data for their domain, how they store the data, how the underlying resources are reconfigured, what processes are followed etc. See the full list from his post. The list is a copy/paste from the CMDBf specification, with some comments added, so at the very least he has to admit that as far as “smokescreens” go this one is pretty upfront about its limitations…

He concludes that “this is once again a geeky technical solution to a cultural, organizational and procedural problem.” I have to ask: who expects DMTF specifications to solve “cultural, organizational and procedural” problems? Does CIM solve such problems? Does WBEM?

Human-to-human communication is a “cultural, organizational and procedural” problem and SMTP/POP/IMAP/etc (the interoperable protocols used by email systems) are just as geeky as CMDBf. They don’t solve the larger problem, only contribute to the solution. If CMDBf can contribute as much to datacenter management as SMTP/POP/IMAP contribute to human communication (minus the SPAM if possible), I’d call that a success.

And then there is this warning:

“WARNING: vendors will waive this white paper around to overcome buyer resistance to a mixed-vendor solution. For example if you already have availability monitoring from one of them, one of the other vendors will try to sell you their service desk and use this paper as a promise that the two will play nicely.”

Has anyone actually seen this happen? I am asking because so far, both at HP and Oracle, the only sales reps I have ever met who know of CMDBf heard about it from their customers. When asked about it, the sales person (or solutions engineer) sends an email to some internal mailing list asking “customer asking about something called cmdbf, do we do that?” and that’s how I get in touch with them. Not the other way around.

Also, if the objective really was to trick customers into “mixed-vendor solutions” then I also don’t really understand why vendors would go through the effort of collaborating on such a scheme since it’s a zero-sum game between them at the end.

As far as the glacial pace of progress (“Glacial advance. That’s the way the vendors want it” from an earlier post by the Skeptic), CMDBf is no race horse but I don’t see it going any slower than other standards. Slowness (I mean, deliberation) is part of the landscape. I would submit a slight twist on Hanlon’s razor: “Never attribute to malice that which can be adequately explained by legal, procedural and organizational inertia.”

Having said all this, some of Rob’s criticism is perfectly justified, such as his sarcasm about this sentence from the specification:

“The Federated CMDB operates in a closed environment, in which some security issues are less critical than in open access or public systems.”

OK, that’s stupid indeed. Especially in a public cloud environment where you don’t know who is renting the VM next door. I’ll ask the group to remove this. Actually, that whole appendix is useless and I pointed this out in my earlier review of CMDBf 1.0 (look for the “security boilerplate” section at the bottom of the review).

Rob could also have pointed out that this specification only addresses “federation” if you accept a very scaled-down definition of the term. What it does do is help with CMDB query and synchronization. Not the holy grail, but nothing to sneer at either.

Rob, next time you want to throw tomatoes at CMDBf while you’re on holiday, just give me the password to the site and I’ll do it for you… :-)

[UPDATED 2009/1/21: Rob responds via a comment on his original blog entry.]

2 Comments

Filed under BSM, CMDB Federation, CMDBf, DMTF, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Security, Specs, Standards

A new SPIN on enriching a model with domain knowledge (constraints and inferences)

Back when I was at HP and we got involved with what turned into SML (now a W3C candidate recommendation), we tried to make a case for the specification to be based on RDF/OWL rather than XML/XSD/Schematron. It was a strange situation from a technical perspective because RDF is a better foundation for an IT model than XML, but on the other hand XSD/Schematron is a better choice for validation than OWL. OWL is focused on inference, not validation (because of both fundamental design choices, e.g. the open world assumption, and language expressiveness limitations).

So our options were to either use the right way to represent the system (RDF) combined with the wrong way to capture constraints (OWL) or to use the wrong way to represent the system (XML) combined with the right way to constrain it (mostly Schematron, with some limited help from XSD). In the end, of course, this subtle technical debate was crushed under the steamroller of vendor politics and RDF never got a fair chance anyway.

The point of this little background story is to describe the context in which I read this announcement from Holger Knublauch of TopQuadrant: the new version of their TopBraid Composer tool introduces SPIN, a way to complement OWL with a SPARQL-based constraint checking and inference mechanism.

This relates to SML in two ways.

First, there are similarities in the approach: Schematron leverages the XPath language, used to query XML, to create validation rules. SML then marries Schematron with XSD, for a more powerful validation mechanism. Compare this to SPIN: SPIN leverages the SPARQL query language, used to query RDF, to create validation/inference rules. SPIN also marries this with OWL, for a more powerful validation/inference mechanism.
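
To give a feel for the pattern (this is plain rdflib, a Python RDF library, and standard SPARQL, not SPIN’s actual vocabulary, which attaches such queries to class definitions; the ex: terms are made up for the example), here is a SPARQL ASK query playing the role of a validation rule over a tiny RDF model of an IT system:

```python
# A sketch of the general pattern, not SPIN's own vocabulary: a SPARQL ASK
# query used as a validation rule over an RDF model. The ex: terms are made up.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/it#> .
    ex:host42 a ex:Host ; ex:runs ex:db1 .
    ex:db1    a ex:Database .
""", format="turtle")

# Constraint: every Database must run on some Host. ASK returns True if a
# violating Database exists.
violated = g.query("""
    PREFIX ex: <http://example.org/it#>
    ASK {
        ?db a ex:Database .
        FILTER NOT EXISTS { ?host a ex:Host ; ex:runs ?db . }
    }
""").askAnswer

print("constraint violated:", violated)   # False: db1 does run on a host
```

Inference rules follow the same pattern with CONSTRUCT instead of ASK: whatever the query produces gets added back to the model.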

But beyond the mirroring structures of SPIN and SML, the most interesting thing is that it looks like SPIN could nicely solve the conundrum, described above, of RDF being the right foundation for modeling IT systems but OWL being the wrong constraint mechanism. SPIN may do a better job than SML at what SML is aiming to do (validation rules). And at the same time, you get “for free” (or as close to “for free” as you can get with software, which is still far from “free”) a pretty powerful inference mechanism. The most powerful I know of, short of using a general programming language to capture your inference rules (and good luck with maintaining these rules).

This may sound like sci-fi, but it’s the next logical step for IT configuration standardization. Let’s look at where we are today:

  • SML (at W3C) is an attempt to standardize the expression of constraints.
  • CMDBf (at DMTF) is standardizing how the model content is queried (and, to some limited extent at this point, federated).
  • And recently IBM authored a proposal for a reconciliation specification for items in the model and sent it to an Eclipse group (COSMOS).

But once you tackle reconciliation, you are already half-way into inferencing territory. At least if you want to reconcile between models, not just between instances expressed in the same model. Because the models may not be defined at the same level of granularity, and before you can reconcile items you need to infer finer-grained entities in your coarser-grained model (or vice-versa) so that you can reconcile apples with apples.

Today, inferencing for IT models is done as part of the “discovery packs” that you can buy along with your IT management model repository. But not very well, in general. Because the way you write such a discovery module for the HP Universal CMDB is very different from how you write it for the BMC CMDB, IBM’s CCMDB or as a plug-in for Oracle Enterprise Manager or Microsoft System Center. Not to mention the smaller, more specialized, players. As a result, there is little incentive for 3rd party domain experts to put work into capturing inference rules since the work cannot be widely leveraged.

I am going a bit off-topic here, but one interesting thing about standardization of inferencing for IT management, if it happens, is that it is going to be very hard to not use RDF, OWL and some flavor of SPARQL (SPIN or equivalent) there. And once you do that, the XML-based constraint mechanisms (SML or others) are going to be in for a rough ride. After resisting the RDF stack for constraints, queries and basic reconciliation (because the added value was supposedly not “worth the cost” for each of these separately), the XML dam might get a crack for inferencing. And once RDF starts to trickle through that crack, the whole dam is going to come down in a big wave. Just to be clear, this is a prophetic long-term vision, not a prediction for 2009 (unfortunately).

In the meantime, I’d like to take this SPIN feature a… spin (sorry) when I find some time. We’ll see if I can install the new beta of TopBraid Composer despite having used up, a year ago, my evaluation license of the earlier version of the product. Despite what I had hoped at some point, this is not directly applicable to my current work, so I am not sure I want to buy a license. But who knows, SPIN may turn out to be the change that eventually puts RDF back on my “day job” list (one can dream)…

It’s also nice that Holger took the trouble to deliver SPIN not just as a feature of his product but also as a stand-alone specification, which should make it pretty easy for anyone who has a SPARQL engine handy to support it. Hopefully the next step will be for him to clarify the IP terms for the specification and to decide whether or not he wants to eventually submit it for standardization. Maybe to the W3C SML working group? :-) I’d have a hard time resisting joining if he did.

12 Comments

Filed under CMDB, CMDBf, Everything, IT Systems Mgmt, Modeling, RDF, Semantic tech, SML, SPARQL, Specs, Tech, W3C

The art of reconciling items in your IT management model

Whether you call it a CMDB or some other name, any repository of IT model elements has the problem of establishing whether two entities are the same or not. Here is a quick map of the problem space.

Why there is no “one true solution”

There is no “true” answer to the “sameness” question. The following example illustrates this, even though it is not necessarily representative of datacenter scenarios. Ask any gamer to tell you the history of that 3-year-old PC under their desk. The power cord might be the only original piece left after they’ve upgraded the RAM, video card, sound card, hard disk(s), DVD drive and power supply. Not to mention the tragic overclocking accident that took the life of the motherboard/CPU. After the upgrade/replacement of each of these parts, the user still thought of the machine as the same PC, just upgraded/fixed. But how can it be the same as at the beginning if pretty much every single part has changed? And when time came to reinstall Windows, the registration probably failed because Microsoft decided that the same license was being used for a new machine. Sameness is in the eye of the beholder.

And it’s not just a hardware problem. When you upgrade your Oracle Database and start using a new ORACLE_HOME, it may feel like the same database to most users (including the applications that talk to the database) but a more executable-centric view might conclude that it is a new database.

Defining what makes an IT element unique is not a matter of truth. It’s a matter of usefulness for a given purpose. When trying to establish this for your model, if the conversation ever veers philosophical, you’re off track. This is engineering, not science. “A and B are the same” should be understood to be a shortcut for “it makes sense for my purpose to consider A and B to be the same”. Of course things become complicated when “my purpose” encompasses a whole set of use cases (add “management” after each of: performance, compliance, configuration, change, asset, business service, business transaction…).

How the problem arises

It can arise over time. For example the management agent has to be reinstalled and it forgets the id that had been assigned by the server. When it comes back up, it reports what looks like a new item. But you want to reconcile it with the historical data that came from the agent’s previous incarnation.

It can arise because you have different discovery channels for the same item. For example, a BPEL engine reports to the management server the processes it supports and that model includes the external Web services (partnerLink) invoked by the processes, thereby creating items for these external services in your repository. But some of those external services may be running on servers which you also monitor and the services (and more generally the applications that deliver them) may be separately discovered by the agents on these hosts, resulting in a potential duplicate representation in the repository.

Or the problem can be a result of the integration of IT management products. For example, that Dell server in my asset management system may be the same as the Linux host that runs my production database and appears in Oracle Enterprise Manager.

Fixing the mess

There are two stages to this:

First, you need some level of model alignment. In the general case, the different items that you are trying to reconcile are not expressed in the same model. The view of the server coming from the asset management system does not necessarily contain the same data as its view in your operations console. One contains the lease expiration date, the other one contains the amount of space left on the disk. Some data may be in both (e.g. number of CPUs, host name) but not necessarily in fields of the same name. Or with the same granularity (ownerName versus ownerFirstname + ownerLastname). Not to mention type system differences (but if the items are already in the same repository you have presumably already forced some level of metamodel alignment). In short, you first have the challenge of model transformation, a more general problem. With the advantage that the entire model does not need to be translated for item reconciliation, only the subset of data needed to establish “sameness”: the identifying properties. And in some cases (e.g. when a standard model is used or when two instances of the same agent report on the item), the items to reconcile are already described in the same model and this step can be skipped.
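
Here is a minimal sketch of that first stage (the field names on both sides are invented): only the identifying properties get translated into a shared shape, not the full models.

```python
# Minimal sketch of the model-alignment stage. Field names on both sides are
# made up; only the identifying properties are mapped to a shared shape.
def from_asset_mgmt(rec):
    # e.g. {"HostName": "DB-PROD-01.ACME.COM", "SerialNumber": "XYZ123", "LeaseEnd": "2010-01-01"}
    return {
        "hostname": rec["HostName"].strip().lower(),
        "serial": rec.get("SerialNumber"),
    }

def from_ops_console(rec):
    # e.g. {"host_name": "db-prod-01.acme.com", "serial_no": "XYZ123", "disk_free_gb": 42}
    return {
        "hostname": rec["host_name"].strip().lower(),
        "serial": rec.get("serial_no"),
    }
```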

Once the necessary level of model alignment has taken place (if needed) so that items can be compared, the real task of reconciliation takes place, based on domain knowledge. It could be through a set of scripts (Python’s mix of simplicity, portability, broad array of libraries and ease of integration makes it shine in this usage). It could be through some kind of reconciliation taxonomy, like this draft that IBM has contributed to the Eclipse COSMOS project. Or through metadata such as WSDM’s correlatable properties. [BTW, as the spec editor I got to insert dubious cultural references in the specification (see the <print:PrinterResourcePropDoc> example in section 5.3.3.1), but let me assure you that I have since matured… ;-)]
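
And, continuing the sketch above, here is the kind of domain rule such a script ends up encoding. The rule itself (“same serial number, else same normalized hostname”) is only an example of the judgment call a domain expert has to make.

```python
# Continuing the alignment sketch above: a simple domain rule for deciding
# whether two views describe the same server. The rule itself ("same serial
# number, else same normalized hostname") is only an example.
def same_server(a, b):
    if a["serial"] and b["serial"]:
        return a["serial"] == b["serial"]
    return a["hostname"] == b["hostname"]

asset_view = from_asset_mgmt({"HostName": "DB-PROD-01.ACME.COM", "SerialNumber": "XYZ123"})
ops_view = from_ops_console({"host_name": "db-prod-01.acme.com", "serial_no": "XYZ123"})
print(same_server(asset_view, ops_view))   # True: reconcile the two items
```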

These are not the only ways to reconcile items, but they are the approaches that can be followed based on just the data in the repository. Beyond that, you can run a dummy transaction and trace it (if possible) across different management systems to reconcile entities between them. There are plenty of other domain-specific tricks, depending on the item type (I remember a machine room, back in the days when each server had a CD drive, where a script to open the CD tray was used to allow the operator to put a sticker on the correct machine). In general, these approaches play on external variables that are not directly part of the model of the item and yet can be influenced through it. Similar to how the bulb temperature is used in this famous brain teaser. I guess the IT equivalent would be to load-stress an application and use IPMI to see which CPUs register a rise in temperature (note: not a recommended approach in production systems…).

Coming back to the IT model repository, you also need to have plumbing in place to deal with the result of the reconciliation: requests and data may come in that reference either one of the reconciled items and you need to be able to deal with that split personality, while providing a unified view in the general case. You also need to be ready to deal with potential data discrepancies between the items (either automatically or through a process that involves humans, but this is out of scope for this entry).
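
One simple form that plumbing can take, sketched here: an alias table that resolves any of the reconciled identifiers to a single surviving record, so callers can keep using whichever id they know.

```python
# Sketch of the "split personality" plumbing: an alias table so that requests
# referencing either of two reconciled item ids land on one canonical record.
aliases = {"item-asset-555": "item-ops-42"}   # retired id -> surviving id

def resolve(item_id):
    return aliases.get(item_id, item_id)

assert resolve("item-asset-555") == resolve("item-ops-42") == "item-ops-42"
```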

Preventing the mess from happening

Can’t we just prevent the problem from occurring in the first place? To some extent yes. The main way to prevent it is to not reconcile what doesn’t need to be. This may sound heretical in these days of “single source of truth” and “end to end visibility” but reconciliation of key connection points is often enough. You may not need to have one single model that contains everything from your company’s employee directory to the fan speed of all your servers. It’s a matter of delivering on use cases, not hoarding data.

When you do want to consolidate and reconcile, one approach is to standardize on natural IDs for items of different types. But this requires domain experts to carefully select identifying (and therefore immutable) properties of the different object types, which sounds a lot easier than it is. And it requires convincing others to adopt this approach, an even harder task. But as the proverb (almost) goes, one ounce of convention is worth one pound of reconciliation.
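
To make that ounce of convention concrete, here is a trivial sketch of what a natural-ID scheme looks like. The identifying properties chosen per item type are invented for the example; in reality they would come from those domain experts.

```python
# Trivial sketch of a natural-ID convention. The identifying properties chosen
# per item type are invented for the example.
NATURAL_KEYS = {
    "host":     ("serial_number",),
    "database": ("host_serial", "oracle_home", "sid"),
}

def natural_id(item_type, props):
    return item_type + ":" + "|".join(str(props[k]) for k in NATURAL_KEYS[item_type])

print(natural_id("database",
                 {"host_serial": "XYZ123",
                  "oracle_home": "/u01/app/oracle/product/10.2.0",
                  "sid": "ORCL"}))
# -> database:XYZ123|/u01/app/oracle/product/10.2.0|ORCL
```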

[Note: Whenever you talk about item reconciliation, the topic of correlating events is not far behind. It is assisted by a solid underlying IT model, but it has challenges of its own, so I’ll consider this out of scope for this discussion.]

3 Comments

Filed under CMDB, Everything, IT Systems Mgmt, Mgmt integration, Modeling