Category Archives: ITIL

Reading IBM’s proposed standard for Cloud Architecture

Did you enjoy the first version of IBM’s Cloud Computing Reference Architecture? Did you even get certified on it? Then rejoice, because there’s a new version. IBM recently submitted the IBM Cloud Computing Reference Architecture 2.0 to The Open Group.

I’m a bit out of practice reading this kind of IBMese (let’s just say that The Open Group was the right place to submit it) but I would never let my readers down. So, even though these box-within-a-box-within-a-box diagrams (see section 2) give me flashbacks to the days of OGF and WSRF, I soldiered on.

I didn’t understand the goal of the document well enough to give you a fair summary, but I can share some thoughts.

It starts by talking a lot about SOA. I initially thought this was to make the point that Glen Daniels articulated very well in this tweet:

Yup, correct SOA patterns (loose coupling, dyn refs, coarse interfaces…) are exactly what you need for cloud apps. You knew this.

But no. Rather than Glen’s astute remark, IBM’s point is one meta-level lower. It’s that “Cloud solutions are SOA solutions”. Which I have a harder time parsing. If you thought “service” was overloaded before…

While some of the IBM authors are SOA experts, others apparently come from a Telco background so we get OSS/BSS analogies next.

By that point, I’ve learned that Cloud is like SOA except when it’s like Telco (but there’s probably another reference architecture somewhere that explains that Telco is SOA, so it all adds up).

One thing that chagrined me was that even though this document is very high-level it still manages to go down into implementation technologies long enough to assert, wrongly, that virtualization is required for Cloud solutions. Another Cloud canard repeated here is the IaaS/PaaS/SaaS segmentation of the Cloud world, to which IBM adds a BPaaS (Business Process as a Service) layer for good measure (for my take on how Cloud relates to SOA, and how I dislike the IaaS/PaaS/SaaS pyramid, see this write-up of the presentation I gave at last year’s Cloud Connect, especially the 3rd picture).

It gets a lot better if you persevere to page 29, where the “Architecture Principles” finally get introduced (if I had been asked to edit the paper, I would have only kept the last 6 pages). They are:

  1. Design for Cloud-scale Efficiencies: When realizing cloud characteristics such as elasticity, self-service access, and flexible sourcing, the cloud design is strictly oriented to high cloud scale efficiencies and short time-to-delivery/time-to-change. (“Efficiency Principle”)
  2. Support Lean Service Management: The Common Cloud Management Platform fosters lean and lightweight service management policies, processes, and technologies. (“Lightweightness Principle”)
  3. Identify and Leverage Commonalities: All commonalities are identified and leveraged in cloud service design. (“Economies-of-scale principle”)
  4. Define and Manage generically along the Lifecycle of Cloud Services: Be generic across I/P/S/BPaaS & provide ‘exploitation’ mechanism to support various cloud services using a shared, common management platform (“Genericity”).

Each principle gets a nickname, thanks to which IBM can refer to this list as the ELEG principles (Efficiency, Lightweightness, Economies-of-scale, Genericity). It also spells GLEE, but apparently that wasn’t the preferred sequence.

The first principle is hard to disagree with. The second also rings true, including its dings on ITIL (but the irony of IBM exalting “Lightweightness” is hard to ignore). The third and fourth principles (by that time I had lost too many brain cells to understand how they differ) really scared me. While I can understand the motivation, they elicited a vision of zombies in blue suits (presumably undead IBM Distinguished Engineers and Fellows) staggering towards me: “frameworks… we want frameworks…”.

There you go. If you want more information (and, more importantly, unbiased information) go read the Reference Architecture yourself. I am not involved in The Open Group, and I have no idea what it plans to do with it (or whether it has received other submissions of the same type). Though I wouldn’t be surprised if, in 5 years, I see some panicked sales rep asking an internal mailing list “The customer RFP asks for a mapping of our solution to the Open Group Cloud Reference Architecture and apparently IBM has 94 slides about it, what do I do? Has anyone heard about this Reference Architecture? This is urgent.”

Urgent things are long in the making.

1 Comment

Filed under Application Mgmt, Automation, Big picture, BPM, BSM, Business Process, Cloud Computing, Everything, Governance, IBM, IT Systems Mgmt, ITIL, Mgmt integration, Utility computing

CMDB in the Cloud: not your father’s CMDB

Bernd Harzog recently wrote a blog entry to examine whether “the CMDB [is] irrelevant in a Virtual and Cloud based world“. If I can paraphrase, his conclusion is that there will be something that looks like a CMDB but the current CMDB products are ill-equipped to fulfill that function. Here are the main reasons he gives for this prognosis:

  1. A whole new class of data gets created by the virtualization platform – specifically how the virtualization platform itself is configured in support of the guests and the applications that run on the guest.
  2. A whole new set of relationships between the elements in this data get created – specifically new relationships between hosts, hypervisors, guests, virtual networks and virtual storage get created that existing CMDB’s were not built to handle.
  3. New information gets created at a very rapid rate. Hundreds of new guests can get provisioned in time periods much too short to allow for the traditional Extract, Transform and Load processes that feed CMDB’s to be able to keep up.
  4. The environment can change at a rate that existing CMDB’s cannot keep up with. Something as simple as vMotion events can create thousands of configuration changes in a few minutes, something that the entire CMDB architecture is simply not designed to keep up with.
  5. Having portions of IT assets running in a public cloud introduces significant data collection challenges. Leading edge APM vendors like New Relic and AppDynamics have produced APM products that allow these products to collect the data that they need in a cloud friendly way. However, we are still a long way away from having a generic ability to collect the configuration data underlying a cloud based IT infrastructure – notwithstanding the fact that many current cloud vendors would not make this data available to their customers in the first place.
  6. The scope of the CMDB needs to expand beyond just asset and configuration data and incorporate Infrastructure Performance, Applications Performance and Service assurance information in order to be relevant in the virtualization and cloud based worlds.

I wanted to expand on some of these points.

New model elements for Cloud (bullets #1 and #2)

These first bullets are not the killers. Sure, the current CMDBs were designed before the rise of virtualized environments, but they are usually built on a solid modeling foundation that can easily be extended with new resource classes. I don’t think that extending the model to describe VMs, VNets, volumes, hypervisors and their relationships to the physical infrastructure is the real challenge.
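
To make this concrete, here is a minimal sketch (my names, not any vendor’s actual metamodel) of what such an extension looks like in a class-based model:

```python
# Minimal sketch of extending a class-based CMDB metamodel with
# virtualization resource types. All names are illustrative, not
# any vendor's actual schema.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    ci_id: str
    attributes: dict = field(default_factory=dict)

class Hypervisor(ConfigurationItem): pass
class VirtualMachine(ConfigurationItem): pass

@dataclass
class Relationship:
    rel_type: str                  # e.g. "hosted-on", "deployed-on"
    source: ConfigurationItem
    target: ConfigurationItem

# A VM "hosted-on" its hypervisor, itself "deployed-on" a physical host
host = ConfigurationItem("host-42", {"model": "x86 blade"})
hyp = Hypervisor("hyp-1", {"vendor": "VMWare"})
vm = VirtualMachine("vm-7", {"vcpus": 2})
model = [Relationship("deployed-on", hyp, host),
         Relationship("hosted-on", vm, hyp)]
```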

New approach to “discovery” (bullets #3 and #4)

This, on the other hand, is much more of a “dinosaurs meet meteorite” kind of historical event. A large part of the value provided by current CMDBs is their ability to automate resource discovery. This is often achieved via polling/scanning (at the hardware level) and heuristics/templates (directory names, port numbers, packet inspection, bird entrails…) for application discovery. It’s imprecise but often good enough in static environments (and when it fails, the CMDB complements the automatic discovery with a reconciliation process to let the admin clean things up). And it used to be all you could get anyway, so there wasn’t much point complaining about the limitations. The crown jewels of many of today’s big CMDBs can often be traced back to smart start-ups specialized in application discovery/mapping, like Appilog (now HP, by way of Mercury) and nLayers (now EMC). And more recently the purchase of Tideway by BMC (ironically – but unsurprisingly – often cast in Cloud terms).

But this is not going to cut it in “the Cloud” (by which I really mean in a highly automated IT environment). As Bernd Harzog explains, the rate of change can completely overwhelm such discovery heuristics (plus, some of the network scans they sometimes use will get you in trouble in public clouds). And more importantly, there now is a better way. Why discover when you can ask? If resources are created via API calls, there are also API calls to find out which resources exist and how they are configured. This goes beyond the resources accessible via IaaS APIs, like what VMWare, EC2 and OVM let you retrieve. This “don’t guess, ask” approach to discovery needs to also apply at the application level. Rather than guessing what software is installed via packet inspection or filesystem spelunking, we need application-aware discovery that retrieves the application configuration and dependencies from the application itself (or its underlying framework), and that builds a model in which the connections between application entities are expressed in terms of the configuration settings that drive them rather than the side effects by which they can be noticed.
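
As an illustration of the “don’t guess, ask” approach (a sketch only: the client object and its calls are hypothetical stand-ins for whatever IaaS API you use):

```python
# "Don't guess, ask": query the provider's API for the authoritative
# list of resources instead of scanning the network. The iaas_client
# object and its methods are hypothetical stand-ins for a real IaaS
# API (EC2, OVM, vSphere...).
def discover(iaas_client):
    inventory = []
    for instance in iaas_client.list_instances():      # hypothetical call
        inventory.append({
            "id": instance["id"],
            "state": instance["state"],
            # configuration comes from the same API: no packet
            # inspection or filesystem spelunking required
            "config": iaas_client.get_configuration(instance["id"]),
        })
    return inventory
```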

If I can borrow the words of Lew Cirne:

“All solutions built in the pre-cloud era are modeled on jvms (or their equivalent), hosts and ports, rather than the logical application running in a more fluid environment. If the solution identifies a web application by host/port or some other infrastructural id, then you cannot effectively manage it in a cloud environment, since the app will move and grow, and your management system (that is, everything offered by the Big 4, as well as all infrastructure management companies that pay lip service to the application) will provide nearly-useless visibility and extraordinarily high TCO.”

I don’t agree with everything in Lew Cirne’s post, but this diagnostic is correct and well worded. He later adds:

“So application management becomes the strategic center of gravity for the client of a public or private cloud, and infrastructure-centric tools (even ones that claim to be cloudy) take on a lesser role.”

Which is also very true even if counter-intuitive for those who think that

cloud = virtualization (in the “fake machine” interpretation of virtualization)

Embracing such a VM-centric view naturally raises the profile of infrastructure management relative to application management, a fallacy in Cloud computing.

Drawing the line between Cloud infrastructure management and application management (bullet #5)

This is another key change that traditional CMDBs are going to have a hard time with. In a Big-4 CMDB, you’re after the mythical “single source of truth”. Even in a federated CMDB (which doesn’t really exist anyway), you’re trying to have a unified logical (if not physical) repository of information. There is an assumption that you want to manage everything from one place, so you can see all the inter-dependencies, across all layers of the stack (even if individual users may have a scope that is limited by permissions). Not so with public Clouds and even, I would argue, any private Cloud that is more than just a “cloud” sticker slapped on an old infrastructure. The fact that there is a clean line between the infrastructure model and the application model is not a limitation. It is empowering. Even if your Cloud provider were willing to expose a detailed view of the underlying infrastructure, you should resist the temptation to accept. Despite the fact that it might be handy in the short term and provide an interesting perspective, it is self-defeating in the long term from the perspective of realizing the productivity improvements promised by the Cloud. These improvements require that the infrastructure administrator be freed from application-specific issues and focus on meeting the contract of the platform. And that the application administrator be freed from infrastructure-level concerns (while at the same time being empowered to diagnose application-level concerns). This doesn’t mean that the application and infrastructure models should be disconnected. There is a contract and both models (infrastructure and consumption) should represent it in the same way. It draws a line, albeit one with some width.

Blurring the line between configuration and monitoring (bullet #6)

This is another shortcoming of current CMDBs, but one that I think is more easily addressed. The “contract” between the Cloud infrastructure and the consuming application materializes as a mix of configuration settings, administrative capabilities and monitoring data. This contract is not just represented by the configuration-centric Cloud API that immediately comes to mind. It also includes the management capabilities and monitoring points of the resulting instances/runtimes.
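
A rough sketch of what such a contract record could capture (field names are mine, not from any specification):

```python
# Illustrative only: the platform/application "contract" as a mix of
# configuration settings, administrative capabilities and monitoring
# points. Field names are mine, not from any specification.
platform_contract = {
    "configuration": {                 # set via the Cloud API
        "runtime": "JEE container",
        "instances": 4,
    },
    "admin_capabilities": [            # operations the consumer may invoke
        "restart_instance", "scale_out", "snapshot_logs",
    ],
    "monitoring_points": {             # metrics exposed to the consumer
        "response_time_ms": "p95, per instance",
        "availability": "monthly, provider-measured",
    },
}
```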

Whither CMDB?

Whether all these considerations mean that traditional CMDBs are doomed in the Cloud, as Bernd Harzog posits, I don’t know. In this post, BMC’s Kia Behnia acknowledges the importance of application management, though it’s not clear that he agrees with its primacy. I am also waiting to see whether the application management portfolio he has assembled can really map to the new methods of application discovery and management.

But these are resourceful organizations, with plenty of smart people (as I can testify: at the end of my HP tenure I worked with the very sharp CMDB team that came from the Mercury acquisition). And let’s keep in mind that customers also value the continuity of support of their environment. Most of them will be dealing with a mix of old-style and Cloud applications and they’ll be looking for a unified management approach. This helps CMDB incumbents. If you doubt the power of continuity, take a minute to realize that the entire value proposition of hypervisor-style virtualization is centered around it. It’s the value of backward-compatibility versus forward-compatibility. In addition, CMDBs are evolving into CMSs and are a lot more than configuration repositories. They are an important supporting tool for IT management processes. Whether, and how, these processes apply to “the Cloud” is a topic for another post. In the meantime, read what the IT Skeptic and Rodrigo Flores have to say.

I wouldn’t be so quick to count the Big-4 out, even though I work every day towards that goal, building Oracle’s application and middleware management capabilities in conjunction with my colleagues focused on infrastructure management.

If the topic of application-centric management in the age of Cloud is of interest to you (and it must be if you’ve read this long entry all the way to the end), you might also find this previous entry relevant: “Generalizing the Cloud vs. SOA Governance debate“.

4 Comments

Filed under Application Mgmt, Automation, Cloud Computing, CMDB, CMDB Federation, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Middleware, Utility computing, Virtualization

Generalizing the Cloud vs. SOA Governance debate

There have been some interesting discussions recently about the relationship between Cloud management and SOA management/governance (run-time and design-time). My only regret is that they are a bit too focused on determining winners and losers rather than defining what victory looks like (a bit like arguing whether the smartphone is the triumph of the phone over the computer or of the computer over the phone instead of discussing what makes a good smartphone).

To define victory, we need to answer this seemingly simple question: in what ways is the relationship between a VM and its hypervisor different from the relationship between two communicating applications?

More generally, there are three broad categories of relationships between the “active” elements of an IT system (by “active” I am excluding configuration, organization, management and security artifacts, like patch, department, ticket and user, respectively, to concentrate instead on the elements that are on the invocation path at runtime). We need to understand if/how/why these categories differ in how we manage them:

  • Deployment relationships: a machine (or VM) in a physical host (or hypervisor), a JEE application in an application server, a business process in a process engine, etc…
  • Infrastructure dependency relationships (other than containment): from an application to the DB that persists its data, from an application tier to web server that fronts it, from a batch job to the scheduler that launches it, etc…
  • Application dependency relationships: from an application to a web service it invokes, from a mash-up to an Atom feed it pulls, from a portal to a remote portlet, etc…

In the old days, the lines between these categories seemed pretty clear and we rarely even thought of them in the same terms. They were created and managed in different ways, by different people, at different times. Some were established as part of a process, others in a more ad-hoc way. Some took place by walking around with a CD, others via a console, others via a centralized repository. Some of these relationships were inventoried in spreadsheets, others on white boards, some in CMDBs, others just in code and in someone’s head. Some involved senior IT staff, others were up to developers and others were left to whoever was manning the controls when stuff broke.

It was a bit like the relationships you have with the taxi that takes you to the airport, the TSA agent who scans you and the pilot who flies you to your destination. You know they are all involved in your travel, but they are very distinct in how you experience and approach them.

It all changes with the Cloud (used as a short hand for virtualization, management automation, on-demand provisioning, 3rd-party hosting, metered usage, etc…). The advent of the hypervisor is the most obvious source of change: relationships that were mostly static become dynamic; also, where you used to manage just the parts (the host and the OS, often even mixed as one), you now manage not just the parts but the relationship between them (the deployment of a VM in a hypervisor). But it’s not just hypervisors. It’s frameworks, APIs, models, protocols, tools. Put them all together and you realize that:

  • the IT resources involved in all three categories of relationships can all be thought of as services being consumed (an “X86+ethernet emulation” service exposed by the hypervisor, a “JEE-compatible platform” service exposed by the application server, an “RDB service” exposed by the database, Web services exposed via SOAP or XML/JSON over HTTP, etc…),
  • they can also be set up as services, by simply sending a request to the API of the service provider,
  • not only can they be set up as services, they are also invoked as such, via well-documented (and often standard) interfaces,
  • they can also all be managed in a similar service-centric way, via performance metrics, SLAs, policies, etc,
  • your orchestration code may have to deal with all three categories (e.g. an application slowdown might be addressed either by modifying its application dependencies, reconfiguring its infrastructure or initiating a new deployment),
  • the relationships in all these categories now have the potential to cross organization boundaries and involve external providers, possibly with usage-based billing,
  • as a result of all this, your IT automation system really needs a simple, consistent, standard way to handle all these relationships. Automation works best when you’ve simplified and standardized the environment to which it is applied (a strawman sketch follows this list).
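
As a strawman (my naming, nothing standard about it), here is what treating all three categories as one kind of relationship record could look like:

```python
# Strawman (my naming, nothing standard): one relationship record for
# all three categories, so automation code can treat a VM-on-hypervisor
# link and an app-to-web-service link the same way.
from dataclasses import dataclass

@dataclass
class ServiceRelationship:
    category: str    # "deployment" | "infrastructure" | "application"
    consumer: str    # identifier of the consuming element
    provider: str    # identifier of the providing service
    contract: dict   # SLAs, policies, metering terms...

relationships = [
    ServiceRelationship("deployment", "vm-7", "hypervisor-1",
                        {"sla": "99.9% uptime"}),
    ServiceRelationship("infrastructure", "app-billing", "db-orders",
                        {"sla": "p99 query < 50ms"}),
    ServiceRelationship("application", "portal", "partner-quote-service",
                        {"billing": "per-call"}),
]

# Orchestration code can then act on any category through one interface,
# e.g. find everything a slow application depends on:
deps = [r for r in relationships if r.consumer == "app-billing"]
```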

If you’re a SOA person, your mental model for this is SOA++ and you pull out your SOA management and governance (config and runtime) tools. If you are of the WS-* persuasion of SOA, you go back to WS-Management, try to see what it would take to slap a WSDL on a hypervisor and start dreaming of OVF over MTOM/XOP. If you’re into middleware modeling you might start to have visions of SCA models that extend all the way down to the hardware, or at least of getting SCA and OSGi to ally and conquer the world. If you’re a CMDB person, you may tell yourself that now is the time for the CMDB to do what you’ve been pretending it was doing all along and actually extend all the way into the application. Then you may have that “single source of truth” on which the automation code can reliably work. Or if you see the world through the “Cloud API” goggles, then this “consistent and standard” way to manage relationships at all three layers looks like what your Cloud API of choice will eventually do, as it grows from IaaS to PaaS and SaaS.

Your background may shape your reference model for this unified service-centric approach to IT management, but the bottom line is that we’d all like a nice, clear conceptual model to bridge and unify Cloud (provisioning and containment), application configuration and SOA relationships. A model in which we have services/containers with well-defined operational contracts (and on-demand provisioning interfaces). Consumers/components with well-defined requirements. APIs to connect the two, with predictable results (both in functional and non-functional terms). Policies and SLAs to fine-tune the quality of service. A management framework that monitors these policies and SLAs. A common security infrastructure that gets out of the way. A metering/billing framework that spans all these interactions. All this while keeping out of sight all the resource-specific work needed behind the scene, so that the automation code can look as Zen as a Japanese garden.

It doesn’t mean that there won’t be separations, roles, processes. We may still want to partition the IT management tasks, but we should first have a chance to rejigger what’s in each category. It might, for example, make sense to handle provider relationships in a consistent way whether they are “deployment relationships” (e.g. EC2 or your private IaaS Cloud) or “application dependency relationships” (e.g. SOA, internal or external). On the other hand, some of the relationships currently lumped in the “infrastructure dependency relationships” category because they are “config files stuff” may find different homes depending on whether they remain low-level and resource-specific or are absorbed into a higher-level platform contract. Any fracture in the management of this overall IT infrastructure should be voluntary, based on legal, financial or human requirements. And not based on protocol, model, security and tool disconnects, on legacy approaches, on myopic metering that we later rationalize as “the way we’d want things to be anyway because that’s what we are used to”.

In the application configuration management universe, there is a planetary collision scheduled between the hypervisor-centric view of the world (where virtual disk formats wrap themselves in OVF, then something like OVA to address, at least at launch time, application and infrastructure dependency relationships) and the application-model view of the world (SOA, SCA, Microsoft Oslo at least as it was initially defined, various application frameworks…). Microsoft Azure will have an answer, VMWare/SpringSource will have one, Oracle will too (though I can’t talk about it), Amazon might (especially as it keeps adding to its PaaS portfolio) or it might let its ecosystem sort it out, IBM probably has Rational, WebSphere and Tivoli distinguished engineers locked into a room, discussing and over-engineering it at this very minute, etc.

There is a lot at stake, and it would be nice if this was driven (industry-wide or at least within each of the contenders) by a clear understanding of what we are aiming for rather than a race to cobble together partial solutions based on existing control points and products (e.g. the hypervisor-centric party).

[UPDATED 2010/1/25: For an illustration of my statement that “if you’re a SOA person, your mental model for this is SOA++”, see Joe McKendrick’s “SOA’s Seven Greatest Mysteries Unveiled” (bullet #6: “When you get right down to it, cloud is the acquisition or provisioning of reusable services that cross enterprise walls. (…)  They are service oriented architecture, and they rely on SOA-based principles to function.”)]

6 Comments

Filed under Application Mgmt, Automation, Cloud Computing, CMDB, Everything, Governance, IT Systems Mgmt, ITIL, Mgmt integration, Middleware, Modeling, OSGi, SCA, Utility computing, Virtualization, WS-Management

Cloud platform patching conundrum: PaaS has it much worse than IaaS and SaaS

The potential user impact of changes (e.g. patches or config changes) made on the Cloud infrastructure (by the Cloud provider) is a sore point in the Cloud value proposition (see Hoff’s take for example). You have no control over patching/config actions taken by the provider, any of which could potentially affect you. In a traditional data center, you can test the various changes on specific applications; you don’t have to apply them at the same time on all servers; and you can even decide to skip some infrastructure patches not relevant to your application (“if it ain’t broken…”). Not so in a Cloud environment, where you may not even know about a change until after the fact. And you have no control over the timing and the roll-out of the patch, so that some of your instances may be running on patched nodes and others may not (good luck with troubleshooting that).

Unfortunately, this is even worse for PaaS than IaaS. Simply because you sit on a lot more infrastructure that is opaque to you. In an IaaS environment, the only thing that can change is the hardware (rarely a cause of problems) and the hypervisor (or equivalent Cloud OS). In a PaaS environment, it’s all that plus whatever flavor of OS and application container is used. Depending on how streamlined this all is (just enough OS/AS versus a traditional deployment), that’s potentially a lot of code and configuration. Troubleshooting is also somewhat easier in an IaaS setup because the error logs are localized (or localizable) to a specific instance. Not necessarily so with PaaS (and even if you could localize the error, you couldn’t guarantee that your troubleshooting test runs on the same node anyway).

In a way, PaaS is squeezed between IaaS and SaaS on this. IaaS gets away with a manageable problem because the opaque infrastructure is not too thick. For SaaS it’s manageable too because the consumer is typically either a human (who is a lot more resilient to change) or a very simple and well-understood interface (e.g. IMAP or some Web services). Contrast this with PaaS, where the contract is that of an application container (e.g. JEE, RoR, Django). There are all kinds of subtle behaviors (e.g. timing/ordering issues) that are not part of the contract and can surface after a patch: for example, a bug in the application that was never found because before the patch things always happened in a certain order that the application implicitly – and erroneously – relied on. That’s exactly why you always test your key applications today even if the OS/AS patch should, in theory, not change anything for the application. And it’s not just patches that can do that. For example, network upgrades can introduce timing changes that surface new issues in the application.
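
Here is a contrived illustration of that kind of latent bug: nothing in the container’s contract promises an event ordering, but the application only works as long as the current implementation happens to provide one:

```python
# Contrived illustration: the application implicitly relies on an
# ordering the platform never promised. It works until a platform
# patch changes the delivery order and the latent bug surfaces.
def process_events(events):
    session = None
    for e in events:
        if e == "open":
            session = {"state": "active"}
        elif e == "close":
            session["state"] = "closed"  # TypeError if "close" comes first

process_events(["open", "close"])    # fine with the pre-patch ordering
# process_events(["close", "open"])  # crashes after the patch
```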

And it goes both ways. Just like you can be hurt by the Cloud provider patching things, you can be hurt by them not patching things. What if there is an obscure bug in their infrastructure that only affects your application? First you have to convince them to troubleshoot with you. Then you have to convince them to produce (or get their software vendor to produce) and deploy a patch.

So what are the solutions? Is PaaS doomed to never go beyond hobbyists? Of course not. The possible solutions are:

  • Write a bug-free and high-performance PaaS infrastructure from the start, one that never needs to be changed in any way. How hard could it be? ;-)
  • More realistically, narrowly define container types to reduce both the contract and the size of the underlying implementation of each instance. For example, rather than deploying a full JEE+SOA container, componentize the application so that each component can deploy in a small container (e.g. a servlet engine, a process management engine, a rule engine, etc). As a result, the interface exposed by each container type can be more easily and fully tested. And because each instance is slimmer, it requires fewer patches over time.
  • PaaS providers may give their users some amount of visibility and control over this. For example, by announcing upgrades ahead of time, providing updated nodes to test on early and allowing users to specify “freeze” periods where nothing changes (unless an urgent security patch is needed, presumably). Time for a Cloud “refresh” in ITIL/ITSM-land?
  • The PaaS providers may also be able to facilitate debugging of infrastructure-related problems. For example by stamping the logs with a version ID for the infrastructure on the node that generated the log entry. And the ability to request that a test runs on a node with the same version. Keeping in mind that in a SOA / Composite world, the root cause of a problem found on one node may be a configuration change on a different node… (a minimal sketch of the log-stamping idea follows this list)
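
A minimal sketch of the log-stamping idea (assuming, hypothetically, that the platform exposes its build ID to the guest, e.g. via an environment variable or metadata endpoint):

```python
# Minimal sketch: every log entry carries the version ID of the
# platform node that produced it, so errors can be correlated with
# infrastructure changes. The PLATFORM_BUILD_ID variable is a
# hypothetical mechanism, not an actual PaaS feature.
import logging, os

PLATFORM_VERSION = os.environ.get("PLATFORM_BUILD_ID", "unknown")

logging.basicConfig(
    format=f"%(asctime)s [platform={PLATFORM_VERSION}] %(levelname)s %(message)s")
logging.getLogger("myapp").warning("slow response from datastore")
```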

Some closing notes:

  • Another incarnation of this problem is likely to show up in the form of PaaS certification. We should not assume that just because you use a PaaS you are the developer of the application. Why can’t I license an ISV app that runs on GAE? But then, what does the ISV certify against? A given PaaS provider, e.g. Google? A given version of the PaaS infrastructure (if there is such a thing… Google advertises versions of the GAE SDK, but not of the actual GAE runtime)? Or maybe a given PaaS software stack, e.g. the Oracle/Microsoft/IBM/VMWare/JBoss/etc, meaning that any Cloud provider who uses this software stack is certified?
  • I have only discussed here changes to the underlying platform that do not change the contract (or at least only introduce backward-compatible changes, i.e. add APIs but don’t remove any). The matter of non-compatible platform updates (and version coexistence) is also a whole other ball of wax, one that comes with echoes of SOA governance discussions (because in PaaS we are talking about pure software contracts, not hardware or hardware-like contracts). Another area in which PaaS has larger challenges than IaaS.
  • Finally, for an illustration of how a highly focused and specialized container cuts down on the need for config changes, look at this photo from earlier today during the presentation of JRockit Virtual Edition at Oracle Open World. This slide shows (in font size 3, don’t worry you’re not supposed to be able to read), the list of configuration files present on a normal Linux instance, versus a stripped-down (“JeOS”) Linux, versus JRockit VE.


By the way, JRockit VE is very interesting and the environment today is much more favorable than when BEA first did it, but that’s a topic for another post.

[UPDATED 2009/10/22: For more on this (in an EC2-centric context) see section 4 (“service problem resolution”) of this IBM paper. It ends with “another possible direction is to develop new mechanisms or APIs to enable cloud users to directly and automatically query and correlate application level events with lower level hardware information to better identify the root cause of the problem”.]

[UPDATES 2012/4/1: An example of a PaaS platform update which didn’t go well.]

9 Comments

Filed under Application Mgmt, Cloud Computing, Everything, Google App Engine, Governance, ITIL, Manageability, Mgmt integration, PaaS, SaaS, Utility computing, Virtualization

Symptoms Autonomic Framework submission to OASIS: CBE meets ITIL?

IBM, Fujitsu and CA have recently proposed a charter for a new OASIS technical committee, called the Symptoms Autonomic Framework (SAF) TC. The proposal includes a specification candidate and other submitted documents, listed here.

For context, you need to remember the Common Base Event (CBE) specification that IBM has shopped around for a long time, initially hand in hand with Cisco. As always, the Cover Pages offer the best references on this saga. CBE was submitted to WSDM and came out (in a much-emaciated form) as the WSDM Event Format (WEF) in WSDM 1.1 part 2.

Because so many parts of CBE were left on the floor of the WSDM editing room and because WSDM itself saw little adoption, I have always been expecting IBM to bring CBE back in some form. When I heard of SAF, my instinct was that this was it.

Not so. SAF is meant to sit on top of an event system like CBE. It turns selected events/situations and other data points into symptoms and tells you what to do next. Its focus is on roles, process and knowledge bases. Not on the event format. The operations and payloads defined are not for exchanging events, they are for exchanging “symptoms”, “syndromes”, “prescriptions” and “protocols”.
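
My loose reading of that flow, as an illustrative sketch only (these are not the actual SAF payloads or operations):

```python
# Illustrative sketch of the flow as I read it (not the actual SAF
# payloads or operations): events are recognized as symptoms, and
# symptoms are matched to a protocol that yields a prescription.
symptom_catalog = {
    "disk-latency-high": lambda e: e["type"] == "io" and e["latency_ms"] > 100,
}
protocols = {
    "disk-latency-high": "prescription: rebalance storage volumes",
}

def diagnose(event):
    for symptom, matches in symptom_catalog.items():
        if matches(event):
            return protocols[symptom]
    return None

print(diagnose({"type": "io", "latency_ms": 250}))
# -> "prescription: rebalance storage volumes"
```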

As the terms show, the specification espouses the medical dialect (even “protocol” is meant to be understood in the medical sense, not as in “HTTP” or “FTP”). While I have been guilty of a similar analogy myself, I also think that if there is one area from which we don’t want to learn in terms of automation, system integration and proper use of IT in general, it’s the medical field. So let’s be careful not to push the analogy too far (section 8.1 of the SAF specification is a fun read, but not necessarily very compelling).

BTW, since when do we use terms strongly associated with one company in the name of a standards group (“autonomic”)?

More fundamentally, the main question is what the chances of success of this effort are. It’s a huge endeavor (“enabling interoperable diagnosis and treatment of complex systems”) and it tries to structure activities that have been going on for a long time and in many different ways. No one will adopt this structure for its own sake, so the question is what practical benefits can be derived from this level of standardization. For example, how reliably can incoming events be mapped in practice to symptoms, how efficiently can symptoms be matched to protocols (in typical IBM fashion there seems to be a big “XPath is my hammer” assumption lurking), etc…

The discussion on the charter is currently open in OASIS if you want to weigh in.

4 Comments

Filed under Automation, Everything, IBM, IT Systems Mgmt, ITIL, Mgmt integration, Specs, Standards

Cloud APIs need to be complemented by Cloud processes

A lot of attention has been focused on technical standards for Cloud computing, especially over the last month (e.g. DMTF incubator announcement). That’s fine, but before we go crazy with detailed technical standards let’s realize that for Cloud computing (of the public variety at least) to take off we’ll need just as much standardization of non-technical interactions. Namely processes.

This, to me, is one of the most interesting angles on the recent announcement by Amazon AWS that they now support (in limited beta) the ability to load data from storage that is physically shipped to them. Have a look at this announcement and you’ll notice that it spends more time describing a logistical process (how to pack, how to ship…) than technical interfaces (storage device requirements, how to create a manifest…). It is still part of the “AWS Developer Guide” but clearly these instructions are not just for developers.

Many more such processes need to be “standardized” (or at least documented) for companies to efficiently be able to use public Clouds (and to some extent even private Clouds). Let’s take SLAs as an example. It sounds good when a Cloud provider says “we offer SLAs”. But what does it mean? Does it mean “we advertise some SLA numbers, you’re responsible for contacting us (through a phone number hidden somewhere on our site) when you think we’ve violated them; if we agree with your measurements then you may get a check in the mail at some point in the future”? Not so useful. If, on the other hand, there is a clear definition of the metric that the SLA applies to, a clear definition of how it gets measured (do we trust provider performance reports, customer measurement, a third party monitor…), a clear process to claim refund, a clear process to actually provide the refund (credit for future service or direct payment, when/how is the payment made…), then it becomes more useful.
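
To put it another way, a useful SLA is one you could write down as a complete record (field names are mine, purely illustrative):

```python
# Sketch of what a "useful" SLA pins down (field names are mine):
# the metric, how it is measured, and the claim/refund process.
useful_sla = {
    "metric": "monthly availability",
    "target": 0.999,
    "measured_by": "third-party monitor",     # not provider self-reports
    "measurement_window": "calendar month, UTC",
    "claim_process": "ticket filed within 30 days of the violation",
    "refund": {"form": "service credit", "issued_within_days": 30},
}
```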

I picked the SLA enforcement example because it happens to be an area that the TMF (TeleManagement Forum) has made partially available as a teaser for its eTOM business process framework (aimed at telco providers). The full list of eTOM processes is only available to paying subscribers. One of the goals of the eTOM process framework is “to simplify procurement, serving as a common language between service providers and suppliers”. Another way to say it is that eTOM “recognizes that the enterprise interacts with external parties, and that the enterprise may need to interact with process flows defined by external parties, as in ebusiness interactions”. Exactly what we are talking about when it comes to making public Clouds easily consumable by enterprises. SLA management is just one small part of the overall eTOM framework (if you look for it in this eTOM overview poster it’s in purple, under “assurance”, in the first row).

My point is not to assert that Cloud providers should adopt eTOM. Nobody adopts eTOM directly as a blueprint anyway. But, while the cultures and maturity levels are sometimes different, it is also hard to argue that Cloud providers have nothing to learn from telco providers (many of which are becoming Cloud providers themselves). I shudder at the idea of AT&T teaching another company how to handle customer service, but have you ever tried to call Google?

Readers of this blog are likely to be more familiar with ITIL than eTOM (which, incidentally, incorporates parts of ITIL in its latest version, 8.0). For those who don’t know about either, one way to think about it is that Cloud providers would implement processes that look somewhat like eTOM processes, that Cloud consumers implement IT management processes that follow ITIL best practices to some extent, and that these two sets of processes need to meet for public Clouds to work. I touched on this a few months ago, when I commented on the incorporation of Cloud services in an IT service catalog.

My main point is not about ITIL or eTOM. It’s simply that there are important process aspects to delivering/consuming Cloud services and that they have so far been overshadowed by the technical aspects. The processes sketched in the AWS import/export capability represent the first drop of an upcoming shower.

[UPDATED 2009/5/22: More on telcos becoming Cloud providers from EMC’s Chuck Hollis, with a retort by James Governor. Just listing these as FYI but my main point in this post is not about telcos, it’s about the need to clarify processes, independently of whether the provider is Amazon or AT&T. It’s just that the telcos have been working on such process standardization for a long time. Hoff provides another example of where process standardization is needed in Cloud relationships: right to audit.]

2 Comments

Filed under Amazon, Application Mgmt, Business Process, Cloud Computing, Everything, IT Systems Mgmt, ITIL, Standards, Utility computing

“Federationing”

I am glad to see that, as it inches towards standardization in the DMTF, the CMDBf specification is getting more visibility. Forrester’s Glenn O’Donnell recently wrote very positively about it on his blog, presenting it as a key enabler for a federation of MDRs (Management Data Repositories, a term introduced by the CMDBf specification so don’t look for it in ITIL). He argues this is the only way (rather than a single data store) to fulfill the ITIL-defined role of a CMS. Rob England (the IT Skeptic) has also shared his thoughts about CMDBf and they were noticeably less enthusiastic, to say the least. While Glenn calls the specification “profound”, Rob calls it “the most over-hyped vendor marketing smokescreen ever”. There is plenty of room in between them, which is where I sit. As I explained before, it does have real value (as a query language/protocol for system integration) but is nowhere near providing “federation” capabilities.

I am happy to see Glenn approve of CMDBf and I agree with him that accurate specialized MDRs are more useful than a single store that attempts to capture all the relevant data. As Glenn puts it, “pockets of the truth are far superior to unified ambiguity”. But I wasn’t very comfortable with the tone of his article, which seemed to almost encourage the proliferation of these MDRs. Maybe he was just trying to present a clean break with the “one big CMDB” approach and overreached. Or maybe I am just not reading properly.

Because while I agree that the answer is not “one and only one store” I also don’t want to lose the value of having as much unification of the IT model as possible. Both at the data level (i.e. same metamodel/model, consistent retention/roll-up policies…) and the access level (i.e. in the same physical store, with shared access control, accessible using a well-known DSL for data manipulation…). Metamodel transformation and model bridging are costly (in accuracy, maintenance, reliability). If your CMS does more than just support a “model navigation” GUI it may then need to run large queries that go across several portions of your IT model, including multiple different domains (e.g. a compliance rule kicked off at the app level based on the type of data it manipulates that ends up having to look at the physical location of the servers running the hypervisors for the virtual machines that power the app). Through such global queries you can apply configuration rules, do impact analysis, event correlation, provide context to your transaction tracing, etc. No consolidation means no such queries (or a very limited subset). Considering the current state of federation, there is a lot more that you can do with your CMS if you have a very small number of MDRs rather than a sea of “federated” MDRs. This is why, as Oracle acquires IT management companies, we deliberately integrate their repositories with Enterprise Manager.
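
For instance, the compliance example above boils down to a graph traversal that is only possible if the whole chain lives in (or is reachable from) the CMS. A sketch, with a made-up model and names:

```python
# Illustrative sketch of the cross-domain query: from an application
# handling regulated data, walk down to the physical location of the
# servers that ultimately power it. Model and names are made up.
edges = {                       # element -> what it runs on / depends on
    "app-payroll": ["vm-7", "vm-8"],
    "vm-7": ["hypervisor-1"],
    "vm-8": ["hypervisor-2"],
    "hypervisor-1": ["server-fr-01"],
    "hypervisor-2": ["server-us-09"],
}
locations = {"server-fr-01": "EU", "server-us-09": "US"}

def physical_locations(ci):
    if ci in locations:
        return {locations[ci]}
    found = set()
    for underlying in edges.get(ci, []):
        found |= physical_locations(underlying)
    return found

print(physical_locations("app-payroll"))  # {'EU', 'US'} -> flags a violation
```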

[UPDATED 2009/4/8: More, along the same line, from Glenn and his co-author Carlos Casanova available here. And my CMDBf partner-in-crime Van Wiles also responded to Glenn, bringing a BMC perspective.]

1 Comment

Filed under Application Mgmt, Automation, CMDB, CMDB Federation, CMDBf, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Modeling, Query, Specs, Standards

Exploring “IT management in a changing IT world”

The tagline for this blog is “IT management in a changing IT world”. Of course nobody but their authors care about blog taglines. Still, in the unlikely event that I am asked to expand on the “changing IT world” part I would do it as follows.

The changes currently at work in the IT world can be organized along three axes:

  • IT infrastructure and management
  • Application development and delivery
  • Business and regulation

Each of these categories is ridiculously large. It’s only through the prism of the relationships between them that they provide any value. Think about three balls linked by coil springs.

If you give one of these balls a shake, you will start a hard-to-predict dance between them. This is similar to how the three domains above relate to one another. Changes in one (say a new focus on regulatory compliance in the “business” area, the emergence of virtualization technology in the “infrastructure” area or the appearance of Web 2.0 applications in the “application” area) start a complex movement involving all three. It takes a while to achieve a new equilibrium (and in practice it is never achieved since changes occur too often, adding stimulus to an already excited system). For a visual illustration, see this little YouTube video (but imagine that the three balls are arranged in a triangle rather than linearly and that every so often one of them gets pulled in a random direction).

This is not new of course. There have been changes in these three areas for as long as IT has existed (starting before it was called IT) and they have always driven changes in how IT is managed. To some extent they also have always influenced one another. The “new” part is that the connections are a lot tighter now, that the springs have a much higher force constant (the “k” in “F=-kx”). So here is my attempt at mapping today’s hot buzzwords on a map organized along these areas.

Before you ask: yes of course I have a very rigorous methodology, based on very precise quantitative data, to establish with certainty the exact x, y and z coordinates of each label. Buzzword topology is a precise science.

You may notice that the buzziest buzzword (at least currently), “Cloud”, does not appear on the map. It’s because it buzzes so much that it would be all over it, engulfing what currently appears as “virtualization”, “datacenter automation”, “IaaS”, “PaaS”, “SaaS” and “opex/capex”. There are two main parts in the “Cloud” buzzword: the “Technical Cloud” and the “Business Cloud”. The “Technical Cloud” is where we take virtualization and standardization (of machines, networks and application infrastructure) and turn that mind-boggling complexity into a manageable system that can be programmed to deliver applications (Cisco recently called it “Unified Computing”; HP, IBM and others have been trying to describe and brand it for a long time). Building on these technical capabilities comes the second part of “Cloud”, the “Business Cloud”. It is the ability to use infrastructure owned by a third party (presumably one able to leverage economies of scale) and all the possibilities this opens in the business realm. That’s what “Cloud” started as, back when it was known as “Utility Computing” and before it was applied to everything under the sun. A recent illustration of the relationship between the “Technical Cloud” and the “Business Cloud” is the introduction of vCloud by VMWare (their vision includes using VMotion technology, a piece of the “Technical Cloud”, not just to move machines between neighboring hypervisors but between organizations, enabling the “Business Cloud”). Anyway, that’s why “Cloud” is not on the map. It is actually all over it.

The system displayed on the map is vibrating very intensely right now, and I don’t see this changing anytime soon. Just for fun, here are candidates for future boxes on the map:

  • In the “IT infrastructure and management” category, maybe one day we’ll get to real metadata-driven management integration across the stack (as opposed to the more limited “application modeling” area listed above), whether through RDF or not.
  • In the “application development and delivery” category, maybe Doug Purdy’s vision “to make everyone a programmer (even if they don’t know it)” will be realized, whether through Oslo or not.
  • In the “business and regulation” category, maybe one day corporations will actually start caring about the customer data they are entrusted to (but only if mishandling it finally costs them more than “sorry about that, here is a one year credit monitoring subscription now go away”).

In summary, the evolution of IT management is driven not only by changes in IT technology but also by changes in two other fields (“application development and delivery” and “business and regulation”) with which it is tightly connected. Both of these fields are also in a very dynamic state. And they also influence one another, resulting in a complex three-way dance. You can’t understand the trajectory and moves of one dancer without seeing the others.

That’s what I mean by “IT management in a changing IT world”. Thanks for asking.

[UPDATED 2009/6/25: For more on the “technical cloud” versus “business cloud”, go read Neil Ward-Dutton’s nice explanation. He actually breaks down the “business cloud” in two (separating the economic aspect from the strategic aspect).]

1 Comment

Filed under Application Mgmt, Automation, Big picture, BPM, BSM, Business, Cloud Computing, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Open source, Utility computing, Virtualization

Towards making Cloud services more consumable by enterprises

If you are a hard core network/system management person who has been very suspicious of all the ITIL/ITSM bullshit from the boss, and even more suspicious of the “Cloud” nonsense that occupies the interns while they should be monitoring the event console, then I have bad news for you: they are mating. Not the interns (as far as I know), I am talking about ITSM and the Cloud.

If, on the other hand, you are an ITSM and Cloud enthusiast who sees himself/herself leveraging all these nifty ITSM/BSM tools and shiny Cloud services to become the ultimate CIO, delivering unprecedented business value, compliance and IT efficiency from your iPhone at the beach, then you’ll see this marriage as good news, a sign that your move to Hawaii is approaching.

I am referring to the announcement by Rodrigo Flores that his company, newScale, has released a new product, newScale FrontOffice for Virtual Data Centers, to incorporate Cloud-based services in their IT Service Catalog.

This is not sexy but it’s the kind of support that some classes of enterprises will need in order to really make use of Cloud services. Eventually, Cloud providers are going to have to move their focus away from cool technology and developer evangelism towards making their services easily consumable by corporations. Otherwise they’ll be like an office supply store that doesn’t take American Express.

While the direction is very interesting I can’t comment on how much value this new product actually provides because the company seems to engage in anti-marketing activities. If you want to dig a bit deeper than the press release, you get redirected to this page which requires your info in order to download the “complimentary information brief”. The confirmation page promises an email “containing the information you requested within the next 30 minutes”. I thought that the info I requested was the brief. What I got instead was an email asking me to call them. I don’t know if it’s my unpronounceable last name or the oracle.com in my email address that scares them (I’d guess the latter) but that seems like a lot of precautions for a “complimentary information brief”. Unless this is an attempt to grab the “Cloud” buzzword with little meat to back it up (not that anyone would do this, of course). Hopefully they’ll make more information publicly available when they get around to it.

[UPDATED 2009/2/25: Via Coté, I just saw what I consider supporting evidence, from Gartner, that once we’re out of the early adopter phase, Cloud firms will need to focus less on pleasing developers and more on pleasing IT people: “But my observation from client interactions is that cloud adoption in established, larger organizations (my typical client is $100m+ in revenue) is, and will be, driven by Operations, and not by Development.”]

[UPDATED 2009/3/3: I got the PDF brief, thanks newScale (Ken and Mark). Many of the benefits it describes assume that there is a pretty robust automation/provisioning infrastructure to back it up, in addition to the Catalog itself. E.g. the Catalog alone will not allow you to “shorten the provisioning cycle time to minutes instead of months”. The brief lists adapter kits to VMWare/EC2 and more internal-minded tools (HP and BMC, presumably through their Opsware and BladeLogic acquisitions respectively). So on the “public cloud” side it’s EC2 for now, not surprisingly. Integration with many of the Cloud tools (like RightScale) could be tricky since these tools bundle a catalog with the automation engine. If we ever do get a useful ontology of Cloud services this catalog would be a natural user for it, when it expands to other services beyond EC2 and tries to help you compare them. I guess newScale wouldn’t appreciate if I provided a direct link to the PDF, so go request it to see for yourself.]

[UPDATED 2009/3/23: Speaking of managing your IT systems from your cell phone at the beach: VMWare vCenter Mobile Access.]

3 Comments

Filed under Cloud Computing, Everything, Governance, IT Systems Mgmt, ITIL, Mgmt integration, Utility computing

Cloud ontology: to boldly go where ITIL, Grid and SOA have gone before

I wasn’t at the recent Cloud interop meeting in Mountain View, but based on blog reports from those who were (Stu, James and Bob), a key takeaway is that we need a “Cloud taxonomy”.

As John points out, there are plenty of existing proposals. The SPI taxonomy (SaaS/PaaS/IaaS), the SADIST-PIMP taxonomy (Storage, App, DB, Info, Security, Test, Platform, Integration, Management, Process – which I took the liberty to re-order) and the UCSB/IBM “Toward a Unified Ontology of Cloud Computing” paper.

Is it a matter of picking one of these? And if we do, how are we better off?

Let’s look at some related domains that are a bit more mature to see how they handled this and what benefits they got out of it. The three most relevant comparisons I can come up with are ITSM, Grid Computing and SOA.

ITSM has ITIL, a key part of which is its glossary which provides value in and of itself, even if you don’t explicitly use the rest of the framework. Is this what the Cloud taxonomy should aim for? I contend that the Cloud field is not nearly mature enough to aim for this.

Grid Computing has OGSA, which started as a white paper containing a service model (section 6.1). It was later expanded (within GGF, a standards organization specific to Grid Computing) into this document and was used as a basis for many standard technical specifications. Is the goal of the Cloud Ontology to follow a similar path? OGSA was more than a glossary though, it included a vision, goals and architectural recommendations. Sure you can start with the glossary, but I am willing to bet that a stand-alone glossary would change significantly if/when a Cloud Computing architecture is defined based on it. And there isn’t today, for Cloud Computing, the singularity of vision that the Globus gang brought to Grid Computing and that is necessary for an architecture.

SOA was never as structured. There was an early attempt, called the Web Services Architecture at W3C. Like OGSA, this is a high level architecture, not just a taxonomy. In any case, the WSA work didn’t really end up having much of an influence in SOA (or even Web services) practice. Then came the SOA Reference Model, an OASIS standard. Of the three (ITIL, OGSA, SOA-RM) this is what I think is closest to what a Cloud taxonomy should aim for at this point. An ITIL-like glossary would be too brittle (if it has any depth) and it’s too early for an OGSA-like architecture.

Again, I was not at the meeting and only know what was blogged about. This is not a criticism of the proposal, just some thoughts about how it might compare to similar efforts.

Side note #1: I prevented myself from going off about the difference between a controlled vocabulary, a glossary, a taxonomy and an ontology. These differences are not germane to the present discussion. And I am not wearing my pedantic hat today (for once). If some want to call a wedding cake an ontology, who am I to stop them?

Side note #2: Speaking of trying to be less pedantic in 2009, this entry represents my capitulation: I finally created a “Cloud Computing” category in this blog, rather than my preferred designation, “Utility Computing”, which I have been using so far. A small step for me, a microscopic step for humanity.

[UPDATED 2009/1/28: We’re getting there (fast!). Via John, this updated taxonomy graph from Chris Hoff is going in the right direction, I think. Next step: more text describing the relationships between the boxes to have more of a model.]

[UPDATED 2009/1/30: If you think that any single-vendor taxonomy will be suspicious, that a multi-vendor taxonomy won’t happen and that customers are too busy to do it, then you may be asking “aren’t analysts supposed to do this?”. Help may be on the way.]

8 Comments

Filed under Cloud Computing, Everything, Grid, ITIL, Modeling, Standards, Utility computing

Sorry, CMDBf doesn’t make coffee either

The IT Skeptic is writing to us from his mountain retreat (via a time-delayed post on his blog), and the topic he felt safe to cover in such fashion (what journalists call an “evergreen”) is the fact that CMDBf is an orchestrated sham, brilliantly executed by IT management vendors.

I’d love to be part of something that’s brilliantly executed for once, even if it is a sham, but I am afraid this is not it. But first I should state the obvious: even though I am a member of the CMDBf group at DMTF (and also an author of the original version, under my previous employer), I do not speak for the group or DMTF (or my employer, for that matter). I speak only for myself, as always on this blog.

The problem that Rob England, Mr. Skeptic, has with the CMDBf specification is that it doesn’t do a bunch of things that he’d like it to do, such as specifying how data sources acquire data for their domain, how they store the data, how the underlying resources are reconfigured, what processes are followed etc. See the full list from his post. The list is a copy/paste from the CMDBf specification, with some comments added, so at the very least he has to admit that as far as “smokescreens” go this one is pretty upfront about its limitations…

He concludes that “this is once again a geeky technical solution to a cultural, organizational and procedural problem.” I have to ask: who expects DMTF specifications to solve “cultural, organizational and procedural” problems? Does CIM solve such problems? Does WBEM?

Human-to-human communication is a “cultural, organizational and procedural” problem and SMTP/POP/IMAP/etc (the interoperable protocols used by email systems) are just as geeky as CMDBf. They don’t solve the larger problem, only contribute to the solution. If CMDBf can contribute as much to datacenter management as SMTP/POP/IMAP contribute to human communication (minus the SPAM if possible), I’d call that a success.

And then there is this warning:

“WARNING: vendors will waive this white paper around to overcome buyer resistance to a mixed-vendor solution. For example if you already have availability monitoring from one of them, one of the other vendors will try to sell you their service desk and use this paper as a promise that the two will play nicely.”

Has anyone actually seen this happen? I am asking because so far, both at HP and Oracle, the only sales reps I have ever met who know of CMDBf heard about it from their customers. When asked about it, the sales person (or solutions engineer) sends an email to some internal mailing list asking “customer asking about something called cmdbf, do we do that?” and that’s how I get in touch with them. Not the other way around.

And if the objective really was to trick customers into “mixed-vendor solutions”, I don’t really understand why vendors would go through the effort of collaborating on such a scheme, since it’s a zero-sum game between them in the end.

As for the glacial pace of progress (“Glacial advance. That’s the way the vendors want it” from an earlier post by the Skeptic), CMDBf is no race horse but I don’t see it going any slower than other standards. Slowness (I mean, deliberation) is part of the landscape. I would submit a slight twist on Hanlon’s razor: “Never attribute to malice that which can be adequately explained by legal, procedural and organizational inertia.”

Having said all this, some of Rob’s criticism is perfectly justified, such as his sarcasm about this sentence from the specification:

“The Federated CMDB operates in a closed environment, in which some security issues are less critical than in open access or public systems.”

OK, that’s stupid indeed. Especially in a public cloud environment where you don’t know who is renting the VM next door. I’ll ask the group to remove this. Actually, that whole appendix is useless and I pointed this out in my earlier review of CMDBf 1.0 (look for the “security boilerplate” section at the bottom of the review).

Rob could also have pointed out that this specification only addresses “federation” if you accept a very scaled-down definition of the term. What it does do is help with CMDB query and synchronization. Not the holy grail, but nothing to sneer at either.
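To give a concrete idea of what that query capability looks like, here is a rough sketch of a CMDBf query. It is essentially a graph pattern: itemTemplate elements describe the nodes you want and relationshipTemplate elements describe the edges that must connect them. I am approximating the element names rather than copying them from the specification, so treat this as an illustration, not as schema-valid CMDBf:

  <query>
    <!-- nodes: the applications we care about -->
    <itemTemplate id="app">
      <recordConstraint>
        <recordType localName="Application"/>
      </recordConstraint>
    </itemTemplate>
    <!-- nodes: candidate hosts -->
    <itemTemplate id="host">
      <recordConstraint>
        <recordType localName="ComputerSystem"/>
      </recordConstraint>
    </itemTemplate>
    <!-- edges: only keep app/host pairs connected by a runsOn relationship -->
    <relationshipTemplate id="runsOn">
      <recordConstraint>
        <recordType localName="runsOn"/>
      </recordConstraint>
      <sourceTemplate ref="app"/>
      <targetTemplate ref="host"/>
    </relationshipTemplate>
  </query>

The response contains the items and relationships that satisfy the whole pattern; that’s the “query” part. The “synchronization” part comes from the registration service, through which a data source can push such items and relationships to the federating CMDB.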

Rob, next time you want to throw tomatoes at CMDBf while you’re on holiday, just give me the password to the site and I’ll do it for you… :-)

[UPDATED 2009/1/21: Rob responds via a comment on his original blog entry.]

2 Comments

Filed under BSM, CMDB Federation, CMDBf, DMTF, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Security, Specs, Standards

CMDBf interop demo

IBM and CA are apparently showing an interoperability demo between their respective CMDBs at itSMF Fusion this week. I am not there to see it, but they describe it (it’s a corporate merger scenario) in this press release. It is presumably based on the version of the specification that was submitted to DMTF.

More information about CMDBf, along with another demonstration, will be available in a couple of months for ManDevCon attendees. Three sessions are on the agenda, all in a row and in the same room (so make sure to get a good seat, i.e. one close to a power plug, from the start):

  • CMDB Federation Overview (Vince Kowalski, BMC and Marv Waschke, CA)
  • CMDB Federation Technical Description (Mark Johnson, IBM and Marv Waschke, CA)
  • CMDB Federation Demonstration (Mark Johnson, IBM and Dave Snelling, Fujitsu)

Comments Off on CMDBf interop demo

Filed under CA, CMDB, CMDB Federation, CMDBf, Conference, DMTF, Everything, IBM, IT Systems Mgmt, ITIL, Mgmt integration, Specs, Standards, Trade show

ITIL certification for Oracle IT Service Management Suite (Pink Elephant)

The Oracle IT Service Management Suite (meaning the combination of Oracle Enterprise Manager and Siebel Service Desk) has earned a V2 certification for ITIL from Pink Elephant. More specifically, the Suite covers six of the seven processes: Incident, Problem, Change, Configuration, Release and SLM.

Here is the “Pink Verified” list.

[UPDATED 2008/9/9: Here is the corresponding press release.]

Comments Off on ITIL certification for Oracle IT Service Management Suite (Pink Elephant)

Filed under Everything, IT Systems Mgmt, ITIL, Oracle

BMC acquires ITM Software

Another BMC acquisition today: ITM Software. Their software suite is designed to help drive IT decisions from the point of view of their business impact.

This is important, of course, for all the reasons that BMC, HP, Oracle and others have been explaining for a while (how often have you heard the word “alignment” over the last three years, compared to the previous thirty?). It’s becoming even more important now, as the options for IT sourcing (from the traditional “give it all to Unisys”, to SaaS, to running your own apps in a utility computing environment…) are multiplying. Choosing between Intel and AMD CPUs in your datacenter is a technical decision, but choosing between an on-premise application, a SaaS application and running your application on EC2 is driven by business considerations of cost, risks, control, flexibility, etc. And it’s not just a one-time decision; it’s the day-to-day management that follows these decisions.

I don’t know much about the current ITM offering, but it was never clear to me how much they could deliver as a narrow layer, separate from the heavy-duty IT management stack (I can see how they would deliver financial and project management tools, but what about *really* linking day-to-day IT administration decisions to their business impact?). Being part of BMC, presumably allowing deeper integration into real IT management operations, seems to make sense.

I just wish they didn’t make it sound so easy: “BMC’s purchase of ITM Software creates a unique, integrated solution that provides customers with a single comprehensive view into…”. So just signing the check creates the integration? Now I am going to get calls from our execs asking why it takes so much work to integrate acquired products, if BMC can do it the same day they sign the deal…

While I am at it, here is the press release that HP put out to list the announcements at their Software Universe conference this week. I notice that it’s all about new versions of ex-Mercury products. No OpenView, Peregrine or Opsware content, as far as I can tell. Without looking at it in more detail I don’t know how different these new versions really are. What appears pretty new is the SaaS offering (also based on Mercury products) at the end of the press release. On the nitpicking side, can anyone tell me what these “static configuration management databases” are that are “unable to support the real-time needs of today’s complex technology environments”? I can see how a “static” database would be hard-pressed to help, but I haven’t noticed any vendor selling read-only config stores.

[UPDATED 2008/6/18: More details about the HP announcement at InfoWorld. Including quotes from my ex-boss Ramin. Congrats on getting UCMDB 7.5 out the door!]

2 Comments

Filed under Application Mgmt, BSM, Business, CMDB, Everything, HP, IT Systems Mgmt, ITIL, Mgmt integration

CMDBf now in the hands of the DMTF

It’s now official: the CMDBf specification has been submitted to the DMTF and will be standardized there. Here is the press release and here is the specification (unchanged) republished on the DMTF site. The CMDBf working group was created a while ago at the DMTF, but I didn’t report it since it wasn’t clear to me whether that was public information or not. The press release makes this clear now.

As a side note, this is one of my ongoing frustrations with the DMTF. Almost everything happens in private, with no publicly-accessible URL until a press release comes out, and of course lots of interesting things happen that never get a press release. I have heard many times that the DMTF is working on opening up the process, but I still haven’t seen much change. If this had been OASIS or W3C, the call for formation of the new working group would have been publicly accessible even before the group was created. OK, end of ranting.

As always, there isn’t much useful information to be gleaned from the text of the press release. Only that, as expected, the authors addressed the question of how this relates to CIM, since for many people DMTF=CIM. So the press release proactively declares that the CMDBf work will not be limited to CIM-modeled configuration data. What this means in practice remains to be seen (e.g. will there be CIM-specific extensions?).

Having seen how executive quotes for press releases get generated, I hate to read too much into them, but another thing I can’t help noticing in the press release is that none of the quotes from the companies submitting the specification tout federation, but simply “integration” or “sharing”. For example: “integration and interoperability” (BMC), “share data” (CA), “sharing of information” (HP), “view, track and change information” (IBM), “exchange data” (Microsoft). This more realistic assessment of what the specification does stands in contrast to the way the DMTF presents it in the press release: “this specification provides a standard way to federate management data stored in multiple different data models”. At this point, it doesn’t really provide federation, and especially not across different models.

All in all, it’s a good thing for this work to be moved to a standards organization. I may join the CMDBf group at the DMTF to track it, but I don’t plan to engage very much, as this area isn’t my focus anymore now that I am at Oracle. But of course everything is linked at some level in the management field.

[UPDATED 2007/11/30: two days after posting this message I got the monthly DMTF newsletter, which touches on points I raise here. So here are the relevant links. First, Mike Baskey, DMTF Chairman, shares his view on what CMDBf means for DMTF. Second, as if to respond to my rant on the opacity of the DMTF, Josh Cohen, DMTF Vice-chairman, gives an update on process improvements. Some progress indeed, but still a far cry from opening up mailing list archives so that observers can see in real time what issues are addressed, and can go back in time to understand how a specific technical decision was made and what the considerations were.]

Comments Off on CMDBf now in the hands of the DMTF

Filed under CMDB, CMDB Federation, CMDBf, DMTF, Everything, IT Systems Mgmt, ITIL, Specs, Standards

Review of the CMDBf specification version 1.0

Having read the recently released CMDBf 1.0 specification over the weekend, I see several improvements since 0.95, including:

  • the introduction of depthLimit
  • the lastModified metadata element
  • the ability to specify more than one instanceId in a template
  • the ability to advertise what parts of the specification you implement
  • the definition of faults

But while 1.0 is more complete than 0.95, I think it makes it harder to achieve interoperability. Here are the main friction points for interop:

New role for XPath

The xpathExpression element (which replaces xpath1Selector) changes in two very important ways. First, rather than being limited to XPath 1.0, it now also allows XPath 2.0. Support for this is a lot harder to achieve for people who don’t use XML as the backend format for their data. Considering the current state of adoption of XPath 2.0 and the low level of XML complexity exposed by most CMDB models, I don’t think it was opportune to bring this into CMDBf yet. And my guess is that most implementations will stay away from this.

But there is a second change, less obvious but even more problematic. XPath is not just another constraint mechanism for a CMDBf template anymore, one that returns a boolean result indicating whether the instance meets the constraint or not, as it used to be in 0.95. It is now an alternative selection and filtering mechanism that lives in parallel to all the other elements in a template (and can’t mix with them). Overall, I think this change goes too far in the direction of turning a shared agreement to exchange data in XML into an assumption that the internal data models are all based on XML. And the killer with regard to interoperability is that the specification says nothing about how the resulting node sets are serialized in the response. There may be a serialization for the XPath 2.0 data model, but there is no such thing for XPath 1.0, and I don’t see, in the current state of the specification, how two implementations have any chance of interoperating when using this feature.
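To make the contrast concrete, here is a rough sketch of the two roles (I am approximating the syntax; don’t treat this as schema-valid CMDBf):

  <!-- 0.95: XPath as one more boolean constraint in a template. The
       instance is selected only if the expression evaluates to true. -->
  <itemTemplate id="server">
    <recordConstraint>
      <xpath1Selector>os/@version = "RHEL5"</xpath1Selector>
    </recordConstraint>
  </itemTemplate>

  <!-- 1.0: XPath as a parallel selection and filtering mechanism that
       cannot mix with the other template elements. The expression returns
       a node set, and the specification says nothing about how that node
       set is serialized in the response, which is where interoperability
       falls apart (there isn't even a standard serialization of the
       XPath 1.0 data model to point to). -->
  <itemTemplate id="server">
    <xpathExpression>//record[os/@version = "RHEL5"]</xpathExpression>
  </itemTemplate>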

Introduction of linkDepth

As I mentioned earlier, linkDepth is a very useful addition (even though it pales in comparison to the inferencing capabilities that could have been derived from basing CMDBf on RDF). But it is also a complicated feature. The intermediateItemTemplate attribute is a good re-use of the existing plumbing, but it needs at least a detailed example. I trust that the group will generate one once they’ve caught their breath from putting out the specification.
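While waiting for that example, here is, in sketch form, the shape of the feature as I read it (names taken from this review and the pseudo-schema, everything else approximated):

  <!-- Follow dependsOn relationships transitively from the items matching
       the "app" template, but only traverse through intermediate items
       that match the "host" template, and stop past 100 intermediate
       items. Note that maxIntermediateItems is an attribute in the
       pseudo-schema but an element in the XSD (see the UPDATE at the
       end of this post). -->
  <relationshipTemplate id="deps">
    <recordConstraint>
      <recordType localName="dependsOn"/>
    </recordConstraint>
    <depthLimit maxIntermediateItems="100"
                intermediateItemTemplate="host"/>
    <sourceTemplate ref="app"/>
    <targetTemplate ref="host"/>
  </relationshipTemplate>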

Service capability metadata

There is a new section (#6) that provides ways to describe what CMDBf features an implementation supports. But it is a very granular representation. Basically, for every feature you can describe whether you support it or not. So someone may declare that they support everything inside propertyValue, except for the “like” operator. And someone else might support all the operators but not the caseSensitive modifier. That might be OK for human consumption, but automated scenarios rely on pre-programmed queries, and that is made very hard by all the possible combinations of options. What we need is a few well-defined profiles that people implement fully. Starting, of course, with a profile that rules out xpathExpression.
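In sketch form, here is the kind of piecemeal declaration I mean (the element names are invented for illustration; the actual section 6 vocabulary differs):

  <!-- Implementation A: all of propertyValue except the "like" operator -->
  <propertyValueSupport>
    <equal supported="true" caseSensitive="true"/>
    <like supported="false"/>
  </propertyValueSupport>

  <!-- Implementation B: all operators, but no caseSensitive modifier -->
  <propertyValueSupport>
    <equal supported="true" caseSensitive="false"/>
    <like supported="true" caseSensitive="false"/>
  </propertyValueSupport>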

Record metadata

This new version introduces metadata on records. While recordId and lastModified are probably well understood and interoperably usable, I am a bit more dubious about whether baselineId and snapshotId are going to be interoperable across vendors, given their limited description in the specification. The nice thing is that this metadata can not only be returned but also searched on. Well, at least that’s the intent. But this goes through the recordMetadata attribute on propertyValue which, while present in the pseudo-schema, is missing in the XSD…

The contentSelector element

This new element is more flexible than the propertySubsetDirective element that it replaces. In addition to specifying what properties you want returned, it also allows you to specify that you only want certain record types and/or that you only want the record(s) that were used to satisfy constraints in the template. Those are nice additions, but the way the second part is implemented (through the matchedRecords attribute) seems to assume that only one record in the instance was used to match all the constraints in the template. This is not necessarily the case: an instance can be selected by having different records match the different constraints in the template, as long as it has at least one matching record per constraint (line 765 says “the item satisfies all the constraints”, not “a record of the item satisfies all the constraints”, and you can also see this in the example in section 4.2, where the records mentioned on lines 637 and 639 don’t have to be the same). So do you return all records that played a role in matching the template, or only those (if there are any) that match all the constraints on their own, as the text seems to imply? And if several record combinations inside an instance can be used to match the constraints in a template, do I return all of them or can I just pick any subset that matches? Also, how can I say that I want all records that established the template match, independently of their type? There doesn’t seem to be a way to do this, or is it by putting a contentSelector element with no child element and the matchedRecords attribute set to false? There won’t be much interoperability on this feature until all this is clarified.
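A minimal illustration of the problem (schematic, not actual CMDBf response syntax):

  <!-- The template contains two constraints. Record A satisfies
       constraint 1 only, record B satisfies constraint 2 only, so the
       item as a whole matches the template (one matching record per
       constraint) but no single record matches on its own. -->
  <item>
    <record id="A"/>  <!-- satisfies constraint 1 -->
    <record id="B"/>  <!-- satisfies constraint 2 -->
  </item>
  <!-- When the contentSelector asks for matched records only: does the
       response contain A and B, or neither (since neither satisfies all
       the constraints by itself)? The specification doesn't say. -->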

Relationships as items

A major change between 0.95 and 1.0 is that now a relationship can match an itemTemplate. For example, if you ask for all items that were modified during the last 24 hours, you will get all the items and all the relationships that meet that criterion, while in the previous version you’d have had to explicitly request the relationships with a relationshipTemplate if you wanted them too. There is a good case to be made for either view, and the one that works best largely depends on your backend implementation technology (RDF, objects, SQL, CIM…). But the important thing is for the spec to be clear, and on this point I think the change wasn’t made explicit enough in the query section of the specification. If Van hadn’t called my attention to this on his blog, I would have missed this important change.
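In sketch form (again with approximated syntax), under 1.0 the following template returns recently-modified relationships as well as recently-modified items, with no relationshipTemplate in sight:

  <!-- Everything (items AND relationships) modified since the given
       instant. Under 0.95, this would have returned items only. The
       constraint leans on the record metadata search described above;
       the operator name and the timestamp are illustrative. -->
  <itemTemplate id="recentlyModified">
    <recordConstraint>
      <propertyValue recordMetadata="true" localName="lastModified">
        <greaterThan>2007-08-01T00:00:00Z</greaterThan>
      </propertyValue>
    </recordConstraint>
  </itemTemplate>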

Security boilerplate

There is a person at IBM (probably located in a well-stocked underground bunker in upstate NY) who has instilled the fear of god in all IBM employees (at least all those who author publicly available specifications) and forces them to include a boilerplate “security considerations” section everywhere. I have co-authored several documents with IBM employees and it never fails, even though it doesn’t add anything useful to the specification. You should see the look of fear on the faces of the IBM employees when someone else suggests doing without it. We somehow managed to sneak one such slimmer specification past the IBMers with CMDBf 0.95, but I see that this has been “corrected” in 1.0. I hope that whatever painful punishment Scott, Jacob, Andrew and Mark (or their families and pets) were subjected to in the process by the IBM security ogre wasn’t too cruel. Sure, this doesn’t really impact interoperability, but now that I don’t work for a company that makes money from ink anymore, I have even less patience for this bloating.

OK, that’s enough back seat driving for now. Hopefully the standards group that will take over the specification will address all these questions. In the context of the entire specification, these are pretty small issues and mostly easy to fix. And the CMDBf group can go on to address the hard issues of federation (including security-related issues that abound in this field if one really wants to tackle them). The current specification is a useful graph-oriented query language that is a good match for CMDB data. But it’s really just a query language (plus a simple registration system).

[UPDATE: while updating the CMDBf query algorithm, I noticed another small error: maxIntermediateItems is an attribute in the pseudo-schema but an element in the schema. Something else to fix in the next version.]

3 Comments

Filed under CMDB, CMDB Federation, CMDBf, Everything, Graph query, IT Systems Mgmt, ITIL, Query, Specs, Standards, Tech