Category Archives: Everything

Too hot to count #2

It’s been a while since I added an entry in the CrazyStats category. On my drive back home tonight, I heard a gem, courtesy of both the British and French public broadcasters. So I guess it’s not just a problem at NPR.

I was listening to a podcast of a history program on France Inter (French public radio). It was about the eruption of Mount Vesuvius that destroyed Pompeii and Herculaneum in AD 79. They played a clip from a BBC program that explained that the lava coming down was “five times hotter than boiling water”, a figure that the French host later repeated.

Never mind the fact that, on the Kelvin scale, the same lava is only twice as hot as boiling water. More on the silliness of applying percentages to temperatures here.
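
For the record, here is the quick arithmetic (my assumption being that the “five times” figure was computed in Celsius): 5 × 100°C = 500°C, which is about 773 K. Boiling water sits at about 373 K, and 773 / 373 ≈ 2.1. Twice as hot, give or take.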

1 Comment

Filed under CrazyStats, Everything, Off-topic

Reality check on Cloud portability

SD Times recently published an interesting article about “cloud interoperability”. It has some well-informed opinions. But, like all Cloud-related discussions, it also suffers from mixing a bunch of things. The word “interoperability” is alternately applied to the Cloud infrastructure services (in which case this “interoperability” is a way to provide application “portability”) and to the Cloud-hosted applications themselves.

Application-level interoperability (“look, my GAE-hosted app successfully sent an HTTP request to an Azure-hosted app, open the champagne”) is not very new or exciting anymore and is often used as an interoperability smokescreen (hello Salesforce.com). Many of these interop concerns are long solved and the others (like authentication and data migration) need to be solved in ways that don’t care whether the application is hosted in your Silicon Valley garage or near the Columbia river.

Cloud infrastructure compatibility (in other words application portability) is the more interesting discussion. I keep reading that it is needed (“no vendor lock-in, not ever again”) for enterprises to move to the Cloud. Being a natural-born cynic, I always ask myself whether those asking for it are naive (sometimes) or have ulterior motives (e.g. trying to catch up with Amazon by entangling them in the standards net – some of my fellow cynics see the Open Cloud Manifesto as just this).

Because the reality is that, Manifesto or no Manifesto, you are not going to get application portability across IaaS-type Cloud providers. At least for production applications. Sorry. As a consolation prize, you may get some runtime portability such that we’ll be shown nice demos of prototype apps moving from one provider to another (either as applications or as virtual machines). Clap clap until you realize that they left behind their monitoring capabilities, or that their configuration rules don’t validate anything anymore. And that your printer ran out of red ink when printing the latest compliance report. Oops.

Maybe I am biased because they are both my friends and ex-colleagues, but the HP guys make the most sense in the SD Times article. Tim Hall has it right when he suggests “that the industry should focus on specific problems that it is going to solve around deployment and standardized monitoring”. And the other HP Tim, Mr. van Ash, rightly points out that we should “stop promising miracles”, which Forrester’s Jeffrey Hammond echoes, saying that there is a difference between a standard and “plug-and-play in reality”.

Tim Hall uses SQL as an example of a realistic common baseline. J2EE would be another one. They provide a good reality check. Standards are always supposed to prevent vendor lock-in. And there is a need for some of that, of course. But look at the track records. How many applications do you know that are certified and supported on any SQL database, any Unix operating system and any J2EE app server? And yet, standardizing queries on relational data and standardizing an enterprise-class runtime environment for one programming language are pretty constrained scopes in the grand scheme of things. At least compared to all the aspects that you need to standardize to provide real Cloud portability (security, monitoring, provisioning, configuration, language runtime and/or OS, data storage/retrieval, network configuration, integration with local apps, metering/billing, etc). And we’re supposed to put together a nice bundle of standards that will guarantee drag-and-drop portability across all these concerns? In how many lifetimes? By then, Cloud computing will have been replaced by the next big thing (galaxycomputing.com is still available BTW).

Not to mention that this standardization comes hand in hand with constraints on what you can do. That’s why I read Amazon’s Adam Selipsky’s comment that allowing customers to do “whatever they want” is vital as a way to say “get real” to requests for application portability, while allowing him to sound helpful rather than obstructionist.

This doesn’t mean that these standards are not useful. They make application portability possible if not free. They make for much improved productivity through generic tools and reusable developer knowledge. We still need all this.

Here is the best that can realistically happen in the “application portability across IaaS providers” area for at least 10 years:

  • a set of partial standards for small parts of the Cloud computing domain (see list above), many of which already exist.
  • a set of RightScale-like tools that do a lot of the grunt work of mapping/hiding/transforming between providers, with various degrees of success.
  • the need for application providers to certify their applications on Cloud providers one by one anyway and to provide cloning/migration as a feature of the application rather than an infrastructure-level task.

That’s assuming that IaaS providers become a major business, that there remains a difference between service providers and software providers. The other option is that the whole Cloud excitement goes back to SaaS only, that application creators are also hosting providers, that the only resource you get in a “utility” fashion is the application itself. At which point application portability is not a concern anymore and we go back to “only” worrying about data portability and application interoperability, an easier problem and one on which we have come a long way already. If this is what comes to pass then the challenge of Cloud portability may well be one of the main reasons. Along with the lack of revenue/margin potential for many of the actors in an IaaS world, as my CEO is fond of pointing out.

[UPDATED 2009/4/22: F5’s Lori MacVittie provides a very nice illustration of the same point, in her explanation of why OVF is not a cloud portability silver bullet.]

[UPDATED 2009/6/1: Soon after posting this entry I was contacted by people at SD Times about turning it into a “guest view” article in the June issue. It has just been published. It’s also in the paper version.]

5 Comments

Filed under Amazon, Application Mgmt, Articles, Cloud Computing, Everything, Google App Engine, HP, IT Systems Mgmt, Mgmt integration, People, Portability, Specs, Standards, Utility computing

“Federationing”

I am glad to see that, as it inches towards standardization in the DMTF, the CMDBf specification is getting more visibility. Forrester’s Glenn O’Donnell recently wrote very positively about it on his blog, presenting it as a key enabler for a federation of MDRs (Management Data Repositories, a term introduced by the CMDBf specification so don’t look for it in ITIL). He argues this is the only way (rather than a single data store) to fulfill the ITIL-defined role of a CMS. Rob England (the IT Skeptic) has also shared his thoughts about CMDBf and they were noticeably less enthusiastic, to say the least. While Glenn calls the specification “profound”, Rob calls it “the most over-hyped vendor marketing smokescreen ever”. There is plenty of room in between them, which is where I sit. As I explained before, it does have real value (as a query language/protocol for system integration) but is nowhere near providing “federation” capabilities.

I am happy to see Glenn approve of CMDBf and I agree with him that accurate specialized MDRs are more useful than a single store that attempts to capture all the relevant data. As Glenn puts it, “pockets of the truth are far superior to unified ambiguity”. But I wasn’t very comfortable with the tone of his article, which seemed to almost encourage the proliferation of these MDRs. Maybe he was just trying to present a clean break with the “one big CMDB” approach and overreached. Or maybe I am just not reading properly.

Because while I agree that the answer is not “one and only one store” I also don’t want to lose the value of having as much unification of the IT model as possible. Both at the data level (i.e. same metamodel/model, consistent retention/roll-up policies…) and the access level (i.e. in the same physical store, with shared access control, accessible using a well-known DSL for data manipulation…). Metamodel transformation and model bridging are costly (in accuracy, maintenance, reliability). If your CMS does more than just support a “model navigation” GUI it may then need to run large queries that go across several portions of your IT model, including multiple different domains (e.g. a compliance rule kicked off at the app level based on the type of data it manipulates that ends up having to look at the physical location of the servers running the hypervisors for the virtual machines that power the app). Through such global queries you can apply configuration rules, do impact analysis, event correlation, provide context to your transaction tracing, etc. No consolidation means no such queries (or a very limited subset). Considering the current state of federation, there is a lot more that you can do with your CMS if you have a very small number of MDRs rather than a sea of “federated” MDRs. This is why, as Oracle acquires IT management companies, we deliberately integrate their repositories with Enterprise Manager.

[UPDATED 2009/4/8: More, along the same line, from Glenn and his co-author Carlos Casanova available here. And my CMDBf partner-in-crime Van Wiles also responded to Glenn, bringing a BMC perspective.]

1 Comment

Filed under Application Mgmt, Automation, CMDB, CMDB Federation, CMDBf, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Modeling, Query, Specs, Standards

Open Cloud Manifesto, circa 2004

The mini-scandal of last week was the manifesto-gate. The mini-scandal of this week is shaping up to be the Ulitzer-gate (if you want to make sure not to miss next week’s IT scandal, subscribe to the Register feed, ferreting these out and adding a bass-heavy soundtrack is their specialty).

Turns out I am one of these Ulitzer “unaware authors” through two articles I wrote a while ago for the Web Services Journal, a paper publication by Sys-con (based on a request from HP PR), and a blog post I allowed Sys-con to republish. Looks like Ulitzer and Sys-con are one and the same. Three articles, spaced two years apart. That’s enough to earn me a dedicated home page at Ulitzer and a rank of 1,000 among their more than 6,000 authors. Makes you wonder how much the 5,000 “authors” behind me have (unknowingly) produced… Whatever. At least it’s all content that I authorized Sys-con to use, not something that was lifted from my blog as apparently happened to others.

Turns out the oldest of these articles (“From Web Services Management to Utility Computing” , from 2004) is not that different from the recently-published (and amply maligned) Open Cloud Manifesto. I described my article at the time as “an attempt to explain how the different efforts going on in the industry around Web services, grid, SOA management, virtualization, utility computing, <insert your favorite buzzword>, fit together to provide organizations with the flexibility and efficiency they need from their IT in order to thrive.”

It ends with “while it would be easier to develop an end-to-end model specific to one company’s offering, standardization allows the integration of the management capabilities of all the components that compose enterprise services. We must keep the pressure on vendors to deliver modular and composable specifications (for format, function, and protocol) that expose management capabilities of infrastructure services, applications, and business processes in such a way that these capabilities can be composed by the next generation of management applications.”

Sure it has a lot more emphasis on WS-* specs than is compatible with the current zeitgeist, and it uses the now-obsolete term “utility computing” rather than the nebulous alternative currently en vogue, but isn’t the main message there?

Just to be clear, I am not laying pretentious claims of prescience and vision (at least not in this entry). There are plenty of documents (e.g. from the Grid community) that make the same points in more eloquent terms and starting many years prior. It’s just fun to see this link from today’s scandal to the one from last week.

For old times’ sake, here is the content of the 2004 article:

From Web Services Management to Utility Computing
by William Vambenepe

Enterprise services are created by combining infrastructure services, applications, and business processes. To be able to adapt quickly to business changes, enterprise IT must evolve from management of individual resources to management of interrelated services. This will be achieved through the development of composable and modular standards that expose the management capabilities of the building blocks of enterprise services. The Web services platform is an enabler of this transformation: a Web services-based management infrastructure provides a channel that is appropriate for dynamic resource provisioning, allocation, and configuration – often called utility computing.

We can consider this management infrastructure as a four-layered architecture. Starting at the foundation layer, the work on the base Web services infrastructure is far from over. First, until WSDL 2.0 is widely deployed, designers have to compose around the deficiencies of WSDL 1.1, such as the lack of portType inheritance. Second, there is still no standard for referencing Web services. Finally, key specifications such as WSRF (Web Services Resource Framework) and WSN (Web Services Notification), without which people were left to reinvent Web services interfaces to access stateful resources, have only recently reached the standards community. These issues are being resolved and a set of building blocks for accessing resources through an SOA (service-oriented architecture) is shaping up. It is critical that these building blocks be modular and composable to allow incremental adoption and separation of concerns.

Moving from the foundation to the management protocol layer, the OASIS WSDM (Web Services Distributed Management) technical committee, through its MUWS (Management Using Web Services) specification, is the key articulation point between the base Web services architecture and utility computing. Both the IT management community and the Grid community rely on MUWS. It defines how to express and exercise manageability capabilities through Web services, putting in place a management channel that is more interoperable and accessible than ever before.

Next is the modeling layer. Information models need to be composed so that a service can be represented based on the services that it is assembled from, be they peer or infrastructure services. Since these will be described by different models, the management channel (MUWS) needs to be model-agnostic in order to support a model-centric architecture. For example, CIM (Common Information Model) is a model that focuses on concrete resources. The DMTF WS-CIM subgroup must now open CIM to the Web services platform by developing a standard way to expose CIM-modeled resources through MUWS. Other models provide representations for service security, service-level agreements (SLA), etc. Only by composing these models will, for example, an auction service SLA be adequately managed as it depends on a combination of the performance of the servers on which the service runs, the application server that hosts it, the other services (authentication, billing, etc.) that it makes use of, and the business process engine that controls the bidding. Once this model-centric architecture is in place, management actions can be policy-driven through explicit constraints.

Finally, at the top layer, the architecture includes a set of common services for utility computing. They are being defined collaboratively by DMTF (Utility Computing working group) and GGF (OGSA working group).

All the pieces are falling into place but much remains to be done to allow comprehensive management of enterprise services in a model-centric way through Web services standards. While it would be easier to develop an end-to-end model specific to one company’s offering, standardization allows the integration of the management capabilities of all the components that compose enterprise services. We must keep the pressure on vendors to deliver modular and composable specifications (for format, function, and protocol) that expose management capabilities of infrastructure services, applications, and business processes in such a way that these capabilities can be composed by the next generation of management applications. These applications will use this to synchronize business and IT and to capitalize on change.

Comments Off on Open Cloud Manifesto, circa 2004

Filed under Application Mgmt, Articles, Automation, Business Process, Cloud Computing, Everything, IT Systems Mgmt, Mgmt integration, Modeling, Specs, Standards, Utility computing, Virtualization

Manifesto hyperventilation

The Cloud Manifesto debacle of the last few days is another sign of the landgrab atmosphere in the Cloud interoperability/standardization space. It’s like the crowd at 6:00AM in front of an electronics store on the day of the Big Sale. This is not the first spark and it won’t be the last one.

Two aspects of this crisis (soon to be anecdote) are especially ironic.

First, as many have picked up, the irony of a Microsoft manager complaining about a document having to be signed “as is” is something to savor slowly. I know Microsoft is a large company and I can believe that Steven Martin may never personally have engaged in such practices. But our credulity gets stretched when in the same post he lauds the WS-* process as an example of “organic” industry collaboration. Pick any WS-* spec and try contacting the listed co-authors other than Microsoft and (usually but not always) IBM. Ask them how much input they had and how much they were able to change. The answer won’t always be zero, but there will be plenty of zeros.

And these WS-* documents were standard candidates: technical specifications that these “take it or leave it” authors would presumably eventually have to implement in their products. Which takes me to the second point of irony, one I haven’t seen mentioned:

This is a manifesto, folks! I am not familiar with all of these other manifestos (plus this one) but, at least for those I know, one of their defining characteristics is that they stand in opposition to other views. Some may seem non-controversial now, but at the time of their publication they were very much so. Otherwise they wouldn’t have been called “manifestos”. The very idea of a manifesto being criticized for not being inclusive enough makes my head spin.

If anything (based on this draft), what the “manifesto” can be criticized for is being too meek and consensual  to be worthy of that name. How many important manifestos give their readers a “whenever appropriate” escape clause in their guiding principles?

[Note: If you are in an investigative mood, start with this whois search and if the name of the registrant sounds familiar it may be because you’ve read this blog post before.]

[UPDATED 2009/3/30: It’s live.]

[UPDATED 2009/3/31: Mike DiPetrillo also thinks this is a bit too motherhood-and-applepie for being called a “manifesto”. Thomas Bittman’s translation of the manifesto is also good.]

1 Comment

Filed under Cloud Computing, Everything, Microsoft, People, Portability, Standards, Utility computing

OVF 1.0 and beyond

OVF 1.0 just got released as a DMTF standard. Here is the specification and its companion white paper. After a quick scan I didn’t see any major change from the submitted version, which is consistent with the content of the “preliminary standard” from last year.

The interesting question is what comes next, especially with regards to VMware’s vCloud. The VMware press release stated that “as one of the original authors of the Open Virtualization Format (OVF) standard now released from the Distributed Management Task Force (DMTF), VMware will build upon that work by submitting a draft of its VMware vCloud API to enable consistent mobility, provisioning, management, and service assurance of applications running in internal and external clouds” and Drue Reeves at the Burton Group commented on this (Drue, we’re still waiting for part II). I see no reason to believe that VMware is going to stop playing from the Microsoft playbook in DMTF as it appears to be quite successful so far (I’ll pat myself on the back for predicting over a year ago that “OVF might only be the beginning” for VMware at DMTF).

This results in what looks like a landgrab from DMTF in Cloud standards. Meanwhile, in Washington DC yesterday, the Strategies and Technologies for Cloud Computing Interoperability (SATCCI) workshop took place. At this point all I know about it is the report from Reuven Cohen that I just read (hopefully Stu, Krishna and other bloggers who participated will provide additional perspectives). From Reuven’s report, Winston Bumpus (Director of Standards Architecture at VMware and President of the DMTF) described OVF as “an ideal cloud migration and deployment package”. Which may be true but is a pretty recent repurposing (the spec and the white paper don’t even mention this application). And while the DMTF is going full speed ahead on this, Reuven reports that “Craig Lee, President of the Open Grid Forum suggested that we need to take more time to examine the overlap between various standards groups, mapping the opportunities for collaboration”. Sure thing. The old timers might remember that when the DMTF decided to run with Microsoft’s WS-Management, it wasn’t just OASIS (where WSDM was created) that eventually got hosed but also OGF (then called GGF) which relied on the WSRF/WSDM stack. At the time too there were discussions to identify and reconcile the overlap, for all the good they did (disclosure: I have some history there).

We’ve seen this in the WS-* game before. In the end, it’s not so much a matter of what the standards bodies do (and even less of what they say), it’s a matter of what the big players do and where they choose to take their marbles. To the extent that you can separate the two, which becomes tricky in the case of vendor-run bodies like WS-I and DMTF. As I have written before, “at the end, it comes down to what [you think] a standard should be”.

[UPDATED 2009/3/26: Stu has now written a report on the SATCCI meeting.]

5 Comments

Filed under Cloud Computing, Conference, DMTF, Everything, Grid, IT Systems Mgmt, OVF, Portability, Specs, Standards, Utility computing, Virtualization, VMware, WS-Management

Cloud computing: would you like flexibility with your simplicity?

The recent announcement of the Sun Cloud, and more specifically its API, is a good occasion to think about how much simplicity we really want in our datacenter automation mechanisms. The Sun API is very simple and its authors are proud of that fact. Indeed they should be proud of avoiding unneeded complexity. They have probably also kept out (at least so far) some needed complexity.

First, let’s focus on the important part:

It’s not REST that matters, it’s the rest

Most of the comments on the API focus on the fact that it’s RESTful. The authoritative source on this is Tim Bray’s description of the API, which he helped shape. But Tim is very down-to-earth about the reasons to use REST:

Why REST? · It’s a sensible question. The chief virtue of RESTful interfaces is massive scaling. But gimme a break, these are data-center management operations; a typical transaction frequency would be a single-digit number per week, with the single digit often being “0”, and it wouldn’t be surprising if a big multi-cluster staged-boot operation had a latency of minutes. The data-center controls are unlikely to be a bottleneck.

Why, then? Simply because we wanted a bits-on-the-wire interface. APIs, in the general case, suck; and are really hard to make portable. Bits-on-the-wire are ultimately flexible and interoperable. If you’re going to do bits-on-the-wire, Why not use HTTP? And if you’re going to use HTTP, use it right. That’s all.

The use of REST is not a fundamental characteristic of the API. In other words, if this API turns out to be useful I can rewrite it as a SOAP API and it would still be useful. Unless the SOAP API is made purposely complicated, it would only be marginally harder to use, not fundamentally less useful.

In fact, we may find out. If the rumor is confirmed and IBM decides to Tivolify (rather than kill) the Sun Cloud, the whole thing can be refactored as WS-RT/XML/XQuery (and maybe WS-ResourceCatalog) in five days, four of which would be spent capturing, sedating and restraining Tim Bray (and his “spec machete”) with the last one used for coding.

In the case of the Sun Cloud API, REST makes the API simpler in the same way that a keyless system makes a car easier to operate. You don’t have to fumble for the key, but you still need to know how to parallel park, change a tire and operate the stereo.

By using REST, the Sun team has kept away some arbitrary complexity (e.g. fine-grained PUT; instead Sun decides what are the two valid sets of input parameters to create a cluster). But that’s only a small percentage of the potential complexity of the system. Not to mention that most developers will use libraries rather than on-the-wire protocols so they won’t see any difference. Instead, the real deal is:

The model

By “the model” I mean both the resource model and the capabilities of the resources. For capabilities, I don’t care whether a virtual machine can be started via an HTTP GET request on a URL that ends with ?control=start, or via a SOAP message with the wsa:Action header set to http://iloveclouds.com/vm/start or via an RPC call to a Start(…) method. I just care that the model includes the capability to start a VM. And the list of states a VM can be in.
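
To make this concrete, here is a minimal sketch (hypothetical endpoints, identifiers and action URIs throughout, not any vendor’s actual API) of the same “start a VM” capability bound three different ways. The model-level question, namely whether a VM can be started and what states it can be in, is the same in all three:

# A toy illustration: three protocol bindings for one and the same capability.
# Everything below (URLs, action URI, method names) is made up for the example.
import urllib.request
import xmlrpc.client

VM_ID = "vm-42"  # hypothetical VM identifier

def start_vm_rest(base_url):
    """REST-style binding: POST to a control URL."""
    req = urllib.request.Request(f"{base_url}/vms/{VM_ID}?control=start",
                                 data=b"", method="POST")
    urllib.request.urlopen(req)

def start_vm_soap(endpoint):
    """SOAP-style binding: same capability, expressed through a wsa:Action header."""
    envelope = (
        '<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" '
        'xmlns:wsa="http://www.w3.org/2005/08/addressing">'
        "<s:Header><wsa:Action>http://iloveclouds.com/vm/start</wsa:Action></s:Header>"
        f"<s:Body><StartVm><VmId>{VM_ID}</VmId></StartVm></s:Body>"
        "</s:Envelope>"
    )
    req = urllib.request.Request(endpoint, data=envelope.encode(),
                                 headers={"Content-Type": "application/soap+xml"},
                                 method="POST")
    urllib.request.urlopen(req)

def start_vm_rpc(endpoint):
    """RPC-style binding: same capability again, as a remote Start(...) call."""
    proxy = xmlrpc.client.ServerProxy(endpoint)
    proxy.Start(VM_ID)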

Look at a datacenter today. Make an inventory of all the networking equipment, storage, servers, hypervisors, operating systems and infrastructure services that it contains. Consider all the configuration settings of all these resources (as they would be represented in a complete, authoritative and consistent CMDB, that most elusive creature). Add to it all the controls and APIs they expose. That’s a lot of data, even if you don’t consider the applications layer. That’s a few orders of magnitude larger than what the model in the Sun Cloud API can describe. That gap (between our CMDB model and the Sun Cloud model) is what we should look at and analyze. Why are they so far apart? How big is the ideal datacenter automation and virtualization model?

Among other things, these hundreds of configuration settings in your current datacenter are used to optimize deployments. No-one would miss the pain of dealing with the optimizations if they went away, but we would miss the performance benefits they bring. So what replaces them if the model is too simple to support any tweak? Is the infrastructure behind the API auto-optimized, based on actual application patterns? Now that would be real progress towards simplicity and may allow us to rely on an API as simple as the Sun API. But the industry has been trying to do this with little success for a long time. I expect incremental, not radical, progress on this. Alternatively, does Cloud Computing change the economics to the point where performance optimizations through configurations are no longer cost-efficient, where scaling out is the answer? Hard to make this a general statement, considering how difficult it remains for many applications to scale out. And this sounds very SUV-like in these footprint-aware times (we see how well the “stretch the hood and add two cylinders to the engine” approach worked for Detroit).

Sun might very well have this covered under the hood. But I don’t know that I want to assume that they have an auto-optimizing system just because they produced an API that would benefit from having it underneath.

Not to mention that not all configuration tweaks have to do with performance optimization. Some of them are driven by licensing, organizational, risk and compliance considerations. If auto-detecting an application performance profile is hard, try auto-detecting its regulatory requirements.

Complexity with a purpose

The right place to be, between the “omniscient CMDB model” and the “Sun Cloud model” is somewhere in the middle, with a couple of incrementally complex layers. Of course they are so far apart that saying “somewhere in the middle” is a cop-out. The current level of complexity is very hard to manage by humans (assisted by processes and tools, e.g. ITIL) and impossible to really automate. A lot of the complexity and variability is arbitrary rather than flexibility-inducing. We need to reduce this (all-out standardization is one way, stack integration is another). But the simplicity of the model in the Sun Cloud API is too extreme. Look at Amazon EC2. Everyone lauds the simplicity of the APIs and everyone, in the same breath, asks for more options (different instance types, availability zones, reserved instances…). Amazon (and Sun too, I assume) is taking the eminently rational approach of starting from simple and adding complexity (sorry, flexibility) as needed. That’s great. Just don’t get too enamored with the initial simplicity.

[UPDATED 2009/3/20: James Governor lauds the simplicity of Amazon’s cloud offering. If I understand him correctly, he sees simplicity as coming not just from “few options” but also from backward compatibility with current app infrastructure. That second part is what William Louth criticizes in his comment below. At the very least I like to keep the two separated: “how intrinsically simple is it” and “how backward compatible is it” even though both can be seen as providing the benefit of simplicity.]

7 Comments

Filed under Automation, Cloud Computing, Everything, IT Systems Mgmt, Modeling, REST, Specs, Tech, Utility computing, Virtualization

It feels like an AON ago

Here is yet another reminder of the short attention span in our industry: in this week of all-Cisco-all-the-time coverage and commentary, induced by the Unified Computing announcement, not one article or blog posting mentioned AON (Application Oriented Networking). Remember AON, introduced by Cisco in 2005? If not you’re not alone. At least according to Technorati and Google News who don’t find a single mention of it in the Unified Computing coverage (until Technorati re-indexes this blog, at which point this entry will ironically make a liar of itself…).

If Cisco is not going to tell us how AON relates to Unified Computing it would be nice if some of the trade publications and analysts who covered AON at the time made an effort to update those of us who can’t remember the neighbor’s name but never forget an acronym. Wasn’t AON Cisco’s first attempt to move from the network layer to the application layer, which is what Unified Computing is also about? Is this the second step? A reset? What was learned from AON?

At least there are parts of Unified Computing I understand (Cisco selling blades. Check. Partnership with BMC for management software. Check. etc…). That’s more than I could say at the time for AON (even after moderating a panel at the IEEE ICWS 2005 conference in which a Cisco manager described it).

A search on the Cisco site seems to indicate that AON is indeed available for purchase. It looks like a DataPower-like XML network appliance (message security, routing and monitoring). If they had described it like that at the time I am fairly sure I would have understood. Especially since such appliances already existed. Let’s see if Unified Computing has more success as a bold vision for the programmable datacenter or if it too ends up as a lonely blade SKU in Cisco’s price sheet.

[UPDATED 2009/3/18: At least one analyst made the link. Congratulations to Eric Siegel from the Burton Group. But his linkage was too subtle for Technorati or Google to pick it up: he didn’t mention AON in his blog entry about Unified Computing. That post, titled “Cisco the Computer Company, Act IV” refers to “Cisco the Computer Company, Act III”, which refers to “Cisco the Computer Company, Act II” which refers to, you’ve guessed it, “Cisco the Computer Company” which covers AON. Bingo. Even though he doesn’t directly tackle the “how does Unified Computing relate to AON” question, Eric still gets the prize for follow-through. And for prescience. More Burton Group coverage of Unified Computing here and here. This is from the DCS (“Data Center Strategies”) side of the house, as opposed to the NTS (“Network and Telecom Strategies”) side where Eric lives. If nothing else Cisco is challenging one thing with this move: the organizational structure (DCS vs. NTS) of the Burton Group…]

Comments Off on It feels like an AON ago

Filed under Application Mgmt, Automation, Everything, Virtualization

Exploring “IT management in a changing IT world”

The tagline for this blog is “IT management in a changing IT world”. Of course nobody but their authors care about blog taglines. Still, in the unlikely event that I am asked to expand on the “changing IT world” part I would do it as follows.

The changes currently at work in the IT world can be organized along three axes:

  • IT infrastructure and management
  • Application development and delivery
  • Business and regulation

Each of these categories is ridiculously large. It’s only through the prism of the relationships between them that they provide any value. Think about three balls linked by coil springs.

If you give one of these balls a shake, you will start a hard-to-predict dance between them. This is similar to how the three domains above relate to one another. Changes in one (say a new focus on regulatory compliance in the “business” area, the emergence of virtualization technology in the “infrastructure” area or the appearance of Web 2.0 applications in the “application” area) start a complex movement involving all three. It takes a while to achieve a new equilibrium (and in practice it is never achieved since changes occur too often, adding stimulus to an already excited system). For a visual illustration, see this little YouTube video (but imagine that the three balls are arranged in a triangle rather than linearly and that every so often one of them gets pulled in a random direction).

This is not new of course. There have been changes in these three areas for as long as IT has existed (starting before it was called IT) and they have always driven changes in how IT is managed. To some extent they also have always influenced one another. The “new” part is that the connections are a lot tighter now, that the springs have a much higher force constant (the “k” in “F=-kx”). So here is my attempt at mapping today’s hot buzzwords on a map organized along these areas.
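
For the mechanically inclined, here is a toy version of that picture (my own simplification: three equal masses moving along one dimension, with a bit of friction). Shake one ball and watch how long the system takes to quiet down; crank up k and the disturbance propagates harder:

# Three coupled, lightly damped oscillators standing in for the three domains.
# Displace one of them and measure how long the whole system takes to settle.
k, damping, dt = 1.0, 0.05, 0.01
x = [1.0, 0.0, 0.0]   # the "shake": only the first ball is displaced
v = [0.0, 0.0, 0.0]

for step in range(1, 200_000):
    # each ball is pulled towards the other two by springs, and slowed by friction
    a = [sum(-k * (x[i] - x[j]) for j in range(3) if j != i) - damping * v[i]
         for i in range(3)]
    v = [v[i] + a[i] * dt for i in range(3)]   # semi-implicit Euler update
    x = [x[i] + v[i] * dt for i in range(3)]
    rest = sum(x) / 3
    if all(abs(vi) < 1e-3 for vi in v) and all(abs(xi - rest) < 1e-3 for xi in x):
        print(f"roughly settled after {step * dt:.0f} time units")
        break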

Before you ask: yes of course I have a very rigorous methodology, based on very precise quantitative data, to establish with certainty the exact x, y and z coordinates of each label. Buzzword topology is a precise science.

You may notice that the buzziest buzzword (at least currently), “Cloud”, does not appear on the map. It’s because it buzzes so much that it would be all over it, engulfing what currently appears as “virtualization”, “datacenter automation”, “IaaS”, “PaaS”, “SaaS” and “opex/capex”. There are two main parts in the “Cloud” buzzword: the “Technical Cloud” and the “Business Cloud”. The “Technical Cloud” is where we take virtualization and standardization (of machines, networks and application infrastructure) and turn that mind-boggling complexity into a manageable system that can be programmed to deliver applications (Cisco recently called it “Unified Computing”; HP, IBM and others have been trying to describe and brand it for a long time). Building on these technical capabilities comes the second part of “Cloud”, the “Business Cloud”. It is the ability to use infrastructure owned by a third party (presumably one able to leverage economies of scale) and all the possibilities this opens in the business realm. That’s what “Cloud” started as, back when it was known as “Utility Computing” and before it was applied to everything under the sun. A recent illustration of the relationship between the “Technical Cloud” and the “Business Cloud” is the introduction of vCloud by VMware (their vision includes using VMotion technology, a piece of the “Technical Cloud”, not just to move machines between neighboring hypervisors but between organizations, enabling the “Business Cloud”). Anyway, that’s why “Cloud” is not on the map. It is actually all over it.

The system displayed on the map is vibrating very intensely right now, and I don’t see this changing anytime soon. Just for fun, here are candidates for future boxes on the map:

  • In the “IT infrastructure and management” category, maybe one day we’ll get to real metadata-driven management integration across the stack (as opposed to the more limited “application modeling” area listed above), whether through RDF or not.
  • In the “application development and delivery” category, maybe Doug Purdy’s vision “to make everyone a programmer (even if they don’t know it)” will be realized, whether through Oslo or not.
  • In the “business and regulation” category, maybe one day corporations will actually start caring about the customer data they are entrusted with (but only if mishandling it finally costs them more than “sorry about that, here is a one-year credit monitoring subscription, now go away”).

In summary, the evolution of IT management is driven not only by changes in IT technology but also by changes in two other fields (“application development and delivery” and “business and regulation”) with which it is tightly connected. Both of these fields are also in a very dynamic state. And they also influence one another, resulting in a complex three-way dance. You can’t understand the trajectory and moves of one dancer without seeing the others.

That’s what I mean by “IT management in a changing IT world”. Thanks for asking.

[UPDATED 2009/6/25: For more on the “technical cloud” versus “business cloud”, go read Neil Ward-Dutton’s nice explanation. He actually breaks down the “business cloud” in two (separating the economic aspect from the strategic aspect).]

1 Comment

Filed under Application Mgmt, Automation, Big picture, BPM, BSM, Business, Cloud Computing, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Open source, Utility computing, Virtualization

Cloudman to the rescue?

GMail had a hiccup last night. Last month it had a more serious outage and so did AWS.

In the movie:
Superman: Easy, miss. I’ve got you.
Lois Lane: You’ve got me? Who’s got you?

In the IT world:
Cloud provider: Easy, CIO. I’ve got you.
CIO: You’ve got me? Who’s got you?

With apologies to Werner Vogels.

Comments Off on Cloudman to the rescue?

Filed under Amazon, Application Mgmt, Cloud Computing, Everything, Google, IT Systems Mgmt, Utility computing

WS-Discovery and WS-DeviceProfile public review

We are in the middle of the public review period for the three specifications coming out of the “Web Services Discovery and Web Services Devices Profile (WS-DD)” OASIS technical committee. The specifications are:

  • WS-Discovery (Web Services Dynamic Discovery)
  • DPWS (Devices Profile for Web Services)
  • SOAP-over-UDP

This trio has been swirling around for many years. Mostly in the USB/PnP/UPnP community (devices physically attached to Windows boxes) but they have popped up on the enterprise WS-* radar screen from time to time, for two reasons:

  • WS-DD as another specification (in addition to WS-Management and the later incarnations of WS-MeX) that uses WS-Transfer, and therefore as a (poor, in my view) justification for making it a stand-alone specification
  • WS-Discovery as potentially useful for datacenter management (for resource discovery, obviously)

Looking at the members list for the technical committee seems to confirm the suspicion that many members would be more interested in the potential datacenter-related applications (of WS-Discovery, presumably) than the USB gadgets use cases: CA, IBM, Novell, Progress, RedHat, Software AG, WSO2. Well, I guess Novell and RedHat could be in the game from the perspective of plugging devices into desktops running their version of Linux rather than from a datacenter perspective. CA and IBM could be in it from an Asset Management (for devices) perspective too. So who knows (if you had no tolerance for idle speculations you wouldn’t be reading blogs, would you?).

In any case, no Apple, Palm, RIM, Google (Android) or Sun (JavaFX). I guess handheld devices aren’t the devices targeted here. Rather it’s more printers and cameras. In which case you have to wonder why HP isn’t there but that’s another question (I’ll pretend not to know).

No point dwelling on this, especially since the member list is a very imperfect representation of the actual participation. As an alternative we can scan the mailing list archive to see who is active. Judging from the last two months (feb 2009, march 2009) it’s Microsoft, Microsoft and that company in Redmond. Microsoft, as we know, created these specifications from the start and they seem to still be the engine.

In the end it remains to be seen if WS-Discovery has any role to play in datacenter infrastructure discovery. That environment is clearly becoming more dynamic, but the most frequent changes will take place at the level of the guest VM (creation of new instances) and above (applications, services…) and all these creations will presumably be orchestrated by some controller so there is little need for a “hello world” resource-initiated discovery mechanism for them.

Which leaves the physical equipment. Do we need WS-Discovery to get from the point of a server being plugged in to the point where it has been provisioned? There are alternatives in this somewhat homogeneous environment. And UDP multicast only takes you so far in a datacenter (at least without extensions). So my guess is no. But who knows. The drafts are here for you to review if you think it may be relevant.

Comments Off on WS-Discovery and WS-DeviceProfile public review

Filed under Automation, Everything, IT Systems Mgmt, Manageability, Microsoft, Specs, Standards

Managing the stack from top to bottom, including virtualization

The press release for the release of Oracle Enterprise Manager 10gR5 came out yesterday, but that’s not all: the Oracle VM Management Pack for Enterprise Manager was also announced yesterday. What this illustrates is that, in addition to the commonly-cited “one neck to choke” benefit of getting the entire stack from one vendor (from the hypervisor to the application, including the OS, DB and MW), there is also the benefit of getting a unified management environment for the whole stack. Here is how my friend and Oracle colleague Adam Hawley (director of product management for Oracle VM and previously with Enterprise Manager) describes it in more detail:

So what’s so big about it and why does this give us a clear advantage over others?

  • No other company can offer management of the virtualization AND the workload that runs inside the virtualization at this depth and scale: not anyone. We now offer a single management product…Enterprise Manager Grid Control…that manages your entire data center from top-to-bottom: from the packaged application layer (Siebel, PeopleSoft, Beehive, etc.) through all the middleware and database layers to the OS and virtualization itself. And we do that for both the physical and virtual worlds together, seamlessly.

    • Other virtualization vendors either ONLY do virtualization management or to the extent they do anything else, it is typically one other category in the stack…virtualization plus the OS or virtualization plus some very specific applications (but no OS…), etc.
    • No one else can provide the entire picture the way we can with Oracle VM
  • So what does that mean for users?
    • It means Oracle VM is virtualization with a difference:
      • It is virtualization that makes application workloads faster, easier, and less error prone to deploy with Oracle VM Templates as pre-built, pre-configured VMs containing complete product solutions maintained in a central software library for easy re-use:  download from Oracle, import the VMs, use the product.  Simple.
      • It is virtualization that makes workloads easier to configure and manage:  Automate deployment of the VMs, installation of the management agent, and enable powerful, in-depth monitoring of guests and Oracle VM Servers including configuration management…
        • Set-up configuration policies to track how your VMs and servers are configured and to alert you if that configuration changes or “drifts” over time
        • What about if you have one VM running perfectly and another supposedly identical one not doing as well?  Run a configuration compare to check for differences not only in packages or application versions in the VM, but also down to OS parameter settings and other key items to rapidly identify differences and address them from the same console
      • It is virtualization that makes workloads easier to troubleshoot and support:

        • Not only is Oracle VM support very affordable compared to anyone out there, management of Oracle VM servers in Enterprise Manager makes it so much easier to rapidly track down issues across the layers of your data center from one UI. With other vendors, to troubleshoot an issue with applications or the database, you have to trace it down through your environment, possibly to the virtual machine, but then how do you get all the info about the VM itself, like its parameters and which physical server it is hosted on? You have to jump to another tool entirely… whatever stand-alone tools you are using to manage the virtualization layer… to get the information and then go back-and-forth: tedious and time consuming. With Enterprise Manager, it is all there in one UI. Need to tweak the number of virtual CPUs based on your database performance analysis report indicating a CPU bottleneck? Navigate from the performance page for the database to the home page of that virtual machine and adjust the configuration in the same UI. Done. Well, OK, you may have to restart the application for the new vCPU setting to take effect but you can still do that all within Enterprise Manager, saving time and minimizing risks.
        • This can dramatically reduce the time to troubleshoot as well as reduce the chances of human error navigating between multiple products with different structures and concepts to help you maximize your up-time.

So this is where it starts to get interesting. This is where the game starts to really be about not just the virtualization itself, but how it makes the rest of your overall data center better and more efficient.  The Oracle Enterprise Manager Grid Control Oracle VM Management Pack is a huge step forward for users.

[UPDATED 2009/3/21: An Oracle Virtualization blog has recently been created. So now you can hear directly from Adam and his colleagues.]

1 Comment

Filed under Application Mgmt, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Oracle, OVM, Virtualization

CMDBf is a lot more and a lot less than you think

The DMTF CMDBf working group has recently published an updated draft of its specification. The final version should follow soon and I don’t expect major changes so now is not a bad time to start thinking about what this baby can do.

Since CMDBf stands for “configuration management database federation”, you might think the obvious answer to the “what can it do” question is “build a federation of configuration management databases”. Except it’s not. Despite its name, CMDBf provides little support for federation unless you take a very loose definition of the term. The specification gives you a query language and a very simple registration interface, with a sprinkle of metadata to improve interoperability. The query language lets you talk to a CMDB to retrieve information on configuration items (CIs) that it knows about. The registration interface lets you keep a CMDB informed of changes to CIs that it may care about. If you want to build on top of this a real federation, one that scales to the type of environment that CMDBs are used for today, you have to go further than what the specification provides. What CMDBf does give you is some amount of integration between CMDBs (at the protocol level at least, not at the model level). It may not sound like much but it is a lot of progress on the current situation and the right incremental step, whether you are aiming for true federation as the end goal or not.

That’s the “a lot less than you think” part. So, what’s the “a lot more than you think” part? Good stuff all around:

CMDBf provides a metamodel that is well-suited for complex IT systems and it provides an elegant graph-oriented query language on top of it. The most convenient representation for an IT system is neither “one big XML document” nor “a sea of nodes and edges”. CMDBf gives you a middle ground: a graph model with XML leaf nodes. So you can precisely model the relationships between your IT elements using explicit relationships (with their own records), but you can also attach a well-understood piece of XML to an item as a record without having to break that XML into a bunch of tiny relationships.
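
Here is a rough sketch of that middle ground (a toy Python encoding of my own, not the CMDBf XML serialization): the graph structure is explicit, relationships are first-class, and the leaf data stays as plain XML records that a query can test with an XPath-like expression.

# A toy rendering of the CMDBf-style metamodel: items and relationships form a
# graph, and each item carries opaque XML records as its leaf data.
import xml.etree.ElementTree as ET

class Item:
    def __init__(self, item_id, records):
        self.id = item_id
        self.records = [ET.fromstring(r) for r in records]  # XML leaf nodes

class Relationship:
    def __init__(self, rel_type, source, target):
        self.type, self.source, self.target = rel_type, source, target

# two items, one explicit relationship between them
app = Item("app-1", ['<app xmlns="urn:example"><name>billing</name></app>'])
host = Item("host-1", ['<server xmlns="urn:example"><os>Linux</os></server>'])
graph = [Relationship("runs-on", app, host)]

def related(source, rel_type, record_path):
    """Graph-oriented query: follow typed relationships from an item, and apply
    an XPath-style test only to the leaf records of the target items."""
    return [r.target for r in graph
            if r.source is source and r.type == rel_type
            and any(rec.find(record_path) is not None for rec in r.target.records)]

print([i.id for i in related(app, "runs-on", "{urn:example}os")])  # ['host-1']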

I am pretty sure there are other domains, beyond IT systems, for which this would be useful. It will be interesting to see if the CMDBf specification gets considered outside of its intended scope. But these domains are more likely to end up using RDF/OWL/SPARQL instead. Not everyone has made the leap from XML as a tool to XML as a religion, which made CMDBf necessary for us. But let’s not veer into another rant.

Let’s go back instead to describing how useful CDMBf can be to IT systems management, independently of any “federation” objective. Let me put it this way: if one was to create from scratch a configuration store for IT systems they should strongly consider the CMDBf conceptual model as the base metamodel. And something along the lines of the CMDBf Query (though not necessarily through its XML serialization) as the native query language for it. Most CMDBf implementers of course are not in this situation. Rather than writing the store from scratch they will create a CMDBf wrapper/interface on their current CMDB. And that’s fine too. CMDBf will work well as an interoperability protocol. Putting aside my gripes about XPath overuse, CMDBf strikes a reasonable balance that makes it implementable on top of any back-end technology (relational, XML, RDF, in-memory objects, bags of name-value pairs…). And the query patterns it supports map well to CMDB-to-CMDB integration use cases. But it is underselling it, in my view, to restrict it to this over-the-wire interoperability scenario. CMDBf also provides a very useful foundation for local access to the CMDB. CMDBf graph queries can support powerful visualization of the content of the CMDB. They can support the definition of configuration rules. They can support in-depth inspection of relationships (e.g. fault tree).

And that may just be the beginning. It could take three directions after v1:

The first one, as always for a standard, is that it is ignored and becomes irrelevant. I have to reluctantly list this one first, because it is statistically the most likely for a new standard. Especially one that is not a ratification of an existing de facto standard. And one that threatens an important control point for vendors. A slight variation on this scenario is for CMDBf to succeed from a marketing perspective, as a checkmark that most vendors tick, but not as a true technology. This is the “smokescreen” scenario from Mr. Skeptic. One scenario that worries me is that CMDBf could fail because of the poor models of the CMDBs that implement it. If your IT model is not granular enough or if it matches the UI of your application more than the semantics of the IT components, then CMDBf will expose these shortcomings and probably be blamed for them (with bad models, “shoot the messenger” becomes “shoot the protocol”).

The second possible direction is that CMDBf provides enough value in integrating CMDBs that people want more and challenge the group to deliver on the “f” part, federation. That could take the form of a combination of:

  • better integration with other protocols (mostly from the WS-Management family, like WS-Enumeration and WS-Eventing),
  • reconciliation support (here are ways to address it),
  • some model transformations or canonical models,
  • some optimizations in the query mechanism for distributed queries (e.g. data partition rules).

The third possible direction (not exclusive) is for CMDBf to become the basis for a standard rule language for IT models. Yeah, another one (remember SML?). SPIN and SML show us how a generic query language can be used to support configuration rules. I very much like SPIN but it requires adopting RDF as a metamodel, which is a hard sell in XML-land. SML suffers technically from being too reliant on an inappropriate validation tool (XSD) and treating relationships as a second thought rather than an integral part of the model. Which is fine in many areas (EMF does it too), but not, in my view, when modeling IT systems.

If we are not going to use RDF/SPIN then let’s copy them. We can use the CMDBf metamodel (graph-based) where SPIN uses RDF. We can use the CMDBf query language (graph-oriented) where SPIN uses SPARQL. Since CMDBf queries use XPath, we see some commonalities with SML (which uses XPath through Schematron). But in CMDBf XPath is scoped to the leaf nodes of the graph, not the entire model as it is in SML. In other words, SML adds relationship traversal to XPath, while CMDBf adds XPath to its relationship-aware queries. It’s a matter of who’s on top. It sounds academic but it isn’t.
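
A purely illustrative contrast (made-up pseudo-syntax on both sides, neither actual Schematron/SML nor the actual CMDBf query format), for a rule like “every app must run on a Linux host”:

# SML-ish: one XPath-flavored expression over the entire model, with a
# deref-like extension function doing the relationship traversal inside it.
sml_ish_rule = "every $app in /model/app satisfies deref($app/runsOn)/os = 'Linux'"

# CMDBf-ish: the query structure does the traversal (item and relationship
# templates), and XPath only shows up as a constraint on a leaf record.
cmdbf_ish_query = {
    "items": {
        "app":  {},                                    # any app item
        "host": {"record_constraint": "os = 'Linux'"}, # XPath scoped to the host's records
    },
    "relationships": [
        {"type": "runs-on", "source": "app", "target": "host"},
    ],
}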

Does the industry really want standardized, re-usable configuration rules? SML/CML seem to say no. The push towards Cloud interop, on the other hand, begs for it. At least if you believe in programming your environment in a way that is partially declarative rather than entirely procedural.

[UPDATED 2009/3/5: Rob England (a.k.a. Mr. Skeptic as I refer to him above) provides a geek-to-English translation for this post. Neat!]

2 Comments

Filed under CMDB, CMDB Federation, CMDBf, DMTF, Everything, Graph query, IT Systems Mgmt, Mgmt integration, Modeling, RDF, SML, Specs, Standards, Tech, XPath

Analyzing the DMTF incubator process

Depending on how you choose to look at it, either the DMTF has streamlined the process of defining standards or it has created a rubberstamping machine. I am referring to the “DMTF Standards Incubation Process”. It is recent, but not brand new (the DSP that defines it is dated April 6, 2007). I had heard about it but never really looked into it. Until this weekend, when I finally got motivated to investigate a bit. AFAIK this process has not yet produced any specification.

As I understand it, the goal of this incubation process is to allow a group of like-minded companies to get together in the DMTF and produce an “informational specification”, which is typically a refinement of a vendor submission. The informational specification would then go through a normal DMTF working group but often in an expedited fashion, only allowing limited changes. That’s not a guaranteed outcome, but it seems to be the “normal” case as envisioned in this process.

This overview should make it clear why this can be seen as a rubberstamping machine. Here are a few key points (the quotes come from the process description):

  • “Standards Incubators are often formed in conjunction with an initial baseline contribution by the founding members with the expectation that the group will serve to evolve and finalize that contribution” and later it says that leadership members should have a “commitment to maintaining alignment with the input submission”.
  • Only leadership-level DMTF members get to be on the review board (the part of the incubator that makes decisions). They have to fairly consider comments from the other companies, whatever “fairly” means.
  • Being a DMTF leadership-level company is necessary, but not sufficient, to be on the review board of an incubator. If you are not there when the incubator is created then you have to be approved by the current leadership members to join. It is unclear to me whether any DMTF leadership-level company can join the “review board” of an incubator at the start or whether those who propose the incubators get to choose who they let in.

There is nothing sneaky here: the incubator process is pretty upfront about being designed to allow a vendor-provided specification to be considered/reviewed/improved in a friendly environment, in which opponents are kept away. The question is what happens next. There are four possible dispositions once an incubator finishes its work and has produced an informational specification.

The first two, “bootstrap / expedited delivery” and “finalization”, are pretty close in practice. A working group is created that is restricted to not making any significant change to the “informational specification”. The “bootstrap” approach only allows small corrections, while “finalization” also allows some additions (but no change to what is already there). In other words, the working group is mandated to pretty much rubberstamp the informational specification: with these two dispositions, there is no opportunity (in the incubator or in the ratification group) for people to suggest technical approaches that are significantly different from those in the initial submission.

The place where technical alternatives can be considered is if a competing incubator is created. At this point, the DMTF board may decide that a working group should be created to reconcile the two. Even then, the board may pick a winner (in which case the reconciliation amounts to adding to the selected specification some features that are only present in the other specification, in effect protecting existing implementations of the winner). And if this is the path taken, the process makes it clear that this should be driven not by technical merits but by “adoption and momentum”. Which implies that the companies that ship the most products get to pick, since they can single-handedly create “adoption and momentum”.

And finally, the last possible disposition is “termination”, in which no further work on the informational specification takes place in the DMTF. But the barrier is pretty high for this direction to be taken: the specification has to have “little adoption or industry interest”. It seems reasonable to interpret this to mean that it would only apply if the initial proponents (who created the incubator) have lost interest themselves; otherwise they alone would provide sufficient “industry interest” for the termination to not take place and force another outcome (which can only be one of the first two, the rubberstamping options, if there is no competing incubation group). And even if the work is indeed terminated, the specification remains available indefinitely as a “DMTF informational specification”, which people can (and will, if it serves their purpose) simplify to “a DMTF specification”. The difference between this and a DMTF standard will be lost on 99% of IT writers and IT buyers. I submit as evidence all the confusion about the status of a “W3C note” (the confusion prompted W3C to eventually rename this to “submission”). It will be even less clear with DMTF “informational specifications” because, unlike W3C notes, they did go through some modifications in the DMTF and they are indeed a product of the DMTF. I would not be surprised if some “DMTF informational specifications” stayed just that and lived a happy life as a pseudo-standard. One thing that remains unclear is whether such a terminated informational specification can be taken over by another standards organization.

Bottom line, if you don’t like a submission to an incubator group you’d better put together an alternative (quickly) to stay in the game. And if your position is not “this is a bad technical approach” but rather “this is something we should do in a more open and deliberative way, or maybe later” then you only get one chance to make your case: you’d better make a convincing argument to the board to not allow the incubator to be created, because after that there is little chance to stop the train. And how many standards organization boards do you know that say “no” to submissions from large vendors?

Having said all this, is this really a bad thing? Let’s look at it from the “glass half full” side.

First let’s realize that this happens anyway. Companies get together (often around an initial document created by a single leader) to create a specification and then look for a standards organization to ratify it with as few changes as possible. Other than CIM, it seems that all recent DMTF efforts started out this way (WS-Management, CMDBf, OVF). This is how Microsoft (sometimes with IBM) built the whole WS-* stack. They even had a name for it (the “workshop process”) to try to make it sound more open than it was. I’ve been on the inside (SML, CMDBf, WSRF/WSRT) and outside (WS-Management and other WS-* specifications) of it and it’s a pain whether you’re inside or out. It’s very opaque. Efforts may die and nobody ever knows (for example, does anyone know what’s up with CML?). Even when those inside want to get feedback and share their work they have to deal with a tangle of legal agreements that make it unnecessarily hard for everyone. In addition, all the work and discussions that go into the submitted specification usually get lost as the work transitions to a standards body (e.g. no email archive and unclear IP/confidentiality rules for re-using them). And the fact that these efforts are private does not prevent companies from demanding guarantees that their submissions won’t be changed too much. For example, the WS-Management working group had an explicit goal in its charter to maintain compatibility with the submission, and the same debate played out over and over again in drafting the charters of several OASIS and W3C groups. Companies play one standards organization against another if necessary to get this guarantee.

Anything that can take us away from this mess merits consideration even if it is not a perfect alternative (there isn’t any). The DMTF incubator process doesn’t seem to relax the control of the sponsor companies, but it provides some level of transparency (at least for DMTF members) and, presumably, some continuity between the incubation and ratification phases.

Standards organizations constantly get blamed for either being too slow/procedural (e.g. HTML at W3C) or being rubberstamping machines (e.g. OOXML at ISO). Or both at the same time (most WS-* work). Most steps an organization can take to address one criticism makes the other worse. This “incubator” process is an example.

Everyone complains about “design by committee” and how inconsistent and bloated specifications become when everyone is listened to and made to feel included. The specifications end up with too many options (a killer of interoperability) and no guiding vision. A more constrained set of authors usually produce a simpler and more consistent specification. Has anyone ever seen a standard that is shorter than the submission that started it?

Not to mention the fact that working group chairs are often in an uncomfortable position, forced to choose between, on one hand, accusations of being dictatorial and, on the other hand, seeing their working group drift from one rathole to the next, with no end in sight. The standards world has its fair share of obstructionists and pontificators. Some do it on purpose (they have been mandated by their employer to prevent progress in a group), most do it just because of their personality (and the fact that their employer has no real interest in what the person does in this group, as long as his/her presence allows the employer to claim to be part of the game). Forcing people who want an alternative approach to actually put together a proposal is a way to keep pontificators at bay. Unfortunately, it also shuts out qualified people who know a domain well and want to share their knowledge but don’t have the time (or employer sponsorship) to put together an entire alternative specification around their proposal.

In the end, it comes down to what a standard should be. If you think a standard should capture the knowledge of most experts in the industry and give an equal voice to all organizations, then this is a step in the wrong direction. If, on the other hand, your position is that the big guys will effectively set standards anyway so it might as well be done in a way that is fast, relatively transparent and consistent with their implementations, then you’ll applaud this initiative.

In creating this process, the DMTF made a clear (though grammatically challenged) statement on this topic: “adoption and momentum may outweigh technical issue regarding success”.

2 Comments

Filed under DMTF, Everything, Mgmt integration, Specs, Standards

Oracle Enterprise Manager 10g R5

Oracle Enterprise Manager 10g R5 (a.k.a. 10.2.0.5) is coming out. It will be announced by our SVP on Tuesday (subscribe to the Webcast). InfoWorld got a preview. From the application management perspective, it includes new management capabilities for middleware products that came from BEA (WL and OSB) and for several Oracle applications (Siebel, EBS, PeopleSoft, Beehive, BRM). And lots of goodies in other areas (virtualization, database, automation…).

[UPDATED 2009/3/3: bits! bits! bits!]

[UPDATED 2009/3/4 (and later): My colleague Chung Wu provides more details, followed by drill-downs into specific areas. I’ll add links here as they become available:

Thanks Chung!]

Comments Off on Oracle Enterprise Manager 10g R5

Filed under Application Mgmt, Everything, IT Systems Mgmt, Oracle

Towards making Cloud services more consumable by enterprises

If you are a hard core network/system management person who has been very suspicious of all the ITIL/ITSM bullshit from the boss, and even more suspicious of the “Cloud” nonsense that occupies the interns while they should be monitoring the event console, then I have bad news for you: they are mating. Not the interns (as far as I know), I am talking about ITSM and the Cloud.

If, on the other hand, you are an ITSM and Cloud enthusiast who sees himself/herself leveraging all these nifty ITSM/BSM tools and shiny Cloud services to become the ultimate CIO, delivering unprecedented business value, compliance and IT efficiency from your iPhone at the beach, then you’ll see this marriage as good news, a sign that your move to Hawaii is approaching.

I am referring to the announcement by Rodrigo Flores that his company, newScale, has released a new product, newScale FrontOffice for Virtual Data Centers, to incorporate Cloud-based services in their IT Service Catalog.

This is not sexy but it’s the kind of support that some classes of enterprises will need in order to really make use of Cloud services. Eventually, Cloud providers are going to have to move their focus away from cool technology and developer evangelism towards making their services easily consumable by corporations. Otherwise they’ll be like an office supply store that doesn’t take American Express.

While the direction is very interesting I can’t comment on how much value this new product actually provides, because the company seems to engage in anti-marketing activities. If you want to dig a bit deeper than the press release, you get redirected to this page which requires your info in order to download the “complimentary information brief”. The confirmation page promises an email “containing the information you requested within the next 30 minutes”. I thought that the info I requested was the brief. What I got instead was an email asking me to call them. I don’t know if it’s my unpronounceable last name or the oracle.com in my email address that scares them (I’d guess the latter) but that seems like a lot of precautions for a “complimentary information brief”. Unless this is an attempt to grab the “Cloud” buzzword with little meat to back it up (not that anyone would do this, of course). Hopefully they’ll make more information publicly available when they get around to it.

[UPDATED 2009/2/25: Via Coté, I just saw what I consider supporting evidence, from Gartner, that once we’re out of the early adopter phase, Cloud firms will need to focus less on pleasing developers and more on pleasing IT people: “But my observation from client interactions is that cloud adoption in established, larger organizations (my typical client is $100m+ in revenue) is, and will be, driven by Operations, and not by Development.”]

[UPDATED 2009/3/3: I got the PDF brief, thanks newScale (Ken and Mark). Many of the benefits it describes assume that there is a pretty robust automation/provisioning infrastructure to back it up, in addition to the Catalog itself. E.g. the Catalog alone will not allow you to “shorten the provisioning cycle time to minutes instead of months”. The brief lists adapter kits to VMWare/EC2 and more internal-minded tools (HP and BMC, presumably through their Opsware and BladeLogic acquisitions respectively). So on the “public cloud” side it’s EC2 for now, not surprisingly. Integration with many of the Cloud tools (like RightScale) could be tricky since these tools bundle a catalog with the automation engine. If we ever do get a useful ontology of Cloud services this catalog would be a natural user for it, when it expands to other services beyond EC2 and tries to help you compare them. I guess newScale wouldn’t appreciate it if I provided a direct link to the PDF, so go request it to see for yourself.]

[UPDATED 2009/3/23: Speaking of managing your IT systems from your cell phone at the beach: VMWare vCenter Mobile Access.]

3 Comments

Filed under Cloud Computing, Everything, Governance, IT Systems Mgmt, ITIL, Mgmt integration, Utility computing

The datacenter as a programmable entity

This is an exciting time for those who want to shrink the computer. They are having a field day playing with devices powered by Android, the iPhone’s Cocoa, Palm’s new WebOS, Windows Mobile, JavaFX (maybe one day) and, to a lesser extent, the Blackberry.

But times are good too for those who want to go the other way and program larger things rather than smaller ones. If you are interested in thinking about the datacenter as a programmable entity, you are in luck: for those long plane trips when you run out of battery, bring a printout of the proceedings of the research meeting organized last year in Cambridge by Microsoft and HP Labs, titled “The Rise and Rise of the Declarative Datacentre”. When you’re back on-line, go check the presentations on the site.

And if you liked Paul Anderson’s “Programming the Data Centre” presentation at the Cambridge meeting, you can also read his “Programming the Virtual Infrastructure” slides from LISA 08. More LISA 08 presentations here.

I got the link to Paul Anderson’s second presentation (and maybe also the first one, some time ago) from Steve Loughran, who also adds a few comments, starting with the debate between the declarative and procedural approaches. This question has plenty of down-the-road implications. There is a lot to like about the declarative approach in terms of composition, manageability and more generally as a framework to manage complexity via encapsulation.

A simple analogy for this debate is to think about driving directions. The declarative approach is for me to give you a map with a circle on it showing where my house is and let you find your way. It’s more work for you but it’s also more resilient. The procedural approach is for me to give you a set of turn-by-turn directions, based on where you are coming from. If you miss one turn or if one road happens to be blocked at the time, then you’re in trouble.

That being said, there are enough powerful and useful PowerShell or Puppet scripts out there to give you pause before discarding procedural approaches. While the declarative (aka “desired state”, “policy-driven” and sometimes “model-based”) approach looks a lot more elegant, at this point in time the real work usually gets done via scripts, deployment procedures or the like.
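
For what it’s worth, here is a toy illustration of the difference (plain Python, with hypothetical resource names and a made-up “actions” interface; real tools like Puppet or PowerShell obviously do much more than this):

# Procedural: an ordered list of steps. If a step already happened, or the
# starting point isn't what the script assumed, you find out the hard way.
def procedural_deploy(run):
    run("apt-get install -y apache2")
    run("cp site.conf /etc/apache2/sites-enabled/")
    run("service apache2 restart")

# Declarative: a desired state plus a reconciliation loop. The engine compares
# what is with what should be and only acts on the difference, so re-running it
# (or recovering from a partial failure) is a non-event.
desired = {
    "package:apache2": {"ensure": "installed"},
    "file:/etc/apache2/sites-enabled/site.conf": {"source": "site.conf"},
    "service:apache2": {"ensure": "running"},
}

def reconcile(current_state, actions):
    for resource, wanted in desired.items():
        if current_state.get(resource) != wanted:
            actions.fix(resource, wanted)  # converge only what drifted

The second version is also where the composition and manageability benefits mentioned above come from: the desired state is data that tools can reason about, not a sequence of side effects.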

In addition to academia, the competition between these approaches is playing out right now between all the companies and products that want to help you automate and manage your cloud deployments (public and/or private): for example, RightScale scripts (custom scripts and RightScripts, see here and here) versus the more declarative ECML/EDML documents from Elastra. Or the very declarative approach taken by SmartFrog.

5 Comments

Filed under Automation, Cloud Computing, Conference, Desired State, Everything, Grid, Implementation, Research, Tech, Utility computing

Is notification wrapping getting a bum rap?

Looks like the question of whether to wrap SOAP-based notifications is back. Like Gil I prefer to stay away from wrapping notifications but my reasons are somewhat different.

I am not convinced by WSDL-centric arguments one way or the other. Proponents of wrapping say that it gives them a WSDL they can use for creating a generic listener, while opponents say that avoiding wrapping gives them a WSDL that generates useful code (payload-aware). I am not a big fan of WSDL-based code generation, but even if you are going to do it nobody says that you have to do it based on the WSDL document that ships with the specification. You’re free to modify the WSDL any way you want before feeding it to your code generation tool, as long as the result correctly describes the messages. One can write an infinity of WSDL documents for a given set of messages, some more precise and others more high-level (in which you quickly hit an xs:any). So, if the spec gives you a WSDL where the payload is xs:any and you know that in your case the payload is going to be sec:intrusionDetected, feel free to insert that element in the WSDL before running wsdl2java or whatever.
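
If it helps, here is roughly what that pre-processing could look like with lxml (a sketch only: the file names are made up, sec:intrusionDetected is the hypothetical element mentioned above, and its “sec” prefix is assumed to already be declared in the WSDL):

# Specialize a spec-provided WSDL before feeding it to a code generation tool.
from lxml import etree

XSD = "http://www.w3.org/2001/XMLSchema"
tree = etree.parse("notification.wsdl")

# Find the xs:any placeholder in the notification payload type...
placeholder = tree.xpath("//xs:any", namespaces={"xs": XSD})[0]

# ...and replace it with a reference to the concrete payload element.
concrete = etree.Element("{%s}element" % XSD, ref="sec:intrusionDetected")
placeholder.getparent().replace(placeholder, concrete)

# Then run wsdl2java (or whatever) on the specialized document.
tree.write("notification-specialized.wsdl")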

In the end, the question is not about what the WSDL in the specification looks like. The question is simply to what extent you know ahead of time the payload of the events you are going to have to handle. And you’d better know enough about the payload to create whatever logic your event consumer has to apply to the notification. Whether that’s through WSDL or some other means. If you are not going to apply any payload-dependent logic (“generic sink”) then you don’t need to know anything about the payload. And I don’t see why someone needs a wrapper to create a generic sink.

Rather, what I don’t like about wrapping notifications is that you force them to be handled only as notifications, not as regular SOAP messages. You put them in a separate world and you make it hard for someone to create a service that can be invoked either in a subscription-driven way or in a direct way.

Here is a made-up example: consider a message to indicate that a physical intrusion has been detected in a building. There are many possible consumers for this message (local security staff, private security company, police, sound alarm, the cell phone of the owner, audit log, etc…). There are many possible sources for the message. In some cases, the message does not come from a subscription (e.g. a homeowner calls the security company and the operator enters data in a system that produces the message, or the sensor is hard-coded to sound the alarm). In others, there is a subscription (e.g. a home alarm system allows someone to register phone numbers and email addresses to which to send intrusion alerts). Sometimes something that starts as a subscription-based notification gets forwarded to someone who did not register for anything. It’s a good thing if web services that consume this message do not have to know (if they don’t care) whether this message originated because of a subscription or not. All they need to worry about is that there is a message that they have to respond to (e.g. by dispatching a patrol of clowns with orange lights on their car).

Here is a simpler analogy. Imagine that you have a filter in your email client to move all messages from Joe to a given folder. How much would you like to have to write the rule twice, one for messages that Joe sends to you directly and one for messages that Joe sends to a mailing list to which you are subscribed? Not very much I imagine.

At the same time, most notification systems are aware that they are processing notifications and there may be notification-related data that you’d like to have available in a consistent way (e.g. enough information to manage the subscription that resulted in you receiving this message). That’s fine but you don’t need an intrusive wrapper for this. Just use a SOAP header. It’s out of the way if you don’t care about it and it’s right there if you do (if you want to subject yourself to a two-year-old rant about how the SOAP processing model is unfortunately underutilized, be my guest).
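
Here is a quick sketch of what that looks like (plain ElementTree; the payload and subscription namespaces and element names are invented for the intrusion example above, not taken from any actual notification specification):

import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"
SEC = "urn:example:security"       # made-up payload namespace
SUB = "urn:example:subscription"   # made-up subscription-metadata namespace

def build_notification(subscription_id=None):
    env = ET.Element("{%s}Envelope" % SOAP)
    header = ET.SubElement(env, "{%s}Header" % SOAP)
    if subscription_id is not None:
        # Subscription-related data rides along as a header: present if the
        # message came from a subscription, simply absent otherwise.
        ET.SubElement(header, "{%s}SubscriptionInfo" % SUB).text = subscription_id
    body = ET.SubElement(env, "{%s}Body" % SOAP)
    ET.SubElement(body, "{%s}intrusionDetected" % SEC).text = "building 7, door 3"
    return env

def consume(envelope):
    # A consumer that doesn't care about subscriptions never looks at the header;
    # the same code handles subscription-driven and direct messages.
    alert = envelope.find(".//{%s}intrusionDetected" % SEC)
    print("dispatching the patrol for: " + alert.text)

consume(build_notification())            # direct invocation
consume(build_notification("sub-42"))    # subscription-driven delivery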

One place where you need some kind of wrapping is when delivering several events at a time (either because you use pull-style retrieval or because you find it more efficient to push them in batches). If that’s what you’re after (and you want to handle it within one SOAP message rather than boxcarring a set of SOAP messages) then go ahead and define a wrapper, but make it a specialized wrapper that serves this purpose: collecting notifications and properly attaching whatever metadata is needed to each. That’s a real purpose, not some WSDL make-believe.

Another use case is if you apply some transformation to the notification before sending it. Say that instead of returning a large notification you filter it by running an XPath on it and returning a serialization of the resulting node set (assuming you first solve the XPath serialization conundrum). You’d need some kind of wrapper to contain the result and put it in context, but again that should be a specialized wrapper for your filter mechanism. Not a generic wrapper.
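
In lxml terms, and with an invented fil:filterResult element (a sketch of the idea, not a proposed format), that could look like:

from lxml import etree

SEC = "urn:example:security"
FIL = "urn:example:filter"  # made-up namespace for the specialized wrapper

notification = etree.fromstring(
    "<sec:intrusionDetected xmlns:sec='urn:example:security'>"
    "<sec:building>7</sec:building><sec:door>3</sec:door>"
    "</sec:intrusionDetected>")

# Run the subscriber's XPath against the full notification...
nodes = notification.xpath("//sec:door", namespaces={"sec": SEC})

# ...and serialize the resulting node set inside a wrapper whose only job is
# to carry a filter result (plus whatever context the filter mechanism needs).
wrapper = etree.Element("{%s}filterResult" % FIL)
for node in nodes:
    wrapper.append(node)
print(etree.tostring(wrapper))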

It’s been a while since I really thought about this. My recollection may be flawed but I think I was already holding this position in the OASIS WS-Notification technical committee (which completed its work by publishing three standards in October 2006). I remember David Hull making a very eloquent case in the same direction (“wrapping” as policy-advertised option, not a part of the base framework), and strong pushback from IBM. I learned a lot about pub/sub systems from my WS-Notification committee co-chair, IBM’s Peter Niblett (a leading expert on the topic) while working on WS-Notification, but this is one area in which he did not convert me.

Comments Off on Is notification wrapping getting a bum rap?

Filed under Everything, Mashup, Mgmt integration, Middleware, SOAP, SOAP header, Specs, Standards, Tech

Long-running processes on Google App Engine: it finally works

I am probably taking things a bit too personally, but I feel like I just successfully guilt-tripped the Google App Engine (GAE) team. Just last night I was complaining that they were teasing me (supporting urllib2, but an older version, without timeout support). And tonight I noticed a new post on their blog, announcing the end of these “High CPU Requests” that have been the bane of my GAE experience.

The reason why I was looking for timeout support in the first place is to avoid generating these dreaded “High CPU Requests” that quickly result in your application being disabled. It’s all explained here and here.

But now that they are gone, I don’t need timeouts anymore. Around 11:00PM Pacific (on Thursday 2/12) I restarted my application. All it does is create 5 new simple entities in the datastore, one every second (it sleeps for one second between each entry). Then it spawns its successor, which will do the same thing, ad vitam aeternam. The application’s name, “rere”, is short for “request relay”, the pattern used to emulate a long-running process. The default page for the app (available here) just returns a list of the 30 last entities created. The point is to illustrate that a single original request spawned an ever-lasting computing task on GAE.

Here is the code:

#!/usr/bin/env python
#
# Copyright 2009 William Vambenepe
#

import wsgiref.handlers
import os
import logging
import time

from google.appengine.ext import db
from google.appengine.ext.webapp import template
from google.appengine.ext import webapp
from google.appengine.api import urlfetch

numberOfHeartbeatsViewed = 30
secondsDurationOfTaskWait = 1
numberOfTasksPerRequest = 5

class HeartBeat(db.Model):
  requestId = db.IntegerProperty()
  date = db.DateTimeProperty(auto_now_add=True)

# The mere existence of an instance of this class in the DB means that the relay has to stop.
class StopExec(db.Model):
  date = db.DateTimeProperty(auto_now_add=True)

class MainHandler(webapp.RequestHandler):
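  # Renders the default page: the numberOfHeartbeatsViewed most recent heartbeats, newest first.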
  def get(self):
    hbs = HeartBeat.all().order("-date").fetch(numberOfHeartbeatsViewed)
    template_values = {"hbs": hbs}
    path = os.path.join(os.path.dirname(__file__), "index.html")
    self.response.out.write(template.render(path, template_values))

class StartHandler(webapp.RequestHandler):
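  # The relay itself: unless a StopExec entity exists, write numberOfTasksPerRequest
  # heartbeats (one per second), then spawn the successor request via urlfetch
  # (done in the finally block, so a successor is launched even if something fails mid-loop).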
  def get(self):
    if (StopExec.all().count() == 0):
      try:
        id = int(self.request.get("id"))
      except (TypeError, ValueError):
        id = 0
      try:
        logging.debug("Request " + str(id) + " launching background task.")
        loopCount = 0
        while(loopCount < numberOfTasksPerRequest):
          hb = HeartBeat()
          hb.requestId = id
          hb.put()
          logging.debug("Request " + str(id) + " saved heartbeat #" + str(loopCount))
          time.sleep(secondsDurationOfTaskWait)
          loopCount = loopCount+1
      finally:
        logging.debug("Launching successor request with id=" + str(id+1))

        # This silly back and forth between the two URLs is because of
        # "App cannot fetch the same URL as the one used for the request" error.
        if (self.request.url.find("start2") == -1):
          urlfetch.fetch("http://localhost/start2?id=" + str(id+1))
        else:
          urlfetch.fetch("http://localhost/start?id=" + str(id+1))
        logging.debug("Request " + str(id) + " completed")

def main():
  application = webapp.WSGIApplication([("/", MainHandler), ("/start", StartHandler), ("/start2", StartHandler)], debug=True)
  wsgiref.handlers.CGIHandler().run(application)

if __name__ == "__main__":
  main()

One thing I had to change from the earlier version (written using version 1.1.0 of the GAE SDK) is that urlfetch now returns an error if your app tries to invoke itself at the same URL (“App cannot fetch the same URL as the one used for the request”). So I have to alternate between http://localhost/start and http://localhost/start2, both of which are mapped to the same handler. This was added sometime between SDK 1.1.0 and SDK 1.1.9. If it is aimed at preventing the kind of baton-passing that I am doing, it is pretty inefficient considering how easy it is to circumvent.

It is now 1:02AM Pacific the next day (Friday 2/13) and the process is still progressing, based on the single HTTP request I sent to it at 11:00PM the previous evening. The result page currently returns:

  • From request # 1056, with date Fri, 13 Feb 2009 09:02:15 +0000
  • From request # 1056, with date Fri, 13 Feb 2009 09:02:14 +0000
  • From request # 1056, with date Fri, 13 Feb 2009 09:02:13 +0000
  • From request # 1056, with date Fri, 13 Feb 2009 09:02:12 +0000
  • From request # 1056, with date Fri, 13 Feb 2009 09:02:11 +0000
  • From request # 1055, with date Fri, 13 Feb 2009 09:02:10 +0000
  • From request # 1055, with date Fri, 13 Feb 2009 09:02:09 +0000
  • From request # 1055, with date Fri, 13 Feb 2009 09:02:08 +0000
  • From request # 1055, with date Fri, 13 Feb 2009 09:02:07 +0000
  • From request # 1055, with date Fri, 13 Feb 2009 09:02:06 +0000
  • From request # 1054, with date Fri, 13 Feb 2009 09:02:05 +0000
  • From request # 1054, with date Fri, 13 Feb 2009 09:02:04 +0000
  • From request # 1054, with date Fri, 13 Feb 2009 09:02:03 +0000
  • From request # 1054, with date Fri, 13 Feb 2009 09:02:02 +0000
  • From request # 1054, with date Fri, 13 Feb 2009 09:02:01 +0000
  • From request # 1053, with date Fri, 13 Feb 2009 09:01:59 +0000
  • From request # 1053, with date Fri, 13 Feb 2009 09:01:58 +0000

Which shows that 1056 successive requests have participated in the relay (the last one just happened, at 09:02:15 UTC which is 1:02AM Pacific).

Hopefully it will still be running when I wake up tomorrow.

[UPDATED 2009/2/13, 9:08AM Pacific: It’s alive!

  • From request # 6411, with date Fri, 13 Feb 2009 17:08:45 +0000
  • From request # 6410, with date Fri, 13 Feb 2009 17:08:44 +0000
  • From request # 6410, with date Fri, 13 Feb 2009 17:08:43 +0000
  • From request # 6410, with date Fri, 13 Feb 2009 17:08:42 +0000
  • From request # 6410, with date Fri, 13 Feb 2009 17:08:41 +0000
  • From request # 6410, with date Fri, 13 Feb 2009 17:08:40 +0000
  • From request # 6409, with date Fri, 13 Feb 2009 17:08:39 +0000

BTW, the code provided uses localhost to run on my local machine. The version uploaded to Google of course replaces this with rere.appspot.com.]

[UPDATED 2009/5/1: For some reason this entry is attracting a lot of comment spam, so I am disabling comments. Contact me if you’d like to comment.]

4 Comments

Filed under Cloud Computing, Everything, Google App Engine, Implementation, Tech, Utility computing

Google App Engine is teasing me

Version 1.1.9 of the Google App Engine (GAE) SDK was released earlier this week. The first item in the announcement covers the big news, that developers can “use the Python standard libraries urllib, urllib2 or httplib to make HTTP requests”. My first thought reading this was that I was finally going to be able to use timeouts on outgoing HTTP requests. I care about this because my earlier attempt to emulate a long-running process in GAE was stymied by the GAE quota system, something I think I can work around if I can timeout a request once it has spawned its successor (more precisely, once it has spawned the successor of the incoming request that created the new outgoing request).

I got the new SDK last evening and moved the code from using urlfetch to using urllib2 (with timeout). On my local machine it seems to work, but the quota system (that I am trying to finesse) doesn’t run in the SDK. So the only real test happens once you deploy the app in the Google environment. Which is when I realized that GAE uses Python 2.5.2 and that the timeout parameter in urllib2 (and httplib) came with 2.6. Slap.
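
For the record, here is the gist of it (a sketch, not the actual application code):

# What I was hoping to write: urlopen() only grew a timeout parameter in
# Python 2.6, so on GAE's 2.5.2 runtime this raises a TypeError.
import urllib2
urllib2.urlopen("http://localhost/start?id=1", timeout=5)

# The stock pre-2.6 alternative is a process-wide default on the socket module.
# I have not verified whether the GAE sandbox honors it for urllib2 fetches;
# it is just the standard CPython workaround, shown here for contrast.
import socket
socket.setdefaulttimeout(5)
urllib2.urlopen("http://localhost/start?id=1")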

I was especially disappointed because the links for urllib2 and httplib in the GAE 1.1.9 SDK announcement take us to the Python 2.6.1 documentation. The timeout parameter is right there at the top of these pages, staring at me. It would be more accurate for this announcement to point to the 2.5.2 documentation (here it is for urllib2 and httplib).

It doesn’t really matter of course because this is just a toy project. And a real scheduler seems to be in the works (see this pre-announcement and this work in progress). I just do this as a fun way to get a glimpse of what it takes to turn existing infrastructure into a *aaS product, something that is going on in different ways in many places these days. Linux wasn’t created to run on something else than hardware. Xen wasn’t created to support EC2. Python wasn’t created to support GAE.

And GAE wasn’t created to support long-running processes. But I haven’t given up.

1 Comment

Filed under Cloud Computing, Everything, Google App Engine, Implementation, Tech, Utility computing