Category Archives: Manageability

Would you like some management with that appliance?

Andi Mann recently wrote an interesting post about virtual appliances. He uses the domain name pleasediscuss.com for his blog, so I figured I’d do just that. More specifically, I have three comments on his article.

Opaque or transparent appliance

Andi’s concerns about the security and management problems posed by virtual appliances are real, but he seems to assume that the content of the appliance is necessarily opaque to the customer and under the responsibility of the appliance provider. Why can’t a virtual appliance be transparent, in the sense that the customer is able to efficiently manage at least some aspects of the software installed on it? “You can’t put agents on most virtual appliances, they don’t come with WMI, and most have only a GUI for management” says Andi. Why can’t an appliance come with an agent (especially in these days of consolidation, where many vendors provide many layers of the stack – hypervisor / OS / application container / application / management tools – including their agent)? Why can’t it implement a standard management API (most servers nowadays implement WBEM, WS-Management and/or IPMI pre-boot – on the motherboard – which is a lot more challenging to do than supporting a similar protocol in a virtual appliance)? Andi is really criticizing the current offerings more than the virtual appliance model per se, and on that I can join him.

Let me put it differently, since this is probably just a question of definition: what would Andi call a virtual appliance that does expose management APIs for its infrastructure (e.g. WS-Management for the OS, JMX for the Java stack) or that comes with an agent (HP, IBM, BMC, Oracle…) installed on it?

Such an appliance (let’s call it a “transparent virtual appliance” for now) doesn’t provide all the commonly claimed benefits of an appliance (zero config/admin) but, as Andi points out, these benefits come with major intrinsic drawbacks. A transparent virtual appliance still drastically simplifies installation (especially useful for test/dev/demo/POC). It doesn’t entirely free you of monitoring and configuration but at least it provides you with a very consistent and controlled starting point, manageable from the start (no need to subsequently install an agent). In addition, it can be made “just enough” (just enough OS, just enough app server…) to require a lot less maintenance than an application stack that you assemble yourself out of generic parts. We’ll always have trade-offs between how optimized/customized it is versus how uniform your overall environment can be, but I don’t see the use of an appliance as a delivery mechanism as necessarily cornering you into a completely opaque situation, from a management perspective.
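
To make “transparent” a bit more concrete, here is a minimal sketch – assuming the appliance exposes a standard WS-Management listener at /wsman, which is an assumption rather than a given – of the Identify request that any WS-Management endpoint is supposed to answer:

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
  xmlns:wsmid="http://schemas.dmtf.org/wbem/wsman/identity/1/wsmanidentity.xsd">
  <s:Header/>
  <s:Body>
    <wsmid:Identify/>
  </s:Body>
</s:Envelope>

The response advertises the protocol version and the product vendor/version, which is already more visibility than a sealed black box gives you, and a natural hook for a management console to start from.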

Those who attended Oracle Open World a few weeks ago were treated to an example of such an appliance, if they attended any of the sessions that covered Oracle’s Appliance Builder (the main one was, I believe, Virtualizing Oracle Fusion Middleware in the Modern Data Center, in case you have access to the Open World On Demand replay and slides). I believe it’s probably the same content that @jayfry3 was shown when he tweeted about “Oracle is demoing their private cloud self-service app”. These appliances are not at all opaque from a management perspective. To the contrary, they are highly manageable, coming with an Enterprise Manager agent installed that can manage everything in the appliance (and when that “everything” doesn’t include the OS, it’s because there isn’t one thanks to JRockit Virtual Edition, making things slimmer, faster, safer and more manageable). And of course the OVM-based environment in which you deploy these appliances is also managed by Enterprise Manager. OK, my point here wasn’t to go into marketing mode, but this is cool stuff and an example of what virtual appliances should be. BTW, this was also demonstrated during Hasan Rizvi’s keynote at OpenWorld, including the management of these systems through Enterprise Manager.

In the long run it’s irrelevant

As with all things computer-related, the issue is going to get blurrier and then irrelevant. The great thing about software is that there is no solid line. In this case, we will eventually get more customized appliances (via appliance builders or model-driven appliance generation) blurring the line between installed software and appliance-based software.

Waiting for PaaS

Towards the end of his post, Andi paints an optimistic vision of the future: “I also think that virtual appliances have a bright future – but in some ways I continue to see them as a beta version of what could (or should) come next.  By adding in capabilities for responsible and accountable management, they could form the basis of more fully-functional virtual service management containers. These in turn could form the basis of elastic, mobile, network-deployed, responsible cloud appliances that deliver complete end-to-end service management without regard to physical location or domain of control.”

I mostly agree with this vision, though when I describe it, it is in the guise of a PaaS platform. Where your appliance (which today goes from the OS all the way to the app) has shrunk to an application template that you deploy in the PaaS environment (rather than in a hypervisor). If/when the underlying PaaS environment has reached the right level of management automation you get all the benefits of an appliance while maintaining the consistency of your environment and its adherence to your management policies (because the environment is the PaaS platform and its management is driven from your policies).

[As is often the case, this started as a comment (on Andi’s blog) and quickly outgrew that environment, leading to this new post. Plus, Andi’s blog is brand new and seems to be well worth spreading the word about (Andi himself is under-marketing it).]

3 Comments

Filed under Application Mgmt, Automation, Desired State, Everything, IT Systems Mgmt, Manageability, Oracle, Oracle Open World, OVM, PaaS, Virtual appliance, WS-Management

Cloud platform patching conundrum: PaaS has it much worse than IaaS and SaaS

The potential user impact of changes (e.g. patches or config changes) made on the Cloud infrastructure (by the Cloud provider) is a sore point in the Cloud value proposition (see Hoff’s take for example). You have no control over patching/config actions taken by the provider, any of which could potentially affect you. In a traditional data center, you can test the various changes on specific applications; you don’t have to apply them at the same time on all servers; and you can even decide to skip some infrastructure patches not relevant to your application (“if it ain’t broke…”). Not so in a Cloud environment, where you may not even know about a change until after the fact. And you have no control over the timing and the roll-out of the patch, so that some of your instances may be running on patched nodes and others may not (good luck with troubleshooting that).

Unfortunately, this is even worse for PaaS than IaaS. Simply because you sit on a lot more infrastructure that is opaque to you. In an IaaS environment, the only thing that can change is the hardware (rarely a cause of problems) and the hypervisor (or equivalent Cloud OS). In a PaaS environment, it’s all that plus whatever flavor of OS and application container is used. Depending on how streamlined this all is (just enough OS/AS versus a traditional deployment), that’s potentially a lot of code and configuration. Troubleshooting is also somewhat easier in an IaaS setup because the error logs are localized (or localizable) to a specific instance. Not necessarily so with PaaS (and even if you could localize the error, you couldn’t guarantee that your troubleshooting test runs on the same node anyway).

In a way, PaaS is squeezed between IaaS and SaaS on this. IaaS gets away with a manageable problem because the opaque infrastructure is not too thick. For SaaS it’s manageable too because the consumer is typically either a human (who is a lot more resilient to change) or a very simple and well-understood interface (e.g. IMAP or some Web services). Contrast this with PaaS where the contract is that of an application container (e.g. JEE, RoR, Django). There are all kinds of subtle behaviors (e.g., timing/ordering issues) that are not part of the contract and can surface after a patch: for example, a bug in the application that was never found because before the patch things always happened in a certain order that the application implicitly – and erroneously – relied on. That’s exactly why you always test your key applications today even if the OS/AS patch should, in theory, not change anything for the application. And it’s not just patches that can do that. For example, network upgrades can introduce timing changes that surface new issues in the application.
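
For a contrived illustration of this kind of implicit reliance (not tied to any particular PaaS), consider code that assumes an ordering the platform never promised:

# Contrived example of an implicit ordering assumption. os.listdir() returns
# entries in arbitrary order; if the underlying filesystem happens to return
# them sorted, the bug stays hidden until an infrastructure change (new kernel,
# new filesystem, new container image) reshuffles the order.
import os

def latest_report(directory):
    files = os.listdir(directory)   # order is not guaranteed by the contract
    return files[-1]                # silently assumes "last == newest"

# A version that makes the ordering explicit instead of relying on luck:
def latest_report_fixed(directory):
    return max(os.listdir(directory),
               key=lambda f: os.path.getmtime(os.path.join(directory, f)))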

And it goes both ways. Just like you can be hurt by the Cloud provider patching things, you can be hurt by them not patching things. What if there is an obscure bug in their infrastructure that only affects your application? First you have to convince them to troubleshoot with you. Then you have to convince them to produce (or get their software vendor to produce) and deploy a patch.

So what are the solutions? Is PaaS doomed to never go beyond hobbyists? Of course not. The possible solutions are:

  • Write a bug-free and high-performance PaaS infrastructure from the start, one that never needs to be changed in any way. How hard could it be? ;-)
  • More realistically, narrowly define container types to reduce both the contract and the size of the underlying implementation of each instance. For example, rather than deploying a full JEE+SOA container, componentize the application so that each component can deploy in a small container (e.g. a servlet engine, a process management engine, a rule engine, etc). As a result, the interface exposed by each container type can be more easily and fully tested. And because each instance is slimmer, it requires fewer patches over time.
  • PaaS providers may give their users some amount of visibility and control over this. For example, by announcing upgrades ahead of time, providing updated nodes to test on early and allowing users to specify “freeze” periods where nothing changes (unless an urgent security patch is needed, presumably). Time for a Cloud “refresh” in ITIL/ITSM-land?
  • The PaaS providers may also be able to facilitate debugging of infrastructure-related problems. For example, by stamping the logs with a version ID for the infrastructure of the node that generated the log entry, and by providing the ability to request that a test run on a node with the same version (see the sketch after this list). Keeping in mind that in a SOA / Composite world, the root cause of a problem found on one node may be a configuration change on a different node…
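
Here is a minimal sketch of the log-stamping idea, assuming (purely for illustration) that the provider exposes the node’s infrastructure build through an environment variable:

import logging
import os

# Hypothetical environment variable set by the PaaS provider on each node
PLATFORM_VERSION = os.environ.get("PAAS_PLATFORM_BUILD", "unknown")

# Every log entry now carries the version of the infrastructure it ran on,
# which makes "was this node patched yet?" answerable from the logs alone.
formatter = logging.Formatter(
    "%(asctime)s [platform=" + PLATFORM_VERSION + "] %(levelname)s %(name)s: %(message)s")
handler = logging.StreamHandler()
handler.setFormatter(formatter)

log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order submitted")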

Some closing notes:

  • Another incarnation of this problem is likely to show up in the form of PaaS certification. We should not assume that just because you use a PaaS you are the developer of the application. Why can’t I license an ISV app that runs on GAE? But then, what does the ISV certify against? A given PaaS provider, e.g. Google? A given version of the PaaS infrastructure (if there is such a thing… Google advertises versions of the GAE SDK, but not of the actual GAE runtime)? Or maybe a given PaaS software stack, e.g. the Oracle/Microsoft/IBM/VMWare/JBoss/etc, meaning that any Cloud provider who uses this software stack is certified?
  • I have only discussed here changes to the underlying platform that do not change the contract (or at least only introduce backward-compatible changes, i.e. add APIs but don’t remove any). The matter of non-compatible platform updates (and version coexistence) is also a whole other ball of wax, one that comes with echoes of SOA governance discussions (because in PaaS we are talking about pure software contracts, not hardware or hardware-like contracts). Another area in which PaaS has larger challenges than IaaS.
  • Finally, for an illustration of how a highly focused and specialized container cuts down on the need for config changes, look at this photo from earlier today during the presentation of JRockit Virtual Edition at Oracle Open World. This slide shows (in font size 3 – don’t worry, you’re not supposed to be able to read it) the list of configuration files present on a normal Linux instance, versus a stripped-down (“JeOS”) Linux, versus JRockit VE.


By the way, JRockit VE is very interesting and the environment today is much more favorable than when BEA first did it, but that’s a topic for another post.

[UPDATED 2009/10/22: For more on this (in an EC2-centric context) see section 4 (“service problem resolution”) of this IBM paper. It ends with “another possible direction is to develop new mechanisms or APIs to enable cloud users to directly and automatically query and correlate application level events with lower level hardware information to better identify the root cause of the problem”.]

[UPDATE 2012/4/1: An example of a PaaS platform update which didn’t go well.]

9 Comments

Filed under Application Mgmt, Cloud Computing, Everything, Google App Engine, Governance, ITIL, Manageability, Mgmt integration, PaaS, SaaS, Utility computing, Virtualization

The future (2006 version), has arrived

Remember 2006? Things were starting to fall into place for IT management integration and automation:

  • SDD was already on its way to cleanly describe/package/manage the lifecycle of simple and composite applications alike,
  • the first version of SML came out to capture all the relevant constraints of complex and composite systems and open the door to “desired-state management”,
  • the CMDBf effort was started to seamlessly integrate all sources of configuration and provide a bird’s-eye view of your entire IT infrastructure, and
  • the WSDM/WS-Management convergence/reconciliation was announced and promised to free management consoles from supporting many resource discovery, collection and control mechanisms and from having platform/library dependencies between the manager and its targets.

It looked like we were a year or two from standardization on all these and another year or two from shipping implementations. Things were looking good.

Good news: the schedule was respected. SDD, SML and CMDBf are now all standards (at OASIS, W3C and DMTF respectively). And today the Eclipse COSMOS project announced the release of COSMOS 1.1 which implements them all. The WSDM/WS-Management convergence is the only one that didn’t quite go according to the plan but it is about to come out as a standard too (in a pared-down form).

Bad news: nobody cares. We’ve moved on to “private clouds”.

Having been involved with these specifications in various degrees (a little bit on SDD, a fair amount on SML and a lot on CMDBf and WSDM/WS-Management) I am not as detached as my sarcastic tone may suggest. But as they say in action movies, “don’t let sentiments get in the way of the mission”.

There is still a chance to reuse parts of this stack (e.g. the CMDBf query language) and there are lessons to learn from our errors. The over-promising, the technical misjudgments, the political bickering, the lack of concrete customer validation, etc. To some extent this work was also victim of collateral damages from the excesses of WS-* (I am looking at you WS-Addressing). We also failed to notice the rise of the hypervisor in our peripheral vision.

I tried to capture some important lessons in this post-mortem. For the edification of the cloud generation. I also see a pendulum in action. Where we over-engineered I now see some under-engineering (overly granular interaction models, overemphasis on the virtual machine as the unit of everything, simplistic constraint models, underestimation of config/patching issues…). Things will come around and may eventually look familiar (suggested exercise: compare PubSubHubBub with WS-Notification).

As long as each iteration gets us closer to the goal things are good.

See you in 2012. Same place, same day, same time.

3 Comments

Filed under Application Mgmt, Automation, Cloud Computing, CMDB, CMDB Federation, CMDBf, Desired State, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Modeling, Protocols, SML, Specs, Standards, Utility computing, WS-Management

Thoughts on the “Simple Cloud API”

PHP developers with Cloud aspirations rejoice! Zend has announced a PHP toolkit (called the Simple Cloud API project) to abstract and access application-level Cloud services. This is not just YACA (yet another Cloud API), as there are interesting differences between this and all the other Cloud toolkits out there.

First it’s PHP, which was not covered by the existing toolkits. Considering how many web applications are written in PHP (including the one that serves this very blog) this may seem strange, until you realize that most Cloud toolkits out there are focused on provisioning/managing low-level compute resources of the IaaS kind. Something that is far out of PHP’s sweet spot and much more practically handled with Java, Python, Ruby or some .NET language accessible via PowerShell.

Which takes us to the second, and arguably most interesting, characteristic of this toolkit: it is focused on application-level Cloud services (files, documents and queues for now) rather than infrastructure-level. In other words, it’s the first (to my knowledge) PaaS toolkit.

I also notice that Zend has gotten endorsements from IBM, Microsoft, Nirvanix, Rackspace and GoGrid. The first two especially seem to have impressed InfoWorld. Let’s keep in mind that at this point all we are talking about are canned quotes in a press release. Which rank only above politicians’ campaign promises as predictors of behavior. In any case that can’t be the full extent of IBM and Microsoft’s response to the VMWare/Cisco push on IaaS standards. But it may suggest that their response will move the battlefield to include PaaS, which would be a smart move.

Now for a few more acerbic comments:

  • It has “simple” in its name, like SOAP (as Pete Lacey famously lampooned). In the long term this tends to negatively correlate with simplicity, just like the presence of “democratic” in the official name of a country does not bode well for actual democracy.
  • Please, don’t shorten “Simple Cloud API” to SCA which is already claimed in a (potentially) closely related field.
  • Reuven Cohen is technically correct to see it as “a way to create other higher level programmatic API interfaces such as REST or SOAP using an easy, yet portable PHP programming environment”. But pay attention to how many turtles are on this pile: the native provider API, the adapter to the “simple cloud API”, the SOAP or REST remote API and the consuming application’s native API. How much real isolation are you getting when you build your house on such a wobbly foundation?

[UPDATE: Comments from someone in the know: a programmer working on adding Azure support for this Simple Cloud API project.]

2 Comments

Filed under Application Mgmt, Cloud Computing, Everything, IBM, Manageability, Mgmt integration, Microsoft, Middleware, Open source, Portability, Utility computing

Toolkits to wrap and bridge Cloud management protocols

Cloud development toolkits like Libcloud (for Python) and jcloud (for Java) have been around for some time, but over the last two months they have been joined by several other open source contenders. They all claim to abstract the on-the-wire Cloud management protocols sufficiently to let you access different Clouds via the same code; while at the same time providing objects in your programming language of choice and saving you the trouble of dealing with on-the-wire messages. By focusing on interoperability, they slot themselves below the larger role of a “Cloud broker” (which also deals with tasks like transfer and choice). Here is the list, starting with the more recent contenders:

DeltaCloud shares the same goal of translating between different Cloud management protocols but they present their own interface as yet another Cloud REST API/protocol rather than a language-specific toolkit. More along the lines of what UCI is trying to do (not sure what’s up with that project, I recorded my skepticism earlier and am still waiting to be pleasantly surprised).

Of course there are also programming toolkits that are specific to one Cloud provider. They are language-specific wrappers around one Cloud management protocol. AWS protocols (EC2, S3, etc…) represent the most common case, for example amazon-ec2 (a Ruby Gem), Power-EC2Dream (in C# which gives it the tantalizing advantage of being invokable via PowerShell) and typica (for Java). For Clouds beyond AWS, check out the various RightScale Ruby Gems.
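
To give a feel for what the cross-Cloud toolkits promise the developer, here is a minimal Libcloud sketch; provider constants and constructor arguments have changed across Libcloud versions, so treat it as illustrative rather than authoritative:

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Pick a provider driver; in principle, swapping Provider.EC2 for another
# constant is the only change needed to target a different Cloud.
Driver = get_driver(Provider.EC2)
conn = Driver("my-access-key", "my-secret-key")   # hypothetical credentials

# Same calls regardless of the on-the-wire protocol spoken underneath
for node in conn.list_nodes():
    print(node.name, node.state, node.public_ips)

The provider-specific toolkits look much the same, minus the pretense of portability.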

The main point of this entry was to list the cross-Cloud development toolkits in the bullet list above. But if you’re in the mood for some pontification you can keep reading.

For some reason, what used to be called “protocols” is often called “APIs” in Cloud settings. Witness the Sun Cloud “API” or the vCloud “API” which only define XML formats for on-the-wire messages. I have never heard of CIM/XML over HTTP, WSDM or WS-Management being referred to as APIs though they occupy a very similar place. They are usually considered “protocols”.

It’s just a question of definition whether an on-the-wire protocol (rather than a language-specific set of objects/methods) qualifies as an “Application Programming Interface”. It’s not an “interface” in the Java sense of the term. But I can “program” against it so it could go either way. On this blog I have gone along with the “API” term because that seemed widely used, though in verbal conversations I have tended to stick to “protocol”. One problem with “API” is that it pushes you towards mixing the “what” and the “how” and not respecting the protocol/model dichotomy.

Where it becomes relevant is when language-specific APIs for Cloud control start to pop up, as listed above. You now have two classes of things called “API” and it gets a bit confusing. Is it time to bring back the “protocol” term for on-the-wire definitions?

As a developer, whether you’re better off eating your Cloud noodles using chopsticks (on-the-wire protocol definitions) or a fork (language-specific APIs) is an important decision that will stay with you and may come back to bite you (e.g. when the interfaces are versioned). There is a place for both of course, but if we are to learn anything from WS-* it’s that we went way too far in the “give me a java stub” direction. Which doesn’t mean there is no room for them, but be careful how far from the wire semantics you get. It becomes even trickier when your stub tries not just to bridge between XML and Java but also to smooth out the differences between several on-the-wire protocols, as the toolkits above do. The hope, of course, is that there will eventually be enough standardization of on-the-wire protocols to make this a moot point.

2 Comments

Filed under Amazon, API, Automation, Cloud Computing, Everything, Google App Engine, Implementation, IT Systems Mgmt, Manageability, Mgmt integration, Open source, Protocols, Utility computing

Separating model from protocol in Cloud APIs

What happened to the separation between the model and the protocol in management APIs? For all the arguments we had in the design of WSDM and WS-Management, this was one fundamental concept that took little discussion before everyone agreed: that the protocol (the interaction model and the on-the-wire shape of the messages used) should be defined in a way that is agnostic to the type of resource being managed (computers, elevators or toasters — the perennial silly example). To this end, WSDM took pains to release MUWS (Management Using Web Services) and MOWS (Management Of Web Services) as two different specifications.

Contrast that to the different Cloud APIs (there is a new one released every other day). If they have one thing in common it is that they happily ignore this principle and tackle protocol concerns alongside the resource model. Here are my guesses as to why that is:

1) It’s a land grab

The goal is not to produce the best long-term API, it’s to be out early, to stake your claim and to gain leverage, so that you can steer the final standard close to your implementation. Editorial niceties like properly factoring the specification are not major concerns, there will be plenty of time for this during the standardization process. In fact, leaving such improvements for the standardization phase is a nice way to make it look like the group is not just rubberstamping, while not changing much that actually impacts your implementation. The good old “give them something insignificant to argue about” trick. It works BTW.

As an example of how rushed some of these submissions can be, did you notice that what VMWare submitted to DMTF this week is the vCloud API Specification v0.8 (a 7-page document that is simply a list of operations), not the accompanying vCloud API programming guide v0.8, which is ten times longer and is the real specification – the place where the operation semantics, payload formats and protocol considerations are actually described and without which the previous document cannot possibly be implemented? Presumably the VMWare team was pressed to release on time for a VMWorld announcement and they came up with this to be able to submit without finishing all the needed editorial work. I assume this will follow soon and in the meantime the DMTF members will retrieve the programming guide from the VMWare site in order to make sense of what was submitted to them.

This kind of rush is not rare in the history of specification submissions, even those that have been in the works for a long time. For example, the initial CBE submission by IBM had “IBM Confidential” all over the specification and a mention that one should retrieve the most up to date version from the “Autonomic Computing Problem Determination Offering Team Notes Database” (presumably non-IBMers were supposed to break into the server).

If lack of time is the main reason why all these APIs do not factor out the protocol aspects then I have no problem, there is plenty of time to address it. But I suspect that there may be other reasons, that some may see it as a feature rather than a bug. For example:

2) Anything but WS-*

SOAP-based interfaces (WS-* or WS-DeathStar) have a bad rap and doing anything in the opposite way is a crowd pleaser (well, in the blogosphere at least). Modularity and composition of specifications is a major driving force behind the WS-* work, therefore it is bad and we should make all specifications of the new REST order stand-alone.

3) Keep it simple

A more benevolent way to put it is the concern to keep things simple. If you factor specifications out you put on the developer the burden of assembling the complete documentation, plus you introduce versioning issues between the parts. One API document that fully describes the contract is simpler.

4) We don’t need no stinkin’ protocol, we have HTTP

Isn’t this the protocol? Through the magic of REST, all that’s needed is a resource model, right? But if you look in the specifications you see sections about authentication, fault handling, long-lived operations, enumeration of long result sets, etc… Things that have nothing to do with the resource model.

So what?

Why is this confluence of model and protocol in one specification bad? If nothing else, the “keep it simple” argument (#3) above has plenty of merits, doesn’t it? Aren’t WSDM and WS-Management just over-engineered?

They may be, but not because they offer this separation. Consider the following practical benefits of separating the protocol from the model:

1) We can at least agree on one part

Thanks to the “REST is the new black” attitude in Cloud circles, there are lots of commonalities between these various Cloud APIs. Especially the more recent ones, those that I think of as “second generation” APIs: vCloud, Sun API, GoGrid and OCCI (Amazon EC2 is the main “1st generation” Cloud API, back when people weren’t too self-conscious about not just using HTTP but really “doing REST”). As an example of convergence between second generation specifications, see how vCloud and the Sun API both use “202 Accepted” and a dedicated “status” resource to handle long-lived operations. More comparisons here.
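
For illustration, that long-lived operation pattern looks roughly like this on the wire (the paths and payload are made up; the pattern is what matters):

Request (start the operation):

POST /servers/1234/action HTTP/1.1
Content-Type: application/json

{"reboot": {"type": "HARD"}}

Response:

HTTP/1.1 202 Accepted
Location: /status/98765

Later, poll the status resource:

GET /status/98765 HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json

{"status": "in-progress", "progress": 60}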

Where they differ on such protocol matters, it wouldn’t be hard to modify one’s implementation to use an alternative approach. Things become a lot more sensitive when you touch the resource model, which reflects the actual capabilities of the Cloud management infrastructure. How much flexibility in the network setup? What kind of application provisioning? What affinity/anti-affinity control level? Can I get block-level storage? Etc. Having to implement the other guy’s interface in these matters is not just a matter of glue code, it’s a major product feature. As a result, the resource model is a much more strategic control point than the protocol. Would you rather dictate the terms of a contract or the color of the ink in which it is printed?

That being the case, I suspect that there could be relatively quick and painless agreement on that first layer of the Cloud API: a set of protocol considerations, based on HTTP and REST, that provides a resource control framework with support for security, events, long-running operations, faults, many-as-one semantics, enumeration, etc. Or rather, that if there is to be a “quick and painless” agreement on anything related to Cloud computing standards it can only be on something that is limited to protocol concerns. It doesn’t have to be long and complex. It doesn’t have to be factored into 8 different specifications like WS-* did. It can be just one specification. Keep it simple, ignore all use cases that aren’t related to Cloud Computing. In the end, please call it MUR (Management Using REST)… ;-)

2) Many Clouds, one protocol to rule them all

Whichever Cloud taxonomy strikes your fancy (I am so disappointed that SADIST-PIMP hasn’t caught on), it’s pretty clear that there will not be one kind of Cloud. There will be at least some IaaS, some PaaS and plenty of SaaS. There will not be one API that provides control of them all, but they can share a base protocol that will make life a lot easier for developers. These Clouds won’t be isolated, developers will use them as a continuum.

3) Not just one access model

As much as it makes sense to start from simple and mostly synchronous operations, there will be many different interaction models for Cloud Computing. In addition to the base operations, we may get more of a desired-state/blueprint interaction pattern, based on the same resource model. Or, somewhere in-between, some kind of stored execution flow where modules are passed around rather than individual operations. Also, as the level of automation increases you may want a base framework that is more event-friendly for rapid closed-loop management. And there are other considerations involved (like resource monitoring, policies…) not currently covered by these specifications but that can surely reuse the protocol aspects. By factoring out the resource model, you make it possible for these other interaction patterns to emerge in a compatible way.

The current Cloud APIs are not far away from this clean factoring. It would be an easy task to extract protocol considerations as a separate document, in large part due to the fact that REST prevents you from burying the resource model inside convoluted operation semantics. To some extent it’s just a partitioning issue, but the same can be said of many intractable and bloody armed conflicts around the world… Good fences make good neighbors in the world of IT specs too.

[UPDATE: Soon after this entry went to “press” (meaning soon after I pressed the publish button), I noticed this report of a “REST-*” proposal by Mark Little of RedHat/JBOSS. I will reserve judgment until Mark has blogged about it or I have seen some other authoritative description. We may be talking about the same thing here. Or maybe not. The REST-* name surprises me a bit as I would expect opponents of such a proposal to name it just this way. We’ll see.]

[UPDATE 2009/9/6: Apparently I am something like the 26th person to think of the “one protocol/API to rule them all” sentence. We geeks have such a shallow set of shared cultural references it’s scary at times.]

[UPDATED 2009/11/12: Lori MacVittie has a very nice follow-up on this, with examples and interesting analogies. Check it out.]

8 Comments

Filed under API, Automation, Cloud Computing, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Modeling, Protocols, REST, Specs, Standards, Utility computing

Cloud catalog catalyst or cloud catalog cataclysm?

Like librarians, we IT wonks tend to like things cataloged. To date, the latest instance of this has been SOA governance and its various registries and repositories. With UDDI limping along as some kind of organizing standard for the effort. One issue I have with UDDI is that its technical awkwardness is preventing us from learning from its failure to realize its ambitious goals (“e-business heaven”). It would be too easy to attribute the UDDI disappointment to UDDI. Rather, it should be laid at the feet of unreasonable initial expectations.

The SOA governance saga is still ongoing, now away from the spotlight and mostly from an implementation perspective rather than a standard perspective (by the way, what’s up with GIF?). Instead, the spotlight has turned to Cloud computing and that’s what we are supposedly going to control through cataloging next.

Earlier this year, I commented on the release of an ITSM catalog product for Cloud computing (though I was addressing the convergence of ITSM and Cloud computing more than catalogs per se).

More recently, Lori MacVittie related SOA governance to the need for Cloud catalogs. She makes some good points, but I also see some familiar-looking “irrational exuberance”. The idea of dynamically discovering and invoking a Cloud service reminds me too much of the initial “yellow pages” scenarios for UDDI (which quickly got dropped in favor of a more modest internal governance focus).

I am not convinced by the reason Lori gives for why things are different this time around (“one of the interesting things virtualization brings to the table that SOA did not is the ability to abstract management of services”). She argues that SOA governance only gave you access to the operational WSDL of a Web service, while Cloud catalogs will give you access to their management API. But if your service is an IT service, then your so-called management API (launch/configure/control VMs) is really its operational interface. The real management interface is the one Amazon uses under the cover and they are not going to expose it to you anymore than your bank is going to expose its application server administration console to you (if they do, move your money somewhere else before someone does it for you).

After all, isn’t SOA governance pretty close to a SaaS catalog which is itself a small part of the overall Cloud (IaaS+PaaS+SaaS) catalog question? If we still haven’t succeeded in the smaller scope, what are the odds of striking gold quickly in the larger effort?

Some analysts take a more pragmatic view, involving active brokers rather than simply a new DNS record type. I am doubtful about these brokers (0.2 probability, as Gartner would put it) but at least this moves the question onto business terms (leverage, control) rather than technical terms. Which is where the battle will be fought.

When it comes to Cloud catalogs, I think they are needed (if only for the categorization of Cloud services that they require) but will only play a supporting role, if any, in any move towards dynamic Cloud provisioning. As with SOA governance it’s as an internal tool, supported by strong processes, that they will be most useful.

Throughout human history, catalogs have been substitutes for control more often than instruments of control. Think of astronomy, zoology and… nephology for example. What kind will IT Cloud catalogs be?

2 Comments

Filed under Application Mgmt, Automation, Business, Cloud Computing, Everything, Governance, Manageability, Mgmt integration, Portability, Specs, Utility computing

REST in practice for IT and Cloud management (part 1: Cloud APIs)

In this entry I compare four public Cloud APIs (AWS EC2, GoGrid, Rackspace and Sun Cloud) to see what practical benefits REST provides for resource management protocols.

As someone who was involved with the creation of the WS-* stack (especially the parts related to resource management) and who genuinely likes the SOAP processing model I have a tendency to be a little defensive about REST, which is often defined in opposition to WS-*. On the other hand, as someone who started writing web apps when the state of the art was a CGI Perl script, who loves on-the-wire protocols (e.g. this recent exploration of the Windows management stack from an on-the-wire perspective), who is happy to deal with raw XML (as long as I get to do it with a good library), who appreciates the semantic web, and who values models over protocols the REST principles are very natural to me.

I have read the introduction and the bible but beyond this I haven’t seen a lot of practical and profound information about using REST (by “profound” I mean something that is not obvious to anyone who has written web applications). I had high hopes when Pete Lacey promised to deliver this through a realistic example, but it seems to have stalled after two posts. Still, his conversation with Stefan Tilkov (video + transcript) remains the most informed comparison of WS-* and REST.

The domain I care the most about is IT resource management (which includes “Cloud” in my view). I am familiar with most of the remote API mechanisms in this area (SNMP to WBEM to WMI to JMX/RMI to OGSI, to WSDM/WS-Management to a flurry of proprietary interfaces). I can think of ways in which some REST principles would help in this area, but they are mainly along the lines of “any consistent set of principles would help” rather than anything specific to REST. For a while now I have been wondering if I am missing something important about REST and its applicability to IT management or if it’s mostly a matter of “just pick one protocol and focus on the model” (as well as simply avoiding the various drawbacks of the alternative methods, which is a valid reason but not an intrinsic benefit of REST).

I have been trying to learn from others, by looking at how they apply REST to IT/Cloud management scenarios. The Cloud area has been especially fecund in such specifications so I will focus on this for part 1. Here is what I think we can learn from this body of work.

Amazon EC2

When it came out a few years ago, the Amazon EC2 API, with its equivalent SOAP and plain-HTTP alternatives, did nothing to move me from the view that it’s just a matter of picking a protocol and being consistent. They give you the choice of plain HTTP versus SOAP, but it’s just a matter of tweaking how the messages are serialized (URL parameters versus a SOAP message in the input; whether or not there is a SOAP wrapper in the output). The operations are the same whether you use SOAP or not. The responses don’t even contain URLs. For example, “RunInstances” returns the IDs of the instances, not a URL for each of them. You then call “TerminateInstances” and pass these instance IDs as parameters rather than doing a “delete” on an instance URL. This API seems to have served Amazon (and their ecosystem) well. It’s easy to understand, easy to use and it provides a convenient way to handle many instances at once. Since no SOAP header is supported, the SOAP wrapper adds no value (I remember reading that the adoption rate for the EC2 SOAP API reflects this, though I don’t have a link handy).

Overall, seeing the EC2 API did not weaken my suspicion that there was no fundamental difference between REST and SOAP in the IT/Cloud management field. But I was very aware that Amazon didn’t really “do” REST in the EC2 API, so the possibility remained that someone would, in a way that would open my eyes to the benefits of true REST for IT/Cloud management.

Fast forward to 2009 and many people have now created and published RESTful APIs for Cloud computing. APIs that are backed by real implementations and that explicitly claim RESTfulness (unlike Amazon). Plus, their authors have great credentials in datacenter automation and/or REST design. First came GoGrid, then the Sun Cloud API and recently Rackspace. So now we have concrete specifications to analyze to understand what REST means for resource management.

I am not going to do a detailed comparative review of these three APIs, though I may get to that in a future post. Overall, they are pretty similar in many dimensions. They let you do similar things (create server instances based on images, destroy them, assign IPs to them…). Some features differ: GoGrid supports more load balancing features, Rackspace gives you control of backup schedules, Sun gives you clusters (a way to achieve the kind of manage-as-group features inherent in the EC2 API), etc. Leaving aside the feature-per-feature comparison, here is what I learned about what REST means in practice for resource management from each of the three specifications.

GoGrid

Though it calls itself “REST-like”, the GoGrid API is actually more along the lines of EC2. The first version of their API claimed that “the API is a REST-like API meaning all API calls are submitted as HTTP GET or POST requests” which is the kind of “HTTP ergo REST” declaration that makes me cringe. It’s been somewhat rephrased in later versions (thank you) though they still use the undefined term “REST-like”. Maybe it refers to their use of “call patterns”. The main difference with EC2 is that they put the operation name in the URI path rather than the arguments. For example, EC2 uses

https://ec2.amazonaws.com/?Action=TerminateInstances&InstanceId.1=i-2ea64347&…(auth-parameters)…

while GoGrid uses

https://api.gogrid.com/api/grid/server/delete?name=My+Server+Name&…(auth-parameters)…

So they have action-specific endpoints rather than a do-everything endpoint. It’s unclear to me that this changes anything in practice. They don’t pass resource-specific URLs around (especially since, like EC2, they include the authentication parameters in the URL), they simply pass IDs, again like EC2 (but unlike EC2 they only let you delete one server at a time). So whatever “REST-like” means in their mind, it doesn’t seem to be “RESTful”. Again, the EC2 API gets the job done and I have no reason to think that GoGrid doesn’t also. My comments are not necessarily a criticism of the API. It’s just that it doesn’t move the needle for my appreciation of REST in the context of IT management. But then again, “instruct William Vambenepe” was probably not a goal in their functional spec.

Rackspace

In this “interview” to announce the release of the Rackspace “Cloud Servers” API, lead architects Erik Carlin and Jason Seats make a big deal of their goal to apply REST principles: “We wanted to adhere as strictly as possible to RESTful practice. We iterated several times on the design to make it more and more RESTful. We actually did an update this week where we made some final changes because we just didn’t feel like it was RESTful enough”. So presumably this API should finally show me the benefits of true REST in the IT resource management domain. And to be sure it does a better job than EC2 and GoGrid at applying REST principles. The authentication uses HTTP headers, keeping URLs clean. They use the different HTTP verbs the way they are intended. Well mostly, as some of the logic escapes me: doing a GET on /servers/id (where id is the server ID) returns the details of the server configuration, doing a DELETE on it terminates the server, but doing a PUT on the same URL changes the admin username/password of the server. Weird. I understand that the output of a GET can’t always have the same content as the input of a PUT on the same resource, but here they are not even similar. For non-CRUD actions, the API introduces a special URL (/servers/id/action) to which you can POST. The type of the payload describes the action to execute (reboot, resize, rebuild…). This is very similar to Sun’s “controller URLs” (see below).

I came out thinking that this is a nice on-the-wire interface that should be easy to use. But it’s not clear to me what REST-specific benefit it exhibits. For example, how would this API be less useful if “delete” was another action POSTed to /servers/id/action rather than being a DELETE on /servers/id? The authors carefully define the HTTP behavior (content compression, caching…) but I fail to see how the volume of data involved in using this API necessitates this (we are talking about commands here, not passing disk images around). Maybe I am a lazy pig, but I would systematically bypass the cache because I suspect that the performance benefit would be nothing in comparison to the cost of having to handle in my code the possibility of caching taking place (“is it ok here that the content might be stale? what about here? and here?”).
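
To lay out the verb mapping described above (URIs are illustrative):

GET    /servers/1234         -> details of the server configuration
DELETE /servers/1234         -> terminate the server
PUT    /servers/1234         -> change the admin username/password
POST   /servers/1234/action  -> other actions (reboot, resize, rebuild…), selected by the payload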

Sun

Like Rackspace, the Sun Cloud API is explicitly RESTful. And, by virtue of Tim Bray being on board, we benefit from not just seeing the API but also reading, in well-explained detail, the issues, alternatives and choices that went into it. It is pretty similar to the Rackspace API (e.g. the “controller URL” approach mentioned above) but I like it a bit better and not just because the underlying model is richer (and getting richer every day as I just realized by re-reading it tonight). It handles many-as-one management through clusters in a way that is consistent with the direct resource access paradigm. And what you PUT on a resource is closely related to what you GET from it.

I have commented before on the Sun Cloud API (though the increasing richness of their model is starting to make my comments less understandable, maybe I should look into changing the links to a point-in-time version of Kenai). It shows that in the end it’s the model, not the protocol, that matters. And Tim is right to see REST in this case as more of a set of hygiene guidelines for on-the-wire protocols than as the enabler for some unneeded scalability (which takes me back to wondering why the Rackspace guys care so much about caching).

Anything learned?

So, what do these APIs teach us about the practical value of REST for IT/Cloud management?

I haven’t written code against all of them, but I get the feeling that the Sun and Rackspace APIs are those I would most enjoy using (Sun because it’s the most polished, Rackspace because it doesn’t force me to use JSON). The JSON part has two components. One is simply my lack of familiarity with using it compared to XML, but I assume I’ll quickly get over this when I start using it. The second is my concern that it will be cumbersome when the models handled get more complex, heterogeneous and versioned, chiefly from the lack of namespace support. But this is a topic for another day.

I can’t tell if it’s a coincidence that the most attractive APIs to me happen to be the most explicitly RESTful. On the one hand, I don’t think they would be any less useful if all the interactions were replaced by XML RPC calls, where the payloads of the requests and responses correspond to the parameters the APIs define for the different operations. The Sun API could still return resource URLs to me (e.g. a VM URL as a result of creating a VM) and I would send reboot/destroy commands to this VM via XML RPC messages to this URL. How would it matter that everything goes over HTTP POST instead of skillfully choosing the right HTTP verb for each operation? BTW, whether the XML RPC is SOAP-wrapped or not is only a secondary concern.

On the other hand, maybe the process of following REST alone forces you to come up with a clear resource model that makes for a clean API, independently of many of the other REST principles. In this view, REST is to IT management protocol design what classical music training is to a rock musician.

So, at least for the short-term expected usage of these APIs (automating deployments, auto-scaling, cloudburst, load testing, etc) I don’t think there is anything inherently beneficial in REST for IT/Cloud management protocols. What matters is the amount of thought you put into it and that it has a clear on-the-wire definition.

What about longer term scenarios? Wouldn’t it be nice to just use a Web browser to navigate HTML pages representing the different Cloud resources? Could I use these resource representations to create mashups tying together current configuration, metrics history and events from wherever they reside? In other words, could I throw away my IT management console because all the pages it laboriously generates today would exist already in the ether, served by the controllers of the resources? Or rather as a mashup of what is served by these controllers, such that my IT management console is really “in the cloud”, meaning not just running in somebody else’s datacenter but rather assembled on the fly from scattered pieces of information that live close to the resources managed. And wouldn’t this be especially convenient if/when I use a “federated” cloud, one that spans my own datacenter and/or multiple Cloud providers? The scalability of REST could then become more relevant, but more importantly its mashup-friendliness and location transparency would be essential.

This, to me, is the intriguing aspect of using REST for IT/Cloud management. This is where the Sun Cloud API would beat the EC2 API. Tim says that in the Sun Cloud “the router is just a big case statement over URI-matching regexps”. Tomorrow this router could turn into five different routers deployed in different locations and it wouldn’t change anything for the API user. Because they’d still just follow URLs. Unlike all the other APIs listed above, for which you know the instance ID but you need to somehow know which controller to talk to about this instance. Today it doesn’t matter because there is one controller per Cloud and you use one Cloud at a time. Tomorrow? As Tim says, “the API doesn’t constrain the design of the URI space at all” and this, to me, is the most compelling long-term reason to use REST. But it only applies if you use it properly, rather than just calling your whatever-over-HTTP interface RESTful. And it won’t differentiate you in the short term.

The second part in the “REST in practice for IT and Cloud management” series will be about the use of REST for configuration management and especially federation. Where you can expect to read more about the benefits of links (I mean “hypermedia”).

[UPDATE: Part 2 is now available. Also make sure to read the comments below.]

35 Comments

Filed under Amazon, API, Cloud Computing, Everything, IT Systems Mgmt, Manageability, Mgmt integration, REST, SOA, SOAP, SOAP header, Specs, Utility computing, Virtualization

YACSOE

Yet another cloud standards organization effort. This one is better than the others because it has the best domain name.

A press release to announce a Wiki. Sure. Whatever. Electrons are cheap.

Cynicism aside, it can’t hurt. But what would be really useful is if all these working groups opened up their mailing list archives and document repositories so that the Wiki can be a launching pad to actual content rather than a set of one-line descriptions of what each group is supposed to work on. With useful direct links to the most recent drafts and lists of issues under consideration. Similar to the home page of a W3C working group, but across groups. Let’s hope this is a first step in that direction.

I am also interested in where they’ll draw the line between Cloud computing and IT management. If such a line remains.

2 Comments

Filed under Cloud Computing, DMTF, Everything, Grid, Manageability, Mgmt integration, Specs, Standards, Utility computing, Virtualization, W3C

File upload/download and remote program execution using WS-Management – a practical solution

The previous blog post described a way to upload and (in theory at least) download text files to/from a remote Windows machine using WS-Management. In practice, the applicability of the method is limited for upload (text files only, slow for large files) and almost nonexistent for download. Here is a much improved version.

This is another example of something that was too obvious for me to see last weekend when I was in the thick of fighting with WS-Management SOAP messages and learning about WMI classes. It just took a day of not thinking about it for the solution to pop into my mind: use ftp.exe. For the longest time (at least since Windows NT) Windows has been shipping with this FTP client. And the documentation shows that you can call it from the command line and provide it with the name of a text file containing the commands to execute. Bingo.

Specifically, here are the steps. Let’s say that I want to run a program called task.exe on a remote Windows machine and that program takes a large binary file (data.bin) as input. I want to transfer both to the remote machine and then run the program. This can be done in 3 simple steps:

Step 1: upload the FTP command file to the remote Windows machine. The content of the command file is below. mgmtserver.myco.com is the name of the machine from which the two files can be retrieved over FTP. I use anonymous FTP here, but you could just as well provide a username and password.

open mgmtserver.myco.com
anonymous
binary
get task.exe
get data.bin
quit

Step 2: execute the FTP commands above. This downloads task.exe and data.bin from mgmtserver.myco.com onto the remote Windows machine.

Step 3: execute the program on the remote Windows machine (“task.exe data.bin”).

Here are the on-the-wire messages corresponding to each step:

Step 1: upload the FTP command file to the remote Windows machine

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
  xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
  xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
  <s:Header>
    <a:To>http://server:80/wsman</a:To>
    <w:ResourceURI s:mustUnderstand="true">http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process</w:ResourceURI>
    <a:ReplyTo>
    <a:Address s:mustUnderstand="true">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
    </a:ReplyTo>
    <a:Action s:mustUnderstand="true">http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process/Create</a:Action>
    <a:MessageID>uuid:9A989269-283B-4624-BAC5-BC291F72E854</a:MessageID>
  </s:Header>
  <s:Body>
    <p:Create_INPUT xmlns:p="http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process">
      <p:CommandLine>cmd /c echo open mgmtserver.myco.com>ftpscript&amp;&amp;echo
      anonymous>>ftpscript&amp;&amp;echo binary>>ftpscript&amp;&amp;echo get
      task.exe>>ftpscript&amp;&amp;echo get data.bin>>ftpscript&amp;&amp;echo
      quit>>ftpscript</p:CommandLine>
      <p:CurrentDirectory>C:\data\winrm-test</p:CurrentDirectory>
    </p:Create_INPUT>
  </s:Body>
</s:Envelope>

As before, you need to set the Content-Type HTTP header to “application/soap+xml;charset=UTF-8” (or UTF-16).
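
If you want to see what sending this looks like in code, here is a minimal Python sketch using only the standard library; it assumes the listener accepts HTTP Basic authentication (your WinRM configuration may require Negotiate/Kerberos and/or HTTPS instead, which this sketch does not handle), and the credentials are of course placeholders:

import urllib.request

# The Step 1 envelope above, saved to a file
with open("step1.xml", "rb") as f:
    envelope = f.read()

url = "http://server:80/wsman"

# Basic authentication with placeholder credentials
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, "administrator", "password")
opener = urllib.request.build_opener(
    urllib.request.HTTPBasicAuthHandler(password_mgr))

request = urllib.request.Request(
    url,
    data=envelope,  # providing data makes this a POST
    headers={"Content-Type": "application/soap+xml;charset=UTF-8"})

print(opener.open(request).read().decode("utf-8"))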

Step 2: execute the FTP commands to download the files from your server

It’s the same message, except the <p:CommandLine> element now has this value:

<p:CommandLine>ftp -s:ftpscript</p:CommandLine>

Step 3: execute the task.exe program on the remote Windows machine

Again, the same message except that the command line is simply:

<p:CommandLine>C:\data\winrm-test\task.exe data.bin</p:CommandLine>

Note that I have broken this down in three messages for clarity, but you can easily bundle all three steps in one SOAP message. Just use this command line:

<p:CommandLine>cmd /c echo open mgmtserver.myco.com>ftpscript&amp;&amp;echo
anonymous>>ftpscript&amp;&amp;echo binary>>ftpscript&amp;&amp;echo get
task.exe>>ftpscript&amp;&amp;echo get data.bin>>ftpscript&amp;&amp;echo
quit>>ftpscript&amp;&amp;ftp -s:ftpscript&amp;&amp;C:\data\winrm-test\task.exe
data.bin</p:CommandLine>

Of course this can also be used in reverse, to download files from the remote Windows machine rather than upload files to it. Just use PUT or MPUT as FTP commands instead of GET or MGET.
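
For example, a command file for the reverse direction (pushing a hypothetical results.bin from the Windows machine back to your FTP server) would simply be:

open mgmtserver.myco.com
anonymous
binary
put results.bin
quit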

This mechanism is a major improvement, for many use cases, over what I originally described. I feel a bit like someone who just changed a flat tire by loosening the lug nuts with his teeth and then found the lug wrench under the spare tire.

2 Comments

Filed under Everything, Implementation, IT Systems Mgmt, Manageability, Microsoft, Portability, SOAP, Standards, WS-Management

Uploading a file to a Windows machine via WMI/WS-Management

[UPDATED 2009/6/30: Check the following post for a more practical solution.]

Here is a simple way to upload a text (i.e. not binary) file to a Windows machine. Because my interest is to be able to do it from any platform, I investigated the use of WS-Management. But the method relies on invoking WMI methods over WS-Management, so I don’t see why it would not also work in a straight WMI scenario if you prefer.

I am not a Windows management expert, so there may be a much better way to do this (e.g. BITS). But if what you’re after is the simplest possible way to drop a file on a Windows machine from a non-Windows machine, it doesn’t get much simpler than sending an XML doc over HTTP and calling it a day. Here is how.

The easiest would be if the CIM_DataFile WMI class had a “create” method to create a new file. It doesn’t. But Win32_Process does. Invoking this method creates a new process and you get to specify the command line to execute. All you need to do is come up with a command line that invokes a program that will create the file that you want to upload.

There may be alternatives, but the command line I came up with for this purpose uses the “cmd.exe” interpreter (the Windows command-line shell). By using the “/c” option, you can invoke this interpreter with its instructions as parameters directly on the command line (it gets a bit confusing because we have two “command lines” here, the one that is used to launch the “cmd.exe” shell and the one that is presented inside the “cmd.exe” shell).

Anyway, if you type the following line inside the “start/run” field in Windows

cmd /c echo 1st line > test1.txt

It will have the same effect as opening a command shell, typing “echo 1st line > test1.txt” in it and then closing it. It creates a new file called “test1.txt” with one line of content (“1st line”). If you want a second line, you can do this by adding a second command that uses “>>” (append) instead of “>”. And the two commands can be joined by “&&” to invoke them in one pass. So to create a file with three lines, we’d execute:

cmd /c echo 1st line > test1.txt && echo 2nd line >> test1.txt
&& echo 3rd line >> test1.txt

Now all we have to do is package this in a WS-Management SOAP message and post it to the WS-Management listener of the Windows machine. In the process, we have to escape the “&” in the command line to “&amp;” because of XML syntax rules. The resulting message looks like:

<s:Envelope
  xmlns:s="http://www.w3.org/2003/05/soap-envelope"
  xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
  xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
<s:Header>
<a:To>http://localhost/wsman</a:To>
<w:ResourceURI s:mustUnderstand="true">
  http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process
</w:ResourceURI>
<a:ReplyTo>
<a:Address s:mustUnderstand="true">
  http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous
</a:Address>
</a:ReplyTo>
<a:Action s:mustUnderstand="true">
  http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process/Create
</a:Action>
<a:MessageID>uuid:9A989269-283B-4624-BAC5-BC291F72E854</a:MessageID>
</s:Header>
<s:Body>
<p:Create_INPUT
  xmlns:p="http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process">
<p:CommandLine>cmd /c echo 1st line > test1.txt &amp;&amp; echo 2nd line >>
  test1.txt &amp;&amp; echo 3rd line >> test1.txt</p:CommandLine>
<p:CurrentDirectory>C:\data\winrm-test</p:CurrentDirectory>
</p:Create_INPUT>
</s:Body>
</s:Envelope>

You don’t even need a WS-Management toolkit to do this as the only WS-Management header is w:ResourceURI which can easily be set manually. You don’t need a WS-Addressing library either as all the headers are also static (except for the MessageID, even though nobody will care in practice if you always send the same value; I hereby authorize you to re-use the one in my example as much as you want). As a side note, this is yet another illustration of how useless this header (and more generally WS-Addressing) is in 95% of the cases. And yet the Microsoft WS-Management implementation (like many others) will make a point to fault if you don’t send it. But ranting against WS-Addressing is a topic for another day (look for a future post titled “WS-IfInteroperabilityWasEasyItWouldNotBeFunWouldIt”).

I should mention that you want to set the Content-Type HTTP header to “application/soap+xml;charset=UTF-8” for this message. Or UTF-16 if that’s what you’re sending.
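
If you want to script the construction of that <p:CommandLine> element, the escaping has to happen in the right order: build the “cmd” command line first, then XML-escape it. A minimal Python sketch of that; the file names are placeholders, and lines containing cmd metacharacters (“&”, “|”, “>”, “%”) would need additional cmd-level escaping.

# Sketch: turn a small local text file into the <p:CommandLine> payload
# that recreates it on the remote machine with "cmd /c echo ... > file".
from xml.sax.saxutils import escape

def command_line_for(local_path, remote_name):
    lines = open(local_path).read().splitlines()
    parts = []
    for i, line in enumerate(lines):
        redirect = ">" if i == 0 else ">>"  # first line creates the file, the rest append
        parts.append("echo %s %s %s" % (line, redirect, remote_name))
    cmd = "cmd /c " + " && ".join(parts)
    # XML-escape last: & becomes &amp;, etc. Note that, like in the example
    # above, each remote line ends up with a trailing space from the echo syntax.
    return "<p:CommandLine>%s</p:CommandLine>" % escape(cmd)

# print(command_line_for("test1.txt", "test1.txt"))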

A few comments:

  • This obviously only works for character-based files, not binaries
  • I’ve noticed that the parsing of the wsa:Action header is pretty minimalistic. The Microsoft implementation seems to just pick up the text behind the last “/”. So you can send “blahblah/Create” and it works just as well as the correct value, “http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process/Create” (it knows what class to apply the operation on from the Resource URI). Interestingly, there is only one URL ending in “/Create” that doesn’t work and it’s the WS-Transfer “Create” operation (“http://schemas.xmlsoap.org/ws/2004/09/transfer/Create”). That’s because the “Create” operation invoked in the message above is not the WS-Transfer “Create” operation but rather the homonymous operation on the WMI class.
  • Using the “/k” modifier on “cmd” in the command line (instead of “/c”) would also work, but the command shell would stay alive after returning so over time you’d have quite a few of them hanging out and using up memory on the remote machine. Not a good move.
  • As part of this exercise, I noticed an error in the MSDN page describing the “invoke” method of Win32_Process. In the SOAP body, the URI for the “p” namespace prefix uses “…/cim/…” instead of “…/cimv2/…”, which caused my first attempts to fail.

If the file you want to upload is large, you can break the upload over several successive messages similar to the one above. As long as you use the same file name and use “>>” instead of “>” you’ll keep appending to the end of the file until it’s complete.
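
Here is a sketch of that chunking, in the same spirit as the helper above. The 20-lines-per-message batch size is an arbitrary choice; the real constraint is the maximum length of a Windows command line.

# Sketch: split a large text file into several Win32_Process "Create"
# invocations, each appending a batch of lines to the same remote file.
def chunked_command_lines(local_path, remote_name, batch_size=20):
    lines = open(local_path).read().splitlines()
    for start in range(0, len(lines), batch_size):
        parts = []
        for offset, line in enumerate(lines[start:start + batch_size]):
            redirect = ">" if start + offset == 0 else ">>"  # only the very first line creates
            parts.append("echo %s %s %s" % (line, redirect, remote_name))
        yield "cmd /c " + " && ".join(parts)

# for cmd in chunked_command_lines("big.txt", "big.txt"):
#     ...XML-escape cmd and send one SOAP message per batch...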

Of course this could be any type of text file, including XML (watch for the character-escaping rules though, both for XML and for “cmd” as you have to apply them in the right sequence). Even better, it could be a Python, Perl or PowerShell script too. And in that case (assuming the corresponding interpreter is installed on the machine) you can use the same mechanism to also invoke the script for execution, so that this WS-Management interface becomes just a way to bootstrap into a more comfortable remote-control mechanism.

The next logical question (for extra credit) is whether WS-Management can be used to read files remotely instead of writing them. In theory yes, though in practice you’re much better off with alternate solutions, like the remote shell extension to WS-Management that I have described as “dumb SSH” previously.

But since you ask, here is the theory. My first attempt was to do a WS-Management “Get” (the Get operation from WS-Transfer) on an instance of CIM_DataFile (using the “Name” selector and setting it to “C:\data\winrm-test\test1.txt”). But this returns the properties of the file rather than its content. Whether this is kosher is an interesting theoretical question to ponder from a REST-beard-stroking perspective, but it’s useless for my file retrieval purpose. As before, one solution is to use the magical Win32_Process “Create” method to overcome the shortcomings of the CIM_DataFile class. The Windows command shell “type” command can be used to display the content of a text file. But the WMI Win32_Process “create” operation that we use here only returns the processId and a result code, not the stdout stream (unlike the remote shell protocol that I mentioned above). We cannot therefore use it directly to return the output of the “type” command over the wire.
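
For reference, here is roughly what that Get request looks like (a sketch reusing the envelope skeleton from above; the SelectorSet header is how WS-Management identifies the instance, and depending on the implementation you may need to double the backslashes in the path).

# Sketch: WS-Transfer "Get" on a CIM_DataFile instance. As noted above,
# the response only contains the file's properties, not its content.
# Host and file path are placeholders.
GET_DATAFILE = r"""<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
  xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
  xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
  <s:Header>
    <a:To>http://server:80/wsman</a:To>
    <w:ResourceURI s:mustUnderstand="true">http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/CIM_DataFile</w:ResourceURI>
    <a:ReplyTo>
      <a:Address s:mustUnderstand="true">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
    </a:ReplyTo>
    <a:Action s:mustUnderstand="true">http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</a:Action>
    <a:MessageID>uuid:9A989269-283B-4624-BAC5-BC291F72E855</a:MessageID>
    <w:SelectorSet>
      <w:Selector Name="Name">C:\data\winrm-test\test1.txt</w:Selector>
    </w:SelectorSet>
  </s:Header>
  <s:Body/>
</s:Envelope>"""

# post it like the other envelopes (same Content-Type header)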

The solution is to use one Win32_Process “create” operation over WS-Management to write the content of the file in a place where a subsequent WS-Management operation can read it. I can think of two examples off the top of my head: directory names and environment variables.

Here is how you’d do it with directory names. The following command takes the test1.txt file, reads it and creates nested subdirectories, one for each line in the input file. The name of the directory is the content of the corresponding line in the file.

for /f "delims=" %I in (test1.txt) do @mkdir "%I" && cd "%I"

For example, if the file content is

1st line
2nd line
3rd line

The command will generate the following three subdirectories:

1st line
  |_ 2nd line
      |_ 3rd line

What’s the point? You can use WS-Management enumeration to retrieve the names of all directories (using the Win32_Directory WMI class). Now that may be a bit overwhelming, so you want to add a WS-Enumeration filter to your WS-Management request. The Microsoft WS-Management implementation supports the WQL filter syntax that lets you do just that.
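
Here is a sketch of what such a filtered enumeration request could look like. The starred resource URI and the WQL dialect URI are the ones I believe WinRM expects, so double-check the documentation; also remember that WS-Enumeration is a two-step dance, so the response to this Enumerate carries a context that you feed to subsequent Pull requests to actually get the directory names back.

# Sketch: WS-Enumeration "Enumerate" with a WQL filter, to list the
# Win32_Directory instances created by the trick above. URIs and the
# query are assumptions to be checked against the WinRM documentation.
ENUMERATE_DIRS = r"""<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
  xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
  xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd"
  xmlns:n="http://schemas.xmlsoap.org/ws/2004/09/enumeration">
  <s:Header>
    <a:To>http://server:80/wsman</a:To>
    <w:ResourceURI s:mustUnderstand="true">http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/*</w:ResourceURI>
    <a:ReplyTo>
      <a:Address s:mustUnderstand="true">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
    </a:ReplyTo>
    <a:Action s:mustUnderstand="true">http://schemas.xmlsoap.org/ws/2004/09/enumeration/Enumerate</a:Action>
    <a:MessageID>uuid:9A989269-283B-4624-BAC5-BC291F72E856</a:MessageID>
  </s:Header>
  <s:Body>
    <n:Enumerate>
      <w:Filter Dialect="http://schemas.microsoft.com/wbem/wsman/1/WQL">SELECT Name FROM Win32_Directory WHERE Drive = 'C:' AND Path LIKE '%winrm-test%'</w:Filter>
    </n:Enumerate>
  </s:Body>
</s:Envelope>"""

# the EnumerationContext in the response goes into a follow-up Pull request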

BTW, you can presumably do the same thing with files, but directories, through their nesting, make it easy to read the lines in the order in which they appear in the file. Though you’d quickly run into path length limitations (and characters that are not valid in file/directory names).

A slightly more robust approach may be to set each line of the file in an environment variable (again via the “for”, and using “set” after the “do”). You can then read these environment variables over WS-Management by doing a WS-Transfer Get on the Win32_Environment WMI class. Unlike CIM_DataFile (for which Get only returns properties, not the content), a Get on Win32_Environment includes the value of the environment variable as one of the properties. The pragmatic reasons for this dichotomy are obvious, but the architectural consequences will give a headache to anyone who still has any illusion that WS-Transfer has anything to do with REST.

As a side note, the “for” instruction can keep no more than 52 variables at a time, so if your file has more than 52 lines you’d have to send successive WS-Management requests and add a “skip” option to the “for” operation on subsequent requests (“skip=52”, “skip=104”, etc…). Again, practicality isn’t much of a concern here, we’re just playing with theory (Ed: “we”? how many people do you expect will still be reading at this point?).

That’s it for today’s episode of “Windows management for the on-the-wire-protocol guy”. Maybe next weekend I’ll take some time to look more into the remote shell over WS-Management protocol extension and how it can be misused/abused.

[UPDATE: The next post describes a more practical approach.]

5 Comments

Filed under DMTF, Everything, Implementation, IT Systems Mgmt, Manageability, Microsoft, SOAP header, Specs, Standards, WS-Management

Native “SSH” on Windows via WS-Management

Did you know that you can now SSH to a Windows machine over WS-Management, and that it is a documented protocol that can be implemented from any platform and programming language? This is big news to me and I am surprised that, as a management protocol geek, I hadn’t heard about it until I started to search MSDN for a related but much smaller feature (file transfer over WS-Management).

OK, so it’s not exactly SSH but it is a remote shell. In fact it comes in two flavors, which I think of as “dumb SSH” and “super SSH”.

Dumb SSH

Dumb SSH is the ability to remotely run a DOS-like command shell over WS-Management. Anyone who has had to use the Windows command shell as a scripting language ersatz understands why I call it “dumb”. I expect that even in Microsoft most would agree (otherwise why would they have created PowerShell?).

Still, you can do quite a few basic things using the Windows command shell and being able to do them remotely is not something to sneer at if you’re building a management product. If you’re interested, you need to read MS-WSMV, the WS-Management Protocol Extensions for Windows Vista specification (available here as a PDF). From the name of the specification, I expected a laundry list of tweaks that the WS-Management and WS-CIM implementation in Vista makes on top of the standards (e.g. proprietary extensions, default values, unsupported features, etc). And there is plenty of that, in sections 3.1, 3.2 and 3.3. The kind of “this is my way” decisions that you’d come to expect from Microsoft on implementing standards. A bit frustrating when you know that they pretty much wrote the standard but at least it’s well documented. Plus, being one of those that forced a few changes in WS-Management between the Microsoft submission and the DMTF standard (under laments from Microsoft that “it’s too late to change Longhorn”) I am not really in a position to complain that “Longhorn” (now Vista) indeed deviates from the standard.

But then we get to section 3.4 and we enter a new realm. These are not tweaks to WS-Management anymore. It’s a stateful tunneling protocol going over WS-Management, complete with base-64-encoded streams (stdin, stdout, stderr) and signals. It gives you all you need to run a remote command shell over WS-Management. In addition to the base Windows command shell, it also supports “custom remote shells”, which lets you leverage the tunneling mechanism for another protocol than the one made of Windows shell commands. For example, you could build an HTTP emulation over this on top of which you could run WS-Management on top of which… you know where this is going, don’t you?

A more serious example of such a “custom remote shell” is PowerShell, which takes us to…

Super SSH

Imagine SSH with the guarantee that the shell that you log into on the other side was a Python interpreter, complete with full access to the server’s management API. I think that would qualify as “super SSH”, at least for IT management purposes (not so exciting if all you want to do is check your email with mutt). This is equivalent to what you get when the remote shell invoked over WS-Management (or rather WS-Management plus the Vista extensions described above) is PowerShell instead of the Windows command shell. I have always liked PowerShell but it hasn’t really been all that relevant to me (other than as a design study) because of its ties to the Windows platform. Now, thanks to MS-PSRP, the PowerShell Remoting Protocol specification (PDF here) we are only a good Java (or Python, or Ruby) library away from being able to invoke PowerShell commands from any language, anywhere.

I have criticized over-reliance on libraries to shield developers from XML for tasks that really would be much better handled by simply learning to use XML. But in this case we really need a library because there is quite a bit of work involved in this protocol, most of which has nothing to do with XML. We have to fragment/defragment packets, compress/decompress messages, not to mention the security aspects. At this point you may question what the value of doing all this on top of WS-Management is, for which I respectfully redirect you to your local Microsoft technology evangelist, MVP or, as a last resort, sales representative.

Even if PowerShell is not your scripting language of choice, you can at least use it to create a bootstrap mechanism that will install whatever execution engine you want (e.g. Ruby) and download scripts from your management server. At which point you can sign out of PowerShell. For some reason, I get the feeling that we just got one step closer to Puppet managing Windows machines.

A few closing comments

First, while the MS-WSMV part that lets you run a basic command shell seems already available (Vista SP1, Win2K3R2, Win2K8, etc), the PowerShell part is a lot greener. The MS-PSRP specification is marked “preliminary” and the supported platform list only contains Windows 7 and Win2K8R2. Nevertheless, the word from Microsoft is that they have the intention to make this available on XP and above shortly after Windows 7 comes out. Let’s hope this is the case, otherwise this technology will remain largely irrelevant for years to come.

The other caveat comes from the standard angle. In this post, I only concern myself with the technical aspects. If you want to implement these specifications you have to also take into account that they are proprietary specifications with no IP grant (“Microsoft has patents that may cover your implementations of the technologies described in the Open Specifications. Neither this notice nor Microsoft’s delivery of the documentation grants any licenses under those or any other Microsoft patents”) and fully controlled by Microsoft (who could radically change or kill them tomorrow). As to whether Microsoft plans to eventually standardize them, I would again refer you to your friendly local Microsoft representative. I can just predict, based on the content of the specification, that it would make for some interesting debates in the DMTF (or wherever they may go).

This is a big step towards the citizenship of Windows machines in an automated datacenter (and, incidentally, an endorsement for the “these scripts have to grow up” approach to automation). As Windows comes to parity with Unix in remote scripting abilities, the only question remaining (well, in addition to the pesky license) will be “why another mechanism”. Which could be solved either via standardization of MS-PSRP, de-facto adoption (PowerShell on Suse Linux is only one Microsoft-to-Novell check away) or simply using PowerShell as just a bootstrapping mechanism for Puppet or others, as mentioned above.

[UPDATE: On a related topic, these two posts describe ways to transfer files over WS-Management.]

8 Comments

Filed under Automation, DMTF, Everything, Implementation, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Portability, Specs, Standards, WS-Management

Interesting links

A few interesting links I noticed tonight.

HP Delivers Industry-first Management Capabilities for Microsoft System Center

That’s not going to improve the relationship between the Insight Control group (part of the server hardware group, of Compaq heritage) and the BTO group (part of HP Software, of HP heritage plus many acquisitions) in HP.  The Microsoft relationship was already a point of tension when they were still called SIM and OpenView, respectively.

CA Acquires Cassatt

Constructive destruction at work.

Setting up a load-balanced Oracle Weblogic cluster in Amazon EC2

It’s got to become easier, whether Oracle or somebody else does it. In the meantime, this is a good reference.

[UPDATED 2009/07/12: If you liked the “WebLogic on EC2” article, check out the follow-up: “Full Weblogic Load-Balancing in EC2 with Amazon ELB”.]


Comments Off on Interesting links

Filed under Amazon, Application Mgmt, Automation, CA, Cloud Computing, Everything, HP, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Middleware, Oracle, Utility computing, Virtualization

Oracle buys Virtual Iron

The rumor had some legs. Oracle announced today that it has acquired Virtual Iron for its virtualization management technology. This publicly-available white paper is a great description of the technology and product capabilities.

Here is a short overview (from here).

VI-Center provides the following capabilities:

  • Physical infrastructure: Physical hardware discovery, bare metal provisioning, configuration, control, and monitoring
  • Virtual Infrastructure: Virtual environment creation and hierarchy, visual status dashboards, access controls
  • Virtual Servers: Create, Manage, Stop, Start, Migrate, LiveMigrate
  • Policy-based Automation: LiveCapacity™, LiveRecovery™, LiveMaintenance, Rules Engine, Statistics, Event Monitor, Custom policies
  • Reports: Resource utilization, System events

Interesting footnote: I read that SAP Ventures was an investor in Virtual Iron…

I also notice that the word “cloud” does not appear once in the list of all press releases issued by Virtual Iron over three years. For a virtualization start-up, that’s a pretty impressive level of restraint and hype resistance.

1 Comment

Filed under Everything, IT Systems Mgmt, Manageability, Mgmt integration, Oracle, Virtualization, Xen

Hyperic joins SpringSource

SpringSource’s Rod Johnson tells us today that his company just bought Hyperic. The press release is a bit more specific, announcing that SpringSource acquired “substantially all of the assets of Hyperic”, which sounds different from acquiring the company itself. Maybe not for SpringSource customers, but possibly for current Hyperic customers (and investors). Acquiring the assets of an open source company may sound like a bit of an oxymoron (though I understand it’s not just about the source code), but Hyperic is what’s called an “open core” company, which means not all the code is open source (see Tarus’ take on it). But the main difference between this and forking might be that you are getting the key employees, who are nice enough to their investors to do it in an orderly way.

Anyway, this is not a business or HR blog, it’s about the technology. And on that front, this looks like an interesting way for SpringSource to expand their monitoring from just the application down into some parts of the infrastructure, at least to some extent. SpringSource’s AMS (Application Management Suite) was already based on Hyperic, so the integration headaches should be minimal. And Hyperic has been doing some Cloud monitoring work too (see this podcast if you want to learn more about it), which if nothing else is PR gold these days (I am not saying it’s just that, but it is that for sure).

As a side note, it is ironic that Hyperic (which started inside Covalent until Javier Soltero spun it off and became its CEO) is now reunited with its mothership (SpringSource acquired Covalent last year).

I am a big proponent of management capabilities in application infrastructure. I applauded Rod Johnson for writing something along the same line last year and I am pleased to see him really push this approach with this acquisition.

Here are the questions that come to my mind when I read about this deal (keep in mind that this is competition from my perspective, so feel free to “question my questions” as you read):

I was going to ask whether this acquisition means that Hyperic users who don’t care for Spring are going to see diminishing value as the product becomes more tied to Spring. But if you look at what Hyperic gives you on the resources it manages, it’s mainly a list of metrics and a few control operations. These will still be there because they’ll be needed for the Spring-centric view anyway. It would be more of a question if Hyperic had advanced discovery features (e.g. examine all the config files of the managed resources and extract infrastructure topology from them). I would wonder if these would still be maintained/improved for non-Spring middleware. But again, not an issue here since I don’t think there is much of this in Hyperic today. And since presumably SpringSource made the acquisition in part to cover more resources types in their management offering (Rod talks about DB and VM management in his post), the list of supported infrastructure elements (OS, DB, VM, network…) will presumably grow rather than shrink. What may be trimmed down eventually is the list of application runtimes currently supported. If you’re a Hyperic/Coldfusion user you should probably attend the upcoming webcast to hear about the plans.

Still on the topic of Hyperic’s monitoring-only capabilities, it means that if Rod Johnson really wants to provide everything for Java developers to put “applications into production without the mediation of operations”, as he says, then he should keep his checkbook open (as a side note, if a developer puts “applications into production” then s/he doesn’t bypass operations but rather becomes operations; you may not think of yourself as one, but if you’re the one who gets called when the application crashes then you are in “operations”). SpringSource is still a long way from offering the complete picture. Here are my guesses for the management features on Rod’s grocery list:

  • configuration management – many potential acquisition candidates
  • in-depth database management (going beyond the “you want metrics? we’ve got metrics!” approach to DB management) – fewer candidates

As far as in-house development goes, I would expect this acquisition to first yield some auto-discovery of application (and infrastructure) topology in a Spring environment. Then they’ll have to decide if they want to double-down on Cloud support and build/buy more automation features or rather focus on application-centric management and join the fray of BTM / transaction tracing. Doing both at the same time would be very ambitious. This Register article seems to imply the former (Cloud) but my guess is that SpringSource will make the smart choice of focusing on the latter (application-centric management). I see in the Register that, “Peter Cooper-Ellis, SpringSource’s senior vice president of engineering and product management called management of the cloud and virtualized datacenters a strategic driver for the deal”. But this sounds more like telling a buzzword-hungry reporter what he wants to hear rather than actual strategy to me. We’ll see. I hope this acquisition and its follow-through will help move the industry in the right direction of application-centric management, something that will take more than one company.

[UPDATED 2009/5/7: A nice article on the acquisition by Charles Humble at InfoQ. Though I have to take issue with the assertion that “many aspects of monitoring that are essential in a data centre, such as OS and network monitoring, are irrelevant in the context of the cloud”.]

[UPDATED 2009/6/23: Via Coté, an announcement that shows that the Cloud angle might have more post-aquisition juice than I expected. Unless this thing coasted on momentum alone.]

3 Comments

Filed under Application Mgmt, Business, Cloud Computing, Everything, IT Systems Mgmt, Manageability, Middleware, Open source, Spring, Utility computing

A post-mortem on the previous IT management revolution

Before rushing to standardize “Cloud APIs”, let’s take a look back at the previous attempt to tackle the same problem, which is one of IT management integration and automation. I am referring to the definition of specifications that attempted to use the then-emerging SOAP-based Web services framework to easily integrate IT management systems and their targets.

Leaving aside the “Cloud” spin of today and the “Web services” frenzy of yesterday, the underlying problem remains to provide IT services (mostly applications) in a way that offers the best balance of performance, availability, security and economy. Concretely, it is about being able to deploy whatever IT infrastructure and application bits need to be deployed, configure them and take any required ongoing action (patch, update, scale up/down, optimize…) to keep them humming so customers don’t notice anything bothersome and you don’t break any regulation. Or rather so that any disruption a customer sees and any mandate you violate cost you less than it would have cost to avoid them.

The realization that IT systems are moving more and more towards distributed/connected applications was the primary reason that pushed us towards the definition of Web services protocols geared towards management interactions. By providing a uniform and network-friendly interface, we hoped to make it convenient to integrate management tasks vertically (between layers of the IT stack) and horizontally (across distributed applications). The latter is why we focused so much on managing new entities such as Web services, their execution environments and their conversations. I’ll refer you to the WSMF submission that my HP colleagues and I made to OASIS in 2003 for the first consistent definition of such a management framework. The overview white paper even has a use case called “management as a service” if you’re still not convinced of the alignment with today’s Cloud-talk.

Of course there are some differences between Web service management protocols and Cloud APIs. Virtualization capabilities are more advanced than when the WS effort started. The prospect of using hosted resources is more realistic (though still unproven as a mainstream business practice). Open source components are expected to play a larger role. But none of these considerations fundamentally changes the task at hand.

Let’s start with a quick round-up and update on the most relevant efforts and their status.

Protocols

WSMF (Web Services Management Framework): an HP-created set of specifications, submitted to the OASIS WSDM working group (see below). Was subsumed into WSDM. Not only a protocol BTW, it includes a basic model for Web services-related artifacts.

WS-Manageability: An IBM-led alternative to parts of WSDM, also submitted to OASIS WSDM.

WSDM (Web Services Distributed Management): An OASIS technical committee. Produced two standards (a protocol, “Management Using Web Services” and a model of Web services, “Management Of Web Services”). Makes use of WSRF (see below). Saw a few implementations but never achieved real adoption.

OGSI (Open Grid Services Infrastructure): A GGF (the organization now known as OGF) standard to provide a service-oriented resource manipulation infrastructure for Grid computing. Replaced with WSRF.

WSRF: An OASIS technical committee which produced several standards (the main one is WS-ResourceProperties). Started as an attempt to align the GGF/OGSI approach to resource access with the IT management approach (represented by WSDM). Saw some adoption and is currently quietly in use under the covers in the GGF/OGF space. Basically replaced OGSI but didn’t make it in the IT management world because its vehicle there, WSDM, didn’t.

WS-Management: A DMTF standard, based on a Microsoft-led submission. Similar to WSDM in many ways. Won the adoption battle with it. Based on WS-Transfer and WS-Enumeration.

WS-ResourceTransfer (aka WS-RT): An attempt to reconcile the underlying foundations of WSDM and WS-Management. Stalled as a private effort (IBM, Microsoft, HP, Intel). Was later submitted to the W3C WS-RA working group (see below).

WSRA (Web Services Resource Access): A W3C working group created to standardize the specifications that WS-Management is built on (WS-Transfer etc) and to add features to them in the form of WS-RT (which was also submitted there, in order to be finalized). This is (presumably) the last attempt at standardizing a SOAP-based access framework for distributed resources. Whether the window of opportunity to do so is still open is unclear. Work is ongoing.

WS-ResourceCatalog: A discovery helper companion specification to WS-Management. Started as a Microsoft document, went through the “WSDM/WS-Management reconciliation” effort, emerged as a new specification that was submitted to DMTF in May 2007. Not heard of since.

CMDBf (Configuration Management Database Federation): A DMTF working group (and soon to be standard) that mainly defines a SOAP-based protocol to query repositories of configuration information. Not linked with (or dependent on) any of the specifications listed above (it is debatable whether it belongs in this list or is part of a new breed).

Modeling

DCML (Data Center Markup Language): The first comprehensive effort to model key elements of a data center, their relationships and their policies. Led by EDS and Opsware. Never managed to attract the major management vendors. Transitioned to an OASIS member section and died of being ignored.

SDM (System Definition Model): A Microsoft specification to model an IT system in a way that includes constraints and validation, with the goal of improving automation and better linking the different phases of the application lifecycle. Was the starting point for SML.

SML (Service Modeling Language): Currently a W3C “proposed recommendation” (soon to be a recommendation, I assume) with the same goals as SDM. It was created, starting from SDM, by a consortium of companies that eventually submitted it to W3C. No known adoption other than the Eclipse COSMOS project (Microsoft was supposed to use it, but there hasn’t been any news on that front for a while). Technically, it is a combination of XSD and Schematron. It appears dead, unless it turns out that Microsoft is indeed using it (I don’t know whether System Center is still using SDM, whether they are adopting SML, whether they are moving towards M or whether they have given up on the model-centric vision).

CML (Common Model Library): An effort by the SML authors to create a set of model elements using the SML metamodel. Appears to be dead (no news in a long time and the cml-project.org domain name that was used seems abandoned).

SDD (Solution Deployment Descriptor): An OASIS standard to define a packaging mechanism meant to simplify the deployment and configuration of software units. It is to an application archive what OVF is to a virtual disk. Little adoption that I know of, but maybe I have a blind spot on this.

OVF (Open Virtualization Format): A recently released DMTF standard. Defines a packaging and descriptor format to distribute virtual machines. It does not define a common virtual machine format, but rather a wrapper around one. Seems to have some momentum. Like CMDBf, it may be best thought of as part of a new breed rather than directly associated with WS-Management and friends.

This is not an exhaustive list. I have left aside the eventing aspects (WS-Notification, WS-Eventing, WS-EventNotification) because, while relevant, it is a larger discussion and this entry is too long already (see here and here for some updates from late last year on the eventing front). It also does not cover the Grid work (other than OGSI/WSRF to the extent that they intersect with the IT management world), even though a lot of the work that took place there is just as relevant to Cloud computing as the IT management work listed above. Especially CDDLM/CDL, an abandoned effort to port SmartFrog to the then-hot XML standards, from which there are plenty of relevant lessons to extract.

The lessons

What does this inventory tell us that’s relevant to future Cloud API standardization work? The first lesson is that protocols are easy and models are hard. WS-Management and WSDM technically get the job done. CMDBf will be a good query language. But none of the model-related efforts listed above seem to have hit the mark of “doing the job”. With the possible exception of OVF, which is promising (though the current expectations on it are often beyond what it really delivers). In general, the more focused and narrow a modeling effort is, the more successful it seems to be (with OVF as the most focused of the list and CML as the other extreme). That’s lesson learned number two: models that encompass a wide range of systems are attractive, but impossible to deliver. Models that focus on a small sub-area are the way to go. The question is whether these specialized models can at least share a common metamodel or other base building blocks (a type system, a serialization, a relationship model, a constraint mechanism, etc), which would make life easier for orchestrators. SML tries (tried?) to be all that, with no luck. RDF could be all that, but hasn’t managed to get noticed in this context. The OVF and SDD examples seem to point out that the best we’ll get is XML as a shared foundation (a type system and a serialization). At this point, I am ready to throw in the towel on achieving more modeling uniformity than XML provides, and ready to do the needed transformations in code instead. At least until the next window of opportunity arrives.

I wish that rather than being 80% protocols and 20% models, the effort in the WS-based wave of IT management standards had been the other way around. So we’d have a bit more to show for our work, for example a clear, complete and useful way to capture the operational configuration of application delivery services (VPN, cache, SSL, compression, DoS protection…). Even if the actual specification turns out to not make it, its content should be able to inform its successor (in the same way that even if you don’t use CIM to model your server it is interesting to see what attributes CIM has for a server).

It’s less true with protocols. Either you use them (and they’re very valuable) or you don’t (and they’re largely irrelevant). They don’t capture domain knowledge that’s intrinsically valuable. What value does WSDM provide, for example, now that it’s collecting dust? How much will the experience inform its successor (other than trying to avoid the WS-Addressing disaster)? The trend today seems to be that a more direct use of HTTP (“REST”) will replace these protocols. Sure. Fine. But anyone who expects this break from the past to be a vaccination against past problems is in for a nasty surprise. Because, and I am repeating myself, it’s the model, stupid. Not the protocol. Something I (hopefully) explained in my comments on the Sun Cloud API (before I knew that caring about this API might actually become part of my day job) and something I’ll come back to in a future post.

Another lesson is the need for clear use cases. Yes, it feels silly to utter such an obvious statement. But trust me, standards groups still haven’t gotten this. It took years spent on WSDM and then WS-Management before I realized that most people were not going after management integration, as I was, but rather manageability. Where “manageability” is concerned with discovering and monitoring individual resources, while “management integration” is concerned with providing a systematic view of the environment, with automation as the goal. In other words, manageability standards can allow you to get a traditional IT management console without the need for agents. Management integration standards can allow you to coordinate your management systems and automate their orchestration. WS-Management is for manageability. CMDBf is in the management integration category. Many of the (very respectful and civilized) head-butting sessions I engaged in during the WSDM effort can be traced back to the difference between these two sets of use cases. And there is plenty of room for such disconnect in the so-loosely-defined “Cloud” world.

We have also learned (or re-learned) that arbitrary non-backward compatible versioning, e.g. for political or procedural reasons as with WS-Addressing, is a crime. XML namespaces (of the XSD and WSDL types, as well as URIs used in similar ways in specifications, e.g. to identify a dialect or profile) are tricky, because they don’t have backward compatibility metadata and because of the practice of using organizations’ domain names in the URI (as opposed to specification-specific names that can be easily transferred, e.g. cmdbf.org versus dmtf.org/cmdbf). In the WS-based management world, we inherited these problems at the protocol level from the generic WS stack. Our hands are more or less clean, but only because we didn’t have enough success/longevity to generate our own versioning problems, at the model level. But those would have been there had these models been able to see the light of day (CML) or see adoption (DCML).

There are also practical lessons that can be learned about the tactics and strategies of the main players. Because it looks like they may not change very much, as corporations or even as individuals. Karla Norsworthy speaks for IBM on Cloud interoperability standards in this article. Andrew Layman represented Microsoft in the post-Manifestogate Cloud patch-up meeting in New York. Winston Bumpus is driving the standards strategy at VMWare. These are all veterans of the WS-Management, WSDM and related wars collaborations (and more generally the whole WS-* effort for the first two). For the details of what there is to learn from the past in that area, you’ll have to corner me in a hotel bar and buy me a few drinks though. I am pretty sure you’d get your money’s worth (I am not a heavy drinker)…

In summary, here are my recommendations for standardizing Cloud API, based on lessons from the Web services management effort. The theme is “focus on domain models”. The line items:

  • Have clear goals for each effort. E.g. is your use case to deploy and run an existing application in a Cloud-like automated environment, or is it to create new applications that efficiently take advantage of the added flexibility? Very different problems.
  • If you want to use OVF, then beef it up to better apply to Cloud situations, but keep it focused on VM packaging: don’t try to grow it into the complete model for the entire data center (e.g. a new DCML).
  • Complement OVF with similar specifications for other domains, like the application delivery systems listed above. Informally try to keep these different specifications consistent, but don’t over-engineer it by repeating the SML attempt. It is more important to have each specification map well to its domain of application than it is to have perfect consistency between them. Discrepancies can be bridged in code, or in a later incarnation.
  • As you segment by domain, as suggested in the previous two bullets, don’t segment the models any further within each domain. Handle configuration, installation and monitoring issues as a whole.
  • Don’t sweat the protocols. HTTP, plain old SOAP (don’t call it POS) or WS-* will meet your needs. Pick one. You don’t have a scalability challenge as much as you have a model challenge, so don’t get distracted here. If you use REST, do it in the mindset that Tim Bray describes: “If you’re going to do bits-on-the-wire, Why not use HTTP? And if you’re going to use HTTP, use it right. That’s all.” Not as something that needs to scale to Web scale or as a rebuff of WS-*.
  • Beware of versioning. Version for operational changes only, not organizational reasons. Provide metadata to assert and encourage backward compatibility.

This is not a recipe for the ideal result but it is what I see as practically achievable. And fault-tolerant, in the sense that the failure of one piece would not negate the value of the others. As much as I have constrained expectations for Cloud portability, I still want it to improve to the extent possible. If we can’t get a consistent RDF-based (or RDF-like in many ways) modeling framework, let’s at least apply ourselves to properly understanding and modeling the important areas.

In addition to these general lessons, there remains the question of what specific specifications will/should transition to the Cloud universe. Clearly not all of them, since not all of them even made it in the “regular” IT management world for which they were designed. How many then? Not surprisingly (since IBM had a big role in most of them), Karla Norsworthy, in the interview mentioned above, asserts that “infrastructure as a service, or virtualization as a paradigm for deployment, is a situation where a lot of existing interoperability work that the industry has done will surely work to allow integration of services”. And just as unsurprisingly Amazon’s Adam Selipsky, whose company has nothing to do with the previous wave but finds itself in a leadership position with respect to Cloud computing, is a lot more circumspect: “whether existing standards can be transferred to this case [of cloud computing] or if it’s a new topic is [too] early to say”. OVF is an obvious candidate. WS-Management is by far the most widely implemented of the bunch, so that gives it an edge too (it is apparently already in use for Cloud monitoring, according to this press release by an “innovation leader in automated network and systems monitoring software” that I had never heard of). Then there is the question of what IBM has in mind for WS-RT (and other specifications that the WS-RA working group is toiling on). If it’s not used as part of a Cloud API then I really don’t know what it will be used for. But selling it as such is going to be an uphill battle. CMDBf is a candidate too, as a model-neutral way to manage the configuration of a distributed system. But here I am, violating two of my own recommendations (“focus on models” and “don’t isolate config from other modeling aspects”). I guess it will take another pass to really learn…

[UPDATED 2009/5/7: Senior moment! When writing this entry I forgot that I wrote an earlier entry (in late 2007) specifically to describe the difference between “manageability” and “management integration”. So here it is, if you care for more details on this topic.]

5 Comments

Filed under Automation, Cloud Computing, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Modeling, People, Portability, REST, SML, SOAP, Specs, Standards, Utility computing, Virtualization, WS-Management, WS-ResourceCatalog, WS-ResourceTransfer

New release of Oracle Composite Application Monitor and Modeler

Version 10.2.0.5 of OCAMM (Oracle Composite Application Monitor and Modeler) was recently released. This is the ex QuickVision product from the ClearApp acquisition last year. It is doing very well in the Enterprise Manager portfolio. In this new release, it has grown to cover additional middleware and application products. So now, in addition to the existing support (J2EE, BPEL, Portal…), it lets you model and monitor deployments of the Oracle Service Bus (OSB) as well as AIA process integrations. Those metadata-rich environments fit very naturally in the OCAMM approach of discovering the distributed model from metadata and then collecting and reporting usage metrics across the whole system in the context of that model.

For a complete list of improvements over release 10.2.0.4.2, refer to the release notes, where we see this list of new features:

  • Oracle Service Bus Support: Oracle Service Bus (formerly called Aqualogic Service Bus) versions 2.6, 2.6.1, 3.0, and 10gR3 are now supported.
  • Oracle AIA Support: Oracle Application Integration Architecture (AIA) 2.2.1 and 2.3 are now supported.
  • WLS and WLP Support: WebLogic Server and WebLogic Portal version 10.3 are now supported.
  • Agent Deployment Added: Agent deployment added to CAMM Administration UI to streamline configuration of new resources
  • Testing Added in Resource Configuration: Connectivity parameter testing added in resource configuration
  • Testing Added in Repository Configuration: Connectivity parameter testing added for CAMM database repository configuration
  • EJB and Web Services Support: EJB and Web Services Annotation are now supported

You can get the bits on OTN. Here is the installation and configuration guide and here is the user’s guide.

Comments Off on New release of Oracle Composite Application Monitor and Modeler

Filed under Application Mgmt, BPEL, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Middleware, Modeling, Oracle

WS-Discovery and WS-DeviceProfile public review

We are in the middle of the public review period for the three specifications coming out of the “Web Services Discovery and Web Services Devices Profile (WS-DD)” OASIS technical committee. The specifications are:

  • WS-Discovery (Web Services Dynamic Discovery)
  • DPWS (Devices Profile for Web Services)
  • SOAP-over-UDP

This trio has been swirling around for many years. Mostly in the USB/PnP/UPnP community (devices physically attached to Windows boxes) but they have popped up on the enterprise WS-* radar screen from time to time, for two reasons:

  • WS-DD as another specification (in addition to WS-Management and the later incarnations of WS-MeX) that uses WS-Transfer, and therefore as a (poor, in my view) justification for making it a stand-alone specification
  • WS-Discovery as potentially useful for datacenter management (for resource discovery, obviously)

Looking at the members list for the technical committee seems to confirm the suspicion that many members would be more interested in the potential datacenter-related applications (of WS-Discovery, presumably) than the USB gadgets use cases: CA, IBM, Novell, Progress, RedHat, Software AG, WSO2. Well, I guess Novell and RedHat could be in the game from the perspective of plugging devices into desktops running their version of Linux rather than from a datacenter perspective. CA and IBM could be in it from an Asset Management (for devices) perspective too. So who knows (if you had no tolerance for idle speculations you wouldn’t be reading blogs, would you?).

In any case, no Apple, Palm, RIM, Google (Android) or Sun (JavaFX). I guess handheld devices aren’t the devices targeted here. Rather it’s more printers and cameras. In which case you have to wonder why HP isn’t there but that’s another question (I’ll pretend not to know).

No point dwelling on this, especially since the member list is a very imperfect representation of the actual participation. As an alternative we can scan the mailing list archive to see who is active. Judging from the last two months (feb 2009, march 2009) it’s Microsoft, Microsoft and that company in Redmond. Microsoft, as we know, created these specifications from the start and they seem to still be the engine.

In the end it remains to be seen if WS-Discovery has any role to play in datacenter infrastructure discovery. That environment is clearly becoming more dynamic, but the most frequent changes will take place at the level of the guest VM (creation of new instances) and above (applications, services…) and all these creations will presumably be orchestrated by some controller so there is little need for a “hello world” resource-initiated discovery mechanism for them.

Which leaves the physical equipment. Do we need WS-Discovery to get from the point of a server being plugged in to the point where it has been provisioned? There are alternatives in this somewhat homogeneous environment. And UDP multicast only takes you so far in a datacenter (at least without extensions). So my guess is no. But who knows. The drafts are here for you to review if you think it may be relevant.

Comments Off on WS-Discovery and WS-DeviceProfile public review

Filed under Automation, Everything, IT Systems Mgmt, Manageability, Microsoft, Specs, Standards

Managing the stack from top to bottom, including virtualization

The press release for the release of Oracle Enterprise Manager 10gR5 came out yesterday, but that’s not all: the Oracle VM Management Pack for Enterprise Manager was also announced yesterday. What this illustrates is that, in addition to the commonly-cited “one neck to choke” benefit of getting the entire stack from one vendor (from the hypervisor to the application, including the OS, DB and MW), there is also the benefit of getting a unified management environment for the whole stack. Here is how my friend and Oracle colleague Adam Hawley (director of product management for Oracle VM and previously with Enterprise Manager) describes it in more details:

So what’s so big about it and why does this give us a clear advantage over others?

  • No other company can offer management of the virtualization AND the workload that runs inside the virtualization at this depth and scale: not anyone. We now offer a single management product…Enterprise Manager Grid Control…that manages your entire data center from top-to-bottom:  from the packaged application layer (Siebel, PeopleSoft, Beehive, etc.) through all the middleware and database layers to the OS and virtualization itself. And we do that for both the physical and virtual worlds together, seamlessly.

    • Other virtualization vendors either ONLY do virtualization management or to the extent they do anything else, it is typically one other category in the stack…virtualization plus the OS or virtualization plus some very specific applications (but no OS…), etc.
    • No one else can provide the entire picture the way we can with Oracle VM
  • So what does that mean for users?
    • It means Oracle VM is virtualization with a difference:
      • It is virtualization that makes application workloads faster, easier, and less error prone to deploy with Oracle VM Templates as pre-built, pre-configured VMs containing complete product solutions maintained in a central software library for easy re-use:  download from Oracle, import the VMs, use the product.  Simple.
      • It is virtualization that makes workloads easier to configure and manage:  Automate deployment of the VMs, installation of the management agent, and enable powerful, in-depth monitoring of guests and Oracle VM Servers including configuration management…
        • Set-up configuration policies to track how your VMs and servers are configured and to alert you if that configuration changes or “drifts” over time
        • What about if you have one VM running perfectly and another supposedly identical one not doing as well?  Run a configuration compare to check for differences not only in packages or application versions in the VM, but also down to OS parameter settings and other key items to rapidly identify differences and address them from the same console
      • It is virtualization that makes workloads easier to troubleshoot and support:

        • Not only is Oracle VM support very affordable compared to anyone out there, management of Oracle VM servers in Enterprise Manager makes it so much easier to rapidly track down issues across the layers of your data center from one UI. With other vendors, to troubleshoot an issue with applications or the database, you have to trace it down through your environment, possibly to the virtual machine, but then how do you get all the info about the VM itself, like its parameters and which physical server it is hosted on?  You have to jump to another tool entirely… whatever stand-alone tools you are using to manage the virtualization layer… to get the information and then go back-and-forth:  tedious and time consuming. With Enterprise Manager, it is all there in one UI.  Need to tweak the number of virtual CPUs based on your database performance analysis report indicating a CPU bottleneck?  Navigate from the performance page for the database to the home page of that virtual machine and adjust the configuration in the same UI.  Done.  Well, OK, you may have to restart the application for the new vCPU setting to take effect but you can still do all that within Enterprise Manager, saving time and minimizing risks.
        • This can dramatically reduce the time to troubleshoot as well as reduce the chances of human error navigating between multiple products with different structures and concepts to help you maximize your up-time.

So this is where it starts to get interesting. This is where the game starts to really be about not just the virtualization itself, but how it makes the rest of your overall data center better and more efficient.  The Oracle Enterprise Manager Grid Control Oracle VM Management Pack is a huge step forward for users.

[UPDATED 2009/3/21: An Oracle Virtualization blog has recently been created. So now you can hear directly from Adam and his colleagues.]

1 Comment

Filed under Application Mgmt, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Oracle, OVM, Virtualization

Who said WS-Transfer is for REST?

One more post on the “REST over SOAP” topic, recently revived by the birth of the W3C WS Resource Access working group. Then I’ll go quiet for a bit and let people actually working on it show me why I am wrong to worry about WS-RT.

Before that, I just want to clarify one thing. People seem to assume that WS-Transfer was created as a way to support the creation of RESTful systems that communicate over SOAP. As far as I can tell, this is simply not true.

I never worked for Microsoft and I was not in the room when WS-Transfer was created. But I know what WS-Transfer was created to support: chiefly, it was WS-Management and the Devices Profile for Web Services, neither of which claims to have anything to do with REST. It’s just that they both happen to deal with resources (that word again!) that have properties and they want to access (mostly retrieve, really) the values of these properties. But in both cases, these resources have a lot more than just state. You can call all sorts of type-specific operations on them. No uniform interface. It’s not REST and it’s not trying to be REST. The Devices Profile also happens to make heavy use of WS-Discovery and I am pretty sure that UDP broadcasts aren’t a recommended Web-scale design pattern. And there is no “hypermedia” in sight in either spec.

A specification is not RESTful. An application system is. And most application systems that use WS-Transfer don’t even try to be RESTful. Mocking WS-Transfer for not being as good as HTTP to support REST systems is like mocking an airplane for not being as good as your hatchback for grocery shopping. It’s true, but who cares.

So let’s not reflexively attack WS-Transfer for assumed purposes. And similarly, let’s not reflexively defend WS-Transfer as a good way to build RESTful systems.

Just to clarify, this is not meant as a defense of WS-Transfer. I think that, at least in the context of its original purpose, it should be gutted to only its GET operation. The PUT and DELETE tasks should be handled by domain-specific operations. Which would have the consequence of making it look less like a REST wannabe. But my recommendation aims at improving its applicability to the management domain, not at making it comply with an architectural style that is not (at least currently) used in that domain.

4 Comments

Filed under Everything, IT Systems Mgmt, Manageability, Mgmt integration, REST, SOAP, Specs, WS-Transfer