Category Archives: Microsoft

First in-depth look at Microsoft’s Oslo and the “M” modeling language

Microsoft’s PDC is taking place this week and more details were shared with the attendees about project Oslo, an effort announced last year to drastically improve the use of models across the application lifecycle. Some code is available (I think the Quadrant code is only for PDC attendees but the Oslo SDK is available to everyone). I am not at PDC, I didn’t see any presentation and I didn’t download any code. But Microsoft has also posted technical details on MSDN and, as far as I am concerned, that’s the most time-effective way to spend a couple of hours learning about Oslo. BTW, the way they share these early design descriptions and are willing to make their evolution public is admirable.

For those who only want to spend 10 minutes rather than 2 hours, here are the thoughts that came to my mind as I was reading.

Overall I am somewhat underwhelmed, but not necessarily in a bad way. I know that’s a little schizophrenic so let me explain. After hearing a lot about how Oslo was the next big thing in modeling, it is a little surprising to read a document that can be summarized as “modeling is good, so go create some SQL tables and store them in a RDBMS”. That’s the underwhelming part. But on the other hand, it is more down to earth and practically-minded than I feared. And this is just a summary, in truth there is more than just “use SQL”.

Half of the MSDN documentation basically explains how to use SQL Server to store application models (as of today, the “Developing Models for the Metadata Store” section has only one sub-section, “SQL Server Guidelines for Modeling in the Oslo Repository”). Does this mean that all .NET applications will eventually have to carry with them a deployment of SQL Server 2008 even if they don’t use it to store their operational data? Sure there are a few extra repository services (e.g. finer-grained change auditing) but most Oslo repository services are generic SQL Server features. That section has quite a lot of T-SQL, but it’s pretty readable. It also has a lot of dependencies on following naming conventions, which makes me think that directly creating T-SQL code is not the best approach.

Fortunately there is an alternative, the “M” language. It’s a schema language with a built-in constraint mechanism. I found it more data-oriented (as opposed to resource-oriented) than I expected. Even though “each model is really a set of data structures, relationships, and constraints in serialized form”, there is a lot more support for data structures and constraints than for relationships. A relationship is just a foreign key. Relationships aren’t items and don’t have any property (or “field” as they’re called in “M”). For example, the relationship between a student’s enrollment record and a given class can’t have, as a property, the grade that the student got for that class (as in the example in section 4.1.4 of the second LC of SML). To model this in “M” you need to create another item (e.g. “courseEnrollment”) and have a relationship from the student to that item and another one from that item to the “course” item itself. Or you can replace the foreign key in the student table with a complex structure that contains both the foreign key and the properties of the relationship. In the end it has the same expressiveness potential, but in a less streamlined form. I assume Microsoft took this approach for performance reasons.
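
To make this a bit more concrete, here is a rough sketch of that workaround in my approximation of the “M” syntax from the CTP documentation (this is pre-alpha material and the names are mine, so take the details with a grain of salt):

    module University {
        type Student {
            Id : Integer32;
            Name : Text;
        } where identity Id;

        type Course {
            Id : Integer32;
            Title : Text;
        } where identity Id;

        // The relationship has to become an item of its own so that it can
        // carry the Grade field; a plain foreign key can't hold properties.
        type Enrollment {
            Student : Student;
            Course : Course;
            Grade : Text;
        }

        Students : Student*;
        Courses : Course*;
        Enrollments : Enrollment*;
    }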

I am going out on a limb here, but it may also be a difference between development-time concerns and operation-time concerns. During development (all the way to testing and packaging), you can still mostly get away with a relatively simple containment structure. You care about the components of your application and how they are packaged inside or next to one another. Sure you care about who calls who outside of the deployment unit but that’s not as core a concern as getting your class dependencies right, your tests in order and your installer configured. In fact, some of the “who calls who” bindings will only be realized at runtime. Oslo, at least so far, clearly seems more focused on development time than operations, so support for a relationship-rich model may not seem critical. At operations time, on the other hand, you don’t really care so much about how things were packaged before installation. You care a lot more about who invokes who (especially for modern distributed applications), what the network layout is, what resources a ticket is attached to, etc. The model looks a lot more like a graph with complex relationships. Something that “M” doesn’t seem ideally suited for.

Except for this caveat, I like “M”. It’s not anti-XML (you can represent values as XML if you’d like) but it avoids the “the answer is XML/XSD, what is the question” approach to modeling that is sometimes a little too prevalent. “M” is a much better schema language for IT systems than XSD. I especially like its approach to types. A value is not intrinsically of a given type. A type is a condition that you happen to meet or not at the current time (“take heart little field, you can be anything you want when you grow up”). As such, you can be of several types at the same time. Refined types are potatoes inside potatoes (I am not sure if “M” supports defining types as unions and/or intersections of existing types; for an intersection I want to write something like “type NewType : OldType1 where this in OldType2” but there is no “this” in “M”). That approach to types (and the way constraints leverage types) is reminiscent of RDF/OWL. It’s a classification more than a typification, but I understand why they didn’t want to call it “class”. The similarities with RDF/OWL don’t go any further. As I wrote earlier, “M” is very data-focused and not resource-focused: as far as I can tell “M” types are defined syntactically, not semantically (the semantics come as a consequence). For example, I don’t think that you can assert that a given item representing a person is of type “friendly” if there is no corresponding data in the item. You’d have to first create a boolean field called “friendly” and define that those that have that field set to “true” are of type “friendly”. Unlike in RDF/OWL where you can just assert that a subject is “friendly”.
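
Here is the kind of thing I mean, again in approximate “M” syntax: a type is just a named condition, a value can satisfy several of them at once, and a classification like “friendly” has to be anchored in a field.

    // A type is a condition, not something a value intrinsically "is".
    // The value 15 happens to satisfy both of these at the same time.
    type SmallNumber : Integer32 where value < 100;
    type TeenAge : Integer32 where value >= 13 && value <= 19;

    // No way to simply assert that a person is "friendly" (as in RDF/OWL):
    // the classification has to be carried by data in the item.
    type Person {
        Name : Text;
        Friendly : Logical;
    }
    type FriendlyPerson : Person where value.Friendly == true;

    // What I would like to write for an intersection type, if "M" had a "this":
    // type NewType : OldType1 where this in OldType2;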

Here is another reason why you can’t have “semantics-only” types: “if you do not specify the type of a field or value, M infers a type for it”. Two things don’t sound quite right to me here. First a detail: the sentence (like others in the doc) talks about “the” type of a field or value, while there can be more than one. More importantly, what’s the point of this feature? How does it help me to have my IRC nickname classified as a post code or as a password just because it happens to be made of a compatible combination of letters and numbers? Maybe it makes sense as a storage optimization, but why does it make sense to expose this to the user?

I also like the way “extents” work. The current description of that feature is pretty limited, but based on how it is used in other parts I think one of its usages is to support a non-OO equivalent to inheritance: create two extents, one for the “superclass” and one for the “subclass”, where each only contains the properties/fields defined at that level. You have to retrieve both of them in order to get the full picture (all the fields). This is, if I understand it correctly, similar to something I have been (unsuccessfully so far because “XML doesn’t do it this way”) trying to sell to the DMTF CMDBf working group: model inheritance through a set of non-overlapping records rather than dealing with a type hierarchy on record types. It’s not just that it makes relational storage easier (even though it does and that’s probably why “M” does it this way), it also makes your query/select operations a lot easier to specify and implement.
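
A quick sketch of what I mean, still in approximate “M” (the names are mine, not taken from the Oslo models): the “superclass” and “subclass” data live in two separate extents that share a key, and you join them to get the full picture instead of walking a type hierarchy.

    // "Base class" extent: one record per application module.
    type Module {
        Id : Integer32;
        Name : Text;
    } where identity Id;
    Modules : Module*;

    // "Subclass" extent: only the fields added at that level, plus a key
    // pointing back at the corresponding Modules record.
    type WebModuleInfo {
        Module : Integer32;   // key of the Modules record this extends
        RootUrl : Text;
    }
    WebModuleInfos : WebModuleInfo*;

    // The full picture of a web module is its Modules record plus its
    // WebModuleInfos record, joined on the key: no overlapping fields,
    // no record-type hierarchy, and the query stays a simple join.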

All in all (and without having gone through the exercise of defining actual models in “M”), it seems like a fine schema language (except that its dependency on the CLR base types is impractical for users outside of the Microsoft universe) but I am not sure if it is beefy enough to be a good IT management metamodel. When the document says that “the Oslo repository provides open and flexible access to the data it contains, which enables direct access to SQL Server views of the underlying data. There are no complex data access layers or APIs” it sounds better than saying “it’s just SQL, so map your model to it and if you want relationships or type inheritance just build it on top of it and quit whining”. But it is an admission of limitation at the same time as a claim of simplicity. I also smell an assumption that LINQ will provide enough hand-holding that non-SQL-savvy developers will be ok. We’ll see.

And then there is MGrammar. Things get a little confusing at that point if you try to relate MGrammar to “M”. Actually, the FAQ states that “the M language consists of three parts: MGraph, MSchema and MGrammar”. This came as a bit of a surprise to me since at that point I had finished reading (not in detail but not too quickly either) the “M” documentation and I hadn’t seen these names mentioned once. Looks like there are some documentation consistency issues here, but that’s hardly surprising considering this is a “hyper-early (pre-alpha)” release as Doug Purdy puts it.

I think that everything that I have referred to as “M” above is MSchema.

MGrammar is something different altogether: it’s the source of the Domain Specific Language (DSL) references we’ve been hearing in relationship with Oslo. Technically, MGrammar is a BNF on steroids plus an automatically generated parser for your syntax. Cute. I assume that “M” (i.e. MSchema) is built as an MGrammar-defined DSL but I am not sure why I would care. I am all for reuse and if someone at Microsoft thought that there was something reusable in the way they defined MSchema then it’s a good thing to expose this tool. But where does it come into play in application modeling? The last thing I want is people inventing completely independent languages to describe different domains. I am all for specialization, but a common underlying metamodel is pretty nice when you have to make sense of a whole system. I don’t see any such commonality in MGrammar: as far as I can tell it can be used to define anything from PostScript to sonnets.
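
For illustration, here is roughly what an MGrammar definition looks like, based on a quick read of the Mg specification (approximate syntax, so don’t quote me on it): you declare tokens and syntax rules, and the tooling generates a parser that turns matching text into data.

    module Demo {
        language Greeting {
            // Lexical level: tokens, plus the whitespace to skip between them.
            token Word = ("a".."z" | "A".."Z")+;
            interleave Whitespace = " " | "\t" | "\r" | "\n";

            // Syntactic level: "Hello, Oslo" would parse, and the generated
            // parser hands the result back as data (an MGraph) rather than
            // as a syntax tree you have to walk yourself.
            syntax Main = "Hello" "," target:Word;
        }
    }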

From the FAQ, the connection point between MGrammar and MSchema is MGraph (MGrammar languages are parsed into an MGraph, MSchema “builds on MGraph”). That’s nice, but since neither the MSchema nor the MGrammar documentation mentions MGraph I don’t really know what to make of this. David Chappell’s white paper also mentions MSchema and MGrammar but not MGraph. The introduction to the MGrammar Language Specification states that “the data that results from Mg [a.k.a. MGrammar] processing is compatible with Mg’s sister language, The Oslo Modeling Language, M, which provides a SQL-compatible schema and query language that can be used to further process the underlying information”. Compatible? I need more information here. In any case, MGrammar sounds like a fun project for a techie. Who am I to deny Microsoft engineers their fun? Jokes aside, I am probably missing something here seeing how prevalent the DSL message is in all discussions of Oslo. Look at the “highlights of this book” section for the upcoming Oslo/M book from the creators of the “M” language: half of it is about the DSL support and there must be a reason beyond pure geekery. As a side note, if you buy this book you need to understand how little shelf life it will have (I can give you a good price on a lightly-used Hailstorm/”.Net my services” specification book).

Aside from the “M” language itself, there are a few models described in the documentation. One corresponds to BPMN (actually, it says that it “closely aligns with” BPMN 1.1; does this imply that they are not quite the same?). The fact that this model supports imports from Visio is a nice feature.

The Application model (one of the places where you can see “extents” in action) scares me a little bit because I doubt that two different people would use the same “extents” to describe the same software elements. Unless of course that’s being done for them by a pre-defined mapping to their development framework (.NET) enacted by their common development tool (Visual Studio). Which may be the assumption. Yet, the Application model is defined in generic terms, not Microsoft-specific (with a couple of slip-ups, like a WebApplicationModule being defined as a “Web application (module) implemented by IIS or WAS”). Maybe I’ll feel better about the generic applicability of this Application model when I see a full-fledged description (e.g. including relationship semantics as captured in foreign key field names) and an example.

At the bottom of that Application model, there is a lonely “Manageable” type to use if you have a LifecycleState field. This reinforces my impression that despite the claims to link development time with operational time, a lot of the focus to date has been on the former rather than the latter.

The ServiceModel model will look familiar to people who know SCA and is presumably complementary to the WorkflowModel and WorkflowServiceModel models, both of which are directly mapped to Windows Workflow Foundation. I guess that’s where Oslo and Dublin touch one another. I am still glad they are now clearly separated.

There is also a “Quadrant” model which concerns me a bit (it seems to be used to store customization of the Quadrant UI which, while convenient to store straight in the repository, doesn’t strike me as necessarily belonging there).

At this point, the question is not whether Microsoft can build Oslo as it is currently defined. SQL Server 2008 already exists, the usage guidelines aren’t unrealistic and even the “M-to-T-SQL” translation doesn’t seem too hard for Microsoft to implement (the SDK presumably already contains an implementation). I have no doubt they can deliver the system they describe. What I don’t know is whether and how it will actually be useful.

Describing “M” in detail is good. Describing how the repository is implemented on top of SQL Server 2008 is interesting but not so relevant. What I’d like to see is a description of how all this gets used. How does it change the Visual Studio experience? How does it change the installation process/format? How does it support round-tripping between lifecycle stages (e.g. if the developer changes the workflow model, does that original BPMN model get consequently updated)? How does it relate to SLAs and policies? How does it apply to application monitoring? How does it apply to configuration management, to the change process? Etc. In short, what’s the Oslo ecosystem going to be?

These questions aren’t completely ignored in the MSDN documentation, but they are dispensed with in a couple of pages: “Application Development and Lifecycle Improvements” and “IT Operations Benefits“. The former states, for example, that “having the Oslo repository act as a central location for these models also enables a connection between the design and implementation models. This connection helps prevent these models from becoming disconnected during the development process“. Which all sounds good but is just a set of assertions that we have heard many times before (not just from Microsoft). How do “M” and the Oslo repository really make this true?

On the “IT Operations Benefits” side, things are equally blurry: “the Oslo repository can store all types of machine and application configuration data. When consistently updated, this configuration data is a catalog of the current state of all monitored machines and applications in the environment“. Notice the “when consistently updated” hand wave. That’s kind of the crux if you really want to manage across the lifecycle. How will they achieve this consistency? By centralizing all changes through a model-driven controller a la SDM/SML? Through ongoing discovery and/or change notifications? By relying on good old ITIL/MOF processes?

The FAQ declares that “having a common approach does not necessarily correlate to one physical store, but more of a federated model and we believe that some of the new Repository, along with existing investments in both Configuration Management Database (CMDB) and Team Foundation Server (TFS), will form the foundation for a common Microsoft metadata strategy and should be supported across our set of products“. OK, but who is the source of truth for application configuration data? The Oslo repository or the CMDB? Is one the desired state and the other the observed state? Does the CMDB go back to simply being a Service Desk (and if so, does the Oslo repository take on the responsibility to enforce change processes, something that requires more than the security model in Oslo)? If the CMDB is still going to use SML as its metamodel, how do you efficiently federate across such different metamodels as SML (i.e. XSD + schematron + relationships) and “M”?

Lots of questions remaining. What will Oslo have turned into in a few years? A business process design/implementation/monitoring suite (there is a strong workflow feel to many parts)? A generic drag-and-drop programming environment (“the fact that entire features are already described by models means that for a wide array of application and component categories you can start using visual tools to design and implement your components”)? A control center for end-to-end application management? All of the above? Nothing?

This was just a quick brain dump after reading the documents. Actually, I just realized it somehow got pretty long (congrats if you’re still reading). I hope this post is not too disorganized. Oslo is an interesting effort, but, as Microsoft is first to admit, it’s at a very early stage. I am just surprised that this first release spends so much time on the “how” rather than the “what”. Maybe it’s just because I only got my information from the MSDN documentation. We’ll see when more content from PDC finds its way online. I just want the slides; watching recorded presentations is rarely time-efficient (and you can expect them to require Silverlight).

Speaking of Silverlight, there is this new site on Oslo if you think watching some videos is worth installing Silverlight. Those screenshots don’t motivate me sufficiently.

[UPDATED 2008/10/30: Rather than going to bed I Googled around a bit and found a post by Martin Fowler that answers some of my questions about MGrammar, MGraph and MSchema. MGraph is for instances, MSchema is for types. It answers some plumbing questions, but I still have questions about expected usage and relevance to applications modeling.]

[UPDATED 2008/10/30: I also found the recordings and slides from past PDC sessions. Nice job Microsoft for this quick turnaround time, even if you require Silverlight and/or the PPTX viewer. The sessions are:

  • TL23 A Lap around “Oslo” (Doug Purdy, Vijaye Raji)
  • TL27 “Oslo”: The Language (Don Box, David Langworthy)
  • TL18 “Oslo”: Customizing and Extending the Visual Design Experience (Don Box, Florian Voss)
  • TL28 “Oslo”: Repository and Models (Chris Sells)

The first two sessions (delivered Tuesday) have a replay and slides, the others should, I assume, follow soon.]

[UPDATED 2008/11/3: A nice overview of Oslo by Aaron Skonnard. Unlike most other Oslo articles over the last week, this one tries to paint the (yet-to-be-realized) full picture of the Oslo ecosystem. He mentions that “other Microsoft products and technologies are expected to build on Oslo to provide other runtimes. A few that have already been announced include Microsoft System Center (Operations Manager) and Team Foundation Server (TFS) in Visual Studio Team System”. It’s interesting that he qualifies System Center more specifically as “operations manager” rather than “configuration manager” but I wouldn’t read too much into it at this point.]

5 Comments

Filed under Application Mgmt, BPM, Business Process, CMDB, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Middleware, Modeling, Oslo, SML, Specs, Tech

Dear Microsoft, here is my $0.25 Windows license fee for the month

Pricing is now available for Windows instances on Amazon EC2. More than the technical availability of Windows AMIs, the fact that you get to pay your Windows license fee based on usage is a major change. This is where Microsoft’s announcement goes beyond Oracle’s EC2 announcement at Oracle Open World.

But why stop at EC2 instances? If I can do it there, why can’t I do it at home? Considering how rarely my home desktop is booted to Windows, I would love to pay my Windows license in a metered way. It would basically be limited to time spent editing video and participating in family Skype videoconferences (at least until I manage to get Skype full-screen video to work on Ubuntu).

After all, why only Amazon and not other Cloud providers? And when this happens, I think I may become a cloud provider myself. It would be a small-scale operation. One physical CPU (my desktop). And one user (me). I would meter my usage and dutifully pay Microsoft every month based on the number of hours during which I was running Windows.

How much would that be? Well, a Linux Small Standard Image EC2 instance (the closest thing to my aging desktop) costs $0.10 per hour. The Windows version costs $0.125 per hour, so the Windows license on this machine costs 2.5 cents per hour. In a given month, I don’t use it for more than 10 hours (edit/render one DVD plus a few hours on Skype). That’s 25 cents. Does Microsoft take PayPal? Is the Microsoft tax about to get more progressive?

It will be interesting to see how Microsoft manages to be flexible on server OS licensing (where it has plenty of competition) while keeping its highly profitable (and unfairly front-loaded and restrictive) desktop OS licensing intact.

[UPDATED 2009/1/19: What do you know, here is a Microsoft patent for a “Metered Pay-As-You-Go Computing Experience”, found through this article.]

1 Comment

Filed under Amazon, Business, Everything, Microsoft, Utility computing

Oslo name clarification

Good news. The Oslo code name now specifically refers to Microsoft’s new modeling technologies (the part that I and, presumably, readers of this blog care about) and not the workflow/BizTalk stuff that was always mixed in (to the point where some Oslo stories only mentioned workflow).

[UPDATED 2008/10/10: Now this is getting silly. Yet another name change. It’s not “D”, it’s “M”. Whatever. Isn’t the whole point of code names that it doesn’t matter what they are: just pick one and stick with it until you release and then you can come up with the final name? I am not going to do another post just for this like a groupie tracking every news item, however irrelevant, about his/her favorite band. Which, for the record, is not the position I am in wrt Oslo (at least until I know what it really is). Oh, and their graphical modeling tool is now called Quadrant. I am sure the TopQuadrant folks (creators of the TopBraid RDF/OWL/SPARQL editor, which is in a very related domain) will appreciate it.]

2 Comments

Filed under Everything, Microsoft, Modeling, Oslo

Go Big Blue, go! Show them who’s the true friend of the little guy.

IBM’s well-publicized new policy for technology standards is an interesting development. The first image it conjured for cynical me is that of an aging Heavy Metal singer ranting against the rudeness of rap lyrics.

Like Charles, I don’t see IBM as an angel in this domain and yet I too think this is a commendable move on their part. Who better to stop a burglar than a (presumably) reformed burglar anyway? I hope this effort will succeed and I am glad to see that my colleague Jim Melton was involved in the discussion facilitated by IBM and that Trond supports it too.

My experience in standards (mostly from back in my HP days) only covers a small portion of IBM’s technology standards involvement of course. But in all instances, both IBM and Microsoft were key players (either through their participation or through their glaring refusal to participate). And within that sample (which does not include OOXML) my impression is that IBM did indeed play more cleanly than Microsoft.

They also mostly lost, while Microsoft mostly won. A causal link here is possible but not proven. IBM seems to have an ability to lose by winning: because they assign so many people to standards they wear out everybody else and, in the end, they get the final document to be the way they want it (through the normal process, just by being relentless). But the specification is by then so over-engineered, so IBM-like in its approach and so late that it’s usually a Pyrrhic victory. Everybody else has moved on and IBM has on their hands something that’s a standard on paper but that only players in the IBM ecosystem implement. Pushing IBM’s CBE event format in WSDM, over-complicating aspects of WSRF like WS-ServiceGroup and butchering the use of SOAP headers in WS-ResourceTransfer to play nice with WebSphere are, in my mind, such examples. They can’t blame Microsoft for those.

Also, nobody forced them to tango with the devil in that whole WS-* saga. What they are saying now is similar in many ways to what Oracle was saying (about openness and fairness) throughout this decade while Microsoft and IBM were privately defining machine-to-machine interoperability protocols for the enterprise. And they can’t blame standards for the way Microsoft eventually took advantage of them there, because they *chose* to do this outside of standards. I wish I had been a fly on the wall when this conversation took place:

IBM: We’re going to need a neutral DNS name for all these new XML namespaces. It wouldn’t be right to do it under ibm.com or microsoft.com.
Microsoft: You’re right. Hey, I just registered xmlsoap.org last week with the intent to launch a B2B forum for the detergent industry, but if you want we can use it for our Web services specs.
IBM: Man, that’s perfect. Let me give you twenty bucks to help pay the registration.
Microsoft: No, really, no big deal. It’s on me.
IBM: You’re too cool man.

But here I am, IBM-bashing again while the point of this post is to salute and support their attempt at reform. Bad, bad William.

OK, so now for some (hopefully) constructive remarks and suggestions.

I think commentaries and reports on the news have focused too much on the OOXML/ISO story. Sure it’s probably a big part of the motivation. But how much leverage does IBM really have on ISO? Technology standards are just a portion of what ISO does. And it’s not like ISO has much competition anyway, with its de jure international standing. Organizations like the JCP, DMTF and W3C have a lot more to lose if IBM really gets mad at them.

I think it’s clear that Microsoft is the target, but if ISO reform was the main prize, I don’t think IBM would go at it that way. ISO will only change in response to government pressure. If government influence is a necessary step, isn’t it cheaper and more direct for IBM to hire a couple more lobbyists than to try to rally the blogosphere? I think they really want to impact all standards setting organizations at the same time. If ISO happens to be one of those improved in the process, that’s gravy.

IBM calls its report “standards for standards” (at least that’s the file name). I think (and hope) the double entendre is voluntary. It’s not just a matter of raising the (moral and operational) standards of standards organizations. It should also be an occasion to standardize how they work, to make them more similar to one another.

Follow me for a second here. One of the main problems with many organizations is their opacity. They have boards, task forces, strategic committees, etc. Membership in the organization is stratified, based mostly on how much you are willing to pay. I would guess that most organizations couldn’t make ends meet if all member companies paid the “base membership” fee. They need a dozen companies to pay the “leadership” fee to fund their operations. For these companies to agree to the higher price of participation, they need something in return. They need to have more access than the others. Therefore, some level of access must be denied to the base members (and even more to the non-members, which is why many such organizations make almost no information publicly available).

They are not opaque by accident, they are opaque by design because they need to be in order to be funded. There are two ways to fix this. One is to have fewer organizations, such that the fixed costs of running an organization can be more widely spread. But technology is very specialized and there is value in having organizations that are focused and populated by domain experts. The other way is to drastically reduce the cost of running a standards organization. That’s where standardization of standards organizations comes in. If the development processes, IP policies, bylaws and tools were commonly shared among standards organizations, it would be a lot cheaper to run one.

Today, I can start a new open source project for free on Sourceforge. I can pick one of the clearly-identified open source licenses that have been pre-defined. I can use the usual source control, collaboration and bug reporting tools. Not only is it almost free, my users will know right away how to participate. Why isn’t it the same for standards organizations? Or why is it only partially so? I know that Kavi is used by many standards organizations. I’ve used their tool both as a DMTF participant and an OASIS participant. And it doesn’t really fit either perfectly because the processes are slightly different. Ballots are conducted differently, attendance rules are different, document visibility rules are different, roles are different, etc.

It sounds superficial, but I am convinced that a more standardized approach to IP policies, organization bylaws and specification development processes would result in big savings that would open the door to much more transparency.

Oh yeah, you’d also have to drop the boondoggle plenary sessions in resorts all over the world. Painful, I know.

Sure there are other costs, such as marketing costs. But fully transparent organizations, by making their products more easily accessible to users, have a much lower need to use traditional marketing to get the word out. In the same way that open source software companies get most of their marketing via their user community. Consistency among standards organizations would also make it a lot easier for small companies to participate since anyone who’s learned the rules once can be effective right away in a new organization.

I want to end with a note of caution directed at IBM. You have responsibilities. I hope you realize that at this point, approximately 20% of all airplane seats are occupied by IBM employees going to or coming back from some standards-related meeting. The airlines are hurting already, you can’t pull out at once. And who will drive all these rental Chevys? Who will eat all the bad sushi in airport food courts and Benihana restaurants?

[UPDATED 2008/10/20: From Tim Bray, another example of IBM losing by winning in standards: “Unfortunately, that spec [XML 1.1] came with excess baggage, namely changed rules on what constitutes white-space, rammed through by IBM for the convenience of their mainframe customers. In any case, XML 1.1 has been widely ignored”.]

3 Comments

Filed under Conference, Everything, Governance, IBM, ISO, Microsoft, OOXML, Open source, Standards

Last call for SML and SML-IF

The SML working group at W3C has published the “last call” working draft of version 1.1 of the SML and SML-IF (“IF” stands for “interchange format”) specifications. You have until October 3rd to tell them what you think.

With all the Oslo fun, the OMG embrace and the silence from System Center there are more questions than answers about the use of SML at Microsoft. But the Eclipse COSMOS project (IBM and friends) is, as far as I know, valiantly going forward with the store/validator implementation. Which may or may not be the same codebase as what was used for the recent CMDBf interop demo (I am not sure how the SML and CMDBf implementations in COSMOS are articulated).

The COSMOS group also recently published an overview of SML. It doesn’t try to tell you why you’d want to use SML but it’s a good and succinct description of what SML is technically (from an XML developer’s perspective).

Comments Off on Last call for SML and SML-IF

Filed under CMDB Federation, CMDBf, Desired State, Everything, IBM, Implementation, IT Systems Mgmt, Mgmt integration, Microsoft, Modeling, Open source, Oslo, SML, Specs, Standards, Tech, W3C

CMIS, APP, Zen-SOAP and WS-KitchenSink: some data points

The recent release of an early draft of a content management specification (CMIS, for Content Management Interoperability Services) provides an interesting perspective on not just SOAP-versus-REST but also Zen-SOAP versus WS-KitchenSink.

I know little about content management and I have no comment about the specification in that respect. Others have better-informed opinions on that aspect.

What is of interest to me, and where I have some experience, is the way the spec-defined operations are bound to underlying protocols. Here is the way the specification is structured: Part I describes the data model and the operations exposed by all the services. Part II comes in two flavors: a REST binding (based on APP, the Atom Publishing Protocol) and a Web services binding (based on SOAP).

This is the first time, to my knowledge, that someone (who presumably isn’t a participant in the SOAP/REST religious war but simply wants to get something done) describes two ways to achieve a real-life task, using either APP or SOAP. I expect that this will attract a lot of attention and provide data in the SOAP versus REST debate.

But this is not what I want to write about. I’ll just point out that the REST binding specification is somehow twice as long as the SOAP binding specification, which I find intriguing but not necessarily meaningful (things are looking good for your bet, Sanjiva).

What really caught my attention is how SOAP is used in CMIS. You can hardly tell it’s SOAP. CMIS just defines XML messages to be used as payload for requests and responses. You would be excused for forgetting halfway through your implementation that you’re supposed to wrap those in a SOAP envelope. Headers are a no-show. The specification says it uses SOAP faults but it actually goes out of its way to avoid the existing elements for fault code and fault message and instead invents its own. The only SOAP feature it really uses is MTOM.

Except for the MTOM part, this reminds me of what SOAP was at the beginning of the decade, before any header had been defined (other than those used as illustration in the SOAP specification itself). I want to call it Zen-SOAP, as opposed to the WS-KitchenSink approach in which even simple, synchronous, clear-text, request-response SOAP exchanges somehow get saddled with a half dozen WS-Addressing headers before they’ve even left the gate (did I mention that I don’t like WS-Addressing?).

Another comedian in the WS-KitchenSink theater troupe is the WS-Transfer stack and especially WS-ResourceTransfer (WS-RT). Unless I read too much into this draft of CMIS, its content is devastating in two ways for WS-ResourceTransfer: in one fell swoop it shows that the specification is mostly useless and it destroys the argument that WS-ResourceTransfer needs to be stand-alone as opposed to just a part of WS-Management.

In “who needs XPath fragment-level PUT?”, I tried to make the case that the use of XPath in WS-RT to do fine-grained updates is a case of over-engineering. That there is no real need for it. Still, in that article I try to think of cases where the feature might be justified. I came up with two and I wrote that “one is if the resource actually is a document (as opposed to having its state represented by a document). For example, a wiki page”. But I dismissed it because wiki-land is REST country. I didn’t think of it at the time, but there is an “enterprise” version of wiki, a world in which, presumably, SOAP is well-regarded: Content Management Systems. Surely, if there is a domain that needs a fine-grained SOAP-based document editing protocol it’s the CMS world.

Today’s release of CMIS demolishes this use case with two punches to the guts:

  • They do have a query language, but it is SQL-based, not XPath-based.
  • The query is only used for reads, not for updates. Updates are done through specialized operations (addObjectToFolder, moveObject, updateProperties, createRelationship…).

This goes beyond not using a generic fine-grained update mechanism. It also goes against using any generic GET/SET operation. The blow reaches all the way to WS-Transfer. For all this, CMIS comes out a much simpler specification and it also frees itself from the web of dependencies (on specifications at different stages of standardization) that has plagued specifications that use WS-Transfer and will plague WS-Federation for using WS-RT.

It will be interesting to see what happens when the WS-* architects at Microsoft and IBM get hold of the CMIS specification and of its authors in their companies. I am especially worried about the fate of the IBM CMIS authors. The recent news about Oslo shows that the XML people at Microsoft are a lot more willing to put the XML tools back in the box when needed.

In truth, the CMIS authors do appear to need some help from the SOAP experts in their companies, if only to fix the way they use SOAP faults and to help the poor soul who put this comment in the WSDL:

<!– had to use include – .net wsdl.exe code generator doesn’t seem to like imports on the schema –>

But they might be getting more “suggestions” than they bargained for. In the same way that the WS-Federation folks were going on their own merry way until it was “suggested” to them by someone (who probably had an agenda) to use WS-RT. I’ll try to keep an eye on how CMIS evolves.

In the meantime, I find in CMIS data points that reinforce my opinion that WS-Transfer should be absorbed by WS-Management, WS-MeX and WS-Federation should return to defining their own operations and WS-RT should be left to die (or, for a more positive spin, be used as inspiration in the next version of WS-Management).

[UPDATED 2008/10/02: Roy Fielding doesn’t like the so-called-RESTful binding. Sam Ruby cautiously defends it. Links via Billy Cripe.]

[UPDATED 2009/5/1: For some reason this entry is attracting a lot of comment spam, so I am disabling comments. Contact me if you’d like to comment.]

4 Comments

Filed under Everything, IBM, Microsoft, Query, REST, SOAP, SOAP header, Specs, Standards, Tech, WS-Management, WS-ResourceTransfer, WS-Transfer, XPath

Oslo, blog posts and my crystal ball

There is more and more information coming out about Oslo in anticipation of the Microsoft PDC in October.

David Chappell recorded a video about it last month. More recently Doug Purdy and Don Box each posted a short description of Oslo. Don describes the goal of Oslo as “simplify the process of developing, deploying, and managing software”. But when he lists ancestor technologies to illustrate that “Microsoft has been moving in this direction for over a decade now”, they are all about development, not management: COM type libraries, .NET metadata attributes, XAML. Interesting that neither SDM nor SML gets a mention. Neither does SCA, by the way, but I wasn’t really expecting that one… :-)

Maybe I am the only one looking for an SDM/SML echo here, just because I came to hear of Oslo through the DSI angle. Am I wrong to see Oslo as an enabler for DSI? This eWeek article doesn’t have anything to do with IT management. Reading it, Oslo is all about allowing people to write code through drag and drop. Yawn. And Don Box endorses the article.

Maybe it’s just me (an IT management guy more than a software development guy) but I don’t care so much about how the application model is created. I care a lot more about what it allows you to do in terms of IT management. Please don’t make me pull out the often-quoted figure about the percentage of IT budget spent on operations versus development/licensing. The eWeek piece fails to excite me, but fortunately David Chappell’s video interview is a lot more aligned with my thinking, so I still hold hopes for Oslo as an IT management enabler. Here is my approximate transcript of an example that David provides (at around 4:20) in the video:

“If someone comes to you and says I’ve got this business process and the SLA is not being met, what do you do? You’ve got to trace this through the right business process and the right application that supports that part of the process and find the machine it runs on and maybe look at the workflow that implements it and maybe look at the services that it provides. This involves talking to business analysts, or the IT pros or the architect or the developer, all of whom have their own view of the world, their own tools, their own perspective. The repository provides a common place to store all this stuff, to link it all together, and with a visual editor to have a common tool that lets you actually go through and answer these kinds of questions.”

Now you’re talking.

And if Oslo is not the new blood of DSI, then what is? The DSI story is getting dated, SML is fading in our memories and of the three parts that supposedly compose DSI (“virtualized infrastructure, design for operations, and knowledge-driven management”), only virtualization is actually represented on the list of technologies on the DSI home page. Has DSI turned into just allowing System Center to manage a hypervisor? I still hold hopes that the Oslo data is going to spice things up there. It would be good for the industry at large, not just Microsoft.

I won’t be at the PDC but it will be interesting to see what filters out of these sessions. The first session in the list adds management of hybrid application systems (hybrid as in “cloud/on-premise combination” or “software+services” as Microsoft calls it), to the long “can do” list for Oslo. Impressive, if there is some meat behind the abstract. I think this task is often overlooked in discussions around management aspects of Cloud computing (see “the new, interesting thing is going to be the IT infrastructure to manage your usage of utility computing services as well as their interactions with your in-house software” in this previous entry).

Yes, I am reading way too much into session abstracts, but while I am at it I can’t help noticing that there is a lot of SQL and very little XML/XSD/XPath mentioned there. Even though one of the presenters is Gudge, the only person I have ever met who fully understands XSD (actually even he doesn’t, I’ve seen him in the WS-I days have to refer to… his book).

Even though I am sure we’ll be told that SML can be built on top of Oslo, the SQL orientation won’t make that so easy (I want to see how to build XSD+Schematron validation on top of a relational store using Oslo’s drag and drop development tool). And it puts Microsoft on a different architectural direction from IBM, who, as far as I can tell, thinks that the world is a big XML document. Neither is the most appropriate for IT management models. I prefer a graph model and associated graph queries along the lines of SPARQL or CMDBf.

But that’s just late-night idle speculations on my part (aka “blogging”). Let’s see what comes out in October.

[UPDATED 2008/9/10: Interesting timing. Microsoft is joining OMG, home of UML and BPMN. Coming next: a submission of a “new version” of UML and BPMN that happens to contain the extensions and tweaks that Microsoft made to them in the process of implementing Oslo. This, BTW, is the final nail in the SML coffin (SML isn’t even mentioned in the press release).]

3 Comments

Filed under Application Mgmt, CMDBf, Conference, Desired State, Everything, Graph query, IT Systems Mgmt, Mgmt integration, Microsoft, Middleware, Modeling, Oslo, Query, SaaS, SCA, SML, SPARQL, Specs, Tech, Trade show, Utility computing, Virtualization

More clues on the Oslo/SCA/SML trail: it’s “D”

I just found out that I completely missed some interesting information about Oslo-related efforts at Microsoft. Back in February, Mary-Jo Foley reported on a new modeling language (code-name “D”, apparently) that is part of this initiative. And more recently she reported that David Chappell gave a presentation about Oslo (and more generally Microsoft’s SOA plans) at TechEd. He reportedly said that we should expect a new “schema language” (which Mary-Jo thinks is “D”). What I want to know is what its relationship is with SML/SDM and SCA.

Mary-Jo might not know about SCA and SML but I know that David does. He wrote this white paper about SCA and an article arguing that “Microsoft Should Not Support SCA” (based on a questionable assessment that SCA is only about portability). He and I also had a little back-and-forth about SCA, SML and Microsoft in the comments section of his post. Unfortunately, David hasn’t blogged about Microsoft’s SOA strategy for a while for us non-TechEd people.

In addition to Mary-Jo’s report, the only information I was able to quickly dig out about David’s presentation is this blog post on Microsoft’s Israel site. Looks like David gave the same presentation at TechEd Israel 2008. Anyone who understands Hebrew care to translate the post? Fortunately there is a two-minute video (also available here) in which we can hear David talk (in English). During the second of the two minutes you’ll hear and see something that could come straight out of an SCA presentation…

For some reason, David’s TechEd Israel presentation doesn’t seem to be listed here and TechEd online tells me that “Featured videos are unavailable at this time”. That’s both for IT Professionals and Developers. But of course they forced me to install Silverlight before telling me that.

[UPDATED 2008/8/11: Here is a 14 minutes video interview of David Chappell providing an update on Oslo.]

3 Comments

Filed under Application Mgmt, Automation, Conference, Desired State, Everything, IT Systems Mgmt, Mgmt integration, Microsoft, Modeling, Oslo, SCA, SML, Tech

JSR262 public review ballot

The Public Review Ballot for JSR #262 that took place in the Executive Committee for SE/EE has closed. I am not familiar enough with the JCP process to know exactly what this milestone represents. But the results are interesting in any case.

The vote narrowly passed with 6 yes, 5 no and 1 abstain.

The overriding concern listed by the “no” voters (and several of the “yes” voters) is the fact that JSR262 uses WS-Management (a DMTF standard) which itself makes use of specifications that have been submitted to W3C but are not currently in the process of standardization (WS-Transfer, WS-Eventing, WS-Enumeration). And that it uses an older version of a now-standard specification (WS-Addressing).

SAP makes the most insightful comment: that this is not really a JCP problem but a DMTF problem. Hopefully the DMTF (and Microsoft, since it controls the fate of the specifications in question) will step up to the plate on this. This is likely to happen. Even if the DMTF and Microsoft didn’t care about making the JCP happy (but they do, don’t they?), they will run into similar issues if/when they push WS-Management towards ANSI/ISO standardization.

Next to this “non-standard dependencies” issue, there is only one technical issue mentioned. As you guessed, it’s IBM whining about the lack of a WSDL to feed their tools. This is becoming so repetitive that I may eventually stop making fun of it (but don’t hold your breath, I am not known for being very good at ending long-running jokes). It is pretty ironic to hear IBM claim that without that WSDL you can’t implement the spec on JAX-WS when you know that the wiseman reference implementation by Sun and HP is based on JAX-WS…

Comments Off on JSR262 public review ballot

Filed under Application Mgmt, Everything, IBM, Implementation, ISO, IT Systems Mgmt, JMX, Manageability, Microsoft, Specs, Standards, WS-Management, WS-Transfer

Various IT management stories

Apparently Coté’s upstairs neighbors were having a party last night and he could not sleep. That’s good for us because as a result he bookmarked a long list of IT systems management stories. Several of those piqued my interest:

2 Comments

Filed under Application Mgmt, Articles, Everything, HP, IT Systems Mgmt, Manageability, Microsoft, Open source, Oracle

WS-ManagementHammer: don’t do it but if you are going to do it anyway then…

With the IBM/Microsoft/Intel/HP WSDM/WS-Management convergence now implicitly (if not yet officially) dead, it will be interesting to see what IBM is going to do with WSRF. WSRF is being used today, rarely explicitly but rather in an embedded fashion. People who use WSDM use it, people who use CDDLM use it, people who use the Globus Toolkit use it, etc. IBM could write off the convergence work (WS-ResourceTransfer, which was published as a draft, and WS-ResourceEnumeration and WS-EventNotification which were never published) and stick to using the existing WSRF specifications when they need the corresponding functionality. That’s what I hope they do.

Alternatively, they could decide to get the forceps out of the drawer. They can create a new, IBM-friendly (e.g. Fujitsu, CA, Cisco…) private consortium to take over the unfinished drafts (if the IBM/Microsoft/Intel/HP legal agreement allows this) or start new ones. Or they could go directly to W3C, OASIS or OGF and push for a new working group to do the work in the open (and since no-one else would really care about this work IBM should have a relatively free hand there, the way Microsoft did in DMTF when IBM chose to boycott WS-Management). Why W3C would care and why OASIS or OGF would want to start committees to obsolete their existing work is a separate question.

While I hope that IBM doesn’t try to push another pile of WS-* resource management specifications on an industry that already has too many, if they do I hope that at least they’ll do it right. And that means doing away with the approach embedded in WS-ResourceTransfer. Having personally been involved in many iterations on this problem, I hope to have some insight to contribute.

Along the lines of the age-old parental advice “don’t do it but if you are going to do it then use a condom”, here is my advice to anyone thinking of doing another iteration on the WSRF question: don’t do it but if you are going to do it then be specific about what problem you are addressing.

First, let’s separate three scenarios.

Database query

WS-ResourceTransfer should not be seen as a way to query an XML database. Use XQuery for this.

REST

While architecturally it should be possible to build RESTful applications on top of WS-Transfer‘s operations, this is simply not what is happening. WS-Transfer is being used either by CIM people (who get to it via WS-Management) or by big-SOA people (who get it as part of the whole WS-* stack) and neither of them is doing anything remotely RESTful. So just leave that aside and don’t see WS-ResourceTransfer as a way to do “fine-grained REST”. No REST user is losing sleep over WS-ResourceTransfer being in limbo.

A flexible way to interact with a complex system

This is the use case that you should focus on. You have a system made up of many parts (e.g. a composite application or a server that is made of many components) that you can represent as an XML document. The XML representation contains some important information about the system, but it isn’t the system. There are identified resources within the system that have lifecycles, management capabilities and internal parameters. Not everything relevant is captured in the XML model. This is why it is different from an XML database.

In general, I don’t think that XML is the best way to represent complex IT systems. It has plenty of complications that are not relevant to IT management and it doesn’t elegantly support the representation of graphs, often the most natural way to represent such a system (more on this here). CMDBf, with its graph-oriented approach, is a better choice in general. But there are plenty of areas (especially smaller, well-defined, sub-systems) in which XML formats have been defined to represent systems. SCA and SML for example.

In the case where you are dealing with such an XML-described system, there is value in standard ways to simplify interactions with the system and its parts. But here too, we need to distinguish different patterns rather than trying to handle them all in the same way.

Filtering/sequencing of returned data

Complex IT systems can generate a lot of configuration and/or monitoring data and often you only care for a small subset. For example, an asset record has dozens of elements (lease terms, owner, assigned user…) but you may only care to retrieve the date the lease expires. When you do a GET on the record, you want to qualify it by specifying that only that date needs to be returned. That’s what WS-RP, WS-RT and the WS-Management wsman:TransferFragment header allow. In a variation of this, you want all the data but you don’t want it in one go, you want to pull it piece by piece. That’s what WS-Enumeration gives you. The problem with all these specifications is that they only offer that feature when you are retrieving the resource representation (a WS-Transfer GET or equivalent), not for other operations. But how is this different from invoking an AirlineBooking operation and saying that you only want to be sent the confirmation code, not the full itinerary, equipment type, assigned seat, etc? Bundling this inside WS-RT (or equivalent) is not helpful. A generic SOAP header that can go on any message would be more appropriate (the definition of this header would need to pay special attention to security considerations, especially if the response is signed, because it could be abused to trick the server into sending, and signing, specifically-crafted messages).

Interacting with a sub-element of the system

If you have a handle to a computer system resource and you know that it has one CPU and that this CPU is represented by the /comp:CPU element of the system, why would you need to use some out-of-band discovery mechanism to interact with that CPU? It’s right there, you can see it, you can point to it. Surely there must be a way to address operations to it directly, right? WS-Management tries to do it with its wsman:Selector mechanism, but the selectors are not tied to the model and require, effectively, a separate out-of-band agreement for addressing. There shouldn’t be a need for such an additional agreement once an agreement has already been reached on the model.

What is needed is a way, for systems that have a known XML model, to address messages to a subpart by using the model itself to support that addressing. Call it SOAPy mashup if you want to feel like you are part of the cool kids. I described such a mechanism a while ago. In effect, it is an improvement on wsman:Selector that an eventual new iteration of WSRF should at least consider.

In some cases, namely when the operation is a WS-Transfer GET, this capability overlaps with the “filtering of returned data” capability. One way to look at it is that you are doing a GET at the level of the overall computer system and filtering the results down to the part that represents the CPU. Another way to look at it is that you are pinpointing the message to a subset of the model (the CPU part) and doing an unmodified GET on it. It doesn’t matter how you choose to think about it. In my proposal, these two ways produce the same message. Like the wave view and particle view of a photon, they describe, in the end, the same physical entity, with each view being the best representation for a given set of situations.

The problem with WS-RT and its predecessors is that they don’t recognize that this is just the intersection of two orthogonal concerns (filtering of output versus addressing of sub-elements) and only handle that intersection.

Interacting with a set of resources as a set

The same kind of expression (typically XPath) that lets you point at a sub-element inside of a system also lets you point at a set of such sub-elements. But even though from an XPath perspective there isn’t much of a difference (the first one just happens to return a nodeset that contains only one node), from an architectural perspective it is a very different use case. If you want to support such a use case then you have to handle it as such and define all the associated semantics (sequential/parallel execution, fault handling, partial completion, resource-specific permissions…). You can’t just cross your fingers and assume that you get such features “for free” just because XPath can return a nodeset.

I know that this post illustrates a way of giving free advice that virtually ensures that it gets ignored. Similar (if you’ll allow the big stretch) to the way Chirac and Villepin were arguing against an Iraq invasion in ways that probably reinforced the Bush administration’s determination to do it. When will the world finally learn to appreciate the oh-so-slightly obnoxious undertone that is inherently French (because, let me tell you, we’re not about to lose it)? At least, when my grandchildren ask me “where were you when IBM invented WS-ManagementHammer?” I can point to this post and say “I tried to stop it, I tried”.

[UPDATED 2008/5/15: How timely! Just after publishing this I find, via Coté, what looks like another example of French abrasiveness in the systems management world: the attitude, name and the way Jeff ends with a French-language quote make it quite likely that the “Jacques” person discounting the fact that his company’s SNMP agent is broken is indeed a compatriot. French obnoxiousness aside, and despite my respect for standards, my advice to Jeff is that if a given SNMP agent works with HP, IBM, BMC and CA you will probably save yourself time in the long run by finding a way to support it (even if it is not spec-compliant) rather than getting the vendor to change. There are lots of sites out there that work fine with Firefox and IE but are not compliant with Web standards. Good luck getting them all fixed.]

[UPDATED 2008/7/14: I don’t really plan to turn this post into an ongoing set of updates about “French attitude” but since today is Bastille Day I’ll point to this map of the world as seen from Paris. If I wasn’t on strike right now, I’d explain why the commenter is wrong to assert that “French self-deprecating humour” is rare.]

4 Comments

Filed under Everything, HP, IBM, IT Systems Mgmt, Mgmt integration, Microsoft, SCA, SML, SOAP, SOAP header, Specs, Standards, WS-Management, WS-ResourceTransfer, WS-Transfer, XMLFrag, XPath

System Center “Cross Platform Extension”: too many distractions

I was hoping that by the time MMS was over there would be more clarity about the “Cross Platform Extension” to System Center that Microsoft announced there. But most of the comments I have seen have focused on two non-technical aspects: Microsoft is interested in heterogeneous management and Microsoft makes use of open source. That’s also the focus of Coté’s coverage.

So what? Is it still that exciting, in 2008, to learn that Microsoft recognizes that Linux and OSS are major players in enterprise computing? If Steve Ballmer eventually gets hold of Yahoo, do you think his first priority will be to move all the servers to Windows or to build up its search and advertising audience? It has now been 10 years since the Halloween documents came out. They can be seen as the start of Microsoft’s realization that Linux/OSS are here for good. It is not surprising to see that one of their main authors is now the driving force behind WS-Management, an effort that illustrates the acceptance of heterogeneity and the need to deal with it (on Microsoft’s terms if possible, of course). The WS-Management effort started years ago and it was a clear sign that Microsoft knew it had to tackle heterogeneous management (despite the reassuring talk to HP and others that “it’s all about making Windows the most manageable platform”). Basically, Microsoft is using WS-Management to support heterogeneity without having to do too much work: by creating an industry standard that everyone writes to and that Microsoft uses internally. Heterogeneous management is intrinsic to DSI if DSI is to be anything more than a demo.

But all of this was known before MMS 2008 to anyone who was paying attention. Instead of all this Microsoft/OSS/heterogeneous talk, I am a lot more interested in the technical aspects of the “Cross Platform Extension”.

OpenPegasus has been around for a long time, as a C++ CIMOM with a bunch of associated providers and CIM-XML interoperability over HTTP with CIM clients. I don’t know where WS-Management support was on the OpenPegasus development timeline, but even without Microsoft getting involved it would have eventually happened. And this should have been sufficient for System Center to access the CIMOM (BTW, does System Center support CIM-XML when WS-Management is not present, and if it does, what difference does WS-Management make in practice?).

I can see how Microsoft would bring some extra (and much welcome) development resources for the WS-Management implementation (BTW the guys at Intel already have an open-source C implementation of WS-Management) as well as some extra marketing/visibility/distribution. Nice, but not earth-shattering. Do they bring anything else to OpenPegasus?

And what else is in the “Cross Platform Extension” in addition to an OpenPegasus WS-Management-capable CIMOM? Is there any extra modeling capability beyond CIM? Any Microsoft-specific classes? Any discovery/reconciliation capability? How much actual configuration management versus just monitoring? Security? Health models? Desired state management? Or is it just a WS-Management CIMOM? Any pointer to specific information is welcome.

Of course the underlying question is whether others than Microsoft can manage resources that have an OpenPegasus-based System Center management pack on them. The Open Management Consortium guys have talked about an open management agent. Could, against all expectations, Microsoft be the one delivering it?

In the IT management world, there are the big 4 (HP, BMC, CA and IBM), the little 4 (Zenoss, Hyperic, GroundWorks and openQRM) and the mighty 3 (Oracle, Microsoft and EMC). Sorry John, I am reclaiming the use of the “mighty” term: your “mighty 2” (or 2.5) are really still the “little 2” (or 2.5). At least for now.

The interesting thing is that in that industry configuration there are topics on which the little ones and the mighty ones share common interests. For example, the big 4 have a lot more management packs for all kinds of resources, built up over the years. Some standard-based mechanism that partially resets the stage helps the little ones and the mighty ones better compete against the big 4. Even better if it has an attractive (and extensible) implementation ready in the form of an agent. But let’s be clear that it takes more than a CIMOM to make a management pack. You need domain-specific expertise in the form of health models, deployment/configuration scripts and/or descriptors, configuration validation, role management, etc. Thus my questions about what else (beyond CIM over WS-Management) Microsoft is bringing to the table. SML and CML are supposed to address this space, but I didn’t hear them mentioned once in the MMS coverage.

[UPDATED on 2008/5/7: Another perspective on Microsoft and open source: Microsoft Ex-Pats Developing Open Source Software Outside of Redmond]

[UPDATED 2008/5/7: I got an answer to the question about System Center support for CIM-XML: it doesn’t have it. So indeed it’s either WS-Management or WMI. If you’re a Linux box, that means it’s WS-Management.]

1 Comment

Filed under CA, Everything, HP, IBM, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Open source, Oracle, SML, Standards, WS-Management, Yahoo

Oracle/BEA, WS-Management and MMS: announcements of the day

A few announcements came out today.

The good news: Oracle’s acquisition of BEA closes. Unobstructed technical work can start.

The conveniently-timed news: WS-Management officially a standard.

Speaking of MMS 2008, any announcement there? Not much so far, as explained by Ian Blyth. If I parse the cross-platform part of the press release correctly, it says that management of non-Windows resources by Operations Manager is based on WS-Management, but WS-Management alone is not enough so Microsoft is providing a development kit for several non-Microsoft operating systems. It will be interesting to see what exactly is produced by these management packs. Can they be called on by management tools other than Operations Manager, or is the stuff that rides on top of WS-Management too proprietary to allow this? No word on SML/CML.

By the end of the week we may have a clearer picture, including what’s going on with the previously-announced reset on System Center Service Manager. Coté is on the scene and will undoubtedly share his thoughts.

As a side note, the way the MMS main page loads betrays the fact that, in 2008, Microsoft (or more likely its event marketing contractor) is using the same clueless HTML design approach that I first saw in 1995 and recently wrote about. All the text in the center of the MMS home page is contained in one large picture (available here). They didn’t even bother with an “ALT” attribute, so good luck to blind users. The part that says “Registration Overview Page” was made blue and underlined to suggest that it is a link, but it is just a part of the picture. Which, presumably, was supposed to be turned into a link using an image map. Well, turns out they can’t even get that right.

They tried to use a client-side image map (not available in 1995) but somehow the actual map code is commented out in the HTML source:

<!--<map name=Map>
  <area shape=RECT coords=18,549,210,572 href="registrationoverview.aspx">
  <area shape=RECT coords=17,596,222,634 href="registrationoverview.aspx">
</map>-->

As a result, the single most prominent link on the home page is dead. And there is no server-side image map mechanism as a backup (which I remember used to be best practice when client support for client-side image maps was spotty).
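
For the record, a minimal fix would have been to uncomment the map, reference it from the image and give the image and each area a text alternative. Something like the following sketch, where the image file name is a guess and the coordinates are copied from the commented-out markup above:

<img src="mms-home.jpg" alt="MMS 2008 home page text" usemap="#Map">
<map name="Map">
  <area shape="rect" coords="18,549,210,572" href="registrationoverview.aspx"
        alt="Registration Overview Page">
  <area shape="rect" coords="17,596,222,634" href="registrationoverview.aspx"
        alt="Registration Overview Page">
</map>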

Looking at the HTML source also reveals that tables are over-used. That’s the kind of HTML I can write, and I don’t mean that as a compliment.

[UPDATED 2008/5/5: As expected/hoped, Coté did share his thoughts on this “cross-platform” move from the MMS floor.]

Comments Off on Oracle/BEA, WS-Management and MMS: announcements of the day

Filed under CMDB, DMTF, Everything, IT Systems Mgmt, Linux, Manageability, Microsoft, Oracle, Standards, Trade show

Unhealthy fun with IP aspects of optionality in specifications

The previous blog post has re-awakened the spec lawyer in me (on the hobby glamor scale, spec lawyering ranks just below collecting dead bugs). Which brought back to my mind a peculiar aspect of the “Microsoft Open Specification Promise“.

The promise was published to address fears some people had that adopting Microsoft-created specifications (especially non-standard ones) would put them at risk of patent claims from Microsoft. The core of the promise is only two paragraphs long. The first one contains this section:

“To clarify, ‘Microsoft Necessary Claims’ are those claims of Microsoft-owned or Microsoft-controlled patents that are necessary to implement only the required portions of the Covered Specification that are described in detail and not merely referenced in such Specification.”

That seems to pretty clearly state that only the required portions of a specification are covered by this promise. Which is a very significant limitation, as specifications often tend to (over-) use optional features. But if you read further, the list of “Covered Specifications” (those to which the promise applies) contains this statement:

“this Promise also applies to the required elements of optional portions of such specifications.”

I find this very puzzling because it seems to contradict the previous statement. And more importantly, it’s hard to understand what it really means. That’s where the fun starts:

For example, if my spec defines a document <a> with an optional element <b> that itself has an optional sub-element <c>, as in:

<a>
  ...
  <b>
    ...
    <c>...</c>
  </b>
</a>

The <b> element is a required part of the “b” optional portion of the spec (the portion of the spec that defines that element), so I guess it is covered, but is <c>? That’s an optional element of an optional portion (the “b” portion) of the spec, so it isn’t. Unless you consider the portion of the spec that defines <c> (the “c” portion of the spec) to be an optional portion of the spec itself. In which case the <c> element is covered.

But if you take that second line of reasoning, then everything in the spec is covered because for any feature, no matter how “optional” it is, there is a portion (optional or not) of the specification that describes this feature. And if you are implementing that portion, for example the portion that defines element <foo>, by definition element <foo> is required for it (how can an element not be a required part of its own definition?). But if Microsoft intended to cover all parts of the specification, why not say so rather than this recursion-inducing “required elements of optional portions” statement? And if not, why do they choose to only cover optional elements that are one degree removed from the base of the specification?

Wouldn’t it be fun to see a court of law deal with a suit that hinges on this statement (provided that you’re not a party in the suit, of course)?

When a real spec lawyer took a look at this promise, he didn’t comment on the second statement, the one that raises the most questions in my mind.

[UPDATED 2008/4/29: The “promise” has seen many updates. The original (which is the one Andy Updegrove reviewed at the previous link) came out on 2006/9/12. The one I reviewed is dated 2008/3/25. There is no change history on the Microsoft site, but the Wayback machine has archived some older versions. The oldest one I can find is dated 2006/10/23 and it does not contain the sentence about “required elements of optional portions” that puzzles me. So it’s likely that the version Andy reviewed didn’t include this either and as such was clearly limited to required portions of the specifications (something that Andy pointed out).]

Comments Off on Unhealthy fun with IP aspects of optionality in specifications

Filed under Business, Everything, Microsoft, Patents, Specs, Standards

WS-Transfer, its WSDL and its WS-I compliance: the art of engineered uselessness

Several years ago, Chris Ferris wrote a blog entry in which he explains that WS-Transfer is not WS-I Basic Profile (BP) compliant.

Chris’ main point is correct: the WSDL document in appendix II of the WS-Transfer specification is not compliant with the WS-I Basic Profile. But what does this mean and why should one care?

If you search for the word “wsdl” in WS-Transfer, you first find it in the table that declares namespace prefixes used in the specification. But the prefix is not used in the specification, so it could just as well be removed from that table.

We see it next mentioned in the “compliance” boilerplate where it is declared to be the least authoritative of all information in the specification.

The next occurrence is all the way down in section 8, as a reference to the WSDL 1.1 W3C note. The only place where that reference is used is further below, in Appendix II.

In short, for all practical purposes there is no mention of WSDL in WS-Transfer except for this one appendix that contains a WSDL document. Since there is no MUST or REQUIRED statement that refers to it, it is at best a testing tool that one can use to validate WS-Transfer messages produced. There is no requirement at all that the implementation produces that WSDL (e.g. as a response to a WS-MeX request) or consumes it.

And if you look at the content of the WSDL, it is mostly XML gymnastics aimed at creating “empty” and “any” types to express almost nothing useful about the messages sent and received.

You don’t have to take my statement that the WS-Transfer WSDL is useless at face value. Here are two other proofs:

  • Chris doesn’t just point out the WS-I BP violation in the WS-Transfer WSDL, he also proposes a way to fix it. He writes: “I actually think that a more appropriate approach to handling WS-Transfer’s ‘Get’ would be to specify the output message as you would any doc-literal operation and merely annotate the operation with the appropriate wsa:Action attribute values” (he also provides an example). And he is perfectly right. If you really want a WSDL for your WS-Transfer operations, create one that is specific to the resource type (server, toaster…) that you are dealing with (see the sketch after this list). By definition that WSDL can’t be baked into the model-agnostic WS-Transfer specification. While Chris doesn’t say it, the natural conclusion of his remark is that there is no point in having a WSDL in WS-Transfer (because any resource-agnostic WSDL is useless).
  • The WS-Transfer XSD and WSDL have been modified, sometimes in backward-incompatible ways, without changing the target namespace. From the original version to the first W3C submission, some minor changes (message names, introduction of WS-Addressing). From the first W3C submission to the current submission, some potentially backward-incompatible changes (the GET input can now be non-empty, the CREATE response can now contain anything as a result of trying to support different versions of WS-Addressing). On top of that, all these XSD and WSDL documents embedded in various versions of the spec are “non-normative”. The normative versions are said to be the ones at xmlsoap.org (XSD, WSDL). Those have not changed, which means that both versions on the W3C web site contain an incorrect version of the XSD/WSDL in the spec. Shouldn’t that lack of XML hygiene be a big deal for a specification that is implemented (via WS-Management, which references the W3C submission) in resources with long product development cycles, such as servers from Dell, HP and others that have WS-Management support directly on the motherboard? It would, if the XSD and WSDL had any relevance for the implementers. The fact that there was no outcry is yet another proof that the WS-Transfer XSD and the WSDL are irrelevant.
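
To illustrate the first bullet, here is a rough sketch of the kind of resource-specific, doc-literal WSDL Chris has in mind, using the toaster example. All element and message names, and the schema itself, are invented for illustration; the wsaw:Action values are the Get/GetResponse action URIs from WS-Transfer, assuming the W3C WS-Addressing WSDL binding is used:

<definitions name="Toaster"
             targetNamespace="http://example.org/toaster"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="http://example.org/toaster"
             xmlns:xs="http://www.w3.org/2001/XMLSchema"
             xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl">
  <types>
    <xs:schema targetNamespace="http://example.org/toaster" elementFormDefault="qualified">
      <!-- the resource-specific representation, which a resource-agnostic WSDL cannot describe -->
      <xs:element name="Toaster">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="Temperature" type="xs:int"/>
            <xs:element name="NumberOfSlots" type="xs:int"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
  </types>
  <message name="GetRequest"/>
  <message name="GetResponse">
    <part name="Body" element="tns:Toaster"/>
  </message>
  <portType name="ToasterResource">
    <operation name="Get">
      <input message="tns:GetRequest"
             wsaw:Action="http://schemas.xmlsoap.org/ws/2004/09/transfer/Get"/>
      <output message="tns:GetResponse"
              wsaw:Action="http://schemas.xmlsoap.org/ws/2004/09/transfer/GetResponse"/>
    </operation>
  </portType>
</definitions>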

So yes, Chris is right that the WS-Transfer WSDL (BTW all versions have the problem that Chris describes even though it could have been fixed in a backward-compatible way when the WSDL was altered) is not WS-I BP compliant. But since that WSDL is useless anyway, this shouldn’t keep anyone up at night. The WS-Transfer WSDL serves no purpose other than to annoy people who like things to be WS-I BP compliant.

But is it just the WS-Transfer WSDL that’s useless, or it is all of WS-Transfer?

I am not planning to go into WS-* vs. REST territory here. To those who are confused by the similarity between the names of WS-Transfer operations and HTTP methods and see WS-Transfer as a way to do “REST over SOAP” I’ll just point out that WS-Transfer is rarely used on its own but rather in conjunction with many other SOAP messages (like those defined by WS-Eventing and WS-Enumeration, plus countless custom operations). So much for uniform interfaces. WS-Transfer, at least as it is used today, is not about REST.

Rather, the reasons why I question the usefulness of WS-Transfer are more pragmatic than architectural. I can think of three potential justifications to carve out WS-Transfer as a separate specification, none of which is really convincing at this point in time.

The first reason is simply to avoid repeating the same text over and over again. If many specifications are going to describe the same SOAP message, just describe it once and refer to that description. Sounds good. But I know of three specifications that use WS-Transfer: WS-Management, WS-MeX and the Devices Profile for Web Services.

WS-MeX and the Devices Profile only use the GET operation. Which means that the only specification text that they can re-use from WS-Transfer is something like “send an empty get request and get something back”. WS-Transfer can’t say what that something is, only the domain-specific specifications can. As a result, you are spending as much time referencing WS-Transfer as would be spent defining a simple GET operation. For all practical purposes, you can implement WS-MeX and the Devices Profile without ever reading WS-Transfer.
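
For the record, here is roughly all there is to re-use: an empty-bodied request carrying the Get action URI defined by WS-Transfer (the endpoint and the WS-Addressing version are picked arbitrarily for this sketch). Everything interesting, namely what comes back, is left to the domain-specific specification:

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <s:Header>
    <wsa:Action>http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</wsa:Action>
    <wsa:To>http://example.org/some-resource</wsa:To>
    <wsa:MessageID>urn:uuid:11111111-2222-3333-4444-555555555555</wsa:MessageID>
    <wsa:ReplyTo>
      <wsa:Address>http://www.w3.org/2005/08/addressing/anonymous</wsa:Address>
    </wsa:ReplyTo>
  </s:Header>
  <!-- empty body: WS-Transfer has nothing to say about the response content -->
  <s:Body/>
</s:Envelope>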

The second potential reason is to provide a stand-alone piece of functionality that can be implemented once (e.g. as a library/module) and re-used for different purposes. Something that automatically kicks in when a WS-Transfer wsa:Action is detected. Think of a stand-alone encryption/decryption library for example, that looks for specific SOAP headers. Or WS-Eventing, for which a library can take over the task of managing the subscription lifecycle. Except WS-Transfer defines so little that it’s not clear what a stand-alone WS-Transfer implementation would do. Receive messages and do what with them? It is so tied to the back-end that there isn’t much you can do in a general fashion. Unless you are creating a library for a database product and you see WS-Transfer as a query interface for your database. But this only makes sense if you want to provide more fine-grained access to the XML content, which WS-Transfer does not do.

Which takes us to the third potential value of WS-Transfer, as a foundational specification on which to build extensions. Of the three this is the only one that I believed in at some point. WS-ResourceTransfer (WS-RT) was the main attempt at doing this. Any service that uses WS-Transfer could, via the magic of the SOAP processing model, offer a more precise/powerful access to the resources. But while this was possible in theory it hasn’t really panned out in practice for many reasons:

  • Some people (hints: Armonk; Blue) pushed hard to put WS-RT instructions in the body rather than in headers, seriously compromising its ability to seamlessly compose with existing SOAP messages.
  • WS-MeX and the Devices Profile typically deal with documents small enough that manipulating them as a whole is rarely a problem. This only leaves WS-Management which has its own “fragment transfer” mechanism so it doesn’t really need a stand-alone mechanism.
  • XQuery is now developing support for an update capability.

What then is left, in the Spring of 2008, to justify the need for WS-Transfer as a separate layer, rather than considering it an integral part of WS-Management? Not much. WS-MeX, in an earlier version, used to define its own GET operation and it wouldn’t be any worse off if it had stayed that way (or returned to it). Ditto for the Devices Profile. At this point, it’s mostly a matter of pragmatically cleaning up the mess without creating another one.

In retrospect (color me partially guilty), maybe one shouldn’t use the same architectural rules when attempting to design an interoperable standard stack for an industry as when refactoring a software project. Maybe one should resist the urge to refactor the “code” (or rather the PowerPoint stack) every time one detects the smallest conceptual redundancy. There is a cost in constant changes. There is a cost in specification cross-dependencies. WSDM experienced it first hand with the different versions of WS-Addressing (another dependency that didn’t need to be). WS-Management is seeing it from the perspective of standardization.

1 Comment

Filed under Everything, Microsoft, SOAP, Specs, Standards, WS-Management, WS-ResourceTransfer, WS-Transfer, XQuery

Windows XP Service Pack 3

Microsoft announced SP3 for Windows XP today. This white paper gives an overview of its content. It will ship on April 29th through Windows Update. Many of the updates are related to improved management, which makes sense at this stage of the game for the OS. It also makes sense as an attempt to position the OS against the rising desktop Linux threat. I wanted to see what specific management-related updates were contained. They are:

This is all good but it seems to take a very System Center-centric view of Windows management. There may be some more third-party-friendly improvements in the complete list of updates contained, but the link provided by the white paper (Knowledge Base article 936929) doesn’t seem to work at this time.

This was announced by Chris Keroack, the release manager for SP3, on this forum.

I can’t resist the temptation to translate into common English a few selected sentences from the white paper:

“For customers with existing Windows XP installations, Windows XP SP3 fills gaps in the updates they might have missed—for example, by declining individual updates when using Automatic Updates…”

Translation: You obviously did not know what you were doing when you refused that update last year so we will now force it on you inside a bundle.

“Developing service packs for operating systems like Windows XP, which is nearing its end-of-sales period, is a standard practice, and Microsoft does this for the convenience of its customers and partners.”

Translation: Don’t get too excited you will still eventually have to move to Vista.

“With few exceptions, Microsoft is not adding Windows Vista features to Windows XP through SP3. As noted earlier, one exception is the addition of NAP to Windows XP to help organizations running Windows XP to take advantage of new features in Windows Server 2008.”

Translation: We are not going to let this cut into the sales of Vista, except when not doing it cuts into the sales of Windows Server 2008.

[UPDATED 2008/4/29: Turns out it’s not shipping today on Windows Update, as previously announced, because SP3 breaks Microsoft’s own Retail Management System (RMS) application. As does Vista SP1. I wonder if other vendors can also ask Microsoft to hold a service pack if it breaks their application…]

[UPDATED 2008/5/12: And when it does ship it causes some computers to fail to boot. This page has a lot of information. One of my machines is one of the affected AMD-based HP desktops with the OEM OS, so I am very happy to have seen this before applying the service pack. I think I’ll take my time to apply it in case other things show up in addition to the need to disable the Intel drivers. This KB article covers the same issue.]

Comments Off on Windows XP Service Pack 3

Filed under Everything, Microsoft

An interesting move

I have been keeping an eye on Don Ferguson’s blog with the hope of one day reading a bit about Microsoft’s Oslo project and maybe the application management aspects of it. Instead, what I saw tonight is that Don is leaving Microsoft, after a short stay, to join CA. Welcome to the fun world of IT management Don! It seems like a safe bet to assume that he will work on application management (sorry, I am supposed to say “service management”), which is what I focus on at Oracle. So forget Oslo, now I have another reason to keep an eye on Don. Microsoft has hired quite a few people out of CA (including Anders Vinberg, a while ago, and my WSDM co-conspirator Igor Sedukhin), so I guess it’s only fair to see some movement the other way.

Since this has turned into a “people magazine” edition of this blog, IT management observers who don’t know it yet might be interested to learn that DMTF president Winston Bumpus left Dell to join VMware several months ago. Leaving aside the superiority of the SF Bay Area over Round Rock TX for boating purposes, this can also be seen as a clear signal of interest from VMware in standards and especially DMTF. OVF might only be the beginning.

If anyone who matters in IT management adopts a baby, checks into rehab or gets into a brawl, you’ll read about it first on this blog. Coming next week: exclusive photos from the beach-side retreat of the itSMF board. We’ll compare to photos from last year to find out whose six-pack shows the most impressive “continual service improvement”. And the following week, you’ll learn what really happened in that Vegas meeting room filled with IT management analysts. On the other hand, I do not cover fashion faux-pas because there are just too many of those in our industry.

1 Comment

Filed under CA, Everything, Microsoft, People

MicroSAP scarier than Microhoo

Here are the first three thoughts that came to my mind when I heard about Microsoft’s bid to acquire Yahoo (in order, to the extent that I can remember):

  • After XBox this will take their focus further away from enterprise software. Good for Oracle.
  • I wonder how my friends at Yahoo (none of whom I know to be great fans of Microsoft’s software) feel about this (on the other hand, the stock price rise can’t be too unpleasant for them).
  • Time to get ready to move away from Yahoo Mail

Turns out I should have added an additional piece of good news to the first bullet: after this they won’t be able to afford SAP for a while. This I just realized after reading this New York Times column which argues, in short, that Microsoft should acquire SAP rather than Yahoo.

A few quotes from the article:

  • “you’ve probably never heard of BEA”: this obviously doesn’t apply to readers of this blog.
  • “it’s not much fun hanging out on the enterprise side of the software business”: ouch. If it’s fun you’re after, try the IT management segment of the enterprise software business.
  • “to find the best acquisition strategy, ask, ‘What would Larry do?’”: does this come as a bumper sticker?

Of course if Microsoft gets Yahoo and things go really badly, then it could be SAP who acquires Microsoft…

Comments Off on MicroSAP scarier than Microhoo

Filed under Business, Everything, Microsoft, Off-topic, Oracle, SAP, Yahoo

Microsoft ditches SML, returns to SDM?

I gave in to the temptation of a tabloid-style title for this post, but the resulting guilt forces me to quickly explain that it is speculation and not based on any information other than what is in the links below (none of which explicitly refers to SDM or SML). And of course I work for a Microsoft competitor, so keep your skeptic hat on, as always.

The smoke that makes me picture that SML/SDM fire comes from this post on the Service Center team blog. In it, the product marketing manager for System Center Service Manager announces that the product will not ship until 2010. Here are the reasons given.

The relevant feedback here can be summarized as:

  • Improve performance
  • Enhance integration with the rest of the System Center product family and with the wider Microsoft product offering

To meet these requirements we have decided to replace specific components of the Service Manager infrastructure. We will also take this opportunity to align the product with the rest of the System Center family by taking advantage of proven technologies in use in those products

Let’s rewind a little bit and bring some context. Microsoft developed the Service Definition Model (SDM) to try to capture a consistent model of IT resources. There are several versions of SDM out there, and one of them is currently used by Operations Manager. It is how you capture domain-specific knowledge in a Management Pack (Microsoft’s name for a plug-in that lets you bring a new target type to Operations Manager). In order to get more people to write management packs that Operations Manager can consume, Microsoft decided to standardize SDM. It approached companies like IBM and HP and the SDM specification became SML. Except that there was a lot in SDM that looked like XSD, so SML was refactored as an extension of XSD (pulling in additions from Schematron) rather than a more stand-alone, management-specific approach like SDM. As I’ve argued before (look for the “XSD in SML” paragraph), in retrospect this was the wrong choice. SML was submitted to W3C and is now well advanced towards completion as a standard. Microsoft was forging ahead with the transition from SDM to SML and when they announced their upcoming CMDB they made it clear that it would use SML as its native metamodel (“we’re taking SML and making it the schema for CMDB” said Kirill Tatarinov who then headed the Service Center group).

Back to the present time. This NetworkWorld article clarifies that it’s a redesign of the CMDB part of Service Center that is causing the delay: “beta testing revealed performance and scalability issues with the CMDB and Microsoft plans to rebuild its architecture using components already used in Operations Manager.” More specifically, Robert Reynolds, a “group product planner for System Center”, explains that “the core model-based data store in Operations Manager has the basic pieces that we need”. That “model-based data store” is the one that uses SDM. As a side note, I would very much like to know what part of the “performance and scalability issues” comes from using XSD (where a lot of complications come from features not relevant for systems management).

Thus the “enhance integration with the rest of the System Center product family” in the original blog post reads a lot like dumping SML as the metamodel for the CMDB in favor of SDM (or an updated version of SDM). QED. Kind of.

In addition to the problems Microsoft uncovered with the Service Center Beta, the upcoming changes around project Oslo might have further weakened the justification for using SML. In another FUD-spreading blog post, I hypothesized about what Oslo means for SML/CML. This recent development with the CMDB reinforces that view.

I understand that there is probably more to this decision at Microsoft than the SML/SDM question but this aspect is the one that may have an impact not just on Microsoft customers but on others who are considering using SML. In the larger scheme of things, the overarching technical question is whether one metamodel (be it SDM, SML, MOF or something else) can efficiently be used to represent models across the entire IT stack. I am growing increasingly convinced that it cannot.

4 Comments

Filed under CMDB, Everything, IT Systems Mgmt, Microsoft, Oslo, SML, Specs, Standards

Microsoft’s Bob Muglia opens the virtualized kimono

In a recently published “executive e-mail”, Microsoft’s Bob Muglia describes the company’s view of virtualization. You won’t be surprised to learn that he thinks it’s a big deal. Being an IT management geek, I fast-forwarded to the part about management and of course I fully agree with him on the “the importance of integrated management”. But his definition of “integrated” is slightly different from mine as becomes clear when he further qualifies it as the use of “a single set of management tools”. Sure, that makes for easier integration, but I am still of the school of thought (despite the current sorry state of management integration) that we can and must find ways to integrate heterogeneous management tools.

“Although virtualization has been around for more than four decades, the software industry is just beginning to understand the full implications of this important technology” says Bob Muglia. I am tempted to slightly re-write the second part of the sentence as “the software marketing industry is just beginning to understand the full potential of this important buzzword”. To illustrate this, look no further than that same executive e-mail, in which we learn that Terminal Server actually provides “presentation virtualization”. Soon we’ll hear that the Windows TCP/IP stack provides “geographic virtualization” and that solitaire.exe provides “card deck virtualization”.

Then there is SoftGrid (or rather, “Microsoft SoftGrid Application Virtualization”). I like the technology behind SoftGrid but when Microsoft announced this acquisition my initial thought was that coming from the company that owns the OS and the development/deployment environment on top of it, this acquisition was quite an admission of failure. And I am still very puzzled by the relevance of the SoftGrid approach in the current environment. Here is my proposed motto for SoftGrid: “can’t AJAX please go away”. Yes, I know, CAD, Photoshop, blah, blah, but what proportion of the users of these applications want desktop virtualization? And of those, what proportion can’t be satisfied with “regular” desktop virtualization (like Virtual PC, especially when reinforced with the graphical rendering capabilities from Calista which Microsoft just acquired)?

In an inspirational statement, Bob Muglia asks us to “imagine, for example, if your employees could access their personalized desktop, with all of their settings and preferences intact, on any machine, from any location”. Yes, imagine that. We’d call it the Web.

In tangentially related news, David Chappell recently released a Microsoft-sponsored white paper that describes what Microsoft calls “Software + Service”. As usual, David does a good job of explaining what Microsoft means, using clearly-defined terms (e.g. “on-premises” is used as an organizational, not geographical concept) and by making the obvious connections with existing practices such as invoking partner/supplier services and SOA. There isn’t a ton of meat behind the concept of S+S once you’ve gotten the point that even in a “cloud computing” world there is still some software that you’ll run in your organization. But since, like Microsoft, my employer (Oracle) also makes most of its money from licenses today, I can’t disagree with bringing that up…

And like Microsoft, Oracle is also very aware of the move towards SaaS and engaged in it. In that respect, figure 11 of the white paper is where a pro-Microsoft bias appears (even though I understand that the names in the figure are simply supposed to be “representative examples”). Going by it, there are the SaaS companies (that would be the cool cats of Amazon, Salesforce.com and Google plus of course Microsoft) and there are the on-premises companies (where Microsoft is joined by Oracle, SAP and IBM). Which tends to ignore the fact that Oracle is arguably more advanced than Microsoft both in terms of delivering infrastructure to SaaS providers and being a SaaS provider itself. And SAP and IBM would also probably want to have a word with you on this. But then again, they can sponsor their own white paper.

Comments Off on Microsoft’s Bob Muglia opens the virtualized kimono

Filed under Everything, Mgmt integration, Microsoft, Virtualization