Category Archives: Business

Unhealthy fun with IP aspects of optionality in specifications

The previous blog post has re-awakened the spec lawyer in me (on the hobby glamor scale, spec lawyering ranks just below collecting dead bugs). Which brought back to my mind a peculiar aspect of the “Microsoft Open Specification Promise”.

The promise was published to address fears some people had that adopting Microsoft-created specifications (especially non-standard ones) would put them at risk of patent claims from Microsoft. The core of the promise is only two paragraphs long. The first one contains this section:

“To clarify, ‘Microsoft Necessary Claims’ are those claims of Microsoft-owned or Microsoft-controlled patents that are necessary to implement only the required portions of the Covered Specification that are described in detail and not merely referenced in such Specification.”

That seems to pretty clearly state that only the required portions of a specification are covered by this promise. Which is a very significant limitation, as specifications often tend to (over-) use optional features. But if you read further, the list of “Covered Specifications” (those to which the promise applies) contains this statement:

“this Promise also applies to the required elements of optional portions of such specifications.”

I find this very puzzling because it seems to contradict the previous statement. And more importantly, it’s hard to understand what it really means. That’s where the fun starts:

For example, if my spec defines a document <a> with an optional element <b> that itself has an optional sub-element <c>, as in:

  <a>
    <b>
      <c/>
    </b>
  </a>

The <b> element is a required part of the “b” optional portion of the spec (the portion of the spec that defines that element), so I guess it is covered, but is <c>? That’s an optional element of an optional portion (the “b” portion) of the spec, so it isn’t. Unless you consider the portion of the spec that defines <c> (the “c” portion of the spec) to be an optional portion of the spec itself. In which case the <c> element is covered.

But if you take that second line of reasoning, then everything in the spec is covered because for any feature, no matter how “optional” it is, there is a portion (optional or not) of the specification that describes this feature. And if you are implementing that portion, for example the portion that defines element <foo>, by definition element <foo> is required for it (how can an element not be a required part of its own definition?). But if Microsoft intended to cover all parts of the specification, why not say so rather than use this recursion-inducing “required elements of optional portions” statement? And if not, why did they choose to only cover optional elements that are one degree removed from the base of the specification?
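The two readings can be made concrete with a toy model. This is purely illustrative (the tree structure and function names are invented), mirroring the <a>/<b>/<c> example above:

```python
# Hypothetical spec: <a> (required) contains optional <b>, which
# contains optional <c>. Everything here is made up for illustration.
spec = {"name": "a", "optional": False, "children": [
    {"name": "b", "optional": True, "children": [
        {"name": "c", "optional": True, "children": []},
    ]},
]}

def covered_narrow(node, inside_optional=False):
    """Reading 1: the promise covers required elements, plus the
    required parts of a top-level optional portion. An optional
    element inside an optional portion is NOT covered."""
    out = []
    # Covered if required, or if it heads an optional portion and we
    # are not already inside another optional portion.
    if not node["optional"] or not inside_optional:
        out.append(node["name"])
    for child in node["children"]:
        out.extend(covered_narrow(child, inside_optional or node["optional"]))
    return out

def covered_recursive(node):
    """Reading 2: each element's own definition is itself a 'portion'
    of the spec, within which the element is trivially required --
    so the recursion covers everything."""
    out = [node["name"]]
    for child in node["children"]:
        out.extend(covered_recursive(child))
    return out

print(covered_narrow(spec))     # ['a', 'b'] -- <c> is left out
print(covered_recursive(spec))  # ['a', 'b', 'c'] -- everything
```

The gap between the two outputs is exactly the ambiguity in question: under the first reading <c> is uncovered, under the second the "optional portions" clause adds nothing because nothing was ever excluded.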

Wouldn’t it be fun to see a court of law deal with a suit that hinges on this statement (provided that you’re not a party in the suit, of course)?

When a real spec lawyer took a look at this promise, he didn’t comment on the second statement, the one that raises the most questions in my mind.

[UPDATED 2008/4/29: The “promise” has seen many updates. The original (which is the one Andy Updegrove reviewed at the previous link) came out on 2006/9/12. The one I reviewed is dated 2008/3/25. There is no change history on the Microsoft site, but the Wayback machine has archived some older versions. The oldest one I can find is dated 2006/10/23 and it does not contain the sentence about “required elements of optional portions” that puzzles me. So it’s likely that the version Andy reviewed didn’t include this either and as such was clearly limited to required portions of the specifications (something that Andy pointed out).]


Filed under Business, Everything, Microsoft, Patents, Specs, Standards

MicroSAP scarier than Microhoo

Here are the first three thoughts that came to my mind when I heard about Microsoft’s bid to acquire Yahoo (in order, to the extent that I can remember):

  • After Xbox, this will take their focus further away from enterprise software. Good for Oracle.
  • I wonder how my friends at Yahoo (none of whom I know to be great fans of Microsoft’s software) feel about this (on the other hand, the stock price rise can’t be too unpleasant for them)
  • Time to get ready to move away from Yahoo Mail

Turns out I should have added an additional piece of good news to the first bullet: after this they won’t be able to afford SAP for a while. This I just realized after reading this New York Times column which argues, in short, that Microsoft should acquire SAP rather than Yahoo.

A few quotes from the article:

  • “you’ve probably never heard of BEA”: this obviously doesn’t apply to readers of this blog.
  • “it’s not much fun hanging out on the enterprise side of the software business”: ouch. If it’s fun you’re after, try the IT management segment of the enterprise software business.
  • “to find the best acquisition strategy, ask, ‘What would Larry do?’”: does this come as a bumper sticker?

Of course if Microsoft gets Yahoo and things go really badly, then it could be SAP who acquires Microsoft…


Filed under Business, Everything, Microsoft, Off-topic, Oracle, SAP, Yahoo

+1 to the FTC

I noticed two patent-related news items tonight that could be of interest to those of us who have to deal with the “fun” of patents as they apply to IT. The first one is an FTC settlement that enforces a patent promise made in a standards body. It is not uncommon for participation in a standardization group to require some form of patent grant (royalty-free, RAND, etc). This is why employees in companies with large patent portfolios have to jump through endless hoops and go through legal reviews just to be authorized to join a working group at OASIS (one of the organizations with the clearest patent policy, patiently crafted through a lot of debate). Something similar seems to have happened at IEEE during the work on the Ethernet standard: National Semiconductor promised a flat $1,000 license for two of their patents (pending at the time) that are essential to the implementation of the standard. And we all know that that little standard happened to become quite successful (to IBM’s despair). Years later, a patent troll that had gotten hold of the patents tried to walk away from the promise. In short, the FTC stopped them. If this is of interest to you, go read Andy Updegrove’s much more detailed analysis (including his view that this is important not just for standards but also for open source).

At my level of understanding of intellectual property law as it applies to the IT industry (I am not a lawyer, but I have spent a fair amount of time discussing the topic with them), this sounds like a good decision. But it is a tiny light in an ocean of darkness that creates so many opportunities for abuse. And the resulting fear prevents a lot of good work from happening. The second patent-related news item of the day (a patent reform bill driven by “major U.S. high-tech companies”) might do something to address the larger problem. Reducing damages, strengthening the post-grant review process and ending the “forum shopping” that sends most of these suits to Texas sounds like positive steps. All in all, I am more sympathetic to “major U.S. high-tech companies” (which include my current and former employers) than to patent trolls. At the same time, I have no illusion that “major U.S. high-tech companies” are out to watch for the best interest of entrepreneurs and customers.


Filed under Business, Everything, Patents

IT management in a world of utility IT

A cynic might call it “could computing” rather than “cloud computing”. What if you could get rid of your data center. What if you could pay only for what you use. What if you could ramp up your capacity on the fly. We’ve been hearing these promising pitches for a while now and recently the intensity has increased, fueled by some real advances.

As an IT management architect who is unfortunately unlikely to be in a position to retire anytime soon (donations accepted for the send-William-to-retirement-on-a-beach fund), I have to wonder what IT management would look like in a world in which utility computing is a common reality.

First, these utility computing providers themselves will need plenty of IT management, if not necessarily the exact same kind that is being sold to enterprises today. You still need provisioning (automated of course). You definitely need access metering and billing. Disaster recovery. You still have to deal with change planning, asset management and maybe portfolio management. You need processes and tools to support them. Of course you still have to monitor, manage SLAs, and pinpoint problems and opportunities for improvement. Etc. Are all of these a source of competitive advantage? Google is well-known for writing its infrastructure software (and of course also its applications) in house but there is no reason it should be that way, especially as the industry matures. Even when your business is to run a data center, not all aspects of IT management provide competitive differentiation. It is also very unclear at this point what the mix will be of utility providers that offer raw infrastructure (like EC2/S3) versus applications (like CRM as a service), a difference that may change the scope of what they would consider their crown jewels.

An important variable in determining the market for IT management software directed at utility providers is the number of these providers. Will there be a handful or hundreds? Many people seem to assume a small number, but my intuition goes the other way. The two main reasons for being only a handful would be regulation and infrastructure limitations. But, unlike with today’s utilities, I don’t see either taking place for utility computing (unless you assume that the network infrastructure is going to get vertically integrated in the utility data center offering). The more independent utility computing providers there are, the more it makes sense for them to pool resources (either explicitly through projects like the Collaborative Software Initiative or implicitly by buying from the same set of vendors) which creates a market for IT management products for utility providers. And conversely, the more of a market offering there is for the software and hardware building blocks of a utility computing provider, the lower the economies of scale (e.g. in software development costs) that would tend to concentrate the industry.

Oracle for one is already selling to utility providers (SaaS-type more than EC2-type at this point) with solutions that address scalability, SLA and multi-tenancy. Those solutions go beyond the scope of this article (they include not just IT management software but also databases and applications) but Oracle Enterprise Manager for IT management is also part of the solution. According to this Aberdeen report the company is doing very well in that market.

The other side of the equation is the IT management software that is needed by the consumers of utility computing. Network management becomes even more important. Identity/security management. Desktop management of some sort (depending on whether and what kind of desktop virtualization you use). And, as Microsoft reminds us with S+S, you will most likely still be running some software on-premises that needs to be managed (Carr agrees). The new, interesting thing is going to be the IT infrastructure to manage your usage of utility computing services as well as their interactions with your in-house software. Which sounds eerily familiar. In the early days of WSMF, one of the scenarios we were attempting to address (arguably ahead of the times) was service management across business partners (that is, the protocols and models were supposed to allow companies to expose some amount of manageability along with the operational services, so that service consumers would be able to optimize their IT management decision by taking into account management aspects of the consumed services). You can see this in the fact that the WSMF-WSM specification (that I co-authored and edited many years ago at HP) contains a model of a “conversation” that represents “set of related messages exchanged with other Web services” (a decentralized view of a BPEL instance, one that represents just one service’s view of its participation in the instance). Well, replace “business partner” with “SaaS provider” and you’re in a very similar situation. If my business application calls a mix of internal services, SaaS-type services and possibly some business partner services, managing SLAs and doing impact/root cause analysis works a lot better if you get some management information from these other services. Whether it is offered by the service owner directly, by a proxy/adapter that you put on your end or by a neutral third party in charge of measuring/enforcing SLAs. 
There are aspects of this that are “regular” SOA management challenges (i.e. that apply whenever you compose services, whether you host them yourself or not) and there are aspects (security, billing, SLA, compliance, selection of partners, negotiation) that are handled differently in the situation where the service is consumed from a third party. But by and large, it remains a problem of management integration in a world of composed, orchestrated and/or distributed applications. Which is where it connects with my day job at Oracle.
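The cross-provider impact analysis described above can be sketched very simply: if each composed service (internal, SaaS, partner) exposes some health information, root-cause analysis can look past your own infrastructure. The service names, dependency map and statuses below are entirely hypothetical:

```python
# Dependencies of a business application on a mix of internal,
# SaaS and partner services (all names invented for illustration).
DEPENDENCIES = {
    "order-entry": ["inventory", "crm-saas", "partner-credit-check"],
}

# Health as reported by each service's owner, a local proxy/adapter,
# or a neutral third party measuring SLAs.
SERVICE_HEALTH = {
    "inventory": "ok",
    "crm-saas": "degraded",
    "partner-credit-check": "ok",
}

def root_cause_candidates(app):
    """Return the unhealthy dependencies of a business application."""
    return [svc for svc in DEPENDENCIES[app]
            if SERVICE_HEALTH.get(svc) != "ok"]

print(root_cause_candidates("order-entry"))  # ['crm-saas']
```

Without the externally provided health data, the analysis would dead-end at "order-entry is slow" with no way to tell an internal problem from a provider-side one.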

Depending on the usage type and the level of industry standardization, switching from one utility computing provider to the other may be relatively painless and easy (modify some registry entries or some policy or even let it happen automatically based on automated policies triggered by a price change for example) or a major task (transferring huge amounts of data, translating virtual machines from one VM format to another, performing in-depth security analysis…). Market realities will impact the IT tools that get developed and the available IT tools will in return shape the market.
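At the "painless" end of that spectrum, an automated switching policy can be almost trivial. A minimal sketch, with invented provider names, prices and policy parameters:

```python
# Hypothetical per-provider prices in $/CPU-hour.
PRICES = {"provider-a": 0.10, "provider-b": 0.08}

def select_provider(prices, max_price=0.12):
    """Policy: pick the cheapest provider under a price ceiling,
    or None if no provider qualifies. A price change in the input
    is all it takes to trigger a switch."""
    eligible = {p: cost for p, cost in prices.items() if cost <= max_price}
    return min(eligible, key=eligible.get) if eligible else None

print(select_provider(PRICES))  # 'provider-b'
```

The hard end of the spectrum (data transfer, VM format translation, security review) is precisely what such a one-liner policy glosses over, which is the point of the paragraph above.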

Another intriguing opportunity, if you assume a mix of on-premises computing and utility-based computing, is that of selling back your spare capacity on the grid. That too would require plenty of supporting IT management software for provisioning, securing, monitoring and policing (coming soon to an SEC filing: “our business was hurt by weak sales of our flagship Pepsi-Cola drink, partially offset by revenue from renting computing power from our data center to the Coca-Cola Company to handle their exploding ERP application volume”). I believe my neighbors with solar panels on their roofs are able to run their electric meter backward and sell power to PG&E when they generate more than they use. But I’ll stop here with the electric grid analogy because it is already overused. I haven’t read Carr’s book so the comment may be unfair, but based on the extracts he posted and on reviews, he seems to have a hard time letting go of that analogy. It does a good job of making the initial point but gets tiresome after a while. Having personally experienced the Silicon Valley summer rolling black-outs, I very much hope the economics of utility computing won’t be as warped. For example, I hope that the telcos will only act as technical, not commercial, intermediaries. One of the many problems in California is that consumers don’t buy from the producers but from a distributor (PG&E in the Bay Area) who sells at a fixed price and then has to buy at pretty much any price from the producers and brokers who made a killing manipulating the supply during those summers. Utility computing is another area in which economics and technology are intrinsically and dynamically linked in a way that makes predictions very difficult.

For those not yet bored of this topic (or in search of a more insightful analysis), Redmonk’s Coté has taken a crack at that same question, but unlike me he stays clear of any amateurish attempt at an economic analysis. You may also want to read Ian Foster’s analysis (interweaving pieces of technology, standards, economy, marketing, computer history and even some movie trivia) on how these “clouds” line up with the “grids” that he and others have been working on for a while now. Some will see his post as a welcome reminder that the only thing really new in “cloud” computing is the name and others will say that the other new thing is that it is actually happening in a way that matters to more than a few academics and that Ian is just trying to hitch his jalopy to the express train that’s passing him. For once I am in the “less cynical” camp on this and I think a lot of the “traditional” Grid work is still very relevant. Did I hear “EC2 components for SmartFrog”?

[UPDATED 2008/6/30: For a comparison of “cloud” and “grid”, see here.]

[UPDATED 2008/9/22: More on the Cloud vs. Grid debate: a paper critical of Grid (in the OGF sense of the term) efforts and Ian Foster’s reply (read the comments too).]


Filed under Business, Everything, IT Systems Mgmt, Utility computing, Virtualization

Standards are good for customers… right?

Standards are good for customers. They avoid vendor lock-in. They protect the customer’s investment. Demanding standards compliance is a tool customers have to defend their interests when dealing with vendors. Right?

Well, in general yes. Except when standards become tools for vendors to attempt to confuse customers.

In the recent past, I have indirectly witnessed vendors liberally using the “standard” word and making claims of compliance with (and touting the need to conform to) specifications…

  • that have barely been submitted for standardization (SML),
  • that haven’t even been published in any form (CMDBF), or
  • that don’t even exist as a draft (CML – no link available, and for a reason).

Doesn’t something sound fishy when the logic goes through such self-negating statements as: “standards are good for you because they give you a choice of vendor. And we are the only vendor who supports standard X so you need to buy from us.” Especially since, if it were true that the vendor in question implemented standard X, it would not be their software that I would want to buy from them but their time machine.

All this doesn’t negate the fundamental usefulness of standards. And I don’t mean to attack the three specifications listed above either. They all have very good potential to turn out to be useful. HP is fully engaged in the creation of all three (I am personally involved in authoring them, which is generally why wind of these exaggerated vendor claims eventually gets back to me).

Vendors who are used to creating proprietary environments haven’t all changed their mind. They’ve sometimes just changed their rhetoric and updated their practices to play the standards game (changing the game itself in the process, and often not for the better). Over-eagerness should always arouse suspicion.


Filed under Business, CMDB Federation, CMDBf, CML, Everything, SML, Standards

Going shopping

The Mercury Interactive acquisition is very exciting. Especially since it brings Systinet along, and not the Systinet of a few years ago. The company very smartly got itself out of the “Web services infrastructure” melee and is now the leader in SOA governance. A much better match for OpenView.

On the other hand, this would be scary. I am sure we’ll get to $6 billion in revenue but I am willing to wait a little bit and earn it.


Filed under Business, Everything

CA joining “Federated CMDB” effort

CA announced today that they are joining BMC, HP, IBM and Fujitsu in the effort announced last week to standardize ways to create a Federated CMDB out of distributed configuration repositories. Welcome!

Since I am pointing to press releases left and right, here is one more.

HP is not only tackling the challenge of supporting ITSM processes in a top-down fashion (like Federated CMDB). We are also attacking it bottom up through some very concrete integrations. Such as the SOAP-based incident exchange interface developed with SAP that is mentioned in this press release.


Filed under Business, Everything, Standards

HP/IBM/Intel/Microsoft roadmap

HP, IBM, Intel and Microsoft just released a roadmap to describe how we plan to converge the myriad of Web services specs currently used for resource access, eventing and management. Basically converging WS-Management and the stack under it with WSDM and the stack under it. This roadmap should make users of these specs feel a lot more at ease. It is also specific enough to give a good indication of the smart way to architect systems today in a way that will align with the reconciled version. Even though we don’t have spec candidates ready to share at this point, we thought it would be valuable to let people know the direction we are heading in. The resulting set of specifications will be based on the currently existing WS-Transfer, WS-Eventing and WS-Enumeration. Which, as it happens, are published as member submissions by the W3C today.


Filed under Business, Everything, Standards

Keeping track

After Systinet, it’s now Actional’s turn to take the plunge. For those trying to keep track, Jeff Schneider has a useful recap of SOA-related acquisitions and mergers. It’s only missing the name changes to be complete (e.g. Corporate Oxygen to Confluent, Digital Evolution to SOA Software…).


Filed under Business, Everything

Bridging the gap between business and IT: application to software pricing

With the ongoing virtualization of the computing infrastructure as well as the proliferation of multi-core processors, revising software pricing strategies (often based on number of processors) is a hot topic. The usual spin is: we can’t keep using the current model (as “number of processors” doesn’t mean much anymore) so we have to think of a new one. But there is another way to look at it. Revising the pricing strategy not because we have to but because we can.

Pricing software based on the number of processors only makes sense because we are used to it. We are used to it because it is prevalent. It is prevalent because it is easy to measure and apply (or was until recently). It’s hard to measure the value to the business of a piece of software but it is easy to measure how many processors run it. So we use the number of processors as an approximation of the value. This approach to pricing is very similar to the approach of policy-driven IT management that creates SLAs at different levels of the architecture. The IT administrator is told to make sure that a certain server stays up 99.9% of the time. Does the business really care that the server is up? No, what it cares about is that the business processes can progress, and these processes happen to use applications running on the server. But if we told the IT admin “make sure the business processes can progress”, he wouldn’t know what to do in practical terms. He wouldn’t know whether the downtime to patch the server is worth it or not. By giving him a more measurable metric (uptime), the IT admin is now able to make the necessary decisions to meet the specific uptime SLA. Just like the number of processors is used as a convenient approximation of the business value of the software, the uptime SLA is used as a convenient approximation of the business need. Like any approximation, they are not perfect and making decisions based on them rarely leads to optimal decisions. But when that’s all you can do, you call it good enough and you go with it.
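Part of what makes "99.9% uptime" such a convenient proxy is that it converts directly into a number the IT admin can plan patch windows against. A quick back-of-the-envelope illustration:

```python
def downtime_budget_hours(sla_uptime, period_hours=365 * 24):
    """Hours of allowed downtime per period for a given uptime SLA."""
    return (1 - sla_uptime) * period_hours

print(round(downtime_budget_hours(0.999), 2))   # 99.9%  -> ~8.76 hours/year
print(round(downtime_budget_hours(0.9999), 2))  # 99.99% -> ~0.88 hours/year
```

Nothing in that arithmetic says anything about business value, which is exactly the approximation problem the paragraph above describes.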

One of the key promises of the effort to “bridge the gap between business and IT” is to better align infrastructure-level decisions with the real business impact. Products like OpenView’s Business Process Insight allow you to map business processes to the IT infrastructure that powers the steps of the process, so that you can make decisions on managing the IT elements based on their real impact on the business rather than on fixed SLAs. We are seeing a huge amount of interest in this and there is a lot of room for optimization once this correlation is established. At this point, the focus is on using this to automate and optimize IT management. But this is so similar to the software pricing issue that one has to wonder whether these technologies won’t eventually allow us to price software in a way better aligned with the real business value provided by the software. And who knows, maybe one day management software will be used to tie salaries to business value rather than being driven by approximations such as “number of hours worked”, “number of bugs fixed”, “uptime of the server”, “number of specs produced”.


Filed under Business, Everything

Spreading the word of SOA and SOA management

Over the last couple days, a few articles came up that help explain HP’s vision for Management of the Adaptive Enterprise, so here are the links.

Yesterday, Mark Potts published an article describing the value of SOA for enterprises and more specifically the management aspects of SOA (security, life cycle and configuration, management of infrastructure services and business services, governance, etc). BTW, the SOA practice from HP Consulting and Integration that Mark refers to at the end of his article is what I mentioned in my previous post.

Another interesting article is Alan Weissberger’s enthusiastic report from GGF 14. Alan follows GGF and related OASIS activities very closely, doesn’t fall for fluff and is not easily impressed, so this is a testimony to the great work that Heather, Bryan, Bill and Barry did there, presenting a WSDM deep dive, the HP/IBM WSDM demos (which they also showed at IEEE ICAC in Seattle) and talking about the recently released HP/IBM/CA roadmap for management using Web services. These four should call themselves “Heather and the Bs” or “HB3” for short if they keep touring the world showing their cool demos. Can’t wait to see them at the Shoreline Amphitheatre. Of course, Alan’s positive comments also, and mainly, come out of all the hard technical work that led to this successful GGF14, including the OGSA WSRF Basic Profile.

Two more articles to finish, both about the HP/IBM/CA roadmap. I talked to the journalists for both of these articles, one from ComputerWorld and one from the Computer Business Review.

Four good articles in two days. It is very encouraging to see understanding grow of how we are unleashing the power of SOAs through adaptive management. This is what the roadmap is all about: explaining the objectives to people and inviting them on board.


Filed under Business, Everything, Tech

Sea, Services and Sun

There is a lot to like about HP’s announcement today that the company’s consulting arm is now offering seven new SOA services (including, of course, SOA Management) and opening four SOA competency centers (see the press release and the press coverage). I must admit that the idea of one day moving from the software group to HP Services and working on SOA solutions on the French Riviera at Sophia Antipolis (one of the four competency centers) is not without appeal. I am now spending a lot more time with customers than I used to anyway, so it wouldn’t be too wide a chasm in that respect.

Even putting aside my bias for the good life in the “Côte d’Azur”, this is very good news. Good news of course for OpenView, including our SOA Manager product, but HP Services actually only represents a relatively small portion of OpenView sales.

More importantly, the SOA specialists in HP Services can help customers build an SOA by putting together parts from all our partners (Oracle, SAP, BEA, Microsoft, etc) as well as open source. Which is how you really want to go about building an SOA. In theory it is possible to build an SOA using homogeneous products from the same vendor, but in practice this is as likely as designing a reusable and well-factored interface while having only one use case and knowing about only one client for your service. In both cases, assumptions creep unnoticed into your contracts and abstractions. And you end up with a more tightly coupled system, which comes back to bite you as the number of participants grows.


Filed under Business, Everything

Someone is paying attention

It’s nice to see that, while most of the tech press seems happy to copy/paste from misleading press briefing documents rather than do any checking of their own, some analysts take a little bit more time to look through the smoke. So, when Gartner looks into the recent Microsoft/Sun announcement (see “Progress Report on Sun/Microsoft Initiative Lacks Substance”) their recommendation is to “view the latest Sun/Microsoft announcement as primarily public-relations-oriented”. Similar take from Jason Bloomberg from ZapThink, who thinks that this “doesn’t do anything to contradict the fact that Microsoft is the big gorilla in this relationship”. And Forrester’s Randy Heffner (quoted in “Analysts Question Microsoft-Sun Alliance”) thinks that “Bottom line: Web services interoperability is not yet part of the picture”. Oh, and by the way “the WS-Management group has yet to come clean on how they will work with the WSDM standard approved by OASIS,” Heffner also says. “Again, WS-Management is still just a specification in the hands of vendors”. Very much so. But in PR-land everything looks different. As tech journalists write these articles including insight from analysts that contradicts what the tech press reported a couple days earlier, I wonder if they ever think “hmm, maybe I should be the one doing reality checks on the content of press releases rather than going around collecting quotes, and then the analysts would focus on real in-depth analysis rather than just doing the basic debunking work…”


Filed under Business, Everything, Standards, Tech

Reality check on Microsoft/Sun claims about single sign-on

This morning I learned that Microsoft and Sun had a public event where the CEOs reported on a year of working together. This is a follow-up to Greg Papadopoulos’ report on the progress of the “technical collaboration”. In that post, Greg told us about the amazing technical outcomes of the work between the two companies and, being very familiar with the specs he was referring to, I couldn’t help but point out that the result of the “technical collaboration” he was talking about looked a lot like Sun rubber-stamping a bunch of Microsoft specifications without much input from Sun engineers.

So when I heard this morning that the two companies were coming out publicly with the result of their work, I thought it would be fair for me to update my blog and include this information.

Plus, reading the press release and Greg’s Q&A session, it sounded pretty impressive and it would have been bad faith on my part not to acknowledge that Greg actually had something to brag about, it just wasn’t yet public at the time. In effect, it sounded like they had found a way to make the Liberty Alliance specs and WS-Federation interoperate with one another.

From Greg’s Q&A: “In a nutshell, we resolved and aligned what Microsoft was trying to accomplish with Passport and the WS-Federation with what we’ve been doing with the Liberty Alliance. So, we’ve agreed upon a way to enable single sign-on to the Internet (whether through a .NET service or a Java Enterprise System service), and federate across those platforms based on service-level agreements and/or identity agreements between those services. That’s a major milestone.”

Yes Greg, it would have been. Except this is not what is delivered. The two specs that are supposed to support these claims are Web SSO MEX and Web SSO Interop Profile, which are 14 and 9 pages long respectively. Now I know better than to equate the length of a spec with its value, but once you cut the boilerplate out of these 14 and 9 pages, there is very little left to deliver on claims as ambitious as Greg’s.

The reason is that these specs in no way provide interop between a system built using Liberty Alliance and a system built using WS-Federation. All they do is allow each system to find out which spec the other uses.

One way to think about it: we have an English speaker and a Korean speaker in the same room, and they are not able to talk. What the two new specs do is put a lapel pin with a British flag on the English speaker and a lapel pin with a Korean flag on the Korean speaker. Yes, this helps a bit. At least now the Korean speaker knows what the weird language the other guy is speaking is, and he can go to school and learn it. But just finding out what language the other guy speaks is a far cry from actually being able to communicate with him.

Even with these specs, a system based on Liberty Alliance and one based on WS-Federation are still incompatible, and you cannot single sign-on from one to the other. Or rather, you can, but only if your client implements both. This is said explicitly in the Web SSO Interop Profile spec (look for the first line of page 5): “A compliant identity provider implementation MUST support both protocol suites”. Well, this isn’t interop, it’s duplication. Otherwise I could claim I have solved the problem of interoperability between English and Korean just by asking everyone to learn both languages. Not very convincing…
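The distinction between discovery and interoperability can be sketched in a few lines of purely illustrative Python. To be clear, none of the names below come from the actual specs; this is just the lapel-pin argument in code form:

```python
# Illustrative sketch only: metadata exchange tells a client which protocol
# suite an identity provider speaks, but it does not translate between
# suites. All identifiers here are hypothetical, not from the specs.

LIBERTY = "liberty-alliance"
WS_FEDERATION = "ws-federation"


def discover_suite(idp_metadata: dict) -> str:
    """The 'lapel pin': read which suite the provider advertises."""
    return idp_metadata["protocol_suite"]


def sign_on(idp_metadata: dict, client_suites: set) -> str:
    suite = discover_suite(idp_metadata)
    # Discovery doesn't bridge anything: the client still has to implement
    # whatever suite the provider uses, hence the spec's requirement that a
    # compliant implementation "MUST support both protocol suites".
    if suite not in client_suites:
        raise RuntimeError(
            f"client cannot speak {suite}; discovery alone is not interop"
        )
    return f"signed on via {suite}"


# A client that implements only Liberty cannot talk to a WS-Federation
# provider, no matter how good the metadata exchange is:
liberty_only = {LIBERTY}
both = {LIBERTY, WS_FEDERATION}
```

In this toy model, `sign_on` with `liberty_only` against a WS-Federation provider fails outright; only a client carrying `both` suites succeeds, which is exactly the duplication the spec mandates.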

But of course Microsoft and Sun knew that they could get away with that in the press. For example, CNet wrote “The Web Single Sign-On Metadata Exchange Protocol and Web Single Sign-On Interoperability Profile will bridge Web identity management systems based on the Liberty and Web services specifications, the companies said”. As the Columbia Journalism Review keeps pointing out, real journalists don’t just report what people say; they check whether it’s true. And in this case, it simply isn’t.

1 Comment

Filed under Business, Everything, Security, Standards, Tech

Greg Papadopoulos on “collaborating” with Microsoft

Greg Papadopoulos (Sun’s CTO) recently posted a blog entry to tell us, a year after, what it’s been like working with Microsoft. For those who forgot, a year ago Microsoft sent a $2 billion check to Sun to settle some legal disputes and turn Sun into a technical partner. So what kind of technical partnership is that? Well, according to Greg they’ve been making “some real architectural progress”. And he gives us four examples: WS-Addressing, WS-Management, WS-Eventing, WS-MetadataExchange. The funny thing is that, for each of these specifications, Microsoft had written and publicized the spec before Sun became a partner, and then just put out a slightly updated version with Sun and other companies added as authors. Go ahead and check for yourself:

  • WS-Addressing: the “before Sun” version (March 2004) and the “after Sun” version (August 2004)
  • WS-Management: the “before Sun” version was called WMX, but I can’t find a URI for it, only an overview document, so on this one you’re on your own to find the “before Sun” document to compare (hint: call Microsoft, not Sun, for this doc). Here is the “after Sun” version (October 2004)
  • WS-Eventing: the “before Sun” version (January 2004) and the “after Sun” version (August 2004)
  • WS-MetadataExchange: the “before Sun” version (March 2004) and the “after Sun” version (September 2004)

There might be a lot of in-depth technical collaboration going on between Sun and Microsoft that we are not allowed to see, but the only examples Greg has for us in his “one year later” piece make it sound a lot more like a business deal than technical collaboration. Maybe they have the CTO write about it because the CFO doesn’t have a blog?

In that same piece, Greg also tells us that “the ‘interoperate’ message is louder than even the ‘standardize’ one”. This is probably why 3 of the 4 specs he brings up are proprietary specs. This says a lot about what to expect from Sun in terms of standards support. I agreed when Sun used to say that standards are the best way to provide specifications that can be safely implemented (in financial terms, legal terms, and control terms), including by small companies and open-source projects, and that this is a key promise of Web services. Simon Phipps (Sun’s chief technology evangelist) explained it well. But this was in year 1BC (Before Check). How things change.

Comments Off on Greg Papadopoulos on “collaborating” with Microsoft

Filed under Business, Everything, Standards

Building blocks of an “adaptive enterprise”

Call it “laziness” or “smart reuse”: here is a pointer to a Web Services Journal opinion piece I wrote a few months back, in an attempt to explain how the different efforts going on in the industry around Web services, grid, SOA management, virtualization, utility computing, <insert your favorite buzzword>, fit together to provide organizations with the flexibility and efficiency they need from their IT in order to thrive. This is how it starts:

Enterprise services are created by combining infrastructure services, applications, and business processes. To be able to adapt quickly to business changes, enterprise IT must evolve from management of individual resources to management of interrelated services. [more…]

1 Comment

Filed under Articles, Business, Everything, Tech