Category Archives: People

Big Data career adviser says you should be a… Big Data analyst

LinkedIn CEO Jeff Weiner wrote an interesting post on “the future of LinkedIn and the economic graph“. There’s a lot to like about his vision. The part about making education and career choices better informed by data especially resonates with me:

With the existence of an economic graph, we could look at where the jobs are in any given locality, identify the fastest growing jobs in that area, the skills required to obtain those jobs, the skills of the existing aggregate workforce there, and then quantify the size of the gap. Even more importantly, we could then provide a feed of that data to local vocational training facilities, junior colleges, etc. so they could develop a just-in-time curriculum that provides local job seekers the skills they need to obtain the jobs that are and will be, and not just the jobs that once were.

I consider myself very lucky. I happened to like computers and enjoy programming them. This eventually led me to an engineering degree, a specialization in Computer Science and a very enjoyable career in an attractive industry. I could just as easily have been attracted to other domains, ones unlikely to give me such great professional options. Not everyone is so lucky, and better data could help people make better career and education choices. The benefits, both at the individual and societal levels, could be immense.

Of course, as with every Big Data example, you can't expect a crystal ball either. It's unlikely that the “economic graph” for France in 1994 would have told me: “this would be a good time to install Linux Slackware, learn Python and write your first CGI script”. It's also debatable whether that “economic graph” would have been able to avert one of the worst talent wastes of recent times, when too many science and engineering graduates went into banking. The “economic graph” might actually have encouraged that.

But, even under moderate expectations, there is a lot of potential for better-informed education and career decisions (both on the part of the training profession and of the students themselves), and I am glad that LinkedIn is going after that. Along with the choice of a life partner (and other companies are after that problem), this is maybe the most important and least informed decision people will make in their lifetime.

Jeff Weiner also made a proclamation of openness in that same article:

Once realized, we then want to get out of the way and allow all of the nodes on this network to connect seamlessly by removing as much friction as possible and allowing all forms of capital, e.g. working capital, intellectual capital, and human capital, to flow to where it can best be leveraged.

I'm naturally suspicious of such claims. And a few hours later, I get a nice email from LinkedIn, announcing that as of tomorrow they are dropping the “blog link” application which, as far as I can tell, fetches recent posts from my blog and includes them on my LinkedIn profile. Seems to me that this was a nice and easy way to “allow all of the nodes on this network to connect seamlessly by removing as much friction as possible”…

1 Comment

Filed under Big Data, Everything, Linked Data, People, Social networks

Joining Google

Next Monday, I will start at Google, in the Cloud Platform team.

I’ve been watching that platform, and especially Google App Engine (GAE), since it started in 2008. It shaped my thoughts on Cloud Computing and on the tension between PaaS and IaaS. In my first post about GAE, 4.5 years ago, I wrote about that tension:

History is rarely kind to promoters of radical departures. The software industry is especially fond of layering the new on top of the old (a practice that has been enabled by the constant increase in underlying computing capacity). If you are wondering why your command prompt, shell terminal or text editor opens with a default width of 80 characters, take a trip back to 1928, when IBM defined its 80-columns punch card format. Will Google beat the odds or be forced to be more accommodating of existing code?

This debate (which I later characterized as “backward-compatible vs. forward-compatible”) is still ongoing. App Engine has grown a lot and shed its early limitations (I had a lot of fun trying to engineer around them in the early days). Google’s Cloud Platform today is also a lot more than App Engine, with Cloud Storage, Compute Engine, etc. It’s much more welcoming to existing applications.

The core question remains, however. How far, and how quickly will we move from the abstractions inherited from seeing the physical server as the natural unit of computation? What benefits will we derive from this transformation and will they make it worthwhile? Where’s the next point of equilibrium in the storm provoked by these shifts:

  • IT management technology was ripe for a change, applying to itself the automation capabilities that it had brought to other domains.
  • Software platforms were ripe for a change, as we keep discovering all the Web can be, all the data we can handle, and how best to take advantage of both.
  • The business of IT was ripe for a change, having grown too important to escape scrutiny of its inefficiency and sluggishness.

These three transformations didn’t have to take place at the same time. But they are, which leaves us with a fascinating multi-variable equation to optimize. I believe Google is the right place to crack this nut.

This is my view today, looking at the larger Cloud environment and observing Google's Cloud Platform from the outside. In a week's time, I'll be looking at it from the inside. October me may scoff at the naïveté of September me; or not. Either way, I'm looking forward to it.

7 Comments

Filed under Cloud Computing, Everything, Google, Google App Engine, Google Cloud Platform, People, Uncategorized, Utility computing

Good-bye Oracle

Tomorrow will be my last day at Oracle. It’s been a great 5 years. By virtue of my position as architect for the Application and Middleware Management part of Oracle Enterprise Manager, I got to work with people all over the Enterprise Manager team as well as many other groups, whose products we are managing: from WebLogic to Fusion Apps, from Exalogic to Oracle’s Public Cloud and many others.

I was hired for, and initially focused on, defining and building Oracle’s application management capabilities. This was done via a mix of organic development and targeted acquisitions (Moniforce, ClearApp, Amberpoint…). The exercise was made even more interesting by acquisitions in other parts of the company (especially BEA) which redefined the scope of what we had to manage.

The second half of my time at Oracle was mostly focused on Cloud Computing. Enterprise Manager Cloud Control 12c, which we released a year ago at Oracle Open World 2011, introduced Private Cloud support with IaaS (including multi-VM assemblies) and DBaaS. Last week, we released EMCC 12c Release 2 which, among other things, adds support for Java PaaS. I was supposed to present on this, and demo the Java service, at Oracle Open World next week. If you're attending, make sure to hear my friends Dhruv Gupta and Fred Carter deliver that session, titled “Platform as a Service: Taking Enterprise Clouds Beyond Virtualization” on Wednesday October 3 at 3:30 in Moscone West 3018.

I also got to work on the Oracle Public Cloud, in which Enterprise Manager plays a huge role. I was responsible for the integration of new Cloud service types with Enterprise Manager, which gave me a great view of what's in the Oracle Public Cloud pipeline. Larry Ellison has promised to show a lot more next week at Oracle Open World; stay tuned for that. The breadth and speed with which Oracle has now embraced Cloud (both public and private) is impressive. Part of Oracle's greatness is how quickly its leadership can steer the giant ship. I am honored to have been part of that, in the context of the move to Cloud Computing, and I will remember fondly my years in Redwood Shores and my friends there.

I am taking next week off. On Monday October 8, I am starting in a new and exciting job, still in the Cloud Computing space. That’s a topic for another post.

6 Comments

Filed under Everything, Oracle, Oracle Cloud, Oracle Open World, People

Podcast with Oracle Cloud experts

A couple of weeks ago, Bob Rhubart (who runs the OTN architect community) assembled four of us who are involved in Oracle’s Cloud efforts, public and private. The conversation turned into a four-part podcast, of which the first part is now available. The participants were James Baty (VP of Global Enterprise Architecture Program), Mark Nelson (lead architect for Oracle’s public Cloud among other responsibilities), Ajay Srivastava (VP of OnDemand platform), and me.

I think the conversation will give a good idea of the many ways in which we think about Cloud at Oracle. Our customers both provide and consume Cloud services. Oracle itself provides both private Cloud systems and a global public Cloud. All these are connected, both in terms of usage patterns (hybrid) and architecture/implementation (via technology sharing between our products and our Cloud services, such as Enterprise Manager’s central role in running the Oracle Cloud).

That makes for many topics and a lively conversation when the four of us get together.

Comments Off on Podcast with Oracle Cloud experts

Filed under Cloud Computing, Everything, Oracle, Oracle Cloud, People, Podcast, Portability, Standards, Utility computing

BSM with Oracle Enterprise Manager 11g

My colleagues Ashwin Karkala and Govinda Sambamurthy have written a book about modeling and managing business services using the current version of Enterprise Manager Grid Control (11g R1). Nobody would have been better qualified for this task since they built a lot of the features they describe. I acted as a technical reviewer for this book and very much enjoyed reading it in the process.

Whether you are a current EM user who wants to make sure you know and use the BSM features or someone just considering EM for that task, this is the book you want.

The full title is Oracle Enterprise Manager Grid Control 11g R1: Business Service Management.

As a bonus feature, and for a limited time only, if you purchase this book over the next 48 hours you get to follow the authors, @ashwinkarkala and @govindars, on Twitter at no extra cost! A $2,000 value (at least).

Comments Off on BSM with Oracle Enterprise Manager 11g

Filed under Application Mgmt, Book review, BSM, Everything, IT Systems Mgmt, Mgmt integration, Modeling, Oracle, People

Steve Ballmer gets Cloud

Steve Ballmer wants devops

Devops? What’s devops? See these articles:

3 Comments

Filed under Cloud Computing, DevOps, Everything, IT Systems Mgmt, Microsoft, People

Enterprise application integration patterns for IT management: a blast from the past or from the future?

In a recent blog post, Don Ferguson (CTO at CA) describes CA Catalyst, a major architectural overhaul which “applies enterprise application integration patterns to the problem of integrating IT management systems”. Reading this was fascinating to me. Not because the content was some kind of revelation, but exactly for the opposite reason. Because it is so familiar.

For the better part of the last decade, I tried to build just this at HP. In the process, I worked with (and sometimes against) Don's colleagues at IBM, who were on the same mission. Both companies wanted a flexible and reliable integration platform for all aspects of IT management. We had decided to use Web services and SOA to achieve it. The Web services management protocols that I worked on (WSMF, WSDM, WS-Management and the “reconciliation stack”) were meant for this. We were after management integration more than manageability. Then came CMDBf, another piece of the puzzle. From what I could tell, the focus on SOA and Web services had made Don (who was then Mr. WebSphere) the spiritual father of this effort at IBM, even though he wasn't at the time focused on IT management.

As far as I know, neither IBM nor HP got there. I covered some of the reasons in this post-mortem. The standards bickering. The focus on protocols rather than models. The confusion between the CMDB as a tool for process/service management versus a tool for software integration. Within HP, the turmoil from the many software acquisitions didn’t help, and there were other reasons. I am not sure at this point whether either company is still aiming for this vision or if they are taking a different approach.

But apparently CA is still on this path, and got somewhere. At least according to Don's post. I have no insight into what was built beyond what's in the post. I am not endorsing CA Catalyst, just agreeing with the design goals listed by Don. If indeed they have built it, and the integration framework resists the test of time, that's impressive. And exciting. It apparently even uses some of the same pieces we were planning to use, namely WS-Management and CMDBf (I am reluctantly associated with the first and proudly with the second).

While most readers might not share my historical connection with this work, this is still relevant and important to anyone who cares about IT management in the enterprise. If you're planning to be at CA World, go listen to Don. Web services may have a bad name, but the technical problems of IT management integration remain. There are only a few routes to IT management automation (I count seven; the one taken by CA is #2). You can throw away SOAP if you want, but you still need to deal with protocol compatibility, model alignment and instance reconciliation. You need to centralize or orchestrate the management operations performed. You need to be able to integrate with complementary products or at the very least to effectively incorporate your acquisitions. It's hard stuff.

Bonus point to Don for not forcing a “Cloud” angle for extra sparkle. This is core IT management.

Comments Off on Enterprise application integration patterns for IT management: a blast from the past or from the future?

Filed under Automation, CA, CMDB, CMDB Federation, CMDBf, Everything, IT Systems Mgmt, Mgmt integration, Modeling, People, Protocols, SOAP, Specs, Standards, Tech, Web services, WS-Management

Standards Disconnect at Cloud Connect

Yesterday’s panel session on the future of Cloud standards at Cloud Connect is still resonating on Twitter tonight. Many were shocked by how acrimonious the debate turned. It didn’t have to be that way but I am not surprised that it was.

The debate was set up and moderated by Bob Marcus (ET-Strategies CTO and master standards coordinator). On stage were Krishna Sankar (Cisco and DMTF Cloud incubator), Archie Reed (HP and CSA), Winston Bumpus (VMWare and DMTF), a gentleman whose name I unfortunately forgot (and who isn’t listed on the program) and me.

If the goal was to glamorize Cloud standards, it was a complete failure. If the goal was to come out with some solutions and agreements, it was also a failure. But if the goal, as I believe, was to surface the current issues, complexities, emotions and misunderstandings surrounding Cloud standards, then I’d say it was a success.

I am not going to attempt to summarize the whole discussion. Charles Babcock, who was in the audience, does a good enough job in this InformationWeek article and, unlike me, he doesn’t have a horse in the race [side note: I am not sure why my country of origin is relevant to his article, but my guess is that this is the main thing he remembered from my presentation during the Cloud Connect keynote earlier that morning, thanks to the “guillotine” slide].

Instead of reporting on what happened during the standards discussion, I’ll just make one comment and provide one take-away.

The comment: the dangers of marketing standards

Early in the session, audience member Reuven Cohen complained that standards organizations don’t do enough to market their specifications. Winston was more than happy to address this and talk about all the marketing work that DMTF does, including trade shows and PR. He added that this is one of the reasons why DMTF needs to charge membership fees, to pay for this marketing. I agree with Winston at one level. Indeed, the DMTF does what he describes and puts a fair amount of efforts into marketing itself and its work. But I disagree with Reuven and Winston that this is a good thing.

First, it doesn't really help. I don't think that distributing pens and tee-shirts to IT admins and CIO-wannabes results in higher adoption of your standard. Because the end users don't really care what standard is used. They just want a standard. Whether it comes from DMTF, SNIA, OGF, or OASIS is the least of their concerns. Those that you have to convince to adopt your standard are the vendors and the service providers. The Amazon, Rackspace and GoGrid of the world. The Microsoft, Oracle, VMWare and smaller ones like… Enomaly (Reuven's company). The highly-specialized consultants who work with them, like Randy. And also, very importantly, the open source developers who provide all the Cloud libraries and frameworks that are the lifeblood of many deployments. I have enough faith left in human nature to assume that all these guys make their strategic standards decisions based on a bit more than exhibit hall loot and press releases. Well, at least we do where I work.

But this traditional approach to marketing is worse than not helping. It's actually actively harmful, for two reasons. The first is that the cost of these activities, as Winston acknowledges, creates a barrier for participation by requiring higher dues. To Winston it's an unfortunate side effect; to me it's a killer. Not necessarily because dropping the membership fee by 50% would bring that many more participants. But because the organizations become so dependent on dues that they are paranoid about making anything public for fear of lowering the incentive for members to keep paying. Which is the worst thing you can do if you want the experts and open source developers, who are the best chance Cloud standards have to not repeat the mistakes of the past, to engage with the standard. Not necessarily as members of the group, but also from the outside. Assuming the work happens in public, which is the key issue.

The other reason why it's harmful to have a standards organization involved in such traditional marketing is that it has a tendency to become a conduit for promoting the agenda of the board members. Promoting a given standard or organization sounds good, until you realize that it's rarely so pure and unbiased. The trade shows in which the organization participates are often vendor-specific (e.g. Microsoft Management Summit, VMWorld…). The announcements are timed to coincide with relevant corporate announcements. The press releases contain quotes from board members who promote themselves at the same time as the organization. Officers speaking to the press on behalf of the standards organization are often also identified by their position in their company. Etc. The more a standards organization is involved in marketing, the more its low-level members are effectively subsidizing the marketing efforts of the board members. Standards have enough inherent conflicts of interest to not add more opportunities.

Just to be clear, that issue of standards marketing is not what consumed most of the time during the session. But it came up and, since I didn't get a chance to express my view on it while on the panel, I am using this blog instead.

My take-away from the panel, on the other hand, is focused on the heart of the discussion that took place.

The take-away: confirmation that we are going too fast, too early

Based on this discussion and other experiences, my current feeling on Cloud standards is that it is too early. If you think the practical experience we have today in Cloud Computing corresponds to what the practice of Cloud Computing will be in 10 years, then please go ahead and standardize. But let me tell you that you’re a fool.

The portion of Cloud Computing in which we have some significant experience (get a VM, attach a volume, assign an IP) will still be relevant in 10 years, but it will be a small fraction of Cloud Computing. I can tell you that much even if I can’t tell you what the whole will be. I have my ideas about what the whole will look like but it’s just a guess. Anybody who pretends to know is fooling you, themselves, or both.

I understand the pain of customers today who just want to have a bit more flexibility and portability within the limited scope of the VM/Volume/IP offering. If we really want to do a standard today, fine. Let’s do a very small and pragmatic standard that addresses this. Just a subset of the EC2 API. Don’t attempt to standardize the virtual disk format. Don’t worry about application-level features inside the VM. Don’t sweat the REST or SOA purity aspects of the interface too much either. Don’t stress about scalability of the management API and batching of actions. Just make it simple and provide a reference implementation. A few HTTP messages to provision, attach, update and delete VMs, volumes and IPs. That would be fine. Anything else (and more is indeed needed) would be vendor extensions for now.
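
To make “a few HTTP messages” concrete, here is a rough sketch, in Python, of the kind of minimal interactions such a small standard could cover. The endpoint, paths and JSON fields are entirely made up for illustration; this is not the EC2 API nor any existing or proposed specification.

```python
# A minimal sketch of a "VMs, volumes and IPs only" provisioning API.
# Endpoint, paths and JSON fields are hypothetical, not from any real specification.
import json
import urllib.request

BASE = "https://cloud.example.com/v1"  # hypothetical provider endpoint

def call(method, path, body=None):
    """Send one HTTP message and return the parsed JSON response (if any)."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        payload = resp.read()
        return json.loads(payload) if payload else {}

# Provision a VM, a volume and a public IP.
vm = call("POST", "/vms", {"image": "ubuntu-10.04", "cpus": 1, "memory_mb": 1024})
vol = call("POST", "/volumes", {"size_gb": 20})
ip = call("POST", "/ips", {})

# Attach the volume and the IP to the VM, update the VM, then delete everything.
call("POST", "/vms/%s/attachments" % vm["id"], {"volume": vol["id"]})
call("POST", "/vms/%s/addresses" % vm["id"], {"ip": ip["id"]})
call("PUT", "/vms/%s" % vm["id"], {"memory_mb": 2048})
for path in ("/vms/%s" % vm["id"], "/volumes/%s" % vol["id"], "/ips/%s" % ip["id"]):
    call("DELETE", path)
```

The point is not the exact syntax, it's the size: a handful of verbs over three resource types, plus a reference implementation to settle arguments.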

Unfortunately, neither of these (waiting, or a limited first standard) is going to happen.

Saying “it's too early” in the standards world is the same as saying nothing. It puts you out of the game and has no other effect. Amazon, the clear leader in the space, has taken just this position. How has this been understood? Simply as “well I guess we'll do it without them”. It's sad, but all it takes is one significant (though not necessarily leading) company trying to capitalize on some market influence to force the standards train to leave the station. And it's a hard decision for others not to join the pursuit at that point. In the same way that it only takes one bellicose country among pacifists to start a war.

Prepare yourself for some collateral damage.

While I would prefer that this not proceed now (not speaking for my employer on this blog, remember), that doesn't mean one should necessarily stay on the sidelines rather than make lemonade out of lemons. But having opened the Cloud Connect panel session with somewhat of a mea culpa (at least for my portion of responsibility) with regard to the failures of the previous IT management standardization wave, it doesn't make me too happy to see the seeds of another collective mea culpa being sown, for when we've made a mess of Cloud standards too. It's not a given yet. Just a very high risk. As was made clear yesterday.

9 Comments

Filed under Amazon, Cloud Computing, Conference, DMTF, Everything, Mgmt integration, People, Portability, Standards, Trade show, Utility computing

Can Cloud standards be saved?

Then: Web services standards

One of the most frustrating aspects of how Web services standards shot themselves in the foot via unchecked complexity is that plenty of people were pointing out the problem as it happened. Mark Baker (to whom I noticed Don Box also paid tribute recently) is the poster child. I remember Tom Jordahl tirelessly arguing for keeping it simple in the WSDL working group. Amberpoint’s Fred Carter did it in WSDM (in the post announcing the recent Amberpoint acquisition, I mentioned that “their engineers brought to the [WSDM] group a unique level of experience and practical-mindedness” but I could have added “… which we, the large companies, mostly ignored.”)

The commonality between all these voices is that they didn't come from the large companies. Instead they came from the “specialists” (independent contractors and representatives from small, specialized companies). Many of the WS-* debates were fought along alliance lines. Depending on the season it could be “IBM vs. Microsoft”, “IBM+Microsoft vs. Oracle”, “IBM+HP vs. Microsoft+Intel”, etc… They would battle over one another's proposals but tacitly agree to brush off proposals from the smaller players. At least if they contained anything radically different from the content of the submission by the large companies. And simplicity is radical.

Now: Cloud standards

I do not reminisce about the WS-* standards wars just for old times' sake or the joy of self-flagellation. I also hope that the current (and very important) wave of standards, related to all things Cloud, can do better than the Web services wave did with regard to involving on-the-ground experts.

Even though I still work for a large company, I’d like to see this fixed for Cloud standards. Not because I am a good guy (though I hope I am), but because I now realize that in the long run this lack of perspective even hurts the large companies themselves. We (and that includes IBM and Microsoft, the ringleaders of the WS-* effort) would be better off now if we had paid more attention then.

Here are two reasons why the need to involve and include specialists applies even more to Cloud standards than it did to Web services.

First, there are many more individuals (or small companies) today with a lot of practical Cloud experience than there were small players with practical Web services experience when the WS-* standardization started (Shlomo Swidler, Mitch Garnaat, Randy Bias, John M. Willis, Sam Johnston, David Kavanagh, Adrian Cole, Edward M. Goldberg, Eric Hammond, Thorsten von Eicken and Guy Rosen come to mind, though this is nowhere near an exhaustive list). Which means there is even more to gain by ensuring that the Cloud standard process is open to them, should they choose to engage in some form.

Second, there is a transparency problem much larger than with Web services standards. For all their flaws, W3C and OASIS, where most of the WS-* work took place, are relatively transparent. Their processes and IP policies are clear and, most importantly, their mailing list archives are open to the public. DMTF, where VMWare, Fujitsu and others have submitted Cloud specifications, is at the other end of the transparency spectrum. A few examples of what I mean by that:

  • I can tell you that VMWare and Fujitsu submitted specifications to DMTF, because the two companies each issued a press release to announce it. I can’t tell you which others did (and you can’t read their submissions) because these companies didn’t think it worthy of a press release. And DMTF keeps the submission confidential. That’s why I blogged about the vCloud submission and the Fujitsu submission but couldn’t provide equivalent analysis for the others.
  • The mailing lists of DMTF working groups are confidential. Even a DMTF member cannot see the message archive of a group unless he/she is a member of that specific group. The general public cannot see anything at all. And unless I missed it on the site, they cannot even know what DMTF working groups exist. It makes you wonder whether Dick Cheney decided to call his social club of energy company executives a “Task Force” because he was inspired by the secrecy of the DMTF (“Distributed Management Task Force”). Even when the work is finished and the standard published, the DMTF won’t release the mailing list archive, even though these discussions can be a great reference for people who later use the specification.
  • Working documents are also confidential. Working groups can decide to publish some intermediate work, but this needs to be an explicit decision of the group, then approved by its parent group, and in practice it happens rarely (mileage varies depending on the groups).
  • Even when a document is published, the process to provide feedback from the outside seems designed to thwart any attempt. Or at least that’s what it does in practice. Having blogged a fair amount on technical details of two DMTF standards (CMDBf and WS-Management) I often get questions and comments about these specifications from readers. I encourage them to bring their comments to the group and point them to the official feedback page. Not once have I, as a working group participant, seen the comments come out on the other end of the process.

So let's recap. People outside of DMTF don't know what work is going on (even if they happen to know that a working group called “Cloud this” or “Cloud that” has been started, the charter documents and therefore the precise scope and list of deliverables are also confidential). Even if they knew, they couldn't get to see the work. And even if they did, there is no convenient way for them to provide feedback (which would probably arrive too late anyway). And joining the organization would be quite a selfless act because they then have to pay for the privilege of sharing their expertise while not being included in the real deciding circles anyway (unless they are ready to pony up for the top membership levels). That's because of the unclear and unstable processes as well as the inordinate influence of board members and officers, all of whom are also company representatives (in W3C, the strong staff balances the influence of the sponsors; in OASIS, the bylaws limit arbitrariness by the board members).

What we are missing out on

Many in the standards community have heard me rant on this topic before. What pushed me over the edge and motivated me to write this entry was stumbling on a crystal clear illustration of what we are missing out on. I submit to you this post by Adrian Cole and the follow-up (twice) by Thorsten von Eicken. After spending two days at a face-to-face meeting of the DMTF Cloud incubator (in an undisclosed location) this week, I'll just say that these posts illustrate a level of practicality and a grounding in real-life Cloud usage that was not evident in all the discussions of the incubator. You don't see Adrian and Thorsten arguing about the meaning of the word “infrastructure”, do you? I'd love to point you to the DMTF meeting minutes so you can judge for yourself, but by now you should understand why I can't.

So instead of helping in the forum where big vendors submit their specifications, the specialists (some of them at least) go work in OGF, and produce OCCI (here is the mailing list archive). When Thorsten von Eicken blogs about his experience using Cloud APIs, they welcome the feedback and engage him to look at their work. The OCCI work is nice, but my concern is that we are now going to end up with at least two sets of standard specifications (in addition to the multitude of company-controlled specifications, like the ubiquitous EC2 API). One from the big companies and one from the specialists. And if you think that the simplest, clearest and most practical one will automatically win, well I envy your optimism. Up to a point. I don’t know if one specification will crush the other, if we’ll have a “reconciliation” process, if one is going to be used in “private Clouds” and the other in “public Clouds” or if the conflict will just make both mostly irrelevant. What I do know is that this is not what I want to see happen. Rather, the big vendors (whose imprimatur is needed) and the specialists (whose experience is indispensable) should work together to make the standard technically practical and widely adopted. I don’t care where it happens. I don’t know whether now is the right time or too early. I just know that when the time comes it needs to be done right. And I don’t like the way it’s shaping up at the moment. Well-meaning but toothless efforts like cloud-standards.org don’t make me feel better.

I know this blog post will be read both by my friends in DMTF and by my friends in Clouderati. I just want them to meet. That could be quite a party.

IBM was on to something when it produced this standards participation policy (which I commented on in a cynical-yet-supportive way – and yes I realize the same cynicism can apply to me). But I haven’t heard of any practical effect of this policy change. Has anyone seen any? Isn’t the Cloud standard wave the right time to translate it into action?

Transparency first

I realize that it takes more than transparency to convince specialists to take a look at what a working group is doing and share their thoughts. Even in a fully transparent situation, specialists will eventually give up if they are stonewalled by process lawyers or just ignored and marginalized (many working group participants have little bandwidth and typically take their cues from the big vendors even in the absence of explicit corporate alignment). And this is hard to fix. Processes serve a purpose. While they can be used against the smaller players, they also in many cases protect them. Plus, for every enlightened specialist who gets discouraged, there is a nutcase who gets neutralized by the need to put up a clear proposal and follow a process. I don't see a good way to prevent large vendors from using the process to pressure smaller ones if that's what they intend to do. Let's at least prevent this from happening unintentionally. Maybe some of my colleagues from large companies will also ask themselves whether it wouldn't be to their own benefit to actually help qualified specialists to contribute. Some “positive discrimination” might be in order, to lighten the process burden in some way for those with practical expertise, limited resources, and the willingness to offer some could-otherwise-be-billable hours.

In any case, improving transparency is the simplest, fastest and most obvious step that needs to be taken. Not doing it because it won’t solve everything is like not doing CPR on someone on the pretext that it would only restart his heart but not cure his rheumatism.

What's at risk if we fail to leverage the huge amount of practical Cloud expertise from smaller players in the standards work? Nothing less than an impractical set of specifications that will fail to realize the promises of Cloud interoperability. And quite possibly even delay them. We've seen it before, haven't we?

Notice how I haven't mentioned customers? It's a typical “feel-good” line in every lament about standards to say that “we need more customer involvement”. It's true, but the lament is old and hasn't, in my experience, solved anything. And today's economic climate makes me even more dubious that direct customer involvement is going to keep us on track for this standardization wave (though I'd love to be proven wrong). Opening the door to on-the-ground-working-with-customers experts with a very neutral and pragmatic perspective has a better chance of success in my mind.

As a point of clarification, I am not asking large companies to pick a few small companies out of their partner ecosystem and give them a 10% discount on their alliance membership fee in exchange for showing up in the standards groups and supporting their friendly sponsor. This is a common trick, used to pack a committee, get the votes and create an impression of overwhelming industry support. Nobody should pick who the specialists are. We should do all we can to encourage them to come. It will be pretty clear who they are when they start to ask pointed questions about the work.

Finally, from the archives, a more humorous look at how various standards bodies compare. And the proof that my complaints about DMTF secrecy aren’t new.

12 Comments

Filed under Cloud Computing, CMDBf, DMTF, Everything, HP, IBM, Mgmt integration, Microsoft, Oracle, People, Protocols, Specs, Standards, Utility computing, VMware, W3C, Web services, WS-Management

Missing out on the OCCI fun

As a recovering “design by committee” offender, I have to be careful when lurking near standards groups' mailing lists, for fear my instincts may take over and I might join the fray. But tonight a few tweets containing alluring words like “header” and “metadata” got the better of me and sent me plowing through a long and heated discussion thread in the OGF OCCI mailing list archive.

I found the discussion fascinating, both from a technical perspective and a theatrical perspective.

Technically, the discussion is about whether to use HTTP headers to carry “metadata” (by which I think they mean everything that's not part of the business payload, e.g. an OVF document or other domain-specific payload). I don't have enough context on the specific proposal to care to express my opinion on its merits, but what I find very interesting is that this shines another light on the age-old issue of how to carry non-payload info when designing a protocol. Whatever you call these data fields, you have to specify (by decreasing order of architectural importance):

  • How you deal with unknown fields: mustUnderstand or mustIgnore semantics.
  • How you keep them apart (prevent two people defining fields by the same name, telling different versions apart).
  • How you parse their content (and are they all parsed in the same manner or is it specific to each field).
  • Where they go.

SOAP provides one set of answers.

  • You can tag each one with a mustUnderstand attribute to force any consumer who doesn’t understand them to fault.
  • They are namespace-qualified.
  • They are XML-formatted.
  • They go at the top of the XML doc, in a section called the SOAP header.

You may agree or not with the approach SOAP took, but it’s important to realize that at its core SOAP is just this: the answer (in the form of the SOAP processing model) to these simple questions (here is more about the SOAP processing model and the abuses it has suffered if you’re interested). WSDL is something else. The WS-* stack is also something else. It’s probably too late to rescue SOAP from these associations, but I wanted to point this out for the record.
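
To make the contrast concrete, here is a minimal sketch of the same piece of non-payload data carried both ways. The “requestId” field, its namespace and the HTTP header name are invented for the example; only the Envelope/Header structure and the mustUnderstand attribute come from SOAP itself.

```python
# Sketch: one non-payload field carried as a SOAP header vs. as an HTTP header.
# The "requestId" element, its namespace and the HTTP header name are made up;
# the Envelope/Header structure and soap:mustUnderstand come from SOAP 1.2.
soap_message = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
    <!-- namespace-qualified, XML-formatted, mustUnderstand-flagged, in the Header -->
    <ex:requestId xmlns:ex="http://example.org/metadata"
                  soap:mustUnderstand="true">12345</ex:requestId>
  </soap:Header>
  <soap:Body>
    <!-- the business payload (e.g. an OVF document) goes here -->
  </soap:Body>
</soap:Envelope>"""

# The HTTP-header alternative: a flat name/value pair, no mustUnderstand semantics,
# no namespaces, no nested structure, but visible to any proxy or debugging tool.
http_headers = {"X-Example-Request-Id": "12345"}
```

Neither answer is intrinsically right; they just answer the four questions differently, and the practical concerns listed below are where the trade-offs show up.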

Whatever your answers to the four “non-payload data fields” questions above, there are many practical concerns that you have to consider when validating your proposal. They may not all be relevant to your use case, but if so, explicitly decide that they are not. They are things like:

  • Performance
  • Ability to process in a stream-based system
  • Ease of development (tool support, runtime accessibility…)
  • Ease of debugging
  • Field length limitations
  • Security
  • Ability to structure the data in the fields
  • Ability to use different transports (way overplayed in SOAP, but not totally irrelevant either)
  • Ability to survive intermediaries / proxies

Now leaving the technology aside, this OCCI email thread is also interesting from a human and organizational perspective. Another take on the good old Commedia dell standarte. Again, I don’t have enough context in the history of this specific group to have an opinion about the dynamics. I’ll just say that things are a bit more “free-flowing” than when people like my friend Dave Snelling were in charge in OGF. In any case, it’s great that the debate is taking place in public. If it had been a closed discussion they probably would not have benefited from Tim Bray dropping in to share his experience. On the plus side, they would have avoided my pontifications…

4 Comments

Filed under Cloud Computing, Everything, People, Protocols, SOAP, SOAP header, Specs, Standards, Utility computing

A post-mortem on the previous IT management revolution

Before rushing to standardize “Cloud APIs”, let’s take a look back at the previous attempt to tackle the same problem, which is one of IT management integration and automation. I am referring to the definition of specifications that attempted to use the then-emerging SOAP-based Web services framework to easily integrate IT management systems and their targets.

Leaving aside the “Cloud” spin of today and the “Web services” frenzy of yesterday, the underlying problem remains to provide IT services (mostly applications) in a way that offers the best balance of performance, availability, security and economy. Concretely, it is about being able to deploy whatever IT infrastructure and application bits need to be deployed, configure them and take any required ongoing action (patch, update, scale up/down, optimize…) to keep them humming so customers don’t notice anything bothersome and you don’t break any regulation. Or rather so that any disruption a customer sees and any mandate you violate cost you less than it would have cost to avoid them.

The realization that IT systems are moving more and more towards distributed/connected applications was the primary reason that pushed us towards the definition of Web services protocols geared towards management interactions. By providing a uniform and network-friendly interface, we hoped to make it convenient to integrate management tasks vertically (between layers of the IT stack) and horizontally (across distributed applications). The latter is why we focused so much on managing new entities such as Web services, their execution environments and their conversations. I’ll refer you to the WSMF submission that my HP colleagues and I made to OASIS in 2003 for the first consistent definition of such a management framework. The overview white paper even has a use case called “management as a service” if you’re still not convinced of the alignment with today’s Cloud-talk.

Of course there are some differences between Web service management protocols and Cloud APIs. Virtualization capabilities are more advanced than when the WS effort started. The prospect of using hosted resources is more realistic (though still unproven as a mainstream business practice). Open source components are expected to play a larger role. But none of these considerations fundamentally changes the task at hand.

Let’s start with a quick round-up and update on the most relevant efforts and their status.

Protocols

WSMF (Web Services Management Framework): an HP-created set of specifications, submitted to the OASIS WSDM working group (see below). Was subsumed into WSDM. Not only a protocol BTW, it includes a basic model for Web services-related artifacts.

WS-Manageability: An IBM-led alternative to parts of WSDM, also submitted to OASIS WSDM.

WSDM (Web Services Distributed Management): An OASIS technical committee. Produced two standards (a protocol, “Management Using Web Services” and a model of Web services, “Management Of Web Services”). Makes use of WSRF (see below). Saw a few implementations but never achieved real adoption.

OGSI (Open Grid Services Infrastructure): A GGF (the organization now known as OGF) standard to provide a service-oriented resource manipulation infrastructure for Grid computing. Replaced with WSRF.

WSRF: An OASIS technical committee which produced several standards (the main one is WS-ResourceProperties). Started as an attempt to align the GGF/OGSI approach to resource access with the IT management approach (represented by WSDM). Saw some adoption and is currently quietly in use under the covers in the GGF/OGF space. Basically replaced OGSI but didn't make it in the IT management world because its vehicle there, WSDM, didn't.

WS-Management: A DMTF standard, based on a Microsoft-led submission. Similar to WSDM in many ways. Won the adoption battle with it. Based on WS-Transfer and WS-Enumeration.

WS-ResourceTransfer (aka WS-RT): An attempt to reconcile the underlying foundations of WSDM and WS-Management. Stalled as a private effort (IBM, Microsoft, HP, Intel). Was later submitted to the W3C WS-RA working group (see below).

WSRA (Web Services Resource Access): A W3C working group created to standardize the specifications that WS-Management is built on (WS-Transfer etc) and to add features to them in the form of WS-RT (which was also submitted there, in order to be finalized). This is (presumably) the last attempt at standardizing a SOAP-based access framework for distributed resources. Whether the window of opportunity to do so is still open is unclear. Work is ongoing.

WS-ResourceCatalog: A discovery helper companion specification to WS-Management. Started as a Microsoft document, went through the “WSDM/WS-Management reconciliation” effort, emerged as a new specification that was submitted to DMTF in May 2007. Not heard of since.

CMDBf (Configuration Management Database Federation): A DMTF working group (and soon to be standard) that mainly defines a SOAP-based protocol to query repositories of configuration information. Not linked with (or dependent on) any of the specifications listed above (it is debatable whether it belongs in this list or is part of a new breed).

Modeling

DCML (Data Center Markup Language): The first comprehensive effort to model key elements of a data center, their relationships and their policies. Led by EDS and Opsware. Never managed to attract the major management vendors. Transitioned to an OASIS member section and died of being ignored.

SDM (System Definition Model): A Microsoft specification to model an IT system in a way that includes constraints and validation, with the goal of improving automation and better linking the different phases of the application lifecycle. Was the starting point for SML.

SML (Service Modeling Language): Currently a W3C “proposed recommendation” (soon to be a recommendation, I assume) with the same goals as SDM. It was created, starting from SDM, by a consortium of companies that eventually submitted it to W3C. No known adoption other than the Eclipse COSMOS project (Microsoft was supposed to use it, but there hasn’t been any news on that front for a while). Technically, it is a combination of XSD and Schematron. It appears dead, unless it turns out that Microsoft is indeed using it (I don’t know whether System Center is still using SDM, whether they are adopting SML, whether they are moving towards M or whether they have given up on the model-centric vision).

CML (Common Model Library): An effort by the SML authors to create a set of model elements using the SML metamodel. Appears to be dead (no news in a long time and the cml-project.org domain name that was used seems abandoned).

SDD (Solution Deployment Descriptor): An OASIS standard to define a packaging mechanism meant to simplify the deployment and configuration of software units. It is to an application archive what OVF is to a virtual disk. Little adoption that I know of, but maybe I have a blind spot on this.

OVF (Open Virtualization Format): A recently released DMTF standard. Defines a packaging and descriptor format to distribute virtual machines. It does not define a common virtual machine format, but rather a wrapper around it. Seems to have some momentum. Like CMDBf, it may be best thought of as part of a new breed rather than directly associated with WS-Management and friends.

This is not an exhaustive list. I have left aside the eventing aspects (WS-Notification, WS-Eventing, WS-EventNotification) because, while relevant, that is a larger discussion and this entry is too long already (see here and here for some updates from late last year on the eventing front). It also does not cover the Grid work (other than OGSI/WSRF to the extent that they intersect with the IT management world), even though a lot of the work that took place there is just as relevant to Cloud computing as the IT management work listed above. Especially CDDLM/CDL, an abandoned effort to port SmartFrog to the then-hot XML standards, from which there are plenty of relevant lessons to extract.

The lessons

What does this inventory tell us that's relevant to future Cloud API standardization work? The first lesson is that protocols are easy and models are hard. WS-Management and WSDM technically get the job done. CMDBf will be a good query language. But none of the model-related efforts listed above seem to have hit the mark of “doing the job”. With the possible exception of OVF, which is promising (though the current expectations on it are often beyond what it really delivers). In general, the more focused and narrow a modeling effort is, the more successful it seems to be (with OVF as the most focused of the list and CML as the other extreme). That's lesson learned number two: models that encompass a wide range of systems are attractive, but impossible to deliver. Models that focus on a small sub-area are the way to go. The question is whether these specialized models can at least share a common metamodel or other base building blocks (a type system, a serialization, a relationship model, a constraint mechanism, etc), which would make life easier for orchestrators. SML tries (tried?) to be all that, with no luck. RDF could be all that, but hasn't managed to get noticed in this context. The OVF and SDD examples seem to point out that the best we'll get is XML as a shared foundation (a type system and a serialization). At this point, I am ready to throw in the towel on achieving more modeling uniformity than XML provides, and ready to do the needed transformations in code instead. At least until the next window of opportunity arrives.
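
To illustrate what “doing the needed transformations in code” looks like when all you share is XML, here is a small sketch. Both descriptor formats, their namespaces and their element names are invented for the example; this is not OVF or any real model.

```python
# Sketch of bridging two hypothetical VM descriptors that share nothing but XML.
# Both namespaces and all element names are invented; this is not OVF or any real format.
import xml.etree.ElementTree as ET

A = "{http://vendor-a.example/model}"  # vendor A's (made-up) namespace

source = """<vm xmlns="http://vendor-a.example/model">
  <name>web-frontend</name>
  <cpu>2</cpu>
  <ram unit="MB">4096</ram>
</vm>"""

def to_vendor_b(xml_text):
    """Map vendor A's (hypothetical) descriptor to vendor B's names and units."""
    src = ET.fromstring(xml_text)
    dst = ET.Element("machine", {"xmlns": "http://vendor-b.example/model"})
    ET.SubElement(dst, "label").text = src.findtext(A + "name")
    ET.SubElement(dst, "vcpus").text = src.findtext(A + "cpu")
    ram_mb = int(src.findtext(A + "ram"))                       # vendor A counts in MB
    ET.SubElement(dst, "memoryGB").text = str(ram_mb // 1024)   # vendor B counts in GB
    return ET.tostring(dst, encoding="unicode")

print(to_vendor_b(source))
```

Trivial, but this is exactly the kind of glue code that a shared metamodel was supposed to reduce and that, apparently, we will keep writing by hand.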

I wish that rather than being 80% protocols and 20% models, the effort in the WS-based wave of IT management standards had been the other way around. So we’d have a bit more to show for our work, for example a clear, complete and useful way to capture the operational configuration of application delivery services (VPN, cache, SSL, compression, DoS protection…). Even if the actual specification turns out to not make it, its content should be able to inform its successor (in the same way that even if you don’t use CIM to model your server it is interesting to see what attributes CIM has for a server).

It's less true with protocols. Either you use them (and they're very valuable) or you don't (and they're largely irrelevant). They don't capture domain knowledge that's intrinsically valuable. What value does WSDM provide, for example, now that it's collecting dust? How much will the experience inform its successor (other than trying to avoid the WS-Addressing disaster)? The trend today seems to be that a more direct use of HTTP (“REST”) will replace these protocols. Sure. Fine. But anyone who expects this break from the past to be a vaccination against past problems is in for a nasty surprise. Because, and I am repeating myself, it's the model, stupid. Not the protocol. Something I (hopefully) explained in my comments on the Sun Cloud API (before I knew that caring about this API might actually become part of my day job) and something to which I'll return in a future post.

Another lesson is the need for clear use cases. Yes, it feels silly to utter such an obvious statement. But trust me, standards groups still haven't gotten this. It took years spent on WSDM and then WS-Management before I realized that most people were not going after management integration, as I was, but rather manageability. Where “manageability” is concerned with discovering and monitoring individual resources, while “management integration” is concerned with providing a systematic view of the environment, with automation as the goal. In other words, manageability standards can allow you to get a traditional IT management console without the need for agents. Management integration standards can allow you to coordinate your management systems and automate their orchestration. WS-Management is for manageability. CMDBf is in the management integration category. Many of the (very respectful and civilized) head-butting sessions I engaged in during the WSDM effort can be traced back to the difference between these two sets of use cases. And there is plenty of room for such disconnect in the so-loosely-defined “Cloud” world.

We have also learned (or re-learned) that arbitrary non-backward compatible versioning, e.g. for political or procedural reasons as with WS-Addressing, is a crime. XML namespaces (of the XSD and WSDL types, as well as URIs used in similar ways in specifications, e.g. to identify a dialect or profile) are tricky, because they don't have backward compatibility metadata and because of the practice of using organizations' domain names in the URI (as opposed to specification-specific names that can be easily transferred, e.g. cmdbf.org versus dmtf.org/cmdbf). In the WS-based management world, we inherited these problems at the protocol level from the generic WS stack. Our hands are more or less clean, but only because we didn't have enough success/longevity to generate our own versioning problems, at the model level. But those would have been there had these models been able to see the light of day (CML) or see adoption (DCML).
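
Here is a small illustration of the missing backward-compatibility metadata (the namespace URIs and the document are made up): a consumer coded against one namespace has no machine-readable way to learn that the next version is compatible, so it can only reject the document or grow a hard-coded list of accepted versions.

```python
# Illustration of the missing backward-compatibility metadata in XML namespaces.
# Both namespace URIs and the document are invented for the example.
import xml.etree.ElementTree as ET

KNOWN_NAMESPACES = {"http://example.org/vm-model/v1"}  # the only version this consumer knows

doc_v2 = """<vm xmlns="http://example.org/vm-model/v2">
  <cpuCount>2</cpuCount>
  <memoryMB>2048</memoryMB>
</vm>"""

root = ET.fromstring(doc_v2)
namespace = root.tag[1:].split("}")[0]  # extract the URI from ElementTree's "{uri}tag" form

# v2 may well be a strict superset of v1, but nothing machine-readable says so:
# the consumer can only reject the document or maintain a hand-updated version list.
if namespace not in KNOWN_NAMESPACES:
    print("Rejecting document: unknown model version " + namespace)
```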

There are also practical lessons that can be learned about the tactics and strategies of the main players. Because it looks like they may not change very much, as corporations or even as individuals. Karla Norsworthy speaks for IBM on Cloud interoperability standards in this article. Andrew Layman represented Microsoft in the post-Manifestogate Cloud patch-up meeting in New York. Winston Bumpus is driving the standards strategy at VMWare. These are all veterans of the WS-Management, WSDM and related wars (sorry, collaborations), and more generally of the whole WS-* effort for the first two. For the details of what there is to learn from the past in that area, you'll have to corner me in a hotel bar and buy me a few drinks though. I am pretty sure you'd get your money's worth (I am not a heavy drinker)…

In summary, here are my recommendations for standardizing Cloud APIs, based on lessons from the Web services management effort. The theme is “focus on domain models”. The line items:

  • Have clear goals for each effort. E.g. is your use case to deploy and run an existing application in a Cloud-like automated environment, or is it to create new applications that efficiently take advantage of the added flexibility? Very different problems.
  • If you want to use OVF, then beef it up to better apply to Cloud situations, but keep it focused on VM packaging: don’t try to grow it into the complete model for the entire data center (e.g. a new DCML).
  • Complement OVF with similar specifications for other domains, like the application delivery systems listed above. Informally try to keep these different specifications consistent, but don’t over-engineer it by repeating the SML attempt. It is more important to have each specification map well to its domain of application than it is to have perfect consistency between them. Discrepancies can be bridged in code, or in a later incarnation.
  • As you segment by domain, as suggested in the previous two bullets, don’t segment the models any further within each domain. Handle configuration, installation and monitoring issues as a whole.
  • Don't sweat the protocols. HTTP, plain old SOAP (don't call it POS) or WS-* will meet your needs. Pick one. You don't have a scalability challenge as much as you have a model challenge, so don't get distracted here. If you use REST, do it in the mindset that Tim Bray describes: “If you're going to do bits-on-the-wire, Why not use HTTP? And if you're going to use HTTP, use it right. That's all.” Not as something that needs to scale to Web scale or as a rebuff of WS-*.
  • Beware of versioning. Version for operational changes only, not organizational reasons. Provide metadata to assert and encourage backward compatibility.

This is not a recipe for the ideal result but it is what I see as practically achievable. And fault-tolerant, in the sense that the failure of one piece would not negate the value of the others. As much as I have constrained expectations for Cloud portability, I still want it to improve to the extent possible. If we can’t get a consistent RDF-based (or RDF-like in many ways) modeling framework, let’s at least apply ourselves to properly understanding and modeling the important areas.

In addition to these general lessons, there remains the question of what specific specifications will/should transition to the Cloud universe. Clearly not all of them, since not all of them even made it in the “regular” IT management world for which they were designed. How many then? Not surprisingly (since IBM had a big role in most of them), Karla Norsworthy, in the interview mentioned above, asserts that “infrastructure as a service, or virtualization as a paradigm for deployment, is a situation where a lot of existing interoperability work that the industry has done will surely work to allow integration of services”. And just as unsurprisingly, Amazon's Adam Selipsky, whose company has nothing to do with the previous wave but finds itself in a leadership position WRT Cloud Computing, is a lot more circumspect: “whether existing standards can be transferred to this case [of cloud computing] or if it's a new topic is [too] early to say”. OVF is an obvious candidate. WS-Management is by far the most widely implemented of the bunch, so that gives it an edge too (it is apparently already in use for Cloud monitoring, according to this press release by an “innovation leader in automated network and systems monitoring software” that I had never heard of). Then there is the question of what IBM has in mind for WS-RT (and other specifications that the WS-RA working group is toiling on). If it's not used as part of a Cloud API then I really don't know what it will be used for. But selling it as such is going to be an uphill battle. CMDBf is a candidate too, as a model-neutral way to manage the configuration of a distributed system. But here I am, violating two of my own recommendations (“focus on models” and “don't isolate config from other modeling aspects”). I guess it will take another pass to really learn…

[UPDATED 2009/5/7: Senior moment! When writing this entry I forgot that I wrote an earlier entry (in late 2007) specifically to describe the difference between “manageability” and “management integration”. So here it is, if you care for more details on this topic.]

5 Comments

Filed under Automation, Cloud Computing, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Modeling, People, Portability, REST, SML, SOAP, Specs, Standards, Utility computing, Virtualization, WS-Management, WS-ResourceCatalog, WS-ResourceTransfer

Reality check on Cloud portability

SD Times recently published an interesting article about “cloud interoperability”. It has some well-informed opinions. But, like all Cloud-related discussions, it also suffers from mixing up a bunch of things. The word “interoperability” is alternately applied to the Cloud infrastructure services (in which case this “interoperability” is a way to provide application “portability”) and to the Cloud-hosted applications themselves.

Application-level interoperability (“look, my GAE-hosted app successfully sent an HTTP request to an Azure-hosted app, open the champagne”) is not very new or exciting anymore and is often used as an interoperability smokescreen (hello Salesforce.com). Many of these interop concerns are long solved and the others (like authentication and data migration) need to be solved in ways that don’t care whether the application is hosted in your Silicon Valley garage or near the Columbia river.

Cloud infrastructure compatibility (in other words, application portability) is the more interesting discussion. I keep reading that it is needed (“no vendor lock-in, not ever again”) for enterprises to move to the Cloud. Being a natural-born cynic, I always ask myself whether those asking for it are naive (sometimes) or have ulterior motives (e.g. trying to catch up with Amazon by entangling them in the standards net – some of my fellow cynics see the Open Cloud Manifesto as just this).

Because the reality is that, Manifesto or no Manifesto, you are not going to get application portability across IaaS-type Cloud providers. At least for production applications. Sorry. As a consolation prize, you may get some runtime portability such that we’ll be shown nice demos of prototype apps moving from one provider to another (either as applications or as virtual machines). Clap clap until you realize that they left behind their monitoring capabilities, or that their configuration rules don’t validate anything anymore. And that your printer ran out of red ink when printing the latest compliance report. Oops.

Maybe I am biased because they are both my friends and ex-colleagues, but the HP guys make the most sense in the SD Times article. Tim Hall has it right when he suggests “that the industry should focus on specific problems that it is going to solve around deployment and standardized monitoring”. And the other HP Tim, Mr. van Ash, rightly points out that we should “stop promising miracles”, which Forrester’s Jeffrey Hammond echoes, saying that there is a difference between a standard and “plug-and-play in reality”.

Tim Hall uses SQL as an example of a realistic common baseline. J2EE would be another one. They provide a good reality check. Standards are always supposed to prevent vendor lock-in. And there is a need for some of that, of course. But look at the track records. How many applications do you know that are certified and supported on any SQL database, any Unix operating system and any J2EE app server? And yet, standardizing queries on relational data and standardizing an enterprise-class runtime environment for one programming language are pretty constrained scopes in the grand scheme of things. At least compared to all the aspects that you need to standardize to provide real Cloud portability (security, monitoring, provisioning, configuration, language runtime and/or OS, data storage/retrieval, network configuration, integration with local apps, metering/billing, etc). And we’re supposed to put together a nice bundle of standards that will guarantee drag-and-drop portability across all these concerns? In how many lifetimes? By then, Cloud computing will have been replaced by the next big thing (galaxycomputing.com is still available BTW).

Not to mention that this standardization comes hand in hand with constraints on what you can do. That’s why I read Amazon’s Adam Selipsky’s comment that allowing customers to do “whatever they want” is vital as a way to say “get real” to requests for application portability, while allowing him to sound helpful rather than obstructionist.

This doesn’t mean that these standards are not useful. They make application portability possible if not free. They make for much improved productivity through generic tools and reusable developer knowledge. We still need all this.

Here is the best that can realistically happen in the “application portability across IaaS providers” area for at least 10 years:

  • a set of partial standards for small parts of the Cloud computing domain (see list above), many of which already exist.
  • a set of RightScale-like tools that do a lot of the grunt work of mapping/hiding/transforming between providers, with varying degrees of success (a minimal sketch of that kind of mapping layer follows this list).
  • the need for application providers to certify their applications on Cloud providers one by one anyway and to provide cloning/migration as a feature of the application rather than an infrastructure-level task.
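As promised above, here is a minimal sketch of the kind of mapping layer such tools provide. The provider classes, method names and parameters are all made up; real provider APIs differ in far messier ways (images, networking, monitoring hooks, billing…), which is exactly why this grunt work is needed.

```python
class ProviderAdapter:
    """Generic 'launch a machine' request that a portability tool exposes."""
    def launch(self, image, size):
        raise NotImplementedError

class ProviderA(ProviderAdapter):
    def launch(self, image, size):
        # translate the generic request into provider A's (made-up) vocabulary
        return {"call": "RunMachine", "image_id": image, "machine_type": size}

class ProviderB(ProviderAdapter):
    def launch(self, image, size):
        # provider B (also made up) wants different names and a different size scheme
        return {"call": "create_server", "template": image, "flavor": size.upper()}

def deploy(adapter, image, size):
    request = adapter.launch(image, size)
    print(request)   # a real tool would send this to the provider's API

deploy(ProviderA(), "base-linux", "small")
deploy(ProviderB(), "base-linux", "small")
```

The adapter hides the vocabulary differences, but notice everything it does not carry over: monitoring, configuration validation, compliance reporting. That gap is the point of the paragraph above.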

That’s assuming that IaaS providers become a major business and that there remains a difference between service providers and software providers. The other option is that the whole Cloud excitement goes back to SaaS only, that application creators are also hosting providers, that the only resource you get in a “utility” fashion is the application itself. At which point application portability is not a concern anymore and we go back to “only” worrying about data portability and application interoperability, an easier problem and one on which we have come a long way already. If this is what comes to pass then the challenge of Cloud portability may well be one of the main reasons. Along with the lack of revenue/margin potential for many of the actors in an IaaS world, as my CEO is fond of pointing out.

[UPDATED 2009/4/22: F5’s Lori MacVittie provides a very nice illustration of the same point, in her explanation of why OVF is not a cloud portability silver bullet.]

[UPDATED 2009/6/1: Soon after posting this entry I was contacted by people at SD Times about turning it into a “guest view” article in the June issue. It has just been published. It’s also in the paper version.]

5 Comments

Filed under Amazon, Application Mgmt, Articles, Cloud Computing, Everything, Google App Engine, HP, IT Systems Mgmt, Mgmt integration, People, Portability, Specs, Standards, Utility computing

Manifesto hyperventilation

The Cloud Manifesto debacle of the last few days is another sign of the landgrab atmosphere in the Cloud interoperability/standardization space. It’s like the crowd at 6:00AM in front of an electronics store on the day of the Big Sale. This is not the first spark and it won’t be the last one.

Two aspects of this crisis (soon to be anecdote) are especially ironic.

First, as many have picked up, the irony of a Microsoft manager complaining about a document having to be signed “as is” is something to savor slowly. I know Microsoft is a large company and I can believe that Steven Martin may never personally have engaged in such practices. But our credulity gets stretched when in the same post he lauds the WS-* process as an example of “organic” industry collaboration. Pick any WS-* spec and try contacting the listed co-authors other than Microsoft and (usually but not always) IBM. Ask them how much input they had and how much they were able to change. The answer won’t always be zero, but there will be plenty of zeros.

And these WS-* documents were standards candidates: technical specifications that these “take it or leave it” authors would presumably eventually have to implement in their products. Which takes me to the second point of irony, one I haven’t seen mentioned:

This is a manifesto, folks! I am not familiar with all of these other manifestos (plus this one) but, at least for those I know, one of their defining characteristics is that they stand in opposition to other views. Some may seem non-controversial now, but at the time of their publication they were very much so. Otherwise they wouldn’t have been called “manifestos”. The very idea of a manifesto being criticized for not being inclusive enough makes my head spin.

If anything (based on this draft), what the “manifesto” can be criticized for is being too meek and consensual to be worthy of that name. How many important manifestos give their readers a “whenever appropriate” escape clause in their guiding principles?

[Note: If you are in an investigative mood, start with this whois search and if the name of the registrant sounds familiar it may be because you’ve read this blog post before.]

[UPDATED 2009/3/30: It’s live.]

[UPDATED 2009/3/31: Mike DiPetrillo also thinks this is a bit too motherhood-and-applepie for being called a “manifesto”. Thomas Bittman’s translation of the manifesto is also good.]

1 Comment

Filed under Cloud Computing, Everything, Microsoft, People, Portability, Standards, Utility computing

HP introduces “Operations Manager i”

If you’ve seen a lot of news articles about HP’s IT management software this week (e.g. through Cote or Doug) it’s because the company held its Software Universe conference in Vienna this week and timed a bunch of announcements and PR events to match.

Most of the articles linked above just paraphrase the press releases and talking points. So if you’re going to get the company line, might as well get it straight from the horse’s mouth. Which we can now do through a new HP blog about BSM. The first article was penned by Mike Shaw and that’s enough for me to want to subscribe (I worked with Mike a few times when I was at HP and he is very sharp). I think Mike also wrote the other entries but since they are not signed (and the account name, “adsey007”, is pretty opaque) I am not sure. In any case, they are pretty good. This one gives an overview of the Vienna announcements. The next one describes the OMi product in more detail. I am not in a position to know how well it works but, according to the article, OMi takes the important step of modeling and managing events in the context of the overall model in the CMDB, such that the event management features (e.g. correlation) can use the already-discovered relationships between the IT elements involved in the events (e.g. dependencies). The article also implies that the CMDB has been integrated with NNM (OpenView), Service Manager (Peregrine) and Server Automation (Opsware). Which is a lot of progress in the 16 months since I left HP, so I am taking it with a grain of salt (we all know there are different levels of integration). The press release says that the CMDB is now integrated with 17 HP BTO applications, so you may need a whole salt shaker. In any case it’s great to see that Ramin and team are forging ahead, delivering products and driving the integration of the BTO portfolio.
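To illustrate why correlating events against the already-discovered model matters (a toy sketch of the general idea, not a claim about how OMi is implemented; all names are made up), here is what topology-aware root-cause filtering looks like when the dependency relationships already live in a CMDB:

```python
dependencies = {                      # "depends on" edges, as a CMDB would hold them
    "payroll-app": ["app-server-3"],
    "app-server-3": ["db-server-1"],
}

def probable_root_causes(alerting_items, deps):
    """Among the items currently raising events, keep only those that do not
    depend (directly or transitively) on another alerting item."""
    alerting = set(alerting_items)
    def depends_on_alerting(item, seen=frozenset()):
        if item in seen:              # guard against cycles in the dependency graph
            return False
        for target in deps.get(item, []):
            if target in alerting or depends_on_alerting(target, seen | {item}):
                return True
        return False
    return [i for i in alerting_items if not depends_on_alerting(i)]

events = ["payroll-app", "app-server-3", "db-server-1"]   # three symptoms...
print(probable_root_causes(events, dependencies))          # ...one likely cause: ['db-server-1']
```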

The last paragraph (“OMi actually sits on top of existing HP Operations Manager installations…”) is intriguing and may provide a clue about the depth of the integration. In any case, OMi is something to keep an eye on as it is positioned to leverage a lot of the key strengths of the HP BTO portfolio.

BTW, this OMi product has nothing to do with this OMI which was a precursor to WSMF, WSDM and WS-Management. And which most people currently working in HP Software have never heard of.

2 Comments

Filed under Application Mgmt, Conference, Everything, HP, IT Systems Mgmt, Mgmt integration, Modeling, People

State modeling: party over, go home now.

Is the Northwest weather softening Savas? Is it the food? I just read the “how do I model state? let me count the ways” article that he, Ian Foster, Paul Watson and Mark McKeown published in the September 2008 Communications of the ACM. In the article, the authors attempt to recap (and advance?) the 5-year-old debate between the WSRF, HTTP-only and “no convention” (e.g. Zen-SOAP as used in CMIS) approaches to interacting with stateful resources over the Web. If you were anywhere near OGF (then called GGF) around 2003, you know what I am talking about. And you remember how heated the arguments were. There was something about this subject (or maybe it was the people involved) that consistently generated great showmanship (and some bruised egos) in the debates.

With that in mind, reading this article felt like watching a Chinese opera adaptation of Apocalypse Now. Or listening to Heavy Metal with the bass dialed down to zero.

This would have been a very useful article to have in 2003. At the time, it would have clearly framed the question, shown the overwhelming similarities and small differences between the approaches and allowed people to see that there wasn’t actually that much to debate at a fundamental level, but mainly practical considerations to juggle. It may have prevented the quasi-religious war that erupted.

It took a while, but that period of religious war is well over now and we are firmly in the “I’ve heard you, you’ve heard me, do what you want, I’ll do what I want” stage. WSRF people are still doing WSRF (or equivalents like WS-RT). REST people are HTTPing right and left. They don’t meet much but when they do they don’t bump shoulders anymore. And in a way this article is a good illustration of this much more dispassionate environment.

So why am I complaining? Because these fights were fun! At least from a spectator’s point of view, but I suspect that Savas and the gang had plenty of fun too (not sure about the other side who, at least at first, expected “why are you throwing away OGSI” kind of pushback rather than this more radical-sounding response).

I printed this ACM article partly on the off chance that it would provide some new way to look at the problem, one that hadn’t emerged in the past five years. But in retrospect I think my true motivation was that I expected it to capture, like in the old days, some of the entertainment value of a radio talk show. Instead, the excitement level in this article is in the league of NPR’s StarDate astronomy report.

I feel cheated. I haven’t learned anything new and I haven’t been entertained either. This article feels like the end of the party, when the bottles are being put away, the lights are flickering and bad music is playing to nudge the last guests out of the house.

Now that I am grumpy, I guess I have to point out a few highly questionable statements in the article in retribution:

“Fortunately, there seems to be industry support for an integration of the WS-Transfer and WS-RF approaches, based on a WS-Transfer substrate – the WS-ResourceTransfer specification.” See the last two paragraphs of this entry.

“Support for WS-Addressing has since become quasi-universal, and now few find its use objectionable.” Time to pull out the Victor Hugo quote I have been saving for a special occasion: “Et s’il n’en reste qu’un, je serai celui-là” (“and if only one remains, I will be that one”). But frankly I very much doubt that I am the only one still shaking his head sadly in contemplation of WS-Addressing.

In fact, Stu agrees with me on this (see item #6a in his list of disagreements with the article). Looks like he too was made a bit grumpy by the article, for different reasons.

There is one more debatable choice in this article, and it’s more serious than the two above. It introduces an arbitrary difference between the WS-Transfer and HTTP approaches. Compare the third lines of tables 4 and 5 (retrieving the status of a specific job). According to the article, WS-Transfer gives you the choice between two options:

  • retrieve the entire state of the job and fish for the status field inside of it (the approach in table 4), or
  • “a new operation (for example GetEPRtoPart) is defined that requests that a new state representation be exposed, through a different EPR, representing parts of the original state representation”

The way it works for HTTP, on the other hand, is through an “application-specific convention” (in this example, appending “/status” at the end of the URL).

Except there is no reason why this third approach cannot be used in the WS-Transfer scenario. The article says that “in WS-Transfer, the same effect [accessing a subset of the resource state] can be achieved, but only by defining an auxiliary operation that returns an EPR to a desired subset”. What, pray tell, prevents a WS-Transfer implementation from having an “application-specific convention” just like the HTTP kids next door? It can be at the URL level (e.g. adding “/status”). Or at the EPR reference parameter level. The latter is actually exactly what WS-Management does, using the wsman:SelectorSet header. It does not, as the article claims, define a special operation to get these fine-grained EPRs. It uses an application convention to do so (which, in the case of WS-Management, happens to be “whatever Windows implements”, but that’s a different debate).
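To make the comparison concrete, here is a side-by-side sketch of the two conventions. The job identifier, URLs, resource URI and selector name are invented for illustration; only the wsman:SelectorSet mechanism itself comes from WS-Management.

```python
job_id = "job42"   # made-up identifier

# HTTP style: the application-specific convention lives in the URL structure.
http_request_line = f"GET http://example.org/jobs/{job_id}/status HTTP/1.1"

# WS-Management style: the convention lives in the EPR reference parameters,
# here a wsman:SelectorSet carried in the SOAP header of a WS-Transfer Get
# (resource URI and selector name are invented; the mechanism is WS-Management's).
soap_header_fragment = f"""\
<wsman:ResourceURI>http://example.org/jobmodel/JobStatus</wsman:ResourceURI>
<wsman:SelectorSet>
  <wsman:Selector Name="JobID">{job_id}</wsman:Selector>
</wsman:SelectorSet>"""

print(http_request_line)
print(soap_header_fragment)
```

In both cases the client needs an out-of-band agreement about what “status” (or “JobStatus”/“JobID”) means; neither stack removes that need.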

By the way, this question of “convention over specification” is where I don’t quite follow Stu (see his point #4 in his aforementioned list of disagreements) and his invocation of the “hypermedia constraint”. I don’t see how any of the four specifications he calls to the rescue (HTML form submission, XForms submission options, Atompub service documents and URI templates) would prevent me from having to have an application-specific agreement about how to retrieve the state (as opposed to another subset of the representation, like the creation date). URI templates, for example, might support how this agreement is expressed, but they don’t replace it.
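A one-line illustration of that last point (template and names are made up): a URI template can tell you how to mint the URI, but knowing that “status” is the piece of state you want remains an application-specific agreement.

```python
# The template expresses how to construct the URI; the meaning of "status" is
# still agreed out of band between client and service.
uri_template = "http://example.org/jobs/{job_id}/{part}"
print(uri_template.format(job_id="job42", part="status"))
```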

The article does a pretty good job of showing how close the alternatives are (even though, as illustrated above, it still portrays them as more different than they need to be). I am not saying it’s a bad article for the Communications of the ACM. I am saying that the Communications of the ACM is a bad medium for one of the few nerdy debates that have genuine entertainment value.

[UPDATED 2008/10/2: Jim Webber, Savas Parastatidis and Ian Robinson provide a full REST example for InfoQ: how to GET a cup of coffee. Includes state considerations discussed in the ACM article.]

2 Comments

Filed under Articles, Everything, Grid, People, REST, SOAP, SOAP header, Specs, Standards, Tech, WS-Management, WS-ResourceTransfer, WS-Transfer

The boss is back

Today is full of news for Oracle Enterprise Manager. I came into the office this morning expecting the ClearApp announcement (I had even prepared a blog entry on it over the weekend). This, on the other hand, came as a (good) surprise!

Comments Off on The boss is back

Filed under Everything, IT Systems Mgmt, Oracle, People, VMware

Oedipus meets IT management?

Having received John’s approval to reclaim the “mighty” adjective, I am going to have a bit of fun with it. More specifically, I am toying with adding VMWare to the list. Clearly, VMWare doesn’t want to go the way Sun did with Solaris (nice technology, right place at the right time, but commoditized in the long term). They have supposedly surrounded themselves with a pretty good patent minefield to slow the commoditization trend, but it will happen anyway and they know it. Especially with improved virtualization support in hardware making some of these patents less relevant. For this reason, they are putting a lot of effort on developing the IT management side of their portfolio.

One illustration of this is the fact that VMWare recently recruited the Senior VP of systems management at Oracle to become its Executive VP of R&D (incidentally, this happened a couple months after I joined his team at Oracle; maybe the knowledge that he wouldn’t have to deal with my bad sense of humor for too long made it easier for him to approve my hiring). I don’t think it’s a coincidence that they chose someone who is not a virtualization expert but an enterprise infrastructure expert (namely database performance and management software).

So, do we have the “Mighty Four” (Oracle, Microsoft, EMC and VMWare) for a nice symmetry with the “Big Four” (HP, IBM, BMC and CA)? Or does the fact that EMC owns most of VMWare make us pause here? Might a mighty mother a mighty? How do you run an 85%-owned company whose strategic direction takes it toward direct competition with its corporate owner? EMC and VMWare are attacking IT management from different directions (EMC is actually going at it from several directions at the same time, based on its historical storage products, plus new software from acquisitions, plus hiring a few smart people away from IBM to put the whole thing together), so on paper their portfolios look pretty complementary. But while aligning and collaborating more closely may make sense from a product engineering perspective, it doesn’t make sense from a financial engineering perspective. At least as long as investors are so hungry for the few VMWare shares available on the open market (as a side issue, I wonder if they like it so much because of the virtualization market per se or because they see VMWare’s position in that market as a beachhead for the larger enterprise IT infrastructure software market). And, as should not be surprising, the financial view is likely to prevail, which will keep the companies at arm’s length. But if both VMWare and EMC are successful in assembling a comprehensive enterprise infrastructure management system, things will get interesting.

[UPDATED 2008/5/28: The day after I wrote this, VMWare bought application performance management vendor B-hive. I am pretty lucky with my timing on this one.]

2 Comments

Filed under Everything, IT Systems Mgmt, Patents, People, Virtualization, VMware

An interesting move

I have been keeping an eye on Don Ferguson’s blog with the hope of one day reading a bit about Microsoft’s Oslo project and maybe the application management aspects of it. Instead, what I saw tonight is that Don is leaving Microsoft, after a short stay, to join CA. Welcome to the fun world of IT management Don! It seems like a safe bet to assume that he will work on application management (sorry, I am supposed to say “service management”), which is what I focus on at Oracle. So forget Oslo, now I have another reason to keep an eye on Don. Microsoft has hired quite a few people out of CA (including Anders Vinberg, a while ago, and my WSDM co-conspirator Igor Sedukhin), so I guess it’s only fair to see some movement the other way.

Since this has turned into a “people magazine” edition of this blog, IT management observers who don’t know it yet might be interested to learn that DMTF president Winston Bumpus left Dell to join VMWare several months ago. Leaving aside the superiority of the SF Bay Area over Round Rock TX for boating purposes, this can also be seen as a clear signal of interest from VMWare in standards, and especially in DMTF. OVF might only be the beginning.

If anyone who matters in IT management adopts a baby, checks into rehab or gets into a brawl, you’ll read about it first on this blog. Coming next week: exclusive photos from the beach-side retreat of the itSMF board. We’ll compare them to last year’s photos to find out whose six-pack shows the most impressive “continual service improvement”. And the following week, you’ll learn what really happened in that Vegas meeting room filled with IT management analysts. On the other hand, I do not cover fashion faux-pas because there are just too many of those in our industry.

1 Comment

Filed under CA, Everything, Microsoft, People