Fog Computing

As happened with Salesforce.com a couple of years ago, Amazon S3 is having serious problems serving its customers today. Like Salesforce.com at the time, Amazon is criticized for not being transparent enough about it.

Right now, “cloud computing” is also “fog computing”. There is very little visibility (if any) into the infrastructure that is being consumed as a service. Part of this is a feature (a key reason for using these services is freedom from low-level administration) but part of it is a defect.

The clamor for Amazon to provide more updates about the outage on the AWS blog is a bit misplaced in that sense. Sure, that kind of visibility (“well folks, it was bring-your-hamster-to-work day at the Amazon data center today and turns out they love chewing cables. Our bad. The local animal refuge is sending us all their cats to help deal with the mess. Stay tuned”) gives a warm fuzzy (!) feeling but that’s not very actionable.

For Amazon, it’s not a matter of giving its customers access to its entire management stack (even in view-only mode). It’s a matter of extracting the customer-facing metrics that are relevant and exposing them in a way that can be consumed by the customer’s IT management tools, so that they can be integrated into the overall IT decisions. And it’s not just about monitoring, even though that’s a good start. Saying “I don’t want to know how you run the service, all I care about is what you do for me” only takes you so far in enterprise computing. This opacity is a great way to hide single points of failure:

I predict (as usual, no date) that we will see companies that thought they were hedging their bets by using two different SaaS providers realize, on the day Amazon goes down again, that both providers were hosting on Amazon EC2 (or the equivalent). Or, on the day a BT building catches fire, that both SaaS providers had their data centers there.

Just another version of “for diversification, I had a high yield fund and a low risk fund. I didn’t really read the prospectus. Who would have guessed that they were both loaded with mortgage debt?”

More about IT management in a utility computing world in a previous entry.

[UPDATED: Things have improved a bit since writing this. Amazon now has a status panel. But it’s still limited to monitoring. Today it’s Google App Engine that is taking the heat.]

Comments Off on Fog Computing

Filed under Everything, Governance, IT Systems Mgmt, Utility computing

Comparing Joe Gregorio’s RESTful Partial Updates to WS-ResourceTransfer

Joe Gregorio just proposed a way to do RESTful partial updates. I am not in that boat anymore but, along with my then-colleagues from HP, Microsoft, IBM and Intel, I have spent a fair bit of time trying to address the same problem, albeit in a SOAP-based way. That was WS-ResourceTransfer (WS-RT) which has been out as a draft since summer 2006. In a way, Joe’s proposal is to AtomPub what WS-ResourceTransfer is to WS-Transfer, retrofitting a partial resource update on top of a “full update” mechanism. Because of this, I read his proposal with interest. I have mentioned before that WS-RT isn’t the best-looking cow in the corral so I was ready to like Joe’s presumably simpler approach.

I don’t think it fits the bill for the partial update requirements of IT management scenarios.

This is not a REST versus SOAP kind of thing and I am not about to launch into a “how do you do end-to-end encryption and reliable messaging” tirade. I think it is perfectly possible to meet most management scenarios in a RESTful way. And BTW, I think most management scenarios do not need partial updates at all.

But for those that do, there is just too little flexibility in Joe’s proposal. That doesn’t make it a bad proposal; I don’t have much of an idea of what his use cases are, and it might be perfectly adequate for them. But as I read it, IT management is what I was mentally trying to apply it to, and it fell short in that regard.

Joe’s proposal requires the server to annotate any element that can be updated. On the positive side, this “puts the server firmly in control of what sub-sections of a document it is willing to handle partial updates on” which can indeed be useful. On the negative side it is not very flexible. If you are interacting with a desired-state controller, the rules that govern what you can and cannot change are often a lot more complex than “you can change X, you can’t change Y”. Look at SML validation for an example.

Another aspect is that the requester has to explicitly name the elements to replace. That could make for a long list. And it creates a risk of race conditions. If I want to change all the elements that have an attribute “foo” with a value “bar” I need to retrieve them first so that I can find their xml:id. Then I need to send a message to update them. But if someone changed them in the meantime, they may not have the “bar” value anymore and I am going to end up updating elements that should not be updated. Again, not necessarily a problem in some domains. An update mechanism that lets you point at the target set via something like XPath helps prevent this round-tripping (at a significant complexity cost unfortunately, something WS-RT tries to address with simplified dialects of XPath).
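
To make the round-tripping problem concrete, here is a minimal sketch of the two approaches in Python. The document, the shape of the two “update” messages and the get/send helpers are all invented for illustration; this is not the wire format of either Joe’s proposal or WS-RT.

    # Minimal sketch of the race discussed above. The document shape, the
    # two "update message" dictionaries and the get/send callables are all
    # hypothetical, not the wire format of either proposal.
    import xml.etree.ElementTree as ET

    XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

    SNAPSHOT = """
    <config>
      <port xml:id="p1" role="admin">8080</port>
      <port xml:id="p2" role="admin">8081</port>
      <port xml:id="p3" role="public">80</port>
    </config>
    """

    def id_based_update(get, send):
        # Round trip 1: fetch the document to learn which xml:id values
        # currently carry role="admin" ...
        doc = ET.fromstring(get())
        ids = [e.get(XML_ID) for e in doc.findall(".//port[@role='admin']")]
        # Round trip 2: name those ids in the update message. If another
        # client changed the roles between the two calls, the wrong
        # elements get updated (or the right ones are missed).
        send({"replace-ids": ids, "new-value": "9090"})

    def xpath_based_update(send):
        # One message: the server evaluates the expression against the
        # state it holds at that moment, so there is no read/write window.
        send({"replace-where": "//port[@role='admin']", "new-value": "9090"})

    if __name__ == "__main__":
        # The GET sees the snapshot above; the server may have moved on by
        # the time the id-based update arrives.
        id_based_update(get=lambda: SNAPSHOT, send=print)
        xpath_based_update(send=print)

The first function has a window between its two calls during which the selection criterion can go stale; the second pushes the selection to the server, which is exactly where the extra complexity (and WS-RT’s need for simplified XPath dialects) comes from.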

Joe volunteers another obvious limitation when he writes that “this doesn’t solve all the partial update scenarios, for example this doesn’t help if you have a long sub-list that you want to append to”. Indeed. And it’s even worse if you care about element order. Order is not something we normally care much about in IT management (UML, MOF, etc. don’t have a notion of property order), but the overuse of XSD in system modeling has made order matter just to avoid validation failures (it’s really hard to write an XSD that doesn’t care about order, even when order is not meaningful to the domain being modeled).

In early 2007, I wrote an implementation of WS-RT and in the process I found many gaps in the specification, especially in the PUT operation. It is not the ideal answer in any way. If one were to try to fix it, a good place to start might be to make the specification a bit less flexible (e.g. restricting the change granularity to the level of an element, not an attribute and not a text node). There is plenty of room to find the simplicity/flexibility sweet spot for IT management scenarios between what WS-RT tries to offer and what Joe’s proposal offers.

Comments Off on Comparing Joe Gregorio’s RESTful Partial Updates to WS-ResourceTransfer

Filed under Everything, IT Systems Mgmt, Specs, WS-ResourceTransfer

Microsoft ditches SML, returns to SDM?

I gave in to the temptation of a tabloid-style title for this post, but the resulting guilt forces me to quickly explain that it is speculation and not based on any information other than what is in the links below (none of which explicitly refers to SDM or SML). And of course I work for a Microsoft competitor, so keep your skeptic hat on, as always.

The smoke that makes me picture that SML/SDM fire comes from this post on the Service Center team blog. In it, the product marketing manager for System Center Service Manager announces that the product will not ship until 2010. Here are the reasons given.

The relevant feedback here can be summarized as:

  • Improve performance
  • Enhance integration with the rest of the System Center product family and with the wider Microsoft product offering

To meet these requirements we have decided to replace specific components of the Service Manager infrastructure. We will also take this opportunity to align the product with the rest of the System Center family by taking advantage of proven technologies in use in those products

Let’s rewind a little bit and bring some context. Microsoft developed the Service Definition Model (SDM) to try to capture a consistent model of IT resources. There are several versions of SDM out there, and one of them is currently used by Operations Manager. It is how you capture domain-specific knowledge in a Management Pack (Microsoft’s name for a plug-in that lets you bring a new target type to Operations Manager). In order to get more people to write management packs that Operations Manager can consume, Microsoft decided to standardize SDM. It approached companies like IBM and HP and the SDM specification became SML. Except that there was a lot in SDM that looked like XSD, so SML was refactored as an extension of XSD (pulling in additions from Schematron) rather than a more stand-alone, management-specific approach like SDM. As I’ve argued before (look for the “XSD in SML” paragraph), in retrospect this was the wrong choice. SML was submitted to W3C and is now well advanced towards completion as a standard. Microsoft was forging ahead with the transition from SDM to SML and when they announced their upcoming CMDB they made it clear that it would use SML as its native metamodel (“we’re taking SML and making it the schema for CMDB” said Kirill Tatarinov who then headed the Service Center group).

Back to the present time. This NetworkWorld article clarifies that it’s a redesign of the CMDB part of Service Center that is causing the delay: “beta testing revealed performance and scalability issues with the CMDB and Microsoft plans to rebuild its architecture using components already used in Operations Manager.” More specifically, Robert Reynolds, a “group product planner for System Center”, explains that “the core model-based data store in Operations Manager has the basic pieces that we need”. That “model-based data store” is the one that uses SDM. As a side note, I would very much like to know what part of the “performance and scalability issues” comes from using XSD (where a lot of complications come from features not relevant for systems management).

Thus the “enhance integration with the rest of the System Center product family” in the original blog post reads a lot like dumping SML as the metamodel for the CMDB in favor of SDM (or an updated version of SDM). QED. Kind of.

In addition to the problems Microsoft uncovered with the Service Center Beta, the upcoming changes around project Oslo might have further weakened the justification for using SML. In another FUD-spreading blog post, I hypothesized about what Oslo means for SML/CML. This recent development with the CMDB reinforces that view.

I understand that there is probably more to this decision at Microsoft than the SML/SDM question but this aspect is the one that may have an impact not just on Microsoft customers but on others who are considering using SML. In the larger scheme of things, the overarching technical question is whether one metamodel (be it SDM, SML, MOF or something else) can efficiently be used to represent models across the entire IT stack. I am growing increasingly convinced that it cannot.

4 Comments

Filed under CMDB, Everything, IT Systems Mgmt, Microsoft, Oslo, SML, Specs, Standards

IT management for the personal CIO

In the previous post, I described how one can easily run one’s own web applications as a beneficial replacement for many popular web sites. It was really meant as background for the present article, which is more relevant to the “IT management” topic of this blog.

Despite my assertion that recent developments (and the efforts of some hosting providers) have made the proposition of running your own web apps “easy”, it is still not as easy as it should be. What IT management tools would a “personal CIO” need to manage their personal web applications? Here are a few scenarios:

  • get a catalog of available applications that can be installed and/or updated
  • analyze technical requirements (e.g. PHP version) of an application and make sure it can be installed on your infrastructure
  • migrate data and configuration between comparable applications (or different versions of the same application)
  • migrate applications from one hosting provider to another
  • back-up/snapshot data and configuration
  • central access to application stats/logs in simple format
  • uptime, response time monitoring
  • central access to user management (share users and configuration across all your applications)
  • domain name management (registration, renewal)

As the CIO of my personal web applications, I don’t need to see Linux patches that need to be applied or network latency problems. If my hosting provider doesn’t take care of these without me even noticing, I am moving to another provider. What I need to see are the controls that make sense to a user of these applications. Many of the bullets listed above correspond to capabilities that are available today, but in a very brittle and hard-to-put-together form. My hosting provider has a one-click update feature but they have a limited application catalog. I wouldn’t trust them to measure uptime and response time for my sites, but there are third party services that do it. I wouldn’t expect my hosting provider to make it easy to move my apps to a new hosting provider, but it would be nice if someone else offered this. Etc. A neutral web application management service for the “personal CIO” could bring all this together and more. While I am at it, it could also help me back up/manage my devices and computers at home and manage/monitor my DSL or cable connection.
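
To give an idea of how basic some of these capabilities are, here is a minimal sketch of the “uptime, response time monitoring” item from the list above, the kind of check a neutral “personal CIO” service could run on your behalf. The application list and the threshold are made-up examples.

    # Minimal sketch of uptime/response-time monitoring for personal web apps.
    # The URLs and the 2-second threshold are invented examples.
    import time
    import urllib.request

    MY_APPS = {
        "blog":   "https://example.com/blog/",
        "photos": "https://example.com/photos/",
        "wiki":   "https://example.com/wiki/",
    }
    SLOW_THRESHOLD = 2.0  # seconds

    def check(name, url):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                elapsed = time.monotonic() - start
                status = "SLOW" if elapsed > SLOW_THRESHOLD else "OK"
                return f"{name}: {status} (HTTP {resp.status}, {elapsed:.2f}s)"
        except Exception as exc:
            return f"{name}: DOWN ({exc})"

    if __name__ == "__main__":
        for name, url in MY_APPS.items():
            print(check(name, url))

The value of the “personal CIO” service is not in these twenty lines; it is in running them (and the catalog, backup and migration equivalents) reliably, on a schedule, with history and alerting, so that I never have to.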

1 Comment

Filed under Everything, IT Systems Mgmt, Portability, Tech

My web apps and me

Registering a domain name: $10 per year
Hosting it with all the features you may need: $80 per year
Controlling your on-line life: priceless

To be frank, the main reason that I do not use Facebook or MySpace is that I am not very social to start with. But, believe it or not, I have a few friends and family members with whom I share photos and personal stories. Not to mention this blog for different kinds of friends and different kinds of stories (you are missing out on the cute toddler photos).

Rather than doing so on Facebook, MySpace, BlogSpot, Flickr, Picasa or whatever the Microsoft copies of these sites are, I maintain a couple of blogs and on-line photo albums on vambenepe.com. They all provide user access control and RSS-based syndication so no-one has to come to vambenepe.com just to check on them. No annoying advertising, no selling out of privacy and no risk of being jerked around by bait-and-switch (or simply directionless) business strategies (“in order to serve you better, we have decided that you will no longer be able to download the high-resolution version of your photos, but you can use them to print with our approved print-by-mail partners”). Have you noticed how people usually do not say “I use Facebook” but rather “I am on Facebook” as if riding a mechanical bull?

The interesting thing is that it doesn’t take a computer genius to set things up in such a way. I use Dreamhost and it, like similar hosting providers, gives you all you need. From the super-easy (e.g. they run WordPress for you) to the slightly more personal (they provide a one-click install of your own WordPress instance backed by your own database) to the do-it-yourself (they give you a PHP or RoR environment to create/deploy whatever app you want). Sure you can further upgrade to a dedicated server if you want to install a servlet container or a CodeGears environment, but my point is that you don’t need to come anywhere near this to own and run your own on-line life. You never need to see a Unix shell, unless you want to.

This is not replacing Facebook lock-in with Dreamhost lock-in. We are talking about an open-source application (WordPress) backed by a MySQL database. I can move it to any other hosting provider. And of course it’s not just blogging (WordPress) but also wiki (MediaWiki), forum (phpBB), etc.

Not that every shiny new on-line service can be replaced with a self-hosted application. You may have to wait a bit. For example, there is more to Facebook than a blog plus photo hosting. But guess what. Sounds like Bob Bickel is on the case. I very much hope that Bob and the ex-Bluestone gang aren’t just going to give us a “Facebook in a box” but something more innovative that makes it easy for people to run and own their side of a Facebook-like presence, with the ability to connect with other implementations for the social interactions.

We have always been able to run our own web applications, but it used to be a lot of work. My college nights were soothed by the hum of an always-running Linux server (actually a desktop used as a server) under my desk on which I ran my own SMTP server and HTTPd. My daughter’s “soothing ocean waves” baby toy sounds just the same. There were no turnkey web apps available at the time. I wrote and ran my own Web-based calendar management application in Python. When I left campus, I could have bought some co-location service but it was a hassle and not cheap, so I didn’t bother [*].

I have a lot less time (and Linux administration skills) now than when I left university, so how come it is now attractive for me to run my own web apps again? What changed in the environment?

The main driver is the rise of the LAMP stack and especially PHP. For all the flaws of the platform and the ugliness of the code, PHP has sparked a huge ecosystem. Not just in terms of developers but also of administrators: most hosting providers are now very comfortable offering and managing PHP services.

The other driver is the rise of virtualization. Amazon hosts Xen images for you. But it’s not just the hypervisor version of virtualization. My Dreamhost server, for example, is not a Xen or VMware virtual machine. It’s just a regular server that I share with other users but Dreamhost has created an environment that provides enough isolation from other users to meet my needs as an individual. The poor man’s virtualization if you will. Good enough.

These two trends (PHP and virtualization) have allowed Dreamhost and others to create an easy-to-use environment in which people can run and deploy web applications. And it becomes easier every day for someone to compete with Dreamhost on this. Their value to me is not in the hardware they run. It’s in the environment they provide, which saves me from the low-level LAMP administration I don’t have time for. Someone could create such an environment and run it on top of Amazon’s utility computing offering. Which is why I am convinced that such environments will be around for the foreseeable future, Dreamhost or no Dreamhost. Running your own web applications won’t be just for geeks anymore, just like using a GPS is not just for the geeks anymore.

Of course this is not a panacea and it won’t allow you to capture all aspects of your on-line life. You can’t host your eBay ratings. You can’t host your Amazon rank as a reviewer. It takes more than just technology to break free, but technology has underpinned many business changes before. In addition to the rise of LAMP and virtualization already mentioned, I am watching with interest the different efforts around data portability: dataportability.org, OpenID, OpenSocial, Facebook API… Except for OpenID, these efforts are driven by Web service providers hoping to channel the demand for integration. But if they are successful, they should give rise to open source applications you can host on your own to enjoy these services without the lock-in. One should also watch tools like WSO2’s Mashup Server and JackBe Presto for their potential to rescue captive data and exploit freed data. On the “social networks” side, the RDF community has been buzzing recently with news that Google is now indexing FOAF documents and exposing the content through its OpenSocial interface.

Bottom line, when you are invited to create a page, account or URL that will represent you or your data, take a second to ask yourself what it would take to do the same thing under your own domain name. You don’t need to be a survivalist freak hiding in a mountain cabin in Montana (“it’s been eight years now, I wonder if they’ve started to rebuild cities after the Y2K apocalypse…”) to see value in more self-reliance on the web, especially when it can be easily achieved.

Yes, there is a connection between this post and the topic of this blog, IT management. It will be revealed in the next post (note to self: work on your cliffhangers).

[*] Some of my graduating colleagues took their machines to the dorm basement and plugged them into a switch there. Those Linux Slackware machines had amazing uptimes of months and years. Their demise didn’t come from bugs, hacking or component failures (even when cats made their litter inside a running computer with an open case) but from the fire marshal, and only after a couple of years (the network admins had agreed to turn a blind eye).

[UPDATED 2008/7/7: Oh, yeah, another reason to run your own apps is that you won’t end up threatened with jail time for violating the terms of service. You can still end up in trouble if you misbehave, but they’ll have to charge you with something more real, not a whatever-sticks approach.]

[UPDATED 2009/12/30: Ringside (the Bob Bickel endeavor that I mention above), closed a few months after this post. Too bad. We still need what they were working on.]

2 Comments

Filed under Everything, Portability, Tech, Virtualization

David Linthicum on SaaS, enterprise architecture and management

David Linthicum from ZapThink (the world’s most prolific purveyor of analyst quotes for SOA-related press releases) recently wrote an article explaining that “Enterprise Architects Must Plan for SaaS”. A nice, succinct overview. I assume there is a lot more content in the keynote presentation that the article is based on.

The most interesting part from a management perspective is the paragraph before last:

Third, get in the mindset of SaaS-delivered systems being enterprise applications, knowing they have to be managed as such. In many instances, enterprise architects are in a state of denial when it comes to SaaS, despite the fact that these SaaS-delivered systems are becoming mission-critical. If you don’t believe that, just see what happens if Salesforce.com has an outage.

I very much agree with this view and the resulting requirements for us vendors of IT management tools. It is of course not entirely new and in many respects it is just a variant of the existing challenges of managing distributed applications, which SOA practices have been designed to help address. I wrote a slightly more specific description of this requirement in an earlier post:

If my business application calls a mix of internal services, SaaS-type services and possibly some business partner services, managing SLAs and doing impact/root cause analysis works a lot better if you get some management information from these other services. Whether it is offered by the service owner directly, by a proxy/adapter that you put on your end or by a neutral third party in charge of measuring/enforcing SLAs. There are aspects of this that are ‘regular’ SOA management challenges (i.e. that apply whenever you compose services, whether you host them yourself or not) and there are aspects (security, billing, SLA, compliance, selection of partners, negotiation) that are handled differently in the situation where the service is consumed from a third party.

With regard to the first two “tricks” listed in David’s article, people should take a look at what the Oracle AIA Foundation Pack and Industry Reference Models have to offer. They address application integration in general, not specifically SaaS scenarios, but most of the semantics/interface/process concerns are not specific to SaaS. For example, the Siebel CRM On Demand Integration Pack for E-Business Suite (catchy name, isn’t it) provides integration between a hosted application (Siebel CRM On Demand) and an on-premises application (Oracle E-Business Suite). Efficiently managing such integrated systems (whether you buy, build or rent the applications and the integration) is critical.

Comments Off on David Linthicum on SaaS, enterprise architecture and management

Filed under Everything, IT Systems Mgmt, Mgmt integration, Oracle, SaaS

DMTF members as primary voters?

I just noticed this result from the 2007 DMTF member survey (taken a year ago, but as far as I can tell just released now). When asked what their “most important interoperability priority” is, members made it pretty clear that they want the current CIM/WBEM infrastructure fixed and polished. They seem a lot less interested in these fancy new SOAP-based protocols and even less in using any other model than CIM.

It will be interesting to see what this means for new DMTF activities, such as CMDBf or WS-RC, that are supposed to be model-neutral. A few possibilities:

  • the priorities of the members change over time to make room for these considerations
  • turn-over (or increase) in membership brings in members with a different perspective
  • the model-neutral activities slowly get more and more CIM-influenced
  • rejection by the DMTF auto-immune system

My guess is that the DMTF leadership is hoping for #1 and/or #2 while the current “base” (to borrow from the US election-season language) wouldn’t mind #3 or #4. I am expecting some mix of #2 and #3.

Pushing the analogy with current US political events further than is reasonable, one can see a correspondence with the Republican primary:

  • CIM/WBEM is Huckabee, favored by the base
  • CMDBf/WS-RC/WS-Management etc. is Romney, the choice of the party leadership
  • In the end, some RDF and HTTP-based integration-friendly approach comes from behind and takes the prize (McCain)

Then you still have to win the general election (i.e. industry adoption of whatever the DMTF cooks up).

[UPDATED 2008/2/7: the day after I write this entry, Romney quits the race. Bad omen for CMDBf and WS-RC? ;-) ]

Comments Off on DMTF members as primary voters?

Filed under CMDB Federation, CMDBf, DMTF, Everything, Standards, WS-Management

Going dot-postal

According to this article, the Universal Postal Union is in talks with the ICANN to get its own “.post” TLD. Because, you see, “restricting the ‘.post’ domain name to postal agencies or groups that provide postal services would instill trust in Web sites using such names”. If you’re wondering what these “groups that provide postal services” are, keep reading: “the U.N. agency also could assign names directly to mail-related industries, such as direct marketing and stamp collecting”. I have nothing against stamp collectors, but direct marketing? So much for the “trust” part. Just call it “.spam” and be done with it.

I doubt that having to use a “.com” name has ever registered as a hindrance for FedEx, DHL or UPS in providing web-based services. And these organizations have been offering on-line package tracking and other services since before many of the postal organizations even had a way to locate post offices on their web site. That being said, http://com.post/ would be a great URL for a blog.

If the UPU really wants to innovate, what would be more interesting than a boring TLD would be a URI scheme for postal mail. Something like post:USA/CA/94065/Redwood%20City/Oracle%20Parkway/500/William%20Vambenepe but in a way that allows for the international variations. That would be a nice complement to the “geo:” URI scheme.
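
For fun, here is a toy sketch of what minting and parsing such addresses could look like. The field breakdown (country/region/postal code/city/street/number/addressee) just mirrors the example above and is my own invention, and it conveniently ignores the international variations that make this hard.

    # Toy sketch of the hypothetical "post:" URI scheme suggested above.
    # The field list mirrors the example in the text; a real scheme would
    # have to accommodate international address formats, which this does not.
    from urllib.parse import quote, unquote

    FIELDS = ["country", "region", "postal_code", "city", "street", "number", "addressee"]

    def to_post_uri(**parts):
        return "post:" + "/".join(quote(parts[f]) for f in FIELDS)

    def from_post_uri(uri):
        assert uri.startswith("post:")
        values = uri[len("post:"):].split("/")
        return dict(zip(FIELDS, (unquote(v) for v in values)))

    if __name__ == "__main__":
        uri = to_post_uri(country="USA", region="CA", postal_code="94065",
                          city="Redwood City", street="Oracle Parkway",
                          number="500", addressee="William Vambenepe")
        print(uri)   # post:USA/CA/94065/Redwood%20City/Oracle%20Parkway/500/William%20Vambenepe
        print(from_post_uri(uri))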

Now, should I categorize this as “off-topic”? What would the IT management angle be? Let’s see. Maybe as a way to further integrate the handling of virtual and physical servers? Kind of a stretch (being able to represent the destination as a URI in both cases doesn’t mean that delivering a physical server to an address is the same as provisioning a new VM in a hypervisor). Maybe as an additional notification endpoint (“if the application crashes, don’t send an email, send me a letter instead”)? As if. Alright, off-topic it is.

Comments Off on Going dot-postal

Filed under Everything, Off-topic

Loosening my coupling with Yahoo (excuse my SOA-speak)

For those of you who still have my @yahoo.com personal email in your address book, now is a good time to replace it with the more portable one composed of my first name @vambenepe.com. This way there won’t be any problem when I move away from Yahoo (which is where my personal emails are currently redirected) after the Microsoft acquisition.

This is not a knee-jerk anti-Microsoft reaction. It’s just an intuition that their attempt to acquire Yahoo is driven more by lust for Yahoo’s audience than anything else (Tim Bray seems to agree). And that having acquired the audience, Microsoft is going to want something more for its $44.6 billion (or whatever the final price ends up being) than the few dollars I send to Yahoo every year for freedom from ads and a few additional services. Things like promoting Silverlight for example (did you hear that the Web broadcast of the 2008 Olympics will supposedly require Silverlight? Since I don’t own a TV, that would make me a little upset if I cared about the Olympics).

When the time comes (I am willing to give Yahoo-Microsoft a chance to prove me wrong), I’ll probably just move to my own server unless I find a provider who offers a great email-and-only-email service. It won’t be GMail.

In the meantime, whether this acquisition succeeds or not, thank you for updating your address books.

1 Comment

Filed under Everything, Off-topic, Yahoo

+1 to the FTC

I noticed two patent-related news items tonight that could be of interest to those of us who have to deal with the “fun” of patents as they apply to IT. The first one is an FTC settlement that enforces a patent promise made in a standard body. It is not uncommon for participation in a standardization group to require some form of patent grant (royalty-free, RAND, etc). This is why employees in companies with large patent portfolios have to jump through endless hoops and go through legal reviews just to be authorized to join a working group at OASIS (one of the organizations with the clearest patent policy, patiently crafted through a lot of debate). Something similar seems to have happened at IEEE during the work on the Ethernet standard: National Semiconductor promised a flat $1,000 license for two of their patents (pending at the time) that are essential to the implementation of the standard. And we all know that that little standard happened to become quite successful (to IBM’s despair). Years later, a patent troll that had gotten hold of the patents tried to walk away from the promise. In short, the FTC stopped them. If this is of interest to you, go read Andy Updegrove’s much more detailed analysis (including his view that this is important not just for standards but also for open source).

At my level of understanding of intellectual property law as it applies to the IT industry (I am not a lawyer, but I have spent a fair amount of time discussing the topic with lawyers), this sounds like a good decision. But it is a tiny light in an ocean of darkness that creates so many opportunities for abuse. And the resulting fear prevents a lot of good work from happening. The second patent-related news item of the day (a patent reform bill driven by “major U.S. high-tech companies”) might do something to address the larger problem. Reducing damages, strengthening the post-grant review process and ending the “forum shopping” that sends most of these suits to Texas sound like positive steps. All in all, I am more sympathetic to “major U.S. high-tech companies” (which include my current and former employers) than to patent trolls. At the same time, I have no illusion that “major U.S. high-tech companies” are out to watch for the best interest of entrepreneurs and customers.

Comments Off on +1 to the FTC

Filed under Business, Everything, Patents

HP’s GIFt to the SOA world

I just noticed a press release from HP announcing the release of GIF (the Governance Interoperability Framework). In short, GIF is a specification that describes how to use the HP SOA Registry for governance tasks that go beyond what UDDI can do. It has been around for a long time inside Systinet, then Mercury, then HP, and some partners had been somewhat enrolled in the program (whatever that meant), but it wasn’t clear what HP was really planning to do with it. Looks like they have decided to put some muscle into it by attracting more partners and releasing it. Or at least announcing that they would release it. I can’t find it on the HP site so I can’t see if and how the specification has changed since I was at HP. It will be interesting to see if they present it as a neutral specification of which the HP SOA Registry is one implementation or as something that requires that registry.

I also looked for it on Wikipedia since the press release declares that it will be made available there but to no avail. That part puzzles me a bit since this would be pretty atypical for Wikipedia. At most there could be an article about GIF that links to the specification on hp.com. And even then, you’d have to convince Wikipedia editors that the topic is “worthy of notice”. Or maybe they meant to refer to an HP wiki and some confused editor turned that into Wikipedia?

The press release has a few additional items (yet more fancy-sounding SOA offerings from HP Services and some new OEM-friendly packaging for the Registry) but they don’t seem too exciting to me. The GIF action is what could be interesting if things really get moving on this. In any case, congratulations to Luc and Dave.

[UPDATED 2008/2/4: turns out Luc isn’t at HP anymore, he’s joined Active Endpoints as Senior Director of Product Management. Double congrats then, one set for the past work on GIF and the other for the new job.]

[UPDATED 2008/2/5: there is now a Wikipedia page with a description of GIF. But still no sign of the specification itself on the HP GIF page.]

[UPDATED 2008/3/16: you can now download the spec (but you’ll need to register with HP first).]

1 Comment

Filed under Everything, Governance, HP, Specs

Freeform Dynamics on IT management

We can find on the Register site a Microsoft-sponsored report by Freeform Dynamics on the daily frustrations of IT management work. The results are introduced as “surprising and interesting” but I don’t see where the surprise comes from. The main take-away is that the tools and systems that support IT management are fragmented and that better integration is needed. This is very true but hardly qualifies as a surprise unless you’ve been living in a PowerPoint world (where the boxes are always nicely layered and connected – after all, there are even built-in functions in PowerPoint to polish this lie by ensuring that objects look well distributed and aligned). But the report is still an interesting short read.

1 Comment

Filed under Everything, IT Systems Mgmt, Mgmt integration

WSO2 Mashup Server

I see that WSO2 has just released version 1.0 of their Mashup Server. Congratulations to Jonathan and the rest of the team. I haven’t played with the earlier betas of the Mashup Server but I have read enough about it to be interested. Now that it’s been released, it might be a good time to invest a few hours to look into it (I just downloaded it and I have already filed a small documentation bug). I know (and like) many of the WSO2 guys (Jonathan, but also Sanjiva and Glen) from the early days of the W3C WSDL working group. Plus, you have to give credit to a company that offers visibility on its web site not just to its board and management team but also to its engineers.

But the Mashup Server is not interesting to me just because I know some of its authors. There are two more important reasons. One is that it is the integration product in WSO2’s portfolio that is the most different in its approach from the many integration products in Oracle Fusion Middleware. We want Oracle Enterprise Manager to do an outstanding job at managing Oracle Fusion Middleware, but we also want it to manage other integration approaches as well (we manage Tomcat for example). At this point there is of course no market demand for managing WSO2’s Mashup Server, but from an architectural perspective it’s a good alternative to keep in mind along with the BPEL, ESB, ODI, etc. that are already in heavy use. I am always interested in perspectives that help make sure that the most abstract application/service management concepts remain suitably abstracted, so learning a bit about the Mashup Server can’t hurt. I’ll know more once I’ve looked at it, but my impression is that the Mashup Server is somewhere between BPEL and Ruby on Rails (or TurboGears) in terms of declarativity and introspectability (yes I like to make up words) for management purposes.

This may well be a sweet spot and it’s my second reason for being interested in the Mashup Server. I am always interested in tools that help with quick prototyping and the best tool is different for each job. The Mashup Server is pretty unique and I can imagine it being a nice tool for some management integration prototypes once the participating services have been suitably XML-ized (something that Oracle Fusion Middleware makes easy).

Interestingly, the release of this JavaScript-based platform comes on the same day that Joe Gregorio declares JavaScript to be the new SmallTalk.

Comments Off on WSO2 Mashup Server

Filed under Everything, JavaScript, Mashup, Mgmt integration, Tech, WSO2

Lyon shares

The New York Times published an article describing a plan to partially replicate the city of Lyon in Dubai. I wasn’t born in Lyon but I grew up there. At the cost of another off-topic post, I will take this opportunity to tell my American friends, whose itineraries in France tend to take them from Paris straight to the French Riviera, that they are missing out on a great city located half-way between these two spots.

The Lyon apartment building I lived in stands on what used to be a trading post for Gauls and Romans. Napoleon Bonaparte presided over the ground-breaking ceremony for this building. A couple of windows in the apartment were later blocked with bricks because of a 19th century tax that was assessed based on the number and size of windows in your home (*). Through the remaining windows, the view from the apartment is over place Bellecour on which you can see a statue of king Louis XIV that was melted during the French revolution to make cannons and replaced during the Restauration period. There was also a guillotine in action there during the revolution. During WW2, the Gestapo took over the building (my elderly same-floor neighbor told me about being evicted by them – he came back after the war). And Antoine de Saint Exupery was born next door. That’s a lot of history for just one apartment building. Good luck replicating that in the desert.

Of course that’s not necessary and there is a lot you can be inspired by in Lyon without emulating its past (I don’t recommend cutting a few heads in public just to “capture the feel” of Lyon’s revolutionary history). The Times article lists a few challenges. The importance of pork and wine in the local cuisine is manageable. Once you accept that you’re not going to get a carbon copy, the challenge of Lyon-inspired cooking without these ingredients is one chefs could rise to (a generic prohibition on heavy sauces would be more problematic). The role of the rivers in the “feel” of the city seems more challenging to me. I lived in the peninsula formed by the meeting of the Rhone and Saone rivers. The rivers and the wide walking areas by their sides make for great (sometimes windy) walks during which you can see nice bridges and historic buildings (universities, a hospital, a courthouse and many Renaissance apartment buildings). And even if they manage to create an equivalent body of water in Dubai, the strong flow of the water coming down from the Alps is likely to be missing. There is a reason why the picture that illustrates the Times article shows a pedestrian bridge (looks like Passerelle Saint Vincent over the Saone river).

I am not sure what it really means to replicate an old city but there certainly is a lot to be learned about urban life from Lyon’s long evolution. I am sure the people of Lyon don’t mind the money but even more they probably love being told that they represent a model to emulate. And it must feel good to steal the limelight from Paris just once. I don’t have millions to invest in the city like Dubai does, but I too am happy to speak highly of Lyon and encourage people to visit. Feel free to contact me if you plan such a visit and would like recommendations.

(*) the number of doors was also part of the tax calculations. The goal was to achieve some degree of proportionality in taxation since rich people presumably had more doors and windows in their homes. It wasn’t a new idea, Julius Caesar imposed similar taxes (called ostiarium and columnarium) on the numbers of doors and columns respectively. Looks like he didn’t care for McMansions either. Maybe it’s time to resuscitate the columnarium in US suburbia.

Comments Off on Lyon shares

Filed under Everything, Off-topic

IT management in a world of utility IT

A cynic might call it “could computing” rather than “cloud computing”. What if you could get rid of your data center. What if you could pay only for what you use. What if you could ramp up your capacity on the fly. We’ve been hearing these promising pitches for a while now and recently the intensity has increased, fueled by some real advances.

As an IT management architect who is unfortunately unlikely to be in a position to retire anytime soon (donations accepted for the send-William-to-retirement-on-a-beach fund), I am forced to wonder what IT management would look like in a world in which utility computing is a common reality.

First, these utility computing providers themselves will need plenty of IT management, if not necessarily the exact same kind that is being sold to enterprises today. You still need provisioning (automated of course). You definitely need access measuring and billing. Disaster recovery. You still have to deal with change planning, asset management and maybe portfolio management. You need processes and tools to support them. Of course you still have to monitor, manage SLAs, and pinpoint problems and opportunities for improvement. Etc. Are all of these a source of competitive advantage? Google is well-known for writing its infrastructure software (and of course also its applications) in house but there is no reason it should be that way, especially as the industry matures. Even when your business is to run a data center, not all aspects of IT management provide competitive differentiation. It is also very unclear at this point what the mix will be of utility providers that offer raw infrastructure (like EC2/S3) versus applications (like CRM as a service), a difference that may change the scope of what they would consider their crown jewels.

An important variable in determining the market for IT management software directed at utility providers is the number of these providers. Will there be a handful or hundreds? Many people seem to assume a small number, but my intuition goes the other way. The two main reasons for being only a handful would be regulation and infrastructure limitations. But, unlike with today’s utilities, I don’t see either taking place for utility computing (unless you assume that the network infrastructure is going to get vertically integrated in the utility data center offering). The more independent utility computing providers there are, the more it makes sense for them to pool resources (either explicitly through projects like the Collaborative Software Initiative or implicitly by buying from the same set of vendors) which creates a market for IT management products for utility providers. And conversely, the more of a market offering there is for the software and hardware building blocks of a utility computing provider, the lower the economies of scale (e.g. in software development costs) that would tend to concentrate the industry.

Oracle for one is already selling to utility providers (SaaS-type more than EC2-type at this point) with solutions that address scalability, SLA and multi-tenancy. Those solutions go beyond the scope of this article (they include not just IT management software but also databases and applications) but Oracle Enterprise Manager for IT management is also part of the solution. According to this Aberdeen report the company is doing very well in that market.

The other side of the equation is the IT management software that is needed by the consumers of utility computing. Network management becomes even more important. Identity/security management. Desktop management of some sort (depending on whether and what kind of desktop virtualization you use). And, as Microsoft reminds us with S+S, you will most likely still be running some software on-premises that needs to be managed (Carr agrees). The new, interesting thing is going to be the IT infrastructure to manage your usage of utility computing services as well as their interactions with your in-house software. Which sounds eerily familiar.

In the early days of WSMF, one of the scenarios we were attempting to address (arguably ahead of the times) was service management across business partners (that is, the protocols and models were supposed to allow companies to expose some amount of manageability along with the operational services, so that service consumers would be able to optimize their IT management decisions by taking into account management aspects of the consumed services). You can see this in the fact that the WSMF-WSM specification (that I co-authored and edited many years ago at HP) contains a model of a “conversation” that represents a “set of related messages exchanged with other Web services” (a decentralized view of a BPEL instance, one that represents just one service’s view of its participation in the instance).

Well, replace “business partner” with “SaaS provider” and you’re in a very similar situation. If my business application calls a mix of internal services, SaaS-type services and possibly some business partner services, managing SLAs and doing impact/root cause analysis works a lot better if you get some management information from these other services. Whether it is offered by the service owner directly, by a proxy/adapter that you put on your end or by a neutral third party in charge of measuring/enforcing SLAs. There are aspects of this that are “regular” SOA management challenges (i.e. that apply whenever you compose services, whether you host them yourself or not) and there are aspects (security, billing, SLA, compliance, selection of partners, negotiation) that are handled differently in the situation where the service is consumed from a third party. But by and large, it remains a problem of management integration in a world of composed, orchestrated and/or distributed applications. Which is where it connects with my day job at Oracle.
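
To make the “management information from these other services” point a bit more concrete, here is a minimal sketch of the consumer-side aggregation involved: a composite application pulls availability and latency figures from an internal service, a SaaS provider and a partner, and flags the dependency most likely to put the composite SLA at risk. The endpoints, the JSON format and the 99.9% target are all invented for illustration; real providers expose (or don’t expose) this kind of information in many different ways.

    # Minimal sketch of SLA impact analysis across composed services.
    # The dependency list, the metrics format and the availability target
    # are invented examples, not any provider's actual interface.
    import json
    import urllib.request

    DEPENDENCIES = {
        "order-db (internal)":    "https://mgmt.example.internal/metrics/order-db",
        "crm (SaaS)":             "https://status.saas.example.com/api/crm",
        "credit-check (partner)": "https://partner.example.net/sla/credit",
    }
    TARGET_AVAILABILITY = 99.9  # composite SLA target, in percent

    def fetch_metrics(url):
        # Assume each feed returns JSON like {"availability": 99.95, "p95_ms": 320}
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)

    def composite_report():
        worst = None
        for name, url in DEPENDENCIES.items():
            try:
                m = fetch_metrics(url)
            except Exception as exc:
                print(f"{name}: no management data available ({exc})")
                continue
            print(f"{name}: {m['availability']}% available, p95 {m['p95_ms']} ms")
            if worst is None or m["availability"] < worst[1]:
                worst = (name, m["availability"])
        if worst and worst[1] < TARGET_AVAILABILITY:
            print(f"Likely SLA risk: {worst[0]} at {worst[1]}%")

    if __name__ == "__main__":
        composite_report()

The “no management data available” branch is the interesting one: it is exactly the opacity discussed above, and the difference between a provider you can fold into impact analysis and one you can only trust blindly.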

Depending on the usage type and the level of industry standardization, switching from one utility computing provider to the other may be relatively painless and easy (modify some registry entries or some policy or even let it happen automatically based on automated policies triggered by a price change for example) or a major task (transferring huge amounts of data, translating virtual machines from one VM format to another, performing in-depth security analysis…). Market realities will impact the IT tools that get developed and the available IT tools will in return shape the market.
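
The “automated policies triggered by a price change” case is easy to picture. Here is a toy sketch in which the provider catalog, the prices and the policy constraints are all made up; the hard part in practice hides in constraints like these (data location, VM format), which is where the real switching costs live.

    # Toy policy evaluator for the "switch provider on a price change" case.
    # Provider catalog, prices and policy constraints are invented examples.
    CATALOG = {
        "provider-a": {"price_per_cpu_hour": 0.10, "region": "us", "vm_format": "xen"},
        "provider-b": {"price_per_cpu_hour": 0.08, "region": "us", "vm_format": "vmware"},
        "provider-c": {"price_per_cpu_hour": 0.07, "region": "eu", "vm_format": "xen"},
    }

    POLICY = {
        "required_region": "us",      # e.g. a compliance constraint
        "required_vm_format": "xen",  # migrating to another format judged too costly
        "max_price": 0.12,
    }

    def pick_provider(catalog, policy):
        candidates = [
            (offer["price_per_cpu_hour"], name)
            for name, offer in catalog.items()
            if offer["region"] == policy["required_region"]
            and offer["vm_format"] == policy["required_vm_format"]
            and offer["price_per_cpu_hour"] <= policy["max_price"]
        ]
        return min(candidates)[1] if candidates else None

    if __name__ == "__main__":
        # provider-c is cheapest but in the wrong region, provider-b uses the
        # wrong VM format; provider-a is the cheapest offer that fits the policy.
        print(pick_provider(CATALOG, POLICY))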

Another intriguing opportunity, if you assume a mix of on-premises computing and utility-based computing, is that of selling back your spare capacity on the grid. That too would require plenty of supporting IT management software for provisioning, securing, monitoring and policing (coming soon to an SEC filing: “our business was hurt by weak sales of our flagship Pepsi cola drink, partially offset by revenue from renting computing power from our data center to the Coca cola company to handle their exploding ERP application volume”). I believe my neighbors with solar panels on their roofs are able to run their electric meter backward and sell power to PG&E when they generate more than they use. But I’ll stop here with the electric grid analogy because it is already overused. I haven’t read Carr’s book so the comment may be unfair, but based on extracts he posted and reviews he seems to have a hard time letting go of that analogy. It does a good job of making the initial point but gets tiresome after a while. Having personally experienced the Silicon Valley summer rolling black-outs, I very much hope the economics of utility computing won’t be as warped. For example, I hope that the telcos will only act as technical, not commercial intermediaries. One of the many problems in California is that consumers don’t buy from the producers but from a distributor (PG&E in the Bay Area) who sells at a fixed price and then has to buy at pretty much any price from the producers and brokers who made a killing manipulating the supply during these summers. Utility computing is another area in which economics and technology are intrinsically and dynamically linked in a way that makes predictions very difficult.

For those not yet bored of this topic (or in search of a more insightful analysis), Redmonk’s Coté has taken a crack at that same question, but unlike me he stays clear of any amateurish attempt at an economic analysis. You may also want to read Ian Foster’s analysis (interweaving pieces of technology, standards, economy, marketing, computer history and even some movie trivia) on how these “clouds” line up with the “grids” that he and others have been working on for a while now. Some will see his post as a welcome reminder that the only thing really new in “cloud” computing is the name and others will say that the other new thing is that it is actually happening in a way that matters to more than a few academics and that Ian is just trying to hitch his jalopy to the express train that’s passing him. For once I am in the “less cynical” camp on this and I think a lot of the “traditional” Grid work is still very relevant. Did I hear “EC2 components for SmartFrog”?

[UPDATED 2008/6/30: For a comparison of “cloud” and “grid”, see here.]

[UPDATED 2008/9/22: More on the Cloud vs. Grid debate: a paper critical of Grid (in the OGF sense of the term) efforts and Ian Foster’s reply (read the comments too).]

11 Comments

Filed under Business, Everything, IT Systems Mgmt, Utility computing, Virtualization

Spring flowers

Via Greg, some interesting adoption data on Spring vs. EJB. Of course Rod Johnson (Springsource CEO and Spring inventor) is anything but unbiased on this. I haven’t seen any corroboration of his data but it is consistent with the zeitgeist. Greg’s take on what it means for standards is interesting too. I think what he says is especially true for standards that target portability (like J2EE and SCA) versus those that target interoperability. Standardization (including de-facto) is a must for a protocol but a “nice to have” for a development framework. But then again, now that even IT management has BarCamps, maybe even boring IT management interoperability protocols could emerge from the bottom up.

1 Comment

Filed under Everything, Standards

TAG you’re it for Ashok

Congratulations to Oracle’s Ashok Malhotra for his election to serve on the W3C TAG. Coincidentally, I met Ashok face to face for the first time today. This is good news for the Semantic Web, good news for W3C, good news for Oracle and, I hope, good news for Ashok too. If the name sounds familiar it may be because of his key role in the XML schema datatypes specification, which is arguably the most useful thing that came out of XSD.

Comments Off on TAG you’re it for Ashok

Filed under Everything, W3C

Microsoft’s Bob Muglia opens the virtualized kimono

In a recently published “executive e-mail”, Microsoft’s Bob Muglia describes the company’s view of virtualization. You won’t be surprised to learn that he thinks it’s a big deal. Being an IT management geek, I fast-forwarded to the part about management and of course I fully agree with him on the “the importance of integrated management”. But his definition of “integrated” is slightly different from mine as becomes clear when he further qualifies it as the use of “a single set of management tools”. Sure, that makes for easier integration, but I am still of the school of thought (despite the current sorry state of management integration) that we can and must find ways to integrate heterogeneous management tools.

“Although virtualization has been around for more than four decades, the software industry is just beginning to understand the full implications of this important technology” says Bob Muglia. I am tempted to slightly re-write the second part of the sentence as “the software marketing industry is just beginning to understand the full potential of this important buzzword”. To illustrate this, look no further than that same executive e-mail, in which we learn that Terminal Server actually provides “presentation virtualization”. Soon we’ll hear that the Windows TCP/IP stack provides “geographic virtualization” and that solitaire.exe provides “card deck virtualization”.

Then there is SoftGrid (or rather, “Microsoft SoftGrid Application Virtualization”). I like the technology behind SoftGrid but when Microsoft announced this acquisition my initial thought was that coming from the company that owns the OS and the development/deployment environment on top of it, this acquisition was quite an admission of failure. And I am still very puzzled by the relevance of the SoftGrid approach in the current environment. Here is my proposed motto for SoftGrid: “can’t AJAX please go away”. Yes, I know, CAD, Photoshop, blah, blah, but what proportion of the users of these applications want desktop virtualization? And of those, what proportion can’t be satisfied with “regular” desktop virtualization (like Virtual PC, especially when reinforced with the graphical rendering capabilities from Calista which Microsoft just acquired)?

In an inspirational statement, Bob Muglia asks us to “imagine, for example, if your employees could access their personalized desktop, with all of their settings and preferences intact, on any machine, from any location”. Yes, imagine that. We’d call it the Web.

In tangentially related news, David Chappell recently released a Microsoft-sponsored white paper that describes what Microsoft calls “Software + Service”. As usual, David does a good job of explaining what Microsoft means, using clearly-defined terms (e.g. “on-premises” is used as an organizational, not geographical concept) and by making the obvious connections with existing practices such as invoking partner/supplier services and SOA. There isn’t a ton of meat behind the concept of S+S once you’ve gotten the point that even in a “cloud computing” world there is still some software that you’ll run in your organization. But since, like Microsoft, my employer (Oracle) also makes most of its money from licenses today, I can’t disagree with bringing that up…

And like Microsoft, Oracle is also very aware of the move towards SaaS and engaged in it. In that respect, figure 11 of the white paper is where a pro-Microsoft bias appears (even though I understand that the names in the figure are simply supposed to be “representative examples”). Going by it, there are the SaaS companies (that would be the cool cats of Amazon, Salesforce.com and Google plus of course Microsoft) and there are the on-premises companies (where Microsoft is joined by Oracle, SAP and IBM). Which tends to ignore the fact that Oracle is arguably more advanced than Microsoft both in terms of delivering infrastructure to SaaS providers and being a SaaS provider itself. And SAP and IBM would also probably want to have a word with you on this. But then again, they can sponsor their own white paper.

Comments Off on Microsoft’s Bob Muglia opens the virtualized kimono

Filed under Everything, Mgmt integration, Microsoft, Virtualization

Book review: Xen Virtualization

Someone from Packt Publishing asked me if I was interested in reviewing the Xen Virtualization book by Prabhakar Chaganti that they recently published. I said yes and it was in my mailbox a few days later.

The sub-title is “a fast and practical guide to supporting multiple operating systems with the Xen hypervisor” and it turns out that the operating word is “fast”. It’s a short book (approx 130 pages, many filled with screen captures and console output listings). It is best used as an introduction to Xen for people who understand computer administration (especially Linux) but are new to virtualization.

The book contains a brief overview of virtualization, followed by a description of the most common tasks:

  • the Xen install process (from binary and source) on Fedora core 6
  • creating virtual machines (using NetBSD plus three different flavors of Linux)
  • basic management of Xen using the xm command line or the XenMan and virt-manager tools
  • setting up simple networking
  • setting up simple storage
  • encrypting partitions used by virtual machines
  • simple migration of virtual machines (stopped and live)

For all of these tasks, what we get is a step-by-step process that corresponds to the simple case and does not cover any troubleshooting. It is likely that anyone who embarks on the task described will need options that are not covered in the book. That’s why I write that it is an introduction that shows the kind of thing you need to do, rather than a reference that will give you the information you need in your deployment. You’ll probably need to read additional documentation, but the book will give you an idea of what stage of the process you are at and what comes next.

Even with this limited scope, it is pretty light on explanations. It’s mostly a set of commands followed by a display of the result. Since it’s closer to my background I’ll take the “managing Xen” chapter as an example. There is nothing more basic to management than understanding the state of a resource. The book shows how to retrieve it (“xm list”) and very briefly describes the different states (“running”, “blocked”, “paused”, “shutdown”, “crashed”) but you would expect a bit more precision and detail. For example, “blocked” is supposed to correspond to “waiting for an external event” but what does “external” mean? Sure the machine could be waiting on I/O, but it could also be on a timer (e.g. “sleep(1000)”) or simply have run out of things to do. I don’t think of a cron job as an “external event”. Also, when running “xm list” you should expect to always see dom0 in the “running” state (since dom0 is busy running your xm command) and on a one-core single-CPU machine (as is the case in the book) that means that none of the other domains can be in that state. That’s the kind of clarification (obvious in retrospect) that goes one step beyond the basic command description and saves some head scratching but the book doesn’t really go there.

As another example, we are told in the “encryption” section that LUKS helps prevent “low entropy-attacks” but if you’re the kind of person who already knows what that means you probably don’t have much to learn from the “encryption” chapter of the book. In case you care, it is a class of attacks that take advantage of poor sources of random numbers and you can read all the details of how entropy is defined in this classic 1948 paper (it doesn’t have much to do with how the term is defined in physics).
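
As a side note for the curious: the State column of “xm list” output is a string of one-letter flags corresponding to those states (plus “dying”). A few lines of script turn it into something readable; this is just a sketch, assuming the usual six-column layout of “xm list” output, and the gloss for “blocked” follows the clarification above rather than the book’s wording.

    # Decode the State column of "xm list" output.
    # Assumed column layout: Name  ID  Mem  VCPUs  State  Time(s).
    import subprocess

    FLAG_MEANING = {
        "r": "running",
        "b": "blocked (waiting on I/O, on a timer, or simply idle)",
        "p": "paused",
        "s": "shutdown",
        "c": "crashed",
        "d": "dying",
    }

    def decode_state(flags):
        return [FLAG_MEANING[f] for f in flags if f in FLAG_MEANING] or ["no flag set"]

    def list_domains():
        out = subprocess.run(["xm", "list"], capture_output=True, text=True, check=True)
        for line in out.stdout.splitlines()[1:]:   # skip the header row
            fields = line.split()
            if len(fields) < 6:
                continue
            name, state = fields[0], fields[4]
            print(f"{name}: {', '.join(decode_state(state))}")

    if __name__ == "__main__":
        list_domains()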

Among the many more advanced topics that are not covered I can think of: advanced networking, clustering, advanced storage, Windows guests (even though it’s not Xen’s strong point), migration between physical and virtual, relationship to other IT management tasks (e.g. server and OS management), performance aspects, partitioning I/O so domains play well together, security considerations (beyond simply encrypting the file system), new challenges introduced by virtualization…

Xen documentation on the web is pretty poor at this point and the book provides more than most simple “how-to” guides on installing/configuring Xen that you can Google for. And it brings a consistent sequence of such “how-to” guides together in one package. If that’s worth it to you then get the book. But don’t expect this to cover all your documentation needs for anything beyond the simplest (and luckiest) deployment. I would be pleased to see the book on the desk of an IT manager in a shop that is considering using virtualization; I would be scared to see it on the desk of an IT administrator in a shop that is actually using Xen.

[UPDATED on 2008/02/01: Dan Magenheimer, a Xen expert who works on the Oracle VM, highly recommends another Xen book that just came out: Professional Xen Virtualization by William von Hagen. I haven’t seen that book but I trust Dan on this topic.]

Comments Off on Book review: Xen Virtualization

Filed under Book review, Everything, Virtualization, Xen

DevCampTivoli

Our esteemed competitors at IBM Tivoli are organizing a BarCamp focused on the use of ITM for BSM. Should be very interesting if they manage to convince a good group that this is a valuable way to spend a weekend. BSM on Tivoli seems like an ambitious topic for a “getting your hands dirty” kind of session since by definition BSM involves managing complex systems and meeting the needs of the kind of people who don’t necessarily attend BarCamps. Very different from the more typical BarCamp environment in which people bring code (typically open source) they know inside and out and try to get these projects to do things that they themselves plan to make use of.

Just setting up a realistic (even if fake) environment to get your “hands dirty” on can take a lot of time. Long downloads and complex installation procedures aren’t your friends when you only have a few hours (and when participants don’t have to stay in the room if they’re bored). It will be an interesting challenge for the organizers to decide how much (if anything at all) to prepare ahead of time while keeping the whole thing open and participant-driven.

My guess is that even if they don’t get a lot in terms of BSM insight per se, they will learn a lot about the ease of installation, integration and extension of the various products involved and how to increase it, which will be beneficial all the same. Good luck to Doug, John and the other participants, I think you’ll do well and I hope you’ll achieve even more than I predict. Kudos for the initiative.

Of course the real challenge only starts after the BarCamp: it is to take the lessons back to the mothership…

And for those who say I only speak critically of IBM on this blog, this post is the proof that you are as prejudiced as a WebSphere architect. ;-)

[UPDATED on 2008/01/17: Make sure to read John’s clarifications in the comments section, including the link to BarCampESM which is happening this coming weekend in Austin. I hadn’t heard about it before.]

6 Comments

Filed under BarCamp, BSM, Everything, IBM