Introducing the Oracle Cloud API

Oracle recently published a Cloud management API on OTN and also submitted a subset of the API to the new DMTF Cloud Management working group. The OTN specification, titled “Oracle Cloud Resource Model API”, is available here. In typical DMTF fashion, the DMTF-submitted specification is not publicly available (if you have a DMTF account and are a member of the right group you can find it here). It is titled the “Oracle Cloud Elemental Resource Model” and is essentially the same as the OTN version, minus sections 9.2, 9.4, 9.6, 9.8, 9.9 and 9.10 (I’ll explain below why these sections have been removed from the DMTF submission). Here is also a slideset that was recently used to present the submitted specification at a DMTF meeting.

So why two documents? Because they serve different purposes. The Elemental Resource Model, submitted to DMTF, represents the technical foundation for the IaaS layer. It’s not all of IaaS, just its core. You can think of its scope as that of the base EC2 service (boot a VM from an image, attach a volume, connect to a network). It’s the part that appears in all the various IaaS APIs out there, and that looks very similar, in its model, across all of them. It’s the part that’s ripe for a simple standard, hopefully free of much of the drama of a more open-ended and speculative effort. A standard that can come out quickly and provide interoperability right out of the gate (for the simple use cases it supports), not after years of plugfests and profiles. This is the narrow scope I described in an earlier rant about Cloud standards:

I understand the pain of customers today who just want to have a bit more flexibility and portability within the limited scope of the VM/Volume/IP offering. If we really want to do a standard today, fine. Let’s do a very small and pragmatic standard that addresses this. Just a subset of the EC2 API. Don’t attempt to standardize the virtual disk format. Don’t worry about application-level features inside the VM. Don’t sweat the REST or SOA purity aspects of the interface too much either. Don’t stress about scalability of the management API and batching of actions. Just make it simple and provide a reference implementation. A few HTTP messages to provision, attach, update and delete VMs, volumes and IPs. That would be fine. Anything else (and more is indeed needed) would be vendor extensions for now.
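
To make that concrete, here is a rough sketch of what such a narrow standard could look like on the wire. The base URL, resource names and JSON fields below are made up for illustration; they are not the actual Oracle or DMTF definitions.

# Hypothetical narrow-scope IaaS API client: provision, attach, update, delete.
# Endpoint, paths and field names are invented for illustration only.
import json
import urllib.request

BASE = "https://cloud.example.com/api"  # hypothetical provider endpoint

def call(method, path, body=None):
    # One small helper covers all the provision/attach/update/delete messages.
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read() or b"{}")

# Boot a VM from an image.
vm = call("POST", "/vms", {"name": "web-1", "image": "/images/oel5", "shape": "small"})
# Create a volume and attach it to the VM.
vol = call("POST", "/volumes", {"size_gb": 20})
call("POST", "/vms/" + vm["id"] + "/attachments", {"volume": vol["id"]})
# Update the VM (change its shape), then tear everything down.
call("PUT", "/vms/" + vm["id"], {"shape": "medium"})
call("DELETE", "/vms/" + vm["id"])
call("DELETE", "/volumes/" + vol["id"])

That’s the entire lifecycle the paragraph above asks for; anything beyond it would be vendor extensions.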

Of course IaaS goes beyond the scope of the Elemental Resource Model. We’ll need load balancing. We’ll need tunneling to the private datacenter. We’ll need low-latency sub-networks. We’ll need the ability to map multi-tier applications to different security zones. Etc. Some Cloud platforms support some of these (e.g. Amazon has an answer to all but the last one), but there is a lot more divergence (both in the “what” and the “how”) between the various Cloud APIs on this. That part of IaaS is not ready for standardization.

Then there are the extensions that attempt to make the IaaS APIs more application-aware. These too exist in some Cloud APIs (e.g. vCloud vApp) but not others. They haven’t naturally converged between implementations. They haven’t seen nearly as much usage in the industry as the base IaaS features. It would be a mistake to overreach in the initial phase of IaaS standardization and try to tackle these questions. It would not just delay the availability of a standard for the base IaaS use cases, it would put its emergence and adoption in jeopardy.

This is why Oracle withheld these application-aware aspects from the DMTF submission, though we are sharing them in the specification published on OTN. We want to expose them and get feedback. We’re open to collaborating on them, maybe even in the scope of a standard group if that’s the best way to ensure an open IP framework for the work. But it shouldn’t make the upcoming DMTF IaaS specification more complex and speculative than it needs to be, so we are keeping them as separate extensions. Not to mention that DMTF as an organization has a lot more infrastructure expertise than middleware and application expertise.

Again, the “Elemental Resource Model” specification submitted to DMTF is the same as the “Oracle Cloud Resource Model API” on OTN except that it has a different license (a license grant to DMTF instead of the usual OTN license) and is missing some resources in the list of resource types (section 9).

Both specifications share the exact same protocol aspects. It’s pretty cleanly RESTful and uses a JSON serialization. The credit for the nice RESTful protocol goes to the folks who created the original Sun Cloud API as this is pretty much what the Oracle Cloud API adopted in its entirety. Tim Bray described the genesis and design philosophy of the Sun Cloud API last year. He also described his role and explained that “most of the heavy lifting was done by Craig McClanahan with guidance from Lew Tucker”. It’s a shame that the Oracle specification fails to credit the Sun team and I kick myself for not noticing this in my reviews. This heritage was noted from the get-go in the slides and is, in my mind, a selling point for the specification. When I reviewed the main Cloud APIs available last summer (the first part in a “REST in practice for IT and Cloud management” series), I liked Sun’s protocol design the best.

The resource model, while still based on the Sun Cloud API, has seen many more changes. That’s where our tireless editor, Jack Yu, with help from Mark Carlson, has spent most of the countless hours he devoted to the specification. I won’t do a point-to-point comparison of the Sun model and the Oracle model, but in general most of the changes and additions are motivated by use cases that are more heavily tilted towards private clouds and compatibility with existing application infrastructure. For example, the semantics of a Zone have been relaxed to allow a private Cloud administrator to choose how to partition the Cloud (by location is an obvious option, but it could also be by security zone or by organizational ownership, as heretical as this may sound to Cloud purists).

The most important differences between the DMTF and OTN versions relate to the support for assemblies, which are groups of VMs that jointly participate in the delivery of a composite application. This goes hand-in-hand with the recently-released Oracle Virtual Assembly Builder, a framework for creating, packaging, deploying and configuring multi-tier applications. To support this approach, the Cloud Resource Model (but not the Elemental Model, as explained above) adds resource types such as AssemblyTemplate, AssemblyInstance and ScalabilityGroup.
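
To give a rough feel for what this adds, here is a simplified illustration of an assembly description. The field names are invented for readability; the normative AssemblyTemplate and ScalabilityGroup definitions are in section 9 of the OTN document.

# Simplified illustration of an assembly description. Field names are invented;
# the normative resource definitions live in section 9 of the OTN specification.
assembly_template = {
    "uri": "/assemblytemplates/petstore",
    "name": "petstore",
    "scalability_groups": [
        {"name": "web-tier", "vm_template": "/vmtemplates/ohs", "min": 2, "max": 8},
        {"name": "app-tier", "vm_template": "/vmtemplates/wls", "min": 2, "max": 4},
        {"name": "db-tier",  "vm_template": "/vmtemplates/db",  "min": 1, "max": 1},
    ],
}
# Deploying the template yields an AssemblyInstance whose ScalabilityGroups can be
# grown or shrunk as a unit, rather than one VM at a time.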

So what now? The DMTF working group has received a large number of IaaS APIs as submissions (though not the one that matters most or the one that may well soon matter a lot too). If all goes well it will succeed in delivering a simple and useful standard for the base IaaS use cases, and we’ll be down to a somewhat manageable triplet (EC2, RackSpace/OpenStack and DMTF) of IaaS specifications. If not (either because the DMTF group tries to bite off too much or because it succumbs to infighting) then DMTF will be out of the game entirely and it will be between EC2, OpenStack and a bunch of private specifications. It will be the reign of toolkits/libraries/brokers and hell on earth for all those who think that such a bridging approach is as good as a standard. And for this reason it will have to coalesce at some point.

As far as the more application-centric approach to hypervisor-based Cloud, well, the interesting things are really just starting. Let’s experiment. And let’s talk.

CMDB in the Cloud: not your father’s CMDB

Bernd Harzog recently wrote a blog entry to examine whether “the CMDB [is] irrelevant in a Virtual and Cloud based world”. If I can paraphrase, his conclusion is that there will be something that looks like a CMDB but the current CMDB products are ill-equipped to fulfill that function. Here are the main reasons he gives for this prognosis:

  1. A whole new class of data gets created by the virtualization platform – specifically how the virtualization platform itself is configured in support of the guests and the applications that run on the guest.
  2. A whole new set of relationships between the elements in this data get created – specifically new relationships between hosts, hypervisors, guests, virtual networks and virtual storage get created that existing CMDB’s were not built to handle.
  3. New information gets created at a very rapid rate. Hundreds of new guests can get provisioned in time periods much too short to allow for the traditional Extract, Transform and Load processes that feed CMDB’s to be able to keep up.
  4. The environment can change at a rate that existing CMDB’s cannot keep up with. Something as simple as vMotion events can create thousands of configuration changes in a few minutes, something that the entire CMDB architecture is simply not designed to keep up with.
  5. Having portions of IT assets running in a public cloud introduces significant data collection challenges. Leading edge APM vendors like New Relic and AppDynamics have produced APM products that allow these products to collect the data that they need in a cloud friendly way. However, we are still a long way away from having a generic ability to collect the configuration data underlying a cloud based IT infrastructure – notwithstanding the fact that many current cloud vendors would not make this data available to their customers in the first place.
  6. The scope of the CMDB needs to expand beyond just asset and configuration data and incorporate Infrastructure Performance, Applications Performance and Service assurance information in order to be relevant in the virtualization and cloud based worlds.

I wanted to expand on some of these points.

New model elements for Cloud (bullets #1 and #2)

These first bullets are not the killers. Sure, the current CMDBs were designed before the rise of virtualized environments, but they are usually built on a solid modeling foundation that can easily be extended with new resource classes. I don’t think that extending the model to describe VMs, VNets, Volumes, hypervisors and their relationships to the physical infrastructure is the real challenge.
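
As a toy illustration of what that extension amounts to (class and relationship names are invented for the example; real CMDBs use their own metamodels):

# Toy illustration of extending a CMDB model with virtualization classes and
# relationships. Names are invented; real CMDBs use their own metamodels.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConfigurationItem:              # the generic CI every CMDB already has
    ci_id: str
    name: str

@dataclass
class Hypervisor(ConfigurationItem):
    host_ci: str = ""                 # "runs on" relationship to a physical server CI

@dataclass
class VirtualMachine(ConfigurationItem):
    hypervisor_ci: str = ""                               # "hosted by"
    vnet_cis: List[str] = field(default_factory=list)     # "connected to"
    volume_cis: List[str] = field(default_factory=list)   # "uses storage"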

New approach to “discovery” (bullets #3 and #4)

This, on the other hand is much more of a “dinosaurs meet meteorite” kind of historical event. A large part of the value provided by current CMDBs is their ability to automate resource discovery. This is often achieved via polling/scanning (at the hardware level) and heuristics/templates (directory names, port numbers, packet inspection, bird entrails…) for application discovery. It’s imprecise but often good enough in static environments (and when it fails, the CMDB complements the automatic discovery with a reconciliation process to let the admin clean things up). And it used to be all you could get anyway so there wasn’t much point complaining about the limitations. The crown jewel of many of today’s big CMDBs can often be traced back to smart start-ups specialized in application discovery/mapping, like Appilog (now HP, by way of Mercury) and nLayers (now EMC). And more recently the purchase of Tideway by BMC (ironically – but unsurprisingly – often cast in Cloud terms).

But this is not going to cut it in “the Cloud” (by which I really mean in a highly automated IT environment). As Bernd Harzog explains, the rate of change can completely overwhelm such discovery heuristics (plus, some of the network scans they sometimes use will get you in trouble in public clouds). And more importantly, there now is a better way. Why discover when you can ask? If resources are created via API calls, there are also API calls to find out which resources exist and how they are configured. This goes beyond the resources accessible via IaaS APIs, like what VMWare, EC2 and OVM let you retrieve. This “don’t guess, ask” approach to discovery needs to also apply at the application level. Rather than guessing what software is installed via packet inspection or filesystem spelunking, we need application-aware discovery that retrieves the application’s configuration and dependencies from the application itself (or its underlying framework). And builds a model in which the connections between application entities are expressed in terms of the configuration settings that drive them rather than the side effects by which they can be noticed.
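
Here is a minimal sketch of that “don’t guess, ask” loop, feeding a repository from the provider’s own inventory API. The endpoint and field names are hypothetical, not any particular vendor’s.

# "Don't guess, ask": populate the repository from the Cloud API's own inventory
# instead of scanning the network. Endpoint and field names are hypothetical.
import json
import urllib.request

API = "https://cloud.example.com/api"

def list_resources(kind):
    with urllib.request.urlopen(API + "/" + kind) as resp:
        return json.loads(resp.read())["items"]

inventory = {}
for kind in ("vms", "volumes", "vnets"):
    for res in list_resources(kind):
        # The API states exactly what exists and how it is configured, including
        # relationships (attachments, network memberships), with no heuristics.
        inventory[res["id"]] = res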

If I can borrow the words of Lew Cirne:

“All solutions built in the pre-cloud era are modeled on jvms (or their equivalent), hosts and ports, rather than the logical application running in a more fluid environment. If the solution identifies a web application by host/port or some other infrastructural id, then you cannot effectively manage it in a cloud environment, since the app will move and grow, and your management system (that is, everything offered by the Big 4, as well as all infrastructure management companies that pay lip service to the application) will provide nearly-useless visibility and extraordinarily high TCO.”

I don’t agree with everything in Lew Cirne’s post, but this diagnostic is correct and well worded. He later adds:

“So application management becomes the strategic center of gravity for the client of a public or private cloud, and infrastructure-centric tools (even ones that claim to be cloudy) take on a lesser role.”

Which is also very true even if counter-intuitive for those who think that

cloud = virtualization (in the “fake machine” interpretation of virtualization)

Embracing such a VM-centric view naturally elevates infrastructure management over application management, which is a fallacy in Cloud computing.

Drawing the line between Cloud infrastructure management and application management (bullet #5)

This is another key change that traditional CMDBs are going to have a hard time with. In a Big-4 CMDB, you’re after the mythical “single source of truth”. Even in a federated CMDB (which doesn’t really exist anyway), you’re trying to have a unified logical (if not physical) repository of information. There is an assumption that you want to manage everything from one place, so you can see all the inter-dependencies, across all layers of the stack (even if individual users may have a scope that is limited by permissions). Not so with public Clouds and even, I would argue, any private Cloud that is more than just a “cloud” sticker slapped on an old infrastructure. The fact that there is a clean line between the infrastructure model and the application model is not a limitation. It is empowering. Even if your Cloud provider were willing to expose a detailed view of the underlying infrastructure you should resist the temptation to accept. It might be handy in the short term and provide an interesting perspective, but it is self-defeating in the long term if you want to realize the productivity improvements promised by the Cloud. These improvements require that the infrastructure administrator be freed from application-specific issues and focus on meeting the contract of the platform. And that the application administrator be freed from infrastructure-level concerns (while at the same time being empowered to diagnose application-level concerns). This doesn’t mean that the application and infrastructure models should be disconnected. There is a contract and both models (infrastructure and consumption) should represent it in the same way. It draws a line, albeit one with some width.

Blurring the line between configuration and monitoring (bullet #6)

This is another shortcoming of current CMDBs, but one that I think is more easily addressed. The “contract” between the Cloud infrastructure and the consuming application materializes itself in a mix of configuration settings, administrative capabilities and monitoring data. This contract is not just represented by the configuration-centric Cloud API that immediately comes to mind. It also includes the management capabilities and monitoring points of the resulting instances/runtimes.

Wither CMDB?

Whether all these considerations mean that traditional CMDBs are doomed in the Cloud, as Bernd Harzog posits, I don’t know. In this post, BMC’s Kia Behnia acknowledges the importance of application management, though it’s not clear that he agrees with its primacy. I am also waiting to see whether the application management portfolio he has assembled can really map to the new methods of application discovery and management.

But these are resourceful organizations, with plenty of smart people (as I can testify: at the end of my HP tenure I worked with the very sharp CMDB team that came from the Mercury acquisition). And let’s keep in mind that customers also value the continuity of support of their environment. Most of them will be dealing with a mix of old-style and Cloud applications and they’ll be looking for a unified management approach. This helps CMDB incumbents. If you doubt the power of continuity, take a minute to realize that the entire value proposition of hypervisor-style virtualization is centered around it. It’s the value of backward-compatibility versus forward-compatibility. In addition, CMDBs are evolving into CMSs and are a lot more than configuration repositories. They are an important supporting tool for IT management processes. Whether, and how, these processes apply to “the Cloud” is a topic for another post. In the meantime, read what the IT Skeptic and Rodrigo Flores have to say.

I wouldn’t be so quick to count the Big-4 out, even though I work every day towards that goal, building Oracle’s application and middleware management capabilities in conjunction with my colleagues focused on infrastructure management.

If the topic of application-centric management in the age of Cloud is of interest to you (and it must be if you’ve read this long entry all the way to the end), you might also find this previous entry relevant: “Generalizing the Cloud vs. SOA Governance debate”.

Twitter changes the rules for URLs in tweets: the end of privacy or the end of the 140 character limit?

Twitter has decided that for our good and their own it would be better if any time you click a link in a tweet the request first went to Twitter before being redirected to the intended destination. This blog entry announces the decision, but a lot of the interesting details are hidden in the more technical description of the change sent to the Twitter developers mailing list.

Here is a quick analysis of the announcement and its ramifications.

The advertised benefits

For users:

  • Twitter scans the links for malware, offering a layer of protection
  • It becomes easy to shorten links directly from the Tweet box on twitter.com (or from any client that doesn’t have a built-in link shortening feature)

For Twitter:

  • they collect a lot of profiling data on user behavior, which can be used to improve the “Promoted Tweets” system (and possibly plenty of other uses)

You don’t have to be much of a cynic to notice that the user benefits are already available today for those who care about them (get a link scanner on your computer, get a Twitter client with built-in link shortening) while the benefit to Twitter is a brand new and major addition…

One interesting side-effect: the erosion of the 140 character limitation

Without going into the technical details of the new system, one change is that each URL will now “cost” 20 characters (out of the 140 allowed per tweet), no matter how long it really is. But in most cases the user will still see the complete URL in the tweet (clients may choose to display something else but I doubt they will, at least by default, except for SMS). So you could now see tweets like (line breaks added):

In the town where I was born / Lived a man who sailed to sea /
And he told us of his life / In the land of submarines, http://more.com/
So.we.sailed.on.to.the.sun/Till.we.found.the.sea.green/
And.we.lived.beneath.the.waves/In.our.yellow.submarine/
We.all.live.in.yellow.submarine/Yellow.submarine,yellow.submarine/
We.all.live.in.yellow.submarine/Yellow.submarine,yellow.submarine.

Based on the Twitter proposal, clicking on this link would send you to a Twitter-operated link shortener (e.g. http://t.co/J7erFi3) which would then redirect you to the full URL above. The site (e.g. more.com in this example) could be trivially set up so that such URLs are all valid and they return a clean version of the encoded text (for the benefit of users of Twitter clients that may not show the full URL).

This long URL example may seem a bit overkill (just post 3 tweets), but if you are only short by 20 or 30 characters and just can’t find another way to shorten the tweet the temptation will be big to take this easy escape.

A cool new URL shortening domain

You may have noticed the t.co domain in the previous paragraph. Yes, it’s a real one. That’s the hard-to-beat domain that Twitter was able to score for this. Cute. But frankly I am tired of the whole URL shortening deal and these short domain names have stopped amusing me. You?

Enforcement

How, you may wonder, can Twitter ensure that the clicks go through its gateway if the full URL is available as part of the Tweet? Simple: they change the terms of service to forbid doing otherwise. It’s interesting how the paragraph in the email to developers which announces that aspect starts by asking nicely “we really do hope that…”, “please send the user through the t.co link”, “please still send him or her through t.co” and ends with a more constraining “we will be updating the TOS to require you to check t.co and register the click”. Speak softly and carry a big stick.

It will obviously be easy to avoid this, and you won’t have to resort to copy/pasting URLs. Even if the client developers play ball, open source clients can be recompiled. Proxies can be put in front of clients to remember the mapping and do the substitution without ever hitting t.co. Plug-ins and Greasemonkey scripts can be developed. Etc. Twitter knows that for sure and probably doesn’t care. As long as by default most users go through t.co the company will get the metrics it needs. It’s actually to Twitter’s benefit to make it easy enough (but not too easy) to circumvent this, as it ensures that those who care will find a solution and therefore keep using the service without too much of a fuss. We can’t tell you to cheat but we’ll hint that we don’t mind if you do.

The privacy angle

This is a big deal, and disappointing to me. Obviously the hopes I had for Twitter to become the backbone of an open, user-controlled, social data bus are not shared by its management. Until now, Twitter was a good citizen in a world of privacy-violating social networks because the data it shared (your tweets and your basic profile data) had always and unambiguously been expected to be public. Not true with your click stream. An average user will have no idea that when he clicks on http://cnn.com/some.story the request first goes to Twitter. Twitter now has access to identified personal data (the click stream) that its users do not mean to share. I realize that this is old news in a world of syndicated web advertising and centralized analytics, but this is new for Twitter and now puts them in a position to mishandle this data, purposely or not, in the way they store it, use it and share it.

I wouldn’t be surprised if they end up forced to offer an option to bypass t.co, with additional metadata in the user profile to inform Twitter clients that it is OK for this user not to go through the gateway when they click on a link. We’ll see.

The impact on the Twitter application ecosystem

The assault on URL shorteners was expected since the Chirp conference (and even before). Most Twitter application developers had already swallowed the “if you just fill a hole in the platform we’ll eventually crush you” message. What’s new in this announcement is that it also shows that Twitter is willing to use its Terms of Service document as a competitive tool, just like Steve Jobs does with Flash. Not unexpected, but it wasn’t quite as clear before.

It’s not a pretty implementation

Here is what it looks like at the API level: the actual tweet text contains the t.co shortened URL. And for each such URL there is a piece of metadata that gives you the t.co URL, the corresponding full-length URL (which you should display instead of the t.co one) and the begin/end character position of the t.co URL in the original tweet so you can easily pull it out (not sure why you need the end position since it’s always beginning+20 but serialized Twitter messages are already happily duplicative).
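
Concretely, a client ends up doing something like this with the payload. The tweet structure below is simplified to just the fields discussed above.

# Replace each t.co URL in the tweet text with the full-length URL it stands for,
# using the per-URL metadata (simplified structure, only the fields discussed above).
tweet = {
    "text": "Interesting read http://t.co/J7erFi3",
    "entities": {
        "urls": [{
            "url": "http://t.co/J7erFi3",                                  # what is in the text
            "expanded_url": "http://example.com/some/very/long/story",     # what to display
            "indices": [17, 36],                                           # begin/end position
        }]
    },
}

def display_text(tweet):
    text = tweet["text"]
    # Process URLs right-to-left so earlier indices stay valid after each substitution.
    for url in sorted(tweet["entities"]["urls"], key=lambda u: u["indices"][0], reverse=True):
        start, end = url["indices"]
        text = text[:start] + url["expanded_url"] + text[end:]
    return text

print(display_text(tweet))  # Interesting read http://example.com/some/very/long/story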

As a modeling and serialization geek, I am not impressed by the technical approach taken here. But before I flame Twitter I should acknowledge the obvious: that the Twitter API has seen a rate of adoption several orders of magnitude larger than any protocol I had anything to do with. Still, it would take more than this detail of history to prevent me from pontificating.

For such a young company, the payload of a Twitter message is already quite a mess, mixing duplications, backward-compatible band-aids, real technical constraints and self-imposed constraints. Why, pray tell, do we even need to shorten URLs? If you’re an outsider forced to live within the constraints of the Twitter rules (chiefly, the 140 character limit), they make sense. But if you’re Twitter itself? With the amount of cruft and repetition in a serialized Twitter message, don’t tell me these characters actually matter on the wire. I know they do for SMS, but then just shorten the links in tweets sent over SMS. In the other cases, it reminds me of the frustrating experience of being told by the owner of a Mom-and-Pop store that they can’t accede to your demand because of “company policy”.

Isn’t it time for the text of tweets to contain real markup so that rather than staring at a URL we see highlighted words that point somewhere? Just like… any web page. Isn’t it the easiest for an application to process and doesn’t it offer the reader a more fluid text (Gillmor and Carr notwithstanding)? By now, isn’t this how people are used to consuming hypertext?

Couldn’t the backward compatibility issue of such an approach be solved simply by allowing client applications to specify in their Twitter registration settings that yes, they are able to handle such an earth-shattering concept as an <a href=""></a> element? This wouldn’t prevent a URL from “costing” you some fixed number of characters, nor would it prevent the use of a tracker/filter gateway if that’s your business decision.

We’ll see how users (and application developers) react to this change. As fans of Douglas Adams know, the risk of claiming that “all your click are belong to us” is that you expose yourself to hearing “so long, and thanks for all the whale” as an answer…

Dear Cloud API, your fault line is showing

Most APIs are like hospital gowns. They seem to provide good coverage, until you turn around.

I am talking about the dreadful state of fault reporting in remote APIs, from Twitter to Cloud interfaces. They are badly described in the interface documentation and the implementations often don’t even conform to what little is documented.

If, when reading a specification, you get the impression that the “normal” part of the specification is the result of hours of whiteboard debate but that the section that describes the faults is a stream-of-consciousness late-night dump that no-one reviewed, well… you’re most likely right. And this is not only the case for standard-by-committee kind of specifications. Even when the specification is written to match the behavior of an existing implementation, error handling is often incorrectly and incompletely described. In part because developers may not even know what their application returns in all error conditions.

After learning the lessons of SOAP-RPC, programmers are now more willing to acknowledge and understand the on-the-wire messages received and produced. But when it comes to faults, there is still a tendency to throw their hands in the air, write to the application log and then let the stack do whatever it does when an unhandled exception occurs, on-the-wire compliance be damned. If that means sending an HTML error message in response to a request for a JSON payload, so be it. After all, it’s just a fault.

But even if fault messages may only represent 0.001% of the messages your application sends, they still represent 85% of those that the client-side developers will look at.

Client developers can’t even reverse-engineer the fault behavior by hitting a reference implementation (whether official or de-facto) the way they do with regular messages. That’s because while you can generate response messages for any successful request, you don’t know what error conditions to simulate. You can’t tell your Cloud provider “please bring down your user account database for five minutes so I can see what faults you really send me when that happens”. Also, when testing against a live application you may get a different fault behavior depending on the time of day. A late-night coder (or a daytime coder in another time zone) might never see the various faults emitted when the application (like Twitter) is over capacity. And yet these will be quite common at peak time (when the coder is busy with his day job… or sleeping).

All these reasons make it even more important to carefully (and accurately) document fault behavior.

The move to REST makes matters even worse, in part because it removes SOAP faults. There’s nothing magical about SOAP faults, but at least they force you to think about providing an information payload inside your fault message. Many REST APIs replace that with HTTP error codes, often accompanied by a one-line description whose relationship with the semantics of the application is sometimes unclear. Either it’s a standard error code, which by definition is very generic, or it’s an application-defined code, at which point it most likely overlaps with one or more standard codes and you don’t know when you should expect one or the other. Either way, there is too much faith put in the HTTP code versus the payload of the error. Let’s be realistic. There are very few things most applications can do automatically in response to a fault. Mainly:

  • Ask the user to re-enter credentials (if it’s an authentication/permission issue)
  • Retry (immediately or after some time)
  • Report a problem and fail

So make sure that your HTTP errors support this simple decision tree. Beyond that point, listing a panoply of application-specific error codes looks like an attempt to look “RESTful” by overdoing it. In most cases, application-specific error codes are too detailed for most automated processing and not detailed enough to help the developer understand and correct the issue. I am not against using them but what matters most is the payload data that comes along.
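
In code, that decision tree is about as much as a client can realistically automate. The sketch below is illustrative; the status-code buckets and retry policy are not prescriptive.

# Sketch of the whole decision tree most clients can realistically act on
# (status-code buckets and retry policy are illustrative, not prescriptive).
import time

def handle_fault(status, attempt, max_retries=3):
    if status in (401, 403):
        return "reauthenticate"              # ask the user for credentials again
    if status in (429, 502, 503, 504) and attempt < max_retries:
        time.sleep(2 ** attempt)             # retry, with exponential backoff
        return "retry"
    return "fail"                            # report the problem and give up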

On that aspect, implementations generally fail in one of two extremes. Some of them tell you nothing. For example the payload is a string that just repeats what the documentation says about the error code. Others dump the kitchen sink on you and you get a full stack trace of where the error occurred in the server implementation. The former is justified as a security precaution. The latter as a way to help you debug. More likely, they both just reflect laziness.

In the ideal world, you’d get a detailed error payload telling you exactly which of the input parameters the application choked on and why. Not just vague words like “invalid”. Is parameter “foo” invalid for syntactical reasons? Is it invalid because it is inconsistent with another parameter value in the request? Is it invalid because it doesn’t match the state on the server side? Realistically, implementations often can’t spend too many CPU cycles analyzing errors and generating such detailed reports. That’s fine, but then they can include a link to a wiki or knowledge base where more details are available about the error, its common causes and the workarounds.
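
Something along these lines would do. This is an invented example, not any particular API’s format.

# An invented example of a fault payload that is actually helpful to the caller:
# it names the offending parameter, says why it was rejected, and points to more help.
fault = {
    "code": "InvalidParameter",
    "http_status": 400,
    "parameter": "volume_size",
    "reason": "value 5000 exceeds the maximum of 2048 GB for this account",
    "request_id": "a1b2c3d4",                                  # so support can correlate logs
    "more_info": "https://support.example.com/kb/InvalidParameter",
}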

Your API should document all messages accurately and comprehensively. Faults are messages too.

From VMWare + SalesForce.com (VMForce) to VMWare + Google: VMWare’s PaaS milestones

Three weeks ago, VMWare and Salesforce.com launched VMForce, a Salesforce-hosted PaaS solution based on VMWare runtime technology and force.com application services. In my analysis of the announcement, I wrote:

VMWare wants us to know they are under the covers because of course they have much larger aspirations than to be a provider to SalesForce. They want to use this as a proof point to sell their SpringSource+VMWare stack in other settings, such as private clouds and other public cloud providers (modulo whatever exclusivity period may be in their contract with SalesForce)

Well, looks like there wasn’t much of an exclusivity period after all as today VMWare is going on another date, with Google App Engine (GAE) this time. But a comparison of the two partnerships reveals that there is a lot more in the VMForce announcement than in today’s collaboration.

The difference in features

Before VMForce, we could not deploy Java applications on force.com. After VMForce we can (with the restriction that they have to be Spring-based). That’s a big difference. On the GAE side, the announcement today that we can now more easily run Spring Roo applications on GAE is nothing drastic. We could do it before, it was just harder. There was an ongoing effort to simplify it and there was no secret about this. Looks like it has delivered some improvements as part of the 1.1.0.M1 release. This only becomes major news via the magic of being announced at Google I/O rather than as an email on a developer mailing list.

Now, I don’t want to belittle the benefit of making things simpler. Simplicity is a feature. Especially for Roo where it is key to the value proposition. This looks very nice (and will probably push me over the edge to actually give Roo a spin now that I have an easy way to host whatever toy app I produce). It just doesn’t represent a major technology announcement.

The difference in collaboration

Salesforce is using VMWare infrastructure to run VMForce.com. They present a unified customer support service across the companies. They also presumably have some kind of revenue-sharing agreement. That makes for a close integration as far as business partnerships go. No such thing that I can see on the Google side. No VMWare hypervisor is getting inside the Google infrastructure. No share of revenues from Google App Engine for Business goes to VMWare (any SpringSource infrastructure used is brought by the customer, not provided by Google, unlike with VMForce). And while Rod Johnson asserts that “today’s announcement makes Spring the preferred programming model for Google App Engine” (a sentence repeated on the vCloud blog), I don’t see any corresponding declaration on the Google side that would echo the implicit deprioritization of Python and non-Spring Java support. Though I am sure they had nice things to say on stage at Google I/O. From Google’s perspective this sounds more like “we’re happy that they’ve made their tools work well with OUR infrastructure” than an architectural inflection. I think Google has too much pride in their infrastructure to see VMWare/SpringSource as an infrastructure provider rather than just a tool/library. But maybe I’m just jealous… And tools are important anyway.

What we’re not seeing

These VMWare+Salesforce and VMWare+Google announcements are also interesting for what is NOT there. Isn’t it interesting that the companies that VMWare enables with a PaaS platform are those which… already have a PaaS platform? What we haven’t seen is VMWare enabling a mid-tier telco to become a PaaS provider. Someone who has power, servers and wires and wants to become a Cloud provider. VMWare is just starting to sell them an IaaS platform (vCloud) and cannot provide them with a turnkey PaaS platform yet, for lack of application services (IDM, storage…) and of a comprehensive (i.e. not just virtualization) management platform. Make a list of what application services you think a PaaS platform requires and you probably have a good idea of the VMWare shopping list. But with Salesforce and Google this is not a problem, as they already have these services and all they need from VMWare is an application runtime (or just a runtime interface for Google) and some development tools.

The application services are also where the real PaaS portability issues appear. Which is also probably why Salesforce.com and Google are not too worried about being commoditized as interchangeable Spring runtime infrastructures. They know that the differentiation is with the application services. Their respective values in this domain are their business focus (processes, analytics, SaaS integration) for Salesforce and their scalability and low cost for Google (and more and more also the SaaS part). Not to mention Amazon, which is not resting on its IaaS laurels and also keeps innovating in Cloud application services (e.g. RRS just yesterday). On Amazon’s side, the value proposition seems to be centered on practicality and scale.

This is good news overall. As Steve Herrod writes, SpringSource has indeed been very busy and displays an impressive amount of energy in figuring this PaaS thing out. We’re still in the very early days of the battle of the Cloud Frameworks and at this point the armies are establishing beachheads and not yet running into one another. There is plenty of space for experimentation. 2010 is the year of PaaS.

Integration patterns for social data: the Open Social Data Bus

The previous entry, “Don’t tell Facebook what you like, tell Twitter”, used Twitter and Facebook as examples to illustrate a general point about the integration of social profile data. Unfortunately, the examples may have overshadowed the larger point. In the post, I didn’t consider Twitter as a social network but as a message conduit. Most people on the other hand think of Twitter as a social network (after all, which Twitterer is not watching his/her follower count?) and could come away with the impression that I was just saying that Twitter is a better social network than Facebook. It wasn’t my point.

The main point is about defining the right integration pattern for social data: is it a “message bus” pattern or a “shared database” pattern? For readers who haven’t had the joy of dealing with integration architecture and enterprise integration patterns, here is a one-paragraph primer:

The expense report application in a company needs to be in sync with the data in the HR system, so that an expense report can be sent to the right manager for review/approval. Implementing such application integration in an efficient, resilient and flexible way is hard. Battle-tested approaches (high-level “patterns”) have emerged that have been successful, in the right context. Architects have learned that 99% of the time they are better off asking themselves which of these enterprise integration patterns is right for their problem, rather than trying to invent a new approach. Two of the most common basic patterns are the “shared database” pattern and the “message bus” pattern. In the “shared database” pattern, all the applications read and write to the same repository. In the “message bus” pattern, applications post messages on a shared channel (the “bus”) and also listen on the channel for messages from other applications that they are interested in. It’s similar to a radio channel of the kind used by police and ham radio operators.

(diagrams by Hohpe/Woolf, under cc license)
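
Here is the smallest possible code rendering of the two patterns, an in-memory toy rather than anything resembling a real integration product:

# Toy rendering of the two integration patterns (in-memory only, for illustration).

# "Shared database" pattern: every application reads and writes the same repository.
shared_db = {}
shared_db["employee:42"] = {"manager": "employee:7"}   # HR system writes
manager = shared_db["employee:42"]["manager"]          # expense app reads

# "Message bus" pattern: applications publish to a channel and subscribe to what
# they care about; they never touch each other's data stores directly.
subscribers = []

def subscribe(callback):
    subscribers.append(callback)

def publish(message):
    for callback in subscribers:
        callback(message)

subscribe(lambda msg: print("expense app saw:", msg))
publish({"type": "manager_changed", "employee": "42", "manager": "7"})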

Facebook wants your social data to be shared across sites and applications using the “shared database” pattern, in which Facebook is the central database (and also the primary application). What I described in the previous post was the use of a “message bus” pattern (in which Twitter was used as the bus).

A bus has the following advantages when applied to the problem of sharing social data:

  • All applications have equal access
  • The applications are loosely-coupled, meaning that changing one doesn’t break the others
  • If applications only communicate via the bus, you get to observe the data shared about you
  • It can scale well

There are lots of interesting considerations about how to build and operate such a bus: security, scalability, access protocols, payload format, etc. But they are secondary to the choice of the integration pattern. For the sake of illustration, Twitter’s approach to security is OAuth, their scalable architecture is described here, the access protocols here and the payload format here. Reasonable alternatives exist for all these functions.

It’s hard for me to imagine the content of the messages on this bus not resembling RDF-like subject/verb/object triplets, in which the subject is implicit (the user attached to the message). The verbs could be simple strings or represented by URIs and have an associated taxonomy. And as in RDF, the objects should be either URIs or simple values (mostly strings, of a limited size, be it 140 characters or something else). Possible examples (the subject is implicit, the verb is in square brackets):

[say] I just had coffeecake for breakfast
[like] http://www.hobees.com/
[location] http://www.hobees.com/redwood.html
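
A consumer of such a bus would only need a few lines to turn these messages into triples. This is a sketch; the bracketed-verb syntax is just the informal notation used above, not an established convention.

# Turn "[verb] object" messages into (subject, verb, object) triples.
# The bracketed-verb syntax is just the informal notation used above.
import re

def to_triple(user, message):
    match = re.match(r"\[(\w+)\]\s+(.*)", message)
    if not match:
        return (user, "say", message)      # default verb when none is given
    return (user, match.group(1), match.group(2))

print(to_triple("wvambenepe", "[like] http://www.hobees.com/"))
# ('wvambenepe', 'like', 'http://www.hobees.com/')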

I still think Twitter is the most practical implementation of the Open Social Data Bus, for reasons I listed before:

  • It’s here today
  • It’s open and makes no pretense of (often violated) “privacy settings”
  • It can scale (give or take some growing pains and some still-drastic quota restrictions)
  • It has a delegated authorization model (though not quite as fine-grained as I’d like)
  • It already has a large ecosystem of provider/consumer applications
  • Humans look at the messages, ensuring that any integration of personal data will remain at a human scale and therefore controllable
  • It has proven to be a very successful environment for semantic tags to emerge spontaneously
  • It is persisted by many actors, including Google, Bing and the Library of Congress
  • Did I mention that it’s here today?

I remember discussions, in the early-to-mid-nineties, about whether the Internet, this quirky but fast-growing network, would turn into the expected global “information superhighway” or whether a superior one would have to emerge. This might seem like a silly discussion today but it wasn’t so obvious at the time. Wondering whether Twitter will turn out to be the Open Social Data Bus will seem just as silly in 15 years, though I don’t know if it will be deemed silly because the answer was obviously “no” or obviously “yes”…

The tension between Twitter as an infrastructure provider and Twitter as a competitor in the Twitter app marketplace is well-known. The company understands that what makes them different from other social networks is the ecosystem of applications that was enabled by this “message bus” pattern. Which is why, even as they announced that they were going to create their own applications to tap into the stream, they took pains to explain that they would be calling the same interfaces as everybody else.

On the other hand, Twitter obviously also needs to worry about making money.  If their service becomes a low-level service, invisible to users (almost like DNS), then who is going to pay for the operations? Especially since the expectations on Twitter are currently so high that a “normal” rate of profit on operating such an infrastructure would be a huge letdown for investors. But this is not a post about the business prospects and strategic challenges of Twitter. It’s about allowing integration of social profile data in a way that benefits users.

I’d be fine with some other Open Social Data Bus implementation taking over and serving this need, as long as it fulfills the key requirements of being equally open to all applications and allowing individuals to control what gets posted about them. There are other avenues if Twitter cannot (or doesn’t want to) play this role. As the DNS example shows, it doesn’t necessarily have to be operated by a single operator. And there are a variety of funding models for such essential infrastructure (see “who funds root name server operations?” in the DNS root name servers FAQ). Alternatively, applications might be charged based on how much data they get from the bus.

Corporate support can take different forms. From wireless frequencies to wi-fi networks to DNS to supporting Firefox, Google has shown a willingness to support the development and operation of the internet infrastructure, confident that they’ll be in the best position to benefit from it. Especially if the alternative is what Pete Cashmore describes as “Google’s nightmare”.

You could even think of this service eventually falling under the “common carrier” model, with the corresponding legal constraints. Especially in societies that are more privacy-aware.

I don’t know what the right business/operating model is for the Open Social Data Bus. What I know is that it’s how I want my social profile data to flow between applications.

[UPDATED 2010/5/20: Some supporting evidence for my recollection of “discussions, in the early-to-mid-nineties, about whether the Internet, this quirky but fast-growing network, would turn into the expected global ‘information superhighway’ or whether a superior one would have to emerge”:

Gates’s 286-page book [The Road Ahead, 1995] mentions the World Wide Web on only four of its pages, and portrays the Internet as a subset of a much larger “Information Superhighway.” The Internet, wrote Gates, is one of “the important precursors of the information highway,” along with PCs, CD-ROMs, phone networks, and cable systems, but “none represents the actual information highway. … today’s Internet is not the information highway I imagine, although you can think of it as the beginning of the highway.”]

Don’t tell Facebook what you like, tell Twitter

There seems to be a lot to like technically about the announcements at Facebook’s f8 conference, especially for a Semantic Web aficionado. But I won’t have anything to do with it as a user. Along with the usual “your privacy is our toy” subtext, I really don’t like the lack of data portability. “Web 2.0” is starting to look a lot like “AOL 2.0”. Here is a better way to do it.

Taking the new “like” button as a simple example, I’d much rather tell Twitter what I like than Facebook. A simple #like hashtag in a tweet can be used to express positive feelings for what the tweet describes. Here is a quick list of the many advantages of this approach over the newly-introduced Facebook “like” feature.

It’s public

Your tweets are available to all. Your Facebook profile can still consume them, so if you think Facebook does the best job at organizing this information about you and your friends you can still go there to view the results. But other applications and networks can tap into the same data, so you can also benefit from innovation coming out of companies which do not want to be Facebook sharecroppers.

It’s publicly public

By which I mean that there is no pretense of privacy and no nasty surprise when trust is violated. Which is going to happen again and again. Especially when it’s not just a matter of displaying data but also of inferring new information based on the raw data collected. At which point it’s almost impossible to segregate access to the derived information based on the privacy settings of the individual data pieces. On Twitter, it’s all public, we all know it from the start, and as such we’re not fooled into sharing more than we should. See the fallacy of privacy settings.

It works on all things

Rather than only being on a web page, you can use a #like hashtag to describe any URI (dereferenceable or not) or even plain text. Just like RDF allows the value of an attribute to be either a URI or a scalar value (string, number…). For example, you can express that you like a quote or a verse of a poem by including them directly in the tweet. It’s not as identifiable as something that has a URI, but it can still be part of your profile. And smart consumers of this data might still be able to do some processing on it (e.g. recognizing it as a line from a song).

It can still be 1-click

You don’t necessarily have to copy/paste a URL (or text) into twitter. A web site can still do this for you, as long as it has your permission to post on your behalf. With that approach, it looks exactly like the Twitter “like” button to the user. You don’t have to be a Twitter user, just to have a Twitter account. No need for a Twitter client or to visit the Twitter web site if you don’t want to. It’s also OK if you have zero followers, Twitter is just a technical conduit in this approach.

It can evolve

The success of Twitter is also the success of self-organization as illustrated by the emergence of @replies, #hashtags and RT, directly from the users. Rather than having Facebook decide what verbs make sense to allow users to express their thoughts on the Web, let people decide and see what verbs emerge (e.g. to describe what you like, dislike, are curious about, are considering buying, etc). The only thing we need is an understanding that the hashtag qualifies the user’s attitude towards what’s described by the rest of the tweet. Or maybe hashtags should not be reused for this, maybe we need a new breed, “semtags” (semantic tags), with a different syntax, e.g. “^like”. This way you can semtag a hashtag, e.g. “^like #nyc” might replace “I ♥ NY” on twitter feeds (and tee shirts). It can be as simple or as complex as needed, based on what sticks in the real world. Nerds like me will try to qualify it (e.g. “^!like” for “I don’t like”) and might even come up with ontologies (^love subClassOf ^like). These experiments will probably fail and that’s fine. Evolution thrives on failures.
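
For what it’s worth, extracting such semtags would be trivial for a consuming application. The ^ syntax and the verbs are of course just the speculative example above, not a real convention.

# Extract speculative "semtags" like ^like, ^!like or ^love from a tweet.
# The ^ syntax is the thought experiment described above, not an existing convention.
import re

SEMTAG = re.compile(r"\^(!?)(\w+)")

def semtags(tweet):
    return [("not " if negated else "") + verb for negated, verb in SEMTAG.findall(tweet)]

print(semtags("^like #nyc"))          # ['like']
print(semtags("^!like mondays"))      # ['not like']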

It is transparent

Even if you let a site write these messages on your twitter feed, you can see exactly what goes on. There is no secret channel as with Facebook. The fact that it goes on your Twitter timeline acts as a validation, ensuring that only relevant, human-readable messages get added to your profile. Which is the only way in which we can maintain control of our profile information. If sites start to send too much information or opaque information you’ll see it. And so will your followers. This will put pressure on sites to make the posted data sparse and meaningful, because they know that their users won’t want to scare away their followers with social spam. See, for example, how the outcries over foursquare spam seem to have forced a clean-up (or at least so it looks to me, but maybe it’s just because I’ve unfollowed the spammers). Keeping social profiling on a human scale is a feature, not a bug.

It is persisted in many places

Who do you think is more likely to be around in 20 years, Facebook or the Library of Congress? Tweets are archived in many places, including Twitter itself, of course, but also Google, Bing and the Library of Congress. Plus, it’s very easy for you to set up a system to save all your tweets. Even if Twitter disappears, all the data in your profile that was built from your tweets will still be around. And if Google, Bing and the Library of Congress all go dark before Facebook, well that’s fine because the profile data from your tweets can be there too.

In effect, you should think of Facebook as a repository and Twitter as a stream. Don’t publish directly to one repository. Publish to a stream and benefit from all the repositories and other consumers that tap into it. It’s a well-known enterprise integration pattern (message bus), but it’s not just good for enterprise applications.

In fact, more than Twitter itself it’s this pattern that I want to encourage. Twitter is just the most obvious implementation, at this time, of a profile data bus. It already has almost everything we need (though a more fine-grained authorization model, or a delegated authorization model, would make me more likely to allow sites to tweet on my behalf). What matters is the switch from social networks owning data to you owning your data and social networks competing on how much value they can deliver to you based on the data. For example, LinkedIn might be the best for work connections, Facebook for personal connections, Google for brute search/retrieval of information, etc. I don’t want to maintain different profile data and privacy settings for each of them. I have one global privacy setting, which controls what I share with the world. Based on this, I want these sites to compete on the value they provide to me. It may not be what Facebook wants, but it’s what works best for us.

If you like this proposal, you know what you have to do. Go ahead and tweet:

^like https://stage.vambenepe.com/archives/1464

Or just retweet it.

[UPDATED 2010/5/6: See the next post for some clarifications.]

Exclusive! Mark Hurd pulls a Steve Jobs on Microsoft

When Mark Hurd read Steve Jobs’ rant against Flash (saying, in effect, “we have to tolerate Flash on our desktops/laptops for now but this piece of crap is not going to soil our iPhones, iPads and iPods”) he must have thought “hey, if I can pull one of these stunts maybe I too will have groupies screaming my name when I unveil our tablet”. Sources tell me he is planning to work on this over the weekend and publish it on hp.com on Monday. I was able to get hold of an early draft, which an HP staffer (or Mark himself?) left in a beer garden. Here is what Mark Hurd has to say about Windows for mobile devices:

Thoughts on Windows

HP has a long relationship with Microsoft… [Note from Mark: someone inserts some hypocritical blah blah about how we used to love each other – sure hasn’t been the case since I’ve been here]. Today the two companies still work together to serve their joint customers – HP customers buy a big chunk of Windows licenses – but beyond that there are few joint interests.

I wanted to jot down some of our thoughts on Microsoft’s Windows products so that customers and critics may better understand why we will not use Windows in our phones and tablets. Microsoft will characterize our decision as being primarily business driven – they say we want to taste decent margins for once – but in reality it is based on technology issues. Microsoft will claim that we are a closed system, and that Windows is open, but in fact the opposite is true. Let me explain.

First, there’s “Open”.

Microsoft Windows products are 100% proprietary. They are only available from Microsoft, and Microsoft has sole authority as to their future enhancement, pricing, etc. While Microsoft Windows products are widely available, this does not mean they are open, since they are controlled entirely by Microsoft and available only from Microsoft. By almost any definition, Windows is a closed system.

HP has many proprietary products too. Though the WebOS operating system we’ll use in our phones and tablets is proprietary, we strongly believe that all standards pertaining to the web should be open. Rather than use Windows, HP has adopted HTML5, CSS and JavaScript – all open standards. HP’s mobile devices will all ship with high performance, low power implementations of these open standards. HTML5, the new web standard that has been adopted by HP, Apple, Google and many others, lets web developers create advanced applications without relying on proprietary APIs (like Windows). HTML5 is completely open and controlled by a standards committee.

Second, there’s the “full application ecosystem”.

Microsoft has repeatedly said that HP mobile devices will not be able to access “the full application ecosystem” because 75% of applications are Windows applications. What they don’t say is that almost all these applications are also available in a more modern form, using Web standards and implemented as a service, and usable on HP’s upcoming phones and tablets. Microsoft Office, Outlook, financial software etc all have excellent Web-based alternatives. Users of HP’s phones and tablets won’t be missing many applications.

Third, there’s reliability, security and performance.

Windows has had an awful security record for twenty years. We also know first hand that Windows is the number one reason PCs crash. We have been working with Microsoft to fix these problems, but they have persisted for several years now. We don’t want to reduce the reliability and security of our phones and tablets by using Windows.

In addition, Windows has not performed well on mobile devices. We have routinely asked Microsoft to show us Windows performing well on a mobile device, any mobile device, for a few years now. We have never seen it. Microsoft publicly said that Windows would work well on a device starting with the first Windows CE in 1996. Then came Pocket PC 2000, then Pocket PC 2002, then Windows Mobile 2003, then Windows Mobile 5, 6, 6.1 and 6.5, none of which was any good. And now they say it will be with Windows Phone 7. We think it will eventually ship, but we’re glad we didn’t hold our breath. Who knows how it will perform?

Fourth, there’s battery life.

To achieve long battery life, mobile devices must use thin and efficient software and Windows is anything but that. It only runs on power-hungry Intel processors while the same features can be delivered by much smaller and more efficient processors when using WebOS. Not only does the battery last longer, the devices are lighter and don’t leave burn marks on your clothes.

Fifth, there’s Touch.

Windows was designed for PCs using mice, not for touch screens using fingers. For example, many Windows applications have such crappy UI that users depend on tooltips to figure out what a button does. They pop up when the mouse arrow hovers over a specific spot. WebOS’s revolutionary multi-touch interface doesn’t use a mouse, and there is no concept of a tooltip. Most Windows applications will need to be rewritten to support touch-based devices. If developers need to rewrite their Windows applications, why not use modern technologies like HTML5, CSS and JavaScript?

Even if HP phones and tablets used Windows, it would not solve the problem that most Windows applications need to be rewritten to support touch-based devices.

Sixth, the most important reason.

Besides the fact that Windows is closed and proprietary, has major technical drawbacks, and doesn’t support touch based devices, there is an even more important reason we will not use Windows on our phones and tablets. Windows is an abstraction layer that covers very different underlying hardware.

We know from painful experience that letting a third party layer of software come between the hardware and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform. If developers grow dependent on third party development libraries and tools, they can only take advantage of hardware enhancements if and when the third party chooses to adopt the new features. We cannot be at the mercy of a third party deciding if and when they will make our enhancements available to our developers.

This becomes even worse if the third party is supplying an operating system that runs on hardware from many vendors. The third party may not adopt enhancements from one platform unless they are available on all of their supported platforms. Hence developers only have access to the lowest common denominator set of features. Again, we cannot accept an outcome where developers are blocked from using our innovations and enhancements because they are not available on our competitor’s platforms.

Windows is a multi-hardware abstraction. It is not Microsoft’s goal to help developers write the best application for HP’s phones and tablets. It is their goal to help developers write applications that will run on Windows devices from all hardware manufacturers. [Note from Mark: should I describe how Microsoft has been getting in the way of how our PCs talk to our printers and making a mess of desktop printing for the last 20 years or is this off-topic?]

Our motivation is simple – we want to provide the most advanced and innovative platform to our developers, and we want them to stand directly on the shoulders of this platform and create the best apps the world has ever seen. We want to continually enhance the platform so developers can create even more amazing, powerful, fun and useful applications. Everyone wins – we sell more devices because we have the best apps, developers reach a wider and wider audience and customer base, and users are continually delighted by the best and broadest selection of apps on any platform.

Conclusions.

Windows was created during the PC era – for PCs and mice. Windows is a successful business for Microsoft, and we can understand why they want to push it beyond PCs. But the mobile era is about low power devices, touch interfaces and open web standards – all areas where Windows falls short.

The avalanche of Web-based applications accessible from Web-enabled mobile devices demonstrates that Windows is no longer necessary to access application functionalities of any kind.

New open standards created in the mobile era, such as HTML5, will win on mobile devices (and PCs too [Note from Mark: maybe I should remove that parenthesis or we’ll give Ballmer a heart attack]). Perhaps Microsoft should focus more on creating a great Web-centric platform for the future, and less on criticizing HP for leaving the past behind.

Mark Hurd
April, 2010

[UPDATED 2010/5/18: I hear echoes of “should I describe how Microsoft has been getting in the way of how our PCs talk to our printers and making a mess of desktop printing for the last 20 years or is this off-topic?” in the statements Mark Hurd made during his post-earnings analyst call today: “when you look across the HP ecosystem of interconnected devices, it is a large family of devices and we think of printers, you’ve now got a whole series of web connected printers and as they connect to the web, [they] need an OS.” Though I am really puzzled by the next line: “Hurd adds that HP prefers to own the OS to “control the customer experience” as it always has in printing.” HP doesn’t control the customer experience at all in printing, because of Windows. It’s only because we are so used to it that we don’t realize how awful the printing experience is, whether the printer is directly connected or on the network. Glad to see that they intend to apply the Palm acquisition to this problem too.]

[UPDATED 2010/5/26: According to some, this breakup letter from HP caused a breakup inside Microsoft: The reason Robbie Bach was fired]

1 Comment

Filed under Everything, HP, Microsoft, Mobile, Off-topic

Flavors of PaaS

How many flavors of PaaS do we need?

  • PaaS for business apps
  • PaaS for toy apps (simple form-based CRUD) and simple business front ends (e.g. restaurant web sites)
  • PaaS for games, mash-ups and social apps
  • PaaS for multimedia delivery and live streams
  • PaaS for high performance and scientific computing
  • PaaS for spamming, hacking and other illegal activities (Zombies as a Service)
  • Other?

BTW, doesn’t “flavors of PaaS” sound like the name of a perfume? At least when I say it with my French accent it does.

6 Comments

Filed under Application Mgmt, Cloud Computing, Everything, PaaS, Utility computing

PaaS portability challenges and the VMforce example

The VMforce announcement is a great step for SalesForce, in large part because it lets them address a recurring concern about the force.com PaaS offering: the lack of portability of Apex applications. Now they can be written using Java and Spring instead. A great illustration of how painful this issue was for SalesForce is to see the contortions that Peter Coffee goes through just to acknowledge it: “On the downside, a project might be delayed by debates—some in good faith, others driven by vendor FUD—over the perception of platform lock-in. Political barriers, far more than technical barriers, have likely delayed many organizations’ return on the advantages of the cloud”. The issue is not lock-in, it’s the potential delays that may come from the perception of lock-in. Poetic.

Similarly, portability between clouds is a big theme in Steve Herrod’s blog post covering VMforce, as illustrated by the figure in his post. The message is that “write once run anywhere” is coming to the Cloud.

Because this is such a big part of the VMforce value proposition, both from the SalesForce and the VMWare/SpringSource side (as well as for PaaS in general), it’s worth looking at the portability aspect in more detail. At least to the extent that we can do so based on this pre-announcement (VMforce is not open for developers yet). And while I am taking VMforce as an example, all the considerations below apply to any enterprise PaaS offering. VMforce just happens to be one of the brave pioneers, willing to take a first step into the jungle.

Beyond the use of Java as a programming language and Spring as a framework, the portability also comes from the supporting tools. This is something I did not cover in my initial analysis of VMforce but that Michael Cote covers well on his blog and Carl Brooks in his comment. Unlike the more general considerations in my previous post, these matters of tooling are hard to discuss until the tools are actually out. We can describe what they “could”, “should” and “would” do all day long, but in the end we need to look at the application in practice and see what, if anything, needs to change when I redirect my deployment target from one cloud to the other. As SalesForce’s Umit Yalcinalp commented, “the details are going to be forthcoming in the coming months and it is too early to speculate”.

So rather than speculating on what VMforce tooling will do, I’ll describe what portability questions any PaaS platform would have to address (or explicitly decline to address).

Code portability

That’s the easiest to address. Thanks to Java, the runtime portability problem for the core language is pretty much solved. Still, moving applications around requires changes to the way the application communicates with its infrastructure. Can your libraries and frameworks for data access and identity, for example, successfully encapsulate and hide the different kinds of data/identity stores behind them? Even when the stores are functionally equivalent (e.g. SQL, LDAP), they may have operational differences that matter for an enterprise application. Especially if the database is delivered (and paid for) as a service. I may well design my application differently depending on whether I am charged by the amount of data in the DB, by the number of requests to the DB, by the quantity of app-to-DB traffic or by the total processing time of my requests in the DB. Apparently force.com considers the number of “database objects” in its pricing plans and going over 200 pushes you from the “Enterprise” version to the more expensive “Unlimited” version. If I run against my local relational database I don’t think twice about having 201 “database objects”. But if I run in force.com and I otherwise can live within the limits of the “Enterprise” version I’d probably be tempted to slightly alter my data model to fit under 200 objects. The example is borderline silly, but the underlying truth is that not all differences in application infrastructure can be automatically encapsulated by libraries.
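
To make the encapsulation point concrete, here is a sketch (not actual Spring or force.com code) of the kind of data-access interface that hides which store sits behind it. The functional contract can be kept identical across implementations, but the operational and pricing differences still influence how you model your data:

    // The application programs against this interface, not against a specific store.
    public interface CustomerStore {
        void save(String customerId, String customerJson);
        String load(String customerId);
    }

    // Local relational database: nobody blinks at a schema with 201 tables.
    class JdbcCustomerStore implements CustomerStore {
        public void save(String customerId, String customerJson) { /* INSERT ... */ }
        public String load(String customerId) { return null; /* SELECT ... */ }
    }

    // Hosted database billed per "database object": same interface, but you may
    // find yourself flattening the model to stay under the pricing threshold.
    class HostedCustomerStore implements CustomerStore {
        public void save(String customerId, String customerJson) { /* provider API call */ }
        public String load(String customerId) { return null; /* provider API call */ }
    }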

While code portability is a solvable problem for a reasonably large set of use cases, things get hairier for the more demanding applications. A large part of the PaaS value proposition is contingent on the willingness to give up some low-level optimizations. This, and harder portability in some cases, may just have to be part of the cost of running demanding applications in a PaaS environment. Or just keep these off PaaS for now. This is part of the backward-compatible versus forward-compatible Cloud dilemma.

Data portability

I have covered data portability in the previous entry, in response to Steve Herrod’s comment that “you should be able to extract the code from the cloud it currently runs in and move it, along with its data, to another cloud choice”. Your data in the force.com database can already be moved somewhere else… as long as you’re willing to write code to get it and perform any needed transformation. In theory, any data that you can read is data that you can move (thus fulfilling Steve’s promise). The question is at what cost. Presumably Steve is referring to data migration tools that VMWare will build (or acquire) and make part of its cloud enablement platform. Another way in which VMWare is trying to assemble a more complete middleware portfolio (see Oracle ODI for an example of a complete data integration offering, which goes far beyond ETL).

There is a subtle difference between the intrinsic portability of Java (which will run in any JRE, modulo JDK version) and the extrinsic portability of data which can in theory be moved anywhere but each place you move it to may require a different process. A car and an oak armoire are both “portable”, but one is designed for moving while the other will only move if you bring a truck and two strong guys.
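
To stay with the armoire, here is what the “truck and two strong guys” look like in code. Nothing below is a real force.com or VMware API; the interfaces are hypothetical, and the point is that the data can move, but only if someone writes and operates the mover:

    // Hypothetical source and target, standing in for whatever APIs each cloud exposes.
    interface SourceCloud { Iterable<String> exportRecords(String objectType); }
    interface TargetStore { void insert(String table, String record); }

    class DataMover {
        void moveAccounts(SourceCloud source, TargetStore target) {
            for (String record : source.exportRecords("Account")) {
                // The transformation step: field names, types, identifiers and
                // cross-references rarely line up one-to-one between two stores.
                String transformed = record.replace("source_id", "local_id");
                target.insert("accounts", transformed);
            }
            // Left unanswered: can this run in-flight, and with how much downtime?
        }
    }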

Application service portability

I covered this in my previous entry and Bob Warfield summarized it as “take advantage of all those juicy services and it will be hard to back out of that platform, Java or no Java”. He is referring to all the platform services (search, reporting, mobile, integration, BPM, IdM, administration) that make up a large part of the force.com value proposition. They won’t be waiting for you in your private cloud (though some may be remotely invocable, depending on how SalesForce wants to play its cards). Applications that depend on them will have to be changed, at least until we have standard interfaces for all these services (don’t hold your breath).

Management portability

Even if you can seamlessly migrate your application and your data from your internal servers to force.com, what do you think is going to happen to your management console, especially if it uses operating system agents? These agents are not coming along for the ride, that’s for sure. Are you going to tell your administrators that rather than having a centralized configuration/monitoring/event console they are going to have to look at cute “monitoring” web pages for each application? And all the transaction tracing, event correlation, configuration policy and end-user monitoring features they were relying on are unfortunate victims of the relentless march of progress? Good luck with that sale.

VMWare’s answer will probably be that they will eventually provide you with all the management capabilities that you need. And it’s a fair one, along the lines of the “Application-to-Disk Management” message at the recent launch of Oracle Enterprise Manager 11g. With the difference that EM is not the only way to manage a top-to-bottom Oracle stack, just the one that we think is the best. BMC and HP aren’t locked out.

VMWare and SpringSource (+Hyperic) could indeed theoretically assemble a full-fledged management solution. But this doesn’t happen overnight, even with acquisitions, as I know from experience both at HP Software and currently at Oracle. Integration (of management domains across the stack, of acquired application management products, of support data/services from oracle.com) is one of the main advances in Enterprise Manager 11g and it took work.

And even then, this leads to the next logical question. If you can move from cloud to cloud but you are forced to use VMWare development, deployment and management tools, haven’t you traded one lock-in problem for another?

Not to mention that your portability between clouds, if it depends on VMWare tools, is limited to VMWare-powered clouds (private or public). In effect, there are now three levels of portability:

  • not portable (only runs on VMforce)
  • portable to any cloud (public or private) built using VMWare infrastructure
  • portable to any Java/Spring Cloud platform

Is your application portable the way cash is portable, or the way a gift card is portable (across stores of a retail chain)?

If this reminds you of the Java portability debates of the early days of Enterprise Java, that’s no surprise. Remember, we’re replaying the tape.

4 Comments

Filed under Application Mgmt, Cloud Computing, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Middleware, PaaS, Portability, Spring, Standards, Tech, Utility computing, VMforce, VMware

Analyzing the VMforce announcement

Let’s start with the disclosures: by most interpretations I work for a competitor to what Salesforce.com and VMWare are trying to do with VMforce. And all I know about VMforce is what I read in a few authoritative blogs by VMWare’s Steve Herrod, VMWare/SpringSource’s Rod Johnson and Salesforce’s Anshu Sharma. So no hard feelings if you jump off right now.

Overall, I like what I see. Let me put it this way. I am now a lot more likely to write an application on force.com than I was last week. How could this not be a good thing for SalesForce, me and others like me?

On the other hand, this is also not the major announcement that the “VMforce is coming” drum-roll had tried to make us expect. If you fell for it, then I guess you can be disappointed. I didn’t and I’m not (Phil Wainewright fell for it and yet isn’t disappointed, asserting that “VMforce.com redefines the PaaS landscape” for reasons not entirely clear to me even after reading his article).

The new thing is that force.com now supports an additional runtime, in addition to Apex. That new runtime uses the Java language, with the constraint that it is used via the Spring framework. Which is familiar territory to many developers. That’s it. That’s the VMforce announcement for all practical purposes from a user’s perspective. It’s a great step forward for force.com which was hampered by the non-standard nature of Apex, but it’s just a new runtime. All the other benefits that Anshu Sharma lists in his blog (search, reporting, mobile, integration, BPM, IdM, administration) are not new. They are the platform services that force.com offers to application writers, whether they use Apex or the new Java/Spring runtime.

It’s important to realize that there are two main parts to a full PaaS platform like force.com or Google App Engine. First there are application runtimes (Apex and now Java for force.com, Python and Java for GAE). They are language-dependent and you can have several of them to support different programming languages. Second are the platform services (reports, mobile, BPM, IdM etc for force.com as we saw above, mostly IdM for Google at this point) which are mostly language agnostic (beyond a library used to access them). I think of data storage (e.g. mySQL, force.com database, Google DataStore) as part of the runtime, but it’s on the edge of the gray zone. A third category is made up of actual application services (e.g. the CRM web services out of SalesForce.com or the application services out of Google Apps) which I tend not to consider part of PaaS, but again there are gray zones between application support services and application services. E.g. how domain-specific does your rule engine have to be before it moves from one category to the other?

As Umit Yalcinalp (who works for SalesForce) told me on Twitter “regardless of the runtime the devs using the Force.com db will get the same platform benefits, chatter, workflow, analytics”. What I called the platform services above. Which, really, is where most of the PaaS value lies anyway. A language runtime is just a starting point.

So where are VMWare and SpringSource in this picture? Well, from the point of view of the user nowhere, really. SalesForce could have built this platform themselves, using the Spring framework on top of Tomcat, WebLogic, JBoss… Itself running on any OS they want. With or without a hypervisor. These are all implementation details and are SalesForce’s problem, not ours as application developers.

It so happens that they have chosen to run this as a partnership with VMWare/SpringSource which makes a lot of sense from a portfolio/expertise perspective, of course. But this choice is not visible to the application developer making use of this platform. And it shouldn’t be. That’s the whole point of PaaS after all, that we don’t have to care.

But VMWare and SpringSource really want us to know that they are there, so Rod Johnson leads by lifting the curtain and explaining that:

“VMforce uses the Force.com physical infrastructure to run vSphere with a special customized vCloud layer that allows for seamless scaling and management. Above this layer VMforce runs SpringSource tc Server instances that provide the execution environment for the enterprise applications that run on VMforce.”

[Side note: notice what’s missing? The operating system. It’s there of course, most likely some Linux distribution but Rod glosses over it, maybe because it’s a missing link in VMWare’s “we have all the pieces” story; unlike Oracle who can provide one or, even better, do without. Just saying…]

VMWare wants us to know they are under the covers because of course they have much larger aspirations than to be a provider to SalesForce. They want to use this as a proof point to sell their SpringSource+VMWare stack in other settings, such as private clouds and other public cloud providers (modulo whatever exclusivity period may be in their contract with SalesForce). And VMforce, if it works well when it launches, is a great validation for this strategy. It’s natural that they want people to know that they are behind the curtain and can be called on to replicate this elsewhere.

But let’s be clear about what part they can replicate. It’s the Java/Spring language runtime and its underlying infrastructure. Not the platform services that are part of the SalesForce platform. Not an IdM solution, not a rules engine, not a business process engine, etc. We can expect that they are hard at work trying to fill these gaps, as the RabbitMQ acquisition illustrates, but for now all this comes from force.com and isn’t directly replicable. Which means that applications that use them aren’t quite so portable.

In his post, Steve Herrod quickly moves past the VMforce announcement to focus on the SpringSource+VMWare infrastructure part, the one he hopes to see multiplied everywhere. The key promise, from the developers’ perspective, is application portability. And while the use of Java+Spring definitely helps a lot in terms of code portability I see some promises in terms of data portability that will warrant scrutiny when VMforce actually rolls out: “you should be able to extract the code from the cloud it currently runs in and move it, along with its data, to another cloud choice”.

It sounds very nice, but the underlying issues are:

  • Does the code change depending on whether I am talking to a local relational DB in my private cloud or whether I am on VMforce and using the force.com database?
  • If it doesn’t then the application is portable, but an extra service is still needed to actually move the data from one cloud to the other (can this be done in-flight? what downtime is needed?)
  • What about the other VMforce.com services (chatter, workflow, analytics…)? If I use them in my code can I keep using them once I migrate out of VMforce to a private cloud? Are they remotely invocable? Does the code change? And if I want to completely sever my links with SalesForce, can I find alternative implementations of these application platform services in my private cloud? Or from another public cloud provider? The answer to these is probably no, which means that you are only portable out of VMforce if you restrain yourself from using much of the value of the platform. It’s not even clear whether you can completely restrain yourself from using it, e.g. can you run on force.com without using their IdM system?

All these are hard questions. I am not blaming anyone for not answering them today since no-one does. But we shouldn’t sweep them under the rug. I am sure VMWare is working on finding workable compromises but I doubt it will be as simple, clean and portable as Steve Herrod implies. It’s funny how Steve and Anshu’s posts seem to reinforce and congratulate one another, until you realize that they are in large part talking about very different things. Anshu’s is almost entirely about the force.com application platform services (sprinkled with some weird Facebook envy), Steve’s is entirely about the application runtime and its infrastructure.

One thing that I am surprised not to see mentioned is the management aspect of the platform, especially considering the investment that SpringSource made in Hyperic. I can only assume that work is under way on this and that we’ll hear about it soon. One aspect of the management story that concerns me a bit is the lack of acknowledgment of the challenges of configuration management in a PaaS setting. Especially when I read Steve Herrod asserting that the VMWare/SpringSource PaaS platform is going to free us from the burden of “handling code modifications that may be required as the middleware versions change”. There seems to be a misconception that because the application administrators are not the ones doing the infrastructure updates they don’t need to worry about the impact of these updates on their application. Is Steve implying that the first release of the VMWare/SpringSource PaaS stack is going to be so perfect that the hypervisor, guest OS and app server will never have to be patched and versioned? If that’s not the case, then why are those patches suddenly less likely to impact the application code? In fact the situation is even worse as the application administrator does not know which hypervisor/OS/middleware patches are being applied and when. They can’t test against the new version ahead of time for validation and they can’t make sure the change is scheduled during a non-critical period for their business. I wrote an entire blog post on this issue six months ago and it’s a little bit disheartening to see the issue flatly denied and ignored. Management is not just monitoring.

Here is another intriguing comment in Steve’s entry: “one of the key differentiators with EC2 based PaaS will be the efficiencies for the many-app model. Customers are frustrated with the need to buy a whole VM as the minimum service unit for their applications. Our PaaS will provide fine-grained resource separation”. I had to read it twice when I realized that the VMWare CTO was telling us that splitting a physical machine into VMs is not a good enough way to share its resources and that you really need middleware-level multi-tenancy. But who can disagree that a GAE-like architecture can support more low-traffic applications on the same server than anything based on VM-based sharing? Which (along with deep pockets) puts Google in a position to offer free hosting for low-traffic applications, a great way to build adoption.

These are very early days in the history of PaaS. VMWare, like the rest of us, will need to tackle all these issues one by one. In the meantime, this is an interesting announcement and a noticeable milestone. Let’s just keep our eyes open on the incremental nature of progress and the long list of remaining issues.

[UPDATED 2010/4/29: See the follow-up post, PaaS portability challenges and the VMforce example.]

[UPDATED 2010/6/9: This entry points out how the OS level is a gap in VMWare’s portfolio. They took a step in addressing this today, by partnering with Novell to offer SUSE support.]


12 Comments

Filed under Application Mgmt, BPM, Cloud Computing, Everything, Google App Engine, Middleware, PaaS, Portability, Spring, Tech, Virtualization, VMforce, VMware

The fallacy of privacy settings

Another round of “update your Facebook privacy settings right now” messages recently swept through Twitter and blogs. As also happened a few months ago, when Facebook last modified some privacy settings to better accommodate their business goals. This is borderline silly. So, once and for all, here is the rule:

Don’t put anything on any social network that you don’t want to be made public.

Don’t count on your privacy settings on the site to keep your “private” data out of the public eye. Here are the many ways in which they can fail (using Facebook as a stand-in for all the other social networks, this is not specific to Facebook):

  • You make a mistake when configuring the privacy settings
  • Facebook changes the privacy mechanisms on you during one of their privacy policy updates
  • Facebook has a security flaw that bypasses access control
  • One of your friends who has access to your private data accidentally/stupidly/maliciously shares it more widely
  • A Facebook application to which you grant access betrays your trust by accessing the data and exposing it
  • A Facebook application gets hacked
  • A Facebook application retains your data in its cache
  • Your account (or one of your friends’ accounts) gets hacked
  • Anonymized data that Facebook shares with researchers gets correlated back to real users
  • Some legal action (not necessarily related to you personally) results in a large amount of Facebook data (including yours) seized and exported for legal review
  • Facebook loses some backup media
  • Facebook gets acquired (or it goes out of business and its assets are sold to the highest bidder)
  • Facebook (or whoever runs their hardware) disposes of hardware without properly wiping it
  • [Added 2012/3/8] Your employer or school demands that you hand over your account password (or “friend” a monitor)
  • Etc…

All in all, you should not think of these privacy settings as locks protecting your data. Think of them as simply a “do not disturb” sign (or a necktie…) hanging on the knob of an unlocked door. I am not advising against using privacy settings, just against counting on them to work reliably. If you’d rather your work colleagues don’t see your holiday pictures, then set your privacy settings so they can’t see them. But if it would really bother you if they saw them, then don’t post the pictures on Facebook at all. Think of it like keeping a photo in your wallet. You get to choose who you show it to, until the day you forget your wallet in the office bathroom, or at a party, and someone opens it to find the owner. You already know this instinctively, which is why you probably wouldn’t carry photos in your wallet that shouldn’t be shown publicly. It’s the same on Facebook.

This is what was so disturbing about the Buzz/GMail privacy fiasco. It took data (your list of GMail contacts) that was not created for the purpose of sharing it with anyone, and turned this into profile data in a social network. People who signed up for GMail didn’t sign up for a social network, they signed up for a Web-based email. What Google wants, on the other hand, is a large social network like Facebook, so it tried to make GMail into one by auto-following GMail contacts in your Buzz profile. It’s as if your insurance company suddenly decided it wanted to enter the social networking business and announced one day that you were now “friends” with all their customers who share the same medical condition. And will you please log in and update your privacy settings if you have a problem with that, you backward-looking, privacy-hugging, profit-dissipating idiot.

On the other hand, that’s one thing I like about Twitter. By and large (except for the few people who lock their accounts) almost all the information you put in Twitter is expected to be public. There is no misrepresentation, confusion or surprise. I don’t consider this lack of configurable privacy as a sign that Twitter doesn’t respect the privacy of its users. To the contrary, I almost see this as the most privacy-friendly approach: make it clear that everything is public. Because it is anyway.

One could almost make a counter-intuitive case that providing privacy settings is anti-privacy because it gives an unwarranted sense of security and nudges users towards providing more private data than they otherwise would. At least if the policy settings are not contractual (can you sue Facebook for changing its privacy terms on you?). At least it’s been working that way so far for Facebook, intentionally or not, as illustrated by all the articles that stress the importance of setting our privacy settings right (implicit message: it’s ok to put private information as long as you set your privacy settings right).

Yes you should have clear privacy settings. But the place to store them is in your brain and the place to enforce them is by controlling what your fingers do before data gets on Facebook. Facebook and similar networks can only leak data that they possess. A lot of that data comes from you directly uploading it. And that’s the point where you have control. After this, you really don’t. Other data comes from tracking and analyzing your activities and connections, without explicit data upload from you. That’s a lot harder for you to control (you rarely even get asked for your privacy preferences on this data), but that’s out of scope for this blog entry.

Just like banks that are too big to fail are too big to exist, data that is too sensitive to leak from Facebook is too sensitive to be on Facebook.

5 Comments

Filed under Everything, Facebook, Google, Off-topic, Security, Social networks, Twitter

The battle of the Cloud Frameworks: Application Servers redux?

The battle of the Cloud Frameworks has started, and it will look a lot like the battle of the Application Servers which played out over the last decade and a half. Cloud Frameworks (which manage IT automation and runtime outsourcing) are to the Programmable Datacenter what Application Servers are to the individual IT server. In the longer term, these battlefronts may merge, but for now we’ve been transported back in time, to the early days of Web programming. The underlying dynamic is the same. It starts with a disruptive IT event (part new technology, part new mindset). 15 years ago the disruptive event was the Web. Today it’s Cloud Computing.

Stage 1

It always starts with very simple use cases. For the Web, in the mid-nineties, the basic use case was “how do I return HTML that is generated by a script as opposed to a static file”. For Cloud Computing today, it is “how do I programmatically create, launch and stop servers as opposed to having to physically install them”.

In that sense, the IaaS APIs of today are the equivalent of the Common Gateway Interface (CGI) circa 1993/1994. Like the EC2 API and its brethren, CGI was not optimized, not polished, but it met the basic use cases and allowed many developers to write their first Web apps (which we just called “CGI scripts” at the time).
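
For those who have mercifully forgotten, a CGI program is just an executable that the Web server launches for each request: it reads the request from environment variables (and stdin for POSTs) and writes the HTTP response to stdout. A minimal sketch, in Java for consistency even though in 1994 it would more likely have been Perl or C:

    public class HelloCgi {
        public static void main(String[] args) {
            // The web server sets QUERY_STRING (among other CGI variables).
            String query = System.getenv("QUERY_STRING");
            // Headers, a blank line, then the body, all written to stdout.
            System.out.print("Content-Type: text/html\r\n\r\n");
            System.out.println("<html><body><h1>Hello from a CGI script</h1>");
            System.out.println("<p>QUERY_STRING: " + (query == null ? "" : query) + "</p>");
            System.out.println("</body></html>");
        }
    }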

Stage 2

But the limitations soon became apparent. In the CGI case, it had to do with performance (the cost of the “one process per request” approach). Plus, the business potential was becoming clearer and attracted a different breed of contenders than just academic and research institutions. So we got NSAPI, ISAPI, FastCGI, Apache Modules, JServ, ZDAC…

We haven’t reached that stage for Cloud yet. That will be when the IaaS APIs start to support events, enumerations, queries, federated identity etc…

Stage 3

Stage 2 looked like the real deal, when we were in it, but little did we know that we were still just nibbling on the hors d’oeuvres. And it was short-lived. People quickly decided that they wanted more than a way to handle HTTP requests. If the Web was going to be central to most programs, then all aspects of programming had to fit well in the context of the Web. We didn’t want Web servers anymore, we wanted application servers (re-purposing a term that had been used for client-server). It needed more features, covering data access, encapsulation, UI frameworks, integration, sessions. It also needed to meet non-functional requirements: availability, scalability (hello clustering), management, identity…

That turned into the battle between the various Java application servers as well as between Java and Microsoft (with .Net coming along), along with other technology stacks. That’s where things got really interesting too, because we explored different ways to attack the problem. People could still program at the HTTP request level. They could use MVC frameworks, ColdFusion/ASP/JSP/PHP-style markup-driven applications, or portals and other higher-level modular authoring frameworks. They got access to adapters, message buses, process flows and other asynchronous mechanisms. It became clear that there was not just one way to write Web applications. And the discovery is still going on, as illustrated by the later emergence of Ruby on Rails and similar frameworks.

Stage 4

Stage 3 is not over for Web applications, but stage 4 is already there, as illustrated by the fact that some of the gurus of stage 3 have jumped to stage 4. It’s when the Web is everywhere. Clients are everywhere and so are servers for that matter. The distinction blurs. We’re just starting to figure out the applications that will define this stage, and the frameworks that will best support them. The game is far from over.

So what does it mean for Cloud Frameworks?

If, like me, you think that the development of Cloud Frameworks will follow a path similar to that of Application Servers, then the quick retrospective above can be used as an (imperfect) crystal ball. I don’t pretend to be a Middleware historian or that these four stages are the most accurate representation, but I think they are a reasonable perspective. And they hold some lessons for Cloud Frameworks.

It’s early

We are at stage 1. I’ll admit that my decision to separate stages 1 and 2 is debatable and mainly serves to illustrate how early in the process we are with Cloud frameworks. Current IaaS APIs (and the toolkits that support them) are the equivalent of CGI (and the early httpd), something that’s still around (Google App Engine emulates CGI in its Python incarnation) but almost no-one programs to directly anymore. It’s raw, it’s clunky, it’s primitive. But it was a needed starting point that launched the whole field of Web development. Just like IaaS APIs like EC2 have launched the field of Cloud Computing.
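
To illustrate the “raw and clunky” point, here is roughly what programming directly to a stage-1 IaaS API looks like: one HTTP request, query parameters, XML back. The endpoint and parameters below are hypothetical (loosely modeled on EC2-style query APIs) and request signing is omitted entirely:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RawIaasCall {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and parameters; a real call would also carry
            // authentication/signature parameters.
            String url = "https://iaas.example.com/?Action=RunInstances"
                    + "&ImageId=img-12345&MinCount=1&MaxCount=1";
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            // The response is XML to parse by hand: no events, no enumerations,
            // no queries, no federated identity. Stage 1.
            System.out.println(response.body());
        }
    }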

Cloud Frameworks will need to go through the equivalent of all the other stages. First, the IaaS APIs will get more optimized and capable (stage 2). Then, at stage 3, we will focus on higher-level, more productive abstraction layers (generally referred to as PaaS) at which point we should expect a thousand different approaches to bloom, and several of them to survive. I will not hazard a guess as to what stage 4 will look like (here is my guess for stage 3, in two parts).

No need to rush standards

One benefit of this retrospective is to highlight the tragedy of Cloud standards compared to Web development standards. Wouldn’t we be better off today if the development leads of AWS and a couple of other Cloud providers had been openly cooperating in a Cloud equivalent of the www-talk mailing list of yore? Out of it came a rough agreement on HTML and CGI that allowed developers to write basic Web applications in a reasonably portable way. If the same informal collaboration had taken place for IaaS APIs, we’d have a simple de-facto consensus that would support the low-hanging fruits of basic IaaS. It would allow Cloud developers to support the simplest use cases, and relieve the self-defeating pressure to standardize too early. Standards played a huge role in the development of Application Servers (especially of the Java kind), but that really took place as part of stage 3. In the absence of an equivalent to CGI in the Cloud world, we are at risk of rushing the standardization without the benefit of the experimentation and lessons that come in stage 3.

I am not trying to sugar-coat the history of Web standards. The HTML saga is nothing to be inspired by. But there was an original effort to build consensus that wasn’t even attempted with Cloud APIs. I like the staged process of a rough consensus that covers the basic use cases, followed by experimentation and proprietary specifications and later a more formal standardization effort. If we skip the rough consensus stage, as we did with Cloud, we end up rushing to the final step (to the tune of “customers demand Cloud standards”) even though all we need for now is an interoperable way to meet the basic use cases.

Winners and losers

Whoever you think of as the current leaders of the Application Server battle (hint: I work for one), they were not the obvious leaders of stages 1 and 2. So don’t be in too much of a hurry to crown the Cloud Framework kings. Those you think of today may turn out to be the Netscapes of that battle.

New roles

It’s not just new technology. The development of Cloud Frameworks will shape the roles of the people involved. We used to have designers who thought their job was done when they produced a picture or at best a FrameMaker or QuarkXPress document, which is what they were used to. We had “webmasters” who thought they were set for life with their new Apache skills, then quickly had to evolve or make way, a lesson for IaaS gurus of today. Under terms like “DevOps” new roles are created and existing roles are transformed. Nobody yet knows what will stick. But if I was an EC2 guru today I’d make sure to not get stuck providing just that. Don’t wait for other domains of Cloud expertise to be in higher demand than your current IaaS area, as by then you’ll be too late.

It’s the stack

There aren’t many companies out there making a living selling a stand-alone Web server. Even Zeus, who has a nice one, seems to be downplaying it on its site compared to its application delivery products. The combined pressure of commoditization (hello Apache) and of the demand for a full stack has made it pretty hard to sell just a Web server.

Similarly, it’s going to be hard to stay in business selling just portions of a Cloud Framework. For example, just provisioning, just monitoring, just IaaS-level features, etc. That’s well-understood and it’s fueling a lot of the acquisitions (e.g. VMWare’s purchase of SpringSource which in turn recently purchased RabbitMQ) and partnerships (e.g. recently between Eucalyptus and GroundWork though rarely do such “partnerships” rise to the level of integration of a real framework).

It’s not even clear what the right scope for a Cloud Framework is. What makes a full stack and what is beyond it? Is it just software to manage a private Cloud environment and/or deployments into public Clouds? Does the framework also include the actual public Cloud service? Does it include hardware in some sort of “private Cloud in a box”, of the kind that this recent Dell/Ubuntu announcement seems to be inching towards?

Integration

If indeed we can go by the history of Application Server to predict the future of Cloud Frameworks, then we’ll have a few stacks (with different levels of completeness, standardized or proprietary). This is what happened for Web development (the JEE stack, the .Net stack, a more loosely-defined alternative stack which is mostly open-source, niche stacks like the backend offered by Adobe for Flash apps, etc) and at some point the effort moved from focusing on standardizing the different application environment technology alternatives (e.g. J2EE) towards standardizing how the different platforms can interoperate (e.g. WS-*). I expect the same thing for Cloud Frameworks, especially as they grow out of stages 1 and 2 and embrace what we call today PaaS. At which point the two battlefields (Application Servers and Cloud Frameworks) will merge. And when this happens, I just can’t picture how one stack/framework will suffice for all. So we’ll have to define meaningful integration between them and make them work.

If you’re a spectator, grab plenty of popcorn. If you’re a soldier in this battle, get ready for a long campaign.

7 Comments

Filed under Application Mgmt, Automation, Big picture, Cloud Computing, Everything, Mgmt integration, Middleware, Standards, Utility computing

A week of Oracle Middleware, Management and Cloud

Oracle has a busy week in store for people who are interested in application management. Today, the company announced:

  • Oracle Virtual Assembly Builder, to package and easily deploy virtualized composite applications. It’s an application-aware (via metadata) set of VM disk images. It comes with a graphical builder tool.
  • Oracle WebLogic Suite Virtualization Option (not the most Twitter-friendly name, so if you see me tweet about “WebLogic Virtual” or “WLV” that’s what I mean), an optimized version of WebLogic Server which runs on JRockit Virtual Edition, itself on top of OVM. Notice what’s missing? The OS. If you think you’ll miss it, you may be suffering from learned helplessness. Seek help.

Later this week, Oracle will announce Oracle Enterprise Manager 11g. I am not going to steal the thunder a couple of days before the announcement, but I can safely say that a large chunk of the new features relate to application management.

[UPDATED 2010/4/21: Adam and Blake‘s blogs on the Virtual Assembly Builder and WebLogic Suite Virtualization Option announcements. And Chung on the upcoming EM release.]

Comments Off on A week of Oracle Middleware, Management and Cloud

Filed under Application Mgmt, Everything, IT Systems Mgmt, Middleware, Oracle, Virtual appliance, Virtualization

Steve Ballmer gets Cloud

Steve Ballmer wants devops

Devops? What’s devops? See these articles:

3 Comments

Filed under Cloud Computing, DevOps, Everything, IT Systems Mgmt, Microsoft, People

Enterprise application integration patterns for IT management: a blast from the past or from the future?

In a recent blog post, Don Ferguson (CTO at CA) describes CA Catalyst, a major architectural overhaul which “applies enterprise application integration patterns to the problem of integrating IT management systems”. Reading this was fascinating to me. Not because the content was some kind of revelation, but exactly for the opposite reason. Because it is so familiar.

For the better part of the last decade, I tried to build just this at HP. In the process, I worked with (and sometimes against) Don’s colleagues at IBM, who were on the same mission. Both companies wanted a flexible and reliable integration platform for all aspects of IT management. We had decided to use Web services and SOA to achieve it. The Web services management protocols that I worked on (WSMF, WSDM, WS-Management and the “reconciliation stack”) were meant for this. We were after management integration more than manageability. Then came CMDBf, another piece of the puzzle. From what I could tell, the focus on SOA and Web services had made Don (who was then Mr. WebSphere) the spiritual father of this effort at IBM, even though he wasn’t at the time focused on IT management.

As far as I know, neither IBM nor HP got there. I covered some of the reasons in this post-mortem. The standards bickering. The focus on protocols rather than models. The confusion between the CMDB as a tool for process/service management versus a tool for software integration. Within HP, the turmoil from the many software acquisitions didn’t help, and there were other reasons. I am not sure at this point whether either company is still aiming for this vision or if they are taking a different approach.

But apparently CA is still on this path, and got somewhere. At least according to Don’s post. I have no insight into what was built beyond what’s in the post. I am not endorsing CA Catalyst, just agreeing with the design goals listed by Don. If indeed they have built it, and the integration framework resists the test of time, that’s impressive. And exciting. It apparently even uses some of the same pieces we were planning to use, namely WS-Management and CMDBf (I am reluctantly associated with the first and proudly with the second).

While most readers might not share my historical connection with this work, this is still relevant and important to anyone who cares about IT management in the enterprise. If you’re planning to be at CA World, go listen to Don. Web services may have a bad name, but the technical problems of IT management integration remain. There are only a few routes to IT management automation (I count seven, the one taken by CA is #2). You can throw away SOAP if you want, you still need to deal with protocol compatibility, model alignment and instance reconciliation. You need to centralize or orchestrate the management operations performed. You need to be able to integrate with complementary products or at the very least to effectively incorporate your acquisitions. It’s hard stuff.

Bonus point to Don for not forcing a “Cloud” angle for extra sparkle. This is core IT management.

Comments Off on Enterprise application integration patterns for IT management: a blast from the past or from the future?

Filed under Automation, CA, CMDB, CMDB Federation, CMDBf, Everything, IT Systems Mgmt, Mgmt integration, Modeling, People, Protocols, SOAP, Specs, Standards, Tech, Web services, WS-Management

Smoothing a discrete world

For the short term (until we sell one) there are three cars in my household. A manual transmission, an automatic and a CVT (continuous variable transmission). This makes me uniquely qualified to write about Cloud Computing.

That’s because Cloud Computing is yet another area in which the manual/automatic transmission analogy can be put to good use. We can even stretch it to a 4-layer analogy (now that’s elasticity):

Manual transmission

That’s traditional IT. Scaling up or down is done manually, by a skilled operator. It’s usually not rocket science but it takes practice to do it well. At least if you want it to be reliable, smooth and mostly unnoticed by the passengers.

Manumatic transmission (a.k.a. Tiptronic)

The driver still decides when to shift up or down, but only gives the command. The actual process of shifting is automated. This is how many Cloud-hosted applications work. The scale-up/down action is automated but still contingent on being triggered by an administrator. Which is what most IaaS-deployed apps should probably aspire to at this point in time despite the glossy brochures about everything being entirely automated.

Automatic transmission

That’s when the scale up/down process is not just automated in its execution but also triggered automatically, based on some metrics (e.g. load, response time) and some policies. The scenario described in the aforementioned glossy brochures.
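
A minimal sketch of what such a policy loop might look like, with a hypothetical CloudClient interface standing in for whatever provisioning API the cloud exposes and the load number coming from some monitoring feed:

    public class AutoScaler {

        // Hypothetical provisioning interface; the real one is whatever your cloud offers.
        interface CloudClient {
            int runningServers();
            void addServer();
            void removeServer();
        }

        // Policy: keep average CPU between 30% and 70%, with 2 to 10 servers.
        static void reconcile(CloudClient cloud, double averageCpuLoad) {
            if (averageCpuLoad > 0.70 && cloud.runningServers() < 10) {
                cloud.addServer();
            } else if (averageCpuLoad < 0.30 && cloud.runningServers() > 2) {
                cloud.removeServer();
            }
        }
        // "Automatic transmission": call reconcile() on a timer, fed by monitoring.
        // "Manumatic": same addServer()/removeServer(), but an administrator pushes the button.
    }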

Continuous variable transmission

That’s when the notion of discrete gears goes away. You don’t think in terms of what gear you’re in but how much torque you want. On the IT side, you’re in PaaS territory. You don’t measure the number of servers, but rather a continuously variable application capacity metric. At least in theory (most PaaS implementations often betray the underlying work, e.g. via a spike in application response time when the app is not-so-transparently deployed to a new node).

More?

OK, that’s the analogy. There are many more of the same kind. Would you like to hear how hybrid Cloud deployments (private+public) are like hybrid cars (gas+electric)? How virtualization is like carpooling (including how you can also be inconvenienced by the BO of a co-hosted VM)? Do you want to know why painting flames on the side of your servers doesn’t make them go faster?

Driving and IT management have a lot in common, including bringing out the foul-mouth in us when things go wrong.

So, anyone wants to buy a manual VW Golf Turbo? Low mileage. Cloud-checked.

4 Comments

Filed under Application Mgmt, Automation, Cloud Computing, Everything, IT Systems Mgmt, Utility computing

“Freeing SaaS from Cloud”: slides and notes from Cloud Connect keynote

I got invited to give a short keynote presentation during the Cloud Connect conference this week at the Santa Clara Convention Center (thanks Shlomo and Alistair). Here are the slides (as PPT and PDF). They are visual support for my bad jokes rather than a medium for the actual message. So here is an annotated version.

I used this first slide (a compilation of representations of the 3-layer Cloud stack) to poke some fun at this ubiquitous model of the Cloud architecture. Like all models, it’s neither true nor false. It’s just more or less useful to tackle a given task. While this 3-layer stack can be relevant in the context of discussing economic aspects of Cloud Computing (e.g. Opex vs. Capex in an on-demand world), it is useless and even misleading in the context of some more actionable topics for SaaS: chiefly, how you deliver such services, how you consume them and how you manage them.

In those contexts, you shouldn’t let yourself get too distracted by the “aaS” aspect of SaaS and focus on what it really is.

Which is… a web application (by which I include both HTML access for humans and programmatic access via APIs). To illustrate this point, I summarized the content of this blog entry. No need to repeat it here. The bottom line is that any distinction between SaaS and POWA (Plain Old Web Applications) is at worst arbitrary and at best concerned with the business relationship between the provider and the consumer rather than technical aspects of the application.

Which means that for most technical aspect of how SaaS is delivered, consumed and managed, what you should care about is that you are dealing with a Web application, not a Cloud service. To illustrate this, I put up the…

… guillotine slide. Which is probably the only thing people will remember from the presentation, based on the ample feedback I got about it. It probably didn’t hurt that I also made fun of my country of origin (you can never go wrong making fun of France), saying that the guillotine was our preferred way of solving any problem and also the last reliable piece of technology invented in France (no customer has ever come back to complain). Plus, enough people in the audience seemed to share my lassitude with the 3-layer Cloud stack to find its beheading cathartic.

Come to think about it, there are more similarities. The guillotine is to the axe what Cloud Computing is to traditional IT. So I may use it again in Cloud presentations.

Of course this beheading is a bit excessive. There are some aspects for which the IaaS/PaaS/SaaS continuum makes sense, e.g. around security and compliance, or in areas related to multi-tenancy and the delegation of control to a third party. To the extent that these concerns can be addressed consistently across the Cloud stack they should be.

But focusing on these “Cloud” aspects of SaaS is missing the forest for the trees.

A large part of the Cloud value proposition is increased flexibility. At the infrastructure level, being able to provision a server in minutes rather than days or weeks, being able to turn one off and instantly stop paying for it, are huge gains in flexibility. It doesn’t work quite that way at the application level. You rarely have 500 new employees joining overnight who need to have their email and CRM accounts provisioned. This is not to minimize the difficulties of deploying and scaling individual applications (any improvement is welcome on this). But those difficulties are not what is crippling the ability of IT to respond to business needs.

Rather, at the application level, the true measure of flexibility is the control you maintain on your business processes and their orchestration across applications. How empowered (or scared) you are to change them (either because you want to, e.g. entering a new business, or because you have to, e.g. a new law). How well your enterprise architecture has been defined and implemented. How much visibility you have into the transactions going through your business applications.

It’s about dealing with composite applications, whether their components are on-premise or “in the Cloud”. Even applications like Salesforce.com see a large number of invocations coming through their APIs rather than their HTML front-end. Which means that there are some business applications calling them (either other SaaS, custom applications or packaged applications with an integration to Salesforce). Which means that the actual business transactions go across a composite system and have to be managed as such, independently of the deployment model of each participating application.

[Side note: One joke that fell completely flat was that it was unlikely that the invocations of Salesforce through the Web services APIs were the work of sales people telnetting to port 80 and typing HTTP and SOAP headers. Maybe I spoke too quickly (as I often do), or the audience was less nerdy than I expected (though I recognized some high-ranking members of the nerd aristocracy). Or maybe they all got it but didn’t laugh because I forgot to take encryption into account?]

At this point I launched into a very short summary of the benefits of SOA governance/management, real user experience monitoring, BTM and application-centric IT management in general. Which is very succinctly summarized on the slide by the “SOA” label on the receiving bucket. I would have needed another 10 minutes to do this subject justice. Maybe at Cloud Connect 2011? Alistair?

This picture of me giving the presentation at Cloud Connect is the work of Alex Dunne.

The guillotine picture is the work of Rusty Boxcars who didn’t just take the photo but apparently built the model (yes it’s a small-size model, look closely). Here it is in its original, unedited, glory. My edited version is available under the same CC license to anyone who wants to grab it.

Filed under Application Mgmt, Business Process, Cloud Computing, Conference, Everything, Governance, Mgmt integration, SaaS, Trade show, Utility computing

Standards Disconnect at Cloud Connect

Yesterday’s panel session on the future of Cloud standards at Cloud Connect is still resonating on Twitter tonight. Many were shocked by how acrimonious the debate became. It didn’t have to be that way but I am not surprised that it was.

The debate was set up and moderated by Bob Marcus (ET-Strategies CTO and master standards coordinator). On stage were Krishna Sankar (Cisco and DMTF Cloud incubator), Archie Reed (HP and CSA), Winston Bumpus (VMWare and DMTF), a gentleman whose name I unfortunately forgot (and who isn’t listed on the program) and me.

If the goal was to glamorize Cloud standards, it was a complete failure. If the goal was to come out with some solutions and agreements, it was also a failure. But if the goal, as I believe, was to surface the current issues, complexities, emotions and misunderstandings surrounding Cloud standards, then I’d say it was a success.

I am not going to attempt to summarize the whole discussion. Charles Babcock, who was in the audience, does a good enough job in this InformationWeek article and, unlike me, he doesn’t have a horse in the race [side note: I am not sure why my country of origin is relevant to his article, but my guess is that this is the main thing he remembered from my presentation during the Cloud Connect keynote earlier that morning, thanks to the “guillotine” slide].

Instead of reporting on what happened during the standards discussion, I’ll just make one comment and provide one take-away.

The comment: the dangers of marketing standards

Early in the session, audience member Reuven Cohen complained that standards organizations don’t do enough to market their specifications. Winston was more than happy to address this and talk about all the marketing work that DMTF does, including trade shows and PR. He added that this is one of the reasons why DMTF needs to charge membership fees, to pay for this marketing. I agree with Winston at one level. Indeed, the DMTF does what he describes and puts a fair amount of effort into marketing itself and its work. But I disagree with Reuven and Winston that this is a good thing.

First it doesn’t really help. I don’t think that distributing pens and tee-shirts to IT admins and CIO-wannabes results in higher adoption of your standard. Because the end users don’t really care what standard is used. They just want a standard. Whether it comes from DMTF, SNIA, OGF, or OASIS is the least of their concerns. Those that you have to convince to adopt your standard are the vendors and the service providers. The Amazon, Rackspace and GoGrid of the world. The Microsoft, Oracle, VMWare and smaller ones like… Enomaly (Reuven’s company). The highly-specialized consultants who work with them, like Randy. And also, very importantly, the open source developers who provide all the Cloud libraries and frameworks that are the lifeblood of many deployments. I have enough faith left in human nature to assume that all these guys make their strategic standards decisions on a bit more context than exhibit hall loot and press releases. Well, at least we do where I work.

But this traditional approach to marketing is worse than not helping. It’s actually actively harmful, for two reasons. The first is that the cost of these activities, as Winston acknowledges, creates a barrier to participation by requiring higher dues. To Winston it’s an unfortunate side effect; to me it’s a killer. Not necessarily because dropping the membership fee by 50% would bring that many more participants. But because the organizations become so dependent on dues that they are paranoid about making anything public for fear of lowering the incentive for members to keep paying. Which is the worst thing you can do if you want the experts and open source developers, who are the best chance Cloud standards have of not repeating the mistakes of the past, to engage with the standard. Not necessarily as members of the group, but also from the outside. Assuming the work happens in public, which is the key issue.

The other reason why it’s harmful to have a standards organization involved in such traditional marketing is that it has a tendency to become a conduit for promoting the agenda of the board members. Promoting a given standard or organization sounds good, until you realize that it’s rarely so pure and unbiased. The trade shows in which the organization participates are often vendor-specific (e.g. Microsoft Management Summit, VMWorld…). The announcements are timed to coincide with relevant corporate announcements. The press releases contain quotes from board members who promote themselves at the same time as the organization. Officers speaking to the press on behalf of the standards organization are often also identified by their position in their company. Etc. The more a standards organization is involved in marketing, the more its low-level members are effectively subsidizing the marketing efforts of the board members. Standards have enough inherent conflicts of interest to not add more opportunities.

Just to be clear, that issue of standards marketing is not what consumed most of the time during the session. But it came up and, since I didn’t get a chance to express my view on it while on the panel, I used this blog instead.

My take-away from the panel, on the other hand, is focused on the heart of the discussion that took place.

The take-away: confirmation that we are going too fast, too early

Based on this discussion and other experiences, my current feeling on Cloud standards is that it is too early. If you think the practical experience we have today in Cloud Computing corresponds to what the practice of Cloud Computing will be in 10 years, then please go ahead and standardize. But let me tell you that you’re a fool.

The portion of Cloud Computing in which we have some significant experience (get a VM, attach a volume, assign an IP) will still be relevant in 10 years, but it will be a small fraction of Cloud Computing. I can tell you that much even if I can’t tell you what the whole will be. I have my ideas about what the whole will look like but it’s just a guess. Anybody who pretends to know is fooling you, themselves, or both.

I understand the pain of customers today who just want to have a bit more flexibility and portability within the limited scope of the VM/Volume/IP offering. If we really want to do a standard today, fine. Let’s do a very small and pragmatic standard that addresses this. Just a subset of the EC2 API. Don’t attempt to standardize the virtual disk format. Don’t worry about application-level features inside the VM. Don’t sweat the REST or SOA purity aspects of the interface too much either. Don’t stress about scalability of the management API and batching of actions. Just make it simple and provide a reference implementation. A few HTTP messages to provision, attach, update and delete VMs, volumes and IPs. That would be fine. Anything else (and more is indeed needed) would be vendor extensions for now.
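
To give an idea of how small such a standard could be, here is a purely illustrative sketch, in Python, of the kind of HTTP messages it would boil down to. The base URL, resource names and fields below are made up for the example; they are not taken from any actual API or proposal.

    # Illustrative only: a hypothetical minimal VM/Volume/IP API.
    import requests

    BASE = "https://cloud.example.com/api"
    AUTH = {"Authorization": "Bearer <token>"}

    # Provision a VM from an image
    vm = requests.post(BASE + "/vms", json={"image": "img-123", "shape": "small"}, headers=AUTH).json()

    # Create a volume and attach it to the VM
    vol = requests.post(BASE + "/volumes", json={"size_gb": 20}, headers=AUTH).json()
    requests.post(BASE + "/vms/" + vm["id"] + "/attachments", json={"volume": vol["id"]}, headers=AUTH)

    # Assign a public IP to the VM, then later release everything
    ip = requests.post(BASE + "/ips", json={"vm": vm["id"]}, headers=AUTH).json()
    requests.delete(BASE + "/ips/" + ip["id"], headers=AUTH)
    requests.delete(BASE + "/volumes/" + vol["id"], headers=AUTH)
    requests.delete(BASE + "/vms/" + vm["id"], headers=AUTH)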

Unfortunately, neither of these (waiting, or a limited first standard) is going to happen.

Saying “it’s too early” in the standards world is the same as saying nothing. It puts you out of the game and has no other effect. Amazon, the clear leader in the space, has taken just this position. How has this been understood? Simply as “well I guess we’ll do it without them”. It’s sad, but all it takes is one significant (but not necessarily leading) company trying to capitalize on some market influence to force the standards train to leave the station. And at that point it’s a hard decision for the others not to join the pursuit. In the same way that it only takes one bellicose country among pacifists to start a war.

Prepare yourself for some collateral damage.

While I would prefer for this not to proceed now (not speaking for my employer on this blog, remember), that doesn’t mean one should necessarily stay on the sidelines rather than make lemonade out of lemons. But having opened the Cloud Connect panel session with somewhat of a mea culpa (at least for my portion of responsibility) with regard to the failures of the previous IT management standardization wave, I am not too happy to see the seeds of another collective mea culpa being sown, for when we’ve made a mess of Cloud standards too. It’s not a given yet. Just a very high risk. As was made clear yesterday.

Filed under Amazon, Cloud Computing, Conference, DMTF, Everything, Mgmt integration, People, Portability, Standards, Trade show, Utility computing

There should be a word for this (Blog/Twitter edition) part 2

Back in October (see “there should be a word for this” part 1) I listed a few concepts (related to twitter and/or blogging) for which new words were needed. Since it’s such a rich field, I barely scratched the surface. Here is the second installment.

#9 The temptation to repeat a brilliant tweet of yours that went unnoticed when you expected an RT storm in response (maybe it was a bad time of the day when everyone was offline? maybe it fell in a twitter mini-outage?)

#10 The new pair of eyes you get the second after you post a tweet.

#11 The act of sharing (e.g. via delicious…) or RTing a URL to an article you haven’t actually read (but you think it makes you look smart). For example, I’d love to give a test to everyone who RTed this entry.

#12 The shock of seeing a delivery error when DMing someone you were positive was following you (this is related to definition #1 from part 1, so Shlomo’s followimp could apply).

#13 The minimum number of people to follow on twitter, of blog feeds to subscribe to and of Facebook friends to have such that you can cycle through all three continuously and never run out of new content. In the TV world, the equivalent would be the minimum number of cable channels needed to cycle through them and never feel like you’ve established that there is nothing worth watching.

#14 The awful feeling when the twitter/blog/facebook cycle from #13 breaks on a Friday night because others have a life.

#15 When a twitter conversation has reached a dead-end because of the short form. When the response you get makes you wonder what the other person understood from your last tweet. But forcing a clarification would take a half-dozen tweets at least and risk turning you into a twoll (another coinage for the twitter era, by Andi Mann).

#16 The compression rate of a sentence: how hard it is to further compress it (e.g. in order to squeeze in an RT comment), whether all the easy shortcuts have been taken already.

Please submit your candidate terms for these definitions.

[UPDATED 2010/8/12: there is now a part 3.]

Filed under Everything, Media, Off-topic, Social networks, Twitter