The enterprise Cloud battle will be an integration battle

The real threat the Cloud poses for incumbent enterprise software vendors is not the obvious one. It’s not about hosting. Sure, they would prefer if nothing changed and they could stick to shipping software and letting customers deal with installing, managing and operating it. Those were the good times.

But those days are over; Oracle, SAP and others realize it. They're too smart to fight a lost cause by trying to stop this change (though of course they wouldn't mind if it slowed down). They understand that they will have to host and operate their software themselves. Right now, their datacenter operations and software stacks aren't as optimized for that model as those of native Cloud providers (at least the good ones), but they'll get there. They have the smarts and the resources. They don't have to re-invent it either; the skills and techniques needed will spread through the industry. They'll also make some acquisitions to hurry things up. In the meantime, they're probably willing to swallow the cost of under-optimized operations in order to keep their most Cloud-eager customers from defecting.

That transition, from running on the customer's machines to running on the software vendor's machines, while inconvenient for the vendors, is not by itself a fundamental threat.

The scary part, for enterprise software vendors transitioning to the SaaS model, is whether the enterprise application integration model will also change in the process.

[note: I include “customization” in the larger umbrella of “integration”.]

Enterprise software integration is hard and risky. Once you’ve invested in integrating your enterprise applications with one another (and/or with your partners’ applications), that integration becomes the #1 reason why you don’t want to change your applications. Or even upgrade them. That’s because the integration is an extension of the application being integrated. You can’t change the app and keep the integration. SOA didn’t change that. Both from a protocol perspective (the joys of SOAP and WS-* interoperability) and from a model perspective, SOA-type integration projects are still tightly bound to the applications they were designed for.

For all intents and purposes, the integration is subservient to the applications.

The opportunity (or risk, depending on which side you're on) is if that model flips over as part of the move to Cloud Computing. If the integration becomes central and the applications become peripheral.

The integration is the application.

Which is really just pushing “the network is the computer” up the stack.

Just like cell phone operators don’t want to be a “dumb pipe”, enterprise software vendors don’t want to become a “dumb endpoint”. They want to own the glue.

That's why Salesforce built Force.com, acquired Heroku and is now all over "enterprise social networking" (there's no such thing as "enterprise social networking", by the way; it's just better groupware, integrated with enterprise applications). That's why Workday's only acquisition to date, as early as 2008, was an enterprise integration vendor (CapeClear). And why Oracle, in building its public Cloud, is making sure to not just offer its applications as SaaS but also build a portfolio of integration-friendly services ("the technology services are for IT staff and software developers to build and extend business applications").

How could this be flipped over, freeing the integration from being in the shadow of the application (and running on infrastructure provided by the application vendor)? Architecturally, this looks like a Web integration use case, as Erik Wilde posited. But that’s hard to imagine in practice, when you think of the massive amounts of data and the ever-growing requirements to extract value from it, quickly and at scale. Even if application vendors exposed good HTTP/REST APIs for their data, you don’t want to run Big Data analysis over these remote APIs.
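A rough back-of-the-envelope calculation illustrates why. All the numbers below are illustrative assumptions, not measurements from any real API:

```python
# Rough estimate: pulling a large data set through a paginated REST API.
# All numbers are illustrative assumptions, not benchmarks.

records = 500_000_000   # rows to analyze
page_size = 1_000       # records returned per API call
latency_s = 0.05        # round-trip time per call (50 ms, assumed)

api_calls = records // page_size
hours_over_api = api_calls * latency_s / 3600
print(f"{api_calls:,} calls, ~{hours_over_api:.0f} hours of latency alone")
# Rate limits, serialization overhead and transfer costs only make it worse.
```

Even with generous paging and before counting bandwidth, the sequential latency alone runs into hours; local (or co-located) access to the data sidesteps the whole problem.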

So, how does the integration free itself from being controlled by the SaaS vendor, while retaining high-throughput and low-latency access to the data it requires? And how does the integration get high-throughput/low-latency access to data sets from different applications (by different vendors) at the same time? Well, that’s where the cliffhanger at the end of the previous blog post comes from.

Maybe the solution is to have SaaS applications run on a public Cloud. Or, in the terms used by the previous blog post, to run enterprise Solution Clouds (SaaS) on top of public Deployment Clouds (PaaS and IaaS). Two SaaS applications (by different vendors) which run on the same Deployment Cloud can then, via some level of coordination controlled by the customer, access each other's data or let an "integration" application access both of their data sets. The customer-controlled coordination (and by "customer" here I mean the enterprise which subscribes to the SaaS apps) is two-fold: ensuring that the two SaaS applications are deployed in close proximity (in the same "Cloud zone" or whatever proximity abstraction the Deployment Cloud provider exposes); and setting appropriate access permissions.
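To make the two-fold coordination concrete, here is a minimal sketch of what it could look like from the customer's side. Every name here (the zone abstraction, the `deploy` and `grant_access` calls) is invented for illustration; no real Deployment Cloud provider offers these exact APIs:

```python
# Hypothetical sketch of customer-controlled coordination between SaaS
# apps on a shared Deployment Cloud. All APIs are invented for
# illustration purposes.

class DeploymentCloud:
    def __init__(self):
        self.placements = {}   # app -> proximity zone
        self.grants = set()    # (reader, owner) pairs

    def deploy(self, app, zone):
        # Step 1: the customer pins apps to the same proximity zone.
        self.placements[app] = zone

    def grant_access(self, reader, owner):
        # Step 2: the customer authorizes cross-application data access.
        self.grants.add((reader, owner))

    def can_read(self, reader, owner):
        # Access requires both co-location and an explicit grant.
        same_zone = self.placements.get(reader) == self.placements.get(owner)
        return same_zone and (reader, owner) in self.grants

cloud = DeploymentCloud()
cloud.deploy("crm_app", zone="zone-1")          # SaaS app from vendor A
cloud.deploy("hr_app", zone="zone-1")           # SaaS app from vendor B
cloud.deploy("integration_app", zone="zone-1")  # the customer's glue
cloud.grant_access("integration_app", "crm_app")
cloud.grant_access("integration_app", "hr_app")
print(cloud.can_read("integration_app", "crm_app"))  # True
print(cloud.can_read("crm_app", "hr_app"))           # False: no grant
```

The point of the sketch is the division of control: the vendors still operate their applications, but placement and cross-access are decisions the customer makes against the Deployment Cloud, not against either vendor.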

Enterprise application vendors will fight to maintain control of application integration. They will have reasons, some of them valid, supporting the case that it’s better if you use them to run your integration. But the most fundamental of these reasons, locality of the data, doesn’t have to be a blocker. That can be bridged by running on a shared public Deployment Cloud.

Remember, the integration is the application.


Filed under Business, Business Process, Cloud Computing, Enterprise Applications, Everything, Integration, Utility computing

8 Responses to The enterprise Cloud battle will be an integration battle

  1. Pingback: » Will Clouds run on Clouds? Cloud Comedy, Cloud Tragedy

  2. William, good post. I like your analysis of the challenge facing enterprise software vendors and the inevitable problem for both vendor and customer as more applications are delivered as SaaS. The vendor wants to own the glue to retain some degree of lock-in and control, and the enterprise (while they want the opposite in terms of lock-in) needs to have their applications integrated and often cannot achieve that between geographically distributed systems because of latency, security, cost of moving data, and other issues. The remedy you propose, however, while technically sound, can for the foreseeable future only make sense for small to mid-sized companies. Enterprise companies, particularly highly regulated organizations, are constrained from running many systems on shared public infrastructure. Even if they overcome the technical and psychological challenges of shifting to this model, the regulatory bodies will lag far behind them. Therefore, alternate strategies for integration will be needed for quite some time.

    • Ravi Pinto

      Moreover, the question of access controls for the data is not an easy one to solve, IMHO. App A may not (rather, will not) want to share all of its data with App B. Furthermore, the data shared may differ for different users within App B (according to their role).
      Apart from this, how much leverage will the customer really have with the cloud vendor to ensure that their apps are located in close proximity? What if the SaaS vendor has its own cloud?
      I don’t think there are easy answers here. BTW, I don’t think we know all the questions in the first place!
      But, good to see relevant new posts from you, William!

  3. Pingback: Distributed Weekly 176 — Scott Banwart's Blog

  4. Pingback: Integration is gonna be a problem for cloud | Coté's Drunk & Retired

  5. Hi William,

    This article really resonated with me. I'm currently working on a platform that provides webhook delivery as a service, and wrote up an article describing a new approach to webhooks we have come up with for integration scenarios, called "templated webhooks". Here's the article:

    Love to hear your thoughts



  6. Pingback: The Next Leap in the Evolution of Enterprise Software | Hubba

  7. Stu


    First off, congratulations on your move.

    I agree completely; integration-as-the-application is something I've been following since 2008 when I jumped into cloud, figuring it would start to bubble as a problem around … 2012-ish. And it's been a concern even further back really, dating back to Alan Kay's email almost 15 years ago, i.e.

    I think a new iteration of the Web architecture is how we deal with this challenge, but a ton of work remains, as we are only now beginning to ask the right questions, let alone having the answers.

    One specific, you say: “Even if application vendors exposed good HTTP/REST APIs for their data, you don’t want to run Big Data analysis over these remote APIs.”

    BitTorrent perhaps shows us the way. Web/REST is the control plane to the Torrent architecture, where the high-throughput multi-party transfer occurs. It can work the same way for Big Data analysis.