Category Archives: SCA

Generalizing the Cloud vs. SOA Governance debate

There have been some interesting discussions recently about the relationship between Cloud management and SOA management/governance (run-time and design-time). My only regret is that they are a bit too focused on determining winners and losers rather than defining what victory looks like (a bit like arguing whether the smartphone is the triumph of the phone over the computer or of the computer over the phone instead of discussing what makes a good smartphone).

To define victory, we need to answer this seemingly simple question: in what ways is the relationship between a VM and its hypervisor different from the relationship between two communicating applications?

More generally, there are three broad categories of relationships between the “active” elements of an IT system (by “active” I am excluding configuration, organization, management and security artifacts, like patch, department, ticket and user, respectively, to concentrate instead on the elements that are on the invocation path at runtime). We need to understand if/how/why these categories differ in how we manage them:

  • Deployment relationships: a machine (or VM) in a physical host (or hypervisor), a JEE application in an application server, a business process in a process engine, etc…
  • Infrastructure dependency relationships (other than containment): from an application to the DB that persists its data, from an application tier to the web server that fronts it, from a batch job to the scheduler that launches it, etc…
  • Application dependency relationships: from an application to a web service it invokes, from a mash-up to an Atom feed it pulls, from a portal to a remote portlet, etc…

In the old days, the lines between these categories seemed pretty clear and we rarely even thought of them in the same terms. They were created and managed in different ways, by different people, at different times. Some were established as part of a process, others in a more ad-hoc way. Some took place by walking around with a CD, others via a console, others via a centralized repository. Some of these relationships were inventoried in spreadsheets, others on white boards, some in CMDBs, others just in code and in someone’s head. Some involved senior IT staff, others were up to developers and others were left to whoever was manning the controls when stuff broke.

It was a bit like the relationships you have with the taxi that takes you to the airport, the TSA agent who scans you and the pilot who flies you to your destination. You know they are all involved in your travel, but they are very distinct in how you experience and approach them.

It all changes with the Cloud (used as a short hand for virtualization, management automation, on-demand provisioning, 3rd-party hosting, metered usage, etc…). The advent of the hypervisor is the most obvious source of change: relationships that were mostly static become dynamic; also, where you used to manage just the parts (the host and the OS, often even mixed as one), you now manage not just the parts but the relationship between them (the deployment of a VM in a hypervisor). But it’s not just hypervisors. It’s frameworks, APIs, models, protocols, tools. Put them all together and you realize that:

  • the IT resources involved in all three categories of relationships can all be thought of as services being consumed (an “X86+ethernet emulation” service exposed by the hypervisor, a “JEE-compatible platform” service exposed by the application server, an “RDB service” exposed by the database, a Web service exposed via SOAP or XML/JSON over HTTP, etc…),
  • they can also be set up as services, by simply sending a request to the API of the service provider,
  • not only can they be set up as services, they are also invoked as such, via well-documented (and often standard) interfaces,
  • they can also all be managed in a similar service-centric way, via performance metrics, SLAs, policies, etc,
  • your orchestration code may have to deal with all three categories (e.g. an application slowdown might be addressed either by modifying its application dependencies, reconfiguring its infrastructure or initiating a new deployment),
  • the relationships in all these categories now have the potential to cross organization boundaries and involve external providers, possibly with usage-based billing,
  • as a result of all this, your IT automation system really needs a simple, consistent, standard way to handle all these relationships. Automation works best when you’ve simplified and standardized the environment to which it is applied (an illustrative sketch follows this list).
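
Purely for illustration (made-up element names, not from any spec), such a uniform view could treat all three categories as records of the same shape:

<relationships xmlns="urn:example:unified-relationships">
  <!-- deployment relationship: a VM consuming the hypervisor's emulation service -->
  <relationship category="deployment" consumer="urn:ex:vm/order-vm1"
                provider="urn:ex:hypervisor/cluster2" service="x86-emulation"/>
  <!-- infrastructure dependency: the application consuming the database's RDB service -->
  <relationship category="infrastructure" consumer="urn:ex:app/order-app"
                provider="urn:ex:db/orders-db" service="rdb"/>
  <!-- application dependency: the application invoking an external payment service -->
  <relationship category="application" consumer="urn:ex:app/order-app"
                provider="https://pay.example.com/svc" service="payment"/>
</relationships>

Whatever the actual format, the point is that one automation system could create, inspect and modify all three kinds of records through the same interface.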

If you’re a SOA person, your mental model for this is SOA++ and you pull out your SOA management and governance (config and runtime) tools. If you are of the WS-* persuasion within SOA, you go back to WS-Management, try to see what it would take to slap a WSDL on a hypervisor and start dreaming of OVF over MTOM/XOP. If you’re into middleware modeling you might start to have visions of SCA models that extend all the way down to the hardware, or at least of getting SCA and OSGi to ally and conquer the world. If you’re a CMDB person, you may tell yourself that now is the time for the CMDB to do what you’ve been pretending it was doing all along and actually extend all the way into the application. Then you may have that “single source of truth” on which the automation code can reliably work. Or if you see the world through the “Cloud API” goggles, then this “consistent and standard” way to manage relationships at all three layers looks like what your Cloud API of choice will eventually do, as it grows from IaaS to PaaS and SaaS.

Your background may shape your reference model for this unified service-centric approach to IT management, but the bottom line is that we’d all like a nice, clear conceptual model to bridge and unify Cloud (provisioning and containment), application configuration and SOA relationships. A model in which we have services/containers with well-defined operational contracts (and on-demand provisioning interfaces). Consumers/components with well-defined requirements. APIs to connect the two, with predictable results (both in functional and non-functional terms). Policies and SLAs to fine-tune the quality of service. A management framework that monitors these policies and SLAs. A common security infrastructure that gets out of the way. A metering/billing framework that spans all these interactions. All this while keeping out of sight all the resource-specific work needed behind the scenes, so that the automation code can look as Zen as a Japanese garden.

It doesn’t mean that there won’t be separations, roles, processes. We may still want to partition the IT management tasks, but we should first have a chance to rejigger what’s in each category. It might, for example, make sense to handle provider relationships in a consistent way whether they are “deployment relationships” (e.g. EC2 or your private IaaS Cloud) or “application dependency relationships” (e.g. SOA, internal or external). On the other hand, some of the relationships currently lumped in the “infrastructure dependency relationships” category because they are “config files stuff” may find different homes depending on whether they remain low-level and resource-specific or are absorbed in a higher-level platform contract. Any fracture in the management of this overall IT infrastructure should be voluntary, based on legal, financial or human requirements. And not based on protocol, model, security and tool disconnects, on legacy approaches, or on myopic metering that we later rationalize as “the way we’d want things to be anyway because that’s what we are used to”.

In the application configuration management universe, there is a planetary collision scheduled between the hypervisor-centric view of the world (where virtual disk formats wrap themselves in OVF, then something like OVA to address, at least at launch time, application and infrastructure dependency relationships) and the application-model view of the world (SOA, SCA, Microsoft Oslo at least as it was initially defined, various application frameworks…). Microsoft Azure will have an answer, VMware/SpringSource will have one, Oracle will too (though I can’t talk about it), Amazon might (especially as it keeps adding to its PaaS portfolio) or it might let its ecosystem sort it out, IBM probably has Rational, WebSphere and Tivoli distinguished engineers locked into a room, discussing and over-engineering it at this very minute, etc.

There is a lot at stake, and it would be nice if this was driven (industry-wide or at least within each of the contenders) by a clear understanding of what we are aiming for rather than a race to cobble together partial solutions based on existing control points and products (e.g. the hypervisor-centric party).

[UPDATED 2010/1/25: For an illustration of my statement that “if you’re a SOA person, your mental model for this is SOA++”, see Joe McKendrick’s “SOA’s Seven Greatest Mysteries Unveiled” (bullet #6: “When you get right down to it, cloud is the acquisition or provisioning of reusable services that cross enterprise walls. (…)  They are service oriented architecture, and they rely on SOA-based principles to function.”)]

6 Comments

Filed under Application Mgmt, Automation, Cloud Computing, CMDB, Everything, Governance, IT Systems Mgmt, ITIL, Mgmt integration, Middleware, Modeling, OSGi, SCA, Utility computing, Virtualization, WS-Management

A small step for SCA, a giant leap for BSM

In a very short post, Khanderao Kand describes how configuration properties for BPEL processes in Oracle SOA Suite 11G are attached to SCA components. Here is the example he provides:

<component name="myBPELServiceComponent">
  ...
  <property name="bpel.config.inMemoryOptimization">true</property>
</component>

It doesn’t look like much. But it’s a major step for application-driven IT management (and eventually BSM).

Take a SCA component. Follow the SCA-defined component-to-composite and service-to-reference relationships upwards and eventually you’ll get to top level application services that have a decent chance of mapping well to business-relevant activities (e.g. order processing). Which means that the metrics of these services (e.g. availability, response time) are likely to be meaningful and important to the line of business. Follow the same SCA relationships downward and you’ll end up (in a SCA-based infrastructure like Oracle SOA Suite 11G) with target components that are meaningful to the IT administrator. Which means that their metrics and configuration settings (like “inMemoryOptimization”) are tracked and controlled by IT. You now have a direct string of connections between this configuration setting and a business-relevant metric. You can navigate the connection in both directions: downward/reactive (“my service just went down, what changed in the infrastructure”) versus upward/proactive (“my service is always slow, what can I do to optimize the execution”).
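
Here is a hypothetical sketch, in SCA 1.0 assembly syntax with invented names, of how short that string of connections can be:

<composite name="OrderHandling"
           xmlns="http://www.osoa.org/xmlns/sca/1.0"
           xmlns:ns="http://example.com/orders">
  <!-- top-level service: the business-visible entry point -->
  <service name="OrderProcessing" promote="myBPELServiceComponent"/>
  <!-- the promoted component, with the IT-level setting attached to it -->
  <component name="myBPELServiceComponent">
    <implementation.bpel process="ns:OrderProcessing"/>
    <property name="bpel.config.inMemoryOptimization">true</property>
  </component>
</composite>

The availability of “OrderProcessing” (the business view) and the “inMemoryOptimization” flag (the IT view) are now two nodes in the same model, a couple of edges apart.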

Of course these examples are over-simplistic (and the title of this post is a bit too lyrical, on account of this). Following these SCA relationships in brute-force fashion will yield tens of thousands of low-level configuration settings for any top-level service, with widely differing importance and impact (not to mention that they interact). You need rules to make sense of this. Plus, configuration-based models are a complement to runtime transaction discovery, not a replacement (unless your model of the application includes every single line of code). But it’s not that often that you can see a missing link snap into place that clearly.

What this shows is the emergence of a common set of entities between the developer’s model and the IT admin model. And if the application was developed correctly, some of the entities in the developer’s model correspond to entities in the mental model of the application user and the line of business manager. SCA is the skeleton for this. Attaching configuration to SCA components puts muscle on the bone.

The road to BSM is paved with small improvements in the semantic alignment between IT infrastructure and application services. A couple of years ago, I tried to explain why SCA is very relevant for IT management. Now we can see it.

4 Comments

Filed under Application Mgmt, BPEL, BSM, Business, Business Process, Everything, IT Systems Mgmt, Mgmt integration, Middleware, Modeling, Oracle, SCA, Standards

Oslo, blog posts and my crystal ball

There is more and more information coming out about Oslo in anticipation of the Microsoft PDC in October.

David Chappell recorded a video about it last month. More recently Doug Purdy and Don Box each posted a short description of Oslo. Don describes the goal of Oslo as “simplify the process of developing, deploying, and managing software”. But when he lists ancestor technologies to illustrate that “Microsoft has been moving in this direction for over a decade now”, they are all about development, not management: COM type libraries, .NET metadata attributes, XAML. Interesting that neither SDM nor SML gets a mention. Neither did SCA by the way, but I wasn’t really expecting that one… :-)

Maybe I am the only one looking for an SDM/SML echo here, just because I came to hear of Oslo through the DSI angle. Am I wrong to see Oslo as an enabler for DSI? This eWeek article doesn’t have anything to do with IT management. Reading it, Oslo is all about allowing people to write code through drag and drop. Yawn. And Don Box endorses the article.

Maybe it’s just me (an IT management guy more than a software development guy) but I don’t care so much about how the application model is created. I care a lot more about what it allows you to do in terms of IT management. Please don’t make me pull out the often-quoted figure about the percentage of IT budget spent on operations versus development/licensing. The eWeek piece fails to excite me, but fortunately David Chappell’s video interview is a lot more aligned with my thinking, so I still hold hopes for Oslo as an IT management enabler. Here is my approximate transcript of an example that David provides (at around 4:20) in the video:

“If someone comes to you and says I’ve got this business process and the SLA is not being met, what do you do? You’ve got to trace this through the right business process and the right application that supports that part of the process and find the machine it runs on and maybe look at the workflow that implements it and maybe look at the services that it provides. This involves talking to business analysts, or the IT pros or the architect or the developer, all of whom have their own view of the world, their own tools, their own perspective. The repository provides a common place to store all this stuff, to link it all together, and with a visual editor to have a common tool that lets you actually go through and answer these kinds of questions.”

Now you’re talking.

And if Oslo is not the new blood of DSI, then what is? The DSI story is getting dated, SML is fading in our memories and of the three parts that supposedly compose DSI (“virtualized infrastructure, design for operations, and knowledge-driven management”), only virtualization is actually represented on the list of technologies on the DSI home page. Has DSI turned into just allowing System Center to manage a hypervisor? I still hold hopes that the Oslo data is going to spice things up there. It would be good for the industry at large, not just Microsoft.

I won’t be at the PDC but it will be interesting to see what filters out of these sessions. The first session in the list adds management of hybrid application systems (hybrid as in “cloud/on-premise combination” or “software+services” as Microsoft calls it), to the long “can do” list for Oslo. Impressive, if there is some meat behind the abstract. I think this task is often overlooked in discussions around management aspects of Cloud computing (see “the new, interesting thing is going to be the IT infrastructure to manage your usage of utility computing services as well as their interactions with your in-house software” in this previous entry).

Yes, I am reading way too much into session abstracts, but while I am at it I can’t help noticing that there is a lot of SQL and very little XML/XSD/XPath mentioned there. Even though one of the presenters is Gudge, the only person I have ever met who fully understands XSD (actually even he doesn’t, I’ve seen him in the WS-I days have to refer to… his book).

Even though I am sure we’ll be told that SML can be built on top of Oslo, the SQL orientation won’t make that so easy (I want to see how to build XSD+Schematron validation on top of a relational store using Oslo’s drag and drop development tool). And it puts Microsoft on a different architectural direction from IBM, who, as far as I can tell, thinks that the world is a big XML document. Neither is the most appropriate for IT management models. I prefer a graph model and associated graph queries along the lines of SPARQL or CMDBf.
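
To give a taste of the difference, here is a rough, from-memory sketch in the spirit of a CMDBf graph query (element names and structure are approximate, not authoritative): it selects every database that a given application depends on.

<query>
  <!-- one end of the graph pattern: the application items -->
  <itemTemplate id="app">
    <recordConstraint>
      <recordType namespace="urn:ex:model" localName="Application"/>
    </recordConstraint>
  </itemTemplate>
  <!-- the items we want returned: databases -->
  <itemTemplate id="db">
    <recordConstraint>
      <recordType namespace="urn:ex:model" localName="Database"/>
    </recordConstraint>
  </itemTemplate>
  <!-- the edge connecting them: a dependency relationship -->
  <relationshipTemplate id="dep">
    <recordConstraint>
      <recordType namespace="urn:ex:model" localName="DependsOn"/>
    </recordConstraint>
    <sourceTemplate ref="app"/>
    <targetTemplate ref="db"/>
  </relationshipTemplate>
</query>

The query is the graph pattern itself, which is a much more natural fit for IT models than either a relational join or an XML tree walk.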

But that’s just late-night idle speculations on my part (aka “blogging”). Let’s see what comes out in October.

[UPDATED 2008/9/10: Interesting timing. Microsoft is joining OMG, home of UML and BPMN. Coming next: a submission of a “new version” of UML and BPMN that happens to contain the extensions and tweaks that Microsoft made to them in the process of implementing Oslo. This, BTW, is the final nail in the SML coffin (SML isn’t even mentioned in the press release).]

3 Comments

Filed under Application Mgmt, CMDBf, Conference, Desired State, Everything, Graph query, IT Systems Mgmt, Mgmt integration, Microsoft, Middleware, Modeling, Oslo, Query, SaaS, SCA, SML, SPARQL, Specs, Tech, Trade show, Utility computing, Virtualization

More clues on the Oslo/SCA/SML trail: it’s “D”

I just found out that I completely missed some interesting information about Oslo-related efforts at Microsoft. Back in February, Mary-Jo Foley reported on a new modeling language (code-name “D”, apparently) that is part of this initiative. And more recently she reported that David Chappell gave a presentation about Oslo (and more generally Microsoft’s SOA plans) at TechEd. He reportedly said that we should expect a new “schema language” (which Mary-Jo thinks is “D”). What I want to know is what its relationship is with SML/SDM and SCA.

Mary-Jo might not know about SCA and SML but I know that David does. He wrote this white paper about SCA and an article arguing that “Microsoft Should Not Support SCA” (based on a questionable assessment that SCA is only about portability). He and I also had a little back-and-forth about SCA, SML and Microsoft in the comments section of his post. Unfortunately, David hasn’t blogged about Microsoft’s SOA strategy for a while for us non-TechEd people.

In addition to Mary-Jo’s report, the only information I was able to quickly dig out about David’s presentation is this blog post on Microsoft’s Israel site. Looks like David gave the same presentation at TechEd Israel 2008. Would anyone who understands Hebrew care to translate the post? Fortunately there is a two-minute video (also available here) in which we can hear David talk (in English). During the second of the two minutes you’ll hear and see something that could come straight out of a SCA presentation…

For some reason, David’s TechEd Israel presentation doesn’t seem to be listed here and TechEd online tells me that “Featured videos are unavailable at this time”. That’s both for IT Professionals and Developers. But of course they forced me to install Silverlight before telling me that.

[UPDATED 2008/8/11: Here is a 14 minutes video interview of David Chappell providing an update on Oslo.]

3 Comments

Filed under Application Mgmt, Automation, Conference, Desired State, Everything, IT Systems Mgmt, Mgmt integration, Microsoft, Modeling, Oslo, SCA, SML, Tech

WS-ManagementHammer: don’t do it but if you are going to do it anyway then…

With the IBM/Microsoft/Intel/HP WSDM/WS-Management convergence now implicitly (if not yet officially) dead, it will be interesting to see what IBM is going to do with WSRF. WSRF is being used today, rarely explicitly but rather in an embedded fashion. People who use WSDM use it, people who use CDDLM use it, people who use the Globus Toolkit use it, etc. IBM could write off the convergence work (WS-ResourceTransfer, which was published as a draft, and WS-ResourceEnumeration and WS-EventNotification which were never published) and stick to using the existing WSRF specifications when they need the corresponding functionality. That’s what I hope they do.

Alternatively, they could decide to get the forceps out of the drawer. They can create a new, IBM-friendly (e.g. Fujitsu, CA, Cisco…) private consortium to take over the unfinished drafts (if the IBM/Microsoft/Intel/HP legal agreement allows this) or start new ones. Or they could go directly to W3C, OASIS or OGF and push for a new working group to do the work in the open (and since no-one else would really care about this work IBM should have relatively free hands there, the way Microsoft did in DMTF when IBM chose to boycott WS-Management). Why W3C would care and why OASIS or OGF would want to start committees to obsolete their existing work is a separate question.

While I hope that IBM doesn’t try to push another pile of WS-* resource management specifications on an industry that already has too many, if they do I hope that at least they’ll do it right. And that means doing away with the approach embedded in WS-ResourceTransfer. Having personally been involved in many iterations on this problem, I hope to have some insight to contribute.

Along the lines of the age-old parental advice “don’t do it but if you are going to do it then use a condom”, here is my advice to anyone thinking of doing another iteration on the WSRF question: don’t do it but if you are going to do it then be specific about what problem you are addressing.

First, let’s separate three scenarios.

Database query

WS-ResourceTransfer should not be seen as a way to query an XML database. Use XQuery for this.

REST

While architecturally it should be possible to build RESTful applications on top of WS-Transfer‘s operations, this is simply not what is happening. WS-Transfer is being used either by CIM people (who get to it via WS-Management) or by big-SOA people (who get it as part of the whole WS-* stack) and neither of them is doing anything remotely RESTful. So just leave that aside and don’t see WS-ResourceTransfer as a way to do “fine-grained REST”. No REST user is losing sleep over WS-ResourceTransfer being in limbo.

A flexible way to interact with a complex system

This is the use case that you should focus on. You have a system made up of many parts (e.g. a composite application or a server that is made of many components) that you can represent as an XML document. The XML representation contains some important information about the system, but it isn’t the system. There are identified resources within the system that have lifecycles, management capabilities and internal parameters. Not everything relevant is captured in the XML model. This is why it is different from an XML database.
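
As a made-up running example (invented format and namespace), such a description might look like this. It carries real information about the server, but it is not the server:

<comp:ComputerSystem xmlns:comp="urn:example:computer-model">
  <comp:CPU id="cpu0">
    <comp:speed>2.6GHz</comp:speed>
  </comp:CPU>
  <comp:assetRecord>
    <comp:owner>Finance</comp:owner>
    <comp:leaseExpirationDate>2009-06-30</comp:leaseExpirationDate>
  </comp:assetRecord>
</comp:ComputerSystem>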

In general, I don’t think that XML is the best way to represent complex IT systems. It has plenty of complications that are not relevant to IT management and it doesn’t elegantly support the representation of graphs, often the most natural way to represent such a system (more on this here). CMDBf, with its graph-oriented approach, is a better choice in general. But there are plenty of areas (especially smaller, well-defined, sub-systems) in which XML formats have been defined to represent systems. SCA and SML for example.

In the case where you are dealing with such an XML-described system, there is value in standard ways to simplify interactions with the system and its parts. But here too, we need to distinguish different patterns rather than trying to handle them all in the same way.

Filtering/sequencing of returned data

Complex IT systems can generate a lot of configuration and/or monitoring data and often you only care for a small subset. For example, an asset record has dozens of elements (lease terms, owner, assigned user…) but you may only care to retrieve the date the lease expires. When you do a GET on the record, you want to qualify it by specifying that only that date needs to be returned. That’s what WS-RP, WS-RT and the WS-Management wsman:FragmentTransfer header allow. In a variation of this, you want all the data but you don’t want it in one go, you want to pull it piece by piece. That’s what WS-Enumeration gives you. The problem with all these specifications is that they only offer that feature when you are retrieving the resource representation (a WS-Transfer GET or equivalent), not for other operations. But how is this different from invoking an AirlineBooking operation and saying that you only want to be sent the confirmation code, not the full itinerary, equipment type, assigned seat, etc? Bundling this inside WS-RT (or equivalent) is not helpful. A generic SOAP header that can go on any message would be more appropriate (the definition of this header would need to pay special attention to security considerations, especially if the response is signed, because it could be abused to trick the server into sending, and signing, specifically-crafted messages).
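
On the invented example above, a fragment-level GET in WS-Management looks roughly like this (routing headers elided; the comp prefix is from the made-up model, not a real spec):

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:wsman="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
  <s:Header>
    <wsa:Action>http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</wsa:Action>
    <!-- To, ResourceURI, MessageID and other routing headers elided -->
    <!-- only return the lease expiration date, not the whole record -->
    <wsman:FragmentTransfer s:mustUnderstand="true">
      /comp:ComputerSystem/comp:assetRecord/comp:leaseExpirationDate
    </wsman:FragmentTransfer>
  </s:Header>
  <s:Body/>
</s:Envelope>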

Interacting with a sub-element of the system

If you have a handle to a computer system resource and you know that it has one CPU and that this CPU is represented by the /comp:CPU element of the system, why would you need to use some out-of-band discovery mechanism to interact with that CPU? It’s right there, you can see it, you can point to it. Surely there must be a way to address operations to it directly, right? WS-Management tries to do it with its wsman:Selector mechanism, but the selectors are not tied to the model and require, effectively, a separate out-of-band agreement for addressing. There shouldn’t be a need for such an additional agreement once an agreement has already been reached on the model.

What is needed is a way, for systems that have a known XML model, to address messages to a subpart by using the model itself to support that addressing. Call it SOAPy mashup if you want to feel like you are part of the cool kids. I described such a mechanism a while ago. In effect, it is an improvement on wsman:Selector that an eventual new iteration of WSRF should at least consider.
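
On the same example system, compare the two styles. The first header is WS-Management’s actual selector mechanism, which relies on an out-of-band agreement about which selector names exist; the second is an invented illustration of model-based addressing (not the author’s published proposal):

<!-- out-of-band: someone had to document that a "CPUID" selector is supported -->
<wsman:SelectorSet xmlns:wsman="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
  <wsman:Selector Name="CPUID">cpu0</wsman:Selector>
</wsman:SelectorSet>

<!-- hypothetical alternative: the model itself provides the address -->
<ex:ModelPath xmlns:ex="urn:example:model-addressing"
              xmlns:comp="urn:example:computer-model">
  /comp:ComputerSystem/comp:CPU[@id='cpu0']
</ex:ModelPath>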

In some cases, namely when the operation is a WS-Transfer GET, this capability overlaps with the “filtering of returned data” capability. One way to look at it is that you are doing a GET at the level of the overall computer system and filtering the results down to the part that represents the CPU. Another way to look at it is that you are pinpointing the message to a subset of the model (the CPU part) and doing an unmodified GET on it. It doesn’t matter how you choose to think about it. In my proposal, these two ways produce the same message. Like the wave view and particle view of a photon, they describe, in the end, the same entity, with each being the best representation for a given set of situations.

The problem with WS-RT and its predecessors is that it doesn’t recognize that this is just the intersection of two orthogonal concerns (filtering of output versus addressing of sub-elements) and only handles that intersection.

Interacting with a set of resources as a set

The same kind of expression (typically XPath) that lets you point at a sub-element inside of a system also lets you point at a set of such sub-elements. But even though from an XPath perspective there isn’t much of a difference (the first one just happens to return a nodeset that contains only one node), from an architectural perspective it is a very different use case. If you want to support such a use case then you have to handle it as such and define all the associated semantics (sequential/parallel execution, fault handling, partial completion, resource-specific permissions…). You can’t just cross your fingers and assume that you get such features “for free” just because XPath can return a nodeset.

I know that this post illustrates a way of giving free advice that virtually ensures that it gets ignored. Similar (if you’ll allow the big stretch) to the way Chirac and Villepin were arguing against an Iraq invasion in ways that probably reinforced the Bush administration’s determination to do it. When will the world finally learn to appreciate the oh-so-slightly obnoxious undertone that is inherently French (because, let me tell you, we’re not about to lose it)? At least, when my grandchildren ask me “where were you when IBM invented WS-ManagementHammer?” I can point to this post and say “I tried to stop it, I tried”.

[UPDATED 2008/5/15: How timely! Just after publishing this I find, via Coté, what looks like another example of French abrasiveness in the systems management world: the attitude, name and the way Jeff ends with a French-language quote make it quite likely that the “Jacques” person discounting the fact that his company’s SNMP agent is broken is indeed a compatriot. French obnoxiousness aside, and despite my respect for standards, my advice to Jeff is that if a given SNMP agent works with HP, IBM, BMC and CA you will probably save yourself time in the long run by finding a way to support it (even if it is not spec-compliant) rather than getting the vendor to change. There are lots of sites out there that work fine with Firefox and IE but are not compliant with Web standards. Good luck getting them all fixed.]

[UPDATED 2008/7/14: I don’t really plan to turn this post into a ongoing set of updates about “French attitude” but since today is Bastille Day I’ll point to this map of the world as seen from Paris. If I wasn’t on strike right now, I’d explain why the commenter is wrong to assert that “French self-deprecating humour” is rare.]

4 Comments

Filed under Everything, HP, IBM, IT Systems Mgmt, Mgmt integration, Microsoft, SCA, SML, SOAP, SOAP header, Specs, Standards, WS-Management, WS-ResourceTransfer, WS-Transfer, XMLFrag, XPath

SCA, OSGi and Spring from an IT management perspective

March starts next week and the middleware blogging bees are busy collecting OSGi-nectar, Spring-nectar, SCA-nectar, bringing it all back to the hive and seeing what kind of honey they can make from it.

Like James Governor, I had to train myself to stop associating OSGi with OGSI (which was the framework created by GGF, now OGF, to implement OGSA, and was – not very successfully – replaced with OASIS’s WSRF, want more acronyms?). Having established that OSGi does not relate to OGSI, how does it relate to SCA and Spring? What with the Spring-OSGi integration and this call to integrate OSGi and SCA (something Paremus says they already do)? The third leg of the triangle (SCA-Spring integration) is included in the base SCA framework. Call this a disclosure or a plug as you prefer, I’ll note that many of my Oracle colleagues on the middleware side of the house are instrumental in these efforts (Hal, Greg, Khanderao, Dave…).

There is also a white paper (getting a little dated but still very much worth reading) that describes the potential integrations in this triangle in very clear and concrete terms (a rare achievement for this kind of exercise). It ends with “simplicity, flexibility, manageability, testability, reusability. A key combination for enterprise developers”. I am happy to grant the “flexibility” (thanks OSGi), “testability” (thanks Spring) and “reusability” (thanks SCA) claims. Not so for simplicity at this point unless you are one of the handful of people involved in all three efforts. As for the “manageability”, let’s call it “manageability potential” and remain friends.

That last part, manageability, is of course what interests me the most in this area. I mentioned this before in the context of SCA alone but the conjunction of SCA with Spring and/or OSGi only increases the potential. What happened with BPEL adoption provides a good illustration of this:

There are lots of JEE management tools and technologies out there, with different levels of impact on application performance (ideally low enough that they are suitable for production systems). The extent to which enterprise Java has been instrumented, probed and analyzed is unprecedented. These tools are often focused more on the performance than on the configuration/dependency aspects of the application, partly because that’s easier to measure. And while they are very useful, they struggle with the task of relating what they measure to a business view of the application, especially in the case of composite applications with many shared components. Enter BPEL. Like SCA, BPEL wasn’t designed for manageability. It was meant for increased productivity, portability and flexibility. It was designed to support the SOA vision of service re-use and to allow more tasks to be moved from Java coding to infrastructure configuration. All this it helps with indeed. But at the same time, it also provides very useful metadata for application management. Both in terms of highlighting the application flow (through activities) and in terms of clarifying the dependencies and associated policies (through partner links). This allowed a new breed of application management tools to emerge that hungrily consume BPEL process definitions and use them to better relate application management to the user-visible aspects of the application.
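
For example, the partner links section of a BPEL process (standard BPEL syntax, invented names) declares the process’s dependencies in a form a management tool can consume without running anything:

<partnerLinks>
  <!-- the client-facing side of the process -->
  <partnerLink name="orderClient" partnerLinkType="ns:OrderPLT"
               myRole="orderService"/>
  <!-- a dependency on an external credit-check service, visible as pure metadata -->
  <partnerLink name="creditCheck" partnerLinkType="ns:CreditPLT"
               partnerRole="creditProvider"/>
</partnerLinks>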

But the visibility provided by BPEL only goes so far, and soon the application management tools are back in bytecode instrumentation, heap analysis, transaction tracing, etc. Using a mix of standard mechanisms and “top secret”, “patent pending” tricks. In addition to all of their well-known benefits, SCA, OSGi and Spring also help fill that gap. They provide extra application metadata that can be used by application management tools to provide more application context to management tasks. A simple example is that SCA’s service/reference mechanism extends BPEL partner links to components not implemented with BPEL (and provides a more complete policy framework). Of course, all this metadata doesn’t just magically organize itself in an application management framework and there is a lot of work to harness its value (thus the “potential” qualifier I added to “manageability”). But SCA, OSGi and Spring can improve application management in ways similar to what BPEL does.
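
A sketch of that simple example (SCA 1.0 syntax, invented names): the same dependency information as a partner link, but extended to a Java component that BPEL alone could not describe:

<composite name="OrderProcessing"
           xmlns="http://www.osoa.org/xmlns/sca/1.0"
           xmlns:ns="http://example.com/orders">
  <component name="OrderProcess">
    <implementation.bpel process="ns:OrderProcess"/>
    <!-- the dependency is visible in the model, whatever the target is built with -->
    <reference name="creditCheck" target="CreditCheckComponent"/>
  </component>
  <component name="CreditCheckComponent">
    <implementation.java class="com.example.CreditCheckImpl"/>
  </component>
</composite>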

Here I am again, taking exciting middleware technologies and squeezing them to extract boring management value. But if you can, like me, get excited about these management aspects then you want to follow the efforts around the conjunction of these three technologies. I understand SCA, but I need to spend more time on OSGi and Spring. Maybe this post is my way of motivating myself to do it (I wish my mental processes were instrumented with better metadata so I could answer this question with more certainty – oh please shoot me now).

And while this is all exciting, part of me also wonders whether it’s not too early to risk connecting these specifications too tightly. I have seen too many “standards framework” kind of powerpoint slides that show how a bunch of under-development specifications would precisely work together to meet all the needs of the world. I may have even written one myself. If one thing is certain in that space, it’s that the failure rate is high and over-eager re-use and linkage between specifications kills. That was one of the errors of WSDM. For a contemporary version, look at this “Leveraging CMDBf” plan at Eclipse. I am very supportive of the effort to create an open-source implementation of the CMDBf specification, but mixing a bunch of other unproven and evolving specifications (in addition to CMDBf, I see WS-ResourceCatalog, SML and a “TBD” WS API which I can’t imagine will be anything other than WS-ResourceTransfer) is very risky. And of course IBM’s good old CBE. Was this HTML page auto-generated from an IBM “standards strategy” powerpoint document? But I digress…

Bonus question: what’s the best acronym to refer to OSGi+SCA+Spring. OSS? Taken (twice). SOS? Taken (and too desperate-sounding). SSO? Taken (twice). OS2? Taken. S2O? Available, as far as I can tell, but who wants a name so easily confused with the stinky and acid-rain causing sulfur dioxide (SO2)? Any suggestion? Did I hear J3EE in the back of the room?

10 Comments

Filed under Everything, IT Systems Mgmt, OSGi, SCA, Specs, Spring, Standards

The Oslo accords (presumably between composite application modeling and systems management)?

Microsoft introduced an umbrella project called Oslo at their SOA and Business Process conference this week. There is very little information available but it seems to have two main components: improving the ability of the Microsoft platform to support SOA-style distributed applications and improving the use of models to develop and manage applications. At first sight there isn’t anything new. The SOA talk is similar to any number of “why SOA” presentations available from dozens of companies. And the modeling aspect is the same story that Microsoft has been pitching with DSI for years. The real news is that the two stories are being linked (at least at the marketing level, which is a starting point) and that the application development people have taken over the application modeling baton from the System Center group.

Over the last few years, I worked with people from System Center on different standards related to DSI, including SML, which they see as the heart of the modeling effort. One of the things that kept me skeptical when hearing the DSI pitch was seeing the System Center team make announcements and promises about how SML would be central to the development experience in Visual Studio. I am pretty sure I know who’s the gorilla and who’s the chimp at Microsoft between Visual Studio / .Net Framework on the one hand and System Center on the other. The application model is too central to the developer experience for the Visual Studio group not to own it. It looks like it’s now happening and it’s a good thing.

The only content I could find on Oslo that’s not PR fluff is a report from Directions on Microsoft which mostly talks about incremental improvements to BizTalk. Towards the end, there is a small section about a “repository” that will “provide centralized storage of composite application components”. At that point I can’t help remembering the blog post from David Chappell about why it wouldn’t make sense for Microsoft to support SCA. Through comments in his post as well as a blog post of my own, I followed up with the assertion that the application component model also plays a very important role for management. And at the risk of sounding self-congratulatory, the Oslo announcement seems to vindicate that view. I see that David was a speaker at the Microsoft conference where Oslo was announced and he has very good insights into both the application development and the systems management efforts at Microsoft. So hopefully he’ll soon have a white paper or a blog entry out to share some insights.

If you’re wondering what this means for the technical work that has been going on under the DSI umbrella so far, you can only read the tea leaves. It could be that the application development people adopted the whole SML/CML technology stack as promoted by their System Center colleagues and are going to use it as is. Or on the other extreme, it could be a complete reset that leads to the creation of a component model that is much less general and much more application-centric. Of course, no matter which one happens (or something in the middle), it will be presented as a perfectly smooth and controlled evolution of the DSI vision (get ready for some nice spin at MMS2008). If you are adopting SML because you expect Microsoft to base its application component model on it, you might want to wait a bit until more details emerge about Oslo. For example, after calling XSD “a schema language that attempts to be a floor wax, dessert topping, and personal lubricant all at the same time” you have to wonder whether Don Box would advocate using SML (80% of which is XSD) as the most effective metamodel for an application component model…

Let’s end with this quote from the Directions on Microsoft report on Oslo, regarding application integration: “SAP and Oracle are better positioned in that regard, and so their customers will want to investigate these vendors’ composite application platforms alongside Microsoft’s”. Can’t disagree with that. A good place to start this investigation would be the upcoming Oracle Open World.

3 Comments

Filed under IT Systems Mgmt, Microsoft, Oslo, SCA, SML, Tech

SCA is not just for code portability

(updated on 2007/10/4, see bottom of the article)

David Chappell (not the same person as the Oracle-employed Dave Chappell from my previous post) has a blog entry explaining why there would be little value if Microsoft implemented SCA. The entry is reasonable but, like this follow-up by Stephan Tilkov, it focuses on clarifying the difference between portability (for which SCA helps) and interoperability (for which SCA doesn’t help very much). Seeing it from the IT management point of view, I see another advantage to SCA: it’s a machine-readable description of the logic of the composite application, at a useful level of granularity for application and service management. This is something I can use in my application infrastructure to better understand relationships and dependencies. It brings the concepts of the application world to a higher level of abstraction (than servlets, beans, rows etc), one in which I can more realistically automate tasks such as policy propagation, fail-over automation, impact analysis, etc.
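
As a hypothetical illustration (SCA 1.0 syntax, invented names), a declaration like this one hands the management infrastructure a fact it could never reliably extract from bytecode:

<component name="OrderClient" xmlns="http://www.osoa.org/xmlns/sca/1.0">
  <!-- the confidentiality requirement lives in the model, where management
       tools can read it and propagate it, instead of being buried in code -->
  <reference name="payment" requires="confidentiality">
    <binding.ws uri="https://pay.example.com/paymentService"/>
  </reference>
</component>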

As a result, even if this was an Oracle-only technology, I would still be encouraging Greg and others to build it in the Oracle app server so that I can better manage applications written on that stack. And I would still encourage the Oracle Fusion applications to take advantage of it, for the same reason.

In that perspective, going back to David Chappell’s question, would there be value if Microsoft implemented SCA? I think so. It would make it a lot easier for me, and all the management vendors, to efficiently manage composite applications that have components running on both Microsoft and Oracle, for example. I believe Microsoft will need a component model for composite applications and I am sure Don Box has his ideas on this (he’s not yet ready to share his opinion on Dave’s question as you can see). I know of the SML-based work that is being driven by the System Center guys at Microsoft and they see SML as playing that role across applications and infrastructure. I don’t know how much they’ve convinced Don and others that this is the right way.

From an IT management perspective, portability of code doesn’t buy me very much. Portability of my ability to introspect composite applications and consume their metadata independently of the stack they are built on, on the other hand, is of great value. Otherwise we’ll only be able to build automated and optimized application and service management software for the Oracle stack. Which, I guess, would not be a bad first step…

[UPDATED 2007/10/4: If this topic is of interest to you, you might want to go back to some of the links above to read the comments. David Chappell and I had a little back-and-forth in the comments section of his post, and likewise with Don Box in his. In addition, Hartmut Wilms at InfoQ provides his summary of the discussion.]

3 Comments

Filed under Everything, IT Systems Mgmt, Portability, SCA

Coming up: SCA 1.0

A look at the “specifications” page of the “Open SOA” web site (the site used by the companies that created the SCA and SDO specifications) reveals a long list of specs with a release date of tomorrow. It’s like stumbling on a publicly traded company’s quarterly results the day before they are announced… except without the profit potential.

There is no link at this point, so no luck accessing the specifications themselves (unless one feels lucky and wants to try guessing the URLs based on those used for previously posted documents…) but we now know what they are and that they are coming out tomorrow:

  • SCA Assembly Model V1.00
  • SCA Policy Framework V1.00
  • SCA Java Common Annotations and APIs V1.00
  • SCA Java Component Implementation V1.00
  • SCA Spring Component Implementation V1.00
  • SCA BPEL Client and Implementation V1.00
  • SCA C++ Client and Implementation V1.00
  • SCA Web Services Binding V1.00
  • SCA JMS Binding V1.00
  • SCA EJB Session Bean Binding V1.00

The second one is the one I’ll read first.

Comments Off on Coming up: SCA 1.0

Filed under Everything, SCA, Standards, Tech