Category Archives: Implementation

Big Data in the Cloud at Google I/O

Last week was a great party for the entire Google developer family, including Google Cloud Platform, and within Cloud Platform, the Big Data processing services, which is where my focus has been for the almost two years I’ve been at Google.

It started with a bang, when our fearless leader Urs unveiled Cloud Dataflow in the keynote. Supported by a very timely demo (streaming analytics for a World Cup game) by my colleague Eric.

After the keynote, we had three live sessions:

In “Big Data, the Cloud Way“, I gave an overview of the main large-scale data processing services on Google Cloud:

  • Cloud Pub/Sub, a newly-announced service which provides reliable, many-to-many, asynchronous messaging,
  • the aforementioned Cloud Dataflow, to implement data processing pipelines which can run either in streaming or batch mode,
  • BigQuery, an existing service for large-scale SQL-based data processing at interactive speed, and
  • support for Hadoop and Spark, making it very easy to deploy and use them “the Cloud Way”, well integrated with other storage and processing services of Google Cloud Platform.

The next day, in “The Dawn of Fast Data“, Marwa and Reuven described Cloud Dataflow in a lot more detail, including code samples. They showed how to easily construct a streaming pipeline which keeps a constantly-updated lookup table of the most popular Twitter hashtags for a given prefix. They also explained how Cloud Dataflow builds on over a decade of data processing innovation at Google to optimize processing pipelines and free users from the burden of deploying, configuring, tuning and managing the needed infrastructure. Just like Cloud Pub/Sub and BigQuery do for event handling and SQL analytics, respectively.
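
To make that hashtag example a bit more concrete, here is the core of the computation sketched in plain Python. This is illustrative logic only, not actual Cloud Dataflow SDK code; the function names are made up, and a real pipeline would express the same thing as windowed streaming transforms.

from collections import Counter

counts = Counter()

def on_tweet(hashtags):
    # Called for each incoming tweet, e.g. as events delivered by Cloud Pub/Sub.
    counts.update(tag.lower() for tag in hashtags)

def top_for_prefix(prefix, n=5):
    # The constantly-updated lookup table a real pipeline would maintain per window.
    matches = ((tag, c) for tag, c in counts.items() if tag.startswith(prefix.lower()))
    return sorted(matches, key=lambda tc: -tc[1])[:n]

on_tweet(["#WorldCup", "#GER", "#WorldCupFinal"])
print(top_for_prefix("#world"))   # [('#worldcup', 1), ('#worldcupfinal', 1)]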

Later that afternoon, Felipe and Jordan showed how to build predictive models in “Predicting the future with the Google Cloud Platform“.

We had also prepared some recorded short presentations. To learn more about how easy and efficient it is to use Hadoop and Spark on Google Cloud Platform, you should listen to Dennis in “Open Source Data Analytics“. To learn more about block storage options (including SSD, both local and remote), listen to Jay in “Optimizing disk I/O in the cloud“.

It was gratifying to see well-informed people recognize the importance of these announcements and partners understand how this will benefit their customers. As well as some good press coverage.

It’s liberating to now be able to talk freely about recent progress on our quest to equip Google Cloud users with easy-to-use data processing tools. Everyone can benefit from Google’s experience making developers productive while efficiently processing data at large scale. With great power comes great productivity.

1 Comment

Filed under Big Data, BigQuery, Cloud Computing, Cloud Dataflow, Everything, Google Cloud Platform, Implementation, Open source, Query, Spark, Tech, Utility computing

Comments on “The Good, the Bad, and the Ugly of REST APIs”

A survivor of intimate contact with many Cloud APIs, George Reese shared his thoughts about the experience in a blog post titled “The Good, the Bad, and the Ugly of REST APIs“.

Here are the highlights of his verdict, with some comments.

“Supporting both JSON and XML [is good]”

I disagree: Two versions of a protocol is one too many (the post behind this link doesn’t specifically discuss the JSON/XML dichotomy but its logic applies to that situation, as Tim Bray pointed out in a comment).

“REST is good, SOAP is bad”

Not necessarily true for all integration projects, but in the context of Cloud APIs, I agree. As long as it’s “pragmatic REST”, not the kind that involves silly contortions to please the REST police.

“Meaningful error messages help a lot”

True and yet rarely done properly.

“Providing solid API documentation reduces my need for your help”

Goes without saying (for a good laugh, check out the commenter on George’s blog entry who wrote that “if you document an API, you API immediately ceases to have anything to do with REST” which I want to believe was meant as a joke but appears written in earnest).

“Map your API model to the way your data is consumed, not your data/object model”

Very important. This is a core part of Humble Architecture.

“Using OAuth authentication doesn’t map well for system-to-system interaction”

Agreed.

“Throttling is a terrible thing to do”

I don’t agree with that sweeping statement, but when George expands on this thought what he really seems to mean is more along the lines of “if you’re going to throttle, do it smartly and responsibly”, which I can’t disagree with.

“And while we’re at it, chatty APIs suck”

Yes. And one of the main causes of API chattiness is fear of angering the REST gods by violating the sacred ritual. Either ignore that fear or, if you can’t, hire an expensive REST consultant to rationalize a less-chatty design with some media-type black magic and REST-bless it.

Finally George ends by listing three “ugly” aspects of bad APIs (“returning HTML in your response body”, “failing to realize that a 4xx error means I messed up and a 5xx means you messed up” and “side-effects to 500 errors are evil”) which I agree with, but I see those as a continuation of the earlier point about paying attention to the error messages you return (because that’s what the developers who invoke your API will be staring at most of the time, even if they represent only 0.01% of the messages you return).

What’s most interesting is what’s NOT in George’s list. No nit-picking about REST purity. That tells you something about what matters to implementers.

If I haven’t yet exhausted my quota of self-referential links, you can read REST in practice for IT and Cloud management for more on the topic.

7 Comments

Filed under API, Cloud Computing, Everything, Implementation, Manageability, Mgmt integration, Modeling, Protocols, REST, SOAP, Specs, Tech

The API, the whole API and nothing but the API

When programming against a remote service, do you like to be provided with a library (or service stub) or do you prefer “the API, the whole API, nothing but the API”?

A dedicated library (assuming it is compatible with your programming language of choice) is the simplest way to get invocations flowing. On the other hand, if you expect your client to last longer than one night of tinkering then you’re usually well-advised to resist making use of such a library in your code. Save yourself license issues, support issues, packaging issues and lifecycle issues. Also, decide for yourself what the right interaction model with the remote API is for your app.

One of the key motivations of SOAP was to prevent having to get stubs from the service provider. That remains an implicit design goal of the recent HTTP APIs (often called “RESTful”). You should be able to call the API directly from your application. If you use a library, e.g. an authentication library, it’s a third party library, not one provided by the service provider you are trying to connect to.

So are provider-provided (!) libraries always bad? Not necessarily, they can be a good learning/testing tool. Just don’t try to actually embed them in your app. Use them to generate queries on the wire that you can learn from. In that context, a nice feature of these libraries is the ability to write out the exact message that they put on the wire so you don’t have to intercept it yourself (especially if messages are encrypted on the wire). Even better if you can see the library code, but even as a black box they are a pretty useful way to clarify the more obscure parts of the API.

A few closing comments:

– In a way, this usage pattern is similar to a tool like the WLST Recorder in the WebLogic Administration Console. You perform the actions using the familiar environment of the Console, and you get back a set of WLST commands as a starting point for writing your script. When you execute your script, there is no functional dependency on the recorder, it’s a WLST script like any other.

– While we’re talking about downloadable libraries that are primarily used as a learning/testing tool, a test endpoint for the API would be nice too (either as part of the library or as a hosted service at a well-known URL). In the case of most social networks, you can create a dummy account for testing; but some other services can’t be tested in a way that is as harmless and inexpensive.

– This question of provider-supplied libraries is one of the reasons why I lament the use of the term “API” as it is currently prevalent. Call me old-fashioned, but to me the “API” is the programmatic interface (e.g. the Java interface) presented by the library. The on-the-wire contract is, in my world, called a service contract or a protocol. As in, the Twitter protocol, or the Amazon EC2 protocol, etc… But then again, I was also the last one to accept using the stupid term “Cloud Computing” instead of “Utility Computing”. Twitter conversations don’t offer the luxury of articulating such reticence so I’ve given up and now use “Cloud Computing” and “API” in the prevalent way.

[UPDATE: How timely! Seconds after publishing this entry I noticed a new trackback on a previous entry on this blog (Cloud APIs are like military parades). The trackback is an article from ProgrammableWeb, asking the exact same question I am addressing here: Should Cloud APIs Focus on Client Libraries More Than Endpoints?]

9 Comments

Filed under API, Automation, Everything, Implementation, Protocols, REST, SOAP

Amazon proves that REST doesn’t matter for Cloud APIs

Every time a new Cloud API is announced, its “RESTfulness” is heralded as if it was a MUST HAVE feature. And yet, the most successful of all Cloud APIs, the AWS API set, is not RESTful.

We are far enough down the road by now to conclude that this isn’t a fluke. It proves that REST doesn’t matter, at least for Cloud management APIs (there are web-scale applications of an entirely different class for which it does). By “doesn’t matter”, I don’t mean that it’s a bad choice. Just that it is not significantly different from reasonable alternatives, like RPC.

AWS mostly uses RPC over HTTP. You send HTTP GET requests, with instructions like ?Action=CreateKeyPair added in the URL. Or DeleteKeyPair. Same for any other resource (volume, snapshot, security group…). Amazon doesn’t pretend it’s RESTful, they just call it “Query API” (except for the DevPay API, where they call it “REST-Query” for unclear reasons).
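
To make the shape of these calls concrete, here is a small Python sketch that builds such a Query API URL. The signature-related parameters a real request must also carry (AWSAccessKeyId, Timestamp, Signature and friends) are left out, and the Version value is only indicative.

import urllib.parse

params = {
    "Action": "CreateKeyPair",   # the RPC-style operation name
    "KeyName": "my-key",         # operation-specific parameter
    "Version": "2009-11-30",     # API version string; the exact value depends on the release
}
print("https://ec2.amazonaws.com/?" + urllib.parse.urlencode(params))
# https://ec2.amazonaws.com/?Action=CreateKeyPair&KeyName=my-key&Version=2009-11-30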

Has this lack of RESTfulness stopped anyone from using it? Has it limited the scale of systems deployed on AWS? Does it limit the flexibility of the Cloud offering and somehow force people to consume more resources than they need? Has it made the Amazon Cloud less secure? Has it restricted the scope of platforms and languages from which the API can be invoked? Does it require more experienced engineers than competing solutions?

I don’t see any sign that the answer is “yes” to any of these questions. Considering the scale of the service, it would be a multi-million-dollar blunder if indeed one of them had a positive answer.

Here’s a rule of thumb. If most invocations of your API come via libraries for object-oriented languages that more or less map each HTTP request to a method call, it probably doesn’t matter very much how RESTful your API is.

The Rackspace people are technically right when they point out the benefits of their API compared to Amazon’s. But it’s a rounding error compared to the innovation, pragmatism and frequency of iteration that distinguishes the services provided by Amazon. It’s the content that matters.

If you think it’s rich, for someone who wrote a series of posts examining “REST in practice for IT and Cloud management” (part 1, part 2 and part 3), to now declare that REST doesn’t matter, well go back to these posts. I explicitly set them up as an effort to investigate whether (and in what way) it mattered and made it clear that my intuition was that actual RESTfulness didn’t matter as much as simplicity. The AWS API being an example of the latter without the former. As I wrote in my review of the Sun Cloud API, “it’s not REST that matters, it’s the rest”. One and a half years later, I think the case is closed.

17 Comments

Filed under Amazon, Application Mgmt, Cloud Computing, Everything, Implementation, Mgmt integration, REST, Specs, Utility computing

Updates on Microsoft Oslo and “SSH on Windows”

I’ve been tracking the modeling technology previously known as “Microsoft Oslo” with a sympathetic eye for the almost three years since it was introduced. I look at it from the perspective of model-driven IT management but the news hadn’t been good on that front lately (except for Douglas Purdy’s encouraging hint).

The prospects got even bleaker today, at least according to the usually-well-informed Mary Jo Foley, who writes: “Multiple contacts of mine are telling me that Microsoft has decided to shelve Quadrant and ‘refocus’ M.” Is “M” the end of the SDM/SML/M model-driven management approach at Microsoft? Or is the “refocus” a hint that M is returning “home” to address IT management use cases? Time (or Doug) will tell…

While we’re talking about Microsoft and IT automation, I have one piece of free advice for the Microsofties: people *really* want to SSH into Windows servers. Here’s how I know. This blog rarely talks about Microsoft but over the course of two successive weekends over a year ago I toyed with ways to remotely manage Windows machines using publicly documented protocols. In effect, showing what to send on the wire (from Linux or any platform) to leverage the SOAP-based management capabilities in recent versions of Windows. To my surprise, these posts (1, 2, 3) still draw a disproportionate amount of traffic. And whenever I look at my httpd logs, I can count on seeing search engine queries related to “windows native ssh” or similar keywords.

If heterogeneous Cloud is something Microsoft cares about, they need to better leverage the potential of the PowerShell Remoting Protocol. They could release open-source Python, Java and Ruby client-side libraries, or drastically simplify the protocol, moving away from its current “binary over SOAP” (you read this right) incarnation. Because the poor Kridek who is looking for the “WSDL for WinRM / Remote Powershell” is in for a nasty surprise if he finds it and thinks he’ll get a ready-to-use stub out of it.

That being said, a brave developer willing to suck it up and create such a Python/Ruby/Java library would probably make some people very grateful.

3 Comments

Filed under Application Mgmt, Automation, Everything, Implementation, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Modeling, Oslo, Protocols, SML, SOAP, Specs, Tech, WS-Management

Square peg, REST hole

For all its goodness, REST sometimes feels like trying to fit a square peg in the proverbial round hole. Some interaction patterns just don’t lend themselves well to the REST approach. Here are a few examples, taken from the field of IT/Cloud management.

Long-lived operations. You can’t just hang on for a synchronous response. Tim Bray best described the situation, which he called Slow REST. Do you create an “action in progress” resource?
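
One common workaround, sketched here at the wire level with made-up URLs: answer with 202 Accepted and a pointer to a status resource that the client polls.

POST /deployments HTTP/1.1            (request the long-running action)

HTTP/1.1 202 Accepted
Location: /operations/42              (the “action in progress” resource)

GET /operations/42 HTTP/1.1           (poll until done)

HTTP/1.1 200 OK
{"status": "running", "percentComplete": 60}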

Query: how do you query for “all the instances of app foo deployed in a container that has patch 1234 installed” in a to-each-resource-its-own-URL world? I’ve seen proposals that create a “query” resource and build it up incrementally by POSTing constraints to it. Very RESTful. Very impractical too.

Events: the process of creating and managing subscriptions maps well to the resource-oriented RESTful approach. It’s when you consider event delivery mechanisms that things get nasty. You quickly end up worrying a lot more about firewalls and the cost of keeping HTTP connections open than about RESTful purity.

Enumeration: what if your resource state is a very long document and you’d rather retrieve it in increments? A basic GET is not going to cut it. You either have to improve on GET or, once again, create a specifically crafted resource (an enumeration context) to serve as a crutch for your protocol.

Filtering: take that same resource with a very long representation. Say you just want a small piece of it (e.g. one XML element). How do you retrieve just that piece?

Collections: it’s hard to manage many resources as one when they each have their own control endpoint. It’s especially infuriating when the URLs look like http://myCloud.com/resources/XXX where XXX, the only variable part, is a resource Id and you know – you just know – that there is one application processing all your messages and yet you can’t send it a unique message and tell it to apply the same request to a list of resources.

The afterlife: how do you retrieve data about a resource once it’s gone? Which is what a DELETE does to it. Except just because it’s been removed operationally doesn’t mean you have no interest in retrieving data about it.

I am not saying that these patterns cannot be supported in a RESTful way. In fact, the problem is that they can. A crafty engineer can come up with carefully-defined resources that would support all such usages. But at the cost of polluting the resource model with artifacts that have little to do with the business at hand and a lot more with the limitations of the access mechanism.

Now if we move from trying to do things in “the REST way” to doing them in “a way that is as simple as possible and uses HTTP smartly where appropriate” then we’re in a better situation as we don’t have to contort ourselves. It doesn’t mean that the problems above go away. Events, for example, are challenging to support even outside of any REST constraint. It just means we’re not tying one hand behind our back.

The risk of course is to lose out on many of the important benefits of REST (simplicity, robustness of links, flexibility…). Which is why it’s not a matter of using REST or not but a matter of using ideas from REST in a practical way.

With WS-*, on the other hand, we get a square peg to fit in a square hole. The problem there is that the peg is twice as wide as the hole…

37 Comments

Filed under Cloud Computing, Everything, Implementation, Modeling, Protocols, Query, REST, Utility computing

PaaS as the path to MDA?

Lots of communities think of Cloud Computing as the realization of a vision that they have been pursuing for a while (“sure we didn’t call it Cloud back then but…”). Just ask the Grid folks, the dynamic data center folks (DCML, IBM’s “Autonomic Computing”, HP’s “Adaptive Enterprise”, Microsoft’s DSI), the ASP community, and those of us who toiled on what was going to be the SOAP-based management stack for all IT (e.g. my HP colleagues and I can selectively quote mentions of “adaptation mechanisms like resource reservation, allocation/de-allocation” and “management as a service” in this WSMF white paper from 2003 to portray WSMF as a precursor to all the Cloud APIs of today).

I thought of another such community today, as I ran into older OMG specifications: the Model-Driven Architecture (MDA) community. I have no idea what people in this community actually think of Cloud Computing, but it seems to me that PaaS is a chance to come close to part of their vision. For two reasons: PaaS makes it easier and more rewarding, all at the same time, to practice model-driven design. More bang for less buck.

Easier

My understanding of the MDA value proposition is that it would allow you to create a high-level design (at the level of something like an augmented version of UML) and have it automatically turn into executable code (e.g. that can run in a JEE or .NET container). I am probably making it sound more naive than it really is, but not by much. That’s a mighty wide gap to bridge, for QVT and friends, from UMLish to byte-code, and it’s no surprise that the practical benefits of MDA are still to be seen (to put it kindly).

In a PaaS/SaaS world, on the other hand, you are mapping to something that is higher level than byte code. Depending on what types of PaaS containers you envision, some of the abstractions provided by these containers (e.g. business process execution, event processing) are a lot closer to the concepts manipulated in your PIM (Platform-independent model, the UMLish mentioned above). Thus a smaller gap to bridge and a better chance of it being automagical. Especially if you add a few SaaS building blocks to the mix.

More rewarding

Not only should it be easier to map a PIM to a PaaS deployment environment, the benefits you get once you are done are incommensurably greater. Rather than getting a dump of opaque auto-generated byte-code running in a regular JVM/CLR, you get an environment in which the design concepts (actors/services, process, rules, events) and the business model elements are first class citizens of the platform management infrastructure. So that you can monitor and set policies on the same things that you manipulate in your PIM. As opposed to falling down to the lowest common denominator of CPU/memory metrics. Or, god forbid, trying to diagnose/optimize machine-generated code.

We shall see

I wasn’t thinking of Microsoft SQL Server Modeling (previously known as Oslo) when I wrote this, but Doug Purdy’s tweet made the connection for me. And indeed, one can see in SQLSM+Azure the leading candidate today to realizing the MDA vision… minus the OMG MDA specifications.

[Note: I wasn’t planning to blog this, but after I tweeted the basic idea (“Attempting MDA (model-driven architecture) before inventing model-driven deployment and mgmt was hopeless. Now possibly getting there.”) Shlomo requested more details and I got frustrated by the difficulty of explaining my point in twitterisms. In effect, this blog entry is just an expanded tweet, not something as intensely believed, fanatically researched and authoritatively supported as my usual blog posts (ah!).]

[UPDATED 2009/12/29: Some relevant presentations from OMG-land, thanks to Jean Bezivin. Though I don’t see mention of any specific plan to use/adapt MOF/XMI/QVT/etc for the Cloud.]

4 Comments

Filed under Application Mgmt, Automation, Azure, BPM, Business Process, Cloud Computing, Everything, Implementation, Microsoft, Middleware, Modeling, Specs, Standards, Utility computing

Expanding on “twitter with a brain”

Chuck Shotton recently made a compelling case (“Twitter with a Brain“) for Twitter tools to allow the user to change the protocol endpoint. That is, instead of always going to twitter.com, you can tell your Twitter client to send all requests to myTweetInterceptor.me.com. Why would you do this? You should read his blog entry, but in short his point is that the intermediary can add all kinds of new features that neither the Twitter client nor Twitter itself support. As always in computer science, a new level of decoupling adds opportunities for extensions (and breakage too, of course).

I fully agree with what he writes and I would very much like to see his call to action answered. In fact, I want more than what he is asking for. So here is my call to action:

1) It’s not just Twitter

Why just Twitter? This should be true for any client using any protocol. Why not also the APIs for the various Google and Yahoo services? The APIs for the other social networks beyond Twitter? For shopping sites like Amazon and EBay? Etc. And of course to all the various Cloud providers out there. Just because I am using the Amazon EC2 API it doesn’t mean I necessarily want the requests to go straight to Amazon. Client tools should always make the endpoint configurable, period.

2) It’s not just the clients, it’s also (and especially) the third party sites

Chuck’s examples are about features that the Twitter clients could provide but don’t, so an intermediary would be an easy way to hack support for them (others presumably include modifying the client, if it is open source; writing a plug-in for it, if there is such a mechanism; or running a network interceptor on the local client, unless the protocol is encrypted).

That’s nice and I’d love to see this, but the big deal for me is less with clients and more with third party sites. You know, all these sites that ask for your Twitter login/password. Or those that ask for your GMail/Yahoo account info to retrieve a list of your contacts. I never grant these requests, but I would consider it if they allowed me to tell them what endpoint URL to use. For example, rather than using my Twitter login to go straight to twitter.com, they would use a login/password that I create and talk to twitterIntermediary.vambenepe.com. The requests would be in the exact same shape as what they send today to Twitter, just directed to another URL. There, I could have a proxy that only allows some requests (e.g. “update twitter background image” but not “send update”) and forwards them using my real Twitter credentials. Or, for email accounts, I could have a proxy that allows requests that read my address book but not those that read my mails. The goal here is not to add features, it is to delegate trust in a fine-grained (and audited) manner. This, to me, is the burning need, rather than a 3rd place to implement Twitter lists.
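
Here is a minimal sketch of what such a filtering proxy could look like, in plain Python and not tied to any particular platform. The upstream URL, the allowed path and the credentials are placeholders, not the actual Twitter API.

from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

UPSTREAM = "https://api.example.com"              # the real service endpoint
ALLOWED = {"/account/update_background_image"}    # the only operations I am willing to delegate
REAL_AUTH = "Basic real-credentials-kept-on-the-proxy-only"

class FilteringProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path not in ALLOWED:              # e.g. reject "send update"
            self.send_error(403, "Operation not delegated")
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(UPSTREAM + self.path, data=body, method="POST")
        req.add_header("Authorization", REAL_AUTH)    # substitute the real credentials
        with urllib.request.urlopen(req) as resp:     # forward the call and relay the answer
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(resp.read())

HTTPServer(("", 8080), FilteringProxy).serve_forever()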

I would probably write these proxies using a PaaS platform like the Google App Engine. Or maybe even Yahoo Pipes. I have long struggled to think of use cases for which Yahoo Pipes hits the sweetspot, and this may well be it. Especially if people write modules to handle specific APIs (e.g. a “Twitter API” module that shows all operations and lets you enable/disable them one by one in a pipe). The one thing missing would be a way for a pipe to keep a log of its invocations, for auditing.

You want access to my email and social network accounts? Give me the ability to filter your requests and you’ll get access. If it’s blind trust you want, I am afraid I have a very limited supply.

[Note: I wanted to add this as a comment on Chuck’s blog, but he doesn’t seem to allow them: “go start your own blog and/or shut up and eat your vegetables” is his recommendation. Since I already have my own blog, I guess I don’t have to eat my vegetables if I don’t want to. I just hope my kids don’t learn about this rule or they’ll be blogging in no time.]

[UPDATED 2009/11/30: WRT to Chuck’s request, it looks like it’s being done already. But no luck with the third party sites so far, which is what I most want to see.]

6 Comments

Filed under Automation, Everything, Google App Engine, Implementation, Mashup, PaaS, Portability, Protocols, Security, Social networks, Twitter, Yahoo

PaaS as a satisfying and success-ready hobbyist platform

I don’t know anyone in Silicon Valley who can code and doesn’t fantasize about writing an accidental killer app. One that gets designed during a long layover in DEN and implemented in a rainy weekend (El Nino is my VC). One that was only supposed to meet the needs of a few friends and is used by half of the world a few months/years later.

I am not talking about seasoned entrepreneurs, who have a network, discipline, resources and enough experience to know that it takes a lot more than a cool idea. Rather, about programming hobbyists (who may or may not be programmers in their day jobs).

By definition, hobbyists only do things that are satisfying. In the rarefied air of Silicon Valley, it also helps if there is a conceivable “upside” to dream about. Platform as a Service (PaaS, e.g. Google App Engine) provides both to software-oriented hobbyists. And makes it very cheap (borderline free), which doesn’t hurt.

Satisfying

In a well-crafted PaaS environment, the development process and the result are both satisfying. I am not a Google shill, but GAE is a fair example. The barrier to entry is very low (the download is less than 10MB and contains all you need to get started). In an hour you have an application running locally. In an hour and 5 minutes you have it deployed and accessible on the web for all. And yet this ease of bootstrap does not come at the cost of too many longer-term limitations (now that the environment has grown a bit from the original limitations and provides scheduled and background jobs). Unlike Yahoo Pipes, for which the first impression is “nifty!” and the second is “gimme a textual representation of my pipe now!”.
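
For readers who haven’t tried it, the “running locally in an hour” claim is mostly due to how small the starting point is. Below is approximately the hello-world handler from GAE’s early Python runtime, reproduced from memory, so treat the module names and details as indicative rather than authoritative.

from google.appengine.ext import webapp
from google.appengine.ext.webapp import util

class MainHandler(webapp.RequestHandler):
    def get(self):
        # Whatever is written to the response is what the browser sees.
        self.response.out.write('Hello, hobbyist!')

def main():
    # Map URLs to handlers and hand the WSGI application to the runtime.
    application = webapp.WSGIApplication([('/', MainHandler)], debug=True)
    util.run_wsgi_app(application)

if __name__ == '__main__':
    main()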

Beyond the easy ramp-up, the main source of satisfaction developing in a PaaS environment is that you spend 99% of your time working on the application. Not the OS, not the firewall, not the application container, not the database. Not to mention having to deal with your co-lo provider or the leased line for the servers in the basement. If you are a hobbyist with only a few spare hours per week, that’s a make or break deal. It also means that you have a fighting chance of developing a secure application because you are responsible for a much smaller surface of attack.

Eons ago (in computing time), Visual Basic was the name of the game for these people. More recently came the rise of PHP. It dramatically lowered the barrier to adoption and provided a quick route to a working web application. I know several non-professional developers (e.g. web designers) who are scared of any “normal” programming language but happily write PHP (often of equivalent complexity BTW). Combine this with the wide availability of ISP-managed PHP environments and you get close to what GAE gives you. At the risk of adding to the annoying trend of retroactively cloudifying everything, I think of ISP-hosted PHP as the first generation of PaaS. But it is focused on “show what’s in the DB” scenarios rather than service-centric / mash-up / web 2.0 integration. And even for DB-centric scenarios, by and large PHP coders don’t want to think too much about model and queries (and it shows). I think Google decided to go with Python rather than the easy route of aping the hosted PHP environments in large part to avoid hitting such ceilings down the road. Not surprisingly, PHP support is currently the most requested GAE feature, ahead of Perl and Ruby. Let’s see if Google tries to get the PHP community on board or prefers to stay clear of such PaaS legacy (already!).

Ready for success

In the unlikely event that your application catches fire and sees wide adoption (which is not impossible, especially if well integrated in a social network), what are you going to do about it? Keeping in mind two constraints: first, this is a part-time hobby of yours. Second, don’t dream of riches. We are talking about an influx of facebookers or twitterers here, with no intention to pay for anything. But click they will. If you were going to answer: “I get funding and hire a real IT staff” then think again. You most likely won’t get funding for your toy app without revenue potential. And even if you do, by the time you have it it’s too late and people have moved on because you were not there when the spotlight was on you.

With a PaaS-based application you have a fighting chance. If the spike is short enough, you may not even hit the limit of the free quota. If it does, you have the choice of whether you are willing to pay to support the extra traffic or not. No change in code required (though it may be advised anyway, if your app wasn’t architected for efficient scaling – PaaS doesn’t entirely take this off your hands).

That “sudden spike” story is a commonly-invoked use case for EC2. And it’s probably true for a start-up with an IT staff (of at least one full-time person). But despite Amazon’s efforts (and other providers such as RightScale) this type of scaling is something you have to architect for and putting it in place takes away from the time you spend coding application features. It also means that you are responsible for more infrastructure (OS and application container at least). Not to mention that IaaS providers don’t usually offer free resources for limited usage, the way Google does (I suspect 99% of GAE apps never get over this quota). Even if a small EC2 instance is not very expensive (though it adds up over time if you keep it up for that occasional user), the difference between “free” and “cheap” is significant. As an application provider you’d like for this not to be the case with your users, but as a consumer of infrastructure service you’re on the other side of the deal, aren’t you?

There is a reason why suburbia-bound SUVs are advertised crossing mountain streams. The “I could if I wanted to” line has appeal. For the software hobbyist, knowing that your application won’t crash if it happened to meet success (even if only for a couple of days, e.g. the Slashdot effect) is a good feeling (“I could if *they* wanted to”). In truth this occasion is rare (and likely to end up like this), but you are ready for the eventuality. And if there are enough such hobbyists, then statistically some will encounter it.

The provider’s upside

That last point brings up the topic of the PaaS provider’s upside in this. I have read several critical comments arguing that no company will rewrite their application for GAE (true) and that no start-up will write their new code for it either because of the risk of lock-in (also true: “being bought by Google” is not a bad outcome but “has to be bought by Google” is a bad exit strategy). But I think this misses the point of casting such a wide net and starting with creating a great tool for hobbyists.

After all, Google has made a great business monetizing millions of small sites, none of which makes much money by itself. At the very least this can grow the web and, symbiotically, Google. With two possible upsides:

  • Some of these hobbyist applications may actually take off and Google becomes their natural partner/godfather (including managing their user accounts). For example, wouldn’t it be nice for Google if Craigslist or Twitter was running on GAE?
  • The platform eventually evolves into something that makes sense for start-ups, SMBs and/or enterprises to use. Google works out the kinks with less demanding users first.

Interesting times

Two closing thoughts, which I’ll leave undeveloped for now:

  • There is an especially good synergy between mobile apps and PaaS. Once you get past the restaurant tip calculators, many mobile apps need a server-hosted sidekick to do the heavy lifting of gathering/storing/transforming data. As a hobbyist, you want to spend most of your time making your mobile app cool. Which leaves even less time for administering server components. On the server side, you are even less likely to want to deal with anything but application logic. PaaS is especially attractive in these scenarios. Google and Microsoft have to navigate these waters carefully but look for some synergy/integration stories around GAE + Android and Azure + Windows Mobile respectively. Not clear what Apple’s story is here or if they think they need one. If it surfaces as an issue then we have yet another reason to restart the “Apple buys Adobe” rumor. Or maybe Sanjiva will get a middle-of-the-night call from Steve Jobs…
  • A platform to build/run your application is one thing. A way to reach users is another (arguably much more critical). Things like mobile app stores (especially Apple’s of course), Facebook and next-generation app stores. But this goes beyond the scope of this post.

Just to be clear, I am not in any way suggesting that PaaS is only for hobbyists. I am just saying that right now it is a great tool for them, the best way for an individual programmer to have fun and have an impact. This doesn’t take away from the value that PaaS will eventually deliver to larger organizations.

[UPDATED 2009/10/4: Microsoft Azure apparently supports PHP.]

5 Comments

Filed under Cloud Computing, Everything, Google, Google App Engine, Implementation, Mashup, Microsoft, PaaS, Utility computing, WSO2

Look Ma, no hypervisor!

Encouraged by hypervisor vendors, the confusion between virtualization and Cloud Computing is rampant. In the industry, the term “virtualization” (and its corollary, “virtual machine”) is used in so many different ways that it has lost all usefulness. For a recent example, read the introduction of this SNIA/OGF white paper (on Cloud Storage) which asserts that “the new technology underlying this is the system virtual machine that allows multiple instances of an operating system and associated applications to run on single physical machine. Delivering this over the network, on demand, is termed Infrastructure as a Service (IaaS)”.

In fact, even IaaS-type Cloud services don’t imply the use of hypervisors.

We need to decouple the Cloud interface/contract (e.g. “what are the types of resources that I can provision on demand? hosts, app servers, storage capacity, app services…”) from the underlying implementation (e.g. “are hypervisors used by the Cloud provider?”). At the risk of spelling out things that may be obvious to many readers of this blog, here is a simplified matrix of Cloud Computing systems, designed to illustrate that all combinations of interface and implementation are possible and in many cases even reasonable.

                      IaaS interface    PaaS interface
Hypervisor used       Yes! (see #1)     Yes! (see #2)
Hypervisor not used   Yes! (see #3)     Yes! (see #4)

#1: IaaS interface, hypervisor-based implementation

This is a very common approach these days, both in public Clouds (EC2, Rackspace and presumably at some point the VMWare vCloud Express service providers) and private Clouds (Citrix, Sun, Oracle, Eucalyptus, VMWare…). Basically, you take a bunch of servers, put hypervisors on all of them and make VMs running on these hypervisors available to the Cloud customers.

But despite its predominance, this is not the only path to a Cloud, not even to an IaaS (e.g. “x86 hosts on demand”) Cloud. The following three other scenarios are all valid too.

#2: PaaS interface, hypervisor-based implementation

This is the road SpringSource has been on, first with Cloud Foundry (using AWS EC2 which is based on the Xen hypervisor) and presumably soon on top of VMWare.

#3: IaaS interface, no hypervisor in the implementation

Let’s remember that the utility computing vision (before the term fell into desuetude in favor of “cloud”) has been around since before x86 hypervisors were so common. Take Loudcloud as an illustration. They were building what is now called a “public Cloud” starting back in 1999 and not using any hypervisor. Just bare metal provisioning and advanced provisioning automation software. Then they sold the hosting part to EDS (now HP) and only kept the software, under the name Opsware (now HP too, incidentally). That software was meant to create what we now call a “private Cloud”. See this old DCML announcement as one example of the Opsware vision. And no hypervisor was harmed in the making of this movie.

At the current point in time, the hardware (e.g. multiple cores, shared memory) and software (hypervisors, legacy apps) environment is such that hypervisor-based solutions seem to have an edge over those based on automated provisioning/configuration alone. But these things tend to change quickly in our industry… Especially if you factor in non-technical considerations like compliance, fear of data leakage and the risk of having the hardware underlying your application seized because of an investigation involving another tenant…

And this is not going into finer techno-philosophical points about the different types of hypervisors. Not to mention mainframe LPARs… One could build a hypervisor-free IaaS solution on these.

To some extent, you may even put the “pwned” machines (in a botnet) in this “IaaS with no hypervisor” category (with the small difference that what’s being made available is an x86 with an OS, typically Windows, already installed). If you factor out externalities (like the FBI breaking down your front door at 6:00AM) this approach has claims as the most cost-effective form of Cloud computing available today… Solaris zones are another example of possible foundation for a hypervisor-free IaaS-like offering (here too, with an OS rather than a “raw host” as the interface).

#4: PaaS interface, no hypervisor in the implementation

In the public sphere, this corresponds to Google App Engine.

In the private sphere, several companies have built it themselves on top of WebLogic, by adding some level of “on-demand” application provisioning in order to streamline the relationship between the IT group running the servers and the business groups who want to deploy applications on them. Something that one should ideally be able to buy rather than build.

Waiting for the question to become irrelevant

Like most deeply-ingrained confusions, the conflation of virtualization and Cloud Computing won’t be dispelled as much as made irrelevant. The four categories enumerated in this post are a point-in-time view of a continuously evolving system. What may start today as a bundle of a hypervisor, an OS and an app server may become a somewhat monolithic “PaaS engine” over time as the components are more tightly integrated. That “engine” may have memory isolation mechanisms that look a lot like a hypervisor. But it may not be able to host a generic OS. In the same way that whales don’t have fingers and toes and yet they are still very much apparent in their skeleton.

[UPDATED 2009/10/8: A real-life example of #3! On-demand servers via bare metal provisioning (via Sam). No hypervisor in the picture. See also here.]

[UPDATED 2009/12/29: Another non-hypervisor Cloud provider! NewServers. Here is their API. And a Q&A.]

3 Comments

Filed under Application Mgmt, Cloud Computing, Everything, Google App Engine, Implementation, IT Systems Mgmt, Middleware, Utility computing, Virtualization, VMware, XenSource

Toolkits to wrap and bridge Cloud management protocols

Cloud development toolkits like Libcloud (for Python) and jcloud (for Java) have been around for some time, but over the last two months they have been joined by several other open source contenders. They all claim to abstract the on-the-wire Cloud management protocols sufficiently to let you access different Clouds via the same code; while at the same time providing objects in your programming language of choice and saving you the trouble of dealing with on-the-wire messages. By focusing on interoperability, they slot themselves below the larger role of a “Cloud broker” (which also deals with tasks like transfer and choice). Here is the list, starting with the more recent contenders:

DeltaCloud shares the same goal of translating between different Cloud management protocols but they present their own interface as yet another Cloud REST API/protocol rather than a language-specific toolkit. More along the lines of what UCI is trying to do (not sure what’s up with that project, I recorded my skepticism earlier and am still waiting to be pleasantly surprised).

Of course there are also programming toolkits that are specific to one Cloud provider. They are language-specific wrappers around one Cloud management protocol. AWS protocols (EC2, S3, etc…) represent the most common case, for example amazon-ec2 (a Ruby Gem), Power-EC2Dream (in C# which gives it the tantalizing advantage of being invokable via PowerShell) and typica (for Java). For Clouds beyond AWS, check out the various RightScale Ruby Gems.
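
To make the “same code, different Clouds” promise concrete, here is roughly what using one of the cross-Cloud toolkits looks like, in Libcloud-style Python (written from memory of a recent release; the exact module layout has changed over the years, so treat it as indicative):

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Swap Provider.EC2 for another provider constant and the rest stays the same.
Driver = get_driver(Provider.EC2)
conn = Driver('my-access-key-id', 'my-secret-key')   # placeholder credentials

for node in conn.list_nodes():
    print(node.name, node.state)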

The main point of this entry was to list the cross-Cloud development toolkits in the bullet list above. But if you’re in the mood for some pontification you can keep reading.

For some reason, what used to be called “protocols” is often called “APIs” in Cloud settings. Witness the Sun Cloud “API” or the vCloud “API” which only define XML formats for on-the-wire messages. I have never heard of CIM/XML over HTTP, WSDM or WS-Management being referred to as APIs though they occupy a very similar place. They are usually considered “protocols”.

It’s just a question of definition whether an on-the-wire protocol (rather than a language-specific set of objects/methods) qualifies as an “Application Programming Interface”. It’s not an “interface” in the Java sense of the term. But I can “program” against it so it could go either way. On this blog I have gone along with the “API” term because that seemed widely used, though in verbal conversations I have tended to stick to “protocol”. One problem with “API” is that it pushes you towards mixing the “what” and the “how” and not respecting the protocol/model dichotomy.

Where it becomes relevant is when you start to see language-specific APIs for Cloud control pop up, as listed above. You now have two classes of things called “API” and it gets a bit confusing. Is it time to bring back the “protocol” term for on-the-wire definitions?

As a developer, whether you’re better off eating your Cloud noodles using chopsticks (on-the-wire protocol definitions) or a fork (language-specific APIs) is an important decision that will stay with you and may come back to bite you (e.g. when the interfaces are versioned). There is a place for both of course, but if we are to learn anything from WS-* it’s that we went way too far in the “give me a java stub” direction. Which doesn’t mean there is no room for them, but be careful how far from the wire semantics you get. It becomes even trickier when your stub tries not just to bridge between XML and Java but also to smooth out the differences between several on-the-wire protocols, as the toolkits above do. The hope, of course, is that there will eventually be enough standardization of on-the-wire protocols to make this a moot point.

2 Comments

Filed under Amazon, API, Automation, Cloud Computing, Everything, Google App Engine, Implementation, IT Systems Mgmt, Manageability, Mgmt integration, Open source, Protocols, Utility computing

File upload/download and remote program execution using WS-Management – a practical solution

The previous blog post described a way to upload and (in theory at least) download text files to/from a remote Windows machine using WS-Management. In practice, the applicability of the method is limited for upload (text files only, slow for large files) and almost nonexistent for download. Here is a much improved version.

This is another example of something that was too obvious for me to see last weekend when I was in the thick of fighting with WS-Management SOAP messages and learning about WMI classes. It just took a day of not thinking about it for the solution to pop into my mind: use ftp.exe. For the longest time (at least since Windows NT) Windows has been shipping with this FTP client. And the documentation shows that you can call it from the command line and provide it with the name of a text file containing the commands to execute. Bingo.

Specifically, here are the steps. Let’s say that I want to run a program called task.exe on a remote Windows machine and that program takes a large binary file (data.bin) as input. I want to transfer both to the remote machine and then run the program. This can be done in 3 simple steps:

Step 1: upload the FTP command file to the remote Windows machine. The content of the command file is below. mgmtserver.myco.com is the name of the machine from which the two files can be retrieved over FTP. I use anonymous FTP here, but you could just as well provide a username and password.

open mgmtserver.myco.com
anonymous
binary
get task.exe
get data.bin
quit

Step 2: execute the FTP commands above. This downloads task.exe and data.bin from mgmtserver.myco.com onto the remote Windows machine.

Step 3: execute the program on the remote Windows machine (“task.exe data.bin”).

Here are the on-the-wire messages corresponding to each step:

Step 1: upload the FTP command file to the remote Windows machine

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
  xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
  xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
  <s:Header>
    <a:To>http://server:80/wsman</a:To>
    <w:ResourceURI s:mustUnderstand="true">http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process</w:ResourceURI>
    <a:ReplyTo>
    <a:Address s:mustUnderstand="true">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
    </a:ReplyTo>
    <a:Action s:mustUnderstand="true">http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process/Create</a:Action>
    <a:MessageID>uuid:9A989269-283B-4624-BAC5-BC291F72E854</a:MessageID>
  </s:Header>
  <s:Body>
    <p:Create_INPUT xmlns:p="http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process">
      <p:CommandLine>cmd /c echo open mgmtserver.myco.com>ftpscript&amp;&amp;echo
      anonymous>>ftpscript&amp;&amp;echo binary>>ftpscript&amp;&amp;echo get
      task.exe>>ftpscript&amp;&amp;echo get data.bin>>ftpscript&amp;&amp;echo
      quit>>ftpscript</p:CommandLine>
      <p:CurrentDirectory>C:\data\winrm-test</p:CurrentDirectory>
    </p:Create_INPUT>
  </s:Body>
</s:Envelope>

As before, you need to set the Content-Type HTTP header to “application/soap+xml;charset=UTF-8” (or UTF-16).
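
If you want to try this without any WS-Management toolkit, a plain HTTP POST is all it takes. Here is a bare-bones Python sketch; the hostname, port, credentials and the use of Basic authentication are assumptions (many WinRM listeners only accept Negotiate/NTLM, and Basic has to be explicitly enabled):

import urllib.request

url = "http://server:80/wsman"
envelope = open("create_process.xml", "rb").read()   # the SOAP message shown above

password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, "Administrator", "password")   # placeholder credentials
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(password_mgr))

req = urllib.request.Request(url, data=envelope, method="POST")
req.add_header("Content-Type", "application/soap+xml;charset=UTF-8")

with opener.open(req) as resp:
    print(resp.read().decode("utf-8"))   # the Create response (ReturnValue, ProcessId)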

Step 2: execute the FTP commands to download the files from your server

It’s the same message, except the <p:CommandLine> element now has this value:

<p:CommandLine>ftp -s:ftpscript</p:CommandLine>

Step 3: execute the task.exe program on the remote Windows machine

Again, the same message except that the command line is simply:

<p:CommandLine>C:\data\winrm-test\task.exe data.bin</p:CommandLine>

Note that I have broken this down in three messages for clarity, but you can easily bundle all three steps in one SOAP message. Just use this command line:

<p:CommandLine>cmd /c echo open mgmtserver.myco.com>ftpscript&amp;&amp;echo
anonymous>>ftpscript&amp;&amp;echo binary>>ftpscript&amp;&amp;echo get
task.exe>>ftpscript&amp;&amp;echo get data.bin>>ftpscript&amp;&amp;echo
quit>>ftpscript&amp;&amp;ftp -s:ftpscript&amp;&amp;C:\data\winrm-test\task.exe
data.bin</p:CommandLine>

Of course this can also be used in reverse, to download files from the remote Windows machine rather than upload files to it. Just use PUT or MPUT as FTP commands instead of GET or MGET.

This mechanism is a major improvement, for many use cases, over what I originally described. I feel a bit like someone who just changed a flat tire by loosening the lug nuts with his teeth and then found the lug wrench under the spare tire.

2 Comments

Filed under Everything, Implementation, IT Systems Mgmt, Manageability, Microsoft, Portability, SOAP, Standards, WS-Management

Uploading a file to a Windows machine via WMI/WS-Management

[UPDATED 2009/6/30: Check the following post for a more practical solution.]

Here is a simple way to upload a text (i.e. not binary) file to a Windows machine. Because my interest is to be able to do it from any platform, I investigated the use of WS-Management. But the method relies on invoking WMI methods over WS-Management, so I don’t see why it would not also work in a straight WMI scenario if you prefer.

I am not a Windows management expert, so there may be a much better way to do this (e.g. BITS). But if what you’re after is the simplest possible way to drop a file on a Windows machine from a non-Windows machine, it doesn’t get much simpler than sending an XML doc over HTTP and calling it a day. Here is how.

The easiest would be if the CIM_DataFile WMI class had a “create” method to create a new file. It doesn’t. But Win32_Process does. Invoking this method creates a new process and you get to specify the command line to execute. All you need to do is come up with a command line that invokes a program that will create the file that you want to upload.

There may be alternatives, but the command line I came up with for this purpose uses the “cmd.exe” interpreter (the Windows command-line shell). By using the “/c” option, you can invoke this interpreter with its instructions as parameters directly on the command line (it gets a bit confusing because we have two “command lines” here, the one that is used to launch the “cmd.exe” shell and the one that is presented inside the “cmd.exe” shell).

Anyway, if you type the following line inside the “start/run” field in Windows

cmd /c echo 1st line > test1.txt

It will have the same effect as opening a command shell, typing “echo 1st line > test1.txt” in it and then closing it. It creates a new file called “test1.txt” with one line of content (“1st line”). If you want a second line, you can do this by adding a second command that uses “>>” (append) instead of “>”. And the two commands can be joined by “&&” to invoke them in one pass. So to create a file with three lines, we’d execute:

cmd /c echo 1st line > test1.txt && echo 2nd line >> test1.txt
&& echo 3rd line >> test1.txt

Now all we have to do is package this in a WS-Management SOAP message and post it to the WS-Management listener of the Windows machine. In the process, we have to escape the “&” in the command line to “&amp;” because of XML syntax rules. The resulting message looks like:

<s:Envelope
  xmlns:s="http://www.w3.org/2003/05/soap-envelope"
  xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
  xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
<s:Header>
<a:To>http://localhost/wsman</a:To>
<w:ResourceURI s:mustUnderstand="true">
  http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process
</w:ResourceURI>
<a:ReplyTo>
<a:Address s:mustUnderstand="true">
  http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous
</a:Address>
</a:ReplyTo>
<a:Action s:mustUnderstand="true">
  http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process/Create
</a:Action>
<a:MessageID>uuid:9A989269-283B-4624-BAC5-BC291F72E854</a:MessageID>
</s:Header>
<s:Body>
<p:Create_INPUT
  xmlns:p="http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process">
<p:CommandLine>cmd /c echo 1st line > test1.txt &amp;&amp; echo 2nd line >>
  test1.txt &amp;&amp; echo 3rd line >> test1.txt</p:CommandLine>
<p:CurrentDirectory>C:\data\winrm-test</p:CurrentDirectory>
</p:Create_INPUT>
</s:Body>
</s:Envelope>

You don’t even need a WS-Management toolkit to do this as the only WS-Management header is w:ResourceURI which can easily be set manually. You don’t need a WS-Addressing library either as all the headers are also static (except for the MessageID even though nobody will care in practice if you always send the same value; I hereby authorize you to re-use the one in my example as much as you want). As a side note, this is yet another illustration of how useless this header (and more generally WS-Addressing) is in 95% of cases. And yet the Microsoft WS-Management implementation (like many others) will make a point to fault if you don’t send it. But ranting against WS-Addressing is a topic for another day (look for a future post titled “WS-IfInteroperabilityWasEasyItWouldNotBeFunWouldIt”).

I should mention that you want to set the Content-Type HTTP header to “application/soap+xml;charset=UTF-8” for this message. Or UTF-16 if that’s what you’re sending.

A few comments:

  • This obviously only works for character-based files, not binaries
  • I’ve noticed that the parsing of the wsa:Action header is pretty minimalistic. The Microsoft implementation seems to just pick up the text behind the last “/”. So you can send “blahblah/Create” and it works just as well as the correct value, “http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process/Create” (it knows what class to apply the operation on from the Resource URI). Interestingly, there is only one URL ending in “/Create” that doesn’t work and it’s the WS-Transfer “Create” operation (“http://schemas.xmlsoap.org/ws/2004/09/transfer/Create”). That’s because the “Create” operation invoked in the message above is not the WS-Transfer “Create” operation but rather the homonymous operation on the WMI class.
  • Using the “/k” modifier on “cmd” in the command line (instead of “/c”) would also work, but the command shell would stay alive after returning so over time you’d have quite a few of them hanging out and using up memory on the remote machine. Not a good move.
  • As part of this exercise, I noticed an error in the MSDN page describing the “invoke” method of Win32_Process. In the SOAP body, the URI for the “p” namespace prefix uses “…/cim/…” instead of “…/cimv2/…”, which caused my first attempts to fail.

If the file you want to upload is large, you can break the upload over several successive messages similar to the one above. As long as you use the same file name and use “>>” instead of “>” you’ll keep appending to the end of the file until it’s complete.

Of course this could be any type of text file, including XML (watch for the character-escaping rules though, both for XML and for “cmd” as you have to apply them in the right sequence). Even better, it could be a Python, Perl or PowerShell script too. And in that case (assuming the corresponding interpreter is installed on the machine) you can use the same mechanism to also invoke the script for execution. So that you use this WS-Management interface just to bootstrap into a more comfortable remote-control mechanism.

The next logical question (for extra credit) is whether WS-Management can be used to read files remotely instead of writing them. In theory yes, though in practice you’re much better off with alternate solutions, like the remote shell extension to WS-Management that I have described as “dumb SSH” previously.

But since you ask, here is the theory. My first attempt was to do a WS-Management “Get” (the Get operation from WS-Transfer) on an instance of CIM_DataFile (using the “Name” selector and setting it to “C:\data\winrm-test\test1.txt”). But this returns the properties of the file rather than its content. Whether this is kosher is an interesting theoretical question to ponder from a REST-beard-stroking perspective, but it’s useless for my file retrieval purpose. As before, one solution is to use the magical Win32_Process “Create” method to overcome the shortcomings of the CIM_DataFile class. The Windows command shell “type” command can be used to display the content of a text file. But the WMI Win32_Process “Create” operation that we use here only returns the processId and a result code, not the stdout stream (unlike the remote shell protocol that I mentioned above). We therefore cannot use it directly to return the output of the “type” command over the wire.

The solution is to use one Win32_Process “Create” operation over WS-Management to write the content of the file in a place where a subsequent WS-Management operation can read it. I can think of two examples off the top of my head: directory names and environment variables.

Here is how you’d do it with directory names. The following command takes the test1.txt file, reads it and creates nested subdirectories, one for each line in the input file. The name of the directory is the content of the corresponding line in the file.

for /f "delims=" %I in (test1.txt) do @mkdir "%I" && cd "%I"

For example, if the file content is

1st line
2nd line
3rd line

The command will generate the following three subdirectories:

1st line
  |_ 2nd line
      |_ 3rd line

What’s the point? You can use WS-Management enumeration to retrieve the names of all directories (using the Win32_Directory WMI class). Now that may be a bit overwhelming, so you want to add a WS-Enumeration filter to your WS-Management request. The Microsoft WS-Management implementation supports the WQL filter syntax that lets you do just that.
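For illustration, here is roughly what the body of such a filtered enumeration request could look like, built as a Python string (the surrounding envelope, the Enumerate action header and the ResourceURI are omitted, and the exact namespaces should be checked against the WS-Management documentation rather than taken from here):

# Rough sketch of a WS-Enumeration body with a WQL filter, to retrieve only
# the Win32_Directory instances created by the trick above.
WQL_DIALECT = "http://schemas.microsoft.com/wbem/wsman/1/WQL"

query = "SELECT Name FROM Win32_Directory WHERE Drive = 'C:' AND Path LIKE '%winrm-test%'"

enumerate_body = (
    '<n:Enumerate xmlns:n="http://schemas.xmlsoap.org/ws/2004/09/enumeration"'
    ' xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">'
    '<w:Filter Dialect="' + WQL_DIALECT + '">' + query + '</w:Filter>'
    '</n:Enumerate>')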

BTW, you can presumably do the same thing with files, but the nesting of directories makes it easy to read the lines in the order in which they appear in the file. You’d quickly run into path length limitations though (and characters that are not valid in file/directory names).

A slightly more robust approach may be to set each line of the file in an environment variable (again via the “for”, this time using “set” after the “do”). You can then read these environment variables over WS-Management by doing a WS-Transfer Get on the Win32_Environment WMI class. Unlike CIM_DataFile (for which Get only returns properties, not the content), a Get on Win32_Environment includes the value of the environment variable as one of the properties. The pragmatic reasons for this dichotomy are obvious, but the architectural consequences will give a headache to anyone who still has any illusion that WS-Transfer has anything to do with REST.

As a side note, the “for” instruction can keep no more than 52 variables at a time, so if your file has more than 52 lines you’d have to send successive WS-Management requests and add a “skip” option to the “for” operation on subsequent requests (“skip=52”, “skip=104”, etc.). Again, practicality isn’t much of a concern here; we’re just playing with theory (Ed: “we”? how many people do you expect will still be reading at this point?).

That’s it for today’s episode of “Windows management for the on-the-wire-protocol guy”. Maybe next weekend I’ll take some time to look more into the remote shell over WS-Management protocol extension and how it can be misused/abused.

[UPDATE: The next post describes a more practical approach.]

5 Comments

Filed under DMTF, Everything, Implementation, IT Systems Mgmt, Manageability, Microsoft, SOAP header, Specs, Standards, WS-Management

Native “SSH” on Windows via WS-Management

Did you know that you can now SSH to a Windows machine over WS-Management, and that it is a documented protocol that can be implemented from any platform and programming language? This is big news to me and I am surprised that, as a management protocol geek, I hadn’t heard about it until I started to search MSDN for a related but much smaller feature (file transfer over WS-Management).

OK, so it’s not exactly SSH but it is a remote shell. In fact it comes in two flavors, which I think of as “dumb SSH” and “super SSH”.

Dumb SSH

Dumb SSH is the ability to remotely run a DOS-like command shell over WS-Management. Anyone who has had to use the Windows command shell as a scripting language ersatz understands why I call it “dumb”. I expect that even in Microsoft most would agree (otherwise why would they have created PowerShell?).

Still, you can do quite a few basic things using the Windows command shell and being able to do them remotely is not something to sneer at if you’re building a management product. If you’re interested, you need to read MS-WSMV, the WS-Management Protocol Extensions for Windows Vista specification (available here as a PDF). From the name of the specification, I expected a laundry list of tweaks that the WS-Management and WS-CIM implementation in Vista makes on top of the standards (e.g. proprietary extensions, default values, unsupported features, etc). And there is plenty of that, in sections 3.1, 3.2 and 3.3. The kind of “this is my way” decisions that you’d come to expect from Microsoft on implementing standards. A bit frustrating when you know that they pretty much wrote the standard, but at least it’s well documented. Plus, being one of those who forced a few changes in WS-Management between the Microsoft submission and the DMTF standard (over laments from Microsoft that “it’s too late to change Longhorn”), I am not really in a position to complain that “Longhorn” (now Vista) indeed deviates from the standard.

But then we get to section 3.4 and we enter a new realm. These are not tweaks to WS-Management anymore. It’s a stateful tunneling protocol going over WS-Management, complete with base-64-encoded streams (stdin, stdout, stderr) and signals. It gives you all you need to run a remote command shell over WS-Management. In addition to the base Windows command shell, it also supports “custom remote shells”, which lets you leverage the tunneling mechanism for another protocol than the one made of Windows shell commands. For example, you could build an HTTP emulation over this on top of which you could run WS-Management on top of which… you know where this is going, don’t you?
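As an aside for those who would rather not hand-roll the tunneling protocol: third-party client libraries implementing this remote shell protocol have since appeared. A sketch using one of them, the Python pywinrm library (host name and credentials are placeholders, and transport/HTTPS options are left out):

import winrm

# pywinrm implements the MS-WSMV remote shell ("dumb SSH") described above.
session = winrm.Session("winrm-host", auth=("administrator", "password"))
result = session.run_cmd("type", ["C:\\data\\winrm-test\\test1.txt"])

print result.status_code  # exit code of the remote command
print result.std_out      # stdout, base64-encoded on the wire, decoded by the library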

A more serious example of such a “custom remote shell” is PowerShell, which takes us to…

Super SSH

Imagine SSH with the guarantee that the shell you log into on the other side is a Python interpreter, complete with full access to the server’s management API. I think that would qualify as “super SSH”, at least for IT management purposes (not so exciting if all you want to do is check your email with mutt). This is equivalent to what you get when the remote shell invoked over WS-Management (or rather WS-Management plus the Vista extensions described above) is PowerShell instead of the Windows command shell. I have always liked PowerShell but it hasn’t really been all that relevant to me (other than as a design study) because of its ties to the Windows platform. Now, thanks to MS-PSRP, the PowerShell Remoting Protocol specification (PDF here), we are only a good Java (or Python, or Ruby) library away from being able to invoke PowerShell commands from any language, anywhere.

I have criticized over-reliance on libraries to shield developers from XML for tasks that would really be much better handled by simply learning to use XML. But in this case we really need a library because there is quite a bit of work involved in this protocol, most of which has nothing to do with XML: we have to fragment/defragment packets, compress/decompress messages, not to mention the security aspects. At this point you may question what the value of doing all this on top of WS-Management is, for which I respectfully redirect you to your local Microsoft technology evangelist, MVP or, as a last resort, sales representative.

Even if PowerShell is not your scripting language of choice, you can at least use it to create a bootstrap mechanism that will install whatever execution engine you want (e.g. Ruby) and download scripts from your management server. At which point you can sign out of PowerShell. For some reason, I get the feeling that we just got one step closer to Puppet managing Windows machines.

A few closing comments

First, while the MS-WSMV part that lets you run a basic command shell seems already available (Vista SP1, Win2K3R2, Win2K8, etc), the PowerShell part is a lot greener. The MS-PSRP specification is marked “preliminary” and the supported platform list only contains Windows 7 and Win2K8R2. Nevertheless, the word from Microsoft is that they have the intention to make this available on XP and above shortly after Windows 7 comes out. Let’s hope this is the case, otherwise this technology will remain largely irrelevant for years to come.

The other caveat comes from the standard angle. In this post, I only concern myself with the technical aspects. If you want to implement these specifications you have to also take into account that they are proprietary specifications with no IP grant (“Microsoft has patents that may cover your implementations of the technologies described in the Open Specifications. Neither this notice nor Microsoft’s delivery of the documentation grants any licenses under those or any other Microsoft patents”) and fully controlled by Microsoft (who could radically change or kill them tomorrow). As to whether Microsoft plans to eventually standardize them, I would again refer you to your friendly local Microsoft representative. I can just predict, based on the content of the specification, that it would make for some interesting debates in the DMTF (or wherever they may go).

This is a big step towards the citizenship of Windows machines in an automated datacenter (and, incidentally, an endorsement for the “these scripts have to grow up” approach to automation). As Windows comes to parity with Unix in remote scripting abilities, the only question remaining (well, in addition to the pesky license) will be “why another mechanism”. Which could be solved either via standardization of MS-PSRP, de-facto adoption (PowerShell on Suse Linux is only one Microsoft-to-Novell check away) or simply using PowerShell as just a bootstrapping mechanism for Puppet or others, as mentioned above.

[UPDATE: On a related topic, these two posts describe ways to transfer files over WS-Management.]

8 Comments

Filed under Automation, DMTF, Everything, Implementation, IT Systems Mgmt, Manageability, Mgmt integration, Microsoft, Portability, Specs, Standards, WS-Management

The datacenter as a programmable entity

This is an exciting time for those who want to shrink the computer. They are having a field day playing with devices powered by Android, the iPhone’s Cocoa, Palm’s new WebOS, Windows Mobile, JavaFX (maybe one day) and, to a lesser extent, the Blackberry.

But times are good too for those who want to go the other way and program larger things rather than smaller ones. If you are interested in thinking about the datacenter as a programmable entity, you are in luck: for those long plane trips when you run out of battery, bring a printout of the proceedings of the research meeting organized last year in Cambridge by Microsoft and HP Labs, titled “The Rise and Rise of the Declarative Datacentre”. When you’re back on-line, go check the presentations on the site.

And if you liked Paul Anderson’s “Programming the Data Centre” presentation at the Cambridge meeting, you can also read his “Programming the Virtual Infrastructure” slides from LISA 08. More LISA 08 presentations here.

I got the link to Paul Anderson’s second presentation (and maybe also the first one, some time ago) from Steve Loughran, who also adds a few comments, starting with the debate between the declarative and procedural approaches. This question has plenty of down-the-road implications. There is a lot to like about the declarative approach in terms of composition, manageability and more generally as a framework to manage complexity via encapsulation.

A simple analogy for this debate is to think about driving directions. The declarative approach is for me to give you a map with a circle on it showing where my house is and let you find your way. It’s more work for you but it’s also more resilient. The procedural approach is for me to give you a set of turn-by-turn directions, based on where you are coming from. If you miss one turn or if one road happens to be blocked at the time, then you’re in trouble.

That being said, there are enough powerful and useful PowerShell or Puppet scripts out there to give you pause before discarding procedural approaches. While the declarative (aka “desired state”, “policy-driven” and sometimes “model-based”) approach looks a lot more elegant, at this point in time the real work usually gets done via scripts, deployment procedures or the like.
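To make the contrast concrete, here is a toy illustration in Python (the names and structure are made up for this post, not any real tool’s API): the declarative version only states the desired end state and leaves the “how” to an engine, while the procedural version spells out the steps and their order, and therefore has to handle every deviation itself.

# Declarative: describe the end state; an engine (Puppet, SmartFrog, an
# ECML/EDML processor...) computes and applies whatever actions are needed.
desired_state = {
    "package:apache2": {"ensure": "installed"},
    "service:apache2": {"ensure": "running", "requires": ["package:apache2"]},
}

# Procedural: spell out the steps yourself, in the right order (the "run"
# helper below is hypothetical).
def install_and_start_apache(host):
    host.run("apt-get install -y apache2")
    host.run("cp ./apache2.conf /etc/apache2/apache2.conf")
    host.run("/etc/init.d/apache2 restart")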

In addition to academia, the competition between these approaches is playing out right now between all the companies and products that want to help you automate and manage your cloud deployments (public and/or private): for example, RightScale scripts (custom scripts and RightScripts, see here and here) versus the more declarative ECML/EDML documents from Elastra. Or the very declarative approach taken by SmartFrog.

5 Comments

Filed under Automation, Cloud Computing, Conference, Desired State, Everything, Grid, Implementation, Research, Tech, Utility computing

Long-running processes on Google App Engine: it finally works

I am probably taking things a bit too personally, but I feel like I just successfully guilt-tripped the Google App Engine (GAE) team. Just last night I was complaining that they were teasing me (supporting urllib2, but an older version, without timeout support). And tonight I noticed a new post on their blog, announcing the end of these “High CPU Requests” that have been the bane of my GAE experience.

The reason why I was looking for timeout support in the first place was to avoid generating these dreaded “High CPU Requests” that quickly result in your application being disabled. It’s all explained here and here.

But now that they are gone, I don’t need timeouts anymore. Around 11:00PM Pacific (on Thursday 2/12) I restarted my application. All it does is create 5 new simple entities in the datastore, one every second (it sleeps for one second between each entry). Then it spawns its successor, which will do the same thing, ad vitam aeternam. The application’s name, “rere”, is short for “request relay”, the pattern used to emulate a long-running process. The default page for the app (available here) just returns a list of the 30 last entities created. The point is to illustrate that a single original request spawned an everlasting computing task on GAE.

Here is the code:

#!/usr/bin/env python
#
# Copyright 2009 William Vambenepe
#

import wsgiref.handlers
import os
import logging
import time

from google.appengine.ext import db
from google.appengine.ext.webapp import template
from google.appengine.ext import webapp
from google.appengine.api import urlfetch

numberOfHeartbeatsViewed = 30
secondsDurationOfTaskWait = 1
numberOfTasksPerRequest = 5

class HeartBeat(db.Model):
  requestId = db.IntegerProperty()
  date = db.DateTimeProperty(auto_now_add=True)

# The mere existence of an instance of this class in the DB means that the relay has to stop.
class StopExec(db.Model):
  date = db.DateTimeProperty(auto_now_add=True)

class MainHandler(webapp.RequestHandler):
  def get(self):
    hbs = HeartBeat.all().order("-date").fetch(numberOfHeartbeatsViewed)
    template_values = {"hbs": hbs}
    path = os.path.join(os.path.dirname(__file__), "index.html")
    self.response.out.write(template.render(path, template_values))

class StartHandler(webapp.RequestHandler):
  def get(self):
    if (StopExec.all().count() == 0):
      try:
        id = int(self.request.get("id"))
      except (TypeError, ValueError):
        id = 0
      try:
        logging.debug("Request " + str(id) + " launching background task.")
        loopCount = 0
        while(loopCount < numberOfTasksPerRequest):
          hb = HeartBeat()
          hb.requestId = id
          hb.put()
          logging.debug("Request " + str(id) + " saved heartbeat #" + str(loopCount))
          time.sleep(secondsDurationOfTaskWait)
          loopCount = loopCount+1
      finally:
        logging.debug("Launching successor request with id=" + str(id+1))

        # This silly back and forth between the two URLs is because of the
        # "App cannot fetch the same URL as the one used for the request" error.
        if (self.request.url.find("start2") == -1):
          urlfetch.fetch("http://localhost/start2?id=" + str(id+1))
        else:
          urlfetch.fetch("http://localhost/start?id=" + str(id+1))
        logging.debug("Request " + str(id) + " completed")

def main():
  application = webapp.WSGIApplication([("/", MainHandler), ("/start", StartHandler), ("/start2", StartHandler)], debug=True)
  wsgiref.handlers.CGIHandler().run(application)

if __name__ == "__main__":
  main()

One thing I had to change from the earlier version (written using version 1.1.0 of the GAE SDK) is that urlfetch now returns an error if your app tries to invoke itself at the same URL (“App cannot fetch the same URL as the one used for the request”). So I have to alternate between http://localhost/start and http://localhost/start2, both of which are mapped to the same handler. This was added sometime between SDK 1.1.0 and SDK 1.1.9. If it is aimed at preventing the kind of baton-passing that I am doing, it is pretty inefficient considering how easy it is to circumvent.

It is now 1:02AM Pacific the next day (Friday 2/13) and the process is still progressing, based on the single HTTP request I sent to it at 11:00PM the previous evening. The result page currently returns:

  • From request # 1056, with date Fri, 13 Feb 2009 09:02:15 +0000
  • From request # 1056, with date Fri, 13 Feb 2009 09:02:14 +0000
  • From request # 1056, with date Fri, 13 Feb 2009 09:02:13 +0000
  • From request # 1056, with date Fri, 13 Feb 2009 09:02:12 +0000
  • From request # 1056, with date Fri, 13 Feb 2009 09:02:11 +0000
  • From request # 1055, with date Fri, 13 Feb 2009 09:02:10 +0000
  • From request # 1055, with date Fri, 13 Feb 2009 09:02:09 +0000
  • From request # 1055, with date Fri, 13 Feb 2009 09:02:08 +0000
  • From request # 1055, with date Fri, 13 Feb 2009 09:02:07 +0000
  • From request # 1055, with date Fri, 13 Feb 2009 09:02:06 +0000
  • From request # 1054, with date Fri, 13 Feb 2009 09:02:05 +0000
  • From request # 1054, with date Fri, 13 Feb 2009 09:02:04 +0000
  • From request # 1054, with date Fri, 13 Feb 2009 09:02:03 +0000
  • From request # 1054, with date Fri, 13 Feb 2009 09:02:02 +0000
  • From request # 1054, with date Fri, 13 Feb 2009 09:02:01 +0000
  • From request # 1053, with date Fri, 13 Feb 2009 09:01:59 +0000
  • From request # 1053, with date Fri, 13 Feb 2009 09:01:58 +0000

Which shows that 1056 successive requests have participated in the relay (the last one just happened, at 09:02:15 UTC, which is 1:02AM Pacific).

Hopefully it will still be running when I wake up tomorrow.

[UPDATED 2009/2/13, 9:08AM Pacific: It’s alive!

  • From request # 6411, with date Fri, 13 Feb 2009 17:08:45 +0000
  • From request # 6410, with date Fri, 13 Feb 2009 17:08:44 +0000
  • From request # 6410, with date Fri, 13 Feb 2009 17:08:43 +0000
  • From request # 6410, with date Fri, 13 Feb 2009 17:08:42 +0000
  • From request # 6410, with date Fri, 13 Feb 2009 17:08:41 +0000
  • From request # 6410, with date Fri, 13 Feb 2009 17:08:40 +0000
  • From request # 6409, with date Fri, 13 Feb 2009 17:08:39 +0000

BTW, the code provided uses localhost to run on my local machine. The version uploaded to Google of course replaces this with rere.appspot.com.]

[UPDATED 2009/5/1: For some reason this entry is attracting a lot of comment spam, so I am disabling comments. Contact me if you’d like to comment.]

4 Comments

Filed under Cloud Computing, Everything, Google App Engine, Implementation, Tech, Utility computing

Google App Engine is teasing me

Version 1.1.9 of the Google App Engine (GAE) SDK was released earlier this week. The first item in the announcement covers the big news, that developers can “use the Python standard libraries urllib, urllib2 or httplib to make HTTP requests”. My first thought reading this was that I was finally going to be able to use timeouts on outgoing HTTP requests. I care about this because my earlier attempt to emulate a long-running process in GAE was stymied by the GAE quota system, something I think I can work around if I can timeout a request once it has spawned its successor (more precisely, once it has spawned the successor of the incoming request that created the new outgoing request).

I got the new SDK last evening and moved the code from using urlfetch to using urllib2 (with timeout). On my local machine it seems to work, but the quota system (that I am trying to finesse) doesn’t run in the SDK. So the only real test happens once you deploy the app in the Google environment. Which is when I realized that GAE uses Python 2.5.2 and that the timeout parameter in urllib2 (and httplib) came with 2.6. Slap.
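For the record, here is the difference in code terms, plus the fallback I considered; whether the App Engine sandbox would actually honor a process-wide socket timeout is a separate question (it restricts low-level networking), so treat the second half as a sketch only.

import urllib2

# What the announcement's documentation links show (Python 2.6 and later):
response = urllib2.urlopen("http://example.com/", timeout=5)  # TypeError on 2.5.2

# A possible Python 2.5-era fallback: a process-wide default timeout.
import socket
socket.setdefaulttimeout(5)
response = urllib2.urlopen("http://example.com/")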

I was especially disappointed because the links for urllib2 and httplib in the GAE 1.1.9 SDK announcement take us to the Python 2.6.1 documentation. The timeout parameter is right there at the top of these pages, staring at me. It would be more accurate for this announcement to point to the 2.5.2 documentation (here it is for urllib2 and httplib).

It doesn’t really matter of course because this is just a toy project. And a real scheduler seems to be in the works (see this pre-announcement and this work in progress). I just do this as a fun way to get a glimpse of what it takes to turn existing infrastructure into a *aaS product, something that is going on in different ways in many places these days. Linux wasn’t created to run on something else than hardware. Xen wasn’t created to support EC2. Python wasn’t created to support GAE.

And GAE wasn’t created to support long-running processes. But I haven’t given up.

1 Comment

Filed under Cloud Computing, Everything, Google App Engine, Implementation, Tech, Utility computing

Now I know why GAE has been killing me

I haven’t written about Google App Engine lately because I haven’t spent too much time using it. And the time I did spend was mostly consumed fighting JavaScript for some client-side aspects that have nothing to do with the fact that the server side is on GAE. I have only touched JavaScript occasionally in the 12 years since writing this game, and I am pretty rusty. Funny how Python came back to me almost instantly but JavaScript just doesn’t.

I am writing a quick post on GAE today because the Google team just published a blog entry that explains why my attempts to create a long-running process in GAE resulted in my application being disabled for having too many “high CPU requests”, a concept that was never documented in the quota system.

Google is now offering a “quota detailed dashboard” and, as this screen capture shows, it tells us that you only get 60 “high CPU requests” and that they replenish at a rate of 2 per minute. So at least now I know.

What is still not clear is why a request that is doing nothing but waiting for a URLfetch to come back is considered by Google to use too much CPU…

2 Comments

Filed under Everything, Google App Engine, Implementation, Tech, Utility computing

Here comes WSTF

A new Web services-related industry body has been announced today: WSTF (Web Services Test Forum). More details about it from Infoworld. My employer (Oracle) seems to be one of the drivers (along with IBM) but I am not personally involved.

A lot of hand-wringing, of course, about its relationship with WS-I. Which is understandable if you consider what WS-I was originally supposed to deliver (profiles, sample applications and testing tools). But not if you consider what it has actually delivered that is relevant (a couple of profiles, some time ago). WSTF could also be compared with the SOAPBuilders Yahoo group, but since that group has seen only two email messages so far in 2008 (the last one dated April 2nd), it seems safe to consider it dead. It would be interesting to know why that is though (it used to be pretty active in the early days) and what lessons WSTF may learn from it. Another effort you may want to compare this to is the Microsoft Web Services Protocol Workshops Process. It’s too early to tell, but they may turn out to be more closely related than meets the eye.

I noticed this innocuous-sounding sentence in the press release (warning, PDF): “As an open community, WSTF has made it easy to introduce new interoperability scenarios and approve work through simple majority governance”. You may wonder why this is important enough to figure in the press release.

I interpret it as a dog whistle call (heard only by those to whom it is intended) for the WS-I board. Microsoft’s Paul Cotton responds to it in his quote for Infoworld: “WS-I provides a proven and open organization and process that best suits our customers’ needs”. He also talks about “the incredible industry-wide momentum and leadership of WS-I”, which is indeed not very credible (especially the momentum part). The WS-I process, the associated board politics and the resulting inaction are what I was talking about in this entry (“veto rules being commonly invoked, stopping most of the activities that the resort was originally planning to offer”).

Speaking of this “Standardstown” blog entry, I should probably soon update it to include WSTF. What should it be? Maybe a trailer park in which customers bring their own lodging, put them side by side and see how they line up?

The current test scenarios seem to focus on fixing the interoperability mess that is WS-Addressing. I assume more will soon be added to test the different WS-* specifications out there. It will be interesting to see what direction WSTF takes after that. Will the payloads of the test messages be obvious dummy payloads (so that the focus is on testing the implementation of the WS-* protocols)? Or will they start to include real payloads (e.g. real purchase orders from real enterprise applications)? How about this: “dear vendor, I will only buy your wonderfully open, standard and interoperable Web services-based application when it is available as a WSTF endpoint and there are three other real-life products (including one from your main competitor) that successfully interoperate with the exact same SOAP messages I will be using”. This could become an interesting tussle between vendors as well as between vendors and buyers.

Alternatively, of course, WSTF could turn into a test of how much difference there is between a “standard” and a publicly specified and interop-tested interaction scenario…

A quick (and unsuccessful) Technorati search for some blog comments returns the “WSTF Dark Retribution Dinorobots Limited Giftset” which “includes all five Dinorobots in their sinister evil incarnation”. Can’t say you were not warned…

[UPDATED 2008/12/10: Gilbert Pilz, who was involved with WSTF from the start (and also left a comment on this entry, see below), wrote a detailed description of the problem WSTF tries to address and how Gilbert and others have structured WSTF to solve it.]

[UPDATED 2008/12/15: Via InfoQ, another long description of the goals and processes of WSTF, this time from Doug Davis.]

[UPDATED 2009/1/5: Chris Ferris also weighs in, including his view on the relationship with WS-I. Having participated in several of the early WS-I plenary meetings, I have to wonder if Chris had any double entendre in mind when he wrote that WS-I helps “the community understand where the bar is”.]

[UPDATED 2009/2/17: A response from Redmond.]

1 Comment

Filed under Everything, IBM, Implementation, Microsoft, Oracle, SOAP, Specs, Standards, Testing

Last call for SML and SML-IF

The SML working group at W3C has published the “last call” working draft of version 1.1 of the SML and SML-IF (“IF” stands for “interchange format”) specifications. You have until October 3rd to tell them what you think.

With all the Oslo fun, the OMG embrace and the silence from System Center, there are more questions than answers about the use of SML at Microsoft. But the Eclipse COSMOS project (IBM and friends) is, as far as I know, valiantly going forward with the store/validator implementation. Which may or may not be the same codebase as what was used for the recent CMDBf interop demo (I am not sure how the SML and CMDBf implementations in COSMOS are articulated).

The COSMOS group also recently published an overview of SML. It doesn’t try to tell you why you’d want to use SML, but it’s a good and succinct description of what SML is technically (from an XML developer’s perspective).

Comments Off on Last call for SML and SML-IF

Filed under CMDB Federation, CMDBf, Desired State, Everything, IBM, Implementation, IT Systems Mgmt, Mgmt integration, Microsoft, Modeling, Open source, Oslo, SML, Specs, Standards, Tech, W3C