Category Archives: Security

URL shorteners and privacy: The Good, the Bad and the Cookie

The table below compares various URL shorteners based on how much they value service performance and the privacy of their users.

Here is the short version of the reading guide: a URL shortener which gives a high priority to reliability, performance and privacy will use a 301 (“Moved Permanently”) response code, will not use cache control headers and will not use cookies. A URL shortener which gives high priority to its own ability to monetize its traffic by tracking users will fail on one or more of these counts.

Here is how a few of the most popular shorteners perform by this measure (red is bad).

For the long version (and an explanation of how I came to create this table) read below the table.

Service name        Cookie    Status code  Caching limitations
t.co (Twitter)                301          5 min
bit.ly              tracking  301
tinyurl.com                   301
goo.gl (Google)               301          24h
wp.me (WordPress)             301
snurl.com                     301          10h
fb.me (Facebook)    (*)       301
twurl.nl            tracking  301
is.gd
ping.fm                       301
p.ly                tracking  301          no caching
ff.im               tracking  301          (**)
u.nu                          301
tiny.cc             tracking  301
snipurl.com                   301          10h
chkit.in            tracking  301
ur1.ca                        302          no caching
digs.by                       302          no caching

Notes:

(*) Facebook’s service, fb.me, tries to set a cookie but its content is “locale=en_US” and cannot be used for identification. In addition, it sets the domain to “.facebook.com” in the Set-Cookie directive but since the response comes from another domain (fb.me) the cookie is actually never returned by the browser and therefore useless. It looks like this is a leftover configuration setting copied from the normal facebook.com servers. Defying all expectations, Facebook comes out as one of the most privacy-friendly URL shorteners.

(**) ff.im limits the cache to being “private”, which means that your browser can cache the result but a shared proxy (e.g. your company’s proxy) should not cache it, forcing each user behind that proxy to resolve the URL at least once. I magnanimously did not ding them for this, even though it’s sub-optimal.

Now for the longer explanation

Despite the potential it offers to stretch out our tweets, I wasn’t too impressed when I learned of Twitter’s plan to roll out (and mandate) its own URL shortening service. My fundamental issue is that URL shortening is made necessary by an arbitrary decision on Twitter’s part (the 140 character limit and the fact that URLs count toward it) and that it would be entirely within their power to make these abominations unneeded. Or, at least, much more rarely needed (when tinyurl.com came out, the main use case was to insert a very long URL in an email without having problems with carriage returns, not to turn third-world countries into purveyors of silly domain names).

Beyond this fundamental issue, my main concerns about Twitter’s t.co mechanism are that it reduces privacy and it demands that you break the HTTP specification.

From a privacy perspective, the issue is that anyone who clicks on these links tells Twitter where they are going. And Twitter can collect and correlate these actions. The easiest way for them (or any other URL shortener) to do this is to use cookies. Cookies aren’t often used as part of redirections, but technically nothing prevents them. So I wanted to see if Twitter used them.

[Side note: in practice there are ways to track your browser without using identifying cookies, not to mention simply using the IP address which works quite well on people who browse from home. Still, identifying cookies are the preferred method.]

From a specification conformance perspective, the problem is that Twitter announced that they would modify the Terms of Service of their API to prevent you from replacing the short URL with the real location once you’ve resolved it the first time (as of this writing they apparently haven’t yet made the ToS change). That behavior would violate the HTTP specification if the redirection uses status code 301 (“Moved Permanently”), for which the specification says that “any future references to this resource SHOULD use one of the returned URIs” and that “clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server”. So I wanted to see whether t.co indeed returns a 301 (and asks us to violate the spec) or uses a temporary redirect (302 or the newer 307), in which case the specification would not be violated but other problems would arise (for example, search engines would not give you PageRank karma for such a link).

The other (spec-compliant) way to force a 301-based redirect to call back home once in a while is the (strange but legal) practice of using cache control headers on permanent redirections. So I also wanted to see how t.co behaves on that front.

And then I decided to also test a few other services, which is how the table above came to be.
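
For anyone who wants to reproduce (or extend) the table, here is a minimal sketch of how such a check could be scripted, in Python 3 with only the standard library. The short URLs in the example are placeholders rather than the links I actually tested; the idea is simply to request a short URL without following the redirect and then look at the status code, Set-Cookie and Cache-Control/Expires headers.

import http.client
from urllib.parse import urlsplit

def inspect_short_url(short_url):
    # Issue the request ourselves so the redirect is NOT followed.
    parts = urlsplit(short_url)
    conn_class = (http.client.HTTPSConnection if parts.scheme == "https"
                  else http.client.HTTPConnection)
    conn = conn_class(parts.netloc, timeout=10)
    conn.request("GET", parts.path or "/")
    resp = conn.getresponse()
    resp.read()
    conn.close()
    return {
        "status": resp.status,                        # 301 vs 302/307
        "location": resp.getheader("Location"),       # where the short link points
        "set-cookie": resp.getheader("Set-Cookie"),   # tracking cookie?
        "caching": resp.getheader("Cache-Control") or resp.getheader("Expires"),
    }

if __name__ == "__main__":
    # Placeholder short links, not actual measurements.
    for url in ("http://tinyurl.com/example", "http://bit.ly/example"):
        print(url, inspect_short_url(url))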

Comments Off on URL shorteners and privacy: The Good, the Bad and the Cookie

Filed under Everything, Facebook, Google, Protocols, Security, Social networks, Tech, Testing, Twitter

The fallacy of privacy settings

Another round of “update your Facebook privacy settings right now” messages recently swept through Twitter and blogs, as also happened a few months ago when Facebook last modified some privacy settings to better accommodate its business goals. This is borderline silly. So, once and for all, here is the rule:

Don’t put anything on any social network that you don’t want to be made public.

Don’t count on your privacy settings on the site to keep your “private” data out of the public eye. Here are the many ways in which they can fail (using Facebook as a stand-in for all the other social networks; this is not specific to Facebook):

  • You make a mistake when configuring the privacy settings
  • Facebook changes the privacy mechanisms on you during one of their privacy policy updates
  • Facebook has a security flaw that bypasses access control
  • One of your friends who has access to your private data accidentally/stupidly/maliciously shares it more widely
  • A Facebook application to which you grant access betrays your trust by accessing the data and exposing it
  • A Facebook application gets hacked
  • A Facebook application retains your data in its cache
  • Your account (or one of your friends’ accounts) gets hacked
  • Anonymized data that Facebook shares with researchers gets correlated back to real users
  • Some legal action (not necessarily related to you personally) results in a large amount of Facebook data (including yours) seized and exported for legal review
  • Facebook loses some backup media
  • Facebook gets acquired (or it goes out of business and its assets are sold to the highest bidder)
  • Facebook (or whoever runs their hardware) disposes of hardware without properly wiping it
  • [Added 2012/3/8] Your employer or school demands that you hand over your account password (or “friend” a monitor)
  • Etc…

All in all, you should not think of these privacy settings as locks protecting your data. Think of them as simply a “do not disturb” sign (or a necktie…) hanging on the knob of an unlocked door. I am not advising against using privacy settings, just against counting on them to work reliably. If you’d rather your work colleagues don’t see your holiday pictures, then set your privacy settings so they can’t see them. But if it would really bother you if they saw them, then don’t post the pictures on Facebook at all. Think of it like keeping a photo in your wallet. You get to choose who you show it to, until the day you forget your wallet in the office bathroom, or at a party, and someone opens it to find the owner. You already know this instinctively, which is why you probably wouldn’t carry photos in your wallet that shouldn’t be shown publicly. It’s the same on Facebook.

This is what was so disturbing about the Buzz/GMail privacy fiasco. It took data (your list of GMail contacts) that was not created for the purpose of sharing it with anyone, and turned this into profile data in a social network. People who signed up for GMail didn’t sign up for a social network, they signed up for a Web-based email. What Google wants, on the other hand, is a large social network like Facebook, so it tried to make GMail into one by auto-following GMail contacts in your Buzz profile. It’s as if your insurance company suddenly decided it wanted to enter the social networking business and announced one day that you were now “friends” with all their customers who share the same medical condition. And will you please log in and update your privacy settings if you have a problem with that, you backward-looking, privacy-hugging, profit-dissipating idiot.

On the other hand, that’s one thing I like about Twitter. By and large (except for the few people who lock their accounts) almost all the information you put in Twitter is expected to be public. There is no misrepresentation, confusion or surprise. I don’t consider this lack of configurable privacy as a sign that Twitter doesn’t respect the privacy of its users. To the contrary, I almost see this as the most privacy-friendly approach: make it clear that everything is public. Because it is anyway.

One could almost make a counter-intuitive case that providing privacy settings is anti-privacy, because it gives an unwarranted sense of security and nudges users towards providing more private data than they otherwise would. At least if the privacy settings are not contractual (can you sue Facebook for changing its privacy terms on you?). At least it’s been working that way so far for Facebook, intentionally or not, as illustrated by all the articles that stress the importance of setting our privacy settings right (implicit message: it’s OK to post private information as long as you set your privacy settings).

Yes, you should have clear privacy settings. But the place to store them is in your brain and the place to enforce them is by controlling what your fingers do before data gets on Facebook. Facebook and similar networks can only leak data that they possess. A lot of that data comes from you directly uploading it, and that’s the point where you have control. After this, you really don’t. Other data comes from tracking and analyzing your activities and connections, without explicit data upload from you. That’s a lot harder for you to control (you rarely even get asked for your privacy preferences on this data), but that’s out of scope for this blog entry.

Just like banks that are too big to fail are too big to exist, data that is too sensitive to leak from Facebook is too sensitive to be on Facebook.

5 Comments

Filed under Everything, Facebook, Google, Off-topic, Security, Social networks, Twitter

Taxonomy of Cloud Computing Benefits

One of the heavily discussed Cloud topics in early 2009 was the definition of a Cloud Computing taxonomy. Now that this theme has died down (with limited results), and to start 2010 in a similar vein, here is a proposal for a taxonomy of the benefits of Cloud Computing.

Just as the original Cloud Computing taxonomy had only three layers (IaaS/PaaS/SaaS), so does this taxonomy of Cloud benefits. The point of this post is to promote the third layer. I describe layers 1 and 2 mainly to better call out what’s specific about layer 3.

Layer 1 (infrastructure: “let someone else do it”)

This is the bare-bottom, inherent benefit of Cloud Computing: you don’t have to deal with the hardware. In practice, it means:

  • no need to worry about power/cooling,
  • on-demand provisioning/deprovisioning (machines appear/disappear in a way physical machines do not),
  • not responsible for physical security (though responsible for ensuring that the provider has an acceptable security level),
  • economies of scale (for equipment purchase and operations),
  • potential environmental benefits,
  • etc…

Layer 2 (management: “let a program do it”)

More specifically, more automated IT management. This does not require Cloud Computing (you can have a highly automated IT management environment on premise), but the move to Cloud Computing is the trigger that is making it really happen. While this capability is not an inherent benefit of Cloud Computing, the Cloud makes it:

  • Needed: You don’t get to put color tags on machines, you don’t get to bring a DVD to install a new application, you don’t get to open a machine to insert more memory, you don’t get to go retrieve a backup tape, label it and put it in a safe, etc. Of course losing these “privileges” doesn’t sound bad considering that they are mostly chores, but it means that you have to design alternative (and mostly programmatic) ways to perform the functions that these tasks addressed.
  • Easier: Cloud environments are highly API-driven. Many IT tools from the previous generation were console-centric (people would go out and buy “a network/event/system management console”) with APIs/protocols as a secondary thought. In Cloud environments, tools are a lot more API-centric, with the console as an adjunct (does anyone have stats on the ratio of EC2 instances provisioned via the AWS console versus the APIs?); see the short sketch after this list. This is also why, even though a lot of people wanted standard management protocols (of the WSDM/WS-Management generation), there wasn’t as much of a realization of their importance in the old environment (and not as much pressure to create them and eagerness to adopt them). The stakes and visibility are a lot higher in Cloud environments, and that’s why this second wave of protocols will have to succeed where the previous one fell short.
  • More beneficial: Once you have automated IT management in a traditional data center, what you get is fewer employees needed and somewhat better utilization. But you are still gated by the time/process to purchase/install new machines and by the cost of unused machines (at least with automation you don’t have to pay for their power/cooling). You don’t get the “just what I need” level of infrastructure usage that the same automation work allows in a Cloud setting.
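
To make the “API-centric” point above concrete, here is a minimal sketch of programmatic provisioning and deprovisioning using boto3, a present-day AWS SDK that post-dates this post; the region, AMI id and instance type are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a machine: no purchase order, no rack, no DVD.
run = ec2.run_instances(
    ImageId="ami-00000000000000000",   # placeholder AMI id
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = run["Instances"][0]["InstanceId"]
print("provisioned", instance_id)

# ...and deprovision it just as programmatically when no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])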

Layer 3 (applications: “do it right”)

In short, use the move to the Cloud as an opportunity to fix some of the key issues of today’s applications. Think of the Cloud switch as a second Y2K, 10 years later: like in 2000, not only are there things that the transition requires you to fix, there are also many things that aren’t exactly required to fix but still make sense to fix as part of the larger modernization effort. Of course the Cloud move is missing that ever-so-valuable project management motivator of a firm deadline, but hopefully competitive pressure can play a similar role.

What are these issues? Here is a partial list:

  • Security: at least authentication and authorization. We have SSO/Federation systems, both enterprise-type and Web-centric, and they often suck in practice, whether it’s because of the protocols, the implementations, the tools or the mindset. Plus, there are too many of them. As applications gained mouths and ears and started to communicate with one another, the problem became obvious. If, in the Cloud, you also want them to grow legs and be able to move around (wholly or in parts) then it really really has to get fixed. Not to mention the “all or nothing” delegation model that I am surprised hasn’t yet created a major disaster (let’s see what 2010 has in store). I suggested a band-aid fix earlier, but this needs a real solution (the Cloud Security Alliance provides some guidance in this document, see “domain 12” for IAM).
  • Get remote application interfaces right. It’s been discussed, manifesto’ed, buried and lampooned many times before (this was my humble take on it). Whether it’s because of WS-* or, more likely, java2wsdl, we have been delayed on this, but it simply has to happen. Call it SOAP, zenSOAP, REST, practical REST or whatever you want. Just make sure that all important functions and data are accessible via clear, documented, consistent, easy-to-use, on-the-wire interfaces (a toy illustration follows this list). Once we have these interfaces, and only then, we can worry about reliably composing/orchestrating applications that cross organizational boundaries.
  • Related to the previous point, clean up the incestuous relationship between an application and its data. Actually, it’s not “its” data. It’s the data it works on.
  • Deliver application-centric IT management. Quit losing and (badly) re-creating information: for example, an application deployment followed by a black-box discovery (“what did I just do?”). Or after-the-fact re-establishing of correlations between events on different servers (“what was this about?”). Application management too often looks like a day in the life of a senile person.
  • Fault-tolerance and disaster recovery. It is too often lacking (or untested, which is the same) for applications that are just below the perceived threshold of requiring it to be done right. That threshold needs to be lowered and the move to the Cloud can be used to make this possible.
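
As a toy illustration of the “on-the-wire interfaces” point above, here is a minimal sketch (Python 3, standard library only) of exposing an application’s data over plain HTTP and JSON; the resource names and data are made up.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ORDERS = {"42": {"status": "shipped"}, "43": {"status": "pending"}}  # fake data

class OrderAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if parts == ["orders"]:
            body = json.dumps(sorted(ORDERS))        # list the collection
        elif len(parts) == 2 and parts[0] == "orders" and parts[1] in ORDERS:
            body = json.dumps(ORDERS[parts[1]])      # one resource
        else:
            self.send_error(404)
            return
        payload = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), OrderAPI).serve_forever()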

[You should also read Tim Bray’s perspective (and Stefan Tilkov’s comment) on the process/methodology/tools for enterprise applications, an orthogonal (but related) area of improvement. More fundamental.]

As I mentioned above, these are mostly not Cloud specific (though it is possible to create a Cloud connection for each). They are things that we have known about and tried to fix for a while. But the pace has been pretty slow and there is an opportunity for the Cloud transition to do more than just hand out the keys of the datacenter.

What kinds of benefits are you aiming for in your Cloud plans?

[UPDATED 2010/01/11: An interesting take on a similar topic by Brenda Michelson: 5 Enduring Aspects of Cloud Computing]

[UPDATED 2010/01/14: Along the same lines (but looking at it in the other direction), an interesting graph from Alistair Croll of Bitcurrent.]

11 Comments

Filed under Application Mgmt, Automation, Cloud Computing, Ecology, Everything, IT Systems Mgmt, Mgmt integration, Security, Utility computing

Expanding on “twitter with a brain”

Chuck Shotton recently made a compelling case (“Twitter with a Brain“) for Twitter tools to allow the user to change the protocol endpoint. That is, instead of always going to twitter.com, you can tell your Twitter client to send all requests to myTweetInterceptor.me.com. Why would you do this? You should read his blog entry, but in short his point is that the intermediary can add all kinds of new features that neither the Twitter client nor Twitter itself support. As always in computer science, a new level of decoupling adds opportunities for extensions (and breakage too, of course).

I fully agree with what he writes and I would very much like to see his call to action answered. In fact, I want more than what he is asking for. So here is my call to action:

1) It’s not just Twitter

Why just Twitter? This should be true for any client using any protocol. Why not also the APIs for the various Google and Yahoo services? The APIs for the other social networks beyond Twitter? For shopping sites like Amazon and EBay? Etc. And of course to all the various Cloud providers out there. Just because I am using the Amazon EC2 API it doesn’t mean I necessarily want the requests to go straight to Amazon. Client tools should always make the endpoint configurable, period.

2) It’s not just the clients, it’s also (and especially) the third party sites

Chuck’s examples are about features that the Twitter clients could provide but don’t, so an intermediary would be an easy way to hack support for them (others presumably include modifying the client, if it is open source; writing a plug-in for it, if there is such a mechanism; or running a network interceptor on the local client, unless the protocol is encrypted).

That’s nice and I’d love to see this, but the big deal for me is less with clients and more with third party sites. You know, all these sites that ask for your Twitter login/password. Or those that ask for your GMail/Yahoo account info to retrieve a list of your contacts. I never grant these requests, but I would consider it if they allowed me to tell them what endpoint URL to use. For example, rather than using my Twitter login to go straight to twitter.com, they would use a login/password that I create and talk to twitterIntermediary.vambenepe.com. The requests would be in the exact same shape as what they send today to Twitter, just directed to another URL. There, I could have a proxy that only allows some requests (e.g. “update twitter background image” but not “send update”) and forwards them using my real Twitter credentials. Or, for email accounts, I could have a proxy that allows requests that read my address book but not those that read my mails. The goal here is not to add features, it is to delegate trust in a fine-grained (and audited) manner. This, to me, is the burning need, rather than a 3rd place to implement Twitter lists.
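
To make this concrete, here is a minimal sketch of such a filtering proxy, running locally as a stand-in for something that would more realistically be hosted on a PaaS (as discussed below). Everything in it is hypothetical: the allowed path, the credential placeholder and the upstream host are illustrations, not a real delegation API.

from http.server import BaseHTTPRequestHandler, HTTPServer
import http.client
import logging

logging.basicConfig(level=logging.INFO)

ALLOWED_PATHS = {"/account/update_profile_background_image.json"}  # hypothetical whitelist
REAL_AUTH_HEADER = "Basic <real-credentials-go-here>"               # placeholder
UPSTREAM_HOST = "api.twitter.com"

class FilteringProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path not in ALLOWED_PATHS:
            logging.info("blocked %s", self.path)    # audit trail
            self.send_error(403, "Operation not delegated")
            return
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        logging.info("forwarding %s (%d bytes)", self.path, length)
        upstream = http.client.HTTPSConnection(UPSTREAM_HOST)
        upstream.request("POST", self.path, body,
                         {"Authorization": REAL_AUTH_HEADER,
                          "Content-Type": self.headers.get("Content-Type", "")})
        resp = upstream.getresponse()
        self.send_response(resp.status)
        self.end_headers()
        self.wfile.write(resp.read())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), FilteringProxy).serve_forever()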

I would probably write these proxies using a PaaS platform like the Google App Engine. Or maybe even Yahoo Pipes. I have long struggled to think of use cases for which Yahoo Pipes hits the sweet spot, and this may well be it. Especially if people write modules to handle specific APIs (e.g. a “Twitter API” module that shows all operations and lets you enable/disable them one by one in a pipe). The one thing missing would be a way for a pipe to keep a log of its invocations, for auditing.

You want access to my email and social network accounts? Give me the ability to filter your requests and you’ll get access. If it’s blind trust you want, I am afraid I have a very limited supply.

[Note: I wanted to add this as a comment on Chuck’s blog, but he doesn’t seem to allow them: “go start your own blog and/or shut up and eat your vegetables” is his recommendation. Since I already have my own blog, I guess I don’t have to eat my vegetables if I don’t want to. I just hope my kids don’t learn about this rule or they’ll be blogging in no time.]

[UPDATED 2009/11/30: WRT Chuck’s request, it looks like it’s being done already. But no luck with the third party sites so far, which is what I most want to see.]

6 Comments

Filed under Automation, Everything, Google App Engine, Implementation, Mashup, PaaS, Portability, Protocols, Security, Social networks, Twitter, Yahoo

Are these your files? I found them on my cloud

Drip drip drip… Is this the sound of your cloud leaking?

It can happen in different ways. See for example this recent research paper, titled “Hey, You, Get Off of My Cloud: Exploring Information Leakage in Third-Party Compute Clouds”. It’s a nice read, especially if you find side channels interesting (I came up with one recently, in a different context).

In the first part of the paper, the authors show how to get your EC2 instance co-located (i.e. running on the same hypervisor) with the instance you are targeting (the one you want to spy on). Once this is achieved, they describe side channel attacks to glean information from this situation.

This paper got me thinking. I noticed that it does not mention trying to go after disk blocks and memory. I don’t know whether they didn’t try, or they tried and were defeated.

For disk blocks (the most obvious attack vector), Amazon is no dummy and their “proprietary disk virtualization layer automatically wipes every block of storage used by the customer, and guarantees that one customer’s data is never exposed to another” as explained in the AWS Security Whitepaper. In fact, they are so confident of this that they don’t even bother forbidding block-based recovery attempts in the AWS customer agreement (they seem mostly concerned about attacks that are not specific to hypervisor environments, like port scanning or network-based DOS). I took this as an invitation to verify their claims, so I launched a few Linux/ext3 and Windows/NTFS instances, attached a couple of EBS volumes to them and ran off-the-shelf file recovery tools. Sure enough, nothing was found on /dev/sda2 (the empty 150GB partition of local storage that comes with each instance) or on the EBS volumes. They are not bluffing.

On the other hand, there were plenty of recoverable files on /dev/sda1. Here is what a Foremost scan returned on two instances (both of them created from public Fedora AMIs).

The first one:

Finish: Tue Sep  1 05:04:52 2009

5640 FILES EXTRACTED

jpg:= 14
gif:= 670
htm:= 1183
exe:= 2
png:= 3771
------------------------------------------------------------------

And the second one:

Finish: Wed Sep  2 00:32:16 2009

17236 FILES EXTRACTED

jpg:= 236
gif:= 2313
rif:= 11
htm:= 4886
zip:= 182
exe:= 6
png:= 9594
pdf:= 8
------------------------------------------------------------------

These are blocks in the AMI itself, not blocks that were left on the volumes on which the AMI was installed. In other words, all instances built from the same AMI will provide the exact same recoverable files. The C: drive of the Windows instance also had some recoverable files. Not surprisingly they were Windows setup files.

I don’t see this as an AWS flaw. They do a great job providing cleanly wiped raw volumes and it’s the responsibility of the AMI creator not to snapshot recoverable blocks. I am just not sure that everyone out there who makes AMIs available is aware of this. My simple Foremost scans above only looked for the default file types known out of the box by Foremost. I suspect that if I added support for .pem files (used by AWS to store private keys) there may well be a few such files recoverable in some of the publicly accessible AMIs…
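
For what it’s worth, here is a minimal sketch of what such an extended check could look like: scanning a raw device (or disk image) for PEM private-key markers. The device path is just an example, you need read access to it, and the scan is simplistic (it only records the first hit in each chunk).

BEGIN_MARKER = b"-----BEGIN RSA PRIVATE KEY-----"
CHUNK = 4 * 1024 * 1024  # read 4MB at a time

def scan_for_keys(path):
    hits = []
    with open(path, "rb") as dev:
        offset = 0   # device offset of the start of the current chunk
        tail = b""   # overlap so markers split across chunks are still found
        while True:
            block = dev.read(CHUNK)
            if not block:
                break
            data = tail + block
            i = data.find(BEGIN_MARKER)
            if i != -1:
                hits.append(offset - len(tail) + i)
            tail = data[-len(BEGIN_MARKER):]
            offset += len(block)
    return hits

if __name__ == "__main__":
    print(scan_for_keys("/dev/sda1"))  # example path, as in the scans above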

Again, kudos to Amazon, but I also wonder if this feature opens a possible DOS approach on AWS: it doesn’t cost me much to create a 1TB EBS volume and to destroy it seconds later. But for Amazon, that’s a lot of blocks to wipe. I wonder how many such instantaneous create/delete actions on large EBS volumes it would take to put a large chunk of AWS storage capacity in the “unavailable – pending wipe” state… That’s assuming that they proactively wipe all the physical blocks. If instead the wipe is virtual (their virtualization layer returns zero as the value for any free block, no matter what the physical value of the block) then this attack wouldn’t work. Or maybe they keep track of the blocks that were written and only wipe these.

Then there is the RAM. The AWS security paper tells us that the physical RAM is kept separated between instances (presumably they don’t use ballooning or the more ambitious Xen Transcendent Memory). But they don’t say anything about what happens when a new instance gets hold of the RAM of a terminated instance.

Amazon probably makes sure the RAM is reset, as the disk blocks are. But what about your private Cloud infrastructure? While the prospect of such Cloud leakage is most terrifying in a public cloud scenario (anyone could make use of it to go after you), in practice I suspect that these attack vectors are currently a lot more exploitable in the various “private clouds” out there. And that for many of these private clouds you don’t need to resort to the exotic side channels described in the “get off of my cloud” paper. Amazon has been around the block (no pun intended) a few times, but not all the private cloud frameworks out there have.

One possible conclusion is that you want to make sure that your cloud vendor does more than write scripts to orchestrate invocations of the hypervisor APIs. They need to understand the storage, computing and networking infrastructure in detail. There is a messy physical world under your clean, shiny virtual world. They need to know how to think about security at the system level.

Another one is that this is mostly an issue for hypervisor-based utility computing and a possible trump card for higher levels of virtualization, e.g. PaaS. The attacks described in the paper (as well as block-based file recovery) would not work on Google App Engine. What does co-residency mean in a world where subsequent requests to the same application could hit any machine (though in practice it’s unlikely to be so random)? You don’t get “deployed” to the same host as your intended victim. At best you happen to have a few requests executed while a few requests of your target run on the same physical machine. It’s a lot harder to exploit. More importantly, the attack surface is much more constrained. No direct memory access, no low-level scheduler data, no filesystem… The OS-to-hardware interface that hypervisors emulate was meant to let the OS control the hardware. The GAE interface/SDK, on the other hand, was meant to give the application just enough capabilities to perform its task, in a way that is as removed from the hardware as possible. Of course there is still an underlying physical reality in the GAE case and there are sure to be some leaks there too. But the small attack surface makes them a lot harder to exploit.

[UPDATED 2009/9/8: Amazon just improved the ability to smoothly update your access certificates. So hopefully any such certificate found on recoverable blocks in an AMI will be out of date and unusable.]

[UPDATED 2009/9/24: Some good security practices that help protect you against block analysis and many other forms of attack.]

[UPDATED 2009/10/15: At Oracle Open World this week, I was assured by an Amazon AWS employee that the DOS scenario I describe in this post would not be a problem for them. But no technical detail as to why that is. Also, you get billed a minimum of one hour for each EBS volume you provision, so that attack would not be as cheap as I thought (unless you use a stolen credit card).]

4 Comments

Filed under Amazon, Cloud Computing, Everything, Google App Engine, Security, Utility computing, Virtualization, Xen

Sorry, CMDBf doesn’t make coffee either

The IT Skeptic is writing to us from his mountain retreat (via a time-delayed post on his blog), and the topic he felt safe to cover in such fashion (what journalists call an “evergreen”) is the claim that CMDBf is an orchestrated sham, brilliantly executed by IT management vendors.

I’d love to be part of something that’s brilliantly executed for once, even if it is a sham, but I am afraid this is not it. But first I should state the obvious, clarifying that even though I am a member of the CMDBf group at DMTF (and also an author of the original version, under my previous employer) I do not speak for the group or DMTF (or my employer for that matter). I speak just for myself, as always on this blog.

The problem that Rob England, Mr. Skeptic, has with the CMDBf specification is that it doesn’t do a bunch of things that he’d like it to do, such as specifying how data sources acquire data for their domain, how they store the data, how the underlying resources are reconfigured, what processes are followed etc. See the full list from his post. The list is a copy/paste from the CMDBf specification, with some comments added, so at the very least he has to admit that as far as “smokescreens” go this one is pretty upfront about its limitations…

He concludes that “this is once again a geeky technical solution to a cultural, organizational and procedural problem.” I have to ask: who expects DMTF specifications to solve “cultural, organizational and procedural” problems? Does CIM solve such problems? Does WBEM?

Human-to-human communication is a “cultural, organizational and procedural” problem and SMTP/POP/IMAP/etc (the interoperable protocols used by email systems) are just as geeky as CMDBf. They don’t solve the larger problem, only contribute to the solution. If CMDBf can contribute as much to datacenter management as SMTP/POP/IMAP contribute to human communication (minus the SPAM if possible), I’d call that a success.

And then there is this warning:

“WARNING: vendors will waive this white paper around to overcome buyer resistance to a mixed-vendor solution. For example if you already have availability monitoring from one of them, one of the other vendors will try to sell you their service desk and use this paper as a promise that the two will play nicely.”

Has anyone actually seen this happen? I am asking because so far, both at HP and Oracle, the only sales reps I have ever met who know of CMDBf heard about it from their customers. When asked about it, the sales person (or solutions engineer) sends an email to some internal mailing list asking “customer asking about something called cmdbf, do we do that?” and that’s how I get in touch with them. Not the other way around.

Also, if the objective really was to trick customers into “mixed-vendor solutions” then I also don’t really understand why vendors would go through the effort of collaborating on such a scheme since it’s a zero-sum game between them at the end.

As far as the glacial pace of progress (“Glacial advance. That’s the way the vendors want it” from an earlier post by the Skeptic), CMDBf is no race horse but I don’t see it going any slower than other standards. Slowness (I mean, deliberation) is part of the landscape. I would submit a slight twist on Hanlon’s razor: “Never attribute to malice that which can be adequately explained by legal, procedural and organizational inertia.”

Having said all this, some of Rob’s criticism is perfectly justified, such as his sarcasm about this sentence from the specification:

“The Federated CMDB operates in a closed environment, in which some security issues are less critical than in open access or public systems.”

OK, that’s stupid indeed. Especially in a public cloud environment where you don’t know who is renting the VM next door. I’ll ask the group to remove this. Actually, that whole appendix is useless and I pointed this out in my earlier review of CMDBf 1.0 (look for the “security boilerplate” section at the bottom of the review).

Rob could also have pointed out that this specification only addresses “federation” if you accept a very scaled-down definition of the term. What it does do is help with CMDB query and synchronization. Not the holy grail, but nothing to sneer at either.

Rob, next time you want to throw tomatoes at CMDBf while you’re on holiday, just give me the password to the site and I’ll do it for you… :-)

[UPDATED 2009/1/21: Rob responds via a comment on his original blog entry.]

2 Comments

Filed under BSM, CMDB Federation, CMDBf, DMTF, Everything, IT Systems Mgmt, ITIL, Mgmt integration, Security, Specs, Standards

The circus continues…

Here we go again. Yet another institution that “takes the protection of [my] personal information very seriously” wrote to me to let me know that they lost some unencrypted backup tapes with my SSN and everything. In a way I’d prefer it if they said that they don’t take the protection of my personal information seriously. Because now I have to assume that they are incompetent even at the tasks they take seriously, which presumably also include performing financial transactions (it’s a bank). That they plead dumbness rather than carelessness kind of scares me.

Well, not really. This letter is just damage control of course and whatever reassuring verbiage they put in doesn’t mean anything. Everyone is just playing pretend, which is how this whole “identity theft” problem started (“we’ll pretend that the SSN is confidential information and that we can use it to authenticate people”).

A few months ago I wrote that it is now safe to steal my identity because the credit watch service provided by Fidelity following their similar screw-up (laptop stolen from a car that time) had expired. Of course the new breach comes with two years of credit monitoring, courtesy of the incompetent bank.

So here is yet another reason to not buy credit monitoring services (in addition to the fact that they don’t work and that you can get the same thing for free): it’s only a matter of months before the next breach and the free two years of credit monitoring that will ensue.

2 Comments

Filed under Everything, Identity theft, Off-topic, Security, SSN

It is now safe to steal my identity

Note to whoever stole the laptop of a Fidelity employee two years ago, with personal information (SSN and more) for everyone enrolled in HP’s retirement plan: it is now safe to make use of the information. Congratulations on being patient.

I received an email telling me that the “credit watch” service in which all affected HP employees (and ex-employees) were enrolled for free has expired. Of course, we are invited to start paying Equifax to keep it running. $65 per year (and that’s supposedly a discounted rate, mind you, half the “normal” price) to run a DB query once a week on my behalf. Not bad. I should be in that business.

In what ways is the lost data less dangerous two years later? The “1 or 2 years of free credit watch” offer that is typical after such security breaches is obviously just a PR move to allow the guilty party to look like they are taking responsibility for their embarrassing display of incompetence. And it probably costs them very little, if anything, to provide this, considering how good a customer acquisition strategy it is for the “credit watch” department of the credit agencies. The fact that Fidelity and their peers don’t have to bear any real cost for this is the reason why it keeps happening.

If I sound a bit detached about this, it’s not that I am not worried about someone impersonating me by using my SSN and birth date. It’s just that I am not more worried about that specific laptop theft than I am about the hundreds of employees at medical offices, dental offices, insurances companies, banks etc that already have access to this information.

The solution is to publish every single SSN on a web site and stop pretending they can be used for authentication.

[UPDATED 2008/7/7: One more name in the long list of companies that have (often through a subcontractor) leaked so-called “personal” information about their employees. It’s only news because the employer is Google and anything Google-related is for some reason considered newsworthy. Danny is kind to be appreciative for the one year of free credit monitoring. It probably costs Google close to nothing. Which is why Google and the others don’t really care about the problem.]

1 Comment

Filed under Everything, HP, Identity theft, Security, SSN

IGF and GIF: it’s not a typo

With the Oracle announcements at the RSA conference this month (things like Oracle Role Manager and this white paper), the Identity Governance Framework (IGF) is back in the news. And since HP publicly released the Governance Interoperability Framework (GIF) earlier this year, there is some potential for confusion between the two (akin to the OSGi/OGSI confusion). I am not an author or even an expert in either, but I know enough about both that I can at least help reduce the confusion.

They are both frameworks, they are both about governance, they both try to enable interoperability, they both define XML formats, they were both privately designed and they are both pushed by their authors (and supporters) towards standardization. To add to the confusion, Oracle is listed as a supporter of HP’s GIF and HP is listed as a supporter of Oracle’s IGF.

And yet they are very different.

GIF is an attempt to address SOA governance, which mostly relates to the lifecycle of services and their artifacts (like WSDL, XSD and policies). So you can track versions, deployment status, ownership, dependencies, etc. HP is making the specification available to all (here but you need to register) and has talked about submission to a standards body but as far as I know this hasn’t happened yet.

IGF is a set of specifications and APIs that pull access policy for identity-related information out of the application logic and into well-understood XML declarations, with the goal of better controlling the flow of such information. The keystones are the CARML specification, used to describe what identity-related information an application needs, and its counterpart the AAPML specification, used to describe the rules and constraints that an application puts on usage of the identity-related information it owns. The framework also defines relevant roles and service interfaces. Unlike GIF, which is still controlled by HP, IGF is now under the control of the Liberty Alliance Project. Oracle is just one participant (albeit a leading one).

Could they ever meet?

A Web service managed through a GIF-like SOA governance system could have policies related to accessing identity-related information, as addressed by IGF (and realized through CARML and AAPML elements). GIF doesn’t really care about the content of the policies. Studying the positions of the IGF and GIF specifications relative to WS-Policy would be a good way to concretely understand how they operate at a different level from one another. While there could theoretically be situations in which IGF and GIF are both involved, they do not do the same thing and have no interdependency whatsoever.

[UPDATED 2008/4/18: Phil Hunt (co-author of IGF) has a blog where he often writes about IGF. He also wrote a good overview of IGF and its applicability to governance and SOX-style compliance.]

Comments Off on IGF and GIF: it’s not a typo

Filed under Everything, Governance, Identity theft, Oracle, Security, Specs, Standards

HP is starting to pull out of Identity Management

Rumors prompted me to do a Google search on << HP “identity management” exit >>. The second resulting link brought confirmation in the form of this Burton Group article.

From the article, HP is not declaring “end of life” on its IDM products (the “Select” family, made up of Select Access, Select Audit, Select Federation and Select Identity) but they are restricting them to the current customers rather than going after new ones. Which sounds like an end of life, albeit a slow one that gives customers plenty of time to plan and execute their transition. Good for HP, because that’s one area in which you really don’t want to make precipitous decisions (as sometimes happens when an IDM effort is kicked off as a result of a negative security event).

My first reaction is to wonder what this means for my ex-colleagues, including the IDM people I sat next to (most of them from the Trustgenix acquisition) and the remote ones I interacted with in the context of HP’s software standards strategy (Jason Rouault and Archie Reed, both well-known and respected in the corresponding standards efforts). These are all smart people so I am sure they’ll find productive work either in HP or outside (the IDM domain is booming).

My second reaction is puzzlement. This move is not very surprising from the point of view of the market success and financial returns of HP’s IDM suite so far. But it is a lot more surprising in the context of HP’s BTO strategy. I am sure they realize the importance of IDM in that context, so I guess they must have decided that they can deliver it based on partner products rather than HP products. Hopefully they can maintain the expertise even without further developing products.

The Burton Group article quotes Eric Vishria, “HP Software Vice President of Products”. Based on his title I would have been in his organization so I would have known him if he had been there when I was at HP. Which tells me that he probably came from the Opsware acquisition, soon after I left. The Opsware people now have a lot of influence in HP Software and it looks like they are not shying away from bold moves.

[UPDATED 2008/5/22: HP appears to have struck a deal to migrate its IDM users to Novell.]

1 Comment

Filed under Everything, HP, Security

Taking control of the Flash player

As far as I can tell, Flash is an advertising delivery platform for the Web. This is why I have not installed the Flash player in my Firefox browser. It saves me (especially when combined with the Adblock Plus Firefox add-on) from a lot of obnoxious animations. And a few security vulnerabilities too (this latest one is what prompted me to write this quick entry to help readers protect themselves while retaining the option to use Flash).

Despite all the hype about Flash, I very rarely run into a page that requires it for something useful. A few sites are Flash-only (mostly restaurant web sites from my experience; apparently restaurant owners are easy prey for incompetent Web site designers) and when I find one I usually take that as a sign that I am saving myself a lot of frustration by taking my business elsewhere.

Still, once in a while I need to view a Flash applet. Ideally, I would like to have Flash installed but disabled, such that I can enable it for a given page with a single click. This doesn’t seem to be possible (my guess is that Adobe knows very well that Flash is mostly used in ways that are not welcomed by users and that they would likely disable it most of the time if given the option). So here is a convenient way to achieve the same effect:

While I have not installed the Flash player in Firefox, I have installed it in IE. I have also installed the IE Tab Firefox add-on which allows one to switch from the Firefox rendering engine to the IE rendering engine within a given Firefox tab. It can be configured to place a small icon in the status bar. Clicking on that icon switches the rendering engine, which means that suddenly the Flash player is enabled for the page you are looking at. One-click enable/disable as requested!

You can also configure IE Tab to automatically switch to IE rendering for some pre-configured sites. So if there are Flash-dependent sites that you use on a regular basis, just enter them there and the IE rendering engine will automatically be used whenever you are on those sites. Again, this all happens inside your Firefox tab, it doesn’t start a separate IE browser. Enjoy.

[UPDATED on 2007/12/24: I wrote this entry to try to help readers and it turns out I am the one who’s getting helped after all. Many commenters pointed to the Flashblock firefox add-on which is designed specifically to do what I get done in a round-about way with IE Tab. I looked for such an add-on some time ago and didn’t find it, which is why I devised the work-around. Thank you all for the info.]

[UPDATED 2008/5/14: Another reason to keep Flash turned off: Crossdomain.xml Invites Cross-site Mayhem.]

[UPDATED 2008/6/9: Looks like Flashblock can be circumvented (in a way that my more basic FF vs IE setup cannot). BTW, I closed comments on this entry because for some reason it was attracting a lot more comment spam than all the others combined. Email me (see about page) if you want to post a comment here.]

9 Comments

Filed under Everything, Flash, Off-topic, Security

We won’t get rid of SSN-based authentication anytime soon…

… because the issue has been mixed up with the whole terrorism/DHS hysteria. Game over. So now we have “Real ID” which won’t stop any terrorist but somehow is marketed as an anti-terrorist measure. I don’t like this law because it is too focused on physical identification (ID card) and not virtual identification. Trying to impersonate someone in person is difficult, dangerous (you risk being arrested on the spot or at least having your face captured by a security camera) and doesn’t scale. Doing it virtually is easy, safe and scales (you can even do it from anywhere in the world, including places where labor is cheap and the FBI doesn’t reach much). So this is where the focus should be. Also, this law is not respectful of privacy (the “unencrypted bar code” issue, even though if someone really wanted to systematically capture name and address from ID cards today they could take a picture of the ID and OCR it, the Real ID-mandated bar code would only make things a little easier).

On the other hand, I also can’t go along with the detractors of this law when they go beyond pointing out its shortcomings and start ranting about this creating a national ID card. While it’s true that this is what it effectively does, someone needs to explain to me why this is bad and why this would make the US a “police state”. If really such IDs are so damaging to liberties, why is it ok for every state to have them? What makes a national ID more dangerous than a state ID?

I agree that the Real ID effort is a bad cost/benefit trade off in terms of protection against terrorism. But leaving terrorism aside, we do need a robust (not necessarily perfect) way to authenticate people to access bank accounts and other similar transactions. In that respect, something like Real ID is needed. And in that context, the cost/benefit trade-off can be hugely positive if you think of how much impersonation costs and how much friction it creates in the country’s economy.

As long as we live in denial about what a Social Security number represents and as long as we can’t think sanely about terrorism, there can’t be an answer to the authentication problem.

Comments Off on We won’t get rid of SSN-based authentication anytime soon…

Filed under Everything, Identity theft, Off-topic, Security, SSN

Agriculture Department and Census Bureau to the rescue

An article in today’s New York Times reports that “the Social Security numbers of tens of thousands of people who received loans or other financial assistance from two Agriculture Department programs were disclosed for years in a publicly available database”.

Almost there folks! But tens of thousands is not enough, we need to cover everyone. The simplest effective way to dent the “identity-theft” (or more exactly “impersonation”) wave is to go beyond this first step and publish on a publicly accessible web site all social security numbers ever issued and the associated names. And get rid once and for all of the hypocritical assumption that SSNs have any authentication value. We need a reliable authentication infrastructure (either publicly-run as a government service or privately-run, but that’s a topic for another day) and this SSN-based comedy is preventing its emergence by giving credit issuers (and others) a cheap and easy way to pretend that they have authenticated their customers.

Over the last couple of years, I have received two alerts that my SSN and other data have been “compromised” (one when Fidelity lost a laptop containing data about everyone enrolled in HP’s retirement plan and one from a university) and my wife has received three. Doesn’t this sound like a bad joke going on for too long (and I should know about bad jokes going on for too long, they are my specialty)? And of course this doesn’t count the thousands of employees at dental offices, medical offices, and many other businesses who have at some point had access to my data (and anybody else’s).

So, to the IT people at the Census Bureau I say “keep going”! But of course that’s not the reaction they had. The rest of the NY Times articles goes on with the usual hypocritical (or uninformed) lamentations about putting people’s identities at risk. “We took swift action when this was brought to our attention, and took the information down.” says an Agriculture Department spokeswoman. And of course there is the usual “credit report monitoring” offer (allowing the credit report agencies to benefit from both sides of the SSN-for-authentication debacle). Oblivious to the reality even though it manifests itself further down in the article: “The database […] is used by many federal and state agencies, by researchers, by journalists and by other private citizens to track government spending. Thousands of copies of the database exist.”

Another quote from the article: “Federal agencies are under strict obligations to limit the use of Social Security numbers as an identifier”. The SSN is a fine identifier. It’s using it as a means of authentication that’s the problem.

[UPDATE] This is now a Slashdot thread. The comments are pouring in. Some get it (like here, and here). This one seems to get it too but then goes on to advocate dismantling the social security system which at this point is only connected by name to the issue at hand.

[UPDATED 2008/7/2: Sigh, sigh and more sigh while reading this article. The cat is so far out of the bag that a colony of mice has taken residency in it. The goal shouldn’t be to try to make the SSN hard to get, it should be to make it useless to criminals. That approach isn’t even mentioned in the article.]

2 Comments

Filed under Everything, Identity theft, Off-topic, Security, SSN

Security by complexity?

WSRF/CDL/WS-Addressing complexity used as a security barrier? Ouch!

Comments Off on Security by complexity?

Filed under Everything, Security, Standards

Updating an EPR

The question of whether Reference Parameters can/should be used as the SOAP equivalent of cookies recently came back on the WS-Addressing mailing list. This is something more along the lines of session management than addressing. See Peter Hendry’s email for a clear description of his use case. The use case is reasonable but I don’t think this is what WS-Addressing is really for, as I explain in bullet #3 of this post. What interested me more was the response that came from Conor Cahill and his statement that AOL is implementing an “EndpointReferenceUpdate” element that can be returned in the response to tell the sender to update the EPR. I am not fond of this as a mechanism for session management, but I can see one important benefit of it: getting hold of a “good” EPR for more efficient addressing. Here is an example application:

Imagine a Web service that represents the management interface of a business process engine. That Web service provides access to all the currently running business process instances in the engine (think Service Group if you’re into WSRF). Imagine that this Web service supports a SOAP header called “target” and that this header is defined to contain an XPath statement. When receiving a message containing a “target” header, the Web service will look for the business process instance (for the sake of simplicity let’s assume there can only be one) for which this XPath statement returns “true” when evaluated against the XML representation of the state of the business process instance. And the Web service will then interpret the message to be targeted at that business process instance. This is somewhat similar to WS-Management’s “SelectorSet”. A sender can use this mechanism to address a specific business process instance based on the characteristics of that instance (side note: whether the sender understands and builds this header itself or whether it gets it as a Reference Parameter from an EPR is orthogonal). But this can be a very expensive dispatching mechanism. The basic implementation would require the Web service to evaluate an XPath statement on each and every business process instance state document. Far from optimal. This is where Conor’s “EndpointReferenceUpdate” can come in handy. After doing once the XPath evaluation work of finding out which business process instance the sender wants to address, the Web service can return a more optimized EPR to be used to address that instance, one that is a lot easier to dispatch on. This kind of scenario is, from my perspective, a lot more relevant to the work of the WS-Addressing working group than the session example.
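
Here is a minimal sketch of that dispatching idea, using the lxml library for XPath evaluation; the toy state documents and the XPath expression are made up for illustration, and the “target” header is the hypothetical one described above.

from lxml import etree

# Toy state documents standing in for running business process instances.
instances = {
    "instance-1": etree.fromstring("<process><orderId>42</orderId><status>shipped</status></process>"),
    "instance-2": etree.fromstring("<process><orderId>43</orderId><status>pending</status></process>"),
}

def dispatch(target_xpath):
    # The expensive generic lookup: evaluate the XPath against every instance.
    matches = [iid for iid, doc in instances.items()
               if doc.xpath("boolean(%s)" % target_xpath)]
    if len(matches) != 1:
        raise LookupError("expected exactly one matching instance, got %d" % len(matches))
    return matches[0]

# A "target" header value sent by the client:
print(dispatch("/process/orderId = '42'"))  # instance-1
# Having paid for the XPath evaluation once, the service could now hand back
# a cheaper-to-dispatch EPR (e.g. one whose reference parameter is simply the
# instance id), which is where something like EndpointReferenceUpdate comes in.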

An important consequence of a mechanism such as “EndpointReferenceUpdate” is that it makes it critical that the Web service be able to tell which SOAP headers are in the message as a result of being in the EPR used by the sender and which ones were added by the sender on purpose. For example, if a SOAP message comes in with headers “a”, “b” and “c” and the Web service assumes that “a” and “b” were in the EPR and “c” was added by the invoker, then the new EPR returned as part of “EndpointReferenceUpdate” will only be a replacement for “a” and “b” and the Web service will still expect “c” to be added by the sender. But if in fact “c” also came from a reference parameter in the EPR used by the sender then follow-up messages will be incomplete. This puts more stress and responsibility on the already weak @isReferenceParameter attribute. And, by encouraging people to accept EPRs from more and more sources, it puts EPR consumers at even greater risk for the problems described in bullet (1) of this objection.

2 Comments

Filed under Everything, Security, Standards, Tech

So you want to build an EPR?

EPR (Endpoint References, from WS-Addressing) are a shiny and exciting toy. But a sharp one too. So here is my contribution to try to prevent fingers from being cut and eyes from being poked out.

So far I have seen EPRs used for five main reasons, not all of them very inspired:

1) “Dispatching on URIs is not cool”

Some tools make it hard to dispatch on URI. As a result, when you have many instances of the same service, it is easier to write the service if the instance ID is in the message rather than in the endpoint URI. Fix the tools? Nah, let’s modify the messages instead. I guess that’s what happens when tool vendors drive the standards: you see specifications that fit the tools rather than the contrary. So EPRs are used to put information that should be in the URI in headers instead. REST-heads see this as a capital crime. I am not convinced it is so harmful in practice, but it is definitely not a satisfying justification for EPRs.

2) “I don’t want to send a WSDL doc around for just the endpoint URI”

People seem to have this notion that the WSDL is a “big and static” document and the EPR is a “small and dynamic” document. But WSDL was designed to allow design-time and run-time elements to be separated if needed. If all you want to send around is the URI at which the service is available, you can just send the URI. Or, if you want it wrapped, why not send a soap:address element (assuming the binding is well-known)? After all, in many cases EPRs don’t contain the optional service element and its port attribute. If the binding is not known and you want to specify it, send around a wsdl:port element which contains the soap:address as well as the QName of the binding. And if you want to be able to include several ports (for example to offer multiple transports) or use the wsdl:import mechanism to point to the binding and portType, then ship around a simplified wsdl:definitions document with only one service that itself contains the port(s) (if I remember correctly, WS-MessageDelivery tried to formalize this approach by calling a WSRef a wsdl:service element in which all the ports use the same portType). And you can hang metadata off of a service element just as well as off of an EPR.
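As a rough illustration of how little difference there is, here is a minimal sketch of the “piece of WSDL” option next to the corresponding EPR; the service, binding and namespace names are invented for the example.

  <!-- WSDL 1.1 fragment: a service with one port and its address -->
  <wsdl:service name="PurchaseTracking"
      xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
      xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
      xmlns:tns="urn:example:tracking">
    <wsdl:port name="TrackingPort" binding="tns:TrackingSoapBinding">
      <soap:address location="http://example.com/tracking"/>
    </wsdl:port>
  </wsdl:service>

  <!-- The EPR that carries essentially the same information -->
  <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <wsa:Address>http://example.com/tracking</wsa:Address>
  </wsa:EndpointReference>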

For some reason people are happy sending an EPR that contains only the address of the endpoint but not comfortable with sending a piece of WSDL of the same size that says the same thing. Again, not a huge deal now that people seem to have settled on using EPRs rather than service elements, but clearly not a satisfying justification for inventing EPRs in the first place.

3) “I can manage contexts without thinking about it”

Dynamically generated EPRs can be used as a replacement for an explicit context mechanism, such as those provided by WS-Context and WS-Coordination. By using EPRs for this, you save yourself the expense of supporting yet-another-spec. What do you lose? This paper gives you a detailed answer (it focuses on comparing EPRs to WS-Context rather than WS-Coordination for pretty obvious reasons, but I assume that on a purely technical level the authors would also recommend WS-Coordination over EPRs, right Greg?). In a shorter and simplified way, my take on why you want to be careful using dynamic EPRs for context is that by doing so you merge the context identifier on the one hand and the endpoint with which you use this context on the other hand into one entity. Once this is done you can’t reliably separate them and you lose potentially valuable information. For example, assume that your company buys from a bunch of suppliers and for each purchase you get an EPR that allows you to track the purchase as it is shipped. These EPRs are essentially one blob to you, and the only way to know which one comes through FedEx versus UPS is to look at the address and try to guess based on the domain name. But you are at the mercy of any kind of redirection or load-balancing or other infrastructure reason that might modify the address. That’s not a problem if all you care about is checking the ETA on the shipment; each EPR gives you enough information to do that. But if you also want to consolidate the orders that UPS is delivering to you, or if you read in the paper about a potential UPS drivers strike and want to see how it would impact you, it would be nice to have each shipment be an explicit context ID associated with a real service (UPS or FedEx), rather than a mix of both at the same time. This way you can also go to UPS.com, ask about your shipments and easily map each entry returned to an existing shipment you are tracking. With EPRs rather than explicit context you can’t do this without additional agreements.
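Here is a minimal sketch of the difference. The shipment-token reference parameter is a made-up opaque value, and the explicit-context version is only loosely modeled on WS-Context (the element names are not taken from the spec).

  <!-- Dynamic EPR: the carrier and the shipment are fused into one opaque blob -->
  <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing"
                         xmlns:x="urn:example:opaque">
    <wsa:Address>http://shipping-gw.example.com/track</wsa:Address>
    <wsa:ReferenceParameters>
      <x:shipment-token>f81d4fae-7dec-11d0-a765</x:shipment-token>
    </wsa:ReferenceParameters>
  </wsa:EndpointReference>

  <!-- Explicit context: the service and the context identifier stay separate,
       so shipments can be grouped by carrier, correlated, consolidated, etc. -->
  <ctx:context xmlns:ctx="urn:example:context">
    <ctx:context-identifier>1Z999AA10123456784</ctx:context-identifier>
    <ctx:service>http://www.ups.com/tracking</ctx:service>
  </ctx:context>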

The ironic thing is that the kind of mess one can get into by using dynamic EPRs too widely instead of explicit context is very similar in nature to the management problems HP OpenView software solves. Discovery of resources, building relationship trees, impact analysis, event correlation, etc. We do it by using both nicely-designed protocols/models (the clean way) and heuristics and other hacks when needed. We do what it takes to make sense of the customer’s system. So we could just as well help you manage your shipments even if they were modeled as EPRs (in this example). But we’d rather work on solving existing problems and opening new possibilities than fix problems that can be avoided. And BTW using dynamic EPRs is not always bad. Explicit contexts are sometimes overkill. But keep in mind that you are losing data by bundling the context with the endpoint. Actually, more than losing data, you are losing structure in your data. And these days the gold is less in the raw data than in its structure and the understanding you have of it.

4) “I use reference parameters to create new protocols, isn’t that cool!”

No it’s not. If you want to define a SOAP header, go ahead: define an XML element and then describe the semantics associated with this element when it appears as a SOAP header. But why oh why define it as a “reference parameter” (or “reference property” depending on your version of WS-A)? The whole point of an EPR is to be passed around. If you are going to build the SOAP message locally, you don’t need to first build an EPR and then deconstruct it to extract the reference parameters out of it and insert them as SOAP headers. Just build the SOAP message by putting in the SOAP headers you know are needed. If your tooling requires going through an EPR to build the SOAP message, fine, that’s your problem, but don’t force this view on people who may want to use your protocol. For example, one can argue for or against the value of WS-Management‘s System and SelectorSet as SOAP headers, but it doesn’t make sense to define those as reference parameters rather than just SOAP headers (readers of this blog already know that I am the editor of the WSDM MUWS OASIS standard with which WS-Management overlaps, so go ahead and question my motives for picking on WS-Management). Once they are defined as SOAP headers, one can make the adventurous decision to hard-code them in EPRs and to send the EPRs to someone else. But that’s a completely orthogonal decision (and the topic of the fifth way EPRs are used – see below). In any case, using EPRs to define protocols is definitely not a justification for EPRs, and one would have a strong case arguing that it violates the opacity of reference parameters specified in WS-Addressing.
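In other words, if a protocol needs a header, just define the header. A minimal sketch follows; the p namespace and element names are invented, and the selector idea is only loosely inspired by WS-Management’s SelectorSet.

  <!-- The protocol spec defines p:Selector as a SOAP header and the sender
       adds it directly; no EPR needs to be built and then taken apart. -->
  <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
                 xmlns:p="urn:example:my-protocol">
    <soap:Header>
      <p:Selector name="ProcessID">4711</p:Selector>
    </soap:Header>
    <soap:Body>
      <p:GetStatus/>
    </soap:Body>
  </soap:Envelope>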

5) “Look what I can do by hard-coding headers!”

The whole point of reference parameters is to make people include elements that they don’t understand in their SOAP headers (I don’t buy the multi-protocol aspect of WS-Addressing; as far as I am concerned it’s a SOAP thing). This mechanism is designed to open a door to hacking. Both in the good sense of the term (hacking as a clever use of technology, such as displaying Craigslist rental data on top of Google Maps without Craigslist or Google having to know about it), and in the bad sense of the term (getting things to happen that you should not be able to make happen). Here is an example of a good use for reference parameters: if the Google search SOAP input message accepted a header that specifies which site to limit the search to (equivalent to adding “site:vambenepe.com” in the Google text box on Google.com), I could distribute to people an EPR for the vambenepe.com search service by just giving them an EPR pointing to the Google search service and adding a reference parameter that corresponds to the header instructing Google to limit the search to vambenepe.com.
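Here is a sketch of what that hypothetical “vambenepe.com search” EPR could look like. The g:site header does not exist in any real Google API; it is purely an assumption made for the sake of the example, as is the address.

  <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing"
                         xmlns:g="urn:example:google-search">
    <wsa:Address>http://api.google.com/search/soap</wsa:Address>
    <wsa:ReferenceParameters>
      <!-- echoed as a SOAP header by anyone using this EPR, whether or not
           they know what it means -->
      <g:site>vambenepe.com</g:site>
    </wsa:ReferenceParameters>
  </wsa:EndpointReference>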

Some believe this is inherently evil and should be stopped, as expressed in this formal objection. I think this is a useful mechanism (to be used rarely and carefully) and I would like to see it survive. But there are two risks associated with this mechanism that people need to understand.

The first risk is that EPRs allow people to trick others into making statements that they don’t know they are making. This is explained in the formal objection from Anish and friends as their problem #1 (“Safety and Security”) and I agree with their description. But I don’t agree with the proposed solutions, as they prevent reference parameters from being treated by the service like any other SOAP header. Back last November I made an alternative proposal, using a wsa:CoverMyRearside element that would not have this drawback, and I know other people have made similar proposals. In any case, this risk can and should be addressed by the working group before the specification becomes a Recommendation, or people will stop agreeing to process reference parameters after a few high-profile hacks. Reference parameters will become the ActiveX of SOAP.

The second risk is more subtle, and that one cannot be addressed by the specification. It is the fragility that results from applications that share too many assumptions. I get suspicious when someone gives me directions to their house with instructions such as “turn left after the blue van” or “turn right after the barking dog”, don’t you? “We’re the house after the green barn” is a little better, but what if I want to re-use these directions a few years later? What’s the chance that the barn will be replaced or repainted? EPRs that contain reference parameters pose the same problem. Once you’ve sent the EPR, you don’t know how long it will be around, you don’t know who it will get forwarded to, you don’t know what the consumer will know. You need to spend at least as much effort picking what data you use as a reference parameter (if anything) as you spend designing schemas and WSDL documents. If your organization is smart enough to have a process to validate schemas (and you need that), that same process should approve any element that is put in a reference parameter.

Or you’ll poke your eye out.

2 Comments

Filed under Everything, Implementation, Security, Standards, Tech

Reality check on Microsoft/Sun claims about single sign-on

This morning I learned that Microsoft and Sun had a public event where the CEOs reported on a year of working together. This is a follow-up to Greg Papadopoulos’ report on the progress of the “technical collaboration”. In that post, Greg told us about the amazing technical outcomes of the work between the two companies and, being very familiar with the specs he was referring to, I couldn’t help but point out that the result of the “technical collaboration” he was talking about looked a lot like Sun rubber-stamping a bunch of Microsoft specifications without much input from Sun engineers.

So when I heard this morning that the two companies were coming out publicly with the result of their work, I thought it would be fair for me to update my blog and include this information.

Plus, reading the press release and Greg’s Q&A session, it sounded pretty impressive, and it would have been bad faith on my part not to acknowledge that Greg actually did have something to brag about; it just wasn’t yet public at the time. In effect, it sounded like they had found a way to make the Liberty Alliance specs and WS-Federation interoperate with one another.

From Greg’s Q&A: “In a nutshell, we resolved and aligned what Microsoft was trying to accomplish with Passport and the WS-Federation with what we’ve been doing with the Liberty Alliance. So, we’ve agreed upon a way to enable single sign-on to the Internet (whether through a .NET service or a Java Enterprise System service), and federate across those platforms based on service-level agreements and/or identity agreements between those services. That’s a major milestone.”

Yes Greg, it would have been. Except this is not what is delivered. The two specs that are supposed to support these claims are Web SSO MEX and Web SSO Interop Profile, which are 14 and 9 pages long respectively. Now I know better than to equate the length of a spec with its value, but when you cut the boilerplate content out of these 14 and 9 pages, there is very little left to deliver on claims as ambitious as Greg’s.

The reason is that these specs in no way provide interop between a system built using Liberty Alliance and a system built using WS-Federation. All they do is allow each system to find out which spec the other uses.

One way to think about it is that we have an English speaker and a Korean speaker in the same room and they are not able to talk. What the two new specs do is put a lapel pin with a British flag on the English speaker and a lapel pin with a Korean flag on the Korean speaker. Yes, this helps a bit. At least now the Korean speaker will know what the weird language the other guy is speaking is, and he can go to school and learn it. But just finding out what language the other guy speaks is a far cry from actually being able to communicate with him.

Even with these specs, a system based on Liberty Alliance and one based on WS-Federation are still incompatible and you cannot single sign-on from one to the other. Or rather, you can, but only if your client implements both. This is said explicitly in the Web SSO Interop Profile spec (look for the first line of page 5): “A compliant identity provider implementation MUST support both protocol suites”. Well, this isn’t interop, it’s duplication. Otherwise I could claim I have solved the problem of interoperability between English and Korean just by asking everyone to learn both languages. Not very convincing…

But of course Microsoft and Sun knew that they could get away with that in the press. For example, CNet wrote “The Web Single Sign-On Metadata Exchange Protocol and Web Single Sign-On Interoperability Profile will bridge Web identity management systems based on the Liberty and Web services specifications, the companies said”. As the Columbia Journalism Review keeps pointing out, real journalists don’t just report what people say, they check if it’s true. And in this case, it simply isn’t.

1 Comment

Filed under Business, Everything, Security, Standards, Tech

To prevent intrusion, pull the plug on your server

There is a new WSDL validation tool on IBM’s AlphaWorks site: the Web Services Interface Definition for Intrusion Defense. This is an Eclipse plug-in that checks out your WSDL and flags “any interface feature that could open a door to hacker attacks”. What does it mean in practice? Well, it flags any usage of xsd:any (or xsd:anyType or xsd:anySimpleType) anywhere in your schemas. It also complains if you have elements with maxOccurs=”unbounded”. And more of the same. The result is that this excludes pretty much any existing schema definition. And most of the useful ones one can think of.
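For illustration, here is a sketch of the kind of perfectly ordinary schema constructs the tool complains about (the element and type names are made up):

  <xsd:complexType name="ShoppingCart"
      xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:sequence>
      <!-- flagged: unbounded repetition -->
      <xsd:element name="item" type="xsd:string" maxOccurs="unbounded"/>
      <!-- flagged: open content -->
      <xsd:any namespace="##other" processContents="lax"
               minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>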

The payload of XML messages should reflect the business logic of the service and not the convenience of the implementer. Go tell the line of business manager that the “checkout” operation should be modified so that the number of items in the shopping cart has a hard-coded limit. Go tell the print shop that they can’t accept XHTML documents as input. It is the implementer’s job (and by that I include the runtime, IDE and tools) to make sure the message processing code (be it at the plumbing level or the business level) doesn’t expose security holes.

Comments Off on To prevent intrusion, pull the plug on your server

Filed under Everything, Security

@wsa:type=”parameter” instead of

Eager to prove wrong those who have a hard time picturing a January face-to-face meeting in Australia as anything more than an excuse for some scuba diving, the WS-A working group is cranking through its issues list. One of the outcomes is that issue #8 is now resolved. The issue is about identifying which headers in a SOAP message are there because the EPR used to address the SOAP message required it. This problem is familiar to anyone who has ever been the victim of the common joke of tricking someone who doesn’t speak a certain language into memorizing and later repeating a sentence in that language while misleading them about its meaning, then waiting for them to embarrass themselves in public at the worst possible time.

The group closed this issue by introducing an attribute that can be added to these headers (wsa:type). Yes, the world really needed another type of “type”. BTW, the resolution will need to be tweaked as a result of another decision: issue #1 was resolved by getting rid of reference properties, leaving only reference parameters. More on this in a future post, but at least this gives an opportunity to replace wsa:type=”parameter” with wsa:parameter=”true” and avoid yet-another-type.
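Concretely, the adopted resolution amounts to something like the sketch below. The attribute name reflects the working draft discussed here (it could still change before the final Recommendation), and the x:instance-id header is a made-up example.

  <soap:Header xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsa="http://www.w3.org/2005/08/addressing"
               xmlns:x="urn:example:opaque">
    <!-- this header is only here because the EPR used by the sender contained it -->
    <x:instance-id wsa:type="parameter">4A7C-0038</x:instance-id>
  </soap:Header>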

Going back to issue #8, I am glad the group somewhat acknowledged the problem but this doesn’t solve it. For two reasons:

  1. The schema of the header element might not allow this attribute to be added
  2. Even if the attribute is present, the recipient of the SOAP message can pretend to not understand it (“I don’t care about this attribute and all the other WS-A crap, all I know is that you sent me the ‘I owe you $1,000’ header and you signed it so now pay up!”)

The way to solve both of these problems would have been to create one additional SOAP header (in the WS-A namespace) that could be marked mustUnderstand=”true” and that points to all the headers in the message that are there because “the EPR told me so”. I proposed this (the wsa:CoverMyRearside approach) in the one and only message I have sent to the WS-A WG, but obviously I wasn’t very convincing. Since I don’t participate in the WG I might never understand what is wrong with this approach. What really surprises me is that it wasn’t even considered. The issues list shows only 3 proposals, but the minutes of the face to face show that there was a 4th option considered, which comes a bit closer. Basically, it is the same as the adopted solution with the addition of a WS-A-defined header (that could be marked mustUnderstand=”true”) which states that “by processing this header you agree that you understand the wsa:type attribute when present in other headers”. This is not very elegant to my mind and doesn’t solve (1), but it does solve (2).
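For reference, here is a minimal sketch of what the wsa:CoverMyRearside approach could look like. Everything in it other than soap:mustUnderstand is invented for illustration (including how the header points at the others, done here with made-up id/ref attributes), since the proposal was never specified in detail.

  <soap:Header xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsa="http://www.w3.org/2005/08/addressing"
               xmlns:x="urn:example:opaque">
    <x:instance-id x:id="h1">4A7C-0038</x:instance-id>
    <x:IOweYou x:id="h2">1000 USD</x:IOweYou>
    <!-- "I am echoing the headers listed below because an EPR told me to;
         I attach no semantics of my own to them." A receiver that does not
         understand this header must reject the message. -->
    <wsa:CoverMyRearside soap:mustUnderstand="true">
      <wsa:HeaderRef ref="h1"/>
      <wsa:HeaderRef ref="h2"/>
    </wsa:CoverMyRearside>
  </soap:Header>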

Interestingly, even though the wsa:CoverMyRearside header approach was not chosen by the WG, nothing stops me from using it: I can define and use a header in my namespace that I’ll mark mustUnderstand=”true” and that will point to the headers that are there only because “the EPR told me so”. The problem of course is that my namespace is not going to be nearly as well-known as the WS-A namespace, and most people will fail to process my messages because they contain a mustUnderstand=”true” header they do not understand. So in practice I can’t do this. That being said, I wouldn’t be surprised if some spec somewhere one day decided that the mechanism in WS-Addressing is not good enough and took on the task of defining such a header because it needs this protection.

Comments Off on @wsa:type=”parameter” instead of

Filed under Everything, Security, Standards, Tech

From UPS to EPR

A package was shipped to me through UPS. As usual, I received an email message informing me that it had shipped and giving me a URI to track its progress. This is what the URI looked like (after changing the tracking number and inserting a few line breaks):

    http://wwwapps.ups.com/WebTracking
    /processRequest?HTMLVersion=5.0&Requester=NES
    &AgreeToTermsAndConditions=yes&loc=en_US
    &tracknum=123123123123

The interesting thing to notice is that there is a parameter in the URI, called “AgreeToTermsAndConditions”, and its value is set to “yes”. If you do a GET on this URI, you will receive, as expected, the description of the status of the shipment. On the other hand, if you go to the UPS Web site armed with just the tracking number, you have to check a box that reads “By selecting this box and the ‘Track’ button, I agree to these Terms and Conditions” before you can track the shipment. It seems pretty clear that the “AgreeToTermsAndConditions” parameter is in the URI so that the email link can plug into the same Web application as the Web page, and that this application was designed to check that the parameter is present and set to “yes”.

This has several similarities with some of the discussions around WS-Addressing. First, it illustrates the need to plug into an application in a place where the application might not have been designed to let you do so. In this case, we can imagine that the tracking application was designed with the assumption that people would only invoke it from the UPS web site. One day UPS decided it would be nice to send in email a link that takes people directly to the status of a particular shipment, rather than just tell them to “go to ups.com and enter your tracking ID in the tracking form”. One important reason for pushing back on the idea of wrapping reference properties is that it would prevent such a scenario in a SOAP world. For this reason I agree that a wrapper for reference properties is a bad idea. If reference properties remain in WS-Addressing, the way to fix the mechanism is to leave them as SOAP headers but add a WS-Addressing-defined SOAP header that lists the headers which were added to a message only because an EPR requires them, with no implication that the sender stands behind any semantics that might be attached to them.

When I write about “fixing” reference properties in the previous sentence, I am referring to the fact that the current version of WS-Addressing creates a lot of confusion as to the implications of sending SOAP headers and whether I can be held liable for anything that is in a SOAP header I send (and potentially sign). This is the second thing that this UPS URI illustrates by analogy. As a human I get a sense of what a parameter called “AgreeToTermsAndConditions” corresponds to (even though the URI doesn’t tell me what these terms and conditions are). But what if the parameter name were shortened to its acronym, “ATTAC”? In any case, I am not expected to understand the composition of this URI; I should be able to treat it as opaque. Just like reference properties. And for this reason, when I do a GET on the URI I am not bound by whatever the presence of this parameter in the URI is assumed by UPS to mean. This means that I can NEVER be bound by the content of a URI I dereference, because how can one prove that I understand the semantics of the URI? Even when I fill in a form on a Web site, I don’t know (unless I check the HTML) how the resulting URI (assuming the form uses GET) will come out. There might well be a hidden parameter in the form.

In a SOAP world, this can be fixed by meaningful, agreed-upon headers. If people agree that by sending (and signing) a SOAP header you are making a statement, then you can build systems that rely on the ability to make such statements as “I understand and agree to the set of terms and conditions identified by a given QName”. But this breaks down if people are able to say “I didn’t understand I was saying this, I was just echoing what you told me to say”. This is what the current WS-Addressing spec allows, and what needs to be fixed. Let’s see how the UPS URI could translate to an endpoint reference. For this, we need to consider two scenarios, both of them equally valid.

Scenario 1: legacy integration

In this scenario, the UPS people decide they do not require the invoker to actually make a statement about the terms and conditions. They just need a SOAP header called “I agree to the terms and conditions” to be set to true, because this is how their application is currently programmed (it will fail if it doesn’t see it). In this case, it is perfectly reasonable to put an “I agree to the terms and conditions” element in the EPR as a reference property; this element will be sent as a SOAP header, preventing the application from failing. But in order for SOAP headers to be usable for making statements in other cases, there needs to be a way to express when, as in this scenario, the invoker is not making a statement by including one. Thus my earlier proposal of a wsa:CoverMyRearside header (that points to all the headers I include for no reason other than because an EPR asks me to). The other option, as written down by Dims, is to add an attribute to each such header. But there are two main drawbacks to this approach: (1) unlike a new SOAP header, I can’t mark an attribute “mustUnderstand=true” (Dims’ initial proposal actually had a SOAP header with “mustUnderstand=true” for this very reason) and (2) some elements might not allow arbitrary attributes to be added at the top level.
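A minimal sketch of this scenario: the ups namespace, the SOAP endpoint address and the element names are all invented; only the parameter names are borrowed from the URI above.

  <!-- EPR handed out by UPS: the "agree" element is just an opaque
       reference property the invoker echoes back -->
  <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing"
                         xmlns:ups="urn:example:ups-tracking">
    <wsa:Address>http://wwwapps.ups.com/WebTracking/soap</wsa:Address>
    <wsa:ReferenceParameters>
      <ups:AgreeToTermsAndConditions>yes</ups:AgreeToTermsAndConditions>
      <ups:tracknum>123123123123</ups:tracknum>
    </wsa:ReferenceParameters>
  </wsa:EndpointReference>

The invoker would echo both elements as SOAP headers and list them in a wsa:CoverMyRearside-style header, making clear that it stands behind neither of them.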

Scenario 2: meaningful header

In this second scenario, the UPS people want to make sure, before they return the tracking information, that the invoker has made a statement that it understands and agrees to the terms and conditions. In this case, it makes no sense to put an “I agree to the terms and conditions” element in the EPR as a reference property, since what is intended is not for such an element to be echoed blindly but for it to be used to make an explicit statement. In this scenario, the EPR sent by UPS would contain all the opaque elements, those sent for the benefit of the UPS application but by which the invoker is not making a statement (from the look of the URI, this would be “HTMLVersion”, “Requester”, “loc” and “tracknum”). But the “agree to terms and conditions” header would not be specified as a reference property; it would be listed in the WSDL description of the service. And when the invoker sends this header, it would not be included in the wsa:CoverMyRearside header, because the invoker is indeed making a statement by sending it.
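And a sketch of scenario 2, using the same made-up names as above: only the opaque parameters travel in the EPR, while the “agree” header is one the invoker adds knowingly because the service’s WSDL requires it.

  <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing"
                         xmlns:ups="urn:example:ups-tracking">
    <wsa:Address>http://wwwapps.ups.com/WebTracking/soap</wsa:Address>
    <wsa:ReferenceParameters>
      <ups:HTMLVersion>5.0</ups:HTMLVersion>
      <ups:Requester>NES</ups:Requester>
      <ups:loc>en_US</ups:loc>
      <ups:tracknum>123123123123</ups:tracknum>
    </wsa:ReferenceParameters>
  </wsa:EndpointReference>

  <!-- Header added deliberately by the invoker (declared in the WSDL, not in
       the EPR) and therefore not listed in any wsa:CoverMyRearside header -->
  <ups:AgreeToTermsAndConditions xmlns:ups="urn:example:ups-tracking">
    yes
  </ups:AgreeToTermsAndConditions>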

Comments Off on From UPS to EPR

Filed under Everything, Security, Standards, Tech