Category Archives: Off-topic

Review of reviews of iPad reviews

Since we’re talking about the third generation of iPads, it seems silly to stop at the “review of reviews” level and we should be meta-to-the-cube. So here’s a review of reviews of iPad reviews. Because no one asked for it.

Forbes and Barron’s reviews of reviews are pretty much indistinguishable from one another and cover the same original reviews (with the only difference that Forbes adds a quick quote from John Gruber). And of course, since they both cater to people who see significance in daily stock prices, they both end by marveling that the Apple stock flirted today with the $600 mark.

Om Malik’s review of reviews wins the “most obvious laziness” prize (in a category that invites laziness), but if you really want to know ahead of time how many words each review will inflict on you then he’s got the goods.

Daniel Ionescu’s review of reviews for PCWorld is the most readable of the lot and manages to find a narrative flow among the links and quotes.

The CNET review of reviews comes a close second and organizes the piece by feature rather than by reviewer. Which makes it more a “review-based review” than a “review of reviews” if you’re into that kind of distinction.

The Washington Post’s review of reviews just slaps quote after quote and can be safely avoided.

You know what you have to do now, don’t you? No, I don’t mean write something original, are you crazy? I mean produce a review of reviews of reviews of reviews, of course. This is 2012.


Filed under Apple, Articles, Off-topic

Notes from buying a new car online for the second time

In case you’re in the market for a new car, these few data points about a recent on-line buying experience may be of interest.

Here’s an interesting view of Search Engine Optimization (SEO) from the perspective of a car dealership. I mention this for two reasons:

a) The article is from AutoNews, a site which mostly caters to dealers and others in the car business. Its RSS feed is worth subscribing to in the few months that precede a car purchase. It’s a very different perspective from the consumer-oriented car sites out there.

b) I find it hard to reconcile this article with my experience, as I’ll describe in more detail below.

To a large extent, car dealers’ interest in SEO is unsurprising. What business doesn’t care about its Google rank? But, having just bought a Toyota Sienna on-line last week, I have a hard time reconciling dealers’ efforts to get shoppers to their site with how bad the experience is once you get there.

Try and find an email address on the site of a Toyota dealership in Silicon Valley. I dare you. So, forget the email blast. Instead, you have to use stupid “contact us” forms in which you have to copy/paste multiple fields to simply describe what car you’re looking for. And they must know you’re copy/pasting, otherwise why would they make the “comments” section so ridiculously small? And the phone number is a “required” field? As if.

I understand that it makes it easier for “lead tracking” software if the transaction starts with one of these forms, but after spending money on SEO do you really want to refuse to talk to customers in the way they prefer? Compare this to going to a dealership in person. They’ll talk to you in their office, they’ll talk to you in the showroom, they’ll talk to you in the parking lot in the rain and, gender permitting, they’ll probably talk to you in the bathroom.

I know there are sites (including manufacturer sites) which offer to email local dealers for you, but I don’t know what arrangement they have and I don’t want to initiate a price negotiation in which the vendor already owes a few hundred dollars to a referrer if we strike a deal. I want to initiate direct contact.

At least this year I didn’t have to convince vendors to negotiate the price over email. The first time I bought a car in this manner, three years ago, over half of the dealers I corresponded with had no interest in going beyond an over-inflated introductory quote, followed by efforts to get me to “come talk at the dealership”, with comments such as “if I give you a price, what tells me you’re not going to take it to another dealership to ask them to beat it?”. Well, that’s the whole point, actually. Nowadays (at least among Toyota dealerships in the San Francisco Bay Area) they have “internet sales managers” to do just that.

Once you clear the hurdle of contacting vendors on their web sites, the rest of the interaction is quite painless. “Internet sales managers” deserve their title. In my experience, most of them have no problem doing everything over email and respond in a straightforward way as long as you’re specific in your requests. I never once talked to anyone on the phone. And when I came to take delivery of the car, all had been agreed and my entire visit took one hour, most of it spent doing email while they cleaned the car. I don’t know why anyone would buy a car any other way. The total amount of time I spent on the whole process was less than it would have taken me to go to the closest dealership and negotiate one price.

As a side note, I used TrueCar (under a separate email account, of course) to get an idea of the price and I ended up paying $450 less than the TrueCar proposal. When contacted via their web site, that dealer initially gave me the same offer they had submitted via TrueCar, and we went down from there, based on competing offers from other dealers. I never mentioned the TrueCar offer to them.

Another side note: MSRP is actually very useful. Not as an indication of the price to pay, of course, but as a checksum on the level of equipment of the car. It doesn’t change between dealers and allows you to ensure that you’re comparing cars with the same equipment level. Of course any decent programmer would scream that it’s a very bad checksum, if only because two options could cost the same, but it worked just fine for my purpose.

I still think that the whole third-party-dealership model is fundamentally broken. The on-line buying process doesn’t fix it, it’s just an added layer that hides some of the issues. As we say in computer science, there’s no problem that can’t be solved by adding another level of indirection. We say this tongue-in-cheek because it’s both true (in the short term) and horribly false (in the long term). The same applies to the US car sales process.

As a side note, now that the family is equipped with a flagship I have a 2001 VW Golf Turbo (manual transmission) to sell if anyone in the Bay Area is interested…


Filed under Off-topic, Uncategorized

Whose nonsense exactly?

Completely off-topic, but many people are on holidays, so why not. I tried to fit this into a tweet, but to no avail.

Under the title “That Exact Nonsense”, John Gruber posts a quote from Penn Jillette’s book “God, No!: Signs You May Already Be an Atheist” (I left Gruber’s ID in the Amazon URL since he deserves his cut in this context).

There is no god and that’s the simple truth. If every trace of any single religion died out and nothing were passed on, it would never be created exactly that way again. There might be some other nonsense in its place, but not that exact nonsense. If all of science were wiped out, it would still be true and someone would find a way to figure it all out again.

It’s compelling and I personally believe it’s true. But unfortunately it doesn’t prove anything. Clearly, Jillette doesn’t believe that there was a divine revelation for any of the existing religions, but rather that they emanated from human imagination. If that’s the case then yes, the next time around it will come out somewhat differently. But what if there was a divine revelation? What would stop the deity from repeating it if its message got lost?

Jillette only proves that religion is a human invention (not a revelation of a larger truth) if you accept the hypothesis that it is a human invention.

To be fair, Jillette doesn’t claim it’s a proof, but the way the quote is making the rounds (Gruber credits Kottke for the find who in turn credits mlkshk who got it from imgur) seems to suggest that it’s being heralded as such, rather than as a compelling-sounding tautology.


Filed under Off-topic

Why I don’t use iTunes metadata

I am taking quite a beating in the comments section of my previous post. Apparently I am a soon-to-be-crestfallen old man with OCD (if I combine Kelstar’s comment with the one from “Mr. D.”). Thankfully there are also messages from fellow Luddites who support my alternative life choice.

The object of the scorn I’m getting? Nothing to do with the Automator scripts I shared (since I am new to Automator I was hoping to get some feedback/corrections/suggestions on that). It’s all about the use case that led me to Automator in the first place, my refusal to rely on iTunes metadata to organize my music collection.

Look, I’m a software engineer. My employer (Oracle) knows a thing or two about structured data. I work in systems management, which is heavily model-driven. As an architect I really care about consistent modeling and not overloading data fields. I’m also a fan of Semantic Technologies. I know proper metadata is the right way to go. As a system designer, that is, I know it; any software I design is unlikely to rely on naming conventions in file names.

But as a user, I have other priorities.

As a user, my goal is not to ensure that the application can be maintained, supported and evolved. My goal is to protect the data. And I am very dubious of format-specific metadata (and even more of application-specific metadata) in that context, at least for data (like music and photos) that I plan to keep for the long term.

I realize that ID3 metadata is not iTunes-specific, but calling it a “standard” that’s “not gonna change”, as another commenter, Vega, does, is pretty generous (I’m talking as someone who actually worked on standards in the last decade).

Standard or not, here are a few of the reasons why I don’t think format-specific metadata is a good way to organize my heirloom data and why I prefer to rely on directory names (I use the artist name for my music directories, and yyyymmdd-description for photos, as in 20050128-tahoe-ski-trip).

[Side note: as you can see, even though I trust the filesystem more than format-specific metadata I don’t even fully trust it and stubbornly avoid spaces in file and especially directory names.]
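[To make the convention concrete, here is a minimal sketch of what it buys you: the key metadata can be recovered from the directory name with a few lines of code in any language. Python here; `parse_photo_dir` is just an illustrative name, not anything I actually run.]

```python
import re
from datetime import date

def parse_photo_dir(name):
    """Split a 'yyyymmdd-description' directory name into its parts.

    Returns (date, description), or None if the name doesn't follow
    the convention. For example:
    "20050128-tahoe-ski-trip" -> (date(2005, 1, 28), "tahoe-ski-trip")
    """
    m = re.match(r"^(\d{4})(\d{2})(\d{2})-(.+)$", name)
    if m is None:
        return None
    y, mo, d, desc = m.groups()
    return date(int(y), int(mo), int(d)), desc
```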

Some of the pitfalls of format-specific metadata:

  • Metadata standards may guarantee that the same fields will be present, but not that they will be interpreted in the same way, as proven by the fact that my MP3 files carry widely inconsistent metadata values depending on their provenance (e.g. “Beattles” vs. “The Beattles” vs. “Beattles, The”).
  • I can read and edit file names from any programming language. Other forms of metadata may or may not be accessible.
  • I don’t have to download/open the file to read the file name. I know exactly which files I want to FTP just by browsing the remote directories.
  • I often have other types of files in the same directory. Especially in my photo directories, which usually contain JPEGs but may include some images in raw format or short videos (AVI, MPEG, MOV…). If I drop them in the 20050128-tahoe-ski-trip directory it describes all of them without having to use the right metadata format for each file type.
  • File formats die. To keep your data alive, you have to occasionally move from one to the other. Image formats change. Sound formats change. The filename doesn’t have to change (other than, conventionally, the extension). Yes, you still have to convert the actual content but keeping the key metadata in file or directory name makes it one less thing that can get lost in translation.
  • Applications have a tendency to muck with metadata without asking you. For example, some image manipulation applications may strip metadata before releasing an image to protect you from accidental disclosure. On the other hand, applications (usually) know better than to muck with directory names without asking.
  • You don’t know when you’re veering into application-specific metadata. I see many fields in iTunes which don’t exist in ID3v2 (and even fewer in ID3v1) and no indication, for the user, of which are part of ID3 (and therefore somewhat safer) and which are iTunes-specific. It’s very easy to get locked in application-specific metadata without realizing it.
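[The inconsistency problem from the first bullet is worth spelling out: even a naive cleanup pass has to guess at tagging conventions. A hedged sketch, handling only one of the many variants; `canonical_artist` is an illustrative name, not a real tool:]

```python
def canonical_artist(raw):
    """Collapse one common tagging variant: "X, The" -> "The X".

    Real collections need far more than this: spelling fixes (such as
    "Beattles" -> "Beatles") require a reference list, not string rules.
    """
    name = " ".join(raw.split())  # normalize stray whitespace
    if name.lower().endswith(", the"):
        name = "The " + name[: -len(", The")].rstrip(",").strip()
    return name
```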

[In addition to the issues with format-specific metadata listed above, application-specific metadata has many more, including the fact that the application may (will) disappear and that, usually, the metadata is not attached to the data files. That kind of metadata is a non-starter for long-term data.]

So what do I give up by not using proper metadata?

  • I give up richness. But for photos and videos I just care about the date and a short description, which fit nicely in the directory name. For music I only care about the artist, which again fits in the directory name (albums are meaningless to me).
  • I give up the ability to have the organization of my data reflected in metadata-driven tools (if, like iTunes, they refuse to consider the filesystem structure as meaningful). Or, rather than giving it up, I would say it makes it harder. Either there is a way to automatically transfer the organization reflected in my directories into the right metadata (as I do in the previous post for iTunes), or there isn’t, in which case I definitely don’t want anything to do with software that operates on locked-away metadata.
  • I also give up advanced features that use the more exotic metadata fields. But I am not stripping any metadata away. If it happens to all be there in my files, set correctly and consistently, then I can use the feature. If it isn’t (and it usually isn’t) then that piece of metadata hasn’t reached the level of ecosystem maturity that makes it useful to me. I have no interest in manually fixing it and I just ignore it.

I’m not an audiophile. I’m not a photographer. I have simple needs. You may have more advanced use cases which justify the risk of relying on format-specific metadata. To me, the bargain is not worth it.

I’m not saying I’m right. I’m not saying I’m not a grumpy old man. I’m just saying I have my reasons to be a grumpy old man who clutches his filesystem. And we’ll see who, of Mr. D. and me, is crestfallen first.


Filed under Apple, Everything, Modeling, Off-topic, Standards

MacOS Automator workflow to populate iTunes info from file path

Just in case someone has the same need, here is an Automator workflow to update the metadata of a music (or video) file based on the file’s path (for non-Mac users, Automator is a graphical task automation tool in MacOS).

Here’s the context: I recently bought my first Mac, a home desktop. That’s where my MP3s reside now, carefully arranged in folders by artist name. I made sure to plop them in the default “Music” folder, to make iTunes happy. Or so I thought, until I actually started iTunes and saw it attempt to copy every single MP3 file into its own directory. Pretty stupid, but easily fixed via a config setting. The next issue is that, within iTunes, you cannot organize your music based on the directory structure. All it cares about is the various metadata fields. You can’t even display the file name or the file path in the main iTunes window.

Leave it to Apple to create a Unix operating system which hates files.

The obvious solution is to dump iTunes and look for a better music player. But there’s another problem.

Apparently music equipment manufacturers have given up on organizing your digital music and surrendered that function to Apple (several models let you plug in an SD card or a USB key, but they don’t even try to give you a decent UI to select the music from these drives). They seem content to just sell amplifiers and speakers, connected to an iPod dock. Strange business strategy, but what do I know. So I ended up having to buy an iPod Classic for my living room, even though I have no intention of ever taking it off the dock.

And the organization in the iPod is driven by the same metadata used by iTunes, so even if I don’t want to use iTunes on the desktop I still have to somehow transfer the organization reflected in my directory structure into Apple’s metadata fields. At least the artist name; I don’t care for albums, albums mean nothing. And of course I’m not going to do this manually over many gigabytes of data.

The easy way would be to write a Python script since apparently some kind souls have written Python modules to manipulate iTunes metadata.
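For what it’s worth, the core of such a script is tiny. A sketch, under the assumption that mutagen (one of those Python modules) is installed; `artist_for` is just an illustrative helper, not part of any library:

```python
from pathlib import Path

def artist_for(mp3_path):
    # The artist is simply the name of the parent folder, e.g.
    # /Users/vbp/Music/Leonard Cohen/suzanne.mp3 -> "Leonard Cohen"
    return Path(mp3_path).parent.name

# Writing it back would use an ID3 library such as mutagen (assumed installed):
#   from mutagen.easyid3 import EasyID3
#   for mp3 in Path("/Users/vbp/Music").rglob("*.mp3"):
#       tags = EasyID3(str(mp3))
#       tags["artist"] = artist_for(mp3)
#       tags.save()
```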

But I am still in my learn-to-use-MacOS phase, so I forced myself to use the most MacOS-native solution, as a learning experience. Which took me to Automator and to the following workflow:

OK, I admit it’s not fully MacOS-native, I had to escape to a shell to run a regex; I couldn’t find a corresponding Automator action.

I run it as a service, which can be launched either on a subfolder of “Music” (e.g. “Leonard Cohen”) or on a set of files which are in one of these subfolders. It just picks the name of the subfolder (“Leonard Cohen” in this case) and sets that as the “artist” in the file’s metadata.

Side note: this assumes your Music folder is “/Users/vbp/Music”; you should replace “vbp” with your account user name.

For the record, there is a utility that helps you debug workflows. It’s “/System/Library/CoreServices/pbs”. I started the workflow by making it apply to “iTunes files” and later changed it to work on “files and folders”. And yet it didn’t show up in the service list for folders. Running “pbs -debug” showed that my workflow logged NSSendFileTypes=(“”); no matter what. Looks like a bug to me, so I just created a new workflow with the right input type from the start and that fixed it.

Not impressed with iTunes, but I got what I needed.


I’ve improved it a bit, in two ways. First I’ve generalized the regex so that it can be applied to files in any location and it will pick up the name of the parent folder. Second, I’m now processing files one by one so that they don’t all have to be in the same folder (the previous version grabs the folder name once and applies it to all files, the new version retrieves the folder name for each file). This way, you can just select your Music folder and run this service on it and it will process all the files.

It’s pretty inefficient and the process can take a while if you have lots of files. You may want to add another action at the end (e.g. play a sound or launch the calculator app) just to let you know that it’s done.

In the new version, you need to first create this workflow and save it (as a workflow) to a file:

Then you create this service which references the previous workflow (here I named it “assign artist based on parent folder name”). This service is what you invoke on the folders and files:

I haven’t yet tried running more than one workflow at a time to speed things up. I assume the variables are handled as local variables, not global, but it was too late at night to open this potential can of worms.


Filed under Apple, Automation, Everything, Off-topic

When piracy has systemic consequences

Interesting “Planet Money” podcast on “How The U.S. Gave S&P Its Power”. I found this part especially illuminating:

But the business [of the rating agencies] started to change in the late 1960s. Instead of charging investors, the rating agencies started to take money from the issuers of the bonds. White blames the shift on the invention of “the high-speed photocopy machine.”

The ratings agencies were afraid, he says, that investors would just pass around rating information for free. So they had to start making their money from the company side. Even this seemed to work pretty well for a long time.

The short term consequences of intellectual piracy may be easy to rationalize: I can imagine that, back in the 60s, stiffing the rich and powerful credit rating agencies by using new technology (the copy machine) to duplicate a report didn’t feel very consequential and maybe even cool. But then there are long term consequences. Like, in this case, a reversal of the money flow which ended up with the issuers themselves paying the rating agencies. A fundamentally unsustainable system which, many years later, gave us credit agencies who coached their customers, the issuers, on how to craft deceptively-rated securities.

We have many examples of too much government intervention (like tax policies that encourage borrowing) setting up the conditions for a crisis, but this appears to be an example of the reverse: a lack of government intervention (to ensure that the IP of credit agencies is respected) being part of what caused a systemic problem to form.

There are similar situations being created today. I cringe when I see advertising (and the gathering of personal data) becoming such a prevalent monetization model, for lack of more direct alternatives. I don’t know what form the ensuing crisis will eventually take, but it may be just as bad as the debacle of the credit rating system.

I’m not saying more IP protection is always better. The patent system is an example of the contrary. It’s a difficult balance to achieve. But looking at the money flow is a good gauge of how well the system is working. The more directly the money flows from the real producers to the real consumers, the better and the more sustainable the system is. Today, it isn’t so for credit rating agencies, nor for much of the digital economy.


Filed under Business, Everything, Off-topic

On resisting shiny objects

The previous post (compiling responses to my question on Twitter about why there seems to be more PaaS activity in public Clouds than private Clouds) is actually a slightly-edited repost of something I first posted on Google+.

I posted it as “public” which, Google+ says, means “Visible to anyone (public on the web)”. Except it isn’t. If I go to the link above in another browser (not logged to my Google account) I get nothing but an invitation to join Google+. How AOLy. How Facebooky. How non-Googly.

Maybe I’m doing something wrong. Or there’s a huge bug in Google+. Or I don’t understand what “public on the web” means. In any case, this is not what I need. I want to be able to point people on Twitter who saw my question (and, in some cases, responded to it) to a compilation of answers. Whether they use Google+ or not.

So I copy/pasted the compilation to my blog.

Then I realized that this is obviously what I should have done in the first place:

  • It’s truly public.
  • It brings activity to my somewhat-neglected blog.
  • My blog is about IT management and Cloud, which is the topic at hand, my Google+ stream is about… nothing really so far.
  • The terms of use (for me as the writer and for my readers) are mine.
  • I can format the way I want (human-readable text that acts as a link as opposed to having to show the URL, what a concept!).
  • I know it will be around and available in an open format (you’re probably reading this in an RSS/Atom reader, aren’t you?)
  • There is no ad and never will be any.
  • I get the HTTP log if I care to see the traffic to the page.
  • Commenters can use pseudonyms!

It hurts to admit it, but the thought process (or lack thereof) that led me to initially use Google+ goes along the lines of “I have this Google+ account that I’m not really using and I am sympathetic to Google+ (for reasons explained by James Fallows plus a genuine appreciation of the technical task) and, hey, here is something that could go on it so let’s put it there”. As opposed to what it should have been: “I have this piece of text full of links that I want to share, what would be the best place to do it?”, which screams “blog!” as the answer.

I consider myself generally pretty good at resisting shiny objects, but obviously I still need to work on it. I’m back to my previous opinion on Google+: it’s nice and well-built but right now I don’t really have a use for it.

I used to say “I haven’t found a use for it” but why should I search for one?


Filed under Big picture, Everything, Google, Off-topic, Portability, Social networks, Twitter

“Toyota Friend”: It’s cool! It’s social! It’s cloud! It’s… spam.

Michael Coté has it right: “all roads lead to better junk mail.”

We can take “road” literally in this case since Toyota has teamed up with Salesforce to “build Toyota Friend social network for Toyota customers and their cars”.

If you’re tired of “I am getting a fat-free decaf latte at Starbucks” FourSquare messages, wait until you start receiving “my car is getting a lead-free 95-octane pure arabica gas refill at Chevron”. That’s because Toyota owners will get to “choose to extend their communication to family, friends, and others through public social networks such as Twitter and Facebook“.

Leaving “family and friends” aside (they will beg you to), the main goal of this social network is to connect “Toyota customers with their cars, their dealership, and with Toyota”. And for what purpose? The press release has an example:

For example, if an EV or PHV is running low on battery power, Toyota Friend would notify the driver to re-charge in the form of a “tweet”-like alert.

That’s pretty handy, but every car I’ve ever owned has sent me a “tweet-like” alert in the form of a light on the dashboard when I got low on fuel.

Toyota’s partner, Salesforce, also shares its excitement about this (they bring the Cloud angle), and offers another example of its benefits:

Would you like to know if your dealer’s service department has a big empty space on its calendar tomorrow morning, and is willing to offer you a sizable discount on routine service if you’ll bring the car in then instead of waiting another 100 miles?

Ten years ago, the fancy way to justify spamming people was to say that you offered “personalization”. Look at this old advertisement (which lists Toyota as a customer) about how “personalization” is the way to better connect with customers and get them to buy more. Today, we’ve replaced “personalization” with “social media” but it’s the exact same value proposition to the company (coupled with a shiny new way to feed it to its customers).

BTW, the company behind the advertisement? Broadvision. Remember Broadvision? Internet bubble darling, its share price hit over $20,000 (split-adjusted to today). According to the ad above, it was at the time “the world’s second leading e-commerce vendor in terms of licensing revenues, just behind Netscape and ahead of Oracle, IBM, and even Microsoft” and “the Internet commerce firm listed in Bloomberg’s Top 100 Stocks”. Today, it’s considered a Micro-cap stock. Which reminds me, I still haven’t gotten around to buying some LinkedIn…

Notice who’s missing from the list of people you’ll connect to using Toyota’s social network? Independent repair shops and owners forums (outside Toyota). Now, if this social network was used to let me and third-party shops retrieve all diagnostic information about my car and all related knowledge from Toyota and online forums that would be valuable. But that’s the last thing on earth Toyota wants.

A while ago, a strange-looking icon lit up on the dashboard of my Prius. Looking at it, I had no idea what it meant. A Web search (which did not land on Toyota’s site of course) told me it indicated low tire pressure (I had a slow leak). Even then, I had no idea which tire it was. Now at that point it’s probably a good idea to check all four of them anyway, but you’d think that with two LCD screens available in the car they’d have a way to show you precise and accurate messages rather than cryptic icons. It’s pretty clear that the whole thing is designed with the one and only goal of making you go to your friendly Toyota dealership.

Which is why, without having seen this “Toyota Friend” network in action, I am pretty sure I know it will be just another way to spam me and try to scare me away from bringing my car anywhere but to Toyota.

Dear Toyota, I don’t want “social”, I want “open”.

In the meantime, and since you care about my family, please fix the problem that is infuriating my Japanese-American father-in-law: that the voice recognition in his Japan-made car doesn’t understand his accented English. Thanks.


Filed under Automation, Business, Cloud Computing, Everything, Off-topic

How to annoy the French

This is completely off-topic, but I wanted to share a link to a very interesting podcast I played during my commute today. Stanford professor Lera Boroditsky addressed The Long Now Foundation in October last year on the topic of “How Language Shapes Thought”.

It’s a great presentation and she is an excellent speaker. It reminded me of this long article from last summer in the New York Times: “Does Your Language Shape How You Think?”. The talk and the article overlap somewhat, but both are worth listening/reading.

The most memorable topic in both cases is that of geographic languages (but there are many other interesting perspectives). From the NYT article:

In order to speak a language like Guugu Yimithirr, you need to know where the cardinal directions are at each and every moment of your waking life. You need to have a compass in your mind that operates all the time, day and night, without lunch breaks or weekends off, since otherwise you would not be able to impart the most basic information or understand what people around you are saying. Indeed, speakers of geographic languages seem to have an almost-superhuman sense of orientation. Regardless of visibility conditions, regardless of whether they are in thick forest or on an open plain, whether outside or indoors or even in caves, whether stationary or moving, they have a spot-on sense of direction. They don’t look at the sun and pause for a moment of calculation before they say, “There’s an ant just north of your foot.” They simply feel where north, south, west and east are, just as people with perfect pitch feel what each note is without having to calculate intervals. There is a wealth of stories about what to us may seem like incredible feats of orientation but for speakers of geographic languages are just a matter of course. One report relates how a speaker of Tzeltal from southern Mexico was blindfolded and spun around more than 20 times in a darkened house. Still blindfolded and dizzy, he pointed without hesitation at the geographic directions.

But what about annoying the French you may ask. This comes from a small section at the end of Lera Boroditsky’s talk. It’s at 73:54, just before the Q&A starts. Once again, I encourage you to listen to the podcast (or better yet pay for membership and see the video). But here is a transcript of the relevant advice:

People tried to affect some linguistic change and it’s just silly. So, here, US Congress decides to rename “French fries” into “Freedom fries”. This is when France refused to go into the war in Iraq, didn’t want to join our “coalition of the willing” and so this was their way of getting back at them.

Well, that seems stupid, right?

But this is not new.  So, during World War I, for example, everything that had a German-sounding name became Liberty-something-or-other. There is a reason that these kinds of substitutions don’t work and it’s because they’re based on wrong theory about how cognition and language relate to one another.

So, words that you can simply replace one for the other in a language are synonyms, right? So if two words can equally well go into any phrase, that means they have the same meaning, they are synonyms. And so when you make that kind of replacement what you’re saying is “French” is synonymous with “freedom”.

So, “French fries” are “freedom fries”, “French toast” is “freedom toast”, “French poodles” are “freedom poodles”, “French kissing” is “freedom kissing”, and then we have “freedom manicures”… But what should we call France then? Freedomland? And French would be “the language of freedom”? It’s setting up the wrong kind of mapping.

What I want to suggest is if we understand, really, how language and thought interact, and mind, we can even be nationalistic in a more effective manner. So if we really want to annoy the French, I say take all the things that the French hate and call them French. That will really annoy them.

For example, ketchup becomes “French sauce”, McDonald’s will be “the French cafe”, shorts will be “French pants”, mimosas will be “French cocktails”, Disneyland will be “France”, Americans will be “French people”, the English language will be called “French”. That will get them.


Filed under Everything, Off-topic

Let me explain, officer

I am not in the habit of using a camera in public bathrooms, but since I haven’t written any post in the CrazyStats category for a while I figured this was worth taking the risk of being arrested.  Last weekend, I had the honor of using a urinal which “saves 88% more water than a one gallon urinal”. A completely meaningless statement that masquerades as a statistic (presumably they mean “uses 88% less water”). How much water does a one gallon urinal save? I know how much it consumes (one gallon) but how do you define how much it saves? Compared to what? To a standard one gallon model? Well, a one gallon doesn’t save anything compared to a one gallon, so the urinal I used (if it uses less than one gallon) actually saves infinitely more water than a one gallon urinal. If you are going to make meaningless claims, why stop at 88%?

Marketing claims based on meaningless statistics. It’s not just for Cloud Computing.

1 Comment

Filed under CrazyStats, Off-topic

There should be a word for this (Blog/Twitter edition) part 3

Resuming where we left off (part 1 , part 2), here are more words that we need in the age of Twitter and blogs.

#17 The feeling of elation when writing an IM which you know is approaching 140 characters, your fingers start to tense but you go past it and nothing happens, nothing turns red, and you suddenly feel so free to express yourself. Also works on IRC.

#18 The art of calibrating how many hints to drop that there is a joke/pun/double-entendre in your tweet. Some jokes are best delivered with a straight face (e.g. without a smiley or a #humor tag), and readers derive more pleasure from less obvious jokes. But the risk is that the joke will go completely unnoticed as people hurriedly scan their timelines.

#19 The trauma of temporarily switching back to a “feature phone” (e.g. a basic clamshell) while waiting for the replacement of a broken smartphone to arrive. In response to this request, Lori MacVittie suggested “retrotrauma” which I like a lot though I may shorten it to “retrauma” or “retroma”.

#20 The intuition that the thought you just had is original enough to interest your readers but probably not original enough to not have been tweeted already. The quasi-certitude that doing a Twitter search on it would find previous occurrences, thereby making you an involuntary plagiarist. The refusal to perform such a search (in violation of the “Google before you Tweet is the new Think before you Speak” adage) before writing your tweet. Or, on the other side, the abandonment of a tweet idea based on the assumption that it’s already out there. E.g. I could think of a few jokes on the HP “invent” tagline in the wake of Mark Hurd’s resignation (“HP Invent… business expenses”) but didn’t bother, based on the assumption that these tweets were already doing the rounds.

#21 A brand, especially a personal one (e.g. Twitter ID, domain name…), that has aged badly because it uses a now-out-of-favor buzzword. Like, soon enough, everything with “Cloud” in it. I still remember, over 10 years later, laughing out loud when I heard a KQED radio program sponsored by Busse Design USA, which was inviting us to visit them at “”. This was in the late nineties, when “portals” were the hot thing on the Internet (as well as the “my” prefix, when Yahoo and others got into personalization). I am happy to see that they are now using a much more reasonable domain name, but Yahoo’s calcified directory still bears witness to their hubris. Look for Busse Design on this listing.

#22 Someone who has never been on-line. I don’t personally feel the need for a new term for this, but we have to find an alternative to this most unfortunate and ambiguous coinage: “digital virgins” (as in “30 percent of Europeans are ‘digital virgins'”)

#23 Chris Hoff wanted a term to describe “someone who tries to escape from the suffocating straight jacket of disingenuousness exposed by their own Twitter timeline.” His proposal: “tweetdini”

As always, submissions are welcome in the comments if you think you’ve coined the right term for any of these.


Filed under Everything, Off-topic, Twitter

The Way of the Weasel

Say you want to play the tough guy on Twitter, but would rather not be taken to task on your proclamations. Here is a technique you can use to publicly insult/challenge/criticize someone by name without them knowing about it.

Let’s assume you want to challenge Internet darling Chuck Norris to a duel, but aren’t too sure that the result of an actual fight would look like The Way of the Dragon (with you as Bruce Lee and Chuck as Chuck). So you would prefer that he didn’t hear about your challenge. Here is the process to follow.

  • First, in the “settings” page of your Twitter account, check the “protect my tweets” option.
  • Then write your challenge tweet, e.g. “I challenge @ChuckNorris to a fight to death but the coward will probably never dare to answer this tweet.”
  • Then, back on the “settings” page, uncheck “protect my tweets”.

Voila. All your followers will see your bravado and Chuck Norris will never hear about it. No trace should remain of this subterfuge once it’s over and the whole thing can be done in a couple of seconds.

Note that this only works if Chuck doesn’t follow you directly. This method prevents someone from noticing your tweet in the list of mentions of their @username, but it doesn’t prevent your followers from seeing the tweet. Which is the whole point, since you want your followers to see what a tough guy you are. You would just rather not face the consequences.

Anyway, I just thought this was an interesting corner case. Not that I or any of my readers would ever do this, but be aware that it’s something someone (who takes Twitter too seriously) could do.

1 Comment

Filed under Everything, Off-topic, Twitter

Twitter changes the rules for URLs in tweets: the end of privacy or the end of the 140 character limit?

Twitter has decided that for our good and their own it would be better if any time you click a link in a tweet the request first went to Twitter before being redirected to the intended destination. This blog entry announces the decision, but a lot of the interesting details are hidden in the more technical description of the change sent to the Twitter developers mailing list.

Here is a quick analysis of the announcement and its ramifications.

The advertised benefits

For users:

  • Twitter scans the links for malware, offering a layer of protection
  • It becomes easy to shorten links directly from the Tweet box on (or from any client that doesn’t have a built-in link shortening feature)

For Twitter:

  • they collect a lot of profiling data on user behavior, which can be used to improve the “Promoted Tweets” system (and possibly plenty of other uses)

You don’t have to be much of a cynic to notice that the user benefits are already available today for those who care about them (get a link scanner on your computer, get a Twitter client with built-in link shortening) while the benefit to Twitter is a brand new and major addition…

One interesting side-effect: the erosion of the 140 character limitation

Without going into the technical details of the new system, one change is that each URL will now “cost” 20 characters (out of the 140 allowed per tweet), no matter how long it really is. But in most cases the user will still see the complete URL in the tweet (clients may choose to display something else but I doubt they will, at least by default, except for SMS). So you could now see tweets like (line breaks added):

In the town where I was born / Lived a man who sailed to sea /
And he told us of his life / In the land of submarines,
And.we.lived.beneath.the.waves/In.our.yellow.submarine/,yellow.submarine/ in.yellow.submarine/Yellow.submarine,yellow.submarine.

Based on the Twitter proposal, clicking on this link would send you to a Twitter-operated link shortener (e.g., which would then redirect you to the full URL above. The site hosting the long URL in this example could be trivially set up so that such URLs are all valid and return a clean version of the encoded text (for the benefit of users of Twitter clients that may not show the full URL).

This long-URL example may seem a bit overkill (just post 3 tweets), but if you are only short by 20 or 30 characters and just can’t find another way to shorten the tweet, the temptation to take this easy escape will be strong.

A cool new URL shortening domain

You may have noticed the domain in the previous paragraph. Yes, it’s a real one. That’s the hard-to-beat domain that Twitter was able to score for this. Cute. But frankly I am tired of the whole URL shortening deal, and these short domain names have stopped amusing me. You?


The enforcement mechanism

How, you may wonder, can Twitter ensure that the clicks go through its gateway if the full URL is available as part of the Tweet? Simple: they change the terms of service to forbid doing otherwise. It’s interesting how the paragraph in the email to developers which announces that aspect starts by asking nicely “we really do hope that…”, “please send the user through the link”, “please still send him or her through” and ends with a more constraining “we will be updating the TOS to require you to check and register the click”. Speak softly and carry a big stick.

It will obviously be easy to avoid this, and you won’t have to resort to copy/pasting URLs. Even if the client developers play ball, open source clients can be recompiled. Proxies can be put in front of clients to remember the mapping and do the substitution without ever hitting Plug-ins and Greasemonkey scripts can be developed. Etc. Twitter knows that for sure and probably doesn’t care. As long as by default most users go through, the company will get the metrics it needs. It’s actually to Twitter’s benefit to make it easy enough (but not too easy) to circumvent this, as it ensures that those who care will find a solution and therefore keep using the service without too much of a fuss. We can’t tell you to cheat but we’ll hint that we don’t mind if you do.

The privacy angle

This is a big deal, and disappointing to me. Obviously the hopes I had for Twitter to become the backbone of an open, user-controlled, social data bus are not shared by its management. Until now, Twitter was a good citizen in a world of privacy-violating social networks because the data it shared (your tweets and your basic profile data) had always and unambiguously been expected to be public. Not true with your click stream. An average user will have no idea that when he clicks on a link the request first goes to Twitter. Twitter now has access to identified personal data (the click stream) that its users do not mean to share. I realize that this is old news in a world of syndicated web advertising and centralized analytics, but this is new for Twitter and now puts them in position to mishandle this data, purposely or not, in the way they store it, use it and share it.

I wouldn’t be surprised if they end up forced to offer an option to not go through, with additional metadata in the user profile to inform the Twitter client that it is OK for this user not to go through the gateway when they click on a link. We’ll see.

The impact on the Twitter application ecosystem

The assault on URL shorteners was expected since the Chirp conference (and even before). Most Twitter application developers had already swallowed the “if you just fill a hole in the platform we’ll eventually crush you” message. What’s new in this announcement is that it also shows that Twitter is willing to use its Terms of Service document as a competitive tool, just like Steve Jobs does with Flash. Not unexpected, but it wasn’t quite as clear before.

It’s not a pretty implementation

Here is what it looks like at the API level: the actual tweet text contains the shortened URL. And for each such URL there is a piece of metadata that gives you the URL, the corresponding full-length URL (which you should display instead of the one) and the begin/end character positions of the URL in the original tweet so you can easily pull it out (not sure why you need the end position since it’s always beginning+20, but serialized Twitter messages are already happily duplicative).
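To make this concrete, here is roughly what a client could do with that metadata to show the full URL in place of the shortened one. This is a sketch; the payload field names used here (“url”, “expanded_url”, “indices”) are my assumption based on the description, not Twitter’s exact format:

```python
# Sketch of client-side URL expansion using the per-URL metadata
# described above. The payload shape (a list of URL entities with
# "url", "expanded_url" and "indices" fields) is an assumption based
# on the description, not a verbatim copy of Twitter's format.
def expand_urls(text: str, url_entities: list) -> str:
    """Replace each shortened URL in the tweet text with its
    full-length URL, working right-to-left so that the earlier
    begin/end positions remain valid as the text grows."""
    for ent in sorted(url_entities, key=lambda e: e["indices"][0], reverse=True):
        start, end = ent["indices"]
        text = text[:start] + ent["expanded_url"] + text[end:]
    return text

tweet = {
    "text": "Reading now",
    "entities": {"urls": [{
        "url": "",
        "expanded_url": "",
        "indices": [12, 31],  # begin/end character positions of the URL
    }]},
}
print(expand_urls(tweet["text"], tweet["entities"]["urls"]))
# Reading now
```

Processing the entities right-to-left is the one subtlety: substituting a longer URL for a shorter one shifts every character after it, so earlier indices would be wrong if you went left-to-right.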

As a modeling and serialization geek, I am not impressed by the technical approach taken here. But before I flame Twitter I should acknowledge the obvious: that the Twitter API has seen a rate of adoption several orders of magnitude larger than any protocol I had anything to do with. Still, it would take more than this detail of history to prevent me from pontificating.

For such a young company, the payload of a Twitter message is already quite a mess, mixing duplications, backward-compatible band-aids, real technical constraints and self-imposed constraints. Why, pray tell, do we even need to shorten URLs? If you’re an outsider forced to live within the constraints of the Twitter rules (chiefly, the 140 character limit), they make sense. But if you’re Twitter itself? With the amount of cruft and repetition in a serialized Twitter message, don’t tell me these characters actually matter on the wire. I know they do for SMS, but then just shorten the links in tweets sent over SMS. In the other cases, it reminds me of the frustrating experience of being told by the owner of a Mom-and-Pop store that they can’t accede to your demand because of “company policy”.

Isn’t it time for the text of tweets to contain real markup so that rather than staring at a URL we see highlighted words that point somewhere? Just like… any web page. Isn’t it the easiest for an application to process and doesn’t it offer the reader a more fluid text (Gillmor and Carr notwithstanding)? By now, isn’t this how people are used to consuming hypertext?

Couldn’t the backward compatibility issue of such an approach be solved simply by allowing client applications to specify in their Twitter registration settings that yes, they are able to handle such an earth-shattering concept as a <a href=””></a> element? This doesn’t prevent a URL from “costing” you some fixed number of characters, and it doesn’t prevent the use of a tracker/filter gateway if that’s your business decision.

We’ll see how users (and application developers) react to this change. As fans of Douglas Adams know, the risk of claiming that “all your click are belong to us” is that you expose yourself to hearing “so long, and thanks for all the whale” as an answer…


Filed under Everything, Off-topic, Twitter

Exclusive! Mark Hurd pulls a Steve Jobs on Microsoft

When Mark Hurd read Steve Jobs’ rant against Flash (saying, in effect, “we have to tolerate Flash on our desktops/laptops for now but this piece of crap is not going to soil our iPhones, iPads and iPods”) he must have thought “hey, if I can pull one of these stunts maybe I too will have groupies screaming my name when I unveil our tablet”. Sources tell me he is planning to work on this over the weekend and publish it on on Monday. I was able to get hold of an early draft, which an HP staffer (or Mark himself?) left in a beer garden. Here is what Mark Hurd has to say about Windows for mobile devices:

Thoughts on Windows

HP has a long relationship with Microsoft… [Note from Mark: someone inserts some hypocritical blah blah about how we used to love each other – sure hasn’t been the case since I’ve been here]. Today the two companies still work together to serve their joint customers – HP customers buy a big chunk of Windows licenses – but beyond that there are few joint interests.

I wanted to jot down some of our thoughts on Microsoft’s Windows products so that customers and critics may better understand why we will not use Windows in our phones and tablets. Microsoft will characterize our decision as being primarily business driven – they say we want to taste decent margins for once – but in reality it is based on technology issues. Microsoft will claim that we are a closed system, and that Windows is open, but in fact the opposite is true. Let me explain.

First, there’s “Open”.

Microsoft Windows products are 100% proprietary. They are only available from Microsoft, and Microsoft has sole authority as to their future enhancement, pricing, etc. While Microsoft Windows products are widely available, this does not mean they are open, since they are controlled entirely by Microsoft and available only from Microsoft. By almost any definition, Windows is a closed system.

HP has many proprietary products too. Though the WebOS operating system we’ll use in our phones and tablets is proprietary, we strongly believe that all standards pertaining to the web should be open. Rather than use Windows, HP has adopted HTML5, CSS and JavaScript – all open standards. HP’s mobile devices will all ship with high performance, low power implementations of these open standards. HTML5, the new web standard that has been adopted by HP, Apple, Google and many others, lets web developers create advanced applications without relying on proprietary APIs (like Windows). HTML5 is completely open and controlled by a standards committee.

Second, there’s the “full application ecosystem”.

Microsoft has repeatedly said that HP mobile devices will not be able to access “the full application ecosystem” because 75% of applications are Windows applications. What they don’t say is that almost all these applications are also available in a more modern form, using Web standards and implemented as a service, and usable on HP’s upcoming phones and tablets. Microsoft Office, Outlook, financial software etc all have excellent Web-based alternatives. Users of HP’s phones and tablets won’t be missing many applications.

Third, there’s reliability, security and performance.

Windows has had an awful security record for twenty years. We also know first hand that Windows is the number one reason PCs crash. We have been working with Microsoft to fix these problems, but they have persisted for several years now. We don’t want to reduce the reliability and security of our phones and tablets by using Windows.

In addition, Windows has not performed well on mobile devices. We have routinely asked Microsoft to show us Windows performing well on a mobile device, any mobile device, for a few years now. We have never seen it. Microsoft publicly said that Windows would work well on a device starting with the first Windows CE in 1996. Then came Pocket PC 2000, then Pocket PC 2002, then Windows Mobile 2003, then Windows Mobile 5, 6, 6.1 and 6.5, none of which was any good. And now they say it will be with Windows Phone 7. We think it will eventually ship, but we’re glad we didn’t hold our breath. Who knows how it will perform?

Fourth, there’s battery life.

To achieve long battery life, mobile devices must use thin and efficient software and Windows is anything but that. It only runs on power-hungry Intel processors while the same features can be delivered by much smaller and more efficient processors when using WebOS. Not only does the battery last longer, the devices are lighter and don’t leave burn marks on your clothes.

Fifth, there’s Touch.

Windows was designed for PCs using mice, not for touch screens using fingers. For example, many Windows applications have such a crappy UI that users depend on tooltips to figure out what a button does. They pop up when the mouse arrow hovers over a specific spot. WebOS’s revolutionary multi-touch interface doesn’t use a mouse, and there is no concept of a tooltip. Most Windows applications will need to be rewritten to support touch-based devices. If developers need to rewrite their Windows applications, why not use modern technologies like HTML5, CSS and JavaScript?

Even if HP phones and tablets used Windows, it would not solve the problem that most Windows applications need to be rewritten to support touch-based devices.

Sixth, the most important reason.

Besides the fact that Windows is closed and proprietary, has major technical drawbacks, and doesn’t support touch based devices, there is an even more important reason we will not use Windows on our phones and tablets. Windows is an abstraction layer that covers very different underlying hardware.

We know from painful experience that letting a third party layer of software come between the hardware and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform. If developers grow dependent on third party development libraries and tools, they can only take advantage of hardware enhancements if and when the third party chooses to adopt the new features. We cannot be at the mercy of a third party deciding if and when they will make our enhancements available to our developers.

This becomes even worse if the third party is supplying an operating system that runs on hardware from many vendors. The third party may not adopt enhancements from one platform unless they are available on all of their supported platforms. Hence developers only have access to the lowest common denominator set of features. Again, we cannot accept an outcome where developers are blocked from using our innovations and enhancements because they are not available on our competitor’s platforms.

Windows is a multi-hardware abstraction. It is not Microsoft’s goal to help developers write the best application for HP’s phones and tablets. It is their goal to help developers write applications that will run on Windows devices from all hardware manufacturers. [Note from Mark: should I describe how Microsoft has been getting in the way of how our PCs talk to our printers and making a mess of desktop printing for the last 20 years or is this off-topic?]

Our motivation is simple – we want to provide the most advanced and innovative platform to our developers, and we want them to stand directly on the shoulders of this platform and create the best apps the world has ever seen. We want to continually enhance the platform so developers can create even more amazing, powerful, fun and useful applications. Everyone wins – we sell more devices because we have the best apps, developers reach a wider and wider audience and customer base, and users are continually delighted by the best and broadest selection of apps on any platform.


Windows was created during the PC era – for PCs and mice. Windows is a successful business for Microsoft, and we can understand why they want to push it beyond PCs. But the mobile era is about low power devices, touch interfaces and open web standards – all areas where Windows falls short.

The avalanche of Web-based applications accessible from Web-enabled mobile devices demonstrates that Windows is no longer necessary to access application functionalities of any kind.

New open standards created in the mobile era, such as HTML5, will win on mobile devices (and PCs too [Note from Mark: maybe I should remove that parenthesis or we’ll give Ballmer a heart attack]). Perhaps Microsoft should focus more on creating a great Web-centric platform for the future, and less on criticizing HP for leaving the past behind.

Mark Hurd
April, 2010

[UPDATED 2010/5/18: I hear echoes of “should I describe how Microsoft has been getting in the way of how our PCs talk to our printers and making a mess of desktop printing for the last 20 years or is this off-topic?” in the statements Mark Hurd made during his post-earnings analyst call today: “when you look across the HP ecosystem of interconnected devices, it is a large family of devices and we think of printers, you’ve now got a whole series of web connected printers and as they connect to the web, [they] need an OS.” Though I am really puzzled by the next line: “Hurd adds that HP prefers to own the OS to “control the customer experience” as it always has in printing.” HP doesn’t control the customer experience at all in printing, because of Windows. It’s only because we are so used to it that we don’t realize how awful the printing experience is, whether using a connected printer or over the network. Glad to see that they intend to apply the Palm acquisition to this problem too.]

[UPDATED 2010/5/26: According to some, this breakup letter from HP caused a breakup inside Microsoft: The reason Robbie Bach was fired]

1 Comment

Filed under Everything, HP, Microsoft, Mobile, Off-topic

The fallacy of privacy settings

Another round of “update your Facebook privacy settings right now” messages recently swept through Twitter and blogs, as also happened a few months ago when Facebook last modified some privacy settings to better accommodate their business goals. This is borderline silly. So, once and for all, here is the rule:

Don’t put anything on any social network that you don’t want to be made public.

Don’t count on your privacy settings on the site to keep your “private” data out of the public eye. Here are the many ways in which they can fail (using Facebook as a stand-in for all the other social networks, this is not specific to Facebook):

  • You make a mistake when configuring the privacy settings
  • Facebook changes the privacy mechanisms on you during one of their privacy policy updates
  • Facebook has a security flaw that bypasses access control
  • One of your friends who has access to your private data accidentally/stupidly/maliciously shares it more widely
  • A Facebook application to which you grant access betrays your trust by accessing the data and exposing it
  • A Facebook application gets hacked
  • A Facebook application retains your data in its cache
  • Your account (or one of your friends’ account) gets hacked
  • Anonymized data that Facebook shares with researchers gets correlated back to real users
  • Some legal action (not necessarily related to you personally) results in a large amount of Facebook data (including yours) seized and exported for legal review
  • Facebook loses some backup media
  • Facebook gets acquired (or it goes out of business and its assets are sold to the highest bidder)
  • Facebook (or whoever runs their hardware) disposes of hardware without properly wiping it
  • [Added 2012/3/8] Your employer or school demands that you hand over your account password (or “friend” a monitor)
  • Etc…

All in all, you should not think of these privacy settings as locks protecting your data. Think of them as simply a “do not disturb” sign (or a necktie…) hanging on the knob of an unlocked door. I am not advising against using privacy settings, just against counting on them to work reliably. If you’d rather your work colleagues don’t see your holiday pictures, then set your privacy settings so they can’t see them. But if it would really bother you if they saw them, then don’t post the pictures on Facebook at all. Think of it like keeping a photo in your wallet. You get to choose who you show it to, until the day you forget your wallet in the office bathroom, or at a party, and someone opens it to find the owner. You already know this instinctively, which is why you probably wouldn’t carry photos in your wallet that shouldn’t be shown publicly. It’s the same on Facebook.

This is what was so disturbing about the Buzz/GMail privacy fiasco. It took data (your list of GMail contacts) that was not created for the purpose of sharing it with anyone, and turned this into profile data in a social network. People who signed up for GMail didn’t sign up for a social network, they signed up for a Web-based email. What Google wants, on the other hand, is a large social network like Facebook, so it tried to make GMail into one by auto-following GMail contacts in your Buzz profile. It’s as if your insurance company suddenly decided it wanted to enter the social networking business and announced one day that you were now “friends” with all their customers who share the same medical condition. And will you please log in and update your privacy settings if you have a problem with that, you backward-looking, privacy-hugging, profit-dissipating idiot.

On the other hand, that’s one thing I like about Twitter. By and large (except for the few people who lock their accounts) almost all the information you put in Twitter is expected to be public. There is no misrepresentation, confusion or surprise. I don’t consider this lack of configurable privacy as a sign that Twitter doesn’t respect the privacy of its users. To the contrary, I almost see this as the most privacy-friendly approach: make it clear that everything is public. Because it is anyway.

One could almost make a counter-intuitive case that providing privacy settings is anti-privacy because it gives an unwarranted sense of security and nudges users towards providing more private data than they otherwise would. At least if the policy settings are not contractual (can you sue Facebook for changing its privacy terms on you?). At least it’s been working that way so far for Facebook, intentionally or not, as illustrated by all the articles that stress the importance of setting our privacy settings right (implicit message: it’s ok to put private information as long as you set your privacy settings).

Yes you should have clear privacy settings. But the place to store them is in your brain and the place to enforce them is by controlling what your fingers do before data gets on Facebook. Facebook and similar networks can only leak data that they possess. A lot of that data comes from you directly uploading it. And that’s the point where you have control. After this, you really don’t. Other data comes from tracking and analyzing your activities and connections, without explicit data upload from you. That’s a lot harder for you to control (you rarely even get asked for your privacy preferences on this data), but that’s out of scope for this blog entry.

Just like banks that are too big to fail are too big to exist, data that is too sensitive to leak from Facebook is too sensitive to be on Facebook.


Filed under Everything, Facebook, Google, Off-topic, Security, Social networks, Twitter

There should be a word for this (Blog/Twitter edition) part 2

Back in October (see “there should be a word for this” part 1) I listed a few concepts (related to twitter and/or blogging) for which new words were needed. Since it’s such a rich field, I barely scratched the surface. Here is the second installment.

#9 The temptation to repeat a brilliant tweet of yours that went unnoticed when you expected a RT storm in response (maybe it was a bad time of the day when everyone was offline? maybe it fell in a twitter mini-outage?)

#10 The new pair of eyes you get the second after you post a tweet.

#11 The act of sharing (e.g. via delicious…) or RTing a URL to an article you haven’t actually read (but you think it makes you look smart). For example, I’d love to give a test to everyone who RTed this entry.

#12 The shock of seeing a delivery error when DMing someone you were positive was following you (this is related to definition #1 from part 1, so Shlomo’s followimp could apply).

#13 The minimum number of people to follow on twitter, of blog feeds to subscribe to, and of Facebook friends to have such that you can cycle through all three continuously and never run out of new content. In the TV world, the equivalent would be the minimum number of cable channels needed to cycle through them and never feel like you've established that there is nothing worth watching.

#14 The awful feeling when the twitter/blog/facebook cycle from #13 breaks on a Friday night because others have a life.

#15 When a twitter conversation has reached a dead-end because of the short form. When the response you get makes you wonder what the other person understood from your last tweet. But forcing a clarification would take a half-dozen tweets at least and risk turning you into a twoll (another coinage for the twitter era, by Andi Mann).

#16 The compression rate of a sentence: how hard it is to further compress it (e.g. in order to squeeze in an RT comment), whether all the easy shortcuts have been taken already.

Please submit your candidate terms for these definitions.

[UPDATED 2010/8/12: there is now a part 3.]


Filed under Everything, Media, Off-topic, Social networks, Twitter

Yes you can read the OSGi specification

You know what I like the best about OSGi? That it doesn’t put the bar too high for architects. At first I was a bit intimidated by the size (338 pages for the “core specification”, 862 pages for the “service compendium”) and the fact that I had to look up “compendium”. But then they put me right at ease:

“Architects should focus on the introduction of each subject. This introduction contains a general overview of the subject, the requirements that influenced its design, and a short description of its operation as well as the entities that are used. The introductory sections require knowledge of Java concepts like classes and interfaces, but should not require coding experience.”

I am like so totally overqualified for my job. Hell, I even know what packages are.

(from the recently released OSGi version 4.2.)


Filed under Application Mgmt, Everything, Off-topic, OSGi

There should be a word for this (Blog/Twitter edition)

I enjoyed finishing reading The Atlantic with Barbara Wallraff’s “Word Fugitives” column every month. Until earlier this year, when it was replaced with Jeffrey Goldberg’s attempts at humor. For old times’ sake, I am borrowing the “Word Fugitive” format and applying it to the world of blogs and tweets. Here is a list of blog/twitter situations for which “there should be a word”.

#1 The ego-crushing realization, in the course of a face-to-face conversation covering topics you’ve written about, that the other person has not read your blog/tweets on this. Even though the first thing they told you when you met 10 minutes earlier is that they love your blog.

Candidate: followimp (from Shlomo).

#2 Conversely when someone brings up in the conversation something you wrote and had forgotten you did (maybe we need two words here, one if you are happy to be reminded of this and one if you’d rather not have been).

Candidates: twegreat and twegrets, respectively (from Shlomo).

#3 Seeing the corner of the blogo-twitto-sphere where you hang out light up in response to someone’s post even though you wrote up the same thing two years ago. Or at least you were trying to explain the same thing, but your brilliance went unnoticed.

Candidate: deja-lu.

#4 The frustrating (for system modelers at least) intermixing of data (your text) and metadata (e.g. the identification of the tweet you are responding to) in Twitter conversations.

Candidate: metamess.

#5 (This one comes from @Beaker) The art of carving up tweets from others to be able to retweet them in 140 characters.

Hoff has a suggestion: Twexter (Twitter + Dexter).

#6 The art of guessing early the Twitter #hashtag that will emerge as a winner for a given topic.

Candidate: foretweetude.

#7 The frustration of having too many blog drafts and no time to write them up.

Candidate: blocrastination. And Neil WD offered logjam in the comments.

#8 (added on 2009/10/22 after seeing this) The feeling of nakedness one has while his/her blog is offline.

Candidate: e-vanescence.

[UPDATED 2010/3/8: See part 2 for more.]

[UPDATED 2010/8/12: And part 3.]


Filed under Everything, Media, Off-topic, Social networks, Twitter

On Twitter

I created the @vambenepe Twitter account a while ago to reserve the username. Yesterday I posted three tweets, so I guess I am now “on Twitter”, in case anybody cares. We’ll see where this goes. @jamesurquhart gave me a kind (but intimidating) welcome and @Beaker hasn’t called me a “jackass” yet, so things are looking good. BTW, is it just me or has Cisco assembled a top-notch good cop / bad cop team? I hope I manage my blog-to-twitter expansion as well as they did.

The Cloud stuff is where the fun is, but if this Twitter thing is going to be of any use for real work I need to find who to follow in the IT management, application management and systems modeling areas. Any suggestion beyond @cote, @MouthOfOpenNMS, @dmcclure, @puppetmasterd and @theitskeptic (I feel like I am just Twitterifying my blogroll)?

And even then, finding people to follow seems to be the easy part. It took me about 20 minutes last night to realize that I am not going to read all the tweets (and I currently only follow 18 people). Worst case, I’ll just track the direct mentions of my handle and some occasional hashtags during interesting announcements. And scan the rest once a week. I assume that’s what the Twitter natives like @cote do as well (I seeded my list by picking names I recognized from his 1,130-long follow list). Advice?

The other issue is the 140 characters limit of course, but this should be easier to get used to. In the Apple/Palm tweet last night (about how this might show us what enforcement options standard bodies have) I wanted to invoke Stalin’s dismissive “The Pope! How many divisions has he got?” quote by replacing the pope with the USB Implementers Forum (USB-IF). But no room left unless I sacrificed the image of a working group chair breaking the knee cap of an offending implementer (which, as an ex-WG chair myself, I see some upside to).

Is it bad form to post multi-part tweets? How about, say, 50 parts? I need a protocol to guarantee delivery and order on top of the Twitter API. Maybe REST-* can help me… ;-)
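Joking aside, a poor man’s version of that “delivery and order” layer doesn’t need REST-*: just number the parts so readers can reassemble them. A minimal sketch in Python (the `split_into_tweets` helper is hypothetical, not part of any Twitter API):

```python
def split_into_tweets(text, limit=140):
    """Split text into ordered parts, each prefixed with "(i/n) " so the
    sequence can be reconstructed client-side. A sketch: assumes fewer
    than 100 parts and words shorter than the chunk size."""
    prefix_room = len("(99/99) ")  # reserve space for the worst-case prefix
    chunk = limit - prefix_room
    parts, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) > chunk and current:
            parts.append(current)  # current part is full, start a new one
            current = word
        else:
            current = candidate
    if current:
        parts.append(current)
    n = len(parts)
    return ["(%d/%d) %s" % (i + 1, n, p) for i, p in enumerate(parts)]
```

Of course, nothing here guarantees delivery; for that you’re back to asking your followers to ack each part.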

I also wanted to ping Andy Updegrove with the hope that he’d comment on the USB-IF letter (he has looked at the iPhone before, but not this specific issue) for an authoritative opinion. But he doesn’t seem to be on Twitter. The nerve!

And then there is the “follower” thing, which I guess I am now supposed to start obsessing about (folks, if I don’t have a hundred followers by week’s end the kitten gets it).

In the real world, there are a few people who return my emails and occasionally agree to have lunch with me, but that’s a far cry from calling them “followers”. Even my wife would spit her coffee if I referred to her as my “follower”. But on Twitter, I just posted three tweets yesterday and I already feel like a religious guru with my 24 “followers”.

Jokes aside (on the cult-leader overtones of the word “follower”), the fact that these people are identified is a nice improvement over blog subscribers (who, to me, are just an occasional number within the user-agent field in my Apache httpd logs), at least until they comment/email. Nice to “see” you.

One more step in the slippery slope towards total egomania. Blog > Twitter > Live webcam of the inside of my stomach.

Comments Off on On Twitter

Filed under Everything, Media, Off-topic, Social networks, Twitter

Whose ******* idea was this?

My last two entries have been uncharacteristically Microsoft-friendly, so it’s time to restore some balance. Coincidentally, I just noticed the latest “alertbox” entry by Jakob Nielsen, about putting an end to password masking (the ******* that appears when you type a password). I actually disagree with Nielsen on this. It’s not just about shoulder-surfing: who hasn’t had to enter a password while sharing their desktop via a projector or a WebEx-like conference service? Plus, I either know my password very well or I paste it directly from a password management tool; either way, the lack of visual feedback doesn’t bother me.

But, and this is where the Microsoft-bashing starts, there is one area where password-masking is inane: wifi keys. Unlike passwords, these are rarely things you picked yourself, so they are harder to type and often hexadecimal (the one I did choose, for my home network, I never have to type). And where do we type them? Either in a meeting room, where the key is written on the whiteboard, or in a dentist’s waiting room, where it is pinned on the wall. In almost all cases, everyone in the room has access to the key. And if it is not on a wall, then it is on a piece of paper that’s right next to my computer and easier to snoop from. Masking this field, as Windows XP does, is plain stupid.

But stupidity turns into depravity and sadism when they force you to type it twice. I understand the reason for entering passwords twice when you initially set them in the system (accidentally entering a different password than what you intended can be trouble). But not when you provide them as a user requesting access (accidentally entering the wrong password just means you have to try again). So why does Windows insist on this? In the best case (I enter the key correctly twice) I’ve done double the work for the same result. In the worst case (at least one is mistyped) I am no better off than if there were only one field, but I have done twice the work. And this worst case is twice as likely to happen, since I have twice the opportunity to foul up.

When confronted with this, I usually type the key in a regular text box (e.g. the search box in Firefox) and copy-paste from there to both fields in the Windows dialog box. But I shouldn’t have to.

While I am at it, do you also want to read what I think about the practice, initiated by MS Word as far as I can tell, to include formatting in copy/paste by default? And how deep you have to go in the “paste special” menu to get the obviously superior behavior (unformatted text)? Not really? Ok, I’ll save that for a future rant. Let’s just say that this idea must have come from a relative of the Windows wifi-key-screen moron. Just give me their names and I’ll be the arm of Darwinism.

[UPDATED 2009/6/26: Bruce Schneier agrees with Jakob Nielsen. So this is an issue at the confluence of security and usability on which both security guru Schneier and usability guru Nielsen are wrong. Gurus can’t always be right, but what’s the chance of them being wrong at the same time?]


Filed under Everything, Microsoft, Off-topic