With the ongoing virtualization of the computing infrastructure and the proliferation of multi-core processors, revising software pricing strategies (often based on the number of processors) is a hot topic. The usual spin is: we can’t keep using the current model (since “number of processors” doesn’t mean much anymore), so we have to think of a new one. But there is another way to look at it: revising the pricing strategy not because we have to, but because we can.
Pricing software based on the number of processors only makes sense because we are used to it. We are used to it because it is prevalent. It is prevalent because it is easy to measure and apply (or was until recently). It’s hard to measure the value a piece of software delivers to the business, but it is easy to measure how many processors run it, so we use the number of processors as an approximation of the value.

This approach to pricing is very similar to the approach of policy-driven IT management that creates SLAs at different levels of the architecture. The IT administrator is told to make sure that a certain server stays up 99.9% of the time. Does the business really care that the server is up? No, what it cares about is that the business processes can progress, and those processes happen to use applications running on the server. But if we told the IT admin “make sure the business processes can progress”, he wouldn’t know what to do in practical terms. He wouldn’t know whether the downtime to patch the server is worth it or not. By giving him a more measurable metric (uptime), we enable the IT admin to make the decisions needed to meet the specific uptime SLA. Just as the number of processors is used as a convenient approximation of the business value of the software, the uptime SLA is used as a convenient approximation of the business need. Like all approximations, they are not perfect, and making decisions based on them rarely leads to optimal decisions. But when that’s all you can do, you call it good enough and you go with it.
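What makes uptime such a workable proxy is precisely that it turns into concrete numbers the admin can budget against. As a quick sketch (the 99.9% figure and the 30-day window are illustrative assumptions, not anything prescribed here):

```python
def downtime_budget_minutes(uptime_sla: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime for a given uptime SLA over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - uptime_sla)

# A 99.9% SLA over 30 days leaves roughly 43 minutes of downtime to
# "spend" on patching, reboots, and failures.
print(round(downtime_budget_minutes(0.999), 1))  # → 43.2
```

That 43-minute budget is exactly the kind of actionable number the vague instruction “make sure the business processes can progress” does not give the admin.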
One of the key promises of the effort to “bridge the gap between business and IT” is to better align infrastructure-level decisions with their real business impact. Products like OpenView’s Business Process Insight let you map business processes to the IT infrastructure that powers the steps of the process, so that you can manage the IT elements based on their real impact on the business rather than on fixed SLAs. We are seeing a huge amount of interest in this, and there is a lot of room for optimization once this correlation is established. At this point, the focus is on using it to automate and optimize IT management. But the parallel with software pricing is so close that one has to wonder whether these technologies won’t eventually allow us to price software in a way that is better aligned with the real business value it provides. And who knows, maybe one day management software will be used to tie salaries to business value rather than to approximations such as “number of hours worked”, “number of bugs fixed”, “uptime of the server”, or “number of specs produced”.
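The correlation idea itself is simple to sketch: record which IT elements each business process depends on, then invert the mapping when something goes down. The names and data structure below are purely illustrative, not the actual Business Process Insight model:

```python
# Hypothetical map from business processes to the IT elements that
# power their steps (all names are made up for illustration).
process_steps = {
    "order-to-cash": ["web-frontend", "order-db", "payment-gateway"],
    "ship-goods":    ["order-db", "warehouse-app"],
    "run-payroll":   ["hr-app", "payroll-db"],
}

def impacted_processes(failed_element: str) -> list[str]:
    """Business processes that can no longer progress if this element is down."""
    return [p for p, elems in process_steps.items() if failed_element in elems]

# Taking "order-db" down for patching stalls two revenue-bearing
# processes; "hr-app" downtime stalls only payroll.
print(impacted_processes("order-db"))  # → ['order-to-cash', 'ship-goods']
print(impacted_processes("hr-app"))    # → ['run-payroll']
```

Once this inverted view exists, the patch-or-not decision from the previous section can be made against business impact rather than against a blanket uptime number.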