Category Archives: OVM

Exalogic, EC2-on-OVM, Oracle Linux: The Oracle Open World early recap

Among all the announcements at Oracle Open World so far, here is a summary of those I was the most impatient to blog about.

Oracle Exalogic Elastic Cloud

This was the largest part of Larry’s keynote; he called it “one big honkin’ cloud”. An impressive piece of hardware (360 2.93GHz cores, 2.8TB of RAM, 960GB SSD, 40TB disk for one full rack) with excellent InfiniBand connectivity between the nodes. And you can extend the InfiniBand connectivity to other Exalogic and/or Exadata racks. The whole package is optimized for the Oracle Fusion Middleware stack (WebLogic, Coherence…) and managed by Oracle Enterprise Manager.

This is really just the start of a long lineage of optimized, pre-packaged, simplified (for application administrators and infrastructure administrators) application platforms. Management will play a central role and I am very excited about everything Enterprise Manager can and will bring to it.

If “Exalogic Elastic Cloud” is too taxing to say, you can shorten it to “Exalogic” or even just “EL”. Please, just don’t call it “E2C”. We don’t want to get into a trademark fight with our good friends at Amazon, especially since the next important announcement is…

Run certified Oracle software on OVM at Amazon

Oracle and Amazon have announced that AWS will offer virtual machines that run on top of OVM (Oracle’s hypervisor). Many Oracle products have been certified in this configuration; AMIs will soon be available. There is a joint support process in place between Amazon and Oracle. The virtual machines use hard partitioning and the licensing rules are the same as those that apply if you use OVM and hard partitioning in your own datacenter. You can transfer licenses between AWS and your data center.

One interesting aspect is that there is no extra fee on Amazon’s part for this. Which means that you can run an EC2 VM with Oracle Linux on OVM (an Oracle-tested combination) for the same price (without Oracle Linux support) as some other Linux distribution (also without support) on Amazon’s flavor of Xen. And install any software, including non-Oracle, on this VM. This is not the primary intent of this partnership, but I am curious to see if some people will take advantage of it.

Speaking of Oracle Linux, the next announcement is…

The Unbreakable Enterprise Kernel for Oracle Linux

In addition to the Red Hat-compatible kernel that Oracle has been providing for a while (and will keep supporting), Oracle will also offer its own Linux kernel. I am not enough of a Linux geek to get teary-eyed about the birth announcement of a new kernel, but here is why I think this is an important milestone. The stratification of the application runtime stack is largely a relic of the days when each layer saw enough innovation to justify picking the layers separately and combining them as you saw fit. Nowadays, the innovation is not in the hypervisor, in the OS or in the JVM as much as it is in how effectively they all combine. JRockit Virtual Edition is a clear indicator of things to come. Application runtimes will eventually be highly integrated and optimized. No more scheduler on top of a scheduler on top of a scheduler. If you squint, you’ll be able to recognize aspects of a hypervisor here, aspects of an OS there and aspects of a JVM somewhere else. But it will be mostly of interest to historians.

Oracle has by far the most expertise in JVMs and over the years has built a considerable amount of expertise in hypervisors. With the addition of Solaris and this new milestone in Linux access and expertise, what we are seeing is the emergence of a company for which there will be no technical barrier to innovation in making all these pieces work efficiently together. And, unlike many competitors who derive most of their revenues from parts of this infrastructure, no revenue-protection handcuffs hampering innovation either.

Fusion Apps

Larry also talked about Fusion Apps, but I believe he plans to spend more time on this during his Wednesday keynote, so I’ll leave this topic aside for now. Just remember that Enterprise Manager loves Fusion Apps.

And what about Enterprise Manager?

We don’t have many attention-grabbing Enterprise Manager product announcements at Oracle Open World 2010, because we had a big launch of Enterprise Manager 11g earlier this year, in which a lot of new features were released. Technically these are no longer Oracle Open World news, but many attendees have not seen them yet, so we are busy giving demos, hands-on labs and presentations. From an application and middleware perspective, we focus on end-to-end management (e.g. from user experience to BTM to SOA management to Java diagnostics to SQL) for faster resolution, application lifecycle integration (provisioning, configuration management, testing) for lower TCO, and unified coverage of all the key parts of the Oracle portfolio for productivity and reliability. We are also sharing some plans and our vision on topics such as application management, Cloud, support integration, etc. But in this post, I have chosen to focus only on new product announcements: things that were not publicly known 48 hours ago. I am also not covering JavaOne (see Alexis). There is just too much going on this week…

Just kidding, we like it this way. And so do the customers I’ve been talking to.

Comments Off on Exalogic, EC2-on-OVM, Oracle Linux: The Oracle Open World early recap

Filed under Amazon, Application Mgmt, Cloud Computing, Conference, Everything, Linux, Manageability, Middleware, Open source, Oracle, Oracle Open World, OVM, Tech, Trade show, Utility computing, Virtualization, Xen

Would you like some management with that appliance?

Andi Mann recently wrote an interesting post about virtual appliances. He uses the domain name pleasediscuss.com for his blog, so I figured I’d do just that. More specifically, I have three comments on his article.

Opaque or transparent appliance

Andi’s concerns about the security and management problems posed by virtual appliances are real, but he seems to assume that the content of the appliance is necessarily opaque to the customer and under the responsibility of the appliance provider. Why can’t a virtual appliance be transparent, in the sense that the customer is able to efficiently manage at least some aspects of the software installed on it? “You can’t put agents on most virtual appliances, they don’t come with WMI, and most have only a GUI for management”, says Andi. Why can’t an appliance come with an agent (especially in these days of consolidation, where many vendors provide many layers of the stack – hypervisor / OS / application container / application / management tools – including their agent)? Why can’t it implement a standard management API (most servers nowadays implement WBEM, WS-Management and/or IPMI pre-boot – on the motherboard – which is a lot more challenging to do than supporting a similar protocol in a virtual appliance)? Andi is really criticizing the current offerings more than the virtual appliance model per se, and on that I can join him.

Let me put it differently, since this is probably just a question of definition: what would Andi call a virtual appliance that does expose management APIs for its infrastructure (e.g. WS-Management for the OS, JMX for the Java stack) or that comes with an agent (HP, IBM, BMC, Oracle…) installed on it?
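To make this concrete, here is a minimal sketch (using the standard Java JMX remote API) of what that transparency could look like from the management side: a tool connecting to the JMX endpoint of a hypothetical appliance and reading the platform MXBeans that every modern JVM exposes. The host name and port below are made up for illustration; a real appliance would document (and secure) its own endpoint.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.OperatingSystemMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ApplianceHealthCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint exposed by the appliance's Java stack
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://appliance.example.com:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // Platform MXBeans are exposed by any JVM; no vendor-specific agent needed
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            OperatingSystemMXBean os = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.OPERATING_SYSTEM_MXBEAN_NAME,
                    OperatingSystemMXBean.class);

            System.out.println("Heap used (bytes): "
                    + memory.getHeapMemoryUsage().getUsed());
            System.out.println("System load average: " + os.getSystemLoadAverage());
        } finally {
            connector.close();
        }
    }
}
```

Nothing here is specific to appliances, which is precisely the point: if the appliance ships with the JMX port (or a WS-Management listener, or a vendor agent) already enabled and documented, it is manageable from day one.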

Such an appliance (let’s call it a “transparent virtual appliance” for now) doesn’t provide all the commonly claimed benefits of an appliance (zero config/admin) but, as Andi points out, these benefits come with major intrinsic drawbacks. A transparent virtual appliance still drastically simplifies installation (especially useful for test/dev/demo/POC). It doesn’t entirely free you of monitoring and configuration, but at least it provides you with a very consistent and controlled starting point, manageable from the start (no need to subsequently install an agent). In addition, it can be made “just enough” (just enough OS, just enough app server…) to require a lot less maintenance than an application stack that you assemble yourself out of generic parts. We’ll always have trade-offs between how optimized/customized it is and how uniform your overall environment can be, but I don’t see the use of an appliance as a delivery mechanism as necessarily cornering you into a completely opaque situation from a management perspective.

Those who attended Oracle Open World a few weeks ago were treated to an example of such an appliance, provided they sat in on any of the sessions that covered Oracle’s Appliance Builder (the main one was, I believe, Virtualizing Oracle Fusion Middleware in the Modern Data Center, in case you have access to the Open World On Demand replay and slides). I believe it’s probably the same content that @jayfry3 was shown when he tweeted “Oracle is demoing their private cloud self-service app”. These appliances are not at all opaque from a management perspective. On the contrary, they are highly manageable, coming with an Enterprise Manager agent installed that can manage everything in the appliance (and when that “everything” doesn’t include the OS, it’s because there isn’t one, thanks to JRockit Virtual Edition, making things slimmer, faster, safer and more manageable). And of course the OVM-based environment in which you deploy these appliances is also managed by Enterprise Manager. OK, my point here wasn’t to go into marketing mode, but this is cool stuff and an example of what virtual appliances should be. BTW, this was also demonstrated during Hasan Rizvi’s keynote at Open World, including the management of these systems through Enterprise Manager.

In the long run it’s irrelevant

As with all things computer-related, the issue is going to get blurrier and then irrelevant. The great thing about software is that there is no solid line. In this case, we will eventually get more customized appliances (via appliance builders or model-driven appliance generation), blurring the line between installed software and appliance-based software.

Waiting for PaaS

Towards the end of his post, Andi paints an optimistic vision of the future: “I also think that virtual appliances have a bright future – but in some ways I continue to see them as a beta version of what could (or should) come next.  By adding in capabilities for responsible and accountable management, they could form the basis of more fully-functional virtual service management containers. These in turn could form the basis of elastic, mobile, network-deployed, responsible cloud appliances that deliver complete end-to-end service management without regard to physical location or domain of control.”

I mostly agree with this vision, though when I describe it, it is in the guise of a PaaS platform, where your appliance (which today goes from the OS all the way to the app) has shrunk to an application template that you deploy in the PaaS environment (rather than in a hypervisor). If/when the underlying PaaS environment has reached the right level of management automation, you get all the benefits of an appliance while maintaining the consistency of your environment and its adherence to your management policies (because the environment is the PaaS platform and its management is driven from your policies).

[As is often the case, this started as a comment (on Andi’s blog) and quickly outgrew that environment, leading to this new post. Plus, Andi’s blog is brand new and seems to be well worth spreading the word about (Andi himself is under-marketing it).]

3 Comments

Filed under Application Mgmt, Automation, Desired State, Everything, IT Systems Mgmt, Manageability, Oracle, Oracle Open World, OVM, PaaS, Virtual appliance, WS-Management

Managing the stack from top to bottom, including virtualization

The press release announcing Oracle Enterprise Manager 10gR5 came out yesterday, but that’s not all: the Oracle VM Management Pack for Enterprise Manager was announced at the same time. What this illustrates is that, in addition to the commonly-cited “one neck to choke” benefit of getting the entire stack from one vendor (from the hypervisor to the application, including the OS, DB and MW), there is also the benefit of getting a unified management environment for the whole stack. Here is how my friend and Oracle colleague Adam Hawley (director of product management for Oracle VM and previously with Enterprise Manager) describes it in more detail:

So what’s so big about it and why does this give us a clear advantage over others?

  • No other company can offer management of the virtualization AND the workload that runs inside the virtualization at this depth and scale: not anyone. We now offer a single management product, Enterprise Manager Grid Control, that manages your entire data center from top to bottom: from the packaged application layer (Siebel, PeopleSoft, Beehive, etc.) through all the middleware and database layers to the OS and the virtualization itself. And we do that for both the physical and virtual worlds together, seamlessly.

    • Other virtualization vendors either ONLY do virtualization management or, to the extent they do anything else, it is typically one other category in the stack: virtualization plus the OS, or virtualization plus some very specific applications (but no OS), etc.
    • No one else can provide the entire picture the way we can with Oracle VM.
  • So what does that mean for users?
    • It means Oracle VM is virtualization with a difference:
      • It is virtualization that makes application workloads faster, easier, and less error-prone to deploy, with Oracle VM Templates as pre-built, pre-configured VMs containing complete product solutions, maintained in a central software library for easy re-use: download from Oracle, import the VMs, use the product. Simple.
      • It is virtualization that makes workloads easier to configure and manage: automate deployment of the VMs and installation of the management agent, and enable powerful, in-depth monitoring of guests and Oracle VM Servers, including configuration management:
        • Set up configuration policies to track how your VMs and servers are configured and to alert you if that configuration changes or “drifts” over time
        • What if you have one VM running perfectly and another, supposedly identical, one not doing as well? Run a configuration compare to check for differences, not only in packages or application versions in the VM but also down to OS parameter settings and other key items, so you can rapidly identify them and address them from the same console (a conceptual sketch of this kind of compare follows the list)
      • It is virtualization that makes workloads easier to troubleshoot and support:

        • Not only is Oracle VM support very affordable compared to anyone out there, management of Oracle VM servers in Enterprise Manager makes it so much easier to rapidly track down issues across the layers of your data center from one UI. With other vendors, to troubleshoot an issue with applications or the database, you have to trace it down through your environment, possibly to the virtual machine, but then how do you get all the info about the VM itself, like its parameters and which physical server it is hosted on? You have to jump to another tool entirely (whatever stand-alone tools you are using to manage the virtualization layer) to get the information and then go back and forth: tedious and time consuming. With Enterprise Manager, it is all there in one UI. Need to tweak the number of virtual CPUs based on your database performance analysis report indicating a CPU bottleneck? Navigate from the performance page for the database to the home page of that virtual machine and adjust the configuration in the same UI. Done. Well, OK, you may have to restart the application for the new vCPU setting to take effect, but you can still do that all within Enterprise Manager, saving time and minimizing risks.
        • This can dramatically reduce the time to troubleshoot, as well as reduce the chances of human error from navigating between multiple products with different structures and concepts, to help you maximize your uptime.
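To illustrate what such a configuration compare boils down to (a conceptual sketch only, not how Enterprise Manager implements it), here is a toy example that diffs two captured configuration snapshots, each modeled as a simple map of parameter names to values; the parameter names and values below are invented for illustration:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.TreeSet;

public class ConfigCompare {
    // Print parameters that differ (or exist on only one side) between two snapshots
    static void compare(Map<String, String> vmA, Map<String, String> vmB) {
        TreeSet<String> allKeys = new TreeSet<String>(vmA.keySet());
        allKeys.addAll(vmB.keySet());
        for (String key : allKeys) {
            String a = vmA.get(key);
            String b = vmB.get(key);
            if (a == null || !a.equals(b)) {
                System.out.println(key + ": VM-A=" + a + " | VM-B=" + b);
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> vmA = new TreeMap<String, String>();
        Map<String, String> vmB = new TreeMap<String, String>();
        vmA.put("kernel.shmmax", "4398046511104");
        vmB.put("kernel.shmmax", "536870912");     // drifted OS parameter
        vmA.put("oracle-db-version", "10.2.0.4");
        vmB.put("oracle-db-version", "10.2.0.3");  // different patch level
        vmA.put("vcpus", "2");
        vmB.put("vcpus", "2");                     // identical, so not reported
        compare(vmA, vmB);
    }
}
```

The real value, of course, is in collecting those snapshots automatically across packages, application versions and OS parameters, which is what the management pack does for you.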

So this is where it starts to get interesting. This is where the game starts to really be about not just the virtualization itself, but how it makes the rest of your overall data center better and more efficient. The Oracle VM Management Pack for Enterprise Manager Grid Control is a huge step forward for users.

[UPDATED 2009/3/21: An Oracle Virtualization blog has recently been created. So now you can hear directly from Adam and his colleagues.]

1 Comment

Filed under Application Mgmt, Everything, IT Systems Mgmt, Manageability, Mgmt integration, Oracle, OVM, Virtualization

Announcing Xen Transcendent Memory project

If you have more than one child, you’ve probably heard yourself say things like “if you are not using your train, you should let your brother play with it” more often than you’d like. The same happens in a datacenter (minus the screams and tears, at least usually). In that context, the rival siblings take the form of guest virtual machines and the toys in contention are the physical resources of the host system: CPU, I/O, memory. While virtualization platforms do a pretty good job at efficiently sharing the first two, the situation is not nearly as good for memory. It is often, as a result, the limiting factor for virtualization-driven consolidation. A new project aims to fix this.

The Oracle engineers working on the Xen-based Oracle Virtual Machine have just announced a new open source (GPL-licensed) project to improve the sharing of physical memory between guest virtual machines on the same physical system. It’s called Transcendent Memory, or tmem for short.

Much more information, including a comparison with VMWare’s memory balloon, is available from the project home page.

Another reason to come to the upcoming Xen Summit (February 24 and 25), hosted by Oracle here at headquarters.

Comments Off on Announcing Xen Transcendent Memory project

Filed under Everything, Linux, Open source, Oracle, OVM, Tech, Virtualization, Xen

Oracle VM template for Grid Control

Oracle recently released a set of VM templates (aka images) for OVM (Oracle Virtual Machine). In addition to being interesting news for OVM users, it’s also potentially useful for EM (Enterprise Manager) users: one of the images contains a full install of Enterprise Manager Grid Control. It is a pre-configured, patched Grid Control 10.2.0.4 installation with its associated DB 10.2.0.4 repository, running on Oracle Enterprise Linux. It also includes a local Oracle Enterprise Linux 4 and 5 Yum repository for Grid Control usage.

You can get the files through the Linux side of edelivery.oracle.com (select “Oracle VM templates” as the “product pack”).

More templates are available here. You can now impress your friends and family with a full Oracle demo/development environment and they won’t need to know that you didn’t have to install or configure any application.

Comments Off on Oracle VM template for Grid Control

Filed under Everything, IT Systems Mgmt, Linux, Oracle, OVM

Google App Engine: less is more

“If you have a stove, a saucepan and a bottle of cold water, how can you make boiling water?”

If you ask this question to a mathematician, they’ll think about it a while, and finally tell you to pour the water into the saucepan, light up the stove and put the saucepan on it until the water boils. Makes sense. Then ask them a slightly different question: “if you have a stove and a saucepan filled with cold water, how can you make boiling water?”. They’ll look at you and ask: “can I also have a bottle?” If you agree to that request, they’ll triumphantly announce: “pour the water from the saucepan into the bottle and we are back to the previous problem, which is already solved.”

In addition to making fun of mathematicians, this is a good illustration of the “fake machine” approach to utility computing embodied by Amazon’s EC2. There is plenty of practical value in emulating physical machines (either in your data center, using VMWare/Xen/OVM, or at a utility provider’s site, e.g. EC2). These approaches are all rooted in the fact that there is a huge amount of code written with the assumption that it is running on an identified physical machine (or set of machines), and you want to keep using that code. This will remain true for many, many years to come, but is it the future of utility computing?

Google’s App Engine is a clear break from this set of assumptions. From this perspective, the App Engine is more interesting for what it doesn’t provide than for what it provides. As the description of the Sandbox explains:

“An App Engine application runs on many web servers simultaneously. Any web request can go to any web server, and multiple requests from the same user may be handled by different web servers. Distribution across multiple web servers is how App Engine ensures your application stays available while serving many simultaneous users [not to mention that this is also how they keep their costs low — William]. To allow App Engine to distribute your application in this way, the application runs in a restricted ‘sandbox’ environment.”

The page then goes on to succinctly list the limitations of the sandbox (no filesystem, limited networking, no threads, no long-lived requests, no low-level OS functions). The limitations are better described and commented upon here, but even that article misses one major limitation, mentioned here: the lack of a scheduler/cron.
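To make the sandbox constraints concrete, here is a hedged sketch of the kind of adjustment they force on application code. App Engine launched with a Python-only SDK (a Java runtime came later), but the principle is language-independent: with no writable filesystem and no local state that survives a request, even something as mundane as a hit counter has to go through the datastore API rather than a local file. The class below uses the App Engine Java datastore API purely as an illustration; the kind name and property are made up.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;

// A request handler that keeps a hit counter. With no writable filesystem in the
// sandbox, state goes to the replicated datastore instead of a local file.
public class CounterServlet extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
        Key key = KeyFactory.createKey("Counter", "hits");

        Entity counter;
        long count;
        try {
            counter = datastore.get(key);
            count = (Long) counter.getProperty("count") + 1;
        } catch (EntityNotFoundException e) {
            counter = new Entity(key);  // first request ever: create the counter
            count = 1;
        }
        counter.setProperty("count", count);
        datastore.put(counter);  // any of the many web servers can serve the next hit

        resp.setContentType("text/plain");
        resp.getWriter().println("Hits so far: " + count);
    }
}
```

(A real version would wrap the read-modify-write in a datastore transaction to avoid lost updates when, as the quote above points out, requests land on many web servers at once.)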

Rather than a feature-by-feature comparison between the App Engine and EC2 (which Amazon would win handily at this point), what is interesting is to compare the underlying philosophies. Even with Amazon EC2, you don’t get every single feature your local hardware can deliver. For example, in its initial release EC2 didn’t offer a filesystem, only a storage-as-a-service interface (S3 and then SimpleDB). But Amazon worked hard to fix this as quickly as possible, in order to appear as similar to a physical infrastructure as possible. In this entry, announcing persistent storage for EC2, Amazon’s CTO takes pains to highlight this achievement:

“Persistent storage for Amazon EC2 will be offered in the form of storage volumes which you can mount into your EC2 instance as a raw block storage device. It basically looks like an unformatted hard disk. Once you have the volume mounted for the first time you can format it with any file system you want or if you have advanced applications such as high-end database engines, you could use it directly.”

and

“And the great thing is that it is all done using standard technologies such that you can use this with any kind of application, middleware or any infrastructure software, whether it is legacy or brand new.”

Amazon works hard to hide (from the application code) the fact that the infrastructure is a huge, shared, distributed system. The beauty (and business value) of their offering is that while the legacy code thinks it is running in a good old data center, the paying customer derives benefits from the fact that this is not the case (e.g. fast/easy/cheap provisioning and reduced management responsibilities).

Google, on the other hand, embraces the change in underlying infrastructure and requires your code to use new abstractions that are optimized for that infrastructure.

To use an automotive analogy, Amazon is offering car drivers the chance to switch to a gas/electric hybrid that refuels at today’s gas stations, while Google is pushing for a direct jump to hydrogen fuel cells.

History is rarely kind to promoters of radical departures. The software industry is especially fond of layering the new on top of the old (a practice that has been enabled by the constant increase in underlying computing capacity). If you are wondering why your command prompt, shell terminal or text editor opens with a default width of 80 characters, take a trip back to 1928, when IBM defined its 80-column punch card format. Will Google beat the odds or be forced to be more accommodating of existing code?

It’s not the idea of moving to a more abstracted development framework that worries me about Google’s offering (JEE, Spring and Ruby on Rails show that developers want this move anyway, for productivity reasons, even if there is no change in the underlying infrastructure to further motivate it). It’s the fact that by defining their offering at the level of this framework (as opposed to one level below, like Amazon), Google puts itself in the position of having to select the right framework. Sure, they can support more than one. But the speed of evolution in that area of the software industry shows that it’s not mature enough (yet?) for any party to guess where application frameworks are going. Community experimentation has been driving application frameworks forward, and Google App Engine can’t support this. It can only select and freeze a few frameworks.

Time will tell which approach works best, whether they will exist side by side or slowly merge into a “best of both worlds” offering (Amazon already offers many features, like snapshots, that aim for this “best of both worlds”). Unmanaged code (e.g. C/C++ compiled programs) and managed code (JVM or CLR) have been coexisting for a while now. Traditional applications and utility-enabled applications may do so in the future. For all I know, Google may decide that it makes business sense for them too to offer a Xen-based solution like EC2, and Amazon may decide to offer a more abstracted utility computing environment along the lines of the App Engine. But at this point, I am glad that the leaders in utility computing have taken different paths, as this will allow the whole industry to experiment and progress more quickly.

The comparison is somewhat blurred by the fact that the Google offering has not reached the same maturity level as Amazon’s. It has restrictions that are not directly related to the requirements of the underlying infrastructure. For example, I don’t see how the distributed infrastructure prevents the existence of a scheduling service for background jobs. I expect this to be fixed soon. Also, Amazon has a full commercial offering, with a price list and an ecosystem of tools, while Google only offers a very limited beta environment for which you can’t buy extra capacity (but this too is changing).

2 Comments

Filed under Amazon, Everything, Google, Google App Engine, OVM, Portability, Tech, Utility computing, Virtualization, VMware

Top 10 lists and virtualization management

Over the last few months, I have seen two “top 10” lists with almost the same title and nearly zero overlap in content. One is Network World’s “10 virtualization companies to watch” published in August 2007. The other is CIO’s “10 Virtualization Vendors to Watch in 2008” published three months later. To be precise, there is only one company present in both lists, Marathon Technologies. Congratulations to them (note to self: hire their PR firm when I start my own company). Things are happening quickly in that field, but I doubt the landscape changed drastically in these three months (even though the announcement of Oracle’s Virtual Machine product came during that period). So what is this discrepancy telling us?

If anything, this is a sign of the immaturity of the emerging ecosystem around virtualization technologies. That being said, it could well be that all this really reflects the superficiality of these “top 10” lists and the fact that they measure PR efforts more than any market/technology reality (note to self: try to become less cynical in 2008) (note to self: actually, don’t).

So let’s not read too much into the discrepancy. Less striking but more interesting is the fact that these lists are focused on management tools rather than hypervisors. It is as if the competitive landscape for hypervisors were already defined. And, as shouldn’t be a surprise, it is defined in a way that closely mirrors the operating system landscape, with Xen as Linux (the various Xen-based offerings correspond to the Linux distributions), VMWare as Solaris (good luck) and Microsoft as, well, Microsoft.

In the case of Windows and Hyper-V, the OS and the hypervisor are actually bundled as one product. We’ll see this happen more and more on the Linux/Xen side as well, as illustrated by Oracle’s offering. I wouldn’t be surprised to see this bundling become so common that people start to refer to it as “LinuX” with a capital X.

Side note: I tried to see if the word “LinuX” is already being used but neither Google nor Yahoo nor MSN seems to support case-sensitive searching. From the pre-Google days I remember that Altavista supported it (a lower-case search term meant “any capitalization”, any upper-case letter in the search term meant “this exact capitalization”) but they seem to have dropped it too. Is this too computationally demanding at this scale? Is there no way to do a case-sensitive search on the Web?

With regard to management tools for virtualized environments, I feel pretty safe in predicting that the focus will move from niche products (like those on these lists) that deal specifically with managing virtualization technology to the effort of managing virtual entities in the context of the overall IT management effort, just like what happened with security management and SOA management. And of course that will involve the acquisition of some of the niche players, for which they are already positioning themselves. The only way I could be proven wrong on such a prediction is by forecasting a date, so I’ll leave it safely open-ended…

As another side note, since I mention Network World, maybe I should disclose that I wrote a couple of articles for them (on topics like model-based management) in the past. But when filtering for bias on this blog, it’s probably a lot more relevant to keep in mind that I am currently employed by Oracle than to know what journal/magazine I’ve been published in.

Comments Off on Top 10 lists and virtualization management

Filed under Everything, IT Systems Mgmt, Linux, Microsoft, Oracle, OVM, Tech, Virtualization, VMware, XenSource

Virtual machine or fake machine?

In yesterday’s post I wrote a bit about the recently-announced Oracle Virtual Machine. But in the larger scheme, I have always been uncomfortable with the focus on VMWare-style virtual machines as the embodiment of “virtualization”. If a VMWare VM is a virtual machine, does that mean a Java Virtual Machine (JVM) is not a virtual machine? They are pretty different. When you get a fresh JVM, the first thing you do is not to install an OS on it. To help distinguish them, I think of the VMWare style as a “fake machine” and the JVM style as an “abstract machine”. A “fake machine” behaves as similarly as possible to a physical machine, and that is a critical part of its value proposition: you can run all the applications that were developed for physical machines and they shouldn’t behave any differently, while at the same time you get some added benefits in terms of saving images, moving images around, more efficiently using your hardware, etc. An “abstract machine”, on the other hand, provides value by defining and implementing a level of abstraction different from that of a physical machine: developing to this level provides you with increased productivity, portability, runtime management capabilities, etc. And then, in addition to these “fake machines” and “abstract machines”, there is the virtualization approach that makes many machines appear as one, often referred to as grid computing. That’s three candidates already for carrying the “virtualization” torch. You can also add Amazon-style storage/computing services (e.g. S3 and EC2) as an even more drastic level of virtualization.

The goal here is not to collect as many buzzwords as possible within one post, but to show how all these efforts represent different ways to attack similar issues of flexibility and scalability for IT. There is plenty of overlap as well. JSRs 121 and 284, for example, can be seen as paving the way for more easily moving JVMs around, VMWare-style. Something like Oracle Coherence lives at the junction of JVM-style “abstract machines” and grid computing to deliver data services. And as always, these technologies are backed by a management infrastructure that makes them usable in the way that best serves the applications running on top of the “virtualized” (by one of the definitions above) runtime infrastructure. There is a lot more to virtualization than VMWare or OVM.

[UPDATED 2007/03/17: Toutvirtual has a nice explanation of the preponderance of “hypervisor based platforms” (what I call “fake machines” above) due to, among other things, failures of operating systems (especially Windows).]

[UPDATED 2009/5/1: For some reason this entry is attracting a lot of comment spam, so I am disabling comments. Contact me if you’d like to comment.]

1 Comment

Filed under Everything, IT Systems Mgmt, OVM, Virtualization, VMware

Oracle has joined the VM party

On the occasion of the introduction of the Oracle Virtual Machine (OVM) at Oracle World a couple of weeks ago, here are a few thoughts about virtual machines in general. As usual when talking about virtualization (see the OVF review), I come to this mainly from a systems management perspective.

Many of the commonly listed benefits of VMWare-style (I guess I can also now say OVM-style) virtualization make perfect sense. It obviously makes it easier to test on different platforms/configurations and it is a convenient (modulo disk space availability) way to distribute ready-to-use prototypes and demos. And those were, not surprisingly, the places where the technology was first used when it appeared on x86 platforms many years ago (I’ll note that the Oracle VM won’t be very useful for the second application because it only runs on bare metal, while in the demo scenario you usually want to be able to run it on the host OS that normally runs your laptop). And then there is the server consolidation argument (and associated hardware/power/cooling/space savings), which is where virtualization enters the data center, where it becomes relevant to Oracle, and where its relationship with IT management becomes clear. But the value goes beyond the direct benefits of server consolidation. It also lies in the additional flexibility in the management of the infrastructure and the potential for increased automation of management tasks.

Sentences that contain both the words “challenge” and “opportunity” are usually so corny they make me cringe, but I’ll have to give in this one time: virtualization is both a challenge and an opportunity for IT management. Most of today’s users of virtualization in data centers probably feel that the technology has made IT management harder for them. It introduces many new considerations, at the same time technical (e.g. the performance of virtual machines on the same host is not independent), compliance-related (e.g. virtualization can create de facto super-users) and financial (e.g. application licensing). And many management tools have not yet incorporated these new requirements, or at least not in a way that is fully integrated with the rest of the management infrastructure. But in the longer run, the increased uniformity and flexibility provided by a virtualized infrastructure improve the ability to automate and optimize management tasks. We will go from a situation where virtualization is justified by statements such as “the savings from consolidation justify the increased management complexity” to a situation where the justification is “we’re doing this for the increased flexibility (through the more automated management that virtualization enables), and server consolidation is icing on the cake”.

As a side note, having so many pieces of the stack (one more now with OVM) at Oracle is very interesting from a technical/architectural point of view. Not that Oracle would want to restrict itself to managing scenarios that utilize its VM, its OS, its App Server, its DB, etc. But having the whole stack in-house provides plenty of opportunity for integration and innovation in the management space. These capabilities also need to be delivered in heterogeneous environments but are a lot easier to develop and mature when you can openly collaborate with engineers in all these domains. Having done this through standards and partnerships in the past, I am pleased to be in a position to have these discussions inside the same company for a change.

1 Comment

Filed under Everything, IT Systems Mgmt, Oracle, Oracle Open World, OVM, Tech, Virtualization, VMware