Wednesday, February 22, 2006

Virtual Machines, the future of security

The purpose of an operating system is to manage the resources of a computer. Over time, market competition has led to the inclusion of a number of new components in standard distributions of operating systems. It is now common for such a system to include a network stack, at least one scripting language, a web browser, and an email client. A server would also include services such as file sharing, authentication, web and FTP services, and more.

Windows, Mac OS, and Linux have similar security models. It's all about the user. Once the user signs in, it is assumed they should be allowed to do anything their permissions allow. The user is expected to be judicious in their choice of actions.

Unless special precautions are taken, programs run with the same permissions as the user on whose behalf they are working. This means the user is forced to trust a program in order to use it; there is simply no other option for the average user. This need to trust code is implicit in the security model. It's so deeply embedded in our thinking that it takes people by surprise when you point it out. This need to absolutely trust code is a very corrosive force, and it is responsible for most of the security problems that plague PCs today.
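
To make that concrete, here's a tiny Python sketch (the language and the idea of walking the home directory are mine, purely for illustration) of what "running with the user's permissions" means in practice: any program you launch can read everything you can.

    # Illustration only: a program launched by the user inherits the user's
    # access rights, so nothing in the classic model stops it from doing this.
    from pathlib import Path

    home = Path.home()

    for path in home.rglob("*"):
        try:
            if path.is_file():
                data = path.read_bytes()  # readable with the user's own rights
                # a hostile program could just as easily copy, alter,
                # or delete this data, or quietly mail it somewhere
        except OSError:
            continue  # skip anything the user genuinely cannot read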

This is how we get the viruses and other problems that keep cropping up in spite of virus scanners and the like. Once launched, a virus has essentially full run of the PC, which it can use to spread itself or to deliver its payload.

Most effort these days is being directed toward what I consider to be a stopgap measure: limiting the user to running only trustworthy code. Certain programmers are deemed trustworthy and given cryptographic keys with which to "sign" their code.

The signature on a piece of code can then be checked for validity before it is executed. This helps prevent tampering with the code, so it does improve security a bit.
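
As a rough sketch of what that check looks like (using Python and the third-party "cryptography" package purely as an example; no particular vendor's signing scheme is implied), the loader verifies the publisher's signature over the code before running it:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def signature_is_valid(code_bytes, signature, publisher_key_pem):
        """True only if the publisher's signature over the code checks out."""
        public_key = serialization.load_pem_public_key(publisher_key_pem)
        try:
            # Assumes an RSA key with PKCS#1 v1.5 padding, a common choice.
            public_key.verify(signature, code_bytes,
                              padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    # The loader refuses to run anything whose signature doesn't verify:
    # if signature_is_valid(code, sig, trusted_publisher_key):
    #     run(code)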

However, this is not a long-term solution, though many believe otherwise. The problem is that any bug, anywhere in the program, can be used to attack the system. Consider that a workstation with the standard load might have a million lines of code, with perhaps 20 different services running at any given time. If any of these services has a bug, it may be possible to exploit it and take over the system.
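
To see why a single bug is enough, here's a toy example (entirely invented, in Python) of the kind of flaw that turns one careless line in one service into control of the machine, because the service itself runs as trusted code:

    # A deliberately buggy toy "service"; the name and behavior are made up.
    import subprocess

    def fetch_user_report(username):
        # BUG: the username is pasted straight into a shell command.
        # A request for username = "alice; rm -rf ~" runs the extra command
        # with whatever permissions this trusted service holds.
        return subprocess.run("cat /var/reports/" + username,
                              shell=True, capture_output=True)

Signing the service doesn't help here: the code is exactly what the trusted programmer shipped, bug and all.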

--

I've been digging into the question of PC security ever since my Windows Apocalypse. I'm now convinced the only way out is to use a classic solution, and to head in the opposite direction.

Instead of accepting more and more code as "trusted", we need to adopt the philosophy that NOTHING should be trusted, unless absolutely necessary. Even device drivers shouldn't be trusted.

IBM engineers in the 1960s used the virtual machine concept to make scarce hardware available to operating system developers. They did this by presenting each program with what appeared to be a complete machine of its own. This allowed the development of new operating systems on hardware that didn't exist yet, and eliminated the need to give each and every programmer a mainframe of their own.

The net result is that each virtual machine is insulated from the hardware and from every other virtual machine. It's practically impossible for a process inside a virtual machine to do any harm to the outside environment. Because of this, you don't have to trust the code you run; you can just contain it in a virtual machine.

The adoption of Java and, more recently, .NET are both steps in the right direction. They place code in a "sandbox" and limit its access. However, the permissions are coarse, and the idea of trusted code creeps back in, which limits the effectiveness of the sandbox.

Bochs, Plex86, VMware, and "Virtual Server" are all adaptations of this strategy. They emulate a complete PC (or a substantial portion of one) so that an entire operating system can run inside a virtual machine. This makes it possible to test new OSs, do development, and so on, much as IBM's programmers did back in the 1960s, even though today's programmers can probably afford multiple computers. It's much easier to debug from an environment that doesn't have to be rebooted all the time. ;-)

The recent arrival of the "browser appliance" as a safer way to surf the web is an example of running an application inside a sandbox, albeit a very elaborate one. The speed of virtualization on current stock hardware will be less than ideal, due to architectural limitations inherited from the 386.

Recent announcements by both Intel and AMD of hardware support for virtualization are very encouraging. It was this type of hardware support that made IBM's VM/370 and its predecessors feasible. This will allow virtualization with almost no performance penalty on future machines.

The next step will be lightweight operating systems optimized to run inside virtual machines. Because the virtual machine environment they are given is completely uniform, the number of drivers required to interact with it can be reduced almost to zero. As time goes on, even this overhead can be reduced by the use of run-time systems tailored for specific uses.

We may start to see applications distributed inside their own operating systems, as a convenient way of making sure that everything the application needs gets included.

All of this points to a future which might make a mainframe programmer smile with the recognition that it's all happening again, but slightly differently... and they'd be right. The concept of trusted code will die, at least until the sequel. ;-)

Eventually, we'll get to the point where no application needs to be trusted. Most of the operating system itself doesn't need to be trusted, so it won't be. The long-term result is to have one, and only one, piece of trusted code in the system... a microkernel virtual machine manager. It will be the only piece of code running on the bare metal. Device drivers, security algorithms, and everything else will each run in their own virtual environment, in complete isolation.

Only then will security be sufficiently fixed that I can relax, or so I hope.

That's my prediction, I welcome all criticism and discussion.

--Mike--
