It's my long-held belief that operating systems as we now know them are fundamentally insecure. They all rely on trusting every piece of software you run to be free from flaws.
There are alternative security models that greatly reduce the amount of code that must be trusted, down to a single module in the kernel of the OS. NOTHING else in the OS needs to be trusted.
This model of security is called "capability-based". When a program runs, it's given only the minimum access required to do its job, and nothing more.
For example, if you fire up a word processor on Windows, Mac, Linux, DOS, etc., it can open ANY file you have access to, and do anything to it. You have to trust that it does only what you want. The problem is that you can't trust it. 99.9999999% of the time it works the way you expect... but it's that one-in-a-billion flaw that the virus/worm/spammer/enemy can use to subvert the whole system.
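To make the contrast concrete, here's a minimal sketch in Python (all function names are hypothetical, invented for illustration). In the ambient-authority style, a component names the file itself and so wields all of the user's power; in the capability style, the caller hands it a single open handle, which is all the power it has.

```python
import io

def ambient_spell_check(path):
    # Ambient authority: the function opens the file by name itself,
    # so a flaw anywhere in here can read or write ANY file the user can.
    with open(path) as f:
        return f.read().count("teh")

def capability_spell_check(doc):
    # Capability style: the caller passes in a readable object (the
    # capability). The function holds no authority beyond this one handle,
    # so a flaw here can damage at most this one document.
    return doc.read().count("teh")

doc = io.StringIO("teh quick brown fox sees teh lazy dog")
print(capability_spell_check(doc))  # -> 2
```

The design point is that the dangerous power (opening arbitrary files) stays with the caller, who grants exactly one handle per task, rather than being ambiently available to every line of code in the program.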
It's going to take a long time to overcome the inertia of all the installed systems, and of the programmers who write them. Perhaps 20 years from now we'll finally be able to start shutting down the virus scanners and firewalls.
Until then, all of our computers will be open to any party with the resources to find and exploit a flaw in the code we all run.
It's a matter of national security to fix this, but people are wrongly convinced that our virus scanners, spam filters, and firewalls have solved the problem.
I really enjoy your blog, and value the insight you share. It's good to know you're on our side.
Sunday, June 24, 2007
Comment over at John Robb's Weblog
Here's what I posted over at John Robb's very insightful blog.