Saturday, February 19, 2011

Instead of cyberwar, and all that mess, let's just FIX things

I strongly believe that it's possible to reduce the threat of "cyber war" by actually fixing the security problem at its source: our computers and servers. Imagine if it were possible to greatly reduce the number of security holes on the average PC or server. If that were the case, we wouldn't need politically motivated filtering and other forms of control to "save us" from our own systems.

The internet is just a big network, and while BGP has its issues, they can be solved with some work. The network itself is just a "series of tubes", as it has been described in the past, and you don't have to guard the tubes if the ends are secured.

There is a deep design flaw in the operating systems and applications we use every day. Historically it was possible to tightly control the code we ran, so it was reasonable to trust that code to do its job. That assumption is no longer valid.

  • We can no longer afford the luxury of trusting our applications.
  • We can't even afford to trust our drivers with kernel mode.
  • We can't afford to trust the system processes to stick to their designated roles.


At a practical level we have to trust some code, so why not trust as little of it as possible? A micro-kernel is the smallest amount of code required to manage the operating system. There has been much research in this area, and recently micro-kernels have been formally verified, with machine-checked proofs that the implementation matches its specification (seL4, for example).
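
To make the split concrete, here is a minimal C sketch of the micro-kernel idea: the kernel's only job is to pass messages, and the "driver" is an ordinary process answering requests over a channel. A pair of POSIX pipes stands in for the kernel's IPC primitive, and the message format is invented for the illustration; a real micro-kernel provides a far richer (and faster) IPC mechanism.

    /* Sketch of the micro-kernel split: the kernel only passes messages,
     * and the "clock driver" is an ordinary process serving requests.
     * POSIX pipes stand in for kernel IPC; the message format is made up. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>

    struct msg { char op[8]; long result; };

    /* User-space "clock driver": no special privileges, it just answers
     * whatever requests arrive on its channel. */
    static void clock_driver(int rx, int tx) {
        struct msg m;
        while (read(rx, &m, sizeof m) == (ssize_t)sizeof m) {
            if (strcmp(m.op, "gettime") == 0)
                m.result = (long)time(NULL);  /* stand-in for a hardware read */
            else
                m.result = -1;                /* unknown request */
            write(tx, &m, sizeof m);
            if (strcmp(m.op, "quit") == 0)
                return;
        }
    }

    int main(void) {
        int to_drv[2], from_drv[2];
        pipe(to_drv);
        pipe(from_drv);

        if (fork() == 0) {                    /* driver runs as its own process */
            clock_driver(to_drv[0], from_drv[1]);
            _exit(0);
        }

        struct msg m = { "gettime", 0 };
        write(to_drv[1], &m, sizeof m);       /* a "system call" becomes a message */
        read(from_drv[0], &m, sizeof m);
        printf("driver says the time is %ld\n", m.result);

        strcpy(m.op, "quit");
        write(to_drv[1], &m, sizeof m);
        read(from_drv[0], &m, sizeof m);
        wait(NULL);
        return 0;
    }

The point of the sketch is that the trusted core only moves messages around; the driver can crash or misbehave without taking the rest of the system with it.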

Now, the kernel needs device drivers and other system processes to make a usable operating environment for users and programs. A kernel which doesn't trust its drivers must use a new strategy. One way forward is the concept of capabilities. A "capability" is a token or key (really, just a big, unguessable number) which grants access to a resource. Each device driver, system process, and so on is given exactly the set of keys it needs to do its job. If the key isn't present, the access is not allowed.
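
Here is a toy C sketch of that idea: a capability is an unforgeable token, and the only way to reach a resource is to present a token you were explicitly granted. The names cap_grant and cap_access are invented for this illustration and don't correspond to any real kernel's API; a real system would also use proper randomness and keep the table out of the caller's reach.

    /* Toy illustration of capabilities as unforgeable keys: a resource can
     * only be reached through a token the holder was explicitly granted. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_CAPS 16

    struct cap_entry { uint64_t token; const char *resource; };
    static struct cap_entry table[MAX_CAPS];
    static int ncaps;

    /* Grant access to a resource: mint a fresh token for it.
     * (Toy randomness -- a real kernel would do much better.) */
    static uint64_t cap_grant(const char *resource) {
        uint64_t t = ((uint64_t)rand() << 32) ^ (uint64_t)rand();
        table[ncaps++] = (struct cap_entry){ t, resource };
        return t;
    }

    /* Use a capability: access succeeds only if the token was granted. */
    static const char *cap_access(uint64_t token) {
        for (int i = 0; i < ncaps; i++)
            if (table[i].token == token) return table[i].resource;
        return NULL;  /* no key, no access */
    }

    int main(void) {
        uint64_t disk_key = cap_grant("/dev/disk0");

        const char *r = cap_access(disk_key);
        printf("with the granted key: %s\n", r ? r : "denied");
        printf("with a guessed key:   %s\n", cap_access(12345) ? "oops" : "denied");
        return 0;
    }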

Thus a disk driver wouldn't get access to the internet, and neither would a clock driver. The system time daemon would get access to a log file, a specific set of internet ports and addresses, and the clock. Any bug or vulnerability in one of these components would only affect it, and only the capabilities it happened to hold at the time.
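
As a rough illustration of how narrow those capability sets would be, the following C sketch gives each daemon only the named resources its job requires; the resource names and the holds() helper are made up for the example, not taken from any real system.

    /* Toy per-process capability sets: the time daemon holds exactly the
     * keys its job needs, the disk driver holds only the disk, so a
     * compromised disk driver still has no network key. */
    #include <stdio.h>
    #include <string.h>

    struct proc { const char *name; const char *caps[4]; };

    static int holds(const struct proc *p, const char *resource) {
        for (int i = 0; i < 4 && p->caps[i]; i++)
            if (strcmp(p->caps[i], resource) == 0) return 1;
        return 0;
    }

    int main(void) {
        struct proc timed = { "timed", { "clock", "udp:123", "log:/var/log/timed" } };
        struct proc diskd = { "diskd", { "/dev/disk0" } };

        printf("timed -> udp:123   : %s\n", holds(&timed, "udp:123") ? "ok" : "denied");
        printf("diskd -> udp:123   : %s\n", holds(&diskd, "udp:123") ? "ok" : "denied");
        printf("diskd -> /dev/disk0: %s\n", holds(&diskd, "/dev/disk0") ? "ok" : "denied");
        return 0;
    }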

Applications would have to be re-designed as well. For example, when you open a file in OpenOffice today, the program displays a system dialog box to get the path to a file, and then opens the file itself. The capability-based version would instead call a slightly different dialog box, which would then return a file handle (a capability) to that one file only. The save dialog would be modified in the same fashion. Any libraries the application requires can be included in its home folder. A capabilities-based version of OpenOffice would thus work the same way from the user's point of view, but be far more secure.
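
This pattern is often called a "powerbox": the trusted dialog does the choosing and hands the application an already-open handle. The C sketch below fakes that with a stand-in function, open_via_dialog(), which is not a real API; in a real system the dialog would run as a separate, more privileged process and pass the descriptor over IPC (on Unix, for instance, via SCM_RIGHTS).

    /* Sketch of the "powerbox" pattern: the application never opens paths
     * itself; a trusted dialog picks the file and hands back an already-open
     * descriptor, which acts as a capability to that one file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Trusted side: stands in for the system dialog.  The user's choice is
     * hard-coded here; any readable file works for the demo. */
    static int open_via_dialog(void) {
        const char *user_choice = "/etc/hostname";
        return open(user_choice, O_RDONLY);
    }

    /* Application side: it only ever sees the descriptor, never the path,
     * and holds no general filesystem authority. */
    int main(void) {
        int fd = open_via_dialog();
        if (fd < 0) { perror("open_via_dialog"); return 1; }

        char buf[128];
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("document starts with: %s\n", buf); }
        close(fd);
        return 0;
    }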

With this approach, we end up with secure systems that are still usable.

I think I've shown fairly well that we must re-design things from the ground up. It is a decidedly non-trivial task, but it is the only way to avoid having government overlords telling us what code we can and can't use. If we wish to own our own systems as free men, we need to get our act together and fix things now, before it's too late and we lose the freedom to write our own code.

The path we are on ends with computers we merely have a license to use: secured by the government, censored by the government, rented from big corporations, running applications we rent or buy from app stores. That is a future we need to avoid.

Thank you for your time, attention, and comments.
