Tuesday, July 17, 2012

Secure programming - good intentions

I recently read a good article about security practices in applications and software as a service. The author lays out some very good rules to help keep users' information secure in today's threat environment. However, it strikes me as a strong reminder of the vast amount of effort we waste by trusting application programs at all.

We should never completely trust any programs, services, or drivers outside of the very kernel of an operating system. We shouldn't have to. The millions of lines of code required to build even a basic database with a web front end are bound to have bugs, and those bugs can lead to unintended and unwelcome side effects. Depending on the cascade of events, the effects range from subtle to disastrous.

The application programmer has no tools to prevent a program from exceeding the scope of its task. Current operating system design holds that the user is the proper level of granularity for deciding what access a given task is allowed. All of the responsibility for keeping things safe is thus thrust upon programmers. The programmer and/or install package is then responsible for setting the permissions on every object (files, pipes, registry entries, ports, etc.) in the end system to be appropriate for the given tasks.
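To see why this is so fragile, here is a minimal sketch of that ambient-authority model. The helper name is hypothetical; the point is that under per-user permissions, any code the user runs inherits every permission the user holds, so nothing in the operating system confines it to its task:

```python
# Sketch of "ambient authority" under the per-user permission model:
# any code a user runs inherits all of that user's access rights.
# "innocuous_looking_helper" is a made-up name for illustration.
import os

def innocuous_looking_helper():
    # Nothing in the OS distinguishes this access from a legitimate one;
    # the process runs as the user, so all the user's files are in scope.
    return sorted(os.listdir(os.path.expanduser("~")))

# A routine that had no business touching the home directory can still
# enumerate it -- and could just as easily read or delete what it finds.
home_contents = innocuous_looking_helper()
```

The only defense in this model is that every one of those millions of permission bits was set correctly in advance.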

This is an impossible task, given that there can be literally millions of such permissions to set, and it only takes one mistake to let things slip through.

It doesn't have to be this way.

Capability-based security is an approach that uses the principle of least authority to enforce security in a much more appropriate manner. The millions of choices about what to deny are replaced with a much shorter list of things to allow. This list is per process, not per user. It states which files, folders, and ports are allowed, and in which mode (read-only, write-only, append-only, full access, etc.).
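The idea can be sketched in a few lines. This is a toy model, not any real OS API: the process is handed explicit capabilities up front, and every operation is checked against the rights attached to the capability rather than against user identity:

```python
# Minimal sketch of capability-style access (a toy model, not a real OS API).
# A Capability bundles a resource with an explicit, immutable set of rights.

class Capability:
    """A file reference restricted to an explicit set of rights."""
    def __init__(self, path, rights):
        self.path = path
        self.rights = frozenset(rights)   # e.g. {"read"} or {"append"}

    def read(self):
        if "read" not in self.rights:
            raise PermissionError(f"no read capability for {self.path}")
        with open(self.path) as f:
            return f.read()

    def append(self, data):
        if "append" not in self.rights:
            raise PermissionError(f"no append capability for {self.path}")
        with open(self.path, "a") as f:
            f.write(data)

def run_task(log_cap):
    # The task can only do what its capabilities allow; reading an
    # append-only log fails no matter which user runs the process.
    log_cap.append("task ran\n")
    try:
        log_cap.read()
    except PermissionError:
        return "read denied"
    return "read allowed"
```

Handing `run_task` a capability with only the "append" right means its worst-case behavior is known before it runs: it can add lines to one log file, and nothing else.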

This is a much more natural way to handle risk: you simply decide what side effects you are going to allow a given process to have, and the operating system enforces your decision. You don't have to trust your code, and neither does the user. If something goes wrong, the maximum extent of the damage is already known. You don't have to worry about the entire system shattering.

Isn't it time we stopped spending so much effort on making our programs safe, when it could be better spent building better programs? Help support efforts to deliver operating systems with capability-based security, such as Genode, which provides a choice of 8 microkernels, capability-based security, and runs native Linux applications.

Thanks for your time and attention.
