Saturday, February 19, 2011

Instead of cyberwar, and all that mess, let's just FIX things

I strongly believe that it's possible to reduce the threat of "cyber war" by actually fixing the security problem at its source: our computers and servers. Imagine if it were possible to greatly reduce the number of security holes on the average PC or server. If that were the case, we wouldn't need politically motivated filtering and other types of control to "save us" from our own systems.

The internet is just a big network, and while BGP seems to have its issues, with some work those can be solved. The network itself is just a "series of tubes", as it has been described in the past, and you don't have to guard the tubes if the ends are secured.

There is a deep design flaw in the operating systems and applications we use on a regular basis. Historically it's been possible to tightly control the code we run, so it was reasonable to trust that code to do its job. That assumption is no longer valid.

  • We can no longer afford the luxury of trusting our applications.
  • We can't even afford to trust our drivers with kernel mode.
  • We can't afford to trust the system processes to stick to their designated roles.


At a practical level, we have to trust some code, so why not trust as little of it as possible? A micro-kernel is the smallest amount of code required to manage the operating system. There has been much research in this area, and recently there have been "proven" micro-kernels (seL4, for example, was formally verified in 2009) which theoretically have no flaws in the implementation of their specifications.

Now, the kernel needs device drivers and other system processes to make a usable operating environment for the user and programs. A kernel which doesn't trust its drivers must use a new strategy. One way forward is the concept of capabilities. A "capability" is a token or key (really, just a big number) which allows access to a resource. Each device driver, system process, etc... is given the appropriate set of keys to the resources required to do its job. If the key isn't present, access is not allowed.

Thus a disk driver wouldn't get access to the internet, and a clock driver wouldn't need it either. The system time daemon would get access to a log file, a specific set of internet ports and addresses, and the clock. Any bug or vulnerability in one of these drivers would only affect it, and the capabilities it happened to hold at the time.
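To make the idea concrete, here is a minimal sketch in Python of how such key checks might look. The driver names, resource strings, and the Kernel class are all invented for illustration; a real kernel would hand out unforgeable tokens, not anything a process could guess or fabricate.

    # A sketch of capability checks, with invented driver and resource
    # names. Random 128-bit numbers stand in for unforgeable tokens.
    import secrets

    class Capability:
        """A key to one resource -- really, just a big number."""
        def __init__(self, resource):
            self.resource = resource
            self.token = secrets.randbits(128)

    class Kernel:
        def __init__(self):
            self.valid = {}  # token -> the resource it unlocks

        def grant(self, resource):
            cap = Capability(resource)
            self.valid[cap.token] = resource
            return cap

        def access(self, cap, resource):
            # No key, no access: the only rule the kernel enforces.
            if self.valid.get(cap.token) != resource:
                raise PermissionError("no capability for " + resource)
            return "ok: " + resource

    kernel = Kernel()
    disk_driver = [kernel.grant("disk:sda")]            # never gets a network key
    time_daemon = [kernel.grant("clock"),
                   kernel.grant("net:udp/123"),
                   kernel.grant("file:/var/log/ntp")]

    print(kernel.access(time_daemon[1], "net:udp/123")) # allowed
    try:
        kernel.access(disk_driver[0], "net:udp/123")    # wrong key
    except PermissionError as e:
        print("blocked:", e)

A compromised disk driver in this scheme can still scribble on the disk it holds a key for, but it simply has nothing to present for the network.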

Applications would have to be redesigned as well. For example, when you open a file in OpenOffice today, the program opens a system dialog box to get the name and path of a file, then opens the file itself. The new version would instead call a slightly different dialog box, which would then return a file handle (a capability) to only that file. The save dialog would be modified in a similar fashion. If there are libraries required, etc... they can be included in the application's home folder. A capability-based version of OpenOffice would thus work the same way, but be far more secure.
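Here is a rough sketch of that "trusted dialog" idea, using Python's tkinter file picker as a stand-in. The open_via_powerbox() function is hypothetical; in a real capability system it would run as a separate, trusted process. The key point is that the application receives an already-open handle, never a path.

    # A stand-in for the capability-returning dialog. Only the dialog
    # touches the filesystem; the application gets back one open handle.
    from tkinter import Tk, filedialog

    def open_via_powerbox():
        root = Tk()
        root.withdraw()                      # no main window, just the dialog
        path = filedialog.askopenfilename()  # only the dialog sees other files
        root.destroy()
        return open(path, "rb") if path else None

    # Application side: it can read the one file the user chose, nothing else.
    handle = open_via_powerbox()
    if handle is not None:
        data = handle.read()
        handle.close()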

With this approach, we end up with secure systems that are still usable.

I think I've shown fairly well that we must re-design things from the ground up, a decidedly non-trivial task, but it is the only way to avoid having government overlords telling us what code we can and can't use. If we wish to own our own systems as free men, we need to get our act together and fix things now, before it's too late and we lose the freedom to write our own code.

The path we are on ends with computers we merely have a license to use: secured by the government, censored by the government, rented from big corporations, running applications we rent or buy from app stores. This is a future we need to avoid.

Thank you for your time, attention, and comments.

Friday, February 18, 2011

A tale of rules and their makers

Management had made the choice, and there was no disputing it without risking his job. He had heard bad things about the new system, but resigned himself to making it work. So he started reading the documentation and signed up for the support forums. The tutorials showed how rules worked, how to make new ones, how to configure the auto-update feature, and how to submit a rule to the pool. The system worked by freely sharing rules that helped, so at least he could get the help of his peers.

Soon he was making rules that worked, and after that he learned how to make them simple and elegant. He could make a rule that had very few side effects and stopped the threat without much cost. The system was getting slower, but thanks to advances in technology, a new system would soon be installed that was more than twice as fast as the old one. The users were fairly happy with things, as it kept disruptions to a minimum.

Over time, he learned about the pros and cons of the other rule systems, and how they worked. He wasn't a big fan of his own system, but felt the users of the others were a bit too smug in their claims that their systems were somehow much better. He knew the basics were the same, that it was just a matter of time before theirs had similar problems, and that they mistook a temporary condition for a permanent one.

One day it occurred to him that there might be a better way to do things. A friend had joked that instead of making rules to stop threats, perhaps it would be better to have a list of things that were not threats. The idea stuck in the back of his mind, and the more he thought about it, the more sense it made. He tried to explain his new idea to his friends, but they thought it was silly: it would make things far too difficult to manage, and the users would complain too much about things they couldn't do because they weren't on the list.

Eventually he convinced some friends to build a prototype system. It would watch what the user did and build rules to allow those things, and it had a new feature which denied everything else. The idea of denying everything was crazy, but it worked in this case. The prototype was interesting, but he thought it should go further. He had an even bigger idea: the prototype should become the standard way of doing things.

His friends and peers thought he was nuts! How could you possibly list all the things the user wanted to do? Why would the users, who were the source of profit, possibly allow his group to do such an absurd thing? If the list of allowed things didn't have something they needed, they would have to stop work, tell his group, and wait for it to be added to the list. Such a presumption of power was surely a foolish thing to do.

He was sure his idea was right, but it wouldn't work because of the politics. He then wondered: what would happen if the users could add things to the list themselves? This would leave the users with a system that would allow them to do what they needed, but without his group always having to block threats. Such a system would leave his group with a lot more time for the other tasks they kept having to interrupt. He was sure it would be worth it, but how to convince his peers?



Well.... by writing this very story. The above is a description of an imaginary world in which firewalls lack the ability to include a default deny rule. That would make it necessary to enumerate every threat, create a rule to stop each one, and share the lists of rules. In our world, firewalls do have this ability, and we (network administrators) make rules explicitly allowing each protocol and port connection from the internet to our servers.
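The contrast is easy to show in code. Here is a toy packet filter in Python; the rule format and port list are invented for the example.

    # A toy packet filter showing the two policies.
    ALLOW_RULES = {("tcp", 22),    # ssh
                   ("tcp", 80),    # http
                   ("tcp", 443)}   # https

    def default_deny(proto, port):
        """Our world: anything not explicitly allowed is dropped."""
        return (proto, port) in ALLOW_RULES

    def default_allow(proto, port, deny_rules):
        """The story's world: anything not explicitly denied gets through."""
        return (proto, port) not in deny_rules

    print(default_deny("tcp", 443))    # True: we allowed it
    print(default_deny("tcp", 31337))  # False: dropped, no rule needed

With default deny, the short list of allowed things does all the work; nobody has to enumerate every possible threat.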


The above is also a description of this world: it is the way we currently handle computer viruses. We subscribe to services which list rules to identify bad code fragments, and we have systems which block those fragments when they are found. The point of this story is to get you to consider the opposite... a system which trusts nothing, and lets the users explicitly choose which connections and resources a program should get.

It's called capability based security, CabSec for short.

Thank you for your time and attention.

Monday, February 14, 2011

Small miracles

I'm very thankful for the random appearance of a piece of red paper in my life this morning.

God works in mysterious ways.

Sunday, February 13, 2011

A case against arbitrary field size limits in Medical Records

Here's my IT perspective on The Doctor vs. the Computer, which appeared in today's New York Times.

The doctor in question hit an arbitrarily sized text field for inputting the evaluation of a patient, and was arbitrarily stopped at 1000 characters. The help desk confirmed the limit, and was snarky about it.

I can see how this may have been an acceptable design decision when systems had a total of 5 megabytes of space in the 1960s, but it is clearly not acceptable by any means in our current era.

I found the article via Quora, and here's the comment I wrote there:

Wow... I can see how such things happen... and that is a truly stupid situation. Hours of lost medical care to save a few megabytes of disk space across a year. 
A single photograph, let alone some MRI or CT scan data, could wipe this savings out in an instant. 
The savings in this case, assuming the doctor had 5000 characters of text, would be 4000 bytes... and at today's prices of about 10 gigabytes / $US, that works out to 0.00004 cents. Let's say it took 2 minutes to do the edit. 
Done 10,000 times per year, that's 13.9 days of medical staff time, to save a whopping 0.4 cents! 
Yikes!
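For anyone who wants to check the arithmetic, here it is as a small Python script; all the inputs are the comment's rough assumptions, not measured data.

    # Checking the back-of-the-envelope numbers from the comment above.
    chars_saved_per_note = 4000      # bytes trimmed from each 5000-char note
    edits_per_year = 10_000
    minutes_per_edit = 2
    bytes_per_dollar = 10e9          # ~10 GB per US dollar

    storage_cents = chars_saved_per_note * edits_per_year / bytes_per_dollar * 100
    staff_days = edits_per_year * minutes_per_edit / (60 * 24)

    print(f"storage saved: {storage_cents:.2f} cents")  # ~0.40 cents
    print(f"staff time:    {staff_days:.1f} days")      # ~13.9 days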

Now... I'm cross-posting it here to reach a wider audience. If you're in IT, and considering the size limit of a text field, be very sure you don't just want a memo field instead.
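For instance, here's what the memo-style choice might look like, sketched with sqlite3 (which ships with Python). The table and column names are made up for the example; in databases that distinguish the types, the point is to prefer an unbounded TEXT/CLOB column over something like VARCHAR(1000) for free-form notes.

    # The memo-style choice: SQLite's TEXT type has no arbitrary length cap.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE patient_notes (
            id         INTEGER PRIMARY KEY,
            evaluation TEXT            -- memo field, not VARCHAR(1000)
        )
    """)
    conn.execute("INSERT INTO patient_notes (evaluation) VALUES (?)",
                 ("x" * 5000,))        # 5000 characters stores just fine
    conn.commit()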


Thanks for your time and attention.