Tuesday, June 19, 2012

When will IT stop blaming the user?

I'm a system administrator in my day job. Long ago I realized it was foolish to blame users for things that weren't their fault. It's easy to fall into a trap, an IT version of Stockholm Syndrome, in which we grow accustomed to the insane, inconsistent behavior of our systems and expect everyone else to do the same.

This article in Dark Reading is a typical example of blaming the user instead of solving the problem. Here's the lead paragraph:
On a chaotic workday, a top executive scans hastily through dozens of emails that have arrived in the last 10 minutes. There is one from an IT staffer whose name he doesn't know – he doesn't know most of the people in IT – and it states that he needs to do a password reset or he will lose access to his applications. Without thinking, he clicks on the link provided in the email – and malware is introduced to the entire corporate network. (Emphasis mine)

The basic setup is good: it displays some empathy, showing an understanding of the pressures we all face from IT systems, users especially. Then the article goes off the rails, assuming the user didn't think before opening the link and blaming him for all the damage the malware subsequently does to the network.

Sharing "trench stories" about the enemy (the user) may make the author and sympathetic readers feel good... but it does nothing to solve the actual problem. In fact, this type of article makes things worse, draining away emotional energy that could have been directed toward solutions.

The outer problem here is a complete lack of security once something is inside the corporate firewall. The inner, root issue is the complexity of modern software and the need to trust millions of lines of code any time the user makes a choice. The user can't examine those millions of lines of code; in fact, nobody could evaluate them as a system and make them secure.

We run millions of lines of code in fragile systems that offer no real security. Blaming the user for accidentally exposing this fact isn't healthy. We need to adopt systems that reduce the amount of trust we place in code, ideally reducing it to zero.

Capability-based security offers a step in that direction. The Genode project is active, and hopes to reach the "eating our own dog food" stage by the end of 2012. It offers capability-based security, a choice of eight different microkernels, and the ability to run standard Linux programs as processes. That means you could set up a system where the user runs things in a sandbox by default, on a foundation that isn't fragile and doesn't shatter at the drop of a hat.
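To make the idea concrete, here is a minimal sketch of capability discipline in Python. It illustrates the general principle only, not Genode's API; the file name and function names are made up for the example:

    # Ambient authority: every one of the millions of lines of code this
    # process runs could open any file the user can reach. A bug or a
    # booby-trapped library here can do anything the whole account can do.
    def log_note_ambient(text):
        with open("notes.txt", "a") as f:   # nothing limits this to notes.txt
            f.write(text + "\n")

    # Capability discipline: the function receives only a writable handle.
    # The worst a bug (or malware) inside it can do is scribble on that file.
    def log_note(notes_file, text):
        notes_file.write(text + "\n")

    with open("notes.txt", "a") as notes:   # the grant: one specific capability
        log_note(notes, "meeting at 3pm")   # no path, no ambient file access

The point is the shape of the grant: instead of trusting all the code with all the user's authority, each piece of code gets only the handles it needs, so one careless click can't take the whole network with it.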

Instead of blaming users for our broken glass houses, let's go get some better building materials.