A storm is brewing, and it's not going to be pretty.
Windows isn't secure. We all know it. We've gotten used to hiding our Windows boxes behind firewalls and running Windows Update on a regular basis. We've all installed and updated our virus scanners regularly as well. We've all grudgingly come to accept this as part of a normal IT environment.
My day job entails adding value to the hardware and software we've got installed on our corporate network. It can be broken down into a few components:
- Absorbing uncertainty (proactively and reactively)
- Keeping data secure
- Keeping data available
There are a lot of things that can go wrong with complex systems such as computers. IT professionals absorb the uncertainties, and hide the complexity from the users. The basic idea is to make it as much like an appliance as humanly possible. We do the tricky stuff so they don't have to.
History:
The value mix has traditionally been one of keeping the systems running while trying to balance cost against performance. Data security has slowly crept up in scope, but is mostly focused on limiting access to the correct mix of users.
Once upon a time, getting a computer virus was something handled on a case-by-case basis. It could usually be tracked back to the offending floppy diskette or email. The damage was usually measured in the amount of time spent doing the cleanup. Virus issues were strictly limited to workstations, as you would never run an application on a server.
Server uptimes were measured in hundreds of days. One old 486 box (NT 3.51) running as a backup domain controller ran for more than 400 days before I had to swap out the UPS powering it. Servers did one job, and did it well.
Then the viruses started becoming more frequent. Every workstation had a virus scanner on it to scan files before opening them. Eventually, the scanners ran full time, though at a considerable loss in performance. This eliminated the need for end users to worry about viruses.
As the tempo of virus development began to increase, the need for update mechanisms became apparent. We began telling our users that if the virus scanner was more than 6 months out of date, it was useless. Over time this window became smaller, and we got tired of manual updates, so the vendors solved it by offering automatic updates.
Buffer overflow exploits started showing up on the internet in the form of worms. We treated this threat in a similar fashion to the virus threat. Over time we went from case-by-case fixes, to manual updates, to automatic updates.
Somewhere along the line, it became obvious that Windows wasn't tough enough to deploy directly on the internet, so we started hiding our Windows boxes behind firewalls. This introduced us to the joy of having to support VPNs, but it seemed to be good enough. The firewall systems themselves also have automatic updates.
When the spam got too bad, we started using keyword filters, then Bayesian filters, then we got spam filters as an add-on to the virus scanner.
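For the curious, here's a rough illustration of what those Bayesian filters are doing under the hood. This is a toy sketch of my own, not any vendor's actual filter; the training phrases and the naive word-splitting are invented purely for illustration:

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam). Returns per-class word counts and totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
            totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Log-odds that text is spam, with add-one smoothing for unseen words."""
    vocab = len(set(counts[True]) | set(counts[False]))
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (totals[True] + vocab)
        p_ham = (counts[False][word] + 1) / (totals[False] + vocab)
        score += math.log(p_spam / p_ham)
    return score

# Invented toy training data; a real filter learns from thousands of messages.
training = [
    ("cheap pills buy now", True),
    ("meeting agenda attached", False),
    ("buy cheap watches now", True),
    ("server maintenance window tonight", False),
]
counts, totals = train(training)
print(spam_score("buy cheap pills", counts, totals))  # positive score leans spam
```

The appeal of this approach, versus the keyword filters we started with, is that the filter learns the weights itself from the mail it sees, instead of an admin hand-maintaining a blocklist.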
So, here it is in mid-2005: we've got a continuous stream of system patches and a continuous stream of virus definitions, most of our spam is gone, and we're behind a continuously updated firewall.
This interlocking system of patches does a good job of hiding the complexity and plugging the holes so that the users can go about their business. It's not perfect, but hey, that's why we get paid the big bucks, right? We fix the little issues that pop up, then go back to our other work.
This system addresses the growing volume of threats in a fairly straightforward and efficient manner. It's not perfect, but it's amazing that it works as well as it does.
However, I'm not happy. In fact, I'm starting to get very worried. The chinks in the armor are showing up, and the end users are starting to get distracted from work again, in a number of ways. These threats have supposedly already been dealt with, but my end users are still seeing them... and that worries me.
There is a problem here, widely known as the day-zero problem. For practical purposes, there is an essentially infinite number of vulnerabilities in the computer systems we use. A growing number of tools are available to automate the process of mining for a new flaw to exploit. These tools usually include the means to create a program that takes advantage of the flaw. This new program is called an exploit.
To use an exploit, it is then necessary to find target systems which are vulnerable. This requires some form of scanning of the internet address space, and may also include sending emails, or making queries to DNS servers, web servers, Google, or other search engines. This activity is the first point at which it is possible to react to a threat, if it is detected.
Once targets have been found, the flaw is exploited and the target system compromised. This is the second point at which the threat may be detected. Worms may then use the compromised system to further search for and compromise other systems. This is usually done as part of the exploit, and no human involvement is necessary once the exploit has been launched.
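To get a feel for why this scan-and-compromise cycle snowballs so fast, here's a toy simulation of a random-scanning worm. Every number in it (address space size, vulnerable population, scan rate) is invented for illustration; it models no real worm and no real network:

```python
import random

def simulate_worm(address_space=100_000, vulnerable=1_000,
                  scans_per_tick=10, seed=1):
    """Toy model: each infected host probes random addresses every tick.
    Returns the number of ticks until 95% of vulnerable hosts are infected."""
    rng = random.Random(seed)
    # Vulnerable hosts are a small, hidden fraction of the address space.
    targets = set(rng.sample(range(address_space), vulnerable))
    infected = {next(iter(targets))}  # patient zero
    ticks = 0
    while len(infected) < 0.95 * vulnerable:
        ticks += 1
        # Every infected host fires off its probes; hits become infected too.
        for _ in range(len(infected) * scans_per_tick):
            probe = rng.randrange(address_space)
            if probe in targets:
                infected.add(probe)
    return ticks

print(simulate_worm())
```

The point of the sketch is the shape of the curve: each new victim becomes a scanner, so growth is exponential until the vulnerable population starts to saturate. That is why the window for a human-in-the-loop response keeps shrinking.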
There are many complex factors that determine the extent and speed with which an exploit can propagate across the internet. A discussion of these factors is outside my area of expertise. I am certain, however, that there are two opposing groups working to shift these numbers to their advantage. I'll simplify it down to good and evil.
It seems to me, based on anecdotal evidence, that the good guys are smart, but they have to be careful. They have to worry about niceties such as preventing false positives, testing, quality control, etc. Testing is good, and necessary, but it's also a delay, and it has a definite lower limit.
That lower limit seems to be somewhere between 24 hours and 2 weeks, depending on who you ask and how you measure. Meanwhile, the bad guys can work on things at their leisure and deploy at will. They are well aware of most of the efforts expended to keep our systems secure, and have worked to build ways around them. A well-funded evil person can test his exploit against the latest commercially available detection methods, without fear of discovery.
As much as I want to avoid the analogy, it's a war. Both sides have to test their weapons, but the good guys have to do their testing while the clock is running. Unfortunately, they don't have 2 weeks like they used to. The window for testing is getting smaller, and will, at some point, even become negative, due to the other delays inherent in the system.
The recent signs from my users tell me that time is running out. The virus signatures aren't keeping pace. We haven't solved the issue, and it's going to come back at us, full force, very soon.
We live in interesting times.
--Mike--