Tuesday, August 30, 2005

Paranoid by Default

Capability-based systems are paranoid by default.

The ONLY things a program is allowed to do are those specified by the capabilities handed to it.
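To make that concrete, here's a deliberately trivial sketch in C (the types and names are invented for illustration; this is not any real kernel API): the process starts with an empty list of grants, and anything not on that list is simply refused.

```c
/* Trivial sketch of "paranoid by default" -- invented API, not a real OS. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *resource;   /* e.g. "/home/mike/report.txt"              */
    unsigned    rights;     /* bitmask: 1 = read, 2 = write              */
} capability;

/* The process's entire world: whatever was handed to it at startup.      */
static capability granted[8];
static int        granted_count = 0;    /* nothing granted yet           */

static int may(const char *resource, unsigned right)
{
    for (int i = 0; i < granted_count; i++)
        if (strcmp(granted[i].resource, resource) == 0 &&
            (granted[i].rights & right))
            return 1;
    return 0;          /* not granted means not allowed, no exceptions   */
}

int main(void)
{
    /* Nothing has been granted, so even a read is refused. */
    printf("read /etc/passwd allowed? %s\n",
           may("/etc/passwd", 1) ? "yes" : "no");
    return 0;
}
```

The point isn't the data structure; it's that the default answer is "no".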

The main problem in making any point (including capabilities) is that you have to simplify things to the point where someone can grok an example. The downside is that the example is deliberately trivialized, so the tendency is to poke holes in it and assume those holes scale up... which they don't.

Cases of this that I've seen in the past include the arguments for and against pretty much any new technology, including:
  • Structured coding
  • Object Oriented Coding
  • Information Hiding
  • Garbage Collection
  • Strong Typing
All of these have been attacked at the example level. There are benefits to all of them, and yes... you could write in assembler, but it's more efficient for the programmer to write in a higher-level, structured manner. Etc., etc.

I need to make a better case for Capabilities (and for the BitGrid, for that matter), and I WILL do so.

--Mike--

Monday, August 29, 2005

Blogs for Dialogs?

Now for some meta-analysis (navel gazing?) of blogs as tools for dialog.

Bryan makes a damned good point:
You're asking some pretty big questions, and it's a dialog I'd love to be a part of.

So, if you don't mind a little criticism, I'd like to humbly suggest that you're gonna need a better forum for the dialog; blogger ain't gonna cut it. Too easy to lose stuff as blog posts & their comments roll into the archives. We need something that allows threaded discussion! And a bigger commentbox, maybe with rudimentary WYSIWYG, would also be nice ...
Now, threaded discussions would be nice, but I think Slashdot has shown the limitations of the low-cost commenting model. Long-term discussions tend not to happen there either. I like Doc's idea of subscribing to searches, but I can't figure out just how I would search for the thread about capabilities without getting every sales slick on the internet.

I think we're going to have to come up with a way of turning blog postings into threaded discussions, because a blog (that isn't spam) is an assertion of identity that has a cost.

Ugh... it all comes back to the Identity issue... it keeps creeping up any time you step back and really think about things.

We all want a discussion without spam, and with some maturity, to arrive at the truth.

I'm listening for ideas (and I'm tired... too much news surfing last night)

--Mike--

Apocalypse Now - Version 0.02 (rambling)

I'm sorry for the length of this post; I haven't had time to make it shorter.

On Windows Bashing:
I started this conversation when I noticed things poking through the layers of security in my systems, which happen to be Windows-based. It is my belief that Linux would fare no better, given sufficient market penetration.

Bryan makes some very good criticisms of version 0.01 of this thread. I may be guilty of a bit of Chicken Little syndrome, or crying wolf, or playing Cassandra, or not... only time will tell.

Open source & Bugs:
When people point out that open source should reduce the number of bugs in a program, I believe they are right. But while fewer bugs are good, that isn't going to drive the number to zero. Real security requires zero exposed bugs, ever (which may well be impossible).

George Ou points out that progress is being made on many fronts, including the buffer overflow issues, and a lot of work is being put into tightening things up by many parties, and I applaud everyone's efforts.

I feel that the threat is going to continue to grow in strength. I believe that one needs to be fairly paranoid these days, and that capabilities are just the right amount of paranoid thinking to be encoded into an operating system. ;-)

Bryan makes the case that capabilities can be emulated with the right combination of ACLs. The technical arguments surrounding this get very tricky, very quickly. I believe (and am willing to change my opinion based on the facts and/or a good argument) that capabilities embody a concept which is missing from the current crop of OSs:
Don't trust the code

In a capability-based system, everything essentially lives in its own sandbox. The only interactions possible are via the capabilities provided to a given piece of code. A capability-based system should be able to run mobile code without any risk of compromise. Think of capabilities as the Java sandbox on steroids.
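As a rough illustration of "the only interactions possible are via the capabilities provided", here's a sketch in C with an invented API (no real OS works exactly like this): the untrusted plugin receives handles as arguments, and there is no ambient open() call for it to abuse.

```c
/* Hypothetical sketch: mobile code confined by the capabilities it is handed. */
#include <stdio.h>

typedef struct {
    const char *name;    /* what the handle designates                   */
    unsigned    rights;  /* 1 = read, 2 = write                          */
} cap_t;

/* Attenuation: derive a weaker capability from a stronger one.           */
static cap_t cap_limit(cap_t c, unsigned rights)
{
    c.rights &= rights;
    return c;
}

/* The "mobile code": everything it can touch arrives as a parameter.     */
static void plugin_render(cap_t output, cap_t input)
{
    printf("plugin may %s %s, and %s %s\n",
           (output.rights & 2) ? "write" : "NOT write", output.name,
           (input.rights  & 1) ? "read"  : "NOT read",  input.name);
    /* There is no global open() here to abuse: no handle, no access.     */
}

int main(void)
{
    cap_t window   = { "window:42",      3 };  /* host holds full rights  */
    cap_t document = { "doc:report.txt", 3 };

    /* Hand over only what the plugin needs: write to one window, read
     * from one document.  A compromise of the plugin is contained to that. */
    plugin_render(cap_limit(window, 2), cap_limit(document, 1));
    return 0;
}
```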

There are many marvelous programs and components out there, waiting to be written. Java gives a hint as to where this could go. The demo scene (DOS programs limited to 64k) points to the really cool things that can be done if mobile code could somehow be run safely.

I believe that retrofitting the Capabilities model into the existing Windows and Linux code base is possible, but it'll be a large chunk of work in either case. I believe that efforts in both camps should be supported. It would be nice if they interoperated, or could somehow share code, but I realize that's not likely.

It's been an interesting thread, thanks to Doc Searls, David Berlind, George Ou, and Bryan from AdminFoo for all the support and constructive criticism.

Sunday, August 28, 2005

The Tao of Doc

It occurs to me that the best way to ask Doc Searls a question is to merely read what he's already said about it.

A day of conversation (with some self analysis)

I belong to a computer club called APCU. We meet monthly to help solve each other's problems through a part of the meeting called "Random Access". During Random Access, we have a moderated discussion (with a series of hand signals to make it easier on the moderator), in which we try to solve the technical issues that have arisen for us in the past month, along with any other technological questions that come up. Random Access usually runs for an hour, followed by a presentation on a technical topic from one of the club members, or a vendor. We don't tolerate sales pitches, so the technical talks tend to be quite valuable to us.

Because of scheduling conflicts, we didn't have the presentation yesterday, and so we had extended Random Access. I used the opportunity to bring my "Windows apocalypse" thread into the conversation there. It was very enlightening, to say the least, and I found it quite valuable.

I managed to get some consensus from the group on the overall subject of security:
  • ANY system is more secure in the hands of Proficient Technical users
  • ANY system is less secure in the hands of an "average" user
Now, because we like to think of ourselves as technically proficient, we might be a bit biased, but I felt it quite rewarding to reach this agreement as we ran out of time.

Lunchtime conversation was quite interesting, as I solicited doomsday scenarios (which had come up in various comments at the meeting). The Avian Flu pandemic scenario is a new one to me... and given the reputation of the member who brought it up, it's something worth worrying about. The Peak Oil scenario ended up being the topic of conversation, however.

As time went on, I found there were a few camps to which one could belong, but we didn't reach any consensus. I don't know whether it was the limited time, the less moderated nature of conversation over excellent food, or the overall nature of the topic that prevented this.

After sleeping on it and looking back in retrospect, I find myself wanting. I want to get to the truth in all of this. I want to participate in a dialog, and explore the idea space. If there truly is a problem, I want to be part of the solution. I wonder how well this blog, as well as all of the other means of conversation at my disposal (phone, IM, email, letters, etc.), will work towards this goal.

All of this leads to this question:

  • How do I best use this blog as a means of seeking and sharing truth?

I'll be listening,
--Mike--

Thursday, August 25, 2005

Capabilities in the real world

Capabilities are the permissions to do some specific task. I wrote this on the train this morning, I hope it helps illuminate the area around security I've been talking about recently.

Imagine a network where there are billions of accounts. Some of the users have multiple accounts. But in this system, the accounts have no passwords. The social penalties for using the wrong account, along with keeping the usernames secret, are the basis of security (security through obscurity). Once someone has your username, your only options are to get a different username and to watch the activity more closely.

This IS the situation we all face in the world of credit cards and Social Security numbers. Two-factor authentication is seen as THE solution to this problem. In this case, it's like finally allowing the usernames to have a password as well. The only problem is that many sloppy implementations will simply require you to give out your username AND password to make a purchase. If you are given the ability to change the password, and do it frequently enough, you decrease (but don't eliminate) the odds of misuse of the account.

A better system is to use capabilities. For instance, when you buy something online, what you really want to do is grant permission to withdraw only the amount you specify from your bank. Some vendors are now experimenting with this idea, known as a "one-time" credit card number. That one-time number is also a capability.

When you give a program a capability, it is only good for that use, until revoked, and only for that one process. No other process can utilize it even if it manages to acquire it. If it becomes necessary to pass the capability along to another process, that transfer itself requires a capability, and the OS will then issue an appropriate capability to the receiving task.

(Technically, all capabilities include a GUID, and are locked to a specific process.)

Back to the credit card analogy: if you put a capability to withdraw $100 into a message, and someone managed to intercept it, it would be useless to them because it's locked to the recipient's identity.
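Here's a tiny sketch of that binding (the field names are invented for illustration): the token carries a GUID and the identity of the process it was issued to, and the system refuses it when presented by anyone else.

```c
/* Sketch only: a capability token locked to the process it was issued to. */
#include <stdio.h>

typedef struct {
    unsigned long long guid;      /* unique id of this particular grant   */
    int                owner_pid; /* the one process allowed to use it    */
    const char        *action;    /* e.g. "withdraw:$100"                 */
} cap_token;

/* The OS-side check: a token presented by any other process is dead.     */
static int redeem(const cap_token *t, int caller_pid)
{
    if (t->owner_pid != caller_pid) {
        printf("rejected: token %llu was not issued to pid %d\n",
               t->guid, caller_pid);
        return 0;
    }
    printf("ok: pid %d performs \"%s\"\n", caller_pid, t->action);
    return 1;
}

int main(void)
{
    cap_token t = { 7001ULL, /* owner_pid = */ 1234, "withdraw:$100" };

    redeem(&t, 1234);   /* the intended recipient: allowed                */
    redeem(&t, 6666);   /* an eavesdropper replaying the token: refused   */
    return 0;
}
```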

So, I've laid out a practical set of analogies and examples to help demystify capabilities. I've assumed a lot, but I've got a programming background, and I'd welcome discussion on the technical aspects of pulling all of this hand-waving off in a secure manner.

Sunday, August 21, 2005

New look and feel

I noticed that there were no links to other posts in the old template, so I updated it, and republished everything. Sitemeter is addictive, though I'm sure I'll eventually get over it. Won't I?

Marketing and Blogs

Over at Naked Conversations, Shel Israel answers the question Is Blogging Anti-Marketing? Doc then questions his blog being categorized as a PR blog, while pointing to the assertion that PR is becoming a management function.

What's really happening is that we're all learning the 4th R (along with Reading, wRiting, and aRithmetic) that got left out when mass education took hold in the 1900s... Rhetoric. We bloggers are getting quite good at sniffing out abuses of Rhetoric.

It's only the PR or Marketing or Advertising based on lies that has cause for concern. If your message is true, we'll welcome it. We just don't want to be lied to. It's pretty simple.

For example, the English Cut blog run by Thomas Mahon is marketing, and it's brilliant! It's a behind the scenes peek into a world that I would otherwise know nothing of. It's very effective marketing, and it's a blog.

Saturday, August 20, 2005

Does anyone have good Threat Models?

Ian Grigg asks WYTM? (What's Your Threat Model?), and does a pretty good job of explaining the de facto threat model for the internet, the same threat model used in the design of SSL and TLS. Ian then proceeds to point out the bad assumptions, and the need for a better model.

I can't seem to find a good source for the threat model used in Unix or Windows. I can only assume, which I'd rather not do.

It's entirely possible there was no threat model for Unix, but just a common shared set of assumptions and a subconscious model in the heads of the developers.

I want to show how the nature of threats, and the various fudge factors used when guessing about the outcome of a threat tree, have shifted drastically in the past 30 years. Once this is brought out into the open, we can discuss how to mitigate the threats and better design future systems.

So, I'm looking for threat models for Unix, Linux and Windows to use in this analysis. Any help would be greatly appreciated.

--Mike--

Friday, August 19, 2005

MetaBlame Brain Dump

A short recap of the thread to date, followed by a brain dump (which I've tried to keep sane):
  • I noticed things seeping through the filters, and worried aloud about security.
  • Doc Searls suggested it's really Mono vs Poly
  • David Berlind points out that Monoculture == Corporate standard, and starts to consider the implications, and the need for discussion
  • Zotob hit CNN, making the headlines
  • The Zotob blame game began
I want to get the discussion going again. It doesn't matter who gets blamed for this particular worm; the answer is irrelevant to fixing the overall problem. What really matters is that we correct the bigger picture. (So I'm playing a Meta-Blame game?)

Here's how I see it... and I'm more than willing to shift my views to fit the facts:
  • All 3 major platforms (Windows, Mac, and Linux) have required patches in the last year
  • It is safe to assume they all have remaining undiscovered (or undisclosed) vulnerabilities
  • It is impossible to eliminate all of the bugs in any system
  • Evil is at work on new exploits, and getting better at it
  • Day Zero exploits nullify automatic updates as an effective tool
This depressing picture leads naturally to the conclusion that there isn't a single system which will remain secure over time. If we keep using variations of the same strategy, we're going to get the same results.

My social/economic view of the overall driving forces looks like this:
  • Evil people provide resources to seek out our vulnerabilities, in expectation of a return on investment (damage to infrastructure, validation of ego, extortion, etc)
  • Evil people operate a bazaar (along the lines of the Global Guerrillas theory of John Robb), which distributes knowledge, and distributes the risks
  • Offensive Tools which prove effective become commercialized (weaponized?) in this bazaar.
  • Defensive Tools which prove effective become commercialized
  • Good people also operate a bazaar (along the lines of the Cathedral and the Bazaar theory of Eric S. Raymond)
  • Good people provide resources to defend against attacks, in expectation of a return on their investment (improved productivity, better security, validation of ego, etc.)
You can see there is a mirror-like symmetry to all of this, and information leaks both ways.

When you get to the technical arena the picture includes these elements:
  • Exploits must expend resources to search for targets (Time, Bandwidth, Risk of Exposure)
  • Once found, attacking the target is a gamble for more resources
  • The pool of targets is of finite size
  • The cost of acquisition of targets increases as time goes on
  • Not all identified targets yield success
  • Attack programs are subject to reverse-engineering, so defenders can effectively review their source
On the technical level, it's reasonable and necessary to assume that a perfect defense remains unavailable. It becomes quite prudent (and urgent!) to pursue strategies to reduce the return on investment for a given exploit:
  • Diversify our systems to reduce the absolute numbers of each specific vulnerability (as Doc pointed out in Mono vs Poly)
  • Utilize IDS and Honeypot systems, along with other monitors, to increase the probability of interception, and decrease the time
  • Automatic updates and scanners that block the leakage of resources from known holes eliminate the long-term value of exploits as a possible resource base
On the social/economic front, the strategies include:
  • support the white-hat community to promote the constructive disclosure of flaws
  • stop the blame game which encourages vendors to hide flaws
  • community discussion and cooperation in the search for better technologies and social strategies
  • re-examination and re-evaluation of the engineering tradeoffs made in our current system designs.

The cost of C

We've all come to expect that any given computer program is going to have a bug or two. However, the nature of the cost of these bugs is changing. I feel it's time to re-examine our choice of languages for implementing Operating Systems.

{I'm biased... I admit it... but this isn't meant to be flame bait, and I hope it contributes to the discussion. I'd especially like to know which bits I'm wrong about}

A somewhat short history of programming

In the beginning was the Flying Spaghetti Monster. It was written in assembler, and could do many great and mysterious things with its noodley appendage. Its wrath was written in the core dumps of the devout few who were its followers.

The first programming environments allowed the user total freedom to use the machine as they saw fit. The machines were so expensive, it was well worth some extra time on the part of the programmer to wring out a few operations here and there in the name of efficiency. The hacker culture grew out of this need to wring the most possible work out of the fewest possible machine cycles.

As machines became more powerful and lower in cost, the benefit of wringing out the extra few cycles decreased, as the programmer's time became relatively more valuable. The growing complexity of programs resulted in the appearance of procedural programming, which breaks programs down into sets of procedures and functions. C was one of these procedural programming languages.

Structured programming relies on the concept of limited scope to reduce the coupling between portions of a program, in an effort to localize and reduce the resulting effects of logic errors, and other bugs. The benefits of structured programming are now an accepted fact in most corners of the software development world. Pascal was one of the first popular structured programming languages.

Over time, other improvements have made the scene, including:
  • Type checking
  • Bounds checking
  • Object oriented programming
  • Native strings
  • Garbage Collection
  • Functional Programming
  • Aspect Oriented Programming
  • Programming by Contract
All of these improvements are aimed at improving the productivity of the programmer, at the expense of run time. As computers continue to become faster and less expensive, this appears to be a worthwhile tradeoff.

Why C?

The Unix operating system became widespread in academic circles, and was the first to be widely ported to a large number of environments. Liberal licensing terms and access to the source code helped spread the popularity of the C language among its users.

Meanwhile, the computer science community developed a strong interest in the Pascal programming language. UCSD Pascal was widely distributed, but had its share of problems mostly due to the interpreted nature of the UCSD implementation.

The movement towards structured programming in the 1970s met head-on with this large body of C programmers. Because programs in UCSD Pascal ran many times slower than those written in C, the C programmers won the battle. This was especially true in operating system design, where speed matters more than programmer time.

C remains the implementation language of choice for operating systems, even in 2005.

The cost of C

We've come to accept that if a given program fails in some fashion, we can track down the bug to a specific piece of code, and just fix it. The costs are limited to the inconvenience and lost work of the user, along with the costs of incorrect outputs. Structured programming limits the action of a bug to its immediate module and those calling it.

However, when pointers are involved, any bug can invalidate the scope limits of everything, including the operating system. Thus pointer bugs can result in seemingly random crashes. The debugging of pointer errors is one of the toughest jobs facing a programmer. It is this fact which led to the introduction of largely pointer-free programming languages.

The C language as utilized today still forces the programmer to deal directly with pointers. Unlike other variables, when a pointer has an incorrect value, the scope of the error immediately becomes unlimited. A single pointer error in any line of C code has the capacity to affect any other variable or result. Thus pointer errors are a special danger.

The C language also does not deal well with complex data types, and buffers for data must be manually allocated and freed. The cost of buffer overflows has been dramatically demonstrated by the various worms that have exploited them over the last few years.

All of this results in C programs tending to have buffer overflow issues, as well as pointer handling problems.
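A minimal, contrived example of the kind of bug I mean: a one-character mistake whose damage lands on a variable the code never even names.

```c
/* One off-by-one pointer write, and the damage lands somewhere unrelated. */
#include <stdio.h>

int main(void)
{
    int balance = 100;             /* an innocent, unrelated variable     */
    int readings[4] = {0};
    int *p = readings;

    for (int i = 0; i <= 4; i++)   /* off-by-one: writes 5 slots          */
        p[i] = 7;                  /* the 5th write is out of bounds      */

    /* Undefined behavior: depending on stack layout this may print 7,
     * crash, or appear to work -- which is exactly the problem.          */
    printf("balance = %d\n", balance);
    return 0;
}
```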

Conclusion

The choice of the C language for implementing operating systems made sense in the 1970s, but it is no longer appropriate as currently utilized. The buffer and stack overflows that are more prevalent in C programs provide more targets for exploitation, and reduce our collective security. It's time for an alternative.



I'm just one guy... with an opinion... which might be wrong...

Thanks for your time and attention

--Mike--

Update 5/3/2007 - A commenter left this

A good article describing overflow exploits in a basic language


http://www.loranbase.com/idx/142/1921/article/Writing-Buffer-Overflow-Exploits-for-Beginners.html

You can find the original article (as near as I can tell) here.

Second source software

Doc says that it's not really Good vs Evil, but Mono vs Poly. Doc lays out the case for diversity in our networks, especially in the form of compatible, but different, implementations.

I'll talk more about Evil in another post. Here is yet another set of ideas to add to the pot:

Second source semiconductors

The "Poly" concept that Doc refers to is widely present in the semiconductor industry -- second source.

The nature of semiconductors is that a specialized design is produced in sufficiently large quantities to recover both the design and the production costs. There are high barriers to entry on the production end of this market, with recent chip fabrication facilities carrying price tags in the billions of dollars. It is the low incremental cost of mass production that drives the profits of this industry.

Because of the large set-up costs, chips are usually produced in a single large batch sufficient to meet market demand, as deemed appropriate by the producer. All of this combines to result in a market which tries to limit the number of designs to the ones that turn out to be most profitable.

The nature of the market automatically brings the availability of any chip into doubt. Other factors which can make things even worse include fiscal failure of the producers, excessive market demand (competing buyers driving the price too high), and outright disaster.

To reduce the risks for customers, and thus to help sell chips, suppliers often make legal agreements with other suppliers to provide a backup source for chips of a given design. This then becomes a good selling point for vendors to use, often explicitly stating where to find the second source (usually at the same price).

When chips are purchased as components to be built into some product, the cost of any change to the final product makes them very expensive to substitute. If any component can't be acquired, large costs are incurred while seeking a suitable replacement. It is thus natural that a significant portion of the design cost of a new product goes into making sure that all of the components will be available in a timely and cost-effective manner. Engineers thus tend to seek out and specify chips with second sources.

The second source then helps to ensure availability of a given component. It also helps in another way which is not quite as obvious: troubleshooting. If a given design is found to work well with one supplier's chips but not the other's, it becomes possible to track down exactly where the fault lies, saving time and energy.

As the semiconductor industry matured, vendors recognized the benefits of second sources, and the value they gave to their customers. It became (and remains) commonplace to see the term "second source" in sales literature, and specification sheets.

Second source software

In the computing world, there are very few second sources. The barrier to production is essentially zero, as anyone can make another copy of a program. Thus, unlike in the hardware world, a second source for software means a second design, as opposed to a second producer.

There are good reasons to avoid second sourcing and its cousin, "forking". The costs of software are all in development, thus a second source is twice as much work. It is highly unlikely that someone will branch off from a project (fork) in order to produce different code with the same set of features. It's entirely rational that we would all want to single-source as much as possible... until the resulting monoculture brought certain risks with it.

The number one driver for this discussion (from my limited perspective) is the vulnerable nature of our computing infrastructure. Most of our systems come from a few sources. It's not uncommon to find that a flaw in a widely used library results in vulnerabilities across a vast number of systems.

So, the incentives are beginning to appear for second sources of software. It's going to take a long time before things start to show up. It's even possible that another solution to the problem can be found which doesn't require such an investment.

Open source

The open source movement plays a part in this picture as well. Open source projects result in a product with NO production costs. The design costs have all been absorbed by the contributors to a given project. The availability of source means that literally anyone (even the user) can be considered a second source. In terms of debugging, the user can delve into parts of the picture that would otherwise be hidden, fix problems, and become a new source in turn. So, in this fashion, open source is partially equivalent to a software second source.


Prospects for the future

As the market learns, it may eventually make business sense for even the biggest vendors to have some form of second sourcing, but I see this as unlikely any time soon. (However, if there is money in it, businesses can turn on a dime.)

For some users, Linux is a suitable second source. If you're not constrained to Windows-only applications, then you can swap operating systems, and go on with life. The rest of us will bear the costs, and as a result the market as a whole will seek out second sources in the long run.

So, you can see... I agree with Doc, mono is bad, poly is good.

Thanks for your patience, and attention

--Mike--

Wednesday, August 17, 2005

Secure Computing done right

I worry about security; it's just good sense to be aware of the true nature of the systems we all rely on so heavily to get our jobs done, to play, etc. The current crop of operating systems all share the same flaws:
  • Poor security model
  • Poor choice of implementation languages
  • Bugs
Windows, OS-X, and Linux all utilize the same security model as Multics, which dates back to the 1970s. It's fine for a cooperative environment, but falls flat in today's environment. The assumptions are that the user knows what they are doing, and that they can trust all the code they need to run... both of which are obviously false in the present.

The battle between C and Pascal was won by the C/C++ camp, and as a result we're trying to build operating systems in languages which can't pass strings around, let alone complex objects, as native types. This leads to the buffer overflows, and a whole host of issues that would not be present in a Pascal-based environment. (Note that I'm a Pascal fan, so I might be biased and/or wrong here!)

Bugs are a given in software development. The more use and testing of software, the more likely a bug is to be found. It makes sense to re-use working code (which contains the distilled experience of the authors and users), instead of re-inventing it. The price is that a flaw anywhere is a flaw everywhere.

These three factors combine to present a "perfect storm" in which insecurity is the inevitable result, as follows:
  • some error somewhere in hand-written code dealing with a pointer and a buffer fails to check the buffer's size
  • data copied from an input to the program runs past the buffer into the variables after it (or into the stack space)
  • the program flow can now be captured, through careful study of the result of the bug and the engineering of an exploit
  • the exploit can then inject code to run as the user (or system process) on a targeted machine
  • because the code IS the user as far as the OS is concerned, the bug now becomes a Trojan horse, and can open the gates to any desired action
It's the combination of all of these factors which results in the current vulnerable state of computing as we know it. I started talking about this with Windows, because I'm most familiar with it, but make no mistake: Linux and OS-X machines also contain bugs, and use the same security model, so in the long run they are just as vulnerable.
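To make that chain concrete, here is a deliberately simplified (and harmless) demonstration of the mechanism, not a recipe: an unchecked copy overruns a buffer and overwrites an adjacent function pointer, handing control of the program flow to whoever crafted the input.

```c
/* Simplified demonstration of "overflow captures the program flow".       */
#include <stdio.h>
#include <string.h>

static void expected(void)     { puts("normal processing"); }
static void not_expected(void) { puts("attacker-chosen code runs instead"); }

struct request {
    char   name[8];          /* fixed-size input buffer                   */
    void (*handler)(void);   /* usually sits right after it in memory     */
};

int main(void)
{
    struct request r;
    r.handler = expected;

    /* Crafted "input": 8 filler bytes, then bytes that happen to be the
     * address of not_expected().  (Assumes the common layout with no
     * padding between the two fields.)                                    */
    unsigned char attack[8 + sizeof r.handler];
    memset(attack, 'A', 8);
    void (*evil)(void) = not_expected;
    memcpy(attack + 8, &evil, sizeof evil);

    /* The bug: the copy never checks the size of name[].                  */
    for (size_t i = 0; i < sizeof attack; i++)
        r.name[i] = attack[i];       /* runs past name[] into handler      */

    r.handler();                     /* program flow has been captured     */
    return 0;
}
```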

The situation will remain bleak until this trifecta of doom is defeated by concerted action on everyone's part:
  • Add Strings and other complex types NATIVELY to C/C++
  • OR switch to Pascal ;-) {no I don't expect this to actually happen -- Mike}
  • Make bounds checking the default
  • OR find a code model that makes bounds overflows IMPOSSIBLE (Compile time type checking?)
  • Ditch the old ACL model in favor of something modern such as Capability based security
Note that this does require work. I believe it's a fairly realistic approach, but it requires leadership from a few places. The GNU compiler crew should work with everyone to make the required changes to C/C++. If Microsoft and the other compiler vendors went along, the new builds would have fewer holes to exploit, which helps us all.
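As a sketch of the first two bullets above (only a sketch; the names are made up, and real libraries already exist in this space), a length-carrying string type with checked copies might look like this:

```c
/* Sketch of a bounds-checked, length-carrying string type for C.          */
#include <stdio.h>
#include <string.h>

typedef struct {
    char   buf[64];
    size_t len;              /* the length travels with the data           */
} bstring;

/* The copy refuses to run past the destination, and reports truncation.   */
static int bstr_copy(bstring *dst, const char *src)
{
    size_t n = strlen(src);
    if (n >= sizeof dst->buf) {
        dst->buf[0] = '\0';
        dst->len = 0;
        return -1;           /* an error code instead of an overflow       */
    }
    memcpy(dst->buf, src, n + 1);
    dst->len = n;
    return 0;
}

int main(void)
{
    bstring name;
    if (bstr_copy(&name, "a perfectly reasonable input") == 0)
        printf("stored %zu bytes\n", name.len);

    char huge[200];
    memset(huge, 'X', sizeof huge - 1);
    huge[sizeof huge - 1] = '\0';
    if (bstr_copy(&name, huge) != 0)
        puts("oversized input rejected instead of overflowing");
    return 0;
}
```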

The other big hurdle (which is NOT optional) is to move from ACL (Access Control List) based security to capabilities. This is going to force a re-evaluation of a lot of assumptions built into everything, and it won't be trivial. It'll force the drivers out of the kernel and into user space, which will be a performance hit, but a necessary one. It'll mean a lot of work for Linus and crew (as well as for the folks in the closed source world).

Education as to the details of capabilities, and their adaptation into scripts and service code, will take time, but the benefits will be enormous. Once all of this work is done, we'll have nothing less than honest-to-goodness secure computers!

  • bugs still exist in programs, but are now limited to implementation errors, and variables are no longer subject to random scope violations, resulting in much more deterministic behavior (fewer crashes)
  • strings and objects can be passed via a native set of operations, reducing the impedance mismatch and the chance of error in handling such items (fewer crashes)
  • in the event that some code does turn evil, the extent of the damage is strictly limited to its present runtime capabilities list.
I believe this is the direction we need to take to make things better for us all.

Thanks for your time and attention.
Comments and questions welcome.

--Mike--

Monday, August 15, 2005

Secure computing

Computing in the 21st Century requires participating in an arms race. The virus scanners, automatic updates, spyware scanners, etc. are defensive weapons.

The opposing side has an array of toolkits for writing worms and trojans, a growing network of financial and logistical support from all manner of sources, not to mention the ill directed energy of millions of adolescent males out to prove themselves.

We've become reliant on the patchwork of fixes that has evolved over time. While encouraging some diversity is a wise and practical step to help bolster our defenses, I believe there is an underlying issue that needs to be addressed.

Our goal should be secure computing. A secure computer is one which allows the user to trust that the computer operates as intended, and is free from any undesired outside influence. -- Note that this is contrary to the definition used by certain OS and media providers; their secure computer is one which is free from any undesired USER influence.

Windows, Mac OS, and Linux are all built on the same model. A trusted kernel supports a set of applications, using a series of access control lists (ACLs). There is a basic problem of granularity when using ACLs: it all comes down to granting permission per user account. If an application is run under a given account, it has full run to do anything that the specified user is allowed to do. There is no way to limit a program to a specific set of actions, for example. This problem hasn't been solved in 30 years, nor is a fix for ACL-based systems likely in the foreseeable future.


The alternative, well researched but not on store shelves, is capability-based operating systems. In such a system, a program is granted tokens to perform certain tasks; these tokens are the capabilities. An application is thus limited to its available capabilities, and no others. This means that services can be given tokens to access certain files and perform certain actions on them, but nothing else. Even if an adversary subverts the code running a service, the capabilities limit the scope of action the adversary can take. The capability model, if properly implemented (and the devil is always in the details), can be mathematically proven to be secure (which can't be done with ACLs).
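A toy illustration of that confinement (invented names, nothing like a real kernel interface): the service starts with tokens for exactly two files, and code injected by an attacker can't reach anything else, because no token for it was ever granted.

```c
/* Sketch: a service limited to the capability tokens it was started with. */
#include <stdio.h>
#include <string.h>

enum { CAP_READ = 1, CAP_APPEND = 2 };

typedef struct {
    const char *path;
    unsigned    rights;
} token;

/* Everything this service was granted at startup; there is no call that
 * lets the service add to this list from the inside.                      */
static const token held[] = {
    { "/etc/myservice.conf", CAP_READ   },
    { "/var/log/myservice",  CAP_APPEND },
};

static int allowed(const char *path, unsigned right)
{
    for (size_t i = 0; i < sizeof held / sizeof held[0]; i++)
        if (strcmp(held[i].path, path) == 0 && (held[i].rights & right))
            return 1;
    return 0;
}

int main(void)
{
    /* Legitimate work: permitted.                                          */
    printf("read config file:  %s\n",
           allowed("/etc/myservice.conf", CAP_READ) ? "ok" : "refused");

    /* What injected code might attempt after a compromise: refused.        */
    printf("read /etc/shadow:  %s\n",
           allowed("/etc/shadow", CAP_READ) ? "ok" : "refused");
    return 0;
}
```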

If we can find and bring to market an open source, capability-based OS, we gain a very large defensive weapon, though there will still be other attack vectors. I had thought that capabilities were coming soon via GNU Hurd, but a closer inspection of the online documentation seems to indicate they are going with user IDs... oh well.

Well... the solution is out there... now we just have to find it; time is running out.

Or perhaps I'm just paranoid?

--Mike--

Doomsday planning for a small shop

Doc Searls correctly points out that the Windows monoculture of our computing systems is a weakness. I'm going to be taking his suggestion, and keeping it in play while planning IT strategy from now on.

For the present, however, I'm still going to seek low-cost ways to help survive the doomsday scenario I laid out in my previous post. It might just be paranoia, but I like to have backup plans, just in case.

Data Lifeboats

In the past few years, the size of backups has increased at a faster pace than tape drive capacity. Backup to disk has become a relatively inexpensive and fast way to do things. It occurs to me that it would be quite valuable to be able to read one of these backups while totally disconnected from the network (and any source of threat). A machine so configured could be considered a data lifeboat. Just as many small vessels save human lives in the event of a sinking ship, the data lifeboat would be adequate to allow a user to carry on the most essential tasks independent of all of your IT infrastructure. (Note the subtle assumption of multiple backups. If you don't have at least 2 complete backups on a shelf at any given time, you're rolling the dice.)

It should be fairly easy for even the smallest business to have at least one machine, with a complete load of the applications, and all the hardware necessary to directly read the backup drive, ready and tested, and most importantly, unplugged. We all tend to have a steady stream of older computers, and this is a perfect use for one or more of them. If the shit hits the fan, nobody is going to care much about the speed of the machine, just as long as they have something that works. This simple (simple?) measure could save a small business.

Scaling up

You might even want to consider having a set of workstations and a server, all ready to go if things get really bad. Be sure to test the heck out of it, using some spare old network hubs.

In the event of an emergency, you then pull everything infected off the network. Then disconnect from the internet and any VPN connections. Get your backup server off the shelf, and get the restore going; meanwhile, replace all of the boxes with their older, slower, but clean and safe counterparts. By the time you've got the hardware hooked up, you should be able to let your co-workers have their systems back. It'll be slower, but they'll definitely understand. You can then start searching for a fix, reformatting, or imaging, or whatever is required, with far less stress from your co-workers and management.

That's a cheap ($) way to deal with doomsday... ugly, but cheap.

I look forward to the discussion.

--Mike--

Sunday, August 14, 2005

Windows Apocalypse Now - Version 0.01 (Verbose!)

A storm is brewing, and it's not going to be pretty.

Windows isn't secure. We all know it. We've gotten used to hiding our Windows boxes behind firewalls and running Windows Update on a regular basis. We've all installed and updated our virus scanners on a regular basis as well. We've all grudgingly come to accept all of this as part of a normal IT environment.

My day job entails adding value to the hardware and software we've got installed on our corporate network. It can be broken down into a few components:
  • Absorbing uncertainty (proactively and reactively)
  • Keeping data secure
  • Keeping data available
There are a lot of things that can go wrong with complex systems such as computers. IT professionals absorb the uncertainties, and hide the complexity from the users. The basic idea is to make it as much like an appliance as humanly possible. We do the tricky stuff so they don't have to.

History:

The value mix has traditionally been one of keeping the systems running, while trying to balance cost against performance. Data security has slowly grown in scope, but is mostly focused on limiting access to the correct mix of users.

Once upon a time, getting a computer virus was something handled on a case by case basis. It could usually be tracked back to the offending floppy diskette, or email. The damage was usually measured in the amount of time spent doing the cleanup. Virus issues were strictly limited to workstations, as you would never run an application on a server.

Server uptimes were measured in hundreds of days. One old 486 box (NT 3.51) running as a backup domain controller ran for more than 400 days before I had to swap out the UPS powering it. Servers did one job, and did it well.

Then the viruses started becoming more frequent. Every workstation had a virus scanner on it to scan files before opening them. Eventually, the scanners ran full time, though at a considerable loss in performance. This eliminated the need for the end user to worry about viruses.

As the tempo of virus development began to increase, the need for update mechanisms became apparent. We began telling our users that if the virus scanner was more than 6 months out of date, it was useless. Over time this window became smaller, and we got tired of manual updates, so the vendors solved it by offering automatic updates.

Buffer overflow exploits started showing up on the internet in the form of worms. We treated this threat in a similar fashion to the virus threat. Over time we went from case by case, to manual, to updated, to automatic updates.

Somewhere along the line, it became obvious that Windows wasn't tough enough to deploy directly on the internet. We started hiding our Windows boxes behind firewalls. This introduced us to the joy of having to support VPNs, but it seemed to be good enough. The firewall systems themselves also have automatic updates.

When the spam got too bad, we started using keyword filters, then Bayesian filters, then we got spam filters as an add-on to the virus scanner.


So, here it is in mid-2005: we've got a continuous stream of system patches and a continuous stream of virus definitions, most of our spam is gone, and we're behind a continuously updated firewall.

This interlocking system of patches does a good job of hiding the complexity and plugging the holes so that the users can go about their business. It's not perfect, but hey, that's why we get paid the big bucks, right? We fix the little issues that pop up, then go back to our other work.

This system addresses the growing volume of threats in a fairly straightforward and efficient manner. It's not perfect, but it's amazing that it works as well as it does.

However, I'm not happy. In fact, I'm starting to get very worried. The chinks in the armor are showing up. The end users are starting to get distracted from work again, in a number of ways:
  • spam
  • phishing
  • viruses
These threats have supposedly been dealt with already, but my end users are still seeing them... and that worries me.

There is a problem here, widely known as the day zero problem. For practical purposes, there is an essentially infinite number of vulnerabilities in the computer systems we use. A growing number of tools are available to automate the process of mining for a new flaw to exploit, and they usually include tools for creating a program that takes advantage of the flaw. This new program is called an exploit.

To utilize an exploit, it is then necessary to find target systems which are vulnerable. This requires some form of scanning of the internet address space, and may also include sending emails, or querying DNS servers, web servers, Google, or other search engines. This activity is the first point at which it is possible to react to a threat, if it is detected.

Once targets have been found, the flaw is exploited, and the target system compromised. This is the second point at which the threat may be detected. Worms may then use the compromised system to further search for and compromise other systems. This is usually done as part of the exploit, and no human involvement is necessary once the exploit has been launched.

There are many complex factors that determine the extent and speed of which an exploit can then propagate across the internet. The discussion of these factors is outside my area of expertise. I am certain, however, that there are two opposing groups working on trying to shift these numbers to their advantage. I'll simplify it down to good, and evil.

It seems to me, based on anecdotal evidence, that the good guys are smart, but they have to be careful. They have to worry about niceties such as preventing false positives, testing, quality control, etc. Testing is good, and necessary, but it's also a delay, and it has a definite lower limit.

The lower limit seems to be somewhere between 24 hours and 2 weeks, depending on who you ask and how you measure. Meanwhile, the bad guys can work on things at their leisure, and deploy at will. They are well aware of most of the efforts expended to keep our systems secure, and have worked to build ways around them. A well-funded evil person can test his exploit against the latest commercially available detection methods, without fear of discovery.

As much as I want to avoid the analogy, it's a war. Both sides have to test their weapons, but the good guys have to do their testing while the clock is running. Unfortunately, they don't have 2 weeks like they used to. The window for testing is getting smaller, and will at some point even become negative, due to the other delays inherent in the system.

The recent signs from my users tell me that time is running out. The virus signatures aren't keeping pace. We've not solved the issue, and it's going to come back to us, full force, very soon.

We live in interesting times.

--Mike--

Tuesday, August 02, 2005

Things that need to be free

Jimbo Wales asks for input as he tries to put together a list of things that need to be free (open) in the next century. Here are some of my nominations:

  • Software - It's going to take a long time, but most software will be open source for practical reasons, including auditing, flexibility, and as an escrow against a proprietary vendor's demise. It will eventually be unacceptable to NOT get the source to programs. People will still pay for it, and Microsoft may even figure out how to dominate it, however.
  • Protocols and Standards - TCP/IP, Ethernet, 802.11, and all of the other protocols will continue to be free and open, with competitive proposals working their way into public acceptance. As the manufacturing revolution forces hardware into the open (see below), the embedded standards such as DVD, CD-R, and others will be forced out into the open as well. When anyone can make an optical drive from scratch in their basement, nobody will build in region codes.
  • Hardware Design / Firmware - Eventually everything from CPU processor designs all the way to the latest iPod-3D design will work its way into the open. Like software, some will be open source, and some will be the result of a commercial firm and a more traditional engineering process. Everything will have a URL printed on it somewhere, where you can see all of the specs, and find pointers to the user community dedicated to it.
  • Manufacturing - Advances in manufacturing technology at the small end will allow you to turn out custom chips, and build your own gadgets in your garage, albeit for more cost than items that are mass produced.
  • Patents - The need for the public to be able to review patent applications and derive advancement of the arts will eventually force the entire process out into the light of day. The USPTO will eventually have ALL of the patent application process online. The public will be an essential part of the review process for prior art, which is necessary to stem the tide of bad patents and fleeing patent clerks.
  • Law - All of the laws will be available online, along with the discussions that led to their adoption, similar to a Wiki. It will be quite practical to find out exactly what tradeoffs and considerations were made when a law was written, making it much easier to follow the spirit of a law, and stop worrying so much about the letter of it. Court decisions and the output of trial processes will also go into a public database. Companies which make their profits from locking up the law will be forced to reconsider their business model.
  • Copyright - If you believe the current "everything is a remix" meme, there is nothing truly original. It's the gathering together of ideas and expressing them as a new synthesis that is valuable, and that will continue to be rewarded and encouraged by Copyright. The extension of copyright as a tool for locking down culture will not last.
  • Identity - All of our medical, financial, legal, and other information will be available to our guardians and us in an open format. We'll get to decide who gets to access which parts of it, and get an audit trail of everything.
  • Culture - A great deal of our culture is tied up in the stories and shared beliefs that make a community. We will not tolerate commodification and lockdown of our culture by large corporate entities.
  • Computing - Regardless of the current apparent progress of "Trusted" Computing, we will have general purpose computing for the long term.
  • Data - Our stuff, including photographs, movies, writing, poems, music, and all of the other creative and logistical output of our lives will be available to use in an open, known format. The vision of a world locked down by DRM will not come to pass.
Well, these things seem to me general enough to make a fair start at Jimbo's list.