Saturday, June 23, 2018

How we lost the upcoming cyberwar, in the 1970s.

There is a lot of talk about an upcoming (ongoing?) “cyberwar” with Russia and China. I believe strongly that this "war" was lost in the 1970s, in a manner similar to the way we ended up with the Y2K problem. The scope of programs slowly shifted, and base design decisions haven’t been revisited adequately in the intervening decades.
Ambient Authority is a design decision which only appears once you have multiple users sharing a computer. In early computing environments there were military, commercial, or academic constraints on the users, so it was sufficient to limit programs to the authority of the user who ran them. No change was needed beyond the provision of ACL (Access Control List) security, the kind where permissions are attached to files and folders, specifying who is allowed to access a given file.
ACLs were deemed adequate, and as a result, everyone just kept using Ambient Authority without much thought. Fast forward 40 years, and we find ourselves in a world of persistent networks, mobile code, no system administrators, and multiple layers of firmware and OS from various hardware and software vendors.
In such a system, any code runs with the full authority of the user who started the task, and the users have no effective means of limiting the side effects of running a given program. This in turn means we have to try to guess the intent of code (which is equivalent to solving the halting problem, and is thus impossible). The band-aid is to then try to enumerate all the bad code in the world (virus scanners), and to enumerate all the code bugs in all our programs (security updates), and to eliminate the trust of users (DRM, forced updates, "safety" filters in our browsers). None of these band-aids will work against a determined individual, let alone a nation-state.
Running tasks with the least possible privilege, the "Principle of Least Authority" (POLA), allows a user in such a system to decide ahead of time exactly which files a program is allowed to read, write, and so on. Because we're all used to dialog boxes and drag-and-drop GUI elements, this doesn't even require any special training of users to accomplish.
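To give a flavor of what least authority looks like on a conventional Unix today, here is a minimal sketch using OpenBSD's pledge() and unveil() calls, where a program voluntarily throws away everything except read access to one file. The file path is made up for the example, and this is only an approximation of POLA, since here the program (not the user) is doing the restricting:

#include <err.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Only this one path, read-only, remains visible to the process. */
    if (unveil("/home/user/report.txt", "r") == -1)
        err(1, "unveil");
    if (unveil(NULL, NULL) == -1)       /* lock the filesystem view */
        err(1, "unveil");

    /* From here on, only basic stdio and read-only file access are allowed. */
    if (pledge("stdio rpath", NULL) == -1)
        err(1, "pledge");

    FILE *f = fopen("/home/user/report.txt", "r");   /* still works */
    /* fopen("/etc/passwd", "r") would now fail: that authority is gone. */
    if (f != NULL)
        fclose(f);
    return 0;
}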
Of course, rebuilding our infrastructure to fix a design flaw of the size and scope of using 2 digit years (the Y2K problem we once faced), isn't going to be easy... especially when there's no deadline to make the need for action obvious. It's just going to remain an insidious vulnerability instead for decades to come.
If you think EAL certifications address this, they don't. 8(

Data storage trivia

If printed with dense codes, you can reliably fit about 1/2 megabyte on one side of a sheet of letter-sized paper. Printing both sides, that's about 1 megabyte per sheet.
A case of paper is 500 sheets/ream * 10 reams = 5,000 sheets, or about 5,000 megabytes.

One case of paper is 5 gigabytes.
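For anyone who wants to double-check the arithmetic, here's the back-of-the-envelope version in C; the 0.5 MB per side density is the assumption everything else hangs on:

#include <stdio.h>

int main(void)
{
    double mb_per_side  = 0.5;                  /* assumed density of the printed codes */
    double mb_per_sheet = 2.0 * mb_per_side;    /* print both sides */
    int sheets_per_ream = 500;
    int reams_per_case  = 10;

    double mb_per_case = mb_per_sheet * sheets_per_ream * reams_per_case;
    printf("One case of paper: %.0f MB (about %.0f GB)\n",
           mb_per_case, mb_per_case / 1000.0);
    return 0;
}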

Sunday, June 17, 2018

21st century deep perceptron nets

Long ago, the Perceptron showed great promise as a way to have machines learn. There were limitations, but unfortunately these were extrapolated prematurely to conclude it was a dead end, and the technology languished for years before being reborn as neural networks, and eventually deep neural networks, or "DeepNet".

The cool thing about perceptrons was that they were analog, and each unit cell could adjust its own parameters in a simple, analog, and provably effective manner.
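That "simple, provably effective manner" is the classic perceptron update rule: nudge each weight in proportion to the error times the input. Here is a minimal digital sketch of a single unit cell in C, learning an AND gate; the learning rate and epoch count are arbitrary choices for the example:

#include <stdio.h>

#define N_INPUTS  2
#define N_SAMPLES 4

static double w[N_INPUTS], bias;

/* Weighted sum followed by a hard threshold. */
static int predict(const double x[N_INPUTS])
{
    double sum = bias;
    for (int i = 0; i < N_INPUTS; i++)
        sum += w[i] * x[i];
    return sum >= 0.0 ? 1 : 0;
}

/* The perceptron rule: nudge each weight by (rate * error * input). */
static void train(const double x[N_INPUTS], int target, double rate)
{
    int error = target - predict(x);           /* -1, 0, or +1 */
    for (int i = 0; i < N_INPUTS; i++)
        w[i] += rate * error * x[i];
    bias += rate * error;
}

int main(void)
{
    /* Teach the single cell the logical AND of its two inputs. */
    double x[N_SAMPLES][N_INPUTS] = { {0,0}, {0,1}, {1,0}, {1,1} };
    int    t[N_SAMPLES]           = {   0,     0,     0,     1   };

    for (int epoch = 0; epoch < 20; epoch++)
        for (int s = 0; s < N_SAMPLES; s++)
            train(x[s], t[s], 0.1);

    for (int s = 0; s < N_SAMPLES; s++)
        printf("%g AND %g -> %d\n", x[s][0], x[s][1], predict(x[s]));
    return 0;
}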

IBM has just figured out how to mix Non Volatile Memory (NVM) and CMOS technology to build analog perceptrons on a chip, thus making it possible to have analog deep nets, which saves incredible amounts of compute time by doing everything in the analog domain.

I expect this to make a big difference in the long run.

Saturday, June 09, 2018

We need to have a deep discussion about computer security.

I think that we need to have a very deep discussion about security. There are many folks who figure that things are just fine, except that the users, OS vendors, administrators, or some other "blame sink" is the problem.  Things are not fine; they're not even just slightly broken, they are a ticking time bomb.  We've built a civilisation on top of layer upon layer of code that is full of holes and should never be trusted.

I have a radical idea to sell... no product, no service, no profit for me.  Please consider this idea, and don't reject it out of hand.  If you find it appealing, just help spread it, refine it, and you can leave me out of it for all I care.

Computer Security can be fixed.  The fix is expensive, because the flaw is in the foundational assumptions of what makes a good operating system, which means we have to rebuild from ground zero.   Operating Systems are supposed to fairly share the resources available in a computing device according to the policies set in place by the designers, administrators, and users.   To do this, some assumptions are made, one of which is that a program, once set in motion by a user, should have the full authority of the user at its disposal at all times.  This ambient authority is baked into everything out there, the trillions of lines of code running almost every device on the internet, and off.

Programs can be written in a different way, without the need for ambient authority. This is called capability-based security, and it also goes by some other names, including the principle of least privilege.  There are historic examples of capability-based operating systems, like KeyKOS, which prove that it can at least be done.  There is a project in Germany, the Genode project, which appears to be close to usable for building capability-based systems, though I haven't had my hands on it yet.... it's getting close.

Capability-based secure systems don't even have to work differently for most users.  Capability UX tools like a "powerbox" replace the dialog box and the subsequent opening of a file, giving the same results without the need for ambient authority.
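From the application programmer's point of view the change can be as small as calling the powerbox instead of popping a file-open dialog. Here's a rough sketch in C of what that might look like; powerbox_open() is a hypothetical call standing in for whatever API a real capability system would provide, and the stub below exists only so the sketch compiles:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical: ask the trusted shell (the "powerbox") to let the USER pick
 * a file; the shell opens it and hands back a descriptor.  The application
 * never sees a path and never calls open() on its own authority. */
static int powerbox_open(const char *prompt)
{
    (void)prompt;
    return open("/dev/null", O_RDONLY);    /* stand-in for the real powerbox */
}

int main(void)
{
    int fd = powerbox_open("Choose a document to import");
    if (fd < 0)
        return 1;                          /* user cancelled, nothing granted */

    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fd);
    return 0;
}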

Programs without ambient authority don't present a vector for the spread of malware. In fact, it might be possible to completely dispense with virus scanners, firewalls, and the whole mess of "security" software that we layer onto our PCs in an attempt to keep them safe.

So, there it is... a lot of text written freestyle into a dialog box on a web page.  I hope I did a good enough job to convince you to give the idea some consideration.  I'll do whatever it takes to help spread it.

--Mike--
mike warot - chezmike@gmail.com

The minimum viable demo for Capability Based Security

I have a vision of how computers can be made secure, but apparently in order to convince my fellow programmers that it can be done, it is necessary for me to build an "existence proof". Clearly I'm not up to building an Operating System, not even a custom Linux distro.... something far less ambitious is required, a minimum viable demo... I'll be pondering this for a while, any suggestions would be appreciated.

Tuesday, June 05, 2018

I have a dream, the Capability Based Security version

I have a dream, that one day, we shall have secure general purpose computing. We shall no longer need virus scanners, we shall no longer have multiple gigabyte “security updates”, and we shall no longer fear to click on a link, or try out a program… I have a dream.
General purpose computing needs to be secured; this will fix IoT as a secondary effect. There’s not enough money in small gadgets to do it the other way around. We fix IoT security the same way we fix workstation security, by deploying operating systems that don’t lubricate applications in a sea of authority, with all the explosive results.

There are operating systems that default to NO permissions whatsoever, and yet are capable of getting things done. Linux, Windows, and Mac OS all work by letting applications do whatever the user is allowed to do… which is pretty much anything and everything. iOS and Android attempt to sandbox off a bit of this, but still assume applications can open any file and do anything within the sandbox… and over time the sandbox has more holes poked in it to let more “features” happen, making it more porous.

Capability-based security isn’t a silver bullet, but it can fix general purpose computing, and solve IoT security as a side benefit.

Thursday, May 17, 2018

Ambient Authority - The Trojan Horse

“Secure Computing” - Isn’t. If it was, you’d be able to check your gmail and surf the web on a secure computer in a secure environment. No sane GS-15+ is going to let that happen any time soon, because they know better.
Ok, controversial statement up front, let me explain where I’m coming from, and defend that statement.
I’ve come to learn that there is a set of jargon used in “secure computing” that I really don’t know, and need to learn in order to effectively communicate with the crowd that uses it. There is also a different set of jargon used in describing capability based security that further makes it hard to convey what it is, and why, to people who really, really need it. So, forgive me if I oversimplify things a bit, and try to make this as plain-speak as possible.
There is a Trojan Horse hiding in every major operating system out there, Linux, Windows, Mac OS, and even (as far as I can tell from my civilian outsider view) the “secure” OSs. As far as I can tell the major difference when it comes to “secure” computing is to carefully check the code for errors, audit and log the shit out of everything that happens, be very very paranoid about what you let the user do.... and then hope the applications do what they say on the tin, every time.
The Administrators don’t trust the “dumb users” in civilian environments, and are rightly concerned with spies in the “secure” environments. Everyone is worried about the users, nobody worries about the code they run once it’s decided that it is safe to run. They trust the code, pretty much without thinking.
If a program wants to open a file, the programmer will write a line of code like this:
open("filename", O_RDONLY, 0)
If your operating system allows this, it isn’t secure. Why do you trust the program to randomly open files on the user’s behalf? Because it’s always been done that way, that’s why. It’s called “ambient authority” and it’s a trojan horse. Read up on it at Wikipedia here: https://en.wikipedia.org/wiki/Ambient_authority
Now, there is a safer way, one that doesn’t require more or different work on the user’s part.
Capability systems move the “dialog box” (file open) outside the application’s control, and hand the file handle (a “capability”) to the application, which can then use it as normal. This removes the need to trust applications with the ability to randomly do anything the user can do... this removes ambient authority.
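Mechanically there is nothing exotic about handing a file handle to a program; plain POSIX descriptor inheritance already does it. A minimal sketch of both halves, assuming a trusted launcher and my own (arbitrary) convention that the granted file arrives as descriptor 3; on a stock OS the child still has ambient authority, so this only illustrates the handoff itself:

/* launcher.c - the trusted side opens the file the user chose, then starts
 * the application with that descriptor in slot 3. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/home/user/chosen-by-user.txt", O_RDONLY);  /* example path */
    if (fd < 0)
        return 1;
    if (fd != 3)
        dup2(fd, 3);                          /* the "capability" slot */
    execl("./app", "app", (char *)NULL);      /* app inherits fd 3 (and stdio) */
    return 1;
}

/* app.c - the application never calls open(); it just uses what it was given. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;
    while ((n = read(3, buf, sizeof buf)) > 0)    /* fd 3 = the granted file */
        fwrite(buf, 1, (size_t)n, stdout);
    return 0;
}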
Changes for the user - The dialog boxes look a bit different, but everything pretty much works the same. No government employee is required to learn something new.
Changes for programmers - Things are a bit different: they request capabilities instead of popping a dialog box and then opening files. Not a huge shift for them.
Changes for the OS - Things have to be re-written, if you want deep security, using a proven microkernel, etc.... but even Windows could be made secure, IMHO. (Not EAL7, though, never.)
This subtle shift removes entire classes of attack from computing, and makes the world safer. We all need capability based computing. Thank you for your time and attention.

Tuesday, May 15, 2018

History - Lessons Unlearned? - Part one

Back in June 2015, The US Office of Personnel Management was hacked by a Chinese national, who managed to make off with pretty much all of the personal information of everyone who has a security clearance, including the big long nasty form where you tell them every little thing that makes you black-mail-able, as part of your security clearance application. Oh, and fingerprints, too. 

You can read all about it on Wikipedia - https://en.wikipedia.org/wiki/Office_of_Personnel_Management_data_breach

In the summer of 2016, a group called The Shadow Brokers published several leaks containing hacking tools from the National Security Agency (NSA).  These tools targeted Firewalls, Antivirus Software, and Microsoft Products.

You can read all about it on Wikipedia - https://en.wikipedia.org/wiki/The_Shadow_Brokers

I’m no expert, but my reading of history informs me that in the 1970s, in response to the need to have computers safely process data that was “secret” and “top secret” at the same time, Multilevel Security was invented, and through the decade, pretty much perfected. It strikes me as very odd that things aren’t more secure, as the technology exists, if you can afford it, to keep things secure.

Data diodes are devices which are designed to only allow data flow in one direction. They work, and have been around for decades.  It is physically impossible to get data to transfer the wrong way through these things, not because of clever programming, but because the inbound link only has a fiber optic input, and is incapable of transmitting data outbound.
You can read about data diodes on Wikipedia - https://en.wikipedia.org/wiki/Unidirectional_network
Once upon a time, I got to see how work happens in classified environments during an open house... people actually work in big vaults.  Computers containing the secrets being worked on aren’t connected to the internet. Colloquially, these machines are said to be air-gapped.  Now there are attacks which can leak data out of these networks, but the good ones require physically breaching the network, not remote hacking.

You can read about air-gap on Wikipedia -  https://en.wikipedia.org/wiki/Air_gap_(networking)

Put air-gaps and data diodes together, and you can build a system which can take data, even over the internet, and get it into an air-gapped network, and never let it back out.  Why was this not done? It boggles my mind.   I’m ok with our secrets being collected, and stored in a central location, with physically secure, redundant backups.

Now a pile of secrets which can’t be accessed from the outside is useless.... there needs to be a controlled means of egress, something that humans can understand, and thus manage intelligently, with little cognitive load.  I propose a simple way of doing this.  Build a system wherein a personnel record can be requested from the real world, and the request makes its way via a data diode into the secure environment.  The request is reviewed, approved, and filled by a human using a computer, and then the requested records are written to a single-use, 1.44 MB floppy diskette.  The operator then hands this diskette off to a different operator, who records the transaction, and sends the information off via the internet.  The used diskettes are then sent to a third person who stores them in a vault, should any back-checking of access, or auditing, be required, etc.

I’m out of time.... more later.
Mike Warot - May 15, 2018

Sunday, May 13, 2018

Crisis solved, thanks to Mixxx the Free DJ Mixing Software App

So, thanks to Mixxx, I can now record stereo into my laptop with no external mixers, XLR cables, etc.

Here is a sample.


I'm an unreasonable man, about to prove it again.

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. -- George Bernard Shaw

I am an unreasonable man... I have some ideas about how technology should work, and have a strong interest in keeping my ideals aligned with reality. There have been times in my life when I've been told something "just can't be done", either directly or by circumstances.   I tend towards proving otherwise; sometimes I fail, sometimes I don't.

Back in the days of MS-DOS I wrote a text search algorithm that was as fast as you could read from diskette... 10x faster than the default one.

I also was told you couldn't dual boot Unix (way before Linux) and MS-DOS... after some hacks to the boot sector, it worked.

I did multitasking, and text pipes internal to Turbo Pascal programs in DOS, because I needed it.

Back in the days of OS/2, I wrote an application in assembler, because I was told it couldn't be done.   It was a native code Forth interpreter.  Forth/2

Lately I haven't done as much... but circumstances have arisen which I find most intolerable, and must be fixed, by me.

Situation: One Windows 10 laptop, with 2 USB ports, and two brand new Blue Ice microphones... each of which does work, by itself.    Audacity can't record from both, but rather only allows one input.

I've just wasted most of my free time today trying all the usual options, none of which work.

I'm going to have to either write a multi-track recording program, or build some virtual mixer device, to solve this problem.

I'll write more once I've gotten a path figured out.  Thanks for your attention.

Sunday, April 08, 2018

Elon Musk - You're worried about the wrong thing.

I'm watching the movie about the dangers of AI that Elon Musk is sponsoring the streaming of this weekend.... but as I watch it I can't help but keep thinking that there's a bigger issue that everyone has ignored....

There's a lot to be concerned about here, but the thing that everyone seems to miss, over and over, is the fact that we can't secure our computers against humans, let alone an AI with infinite patience. A few years ago, all of the 128 page security clearance applications for the entire United States were digitized, and online.... who was stupid enough to let this happen? Everyone was surprised and shocked when it happened, but I bet most of you don't even remember it any more.

All this data is eventually accessible via the internet, and there's shit for security protecting it. One lucky rogue human is all it takes to take the whole thing down. I'd be deeply surprised if someone, somewhere, isn't training an AI to take over compute resources.... and once that gets sufficiently good, it's game over, because nothing is secure.

It's possible to radically increase security, and do it in a user-friendly manner... but this requires re-writing everything based on a new security model (the principle of least privilege), so it's not a "magic bullet", but rather an expensive one.

I hope we decide to spend the resources and fix security... but it's a faint hope.