Wednesday, June 27, 2007

An accidental path to VRM.

Doc Searls is trying his best to come up with a coherent vision for VRM. He seems to be taking the straight-on approach that works pretty well for lots of things of limited scope, such as email, sending a file, TCP, etc.

VRM is such a nebulous concept right now that its scope is essentially infinite. This results in a "boil the ocean" view of things... which just doesn't work in the real world. I'm working on a completely different problem, but I think it might accidentally solve Doc's problem along the way.

Here's the general flow that VRM seems to take:
  • Somewhere, on a web page, X is described.
  • I tell my computer that I'd like to purchase X.
  • The computer generates a file that specifies exactly my interest in X and puts it where vendors can see it.
  • Vendors read the file, and the machines begin to negotiate.
  • I pick the best choice and approve the transaction.
  • I get X.
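The flow above could be sketched as a small machine-readable "intent" file that my computer publishes for vendors to find. This is only an illustration under my own assumptions; every field name and URL here is hypothetical, and nothing like this is standardized.

```python
import json

# A hypothetical "personal RFP" a buyer's machine might publish.
# All fields and URLs below are made up for illustration.
intent = {
    "item": "http://example.com/products/x",          # the page where X is described
    "quantity": 1,
    "max_price": 100.00,
    "currency": "USD",
    "respond_to": "http://example.com/mike/offers",   # where vendors post their offers
    "expires": "2007-07-04T00:00:00Z",                # intent doesn't live forever
}

# Vendors' machines would fetch and parse this, then start negotiating.
print(json.dumps(intent, indent=2))
```

The point isn't the particular fields; it's that the buyer's interest lives in a file the buyer controls, somewhere vendors can read it.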

My accidental path to VRM...

I'm on an ideological quest to get what I call "markup" included in the web. My religious difference is that real markup on... um... paper, for example, doesn't have to be done all at once. It can be layered... post facto. HTML just doesn't do it. Anyone who says otherwise is itching for a fight... ;-)

One of the ideas you need in order to make real markup work is the ability to add content in a layer on top of an existing document. The word transclusion gets tossed in here... but it's got a lot of baggage associated with it. The basic requirement is to be able to say:

this document is to be a layer on top of original_document_url

If you can do that, you can then do a lot of very powerful and new things with the web. The glue to hold it all together is a new set of places to store all of this new markup. It's fairly obvious to me that it wouldn't go anywhere on the original server, for a number of reasons. It would probably get stored in a local repository, and then shared out to a community server somewhere so others could discover and read it. This need for a new repository of data is another common point of interest with the VRM problem. You're looking to add new data to something in a silo... you have to have a different silo to put it in, though.
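A minimal sketch of what such a "layer" document might look like, and how a repository could gather all the layers targeting one page. The structure and field names are my own assumptions for illustration, not any existing format.

```python
# A hypothetical "layer" document: markup stored in a separate repository,
# pointing back at the original page it annotates. Nothing here is standard.
layer = {
    "layer_on": "http://example.com/original_document",  # the original_document_url
    "author": "mike",
    "annotations": [
        {
            # anchor the markup by quoting the text it applies to, post facto
            "anchor": "boil the ocean",
            "note": "This phrase deserves a citation.",
        }
    ],
}

def annotations_for(url, layers):
    """Gather all layered markup in a repository that targets a given document."""
    return [a for doc in layers if doc["layer_on"] == url
              for a in doc["annotations"]]

print(annotations_for("http://example.com/original_document", [layer]))
```

The key property is that the original server never has to know: the layer lives in its own silo and merely points back.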

Making this new data discoverable and useful is a matter of aggregating, sorting, etc... it's a new Google-class opportunity waiting to be seized.

The final link is that I'm interested in adding more than just text on top of pages; I want to be able to include metadata... and the VRM data would be a small, easy-to-fit subset of that.

I hope this is coherent enough to make some sense to the rest of you. I welcome all discussion.

--Mike--

Sunday, June 24, 2007

Comment over at John Robb's Weblog

Here's what I posted over at John Robb's very insightful blog.
John,
It's my long-held belief that all operating systems as we now know them are fundamentally insecure. They all rely on trusting each piece of software to be free from flaws.

There are alternative security models which greatly reduce the amount of code to be trusted (down to one module in the kernel of the OS). NOTHING else in the OS needs to be trusted.

This model of security is called "capability-based." When a program is run, it's given only the minimum access required to do the job, and nothing more.

For example, if you fire up a word processor on Windows, Mac, Linux, DOS, etc., it can open ANY file you have access to, and do anything to it. You have to trust that it only does what you want. The problem is that you can't trust it. 99.9999999% of the time it works in the fashion you expect... but it's that one-in-a-billion flaw that the virus/worm/spam/enemy can use to subvert the whole system.
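The difference can be sketched in a toy example: instead of the word processor being free to open any path (ambient authority), the "shell" hands it one already-opened document, which is the only thing it can touch. This is a sketch of the principle only, not a real capability OS.

```python
import io

def word_processor(doc):
    """Receives a capability (an open file-like object), not the filesystem.

    It can read and edit this one document, but it has no way to name,
    open, or even discover any other file on the system.
    """
    text = doc.read()
    doc.seek(0)
    doc.write(text.upper())

# The trusted "shell" opens the single file the user chose, then hands
# the program that capability and nothing else.
doc = io.StringIO("hello")
word_processor(doc)
print(doc.getvalue())  # -> HELLO
```

Under this model, a flaw in the word processor can ruin the open document, but it can't be leveraged to subvert the rest of the system.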

It's going to take a long time to overcome the inertia of all the installed systems, and of the programmers who write them. Perhaps 20 years from now we'll finally be able to start shutting down the virus scanners and firewalls.

Until that time, all of our computers will be available to any party with the resources to find and exploit any of the flaws in the code we all run.

It's a matter of national security to fix this, but people are wrongly convinced that our virus scanners/spam filters/firewalls have solved the problem.

I really enjoy your blog, and value the insight you share. It's good to know you're on our side.

--Mike--

Tuesday, June 19, 2007

Imagining the future

In the future, you'll post a page to a server somewhere. The site, folder, or specific page will then contain a few pieces of metadata that make today's comment mechanisms obsolete:
  • GUID (Globally Unique ID) string to allow reference to a document
  • Digital Signature for the document, and its authors
  • List of places for the reader to find updates, comments, etc.
  • List of places where the author publishes his comments, ratings, etc.
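The metadata above could be sketched as a small record attached to a page. Every field name here is hypothetical, a guess at shape rather than a proposed standard.

```python
import uuid

# A hedged sketch of the per-page metadata listed above.
# All field names and URLs are made up for illustration.
page_meta = {
    "guid": str(uuid.uuid4()),   # globally unique ID, so the document can be referenced
    "signature": "<digital signature of the document and its authors goes here>",
    "update_feeds": ["http://comments.example.com/thispage"],   # where readers find updates/comments
    "author_feeds": ["http://example.com/mike/comments"],       # where the author publishes comments, ratings
}

print(sorted(page_meta))
```

A browser that understands the record could chase those feeds instead of relying on a comment box bolted onto the page.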
The process of getting a web page won't be as simple. The resulting view for the reader will depend on the original source content, plus the additional data that their browser may have gathered depending on their preferences, web of trust, etc.

If this page, for example, were created in my desired future, there would be a link to a public comments server somewhere, to help with compatibility with current web browsers.

A Firefox plugin would search the document and its locale (folder, server, etc.) for a list of places to find and post comments, markup, etc. It might also search some private lists for comments hosted by communities I'm involved in.

The browser could then check the identities of the comment authors, and highlight or hide their comments based on the various pools of reputation to which the reader has access.
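The highlight-or-hide step might look something like this toy filter: comments carry an author identity, the reader carries trust scores, and the browser classifies each comment. The thresholds and field names are invented for illustration.

```python
# A toy sketch of reputation-based comment filtering. The scores, thresholds,
# and field names are all assumptions, not any real plugin's behavior.
def classify(comment, reputation, hide_below=0.2, highlight_above=0.8):
    """Decide how to display a comment given the reader's trust scores."""
    score = reputation.get(comment["author"], 0.5)  # unknown authors start neutral
    if score < hide_below:
        return "hide"
    if score > highlight_above:
        return "highlight"
    return "show"

reputation = {"alice": 0.9, "spammer": 0.1}          # the reader's pool of reputation
comments = [{"author": "alice",   "text": "Nice post"},
            {"author": "spammer", "text": "Buy pills"},
            {"author": "bob",     "text": "Hmm"}]

print([classify(c, reputation) for c in comments])   # -> ['highlight', 'hide', 'show']
```

Different communities would supply different reputation pools, so the same page could render differently for each reader.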


It's a much richer, more complex, and, if done properly, closer approximation to the way we social humans deal with each other. We're still at Web 0.1; we're not even up to the level of Vannevar Bush's vision of the memex, which at least allowed for markup of existing documents.

More later... Virginia's on the move...

--Mike--
