Thursday, October 29, 2009

IRON/Cloud — the outline of what a modern OS should be

In a previous blog entry — while reviewing the Windows 7 Operating System — I outlined some of the reasons I felt disappointed with the state of operating systems in general in the year 2009.

I sat down and listed many features that I felt (at least some) operating systems should have by now, separating them forever from the Xerox UI and Bell Labs UNIX roots that they have done so little to grow beyond in the last 30-50 years.

One thing I ran into since that blog entry is an article on Slashdot discussing Microsoft and Danger losing a ton of data that was being stored on behalf of T-Mobile Sidekick customers. The Sidekick stores all of its data remotely; yes, "in the cloud", and some of that data simply went "poof" earlier this month. The summary of the Slashdot article suggested this might never have happened had they been using a more failure-tolerant filesystem, such as ZFS.

So I looked into ZFS. It is a modern filesystem developed by Sun for use on their commercial Solaris operating system, and it meets points #4 and #5a/#5b from my initial blog entry of what operating systems should be doing these days. It favors appends over edits, it uses hash-based inodes so that duplicate data content never needs to be duplicated on disk, and from this it copies data via a lazy-copy model. It even implements data editing this way: creating a new block on disk for the newly written data so that the original data is not immediately overwritten, but instead remains available to roll back to in case of a crisis. Given enough disk space, you can honestly travel backwards in time to previous data snapshots, right up until old data is reclaimed. You can even bring old snapshots to life as R/W volumes which overlap the live volumes of data!
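To make the copy-on-write trick concrete, here's a toy sketch (in Python, with names I invented for illustration; real ZFS is vastly more sophisticated) of how writes can allocate fresh blocks while old snapshots keep pointing at the originals:

```python
import hashlib

class CowStore:
    """Toy copy-on-write store: blocks are immutable and content-addressed."""

    def __init__(self):
        self.blocks = {}       # hash -> bytes (duplicates stored only once)
        self.live = {}         # filename -> block hash
        self.snapshots = []    # frozen {filename: hash} tables

    def write(self, name, data):
        # A write never overwrites a block; it stores a new one and
        # repoints the live table. Identical content hashes to the
        # same block, so duplicate data costs nothing extra on "disk".
        h = hashlib.sha256(data).hexdigest()
        self.blocks[h] = data
        self.live[name] = h

    def snapshot(self):
        # A snapshot is just a frozen copy of the pointer table; the
        # blocks underneath are shared, so this is nearly free.
        self.snapshots.append(dict(self.live))
        return len(self.snapshots) - 1

    def read(self, name, snap=None):
        table = self.live if snap is None else self.snapshots[snap]
        return self.blocks[table[name]]

store = CowStore()
store.write("report.txt", b"first draft")
s0 = store.snapshot()
store.write("report.txt", b"second draft")    # the old block survives
assert store.read("report.txt", snap=s0) == b"first draft"
```

The punchline is that a snapshot costs almost nothing: it is just a frozen table of pointers into blocks that were never going to be overwritten anyway.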

This is the sort of innovation we should be seeing in modern operating systems, and it is the kind of change I put my weight behind.

Inspired by this, and by conversations I've had with others regarding my blog post, I have refined my expectations for operating systems. Enough so that I can describe what I see as the right direction to move in.

One of the most important transitions that the IT industry is presently tripping badly over itself to make is abstracting software and data completely and forever away from hardware, such that hardware within a given IT infrastructure can be truly commoditized.

This is breathtakingly important. If you have ever worked in a small to medium sized business, you know the horrors of vendor lock-in and hardware lock-in. Somewhere there is a 15-20 year old piece of iron running software long ago abandoned by its creators, and for some reason or another the fate of the entire enterprise rests on it. This linchpin of the business can apparently never be upgraded or replaced. If it breaks, someone has to figure out how to re-duct-tape it back into working order. If the motherboard ever goes out, or it loses enough drives in the RAID (drives that they no longer make replacements for), the powers that be fear it could spell doom for the company.

Being locked into the software is a problem all its own, but being locked into the hardware by extension just makes things worse. Additionally, hardware always fails eventually; you just don't want your software depending heavily upon it the moment that occurs. Since you never really know when that might happen, the correct answer is that software should "never" rely heavily upon any specific piece of hardware.

Finally, most small businesses cannot afford to do the trendy thing and separate every application onto its own pristine server, then chase High Availability by replicating services across different iron using application-specific replication models (DNS AXFR, MySQL master/slave, VMware VMotion, RAID for the disks, etc.). It simply costs too much to buy one or more pieces of iron for every new process, to say nothing of the IT workload that managing all these new boxes takes.

I feel that the perfect operating system for small business (not to mention large enterprise, home power users, and not too shabby even for Gramma checking email) would be one that gives you the greatest advantages of high availability with as simple a setup as two machines in a NOC (each with as few as one large disk) and one machine offsite for geographic replication (also with a single disk). I'm not delving into backup scenarios here; you'd likely still want one backup SAN in the NOC and another offsite, but I won't jaw about that. I envision an OS that allows you to run all of your small business's applications on these three machines, while offering maximal data replication, heightened availability and geographic leverage.

I call this hypothetical Operating System "IRON/Cloud". "Cloud" in this case meaning just what it always should have meant: a platform designed to divorce software, data and services from hardware. A platform that lies on top of potentially geographically disparate hardware in order to provide highly available and geographically leveraged services to users. "Cloud" in this sense does not necessarily mean trusting your data to "someone else" or some data vendor. I am talking about IT staff and end-users engineering their own god damned clouds for a change. :P

"IRON" in this sense refers to my own pet name for hardware. Hardware really should be commoditized so heavily that adding a server into a rack should be like pouring more molten iron into a smelt. Obtain and rejoice at all of the added resources. CPU, RAM, Disk, attached peripherals, network card, geographic diversity, possibly GPUs and other hardware acceleration, keyboards and mice for user presence, etc etc. But forget about and do not concern yourself with the limitations of the hardware. All of these resources should be consumed by the cloud platform itself, running processes redundantly and in parallel over all this hardware and shrugging off nodes that go dark like an elephant shrugs off arrows.

I have noticed recently that both independent enterprises and would-be "cloud" vendors are trying valiantly to provide both high availability and many of the security properties I mentioned in my last OS tirade by leveraging virtual machines. The thought is: if you tailor a virtual system on a virtual OS and image that, then you can rapidly stamp out copies of that image to meet spikes of demand across a data center. If you can't properly jail your applications (point #3a of my tirade), nor can you afford separate iron for every app, then put each app in its own virtual machine. I see this as a sign of a strong market need for the sorts of qualities in an OS I am espousing today.

IRON/Cloud would be an OS designed from the ground up to provide all of these features and niceties at the OS level, without the added complexity or resource suck of virtualization. An OS that supports full-machine virtualization, but would not require it in order to meet any of these modern needs. It's so twentieth century to run your service as though it has been installed on fake iron, when you should instead divorce the service from iron and simply allow it fluid access to resources. It is very twentieth century to write software under the assumption that you are a single thread of execution on a single CPU and responsible for the whole show. It is better to be a single thread of execution on a single CPU (with exclusive access to your memory) within a construct such that you know there are likely more threads just like you performing the same or similar computations, and that you are one of many threads using a shared API and RPC calls to accomplish a task as a team. Your thread will not run forever; it may even be cut tragically short by hardware trouble or even bad coding, but you work for the good of the team of threads that you trust to carry on your work after your execution has stopped, for good or for ill. Your computations will be checked, and perhaps even compared with a pool of results from identical threads, to weed out and log or report possibly hardware-induced errors in math.
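As a sketch of that "team of threads" idea (structure and names invented; a real sky would spread the copies across physically separate nodes), redundant workers run the same computation and a collector votes on the results, flagging any dissent as a possible hardware fault:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def compute(x):
    # Stand-in for real work; in IRON/Cloud each copy would run on
    # different hardware, so a flipped bit shows up as a dissenting vote.
    return x * x

def redundant(task, arg, copies=3):
    with ThreadPoolExecutor(max_workers=copies) as pool:
        futures = [pool.submit(task, arg) for _ in range(copies)]
        results = [f.result() for f in futures]
    votes = Counter(results)
    answer, agreeing = votes.most_common(1)[0]
    if agreeing < copies:
        print(f"warning: {copies - agreeing} of {copies} replicas disagreed")
    return answer

print(redundant(compute, 12))   # 144, checked three ways
```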

Not every thread or process will be redundant; what is redundant, and how redundant, is subject to sysadmin configuration. Some of these threads would have application-specific failsafes in case they perish. Some threads can just be run anew if this happens; worker threads of a web server are a great example. Some threads just need data recovery in case of failure, such as a word processor. So the cloned thread (if one is configured) acts as little more than a place for live data to be replicated to, which can replicate back to the master to recover gracefully after an application bork.

Let's go back to the example with 2 machines in a NOC and one offsite. This example is rather arbitrary; IRON/Cloud should run perfectly well on a single machine with no net connection. Of course it would not offer any of its replicating power in such a setup, but I hope it would compete admirably against today's OSen nonetheless. I only mention the 2+1 scenario because of how modest it is for a small business to actually build and maintain, and how much blinding power you could squeeze from it using IRON/Cloud.

If you could get "the software" to do anything you wanted with such a hardware beachhead, what is the most you could really expect from it? I for one imagine the ability to pump data down whatever pipe connects the two sites (most likely commodity internet, but hopefully as much bandwidth as one can manage ;D) in order to keep all live data for all services running across these three boxes perennially up to date. Not only would the offsite machine be there to save your ass if both machines in the NOC fail (fire, flood, site nuked from orbit) but it should be live and capable of handling customer connections from it's geographically unique vantage point. Perhaps your NOC is in Oregon and your backup is hosted in New York or even Europe. Any clients hitting your web server or what have you from those far flung locations ought to (optionally) just be able to dial into that closest node and get low ping times. The data pertinent to their transactions would flow back to you at the NOC via lazy or delayed writes over the backhaul. Using bandwidth for client connections at both points of presence and only sharing the pertinent details between you should keep bandwidth costs as low as possible too. You could maximize peering relationships, larger data center layouts could even schedule workload to favor cheaper bandwidth or power consumption.

Best of all, these benefits would be provided either by the OS, or by applications written to adhere to the design principles of the OS. Gone would be the times when you have to learn the hokey replication voodoo engendered by a specific style of database, or lock yourself into a virtual machine vendor and another layer of guest OS. It goes without saying that all applications can benefit from replication and relocation, so it is as much folly to rely on the application vendors to build these features by themselves as it is for app vendors to handle their own installation procedures (see point #7 of my previous rant).

In order to provide such wonders, I envision that IRON/Cloud would be built in two parts (reflected by the dividing line in the name). The first part, IRON, would be a collection of disparate software products to either be installed as a primary OS on individual pieces of hardware, booted from a liveCD (though doing so would not be encouraged long-term, given the waste of RAM resources valuable to the cloud), booted as the guest OS in a VM (not sure why you would want that but hell, why not allow it?), or even run as a simple application within a host OS (which is preferable to a VM in all cases I can imagine).

IRON's job would be to run on individual bits of hardware to begin the hardware abstraction process. IRON on a machine makes available whatever resources that machine has (in app or VM mode, you could ideally even export only selected hardware resources!) to the overlaying Cloud platform. IRON would handle things like what kind of processor this is (32-bit? 64-bit? x86, PowerPC, ARM?), how many CPUs are present, what network connectivity exists, and how to talk to the peripherals, and it would either eliminate or curtail the need for running processes to give a damn about such parochial concerns.
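As a toy illustration, here is the kind of inventory a hypothetical IRON node might advertise to its sky; every field name here is invented, not a real protocol:

```python
import os
import platform
import socket

def iron_node_report():
    """What a hypothetical IRON node might advertise to its sky.
    All field names are invented for illustration."""
    return {
        "hostname": socket.gethostname(),
        "arch": platform.machine(),          # e.g. x86_64, ppc, arm
        "bits": platform.architecture()[0],  # "32bit" or "64bit"
        "cpus": os.cpu_count(),
        "mode": "native",                    # native / app / vm / livecd
        "exported": ["cpu", "ram", "disk", "net"],   # resources on offer
    }

print(iron_node_report())
```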

With IRON running in whatever capacity you choose across a variety of IRON nodes, the overarching Cloud platform then takes over. IRON nodes would be configured to provide services to one (or more!) specific Cloud instances ("skies", they might be called!) which provide fine-grained user and task control and administration. So an in-house organization would likely run its own single sky, while a hosting provider may allow any number of clients to run their skies over shared IRON hardware. Multiple skies would provide little benefit over single skies aside from resolving contentions regarding administration. The paradigm is designed such that a single organization should always pair well with a single sky. Further granularity is achieved via smaller components within the sky, such as individual CloudVMs.

Each "sky" allows you to add or remove nodes from it's access (nodes would also be removed if they fail or fall offline) while it is running. In fact, while a "sky" operates very much like a single mainframe server, it should be designed to never stop completely or reboot. Any major software updates would be accomplished via sort of a rolling process restart to minimize or in most cases completely eliminate applicable service downtime. IRON updates would be handled similarly; the sky would schedule IRON nodes to go offline and reboot the hardware as needed then accept the node back when it comes online (after sysadmin intervention in case things get bad of course). Hardware that needs updating would simply be taken offline and brought back at SysAdmin's whims. The beauty of relaxing our needs for reliability from underlying hardware is hard to miss. It is much more expensive to craft one piece of Iron which you can rely on 24/7 for years than to obtain several that may have an average failure lifespan of a year or two. And both software and hardware must ALWAYS be maintained. You need to take it offline and give it a tuneup; I don't care what "it" is. Thus, an OS that runs perpetually, capable of surviving all such maintenance with even modest levels of replication is just what the doctor ordered. For everyone. Period. :P

Another advantage of this approach is that resources are utilized optimally. Traditional IT either shoehorns many apps dangerously onto the same iron, and thus the same single point of failure, or spreads apps thinly among iron that spends much of its time underutilized. When I think of the powerful desktops idling all night in offices across the globe, I weep! It would be far preferable if workstations ran the IRON/Cloud OS right at their desks; those very workstations could then crunch numbers and handle replication for the enterprise.

The flipside being that "remote desktop" would become a quaint and antiquated notion. Instead, your OS is the server OS; you run in the same sky as the "server" you hope to administer. Open windows to maintain and configure your service, and those windows really do open right on your desktop hardware. Walk downstairs to watch the blinky lights in the NOC, then open the same live window on the new workstation downstairs. The Cloud does its best to make you feel like the window is still local, performing RDP-like operations with your first workstation upstairs until it has fully replicated the GUI process to your local terminal, after which your local terminal becomes the master for the process and it *is* once more local. You could do the same thing after a flight from San Francisco to Paris: open an old window or desktop, and after a few minutes of slow RDP-like access to your process in Frisco, it completes its transition to your new terminal in Paris and the activity is once more local. Availability and responsiveness provided in the greatest ways one can conceive given the computational and network limitations available. Today is tomorrow god damn it, I think we deserve this now.

Given different nodes providing the same services, it makes sense to leverage modern network protocols such as SCTP to provide strong multi-homing support. I would recommend IRON/Cloud applications and infrastructure favor SCTP and IPv6 completely, perhaps even forgoing the IPv4 dual stack and leveraging a site gateway to IPv4 and TCP/UDP resources instead of bothering to support these at the application level.
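Python's standard library barely touches SCTP, so this sketch leans on the third-party pysctp package, and the exact call names are my best guess rather than gospel. The point is the shape of multi-homing: one endpoint bound to several local addresses, so a dead link does not mean a dead connection:

```python
import socket
import sctp   # third-party "pysctp" package -- an assumption on my part,
              # and the call names below are my best guess, not gospel

# One SCTP endpoint bound to two local IPv6 addresses (multi-homing).
# If the link behind one address dies, the association fails over to
# the other without the application ever seeing a dropped connection.
LOCAL_ADDRS = [("2001:db8::10", 9000), ("2001:db8:1::10", 9000)]  # hypothetical

server = sctp.sctpsocket_tcp(socket.AF_INET6)
server.bindx(LOCAL_ADDRS)        # advertise both paths to our peers
server.listen(5)

association, peer = server.accept()
print("request:", association.recv(4096))
```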

The Cloud platform, by way of your "sky", operates spanning all of the IRON nodes you give it leave to. The sky maintains and tabulates all of the disk, CPU, RAM, network, and peripheral resources provided by the IRON nodes and schedules processes to run where they are needed, and in such a way as to best honor the parameters for each task. Tasks are run as individual "Clouds", or "Cloud Virtual Machines", within a sky. Each Task, or CloudVM, is assigned to one or more IRON nodes initially, and in many cases comes to span more or fewer IRON nodes throughout its lifespan. Many tasks run indefinitely (web server, database... really any "daemon"), so CloudVMs are designed to be capable of this, but most tasks do not require it.

Tasks are software application instances. They run as separate CloudVMs (with their own unique view of available filesystems, unique access to RAM, firewalled network access, and all other hardware resources) in order to meet point #3 of my first list of criteria. Applications should by default remain jailed from one another. Skies offer rich opportunities for applications to work together via RPC, but most apps really don't care about other running apps on a modern OS. For security's sake they ought to be isolated from one another whenever possible anyway.

Tasks run software that has hopefully been engineered specifically for IRON/Cloud or else to meet a list of criteria that can allow software to remain compatible with this or competing OSen. Such software should be designed around the philosophy I cited above: that a thread is best served by not behaving monolithically, but instead as a single participant among what may be many threads, combined by a shared interface and framework that allows threads to better survive the decoupled nature of such an OS.

One prime example herein is disk access. Today, processes simply "assume" that the disk is available and fast to access. If the disk is not easily accessed, or the underlying OS must spend a lot of time getting to the disk (network shares, etc.), the client application normally blocks or freezes, becoming unresponsive while waiting for the disk. Even though the app could still conceivably accept user input, it simply refuses to. Even though such an app is perfectly capable of keeping its GUI window refreshed, it almost never does, and you get page tearing instead. What really grinds my gears about this problem is it normally happens when I NEVER MEANT TO OPEN THAT NETWORK SHARE TO BEGIN WITH!! I simply followed a symlink without thinking, or a file-open dialog opened to the "most recent" folder which is no longer available in my laptop's new location. But I can't change my mind now, no! I have to wait for the app and/or OS to time out and realize the shit is gone before I can direct the app's attention to where I really intended to go. >:C

Contrast this with well-written AJAX software, such as Google's Gmail. You write a message and Gmail autosaves as you write: both locally to Gears and remotely to the server. When you take a decisive action such as hitting send, the local Gmail javascript invokes this command. If the response from the server is instant, then you move right the heck along. In case it is not, Gmail lets you know what's going on with a status line saying "Sending...", leaving the missive open. If it takes still longer, the status will update to "Still sending...". The user can still interact with the application: for example, pop this troublesome message off into a new window (which continues valiantly trying to send) and then poke through the inbox to read other messages (which, if the user really has fallen offline, may already have been downloaded locally to Gears).
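Here's a minimal sketch of that same attitude applied to disk access (invented names, a thread pool standing in for a real GUI's event loop): the possibly-hung open happens off to the side, and the user keeps control the whole time:

```python
import os
from concurrent.futures import ThreadPoolExecutor, TimeoutError

io_pool = ThreadPoolExecutor(max_workers=4)

def user_cancelled():
    return True   # toy stand-in for the user clicking "cancel"

def browse(path, patience=0.5):
    # The possibly-hung operation (a vanished network share, say) runs
    # in a worker thread; the "UI" loop below never stops responding.
    future = io_pool.submit(os.listdir, path)
    while True:
        try:
            return future.result(timeout=patience)
        except TimeoutError:
            print(f"still waiting on {path}... feel free to change your mind")
            if user_cancelled():
                future.cancel()
                return None

try:
    print(browse("/mnt/defunct-share"))   # hypothetical dead share
except OSError as err:
    print(f"gave up gracefully: {err}")
```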

This is the kind of decoupled attitude towards resources that all software should embrace. You have what you have (CPU/RAM), and you know what, everything else might be slow or unresponsive, so deal with it gracefully. In the case of IRON/Cloud, you want to take that a little farther. Processes are spawned from software on one or more IRON nodes. The software should further be designed to maintain state in version-controlled, easy-to-export-and-merge data sets which are regularly shared among nodes via the sky. Whatever state software has not yet committed hasn't really "happened". Whatever versions of a state have not yet been shared with other nodes remain in danger of being lost in case the specific hardware melts. Software designers should leave the mindset of writing code that masters all data from a single thread, and begin thinking about spawning armies of threads running similar code to refine sharable, version-controlled and conflict-management-friendly data sets. You don't need this paradigm shift just to take advantage of IRON/Cloud either; this sort of sea change has been a long time coming simply to support the growing popularity of Cell-style, many-core CPU architectures.
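A toy sketch of what "version-controlled, shareable state" might look like (everything here is invented for illustration; merging two divergent heads is deliberately left as the application's problem):

```python
import hashlib
import json

class StateLog:
    """Toy version-controlled state: each commit is content-hashed and
    remembers its parent, so nodes can exchange and reconcile histories."""

    def __init__(self):
        self.commits = {}   # hash -> (parent_hash, state)
        self.head = None

    def commit(self, state):
        payload = json.dumps([self.head, state], sort_keys=True).encode()
        h = hashlib.sha256(payload).hexdigest()
        self.commits[h] = (self.head, state)
        self.head = h
        return h   # until this hash reaches another node, it can be lost

    def share_with(self, other):
        # Replication is just shipping missing commits. Two different
        # heads is a conflict, and merging is an application-level job.
        other.commits.update(self.commits)
        if other.head is None:
            other.head = self.head

a, b = StateLog(), StateLog()
a.commit({"doc": "draft 1"})
a.commit({"doc": "draft 2"})
a.share_with(b)              # now the hardware under "a" may safely melt
print(b.commits[b.head][1])  # {'doc': 'draft 2'}
```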

Not all applications will be engineered to such standards, especially early on. IRON/Cloud ought to be capable of running applications only lightly ported to the system which continue to behave monolithically (a POSIX compliance layer and ELF binary support would be a great start), and perhaps even applications over virtual machines running in foreign guest operating systems. Still, these are merely training wheels, and the true holy grail is applications built from the ground up to participate completely in the IRON/Cloud mantra.

To this end, applications will likely be marked with any number of different properties that skies will take into consideration when launching them and spawning new processes. Properties like "serial" for old-style, unimportant or single-shot tasks vs. "parallel" for tasks smart enough to run concurrently on different hardware. Properties specifying how highly available a task ought to be, and how much redundancy to plan for early on. Properties expressing a preference for geographic diversity or local responsiveness. Properties clarifying whether the Task is capable of running disparate parallel child processes: such as worker threads for a web server, or parallel compute threads for heavy number crunching, vs. many threads all crunching the same numbers for simple redundancy.
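For illustration only, a hypothetical task manifest might read something like this; none of these property names exist anywhere yet:

```python
# A hypothetical task manifest for a sky scheduler. Every property name
# here is invented for illustration; no such standard exists.
storefront_httpd = {
    "name": "storefront-httpd",
    "execution": "parallel",      # vs. "serial" for legacy or one-shot code
    "min_replicas": 2,            # how much redundancy to plan for early on
    "max_replicas": 16,           # ceiling the sky may scale out to
    "placement": "geo-diverse",   # vs. "local-fast" for raw responsiveness
    "children": "disparate",      # workers do different jobs; "identical"
                                  # would mean vote-checked redundant math
    "on_node_loss": "respawn",    # vs. "recover-data" for editor-style apps
}
```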

The Task is not ultimately responsible for micromanaging the creation of its own army of processes; the sky does that. The Task may request, but should not expect to demand, that more or fewer processes be spawned. Hardware resources are still precious and contentious to the sky (and the enterprise), and Tasks ought to be able to — in the worst of times — run as single threads on single nodes, sharing data sets with past and future versions of themselves care of the custodial sky.

This is the expectation I personally have for operating systems today. As an industry we are woefully behind schedule. I honestly and pointedly consider every gap between the current state of the art and what I have just described a SPECIFIC FAILURE. The way operating systems are today is so far behind what is reasonable to expect from our hardware that I feel as though we have no sewer systems and people are still emptying chamberpots out the window into the street.

I cannot really say why the industry has milled around the same watering hole for so many decades now without so much as budging along the obvious evolutionary path. Perhaps it's just that nobody has sat down and thought about where to go next? Perhaps the Software industry just needed someone to shine a flashlight in the appropriate direction?

If that's all it is then that is what I am doing right now. The future is that-a-way, now please proceed at best possible speed, or else for the love of God explain to us, your users, why not. This is what we want. Give us what we want or we will leave you behind.

That is all.

PS: I googled the keywords "Iron Cloud" to make certain the moniker I had chosen for my hypothetical reference OS was not stepping on anyone's toes. The closest match I found was that Amazon is teaming up with a project called "Casst Iron" for their cloud services and calling the combined project "Casst Iron Cloud". Though similar in name, that initiative has quite different goals from my hypothetical one. I hope I don't have to rename my abstract idea, as it would be a pain. Also, I used too many S's in the word "Casst" on purpose, to keep my own article from being indexed and showing up in searches for Amazon's service. I don't believe they will market it heavily without the first word, so saying "Iron Cloud" or "IRON/Cloud" should still be safe enough to protect folk from confusion for the time being and keep Amazon from flinging lawyers at me. :P

Wednesday, October 21, 2009

Major record label shamed out of using DMCA to bully Amateur Music Video



According to a comment by Ray Koefoed, the director of this video:




They [Warner Music Group] did put a [DMCA] claim against it, but allowed it to stay on so long as they could advertise on the page. Then one day they just removed the claim.


What I take away from this is that it is not necessary to build art by hand from the molecule up. "Half Immortal" is built from an existing video game, from someone else's game models and color-adjusted textures and set to music performed by a third party. Yet I challenge anyone to identify this work as anything but wholly original.

Saturday, October 10, 2009

"Sudo", it's unix for "Please .. ?" :D

Thought just came to me.

Problem is it deflates the fun out of this comic. :(

Wednesday, July 15, 2009

My Windows 7 review is both more confusing, and more rambling than yours. :D

I installed Win7 on a machine previously running Win2000; kept old HD just in case.

Used it. Liked it. Stable, and it did not appear a bit slower than Win2k on my 2.7GHz single-core with 1GB of RAM. Improved user friendliness by way of simple English controls. I can see the benefit there for novice users, and for experts after a brain-fizzling day at work alike.

BUT

I still rolled right on back to Win2k after a week's worth of testing.

Why roll back if there were zero faults I could find?

It doesn't do anything new.

It's as shiny as all get out, but I don't care much for my OS being shiny.

It could have "an improved network stack", but is not capable of any new network features. For example, I can't set DHCP alongside static IP's on a single interface. I've been waiting for someone to offer that now for 15 years.

It might be "more secure".. and to be honest, I was quite impressed by the user accounts management and program install policies.. but my natted Win2K box with my usage policies has stood the test of time and requires no extra security.

So, beyond the superficial there is really no advantage to the upgrade for me. Now list the disadvantages of upgrading an existing install:

  • buy the OS, yet again (or figure out how to pirate it and worry about staying on the upgrade path and avoiding kill switches)

  • reinstall all the software, some of which needs to be either re-bought or re-pirated. Reconfigure all the settings. In short, every time you find something that doesn't work like before, you have to stop and tweak it. After a week, I still couldn't get a thing done without tripping all over things that needed configurational TLC.

Now, reconfiguring will happen whenever you start anew, which is inevitable, as the machine will eventually die. However, this machine won't die any later for having installed the new OS and gone through a gratuitous reconfig step. In short: no pros and some cons. Verdict: not happening.

So, congrats M$ on distracting us all with how terrible Vista was for long enough to sink our expectations and try to make us say nice things about Win7 being relatively better. I admit it is marginally better and marginally more resource conservative than XP. On a new machine, Win7 > XP. But you'll still never get our old machines, because Win7 is not a DECADE more advanced than XP or even 2000, as it really ought to be.

It's time someone changed the whole GOD DAMNED PARADIGM. The entire OS abstraction that most of the world relies upon today is outdated. I welcome Google ChromeOS now, not because I am concerned many people will use it, but I hope it will gadfly M$, Apple, AND the OSS community into doing an operating system properly for a change.

I can't say what a truly superior, worth-trying paradigm will look like... but I have some small recommendations:
  • scrap the directory-based filesystem. Tag files, and make it easy to use tags like directories (see the sketch after this list).

  • Once you've perfected ACLs for the new filesystem, use something very similar on the network stack. Relying on third-party firewall apps to track application-to-network bindings indicates the present approach is too loose in this regard.

  • Jail every app by default. Give them all hooks to the same kernel, but no access to the same filesystem, network infrastructure, or message loop by default. App jails should have separation similar to user accounts. Apps put windows on the same screen for a user, but cannot "see" one another by default. Apps can be granted shared access, IPC conduits, or access to broad resources (screenshot utilities, VNC clients, etc.) but should not have such privileges by default. Approaching things in this direction ought to obviate 99% of the local security concerns most people are bothered about, and the ones Win7 tries so valiantly to defend you against. E.g., why shit bricks about every piece of software that gets installed when many of them do not require the broad power current OSes afford them? Keep honest applications honest and nip the problem in the bud. Remember "protected memory access"? This approach would have the same glowingly positive effect on secondary storage.

  • Optimize the filesystem to favor appends over edits (see the GoogleFS whitepaper... it applies to regular folks too)

  • replace standard inode logic with hash-based inodes. Lazy-copy friendly, and duplicate files take up reduced space on disk. (Less important once you've replaced directories with tags... but still :P) Once again, refer to GoogleFS, but also to the Git source control manager developed by Linus Torvalds, et al.

  • Hide the OS chrome. One thing about the Google ChromeOS mission statement that struck me (I am holding out as to its implementation and privacy impact) is the idea of keeping the OS itself from being an obstacle to getting work done. Take a Firefox-plugin-inspired approach to virtually every user-facing aspect of the OS. The amount of help folks need finding the right file, launching an app or managing windows varies wildly, and this is where the most innovation is to be had. Decouple the kernel, driver manager, file system and network stack from such concerns.

  • Foster an open ecology for installing and upgrading applications. This obviously is not in the direct control of an OS, but OS makers can endorse better practices. Applications should not install or upgrade themselves, as many apps on Windows try to and Google's apps insist on doing, both on Windows and on Mac. In fact, apps should not CARE how they are upgraded or installed. They ought to be installation agnostic. Even the distribution systems in Linux/BSD are limiting, because your distro works hard to try to give you every scrap of software you will ever need, and that is a losing battle to wage.

    Within any OS, third parties should create installation and upgrade engines, and app developers should publish something akin to RSS feeds announcing updates as XML or XML-like files (MSI apparently is a fine candidate for this role, but ought not be the only one).

    If an app developer (like Google) or an OS developer (like M$) thinks they are good at app install and upgrade, they should each offer their engine, but take care that their software still complies with the (to be written) standard so that it can be maintained by anyone else's engine as well. Then users can choose whatever app management they wish, and most will stick to one app engine, instead of having M$ update here, Google Pack Updater there, and 300 other apps all accessing the network to interrupt you on app launch about some new version, which you have to manually browse to and install, oft requiring a reboot.

  • OK, seriously (windoze-only complaint), but will SOMEONE do SOME BLOODY THING about this start menu arrangement for newly installed apps that hasn't evolved since Windows version 1.0??!? Does every app need a folder in the start menu to itself, with a readme, website link and uninstaller?? This is one point I am hoping will be solved by implementing the previous point.
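Back on that first point about tags: here's a toy sketch of what tag-based file lookup could feel like, with a path-style query becoming nothing more than a set intersection:

```python
from collections import defaultdict

# Toy tag-based "filesystem": files carry tags instead of living in
# exactly one directory, and a path-like query is a tag intersection.
tags = defaultdict(set)   # tag -> set of file ids

def tag_file(file_id, *labels):
    for label in labels:
        tags[label].add(file_id)

def lookup(*labels):
    # "/photos/2009/receipts" becomes lookup("photos", "2009", "receipts");
    # order no longer matters, and one file can answer many queries.
    found = [tags[label] for label in labels]
    return set.intersection(*found) if found else set()

tag_file("IMG_0042.jpg", "photos", "2009", "receipts")
tag_file("IMG_0043.jpg", "photos", "2009", "cats")
print(lookup("photos", "2009"))       # both files
print(lookup("2009", "receipts"))     # order-free, unlike a path
```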

Well, them's my tuppence on the matter, and why my upstairs machine won't leave Win2k either until it has to, or until there is somewhere actually worth going.

Thursday, July 9, 2009

Holy Crap on a Cracker!

Mmmm, comfy! ;D
Well, here's the scoop.

Late last year I posted an article talking about consuming serialized media in general, and mentioning an awexome webcomic called Questionable Content in particular.

Jeph is known for his serial background humor. For example, he will have posters in the background that might take one or more strips to see in full, every in-world "day" some new barista humor is chalked onto the blackboard menu at the coffee shop of DOOM, and people wear offbeat tee-shirts at arbitrary times.

This morning I was astonished to find out that I HAD BEEN MADE INTO A TEE-SHIRT! And that furthermore, Dora be wearing me! ;D

Saturday, July 4, 2009

Copyright Abolition

Hello friends, I am sorry that I haven't written in my blog much recently. It seems like I only post here when I have a new machinima to publish, or when I get an email from Google that gets me riled up about consumer rights online.

In a previous post, I made my stand clear regarding online privacy. Today I would like to talk about my feelings regarding Copyright, which are probably even more unusual and surprising.

I am a pirate. I download songs, software, movies and TV shows without paying money for the privilege. Virtually everyone I know that knows how to even remotely use a computer commits some form of piracy or another on a regular basis, but they normally qualify it with strange conditions such as "I only download content that is hard to find to purchase legally", or "I only download material I've already paid for legally", or "shaddap I don't want to talk about it right now."

I have always been mystified by this ethical double standard so many practice, and until recently I shared in it. Essentially, I did not understand what copyright law should look like, but I did feel strongly that if someone online had material they chose to share with me (an anonymous neighbor on the nets), then any law preventing this based upon the material itself is a form of censorship, and I didn't get too broken up over resisting it.

Sometime over the last year, I have however been able to solidify my position. I've come to back a political ideology which would leave copyright law quite simple indeed, on account of it being entirely absent.

I have come to the conclusion that the best legal framework for society to grow and innovate in is one where copyright law simply does not exist. Period. No copyright. None. Ever. Finito.

At first glance, you would think that is a common conclusion that a pirate would come to, given that pirates are best known for logging onto bittorrent and gnutella based services to download every kind of media which exists as soon as it is released. But once you start plumbing people's political opinions in popular online dialogues, the picture starts looking very different indeed.

Apparently, most internet users seem more interested in Copyright Reform, most popularly (though I cannot find a mid-page link to a quote, tellingly enough) reducing the copyright term limit to roughly 14 years and making it easier for consumers to legally make more use of their purchases. However, since I personally estimate that more than half of online copyright infringement involves material less than 14 years old (put another way, newer than the world wide web), most of these people are gunning for a law that they would still be breaking once it's ratified.

I on the other hand, and the few people I know of who agree with me, submit that copyright law is simply harmful on a large scale. Put in the barest terms, creation of work should not entitle any person to interfere in every transaction that third parties engage in all around the world merely to artificially inflate the value of the work. Unhindered communication is magnitudes more valuable to the global community than the ability to profit from reselling infinitely copyable non-commodities.

I have made my point a number of different ways in the past, normally in rebuttal in online forums, but I found myself writing an email to a friend today and a couple of hours later I had completed another illustration which details my feelings on the matter quite well. So, I decided to post it here instead.




I imagine a world like ours in most respects; I will call it Lacuna. For the simplicity of this illustration, the Lacunans speak English.

There is a custom that Lacunan artists can invent new words. So long as the word they invent follows certain phonetic rules, and so long as nobody claims prior use of the word, they can call it their own and then charge others for the convoluted right of being able to use the word in discussion. The definition is set by the author, but much like the "meaning" of any art, it is really decided more by the consumers, and the artist stays ahead of this evolution, taking credit for having meant it that way all along.

There is glamorous public interest in Lacuna for these new, forbidden words. They do little to forward actual communication since so few people are really familiar with them, but speakers of them have status. They sound important because they utter words they had to burn money to obtain license to. Soon, commoners who hear these words spoken often enough figure out how to pronounce and write them, and some begin doing so.

Oh no! The market value of these new "works of art" is being threatened by counterfeiters! Customer demand for words that have been sullied by common tongues drops. So the industry does the only thing it can do in its own reasonable self-interest: it either lobbies for new laws or perverts existing laws to levy ever greater punishments against those who speak these hallowed words in public.

Soon, commoners know to be careful not to say such forbidden things where they might be heard. Still, when alone and out of earshot of the aristocracy, they vie for status among one another by demonstrating working knowledge of the forbidden art. This leaves commoners less impressed by aristocrats who use the words which commoners have "cracked", dropping aristocratic demand for "cracked" words, and leading to still more invasive prosecution attempts against commoners, including monitoring all private communication, rewarding individuals who report speech violations, and even posing as commoners to try to catch them in the act.

All the while, new "hallowed" words are created which hew closer and closer in pronunciation and/or spelling to actual words of the original English language. If commoner's using hallowed words devalue them commercially, then surely the coined usage of hallowed words devalue the communicational power of English words, do they not? One notable wordsmith, Dalt Wisney, goes so far as to craft new commercial words which sound very similar to some of the first words Lacunans use in the English Language growing up. Whether he had planned to or not, over the next two generations he derailed the aristocracy into using his words exclusively in favor of their plain English counterparts, and guaranteed that they would pay him (and his successors) for the privilege to do so indefinitely. As time wore on, the original English vocabulary sounded crass and unpolished to aristocratic ears, and the commoners who invoked such vulgar language were shunned and ridiculed.

A time came when virtually none of the English language still had coin in this world. You can try to speak it, but no one will understand you, because even most commoners work their fingers to the bone for the very privilege of communicating with their employers in alien, hallowed vocabularies. Some groups of people still try to band together, keeping the English language alive amongst themselves while abiding by what they interpret the law to require, but it takes only a cease-and-desist letter or court order to quash such attempts, since none of the participants can afford the bureaucracy of defending their freedoms in court.

Then a day comes when a prosecutorial equilibrium is reached. The peasants are so far subjugated they cannot recall what freedom is like (Stockholm syndrome). They still communicate, but only by working hard to pay for what small vocabularies they have. None wish to rock the boat and anger the aristocrats — who give them language so reasonably — by speaking out of turn, so the world persists in a sort of Nash equilibrium. No utility or true societal benefit stems from the subversion of natural language, and much suffering is evident: but not visible from within this society, nor can any party see any individual action they could take which would better their lot.

The day the wireless telegraph is invented in Lacuna, nobody sees its potential to shift the balance of power in a battle everyone thought was over long ago. It opens a new avenue of communication between people. Between businesses, between aristocrats, and even difficult-to-monitor communication between far-flung commoners.

So, commoners begin learning Morse code and tapping at one another. Early models of the machine support abysmally low bit rates, so to begin with you can only get simple, 10-100 kilobyte hallowed words across at a time. Most commoners use this device for major announcements, such as the hallowed equivalents of "Baby!" or "Marry?" or "Won!" hoping that shared context will help to complete the communication. There is nowhere near enough capacity to reliably communicate truly lucrative hallowed words.

Soon enough, bitrates begin increasing. The thresholds at which long words can be transmitted are passed long before the sluggish aristocracy is prepared for the ramifications. Now, geographically distant commoners who have paid for disjoint vocabulary sets begin sharing one another's words so that they can understand each other. Each time they do, habit causes them to glance over their shoulder, but they find that nobody is there to reprove them. This mode of communication is only fairly private technologically, but the privacy people experience is practically perfect, due to a lack of repressive interest in what they have to say.

Commoners find that their communicational effectiveness increases somewhat as they learn inexpensive words used heavily elsewhere but only lightly where they live. So that's what this commoner's boss has been saying behind his back all this time? Oh, this other commoner can now understand parts of news reports that were never meant to be perceived by poor people. The list goes on.

As small pockets of commoners begin to see the value of sharing these words illegally — and the capacity to get away with it in this new communications medium — some band together ahead of the curve of public awareness to create a clearinghouse that they call "Vocabster", where people are free to advertise what words they know and browsers can elect to learn whatsoever words they please. Soon, the users of this service command a powerful communicative arsenal.

It takes some time for the aristocracy to respond to the equalizing effect of this informational weather system which is damaging their hegemony. They know they cannot admit its existence without drawing public attention to it and potentially worsening the problem. The story does break, however, with a single wealthy, disgruntled wordsmith throwing a brick at the very segment of the population who are interested in the words he invents. Later it will be made clear that he owns no rights in these words either; he has sold them to his publishing house in exchange for enhanced notoriety. One day he will publicly admit to logging on to such a clearinghouse to be taught how to use a word which he invented, but later forgot the details of. He states that he was not too cheap, but simply too busy to engage in the growing bureaucracy required to re-purchase his word.

Vocabster is threatened with court orders, and it resists all threats publicly, improving its notoriety every time. Eventually it is taken to court, and thereafter "purchased" into oblivion, but not before other clearinghouses with even more advanced technology take its place.

Next, the aristocracy begins threatening individual commoners with jail time for participating in these clearinghouses. This is perhaps a century removed from the early attempts at vocabularital freedom, so the public is simply not prepared for the hard demands made by the aristocracy in these cases, nor is the aristocracy properly prepared to wage so many small battles simultaneously. While they try many people who have gone so far as to profit from their counter-establishment activities, all of the PR focuses on the 7-year-olds, grandmothers and poor college students who are dragged into court instead. Guilty or not of the sharing they are accused of, they hew as far from the picture of a hardened criminal as the public can understand. This helps illuminate how much power the commoners have achieved, how little the establishment still holds, and how little the Lacunan aristocracy values the lives of individual commoners when they cannot be fleeced properly.

Another popular place online in this world is a bulletin board called Wordtube, where people can share short messages with one another. Up-and-coming wordsmiths use this place to craft and share their own words, determining their popularity and cutting their teeth. Some people share commercial words here unaltered, and when enough attention is drawn, the Wordtube administration takes these down. What few expected is how many wordsmiths might create portmanteaus of commercial words to communicate their own point, and then use the power of this bulletin board to disseminate their hybrid creations. Instead of doing the research to make sure no similar hallowed word exists, finding out which complicated pronunciations flow easily off the tongue, and hiring a lawyer to back their property claim, they borrow parts from a pool of words which are proven to work for their audience and remix their own efforts from there. Unfortunately, this practice is also illegal, and many amateur wordsmiths have their work summarily removed by Wordtube and related "legal" sharing services, unless the individual feels so fervently about their work that they can afford to go to court over the matter.

While they don't realize it, hybridisation is precisely how language evolved in Lacuna before it became commercialized, and sharing is exactly how it propagated. In Lacuna it was once believed that the common person understanding a word or finding utility in it was so much more valuable than prostrating yourself before its coiner that everyone used words with roots hundreds or even thousands of years old, and no one remembered who first coined any of them. A man could simply open his mouth and speak his piece without first signing himself into bondage for a vocabulary portfolio.

But now the battle is engaged once more. The liberal commoners are empowered by a curiously difficult-to-censor communications network. The aristocrats, on the other hand, are positively giddy. Not because of the capability they may have to profit with their old, corrupt business model over this network if they can ever re-master control (why think so small?), but because now they can paint themselves as victims to try to exact power over the very network itself, and gain the ability to censor non-vocabulary-related material when doing so leads to their profit. The aristocrats leverage the attitudes of the commoners whom they have brainwashed. Commoners who have invested in the old model, and feel as though their investments and hard work would be nullified, putting them at a disadvantage, should the free-language advocates win. Commoners dreaming of the beautiful new million-dollar words coming out this summer, which they can purchase a right to hear if the liberals simply do not piss off the establishment too much. Commoners with dollar signs in their eyes, imagining the earnings they could make by creating and profiting from their own words... not realizing their artful creations would simply be purchased by aristocratic organizations, with the artist signed into bondage like the golden goose, or relegated to obscurity if they do not agree to the establishment's terms.

Such unfortunate souls worry that if liberals had their way, there would be no incentive for people to invent new words! Who would spend millions researching long, complicated utterances when they will not make a return on their investment? There is so much risk involved, you see, since your word may prove unpopular, and you won't know until after you've already committed your investment. Everyone would get tired of the words we currently use to one-up one another, and then all speech would grind to a halt! Apparently, we would drown in our own liberty. They call a liberal who merely sits at home, easily learning whatever words he chooses over the telegram, "greedy", adding the hallowed equivalents of other choice epithets such as "unprincipled, fat, slob". It is claimed that the liberals rock the boat simply to one-up hard-working citizens with their ill-gotten vocabulary, and any claims to a natural right over language are scoffed at.

Thus, the three-part opera occurring in the real world we live in can be illustrated using slightly different terms, to help illuminate to the layperson the depth and breadth of how natural rights are perverted by our global content production industries into a new, innocent-seeming status quo. Most people cannot imagine the power of an interconnected network like this world's Internet; they see it only as a means to purchase and obtain canned content, and perhaps to pass short emails to one another. However, we can cooperate with each other instead if we choose. We can participate in the creation of content; we can be both producer and consumer. It's not an activity everyone would feel comfortable investing themselves in overnight, but if it becomes a fad and enough of your friends and family participate, you know that one day you will too. However, such processes cannot get off the ground so long as creating, sharing, and echoing content are forbidden by powerful people and companies with no interest in anything but monetizing whatever content is in their control. You cannot share their content without being sued or cut from communities such as YouTube over threats of suit. You cannot make your own creations based upon their content, or anything conspicuously similar, for the same reason. You cannot even make fully original content which competes with theirs, or which they would have any reason whatsoever to object to, because court threats function as a form of censorship. Unless your content makes so much bank for you that you can afford to meet your accusers in court, you are simply gagged by process, while in the meantime access to all meaningful art in our culture is traded and groomed by profiteers. Notice I said "meaningful" art, not "good" art. The Lacunans in my parable found that the versatile, natural English language lost all import once the publicly vaunted, though inferior, hallowed language came into coin. This is the crisis we face in our culture as well, as fewer people consume old media and we are alienated from any cultural reference that is not covered by copyright.

I hope this long post has made sense to some people (though I am certain zero people would read it through, even if six billion were given the chance ;3). Now I'll go try to see some fireworks with the family.

Wednesday, March 25, 2009

One small step for bat

One giant leap for.. well.. I'm not really part of any larger groups at all, am I? lol

So after much work, I have been able to perfect recording movies out of that Second Life game that I play so much. Here is my first test complete with voice chat and in-game sounds. Let me know your thoughts, ye fictional reader? ;3

Friday, March 13, 2009

Taking a step to resist Google's advertising-based privacy invasions

I run a website that has traditionally run Google's AdSense advertisements. I don't get a lot of traffic, but after 5 years of business my AdSense account is now up to a handsome $13 USD.

Because I run this website, Google emailed me yesterday to let me know that — as a publisher of their ads — my website will need to update its privacy policy.

Why? Because Google will now be tracking the behavior of individual users via their "interest-based advertisements" in order to better target them with ads. Apparently, this is among the suite of technologies they gained access to by purchasing DoubleClick.

I have to update my privacy policy, now that Google has drafted me to help them invade the privacy of my users. I guess they just don't want me to be caught off guard and sued for their actions. Thanks for the heads-up there, Google.

So I've researched the matter further. According to this FAQ item, there is no way for a publisher to opt out of this "service". While you might be able to opt out of displaying ads resulting from such collected data, there is no way to opt out of actually helping to collect the data, aside from quitting the AdSense program entirely. (Well, Google, it looks like you'll be cutting me a check for that $13 after all now, doesn't it?)

So, severing that business relationship takes a load off of my mind. Nonetheless, there is still the matter of us consumers. How may we protect ourselves against such behavioral targeting?

Checking Google's FAQ, they recommend that you opt out of their spying with a cookie. Isn't that a clever idea? Use a cookie to ask not to get cookies?

Even this irony is not lost on Google. They know that users like to be able to clear out their cookies, and might find it counter-productive to clear out their anti-cookie cookie. So Google has developed a Firefox plugin to maintain their special cookie, even if you delete all the rest of your cookies.

Of course, this unprecedented insult to the dignity of internet users worldwide raises many questions:

  • Is their plugin secure, or will it mine my computer from an even more tender vantage point?

  • Would it be reasonable to trust Google to maintain this plugin indefinitely? What if it stops working, even by design? How would we even know?

  • Will you be forced to use a browser their plugin is compatible with before you can be protected from their snooping?

  • What if Google can use this one opt-out cookie to perform all of their tracking needs? All it has to be is an identifier keyed against the database on their servers, after all.

  • Should we trust the remedy of our oppression to our very oppressor?

  • Should a user's privacy and dignity be stripped away by default, and only protected voluntarily if we ask nicely enough and jump through some hoops?



I encourage my readers to tip Google's opt-out cup back into their laps. There must be one or many better ways for a user to protect their online privacy. I would like my readers to be able to read what I am saying without fear of being spied upon, for example, since Blogger is hosted by Google... but also because most free blogging platforms presently feature Google or DoubleClick advertisements.

So I will list the counter-options that I am aware of which users can use to defend themselves. I don't have much just now, but I encourage you to post comments (or email me at jesset@gmail.com) with better suggestions or clarifications, and I will update this article accordingly.


  • Firefox plugin Adblock: blocks most well-known advertising networks, including Google and DoubleClick, and also blocks the dreaded Google Analytics website tracking script.

  • Browser-agnostic proxy filter Privoxy: you can run this on Windows or Unix-based machines. Instruct your browser to use this proxy, and on non-SSL connections it will actively scrub ad code, scripts, image bugs, and annoyances from web pages. It also scrubs your outbound HTTP headers of popular personally-trackable data. Unfortunately, from its vantage point as a proxy it cannot help with SSL-based connections.

  • Come on guys, help me fill out this list!



So it comes down to us: we must arm ourselves in order to enjoy a relatively non-obtrusive stay on the interwebs. I have never used Adblock before. I have a Firefox bookmarklet that manually squashes visually annoying ads, but aside from that I have not been bothered, and I have even clicked on advertisements I found interesting. That was back when the web was stateless. Now, however, it appears I'll have to take the step of saying goodbye to advertisements, and tell Google and every other web-ad provider to take their revenue streams and shove them.

I am sorry, I truly am... but when you exploit your position in the industry to grind the little guy like so much wheat, I simply cannot defend or support you any more. I will continue to bilk free services from you, Yar Har Fiddle Dee Dee, but I will actively do what I can to protect my privacy. I will junk your advertisements and I will encourage others to do so. I will take, and take pains not to give back. If this attitude is burdensome to you, Google, then you ought to change your policies and apologize to your public... bind yourself procedurally to be kept honest... or else we will abide until the day that someone who can accomplish that replaces you.