Liquid democracy

July 19th, 2010

Another political idea I really like is that of electronic liquid democracy. The idea is this - instead of trying to choose which of a very small number of representatives will best represent your interests, you vote directly on every measure that your representative would otherwise vote on.

Most of the time, you might not care or might not have enough background knowledge to know what's best, so you appoint a proxy to choose your vote on your behalf. You can even choose different proxies for different areas - one for health-related legislation, another for trade-related and so on. You can choose multiple ranked proxies so that if one proxy fails to vote on one issue, the vote will go to your second choice and so on. Proxies can also delegate to other proxies. All this is done electronically (with appropriate security safeguards to avoid elections getting hacked) so you can change your proxies and votes whenever you like.
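
To make the delegation rules concrete, here is a toy sketch in C++ (my own illustration, not code from any real liquid-democracy system) of how a single ballot might be resolved: a direct vote always wins, otherwise the voter's ranked proxies for that topic are tried in order, following delegations transitively and giving up if a delegation cycle is found.

    #include <optional>
    #include <set>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Voter {
        std::optional<std::string> directVote;  // e.g. "yes" or "no", if the voter voted themselves
        // topic -> ranked list of proxy ids (first choice first)
        std::unordered_map<std::string, std::vector<std::string>> proxies;
    };

    using Electorate = std::unordered_map<std::string, Voter>;

    // Resolve one voter's vote on one topic, following delegations transitively.
    std::optional<std::string> resolveVote(const Electorate& electorate,
                                           const std::string& voterId,
                                           const std::string& topic,
                                           std::set<std::string> visited = {}) {
        if (!electorate.count(voterId) || visited.count(voterId))
            return std::nullopt;                      // unknown voter or delegation cycle
        visited.insert(voterId);

        const Voter& voter = electorate.at(voterId);
        if (voter.directVote)
            return voter.directVote;                  // an explicit vote always takes priority

        auto ranked = voter.proxies.find(topic);
        if (ranked == voter.proxies.end())
            return std::nullopt;                      // no proxy appointed for this topic
        for (const std::string& proxy : ranked->second)   // first choice, then second, ...
            if (auto vote = resolveVote(electorate, proxy, topic, visited))
                return vote;
        return std::nullopt;                          // nobody in the delegation chain voted
    }

Counting a measure is then just a matter of calling resolveVote for every voter on that topic and tallying the answers.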

It's somewhat surprising to me that this hasn't taken off already, given how often voters seem to be dissatisfied with the decisions of their elected representatives. It seems like it would be relatively easy to get going - somebody just needs to run on a platform of no policy except voting in accordance with the result of the system. Such a representative could get votes from both sides of the political spectrum and would grant voters much more power and control - it seems to be the best of all worlds. Unfortunately, when it was tried in Australia, the Senator On-Line party failed miserably to get elected. There are a couple of reasons why this might have happened:

  • People avoiding voting for a candidate they feel is unlikely to win, so as not to help the wrong mainstream candidate get elected.
  • People not trusting this newfangled internet stuff.

The second problem should go away as the generations that haven't grown up in the internet age die out. The first problem is ever-present for third parties in first-past-the-post systems. Given that politicians won't generally reform the systems that elected them, it seems like the only chance is for third parties to sneak in at times when a majority of people have an unfavourable view of both mainstream parties (which may happen more often as internet-savvy voters get better informed about what their representatives are actually doing).

Seasteading

July 18th, 2010

I find the idea of Seasteading fascinating - it could certainly be a great thing for humanity if it accelerates the evolution of government systems and gets them to solve the Hotelling's law problem and the irrational voter problem. However, I have serious doubts that a seastead would be a good place to live in the long term. Assuming it gets off the ground, here is how I think the experiment would play out.

The first problem is that seastead "land" is extremely expensive - hundreds of dollars per square foot, on par with living space in a major US city but (initially) without the network effects that make US cities valuable places to live/work/do stuff. The "land" also has ongoing maintenance costs much greater than old-fashioned land - making sure it stays afloat, de-barnacling, ensuring a supply of fresh water and food, removing waste and so on.

This means that (at least initially) a seastead is going to be importing much more than it exports. In order to be practical, it's going to have to ramp up to being a net exporter as quickly as possible to pay back the initial costs, keep itself running and generate net wealth. But there are far fewer options for a seastead to make money than for a landstead. Farming is going to be particularly difficult as it requires a lot of space. So the seasteads will have to concentrate on producing wealth in ways that don't require a lot of space, and trade with land states for food.

The things which seasteads will be able to do cheaper than landsteads are things which are made more expensive by government interference, such as manufacturing (no government to impose a minimum wage or environmentally friendly disposal practices), software and other IP-intensive activities (no copyrights or patents to worry about), and scientific research (particularly biological - no ethics laws to get in the way). Drugs, prostitution and gambling will be common as sin taxes can be avoided. Seasteads which don't do these things just won't be able to compete with the land states, so by a simple survival-of-the-fittest process, this is what the successful seasteads will end up doing - a classic race to the bottom.

By design, it's supposed to be very easy to leave a seastead and join another if you disagree with how your current seastead is being run - you just pack up your house onto a ship and go to another seastead. But this very mobility will introduce its own set of problems. Crime, for example - if someone commits a serious crime aboard a seastead, there's little stopping them from leaving before the police start breathing down their necks. To avoid having a serious crime problem, the seasteads are going to have to sacrifice freedom for security in one of a number of ways:

  • Make it more difficult to move (you'll need a check to make sure you aren't under suspicion of a crime before you'll be allowed to get on the boat).
  • Have a justice system and uniform baseline of laws covering a large union of seasteads - allow free travel between these seasteads but you'll still be under the same criminal jurisdiction - similar to how US states operate. By agreeing to be a part of the union, your seastead gets some additional security but you have to play by the union's rules to keep your police protection.

Another problem is that it's going to be difficult to institute any form of social welfare - if you tax the rich to feed the poor, those who pay more in tax than they get back will just leave for a seastead which doesn't do that. So there is likely to be a great poverty problem - a big divide between wealthy seasteads and poor ones. The poor ones will just die out and their "land" will be taken over by other seasteads, accelerating the Darwinism to the delight of libertarians and the dismay of humanitarians.

If lots of people end up on seasteads, I foresee large environmental problems since it's going to be very difficult for seasteads to enact (let alone enforce) anti-pollution laws. Seasteads which take pains to avoid dumping their toxic waste in the ocean just won't be able to compete with those that don't.

There are certain economies of scale in many of the wealth-generating activities that seasteads will perform, so it seems likely that reasonably successful seasteads will merge into larger conglomerates the way corporations do in the western world (particularly in the US). These corporate sea-states will be the ones which are nicest to live on - since they are major producers of wealth they'll be able to afford the best living conditions and attract the brightest and hardest-working people. They won't be the most free seasteads though - only those who can contribute (or have contributed) to the seastead's economy will be allowed to live there - customers, employees and their families, retired employees, employees of partner corporations and so on. As an employee, you'll have to play by the company rules and not engage in any activities which might be deleterious to the company's profit margin - don't expect to be able to take recreational drugs, for example - they might affect your job performance and add costs to the company healthcare plan.

Some seasteads might have constitutions which prevent this sort of thing. They might not be as big, as pleasant to live on or quite as good at making money, but their small size and consequent agility might allow them to survive and become viable in certain niche areas of the seasteading economy. They may have to make some compromises to stay viable, but as different seasteads will make different compromises, there will be plenty of choice.

The outcome of the seasteading experiment is really important because when humankind inevitably starts colonizing space, similar economic forces will be at work.

I am a monad

July 17th, 2010

In the programming language Haskell, it is normal to work with functions that depend only on their input values, and which do nothing except return an output value. Code written in this pure style is generally much easier to reason about, because there's no chance of one piece of a program making a change to global state that affects an apparently unrelated part of the program.

This does leave the problem of how a functional program can communicate with the outside world. Various approaches have been tried, and the one Haskell chooses is particularly elegant. Functions that need to accept input or produce output take an IO object as a parameter and produce a (generally different) IO object as the result. The IO object "encapsulates" the entire world outside the program. To actually run a Haskell program, the machine performs a sequence of IO transformations - taking one IO object, evaluating the program as much as necessary to determine the next IO object, and then actually performing the corresponding input or output operation before restarting the loop.

So there's a sort of inversion between how we normally think about function evaluation and how the evaluation actually happens. One can't really pass around the entire state of the universe as a parameter the way a C programmer would pass an int, so one must fake it by moving the "state of the universe" to the outside of the program and rearranging everything else so that it works the same way as it would if you could.
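
To make that inversion a bit more concrete, here is a toy model (my own sketch in C++, and emphatically not how GHC actually implements IO): an "action" is a function from the state of the world to a result plus the next state of the world, and running the program means threading a single world value through a sequence of actions.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>

    struct World { int version = 0; };   // stands in for "everything outside the program"

    // An action takes the current world and returns a value plus the next world.
    template <typename A>
    using Action = std::function<std::pair<A, World>(World)>;

    // A primitive output action: the real side effect happens only when the
    // action is performed by the run loop below.
    Action<int> putLine(const std::string& text) {
        return [text](World w) {
            std::cout << text << "\n";
            return std::make_pair(0, World{w.version + 1});
        };
    }

    int main() {
        // The "run loop": evaluate each action just enough to get the next world.
        World world;
        for (const Action<int>& act : { putLine("hello"), putLine("world") })
            world = act(world).second;
    }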

I can't prove it, but I think there's a very deep idea here with implications for how we understand the universe and our place in it. Much like pure functions can't describe IO, it seems like physics as we understand it can't describe human consciousness (in particular, subjective experience). Some suggest that this means consciousness is an illusion, but this has never been a satisfying answer to me.

Physics is done by describing the universe objectively - there are these particles at these positions and such and such a field had value x over here at this time (somewhat like values in a Haskell program). There are rules describing how the state of the universe evolves through time (somewhat like pure functions). But none of these things really seem to be able to describe what it is like to "feel" seeing the colour red (for example). Physics can describe a stream of 700nm wavelength photons hitting a retina, causing neurons to fire in particular patterns, some of which resemble the patterns that have occurred when the same retina received 700nm photons previously. These patterns then provoke other patterns which previously occurred in conjunction with the "700nm" patterns, and cause the release of small amounts of hormones. A complete understanding of this system (which admittedly we don't have) would in principle allow one to predict exactly which previous experiences would be recalled by any given stimulus, and might even allow one to predict exactly how the stimulated individual would react. But none of this seems to be able to tell us what experiencing red is actually like, because we have no way to describe subjective experiences objectively.

We experience the universe entirely from a subjective point of view - through our senses. The objective model is useful because it allows us to reason, communicate and simulate but I suspect that in saying that objective reality is the real thing and subjective reality is just an illusion, we would be making a mistake and not seeing the forest for the trees.

Instead, I would like to suggest that we perform the same inversion that the Haskell compiler does. Instead of thinking of human beings as unexplained (possibly unexplainable) things within an objective universe, think of them as the IO hooks of the universe: input from free will (assuming that it exists) and output to subjective experience. This IO doesn't (necessarily) go to "another universe" - it's just a primitive, axiomatic thing that may have no more relevance to our objective universe than the implementation details of the IO monad do to a pure functional Haskell program. Experiencing life is running the program.

One important difference between the universe and a Haskell program is that a Haskell program only has one IO monad, but the universe seems to have many subjective observers in it. Having multiple IO monads would be a problem for a Haskell program because the IO monad defines how the program is actually evaluated - there's only one "real universe" for it to run in. But there's no problem having multiple non-IO monads - if the monads can't communicate with each other through the outside world (only through the program) you can have as many of them as you like. Since people can't communicate with each other except through the physical universe, there's no problem here.

Does this mean that one observer in the universe is privileged to be the "IO monad" whilst everyone else is a p-zombie? From the point of view of that observer, it certainly seems like that is a possible way of thinking about it, but since there's no objective difference between an IO monad and a non-IO monad (as long as the monads only communicate objectively), I'm not sure the distinction is meaningful.

The physics of a toroidal planet

July 16th, 2010

I love the idea of futuristic or alien civilizations using advanced technology to create planetary scale art.

I wonder if such an advanced civilization would be able to create a toroidal planet. It would have to be spinning pretty fast for it not to fall into the central hole - fast enough that the effective gravitational field at the outermost ring of the surface would be quite weak and the gravitational field at the innermost ring would be quite strong. The gravitational "down" direction would vary across the surface of the planet and wouldn't be normal to the average surface except on those two rings. I think if you slipped you probably wouldn't fall off (unless the effective gravity was very low) but you would generally fall towards the outermost ring. Any oceans and atmosphere would also be concentrated at the outermost ring. It would be as if there was a planet-sized mountain with its top at the innermost ring and its bottom at the outermost. It would therefore have to be made of something very strong if the minor radius was reasonably large - rock (and probably even diamond) tends to act as a liquid at distance scales much larger than mountains.

I need to try to remember to google my ideas before writing about them. Having written this, I've just discovered that lots of people have thought about this before.

Bailout thoughts

July 15th, 2010

In 2008, the US government spent a large amount of money bailing out financial organizations troubled as a result of the subprime mortgage crisis. The fiscal conservative in me thinks that maybe socializing losses isn't such a good idea, and that doing so will cause the invisible hand to engineer boom and bust cycles to siphon large amounts of public money towards private financial institutions again in the future.

On the other hand, the consensus seems to be that it has worked, things are looking up and it cost less than expected, so maybe it was the right thing to do.

I remember reading at the time about the disastrous consequences that were predicted if TARP wasn't passed - banks would fail, making it very difficult for ordinary businesses (who didn't cause the crisis) to get credit to grow and/or continue operating.

The obvious answer is to let the banks fail and to bail out these innocent businesses instead - lending to them directly instead of lending to the banks so that the banks can lend to the businesses. But figuring out which businesses are good ones to lend money to is a rather complicated and difficult process in itself - one that the government isn't really set up to do. It makes perfect sense for the government to outsource that work to private institutions (i.e. banks) who were doing it anyway (and weren't doing altogether too bad a job of that side of it).

On the other hand, how can we stop this happening again in the future? I think the answer is to tie bailout money to the enactment of regulations designed to stop it happening again. In particular, for the current economic crisis it seems like the conflict of interest between credit rating agencies and the issuers of securities was a major cause, and I suspect that well-placed regulations there could prevent similar crises.

The fundamental differences between Windows and Linux

July 14th, 2010

When I started to get deeper into Linux programming, I found it fascinating to learn about the design decisions that were made differently in Windows and Linux, and the history between them.

One example in particular is how dynamic linking is done. Windows dynamic link libraries (DLLs) are compiled and linked to load at a particular address in memory. When a DLL is loaded, the loader inserts the addresses of the functions it exports into the import tables of the modules that use it. A DLL can be loaded at a different address (in the case that its preferred address is already in use), but then the loader must relocate it, which is an expensive operation and prevents the DLL's code from being shared with other processes. Once loaded, however, code in a DLL is no slower than any other code on the system (calls from one module to another are slightly slower than calls within a module though, since there is a level of indirection involved).

With Linux, on the other hand, the equivalent (shared object files) are compiled to use position-independent code, so they can be loaded anywhere in memory without relocation. This improves process startup speed at the expense of runtime speed - because absolute addresses cannot be used, in situations where they would otherwise be used the load address must be found and added in.
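
For illustration, here's roughly what that looks like from the programmer's side (the file names and build lines below are my own example, not anything from a real project): the library is built with -fPIC so its code can be mapped at whatever address the dynamic loader picks, and a program can pull it in at runtime with dlopen/dlsym.

    // hello.cpp - built as position-independent code so it can be mapped anywhere:
    //   g++ -fPIC -shared -o libhello.so hello.cpp
    extern "C" int hello_value() { return 42; }

    // main.cpp - loads the library at whatever address the loader picks:
    //   g++ -o main main.cpp -ldl   (-ldl is unnecessary on recent glibc)
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        void* lib = dlopen("./libhello.so", RTLD_NOW);
        if (!lib) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }
        auto hello = reinterpret_cast<int (*)()>(dlsym(lib, "hello_value"));
        std::printf("hello_value() = %d\n", hello());
        dlclose(lib);
        return 0;
    }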

Another way Linux improves startup speed at the expense of runtime speed is lazy binding of function calls. In a Linux process, a function imported from a shared library is not normally resolved until the first time it is called. The function pointer initially points to the resolver, and when the function is resolved that pointer is replaced with the address that was found. This way, the loader doesn't spend any time resolving functions that are never called.
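
The same trick can be imitated in ordinary user code. The sketch below (my own illustration of the idea, not the actual glibc PLT/GOT machinery) starts a function pointer off pointing at a resolver; the first call looks up the real symbol with dlsym and patches the pointer, so subsequent calls go straight to the target.

    // build: g++ lazy.cpp -ldl   (-ldl is unnecessary on recent glibc)
    #include <dlfcn.h>    // dlsym, RTLD_DEFAULT
    #include <unistd.h>   // getpid - a symbol guaranteed to be in the process
    #include <cstdio>

    static pid_t lazy_getpid_resolver();

    // Plays the role of a GOT entry: it initially points at the resolver stub.
    static pid_t (*lazy_getpid)() = lazy_getpid_resolver;

    // Plays the role of the PLT stub plus dynamic linker: look the symbol up,
    // patch the pointer, then forward the call.
    static pid_t lazy_getpid_resolver() {
        lazy_getpid = reinterpret_cast<pid_t (*)()>(dlsym(RTLD_DEFAULT, "getpid"));
        return lazy_getpid();
    }

    int main() {
        std::printf("first call (resolved lazily): %d\n", (int)lazy_getpid());
        std::printf("second call (direct):         %d\n", (int)lazy_getpid());
    }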

It makes perfect sense that Linux should sacrifice runtime speed for startup speed given the history of Unix. The first versions of Unix had no multithreading - each thread of execution (process) had its own memory space. So you needed to be able to start processes as quickly as possible. With the fork() system call, a new process could be created by duplicating an existing process, an operation which just involved copying some kernel structures (the program's data pages could be made copy-on-write). Because process startup was (relatively) lightweight, and because of Unix's philosophy that a program's responsibilities should be as limited as possible (and that complex systems should be made out of a number of small programs), processes tend to proliferate on operating systems modelled on Unix to a much greater extent than on Windows.
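
For example, spawning a short-lived worker process on Unix is just a fork() away (a minimal sketch, with error handling omitted):

    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        pid_t child = fork();               // duplicate this process (copy-on-write)
        if (child == 0) {
            // Child: a copy of the parent's address space; do one small job and exit.
            std::printf("child %d doing its one small job\n", (int)getpid());
            _exit(0);
        }
        // Parent: carry on, then reap the child.
        int status = 0;
        waitpid(child, &status, 0);
        std::printf("parent %d reaped child %d\n", (int)getpid(), (int)child);
    }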

However, this does mean that a program developed with Windows in mind (as a single monolithic process) will tend to run faster on Windows than on Linux, and a program developed with Linux in mind (as many small cooperating processes continually being created and destroyed) will tend to run faster on Linux than on Windows.

Another way in which Linux and Windows differ is how they deal with low memory situations. On Linux, a system called the "OOM killer" (Out Of Memory killer) comes into play. The assumption is that if a machine is running too low on memory, some process or other has gone haywire and is using it all. The OOM killer tries to figure out which process that is (based on which processes are using a lot of memory, and which critical system processes are trusted not to go haywire) and terminates it. Unfortunately it doesn't always seem to make the right choice, and I have seen Linux machines become unstable after they run out of memory and the OOM killer kills the wrong thing.

Windows has no OOM killer - it will just keep swapping memory to disk and back until you get bored and kill the offending process yourself or reboot the machine. It's very easy to bring a Windows machine to its knees this way - just allocate more virtual address space than there is physical RAM and cycle through it, modifying each page as rapidly as possible. Everything else quickly gets swapped out, meaning that even bringing up the task manager to kill the program takes forever.
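
A sketch of that experiment (only run this on a machine you don't mind grinding to a halt, and treat the 32 GiB figure as a placeholder for "more than your physical RAM"):

    #include <cstddef>
    #include <vector>

    int main() {
        // Pick something comfortably larger than the machine's physical RAM.
        const std::size_t bytes = 32ull * 1024 * 1024 * 1024;   // 32 GiB - adjust to taste
        const std::size_t pageSize = 4096;

        std::vector<unsigned char> memory(bytes);   // commit more than fits in RAM
        for (unsigned char value = 0; ; ++value)    // cycle through it forever,
            for (std::size_t i = 0; i < bytes; i += pageSize)
                memory[i] = value;                  // dirtying every page as we go
    }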

User code in kernel mode

July 12th, 2010

Most modern operating systems have a great deal of complexity in the interface between user mode and kernel mode - 1100 syscalls in Linux 2.4.17 and some 400 odd in Windows Vista. This has implications for security (since all those calls need to be hardened against possible attack) and also for how difficult it is to learn to program a particular platform (albeit indirectly, since the syscall interface is not usually used directly by application programs).

It would be much better if the system call interface (and the programming API on top of that) was as minimalistic as possible whilst still securely multiplexing hardware and abstracting away hardware differences.

The reason it isn't is that doing so would make an operating system extremely slow. It takes some 1000-1500 cycles to switch between user mode and kernel mode. So you can't do a syscall for each sample in a realtime-generated waveform, for example. There are many other ways that an operating system could be simplified if not for this cost. TCP, for example, is generally implemented in the kernel despite the fact that it is technically possible to implement it in userspace on top of a raw IP packet API. The reason is that network performance (especially server performance) would be greatly impacted by all the extra ring switching that would be necessary whenever a packet is sent or received.
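
A rough way to see the cost is to time a trivial system call against a trivial ordinary function call (a micro-benchmark sketch of my own; the exact numbers will vary with CPU and kernel):

    #include <sys/syscall.h>
    #include <unistd.h>
    #include <chrono>
    #include <cstdio>

    // A trivial user-mode function to compare against (noinline so the call survives).
    __attribute__((noinline)) long plain_call(long x) { return x + 1; }

    int main() {
        const long iterations = 1000000;
        using clock = std::chrono::steady_clock;

        auto t0 = clock::now();
        for (long i = 0; i < iterations; ++i)
            syscall(SYS_getpid);                  // a real user->kernel->user round trip
        auto t1 = clock::now();

        volatile long acc = 0;
        for (long i = 0; i < iterations; ++i)
            acc = plain_call(acc);                // stays entirely in user mode
        auto t2 = clock::now();

        auto perCallNs = [&](clock::duration d) {
            return std::chrono::duration_cast<std::chrono::nanoseconds>(d).count() / iterations;
        };
        std::printf("syscall:       ~%lld ns per call\n", (long long)perCallNs(t1 - t0));
        std::printf("function call: ~%lld ns per call\n", (long long)perCallNs(t2 - t1));
    }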

A much simpler system API would be possible if user programs could run code in kernel mode - that way they could avoid the ring switch and all the other associated overhead. This was the way things used to be done, back in the days of DOS. Of course, back then every application you used needed to support every piece of hardware you wanted to use it with, an application crash would take down the entire system and there was no such thing as multi-user. We certainly don't want to go back to those bad old days.

Microsoft's Singularity OS (as I've mentioned before) solves this problem by requiring that all code is statically verifiable, and then runs it all in ring 0. Given how much performance enthusiasts like me like to write highly optimized but completely unverifiable code, I think it will be a while before memory protection becomes a thing of the past.

But maybe there's a middle path involving both approaches - unverified code using the MMU and verified code running in the kernel. A user application could make a system call to upload a piece of code to the kernel (perhaps a sound synthesis engine or a custom TCP implementation) and the kernel could statically verify and compile that code.

With a suitably smart compiler, parts of the kernel could also be left in an intermediate form so that when the user code is compiled, the kernel implementations could be inlined and security checks moved outside of loops for even more speed.

Another thing that such a system would make practical is allowing applications to subvert the system calls of less trusted applications.

Optimizing by taking advantage of undefined behaviour

July 11th, 2010

Undefined behavior doesn't sound like something you'd ever deliberately introduce into a program's codebase. Programmers generally prefer their programs' behaviors to be fully defined - nasal demons are rarely pleasant things.

However, there are sometimes good reasons for adding a statement to your program with undefined behavior. In doing so, you are essentially telling the compiler "I don't care what happens if control flow reaches this point", which allows the compiler to assume that control flow will never get to that point if doing so will allow it to better optimize the parts of the program that you do care about. So it can be an optimization hint.

A good way to use this hint is to put it in the failure path of assert statements. In checked/debug builds, assert works as normal (does the test and terminates the application with a suitable message if the test fails) but in release/retail builds assert expands to code that invokes undefined behavior if the test fails. The compiler can then skip generating the code to actually do the test (since it's allowed to assume that it will never fail) and can also assume that the asserted condition holds in the subsequent code, which (if it's sufficiently smart) will allow it to generate better code.

GCC has a built-in function for just this purpose: the __builtin_unreachable() function is specified to have undefined behavior if it is ever called, and is actually used in just this way. I think this is really clever, in a slightly twisted way.
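
Here's roughly what such an assert might look like (a sketch of the pattern; ASSERT and the example function are my own names):

    #include <cstdio>
    #include <cstdlib>

    #ifdef NDEBUG
      // Release build: tell the compiler the condition can never be false. It may
      // drop the test entirely and optimize later code on that assumption.
      #define ASSERT(condition) \
          do { if (!(condition)) __builtin_unreachable(); } while (0)
    #else
      // Debug build: actually perform the check and report failures.
      #define ASSERT(condition) \
          do { \
              if (!(condition)) { \
                  std::fprintf(stderr, "assertion failed: %s (%s:%d)\n", \
                               #condition, __FILE__, __LINE__); \
                  std::abort(); \
              } \
          } while (0)
    #endif

    int divide(int numerator, int denominator) {
        ASSERT(denominator != 0);   // in release builds, the compiler may assume this holds
        return numerator / denominator;
    }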

What happened to DirectSound?

July 10th, 2010

Some years ago, I was tinkering about with some sound synthesis code in Windows. This was in the pre-Vista days, and the officially recommended way of doing sound output was using DirectSound. My application was interactive, so I wanted to make it as low latency as possible without having too much overhead. I set it up with a buffer size of 40ms (3,528 bytes). I then set up IDirectSoundNotify with two notification positions, one at the middle of the buffer and one at the end. Whenever one was triggered I would fill the other half of the buffer. This all worked great.
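
From memory, the notification setup looked something like the sketch below (the names are mine and error handling is omitted; it assumes buffer is a secondary IDirectSoundBuffer8 of bufferBytes bytes, created with the DSBCAPS_CTRLPOSITIONNOTIFY flag, and that you link against dsound.lib and dxguid.lib):

    #include <windows.h>
    #include <dsound.h>

    // Set up two auto-reset events: one fires when the play cursor reaches the
    // middle of the buffer, the other when it reaches the end. Whichever fires,
    // the half that has just finished playing is safe to refill.
    void SetUpHalfBufferNotifications(IDirectSoundBuffer8* buffer, DWORD bufferBytes,
                                      HANDLE events[2])
    {
        events[0] = CreateEvent(NULL, FALSE, FALSE, NULL);
        events[1] = CreateEvent(NULL, FALSE, FALSE, NULL);

        IDirectSoundNotify8* notify = NULL;
        buffer->QueryInterface(IID_IDirectSoundNotify8,
                               reinterpret_cast<void**>(&notify));

        DSBPOSITIONNOTIFY positions[2] = {
            { bufferBytes / 2, events[0] },   // halfway point
            { bufferBytes - 1, events[1] },   // end of the buffer (play then wraps)
        };
        notify->SetNotificationPositions(2, positions);  // must be done while stopped
        notify->Release();
    }

    // The fill loop then looks roughly like:
    //   DWORD which = WaitForMultipleObjects(2, events, FALSE, INFINITE) - WAIT_OBJECT_0;
    //   // which == 0: the first half just played, so Lock()/fill/Unlock() offset 0;
    //   // which == 1: the second half just played, so refill offset bufferBytes / 2.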

At least, it did until I came back to the code some years later, after having upgraded the OS on my laptop to Windows Vista (the hardware hadn't changed). Suddenly this code which worked great before sounded horrible - the sound was all choppy as if only half of the buffers were being played. What happened?

After some experimentation, I discovered that the jittering happened with a buffer of less than 7,056 bytes (80ms) but not with larger buffers. Armed with this evidence and a bit of pondering, I have a good theory about what happened.

The Windows audio system was drastically rewritten in Windows Vista and DirectSound was re-implemented - instead of a thin layer over the driver, it became a high level API implemented on top of the Windows Audio Session API (WASAPI). In doing so, it lost some performance (it's no longer hardware accelerated) and, it seems, suffered an increase in latency - the DirectSound implementation uses a 40ms buffer (I think). That's all very well, but there's a bug in the implementation - IDirectSoundNotify fails to trigger if the positions are more closely spaced than this.

The preferred API for this kind of thing is now XAudio2 which is actually a little bit nicer (the code to do the same thing is slightly shorter) and works on both XP and Vista (unlike WASAPI). I can't really fault Microsoft too much since apparently this particular use of IDirectSoundNotify is rather unusual (or they would have made it work) but still it's annoying that DirectSound went from being the recommended API to buggy and (practically if not technically) deprecated in a single Windows version. Still, I understand that the world of Linux audio is even worse (albeit getting slowly better).

I wonder why audio APIs seem to go through so much churn relative to graphics (given that audio is that much less complicated, and that the hardware isn't really changing much any more). I sometimes wish all the audio APIs were as simple as a "get the next sample" callback API but I guess this is too CPU intensive, or at least was when the APIs were designed.

Website for making websites

July 9th, 2010

This is an idea I came up with when I was working on Argulator. Making websites like this involves a lot of very fiddly work - keeping the server-side and client-side bits synchronized, implementing login/logout stuff and so on. It would be great if there was a site (a "MetaSite" if you will) that let you create data-driven web applications whilst writing a minimal amount of code - sort of like how WordPress lets you make a blog whilst writing a minimal amount of HTML and CSS.

MetaSite would have, built in, the ability to create user accounts, login, logout, change password and so on. The users thus created can then create their own sites and interact with sites built by others. Some sites might require email verification and/or a captcha solve before they will allow the user to do certain things - MetaSite would take responsibility for that and once these tasks are done they don't need to be redone for each application.

There would be an "admin hierarchy" in that an admin of a site can appoint other users as admins with special powers (moderation and so on) who can then further delegate their powers. When power is withdrawn from an admin, it is also withdrawn from the closure of admins they've delegated it to.

Users would be given tools to allow them to specify which sites can access the information stored by which other sites. One such "built in" site might be a repository of personal information.

Another useful "built in" site would be messaging - a virtual inbox which sites can use to notify their users when events of interest happen.

Yet another useful "built in" would be a shopping cart application which lets applications act as online shops. So if you've written a site and you decide to add a feature which you want to charge users for, it's as simple as just ticking a box. Since payment is centralized with MetaSite, it would be possible to do micropayments (making a single credit card charge to buy "tokens" which can be spent on any MetaSite site).

So far, nothing too unusual - most of this has already been done. Where MetaSite really comes into its own is its rapid application development tools. If you want to create a blogging service, a photo hosting service, an Argulator clone, a wiki or whatever else, it's just a matter of defining what the "objects" that are stored in your system are and how users can interact with them. MetaSite would have various widgets built in, so if you define one field to be a date and say that users can edit this date, all the code for a calendar control widget would be automatically instantiated. All the defaults would be secure, reasonably attractive and AJAXified so that the site is nice to use. When developers do need to resort to code, it would be written in a sandboxed language (not just uploading raw PHP to the site, that would be a security nightmare). This language would have intrinsics which abstract out all the web-specific stuff and allow developers to just concentrate on their application domain.

This is the big difference between MetaSite and Facebook - if you want to create a Facebook application you need to have your own web server and you need to write the server-side code to run on it. MetaSite would have a very shallow learning curve - making a new site should be as easy as starting a blog.

MetaSite applications would be limited in the amount of storage space and CPU time they can use, so that they don't adversely affect other sites. One way MetaSite could make money would be to sell extra CPU time and storage space to sites that need it. MetaSite could also make it easy for site admins to add advertising to their sites to monetize them and/or pay for such extras. Another value-added feature might be the ability to run a MetaSite site with a custom domain name, or even on your own server.

Everything on MetaSite would be customizable - objects, data editing widgets, even entire applications. In fact, the usual way of creating one of these would be to take an existing thing that's similar to the thing that you're trying to make, and modify it. These modifications could then be available to authors of other sites as starting points for their own things (in fact, I think this should be the default and the ability to keep your customizations proprietary should be a "value added" feature).

Ideally it would be the first stop for any web startup trying to get their site up and running as quickly as possible, but if nothing else it would be a way to take advantage of all those "I need a facebook clone and will pay up to $100" projects on vWorker (nee RentaCoder).

I've gone so far as to register MetaSite.org but I haven't done anything with it yet.