Plug-ins should not be in the same address space as the main process

June 27th, 2008

After some thought I have come to the conclusion that a software architecture which allows third-party code to run in the same process as the main program is a bad idea.

The first problem with such an architecture is reliability - how do you protect yourself against bugs in the third-party code? The short answer is that you can't - the plug-in has the ability to stomp all over your process's invariants. The best you can do is terminate the process when such a situation is detected.

The second problem is that a compiler can't reason about the third-party code (it might not even exist at compile time). This means that there are all kinds of checks and optimizations that cannot be performed (like some of the stack tricks I mentioned a while ago).

The third problem is that one cannot tell what code the plug-in will rely on - if it sticks to documented interfaces that's okay, but (either deliberately or accidentally) a plug-in might rely on undocumented behavior which causes the plug-in to break if the program is updated. This forces the program to have ugly kludges to keep old plug-ins working.

If plug-ins are run as separate processes communicating with the main program via well-defined IPC protocols, all these problems are solved. The down-side is that these protocols are likely to be a little more difficult to code against and (depending on the efficiency of the operating system's IPC implementation) may also be significantly slower. The speed problem seems unlikely to be insurmountable though - current OSes can play back HD video which involves the uncompressed video stream crossing the user/kernel boundary - few applications are likely to need IPC bandwidth greater than that.
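
As a concrete (if entirely hypothetical) illustration of what I mean, here is a minimal Python sketch of a host talking to a plug-in over pipes. The plug-in executable and its one-JSON-message-per-line protocol are made up - the point is just that the plug-in lives in its own process, so a crash or misbehaviour there can't corrupt the host's memory:

    import json
    import subprocess

    class PluginHost:
        """Run a plug-in as a child process and talk to it over pipes."""

        def __init__(self, path):
            # The plug-in is just an executable that reads JSON requests on
            # stdin and writes JSON responses on stdout, one per line.
            self.proc = subprocess.Popen(
                [path], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

        def call(self, method, **params):
            request = json.dumps({"method": method, "params": params})
            self.proc.stdin.write(request + "\n")
            self.proc.stdin.flush()
            reply = self.proc.stdout.readline()
            if not reply:
                # The plug-in crashed or exited - the host survives and can
                # restart it or carry on without it.
                raise RuntimeError("plug-in exited unexpectedly")
            return json.loads(reply)

    # host = PluginHost("./some_plugin")
    # print(host.call("transform", text="hello"))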

Fine structure constant update

June 26th, 2008

Many years ago I posted this on sci.physics. It turns out that the value of the Inverse Fine Structure Constant (a dimensionless parameter which can be experimentally measured as about 137.036 but for which no theory of physics yet predicts a value) is remarkably close to (alpha^2)(pi^2)(pi^pi-1)/16 where alpha is the second Feigenbaum constant, about 2.502907875096. This formula gives a value for the IFSC of 137.0359996810.

After posting that, I got a message from James Gilson pointing out his work on the same subject - he has a different formula for the IFSC, pi/(29*cos(pi/137)*tan(pi/(137*29))), which is not quite so pretty but does have the advantage of having some geometrical justification (which I still don't completely understand). Gilson's formula gives a value for the IFSC as 137.0359997867.
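
Both formulae are easy to check numerically - here's a little Python sketch that evaluates them (the constants are just the values quoted above):

    import math

    alpha = 2.502907875096   # second Feigenbaum constant

    mine   = alpha**2 * math.pi**2 * (math.pi**math.pi - 1) / 16
    gilson = math.pi / (29 * math.cos(math.pi / 137) * math.tan(math.pi / (137 * 29)))

    print(mine)     # approximately 137.0359996810
    print(gilson)   # approximately 137.0359997867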

Back in 2001 the most accurate measurement of the IFSC (CODATA 1999) gave a value of 137.03599976(50) (i.e. there is a 68% chance that the true value is between 137.03599926 and 137.03600026). Both formulae give answers in this range.

I thought I would revisit this briefly and see if the latest measurements were able to rule out one or both of these formulae. Work by G. Gabrielse et al. in 2006 gives the IFSC as 137.035999068(96), i.e. there is a 68% chance that the true value is between 137.035998972 and 137.035999164. This appears to rule out both formulae. An earlier version of the 2006 Harvard work (which was compatible with both formulae) was superseded by an erratum. I admit this post is a bit anticlimactic, but when I started writing it I thought that the latest measurements ruled out Gilson's formula but not mine.

I want to run as root

June 25th, 2008

With all the hype about Vista's User Account Control functionality, an important point seems to have gone largely unsaid. UAC protects the integrity of the operating system, but does nothing for the user's files. If I run some bad software in a non-administrator account, it can still read my financial documents, delete my photos etc. These things are much more difficult to fix than the problems UAC does prevent (which can be fixed by reinstalling the operating system).

The trend towards more Unix-like operating system structure annoys me somewhat. I want to run as root/admin all the time. If I ask it to modify some critical system files, the operating system shouldn't second-guess me - it should just do what I asked. I have been running Windows systems as administrator for years and it has never been a problem for me in practice. I don't ever want to have to input my password for machines that I own and am sitting in front of (remote access is different).

I think a better security model for a single-user machine would be not to authorize individual commands but to authorize programs. When a piece of software is downloaded from the internet and run, the OS calls it makes should be sandboxed. If it attempts to modify the system directories, the OS should fake it so that the system directories are not actually modified but it looks to that application like they have been. Private user files should simply not appear to be present at all (invisible, rather than merely read-locked).
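
To make that concrete, here is a toy sketch of the kind of redirection I have in mind. Every path and policy choice in it is invented purely for illustration - a real implementation would live in the OS's file-system layer, not inside the application:

    import os

    SYSTEM_DIRS  = ["/usr", "/etc"]          # writes here get silently diverted
    PRIVATE_DIRS = ["/home/me/private"]      # these should not appear to exist
    SHADOW_ROOT  = "/var/sandbox/someapp"    # per-application shadow copy

    def resolve(path, writing=False):
        """Map the path the application asked for onto the path it actually gets."""
        path = os.path.abspath(path)
        for d in PRIVATE_DIRS:
            if path.startswith(d):
                raise FileNotFoundError(path)    # pretend it simply isn't there
        for d in SYSTEM_DIRS:
            if path.startswith(d):
                shadow = SHADOW_ROOT + path
                if writing:
                    os.makedirs(os.path.dirname(shadow), exist_ok=True)
                    return shadow                # divert the "system" write
                if os.path.exists(shadow):
                    return shadow                # show the earlier fake modification
                return path
        return path

    def sandboxed_open(path, mode="r"):
        # What an intercepted open() might look like.
        return open(resolve(path, writing=any(c in mode for c in "wa+")), mode)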

Vista is actually capable of this to some extent but it is only used as a last resort, to enable legacy programs to run. Applications released since Vista tend to have manifests which allow them to fail instead of being lied to - I don't think a program should even have the ability to tell that it is being lied to. If I want to lie to my media player and tell it that the sound output is going to a speaker when in fact it is going to a file, I should be able to do that. This is similar to (but not quite the same as) a chroot jail in Unix, though chroot is not intended to be used as a sandbox for untrusted applications.

I suppose that, having said all that, what Microsoft have done in Vista does make sense for them - in the long run it will probably reduce support calls and promote development of software that requires only the minimum of privileges to run. I just wish I could turn it off completely without the annoying side-effects.

Map showing river basins

June 24th, 2008

I'd be interested to see a map of the UK where each point was coloured according to where the water that falls on the land corresponding to that point reaches the ocean.

Here's an example of the kind of thing I'm thinking of, except that this isn't very detailed - while it looks right for the Thames basin, lots of smaller rivers get bunched together (for example in the South-West).

This is a bit more like it but for Africa. Note that there are some regions which do not appear to be adjacent to any coast - this means that all the water from these areas ends up in interior lakes and ultimately evaporates again. These are endorheic basins.
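
Generating such a map doesn't look too hard in principle. Here's a rough Python sketch of the idea - the function and its inputs are my own invention, assuming you have a gridded elevation model plus an integer grid giving each sea cell an id. It just follows the steepest descent from each cell and ignores lakes, flat areas and real hydrology:

    import numpy as np

    def basin_map(elevation, sea_id):
        """Colour each cell by the sea its water reaches (a toy model).

        elevation : 2-D array of heights.
        sea_id    : 2-D int array, >= 0 on sea cells (one id per sea), -1 on land.
        Returns the id of the sea each cell drains to, or -1 for cells whose
        steepest-descent path ends in an inland pit (an endorheic basin).
        """
        rows, cols = elevation.shape
        result = np.full((rows, cols), -2)       # -2 means "not yet computed"
        result[sea_id >= 0] = sea_id[sea_id >= 0]

        def steepest_neighbour(r, c):
            best, best_h = None, elevation[r, c]
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and elevation[rr, cc] < best_h:
                        best, best_h = (rr, cc), elevation[rr, cc]
            return best                          # None if (r, c) is a pit

        for r in range(rows):
            for c in range(cols):
                path, cur = [], (r, c)
                while result[cur] == -2:         # walk downhill to known ground
                    path.append(cur)
                    nxt = steepest_neighbour(*cur)
                    if nxt is None:
                        result[cur] = -1         # inland pit: endorheic basin
                        break
                    cur = nxt
                for cell in path:
                    result[cell] = result[cur]
        return result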

Shepard tones

June 23rd, 2008

If you arrange the musical notes C, C#, D, D#, E, F, F#, G, G#, A, A#, B around a circle (in that order), like this:

The resulting structure has some interesting properties. Firstly, each interval becomes a directed chord (in the geometric sense) on this circle, and transposition is just rotation.

The trouble is that because of octaves, this picture doesn't tell the whole story. If you go all the way around the circle you'll find yourself an octave higher (or lower) than when you started. Not all intervals that look the same on the circle will sound alike, as some of them will include an extra octave.

Unless you use Shepard tones.

Shepard tones have frequency components of the same pitch class in all octaves (tapering off at low and high frequencies according to a bell curve in log frequency space), like this:

They sound kind of like notes on a church organ.

The Shepard tones sound like musical notes but because the peak of the bell curve is always the same frequency, once you have progressed through a full scale you're back where you started instead of being up an octave. This can be used to create an auditory illusion of a pitch which sounds like it is continually getting higher (or lower), but is in fact looping.
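
For anyone who wants to hear this, here's a small Python/NumPy sketch of how such a tone can be synthesised. The parameter names and choices (centre frequency, number of octaves, width of the bell curve) are just mine, not any standard:

    import numpy as np

    SAMPLE_RATE = 44100

    def shepard_tone(pitch_class, duration=1.0, centre=261.63, octaves=4, sigma=1.0):
        # pitch_class: 0-11 (0 = C); centre: frequency (Hz) at the peak of the
        # bell curve; sigma: width of the bell curve in octaves.
        t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
        base = centre * 2.0 ** (pitch_class / 12.0)
        tone = np.zeros_like(t)
        for k in range(-octaves, octaves + 1):
            f = base * 2.0 ** k                                     # same pitch class, octave k
            w = np.exp(-0.5 * (np.log2(f / centre) / sigma) ** 2)   # Gaussian in log-frequency space
            tone += w * np.sin(2 * np.pi * f * t)
        return tone / np.max(np.abs(tone))

    # A rising chromatic scale that ends up back where it started:
    # scale = np.concatenate([shepard_tone(n, 0.25) for n in range(12)])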

It would be interesting to compose some music using Shepard tones. Another interesting feature of them is that all chords are identical to their own inversions. So in a number of ways Shepard tones drastically simplify music theory.

Another fun thing to do with Shepard tones is to leave the pitch class alone and vary the frequency that is at the peak of the bell curve. That way you can make a sound which continuously increases in frequency but which never seems to change pitch.

Widdershins of hue

June 22nd, 2008

Here is a concept that I feel there ought to be words for, but there don't seem to be any.

Suppose we arrange the spectrum of colours into the familiar colour wheel:

What is the word that corresponds to "clockwise" on this diagram (and "anticlockwise" on the mirror image of this diagram)? I.e. the word that means "more blue than red", "more green than blue" and "more red than green" simultaneously? What is the word for the opposite hue angle direction?
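
Whatever the words end up being, the concept itself is easy to pin down numerically - hue in HSV space is an angle, so the two directions are just the sign of the (shorter-arc) hue difference. A quick Python illustration (the function name is my own):

    import colorsys

    def hue_direction(rgb_from, rgb_to):
        # +1 for one way round the wheel (red towards green towards blue),
        # -1 for the other, 0 if the hues are equal or exactly opposite.
        # RGB components are in [0, 1]; hue comes back as a fraction of a turn.
        h1 = colorsys.rgb_to_hsv(*rgb_from)[0]
        h2 = colorsys.rgb_to_hsv(*rgb_to)[0]
        diff = (h2 - h1) % 1.0
        if diff == 0 or diff == 0.5:
            return 0
        return 1 if diff < 0.5 else -1

    # hue_direction((1, 0, 0), (0, 1, 0)) == +1   (red towards green)
    # hue_direction((1, 0, 0), (0, 0, 1)) == -1   (red towards blue)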

Bonus points for the best neologisms for these equal and opposite concepts.

On "Taking Science on Faith"

June 21st, 2008

This article caused quite a stir and many responses when it appeared, but none of the responses I read spotted the gaping hole in Davies' reasoning.

No faith is necessary in Physics - just measurement. The game is this - we come up with a theory that correctly predicts the results of all experiments done so far, from that we determine an experiment that we haven't done yet that would confirm or disprove this theory, and we perform the experiment. If we disprove it, then we come up with another theory and repeat. If we confirm it we try to think of another experiment. Once we have a single, consistent theory that explains all experimental results, we're done (okay, there's still the small matter of determining all the consequences of that theory, but fundamental physics is done).

If the laws of physics are found to "vary from place to place on a mega-cosmic scale" that doesn't make the game impossible to play, it's just another experimental result to be slotted into the next generation of theories. We'd discover the "meta rules" that governed how the rules of physics changed from place to place. And maybe there are meta-meta-rules that govern how the meta-rules change and so on.

This tower might never end, of course - a theory describing all experimental observations might take an infinite amount of information to describe. That's not a problem - it just means that the game never ends. Historically, new theories of physics have begotten new technologies, so as we discover more of this tower it stands to reason that our technologies would become more and more powerful.

The alternative is that the tower does end - there is a final theory of physics. This alternative seems more likely to me - historically our theories of physics have actually become simpler (for the amount of stuff they explain) over time. Together just two theories (General Relativity and the Standard Model of particle physics) can account for the results of all experiments we have been able to do so far. These theories are simple enough that they can both be learnt over the course of a four-year undergraduate university degree.

I suspect that the theory that finally unifies these will be simpler still if you know how to look at it in the right way. In fact, I think that it will be so elegant and simple to state that it will be unquestionably correct - the reaction of a physicist from today on seeing and understanding it would be "Of course! How could it possibly be any other way!"

The only "faith" that physicists need to have is the faith that we can gain some knowledge about things by observing them (performing experiments) - which seems to me to be more of a tautology than a matter of faith.

Davies suggests an alternative, which is "to regard the laws of physics and the universe they govern as part and parcel of a unitary system, and to be incorporated together within a common explanatory scheme. In other words, the laws should have an explanation from within the universe and not involve appealing to an external agency." I'm not even sure what that means and I don't think Davies does either. I can't bring myself to believe that there is some box in a dusty attic in some corner of a distant galaxy which somehow "explains the laws of physics".

Basilisk

June 20th, 2008

This is a very scary but brilliant science fiction short story. What a concept - an image that kills you if you remember it.

Sometime I'd like to write a story exploring and extending this concept, following a top-secret research group as they carefully reverse-engineer the Parrot to determine how to make images that can kill non-English speakers, and images which act more quickly. Their discoveries show the original to be an extremely crude "sledgehammer" for the human visual cortex, and they develop much cleverer and subtler images. Not all of them are deadly - some images can make people do things (immediately or in response to some later stimulus) before erasing themselves.

Yet another family of images creates particular emotional responses in their viewers - fear, joy, sorrow, comfort. One image in particular is programmed to make the viewer believe the image to be beautiful, and indeed test subjects report the image to be the most beautiful thing that they have ever seen - almost painful in its beauty. No-one who sees it ever feels quite the same way again, though most report the experience as positive - their spirits are lifted to know that such beauty is possible in the world. But if it is science that creates this effect, is it art?

Hakmem 2

June 19th, 2008

I found the original hakmem to be a terrific read when I discovered it years ago - some really interesting bits of mathematics, puzzles, short programs, algorithms and hardware hacks. There is material enough in there to inspire several lifetimes worth of research.

But the world has come a long way in 36 years, and I think we're overdue for a sequel. I'm not the only person who thinks so.

Things that should be in Hakmem 2:

  • Quake's fast inverse square root (see the sketch after this list).
  • Simon Tatham's Infinity Machine.
  • Puzzle: Solve Tetris on a grid of width 4. Tetris can only be lost, never won, so solving it means finding an algorithm that allows one to play indefinitely no matter what pieces are given. Btw, Tetris on a grid of width 4 is a fun game - though the strategies are very different from those of normal Tetris.
  • Crash course in geometric algebra.
  • Hardware: The Joule Thief.
  • Use of space filling curves to make 2D maps of 1D spaces (or n-D binary spaces).
  • Techniques for drawing fractals (IFS, L-systems, escape time)
  • Solutions to Pentominoes on various grids
  • How to draw the INT curve from Gödel, Escher, Bach (there are some clues here).
  • Top ten Perl one-liners.
  • Software Transactional Memory
  • Parsing using Parsing Expression Grammars
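
For instance, here's the fast inverse square root mentioned in the first item, transliterated into Python with the usual magic constant (the bit-twiddling is normally done in C, of course):

    import struct

    def fast_inverse_sqrt(x):
        # Reinterpret the bits of a 32-bit float as an integer...
        i = struct.unpack('<I', struct.pack('<f', x))[0]
        # ...apply the famous magic-constant approximation...
        i = 0x5f3759df - (i >> 1)
        y = struct.unpack('<f', struct.pack('<I', i))[0]
        # ...and polish with one Newton-Raphson step.
        return y * (1.5 - 0.5 * x * y * y)

    # fast_inverse_sqrt(4.0) is approximately 0.5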

As well as being a compendium of handy techniques, a hakmem can be thought of as a zeitgeist - a gauge of what hackers are (or should be) thinking about at the time it is written.

Come to think of it, a lot of my blog entries would make good Hakmem 2 entries. Hmm...

Sugar rotation

June 18th, 2008

A while ago I was absent-mindedly twiddling a cylindrical jar of sugar that I used to keep at work (for tea purposes) and I noticed that when I rolled the jar across my desk it would stop quite quickly due to the friction of the sugar against itself.

Then I got to wondering what would happen if you put the jar of sugar on some sort of spit or lathe-like device that would keep it turning over (not so fast that the sugar would spin around with the jar), and if the jar were a good insulator. The energy put in as rotation would build up in the sugar, heating it up and (presumably) eventually turning it into caramel. Weird.
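
Just for fun, a back-of-envelope estimate of how long that might take - every number below is a guess on my part (the torque especially), so treat it as an order-of-magnitude sketch at best:

    import math

    torque  = 0.05                # N*m needed to keep the sugar tumbling (a guess)
    omega   = 2 * math.pi * 1.0   # one revolution per second
    power   = torque * omega      # roughly 0.3 W dissipated as heat in the sugar

    mass    = 1.0                 # kg of sugar in the jar
    c       = 1250.0              # J/(kg*K), approximate specific heat of sucrose
    delta_t = 160.0 - 20.0        # room temperature up to ~160 C, where caramelisation starts

    days = mass * c * delta_t / power / 86400
    print(days)                   # about a week, assuming perfect insulation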