Animal testing

August 6th, 2008

Live animal testing seems to be a particularly contentious issue, given the number of people who protest against it and the lengths they go to in attempting to prevent it.

The first question is, I suppose, whether we should be using live animals for testing in the first place. I think we should - a living animal is an extremely complicated thing, so we can't get the same information with a computer model. If it were possible to get the same results without using live animals, you can be sure that the scientists would, because live animals are messy and expensive (and dangerous if you factor in the increased risk of being victimized by animal rights terrorists). Testing on humans would be even more contentious. And not doing the testing at all would pretty much bring a stop to the advances in medicine and biology that we have been making for centuries, and which account for much of our life expectancy and quality of life today.

The second question is whether animal testing is unnecessarily cruel - i.e. whether more cruelty is involved than is actually necessary to perform the experiment. Certainly there are documented examples of such cruelty. I suppose this is inevitable - as long as there are some people who get a kick out of hurting animals, and no matter how good the screening processes and safeguards are, some of them will be drawn to careers in animal testing, and when they are, these things will happen. But I very much doubt that these isolated incidents are indicative of systemic failure of process - if they were, then the animal rights people would not need to distort the evidence so much to make it look that way.

The number of animals killed for testing purposes is tiny compared to the number of animals killed for food, and I suspect that the science animals are better looked after on average. Yet there don't seem to be quite so many protests about that. Perhaps this is because monkeys seem more human than cows and chickens. Or that farming, being more common, is seen as less mysterious and therefore less threatening. Or that extracting sustenance from animals is somehow nobler than extracting information from them.

Webifying old games

August 5th, 2008

One of these days I'd like to put some of my old DOS programs on the web to make them a bit more accessible. I used to think that I would do that in Java, but these days Flash seems to be the most popular way to do interactive thingies on the web, so maybe I should learn that. Or even just plain old HTML with Javascript - if someone can implement Lemmings in DHTML then I'm sure Cool Dude and probably some of the VGA demos could be implemented that way as well.

Digger adventure

August 4th, 2008

Lately I haven't really had any interest in improving Digger, but I still occasionally think it might be fun to write a sequel. "Digger 2" or "Digger's Adventure" would have a 2D continuously scrolling playfield, impenetrable walls, new monsters, bosses, new items to collect, more weapons, shields, warps, speed-boosters, and maybe the possibility to dig to the surface where it would be possible to jump.

Another 2D game

August 3rd, 2008

Another 2D game I'd like to write (which might even be the same game as that one) is a bouncing ball game.

The player would be in control of a ball which could be accelerated by means of the direction keys. So holding down right would give a velocity increase in the positive x direction proportional to the length of time the key was held down (there have been a few games that work like this, but in my experience in most non-simulation games the player only has one or two speeds and acceleration to/from these is instantaneous). Wind resistance would be implemented to avoid the possibility of the ball reaching unlimited speeds.

Once the player has got used to these controls the next complication is maze traversal. While you could in principle just manoeuvre around the maze slowly, it is much more fun to play at speeds close to the ball's terminal velocity. At these speeds, the ball's turning radius will be much larger than the scale of features in the maze, so the ball will hit the walls. When it does so, the obvious thing to happen is that it should bounce off (unless the wall is dangerous or special in some way).

Another possible complication is areas of the maze with gravity fields. The strength of the gravity could be smaller than the control acceleration (in which case it just changes the controls a bit) or could be greater (in which case entire directions become impossible to travel in). Gravity fields can be constant or central (or may have even more complicated configurations).

Then there could be places in the maze where the drag force is proportional to the velocity plus some constant field (i.e. windy places).

Some parts of the maze could be flooded with various different kinds of fluid, giving the opportunity to show off fun 2D fluid dynamics simulations.

Then of course there are the usual game elements - things to collect, power ups, space warps, spikes, enemies with various degrees of artificial intelligence and so on.

Emulation for fun and profit

August 2nd, 2008

There's much more that you can do with an emulator than just play old computer games. In fact, I think that the usefulness of emulators is seriously underrated. Here are some useful things I can think of doing with an emulator that has some appropriate extensibility hooks:

  • Debugging. A debugger with an integrated emulator might be able to do the following:
    • Debug a program without the program being able to tell that it is running in a debugger - handy for investigating malware like viruses and DRM.
    • Save (delta) states at each step to make it possible to undo steps or perform backwards-in-time debugging.
    • Debug multi-threaded programs deterministically by simulating multiple threads on a single thread and allowing the user to decide when to context switch.
  • Reverse engineering. The problem of finding the actual code in a binary is undecidable in general, but if you can find most of the important code by actually running it you can get most of the way there.
  • Static analysis. Finding bugs in code after it's been compiled by running it and (as it runs) checking things that would be difficult to check at compile time (code invariants). For example, assertions might not be compiled into an optimized binary but could be provided as metadata that could be understood by the analyzer. This would be a great help for tracking down those tricky bugs that disappear when you switch to debug binaries.
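As a toy illustration of the backwards-in-time idea, here is a sketch (all names and the one-instruction machine are invented) of an "emulator" that records its state before each step so execution can be undone. A real emulator would store deltas - just the bytes each step changed - rather than full copies:

```python
class TimeTravelCPU:
    """Toy register machine that snapshots its state before each step."""

    def __init__(self, program):
        self.program = program            # list of (opcode, argument) pairs
        self.regs = {"a": 0, "pc": 0}
        self.history = []                 # saved state, one entry per step

    def step(self):
        op, arg = self.program[self.regs["pc"]]
        self.history.append(dict(self.regs))   # full copy; a delta in real life
        if op == "add":
            self.regs["a"] += arg
        self.regs["pc"] += 1

    def step_back(self):
        """Undo the most recent step by restoring the saved state."""
        self.regs = self.history.pop()
```

The same recording trick gives determinism for free: since the emulator decides when every event happens, replaying the history reproduces the run exactly.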

Keep your laws off my internet

August 1st, 2008

Do there need to be laws for the internet like there are laws in real life? The two realms are very different, so attempts to directly translate real-life rules to the internet are bound to fail.

For example, most laws in real life are to protect A from bad things that B might want to do to him (such as murder or theft). But all such forms of law are unnecessary on the internet because all that B can do to A is send packets to him. A is under no obligation to do anything with these packets - if he doesn't like them he can just ignore them.

So most of the laws that apply on the internet are the more nebulous sort that protect some third party C from B sending packets to A (for example copyright, C being the copyright holder) - censorship, essentially. But since C usually can't even tell when A and B are communicating, such laws are rather difficult to enforce and there is something rather troubling about having unenforceable laws on the books.

Another sort of law is that which protects commons (like a village square) from having houses built on it, or having rubbish dumped on it or otherwise being made unusable for its intended purpose.

Commons as such don't exist on the internet in quite the same way - for every computer on the internet, somebody has to pay for the electricity to keep it running if nothing else. There are "intellectual commons" - protocols and standards which we use to communicate. They can't really be stolen but they can be polluted in some sense - email is quite heavily polluted by spam now, for example.

Combating internet pollution does appear on the face of it to be something that requires laws (and indeed laws have been passed that try to do something about spam - the state of my junk mail folder suggests they haven't been very effective, though). But there is a technical way to deal with such things, and that is through the use of trust networks. The Sovereign Computing guys have some interesting ideas about trust networks. It does seem to me that a trust network layer built on top of the internet, which could provide services to applications like email, would be a very useful thing.

Some social network sites are quite popular but these tend to be walled gardens, lacking the distributedness that would be needed to create real trust networks. They also all seem to have little if any concept of transitivity of trust, meaning that they are little more than glorified lists of friends as far as their users are concerned.
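I imagine transitive trust working something like the following sketch. The scoring scheme here - taking the best product of direct-trust weights along any short path - is just one plausible choice of mine, not something from the Sovereign Computing material:

```python
def transitive_trust(edges, src, dst, max_hops=4):
    """Estimate trust in a stranger via chains of people we already trust.

    edges maps a person to a list of (person, direct_trust) pairs, with
    direct_trust between 0 and 1. The result is the largest product of
    weights along any path of at most max_hops edges from src to dst.
    """
    best = 0.0

    def walk(node, weight, seen, hops):
        nonlocal best
        if node == dst:
            best = max(best, weight)
            return
        if hops == 0:
            return
        for neighbour, w in edges.get(node, []):
            if neighbour not in seen:
                walk(neighbour, weight * w, seen | {neighbour}, hops - 1)

    walk(src, 1.0, {src}, max_hops)
    return best
```

A mail client sitting on top of such a layer could then accept mail from anyone whose score exceeds some threshold, rather than relying on content filters.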

GPL v3 loophole

July 31st, 2008

For the most part I like the v3 GPL - it makes some things clearer and closes a few loopholes that allow people to violate the spirit of GPLv2 while sticking to the letter of it.

The one part I don't like is the anti-Tivoization clause. Not because I think it's draconian, but because I think that it is ineffective. Suppose you are Tivo and you want your users to be able to run only your signed version of some particular piece of GPLv3 software on your hardware. All you have to do is split into two companies (Tivo Hardware and Tivo Software).

Tivo Hardware doesn't distribute any GPLv3 code at all - it just ships boxes which download software from some internet site, check that it is signed correctly and then install it. Tivo Hardware is therefore not bound by GPLv3. Tivo Software just modifies GPLv3 software the way they want to, signs it with some private key and then uploads it to some internet site (the same one the Tivo boxes happen to download from). Tivo Software are not "conveying an object code with or specifically for use in a User Product", they're just distributing signed code as many (if not all) GNU/Linux distributions do. That it happens to be the same key that Tivo Hardware boxes accept is irrelevant - that's not Tivo Software's fault, Tivo Hardware could do that with anyone's public key.

If there was a way that the Tivoization clause could work, someone could really mess with the FSF by releasing a piece of hardware that didn't include any GNU software but would only run GNU software signed by the FSF.

Given that this clause is so easily circumvented, it might as well be removed to simplify the license. While I appreciate what the FSF are trying to do with this clause, I don't think there is any way to make it work reliably without being overly draconian. The "distribution to business" compromise is also weird and could possibly be thought of as a "code smell".

The other minor gripe I have with GPLv3 is that it is much longer and more difficult to understand than GPLv2. I guess that's an inevitable (and unfortunate) consequence of making it more lawyer-proof, though.

Bloggers' remorse

July 30th, 2008

Sometimes I'll be really proud of a blog post after having written it, but then when it comes time to actually post it I'll cringe a bit to remember it (especially if there's anything at all controversial in it). The feeling usually goes away (at least mostly) when I re-read the post and it isn't as confrontational or embarrassing as I remembered, but sometimes I just have to grit my teeth and post anyway. There have been times when I have just pulled posts altogether, though - either because I no longer agree with what I wrote or because I want to find a better form to put those thoughts in. Perhaps someday I'll come back to those posts and see if they can be resurrected in some form or other.

The story of the sprite, the string and the castle

July 29th, 2008

Many years ago I wrote an ASCII/ANSI/text-mode design program called SCRDES (for screen designer). It was probably the most sophisticated thing I had ever written at the time - it had many features (circles, lines, rectangles etc.) and little text mode windows that would pop up over your drawing to do things like select different tools and colours and save or load your drawing. I mostly used it for designing levels for my game "Cool Dude".

A bug in this program spawned an interesting little puzzle. The bug appeared right after I had added the user friendly interface with the text mode windows. These worked by saving the part of the screen behind the window and restoring it to make the window disappear again. However, the first time I tried to remove a window that was on top of a picture, I got some strange garbage instead of the picture I wanted. For example, the image:

would be turned into this:

Each time I created and destroyed the window, the garbage changed. I quickly realized that the bug was that I was saving the image into a string one way (scanning horizontally left to right and then top to bottom) and restoring it another way (scanning vertically top to bottom and then left to right). So I was putting back the same data, just in the wrong order.
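The buggy restore is easy to reproduce. Here is a sketch (treating the screen as a flat list of cells) that writes row-major data back in column-major order:

```python
def scramble(cells, width, height):
    """Replay the bug: data saved row-major, restored column-major.

    cells is the screen as a flat list, row by row. The restore walks
    the screen top-to-bottom then left-to-right while consuming the
    saved data sequentially - the same data, put back in the wrong order.
    """
    out = [None] * (width * height)
    i = 0
    for x in range(width):          # restore scans columns...
        for y in range(height):
            out[y * width + x] = cells[i]   # ...but the data is row-major
            i += 1
    return out
```

Note that the first and last cells never move, which is why the top-left and bottom-right corners stay fixed under iteration.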

The patterns in the garbage were interesting, so I wrote another program, called SPRITE (small bitmaps are often called sprites) which took an image and transformed it in this way repeatedly and which could be made to go forwards or backwards. I quickly discovered the following things:

  • After a certain number of iterations in one direction, the picture becomes the same as the original. This number depends on the dimensions of the picture. A square picture, for example 8x8 or 25x25, would be restored to the original after only 2 iterations but anything more complicated takes many more. The 80x25 castle image above takes 999 iterations to return to the original.
  • Some of the intermediate images consisted of smaller copies of the original arranged in a rectangular grid:

    Sometimes these smaller images would be upside-down, sideways, or even diagonal (straddling the edges in a toroidal fashion):

  • Sometimes, instead of being shrunk, the image appears to be enlarged and interleaved with the other parts of itself:

After thinking about this for a while one can sort of see why it happens. Consider what happens to the point (1,0). After some number of iterations it will end up at the point (x,y) and the point (2,0) will end up at (2x,2y). The transformation is linear in some sense (modulo the edges) so you get affine transformations of the original image.

To figure out how many iterations it will take to return an x*y image back to its original self, I think one can just consider the trajectory of the point which starts off at (1,0) under the repeated applications of the transformation (a,b)->(⌊(bx+a)/y⌋,(bx+a) mod y) or, equivalently, the point 1 under the transformation c->(c mod y)*x+⌊c/y⌋ - once that ends up back where it started the rest will fall into place by linearity. I think the answer is the multiplicative order of x modulo xy-1, based on some tinkering with the OEIS, but I'm not sure how to go about proving it.
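The conjecture is at least easy to check numerically. Here is a sketch; one helpful observation is that since xy ≡ 1 (mod xy-1), the map c->(c mod y)*x+⌊c/y⌋ acts like multiplication by x modulo xy-1, which is presumably why the multiplicative order shows up:

```python
def cycle_length(x, y):
    """Iterations of c -> (c mod y)*x + c//y before c = 1 recurs."""
    c, n = 1, 0
    while True:
        c = (c % y) * x + c // y
        n += 1
        if c == 1:
            return n

def mult_order(x, m):
    """Multiplicative order of x modulo m (assumes gcd(x, m) = 1)."""
    r, n = x % m, 1
    while r != 1:
        r = r * x % m
        n += 1
    return n
```

For the 80x25 castle both give 999, and for any square x by x grid (x ≥ 2) the answer is 2, since x² ≡ 1 (mod x²-1).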

The trajectory can contain every point in the grid (except the top left and bottom right, which remain fixed) but does not necessarily do so. Some grids (for example 78x66) do but some only contain a fraction (<0.5) of the points. Of the 1998 remaining points in the 80x25 grid, for example, half of them are in the trajectory, the white ones in this image:

As you can see, this image is symmetrical under the transformation of rotating by 180° and switching black with white.

The following image was made by looking at all of the possible grids from 2x2 to 65x65 inclusive, calculating the number of iterations required in each case, dividing by the number of points in the grid minus 2 and converting the resulting fraction into a shade of grey (white=1, black=0). So the white points correspond to grids that take the maximum number of iterations (xy-2), the next most common shade (mid-grey) corresponds to grids that take (xy-2)/2 iterations and so on.

I ran the program for grids up to 2001x2002 but the resulting image looks pretty homogeneous. White pixels seem to appear more often on lines corresponding to (some) prime numbers, but don't seem to get as rare as quickly as prime numbers do. Rational numbers <0.5 with small denominators seem to show up more often.

Another open question is, given a x*y grid and a number of transformations n, how many castles will there be in the resulting image and what angles will they be at? (Or, dually, what fraction of a castle is magnified up to the entire sprite?).

Edit 14th July 2013:

This seems to be closely related (although not quite identical) to Arnold's cat map.

Pot or not-pot

July 28th, 2008

This post is based on a maths lesson I had as a very young child. It was in my first primary school so I couldn't have been more than 9. I had some fantastic maths lessons at that school - one fun one that I remember was my first introduction to the Fibonacci series by a population of breeding rabbits. Another involved colouring squares of the multiplication table according to whether each number could be divided by some other number and seeing what patterns emerged.

The idea behind this lesson was this. You have a (bizarre) snooker table which is a rectangle whose width and length are integers, and whose only pockets are in the corners. You start with the ball in one corner and hit it at a 45 degree angle from one of the sides, very hard. It bounces off the various edges some number of times and eventually ends up in one of the other 3 corner pockets (it can't end up in the pocket where it started, because that would mean retracing its steps, which would require bouncing off some edge perpendicular to its path - but its path is always at a 45 degree angle to the only edges).

If it ends up in the pocket opposite where it started, that is a "pot" and if it ends up in one of the other two corners, that is a "not pot". Can you find the rule for determining whether an x by y table results in a "pot" or a "not pot"? What fraction of tables result in a "pot"? As I recall we spent a happy time drawing rectangles and tracing out the patterns made by the ball. We noticed that square tables (plus tables whose width was an odd multiple of their length) were always "pot"s and tables whose width was an even multiple of their length were "not pot"s. I think we noticed that given any integer n, an x by y table had the same result as an nx by ny table. We may even have statistically determined that about 1/3 of the tables were "pot"s, but as I recall we didn't find the general pattern.

Today (at least 20 years later!) I finally got around to doing some more investigation on this. I wrote a program to perform the algorithm for different sized tables and plotted the results on a graph (red is "pot" and the green and blue are the other two corners):
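A minimal version of such a program might look like this sketch, tracing the ball one diagonal unit at a time and reflecting it off the edges:

```python
def pocket(x, y):
    """Which corner pocket an x by y table sends the ball to.

    The ball starts in corner (0, 0) and moves at 45 degrees;
    a "pot" means it ends up in the opposite corner, (x, y).
    """
    px, py, dx, dy = 0, 0, 1, 1
    while True:
        px += dx
        py += dy
        if px in (0, x) and py in (0, y):
            return (px, py)        # landed in a corner pocket
        if px in (0, x):
            dx = -dx               # bounce off a vertical edge
        if py in (0, y):
            dy = -dy               # bounce off a horizontal edge
```

This reproduces the childhood observations: square tables and odd multiples give pots, even multiples don't.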

There is an interesting fractal structure here. If you have the picture for tables up to size 2^n by 2^n you can get the picture for tables up to 2^(n+1) by 2^(n+1) by repeating the 2^n by 2^n picture 4 times and then fixing up the 2^n by 2^(n+1) and 2^(n+1) by 2^n tables. From the recurrence relation thus found one can prove that given a random table, the three pockets are equally likely.

The patterns here make me suspect that there might be some simple algorithm for finding the pocket based on the binary expansions of the table dimensions. I'm not sure what it is though.

One simple (non-binary) method for finding the pocket given the table dimensions is described here.