Reslicing

September 5th, 2008

A movie can be thought of as a three-dimensional array of pixels - horizontal, vertical and time. If you have such a three-dimensional array of pixels, there are three different ways of turning it into a movie - three different dimensions that you can pick as "time". For movies of real things this probably isn't a very interesting thing to do, but for movies of mathematical objects like the one I made yesterday, there may be mathematical insights to be gained from "reslicing" a movie.
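
The reslicing itself is just an axis swap on the pixel array. A minimal sketch in Python with NumPy (the array shape and names are illustrative, not from the actual rendering code):

```python
import numpy as np

# A movie as a 3D pixel array indexed (time, y, x):
# here, 10 frames of 480x640 greyscale (values are placeholders).
movie = np.arange(10 * 480 * 640, dtype=np.uint32).reshape(10, 480, 640)

# Reslice so the original x axis becomes time: there are now as many
# frames as the old width, and each frame is a (y, old-time) slice.
resliced_x = np.swapaxes(movie, 0, 2)   # shape (640, 480, 10)

# Reslice so the original y axis becomes time instead.
resliced_y = np.swapaxes(movie, 0, 1)   # shape (480, 10, 640)

# A pixel keeps its value, just reindexed: (t, y, x) -> (x, y, t).
assert movie[3, 100, 200] == resliced_x[200, 100, 3]
```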

So, here is yesterday's movie with the x and time coordinates swapped:

Higher resolution version (8Mb 640x480 DivX).

And here it is with the y and time coordinates swapped (and also rotated to landscape format):

Higher resolution version (8Mb 640x480 DivX).

I made a movie

September 4th, 2008

Higher quality 10Mb 640x480 DivX version here.

This is a generalization of the second picture from yesterday's post, varying the coefficient of i from e-4.5 to e4.5. It also demonstrates how this picture is related to the third picture from Monday's post.

1.35 trillion points were calculated to make this movie, taking 4 CPUs with 6 cores most of a day (it was going to take nearly 4 days, but I decided to use most of the other computers in the house as a render farm).

Two more

September 3rd, 2008

Square roots

Generated by a program similar to the one used for the last two pictures on Monday's post, but the functions are +\sqrt{z}, -\sqrt{z} and 1+z. Don't plot a point if the last operation was 1+z, and colour points according to the operation used 5 iterations ago.
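
The iteration described above can be sketched as a random walk over the three functions; this is only an illustrative Monte Carlo version (operation labels, point counts and the history bookkeeping are my own choices, not the original program's):

```python
import cmath
import random

# Repeatedly apply one of +sqrt(z), -sqrt(z) and 1+z at random,
# remember the operations applied, and only record a point when the
# most recent operation was a square root. The colour key for each
# recorded point is the operation applied 5 iterations earlier.
random.seed(0)
z = complex(1, 0)
history = []          # operations applied so far, most recent last
points = []           # (point, colour key) pairs

for _ in range(10000):
    op = random.choice(['+sqrt', '-sqrt', '1+z'])
    if op == '+sqrt':
        z = cmath.sqrt(z)
    elif op == '-sqrt':
        z = -cmath.sqrt(z)
    else:
        z = 1 + z
    history.append(op)
    if op != '1+z' and len(history) > 5:
        points.append((z, history[-6]))   # op used 5 iterations ago
```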

Golden ratio

Similar to the third image on Monday's post, but when we multiply by i we also multiply by the golden ratio \displaystyle \frac{1}{2}(1+\sqrt{5}) = 1.618... Most of the rectangles you see in this image are golden rectangles, which are supposedly the most aesthetically pleasing.

Unified theory story part II

September 2nd, 2008

Read part I first, if you haven't already.

For as long as anybody could remember, there were two competing approaches to attempting to find a theory of everything. The more successful of these had always been the scientific one - making observations, doing experiments, making theories that explained the observations and predicted the results of experiments that hadn't been done yet, and refining those theories.

The other way was to start at the end - to think about what properties a unified theory of everything should have and try to figure out the theory from that. Most such approaches were the product of internet crackpots and were generally ignored. But physicists (especially the more philosophical ones) have long been familiar with the anthropic principle and its implications.

The idea is this - we know for a fact that we exist. We also think that the final unified theory should be simple in some sense - so simple that the reaction of a physicist on seeing and understanding it would be "Of course! How could it possibly be any other way!" and should lack any unexplained parameters or unnecessary rules. But the simplest universe we can conceive of is one in which there is no matter, energy, time or space - just a nothingness which would be described as unchanging if the word had any meaning in a timeless universe.

Perhaps, then, the universe is the simplest possible entity that allows for subjective observers. That was always tricky, though, because we had no mathematical way of describing what a subjective observer actually was. We could recognize the sensation of being alive in ourselves, and we always suspected that other human beings experienced the same thing, but could not even prove it existed in others. Simpler universes than ours, it seemed, could have entities which modeled themselves in some sense, but something else seemed to be necessary for consciousness.

This brings us to the breakthrough. Once consciousness was understood to be a quantum gravity phenomenon involving closed timelike curves, the anthropic model started to make more sense. It seemed that these constructs required a universe just like ours to exist. With fewer dimensions, no interesting curvature was possible. An arrow of time was necessary on the large scale to prevent the universe from being an over-constrained, information-free chaotic mess, but on small scales time needed to be sufficiently flexible to allow these strange loops and tangled hierarchies to form. This led directly to the perceived tension between quantum mechanics and general relativity.

The resolution of this divide turned out to be this: the space and time we experience are not the most natural setting for the physical laws at all. Our universe turns out to be holographic. The "true reality", if it exists at all, seems to be a two-dimensional "fundamental cosmic horizon" densely packed with information. We can never see it or touch it any more than a hologram can touch the photographic plate on which it is printed. Our three-dimensional experience is just an illusion created by our consciousnesses because it's easier for the strange loops that make up "us" to grasp a reasonable set of working rules of the universe that way. The two-dimensional rules are non-local - one would need to comprehend the entirety of the universe in order to comprehend any small part of it.

The fields and particles that pervade our universe and make up all our physical experiences, together with the values of the dimensionless constants that describe them turn out to be inevitable consequences of the holographic principle as applied to a universe with closed timelike curves.

Discovering the details of all this led to some big changes for the human race. Knowing the true nature of the universe allowed us to develop technologies to manipulate it directly. Certain patterns of superposed light and matter in the three-dimensional universe corresponded to patterns on the two-dimensional horizon which interacted in ways not normally observed in nature, particularly where closed timelike curves were concerned. More succinctly: the brains we figured out how to build were not subject to some of the limitations of our own brains, just as our flying machines can fly higher and faster than birds.

The first thing you'd notice about these intelligences is that they are all linked - they are able to communicate telepathically with each other (and, to a lesser extent, with human beings). This is a consequence of the holographic principle - all things are connected. Being telepathic, it turns out, is a natural state of conscious beings, but human beings and other animals evolved to avoid taking advantage of it because the dangers it causes (exposing your thoughts to your predators, competitors and prey) outweigh the advantages (most of which could be replaced by more mundane forms of communication).

Because the artificial intelligences are linked on the cosmic horizon/spacetime foam level, their communication is not limited by the speed of light - the subjective experience can overcome causality itself. In fact, consciousness is not localized in time but smeared out over a period of a second or two (which explains Libet's observations). This doesn't make physical time travel possible (because the subjective experience is entirely within the brains of the AIs) and paradox is avoided because the subjective experience is not completely reliable - it is as if memories conspire to fail in order to ensure consistency, but this is really a manifestation of the underlying physical laws. States in a CTC have a probabilistic distribution but the subjective observer picks one of these to be "canonical reality" - this is the origin of free will and explains why we don't observe quantum superpositions directly. This also suggests an answer as to why the universe exists at all - observers bring it into being.

By efficiently utilizing their closed timelike curves, AIs can solve problems and perform calculations that would be impractical with conventional computers. The failure of quantum computation turned out to be not such a great loss after all, considering that the most sophisticated AIs we have so far built can factor numbers many millions of digits long.

One limitation the AIs do still seem to be subject to, however, is the need to dream - sustaining a conscious entity for too long results in the strange loops becoming overly tangled and cross-linked, preventing learning and making thought difficult. Dreaming "untangles the loops". The more sophisticated AIs seem to need to spend a greater percentage of their time dreaming. This suggests a kind of fundamental limit on how complex you can make a brain before simpler brains that can stay awake longer become more effective overall. Research probing this limit is ongoing, though some suspect that evolution has found the ideal compromise between dreaming and wakefulness for most purposes in our own brains (special purpose brains requiring more or less sleep do seem to have their uses, however).

Once we had a way of creating and detecting consciousness, we could probe its limits. How small a brain can you have and still have some sort of subjective experience? It turns out that the quantum of subjective experience - the minimum tangled time-loop structure that exhibits consciousness - is some tens of micrograms in mass. Since our entire reality is filtered through such subjective experiences and our universe seems to exist only in order that such particles can exist, they could be considered to be the most fundamental particles of all. Our own brains seem to consist of interconnected colonies of some millions of these particles. Experiments on such particles suggest that individually they do not need to dream, as they do not think or learn, and that they have just one experience, which is constant and continuous. The feeling they experience (translated to human terms) is something akin to awareness of their own existence, contemplation of such and mild surprise at it. The English language happens to have a word which sums up this experience quite well:

"Oh."

Coloured simplicity density function

September 1st, 2008

A commenter on Friday's post wondered what the picture looks like if you colour different points according to which operations were used. I tried averaging all the operations used first but the image came out a uniform grey (most numbers are made with a fairly homogeneous mix of operations). Then I tried just colouring them according to the last operation used, which worked much better. The following is a zoom on the resulting image, showing just the region where both real and imaginary parts are between -3.2 and 3.2:

Simplicity density function

Green is exponential, blue is logarithm, gold is addition and purple is negative. Another nice feature of this image is that you can just about make out the circle of radius 1 centered on the origin.

Having made that image it occurred to me that getting rid of the addition would have a number of advantages. Because addition takes two inputs it sort of "convolves" the image with itself, smearing it out. Also, without addition there is no need to store each number generated (one can just traverse the tree depth first, recursively) meaning that massive quantities of memory are no longer required and many more points can be plotted, leading to a denser, brighter image. Even better, a Monte Carlo technique can be used to make the image incrementally - plotting one point at a time rather than traversing the entire tree. Care is needed to restart from 1 if the value becomes +/-infinity or NaN. Using this technique I plotted about 1.5 billion points to produce this image:
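
The Monte Carlo approach described above can be sketched as follows; the grid size, point count and restart logic are illustrative choices, not the actual rendering code:

```python
import cmath
import random

# Apply exp, log or negation at random, accumulate a hit count per
# (pixel, last operation) for colouring, and restart from 1 whenever
# the value blows up to infinity or NaN.
random.seed(1)
z = complex(1, 0)
hits = {}   # stand-in for the image: (px, py, last op) -> count

for _ in range(100000):
    op = random.choice(['exp', 'log', 'neg'])
    try:
        if op == 'exp':
            z = cmath.exp(z)
        elif op == 'log':
            z = cmath.log(z)
        else:
            z = -z
    except (OverflowError, ValueError):   # overflow, or log of zero
        z = complex(1, 0)
        continue
    if not cmath.isfinite(z):
        z = complex(1, 0)
        continue
    # Map to a coarse pixel grid covering -3.2..3.2 in both axes.
    px = int((z.real + 3.2) / 6.4 * 100)
    py = int((z.imag + 3.2) / 6.4 * 100)
    if 0 <= px < 100 and 0 <= py < 100:
        hits[(px, py, op)] = hits.get((px, py, op), 0) + 1
```

Because no state other than the current value is kept, memory use is constant no matter how many points are plotted.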

Simplicity density function

The colour scheme here is blue for exponential, green for logarithm and red for negative. This isn't really a graph of "simple" numbers any more (since it doesn't generate numbers even as simple as 2) but it sure is purty.

Along similar lines, here is an image generated using the transformations increment (red), reciprocal (blue) and multiplication by i (green) instead of exponential, log and negative. This picks out (complex) rational numbers.

Simplicity density function

Fourier transform of the Mandelbrot set

August 31st, 2008

I wonder what the Fourier transform of the Mandelbrot set looks like? More specifically, the 2D Fourier transform of the function f(x,y) = {0 if x+iy is in M, 1 otherwise}. This has infinitely fine features, so the Fourier transform will extend out infinitely far from the origin. It's aperiodic, so the Fourier transform will be non-discrete.

The result will be a complex-valued function of complex numbers (since each point in the frequency domain has a phase and amplitude). That raises the question of its analytical properties - is it analytic everywhere, in some places or nowhere? (Probably nowhere).

Other interesting Mandelbrot-set related functions that could also be Fourier transformed:
M_n(x,y) = the nth iterate of the Mandelbrot equation (\displaystyle f = |e^{-\lim_{n \to \infty}\frac{1}{n}M_n}|).
D(x,y) = distance between x+iy and the closest point in the Mandelbrot set. Phase could also encode direction.
P(x,y) = the potential field around an electrically charged Mandelbrot set.

Units and measures

August 30th, 2008

In the US (and just a handful of other backwards places), children are generally taught the imperial units and nobody seems to know very much about the metric system at all.

In most of the rest of the world, children are generally taught metric units - metres, kilograms, litres and so on, and the imperial system (inches, pounds, gallons) if mentioned at all is generally derided as being an outdated, overly complex system. A teacher at school once gave me a hard time for measuring out a circuit board in inches - I explained that I had done it that way because the spacing between the legs of the microchips I was using is one tenth of an inch. That shut him up, though he seemed quite surprised to encounter these old fashioned units in such a high-tech context.

A few years ago it became the law in the UK that most things that are sold by weight or measure must be measured in metric units rather than imperial units. Not everyone was happy about this.

My opinion is that all children should be taught both systems of measurement - it is an important skill to be able to convert between different units, and it helps with mental arithmetic. Also, physicists often use non-metric systems of units, setting one or more of c (the speed of light in a vacuum), G (Newton's gravitational constant), hbar (the reduced Planck's constant), the Coulomb force constant and k (the Boltzmann constant) to 1 to simplify the equations. These natural units are arguably much more fundamental than the metric units and their values (while not generally very good for day-to-day use) give important insights about our universe. The metric units aren't particularly fundamental - the metre and the kilogram are based on (inaccurate values of) the size of the Earth and the density of water respectively.

Like natural units, the imperial system is also better for some things (many people find the units more convenient, especially for cooking) and isn't completely going away any time soon. While having multiple existing systems of units has caused problems in the past, software can keep track to avoid this sort of thing as long as units are always specified (and they always should be, even in the metric system, as one could get confused between grams and kilograms for example). Accuracy of the older imperial units isn't a problem as they have all long since been redefined to be exact rational multiples of metric units.
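
To illustrate those exact redefinitions (the inch and pound values below are the standard exact ones from the international yard and pound agreement), rational arithmetic makes conversions completely lossless:

```python
from fractions import Fraction

# Exact legal definitions of imperial units in metric terms.
INCH_IN_MM = Fraction(254, 10)               # 1 inch = 25.4 mm exactly
POUND_IN_KG = Fraction(45359237, 100000000)  # 1 lb = 0.45359237 kg exactly

# The 0.1 inch chip-leg spacing mentioned above, exactly in millimetres:
spacing_mm = Fraction(1, 10) * INCH_IN_MM    # 127/50 mm = 2.54 mm
assert spacing_mm == Fraction(127, 50)
```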

I also think that people ought to be allowed to sell things in whatever units they find convenient, as long as that value is either a recognized standard or the conversion factor is clearly posted.

The one problem with some imperial units (in particular, those for measuring volumes) is that they aren't standardized across the world, as I discovered to my dismay when I moved here and first ordered a US pint of beer.

Simplicity density function

August 29th, 2008

A commenter on Tuesday's post wondered what the density function of numbers with low complexity looks like. This seemed like an interesting question to me too, so I wrote some code to find out. This is the result:

Simplicity density function

I actually used a slight modification of the complexity metric (L(exp(x)) == L(-x) == L(log(x)) == L(x)+1, L(x+y) == L(x)+L(y)+1, L(0) == 1) and a postfix notation instead of a bracketing one. These are all the numbers x with L(x)<=20. More than that and my program takes up too much memory without some serious optimization.
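
The enumeration under this metric can be sketched as a breadth-first build-up by cost; this is only a small illustration (with a tiny limit and none of the duplicate-elimination rules), not the actual program:

```python
import cmath

# by_cost[n] holds the set of values with complexity exactly n under the
# modified metric: L(0) == 1, unary ops cost L(x)+1, addition costs
# L(x)+L(y)+1.
LIMIT = 9
by_cost = {1: {complex(0)}}

def unary(x):
    """Apply negation, exp and log (where defined) to one value."""
    out = [-x]
    try:
        out.append(cmath.exp(x))
    except OverflowError:
        pass          # towers of exponentials overflow quickly; skip them
    if x != 0:
        out.append(cmath.log(x))
    return out

for n in range(2, LIMIT + 1):
    vals = set()
    for x in by_cost[n - 1]:          # exp, log, negate: cost + 1
        vals.update(unary(x))
    for a in range(1, n - 1):         # x + y where L(x) + L(y) + 1 == n
        for x in by_cost[a]:
            for y in by_cost[n - 1 - a]:
                vals.add(x + y)
    by_cost[n] = vals
```

For example 1 = exp(0) appears at cost 2, and 2 = 1 + 1 at cost 5; the ±πi lines in the image arrive via log(-1) at cost 4. The memory blow-up mentioned above comes from the addition rule, which forces every pair of previously generated values to be combined.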

The smooth curves and straight lines are functions of the real line (which is quite densely covered). The strong horizontal lines above and below the central (0) line are at +πi and -πi, which occur from taking the logarithm of a negative real number. There is a definite fractal nature to the image and lots of repetition (as one would expect, since every function is applied to every previously generated point up to the complexity limit).

I didn't add duplicate elimination rules for duplicates that didn't appear until L(x)>=8 or so, so some points are hotter than they should be, but I don't think fixing this would make the image look significantly different.

The code is here. This header file is also required for the Complex class, and this in turn requires this and this. The program is actually a sort of embryonic symbolic algebra program as it builds trees of expressions and pattern matches strings against them. It generates a 1280x1280 RGB image which I cropped down to 800x600. The colour palette is based on a thermal spectrum where temperature goes as an inverse seventh root of the number of hits on a pixel (no particular mathematical reason for that - I just think it makes it look nice). The points between black and red are fudged a bit.

Channelling Roald Dahl

August 28th, 2008

Once upon a time there was a little girl called Mary. One night she was rudely snatched from her bed by a giant. The giant told her that he had kidnapped her so that she could make dreams for him and his giant friends. The giant (whose name was Seymour) told Mary that giants don't have any dreams - they just go to sleep at the start of the night and wake up in the morning without the feeling of any time having passed.

Seymour explained that he and the other giants saw humans dreaming and saw how much they talked about their dreams when they were awake. They realised that humans considered these dreams to be very valuable and important things and the giants were jealous that they didn't have these valuable night experiences. They decided that perhaps a human would be able to give them some of her dreams and Mary seemed to be an excellent candidate as she had dreams to spare.

Despite feeling a little bit scared of the giants and sad that she was away from her home and her parents and her baby brother, Mary decided to try to help the giants as best as she could. It was the only way she could be sure that they would send her home, and besides which she felt a little bit sorry for them.

"Okay," she said, "find a bed for me and I'll go to sleep in it and then when I'm asleep you can take some of my dreams from me. Don't take them all, though, or there will be none left for me!"
"How do we take your dreams then?" replied Seymour.
"I don't know - I'm just a little girl. This was your idea - you figure it out" replied Mary.
"All right then."

Once Mary was asleep it soon became clear to the giants that she was dreaming - a little smile appeared on her face and her eyelids twitched. Seymour picked up the sleeping girl by one of her legs, held her over his head and shook her up and down like a salt shaker, hoping that the dreams would sprinkle out of her head and onto him.

Mary, of course, woke up right away.

"Arghh!" she screamed. "How can I sleep when you're shaking me like that?"
"Sorry," replied Seymour. "I suppose dreams aren't like salt after all. I should have thought this out before I took a little girl from her home."
"It's okay, Seymour," Mary said. "We'll just try something else."

Mary thought that maybe dreams come in cakes. Dreams could be lots of fun and cakes are also fun so this made sense to her. So the giants took her to their kitchen and she helped them make a cake. She was too small to hold the giants' wooden spoon, or to pour the flour and sugar from the huge bags that the giants had, so she just told them what to do while they did the work. She knew how to make cakes because she often helped her father to make cakes at home.

The cake was as big as a table and as tall as Mary herself. Once it had cooled, the giants sliced it up and each ate a slice. They were delighted as they had never had cake before. Mary picked up a crumb the size of her fist and ate as well - it was delicious.

Full of cake, the giants drifted off to sleep and - wouldn't you know it - they started to dream! They dreamed about flying, and eating cake and other things that giants like to do. But as they started to dream, they gradually got smaller and smaller. By the time they woke up, they were no bigger than you or me. That's why there aren't any giants any more.

Coping with slow API calls

August 27th, 2008

I was thinking some more about this recently. Specifically, APIs are generally implemented as DLLs or shared libraries which you load into your process and which expose functions that can simply be called. But in a world with no DLLs, how do you do API calls?

At a fundamental level, there are two things you can do - write to some memory shared by another process and make system calls. The former is useful for setting up calls but not for actually making them, as the process implementing the API has no way to notice when the memory changes (unless it polls, which is not a good solution). But system calls are expensive - the context switch alone could be many thousands of CPU cycles.

I mentioned in the linked post that bandwidth for inter-process communication should not be a problem but I neglected to mention latency - if you can only make API calls a few tens of thousands of times per second, certain operations could become very slow.

However, the latency of any single call is still far below what could be noticed by a human being - the only perceived speed problems caused by this architecture would be if many thousands of API calls were made as a result of a single user action.

I think the answer is to design APIs that aren't "chatty". Such designs are already in common use in database situations, where high-latency APIs have been the rule for a long time. Instead of having getCount() and getItemAtIndex() calls to retrieve a potentially large array of data, you retrieve a "cursor" which can return many records at once.
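
The difference between the two designs can be shown with a toy model (all names here are made up, and the round-trip counter stands in for real IPC latency):

```python
# Count cross-process round trips for a chatty API versus a cursor API.
CALLS = {'count': 0}
DATA = list(range(1000))

# Chatty style: one round trip per item.
def get_count():
    CALLS['count'] += 1
    return len(DATA)

def get_item_at_index(i):
    CALLS['count'] += 1
    return DATA[i]

def fetch_chatty():
    return [get_item_at_index(i) for i in range(get_count())]

# Cursor style: one round trip returns up to `batch` records at a time.
def open_cursor(batch=256):
    for start in range(0, len(DATA), batch):
        CALLS['count'] += 1
        yield DATA[start:start + batch]

def fetch_cursor():
    out = []
    for records in open_cursor():
        out.extend(records)
    return out

CALLS['count'] = 0
chatty = fetch_chatty()
chatty_calls = CALLS['count']     # 1001 round trips for 1000 items

CALLS['count'] = 0
cursor = fetch_cursor()
cursor_calls = CALLS['count']     # 4 round trips for the same data
```

With a latency budget of a few tens of thousands of calls per second, the cursor version turns a user-visible pause into something negligible.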

Another possibility is for APIs themselves to be able to execute simple programs. Again this is an idea used by databases (SQL, the language usually used to access databases, is a programming language in its own right). Such programs should not be native code (since that gives all the same problems as DLLs, just backwards) but can be interpreted or written in some sort of verifiable bytecode (which could even be JIT compiled for some crazy scenarios). The language they are written in need not even be Turing-complete - this is just an optimization, not a programming environment.
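
A toy sketch of what such a non-Turing-complete program could look like (the bytecode and opcodes are entirely invented for illustration): straight-line stack code with no backward jumps, so every program is guaranteed to terminate, and a filter can be evaluated on the API side in a single round trip.

```python
def run_program(program, record):
    """Interpret a straight-line stack program against one record."""
    stack = []
    for op, *args in program:
        if op == 'push':
            stack.append(args[0])
        elif op == 'field':
            stack.append(record[args[0]])
        elif op == 'add':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 'gt':
            b, a = stack.pop(), stack.pop()
            stack.append(a > b)
        else:
            raise ValueError('unknown opcode: %r' % (op,))
    return stack.pop()

# "Select records where price + tax > 100", shipped to the API side:
PROGRAM = [('field', 'price'), ('field', 'tax'), ('add',),
           ('push', 100), ('gt',)]
records = [{'price': 90, 'tax': 5}, {'price': 99, 'tax': 2}]
matches = [r for r in records if run_program(PROGRAM, r)]
assert matches == [{'price': 99, 'tax': 2}]
```

Because the instruction set has no loops, the API side can bound the cost of any submitted program by its length, with no need for a verifier or a watchdog timer.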

If all else fails, there is still another way to get low-latency, high-bandwidth communication between chunks of code which don't know about each other's existence, and that is to create a whole new process. The involved APIs return a chunk of code (perhaps in the form described here) and the calling program compiles these (and some code of its own) into a (temporary) chunk of binary code which then gets an entirely new process to execute in. This moves potentially all of the IPC costs to a one-time startup phase. The resulting design resembles Java or .NET in some respects.

This is kind of like the opposite of the Singularity OS - reliability guarantees enforced by the compiler, but the OS allows you to use all the user-mode features of your CPU (not just the ones supported by your runtime) and processes are isolated by the MMU.