Archive for the ‘maths’ Category

Coloured simplicity density function

Monday, September 1st, 2008

A commenter on Friday's post wondered what the picture looks like if you colour different points according to which operations were used. I tried averaging all the operations used first but the image came out a uniform grey (most numbers are made with a fairly homogeneous mix of operations). Then I tried just colouring them according to the last operation used, which worked much better. The following is a zoom on the resulting image, showing just the region where both real and imaginary parts are between -3.2 and 3.2:

Simplicity density function

Green is exponential, blue is logarithm, gold is addition and purple is negative. Another nice feature of this image is that you can just about make out the circle of radius 1 centered on the origin.

Having made that image it occurred to me that getting rid of the addition would have a number of advantages. Because addition takes two inputs it sort of "convolves" the image with itself, smearing it out. Also, without addition there is no need to store each number generated (one can just traverse the tree depth first, recursively) meaning that massive quantities of memory are no longer required and many more points can be plotted, leading to a denser, brighter image. Even better, a Monte Carlo technique can be used to make the image incrementally - plotting one point at a time rather than traversing the entire tree. Care is needed to restart from 1 if the value becomes +/-infinity or NaN. Using this technique I plotted about 1.5 billion points to produce this image:

Simplicity density function

The colour scheme here is blue for exponential, green for logarithm and red for negative. This isn't really a graph of "simple" numbers any more (since it doesn't generate numbers even as simple as 2) but it sure is purty.
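The walk itself is simple enough to sketch. Here is a minimal reconstruction (not the actual program - the window, point count and random number source are arbitrary choices):

```cpp
// Monte Carlo sketch of the addition-free simplicity walk: repeatedly apply
// a random unary operation, restarting from 1 on infinity or NaN, and
// accumulate per-pixel hit counts over the window [-3.2, 3.2]^2.
#include <complex>
#include <cstdlib>
#include <cmath>
#include <vector>

int main()
{
    const int size = 800;                        // image is size x size pixels
    std::vector<unsigned> hits(size * size, 0);  // per-pixel hit counts
    std::complex<double> z(1.0, 0.0);            // start the walk at 1

    for (long n = 0; n < 10000000; ++n) {
        switch (std::rand() % 3) {               // pick one unary operation
            case 0: z = std::exp(z); break;      // blue in the image above
            case 1: z = std::log(z); break;      // green
            case 2: z = -z;          break;      // red
        }
        // Restart from 1 if the walk has escaped to infinity or NaN.
        if (!std::isfinite(z.real()) || !std::isfinite(z.imag()))
            z = 1.0;
        // Otherwise, score a hit if the point lands inside the window.
        int x = (int)std::floor((z.real() + 3.2) / 6.4 * size);
        int y = (int)std::floor((z.imag() + 3.2) / 6.4 * size);
        if (x >= 0 && x < size && y >= 0 && y < size)
            ++hits[y * size + x];
    }
    // hits[] would then be tone-mapped (per operation, for colour) and
    // written out as an image.
}
```

Only the switch statement needs changing to try a different trio of operations, as in the next image.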

Along similar lines, here is an image generated using the transformations increment (red), reciprocal (blue) and multiplication by i (green) instead of exponential, log and negative. This picks out (complex) rational numbers.

Simplicity density function

Fourier transform of the Mandelbrot set

Sunday, August 31st, 2008

I wonder what the Fourier transform of the Mandelbrot set looks like? More specifically, the 2D Fourier transform of the function f(x,y) = {0 if x+iy is in M, 1 otherwise}. This has infinitely fine features, so the Fourier transform will extend out infinitely far from the origin. It's aperiodic, so the Fourier transform will be non-discrete.

The result will be a complex-valued function of complex numbers (since each point in the frequency domain has a phase and amplitude). That raises the question of its analytical properties - is it analytic everywhere, in some places or nowhere? (Probably nowhere).
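One could at least get a feel for it numerically. A rough sketch of the experiment (deliberately naive - a real attempt would use an FFT library over the whole grid, and the window [-2,2]^2 truncates a function that extends to infinity):

```cpp
// Sample f(x,y) on a coarse grid and evaluate the 2D Fourier integral
// naively at a single frequency point (u,v).
#include <complex>
#include <cstdio>

// f(x,y) = 0 if x+iy appears to be in M, 1 otherwise (finite iteration cap).
static double f(double x, double y)
{
    std::complex<double> c(x, y), z = 0.0;
    for (int i = 0; i < 256; ++i) {
        z = z * z + c;
        if (std::norm(z) > 4.0) return 1.0;  // escaped: outside M
    }
    return 0.0;                              // treat as inside M
}

int main()
{
    const double pi = 3.141592653589793;
    const int n = 256;                       // samples per axis over [-2,2]
    const double u = 3.0, v = 1.0;           // one point in the frequency domain
    std::complex<double> sum = 0.0;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            double x = -2.0 + 4.0 * i / n;
            double y = -2.0 + 4.0 * j / n;
            sum += f(x, y) * std::exp(std::complex<double>(0.0, -2.0 * pi * (u * x + v * y)));
        }
    sum *= (4.0 / n) * (4.0 / n);            // area element dx dy
    std::printf("F(%g,%g) ~= %g + %gi\n", u, v, sum.real(), sum.imag());
}
```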

Other interesting Mandelbrot-set related functions that could also be Fourier transformed:

  • M_n(x,y) = the nth iterate of the Mandelbrot equation (\(f = |e^{-\lim_{n \to \infty}\frac{1}{n}M_n}|\)).
  • D(x,y) = the distance between x+iy and the closest point in the Mandelbrot set. Phase could also encode direction.
  • P(x,y) = the potential field around an electrically charged Mandelbrot set.

Units and measures

Saturday, August 30th, 2008

In the US (and just a handful of other backwards places), children are generally taught imperial units, and nobody seems to know very much about the metric system at all.

In most of the rest of the world, children are generally taught metric units - metres, kilograms, litres and so on - and the imperial system (inches, pounds, gallons), if mentioned at all, is generally derided as an outdated, overly complex system. A teacher at school once gave me a hard time for measuring out a circuit board in inches - I explained that I had done it that way because the spacing between the legs of the microchips I was using is one tenth of an inch. That shut him up, though he seemed quite surprised to encounter these old-fashioned units in such a high-tech context.

A few years ago it became the law in the UK that most things that are sold by weight or measure must be measured in metric units rather than imperial units. Not everyone was happy about this.

My opinion is that all children should be taught both systems of measurement - it is an important skill to be able to convert between different units, and it helps with mental arithmetic. Also, physicists often use non-metric systems of units, setting one or more of c (the speed of light in a vacuum), G (the Newtonian gravitational constant), hbar (the reduced Planck constant), the Coulomb force constant and k (the Boltzmann constant) to 1 to simplify the equations. These natural units are arguably much more fundamental than the metric units and their values (while not generally very good for day-to-day use) give important insights about our universe. The metric units aren't particularly fundamental - the metre and the kilogram are based on (inaccurate values of) the size of the Earth and the density of water respectively.
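To give the flavour of those natural units: setting c = G = hbar = 1 picks out the Planck units, whose approximate metric values are \(l_P = \sqrt{\hbar G/c^3} \approx 1.6 \times 10^{-35}\,\mathrm{m}\), \(t_P = \sqrt{\hbar G/c^5} \approx 5.4 \times 10^{-44}\,\mathrm{s}\) and \(m_P = \sqrt{\hbar c/G} \approx 2.2 \times 10^{-8}\,\mathrm{kg}\) - hopeless for everyday measurement, but marking the scales at which quantum gravity should matter.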

Like natural units, the imperial system is also better for some things (many people find the units more convenient, especially for cooking) and isn't completely going away any time soon. While having multiple coexisting systems of units has caused problems in the past, software can keep track of units to avoid this sort of problem, as long as units are always specified (and they always should be, even in the metric system, as one could otherwise confuse grams and kilograms, for example). Accuracy of the older imperial units isn't a problem, as they have all long since been redefined to be exact rational multiples of metric units.
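As a sketch of what that kind of unit tracking might look like in software (a hypothetical toy, not any real library):

```cpp
// Values are stored internally in SI; every number entering or leaving the
// program carries an explicit unit. A fuller version would also track
// dimensions, so that a length could never be read out in kilograms.
#include <cstdio>

struct Unit { const char* name; double toSI; };  // conversion factor into SI
const Unit inch     = { "in", 0.0254 };          // exactly 25.4 mm by definition
const Unit metre    = { "m",  1.0    };
const Unit gram     = { "g",  0.001  };          // the gram/kilogram trap, made explicit
const Unit kilogram = { "kg", 1.0    };

struct Quantity { double si; };                  // always in SI base units internally
Quantity operator*(double n, const Unit& u) { Quantity q = { n * u.toSI }; return q; }
double in(Quantity q, const Unit& u) { return q.si / u.toSI; }

int main()
{
    Quantity pitch = 0.1 * inch;                 // DIP pin spacing
    Quantity flour = 1500.0 * gram;
    std::printf("%g m, %g kg\n", in(pitch, metre), in(flour, kilogram));
}
```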

I also think that people ought to be allowed to sell things in whatever units they find convenient, as long as that value is either a recognized standard or the conversion factor is clearly posted.

The one problem with some imperial units (in particular, those for measuring volumes) is that they aren't standardized across the world, as I discovered to my dismay when I moved here and first ordered a US pint of beer - at 16 US fluid ounces (about 473 ml), it is noticeably smaller than the 20-ounce (about 568 ml) imperial pint I was used to.

Simplicity density function

Friday, August 29th, 2008

A commenter on Tuesday's post wondered what the density function of numbers with low complexity looks like. This seemed like an interesting question to me too, so I wrote some code to find out. This is the result:

Simplicity density function

I actually used a slight modification of the complexity metric (L(exp(x)) == L(-x) == L(log(x)) == L(x)+1, L(x+y) == L(x)+L(y)+1, L(0) == 1) and a postfix notation instead of a bracketing one. These are all the numbers x with L(x)<=20. More than that and my program takes up too much memory without some serious optimization.
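The enumeration itself is straightforward - here is a rough sketch (far less optimized than the actual program linked below):

```cpp
// Enumerate every value with L(x) <= N under the modified metric:
// L(0) == 1, each unary operation adds 1, L(x+y) == L(x)+L(y)+1.
// No duplicate elimination, so counts blow up quickly; reaching N = 20
// needs serious optimization (or a great deal of memory).
#include <complex>
#include <vector>
#include <cstdio>

typedef std::complex<double> C;

int main()
{
    const int N = 12;  // complexity limit
    std::vector<std::vector<C> > byCost(N + 1);
    byCost[1].push_back(C(0.0, 0.0));  // L(0) == 1

    for (int cost = 2; cost <= N; ++cost) {
        // Unary operations cost one more than their argument.
        // (log 0 and friends produce infinities, which fall outside the plot.)
        for (size_t i = 0; i < byCost[cost - 1].size(); ++i) {
            C x = byCost[cost - 1][i];
            byCost[cost].push_back(std::exp(x));
            byCost[cost].push_back(std::log(x));
            byCost[cost].push_back(-x);
        }
        // Addition combines two cheaper values.
        for (int a = 1; a + 1 < cost; ++a)
            for (size_t i = 0; i < byCost[a].size(); ++i)
                for (size_t j = 0; j < byCost[cost - 1 - a].size(); ++j)
                    byCost[cost].push_back(byCost[a][i] + byCost[cost - 1 - a][j]);
        std::printf("L = %d: %zu values\n", cost, byCost[cost].size());
        // Each value scores a hit at its position in the complex plane.
    }
}
```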

The smooth curves and straight lines are functions of the real line (which is quite densely covered). The strong horizontal lines above and below the central (imaginary part 0) line are at +πi and -πi, which occur from taking the logarithm of a negative real number. There is a definite fractal nature to the image and lots of repetition (as one would expect, since every function is applied to every previously generated point up to the complexity limit).

I didn't add elimination rules for duplicates that only show up at L(x)>=8 or so, so some points are hotter than they should be, but I don't think fixing this would make the image look significantly different.

The code is here. This header file is also required for the Complex class, and this in turn requires this and this. The program is actually a sort of embryonic symbolic algebra program as it builds trees of expressions and pattern matches strings against them. It generates a 1280x1280 RGB image which I cropped down to 800x600. The colour palette is based on a thermal spectrum where temperature goes as an inverse seventh root of the number of hits on a pixel (no particular mathematical reason for that - I just think it makes it look nice). The points between black and red are fudged a bit.
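In code, the tone mapping is roughly this (a sketch, reading "inverse seventh root" as temperature going as hits to the power 1/7, normalized against the hottest pixel):

```cpp
// Map a pixel's hit count to a 0..1 "temperature" for the thermal palette.
#include <cmath>

double temperature(unsigned hits, unsigned maxHits)
{
    return std::pow(double(hits) / double(maxHits), 1.0 / 7.0);
}
```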

A little mathematical game

Tuesday, August 26th, 2008

Given five things:

  • The number 0, denoted 0
  • The ability to find e^x for any number x, denoted {x}
  • The ability to find the principal natural logarithm log(x) for any number x, denoted [x]
  • The ability to find the additive inverse -x for any number x, denoted <x>
  • The ability to find the sum x+y for any pair of numbers x and y, denoted (x+y)

What numbers can you make and how long are their denotations? This gives some sort of metric to how "complicated" a number is. Write the length of the smallest possible denotation for number x as L(x). Then:

  • Subtraction: a-b is denoted as (a+<b>) and L(a-b) <= 5+L(a)+L(b)
  • Multiplication: ab is denoted as {([a]+[b])} and L(ab) <= 9+L(a)+L(b)
  • Division: a/b is denoted as {([a]+<[b]>)} and L(a/b) <= 11+L(a)+L(b)
  • Exponentiation: b^a is denoted as {{([a]+[[b]])}} and L(b^a) <= 13+L(a)+L(b)

Some interesting numbers, with their complexities:

1 {0} 3
e {{0}} 5
-1 <{0}> 5
2 ({0}+{0}) 9
i {{([[<{0}>]]+<[({0}+{0})]>)}} 29
π <{([[<{0}>]]+[{{([[<{0}>]]+<[({0}+{0})]>)}}])}> 47
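These denotations can be checked mechanically. A throwaway recursive-descent evaluator (a sketch, not part of the game):

```cpp
// Parse and evaluate denotations: 0, {x} = e^x, [x] = log x, <x> = -x, (x+y).
// Assumes well-formed input.
#include <complex>
#include <cstdio>

typedef std::complex<double> C;

static const char* p;  // cursor into the denotation string

static C eval()
{
    char c = *p++;
    if (c == '0') return 0.0;
    if (c == '{') { C x = eval(); ++p; return std::exp(x); }  // skip '}'
    if (c == '[') { C x = eval(); ++p; return std::log(x); }  // skip ']'
    if (c == '<') { C x = eval(); ++p; return -x; }           // skip '>'
    /* c == '(' */ C x = eval(); ++p;                         // skip '+'
                   C y = eval(); ++p; return x + y;           // skip ')'
}

int main()
{
    const char* tests[] = { "{0}", "({0}+{0})",
                            "{{([[<{0}>]]+<[({0}+{0})]>)}}" };  // 1, 2 and i
    for (int i = 0; i < 3; ++i) {
        p = tests[i];
        C v = eval();
        std::printf("%s = %g%+gi\n", tests[i], v.real(), v.imag());
    }
}
```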

Some interesting questions:

  • How does the complexity function L grow with its argument?
  • What interesting numbers do not have finite complexity?
  • How could the game be changed to include them?

Related: Fine structure constant update.

Startups

Saturday, August 16th, 2008

I've been reading a lot about startups lately, specifically in the essays of Paul Graham. There is a part of me that regrets not having ever tried to start my own startup - unfortunately the bursting of the bubble and the intricacies of US immigration policy meant that there was never really a practical time. While being filthy rich does have a certain appeal, the more compelling reason is the one at the end of this essay - getting the business of earning enough money for life over with as quickly as possible so that I can work on whatever I want to. The summer/autumn before I started at Microsoft, when I had several months of mostly uninterrupted time to myself to work on whatever I wanted, was one of the most intellectually satisfying (not to mention relaxing) periods of my life.

I have always thought of myself as having progressive politics but this is the best argument I have seen yet against taxing the rich. I guess one possible counterargument is that maybe taking very big risks isn't really necessary to develop new technologies (nor necessarily particularly beneficial to society as a whole) - the only reason that this has been true in the past is that developers and funders of new technologies have very little information about whether these new technologies are likely to be successful or not. With better information, perhaps we will be able to pick the winners before a lot of money has been spent, meaning that the big risks will be unnecessary.

One other thought: as startups become cheaper than ever to create and run, perhaps we will see more of them following the model of craigslist - staying small (in the financial sense) and private and concentrating on what users want, instead of aiming for explosive growth, trying to become as profitable as possible and then either going public or getting acquired. While the latter is more lucrative, I suspect the former might be more intellectually and spiritually satisfying.

It always comes down to economics

Thursday, August 14th, 2008

What do Bruce Schneier, Mark Chu-Carroll and Paul Graham have in common? They're all bloggers (though Graham prefers to describe himself as an essayist) who seem to have gradually transitioned towards writing about economics starting from their more specialized subjects (cryptography, mathematics and Lisp respectively).

I guess this is because economics is a very powerful subject - if you want to change the world you have to make it worth people's while to do so, which means you have to set up the right economic conditions. Conversely, if you know (or think you know) how the world is likely to change in the next few years, you can use economic principles to figure out what to place bets on (what to invest in).

Different kinds of truth

Monday, August 11th, 2008

I used to think that the truth was just that - The Truth, singular. That there was just one "Platonic" set of true mathematical facts. I no longer subscribe to this point of view - what's true depends on who you ask.

First there are some basic truths that we have to agree on to have a discussion about anything, like "if A is true, and if A implies B, then B is also true". If we don't accept these basic logical principles as true, the consequences are simply that we can't deduce anything, or that we have to accept that everything is true, or that nothing is true. We accept these truths because rejecting them leaves us with a rather limited and boring set of mathematics, useless for doing anything interesting (like modelling the real world). Those who would deny them can't be disproven, but they can't be reasoned with either. So these truths just have to be admitted as axioms.

Next there are empirical truths like "the sky is blue" and "2+2=4". These can be thought of as facts about the universe we live in. We know they are true because we can see that they are. One could in principle do mathematics without such facts (just using pure logic) but most mathematicians generally accept these truths as well as it makes mathematics more interesting (and definitely more useful).

Sometimes mathematicians envisage mathematical objects which cannot exist in our universe - objects which are infinite in some sense (not necessarily infinitely big - a perfect sphere is infinitely smooth, for example, and the real number line contains infinitely many points). Infinity is a very slippery thing to deal with precisely because infinities are never directly observed in the universe. How can we say anything about infinity then? Well, mathematicians have developed techniques like "epsilon-delta" (for every epsilon you can name, no matter how small, I can name a delta with such and such a property). These arguments break down in physics (nothing can be smaller than the Planck length or the concentration of energy required to confine it in that interval would cause a black hole) so they are purely mathematical in nature. Nevertheless they form a consistent and beautiful theory, and they do turn out to be useful for approximating physics, so we accept them.

But when infinities start to get involved, things get very weird - you start to find that there are multiple different versions of mathematics (multiple different sets of "true facts") which are consistent with themselves, consistent with our universe and interesting. Two of these come from accepting or denying the "Axiom of Choice" (AC). If we accept the AC, it allows us to prove things about infinities without actually constructing or defining them. This has some very weird results (like being able to disassemble a sphere into 5 pieces, move and rotate them and end up with 2 identical spheres of the same size as the original with no gaps). But denying the AC also gives you some weird results (for example, it becomes consistent that there is an infinite set with no countably infinite subset, or a vector space with no basis). Each is just as "true", but they give different sets of mathematics. Currently mathematics including the AC is more popular, as it seems to provide greater richness of intellectual territory.

As mathematics develops, it seems likely that more of these "interesting" axioms will be discovered (some of which might already have been assumed in some proofs) and that mathematics will fracture into increasing numbers of "branches" depending on which axioms one chooses to accept and which to deny. In fact, Gödel's Incompleteness Theorem says that any consistent axiomatic system rich enough to express arithmetic leaves true statements that can't be proved from its axioms - in other words, the "bulk of mathematics" (though not necessarily the bulk of interesting mathematics) is found at the leaves of this metamathematical tree.

There are other branches of mathematics whose "truth value" is currently unknown to human mathematicians. For example, many theorems have been proven under the assumption that the Riemann hypothesis is true. We think it probably is but nobody has been able to prove it yet. The volume of work which assumes it makes it one of the most important unsolved problems.

Computer algebra system

Sunday, August 10th, 2008

At some point in the future I'd like to write a computer algebra system, like Maple or Mathematica, just because I think it would be a really fun piece of software to write. The basic idea is that you can tell the computer things you know (e.g. "x+2=3") and ask it questions (like "x?") and the computer would attempt to give the simplest possible answer by searching its database of facts. When printing formulae on the screen it would use algorithms from TeX to give attractive output.
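To make that concrete, here is about the smallest toy imaginable along those lines (a sketch handling only the "x+2=3" flavour of question, nothing like a design for the real thing):

```cpp
// Expressions as trees; a "solver" that peels operations off the side of
// the equation containing the unknown. Assumes x occurs exactly once and
// everything else is a numeric constant.
#include <cstdio>
#include <memory>

struct Expr;
typedef std::shared_ptr<Expr> P;
struct Expr { char op; double value; P left, right; };  // op: '#' number, 'x' unknown, '+' sum

P num(double v) { P e(new Expr()); e->op = '#'; e->value = v; return e; }
P var()         { P e(new Expr()); e->op = 'x'; return e; }
P add(P a, P b) { P e(new Expr()); e->op = '+'; e->left = a; e->right = b; return e; }

// Solve lhs == rhs for x by moving constants to the right-hand side.
double solve(P lhs, double rhs)
{
    while (lhs->op != 'x') {
        if (lhs->left->op == '#') { rhs -= lhs->left->value;  lhs = lhs->right; }
        else                      { rhs -= lhs->right->value; lhs = lhs->left;  }
    }
    return rhs;
}

int main()
{
    // Tell it "x + 2 = 3", then ask it "x?"
    std::printf("x = %g\n", solve(add(var(), num(2.0)), 3.0));
}
```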

Another nice feature would be the ability to directly manipulate formulae, for example rearranging terms of an equation by dragging them with the mouse or expanding multiplications by clicking on them (the program, of course, would prevent manipulations that aren't mathematically correct). These kinds of manipulations can be very tedious to do by hand.

Yet another feature that I want is the ability to create animated, interactive proofs. Rather than just presenting a static sequence of "X implies Y implies Z" on a page, the program could actually create an animation of X turning into Y. And if, at some stage, a derivation is unclear, the user could right-click on it, select "Why?" and the program would attempt to explain. That sounds difficult to do but I think much of this is really quite mechanical. When studying mathematics at university, I often wished that the proofs were presented like this - it would have made things much easier.

As well as an interactive algebra and mathematical presentation system, this program would also contain a big database of mathematical facts, both to aid in the program's own proofs and as an (interactive) mathematics textbook in its own right. Mathematicians using the software could contribute their discoveries to this database/textbook in addition to (or even instead of) the traditional distribution method of writing papers.

Economy in ubiquity

Thursday, August 7th, 2008

I think that at some point in the future we will develop nano-replication technology (a la Star Trek: The Next Generation) and there will be perfect open-source recipes for these machines for everything from roast beef to nano-replication machines.

Shortly after that will come the complete collapse of society, because we won't need anyone else for our basic necessities any more. We will just replicate power generators (solar, wind or both depending on climate) and devices for collecting rainwater and making it drinkable. Our waste will be recycled into the raw materials the replicators need (as disgusting as that sounds, I'm sure we'll learn to deal with it) and our internet connections will be made out of wi-fi meshes and long-range point-to-point radio connections.

How can you have any sort of economy with such an absence of scarcity? Well, presumably there will still be some things that are scarce and desirable (like land in nice locations). And presumably there will still be people doing particularly creative things that improve peoples' lives (making movies or inventing new recipes) - we'd just need some way to connect the two.

I'm not advocating for intellectual property as such here (that makes the accounting far more complicated than it really needs to be and introduces artificial barriers to creativity) - instead I'm imagining something more like Nielsen ratings (but for everything) to determine who gets the scarce wealth.

I guess we'll probably always have some form of money, and I'm sure the arguments about how it is accounted for will be as heated as ever. How do you decide on the relative value of the recipe your dinner was made from versus the movie you watched?