The usual binary representation of floating point numbers is a sign/exponent/mantissa representation. To get an idea of what this means, consider a hypothetical 6-bit floating point format with 1 sign bit, 3 bits of exponent and 2 bits of mantissa. That gives us 64 possibilities, which is sufficiently few that it's practical to list all the possible values:
| Bit pattern | Value | Bit pattern | Value |
|-------------|-------|-------------|-------|
| 0 000 00 | +0 | 1 000 00 | -0 |
| 0 000 01 | +0.0625 | 1 000 01 | -0.0625 |
| 0 000 10 | +0.125 | 1 000 10 | -0.125 |
| 0 000 11 | +0.1875 | 1 000 11 | -0.1875 |
| 0 001 00 | +0.25 | 1 001 00 | -0.25 |
| 0 001 01 | +0.3125 | 1 001 01 | -0.3125 |
| 0 001 10 | +0.375 | 1 001 10 | -0.375 |
| 0 001 11 | +0.4375 | 1 001 11 | -0.4375 |
| 0 010 00 | +0.5 | 1 010 00 | -0.5 |
| 0 010 01 | +0.625 | 1 010 01 | -0.625 |
| 0 010 10 | +0.75 | 1 010 10 | -0.75 |
| 0 010 11 | +0.875 | 1 010 11 | -0.875 |
| 0 011 00 | +1 | 1 011 00 | -1 |
| 0 011 01 | +1.25 | 1 011 01 | -1.25 |
| 0 011 10 | +1.5 | 1 011 10 | -1.5 |
| 0 011 11 | +1.75 | 1 011 11 | -1.75 |
| 0 100 00 | +2 | 1 100 00 | -2 |
| 0 100 01 | +2.5 | 1 100 01 | -2.5 |
| 0 100 10 | +3 | 1 100 10 | -3 |
| 0 100 11 | +3.5 | 1 100 11 | -3.5 |
| 0 101 00 | +4 | 1 101 00 | -4 |
| 0 101 01 | +5 | 1 101 01 | -5 |
| 0 101 10 | +6 | 1 101 10 | -6 |
| 0 101 11 | +7 | 1 101 11 | -7 |
| 0 110 00 | +8 | 1 110 00 | -8 |
| 0 110 01 | +10 | 1 110 01 | -10 |
| 0 110 10 | +12 | 1 110 10 | -12 |
| 0 110 11 | +14 | 1 110 11 | -14 |
| 0 111 00 | +Infinity | 1 111 00 | -Infinity |
| 0 111 01 | SNaN | 1 111 01 | SNaN |
| 0 111 10 | QNaN | 1 111 10 | QNaN |
| 0 111 11 | QNaN | 1 111 11 | QNaN |
So we can represent numbers up to (but not including) 16, in steps of (at finest) 1/16.
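For concreteness, here's a quick Python sketch that decodes this 6-bit format and reproduces the table; the bias of 3 and the IEEE-style treatment of denormals, infinities and NaNs are read straight off the table above, so treat it as an illustration rather than a spec.

```python
# Decode the hypothetical 6-bit format: 1 sign bit, 3 exponent bits
# (bias 3), 2 mantissa bits, with denormals, infinities and NaNs.

def decode6(bits: int):
    sign = -1.0 if (bits >> 5) & 1 else 1.0
    exponent = (bits >> 2) & 0b111
    mantissa = bits & 0b11
    if exponent == 0b111:                     # all-ones exponent: infinity or NaN
        return sign * float("inf") if mantissa == 0 else "NaN"
    if exponent == 0:                         # denormals: 0.mm * 2^(1 - bias)
        return sign * (mantissa / 4) * 2.0 ** -2
    return sign * (1 + mantissa / 4) * 2.0 ** (exponent - 3)

# Reproduce the positive half of the table:
for bits in range(32):
    print(f"0 {bits >> 2:03b} {bits & 0b11:02b} -> {decode6(bits)}")
```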
The distribution of these numbers (on the positive side) looks like this:
As you can see, it approximates an exponential curve but is piecewise linear. Despite there being 64 distinct bit patterns, only 55 distinct real numbers are represented, because of the infinities, the NaNs and the two zeroes. Near zero the pattern breaks down: the numbers between 0 and 0.25 (the "denormals") have the same spacing as the numbers between 0.25 and 0.5. Denormals cause enormous headaches for CPU manufacturers, OS developers, toolchain developers and developers of numerical software, since they often either don't work according to the standards or are much slower than the rest of the floating point numbers.
With IEEE-754 single precision floating point numbers, there are so many pieces to the piecewise linear curve (254 of them for each sign) that the piecewise linearity wouldn't be visible on a graph like this, and the proportion of the bit patterns corresponding to distinct real numbers is much higher (close to 255/256 rather than 7/8).
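That 255/256 figure is easy to sanity-check with a few lines of Python: only the all-ones-exponent patterns (the infinities and NaNs) and one of the two zero patterns fail to contribute a distinct real number.

```python
# Counting bit patterns for IEEE-754 single precision (binary32):
# 8 exponent bits, 23 mantissa bits.
total_patterns = 2 ** 32
non_finite     = 2 * 2 ** 23                        # all-ones exponent: 2 infinities plus the NaNs
distinct_reals = total_patterns - non_finite - 1    # minus 1 because +0 and -0 are the same real

print(distinct_reals / total_patterns)              # just under 255/256
print(255 / 256)                                    # 0.99609375
```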
How different would the world be if we just used an exponential curve instead of a piecewise linear approximation of one? That is to say, what if instead of representing a floating point number x as A*2^B for some integers A and B, we represented it as 2^(A/k) for some integer A and a constant integer k? In other words, we store the exponent as a fixed point fraction (hence the name Fractional Exponent Floating Point, or FEFP). Let's choose k such that the powers of two are the same as in the exponent/mantissa representation, i.e. k=4 in this case, and again list all the possibilities:
| Bit pattern | Value | Bit pattern | Value |
|-------------|-------|-------------|-------|
| 0 000 00 | +0 | 1 000 00 | -0 |
| 0 000 01 | +0.1487 | 1 000 01 | -0.1487 |
| 0 000 10 | +0.1768 | 1 000 10 | -0.1768 |
| 0 000 11 | +0.2102 | 1 000 11 | -0.2102 |
| 0 001 00 | +0.25 | 1 001 00 | -0.25 |
| 0 001 01 | +0.2973 | 1 001 01 | -0.2973 |
| 0 001 10 | +0.3536 | 1 001 10 | -0.3536 |
| 0 001 11 | +0.4204 | 1 001 11 | -0.4204 |
| 0 010 00 | +0.5 | 1 010 00 | -0.5 |
| 0 010 01 | +0.5946 | 1 010 01 | -0.5946 |
| 0 010 10 | +0.7071 | 1 010 10 | -0.7071 |
| 0 010 11 | +0.8409 | 1 010 11 | -0.8409 |
| 0 011 00 | +1 | 1 011 00 | -1 |
| 0 011 01 | +1.1892 | 1 011 01 | -1.1892 |
| 0 011 10 | +1.4142 | 1 011 10 | -1.4142 |
| 0 011 11 | +1.6818 | 1 011 11 | -1.6818 |
| 0 100 00 | +2 | 1 100 00 | -2 |
| 0 100 01 | +2.3784 | 1 100 01 | -2.3784 |
| 0 100 10 | +2.8284 | 1 100 10 | -2.8284 |
| 0 100 11 | +3.3636 | 1 100 11 | -3.3636 |
| 0 101 00 | +4 | 1 101 00 | -4 |
| 0 101 01 | +4.7568 | 1 101 01 | -4.7568 |
| 0 101 10 | +5.6569 | 1 101 10 | -5.6569 |
| 0 101 11 | +6.7272 | 1 101 11 | -6.7272 |
| 0 110 00 | +8 | 1 110 00 | -8 |
| 0 110 01 | +9.5137 | 1 110 01 | -9.5137 |
| 0 110 10 | +11.3137 | 1 110 10 | -11.3137 |
| 0 110 11 | +13.4543 | 1 110 11 | -13.4543 |
| 0 111 00 | +16 | 1 111 00 | -16 |
| 0 111 01 | +SNaN | 1 111 01 | -SNaN |
| 0 111 10 | +QNaN | 1 111 10 | -QNaN |
| 0 111 11 | +Infinity | 1 111 11 | -Infinity |
I moved the infinities to the "all 1s" representation, and just reserved 4 extra entries for NaNs (I'm not sure if anyone actually uses the facility of NaNs to store extra data, but a nice thing about this representation is that you can dedicate as few or as many bit patterns to NaNs as you like).
A couple of improvements: our "minimum interval" between adjacent numbers is now about 1/36 instead of 1/16 and (because of the NaN tweak) we can represent slightly higher numbers.
In this representation, there's only one "denormal" (zero) for each sign. Zero can't fit the pattern however we cut it, since our "A" value never reaches minus infinity, but I suspect people would complain if they couldn't put zero in their floating point variables. This does mean that the interval between zero and the next representable number is larger than the minimum interval.
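Here's a small Python sketch of how the FEFP table above can be generated; the layout (the five non-sign bits read as a fixed point number A with two fractional bits, a bias of 3, and the particular patterns reserved for zero, the NaNs and infinity) is just my reading of the table.

```python
# Decode a 6-bit FEFP pattern: value = 2^(A/4 - 3), where A is the
# five non-sign bits treated as a fixed point number with k = 4.

def decode_fefp(bits: int):
    sign = -1.0 if (bits >> 5) & 1 else 1.0
    a = bits & 0b11111                 # fixed point exponent, 2 fractional bits
    if a == 0:
        return sign * 0.0              # the single "denormal": zero
    if a == 0b11111:
        return sign * float("inf")     # all 1s: infinity
    if a in (0b11101, 0b11110):
        return "NaN"                   # the reserved NaN patterns
    return sign * 2.0 ** (a / 4 - 3)

for bits in range(32):
    print(f"0 {bits >> 2:03b} {bits & 0b11:02b} -> {decode_fefp(bits)}")
```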
Would it be practical for CPUs to actually do computations with these numbers? Surprisingly, yes - though the trade-offs are different. Multiplication and division of FEFP numbers is just addition and subtraction (of the fixed point fractional exponent). Addition and subtraction are more complicated - you need to exponentiate the representations to recover the actual values, add or subtract those, and then take the logarithm to get the representation of the result. That sounds really hard, but using the CORDIC algorithms and some of the trickery currently used to add and subtract floating point numbers, I think additions and subtractions could be done in about the same time, and with roughly the same number of transistors, as floating point multiplications and divisions currently take. What's more, it might be possible to reuse some of the add/subtract logic to compute exponents and logarithms themselves (which are quite useful and often implemented in hardware anyway).
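As a toy model of that trade-off, here's what the arithmetic looks like in Python on the non-sign code alone (signs, zero, NaNs, overflow and careful rounding all glossed over); real hardware would do the exp/log step with CORDIC-style logic rather than calling out to floating point, but the shape of the computation would be the same.

```python
import math

# FEFP arithmetic on the non-sign code A, where the value is 2^(A/K - BIAS).
K, BIAS = 4, 3

def fefp_mul(a: int, b: int) -> int:
    # 2^(a/K - BIAS) * 2^(b/K - BIAS) = 2^((a + b - BIAS*K)/K - BIAS):
    # multiplication is just fixed point addition of the codes.
    return a + b - BIAS * K

def fefp_add(a: int, b: int) -> int:
    # Exponentiate to recover the real values, add them, then take the
    # logarithm again and round back to the nearest code.
    total = 2.0 ** (a / K - BIAS) + 2.0 ** (b / K - BIAS)
    return round((math.log2(total) + BIAS) * K)

two, four = 16, 20              # the codes for 2.0 and 4.0
print(fefp_mul(two, four))      # 24, the code for 8.0
print(fefp_add(two, four))      # 22, the code for ~5.66, the nearest to 6.0
```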
So I actually think that FEFP is superior to IEEE754 in several ways. The latter is sufficiently entrenched that I doubt that it could ever be replaced completely, but FEFP might find some use in special-purpose hardware applications where IEEE754 compatibility doesn't matter, such as signal processing or graphics acceleration.
I quite like that idea -- it'd prevent you using floating-point numbers for things like iteration counts, which they're ill-suited for anyway :-).
Another thing I'd like to see (in hardware) is support for (arbitrary-precision?) rationals. That seems like it might be fun...
Exact rational arithmetic is definitely fun! But it does have a bad tendency for the numerators and denominators to get much larger than they really need to be, and hardware support for arbitrary precision integers is always going to be fiddly, because you want to avoid having instructions with potentially unbounded execution time (since then you'd have potentially unbounded interrupt latency). Hardware support for arbitrary precision integers (and therefore rationals) could definitely be improved, though.
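For example, iterating something as innocuous as the logistic map with Python's Fraction (an arbitrary toy example, nothing to do with any particular workload) shows how quickly the denominators blow up:

```python
from fractions import Fraction

# Iterate the logistic map x -> r*x*(1-x) with exact rationals: the number
# of digits in the denominator roughly doubles on every step.
x, r = Fraction(1, 3), Fraction(7, 2)
for step in range(8):
    x = r * x * (1 - x)
    print(step, len(str(x.denominator)), "denominator digits")
```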
I'm planning to use exact rational arithmetic in my emulator to figure out the timebases - every clock speed for everything ever is some (not usually horrible) exact fraction of 1Hz.
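Something along these lines, with Python's Fraction and made-up example clocks (a 315/88 MHz NTSC-style master clock and a 1/3 MHz device clock, not the emulator's real ones):

```python
from fractions import Fraction
from math import lcm

# Store each clock frequency as an exact fraction of 1 Hz.
master_hz = Fraction(315_000_000, 88)     # NTSC colour subcarrier, exactly 315/88 MHz
device_hz = Fraction(1_000_000, 3)        # a hypothetical 1/3 MHz device clock

# Exact periods in seconds, and a common tick that both periods are
# exact integer multiples of.
periods = [1 / master_hz, 1 / device_hz]
tick = Fraction(1, lcm(*(p.denominator for p in periods)))

print(tick)                               # 1/315000000 of a second
print([p / tick for p in periods])        # [Fraction(88, 1), Fraction(945, 1)]
```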