A compiler ought to be able to perform computations at compile time. Unlike the generated code, these computations don't have to be blazingly fast (they only happen at compile time), and they don't have to conform to any particular machine's architecture (the language should be the same across architectures anyway), so some nice things can be done.
Arbitrary-precision integer arithmetic is a nice, easy one, and one that many scripting languages already do. This allows you to write expressions such as:
const int a = 1000000000000/1000;
and have a initialized to one billion even on 32-bit machines.
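To make the idea concrete, here is a rough sketch of how a constant folder might do its internal arithmetic in an arbitrary-precision type regardless of the target's word size. Boost.Multiprecision's cpp_int stands in here only for whatever bignum representation a real front end would use, and fold_div is just an illustrative name:

#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

using BigInt = boost::multiprecision::cpp_int;   // arbitrary-precision integer

// Fold a constant division the way a front end might, independently of the
// target's word size: the intermediate 1000000000000 never has to fit in a
// 32-bit int, only the final value does.
BigInt fold_div(const BigInt& numerator, const BigInt& denominator) {
    return numerator / denominator;
}

int main() {
    BigInt value = fold_div(BigInt("1000000000000"), 1000);
    std::cout << value << '\n';   // prints 1000000000 (one billion)
    return 0;
}

Only the final value has to fit the declared type; every intermediate result can be as wide as it likes.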
Rational arithmetic is also easy and useful. This allows one to write:
const int a = (1000/3)*3;
and have a initialized to exactly 1000, even on machines lacking any sort of floating-point facility.
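A sketch of what the constant evaluator might carry internally: an exact fraction that is never rounded until the final conversion. The Rational type here is hypothetical, not any particular compiler's internals:

#include <cstdint>
#include <numeric>   // std::gcd

// Minimal exact rational number, as a compile-time evaluator might carry it.
// Normalized on construction; sign handling omitted for brevity.
struct Rational {
    std::int64_t num;
    std::int64_t den;

    constexpr Rational(std::int64_t n, std::int64_t d = 1)
        : num(n / std::gcd(n, d)), den(d / std::gcd(n, d)) {}
};

constexpr Rational operator/(Rational a, Rational b) {
    return Rational(a.num * b.den, a.den * b.num);
}
constexpr Rational operator*(Rational a, Rational b) {
    return Rational(a.num * b.num, a.den * b.den);
}

// (1000/3)*3 stays exactly 1000/1: nothing is rounded along the way.
constexpr Rational a = (Rational(1000) / 3) * 3;
static_assert(a.num == 1000 && a.den == 1, "exact rational arithmetic");

int main() { return 0; }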
Then there are closed forms such as:
const int a = sqrt(1000)^2; // a==1000, not 961
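Here the trick is to keep sqrt(1000) in a symbolic closed form, say coefficient*sqrt(radicand), until the surrounding expression forces it down to a plain number. A minimal, purely illustrative sketch:

#include <cstdint>

// A value of the form coeff * sqrt(radicand), kept symbolically so that
// squaring it is exact instead of going through a rounded 31.622... .
struct Surd {
    std::int64_t coeff;
    std::int64_t radicand;
};

constexpr Surd symbolic_sqrt(std::int64_t n) { return Surd{1, n}; }

// (coeff * sqrt(radicand))^2 == coeff^2 * radicand, exactly.
constexpr std::int64_t square(Surd s) {
    return s.coeff * s.coeff * s.radicand;
}

// sqrt(1000)^2 folds to 1000, not to 961 (31^2) as rounding to an integer
// square root first would give.
constexpr std::int64_t a = square(symbolic_sqrt(1000));
static_assert(a == 1000, "closed form preserved");

int main() { return 0; }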
One could even make a compiler understand pi as a symbolic constant of type "Real" and evaluate it to the appropriate number of significant figures for the expression and type that is being initialized. So:
const int a = truncate_cast<int>(pi*10000);
would initialize a to 31416.
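One way to read that example: pi stays symbolic until the initialization forces a concrete value, at which point the compiler expands it to just the significant figures the destination needs (3.1416 here) before doing the multiplication and the truncation. The sketch below assumes that reading; truncate_cast_int, Decimal, and the five-figure expansion are all illustrative, not any actual compiler's machinery:

#include <cstdint>

// Sketch only: pi expanded to the significant figures the destination needs,
// held as an exact fraction so the later arithmetic loses nothing.
struct Decimal {
    std::int64_t digits;   // significand, e.g. 31416
    std::int64_t scale;    // power-of-ten denominator, e.g. 10000 (pi ~= 31416/10000)
};

// pi to five significant figures, enough for the int destination in this example.
constexpr Decimal pi_5_figures{31416, 10000};

// Multiply by an integer exactly, then truncate toward zero, mirroring the
// hypothetical truncate_cast<int>(pi*10000) above.
constexpr std::int64_t truncate_cast_int(Decimal d, std::int64_t factor) {
    return (d.digits * factor) / d.scale;   // integer division truncates
}

static_assert(truncate_cast_int(pi_5_figures, 10000) == 31416,
              "pi expanded to the needed precision before truncation");

int main() { return 0; }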
Once these things are in place, a compiler can perform quite sophisticated mathematical transformations, allowing programmers to write what they really mean and still obtain optimal code, even up to the level of Maple or Mathematica if the compiler writers are sophisticated enough.