Normally when compiling a program, you tell the compiler how much optimization you want and what the input files are; it goes off, does its thing, and comes back when it's done. Sometimes I think it might be nice if one instead told the compiler how much time to spend doing the compiling. It would then do the absolute minimum to make a working binary, and then gradually do more and more optimizations until the timer ran out. This would make the time it takes to rebuild things much more predictable. One downside is that the performance of the resulting program would depend on unpredictable things like how busy the build machine was. Another is that it's probably more efficient for a compiler to decide upfront what the optimizations should be rather than making lots of intermediate binaries of different optimization levels.
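To make the idea concrete, here's a toy sketch of what such a time-budgeted compile driver might look like. The pass names and the `emit_binary` callback are made up for illustration; the point is just the structure: emit a correct binary first, then keep optimizing and re-emitting until the deadline.

```python
import time

# Hypothetical optimization passes, cheapest first; each takes and
# returns an intermediate representation of the whole program.
def inline_small_functions(ir): return ir
def unroll_hot_loops(ir): return ir
def vectorize(ir): return ir

PASSES = [inline_small_functions, unroll_hot_loops, vectorize]

def compile_with_budget(ir, emit_binary, budget_seconds):
    """Emit a working binary immediately, then keep re-optimizing and
    re-emitting until the time budget runs out."""
    deadline = time.monotonic() + budget_seconds
    emit_binary(ir)                # absolute minimum: a correct, unoptimized build
    for optimization_pass in PASSES:
        if time.monotonic() >= deadline:
            break                  # timer ran out; keep the last binary we emitted
        ir = optimization_pass(ir)
        emit_binary(ir)            # each round produces a faster working binary
    return ir
```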
However, this sort of thing might be more useful as a run-time process - i.e. a JIT compiler. The system can monitor which bits of the code are being run most often (this is very easy to do - just interrupt at a random time and see what code is being run) and concentrate optimization efforts on those parts. The compilation can continue (gradually making the program faster and faster) until the point of diminishing returns is reached. I understand there's a Java Virtual Machine which can do this.
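The "interrupt at a random time and see what code is being run" trick is just a sampling profiler. Here's a minimal sketch of that part in Python (Unix only, since it relies on SIGPROF); the `busy_work` function is a stand-in for whatever program is being monitored:

```python
import collections
import signal
import sys

samples = collections.Counter()

def record_sample(signum, frame):
    # The frame argument tells us what code was running when the timer
    # interrupted us; counting it makes the hot spots stand out.
    samples[(frame.f_code.co_filename, frame.f_code.co_name)] += 1

# Fire SIGPROF roughly every 10 ms of CPU time used by this process.
signal.signal(signal.SIGPROF, record_sample)
signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)

def busy_work():
    total = 0
    for i in range(5_000_000):
        total += i * i
    return total

busy_work()
signal.setitimer(signal.ITIMER_PROF, 0)   # stop sampling

# The functions with the most samples are where optimization effort pays off.
for (filename, func), count in samples.most_common(5):
    print(f"{count:5d}  {func}  ({filename})")
```

A JIT would do the same bookkeeping internally and feed the counts into its decision about which methods deserve the expensive optimizations.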
Check out the adaptive compilation strategies of SELF, ca. 1992 (which were later also implemented in the Animorphic Smalltalk VM; Sun hired that team, and they went on to write HotSpot).