When I finally write the compiler for my super-language, I want to make it nice and modular - i.e. make it very easy to compile different languages, target different architectures and include different optimizations.
In order to achieve this I will need to have some kind of intermediate form in which pieces of code in various states of compilation can be transmitted from one module to another.
This form should be as simple as possible but should also make it possible to express all the complexities that occur in real code. In short, it will need to be a tree structure.
While I'm not convinced that the Lisp idea of "code without syntax" is a great one (I think every available visual/syntactical cue makes it easier to understand unfamiliar code) I have to admit that the Lisp data structures (SExps and Lists) are perfect for this role. I wouldn't expect people to read large quantities of this code but it's handy to be able to print it out for debugging purposes.
A Symbol (what Lispers call an SExp, S-Expression or Symbolic Expression) is either an Integer, a String, an Atom, a List or a Vector. Integers are what you'd expect them to be. Strings are used to hold things like variable names. Atoms are displayed as strings but treated as unique, opaque integers internally. All you do with an Atom is compare it to another Atom and see if they are the same. See Stringy Identifiers for another perspective on these. This is not quite the same as the Lisp usage of the word, where integers are also atoms.
A List (delimited with parentheses) is effectively an object. It consists of one or more Symbols, the first of which must be an Atom. It has a type, which is determined by that Atom and by the number and types of the following Symbols.
A Vector [delimited with square brackets] is effectively an array whose length may not be known until run-time. This can be used for a sequence of statements (for example) or for the bytes in the compiled program before it is output to disk. Lisp doesn't distinguish between lists and vectors but I think it is useful to keep them separate.
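To make this concrete, here is a minimal sketch of how such a Symbol type might look in Python (purely an illustration of the idea; the names Atom, Symbol and show are just placeholders):

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str          # displayed as a string, only ever compared for equality
    def __repr__(self):
        return self.name

# A Symbol is an Integer, a String, an Atom, a List (here a tuple, whose first
# element must be an Atom) or a Vector (here a Python list).
Symbol = int | str | Atom | tuple | list

def show(s: Symbol) -> str:
    # Print a Symbol in the parenthesised/bracketed form described above.
    if isinstance(s, tuple):
        return "(" + " ".join(show(x) for x in s) + ")"
    if isinstance(s, list):
        return "[" + " ".join(show(x) for x in s) + "]"
    return repr(s)

print(show((Atom("jmp"), 5)))   # prints (jmp 5)
print(show([0xEB, 5]))          # prints [235 5]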
It is not a coincidence that this is quite similar to the RTL of GCC. Many of the same problems are being solved, and I was learning about RTL when I came up with this. However, this structure is a little more general-purpose and a little simpler. In particular it can also be used for front-ends.
These expressions will generally be manipulated by pattern matchers - that is, we search through the program for an object matching some characteristics, perform some transformation on it and then substitute the result for the original object.
For example, here is a part of an x86 assembler written in a hypothetical language which has support for these Symbols built in:
with(x) {
    match (`jmp` d:Int):
        if (isSByte(d))
            becomes (`db` [0xeb d]);
        else
            becomes (`db` [0xe9 bytesFromDWord(d)]);
    ...
}
This matches a List which consists of the `jmp` atom followed by an integer (which we will call d). We pass that integer to a function and (depending on the result) emit one of two possible forms of the x86 JMP instruction. As "bytesFromDWord" is known to be a function and not a Symbol, it is evaluated and the result placed into the vector, rather than two Symbols (bytesFromDWord and (d)) being created and placed into the vector.
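In a language without that built-in support, the same rule can still be written as an ordinary function over Symbols. Here is a rough Python sketch using structural pattern matching and the Symbol representation from earlier; is_sbyte and bytes_from_dword are assumed helpers corresponding to isSByte and bytesFromDWord:

def is_sbyte(d: int) -> bool:
    return -128 <= d <= 127

def bytes_from_dword(d: int) -> list[int]:
    return list((d & 0xFFFFFFFF).to_bytes(4, "little"))

def assemble(sym: Symbol) -> Symbol:
    match sym:
        case (Atom("jmp"), int(d)):
            if is_sbyte(d):
                return (Atom("db"), [0xEB, d & 0xFF])           # short JMP rel8
            return (Atom("db"), [0xE9, *bytes_from_dword(d)])   # near JMP rel32
        # ... patterns for the other instructions would follow ...
    return sym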
This works just as well for the front end. The "unless" statement that I have written about before can be broken down using this code:
match (`unless` condition): becomes (`if` (`!` condition));
And we should have this optimization as well, in case condition is itself a negated condition:
match (`!` (`!` x)): becomes (x);
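In the Python sketch from above, both of these front-end rules fit into one small rewrite function (again just an illustration, not the hypothetical syntax itself):

def rewrite(sym: Symbol) -> Symbol:
    match sym:
        case (Atom("unless"), condition):
            return (Atom("if"), (Atom("!"), condition))
        case (Atom("!"), (Atom("!"), x)):
            return x
    return sym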
Quite a lot of the compiler can be written very easily and tersely this way. There are some complications to do with the order in which transformations are applied, though - we would rather avoid having to specify that explicitly and let the computer figure it out. But the order might depend in quite a complicated way on the context. One way to deal with this might be to keep the pre-transformed Symbol around and continue to match against it even after it has been replaced. Another might be for transformations to transform other transformations as they are applied to the Symbols. I haven't quite figured this bit out yet. This seems like one of those problems that might be technically intractable but always works out to be fast enough in practice.
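One naive way to sidestep the ordering question (not necessarily the scheme I will end up with) is simply to rewrite children before parents and repeat until nothing changes. Continuing the Python sketch, with rewrite and show as defined earlier:

def rewrite_all(sym: Symbol, rewrite) -> Symbol:
    # Rewrite the children first, then the node itself, until a fixed point.
    while True:
        if isinstance(sym, tuple):
            new = tuple(rewrite_all(s, rewrite) for s in sym)
        elif isinstance(sym, list):
            new = [rewrite_all(s, rewrite) for s in sym]
        else:
            new = sym
        new = rewrite(new)
        if new == sym:
            return sym
        sym = new

# (unless (! ready)) becomes (if (! (! ready))) and then (if ready).
print(show(rewrite_all((Atom("unless"), (Atom("!"), Atom("ready"))), rewrite)))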
With some cleverness, it ought to be possible to write a very good compiler this way. Great speed is possible since no string handling is done after the initial parsing phase and symbol-table lookups (e.g. we don't write out assembler instructions and rely on a separate assembler to assemble them, though we could still easily obtain assembly code if necessary by having a separate output module). Many speed improvements are possible (e.g. performing some transformations when the compiler itself is compiled).
But the really great thing about this scheme is that code for the compiler is completely self-describing - it's exactly the code one would hope to be able to write.
Having a set format for code also enables the scenario I wrote about in Compiler compilers.
Finally, this structure works just as well for disassemblers and decompilers as it does for compilers and assemblers. Some of the algorithms will be different (most of the transformations probably cannot be reversed directly) but many of the same patterns show up.