I was thinking about FPGAs recently, and that got me wondering about the possibility of designing a computer architecture that is completely homogeneous - that is, composed of a number of identical cells arranged in a square grid. Some of the cells (presumably those on the edges or even just a corner or two) would be connected to IO devices (perhaps directly, perhaps via a bus for a more general machine).
Each cell would have some kind of logic and a small amount of memory (enough to remember how it is connected to adjacent cells and to use as actual user memory).
We would like to be able to have different pieces of the machine doing different things at the same time, so cells should be independent (except where we explicitly connect them to adjacent cells).
We also want to be able to have the hardware reconfigure itself (so while group-of-cells-A is doing something else, group-of-cells-B can be reprogramming group-of-cells-C). In order to do this, each cell must be able to address any other cell that it's connected to.
If we have some string of cells connected together in a line, how do we differentiate between data sent all the way to the end of the line and programming instructions sent to a particular cell somewhere along the line? For a chip with 2^n cells we need at least n+1 bits connecting each cell to its neighbours - n data bits (x) and one "program" bit (p). If a cell in "interconnect" mode sees an input of (p,x)==(0,x) then it passes (0,x) out the other side. If it sees (1,x>0) then it passes (1,x-1) out, and if it sees (1,0) then it is reprogrammed. I guess we'd also need some additional data bits to tell the cell what mode it should be in after reprogramming. A cell with 4m input bits and m output bits requires m*2^(4m) bits to completely describe its truth table, and this is clearly going to be much larger than m, so obviously we can't program it with an arbitrary truth table in one go - we'd either have to upload it in pieces or limit the possible cell modes to some useful subset of the possible truth tables.
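To make the protocol concrete, here is a minimal sketch in Python of a line of cells in "interconnect" mode handling a (p,x) signal, plus the truth-table size calculation. The function and field names are my own invention, not part of any real design:

```python
def step(cells, p, x):
    """Propagate a (program bit, data) pair down a line of cells.
    Each cell is a dict with a 'mode' field. Returns the pair that
    emerges from the far end, or None if a cell consumed it."""
    for cell in cells:
        if p == 0:
            continue                 # (0,x): plain data, passed through unchanged
        if x > 0:
            x -= 1                   # (1,x>0): decrement the address, pass (1,x-1) on
        else:
            cell['mode'] = 'reprogrammed'  # (1,0): this cell is the target
            return None
    return (p, x)

def truth_table_bits(m):
    """Bits needed to fully specify a cell with 4m inputs and m outputs:
    m output bits for each of the 2^(4m) possible input combinations."""
    return m * 2 ** (4 * m)

line = [{'mode': 'interconnect'} for _ in range(4)]
step(line, 1, 2)                      # address cell 2 (the third cell)
print([c['mode'] for c in line])      # only the third cell is reprogrammed
print(truth_table_bits(2))            # 2 * 2^8 = 512 bits, far more than m = 2
```

Even for m=2 the full truth table is 512 bits, which illustrates why the cell's mode word has to select from a fixed menu of behaviours (or be uploaded incrementally) rather than encode an arbitrary truth table.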
This is probably an incredibly inefficient way to build a computer, since for any given state a large number of the cells are likely to be acting as just interconnects or just memory and only using a tiny proportion of their capacity. But having the cells be identical might mean that doesn't matter so much if we can build a very large number of them very cheaply.
A variation on this architecture might be to have the cells arranged three-dimensionally in a cube rather than on a flat 2D surface. Current photolithographic fabrication technologies aren't quite up to this yet (the fattest chips are only maybe 10 layers thick) but I expect that the nanoassembly and microfluidic channels for integrated cooling that will be required to make fully 3D chips aren't too far off. When that happens the chips will be so complex that we'll have to use concepts like repetition and fractal-like scale invariance to a greater extent to make their designs tractable. In other words, chips will start to look more biological - more like living tissue. Perhaps a more homogeneous computer design will come into its own in that era.
[...] In the not-too-distant future, we'll hit a limit on how small we can make transistors. The logical next step from there will be to start building up - moving from chips that are almost completely 2D to fully 3D chips. When that happens, we'll have to figure out a way to cool them. Unlike with a 2D chip, you can't just stick a big heatsink and fan on top because it would only cool one surface, leaving the bulk of the chip to overheat. What you need is a network of cooling pipes distributed throughout the chip, almost like a biological system. [...]