Assuming they exist (which seems likely according to our current best theories), black holes are very strange things indeed. In particular, the fate of something heading over the black hole horizon seems to be different depending on whether or not you cross the horizon along with it.
In the frame of reference of somebody passing through the horizon, the horizon doesn't seem very special at all - you could cross the horizon of a sufficiently large black hole without even noticing. After passing the horizon, however, escape is impossible and you would inevitably end up spaghettified by the massive tidal forces near the central singularity.
In the frame of reference of somebody outside the black hole, the picture is very different. From that perspective, the gravitational time dilation at the horizon means that the falling object's progress becomes slower and slower, grinding to a complete stop at the horizon itself. The red-shifting of any photons emitted from close to the horizon also means that the object looks redder and redder, its light passing through infra-red, terahertz radiation, microwaves and radio waves of ever-increasing wavelength until the photons are too low in energy to detect - but they never quite stop coming altogether.
In the meantime, the black hole evaporates by the process of Hawking radiation. This takes an unimaginably long time - as long as 10^100 years for the largest black holes - but if you wait long enough it will happen. Supposing that you could somehow keep detecting the infalling observer during the entire period of the evaporation, you'd see them crossing the horizon at the exact moment that the black hole disappeared altogether in a bright flash of energy (the smaller the black hole, the more it radiates). But of course, at that moment the infalling observer has zero mass-energy (as does the black hole as a whole) - so how can it be the same observer as the one in its own frame of reference, whose mass is the same as it was before the experiment began?
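For a sense of scale (this calculation isn't part of the argument above, and the mass is chosen purely to reproduce the quoted order of magnitude), the standard semiclassical estimate of the evaporation time is:

```latex
t_{\mathrm{evap}} \approx \frac{5120\,\pi\,G^{2}M^{3}}{\hbar c^{4}}
                 \approx 2\times10^{67}\left(\frac{M}{M_{\odot}}\right)^{3}\ \mathrm{years},
\qquad\text{so } M \sim 10^{11}M_{\odot}\ \Rightarrow\ t_{\mathrm{evap}} \sim 10^{100}\ \mathrm{years}.
```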
Clearly our current theories of physics are insufficient here - we don't yet have a consistent theory of quantum gravity so we just don't know what happens near a tiny black hole when the spacetime curvature is high enough to be in the quantum regime.
One way out of the apparent paradox is simply that the two observers can never compare notes, because whatever information the infalling observer collects is inevitably destroyed at the singularity, and no information from inside the black hole can ever be transmitted to the outside universe. The interior of the black hole is causally disconnected from the outside - it can be considered to exist infinitely far in the future (long after the black hole evaporates) or can even be considered to be an entirely different universe in its own right. If the two observers can never get back together to compare notes (and find that they disagree) there isn't really any disagreement. It's a bit like the Many Worlds Interpretation of quantum mechanics - observers in different parallel universes can't subsequently interact, so they can't disagree about the observable outcome of some experiment.
But in both of these cases there is a philosophical problem - if the universe is qualitatively different for different observers then it seems to make a mockery of the very idea of an objective reality. Just as the IO Monad theory of subjective experience has no problems with observers disagreeing on matters that make no difference in an objective sense, it seems like general relativity may require that we have no problems with disagreements between causally disconnected objective realities.
For a while now I've been wanting to write a cycle-exact emulator for the original IBM PC and XT machines that are the direct ancestors of the machine I'm writing this on (and, almost certainly, the machine you're reading this on). I've written an 8088 emulator with cycle timings that I think are plausible but I have no idea if they're correct or not. The exact details of the timings don't seem to be published anywhere (at least not on the internet) so the only way to determine if they are correct is to compare against real hardware.
So, when I saw a cheap XT for sale on eBay recently, I made a spur-of-the-moment bid, and won it. It is a bit beaten up - the case was bent in one place, the speaker cone was falling off and the end of one of the ISA slots was broken off. All these problems were easy to fix with a hammer and a bit of superglue, though. It lacks a keyboard and monitor, which makes it rather difficult to do anything useful with it, but it did come with all sorts of weird and wonderful cards:
Hyundai E40080004 MDA card with 66Kb of RAM. It's all discrete logic apart from the CRTC, RAM and a ROM which I think is a character ROM rather than a BIOS ROM (though I could be wrong). The amount of memory makes me suspect it can do graphics, but I can't find any documentation - I'd probably have to reverse engineer a schematic to find out exactly what it's capable of.
Tecmar Captain 200044 card with 384Kb RAM (bringing total to 640Kb), serial port, connector for a parallel port and possibly an RTC (it has a CR2032 battery on it anyway).
AST 5251/11 - apparently this enables PCs to connect, via Twinax, to a 5251 terminal server. It has the largest DIP-packaged IC I've ever seen - a Signetics N8X305N RISC microprocessor running at 8MHz (16MHz crystal) with 6KB of ROM and 16KB of SRAM (12KB on a daughterboard) for its own purposes.
A generic serial+parallel card.
IBM floppy and MFM hard drive controller cards.
IBM serial card.
PGS scan doubler II. It seems to be pretty rare - there's only one mention of it on Google, and I've never heard of such a device being used with PCs before (though I understand they were more popular in the Amiga-verse). It only uses the power and clock lines from the ISA bus - it has two CGA-style ports on the back, one for input and one for output. You loop the CGA card's output back into one of the ports, and the other one outputs a signal with twice the line rate (it buffers each line in its internal 2Kb RAM and outputs each one twice, at double the pixel rate). I'm guessing the output had to go to a monitor with a CGA-style input which could run at VGA frequencies, which can't have been all that common.
It also comes with a floppy drive and a hard drive which makes the most horrendous metal-on-metal grinding sounds when it spins up and down (so I'll be very surprised if it still works).
This machine has clearly been a real working PC for someone rather than a perfectly preserved museum piece - it tells stories. At some point the original MDA card failed and was replaced with a clone, the RAM was upgraded, it's been used as a terminal for a mainframe, and at some point the owner read about this device that improves your video output, bought one, found it didn't work with his MDA card but just left it in there.
Due to the lack of a keyboard and monitor (neither my TV nor my VGA monitor can sync to MDA frequencies) I haven't got it to boot yet. I tried using the scan doubler with the MDA and my VGA monitor (connecting the mono video signal via the red channel of the doubler) and the monitor was able to sync, but the output was garbage - I guess the doubler is either broken or only works at CGA frequencies. If it does work with CGA then it'll be useful for seeing the TTL CGA output (though I'll have to put something together to convert the RGBI digital signals to analogue RGB - with a proper fix-up for colour 6, of course).
I ordered a CGA card but decided to see if I could jerry-rig something up in the meantime. I programmed my Arduino to pretend to be an XT keyboard and also the "manufacturing test device" that IBM used in their factories to load code onto the machine during early stage POST (it works by returning 65H instead of AAH in response to a keyboard reset). I then used this to reprogram the CRTC of the MDA to CGA frequencies (113 characters of 9 pixels at 16MHz pixel clock for 18 rows (14 displayed) of 14-scanline characters plus an extra 10 scanlines for a total of 262 scanlines). The sources for this are on github.
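The github sources aren't reproduced here, but a rough sketch of what that CRTC reprogramming amounts to is below. The register values follow from the geometry just described; the horizontal displayed value, sync positions and sync width are my guesses, and outp() (the DOS-era port-output routine) stands in for however the real code does its port writes.

```cpp
// Sketch only - not the code from github.  Reprogram the MDA's 6845 CRTC to
// CGA-like timings: 113 characters per line at a 16MHz pixel clock / 9 pixels
// per character (~15.7kHz), 18 rows of 14 scanlines plus 10 extra = 262 lines.
#include <conio.h>   // outp() on DOS-era compilers

static void writeCRTC(unsigned char reg, unsigned char value)
{
    outp(0x3b4, reg);    // MDA CRTC index port
    outp(0x3b5, value);  // MDA CRTC data port
}

void setCGAFrequencies()
{
    writeCRTC(0, 112);   // R0 horizontal total: 113 characters - 1
    writeCRTC(1, 80);    // R1 horizontal displayed (assumed 80 columns)
    writeCRTC(2, 90);    // R2 hsync position (guess - adjust to centre the picture)
    writeCRTC(3, 10);    // R3 hsync width in characters (guess)
    writeCRTC(4, 17);    // R4 vertical total: 18 rows - 1
    writeCRTC(5, 10);    // R5 vertical total adjust: the 10 extra scanlines
    writeCRTC(6, 14);    // R6 vertical displayed: 14 rows
    writeCRTC(7, 15);    // R7 vsync position (guess)
    writeCRTC(9, 13);    // R9 max scanline address: 14-scanline characters - 1
}
```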
Next, I connected the hsync, vsync, video and intensity lines to a composite output stage like the one in the CGA card (it's just a transistor and a few resistors) and put some junk in video memory. Amazingly, it works - there is a barely legible picture. Even more amazingly, it is in colour! On the same breadboard I was doing this on, I had a Colpitts oscillator driving a 3.5795MHz (NTSC colour burst frequency) crystal from an earlier experiment (before my XT arrived I was trying to get colour TV output from my Arduino). This oscillator wasn't connected to anything, but the very fact that that frequency was bouncing around nearby was enough to turn off the colour killer circuit in the TV and allow it to interpret certain horizontal pixel patterns as colours.
The colours themselves aren't actually related to the picture in any useful way - the hsync and crystal aren't in sync so the phase (and hence the hues) drift over time. In fact, by applying slight heat and/or pressure to the crystal with my fingers, I can change the frequency slightly and make the hue phase drift rate faster or slower with respect to the frame rate, and even stop it altogether (though it's still not a multiple of the line rate so the colours form diagonal patterns).
The sync isn't quite right - because the hsync and vsync signals are just added together the TV loses horizontal sync for 16 lines or so during vsync and then spends half the picture trying to recover. Unfortunately the CRTC in the MDA card has a vertical sync pulse height fixed at 16 lines but it needs to be closer to 3 for NTSC so I haven't been able to get a stable signal even by XORing the signals like the CGA does. The CGA uses a 74LS175 to get a 3-line sync signal, but I don't have one of these to hand.
Here's the schematic for the circuit as I would have made it if I had the parts:
Unfortunately I haven't been able to continue the BIOS POST sequence after running this code - I tried jumping back into the BIOS at a couple of places but it just froze. I'll have to tinker with it some more to see if I can get it to work and determine where the POST is failing next.
I've determined that it should be possible for the XT to send data back over the keyboard line (the clock line is controllable by software). So I'm planning to do bidirectional communication between the XT and a host PC entirely over the keyboard port! I'm writing a tiny little OS kernel that will download a program over the keyboard port, run it, send the results back and then wait for another one.
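The kernel doesn't exist yet, so here is only a rough sketch of the XT-side primitives such a scheme might be built on. The port numbers are the standard XT keyboard-interface ones (scancode at port 60h; the 8255's port B at 61h, with bit 6 driving the keyboard clock line and bit 7 clearing the interface), but the framing, timing and flow control are entirely made up.

```cpp
// Sketch only: XT-side receive and send over the keyboard port.
#include <conio.h>   // inp()/outp() on DOS-era compilers

const unsigned KBD_DATA   = 0x60;  // scancode latch
const unsigned PPI_PORT_B = 0x61;  // bit 6 = keyboard clock line, bit 7 = clear keyboard

// Receive one byte: the host sends it as if it were a scancode, and we
// acknowledge by pulsing the "clear keyboard" bit.  A real implementation
// would wait on IRQ1 (or some agreed pacing) before reading.
unsigned char receiveByte()
{
    unsigned char value = (unsigned char)inp(KBD_DATA);
    int b = inp(PPI_PORT_B);
    outp(PPI_PORT_B, b | 0x80);   // acknowledge
    outp(PPI_PORT_B, b & 0x7f);
    return value;
}

// Send one byte, LSB first, by bit-banging the keyboard clock line, which the
// host samples at an agreed (made-up) bit rate.
void sendByte(unsigned char value)
{
    int b = inp(PPI_PORT_B);
    for (int i = 0; i < 8; ++i) {
        if (value & (1 << i))
            outp(PPI_PORT_B, b | 0x40);    // clock line high = 1
        else
            outp(PPI_PORT_B, b & ~0x40);   // clock line low = 0
        for (volatile int d = 0; d < 1000; ++d) { }   // crude bit delay
    }
    outp(PPI_PORT_B, b | 0x40);   // leave the clock line high (idle)
}
```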
Unfortunately my plans have been derailed because the power supply unit has failed. I noticed that the PSU fan wasn't spinning - I think that has caused some parts in the PSU to overheat. One resistor in there looked very burnt and the resistors for discharging the high-voltage smoothing capacitors were completely open circuit. I replaced all these but it's still not working. I've ordered a cheap ATX power supply and will be transplanting the guts into the XT PSU's box so that I can keep the big red switch.
It would be really interesting to try to make a waveform that is fractal in nature, and actually sounds interesting and musical at whatever frequency you play it at. As you speed it up, some frequencies will go so high they become inaudible and new frequencies move into the audible range at the low end. In normal music, the pitches, effects, tempos and structures all happen at different timescales but in fractal music they would all interact with each other in interesting ways.
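One way to make that idea precise (my formalisation, not something implied above) is a Weierstrass-style sum of one basic waveform g repeated at geometrically spaced rates, which is exactly scale-invariant:

```latex
w(t) = \sum_{n=-\infty}^{\infty} a^{n}\, g\!\left(b^{n} t\right)
\qquad\Rightarrow\qquad
w(b\,t) = \frac{1}{a}\, w(t)
```

Playing it b times faster just rescales the amplitude by 1/a: the components that disappear off the top of the audible range are replaced by identical-looking ones arriving at the bottom, which is exactly the behaviour described above.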
It's well known that it's possible to change a program that uses GOTOs into one that uses loops and conditionals if you're allowed to duplicate code or add more variables. One very easy way is just to change the program into a state machine, with one state for each label and a big loop around the whole thing. But under what conditions can you eliminate GOTOs if you can't do either?
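As a concrete (made-up) example of that transformation:

```cpp
// A made-up example: each label in the goto version becomes a state, and a
// state variable plus one big loop replaces the jumps.

// goto version
int collatzSteps(int n)
{
    int steps = 0;
top:
    if (n == 1)
        goto done;
    if (n % 2 == 0)
        n /= 2;
    else
        n = 3*n + 1;
    ++steps;
    goto top;
done:
    return steps;
}

// state-machine version: no gotos, at the cost of an extra variable
int collatzStepsStateMachine(int n)
{
    int steps = 0;
    enum { stateTop, stateDone } state = stateTop;
    while (state != stateDone) {
        switch (state) {
        case stateTop:
            if (n == 1) { state = stateDone; break; }
            if (n % 2 == 0)
                n /= 2;
            else
                n = 3*n + 1;
            ++steps;
            break;           // stay in stateTop
        default:
            break;
        }
    }
    return steps;
}
```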
The "Multiple break" example in the "More keywords" blog post is a piece of code that can't be de-GOTOized in C. I'm wondering if there are any examples of pieces of code that can't be de-GOTOized without multiple break, extra variables or code duplication. If I understand the abstract of Peterson 1973 correctly I think there aren't - that multiple break is enough to make GOTO redundant (but I can't find a non-paywalled copy of that article to check, and I don't care enough to spend $15 to find out).
Templated C++ classes are a bit nicer to use than non-templated classes, because one can define the entire class within its declaration without having to make sure that the types it uses are defined first (this check is done at instantiation time for template classes, but at parse time for non-template classes). I have found myself making classes templates when they don't really need to be, just so that I don't have to define member functions outside the declaration - i.e. when CTemplate<T> doesn't actually use T for anything, and is only ever instantiated with a dummy parameter as in "typedef CTemplate<void> C;". T may be used as the template parameter to other classes defined this way, though.
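Here's a minimal sketch of the trick with made-up class names. A pair of ordinary classes couldn't both keep their member function bodies inline like this, because Parser's body refers to Lexer before Lexer is defined; making them templates defers that check to instantiation time, and T exists only to be passed along.

```cpp
// Sketch of the "needless template" trick described above.
template<typename T> class Lexer;   // only declared so far

template<typename T> class Parser
{
public:
    // Fine in a template: Lexer<T> is dependent, so this body isn't checked
    // until Parser is instantiated, by which point Lexer is fully defined.
    void parse(Lexer<T>* lexer) { lexer->nextToken(this); }
};

template<typename T> class Lexer
{
public:
    void nextToken(Parser<T>*) { /* ... */ }
};

// The dummy parameter gets fixed once, here - the "typedef CTemplate<void> C;"
// pattern from the text.
typedef Parser<void> ParserC;
typedef Lexer<void> LexerC;

int main()
{
    ParserC parser;
    LexerC lexer;
    parser.parse(&lexer);
    return 0;
}
```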
The Haskell programming language implements a lot of very cool ideas (many of which I want to liberate for my programming language). However, Haskell seems to have a reputation for being a very difficult language to learn. The IO monad seems to be one particular sticking point, but this didn't seem to be particularly difficult to me - it's just a clever little hack (I just read lots of "How I came to understand the IO monad" accounts until I got it).
Another sticking point seems to be that there's a lot of stuff written about Haskell in the form of academic papers (with all the academic jargon that entails). That shouldn't be a surprise (since it's a language with origins in academia which seems to be only reluctantly escaping into industry), but it is kind of interesting that there's a sort of language barrier between academia and industry - what the former calls "deforestation hylomorphism" might be called "tree flattening" by the latter, for example.
But I think the real problem is something shared by other functional languages - it's just a different level of abstraction than we're used to thinking about. In imperative programming languages programmers can think "okay, what do I want the machine to do here?" and write that code. In functional programming languages one instead has to think about what the inputs and outputs are, write functions to perform those transformations and trust that the compiler will optimize it well (to a first approximation). It's much more like doing mathematics than imperative programming. Compilers will do all sorts of crazy things to turn those pure functions into high-performance imperative code. So when things are inevitably too slow, it's not obvious why (since the relationship between the generated code and the source code is very complicated) and it's difficult to understand how to make it faster.
Functional programming languages do have some interesting advantages over imperative languages, though - it's much easier to reason about pure functions than about imperative programs with all their attendant state, which means lots more interesting automatic optimizations are possible and (increasingly importantly) the code can be automatically parallelized so it takes maximum advantage of modern multi-core CPUs.
I'm not sure which side to place my money on. I suspect both imperative and functional paradigms will continue to exist for the foreseeable future. On the imperative side: the low-level OS and runtime components can't really be written in functional languages, and optimizations and multi-core abstractions for imperative languages will continue to improve. On the functional side: compilers will require less and less hand-holding to generate good code, and will generate better code than imperative compilers in more and more situations.
It seems to me that a lot of the complaints about GOTOs turning programs into spaghetti are really about jumping into a block - jumping out of blocks is much easier to reason about, and in fact with the minor modification to the break keyword I mentioned before it's trivial to turn any such GOTO into structured code without even rearranging anything.
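For example (my illustration - the two-level break is the hypothetical keyword change from that earlier post, not real C++):

```cpp
// A goto used purely to jump out of blocks: stop copying when the buffer is
// full, then continue with code that runs in either case.
#include <cstdio>

void fillBuffer(int* buffer, int capacity, int** rows, int rowCount, int columnCount)
{
    int used = 0;
    for (int i = 0; i < rowCount; ++i)
        for (int j = 0; j < columnCount; ++j) {
            if (used == capacity)
                goto full;               // jump out of both loops
            buffer[used++] = rows[i][j];
        }
full:
    printf("copied %d values\n", used);  // reached whether we filled up or not
}

// With the proposed change to break, the goto and its label would simply
// disappear, without rearranging anything:
//
//     if (used == capacity)
//         break break;   // hypothetical: break out of two enclosing loops
```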
I do have some C++ code (written for my Mandelbrot plotter program) which uses "goto". I originally worked out the algorithm as a state machine, so writing it in that way was natural. I tried rewriting it to use loops and conditionals instead but it just obscured what the program was really trying to do. I could have used a switch statement inside a loop as well but I think that's really just a more verbose way of saying the same thing.
When we're all using languages which can compile code at runtime, it's going to be more important to think about when your code is being compiled. Take a fractal plotter, for example - one in which users can supply a formula to be iterated. Obviously we'd like to do some compilation here instead of just interpreting the formula, as the latter would be very slow. But how often should we recompile it? The more often we recompile, the more information we have available to us and the more optimizations we will be able to use. For example, if we just compile when the formula changes, we would have to use arithmetic operators which can work with any precision, but if we recompile whenever the precision changes we can unroll those arithmetic operators and make the code much faster (especially for low precisions). On the other hand, recompiling does have some overhead, so we probably wouldn't want to recompile for each pixel. Though for some formulae that might actually be helpful - if we can hoist a test from the per-iteration loop to the per-pixel loop and the iteration count is high, it might be worth it.
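As a sketch of that middle-ground policy (recompile when the formula or the precision changes, but never per pixel): compileFormula below is a stand-in for whatever runtime code generator would actually be used, and everything else is made up for illustration.

```cpp
// Illustrative sketch: cache one "compiled" iteration function per precision.
#include <functional>
#include <map>
#include <string>

typedef std::function<int(double, double, int)> IterationFunction;

// Stand-in for real code generation - in a real plotter this would emit
// machine code specialised for the formula and the digit count.  Here it
// ignores both and just iterates z -> z^2 + c in plain doubles.
IterationFunction compileFormula(const std::string& /*formula*/, int /*precisionDigits*/)
{
    return [](double cx, double cy, int maxIterations) {
        double zx = 0, zy = 0;
        int n = 0;
        while (n < maxIterations && zx*zx + zy*zy < 4) {
            double t = zx*zx - zy*zy + cx;
            zy = 2*zx*zy + cy;
            zx = t;
            ++n;
        }
        return n;
    };
}

class FormulaCache
{
public:
    void setFormula(const std::string& formula)
    {
        if (formula != _formula) {
            _formula = formula;
            _compiled.clear();              // formula changed: recompile lazily
        }
    }
    // Called once per pixel; only compiles when this precision is new.
    int iterate(double x, double y, int maxIterations, int precisionDigits)
    {
        std::map<int, IterationFunction>::iterator i = _compiled.find(precisionDigits);
        if (i == _compiled.end())
            i = _compiled.insert(std::make_pair(
                precisionDigits, compileFormula(_formula, precisionDigits))).first;
        return (i->second)(x, y, maxIterations);
    }
private:
    std::string _formula;
    std::map<int, IterationFunction> _compiled;
};
```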
One possibility might be to give the code-generation library the freedom to compile whenever it likes, so it can try various things and run with what works best.