When I first learnt how to do 3D graphics it was as a recipe. You take your 3D world coordinates x, y and z and rotate them (rotation matrices around the coordinate axes are particularly easy to write down). If you want perspective, you then divide x and y by z (for isometric views just ignore z). Next you scale the result by the number of pixels per world-coordinate unit at z=1 and translate so that x=y=0 is in the center of the screen.
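In code, that recipe is only a few lines. Here is a minimal Python sketch of it (the yaw rotation, the 800x600 screen and the parameter names are my own illustrative choices, not anything canonical):

```python
import math

def project_point(x, y, z, yaw=0.0, pixels_per_unit=400.0,
                  screen_w=800, screen_h=600, perspective=True):
    """Apply the recipe: rotate, (optionally) divide by z, scale, translate."""
    # Rotate about the vertical (y) axis; rotations about the other two
    # axes are written the same way.
    xr = x * math.cos(yaw) + z * math.sin(yaw)
    zr = -x * math.sin(yaw) + z * math.cos(yaw)
    yr = y
    if perspective:
        # Perspective: divide x and y by z (assumes the rotated point is
        # in front of the viewer, i.e. zr > 0).
        xp, yp = xr / zr, yr / zr
    else:
        # Isometric: just ignore z.
        xp, yp = xr, yr
    # Scale by pixels per world-coordinate unit at z=1, then translate so
    # that x=y=0 lands in the centre of the screen.
    return (xp * pixels_per_unit + screen_w / 2,
            yp * pixels_per_unit + screen_h / 2)
```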
This worked great, but it didn't tell me why it was the right thing to do.
Later, reading about ray tracing, I realized what was really going on. Your model is in 3D space, but so are some additional points - the entrance pupil of your eye and the pixels that make up your screen. If you imagine a straight line from your eye passing through a particular point of your model and going on to infinity, that line may also pass through the screen. If it does, the point where it crosses the screen corresponds to the pixel on the physical screen at which that model point should be drawn.
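That crossing point is just a line-plane intersection. A minimal sketch, assuming the screen plane is given by a point on it and a normal vector (the function and argument names are mine, for illustration):

```python
def ray_plane_intersection(eye, point, screen_origin, normal):
    """Where the line from eye through point crosses the screen plane.

    The plane is the set of q with dot(q - screen_origin, normal) == 0.
    Returns None if the line is parallel to the plane.
    """
    direction = [p - e for p, e in zip(point, eye)]
    denom = sum(d * n for d, n in zip(direction, normal))
    if abs(denom) < 1e-12:
        return None  # line never crosses the screen plane
    # Solve dot(eye + t*direction - screen_origin, normal) == 0 for t.
    t = sum((s - e) * n for s, e, n in zip(screen_origin, eye, normal)) / denom
    return [e + t * d for e, d in zip(eye, direction)]
```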
Since computer screens are generally rectangular, if the positions of the corners of the screen are a, b, c and d (a and d being diagonally opposite to each other) the position of the fourth corner can be determined from the other three using the relation a+d = b+c (the diagonals bisect each other, so d = b+c-a). So to fix the position, orientation and size of the screen we only need to consider 3 corners (a, b and c). We also need to consider the position of the eye, e, which is independent of a, b and c. We thus have 12 degrees of freedom (3 coordinates each for a, b, c and e). Three of these degrees of freedom correspond to translations of the whole screen/eye system in 3D space. Two of them correspond to orientation (looking up/down and left/right). Two correspond to the horizontal and vertical size of the screen. Three more give the position of the eye relative to the screen. One more gives the "twist" of the screen (rotation about the axis between the eye and the closest point on the screen plane). That's eleven degrees of freedom - what's the other one? It took me a while to find it, but eventually I realized that I was under-constraining a, b and c - the remaining degree of freedom is the angle between the top and left edges of the screen (which for every monitor I've ever seen will be 90 degrees - nice to know that this system can support different values for this though). A small sketch of that bookkeeping is below.
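Here is that bookkeeping in code (hypothetical helper names, plain Python lists for vectors):

```python
import math

def fourth_corner(a, b, c):
    """d follows from the parallelogram relation a + d = b + c."""
    return [bi + ci - ai for ai, bi, ci in zip(a, b, c)]

def corner_angle(a, b, c):
    """Angle (in degrees) between the two screen edges meeting at a -
    the twelfth degree of freedom; 90 for every real monitor."""
    u = [bi - ai for ai, bi in zip(a, b)]  # edge from a towards b
    v = [ci - ai for ai, ci in zip(a, c)]  # edge from a towards c
    dot = sum(ui * vi for ui, vi in zip(u, v))
    return math.degrees(math.acos(dot / (math.dist(a, b) * math.dist(a, c))))

# e.g. fourth_corner([0,0,0], [1,0,0], [0,1,0]) gives [1, 1, 0],
# and corner_angle on those corners gives 90.0.
```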
If you solve the equations, this system turns out to give exactly the same transformations as the original recipe, only a little more general and somewhat better justified. It also unifies the perspective and isometric views - isometric is what you get when the distance between the eye and the screen is infinite. Obviously, if you were really infinitely far away from your computer screen you wouldn't be able to see anything on it, which is why the isometric view doesn't look as realistic as the perspective view.
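You can see this limit numerically: pull the eye back along the screen normal and the projected position converges to the plain (x, y) of the model point, i.e. the isometric projection. A self-contained sketch with the screen as the z=0 plane (my choice of coordinates, purely for illustration):

```python
p = [1.0, 1.0, 5.0]  # a model point behind the screen
for dist in (1.0, 10.0, 100.0, 1000.0):
    eye = [0.0, 0.0, -dist]
    # Intersect the eye->p line with z=0: t solves -dist + t*(p_z + dist) = 0.
    t = dist / (p[2] + dist)
    hit = [eye[i] + t * (p[i] - eye[i]) for i in range(2)]
    print(dist, hit)
# The hit tends to [1.0, 1.0] as dist grows: the rays become parallel and
# the perspective projection turns into the isometric one.
```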
Many 3D graphics engines allow you to set a parameter called "perspective" or "field of view" (FoV), which effectively controls how "distorted" the perspective looks and how much peripheral vision you have. This is essentially the same thing as the eye-screen distance in my model. To get the most realistic image, the FoV should be set according to the actual distance between your eyes and your screen.
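The correspondence is simple trigonometry; a worked sketch (the monitor width and viewing distance are made-up example numbers):

```python
import math

def field_of_view(screen_width, eye_distance):
    """Horizontal FoV (in degrees) implied by a screen of the given width
    viewed straight on at its centre from the given distance."""
    return math.degrees(2 * math.atan2(screen_width / 2, eye_distance))

# e.g. a 60 cm wide monitor viewed from 70 cm away:
print(field_of_view(60, 70))  # about 46.4 degrees
```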
[...] on from this post, a natural generalization is to non-Euclidean spaces. This is important for simulating [...]
There are algorithms that can use a standard web camera to track the position of your head in three dimensions, accurately and with little latency.