After some thought, I have come to the conclusion that a software architecture which allows third-party code to run in the same process as the main program is a bad idea.
The first problem with such an architecture is reliability - how do you protect yourself against bugs in the third-party code? The short answer is that you can't - the plug-in can stomp all over your process's invariants. The best you can do is terminate the process when such corruption is detected.
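To make the "terminate on detection" point concrete, here is a minimal C sketch - plugin_entry and the balance invariant are hypothetical stand-ins for real third-party code and real program state - of the best an in-process host can do: audit its own invariants after the plug-in runs, and kill the process when they no longer hold.

    /* Hypothetical sketch: host state with the invariant a + b == 100,
       and a stand-in for buggy third-party code loaded into the process. */
    #include <stdio.h>
    #include <stdlib.h>

    static int balance_a = 70, balance_b = 30;  /* invariant: a + b == 100 */

    /* Stands in for the plug-in; real third-party code could corrupt
       any of the host's state through a single stray pointer write. */
    static void plugin_entry(void)
    {
        balance_a += 5;  /* buggy plug-in forgets to debit balance_b */
    }

    int main(void)
    {
        plugin_entry();
        if (balance_a + balance_b != 100) {
            fprintf(stderr, "plug-in corrupted process state; aborting\n");
            abort();  /* the process can no longer be trusted */
        }
        return 0;
    }

Even this check is best-effort, of course - a sufficiently wild plug-in can just as easily corrupt the state the invariant check itself depends on.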
The second problem is that the compiler can't reason about the third-party code (it might not even exist at compile time). This rules out all kinds of checks and optimizations (like some of the stack tricks I mentioned a while ago).
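For example, with the usual POSIX dynamic-loading interface the plug-in is nothing but an opaque function pointer as far as the compiler is concerned - no inlining, no cross-module checking, no clever stack layout. (The names plugin.so and plugin_entry below are made up for illustration; compile with -ldl on Linux.)

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *lib = dlopen("./plugin.so", RTLD_NOW);
        if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* The compiler sees only 'void (*)(void)' here; the code behind
           this pointer doesn't exist until run time. */
        void (*entry)(void) = (void (*)(void))dlsym(lib, "plugin_entry");
        if (entry)
            entry();

        dlclose(lib);
        return 0;
    }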
The third problem is that one cannot tell what the plug-in will rely on - if it sticks to documented interfaces that's fine, but (either deliberately or accidentally) a plug-in might come to depend on undocumented behavior, which then breaks when the program is updated. This forces the program to carry ugly kludges to keep old plug-ins working.
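A classic (hypothetical) instance of this: the host hands plug-ins an opaque handle, a plug-in guesses the structure layout behind it and pokes at a private field, and an innocent reordering of fields in the next release silently breaks it.

    #include <stdio.h>

    /* Host's private structure, version 1. */
    struct host_private_v1 { int refcount; int id; };

    /* In an update, the host reorders its private fields... */
    struct host_private_v2 { int id; int refcount; };

    /* ...and the plug-in's guessed layout now reads the wrong field. */
    struct plugin_guess { int refcount; int id; };

    int main(void)
    {
        struct host_private_v2 obj = { .id = 42, .refcount = 1 };
        void *opaque_handle = &obj;             /* what the host hands out */

        struct plugin_guess *g = opaque_handle; /* the plug-in's kludge */
        printf("plug-in thinks id = %d\n", g->id);  /* prints 1, not 42 */
        return 0;
    }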
If plug-ins are run as separate processes communicating with the main program via well-defined IPC protocols, all these problems are solved. The downside is that these protocols are likely to be a little more difficult to code against and (depending on the efficiency of the operating system's IPC implementation) may also be significantly slower. The speed problem seems unlikely to be insurmountable, though - current OSes can play back HD video, which involves the uncompressed video stream crossing the user/kernel boundary, and few applications are likely to need IPC bandwidth greater than that.
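As a toy illustration of the separate-process approach, here is a minimal POSIX sketch. The "protocol" (one int request, one int reply) is invented for the example, but it shows the essential property: the plug-in runs on the far side of a pipe, so a crash or wild pointer write on its side cannot touch the host's memory.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int to_plugin[2], from_plugin[2];
        if (pipe(to_plugin) == -1 || pipe(from_plugin) == -1) {
            perror("pipe");
            return 1;
        }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); return 1; }

        if (pid == 0) {                          /* plug-in process */
            close(to_plugin[1]); close(from_plugin[0]);
            int request;
            while (read(to_plugin[0], &request, sizeof request) == sizeof request) {
                int reply = request * 2;         /* the "plug-in's" work */
                write(from_plugin[1], &reply, sizeof reply);
            }
            _exit(0);                            /* host saw EOF; we're done */
        }

        /* host process */
        close(to_plugin[0]); close(from_plugin[1]);
        int request = 21, reply = 0;
        write(to_plugin[1], &request, sizeof request);
        if (read(from_plugin[0], &reply, sizeof reply) == sizeof reply)
            printf("plug-in replied %d\n", reply);

        close(to_plugin[1]);                     /* plug-in sees EOF and exits */
        waitpid(pid, NULL, 0);
        return 0;
    }

In a real design the child would exec() the plug-in binary rather than share the host's code, and the protocol would be explicitly versioned - which also goes a long way toward fixing the compatibility problem above.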
[...] was thinking some more about this recently. Specifically, APIs are generally implemented as DLLs or shared libraries which you load [...]
[...] given time - the system should be able to cope with this. Of course, if these shared components are separate processes this ceases to be a [...]