Does anyone know how the 8088 Minix did its processes? (Specifically its user processes, the kernel is a different beast.)
It is explained reasonably well in the Minix books... in the memory management chapters. Only Minix 1.x and 2.x run on the 8086.
The 200LX is an 80186 and does not have any kind of memory protection.
Most modern *nix systems rely on an MMU for VM mapping and memory protection.
The 8086 does have an MMU, in the form of its segmentation unit. It is very rudimentary and lacks memory protection, but it is definitely workable. Basic Unix does not require memory protection, and without external hardware it is not possible on an 8086/80186 anyway. There are no invalid instruction exceptions on those processors either. All operating systems on these processors therefore have similar limitations, such as PC/IX or Xenix-86.
One potential way to do it is to mandate that all processes use the Tiny memory model; then they can be "safely" running in a single segment, but you "have" to allocate the entire 64K segment to each of them.
But with the full 64K allocated, you don't have to worry about a program breaking out of its segment and potentially stomping on other processes or the kernel. And I'm not talking from a security POV, just simple system stability, since Minix relies on C, a notoriously memory-unsafe language.
Well, no. First of all, any application can create any far pointer at any time and access any address (or I/O device) in the system, so there is no safety at all. Second, just because you allocated the whole 64K segment and turned every possible near pointer into a valid memory address does not mean you have achieved anything: NULL pointers are still valid, and you can easily stomp on your own libc and syscall stubs.
Stability is achieved by behaving like a civilized program, not like a rampaging bull in heat. Memory-safe languages rely on compile-time guarantees and runtime asserts (both of which are perfectly possible on an 8086, even when using C) as well as memory-protection hardware (missing on 8086). C is not to blame here.
Minix 2.0 supports two memory models, "combined I and D space" (tiny) and "separate I and D space" (small). Memory is allocated in units of 256 bytes called "clicks", and processes consist of five parts: code, data, bss, heap and stack. The "chmem" program is used to change the allocated size in the binary if necessary. Consequently, programs cannot exceed 128K.
For tiny model programs, a single segment covering all parts is allocated (code, data, bss, heap at the bottom, stack at the top). Small model programs get two segments, but the code segment can be shared across multiple instances of the same application. The book version of Minix (2.0.0) never swaps segments to disk, although later versions added support for swapping. Applications can never cause the system to run out of memory, as their maximum heap size is allocated at startup.
Or did folks just routinely crush the OS when they let an uninitialized pointer go rampaging off in the dark.
Most Unix applications are written in C, and a small model compiler will never generate far pointers. With a maximum heap size and without assembly trickery, accidentally overwriting other stuff is unlikely. Stack overflows cannot be detected when they happen, but can be checked for at strategic points (library or system calls), though I doubt Minix does that by default. Memory allocations can be checked (to detect out-of-memory), but memory overflows cannot be detected at all. System calls can validate their arguments and check process state, but I don't think they do.
The microkernel architecture in Minix is theoretically more resistant, but Minix 2.x does not take advantage of it. Tanenbaum did focus on that direction in later versions of Minix, restarting crashed system services automatically and keeping the system alive even when hardware faults were injected. However, the 8086 processor never crashes; it only starts misbehaving.
Anyway, I'm just curious if anyone knows how Minix pulled this off.
By using high-level languages and functioning compilers where possible, and assuming that programs try to behave. Many things can be done to prevent accidents, but there is no protection against actively misbehaving or evil programs.
In fact, I have written a tool for PC/IX which uses (abuses) the lack of memory protection to read the pce-ibmpc emulated RTC; and having the ability to access any hardware from anywhere allows neat driver shenanigans to improve performance.