
Running Minix, a UNIX-like Operating System on the HP 200LX DOS Palmtop

radiance32

Hi all,

I'm trying to get a UNIX-like operating system to boot/run on an HP 200LX DOS Palmtop PC from the early '90s... After weighing various options, I settled on Minix 2.0.2 (with special patches, started from DOS).


My ultimate goal, if I can manage with the low amount of memory available, is to set up a small webserver over a SLIP or PPP null-modem cable link.
Enjoy watching; the 2nd part is already in the making.
Cheers,
Terrence
 
Does anyone know how the 8088 Minix did its processes? (Specifically its user processes, the kernel is a different beast.)

The 200LX is an 80186 and does not have any kind of memory protection.

Most modern *nix systems rely on an MMU for VM mapping and memory protection.

One potential way to do it is mandating that all processes use the tiny memory model; then they can be "safely" run in a single segment, but you "have" to allocate the entire 64K segment to them.

And to be fair, that's not true. You can do sub-64K blocks, just set the segment registers and you're on your way.

But with 64K allocated, you don't have to worry about the program breaking out of the segment and potentially stomping on other processes or the kernel. And I'm not talking from a security POV, just simple system stability, since Minix relies on C, a notoriously memory-unsafe language.

Or did folks just routinely crash the OS when they let an uninitialized pointer go rampaging off in the dark?

Anyway, I'm just curious if anyone knows how Minix pulled this off.
 
I'm not too familiar with the internal workings of the Minix kernel, but it does start in Real Mode on the 200LX, and I assume it just allocates what processes request; if there's not enough RAM left, it will refuse to start the process, or remove/kill an existing process that's asking for more memory.
Of course, a wrong pointer can cause havoc and bring down the system, but after using it for a few weeks now, with only 600KB RAM and a ~200KB kernel (so 400KB free), I've never encountered a system panic/crash of any kind. I have quite a few times encountered a "cannot exec: ..." because there is not enough memory left...
Maybe download Minix 2.0 and look in /usr/src/kernel and /usr/src/mm.

Cheers,
Radiance
 
But with 64K allocated, you don't have to worry about the program breaking out of the segment and potentially stomping on other processes or the kernel. And I'm not talking from a security POV, just simple system stability, since Minix relies on C, a notoriously memory-unsafe language.

This is just parroting something I vaguely recall from somewhere, but that source claimed the Minix C compiler tried to route memory allocation through the kernel in a way that let it at least attempt to referee potential conflicts. But, yes, when it really comes down to it, 8086 Minix can't enforce anything in hardware. Since the CPU makes no distinction between user and supervisor code, there's absolutely nothing preventing someone from poking the segment registers directly, and even a process that never does that can overwrite something belonging to another process residing within the same 64K physical "window".

(Real Mode x86 is actually kind of a weird architecture in that it supports segmentation without protection; there are no "base and bound" registers, and segmentation is solely for address extension. The 286 is really the "right" CPU to run a 16-bit UNIX on; it may be a "brain-dead chip," but technically its MMU outclasses the PDP-11's, at least. The kind way to look at it, I guess, is that segmentation lets Minix on the 8086 be at least a little more like "real" UNIX than Mini-UNIX on the non-MMU-equipped PDP-11s was, at least in terms of process handling.)
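To make the "address extension" point concrete, here's a sketch of the whole story in C (not from any real code; the arithmetic below is essentially everything the 8086's "MMU" does, with no bounds or permission check anywhere):

/* Real-mode 8086 address translation in its entirety: the segment
 * register only supplies upper address bits; nothing is ever checked. */
unsigned long phys_addr(unsigned int seg, unsigned int off)
{
    return ((unsigned long)seg << 4) + off;   /* segment * 16 + offset */
}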
 
Hi mate. A Vogons mod had a fit and banned me. Doesn't worry me one bit. Still planning the Palmy run?
Hey, sorry to hear that. I'm not sure; it depends on my friends in CHCH and the state of the country with the new covid variant etc... I'll let you know..
 
It's an old topic, but I hope this information might still be useful.

Article from The HP Palmtop Paper magazine about running Minix on HP 200LX


Richard L. Dubs, Ph.D. saved webpage where he says: "I have created the PCMCIA and BIOS INT13 services necessary to boot MINIX on the HP200LX Palmtop from a PCMCIA ATA Flash Disk. These services should be equally useful for booting LINUX-86 (ELKS) on the HP200LX."


Although the original 200PRGS.ZIP file containing the software is unfortunately no longer available for download from this site, you can download it from "The Largest 200LX Software Archive: S.U.P.E.R." website mirror as part of the minix.zip file.

 
Does anyone know how the 8088 Minix did its processes? (Specifically its user processes, the kernel is a different beast.)
It is explained reasonably well in the Minix books... in the memory management chapters. Only Minix 1.x and 2.x run on the 8086.

The 200LX is an 80186 and does not have any kind of memory protection.
Most modern *nix systems rely on an MMU for VM mapping and memory protection.
The 8086 does have an MMU. It is very rudimentary and lacks memory protection, but it is definitely workable. Basic Unix does not require memory protection, and without external hardware it is not possible on an 8086/80186 anyway. There are no invalid-instruction exceptions on those processors either. All operating systems on these processors therefore share similar limitations; see PC/IX or Xenix-86.

One potential way to do it is mandating that all processes use the tiny memory model; then they can be "safely" run in a single segment, but you "have" to allocate the entire 64K segment to them.

But with 64K allocated, you don't have to worry about the program breaking out of the segment and potentially stomping on other processes or the kernel. And I'm not talking from a security POV, just simple system stability, since Minix relies on C, a notoriously memory-unsafe language.
Well, no. First of all, any application can create any far pointer at any time and access any address (or I/O device) in the system, so there is no safety at all. Second, just because you allocated the whole 64K segment and turned every possible near pointer into a valid memory address does not mean you have achieved anything: NULL pointers are still valid, and you can easily stomp on your own libc and syscall stubs.

Stability is achieved by behaving like a civilized program, not like a rampaging bull in heat. Memory-safe languages rely on compile-time guarantees and runtime asserts (both of which are perfectly possible on an 8086, even when using C) as well as memory-protection hardware (missing on 8086). C is not to blame here.

Minix 2.0 supports two memory models, "combined I and D space" (tiny) and "separate I and D space" (small). Memory is allocated in units of 256 bytes called "clicks", and processes consist of five parts: code, data, bss, heap and stack. The "chmem" program is used to change the allocated size in the binary if necessary. Consequently, programs cannot exceed 128K.
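For anyone who hasn't used it, chmem looks roughly like this (the sizes here are invented for illustration; chmem rewrites the allowance recorded in the executable's header, nothing happens at runtime):

chmem =32768 myprog
chmem +4096 myprog

The first form sets the dynamic (heap plus stack) allowance to 32K; the second grows the current allowance by 4K.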

For tiny model programs, a single segment covering all parts is allocated (code, data, bss, heap at the bottom, stack at the top). Small model programs get two segments, but the code segment can be shared across multiple instances of the same application. The book version of Minix (2.0.0) never swaps segments to disk, although later versions added support for swapping. Applications can never cause the system to run out of memory, as their maximum heap size is allocated at startup.
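Roughly, a tiny-model process then looks like this (a sketch based on the description above; one segment of at most 64K, with the gap between heap and stack the only room to grow):

+--------+  <- top of segment
| stack  |  grows downward
|  gap   |
| heap   |  grows upward
| bss    |
| data   |
| code   |
+--------+  <- bottom of segment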

Or did folks just routinely crash the OS when they let an uninitialized pointer go rampaging off in the dark?
Most Unix applications are written in C, and a small-model compiler will never generate far pointers. With a maximum heap size and without assembly trickery, accidentally overwriting other stuff is unlikely. Stack overflows cannot be detected when they happen, but they can be checked for at strategic points (library or system calls), though I doubt Minix does that by default. Memory allocations can be checked (to detect out-of-memory), but memory overflows cannot be detected at all. System calls can validate their arguments and check process state, but I don't think they do.
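Just to illustrate what such a check at a "strategic point" could look like, here is a hypothetical sketch (not actual Minix library code; end_of_stack is an assumed linker symbol marking the lowest legal stack address):

#include <stdlib.h>

extern char end_of_stack[];   /* assumed symbol: lowest address the stack may reach */

void stack_check(void)
{
    char marker;              /* lives at (roughly) the current stack pointer */
    if (&marker < end_of_stack)
        abort();              /* overflow detected, but only after the fact */
}

A libc could call this on entry to library or system calls; it catches an overflow only once it has already happened, exactly as described above.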

The microkernel architecture in Minix is theoretically more resistant, but Minix 2.x does not take advantage of it. Tanenbaum did focus in that direction in later versions of Minix, recovering crashed system services automatically and keeping the system alive even when hardware faults were injected. However, the 8086 processor never crashes; it only starts misbehaving.

Anyway, I'm just curious if anyone knows how Minix pulled this off.
By using high-level languages and functioning compilers where possible, and assuming that programs try to behave. Many things can be done to prevent accidents, but there is no protection against actively misbehaving or evil programs.

In fact, I have written a tool for PC/IX which uses (abuses) the lack of memory protection to read the pce-ibmpc emulated RTC; and having the ability to access any hardware from anywhere allows neat driver shenanigans to improve performance.
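Not that actual tool, but a sketch of the general idea: with no protection, a user program can simply talk to I/O ports itself. Here inb/outb stand in for whatever small assembly helpers the environment provides (they are assumed, not standard library functions):

#include <stdio.h>

extern unsigned char inb(unsigned int port);        /* assumed asm helper */
extern void outb(unsigned int port, unsigned char value);

unsigned char cmos_read(unsigned char reg)
{
    outb(0x70, reg);          /* select a CMOS/RTC register */
    return inb(0x71);         /* read its contents */
}

int main(void)
{
    /* RTC registers 4/2/0 hold hours/minutes/seconds, usually in BCD. */
    printf("%02x:%02x:%02x\n", cmos_read(4), cmos_read(2), cmos_read(0));
    return 0;
}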
 
By using high-level languages and functioning compilers where possible, and assuming that programs try to behave. Many things can be done to prevent accidents, but there is no protection against actively misbehaving or evil programs.
The fundamental point being that while under development, programs very well may misbehave, just not intentionally.

The question is how likely this is with this system: how much risk of system instability is there when Joe Programmer is off learning about linked lists or trees or whatever? The tiny and small memory models certainly should help with that, but I assume syscalls passed far pointers in to the kernel routines.

Many fragile systems worked fine with sharing resources with well behaved programs. Witness the Macintosh. Of course, many systems exhibited their fragility when programs did misbehave. Witness, again, the Macintosh.

I used to work on Alpha Micros, and we wrote in BASIC. But even then, hard crashes and reboots were not unheard of. They were not necessarily routine, but even in production systems, they happened.
 
The question is how likely this is with this system: how much risk of system instability is there when Joe Programmer is off learning about linked lists or trees or whatever? The tiny and small memory models certainly should help with that, but I assume syscalls passed far pointers in to the kernel routines.
The probability of Joe Programmer making a mistake in his program depends 100% on Joe Programmer, not on the system he tries to run his code on. If you know how badly crashing the system reduces your productivity, you become more careful. An 8088 running ACK is very slow; you have enough time to reason about your code a second time while it compiles. I care far less if a compile-test-verify cycle takes less than a second.

User code in Minix simply does not deal with far pointers, especially not when written in C. The compiler won't create them, the user doesn't need them, and calling libc functions (which in turn call something else) just works. It's a solved problem by the people writing the OS and system libraries. You even have decent online documentation. Reading compiler warnings, checking return values and dealing with runtime errors reduces the likelihood of crashes as well.

I've only programmed Minix on a 286 system, which does have memory protection and can properly segfault. But when I was dealing with PC/IX or Xenix-86, I didn't really have issues with destabilizing the whole system (granted, I didn't do complex stuff). Crashing DOS is far easier in my experience, but mostly due to the unfamiliar and more complex environments, as well as generally dealing with larger, multi-segmented applications.

If you want an answer to "can you crash the system if you write buggy code," then the answer is definitely yes. Whether that's a problem depends mainly on the programmer's ability and care.
 
If you want an answer to "can you crash the system if you write buggy code," then the answer is definitely yes. Whether that's a problem depends mainly on the programmer's ability and care.
But from your description, it seems reasonably safe to run random C code and not have it crater the system, given your points about no far pointers and the use of the tiny or small memory models.

Do you think it would have been safe to run untested C code on a Minix system without having to crawl over the source first? Assuming the code has no malicious intent; it's just someone's Red/Black Tree code from the interwebs or something.
 
But from your description, it seems reasonably safe to run random C code and not have it crater the system, given your points about no far pointers and the use of the tiny or small memory models.
It is reasonably safe to run reasonably well-written code.

Minix does not support dynamic linking; your segments contain a copy of libc. Libc does syscalls, so accidentally overwriting it will cause strange things to happen. Segments are not always 64K, so out-of-bounds writes can destroy your own code or data segments, or other processes. Or they may hit memory which is currently not in use.

Nobody prevents you from doing something stupid like this, either:
#asm
push ds		! push the current data segment value
pop cs		! pop it into CS (a valid opcode on the 8086)
#endasm
The 8086 will happily continue execution somewhere in the middle of the data segment, and will not fault.

Do you think it would have been safe to run untested C code on a Minix system without having to crawl over the source first?
No, of course not. Running untested code is never safe, not on any operating system, not using any language, not in any environment. That's a philosophical question, not a technical one.

However, Eudimorphodon is right: The amount of damage an 8086 running Minix can cause is most likely very limited. If that machine is actually controlling the nuclear reactor or rocket launcher over to the side, you probably shouldn't even consider thinking about running untested code on it.

Every process can at least crash itself. Minix is a proper multi-user, multi-tasking system (even on an 8086), so killing a stuck process from a separate console is no issue. But processes can also crash the system, they can mess with your keyboard or graphics hardware, and they can do all sorts of things which make it hard to recover. If the machine is networked, they can even cause problems down the network.

Although the amount of damage such a machine, which is barely able to run TCP/IP itself without dying, can cause on the network is also reasonably limited.
 
When I was first learning C I remember making some rookie mistakes, like not malloc()-ing enough RAM for arrays; the resulting code would seemingly work fine on the Linux machine I had in my dorm but segfaulted on the SunOS machines in the lab... or maybe it was the reverse; it's been too long. The reason one machine let it slide while the other didn't had something to do with different default page sizes/granularity between the two systems, or something... again, ancient history. But the point here is that I'm sure it'd be reasonably trivial on Minix to make a similar honest mistake and, unless the compiler is pretty smart, create a binary that'll blissfully run out of bounds and clobber bits of memory it doesn't own *instead* of generating a segfault.

Edit: Maybe malloc specifically was a bad example, because poking around a little it doesn't look like 8086 Minix supports dynamic allocation? But the point remains that there's nothing stopping you in Minix from incrementing a pointer beyond the end of your data segment and clobbering something arbitrary next door, which may well be some other process's code segment.
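An undersized allocation like the following sketch is all it takes (names and sizes are arbitrary); on a protected system the stray write might fault, while on 8086 Minix it just lands on whatever neighbors the block:

#include <stdlib.h>

int main(void)
{
    int i;
    int *a = malloc(10 * sizeof(int));  /* room for a[0] through a[9] only */
    if (a == NULL)
        return 1;
    for (i = 0; i <= 10; i++)           /* classic off-by-one: also writes a[10] */
        a[i] = i;
    return 0;                           /* no segfault on an 8086 */
}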
 
Minix does support dynamic allocation, but the heap lives inside your only data segment (it uses a traditional brk syscall to allocate heap). Nobody prevents you from not checking malloc() return values and overflowing into your stack (above the heap) or your bss/data areas (below the heap).
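As a quick illustration of that brk-based heap (a sketch; sbrk is the traditional wrapper around brk, and the 1024 is arbitrary):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    void *before = sbrk(0);   /* current program break, i.e. top of the heap */
    sbrk(1024);               /* grow the heap by 1K, toward the stack above */
    printf("break moved from %p to %p\n", before, sbrk(0));
    return 0;
}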

You will not get regular segmentation faults or illegal-instruction exceptions on any Unix-like system running on an 8086; there is simply no hardware to generate them. (Again, some things can be checked for at runtime, for example stack overflows or NULL pointer dereferences, but these will trigger an exception only when they are detected, not when the bug happens. Watcom for DOS does those checks if enabled and aborts the program as well.)

Mistakes happen, but unless you really overflow into neighboring processes by accident (possible if 'chmem' is small), you are unlikely to crash the OS.
 
Many years ago, Mack Baggette modified the Minix kernel so that it would work with the LX's eccentricities. I had copies of the files and did test that they ran on my HP-200LX. He made several versions for different size cards, from 8 meg to 40 meg. I no longer have the files, but I did some searching on the internet and the files are available on the Internet Wayback Machine. Just enter "http://minix.technoir.org" and select one of the early instances, probably early 2002 or so.

Note the link is no longer available on the web and is only usable through the Wayback Machine.

Note: I think Mack's version uses DOSMINIX to be able to get it to run.

73
Bill WD9EQD
Smithville, NJ
 