PDOS/86 now huge!

kerravon

Circa 1994 I didn't know what the "huge" memory model
was, and selected "large" to write PDOS/86 in.

I then found that my already-written memory management
routines didn't allow me to manage more than 64k of
memory.

Rather than change my routines, I hacked PDOS/86 to get
more than 64k of memory.

Fast-forward nearly 3 decades, and I now know what "huge"
means. After an enormous effort, PDOS/86 is now built with
the "huge" memory model, the hack is gone, and my original
memory management routines work unchanged.

My size_t is 32 bits.

This is all pure 8086 code generated by Watcom C, but
using PDPCLIB as the C library.
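
For anyone curious, an Open Watcom build along those lines would look something like this (just a sketch; the exact options in the real PDOS build scripts may well differ):

    wcc -0 -mh program.c

where -0 restricts the compiler to 8086 instructions and -mh selects the huge memory model.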

Available at http://pdos.org

(released an hour or so ago).

BFN. Paul.
 
This is an honest question, because I've never dealt much with C under DOS (I mostly write assembly code, where you're generally free to come up with your own memory "models"):

I thought "huge" was only useful when the length of a single data structure can exceed 64K? Is that an actual requirement in PDOS? The way I understand it, the huge model has to constantly re-normalize its 32-bit pointers (i.e. calculate a new base segment and a new offset) so you can address a >64K memory region without exceeding a segment limit. That sounds like a recipe for code bloat and degraded performance unless it's done only when absolutely required.
 
Having written some code in the huge model on OS/2 1.x, I can say the performance losses of huge are real, but they should only amount to a reduction of a few percent unless all the code does is jump to a new data address. Given PDOS's cross-platform design goals, having a huge model for the 8086 seems to make sense.
 
Some of us might still not know what "huge" means. I myself assume that it is a kind of extension of the "large" model with a segment size of more than 64k. In that case the compiler must support this and check pointer operations for overflow, adjusting segment registers accordingly. Would that be the correct explanation?
 
Some of us might still not know what "huge" means. I myself assume that it is a kind of extension of the "large" model with a segment size of more than 64k. In that case the compiler must support this and check pointer operations for overflow, adjusting segment registers accordingly. Would that be the correct explanation?
Correct, and it also normalizes the pointers in that process, so that the offset is less than 16 (well, my PDPCLIB routines do, because I thought it was a requirement, but I'm not totally sure it actually is one). I think that is so that comparisons can be done efficiently.
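
For anyone following along, the normalization looks roughly like this in C (just a sketch, assuming plain real-mode addressing where linear = segment * 16 + offset; the type and function names are mine, not PDPCLIB's actual internals):

    #include <stdio.h>

    /* hypothetical representation of a huge pointer's two halves */
    typedef struct {
        unsigned short seg;
        unsigned short off;
    } hugeptr;

    /* Fold as much of the offset as possible into the segment,
       leaving an offset in the range 0..15. Two normalized huge
       pointers to the same byte then compare equal field by field. */
    static hugeptr normalize(hugeptr p)
    {
        p.seg = (unsigned short)(p.seg + (p.off >> 4));
        p.off = (unsigned short)(p.off & 0xFU);
        return p;
    }

    int main(void)
    {
        hugeptr a = { 0x1000, 0x8010 };  /* same byte as b ... */
        hugeptr b = { 0x1801, 0x0000 };  /* ... in a different form */
        a = normalize(a);
        b = normalize(b);
        printf("a = %04X:%04X, b = %04X:%04X\n",
               (unsigned)a.seg, (unsigned)a.off,
               (unsigned)b.seg, (unsigned)b.off);
        return 0;
    }

Once both pointers are in that canonical form, an equality or relational comparison only has to look at the two 16-bit halves, which is presumably why the offset is kept below 16.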
 
This is an honest question, because I've never dealt much with C under DOS (I mostly write assembly code, where you're generally free to come up with your own memory "models"):

I thought "huge" was only useful when the length of a single data structure can exceed 64K? Is that an actual requirement in PDOS? The way I understand it, the huge model has to constantly re-normalize its 32-bit pointers (i.e. calculate a new base segment and a new offset) so you can address a >64K memory region without exceeding a segment limit. That sounds like a recipe for code bloat and degraded performance unless it's done only when absolutely required.

This is an OS that I am writing, so the single data structure is the entire 640k. I need to manage that memory. And I already have C90-compliant memory management routines, but with size_t topping out at 64k, they weren't usable. Well, I refused to change my perfectly valid code and found a way to hack the OS instead.

Now that I have a 32-bit size_t (something the Watcom library doesn't provide for some reason, instead requiring the user to use farmalloc), the already-written code worked just fine, and I was able to remove (via #ifdef) the hack.
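
In concrete terms, a 32-bit size_t means perfectly ordinary C90 code like the following can manage more than 64k (just an illustration, assuming a huge-model build; with a 16-bit size_t the request would be truncated before the allocator ever saw it):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* 100000 bytes only fits in size_t when it is wider than 16 bits */
        char *p = malloc(100000UL);
        if (p == NULL)
        {
            printf("allocation failed\n");
            return 1;
        }
        p[99999] = 'x';  /* touch the far end of the block */
        free(p);
        return 0;
    }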

When using a computer, it will normally be CPU-bound in application code rather than in the OS.

And you don't need to use the huge memory model for your applications. PDOS/86 supports running programs that were built in any memory model, including COM files.

PDPCLIB now (as of like a week ago) supports all of that too, although you can't do a malloc or use parameters in tiny/small/medium. I can probably change that but it hasn't been a priority yet.

Also as noted by another poster, it's a theoretical performance hit, not a real one.
 
Another thing to note: now that I have made the code standard, removing the hardcoded 4-bit shifts from my source code, I'm planning on supporting the Turbo 186 with its 8-bit shifts, and doing the equivalent with selectors for the 80286.
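
The idea, roughly, is to parameterize the shift rather than bake in the 4 (just a sketch; SEG_SHIFT is a name I made up, not anything in the PDOS source):

    #include <stdio.h>

    #ifndef SEG_SHIFT
    #define SEG_SHIFT 4  /* 4 on a plain 8086, 8 on a Turbo 186 */
    #endif

    /* convert a segment:offset pair into a linear address */
    static unsigned long to_linear(unsigned short seg, unsigned short off)
    {
        return ((unsigned long)seg << SEG_SHIFT) + off;
    }

    int main(void)
    {
        printf("%lX\n", to_linear(0x1234, 0x5678));  /* 179B8 with shift 4 */
        return 0;
    }

On the 80286 the same spot in the code would go through selectors instead of a shift, but either way there is now a single, well-defined place to change.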

I'm envisioning no change to the MZ executable format other than ensuring that no single piece of object code crosses a 64k boundary, so that I can adjust segments appropriately in different 16-bit environments.

It's still basically MSDOS, but supercharged.

Unless I'm missing something ...
 
Another thing to note: now that I have made the code standard, removing the hardcoded 4-bit shifts from my source code, I'm planning on supporting the Turbo 186 with its 8-bit shifts, and doing the equivalent with selectors for the 80286.

I'm envisioning no change to the MZ executable format other than ensuring that no single piece of object code crosses a 64k boundary, so that I can adjust segments appropriately in different 16-bit environments.

It's still basically MSDOS, but supercharged.

Unless I'm missing something ...
How is it supercharged? By that I mean there seem to be lots of fancy features, but DOS-based programs would have to be rewritten to make use of said fanciness, or am I missing something?
 