
Do you think Tim Paterson infringed on DRI's rights with QDOS/MS-DOS?

Not enough memory? It ran on my 386/20 back in the day. Later releases dropped 386 support. SCO ran quite well on a 286.

Time moves on. I think that Debian and Ubuntu have dropped any plans for future releases on 32-bit x86 platforms. I suspect that the 32-bit ARM releases (Armbian; is there CP/M for ARM?) will continue for some time.
Plenty. Red Hat 5.2 installed on it just fine... I wouldn't touch SCO Unix with a barge pole on my 286 with 8 megs of RAM. DOS and MS Windows 3.1 or PC/GEOS support more hardware and are more useful. You see, I own the manuals and software for SCO Open Unix. I'm well aware of SCO Unix system requirements. See pic below.

(photo attached) The 5522 had 64 megs. Have a look in this thread: https://forum.vcfed.org/index.php?threads/compaq-presario-5522.36633/
 
Which other OS had been implemented on more than 100 different platforms?

I would say you are using “platforms” extremely generously and in a manner not remotely comparable to, say, how UNIX uses it. Effectively every system CP/M-80 runs on *within the scope of what’s actually supported by the API* is the “same”, except that the video and storage drivers had to be hacked.

(This is actually an interesting place where the PC completely outclasses the CP/M model: the PC has video and storage drivers built into its boot ROM which the OS links to, allowing the *same OS binaries* to run on PCs with different storage controllers and other variations *without modification*. They didn’t steal that from DRI.)

I mean, basically CP/M is “portable” in the sense most software was in 1975, i.e., your API was a jump table, and users who had a different collection of boards in their Altair than the author were expected to modify that jump table to point to a driver for their particular dingus. It’s not really that comparable to modern conceptions of modular/portable software at all. And it is important to note that the services provided by CP/M outside of disk storage were minimal; applications relying on, well, even things as simple as cursor positioning still needed modification via the same crude patching technique.
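(For anyone who hasn't poked at software from that era, here is a rough sketch in C, rather than 8080 assembly, of what that "API as a jump table" model amounts to: a table of function pointers standing in for the jump table. The structure and names are mine, purely for illustration; they are not actual CP/M BIOS entry points.)

```c
/* Illustrative sketch of the "jump table" style of portability.
 * The "OS" only ever touches hardware through this table; a user with
 * a different collection of boards patches the storage/console entries
 * to point at drivers for their particular hardware. Names are
 * hypothetical, not real CP/M BIOS entry points. */
#include <stdio.h>

struct io_table {
    int  (*con_in)(void);                                     /* read a console character  */
    void (*con_out)(char c);                                  /* write a console character */
    int  (*read_sector)(int trk, int sec, void *buf);         /* storage driver: read      */
    int  (*write_sector)(int trk, int sec, const void *buf);  /* storage driver: write     */
};

/* Stand-in "driver" for one particular disk controller. */
static int my_controller_read(int trk, int sec, void *buf) {
    (void)trk; (void)sec; (void)buf;
    return 0;                       /* pretend we read a 128-byte sector */
}
static int my_controller_write(int trk, int sec, const void *buf) {
    (void)trk; (void)sec; (void)buf;
    return 0;
}
static int  stub_con_in(void)      { return getchar(); }
static void stub_con_out(char c)   { putchar(c); }

/* The table as shipped; someone with a different controller card
 * would "patch" the last two pointers and leave the rest alone. */
static struct io_table bios = {
    stub_con_in, stub_con_out,
    my_controller_read, my_controller_write,
};

int main(void) {
    char buf[128];
    bios.read_sector(0, 1, buf);    /* the "OS" never touches hardware directly */
    bios.con_out('!');
    bios.con_out('\n');
    return 0;
}
```

The point being that everything above the table can stay the same binary; only the table (and the drivers it points at) changes per machine, which is the trick the PC's ROM BIOS later baked into firmware.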

What other options existed at that time that you feel could truly have been as viable as CP/M for the small-time business or experimenter?

They could always suck it up and write their own, and if you look through the history of microcomputing in the early days that’s just what everyone did. CP/M was never as dominant as you seem to believe it was; I mean, according to this worldview the entire massive ecosystem of computers based on the 6502 couldn’t possibly have existed because writing 4K of disk code was just too hard?

It’s obvious from the success of proprietary 8-bit platforms that given the choice people preferred a richer user experience than CP/M offered. You mentioned those cool Japanese computers that ran it as the base OS, but those are just exceptions that prove the rule: software for them is effectively just as proprietary as Apple II software because it hits the enhanced hardware. Ultimately CP/M doesn’t buy you very much unless an absolute least common denominator experience is what you want, and most people didn’t.
 
Post-DOS as in after DOS's creation... Not "After DOS died" - I see where my words may be confusing.

And yes, I too ran DOS from 286 to Pentium. And on some XTs too, but I started with a 286.
You are stretching things, aren't you? DOS is not dead... see FreeDOS.
 
These statements lack an understanding of the motivations, personalities, and personal goals of the individuals and companies involved. It's easy to make these sorts of remarks from 40 years in the future, having not known the people or watched things unfold. You are correct that DRI was not evolving fast enough in certain areas, but to imply that DRI was motivated to compete with, or replace, MS is not entirely accurate. DRI was supplying true-multitasking OSes for the x86 long before MS could accomplish even "fake multitasking".

The market wanted larger, more capable applications, not the ability to switch between the same inadequate small programs. DRI put in a lot of effort chasing a very small market; MS wound up with an OS that fit the customers' requirements better.
 
The market wanted larger, more capable applications, not the ability to switch between the same inadequate small programs. DRI put in a lot of effort chasing a very small market; MS wound up with an OS that fit the customers' requirements better.
Bundling with systems helped a lot as well. All the major OEMs were doing it.
 
You see, I own the manuals and software for SCO Open Unix. I'm well aware of SCO Unix system requirements. See pic below.
Yes, I donated the manuals quite some time ago, as I had no use for them. May still have copies of the floppies; I haven't looked. Still have my manuals for System III, however.
 
The market wanted larger, more capable applications, not the ability to switch between the same inadequate small programs. DRI put in a lot of effort chasing a very small market; MS wound up with an OS that fit the customers' requirements better.
Not sure what you're talking about here. Are you comparing 8-bit computers with 16-bit? MS-DOS was not competing with anyone; IBM had ensured its success by making it the default, and for a while the only, OS. DRI products for the 8086 could run larger programs, too.
 
The single most common configuration for a CP/M-80 machine right up until the end of market viability in most regions was a 4MHz-ish Z80 with 64K. That isn’t a lot different from the most minimal machine CP/M would reasonably run on in the mid-1970’s. And they acted pretty much the same; about the only things that evolved were cost and form factor, and the combination of those explains why the last “mainstream” CP/M machines before it died out were bargain-priced portables, a niche that *poofed* when PC compatibles got cheap enough.

CP/M was a fossil in the 1980’s and casual users were for the most part thrilled to dump it when better options came along. I was pretty young at the time but I was “there”, and my memories are that CP/M just wasn’t on most people’s radar unless it absolutely had to be. CP/M is what your accountant used or what ran your dentist’s office in the late 1970’s, and by 1983 had become that thing people who discovered the joy of word processing grudgingly put up with on their Kaypro because the package was cheap and not much heavier than a portable typewriter.

CP/M was a useful product that deserves its place in the history books because it *was* the first “least common denominator” operating system that a company could buy off the shelf (but then need to modify), and its existence thus greased the wheels for some of the early pioneering computer companies to get product out the doors (just like Microsoft did with BASIC). But this claim that it set the mold for all future “modern computing” is a hell of a stretch, especially when you consider how the version of CP/M that 95% of the world actually *used* was essentially dead and frozen by 1979 or so. DRI did have a lot of interesting products after CP/M-80 (GEM is my personal favorite), but in a lot of ways the personal computing world had already moved on to more shiny objects even before MS-DOS came along and strangled CP/M-86 in the cradle.
Once upon a time, during the 80's, I worked for a large federal agency located just outside of the Beltway (DC). One of its missions was designing and implementing VHF communication tower sites for various federal law enforcement groups. Our HQ staff had a budget of about $40K set aside for a custom software package to capture the various parameters required for optimum efficiency and operation of the new transceiver sites. When the software package arrived and was installed on an IBM PC, it was found to be an IBM CP/M-86 version and couldn't be returned in accordance with the purchase agreement.

Now, don't hold my feet to the fire on the CP/M version, but nobody ever got canned for buying IBM back in the day, and if your department was getting some PCs back then, you could bet the farm that they would be IBM. However, there was a smattering of Apples here and there. Most, if not all, of this section's PCs were IBM XTs configured with IBM DOS on HDs, with the usual complement of floppy drives.

So the project stalled, and the boss and his lead man wrung their hands for the longest while, until one of our field techs got wind of it while visiting the office. Being about as computer savvy as you could be back then, he suggested that they just reconfigure a PC for CP/M. "You can do that?" was the response. "Sure, just get me the CP/M software package and we'll have it up and running in no time." And they did. Back then, if you didn't have a word processor and/or an inventory program, then your PC was just a status symbol.
 
On the topic of other things that almost got sued for allegedly copying CP/M: there was the operating system for the Z80-based Norwegian computer Tiki-100. Conveniently they called it KP/M, so it certainly hit a sore spot! Their counter-claim, published in the news at the time the case came up, was that "it was written specifically for the Z80, so could not be a copy".

In hindsight, looking at some disassembled code, it is clear that some copying did indeed happen. Just not from DRI's CP/M. It used a slightly modified version of ZCPR as the shell, and it wouldn't surprise me if there's a fair bit of ZSDOS below that. None of this was credited in any way, shape or form, of course; everything was sold as if it had all been made in-house from scratch.

At the end of the day, Tiki-Data settled the case with DRI out of court very quickly, and the only part of the deal that was made public was a name-change to the less infringing "TIKO".
 
So that was a matter of trademark infringement, not pirated code, right?

See https://en.wikipedia.org/wiki/Software_copyright under "United States". Until about 1974, software wasn't generally considered to be copyrightable or patentable. One could claim copyright before that, but it was generally not enforceable. I don't recall any copyright notices in mainframe computer code pre-1975.

A later tangent was not copying functionality but "look and feel"; that issue wasn't tested in court until 1994.

Software copyright is a tangle of legal opinions and claims. Had DRI chosen to sue MS, I suspect the case would have dragged on for years, with the inevitable result of bankrupting DRI. I'm sure that Gary realized this.
 
If "look and feel" was to be litigated, DRI may have had problems over their use of PIP.

Amazing how SCP evades potential litigation in all these retrospective cases, since SCP did the distribution of the purportedly infringing material.
 
FACT: it wouldn't install on my 386... or on my Compaq Presario 5522 with a P166 MMX CPU. Don't believe everything you see posted on the internet.
Just because it didn't run on YOUR PC doesn't mean it won't run on any. I've run NetBSD 1.5 on a P133 for a long time.

I think that Debian and Ubuntu have dropped any plans for future releases on 32-bit x86 platforms.
Ubuntu has already dropped 32-bit x86 support; Debian probably won't for some time yet.

There are practical issues nowadays; the OpenBSD people are deeply unhappy with the Rust ecosystem because it cannot be compiled on a 32-bit system: the compiler runs out of address space. That kills a lot of ports, among them Firefox. Cross-compilation does help, though.

Linux may not have existed in 1980, but Unix flavors existed, and there were some distant cousins that ran on 8-bit machines. However, that was probably not practical for how IBM envisioned (and dictated) the original PCs being used.
On the other hand, IBM did sell PC/IX (a System III) for the XT, but as a single-user system.

Your [NetBSD] definition of architectures seems to be different from the one used in Linux (the 'arch' subdirectory). That "53" number is pretty generous, and if you look at the supported CPU list you see a much smaller number. By that definition, Linux "architecture support" is on par with anything else.
That is true, although NetBSD does support a few things Linux doesn't bother with (VAX and some special m68k platforms, for example). However, it never stretched into the NOMMU realm, which Linux successfully did.

I wouldn't touch SCO Unix with a barge pole on my 286 with 8 megs of RAM.
Xenix/286 did deeply impress me by providing true multitasking on a 286/12. Running two serial ports (one transmitting at full speed with kermit, no FIFO), writing a floppy image and running snake on the local console without breaking a sweat. Impossible in either DOS, Windows, or Minix.

[OS/2 subsystem in NT] Only for OS/2 v1x though.
And only for OS/2 1.x console applications, no PM support. About as useful as the POSIX support in NT - completely useless.
 
Software copyright is a tangle of legal opinions and claims. Had DRI chosen to sue MS, I suspect the case would have dragged on for years, with the inevitable result of bankrupting DRI. I'm sure that Gary realized this.

It's kind of an obscure subject, but I can't help but wonder how Gary Kildall felt about MSX-DOS for the Japanese MSX computers. As an OS it has the delightfully twisty origin story of being ordered as a version of "MS-DOS for the Z-80", and to make it so Microsoft hired Tim Paterson to build it... and the end result inherits the CP/M API compatibility that MS-DOS had, but actually set up to be (mostly) binary compatible with 8080 CP/M as well. From a technical standpoint it might be the least literal "CP/M-80 clone" there is, given the major changes like using the FAT filesystem natively, a completely different (and, fwiw, substantially friendlier) command interpreter, etc. But still, talk about just eating someone's lunch right in front of their face...

Speaking of MSX-DOS, I can't help but guess that if something like CP/M hadn't been around in the late 70's, Microsoft may well have filled that void themselves. The FAT filesystem was actually invented by Marc McDonald ("Microsoft Employee #1") in 1977 for use with a "stand-alone disk BASIC" that MS distributed for a few platforms, and he further developed it into an OS variously called M-DOS or MIDAS that MS never sold but apparently was using for some things in-house by 1979. (Tim Paterson worked FAT into Q-DOS, i.e., the future MS-DOS, because of experience helping with a port of stand-alone disk BASIC to the Seattle Computer Products 8086 board around that time.) In some alternative universe without CP/M, Microsoft may well have launched something a lot like MSX-DOS publicly in 1978 or 1979 to go along with the BASIC licenses they were selling to S-100 companies. (And everyone else.)
 
Xenix/286 did deeply impress me by providing true multitasking on a 286/12. Running two serial ports (one transmitting at full speed with kermit, no FIFO), writing a floppy image and running snake on the local console without breaking a sweat. Impossible in either DOS, Windows, or Minix.
I said it doesn't have anywhere near the hardware support. Which still stands....
 
It's kind of an obscure subject, but I can't help but wonder how Gary Kildall felt about MSX-DOS for the Japanese MSX computers.

He probably would not have liked it to have existed... But I think he would have objected even more to my own version of CP/M, which is a direct replacement and is smaller than DRI's... And the fact that I added in the capability to switch either the BDOS or the CCP between my own version and the DR version would have added injury to insult rather than being an "option", since he might still have been hurting from the IBM deal somewhat.

But I'm factoring in that had it been made in the UK back in the day and not last year, it would have been a distant threat at most, at least until/if it became popular in the late 80s. From the look of things, Gary Kildall seemed pretty reasonable about settlements too, so maybe people would have been persuaded to settle had CP/M hung around. But the biggest insult would have been that it was based on CP/M 2.2, NOT CP/M 3 or MP/M or anything he worked on later, which is more of a threat as it represents a fork that would only serve to draw customers away, not back and forth... MSX-DOS would have been similar. It would have been based on earlier CP/M versions, and hence driven compatibility back towards version 1. That in and of itself is a bit of a threat, as it lowers the lowest common denominator and invalidates future developments and hence revenue from upgrades.

A similar thing happened to the Spectrum 128K in the UK. It had nearly three times the memory of the 48K version, but software only used 48K and only the 48K hardware, except for the sound chip, meaning that instead of upgrading to a 128K to get the benefit, you could just bung in an aftermarket AY-3-8912 adapter and play 128K games on a 48K version... Software publishers found it easier to just program for the 48K version and add the music code anyway rather than creating all-new games that took advantage of the extra memory.

Likewise, if software companies kept pushing out CP/M 1.0-compatible software, no one was going to upgrade their OS.

I didn't know MSX-DOS used FAT. Did it support the CP/M file system, or was it purely translated?

I like MS-DOS' file system because it scales better for subdirectories, which are a pain to implement on a classic CP/M disk format, but the CP/M disk format works really well as an MMU, so I'm glad I didn't try to move away from the CP/M file system format. I imagine it caused compatibility issues for some CP/M software, though, that expected the allocations in the directory entry with the filename, or tried to create sparse files?
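For anyone curious about the structural difference being discussed here, this is a minimal sketch in C of the two allocation schemes, assuming FAT12 on the DOS side and a small CP/M 2.2-style format with 8-bit block numbers on the other. The field layouts follow the commonly documented formats, but this is an illustration, not a working driver:

```c
/* Contrast of the two on-disk allocation ideas being discussed.
 * FAT: the directory entry holds only the first cluster; the rest of
 * the file is found by walking a chain kept in a separate table.
 * CP/M: each 32-byte directory extent carries the allocation block
 * numbers for its slice of the file right in the entry itself. */
#include <stdint.h>
#include <stdio.h>

/* --- FAT12: follow the cluster chain starting at 'clus' --- */
static uint16_t fat12_next(const uint8_t *fat, uint16_t clus) {
    uint32_t off = clus + clus / 2;                       /* 1.5 bytes per entry */
    uint16_t v = (uint16_t)(fat[off] | (fat[off + 1] << 8));
    return (clus & 1) ? (uint16_t)(v >> 4) : (uint16_t)(v & 0x0FFF);
}

static void fat12_walk(const uint8_t *fat, uint16_t first) {
    for (uint16_t c = first; c < 0xFF8; c = fat12_next(fat, c))
        printf("cluster %u\n", (unsigned)c);              /* 0xFF8+ = end of chain */
}

/* --- CP/M 2.2-style directory extent (small disk, 8-bit block numbers) --- */
struct cpm_extent {
    uint8_t user;        /* user number, 0xE5 = deleted          */
    char    name[8];     /* file name, space padded              */
    char    ext[3];      /* file type                            */
    uint8_t ex, s1, s2;  /* extent counters                      */
    uint8_t rc;          /* 128-byte records used in this extent */
    uint8_t alloc[16];   /* allocation block numbers, 0 = unused */
};

static void cpm_walk(const struct cpm_extent *e) {
    for (int i = 0; i < 16; i++)
        if (e->alloc[i])
            printf("block %u\n", (unsigned)e->alloc[i]);  /* allocation lives in the entry */
}

int main(void) {
    const uint8_t fat[] = {0xF0, 0xFF, 0xFF, 0x03, 0xF0, 0xFF};  /* toy chain: 2 -> 3 -> EOF */
    puts("FAT file:");
    fat12_walk(fat, 2);

    struct cpm_extent e = { .user = 0, .name = "HELLO   ", .ext = "TXT",
                            .rc = 32, .alloc = {5, 6} };
    puts("CP/M file:");
    cpm_walk(&e);
    return 0;
}
```

The practical upshot: a FAT directory entry is small and fixed-size, so a subdirectory is just another file of entries, while in CP/M a growing file simply eats more directory extents, and all the allocation knowledge is welded into the flat directory.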
 
One thing that many seem to miss is that CP/M was developed for a diskette target. At the time, memory was dear and hard disks were extremely expensive. I can remember being floored by the cost of a hard disk for our MDS-800. So a directory structure that includes allocation information made sense, especially since file sizes were anticipated to be rather modest. The idea of a separate allocation map was very old at the time and pretty much necessitated a memory cache to ensure adequate performance. We opted for files occupying contiguous areas--at least, if one lost the directory, the information was still there and not scrambled. But it necessitated periodic "consolidation runs" to reclaim space from deleted files between active files. In our particular set of applications, it suited the purpose very well.
If you lose the FAT on a DOS filesystem, you're in deep weeds, particularly if files are routinely extended or deleted. You may have the file names, but gathering the content can be an arduous task.
There were other schemes, to be sure. One was to include a forward/backward link in every data block. There might be only a bitmap of allocated sectors kept on disk, or none at all. The weakness there is that a broken link (bad sector) leaves you in the dark--and it's particularly painful doing random access--you don't know where the end of the file is until you get there by reading the entire thing (see the sketch at the end of this post). Zilog's MCZ system even added 4 more bytes to a sector (132 bytes) to accommodate this.
ISIS used a sort of hybrid arrangement of pointer blocks that hold pointers to data sectors. Again, you have the problem of "lose a pointer block and suffer".
One filesystem that I learned about back in the big-iron days was to keep the allocation information on the same track as the data sectors that it described. I'm not certain that there was any particular advantage to that.
CP/M works well for diskette, but then falls apart with its flat directory structure on large hard disks. I suppose that it could accommodate subdirectories less inefficiently were it to keep an allocation bitmap on disk. But then, that's what the FAT filesystem accomplishes; only it also moves all of the allocation information out of the directory as well.
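To make the random-access weakness of the linked-sector scheme concrete, here is a small C toy loosely in the spirit of those per-sector links (the 132-byte MCZ sectors mentioned above). The layout is invented purely for illustration, not an actual Zilog format: reaching record N means hopping through N links from the start, and one bad link orphans everything after it.

```c
/* Toy model of a linked-sector file system: each sector carries its
 * payload plus a forward link to the next sector of the file.
 * (Layout invented for illustration; not a real on-disk format.) */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NSECT 16

struct sector {
    uint8_t data[128];
    int16_t next;            /* index of next sector, -1 = end of file */
};

static struct sector disk[NSECT];

/* Random access: to reach record n we must walk n links from the
 * start -- there is no table to consult, which is the weakness
 * described above, and a broken link loses the rest of the file. */
static int read_record(int first, int n, uint8_t *out) {
    int s = first;
    for (int i = 0; i < n; i++) {
        if (s < 0) return -1;            /* chain ended early (or broke) */
        s = disk[s].next;
    }
    if (s < 0) return -1;
    memcpy(out, disk[s].data, 128);
    return 0;
}

int main(void) {
    /* Build a toy 3-sector file: 4 -> 9 -> 2 -> EOF */
    int chain[] = {4, 9, 2};
    for (int i = 0; i < 3; i++) {
        memset(disk[chain[i]].data, 'A' + i, 128);
        disk[chain[i]].next = (int16_t)((i < 2) ? chain[i + 1] : -1);
    }
    uint8_t buf[128];
    if (read_record(4, 2, buf) == 0)
        printf("record 2 starts with '%c'\n", buf[0]);   /* prints 'C' */
    return 0;
}
```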
 