
Do you think Tim Patterson infringed on DRI's rights with QDOS/MSDOS?

alank2

Veteran Member
Joined
Aug 3, 2016
Messages
2,146
Location
USA
I'm not in love with the CP/M filesystem though I think it is interesting. To me there is so much directory waste for the allocation that I tend to like the FAT concept better.

My real gripe though was the idea of logging/mounting a disk and never changing it. What did people do if they ran out of space on a disk and had a document they needed to save?
 

durgadas311

Veteran Member
Joined
Mar 13, 2011
Messages
1,697
Location
Minnesota
I'm not in love with the CP/M filesystem though I think it is interesting. To me there is so much directory waste for the allocation that I tend to like the FAT concept better.

My real gripe though was the idea of logging/mounting a disk and never changing it. What did people do if they ran out of space on a disk and had a document they needed to save?
CP/M allowed changing disks. It was quite easy - although not without problems. In general, a warm-boot was all that was needed, which happens when most programs exit or when ^C is typed at the command prompt. The equivalent could be done under program control as well. Switching the disk in drive A: was more problematic, but there were many "single drive" copy programs written.
 

krebizfan

Veteran Member
Joined
May 23, 2009
Messages
6,077
Location
Connecticut
I'm not in love with the CP/M filesystem though I think it is interesting. To me there is so much directory waste for the allocation that I tend to like the FAT concept better.

My real gripe though was the idea of logging/mounting a disk and never changing it. What did people do if they ran out of space on a disk and had a document they needed to save?
Have multiple drives and save on whichever drive had space and then transfer to a new data disk once convenient.
 

Chuck(G)

25k Member
Joined
Jan 11, 2007
Messages
43,250
Location
Pacific Northwest, USA
Disk changing wasn't a problem if your code accounted for it. I wrote a hard disk backup program in the early days that did just that. There's a BDOS call to "reset" (function 13) the disk system. Of course, it's necessary to close any open files on the current disk, so the program had to be aware of the rules.
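The sequence Chuck describes can be sketched as a toy model (Python here rather than 8080 assembly; the class and method names are invented for illustration, and only the function numbers correspond to real CP/M calls):

```python
# Toy model of the CP/M disk-change discipline described above:
# close any open files on the current disk, then issue BDOS
# function 13 (reset disk system) before touching the new disk.
# This illustrates the calling rules only; it is not real BDOS code.

class ToyBdos:
    def __init__(self):
        self.open_files = set()     # files the program has opened
        self.disk_logged_in = True  # allocation info considered valid

    def open_file(self, name):       # ~ BDOS function 15 (open file)
        self.open_files.add(name)

    def close_file(self, name):      # ~ BDOS function 16 (close file)
        self.open_files.discard(name)

    def reset_disk_system(self):     # ~ BDOS function 13 (reset disks)
        if self.open_files:
            # Files still open across a disk change is exactly how
            # directories got clobbered under real CP/M.
            raise RuntimeError("close open files before resetting")
        self.disk_logged_in = False  # forces re-login on next access

def swap_disk(bdos):
    """Safe sequence for a program that wants to change disks."""
    for f in list(bdos.open_files):
        bdos.close_file(f)
    bdos.reset_disk_system()  # next disk access re-reads the directory
```

The point of the rule: function 13 invalidates the logged-in disk state, so any file left open across the reset is a recipe for directory corruption.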
 

alank2

Veteran Member
Joined
Aug 3, 2016
Messages
2,146
Location
USA
It was something I didn't expect because I started with MS-DOS 2.11 and so it took me awhile to get used to this aspect of CP/M.
 

retrogear

Veteran Member
Joined
Jan 29, 2014
Messages
1,070
Location
Minnesota
CP/M-86 for IBM has an encrypted Easter egg from programmer Dean Ballard. Wouldn't it be cool to uncover such a thing in early MS-DOS?
Gotta keep the myth going ... ;)

Larry G
 

Eudimorphodon

Veteran Member
Joined
May 9, 2011
Messages
6,467
Location
Upper Triassic
It was something I didn't expect because I started with MS-DOS 2.11 and so it took me awhile to get used to this aspect of CP/M.

I managed to trash a couple disks for the Osborne I bought at a garage sale in the early 90's, which was the first CP/M machine I handled in the flesh. I never really thought of MS-DOS as being "user friendly" but, yeah, coming to CP/M after MS-DOS makes CP/M feel like something a caveman chiseled out of a rock... and then hurled off a cliff at your head.

This is one of the reasons why my personal favorite Z-80 operating system(s) are the various TRS-DOS variations/clones for the Tandy Model I/III; even though there are some aspects of them that are less like MS-DOS than CP/M is (different drive names, etc) they're significantly more forgiving/friendly. Granted they're a lot more technically sophisticated/complex than CP/M. (Like, for instance, how they heavily rely on disk overlays. It's great from a functionality standpoint but it also means the optimal drive configuration for a TRS-80 is *three* disk drives, with a System disk permanently duct-taped into drive 0.)...

You know, in a way I think you could make a case that the PC's success was in part the result of it arguably having more in common with a TRS-80 than a CP/M machine. Think about it: both machines had fairly extensive device APIs and low-level services built into ROM (Level II BASIC and the PC BIOS respectively) that programs could leverage to write significantly richer software than CP/M's limited set of paper-TTY-oriented I/O calls allow... yet despite that, both ended up with software bases that really rely on "bit-banging"-level compatible hardware, because when you're dealing with small, low-horsepower computers it's very difficult to achieve with general-purpose APIs what you can do with brute-force engineering. (And, significantly, when these systems were built nobody really had a good idea what an all-purpose, device-independent OS with a "rich" user experience would even look like. I would maybe credit the original Macintosh as an early low-resource crack at this that had surprisingly good device independence... but it also uses way more memory than would have been practical in the 1970s, and was originally devilishly hard to write software for.) "Generic" MS-DOS compatible computers basically cratered as a genre after only about three years on the market because of the weaknesses inherent in trying to restrict compatibility to a meager and inadequate API that doesn't really provide anything but disk access; by 1984 a "PC compatible" had to be almost as hardware-compatible with an original IBM 5150 as a TRS-80 clone had to be with an original TRS-80, or an Apple clone with an original Apple II, to be a sellable product.

MS-DOS may have carried around inside it some API compatibility with CP/M, but even that ended up being largely deprecated when the "new" disk API came out with MS-DOS 2.0. So, yeah. I guess I actually don't feel like PCs are all that similar to CP/M machines despite the ancestral cross-pollination.

Gary Kildall seemed pretty reasonable about settlements too, so maybe people could have been persuaded to settle had CP/M hung around, but the biggest insult would have been that it was based on CP/M 2.2, NOT CP/M 3 or MP/M or anything he worked on later. That is more of a threat, as it represents a fork that would only serve to draw customers away, not back and forth between... MSX-DOS would have been similar. It would have been based on earlier CP/M versions, and hence driven compatibility back towards version 1. That in and of itself is a bit of a threat, as it lowers the lowest common denominator and invalidates future developments, and hence revenue from upgrades.
Likewise, if software companies kept pushing out CP/M 1.0 compatible software, no one was going to upgrade their OS.

CP/M 1.0 compatible software? FWIW, 1.4 was the first version that was even remotely common in the wild, and for most practical purposes CP/M 2.2 is universally considered the baseline version. If you compare the CP/M parts of MS-DOS to the CP/M BDOS call list you'll see it basically copied the 2.2 API, as did MSX-DOS. Anyway, I'm kind of curious what you feel was implemented in CP/M 3 or MP/M that would change anything with regard to making CP/M a more "desirable" user environment, specifically with regard to making *good* software more portable?

I think it's worth noting that CP/M really was a dumpster fire by the early 80's. Here's a recollection from someone who worked on the development of the Atari ST in 1984. Atari contracted with DRI for both the GEM GUI and an underlying OS for it to run on top of, and that underlying OS was originally going to be CP/M-68K. Read the article for more details, but even in 1984 the version of CP/M-68K they were working with was stuck with nothing but the rusty old CP/M filesystem calls and character I/O; the team was "saved" from CP/M by switching to a DRI skunkworks project called GEMDOS which, ironically for this thread perhaps, was a knockoff of MS-DOS, using the FAT filesystem and cloning the better MS-DOS 2.x filesystem API; this is what eventually became the underpinnings of Atari's TOS, not CP/M-68K. (The history of GEMDOS is murky, but it looks like it was related to PC-MODE and other components that turned Concurrent CP/M-86 into Concurrent DOS... and then of course eventually DR-DOS.) So... yeah, it kind of reads like by the mid-80's even DRI was done with CP/M.

For the most part yes, excepting that there was a slight, almost-formed movement of people with Z80 systems that *didn't* support CP/M who wanted to use it, because IIRC Wordstar was still pretty big and hung in there until around 1990, and there were other apps they wanted too, for exactly the reasons I've mentioned. They were tired of losing data, and the PC didn't quite become the new de facto standard until the late 80s.

In other words, owners of machines that had been orphaned by their manufacturers or simply not taken seriously enough by software companies to bother porting "productivity" software to game computers found that hacking their computers to run CP/M at least gave them *something* they could use to stretch their investment a little further. That, in a nutshell, is what CP/M-80 was good for in the latter half of the 1980's. And, sure, Wordstar is definitely better than nothing.
 

durgadas311

Veteran Member
Joined
Mar 13, 2011
Messages
1,697
Location
Minnesota
Comparing CP/M-80 to MS-DOS for the purpose of determining the best product for the future is just pointless and completely misses the point of the discussion. And it shows a complete misunderstanding of the issues and history. DRI had already stopped improving CP/M-80, and 8-bit machines had run out of headroom. CP/M-86 and beyond, and CP/M-68K and others, were just Gary enjoying the journey into the future. They certainly showed what could be done if you're not encumbered the way MS was. But, as previously stated, there was no way that anything but MS-DOS was going to be the primary OS for IBM PCs, and that had nothing to do with it being good or better.
 

Eudimorphodon

Veteran Member
Joined
May 9, 2011
Messages
6,467
Location
Upper Triassic
CP/M-86 and beyond, and CP/M-68K and others, were just Gary enjoying the journey into the future. They certainly showed what could be done if you're not encumbered the way MS was.

But CP/M-68K, per the Atari ST anecdote, was an archaic mess? And if we’re going to accuse Microsoft of “ripping off” CP/M, well, what saved CP/M-86 was ripping off and morphing into MS-DOS, so I’m not sure how DRI was exactly “unencumbered” here.

IBM didn’t owe it to DRI to use CP/M. Sure, they offered to stick CP/M-86 into their catalog after Kildall threatened to sue, but that was them being nice, not proof they were “guilty”. (Frankly, when you’re a target as juicy as IBM sometimes it pays to pay people off rather than risk the vagaries of a jury, even if you don’t feel like you’re in the wrong.) They originally thought CP/M-86 was what they wanted because the whole plan behind the PC was to try to buy as much as they could off the shelf and that was the name that they recognized. But when they got there they found out the OS wasn’t even done and they didn’t like his terms, so they went somewhere else. Maybe it feels a little sketchy that they duplicated the CP/M disk API as an expedient to help software developers, but APIs aren’t copyrighted, and in exchange they got an OS they could evolve at whatever pace they wanted to. I’d say they made the right choice.
 

durgadas311

Veteran Member
Joined
Mar 13, 2011
Messages
1,697
Location
Minnesota
But CP/M-68K, per the Atari ST anecdote, was an archaic mess? And if we’re going to accuse Microsoft of “ripping off” CP/M, well, what saved CP/M-86 was ripping off and morphing into MS-DOS, so I’m not sure how DRI was exactly “unencumbered” here.

IBM didn’t owe it to DRI to use CP/M. Sure, they offered to stick CP/M-86 into their catalog after Kildall threatened to sue, but that was them being nice, not proof they were “guilty”. (Frankly, when you’re a target as juicy as IBM sometimes it pays to pay people off rather than risk the vagaries of a jury, even if you don’t feel like you’re in the wrong.) They originally thought CP/M-86 was what they wanted because the whole plan behind the PC was to try to buy as much as they could off the shelf and that was the name that they recognized. But when they got there they found out the OS wasn’t even done and they didn’t like his terms, so they went somewhere else. Maybe it feels a little sketchy that they duplicated the CP/M disk API as an expedient to help software developers, but APIs aren’t copyrighted, and in exchange they got an OS they could evolve at whatever pace they wanted to. I’d say they made the right choice.
Makes a good story, to some.
 

cj7hawk

Veteran Member
Joined
Jan 25, 2022
Messages
806
Location
Perth, Western Australia.
I'm not in love with the CP/M filesystem though I think it is interesting. To me there is so much directory waste for the allocation that I tend to like the FAT concept better.

My real gripe though was the idea of logging/mounting a disk and never changing it. What did people do if they ran out of space on a disk and had a document they needed to save?
I'm not sure CP/M should be blamed for that - firstly, not all drives of the era supported a disk change line, and even then it needed to be trapped.

As a workaround, the BDOS calculated a checksum for each 128-byte record (4 entries) in the directory; scanning through the directory would generate the checksum and compare it to the checksum vector for that block held in the BDOS.

Then when the checksum differed, it indicated that the disk had been changed, and as such, should no longer be trusted and triggered rebuilding of the allocation vector, which should theoretically address the issue of running out of disk space. Assuming the app didn't try to look for a local file or perform a local file operation before saving and didn't do anything similarly weird.

It's not all that inefficient to do it that way, and only has a 1/256 chance of being ignored if a disk is changed unexpectedly, and should rebuild the disk checksum vector and allocation tables if it does detect a disk change.
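The checksum scheme described above can be sketched like this (an illustrative Python model, not DRI's code; the function names are invented, but the 8-bit per-record checksum is what gives the 1/256 miss chance):

```python
# Illustrative sketch of the CP/M directory-checksum scheme: the BDOS
# keeps one 8-bit checksum per 128-byte directory record (4 directory
# entries of 32 bytes each). A mismatch while scanning means the disk
# was swapped behind the BDOS's back.

def record_checksum(record: bytes) -> int:
    """8-bit sum of one 128-byte directory record."""
    assert len(record) == 128
    return sum(record) & 0xFF

def build_checksum_vector(directory: bytes) -> list:
    """One checksum per 128-byte record of the directory area."""
    return [record_checksum(directory[i:i + 128])
            for i in range(0, len(directory), 128)]

def disk_changed(directory: bytes, vector: list) -> bool:
    """True if any record's checksum disagrees with the stored vector."""
    return build_checksum_vector(directory) != vector
```

Since each checksum is a single byte, a swapped disk whose directory record happens to sum to the same value goes undetected, which is where the 1/256 figure comes from.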

David
 

Chuck(G)

25k Member
Joined
Jan 11, 2007
Messages
43,250
Location
Pacific Northwest, USA
It's not all that inefficient to do it that way, and only has a 1/256 chance of being ignored if a disk is changed unexpectedly, and should rebuild the disk checksum vector and allocation tables if it does detect a disk change.
Actually, it's much worse than that. I ran into this one myself. Part of my usual routine with 940KB diskettes was to make every disk--including disks used only for data--a system disk with a selection of files that might be used. The easiest way to do this was to make a master "blank" and copy it, then add working files as needed. This meant that any working disk could also serve as a boot disk with a selection of essential utilities and get one out of the awful position when the boot disk suddenly fails.

Unfortunately, this led to the "disk changed" checksum being thwarted. Clobbered a few disks this way.
 

cj7hawk

Veteran Member
Joined
Jan 25, 2022
Messages
806
Location
Perth, Western Australia.
Actually, it's much worse than that. I ran into this one myself. Part of my usual routine with 940KB diskettes was to make every disk--including disks used only for data--a system disk with a selection of files that might be used. The easiest way to do this was to make a master "blank" and copy it, then add working files as needed. This meant that any working disk could also serve as a boot disk with a selection of essential utilities and get one out of the awful position when the boot disk suddenly fails.

Unfortunately, this led to the "disk changed" checksum being thwarted. Clobbered a few disks this way.

Well, I did say "should" - though I haven't had a close look at the code to see how often the BDOS builds, checks and rebuilds the allocation vector and checksum tables.

Losing data because of the OS is a big problem. :( I'm kind of amazed it was accepted at all back in the day.
 

Chuck(G)

25k Member
Joined
Jan 11, 2007
Messages
43,250
Location
Pacific Northwest, USA
We took a more proactive approach. Since we were using 5.25" drives (I suspect this might be applicable to 8" drives as well), we sampled the write-protect status every 200 milliseconds or so. If there were files open for writing and we detected a change, we stopped the system right there, sounded the buzzer and advised the user to put the disk back in the drive. It did solve the corruption-by-user problem nicely. Of course, this requires a timer, so it's not applicable to generic CP/M.
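A rough sketch of that polling loop (the drive interface names here, `read_write_protect` and `buzzer`, are invented for illustration; the real version lived in the BIOS and ran off a hardware timer):

```python
# Hypothetical model of the write-protect polling scheme described
# above: sample the drive's write-protect line on a timer, and if the
# status changes while files are open for writing, stop and warn.

import time

def watch_drive(drive, files_open_for_write, poll_seconds=0.2):
    """Poll until writes finish or the disk is pulled mid-write."""
    last = drive.read_write_protect()
    while files_open_for_write():
        time.sleep(poll_seconds)          # ~200 ms sample interval
        now = drive.read_write_protect()
        if now != last:                   # disk was pulled or swapped
            drive.buzzer()
            return "put the original disk back in the drive"
        last = now
    return "ok"
```

A change on the write-protect line is a cheap proxy for "the user opened the door", which is why it catches swaps even on drives with no dedicated disk-change signal.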
 

Chuck(G)

25k Member
Joined
Jan 11, 2007
Messages
43,250
Location
Pacific Northwest, USA
Define "rights" in the context of the time. "Look and feel" issues came later. Certainly not the trademark or any patents. Code can't have been copied, as it was a different architecture. (see NEC v. Intel). Manual text wasn't copied.
If you can claim that MS infringed on DRI some way, then you have to explain why DRI didn't infringe on MS with products like Concurrent DOS or DR-DOS.
 

Eudimorphodon

Veteran Member
Joined
May 9, 2011
Messages
6,467
Location
Upper Triassic
You’ve also got to consider that any clone of something as small as CP/M is by necessity going to have structural similarities. I mean, the logical way to implement the BDOS interface is a jump table, and there are only so many ways you can code that. So I’m not necessarily impressed by anecdotes alleging theft that revolve around “suspicious similarities” in isolated parts of the object binaries.
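For what it's worth, the dispatch structure in question looks roughly like this in any implementation (a minimal Python stand-in; only the function numbers mirror real BDOS calls, the bodies are invented):

```python
# Minimal sketch of the "jump table" dispatch style described above:
# BDOS-like services selected by function number. Function numbers 2
# and 13 mirror real CP/M calls; the bodies are illustrative stand-ins.

def console_output(state, ch):       # ~ BDOS function 2
    state["console"].append(ch)
    return 0

def reset_disk_system(state, _arg):  # ~ BDOS function 13
    state["logged_in"] = False
    return 0

# The dispatch table itself. Any clone of the same interface ends up
# with a structure very much like this, which is why isolated
# structural similarity by itself proves little.
JUMP_TABLE = {
    2: console_output,
    13: reset_disk_system,
}

def bdos_call(state, function, arg=None):
    """Single entry point: look up the function number and jump."""
    return JUMP_TABLE[function](state, arg)
```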
 