
Why didn't CP/M use a standard disk format?

Pentad4k

I never used CP/M back in the day, but I have read a lot about it and its creator, Gary Kildall. It seemed like a nice OS for the time, but if I understand CP/M correctly, it did not have a standard disk format? As in, each OEM could use their own format for storing the OS and data. For those of you who lived through CP/M: did Gary Kildall/Digital Research try to push for a standard format? I would think users would have wanted a standard format for easier data exchange. Was there a reason Gary/DRI didn't set a standard format from the beginning?

Thanks!
 
DRI did accept the IBM 8" format as the standard.

The early machines could not process a sector during the gap between sectors, so either sectors got skipped or the disk needed additional revolutions. Optimizing for the best interleave and skew made disk access several times faster. It was probably a painful experience for any Osborne or Kaypro user to deal with IBM PC-formatted disks instead of the machine-dependent format. Of course, the need for compatibility left later models stuck with a less efficient format.
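To make the interleave point concrete, here is a rough sketch (in C, not taken from any real BIOS) of how a software sector-translate table gets built. With a skew of 6 on a 26-sector track, consecutive logical sectors land six physical sectors apart, giving the CPU time to digest one sector before the next one passes under the head:

/* A rough sketch, not taken from any real BIOS: building the software
   sector-translate (interleave) table for a 26-sector 8" track. */
#include <stdio.h>

#define SPT  26   /* 128-byte sectors per track, IBM 3740 format */
#define SKEW  6   /* the usual CP/M 2.2 skew factor for 8" SSSD  */

int main(void)
{
    int xlate[SPT];
    int used[SPT] = {0};
    int phys = 0;

    for (int log = 0; log < SPT; log++) {
        while (used[phys])            /* slot already taken: slide forward */
            phys = (phys + 1) % SPT;
        used[phys] = 1;
        xlate[log] = phys + 1;        /* CP/M numbers sectors from 1 */
        phys = (phys + SKEW) % SPT;
    }

    for (int log = 0; log < SPT; log++)
        printf("%d%c", xlate[log], log == SPT - 1 ? '\n' : ',');
    return 0;
}

Running it prints the familiar 1,7,13,19,25,... skew table used for the standard 8" single-density format.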
 
CP/M abstracted the hardware away almost completely. The requirements were a CPU that was binary-compatible with the 8080, at least 20K of contiguous RAM starting at address 0000H, and a disk driver that worked in 128-byte blocks. Everything else was hidden from user programs. As long as the disk routines in the BIOS handled data to and from user programs 128 bytes at a time, the BIOS was free to write to the hardware in any format the designer wanted. That made disk compatibility an issue, but it also meant that CP/M could be made to work on pretty much any disk hardware you could come up with.
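As an illustration of that 128-byte contract, here is a hypothetical sketch in C (not DRI's code) of the "deblocking" a BIOS performs when the hardware actually uses 512-byte sectors: the BDOS only ever asks for a 128-byte record, and the BIOS quietly maps that onto whatever the drive really stores. The sector size, geometry, and RAM-disk stand-in are all made up for the sketch.

#include <stdio.h>
#include <string.h>

#define PHYS_SIZE     512                  /* assumed physical sector size */
#define RECS_PER_PHYS (PHYS_SIZE / 128)    /* 4 CP/M records per sector    */
#define SECTORS       9
#define TRACKS        40

/* stand-in for the real drive: a RAM "disk" so the sketch actually runs */
static unsigned char disk[TRACKS][SECTORS][PHYS_SIZE];

static void phys_read(int track, int sector, unsigned char *buf)
{
    memcpy(buf, disk[track][sector], PHYS_SIZE);
}

static unsigned char hostbuf[PHYS_SIZE];
static int buf_track = -1, buf_sector = -1;

/* The BDOS only ever asks for a 128-byte record; the BIOS hides the
   real sector size behind this call ("deblocking"). */
static void bios_read(int track, int rec, unsigned char *dma)
{
    int phys   = rec / RECS_PER_PHYS;           /* which physical sector */
    int offset = (rec % RECS_PER_PHYS) * 128;   /* record within it      */

    if (track != buf_track || phys != buf_sector) {   /* one-sector cache */
        phys_read(track, phys, hostbuf);
        buf_track  = track;
        buf_sector = phys;
    }
    memcpy(dma, hostbuf + offset, 128);
}

int main(void)
{
    unsigned char record[128];
    memset(disk[0][0] + 128, 'A', 128);   /* pretend record 1 holds 'A's */
    bios_read(0, 1, record);
    printf("%c\n", record[0]);            /* prints A */
    return 0;
}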
 
As @krebizfan noted, it actually *did* have a "standard" disk format; the first version of CP/M was written to use the same single-sided/single-density 8" format as the IBM 3740 data entry system, i.e., 26 128-byte sectors per track, for a bit under 256K per disk. This was the format that DRI's source for CP/M was distributed on, and probably a majority of CP/M systems with disk controllers capable of handling it could read disks in this format in addition to whatever native higher-density formats they supported.
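For anyone wanting to check that capacity figure, here is the arithmetic (the 77-track count is the standard 8" number; it isn't stated in the post above):

#include <stdio.h>

int main(void)
{
    long bytes = 77L * 26 * 128;    /* tracks x sectors x bytes per sector */
    printf("%ld bytes = %.1f KiB\n", bytes, bytes / 1024.0);
    /* 256256 bytes, about 250 KiB -- "a bit under 256K"; the two system
       tracks and the directory come out of that before file storage. */
    return 0;
}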

But, yeah, the trick there is that before controller chips like the Western Digital 1771 came along, building a controller capable of doing the standard IBM format was pretty expensive. (There's a religion built around how Steve Wozniak's disk controller for the Apple II was amazing because it eliminated "dozens" of chips; this is what it's based on. Granted, by the time it actually came to market you could build a controller using a 1771 with about the same number of chips as the Apple controller, if not fewer, but I guess the chips on Woz's controller were still *cheaper* chips.) There was a fair amount of economic incentive to find cheaper solutions, and this especially applied when the 5 1/4" minifloppies came out; you don't want your controller to cost more than the disk drives. Since CP/M's hardware abstraction let people get away with cooking up their own disk hardware, it's kind of inevitable that you'd end up with a Wild West of formats to go along with it.

Why things didn't start converging towards a universal standard (for 5 1/4" disks) once most computers started coming with Western Digital-type controllers mostly capable of reading each other's formats might be a decent question, but it has to be acknowledged that CP/M didn't really have that much of a window to pull it off. CP/M portables like Kaypro and Osborne's products had their 15 minutes of fame in the early eighties, but in the grand scheme of things they were never that popular before the IBM compatible came along and completely ate their lunch.
 
As the disks became 'larger', the physical disk formats became a little more 'fluid', shall we say.

Dave
 
In a sense, many of the CP/M systems did converge to a standard interchange format, the one used by the IBM PC. I know Kaypro shipped with a utility that could read and write IBM PC disk formats and I am fairly certain a lot of other systems included such a utility by the mid-80s.
 
XenoCopy can read over 400 floppy disk format types on a PC -- basically everything except GCR-based formats like Apple and Commodore:

www.xenosoft.com/fmts.html
 
At least well into the 80s, DRI provided its OEM vendors with masters in 3740 8" format.

...and 22disk can handle (at last count) 550 distinct formats. At least MS-DOS didn't make the same error.
 
Don't confuse disk(ette) format with filesystem format. DRI had a well-defined filesystem format, which could be adapted to different disk formats. DRI did not define a specific method for auto-detecting which filesystem parameters a given disk used, which is unfortunate (perhaps). Each vendor was free to use the physical disk as they wanted, and then define how CP/M would organize the files within that. But, as evidenced by things like "cpmtools", the filesystem was well defined, and if you knew the vendor's "DPB" (etc.) you could access the files just fine (in most cases).
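For reference, here is what that DPB (Disk Parameter Block) looks like, written out as a C struct purely for illustration; the field names follow DRI's CP/M 2.2 documentation, and the sample values are the usual 8" SSSD numbers. This is essentially the per-format information that cpmtools' diskdefs encode for each vendor.

#include <stdio.h>
#include <stdint.h>

/* CP/M 2.2 Disk Parameter Block, as a C struct for illustration only
   (a real BIOS keeps this as a 15-byte table in memory). */
struct dpb {
    uint16_t spt;   /* 128-byte records per track                     */
    uint8_t  bsh;   /* block shift: block size = 128 << bsh           */
    uint8_t  blm;   /* block mask: (block size / 128) - 1             */
    uint8_t  exm;   /* extent mask                                    */
    uint16_t dsm;   /* highest block number on the disk               */
    uint16_t drm;   /* highest directory entry number                 */
    uint8_t  al0;   /* directory allocation bitmap, first byte        */
    uint8_t  al1;   /* directory allocation bitmap, second byte       */
    uint16_t cks;   /* size of the directory check vector             */
    uint16_t off;   /* reserved (system) tracks before the directory  */
};

/* the classic 8" SSSD values: 26 sectors/track, 1K blocks, 243 blocks,
   64 directory entries, 2 system tracks */
static const struct dpb ibm3740 = {
    .spt = 26, .bsh = 3, .blm = 7, .exm = 0,
    .dsm = 242, .drm = 63, .al0 = 0xC0, .al1 = 0x00,
    .cks = 16, .off = 2,
};

int main(void)
{
    long block = 128L << ibm3740.bsh;             /* 1024-byte blocks    */
    long data  = (ibm3740.dsm + 1L) * block;      /* file+directory area */
    printf("block size %ld, data area %ld bytes, %d dir entries\n",
           block, data, ibm3740.drm + 1);
    return 0;
}

Swap in a different vendor's fifteen bytes and the same BDOS code happily reads that vendor's disks, which is all cpmtools really does.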
 
Was there an expectation of compatibility at the time? Did people think it was important? Looking back, it certainly would have been wise for DRI to declare a standard for 5.25" disks, which would have turned into multiple standards for single density, double density, etc. The PC had 160K/180K/320K/360K/720K/1.2M/1.44M/2.88M, and so on.
 
DRI did not even define a standard for 8" disks, aside from what was commonly accepted as standard for CP/M on SD SS "IBM 3740" 8" diskettes. The problem with 5" diskettes was that there was nothing like the IBM 3740 to reference. There were a lot of different ways of writing data to 5" diskettes, especially when the technology was in its infancy, and not all of them were even soft-sectored. There was no "benevolent dictator" to force a single standard on everyone. DRI simply made their OS adaptable to the hardware that was out there. At least most 5" disk systems used standard FM/MFM data encoding. But even 8" DD formats were "the Wild West", as most vendors were not satisfied with the IBM 8" DD 256-byte/sector specification.
 
In addition, vendors had different requirements/constraints based on their target platform, and so you couldn't force a specific starting track for the directory or layout of the boot tracks or how the bootstrap worked.
 
The various companies did recognize the need for reading/writing competing disk formats, or they would be shut out of any company that had already purchased a competing system; hence the inclusion of a disk-import utility with a lot of branded CP/Ms. I think Tim Paterson's company made a major point in its advertising of the number of disk formats that could be handled.

MS-DOS wasn't without its array of strange formats if one includes the systems released before the IBM PC using SCP-DOS. DOS backup software for floppies did all the same strange tricks to squeeze in a little more data and get a little more performance.
 
Thanks for the clarification! I appreciate the insight offered by everyone. Alank2 raised an interesting question about the expectation of compatibility at the time. I assumed that users would expect to share data between people/machines, but that comes from growing up in the 80s and 90s. Did users have that expectation at the time?
 
There were really only a few (5-8) years when CP/M machines were a major player in business circles, so the idea of interchanging data between machines of different types was not really that prominent. There wasn't much movement of data outside a circle of like machines. By the early 80s, though, it was becoming something people looked at more and more. Often, 8" SD SS was the means of interchange, but fewer machines offered 8" drive capability. Such matters were often handled as one-time conversions when moving to a new vendor, possibly even done as a transfer over a serial port, or by paying someone else to do the conversion. When moving to the IBM PC, some customers were willing to pay IBM to convert their data. That was one of IBM's appeals, for those willing to pay.
 
I'd argue CP/M's disk format was very consistent. But the media changed, and how the media was laid out greatly affected how disks were read, just as the hardware itself mattered.

The base code of the BDOS is the same for ALL CP/M systems, and all it cares about is sector number and track number. The BIOS is the part that interprets it all differently.

But if you eliminate the unused bits in sector and track, it's kind of a simplistic LBA, and there are only two majorly different formats the code needs to be aware of: whether it uses 8-bit or 16-bit allocation.
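To illustrate the 8-bit vs. 16-bit allocation point, here is a sketch in C using the directory entry layout from DRI's documentation (the helper name and test values are made up): each directory entry reserves 16 bytes for its allocation map, read as sixteen 8-bit block numbers on small disks and as eight 16-bit little-endian block numbers once the disk has more than 255 blocks.

#include <stdint.h>
#include <stdio.h>

/* 32-byte CP/M directory entry */
struct dirent_cpm {
    uint8_t user;        /* user number, or 0xE5 if the entry is free */
    char    name[8];
    char    type[3];
    uint8_t ex, s1, s2;  /* extent counters                           */
    uint8_t rc;          /* records used in this extent               */
    uint8_t al[16];      /* allocation map: 16 x 8-bit or 8 x 16-bit  */
};

/* extract the n-th block number, given the disk's highest block (dsm) */
static unsigned block_of(const struct dirent_cpm *e, int n, unsigned dsm)
{
    if (dsm < 256)                    /* small disk: 8-bit entries */
        return e->al[n];
    return e->al[2 * n] | (e->al[2 * n + 1] << 8);   /* 16-bit, LE */
}

int main(void)
{
    struct dirent_cpm e = { .al = { 0x34, 0x12 } };
    printf("8-bit:  block %u\n", block_of(&e, 0, 242));   /* 0x34   = 52   */
    printf("16-bit: block %u\n", block_of(&e, 0, 300));   /* 0x1234 = 4660 */
    return 0;
}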
 
That is what I was trying to say as well - filesystem format vs. disk format.

In fact, the CP/M BDOS actually uses "LBA" internally, and then at the last moment it calculates track/sector using DPB.SPT. I've often thought about a nice "enhancement" for SASI, CF, SDCard would be to skip the whole track/sector crud and just have the BDOS pass the "LBA" (actually logical record number) to the BIOS. One could define DPB.SPT == 0 to mean "use LBA", and then modify the BDOS to skip the track/sector computation in that case. Since the computation of track/sector is done by repeated (16 or 24-bit) subtraction, it could save a lot of wasted effort (particularly when a small DPB.SPT is chosen).
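Here is that idea sketched in C rather than 8080 code (the function names and the SPT == 0 convention are hypothetical, per the proposal above, not part of standard CP/M): the BDOS derives track and sector from the logical record number by repeated subtraction of DPB.SPT, and the tweak would pass the record number straight through to the BIOS when SPT is zero.

#include <stdio.h>
#include <stdint.h>

struct chs { uint16_t track, sector; };

/* How the BDOS does it today: derive track and sector from the logical
   record number by repeatedly subtracting DPB.SPT -- one pass per track. */
static struct chs rec_to_track_sector(uint32_t rec, uint16_t spt, uint16_t off)
{
    struct chs r = { off, 0 };          /* OFF = reserved system tracks */
    while (rec >= spt) {
        rec -= spt;
        r.track++;
    }
    r.sector = (uint16_t)rec;
    return r;
}

/* The proposed (hypothetical, non-standard) convention: SPT == 0 means
   "skip the computation and just hand the record number to the BIOS". */
static struct chs rec_to_bios(uint32_t rec, uint16_t spt, uint16_t off)
{
    if (spt == 0) {
        struct chs r = { (uint16_t)(rec >> 8), (uint16_t)(rec & 0xFF) };
        return r;
    }
    return rec_to_track_sector(rec, spt, off);
}

int main(void)
{
    struct chs a = rec_to_track_sector(1000, 26, 2);
    printf("record 1000 on 8\" SSSD: track %u, sector %u\n", a.track, a.sector);
    /* 1000 = 38 * 26 + 12, plus 2 system tracks -> track 40, sector 12 */

    struct chs b = rec_to_bios(1000, 0, 0);
    printf("same record passed as LBA: %u/%u\n", b.track, b.sector);
    return 0;
}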
 

When Tandy introduced the 2000, which is not compatible with the IBM PC, I wondered why they would possibly do this. It makes no sense to a post-100%-IBM-compatible mindset, but they didn't have that mindset at the time. I have to presume they thought they could go the direction they wanted and that operating systems and programs would be ported over. Maybe there was an expectation of porting then that didn't exist some time later.
 
I don't recall exactly when the 2000 came out, but in the beginning of the IBM PC era there was a lot of concern, and I believe even outright threats, of litigation from IBM if anyone built a machine "too much like" the PC. Eventually, one or more companies stood up to that and won, but there may have been a fear of IBM's legal team that influenced the differences.
 

Something that's a little lost to history is that there was a window following the IBM PC's introduction in late 1981 when serious market players assumed that the "MS-DOS" market would mirror what came earlier with CP/M, i.e., that each manufacturer would be free to customize/enhance their machines however they pleased, with compatibility limited to a set of documented DOS API and PC BIOS calls. (Which, FWIW, is at least a *considerably* richer programming environment than you get with CP/M; CP/M natively supports basically only paper teletype terminals, while if your MS-DOS clone implements INT 10h you at least get video terminal functions, including rudimentary graphics support.)

It wasn't at all obvious that strict hardware-level compatibility with the IBM machine was going to be a requirement, since up to this point not only had the CP/M market not gone that way, the most popular individual computers on the market were in fact completely proprietary systems like the TRS-80 and Apple II, where disk formats were basically the lowest thing on the list of obstacles preventing easy portability of data and software between them. And as a result there were a ton of not-very-PC-compatible MS-DOS computers that came out between 1982 and late 1983-ish, by which point it had become obvious that IBM's machine was the 800-pound gorilla in the room and programmers weren't interested in either customizing their software for every machine that came out or restricting themselves to the compatible BIOS calls at the cost of limited functionality. Just off the top of my head, here's a list of not-PC-compatible MS-DOS machines that came out in this period:

Tandy 2000
Sanyo MBC550/555
TI Professional PC
Victor 9000/Sirius 1
Heathkit/Zenith Z-100
HP 110 portable and HP 150 Touchscreen
NEC APC series
Mindset PC
ACT/Apricot
Durango Poppy

... and a bunch more that I can't quite dredge up, and no doubt more that I've never heard of. Some, but not all, of these computers could read an IBM PC disk, but even that wasn't a given; the Tandy 2000 and several others used 80-track disk drives that had issues reliably writing 40-track disks, others like the HP and Apricot products used 3.5" disks before they showed up in PC compatibles, the Victor 9000 used a GCR disk format like that of a Commodore computer, and some models of the NEC APC even used 8" floppies.

Anyway, as with CP/M, it's possible to write a "generic" MS-DOS program that will run on any of these systems if you can get it onto the right disk format, but unless the program allows selecting specific video/console drivers, it's going to be limited to relatively simple console interaction... which, again, was "fine" with CP/M, since CP/M was designed around 1970s-vintage dumb terminals, but it became clear that wasn't going to fly in the PC market in 1983 because people were starting to get hooked on a higher standard of interaction. Ironically, a lot of these not-PC machines actually had better graphics and other capabilities than IBM's products, so you could argue they would have been "worth" the effort of customizing software specifically for them, but... that's kind of the curse of "compatibility"; if you're compatible with a least common denominator, that's all you're going to get until you sell enough copies of your particular machine to get special treatment. IBM sold enough PCs to get "special treatment", so anyone churning out a clone compatible enough with IBM's to run software targeting it, rather than "generic" MS-DOS, was going to have a more attractive software library out of the gate than a computer with a ton of bells and whistles that nobody was using yet.

Ultimately what "fixed" this was environments like Windows that offered APIs to do "everything" without targeting the hardware directly, but those kinds of environments need more powerful computers than Z80 or 8088 PCs.
 