
CP/M Recommended disk geometry and directory sizes

peteri
I've just finished disassembling the Cirtech CP/M Plus (V1) implementation for the 128K Apple //e, and it supports any Apple ProDOS hard drive up to 32MB in size.

Sadly, it uses eight 512-byte sectors per track for its geometry. I have a feeling that going to 24 (the 12K of boot sectors would then fill exactly one track) or 32 sectors per track would improve the seek performance, given the way CP/M handles computing blocks -> tracks.

However, if I do decide to alter the way it auto-computes a geometry, I can't help but feel that it's being a bit stingy with directory entries.

The comments I made while disassembling (shown below) and working things out show that most of the time you don't get that many; you're probably going to run out of directory entries before you run out of blocks on a 32MB drive.

Was there any guidance from DRI on this?

Code:
; Size | Blocks | BSze | EXM | AL0 | CKS | DirEntries
; -----|--------|------|-----|-----|-----|-----------
; 800K | $03FF  |  4K  |  3  | $80 | $20 | $80
; 1MB  | $07FF  |  4K  |  1  | $80 | $20 | $80
; 2MB  | $0FFF  |  4K  |  1  | $80 | $20 | $80
; 4MB  | $1FFF  |  4K  |  1  | $C0 | $40 | $100
; 8MB  | $3FFF  |  8K  |  3  | $80 | $40 | $100
; 16MB | $7FFF  |  8K  |  3  | $80 | $40 | $100
; 32MB | $FFFF  |  8K  |  3  | $C0 | $80 | $200
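
As a cross-check on that table, the DirEntries column follows directly from AL0 and the block size: each set bit in AL0/AL1 reserves one block for the directory, and a block holds block_size/32 of the 32-byte entries. A minimal Python sketch (mine, not part of the Cirtech code):

Code:
# Each set bit in AL0/AL1 reserves one block for the directory;
# every 32 bytes of a directory block is one entry.
def dir_entries(al0: int, al1: int, block_size: int) -> int:
    reserved_blocks = bin((al0 << 8) | al1).count("1")
    return reserved_blocks * (block_size // 32)

# Rows from the table above: (AL0, block size) -> DirEntries
assert dir_entries(0x80, 0x00, 4096) == 0x80    # 800K-2MB rows
assert dir_entries(0xC0, 0x00, 4096) == 0x100   # 4MB row
assert dir_entries(0x80, 0x00, 8192) == 0x100   # 8MB/16MB rows
assert dir_entries(0xC0, 0x00, 8192) == 0x200   # 32MB row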

Note: There is a V2 from Cirtech that might be a bit more generous, and it supports sharing the drive with ProDOS. I haven't dug into that code yet, as it fails to detect a Z80 card even with my original Cirtech hardware, so I've been concentrating on getting V1 running with a Softcard under emulation.
 
There are penalties for having a large directory, too. For one, it takes longer to "log in" the drive. If the directory is hashed, then more memory is consumed for the directory hash. If the directory is not hashed and actually grows to full size, it takes a long time to search it on every access.

However, simple math on your 32M example shows that if EVERY file (directory entry) on the disk were the max 64K (4 extents of 16K each), then you would consume all 32M. However, if your average file size were a more realistic 8K, then you would only be able to consume 4M (12.5% of the disk) before running out of entries.
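
In round numbers (a quick Python sketch of that arithmetic; my numbers, not anything from DRI):

Code:
entries  = 0x200        # 512 directory entries on the 32M layout
max_file = 64 * 1024    # 64K per entry: 4 extents of 16K each
avg_file = 8 * 1024     # a more realistic average file size

print(entries * max_file // 2**20)  # 32 (MB) - only if every entry is full
print(entries * avg_file // 2**20)  # 4 (MB) - then the directory is full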

One common practice is to partition the physical disk into smaller CP/M drives, thus allowing for more efficiency. Unless you actually need to create a single file of 32M, it probably doesn't make sense to have a CP/M drive that big.

As far as DRI giving advice, I don't recall seeing anything on that. It really depends on your intended usage. If you are creating lots of smaller files, you have different needs than if you are creating a few very large files (such as for databases). An end user might even want to create a smaller partition designed for lots of small files (an A: drive for COM files, etc.) and also a larger partition designed for a few large data files (a B: drive).
 
Big flat file systems do not translate well to large memory spaces. And you can have as many directory entries as you want to reserve. An 8192-byte block supports 256 directory entries, and each entry can hold up to 4 extents, for 64K in total per entry. You can have up to 16 blocks as directory entries. That's 4096 entries. At an 8K-per-file median (which most files are going to be, with 8K allocations) you get at least 30MB of space consumed as a minimum - and you can have multiple partitions on the same drive by reserving space for prior drives, so you could have multiple directory structures spread throughout the drive as an alternative.

But like early PC drives, 32MB is one of the first limits you hit. The PC survived long enough to get bigger drives, but CP/M did not.

And you could probably go to 16K allocations as an alternative.
 
Quoting: "Big flat file systems do not translate well to large memory spaces. … And you could probably go to 16K allocations as an alternative."
This is not entirely true. Firstly, you can't reserve more directory entries than you can represent in the ALV0 bitmap - at least if you want things to work. Also, you can only get 16 blocks per directory entry if the total number of blocks on the disk is less than 256. In all these cases, you have more than 256 blocks per disk and so you can only get 8 blocks total per directory entry, distributed over 4 extents (EXM=3). As I previously noted, you cannot get "30Mb of space consumed as a minimum". Rather, that (32M) is the MAXIMUM you could use - and only if every directory entry used the full 64K. The MINIMUM space used would be 1 block per directory entry (8K * 512) or 4M (out of the 32M). Although, you can create directory entries with zero blocks, if you really wanted to... so you could fill the directory without using ANY blocks.
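
To make the pointer-size rule concrete, here is a minimal Python sketch (my own illustration, using the standard CP/M rule that one-byte block numbers only work when DSM is below 256):

Code:
# A directory entry has a 16-byte allocation field; once the disk has
# more than 256 blocks, each block number needs 2 bytes, halving the
# number of blocks one entry can hold.
def blocks_per_entry(dsm: int) -> int:
    """DSM is the highest block number on the drive, as in the DPB."""
    return 16 if dsm < 256 else 8

assert blocks_per_entry(0x00FB) == 16  # a small (floppy-sized) disk
assert blocks_per_entry(0x0FFF) == 8   # 32MB of 8K blocks

# So on the 32MB layout: 512 entries * 8 blocks * 8K = 32M maximum,
# and 512 entries * 1 block * 8K = 4M minimum (one block per entry).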
 
Regarding sector size, it depends how much memory you have to spare for blocking and caching. CP/M since about 2.0 provides a hint to the BIOS as to the nature of the sector being requested or written. Thus, large sectors can turn out to be much faster than small ones.

As to directory size, smaller is generally faster. There's nothing to prevent a scheme of dynamic directory sizing, where the BIOS parameters are adjusted to expand the root directory. Since this is a hard disk, there's no real call for standardization. You'll simply have to incorporate a dummy file following the directory that contains the expansion space. This would take some tricky coding in an expansion utility, but would get one out of the position of "you can't get there from here". Also, since this is a hard disk, if you have the memory to spare you can probably get a performance advantage by caching the directory in memory.
 
Quoting: "Regarding sector size, it depends how much memory you have to spare for blocking and caching. … large sectors can turn out to be much faster than small ones."
This is CP/M 3, so the BIOS does not need to do the deblocking. You still need to provide buffers for physical sectors, though.
 
Thanks folks, this matches up with my thoughts. I know if I'd had a 32MB drive back in the day I'd have been annoyed with that number of directory entries as a design choice (and it's not as if a 32MB drive would have been out of reach by '85-'86).

I think I might just bump the number of directory entries by a factor of two, but first I need to see whether Cirtech changed the factors in V2, when they added Apple IIGS support and the ability to carve out a section of a ProDOS disk for CP/M.
 
Quoting: "This is not entirely true. Firstly, you can't reserve more directory entries than you can represent in the ALV0 bitmap … you could fill the directory without using ANY blocks."
The 16 directory bits in the DRM (which mask straight over the allocation tables) are the first limitation. I would be surprised if anyone had a drive with only 16 blocks, so it's very unlikely the allocation vector bitmap will ever be a limitation for the directory size.

But with an extent mask of $0F, it should provide 16K blocks, and hence 16K allocations.

I think you misunderstood me though - I didn't mean 16 allocations in the directory entry, I meant 16 blocks (1 block per bit of DRM) of 16K per allocation FOR directory entries. That's 512 entries per block, or 8192 entries in total over 16 blocks, which is a BIG load of entries. Enough for 128MB of space addressed by file entries, at up to 128KB each.
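
As a quick check on those numbers (a Python sketch of my arithmetic):

Code:
# With 16K blocks, each directory block holds 16K / 32 bytes-per-entry
# = 512 entries, and 16 directory blocks (one per AL0/AL1 bit) would
# give 8192 entries.
ENTRY_SIZE = 32
block = 16 * 1024
print(block // ENTRY_SIZE)       # 512 entries per 16K block
print(16 * block // ENTRY_SIZE)  # 8192 entries over 16 blocks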

So you're going to hit the size limitations on the drives first... Long before you run out of directory entries to fill it...

Though I am not certain you can have 16K allocations, since I haven't checked that in the source, but it's certainly supported by the CP/M model, with 8 x 2-byte block numbers per entry. Do you know if it's actually possible?

I was rather clumsy in my description though, so I see why you assumed I meant something else.
 
I was writing a tool to do some more manipulation of larger disk images and wrote a quick piece of test code for the Cirtech DPB computation routines.
As a result I've found an off-by-one error in the comments I wrote by hand. As it turns out, you get larger numbers of directory entries for the bigger disks, and the system uses 16K block sizes for everything above 8MB. So it's not as terminally stupid as I suspected.
The results of a short test program I wrote show the correct table looks like this:


Code:
Cirtech DPB test (Size is 512 byte blocks)
| Size| SPT | BSH| BLM| EXM| DSM | DRM | AL0| AL1| CKS | OFF | PSH| PHM| ALV |
|-----|-----|----|----|----|-----|-----|----|----|-----|-----|----|----|-----|
| 03FF| 0020| 05 | 1F | 03 | 007B| 007F| 80 | 00 | 0020| 0003| 02 | 03 | 0020|
| 0420| 0020| 05 | 1F | 03 | 0080| 007F| 80 | 00 | 0020| 0003| 02 | 03 | 0021|
| 063F| 0020| 05 | 1F | 03 | 00C3| 007F| 80 | 00 | 0020| 0003| 02 | 03 | 0032|
| 07FF| 0020| 05 | 1F | 03 | 00FB| 007F| 80 | 00 | 0020| 0003| 02 | 03 | 0040|
| 0820| 0020| 05 | 1F | 01 | 0100| 007F| 80 | 00 | 0020| 0003| 02 | 03 | 0041|
| 0FFF| 0020| 05 | 1F | 01 | 01FB| 007F| 80 | 00 | 0020| 0003| 02 | 03 | 0080|
| 1020| 0020| 05 | 1F | 01 | 0200| 00FF| C0 | 00 | 0040| 0003| 02 | 03 | 0081|
| 1FFF| 0020| 05 | 1F | 01 | 03FB| 00FF| C0 | 00 | 0040| 0003| 02 | 03 | 0100|
| 2018| 0020| 06 | 3F | 03 | 01FF| 00FF| 80 | 00 | 0040| 0003| 02 | 03 | 0081|
| 3FFF| 0020| 06 | 3F | 03 | 03FD| 01FF| C0 | 00 | 0080| 0003| 02 | 03 | 0100|
| 4017| 0020| 06 | 3F | 03 | 03FE| 01FF| C0 | 00 | 0080| 0003| 02 | 03 | 0100|
| 4018| 0020| 07 | 7F | 07 | 01FF| 01FF| 80 | 00 | 0080| 0003| 02 | 03 | 0081|
| 7FFF| 0020| 07 | 7F | 07 | 03FE| 01FF| 80 | 00 | 0080| 0003| 02 | 03 | 0100|
| 8019| 0020| 07 | 7F | 07 | 03FF| 03FF| C0 | 00 | 0100| 0003| 02 | 03 | 0101|
| A000| 0020| 07 | 7F | 07 | 04FE| 03FF| C0 | 00 | 0100| 0003| 02 | 03 | 0140|
| FFFF| 0020| 07 | 7F | 07 | 07FE| 03FF| C0 | 00 | 0100| 0003| 02 | 03 | 0200|

Column names are as per the CP/M Plus System Guide. Sizes are at boundary conditions, both with and without the 3-track boot area. I've also added 063FH, as that is the size of an 800K 3.5" disk, and 0A000H for a 20MB drive.
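
For anyone decoding rows by hand, here is a minimal Python helper (my own sketch, separate from the actual test tool):

Code:
# Decode a row of the table above into capacity and directory entries.
def decode(dsm: int, bsh: int, drm: int):
    block_size = 128 << bsh              # BSH: block = 128 * 2^BSH bytes
    capacity = (dsm + 1) * block_size    # DSM is the highest block number
    entries = drm + 1                    # DRM is the highest entry number
    return capacity // 2**20, entries

# The FFFF row: BSH=07 -> 16K blocks, DRM=03FF -> 1024 directory entries,
# and (DSM+1) * 16K comes out just under the 32MB ProDOS limit.
print(decode(0x07FE, 0x07, 0x03FF))      # (31, 1024)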

Now I need to fix some comments up.
 
Quoting: "I was writing a tool to do some more manipulation of larger disk images … Now I need to fix some comments up."

It always bugs me that they call it sectors per track when it's records per track... Curious as to why, in an LBA system, you're going with 32 records/track. Wouldn't it make more sense to bump that up to 128 records/track so your block size matches your track? Then you can fit a block on a single track.

That will reduce some of the operating system's calculation overhead for very large disks. Your biggest disk is heading towards 8000 tracks. Increasing your SPT would reduce that to around 2000 tracks, but more importantly, your tracks are aligned with blocks...
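
A rough Python sketch of that arithmetic (my numbers):

Code:
# A 32MB drive holds 262144 of the 128-byte CP/M records; more records
# per track means fewer tracks, and at 128 records/track one track is
# exactly one 16K block.
records = 32 * 2**20 // 128
for spt in (32, 128):
    print(spt, records // spt, spt * 128 // 1024)
# 32  -> 8192 tracks, 4K per track
# 128 -> 2048 tracks, 16K per track (= one block)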
 
Quoting: "It always bugs me that they call it sectors per track when it's records per track… Wouldn't it make more sense to bump that up to 128 records/track so your block size matches your track?"
Totally agree; this is how the BIOS currently sets things up. It looks to me like bigger track sizes would be a good plan for the future (it's partly why I asked the question).

I do need to have a look at the V2 implementation that Cirtech did for the Apple IIGS, as it can carve out a section of a ProDOS-formatted disk for CP/M, so they may have changed what they did for larger disk sizes while they were adding that feature. Sadly, my guess is that they kept the same code base and just added a track offset.
 
Quoting: "I've just finished disassembling the Cirtech CP/M Plus (V1) implementation … I've been concentrating on getting V1 running with a Softcard under emulation."
I have a Cirtech board here. Didn't realize it was hardware compatible with a Softcard - or did I misread your statement above?

I hope you will make the disassembly available when it's complete.
 
For the Apple //e hardware I have, it piggybacks the board onto the 6502 socket and uses a 7MHz clock from one of the //e chips. I suspect the //c hardware is the same.
It uses a 74LS288 for the Z80->6502 address mapping and swaps a couple of regions of memory using the extra A11 line (the Softcard uses a 4-bit adder) as a form of copy protection; the BIOS checks for this.
Provided the Z80 and 6502 aren't expecting to see contiguous memory in those areas, I suspect the original Softcard CP/M 2.2 would boot, but I can see some of the memory configurations failing in interesting ways. I suppose I should try out some of the boot disks from the various archive sites.

I have patched the BIOS, and since the CP/M Plus in this case requires the 80-column 64K memory banks, it's not an issue. The patched version runs great with a Softcard under emulation and with the original Cirtech hardware I have.

The IIGS version of the Cirtech CP/M disk found on archive.org refuses to boot for me with my //e hardware. The unpatched Cirtech BIOS uses different addresses for swapping between Z80 and 6502 land in the hardware-checking routines, which I couldn't see having any effect on my original hardware, but might do something on other revisions of the hardware.

Source code is on GitHub at https://github.com/peteri/CirtechCPM/ but I've only got it building on Windows at the moment (although I doubt it's going to be hard to use ZXCC instead).

Note: AppleWin doesn't return the disk size in X/Y from a ProDOS status call, so the hard disk won't work without further patching, either by the AppleWin team or by me in the BIOS.
 
Interesting. My Cirtech CP/M 3.0 board sits in a regular expansion slot. Have a feeling the piggy-back version was intended for a //c.
 