
8 bit IDE (XTA) Replacement Project

Another XTA card I found: the VTI-XTB from V&V Systems. Someone already dumped the ROM, which was copied from MiniScribe:

https://www.vcfed.org/forum/forum/ge...995#post222995

Awesome! Will be another good one to test with. I looked over the board pics and there is just enough logic to decode the ROM and XTA addresses and buffer the ROM. There is one gate on one chip whose traces I can't see well enough to tell what it is doing. Probably not super important to dig into at the moment. The WDXT-150 is the one I am more interested in digging into, as it has quite a bit more logic than seems minimally necessary. Those cards are not super uncommon, so I am sure I will find one soon.

I will extract the drive parameter tables out of the BIOS at some point for comparison. I should do the same for a Tandy BIOS. The Tandy might be a little harder - much more code in those bigger ROMs to dig through.
 
Updated drive tables with data from that MiniScribe BIOS and the Tandy 1000 TL/2 system BIOS. On most of these BIOSes I cannot find the code that indexes into the table, so I think there must be another method DOS uses to index it.

On both the Tandy BIOS and the MiniScribe BIOS I came across new sectors-per-track values. Previously they have always been 17, or 0 (which I assume means 17). I am not sure the non-17 values are actually used by DOS. I recall my WD drive in the TL/2 reporting as 20-something MB, with fdisk showing no unallocated space, which would imply DOS always uses 17. For these BIOSes, I have listed capacities both at 17 sectors per track and at whatever the BIOS table says.

This drive parameter situation is a mess. No surprise that the size configuration on the drives is confusing.

I think what I will do for the replacement drive is have the default size be the max of all 3 important parameters (3D4 cylinders, 6 heads, 1B sectors per track), then report as drive type 2, which, through some luck, is the largest possible size for every BIOS found so far. This will require a 128MB SD card, as it will need about 80 megs with the known max values. On the vintage PC it will "just work", assuming the drive is initialized with DOS. However, the SD card won't be readable in a modern PC, as the used sectors won't be contiguous.
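For reference, the arithmetic behind "about 80 megs", using the max parameters above and assuming 512-byte sectors:

```python
# Capacity implied by the max-of-all-BIOSes geometry (values from this post).
cylinders = 0x3D4   # 980
heads     = 6
spt       = 0x1B    # 27 sectors per track
bytes_total = cylinders * heads * spt * 512
megs = bytes_total / 2**20
print(bytes_total, round(megs, 1))   # just under 80 MiB, so a 128MB card fits
```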

I think I will then add size jumpers to select different known BIOSes for folks who want to try to get the SD card usable in both the vintage and modern PC.

Not having a clean default is a bit of a bummer. I was planning to add a button to the drive which will initialize the SD card with FreeDOS and perhaps some other useful utilities. This can't work right unless the drive knows the parameters the BIOS will be reporting.
Drive parameter tables. Cylinders, heads, and SpT are in hex; sizes in bytes. For the two BIOSes with non-17 SpT entries, the first size uses 17 SpT and the second uses the table's SpT.

ST05X (17 SpT):
  Type 0: 3D4 / 5 = 42,649,600
  Type 1: 267 / 6 = 32,117,760
  Type 2: 3D4 / 5 = 42,649,600
  Type 3: 267 / 4 = 21,411,840

WD-default (17 SpT):
  Type 0: 264 / 4 = 21,307,392
  Type 1: 267 / 6 = 32,117,760
  Type 2: 3D1 / 5 = 42,519,040
  Type 3: 267 / 4 = 21,411,840

WD-alt (17 SpT):
  Type 0: 267 / 4 = 21,411,840
  Type 1: 30E / 3 = 20,419,584
  Type 2: 30E / 4 = 27,226,112
  Type 3: 30E / 2 = 13,613,056

Old Zenith BIOS (17 SpT):
  Type 0: 264 / 4 = 21,307,392
  Type 1: 39D / 4 = 32,204,800
  Type 2: 264 / 4 = 21,307,392
  Type 3: 264 / 4 = 21,307,392

VTI / MiniScribe BIOS (Cyl / H / SpT = size at 17 SpT / size at table SpT):
  Type 0: 267 / 4 / 11 = 21,411,840 / 21,411,840
  Type 1: 325 / 2 / 1A = 14,013,440 / 21,432,320
  Type 2: 325 / 4 / 1A = 28,026,880 / 42,864,640
  Type 3: 325 / 4 / 11 = 28,026,880 / 28,026,880

Tandy 1000 TL/2 (Cyl / H / SpT = size at 17 SpT / size at table SpT):
  Type 0: 325 / 4 / 1A = 28,026,880 / 42,864,640
  Type 1: 325 / 2 / 1A = 14,013,440 / 21,432,320
  Type 2: 30E / 4 / 1B = 27,226,112 / 43,241,472
  Type 3: 30E / 2 / 1B = 13,613,056 / 21,620,736

One additional row (its BIOS/type placement is unclear in the original layout): 134 / 4 / 11 = 10,723,328 / 10,723,328
ST05X Type 2: ST351/AX when configured for ST05X per Tandy faxback instructions
ST05X Type 3: ST325X
 
Not having a clean default is a bit of a bummer. I was planning to add a button to the drive which will initialize the SD card with FreeDOS and perhaps some other useful utilities. This can't work right unless the drive knows the parameters the BIOS will be reporting.

I have been thinking and there are a number of ways to tackle this drive parameters problem.

One way would be to do an OnTrack style master boot record injection of the modified drive parameters. This was suggested earlier. The neat thing I have just realized is that the drive can have this functionality built in - it can do the injection no matter the state of the SD card. I think even standard fdisk would work. It should be able to support quite large drives just as OnTrack does. The SD card would also be readable on another system as the sectors would be written contiguously. I guess it will have to use 1K of system RAM to add the extra code. The main downside with this approach is that it requires a driver to access the drive if the vintage PC is booted from any other drive.

I think maybe I will just support two options via a jumper selection. One will be the OnTrack style MBR injection. The other will be an option where it "just works" if the drive is initialised by the vintage PC. In this case the SD card data will not be accessible when used on another PC.

I think instead of having a button to initialise the SD card with a FreeDOS install, I will make a way to boot a floppy IMG file. I now see this is possible using the same MBR injection technique. Folks can make an IMG file in an emulator, then copy it to the drive via a USB cable. Then, when booting with the button on the drive pressed, it will boot from the IMG file. Kind of like a Tandy DOS-in-ROM boot, only with whatever contents are desired. The goal is to give folks a way to set up the machine even if their floppy drive is dead or they have no other way to make a floppy disk. I can include a default IMG file with FreeDOS and utils.
 
One way would be to do an OnTrack style master boot record injection of the modified drive parameters. This was suggested earlier. ....

Well, bummer. There is a snag with the OnTrack style MBR injection: I need to find some RAM somewhere to put the corrected drive parameter tables, and I will probably need some resident code too. There does not seem to be a way to block off memory at the low end before DOS loads. What memory the BIOS uses is well defined, and the OS will assume everything after the BIOS is free memory. That leaves two options:
  1. Shrink the total memory the BIOS reports by 1K, essentially taking from the end of memory. This would be fine on most systems, but not with Tandy graphics, which use the end of memory as video memory. I would have to carve off a large amount (64K perhaps) to be sure that no Tandy graphics mode ever overwrote my code/data. Tandy machines are one of the main targets for this drive replacement.

    Of note, OnTrack does allocate memory this way - 24K it seems, which is rather a lot. I think OnTrack is more intended for Windows 95 and later, where taking a chunk of base RAM is not a big deal; OnTrack were probably not at all concerned with saving Tandy 1000 base memory. OnTrack talks directly to the ATA interface, and that is not really possible on any machine with Tandy graphics. While modern XTIDE can run ATA drives on these machines, the XTIDE BIOS runs different code than what is used on a normal 16-bit ATA interface / device. I am pretty sure OnTrack would fail to work.

    Eating 64K of system memory would probably be OK for my proposed ROM-boot-style-boot of a floppy image. This boot of a floppy image would just be intended as a means to set up the drive without needing a working floppy drive or having to somehow get files onto a 720K floppy disk.
  2. Find some unused/rarely-used memory in the BIOS RAM area. This is what the XTIDE BIOS does by default in the XT builds. I don't think there is enough available for my needs, though; it looks like there are around 256 bytes to be had. Also, it would be pretty lame if my drive replacement could not coexist with the XTIDE BIOS.
I think I will go back to my earlier idea of 20-40MB of drive space on a 128MB SD card. The exact amount of space will depend on specific BIOS drive parameter tables. The SD card will not be readable on another PC. This is not worse than the experience of an original hard drive, although it would have been nice to be able to move the SD card back and forth between PCs. At least the oversized SD card lets us avoid the confusing jumper settings needed on the original drives.

I will also go back to having a second SD card socket. This extra card can be made accessible through a block device driver to provide secondary drive partitions. This SD card would be readable in other PCs; in fact it could probably be partitioned / formatted in a modern PC.
 
After nearly missing the flaming tip of an exploding tantalum capacitor that shot across the room like a firework the first time I powered it on, I got the VTI/MiniScribe card connected. One odd thing is that it doesn't spin up the drive until the ROM initializes. It recognized my Seagate ST-325X drive as a MiniScribe 8425XT, but aborted with an error when attempting to format it. It didn't recognize my WD 93208X drive at all, and didn't even spin it up.

There are a few MiniScribe IDE-XT drives on eBay, but with only vague descriptions and no photos; maybe I'll see if any of them will take a lowball offer.
 
I think I will go back to my earlier idea of 20-40MB of drive space on a 128MB SD card. The exact amount of space will depend on specific BIOS drive parameter tables. The SD card will not be readable on another PC.
Didn't MFM controllers store the drive geometry on the drive itself? If you add a small PC program to extract the SD card content as an image file (and restore a raw image to the card), it would be sufficiently usable. Sure, SD cards won't be transferable between systems of different geometries, but they would be usable on a modern PC.
 
Didn't MFM controllers store the drive geometry on the drive itself?

Some did (they'd write a little data blob in a reserved area, like the first sector of track 0, and then lie to the PC about the size of the drive, making it smaller by one track, say, so the area wouldn't get overwritten; that let the controller reinitialize itself without relying on NVRAM or jumpers pointing to a limited selection of options), but far from all of them.

I would probably say it would make more sense to have this drive replacement have an internal table of as many drive geometries as you can possibly find and allow the user to select which one it emulates by matching the model number to their current (dead) drive than doing this scattered "sparse sectors" thing. You could do this by providing a simple program that would write configuration information for the MCU to the SD card (or whatever) from a desktop or laptop PC.

So far as preconfiguring goes, given the hulking power of the MCU you're planning to use and the fact you're already having to deal with geometry translation (vs. what XTIDE does, which is just use the native LBA geometry of the attached IDE/CF device) it might be worth considering doing disk image files instead of using the storage media "raw". There are utilities (or built-in OS facilities, depending on what you're using) for mounting disk images and raw images can also be attached to emulators, so doing this could eliminate the need for bizarre workarounds for bootstrapping on the machine you're using it on, and you could also provide a facility for switching between disk images. This would mean that even if you *were* limited to 40MB or whatever it'd be simple enough for the user to switch between a "Games" image, an "Apps" image, whatever.

(Doing disk images on a FAT32 or whatever formatted SD card or USB stick would also simplify that "configure me to look like a ..." step; the program to do that could literally live on the storage media, along with the config file it generates.)
 
... one more note about abstracting the disk as suggested above: if you really want to also support volumes larger than the system BIOS supports, then going with the disk image idea this may become trivial, as long as it's not a hard requirement that you be able to boot from them. (IE, they'd have to be drive D or whatever. You'd still have a bootable C.)

A simple block device driver for MS-DOS can be *extremely* tiny; you mentioned Tandy 1000s - well, the driver that a Tandy 1000 HX loads (as a BIOS extension) to access its ROM drive fits in 512 bytes. You might object that this is read-only, but also going with oddball Tandy-isms, there was a rarer version of the Tandy Plus Bus memory expansion card that could hold 512K of RAM instead of 384K. The extra 128K, if enabled, was mapped into the UMB space, and Tandy provided a driver to use it as a RAM disk. (I've run this driver on my homebrew expansion cards for these systems.) It also is well under 1K resident, and it of course is read-write. Just spitballing here, but I'm thinking if you set up your drive substitute so it could use "illegal opcodes" sent to the command register as a hidden API, you could, in addition to emulating the 40MB or whatever "real" XTA drive, enable it to support one or more "soft" drives which could be of arbitrary size (up to the max that FAT16 supports with DOS 5+, presumably) just by loading a tiny driver in config.sys.
 
After nearly missing the flaming tip of an exploding tantalum capacitor that shot across the room like a firework the first time I powered it on, I got the VTI/MiniScribe card connected. One odd thing is that it doesn't spin up the drive until the ROM initializes. It recognized my Seagate ST-325X drive as a MiniScribe 8425XT, but aborted with an error when attempting to format it. It didn't recognize my WD 93208X drive at all, and didn't even spin it up.

There are a few MiniScribe IDE-XT drives on eBay, but with only vague descriptions and no photos; maybe I'll see if any of them will take a lowball offer.

I guess on the plus side, explosions make it very easy to locate the bad cap. :)

Those are some interesting data points. I don't even know how it would prevent the drive spinning up. Holding the reset line maybe. At some point during testing I will try the VTI/MiniScribe BIOS in my XTA board.
 
I would probably say it would make more sense to have this drive replacement have an internal table of as many drive geometries as you can possibly find and allow the user to select which one it emulates by matching the model number to their current (dead) drive than doing this scattered "sparse sectors" thing. You could do this by providing a simple program that would write configuration information for the MCU to the SD card (or whatever) from a desktop or laptop PC.

This is surprisingly difficult. I have been digging into this for a while and am still not sure what the actual drive parameters are for the 3 hard drives I have. I *think* I know the correct parameters for the ST325X with its current jumper configuration because it came attached to an ST05X card and I have pulled the parameters from its BIOS. But even that drive has a jumper for configuring it for use on other machines and I don't know what that does.

Things get more complicated with other drives. My WD drive has a "translation" jumper which presumably configures it for a different set of parameter tables. However, neither of the two sets of parameter tables in the WD BIOS I disassembled match what I got from the BIOS of the machine I pulled the drive from (a Tandy 1000 TL/2).

Things get even more messy with the ST351 A/X. Even ignoring the AT mode settings, it has a lot of options. I have not been able to make much headway into characterizing exactly what they do. The only reason I was able to configure it to work with my ST05X card was because I found a Tandy fax-back doc saying to use a specific jumper arrangement I had not seen elsewhere. If I use the settings from stason.org or other Tandy docs, the drive does not work correctly with the ST05X. On one setting I tried, the drive was not detected at all. On another, the drive started hammering the heads continuously. Just for fun, the ST351 A/X comes in two varieties, one with a 12 pin jumper block, and another with an 8 pin jumper block.

Even if I can figure all this out, it seems like a high burden to put on anyone trying to configure the device. There is also no guarantee that the vintage PC is using optimal geometry. I suspect that my TL/2 is wasting a head (using 4 instead of 5), and so the image that computer wrote would not be readable on another computer even if I matched the WD geometry perfectly.

I think a good start is a default that works for everything but the SD card can't be used in other machines. A jumper config for a few known BIOSes would be doable. I am still concerned that the landscape is so confusing that most folks will not be able to get an SD card that works in their modern PC. I think the experience might be better if we just say up front that nope, the primary SD card won't work in another computer without reinitializing.

I was thinking I might do a more complex translation where the first part of the SD card is configured for 5 heads / 17 sectors / 1024 tracks, with anything outside that range located later on the SD card. This will match some common 40MB drive parameters, so in some cases the SD card will be readable on another PC. Unfortunately, I think it mostly just covers the Seagate and WD hard card BIOSes, which are the weakest use case for this project. I have considered something that runs on the vintage PC to pull the parameters from the actual parameter table being used and store them to an unused sector towards the end of the SD card. The tricky part is that it has to happen before fdisk runs. Maybe there could be an optional tool run before fdisk to change the base parameters for the translation I just described. I think I can implement it in a way that, even if something goes wrong with the parameters, the SD card will still work fine in the vintage PC that fdisk-ed it.
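A sketch of that two-zone mapping (the constants and names here are just illustrative, not final firmware):

```python
# Hypothetical two-zone CHS -> SD-sector mapping. Accesses inside the "base"
# geometry land at the same LBA a modern PC would compute, so that region of
# the card stays readable elsewhere; anything outside it is relocated past
# the base region.
BASE_CYLS, BASE_HEADS, BASE_SPT = 1024, 5, 17   # common 40MB geometry
MAX_HEADS, MAX_SPT = 6, 27                      # worst case seen so far

BASE_SECTORS = BASE_CYLS * BASE_HEADS * BASE_SPT   # 87,040 sectors

def sd_sector(cyl, head, sec):                  # sec is 1-based, per CHS
    if head < BASE_HEADS and sec <= BASE_SPT:
        # contiguous zone: standard CHS -> LBA
        return (cyl * BASE_HEADS + head) * BASE_SPT + (sec - 1)
    # overflow zone: index by the full worst-case geometry, placed after
    # the base zone so it can never collide with it
    idx = (cyl * MAX_HEADS + head) * MAX_SPT + (sec - 1)
    return BASE_SECTORS + idx
```

This naive indexing reserves room for the entire worst-case geometry after the base zone, which is exactly why the card has to be oversized relative to the reported drive.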

It is all configurable in software, of course, and could be changed later. First pass will be one fixed CHS translation method to the primary SD card. Next on my priority list is probably boot-floppy-image-via-MBR-injection, because that sounds super fun. Then device driver support for the second SD card as a hard drive. Somewhere along the way I am going to lose interest - with any luck, not before there is something working.

The main outcome of these thoughts is that I am putting the second SD card connector back and I should probably add a few more dip switches. :) When I thought the OnTrack MBR injection would work well, I removed the second SD connector from the drive and I was looking at cutting back on the dip switches to save an IC.

So far as preconfiguring goes, given the hulking power of the MCU you're planning to use and the fact you're already having to deal with geometry translation (vs. what XTIDE does, which is just use the native LBA geometry of the attached IDE/CF device) it might be worth considering doing disk image files instead of using the storage media "raw". There are utilities (or built-in OS facilities, depending on what you're using) for mounting disk images and raw images can also be attached to emulators, so doing this could eliminate the need for bizarre workarounds for bootstrapping on the machine you're using it on, and you could also provide a facility for switching between disk images. This would mean that even if you *were* limited to 40MB or whatever it'd be simple enough for the user to switch between a "Games" image, an "Apps" image, whatever.

(Doing disk images on a FAT32 or whatever formatted SD card or USB stick would also simplify that "configure me to look like a ..." step; the program to do that could literally live on the storage media, along with the config file it generates.)

I did originally plan to use .IMG files. I do love the ability to make an image in PCem and very much wish it were easier to move them to a vintage PC. If I could make the process robust (I think that means overriding the drive parameter tables), I would be quite up for that. Otherwise it gets pretty fragile and not much fun. Even using XTIDE it can be challenging to share drive IMG files - and XTIDE has full control over the parameter tables. Something that could be added later if I/someone is motivated and there is demand.

I do think I could potentially support HD IMG files on the second SD card via a driver. It occurs to me now that I could also boot a floppy image located there. Maybe even cycle floppy images so a clean DOS install is possible.
 
Another strategy if you really want to just assume the worst about the know-ability of drive parameters would be to analyze the commands you receive during a format operation and derive your LBA-to-CHS mapping on the fly. Or even simpler, maybe? Have the size and geometry be undefined until an MBR and partition table is written. With classical DOS won’t the start and endpoints of the partitions be written in CHS format? Once you have an MBR written to the disk it seems to me like you should be able to derive the correct CHS to LBA translation from it.
 
... one more note about abstracting the disk as suggested above: if you really want to also support volumes larger than the system BIOS supports, then going with the disk image idea this may become trivial, as long as it's not a hard requirement that you be able to boot from them. (IE, they'd have to be drive D or whatever. You'd still have a bootable C.)

A simple block device driver for MS-DOS can be *extremely* tiny; you mentioned Tandy 1000s - well, the driver that a Tandy 1000 HX loads (as a BIOS extension) to access its ROM drive fits in 512 bytes. You might object that this is read-only, but also going with oddball Tandy-isms, there was a rarer version of the Tandy Plus Bus memory expansion card that could hold 512K of RAM instead of 384K. The extra 128K, if enabled, was mapped into the UMB space, and Tandy provided a driver to use it as a RAM disk. (I've run this driver on my homebrew expansion cards for these systems.) It also is well under 1K resident, and it of course is read-write. Just spitballing here, but I'm thinking if you set up your drive substitute so it could use "illegal opcodes" sent to the command register as a hidden API, you could, in addition to emulating the 40MB or whatever "real" XTA drive, enable it to support one or more "soft" drives which could be of arbitrary size (up to the max that FAT16 supports with DOS 5+, presumably) just by loading a tiny driver in config.sys.

I did some reading a couple of days ago on block drivers. I do think I can make it fairly small - maybe not as low as 512 bytes resident, but you have given me a target. :) There needs to be enough code to directly communicate with the drive. Probably not very large, actually, if I skip DMA/IRQ and just use polled IO. I do enjoy a good REPNZ. There also needs to be space for the BPBs for each partition (described here: https://jdebp.uk/FGA/bios-parameter-block.html). The FAT16 partitions need about 64 bytes per partition. What I am thinking is to have the drive parse out the partitions so the block driver just has to read an array of BPBs from the drive. With any luck the fatfs library already has robust partition parsing that I can leverage. The range of possible MBRs and BPBs seems to be quite extensive. Even if I have to write the partition parsing code myself, this kind of stuff is so much easier in 32-bit. I wrote something in Turbo C++ 3.0 for DOS recently and was constantly dealing with challenges related to 16-bit ints/sizes.
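For reference, pulling the geometry-relevant fields out of a boot sector is only a few lines (offsets per the DOS 3.x+ BPB layout linked above; this is a desktop-side sketch, not driver code):

```python
import struct

def parse_bpb(boot_sector: bytes):
    """Pull the geometry-relevant fields from a DOS 3.x+ boot sector.
    The BPB proper starts at byte offset 11; sectors-per-track and head
    count sit at offsets 24 and 26 (little-endian 16-bit words)."""
    bytes_per_sector, = struct.unpack_from("<H", boot_sector, 11)
    spt,              = struct.unpack_from("<H", boot_sector, 24)
    heads,            = struct.unpack_from("<H", boot_sector, 26)
    return bytes_per_sector, spt, heads
```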

I would prefer to just throw the entire secondary SD card at DOS and let DOS figure it out. I just don't know if there is a way to tell DOS there is a new hard drive in the system by the time a device driver runs. I am thinking not. Maybe it would work well enough to at least let fdisk initialize the drive.

Thanks for the info on the HX ROM drive. I was curious how that works. For what I want to do with floppy images, I don't know what OS is booting so I don't think I can do anything too specific to hook in a custom driver. That is really why I plan to use a floppy image rather than a hard drive or block device image. My plan is something like:
  • If the "boot floppy image" button is pressed, rather than load sector 0 from the SD card, return a custom boot loader.
  • The custom boot loader detects the drive, loads some disk IO code high in base memory, reduces the BIOS memory size so the disk IO code does not get overwritten, then tells the BIOS to execute its bootstrap code again.
  • The disk IO code would hook int 13, overriding drive A: access to instead read from the floppy image.
I think this will work. It will certainly be a lot of fun to try. :)
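The core of that int 13 hook is just a CHS-to-file-offset translation. Assuming a 720K floppy geometry, it is tiny:

```python
# Core translation an INT 13h hook for drive A: would do: map the BIOS's
# CHS request onto a byte offset in the floppy .IMG file. 720K geometry
# (80 cylinders, 2 heads, 9 sectors/track, 512-byte sectors) is assumed.
HEADS, SPT, SECTOR = 2, 9, 512

def img_offset(cyl, head, sec):     # sec is 1-based, per INT 13h convention
    lba = (cyl * HEADS + head) * SPT + (sec - 1)
    return lba * SECTOR
```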

What you describe there with the additional opcodes is exactly what I had in mind for accessing images or the secondary SD card. The only slight concern is that if I use an undocumented opcode, I might accidentally hit an undocumented operation on a real hard drive. My "identify yourself" command might be another drive's "factory initialize". I know, running an XT MFM hard drive or second XTA drive next to a solid state replacement is a little out there, but I think it should work/be safe. What I plan to do is issue a read from a definitely-out-of-range sector and see if the drive returns data containing a fingerprint. Hmm, I wonder if a high cylinder number could smash heads. So maybe cylinder zero with an out-of-range head and sector would be safer.
 
Another strategy if you really want to just assume the worst about the know-ability of drive parameters would be to analyze the commands you receive during a format operation and derive your LBA-to-CHS mapping on the fly. Or even simpler, maybe? Have the size and geometry be undefined until an MBR and partition table is written. With classical DOS won’t the start and endpoints of the partitions be written in CHS format? Once you have an MBR written to the disk it seems to me like you should be able to derive the correct CHS to LBA translation from it.

Thanks for all the feedback / ideas. I am enjoying the conversation.

I did look into the above a little. It does not appear that the MBR contains CHS info. The per-partition BPB from DOS 3.0 and later does have heads and sectors fields, which are the important ones. I think those fields are optional and I am not sure whether they are always set. I don't understand why this info is not in the MBR instead - it seems far more useful there. If the CHS is not known from the get-go, how are BPBs loaded in the first place? Maybe the first BPB is always within the first track/head and that is good enough. It is interesting - this information would allow the hard drive to be read even if moved to another PC with different parameter tables. Maybe that is what the information is for. It still seems like it belongs in the first sector (the MBR).

There is a bit of a tricky timing problem with trying to process this automatically. It should be robust to diagnose the partitions and BPBs after partitioning is done and the system restarted, but by that time a bunch of sectors have already been written. At that point it would probably be safest to have an explicit tool or other way to trigger the drive to reorder the sectors. I was going to suggest this option in an earlier reply, but I don't want to write it. :) A tool to set CHS before running fdisk - I might be up for implementing that.
 
The docs here:

https://wiki.osdev.org/Partition_Table

say that the information for the four primary partitions supported by MBR-format boot records is stored in a 64-byte data structure in the first sector of the hard disk, and that three bytes of each partition record are the CHS info for the ending cylinder/head/sector. It also says that for drives smaller than 8GB this should match up with the LBA representation. So if this is accurate, simply creating a partition of the maximum size supported by the BIOS for the drive should give you a partition table entry that contains the maximum cylinder, head, and sector count. This is literally only one sector's worth of data you need to parse to decide on the optimal geometry to accommodate the actual format, which will not follow immediately. (You need to reboot after an FDISK.)

Edit: of course I can't say I've ever written my own OS for x86 hardware from scratch, not even close, so maybe these docs are way off, but... I haven't seen anything that says the CHS values are "optional".

Another point about targeting disk images instead of trying to directly map to partitions on the underlying storage is that your strategy of a "sparse" mapping could certainly be done in a "temporary" image file used for the initial FDISK and formatting, and then based on what the controller logs as the maximum observed CHS values you could A: compress the image, and B: update your config so future disk image initializations can use compact translation in the first place. The compression step could happen either in situ or by popping the card into a reader and running a utility, whatever is easier.
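The packed end-CHS bytes in a partition entry decode like this (a quick sketch following the layout described at the osdev link above; the function names are mine):

```python
def decode_chs(b):
    """Decode the 3 packed CHS bytes from an MBR partition entry:
    head byte, then sector in bits 0-5 with cylinder bits 8-9 in
    bits 6-7, then the cylinder low byte."""
    head = b[0]
    sector = b[1] & 0x3F
    cyl = ((b[1] & 0xC0) << 2) | b[2]
    return cyl, head, sector

def infer_geometry(entry):
    """Guess heads/SpT from a partition's end CHS (entry = 16-byte
    partition record, end CHS at offsets 5..7). Assumes the partition
    ends on a cylinder boundary, as DOS FDISK does."""
    cyl, head, sector = decode_chs(entry[5:8])
    return head + 1, sector     # (head count, sectors per track)
```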
 
The docs here:

https://wiki.osdev.org/Partition_Table

say that the information for the four primary partitions supported by MBR-format boot records is stored in a 64-byte data structure in the first sector of the hard disk, and that three bytes of each partition record are the CHS info for the ending cylinder/head/sector. It also says that for drives smaller than 8GB this should match up with the LBA representation. So if this is accurate, simply creating a partition of the maximum size supported by the BIOS for the drive should give you a partition table entry that contains the maximum cylinder, head, and sector count. This is literally only one sector's worth of data you need to parse to decide on the optimal geometry to accommodate the actual format, which will not follow immediately. (You need to reboot after an FDISK.)

OK, I did some experimentation in PCem, and yeah, I think this will work most of the time. Cool. It does appear that the MBR is the first sector written during a fresh install (I tested using the DOS 6.22 install disks). DOS did write the max head and sector count into the end sector. For some reason DOS dropped the last cylinder, but that is not important - I am just going to hardcode 1024 cylinders. I don't know if this approach will be correct in all cases; 32 meg partitions with DOS 3.3 might not work, for example. I could use the 32-bit LBA values to confirm the translation is working correctly. Maybe that is not even necessary - if I get the CHS values wrong, I am still no worse off than before: any writes to heads/sectors out of range will still be translated to later locations on the SD card. Probably the better option, in the case where the 32-bit values do not match the CHS values, is to use them to infer a CHS.
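The confirmation step amounts to checking that the packed CHS and the 32-bit LBA field agree under the guessed geometry (illustrative names; the CHS values here are assumed already decoded from the packed bytes):

```python
def chs_to_lba(cyl, head, sec, heads, spt):
    # Standard CHS -> LBA translation; sec is 1-based.
    return (cyl * heads + head) * spt + (sec - 1)

def geometry_consistent(start_chs, start_lba, heads, spt):
    """True if a partition entry's start CHS agrees with its 32-bit
    start-LBA field under the guessed heads/SpT."""
    c, h, s = start_chs
    return chs_to_lba(c, h, s, heads, spt) == start_lba
```

A partition starting at cylinder 1, head 0, sector 1 with start LBA 68 is consistent with 4 heads / 17 SpT but not with 5 heads / 17 SpT, so a wrong head-count guess shows up immediately.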

I was thrown by some of this for a little while. I had been disassembling what I thought was a DOS MBR, and the format did not match the published MBR formats. I was using the first sector I extracted off one of my XTIDE CF cards; I think I got the first sector of the first partition, not the first sector of the drive. Starting over with PCem, it all makes more sense now.

It turns out there is no need to disassemble any MBR's. Someone has done a thorough job of analyzing them: https://thestarman.pcministry.com/asm/mbr/index.html
 
Does anyone have a Tandy 1100 HD they could dump the BIOS from? I would like to learn a bit about the little Connor 2.5" XTA drives. I tried the Tandy 1100 FD and Panasonic BP150 BIOSes but no luck. I did find some hard drive parameter tables, but they are quite small. I think they must be for ROM drives.
 
... I don't know if this approach will be correct in all cases - 32 meg partitions with DOS 3.3 might not work for example. ...

So I tried it on DOS 3.3 with a 40 meg type 17 drive (5 heads, 17 sectors per track) and it did indeed end the primary 32 meg partition at the end of a cylinder (head & sector = max). Nice!
 
So I tried it on DOS 3.3 with a 40 meg type 17 drive (5 heads, 17 sectors per track) and it did indeed end the primary 32 meg partition at the end of a cylinder (head & sector = max). Nice!

I vaguely recall from a book I had ages ago about DOS disk layouts/repair/etc. that DOS always used cylinder boundaries for partitions, even in versions where FDISK has you specify the size in "megabytes" (it gets rounded to the closest cylinder). You might want to double-check that, but yeah, that's why I was thinking this strategy should work. And I agree with just making the cylinder count the maximum no matter what because, yeah, the absolute size doesn't matter for the LBA/CHS translation algorithm; you just need the correct heads/sectors count. Unused bytes at the end aren't really going to matter regardless of whether it's an image or raw storage.

An idea that doesn't even depend on FDISK came to me this morning: I use a simple multi-boot manager called bootmgr to switch between several different DOS versions on my Tandy 1000; it's a really tight little piece of code that fits a simple partition selection menu into the teeny amount of code space available in the MBR, in place of the standard code that just finds and loads the boot sector from the active partition. It doesn't look to me like there's anything in the MBR proper that cares about geometry, it just has this rigid structure of having space for a blob of code that's loaded, and it's up to that code to figure out the next step. So...

How about this? You set up the firmware of the device so if it's uninitialized any boot attempt (IE, a read of sector zero) is answered with a synthetic MBR that contains a code payload that when loaded simply looks at the BIOS Fixed Disk Parameter Table to figure out the size of the disk (maybe print this on the screen for future reference) and then makes an INT13h call to read the last logical sector on the drive? (Actually, won't even need the FDPT for this, the INT13h AH=08h call has the info we need in it?) Your drive can then autoconfigure itself based on that bogus sector read attempt because you'll know it'll have the highest value for all three parameters. On the drive end the "configured" flag gets set, the MBR code reboots the PC, and bam, your drive matches whatever the BIOS is set to perfectly.
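Simulating the drive side of that trick (all names here are mine, and real firmware would need more care, e.g. around retries and multi-sector reads):

```python
# Sketch of the self-configure idea: the synthetic MBR's code asks the BIOS
# for the drive limits (INT 13h AH=08h) and then reads the highest
# addressable sector. The drive observes that one probe read and locks in
# its geometry.
class DriveModel:
    def __init__(self):
        self.configured = False
        self.cyls = self.heads = self.spt = None

    def on_read(self, cyl, head, sec):
        # The boot read of the synthetic MBR itself is CHS (0, 0, 1); skip it.
        if self.configured or (cyl, head, sec) == (0, 0, 1):
            return
        # The probe targets the BIOS's maximums: cylinder and head are
        # 0-based, sector is 1-based, so the geometry falls right out.
        self.cyls, self.heads, self.spt = cyl + 1, head + 1, sec
        self.configured = True
```

With a BIOS reporting max cylinder 979, max head 4, max sector 17, the probe read configures the drive as 980 / 5 / 17.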

The source code for bootmgr is available, you could probably use it as a starting point.
 