
Looking for details on 3.5 and 5.25 disk drive raw data rates -

cj7hawk

That is to say the maximum rate of flux changes per second that the disk can support without considering encoding method or sectors, gaps, etc.
Or some documentation somewhere on developing disk formats from scratch.

I'm sure it exists, but I get a lot of bad hits for this one when googling since it comes way too close to other things people look for.

Thanks
David.
 
The practical data rate is defined by the floppy controller. One could push data faster with a controller of one's own. The clever technique of running 3.5" double density floppy drives at 300 kb/s, increasing storage to close to 1 MB (about 20% more), might give an idea of how much extra room there is for a new, faster controller to exploit.

The sector headers and gaps are filled with bytes, so the total number of bytes written won't change with a new format. The gaps may be shrunk and larger sectors used to reduce the number of sector headers and free up more space for data. Note that some of the more creative techniques, like the very large sectors of XDF or track-at-once methods, might require too much memory for many lower-end systems to handle.
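
Rough numbers, just to show where the overhead goes - the gap and preamble sizes below are my ballpark figures for an IBM-style MFM track, not taken from any particular controller datasheet:

# Rough per-track budget for an IBM-style MFM layout on DD media:
# ~6250 raw bytes per track at 250 kbit/s and 300 RPM, ~62 bytes of
# ID/data-field overhead per sector plus Gap2/Gap3, and ~146 bytes of
# track preamble. All figures approximate.
RAW_TRACK_BYTES = 6250
TRACK_PREAMBLE  = 146    # Gap4a + sync + index mark + Gap1 (approximate)
SECTOR_OVERHEAD = 62     # ID field + Gap2 + data-field sync/marks/CRCs (approximate)

def track_budget(sectors, sector_size, gap3):
    used = TRACK_PREAMBLE + sectors * (SECTOR_OVERHEAD + sector_size + gap3)
    return used, sectors * sector_size

for sectors, size, gap3 in [(9, 512, 80), (10, 512, 30), (5, 1024, 30), (1, 4096, 30)]:
    used, data = track_budget(sectors, size, gap3)
    fits = "fits" if used <= RAW_TRACK_BYTES else "too long"
    print(f"{sectors} x {size}: {data} data bytes, {used}/{RAW_TRACK_BYTES} raw ({fits})")

With figures like these, 9 x 512 comes to 4608 data bytes per track, while 10 x 512 or 5 x 1024 with tighter gaps reach 5120 - roughly how the 800K double-density formats got there. Going to one huge sector stops helping once the power-of-two sector size no longer fits the leftover space.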

www.bitsavers.org/magazines/Computer_Design/198002_Encoding-Decoding_Techniques_Double_Floppy_Disc_Capacity.pdf is the article describing the limits of Shugart's double density design. Good for a sense of the timings needed. I know there were other articles that showcased alternate methods and resultant clever formats, all from before everyone chose to accept the standard floppy controllers. A few percent more storage isn't worth the effort of a floppy format no one else can read.
 
Hi @krebizfan - thanks for the link to the article, it looks very interesting - though what I'm looking for is the maximum read/write flux rate (positive to negative) for the disk and drive itself - the controller might set the ultimate rate, but it cannot exceed the capacity of the disk to maintain the data.

Disks are usually measured in bits-per-inch I think, or something like that - and hence I assume the 720K disk is something like 8,000,000 bits spread over 2 heads and 80 tracks, which equates to something like 60,000 reversals/rotation with 5-bit MFM symbols, or 300,000 flux reversals per second... It shouldn't have capacity above this with a non-hemispherical encoding system regardless of the clock technique, because the flux reversals will interfere with each other and data will be lost in time (or so I believe, I could be wrong) - and the data rate should be a lot lower than this once the symbol system is decided on, with around 150 Kbps maximum for FM encoding.

But the maximum data rate for floppy disks is difficult to locate - I'm looking to see just how rapidly I can write flux reversals to a normal 720K disk (and also, if available, the 360K, 1.2M and 1.44M) - so I can build my own symbol encoder/decoder (or rather, for various reasons I'm specifically trying to avoid using an FDC, and maximum capacity isn't the primary issue). But knowing the maximum data rate of all of the common FDDs would be helpful. I'm having trouble locating that information.
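
For my own notes, here's the back-of-envelope version of that calculation using the nominal figures I believe apply to a 720K drive (250 kbit/s MFM at 300 RPM - my assumptions, worth checking against the drive manual):

# Nominal 720K figures (assumed: 250 kbit/s MFM data rate, 300 RPM spindle).
data_rate = 250_000                     # MFM data bits per second
rev_time  = 60 / 300                    # 0.2 s per revolution at 300 RPM
bit_cell  = 1 / data_rate               # 4 microseconds per bit cell

bits_per_track  = data_rate * rev_time          # 50,000 bit cells per track
bytes_per_track = bits_per_track / 8            # 6,250 raw bytes per track
bits_per_disk   = bits_per_track * 80 * 2       # 8,000,000 cells over 80 tracks x 2 sides

# In MFM, flux transitions are never closer than one bit cell (4 us) apart,
# so the peak rate is one reversal per cell: 250,000 reversals/s, or 50,000
# per revolution. FM uses the same 250k flux rate but only carries 125 kbit/s
# of data, which is where "double density" for MFM comes from.
peak_flux_per_second = 1 / bit_cell             # 250,000
print(bits_per_track, bytes_per_track, bits_per_disk, peak_flux_per_second)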

Thanks
David
 
Commonly used bit rates are 250, 300 and 500 kHz.
The information is easily found in the data sheets for floppy disk controller ICs.

Maximum bitrate of the media is determined by the formulation of the coating.
Bitrate of the drive is set by the filters and data separator of the drive's read channel.
You are going to have a very difficult time getting anything over 300 kHz reliably on a 720K drive, especially on the inner tracks.
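
To put those in context, here is the unformatted per-track capacity they imply at the usual spindle speeds (nominal figures; the rate/RPM pairings are the common ones - check the individual drive manuals):

# Unformatted track capacity = data rate x time per revolution.
# The rate/RPM pairings are the usual ones, not quoted from a datasheet.
combos = [
    ("5.25/3.5 DD",              250_000, 300),
    ("DD media in a 1.2M drive", 300_000, 360),
    ("3.5 HD",                   500_000, 300),
    ("5.25 HD",                  500_000, 360),
]
for name, rate, rpm in combos:
    bits = rate * 60 / rpm
    print(f"{name:26s} {rate // 1000} kbit/s @ {rpm} RPM -> {bits / 8:.0f} bytes/track")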
 
Hi @Al Kossow - I've been reading some of your older posts lately and I see you've changed your avatar - It looks nice - :)

It's not the FDCs I'm looking to use - so I'm more interested in the maximum rate of the disk medium itself, rather than the rate at which the controller is set.

Though that information you supplied is still helpful. It gives me a baseline to work from. I assume 250 kHz corresponds to typical DD disks and hence 250K flux reversals/sec. Or is that the bit rate for data recovered from the FDD?
 
Here is the table for the common TEAC FD-55 with the data rates it was designed for.
 

Attachments

  • fd55.png
Hi @Al Kossow - Bingo! That's exactly what I'm looking for - do you have that information for any 3.5" drives also? I never thought to look to the floppy disk drive as the source of information... I found it again quickly when I searched for the manual for the FD-55.

Thanks again -
David
 
Hi @Al Kossow - I found the other disks too - that was perfect. Seems I needed to look for the disk drives, not the disk specifications. I incorrectly assumed the disk specification was met by the drives, but the capabilities are mostly set by the drive itself.
A few quick searches unearthed the rest of the material I wanted - thanks again for the info - you got me onto the right track to find the rest as well.

David
 
Those values in the table are just standard values for given data rates, modulation, and rotational speed. They don't talk about the real capacity of a drive, no holds barred. Anyone who's written a formatter is very familiar with them--they're needed to calculate inter-sector gaps.

Recall that a floppy disk system is very imperfect. The mechanism has to be tolerant of variations in vendor media (i.e. the goop smeared on the cookie) and the drive speed regulation itself. Tossing all that aside, suppose we talk about raw track capacity--one sector per track?

One thing that has to be considered is the fact that the read channel is very much analog in nature. Consider, for example, the iconic MC3470 read amplifier. Note that there's a low-pass filter on the thing to reduce the effects of high-frequency noise. You can adjust that to allow for a somewhat higher data rate, all other factors permitting. We did that with Tandon 100 tpi drives on our system and we could fit 12 sectors of 512 bytes on a DD 5.25" disk, using GCR--the downside is that the setup is a little more finicky when it comes to media.

And there are still surprises. I took that same system, attached a HD 3.5" drive and found that HD drives using new HD media at the somewhat higher data rate didn't work at all. On the other hand, finding a DD floppy that worked was extremely hit or miss (and I have a lot of new media to check that out). Oddly, HD media, even the pretty ratty stuff, worked if I taped over the media sense hole. One of these days, I'll haul out the 'scope and figure out why that works. I suspect that the HD mode employs a bandpass filter with a specific "sweet spot".

One could undoubtedly write several books on the subject.
 
It's mostly digital at the interface, though a lack of specified timing gives access to analog elements. I was wondering about the jitter limitations and whether it might be possible to encode symbols in the timing to get 2 bits instead of 1 per transition. If the jitter were low enough and the variations in speed predictable enough, it might be possible to get 4 or more bits per transition, like they do with WiFi.
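
Something like this sketch is what I have in mind - the spacings (4/5/6/7 microseconds) are made-up values standing in for whatever the jitter measurements actually allow:

# Sketch of encoding 2 bits per flux transition in the *timing* of the
# transition. The spacings (4, 5, 6, 7 microseconds) are example values only;
# real ones would depend on measured jitter and speed variation.
SPACINGS_US = {0b00: 4.0, 0b01: 5.0, 0b10: 6.0, 0b11: 7.0}

def encode(data: bytes):
    """Turn a byte string into a list of transition-to-transition intervals."""
    intervals = []
    for byte in data:
        for shift in (6, 4, 2, 0):              # two bits at a time, MSB first
            intervals.append(SPACINGS_US[(byte >> shift) & 0b11])
    return intervals

def decode(intervals):
    """Recover bytes by snapping each measured interval to the nearest spacing."""
    symbols = []
    for t in intervals:
        symbols.append(min(SPACINGS_US, key=lambda s: abs(SPACINGS_US[s] - t)))
    out = bytearray()
    for i in range(0, len(symbols), 4):
        b = 0
        for sym in symbols[i:i + 4]:
            b = (b << 2) | sym
        out.append(b)
    return bytes(out)

# With 1 us between adjacent spacings, decoding only works if total jitter
# plus speed error stays well under +/-0.5 us per interval.
assert decode(encode(b"test")) == b"test"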

Using a variable-rate recording method could also increase the data rate - but gives inconsistent data lengths per track, as would changing the bit rate for the outer tracks.
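
And a rough illustration of why the track lengths come out uneven - the radii here are just my rough guesses for 3.5" media (track 0 outermost), so treat the numbers as indicative only:

# If linear bit density is held constant, bits per track scale with track
# circumference, i.e. with radius. Radii below are rough guesses for 3.5"
# media (80 tracks at 135 tpi, track 0 outermost), purely for illustration.
TRACK0_RADIUS_MM = 39.5
TRACK_PITCH_MM   = 25.4 / 135          # 135 tracks per inch
INNER_TRACK_BITS = 50_000              # assumed capacity of the innermost track

inner_radius = TRACK0_RADIUS_MM - 79 * TRACK_PITCH_MM
for track in (0, 20, 40, 60, 79):
    radius = TRACK0_RADIUS_MM - track * TRACK_PITCH_MM
    bits = INNER_TRACK_BITS * radius / inner_radius
    print(f"track {track:2d}: r = {radius:5.1f} mm, ~{bits / 8:5.0f} bytes/track")

# The outermost track comes out roughly 60% longer than the innermost one,
# which is why variable-speed or zoned-bit-rate formats end up with different
# sector counts per track.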

I suppose that's entirely possible with soft sectored disks... Might be a bit difficult to interface with the DPB in CP/M though. I suppose you could do it by creating a null file that can't be deleted in the directory extents that "uses" the missing entries as you get closer to the center...

I was thinking of creating a single sector per track, then caching the entire track, since there's not much advantage in having individual sectors if there's blocking and deblocking with anything other than 128-byte CP/M record sizes - might as well load in the entire track if you have the memory, then write the entire track back, as long as there's only a single thread/driver accessing the disk at any one time... It would make it pretty quick and remove the advantage of interleaves. And there would be no need to work out the inter-sector gaps... Just find the start of the track, copy it to memory and then break it down into records in RAM... Write the track back for any change in the data and before any different read operation occurred.
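
Roughly this shape, as a sketch only - read_track/write_track stand in for whatever the real hardware interface ends up being, and the geometry numbers are examples:

# Sketch of a track-level cache for 128-byte CP/M records. read_track() and
# write_track() are placeholders for the real drive interface.
RECORDS_PER_TRACK = 48          # example: 6144-byte track / 128-byte records
RECORD_SIZE = 128

class TrackCache:
    def __init__(self, read_track, write_track):
        self.read_track = read_track
        self.write_track = write_track
        self.track = None            # which track is currently buffered
        self.buffer = bytearray()
        self.dirty = False

    def _load(self, track):
        if self.track != track:
            self.flush()                             # write back before switching
            self.buffer = bytearray(self.read_track(track))
            self.track = track

    def read_record(self, track, record):
        self._load(track)
        off = record * RECORD_SIZE
        return bytes(self.buffer[off:off + RECORD_SIZE])

    def write_record(self, track, record, data):
        self._load(track)
        off = record * RECORD_SIZE
        self.buffer[off:off + RECORD_SIZE] = data
        self.dirty = True                            # written back later, in one go

    def flush(self):
        if self.dirty and self.track is not None:
            self.write_track(self.track, bytes(self.buffer))
            self.dirty = False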

Did they ever do things like have competitions to get more capacity from a disk back then? Sounds like it would be a good challenge for people learning to use disk drives for the first time?
 
As far as interfacing to CP/M, the DPB really only represents a notion of track and sector that the BDOS promises to pass to the BIOS. That is rarely exactly the same as the physical track (cylinder), sector, and side (head) used by the BIOS to access the physical disk. You "only" need to work out the software to translate BDOS track/sector (defined by the BIOS's DPB) into whatever scheme is used on the physical disk.
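
A sketch of what that translation can look like - the geometry here (four 128-byte records per 512-byte sector, 9 sectors, 2 heads, plus a skew table) is made up purely for illustration:

# Hypothetical geometry: BDOS sees 128-byte records; the disk has 512-byte
# physical sectors, 9 per track, 2 heads. Numbers are illustrative only.
RECORDS_PER_SECTOR = 4          # 512 / 128
SECTORS_PER_TRACK  = 9
HEADS              = 2
SKEW = [1, 4, 7, 2, 5, 8, 3, 6, 9]   # logical sector -> physical sector ID (example)

def bdos_to_physical(bdos_track, bdos_record):
    """Map the track/record the BDOS asks for onto cylinder/head/sector/offset."""
    cylinder = bdos_track // HEADS
    head     = bdos_track % HEADS
    logical_sector   = bdos_record // RECORDS_PER_SECTOR
    offset_in_sector = (bdos_record % RECORDS_PER_SECTOR) * 128
    physical_sector  = SKEW[logical_sector]          # apply interleave/skew
    return cylinder, head, physical_sector, offset_in_sector

print(bdos_to_physical(5, 17))   # -> (2, 1, 5, 128) with the numbers above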

One other thing, w.r.t. cramming more data on the disk and the gaps used in the format. In case this wasn't already in the supplemental materials. The gaps between sector headers and sectors allow for "wiggle room" when the controller is writing new data to existing sectors. The controller has to read the sector header, match it against the target, then turn everything around for writing - plus it can't start writing until it gets the first data byte from the CPU, either. This time can be variable, which affects exactly when the sector starts in relation to the sector header - and also where the sector ends in relation to the next sector header. If the gaps are too small, you risk the controller not being able to turn around fast enough, or a variation in sector timing corrupting the following sector header. If you could watch the sector headers and sector data over time for a disk being written over and over, you'd see the sector data "wiggle" in relation to the sector header each time the sector is written.
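
To put some rough numbers on that wiggle (the speed tolerance and byte time are assumed figures, not from any spec):

# How far a rewritten sector can drift relative to its header, assuming a
# +/-1.5% spindle speed tolerance at 250 kbit/s MFM, where one byte takes
# 32 microseconds to write. Both figures are assumptions.
BYTE_TIME_US    = 32.0       # 8 bits x 4 us at 250 kbit/s
SPEED_TOLERANCE = 0.015      # +/-1.5%, assumed

# A 512-byte data field plus its sync/marks/CRC is roughly 530 byte times long.
field_bytes   = 530
drift_one_way = field_bytes * SPEED_TOLERANCE        # ~8 byte positions
worst_case    = 2 * drift_one_way                    # formatted fast, rewritten slow

print(f"drift per rewrite: ~{drift_one_way:.0f} bytes, worst case ~{worst_case:.0f} bytes")
# So Gap3 has to stay comfortably bigger than that worst case, or a rewritten
# sector can run into the next sector's header.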
 
I definitely see the "wiggle" you mentioned when putting a logic analyser on a stringy floppy - and it makes sense that this takes some time, plus the wiggle room at the end of the sector to ensure that the new sector header doesn't get overwritten.

The project I'm working on would probably have tended toward biphase for the format had they completed it back in the 80s, but that's pretty inefficient, so I was thinking of something close to 8-and-2 as an extension of Apple's 6-and-2 system, using 10 bits to represent 256 symbols, with 16 sector headers and loosely defined sectors, plus some error detection to catch bad symbols or a bad LRC with a checksum. Maybe use something software-defined like Apple did, since memory isn't an issue - that would let others redefine the format in software and support different disk drives and different configurations, as well as read other formats.
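
Just to convince myself the 10-bit idea has enough headroom, a quick check - the constraint (top bit set, no more than two consecutive zero bits) is my guess at an Apple-style rule, not a worked-out channel code:

# Count 10-bit codewords that keep the read channel happy under an Apple-style
# rule: top bit set (easy to frame) and never more than two consecutive zero
# bits (so a transition comes along often enough for clock recovery). The rule
# itself is an assumption, not a designed channel code.
def valid(word):
    bits = format(word, "010b")
    return bits[0] == "1" and "000" not in bits

codewords = [w for w in range(1024) if valid(w)]
print(len(codewords))            # 274 -- enough to cover all 256 byte values

encode_table = dict(zip(range(256), codewords))   # byte value -> 10-bit symbol
decode_table = {v: k for k, v in encode_table.items()}

That leaves a handful of spare codewords for address and data marks.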

I still have a lot to read and learn before planning that part, including building a test system in hardware to allow me to test what the emulator shows at real speeds.

But it makes sense to start learning about floppy disk storage and at least make a few calculations so I can plan how the disk controller will work. And I'll probably use the same controller for the FDD.
 
Theoretically, there's no reason that a floppy couldn't be fitted with a PRML read channel or that (2,7) RLL encoding couldn't be used. The fundamental issue is that the medium quality is extremely variable across manufacturers. On the other hand, there's no theoretical reason, given a narrower track, that 192 tpi wouldn't be possible. Certainly Drivetec showed that higher track densities using preformatted disks with embedded servos were a practical solution. And then there are the 3.5" UHD floppies that used optical positioning to get 144MB on a 3.5" floppy without sacrificing 720K and 1440K compatibility. Spinning the disk faster improves the S/N ratio, but you have to work with existing technology (e.g. legacy floppy controllers). And yes, you can increase the sector size to a full track--with the problem that any hiccup in the ISV can render an entire track unreadable. Of course, using more advanced read channel technology can overcome that. ISV is one of the reasons that all my 8" drives use a synchronous AC spindle motor with a heavy hub pulley--all that mass tends to cover up jitter in the drive RPM.

Bottom line, however, is that we're working with a flawed vessel--the medium itself. Further, drives are examples of extreme cost-reduction measures. It's a wonder that they work at all.
 