
MORE on DMA controller

...which does go to my point. How do you get a DMA transfer started without a DRQ/DACK sequence? I don't believe that the S100 bus has any provision for those signals. Otherwise, how does the 8237 know that the I/O device is ready to send or receive data? Take, as an example, the 8272 floppy controller. Since it transfers data only when the controller has seen the right address mark on the disk, transfer can occur many hundreds of milliseconds from the programming of the DMAC.

...which is why I wanted to see how you had your 8237 wired up.
 
hmm yes DRQ/DACK are absolutely fundamental to the 8237's operation, surely?
 
hmm yes DRQ/DACK are absolutely fundamental to the 8237's operation, surely?

Yes, absolutely! I've seen how an 8257 can work on Multibus I, but it's not easy. On S100, using no private bus, that's a big ???, unless it's possible to re-purpose some of the lines.

You've got to remember that when S100 was first proposed, no one knew exactly how multichannel DMA might work. I'm sure that the 8 IRQ lines were based on the 8 RST vectors in low memory--and that's how early 808x boards worked. You used a priority encoder, a couple of latches and jammed the RST instruction onto the data bus when the 8080 acknowledged the interrupt cycle.
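The RST-jamming trick can be shown concretely. This is my own sketch of the arithmetic, not a circuit from the thread: RST n encodes as 11 nnn 111, so a priority encoder's 3-bit output slots straight into the opcode, and the CPU then calls the fixed vector at n * 8 in low memory.

```python
# Sketch (mine, not from the thread): how the jammed RST opcode relates to
# the 8 RST vectors in low memory. RST n encodes as binary 11 nnn 111; on
# executing it, the 8080 pushes PC and jumps to address n * 8.
def rst_opcode(n: int) -> int:
    """Opcode a priority encoder + latches would jam onto the data bus
    during the interrupt acknowledge cycle for IRQ line n."""
    assert 0 <= n <= 7
    return 0xC7 | (n << 3)

def rst_vector(n: int) -> int:
    """Low-memory address the 8080 jumps to after executing RST n."""
    return n * 8

for irq in range(8):
    print(f"IRQ{irq}: opcode {rst_opcode(irq):#04x} -> vector {rst_vector(irq):#04x}")
```

So the "couple of latches" only need to merge three encoder bits into a fixed 0xC7 pattern, which is why this scheme was so cheap to build.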

Now that doesn't mean that DMA is impossible on S100--it's just that it's a little different. You could, for instance, incorporate it as a bus-master in a particular device--so you could put an 8237/57 on the same board--a disk controller, say--and operate that way. Or you could provide a private bus for DMA REQUEST and DMA ACKNOWLEDGE between boards that need DMA and the DMA board.

I recall (if my mind isn't gone yet) that the IMSAI 2-board floppy controller did DMA by incorporating a second 8080 into the controller, which then acted as a bus master during transfers.

If you just wanted to move memory around, you could do that and make sure that all other I/O used AEN to condition their selection.

Anyway, you get the idea...
 
I think the Sun finally rose. If I had missed your point, I apologize. I believe I have an understanding of this now. As you know, my 8237 is not on the CPU board. I did that because the S-100 bus doesn't have all the signals on it. Not having the AEN signal to block the other I/O address device-select decoding is my problem. I think that I'll keep the 8237 where it is and will redefine an unused S-100 line for AEN. I know that HOLD ACK would work for this blocking, so I'll change it to AEN. I don't have any standard or vendor-purchased S-100 boards in my computer, so it'll be fine. I also don't think anyone other than me will ever use this machine. I appreciate everyone's help and all the discussion. Sometimes I need a crowbar to get that understanding through to me. Thanks
 
I think that I'll keep the 8237 where it is and will redefine an unused S-100 line for AEN

So you will need to modify all of your I/O boards to disable address decode when AEN is active... It might be much easier to do what Chuck had suggested earlier - put the DMA controller on the same board with the controller that will be using DMA (e.g. Disk Controller), so that DMA-issued IOR/IOW commands never go to the S-100 bus.
 
Other than for intellectual curiosity, what is the reason for implementing DMA on your setup?

I suppose that on an 8080 system, DMA can be faster for doing high-speed data transfer, because it takes multiple instructions to transfer a byte between I/O space and memory space, and some devices require immediate service (e.g. disk controllers). Some devices practically demand either shared memory or DMA, due to their nature (e.g. the 8275 display controller). Otherwise, the only really good reason for DMA is being able to move data transfer to the processing background so one can multitask.

One problem with letting each board do its own DMA is that there's no easy way on the S100 bus to do arbitration of DMA requests. The 8237 handles this, but at the expense of having to dedicate a pair of bus pins for each DMA channel. Some computer bus architectures accomplish arbitration by taking the request/grant DMA lines on the bus and "daisy chaining" from slot to slot. It's purely a physical thing--the devices requiring higher priority are placed in slots closest to the CPU; blank slots are filled with "jumper" cards that allow for the completion of the chain.
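The daisy-chain idea above can be sketched in a few lines. This is my own illustration, not anything from a real S100 design: the grant propagates outward from the slot nearest the CPU, and the first requesting slot claims it.

```python
# Illustrative sketch (mine, not from any actual S100 hardware): daisy-chained
# DMA arbitration. The grant signal enters at the slot nearest the CPU; the
# first slot with a pending request claims it and does not pass it on.
def arbitrate(requests):
    """requests[i] is True if the card in slot i asserts its DMA request.
    Returns the winning slot, or None. Empty slots are assumed to hold
    jumper cards, so the chain simply passes through them."""
    for slot, req in enumerate(requests):
        if req:
            return slot  # claims the grant; the chain stops here
    return None

# Slots 1 and 3 both request; slot 1 is closer to the CPU and wins.
print(arbitrate([False, True, False, True]))  # -> 1
```

Priority is thus set purely by card placement, which is why the jumper cards matter: a gap in the chain would strand every slot beyond it.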

But in your case, it might just be best to create a small "private bus" between the DMA card and the controller. This can be nothing more than a run of ribbon cable between cards. Cards not needing DMA are simply bypassed by the private bus cable and are not aware of DMA support.
 
The ultimate goal is to get an 8" disk to work with this machine and a copy of CP/M 2.2. I plan on having an 8272 FDC. There should only be one DMA channel used, and it would be for the FDC. I suppose that I could move the DMA to the CPU board and have a private bus, but there are a lot of wiring changes for that. At first blush, I'm thinking that I could keep what I have now and use the S-22 line for AEN. But first I want to do some more DMA testing, install an 8259 PIC and do some testing on that, then continue with the 8272 FDC.
Thanks
Mike
 
If you're thinking single-density 8", an 8080 running at 2MHz has more than enough power to do the job without DMA. Just a little clever programming. Take a look at the Tarbell FDC (first one, not later ones) and see how it works. Granted, it used a 1771, but that's just a minor detail. Doing 8" MFM with a 2 MHz 8080 (but not Z80) would probably require DMA or a FIFO.
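A rough back-of-envelope check supports this. The numbers below are my own assumptions, not from the post: 8" single density delivers roughly 250 kbit/s, i.e. one byte about every 32 us, and the T-state figures are the standard 8080 timings for a hypothetical tight polling loop.

```python
# Back-of-envelope check (my numbers, not from the post): can a 2 MHz 8080
# keep up with 8" single density using programmed I/O? A tight polling loop
# might look like:
#   IN status (10T), ANI mask (7T), JZ loop (10T)   ; wait for data ready
#   IN data (10T), MOV M,A (7T), INX H (5T), JMP (10T)
# T-state counts are standard 8080 figures; the loop itself is illustrative.
CPU_HZ = 2_000_000
LOOP_T_STATES = 10 + 7 + 10 + 10 + 7 + 5 + 10      # 59 T-states per byte
BYTE_PERIOD_US = 8 / 250_000 * 1e6                 # ~32 us/byte at SD rates

loop_us = LOOP_T_STATES / CPU_HZ * 1e6             # ~29.5 us per byte
print(f"loop: {loop_us:.1f} us/byte vs {BYTE_PERIOD_US:.1f} us available")
print("fits" if loop_us < BYTE_PERIOD_US else "too slow")
```

It fits, but with only a couple of microseconds to spare, which is why doubling the data rate for MFM (~16 us/byte) pushes a 2 MHz 8080 past the limit without DMA or a FIFO.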

If you have no particular use for interrupts other than for disk use, you may want to consider just using the interrupt pin "naked" or even fabricating your own PIC. Before the 8259, one common implementation was to use a priority encoder and a few other random bits of logic.

If you wanted to go DSDD/DSSD on a 5.25" or 3.5" drive, a WD1770/1772 is very simple to deal with and doesn't involve a lot of extra circuitry.
 
8272 is a pain to work with :) It requires an external data separator (either tens of ICs and some manual tuning, or a rare/expensive data separator IC), open collector drivers, and might need additional latches and buffers to control the floppy drive. Is there any particular reason you want to stick with it? There are multiple alternatives: FDC9266 and FDC9268 are basically 8272 with built-in data separators (they still need open collector drivers). WD37C65 and PC8477 include everything you need for a floppy disk controller - only add some address decode logic.
CPU-based I/O works well with a 4 MHz Z80 when using double-density disks (250 kbit/sec transfer rate)... it might work with slower CPUs as well.
For CPU-based I/O you can refer to N8VEM/RomWBW code, and to N8VEM Disk I/O V3 or Zeta SBC schematic...
 
Herb Johnson's retrotechnology website has some discussions of whether it's possible to do 8080 programmed I/O on 8" DD drives (500KBPS). Most everyone said no, that the timing doesn't work--but I pointed out that there is a way, and it's ugly. However, SD is straightforward--and DD may be possible with some hardware help.

I don't much care for the 8272 or NEC765 (same chip, two labellings) either, but it's what the PC family uses--and it can be a pain.

I tend to think of the NS PC8477, WD37C65, etc. as more of the PC persuasion and more suited to x86 use. You could try a WD279x, but it will require OC drivers (e.g. 7438 ) to the drive and is a little more involved. But the write precomp and data separator are both on-chip.

On the other hand, consider the WD1770, 1772 or 1773. 28-pin DIPs with onboard data separators, write-precomp and outputs that probably could drive a single 3.5" drive, but I'd recommend some OC buffers (e.g. 7407) to be safe. Very simple to program and used on the Atari ST quite a bit. Apparently, some 1772B parts are fast enough to drive an 8" drive. I've built ISA cards using them, and but for I/O select decoding, very few parts were required.

Of course, if you want to be period-correct, a WD1771 or 1791 is the way to go.
 
I will stick with single density; it will be much faster than anything I have now. I have an 8272 chip and a lot of documentation and examples, so I thought that would be best. I'm still a ways away from that yet, so I can look at these others. Simpler is appealing. I have time to think about it.
Thanks
Mike
 
I think it was this thread, but in one DMA controller thread on here, somebody mentioned that the 8237 doesn't need a temporary register for mem-to-io transfers. Why is this the case? Even after reading the datasheet, I can't visualize how this is possible, considering that the value on the bus would become invalid once the read address of the source was switched to the write address of the destination. The only way I can see this working is if only the read signal needs to keep being asserted after data from an IO device is placed on the bus... And even then, this only works because the 8237 has separate read/write signals for memory and IO...
 
Because the IO and memory events happen concurrently - look at the timing diagrams.

For a memory to IO transfer (memory read):

  • The IO device asserts DRQ indicating it is ready to receive data
  • The DMA controller asserts HOLD and receives HOLD-ACK from the CPU, the CPU floats the busses, and the DMA controller gets control and notifies this via AEN
  • The DMA controller puts the memory address onto the address bus and asserts DACK (on the appropriate channel)
  • Next it asserts both MEMR and IOW, causing memory to drive the data bus and this to be latched by the IO device directly
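The steps above can be sketched as a toy model. This is my own illustration, not the datasheet's: the key point is that memory drives the bus (MEMR) in the same cycle that the device latches it (IOW + DACK), so the byte never passes through the 8237 and no temporary register is involved.

```python
# Toy model (mine, not from the 8237 datasheet) of the fly-by transfer
# described above: one bus cycle in which memory drives the data bus (MEMR
# active) while the selected IO device latches it (IOW + DACK active). The
# byte goes straight from memory to device, bypassing the 8237 entirely.
def flyby_mem_to_io(memory, addr, device):
    bus = memory[addr]        # MEMR asserted: memory drives the data bus
    device.append(bus)        # IOW + DACK asserted: device latches the byte
                              # (both strobes overlap within the same cycle)

memory = [0x3E, 0x41, 0x76]   # source buffer in system memory
device = []                   # bytes the IO device has latched
for addr in range(len(memory)):
    flyby_mem_to_io(memory, addr, device)
print(device)                 # -> [62, 65, 118]
```

The only thing the 8237 supplies each cycle is the memory address and the control strobes; the data path is memory-to-device direct.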

Hope that helps!
 
That post DOES help...

It points out something I've grossly misunderstood about the 8237- the DMA channels are NOT tied to a specific I/O port, but rather to an I/O device. This would explain why no I/O port address is sent to I/O devices, and there are other control signals that the device can use to distinguish between a regular I/O xfer and DMA xfer. Actually I knew this before (see somewhere in the BIOS thread)... I suppose it didn't stick though :/.

Well, I suppose that if set up properly, an I/O device could handle a DMA xfer just by reading/writing a specific I/O port repeatedly (chosen by software configuration registers or hardware address decoding), and autoincrementing an internal buffer offset which maps to the port after each xfer. Couldn't tell you if any real ISA devices do that though.
 
Generally IO ports are used to access specific registers, so no increment is needed as we want to transfer every byte(/word) from the same IO device register, but to consecutive memory addresses (hence REP INSW). So the DRQ/DACK lines in effect short-cut the address selection of the particular register in question.
 
Generally IO ports are used to access specific registers, so no increment is needed as we want to transfer every byte(/word) from the same IO device register, but to consecutive memory addresses (hence REP INSW).
Sorry, what I meant is that on the I/O device side, an I/O register may be a "window" into a larger I/O device memory buffer that is otherwise inaccessible from the CPU side. A read from that I/O register will increment the buffer pointer internal to the I/O device (perhaps the pointer can be manually set using another register), so that consecutive reads from the I/O device will read consecutive internal I/O device buffer locations. And so DMA can, like you said, remove the requirement of having to identify the correct I/O register/port from which to read by using the dedicated control bus signal lines DRQ/DACK in place of placing an I/O port address on the bus.

... That is what I meant to say, anyway... which more or less coincides with what you said... I think.
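The "window" register idea can be sketched like this. The device below is hypothetical, not any specific ISA card: one data port fronts a larger on-device buffer, and each access bumps an internal pointer.

```python
# Sketch of the "window" register idea (hypothetical device, not a specific
# ISA card): a single IO data port exposes a larger on-device buffer, and
# every read or write autoincrements an internal pointer, so consecutive
# accesses to the SAME port walk through consecutive buffer locations.
class WindowedBuffer:
    def __init__(self, size=16):
        self.buf = bytearray(size)
        self.ptr = 0              # settable via a separate pointer register

    def set_pointer(self, value):
        self.ptr = value % len(self.buf)

    def read_data_port(self):
        """Read one byte through the window, then autoincrement."""
        byte = self.buf[self.ptr]
        self.ptr = (self.ptr + 1) % len(self.buf)
        return byte

    def write_data_port(self, byte):
        """Write one byte through the window, then autoincrement."""
        self.buf[self.ptr] = byte
        self.ptr = (self.ptr + 1) % len(self.buf)

dev = WindowedBuffer()
for b in b"DMA":
    dev.write_data_port(b)        # three writes, same port address
dev.set_pointer(0)
print(bytes(dev.read_data_port() for _ in range(3)))  # -> b'DMA'
```

Under DMA the port address disappears entirely: DACK selects the device instead, and each qualified IOR/IOW strobe plays the role of an access to the data port.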
 
After all this time, I think I'm actually starting to understand the mysterious (to me anyway) 8237...

Rereading the datasheet, I've realized that the 8237 can be 'forced' to do an xfer by the CPU, by forcing DACK to assert in software (at least 2:00 AM me thinks that's how it works :p- feel free to correct me). Is there any PC hardware that anyone is aware of that by convention uses a 'CPU-initiated' DMA xfer instead of using a DMA request? What happens if the I/O device runs out of data to send or receive? I don't recall there being an ISA control signal to tell the DMA controller of abnormal conditions other than to add DMA wait states.

Logically, I thought the floppy controller DMA would've used this approach (software-initiated DMA), but looking at the BIOS source code, it appears the BIOS programs the DMA controller, programs the NEC floppy controller, and then waits in a loop for an interrupt signifying End-of-DMA... so that would mean the DMA xfer begins when the floppy controller asserts DREQ2. Do I understand this correctly?

And even though I mentioned this in my previous post, I'm questioning it anyway...
How DOES an I/O peripheral know when it's time to increment its internal buffers to read in or write out another byte for a DMA xfer through an I/O port? Since there's no address placed on the bus for I/O devices, and the IOW/IOR signals must always be qualified by AEN and DACKx (where x is a channel) for the duration of the xfer, I can't see which control signal an I/O device can use to "know" when it's time for the next byte to be read or written via DMA.

Also, reenigne, if you read this, you mentioned a way to do a mem-to-mem xfer on the PC, but only in between 64kB pages... I presume this has to do with DACK0 not being connected to the IC "register file" which holds bits 16-19 of the IBM PC address? Additionally since
 
The 8237 can be 'forced' to do an xfer by the CPU, by forcing DACK to assert in software

I'm not sure where you're seeing this. The polarity of DREQ and DACK (active high or active low) can be programmed, though (datasheet pg. 2, and note on pg. 14).

Is there any PC hardware that anyone is aware of that by convention uses a 'CPU-initiated' DMA xfer instead of using a DMA request?

Well, since you ask :) My XT-CFv3 DMA mode works in this way. The card asserts DREQ under the control of the programmer, specifically by writing to a certain IO port. Once asserted, the card then counts a set number of transfers after receipt of DACK and releases DREQ once that value has been reached.

What happens if the I/O device runs out of data to send or receive? I don't recall there being an ISA control signal to tell the DMA controller of abnormal conditions other than to add DMA wait states.

That depends on the DMA mode (see page 5). In block mode, the DMA controller simply continues until it reaches the preset value, and TC is asserted to inform the device. In demand mode, the device is free to release (and subsequently re-assert) DREQ as data buffers are exhausted.

How DOES an I/O peripheral know when it's time to increment it's internal buffers

Just like normal port-mapped operation: in a DMA write (to memory), data is driven onto the bus whilst IOR is asserted, then the pointer is updated when IOR is released.

mem-to-mem xfer on the PC, but only in between 64kB pages

All DMA operations are physical 64KB page bound because the page register is not updated during transfers.
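This can be sketched numerically. My own illustration: the 8237 supplies only address bits 0-15, which wrap modulo 64KB, while the page register supplies bits 16-19 and is written once, before the transfer.

```python
# Sketch (mine) of why PC DMA transfers are bound to a physical 64KB page:
# the 8237 drives only address bits 0-15 (which wrap modulo 64KB), while a
# separate page register drives bits 16-19 and is never updated mid-transfer.
def dma_physical_address(page_reg, offset16):
    """page_reg: 4-bit DMA page register; offset16: 16-bit 8237 address."""
    return (page_reg << 16) | (offset16 & 0xFFFF)

page = 0x2                    # page register programmed once, up front
start = 0xFFFE
for i in range(3):            # three byte transfers crossing the 64K mark
    print(hex(dma_physical_address(page, start + i)))
# -> 0x2fffe, 0x2ffff, 0x20000 : the offset wraps, the page stays put
```

So a transfer that reaches the top of the page silently wraps back to its bottom, which is why buffers must be allocated so they never straddle a 64KB physical boundary.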


Hope that helps.
 