Hi, I'm getting to the point in writing my Sanyo MBC-55x BIOS where I'd like to add proper interrupt-driven, buffered serial port support, and I noticed that while the NEC 8251AC chip has a ton of hardware bugs, it's the Intel chip that was used on the serial board. The NEC chip is used on the motherboard, but as a keyboard controller, where none of the problems really matter.
So, my question is: if the NEC chip was that bad, are there any such hardware bugs in the Intel 8251A? I'd find it hard to believe there aren't any, so if I can't find such info I'll implement workarounds for all of the NEC bugs just in case.
I hate running into stuff like that and thinking my assembly is wrong, because debugging assembly is kind of a bitch. For example, I found two glitches in the Mitsubishi M5W1793 floppy controller that were a bit counter-intuitive: writing a sector returns a spurious error flag at the beginning, and the chip does not reliably establish sync to the data track before beginning operations, resulting in impossible-size sectors being transferred with garbage data if enough delay isn't allowed once the track seek or spin-up is done. It's bad enough trying to write code fast enough to transfer data when the CPU is barely fast enough to do it (only polled I/O is supported on this motherboard), and running into inconsistencies makes it even harder.
If anyone's curious what kind of bugs the NEC chip has, here are the notes I made about them and possible workarounds in my application (note: the NEC uPD8251ACF and uPD8251AF have fixed these bugs):
Code:
;notes for working around uPD8251AC hardware bugs (this chip is ugly):
;1)
;If the remote side cancels CTS while transmitting, you must stop transmission
;(by shutting off txenable) before the next byte is sent or it will be a
;duplicate of the last byte, until the next byte is loaded from the CPU.
;Basically, the hardware confuses non-empty tx buffer with a buffer that
;hasn't begun transmission.
;Workaround: Always shut off txEN after starting a transmission or
;(maybe) immediately after the character finishes sending. Shutting off txEN
;immediately prevents interrupt on completed transmission though. Shutting
;it off after completed byte transmission is safe as long as remote side always
;keeps CTS cancelled for at least one byte at a time. HW mod is possible
;to enforce this condition.
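;Hedged sketch of that workaround (the "immediately" variant, which as
;noted forgoes the end-of-transmission interrupt). The SIO_DATA/SIO_CMD
;equates and the cmd_shadow RAM copy of the command byte are my
;placeholders, not the real MBC-55x serial board addresses:
send_byte:                          ;AL = character to transmit
        mov     dx, SIO_DATA
        out     dx, al              ;load character into the tx buffer
        mov     dx, SIO_CMD
        mov     al, cmd_shadow
        or      al, 01h             ;raise TxEN so the byte starts shifting
        out     dx, al
        and     al, 0FEh            ;drop TxEN right away; the byte already
        out     dx, al              ;in flight still completes, but a CTS
        mov     cmd_shadow, al      ;drop can no longer replay it
        ret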
;2)
;Break detect can latch up and in some cases requires a device reset.
;Always reprogram the device after receiving a break.
;3)
;Bugs relating to synchronous transmission - irrelevant here.
;4)
;Status register can return garbage if it was being updated.
;Read status register until it returns the same byte twice.
;Mask off bit 2 (TxEMPTY) if txEN is off (see item 6), or it could
;oscillate and cause unpredictable delays.
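;Sketch of the double-read for item 4; the SIO_CMD equate and cmd_shadow
;(an assumed RAM copy of the command byte) are placeholders:
get_status:
        mov     dx, SIO_CMD
        in      al, dx
gs_again:
        mov     ah, al              ;remember previous read
        in      al, dx
        cmp     al, ah
        jne     gs_again            ;changed mid-update - read again
        test    byte ptr cmd_shadow, 01h    ;is TxEN on?
        jnz     gs_done
        and     al, 0FBh            ;TxEN off: TxEMPTY (bit 2) is junk (item 6)
gs_done:
        ret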
;5)
;RxRDY clears after 2 cycles instead of instantly - irrelevant.
;6)
;TxEmpty doesn't return proper value when transmission is stopped.
;This can be worked around by using TxReady instead.
;7)
;TxRDY and TxEmpty clear half a cycle too late - irrelevant.
;8)
;Enter hunt command causes receive problems in async mode - shouldn't
;be using it anyway.
;9)
;control/data select line has bad performance - irrelevant because our system
;clock speed is as slow as 8088 has been known to run in a mass produced PC
;- we should be fine.
;10)
;data overrun is not always reliably detected and can cause garbage data.
;Workaround: Never allow a hardware overrun but emulate it through software
;when the received byte cannot be buffered.
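;Sketch of that software-overrun emulation using a power-of-two ring
;buffer; rx_buf/rx_head/rx_tail/RXBUF_MASK/soft_status are assumed names,
;not part of the real BIOS:
rx_byte:
        mov     dx, SIO_DATA
        in      al, dx              ;always drain the data register so the
                                    ;receiver can never overrun in hardware
        mov     bx, rx_head
        mov     rx_buf[bx], al
        inc     bx
        and     bx, RXBUF_MASK
        cmp     bx, rx_tail
        je      rx_full             ;no room: drop the byte, fake the error
        mov     rx_head, bx
        ret
rx_full:
        or      byte ptr soft_status, 10h   ;emulate OE (status bit 4) in sw
        ret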
;11)
;In async mode, the first bit sent is delayed by one transmit clock period after
;a reset - irrelevant.
;12)
;RxRDY can glitch when clk doesn't have a fixed phase relationship to receive
;clock - this shouldn't be a problem because all clocks are derived from one
;source.
;13)
;Receiver sometimes generates an extra character after a break ends.
;No proper workaround is possible since the end of a break cannot be
;reliably timed without polling the chip constantly. Every character
;immediately following a break can be discarded unless the break's end was
;found during status polling.
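;Sketch of that partial mitigation; after_break is an assumed flag:
;set it when BRKDET is seen, clear it when a status poll catches the
;break ending with RxRDY low, and drop one byte while it is still set:
rx_filter:                          ;returns CY set = discard this character
        test    byte ptr after_break, 1
        jz      rx_keep
        mov     byte ptr after_break, 0
        stc                         ;first byte after a break - discard it
        ret
rx_keep:
        clc                         ;normal character - pass it through
        ret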
;Problem due to implementation on serial card:
;rxRdy and txRdy share an interrupt line. Transmit enable must be shut off to
;allow reception of rxRdy interrupt. rxRdy must be polled at the end of
;transmission after shutting off txEN to make sure a byte isn't missed, since
;txRdy going high would trigger the interrupt, and rxRdy can trigger just
;after the interrupt handler checks for it. This can however be done in the
;interrupt handler - only exit once txEN is shut off and no rxRdy occurs.
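;Sketch of that exit rule (labels, equates and the cmd_shadow copy of the
;command byte are placeholders; the item-4 double status read is omitted
;here for brevity):
sio_isr:
        push    ax
        push    dx
isr_poll:
        mov     dx, SIO_CMD
        in      al, dx              ;read status
        test    al, 02h             ;RxRDY?
        jnz     isr_rx
        test    byte ptr cmd_shadow, 01h    ;TxEN still on?
        jnz     isr_tx
        pop     dx                  ;TxEN off and RxRDY clear: a late RxRDY
        pop     ax                  ;will re-assert the line, safe to leave
        iret
isr_rx:                             ;(placeholder: read SIO_DATA, buffer it)
        jmp     isr_poll
isr_tx:                             ;(placeholder: load next byte, or shut
        jmp     isr_poll            ;TxEN off and update cmd_shadow)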