daver2
10k Member
The 8008-1 is a 'faster' CPU than the 8008 (https://datasheets.chipdb.org/upload/wepwawet/8008_(1978) Datasheet.pdf). This may be for a number of reasons.
One possibility is that early CPUs would only run reliably at 500 kHz. They may have run faster - but this is early over-clocking!
Over time, Intel may have improved their processes, or slightly redesigned the 8008 to make it 'faster', and given it the code 8008-1.
It may also be that the 8008 CPU was tested at 800 kHz (plus a margin) and (if it passed) it was classified as an 8008-1. If it failed, but reliably passed at 500 kHz (plus a margin), it was classified as an 8008. If it failed, it was either destroyed - or sent to a country to be destroyed which may (or may not) have happened (if you get my meaning)...
Over time, even 8008-1 devices may have 'aged', so they do not reliably work at 800 kHz. Either that, or unscrupulous companies may have remarked an 8008 as an 8008-1. In the latter case, it will not reliably run at 800 kHz either.
Note that there is also a minimum clock frequency of 333 kHz - implying that the internals of the CPU registers are 'dynamic' in nature and require a minimum clock frequency to hold their values (otherwise the register contents evaporate).
Early designs also 'scrimped' on decoupling capacitors. Switching noise from the logic ends up propagating across the power rails and upsets logic operations - especially as the clock frequency increases. You can add additional decoupling capacitors if you wish. The recommendation is one decoupling capacitor per logic device. You can use an oscilloscope to check the power rails for high-frequency noise to see if this is a problem.
The "serial" vs "parallel" distinction is partially a red herring. You can "bit bang" a single digital output to make a serial stream for a TTY. This will work - but does have significant limitations. Or, you can add a dedicated hardware UART to do all the 'magic' for converting parallel to serial and serial to parallel. The UART hardware solution is preferred - especially if you want higher bitrates...
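To illustrate what the bit-bang method actually transmits (in Python rather than 8008 assembly), here is a sketch of the framing for one character. The frame format is an assumption - 1 start bit, 8 data bits LSB-first, 2 stop bits, which is typical for a 110-baud Teletype - the actual terminal may differ:

```python
# Sketch of bit-banged serial framing (illustrative Python, not 8008 code).
# Assumed frame: 1 start bit (space), 8 data bits LSB-first, 2 stop bits
# (mark) - typical for a 110-baud Teletype such as an ASR-33.

def frame_bits(byte):
    """Return the line levels (0 = space, 1 = mark) for one character."""
    bits = [0]                                   # start bit (space)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1, 1]                               # two stop bits (mark)
    return bits

# 'A' (0x41) framed for transmission:
print(frame_bits(0x41))  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1]
```

The bit-bang code toggles an output pin to each of these levels in turn, with a software delay loop between bits - which is where the delay constants below come in.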
Looking at the delay codes for the bit-bang method, I calculate the following delay values (by scaling from 500 kHz to 800 kHz):
Half bit (68 decimal) scales to 109 (decimal).
Full bit (139 decimal) scales to 222 (decimal).
One and a half bits (204 decimal) scales to 326 (decimal)...
The last example is larger than a byte (255 decimal) - so the code (as written) cannot accommodate this change. In order to accommodate this change, the code will have to be modified to implement two (2) consecutive delays (say a full bit plus half a bit).
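The scaling above can be checked arithmetically. A minimal sketch (the delay constants are from the post; the rounding to the nearest integer is my assumption):

```python
# Scale the bit-bang delay constants from a 500 kHz clock to an 800 kHz clock.
# At 800 kHz each instruction takes 500/800 of its former time, so each
# delay count must grow by the factor 800/500 = 1.6.

def scale(count, old_khz=500, new_khz=800):
    return round(count * new_khz / old_khz)

half_bit = scale(68)        # 109
full_bit = scale(139)       # 222
one_and_half = scale(204)   # 326 - no longer fits in a single byte (max 255)

# Split the oversized delay into two consecutive byte-sized delays,
# e.g. the full-bit delay followed by the remainder:
split = (full_bit, one_and_half - full_bit)   # (222, 104) - both fit in a byte
print(half_bit, full_bit, split)
```

Any split whose two byte-sized counts sum to 326 would do; the full-bit-plus-remainder split keeps one of the existing constants unchanged.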
>>> I have other responsibilities
Tell me about it
...
It is sometimes good to have deadlines to work to. An exhibition is a fixed deadline - and you made it...
Are you planning on getting SCELBAL operational?
Dave