
In-circuit TTL logic testing with the El Dr. Gusman logic analyzer, would it be possible?

Vintage Computer Club CH (Zürich, Switzerland)
Hello

I got myself one (several) of these: https://github.com/gusmanb/logicanalyzer

I've been playing with it, but one use case that came to mind is testing TTL logic circuits while they are running without desoldering the IC, for example on TTL only arcade boards with faults, where many ICs are soldered in and it takes a long time to check all of them.

Would it be possible by programming or extending the software to validate the signals? https://github.com/gusmanb/logicanalyzer/wiki/06---The-LogicAnalyzer-program

For example, this is a 74LS32 gate on CH2-4

[Attached screenshot: capture of the 74LS32 on CH2-4]

The idea is to validate the outputs against a known truth table, similar to how the TL866's software (or other chip testers) can check ICs in the programmer.

What do you think, waste of time or potentially useful?

I like that I can just clip the whole LA onto the IC, power it from the VCC pin, and grab the signals via WiFi.
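As a thought experiment, the check described above can be sketched in a few lines. The channel layout and the capture format (one tuple of logic levels per sample) are my assumptions for illustration, not the LogicAnalyzer software's actual export format:

```python
# Sketch: validate captured samples of one 74LS32 OR gate against its
# truth table. The (a, b, y) tuple-per-sample capture format is an
# assumption for illustration, not the real LogicAnalyzer export format.

def check_or_gate(samples):
    """samples: iterable of (a, b, y) logic levels, one tuple per sample.
    Returns (index, a, b, y) entries where y disagrees with a OR b."""
    failures = []
    for i, (a, b, y) in enumerate(samples):
        if y != (a | b):
            failures.append((i, a, b, y))
    return failures

# Example capture: the third sample shows a stuck-low output.
capture = [(0, 0, 0), (0, 1, 1), (1, 0, 0), (1, 1, 1)]
print(check_or_gate(capture))  # -> [(2, 1, 0, 0)]
```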
 
The simple answer is YES, but with some caveats.

Simple combinatorial logic (AND, OR, XOR etc., i.e. devices with no internal 'state' such as latches) should work fine, with the proviso that all of the inputs/outputs are exercised. For example, with an AND gate you will only get a TRUE output if all of the input signals are TRUE. You can, however, still demonstrate a fault if you get an (invalid) TRUE output when one (or more) inputs are FALSE - that would indicate that the device's gate is malfunctioning.

On the basis that something is better than nothing, this would be an advancement...

The presence of tristate buffers, open collector outputs, and internal state makes the problem more difficult - but not impossible.

It is a good idea though, and one worthy of investigation.

Just think it through first...

For example, the analysis software could identify how many (and which) of the test cases for a device have been tested and passed, and how many (and which) of the test cases have not been encountered during the test run.
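That coverage bookkeeping is straightforward to prototype. A minimal sketch (Python, purely illustrative) for a 2-input gate:

```python
from itertools import product

# Sketch of coverage reporting: given the input combinations observed in
# a capture, report which truth-table rows were exercised and which were
# never encountered during the test run. Illustrative only.

def coverage(seen_inputs, n_inputs=2):
    all_cases = set(product((0, 1), repeat=n_inputs))
    seen = set(seen_inputs)
    return sorted(seen & all_cases), sorted(all_cases - seen)

tested, untested = coverage([(0, 0), (1, 1), (0, 0)])
print(tested)    # [(0, 0), (1, 1)]
print(untested)  # [(0, 1), (1, 0)]
```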

Dave
 
Thinking about this a bit more whilst I was out...

Ideally you will need some sort of trigger clock - using the active high or active low of the trigger to monitor the logic states. This will (hopefully) avoid the case where the logic states are changing and have not yet stabilised. You could also add some filtering to the input logic states - to ensure they have stopped changing state.

You may also require a start and stop event to be considered (i.e. only check the logic states when START=HIGH and STOP=LOW).
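Both filters (input stability and START/STOP gating) are easy to express in post-processing. A sketch, where the sample format and signal names are my assumptions:

```python
# Sketch of the two gating rules above: accept a sample only once the
# input levels have been stable for n_stable consecutive samples, and
# only while START is high and STOP is low. Names are illustrative.

def gate_samples(samples, n_stable=3):
    """samples: list of dicts with keys 'inputs' (tuple), 'start', 'stop'."""
    accepted = []
    run, prev = 0, None
    for s in samples:
        run = run + 1 if s['inputs'] == prev else 1
        prev = s['inputs']
        if run >= n_stable and s['start'] == 1 and s['stop'] == 0:
            accepted.append(s['inputs'])
    return accepted

stream = [{'inputs': (1, 0), 'start': 1, 'stop': 0}] * 3 \
       + [{'inputs': (0, 0), 'start': 1, 'stop': 0}]
print(gate_samples(stream))  # [(1, 0)] - accepted once stability is reached
```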

You would want to define a configuration language (as a plain text file) containing the logic descriptions. This file could then be used to identify the input and output pins of the device (and which pins are power - so should always be stable). This plain text file could then be extended by the user as required (so you don't have to specify all of the devices up front). I would convert this file from plain text into binary (to make the file more compact and more easily managed by the runtime software) - perhaps provide a website where you drag and drop the plain text file and it converts it into binary (spitting out any errors as it goes). This validation step is necessary to avoid faults creeping through to the point where the user comes to use the device test vectors...

EDIT: Of course, there is no need to reinvent the wheel here... There are two 'standard' languages already in existence (ABEL and CUPL). Both of these languages were designed to describe logic for programming logic parts, and for simulating the resulting device. You can ignore the parts of the language related to defining the logic, and just concentrate on the pin descriptions and the associated test vectors.
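If only the pin descriptions and test vectors are used, the parsing burden is small. Here is a minimal sketch that reads one simplified ABEL-style vector line; real ABEL allows don't-cares, symbolic constants and more, which this deliberately ignores:

```python
import re

# Minimal parser for a simplified ABEL-style test vector line such as
#   [ 1, 0] -> [ 1] ;
# Real ABEL syntax is much richer; this sketch handles plain 0/1 only.

def parse_vector(line):
    m = re.match(r'\s*\[([^\]]*)\]\s*->\s*\[([^\]]*)\]\s*;', line)
    if not m:
        raise ValueError(f"not a vector line: {line!r}")
    inputs = tuple(int(x) for x in m.group(1).split(','))
    outputs = tuple(int(x) for x in m.group(2).split(','))
    return inputs, outputs

print(parse_vector("[ 1, 0] -> [ 1] ;"))  # ((1, 0), (1,))
```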

See an example of ABEL for a 74162 here: https://en.wikipedia.org/wiki/Advan...uage#/media/File:ABEL_HDL_example_SN74162.png. You can ignore the 'equations' part - as this is the bit defining the logic to be configured inside the programmable device.

What you are describing is how Tektronix (and others) use an oscilloscope/logic analyser to capture events and to subsequently analyse them using in-built software to produce (for example) serial port, Ethernet protocol or USB/IEEE488 protocol 'dumps' in a manner that can be interpreted - rather than just the 0's and 1's on the relevant wires.

Dave
 
Would it be possible by programming or extending the software to validate the signals?

Not in-circuit: you would have to force logic inputs high or low while they are being driven by other ICs.

There was a scheme HP was promoting, which Atari used on some 80s arcade boards, called signature analysis. It put a board into a state where a known bit pattern would be produced on an output pin, and a signature analysis box would capture the bitstream and compute a checksum. There was a table of known checksums for IC pins in that state. The problem was setting up the stable-state conditions under which the analysis could be done.
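For the curious, the core of signature analysis is just a linear-feedback shift register clocked by the captured bitstream. A sketch in Python; the tap positions follow the commonly cited HP 16-bit polynomial (x^16 + x^12 + x^9 + x^7 + 1) and should be treated as an assumption here, not as verified against HP hardware:

```python
# Sketch of a signature analyser's core: shift the captured bitstream
# through a 16-bit LFSR and report the residue as a 4-digit signature.
# Tap positions (bits 7, 9, 12, 16) follow the commonly cited HP
# polynomial - an assumption, not verified against HP hardware.

def signature(bits):
    reg = 0
    for b in bits:
        fb = (b ^ (reg >> 6) ^ (reg >> 8) ^ (reg >> 11) ^ (reg >> 15)) & 1
        reg = ((reg << 1) | fb) & 0xFFFF
    return f"{reg:04X}"

# Identical bitstreams always give identical signatures, so a pin can be
# compared against a table of known-good values.
assert signature([1, 0, 1, 1, 0, 0, 1]) == signature([1, 0, 1, 1, 0, 0, 1])
assert signature([1]) != signature([0])
```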

Another box HP built around the same time was the logic comparator, a clip-on device that put an identical IC in parallel with the inputs and had a way to indicate when the outputs didn't match. The problem with these is that you need an adapter for every type of IC you want to test.

You could actually build something fast enough today with an FPGA that could simulate every IC you'd need, making a soft logic comparator possible.

The other thing you'd really like to detect is when an output pin is in an invalid voltage state (i.e. one of the output transistors has failed).
 
If you do not set the inputs, but rely on the system under test to set them, then you can test the device with the caveat that only those gates/input combinations that the system under test actually generates are tested.

A bespoke test program (post-processing the captured logic states) can validate that the outputs of the device match the expected outputs (for the given inputs). It can also identify which of the test vector input conditions are never provided - hence we can determine the test coverage.

The above caveat also applies to a logic comparator, which compares the output of a known-good reference device to the device under test for the same inputs as derived from the system under test.

EDIT: For an example of a logic comparator clip, see https://jmprecision.co.uk/media/JMLC_Assembly_Manual_Revision_2.pdf. A reference module is prepared defining which pins are power, input and output, and an identical IC is inserted into the reference module. The hardware compares the outputs from the device within the reference module to the outputs from the device under test for the given input vectors (produced by the system under test). Any discrepancy is indicated by illuminating an LED on the pin causing the discrepancy. This system will suffer from the same fate as the post processing of a logic analyser trace.

My first 'stab' at an ABEL test script for a 7400 could be:

Code:
module SN7400

title 'Test vectors for the 7400 quad 2-input NAND gate.'

declarations

    IN1A, IN1B, OUT1Y pin  1,  2,  3 ;
    IN2A, IN2B, OUT2Y pin  4,  5,  6 ;
    GND               pin  7         ;
    IN3A, IN3B, OUT3Y pin 10,  9,  8 ;
    IN4A, IN4B, OUT4Y pin 13, 12, 11 ;
    VCC               pin 14         ;

test_vectors

    ([IN1A,IN1B,IN2A,IN2B,IN3A,IN3B,IN4A,IN4B] -> [OUT1Y,OUT2Y,OUT3Y,OUT4Y,GND,VCC])

     " Untested gate inputs are '0' making the untested gate outputs '1'.
     [   0,   0,   0,   0,   0,   0,   0,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 1."
     [   0,   1,   0,   0,   0,   0,   0,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 1."
     [   1,   0,   0,   0,   0,   0,   0,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 1."
     [   1,   1,   0,   0,   0,   0,   0,   0] -> [    0,    1,    1,    1,  0,  1] ; " Test gate 1."
     [   0,   0,   0,   0,   0,   0,   0,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 2."
     [   0,   0,   0,   1,   0,   0,   0,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 2."
     [   0,   0,   1,   0,   0,   0,   0,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 2."
     [   0,   0,   1,   1,   0,   0,   0,   0] -> [    1,    0,    1,    1,  0,  1] ; " Test gate 2."
     [   0,   0,   0,   0,   0,   0,   0,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 3."
     [   0,   0,   0,   0,   0,   1,   0,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 3."
     [   0,   0,   0,   0,   1,   0,   0,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 3."
     [   0,   0,   0,   0,   1,   1,   0,   0] -> [    1,    1,    0,    1,  0,  1] ; " Test gate 3."
     [   0,   0,   0,   0,   0,   0,   0,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 4."
     [   0,   0,   0,   0,   0,   0,   0,   1] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 4."
     [   0,   0,   0,   0,   0,   0,   1,   0] -> [    1,    1,    1,    1,  0,  1] ; " Test gate 4."
     [   0,   0,   0,   0,   0,   0,   1,   1] -> [    1,    1,    1,    0,  0,  1] ; " Test gate 4."

     " Untested gate inputs are '1' making the untested gate outputs '0'.
     [   1,   1,   1,   1,   1,   1,   1,   1] -> [    0,    0,    0,    0,  0,  1] ; " Test gate 1."
     [   1,   0,   1,   1,   1,   1,   1,   1] -> [    1,    0,    0,    0,  0,  1] ; " Test gate 1."
     [   0,   1,   1,   1,   1,   1,   1,   1] -> [    1,    0,    0,    0,  0,  1] ; " Test gate 1."
     [   0,   0,   1,   1,   1,   1,   1,   1] -> [    1,    0,    0,    0,  0,  1] ; " Test gate 1."
     [   1,   1,   1,   1,   1,   1,   1,   1] -> [    0,    0,    0,    0,  0,  1] ; " Test gate 2."
     [   1,   1,   1,   0,   1,   1,   1,   1] -> [    0,    1,    0,    0,  0,  1] ; " Test gate 2."
     [   1,   1,   0,   1,   1,   1,   1,   1] -> [    0,    1,    0,    0,  0,  1] ; " Test gate 2."
     [   1,   1,   0,   0,   1,   1,   1,   1] -> [    0,    1,    0,    0,  0,  1] ; " Test gate 2."
     [   1,   1,   1,   1,   1,   1,   1,   1] -> [    0,    0,    0,    0,  0,  1] ; " Test gate 3."
     [   1,   1,   1,   1,   1,   0,   1,   1] -> [    0,    0,    1,    0,  0,  1] ; " Test gate 3."
     [   1,   1,   1,   1,   0,   1,   1,   1] -> [    0,    0,    1,    0,  0,  1] ; " Test gate 3."
     [   1,   1,   1,   1,   0,   0,   1,   1] -> [    0,    0,    1,    0,  0,  1] ; " Test gate 3."
     [   1,   1,   1,   1,   1,   1,   1,   1] -> [    0,    0,    0,    0,  0,  1] ; " Test gate 4."
     [   1,   1,   1,   1,   1,   1,   1,   0] -> [    0,    0,    0,    1,  0,  1] ; " Test gate 4."
     [   1,   1,   1,   1,   1,   1,   0,   1] -> [    0,    0,    0,    1,  0,  1] ; " Test gate 4."
     [   1,   1,   1,   1,   1,   1,   0,   0] -> [    0,    0,    0,    1,  0,  1] ; " Test gate 4."

     " Testing complete."

end SN7400 ;

Note that for eight (8) logical input signals to a device, an exhaustive test would require a total of 256 test vectors (2^8).

I have reduced the number of test vectors to something more sensible based upon the fact that the 7400 device has four identical 2-input NAND gates. I am testing each gate with the 'untested' gate inputs at both 'all 0' and 'all 1'.
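The reduction above is mechanical enough that the vector set could even be generated rather than hand-typed. A sketch (Python, illustrative) that reproduces the 32 vectors for four independent 2-input gates:

```python
from itertools import product

# Generate the reduced vector set described above: walk each of the four
# NAND gates through its full 2-input truth table while the other gates
# are held first at all-0 and then at all-1. Illustrative sketch only.

def reduced_vectors(n_gates=4):
    vectors = []
    for fill in (0, 1):
        for g in range(n_gates):
            for a, b in product((0, 1), repeat=2):
                v = [fill] * (2 * n_gates)
                v[2 * g], v[2 * g + 1] = a, b
                vectors.append(tuple(v))
    return vectors

print(len(reduced_vectors()))  # 32 vectors instead of the exhaustive 256
```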

Of course, with a 7400 I have picked one of the simplest combinatorial packages, but we have to start somewhere :)!

Perhaps testing that the GND pin is always logical '0' and the VCC pin is always logical '1' is a bit overboard?

Using a 'logic analyser' means that suspect output voltages from the device may go unnoticed (as Al has already mentioned).

This solution is not a 'magic bullet' that will solve all problems, but it has its merits.

Extending this test methodology to cover internal logic states may prove too complex for more sophisticated devices.

Dave
 
This seems like a great idea.

I would say that it works for any chip, including latches and tristate and open collector outputs, but with the caveat already mentioned - that the inputs have to be exercised to fully test the chip.

Thus for each chip tested there would be four possible results: everything tested good; at least something tested bad; at least some part not fully tested; and not tested at all due to incorrect input signal levels. By testing each chip one at a time, any chip fault - or for that matter any output pulled to an incorrect state by something else - should be possible to find, even if many chips test as "not fully tested".

I haven't read up on that particular logic analyzer but in general it seems like a good idea to be able to detect a correct zero, a correct one, and anything in between as three distinct states for each signal.

If we go off on a tangent into more advanced chips: you'd need to write some really complicated code to, for example, test a microprocessor this way, but it would be doable. More importantly, it would be doable to test simpler I/O chips, and for that matter to determine that correct inputs are sent to all digital inputs on a chip that produces an analogue output (like, say, the video chip on many 8-bit era home computers, where a logic analyzer can't determine if a composite video output is correct, but can tell if the chip is reasonably initialized).
 
I would say that it works for any chip, including latches and tristate and open collector outputs, but with the caveat already mentioned - that the inputs have to be exercised to fully test the chip.

I'm not sure the caveat is so important: do we really need to "fully test the chip"?

If we're trying to troubleshoot a fault in-circuit, then we don't care about parts of the truth table that the circuit never uses. It is probably enough most of the time to discover a fault in a chip's ability to handle the subset of inputs that it actually sees, particularly if you are making an observation while the fault is taking place.

A passive tester should be able to do this fairly successfully for chips that don't have state (e.g. basic logic gates); for chips with limited state (e.g. a flip-flop), the tester might be able to follow along correctly after a reset or clear signal is asserted; for really complicated or custom stateful ICs, all bets are off.
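The "follow along after reset" idea can be sketched for a single D flip-flop: model the expected state, but only start comparing once a clear has been observed. The signal names and (clk, d, clr_n, q) sample format are my assumptions:

```python
# Sketch of passive checking for a chip with limited state: a positive-
# edge D flip-flop with an active-low clear. Prediction only starts once
# /CLR has been seen low, so the model is known to be in sync.

def check_dff(samples):
    """samples: list of (clk, d, clr_n, q) tuples; returns mismatch indices."""
    q_pred, synced, prev_clk = None, False, 0
    mismatches = []
    for i, (clk, d, clr_n, q) in enumerate(samples):
        if clr_n == 0:
            q_pred, synced = 0, True       # clear forces Q low and syncs us
        elif synced and prev_clk == 0 and clk == 1:
            q_pred = d                     # rising edge latches D
        if synced and q != q_pred:
            mismatches.append(i)
        prev_clk = clk
    return mismatches

good = [(0, 1, 0, 0), (0, 1, 1, 0), (1, 1, 1, 1), (0, 0, 1, 1), (1, 0, 1, 0)]
print(check_dff(good))  # [] - the captured Q tracks the model
```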
 
I notice that this particular logic analyser project has user-includable analysis procedures in the form of DLLs.

Dave
 
If we're trying to troubleshoot a fault in-circuit, then we don't care about parts of the truth table that the circuit never uses. It is probably enough most of the time to discover a fault in a chip's ability to handle the subset of inputs that it actually sees, particularly if you are making an observation while the fault is taking place.
Thanks for all your inputs. A lot has already been pondered and said and I agree with most of it, it's not about finding a perfectly working IC but rather a faulty one quickly without having to scope a sea of TTL logic and/or desolder ICs.

I notice that this particular logic analyser project has user-includable analysis procedures in the form of DLLs.

Dave

Correct, one of the reasons for picking the project for this idea is that it is open source and comes with its own expandable software, so perhaps the developer could be involved too?

I thought about using the format described for this tool (Xgpro logic) that helps create your own test descriptions for the XGecu series of programmers, but anything goes really.

The tricky part is then writing code that identifies and parses the logic analyzer capture and compares against the selected test vectors.
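That comparison step might look something like this sketch, where the vector and capture formats are simplified assumptions:

```python
# Sketch of comparing a parsed capture against selected test vectors:
# classify each vector as PASS, FAIL or NOT SEEN. The (inputs, outputs)
# tuple formats are illustrative assumptions.

def compare(vectors, capture):
    results = {}
    for vin, vout in vectors:
        observed = [out for inp, out in capture if inp == vin]
        if not observed:
            results[vin] = "NOT SEEN"
        elif all(out == vout for out in observed):
            results[vin] = "PASS"
        else:
            results[vin] = "FAIL"
    return results

vectors = [((0, 0), (1,)), ((1, 1), (0,))]   # two NAND-gate vectors
capture = [((0, 0), (1,)), ((0, 0), (1,))]   # (1, 1) never occurred in-circuit
print(compare(vectors, capture))  # {(0, 0): 'PASS', (1, 1): 'NOT SEEN'}
```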
 
I think one challenge for this concept might be timing-related.

For example, per this TI datasheet for a tristate octal buffer, a healthy 74LS240 needs to be able to transition an output from low to high in 14ns, while a healthy 74S240 needs to do it in 7ns (assuming the actual circuit is electrically similar to the test circuits on PDF pages 7 and 9). The circuit is using the part within its spec if it changes the inputs at a similar frequency, which for the S240 is around 140 MHz, although that would be pushing it and the wise engineer probably wouldn't operate so close to the limits. Even so, some designers were buckaroos: here's one of my favourite quotes from the hardware description document for the Whitechapel MG-1 workstation:

Using the [74]F109 means that there is only a slight violation of worst case conditions at 60MHz!
(It's being used as a clock divider driven by the video dot clock.)

In an out-of-circuit tester, you can set the inputs and wait as long as you wish before testing the outputs - this can make it difficult to spot marginal ICs that take too long to compute their function, but at least it's easy to set up, and an obvious failure is obvious. In-circuit, you will need to make your judgement on the circuit's schedule, and if the circuit is switching fast, it may be difficult to distinguish correct and incorrect operation.

One approach to consider would leave the judgements up to the user. Considering the 74S240, you might have a running table of observed transition times:

Code:
  74x240 octal 3-state buffer
Buffer line 1 (1A1 --1G'--> 1Y1)
 Event     Min    Median    Max
--------------------------------
  L->H     3ns      6ns     7ns
  H->L     4ns      6ns    12ns
  Z->L      ?  not seen yet  ?
  Z->H      ?  not seen yet  ?
  L->Z      ?  not seen yet  ?
  H->Z      ?  not seen yet  ?

which essentially replicates datasheet table 6.7 or 6.8. In this imaginary example, the buffer is never going high-Z: it's always turned on, so some of the transitions in the datasheet haven't been measured yet (and never will be). But the observant may catch that the largest recorded high->low transition time exceeds the 7ns maximum transition time for the 'S240, and that could be trouble. Whether the technician should care about that is up to them to decide! Although you could provide hints in the UI if something seems suspicious...
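Building that running table from a timestamped capture is mostly bookkeeping. A sketch for one buffer line, with timestamps in nanoseconds; the (t, input, output) capture format is an assumption:

```python
from statistics import median

# Sketch of the running transition-time table: pair each input edge with
# the following output edge and collect min/median/max response times.
# Timestamps are in ns; the (t, a, y) capture format is an assumption.

def transition_stats(samples):
    """samples: list of (t_ns, input_level, output_level), sorted by time."""
    stats = {"L->H": [], "H->L": []}
    t_in, prev_a, prev_y = None, None, None
    for t, a, y in samples:
        if prev_a is not None and a != prev_a:
            t_in = t                        # input edge: start the clock
        if prev_y is not None and y != prev_y and t_in is not None:
            stats["L->H" if y == 1 else "H->L"].append(t - t_in)
            t_in = None
        prev_a, prev_y = a, y
    return {k: (min(v), median(v), max(v)) if v else None
            for k, v in stats.items()}

trace = [(0, 0, 0), (10, 1, 0), (16, 1, 1), (30, 0, 1), (37, 0, 0)]
print(transition_stats(trace))
# {'L->H': (6, 6, 6), 'H->L': (7, 7, 7)}
```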
 
I'm not sure the caveat is so important: do we really need to "fully test the chip"?

If we're trying to troubleshoot a fault in-circuit, then we don't care about parts of the truth table that the circuit never uses. It is probably enough most of the time to discover a fault in a chip's ability to handle the subset of inputs that it actually sees, particularly if you are making an observation while the fault is taking place.

A passive tester should be able to do this fairly successfully for chips that don't have state (e.g. basic logic gates); for chips with limited state (e.g. a flip-flop), the tester might be able to follow along correctly after a reset or clear signal is asserted; for really complicated or custom stateful ICs, all bets are off.
The caveat is that if a chip isn't fully tested, and you eventually find some other chip that tests faulty and replace that, and the device still doesn't work correctly, you will have to go back and retest all the not-fully-tested chips. Sure, the exact choice of the word "caveat" can be discussed, but my point is that it's important to remember which chips weren't fully tested when further testing needs to be done after anything has changed.

This gives me a feature creep idea: how about a built-in simple "PCB layout" view where you can transfer the actual PCB chip numbering to an image in the analyzer software, and where the analyzer puts different colours on the chip being tested depending on the results? The software could also remember the results for different "sessions", i.e. pre-repair, post first repair attempt, and so on. The colouring could fade to white in steps depending on how many "sessions" ago each chip was tested, as a reminder that it might be a good idea to retest even chips that tested good earlier in the troubleshooting/repair process, if all the "should absolutely be retested" chips don't show any detected faults.

Things like programmable chips with unknown equations and ASICs would of course be impossible to test. But I would say that even the most complicated chips, as long as they are well documented / well known / well understood, are possible to test in circuit. For partially known chips the tester can simply indicate that the chip is used in a way the tester doesn't fully know about. A prime example would be undocumented instructions or weird usage of bus signals on a microprocessor.

============

Re timing: I fully agree that that could be an issue. My suggestion is to at first extend the software for this open source project to do as much as the hardware is capable of, and later look into more advanced hardware.

AFAIK, for certain things like DRAM signals you need more than a 100 MHz sample rate, even for something that runs at a clock speed of a few MHz.

I haven't fully looked into the open source project, but if it can take a clock signal and sample the input states on either/both edges, a possible way to do more timing-related testing would be to add a circuit that inserts a programmable delay into the clock signal, allowing the signals to be read at different points in time. In particular, add two such circuits controlling two separate groups of analyzer inputs, so you can read the inputs of a chip with a sliding window/delay from the clock in the device under test, and add an additional delay to check that the outputs reach the correct state within the specs of the chip.

Ideally the logic analyzer would have, say, a 1 GHz sample rate or even more. Even the 400 MHz sample rate of the open source project - one sample every 2.5 ns - is too slow, at least in some cases, for the 74S240 example you mention.

On the other hand, it would likely just be a few places in a circuit that have that narrow timing, and those can be identified and analyzed with an oscilloscope in addition to using the logic analyzer.

==============

Bonus feature creep tangent: if the device under test already exists as code for use in programmable logic, like the MISTer, you might be able to convert that code into something that an expanded version of the analyzer software can use to test things on a larger scale, i.e. put probes on a few select places and then determine whether a group of chips seems to work correctly or not.
 