
Differences between IBM CGA cards

I presume that the resistors that connect the R,G,B & I signals give the composite video its direct color, which is what makes solid colors appear pretty close to the real deal. But the CGA had a portion of its logic, including green and magenta signals, tied to flip flops and a demultiplexer, which I would guess is where the artifact color comes into play. So does the PCjr's gate array replicate that?

I also count four shades of gray on that screen in addition to black, so how is that explained?

Thumbs up for an interesting (and useful!) thread, folks.



Check out this video at approximately 07:47 - http://www.youtube.com/watch?v=s5a6jKUkjlA
The 16 color bars on the PCjr boot screen show the same effect as the 1501981, so that looks right.



Not my photo (credit goes to RJBJR from this forum), but here's what the output looks like on a 5155 Portable:



Looks just like the older CGA revision (1501486), with the "greys" being the same. Interestingly, the PC Portable was released only one month before the PCjr, but there could have been minor revisions of both.
 
I presume that the resistors that connect the R,G,B & I signals give the composite video its direct color, which is what makes solid colors appear pretty close to the real deal.

Not quite - these signals generate a monochrome signal from the RGBI image. On a 1501486 it's just the I signal, on a 1501981 it's a signal corresponding to what you'd see if you took a black and white photo of an RGB monitor displaying the same image.
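
In case it helps to see the two schemes side by side, here's a rough Python sketch. The real weighting is set by the resistor values on each card, which I'm not reproducing here - the NTSC-style luma weights below are purely illustrative.

Code:
def mono_level_1501486(r, g, b, i):
    """Older card: the composite 'brightness' contribution comes from the I bit alone."""
    return 1.0 if i else 0.0

def mono_level_1501981(r, g, b, i):
    """Newer card: brightness is a weighted mix of R, G and B (plus intensity),
    like a black-and-white photo of the RGB image.  The NTSC-style weights are
    an assumption for illustration, not the card's actual resistor ratios."""
    base = 0.299 * r + 0.587 * g + 0.114 * b
    return min(1.0, base + (0.33 if i else 0.0))

# Bright white (R=G=B=I=1) vs. dark blue (B=1 only):
print(mono_level_1501486(1, 1, 1, 1), mono_level_1501486(0, 0, 1, 0))  # 1.0 0.0
print(mono_level_1501981(1, 1, 1, 1), mono_level_1501981(0, 0, 1, 0))  # 1.0 0.114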

But the CGA had a portion of its logic, including green and magenta signals, tied to flip flops and a demultiplexer, which I would guess is where the artifact color comes into play.

Actually the flip flops and multiplexer generate the direct colour (what I call chroma colour) - i.e. the 6 different hues you see if you draw solid blocks of colour in text mode. They work by generating 3.58MHz signals at various phases. Composite monitors (and NTSC broadcasts) carry colour pictures in a single signal by having the phase of the 3.58MHz component of that signal (relative to the phase of the color burst signal) correspond to the hue, and the amplitude of that component correspond to the saturation. So by adding one of these 6 signals to the composite output, we get one of the 6 different hues (the multiplexer also has two DC inputs - logic 0 and logic 1, for black and white respectively).
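
To make the phase/amplitude idea concrete, here's a minimal Python sketch of the encoding principle. The phase assigned to each hue below is an illustrative placeholder, not a value measured from the card.

Code:
import math

SUBCARRIER_HZ = 14_318_181 / 4   # ~3.579545MHz colour subcarrier

# Rough model of the chroma multiplexer's eight inputs: two DC levels (black,
# white) plus six 3.58MHz square waves at different phases (values assumed).
PHASE_DEG = {"yellow": 0, "green": 45, "cyan": 90, "blue": 180, "magenta": 225, "red": 270}

def chroma_output(colour, t):
    """Multiplexer output at time t (seconds) for the selected colour."""
    if colour == "black":
        return 0.0
    if colour == "white":
        return 1.0
    phase = math.radians(PHASE_DEG[colour])
    # 50% duty cycle square wave: its phase relative to the burst sets the hue,
    # and its amplitude (fixed here) sets the saturation.
    return 1.0 if math.sin(2 * math.pi * SUBCARRIER_HZ * t + phase) >= 0 else 0.0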

It's no coincidence that 3.58MHz is 1/4 of the PC's crystal oscillator frequency (14.318MHz) and 3/4 of the CPU clock frequency. The period of that signal also corresponds to the distance the beam moves horizontally when drawing 4 "hi-res" pixels or 2 "low-res" pixels. That means that we have another way to generate a 3.58MHz signal - by drawing a pattern on the screen that repeats every 4 "hi-res" pixels (or 2 "low-res" pixels) horizontally. And in fact that is exactly what the artifact colours are. So these colours continue to work even if the multiplexer isn't outputting any of the 3.58MHz chroma signals in the visible part of the signal.

Another way to think about artifact colours is that each pixel has a colour which depends on its horizontal position in that 3.58MHz cycle. So in hi-res mode, pixels with horizontal coordinates 0, 4, 8, 12... are yellow, 1, 5, 9, 13... are red, 2, 6, 10, 14... are blue and 3, 7, 11, 15... are cyan. These pixels are wider than 1/640th of the screen on a composite monitor - they're more like 1/160th of the screen, so you can make solid colours with different combinations of these 4 "primary" colours.
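
Here's that relationship as a tiny Python sketch (the exact hue that lands on each position follows from the burst phase; the mapping below is just the one described above):

Code:
CRYSTAL_HZ = 14_318_181
SUBCARRIER_HZ = CRYSTAL_HZ / 4   # ~3.58MHz: one cycle spans 4 hi-res pixels
CPU_HZ = CRYSTAL_HZ / 3          # ~4.77MHz: the subcarrier is 3/4 of this

# Hue contributed by a lit 640-mode pixel at each position within the 4-pixel cycle.
ARTIFACT_HUE = ["yellow", "red", "blue", "cyan"]

def hi_res_artifact_hue(x):
    """Approximate hue of a single lit 640-mode pixel at column x."""
    return ARTIFACT_HUE[x % 4]

# Lighting the columns where x % 4 is 0 or 1 makes a pattern that repeats every
# 4 pixels - a 3.58MHz signal the monitor decodes as a solid colour (roughly an
# orange), even with no chroma signal multiplexed into the output.
print([hi_res_artifact_hue(x) for x in (0, 1, 5, 10, 15)])  # yellow, red, red, blue, cyan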

So does the PCjr's gate array replicate that?

The gate array takes the data from video memory or character ROM at the address generated by the CRTC (and the other signals from the CRTC) and generates the pixel colour data (as RGBI and chroma for composite) and output sync signals.

I also count four shades of gray on that screen in addition to black, so how is that explained?

On a monochrome composite monitor, the 3.58MHz signals won't be interpreted as colours, so you'll see them "how they really are" - as patterns of vertical lines with different horizontal offsets, repeating every 4 "hi-res" horizontal pixels. If you look closely at RJBJR's picture you'll see them. Because this is a 1501486-type CGA card, the output consists of this chroma signal from the multiplexer plus the intensity signal. So that's how we get our 6 intensities: three possible chroma outputs (off, oscillating at 50% duty cycle, on) times 2 possible luma ("I" bit) outputs (off, on).
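
As a quick sanity check, here's a tiny Python enumeration of those six levels. The numeric values are invented purely for illustration; only the count matters.

Code:
# Chroma from the multiplexer: off, a 50% duty 3.58MHz square wave (which a mono
# monitor roughly averages out to a mid level), or fully on; the I bit then adds
# a second, smaller step.  The level values are assumptions, not measurements.
CHROMA_LEVELS = {"off": 0.0, "50% duty": 0.5, "on": 1.0}
I_LEVELS = {"I=0": 0.0, "I=1": 0.4}

shades = sorted({round(c + i, 2) for c in CHROMA_LEVELS.values() for i in I_LEVELS.values()})
print(len(shades), shades)   # 6 [0.0, 0.4, 0.5, 0.9, 1.0, 1.4]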

I also know that this image was made with the color burst signal on - when it's disabled, it's not just the burst just after the horizontal sync pulse that's turned off: the CGA card also disables the 6 chroma signals going into the chroma multiplexer, via inputs on the flip-flops that force them all high. So if color burst were disabled you'd only see 4 intensities - the 12 bars in the middle would be the same as the 2 bars on the right.
 
Yeah, I'm kind of surprised that they didn't - some fantastic lighting effects could have been done by changing the palette register to simulate changing the colour of the incident light.

Many games changed both the background index and the palette intensity for some special effects. Defender of the Crown uses the "dim" red/brown/green palette for every scene except when you rescue the maiden; then, to simulate a more harshly-lit room, it switches to the high-intensity version of said palette. I've seen similar changes whenever there is a "shot" or "bang" in a game. For background color changes, there are many examples; probably the easiest to view is King's Quest, which sets the background to blue so that it has a pure red, green, and blue to work with in mixing colors (to poor effect, unfortunately).

This is all with an RGB monitor. I have never seen a CGA game that intentionally tried palette tricks to change the composite colors. In fact, I don't think I've seen a single CGA game that attempted anything other than the "default" composite colors.
 
Okay, I've put a little program at http://www.reenigne.org/misc/chart.com that displays the chart as shown in my icon and cycles through all the graphics modes (and foreground palettes for 320x200 modes) when you press a key. Pressing Esc should take you back to DOS. I haven't actually tried it in DOS yet, so it might be broken if the timer interrupt takes too long, and/or exiting to DOS might be broken.

Works beautifully on my stock 5160 connected to a JVC broadcast monitor. There is some flicker on the left-hand side of the screen; if you have the source anywhere, I'd be curious to see if I can eliminate it.
 
I was hoping you'd find this thread, Trixter - I knew you'd like that chart program!

I have never seen a CGA game that intentionally tried palette tricks to change the composite colors. In fact, I don't think I've seen a single CGA game that attempted anything other than the "default" composite colors.

Surely that can't just be because the technical reference manual never mentioned you could do it? Lots of games used tricks that went above and beyond that, and modifying the palette register in that mode is a pretty natural experiment to do. Perhaps even back then it was known that the results wouldn't necessarily be consistent between monitors and graphics cards unless you used grey or white.

Works beautifully on my stock 5160 connected to a JVC broadcast monitor. There is some flicker on the left-hand side of the screen; if you have the source anywhere, I'd be curious to see if I can eliminate it.

It's here. The flickering happens because the palette change happens at a different place depending on exactly when the "waitForDisplayDisable" loop finishes. In 640x200 mode this doesn't matter because the overscan area is always black but in 320x200 mode, the palette register controls the overscan colour as well as a colour in the visible area so the palette change is visible. I'm not sure how to make it stable without making the whole thing completely CPU-timed (and therefore very non-portable). I suppose you could do it with a timer interrupt like California Games does, but then you'd have some fiddly work to make the switch happen at a consistent place between runs of the program. Let me know if you manage it.

I was looking at the schematics again earlier and noticed another difference between the ibm_techref_v202_3.pdf and OA%20-%20IBM%20Color%20Graphics%20Monitor%20Adapter%20(CGA).pdf schematics - as well as correcting a few mistakes in the older schematic and the composite differences I mentioned before, there is also a change to how the horizontal sync pulse is sequenced (the logic around U64). I think what it's doing is asserting the horizontal sync pulse twice on each scanline, once for the normal sync pulse itself and then again shortly afterwards to reduce the voltage during the color burst signal (which might otherwise be too high because of the added luma). Though this seems like it would halve the length of the vsync pulse, so maybe I'm wrong. Too bad they didn't fix the broken color burst in 80-column mode while they were tinkering with that bit! Though I guess 80-column mode on composite wasn't really supported anyway because the text is so unreadable.
 
I was hoping you'd find this thread, Trixter - I knew you'd like that chart program!

You know me well ;-) Next time, feel free to email me -- I know I'm not great at returning email, but I do read everything I get...

Perhaps even back then it was known that the results wouldn't necessarily be consistent between monitors and graphics cards unless you used grey or white.

I concur, although my personal preference is to do so anyway, because even though the colors might be inconsistent across customers, there are still more of them.

It's here. The flickering happens because the palette change happens at a different place depending on exactly when the "waitForDisplayDisable" loop finishes. In 640x200 mode this doesn't matter because the overscan area is always black but in 320x200 mode, the palette register controls the overscan colour as well as a colour in the visible area so the palette change is visible. I'm not sure how to make it stable without making the whole thing completely CPU-timed (and therefore very non-portable). I suppose you could do it with a timer interrupt like California Games does, but then you'd have some fiddly work to make the switch happen at a consistent place between runs of the program. Let me know if you manage it.

Scanning the source, I think I can manage it by rearranging some things and precaching values from memory during the non-wait periods. Specifically looking at this:

Code:
lineLoop2:
  waitForDisplayEnable
  waitForDisplayDisable
  loop lineLoop2
  inc bl
  mov al,bl
  dec dx
  out dx,al
  inc dx

The "inc bl" can be moved before lineLoop2, which will probably make the difference. I'll poke around tonight (pun intended).

No need for the interrupt timer. I used to think that California Games used interrupts as the only way to get the palette switch happening at the right time. Since we had those conversations a few years ago, I've changed my mind: I think they did it more so that they could just "set and forget" the display mode and gain a lot of CPU time back to run the game mechanics. The real cleverness of that technique was not how they implemented it, but how they intentionally switched between palettes that had a shared color (red) and arranged the graphics on scanline boundaries using that color so that the switch was undetectable. In fact, they hid the palette switch inside a few scanlines, so that if the switch happens messily, it is still hidden (ie. they gave themselves a buffer). That's the true genius.

Using all three (hi) palettes and hiding between the shared colors (red and white) during the switch, you get red, cyan, magenta, white, green, and yellow onscreen at once (just not mixed in the same scanline), plus the background color of your choice (1 per scanline). I used to think that I would write a JPEG viewer using this technique, but now I'm much more interested in practical combinations of colors in composite mode!

BTW when I run the chart program I get a color screen, then a B&W screen, then a different color screen, then a B&W screen, etc. Is this intentional? Are you toggling the color burst on/off on every keypress?

Too bad they didn't fix the broken color burst in 80-column mode while they were tinkering with that bit! Though I guess 80-column mode on composite wasn't really supported anyway because the text is so unreadable.

Sounds reasonable. Nobody in 1981 was outputting 80-col text to a TV (monochrome, sure, but not a TV).

I still don't fully understand NTSC color generation, but I'm glad you do!
 
Next time, feel free to email me

I would have done if you hadn't commented on the post.

The "inc bl" can be moved before lineLoop2, which will probably make the difference. I'll poke around tonight (pun intended).

Oh, good thinking - that should move the flicker left by 24 hdots, which might be enough. If not, changing the "mov al,bl" to "xchg ax,bx" (and swapping it back after the out) should give another 12 hdots, and moving a waitForDisplayEnable/waitForDisplayDisable pair after the "loop lineLoop2" (reducing cx to 11) should give another 24. Move it too far left though and it might reappear on the right on faster machines!

In fact, they hid the palette switch inside a few scanlines, so that if the switch happens messily, it is still hidden (ie. they gave themselves a buffer). That's the true genius.

Yeah, it's too bad they screwed up the implementation and made the compatibility test too sensitive so it was disabled on a lot of machines on which it probably would have worked just fine.

BTW when I run the chart program I get a color screen, then a B&W screen, then a different color screen, then a B&W screen, etc. Is this intentional? Are you toggling the color burst on/off on every keypress?

Yes, it cycles through all the possible modes with color burst both enabled and disabled. Actually the order is a bit different in the .asm file than in the .com file - I changed the .asm file to match the same order as my colour chart image (and removed the redundant palette combinations with black and white 320x200 mode) but didn't re-assemble with those changes.
 
I read in one of the IBM Tech References that the dot clock in 160 pixel modes is 3.58MHz, in 320 modes it is 7.16MHz and in 640 modes it is 14.318MHz. I would suggest that correlates with your earlier statement of drawing two and four pixels respectively in the time the beam expects to draw one. I believe the idea behind artifact color is that the beam does not have the time to turn off in between pixels, so you get solid color across dot - no dot - dot. You also get faint lines too, sometimes visible, sometimes not.

Maybe I missed it, but how do you explain the arbitrary selection of the color depending on the horizontal position of the dot? You say that the pattern is yellow, red, blue and cyan. It would seem then that the CGA hardware is cycling through the color wheel as the beam travels across the screen, since that is the proper sequence with yellow being closest to the burst.

I understand that shifting the phase relative to the color burst signal produces the hue, and the amplitude of the signal the saturation. A true color burst signal is a sine wave, and a CGA color burst is a square wave, and I assume that the square wave has a 50% duty cycle. What happens if you change the duty cycle (modulating the frequency?) I would think that the CGA card would output chroma as on, off and at 50% amplitude.
 
Oh, good thinking - that should move the flicker left by 24 hdots, which might be enough. If not, changing the "mov al,bl" to "xchg ax,bx" (and swapping it back after the out) should give another 12 hdots, and moving a waitForDisplayEnable/waitForDisplayDisable pair after the "loop lineLoop2" (reducing cx to 11) should give another 24.

Now you're thinking like a cycle eater :) xchg is slower than mov, so I didn't do that, but the other two changes worked. Here is a working loop that displays no artifacts onscreen:

Code:
  mov cx,11               ; one wait pair fewer inside the loop (was 12)...
  inc bl                  ; ...and the palette increment hoisted out of the timing-critical path
lineLoop2:
  waitForDisplayEnable    ; macros poll the CGA status register (dx, presumably 3DAh)
  waitForDisplayDisable
  loop lineLoop2
  waitForDisplayEnable    ; the removed wait pair, moved to after the loop
  waitForDisplayDisable
  mov al,bl               ; new palette value
  dec dx                  ; down to the colour select register (3D9h, assuming dx held 3DAh)
  out dx,al
  inc dx                  ; back to the status register
  pop cx
  loop rowLoop2
 
I believe the idea behind artifact color is that the beam does not have the time to turn off in between pixels, so you get solid color across dot - no dot - dot.

Kind of - but it's not an intrinsic property of the CRT hardware that it can't switch the beam on and off that fast. After all, the 5153 monitor has a very similar tube to those of contemporary composite monitors, and it has no trouble going from black to white or vice-versa in a time of 1/(14.318MHz). The band-limiting is actually done as part of the composite Y/C separation, to prevent the 3.58MHz signals corresponding to colours from being displayed as light and dark patterns on the screen - it's not a property of the beam itself.

You also get faint lines too, sometimes visible, sometimes not.

That's because the Y/C separator isn't perfect and (depending on the monitor) lets some 3.58MHz signals through.

Maybe I missed it, but how do you explain the arbitrary selection of the color depending on the horizontal position of the dot? You say that the pattern is yellow, red, blue and cyan. It would seem then that the CGA hardware is cycling through the color wheel as the beam travels across the screen, since that is the proper sequence with yellow being closest to the burst.

Right, though it's really the colour-decoding hardware in the monitor that is cycling through the colour wheel as the beam travels across the screen. The CGA card outputs a yellow-hued color burst signal shortly after the horizontal sync pulse which the monitor uses to determine which phases correspond to which hues (so, same phase as burst = yellow, 180 degrees out of phase = blue, 90 degrees advanced = cyan and so on).

One possible way to do the decoding (which is actually the method that I use to do the decoding in software) is to have two 3.58MHz sinewave signals in the monitor (I for in-phase and Q for quadrature) which are 90 degrees out of phase. These are multiplied with the composite signal and the results low-pass filtered to get rid of the 3.58MHz signals. There is a third signal Y (luma) which is just the low-pass-filtered composite signal. Then the red, green and blue signals fed to the electron guns are just different linear combinations of Y, I and Q. Y is "brightness", I is "orange-blueness" and Q is "purple-greenness" (the Wikipedia article on YIQ has some diagrams which might make that easier to understand).

Now, if you have two sinewaves 90 degrees out of phase, they define a circular trajectory in 2D space (colour space in this case) and that is the "cycling through the color wheel" that happens at 3.58 million cycles per second.
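
For the curious, here's a stripped-down numpy sketch of that scheme - not my actual decoder, and the filter, scaling and sign conventions are only rough - but the structure (multiply by I and Q references, low-pass, recombine with the standard YIQ-to-RGB matrix) is the one described above.

Code:
import numpy as np

FSC = 3_579_545.0        # colour subcarrier frequency
FS = 4 * FSC             # sample the composite signal at 4x the subcarrier

def lowpass(x, taps=33, cutoff=1.3e6):
    """Crude windowed-sinc low-pass filter - good enough for a sketch."""
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * cutoff / FS * n) * np.hamming(taps)
    return np.convolve(x, h / h.sum(), mode="same")

def decode_line(composite, burst_phase=0.0):
    """Turn one line of composite samples into RGB rows.  burst_phase is the
    reference phase recovered from the colour burst (assumed known here)."""
    t = np.arange(len(composite)) / FS
    i_ref = np.cos(2 * np.pi * FSC * t + burst_phase)
    q_ref = np.sin(2 * np.pi * FSC * t + burst_phase)
    y = lowpass(composite)                  # luma: just low-pass the signal
    i = lowpass(composite * i_ref) * 2      # chroma demodulation against I reference
    q = lowpass(composite * q_ref) * 2      # ...and against Q, 90 degrees away
    r = y + 0.956 * i + 0.621 * q           # standard FCC YIQ-to-RGB combination
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return np.clip(np.stack([r, g, b]), 0.0, 1.0)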

A true color burst signal is a sine wave, and a CGA color burst is a square wave,

That doesn't matter. A square wave is just a sine wave with some higher frequency components added on (which you can find by doing a Fourier decomposition of the wave). The higher frequencies will be ignored by the monitor so a square wave color burst will work just as well as a sine wave.
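
You can check this numerically - a small numpy sketch showing that a 3.58MHz, 50% duty square wave has its energy at the subcarrier frequency plus odd harmonics, which the monitor's chroma filtering ignores:

Code:
import numpy as np

FSC = 3_579_545.0                           # colour subcarrier
SAMPLES_PER_CYCLE = 256
fs = SAMPLES_PER_CYCLE * FSC
n = np.arange(SAMPLES_PER_CYCLE * 16)       # 16 full subcarrier cycles
square = ((n % SAMPLES_PER_CYCLE) < SAMPLES_PER_CYCLE // 2).astype(float)

spectrum = np.abs(np.fft.rfft(square - square.mean()))
freqs = np.fft.rfftfreq(len(square), 1 / fs)
top = np.argsort(spectrum)[-3:][::-1]       # three strongest components
print(np.round(freqs[top] / 1e6, 2))        # [ 3.58 10.74 17.9 ] MHz - odd harmonics only
print(np.round(spectrum[top] / spectrum[top[0]], 2))   # [1. 0.33 0.2] - the 1/k rolloff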

and I assume that the square wave has a 50% duty cycle.

Right. All the 3.58MHz signals involved here are 50% duty cycle except for artifact colours produced by lighting 1 out of 4 or 3 out of 4 pixels in 640x200 mode (which are rectangle waves of 25% and 75% duty cycle respectively).

What happens if you change the duty cycle (modulating the frequency?)

Frequency is not the same thing as duty cycle - frequency is how many times per second the wave goes through a complete cycle (one high and one low), duty cycle is the proportion of the cycle the wave spends "high".

For the color burst signal that happens after the horizontal sync pulse, the duty cycle doesn't matter - as long as the frequency and phase are correct and the amplitude and DC offset are within range, the burst will do its job. If the frequency is too far off it won't be recognized as a color burst.

For the signal corresponding to the visible picture, changing the duty cycle will have the effect of brightening or darkening the corresponding colour.
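
Here's a small numpy check of that, treating brightness as the mean level of the waveform and saturation as the amplitude of its 3.58MHz fundamental:

Code:
import numpy as np

def rectangle_wave_stats(duty, samples_per_cycle=64, cycles=64):
    """Mean level (brightness) and fundamental amplitude (saturation) of a
    rectangle wave with the given duty cycle."""
    n = np.arange(samples_per_cycle * cycles)
    wave = ((n % samples_per_cycle) < duty * samples_per_cycle).astype(float)
    spectrum = np.abs(np.fft.rfft(wave)) / len(wave)
    fundamental = 2 * spectrum[cycles]      # bin 'cycles' holds the fundamental
    return wave.mean(), fundamental

for duty in (0.25, 0.5, 0.75):
    mean, amp = rectangle_wave_stats(duty)
    print(f"duty {duty:.2f}: brightness {mean:.2f}, chroma amplitude {amp:.2f}")
# 25% and 75% duty give the same chroma amplitude (~0.45) but different
# brightness (0.25 vs 0.75); 50% duty gives the largest amplitude (~0.64).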

I would think that the CGA card would output chroma as on, off and at 50% amplitude.

I'm not sure what you mean by this. The 6 hue signals that go into the multiplexer (i.e. not black and white) all have 50% duty cycles. So yes, referring to the output signal from the multiplexer as "chroma" - it's either on, off or a square wave of 50% duty cycle of some phase depending on the colour.

Duty cycle isn't the same thing as amplitude, though - the amplitude of the chroma signal as sent to the monitor is defined by the resistor network that forms the CGA's composite output DAC. For the 1501486 the chroma amplitude (peak to peak) is about 0.75V and for the 1501981 it's about 0.31V, which explains why the chroma colours are more saturated on the earlier card.

The amplitude (and therefore saturation) of the artifact colours depends on the raw pixel colours that you're oscillating between (i.e. the colours you'd see on an RGB monitor). For the usual 640x200 mode with palette 15, it's about 1.05V swing on both cards, so the artifact colours will be much more saturated than the corresponding chroma colours, and will be consistent between cards. Though only green and magenta are directly comparable - the other two 50% duty cycle artifact colours, orange and aqua, aren't available as chroma colours and the other 4 chroma colours (blue, red, yellow and cyan) can only be represented as artifact colours with 25% and 75% duty cycles.
 
Here is a working loop that displays no artifacts onscreen:

Great, thanks!

xchg is slower than mov, so I didn't do that,

It's 1 cycle slower on the execution unit, but when you're exchanging a word register with AX it's also 1 byte shorter so 4 cycles faster on the bus interface unit. Since almost all code is bus bound, changing a mov to a 1-byte xchg will almost always save you 4 cycles (the exception being if it's right after a mul, div or long-running shift and right before an instruction that clears the prefetch queue like jump, call, ret or interrupt).
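
Spelled out as a back-of-envelope sketch, using the instruction sizes and the 4-cycles-per-fetched-byte BIU cost described above:

Code:
BUS_CYCLES_PER_BYTE = 4     # 8088 BIU takes 4 cycles to fetch each instruction byte

mov_al_bl  = {"bytes": 2, "eu_cycles": 2}   # mov al,bl  - 2-byte reg,reg form
xchg_ax_bx = {"bytes": 1, "eu_cycles": 3}   # xchg ax,bx - 1-byte accumulator form

def bus_bound_cost(insn):
    # In bus-bound code the fetch time dominates, so the cost is roughly the
    # fetch time, but never less than the execution unit time.
    return max(insn["bytes"] * BUS_CYCLES_PER_BYTE, insn["eu_cycles"])

print(bus_bound_cost(mov_al_bl) - bus_bound_cost(xchg_ax_bx))   # 4 cycles saved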
 
I think I want an older CGA card, but I only have newer ones. If you have a newer CGA card, I assume you would have to cut the resistors for the RGB signals being fed directly to the composite output. Then you would have to grab a BLANK signal, switch the resistors and enjoy the brighter and more vibrant colors.

Now, there are three schematics of the CGA card available from IBM that have been put on the Internet. The first (presumably 1804472) is found in the IBM PC Technical Reference Manual, First Edition August 1981. The second (presumably 1501486) can be found in the IBM PC and XT Technical Reference Manuals, Revised Edition April 1983, and the third (presumably 1501981) is found in the Technical Reference Options and Adapters, Revised Edition April 1984. It is interesting that on both the early and the late schematics, the R,G,B signals are being fed directly into the composite video output, but on the middle one they are not.
 
It's 1 cycle slower on the execution unit, but when you're exchanging a word register with AX it's also 1 byte shorter so 4 cycles faster on the bus interface unit. Since almost all code is bus bound, changing a mov to a 1-byte xchg will almost always save you 4 cycles (the exception being if it's right after a mul, div or long-running shift and right before an instruction that clears the prefetch queue like jump, call, ret or interrupt).

Doh, I forgot about the accum,reg and reg,accum optimizations. You're right, it is faster. I was looking at the reg,reg size/timings.
 
I think I want an older CGA card, but I only have newer ones. If you have a newer CGA card, I assume you would have to cut the resistors for the RGB signals being fed directly to the composite output. Then you would have to grab a BLANK signal, switch the resistors and enjoy the brighter and more vibrant colors.

I think the -BLANK signal probably won't make any noticeable difference, which is why they removed it in the 1501981. Probably just removing R6, R18 and R19 and replacing the 750 ohm resistor R8 with a 270 ohm resistor should give you something pretty close to a 1501486. The only problem I foresee with doing that is that the lower DC offset of the color burst might be too far out of spec for some monitors, yielding no colour at all. If that happens there's a bit of rewiring that could be done to increase it but it probably wouldn't be quite so reversible as the resistor changes.

Now, there are three schematics of the CGA card available from IBM that have been put on the Internet. The first (presumably 1804472) is found in the IBM PC Technical Reference Manual, First Edition August 1981. The second (presumably 1501486) can be found in the IBM PC and XT Technical Reference Manuals, Revised Edition April 1983, and the third (presumably 1501981) is found in the Technical Reference Options and Adapters, Revised Edition April 1984. It is interesting that on both the early and the late schematics, the R,G,B signals are being fed directly into the composite video output, but on the middle one they are not.

I think the schematic in IBM_5150_Technical_Reference_6025005_AUG81.pdf is actually more recent than the one in ibm_techref_v202_3.pdf - as well as having the +R, +G and +B signals feeding the composite DAC, it also fixes a mistake in the earlier schematic (there's a place in ibm_techref_v202_3.pdf which just wouldn't work at all as drawn). I'm not sure how a later schematic ended up in a manual dated 1981 but maybe the schematics were replaced at some point before the manual was scanned.

This particular schematic seems to be somewhere in between that of the 1501486 and the 1501981, so I think (by process of elimination) it must be the 1504910.
 
There's been some more discussion about the different CGA model numbers over at Vogons, and NewRisingSun and I have come to the conclusion that 1504910 was not in fact a separate CGA card by itself, but the Stock Keeping Unit (SKU) number corresponding to whatever CGA card is available (or, equivalently, the box containing a retail CGA card) based on the fact that this number appears on IBM price lists in both 1981 and 1986.

The main remaining mystery then relates to the schematic in IBM_5150_Technical_Reference_6025005_AUG81.pdf - is it an earlier or later iteration of the CGA card than the one in ibm_techref_v202_3.pdf? If earlier why does the composite output stage look more like the later 1501981's and why doesn't the circuit in US patent 4,442,428 use this circuit? If later then why does it appear in the August 1981 manual, why does it have lower resistor numbering and why doesn't it have the resistor values?
 