CGA 80x50x64k Mode Preview

Oh well :)

Re the colour mixing, I was wondering if the other shaded character values could expand the colour range even more (176/177/178)?

I think it could... I was trying to keep it to the 50% character because I thought I would need half the bandwidth to move the data - but I was wrong. It's easy to test; the viewer doesn't seem to care what the character is. I also thought the 50% character would help with the flicker, but it didn't, and now I know why... so I will fix it. The total image energy in each frame needs to be roughly the same, hence the interleave... hmmm... I have to investigate.
 
Using all the characters would expand the range. Here's a simulated pageflip between two 40x25 screens where the second screen is a dither of the errors in the first (look hard and you can see two characters occupy the same spot sometimes).

[Attachment: TDDEBUG clown.png]

No, I do not have a functional encoder for this. I suggested people build one to improve on 8088 Corruption but so far nobody has taken up the challenge and I don't want to steal any potential thunder. (Besides, I'm working on a PCjr video system at the moment...)
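
(For anyone who wants to experiment: the two-screen idea sketches out to something like this in Python - a per-pixel toy that ignores the character/attribute constraints, which are where the real work is. The function names and the error-dither rule are illustrative, not from any actual encoder.)

    # Frame A takes the nearest palette colour; frame B takes the colour
    # nearest to what's needed so the temporal average of A and B lands
    # closer to the source, i.e. B dithers A's error.

    def nearest(palette, rgb):
        return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, rgb)))

    def two_frame_encode(palette, source):  # source: list of (r, g, b) tuples
        frame_a, frame_b = [], []
        for rgb in source:
            a = nearest(palette, rgb)
            # b should approximate 2*source - a, the "error" of frame A
            want = tuple(2 * s - c for s, c in zip(rgb, a))
            b = nearest(palette, want)
            frame_a.append(a)
            frame_b.append(b)
        return frame_a, frame_b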
 

I tried this approach first, but found that the encoding time was huge and didn't give a nice dithered appearance. Cycling through 16*16*16*16*255*255 is a pretty big task...
 
Do you publish your image encoding algorithm anywhere? It appears to be based on some type of character sub-sampling, because it frequently returns the 50% dither character, but it also seems to function like an edge detector. When I have done character-based graphics in the past, I first pick the two colors and then cycle through every character, calculating the lowest color distance - but I feel like I would be better off comparing the averages of 2x2 blocks in each character. Maybe you've described it to me before, or I read it somewhere...

Anyway... this one just takes the gamma corrected (woot) average output of four of the 16 CGA colors and then chooses them accordingly with the 50% dither. I could do as another poster suggested and use the other shaded characters - I seem to remember the average value of two of them is the same, so there is only full, 50% and one more - easy enough to do. I'm not sure I'd get much more from it, although it would help a lot on the flicker-free settings, as that really does limit the color range.
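
For reference, the perceived colour of one of these flickered cells can be estimated like this (a rough Python sketch; the 2.2 gamma curve and the example RGBI values are assumptions, not taken from the encoder):

    def to_linear(c):                # gamma-encoded 0..255 -> linear 0..255
        return 255.0 * (c / 255.0) ** 2.2

    def to_gamma(c):                 # linear 0..255 -> gamma-encoded 0..255
        return 255.0 * (c / 255.0) ** (1.0 / 2.2)

    def perceived(fore1, back1, fore2, back2):
        # The 50% dither mixes fore/back within a frame, and the page flip
        # mixes the two frames, so the eye sees roughly the linear-light
        # average of all four colours.
        quad = (fore1, back1, fore2, back2)
        return tuple(to_gamma(sum(to_linear(c[i]) for c in quad) / 4.0)
                     for i in range(3))

    # e.g. light red + blue on one frame, white + black on the other:
    print(perceived((255, 85, 85), (85, 85, 255), (255, 255, 255), (0, 0, 0)))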

I have continued to feel as though there has to be some way to better choose "the best combination of characters, foreground and background colors" to match a given image section. Right, here is our problem: we have 16*16*16*16*512*512 comparisons - well, that order of magnitude anyway (I realize there are redundant sets). How do you represent each combination - as a vector, a formula, an average, a sub-sample, or in some other way - so that each option can be compared to the 8x8, 16-bit image section it represents?

Now try this one on - and this one hurts my head - what if you were to start with a much higher resolution input image? Let's say, to simplify the calculation, that you were going after an 80x50 character screen (640x200). If you had more image and spatial information to work with, could you find a better character/color combination to match that image? The flicker character provides a set amount of energy, so how do we emit the same energy in the closest pattern possible to the higher resolution input... Is it better to downsample first?

If you can do 40x25 at 60 fps - there has to be a way to utilize temporal blending to dramatically improve the quality... One thing I know is that you have to make sure the total brightness of one frame is the same as the other; the ability to tolerate the flicker seems to relate directly to the total luminance difference between the frames - just optimizing that helps a lot. In an animation the flickering would be significantly reduced, because you move the energy around so much from one frame to the next.
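
A balance check like the one described might look something like this (a toy sketch, not the actual optimization; standard Rec. 601 luma weights are assumed):

    def frame_luma(frame):           # frame: list of (r, g, b) tuples
        return sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in frame)

    def balanced(frame_a, frame_b, tolerance=0.02):
        # The two flicker frames should emit roughly the same total
        # luminance; here "roughly" is a relative tolerance.
        la, lb = frame_luma(frame_a), frame_luma(frame_b)
        return abs(la - lb) <= tolerance * max(la, lb)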

I just moved the file I uploaded to my Tandy - it works well - some flicker, but not so much that it isn't an interesting effect.

A demo that exploited this effect to make a plasma would be really cool I think. Stationary image flicker isn't that fun ...
 
I tried this approach first, but found that the encoding time was huge and didn't give a nice dithered appearance. Cycling through 16*16*16*16*255*255 is a pretty big task...

Yes, which is why the encoder took 2 seconds per frame on my 2.4GHz P4.

Do you publish your image encoding algorithm anywhere? It appears to be based on some type of character sub-sampling, because it frequently returns the 50% dither character, but it also seems to function like an edge detector. When I have done character-based graphics in the past, I first pick the two colors and then cycle through every character, calculating the lowest color distance - but I feel like I would be better off comparing the averages of 2x2 blocks in each character. Maybe you've described it to me before, or I read it somewhere...

The algorithm is described in the presentation video I did for the project: http://archive.org/details/8088CorruptionExplained It was designed to do exactly what you describe: match shaded characters while also trying to match edges. Both the source and every character-set+color combination are resampled to 50% size, and then the comparisons are made.
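
As a rough sketch of that matching scheme (illustrative only - the real details are in the video): both sides are reduced from 8x8 to 4x4 by averaging, then scored by squared distance:

    def downsample(block):          # block: 8x8 grid of (r, g, b) tuples
        out = []
        for y in range(0, 8, 2):
            row = []
            for x in range(0, 8, 2):
                px = [block[y + dy][x + dx] for dy in (0, 1) for dx in (0, 1)]
                row.append(tuple(sum(p[i] for p in px) / 4.0 for i in range(3)))
            out.append(row)
        return out

    def score(a, b):                # sum of squared channel differences
        return sum((ca - cb) ** 2
                   for ra, rb in zip(a, b)
                   for pa, pb in zip(ra, rb)
                   for ca, cb in zip(pa, pb))

    def best_match(src_block, renderings):
        # renderings: {(char, fg, bg): 8x8 block}, rendered once up front
        src = downsample(src_block)
        return min(renderings,
                   key=lambda k: score(src, downsample(renderings[k])))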

If you can do 40x25 at 60 fps - there has to be a way to utilize temporal blending to dramatically improve the quality ...

Yes, see the post above where I attached the clown picture ;-)

A demo that exploited this effect to make a plasma would be really cool I think. Stationary image flicker isn't that fun ...

The plasma in Second Reality does exactly that, actually.
 
In this zip file are the viewer, a .bat file to make things easier, and a series of .64C files. Just type V64 filename.64C and it will run...

There are various levels of flicker, as I modified my encoder to limit luminance differences. I also changed the interleave pattern!

Let me know.

I finally got around to trying this. Looks pretty good - the colour reproduction is good enough that it could be mistaken for VGA with sufficient squinting! The chequerboard interleave pattern is definitely an improvement over the line-alternation pattern. It actually reminds me of early active-matrix colour LCD laptop displays which used to do similar flickering to improve their colour resolution.

Whatever you're doing to minimize the flicker is definitely working too (kq6lowf is much less headache-inducing than kq6max). Are you just giving less weight (smaller gamut volume) to combinations with more flicker? Or doing some kind of error diffusion to reduce the number of flickery character cells?

A warning to others who want to try viewing Chris's images, though - some of them are Not Safe For Work. Chris, you might want to find some different test images to check reproduction of flesh tones - I'm not sure if these ones violate any VCF policies.
 

Whoops! I have a series of test images that I use over and over again, and I always include some of those (isn't that what old computers are for?). I can take them out.

The flicker reduction is a setting that eliminates color combinations whose luminance difference exceeds a certain threshold (L=.8*r+...). The higher the difference, the higher the flicker. The other thing I have seen contribute to the flicker is the difference between the two screens, so the second optimization I did is to balance the overall frame luminosity difference. We have four colors - Fore 1, Back 1, Fore 2, Back 2 - and I make sure that if Fore 1 and Back 1 are bright and dark, then Fore 2 and Back 2 are also bright and dark. The combination of that, plus the checkerboard interleave, plus the luminance difference limit seems to improve the overall image at the sacrifice of perceived color depth.
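
The pair-filtering step might look something like this (a toy sketch; the exact luma weights are elided above, so standard Rec. 601 weights are assumed here):

    def luma(rgb):
        r, g, b = rgb
        return 0.299 * r + 0.587 * g + 0.114 * b

    def allowed_pairs(colors, threshold):
        # One frame shows colour c1 in a cell, the next shows c2; keep the
        # pairing only if the frame-to-frame luminance difference stays
        # under the threshold (on a 0..255 luma scale).
        return [(c1, c2) for c1 in colors for c2 in colors
                if abs(luma(c1) - luma(c2)) <= threshold]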

Right now I am spending time trying to create a better matching method that uses all of the characters. I have made some exciting progress.
 
CGA Demo - 640x200x85 Color Flicker Viewer - New Encoder!

Ugh. I just wrote a great reply and then lost it because my login timed out. I will try again.

Here is a new version of the encoder. It uses all of the available characters and represents an entire rewrite of the code. Here are some features that were added:

1. Implements gamma correction in all the math - looks better, but depressing, because it means I had been doing it wrong for a very long time. Linearizes everything using two functions, rgbtolin and lintorgb, keeping everything on a 255 scale to keep it nice and clean. (A sketch of these two functions follows the list.)
2. Uses a sub-sampling brightness based method for character matching. Preserves significant detail in the output image and provides color blending which is critical.
3. Uses the viewer that Reenigne wrote for me! woot! Thanks.
4. Limits luma difference between color pairs on a selectable scale to optimize flicker versus gamut for specific images. (65 is basically no flicker, 108 is acceptable)
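
For anyone following along, rgbtolin/lintorgb presumably look something like this on a 0..255 scale (a plain 2.2 power curve is assumed here; the exact curve isn't given). The last two lines show why the linearization matters:

    def rgbtolin(c):                 # gamma-encoded 0..255 -> linear 0..255
        return 255.0 * (c / 255.0) ** 2.2

    def lintorgb(c):                 # linear 0..255 -> gamma-encoded 0..255
        return 255.0 * (c / 255.0) ** (1.0 / 2.2)

    # Averaging black and white the naive way gives ~128, but averaging in
    # linear light and re-encoding gives ~186 - the value that actually
    # looks like a 50% mix on a CRT:
    print((0 + 255) / 2)                                 # 127.5
    print(lintorgb((rgbtolin(0) + rgbtolin(255)) / 2))   # ~186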

Files:

Viewer and Files
BMP Outputs

Videos of the viewer in action on a Tandy 1000:


Goonies
Ronald Reagan
Terminator

Thanks,

Chris
 
1. Implements gamma correction in all the math - looks better, but depressing, because it means I had been doing it wrong for a very long time. Linearizes everything using two functions, rgbtolin and lintorgb, keeping everything on a 255 scale to keep it nice and clean.
2. Uses a sub-sampling brightness based method for character matching. Preserves significant detail in the output image and provides color blending which is critical.

Looks like you (re-)discovered everything I did when I wrote my own :) These images are quite good, congrats.

Now that you are performing subsampling, how long does each image take to convert?
 
Sort of...

It takes an eternity (over a minute) to convert each frame, because it is really calculating two frames. Yes, the subsampling is close to what you were doing in 8088 Corruption, but not quite the same: I do the luma channel first, then calculate the chroma. Two-part process. Look at this:

[Attachment: britconv.jpg]

The luma channel calc is a color reduction of the source with dithering, then an averaging of the tiles, then choosing the characters. It is slow.
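
As a loose sketch of that luma pass (illustrative, not the actual encoder; the ink-coverage fractions for the shade characters are approximations):

    # Approximate ink coverage of space, the three shade characters
    # (176/177/178) and the solid block:
    COVERAGE = {0x20: 0.0, 0xB0: 0.25, 0xB1: 0.5, 0xB2: 0.75, 0xDB: 1.0}

    def pick_char(tile):             # tile: 8x8 grid of luma values 0..1
        # Average the (already dithered) tile, then pick the character
        # whose coverage is closest to that brightness.
        avg = sum(sum(row) for row in tile) / 64.0
        return min(COVERAGE, key=lambda ch: abs(COVERAGE[ch] - avg))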

I'm going to take a look at making a .TMV at 60 fps using the same algo to see what I get. Flicker shouldn't be nearly as objectionable with motion. One more step could be improved by choosing the chroma and luma at the same time - 256*16*16 comparisons is a lot, but 512*512*16*16*16*16 is a lot more! :)

Thanks,

Chris
 
(Can't edit previous message for some reason) Also thought I'd mention that the TMV "compiler" (takes already-converted files and muxes them into a .TMV file) is also now included in the distribution.
 
Jim,

Do you have the source video file for 8088_CORR? Things over here are finally coming together and I'd like to convert the suite of available TMV files. :)

Chris
 
Your mention of this is timely, as just this past weekend I entered and won a programming competition with a sequel to 8088 Corruption that vastly improves the tech. I'll start a new thread with more details on that in a few days, when I have a write-up and source code posted.

As the new tech doesn't use any code or ideas from 8088 Corruption, I would suggest you test your encoder with the TRON discs test footage and then compare what it looks like with the .TMV conversion of the same footage. This should give you a good apples-to-apples comparison.
 
Trixter, have you posted any video yet of whatever it is you've been working on? Would love to see it!
 
Just thinking out loud... say we use 40x100 text mode and have a custom font on the card's EPROM providing the full 64 combinations of 2x2 sub-pixels... then we should be able to do 160x200x(some composite colour depth) at 60fps with no snow?
 
Yes, but I don't like that idea because most people wouldn't be capable of running it on their vintage hardware. Besides, if we were going to make some hardware modifications, I can think of a better one: Replace the CGA with a VGA card so that we can redefine the font without resorting to custom EEPROMs ;-)
 
Just thinking out loud... say we use 40x100 text mode and have a custom font on the card's EPROM providing the full 64 combinations of 2x2 sub-pixels... then we should be able to do 160x200x(some composite colour depth) at 60fps with no snow?

Not sure what you mean by "64 combinations of 2x2 sub-pixels" - in 40x100 text mode there are 16 pixels per character cell, so you'd need 65536 different characters (actually 32768, since you can swap foreground and background), and you'd still have some ZX Spectrum style "attribute clash" due to only having two different colours per character cell.

How about 40-column text mode, CRTC tweaks for one scanline per row (requires reprogramming some CRTC registers twice per frame from a timer interrupt) and using the characters 0x0d, 0x21, 0x35, 0x4c, 0x48, 0x6a and 0x99. In their top row, these characters have all combinations of pixels you need for a 160-column mode, so that gives you 160x200x16 on an RGBI monitor (with a small amount of attribute clash - two colours per 4-pixel character cell). I came up with this a while ago but never got around to doing anything with it.
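
A sketch of how the character selection for that trick could work (the bit patterns below are placeholders chosen only so the table covers all 16 cases - the real values would need to be read out of the CGA font ROM for the characters listed):

    # Each character's top row is treated as 4 "fat pixels" (2 hi-res
    # pixels each). 8 characters plus a foreground/background swap cover
    # all 16 on/off patterns. PATTERNS values are placeholders, NOT the
    # actual top rows of 0x0d/0x21/0x35/0x4c/0x48/0x6a/0x99.
    PATTERNS = {0x20: 0b0000, 0x0d: 0b0010, 0x21: 0b0100, 0x35: 0b0110,
                0x4c: 0b1000, 0x48: 0b1010, 0x6a: 0b1100, 0x99: 0b1110}

    LOOKUP = {}
    for ch, bits in PATTERNS.items():
        LOOKUP.setdefault(bits, (ch, False))          # pattern as-is
        LOOKUP.setdefault(bits ^ 0b1111, (ch, True))  # inverted, swap colours

    def encode_cell(bits, fg, bg):
        # bits: desired 4-fat-pixel pattern; returns character plus the
        # attribute colour order (swapped when using an inverted pattern).
        ch, swapped = LOOKUP[bits]
        return ch, (bg, fg) if swapped else (fg, bg)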
 