
Changing VGA clocking modes

VileR

Here is what I get from available documentation. Before making changes to the VGA clocking mode (i.e. doubling/halving the character clock for 40/80 column operation, or toggling 8/9 dots per character), the sequencer must be reset first - using the Sequencer Reset Register (port 3C4h, index 00h).

The question is how exactly to go about doing that, and here's where things get ambiguous.

From http://www.osdever.net/FreeVGA/vga/seqreg.htm:

[bit 1] SR -- Synchronous Reset
"When set to 0, this bit commands the sequencer to synchronously clear and halt. Bits 1 and 0 must be 1 to allow the sequencer to operate. To prevent the loss of data, bit 1 must be set to 0 during the active display interval before changing the clock selection. The clock is changed through the Clocking Mode register or the Miscellaneous Output register."
[bit 0] AR -- Asynchronous Reset
"When set to 0, this bit commands the sequencer to asynchronously clear and halt. Resetting the sequencer with this bit can cause loss of video data."

From https://archive.org/details/Second_Sight_VGA_Registers/page/n9:

Bit 0:
Asynchronous Reset - This bit, synchronous reset, or both should be set to 0 before changing bit 0 or bit 3 of the Clocking Mode register or bit 2 or bit 3 of the Miscellaneous Output register, or all bits of register 3DF index D.
0 - Asynchronous clear and halt the sequencer. This may cause data loss in the dynamic RAM's.
1 - Bit 1 and 0 must be 1 to allow the sequencer to operate.

Bit 1:
Synchronous Reset - This bit, asynchronous reset, or both should be set to 0 before changing bit 0 or bit 3 of the Clocking Mode register or bit 2 or bit 3 of the Miscellaneous Output register, or all bits of register 3DF index D.
0 - Synchronous clear and halt the sequencer.
1 - Bit 1 and 0 must be 1 to allow the sequencer to operate.

Taken together, I assume that bit 0 (AR) should be left at 1, and that it should be enough to set bit 1 (SR) to 0, change the clock, then set SR to 1 again.
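In other words, something like this (untested - just my reading of the docs, with the actual clock change elided):
Code:
        mov     dx,03c4h        ; sequencer address port
        mov     ax,0100h        ; index 00h (Reset reg): AR=1, SR=0 - sync. reset
        out     dx,ax
        ;  ...change the clock here (3C2h and/or 3C4h index 01h)...
        mov     ax,0300h        ; index 00h: AR=1, SR=1 - sequencer operates again
        out     dx,ax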
However, the one VGA BIOS I actually looked at (some Trident or other) does the opposite, i.e. it leaves SR at 1 and changes AR.

Besides that, how should I parse "...must be set to 0 during the active display interval" - does that mean setting it to 0 for at least a full interval, or just waiting for the interval to arrive before I set it? Is there a minimum or maximum amount of time during which it must remain 0?

Also, I'm not entirely clear about the issue of possible data loss. I assume this has to do with the DRAM refresh circuitry but I'd appreciate some insight here. Basically I'm looking for the safest way to do this across multiple VGA chipsets and/or system speeds, etc.
 
However, the one VGA BIOS I actually looked at (some Trident or other) does the opposite, i.e. it leaves SR at 1 and changes AR.

Perhaps that particular routine doesn't care about preserving the contents of VRAM, and therefore uses the faster asynchronous reset. Or (because it's tied to a particular piece of VGA hardware) it can make assumptions about how the hardware works that normal user VGA programming can't.

Besides that, how should I parse "...must be set to 0 during the active display interval" - does that mean setting it to 0 for at least a full interval, or just waiting for the interval to arrive before I set it? Is there a minimum or maximum amount of time during which it must remain 0?

It should be set to 0 for as little time as possible. From https://www.phatcode.net/res/240/files/vgaman.txt, where I learnt this:
Note that this reset cannot last longer than a few tens of microseconds without the possibility of image memory corruption due to lack of refresh activity.

Also, I'm not entirely clear about the issue of possible data loss. I assume this has to do with the DRAM refresh circuitry but I'd appreciate some insight here. Basically I'm looking for the safest way to do this across multiple VGA chipsets and/or system speeds, etc.

Yeah, the VGA sequencer is responsible for refreshing video DRAM. I'm guessing the synchronous reset operation does a few refresh cycles in advance (perhaps using wait states) to ensure that the refresh can safely be off for a while (much as my lockstep code does with system DRAM refresh).

I'm not sure why "during the active display interval" matters - perhaps there are some edge conditions around horizontal or vertical retrace that could complicate things. But I suspect an overabundance of caution on IBM's part: for one thing, the earlier EGA documentation doesn't include that requirement; for another, in practice it doesn't seem to matter. One of the best-tested pieces of VGA-tweaking code I know of is FRACTINT. The video mode setting code there does this:
Code:
        mov     dx,word ptr es:[63h]    ; say, where's the 6845?
        add     dx,6                    ; locate the status register
vrdly1: in      al,dx                   ; loop until vertical retrace is off
        test    al,8                    ;   ...
        jnz     vrdly1                  ;   ...
vrdly2: in      al,dx                   ; now loop until it's on!
        test    al,8                    ;   ...
        jz      vrdly2                  ;   ...
...and a little later...
Code:
setmiscoreg:
        mov     dx,03c4h                ; Sequencer Synchronous reset
        mov     ax,0100h                ; set sequencer reset
        out     dx,ax
        mov     dx,03c2h                ; Update Misc Output Reg
        mov     al,cl
        out     dx,al
        mov     dx,03c4h                ; Sequencer Synchronous reset
        mov     ax,0300h                ; clear sequencer reset
        out     dx,ax

So this does the reset during the vertical retrace interval rather than the active display interval, and I suspect either works fine. No harm in picking one and being consistent about it, though.
 
Excellent, thank you. Appreciate the pointers to vgaman.txt and to FRACTINT, too -- saved in my reference list.

That kind of leads me to another question, however - the (supposed) necessity to stall between I/O accesses to the same port, to let the bus "settle" (i.e. by doing a "jmp short $+2" or two). These two references completely omit it; Michael Abrash does mention it, but says "I don’t know how applicable it is when accessing VGAs". If we trust FRACTINT I guess it's safe enough to drop the delay, but I wonder if there's a final word on that.
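For reference, this is the pattern I mean (a made-up example - toggling the Screen Off bit - just to show where the delays would go):
Code:
        mov     dx,03c4h        ; sequencer address port
        mov     al,01h          ; select the Clocking Mode register
        out     dx,al
        jmp     short $+2       ; <- the delays in question
        inc     dx              ; 3C5h = sequencer data port
        in      al,dx           ; read the current value
        jmp     short $+2       ; <- ditto
        xor     al,20h          ; toggle bit 5 (Screen Off)
        out     dx,al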
 
That kind of leads me to another question, however - the (supposed) necessity to stall between I/O accesses to the same port, to let the bus "settle" (i.e. by doing a "jmp short $+2" or two). These two references completely omit it; Michael Abrash does mention it, but says "I don’t know how applicable it is when accessing VGAs". If we trust FRACTINT I guess it's safe enough to drop the delay, but I wonder if there's a final word on that.

There was a good thread on this a few years ago, though VGA wasn't mentioned specifically. I'm fairly sure that the extra delays are never necessary when accessing VGA registers. One reason is that they aren't used in FRACTINT (except in one case in the 8514/A code). Another is that the VGA is a very sophisticated chipset: it needs to add its own wait states for VRAM accesses, which probably means it does the same for port accesses too (though it would be interesting to check on a 286/386 whether VGA port writes are slower than writes to motherboard device ports). Anyway, I'd feel quite comfortable writing VGA code without the delays, and would only put them in if something breaks.
 
The thread link leads me back to this one, but I think I found the one you were thinking of: http://www.vcfed.org/forum/showthread.php?45197-IO-Delays-on-8088-class-computers, is that it?

Lots of good info there, and more sources for my reading list - much obliged. :) I'm still thinking about having some delays between writes, not really for their own sake, but as a byproduct of rolling long "out dx,ax" sequences into loops (I'm not really size-limited here, but it'd be nice to keep the code size down anyway); hopefully that wouldn't slow things down too much. Nice to know it isn't really required, in any case.
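(For the record, the kind of loop I mean - an untested sketch, with placeholder table values, and DS assumed to point at the table's segment:)
Code:
        mov     dx,03c4h        ; sequencer address port
        mov     si,offset seqtab
        mov     cx,3            ; number of index/value pairs
sqloop: lodsw                   ; AL = register index, AH = value
        out     dx,ax           ; write index and data in one go
        loop    sqloop
        ...
seqtab  db      00h,01h         ; Reset: synchronous reset on
        db      01h,01h         ; Clocking Mode (placeholder value)
        db      00h,03h         ; Reset: back to normal operation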
 
The thread link leads me back to this one, but I think I found the one you were thinking of: http://www.vcfed.org/forum/showthread.php?45197-IO-Delays-on-8088-class-computers, is that it?

Yep, that's the one I was trying to point to! Sorry for the copy/paste failure.

Lots of good info there, and more sources for my reading list - much obliged. :) I'm still thinking about having some delays between writes, not really for their own sake, but as a byproduct of rolling long "out dx,ax" sequences into loops (I'm not really size-limited here, but it'd be nice to keep the code size down anyway); hopefully that wouldn't slow things down too much.

Depends what you're doing! But I can't imagine that changing the clocking mode is something you're going to be doing more than once a frame or so, so it's probably not a particularly critical code path.
 
Depends what you're doing! But I can't imagine that changing the clocking mode is something you're going to be doing more than once a frame or so, so it's probably not a particularly critical code path.
Much less frequently than that, even. Just some setup code when changing video pages/toggling options in a program (a font editor). My only concern is unseemly glitches in case some changes become visible before others, but I think I have it covered. :)
 