
Project to create an ATX 80286 mainboard based on the IBM 5170

This is really cool. Another interesting experiment (and I am not trying to distract you from the core project) would be to find a Make-It 486 and see how it reacts to the upgrade processor. Did the ISA bus only ever operate at 8.33 MHz even in a 386 DX 40? Hmmm.
Hi chjmartin2,

Your suggestion sounds interesting. Apparently some logic is needed to interface the 16-bit 486-type CPUs. This has been one of my ideas for the future since the start of this project.
If anyone knows of circuits or other technical details which could help form such circuits, I would appreciate hearing about them. Another idea could be to simply use a 486DX and try to force it into 16-bit mode permanently. Maybe such a configuration could be beneficial as well.

These speed experiments are also relevant to the "original" speed configuration of the mainboard, because varying the speeds reveals timing phenomena at higher clock speeds, such as the COM port suddenly showing up in MR BIOS without the pulldown resistor I was using previously. That resistor is not my ideal solution, so I will look for other methods, adjusting the CPU timing to see if that can help the UART work without any additional measures; changing the clock speeds could provide some clues in this regard as well.

The ISA slot doesn't really have any kind of clock restriction since it's direct I/O, and it has the wait state mechanism to fill in some of the gaps which may occur if the CPU is dramatically faster. Theoretically though, if the CPU I/O speed is too fast, the I/O devices may not respond to the shorter /IOR and /IOW pulses. So running the CPU at even higher clocks brings new challenges. Maybe at some point it will require faster and more accurate CPLD or FPGA logic to keep up. And let's not forget, the 82284 and 82288 which I am able to test with right now are officially rated at 12Mhz, which my latest tests now exceed, so it's unclear whether this can even run stably if the 82284 and 82288 can't support this speed. If a large number of wait states is occurring, the additional CPU speed will not make a large difference. Though more speed is always a positive improvement; even with some "losses" due to wait states it may still offer noticeable differences.
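As a rough illustration of why faster clocks shrink the /IOR and /IOW pulses, here is a back-of-envelope calculation. The assumption that the command is active for roughly (1 + wait states) PCLK periods, with PCLK being half the CPU CLK, is a simplification for discussion, not exact 82288 timing:

```python
# Back-of-envelope ISA command pulse widths at various 286 processor clocks.
# Simplifying assumption: the /IOR or /IOW command stays active for roughly
# (1 + wait_states) PCLK periods. Real 82288 edge placement differs, so
# treat these numbers as rough estimates only.

def command_width_ns(pclk_mhz, wait_states):
    """Approximate /IOR or /IOW active time in nanoseconds."""
    pclk_period = 1000.0 / pclk_mhz          # one PCLK period in ns
    return (1 + wait_states) * pclk_period

# A stock 8 MHz AT versus the faster configurations from this thread,
# each with one wait state inserted:
for pclk in (8.0, 13.5, 16.0):
    print(f"{pclk:5.1f} MHz, 1 WS: ~{command_width_ns(pclk, 1):.0f} ns")
```

At 16 MHz the command pulse is already half as wide as on a stock 8 MHz AT, which is why slower ISA devices may stop responding without extra wait states.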

So I have been able to test more with the 27Mhz crystal I found, which divides to 13,5Mhz actual clock speed internally in the CPU, correctly shown by the MR BIOS.
 

It would be interesting if 386SX/486SLC support could be added
Indeed, if so, I would choose the 486SLC since it is pin-compatible with a 386SX, so it would have the same chance of working out and be more worth the effort. I have been thinking about such a project ever since starting the current one.

I did a very short read and comparison between the 386SX and 286 datasheets. It would need a lot of further study and some translation design, but I think it may be possible to translate the bus control signals of the 386SX to interface it to the 286 AT system controller CPLD, and even connect it to the 82284 and 82288 up to a certain clock speed, which would be the simplest method.

An even better approach would be to replace the 82284 and 82288 with new logic, but that would be more complicated because that solution would also need to be integrated into the IBM AT system design and wait state mechanism. As we know, the /ARDY mechanism is extremely precise in its timing requirements, so the new logic would need to precisely match the operation of the 82284 and control the CPU READY cycle control mechanism on the 386SX/486SLC. I think such replacement logic, if successful, would even enable higher clock speeds, because these circuit areas can then all be synced to the clock to keep the timing as accurate as possible. It would probably also generate much better memory and I/O control signals from the CPU status outputs, because a CPLD or FPGA is much faster logic and eliminates the limitations of the 82284 and 82288. A good and necessary first experiment would be to replace the 82284 and 82288 with a CPLD to test the circuit theories, initially using a 286 CPU. Once that is working, it would lead to a new system control circuit which can be adapted much more easily to suit the 386SX. This method would be a safer path. Having a CPLD in place allows for much experimentation until it works correctly, and the new logic can then be tested again by raising the clock speed.
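To illustrate the kind of synchronous READY logic meant here, a conceptual sketch follows. This is only a Python model for discussion; the real replacement would be Verilog or VHDL inside the CPLD, and the signal names (`ardy_n`, `default_ws`) and the simple wait-state counting are my own assumptions, not the project's actual design:

```python
# Conceptual model of a synchronous READY generator, the kind of logic that
# could replace the 82284 inside a CPLD. Python illustration only; signal
# names and behavior are assumptions, not the actual project circuit.

class ReadyGenerator:
    def __init__(self, default_ws=1):
        self.default_ws = default_ws   # wait states when /ARDY is not used
        self.count = 0
        self.ardy_sync = 1             # synchronizer FF for the async /ARDY

    def clock(self, cycle_active, ardy_n):
        """Evaluate one rising clock edge; returns /READY (0 = ready)."""
        # First stage: note whether the synchronized /ARDY was sampled
        # active (low) on the previous edge, then re-sample it.
        ardy_seen = (self.ardy_sync == 0)
        self.ardy_sync = ardy_n
        if not cycle_active:
            self.count = 0
            return 1                   # /READY inactive between bus cycles
        self.count += 1
        # Terminate the cycle after the default wait states, or earlier if
        # the synchronized /ARDY was seen active.
        if ardy_seen or self.count > self.default_ws:
            self.count = 0
            return 0
        return 1

rg = ReadyGenerator(default_ws=1)
# One bus cycle with /ARDY held inactive: one wait state, then ready.
print(rg.clock(True, 1))   # 1 -> wait state inserted
print(rg.clock(True, 1))   # 0 -> /READY asserted, cycle terminates
```

The key point the model shows: because everything is sampled on clock edges, the delicate asynchronous /ARDY setup window of the 82284 is replaced by an ordinary synchronizer, which is what would allow higher clock speeds.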

After I am done with all the testing with this project, I will at least do some further research and attempt to draft some theoretical concept schematics. If that works out well and appears viable, I may be more inclined to do more testing and manufacture a prototype mainboard.

My other idea of using a 486DX CPU may also be possible, since the DX has an input to force it into 16-bit bus mode, which is also worth looking into. If a 486DX could successfully run in 16-bit mode, having it already in place is one step closer to a full 32-bit design, and the transition may be easier to test if 16-bit mode works well. But this is all purely theoretical at the moment. If it worked, it would be a really cool project though: imagine a 486 mainboard design without needing any chipset, which would be much more future proof. It also depends on how much documentation of the 386SX bus control timing I can find. I would look into building such a system using only CPLDs, with no more bus transceivers and latches at all, or only the fewest other ICs possible. CPLDs should be able to control and interface the whole ISA slot. I think using an FPGA for this purpose would be much more complicated and require more parts because of the need to shift the voltage levels. It's just a few ideas that come to mind.

I still have a lot of testing to do on the current project to figure out which CPU clock speeds can still run reliably, and the best methods to generate and sync all the other clocks. I may also experiment with other areas of the system controller design, and I will study the 5162 schematics in much more detail to see if this method can be deduced from them. It's worth at least some additional experimentation, though ideally I would like to buy an actual 5162 mainboard if I am lucky enough to find a somewhat affordable deal.
 
These days I am going to run a lot of tests to try to find out the stable maximum clock speed the CPU can run at.
After that I will look into how fast the DMA controllers can run, hopefully around 5Mhz which would be ideal.

I replaced the socket under the timer chip which was really bad, still from an earlier batch from China.
Now I am testing with the much cooler UM82C54-2 timer instead of the "oven" class Intel timer chip.

I found a 33Mhz oscillator today so I will be able to test a CPU clock of 16,5 Mhz later.
For this purpose I need to change the 82284 pin to select the EFI input and wire the oscillator output to that.

So first I plan to determine a stable method to run the CPU at 12Mhz.
If that is confirmed, I will start testing at 13.5 Mhz, and finally at 16.5 Mhz.
I may try out 20Mhz but I think the 16Mhz Harris CPU will not like this.
I will try to obtain one of the faster Harris CPUs as mentioned by HoJoPo and Hak Foo.

I also found a 74AS646 on a PCB, which has faster switching times than the 74LS646, so I will also do some testing with that later especially when raising the clock speed even further.
 
I ran the demo of Wolf3D for 4 hours without any crash or freeze in the current configuration!
That concludes the 12Mhz test as 100% stable. 12Mhz is at least supported now, a happy milestone as of today!

This 4 hour test at 12Mhz was just an extra measure to make sure there are absolutely no further issues left.
I excluded the UART because I am keeping that for later; I first want to see whether the higher CPU clock speed eliminates the issues with the UART on its own.
I will find a definitive speed and then look at if the UART functions at that speed, or needs some small tweak.

I have saved all the Quartus projects which were used successfully at 12Mhz. I will also document all the bus ICs and the load capacitor values for the 12Mhz crystal, and I will download the fusemap contents of all the CPLDs as an extra backup for future verification if needed.

Next I will now run the 13,5 Mhz test to see how that goes. I will not need to run it for as long as the 12Mhz test, maybe an hour or two will suffice.
After that I will modify the 82284 and wire in the 33Mhz oscillator on the EFI input and test with that.
 
I'm going to first solder a new mainboard PCB because I am seeing some strange intermittent issues which can only be due to a bad contact somewhere. It will be more work, but the most important thing is to get clear and consistent findings so I can know what the right stable configurations are. I will try to use only the simpler type of IC sockets, because those precision sockets from China are super unreliable and have caused trouble too often now. I believe I have introduced a bad contact by desoldering a lot of parts on the prototype, or possibly there is still a bad IC socket which I can't identify. Regardless, I need to eliminate this bad contact so I can test undisturbed by it. For this assembly I will avoid needing to desolder any ICs and will socket everything that I may possibly want to replace for testing. In the future I need to get a number of good quality normal DIP IC sockets from Germany or other reliable suppliers.
 
I have finished the assembly work on the new mainboard PCB, a step that I could not skip because I needed to eliminate all factors which could possibly influence the test findings. No matter how small or unlikely the chance, I could not risk that being the actual cause, especially given the strange results I have sometimes seen under certain test conditions, like raising the speed and testing the UART on the mainboard. I needed to know: is this a contact or assembly problem, or a systemic one? By assembling another board I can be certain of this and remove it from my list of causes. The fewer factors of influence, the clearer the situation becomes, making it possible to identify other factors with more certainty.

With the new board I have paid a lot of attention to all sockets to only use ones that feel really solid. Especially when using desoldered ICs which sometimes have slightly shorter legs. I found some older sockets from Germany on other PCBs which I have desoldered and used for the project.

I am first continuing to test the core PC parts, and this time I am also taking the higher clock rates as an additional project goal. During these speed tests I am seeing a lot of potential improvement from a higher clock rate, which raises the comfort of using the system and is worth considering from this stage on.

For example when running the game Wolf3D, when I raise the clock speed, I am seeing dramatic differences when the demo is running.
So 12Mhz should be attainable, since I have previously run a 4 hour test without issues. It's already a speed gain compared to 8Mhz, and I want to get the most speed and performance out of the design that is possible in stable operation.

Next I want to try to obtain a stable 16Mhz or 16,5Mhz if I need to use a 33Mhz source crystal.
If I am able to do this without issues, I will look around for a Harris 20Mhz or 25Mhz if this can be found.
The other day I did spot a mainboard which contains a 20Mhz Harris CPU.

After I have finished running all 286 tests, I will start working on a new 486 project.
I want to do that one with FPGA technology, and I would prefer to use a single chip to contain the entire PC system, or at least as much of it as possible.
So, I am looking for VHDL or Verilog representations of the 8237, interrupt controller, timer chip, keyboard controller, realtime clock etc.
I hope these can be found, because developing them would be huge and time consuming work, and developing a new 486 system is already a large job.

Getting back to the tests I am currently doing, I have a few conclusions:
- regarding the importance of bypass capacitors: I have tested this, and the finding is that they are definitely a big factor and should all be soldered in to get the most stable situation.

- about the crystal and swapping it for other clock speeds: a method is needed to get an absolutely solid mechanical and electrical connection, so a more solid type of connector is absolutely preferable. I will first find and test some form of pin header solution, because a precision IC socket to plug the crystal into is apparently also not solid enough. Any change of crystal may also require an adjustment of the load capacitors. Possibly I will test with some trim capacitors to determine if this can improve the clock stability. There is a lot more testing that I need to do, and I also need to find out whether my cheap scope can be at all useful for this purpose. If I can see improvement on the scope that would be great, but I have some doubts about the scope being able to show this; for example the frequency readout of the scope seems to be constantly in flux, so it never shows a solid value.

- I have tested the system using a 33Mhz crystal oscillator. By selecting the EFI input, the 82284 uses the external oscillator and outputs that frequency to the CPU. However, I was not able to get a POST to occur yet using this method. I am using a UMC brand 12Mhz 82284, which is of course not ideal for supporting a 16.5Mhz CPU clock, but I don't exclude that this can work out later; I need to test this further.

- since I will probably want to develop a 486 system later, I need to start looking at other solutions to replace the 82284 and 82288 already. If this is successful, it may benefit future systems and support higher clock speeds more accurately as well. If I succeed in replacing the 82284, which I will start with, I will publish the functional schematics here and on GitHub. Anyway, this 82284 and 82288 replacement is also a plan I am working on.

So I am back on the testing work and taking higher clock speeds into consideration with all tests I am doing from now on.

Kind regards,

Rodney
 
I have tested a lot with the 82284 12Mhz by UMC, the "UM82284-12", at 27Mhz.
The bare PCB capacitance on the crystal inputs is around 3-5pF.

Officially according to UMC, C71(pin 7) should be 25pF and C72(pin 8) should be 15pF.

My first tests show that in this case, with the full size HC-49 Philips 27Mhz crystal, the values need to be:
C71 around 10-15pF, and for C72 even 10pF is too large. I may test later with smaller caps of 3-5pF for C72 if I can find some; I may have a few.
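For reference, the effective load the crystal sees can be estimated with the standard series formula CL = C1·C2/(C1+C2) + Cstray. A quick sketch using the values mentioned above; the ~4pF stray figure is an assumption based on the measured 3-5pF board capacitance:

```python
# Quick check of the effective crystal load capacitance, using the standard
# series formula CL = (C1*C2)/(C1+C2) + Cstray. The ~4 pF stray value is an
# assumption based on the 3-5 pF board capacitance measured above.

def effective_load_pf(c1, c2, c_stray):
    return (c1 * c2) / (c1 + c2) + c_stray

# Datasheet-suggested values (25 pF / 15 pF) with ~4 pF stray:
print(effective_load_pf(25, 15, 4))    # ~13.4 pF
# Values in the range found to work here (12 pF / 8 pF, same stray):
print(effective_load_pf(12, 8, 4))     # ~8.8 pF
```

This suggests the working setup presents a noticeably lighter load than the datasheet values, consistent with the board capacitance already contributing a few pF on its own.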

I am now testing the demo game to see if it's able to run at least for a stable period of one hour or longer.
I am seeing some differences in result when the system is powered on for a longer period of time.

Possibly some components in the system are changing their operation after warming up, I am trying to find the area by using a small 4cm fan to cool different parts of the system.

I have found some improvement when I cool the VGA card, so it's quite possible that the timing shifts as the card warms up; at least this has revealed some issues which need to be looked at.

While the VGA card is being cooled, the system looks considerably more stable.

I may be able to fix this issue by adjusting the timing in a similar way as done in the 5162 using more shifting on the Q1 and Q4 signals, adding more signals and operating the /END_CYC CPU cycle timing control logic with differently shifted time periods. I would need to experiment with this using different System controller CPLD programming.

So at higher speeds the timing is more delicate, influenced not only by the components but also by temperature, which is at least a factor to consider during tests.
Of course the solution lies in the circuit logic, but for determining cause and reason, temperature should at least be considered as a factor influencing the testing work.

I will continue the longer stability tests now at 13.5 Mhz. There may be more factors involved, which I will work to identify so we can possibly get a stable 13.5 Mhz.
After that I will find a 32Mhz or 33Mhz normal crystal and test with that as well, to see if we can achieve stable 16 or 16.5 Mhz operation. Also I will try to buy some Harris 20Mhz or 25Mhz CPUs if I can find any. A faster CPU will likely be even more accurate in cycle termination timing.
 
I have done more testing and experimentation on the AT logic.
By trying out various new circuits I got more insight in the system timing.
If I extend the CPU I/O phase in certain points this does result in more cycles being needed so it's not always an improvement.
However, if this technique is used to reduce wait states for example on the VGA card, that would still speed up the system performance I believe.
Later I will experiment with this a lot more but for now the first goals are stable higher clock rates. After achieving those, I can experiment to fine tune the I/O more.

There are a few things I have encountered the past days which have resulted in some improvements and more insight.

When we raise the CPU clock speed on an AT PC, this shortens the access times on the I/O devices and on all memory chips interfaced with the CPU.

For example, at raised clock speeds it is important to look at the ROM chips and their access time.
So I searched what I have available and changed the EPROMs to EEPROM chips.

I have a few Winbond W27C512-45 chips, though I am not sure if that labeling is real.
Anyway, I can at least say that the access time of these EEPROM chips is shorter than the EPROMs I was using.
So I am seeing some improvement in stability after using these for the BIOS and option boot ROM. With these 64KB chips, the upper half from 8000h to FFFFh can be programmed, which is what gets accessed in the 32KB socket because A15 is held high.
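The A15 trick can be illustrated with a tiny address-mapping sketch (the function name is mine, for illustration only):

```python
# Illustration of why the upper half of the 64 KB EEPROM appears in the
# 32 KB BIOS socket: with A15 wired high, any socket address maps into the
# chip's 8000h-FFFFh range.

def chip_address(socket_addr):
    """Map a 15-bit socket address to the W27C512 address with A15 tied high."""
    return 0x8000 | (socket_addr & 0x7FFF)

print(hex(chip_address(0x0000)))   # 0x8000: first byte seen in the socket
print(hex(chip_address(0x7FFF)))   # 0xffff: last byte seen in the socket
```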

Regarding the next issue, the VGA screen getting distorted after warming up: I have done a lot of testing with that as well, and even modified the conversion timing to get some improvements. I could almost get rid of the issue completely, but apparently it is also related to that specific card. And extending the I/O cycles slows the system somewhat, so that is not ideal, and I discarded that solution.
It's a Headland GC205-PC based card with 41464 DRAMs of 100ns. Possibly one or more of the DRAMs could be marginal at higher CPU speeds and may need replacement.

When using a Trident 9000B card with slightly more modern DRAMs on it, the whole issue with the screen memory doesn't even occur at the higher CPU clock speeds.
So I am continuing my tests using this Trident card and later I will revisit the Headland card to see what can be done with it to fix the DRAM issues which started to occur.
I will try some other DRAMs later because I really like this Headland card and want to see how to improve it.

Anyway, I am getting very close to reaching stability at 13.5 Mhz now, which is a very festive occasion for me! So far the Wolf3D demo has been running for well over 90 minutes for the first time without any issues in my current test, which is still running. After a few hours of stability, if that happens, I will document all the ICs and CPLD programming used and do further tests to see if the results remain consistent. The system speed in the game demo looks really great and responsive now, and later I will compare this system with my other Neat chipset 286 PC which also runs at 16Mhz. Really great news!

I have found a Dutch supplier for 32Mhz crystals so I hope to receive them soon so I can try to clock the CPU at 16 Mhz next. These crystals are the shorter HC49 type package just like the other crystals on the mainboard. Maybe tomorrow I can have the crystals but they also may be delivered on monday or later, depending on the postal service.

Another thing I can report about is the usage of larger capacitors on the VCC lines. This is not typically done on the early 286 AT mainboards without chipset ICs, however when using larger capacitance on the VCC line, this greatly improves the RTC retention stability of the CMOS settings. So if anyone is using a 5170 or 5162 and often is seeing issues with the RTC, they can try to add some really big VCC caps for example some 3300µF ones which could improve the issue. Especially when I was doing some extended testing with many resets etc, I found almost never any CMOS data corruption anymore after adding the large capacitors, so definitely a big recommendation to add these. I am now mostly testing with the DS12885 since I have those in supply and still only one HD146818. I ordered more from China but those are taking really long to deliver.

Another change I made was to add 10k pull-up resistors on the S-databus lines. I will keep these for now to see if they are sufficient, and may add additional resistor packs on the address lines depending on the test results. The address lines up to A9 are the most important because these also control I/O decoding in the system. The 5162 uses resistor packs to center the voltage of the ISA slots, which could benefit the switching speeds, but I think pull-ups will provide enough benefit in this regard; driving lines low is usually less of a problem than driving them high. Typical 486 ISA mainboards seem to mostly use 4k7 pull-ups, but I will try whether 10k can also suffice.
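As a rough sanity check of 10k versus 4k7, the RC charging time of a released bus line toward a TTL high level can be estimated. The ~100pF bus capacitance and the 2.0V threshold are assumed values for illustration, not measurements from this board:

```python
# Rough estimate of how fast a pull-up can drag a released bus line up to a
# valid TTL high (2.0 V), assuming ~100 pF of bus capacitance. Both the
# threshold and the capacitance are illustrative assumptions.
import math

def rise_time_ns(r_ohm, c_f, vcc=5.0, vth=2.0):
    """RC charging time from 0 V to vth: t = RC * ln(Vcc / (Vcc - Vth))."""
    return r_ohm * c_f * math.log(vcc / (vcc - vth)) * 1e9

for r in (4700, 10000):
    print(f"{r} ohm pull-up: ~{rise_time_ns(r, 100e-12):.0f} ns")
```

Under these assumptions a 4k7 pull-up reaches the threshold roughly twice as fast as a 10k one, which is presumably why the period 486 boards favored the stronger value; whether 10k suffices here will depend on the actual bus loading.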

I have exchanged various bus transceivers and latches with HCT types now as much as possible because I like those more on any bus system since they have very nice full amplitudes compared to other logic families. I will probably exchange the bus transceivers on the memory card with HCT types as well as a next step in the process so everything will be well matched.

Currently I am using ALS type chips on the high and low databus and the 74LS646 I replaced with a 74AS646 which I found on an old PCB.
The X-databus is also more sensitive and needs an ALS or LS type for now.

The 82284 clock IC I am currently using is a UMC type rated at 12Mhz. These are more rare I think so I will test some slower Intel ceramic 82284 chips as well later after everything is stable. Same goes for the 82288, I am also using a UMC one and later will test with slower Intel ceramic versions.
For the timer chip I am currently still testing with the UMC version which is running much cooler than the Intel ones.
So far it appears that using no load capacitors at all is the best solution, and the PCB capacitance itself of a few pF seems sufficient.
If 16Mhz is at all capable to run, I expect this to only work without any load capacitors, I will post more about this later.

If I design a future revision of an ISA AT mainboard, possibly with a 486SLC or 486DX in 16 bit mode, I would control everything using CPLDs so I could connect another CPLD directly to most of the ISA slot S-bus lines. The CPLDs are much faster and can easily drive the ISA slots, and using these will also help to further simplify the PCB layout.

Another idea could be to use an FPGA and put all the system logic inside that. Even needing to use level shifters may be only a small negative matter compared to all the advantages an FPGA can offer. I really like integrated designs the most because of the convenience they offer so the choice for an FPGA seems even better. I will like to keep a normal CPU on the mainboard even if other solutions may be possible because it keeps the system more authentic to run on an actual period correct CPU I think.

Kind regards,

Rodney
 
Surely you know this already, but once you get to 16MHz and faster machines, the ISA bus speed is divided from the CPU clock speed (1/2 for 16 MHz, 1/3 for 25 MHz, etc.), especially once you get into 386 and 486.
 
Hi Makefile,

Yes indeed that is a good point, thank you for mentioning this, I also remember seeing such an option in certain BIOS settings of mainboards with faster CPUs such as above 20Mhz.

So, since ISA slot access has no actual clocking mechanism, those mainboards must run some restriction logic inside their chipsets for certain ISA slot accesses by the CPU if the CPU speed is considerably higher, say 20-40Mhz, at which point issues are certainly to be expected. It remains to be seen how high I can go with this system anyway; I don't expect to get very far above 16 Mhz without some additional measures in the system logic.

Most likely they changed the byte conversion logic by adding additional decoding, which offers the possibility of more elaborate cycle control. This would likely be activated, for example, by I/O access and by memory access in the ranges where graphics cards map their RAM, the best-known speed sensitive areas. Some of these are also covered by CPU ready mechanisms already present on more advanced ISA adapters, for example an ISA VGA card which uses IOCH_RDY. And using IRQ and DMA for certain I/O, like the floppy and sound card, also solves some potential speed issues, since the CPU is not active during those transfers. I have already experimented a little with the cycle control while looking at the issues that appeared with the Headland ISA card with the slow DRAMs. In the case of that card, though, I will see how it can be modified to work better at the faster CPU speeds, which would keep the performance of the card higher if the rest of it is capable of that, which is quite possible since the rest of the logic on the card is mostly newer than the DRAMs.

I am currently running no restrictions on the system, so I am controlling everything at the CPU's own speed of 13.5 Mhz, and I prefer to keep it that way if possible.
I want to find out how far I can stretch the performance of the system when accessed at unmodified CPU speeds.
Of course, only if this works stably, and I will be testing various ISA cards later to confirm that.
If I run into issues at, for example, 16Mhz or higher later in my test work, I will try some cycle extension mechanisms to control certain specific accesses by the CPU while keeping the CPU speed high during all other program execution and memory access. Testing and determining these things is not easy, because it's not always possible to tell which operation is causing issues when they occur.

I will look into this in more detail if anything fails to keep up, which would then introduce the need for it. I think that is how it happened back in the period as well, when they tested new mainboard designs where the CPU was capable of faster access speeds. I will also look in the BIOS of my Neat 286 PC to see if that manufacturer offers a setting to restrict ISA slot access speeds, which I think it does.

Last night I ran the 13.5Mhz test for well over 3 hours without issues.
Power cycling the system etc always returned the same stability as seen in that 3 hour test.

I added the UART now without any issues and it's detected properly and consistently since.
I did some short tests of different things such as floppy format etc without any problems.
I am now running a short stability test of a few hours with the UART included and I returned the ARDYEN clock back to SYSCLK_n, I think that is better and I expect no issues from that change.

So I will document everything next, backup the CPLDs and quartus projects, download the test versions JED files from the CPLDs and then continue the work from that configuration.

I am really curious what will happen at 16Mhz CPU clock next which is the native speed of the Harris 80286 I am using.
If I can receive the crystals today I may be able to run some tests, or else it will be in a few days I expect.

Kind regards,

Rodney
 
I have backed up all the CPLD JED files from the stable run of the Wolf3D demo last night which ran for well over 4:45 hours.
So I can assume the system to be reasonably stable at least when running at 13.5Mhz.

Today I received the 32Mhz crystals and I have done some testing with these.
The first results are looking good and I was able to run the PC stable for extended periods already at 16Mhz CPU clock speed.
Indeed another speed boost compared to last night, even more responsive and faster.
I am not sure yet if this speed can run for extended periods. I will also look at the memory card, socket the transceivers on it, and exchange at least the data bus transceivers with HCT versions.
I am reasonably sure that the SRAMs will work fine on HCT transceivers and if they do, it can improve the memory access speeds I believe.

Eventually the memory card is also a part which can possibly be replaced for example by an FPGA with SDRAM or DDR RAM where the FPGA can control the RAM and interface it with the system.
I think I have a FPGA board somewhere with modern SDRAM or something like that on it, which I could try to use for testing.
This board has a lot of I/O lines so it should be suitable if it's fast enough to try this.

Today I also tested the printer port for the first time by doing some INTERLNK/INTERSRV transfers which works fine.
The printer port is slightly different this time because I removed the interrupt control from it, since that is not really needed for the port to function.
This saved the printer port another IC, which I didn't use anyway.

I also tested mouse control again which works well.
The UART detection is still sometimes a bit strange.
Maybe I am missing something like I need to pull some serial lines high or something, I did experiment with this before.
Anyway, after removing the chip from the socket, rebooting a few times, powering off the system and putting the UART back, it is suddenly properly detected and remains functional from then on.
It is still pretty weird, but at least it works, and I can use the USB mouse.

I have only done some short tests with the CPU running at 16Mhz however I was able to run the Wolf3D game demo for some periods, and played the Patience card game in windows for about half an hour without any problems.

The ARDYEN output does seem to function best using the negative 286 clock pulse as far as I could conclude so far.

After doing more tests, I will look at the clock generation logic in the system controller again since 32Mhz is a nice multiple of the original 16Mhz clock pulse for the 286_CLK signal.
So that means it will be easy to derive various clocks from this input again instead of getting these clocks from the separate 16Mhz oscillator.

This mainboard is the second PCB assembly of this design because I wanted to exclude some potential factors. Today I will desolder some more ISA slot connectors to complete the PCB and assemble the second IDE port and the LAN chip etc. And after this is done and doing some additional tests each time to make sure nothing is made unstable by the additions, I will properly mount the mainboard in a PC case so it will be more shielded during further testing, and playing around more with the system.

Also I will solder some more SRAM chips so I can get 4MB and maybe 8MB later on. Each RAM upgrade I will test again after adding more SRAM chips.

I would really like to try out some newly compiled version of Doom on this system.
I am curious if Doom could be compiled to run on a 286 CPU, and how much work it would take to achieve that.

Right now the Wolf3D demo has been running at 16 MHz for around 80 minutes without any issues, so the 16 MHz tests also seem to be going well.
 

Attachments

  • Img_3935s.jpg (148 KB)
Yesterday I tested the system at 16 MHz the whole day without any problems, and during the night I ran the Wolf3D demo for more than 6 hours. Finally the game exited with a message to contact the programmers, so it must have been a software reason for the exit. Anyway, in DOS the PC was normal and responsive, so no real issue occurred.

I made some improvements to the clock divisions, since all the clocks are now exactly double the 5170 speeds:
286_CLK = 32 MHz
SYS_CLK = 16 MHz
QRTR_CLK = 8 MHz
DMA_CLK = 4 MHz
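Since these four clocks are exact powers-of-two divisions of the 32 MHz input, the list above can be sketched as a chain of divide-by-two stages. This is just an illustrative Python model of the arithmetic, not the actual CPLD logic:

```python
# Minimal sketch: deriving SYS_CLK, QRTR_CLK and DMA_CLK from a single
# 32 MHz 286_CLK by chained divide-by-two stages. The function name is
# illustrative, not taken from the real system controller design.

def divider_chain(freq_hz, stages):
    """Each stage is a toggle flip-flop clocked by the previous stage,
    so every stage halves the frequency of the one before it."""
    out = []
    f = freq_hz
    for _ in range(stages):
        f //= 2
        out.append(f)
    return out

clocks = divider_chain(32_000_000, 3)
# clocks -> [16000000, 8000000, 4000000] = SYS_CLK, QRTR_CLK, DMA_CLK
```

In hardware this would just be a 3-bit counter in the CPLD; the nice property of doubling all 5170 clocks is that the division ratios between them stay unchanged.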

I got a strange quirk after initially powering on: apparently the keyboard controller "failed self test" according to MR BIOS, which I found out on the CGA monitor.
So each time it needed another reset to clear the error, since MR BIOS wanted to halt the PC instead of continuing after finding the "error" at power-on.

I looked into the matter further, since I now know that it's coming from the keyboard controller after testing with an ATI CGA card and seeing the message.

During my research while designing the system, I read somewhere that the 8242 keyboard controller was manufactured over the years in several different technologies and logic types, and that the most compatible method is to use a 10k pull-down resistor on clock input pin 2 of a mainboard design when using an external clock on pin 3. The reason given was that this results in the lowest-power operation of the keyboard controller clock input circuits in any case. So I adopted this principle during the design phase, because I also prefer low-power operation where possible.

Since I am getting these keyboard controller initialization problems after the system powers on and comes out of RESET, I am having another look at the clock inputs on the keyboard controller, which do affect its stability of operation. I have read from other projects that the VIA VT82C42 is one of the most reliable chips in this category, so long ago I ordered about 5 of them and I have been using them ever since. I have some other keyboard controllers, such as an Intel and a Fujitsu EPROM version, however these seem to cause problems, for example with the GATE A20 function. VIA advertises in their datasheet that their GATE A20 function is really fast, so I think that is a critical aspect of keyboard controller operation.

The VIA datasheet mentions that the best way to wire an external clock input is to ground pin 2 and connect the clock signal to pin 3.
So I grounded pin 2 now, and additionally I created a separate clock division from 32 MHz, without any reset on these separate clock division circuits.
These two things may have removed the need for that first extra reset. In several tests since, MR BIOS has not reported the error anymore, so this appears to have been an improvement.

As for me, for the sake of this project I only look at what appears to work best, and from the test results I have learned to appreciate a certain BIOS more than others. So far that is MR BIOS.
But MR BIOS is not perfect either. Some things are quirky and strange, but that is also because its mechanisms are unknown to me, especially without source code to browse through.

What I noticed and want to mention is that sometimes MR BIOS can end up in an erratic state on this system. This is of course also due to all my extensive testing and experimentation with the CPU cycle timing etc., where the BIOS may get corrupted or confused for all sorts of reasons because of what I was doing on the system.

What I found is the best cure for this state where the BIOS is acting up: simply power down the system and remove the RTC from the socket. I use tweezers to ground the pins, which is probably useless but that's my method, and then I put the RTC back into the socket. After that, before powering on, I make sure that the configuration connected in the system is what I want to get detected, such as including the UART chip on the mainboard and having a CGA adapter or not, and then I power on.

You get the CMOS corrupted message one time and enter the BIOS, which is the best moment to just set the clock and date and other preferences like the floppy drive, and to clear the IDE type to 0 in MR BIOS so the XT-IDE BIOS can work properly; then everything is fine from then on. This procedure is really only needed if some problem occurred that corrupted the CMOS contents, or if some hardware setting doesn't revert back correctly, for example. After doing these steps, everything works perfectly.

Of course, these days I am having almost no more errors in the system anymore so I don't need to do this much, I am also not swapping many things anymore. For example after soldering something on the board I may need to repeat the steps to make sure everything is cleared out and in order. Anyway, I am using the DS12885 mostly for testing which is not exactly standard for the 5170 so maybe what I was experiencing before also has something to do with this and would not occur, or less or differently with a HD146818.

I will do a lot of soldering and testing today, so I can properly mount the mainboard in a PC case and have more RAM in the system.
Then I will go looking for some Doom version that I can try out.

Kind regards,

Rodney
 
CMOS is meant to be protected by a checksum. Maybe there's an issue with MR BIOS's implementation of this check?

Ideally, if the CMOS got corrupted, you'd get defaults (as with your method) alongside an appropriate error message.
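For reference, the IBM AT convention is a 16-bit sum over CMOS bytes 10h through 2Dh, stored at 2Eh (high byte) and 2Fh (low byte). A minimal sketch of that check, assuming the standard AT layout (a particular BIOS may cover a different range):

```python
# Sketch of the IBM AT CMOS checksum convention: the BIOS sums CMOS
# bytes 10h..2Dh and stores the 16-bit result at 2Eh (high) and 2Fh
# (low). A BIOS finding a mismatch at POST should fall back to
# defaults and report a CMOS checksum error.

def cmos_checksum(cmos):
    """cmos is a 64-byte buffer representing the RTC's CMOS RAM."""
    total = sum(cmos[0x10:0x2E]) & 0xFFFF
    return (total >> 8) & 0xFF, total & 0xFF  # (high, low)

def cmos_ok(cmos):
    hi, lo = cmos_checksum(cmos)
    return cmos[0x2E] == hi and cmos[0x2F] == lo
```

A partial corruption that happens to leave these bytes consistent would slip past such a check, which could explain a BIOS defaulting only some settings.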
 
Hi dmac,

When I started testing in the beginning, many things went wrong before I got the bugs out of the system, and in those days I saw CMOS errors multiple times, but lately it has become rarer.

It's an interesting point you make about what the BIOS does after the checksum protection is triggered. This seems different from other BIOSes like Award and AMI.

What I noticed after an error is that sometimes only certain settings were changed to defaults while others remained correct. Other times the whole set of settings was lost, so the degree of corruption varied. MR BIOS never offered to set the defaults, but often it assumed them automatically without asking after corruption. MR BIOS may be more advanced: maybe it can recognize what was still intact in the CMOS data and what needed to be changed to a default. It definitely doesn't always default all the settings when a CMOS error occurs.

Today I did some more soldering, and it was strange because that time I didn't get a CMOS error afterwards and the system just booted. Later I did some more reboots, and then I did see the error and the reset to defaults.

What I noticed before is that when freezes occurred in the system, this could corrupt the CMOS settings. So either some writes to the CMOS area must have occurred, or the RTC was unable to go into the powered-down mode where it protects the CMOS and keeps the time running.

When I look at the circuits in the 5170 for controlling the RTC from the CPU, they look pretty elaborate. If I were to interface an RTC I would not do it that way, but I am sure they did this for a good reason, which has something to do with the I/O timing of the 286 during 8-bit I/O. Or the designer was simply so familiar with the 286 cycles that they just did it that way because they knew it would work well. Anyway, those circuits don't seem logical to me, and I always wonder if it could not be done differently.

RTCs are quirky chips in my experience, which always cause some kind of problem, and I don't like that when there are POST errors the whole system has no display; then only a CGA adapter may be able to show the errors. I like later BIOSes more, where the BIOS can report the error and ask for a key press to continue. I am sure there are good reasons why they did it differently in these early ATs, but it took some getting used to when I was testing my first AT reference mainboards, and it was fortunate that I had that ATI Small Wonder card to see the messages.

The early AT mainboards I have seen had some strange errors happening, and all the mainboards, including the IBM, had some weird behaviour, like sometimes needing extra resets, etc. I think these errors may also be related to the RTC. My first ARC mainboard was so buggy that it kept detecting all the multi-I/O devices, then losing them again, and detecting them again. Weird, because once everything was detected, I could use that mainboard all day long without issues, until I did a reboot and it would act up again.

Anyway, MR BIOS is behaving much better now, so I think the earlier issues did play a role in it acting up in the beginning.
It also seems I have cured the keyboard controller self test failure, which is not happening anymore.

All in all I can say the AT is a much more complex system, which was error prone in the beginning. After ironing out the bugs it finally looks like a proper PC now, but the path to get there was really hard.
Speeding up the clock and experimenting with the system controller CPLD has shown me a lot of different weird behaviours, which lets you get to know the system in much more detail.
To me that is the charm of this project, because all the circuits are known and visible in the schematics. Using the CPLDs, I have had to modify several circuits to get the system to work properly.

I hope I can one day buy a 5162 for a reasonable price, even a broken one, as long as the PCB, PALs and PROMs are intact. The rest I can fix. I would love to study how they implemented the PAL which decodes the timing of the conversion and CPU cycle control.

Kind regards,

Rodney
 
I have done some more testing with the RTL8019AS ethernet chip, however the results are not great.
For example, an FTP transfer test kept freezing the system.
I also tested an ISA card with the same chip and configuration, and the results were the same: not good, with freezes during communication.
Perhaps this Realtek chip is indeed not great in the AT, or possibly the CPU speed is too high for this chip.
But of course I prefer to keep the CPU clock at high speed, to keep the performance I have now.
There are no issues with the CPU, VGA, UART, sound card and hard disk, all at the current 16 MHz CPU clock.

On the XT I had no problems with this IC in 8-bit mode, which of course was at an 8 MHz CPU clock, but on this project it's quite buggy, so I am abandoning this chip at least for now. Maybe I will revisit it later if I can find some way to improve it. I took the chip off the mainboard to make sure that it was indeed causing the freezing issues, and it definitely was. Maybe in the future I could run the Realtek chip in 8-bit mode, if that works better with a 16 MHz CPU in byte conversion mode.
After all, the LAN is only used occasionally for some simple file transfers for testing out software, and is no major component anyway.

I have a UM9008 ISA card which is also NE2000 compatible, so I tested this as well.
With this card there were no issues at all, and smooth transfers could be done without any of the freezing I saw with the Realtek chip.
I got one freeze, but that was after exhaustive testing and having transferred a few thousand files.
So I will look around to see if these UMC chips are available to buy, and do some research to find the datasheet, etc.
Or possibly I can find even better and faster network chips from later dates, which would probably be better able to keep up with a faster-clocked CPU, which is one of the goals of the project now as well. As long as I can find a PDF, I can test them as a possible replacement for the Realtek chip.

I added some more SRAM chips; I now have 16 chips on the board, which amounts to 8 MB of RAM in the system.
I think this can suffice for anything I will want to test in the near future.

I ran out of 50 ns SRAM chips, so I decided to try 70 ns parts, of which I have enough here.
After some extensive testing I can conclude that 70 ns SRAM chips are also no problem at all to use on this system.
I will do more CheckIt testing, but those runs take a long time with more RAM, so I can run them overnight.
These chips are all Samsung parts; some are reverse pinout, so I just flipped them and bent the pins down to the PCB, which is fine for soldering.
As long as you watch the pin 1 dot, it should go well. Soldering these chips is also really easy on the card because of the longer pads.
It took about 15 minutes to solder 8 of them on.
It doesn't really need a super small soldering tip either, as long as you keep removing excess solder from the tip frequently.

Of course I don't know if the slower 70 ns SRAMs can keep up with a 20 MHz or faster CPU, but as soon as I know more I will write about it here.
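As a rough sanity check of that question: a 286 completes a zero-wait bus cycle in two processor clocks, the processor clock being 286_CLK divided by two, so the headroom for a given SRAM speed can be estimated. The overhead figure below (address valid delay plus data setup) is an assumed illustrative value, not something measured on this board:

```python
# Back-of-the-envelope SRAM access time headroom for a zero-wait-state
# 286 bus cycle. overhead_ns (address valid delay + data setup through
# the buffers) is an illustrative assumption, not a measured value.

def sram_headroom_ns(cpu_clk_mhz, sram_ns, overhead_ns=40):
    proc_clk_mhz = cpu_clk_mhz / 2          # 286 internal clock = 286_CLK / 2
    cycle_ns = 2 * (1000 / proc_clk_mhz)    # zero-wait bus cycle length
    return cycle_ns - overhead_ns - sram_ns

# At 286_CLK = 32 MHz the zero-wait cycle is 125 ns, so a 70 ns part
# is already close to the limit once real-world overhead is included.
print(sram_headroom_ns(32, 70))
print(sram_headroom_ns(32, 50))
```

Under these assumptions the 70 ns parts keep only about 15 ns of margin at 32 MHz, and go negative well before 40 MHz, which would fit the pattern of errors showing up first during marginal moments like POST.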
I will also try out some other BIOS versions, just to see what works with this system.
Even though there are no chipset registers, I will see what happens when testing other types of BIOS ROMs on the system.

I will also keep searching for some 53C400 SCSI chips, which I will test as soon as I can find some reasonably affordable ones, or some cheap card containing one that I can remove.

I included a photo of the current setup. I know, the soldering on the SRAMs doesn't look great and I am missing some bypass caps; yesterday I had no time, so I just quickly soldered everything, checked with a microscope for shorts, and tested the system. I will reflow later with flux, add the remaining bypass caps, and clean all the flux off the card.

Kind regards,

Rodney
 

Attachments

  • Img_3953s.jpg (390.2 KB)
  • Img_3949s.jpg (140.2 KB)
  • Img_3944s.jpg (146.2 KB)
I have built the mainboard into a PC case for stability.
I have done more testing on the FTP connection problem using the UM9008-based card and found out that the PC was not actually freezing during extended transfer tests; rather, the FTP server software seemed to be stuck in some kind of timeout mode, with the timeout message appearing later. Once I discovered that, I was able to abort from both sides until the PC responded normally again, so it was not a freeze of the whole system after all. Maybe it's my Total Commander program, which I should replace with something better. I will try to find a more stable method for extended tests. For most transfers the FTP server is definitely more than sufficient for the system.

I experienced some "critical 64k area memory error" beep concerts from MR BIOS, which turned out to be caused by the increased number of SRAM chips and the lack of bypass capacitors on the memory card. So I took some time to properly finish soldering the memory card, which cleared those errors and returned the system to its usual stable self that I have become accustomed to over the past weeks.

I must say, the AT system at 16 MHz is sensitive to things like missing bypass capacitors, so this system demonstrates the real need for them.
I could get the system to run properly, but at power-on it sometimes had trouble during memory access and sometimes needed a few more resets, which was fixed by soldering in the full set of bypass capacitors. With any changes to the system, I must be careful to test each step separately to make sure I am not introducing new problems. I have had some trouble before and I am more cautious now, which is also why I quickly decided to desolder that problematic RTL8019AS chip.

While testing, before I solved those beep errors from MR BIOS, I also decided to desolder and replace the two 84-pin PLCC sockets on the mainboard. I got those from an Italian eBayer; I am not sure what happened to them or what material they are made from, but while soldering them I felt something was weird: the solder didn't seem to want to stick to the pins, and at certain pins it just seemed to form around the pin without really flowing onto the pin surface. After more solder and a lot of heating it seemed to improve a little, but I was simply not satisfied with the terrible quality of these PLCC sockets. Even the old sockets I bought back in the early 2000s are much better than these. I will not be using them again. There are a lot of bad components going around on the market, and some of them have plagued my projects quite a lot. I have mixed feelings about the cheap suppliers, because they do provide components which are impossible to get otherwise, and I did get some good deals in some cases.

Having 8 MB of RAM in the system also adds another advantage: it is convenient to run SMARTDRV for disk caching, which improves the responsiveness and smooth running of the system somewhat, for example when doing file management or running Windows 3.1.

I desoldered the 8-bit conversion LED, soldered in a pin header, and added an 8-bit conversion LED to the PC front panel. It's nice to be able to see when the PC is doing 8-bit operations, and when it is doing 16-bit transfers or just running code from memory.
For example, the conversion LED also lights up when accessing the 16-bit hard disk, which seems strange at first but makes sense after all: the XT-IDE AT option ROM is located in an 8-bit EPROM, which triggers byte conversion whenever the ROM code inside the XT-IDE EPROM is accessed.

I tried to "inject" the XT-IDE ROM into a blank space inside the MR BIOS by first splitting the ROM into 4k low-byte and high-byte parts; however, this didn't work, which I kind of expected, but you never know, and I felt I should at least try it. Loading option ROM code inside its own blank space is not something I can realistically expect from the very excellent MR BIOS, but I had to test it anyway.
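The low/high byte split itself is straightforward: even addresses go to the low-byte (D0-D7) EPROM and odd addresses to the high-byte (D8-D15) EPROM. A minimal sketch of the idea, with function names of my own choosing:

```python
# Splitting a ROM image into even/odd halves for a 16-bit EPROM pair.
# Even addresses feed the low-byte EPROM (D0-D7), odd addresses the
# high-byte EPROM (D8-D15).

def split_rom(data):
    low = bytes(data[0::2])   # even addresses -> low-byte EPROM
    high = bytes(data[1::2])  # odd addresses -> high-byte EPROM
    return low, high

def merge_rom(low, high):
    """Interleave the two halves back into the original image."""
    out = bytearray()
    for l, h in zip(low, high):
        out += bytes((l, h))
    return bytes(out)
```

For example, `split_rom(open("xtide.bin", "rb").read())` (file name is a placeholder) would give the two images to burn, and merging them restores the original byte for byte.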

For future systems I will study the chipset documentation; maybe I can find out what method activates the shadow RAM memory decoders to switch the copied versions of ROM memory sections in place of the ROMs. That way I could use a single 8-bit ROM for all ROM code in the system including the BIOS, copy it over into 16-bit RAM, and then continue to run the BIOS code at the highest CPU/RAM speed. Especially at higher clock rates this will become more and more important.

So it would pay off to find the I/O address of the bits which control such mechanisms, and to design that switching capability into a future system. The shadow RAM should normally be mapped into the memory space somewhere, so the ROM code can be copied into it at system initialization; or the procedure could be done with a simple program after boot as well. I don't particularly want to recreate what the chipset manufacturers did, and prefer to keep the system structure straightforward and uncomplicated wherever possible; however, some features are worth more than others, such as shadow RAM. Such a system may need to be initialized at a lower speed to read and run the BIOS code and then do the rest of the POST, or continue running from the shadow RAM after the POST. The BIOS shadow copy area could be write protected in hardware to avoid accidental overwriting by software.
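As a sketch of the idea, the shadow mechanism boils down to a mux on reads plus a write-protect bit. The model below is purely illustrative; the class and method names are hypothetical, not taken from any real chipset:

```python
# Minimal model of a shadow-RAM switch: a control bit decides whether
# reads of the BIOS region come from the slow 8-bit ROM or from the
# 16-bit RAM copy, and the copy can be write protected afterwards.
# Names and behaviour are hypothetical, not from a real chipset.

class ShadowRegion:
    def __init__(self, rom):
        self.rom = bytes(rom)
        self.ram = bytearray(len(rom))
        self.shadow_enabled = False
        self.write_protected = False

    def enable_shadow(self):
        # Copy ROM into RAM once, then serve all reads from RAM.
        self.ram[:] = self.rom
        self.shadow_enabled = True

    def read(self, addr):
        return self.ram[addr] if self.shadow_enabled else self.rom[addr]

    def write(self, addr, value):
        # Writes to a protected copy are silently ignored, as hardware
        # write protection would do.
        if self.shadow_enabled and not self.write_protected:
            self.ram[addr] = value
```

In hardware the "enable" and "write protect" states would simply be bits in an I/O register that steer the memory decoders.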

Below are some photos of the system built into the PC case, which is a more stable and protected condition for further testing.

If I can obtain some faster Harris 286 CPUs, I will surely try to clock the system at 40 or 50 MHz for 286_CLK, to see if the system can keep up. I don't know if the 24 MHz 286_CLK version of the 82284 can handle this, but it's worth a try at least. Hopefully the bare PCB capacitance won't be too much load to start oscillation with those higher-frequency crystals.

These tests would raise the I/O speed considerably, far above the normal range, so chances are it won't work, but at least I will see how it goes, or I may discover how it could be done. Maybe the devices I am using in the system will tolerate the speed somehow; I don't know yet. The CPLDs will be able to handle higher clock speeds at least, which improves the chance of the system still being able to do anything at those speeds.

Today I also want to build a CD or DVD drive into the system on the secondary IDE port.
I also want to see if I can run some compiled project version of Doom.
Though I should expect that it will probably not work at all, since these projects were only recently started.
If it works, I will post about it.

Kind regards,

Rodney
 

Attachments

  • Img_3957s.jpg (580.6 KB)
  • Img_3973s.jpg (77.5 KB)
Hi chjmartin2,

Thanks for your message, I appreciate that!

I have done some testing with IDE CD-ROM drivers, however most didn't work well and didn't agree with the system, causing it to freeze, etc.
I could see inside the driver files that some drivers were copied from other manufacturers and released with other brands of drives.
Even though some may have the same file name, like VIDE-CDD.SYS, they can be from various manufacturers, which can be seen when viewing the files in Norton Commander, for example.

I have only had moderate success with the CDROM.SYS made by Microsoft.
This let me install a LiteOn DVD rewriter on the secondary IDE port and create a CD-ROM drive letter, however only after also connecting IRQ15 to the secondary IDE port.
The LiteOn drive in question was not in great condition; it did try to access a normal CD but apparently was unable to read it.
Maybe it needs to be cleaned.

So far I don't think ATA CD-ROMs are a great solution on the 286, since most drivers don't work, so I will probably use a Panasonic or SCSI drive later if I need a CD-ROM in the system. I can also experiment with the sound card to see if its IDE port works; the Panasonic port definitely works, which I have tested before.
There are 4 different drivers supplied with my sound card, however I was not able to get any of them working with the IDE CD and DVD drives.

I will experiment much more later on, but I feel the need for CD drives is not so big on a 286 anyway, since I am mostly using FTP to add new files and I have ample hard disk storage to keep any files on the system, so archiving is not really needed either.

The IDE ports are much more suitable for use with hard disks in combination with the very excellent XT-IDE BIOS, which works very well without needing any IRQs.

I searched for some new Doom projects, which led me to the only candidate at the moment that supports the 286 specifically (to my knowledge): Doom8088.
In the latest release download there are two executables, of which "DOOM213H.EXE" works best on the system.
This release is able to load game data into XMS memory, which it seems to do well.
There is no mouse control as far as I could make out, however it does work with the keyboard, and the game sounds are done via the PC speaker.
The game ran reasonably on the system, considering that it's a 286; the resolution and detail have been lowered so that slower systems can produce reasonable gameplay. I think this project is very promising, especially because support for XMS has been added, which I think has the potential to improve the game further. Anyway, it was fun to be able to run this game on the system, thanks to FrenkelS!

Today I experienced more MR BIOS "sirens" reporting an error with "64k memory" or something similar, which I translate to memory access problems, so this must be related to the recently added 70 ns SRAMs. I don't trust these slower SRAMs, since after adding them I have been experiencing errors which I had not seen at all before. So I will remove those 8 chips, at least temporarily, just to see if this eliminates the errors. More testing is required to rule some things out.

I have had another careful think about the memory system. When raising the CPU clock speed further, I think it will become a more and more critical factor in keeping the system stable. It seems the most timing-sensitive moment is powering up the system and performing the POST, which is when those errors are sometimes generated. I also really should test other BIOS types. I have always hoped that I could run some Award BIOS, which I remember most from the DOS days, so I will experiment with those soon. Perhaps there is some simple system using an Award BIOS which does not depend on chipset registers to pass the POST. After all, the system doesn't need many settings: just keyboard, clock, floppy drives and such basics.

So, for the future, I am looking for a faster alternative RAM memory system which will be able to keep up with even higher CPU clock speeds, and preferably one which requires far fewer RAM chips as well, like a single modern RAM chip covering the entire RAM needs of the 286. This would save close to 30 decoding output lines needed in the present solution. The most suitable candidate would be a simple FPGA board containing some form of SDRAM or DDR RAM or similar, where the FPGA would transparently do all the memory control. So I think this will be my next project after I finish all the test work ahead. I will try to find a readily available and very affordable FPGA solution containing at least 16 MB of really fast RAM directly interfaced to the FPGA. I will also look at integrating the BIOS ROM space in the FPGA, because I want the BIOS code to be readable at faster speeds. This FPGA project could provide a good reason to get familiar with FPGAs and use them to benefit the project. It seems the project has developed some need for this type of solution.

Another small project will be to attempt to replace the 82284 and 82288 with a CPLD. What I want to do is run a CPLD in parallel with the existing 82284 and 82288 and develop the logic from the available chip documentation. Once the CPLD is able to generate signals identical to the 82284 and 82288, which I can verify with my cheap scope using both inputs, I can run the first tests using the CPLD in place of both chips. This project is a necessary step for two reasons: first, for raising the clock speed there are no chips available with that specification, so a CPLD would be needed; second, the 486 doesn't have a set of companion controller chips like the 286 has, so the system designer would need to design their own ready logic for a 486 system, as far as I know. Having circuits available to handle the 286 ready and control logic can serve as a basis for attempting an upgrade to a 486. I think these solutions are a big part of what would be involved in creating a 486 AT system.
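The parallel verification step could be automated by capturing both chips' outputs as sampled traces and comparing them, optionally allowing a sample or two of skew. A minimal sketch under that assumption; the data and function name are made up, not a real capture:

```python
# Compare a reference trace (e.g. the 82288 command output sampled by a
# logic analyzer) against a candidate trace (the CPLD's version of the
# same signal), tolerating up to max_skew samples of shift.

def traces_match(reference, candidate, max_skew=0):
    n = min(len(reference), len(candidate))
    for shift in range(-max_skew, max_skew + 1):
        if all(reference[i] == candidate[i + shift]
               for i in range(n) if 0 <= i + shift < n):
            return True
    return False
```

With `max_skew=0` this is an exact sample-by-sample comparison; allowing a small skew accounts for propagation delay differences between the original chips and the CPLD.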

I don't know which of these projects I will do first; for now I will remove those RAMs and test again with only the 50 ns SRAMs, because I want to get rid of these errors. If that works better, it means I need to look around for more of the faster chips. I will do a lot more testing to be able to draw more conclusions about the system, also by testing other BIOS versions and comparing them with each other to see which is best. If I get an Award BIOS to work, I will try the drive detection option ROM to see if the Award BIOS could even operate by itself, which would make all disk access run from 16-bit memory.

Kind regards,

Rodney
 
This morning I tested the mainboard on the table with the scope and found that I had apparently fried a few bits of the memory card data bus transceivers yesterday. This was what was throwing the "Base 64K Pattern Test Failure" error from MR BIOS.

So I put in a couple of pieced-together IC sockets and replaced the ALS-type transceivers on the memory card with HCT types. I was already planning to test this anyway, because I was reasonably convinced it would work properly with the SRAMs. Later I will replace the address bus and control signal transceivers with HCT logic as well, so all the signals on the memory card get a good, full amplitude at the inputs.

I will spend a few days doing more testing with only the remaining eight 50 ns SRAMs on the card, and after I am convinced there are no more Base 64K errors or other suspicious things happening, I will resolder the 70 ns SRAMs to get back to 8 MB. If the errors return after that, it will be a clearer indication that what is going on is related to the SRAM access times.

I will think more about further steps in the project. I will also upload several system controller CPLD files to GitHub for different CPU speeds. I may also do more testing with 4.77 MHz on the DMA controllers, just to see if it works. I believe this has a good chance of working completely stably.

I will try to find a Harris 20 MHz or 25 MHz CPU. I also want to experiment with injecting the faster 40 and 50 MHz clocks into the 82284 oscillator inputs where normally the crystal is connected. One of these inputs should allow this, I believe; I just need to find out which is the input and which is the output crystal pin. I am hoping that this method leads to more stable clock operation at those speeds when using an external oscillator to provide 286_CLK. Additionally, I would not need to modify the 82284 when using this method.

I also want to look around for some kind of POST display system which can translate the codes on the POST port into normal diagnostic text instead of raw codes. Maybe some kind of RP2040 solution with a display can be found.
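Such a translator is essentially a lookup table from port-80h codes to text. A minimal sketch of the software side; the codes and descriptions below are placeholders, not actual MR BIOS POST codes, which would have to come from the BIOS documentation:

```python
# Sketch of a POST code translator such as an RP2040 could run: map
# bytes read from port 80h to human-readable text. The table entries
# are placeholders, not real MR BIOS POST codes.

POST_CODES = {
    0x01: "CPU register test",
    0x02: "CMOS/RTC checksum test",
    0x03: "Keyboard controller self test",
}

def describe_post_code(code):
    """Return the description for a code, or a fallback for unknown ones."""
    return POST_CODES.get(code, f"unknown POST code {code:02X}h")
```

On the microcontroller, the loop would latch each write to port 80h on the ISA bus and push `describe_post_code(value)` to the display.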

I will run a test with the Wolf3D game demo for another afternoon now.

Kind regards,

Rodney
 