
Looking for volunteers to help test a new benchmark

It's been a while since I mentioned an update, but there is one: https://github.com/MobyGamer/TOPBENCH/releases/download/0.40a/TOPBV40A.ZIP
It adds helpful percentage views in the Database->Compare menu, which help you understand why a system got a particular score compared to another system.

The database now has 275 tested systems in it, which is more than I could have hoped for when I started the project nearly a decade ago. Many thanks to all who helped out.
 
Trixter,

How long is the CPU speed test supposed to take on a 486 emulation? I wanted to play with the program before placing it on a real system, so I loaded it up in VMWare under DOS 6.22. It seems to run fine; I navigate to "Add this system", but then it seems to get stuck on CPU Speed. Thanks.

Edit: Do you recommend running the benchmark from a floppy or only from a HDD?
 
It adds helpful percentage views in the Database->Compare menu, which help you understand why a system got a particular score compared to another system.

Thank you for the update. It is definitely handy to have the percentages already calculated for you when you're trying to state the difference between two machines that score very close to each other. For example, here's a screenshot of the difference between the Tandy 1000 HX in the database (a V20 with the Radio Shack memory card and its DMA chip) and my 1000 HX, which has an SRAM card without DMA:

topbench_comparison.jpg

I don't have to do math to say definitively that getting rid of the DMA cycles makes the machine between 10 and 12 percent faster, rather than the roughly 14 percent (8/7 ≈ 1.14) implied by the 7-versus-8 aggregate scores. That's super important, right? ;)

(It's an interesting coincidence that an SRAM memory expansion makes a 1000 about as much faster than a DMA-expanded 1000 as a DMA-equipped, memory-expanded 1000 is compared to one with no memory expansion at all. If you're interested in another confirmation of that result, I could try disabling my memory card and running the Topbench stub to see if that holds on an EX or HX. Here are the results for an SRAM-expanded HX; I can email the update file if that's better.)

Code:
[UID7214FBD55]
MemoryTest=1843
OpcodeTest=1110
VidramTest=1128
MemEATest=1429
3DGameTest=1030
Score=8
CPU=NEC V20
CPUspeed=7.16 MHz
BIOSinfo=unknown
BIOSdate=19870601
BIOSCRC16=7214
VideoSystem=CGA
VideoAdapter=Tandy 1000
Machine=Tandy 1000 HX V20/NODMA
Description=Tandy 1000 with SRAM memory expansion, no DMA chip
Submitter=Eudimorphodon@VCfed forums
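
For anyone curious what those percentage columns boil down to, here is a minimal sketch of one plausible way to turn two sets of per-test timings into "percent faster" figures. Only the SRAM/no-DMA numbers come from the entry above; the DMA column is invented placeholder data, and the formula is just a reasonable guess at what a comparison view might show, not necessarily what TOPBENCH itself computes.

Code:
/* percent.c -- sketch of a per-test percentage comparison.
   Assumes lower timings are faster.  The "dma" values are invented
   placeholders, NOT real database entries; only "sram" comes from
   the [UID7214FBD55] record above. */
#include <stdio.h>

struct result {
    const char *name;
    long sram;   /* SRAM/no-DMA timings from the entry above */
    long dma;    /* hypothetical DMA-equipped HX timings      */
};

int main(void)
{
    struct result tests[] = {
        { "MemoryTest", 1843L, 2060L },
        { "OpcodeTest", 1110L, 1240L },
        { "VidramTest", 1128L, 1260L },
        { "MemEATest",  1429L, 1600L },
        { "3DGameTest", 1030L, 1150L },
    };
    int i;

    for (i = 0; i < 5; i++) {
        /* time saved relative to the slower (larger) timing */
        double pct = 100.0 * (tests[i].dma - tests[i].sram) / tests[i].dma;
        printf("%-10s  %5ld vs %5ld  ->  %4.1f%% faster without DMA\n",
               tests[i].name, tests[i].sram, tests[i].dma, pct);
    }
    return 0;
}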
 
If you're interested in another confirmation of that result, I could try disabling my memory card and running the Topbench stub to see if that holds on an EX or HX.

So, I decided to go ahead and do this, and the results were kind of amazing. (It wasn't a whole lot of effort, because my RAM card has a DIP switch on it to disable base memory backfill.) A 7.16 MHz 1000 EX/HX without a memory card is a good 30-40% slower than a conventionally expanded one, not the 15%-ish that a regular 4.77 MHz 1000 is. If I had to hazard a guess, the EX/HX must effectively run their onboard RAM at the same speed as an original 1000. Fun!

noram.jpg

Contents of output.ini from topstub attached. (I was actually able to *barely* run the normal Topbench interface if I loaded DOS high... yeah, DOS high with 256k base, weird, but it told me it couldn't save the result because it had insufficient RAM for the entire database.)

Code:
[UID721410A0E6]
MemoryTest=2478
OpcodeTest=1726
VidramTest=1516
MemEATest=2203
3DGameTest=1548
Score=5
CPU=NEC V20
CPUspeed=7.16 MHz
BIOSinfo=unknown
MachineModel=0000
BIOSdate=19870601
BIOSCRC16=7214
VideoSystem=CGA
VideoAdapter=Tandy 1000
Machine=Tandy 1000
 
How long is the CPU speed test supposed to take on a 486 emulation?

It shouldn't take long at all, no more than 5 seconds or so. What emulation? Run topbench -? to see a list of options that disable some of the tests/detection; that might let it run. Try -s to skip detection.

Do you recommend running the benchmark from a floppy or only from a HDD?

Either is fine; there's code that waits for floppy drive motors to stop spinning before taking a result (some systems slow down the CPU when the floppy motors are going, as a compatibility measure for older disk-based copy-protection).
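
(For the curious: the BIOS keeps a diskette motor status byte in its data area, so a wait like that can be as simple as polling until the motor bits clear. The following is not TOPBENCH's actual code, just a minimal sketch assuming a 16-bit DOS compiler such as Turbo C, whose dos.h provides MK_FP.)

Code:
/* motorwait.c -- sketch: wait for all floppy motors to spin down.
   BIOS Data Area byte 0040:003F holds the diskette motor status;
   bits 0-3 are set while the motors of drives A:-D: are running. */
#include <dos.h>

void wait_for_floppy_motors(void)
{
    unsigned char far *motor_status =
        (unsigned char far *) MK_FP(0x0040, 0x003F);

    while (*motor_status & 0x0F)
        ;   /* busy-wait until the BIOS shuts the motors off (~2 s) */
}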

(It's an interesting coincidence that an SRAM memory expansion makes a 1000 about as much faster than a DMA-expanded 1000 as a DMA-equipped, memory-expanded 1000 is compared to one with no memory expansion at all.

Is it though? Isn't there a similar speed difference on the PCjr, when loading programs in the first 128k vs. above the first 128k?

(I was actually able to *barely* run the normal Topbench interface if I loaded DOS high... yeah, DOS high with 256k base, weird, but it told me it couldn't save the result because it had insufficient RAM for the entire database.)

You can use command-line options to point to a cut-down database if you like. Just copy the first 10% of the file or so to a new file, and tell topbench to use that instead.
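
(If hand-trimming the file is a chore, a few lines of throwaway code will do it too. Here is a rough sketch that keeps only the first 25 [UID...] records; the file names are only examples, so substitute the real database name and point TOPBENCH at the output with whatever command-line option selects the database.)

Code:
/* trimdb.c -- sketch: copy only the first MAX_ENTRIES [UID...] records
   of a TOPBENCH-style text database into a smaller file.
   The file names below are examples, not necessarily the real ones. */
#include <stdio.h>

#define MAX_ENTRIES 25

int main(void)
{
    FILE *in  = fopen("database.ini", "r");  /* full database (example name) */
    FILE *out = fopen("small.ini", "w");     /* cut-down copy                */
    char line[256];
    int entries = 0;

    if (in == NULL || out == NULL) {
        fputs("could not open input or output file\n", stderr);
        return 1;
    }
    while (fgets(line, sizeof line, in) != NULL) {
        if (line[0] == '[')              /* each record starts with [UID...] */
            entries++;
        if (entries > MAX_ENTRIES)       /* stop before the 26th record      */
            break;
        fputs(line, out);
    }
    fclose(in);
    fclose(out);
    return 0;
}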

Thanks for the results, I'll include them in the next release.
 
It shouldn't take long at all, no more than 5 seconds or so. What emulation? Run topbench -? to see a list of options that disable some of the tests/detection; that might let it run. Try -s to skip detection.

I am using VMWare to emulate a DOS 6.22 environment. I have created a boot disk and will run the test on a few machines tonight. I will get back to you with results ASAP.
 
AMI Enterprise III Results

OK, I tried the program on real HW and here is what I got:

Code:
[UIDA245130B40]
MemoryTest=62
OpcodeTest=28
VidramTest=119
MemEATest=33
3DGameTest=20
Score=217
CPU=Intel Pentium OverDrive for 486
CPUspeed=83 MHz
BIOSinfo=R(C)1985-1992,American Megatrends Inc.,All Rights Reserved,6145F Northbelt Parkway,GA-30071,USA. (404)-263-8181. (11/11/92, rev. 0)
BIOSdate=19921111
BIOSCRC16=A245
VideoSystem=VGA
VideoAdapter=VGA, S3 86c928 E, VESA, 4096kb VRAM and 1024kb DRAM
Machine= AMI Enterprise III Series 68 Revision 1 w/ Pentium PODP5V83
Description=AMI Enterprise III EISA/VLB Based System w/ Number 9 #9GXE VLB Video Card
Submitter=Shadow Lord (VCF)

It should be noted that I ran the test a number of times (rebooting in between) and each time I got a different speed for the CPU, ranging from 74 to 90 MHz. 83 MHz is the speed on the box.

The program crashed (needed a reset) on my Gateway 2000 486DX2-66V and Everex Step MEGACube, as well as under VMWare emulation. In all three situations, detecting the CPU speed seems to be the issue (using -s allows the program to proceed and not crash).
 
The benchmark was designed to be a 16-bit benchmark so that the scores were appropriately relative from the 8088 all the way up to 486s. Once you get to Pentiums, you've got two pipelines and I didn't design the metric code for that, so running it on a Pentium or later produces numbers that don't scale as I intended. Also, the CPU detection code is not perfect :) but you can always skip that test and just add in the correct info yourself. In a nutshell, I'm not surprised you saw what you saw, given a Pentium upgrade on a 486 system.
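
(As an illustration of why a clock-speed estimate drifts on CPUs the timing assumptions weren't built for, here's a toy example, not TOPBENCH's detection code. It times a loop against the BIOS tick counter and converts to MHz using an assumed cycles-per-iteration cost; that assumption can only fit one CPU family, so a Pentium pairing instructions in two pipelines finishes the loop in fewer cycles and the reported MHz comes out wrong, with cache and interrupt effects adding jitter on top. Assumes a 16-bit DOS compiler such as Turbo C.)

Code:
/* cpuguess.c -- toy MHz estimate, for illustration only.
   Reads the BIOS tick count (dword at 0040:006C, ~18.2 ticks/s),
   times a calibrated loop, and converts to MHz using a guessed
   cycles-per-iteration figure that is only valid for one CPU family. */
#include <dos.h>
#include <stdio.h>

#define ITERATIONS     500000L
#define ASSUMED_CYCLES 17L       /* made-up cost of one iteration */
#define TICK_RATE      18.2065   /* BIOS timer ticks per second   */

int main(void)
{
    unsigned long far *ticks = (unsigned long far *) MK_FP(0x0040, 0x006C);
    unsigned long start, elapsed;
    volatile long i;
    double seconds, mhz;

    start = *ticks;
    for (i = 0; i < ITERATIONS; i++)
        ;                                   /* the "work" being timed */
    elapsed = *ticks - start;               /* coarse; a real tool would
                                               latch the 8253 PIT instead */
    if (elapsed == 0) {
        puts("loop finished inside one tick; can't measure");
        return 1;
    }
    seconds = elapsed / TICK_RATE;
    mhz = (ITERATIONS * (double) ASSUMED_CYCLES) / (seconds * 1.0e6);
    printf("estimated %.1f MHz (only as good as the cycle guess)\n", mhz);
    return 0;
}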

For benchmarking 386s and later, I usually recommend benchmarks that use 386 instructions. I thought about creating TOPBENCH 386 at one point, which would be a protected-mode program and test all 386 instructions as well as VGA operation and memory speed, but there are enough benchmarks out there that already do similar things (i.e., you can just run Doom or Quake in benchmarking mode) that I didn't feel it was necessary.
 
The benchmark was designed to be a 16-bit benchmark so that the scores were appropriately relative from the 8088 all the way up to 486s. Once you get to Pentiums, you've got two pipelines and I didn't design the metric code for that, so running it on a Pentium or later produces numbers that don't scale as I intended. Also, the CPU detection code is not perfect :) but you can always skip that test and just add in the correct info yourself. In a nutshell, I'm not surprised you saw what you saw, given a Pentium upgrade on a 486 system.

Trixter,

I want to clarify that the program crashed on 486 systems with genuine Intel 486 chips. It ran just fine, if inaccurately as you would expect, on the Pentium OD. However, it could not detect any of the 486 chips. I will try it on my Toshiba T3200SX and Compaq Systempro XL this weekend and see if it goes any better. Any chance for code improvements in the detection routine? How about in the video department? Could we contribute info to you that would allow for more accurate (i.e., brand name/model vs. just chipset) video detection? Especially in detecting video RAM: I consistently got 256KB on systems with 1MB, 2MB and even 4MB of video RAM.

For benchmarking 386s and later, I usually recommend benchmarks that use 386 instructions. I thought about creating TOPBENCH 386 at one point, which would be a protected-mode program and test all 386 instructions as well as VGA operation and memory speed, but there are enough benchmarks out there that already do similar things (i.e., you can just run Doom or Quake in benchmarking mode) that I didn't feel it was necessary.

Yes and no. Yes, there are other options for benchmarking. However, it would be nice to use one benchmark to create one database that has all sorts of machines in one place. Is it an absolute necessity? Of course not, and I think this benchmark (when it works) is, as you say, good enough up to 486s and early Pentiums. But I wouldn't be sore at you if you built a 32-bit version or a more expansive one ;)
 
Any chance for code improvements in the detection routine? How about in the video department?

I'm not planning on it, since the detection routines are just for convenience: the program can run without CPU detection, and the video portion can be modified in the editor before committing a result to the database.

But I wouldn't be sore at you if you built a 32-bit version or a more expansive one ;)

I don't think it would go over as well as existing methods people are already using. Phil already has a large database project, for example.
 
Is it though? Isn't there a similar speed difference on the PCjr, when loading programs in the first 128k vs. above the first 128k?

The hit on an unexpanded PCjr is much worse than on the unexpanded Tandy 1000, more like 50% than 10-15%. I was just thinking it was an interesting coincidence that the relative overhead of Tandy's video system managed to land so close to the normal DMA overhead. Clearly they weren't quite able to duplicate that when they upped the CPU clock to 7.16 MHz.

Have you considered the idea of making a version of the Topbench stub that could take as an argument where in physical memory it runs? After seeing how the HX-with-no-expansion-RAM results were so low it would be interesting if you could, for instance, run the Topbench code loops above the 512k mark in a 640k Tandy 1000 like an SX or, especially, a TX. A TX used the same Big Blue ASIC as the EX/HX, so presumably any program code that might run from the last 112k of RAM in a TX that doesn't have the extra 128k of CPU-dedicated RAM installed (so it's stuck using the shared video ram) would take a *massive* hit.
 
Have you considered the idea of making a version of the Topbench stub that could take as an argument where in physical memory it runs?

No, and I'd argue that such a feature should never exist. The point is benchmarking the system as a user would run programs normally.

That said, nothing is stopping anyone from booting DOS 2.11 and running the stub on an unexpanded PCjr, which is precisely the use case I wrote the stub for.

After seeing how the HX-with-no-expansion-RAM results were so low it would be interesting if you could, for instance, run the Topbench code loops above the 512k mark in a 640k Tandy 1000 like an SX or, especially, a TX. A TX used the same Big Blue ASIC as the EX/HX, so presumably any program code that might run from the last 112k of RAM in a TX that doesn't have the extra 128k of CPU-dedicated RAM installed (so it's stuck using the shared video ram) would take a *massive* hit.

You can use the freeware program EATMEM to eat up RAM, which causes subsequently run programs to load higher in memory.
 
No, and I'd argue that such a feature should never exist. The point is benchmarking the system as a user would run programs normally.

Yes, I'll admit it's an odd case in most circumstances. But there are those weird edge cases (like Tandys with their shared memory that migrates upward, or other machines that hold less than 640K on the motherboard and require the rest on expansion cards) where some architectural quirk makes part of the memory slower than the rest, so the real-world performance of these machines ends up lower than what they normally benchmark at once a program grows large enough to use the slower regions.

(Testing the speed of UMBs might also be an interesting edge case; you see people argue about whether it's useful to use something like an 8-bit Lo-Tech card to add UMBs to a fast turbo XT because the additional memory will be "so much slower" than the built-in; you could get an answer for *just how much* slower it actually is.)

But that said, EATMEM should probably do the job well enough. Using it just now, I was able to confirm that this slow upper region can make a *significant* difference. By playing with reserved amounts between 256K and 320K, I was actually able to make TopBench spit out results at points between the "fast" SRAM score and the no-expansion-RAM one. Checkit also shows the pain of having to run from the slow RAM:

speedbrake.jpg

No wonder these Tandy machines get so slow, even more so than usual, when trying to run Windows or GeoWorks. ;) The overhead from the video memory sharing basically knocks them down to not much faster than a 4.77 MHz machine once program code has to run from the shared region.
 
Trixter,

Is there any way to easily maintain a personal database, i.e., if I wanted to have a DB of just my machines for my own use? I saw the switch for providing a different name for a DB file, but then the other machines from the master DB will not be in it (for comparison). I guess I could run the test, create my own personal DB, and import it into the master list, but before I do I wanted to make sure there was no automatic way of doing it. A related question: can I export results from one DB to another? Or do I need to cut and paste out of the master DB file? TIA!
 
I created the database as a text file, and added the command-line option to pick a database file, specifically so people could maintain their own databases if they wanted. However, there is no facility to handle more than one database simultaneously, and I'm not planning on adding that. As for splitting, merging, etc.: It's text, so cut'n'paste in an editor is your friend :)

Unfortunately, keeping it text is one reason loading takes a while; lots of string ops. I thought about creating some sort of binary cache, but that's not worth the time either since it's not like people are starting up TOPBENCH 15+ times a day.
 
As for splitting, merging, etc.: It's text, so cut'n'paste in an editor is your friend :)

I haven't actually tried the experiment yet, but I am curious: it looks to me like the database is stored sorted by speed score. If you just append a result (like the output from topstub) out of order, does that matter? (Would it accept it and fix the sorting the next time the CUI interface wrote it back out?)
 
New update! The Q1 2022 database now holds 296 systems. (It was over 300, but I did some much-needed auditing and cleanup of the database, removing effective duplicates, renaming "AT CLONE" systems that were really Pentiums, etc.) There is also a smattering of compatibility, speed, and memory usage fixes. You can grab the newest version from the GitHub repo's Releases section.
 
(It was over 300, but I did some much-needed auditing and cleanup of the database, removing effective duplicates, renaming "AT CLONE" systems that were really Pentiums, etc.)
I'm happy I'm still in the database, twice: once with my surname and once without. You might want to change one or the other.
 