Chuck(G)
ARM STM32F4 and F7. Great MCUs.
Using one of the timers is clearly the way to go. As far as accuracy goes, it is as accurate as the crystal or clock source you use. The part does use a PLL or similar to produce the high-frequency clock. That always has some short-term jitter, but the clock still tracks the cycles of the reference clock. I don't know how far the STM parts can deviate from lock with the reference clock; I suppose there is a spec someplace. For driving something like TTL, the TTL chips likely produce more jitter from power-supply noise than the short-term errors of the CPU clock.

Of course, but it takes a little thinking. For example, on a project I'm working on, the MCU can toggle lines faster than the TTL attached to them can handle. Fortunately, I have timers running at 84 MHz to provide precise delays.
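To put some numbers on the 84 MHz point: at that tick rate, any delay you program is quantized to roughly 12 ns steps, so the rounding error is far below TTL timing margins. A quick back-of-the-envelope sketch (the 84 MHz figure is from the post above; the helper names are just illustrative):

```python
# Back-of-the-envelope: resolution of delays generated from an 84 MHz timer.
TIMER_HZ = 84_000_000  # timer clock from the post above

def ticks_for_delay(seconds: float) -> int:
    """Round the requested delay to a whole number of timer ticks."""
    return round(seconds * TIMER_HZ)

def quantization_error(seconds: float) -> float:
    """Absolute error (in seconds) after rounding to whole ticks."""
    return abs(ticks_for_delay(seconds) / TIMER_HZ - seconds)

# One tick is ~11.9 ns, so any requested delay lands within half a tick
# (plus whatever jitter the PLL contributes on top of the crystal).
```

So a 1 µs delay becomes exactly 84 ticks; the residual error is down in the noise compared to TTL propagation-delay spreads.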
I guess there are differences in what is required to be cycle-accurate. For interactive games, keeping close to the same speed for the user is a tough task. For my 4004 stuff, it could run many times faster as long as the cycle count matched what was needed to simulate TTY input. The TTY was itself simulated, using the cycle count as the timing reference.

At the lowest level, is it clock-accurate or not? And if it tries to be clock-accurate, how close is it really?
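The "cycle count as the timing reference" idea is worth making concrete: the simulated TTY doesn't look at wall-clock time at all, only at how many simulated cycles have elapsed, so the whole simulation can run as fast as the host allows and the TTY still sees characters at the right simulated rate. A minimal sketch (the 750 kHz clock, 110 baud, and class names are my assumptions for illustration, not details from the post):

```python
# Sketch: drive a simulated TTY from the simulator's cycle count rather than
# real time. Assumed figures: 750 kHz CPU clock, 110-baud TTY, 11-bit frames
# (start + 8 data + 2 stop, as on a Model 33).
CPU_HZ = 750_000
BAUD = 110
BITS_PER_CHAR = 11
CYCLES_PER_CHAR = CPU_HZ * BITS_PER_CHAR // BAUD  # simulated cycles per char

class SimTTY:
    """Delivers one input character every CYCLES_PER_CHAR simulated cycles."""
    def __init__(self, text):
        self.pending = list(text)
        self.next_due = CYCLES_PER_CHAR

    def poll(self, cycle_count):
        """Return the next char once enough cycles have elapsed, else None."""
        if self.pending and cycle_count >= self.next_due:
            self.next_due += CYCLES_PER_CHAR
            return self.pending.pop(0)
        return None
```

Because `poll` compares against the cycle counter, the host can run the core many times faster than real time and the character pacing, as seen by the simulated program, is unchanged.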
I'm glad you are using 'simulator' and not 'emulator'. IBM had a definition for 'emulator' which has now been ignored: a microcoded set of routines to EMULATE the instruction set of another computer. Note they say nothing about timing accuracy in that definition.
The area of clock-accurate simulation is a fun topic to explore. MAME's timing model is closer to accurate than SIMH's, for example, but MAME's processor models still have problems, varying in degree of accuracy depending on who wrote the CPU code.
I've been having some fun making an emulator for this just to see what I can get it to do. Well, I should say I've got an 8080 one working and am now thinking of making an "8085" mode for it.
FWIW, I came up with a fairly compact 8080/8085/Z80 reference card for hand assembly and disassembly that gives the mnemonics (generally) in both Intel and Zilog format, the opcodes, and the number of cycles for each. (The undocumented 8085 instructions are not yet included, but will be soon.) Rather than having separate tables for the various ways you'd want to search, it's just a short plain-text file that you can load into your editor and search with your editor's search function. I've attached it here, and you can always find the latest version in my sedoc repo.

The Op8085 is new to me; thanks, it will come in handy.
Already made some improvements, now at version 1.1, also added a changelog to the repository.
How about the NEC TK-85, the younger sibling of the 8080-based NEC TK-80? I have some information on the TK-85 in my tk80-re repo, including ROM images and BIOS disassemblies. I also own a couple of these boards, and they're quite sweet to use.

Now at version 1.3 of the simulator; fixed some bugs.
Thinking about adding a second display/keyboard form for debugging other homemade (single-board) systems, maybe based on a multiplexed scheme (one port for the digit, another for the segment data).
Any thoughts?
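For the multiplexed scheme above (one port selects the digit, another carries the segment bits), the debugger side mostly needs to latch segment writes per selected digit and render them back to text. A rough sketch, assuming the common a=bit0 … g=bit6 segment encoding (class and method names are illustrative):

```python
# Sketch of a debug view for a multiplexed 7-segment display:
# one port write selects the digit, the next carries its segment bits.
# Segment-to-character table uses the common a=bit0 .. g=bit6 convention.
SEGMENTS = {0x3F: '0', 0x06: '1', 0x5B: '2', 0x4F: '3', 0x66: '4',
            0x6D: '5', 0x7D: '6', 0x07: '7', 0x7F: '8', 0x6F: '9'}

class MuxDisplay:
    def __init__(self, digits=8):
        self.raw = [0] * digits  # latched segment byte per digit
        self.selected = 0

    def write_digit_port(self, value):
        self.selected = value    # which digit the next segment write targets

    def write_segment_port(self, value):
        self.raw[self.selected] = value

    def text(self):
        """Render the latched segments; unknown patterns become spaces."""
        return ''.join(SEGMENTS.get(b, ' ') for b in self.raw)
```

A real front end would also want reverse lookup (character to segment byte) for injecting test patterns, but the latch-and-decode core is the part that matters for debugging.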
This brings up the question of what a simulator is....
A processor simulator that doesn't make it easy to attach it to a system is not of much use.
# Tests for the qdigit routine: convert an ASCII character to its digit
# value, clearing N on success and setting N on failure.

@pytest.mark.parametrize('char, num', [
    ('0', 0), ('1', 1), ('8', 8), ('9', 9),
    ('A', 10), ('a', 10), ('F', 15), ('f', 15),
    ('G', 16), ('g', 16), ('Z', 35), ('z', 35),
    ('_', 40), ('\x7F', 40),
])
def test_qdigit_good(m, R, char, num):
    m.call(m.symtab.qdigit, R(a=ord(char), N=1))
    assert R(a=num, N=0) == m.regs

@pytest.mark.parametrize('char', [
    '/', ':', '@',                      # chars either side of digits/letters
    '\x80', '\x81',
    '\xAF', '\xB0', '\xB9', '\xBA',     # MSb set: '/', '0', '9', ':'
    '\xDA', '\xFA', '\xFF',             # MSb set: 'Z', 'z'
])
def test_qdigit_error(m, R, char):
    m.call(m.symtab.qdigit, R(a=ord(char), N=0))
    assert R(N=1) == m.regs
Looks like a nice project, but without a proper ROM listing (with labels and comments) it would be a very hard job to do.
Briefly, it's an 8085, 8255, 8-digit segment-mapped display that reads the segment on/off bits from RAM via DMA, a 2K ROM (with sockets for three more), 1K RAM, and a cassette interface. (The cmtconv tool in r8format will convert between WAV files of TK-85 CMT images and other formats; I use it in my development systems to generate TK-85 saves of my assembled code and play it into the TK-85.)
Fortunately, there is a listing with labels and comments! I've not reverse-engineered every last bit of the ROM, but I think all the major parts are done, and I'm happy to complete the last little bits if that helps you. I'm also happy to do any hardware reverse-engineering you need.
Hello

As far as I know, there was no real software written exclusively for the 8085's undocumented features. There could have been some in embedded code, I imagine. But in the world of personal-computer software, the 8085-unique features (which were disclosed long after the introduction of the 8085), and uncertainty about whether second-source vendors' versions fully implemented them, kept them out of real-world use. After all, you usually wanted the code to run on 8080 and Z80 platforms as well.
RIM/SIM were pretty much OS-level details, mostly there to manage the integrated interrupt/trap facilities of the chip. They were also used on simple embedded projects and trainers for limited serial I/O. I've also seen RIM/SIM used as a simple one-bit interface to, say, reset an external flag.
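The bit layout makes those uses concrete: RIM returns the serial input (SID) in bit 7 alongside the pending-interrupt, enable, and mask bits, and SIM interprets bit 7 of the accumulator as serial output (SOD) when bit 6 (SDE) is set. A small decoder/encoder pair following the 8085 datasheet layout (the function and field names are my own):

```python
# 8085 RIM/SIM bit layout, per the datasheet:
# RIM -> A: [SID | I7.5 | I6.5 | I5.5 | IE | M7.5 | M6.5 | M5.5]
# A -> SIM: [SOD | SDE  |  --  | R7.5 | MSE| M7.5 | M6.5 | M5.5]

def decode_rim(value):
    """Unpack the accumulator contents after a RIM instruction."""
    return {
        'sid':  bool(value & 0x80),  # serial input data bit
        'i7_5': bool(value & 0x40),  # interrupt 7.5 pending
        'i6_5': bool(value & 0x20),  # interrupt 6.5 pending
        'i5_5': bool(value & 0x10),  # interrupt 5.5 pending
        'ie':   bool(value & 0x08),  # interrupts enabled
        'm7_5': bool(value & 0x04),  # interrupt masks
        'm6_5': bool(value & 0x02),
        'm5_5': bool(value & 0x01),
    }

def encode_sim(sod=False, sde=False, r7_5=False, mse=False,
               m7_5=False, m6_5=False, m5_5=False):
    """Build the accumulator value to load before a SIM instruction."""
    return ((sod << 7) | (sde << 6) | (r7_5 << 4) | (mse << 3) |
            (m7_5 << 2) | (m6_5 << 1) | m5_5)
```

The one-bit-interface trick above is exactly this: SIM with SOD/SDE drives the SOD pin high or low, and RIM reads whatever is on SID.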
Who knows what they thought at Intel; in the end they only hurt their own sales against the Z80. I imagine the developers of the system found them when writing the startup diagnostics; I have really never seen another microcomputer perform the kind of checks this machine does. Since the CPU is tested for failures, I can only guess they filled in the voids in the opcode matrix (or that they got preferential treatment from Intel, who knows). The only thing I do know is that, by inspecting the code, nearly all of them are there, especially 0x08, 0x10, 0x28 and 0x38.

That's interesting--we were pretty tight with Intel from around 1976 onwards; we even had Bill Davidow on our BODs and weren't told about the instructions. One has to wonder when and how the Datamaster team learned of their existence--and why Intel never really documented them in their Microsystem/80 books.
From my understanding, the undocumented 8085 instructions do not carry forward to the 8086. My guess is that Intel originally added them to improve high-level-language support, but later realized that they would be a compatibility burden if not carried forward. Without them, it is very easy to map the 8080/8085 onto the 8086 instruction set for simple machine translation.
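For anyone following along with the opcodes cited above (0x08, 0x10, 0x28, 0x38), here is the set of undocumented 8085 instructions as usually given in the community literature (the 1979 Dehnhardt/Sorensen writeup); these mnemonics are convention, not official Intel names:

```python
# Undocumented 8085 opcodes with their commonly used (unofficial) mnemonics.
# K and V are the two undocumented flags these instructions expose.
UNDOC_8085 = {
    0x08: 'DSUB',      # HL = HL - BC
    0x10: 'ARHL',      # arithmetic shift right of HL
    0x18: 'RDEL',      # rotate DE left through carry
    0x28: 'LDHI d8',   # DE = HL + immediate byte
    0x38: 'LDSI d8',   # DE = SP + immediate byte
    0xCB: 'RSTV',      # restart to 0x40 if the V (overflow) flag is set
    0xD9: 'SHLX',      # store HL indirect through DE
    0xDD: 'JNK a16',   # jump if the K flag is clear
    0xED: 'LHLX',      # load HL indirect through DE
    0xFD: 'JK a16',    # jump if the K flag is set
}
```

Note how they all sit in the gaps of the 8080 opcode matrix, which fits the "filled the voids" guess in the post above.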