Intel 8085 question

This brings up the question of what a simulator is.

At the lowest level, is it clock-accurate or not? And if it tries to be clock-accurate, how close is it really?

I'm glad you are using 'simulator' and not 'emulator'. IBM had a definition for 'emulator' which has since been ignored: a microcoded set of routines to EMULATE the instruction set of another computer. Note that they say nothing about timing accuracy in that definition.

The area of clock-accurate simulation is a fun topic to explore. MAME's timing model is closer to accurate than SIMH's, for example, but MAME's processor models still have problems, with accuracy varying depending on who wrote the CPU code.
 
Of course, but it takes a little thinking. For example, on a project I'm working on, the MCU can toggle lines faster than the TTL attached to them can handle. Fortunately, I have timers running at 84 MHz to provide precise delays.
Using one of the timers is clearly the way to go. As for accuracy, it is as accurate as the crystal or clock source you use. The part does use a PLL or similar to produce the high-frequency clock, and that always adds some short-term jitter, but the clock still tracks the cycles of the reference clock. I don't know how far the STM parts can deviate from the lock of the reference clock; I suppose there is a spec someplace. For driving something like TTL, the TTL chips likely produce more jitter from power-supply noise than the short-term errors of the CPU clock.
 
At the lowest level, is it clock-accurate or not? And if it tries to be clock-accurate, how close is it really?

I'm glad you are using 'simulator' and not 'emulator'. IBM had a definition for 'emulator' which has since been ignored: a microcoded set of routines to EMULATE the instruction set of another computer. Note that they say nothing about timing accuracy in that definition.

The area of clock-accurate simulation is a fun topic to explore. MAME's timing model is closer to accurate than SIMH's, for example, but MAME's processor models still have problems, with accuracy varying depending on who wrote the CPU code.
I guess there are differences in what is required to be cycle accurate. For interactive games, keeping close to the same speed for the user is a tough task. For my 4004 stuff, it could run many times faster as long as the cycle count matched what was needed to simulate TTY inputs. The TTY was itself simulated, using the cycle count as the timing reference.
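To illustrate the idea (a minimal sketch with made-up names and rates, not my actual 4004 code): the simulated peripheral is driven off the accumulated cycle count instead of wall-clock time.

Python:
CYCLES_PER_BIT = 6818               # assumption: ~110 baud at ~750k cycles/sec

class SimulatedTTY:
    def __init__(self):
        self.next_event = CYCLES_PER_BIT
        self.bits = 0

    def tick(self, total_cycles):
        # Called after every instruction with the running cycle count.
        while total_cycles >= self.next_event:
            self.bits += 1          # sample or drive the serial line here
            self.next_event += CYCLES_PER_BIT

def execute_one_instruction():
    return 8                        # stub CPU: flat 8-cycle instruction cost

tty, total_cycles = SimulatedTTY(), 0
for _ in range(100_000):            # stand-in for the real fetch/execute loop
    total_cycles += execute_one_instruction()
    tty.tick(total_cycles)
print(tty.bits, "TTY bit times in", total_cycles, "simulated cycles")

The host can run the loop as fast as it likes; the TTY timing stays correct relative to the simulated clock.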
 
I've been having some fun making an emulator for this, just to see what I can get it to do. Well, I should say I've got an 8080 one working and am now thinking of making an "8085" mode for it.

Alan, I just found your question about simulation for the 8085.

For interest and comparison purposes, there is an emulator capable of decoding all 8085 instructions within the z88dk-ticks command-line tool. This is an emulator that allows us to count cycles (for performance testing) and to debug line by line. It can be a useful cross-check for your work.

Also, if you're interested, the z88dk-sccz80 C compiler generates code that can be optimised to use the relevant 8085 undocumented instructions, both in-line and within the compiler intrinsic functions. That can be used to generate some source material for your simulator.

What is very interesting (imho) is that some of our benchmarks run faster on the 8085 than on the Z80 (with the same sccz80 compiler and optimisation settings). The optimisation paths taken simply generate better code for the 8085 than for the Z80. One example benchmark (fannkuch) where the 8085 does better is shown below.

Z88DK January 3, 2022
sccz80 / classic c library
1763 bytes less page zero
cycle count = 75381296
time @ 4MHz = 75381296 / (4*10^6) = 18.84 sec

Z88DK December 12, 2022
sccz80 / classic c library / 8085 CPU
1783 bytes less page zero
cycle count = 67446120
time @ 4MHz = 67446120 / (4*10^6) = 16.86 sec
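As a quick cross-check (assuming the January run is the Z80 build), the cycle counts above work out to roughly a 12% speedup for the 8085:

Python:
# Relative improvement of the 8085 build, straight from the cycle counts above.
z80_cycles = 75_381_296
i85_cycles = 67_446_120
print(f"{z80_cycles / i85_cycles - 1:.1%} faster")   # -> 11.8% faster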

Because I don't think in Intel, I prepared a Zilog mnemonic table. Clone the JavaScript version, which shows cycles and flags affected on mouse-over.
Heresy perhaps, but it helps my mind.

Phillip
 

Attachments

  • 8085 Instruction Set.pdf
    173.3 KB
The Op8085 is new to me; thanks, it will come in handy :)
Already made some improvements; now at version 1.1, and I've also added a changelog to the repository.
FWIW, I came up with a fairly compact 8080/8085/Z80 reference card for hand assembly and disassembly that gives the mnemonics (generally) in both Intel and Zilog format, the opcodes, and the number of cycles for each. (The undocumented 8085 instructions are not yet included, but will be soon.) Rather than having separate tables for the various ways you'd want to search, it's just a short plain text file that you can load into your editor and search with your editor's search function. I've attached it here, and you can always find the latest version in my sedoc repo.
 

Attachments

  • opcodes.txt
    6 KB
Now at version 1.3 of the simulator; fixed some bugs.
Thinking about adding a second display/keyboard form for debugging other homemade (single-board) systems, maybe based on a multiplexed design (one port selects the digit, the data port drives the segments); a rough sketch of what I mean is below.
Any thoughts?
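To sketch the multiplexed case (Python just for illustration, with made-up port numbers): the form would latch writes to the digit-select port and repaint that digit whenever the segment port is written.

Python:
NUM_DIGITS = 8
digit_latch = 0                     # last value written to the digit-select port
segments = [0] * NUM_DIGITS         # persistent per-digit segment state

def port_out(port, value):
    global digit_latch
    if port == 0x00:                # hypothetical digit-select port
        digit_latch = value & (NUM_DIGITS - 1)
    elif port == 0x01:              # hypothetical segment-data port
        segments[digit_latch] = value

port_out(0x00, 3)                   # select digit 3...
port_out(0x01, 0x7F)                # ...and light its segments
print(segments)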
How about the NEC TK-85, the younger sibling of the 8080-based NEC TK-80? I have some information on the TK-85 in my tk80-re repo, including ROM images and BIOS disassemblies. I also own a couple of these boards, and they're quite sweet to use.

Briefly, it's an 8085 and an 8255, with an 8-digit segment-mapped display that reads the segment on/off bits from RAM via DMA, a 2K ROM (with sockets for three more), 1K of RAM, and a cassette interface. (The cmtconv tool in r8format will convert between WAV files of TK-85 CMT images and other formats; I use it in my development systems to generate TK-85 saves of my assembled code and play them into the TK-85.)
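If it helps, here's a rough idea (Python, with a made-up base address and segment bit order; check the tk80-re schematics for the real layout) of how a simulator could render that DMA-driven display:

Python:
DISPLAY_BASE = 0x83F8               # hypothetical: 8 bytes of RAM, one per digit
SEGMENTS = "abcdefg."               # assumed bit order: bit 0 = a ... bit 7 = dp

def render_display(ram):
    out = []
    for i in range(8):
        bits = ram[DISPLAY_BASE + i]
        lit = "".join(s for n, s in enumerate(SEGMENTS) if bits & (1 << n))
        out.append(lit or "-")
    return " | ".join(out)

ram = bytearray(0x10000)
ram[DISPLAY_BASE] = 0b00111111      # segments a-f lit: the classic pattern for "0"
print(render_display(ram))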

[Photo of the TK-85 board]
 
This brings up the question of what a simulator is....
A processor simulator that doesn't make it easy to attach it to a system is not of much use.

I've found the py65 6502 simulator and my own 6800 simulator (both written in Python) to be massively useful, though they don't emulate systems (beyond extremely simple "char in" and "char out" I/O that doesn't connect to any real hardware, just Python lists). I use them for unit testing my assembly code, e.g. like this:

Python:
@pytest.mark.parametrize('char, num', [
    ('0', 0),  ('1', 1),    ('8', 8),  ('9', 9),
    ('A',10),  ('a',10),    ('F',15),  ('f',15),
    ('G',16),  ('g',16),    ('Z',35),  ('z',35),
    ('_', 40), ('\x7F', 40)
])
def test_qdigit_good(m, R, char, num):
    m.call(m.symtab.qdigit, R(a=ord(char), N=1))
    assert R(a=num, N=0) == m.regs

@pytest.mark.parametrize('char', [
    '/',  ':', '@',                     # Chars either side of digits/letters
    '\x80', '\x81',
    '\xAF', '\xB0', '\xB9', '\xBa',     # MSb set: '/', '0', '9', ':'
    '\xDA', '\xFa', '\xFF',             # MSb set: 'Z', 'z'
    ])
def test_qdigit_error(m, R, char):
    m.call(m.symtab.qdigit, R(a=ord(char), N=0))
    assert R(N=1) == m.regs

(Briefly, the first test run will be test_qdigit_good('0', 0), with character "0" and integer 0. That calls, on the simulated "machine" m, a machine-language routine named qdigit from my assembled code, with register A containing the ASCII value of the character and the N(egative) flag set. It then confirms that after the call returns, the A register contains the binary translation of that ASCII character (in this case $30 becomes $00) and that the N flag is clear, indicating success. On failure it prints out the expected and actual values of the registers and flags. test_qdigit_good is re-run for every set of parameters, and then the same happens for test_qdigit_error, which checks that qdigit returns the proper error indication (N=1) for invalid characters.)

At the lowest level, is it clock-accurate or not? And if it tries to be clock-accurate, how close is it really?

I'm glad you are using 'simulator' and not 'emulator'. IBM had a definition for 'emulator' which has since been ignored: a microcoded set of routines to EMULATE the instruction set of another computer. Note that they say nothing about timing accuracy in that definition.

I don't think the "must be done in microcode" or even the more common "must be done in hardware" definitions are very useful for distinguishing between "simulation" and "emulation." I think it's more useful to look at the purpose of the simulation/emulation: does it adequately substitute for another system, or does it just provide useful information about code intended to run on another system?

Thus, I call my 6800 simulator above a "simulator" because it extracts key points I need about the 6800 (what does some particular sequence of instructions do?) for my unit testing; it won't (in most cases) run a complete program intended for a Hitachi Basic Master in the same way it would run on the Basic Master itself. On the other hand, despite (also) being completely software-based, I'd call OpenMSX an "emulator" (as they do) because for most of its use cases (such as playing a game) it's an adequate substitute for a real MSX machine.
 
How about the NEC TK-85, the younger sibling of the 8080-based NEC TK-80? I have some information on the TK-85 in my tk80-re repo, including ROM images and BIOS disassemblies. I also own a couple of these boards, and they're quite sweet to use.

Briefly, it's an 8085 and an 8255, with an 8-digit segment-mapped display that reads the segment on/off bits from RAM via DMA, a 2K ROM (with sockets for three more), 1K of RAM, and a cassette interface. (The cmtconv tool in r8format will convert between WAV files of TK-85 CMT images and other formats; I use it in my development systems to generate TK-85 saves of my assembled code and play them into the TK-85.)
Looks like a nice project, but without a proper ROM listing (with labels and comments) it would be a very hard job to do.
However, my project for the SDK-85 is open source, so if you want you could give it a try ;-)
 
Looks like a nice project, but without a proper ROM listing (with labels and comments) it would be a very hard job to do.
Fortunately, there is a listing with labels and comments! I've not reverse-engineered every last bit of the ROM, but I think that all the major parts are done, and I'm happy to complete the last little bits if that helps you. I'm also happy to do any hardware reverse-engineering you need.

There is also a listing of the original source code coming soon, but that will be available only as scanned pages from the manual, at least for the moment. (And I'm avoiding looking at it myself until I finish my disassemblies of both the TK-85 ROM and the TK-80 ROM.)

I'm unlikely to add this to your SDK-85 project myself, as I'm a Linux user and do almost no development on Windows beyond a bit of Bash and Python scripting for ports of some of my Linux code. I certainly don't have any C# tools.
 
With this listing I can make an effort.
The scanned pages of the manual would help as well, since I could not find a (PDF) copy of the manual.
Since you're a Linux user, would this project even be of use to you, given that you couldn't run it on a Linux computer? (Though Wine goes a long way.)
I will look into it and let you know.
 
A friend currently has both our paper copies of the manual and the scans of them; I need to get together with him to sort them out and get them uploaded to archive.org and the tk80-re repo. I'm hoping that will happen early this week. Do note that the manual is in Japanese, though I think the comments in their source code are likely to be in English. (And the schematics are an international language, of course.)

The project isn't so handy for me for various reasons, including that I mostly run Linux and I already have the real thing. (I would still try it out on my Windows gaming box, though.) But I have plenty of friends without one of these boards who would be very pleased if they could play with a simulated one. (And this would let me give on-line classes on using them, too.)
 
As far as I know, there was no real software written exclusively for the 8085's undocumented features. There could have been some in embedded code, I imagine. But in the world of personal-computer software, the fact that the 8085-unique features were disclosed long after the introduction of the chip, plus uncertainty about whether those features were fully implemented in second-source vendors' versions, kept them out of real-world use. After all, you usually wanted the code to run on 8080 and Z80 platforms as well.
RIM/SIM were pretty much OS-level details, mostly there to manage the integrated interrupt/trap facilities of the chip. They were also used in simple embedded projects and trainers for limited serial I/O. I've also seen RIM/SIM used as a simple one-bit interface to, say, reset an external flag.
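For anyone simulating this, the SIM side boils down to a few bit tests on the accumulator; a minimal Python sketch (the CPU-state names are invented for illustration):

Python:
# SIM accumulator layout:
#   bit 7 SOD   serial output data         bit 3 MSE   mask set enable
#   bit 6 SDE   latch bit 7 into SOD       bit 2 M7.5  \
#   bit 4 R7.5  reset pending RST7.5       bit 1 M6.5   1 = interrupt masked
#                                          bit 0 M5.5  /

class CpuState:
    def __init__(self):
        self.sod = 0                        # serial output pin
        self.rst75_pending = False
        self.masked = {5.5: True, 6.5: True, 7.5: True}

def op_sim(cpu, a):
    if a & 0x40:                            # SDE: latch new SOD level
        cpu.sod = (a >> 7) & 1
    if a & 0x10:                            # R7.5: clear pending RST7.5
        cpu.rst75_pending = False
    if a & 0x08:                            # MSE: bits 0-2 update the masks
        cpu.masked[5.5] = bool(a & 0x01)
        cpu.masked[6.5] = bool(a & 0x02)
        cpu.masked[7.5] = bool(a & 0x04)

cpu = CpuState()
op_sim(cpu, 0xC0)                           # drive SOD high: the one-bit output
print(cpu.sod)                              # -> 1

RIM is the mirror image: it reads SID into bit 7 and the pending/mask/enable state into the lower bits.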
Hello

I found an instance of commercial software using the undocumented instructions extensively: not in the CP/M world, but in IBM's Datamaster. I am using Ghidra to study the code in search of ports, but unfortunately it sticks to the canonical instructions. Does anybody know how to extend it or, failing that, an alternative?

Thank you in advance
 
That's interesting--we were pretty tight with Intel from around 1976 onward; we even had Bill Davidow on our board and weren't told about the instructions. One has to wonder when and how the Datamaster team learned of their existence--and why Intel never really documented them in their Microsystem/80 books.
 
That's interesting--we were pretty tight with Intel from around 1976 onward; we even had Bill Davidow on our board and weren't told about the instructions. One has to wonder when and how the Datamaster team learned of their existence--and why Intel never really documented them in their Microsystem/80 books.
Who knows what they were thinking at Intel; in the end they only hurt their own sales against the Z80. I imagine the developers of the system found them when writing the startup diagnostics; I have really never seen another microcomputer perform the kind of checks this machine does. Since the CPU is tested for failures, I can only guess they filled in the voids in the opcode matrix (or that they got preferential treatment from Intel, who knows). The only thing that I do know for sure is that, from inspecting the code, nearly all of them are there, especially 0x08, 0x10, 0x28 and 0x38.
 
That's interesting--we were pretty tight with Intel from around 1976 onward; we even had Bill Davidow on our board and weren't told about the instructions. One has to wonder when and how the Datamaster team learned of their existence--and why Intel never really documented them in their Microsystem/80 books.
From my understanding, the undocumented 8085 instructions do not carry forward to the 8086. My guess is that Intel originally added them to improve high-level-language support, but later realized that they would be a compatibility burden if not carried forward. Without them, it is very easy to map the 8080/8085 onto the 8086 instruction set for simple machine translation (Intel's published conversion maps A to AL, BC to CX, DE to DX, and HL to BX, so M becomes [BX]).
 
I know that I posted on this years ago, and that was my suspicion, but it's hard to substantiate. The other, less obvious possibility is that using those instructions makes for incompatibility with earlier 8080 systems. I really don't know why we were kept in the dark.
 
I'm not sure whether Intel would have cared about compatibility with 8080 systems. It wouldn't have been Intel's decision anyway, and vendors embracing the 8085 might even have been able to improve sales figures substantially. Compatibility with an envisioned 16-bit successor sounds much more likely to me. Either that, or plain old management incompetence; Intel has always kept some instructions secret or undocumented.
 