
Intel 8085 question

I take the Tundra docs as authoritative, since Calmos/Tundra actually made the CPU chips and included descriptions in their datasheet. There had been a lot of speculation on the opcodes, but it was mostly based on "let's try it and see what they do."
 
I would hope that Tundra did an accurate job of second-sourcing that part. Makes sense.


The above link is an article about reverse engineering the 8085, and while I haven't compared its findings against the Tundra datasheet, I would hope they align, particularly for bits 5 and 2.
 
I once wrote a fairly complicated piece of control software for the 8085 in Z80 assembly language, since I had an assembler for the Z80 but not for the 8085. The RIM and SIM instructions, each a single fixed byte, were handled by placing a defined byte equal to the hex opcode inline with the rest of the code, with a comment reading 'RIM' or 'SIM' as appropriate. Obviously I used only those instructions common to both the Z80 and the 8085.

I made the hex code freely available, but not the source, as I didn't want anyone taking it up, altering it a bit and charging money for it. Some years later I noticed that someone had disassembled the code 'back' to 8085 source code, which was highly amusing to me, as it had never existed as 8085 source code until that point. :)
 
Now at version 1.3 of the simulator; fixed some bugs.
Thinking about adding a second display/keyboard form for debugging other homemade single-board systems, maybe based on a multiplexed design (one port for the digit select, another for the segment data).
Any thoughts?
 
Just my 2 cents:
I recently acquired an SDK-85, so I've been busy on a simulator for it for the last few months.
Just uploaded the first version to GitHub for those interested:


That looks useful. Anyone tried compiling it under a non-Windows operating system?
 
I've run it just quickly as a test. One question I have: does it emulate the SDK-85 in the sense of running its ROM and being operated via the keypad? Or is the keypad/display used for I/O, but not necessarily for operating it?
 
I've run it just quickly as a test. One question I have: does it emulate the SDK-85 in the sense of running its ROM and being operated via the keypad? Or is the keypad/display used for I/O, but not necessarily for operating it?
It emulates the SDK-85 in the sense that it displays whatever is sent to memory address 1800H, and it reads the keyboard (if you click a button with the mouse) by placing that key's value at 1800H.
But only if interrupt 5.5 is enabled (not masked), because that is the interrupt the SDK-85 uses for the keyboard.
If you load the SDK-85 ROM image (provided in the monitor folder) and assemble/run it, you can see it at work.
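A minimal sketch of the hook described above, in Python: writes to 1800H drive the display, and a mouse click on a key deposits its code at 1800H and raises RST 5.5, but only while that interrupt is unmasked. The class and method names are purely illustrative; only the 1800H address and the RST 5.5 behaviour come from the description above.

```python
DISPLAY_ADDR = 0x1800  # address the simulator watches, per the post

class SDK85Frontend:
    """Hypothetical keyboard/display front end for an 8085 core."""

    def __init__(self):
        self.memory = bytearray(0x10000)
        self.rst55_masked = True   # masked until the program unmasks it (SIM)
        self.rst55_pending = False
        self.display_value = 0

    def write_mem(self, addr, value):
        # Called by the CPU core on every memory write; a write to
        # 1800H is mirrored to the on-screen display.
        self.memory[addr] = value & 0xFF
        if addr == DISPLAY_ADDR:
            self.display_value = value & 0xFF  # GUI would repaint here

    def key_clicked(self, keycode):
        # Delivered only if RST 5.5 is enabled, matching the monitor.
        if not self.rst55_masked:
            self.memory[DISPLAY_ADDR] = keycode & 0xFF
            self.rst55_pending = True

fe = SDK85Frontend()
fe.key_clicked(0x0A)        # ignored: RST 5.5 is still masked
fe.rst55_masked = False     # program unmasks via SIM
fe.key_clicked(0x0A)        # now latched and flagged as pending
```

The point of routing every write through `write_mem` is that the CPU core needs no knowledge of the GUI; the front end decides what 1800H means.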
 
I've run it just quickly as a test. One question I have: does it emulate the SDK-85 in the sense of running its ROM and being operated via the keypad? Or is the keypad/display used for I/O, but not necessarily for operating it?
This brings up the question of what a simulator is. The execution of code by a processor is not a system; the SDK-85 is a system. A simulator that simulates an SDK-85 is no longer just an 8085 simulator. I've always described the distinction as instrumenting the processor simulator to simulate a system.
A processor simulator that doesn't make it easy to attach a system is not of much use (not a comment about this simulator, as I don't know anything about it). There are several things that need to be worked out. How does the system simulator handle asynchronous events? Does the system own time and provide it to the processor, or does the processor provide time to the system simulation? Either way, the processor must maintain a cycle counter. It may or may not make that available to the external system other than through the pacing of instructions.
Some system operations must stay synchronous with the processor's instruction execution, while others are asynchronous to it. This is the most difficult thing to connect to the simulation of the processor. Things get even trickier if the system delivers interrupts to the processor at random times. One must make sure that the timing between such external events doesn't exceed the processor's ability to handle them.
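One of the two arrangements described above, sketched in Python: the processor owns time, maintains a cycle counter, and hands elapsed cycles to the system after every instruction, sampling interrupt lines at instruction boundaries. All names are illustrative, not from any particular simulator; the two cycle costs shown are the 8085's documented timings for NOP and MVI.

```python
class System:
    """Peripherals that only see time through the CPU's cycle counts."""

    def __init__(self):
        self.elapsed = 0
        self.irq_line = False

    def tick(self, cycles):
        self.elapsed += cycles
        # A real system would advance timers and peripherals here, and
        # may assert self.irq_line asynchronously w.r.t. the program.

class CPU:
    # opcode -> T-state cost; a real table covers the full set.
    CYCLES = {0x00: 4, 0x3E: 7}  # NOP = 4, MVI A,d8 = 7 on the 8085

    def __init__(self, system):
        self.system = system
        self.cycle_count = 0

    def step(self, opcode):
        cost = self.CYCLES[opcode]
        # ...decode and execute the instruction here...
        self.cycle_count += cost
        self.system.tick(cost)     # system sees time only via the CPU
        if self.system.irq_line:
            pass  # interrupts are sampled at instruction boundaries

sys_sim = System()
cpu = CPU(sys_sim)
for op in (0x00, 0x3E, 0x00):  # NOP, MVI A, NOP
    cpu.step(op)               # cycle_count and elapsed both reach 15
```

The alternative arrangement (the system owns time and calls into the CPU with a cycle budget) inverts the `tick`/`step` relationship but needs the same per-instruction cycle table.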
Anyway, I call building the system simulation 'instrumenting' the processor simulation into a system simulation. In my project to eventually build the hardware for the "Maneuver Board", using the 4004 processor, I had to deal with just this kind of thinking about the problem.
I wanted to keep the 4004 simulation as a separate element, but still be able to wrap it with a keyboard and display simulation consistent with the requirements of how they interconnected with the processor.
So AlanK2's question is an interesting one. There is a significant distinction between executing code and simulating a system as a processor.
Dwight
 
I don't know. There are cycle-accurate FPGA implementations of the old MPUs, but I wonder how common a software simulator is. Software simulation that isn't cycle- and system-exact is probably okay for most applications. I commented on another thread about the impracticability of substituting an NSC800 for an 8085 that used an 8259 PIC, 8257 DMA and 8202 DRAM controller--and that's hardware for hardware.

In the past, I've adapted an x80 software simulator to also simulate certain peripherals to replace commercial PLCs, so it's possible to get an acceptable result.

A few years ago, we had a forum member, I think his name was Valentin (don't recall his handle), who used a souped-up 8052 to simulate an 8088 PC, providing CGA graphics, sound and even supporting an ISA slot. It was pretty nearly time-accurate and was a real tour de force. Don't know what happened to his project, or him, for that matter.
 
This brings up the question of what a simulator is. The execution of code by a processor is not a system; the SDK-85 is a system. A simulator that simulates an SDK-85 is no longer just an 8085 simulator. I've always described the distinction as instrumenting the processor simulator to simulate a system.
Indeed, it is the question of what a simulator is.
In this case it just simulates the input/output of the SDK-85, in the form of a display and keyboard.
So it does nothing with timing (the program runs considerably slower than the real hardware, for the purpose of examining the assembler code running on it).
I wrote it just to test code I'm creating for the 8085, to see if and where bugs may reveal themselves.
So in effect it's like a debugger; I could also name it that instead of a simulator ;-)
 
Since C# is now available under Linux, I will give it a try in the coming days.
After a quick look I came to the conclusion that it would take a lot of work; the problem is the Windows Forms base of the program as it stands.
Converting to UWP or WPF wouldn't be a solution either.
I checked Mono as a possibility, but there are many snags with that, I'm afraid.
So I'm not porting the program anytime soon.
 
I don't know. There are cycle-accurate FPGA implementations of the old MPUs, but I wonder how common a software simulator is. Software simulation that isn't cycle- and system-exact is probably okay for most applications. I commented on another thread about the impracticability of substituting an NSC800 for an 8085 that used an 8259 PIC, 8257 DMA and 8202 DRAM controller--and that's hardware for hardware.

In the past, I've adapted an x80 software simulator to also simulate certain peripherals to replace commercial PLCs, so it's possible to get an acceptable result.

A few years ago, we had a forum member, I think his name was Valentin (don't recall his handle), who used a souped-up 8052 to simulate an 8088 PC, providing CGA graphics, sound and even supporting an ISA slot. It was pretty nearly time-accurate and was a real tour de force. Don't know what happened to his project, or him, for that matter.
I've not looked at the SDK-85 code yet, but there are times when knowing the cycle counts matters. In the 4004 code I've worked with, there are a number of timing loops. If the system is interrupt-driven, cycle counting isn't as important for simulating working code, but it can make or break new code. With new x86 stuff, it is useless to even try to be cycle-accurate; you don't even know for sure in what order the processor executes the code. Still, for these early processors, timing-accurate simulation can make or break a design.
For new x86, you have to determine the worst case for any critical code.
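To illustrate why those cycle counts matter, here is a small Python tally of the T-states consumed by a classic 8085 software delay (MVI B,n / loop: DCR B / JNZ loop), using the documented 8085 timings: MVI r = 7, DCR r = 4, JNZ = 10 taken / 7 not taken. The 3.072 MHz figure is the SDK-85's internal clock (6.144 MHz crystal divided by two); everything else is plain arithmetic.

```python
MVI, DCR, JNZ_TAKEN, JNZ_NOT = 7, 4, 10, 7  # 8085 T-state costs

def delay_tstates(n):
    """T-states for MVI B,n followed by n DCR/JNZ iterations.

    The JNZ is taken n-1 times and falls through once.
    """
    return MVI + n * DCR + (n - 1) * JNZ_TAKEN + JNZ_NOT

# At 3.072 MHz each T-state is ~325 ns, so a count of 100
# gives 1404 T-states, roughly 457 microseconds of delay.
tstates = delay_tstates(100)
microseconds = tstates / 3.072
```

A simulator that doesn't charge the correct T-states per instruction would execute this loop "correctly" yet report a delay that is wrong on real hardware, which is exactly the make-or-break case for new code.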
Dwight
 
There are fixed frequency sources available on most X86 systems, so simulating something like a 4004 should be no problem timing-wise. I'd rather do the simulation on an MCU, however, where I'm not dealing with the vagaries of OS timing. My reference to Valentin's project shows how successful this can be.
 
I should clarify: I wasn't talking about early x86. Things were so much simpler then. Now we have speculative execution, register renaming, various cache memories, multiple threads, and other tricks. It is fast, but predicting how fast is not even possible. The same piece of code may take any amount of time to execute depending on history, on even a few instructions following it, as well as on how many times it has executed in the past. Things were so much simpler with the likes of the 8088.
Dwight
 
Sure, and that continues to this day. For example, those clever folks who think to create delays in some ARM code by including NOPs often are disillusioned when they realize that the execution logic edits them out of the instruction stream.
 
Sure, and that continues to this day. For example, those clever folks who think to create delays in some ARM code by including NOPs often are disillusioned when they realize that the execution logic edits them out of the instruction stream.
It can be done. There are a couple of instructions that control the instruction queue; I recall I had to use one to get the clock-speed change to work on my Blue Pill. But in general you really want timed actions outside, in the I/O circuits, and not to depend on instruction execution order. If you don't change a register, as with a NOP, and there are any branching instructions near it, the processor will often choose the fastest path. It doesn't like doing nothing when it could be faster doing something.
Dwight
 
Of course, but it takes a little thinking. For example, on a project I'm working on, the MCU can toggle lines faster than the TTL attached to them can handle them. Fortunately, I have timers running at 84 MHz to provide precise delays.
 
What MCU are you messing with Chuck? I've been an AVR fan for a long time, but haven't quite made the jump to messing with more powerful ones, ARM, etc.
 