
Intel 8085 question

The instructions were documented in an Intel memo--but said paper was not widely known or distributed. In any case, the business of automatic translation of 8085 to 8086 opcodes was a bit of a red herring--"strict" compatibility usually meant generating extra instructions anyway. Accommodating the 8085-unique instructions would have been a minor detour.

Crawling inside of the mind of Intel going on nearly 50 years ago is probably an exercise left to the insane. I can remember being told that the 8086 should not be the evolutionary target for vendors seeking to implement multi-user systems, but rather the iAPX432. Soon, soon, some preliminary documents--and then ominous silence.
 
Microsoft did use the 8080 -> 8086 translator in their early work.
So it did turn out to have an actual valid use.
 
I'm not sure where I found this article, years ago.
(but I've been using those 8085 instructions since Jesus was a little boy)
 

Attachments

  • UnDoc8085Instructions.pdf
    276.9 KB
They're also documented in the Tundra CA80C85 datasheet. But we were working with the original samples of the 8085 (not the 8085A)--you know, the one with the reset bug--and we were never informed by Intel about the added instructions.

As far as converters to 8086, there were many such, not just the Intel one. I recall trying out the 8086 converter on an MDS-800 with ISIS-II. We were being sold very hard on why we should use the 8086/186 for our next product. We'd already prototyped a 68K board; when Davidow got wind of that, he raised a stink and said it would be a cold day in Hell when he signed on to using a Motorola product in anything he had something to do with. So we eventually came out with an 80186+80286 system; the 80286 was still in early steppings, as was the 186, so the 286 code came much later (the 186 was used as the I/O front end to that).

Back to converters. I wrote a simple program in ISIS-II assembly--basically, it was floating-point code to calculate the value of pi (simple minded 4*atan(1.0)), since I'd only recently finished the floating point package for our 8085-based product.

So I cranked the converter up in "strict" mode and set to work. It eventually went belly-up without doing the job. "Fast Eddie", our Intel sales contact, said that the apps guys in Santa Clara had some bugfixes for the converter and invited me to bring my code over to the Santa Clara sales office for a demo.

What the heck, it was a nice day for a boondoggle...

We cranked the converter up on their MDS and Ed did his best to keep us entertained with various bits of news about what was going on at Intel. Well, 2 hours passed, and the converter was still cranking, so Ed took us to a very nice (and alcohol-besotted) lunch. Came back after a couple of hours; still going. Ed then took us out for happy hour. 5 PM rolled around and Ed said he'd let it crank overnight and see what happened.

We didn't hear from him for 2 weeks; he eventually said that they got the converter to finish, but that the test result was wrong--and worse yet, the result was about half-again as large as the original 8085 code.

Mind you, the code was only about 3000 lines. I believe that I still have it somewhere.

I recall that Sorcim had their own converter as well, because they used it on some code that I wrote for them.

@wrljet, I'd be interested to hear about your 8085 adventures circa 1976-77.
 
I'm not sure whether Intel would have cared about compatibility with 8080 systems. It wouldn't have been Intel's decision anyway, and vendors embracing the 8085 might even have been able to improve sales figures substantially. Compatibility with an envisioned 16-bit successor sounds much more likely to me. Either that, or plain old management incompetence; Intel has always kept instructions secret or undocumented.

Having built a C compiler that makes full use of the 8085 instructions I wonder if it was in part because the 8085 was a bit too good at running high level languages with the 8086 just around the corner. The code density and speed improvements are huge, although it still sucks compared with a 6809.

I suspect we will never know. Intel is (though not so much nowadays) built on layers of secrecy and paranoia. Anyone who knew would be long, long gone.
 
Having built a C compiler that makes full use of the 8085 instructions I wonder if it was in part because the 8085 was a bit too good at running high level languages with the 8086 just around the corner.
I think that's unlikely. The 8085, I think, was intended as a microcontroller-type product more than a general-use MPU. Consider the special support chips that allowed one to put together a complete 3-chip system. On that point, I don't imagine that the 8086 was intended as much more than a temporary bridge to a full minicomputer-type system (the iAPX 432 was supposed to be that). I suspect that's why Intel never offered anything beyond a late 6 MHz version of the 8085--most of the chips in the field were 3 MHz NMOS jobs. The HCMOS 5 MHz chips didn't get shipped until the early 80s, by which time the 8086 was already out.

Intel, in the 70s, was pretty disorganized without a clear message as far as I could tell from the marketing and sales people.
 
Having built a C compiler that makes full use of the 8085 instructions.... The code density and speed improvements are huge, although it still sucks compared with a 6809.
Can you explain how the few extra instructions on the 8085 gave a "huge" improvement in code density over the 8080? I'm not too familiar with the extra instructions, but I didn't see anything obvious in them that struck me as making a huge difference. (Not like, say, the Z80's relative branches.)
 
They helped with on-stack variable addressing (the 0x28 and 0x38 instructions), although not that much. Stack-relative addressing on the 8080 is a pain. Perhaps even 0xed and 0xd9 (load/store of HL via [DE]) helped.
Storing local variables in stack space was a comparatively new idea in 1973 MPU design. Several designs limited a stack (if there was one) to storing return addresses (e.g. National PACE).
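As a sketch of why stack-relative loads were such a pain, here's a small C model of the two instruction sequences. The memory array and function names are illustrative, not from any real emulator, and the byte counts in the comments are from the standard encodings, so treat this as a rough comparison rather than a cycle-accurate one:

```c
#include <assert.h>
#include <stdint.h>

uint8_t mem[0x10000];  /* toy 64K address space */

/* 8085: LDSI n (DE = SP + n), then LHLX (HL = word at [DE]).
   Three bytes of code, and only DE is used as scratch. */
uint16_t load_stack_word_8085(uint16_t sp, uint8_t n)
{
    uint16_t de = (uint16_t)(sp + n);                 /* LDSI n */
    return mem[de] | (mem[(uint16_t)(de + 1)] << 8);  /* LHLX   */
}

/* 8080: LXI H,n / DAD SP / MOV E,M / INX H / MOV D,M.
   Seven bytes, and HL is burned just to form the address. */
uint16_t load_stack_word_8080(uint16_t sp, uint8_t n)
{
    uint16_t hl = n;               /* LXI H,n */
    hl = (uint16_t)(hl + sp);      /* DAD SP  */
    uint8_t e = mem[hl];           /* MOV E,M */
    hl++;                          /* INX H   */
    uint8_t d = mem[hl];           /* MOV D,M */
    return (uint16_t)(e | (d << 8));
}
```

Both fetch the same 16-bit stack word; the difference is the code size and which registers are left standing afterwards.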
 
Re: Intel's reason for leaving a dozen 8085 instructions undocumented (lack of forward compatibility with the 8086) -

I know that I've posted on this years ago and that was my suspicion, but that's hard to substantiate.

FWIW, Stanley Mazor substantiates it in an IEEE article (https://ieeexplore.ieee.org/ielx5/85/5430751/05430762.pdf). Supposedly it was Davidow's idea:

"Another constraint for the new 8086 CPU resulted from the 12 new instructions implemented in the yet to be announced 8085. Although 8085 users would benefit from these new instructions, they would burden the 8086 instruction set. Davidow made a surprising and important decision: leave all 12 instructions on the already designed 8085 CPU chip, but document and announce only two of them! A CPU chip is a monolithic silicon structure that doesn’t easily allow adding or removing logical functions whereas a CPUs paper reference manual is easily modified."
 
They helped with on-stack variable addressing (the 0x28 and 0x38 instructions), although not that much. Stack-relative addressing on the 8080 is a pain. Perhaps even 0xed and 0xd9 (load/store of HL via [DE]) helped.
Storing local variables in stack space was a comparatively new idea in 1973 MPU design. Several designs limited a stack (if there was one) to storing return addresses (e.g. National PACE).

They make a big difference in performance and code density.

Something like

int foo(int x, int y)
{
return x+y;
}

ends up as
_foo:

ldsi 2		; DE = SP+2, the address of x
lhlx		; HL = x
push h		; save x (SP drops by 2)
ldsi 6		; DE = SP+6, now the address of y
lhlx		; HL = y
pop d		; DE = x
dad d		; HL = x + y, the return value
;
ret

and whilst you can do that specific case other ways on the 8080 (pop a lot), it's not generalized, whereas the above is. It's also smaller and usually faster than the Z80 trying to use IX as a frame pointer, with long instructions for slow 8-bit IX-relative loads.

By comparison - the Bourne Shell used in Fuzix is (same compiler in each case):
  • 22.5K of code built for Z80 using size optimizations (call helpers for getting stack offsets, like 8080), and a couple of K larger using IX offsets
  • 22.5K of code built for 8085 using ldsi/lhlx inline (so much faster than the Z80)
  • 24.5K of code built for 8080 using call helpers for getting stack offsets - and it blows up hugely if you inline all that using LXI H,offset / DAD SP / MOV ...

The much later 6809 is of course way better as you'd expect:

_foo:
ldd 2,s
addd 4,s
;
rts

Ditto the 6303/803/HC11

_foo:
tsx
ldd 2,x
addd 4,x
rts

And even the 6800, though a bit messed up on stack handling, is not much worse: two 8-bit loads and add/adc instead of the 16-bit ops.

The 8085 compiler output is much smaller and a *lot* faster than the 8080 version. It's possible to get the 8080 one closer in size, but there is then a big performance cost.

Some of it is the register focus of the 8080 - the lack of reg/const and reg/mem-offset operations also hurts code density, something the 8086 fixed comprehensively - aside from the boilerplate to set up BP, it's basically the same as the 03/09 for the add example.


My 8 MHz Tundra 80C85 is noticeably faster than the 8 MHz Z80 system running the same C code. Obviously it's not so simple for hand-written assembly, as the Z80 in particular can really shine with global values and some hand-crafted all-register chunks of computation.
 
When the 8085 (and, for that matter, the 8080) was hatched, I don't think anyone was thinking about performance in HLLs, particularly C. I imagine that the performance with PL/M would be about the same, if there were a version of PL/M for the 8085-specific instructions.
One thing that stood in the way for all of the x80 stuff, in my opinion, is the lack of relative addressing. True, the Z80 is better, with PC-relative jumps and limited IX/IY relative addressing, but for us, a simple code relocation register would have made a huge difference in thinking. We capitulated to the state of things and went with interpreted P-code, which turned out to be not such a bad idea.

As it was, CP/M (and MP/M, etc.) had to resort to the rather clumsy PRL bitmap, and moving code, once loaded, becomes nearly impossible unless the bitmap is also kept around. At a minimum, I would have liked to have seen an 8-bit relocation register that was added to the upper 8 bits of a code address. We did do something akin to this later, by dividing memory up into 1K blocks and mapping them via a bipolar lookup RAM.
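The PRL scheme mentioned above can be sketched in a few lines of C. This assumes the usual CP/M convention (code image assembled for page 1, i.e. 0x0100; one relocation bit per code byte, MSB first; a set bit marks the high byte of an address) and the function name is mine, not Digital Research's:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: apply a PRL-style relocation bitmap.
   Each set bit in the bitmap marks a code byte holding the high
   half of an address; since the image was assembled for page 1,
   that byte gets (target_page - 1) added to it. */
void prl_relocate(uint8_t *code, size_t len,
                  const uint8_t *bitmap, uint8_t target_page)
{
    for (size_t i = 0; i < len; i++)
        if (bitmap[i / 8] & (uint8_t)(0x80u >> (i % 8)))
            code[i] = (uint8_t)(code[i] + target_page - 1);
}
```

Note that relocating the image a second time requires keeping the bitmap around, which is exactly the clumsiness being complained about: the loaded code alone no longer says which bytes are addresses.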

Well, it was what it was for the time.
 
I don't think that's totally true. More accurately, they were not focused on languages that made elegant use of the stack for local variables and recursion. COBOL is very well served by the 8080's DAA, and the Z80's RLD/RRD come into their own in those use cases (rather than bitmap scrolling ;) ). Likewise, Fortran of the time wasn't really stack oriented. C was still a baby language that only really came into its own in the very late 1970s/early 80s. Even in the minicomputer space, things like the DG Nova didn't grow stack instructions until the Nova 3 in the mid 1970s.

The 8085 clearly was thinking about stack-oriented languages or they wouldn't have bothered with LDSI, and the 8086, which was heavily oriented that way, was being designed at that point. The 186 shows that it continued to be an important area too (ENTER dealt with things like Pascal, and extending PUSH to push constants was a big win for stack-oriented languages, added into the CPU at the same time).
 
COBOL has both decimal and binary representations (COMP vs. COMP-1 or COMP-2, not to forget DISPLAY), and I doubt that DAA had much of an influence on performance. On the 4004 (but not the 8008), 8080 and 8085, DAA is only useful for addition; for subtraction, DAS didn't come along until the 8086. I wonder if it wasn't added to the 8080 set because of its presence in the 4004.
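For the curious, the add-then-DAA behavior on packed BCD can be modeled in a few lines of C. This is a sketch of the correction step for addition only (as noted, there is no DAS-style subtraction fixup before the 8086); the carry out of the top digit is dropped, and the function name is mine:

```c
#include <stdint.h>

/* Model of ADD followed by DAA on packed BCD, 8080/8085 style:
   fix the low nibble if it exceeds 9 or produced an auxiliary
   carry, then fix the high nibble likewise. The carry out of
   the byte (the "hundreds" digit) is discarded here. */
uint8_t bcd_add(uint8_t a, uint8_t b)
{
    unsigned sum = a + b;
    unsigned aux = (a & 0x0F) + (b & 0x0F);  /* auxiliary-carry source */
    if ((sum & 0x0F) > 9 || aux > 0x0F)
        sum += 0x06;                         /* low-digit fixup  */
    if ((sum >> 4) > 9)
        sum += 0x60;                         /* high-digit fixup */
    return (uint8_t)sum;
}
```

So 0x19 + 0x23 corrects from the binary 0x3C to 0x42, i.e. decimal 19 + 23 = 42, which is the whole trick that makes COBOL-style decimal arithmetic cheap on these parts.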

By "HLL", I should have said "modern-ish HLLs", all of which seem to assume a stack of some sort, although a hardware stack isn't strictly necessary (e.g. S/360 doesn't have one but accommodates stack-oriented languages quite nicely as do other large mainframes). Even on some 8080-contemporary CPUs, the stack was used mostly to store return addresses (e.g. National Semi PACE) and is of limited size.
 