
Intel 8085 question

The instructions were documented in an Intel memo--but said paper was not widely known or distributed. In any case, the business of automatic translation of 8085 to 8086 opcodes was a bit of a red herring--"strict" compatibility usually meant generating extra instructions anyway. Accommodating the 8085-unique instructions would have been a minor detour.

Crawling inside of the mind of Intel going on nearly 50 years ago is probably an exercise left to the insane. I can remember being told that the 8086 should not be the evolutionary target for vendors seeking to implement multi-user systems, but rather the iAPX432. Soon, soon, some preliminary documents--and then ominous silence.
 
Microsoft did use the 8080 -> 8086 translator in their early work.
So it did turn out to have an actual valid use.
 
I'm not sure where I found this article, years ago.
(but I've been using those 8085 instructions since Jesus was a little boy)
 

Attachments

  • UnDoc8085Instructions.pdf
    276.9 KB
They're also documented in the Tundra CA80C85 datasheet. But we were working with the original samples of the 8085 (not the 8085A)--you know, the one with the reset bug--and were never informed by Intel about the added instructions.

As far as converters to 8086, there were many such, not just the Intel one. I recall trying out the 8086 converter on an MDS-800 with ISIS-II. We were being sold very hard on why we should use the 8086/186 for our next product. We'd already prototyped a 68K board; when Davidow got wind of that, he raised a stink and said it would be a cold day in Hell when he signed on to using a Motorola product in anything he had something to do with. So we eventually came out with an 80186+80286 system; the 80286 was still in early steppings, as was the 186, so the 286 code came much later (the 186 was used as the I/O front end to that).

Back to converters. I wrote a simple program in ISIS-II assembly--basically, it was floating-point code to calculate the value of pi (simple minded 4*atan(1.0)), since I'd only recently finished the floating point package for our 8085-based product.

So I cranked the converter up in "strict" mode and set to work. It eventually went belly-up without doing the job. "Fast Eddie", our Intel sales contact, said that the apps guys in Santa Clara had some bugfixes for the converter and invited me to bring my code over to the Santa Clara sales office for a demo.

What the heck, it was a nice day for a boondoggle...

We cranked the converter up on their MDS and Ed did his best to keep us entertained with various bits of news about what was going on at Intel. Well, 2 hours passed, and the converter was still cranking, so Ed took us to a very nice (and alcohol-besotted) lunch. Came back after a couple of hours; still going. Ed then took us out for happy hour. 5 PM rolled around and Ed said he'd let it crank overnight and see what happened.

We didn't hear from him for 2 weeks; he eventually said that they got the converter to finish, but that the test result was wrong--and worse yet, the converted code was about half-again as large as the original 8085 code.

Mind you, the code was only about 3000 lines. I believe that I still have it somewhere.

I recall that Sorcim had their own converter as well, because they used it on some code that I wrote for them.

@wrljet, I'd be interested to hear about your 8085 adventures circa 1976-77.
 
I'm not sure whether Intel would have cared about compatibility with 8080 systems. It wouldn't have been Intel's decision anyway, and vendors embracing the 8085 might even have been able to improve sales figures substantially. Compatibility with an envisioned 16-bit successor sounds much more likely to me. Either that, or plain old management incompetence; Intel has always kept some instructions secret or undocumented.

Having built a C compiler that makes full use of the 8085 instructions I wonder if it was in part because the 8085 was a bit too good at running high level languages with the 8086 just around the corner. The code density and speed improvements are huge, although it still sucks compared with a 6809.

I suspect we will never know. Intel is (not so well nowadays) built on layers of secrecy and paranoia. Anyone who knew would be long, long gone.
 
Having built a C compiler that makes full use of the 8085 instructions I wonder if it was in part because the 8085 was a bit too good at running high level languages with the 8086 just around the corner.
I think that's unlikely. The 8085, I think, was intended as a microcontroller-type product more than a general-use MPU. Consider the special support chips that allowed one to put together a complete three-chip system. On that point, I don't imagine that the 8086 was intended as much more than a temporary bridge to a full minicomputer-type system (the iAPX 432 was supposed to be that). I suspect that's why Intel never offered anything beyond a late 6 MHz version of the 8085--most of the chips in the field were 3 MHz NMOS jobs. The HCMOS 5 MHz chips didn't ship until the early 80s, by which time the 8086 was already out.

Intel, in the 70s, was pretty disorganized without a clear message as far as I could tell from the marketing and sales people.
 
Having built a C compiler that makes full use of the 8085 instructions.... The code density and speed improvements are huge, although it still sucks compared with a 6809.
Can you explain how the few extra instructions on the 8085 gave a "huge" improvement in code density over the 8080? I'm not too familiar with the extra instructions, but I didn't see anything obvious in them that struck me as making a huge difference. (Not like, say, the Z80's relative branches.)
 
They helped with on-stack variable addressing (the 0x28 and 0x38 instructions), although not that much; stack-relative addressing on the 8080 is a pain. Perhaps also 0xED and 0xD9 (load/store of HL via the address in DE).
Storing local variables in stack space was a comparatively new idea in 1973 MPU design. Several designs limited a stack (if there was one) to storing return addresses (e.g. National PACE).
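As an illustrative sketch of the point above (using the commonly seen unofficial mnemonics LDSI for opcode 0x38 and LHLX for 0xED--Intel never published official names for most of these), here is how fetching a 16-bit local at SP+4 might compare against the plain 8080 sequence. Byte counts assume the standard 8080/8085 encodings.

```asm
; Fetch the 16-bit local variable at SP+4 into HL.

; With the undocumented 8085 instructions (3 bytes):
        LDSI  4        ; 0x38: DE = SP + 4
        LHLX           ; 0xED: L = [DE], H = [DE+1]

; Plain 8080-compatible equivalent (8 bytes):
        LXI   H,4      ; HL = 4
        DAD   SP       ; HL = SP + 4
        MOV   E,M      ; low byte of the local
        INX   H
        MOV   D,M      ; high byte
        XCHG           ; HL = value (DE left pointing into the stack)
```

The 8080 version also clobbers an extra register pair along the way, which is part of why stack-frame-style code generation on the stock 8080 is so painful.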
 
Re: Intel's reason for leaving a dozen 8085 instructions undocumented (lack of forward compatibility with the 8086):

I know that I've posted on this years ago and that was my suspicion, but that's hard to substantiate.

FWIW, Stanley Mazor substantiates it in an IEEE article (https://ieeexplore.ieee.org/ielx5/85/5430751/05430762.pdf). Supposedly it was Davidow's idea:

"Another constraint for the new 8086 CPU resulted from the 12 new instructions implemented in the yet to be announced 8085. Although 8085 users would benefit from these new instructions, they would burden the 8086 instruction set. Davidow made a surprising and important decision: leave all 12 instructions on the already designed 8085 CPU chip, but document and announce only two of them! A CPU chip is a monolithic silicon structure that doesn’t easily allow adding or removing logical functions whereas a CPUs paper reference manual is easily modified."
 