
Z80 Intel hex loader

Intel Hex files by their very nature allow for the possibility of two or more distinct blocks of code which are intended to reside in different areas of memory without having to bridge or fill the gaps in between with padding code.

In your source code you would use ORG statements to define the start of each distinct block of code, and if you have the assembler produce Intel Hex output, the records will contain the addresses to which the loader should direct each block.

For example, if you assemble this bit of nonsense code and generate the output as Intel Hex:

ORG 0100h
DEFB 00h, 01h, 02h, 03h, 04h, 05h, 06h, 07h
ORG 0200h
DEFB 08h, 09h, 0Ah, 0Bh, 0Ch, 0Dh, 0Eh, 0Fh

The loader will (or should) load the first 8 bytes into 0100-0107 and the second 8 bytes into 0200-0207 without any padding code in between.
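A loader that honors those embedded addresses can be sketched in a few lines of Python. This is only a minimal illustration, handling just type-00 (data) and type-01 (end-of-file) records; the two data records below are hand-built to match the two-ORG example above:

```python
def load_ihex(lines):
    """Load Intel HEX records into a sparse {address: byte} map.

    Handles only type-00 (data) and type-01 (EOF) records, and
    verifies each record's checksum (all bytes, including the
    checksum byte, must sum to 0 mod 256).
    """
    memory = {}
    for line in lines:
        line = line.strip()
        if not line.startswith(":"):
            continue
        raw = bytes.fromhex(line[1:])
        if sum(raw) & 0xFF != 0:
            raise ValueError("checksum error: " + line)
        count, addr, rectype = raw[0], (raw[1] << 8) | raw[2], raw[3]
        if rectype == 0x01:               # EOF record
            break
        if rectype == 0x00:               # data record: place bytes at addr
            for i, b in enumerate(raw[4:4 + count]):
                memory[addr + i] = b
    return memory

# Records corresponding to the two-ORG example: 8 bytes at 0100h, 8 at 0200h
hex_lines = [
    ":080100000001020304050607DB",
    ":0802000008090A0B0C0D0E0F9A",
    ":00000001FF",
]
mem = load_ihex(hex_lines)
# mem holds 16 bytes total, at 0100-0107 and 0200-0207, with no padding between
```

The sparse dictionary makes the key point: nothing is ever written to the gap between the two blocks.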
 
I understood the issue to be "I assemble the code in low memory, then move it to high memory for execution". Intel hex format alone won't save you.
If you assemble the code as two sections, say as "x" and "y" and concatenate the files, the loader will still create an absolute image that extends from the beginning of x to the end of y.
You really want a solution that says "only x knows where y will eventually be located". That's the reason for the PHASE/DEPHASE pseudo ops. That saves you from having to code stuff like this:

Code:
RELOC   EQU     $D000                   ; where my code will wind up
CBASE   EQU     $                       ; where the code is assembled
        LD      A,(CELL+RELOC-CBASE)    ; get the contents of CELL
CELL    DEFB    0

So the code, when relocated to $D000, will correctly reference CELL. Of course, relative jumps don't have to go through this obscenity, but there you have it.
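The arithmetic in that LD is just a constant offset applied at assembly time; a toy Python check (the concrete values for CBASE and CELL's assembled address are assumptions for illustration):

```python
RELOC = 0xD000   # where the code will wind up (from the snippet above)
CBASE = 0x0100   # where the code is assembled (assumed)
CELL  = 0x0105   # CELL's assembled address (assumed: 5 bytes past CBASE)

# The operand the assembler emits for LD A,(CELL+RELOC-CBASE):
runtime_cell = CELL + RELOC - CBASE
# i.e. CELL's address after the block has been moved up to RELOC
```

Every absolute reference in the block needs the same RELOC-CBASE adjustment, which is exactly why PHASE/DEPHASE exist.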
 
My thinking is that if the whole program starts off with:
;----------------------------------------------------
;Code to start program and move to higher memory
;------------------------------------------------------
org 0100h
ld hl,010Eh ;start of code to transfer
ld bc,163h+1 ;length of code to transfer
ld de,0DB00h ;target of transfer
ldir ;Z80 transfer instruction
jp 0DB00h ;jump to relocated code

And is followed by:

;------------------------------------------------------
;Start of actual program
;------------------------------------------------------
code_origin:
org 0DB00h
code_start:
;
;READ AND PRINT SUCCESSIVE BUFFERS
LD DE,SIGNON ;WELCOME MESSAGE
;D,E ADDRESSES MESSAGE ENDING WITH "$"
LD C,PRINTF ;PRINT BUFFER FUNCTION 9
CALL BDOS
;------------------------------------------------------
;etc.........

The first part of the code will relocate the second part up to DB00h where it is designed to run, and then jump to that code at DB00h and start executing.
 
Sure, but what will the address of "SIGNON" be in the LD DE instruction? If it was assembled, say, at 0120h, but really resides at DC20h, you'll have a problem, because the assembler will have used the value it knows, i.e. 0120h, in the LD DE instruction. And that would be incorrect.
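The stale-pointer problem can be shown with a toy memory image in Python: the block copy moves the operand bytes verbatim, so the embedded address still points at the assembly-time location (all addresses here are made up for illustration):

```python
# Toy 64K memory image with a LD DE,nn whose operand was assembled as 0120h
memory = bytearray(0x10000)
SRC, DST, LENGTH = 0x0100, 0xDB00, 0x40

memory[SRC]     = 0x11   # LD DE,nn opcode
memory[SRC + 1] = 0x20   # low byte of assembled operand 0120h
memory[SRC + 2] = 0x01   # high byte

# The LDIR-style block move: bytes are copied verbatim
memory[DST:DST + LENGTH] = memory[SRC:SRC + LENGTH]

operand = memory[DST + 1] | (memory[DST + 2] << 8)
# operand is still 0120h, not DC20h -- the copy does not fix up addresses
```

Only an ORG at the run address (or PHASE/DEPHASE, or explicit offset arithmetic) makes the assembler emit DC20h in the first place.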

Honest, been there, done that. It's called overlay execution. On systems with limited memory, it used to be the only way to handle very large programs. WordStar is one such.

One thing not mentioned, which I find peculiar, is that DOS/360 was a fairly small resident (4-12KB) with lots and lots of "transient phases". But they are mentioned in the Wikipedia section for the IBM DOS/360 operating systems. $$BOPEN, anyone?
 
Myke, your proposed code should be fine. Because you use a "org 0db00h" in your overlay module, all those addresses will be correct. One thing that is typically done in those cases is add a label for where the relocatable code will reside after being combined, like:
Code:
    org    100h
    ld    hl,code
    ld    bc,163h+1
    ld    de,0db00h
    ldir
    jp    0db00h
code:
    end
If your method of combining the two modules has byte-granularity (as opposed to padding things out to some block size), then the 'code' symbol will have the correct address. There are things you can do to get fancier, too, like making "0db00h" a symbol, and possibly even keeping it in a "include" file that both modules use. Abstracting the code length is a bit trickier.
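As a back-of-envelope check on that byte granularity: the stub above is 14 bytes of Z80 code, which is exactly why the earlier version hard-coded 010Eh as the source address. A quick tally (instruction sizes per the Z80 manual):

```python
# Size in bytes of each instruction in the relocation stub
stub = [
    ("ld hl,nn", 3),
    ("ld bc,nn", 3),
    ("ld de,nn", 3),
    ("ldir",     2),   # two-byte opcode ED B0
    ("jp nn",    3),
]
ORG = 0x0100
code_symbol = ORG + sum(size for _, size in stub)
# code_symbol is 010Eh -- the address the 'code' label would get,
# matching the hard-coded "ld hl,010Eh" in the earlier version
```

Letting the assembler compute the label (and the length, via a second label at the end) removes both magic numbers.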

I would throw away that 'code_origin' symbol - it should never really be used and is somewhat undefined. In most assemblers it probably equates to "0", but I don't think you should be using that in the program.
 
That second part is just the front few lines of the code that would execute at DB00h. Here is the rest of it that I have written so far:
 

Attachments

  • 2SIO.txt
That second part is just the front few lines of the code that would execute at DB00h. Here is the rest of it that I have written so far:
Your code is fine - at least as far as the ORG and addresses generated (I did not do a code review - but you have too much fun debugging, right? ;-) ).
 
In my previous episode, I added some code to let me know where I was in executing my program. It seemed to be hung in the loop to read the status from the 8251A on port A. So I checked to see if there was an error in entering the code and I couldn't find one. So I went back to the code I used as a starting point. One uses a 'BIT' test and the other uses an 'AND' test. I tried both ways and got the same result. I'm thinking that either my hardware data path control logic is not functioning, or I have an issue with my coding. I guess both 8251A chips could be bad, but I highly doubt that. And I'm thinking it isn't a code issue, since the code works in other programs. Anyway, these first two are the examples of what I used, and the third is what is in my code..... And Merry Christmas and Happy Holidays.
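For what it's worth, the two polling styles really should behave identically: AND 02h and BIT 1,A both set the Z flag from the same status bit (assuming here that the bit being polled is RxRDY, bit 1 of the 8251A status register). A quick Python sanity check of the equivalence:

```python
RXRDY_MASK = 0x02   # 8251A status register: bit 1 = RxRDY (assumed poll target)

def and_test(status):
    # AND 02h / JP Z,loop style: keep looping while the masked result is zero
    return (status & RXRDY_MASK) != 0

def bit_test(status):
    # BIT 1,A / JP Z,loop style: BIT sets Z when the tested bit is 0
    return (status >> 1) & 1 == 1

# The two styles agree for every possible status byte
agree = all(and_test(s) == bit_test(s) for s in range(256))
```

So if both versions hang the same way, the status byte itself (i.e. the hardware path) is the more likely suspect than the test instruction.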
 
It is funny that there is the effort to get hex-to-binary working on the target machine to bootstrap. When I was getting my CP/M working on my IMSAI, I wanted to keep the amount of code that I needed to enter to a minimum. Why would I make something as complicated as a hex-to-binary converter on the target, where it was more likely that there was an entry error than a data error in sending? Then, what would I do if there was a data error in sending? I did any hex-to-binary, or direct assembly-to-binary, on the sending machine.
Luckily, the IMSAI had a 6402-based serial board (I forget the exact manufacturer's number for the same thing), as it needed no other initialization than a dummy read at the start.
Once I had the basic serial bios for CP/M running, followed by read/write of the disk, I could transfer hex files and let CP/M deal with it.
So I'm wondering why one would go to the effort of writing hex-to-binary code on the target machine when simple binary data would be enough.
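Dwight's approach (verify and flatten on the PC, send only raw binary) might look like this minimal Python sketch; the FFh gap fill and the abort-on-bad-checksum behavior are my choices, not anything from the thread:

```python
def ihex_to_binary(lines, base):
    """Flatten type-00 Intel HEX records into raw bytes starting at `base`.

    Done on the sending PC, so a bad checksum aborts before anything
    goes down the serial line. Gaps between records are padded with FFh.
    """
    image = bytearray()
    for line in lines:
        line = line.strip()
        if not line.startswith(":"):
            continue
        raw = bytes.fromhex(line[1:])
        if sum(raw) & 0xFF != 0:
            raise ValueError("checksum error, aborting: " + line)
        count, addr, rectype = raw[0], (raw[1] << 8) | raw[2], raw[3]
        if rectype != 0x00:               # skip EOF and anything exotic
            continue
        offset = addr - base
        if len(image) < offset + count:
            image.extend(b"\xff" * (offset + count - len(image)))
        image[offset:offset + count] = raw[4:4 + count]
    return bytes(image)

# A 3-byte example record: JP 0D00h assembled at 0100h
blob = ihex_to_binary([":03010000C3000D2C", ":00000001FF"], base=0x0100)
# blob is now raw binary, ready to send byte-for-byte over the serial port
```

The target then only needs a dumb "read byte, store byte" loop, which is far less to toggle in by hand.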
Dwight
 
I should note that bootstrapping a machine with an 8250 or 8251 is a pain. Either takes about 50 bytes to get set up and, as mentioned, it's only needed once.
Dwight
 
As Dwight mentions, a fixed-rate UART, such as the 6402, TR1602, TR1865, AY-3-1015, or AY-5-1013, makes life simple. All parameters are set by external wiring, so it's mostly a matter of reading and writing the data. Transmission error flags are brought out to hardwired connections as well. This is the way the Altair 8800 SIO card worked--no programming necessary. There are many other similar ICs.
 
Thanks for the list; I think mine was a 1602. That sounds familiar. I recall having trouble finding a data sheet for it when I found that there were many with exactly the same pins. The only difference I ever saw was the maximum baud rate (well beyond anything I expected an 8080 to run at).
Dwight
 
It is funny that there is the effort to get hex-to-binary working on the target machine to bootstrap. When I was getting my CP/M working on my IMSAI, I wanted to keep the amount of code that I needed to enter to a minimum. Why would I make something as complicated as a hex-to-binary converter on the target, where it was more likely that there was an entry error than a data error in sending? Then, what would I do if there was a data error in sending? I did any hex-to-binary, or direct assembly-to-binary, on the sending machine.
Luckily, the IMSAI had a 6402-based serial board (I forget the exact manufacturer's number for the same thing), as it needed no other initialization than a dummy read at the start.
Once I had the basic serial bios for CP/M running, followed by read/write of the disk, I could transfer hex files and let CP/M deal with it.
So I'm wondering why one would go to the effort of writing hex-to-binary code on the target machine when simple binary data would be enough.
Dwight
I'm trying to 'bootstrap' the machine. It has a monitor ROM which has an RS-232 port and also gets me into CP/M 2.2 from a CF card. I want to use the console port solely for connecting to a VT100-emulating terminal. The other two 8251A RS-232 ports are for other peripherals. I want the ability to use one of them to download code from my PC. That is the sole reason for building the dual-port serial card. I'm in the process of debugging the hardware/software to figure out what isn't working. I'm probably going to have to wire up some kind of status indicators to do that, since I do not have access to a logic analyzer.
 
The GI part numbers were always a mystery to me. I was never sure of what the number after the "AY" signified. One of these days, I'll figure it out.
 
I'm trying to 'bootstrap' the machine. It has a monitor ROM which has an RS-232 port and also gets me into CP/M 2.2 from a CF card. I want to use the console port solely for connecting to a VT100-emulating terminal. The other two 8251A RS-232 ports are for other peripherals. I want the ability to use one of them to download code from my PC. That is the sole reason for building the dual-port serial card. I'm in the process of debugging the hardware/software to figure out what isn't working. I'm probably going to have to wire up some kind of status indicators to do that, since I do not have access to a logic analyzer.
darn, big typo... I'm NOT trying to bootstrap..... makes all the difference in the world.... sorry for the confusion.
 
I guess this would be a wise addition to make it easier to figure out what I'm doing:
[attached schematic image: 1672088984544.png]
 
That being said, I still think the 8251 is a terrible device. At the time it was introduced, the unique thing was that it could do both sync and async protocols. But it was a beast to get to work reliably.

Even in 2020 folks were still having issues with it

I last opined on it 6 years ago, and I haven't changed my opinion at all since then

Just as an aside, I trust that you're running the serial clock at 16X...don't try to do 1X; it doesn't work with async.
Yep, 1.836 MHz divided by 12 to get 153 kHz. The Mode instruction is 4Eh, which is 16X, 8 data bits, no parity, and 1 stop bit.
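Those settings can be double-checked by decoding the mode byte; a quick sketch (bit layout is the standard 8251A async mode format; the clock figure is taken from the post above, and the nominal 9600 baud comparison is my assumption):

```python
def decode_8251_mode(mode):
    """Decode an 8251A asynchronous mode byte into its fields."""
    factor = {0b01: 1, 0b10: 16, 0b11: 64}[mode & 0x03]          # D1..D0: baud factor
    data_bits = 5 + ((mode >> 2) & 0x03)                          # D3..D2: char length
    parity_on = bool((mode >> 4) & 1)                             # D4: parity enable
    stop = {0b01: "1", 0b10: "1.5", 0b11: "2"}[(mode >> 6) & 3]   # D7..D6: stop bits
    return factor, data_bits, parity_on, stop

factor, data_bits, parity_on, stop = decode_8251_mode(0x4E)
# 4Eh decodes to 16X, 8 data bits, parity off, 1 stop bit

baud = 1_836_000 / 12 / factor
# 9562.5 -- within about 0.4% of a nominal 9600, fine for async
```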
 
Again, I'm still not real sure why you need to do "Intel HEX". Your PC source is the best place to convert from hex to binary. That way you can abort if there is an error in things like the checksums. Programming is easier there and does the work for you. The CP/M program to take a byte from a serial port and buffer it into a file as binary data is a lot less work.
As I recall, CP/M has a hex-to-binary and a binary-to-hex, but lacks the Intel formatting (might be brain fog on my part).
Anyway, have fun.
Dwight
 
Again, I'm still not real sure why you need to do "Intel HEX". Your PC source is the best place to convert from hex to binary. That way you can abort if there is an error in things like the checksums. Programming is easier there and does the work for you. The CP/M program to take a byte from a serial port and buffer it into a file as binary data is a lot less work.
As I recall, CP/M has a hex-to-binary and a binary-to-hex, but lacks the Intel formatting (might be brain fog on my part).
Anyway, have fun.
Dwight
I just titled the thread 'Hex Loader' since I started it by posting the code for a hex loader program someone wrote years ago.
 