
Passing parameters/arguments

I'm guessing this is a really dumb question, but isn't a program that types a file out like you want normally just something small like this? (I didn't include a check that the file actually opened, just assuming the read will work, because I wanted to keep the example simple - though I remember you had a fairly long program that did other things, so I'm confused.) Also, please forgive the messy mnemonics; I don't have the hang of posting code snippets to the forum yet.

BDOS EQU $0005 ; BDOS entry point.
FCB1 EQU $005C ; Default FCB set up by the CCP.

.ORG $0100

LD DE,$0080
LD C,$1A ; Function 1A - Set the DMA address to the buffer at 0080.
CALL BDOS ; Set DMA to 0080 -

LD DE,FCB1 ; FCB
LD C,$0F ; Function 0F - Open File.
CALL BDOS ; Open the file named in the default FCB.
TYPE_READ:
LD DE,FCB1 ; FCB
LD C,$14 ; Function 14 - Read Sequential.
CALL BDOS ; Read the next 128-byte record into the DMA buffer.

OR A ; A = 0 means a record was read.
JR NZ,TYPE_DONE ; Non-zero means end of file, so stop before printing.
CALL PRINT_RECORD
JR TYPE_READ
TYPE_DONE:

LD E,$0D ; Carriage return.
LD C,2 ; Function 2 - Console Output.
CALL BDOS

LD E,$0A ; Line feed.
LD C,2
CALL BDOS

RET

PRINT_RECORD:
LD B,128
LD HL,$0080 ; DMA address.
PRINT_RECORD_LOOP:
PUSH HL
PUSH BC
LD E,(HL) ; Character to print.
LD C,2 ; Function 2 - Console Output.
CALL BDOS
POP BC
POP HL
INC HL
DJNZ PRINT_RECORD_LOOP

RET
 
Sure, there are many ways to accomplish this (your code needs to check for ctrl-Z, but otherwise looks OK). Myke wanted to learn all this (more or less) from scratch, so instead of writing it for him, I pointed him to DUMP.ASM as an example that reads a file and processes each byte.
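For anyone following along, the missing ctrl-Z check belongs in PRINT_RECORD, since $1A marks end-of-file inside the last record of a CP/M text file. A sketch only - it reuses BDOS and the $0080 buffer from the listing above, and simply stops printing at the marker (a fuller version would also signal the caller to stop reading records):

```asm
PRINT_RECORD:
LD B,128
LD HL,$0080 ; DMA address.
PRINT_RECORD_LOOP:
LD A,(HL)
CP $1A ; Ctrl-Z is the text-file EOF marker.
RET Z ; Stop printing the moment we see it.
PUSH HL
PUSH BC
LD E,A
LD C,2 ; Function 2 - Console Output.
CALL BDOS
POP BC
POP HL
INC HL
DJNZ PRINT_RECORD_LOOP
RET
```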
 
It very well could, and probably should, but since this is the first CP/M program I have fiddled with since the early '80s, I wanted to start with something that worked and build from that. It's easier for me to learn that way.
 
Ahh, I think I get it now. I thought the original was doing some other stuff, like printing the file-memory location of the start of each line and then pushing out the ASCII instead of the hex, and that was why you started with existing code. I'm just learning also, and that was the first time I tried reading a file in and displaying it... I've been learning from this thread in parallel too, so it's been a great thread to watch unfold. I wasn't able to do that when this thread started - Myke's right... There are some seriously good teachers on this forum.

Myke, from the look of that book, you are working across 8080/8085 as well - which I just can't seem to do. I wrote a translator to convert from 8080 to z80 and that seems to work OK, but I just can't get my head around the 8080 mnemonics... :( I've been using this resource to translate - http://popolony2k.com.br/xtras/programming/asm/nemesis-lonestar/8080-z80-instruction-set.html

I understand z80 pretty well, but CP/M has been a steep learning curve.
 
Well, it seems a lot of early CP/M was written in 8080, which does run on a Z-80, so I am hoping the book will help. I have a softcopy I got from archive.org, but the last couple of appendices are all garbled. Besides, a book is easier to dig through for me. I like that link you provided. It will be another resource for me. Just wish I could find a 'good' Windows-based GUI for disassembly. I know many people seem to use a command line assembler, but I really love the zDevStudio IDE I use for writing and assembling code.

[attachment: 1670511673890.png]
 
...
Myke, From the look of that book, you are working across 8080/8085 as well - which I just can't seem to do.
...
Just one comment about Zilog vs. Intel mnemonics (note, it is not really about 8080 CPU vs. Z80 CPU). The Intel mnemonics are closely tied to the actual CPU instruction architecture, such that each opcode mnemonic represents a specific machine opcode, which also represents the addressing mode. There is no ambiguity with the Intel mnemonics. But, learning them is really just a matter of understanding the CPU and committing to memory the mnemonics. Again, this issue is not about CPU types but rather about assembler choices. I work in Intel mnemonics, even on a Z180.
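A concrete illustration of that point - the same five 8080-subset instructions assemble to identical machine code under either set of mnemonics:

```asm
; Machine code   Intel (8080)     Zilog (Z80)
; 3E 41          MVI A,41H        LD A,$41       ; load immediate
; 78             MOV A,B          LD A,B         ; register-to-register
; 7E             MOV A,M          LD A,(HL)      ; load through HL
; 32 00 80       STA 8000H        LD ($8000),A   ; store accumulator
; 3A 00 80       LDA 8000H        LD A,($8000)   ; load accumulator
```

Five distinct Intel mnemonics, each naming its own addressing mode; two Zilog ones, with the operand notation carrying the distinction.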
 
I agree, Doug. Heck, the same holds for 6502, 6800, etc. They all use the same zeros and ones. I don't mind manually converting 8080 to Z-80, as this too provides a good learning experience. For example, as I was working on this project, I had to research the RRCA mnemonic and came across an error in the actual Zilog Z80 CPU User's Manual:

[attachment: 1670512104471.png - the RRCA entry from the Zilog Z80 CPU User's Manual]
 
Just a little back story about Intel mnemonics. Intel copyrighted their mnemonics, and seemed poised to aggressively pursue any perceived infringement. This would have especially been true of a company like Zilog, in direct competition and formed by a group of ex-Intel employees. So, you have Zilog mnemonics that don't look anything like Intel mnemonics, and even don't mesh well with the actual CPU architecture ('LD' for nearly everything, including move, load, and store operations).
 
You may want to locate / download XIZI.LBR and then use XIZ.COM instead of hand converting Intel to Zilog mnemonics. XZI.COM will translate the other way.

As to Intel versus Zilog mnemonics ... everyone has their own opinion. Having used S/360 assembler a lot before I also started using CP/M-2 when it was released, I found the Zilog mnemonics easier to understand and remember. However, it did take a while to get used to "LD (" instead of "ST"ore, and I resisted the temptation to use a macro for that.
I cannot make sense of Intel mnemonics - AND - I use emacs instead of vi. There, I've said it. Religion aside, I mentally read 'LD' as transfer and discipline myself to treat the left arg as target. As has been pointed out, there are plenty of tools for translating ASM formats back and forth. Use whatever works for you!
 
I cannot make sense of Intel mnemonics - AND - I use emacs instead of vi. There, I've said it. Religion aside, I mentally read 'LD' as transfer and discipline myself to treat the left arg as target. As has been pointed out, there are plenty of tools for translating ASM formats back and forth. Use whatever works for you!
I actually have a copy of XLATE5.LBR. But I confess, I have no earthly idea how to use XLATE5, or how to extract the XIZ.COM program out of it. That is on my 'list to learn'....
 
I believe that DRI RMAC was packaged with a file called Z80.LIB which is a collection of Intel-style macros for the Z80. AFAIK, DRI never wrote any product using the Z80 instruction set exclusively. System-specific code was supplied by vendors and could be written in Esperanto for all DRI cared. It would have been a foolish move to do otherwise--8080 code runs on 8080/8085 and Z80 as-is. Having started with the 8008, Intel mnemonics and syntax are logical to me--and I don't need to fear the use of parentheses in expressions not meaning what I intend.

Intel went off the rails, as far as I'm concerned, when it introduced the mnemonics and syntax for the 8086. What, exactly, does MOV mean? Worse, there may be more than one instruction encoding (one using the reg/R/M addressing, the other using the "special case" addressing) for a given symbolic syntax. Assembly mnemonics should be isomorphic at the very least.
 
Yes, DRI provided Z80.LIB with MAC and RMAC from an early time. It was generally considered "unsupported" but they did accept requests for changes. I use the same file today, with a few additions for Z180. Of course, DRI assemblers never added support for Zilog mnemonics, in contrast to Microsoft's M80. That kept the DRI assemblers smaller, which left more memory for assembly of large source files.

One thing about the DRI assembler/linker tools is they allow creation of SPR files, essential for CP/NET, CP/M 3, and MP/M system building. I've not found even a modern tool that supports this. Unfortunately, M80 generates REL files that contain some REL extensions that are not supported by the DRI linker, so you need to use MAC/RMAC for that reason as well. So, that means Intel mnemonics.
 
It would be interesting to ask 10 non-Intel assembler programmers what "CP" does. Although I was quite familiar with S/360, Interdata and PDP-8 assemblers when the 8080 was released, this certainly looks like a ComPare to me rather than a CALL. While it was easier for the assembler program to decode the difference between "CMP" and "CPI", that's two different mnemonics that the programmer has to learn for the same operation but with different operand locations. Likewise for several other Intel mnemonic pairs. I also felt like there were ambiguities in the 8086 mnemonics and although I've written my share of 8086 ASM code, I never enjoyed doing so.
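For reference, the pair being described, plus the clash with Intel's own CP, laid out side by side (a quick table, not from any particular post's code):

```asm
; Intel (8080)   Zilog (Z80)      Machine code
; CMP B          CP B             B8       ; compare A with register B
; CPI 20H        CP $20           FE 20    ; compare A with immediate
; CP ADDR        CALL P,ADDR      F4 lo hi ; Intel's CP is Call on Positive!
```

Two Intel compare mnemonics collapse to one Zilog CP, while the Intel mnemonic spelled "CP" is a conditional call - exactly the trap described above.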

I realize this is like a religious argument and everyone has their own opinion about whether the programmer should be thinking in terms of the operation versus the operation plus operand location when writing mnemonics. Intel forces operation plus location whereas Zilog simplified the mnemonic choices and created a consistent notation for operand addressing. As someone who has written assembler code for many different architectures, I prefer the reduced mnemonic alternative that's more in line with the processor's capabilities.

A similar debate can be made about the architecture of linear addressing with large address fields versus indexed addressing. Linear addressing is much easier for compiler writers and novice assembler programmers. However, S/360 programs that were written and compiled over 50 years ago with their 4K addressing range will still work on today's hardware, regardless of whether ASM, COBOL, FORTRAN etc.
 
I understand the meaning behind the 8080 opcodes, but they clash far too much with my understanding of z80... Like the CP call mentioned... I am going to read this as compare when I see it, and that messes up my thinking while looking at others' code.

I still use Intel opcodes on other CPUs... MCS48 back in the day, and MCS51 is one I still use regularly today (I have written many thousands of lines of code for the 51 in the past year alone). The biggest problems I have are that I occasionally mix MOV and LD in my code, and JR Z versus JZ gets me a lot, causing me to go back and fix up code many times...

One thing I noticed though- I have past experience with z80 - but I never learned CP/M, or OS style programming.

I was always an embedded programmer, writing real-time code that had to run dozens of threads every millisecond, without any thread taking more time than its allocated slice, and all of the spinlock functionality was built into each thread individually. Each thread knew when to give up control, and threads would work together to reallocate timeslices. It was a very specific architecture for a very specific application. All in z80.

Coming from there to CP/M was the biggest change - writing linear code is one thing, but getting used to the idea that the processor stops doing stuff when you call for, say, a keyboard input - it was alien to me. ( Yes, I realize I can scan for whether a code is available... but I used to write keyboard routines that would drive lexical routines by interrupt, which would set flags and trigger other threads... Now it's all BDOS calls and linear code, and I haven't worked out yet how CP/M uses interrupts at all... and I don't think it does by default ) Before I was counting cycles... Now it doesn't matter. It feels like a very different way to write the same opcodes I knew all along.
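For what it's worth, CP/M does offer a halfway house between the two styles: BDOS function $0B (Get Console Status) reports whether a key is waiting without blocking, so a main loop can poll and keep doing other work. A minimal sketch, assuming the usual BDOS entry at $0005 (the WAITKEY/POLLKEY labels are mine):

```asm
BDOS EQU $0005

; Blocking: Function 1 waits until a key arrives.
WAITKEY:
LD C,1 ; Console Input - returns the key in A.
CALL BDOS
RET

; Polled: Function 0B just reports whether a key is ready.
POLLKEY:
LD C,$0B ; Get Console Status.
CALL BDOS
OR A ; A = 0 means nothing waiting.
RET Z ; Return with Z set - carry on with other work.
LD C,1 ; A key is ready, so this call returns at once.
CALL BDOS ; Key comes back in A, Z clear.
RET
```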

It's strange, because it felt normal on 8086, but doesn't feel the same on z80 to me.

Oh, and there are some more serious errors in the Zilog manual. RR m has the same opcode listed as RRC m, and SLL is missing entirely, making me wonder whether it's supposed to be missing or whether it's considered "undocumented".
 
I guess I've used too many different assembly/machine languages over the years to be unable to learn new ones (I just chose a long time ago to avoid using Zilog if possible).

Regarding the "processor stops doing stuff" subject, I'm assuming you mean that CP/M is a single-tasking OS (regardless of the CPU). In MP/M, making the call for console input *would* dispatch you to a wait/poll list and allow other processes (tasks) to run. But, in all cases, the implementation of interrupts/polling/waiting/etc. is all done by the BIOS/XIOS, which is customized for the specific platform. Since CP/M is single tasking, there are no other threads or processes to run while waiting for I/O. It is certainly possible for one to write a multi-threaded runtime on top of CP/M. The same thing is done even today (for many different reasons), although with so many multi-tasking OSes it is less necessary.
 
Regarding the "processor stops doing stuff" subject, I'm assuming you mean that CP/M is a single-tasking OS

Yes. But not just a single-tasking OS, since as you say, you can find ways to do it on the z80... But there's an architecture built like CP/M, and an architecture where every routine runs on interrupts and does only what it needs to (even breaking up larger tasks into smaller tasks) and saves its own state and exits, with each task responsible for its own state, and with functions passed from task to task via flags and variables.

So a task that needs a keyboard input doesn't wait for a keyboard input, or even check if a key is there. It flags the input routine that it wants keyboard input and exits. The input routine does nothing with this at first, but when it does get input, it checks the flags, saves the input, and returns a flag that input is available. The original routine has just been marking time, checking that second flag and exiting; then input arrives, so it does something with it, which in turn sets more flags, and other tasks or threads note this and do something else. It's an interesting architecture where you can determine processor load just by sticking an oscilloscope on the interrupt acknowledge line: the longer it's low, the more CPU time you are consuming.
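That flag handshake might look something like this in outline (all the flag and variable names here are made up for illustration; each routine runs once per interrupt tick and exits quickly):

```asm
; One tick: ask for input and give the CPU straight back.
NEED_KEY:
LD A,1
LD (WANTKEY),A ; Tell the input task a key is wanted.
RET ; Exit the timeslice immediately.

; Later ticks: just mark time on the second flag.
CHECK_KEY:
LD A,(HAVEKEY)
OR A
RET Z ; Nothing yet - exit and try next tick.
XOR A
LD (HAVEKEY),A ; Consume the flag...
LD A,(KEYVAL) ; ...and act on the key that arrived.
RET
```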

But it provides a very reliable way of making sure that a single critical routine that has to check something at a relatively high frequency never misses a beat, and it doesn't matter how many critical threads you have, since every single one runs under an interrupt, typically around once per millisecond or per ten milliseconds. And tasks that take longer than the allotted fraction of time between interrupt cycles simply complete their work over multiple cycles.

But even if every single task executes, the program will still get to the end of the task list and signal to the watchdog that all tasks completed as intended. This way, if any task doesn't finish on time, the system goes into reset and tries to recover.

For context, it was a pretty hostile environment, and the CPU often started executing strange code due to electrostatic shock, violent impact, EMF pulses and the like. The boss used to test my code by dangling his huge bundle of metal keys on the pin side of the PCB, sparks flying and all, and if my program didn't resume normal operation and correct itself as soon as the keys were removed, he made me write it again. It was a rather brutal test. :( Many PCBs actually died from testing, and there was more than one test PCB in case that happened, so it was never really certain whether the code failed or the CPU died until I couldn't get the board working on a normal reset. Harshest UAT I've ever seen. And then I had to fix the boards that died.

By comparison, when I ask for a key input in CP/M, the whole system is going to loop until a key is pressed and then it continues, and there's no sparks. If I ask for a key input, I know that's exactly what I'm going to get. No waiting. No sleeping.
 
A rose by any other name... Back in the day, CP/M wasn't even called an OS. It stands for "Control Program for Microcomputers". Executive, monitor, etc. Many "real time operating systems" are like what you describe. Unless you've got highly specialized hardware, there is an "OS" of sorts that handles all the coordination, interaction, etc. That "OS" may be an actual independent chunk of code or it may be a set of libraries that attach the "OS" code to each task.
 
... when I ask for a key input in CP/M, the whole system is going to loop until a key is pressed and then it continues ...
Conceptually that is true but it's also dependent upon the user's BIOS implementation. My BIOS console routines are interrupt driven and currently issue a HALT or SLP to reduce power when waiting for input. That could easily be changed to a dispatcher call, disk buffer management or even something like wear leveling routines for flash drives. Other interrupt routines, such as timers, stay enabled.
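A rough outline of that kind of BIOS CONIN (the flag and variable names are assumed, not from any specific BIOS; the serial RX interrupt handler is presumed to stash the character and set the flag, and a timer interrupt is presumed to be running so the HALT always wakes eventually):

```asm
CONIN:
LD A,(KEYFLAG) ; Set by the serial receive interrupt handler.
OR A
JR NZ,HAVEKEY
HALT ; Sleep until any enabled interrupt fires.
JR CONIN ; Woke up (maybe just a timer tick) - check again.
HAVEKEY:
XOR A
LD (KEYFLAG),A ; Consume the flag...
LD A,(KEYCHAR) ; ...and return the character in A, as CONIN must.
RET
```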

I've also designed various embedded systems, and I think the point to remember about CP/M is that it's primarily a file system manager that also supports a single console and the ability to load and run programs that interact with its facilities. There's nothing stopping a developer from creating an embedded design without a console that simply uses CP/M's BDOS for file management.

My Z180 and eZ80 systems have a CP/M-2 system in flash memory that starts upon power-on. While it can simply be used as a conventional CP/M-2 system, it can also be used as a loader for things like an updated BIOS, CP/M-3, MP/M, etc. Startup is extremely quick and makes for an easy to use development environment.

MP/M is very different and supports multiple tasks, time slicing, multiple consoles, etc. Perhaps it is closer to a true operating system and is more in line with your expectations.
 
After your first dozen or so assemblers, you get used to it. :)
Oddly, one thing that gave me a headache was the ANSI assembler convention of "op source,dest" rather than the usual "op dest,source". If you were coding before your first cuppa or after your fifth brew, you could get trapped by this. Otherwise, you learn to compartmentalize your thinking--the same way that you learn that behavior at a football game is inappropriate for a funeral... :)
 