
Software written in 100% Assembly

JDallas said:
...development...is just a one-time cost. With sufficient production...no consequence other than time to market.
Uniballer said:
...if production is huge...it makes a difference...save a buck on the hardware.
Yes, but time-to-market is actually more important than cost-to-build in competitive markets; ideally that wouldn't be so. But if you get to the market first, or at the right season, you get the revenue advantages; you can always, if not immediately, revise the design to a lower cost-to-build while sales revenue is already coming in and paying for it.
 
Nicely put.

But I think it's already happened. :S

After sitting on what I/you said for a day or so, I think the moral of the story is to become a compiler writer :p.

There is an old story from one of the creators of Unix (Ken Thompson's "Reflections on Trusting Trust") about a compiler that automatically inserted a back door into the login program, and into new copies of itself, to grant root access. The moral of the story was "do not trust code you did not write yourself." To an appreciably lesser extent, I think that also applies to high-level code.

You can be rather confident that, for instance, if you create a function in a high level language and call it, that function will in fact be called. Otherwise you write a bug report or yell at the vendor :p. But because a high-level language isn't one-to-one with what the CPU executes, it's possible that there is surrounding code changing the state of your program in ways you as the programmer are not privy to.

Mostly, it's for your own good- for optimization or housekeeping (e.g. startup code). But if something goes wrong, like an optimization bug or undesirable housekeeping (i.e. a case where you know more than the compiler), you may not find out for a while, until the program crashes or corrupts data in some obscure manner. One good example I'll always remember is from when I learned the various x86 calling conventions for DOS: a program I wrote was crashing because a mixed C and assembly library was using a different calling convention than the compiler! An applications programmer who didn't know much about calling conventions (whether calling conventions should be taught while learning programming isn't relevant here) would have a difficult time debugging this. And quite frankly, so did I, because in 2012 I knew what a calling convention was, but had never previously written code where knowing the calling convention mattered.
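To make the anecdote concrete, here's a minimal sketch (my own, with a made-up routine name) of the interface contract that can go wrong, in the style of a 16-bit DOS compiler such as Borland or Open Watcom:

Code:
/* A sketch of the interface contract in question.  "addemup" stands in for a
 * routine from a mixed C/assembly library (hypothetical name); the keywords
 * are Borland / Open Watcom style, so this is 16-bit DOS compiler territory. */

/* pascal convention: arguments pushed left to right, and the CALLEE pops
 * them off the stack (RET n) before returning.                              */
extern int __pascal addemup(int a, int b);

/* cdecl convention (the usual compiler default): arguments pushed right to
 * left, and the CALLER pops them after the call.  If the assembly actually
 * follows one contract while the C prototype claims the other, every call
 * leaves the stack unbalanced, and the crash shows up far from the cause.   */

int call_it(void)
{
    return addemup(2, 3);   /* correct only because the prototype matches the .asm */
}

Nothing in the C source above looks wrong; the mismatch lives entirely in the part you can't see without reading the generated code.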

In examples such as my little anecdote, you won't find the problem just by examining the high level code, because it's not code that you the programmer added in! And if you do not have a GOOD (as opposed to knowing an architecture inside and out) idea of how assembly constructs are generated from high level constructs, the extra undesirable changes in state or incorrect behavior may not be readily visible even in assembly. Considering that compiler-generated assembly code tends to be difficult to read unless you've studied compiler optimization strategies, this also complicates nontrivial debugging.

This is why I feel that the "the compiler takes care of everything" mentality, and the idea that only compiler writers are still forced (again, emphasis on forced) to competently know assembly constructs, is a bad idea. Of course, when you get to the assembly level, you need to be confident that the assembler/hardware isn't busted too XD. But in my limited experience, if the assembler/hardware is busted, you'll know... or can write tests to figure it out quickly (which is, for instance, what all PC BIOSes do first: check that the register file works correctly).

More optimistically: You will NEVER EVER be worse off for knowing assembly language. It will always be worth the extra time you could've spent learning about higher-level constructs.
 
cr1901 said:
...I feel that the "the compiler takes care of everything" mentality...is a bad idea... You will NEVER EVER be worse off for knowing assembly language...

But isn't that simply saying that you need to know what you're doing with your toolset? Crikey, I remember when version 1.0 of the IBM (Microsoft) assembler came out for the 5150. It could generate the wrong code. Add to this that 8086 assembly is not an isomorphic mapping: for "MOV" instructions, there can be as many as three different ways to encode a single assembly instruction. At some point, Microsoft decided that "LEA reg,constant" didn't always mean to use the LEA instruction, but rather to "optimize" it to a "MOV". Damnit, when I code "LEA", I want an LEA instruction, even if it costs an extra cycle.

So, I'll argue that to be a really good programmer, it doesn't hurt to be able to read the binary code for instructions.

Calling C a "higher level language" is a bit of an oxymoron. For example, how does one declare a packed array of bit fields? Can't do it--yet older PL/I could--even declaring the bit fields as packed or aligned to a boundary. C only gives you access to a relatively simple set of common constructs that are shared by a number of platforms--while it may have been fine for a PDP-11, what do you do about valuable machine features that C cannot express? You write them in assembly or you forget about them.

And yet, there are compilers that, on an overall scale, can turn out better code than almost all assembly coders on a large application. The strategies for register optimization and, in particular, global optimization can look at a much larger block of code than the average human. There may be clever algorithms for doing something that the average programmer will not recognize, but the compiler has coded into it.

So, at best, it's a mixed bag. Know your instructions and you'll know what to expect. I thought that was a given.
 
cr1901 said:
...I think the moral of the story is to become a compiler writer :p...
But even there, a problem exists.

People that write compilers, even with their previously acquired average knowledge of assembly language, generally don't have the actual power-coder experience for the microcontrollers for which they're writing compilers.

Most can only bring average example microcontroller architecture knowledge and assembly language skills to the job while learning more on the job... but even with their potentially superb compiler knowledge, the end result is a compiler fit for *average use* of assembly language only.

And as average coders probably don't use assembly language anymore... the point of the compiler design becomes making a product departments will buy but likely never use. Not a lot of feedback potential to uncover bugs.

Being able to recite the micro's architecture is not the same as understanding how it can be used in power-coding. Just ask for opinions on the Harvard architecture in micros and you'll hear the full spectrum. I've found that architecture exceptionally powerful and fast in real-time applications, even though the first time I used it for a mundane application, I found it obtuse to work with.

Generally speaking, compilers will block power-coders from using the micro resources most advantageous for writing real-time code. A compiler has to allocate resources without knowing which register choices will add push/pop delays, in contrast to a power-coder's deliberate reuse of registers; and when it takes a banked register set away from you, that has a big effect on quick interrupt processing.

When you try to coexist with the compiler's unknown actions, you waste time second-guessing and checking that the compiler didn't hobble your carefully constructed resource utilization. That's a waste of productivity. Better to choose one or the other.

When not doing mundane designs, it's better to be 100% in control of the micro with assembly language and avoid an unknown code-changer in the project, i.e. the compiler's version of your code.

...The moral of the story was "to not trust code you did not write yourself."...
For critical control design contracts it's dangerous to use library code or distributed snippets in your code - it's a liability risk with financial consequences for the company.

In this field, snippets and libraries are fine only for ideas (usually bad ideas to reject) on how a portion of code might be done. I consider pasting code to be grounds for immediate dismissal though I'd mitigate that in practice.

A Code-Paster Example:

A consulting firm only good at analog was doing a digital design for a competitor at the same time. They'd remove the project materials before our meetings, and though they never mentioned the competitor job, I knew about it from various sources.

During one meeting their coder was particularly proud of something he had accomplished that day: he told me he had bumped one client's PIC micro to a yet bigger one so he could fit the entire Dallas Semi i-Button library in firmware, and after a month had finally got it working.

I was kindly congratulatory, but what I didn't tell him was that I had spent my morning writing and testing my own concise i-Button firmware to run in background interrupts with invocation to the tasker, and that I had seen the library code and knew he wouldn't use even 20% of it; all he had done was waste a month of development time. I later heard that contract was cancelled after 18 months. I wasn't surprised.
 
But isn't that simply saying that you need to know what you're doing with your toolset?
Yes, but... I think it's different for computers because there is a real risk our outputs (programs) will become intractable-to-fix (debug) and intractable-to-analyze. I feel the mentality is that "we should keep abstracting and abstracting and abstracting" because "learning the toolset is hard (TM)."

You write them in assembly or you forget about them.
Writing 5 lines in inline assembler does not mean one understands the assembly language. It would be better if such programmers would write separate source files with just the assembly, with the appropriate directives/housekeeping so it can interface to C. To inline's credit, 8088 isn't tough (IMO) because of the assembly language proper- it's because the MASM directives are/were a mess XD.

Calling C a "higher level language" is a bit of an oxymoron. For example, how does one declare a packed array of bit fields? Can't do it--yet older PL/I could--even declaring the bit fields as packed or aligned to a boundary. C only gives you access to a relatively simple set of common constructs that are shared by a number of platforms--while it may have been fine for a PDP-11, what do you do about valuable machine features that C cannot express? You write them in assembly or you forget about them.
I feel that's a bad example. As I'm sure your post implies, C was meant for portability, so it doesn't define the specific details that are essential to any real architecture (including how bit fields are laid out, other than that it won't split bit fields between individual characters).

All the C specification talks about is an abstract C machine that can take any C construct and make sure it changes the correct state in its abstract environment. In that sense of one-to-one for an abstract machine, C is low-level IMO. But no one's created a hardware version of the abstract C machine. :p So C in practice is high (mid)-level, and there will be code that you as the programmer do not see. Optimizations in high level languages can cripple the readability of the ASM even if the control flow doesn't change. In particular, the C standard states that sequence points (where all previous changes in state in the program environment MUST be done before starting a new change in state) don't even need to be honored except for volatile objects; an unoptimizing C compiler makes sure that all sequence points are honored (i.e. volatile is redundant).
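To make the volatile point concrete, here's a minimal sketch of the classic case (the register address and bit are made up for illustration):

Code:
#include <stdint.h>

#define STATUS_REG  (*(volatile uint8_t *)0x4000u)  /* hypothetical I/O address */
#define READY_BIT   0x01u

/* Without the volatile qualifier, an optimizing compiler sees no reason to
 * reload the location on each iteration and may hoist the read out of the
 * loop (or delete the loop entirely); with volatile, every read written in
 * the source is a read in the generated code. */
static void wait_until_ready(void)
{
    while ((STATUS_REG & READY_BIT) == 0)
        ;   /* busy-wait on the hardware status bit */
}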

I feel ANSI C was standardized with a very specific- and noble- purpose in mind: portability. Programmers who thought assembly was too tedious bastardized it into a panacea (the GCC compiler, notably, in practice depends on its own extensions to the C standard).

For the record, I don't like how MASM would swap instructions around either. But that seems to be limited to older versions (and I've since switched to NASM, which DOES perform jump optimizations, but they can be turned off).
 
Yes, but... I think it's different for computers because there is a real risk our outputs (programs) will become intractable-to-fix (debug) and intractable-to-analyze. I feel the mentality is that "we should keep abstracting and abstracting and abstracting" because "learning the toolset is hard (TM)."

Well, if you're programming (for example) Java, you may not be able to figure out how your code is implemented, nor determine exactly what it's doing--or what to do about a bug. And you know as well as I do that there have been, and most probably are, bugs in Java. If you're writing web code, you don't even have control over the runtime that's being used to interpret it.

That being said, an awful lot of code has been written in pseudo-code implemented systems. (Many Pascals, Modula-2, various FORTRANs, FORTH, several BASICs, Java...) It really is somewhat remarkable that it all manages to hang together.

Writing 5 lines in inline assembler does not mean one understands the assembly language. It would be better if such programmers would write separate source files with just the assembly, with the appropriate directives/housekeeping so it can interface to C. To inline's credit, 8088 isn't tough (IMO) because of the assembly language proper- it's because the MASM directives are/were a mess XD.

In some cases, it's more than a few lines. I was part of the group who wrote for the ETA-10 super (a one-of-a-kind liquid-nitrogen-cooled CMOS box). The architecture is basically memory-to-memory vector, 3-address, bit-addressable, with 256 GP registers of 64 bits for doing scalar stuff. The language used for systems work was a relative of LRLTRAN, which is itself a version of FORTRAN. The escape mechanism in the language for getting at the very large set of instructions not corresponding to a construct in LRLTRAN was a function called Q8INLINE. The first operand was the instruction opcode (8 bits), then the modifier (8 bits), then up to 6 more quantities specifying registers to be used (with occasionally an implied 7th).

There were pages of Q8INLINE code, to the extent that it would be hard to identify the compiler language from a one-page listing. A C compiler was also implemented and it suffered from the same problem. The idea of a "one size fits all" HLL was almost ludicrous. All of this was going on in the 1980s, at the same time the IBM PC was the personal computer of choice. APL would probably have been a better implementation language.

I feel that's a bad example. As I'm sure your post implies, C was meant for portability, so it doesn't define the specific details that are essential to any real architecture (including how bit fields are laid out, other than that it won't split bit fields between individual characters)

But why? Is the C compiler incapable of generating shift and mask operations, while PL/I is? C is a language that has dirty fingernails; it isn't elegant; not that PL/I is either.
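For what it's worth, here's a minimal sketch of the hand-rolled shift-and-mask packing a C programmer ends up writing in place of a declared packed array of bit fields (3-bit fields; the helper names, and the assumption that the buffer has one spare byte at the end, are mine):

Code:
#include <stdint.h>

#define FIELD_BITS  3u
#define FIELD_MASK  ((1u << FIELD_BITS) - 1u)

/* Read element i of an array of 3-bit fields packed end-to-end into bytes.
 * A field may straddle a byte boundary, so two bytes are fetched; the buffer
 * is assumed to have one spare byte at the end to keep that read in bounds. */
static unsigned get_field(const uint8_t *packed, unsigned i)
{
    unsigned bit  = i * FIELD_BITS;
    unsigned word = packed[bit / 8u] | ((unsigned)packed[bit / 8u + 1u] << 8);
    return (word >> (bit % 8u)) & FIELD_MASK;
}

/* Write element i, preserving the neighboring fields around it. */
static void set_field(uint8_t *packed, unsigned i, unsigned value)
{
    unsigned bit  = i * FIELD_BITS;
    unsigned off  = bit % 8u;
    unsigned word = packed[bit / 8u] | ((unsigned)packed[bit / 8u + 1u] << 8);
    word = (word & ~(FIELD_MASK << off)) | ((value & FIELD_MASK) << off);
    packed[bit / 8u]      = (uint8_t)word;
    packed[bit / 8u + 1u] = (uint8_t)(word >> 8);
}

The compiler is perfectly capable of emitting these shifts and masks; what C lacks is a way to declare the packed array so the compiler does it for you, which is the PL/I comparison being made here.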

All the C specification talks about is an abstract C machine that can take any C construct and make sure it changes the correct state in its abstract environment. In that sense of one-to-one for an abstract machine, C is low-level IMO. But no one's created a hardware version of the abstract C machine. :p So C in practice is high (mid)-level, and there will be code that you as the programmer do not see. Optimizations in high level languages can cripple the readability of the ASM even if the control flow doesn't change. In particular, the C standard states that sequence points (where all previous changes in state in the program environment MUST be done before starting a new change in state) don't even need to be honored except for volatile objects; an unoptimizing C compiler makes sure that all sequence points are honored (i.e. volatile is redundant).

I believe most, if not all, ANSI X3 language specs define a computer as "some thing or some one capable of interpreting instructions" or something to that effect.

Heh--I once asked on the cctalk list if anyone thought it was possible to write a C compiler for, say, an IBM 1401 or 1620 or even a 7080. The consensus seemed to be "no" because none of those are binary fixed-word-length architectures. Less clear was if a ones complement or sign-magnitude architecture could support C. I think the group-think was that the ones complement was "probably" and the sign-magnitude was "probably not" (sign extension during a shift is meaningless and could break a lot of code).
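As a concrete illustration of that last parenthetical (my own example, with made-up values): code that leans on sign extension during a right shift is already on shaky ground in portable C, because right-shifting a negative value is implementation-defined.

Code:
#include <stdio.h>

int main(void)
{
    int x = -8;
    /* On a two's complement machine with an arithmetic right shift this
     * prints -4, but the C standard leaves the result implementation-defined
     * for negative operands, so it could differ on a ones complement or
     * sign-magnitude implementation. */
    printf("x >> 1 = %d\n", x >> 1);
    /* Division expresses the intent portably. */
    printf("x / 2  = %d\n", x / 2);
    return 0;
}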

Yet FORTRAN, COBOL and Algol exist for all of those. So perhaps C isn't really all that portable. If your defense is that FORTRAN isn't a systems programming language, then you probably need to look at the old Prime minicomputers.

The question for me is: are we being artificially constrained by the languages we use? Are our goals driving the cart, or is the cart dictating where we can go?
 
The question for me is: are we being artificially constrained by the languages we use? Are our goals driving the cart, or is the cart dictating where we can go?
Oh, whether it's the languages specifically at fault is a good question, but there's no doubt in my mind that the constraints of standard-practice have been sidelining tons of innovative stuff all the way back into the '70s. The fact that Unix is considered the Holy Grail of operating-system design, instead of just an unusually successful attempt at kludging a PDP-11 OS into something resembling modernity, for example...
 
I believe most, if not all, ANSI X3 language specs define a computer as "some thing or some one capable of interpreting instructions" or something to that effect.
That's kinda disappointing to me if all the specs define a computer like that; it's like they have a database of technical terms that ANSI can just pull words from, instead of carefully defining an abstract machine for each spec :p.

Heh--I once asked on the cctalk list if anyone thought it was possible to write a C compiler for, say, an IBM 1401 or 1620 or even a 7080. The consensus seemed to be "no" because none of those are binary fixed-word-length architectures. Less clear was if a ones complement or sign-magnitude architecture could support C. I think the group-think was that the ones complement was "probably" and the sign-magnitude was "probably not" (sign extension during a shift is meaningless and could break a lot of code).
Could you please elaborate on "fixed-word-length architecture"? I assume you're referring to how the ANSI C spec has provisions to support ones complement and sign-magnitude arithmetic (this is why limits.h only requires, for instance, SHRT_MIN to be -32767 or lower rather than -32768)? I don't know of any post-ANSI C compiler that implements ones complement/sign-magnitude arithmetic, however. I'll have to think about what you said in the parenthetical about sign extension, because tbh, I'm not sure how sign-magnitude works (ones/twos complement is easy :p).

The question for me is: are we being artificially constrained by the languages we use? Are our goals driving the cart, or is the cart dictating where we can go?
The pessimist in me says the latter, as Wirth's Law continues to be in full effect. In my ideal world, .NET wouldn't exist, and Windows programming could still meaningfully be done in C/C++.
 
Could you please elaborate on "fixed-word-length architecture"? I assume you're referring to how the ANSI C spec has provisions to support ones complement and sign-magnitude arithmetic (this is why limits.h only requires, for instance, SHRT_MIN to be -32767 or lower rather than -32768)?

The basic unit of storage is either a digit or a character, with flags or special characters demarcating "word" boundaries. As an example, on an IBM 1620, a "word" can be from 2 to 60,000 digits long--there is no fixed, machine-defined quantity called a "word". The architecture has procedures for combining "words" of variable lengths. To add another layer of complexity, "words" can be grouped into "records" by demarcating record boundaries with a special character that does not take part in ordinary computation.

I don't know if there were computers that used as their sole means of representation variable-length bit fields, but it wouldn't surprise me if someone devised one.

Would a C implementation be possible on a Soviet Setun (balanced ternary)?

Sign-magnitude is just that (and is the way humans think about numbers): one bit reserved for the sign (not part of the magnitude) and the remainder of the bits actually holding the magnitude. So, if 0000 0001 represents positive 1, 1000 0001 would represent -1 (yes, you can have a -0, just as in ones complement).
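A minimal sketch of reading such a value, just to pin the idea down (the 8-bit width and the helper name are made up):

Code:
#include <stdio.h>
#include <stdint.h>

/* Decode an 8-bit sign-magnitude value: the top bit is the sign,
 * the low seven bits are the magnitude. */
static int sm8_to_int(uint8_t sm)
{
    int magnitude = sm & 0x7F;
    return (sm & 0x80) ? -magnitude : magnitude;
}

int main(void)
{
    printf("%d\n", sm8_to_int(0x01));  /* 0000 0001 ->  1 */
    printf("%d\n", sm8_to_int(0x81));  /* 1000 0001 -> -1 */
    printf("%d\n", sm8_to_int(0x80));  /* 1000 0000 -> "minus zero", prints 0 */
    return 0;
}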
 
...
Professional digital designs always operate on more than one constraint; merely asking "what's cheap?" doesn't work in the real world.

This.

Besides, the eZ80 costs the same as 8051s did ten years ago and has a lot more capability.

I'm somewhat biased, since Z80 assembly/hex machine language is my first and best programming language. And I actually do mean machine language; my first programming of any real complexity on the old TRS-80 was hand-assembled and entered in a hex debugger. (I couldn't afford even the Series-I Editor/Assembler at the time, and my time was very cheap back then, since I was in high school). My first really complex program written this way was a disassembler for the Z80. I later typed it in to the Series-I EDTASM, since my summer job of mowing cemeteries allowed me to be able to afford it.....

So I am very biased towards assembly language. Yet it is not the right fit for many things.

As a note, though, Z80-descendent processors are heavily used in the embedded market, and a number of DVD reader and writer drives for PCs use a Z80 as the MCU of choice. (As just one reference and example, here's a 'want ad' of sorts for a Z80 expert to 'hack' on a certain drive's firmware: http://forum.rpc1.org/viewtopic.php?f=31&t=13784).

With a vast amount of code already written for the Z80 to do these things it would be an engineering waste to recast to a different MCU 'just because.'

But, again, I am heavily biased towards the Z80. Maybe I should rename my VCF handle in honor of the cover of the sample Z280 manual I got from Zilog years ago (Captain Zilog! The Captain's BACK!).

Edit: My favorite programming joke:
0000: 00 21 00 00 11 01 00 01 FE FF ED B0 00 00 00 00

'But teacher, the ldir ate my homework!'
 
lowen said:
...I actually do mean machine language; my first programming of any real complexity on the old TRS-80 was hand-assembled and entered in a hex debugger...
Same with me, except instead of a TRS-80 it was a Cromemco Z2 kit. I used the graphics display (MERLIN) card's monitor to enter my hand-assembled machine code; it took about 10 minutes to enter an alternating two-player BATTLESHIP game I wrote for it. I didn't find cassette tape to be very useful, so I later bought a NorthStar MDS kit I could afford.

I don't think I had a Z80 assembler at home until I built my three Xerox-820s systems from $50 motherboards at the local Xerox factory surplus sale.

...My first really complex program written this way was a disassembler for the Z80...
In fact I still have the 1977 Z80-CPU Technical Manual on my desk from which I once hand-assembled my S-100 code into penciled machine language. I'm using the reference to do a Z80 disassembler in my spare time, using Microsoft Excel to organize the data structure. After that I intend to use this foundation to do a PC emulator; yes I know others exist in both cases... I have my reasons for doing my own.

...So I am very biased towards assembly language. Yet it is not the right fit for many things...
I agree; for applications that aren't real-time or otherwise critical, it's a toss-up. The management demand *is* for writing firmware in C for perceived development cost reductions, real or imagined, and I have no problem with that *because* it opens barriers-to-entry to compete in some markets by creating new, more profitable designs.

...With a vast amount of code already written for the Z80 to do these things it would be an engineering waste to recast to a different MCU 'just because.'...
I suspect you mean the entire Z80 family including the Z8 microcontroller.

The Z80 itself really fits the category of microcomputer because it requires external memory. In the various embedded designs I've done over the last 35 years, the Z80 has never been the best solution in any case, nor has the 8080 or other microcomputers. I did use an 8085 in 1981, as that fit the design due to its almost-microcontroller architecture.

I have supported existing systems with Z80s (VeriFone credit card terminals) and 8085s (ElectroCom Automation's MDTs). In neither case would I have redesigned those systems at that time with the same micro.

I looked at the eZ80 when it came out, and while interesting in a nostalgia point of view, it had no application in any embedded designs I was doing. At that time I could make 8051s do things reasonably thought to be impossible.

However, back in March of this year I had a system requirement for an industrial system that fit the eZ80 with its moderate ethernet ability. I'd have used an ARM otherwise and didn't look forward to that.

Using the eZ80 allows the removal of a PC, and all its headaches, from the system. This eZ80 application is not a real-time design (except at the Ethernet rate) and could be written in ASM or C. It's basically the system master for a wireless network of sensors and alarms, and it manages the human notifications of system status and alarms via Ethernet and other interfaces. This will be less of a headache than an OS-morphing PC running a Melt-Down Core processor, and the eZ80 is something we make more money selling ourselves while reducing client system costs.

But as I studied the eZ80 last March to see if it was going to be around much longer, I discovered the Vintage Computer community and saw a fun second use for the same eZ80 board... and working with a Z80 equivalent again after so long was appealing to me personally.

I know it's been suggested elsewhere in the discussion that it's shocking that people have been using Z80s for the last 35 years, but that wasn't likely an attempt to accurately characterize what others have said. I've not seen a new Z80 design since the 80s. But one thing I noticed was that the eZ80 was fast enough to remove the cost of an additional floppy disk controller when creating a CP/M-era OS platform; it just took interrupt-driven firmware. While my industrial application has no need of a floppy drive, it seems rather Zen that I add it to the retro system. I have about 600 floppies I want to recover... it's worth it to me for that alone.

My personal plans for this retro version is to make a development system I can use that doesn't rely on Microsoft operating systems. And as I'll have plenty of boards in the lab, I'll never have platform-panic even after the eZ80 disappears.

And I'll likely do the firmware development for both the retro and the industrial versions in assembly language as I have a lot of that still in my brain from the 70s and 80s. I'm inclined to even write some compilers to throw into the retro bundle, since you can't bundle abandon-ware or anything from computer antiquity.

I'd rather some of the tools be tied into the new operating system anyway.
 
...
In fact I still have the 1977 Z80-CPU Technical Manual on my desk from which I once hand-assembled my S-100 code into penciled machine language.

Heh, I got a copy electronically the other day in a SPAM e-mail as 'filler.' Seriously, in a spam e-mail for some strange pharmaceutical concoction I got the full text of Zilog's tech manual.

I'm using the reference to do a Z80 disassembler in my spare time, using Microsoft Excel to organize the data structure. After that I intend to use this foundation to do a PC emulator; yes I know others exist in both cases... I have my reasons for doing my own.

Yep, understood. In my case, the disassembler is pretty efficient, weighing in at 32KB of source, and 4.5KB of /CMD. It implements the technique written out by Toni Baker in the book "Mastering Machine Code on your ZX81" http://www.amazon.com/Mastering-machine-code-your-ZX81/dp/0835942619
...
But as I studied the eZ80 last March to see if it was going to be around much longer, I discovered the Vintage Computer community and saw a fun second use for the same eZ80 board... and working with a Z80 equivalent again after so long was appealing to me personally.

I've looked at the eZ80 more of as a plug-in CPU 'upgrade' for a TRS-80..... but a Z80-compatible core is available on opencores and thus could easily be blown into an FPGA. See http://pacedev.net/forums/index.php for one such project.

I know it's been suggested elsewhere in the discussion that it's shocking that people have been using Z80s for the last 35 years, but that wasn't likely an attempt to accurately characterize what others have said. I've not seen a new Z80 design since the 80s. ...

That's why I mentioned the Z80's use in DVD burners. Yes, the Z80, not the Z8; I was referring to the Z80 family, not the Z8 or the Z8000. Designs using the Z80 are still around.
 
lowen said:
...I've looked at the eZ80 more of as a plug-in CPU 'upgrade' for a TRS-80...
Yeah, I designed some hardware to selectively enable CP/M type banking and common page when needed. After looking at a TRS-80 expansion unit, I decided to add another selective enable to inject the lower 16KByte memory map of the TRS-80.

At least it could run many 64KByte pages of TRS-DOS application software that way without ever wearing out any vintage TRS-80 hardware. The new kernel would let you flip to other 64KByte pages so you could work on several things loaded at the same time.

I'm not too familiar with the TRS-80 though.

I've downloaded all the schematics and documentation for models 1 through 4. In concept it sounds easy... just need to add the circuitry and try not to miss something. :)

Probably the difficult part would be to create a complete rewrite of the EPROMs by writing independent firmware based upon the user manual's description of features and all. BASIC and TRS-80 graphics... additional work, but doable.
 
(Grabs the wheel and tries to steer the thread back on topic) I'd have to say that both Indianapolis 500 and Interphase are my favorite 100% (or nearly 100%) written-in-assembler programs. They produce filled-polygon 3-D graphics at speeds well above almost everything else made for a single-digit-MHz 808x system.
 
(Grabs the wheel and tries to steer the thread back on topic) I'd have to say that both Indianapolis 500 and Interphase are my favorite 100% (or nearly 100%) written-in-assembler programs. They produce filled-polygon 3-D graphics at speeds well above almost everything else made for a single-digit-MHz 808x system.

Hmm, the OP wasn't PC-specific, and this isn't the PC-specific area....

Anyway, my favorite 100% assembly language commercial software was the Orchestra-90 software for the TRS-80. The Orchestra-90 hardware is extremely simple (I have one); all of the work to produce in-tune tones is done in software on the Z80 and in assembly language. The surprisingly high-quality 'Piano-90' software is even more impressive, since the timbre and sound are strikingly good. One of the reasons I've bought a couple of Model 4P's lately (along with an Orc-90) is so I can enjoy the Piano-90 stuff again.....the emulators just don't cut it for this purpose.

My favorite three programs written in 100% assembly language would have to be my own 512-byte LIFE screen editor/engine, written in conjunction with a classmate in high school (he wrote the engine and I wrote the full-screen editor and run-time environment, each half fitting in a single 256-byte page), and my own disassembler, again written in high school. Both for the TRS-80 model III and 4. Oh, the third program.... well, it was a communications program, written for the Model III by a good friend in college, that I improved and stabilized. It could produce the ANSI BBS line characters using the TRS-80 Model III's high-resolution graphics board. A commercial version (by Richard VanHouten) for Model 4 was marketed for a while by ComputerNews 80.
 
lowen said:
...my favorite 100% assembly language...was the Orchestra-90...high-quality 'Piano-90' ...reasons I've bought a couple of Model 4P's lately...
Sounds to me like you're forming an automated orchestra at Model 4P labor rates. :)
 
I'd be curious to see the source of this, if you've made it available.

Hmm, I'll have to dig and see if I have it anywhere. The EDTASM source format is a bit unusual in that line numbers are binary, not ASCII. And there are no comments; the EDTASM code was a retype of the hand-assembled code that was entered using the OS debugger..... Watch this space....

UPDATE 1: Found a copy for Model 4 TRSDOS 6, but unsure if it was ever actually assembled into working code. Looking for the TRSDOS 1.3 Model III version (SDLTRS to the rescue......).

UPDATE 2: The Model 4 TRSDOS 6.1 source has hacks that only work on TRSDOS 6.1, and it doesn't properly execute on LS-DOS 6. I did find the original Model III-mode executable, but no source as yet. Model III executable is attached.
 

Attachments

  • life-m3-cmd.zip (419 bytes)
That makes it hard for me to learn why your editor was one of your favorite pieces of ASM, but thanks for digging it up. I'll poke through it in IDA to see if I can glean anything.

Well, the Model III code was a bit different from the Model 4 code, and, well, it was my first real try at doing anything like this.....

Anyway, a few comments added, and here is the Model 4 code. No guarantees that it will assemble or run, but here it is. GPLv2.
Code:
	COM	'Copyright 1984, 2014 Lamar Owen, GPLv2 or later license.'
	TITLE	'<Conways game of LIFE for TRS-DOS 6.1 on TRS-80 Model 4>'
	SUBTTL	'<PROGRAM Header>'
; Source reformatted from Series-I Editor-Assembler to what should be MRAS-compatible.
; Original Model III version hand-assembled; this is a reconstruction and thus is
; not well-commented.
; Engine is by Ray Chason.
;
; Original Model III code used direct access to video RAM; this version would be better
; if it built the screen in the buffer at 3000H as well as in the video memory....
; but hindsight is 20/20..... and using @VDCTL function 6 would have been useful.
; 
; Keys: 	space:  	puts a space at the cursor and moves cursor right
;		X:		puts an X at the cursor and moves cursor right
;		Up (0BH): 	moves cursor up
;		Down (0AH):	moves cursor down (yep, a linefeed)
;		Left (08H):	moves cursor left
;		Right (09H):	moves cursor right
;		SHIFT-CLEAR (1FH):	clears screen
;		ENTER (0DH):	starts LIFE
;		BREAK (80H):	exits.
;
; Version: 1.01, released September 4, 2014, via vintage-computer.com forums.
;  SVC MACRO is not actually used in the below code, but is here to show RST 28H calling
;  convention in this version (I may rebuild it later to use it....)
;
; SVC EQUates are for reference; please see 'Model 4/4P Technical Reference Manual' or
; 'The Programmer's Guide to LDOS/TRSDOS 6' for a list of SVC's and their arguments.
;
; EQUate for CRSAVE (0BC8H) constant used below is an undocumented for TRSDOS 6.1
; memory location for 'character currently under cursor' in the *DO display driver 
; (See 'The Source' or Frank Durda IV's LS-DOS 6.3.1 source code for DODVR/ASM.  EQUate in
;  LS-DOS source is for CRSAVE.  Frank Durda's site: nemesis.lonestar.org)
; Equivalent LS-DOS 6.3.1 location is 0B97H.
;
; MACROS
;
SVC	MACRO	#NUM		;Macro to invoke a SVC
	LD	A,#NUM
	RST	28H
	ENDM
; Character under the cursor
CRSAVE	EQU	0B97H		;TRS-DOS 6.1 location was 0BC8H.  Used internally in *DO
;
;
; SVC (SuperVisor Call) EQUates.
@DSP	EQU	2
@KEY	EQU	1
@KBD	EQU	8
@EXIT	EQU	16H
@VDCTL	EQU	0FH
;
; Calls to @VDCTL use register B for a function. 
; The model III version of this code used direct access to the video RAM instead
; of the buffer.
;
	SUBTTL	'<Main-line Code>'
	ORG 4000H
OFSTBL	DEFB 0FFH
	DEFB 0FFH
	DEFB 0
	DEFB 0FFH
	DEFB 1
	DEFB 0FFH
	DEFB 0FFH
	DEFB 0
	DEFB 1
	DEFB 0
	DEFB 0FFH
	DEFB 1
	DEFB 0
	DEFB 1
	DEFB 1
	DEFB 1
CLEAR	LD C,1CH
	LD A,2
	RST 28H
	LD C,1FH
	LD A,2
	RST 28H
	LD HL,0
PRCRSR	LD B,3		;@VDCTL function 'move cursor to position HL'
	LD A,0FH
	PUSH HL
	RST 28H
	POP HL
	LD C,0EH
	LD A,2
	RST 28H
	LD A,1
	RST 28H
	CP ' '
	JR Z,PRCELL
	CP 'X'
	JR Z,PRCELL
	CP 0BH
	JR Z,UP
	CP 0AH
	JR Z,DOWN
	CP 08H
	JR Z,LEFT
	CP 09H
	JR Z,RIGHT
	CP 1FH
	JR Z,CLEAR
	CP 0DH
	JR Z,STEP
	CP 80H
	JR NZ,PRCRSR
	LD A,16H
	RST 28H
;
; CRSAVE is 0B97H for LS-DOS 6.3.1, 0BC8H for TRS-DOS 6.1
;
PRCELL	LD (CRSAVE),A	;Put either space or X under cursor 'behind *DO's back'
RIGHT	INC L
	LD A,L
	CP 80
	JR C,PRCRSR
	LD L,0
	JR PRCRSR
;
LEFT	DEC L
	LD A,L
	CP 80
	JR C,PRCRSR
	LD L,79
	JR PRCRSR
;
UP	DEC H
	LD A,H
	CP 24
	JR C,PRCRSR
	LD H,23
	JR PRCRSR
;
DOWN	INC H
	LD A,H
	CP 24
	JR C,PRCRSR
	LD H,0
	JR PRCRSR
;
STEP	PUSH HL
	LD C,0FH
	LD A,2
	RST 28H
STEP2	LD IX,377FH
	LD H,23
ROW	LD L,79
CELL	LD DE,OFSTBL
	LD BC,0800H
NBRCHK	PUSH HL
	LD A,(DE)
	ADD A,L
	CP 80
	JR NZ,NORGT
	LD A,0
NORGT	CP 0FFH
	JR NZ,NOLEFT
	LD A,79
NOLEFT	LD L,A
	INC DE
	LD A,(DE)
	ADD A,H
	CP 24
	JR NZ,NODOWN
	LD A,0
NODOWN	CP 0FFH
	JR NZ,NOUP
	LD A,23
NOUP	LD H,A
	PUSH BC
	PUSH DE
	LD B,1		;@VDCTL function 1: return character at HL (H is row, L is column)
	LD A,0FH
	PUSH HL
	RST 28H
	POP HL
	POP DE
	POP BC
	CP 'X'
	JR NZ,NONBR
	INC C
NONBR	POP HL
	INC DE
	DJNZ NBRCHK
	PUSH BC
	LD B,1		;@VDCTL function 1: get char at HL
	LD A,0FH
	PUSH HL
	RST 28H
	POP HL
	POP BC
	CP 'X'
	JR NZ,DEDCEL
	SET 0,C
DEDCEL	LD A,C
	CP 3
	JR Z,LVCELL
	LD A,' '
	JR SETCEL
LVCELL	LD A,'X'
SETCEL	LD (IX+0),A
	DEC IX
	DEC L
	LD A,L
	CP 80
	JR C,CELL
	DEC H
	LD A,H
	CP 24
	JR C,ROW
	LD A,8		;@KBD Check for keypress, return to editor if so.
	RST 28H
	POP HL
	CP 0
	JP NZ,PRCRSR
	PUSH HL
	LD HL,3000H
	LD B,5		;@VDCTL function 5: blow 1920 bytes starting at HL into video.
	LD A,0FH	; (Basically a glorified LDIR, but has to map VIDEORAM in and out).
	RST 28H
	JP STEP2
	END CLEAR

And, yes, I know of at least two bugs in this code as written. It shouldn't be hard to figure them out..... And I never got around to a 'load/save state' routine.....

I would, however, be remiss in not crediting my classmate, Ray Chason, for the engine.

I'm also attaching the built binary from this code, hand-patched for the right CRSAVE location. It works, but it is slow.

EDIT:

So now I'm interested in making it faster; but faster will be larger.....
 

Attachments

  • life4631.zip (373 bytes)