
Real Programmers

How many people have tried to write an ASCII sentence to execute as machine code?

Sure... when I wrote my first assembly code, I didn't realize I had to make my first instruction a jmp over my db statement. Found that odd (obviously I came from BASIC, not ML). Of course, I was writing this in DEBUG, so I guess an assembler would have been smarter and done that for me. For non-assembly programmers, my program looked like:
Code:
db 'Goodbye cruel world!$'  ; my string to print (lands at 100h, the program origin)
mov ah,9                    ; function 9 = print string in memory
mov dx,100                  ; where in memory the string is (DEBUG numbers are hex)
int 21                      ; do function 9 with the string at 100
mov ax,4c00                 ; function 4Ch = exit
int 21                      ; exit with return code 00

Anyway, my first code was written the way I thought it would work, like BASIC: declare my variables first. But when I ran it, I got garbage and a crash, and later found that (wow) it executed my variable as ML instead of seeing my db as an assignment statement. I ended up doing a jmp over my db statement, so I added jmp 117 as my first command and changed my pointer to the variable. Yeah, the "correct" (better) way would have been to put the variable at the end of the program, but since I was writing it as I went, I'd have had to find out how long the program was and THEN change all my variable references to wherever they ended up after the fact -- and if I ever added a line, I'd have to change the references all over again.
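For the curious, the fixed version looked roughly like this (my reconstruction: the string is 21 bytes, so starting it at 102h puts the next instruction at 117h -- hence the jmp 117):

Code:
jmp 117                     ; skip over the data so it isn't executed
db 'Goodbye cruel world!$'  ; string now sits at 102h
mov ah,9                    ; function 9 = print string in memory
mov dx,102                  ; pointer updated to the string's new home
int 21                      ; do function 9
mov ax,4c00                 ; function 4Ch = exit
int 21                      ; exit with return code 00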

That was my old-school programming intro, but it was fun, and while assembly seemed so far out of reach in my mind, it was really a lot less complex than I had realized at first. Just using HelpPC at the time for its great examples and a list of assembly instructions, it was quite fun.

I remember hearing somewhere that when Wozniak wrote the monitor code for the Apple II, he wrote it in ML instead of assembly just because he was used to it. Later, I think, he had to transcribe it back to assembly for others to look at, but when it was just him coding, he wrote it his way.

Thanks, and yeah, the video works now. I was hoping to hear one running, but that definitely explains how it works a lot better in my head now. So you read/write the bit with one of those heads? It definitely looks much more like a hard drive than like memory. Non-linear memory has always been something I never wrapped my head around. Like how the addressing of a memory core works: I don't quite understand how you read an x,y,z byte without destroying everything in its path.

Very interesting and neat to see these things!
 
Like how the addressing of a memory core works: I don't quite understand how you read an x,y,z byte without destroying everything in its path.

This is getting off topic, but I will take a stab at it.

Core planes were bit-oriented -- it would take a stack of 8 planes to give you a byte. But to answer your question: it is only where the X and Y driver lines cross that the combined current is strong enough to change the state of a given core. The remaining cores along those lines receive only half that current and therefore do not change their state.

It is actually somewhat more complicated than that -- check out the article below. Note that reading core memory was destructive, and each read was followed by a write cycle to restore the state of the memory altered by the read.

http://en.wikipedia.org/wiki/Magnetic_core_memory

The core plane shown there is from a CDC 6600. The caption beneath the photo is wrong: these core planes were 32 x 32, for a total of 1024 bits, and they were packaged in stacks of 12 planes. That might seem like a strange number, but the 10 peripheral processors on the 6600 were 12-bit minis and the central processor was a 60-bit word-oriented machine. Five stacks of 12 planes comprised 1024 60-bit words. The early CDC computers used a 6-bit character set for upper-case alpha and numerics -- everything was a multiple of 6.

By the way, these early core planes cost around $1000 apiece -- memory cost was commonly referred to as "a buck a bit". The cost came down as time went on.
 
Except for the core that went into ECS. There the joke was that they originated at a Tijuana Core House...

Most core (and all of the ECS = Extended Core Storage) actually came from an Oriental core house. While repair was done by a few specialists in the Twin Cities, it was found that stringing these planes was beyond the average American's attention span. Other companies used European seamstresses whose jobs had been eliminated by cheap labor from abroad. I have been told that IBM actually used blind workers to do the work -- that seems hard for me to believe, but it might be true. Wherever it was done, the work was almost always done by women.
 
Hong Kong Core House probably sounds better. :)

Weren't most of the workers with taper pin guns on the 6000/7000 assembly floor women? (Easier to fit into tight spots)
 
Weren't most of the workers with taper pin guns on the 6000/7000 assembly floor women? (Easier to fit into tight spots)

Yeah, I believe you are correct about that, especially for the backplane assembly in Arden Hills (St. Paul). Some of the assembly was done elsewhere. Not sure where the cordwood modules were made (two small PCBs with the active elements -- transistors, resistors, and diodes -- in between); they comprised most of the circuitry. Most of the time I spent in Arden Hills during the 6000/7000 era was in the benchmark labs, and I didn't have much time to wander about.
 
I have to admit that I get a little uncomfortable at times seeing the generated code for something written in, say, "C" and saying "I can do better than that".
Some of the code that came out of a Z80 C compiler I was working with a while back was pretty shocking. Passing function parameters on the stack led to an awful lot of cruft and lost cycles calculating offsets. Passing them in registers tidied things up a lot, but I ended up writing a lot more in assembly than I'd originally envisioned.
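To give a feel for the difference (a hypothetical sketch, not that compiler's actual output -- "func" is made up, and the byte/cycle counts are standard Z80 timings):

Code:
; stack convention: caller does  push hl / call func
func:   ld   hl,2        ; 10 T-states
        add  hl,sp       ; 11 -- HL now points at the argument
        ld   e,(hl)      ;  7
        inc  hl          ;  6
        ld   d,(hl)      ;  7 -- DE = argument: 7 bytes, 41 T-states of pure overhead

; register convention: caller loads HL and calls
func2:                   ; argument is already in HL -- no prologue at all

Multiply that prologue by every parameter of every call and the cruft adds up fast.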

How many people have tried to write an ASCII sentence to execute as machine code?
The only time I've come across that was in buffer-overflow and format-string exploits, which required the code to consist purely of printable ASCII to get through input validation. It's remarkably easy to write x86 code that way, but keeping to your quota of bytes can be a bit more difficult!
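It works because a surprising slice of the printable range decodes straight to useful instructions -- these are the standard single-byte 8086 opcodes (16-bit forms shown):

Code:
; the string "PPYX@" executed as 16-bit x86 code:
50  'P'  push ax
50  'P'  push ax
59  'Y'  pop  cx
58  'X'  pop  ax
40  '@'  inc  ax

With '%' (25h = and ax,imm16) and '5' (35h = xor ax,imm16) you can synthesize almost any value in AX, then push it to build non-printable code on the stack.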
 
Yeah, I believe you are correct about that, especially for the backplane assembly in Arden Hills (St. Paul). Some of the assembly was done elsewhere. Not sure where the cordwood modules were made (two small PCBs with the active elements -- transistors, resistors, and diodes -- in between); they comprised most of the circuitry. Most of the time I spent in Arden Hills during the 6000/7000 era was in the benchmark labs, and I didn't have much time to wander about.

I've still got a couple of the cordwood modules -- mostly dummies (resistors between the boards) and a couple of switches (a lever switch, a couple of reed relays) that probably came from some QSE. Almost all of my CDC time was in Sunnyvale with SSD and STAR.
 
Core planes were bit oriented planes -- it would take a stack of 8 planes to give you a byte.
That's certainly true for most mainframes, but smaller systems usually had the whole works on one plane:

[MDScore.JPG -- 220 x 16 core memory from an MDS key-to-tape "keypunch"]

[SharpCore.JPG -- core memory from a Sharp desk calculator]

[JugOcore.JPG -- a 'Jug o' core']

[Cores.JPG -- loose cores. How'd you like to thread three wires through a few thousand of these by hand?]

REAL Programmers do it with wires and programming plugboards!
 
I have to admit that I get a little uncomfortable at times seeing the generated code for something written in, say, "C" and saying "I can do better than that".

Even among x86 assembly programmers, few seem to be aware that x86 assembly is not one-to-one (symbolic form to machine form). There are often several ways to encode the same operation given by a specific mnemonic.
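A concrete case (standard 8086 encodings): the direction bit means register-to-register moves have two equally valid encodings, and assemblers are free to pick either.

Code:
89 D8   mov ax,bx   ; via the MOV r/m16,r16 form
8B C3   mov ax,bx   ; via the MOV r16,r/m16 form -- same instruction, different bytes

That's one reason you can sometimes fingerprint which assembler produced a given binary.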

How many people have tried to write an ASCII sentence to execute as machine code?

I have. About two weeks ago... I'm teaching myself assembler on the Apple //e and the Atari 8-bit.

Kinda fun, actually. I think I'm years away from making my own Atari 2600 game, though (in machine code -- I'm well on my way with BATARI BASIC, but that's not the same thing).

Speaking of which, does anyone know where I can find a *good* guide to how many clock cycles each 6502 opcode takes?
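For reference, here's the kind of counting a good guide lets you do -- these are the standard NMOS 6502 timings, so treat my arithmetic as a sketch:

Code:
        ldx #$ff    ; 2 cycles
loop:   dex         ; 2 cycles, executes 255 times
        bne loop    ; 3 cycles when taken (254 times), 2 on the final fall-through
                    ; (+1 more per taken branch that crosses a page boundary)
; total: 2 + 255*2 + 254*3 + 2 = 1276 cycles

Cycle counts like these are exactly what you need to race the beam on the 2600.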
 
Hi,
Some processors lend themselves to ML because of their regular structure. I've pointed this out often as the reason Heathkit used octal in their monitor. As for the db being executed as code: on the Z8000, to get from segmented to unsegmented mode and back, there was no clean way to do it other than a piece of code written as ML in a db statement.
Dwight
 