
PCJr lesson learned - STAY AWAY FROM DOS!

It took Tandy to make a successful PCjr. The Tandy 1000 was a PCjr compatible, not really a PC compatible. They quickly flipped marketing campaigns when the Jr was discontinued. The Tandy went on to set the standard for PC compatibility in the home computer market until VGA eventually killed it off. That took a bunch of years, though...

The Jr is a great little machine, though you need to make some modifications. They would have cost IBM very little, but they would have potentially stolen low-end IBM PC sales, so IBM intentionally hampered the machine. If IBM didn't want to make those changes, it should have stuck it out and, like Commodore, worked to lower the component cost so that the machine was competitive. The problem was that back in the '80s you never lost your job for buying IBM, but for a home machine you didn't need to spend the extra cash to protect yourself.

Cheers,
Corey
 
Huh. Reading this thread made me curious enough to look up the Tandy 1000 and Jr. service manuals just for grins, and... I guess I'm not really quite smart enough to understand these things fully, because the way I'm interpreting the Jr. manual it pretty much looks like all they saved by brain-damaging the keyboard hardware as badly as they did was the cost of a shift register. Is that really it?!

(Of course, I'm totally amazed Tandy didn't copy it; using the main CPU as a UART is pretty much how the serial port on the CoCo works.)

Sure, I know the Jr. had to make some compromises to hit a price point, but that's pretty absurd.
 

The problem for IBM was that after the PCjr and TopView, purchasers and developers figured out they could wait and see if IBM's ideas gained traction. That slowed adoption of Micro Channel and OS/2 and ultimately transformed the Entry Systems Division from a source of massive profits into the cost sink that had to be unloaded to Lenovo.
 
Given the incompatibilities of the Peanut, one wonders why IBM didn't decide to just go with an 80186 CPU. They were certainly out by then, had a bunch of on-chip peripherals, etc.

What did the PC/JX use?
 
Given the incompatibilities of the Peanut, one wonders why IBM didn't decide to just go with an 80186 CPU. They were certainly out by then, had a bunch of on-chip peripherals, etc.

Probably because it was too expensive. It also handled most instructions faster than an 8088 (at a given clock frequency), so there was also the risk it would outperform the 8088-based PC/XT.
 
After you've included the cost of the added support components for an 8088, I'm not sure about the cost issue. But given that the first production units were 16-bit 6 MHz parts, they well could have outrun a PC-AT.

So economics, but not the sort you'd normally think of.
 
The IBM JX was another stodgy 8088 design.

The PC Jr couldn't use the 80186 since PC Jr shipped a few months before the 80186.
 

Ummm, I remember developing for the 80186 before the PC Jr. But then, we were on the development distro from Intel, so we got to see all of the buggy steppings. Curiously, we were also developing for the 80286 at the same time. But unless my wetware fails me, the 186 was being distributed from production in 1982.

One nice thing about the 186 was 20-bit DMA (no "64K boundary"). The other was the programmable chip selects. Initially a CLCC package, just like the early 286s.
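
In case the "64K boundary" bit is unfamiliar: the stock PC's 8237 only counts a 16-bit address, and the top four address bits come from a separate page register that never increments, so a single programmed transfer can't cross a physical 64K page; the 186's integrated DMA carries the full 20 bits. Roughly the check you end up writing on an 8088-class box -- purely a sketch, the register assignments and label are arbitrary choices of mine:

Code:
; sketch: does a buffer at ES:DI, DX bytes long (DX >= 1), cross a 64K
; physical page?  Clobbers AX, CL, DX.
checkDmaPage:
	mov   ax, es
	mov   cl, 4
	shl   ax, cl        ; low 16 bits of segment*16 (8088 only shifts by 1 or CL)
	add   ax, di        ; AX = buffer's starting offset within its 64K page
	dec   dx
	add   ax, dx        ; carry set = the last byte spills into the next page
	ret                 ; caller does a JC to split the transfer or move the buffer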
 
Wikipedia claims, "In practice the PCjr proved incompatible with about 60% of PC applications including WordStar and two programs often used to test PC clones' compatibility, Lotus 1-2-3 and Microsoft Flight Simulator." Was it really that bad? I used 1-2-3 and Flight Simulator on my original Tandy 1000 and both ran fine; however, the versions I was using may have already been patched to make them work on the PCjr, thus improving Tandy compatibility as well.
 
Re: the intro of the 80186, the interwebs seem to claim that the Tandy 2000 shipped a few months before the Junior. (Granted, it's very possible the Peanut project was started earlier and the CPU spec was settled on early. I'm sure it took a lot of man-hours to design that bit-banging keyboard interface and merge it with the guts of a Zenith Space Command remote control.)
 
Alright, round four of optimizations brings me back to the keyboard ISR. It occurred to me that, at least for this Pac-Man ripoff, I don't need to know anything more when polling than the last key-down... That's it, that's all I need... and if I can make it return a "nothing" value when there's been no key-down, or only the same key-down again, so much the better.

Originally my ISR9 was taking longer than the BIOS because I basically replicated what BIOS did - and of course unexpanded Jr. RAM is WAY slower than ROM.

So, if ALL I need the ISR to do is store the last key-down and track whether it has changed since it was last checked...

Code:
segment CODE

gameKeyData  db  0            ; last key-down scancode; bit 7 set = not read yet
oldISR9      dd  0x00000000   ; original INT 9 vector, saved at install time

gameKeyInt9h:
	push  ax
	in    al, 0x60               ; grab the scancode
	xor   al, 0x80               ; flip bit 7: key-downs now have it set
	jns   .done     ; ignore key releases
	mov   [cs : gameKeyData], al ; store the key-down with the "unread" bit set
.done:
	in    al, 0x61               ; the port 61 "clearing operation"
	or    al, 0x80
	out   0x61, al
	mov   al, 0x20
	out   0x20, al               ; EOI to the interrupt controller
	pop   ax
	iret

The XOR does double duty, letting me use the sign flag in the read to tell whether it's changed.
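
Since oldISR9 is sitting there in the data, for completeness this is roughly what hooking the vector looks like -- just a sketch, poking the vector table directly rather than going through DOS; installGameKey is a made-up name and the restore-on-exit routine would be the mirror image:

Code:
installGameKey:
	push  ds
	xor   ax, ax
	mov   ds, ax                   ; DS = 0000, the interrupt vector table
	cli                            ; don't let INT 9 fire mid-update
	mov   ax, [9*4]
	mov   [cs : oldISR9], ax       ; save the old handler's offset...
	mov   ax, [9*4+2]
	mov   [cs : oldISR9+2], ax     ; ...and segment, for putting back on exit
	mov   word [9*4], gameKeyInt9h ; point INT 9 at the new handler
	mov   [9*4+2], cs
	sti
	pop   ds
	retf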

The read routine:
Code:
; function gameKey:byte;
pProcNoArgs gameKey
	mov   al, [cs : gameKeyData]
	xor   al, 0x80               ; strip the "unread" bit and test it in one op
	jns   .processKey            ; sign clear = the bit was set = new key waiting
	mov   al, 0x7F               ; nothing new, return the "no key" value
	retf
.processKey:
	mov   [cs : gameKeyData], al ; write back with the bit clear so it only reports once
	mov   bx, gameKeyScanCodes
	xlat                         ; translate through the table below
	retf

It's pretty simple: check for the sign and strip it in one operation. If the sign is unset we have a new keycode, so report it and write it back so the next read rejects it and sends nothing. I have the no-key-waiting result as the drop-through since it's far more likely to be the outcome.

I'm sending 0x7F because I wanted to use zero as one of the response values in the translation, to work with the already-in-place movement routines that use 0..3 as directions clockwise (making flipping direction as easy as direction := (direction + 2) and 3).
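
Reading the table below, that encoding comes out as 0 = up, 1 = right, 2 = down, 3 = left, so the same reversal in asm is just an add-and-mask -- a throwaway sketch, not game code, and "direction" is an invented name:

Code:
	mov   al, [direction]
	add   al, 2             ; the opposite heading is two steps around the circle
	and   al, 3             ; wrap: 0 <-> 2, 1 <-> 3
	mov   [direction], al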

Code:
segment CONST

; 128-entry scancode-to-game-code table; 0x7F marks keys the game ignores
gameKeyScanCodes:
	db  0x7F, 0x1B, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x00, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x18, 0x19, 0x7F, 0x7F, 0x1C, 0x7F, 0x03, 0x02
	db  0x01, 0x7F, 0x7F, 0x7F, 0x24, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x1C, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x00, 0x7F, 0x7F, 0x03, 0x7F, 0x01, 0x7F, 0x7F
	db  0x02, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F
	db  0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F

W and "up arrow" translates to zero, and so forth. I remap escape to 27 (since I can remember that) and pass the few other used keys as their normal scancodes.

One question though: the clearing operation on port 61 -- reading the value, ORing it with 0x80, and writing it back -- some implementations I've seen preserve the original value and write it back again afterwards... is there a reason for that? Things seem to work fine without it and I've seen people code it both ways... the "other" way would read (in my version):

Code:
	in    al, 0x61
	mov   ah, al       ; save the original value
	or    al, 0x80
	out   0x61, al     ; write it with bit 7 set
	mov   al, ah       ; then put the original value back again
	out   0x61, al

Anyone know what that extra writeback of the original value is supposed to accomplish? Doesn't seem to actually do anything...

Though I have to laugh at some of the code examples out there dicking around pushing/popping registers they aren't even changing, or values that are already preserved like flags... or clearing/starting interrupts that are already cleared by the keyboard ASIC.

Well, laugh or cry depending on if you're trying to learn how to do this stuff or already have a grasp of it.
 