
Looking for a datasheet: DEC T11 CPU

Umm....

Just checked... that's definitely the T11 [DEC 310E]
t-11s.jpg

Slightly newer than mine.(?)
 
.. I am putting together "something" using, literally, "the chips I have around here" ..

Schematics later .. ;)
 
I downloaded that schematic. Quite something for 1985.

Wow... Atari must have been a great place to work. Someone had an awful lot of fun doing that system.

I guess I'm re-considering the notion of running RT-11 on that board, even though it looks possible.

  • Your best opportunity to use this board without having to modify it is to use the "Expansion Socket (IF)" [PDF page 10]. You can make anything you need - memory, I/O, whatever - and plug it in there.

  • There are so many 6116 RAMs on the board set, you have enough parts to do whatever you want, as long as they're socketed.

The biggest misgiving I have about using that board is powering all the things you won't be using on it. Just to make enough of a "memory hole" to do what you want, you'll need to un-populate a lot of what's there. This wouldn't involve "modifying" much [if anything], but it's still work, and comes with risk.


So maybe the notion of borrowing what you absolutely need from it, without disturbing the remainder of the board, and making your own - is more sensible. You certainly have schematics of a good example if you need one.

Using today's static RAMs and ROMs will leave you needing very few components compared to yesteryear. Making your own console serial device is fairly simple, as we have discussed, and needs to be done either way. The biggest design issue I foresee is that few modern devices use 5V any longer. This means you'll need to do your design in a "mixed voltage" manner. Not a deal breaker, just an extra consideration.

I believe you can succeed, whichever way you go. You sound quite sensible and pragmatic about it. You obviously understand what's involved.

What would you like to accomplish, if you did tackle the project?



In the meantime, I'll keep looking for that T-11 "Test Mode" information. I don't recall ever seeing it detailed anywhere, just that it's mentioned in the chip descriptions and manuals.

There are obviously a lot of us T-11 fanatics around [more than I ever knew], so you'll be able to get plenty of help here, if you need it.
 
Ok this is "done while thinking and reading" ..

Yes "how do I do" well not using the 5 monitors really but using multiple windows open on the same monitor ;)

Ok, this is "the idea" so far. As for "why those chips and not others", the answer is "because I have those around here and I don't need to go out and buy anything else".

Yes, I've seen there are various static RAMs, 16 bits x something, but they normally come in SOJ packages or such. I've done my own xxx-to-DIP adapters quite a few times (photo-etched at home), but still: "I have chips around, let's use those".

The idea is starting to take shape. I still have one of those XC9536s around, and I am going to use that.

As I said, "why those flip-flops and not the latches": thanks to the CPLD I can still generate strobes etc. for them. Unfortunately that CPLD is a bit small, so the latches "have to stay out" of it.

One thing the manual seems to point out is that driving the "mode register" directly via BCLR is a bit "risky"; you should consider PUP as well, to avoid putting stuff on the bus while PUP is not yet OK.

Also, somewhere they suggest always considering CAS together with RAS, to avoid another bus problem when responding to an ASPI.

Into that thing I also want to attach that SPI UART I got; the practicality of that UART is "only 3 pins necessary". I have to study a bit that story that the T11 seems to always do a READ cycle even when doing a WRITE cycle; this can have some implications.

Anyway, as a rough idea, that's where I am at. Now I have to stop doing this and finish preparing some stuff for a trip tomorrow; however, I am going to bring all of this (the schematics) with me to work on during the nights ;)

SoFarT11.tif

Ah yes, as you say, today "most stuff" is going to 3.3V, but not really all of it, so there's still some stuff to choose from.

Actually "packaging" is now a bit more of a thingy, DIL and such is vanishing almost totally replaced by SMD technology.

[edit] - forgot to say: the 27C64s are in reality some 28C64 EEPROMs I have. Same pinout, same as a 6264 RAM as well.

For the RAMs, I actually have a few chips around (taken off some old '386 boards, I believe), UM61M256K-15; they are 15 ns 32Kx8 static RAM, "usual pinout".

The latches .. "those I have here". Even if they are not transparent ones, it should not matter; in this case I should use NOT(RAS) to clock them.

I have to study a little about properly generating the RAMOE (high and low); I think for the ROM I could use /OE = (/CAS OR /RAS), of course only when the address falls within the decoded range I am going to use for the ROM.
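
Something like this, as a rough VHDL sketch (the signal names and the "111" ROM range are invented placeholders for now, not the final design):

library ieee;
use ieee.std_logic_1164.all;

-- Rough sketch only: latch the address on the falling edge of /RAS
-- (edge-triggered flip-flops clocked by NOT(RAS), since my latches
-- are not transparent), and gate the ROM /OE with both /CAS and /RAS
-- plus the address decode, as the manual suggests.
entity t11_addr_rom is
    port (
        dal     : in  std_logic_vector(15 downto 0); -- multiplexed addr/data
        n_ras   : in  std_logic;
        n_cas   : in  std_logic;
        addr    : out std_logic_vector(15 downto 0); -- latched address
        n_romoe : out std_logic
    );
end entity;

architecture rtl of t11_addr_rom is
    signal a : std_logic_vector(15 downto 0) := (others => '0');
begin
    process (n_ras)
    begin
        if falling_edge(n_ras) then  -- address is valid on DAL here
            a <= dal;
        end if;
    end process;

    addr <= a;

    -- /OE = (/CAS OR /RAS), qualified by the decoded ROM range
    -- (top 8K as an example only):
    n_romoe <= '0' when (n_ras = '0' and n_cas = '0'
                         and a(15 downto 13) = "111")
               else '1';
end architecture;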

The "Mode register" I'll do it directly via the CPLD so not to add another chip out, I think I can too use the same idea they used in that Atari thing, all the other bits that are not 0 are guaranteed to be pulled up at '1', the CPLD has 3 state buffers in it, I can just connect some lines to the DALxx I need.

The "SPI port" is my "usual trick of trade", I've done it already a few times but you'll see when I'll put the VHDL up as well.

I still have to properly check timings, but at 10 MHz I should have absolutely zero problems with those components.

[edit2] - forgot to say: no pins assigned on the CPLD yet. In these cases "I let the compiler choose": I first synthesize and test (simulate) the stuff, THEN I lock the pins, so the compiler can better choose what it likes for what, and the cells it prefers as well. I then add the pins/functions to the schematics as the last thing.
 
I'm pretty sure you'll find many advantages, and few disadvantages, by using the CPU in 8-bit mode, rather than 16-bit mode.

You'll certainly have sufficient pins on your CPLD then to act as a UART and do most of the glue. Also, total pin count can be reduced, to your advantage, by using fewer, larger memory devices.

Your other comments are well taken.

Enjoy the trip.
 
DCT11-AA Microprocessor - MODE register bit 12 - TESTER / USER mode

This question of the purpose and action of bit 12 in the T-11's "MODE" register is not easy to answer from available information.

The only comments offered come from a scant line or two in the T-11 User Guide, EK-DCT11-UG:

SECTION 4.2.4 - Tester or User Mode (MR<12>)

Tester mode is for Digital Equipment Corporation's use only. If mode register bit 12 is high, (MR<12>=1), user mode is selected.

Timing diagrams on page A-52 seem to offer some hints as to how the MODE register is sampled during assertion of -BCLR [in 9 intervals], but no specific mention is made of bit 12.

Perhaps the word "Tester" indicates a provision for an In Circuit Tester, and tells the chip to go into tristate on all lines so the device pins can be manipulated without concern for damage.

This would be a fairly forward looking feature to include, based on traditional manufacturing problems of the day.

I've combed the internet for discussions on this, and been unable to locate any additional info.

Does anyone out there have "inside information", or can check whether there are any ICT models for the part that might help us reach a conclusion?

I'd appreciate a post. I have only a couple places remaining before exhausting everything I can think of to research this topic. [experimenting with my hardware being one of them]
 
I am slightly puzzled about a thing; having been travelling most of the day, I had a bit of time to re-read the T11 data sheet.

"what about DAL0 when in 16 bits mode ?"

If I understand correctly, the CPU basically is always "addressing bytes", incrementing by 1 when accessing a byte or by 2 when accessing a word, so words will always be at even addresses like 0, 2, 4, 6... etc.

However, when in 16-bit (static) mode, you are fetching one word at a time, and I think you select which portion you want (high or low) via the two R/-WHB and R/-WLB signals.

It seems to me that DAL0 (as an address line) is "meaningless" in this case, and you should connect a RAM/ROM using only the DAL1-DAL15 lines.

The question remains: what happens if you try a WORD access on an odd address?

A 68K would throw an exception; other CPUs could still let you do it, performing the access correctly anyway (but taking more clock cycles).

I am not sure whether, in 16-bit static mode, DAL0 (as an address line) is meaningful or not, and/or whether it tells you if the high or low byte is being accessed.

Other mini question: if I understood correctly, during an IACK cycle you should check that the data presented on DAL matches your interrupt request, and if it does, you can then clear the interrupt condition?

But that DAL0 thing has been spinning around my mind all the way to London today...
 
Pardon me if I seem a little amused by your predicament. Some of what you are facing is what led me to create a manual specific to my application.

However, I'll try to help you as best I can.

"what about DAL0 when in 16 bits mode ?"

If I understand correctly basically the CPU always is like "addressing bytes" incrementing by 1 when accessing a byte or 2 when a word, so words will be always at even addresses like 0,2,4,6 .. Etc

However when in 16 bits ( static ) mode you are fetching one word at time and I think you see what portion you want ( h or l ) via the two rwh.. Signals ..

It seems to me that Dal0 ( as address line ) is "meaningless" in this case and you should connect a ram/rom using only DAL1 - DAL 15 lines.
See T-11 Manual page 2-6, Table 2-1

In a sense, you are correct if you're thinking of only 16-bit static memory. Address <0> is redundant. It's not connected to any memory device, and it need not be considered when -WHB and -WLB do the job.

However, you may find it useful when considering how to interface with 8 bit devices, like I/O, which cannot be read 16 bits at a time.


  • Some designers choose to lie to the CPU and connect only data bits 0-7 to the device, and address it every other word. Some even force the upper data byte state to be sure. [software reasons usually] [a sketch of this option follows the list]

  • Others choose to resolve Address bit 0 correctly, and provide a path for the devices to output on both upper and lower data bytes. However, this approach requires rigid enforcement of byte-only access in software. I don't advise it unless you also put in an error interrupt for when said I/O is incorrectly accessed using WORD mode instructions. Even so, some devices still have problems with the READ-MODIFY-WRITE the CPU does. You need to consider this carefully.
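
To make the first option concrete, here's a rough sketch of the select logic in VHDL. [every name and the I/O window address here are mine, purely for illustration - not from any DEC document]

library ieee;
use ieee.std_logic_1164.all;

-- Option one: the 8-bit device lives on DAL<7:0> only, its registers
-- land on even addresses, and R/-WLB acts as its read/write line.
-- The upper byte lane is simply left unconnected (or forced with a
-- resistor pack, for software's sake). FF0x window is an example.
entity io8_decode is
    port (
        a       : in  std_logic_vector(15 downto 0); -- latched address
        n_ras   : in  std_logic;
        n_cas   : in  std_logic;
        rwlb    : in  std_logic;  -- R/-WLB: high = read, low = write low byte
        n_iosel : out std_logic;  -- chip select for the 8-bit device
        iord    : out std_logic   -- device R/-W line
    );
end entity;

architecture rtl of io8_decode is
begin
    n_iosel <= '0' when (n_ras = '0' and n_cas = '0'
                         and a(15 downto 4) = x"FF0")
               else '1';
    iord <= rwlb;
end architecture;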

The question remains: what happens if you try a WORD access on an odd address?
Ordinarily, most PDP-11s throw an "Odd Address Trap", but not the T-11. This mechanism was removed because of the complexity it caused [hardware and software] and it was considered unnecessary in an embedded environment.
See page B-10 section B.5.1:
"If a word instruction is executed and the source or destination address is odd, the least significant address bit is ignored and a word operation is performed at the even address."
One final word on this Address Bit 0 thing - The problem doesn't come up if you use the T-11 in 8-bit mode. :cool:

Other mini question: if I understood correctly, during an IACK cycle you should check that the data presented on DAL matches your interrupt request, and if it does, you can then clear the interrupt condition?
Um... this is not a "mini" question... well maybe it is, but it hasn't got a mini-answer.

I don't recommend using "Externally Vectored" interrupts at all. It's completely unnecessary in systems with 4 or fewer sources. Consider using "Direct CP Encoding" [page 5-16 section 5.7.5].

Even simpler, "wire-or" all interrupts onto a single CP line and do it all in software. Of course, if you want to do DMA too, this is out of the question. [well... maybe not...]
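
As a sketch of just how small the "wire-or" option can be [the names are mine, and check the CP line polarity against the manual before trusting this]:

library ieee;
use ieee.std_logic_1164.all;

-- All request sources collapse onto one CP line; the service routine
-- reads each device's status register to find out who is asking.
-- Polarity here is assumed active-low at the CPU - verify it.
entity irq_or is
    port (
        irq_uart : in  std_logic;  -- active-high requests (assumed)
        irq_spi  : in  std_logic;
        n_cp0    : out std_logic   -- one CP line to the T-11
    );
end entity;

architecture rtl of irq_or is
begin
    n_cp0 <= '0' when (irq_uart = '1' or irq_spi = '1') else '1';
end architecture;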

edit - I'd better stay away from interrupt priorities for the time being. Suffice it to say - even if you already knew the PDP-11 software and hardware interrupt priority structure, it would be a brain twister. [If you know QBUS intimately, this would require you to un-learn something.] Their intent was to allow the designer to choose which elements of the mechanism [complexity] needed to be employed, and to provide only the hardware to support that.

Encoders, Vectors, and daisy-chain arbitration are not required if the application doesn't warrant it. Even the priority mechanism can be reduced to simplicity by limiting the number of IRQs and taking a small hit on latency.

I'm not sure I've helped you much here, and I may have oversimplified. Try to choose a method that utilizes internal vectors. You'll sleep better.
 
Minimal Hardware for Interrupt Based System

Have a look at: page A-59, figure A-22.

This system is a static 8-bit T-11 example, with 2 interrupt sources. [both are on the SLU chip] No hardware for -VEC, IACK or other interrupt possibilities is needed.

A similar 16-bit static example is on the opposite page.

See section B.5 for a discussion on the implications of interrupts and priorities on process execution.
 
Quick reply from London (still a long day).

First of all thanks for your very valuable answers.

Second, yes, at some point I will sit down and write some documentation about all this. I always try to do it now, because it often happens that I'm having a good time with things, then I am forced to interrupt them for a while, and when I pick them up again it's like "where was I?", so some good written notes (at least) help.

I was also checking that development module manual; looking at the schematics you see that, in fact, they do not use DAL0 as an address at all.

About the devices, I have to study a little more; about IACK, I was mainly concerned with "OK, when can I stop asserting INT?".

The " funny " thing is that IMHO they could have actually had a 16 bits mode with 64k word addressing even with byte access as there are whb and wlb already, that would have given effectively 128k bytes available, pity they have not done that.

I need to study the manual a bit more, but I will have some more time tomorrow ;)
 
The " funny " thing is that IMHO they could have actually had a 16 bits mode with 64k word addressing even with byte access as there are whb and wlb already, that would have given effectively 128k bytes available, pity they have not done that....

First, I know about being tired, and about the need for recreation. [most people don't think of designing stuff like this as fun... more's the pity]

On the subject of "more address" space...

Digital Equipment Corp. realized long ago, that if they developed "Families" of products, they better leveraged their past successes for the future. They came up with the notion of "Upward Compatibility"... that future products could add things, as long as they were not in conflict with previous ones.

They designed the PDP-11 MMU [Memory Management Unit] architecture to solve the problem of how their 16-bit PDP-11 CPUs could use memory space beyond their capability.

So, it would be out of character for them to do anything but include this as the architectural solution for an expanded PDP-11 embedded environment. However, putting this in the T-11 would take it way off course... [T-11 stands for "Tiny-11"]
It would probably double the cost of the chip, require even more power, raise its programming complexity, and thus make it exponentially less attractive to its target customers, all at the same time.
I think they did well. They could have done yet another chip if there was really a need.

Whenever I considered a need for more memory in a PDP-11, the best solution I could see was the J-11 chip. 22 bits of addressing, CMOS process, etc... The only thing it was short on was an 8-bit bus interface. [I designed one, but never tried it]

Frankly, I think they'd have been wiser to do a T-VAX than to mess up the T-11. But that's another discussion.
 
A quick reply; just woke up, and have to catch an airplane soon :)

There's a world of difference between "getting crazy" here, discussing PDP CPUs and trying to make them work (again), and getting crazy for real, spending hours trying to make your project work while dealing with the far too many crashes of stuff like Xcode 4 or OpenFeint, with people coming out with strange bug reports on code that is not even yours and supposedly should work...

Then you look at things like RT-11 and think "with 64K of RAM they were able to..." with a sigh ;)

But no worries, this old T11 is going to shine again ;)
 
You have my sympathies.

Imagine my problem...

When I started, 4K of core [4096 x 12 bits = an 8" cube] was huge!
corememory_stack2.jpg


Numbers were displayed in realtime on NIXIE tubes [pre-LED]
170px-Nixie2.gif


A pair of NIKEs looked like this:
nike_herc_24-s.jpg


It's hell, to be a dinosaur.

"Welcome to the party, PAL!" - John MacLean, Die Hard​
 
You have my sympathies.

Imagine my problem...

When I started, 4K of core [4096 x 12 bits = an 8" cube] was huge! ... Numbers were displayed in realtime on NIXIE tubes [pre-LED] ...

It's hell, to be a dinosaur.

1. Some time ago I grabbed a very old book about "everything you always wanted to know about ferrite core memories"; one day I have to try to make 16 bytes or so of working core memory :D

2. Nixies .. unfortunately I don't have enough of them to do a Nixie clock; I do have a ZM1081 as well and a couple of ZM1080s. Now that you mention it, I'll have to check tomorrow: there is still ONE place in this town, a coffee-roasting shop, that should still have (they keep them with great care as part of the decor) some WORKING electronic scales with Nixie tubes.

In my collection I have a few EM84s, an RCA 931, a 2K25; I think I have an EM4 somewhere.

The first time I saw an LED .. I thought it was nearly magic ..

I think I've had a fondness for LEDs and LED displays ever since that moment ..

I did not have time to touch the T11 thing these past days; maybe some tomorrow.
 
Having a bit of time to work more on this.

Trying to figure out how many TTL loads the CPU can drive before requiring buffering; as far as I can tell, the loads I am going to drive via the DAL lines number about 4: 1 for the address latches, 1 for the ROM, 1 for the RAM, and a few for the CPLD.

I wonder if some '244s or such must be considered even in this simple design.

[edit] - not sure, but by the look of it, in table A.2 on page A.3 they give specs for about 2x TTL loads (3.2 mA current).

[edit2] - after some study on IOH, IOL and such, it looks like, if I am correct, it should be capable of driving various HCT loads, so it sounds like buffering may not be necessary in this case.
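
To put the arithmetic down (using that table A.2 figure plus typical datasheet input currents, so take it as an estimate): 3.2 mA of IOL divided by 0.4 mA per LS-TTL input gives 8 LS loads, while an HCT input draws only about ±1 uA DC, so the DC fanout into HCT is practically unlimited; the real limit is capacitive (roughly 10 pF per input), which costs edge speed rather than logic levels.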
 
Having a bit of time to work more on this.

Trying to figure out how many TTL loads the CPU can drive before requiring buffering...

...after some study on IOH, IOL and such, it looks like, if I am correct, it should be capable of driving various HCT loads, so it sounds like buffering may not be necessary in this case.

I see you're down to the details.

While this is a good consideration if one were using LS or standard NMOS, fanout can almost be ignored if using HC or HCT logic at under 15 MHz on this chip.

The only exceptions to this would possibly be the clock line and COUT [which I always like to be clean and crisp], and I usually buffer -WRL and -WRH with a line driver as a matter of habit, but it isn't "needed".

Even these precautions are leftovers from the days when I was designing with CMOS logic and processors but was stuck with NMOS memory. Before CMOS made it into EPROMs, the JEDEC standard pinout [established later] had competition. [See TI 25-series memory devices]

From interviews I've read on the T-11, the design team did a good job "designing out" race conditions from the chip, and it's very tolerant as well as highly compatible with most LSI devices out there.

A word on Power supply decoupling -
Our typical boards were hardly 6" in size. Because of this, we got quite lax [by most standards] about decoupling the 5V devices.

The usual practice of the day was to include a 0.01 µF cap at every IC, within 0.2" of the VCC and VSS leads. Whenever we evaluated it, this was just pure overkill.

In most designs, it was far better to have a single high quality ~20 µF cap on the board than numerous smaller ones.

As a matter of artwork, we used to include pads and holes for the typical caps, but rarely ever needed them. After we started using CMOS memories, even these were eliminated except on VLSI devices.

The fact that the T-11 was executed in an NMOS process helped this greatly. It draws in the vicinity of 100 mA on the 5V rail. This is a pretty low impedance compared to the surrounding CMOS devices. Because of this, the fast variations in current required by HCMOS during edge transitions don't make that much variation in the 5V.

Not meant as a lecture... just sharing.

I'm kind of interested to see what you decide about mass storage.
 
It's all very good talking ;)

So, to share some knowledge: after reading various stuff here and there, including some of the book "High Speed Digital Logic Design", it seems we could summarize:

( ideally )

- ground plane, as large as you can (ideally it should be a full plane), with no cuts in the middle
- I tend to think of it like water flowing in a stream: you want a big fat uninterrupted stream
- decoupling caps are almost useless if you don't have a proper ground return
- 100 nF is "a magic number", but one that many experiments seem to prove is among the best values
- possibly add some 10 µF tantalums as well; one of those is as good as or better than a 100 µF standard electrolytic
- avoid a "power fingers" layout

About mass storage, I am definitely thinking of an IDE disk.
 
2 responses -

  • Ground Plane - While this is ideal, and necessary over 20 MHz... as one goes slower it becomes less essential. We were usually able to reduce our designs to 2 layers, avoiding the cost of 4, by employing layout techniques we developed. The decoupling was one of these, evolved from practice rather than engineering.
  • And yes - Tantalum is far superior because of its frequency response, despite MTBF issues.
I realized from your earlier comments that your inclination was to go IDE... I'm watching to see how you do it. Should be educational for me. ;)
 
2 responses -

  • Ground Plane - While this is ideal, and necessary over 20 MHz...
  • ...Tantalum is far superior because of its frequency response...
...I'm watching to see how you do it. Should be educational for me. ;)

.. when I recover from the terrible shock I just got from a friend of mine visiting me, saying "I have something I want to show you" .. and taking out some megabytes of .NET source code and asking me "what do you think about this?" ..

I felt like in Fawlty Towers: "I know NOTHING about .NET, NOTHING!"

Yes, you are right about the ground plane. In case I do a PCB, I always try to do a ground plane anyway; if I "wire solder" instead, I try to use a much thicker wire for GND and VCC, and in some cases I have also used some little strips of copper soldered on.

I normally do ONLY 2 layers, because those are the only ones one can do at home with home equipment ;)

About the IDE .. the first time I tried, I looked at someone else's circuit .. that was OK but I needed to change it; then something was not working, and then I spent 3 months finding out what wasn't working.

So in the end the approach is: "this is the IDE specification, with timings and all; this is the T11 specification; make something that turns A into B" ;)

In those 3 months it turned out, in the end, that after I put in an HC08 to buffer the /CS0 line everything worked fine (and after a lot of tests, including writing SW to write patterns on disk, a lot of oscilloscope and analyzer work, and hair ripped off my head).

The guy who made that thing NEVER tried it with a full-length IDE cable .. I did .. :D
 
Unless you have an interest in the .NET frameworks, consider ol' Nancy Reagan... ["Just say No"]

Everyone has those hardware horror stories. Some are due to the mistakes of others, some to incorrect documentation, badly etched PCBs, ... a few times our own error.

The important thing is you got through and understood it.

I can't tell you how many project rescues I've been called in on. They're my favorite. Everyone screaming at each other, ready to shoot, bloody... it's so great when you figure it out or work it through. The more impossible, the better.

You're lucky people think of you that way. Enjoy it.

Too bad it's no longer a skill that's called for here. I'd like to still be doing it. There are a few people who understand the difference between ordinary diagnostic skills and the extraordinary ones it takes to do something that has never been done, that you don't really know can work, until it does.

Such opportunities and challenges are rare in life.

Gee, I seem awfully philosophical tonight??!
 