
KayNET, Anyone?

brain
Does anyone here have a KayNET System:

https://archive.org/details/bitsavers_kayproKayn_4822341
http://maben.homeip.net/static/S100/kaypro/hardware/kaypro kaynet users guide.pdf

The product is called "KayNet", a baseband twisted-pair CSMA/CD local network operating at 125kbps. It uses a network called Web, designed by Centram Systems, Inc. of Camp Hill, PA. The network software, OPSnet, was written by Aquinas, Inc., and is compatible with CP/M 2.2.

I have come upon the hardware for this product, and was hoping to cobble together a working KayNET setup from it.

I can't find out much about it, so I thought maybe folks here would know/have more information.

If it is truly rare, I'd like to get this reverse engineered and reproduced.

Jim
 
Necro-reply since there is so little out there.

Someone made a request for KayNet software from the 5" Maslin floppies, and other than the manual there is almost nothing on it besides this and one other thread here.

Does someone have the board that goes into the Kaypro UART socket, and if so, can they take some pictures of it? This is the only mention of someone having the hardware on the web, and I'd like to better document it on bitsavers.
 
I did some reverse engineering on an alternate KayNet product called Web-NT. Schematic is here: http://sebhc.durgadas.com/kaypro/knet-ulcnet.pdf. I have photos of the board somewhere... can dig them up if needed.

I don't recall right now if I ever found software for it, but I did not implement it in my simulator (yet). I did not find anything much on the actual KayNet product, aside from the manual.
 
Might spend some time disassembling that, to see if anything can be guessed about the hardware. I've wondered before how close they are to "ULC Net" (given that they have things like "ULCINIT.CMD") - which itself seems to have evolved towards increasing complexity. It does have some similar complexity to the Web-NT product:
(screenshots attached)
 
Not much gleaned from disassembly, so far. I don't see any re-initialization of the Z80-SIO, so that would imply they are using the port as it is initialized by CP/M - async and standard bauds. This is in contrast to the Web-NT product, where the clock is overridden with a 500KHz signal and the SIO is initialized for synchronous mode. Both boards have an 8-pin DIP (close to the network connection); on the Web-NT that is a differential transceiver (SN75176). It may be a stretch, but it is possible that both use the same part.

The KayNet does use the DTR/DCD to signal end of message - perhaps to signal that the bus is now free. When starting a transmission, the software sends 18 STX characters, and the receiving end looks for those to indicate the message beginning. Since this is a half-duplex situation, every character sent is also seen by the sender, which watches it for evidence of collisions. All nodes watch the beginning of the message to see if the destination node ID is their own.
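To make that concrete, here's a minimal sketch of what that send-and-watch loop appears to do, as inferred from the disassembly (Python, with all names and helpers hypothetical - the real code is Z80 assembly):

Code:
# Sketch of the inferred KayNet send loop (hypothetical illustration;
# the real code is Z80 assembly and the helper names are made up).

STX = 0x02
PREAMBLE_LEN = 18   # the software sends 18 STX characters as a preamble

def send_message(dest_id, payload, sio_send, sio_recv):
    """Send preamble + message, watching the half-duplex echo for collisions."""
    frame = bytes([STX] * PREAMBLE_LEN) + bytes([dest_id]) + payload
    for byte in frame:
        sio_send(byte)
        echo = sio_recv()       # half-duplex: the sender sees its own data
        if echo != byte:        # mismatch implies another node was sending
            return False        # collision - caller backs off and retries
    return True                 # then DTR/DCD is used to mark end of message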

One odd thing is that there are two groups of I/O routines. These routines are modified by the code to set the I/O ports and Rx/Tx status masks. However, one set of routines defaults to the Kaypro SIO ports and masks, and the other defaults to some other device that is not a Kaypro (using ports that on the Kaypro belong to the floppy controller chip). The structure seems to be roughly as sketched below.
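A Python sketch of the concept (hypothetical; in the real code these are self-modifying Z80 routines whose port numbers and mask bytes get patched, and the second group's port values here are placeholders):

Code:
# Sketch of the two patchable I/O routine groups (hypothetical; the real
# code patches port numbers and mask bytes directly into Z80 instructions).

class PortGroup:
    def __init__(self, data_port, status_port, rx_mask, tx_mask):
        self.data_port, self.status_port = data_port, status_port
        self.rx_mask, self.tx_mask = rx_mask, tx_mask

    def rx_ready(self, inp):
        return (inp(self.status_port) & self.rx_mask) != 0

    def tx_ready(self, inp):
        return (inp(self.status_port) & self.tx_mask) != 0

# One group defaults to the Kaypro SIO (RR0 masks), the other to ports
# that on a Kaypro belong to the floppy controller (purpose unknown).
kaypro_sio  = PortGroup(data_port=0x04, status_port=0x06, rx_mask=0x01, tx_mask=0x04)
unknown_dev = PortGroup(data_port=0x10, status_port=0x12, rx_mask=0x01, tx_mask=0x04)  # placeholder values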

The program ULCOPS.COM loads (and relocates) itself under the BDOS. I've not looked at ULCOPSG.COM yet, but presumably that is a version for Kaypro CP/M 2.2g (as opposed to Kaypro II/IV). It's not clear why it needs to be different for the two, though - since it doesn't interact with the ROM or other hardware, there should be no need to be different. The software appears to act as both NDOS and CCP.
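For anyone unfamiliar with the trick: a CP/M 2.2 program can find the BDOS entry from the JP vector at address 0005h, copy itself just below that address, and then repoint the vector at itself so it intercepts every BDOS call. A rough sketch of the idea (hypothetical illustration, not the actual ULCOPS code):

Code:
# Rough sketch of the "load under BDOS" trick used by resident CP/M
# extensions (hypothetical illustration, not the actual ULCOPS code).
# Addresses are the standard CP/M 2.2 zero-page locations.

def relocate_under_bdos(mem, image, fixups):
    bdos = mem[0x0006] | (mem[0x0007] << 8)   # target of the JP at 0005h
    base = bdos - len(image)                  # load just below the BDOS
    mem[base:base + len(image)] = image       # copy the image up
    for off in fixups:                        # adjust absolute addresses:
        addr = mem[base+off] | (mem[base+off+1] << 8)
        addr += base - 0x0100                 # image was assembled for 0100h
        mem[base+off] = addr & 0xFF
        mem[base+off+1] = (addr >> 8) & 0xFF
    mem[0x0006] = base & 0xFF                 # repoint the BDOS vector so the
    mem[0x0007] = (base >> 8) & 0xFF          # resident code sees every call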

It seems odd that there is so much logic on the KayNet board - more than on the Web-NT, which seems more sophisticated. They both seem to come from the same time period (~1983). The KayNet software is copyright 1985, though.

The Web-NT product allows the use of the original SIO "serial data" port, by having a multiplexer on the relevant signals and controlled in software using a spare handshake line from the keyboard port. I see no evidence of such a thing in the KayNet software.

It is interesting reading some early articles on "ULCNet", where it starts out as adding a few diodes just to facilitate multi-drop. It then talks about adding transistors and such to improve the design. Nothing in KayNet or Web-NT indicates they stayed true to the "ULC" concept (Ultra Low Cost).
 
I was able to implement a basic multi-drop KayNet-like device for my simulator, and could start the ULCNet software. Haven't gone any further yet.
(screenshots attached)
 
Discovered a bit more about the KayNet hardware and software. First off, ULCOPSG.COM is for the Gatekeeper node and ULCOPS.COM is for the "workstations". According to the manual, the nodes are mostly peers but one node takes on the role of "gatekeeper" and becomes essentially the administrative "center" of the network.

I found the code that initializes the SIO in INET.COM, which must be run before a node can start using the network. The SIO is initialized for 8 data bits, 2 stop bits, and even parity. Still ASYNC, but it selects the 1X clock, so it has a faster data rate than 9600. However, to run RS-232 ASYNC at 1X clock, you must be sending clock along with the data to ensure synchronization.

We don't know the pin-out of the 4-wire telephone jack, so we can't be sure exactly what that means. Given they quote over 2000 feet of possible network span, I would think they are using differential transceivers (such as RS-422). In that case: two wires for the data signal, one for ground, and the other for the wire-ORed DTR. So I'm thinking they have some RLL encoding going on, similar to what the Web-NT product does (and similar to floppy disks). There must then be a data separator on the board, feeding the RxD and RxC pins of the SIO channel.
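For reference, the Z80-SIO write-register values implied by that description work out as below (computed from the SIO datasheet bit fields; the actual bytes and ordering in INET.COM may differ, and 06h as the Kaypro serial-data control port is my assumption):

Code:
# Z80-SIO register values implied by the INET.COM init described above
# (derived from the SIO datasheet bit fields; the actual bytes/order in
# INET.COM may differ).

WR4 = (0b00 << 6) | (0b11 << 2) | (1 << 1) | (1 << 0)  # x1 clock, 2 stop bits, even parity, parity on -> 0x0F
WR3 = (0b11 << 6) | (1 << 0)                           # Rx 8 bits/char, Rx enable -> 0xC1
WR5 = (1 << 7) | (0b11 << 5) | (1 << 3) | (1 << 1)     # DTR, Tx 8 bits, Tx enable, RTS -> 0xEA

def init_sio(outp, ctrl=0x06):   # 06h = Kaypro serial-data control port (assumption)
    for reg, val in ((4, WR4), (3, WR3), (5, WR5)):
        outp(ctrl, reg)          # select the write register via WR0...
        outp(ctrl, val)          # ...then write its value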

Collision detection is a bit different: the Web-NT board detects the collision and drives the DCD input on the SIO, while KayNet appears to simply rely on checking whether each character sent is seen reflected back (i.e. a collision must guarantee data corruption - sufficient to prevent a complete message from appearing good).

But, it seems to me that we could pull part of the Web-NT circuitry and adapt it, then add the DTR wire-OR stuff, and have a functional KayNet equivalent. At least have something that would run the original software.
 
Does the original software have to successfully communicate with other nodes at start? If not, and if it doesn't touch the SIO config after initialization, worst case the software could probably run as-is with a wire-OR for the serial data: just run the initialization, then run another program that switches the SIO back to a regular asynchronous mode. It would of course be really slow, and there is a risk that timeouts would be triggered, but it would be a way to get not-that-handy enthusiasts going with cheap components they likely already have in their junk box. Or maybe this is a bad idea :)
 
I haven't computed the timeouts yet to see if they would tolerate a 16X difference in speed, but what you're describing could otherwise work with a modified INET.COM. The software does depend on timing, though, as I had some issues even on the simulator if the data was not fast enough. The only really special thing about the SIO init is the switch to 1X clock, which requires the ability to send the bit clock along with the data. At the normal 16X, it should work with simpler wire-OR data and DTR. We'd have to work out the polarity of the DTR such that ANY node turning DTR off causes ALL nodes to see DCD off - I have not even checked that yet (it might be simple wire-OR, or might require inversion of signals).
 
Forensically, I have another mystery on how this original product was implemented. In the manual, it shows how to create the termination blocks needed at each end of the network. The use and placement of the resistors implies the network is two differential pairs. That could be clock and data separately (similar to modern twisted-pair Ethernet), or it could be a single combined clock+data pair plus the DTR. If it is separate clock and data, then there must be some extra circuitry to factor the DTR into those signals, and also to mitigate the fact that each node must stop sending clock at the end of the data. Perhaps this means that the DTR (and DCD) are entirely local conditions, and the KayNet board uses the DTR to drive a latch that enables the clock and data outputs (DTR dropping must clear the latch, and the start of the first data must set it?). That would make some of this less complicated (but further raises the question as to why there are so many chips on the board).

The other mystery here is that the termination block diagram shows (standard TELCO) signal pairs being red-green and yellow-black. The text earlier (Ch. 2), though, describes the signal pairs as red-black and yellow-green, invalidating the placement of the termination resistors. Not sure how to reconcile that.
 
Looking at the code some more, I see that there may not be any connection between DTR and DCD. The send operation simply bounces (pulses off) the DTR line 3 times when finished. Before sending, the code insists that DCD be on. So it may be the case that DCD does nothing more than detect an idle network, and DTR does nothing more than reset (turn off) the differential drivers. I.e. these signals may be entirely local to the node. The line drivers must (normally) be tri-state for all nodes except the one that is sending (else a collision results).

Given this, a possible equivalent circuit for the board might be this. It just detects the START bit of the first character to send, and uses that to turn on the drivers. It then detects a clock coming in from the network (someone is transmitting) and uses that to turn off DCD (in this case, the sense of DCD is reversed - it is OFF when there IS a carrier (clock) on the line). I will modify my simulator to do this and see how the software runs.
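Roughly, the behavior I'm planning to model is this (a hypothetical Python sketch of the simulator change; the names are made up):

Code:
# Sketch of the multi-drop bus model described above (hypothetical;
# names made up, roughly what the simulator change looks like).

class UlcNetBus:
    def __init__(self):
        self.nodes = []

    def carrier(self):
        # Clock on the wire whenever any node's drivers are enabled
        return any(n.driving for n in self.nodes)

class Node:
    def __init__(self, bus):
        self.bus, self.driving = bus, False
        bus.nodes.append(self)

    def dcd(self):
        # Reversed sense: DCD is OFF when there IS carrier (clock) on the line
        return not any(n.driving for n in self.bus.nodes if n is not self)

    def start_bit_seen(self):   # START bit of the first character enables drivers
        self.driving = True

    def dtr_off(self):          # dropping DTR resets (tri-states) the drivers
        self.driving = False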
 

(equivalent-circuit schematic attached)

I also found out that the INET.COM program reprograms the baud generator to 19.2K, which means the data rate is actually over 300KHz (closer to the 500KHz clock on the Web-NT). Looking at some of the timeouts, they are cutting things very close, giving barely 1 character time (39µs) - assuming a 4MHz Z80. From the documentation, it does not seem that they support the 2.5MHz Kaypros.
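Those numbers are consistent if the baud generator's output (16 x 19200, as it would produce for normal 16X async) is being used directly as the 1X bit clock:

Code:
bit clock = 19200 x 16                           = 307,200 Hz
frame     = 1 start + 8 data + 1 parity + 2 stop = 12 bits
char time = 12 / 307,200                         ≈ 39 µs  (~156 T-states at 4MHz)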

Of course, another approach to replicating this would be to put some sort of microcontroller/SoC on the piggy-back board, which could even make the actual physical network WiFi/Ethernet, allowing the use of TCP/IP outside of the Kaypro nodes and allowing some of the nodes to be PCs (or rPis or ...). It would also allow nodes to be separated by great distances (imagine networking two Kaypros on opposite sides of the planet). This is similar to something I did for Heathkit computers that leveraged the WIZnet WIZ850io module. We had CP/M machines running CP/NET that were separated by over 2000 miles - probably a first. We also had a Raspberry Pi running server code for CP/NET machines.
 
I think that a pair for clock and a pair for data seems likely. The handshake wouldn't need a balanced signal.

With balanced signals it's really easy to detect the presence/absence of a transmitter, as presence = different voltages and absence = the same voltage on both wires. I.e. you can kind of just feed the two wires to an EOR gate and low-pass filter the output so you don't get spikes at each level transition when an actual signal is transmitted.

Going off on a tangent:
Was this ever exported outside North America? In particular I'm thinking about running balanced 5V signals without also running a signal ground. That would probably work fine as long as all stations have signal ground connected to mains ground and all are connected to the same electric supply, and/or the power supply is fully insulated with no leakage (Y capacitors) from mains to signal ground. At least in parts of Europe, grounded sockets weren't mandated in "dry insulated rooms" until, for example, the 1990s (IIRC 1994 for Sweden), and computers and other equipment intended for office use, but also TVs, tended to have relatively high leakage current from mains to signal ground. That wouldn't work with a balanced 5V signal network without a separate signal ground wire unless each station (technically all but one) used opto-couplers (or transformers) for all signals, or all stations shared the same grounded power strip (even if the wall socket isn't grounded).

Going further off on the tangent, this was a problem for computers at home too. The 80's home computers suffered from either mains leakage to the TV, or the TV being grounded by the shared antenna in a multi-family house or via the lightning rod on multi/single-family houses, and other devices also leaked mains to the signal ground, with the result that you really wanted to pull all mains plugs before connecting/disconnecting certain signal wires. In the 90's this was a problem for TV outputs on PC graphics cards - I've seen "TV output add-on" chips (BT-something) with a crater... The RCA connectors didn't help, as they connect signal before ground... Also, in parts of Europe you can still find sub-panels that are fed with a single wire acting both as neutral and ground (PEN = Protective Earth + Neutral), and current running through the neutral causes a voltage drop that makes the ground potential differ slightly between sub-panels. Even more common is this sharing for the feed into buildings from the electric company.

This also reminded me of the Acorn / BBC Micro having something I think is called Econet, which also uses 5V balanced signals without isolation (the UK has had grounded sockets since forever, or rather since right after WW2), and I've read about problems when using this at some stock exchange where they ran the network across multiple floors.

Anyways, this tangent kind of says that any detection of "no signal present" would likely need an analogue circuit, i.e. an op-amp, connected in a way that it can handle input signals outside the voltage rails, in order to correctly detect if both signals are at the same voltage while still allowing them to drift outside the voltage rails. I'd have to read up on the RS-485 spec, or the 75185 (or whatever they are called) drivers that they likely use, to know if this is within spec or not. In particular I'm thinking about a mains-hum sine wave of a few volts or so overlaid on the signal.

For hobbyist purposes it seems easy to just, for example, use RJ45 Ethernet cables and connect the extra wires to signal ground. Also, I'd use D-sub connectors that screw onto the computer, so if someone trips on a wire they would break the RJ jack or whatnot rather than pull out the D-sub. (Maybe use an intermediate D-sub between the interface and the RJ45 connector, perhaps with the same DE9-RJ45 pinout as Cisco RS-232 consoles?)

But I'm getting ahead quite a bit here :)


Btw, re interfacing to other networks: wait until Cherryhomes reads about this and barges in with a FujiNet thing for KayNET! :)
 
Back around 1983, I was working on yet another grass-roots network product (MMS422) for CP/M machines (there were a lot, it seems, before Ethernet took the crown), which also used RS-422 (RS-485). In that case, we also sent clock and data as separate pairs but added signal ground. We used DB-9 connectors (only 5 pins used). So these three examples all used RS-422/RS-485 and implemented the physical network as:

Code:
Web-NT:   4 pins, combined clock-data pair, ground
ULCNet:   4 pins, separate clock/data pairs, no ground
MMS422:   9 pins, separate clock/data pairs, ground
 
ARCNET could also use twisted pair, but I don't know what voltages and whatnot it used.

DMX512, used by stage lighting, is kind of RS-232 8,n,2 at 250kbps but with balanced 5V levels. Fun fact: by using a "dumb" RS232-to-RS485/RS422 level converter, an Amiga can send out DMX512-compatible signals using its built-in serial port. However, as there are no buffers, it has to be done in a busy loop, and by breaking the standard and sending fewer than the spec'd 512 bytes, you save a lot of CPU time.
 
I did more research on KayNet ULCNet, trying to see if it could be implemented in a simple DB-25 dongle. At this point, I don't think that works with existing software. The first thing is that the software expects DCD to indicate network activity (DCD on is network "quiet"). The other thing is the software expects to be able to hold DTR off while sending, in order to do a "loopback" operation where nothing is sent on the wire but TxD is still looped back to RxD (the software insists on seeing the packet bytes echo back). I'm not seeing a way to do those without external logic, and the Kaypro serial data DB-25 does not provide power for external logic.

One thing I ran into was when trying to add a third node: when I try to init the third node, I get an "Authorization error". The manual, which is two years out of date with the software, does not mention anything special being needed. I wonder if they added some licensing stuff, so that you could only "demo" the network with two nodes without getting something else. I'll see if I can find clues in the code.

Here's the latest schematic, taking into account that the manual mentions a switch on the back panel that selects between using the serial port "locally" as the normal RS-232 connector or using it on ULCNet. That was obviously not under software control, and I don't see anything in the software that would accommodate switching dynamically. I think you had to shut down the network software before using the port locally.
 

(updated schematic attached)

Reminds me an awful lot of LocalTalk on the Apple machines, except that, as ever with Apple back then, LocalTalk pulled some rather clever trickery to make a few things work. In particular, it relies on the 3-byte FIFO in the sync serial chips used, so that it can do collision detection by sending a 3-byte header and a gap for listening. That also meant the interrupt latency on the host didn't drop anything at high speed: once a host saw a header addressed to it, it had time to switch to polling the USART for the rest of the bytes, or to check for a moment whether they were going to arrive.
 
I figured out why I could only start one workstation, but am not sure how to fix it. There is an extended BDOS function 98 that returns a 16-bit value which is used like a GUID or MAC address to uniquely identify the workstation. However, this value is embedded in the ULCOPS image, and I can find nothing that allows the value to be changed. I don't see any likely utility that patches ULCOPS.COM with a new value, and there is nothing in the code that allows it to be changed "live". That leaves having to manually patch a copy of ULCOPS.COM for each workstation... yuk. Anyway, as it is, every copy of ULCOPS.COM has the same value, and so when the second workstation tries to join the network, the gatekeeper thinks that workstation has already joined and refuses the request. The manual, which is two years out of date with the software, says nothing about that.
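If anyone wants more than two workstations in the meantime, a per-copy patch looks like the only route. Something along these lines would do it (hypothetical Python; the file offset is a placeholder that would first have to be located with a disassembler):

Code:
# Hypothetical helper for stamping each copy of ULCOPS.COM with a unique
# node ID. NODE_ID_OFFS is a placeholder - the real offset of the 16-bit
# value would have to be found with a disassembler first.

NODE_ID_OFFS = 0x0000   # placeholder: file offset of the embedded 16-bit ID

def patch_node_id(src, dst, new_id):
    image = bytearray(open(src, 'rb').read())
    image[NODE_ID_OFFS:NODE_ID_OFFS + 2] = new_id.to_bytes(2, 'little')
    open(dst, 'wb').write(bytes(image))

# e.g. patch_node_id('ULCOPS.COM', 'ULCOPS2.COM', 0x0002)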
 
Maybe you got a separate ULCOPS.COM for each license you bought?
 