
2.11 BSD ethernet throughput?

shirsch

Veteran Member
Joined
Aug 17, 2008
Messages
870
Location
Burlington, VT
I have built up a pseudo-11/73 in my extended Heath H-11 chassis using an M8192-YB CPU, a DELQA-Turbo Ethernet board, a DLV11-J SLU, and Joerg Hoppe's QBone. The QBone supplies 4 MB of memory and an MSCP storage adapter plus drive. Priority order is as listed above. I'm running 2.11 BSD on the system and have rebuilt the kernel to use the 'qt' (Turbo) Ethernet driver. I'm experiencing very slow FTP throughput, with upload and download speeds around 20 kB/s. This seems very sluggish, even for such old hardware. Can anyone provide some insight into whether this is expected, or whether there's a configuration option I might be missing? It occurs to me that the QBone memory emulation may be extremely slow, but I don't have enough physical memory to run a 22-bit system without it (just a few 8 and 16 kword 11/03 RAM boards).
 
Can you describe the rest of your network configuration? How is the 11/73 attached to the rest of your network? Presumably the other system involved is more recent hardware that can easily overwhelm the 11/73. One problem seen when a much faster system talks to an older one is that lost Ethernet packets cause TCP to perform poorly. I don't recall whether 2.11 BSD ever implemented TCP selective acknowledgement (SACK), which goes a long way toward ameliorating this issue.
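
One quick check for loss on the 11/73 side is the protocol statistics that netstat prints: under the "tcp:" section, a retransmit count that is large relative to the packets-sent count points to packet loss on the path. A minimal check, assuming the stock 2.11 BSD netstat (the exact counter labels vary a little between BSD releases):
Code:
# netstat -s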

I'd see this type of poor performance every time we deployed servers on the next Ethernet data rate while the clients were still on the previous generation. In this case it can be important to know whether you are using hubs or switches between the two systems. Another thing to try is to connect the 11/73 directly to the second system, or to force the Ethernet link speed to 10 Mb/s on the second system, and see if that improves throughput.
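
If the second system runs Linux, forcing the link can be done with ethtool. This is just a sketch; the interface name eth0 is an assumption, and other operating systems expose the same setting through their adapter configuration. The second command verifies what the link actually negotiated:
Code:
# ethtool -s eth0 speed 10 duplex full autoneg off
# ethtool eth0
Note that the switch port has to tolerate the forced setting; a duplex mismatch between a forced port and an autonegotiating peer will itself cause packet loss.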

The other part to characterize is what data rates the 11/73 can manage to the simulated disk storage, just to be sure that isn't an unexpected bottleneck.

Also, if you can run Wireshark on the second system while you are performing transfers, that can give a good clue as to what is happening.
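
If a GUI capture is awkward on that machine, tcpdump can capture to a file that Wireshark will open later. The interface name and the 11/73's address below are placeholders for your own:
Code:
# tcpdump -i eth0 -w pdp11.pcap host 192.168.1.73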
 
All very good points. Currently I have a Digital-branded AUI-to-10Base-T MAU connected to the cab kit, with an RJ-45 cable plugged into a 10/100 switch that extends my 10/100/1000 shop switch. I can certainly force a spare machine to 10 Mb speed and see if that makes a difference.

What would you recommend for benchmarking the 11/73?
 
These examples come from my 11/73 just now, with a DELQA Ethernet and a CMD CQD-220/TM SCSI adapter, a SCSI2SD model 5.2 (I think) with firmware version 4.2 as the ra0 disk, and a DEC RZ26 disk as ra1. For the purposes of these tests it isn't important to get exact timings and I/O rates, just to understand the approximate ballpark for I/O throughput.

The CQD-220/TM manual states it is capable of transfers up to 4.8 MiB/s in synchronous mode and 3.0 MiB/s in asynchronous mode. I don't know what sort of fragmentation my file system has; contiguous sequential I/O usually performs better than random I/O on older storage devices.

You can use the dd command to see how well the disk and file system perform. First we'll test the read speed for the SCSI2SD ra0 hard drive.
Code:
# time dd if=/dev/rra0e of=/dev/null bs=16k count=1024
1024+0 records in
1024+0 records out
       25.7 real         0.2 user         6.6 sys

16 MiB in 25.7 seconds gives a read rate of 652,810 bytes/second from the raw disk device.

Now for the RZ26 ra1 hard drive.
Code:
# time dd of=/dev/null bs=16k count=1024 if=/dev/rra1a
1024+0 records in
1024+0 records out
       22.6 real         0.2 user         6.0 sys

A somewhat better rate of 742,355 bytes/sec.

Next we'll write and read through the file system on the SCSI2SD ra0 disk. My 11/73 only has 3 MiB of RAM, so writing 16 MiB will push data through the file system cache to the disk. This version of dd doesn't have an option to use direct I/O bypassing the disk cache, so we'll run a sync afterwards to flush the cache to disk.
Code:
# time dd if=/dev/zero of=testfile1 bs=16k count=1024
time sync
1024+0 records in
1024+0 records out
      268.7 real         0.2 user       255.7 sys 
#         0.3 real         0.0 user         0.1 sys

Giving a write rate of 62,369 bytes/sec (16 MiB over the 269.0 seconds of dd plus sync combined). Not very encouraging given what today's hardware can do, but it is what this combination is capable of.

Next we read back the data (some of the end of the written file may still be in the cache, but reading from the start of the file will evict it).
Code:
# time dd if=testfile1 of=/dev/null bs=16k
1024+0 records in
1024+0 records out
      149.5 real         0.3 user       146.5 sys

Giving a read rate of 112,222 bytes/sec. There are a variety of tools to characterize file system I/O performance; a search will turn some up, such as ioperf or fio (though it's unknown how easily they would port to an old non-ANSI C compiler).
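
Short of porting one of those, dd alone gives a rough feel for block-size sensitivity. A minimal sketch, assuming the same raw device name as above; each command reads the same 16 MiB total, so the real times are directly comparable:
Code:
# time dd if=/dev/rra0e of=/dev/null bs=4k count=4096
# time dd if=/dev/rra0e of=/dev/null bs=16k count=1024
# time dd if=/dev/rra0e of=/dev/null bs=64k count=256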

Next we'll test FTP reads from my MacBook Pro, across a 5 GHz WiFi through two Cisco 1 Gb/s small office 8 port switches to the RJ-45 to AUI adapter on the 11/73.
Code:
ftp> get testfile1
200 PORT command successful.
150 Opening BINARY mode data connection for testfile1 (16777216 bytes).
226 Transfer complete.
16777216 bytes received in 567 seconds (28.9 kbytes/s)

You can see the transfer rate is similar to the one you had. In this case, since we aren't sending data from my MacBook to the 11/73, we don't have to be concerned with the link-speed disparity caused by the DELQA's 10 Mb/s link rate.

If you really want to test the network stack and Ethernet adapter rates, you could use something like iperf3 (or an older version; I don't know offhand whether any will compile with an old non-ANSI C compiler). iperf3 is a very flexible tool that will show any issues with the network connection pretty clearly, including lost packets.
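
For reference, a typical iperf3 run between two modern hosts looks like this; the address is a placeholder, and whether any version builds on the 11/73 itself is an open question:
Code:
server# iperf3 -s
client# iperf3 -c 192.168.1.50 -t 30 -i 5
The -t 30 runs the test for 30 seconds and -i 5 prints interim results every 5 seconds, which makes stalls and retransmit bursts easy to spot.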
 
I tried to run your 'dd' based test on the root disk, but that resulted in an error message about "disk in use". Then I ran it on the swap partition and measured about 180 kB/s. But somehow this operation hosed the root partition very badly, so I'm currently recovering back to where I was. It's hard to understand how reading the swap partition and dumping it to /dev/null could damage anything, but it did. Somehow.
 
Did you use the "raw disk" device, such as /dev/rra0 (note the two "r" characters)? Using the block device instead, for example /dev/ra0 (one "r"), could result in the "disk in use" message. Just make sure you never put a disk partition as the output file for the dd command unless you know that partition doesn't contain data you need to keep (and is not mounted)! Reading from the swap partition shouldn't have caused an issue either.
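
You can confirm which flavor of device node you have with ls: the mode string of the block device starts with "b" and that of the raw (character) device starts with "c". For example, for the ra0a partition:
Code:
# ls -l /dev/ra0a /dev/rra0a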

Also, run the tests with the system otherwise idle (especially no other disk I/O) to see the upper end of the throughput.
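
A quick way to confirm the system is quiescent before a run, assuming the stock 2.11 BSD tools, is to check the process list and watch the per-disk transfer counts over a few intervals:
Code:
# ps aux
# iostat 5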
 