
Stone Age HTTPD

snq

So I finally got my old PCs installed again yesterday, after moving over a year ago.
I figured I'd share my simple HTTP server; maybe someone has a use for it.
It uses my homebaked TCP lib, which has only really been properly tested over here on my AT and in DOSBox. The main focus has been on speed, while keeping memory usage at a reasonable level.
It should run on anything from 8086 and up. Some CRT functions may need a certain DOS version though.

The webserver will use the current directory as the root dir. It doesn't do index files or anything; its primary use is to provide an easy way to download files off your DOS machine.
The IP and port to use are specified from the command line, DHCP works too.
To keep things simple it only handles one request at a time, so the server will be busy while downloading files.

So, to get this thing running, first load a packet driver for your card and then start sahttpd like this: sahttpd.exe ip port, e.g. sahttpd 192.168.1.22 8080
The port param is optional; the default port is 80. For DHCP: sahttpd d 80, or just sahttpd d
The packet driver interrupt should be automatically detected, so no need for any config files.
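A typical session might look something like this (the packet driver name and its parameters below are just placeholders for whatever your card uses):
Code:
ne2000 0x60 3 0x300          (load your card's packet driver; values are just examples)
sahttpd 192.168.1.22 8080    (static IP, port 8080)
sahttpd 192.168.1.22         (static IP, default port 80)
sahttpd d                    (DHCP, default port 80)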

On my AT with a Linksys Ether16 card I'm getting 280k/s downloads. It does have an XTIDE card and a drive that maxes out at 2000k/s, so obviously YMMV.

View attachment sahttpd.zip

Bug reports are welcome :)
My next project will be an ftpd. I remember trying one, but it was so awfully slow that it was basically unusable.
 
My next project will be an ftpd. I remember trying one, but it was so awfully slow that it was basically unusable.


Since we last spoke in August 2010 I've written an FTP server and open sourced all of the code. You might want to look at that to see if it is usable for you; I don't think you've seen it because the speed is more than adequate. (But anything can be improved ...)


Mike
 
Since we last spoke in August 2010 I've written an FTP server and open sourced all of the code. You might want to look at that to see if it is usable for you; I don't think you've seen it because the speed is more than adequate. (But anything can be improved ...)
Ah, I'll check that out!
No, the one I tested was definitely not yours. I can't remember what it was called or where I got it from, but I'd have remembered if it was something you wrote :)
 
I did some optimizations in my lib today and the top speed on my AT is now 291k/s, so a couple of percent faster than before. I think the gain will be a lot bigger on 386 and newer machines, but I haven't tested that yet. It would be nice to actually get 300+ on the AT!
I'll test some more before uploading a new version.

Also started on an FTPD as I can't get Mike's ftpsrv working properly :( I worked on it for a couple of hours tonight and directory listings and file retrieval work now, even with a modern client (tested FlashFXP, ncftp, Windows 7 ftp.exe). It only works in passive mode though, as my lib doesn't do outgoing TCP connections yet. Next on the list is storing files. Nothing fancy, only the bare essentials for now; as you can see in the screenshot below it doesn't even support CWD yet ;)
(screenshot: saftpd.gif)
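For anyone wondering about passive mode: the server only ever listens on a data port and tells the client where to connect, so no outgoing connection is needed. The PASV reply is roughly this (just a sketch with a made-up send helper, not the actual saftpd code):
Code:
#include <stdio.h>

void ctrl_send(const char *line);   // made-up: write a line to the FTP control connection

// Sketch: build the RFC 959 "227" reply for PASV. The client then opens the
// data connection to us, so the server never needs an outgoing TCP connect.
void send_pasv_reply(const unsigned char ip[4], unsigned short dataPort)
{
    char line[64];
    sprintf(line, "227 Entering Passive Mode (%d,%d,%d,%d,%d,%d)\r\n",
            ip[0], ip[1], ip[2], ip[3],
            dataPort >> 8, dataPort & 0xFF);   // port split into high byte, low byte
    ctrl_send(line);
}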
 
Well, 300k/s is a fact :) I did have to compile for 286 to get there though; I'm pretty much stuck at 297k/s when compiled for 8086.
 
Also started on an FTPD as I can't get Mike's ftpsrv working properly :(

Just for the record, I gave Nico some suggestions to figure out why mTCP wasn't working for him. It was working, but very slowly. Apparently he forgot to turn off the trace debugging mode when he last used mTCP two years ago. Getting rid of the detailed tracing at run-time makes it run acceptably.

I take bug reports and fix them if there is a problem. You just have to email me - I don't read minds yet. (But I did make a good educated guess on this particular problem.)


Mike
 
Yep! From 10k/s both ways to 160k/s down and 200k/s uploads made quite a difference ;)
I might have to patch the source to support/ignore the "-al" param for LIST though, I don't want to switch to a different client, it's been serving me well since 2008.
 
Yep! From 10k/s both ways to 160k/s down and 200k/s uploads made quite a difference ;)
I might have to patch the source to support/ignore the "-al" param for LIST though, I don't want to switch to a different client, it's been serving me well since 2008.

I think you are using FlashFXP - if so, take a look here: http://www.flashfxp.com/history-latest

An excerpt from that page:

4.2.1.1744 March 25 2012 Maintenance release: Improved [server compatibility] We now attempt to detect "mTCP FTP server" and "PS3 FTP Server" during login, when detected FlashFXP will issue the standard LIST command without any parameters.


A user reported the problem with the mTCP FTP server and they fixed it without too much discussion. I had a side discussion with the author directly about whether mTCP should change or the client, and they were pretty insistent that the client should change and not do the non-standard behavior.

Those guys rock ... I think with anybody else, as soon as you mention that it is an FTP server running under DOS, they would just laugh.


Anyway - my FTP server speeds are going to be much slower than your HTTPD speeds because the FTP server is trying to handle multiple connections at a time without starving any, which on a small/slow machine means lots of ping-ponging back and forth between active connections and accepting new connections. With some of the config parameters it could be made faster, but as you point out the design goals are different.


Mike
 
That's awesome :) Like you say, most developers would laugh at something like that.

I've patched my copy of ftpsrv for now to work with my old version of FlashFXP. Basically the same fix, except server-side. Works great now!
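Server-side, the fix is basically just skipping anything that looks like switches at the start of the LIST argument and treating whatever is left as the path. Roughly like this (a sketch, not the actual patch):
Code:
// Sketch: tolerate clients that send "LIST -al" (or similar) by skipping
// leading "-x" style switch groups; whatever remains is the path argument.
const char *strip_list_flags(const char *arg)
{
    while (*arg == ' ') arg++;              // leading blanks
    while (*arg == '-') {                   // one or more switch groups: -a, -al, ...
        while (*arg && *arg != ' ') arg++;  // skip the switch word
        while (*arg == ' ') arg++;          // and the blanks after it
    }
    return arg;                             // "" means: list the current directory
}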

I'll still continue work on my own ftpd though, just for the fun of it. Not trying to give you competition or anything. As I wrote in my email, my primary goal is to have something I can use myself, and I have a weak spot for optimizing stuff. I'll put it online when I'm done, and if anyone else is ever in a huge hurry to get a file off a DOS machine they can use it too ;)
 
I don't mind competition - I look at it more as collaboration. And in fact, now I'm thinking about your checksumming method. I had spent a lot of time in the checksumming code and I had some help from Krille on it too. Combining the checksumming with the copy of data from the user buffer to the outgoing TCP buffer would save some time, but at the cost of yet more complexity. Definitely something to think about.
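Roughly, the idea is to fold the two passes into one: add each 16-bit word to the running checksum as it is copied into the outgoing buffer. A minimal sketch (not actual code from either stack; the pseudo-header and the final inversion are left to the caller):
Code:
// Sketch: copy the payload into the outgoing TCP buffer and accumulate the
// Internet checksum (ones' complement sum of 16-bit words) in the same pass.
unsigned short copy_and_checksum(unsigned char far *dst,
                                 const unsigned char far *src,
                                 unsigned int len)
{
    unsigned long sum = 0;
    while (len >= 2) {
        unsigned short w = *(const unsigned short far *)src;
        *(unsigned short far *)dst = w;     // copy two bytes...
        sum += w;                           // ...and add them to the checksum
        src += 2; dst += 2; len -= 2;
    }
    if (len) {                              // odd trailing byte
        *dst = *src;
        sum += *src;
    }
    while (sum >> 16)                       // fold the carries back in
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (unsigned short)sum;             // caller adds pseudo-header and inverts
}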
 
I have 8086 and 386 versions of the checksum/copy code and the lib uses whichever is going to be fastest on the current CPU. The 386 version is pretty much twice as fast as the 8086 version, so it really makes a difference. I also have a Pentium version of the checksum code that I haven't adapted yet; that might be a bit more work as it's pretty complex. So for now I just use those first two versions.
If you want to have a look you know where to find me!
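The selection itself is nothing fancy; conceptually it's just a function pointer that gets set once at startup and never touched again (a simplified sketch - the CPU detection helper is made up, and the real routines are in asm):
Code:
typedef unsigned short (*CsumCopyFn)(unsigned char far *dst,
                                     const unsigned char far *src,
                                     unsigned int len);

// Entry points for the real (asm) routines.
unsigned short csum_copy_8086(unsigned char far *, const unsigned char far *, unsigned int);
unsigned short csum_copy_386 (unsigned char far *, const unsigned char far *, unsigned int);

int cpu_is_386_or_better(void);          // made-up detection helper

CsumCopyFn csum_copy = csum_copy_8086;   // safe default for any CPU

void init_csum_copy(void)
{
    if (cpu_is_386_or_better())
        csum_copy = csum_copy_386;       // 32-bit version, roughly twice as fast
}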
 
FYI, with Mike's ftpsrv application running on a 286 (16 MHz, a modern 5400 rpm 2.5" IDE drive and a 3Com 3C509B) I get transfers of larger, multi-megabyte files maxing out at circa 800 kbytes/sec. Way faster than I thought I'd get from that little 16-bit CPU. It's a great set of tools.
 
I had a chance to test. Here are my observations:

The test machine is a 386-40 with 128KB of L2 cache and an IDE hard drive. When running my FTP client and sending, it is capable of 400KB/sec, including the file reads. With your code serving a 4MB file over HTTP I got 520KB/sec, so it is at least 25% faster than my comparable FTP code. Your technique of combining checksumming and data copying could account for a lot of that gain - checksumming is the single biggest part of my code when I profile it. Patching the code to use the correct software interrupt directly instead of using the C runtime int86x() function is probably helping a lot too - the compiler-supplied code for that is ugly. I also have tracing support, fragment handling, and other things compiled in that you probably are not dealing with, so it's not a perfect comparison, but it's pretty close.

There were two problems:

  • The DHCP code did not work with my common Linksys router.
  • LFTP (my favorite command line client) could not talk to it at all. That means that you probably just need to implement a few more functions. (I used wget for testing.)



Mike
 
Cool, thanks for testing.
I wonder how the int86x() function works? IIRC I looked at the source once and it has some kind of jump table or something? You could implement that yourself if you don't want self-modifying code, maybe in a more optimal way than the CRT does it.

I tested DHCP with a couple of different routers that I've had over the years, all ASUS though. I'm not really using DHCP myself, but if it's an easy fix I'll fix it. If you happen to have Wireshark or another sniffer running and notice anything weird in the dump, do let me know so I can fix it :)
I took a look at lftp - that's a cool tool! Can't believe I never used it before. The problem was that I wasn't supporting HEAD requests. It seems to be all fixed now; it gets a listing and can download files. I had to make it output the listing in an lftp-friendly format when it sees an lftp client, as the dates and sizes were being parsed wrongly before.
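For the record, HEAD is basically a GET where you stop after the headers, so the fix itself is tiny. Conceptually something like this (simplified, with made-up helper names - not the actual sahttpd code):
Code:
#include <stdio.h>
#include <string.h>

// Made-up helpers standing in for the real connection/file code.
void send_str(const char *s);
void send_file_body(const char *path);
long file_size(const char *path);

// Send the status line and headers for both GET and HEAD,
// but only send the file body for GET.
void handle_request(const char *method, const char *path)
{
    char hdr[64];
    send_str("HTTP/1.1 200 OK\r\n");
    send_str("Content-Type: text/plain\r\n");
    sprintf(hdr, "Content-Length: %ld\r\n", file_size(path));
    send_str(hdr);
    send_str("Connection: close\r\n\r\n");   // blank line ends the headers
    if (strcmp(method, "HEAD") != 0)         // HEAD: headers only, no body
        send_file_body(path);
}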
The version you tested didn't have the checksum optimization yet btw, I hadn't uploaded that anywhere yet.

Here's the latest version (0.2), with the checksum optimization, lftp support and some more small bugfixes. I'm curious to hear the difference in speed between this one and the previous version!
View attachment sahttpd-0.2.zip
 
It's a jump table. What they do is ugly, but I'm not terribly concerned about it. Not enough to replace it ...

In the last 1.5 hours since I posted I rewrote some code in the mTCP FTP client and brought the speed up from 400KB/sec to 528KB/sec, which is about the same speed I am getting from your mini http server. The trick involved flow control. My current code reads quite a bit ahead, but when that buffer gets exhausted it is possible to send a short packet. So besides the inefficiency of sending the short packet, there is also quite a lag to reload the buffer from the filesystem. The larger the buffer the less often you get short packets, but the longer the delay when you need to reload the buffer.

The new code is a bit smarter. I don't use such a large buffer, which cuts the reload time. And I do the next file read after I have sent out the current chunk of data, making better use of the time between packets. That little flow control trick makes quite a difference - I'm now sending files at nearly the same speed I receive them.
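In pseudo-code, the send loop basically becomes "queue the current chunk, then read the next one while the previous packets drain". Something like this sketch (not the actual mTCP code; the socket interface below is made up):
Code:
#include <io.h>      // read() lives here in the DOS compilers

// Made-up socket interface for the sketch.
struct Socket {
    void queue(const char *buf, int len);   // hand data to the TCP send window
    void drive(void);                       // pump the stack: send packets, process ACKs
    void flush(void);                       // push out any final short packet
};

enum { CHUNK = 8192 };                      // smaller buffer -> shorter reload stalls
static char bufA[CHUNK], bufB[CHUNK];

void send_whole_file(int fd, Socket &s)
{
    char *cur = bufA, *next = bufB;
    int curLen = read(fd, cur, CHUNK);      // prime the first chunk
    while (curLen > 0) {
        s.queue(cur, curLen);               // current chunk goes on the wire
        int nextLen = read(fd, next, CHUNK);// read ahead while packets drain
        s.drive();
        char *t = cur; cur = next; next = t;// swap buffers for the next round
        curLen = nextLen;
    }
    s.flush();
}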

I was going to release another mTCP in the next few weeks. I'll probably put this new code in it, after I get some more testing time on it. (With the code changes that I made I can't use the checksum/memcpy optimization that you have, but I eliminated a memcpy so that isn't bad.)
 
That's a pretty large improvement! If you want a beta tester, send it over :)

My flow control could really use some work. I'm reading 32k at a time and just putting it in the send window right away, with possible delays if the send window gets filled up (which it probably will). But 32k is what gave me the best performance here. I also discovered yesterday that my receiving code needs a bit of work: receiving is slower than sending even if I take the file operations out of the equation. What's happening is that the receive window gets filled up by the faster machine and it has to send a lot of window update packets. Once it's full it sends out two packets for every incoming packet: one to ack the packet, which also tells the other side the window is full, and then another one to tell it we have some space again. Not very optimal :)
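The textbook fix on the receiving side is to hold back the pure window-update packet until a decent amount of space has opened up again (at least one full segment), instead of announcing every little bit. Something like this (sketch only, not my actual lib code):
Code:
// Made-up connection state for the sketch.
struct TcpConn {
    unsigned rcvBufSize;        // total receive buffer
    unsigned rcvBufUsed;        // bytes still waiting for the application
    unsigned advertisedWindow;  // window we last told the peer about
    unsigned mss;               // maximum segment size for this connection
};

void send_ack(TcpConn &c);      // made-up: send an empty ACK carrying the current window

// Called after the application drains data from the receive buffer.
void maybe_advertise_window(TcpConn &c)
{
    unsigned freeSpace = c.rcvBufSize - c.rcvBufUsed;
    // Only send a pure window update once the window has grown by at least
    // one MSS since we last advertised it, so a full window doesn't cost an
    // extra update packet for every incoming segment.
    if (freeSpace >= c.advertisedWindow + c.mss) {
        c.advertisedWindow = freeSpace;
        send_ack(c);
    }
}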

For the int calls you can do most of it in C++ and keep it nice and tidy. We're calling the same interrupt all the time, so once we know which int we need we can prepare the code and be done with it. So in my case I patch the code once when initializing, and from then on it's never changed again.
In C++ you can probably do something like this, keeping it somewhat elegant ;)
Code:
typedef void (__cdecl *CallIntFunc)();
// One tiny stub per packet driver interrupt we might end up calling.
void __cdecl CallInt60() { __asm { int 0x60 }; }
void __cdecl CallInt61() { __asm { int 0x61 }; }
void __cdecl CallInt62() { __asm { int 0x62 }; }
void __cdecl CallInt63() { __asm { int 0x63 }; }
// .... etc

CallIntFunc CallInt = CallInt60; // or whatever int you want to call
At driver initialization just set CallInt to the interrupt the driver is at, and you should be good to go - just call CallInt() where you need it. Just make sure the registers don't get changed when calling it.
 

very, very cool! i'm going to give it a try. good stuff. coincidentally i've done the same thing, i wrote my own TCP/IP library and used it in a webserver. so that makes at least three of us on this forum that have written a TCP lib. :bigeyes:

it's running live on the internet on my 5150 at http://irc.rubbermallet.org:8088 and has been up for weeks.

you can download my TCP lib source code from there. there's a programmer's reference PDF in the ZIP file. you can download the httpd executable directly from it too at http://irc.rubbermallet.org:8088/httpd.exe

i'm going to give yours a try on one of my 8088 machines and see how it does.

EDIT: just want to add that it's designed to be built with Turbo C++ plus TASM. i've successfully built it with both Turbo C++ 3.1 and Turbo C++ 1.01, and i used TASM 2.02.
 

both of your TCP stacks are probably faster than mine. i'm not doing anything terribly fancy and haven't spent much time trying to get the most speed out of it yet. i was just focused on getting it working. it's proven to be pretty reliable though in all of my testing so far.

if you guys have any suggestions for improving my code or see something that's broken please let me know! i knew almost nothing about low-level TCP/IP details when i started on this, so writing this was basically a crash course in the protocol for me. so far i've tested the lib on a few 8088s, a 286, 386, 486, and inside my PC emu.
 
okay snq, i had a chance to run it on my XT. it worked nicely. i only have one XT-IDE card and it's in my 5150, so just a regular old seagate 30 MB MFM drive in the XT. network card is an NE1000 clone. downloading files from it, wget reports an average of 11.2 KB/s which is not bad at all. that actually beats my httpd using my stack which gets 10.0 KB/s on the same machine. how many concurrent connections can sahttpd.exe handle? that can have a big effect on the speed on these slow machines.

something else i noticed is that when i try to view a .HTM file on it from a browser (firefox), it's displayed as plain text HTML code. you should add a Content-Type field to your HTTP response based on the file extension.

EDIT: if i use IE (although it pains me to do so!) it does show the HTM files properly.

i get this response to a GET /INDEX.HTM HTTP/1.1 if i connect with putty as a raw connection:

Code:
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 305
Connection: close
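a small extension-to-type lookup would probably do the trick, something like this (just a sketch):
Code:
#include <string.h>

// Sketch: pick the Content-Type from the file extension, falling back to
// application/octet-stream for anything unrecognized.
const char *content_type_for(const char *name)
{
    const char *ext = strrchr(name, '.');
    if (ext != 0) {
        if (stricmp(ext, ".htm") == 0 || stricmp(ext, ".html") == 0) return "text/html";
        if (stricmp(ext, ".txt") == 0) return "text/plain";
        if (stricmp(ext, ".gif") == 0) return "image/gif";
        if (stricmp(ext, ".jpg") == 0) return "image/jpeg";
        if (stricmp(ext, ".zip") == 0) return "application/zip";
    }
    return "application/octet-stream";
}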
 