mTCP web server (HTTPServ) available for downloading

Testing cancelled for now - sorry for the false alarm!

Apparently this isn't as well baked as I thought it was. I've removed the download site and I'm going to work on trying to find out what is plaguing Mike.

There is also a problem I discovered tonight when testing against lftp using the mirror mode; lftp is not happy with one of my responses to a HEAD request. I need to figure that out; most of them work but there is one in particular I don't understand.
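For reference, a correct HEAD response is byte-for-byte the status line and headers that a GET for the same resource would return, with no body following the blank line. Something like this - the header values here are just for illustration:

    HEAD /index.htm HTTP/1.1
    Host: www.brutman.com

    HTTP/1.1 200 OK
    Date: Tue, 17 Jun 2014 20:37:02 GMT
    Content-Type: text/html
    Content-Length: 1024

    (no body bytes follow; Content-Length describes what a GET would have sent)

If one of my responses strays from that, lftp has every right to complain.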

Stay tuned ...
 
Mike, I know I've been nagging you about this before but you really should use the SVN repository on Google Code. This particular bug may or may not be easy for someone else to spot in the source, but still, chances are that an extra pair of eyes could help in situations like this.
 
Mike, I know I've been nagging you about this before but you really should use the SVN repository on Google Code.
Whereas I'd sooner put a bullet in my head than use any sort of 'versioning' software; it's pissed on every project I've ever tried with it, and seems more like a crutch for the inept or badly managed development than providing any sort of actual benefits... Either that or I coded for WAY too many DECADES without that sort of thing, and it's as alien to me as visual programming. (which I also have a complete mental block on the use of).

BTW, the link is now 404. -- edit -- nm, didn't see the message about that on this page.
 
Whereas I'd sooner put a bullet in my head than use any sort of 'versioning' software; it's pissed on every project I've ever tried with it, and seems more like a crutch for the inept or badly managed development than providing any sort of actual benefits... Either that or I coded for WAY too many DECADES without that sort of thing, and it's as alien to me as visual programming. (which I also have a complete mental block on the use of).

Well, I suppose there are good and bad version control software, just like any other kind of software. Personally, I like the TortoiseSVN client and have never had any real problems with that. I am curious about your experiences though, specifically how "it pissed" on your projects? :)
 
DeathShadow,

The greatest threat to humanity is hyperbole.


Krille,

I use SVN at home. At some point I'll enable SVN for the Google Code downloads, but it's not going to help in this case - I don't like checking in code that is not complete. The HTTPServ code would not have been in the repository, and it won't be in the mTCP package until I am totally happy with it.

Another thing to consider is that if I have not seen the crash and I can't recreate it, then the chances of somebody else finding it are pretty small. The TCP library is pretty complex, but it is solid and it has not changed much. The new code is probably the source of the problem, but it is not going to be easy to find. I would much rather have people contributing new features than trying to debug my problems, and if I can't debug my problems then the code is too complex and I need to fix that.

I use Tortoise SVN as well, and I've used it in a professional setting too. Source control systems are good things.
 
I know I'll never be able to convince you but here I go anyway... ;)

I don't like checking in code that is not complete.
Define "complete"? If the code can be successfully compiled to an executable, then I'd say it's complete enough. People doing a checkout or just browsing the code will know it's a work in progress and won't expect it to be perfect.

The HTTPServ code would not have been in the repository, and it won't be in the mTCP package until I am totally happy with it.
Why not? Don't take this the wrong way but a very interesting talk by a couple of guys you might know is starting to feel relevant here. :D

Another thing to consider is that if I have not seen the crash and I can't recreate it, then the chances of somebody else finding it are pretty small.
Yes, but it is possible. Besides, maybe there are other bugs in the code that someone could find.

I would much rather have people contributing new features than trying to debug my problems, and if I can't debug my problems then the code is too complex and I need to fix that.
If the code is easily accessible (with a browser) then it will attract interest from people. If they see something they think they can improve then they will contribute (with new features, bugfixes, optimizations, whatever). If you only release full featured, bug free, optimized and polished code in a zip file (in other words, your finished programs) that's never going to happen.
 
Whereas I'd sooner put a bullet in my head than use any sort of 'versioning' software; it's pissed on every project I've ever tried with it, and seems more like a crutch for the inept or badly managed development than providing any sort of actual benefits... Either that or I coded for WAY too many DECADES without that sort of thing, and it's as alien to me as visual programming. (which I also have a complete mental block on the use of).

BTW, the link is now 404. -- edit -- nm, didn't see the message about that on this page.

You're always so grumpy, lighten up!
 
As far as version control repos, I have my TCP/IP stack, MoarNES, and Fake86 available via git on SourceForge... but honestly I only like to work on it myself because then I can say "I wrote all of that!" :) It's only up there so people can get "bleeding edge" code with new features or bugfixes before I release a new official version if they want to.
 
I know I'll never be able to convince you but here I go anyway... ;)

It's good that you know you are fighting a losing battle. It puts things in perspective. ;-)

Define "complete"? If the code can be successfully compiled to an executable, then I'd say it's complete enough. People doing a checkout or just browsing the code will know it's a work in progress and won't expect it to be perfect.

"Successfully compiled to an executable" is a terrible standard. Nothing can probably ever be perfect but no code should be shared/checked into a library without a reasonable amount of review and testing. (In my case I use a lot of testing to get around the fact that I don't have peer reviews until an initial "good enough" version is shared.)

Why not? Don't take this the wrong way but a very interesting talk by a couple of guys you might know is starting to feel relevant here. :D

I think they know something about software engineering. But this isn't a matter of proving I am a genius or not. The only opinion that matters there is my wife's opinion. And she is firmly in the "no" camp.

But seriously, as more of a firmware engineer I am not a big fan of "throw it against the wall and see if it sticks". Especially when programming in what is essentially an embedded environment. I need the base code to be as reliable as possible so that other people do not have to struggle with it. It was flakiness with NTCPDRV that led me to write my own entire stack and applications.

Yes, but it is possible. Besides, maybe there are other bugs in the code that someone could find.

I have no problem with people finding bugs; you have found your share. But I do take responsibility for getting rid of the obvious ones, and for finding the devilish ones.

If the code is easily accessible (with a browser) then it will attract interest from people. If they see something they think they can improve then they will contribute (with new features, bugfixes, optimizations, whatever). If you only release full featured, bug free, optimized and polished code in a zip file (in other words, your finished programs) that's never going to happen.

I'm not sure I agree with most of these points. A browseable repository might make things easier for the casual user, but I'm not terribly interested in casual users. I think there are plenty of problems in the current code, and plenty of features still left to implement. The distribution of the source code is not a real impediment - people have been working with tarballs and other "bunch of files" distribution formats since the beginning of time.

Maybe people have less incentive to work on things that are good enough; I can't argue that. But if you use IRCjr as an example, the first source code was released in 2011. In the two years since the code was first released I added the mIRC color codes, additional /ctcp commands, fixed bugs with some servers, added standard IRC attributes such as bold, italics, underlining, added 132 column support, added the PASS command, added config file parameters for handling the various quit and nick messages, and did some refactoring to clean things up and reduce memory footprint. I did these things because they bothered me and nobody else was interested ...

You in particular have had a history of working on the performance and picking holes in what the compiler generates. But up until about a week ago, you were the only other person who got their hands dirty. Last week somebody gave me code for handling additional code pages. Their barrier to entry wasn't the ZIP file with the source code; it was learning how to use Watcom. And I think they were a lot happier getting their hands dirty knowing that the foundation they were working with was solid.

As for the mystery Mike C bug, I think I found it tonight. It was a classic uninitialized pointer, which worked great if the pointer was NULL but failed miserably if it picked up previously dirtied memory. I've added a lot of consistency checking code and I'm going to add some more so that the next time this happens there will hopefully be less flailing.
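Boiled down, the failing pattern looked something like this (an illustrative sketch, not the actual HTTPServ code):

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch of the bug class: a pointer that one code path forgets to set. */
    struct Connection {
      char *statusPage;
    };

    int main( void ) {
      struct Connection *c = (struct Connection *)malloc( sizeof( struct Connection ) );
      if ( c == NULL ) return 1;

      /* Early in a run the heap often happens to be zeroed, so the forgotten */
      /* field reads as NULL and this check "works" by accident.  After a     */
      /* stop and reload the same bytes hold stale data, the check passes,    */
      /* and we follow a wild pointer.                                        */
      if ( c->statusPage != NULL ) {
        printf( "%s\n", c->statusPage );   /* undefined behavior on dirty memory */
      }

      free( c );
      return 0;
    }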

And don't worry. After I release an initial version there will still be plenty of other things for people to do. And yes, I will get the SVN repository enabled at some point. I'm never going to check in my works-in-progress, but it is a nice way to get bug fixes out there faster.
 
But seriously, as more of a firmware engineer I am not a big fan of "throw it against the wall and see if it sticks". Especially when programming in what is essentially an embedded environment. I need the base code to be as reliable as possible so that other people do not have to struggle with it. It was flakiness with NTCPDRV that led me to write my own entire stack and applications.

Speaking of NTCPDRV, what would be really cool is to make a TSR with mTCP that's compatible with the NTCPDRV ABI, enabling older programs written for that to use a better TCP stack without modification... or if you're not interested maybe I'll do it with my stack instead. It seems pretty reliable too. I had an HTTP server powered by it running on an 8088 that stood up to being linked on hackaday and several DoS attacks. It kept running for weeks, until I shut it off myself. :)



As for the mystery Mike C bug, I think I found it tonight. It was a classic uninitialized pointer, which worked great if the pointer was NULL but failed miserably if it picked up previously dirtied memory. I've added a lot of consistency checking code and I'm going to add some more so that the next time this happens there will hopefully be less flailing.

Great, glad you found it! Hopefully that's it. Looking forward to testing the fixed version.
 
Speaking of NTCPDRV, what would be really cool is to make a TSR with mTCP that's compatible with the NTCPDRV ABI, enabling older programs written for that to use a better TCP stack without modification... or if you're not interested maybe I'll do it with my stack instead. It seems pretty reliable too. I had an HTTP server powered by it running on an 8088 that stood up to being linked on hackaday and several DoS attacks. It kept running for weeks, until I shut it off myself. :)

That specific idea (NTCPDRV compatibility) has not come up but the general topic of a TSR version of mTCP comes up fairly often. My response generally is "I'd love to consult and here is what you need to do to get started, but I'm out of bandwidth to do that myself."

Networking *should* be a service that is available to all applications, handled by the operating system. The TSR mechanism gives us a way to extend DOS to add services like networking. On one hand, shipping a full copy of the TCP library with every application is silly. On the other hand, a TSR version has its own problem - you need one set of code that fits all uses, which is not easy to do either. To make a TSR usable by a wide array of applications you have to have every feature and every buffer enabled, but a web or FTP server has different characteristics than an IRC client, so in a limited environment where tradeoffs are necessary, one size fits all might not make sense.
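To make the mechanics concrete: the conventional DOS approach is a software interrupt, the same way the packet driver specification works. The TSR hooks an interrupt vector and applications pass requests through registers. A caller-side sketch might look like this - the vector and function code here are invented for illustration, not a real mTCP interface:

    #include <stdio.h>
    #include <i86.h>    /* int86() and union REGS in Open Watcom */

    #define TCP_TSR_INT   0x61   /* hypothetical vector the TSR would hook  */
    #define TCP_FN_OPEN   0x01   /* hypothetical "open connection" function */

    int main( void ) {
      union REGS r;

      r.h.ah = TCP_FN_OPEN;      /* function code in AH      */
      r.w.bx = 80;               /* say, the remote TCP port */
      int86( TCP_TSR_INT, &r, &r );

      if ( r.w.cflag ) {
        printf( "TSR call failed, error %u\n", r.w.ax );
      } else {
        printf( "connection handle: %u\n", r.w.ax );
      }
      return 0;
    }

A real design would also need the application to probe for the TSR's signature before calling it, the way packet driver applications do.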

The TSR version of mTCP is a great example of an opportunity for others to contribute.


Great, glad you found it! Hopefully that's it. Looking forward to testing the fixed version.

I take your bug reports seriously. And then something weird happened to me too - I was not able to crash it by hammering it, but I was able to crash it by starting, sending a few requests, stopping, and reloading. After I was able to recreate it semi-reliably, hunting down the root cause was just a matter of divide and conquer.

I've added a lot of ugly "test" code designed to ensure the data structures are coherent, the parameters being passed are sane, and that my assumptions about how the internal APIs are being used are actually correct. The good news is that none of the consistency checks I added fired, so things are operating pretty much as expected. I'm going to add a "dirty memory before allocating objects" option to catch other instances of uninitialized variables and test with that first before letting another version out. The versions I let out will be "normal" (no consistency checks) to keep the performance reasonable.
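The "dirty memory" idea is simple: fill every freshly allocated object with a known garbage pattern instead of leaving whatever the previous user of that memory put there, so an uninitialized field blows up on the first test run instead of working by accident. Something along these lines (a sketch, not the actual mTCP code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Debug allocator: poison new storage with 0xCC so a field the code  */
    /* forgets to initialize holds an obviously bogus value rather than a */
    /* lucky leftover NULL.  Sketch only - not the actual mTCP code.      */
    void *dirtyAlloc( size_t len ) {
      void *p = malloc( len );
      if ( p != NULL ) {
        memset( p, 0xCC, len );
      }
      return p;
    }

    struct Session {
      char *statusPage;   /* suppose one code path forgets to set this */
    };

    int main( void ) {
      struct Session *s = (struct Session *)dirtyAlloc( sizeof( struct Session ) );
      if ( s == NULL ) return 1;

      /* statusPage is now 0xCCCC... instead of an accidental NULL, so a */
      /* downstream NULL check fails fast on the very first run.         */
      printf( "statusPage = %p\n", (void *)s->statusPage );

      free( s );
      return 0;
    }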

You can work around this particular bug by adding the "status" page to all of the directories. That's a pain in the rear, but it eliminates this particular uninitialized pointer.


Mike
 
As for version control, these days I use Git. And I use it as a tool to support my programming, not just as a way to store several (working) versions of software. Let's say I'm planning on adding a feature. First I may add a set of smaller functions. Then I may add a function or two which call the other functions to do something. Finally I add calls from the main program to activate the new feature.

I commit these different steps to Git. Let's say I start adding the smaller functions. As I have them ready I commit them. After that I start committing the function(s) which call them. And finally I commit the changes to the main program.

The point of working this way is that it records what I have been doing and shows me how I have been thinking. So when I get back to it the next day (not to mention the next week, if I have been working on something else, or, as in some cases, several years later) a quick 'git log' (which is something TOTALLY different from svn or cvs logging) will show me what I last did, and exactly what changed (git log -p), in the right sequence. Then I'm right on track again. It helps me think modularly. It's a very efficient way of working.
I can even throw in debugging printfs all over the code and _still_ commit while leaving out the debugging code. So when I have something working, or something worth keeping for reference, I commit - but what gets committed is only the actual code. The debugging printfs, work in progress or whatever can stay in all through the project and never get committed if I don't want them to.
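(The mechanism that makes this possible is interactive staging: 'git add -p' walks through the changed hunks one at a time and asks about each, so the printf hunks simply never get picked. Roughly like this - the file name and commit message are just made up for the example:

    $ git add -p server.c
      ...git shows the diff of the first hunk...
    Stage this hunk [y,n,q,a,d,s,e,?]? y      <- the real change
      ...git shows the diff of the next hunk...
    Stage this hunk [y,n,q,a,d,s,e,?]? n      <- a debugging printf, skipped
    $ git commit -m "Add retransmit backoff"

The exact prompt varies a bit between git versions.)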

It's a great way to work for me, and I would argue for any programmer who likes to think of development as a series of steps which leads to the desired outcome.

I've worked with version control systems back from SCCS and RCS through CVS and SVN (and been exposed to ClearCase and the horrors of what Microsoft provided in the past), and only Git can assist me in this way. Compared to Git the others are NOT version control systems, just glorified snapshot-backup systems. Hg (Mercurial) works in a similar way to Git, as I understand it. These tools are a different world; they're part of my programmer's arsenal of tools. I'll never go back to the old way.

-Tor
 
I thought I would have time to test this during the week - no luck yet :(
Don't have much to contribute to the debate that seems to have started in the thread, I'll just say keep up the good work Mike! Let us know if we can help you with testing in any way. The concept looks really promising!
 
It is running again. The links have changed slightly - just go to http://www.brutman.com/ and navigate from there.

(I wanted more realistic traffic for the test so I am redirecting traffic from my paid web hosting to the PCjr. It has been running for a few days with the only problem being a small power outage.)

This time around the PCjr is using a real ISA Ethernet card, so it should be a little faster. I've made some bug fixes too.
 
Current statistics (from http://67.185.176.54:8088/proc/Status):

Server Information

Server build date: Jun 15 2014
Machine type: PCjr
DOS version: 3.30
BIOS date: 06/01/83
Server started: Tue Jun 17 20:37:02 2014
Elapsed time: 205:17:39
Dir cache size: 16384, Free: 5392
Files cached: 11, Bytes: 147193
Free memory: 22928

HTTP Stats

Active connections: 1
Total connections: 7069
Requests served: 7707

TCP Level Stats

Packets sent: 122767
Packets received: 107172
Packets retransmitted: 712
Sequence or ACK errors: 3155
Dropped (no space): 23

Packet driver Level Stats

Packets sent: 126021
Packets received: 229331
Packets dropped: 0
Packet send errors: 0
Lowest number of rcv bufs avail: 11

200+ hours of continuous operation with 7000+ connections and 7700+ objects served; I think this version is probably usable. ;-)
 