
Utility Linux Distribution

I'm doing more and more research into Linux distros myself. So far I've mostly been installing on P4 laptops that had XP on them, but I do have PII and PIII laptops that are running Win98 or other old Windows installs. For my 32 bit machines I've mostly been using Zorin 15.3 (32 bit) to let them browse the internet and remain useful. Most might be able to run 32 bit Win 7 if you could find device drivers, but sometimes that's a long process. I have a small collection of older Linux distros that are at least 14 - 15 years old, including Freespire, SUSE 10.1, BeleniX 0.4, Kubuntu 6.06, and maybe some others that I might try on the PIIIs. Zorin 15.3 runs fairly well if you have 2GB of RAM. It's FOSS has a write-up on 32 bit distros with memory requirements etc.: https://itsfoss.com/32-bit-linux-distributions/
 
Can anyone recommend a good modern(ish) Linux distribution for Pentium II class machines?

By utility I mean wide hardware support and being able to image ancient SCSI disks, mount a variety of filesystems, check out various PCI and ISA cards, etc.
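As a rough illustration of the imaging half of that, here's a sketch with plain dd; the scratch file below stands in for a real device node like /dev/sdX (whose actual name you'd get from dmesg or lsscsi):

```shell
# Create a scratch file standing in for the raw disk device (/dev/sdX in real use):
dd if=/dev/zero of=fake_disk.bin bs=1M count=4
# Image it; conv=noerror,sync keeps going past read errors and pads bad
# blocks with zeros. For badly failing disks, GNU ddrescue is a better tool.
dd if=fake_disk.bin of=disk.img bs=64k conv=noerror,sync
cmp fake_disk.bin disk.img && echo "image matches source"
# The image can then be inspected with a read-only loop mount (needs root):
#   mount -o loop,ro disk.img /mnt
```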

By modern I mean features like python more recent than 1.x, etc.

My Pentium III 450MHz/256MB RAM machine is my fastest system that still has a full complement of legacy ports, and I find these kinds of activities much easier in Linux than in Windows 2000 or XP. I tried a version of Puppy Linux based on Xenial and hit a panic on boot because it does not understand my Adaptec 2940...

thanks
mike
I have found the best modern distro for old Pentium II and III class systems to be antiX 32-bit. It is based on Debian and is fully supported by a small staff of developers and users. There is a connection with MX Linux, which is more targeted at modern machines. I recommend antiX. I use it on equipment with as little as 192MB RAM and with a GUI (IceWM), although they support non-GUI setups as well, or full desktop environments like XFCE.

I have used the latest Debian versions and kernels but you can use earlier versions and change the kernel to suit your hardware. You can "spin" your own setup and easily save it using their snapshot and Live systems. Very comprehensive. But I usually just load the standard FULL 32-bit distro and it works. I've had a couple of issues with video drivers but they have been solved with kernel mods.

For old computers antiX and Debian are the way to go. You can try Debian directly, but unless you are fairly geeky with Linux you will find antiX easier to set up and manage. The forum is very helpful and I am active there (as Seaken64).

Seaken
 
I was going to start a new thread, but poking around I found this one that's at least mostly about the same question, so I figured I'd just perform a little thread necromancy...

(Part of my thought process here is maybe, since the huge argument about what "UNIX" even is has already happened, we could avoid falling into that pothole again. Of course, this could just turn into another one of those "Linux isn't Windows so it sucks" arguments, but... maybe we can avoid that too? Maaaybe?)

The OP of this thread was asking for modern Linux distributions suitable for (by modern standards) extremely low spec computers like a Pentium II. (Although at this point this question would probably apply to just about any IA-32-only CPU.) I have a somewhat-related follow-up, which is: does anyone have a favorite no-fuss Linux for low-spec x86-64 computers? The specific criterion I'm looking for is a distribution that's optimized for disk space while still offering a GUI windowing environment with a modern web browser.

The target I have for this is a hacked Chromebook that only has 14GB of internal eMMC memory. (I have two hacked Lenovo N42s; one has 30GB, the other has 14. 30GB isn't *great*, but it's enough for most "normal" Linux distributions; 14GB, it turns out, is *really* tight.) The thing I keep running into is that many of the *really small* GUI Linux setups, like Puppy Linux, are specifically set up to be live USB sticks and aren't meant for permanent installation. (I did in fact go through the motions of getting Puppy "installed" on the internal flash to see if that was workable, but it still operates in live mode, and, worse, its "save session" functionality didn't work when I booted it internally.) I know I *could* just start with bare Debian or the like and slowly build up to a working desktop, but I gotta be frank, who's got the time for that? What I'm hoping for is that someone's already done the work of tuning up a minimal-but-workable-for-basic-web-and-utility-tasks mix of packages that I can just slap on there, one that will work while still reserving enough space for upgrades and some small amount of local scratch files. (I had Lubuntu on the machine originally after hacking it, and it ran out of disk space in the middle of an apt-get dist-upgrade. Ouch.)

So far the best off-the-shelf thing I've found is BunsenLabs Boron; it has kind of a weird interface, but it actually kind of works on the Chromebook (it makes heavy use of the "Super" key, which on the Chromebook maps to a dedicated "Search" key where a normal keyboard has Caps Lock). Installing it was a hassle because of bugs that I was probably asking for, but I did get it to work, and it occupies about 5.5GB of space. That's better than Lubuntu by about 4GB, but... does anyone have a favorite light GUI that beats that? While, again, being a "normal" installation, not a compressed initrd/ramdisk liveCD-optimized thing.

(And yes, I know, I should be considering BSDs and other non-Linux things, but... I gotta be frank, I'm extremely skeptical the BSDs will be able to handle these Chromebook-focused SoCs gracefully. Linux *barely* does. But, hey, if you've actually run xBSD on a Braswell platform system, speak up; I have two of these things and I'm willing to experiment.)

This is all kind of depressing considering how we used to be able to happily run Slackware in a couple hundred megabytes (or a *lot* less, actually) with Netscape or Mosaic back in the 1990s, but time marches on.
 
Low spec Linux should target what's in the current e-waste stream, not some ancient collectible Pentium II. Anything that isn't x64 should be forgotten by new desktop OSes.

I got work done back in the day using DOS and 1MB of RAM and was happy with it (there is something to be said about a single-tasking OS that keeps you from wandering off to other apps that pop up and bug you for attention).
 
Anything that isn't x64 should be forgotten by new desktop OSes.

No. There is still far too much 32 bit software out there to just dump 32 bit entirely. Requiring 64 bit for the sake of 64 bit is nonsense. A text editor doesn't need to be 64 bit, and neither does a paint program, or the vast majority of things that don't require much memory to run. It just contributes to software bloat and artificial obsolescence. We don't need more of Apple's abhorrent behavior in the market. If you want a list of the worst e-waste offenders, they'd be public enemy number one, followed by Microsoft.

If you want to use the argument that low spec Linux should still support e-waste, which would be stuff from the Core 2 to second-gen Core i series, using only 64 bit software on them is just painful. While they are technically 64 bit CPUs (most of them, anyway), modern 64 bit software is a whole lot slower on them due to CPU vulnerability mitigations. Spectre/Meltdown alone can cause a 50-70% performance hit in some situations, worse if HT is enabled.

Intel knew this even back when they were shipping the first Atom processors that supported 64 bit. They were so slow in 64 bit mode that Intel shipped them with 32 bit EFI firmware. The Pentium 4 in 64 bit mode had similar problems due to the cache ordering not being arranged properly.
 
Last I checked 32-bit software runs on 64-bit OS, it's just 16-bit software you have a problem with on x64.

The only Core 2 stuff in e-waste now is Apple iMacs (people hang on to Apple AIO gear longer for some reason); all the PC stuff of that generation is mostly recycled by now.

Never seen anyone quote over 30% loss in speed as the worst case for Spectre/Meltdown.

Most people these days have dumped anything that can't boot NVMe and that would be 4th gen Intel or older. Early Atom laptops were slow when they were new, plus they had very limited RAM upgrade paths, so no idea why a modern OS of any kind would want to support those systems.
 
I have a somewhat-related follow-up, which is: does anyone have a favorite no-fuss Linux for low-spec x86-64 computers? The specific criterion I'm looking for is a distribution that's optimized for disk space while still offering a GUI windowing environment with a modern web browser.
Take a look at Knoppix. Designed as a CD/DVD live environment, it uses squashfs plus overlayfs to fit as much data as possible on the disc (~10 GB of packages on a 4.7 GB DVD). It's based on Debian, but it's largely a one-man show, so releases are infrequent. The overlay can be persisted. A secondary focus of Knoppix is making Linux available to blind people, originally for the main developer's wife.

It's mostly a 32-bit distribution, but contains a 64-bit kernel (32-bit binaries are smaller).

I know I *could* just start with bare Debian or the like and slowly build up to a working desktop, but I gotta be frank, who's got the time for that?
I had a mini notebook with only 2 GB IDE flash memory (both too small and way too slow). To make it more useful, I placed /usr on a USB drive and, after installing all packages I wanted, I turned that into a squashfs file. Then systemd removed support for a separate /usr partition... but it worked while it lasted.

The btrfs file system supports transparent compression. On a 2 GB drive and a decade ago, it didn't work well, but today and on a 10+ GB drive it might be fine. Going that route allows you to use whatever distribution you want.
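For reference, that compression is just a mount option; a sketch of an /etc/fstab entry (the UUID is a placeholder), assuming the filesystem is already btrfs:

```shell
# /etc/fstab: enable transparent zstd compression on a btrfs filesystem
UUID=1234-abcd-5678  /  btrfs  defaults,compress=zstd:3  0  1
# Existing files are only compressed when rewritten; to compress in place:
#   sudo btrfs filesystem defragment -r -czstd /
```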

No. There is still far too much 32 bit software out there to just dump 32 bit entirely. Requiring 64 bit just for the sake of 64 bit is just nonsense.
It's a bit too late for that. While there's still a lot of 32-bit software out there (and most of it will never need more than that), an increasing amount of software requires a 64-bit system. Browsers are the obvious case (including all the "ships a browser and pretends to be an application" cases), but many large packages can no longer be compiled on 32-bit systems. According to the OpenBSD people, who are very annoyed, that includes the Rust ecosystem.

In other words, we've already reached the point where 32-bit systems have become embedded systems - no longer capable enough for their own development, but still useful to run software in a limited context.

To take the point further: Linux fully supports 32-bit RISC-V, and so do embedded distributions. But to my knowledge, no general Linux distribution (not even Debian) will target it. They all aim exclusively at the 64-bit variant.
 
Wow, I’m impressed how quickly and directly this went off the rails. N/M?

64 bit bad!!! Performance!!! Spectre!!!

Yep, lowest end CPU in Intel’s 2016 lineup is kinda slow. There is nonetheless a world of difference between it and circa 2008 Diamondville… but whatever. Everything is bad and sucks.
 
I had a mini notebook with only 2 GB IDE flash memory (both too small and way too slow). To make it more useful, I placed /usr on a USB drive and, after installing all packages I wanted, I turned that into a squashfs file. Then systemd removed support for a separate /usr partition... but it worked while it lasted.

The real solution to the problem, I suppose, is just putting a big SD card into the Lenovo’s external slot and putting a piece of tape over it. It’s not like it’d be slower than the internal eMMC, it’s literally the same bus.

I dunno, I just thought it might be interesting to see what was possible with the internal space. I could just turn it back into a Chromebook by dropping an unofficial Chromium build on it, but I’m starting to recall why not putting all the eggs in the Googlebasket might not be a bad idea.

Also, a slight logistical issue: Lenovo only made the SD slot about half an inch deep, so a normal card sticks waaaaay out. I did buy a “flush mount” Micro-SD converter intended to not stick out of MacBooks that also have shallow slots, but their slots are still twice as deep as the N42’s.
 
Last I checked 32-bit software runs on 64-bit OS, it's just 16-bit software you have a problem with on x64.

Apple dropped all support for 32 bit software a while ago. Canonical tried to ram that through as well, but users revolted and got them to delay it, though not forever. Windows 10 S doesn't support 32 bit software.

16 bit software not working is mostly an artificial limitation by Microsoft. Someone recompiled a leaked copy of the NTVDM source for 64 bit, which allows 16 bit software to run on 64 bit versions of Windows. Of course the same limitations apply, like no direct access to hardware.

The only core2 stuff in e-waste now is Apple iMacs (people hang on to apple AIO gear longer for some reason), all the PC stuff of that generation is mostly recycled by now.

Yeah, no. There is still a ton of gear out there from the Core 2 era in PCs. I find them in second hand stores all the time, and local computer shops have pallets full of them. Those things will be around mainstream until Windows 10 is EOL, and looooong after that. Dell by themselves sold millions and millions of those machines.

Never seen anyone quote over 30% loss in speed as the worst case for Spectre/Meltdown.


The bigger problem is the cumulative compounding nature of the patches. You just keep adding more and more performance loss over time that encompasses an ever greater number of instructions, and makes it more likely you'll be impacted by the performance loss.

Most people these days have dumped anything that can't boot NVMe and that would be 4th gen Intel or older.

Rather strange to hear this coming from someone on a *vintage* computer forum lol. Let's all go throw out our 386s and PDP-11s because they can't boot from PCIe storage.

Early Atom laptops were slow when they were new, plus they had very limited RAM upgrade paths, so no idea why a modern OS of any kind would want to support those systems.
 
I would bet I have more vintage gear than a lot of people here, but I have newer gear to get work done and play newer games.

Heck, I do have a few old boxed Linux distros for old gear just to see how it worked.
 
16 bit software not working is mostly an artificial limitation by Microsoft

… did you read the comments in the GitHub repo about how NTVDM64 works? It leverages CPU emulator code from Insignia SoftWindows that Microsoft licensed for use with the non-x86 versions of NT.

Switching the CPU to “long mode” disables VM86 mode on all x86-64 CPUs. Full stop. NTVDM64 is an emulator with deep OS integration (roughly akin to something like Rosetta on Macs) but it’s just as much an emulator as DOSbox.

The bigger problem is the cumulative compounding nature of the patches. You just keep adding more and more performance loss over time that encompasses an ever greater number of instructions, and makes it more likely you'll be impacted by the performance loss.

You are aware of the fact that all these vulnerabilities apply to affected CPUs in 32 bit mode as well, and mitigation was also implemented, where possible, in the 32 bit kernels?

That Intel screwed the p**ch designing multiple generations of CPUs is not a problem unique to x86-64. (Also, fwiw, some of these vulnerabilities even affect non-x86 CPUs, like Power and some ARM variants.) I’m not sure what the point you’re trying to make here even is. Buggy CPUs have been a thing forever; the “f00f” bug renders nearly all P5 Pentiums untrustworthy without some cycle-wasting mitigations…
 
… did you read the comments in the GitHub repo about how NTVDM64 works? It leverages CPU emulator code from Insignia SoftWindows that Microsoft licensed for use with the non-x86 versions of NT.

Switching the CPU to “long mode” disables VM86 mode on all x86-64 CPUs. Full stop. NTVDM64 is an emulator with deep OS integration (roughly akin to something like Rosetta on Macs) but it’s just as much an emulator as DOSbox.

I never said it wasn't emulation. But the fact that it is mostly MS/Insignia code from NT4, likely used in every NTVDM instance up until Windows 10, means that MS was just lazy/didn't care enough to have it on x64. Which is weird, considering they have an x86 emulator baked into Windows on ARM, and have since Windows RT. And the whole reason for that code to exist in the first place was to be cross-platform. Really a shame that we never got a PowerPC Windows 2000/XP.

Rosetta is far more deeply embedded than just the OS level. The custom ARM SoCs that Apple uses are designed to accelerate x86 emulation to make it faster. Microsoft has been trying to play catchup with their own ARM chips in their tablets.

You are aware of the fact that all these vulnerabilities apply to affected CPUs in 32 bit mode as well, and mitigation was also implemented, where possible, in the 32 bit kernels?

Even with the mitigations, 32 bit code runs faster and uses less space than 64 bit code on those weak processors. If you have a system that maxes out at 1, 2 or 4 GB of RAM, there's no point in running 64 bit code on them.

I’m not sure what the point you’re trying to make here even is. Buggy CPUs have been a thing forever; the “f00f” bug renders nearly all P5 Pentiums untrustworthy without some cycle-wasting mitigations…

It's a rather slippery slope as to whether vulnerabilities like Meltdown are bugs; they're more oversights than anything. The flaw existed for literal decades before it was discovered by accident. F00F, on the other hand, is a bug, because it locks the machine up.
 
The real solution to the problem, I suppose, is just putting a big SD card into the Lenovo’s external slot and putting a piece of tape over it.
Either that, use compression, or replace the internal flash. What else did you expect? :)

On the mentioned machine with 2 GB flash, I ended up installing Windows XP on a 4 GB SD card. Graphics driver support for the VIA-based GPU is better, and the OS copes better with the tiny 800x480 resolution.

Standard Linux distributions, like everything else, simply have grown a lot. Small alternatives (both in time and space) do exist, but you have to invest time to make use of them. Replacing the majority of core utilities with BusyBox or Toybox is feasible, and embedded-focused projects can get you quite far. But not ready-made and out of the box, no.

16 bit software not working is mostly an artificial limitation by Microsoft.
Support for 16-bit software in 64-bit mode does not exist. That does not prevent emulating a fully foreign architecture, and in 64-bit mode, x86-16 is one. Microsoft also never shipped an ARM, MIPS, PPC, or Alpha emulator for x86 Windows. They could have, but is that a mostly artificial limitation, too?

It's a rather slippery slope to whether vulnerabilities like meltdown are bugs, they're more oversights than anything.
Who cares about security vulnerabilities on vintage machines anyway? Just disable the mitigations if you're not running a server and get your performance back. Debating whether Spectre/Meltdown are different from F00F does not help - both would have prevented you from exposing the machine on the current internet. Live with the problem or deal with the cost of mitigation/replacement.

Wow, I’m impressed how quickly and directly this went off the rails. N/M?
Guess I will be the only one who even tries to give an answer to the question.
 
...does anyone have a favorite no-fuss Linux for low-spec x86-64 computers? The specific criterion I'm looking for is a distribution that's optimized for disk space while still offering a GUI windowing environment with a modern web browser.

The target I have for this is a hacked Chromebook that only has 14GB of internal eMMC memory.
That should be fine for a standard Debian install; last I checked, that eats around 4 GB of disk space (at least for the XFCE version). But maybe less; this page says the installer takes the environment into consideration and can require as little as 485 MB RAM and 1160 MB disk space. At any rate, even with the standard installation, you can easily trim that down to under 2 GB if, after the install, you go through and delete the piles of crap it adds, like OpenOffice.
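Either way, it helps to see where the space is actually going before deciding what to purge. A quick sketch (the package names in the comments are examples, not a vetted list):

```shell
# Read-only survey of which top-level directories eat the most space:
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head
# Then purge what you don't need, e.g. (check with `dpkg -l` first):
#   sudo apt-get purge --autoremove 'libreoffice*'
#   sudo apt-get clean    # also drops the cached .deb files
```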

I just brought up a debian:12 Docker container which claims it's using about 350 MB of "virtual" space (i.e., including all the images on which it depends). An apt-get install xfce4 firefox-esr says it will use 1,158 MB of additional disk space. So it looks like the minimum practical disk size for your application (which I presume needs a good, modern web browser), without switching to special tools specifically designed to keep things small, is about 2 GB.

Another way of trying to keep the install small, which I've not tried, is to install without a connection using the CD image. At only 700 MB (albeit no doubt compressed), it places a pretty hard limit on how much it can shovel on to your disk.

The next step beyond that would probably be to do a custom configuration of the installer for a particular set of packages. This is known as "preseeding", and lets you choose exactly what packages will be installed.
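If it helps, a preseed fragment along those lines might look like this (a sketch; the package list is illustrative, not a tested minimal set):

```shell
# preseed.cfg excerpt: install only the "standard" task, then name the
# desktop packages explicitly instead of pulling in a full desktop task.
tasksel tasksel/first multiselect standard
d-i pkgsel/include string xfce4 xfce4-terminal lightdm firefox-esr
d-i pkgsel/upgrade select safe-upgrade
```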
 
Btw I can't believe discussion went into 16-bit. I guess it couldn't be any different on this forum :)

What would be the point of forcing a VM86-in-long-mode 'feature'? Take a look at a 2006 mainboard: how many peripherals can you address from real mode?
I did not follow the entire discussion, but if it was implied that somebody did this with malicious intent, I don't know about that... features are made when they're needed, not because it would be cool to have them. Running 16-bit games 'natively' under a 64-bit OS was absolutely not a design requirement.

Now, I'm not saying it couldn't be done; it's just that largely no one wanted it.
 
Even with the mitigations, 32 bit code runs faster and uses less space than 64 bit code on those weak processors. If you have a system that maxes out at 1, 2 or 4 GB of RAM, there's no point in running 64 bit code on them.

There is *some* legitimacy to the argument that 32 bit binaries are more compact than x86-64, but there is essentially zero support for a claim that 32 bit is universally faster. Maybe you can cherry pick a few examples if you search hard enough (here, I’ll help you: many Intel CPUs before Ice Lake have an anomalously slow 64 bit divide instruction), but in general x86-64 has more registers and other optimizations that make it easier for compilers to generate fast code, and in most cases you *gain* a few percent targeting 64 bit.

Anyway, sorry, this ship has sailed. If you want to run a mainstream web browser on Linux today you are essentially screwed on 32 bit.

… and, whatever, this wasn’t what I was asking about anyway. I have seen how fast this machine runs and deemed it acceptable, so I don’t care. (Also, the version of Puppy Linux I booted on it was running a 32 bit version of Firefox ESR, and, no, it was not faster than regular Firefox. Full stop. It was very much worse.)
 