The older a machine gets, the more likely it is that you just need to bite the bullet and roll a custom kernel. After you've done it a few times it honestly isn't that bad. If the machine is similar enough to others, you may even be able to transplant a userland from an existing system. Playing nice with package managers can be tricky, sure, but there's a lot more containerization in the Linux world than there used to be. You can also cross compile so that building stuff isn't bottlenecked by the slowness or age of the host.
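For a sense of scale, a kernel cross-build is only a few commands. A rough sketch, assuming a distro-packaged aarch64-linux-gnu- toolchain and mainline sources for an arm64 board (the right defconfig and install steps vary per board):

    # on the fast build box, inside the kernel source tree
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- menuconfig   # trim to just your hardware
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image modules dtbs
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_MOD_PATH=/mnt/target modules_install

Then copy the Image and dtb over however your bootloader expects.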
YMMV with desktop stuff, especially if you can't get accelerated graphics, but there are some quite minimal environments that don't suck down all the air, leaving more resources for your actual applications. In my case xserver+xdm+xsm+dwm make for a zippy experience on my RPi 400; granted, my whole system is built from scratch, so it's very minimal in all areas.
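The session glue for a stack like that is tiny. A minimal ~/.xsession along these lines is all dwm really needs (assuming your xdm's stock Xsession script execs ~/.xsession, which is the usual default; I've left xsm out of the sketch):

    #!/bin/sh
    # ~/.xsession - xdm hands the session to this at login
    xsetroot -solid grey &   # anything besides the default stipple
    exec dwm                 # the session lives as long as dwm does

Remember to chmod +x it, or xdm will fall back to whatever its failsafe session is.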
The MVP for a Linux system is the kernel (and firmware blobs), a C library, and userland utilities like BusyBox or the GNU tools. You'll want an init daemon capable of spawning multiple gettys to get VTs on the function keys, although you could also brute force it from a single tty by launching additional shells with their terminals redirected to tty2, tty3, or whatever alias the fbcon terminals are under in /dev; I've done this in single user mode before (sketch below).

As for a build system: libc, libstdc++, binutils/gcc or clang, and make get you most of the way there. Word on the street is Linux is starting to rust, but rustc isn't a hard requirement yet; the in-kernel Rust support is still behind an optional config. Which C library you use is important, since it informs the last part of your target triplet (linux-gnu vs linux-musl vs others).
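To make the getty part concrete: with busybox init it's a few inittab lines, and the brute-force version is just a redirect. A sketch, assuming busybox init syntax and a getty living at /sbin/getty (adjust paths and baud to your build):

    # /etc/inittab, busybox init syntax: <tty>::<action>:<process>
    tty1::respawn:/sbin/getty 38400 tty1
    tty2::respawn:/sbin/getty 38400 tty2
    tty3::respawn:/sbin/getty 38400 tty3

    # or the single-tty brute force, from whatever shell you do have:
    setsid sh </dev/tty2 >/dev/tty2 2>&1 &

The setsid version won't give you a proper controlling terminal, so job control will complain, but you do get a live shell on the other VT.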
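And on the triplet point, the libc choice is easiest to see with two cross compilers side by side. Hypothetical toolchain names below (musl cross toolchains are often named in the musl.cc style):

    # same source, two triplets; musl makes static binaries painless
    aarch64-linux-gnu-gcc  hello.c -o hello-glibc
    aarch64-linux-musl-gcc -static hello.c -o hello-musl

The static musl binary will run on the target with no libc installed at all, which is handy while you're still bootstrapping the userland.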
Finally, FreeBSD or NetBSD may be options, although there too you might want to consider recompiling a custom kernel more tightly coupled to the specific piece of hardware. Forgo OpenBSD; all the extra security stuff adds overhead that isn't justified in your case.
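The FreeBSD version of that dance is well documented. Roughly, with /usr/src checked out on amd64 (MYKERNEL is a placeholder name, and the conf directory depends on your arch):

    cd /usr/src/sys/amd64/conf
    cp GENERIC MYKERNEL        # then strip out drivers you don't have
    cd /usr/src
    make buildkernel KERNCONF=MYKERNEL
    make installkernel KERNCONF=MYKERNEL

NetBSD's config(1) flow is similar in spirit.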