
When did the 8-bit era end?

Depending on who you ask (many CoCo enthusiasts spring to mind), the 8-bit era is still going strong. I would say that what most folks refer to as the 8-bit era began winding down as 80286- (or maybe 80386-) and 68000-based home computers began displacing them.
 
...not only that, but even when the 386 hit the PC market, most people kept using 16-bit software for a fair number of years (I remember a disgruntled Intel engineer remarking that the world opted to stay on DOS, based on an 8-bit OS, long after the 386 was introduced).
 
DESQview made sticking with MS-DOS all worthwhile. :)
Seriously, I used to be the sysop of a four-node BBS for one of the local county bar associations, and it all ran on a 486 with DESQview and PCBoard. It was a real smooth ride, too. This was in 1995-1996.
 
Is 64-bit the inflection point for performance gain by most consumer applications? I know of very few applications requiring 64-bit and delivering substantial performance over 32-bit versions of the same application.
 
Is 64-bit the inflection point for performance gain by most consumer applications? I know of very few applications requiring 64-bit and delivering substantial performance over 32-bit versions of the same application.

Agreed. Individual threads generally don't run faster on 64-bit CPUs than they did on 32-bit CPUs. But many server systems out there benefit from the ability to easily and directly address more than 4 GiB of RAM for disk/database caching while running many threads and large workloads. Putting 32 GiB of RAM in a box is cheap compared to some of the alternatives. Filesystems like ZFS can take advantage of all that RAM.

EDIT: I have no problem with this. I remember when I made the transition from shipping 16-bit PDP-11/RSX systems to 32-bit 80386/UNIX systems (there were a few VAX/VMS systems in there, too). The systems ran at about the same speed, even though the 386 CPU was a lot faster (the disks were not), but the increased memory addressing capabilities and lower hardware costs made it easier to get systems out the door, and probably more profitable.
 
In 95-96 I was running a 25 MHz 80386 desktop machine and using DESQview to its fullest extent. I was on the other end of the lines: I was constantly logged into two BBSs while doing other things. A guy owed me money and couldn't pay, so he gave me a broken A500. Fixing that and discovering its power was the only thing that got me off DESQview.
 
Is 64-bit the inflection point for performance gain by most consumer applications? I know of very few applications requiring 64-bit and delivering substantial performance over 32-bit versions of the same application.

I think the problem is that x86 has had 64-bit data buses since the Pentium, so the biggest performance gains of going 64-bit were already cashed in during the 32-bit era.
 
Back in the day, Sun used to warn developers that 64-bit code could in many cases be slower than 32-bit on the same UltraSPARC CPU; as I recall, the difference was in large part blamed on memory bandwidth. (For instance, if your standard ints are 32 bits instead of 64 bits, the 64-bit bus on an UltraSPARC CPU can fetch two in one bus cycle instead of one.) So whether 64-bit is faster than 32-bit is the sort of question that's going to depend a lot on the specific CPU implementations... and probably compilers, for that matter. x86-64 has more registers than i386; for code that can make good use of them, that might make a big difference in favor of the 64-bit implementation, for reasons only peripherally related to the increase in word size, while code that's more memory-bound isn't going to be helped at all.
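A minimal sketch of that bandwidth point (the function names here are made up; only standard <cstdint> types are assumed): summing the same number of 32-bit values moves half as many bytes over the bus as summing 64-bit ones, so a bandwidth-bound loop tends to favor the narrower type regardless of register width.

Code:
#include <cstdint>
#include <cstddef>

// 4 bytes fetched per element.
int64_t sum32(const int32_t* v, std::size_t n) {
    int64_t s = 0;
    for (std::size_t i = 0; i < n; ++i) s += v[i];
    return s;
}

// 8 bytes fetched per element: twice the memory traffic for the same count,
// which is what hurts when the loop is bandwidth-bound rather than ALU-bound.
int64_t sum64(const int64_t* v, std::size_t n) {
    int64_t s = 0;
    for (std::size_t i = 0; i < n; ++i) s += v[i];
    return s;
}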
 
I think that still applies on x86. I don't seem to experience any palpable improvement in speed in x64 Linux code over x32 code. I've considered going back to the x32 versions--fewer compatibility issues.
 
I suppose the supply of archaea like me, still happily using an 8-bit programmable calculator (circa 1986), will gradually diminish. Since I know where most of the bodies are buried on my device, there's not much incentive to upgrade to a theoretically more capable platform requiring hundreds (thousands?) of hours of hacking to be as useful.
 
I've been using a 32S and 32SII continuously since 24 Apr 1988. I had used other people's 67s, 41s, and 28s before that but always thought I couldn't afford one. Watching the prices climb, I recently changed my mind and bought a 28C and a 41CV. (I also mistakenly bought a 35S :( )
 
One of the things that I like about my 48GX is the availability of emulators for it. If I don't happen to have my real 48GX handy, I just use the emulated one on my iPhone! I still prefer real buttons over a touch screen, of course, so I do use my real calculator when it's nearby.
 
I think that still applies on x86.

Yes it does. If you aren't careful, the extra bandwidth requirements will bite you (e.g., many game engines... what a culture shock... in the old days, game programmers used to be the cleverest, most skilled assembly programmers around; these days they barely know their way around a C++ compiler, and don't seem to understand much about the hardware).

For my graphics engine, I moved to 64-bit relatively early in the process (around 2006 I believe, when I got my first 64-bit machine with XP x64), and found that especially recursive routines took quite a hit. The cause was that every function parameter was now 64-bit instead of 32-bit on the stack.
So instead of using function parameters, I resorted to other methods of passing data where possible: e.g., pass values in struct/class members so you only need to pass one pointer, or store data in member variables when it doesn't change every call.
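A rough sketch of what that looks like (the names here are hypothetical, not from my engine): the invariant arguments go into a context struct, and only one pointer travels down the recursion instead of several 64-bit parameters per call.

Code:
// Hypothetical example: hoist the per-call arguments that never change into
// a context object, so each recursive call pushes one pointer plus the few
// values that actually vary.
struct TraceContext {
    const float* vertices;   // invariant across the recursion
    const int*   indices;    // invariant across the recursion
    int          maxDepth;   // invariant across the recursion
};

void traceNode(const TraceContext* ctx, int nodeIndex, int depth) {
    if (depth >= ctx->maxDepth) return;
    // ... do the per-node work using ctx->vertices / ctx->indices ...
    traceNode(ctx, 2 * nodeIndex + 1, depth + 1);   // left child
    traceNode(ctx, 2 * nodeIndex + 2, depth + 1);   // right child
}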

In the end I did get it to be faster than the 32-bit x86 build, although the difference was never large. But when I first tried the 64-bit build of Half-Life 2, it was actually considerably slower, something like 30%. It was only faster at loading levels, perhaps because it could just use more memory; the actual game logic was much slower, so lower framerates. And loading was never a problem in the 32-bit build anyway.
They eventually pulled the 64-bit build; I guess they just didn't have the skill to optimize it.
Crytek on the other hand had a nicely optimized 64-bit engine for Far Cry and for Crysis.
 
One thing that I remember from my work with vector architectures 40 years ago, was that needless precision is actually detrimental to performance. On the Cyber 200 series, the multiply pipeline could pump out 128 bits every cycle. This meant either 2 64-bit words or 4 32-bit words. Using 32-bit precision, we were able to hit the magic (for that time) 100 Mflops number on benchmarks, but not with the 64 bit precision version.

Granted, this was a vector-oriented benchmark, but it did drive home an issue: that defaulting to an overly-large result size can have detrimental effects. In particular, defaulting to a 64-bit int may not be good, all things considered. So going from 32 to 64 bits optimally may require quite a bit of tinkering. This is just a guess, mind you.

I think that the big fault with the x86 architecture is the small register file size; that, and the single condition code register would seem to stand in the way of some really creative code optimization.
 
that defaulting to an overly-large result size can have detrimental effects.

Yes, exactly.
I hate it when people throw double precision floating point at everything. Most of the time you can get the same calculations working fine with single precision, if you just get the order of operations right, and make sure your code is robust (do comparisons with sensible epsilon values and such).
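For example (just an illustrative sketch, not anyone's production code), a comparison with a combined absolute/relative epsilon is usually all it takes to make single precision behave:

Code:
#include <cmath>
#include <algorithm>

// Compare two floats with a tolerance instead of ==, so the rounding error
// inherent to single precision doesn't flip the logic.
bool nearlyEqual(float a, float b, float relEps = 1e-5f, float absEps = 1e-8f)
{
    const float diff = std::fabs(a - b);
    // absEps handles values near zero; relEps scales with the magnitudes.
    return diff <= std::max(absEps, relEps * std::max(std::fabs(a), std::fabs(b)));
}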

In GPUs this is still a big thing these days. nVidia actually opted to go for significantly less double-precision performance in their latest GM200 GPU:
http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/2
So they basically optimized the hardware design for single-precision calculations. It is capable of double-precision calculations, but only at a fraction of the speed of single precision: 1/32nd.
So only use double-precision when you really REALLY need it.

Since earlier designs had a ratio of 1/3 or even 1/2, it seems like nVidia deliberately went this way, because the need for full double-precision isn't that big in practice.
It hurts them in synthetic benchmarks though.

I think that the big fault with the x86 architecture is the small register file size; that, and the single condition code register would seem to stand in the way of some really creative code optimization.

Yes, also the two-operand model. You always overwrite one of your two operands with every instruction, so you often need a second instruction to preserve a copy in a register or in memory.
A three-operand model and more registers would probably bring down the total number of operations required for a lot of routines, and also reduce the total number of bytes of code.
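A tiny illustration (the instruction sequences in the comments are typical, not actual compiler output): to compute a sum while keeping both inputs live, two-operand x86 needs an extra copy, whereas a three-operand encoding does it in one instruction.

Code:
// c = a + b, with a and b still needed afterwards.
//
// Two-operand x86:                 Three-operand ISA (most RISCs, or
//   mov  ecx, eax   ; copy a         x86's VEX-encoded vector forms):
//   add  ecx, ebx   ; c = a + b        add r3, r1, r2   ; a, b left intact
//
// The mov exists only because an x86 add must overwrite one of its sources.
int addKeep(int a, int b)
{
    int c = a + b;       // both inputs must survive this
    return c + a * b;    // ...because they are used again here
}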
 
I suppose the supply of archaea like me, still happily using an 8-bit programmable calculator (circa 1986), will gradually diminish.....

Well, there is a whole generation currently using 8-bit programmable calculators, and paying a premium for them. The TI-82, 83, 83 Plus, 84, 84 Plus, and 85 all use Zilog Z80s and have been required purchases for some high-school curricula for a number of years. TI does RSA crypto (signed binaries) on those Z80s in addition to all the math functions. It is quite normal for students to write programs in Z80 assembler (with some very nice macro definitions available in the SDK) for these calculators. There is a good comparison table at https://en.wikipedia.org/wiki/Comparison_of_Texas_Instruments_graphing_calculators and it is really interesting that a mainstream graphing calculator was released as recently as 2013 with a Z80 as its CPU. EDIT: There is an even newer one for 2015.....

So I would contend that the 8-bit era is still going strong, if you can accept that a 'graphing calculator' qualifies as a 'personal computer.' (The TI-84 Plus C Silver Edition with a 15MHz Z80 was released in 2013; see https://en.wikipedia.org/wiki/TI-84_Plus_C_Silver_Edition and compare its specs to those of 'personal computers.') It must be reiterated: these are not vintage computers, but mainstream ones that are found by the millions and still being sold at retail.
 
The advent of monkey programmers and multi-level (unoptimized) frameworks never really allowed 64-bit code to be as fast as it potentially could be...

Add to that that even in the most optimized C++, the individual pushes and pops on the stack for a subroutine call with class parameters can involve many 64/32-bit reads and writes...

Not to mention that Java and JVMs, and that whole obfuscation mess, put real overhead on most "64-bit" programs built with or around those platforms.

On the more specific question of when 8-bit died, my vote: it'd have to be when the last Acorn, Apple //, or 8-bit Commodore was sold...

Since much of a modern program (especially its I/O) is still 8-bit (maybe some 16-bit) x86-type code, one could argue that, in some very specialized way, the 8-bit world never truly left us...
 