
So how many cores are enough these days?

On the subject of running Virtual Machines I'd have to say honestly that if you gave me the choice between "twice as many cores" and "hardware striped RAID controller" I'd totally pick the RAID controller. Yes, having tons of cores to throw at VMs is nice if your load is compute limited (which is actually pretty rare), but I can speak from experience that if you have a huge flaming 12-core desktop workstation tied to a single spinning SATA platter (@#$#ing corporate hardware purchasers) the whole computer is going to get thrown under the bus if one of your VMs starts banging on the disk hard enough. To my mind "how many cores is enough" is sort of a non sequitur; the answer is "it totally depends what you're doing". There's plenty of circumstances in which trading off some cores for an upgrade somewhere else would give you a better experience.
 
Just loaded this box up with a number of google and ebay tabs using the latest firefox and cpu usage didn't hit more than 5%. I must be doing something wrong.
E-bay usually isn't a problem -- but when you say Google, which parts of Google? Their search pages, while now slow-loading and reeking of developer ineptitude, usually aren't a problem -- but open up two or three tabs on Google Plus, a Gmail tab, a couple of Google crApps, toss in Facebook and Twitter for good measure, then leave those open for a few hours... and watch the RAM usage spike up to around a gig and a half, at which point the CPU usage starts to climb... almost like their scripting engine has a memory leak. Of course, being Mozilla, they'll claim it's not a leak, it's a feature -- just like they did roughly a decade ago.

But really, on those sites I just mentioned, the real problem is a nasty case of "JS for nothing" on the part of the developers.
 
Desktop/workstation wise, most of my extra cores get used when I'm doing large compile tasks or huge file compress/decompress tasks. They're also useful for the type of software development work I do for my day job: running an integration test suite against a full instance of an application will generally consume all four cores. I also run VMs for some services that are impractical/inconvenient to run on the workstation itself (don't want to pollute the install).
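For what it's worth, here's a minimal sketch of what "use all the cores for a compress task" looks like in practice -- the directory path and the use of Python/lzma are just illustrative assumptions, not anything specific to my setup:

```python
# Minimal sketch: compress every file in a directory in parallel, one worker per core.
# The directory path below is a placeholder.
import lzma
import os
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def compress_one(path: Path) -> str:
    data = path.read_bytes()
    out = path.with_suffix(path.suffix + ".xz")
    out.write_bytes(lzma.compress(data))
    return f"{path.name}: {len(data)} -> {out.stat().st_size} bytes"

def compress_all(directory: str) -> None:
    files = [p for p in Path(directory).iterdir() if p.is_file()]
    # One worker per core keeps all of them busy for the duration of the job.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        for line in pool.map(compress_one, files):
            print(line)

if __name__ == "__main__":
    compress_all("./build_artifacts")  # placeholder directory
```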

Server-wise, ZFS is happy to chew through RAM and CPUs. Automatic background tasks like Munin reporting don't interfere as much with user tasks. Daemon processes like BIND and MariaDB don't have to fight user processes as much. And of course, VMs.

I've got a fun bit of hardware to start playing with soon: it's an Intel Xeon Phi coprocessor card. 57 x86-compatible cores and 8GB RAM on a double-width PCIe card!
 
I've got a fun bit of hardware to start playing with soon: it's an Intel Xeon Phi coprocessor card. 57 x86-compatible cores and 8GB RAM on a double-width PCIe card!

Does that thing require a Xeon to run?
 
With consoles that's a given, since all consoles of a given type are the exact same speed, so games are tuned to run at good framerates for that hardware.



Well, as I say... a fast GPU is so fast that you need a very fast CPU to feed it with commands quickly enough. AMD's CPUs simply aren't fast enough to take advantage of present day technology in GPUs. Note also how AMD's latest GPUs are never reviewed with AMD CPUs for that very reason.

That may be true, but that's all lab benchmark hoopla. Set Intel and AMD side by side running the same game and you would have a hard time discerning any difference. Crunching numbers in a spreadsheet or the like, Intel would have a small edge.
 
99%+ of PC sound hardware in circulation has NO ASIO support under Linsux or Winblows. That's why software like ASIO4ALL exists.

Well yes, ASIO4ALL is an exercise in futility really. The reason why most PC sound hardware doesn't have ASIO in the first place is because it's cheap shit that isn't capable of low-latency buffer transfers.
I don't count ASIO4ALL because it's just a hack: a wrapper over other APIs, while ASIO is actually designed to go BELOW the regular Windows sound APIs and avoid any overhead or processing.
Using an extra wrapper over some other API with undetermined performance characteristics while trying to emulate a direct buffer interface is just asking for trouble.
To me, ASIO means only devices that natively support ASIO. ASIO4ALL is something I wouldn't touch with a 10-foot pole.

Have a laptop? FORGET IT, never happen...

I have an E-Mu 0404 USB, which is nice and portable, and gives you decent ASIO 2.0 and 192/24 recording on location. Latency is < 3 ms. I've used it with PC-based guitar modeling software (Amplitube, Guitar Rig) to play in realtime, and that was no problem, latency-wise.
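That sub-3 ms figure lines up with plain buffer math (latency per buffer is frames divided by sample rate). A quick sketch -- the buffer sizes and rates below are just illustrative, not the 0404's actual settings:

```python
# Rough ASIO-style buffer latency: one buffer of N frames at R Hz takes N/R seconds.
# The buffer sizes and sample rates below are illustrative, not the E-Mu 0404's settings.
def buffer_latency_ms(frames: int, sample_rate_hz: int) -> float:
    return frames / sample_rate_hz * 1000.0

for frames in (64, 128, 256):
    for rate in (44_100, 48_000, 96_000):
        print(f"{frames:3d} frames @ {rate} Hz -> {buffer_latency_ms(frames, rate):5.2f} ms")
```

At 128 frames and 48 kHz that works out to roughly 2.7 ms, which is about where the "< 3 ms" figure sits.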
 
Just loaded this box up with a number of google and ebay tabs using the latest firefox and cpu usage didn't hit more than 5%. I must be doing something wrong.
No, that's how it behaves in the beginning. Leave them up, and keep Firefox running for some days, or a week or two. When you need to look at something else, do that in another tab. To re-visit an old page, go to the tab where you keep it.
That's how I use Firefox. At my office desktop (half a globe away) I keep lots of tabs open for months at a time. That also triggers Firefox's memory leaks, so I had to upgrade to 16GB on that computer. And I still have to stop and restart Firefox now and then (with all the tabs). It's just that with 16GB I don't have to do that several times a week, as I had to with 4GB, and once a week with 8GB.

The CPU usage has been consistent over a number of Firefox versions (between 10.x and 3x.x), and computers. The memory leaks seem to vary a bit between versions.

-Tor
 
While the number of multicore systems in my inventory is quite considerable compared to my other PC systems, I still use single-core systems, particularly those used for small business and light graphics applications. The underlying principles have changed little since Multiplan, 1-2-3, and WordPerfect/Word. My C=128 and TRS-80 Model 4 can do all this in 80 columns without breaking a sweat, as can my Tandy 1000 SL. Of course they are poor multitaskers. I could then switch to my Amiga A4000, which does this with ease and grace.
 
That's how I use Firefox. At my office desktop (half a globe away) I keep lots of tabs open for months at a time. That also triggers Firefox's memory leaks, so I had to upgrade to 16GB on that computer. And I still have to stop and restart Firefox now and then (with all the tabs). It's just that with 16GB I don't have to do that several times a week, as I had to with 4GB, and once a week with 8GB.

Chrome and IE are also memory-leak offenders, but lately Firefox seems to be the worst (for me anyway).
I do the same thing, keep browsers open with a lot of tabs - one thing I like about Windows 7 is being able to right-click on the Firefox or Chrome icons and close all instances+tabs in one fell swoop for cleanup.

I run 6GB at home, and 8GB at work - I usually call in the memory police when they're in the 1-2GB range.
 
That may be true, but that's all lab benchmark hoopla. Set Intel and AMD side by side running the same game and you would have a hard time discerning any difference.

Erm no, there are various games that depend a LOT on CPU-performance, and the difference can be quite significant.
Take the Starcraft 2 benchmark here for example: http://www.anandtech.com/show/6396/the-vishera-review-amd-fx8350-fx8320-fx6300-and-fx4300-tested/5
AMD gets 47.9 fps on its fastest CPU, Intel gets 71.9.
That means that AMD won't be able to run 60 Hz v-synced smoothly, while Intel will. That is an obvious difference.
Note also that even the 2500K gets 64.9 fps, while it has half the cores of the FX-8350, and no HT either. It's all about the single-threaded performance there.
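To put those numbers in frame-time terms (this is just the arithmetic behind the v-sync point, using the fps figures quoted above):

```python
# Frame-time arithmetic behind the v-sync point: a 60 Hz display allows
# 1000/60 ≈ 16.7 ms per frame. The fps figures are the ones quoted above.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

budget = frame_time_ms(60.0)
for label, fps in (("FX-8350", 47.9), ("2500K", 64.9), ("fastest Intel", 71.9)):
    ft = frame_time_ms(fps)
    verdict = "fits the 60 Hz budget" if ft <= budget else "misses it (drops to 30 fps under v-sync)"
    print(f"{label}: {fps} fps -> {ft:.1f} ms per frame; {verdict}")
```

47.9 fps means about 20.9 ms per frame, so with v-sync enabled the FX falls back to 30 fps, while anything at 60+ fps stays inside the 16.7 ms budget.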

In most other games, AMD is 'fast enough', but the gap is still huge.

Crunching numbers in a spreadsheet or the like Intel would have a small edge.

Again, Intel has more than just a small edge. Especially in the single-threaded Cinebench, the difference is large. More than 50% faster. That sort of speed difference takes years to develop.
 
Erm no, there are various games that depend a LOT on CPU-performance, and the difference can be quite significant.
Take the Starcraft 2 benchmark here for example: http://www.anandtech.com/show/6396/the-vishera-review-amd-fx8350-fx8320-fx6300-and-fx4300-tested/5
AMD gets 47.9 fps on its fastest CPU, Intel gets 71.9.
That means that AMD won't be able to run 60 Hz v-synced smoothly, while Intel will. That is an obvious difference.
Note also that even the 2500K gets 64.9 fps, while it has half the cores of the FX-8350, and no HT either. It's all about the single-threaded performance there.


You are aware that the minimum requirements for Starcraft II, with respect to AMD, are an Athlon XP 2200+ processor and a GeForce 6600/Radeon 9800. That being said, it stands to reason that it ought to run fairly well on an FX-8350 system OC'd to 4.6 GHz, with two XFX 7970s in Crossfire and 16 gigs of RAM, on a Qnix 28" LED.
 
Hah, I wrote up a reply to the "small edge" myself but never posted it.
Mine was basically a combination of Scali and Agent Orange's posts. This is just my thoughts on choosing a processor - I haven't really been building systems or playing the latest high-end releases for a few years now.

I'm sure the FX will run games well, and for the most part running on Intel will be an edge rather than a jump in gaming - the average title is limited mostly by GPU performance anyway. In the practical sense it's not really an issue, but an enthusiast pushing for the best is surely going to want the best - and it should be considered whether it's worth the 30-50% per-core performance penalty for the sake of more cores and a lower price, because you're only going to get the true performance on tasks that load the processor with one or more threads per core.
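Here's a back-of-envelope way to look at that tradeoff. The per-core speed factor and the parallel fractions are illustrative assumptions for the sake of the example, not measurements of any particular CPU:

```python
# Back-of-envelope "more slow cores vs fewer fast cores" comparison (Amdahl-style).
# The per-core speeds and parallel fractions are assumptions, not benchmark data.
def throughput(cores: int, per_core_speed: float, parallel_fraction: float) -> float:
    serial = (1.0 - parallel_fraction) / per_core_speed      # serial part runs on one core
    parallel = parallel_fraction / (per_core_speed * cores)  # parallel part scales with cores
    return 1.0 / (serial + parallel)

fast_quad = (4, 1.00)   # fewer cores at full per-core speed
slow_octa = (8, 0.65)   # more cores with an assumed ~35% per-core penalty

for frac in (0.2, 0.5, 0.9, 1.0):
    a = throughput(*fast_quad, frac)
    b = throughput(*slow_octa, frac)
    print(f"parallel fraction {frac:.0%}: 4 fast cores = {a:.2f}x, 8 slow cores = {b:.2f}x")
```

Unless the workload is almost entirely parallel, the fewer-but-faster cores come out ahead, which is exactly the "one or more threads per core" caveat above.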

As an example of more cores vs more speed, this is from a few years ago now.

Core 2 Duo - 2.8 GHz - overclocked it to 3.8 GHz - from an idiot end user perspective, this jump in processor performance (the feel) was very noticeable (I later went to 4.4 over time, but you don't notice small increments).
Then I upgraded to a fancy Core i7 930 - 2.8 GHz - overclocked it up to 4 GHz - general use / games / etc - damn thing felt exactly the same. The extra cores, they do nothing :(

The only time I get to enjoy having more than two cores is video encoding :/
 
You are aware that the minimum requirements for Starcraft II, with respect to AMD, are an Athlon XP 2200+ processor and a GeForce 6600/Radeon 9800.

Yes, if you run everything at the lowest possible settings. But that's not the point here, is it?
The point is that Intel gives significantly better framerates because of its significantly better single-threaded performance. Which means that at the highest settings, the Intel systems can still run 60 fps v-synced, while the AMD systems cannot, with the same videocard.
Hence, you are CPU-bottlenecked, resulting in a worse gaming experience.

That being said, it stands to reason that it ought to run fairly well on an FX-8350 system OC'd to 4.6 GHz, with two XFX 7970s in Crossfire and 16 gigs of RAM, on a Qnix 28" LED.

I'm not sure where you are going with this statement... Overclocking AMD's fastest offering, to try and get it up to the performance level of an outdated low-end Intel CPU like the 2500K? And then what? You can't overclock 2500Ks as well?
Why are you even defending the FX in the first place?
The point was that there is a big difference between Intel and AMD in single-threaded performance, which is very much apparent in certain software, as demonstrated.
 
and for the most part running on Intel will be an edge rather than a jump in gaming - the average title is limited mostly by GPU performance anyway.

I disagree with that. The benchmarks clearly show that in many games the Intel systems are considerably faster.
The only thing is... even the AMD ones get 60+ fps in most games, so on the average 60 Hz monitor, you won't see the difference.
But the difference is there, and it is indeed a jump. Just look at the benchmarks. The difference can be 20% or more, which is huge (the difference between the 2500K and 3770K is smaller in many cases, and just look at what that performance difference is worth in price).
 
The problem for AMD is that while they may have adequate performance, especially when overclocked, it comes with a hefty power bill. AMD gambled that they could squeeze more cores into a die and make up for limited single-thread performance and inferior nodes with extra cores, and remain viable in the server market. Instead, AMD falls far enough behind on performance that they are competing with the lower range of Intel chips, and further has to heavily discount to compensate for the increased power consumption. Big chips are expensive to manufacture; selling them at low, low prices does not result in a profitable company.
 
I disagree with that. The benchmarks clearly show that in many games the Intel systems are considerably faster.
The only thing is... even the AMD ones get 60+ fps in most games, so on the average 60 Hz monitor, you won't see the difference.
But the difference is there, and it is indeed a jump. Just look at the benchmarks. The difference can be 20% or more, which is huge (the difference between the 2500K and 3770K is smaller in many cases, and just look at what that performance difference is worth in price).

Yeah, I was more getting at the concept that AO suggested - that the average person likely wouldn't notice the difference. Many console gamers (including me) still get our jollies from playing at a mere 30fps. So if you needed performance for H.264 encoding, but wanted to play games too - an 8 core FX is probably a damn good option.

But yes there is certainly a step, and I think it's a lot larger than many realize.
 
I really miss the glory days of the Athlon Thunderbird, back when an AMD chip could strangle the best Intel had to offer to death with one hand tied behind its back. (Particularly when it came to FPU performance; even the lowly first-gen Durons could come close to besting a 1 GHz PIII.) I was still rocking an overclocked (400 MHz up to 550 MHz) ABIT BP6 Dual Celeron box as my desktop when I picked up a 700 MHz Spitfire+motherboard combo at Fry's to upgrade the "game console" connected to the TV from a K6-400, only to discover that the thing I paid less than $100 for ran rings around the Frankenstein SMP system at almost everything. The positions are almost 180° reversed these days, and while programs that leverage multi-threading are far more common now, I'll still take one horse over a flock of chickens if I have to choose.
 
Ah, now you guys are talking gaming, which I don't do--ever. For the kind of work I do, even my old Socket A motherboard, retrofitted with an Athlon XP-M and OC-ed still makes for a useful box.
 
I really miss the glory days of the Athlon Thunderbird, back when an AMD chip could strangle the best Intel had to offer to death with one hand tied behind its back.

I remember those days, I had a Duron 600 at the time - faster and cheaper than Intel P3 equivalents with no downsides that I remember.
 