
So how many cores are enough these days?


Intel probably should not have put an i7 sticker on a dual-core processor at 1.8 GHz, but it does actually only have two cores.
Despite those benchmarks, I think you'd find the 4500U would still be noticeably faster to use for general tasks - as per the fast core vs number of cores discussion earlier.
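
For what it's worth, the "fast core vs. number of cores" trade-off is easy to put rough numbers on with Amdahl's law. Here's a quick sketch - the per-core speeds and parallel fractions below are made-up illustrative values, not measurements of any real chip:

```python
# Rough Amdahl's-law sketch of "a few fast cores" vs "more slow cores".
# All numbers below are illustrative assumptions, not benchmarks.

def effective_speed(per_core_speed, cores, parallel_fraction):
    """Relative throughput of a workload: the serial part runs on one core,
    the parallel part scales across all cores (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    speedup = 1.0 / (serial + parallel_fraction / cores)
    return per_core_speed * speedup

# Hypothetical chips: 2 fast cores vs 4 cores that are 40% slower per core.
for p in (0.25, 0.50, 0.75):
    fast_dual = effective_speed(1.0, 2, p)
    slow_quad = effective_speed(0.6, 4, p)
    print(f"parallel fraction {p:.0%}: 2 fast cores -> {fast_dual:.2f}, "
          f"4 slow cores -> {slow_quad:.2f}")
```

Until the parallel fraction of the workload gets quite high, the two faster cores come out ahead, which is basically the general-tasks point above.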

But if I was drunk enough to want to play games on an Ultrabook with integrated graphics, I certainly would not want Intel. AMD made a good decision purchasing ATI.

Generally I tend to always (even in laptops) go with the Intel CPU / AMD GPU combo - but you won't find a discrete GPU in an Ultrabook.
 
So you've now moved the goalposts from CPU performance to GPU performance?

What are you talking about? I haven't moved anything - it's the overall that counts. Pair a high-end FX chip with a good video card and you'll have a nice AMD gamer at reasonable cost. You need to get your head out of the benchmarks and into a real machine, and then you might have an argument based on your own practical experiences. You are not going to convince anyone here that they should scrap their AMD gamer because you can quote a few obscure figures off some Intel-leaning forum - not me anyway.
 
What are you talking about? I haven't moved anything

I'm quite sure this thread was about CPU cores, not about GPUs.

it's the overall that counts.

Except that there is no 'the overall'.
You are using some AMD slides, where they cherry-picked games that weren't CPU-limited, so that they could show off their faster GPUs compared to Intel.
There are also games where CPU performance is more important, and the Intel GPU will deliver better framerates despite the GPU itself being slower than the AMD one on paper:
[Benchmark charts: 46677.png, 46680.png, 46681.png]
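
Whether a game ends up CPU-limited or GPU-limited just comes down to which side takes longer per frame. A toy model with invented per-frame costs (purely illustrative, not taken from those charts):

```python
# Toy model: each frame the CPU prepares game logic/draw calls and the GPU renders.
# Whichever takes longer sets the framerate. Numbers are invented for illustration.

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    frame_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)  # the bottleneck wins
    return 1000.0 / frame_ms

# GPU-limited game: the faster CPU barely matters.
print(fps(cpu_ms_per_frame=8, gpu_ms_per_frame=20))   # ~50 fps
print(fps(cpu_ms_per_frame=12, gpu_ms_per_frame=20))  # still ~50 fps

# CPU-limited game: the faster CPU wins even with a slower-on-paper GPU.
print(fps(cpu_ms_per_frame=10, gpu_ms_per_frame=8))   # ~100 fps
print(fps(cpu_ms_per_frame=16, gpu_ms_per_frame=6))   # ~62 fps
```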


You need to get your head out of the benchmarks and into a real machine, and then you might have an argument based on your own practical experiences. You are not going to convince anyone here that they should scrap their AMD gamer because you can quote a few obscure figures off some Intel-leaning forum - not me anyway.

Firstly, I have not suggested that people scrap their AMD machines. The fact that you interpret my posts that way says more about you than about me.
Secondly, I suggest you drop the attitude. You don't know who you're talking to.
I have been programming graphics for more than 20 years, and own my own company which develops real-time video processing software and does video warping/mapping, dome projections and whatnot.
I am intimately familiar with developing graphics engines using the latest Direct3D or OpenGL on multicore/multi-GPU systems.
In fact, my company is participating in the DirectX 12 early access program.
So who are you to tell me about "practical experiences"? I probably know more about this topic than anyone else on this forum.
Oh, and I'll let you in on a little secret: the PC we use for the dome projection is actually an AMD Phenom X4 system which had a set of AMD Radeon GPUs until a recent upgrade to some GeForce cards.

You are a fine example of why I dislike AMD cheerleaders so much.
 
I'm really impressed with your credentials. Maybe you should cut loose a little by getting out in your shop more often and playing a few games on your X4. And by the way, you seem to be the one with the attitude. You tend to be somewhat sarcastic and have a 'know-it-all' attitude. At least that's my impression, albeit not worth much. I've been dealing with electronics and computers since the early '60s, so I think I know my way around, having made my living in the industry. Like most folks on this forum, it's a hobby and mostly for fun and games, so why so snobby? BTW, those stats that you trotted out above show some fairly dated GPUs for the Batman game, which was issued in late 2011.
 
Indeed, AMD used to be THE chip to have. I had a Duron 750 running at 1.1 GHz and it smoked the Intel offerings at the time, by a long shot. But now my C2Q 2.66 @ 3.2 GHz smokes just about anything AMD has; in fact, the only bottleneck I have with it is that I am still running DDR2. I have a DDR3 board on the way, and that will clear up my major bottleneck. As a matter of fact, the same system outperforms my i3 laptop, which is only just over a year old, even with the RAM difference. I see no reason to upgrade too much yet.
 
Like most folks on this forum, it's a hobby and mostly for fun and games, so why so snobby?

Snobby?
Look, I was just giving my friendly advice and experience as far as multi-core CPUs and performance go.
You were the one who went 'all AMD' over this thread, with a rather pedantic attitude.
You asked for it, basically.

BTW, those stats that you trotted out above show some fairly dated GPUs for the Batman game, which was issued in late 2011.

Well yes, this sort of thing is not investigated often by reviewers. Besides, I am not interested enough to do a dedicated search for these results. I just happened to know where these particular results could be found, as I had used them before to demonstrate the notion that a faster GPU does not necessarily mean a better gaming experience.
Today the difference would only be more in Intel's favour, since Intel has made bigger leaps forward in GPU performance than AMD has in CPU performance.
 
I still wouldn't use an Intel integrated GPU for anything graphics intensive. However, for what they are, they are good. A decent Intel laptop or desktop paired with a decent graphics unit can go a really long way. Though I haven't had much luck mixing ATI graphics with nVidia-based motherboards, or vice versa....
 
Indeed, AMD used to be THE chip to have

At one small moment in time yes :)
This forum goes a bit further back than most :)
Prior to the K7 architecture, AMD never threatened Intel's faster CPUs, and was merely price-fighting the budget models. Which in those days generally meant last-gen CPUs. E.g., when AMD had its first successes with the 386DX-40, Intel was already offering a 486DX-50.
AMD hit the sweet-spot for users on a budget (it was very popular among students for example).
 
I still wouldn't use an Intel integrated GPU for anything graphics intensive.

I wouldn't use an AMD integrated GPU for anything graphics intensive either.
They may be faster than Intel, but they're still bottlenecked badly by the shared DDR3 memory. Dedicated GPUs with dedicated graphics memory are just much faster.
To me, integrated graphics is just a bit 'meh'... They just don't reach an acceptable level of performance for any serious graphics work/gaming/etc.
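
To put rough numbers on that bandwidth gap, here's a back-of-the-envelope sketch; the dedicated card is a hypothetical mid-range example (256-bit GDDR5 at 5 GT/s), not a specific product:

```python
# Back-of-the-envelope memory bandwidth comparison (illustrative figures).

def bandwidth_gb_s(bus_width_bits, transfers_per_sec):
    return bus_width_bits / 8 * transfers_per_sec / 1e9

# Dual-channel DDR3-1600 shared between CPU and integrated GPU:
# 2 x 64-bit channels at 1600 MT/s.
shared_ddr3 = bandwidth_gb_s(2 * 64, 1600e6)   # ~25.6 GB/s

# Hypothetical mid-range dedicated card: 256-bit GDDR5 at 5 GT/s.
dedicated_gddr5 = bandwidth_gb_s(256, 5e9)     # ~160 GB/s

print(f"shared DDR3:     {shared_ddr3:.1f} GB/s (and the CPU wants its share too)")
print(f"dedicated GDDR5: {dedicated_gddr5:.1f} GB/s")
```

That's roughly a 6x difference before you even account for the CPU competing for the same shared memory.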

What I do find rather ironic though... I have used a few of these nVidia Optimus laptops now... But they didn't have very high-end nVidia GPUs on them. The Intel GPUs are actually good enough that the nVidia GPUs are barely faster in most software I tried.
 
Intel will stay ahead as long as they can profitably use their massive fabs with cutting-edge production equipment. Once those fabs get too expensive for even Intel to keep going (many, many billions of dollars for facilities that go obsolete in a few years), then maybe things will even out a bit, assuming AMD is still around.
 
At one small moment in time yes :)
This forum goes a bit further back than most :)
Prior to the K7 architecture, AMD never threatened Intel's faster CPUs, and was merely price-fighting the budget models. Which in those days generally meant last-gen CPUs. E.g., when AMD had its first successes with the 386DX-40, Intel was already offering a 486DX-50.
AMD hit the sweet-spot for users on a budget (it was very popular among students for example).

Heh, and while Intel was busy with a single-chip 8-bit slowpoke, AMD was powering some of the largest minicomputers of the day with their cutting-edge 2900-series processor chipsets.... computers built with them walked all over the 8080 competition....

(sorry, couldn't resist.....) and yes, take that a bit tongue-in-cheek, as this forum does go back that far.
 
Heh, and while Intel was busy with a single-chip 8-bit slowpoke, AMD was powering some of the largest minicomputers of the day with their cutting-edge 2900-series processor chipsets.... computers built with them walked all over the 8080 competition....

Yes, and IBM only made it worse, by wanting to stick to the 8080-related 825x chipset for their PC, therefore choosing the 8088 over the 8086.
The 8086 was actually a half-decent 16-bit CPU... but putting it on an 8-bit bus pretty much cut the performance in half. The thing is struggling just to get its instructions read from memory. All that when the competition is already starting to adopt the Motorola 68000.
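
Just to show how much the 8-bit bus hurts: both chips use a 4-clock bus cycle, but the 8088 moves one byte per cycle where the 8086 moves two, so at the PC's 4.77 MHz the raw fetch bandwidth works out roughly like this (simplified; it ignores wait states, DMA refresh and the prefetch queue):

```python
# Simplified bus-bandwidth comparison of the 8088 vs 8086 at the IBM PC's clock.
# Both CPUs use a 4-clock bus cycle; the 8088 has an 8-bit data bus (1 byte per
# bus cycle), the 8086 a 16-bit bus (2 bytes per aligned bus cycle).

CLOCK_HZ = 4_770_000      # IBM PC clock
CLOCKS_PER_BUS_CYCLE = 4

def peak_fetch_bytes_per_sec(bus_bytes):
    return CLOCK_HZ / CLOCKS_PER_BUS_CYCLE * bus_bytes

print(f"8088 (8-bit bus):  {peak_fetch_bytes_per_sec(1) / 1e6:.2f} MB/s")
print(f"8086 (16-bit bus): {peak_fetch_bytes_per_sec(2) / 1e6:.2f} MB/s")
```

So the 8088 can only pull half as many instruction bytes per second through the bus, which is exactly the "struggling just to get its instructions read" problem.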
 
I always love the 'who's got the bigger machine' epeen debates when both epeens will be shriveled and flaccid in 6 months. The correct answer is whatever is on sale, and do it again in a year.
 
I always love the 'who's got the bigger machine' epeen debates when both epeens will be shriveled and flaccid in 6 months. The correct answer is whatever is on sale, and do it again in a year.

The classic Atari ST vs Amiga wars were much better!
And they actually had substance to it... These days it's all the same generic stuff, just faster/bigger/more.
 
Yes, and IBM only made it worse, by wanting to stick to the 8080-related 825x chipset for their PC, therefore choosing the 8088 over the 8086.
The 8086 was actually a half-decent 16-bit CPU... but putting it on an 8-bit bus pretty much cut the performance in half. The thing is struggling just to get its instructions read from memory. All that when the competition is already starting to adopt the Motorola 68000.

I recall the excitement just before the PC was officially announced: IBM had rolled out their lab computer using a 68K--could the new "personal computer" also use the 68K?

Z80 machines were still being purchased as personal computers for quite some time after the 5150 came out. I recall a friend with his own business picked up a complete Morrow MD2 system with software for much less than an IBM PC--and it was ready to be put into use the same day.
 
Z80 machines were still being purchased as personal computers for quite some time after the 5150 came out.

Yup.. the Z80 was apparently quite popular in combination with CP/M. I'm not that familiar with the world of CP/M myself, but I know that Commodore added a Z80 to their C128 specifically for CP/M support (its regular CPU is a 6502-derivative, for C64-compatibility), so apparently even as late as 1985 it was still a big deal.
 
Yes, and IBM only made it worse, by wanting to stick to the 8080-related 825x chipset for their PC, therefore choosing the 8088 over the 8086.

Yet for some bizarre reason they fitted the Displaywriter "Word Processor" with the 8086. (While at the same time sticking the System/23 Datamaster with an 8085 and bank-switched memory.) One starts to wonder if circa 1980 IBM engineers were simply told to play "spin the bottle" when choosing CPUs for small computer projects.
 
Yet for some bizarre reason they fitted the Displaywriter "Word Processor" with the 8086. (While at the same time sticking the System/23 Datamaster with an 8085 and bank-switched memory.) One starts to wonder if circa 1980 IBM engineers were simply told to play "spin the bottle" when choosing CPUs for small computer projects.

I think it's more a matter of one division not knowing what another was doing. It happens more often than you'd think in large organizations.
 