
Haswell E review from Anandtech: 5930K, 5960X, and 5820K

LordOfChaos

Member
http://anandtech.com/show/8426/the-intel-haswell-e-cpu-review-core-i7-5960x-i7-5930k-i7-5820k-tested

[Image: Haswell-E die map]


The i7-5960X comes across as the new champion in terms of non-Xeon throughput, although kudos will lay more on having the up-to-date chipset that users have been requesting. Most people moving from a Sandy Bridge-E or Ivy Bridge-E will not see a day-to-day adjustment in the speed of their workflow on the new platform, and the real benefit will be for those that are CPU limited. Haswell-E does mark the time that Nehalem and Westmere users, or 3820K/4820K users, who do anything other than gaming, might consider switching.
 
Damn, Intel has been on a roll for many years. The "i" series is awesome. Imagine the PS4 or XBO with one of these. Even slowed down. The console would be a monster. And cost $1000, lol
 

LordOfChaos

Member
Damn, Intel has been on a roll for many years. The "i" series is awesome. Imagine the PS4 or XBO with one of these. Even slowed down. The console would be a monster. And cost $1000, lol

Yeah, the issue was likely both the die size adding to the cost and Intel's pricing structure. Intel and Nvidia both maintained too much control over their architectures, making them less than ideal partners for consoles. But yes, even four lower-clocked Haswell cores would make either console that adopted them sing. I think 8 Jaguar cores may come close to the total performance, but 4 cores are easier to use in their entirety without scaling issues.



Is it me or is that memory controller larger than usual?

Although I'm excited, I'm deeply concerned nothing much will support >4 cores for a good while. :/

Here's hoping that the two leading consoles this generation both having 8 cores makes highly threaded games more commonplace. That, and 14nm Broadwell, may make octacores more accessible for everyone.
 

Buzzman

Banned
Yeah, the issue was likely both the die size adding to the cost and Intel's pricing structure. Intel and Nvidia both maintained too much control over their architectures, making them less than ideal partners for consoles. But yes, even four lower-clocked Haswell cores would make either console that adopted them sing. I think 8 Jaguar cores may come close to the total performance, but 4 cores are easier to use in their entirety without scaling issues.

pffffthahahaha

I'm sorry, I don't think that would be the case :p
 

The Goat

Member
I was planning on a new build in November, once all the dust settled with the new CPUs/GPUs, but it's looking like I will just hold off. My 2 sticking points in my current machine (2600K Sandy Bridge / 680gtx) are the lack of true pci-e 3.0 features and the gpu's 2 gigs of Vram. I'm certain it will still hold its own going into the next cycle, albeit I won't be able to run all the bells and whistles.

I've still got some OC'ing room yet, so I'll eke out every bit I can.
 
Yeah, the issue was likely both the die size adding to the cost and Intel's pricing structure. Intel and Nvidia both maintained too much control over their architectures, making them less than ideal partners for consoles. But yes, even four lower-clocked Haswell cores would make either console that adopted them sing. I think 8 Jaguar cores may come close to the total performance, but 4 cores are easier to use in their entirety without scaling issues.



Is it me or is that memory controller larger than usual?



Here's hoping that the two leading consoles this generation both having 8 cores makes highly threaded games more commonplace. That, and 14nm Broadwell, may make octacores more accessible for everyone.


Yep. But that has made them both advance significantly, technologically speaking. Especially Intel. AMD has absolutely nothing to counter their Core i architecture, which is very unfortunate.

pffffthahahaha

I'm sorry, I don't think that would be the case :p

I think he's saying that, at the same frequency, the chips would be close in performance.
 
Here's hoping that the two leading consoles this generation both having 8 cores makes highly threaded games more commonplace. That, and 14nm Broadwell, may make octacores more accessible for everyone.

So say we all. I'm probably gonna hop onto Broadwell once it arrives. I heard what's coming initially in Q1 2015 is all the low-power chips?
 

LordOfChaos

Member
pffffthahahaha

I'm sorry, I don't think that would be the case :p

Adjusted for the clock speed, half the performance (so 8 in total come close to 4) isn't a crazy thought.

AMD A4-5000 (1.5GHz Jaguar x 4) 1323 in 7Zip single threaded
Intel Core i5-3317U (1.7GHz IVB x 2) 2816

And 1.5 vs 2.39 in Cinebench, which, if you double Jaguar, would actually put it OVER Intel. So calling it half the performance is a fair estimate.

So I'm pretty close (even slightly over if adjusted for clocks?), nothing to laugh at.

http://www.anandtech.com/show/6974/amd-kabini-review/3
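For anyone who wants to sanity-check that, here's the back-of-the-envelope math as a quick script. It's a rough sketch only: it assumes the 7-Zip/Cinebench numbers quoted above and perfectly linear scaling with clocks and core count, which no real chip achieves.

```python
# Rough sketch of the clock-adjusted comparison above. Scores are the
# 7-Zip single-threaded and Cinebench numbers quoted from the AnandTech
# Kabini review; linear clock/core scaling is an idealised assumption.

jaguar = {"clock_ghz": 1.5, "7zip_st": 1323, "cinebench": 1.5}   # AMD A4-5000 (4x Jaguar)
ivb    = {"clock_ghz": 1.7, "7zip_st": 2816, "cinebench": 2.39}  # Core i5-3317U (2x IVB)

# Single-threaded throughput per GHz
jag_per_ghz = jaguar["7zip_st"] / jaguar["clock_ghz"]   # ~882
ivb_per_ghz = ivb["7zip_st"] / ivb["clock_ghz"]         # ~1657
print(f"Jaguar core vs IVB core, clock for clock: {jag_per_ghz / ivb_per_ghz:.2f}x")  # ~0.53

# If one Jaguar core is ~half an IVB-class core at the same clock, then under
# ideal scaling 8 Jaguar cores land in the neighbourhood of 4 such cores.
print(f"8 Jaguar vs 4 IVB cores (ideal scaling): {(8 * jag_per_ghz) / (4 * ivb_per_ghz):.2f}x")

# Cinebench: doubling the quad-core Jaguar score beats the dual-core ULV i5.
print(f"2x Jaguar Cinebench: {2 * jaguar['cinebench']:.1f} vs {ivb['cinebench']}")
```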

I think he's saying that, at the same frequency, the chips would be close in performance.

Yep
 

Kezen

Banned
How much more powerful are those new i7s compared to the Jaguar cores in the current-gen consoles?
Serious question, as I'm still trying to understand what kind of CPU power is needed to make up for the API's slowness.
 

LordOfChaos

Member
How much more powerful are those new i7s compared to the Jaguar cores in the current-gen consoles?
Serious question, as I'm still trying to understand what kind of CPU power is needed to make up for the API's slowness.

See my post two above
 

tuffy

Member
I'm looking forward to an 8 core chip so I can transcode things at ludicrous speed. It'll be my first significant upgrade in quite a while.
 

GSG Flash

Nobody ruins my family vacation but me...and maybe the boy!
Any chance of the 4790k coming down in price when these CPUs drop?
 

x3sphere

Member
Yeah. Maybe I should cancel my 5930K order and go with a 5820K instead. I don't plan to go with more than 2-3 GPUs.

I wouldn't. 3 GPUs could definitely benefit from the extra PCI-E lanes with the newer cards coming out.

Plus, if you are paying the premium for an X99 setup, which is significant with the higher-priced mobos and DDR4 memory, I see little reason not to spend a few hundred extra to get all the PCI-E lanes.
 

Blanquito

Member
Although I'm excited, I'm deeply concerned nothing much will support >4 cores for a good while. :/

The Witcher 3 devs mentioned that both consoles allow devs to access 6 cores and that their goal was to "make the [Witcher 3] engine scalable." So it looks like there's already effort being put into taking advantage of extra cores if you have them.
 

wwm0nkey

Member
After I get my 880 I think I will save up for this and a new MB. Or should I just wait, since I have a 2500K at 4.4GHz right now? I know for a lot of things I won't need it, but for future-proofing it seems like it would be a good idea?
 
Adjusted for the clock speed, half the performance (so 8 in total come close to 4) isn't a crazy thought.

AMD A4-5000 (1.5GHz Jaguar x 4) 1323 in 7Zip single threaded
Intel Core i5-3317U (1.7GHz IVB x 2) 2816

And 1.5 vs 2.39 in Cinebench, which, if you double Jaguar, would actually put it OVER Intel. So calling it half the performance is a fair estimate.

So I'm pretty close (even slightly over if adjusted for clocks?), nothing to laugh at.

http://www.anandtech.com/show/6974/amd-kabini-review/3



Yep

That shitty netbook cpu (the i5 3317U) does not have the same ipc as desktop haswell, I believe.
Also, the full amd desktop fx cores (not the netbook jaguar ones) are less than half as fast as a haswell i5 desktop core at the same clockspeed; they are still slower (still less than half as fast) even at 5ghz compared to a stock i5 4670k.

You are massively overestimating the amd cpus.

edit: I was right:
Passmark cpu benchmark score:
i5 3317u netbook cpu, 4 cores @ 1.7 GHz: 3,100
Intel Core i7-4770K desktop cpu, 4 cores @ 3.5 GHz: 10,280

Core for core, clock for clock, the desktop chip is 1.6 times faster than the notebook i5.
So even if the 8 amd jaguar cores were clocked equally to a desktop haswell quad core, the haswell quad core would still be 3.2x faster (not 2x).
But with jaguar at 1.7 ghz, a quad core haswell cpu at stock clocks is about 6.5x faster; once you overclock it, that's another 30 percent, so almost 8.5x faster.
There is a reason why the desktop amd fx 8 core cpu needs to be clocked at 5ghz to match a quad core ivy bridge i5 at 3.5ghz.


Anyhow, now that the misinformation has been corrected, maybe we can discuss the article and the 6-8 core haswell-e instead of having these dumb jaguar comparisons again. It's pointless; they are more than a full generation apart.

edit: silly me, I can just look up jaguar's passmark score to compare them directly, without this crude math that assumes performance scales 100 percent with clocks for either.
AMD A4-5000 APU: 1,905
So a stock desktop haswell quad core is 5.4x faster.
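For transparency, the ratio arithmetic in that edit looks like this; it's a sketch using only the Passmark scores quoted above, with the same crude linear-clock-scaling assumption the post itself flags.

```python
# The ratio arithmetic from the post above, using the quoted Passmark scores.
# Linear scaling with clock speed is assumed, which is admittedly crude.

passmark_3317u  = 3100    # i5-3317U notebook CPU @ 1.7 GHz
passmark_4770k  = 10280   # i7-4770K desktop CPU @ 3.5 GHz
passmark_a45000 = 1905    # AMD A4-5000 (4 Jaguar cores @ 1.5 GHz)

raw_ratio   = passmark_4770k / passmark_3317u    # ~3.3x overall
clock_ratio = 3.5 / 1.7                          # ~2.06x clock advantage
per_clock   = raw_ratio / clock_ratio            # ~1.6x at the same clock
print(f"4770K vs 3317U: {raw_ratio:.1f}x overall, ~{per_clock:.1f}x clock for clock")

# Skipping the clock-scaling guesswork and comparing chips directly:
print(f"4770K vs A4-5000: {passmark_4770k / passmark_a45000:.1f}x")   # ~5.4x
```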
 

Henrar

Member
If only Sony had gone with Intel + dedicated GPU instead. Jaguar is too weak.

We would have to pay probably over a thousand dollars for the console, and it would have a huge TDP (therefore it would probably be bigger than the Xbox One).

Sure, they could've done that. But then we would witness the biggest catastrophe in the history of SCE, easily overshadowing the PS3.
 

Felspawn

Member
On the one hand, I'm glad to see Intel move the CPU power yardstick down the field. On the other hand, in terms of gaming performance anything beyond an i5 is a waste, and things are only going to get more GPU-bound with Direct3D 12/Mantle/next-gen OpenGL, not to mention using the GPU for compute tasks. If you're doing (a LOT of) video encoding, maybe, but otherwise it's just an e-peen thing.
 
Anandtech are idiots btw, benchmarking games at gpu-limited settings, and on an SLI setup to boot. Why can't they just set it to low @ 800*600 and get meaningful data for the cpu results?
 
Benchmarks are hard
the point of gaming benchmarks is to see how much faster the cpu actually is in games, which you can only measure if you don't have a gpu bottleneck.
It doesn't matter if you aren't going to play that specific game at 800*600; the point is to see how much better it does in games, so when future games come out using the same engine, or you upgrade your gpu later on, you know whether the cpu is more capable than what you have or not.

If you test it at gpu limited settings it's a gpu benchmark not a cpu benchmark.
You turn off /down all gpu reliant settings so you can test the cpu reliant ones and see how the cpu performs.

A potential buyer has 2 titans and a 144hz monitor.
He doesn't give a shit about a benchmark showing him that 1 or 2 770s run tomb raider at 50-90fps at 1080p with 4x msaa and tressfx on high; he needs to know if he can get 144 fps with this cpu when he can't with his current one.
This worthless anandtech benchmark tells him nothing (it only tells him how well a 770 performs in tomb raider), other than that anandtech is a glorified product-overview site rather than a respectable review site.

Might as well look at a newegg or amazon product page for a 'review'.
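To put the argument in concrete terms, here's a toy model of it. The frame-rate caps below are made up purely for illustration; the only point is that the measured number is roughly capped by whichever stage is slower.

```python
# Toy model: measured fps is roughly capped by the slower of the CPU and GPU.
# All numbers here are hypothetical, for illustration only.

def measured_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    """Each frame waits on the slower of the two stages (simplified)."""
    return min(cpu_fps_cap, gpu_fps_cap)

cpus = {"old quad core": 90.0, "new hex core": 160.0}  # hypothetical CPU-limited fps
gpu_cap_maxed = 60.0    # GPU-limited fps at max settings
gpu_cap_low   = 300.0   # GPU-limited fps at 800*600 low

for name, cap in cpus.items():
    print(f"{name}: maxed settings -> {measured_fps(cap, gpu_cap_maxed):.0f} fps, "
          f"low settings -> {measured_fps(cap, gpu_cap_low):.0f} fps")

# Maxed settings: both CPUs read 60 fps, so the GPU hides the difference.
# Low settings: 90 vs 160 fps, which is what the 144hz buyer actually needs to know.
```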
 

mkenyon

Banned
Because it's as far from relevant as you can get. Nobody plays at those resolutions and it's not indicative of anything anyone would actually do.
Yep.

People have long abandoned the silly low resolution tests for CPUs because they're literally meaningless.

Also, better benches from Techreport:


Not a single surprise there. IPC still king.
If you test it at gpu limited settings it's a gpu benchmark not a cpu benchmark.
You turn off /down all gpu reliant settings so you can test the cpu reliant ones and see how the cpu performs.
Bottlenecks don't work like that. They can exist on all parts at the same time. The information is still very useful. They are generally more engine dependent than "settings" dependent.
 
Yep.

People have long abandoned the silly low resolution tests for CPUs because they're literally meaningless.

Also, better benches from Techreport:



Not a single surprise there. IPC still king.

Bottlenecks don't work like that. They can exist on all parts at the same time. The information is still very useful. They are generally more engine dependent than "settings" dependent.


It's like the real mkenyon got kidnapped by aliens.

Read my bit on the 120-144 hz monitors.
Testing games with settings no one with such a monitor will use (tressfx high, etc.) is stupid, because they care about getting the framerate they desire (which is why they'd be looking at a high-end cpu to begin with).
Gpu-limited benchmarks are completely worthless.

Frametimes and minimum fps are better metrics to get a feel for cpu performance in a gpu-limited scenario, but what you need is minimum fps and frametimes in a cpu-limited scenario.
 

mkenyon

Banned
Frametimes and minimum fps are better metrics to get a feel for cpu performance in a gpu-limited scenario, but what you need is minimum fps and frametimes in a cpu-limited scenario.
What do you mean by this?

Because, the benches I posted above, especially for Watch Dogs and Batman, are both CPU limited but for different reasons. Batman is on UE3 and sings with two speedy as fuck cores.
 

2San

Member
I wanted to see how the mess that is watch dogs would do.

I'm still good with my 2500k. I'm getting some crazy mileage out of this CPU.
 

Cyriades

Member
Can Intel's new chip improve gaming performance?

Hot damn this is some quick, expensive silicon. But even though this brand new, $1,000 eight-core, sixteen thread, Core i7 5960X processing monster is capable of some serious number-crunching, it’s probably not the CPU you’re really looking for.

The i7 5960X is the first, and the most powerful, of the new Haswell E range of Intel CPUs. They represent the processors of a whole new PC platform, comprising new motherboards and the next generation of system memory, namely X99 and DDR4 respectively. But all this comes from one place. And it’s a bit of a dull, grey, air-conditioned place: servers.

Intel doesn’t specifically design chips for PC gamers, with our discrete graphics cards and desktop PCs. They haven’t done for years. What they do is develop spankingly good, powerful, efficient mobile processors and ludicrously big, multi-core server chips.

Intel's desktop division then repurposes those mobile chips for use in our desktop Z97 motherboards and does the same for the server parts with the Extreme Edition CPUs, of which this Core i7 5960X is one. That’s why the new processors come with support for DDR4 memory.

The new memory doesn't do anything particularly fancy on the desktop, but in server-land it cuts down power demands and boosts efficiency. DDR4 doesn't need as much juice and you don't need as many modules to operate at the same capacities. Great in servers, not so exciting on the desktop.

Verdict

"A super-powerful, octo-core CPU, but has little to really offer in gaming performance."

Full Article: http://www.pcgamer.com/review/intel-core-i7-5960x-review/
 

mkenyon

Banned
I wanted to see how the mess that is watch dogs would do.

I'm still good with my 2500k. I'm getting some crazy mileage out of this CPU.
Yeah, games are still really all about IPC. As long as you have a SB or newer at 4.5Ghz, there's going to be an imperceptible difference in performance between any Intel proc in any game.
 

2San

Member
Yeah, games are still really all about IPC. As long as you have a SB or newer at 4.5Ghz, there's going to be an imperceptible difference in performance between any Intel proc in any game.
Wondering when we will finally see a game that properly takes advantage of 6 or more multithreaded cores.
 

mkenyon

Banned
Wondering when we will finally see a game that properly takes advantage of 6 or more multithreaded cores.
There are some already, Civ V is n-threaded.

The rule of thumb right now is basically that the more threads a game uses, the less reliant it is on CPU performance in terms of affecting average frame rate (assuming a modern quad core is in use). Even still, games that are well known to be multi-threaded, like Crysis 3, are still all about that IPC. The Intel Pentium Anniversary at 4.8GHz keeps up with the 2500K and 8350.
 
What do you mean by this?

Because, the benches I posted above, especially for Watch Dogs and Batman, are both CPU limited but for different reasons. Batman is on UE3 and sings with two speedy as fuck cores.

When you turn off 4xmsaa and hbao+ and tressfx and whatever other fps-destroying settings that are 100 percent gpu-dependent and that no one with a 120hz monitor is going to use if it prevents them from getting the 120fps they desire.

So you turn anything gpu-related (resolution, aa, ao, physx, tressfx, etc.) off or to low so it doesn't interfere with the results of the benchmark, and you can see what fps the cpu can achieve in that game or engine.
People can decide what gpu and settings they use for themselves; they don't need a benchmark to guess for them.
Here's what's relevant to performance (i.e. to how well and smoothly a game will play):
-minimum framerates caused by CPU
-average fps enabled by CPU
-minimum framerates caused by gpu
-frametime variance caused by gpu
-average fps enabled by gpu

This is the information you need to decide if there are any bottlenecks in your system today, whether there will be any bottlenecks in future games or when you replace either the cpu or gpu, and whether you are buying a cpu that is overkill for your gpu or vice versa.

the first 2 are represented in a good cpu benchmark, the last 3 in a good gpu benchmark (pcper for example for gpu benchmarks)
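For anyone unsure what those metrics actually are, here's a minimal sketch of how they fall out of a per-frame log (the kind of data techreport/pcper capture). The frame times below are hypothetical example values, not from any real run.

```python
# Minimal sketch: deriving the metrics listed above from a frame-time log.
# The frame_times_ms values are hypothetical example data.

import statistics

frame_times_ms = [12.1, 13.4, 12.8, 35.6, 13.0, 12.5, 14.2, 28.9, 12.7, 13.1]

avg_fps = 1000.0 / statistics.mean(frame_times_ms)   # average fps
min_fps = 1000.0 / max(frame_times_ms)               # minimum fps (worst frame)
spread  = statistics.pstdev(frame_times_ms)          # frame-time variance proxy (ms)

print(f"avg {avg_fps:.0f} fps, min {min_fps:.0f} fps, frame-time stdev {spread:.1f} ms")

# Run this once on a CPU-limited pass and once on a GPU-limited pass of the
# same game and you have all five numbers from the list above.
```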

If someone wants to know how their gpu performs at max settings in a game, they can look at a gpu benchmark for it. It's irrelevant to the cpu benchmark, so it doesn't matter if you test the game at 800*600.

These stupid gpu-limited benchmarks for cpus (as well as product-overview sites like anandtech rarely bothering to test actual cpu-limited games like guild wars 2, arma, sr4, planetside 2, ns2, etc.) are why people ended up making the horrible mistake of buying an amd fx8350 for gaming...
Unless you are alien-spy mkenyon, you surely will agree with this last point (fx8350 mis-buys due to shitty, misleading reviews that leave information out).

obfuscating the results is the exact opposite of what a benchmark is intended to do or what a review is supposed to accomplish.


Also, on a totally different topic: anyone notice how minimum fps in bioshock infinite is STILL 20 fps even on a haswell-E and sli gtx770? That game was such a disgrace.
Just look at any frametime percentile graph on pcper to see how that game should never have passed QA.
I don't get why any review site dignifies the existence of that monstrosity.
 
Adjusted for the clock speed, half the performance (so 8 in total come close to 4) isn't a crazy thought.

AMD A4-5000 (1.5GHz Jaguar x 4) 1323 in 7Zip single threaded
Intel Core i5-3317U (1.7GHz IVB x 2) 2816

And 1.5 vs 2.39 in Cinebench, which, if you double Jaguar, would actually put it OVER Intel. So calling it half the performance is a fair estimate.

So I'm pretty close (even slightly over if adjusted for clocks?), nothing to laugh at.

http://www.anandtech.com/show/6974/amd-kabini-review/3



Yep

I think the i5 3317U is a Dual Core CPU, not quad.
 
Damn, Intel has been on a roll for many years. The "i" series is awesome. Imagine the PS4 or XBO with one of these. Even slowed down. The console would be a monster. And cost $1000, lol

The CPU in the OP is more than half the size of the XO APU (which houses the GPU and the CPU) plus the eSRAM; couple that with the much higher clock speed and you have a monster. It would probably fry the XO power adapter and melt the plastic, lol. These types of hardware are just impossible to integrate into consoles, regardless of the costs. Even a much smaller quad-core processor from Intel would cause problems, from very high power consumption to heat and the size of the hardware. This isn't 2005 anymore, where top-end hardware didn't get so hot; the X360 suffered high failure rates during its first couple of years, and that's with 2004 technology. People need to understand that costs weren't the only concern when they developed the current gen of consoles.
 

mkenyon

Banned
When you turn off 4xmsaa and hbao+ and tressfx and whatever other fps-destroying settings that are 100 percent gpu-dependent and that no one with a 120hz monitor is going to use if it prevents them from getting the 120fps they desire.

So you turn anything gpu-related (resolution, aa, ao, physx, tressfx, etc.) off or to low so it doesn't interfere with the results of the benchmark, and you can see what fps the cpu can achieve in that game or engine.
People can decide what gpu and settings they use for themselves; they don't need a benchmark to guess for them.
Here's what's relevant to performance:
-minimum framerates caused by CPU
-average fps enabled by CPU
-minimum framerates caused by gpu
-frametime variance caused by gpu
-average fps enabled by gpu

This is the information you need to decide if there are any bottlenecks in your system today, whether there will be any bottlenecks in future games or when you replace either the cpu or gpu, and whether you are buying a cpu that is overkill for your gpu or vice versa.

the first 2 are represented in a good cpu benchmark, the last 3 in a good gpu benchmark (pcper for example for gpu benchmarks)

If someone wants to know how their gpu performs at max settings in a game, they can look at a gpu benchmark for it. It's irrelevant to the cpu benchmark, so it doesn't matter if you test the game at 800*600.

These stupid gpu-limited benchmarks for cpus (as well as product-overview sites like anandtech rarely bothering to test actual cpu-limited games like guild wars 2, arma, sr4, planetside 2, ns2, etc.) are why people ended up making the horrible mistake of buying an amd fx8350 for gaming...
Unless you are alien-spy mkenyon, you surely will agree with this last point (fx8350 mis-buys due to shitty, misleading reviews that leave information out).

obfuscating the results is the exact opposite of what a benchmark is intended to do or what a review is supposed to accomplish.


Also, on a totally different topic: anyone notice how minimum fps in bioshock infinite is STILL 20 fps even on a haswell-E and sli gtx770? That game was such a disgrace.
Just look at any frametime percentile graph on pcper to see how that game should never have passed QA.
I don't get why any review site dignifies the existence of that monstrosity.
This is what I'm saying though, CPU and GPU bottlenecks can (and do) happen at the same time on both parts for different reasons.

Playing with all those high settings is certainly going to show you an average frame rate that is being heavily limited by the GPU. However, by looking at frame time analysis, you will also get a perfectly good picture of what is going on in terms of limiting CPU performance.

The "Frames Above" chart for Batman perfectly illustrates this with the 5960X.

It doesn't take a genius to extrapolate that data to get an idea of how it will affect performance on various setups. Not only that, but it gives you a much clearer picture of how decent hardware in the rest of the system will affect things with this one part switch.

Extrapolating from the data of a standard benchmark done with a single 770 at max settings is far more realistic for determining performance on a 780 with lowered settings than extrapolating from a test with the settings turned way, way down at a low resolution.
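If it helps, this is the style of calculation behind a "Frames Above" / "time spent beyond X ms" chart; the frame-time samples below are hypothetical, just to show the mechanics.

```python
# Sketch of the "time spent beyond X ms" metric used in frame-time analysis
# (TechReport-style). Frame times below are hypothetical examples.

frame_times_ms = [14.0, 15.2, 16.1, 22.4, 15.8, 31.0, 14.9, 16.3, 25.5, 15.0]
threshold_ms = 16.7   # one 60 Hz refresh interval

frames_beyond = sum(1 for t in frame_times_ms if t > threshold_ms)
time_beyond   = sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

print(f"{frames_beyond} frames over {threshold_ms} ms, "
      f"{time_beyond:.1f} ms spent beyond the threshold")

# A CPU that stalls the pipeline shows up here as extra time beyond the
# threshold even when the average fps is GPU-limited.
```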
 
That shitty netbook cpu (the i5 3317U) does not have the same ipc as desktop haswell, I believe.
Also, the full amd desktop fx cores (not the netbook jaguar ones) are less than half as fast as a haswell i5 desktop core at the same clockspeed; they are still slower (still less than half as fast) even at 5ghz compared to a stock i5 4670k.

You are massively overestimating the amd cpus.

edit: I was right:
Passmark cpu benchmark score:
i5 3317u netbook cpu, 4 cores @ 1.7 GHz: 3,100
Intel Core i7-4770K desktop cpu, 4 cores @ 3.5 GHz: 10,280

Core for core, clock for clock, the desktop chip is 1.6 times faster than the notebook i5.
So even if the 8 amd jaguar cores were clocked equally to a desktop haswell quad core, the haswell quad core would still be 3.2x faster (not 2x).
But with jaguar at 1.7 ghz, a quad core haswell cpu at stock clocks is about 6.5x faster; once you overclock it, that's another 30 percent, so almost 8.5x faster.
There is a reason why the desktop amd fx 8 core cpu needs to be clocked at 5ghz to match a quad core ivy bridge i5 at 3.5ghz.


Anyhow, now that the misinformation has been corrected, maybe we can discuss the article and the 6-8 core haswell-e instead of having these dumb jaguar comparisons again. It's pointless; they are more than a full generation apart.

edit: silly me, I can just look up jaguar's passmark score to compare them directly, without this crude math that assumes performance scales 100 percent with clocks for either.
AMD A4-5000 APU: 1,905
So a stock desktop haswell quad core is 5.4x faster.

The consoles would have used laptop CPUs anyway, even if they went with Intel as the supplier. The Core i7 in my laptop, even after two years, eats through anything I give it, from games to many other intensive things. Gaming is much more dependent on the GPU.
 
This is what I'm saying though, CPU and GPU bottlenecks can (and do) happen at the same time on both parts for different reasons.

Playing with all those high settings is certainly going to show you an average frame rate that is being heavily limited by the GPU. However, by looking at frame time analysis, you will also get a perfectly good picture of what is going on in terms of limiting CPU performance.

The "Frames Above" chart for Batman perfectly illustrates this with the 5960X.

It doesn't take a genius to extrapolate that data to get an idea of how it will affect performance on various setups. Not only that, but it gives you a much clearer picture of how decent hardware in the rest of the system will affect things with this one part switch.

Extrapolating from the data of a standard benchmark done with a single 770 at max settings is far more realistic for determining performance on a 780 with lowered settings than extrapolating from a test with the settings turned way, way down at a low resolution.

It's still incomplete information (it shows you part of the fps drops caused by the cpu, but does not represent the performance during all of the time where the performance is purely gpu-limited, at whatever arbitrary settings and gpu the reviewers have decided on).

Which brings me to another point: imagine the 2500k reviews benching all games with a hd4870 at max settings. Who in 2013, buying a new or second-hand i5 2500k along with a hd 7970 ghz edition, would give two shits about how it performs in crysis with a 4870? Their gpu is 4x faster; the game running at 35 fps tells them NOTHING about the cpu. They could compare it to an athlon 2 benchmark and it would look just as good an option for their 7970ghz as the i5.

And as for the bolded: really? The majority of people in the new gpu threads etc. on gaf are not able to correctly read or interpret benchmarks or put them into context.
Why make it even harder for them...
(unless you're a product overview site and not a review site)

@ the underlined: again, who cares how a gtx770 performs? You can look up gpu benchmarks for a gtx770 for that info; it's totally worthless in a cpu benchmark.
 