
Leak pegs desktop Broadwell, Skylake for mid-year

pottuvoi

Banned

element

Member
I recently upgraded from a 2500k to a 5930k and have been pretty happy. Certain apps SCREAM, while others are faster but not by as much as you'd expect.

It's also getting to the point where I'm capped by SATA3's 6 Gb/s I/O rate. Time to start looking at M.2 drives.
 

Durante

Member
I recently upgraded from a 2500k to a 5930k and have been pretty happy. Certain apps SCREAM, while others are faster but not by as much as you'd expect.

It's also getting to the point where I'm capped by SATA3's 6 Gb/s I/O rate. Time to start looking at M.2 drives.
Yeah, the next thing I want to get for my PC is a 1TB M.2 drive. I really hope competition in that segment heats up soon, there's no reason why those should be substantially more expensive than SATA SSDs.
 

KKRT00

Member
The problem with upgrading from something like an i5 2500k is that with Pascal GPUs you will have to change your mobo again.
Personally, I still haven't decided if I should upgrade to a 5820k this year or just wait for Pascal and bear with my 2500k.
 

Seanspeed

Banned
CPUs don't generally go down in price, do they? Like, waiting for Skylake and Broadwell-E to come out and then searching for a 5820k isn't gonna get me any noteworthy discount, is it?
 

Human_me

Member
CPUs don't generally go down in price, do they? Like, waiting for Skylake and Broadwell-E to come out and then searching for a 5820k isn't gonna get me any noteworthy discount, is it?

No, not an official price drop.
But retailers will want to get rid of any overstock they have of previous CPUs and will discount them slightly.
Typically I find it's better to just go for the latest and greatest.
 

Zabojnik

Member
I like your logic. I will await the motherboards with Nvidia-exclusive Intel-designed and implemented GPU-only upgrade slots with bated breath.

 

Rafterman

Banned
I need to see real-world performance before I even remotely get excited. CPU advancement, in gaming terms, has been stagnant for so long it's hard to care anymore. Add to that the fact that stock performance means little when my current chip overclocks so well, and that current-gen consoles have shit for CPU power, and I'm not convinced that these new chips will make any difference.
 

Durante

Member
The problem with upgrading from something like an i5 2500k is that with Pascal GPUs you will have to change your mobo again.
Personally, I still haven't decided if I should upgrade to a 5820k this year or just wait for Pascal and bear with my 2500k.
I upgraded to a 5820k last year from a 920 and am really happy with it.

One thing I was concerned about is paying through the nose on DDR4 (getting it right after release), but despite my expectations it hasn't really gone down in price at all yet.

It's only one additional module.
Yeah, I don't think it's that out of the question. Plenty of MBs had NV SLI bridges back when such things were necessary.

Also, given that IBM uses NVLink it probably does have some significant advantages.
 

mrklaw

MrArseFace
Will they allow access to quick sync even if you have discrete graphics connected? That restriction is bloody stupid on current chips.

Also, with the increase yet again in IGP size, how much of the CPU is actually...CPU? Probably less than half now? I'd like a version that replaces the IGP section with four more cores and doesn't charge a fortune for it.
 
Why is everybody hankering for 6-core CPUs? IIRC, they don't offer any substantial advantages compared to quad core CPUs (at least in gaming situations) and a highly clocked and/or overclocked quad is always preferable to 6 cores with lower max. clock rates. Or am I totally out of the loop here?
 

KKRT00

Member
Why is everybody hankering for 6-core CPUs? IIRC, they don't offer any substantial advantages compared to quad core CPUs (at least in gaming situations) and a highly clocked and/or overclocked quad is always preferable to 6 cores with lower max. clock rates. Or am I totally out of the loop here?

Future proofing. 6c/12t should be enough even for the next console generation, at least for its first two years. And current-gen consoles give games 6 cores to work with, so most games should scale to 6 cores by default.
 
Why is everybody hankering for 6-core CPUs? IIRC, they don't offer any substantial advantages compared to quad core CPUs (at least in gaming situations) and a highly clocked and/or overclocked quad is always preferable to 6 cores with lower max. clock rates. Or am I totally out of the loop here?

6 cores are lower clocked only if you run them at stock settings :D

Once overclocked, you get maybe a 100-200 MHz difference in speeds for same-generation chips.
 

Durante

Member
Also, with the increase yet again in IGP size, how much of the CPU is actually...CPU?
100% of my CPU is CPU :p

In all seriousness, I agree with you, but this is how Intel sells their enthusiast-grade chips so I don't see that changing soon.

To be fair, I think the pricing on the 5820k isn't too bad.
 
It's only one additional module.

No it isn't. It's a different form factor. Different motherboards, different coolers, different cases even? Possibly a different protocol entirely, so Intel would have to bake support for it into their chips. You honestly can't expect the entire industry to roll over just because muscle man held up a mock-up and some rubes got hype.

It was very obviously designed specifically for HPC where Nvidia and IBM have end-to-end control over the entire design.

Seriously, I've seen this notion elsewhere lately and all I can think is that I must have taken a wrong turn somewhere and ended up in an alternate universe where Nvidia said "we're replacing PCI-E" and people took them seriously. WTF?
 

kharma45

Member
No it isn't. It's a different form factor. Different motherboards, different coolers, different cases even? Possibly a different protocol entirely, so Intel would have to bake support for it into their chips. You honestly can't expect the entire industry to roll over just because muscle man held up a mock-up and some rubes got hype.

It was very obviously designed specifically for HPC where Nvidia and IBM have end-to-end control over the entire design.

Seriously, I've seen this notion elsewhere lately and all I can think is that I must have taken a wrong turn somewhere and ended up in an alternate universe where Nvidia said "we're replacing PCI-E" and people took them seriously. WTF?

I don't see it happening either. I'd be extremely surprised and will have large helpings of crow if Pascal isn't PCI-E.
 
Future proofing. 6c/12t should be enough even for the next console generation, at least for its first two years. And current-gen consoles give games 6 cores to work with, so most games should scale to 6 cores by default.

Yes, but future proofing for how long? Considering how long it took for quad cores to show a significant advantage vs. dual cores in actual games, I just don't know whether it makes sense at this point to invest in six cores instead of just buying a cheaper quad and overclocking the hell out of it. I'm also not sure about comparisons with current console hardware. Sure, their CPUs have eight cores, but they're crappy cores that are in no way comparable to a modern Haswell / Broadwell / Skylake core (they also run at lowly 1.6Ghz). And considering that it'll be several years until we're going to see a PS5 and/or Xbox Two, I don't think you can reliably do future proofing with regards to a hypothetical next console generation.

Anyway, I'm probably going to replace my Ivy Bridge i5 with a quad core Skylake towards the end of the year and do some moderate overclocking again. Hope the latter is as easy as it has been with previous i5/i7 generations.
 

KungFucius

King Snowflake
For real... it feels like it's been so long since I built my last PC. Kind of a dumb complaint to have, I know, but I'd love a huge CPU power bump.

It isn't a dumb complaint at all though. There hasn't been enough CPU innovation in recent years. CPUs are just a commodity, and the only enthusiast products are ridiculously overpriced. The market is dead and there is no real competition, or need for it. The same is true for GPUs. The plus side is that you don't need to upgrade; the downside is that when a part fails, or you simply want to upgrade, you don't get much bang for your dollar.
 

tuxfool

Banned
The problem with upgrading from something like an i5 2500k is that with Pascal GPUs you will have to change your mobo again.
Personally, I still haven't decided if I should upgrade to a 5820k this year or just wait for Pascal and bear with my 2500k.

Do you honestly think NVLink is going to come to consumer graphics cards? All the documentation so far has shown Nvidia is first targeting HPC clients. You will therefore see it in their Tesla cards first.

AFAIK the only working configuration with direct CPU interfaces is PPC. I should also point out that the physical interface is quite a bit different than anything used in commercial motherboards, so there are questions regarding form factor.

Also what is the point? No GPU currently can saturate a 16x PCI-E 3.0 bus.

What is it with magical thinking and Nvidia?
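A rough sanity check of that saturation point: PCI-E 3.0 runs 8 GT/s per lane with 128b/130b line coding, so an x16 link tops out just under 16 GB/s in each direction. A back-of-the-envelope sketch (spec figures, nothing measured):

```python
# Effective one-way bandwidth of a PCI-E 3.0 x16 link.
# 8 GT/s per lane, 1 bit per transfer, 128b/130b line coding.
GT_PER_SEC = 8.0        # giga-transfers per second, per lane
LANES = 16
CODING = 128 / 130      # 128b/130b encoding efficiency

bandwidth_gb_s = GT_PER_SEC * LANES * CODING / 8  # divide by 8: bits -> bytes
print(bandwidth_gb_s)   # ~15.75 GB/s each direction
```

So a card would need to move ~15.75 GB/s sustained over the bus before the link itself became the bottleneck, which no consumer GPU of the era was close to doing.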
 

cyen

Member
I really doubt something proprietary can become the next "PCI-E". Adoption of a different expansion slot takes ages (just look at how long it took PCI-E to overcome AGP), and combined with the fact that AMD will stick to the PCI-E successor (4.0), consumer NVLink will be dead on arrival IMO. I don't believe NVLink will ever see the light of day in the consumer market.
 

dr_rus

Member
Running a 3-year-old i7-3820 here, and judging from the game benchmarks of the latest CPUs, I'll probably be running it for another 3 years. I'll likely upgrade no earlier than when a CPU with twice the IPC and twice the cores is available. Not worth upgrading otherwise.
 

SliChillax

Member
I upgraded to a 5820k last year from a 920 and am really happy with it.

One thing I was concerned about is paying through the nose on DDR4 (getting it right after release), but despite my expectations it hasn't really gone down in price at all yet.

Probably because the only motherboards that support DDR4 are for "prosumers". Once we get cheaper consumer mobos with consumer Intel chips, DDR4 will drop in price.
 
Running a 3-year-old i7-3820 here, and judging from the game benchmarks of the latest CPUs, I'll probably be running it for another 3 years. I'll likely upgrade no earlier than when a CPU with twice the IPC and twice the cores is available. Not worth upgrading otherwise.

Similar boat to the one I'm in.
I'm on an i7 930 @ 4.2 GHz, and I'm only CPU limited in one game I play semi-regularly. I think I'll only part with this CPU when NVLink comes out. I have a surprisingly strong fortitude to resist upgrading. DX12 just makes upgrading your CPU even less necessary in the short term.
 

KKRT00

Member
Yes, but future proofing for how long? Considering how long it took for quad cores to show a significant advantage vs. dual cores in actual games, I just don't know whether it makes sense at this point to invest in six cores instead of just buying a cheaper quad and overclocking the hell out of it. I'm also not sure about comparisons with current console hardware. Sure, their CPUs have eight cores, but they're crappy cores that are in no way comparable to a modern Haswell / Broadwell / Skylake core (they also run at lowly 1.6Ghz). And considering that it'll be several years until we're going to see a PS5 and/or Xbox Two, I don't think you can reliably do future proofing with regards to a hypothetical next console generation.

Anyway, I'm probably going to replace my Ivy Bridge i5 with a quad core Skylake towards the end of the year and do some moderate overclocking again. Hope the latter is as easy as it has been with previous i5/i7 generations.
5-6 years. There will be games that use more cores, like the next Arma, next-gen MMOs or Star Citizen.
Generally, most Frostbite, CryEngine and Unreal games will utilize more than 4 cores natively.
And then we have VR.

-------------------------

Do you honestly think NVLink is going to come to consumer graphics cards? All the documentation so far has shown Nvidia is first targeting HPC clients. You will therefore see it in their Tesla cards first.

AFAIK the only working configuration with direct CPU interfaces is PPC. I should also point out that the physical interface is quite a bit different than anything used in commercial motherboards, so there are questions regarding form factor.

Also what is the point? No GPU currently can saturate a 16x PCI-E 3.0 bus.

What is it with magical thinking and Nvidia?

Yes, I think it will be in consumer-grade GPUs.

And what for? Unified Memory.
 

BasicMath

Member
The CPU arithmetic score is actually quite good, nearly matching the i7 4810MQ, which has a base clock of 2.8 GHz, suggesting that Intel will likely introduce worthwhile IPC improvements of roughly 20% with Skylake over Haswell.
So that's more or less in line with what we've gotten, right? The only reason it seems like more than usual is because Broadwell was delayed/skipped.

Oh and Broadwell-E for Q1 2016 is a bit later than I expected (Q4 2015).
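For what it's worth, a per-clock comparison like the one implied above just normalizes each benchmark score by the clock it was obtained at. A sketch with hypothetical placeholder numbers, not the leak's actual figures:

```python
# Estimate relative IPC from two benchmark scores by normalizing per clock.
# Scores and clocks below are hypothetical placeholders.
def ipc_gain(score_new, clock_new_ghz, score_old, clock_old_ghz):
    """Fractional per-clock improvement of the new chip over the old one."""
    return (score_new / clock_new_ghz) / (score_old / clock_old_ghz) - 1

# Equal scores, but obtained at 2.3 GHz vs. the 4810MQ's 2.8 GHz base,
# would imply roughly 22% more work done per clock.
gain = ipc_gain(100, 2.3, 100, 2.8)
print(f"{gain:.1%}")
```

The big caveat is that turbo behavior makes "the clock it was obtained at" hard to pin down for an engineering sample, which is why leak-based IPC estimates are so noisy.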
 

Scandal

Banned
So excited for Skylake! I'm sitting on a Core i7 930, so needless to say, it's time.

Five years since my last upgrade, and it's going to be huge! Can't wait. My goal is 1440p gaming at 120 FPS or 4K at 30-60 FPS. It's likely to be 1440p because I want a 144 Hz screen.
 

BasicMath

Member
That's what I hope for.

Except I haven't oc'ed my 2500k yet.
It still isn't going to be worth it for performance. I mean, let's say we've had a 10% increase on average with every release. Simply stacking them up would give you a 40% (Ivy, Haswell, Broadwell, Skylake) difference at stock. An OC will shorten the gap significantly. And yes, I know this is a very simple example.

That said, I still would upgrade on Skylake/Cannonlake at the latest since that's likely the last time you'll be able to reuse RAM. There's more to it than just raw performance (Heat, Power Consumption, New Instruction sets, Chipset/Motherboard innovations etc).
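As an aside, per-generation gains compound rather than add, so the simple stacking above slightly undersells the total. A one-liner, assuming the illustrative 10% per generation from the post:

```python
# Compounding an assumed 10% IPC gain across four generations
# (Sandy Bridge -> Ivy -> Haswell -> Broadwell -> Skylake).
gain_per_gen = 1.10
generations = 4

compounded = gain_per_gen ** generations
print(f"{compounded - 1:.1%}")  # ~46% faster at stock, not 40%
```

Either way, the conclusion stands: a well-overclocked 2500k closes much of that gap.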
 
That should be illegal. Grab a decent cooler if you don't have one already and let it stretch its legs.

I've tried adding RAM (another 8 GB) to my mobo, and the system wouldn't POST with the new RAM (tried two pairs of new sticks, same problem).

Same make, same speed; it won't budge (well, it will POST, but only 25% of the time with the new RAM in).

So I don't want to invest more in this CPU & mobo.

Starting from scratch.
 
Performance is at a standstill, happy now? You know very well what I meant.
Any transistor-count gains are wasted on shitty integrated GPU performance in a high-end-bracket desktop CPU that most people are going to use a dedicated GPU with anyhow...

Also, the moment AMD could not compete anymore, Intel simply shrunk their die size when they shrunk the process node (effectively selling lower-end parts as the high end).
We've gone from 270-300 mm² dies to 150-170 mm² ones... what's the point of smaller transistors if it doesn't result in getting more of them, NOR in getting the same amount of them for less money?

This is a gaming forum, and we are getting fucked hard by Intel now that they have a monopoly in the high end.
Moore's law is meaningless to the consumer if it doesn't equate to performance per dollar.

20 percent more IPC on Skylake? Count on a 20 percent price hike to go with it.

Skylake was the last hope for the first decent CPU jump in 4 years.

How many games are CPU bound when using the current i7's?
 
x86 hasn't really stagnated. The E5-2699s we have at work are amazingly fast :p

Yeah, and for $4K+ each I would hope so. I was absolutely dumbfounded by the HPC lab on my campus. We recently got in a few systems, each with E5-2699s and 128-256 GB of DDR4, for the graduate students and research faculty.

Talk about another world of performance.
 

tuxfool

Banned
Yes, I think it will be in consumer-grade GPUs.

And what for? Unified Memory.

Sure. Eventually. I'm not convinced it will be there for Pascal. There are other factors at play here.

1) The physical interface requires a totally different topology from conventional motherboards.

2) There is no support from x86 CPUs. I seriously doubt Intel/AMD are going to cough up the licensing fees to support a *very* proprietary interface (unless Nvidia opens up the technology).

3) The bandwidth offered by NVLink is probably going to go to waste on most GPUs.

There are arguments for it, such as the fact that the SLI header is very out of date, and, as you said, unified memory.
 

LordOfChaos

Member
Will they allow access to quick sync even if you have discrete graphics connected? That restriction is bloody stupid on current chips.

Also, with the increase yet again in IGP size, how much of the CPU is actually...CPU? Probably less than half now? I'd like a version that replaces the IGP section with four more cores and doesn't charge a fortune for it.

Depends on how many cores and which graphics configuration, but yeah, some of them have well over half the die gone to the GPU, especially when pairing GT3 with a dual core.

[Images: Haswell die shots showing the CPU-core vs. integrated-GPU split]
 

mjontrix

Member
Why is everybody hankering for 6-core CPUs? IIRC, they don't offer any substantial advantages compared to quad core CPUs (at least in gaming situations) and a highly clocked and/or overclocked quad is always preferable to 6 cores with lower max. clock rates. Or am I totally out of the loop here?

Makes encoding faster. Lets you run more things at once. Closer to 8 cores.

It still isn't going to be worth it for performance. I mean, let's say we've had a 10% increase on average with every release. Simply stacking them up would give you a 40% (Ivy, Haswell, Broadwell, Skylake) difference at stock. An OC will shorten the gap significantly. And yes, I know this is a very simple example.

That said, I still would upgrade on Skylake/Cannonlake at the latest since that's likely the last time you'll be able to reuse RAM. There's more to it than just raw performance (Heat, Power Consumption, New Instruction sets, Chipset/Motherboard innovations etc).

Why reuse RAM? Just go DDR4 all the way!

My i7-930 has served me well. Heck, it might be enough to survive until Cannonlake/Zen - whichever's faster. Dolphin has gotten enough optimizations to keep me satisfied for now - and KH3 might end up on PC by then as well xD
 

LordOfChaos

Member
Why reuse RAM? Just go DDR4 all the way!

Funny enough, that will likely benefit APUs (and Intel integrated graphics) more than it will ever affect processor performance. I don't think current CPUs are anywhere near bandwidth starved excluding the integrated GPU, and they're damned good these days at prefetching and filling caches to hide any flaws in memory performance.
 

pottuvoi

Banned
Funny enough, that will likely benefit APUs (and Intel integrated graphics) more than it will ever affect processor performance. I don't think current CPUs are anywhere near bandwidth starved excluding the integrated GPU, and they're damned good these days at prefetching and filling caches to hide any flaws in memory performance.
Only when thinking about total bandwidth.
Memory latency is one of the bigger problems for performance in modern CPUs.
https://www.youtube.com/watch?v=fHNmRkzxHWs

Really hope we see a reasonably priced Skylake for workstations with a high-performance L4 cache.
 

LordOfChaos

Member
Only when thinking about total bandwidth.
Memory latency is one of the bigger problems for performance in modern CPUs.
https://www.youtube.com/watch?v=fHNmRkzxHWs

Really hope we see a reasonably priced Skylake for workstations with a high-performance L4 cache.

Yeah, and at the same clock speed, DDR4 is higher latency than DDR3. The higher clock speeds it's capable of will mitigate this, but latency is pretty much going to be stagnant (and worse for the first few speed bins).
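The arithmetic behind that: absolute CAS latency is the CL cycle count divided by the memory clock, and the memory clock is half the DDR transfer rate. A quick comparison using typical launch-era speed bins (illustrative, not exhaustive):

```python
# Absolute CAS latency in nanoseconds: CL cycles / memory clock,
# where the memory clock (MHz) is half the DDR transfer rate (MT/s).
def cas_ns(data_rate_mts, cl):
    return cl * 2000.0 / data_rate_mts  # 2000 = 2 (DDR) * 1000 (MHz -> ns)

ddr3 = cas_ns(1600, 9)    # common DDR3-1600 CL9
ddr4 = cas_ns(2133, 15)   # early DDR4-2133 CL15
print(ddr3, ddr4)         # 11.25 ns vs ~14.06 ns: early DDR4 is slower
```

Only once DDR4 hit higher bins at comparable CL did its absolute latency catch back up to mature DDR3.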
 

pixlexic

Banned
I think more attention needs to be placed on logic boards, not CPUs or GPUs.

We need faster bus speeds; we should be able to have a GPU bus so fast that you could use multiple cards as a single memory pool, or at least feed the GPU from system RAM a lot more efficiently.
 

LordOfChaos

Member
I think more attention needs to be placed on logic boards, not CPUs or GPUs.

We need faster bus speeds; we should be able to have a GPU bus so fast that you could use multiple cards as a single memory pool, or at least feed the GPU from system RAM a lot more efficiently.

Which is why I have a keen eye on NVLink in the consumer space, but thus far it's just supercomputer material. 5 to 12 times faster than PCI-E over the same sort of bus.
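Those multiples line up with Nvidia's announced figures, roughly 80-200 GB/s aggregate for NVLink versus ~15.75 GB/s for a PCI-E 3.0 x16 link. A quick check (announced numbers, subject to change before launch):

```python
# Compare Nvidia's announced NVLink aggregate bandwidth range against
# the effective one-way bandwidth of a PCI-E 3.0 x16 link.
PCIE3_X16 = 8.0 * 16 * (128 / 130) / 8   # ~15.75 GB/s (8 GT/s, 128b/130b)
NVLINK_LOW, NVLINK_HIGH = 80.0, 200.0    # GB/s, per Nvidia's announcements

print(NVLINK_LOW / PCIE3_X16, NVLINK_HIGH / PCIE3_X16)  # ~5.1x to ~12.7x
```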
 