Intel: future CPUs to be slower but more efficient


I know it's technically a silly thought, since the extra 4 threads are just virtualized; that's also why I was a bit confused when people stated i7s as truly superior to i5s, but I see now what I've been missing. I wasn't aware that Haswell-E 5xx0 i7s actually come with 6 physical cores and thought Intel only ever did 4 cores for their "normal" CPU lines. The more you know o.o

EDIT: Whoops, seemingly all -E lines have 6 cores. Wow, I've been living under a rock as far as that goes o.o
 
With CPUs becoming more efficient instead of ramping up the clocks, it makes you wonder if Intel will ever become so focused on efficiency that they stop allowing overclocking altogether.
 
I know it's technically a silly thought, since the extra 4 threads are just virtualized; that's also why I was a bit confused when people stated i7s as truly superior to i5s, but I see now what I've been missing. I wasn't aware that Haswell-E 5xx0 i7s actually come with 6 physical cores and thought Intel only ever did 4 cores for their "normal" CPU lines. The more you know o.o

EDIT: Whoops, seemingly all -E lines have 6 cores. Wow, I've been living under a rock as far as that goes o.o

Not all have 6. The i7 5960X has 8 cores/16 threads.
 
I'm not sure, because I don't really follow PC hardware, but I assume this will give AMD some breathing room, knowing that Intel won't be making big performance gains.

Not really; low-power devices such as tablets are an equally important market for AMD.
 
Still running my i7 920, overclocked to 3.6GHz, and it's running everything just fine.

Just upgraded the one at work with a 5820 and the difference is not *that* huge, tbh.
I mean, renders are faster of course, but not dramatically so. Modern mobo features and the M.2 were worth it though.
Also, 32GB DDR4.
And the GTX970.
 
The GPU area already eats quite a big bit of this silicon. Soon it will be larger than what is dedicated to the CPU.

[Image: annotated A9X die shot from Chipworks]


A9X die shot. CPU? What CPU? Oh it's that tiny slice of silicon in the corner. It's like a massive GPU with a CPU as an afterthought.
 
It's been lousy since the Ivy Bridge launch unless you either forked out for the enthusiast chips or bought a Devil's Canyon chip, for which the better TIM was a major selling point. A Skylake refresh like Devil's Canyon might be welcome, considering I expect the next generation to be delayed, but it's not as essential as it was with Haswell, since Intel moved the voltage regulation out of the CPU package again.

This is my impression of what Kaby Lake is. Maybe not identical, but basically the "OK, we've finally got the Skylake architecture nailed down" release, with some features like HDMI 2.0a, 10-bit H.265 video acceleration, etc., that almost seemed to be held back from Skylake for product-positioning purposes.

And also of course the new GPU architecture, which I'm excited for.
 
This is my impression of what Kaby Lake is. Maybe not identical, but basically the "OK, we've finally got the Skylake architecture nailed down" release, with some features like HDMI 2.0a, 10-bit H.265 video acceleration, etc., that almost seemed to be held back from Skylake for product-positioning purposes.

And also of course the new GPU architecture, which I'm excited for.
We will just have to wait and see. I'm quite disappointed with how Skylake has been rolled out, though I don't really care about some of these omissions, since I'm a desktop user and don't use Intel's H.265 acceleration or their built-in GPU.

Actually bought a 6700K a few hours ago. Heavily considered the 5820K (or 5930K, due to the extra PCI-E lanes), but I value the increase in IPC more than the extra cores in my application, and also wanted some of the later features. Won't be tempted to build another PC unless Skylake-E blows my socks off or something.
 
My 2500K at 4.6GHz will never have to be replaced, will it?

4.6GHz? You really hit the jackpot; mine becomes unstable with anything higher than 4.3GHz.

I'm going to rock my 2500K till the end of time, it seems. I'll frame that thing when I upgrade in a few years lol.
 
We will just have to wait and see. I'm quite disappointed with how Skylake has been rolled out, though I don't really care about some of these omissions, since I'm a desktop user and don't use Intel's H.265 acceleration or their built-in GPU.

Actually bought a 6700K a few hours ago. Heavily considered the 5820K (or 5930K, due to the extra PCI-E lanes), but I value the increase in IPC more than the extra cores in my application, and also wanted some of the later features. Won't be tempted to build another PC unless Skylake-E blows my socks off or something.

Yeah, my primary PCs are notebooks (MacBooks, specifically), so I don't want to buy a new notebook until they support fully hardware accelerated UHD playback for Netflix, etc. I have no desire to go back to the pre-hardware accelerated video battery life.
 
Yeah, my primary PCs are notebooks (MacBooks, specifically), so I don't want to buy a new notebook until they support fully hardware accelerated UHD playback for Netflix, etc. I have no desire to go back to the pre-hardware accelerated video battery life.

Didn't think Netflix even supported Ultra HD on computers, only TVs or streaming boxes? Hopefully they change that soon.
 
4.6GHz? You really hit the jackpot; mine becomes unstable with anything higher than 4.3GHz.

I'm going to rock my 2500K till the end of time, it seems. I'll frame that thing when I upgrade in a few years lol.
Yeah, I really wish I'd kept my i5 2500K. It ran at 4.8 with no issues on air and easily pushed to 5.0; I never changed anything but the multiplier on it...
 
Yeah, I really wish I'd kept my i5 2500K. It ran at 4.8 with no issues on air and easily pushed to 5.0; I never changed anything but the multiplier on it...
Damn. I'm assuming aftermarket cooling at least though, right? On stock cooling I doubt it'd be easy to go much beyond 4.0.
 
I'm not really sure this says anything about the curve on higher end CPUs going forward or gaming requirements of CPUs.

If speed means clockspeed, this isn't really a new trend.

Greater power efficiency within one core will only open the door for more cores. Speed and power don't have a simple relationship.

His commentary also seems to be at least half about where increasing demand will come from (the lower-power markets) rather than about technical limits.
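A back-of-the-envelope illustration of why efficiency within one core opens the door for more cores (assuming the textbook approximation that dynamic power scales with frequency times voltage squared, and that voltage scales roughly with frequency; the numbers are purely illustrative, not a real chip model):

```python
# Rough model: dynamic power ~ C * V^2 * f, and V scales roughly with f,
# so power grows roughly with f^3. Modest clock cuts therefore free up a
# disproportionate amount of power budget for extra cores.

def relative_power(freq_scale):
    """Per-core power relative to baseline when clock (and voltage) scale by freq_scale."""
    return freq_scale ** 3

per_core = relative_power(0.5)    # halve the clock: ~1/8 the power per core
cores_in_budget = 1 / per_core    # ~8 such cores fit in the old power budget
print(per_core, cores_in_budget)  # 0.125 8.0
```

That cubic relationship is exactly why "speed and power don't have a simple relationship": trading clock speed for cores can raise total throughput inside the same power envelope.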
 
Honestly, I just hope AMD starts catching up and cleaning house in terms of pricing. I'm so sick of this Intel world where we buy new motherboards for the tiniest upgrades.

AMD had it right all along with long-term support for CPUs. If AMD can get their things together, I imagine a new architecture with HBM as standard RAM and a CPU that performs 90% as well as the competing Intel processor at a far deeper discount, with plenty of upgrade paths that don't shoehorn you into one particular motherboard.

Or am I crazy?
 
Honestly, I just hope AMD starts catching up and cleaning house in terms of pricing. I'm so sick of this Intel world where we buy new motherboards for the tiniest upgrades.

AMD had it right all along with long-term support for CPUs. If AMD can get their things together, I imagine a new architecture with HBM as standard RAM and a CPU that performs 90% as well as the competing Intel processor at a far deeper discount, with plenty of upgrade paths that don't shoehorn you into one particular motherboard.

Or am I crazy?

You are optimistic. That's as far as I'm willing to go.
 
May I present for your perusal, AMD's new cooler design:



Just blowing hot air over your VRMs making them hit 101C at IDLE. Great work there, guys.

Following AMD's forward-thinking designs, this is most likely their first attempt at a perpetual motion machine. Why dump valuable energy outside the case when you can transfer it back into the components themselves!
 
To be fair, most coolers (air & water) have problems cooling the VRM area. I think the Wraith will do a better job because it is a downdraft cooler.

Here the Wraith does much better, with even better cooling than the famous Thermalright one.
http://www.tomshardware.com/reviews/amd-wraith-cpu-cooler,4450-2.html

The problem I have with AMD coolers is the use of metal clips onto the two-point plastic bit... hard to install, fearful of ripping up the plastic, and hard to remove. Thankfully, my last AMD was back in Phenom II times.
 
But it says "under load" and both coolers did the same, even the Hyper D92; so either both of them are badly designed or the VRMs just get really hot under load.

I saw "without it under load" and came to a different conclusion. But upon reflection yours seems far more correct.
 
Damn. I'm assuming aftermarket cooling at least though, right? On stock cooling I doubt it'd be easy to go much beyond 4.0.
Yeah, aftermarket air 😊 Always wished I'd tried water on it. With my i7 4770K on water it doesn't like 4.2 😕
 
Intel calls end to Moore's Law

Not just that Moore’s Law is coming to an end in practical terms, in that chip speeds can be expected to stall, but is actually likely to roll back in terms of performance, at least in the early years of semi-quantum-based chip production, with power consumption taking priority over what has been the fundamental impetus behind the development of computers in the last fifty years.

What does this have to do with gaming?
  • Your 2600K or 5820K might last a very long time =/
  • No point waiting for next CPU gen for your gaming rig unless you want future tech like Optane

I feel like that article is pretty bad, in that I'm not sure I agree with his conclusions from the Intel CEO's quotes at all.

What I got from what he was saying was:

- Conventional semiconductor research into performance gains by increasing density of logical fabric is hitting both shrinkage and heat-dissipation walls soon (something 3D XPoint, aka 3D stacking, will likely not solve...)
- Medium-to-long term strategy is to migrate to altogether separate fundamental technologies (quantum tunnel transistors, aka one-small-step-towards-full-blown-quantum-computers, and spintronics)
- Said technologies are maturing, with spintronics likely commercially viable in the next 18 months
- As part of this migration, <clock-speed + transistor-count> will no longer represent the true measure of computational performance metric comparable to current-gen tech
- Some new set of metrics will be established and so naturally Moore's Law will end (since it's specifically dedicated to clock-speeds as the core factor) and Moore's Law 2.0 (or some other more appropriate name) will replace it
- Computational performance will still continue to increase though, with a likely exponential jump somewhere along the technology paradigm shift

I suspect the decrease in clock speeds when moving over to quantum-tunnel or magnetic-tunnel transistors will be more than made up for by higher computational power, and the lower power requirements will eventually translate into higher densities and computational leaps in the mobile computing space.
 
I feel like that article is pretty bad, in that I'm not sure I agree with his conclusions from the Intel CEO's quotes at all.

What I got from what he was saying was:

- Conventional semiconductor research into performance gains by increasing density of logical fabric is hitting both shrinkage and heat-dissipation walls soon (something 3D XPoint, aka 3D stacking, will likely not solve...)
- Medium-to-long term strategy is to migrate to altogether separate fundamental technologies (quantum tunnel transistors, aka one-small-step-towards-full-blown-quantum-computers, and spintronics)
- Said technologies are maturing, with spintronics likely commercially viable in the next 18 months
- As part of this migration, <clock-speed + transistor-count> will no longer represent the true measure of computational performance metric comparable to current-gen tech
Correct. Aside from the performance part: throughput and latency will always remain fundamental metrics for computation, whether that's via transistors and clock speed or something entirely different.

- Some new set of metrics will be established and so naturally Moore's Law will end (since it's specifically dedicated to clock-speeds as the core factor) and Moore's Law 2.0 (or some other more appropriate name) will replace it
Incorrect. Moore's Law has never been about performance but about transistor fabrication capabilities. It's already been slowing down, and it will slow to a halt within a couple of gens.

Performance has been coming as a side effect of Moore's Law, and that's what's been on people's minds.

- Computational performance will still continue to increase though, with a likely exponential jump somewhere along the technology paradigm shift
And that's just wishful thinking : )

I suspect the decrease in clock speeds when moving over to quantum-tunnel or magnetic-tunnel transistors will be more than made up for by higher computational power, and the lower power requirements will eventually translate into higher densities and computational leaps in the mobile computing space.
It's not that simple. Performance nowadays already comes mainly via parallelism, and that too has its quirks: not everything is parallelisable, and the things that are are still subject to Amdahl's Law.
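For reference, Amdahl's Law says that with a fraction p of the work parallelisable across n cores, the speedup is 1 / ((1 - p) + p / n). A quick sketch (the parallel fractions here are made-up illustrations, not measurements of any real workload):

```python
def amdahl_speedup(p, n):
    """Speedup for parallel fraction p (0..1) of the work on n cores (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelisable, 16 cores give nowhere near 16x:
print(round(amdahl_speedup(0.95, 16), 2))     # 9.14
# And no core count can ever beat the serial-fraction ceiling of 1/(1-p) = 20x:
print(round(amdahl_speedup(0.95, 10**9), 2))  # 20.0
```

The serial fraction dominates quickly, which is why piling on cores is not a free substitute for per-core speed.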
 
The only real gripe about my 2011 2500K is that I cannot stream @60 FPS or 1080p.
For the rest, we are still very good friends.
 
The only real gripe about my 2011 2500K is that I cannot stream @60 FPS or 1080p.
For the rest, we are still very good friends.

What? I have a 2011 2500K and streaming 1080p @ 60fps works fine.
I've never felt that the CPU is limiting my experience. Maybe when doing some heavy 3D rendering, but that's not something I do very often.
 
By cist do you mean the greatest challenges?

The problems with X-ray lithography compared to UV:

1) Creating an X-ray source is more difficult and bulky compared to UV. A lot more, but still doable.
2) The optics are not simple mirrors or lenses; X-rays like to go through things. The optics are multilayer mirrors, zone plates, and so on and so forth. The optics and their resolution become difficult too, and they are being heavily researched at the moment.
3) When you create smaller and smaller structures on silicon, defects become more and more of a problem.

If I get time I can post some presentations though they are probably very dated by now.

No... I meant cost. Was on my phone and fat-fingered it :/

However, this post was probably more informative than what I was expecting - thank you. I would definitely like to see those presentations.
 
The only real gripe about my 2011 2500K is that I cannot stream @60 FPS or 1080p.
For the rest, we are still very good friends.

That's odd. I had a 2011 15" MacBook Pro with a quad-core Sandy Bridge CPU (Core i7 2720QM) and it definitely streamed at 1080p and 60fps with no issue.
 
I remember trying to install a t-bird hsf setup. God forbid your screwdriver slip and damage the traces on the board.

Could be worse, I think the Athlon Thunderbird days were the ones where putting the HSF on at an awkward angle would crack the die and destroy your new CPU.

It didn't take long for Intel and then AMD to start mounting the metal plates (heatspreaders) instead of exposing the bare CPU dies to prevent this from happening.
 
5820k for life...does not feel like a joke anymore.

Our 5820Ks felt like a joke at some point? It's a beastly processor at a good price.

Can someone explain what exactly TSX is in layman's terms?

I mean, I think I understand based on the wiki page, somewhat. It sounds like two potential instruction sets that are capable of being run on the threads under TSX.

So when a thread "locks" a certain set of data, it becomes problematic for the other threads to access the data at the same time, and they must essentially wait until the thread "releases" the data to access it, slowing down multi-threaded applications because it's trying to avoid overwrites. What TSX does is allow a non-traditional instruction set to be run which assumes that they will not overwrite each other, thus not forcing any locks and so speeding up multi-threaded performance. However, if it does happen to overwrite a variable, then it will abort the instruction set and go back to a traditional lock-and-unlock instruction set to protect against overwrites?

I seriously feel like I'm trying to read Latin. Somebody explain it to me like I'm 10, please! Also, as a 5820K owner, I see TSX was also scrapped on my CPU due to the issue. What exactly are the main benefits of TSX and where is its most useful implementation?
 
I'm also surprised at the huge performance improvement in some apps and games when you go from 2133 to 2666MHz DDR4 RAM. For example, Ryse almost doubles its framerate, based on Eurogamer's benchmarks.
 
Could be worse, I think the Athlon Thunderbird days were the ones where putting the HSF on at an awkward angle would crack the die and destroy your new CPU.

It didn't take long for Intel and then AMD to start mounting the metal plates (heatspreaders) instead of exposing the bare CPU dies to prevent this from happening.
Oh man, thanks for reminding me (victim of a screwdriver slip)
 
Our 5820Ks felt like a joke at some point? It's a beastly processor at a good price.

Can someone explain what exactly TSX is in layman's terms?

I mean, I think I understand based on the wiki page, somewhat. It sounds like two potential instruction sets that are capable of being run on the threads under TSX.

So when a thread "locks" a certain set of data, it becomes problematic for the other threads to access the data at the same time, and they must essentially wait until the thread "releases" the data to access it, slowing down multi-threaded applications because it's trying to avoid overwrites. What TSX does is allow a non-traditional instruction set to be run which assumes that they will not overwrite each other, thus not forcing any locks and so speeding up multi-threaded performance. However, if it does happen to overwrite a variable, then it will abort the instruction set and go back to a traditional lock-and-unlock instruction set to protect against overwrites?

I seriously feel like I'm trying to read Latin. Somebody explain it to me like I'm 10, please! Also, as a 5820K owner, I see TSX was also scrapped on my CPU due to the issue. What exactly are the main benefits of TSX and where is its most useful implementation?

Anyone care to chime in?
 
Anyone care to chime in?
It's an ISA extension (i.e. software-wise it's a set of new CPU ops and/or op modifiers) that provides transactional memory. The latter is a short way to say atomic memory access to data larger than a single memory-bus word. The idea is that when there are threads competing for a multi-word memory resource, they don't need to lock each other out explicitly (via spinlocks or kernel synchronization mechanisms) to ensure consistency of the access, but can ask the hardware to arbitrate the resource for them, so the process goes 'locklessly' for all involved parties. In practice that means that for one of them the entire process goes smoothly, and for the rest (if there are actual competitors) the transaction is automatically restarted or dropped, depending on what the code wants. Basically, (pseudo)code of the form:

Code:
acquire_lock(x);
access_protected_memory_region();
release_lock(x);

becomes with transactional memory

Code:
open_transaction(x);
access_protected_memory_region();
close_transaction(x);

Edit: forgot to mention that the big effect is not so much how TSX code looks, but how it performs; mechanisms like spinlocks are very taxing on the memory subsystems of all cores involved.
 