
E-cores and HT are bad for gaming

kruis

Exposing the sinister cartel of retailers who allow companies to pay for advertising space.
Most PC gamers expect that buying the latest and greatest CPU will automatically translate to games running at higher frame rates with fewer dropped frames. Look at any CPU benchmark and you'll see that the CPUs with the most cores and highest clock speeds dominate the charts. Of course they do, so hip hip hooray for technological progress!

But do they really?

It's becoming clearer to me that Intel's most recent CPUs with additional efficiency cores are actually inefficient in many games. Maybe even most games, but we don't really know how many games are affected because tech/games reviewers generally never run these kinds of performance tests. That means there's little information about this problem out there. It's a major issue IMO, but completely unreported.

Alex Battaglia demonstrated in a number of recent videos that game engines like Unreal Engine 4/5 don't make good use of CPUs with many cores. In those cases a game's performance doesn't scale linearly with the number of cores; it gets only marginal performance increases from turning on additional CPU cores. There are also games out there that perform better when you turn off features like hyperthreading and E-cores.
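A back-of-the-envelope way to see why: Amdahl's law says that if only a fraction p of a frame's work can run in parallel, n cores can never deliver more than 1 / ((1 - p) + p / n) speedup. A quick sketch (the 70% parallel fraction is just an illustrative assumption, not a measured number for any engine):

```python
# Amdahl's law: with a parallel fraction p, n cores give at most
# speedup = 1 / ((1 - p) + p / n). The parallel part shrinks, the
# serial part doesn't, so extra cores quickly stop mattering.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.7  # assumption for illustration: 70% of frame work parallelizes
for cores in (2, 4, 6, 8, 16, 32):
    print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x")
# 2 -> 1.54x, 8 -> 2.58x, 32 -> 3.11x: going from 8 to 32 cores
# buys barely half an extra "x".
```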

Take Lords of the Fallen as an example of a game with negative performance scaling (video link). Here you can see a game that runs better using just 6 cores of a 12900K than in its default configuration with all cores, hyperthreading and E-cores turned on.

(image: Lords of the Fallen benchmark chart)


To combat this embarrassing situation, Intel recently introduced APO (Application Optimization). This tool from Intel can make unoptimized games run better on Intel 14th gen CPUs. In the screenshot below you can see an example of what's possible: Rainbow Six Siege now runs at 724 fps instead of 610 fps. Impressive! But if you look at the chart more closely, you'll see that you could already wring more frames out of that game by turning HT off (633 fps) or by turning E-cores off (651 fps). (video link)

(image: Rainbow Six Siege APO benchmark chart)


When Intel introduced their 12th gen CPUs there were people who feared performance degradation, but those fears were waved away. Intel's new Thread Director, working together with Windows 11's improved task scheduler, would monitor the system, move background tasks to E-cores and make sure that performance would be optimized. Well, that obviously didn't come to pass.

Even worse, APO, Intel's solution to these performance problems, currently supports only two games and will run only on 14th gen Intel CPUs, even though 12th and 13th gen Intel CPUs use almost the same tech. It's clearly more important for Intel to sell more 14th gen CPUs than to support the existing customers who bought into the E-core fantasy.
 

justiceiro

Marlboro: Other M
More like the Apple M line is proof that the x86 architecture is a dinosaur on its last legs that needs to be phased out as soon as possible in favor of RISC processors. Don't even bother, because with almost 4 chip makers now developing ARM processors for desktop motherboards, and with a Windows version in the works too, this Intel approach is about to become history.
 

zeroluck

Member
UE is basically a single-threaded game engine that can't take advantage of multiple cores. If UE were the only application that CPU manufacturers cared about, they'd go back to dual cores.
 
Last edited:

winjer

Gold Member
AMD is going to do something similar next

It's not. Zen4C is very different from E-cores.
It's the same architecture, the same IPC, the same instruction set, as the full Zen4.
But they use higher density cells, and this is why they get a 35% reduction in size for the cores+L2.
The problem is that higher density cells mean lower signal integrity and higher temperatures per area. So they have to be clocked lower.
Another thing to consider is that the design has changed. We can see in the die shots that the layout of the units is different and everything is packed much closer together.
Usually, AMD designs their CPUs by designing each part individually and then putting it all together. This has the advantage of being faster to develop, and if there is a problem in one part, AMD only has to fix that one part.
The downside is that the layout of the chip is not very optimized, resulting in some wasted space.
With Zen4C, the layout was optimized not to waste any die space.

(image: Zen4 vs Zen4C die shot comparison)
 

DonkeyPunchJr

World’s Biggest Weeb
It's not. Zen4C is very different from E-cores.
It's the same architecture, the same IPC, the same instruction set, as the full Zen4.
But they use higher density cells, and this is why they get a 35% reduction in size for the cores+L2.
The problem is that higher density cells mean lower signal integrity and higher temperatures per area. So they have to be clocked lower.
Another thing to consider is that the design has changed. We can see in the die shots that the layout of the units is different and everything is packed much closer together.
Usually, AMD designs their CPUs by designing each part individually and then putting it all together. This has the advantage of being faster to develop, and if there is a problem in one part, AMD only has to fix that one part.
The downside is that the layout of the chip is not very optimized, resulting in some wasted space.
With Zen4C, the layout was optimized not to waste any die space.

(image: Zen4 vs Zen4C die shot comparison)
This seems like a smart approach but it still means that software will need to be aware that some cores are faster than others and games should prioritize those cores, right?

Or is this solved already since Windows will see some cores running at higher clock speeds?
 

Kenpachii

Member
Turning HT off has always been better in games that don't use many cores, because of reduced heat and higher core clocks. This is nothing new.

I own a 12900HX, which has E-cores, and I can say the following: I like them a lot. Windows automatically pushes background stuff towards the E-cores while ramming the game onto the bigger cores.

I can decompress game data that slams my E-cores to 100% usage and see no negative performance hit in the game.
 
Last edited:

winjer

Gold Member
This seems like a smart approach but it still means that software will need to be aware that some cores are faster than others and games should prioritize those cores, right?

Or is this solved already since Windows will see some cores running at higher clock speeds?

It's a much better approach than the E-cores, especially because it doesn't mix two different types of cores with different support for different instructions.

Then we have nonsense like AVX10. It's a very deep and complex read, but it says a lot about how this complicates things and makes x86 even more bloated.

EDIT: regarding scheduling threads across Zen4 and Zen4C cores, it will probably be an update to what we have now with CPPC support in hardware, UEFI and Windows.
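A crude way to see the asymmetry the scheduler has to deal with is to list per-core max frequencies; the lower-clocked cores are the dense ones. A sketch, assuming psutil is installed and the OS reports per-core frequencies (Linux does; Windows often returns a single aggregate entry):

```python
# Print each logical CPU's reported max frequency. On a hybrid or
# Zen4/Zen4C part, the lower-clocked entries are the dense/efficiency
# cores that a CPPC-aware scheduler should steer the game's main
# thread away from. Some systems report 0 for min/max.
import psutil

for i, f in enumerate(psutil.cpu_freq(percpu=True)):
    print(f"cpu {i}: max {f.max:.0f} MHz")
```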
 
Last edited:

Bojji

Member
The E-cores idea is good, and in a perfect world it would work great; multi-core in general would. But the problem is that most games/game engines are dumb as fuck and max out at around 6 cores/6 threads. I doubt much will change in the future.

That's why a strong core with 3D cache is the way to go.
 

HL3.exe

Member
Yeah it sucks, but it wasn't hard to see the industry's adoption of E-cores coming. CPU progress has stagnated for the last 10 years or more.

Especially on the single-threaded performance and frequency fronts.
(image: "Scaling is Falling" chart from SemiWiki, showing microprocessor trend data)
* Blue and green are the main contributors to game-logic and simulation complexity. Multicore is rising, yes, but in games it's used to offload workloads; games are still dependent on one fast single thread (the main game thread) to collect/synchronize all that work in order.
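A toy sketch of that fan-out/collect shape (illustrative only, not any real engine's code):

```python
# Toy version of the fan-out/collect pattern: worker threads scale
# with core count, but the main thread still gathers every result,
# in order, before the "frame" can move on. That gather step is serial.
from concurrent.futures import ThreadPoolExecutor

def update_entity(entity_id: int) -> str:
    return f"entity {entity_id} updated"  # stand-in for AI/physics work

with ThreadPoolExecutor(max_workers=8) as pool:
    results = pool.map(update_entity, range(1000))  # parallel fan-out
    frame_state = list(results)  # main thread collects, in order

print(len(frame_state), "entities synchronized on the main thread")
```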

Performance gains are only possible through die shrinks, more cache, and software being better optimized to utilize multiple cores (which is especially hard for the realtime, deterministic simulation logic that games use).

So the big companies would rather focus on making mobile and server CPUs, as those markets are ever-growing. (Even though gaming is huge, the mainstream isn't buying high-end single-threaded performance chips like in the Crysis days.)
 
Last edited:

Leonidas

Member
Even with E-cores active, 13th and 14th Gen are faster than Zen4.

Zen4 is only faster (on average) in gaming with V-Cache variants, but the high core count models of those have their own issues...
 

Silver Wattle

Gold Member
It's partly why, when a friend asked me what parts to buy for his money-is-no-object PC, I told him to get a 7800X3D.

Even multiple CCDs are bad for gaming; hopefully AMD increases single-CCD core counts soon.
 

Buggy Loop

Member
Is this again something that pops up at 1080p with a 4090 slapped in, where as soon as you up the resolution all the differences disappear and nearly all CPUs normalize to the same performance?

I'm on AMD, but I feel like CPU-bound scenarios are mostly for wanking.
 

Drew1440

Member
More like the Apple M line is proof that the x86 architecture is a dinosaur on its last legs that needs to be phased out as soon as possible in favor of RISC processors. Don't even bother, because with almost 4 chip makers now developing ARM processors for desktop motherboards, and with a Windows version in the works too, this Intel approach is about to become history.
They've been RISC internally since the Pentium Pro/K6.
 

kruis

Exposing the sinister cartel of retailers who allow companies to pay for advertising space.
What I find surprising about the fact that games can run WORSE with hyperthreading and E-cores turned on is that there's so little information about which games are affected. It would be great if PC game reviewers checked not only the framerate of new games but also this performance aspect. If the devs didn't fix the problem, we could at least turn off E-cores ourselves on a per-game basis using a tool like Process Lasso.

So which games suffer from rotten CPU optimization? Unreal Engine games do (like Lords of the Fallen). Atlas Fallen was another one; that game ran A LOT worse with E-cores turned on.


I also found a reference to microstutters in Star Citizen that could be fixed by using Process Lasso to turn off the E-cores.
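For reference, what Process Lasso does under the hood is essentially just set the process's CPU affinity mask, which you can also do yourself. A minimal sketch with psutil; the process name and the assumption that logical CPUs 0-15 are the P-core threads (typical for an 8P+8E part, where P-cores enumerate first) are illustrative, so check your own topology:

```python
# Pin a running game to a chosen set of logical CPUs, like Process
# Lasso does. Needs `pip install psutil` and enough privileges.
# GAME_EXE is a made-up name; the 0-15 range assumes an 8P+8E chip
# where the 16 P-core hyperthreads are enumerated first.
import psutil

GAME_EXE = "LOTF2.exe"            # hypothetical process name
P_CORE_THREADS = list(range(16))  # assumed P-core logical CPUs

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        proc.cpu_affinity(P_CORE_THREADS)
        print(f"pinned PID {proc.pid} to P-cores only")
```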



Any others that people know of?
 

Ulysses 31

Member
I think it's more of a case-by-case basis; I turned off simultaneous multithreading and lost nearly 50% of my fps in Metro Exodus EE.
 
The "E" cores are the reason I'm switching to AMD for the first time after 20 years.
I did the same, first time in 13 years. Unfortunately, AMD is probably going down this same shitty path with their next generation of CPUs. I fucking hate it. It goes against what gaming performance demands, which is fewer cores with higher clock speeds and better single-threaded performance.
 

IFireflyl

Gold Member
E-cores and HyperThreading aren't bad for gaming; games and engines just aren't utilizing the technology. Until games/engines start optimizing for these technologies, you can't draw the conclusion that the technologies are bad. It's the developers that are the problem.
 

Tarin02543

Member
What I find surprising about the fact that games can run WORSE with hyperthreading and E-cores turned on is that there's so little information about which games are affected. It would be great if PC game reviewers checked not only the framerate of new games but also this performance aspect. If the devs didn't fix the problem, we could at least turn off E-cores ourselves on a per-game basis using a tool like Process Lasso.

So which games suffer from rotten CPU optimization? Unreal Engine games do (like Lords of the Fallen). Atlas Fallen was another one; that game ran A LOT worse with E-cores turned on.


I also found a reference to microstutters in Star Citizen that could be fixed by using Process Lasso to turn off the E-cores.



Any others that people know of?


I have Process Lasso Pro, best paid software ever.
 

Poplin

Member
The issue isn't with the chip, it's with the engine. Unreal has never supported multi-threading well; it wasn't as big an issue back in the UE4 days, but now it's a pretty big one. Epic has said they plan to support multithreading in their renderer, but the underlying engine is still going to be primarily single-threaded.

So yeah, it likely won't be an issue with games on proprietary engines that do support multi-threading, but anything on UE5 is going to be hamstrung like this until Epic finally invests in fixing it.
 

M1chl

Currently Gif and Meme Champion
HT or SMT often has some smartass microcode attached to it, which often results in higher latency and, paradoxically, causes issues when code is multithreaded. Of course, if the engines weren't written by amateurs whose main domain is graphics processing rather than CPU-logic processing, this wouldn't occur, but given how much emphasis is put on graphics these days, who can blame them. This is why it generally doesn't occur in software outside of games, where the emphasis is on how the CPU is loaded.

One example of an engine with good CPU processing is id Tech.

Also, often the user is at fault, having HT on and HPET off, which can cause desync of instructions running on multiple threads, etc.
 

Three

Member
The issue isn't with the chip, it's with the engine. Unreal has never supported multi-threading well; it wasn't as big an issue back in the UE4 days, but now it's a pretty big one. Epic has said they plan to support multithreading in their renderer, but the underlying engine is still going to be primarily single-threaded.

So yeah, it likely won't be an issue with games on proprietary engines that do support multi-threading, but anything on UE5 is going to be hamstrung like this until Epic finally invests in fixing it.
Not when it comes to hyperthreading and E-cores; engines often have no idea about the type of core since it's abstracted away. The Thread Director and the Windows task scheduler are in charge of that, and they slow things down when HT is enabled. The OP and I have discussed this topic in the past. It's an interesting topic, but it's not really the engines that are to blame.
 
Last edited:
Not when it comes to hyperthreading and E-cores; engines often have no idea about the type of core since it's abstracted away. The Thread Director and the Windows task scheduler are in charge of that, and they slow things down when HT is enabled. The OP and I have discussed this topic in the past. It's an interesting topic, but it's not really the engines that are to blame.
It is, as usual, mainly a Windows problem. Linux doesn't have any issues with heterogeneous architectures, and God knows that iOS and Android, which always run on heterogeneous architectures, never have problems either.
 
Last edited:

Three

Member
It is, as usual, mainly a Windows problem. Linux doesn't have any issues with heterogeneous architectures, and God knows that iOS and Android, which always run on heterogeneous architectures, never have problems either.
I think it's a little unfair to blame Windows for this, even though I'm a Linux advocate. Hyperthreading often reduces performance on Linux too if you rely on its scheduler. The good thing about Linux, though, is that you have very fine control and can dedicate cores/hardware to specific tasks (the running game), meaning you can manually mitigate some of the problems/performance drops that hyperthreading causes.
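For example, a minimal sketch of doing that from Python on Linux (the PID and core numbers are made up; pair this with boot options like isolcpus if you want those cores kept free of everything else):

```python
# Linux-only: restrict an already-running process to a dedicated set
# of cores via the scheduler's affinity interface. Setting affinity
# on another user's process needs appropriate privileges.
import os

game_pid = 12345                 # hypothetical PID of the game
dedicated_cores = {2, 3, 4, 5}   # cores you've decided to reserve

print("before:", os.sched_getaffinity(game_pid))
os.sched_setaffinity(game_pid, dedicated_cores)
print("after: ", os.sched_getaffinity(game_pid))
```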
 

nkarafo

Member
It's the developers that are the problem.
Why make this design in the first place? Why not just make fewer regular cores? Instead of, say, 6+4, why not go for just 8?

Why add this level of complexity for the developers to solve?

To me this whole design exists only to make their CPUs look more impressive to the masses, because this way they can make and sell CPUs with "more" cores. More is better, right? It's like the Pentium 4 CPUs and how they looked better on paper because the numbers were higher vs. the Athlons. The only difference is that back then it was all about GHz and now it's about the number of cores. This is classic Intel.
 

IFireflyl

Gold Member
Why make this design in the first place? Why not just make fewer regular cores? Instead of, say, 6+4, why not go for just 8?

Why add this level of complexity for the developers to solve?

To me this whole design exists only to make their CPUs look more impressive to the masses, because this way they can make and sell CPUs with "more" cores. More is better, right? It's like the Pentium 4 CPUs and how they looked better on paper because the numbers were higher vs. the Athlons. The only difference is that back then it was all about GHz and now it's about the number of cores. This is classic Intel.

You're asking why companies try to innovate and make their products better...?

(GIF: question mark)


From their website:

Intel® Core™ desktop processors integrate two types of cores into a single die: powerful Performance-cores (P-cores) and flexible Efficient-cores (E-cores). Both types of core have a different role.

Performance-cores are:
  • Physically larger, high-performance cores designed for raw speed while maintaining efficiency.
  • Tuned for high turbo frequencies and high IPC (instructions per cycle).
  • Ideal for crunching through the heavy single-threaded work demanded by many game engines.
  • Capable of hyper-threading, which means running two software threads at once.
Efficient-cores are:
  • Physically smaller, with multiple E-cores fitting into the physical space of one P-core.
  • Designed to maximize CPU efficiency, measured as performance-per-watt.
  • Ideal for scalable, multi-threaded performance. They work in concert with P-cores to accelerate core-hungry tasks (like when rendering video, for example).
  • Optimized to run background tasks efficiently. Smaller tasks can be offloaded to E-cores — for example, handling Discord or antivirus software — leaving P-cores free to drive gaming performance.
  • Capable of running a single software thread.

Many developers have shown they can use these cores efficiently in AAA games. Technology changes. Companies can either adapt or fall behind.

Also, just for giggles:



E-Cores enabled is definitely beneficial in a lot of games.
 
Last edited:
I think it's a little unfair to blame Windows for this, even though I'm a Linux advocate. Hyperthreading often reduces performance on Linux too if you rely on its scheduler. The good thing about Linux, though, is that you have very fine control and can dedicate cores/hardware to specific tasks (the running game), meaning you can manually mitigate some of the problems/performance drops that hyperthreading causes.
I was speaking more about the big.LITTLE heterogeneous designs pioneered by ARM and brought to x86 by Intel's P- and E-cores. I think we should keep SMT techniques like hyperthreading as a separately considered problem.
 
Oh wow, so at 1080p and low settings I can get 730 fps instead of 650 fps with the same CPU? Just wow!

I'm being sarcastic, of course, because at that framerate no one is going to notice the extra 80 fps, but I would certainly notice the low resolution and graphics settings. And the benches show that as you ramp up the resolution to 1440p and higher, along with the graphics settings, games become more GPU-limited and the improvements to framerates with Intel's APO become much smaller.
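The frame-time arithmetic backs this up: an 80 fps jump at ~650 fps saves far less frame time than even a 10 fps jump at 60 fps would.

```python
# Convert fps gains to frame-time savings: the higher the baseline
# framerate, the less real time the same-looking fps jump is worth.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

print(f"{frame_time_ms(650) - frame_time_ms(730):.2f} ms")  # 650 -> 730 fps: ~0.17 ms
print(f"{frame_time_ms(60) - frame_time_ms(70):.2f} ms")    # 60 -> 70 fps:   ~2.38 ms
```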

Also, Intel have stated that they have no plans to add APO support to their 12th and 13th gen CPUs, even though there is no reason for them not to. My guess is they are doing this to sell the 14th gen CPUs, which are otherwise basically refreshed 13th gen CPUs with minor changes and improvements. APO also only supports 2 games currently and seems to rely on developers adding support for it to their games, which makes it a lot less appealing than something that can be enabled system-wide. Most developers likely won't bother supporting it, as most gamers won't own 14th gen Intel CPUs. I mean, if these same developers cannot fix and address shader optimisation in their PC releases before launch, then what chance is there that they will support Intel's APO?

It's an interesting technology, and any additional performance, no matter how small, is always welcome. But it would have been better if it supported 12th to 14th gen Intel CPUs and had the option to be enabled globally for all games, with per-game profiles to disable it for games that don't work well with it or have issues.
 
Last edited:

Dr.D00p

Gold Member
This isn't a problem with the hardware.

It's a problem caused by developers writing ineffective and inefficient code.

Intel has given them the tools to significantly boost performance, as APO has proven.

Intel can be pilloried for many things, but bad developers isn't one of them.
 

kruis

Exposing the sinister cartel of retailers who allow companies to pay for advertising space.
.... Also, just for giggles:



E-Cores enabled is definitely beneficial in a lot of games.


That video shows that on average (with these particular games) there's not much difference percentage-wise between E-cores on and off (which is actually not a great advertisement for a technology that can double or triple the number of available CPU cores), but there are the problem cases where you should turn E-cores off. And that's my point. I'm not playing 40 games at once, I'm playing one at a time. How do I know I'm not playing one of the games that perform worse? If E-cores provide little or no additional performance, that's OK, but worse?

(image: E-cores on vs. off benchmark comparison)


BTW, Starfield is (or was, perhaps it's been patched) also a game that runs better when you turn E-cores, and in particular HT, off, according to Digital Foundry. (video link)

(image: Starfield benchmark chart from Digital Foundry)
 
Last edited: