
Zen 4 reviews

winjer

Gold Member


THMKVff.png
UiGCmvX.png
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
KO for Intel. The e-cores are a temporary solution... why didn't Intel make the 13900K with 16 P-cores and 4 E-cores?
Cuz the $590 13900K with its petty e-cores still beats the $700 7950X when the cores are fully loaded 9 times out of 10.

16 P-cores would leave no room for competition.
At least AMD have a chance with Intel not fully loading up the chip with P-cores.
8 P-cores while still beating the competition is more a sign AMD need to step up their game than a KO for Intel.



*MSRP prices; I know no one is buying 7000 series CPUs, so they've seen drastic price cuts.
 

winjer

Gold Member
KO for Intel. The e-cores are a temporary solution... why didn't Intel make the 13900K with 16 P-cores and 4 E-cores?

Because of space on the chip.
The 13900K is already 257 mm², using Intel 10. But one P-core occupies as much space as 3 E-cores.
Adding 3 E-cores adds more performance than 1 P-core in highly parallelized workloads.
In applications like renderers, those E-cores can be put to good use rendering a few tiles here and there.
But in games those E-cores are almost useless. In some games, they can even lead to a loss in performance.
Then there are applications that require real cores with grunt. Talking with devs that use UE5, they told me that the 7950X is the fastest CPU for batching projects.
And they expect the 7950X3D to be even faster.

BTW, the 7950X has 2 CCDs of 70 mm² each, on N5. And the IO die is on N6, at 124.7 mm².
With much smaller chips, yields increase drastically. And only the CCDs are on the most expensive node.
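The yield point can be made concrete with the textbook Poisson die-yield model, Y = e^(−D·A): the bigger the die, the lower the fraction of defect-free chips. A quick sketch; the defect density used here is an assumed illustrative number, not foundry data:

```python
import math

def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of defect-free dies under the Poisson yield model Y = exp(-D*A)."""
    return math.exp(-defects_per_mm2 * area_mm2)

# Assumed illustrative defect density (defects per mm^2), not vendor data.
D = 0.001

monolithic = poisson_yield(257.0, D)  # one 257 mm^2 die (13900K-sized)
ccd = poisson_yield(70.0, D)          # one 70 mm^2 CCD (Zen 4-sized)

print(f"257 mm^2 die yield: {monolithic:.1%}")
print(f" 70 mm^2 CCD yield: {ccd:.1%}")
```

Whatever the real defect density is, the small CCD always yields better than the big monolithic die, and only the small dies need the expensive node.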
 
Last edited:

winjer

Gold Member




 
They were only selling badly due to AM5 and DDR5 costs. I'm looking at either a 5600/12400 on an AM4/Intel platform, or a 7800X chip. Waiting until Black Friday to decide. If GPU prices are reasonable, it's the X; if not, I'll grab a 5600.

The 13xxx series is a non-starter for me due to the lack of AVX-512 support. I love me some PDX games.
 
Last edited:

winjer

Gold Member
AMD just killed their own X models which were already selling badly?

The X models have always been the worst option, in every Zen generation.
Unfortunately, in the last few gens AMD has released the X models first, to push early adopters onto the more expensive CPUs.
And this is one of the main reasons Zen 4 sold so poorly.
 

winjer

Gold Member
?????????????????????????
how so?? i own an X model

Because the X models only have an extra one or two hundred MHz and a higher TDP, something that can easily be changed in the BIOS or Ryzen Master.
And even without any tweaks, the performance difference is very small.
On the other hand, the non-X models are significantly more power efficient, especially Zen 4.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
?????????????????????????
how so?? i own an X model
They cost more yet perform practically indistinguishably from the non-X models.

See the above graph: the 7700 performs less than a percent slower than the 7700X.
The 5600 was 1% slower than the 5600X, yet look at the price differential.

Why would anyone buy an X model when the non-X models give you everything for less money?
Better to put the ~$100 towards a better cooler.
 

GymWolf

Member
I have a question.

People always say that at 4K the CPU doesn't matter that much, so having an i5 over an i7 is not a big deal.

But is this true for games with a lot of NPCs on screen, or a lot of physics? Like when in Cyberpunk/W3 I put the crowd density to max, or when there is hell on screen in Noita, doesn't that impact only the CPU?

Also, I'm sure I heard people saying that a powerful CPU is also important for RTX effects because the rays need fast math or some shit:lollipop_grinning_sweat:

Does playing at 4K really trump NPC density/physics/RTX?
 
Last edited:

M1chl

Currently Gif and Meme Champion
I have a question.

People always say that at 4K the CPU doesn't matter that much, so having an i5 over an i7 is not a big deal.

But is this true for games with a lot of NPCs on screen, or a lot of physics? Like when in Cyberpunk/W3 I put the crowd density to max, or when there is hell on screen in Noita, doesn't that impact only the CPU?

Also, I'm sure I heard people saying that a powerful CPU is also important for RTX effects because the rays need fast math or some shit:lollipop_grinning_sweat:

Does playing at 4K really trump NPC density/physics/RTX?
For RTX it's definitely good to have a good CPU; single-core performance is important there. Rays are hard to parallelize, which is why it wants a fast CPU, or rather a fast CPU core.

Otherwise I don't know. Strategy games are big on CPU, like Anno, and something like Flight Sim; basically anything that has to load a lot of data, decompress it, etc.

Which in the future will be on the GPU, inshallah.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I have a question.

People always say that at 4K the CPU doesn't matter that much, so having an i5 over an i7 is not a big deal.

But is this true for games with a lot of NPCs on screen, or a lot of physics? Like when in Cyberpunk/W3 I put the crowd density to max, or when there is hell on screen in Noita, doesn't that impact only the CPU?

Also, I'm sure I heard people saying that a powerful CPU is also important for RTX effects because the rays need fast math or some shit:lollipop_grinning_sweat:

Does playing at 4K really trump NPC density/physics/RTX?
Most games have a "master thread" that does most of the work, then "worker threads" that handle background stuff.
Strong single-thread performance is currently the most important thing in most games (cache rules everything around me, notwithstanding).
So an i5 vs an i7 vs an i9, assuming similar binning, will perform virtually the same even in Cyberpunk: the game will still use that master thread plus worker threads, but more worker threads past a certain point have diminishing returns, so 6 cores is more than enough right now.
Maybe in the future games will be far more parallelized (I've said this every CPU generation since the Xbox One launched; it still isn't really true, and even 4 powerful cores are enough for most games), but as it stands 6 P-cores + E-cores isn't just the sweet spot, it's realistically the best setup from a bang-for-buck perspective, as more cores will basically just be half-assing their way through life, so why even have them?

The reason the 139K beats the 136K (by like 3% on average) isn't because it has more cores; it's because the 139K is binned better. If you have a well-binned 136K vs a similarly binned 139K, there will be virtually no difference in performance in 99% of games.
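The diminishing-returns point above can be sketched with Amdahl's law, speedup = 1 / ((1 − p) + p/n). The 60% parallel fraction below is an assumed figure for illustration, not a measurement of any real game:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when only part of the work can run in parallel (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume a game where 60% of frame time parallelizes across worker threads.
p = 0.6
for n in (2, 4, 6, 8, 16):
    print(f"{n:2d} cores: {amdahl_speedup(p, n):.2f}x")
```

Under that assumption, going from 6 to 16 cores adds well under 15% extra speedup: the serial master thread dominates, which is why single-thread performance matters more than core count.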
 

Buggy Loop

Member
I have a question.

People always says that in 4k the cpu doesn't matter that much so having an i5 over an i7 is not a big deal.

But is this true for games with like a lot of npcs on screen or a lot of physics? like when in cyberpunk\w3 i put the crowd density to max or when there is hell on screen in noita, doesn't that impact only the cpu?

Also i'm sure i heard people saying that a powerfull cpu is also important for the rtx effects because the ray needs fast math or some shit:lollipop_grinning_sweat:

Does playing in 4k really trumps npcs density\physics\rtx?

Take a 5800X vs 5800X3D with a 4090:

https://www.techpowerup.com/review/rtx-4090-53-games-ryzen-7-5800x-vs-ryzen-7-5800x3d/2.html

Even at 4K, some games show a drastic >30% delta, for an average of 6.8%.

Then the 13900K vs the 5800X3D, again with the same 4090:

https://www.techpowerup.com/review/rtx-4090-53-games-core-i9-13900k-vs-ryzen-7-5800x3d/2.html

The gap starts to shrink: some >16% outliers, with an average of 1.3% faster.

Ada will basically take whatever CPU you throw at it. I wouldn't be surprised if it scales very well again with the 7800X3D.
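The per-game deltas TechPowerUp reports are simple relative FPS differences. A quick sketch of the arithmetic; the FPS pairs below are made up for illustration, not review data:

```python
def percent_delta(fps_a: float, fps_b: float) -> float:
    """How much faster CPU A is than CPU B in a game, in percent."""
    return (fps_a / fps_b - 1.0) * 100.0

# Hypothetical per-game FPS pairs (CPU A vs CPU B); one CPU-bound outlier
# among mostly GPU-bound results drags the average up.
games = [(120, 90), (100, 98), (144, 140), (60, 59)]
deltas = [percent_delta(a, b) for a, b in games]
avg = sum(deltas) / len(deltas)
print([round(d, 1) for d in deltas], f"avg {avg:.1f}%")
```

This is how a single >30% outlier can coexist with a modest average: most games are GPU-limited at 4K, and the mean hides the spread.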
 

Rickyiez

Member
You also get much better lows at 4K with a better CPU, which contributes to smoother gameplay. Even upgrading from a 3700X to a 5700X gave me much more stable 90 FPS framerates in FF7R at 4K, for example.
 

thuGG_pl

Member
I upgraded my PC to a 7600X lately. I'm running Windows 11.
As usual after an upgrade, I ran some benchmarks just to check that performance is as expected.
And one thing was very weird: the AIDA64 memory benchmark reported very poor latency results for RAM and the L1/2/3 caches (first screenshot below).
I couldn't figure out what was causing it. I played with the memory settings in the BIOS, ran defaults, etc., but nothing really helped.
The 3DMark score was also way off normal values, like 30-40% lower than it should be.

So after googling a bit, I stumbled upon some information that Win 11 had this particular problem, but it was from around when W11 was released, and some early updates apparently remedied it.
But here I am in 2023 with the exact same problem. To solve it, I just needed to install the AMD B650 chipset driver. After that, the latencies went back to normal (second screenshot).

BUT today there was some update to Win 11, and after it everything went back to shit. So I installed the B650 chipset driver again and it's fine again. So it looks like Win 11 updates may reset this.
I wonder how many people have this but aren't aware of it. Or maybe it's something really specific, like the motherboard or sth. Weird.


Fm5N3cV.png



imxy2Af.png
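A crude way to sanity-check memory latency without AIDA64 is a pointer-chase microbenchmark: walk a randomly linked cycle so the prefetcher can't predict the next address, then divide wall time by hop count. A minimal Python sketch; interpreter overhead dominates the absolute number, so only compare it against itself before and after a driver change:

```python
import random
import time

def pointer_chase_ns(n: int = 1 << 18, hops: int = 1 << 18) -> float:
    """Average time per dependent load while walking one random cycle, in ns."""
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    for i in range(n):
        # Link the shuffled order into a single cycle covering all n slots.
        nxt[order[i]] = order[(i + 1) % n]
    idx = 0
    start = time.perf_counter()
    for _ in range(hops):
        idx = nxt[idx]  # each load depends on the previous one
    elapsed = time.perf_counter() - start
    return elapsed / hops * 1e9

print(f"~{pointer_chase_ns():.0f} ns per hop (interpreter overhead included)")
```

If installing the chipset driver really fixes the latency regression, the per-hop figure from a large working set should drop noticeably after the install.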
 
Last edited:

winjer

Gold Member

The good news is that budget friendly AMD A620 motherboards might be just around the corner. Some new models have just been spotted over at Eurasian Economic Commission regulatory office and on Goofish selling platform. The following motherboards were listed there:

The A620 chipset will almost certainly get fewer PCIe lanes and USB ports. Customers should also prepare for missing PCIe Gen5 support and most likely locked overclocking capabilities. However, AMD is yet to confirm the exact details for this entry-level chipset.
 

rnlval

Member
Cuz the 590 dollar 13900K with its petty e-cores still beats the 700 dollar 7950X when the cores are fully loaded 9 times out of 10.

16 P-cores would leave no room for competition.
Atleast AMD have a chance with Intel not fully loading up the chip with P-cores.
8 P-cores and beating the competish is more a sign AMD need to step up their game than a KO to Intel.



*MSRP prices, I know no one is buying 7000 series CPUs so theyve seen drastic drastic price cuts.
Cinebench R23 only uses 128-bit wide AVX.



-----

https://youtu.be/M__zn9fJEMA?t=244
Ryzen 9 7900X vs Core i9-13900K running RPCS3's Uncharted: Drake's Fortune. The Ryzen 9 7900X is faster than the Core i9-13900K.

Intel has disabled the Core i9-13900K's AVX-512. The Core i9-13900K should really be compared with a Ryzen 9 7950X.
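Whether a chip actually exposes AVX-512 can be checked from its CPU feature flags; on Linux they're listed in /proc/cpuinfo. A small sketch: the has_flag helper is my own illustration, not a standard API:

```python
def has_flag(cpuinfo_text: str, flag: str) -> bool:
    """Check for a CPU feature flag in /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return flag in line.split(":", 1)[1].split()
    return False

# On Linux you would feed it the real file:
# with open("/proc/cpuinfo") as f:
#     print(has_flag(f.read(), "avx512f"))

sample = "flags\t\t: fpu sse sse2 avx avx2 avx512f avx512bw"
print(has_flag(sample, "avx512f"))  # True for this sample line
```

On a 13900K with AVX-512 fused off, the avx512f flag simply won't appear, while Zen 4 reports it; that's the whole basis of the RPCS3 comparison above.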
 

rnlval

Member
Because of space on the chip.
The 13900K is already 257 mm², using Intel 10. But one P-core occupies as much space as 3 E-cores.
Adding 3 e-cores ads more performance than 1 p-core in highly parallelized threads.
In applications like renderers, those e-cores can be well used to render a few tiles here and there.
But in games those e-cores are almost useless. In some games, they can even lead to a loss in performance.
Then there are applications that require real cores with grunt. While talking with devs that use UE5, they told me that the 7950X is the fastest CPU to batch projects.
And they expect that the 7950X3D to be even faster.

BTW, the 7950X has 2 CCDs with 70mm² each, at N5. And the IO die is at N6, with 124.7 mm²
Using much smaller chips, yields increase drastically. And only the CCDs are in the most expensive node.
FYI, the 13900K uses Intel 7, not Intel 10.

The name "Intel 7" refers to TSMC's 7nm-class transistor density, since Intel's 10nm SuperFin is similar in density to TSMC's 7nm.
 

rnlval

Member
I upgraded PC to 7600X lately. I'm running Windows 11.
As usual after an upgrade I ran some benchmarks just to check if performance is as expected.
And one thing was very weird, AIDA64 memory benchmarks reported very poor latency results for RAM and L1/2/3 caches, (first screenshot below).
I couldn't figure out what is causing that, played in bios with memory, run default settings etc, but nothing really helped.
Also 3dmark score was way off normal values, like 30-40% lower than it should be.

So after googling a bit, I stumbled upon some information that Win 11 had this particual problem, but it was from around when W11 was released. And some early updated apparently remedied this problem.
But here I am in 2023 having exact same problem. To solve that I just needed to install AMD B650 chipset driver. After that the latencies went back to normal (second screenshot).

BUT today there was some update to Win 11, and after this it went back to shit again. So I installed the B650 chipset driver again and it's fine again. So looks like Win 11 updates may reset this.
I'm wondering how many people have this, but are not aware of it. Or maybe it's something really specific, like motherboard or sth. Weird.


Fm5N3cV.png
5PzkMHw.png


AMSbW0W.png


I manually tightened the memory settings with two G.Skill Trident Z5 Neo RGB DDR5-6000 32GB (2x16GB) F5-6000J3038F16GX2-TZ5NR AMD EXPO modules.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Cinebench R23 only uses AVX-128 bit width.



-----

https://youtu.be/M__zn9fJEMA?t=244
Ryzen 9 7900X vs Core i9 13900K running RPCS3's Uncharted Drake's Fortune. Ryzen 9 7900X is faster when compared to Core i9 13900K.

Intel has disabled Core i9 13900K's AVX-512. Core i9 13900K should be compared with a Ryzen 9 7950X.

I'm not sure what I'm supposed to do with this information?
If most programs aren't using AVX-512, then it's borderline irrelevant to say "Zen 4 only loses because programs don't utilize AVX-512".
Until a bunch of programs use AVX-512, it isn't really something worth bringing up.

And the 7900X is $550 MSRP,
the 13900K is $590 MSRP,
the 7950X is $700 MSRP.

Direct competitors are
7900X vs 13900K,
7950X vs 13900KS.

But realistically no one should be buying the KS chip anyway.
 

winjer

Gold Member
FYI, 13900K uses Intel 7.

Intel 7 refers to TSMC's 7nm transistor density since Intel's 10nm SuperFin is similar to TSMC's 7nm transistor density.

You are correct.
But it's still just a half node, an improvement over Intel 10.
Before Intel renamed all their nodes, Intel 7 was known as Intel 10nm Enhanced SuperFin.
It does bring minor but decent improvements: 10-15% better performance per watt.
 

winjer

Gold Member
And the 7900X is 550 MSRP
The 13900K is 590 MSRP
The 7950X is 700 MSRP.

Direct competitors are
7900X to 13900K.
7950X to 13900KS.

Those prices are no longer current.
Most of the 7000 series CPUs are now priced about $100 lower than that.

For example, a look at Newegg:
13900K: $610
7950X: $589
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Those prices are no longer current.
Most of the 7000 series CPUs are now priced ~100$ cheaper than that.

For example, a look at Newegg:
13900K: 610$
7950X: 589$
Which is why I listed MSRP.
If Zen 4 were a success, the CPUs wouldn't have gotten such steep price cuts so close to release.
But in terms of which CPU takes on which, we aren't really going to call the 7700X the direct competitor to the 136K just because of the price cuts the R7 has had.
The R7's competition was/is the i7,
R5 vs i5,
and the 139K was vs the 7900X.
 

winjer

Gold Member
Which is why I listed MSRP.
If Zen4 was a success the CPUs wouldnt have gotten such steep price cuts so close to release.
But in terms of which CPUs take on which we arent really going to say the 7700X is the direct competitor to the 136K just cuz of the price cuts the r7 has experienced.
The r7s competition was/is the i7
r5 vs i5.
139K was vs the 7900X.

Who cares about launch prices? What matters is the current price.
It's normal for companies to adjust prices to the market.
 

rnlval

Member
Im not sure what im supposed to do with this information?
If most programs arent using AVX512 then its borderline irrelevant, to say "Zen4 loses because there arent programs that utilize AVX512".
Until a bunch of programs use AVX512, it isnt really something worth bringing up.

And the 7900X is 550 MSRP
The 13900K is 590 MSRP
The 7950X is 700 MSRP.

Direct competitors are
7900X to 13900K.
7950X to 13900KS.

But realistically no one should be buying the KS chip anyway.
1. Unlike Intel's 13900 (non-K), AMD's 7900 (non-X) can be overclocked via the multiplier.

From https://pcpartpicker.com/products/cpu/#F=99,101&C=8,64&sort=price&page=1 (location USA)

For multiplier-unlocked CPU SKUs with integrated graphics.

Intel Core i9-13900K = $594.99
Intel Core i7-13700K = $421.96

VS

AMD Ryzen 9 7950X = $574.00
AMD Ryzen 9 7900X = $419.99
AMD Ryzen 9 7900 = $429.00


https://www.techpowerup.com/cpu-specs/ryzen-9-7900.c2961#:~:text=You may freely adjust the,with a dual-channel interface.
You may freely adjust the unlocked multiplier on Ryzen 9 7900, which simplifies overclocking greatly, as you can easily dial in any overclocking frequency.
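Multiplier overclocking is just core clock = base clock × multiplier, with the AM5 reference clock at 100 MHz. A trivial sketch; the multiplier values are illustrative examples, not recommended settings:

```python
BCLK_MHZ = 100  # AM5 reference clock

def core_clock_mhz(multiplier: float) -> float:
    """Resulting core clock for a given unlocked multiplier."""
    return BCLK_MHZ * multiplier

# e.g. an unlocked Ryzen 9 7900 taken from a 37x to a 52x multiplier
print(core_clock_mhz(37))  # 3700 MHz
print(core_clock_mhz(52))  # 5200 MHz
```

This is why an unlocked multiplier "simplifies overclocking greatly": you dial in the target frequency directly instead of touching the reference clock, which other buses share.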


https://www.pcworld.com/article/705...verclock-amds-next-gen-b650-motherboards.html

"You can overclock on AMD’s next-gen B650 motherboards."




Intel B760 and H770 lack AC_LL / DC_LL overclock adjustments. https://en.wikipedia.org/wiki/LGA_1700

SbLt2KY.png



All AMD Zen 4 chipsets, X670/X670E included, provide at least a dedicated PCIe 5.0 x4 NVMe connection separate from the PCIe 5.0 x16 graphics lanes. Intel Raptor Lake is missing this feature.

Unlike Intel desktop motherboards, some AMD X670E motherboards support UDIMM ECC e.g. ASUS X670E SKUs.

Intel LGA 1700 is a dead-end platform.

2. https://blender.community/c/today/MjP1/?sorting=hot
Intel Embree is a library that provides optimized ray-tracing code using Intel CPU extensions (e.g. SSE, AVX, AVX2, and AVX-512 instructions).

https://www.techspot.com/review/2552-intel-core-i9-13900k/


yLHgh67.png


Unlike Cinema4D R23, Blender 3D is a free download.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
1. Unlike Intel's 13900 (non-K), AMD's 7900 (non-X) can be overclocked by the multiplier.

https://www.techpowerup.com/cpu-specs/ryzen-9-7900.c2961#:~:text=You may freely adjust the,with a dual-channel interface.
You may freely adjust the unlocked multiplier on Ryzen 9 7900, which simplifies overclocking greatly, as you can easily dial in any overclocking frequency.

https://www.pcworld.com/article/705...verclock-amds-next-gen-b650-motherboards.html

You can overclock on AMD’s next-gen B650 motherboards.


Intel B760 lacks AC_LL / DC_LL adjustments.

2. https://blender.community/c/today/MjP1/?sorting=hot
Intel Embree is a library that provides optimized ray-tracing code using Intel CPU extensions (e.g. SSE, AVX, AVX2, and AVX-512 instructions).
Again, I'm not sure what I'm supposed to do with this information.
 

rnlval

Member
Again im not sure what im supposed to do with this information.
1. AMD doesn't follow Intel's K vs non-K CPU segmentation, nor Intel's Zx90 vs Bx60 vs Hx70 chipset segmentation.

2. Intel's own AVX-512 push has been scrapped. Unlike ARM, Intel didn't properly plan its big.LITTLE-style hybrid design, since the P-cores' AVX-512 was disabled because of the glued-on E-cores.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
1. AMD doesn't follow Intel's CPU K vs non-K and Intel chipset Zx90 vs Bx60 vs Hx70 product segmentation.

2. Intel's own AVX-512 push is recycled. Unlike the ARM, Intel didn't properly plan for BIG.small multi-CPU design since P-cores' AVX-512 was disabled due to superglued E-cores.
I know all this.
I'm not sure what your point is?
 

rnlval

Member
I know all this.
Im not sure what your point is?

You posted

And the 7900X is 550 MSRP
The 13900K is 590 MSRP
The 7950X is 700 MSRP.
Direct competitors are
7900X to 13900K.
7950X to 13900KS.

I posted

From https://pcpartpicker.com/products/cpu/#F=99,101&C=8,64&sort=price&page=1 (location USA)

For multiplier-unlocked CPU SKUs with integrated graphics.

Intel Core i9-13900K = $594.99
Intel Core i7-13700K = $421.96

VS

AMD Ryzen 9 7950X = $574.00
AMD Ryzen 9 7900X = $419.99
AMD Ryzen 9 7900 = $429.00

https://www.techpowerup.com/cpu-specs/ryzen-9-7900.c2961#:~:text=You may freely adjust the,with a dual-channel interface.
You may freely adjust the unlocked multiplier on Ryzen 9 7900, which simplifies overclocking greatly, as you can easily dial in any overclocking frequency.


https://www.pcworld.com/article/705...verclock-amds-next-gen-b650-motherboards.html

"You can overclock on AMD’s next-gen B650 motherboards."




Intel B760 and H770 lack AC_LL / DC_LL overclock adjustments. https://en.wikipedia.org/wiki/LGA_1700

SbLt2KY.png


All AMD Zen 4 and X670/X670E chipsets include at least separate PCIe 5.0 4X NVMe from graphics PCIe 5.0 16X lanes. Intel Raptor Lake is missing this feature.

Unlike Intel desktop motherboards, some AMD X670E motherboards support UDIMM ECC e.g. ASUS X670E SKUs.

AMD doesn't follow Intel's CPU K vs non-K and Intel chipset Zx90 vs Bx60 vs Hx70 product segmentation.


Intel LGA 1700 is a dead-end platform.


Stop dodging the issue.

---------------
 

thuGG_pl

Member
You posted "And one thing was very weird, AIDA64 memory benchmarks reported very poor latency results for RAM and L1/2/3 caches, (first screenshot below)."
I know, and I also described how I got rid of the problem. And then you went and posted your timings; what was the point?
 

rnlval

Member
I know, and I also described how I got rid of the problem. And you went and posted your timings, what was the point?

Your ASRock B650 PG Lightning motherboard problem didn't show up on my ASUS TUF X670E Plus WiFi or my ROG Crosshair X670E Hero.


This is the standard XMP profile (via DOCP) for Team T-Force Delta RGB DDR5-5600 32GB (2x16GB) FF4D532G5600HC36BDC01, CL36-36-36-76, on my ASUS TUF Gaming X670E Plus WiFi:

uY0nirP.png



I replaced the Team T-Force Delta RGB DDR5-5600 FF4D532G5600HC36BDC01 with G.Skill Trident Z5 Neo RGB DDR5-6000 32GB (2x16GB) F5-6000J3038F16GX2-TZ5NR AMD EXPO CL30-36-36-96 and tightened the memory timings.

-------------------



An example of gaming variance between X670 motherboards.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Intel LGA 1700 is a dead-end platform.


Stop dodging the issue.

---------------
What issue am I dodging?

Intel has been a tick-tock company since like Sandy Bridge.
Every 2 or 3 generations brings a new chipset.
I know this already.
If you got LGA1700 with Alder Lake, you had an upgrade path to Raptor Lake and still have a further upgrade path to Raptor Lake Refresh.
After that we're at Arrow Lake, which is the disaggregated CPUs... that's like 2025. Assuming your Raptor Lake Refresh CPU is inexplicably too weak by then, you can move to LGA1851 and be ready for Lunar Lake.

I know AMD allows overclocking across the roster.
I listed MSRP prices... no one bought the 7000 series, so they got severe price cuts (a good thing; punish companies for overcharging on the platform and all).

What's that got to do with the 139K being able to beat the 7950X even with its lower P-core count?
No AVX-512 on the 139K... who gives a shit; most programs don't use AVX-512, so it's borderline irrelevant.
You posted that Cinema4D and most renderers don't use AVX-512 because the clock-down would negate the benefits... that kinda proves my point.
Who gives a shit about AVX-512?
But again, what's that got to do with the 139K being able to beat the 7950X?
 
Lower sales of the 7000 series have more to do with motherboard prices and AMD's own 5000 series eating into sales. I'm about to buy a 5700X for £190, for example. Crazy good value.
 

Justin9mm

Member
About to start my 7700X build with a 3080. It cost a bit more, needing an AM5 mobo and DDR5, but I'm hoping it will keep me going with 1440p and VR gaming for a couple of years before I need to upgrade the GPU.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Intel abandoned its Tick-Tock strategy back in 2016.

Tech.png
That's the lie they spun because they couldn't shrink their process fast enough and had to sell a "new" generation.
The Skylake years were quite rowdy for Intel.

If their plans through Lunar Lake work out, we might be having a bunch of ticks and no tocks for a while.
 

winjer

Gold Member
That's the lie they spun because they couldn't shrink their process fast enough and had to sell a "new" generation.
The Skylake years were quite rowdy for Intel.

If their plans through Lunar Lake work out, we might be having a bunch of ticks and no tocks for a while.

Intel is not going from node to node. They're jumping from node to half node, then node, then half node again.

And let's not forget we're getting a refresh of Raptor Lake, not a new arch, for 2023.

So in neither case is Intel following the tick-tock strategy.
 
Last edited: