
Leak: AMD Ryzen 9950X wrecks 7950X in Cinebench

DenchDeckard

Moderated wildly
So, I've always had Intel in my personal PCs, but I bought a 7800X3D for my daughter with a B650 MAG Tomahawk mobo. The 9800X3D will just drop in, right?

AMD kills it with forward compatibility.
 

SmokSmog

Member
This will age badly.

 

FingerBang

Member
I doubt I'll upgrade from the 7950x3D anytime soon (I'm literally never CPU limited at the resolution I play at), but goddamn that's a spicy jump.

I was thinking of building a second PC, so I might go for one of the non-3D variants.
 

FireFly

Member
Zen 5 being on N4P would then make it extra disappointing if the PS5 Pro doesn't use it, considering it's the expected process node for the Pro
Not really, since Sony has a limited transistor budget and it makes more sense to spend that budget on the GPU.
 

FireFly

Member

Zen 5 shouldn't be that much more expensive in 2024 compared to Zen 2 in 2020, and the Pro is gonna be sold at a premium
Zen 4 has 73% more transistors per chiplet than Zen 2, so extrapolating the trend, we would expect Zen 5 to have about 2.5x as many as Zen 2. That's almost 6 billion extra transistors you could spend on the GPU instead. If the Pro has 60 CUs + Zen 2, then a Zen 5 version might only have 48 CUs.
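A rough sketch of that arithmetic, using approximate die-shot estimates for chiplet transistor counts (not official AMD figures):

```python
# Back-of-envelope transistor budget; chiplet counts are approximate
# public estimates, not official AMD numbers.
zen2_ccd = 3.8e9                     # ~3.8B transistors per Zen 2 chiplet
zen4_ccd = zen2_ccd * 1.73           # ~73% more per chiplet than Zen 2
zen5_ccd = zen2_ccd * 2.5            # extrapolated: ~2.5x Zen 2
freed_for_gpu = zen5_ccd - zen2_ccd  # budget saved by sticking with Zen 2
print(f"~{freed_for_gpu / 1e9:.1f}B transistors left for the GPU")  # ~5.7B
```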

Edit: meant Zen 5, not RDNA 5.
 
Last edited:

DaGwaphics

Member
That looks like a nice upgrade. If they did that while keeping the power draw in check, that is quite an accomplishment.
 

SScorpio

Member
Intel here, but my experience with AMD (both CPU and GPU) is that they run fast, buggy, and hot
Compared to Nvidia on the GPU side, they do use more power and run hotter. But on the CPU side, it's Intel who has the hotter chips. The 14900K can hit over 250W while turboing and requires a high-performance cooling solution. AMD, on the other hand, sips power. For gaming, AMD's CPUs also respond well to undervolting, which lowers their power draw from stock; and since the chip then produces less heat, it maintains higher boost clocks for longer. AMD is just conservative when it comes to power requirements.
 

SHA

Member
This will age badly.

I despise dudes who only fantasize about upcoming tech but don't appreciate released products for what they are. It's not like he's taking shortcuts; his content is basically useless because he's not really into the practical use of these products. It's stupid and only serves the ads he gets sponsored for. The content he's delivering isn't meaningful: he's not telling us why these products are amazing, he's just fantasizing about them.
 
Last edited:

Danknugz

Member
Compared to Nvidia on the GPU side, they do use more power and run hotter. But on the CPU side, it's Intel who has the hotter chips. The 14900K can hit over 250W while turboing and requires a high-performance cooling solution. AMD, on the other hand, sips power. For gaming, AMD's CPUs also respond well to undervolting, which lowers their power draw from stock; and since the chip then produces less heat, it maintains higher boost clocks for longer. AMD is just conservative when it comes to power requirements.
Sounds like cherry-picking, because the 14900K is clearly meant for performance at the cost of power consumption. What about all their other chips? AMD plays a cheap marketing strategy and tries to court gamers who want the most bang for their buck; their drivers cut corners and have huge compatibility issues outside gaming. For example, PyTorch only supports AMD GPUs on Linux.
 
Last edited:

SScorpio

Member
Sounds like cherry-picking, because the 14900K is clearly meant for performance at the cost of power consumption. What about all their other chips? AMD plays a cheap marketing strategy and tries to court gamers who want the most bang for their buck; their drivers cut corners and have huge compatibility issues outside gaming. For example, PyTorch only supports AMD GPUs on Linux.
OK, then look at the 14700K vs. the 7800X3D power draw during gaming. AMD draws at most half the power, and as little as a third of Intel's in some games.

https://www.techpowerup.com/review/intel-core-i7-14700k/23.html

AMD has been failing to compete against NVIDIA for a long time, but it's Intel who is struggling when it comes to CPUs. Hopefully AMD can pull a Ryzen in the GPU space and create something that matches NVIDIA's best, but right now they need to reach feature parity. Intel could be a dark horse and end up beating AMD, but who knows what the future will bring. Just as long as we get something that makes NVIDIA compete, so we get new cards like the 1080 Ti rather than NVIDIA just sandbagging generation to generation.
 

FireFly

Member
Sounds like cherry-picking, because the 14900K is clearly meant for performance at the cost of power consumption. What about all their other chips? AMD plays a cheap marketing strategy and tries to court gamers who want the most bang for their buck; their drivers cut corners and have huge compatibility issues outside gaming. For example, PyTorch only supports AMD GPUs on Linux.
The 13400F is competitive but other Intel chips get soundly beaten. Nothing can touch the efficiency of the 7800X3D in gaming.


Zen 4 is designed to hit 90C safely and the 13th gen Intel chips are similar.
 

Celcius

°Temp. member
My new PC this fall is going to feel like a generational leap coming from my 10700K + 3090. Hopefully Samsung or WD have their PCIe 5.0 SSDs out by then too.
 
Last edited:

SScorpio

Member
Not up to date on AMD things, but I also just found this about Ryzen CPUs and PyTorch: basically, they run slow to the point of almost not being worth it.

That thread is from over three years ago and discusses issues where PyTorch isn't detecting CPU features correctly, and thus ends up using slower code.

The testing is on Zen 2, which is two generations behind. I don't work with machine learning, but AMD has ZenDNN, which supposedly integrates with and accelerates PyTorch.

https://www.amd.com/en/developer/zendnn.html
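If you want to check what your own build is doing, a minimal sketch using standard PyTorch introspection (nothing AMD-specific assumed):

```python
import torch

# Show which CPU math backends this PyTorch build will actually use;
# the old AMD slowdowns came from the Intel MKL code path.
print("MKL available:", torch.backends.mkl.is_available())
print("oneDNN available:", torch.backends.mkldnn.is_available())
print(torch.__config__.show())  # full build config, incl. detected CPU capability
```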
 

S0ULZB0URNE

Member
It's not. While downloading files, Steam decompresses a ton of data, because games are stored compressed on its servers to save space and bandwidth.
And if someone has a fast internet connection, it means the CPU will always have work to do, decompressing files.
And of course, a 7800X3D has double the cores of a 6700K, plus higher clock speeds and a higher TDP. So the CPU can heat up a bit during Steam downloads.
Still, 60ºC is nothing special for modern CPUs and no reason to worry.
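A toy illustration of that decompression point (sizes here are arbitrary; it just shows the work is pure CPU):

```python
import time
import zlib

# Decompressing a stream is pure CPU work, so a fast connection
# can keep a core busy for the length of a download.
data = zlib.compress(b"some game asset bytes " * 20_000_000)  # ~440 MB raw
start = time.perf_counter()
zlib.decompress(data)
print(f"one core decompressed ~440 MB in {time.perf_counter() - start:.2f}s")
```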
At most I hit the 40s, air-cooled...
 

OverHeat

« generous god »
7950X3D, 3Gbps internet connection, Gen 5 SSD, and I hit the high 50s when downloading on Steam with a 360 AIO
 

twilo99

Member
My 7800X3D kicks ass, but I will say that the fans spin up the moment I move my mouse. Opening File Explorer makes more noise than a full game under load did on my old CPU (6700K).

A mild undervolt might help quite a bit. Great CPUs, but they still require extra setup work from the user, which isn't great.
 
Last edited:

simpatico

Member
The 13400F is competitive but other Intel chips get soundly beaten. Nothing can touch the efficiency of the 7800X3D in gaming.


Zen 4 is designed to hit 90C safely and the 13th gen Intel chips are similar.
A question I've been wondering about: what has changed about the physical nature of the chips? Before this chip, I think every CPU I've ever owned had a TjMax in the 60s. If the silicon is the same and companies are just comfortable with higher temps, could old chips with a 63ºC TjMax be safely overridden to run at 90ºC?
 

winjer

Gold Member
A question I've been wondering about: what has changed about the physical nature of the chips? Before this chip, I think every CPU I've ever owned had a TjMax in the 60s. If the silicon is the same and companies are just comfortable with higher temps, could old chips with a 63ºC TjMax be safely overridden to run at 90ºC?

Intel changed how they rate the temperature limits.
Up until the 6000 series (Skylake), Intel reported something called Tcase. For example, for the 6700K, the Tcase is 64ºC.
Intel describes this as "Case Temperature is the maximum temperature allowed at the processor Integrated Heat Spreader (IHS)."
But with the 7000 series (Kaby Lake), which is just an overclocked Skylake, they started reporting Tjunction. For example, the 7700K has a Tjunction of 100ºC.
Intel describes this as "Junction Temperature is the maximum temperature allowed at the processor die."

So the thing is, Intel just started reporting thermal limits at different places.
And of course, temperatures measured at the IHS will be lower than if measured at the CPU die.
Chances are, those 64ºC Tcase correspond roughly to those 100ºC Tjunction.
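If you want to see the Tjunction-style sensors your own system exposes, a minimal sketch with the third-party psutil package (sensor readings are Linux-only):

```python
import psutil  # pip install psutil; sensors_temperatures() works on Linux

# Modern monitoring reads per-die Tjunction sensors, not the old Tcase spec.
for chip, sensors in psutil.sensors_temperatures().items():
    # e.g. "k10temp" on Ryzen, "coretemp" on Intel
    for s in sensors:
        print(chip, s.label or "die", f"{s.current:.1f}°C")
```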
 

Danknugz

Member
That thread is from over three years ago and discusses issues where PyTorch isn't detecting CPU features correctly, and thus ends up using slower code.

The testing is on Zen 2, which is two generations behind. I don't work with machine learning, but AMD has ZenDNN, which supposedly integrates with and accelerates PyTorch.

https://www.amd.com/en/developer/zendnn.html

Right, read the thread, my dude. The OP is asking why PyTorch runs slower with AMD, as if the onus is on individual developers to make up for shortcomings in drivers. The top response basically explains that the onus is on the manufacturer that provides the drivers, which is something Intel and NVIDIA have always done and AMD has not.

I'm not familiar with this other thing you linked to, but I would take a closer read yourself; it seems like it's just for inference on CPUs. I wouldn't grasp at straws in this kind of argument. It's clear AMD is not that kind of company and is always just playing catch-up. They would never have innovated this tech on their own, which gets back to my original argument that they can't even write decent drivers.
 

SScorpio

Member
Right, read the thread, my dude. The OP is asking why PyTorch runs slower with AMD, as if the onus is on individual developers to make up for shortcomings in drivers. The top response basically explains that the onus is on the manufacturer that provides the drivers, which is something Intel and NVIDIA have always done and AMD has not.

I'm not familiar with this other thing you linked to, but I would take a closer read yourself; it seems like it's just for inference on CPUs. I wouldn't grasp at straws in this kind of argument. It's clear AMD is not that kind of company and is always just playing catch-up. They would never have innovated this tech on their own, which gets back to my original argument that they can't even write decent drivers.
Perhaps you should stop telling other people to read, and read the things presented to you first. The page I linked has compiled versions of the PyTorch runtime that were optimized by AMD for their hardware. Your several-years-out-of-date argument was that the math library Intel wrote and optimized for their CPUs wasn't performing well on AMD CPUs. Well, AMD wrote their own library to fix this. CPUs don't really have drivers like you keep going on about. There could be a driver to interface with some specific feature, but not for general-purpose compute; instead you have libraries that programs need to be compiled or linked against for the hardware to be utilized.
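To make the library-vs-driver point concrete, here's a quick check using NumPy as a stand-in (it links against a BLAS math library the same way):

```python
import numpy as np

# CPU math performance comes from which library a program was built
# against, not from a "CPU driver". This prints the BLAS/LAPACK
# (MKL, OpenBLAS, BLIS, ...) that this particular NumPy build links to.
np.show_config()
```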

AMD has done well in present-day HPC, and many companies are moving away from Intel due to AMD being able to fit 96 cores with 192 threads in a single socket, for a CPU you can buy today. Intel previously maxed out at 56 cores and 112 threads. Supposedly Sierra Forest is releasing in the next three months with up to 288 cores, but those are all Intel E-cores. It could still be an amazing processor depending on your workload, but it will be clocked lower and is feature-wise cut down from the standard performance cores.

But I get it: your world view is that an Intel and NVIDIA system is the bestest, and the AMDumbs don't know what they're talking about. AMD just can't innovate and create industry standards like Vulkan or FSR, except they did. Or create the hardware for all of the video game consoles outside of the almost-decade-old Switch. So far a single handheld PC has been released with Intel hardware, and it's a complete flop even compared to the over-two-year-old Steam Deck.
 

Danknugz

Member
Perhaps you should stop telling other people to read, and read the things presented to you first. The page I linked has compiled versions of the PyTorch runtime that were optimized by AMD for their hardware. Your several-years-out-of-date argument was that the math library Intel wrote and optimized for their CPUs wasn't performing well on AMD CPUs. Well, AMD wrote their own library to fix this. CPUs don't really have drivers like you keep going on about. There could be a driver to interface with some specific feature, but not for general-purpose compute; instead you have libraries that programs need to be compiled or linked against for the hardware to be utilized.

AMD has done well in present-day HPC, and many companies are moving away from Intel due to AMD being able to fit 96 cores with 192 threads in a single socket, for a CPU you can buy today. Intel previously maxed out at 56 cores and 112 threads. Supposedly Sierra Forest is releasing in the next three months with up to 288 cores, but those are all Intel E-cores. It could still be an amazing processor depending on your workload, but it will be clocked lower and is feature-wise cut down from the standard performance cores.

But I get it: your world view is that an Intel and NVIDIA system is the bestest, and the AMDumbs don't know what they're talking about. AMD just can't innovate and create industry standards like Vulkan or FSR, except they did. Or create the hardware for all of the video game consoles outside of the almost-decade-old Switch. So far a single handheld PC has been released with Intel hardware, and it's a complete flop even compared to the over-two-year-old Steam Deck.
It wasn't my argument, it was just a discussion I linked to, and how old it is is irrelevant because it involves AMD's approach to writing their drivers; I'm not going to spoon-feed it to you. What you linked to seems to only involve inference; maybe you should read up on that, since you said yourself you don't know much about ML. Of course CPUs have chipset drivers, my dude.
 

xVodevil

Member
Well, with a 7800X3D and a decent GPU I don't think I should be bothered at the moment, though it'll definitely be nice if they keep up the pace for the next gen or two. I'm more interested in what's in store with the upcoming RTX 5xxx GPUs.
 

Codeblew

Member
It wasn't my argument, it was just a discussion I linked to, and how old it is is irrelevant because it involves AMD's approach to writing their drivers; I'm not going to spoon-feed it to you. What you linked to seems to only involve inference; maybe you should read up on that, since you said yourself you don't know much about ML. Of course CPUs have chipset drivers, my dude.
Key word: chipset. That isn't a CPU driver, it's a chipset driver. You know, for the other chips on the motherboard.
 

Gaiff

SBI’s Resident Gaslighter
Depends entirely on the GPU and rendering resolution.
Of course, but the Zen 3 CPUs are so good that unless you really have bottom-tier parts, you'll very likely be GPU-limited. The upcoming series is good for people on Zen 2 or before. Zen 3 owners won't need to upgrade for at least another generation or two. Hell, you can ride high-end CPUs close to a decade and remain GPU-limited provided you target high enough resolutions.
 

marquimvfs

Member
Can anyone who owns an AMD Zen rig tell me the different issues between Intel and AMD? 'Cause I'm hesitant to switch to AMD.
I use an AMD rig and I sell computers with them. I've sold more AMD than Intel since Zen 2, and after putting together hundreds of computers with Ryzens (and sometimes low-end Athlons too), I can assure you, you won't regret switching. No problems at all. Intel since 11th gen is what AMD was with the FX line: hot, expensive, and the higher-end line is problematic.
 
Last edited: