
DF: Unreal Engine 5 Matrix City Sample PC Analysis: The Cost of Next-Gen Rendering

sircaw

Banned
I have worked on realtime game loops for 5yrs now at Lockheed Martin on their missile simulation programs.

Your comment is ridiculous; no real programmer would ever separate film graphics (the science of graphics) from realtime graphics programming. It's like saying a buddy of mine who worked at Disney and now works at Nvidia on their OptiX program has no insight into what goes on in a realtime graphics pipeline.

I won't even read the rest of your post as I will assume it's just trolling. You can think I have no credible knowledge all you want, but I get plenty of invites from game companies to join their graphics team. They must be stupid.
Not to be rude, but are you not meant to sign nondisclosure and secrecy contracts when working with companies like this?

I mean, is it wise to disclose this type of information on a forum, any forum for that matter?

Or is the work you do not of that sensitive nature?
 
They give stats on resolution and framerates; the fact that they saw a machine doesn't change those stats.
How naive. They pick and choose what to show all the time; it's very opinion based. They even changed how they made their videos over time so they would become more opinion based.

I never said they were lies and that they were faking everything, I just have a problem with their credibility after it became obvious how close they are to MS (a company caught astroturfing multiple times already, it wouldn't be out of character).

You believe whatever you want to believe, my face is tired from this back and forth.
 

VFXVeteran

Banned
Not to be rude, but are you not meant to sign nondisclosure and secrecy contracts when working with companies like this?

I mean, is it wise to disclose this type of information on a forum, any forum for that matter?

Or is the work you do not of that sensitive nature?
I can disclose my position and a general description of what I work on. It's all publicly advertised on LinkedIn. If it wasn't allowed, LMT would have told me years ago.
 

assurdum

Banned
I just can't believe they didn't do a video on your claim that 30fps games play at 60fps on certain TVs either.
 

Elog

Member
I am not sure it is correct to call this CPU bound just because the GPU is not completely utilised. If additional computational capacity were required, Epic would have utilised multiple cores and not one thread/core. I think it is I/O limited, which is why they see a good correlation with frequency.

If it is I/O limited you would also see quite large differences in PC performance when comparing CPUs with comparable single-threaded performance but large differences in cache size. I hope someone does that test.
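For anyone who wants to poke at that themselves, here's a minimal C++ sketch (purely illustrative, not from the demo or the video): a dependent pointer-chase measures average load latency, which jumps once the working set outgrows the last-level cache, and it gains nothing from extra cores because every load depends on the previous one.

```cpp
// Illustrative only: average latency per dependent load for growing working sets.
// Once the set exceeds the CPU's last-level cache, ns/hop rises sharply -- the
// cache-size sensitivity described above. Core count is irrelevant to this loop.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

static double chase_ns(size_t nodes) {
    std::vector<size_t> next(nodes);
    std::iota(next.begin(), next.end(), size_t{0});
    // Sattolo's algorithm: one big random cycle, so the chase touches every
    // element before repeating (maximises misses on large sets).
    std::mt19937_64 rng{42};
    for (size_t k = nodes - 1; k > 0; --k) {
        std::uniform_int_distribution<size_t> pick(0, k - 1);
        std::swap(next[k], next[pick(rng)]);
    }
    size_t i = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t s = 0; s < nodes; ++s) i = next[i];   // serial dependent loads
    auto t1 = std::chrono::steady_clock::now();
    volatile size_t sink = i; (void)sink;             // keep the loop from being optimised away
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / nodes;
}

int main() {
    // Working sets from ~2 MiB (cache-resident on many CPUs) to ~256 MiB (DRAM).
    for (size_t nodes : {1u << 18, 1u << 21, 1u << 24, 1u << 25})
        std::printf("%8zu KiB: %5.1f ns/hop\n", nodes * sizeof(size_t) / 1024, chase_ns(nodes));
}
```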
 

Three

Member
I am not sure it is correct to call this CPU bound just because the GPU is not completely utilised. If additional computational capacity was required, Epic would have utilised multiple cores and not one thread/core. I think it is I/O limited and hence why they see a good correlation with frequency.

If it is I/O limited you would also see quite large differences in PC performance when comparing CPUs with comparable single-threaded performance but large differences in cache size. I hope someone does that test.

He only makes that claim because he incorrectly thinks it's not optimised for multicore processors and is therefore bottlenecked by the CPU itself, i.e. "CPU bound". It's actually bottlenecked mostly by I/O, and he can test this with higher-performance memory on different setups. He should have seen this in other engines already, even in Remedy's Control. There is nothing "odd", "not for modern CPUs" or "unoptimised" about it.
 

SatansReverence

Hipster Princess
😄 Right, "sony warriors" . What would they gain from it considering that the CPU is clocked lower?
What? Like, literally what are you even trying to say here?

Such a stupid thing to say from an xbox warrior.
The irony

What you're not understanding is the idea that everything can be done asynchronously, and that UE5 is "odd" or "unoptimised" or "single threaded" when pretty much every current RT-capable engine would behave the same.
Tell me you have no idea what you're talking about without actually saying it.

UE5 is evidently poorly threaded, as per Alex's testing, where lower core counts made little difference but decreasing clocks made a significant difference.

And do tell, what even does RT have to do with it beyond the significant increase in CPU load when enabling it?

If Alex knew anything, what he should actually check is higher-performance memory.

Do tell me what memory offers near 100% uplifts, I want a good laugh.
It doesn't matter how many damn cores you have in most engines. Are you one of those people who buy a threadripper expecting massive gains in your games too?

just off the top of my head,

asscreed, significantly multi threaded.
Battlefield, significantly multi threaded.
Metro, significantly multi threaded.
Red Dead, significantly multi threaded.
Tomb Raider, significantly multi threaded.

And the most impressive display of raytracing to date, Cyberpunk: you guessed it, significantly multi threaded.

Are you upset and wildly projecting because you're still running a core 2 duo?
 
Not to be rude, but are you not meant to sign nondisclosure and secrecy contracts when working with companies like this?
Everybody knows that there must be some kind of missile simulation thing going on... it's not like he spat the code and schematics onto GitHub or something.

I have a friend who did some path finding programming on military HW 20 years ago (not in the US). This is literally all I know about his work, all he could ever say.
 

Three

Member
What? Like, literally what are you even trying to say here?


The irony
It's pretty self explanatory. Why would a "sony warrior" in your eyes advocate an engine requiring higher clock CPU speed over more cores if the CPU clock speed is lower on a PS5?
It's you projecting your nonsense here.

"Significantly multithreaded"

Go bench the games you listed on a 10900K. First cut your cores in half (therefore still having 10 threads available) and see what percentage drop in performance you experience. Then cut your clocks in half and tell me what percentage loss in performance you get. Then kindly get lost. Who mentioned "100% uplifts" anywhere either?
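Incidentally, the cores half of that test doesn't need a BIOS trip. A rough Win32 sketch (a hypothetical helper, not anyone's actual tooling from this thread) can pin an already-running game to the first N logical processors before the benchmark scene is re-run; halving the clock is a separate step, e.g. a capped "maximum processor state" in the Windows power plan.

```cpp
// Hypothetical helper: limit a running process to the first N logical CPUs.
// Machines with more than 64 logical processors need processor groups instead.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc != 3) { std::printf("usage: pin <pid> <logical_cpus>\n"); return 1; }
    DWORD pid = static_cast<DWORD>(std::atoi(argv[1]));
    int cpus  = std::atoi(argv[2]);
    if (cpus < 1) { std::printf("logical_cpus must be >= 1\n"); return 1; }

    const int maskBits = static_cast<int>(sizeof(DWORD_PTR) * 8);
    DWORD_PTR mask = (cpus >= maskBits) ? ~DWORD_PTR{0} : ((DWORD_PTR{1} << cpus) - 1);

    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION, FALSE, pid);
    if (!h) { std::printf("OpenProcess failed: %lu\n", GetLastError()); return 1; }

    if (SetProcessAffinityMask(h, mask))
        std::printf("pid %lu limited to %d logical CPUs\n", pid, cpus);
    else
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

    CloseHandle(h);
}
```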
 

SatansReverence

Hipster Princess
It's pretty self explanatory. Why would a "sony warrior" in your eyes advocate an engine requiring higher clock CPU speed over more cores if the CPU clock speed is lower on a PS5?
It's you projecting your nonsense here.

It's more about sony warriors having a big ol' REEEEEEEEEEEEEEE fest over DF findings.
"Significantly multithreaded"

Go bench the games you listed on a 10900K. First cut your cores in half (therefore still having 10 threads available) and see what percentage drop in performance you experience. Then cut your clocks in half and tell me what percentage loss in performance you get.
Just because I like to clown you lot,

16 Threads, e-cores limited
[benchmark screenshot]


8 threads
[benchmark screenshot]


Wow, that's quite a lot more than the ~10% difference that Alex showed, isn't it? It's... it's almost like he's right.

And for fun

4 threads
[benchmark screenshot]


Because it isn't 2005 any more and games are indeed highly threaded workloads now.

Then kindly get lost.

After you
Who mentioned "100% uplifts" anywhere either?

You did, the moment you tried to act as though a demonstration of performance for clocks vs threads was "nonsense" while trying to bring ram into the equation 🤡
 

ethomaz

Banned
UE5 and previous versions are heavily optimized for multithreading.

I believe you guys are missing the point: Alex is talking about Lumen/Nanite being optimized for single-threaded high clocks, not UE5 itself, which is basically UE4 with these features disabled.
 

VFXVeteran

Banned
It's more about sony warriors having a big ol' REEEEEEEEEEEEEEE fest over DF findings.

Just because I like to clown you lot,

16 Threads, e-cores limited
[benchmark screenshot]


8 threads
[benchmark screenshot]


Wow, that's quite a lot more than the ~10% difference that Alex showed, isn't it? It's... it's almost like he's right.

And for fun

4 threads
[benchmark screenshot]


Because it isn't 2005 any more and games are indeed highly threaded workloads now.



After you


You did, the moment you tried to act as though a demonstration of performance for clocks vs threads was "nonsense" while trying to bring ram into the equation 🤡
The term "highly" threaded means that you are assuming the thread count is more sensitive to FPS than clock frequency. How about comparing that test with decreasing clock freq by 1/2. I'm curious actually. If there is MORE FPS loss than the count reduction, then you can't claim "highly" threaded (i.e. thread count > clock frequency).
 

SatansReverence

Hipster Princess
The term "highly" threaded means that you are assuming the thread count is more sensitive to FPS than clock frequency. How about comparing that test with decreasing clock freq by 1/2. I'm curious actually. If there is MORE FPS loss than the count reduction, then you can't claim "highly" threaded (i.e. thread count > clock frequency).
I'm not claiming there is a 1-to-1 relation between core count and performance. No one has, in fact. Not me, not Alex.

The problem is the 4% delta in the UE5 demo.

4% lower performance for a 50% drop in available cores shows a clear lack of multithreaded optimisation. The End.
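As a back-of-envelope check on what that 4% can and can't tell you, here's a short sketch (my own numbers-on-a-napkin, using only the 50%-cores/4%-fps figures above and a naive Amdahl model that ignores GPU and I/O waits):

```cpp
// Naive Amdahl's-law estimate: if halving cores from 10 to 5 costs only ~4% fps,
// what fraction p of the (single-core-normalised) CPU frame time actually scales
// with core count?  Model: t(n) = (1 - p) + p / n, solve t(5) / t(10) = 1 / 0.96.
#include <cstdio>

int main() {
    const double n1 = 10.0, n2 = 5.0;
    const double fps_ratio = 0.96;        // 4% lower fps on half the cores
    const double r = 1.0 / fps_ratio;     // frametime grows by ~4.2%

    const double p = (1.0 - r) / ((1.0 - 1.0 / n2) - r * (1.0 - 1.0 / n1));
    std::printf("implied core-scalable share of CPU frame time: %.0f%%\n", p * 100.0);
    // Prints roughly 30%. Whether that low share means "poorly threaded" or
    // "mostly waiting on something other than ALU work" is exactly the argument
    // running through this thread.
}
```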
 

Three

Member
It's more about sony warriors having a big ol' REEEEEEEEEEEEEEE fest over DF findings.

Just because I like to clown you lot,

16 Threads, e-cores limited
[benchmark screenshot]


8 threads
[benchmark screenshot]


Wow, that's quite a lot more than the ~10% difference that Alex showed, isn't it? It's... it's almost like he's right.

And for fun

4 threads
[benchmark screenshot]


Because it isn't 2005 any more and games are indeed highly threaded workloads now.



After you


You did, the moment you tried to act as though a demonstration of performance for clocks vs threads was "nonsense" while trying to bring ram into the equation 🤡
Riiight, a REEE fest over DF findings. What does that even mean?

Lol, 4 threads? Are you having a laugh? Read fully and look at the bolded below:
It's not a single-threaded engine. There are things you can't do asynchronously in a frame, so faster clocks always gain you quicker frametimes even if it's completely multithreaded. Faster clocks always give you faster frametimes if you're CPU bound; more cores rarely do. Only if you don't have enough cores does it become a problem.
This is what you think a "Sony warrior" REEE fest is? Just stating some facts?

Why didn't you do the 10900k 20 vs 10 threads benchmark?
Look at the CPU bound CP2077 benchmark below. Notice above 8 cores it makes fuck all difference? "Lightly multithreaded", am I right?

Notice only an 8% drop going from 12 cores (24 threads) down to 6 cores (12 threads) in the benchmark?

[CP2077 CPU core-scaling benchmark chart]


Now show me your half clockspeed results. Where did those go?
see what a massive difference that makes.

Now test some high performance memory on the UE5 demo and see what percentage gains you get in comparison to this dumb "UE5 isn't properly multithreaded" nonsense. You will see that the CPU becomes less of a problem. You can't do everything asynchronously, sometimes you're waiting for something else.

All I'm saying is that the benchmark isn't odd, UE5 isn't "lightly multithreaded" or "unoptimised" and that most engines would actually see similar results.
 

VFXVeteran

Banned
I'm not claiming that there is a 1 to 1 relation of core count vs performance. No one has in fact. Not me, not Alex.

The problem is the 4% delta in UE5 demo.

4% lower performance for 50% drop in cores available shows a clear lack of multi threaded optimisation. The End.
The engine is highly multithreaded. You would have to dissect the specific algorithms at play to see where they implement a complex function without using more cores. We would have to dissect the Nanite code and the Lumen code to find out where the bottleneck is (if there is one). I can only guess, but perhaps they had to use a single thread for Lumen because of its specific algorithm and haven't added multithreading support yet. Or something in their algorithm HAS to be single threaded (i.e. reading in a large packet of data that must be processed serially).
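To make that concrete, here's a toy C++ frame loop (purely illustrative, nothing to do with Nanite or Lumen internals): one stage has a serial dependency and one splits cleanly across worker threads, so the total frame time only partly responds to core count, while a lower clock slows both stages.

```cpp
// Toy frame: serial_stage ignores 'workers' entirely; parallel_stage roughly
// scales with it. The measured frame time therefore moves far less than
// proportionally when the worker count is halved.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for work that must run on one thread (e.g. a stream that has to be
// consumed in order): each element depends on the previous one.
void serial_stage(std::vector<double>& scene) {
    double acc = 0.0;
    for (double& v : scene) { acc += v; v = acc * 0.5; }
}

// Stand-in for work that splits cleanly across cores (e.g. per-object updates).
void parallel_stage(std::vector<double>& scene, unsigned workers) {
    std::vector<std::thread> pool;
    const size_t chunk = scene.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const size_t lo = w * chunk;
        const size_t hi = (w + 1 == workers) ? scene.size() : lo + chunk;
        pool.emplace_back([&scene, lo, hi] {
            for (size_t i = lo; i < hi; ++i) scene[i] = scene[i] * 1.0001 + 1.0;
        });
    }
    for (auto& t : pool) t.join();
}

int main() {
    std::vector<double> scene(1u << 22, 1.0);       // ~32 MiB of "scene" data
    for (unsigned workers : {8u, 4u}) {
        const auto t0 = std::chrono::steady_clock::now();
        serial_stage(scene);                        // unaffected by core count
        parallel_stage(scene, workers);             // scales with core count
        const auto t1 = std::chrono::steady_clock::now();
        std::printf("%u workers: %.2f ms/frame\n", workers,
                    std::chrono::duration<double, std::milli>(t1 - t0).count());
    }
}
```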
 

SatansReverence

Hipster Princess
Only if you don't have enough does it become a problem.
Yea, and 5 cores is more than enough for the poorly multithreaded demo.

And why would I test clocks? Do you need someone to show the obvious, that clocks do affect performance? Who said otherwise?

Here, I'll type it out in big letters so you might have a chance of understanding the problem.

50% less threads, only 4% less performance

There, do you understand the problem now? Cool.

Oh, and apparently it needs to be pointed out to you that a 10900K is not a 16-core/32-thread Ryzen 3950X, which is just a straight-up worse CPU for gaming.
 

VFXVeteran

Banned
Yea, and 5 cores is more than enough for the poorly multi threaded demo.
I have a problem with this statement, as you have no idea how these algorithms work in code for you to say it's poorly multithreaded. There are many things in a pipeline that can't be multithreaded. Just because you can use multiple cores to solve a problem doesn't mean that approach can be applied to ANY problem, and in that situation clock frequency is the only way to gain more performance.
 

VFXVeteran

Banned
^

No one cares about the exact cause of the problem. It is still a problem.
I care. Because you could be stating a problem without understanding how the algorithms work. There are many armchair programmers on these boards who make ridiculous claims, as if they were supervisors with years of experience, that something is "unoptimized" when in fact it is not. It could simply be that the algorithm is too expensive for today's hardware - case in point - RT GI. The technique is simply more expensive than GI light probes. There is no amount of "optimization" on a low-end hardware configuration like the consoles that will remedy the expense. It's simply too expensive for said hardware and we need to wait for more powerful hardware.
 

Hoddi

Member
I'm not sure why people think this is lightly threaded. Disabling HT fully pegs my 9900K at 100%, and you can't otherwise judge 'CPU utilization' when you have HT/SMT enabled.

Here's the thread distribution by seconds spent fully utilizing the cores.

[thread distribution histogram]
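For anyone who wants to produce that kind of per-thread breakdown themselves, here's a rough Win32 sketch (a hypothetical tool, not Hoddi's) that snapshots a process's threads and reads each one's accumulated CPU time; sampling it twice and diffing gives a picture like the histogram above.

```cpp
// Hypothetical sketch: total CPU time (user + kernel) consumed so far by every
// thread of a target process, via the ToolHelp snapshot API and GetThreadTimes.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>
#include <cstdlib>

static double FiletimeSeconds(const FILETIME& ft) {
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart / 1e7;                      // FILETIME ticks are 100 ns
}

int main(int argc, char** argv) {
    if (argc != 2) { std::printf("usage: threadtimes <pid>\n"); return 1; }
    const DWORD pid = static_cast<DWORD>(std::atoi(argv[1]));

    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return 1;

    THREADENTRY32 te{};
    te.dwSize = sizeof(te);
    for (BOOL ok = Thread32First(snap, &te); ok; ok = Thread32Next(snap, &te)) {
        if (te.th32OwnerProcessID != pid) continue;
        HANDLE h = OpenThread(THREAD_QUERY_LIMITED_INFORMATION, FALSE, te.th32ThreadID);
        if (!h) continue;
        FILETIME created, exited, kernel, user;
        if (GetThreadTimes(h, &created, &exited, &kernel, &user))
            std::printf("thread %6lu: %8.2f s CPU\n",
                        te.th32ThreadID, FiletimeSeconds(kernel) + FiletimeSeconds(user));
        CloseHandle(h);
    }
    CloseHandle(snap);
}
```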
 

SatansReverence

Hipster Princess
I care. Because you could be stating a problem without understanding how algorithms work. There are many armchair programmers on these boards that make ridiculous claims as if they are supervisors with years of experience to claim something is "unoptimized" when in fact, it is not. It could simply be that the algorithm is too expensive for today's hardware.
Anything that leaves hardware resources on the table while also having limited performance is, by definition, unoptimised.
 

Three

Member
Yea, and 5 cores is more than enough for the poorly multi threaded demo.

And why would I test clocks? Do you need someone to show the obvious that clocks do affect performance? Who said otherwise.

Here, I'll type it out in big letters so you might have a chance of understanding the problem.

50% less threads, only 4% less performance

There, do you understand the problem now? Cool.

Why would you test it? Because you are making claims like this

"UE5 is evidently poorly threaded as per Alex' testing where lower core counts made little difference but decreasing clocks made a significant difference."

When that's actually normal


CP2077
12 to 6 cores
50% less threads only 8% less performance

Above 8 cores
50% less threads 0% less performance

Wow "Significantly multithreaded"

Matrix UE5 Demo
10 to 5 cores
50% less threads only 4% less performance

"Why isn't it properly multithreaded. What are we in 2005?"


Which, ironically, is the engine CD Projekt Red decided to adopt. You don't even realise these percentages will change based on things like your memory speed.
 

SatansReverence

Hipster Princess
Why would you test it? Because you are making claims like this

"UE5 is evidently poorly threaded as per Alex' testing where lower core counts made little difference but decreasing clocks made a significant difference."

When that's actually normal
I've clearly shown direct evidence, from a system running today, that it is not normal for there to be only a 4% difference when dropping core counts by half, in a year where every current-gen console has more cores than that and PCs regularly have 8+.

And no one cares about your shitty 3950X, which wasn't even good when it launched. All that showing its crappy benchmarks does is tell us AMD tried to throw cores at their performance problem, and that 8-10 cores is around the limit for current engines. I do find it rather adorable that you scoured the internet for hours to find one crappy benchmark to try and back up your warrior bullshit though.

And guess what, UE5 demo is still poorly optimised just as Alex showed. The End.
 
Thanks for the link, I just read it. Dictator is either missing my point or intentionally stating the incorrect assumption. I clearly state that the game is CPU bound and mostly cache/data related, and his comment is focusing on the SSD and memory allocation/size. I clearly state in the video that the size of assets and throughput is NOT the issue here; missed data and stalling is.

School's out it seems lol

You spent over 21 mins ranting about PS5's super fast SSD and I/O.
Clearly the people who engineered UE5 have no idea what they created and you do.
But don't let me stop you. You got a huge sony fan base who thinks regurgitating technical jargon means you actually know what they mean.

 
I've clearly showed direct evidence from a system running today that it is not normal for there to be a 4% difference when dropping core counts by half in current year where every current gen console has more than that and PCs regularly have 8+ cores

And no one cares about your shitty 3950x which wasn't even good when it launched. All that showing it's crappy benchmarks do is tell us AMD tried to throw cores at their lack of performance problem and that 8-10 cores is around the limit for current engines for performance. I do find it rather adorable that you scoured the internet for hours to find one crappy benchmark to try and back up your warrior bullshit though.

And guess what, UE5 demo is still poorly optimised just as Alex showed. The End.
Zen 2 wasn't good when it launched?

Just because the 9900k was better than zen 2 doesn't mean the latter wasn't good considering how much cheaper it was.
 

hlm666

Member
It does when you turn on the stats; this can impact performance a great deal, but only when on. During testing I have this disabled, but I will build a release version to see if that is also affecting it.
I find it hard to believe that, with your software engineering experience, you don't think running software in a debug/dev build has overhead.

"Any reason you aren't just doing a packaged build if you are looking for best performance?"

"I'm not sure if the demo you are using was compiled in "shipping" config either, but that can sometimes be important vs. "development"

"Hard to know for sure, but it made a non-trivial difference on consoles and when I compiled the shipping version earlier in this thread I wasn't really seeing significant stutter after the shader compile stuff happened."

 

sachos

Member
I wish Alex did more head-to-head comparisons with the PS5 in this video, I mean try to do a benchmark run as closely matched as possible on both platforms.
 

Shmunter

Member
Thanks for the link, I just read it. Dictator is either missing my point or intentionally stating the incorrect assumption. I clearly state that the game is CPU bound and mostly cache/data related and his comment is focussing on the SSD and memory allocation/size. When I clearly state in the video the size of assets and throughput is NOT the issue here, missed data and stalling is.

Schools out it seems lol
It's so obvious. As soon as I/O on consoles is mentioned, and in this context freeing up CPU resources, deliberate misunderstanding takes place.

Dictator seems threatened by your level of insight.
 

Tchu-Espresso

likes mayo on everthing and can't dance
So he has nothing at all to do with programming and game making. That would explain why he's been a laughingstock in this domain for years and years. And why on Beyond3D they actually opened a thread dedicated to the mistakes present in his every video, because they got tired of people facepalming at each video and it infecting every thread with how wrong he is in absolutely every video he ever makes. Including the video this thread is about being disproven by Alex and Epic devs.
 

Bogroll

Likes moldy games
Don't know if someone already knew this or if it works with XSX or PS4 and other consoles, but there is a "trick" to play all your games "at 60 FPS", at least on PS5, with a new Bravia TV. Keep in mind I'm not English, so my translation of the menu settings could be completely wrong. It's very simple: first make sure the HDMI enhanced setting is not set to VRR mode, because that forces the TV into game mode and that mode doesn't support the "trick". Go to the picture/image settings menu on your Bravia (again, I don't know what it's called in English); enable and set everything as high as possible in the Motionflow option, and do the same for the movie/film mode option. Via interpolation, all your games will now run at 60 FPS.

Just......Wow! The audacity.
 

ChiefDada

Gold Member
I never said they could test it; you are the one trying to get me on a technicality or trying to restrict what access can mean. They were given privileged information while visiting MS and then made a video speculating about something they already knew about, and even got the specs right (when there were multiple different rumors going around).

4TF seemed unbelievably low at that time for a next-gen console, so I think it's pretty acceptable to assume they were given that information too. You seem 100% sure of everything they were told or everything they talked about during that secret trip; I'm not.

They keep pushing the Series S as brilliantly designed at $300; that doesn't sound right at all to me and never will. Sounds like straight up shilling.

To be fair, Rich is on record having serious doubts about Series S RAM and bandwidth as the generation continues. I don't think it is inappropriate YET to be content with Series S performance since we are still in the cross-gen period. But in 2 years' time I will be checking in on how they portray the Series S's abilities.

You spent over 21 mins ranting about PS5's super fast SSD and I/O.
Clearly the people who engineered UE5 have no idea what they created and you do.
But don't let me stop you. You got a huge sony fan base who thinks regurgitating technical jargon means you actually know what they mean.


Knock it off already.
 

DenchDeckard

Moderated wildly
Woooof, what a trip. Interesting stuff that I have no real desire to understand. I just hope it gets up to speed and we get great games at 60fps, and people put out correct videos and information on subjects so my layman brain can understand.

The power of the magical I/O will really deliver, I'm sure...
 

sircaw

Banned
Everybody knows that there must be some kind of missile simulation thing going on... it's not like he spat the code and schematics onto GitHub or something.

I have a friend who did some path finding programming on military HW 20 years ago (not in the US). This is literally all I know about his work, all he could ever say.
I just found it rather surprising, that's all. My father worked for the Ministry of Defence in the '60s on, among other things, the engines of the Harrier jump jet.

The mere mention that he was working on a certain project could and would have got him fired.
 

Three

Member
I've clearly showed direct evidence from a system running today that it is not normal for there to be a 4% difference when dropping core counts by half in current year where every current gen console has more than that and PCs regularly have 8+ cores

And no one cares about your shitty 3950x which wasn't even good when it launched. All that showing it's crappy benchmarks do is tell us AMD tried to throw cores at their lack of performance problem and that 8-10 cores is around the limit for current engines for performance. I do find it rather adorable that you scoured the internet for hours to find one crappy benchmark to try and back up your warrior bullshit though.

And guess what, UE5 demo is still poorly optimised just as Alex showed. The End.
😄 "My 3950x" what are you 12yrs old? I thought you said I had a core 2 duo anyway?

So you're telling me that more cores didn't give AMD better performance in the land of "significantly multithreaded" engines? Wow who would have thought. Maybe they listened to your flawed logic.
 

SatansReverence

Hipster Princess
😄 "My 3950x" what are you 12yrs old? I thought you said I had a core 2 duo anyway?

A basic figure of speech is apparently too high a concept for you to grasp, can't say I'm surprised.
So you're telling me that more cores didn't give AMD better performance in the land of "significantly multithreaded" engines? Wow who would have thought. Maybe they listened to your flawed logic.
Oh, you glossed over the part where I said 8-10 cores is the limit for current games and engines? Of course you did.
 

Fredrik

Member
Based on these tech videos I feel like Epic has over-promised and under-delivered with this engine. If this had happened at the start of the generation it wouldn’t be so bad but after 1.5 years of 60fps games on console it’ll be painful to go back to 30fps. And if we can’t even get 60fps on PC, why would we want this engine to be used?
 

Three

Member
A basic figure of speech is apparently too high a concept for you to grasp, can't say I'm surprised.

Oh, you glossed over the part where I said 8-10 cores is the limit for current games and engines? Of course you did.
Figure of speech my ass. You are so immature that you think it's all about the size of your e-penis in conversations instead of valid arguments. Hence your silly cry of "Sony warriors", assuming I have a "Core 2 Duo", and now "my shitty 3950X", when shown that halving the cores on a 10-core/20-thread CPU results in single-figure percentage drops, whereas halving your clocks results in far bigger drops, as expected, as long as you have enough cores.

You were blatantly saying that low clocks affecting performance more than fewer cores is a sign of being "poorly multithreaded". I tell you that more cores rarely give you significant performance boosts and a Threadripper would give you no gain compared to higher clocks, and you point to "significantly multithreaded" engines like CP2077. I tell you to test the two options on those engines: halve clocks, then halve cores to 10 threads. You go as low as 2 cores (4 threads) and don't even do the other half of the test, halving clocks.

Lo and behold, you would get no performance boost from having a Threadripper, and now you're trying to pretend you knew all along that "8 cores was a limit" and that going from 12 cores to 6 with an 8% drop is significant and highly multithreaded.

When told that other factors like memory performance can change this fps percentage, because the cores themselves can still idle (see Hoddi's histogram), you call that irrelevant. You would rather talk complete shit and engage in e-penis swinging than make a valid point.
 

sircaw

Banned
Figure of speech my ass. You are so immature that you think it's all about the size of your e-penis in conversations instead of valid arguments. hence your silly call of "Sony warriors", assuming I have a "Core 2 Duo", and now "my shitty 3950x" when being shown that the result of having your cores halved in a 10 core 20 thread CPU would result in single figure percentage drops compared to halving your clocks which would result in far bigger drops as expected as long as you have enough.

You were blatantly saying low clocks affecting performance more than lower cores is a sign of being "poorly multithreaded". I tell you that more cores rarely give you significant performance boosts and a threadripper would give you no gain compared to higher clocks and you point to "significantly multithreaded engines like CP2077". I tell you to test the two options on those engines. Halve clocks then halve cores to 10 thread. You go as low as 2 cores (4 threads) and don't even do the other half of the test. Half clocks.

Lo and behold, you would get no performance boost from having a Threadripper, and now you're trying to pretend that you knew all along that "8 cores was a limit" and that going from 12 cores to 6 with an 8% drop is significant and highly multithreaded.

When being told that other factors like memory performance can change this percentage in fps because the cores themselves can still idle (see Hoddi's histogram) you call that irrelevant. You would rather talk complete shit and engage in epenis swinging than bring a valid point.

Lord have mercy, why don't you just rip off his arms and beat him to death too? :messenger_grinning:


 

yamaci17

Member
Zen 2 wasn't good when it launched?

Just because the 9900k was better than zen 2 doesn't mean the latter wasn't good considering how much cheaper it was.
Yes, Zen 2 was still not good enough.

People only appreciated them for their price and nothing more. Zen, Zen+, Zen 2, all of them were horrible products. You can see a 5900X casually destroying a 3950X by 60-70% in Flight Simulator. When Zen/Zen 2 chokes, it chokes hard; that cross-CCX latency is not helping. There were instances where a 7700K was still dismantling a 3600. It happened, and it still happens. In the usual benchmarks Zen 3 seems to be 45-50% faster than Zen 2, but in real, actual CPU-bottleneck situations Zen 3 can fly away by up to 70-80%... there are situations where Zen/Zen 2 falls even further behind Zen 3 than usual.

You can literally see Zen 3 beat Zen 2 BY 50%. All the tests on the net would tell you the deficit is around 20-25%.

Even DF themselves found such weird instances where Zen 2 choked hard against Zen 3 and Intel.

A literal 60% deficit.

Zen 3 is the only proper gaming architecture AMD has ever released, and that's the end of the story. Their horrible Zen 2 products with enormous inter-CCX latencies will haunt developers until the end of the generation.
 

VFXVeteran

Banned
You really are especially clueless, aren't you?

I get it, you thought you finally had a DF gotcha moment, ended up getting clowned instead, and now you're upset.
He's right, though, and you are backed into a corner, it seems. I also asked you about comparing the clock frequency, as a regular programmer would do in such a test, and you brushed it off.
 

Shmunter

Member
Based on these tech videos I feel like Epic has over-promised and under-delivered with this engine. If this had happened at the start of the generation it wouldn’t be so bad but after 1.5 years of 60fps games on console it’ll be painful to go back to 30fps. And if we can’t even get 60fps on PC, why would we want this engine to be used?
Not sure, but I think they did say they are aiming for 60 with the next-gen fruit. So hopefully what we are seeing is far from complete. Someone may correct or verify.
 