
Ali Salehi, a rendering engineer at Crytek, contrasts the next-gen consoles in an interview (Update: Tweets/Article removed)

DaMonsta

Member
You literally said this. Which is what I replied to :messenger_neutral:
It literally says that in the article.

“ Several developers speaking to Digital Foundry have stated that their current PS5 work sees them throttling back the CPU in order to ensure a sustained 2.23GHz clock on the graphics core.”
 
It literally says that in the article.

“ Several developers speaking to Digital Foundry have stated that their current PS5 work sees them throttling back the CPU in order to ensure a sustained 2.23GHz clock on the graphics core.”

And the second part? What does that say?
 

DaMonsta

Member
And the second part? What does that say?
The second part says they are doing that because they don’t need the CPU power.

So I said Next Gen games WILL need that power, therefore devs will have to make choices/sacrifices on how to deal with the variable clocks.

It’s as easy as choosing a low CPU usage profile now, but that won’t be the case for intensive next gen games.
 
The second part says they are doing that because they don’t need the CPU power.

So I said Next Gen games WILL need that power, therefore devs will have to make choices/sacrifices on how to deal with the variable clocks.

It’s as easy as choosing a low CPU usage profile now, but that won’t be the case for intensive next gen games.

But that would only be the case if the IPC of the Zen 2 core were the same as the Jaguar core in the current-gen systems. Zen 2 is far faster than Jaguar clock for clock. Current games will be massively under-utilising the CPU in both PS5 & Series X, because they are built with Jaguar in mind. There'll still be plenty more to come from the CPUs in both systems.
 

DaMonsta

Member
But that would only be the case if the IPC of the Zen 2 core were the same as the Jaguar core in the current-gen systems. Zen 2 is far faster than Jaguar clock for clock. Current games will be massively under-utilising the CPU in both PS5 & Series X, because they are built with Jaguar in mind. There'll still be plenty more to come from the CPUs in both systems.
I get that, but I’m talking about Next Gen games. Presumably they will push the CPU
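To put the IPC point above in rough numbers, here is a back-of-the-envelope sketch. The ~2x IPC factor and the 1.6 GHz / 3.5 GHz clocks are assumptions for illustration only, not figures from the thread.

```python
# Back-of-the-envelope CPU throughput comparison (illustrative only).
# The IPC multiplier and clocks are assumptions for the sake of the example.

def relative_throughput(cores: int, clock_ghz: float, ipc_factor: float) -> float:
    """Crude throughput proxy: cores x clock x relative IPC."""
    return cores * clock_ghz * ipc_factor

jaguar = relative_throughput(cores=8, clock_ghz=1.6, ipc_factor=1.0)  # current-gen baseline
zen2 = relative_throughput(cores=8, clock_ghz=3.5, ipc_factor=2.0)    # assumed ~2x IPC over Jaguar

print(f"Zen 2 vs Jaguar (rough): {zen2 / jaguar:.1f}x")  # ~4.4x in this toy model
```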
 

Renozokii

Member
Nah dawg, who cares what this guy says, clearly he is biased somehow... 12 is higher than 10!

Y'all need to stop falling in love with the numbers. Having the "most powerful console" didn't work in the PS3 era, and it didn't work in this one either. In the end the numbers do not matter. It didn't matter in the PS3 days. Game after game came out and the PS3 fell behind. The exclusives were beautiful, but so were the 360's! The numbers don't mean squat; it's always been about ease of getting at that power and.... shockingly, the games!
Look at how easily sheep are buying Sony's PR blitz. Comparing the Series X to the PS3? The Cell was a nightmare for developers to work with; the Series X is a fucking slightly specialized PC. There is NO reason to believe the Series X is substantially harder to make games for. The PS5 might have some tricks up its sleeve in terms of dev tools, but that will be irrelevant two years in, once devs are familiar with both consoles.
 
Last edited:

GymWolf

Gold Member
That's the thing: both GPUs share a similar architecture and both should be equally bottlenecked, yet some people try to say the PS5 has some kind of magic and therefore the XSX GPU is way more bottlenecked.

So the PS5 will have fewer bottlenecks, and its ultra-fast SSD will make the XSX obsolete. Yesterday I even read here on NeoGAF that the XSX GPU will be comparable to an 8TF GPU because of bottlenecks, while the PS5 will run at 10TF despite the variable clocks. Right now I don't even know when people are joking or being serious.
I think ND needs to name its next game Unbottled 5: A Neck's End :messenger_sunglasses:
 
Last edited:

B_Boss

Member
That clears things up somewhat with the odd dates.

But that to me suggests that he wasn't a big part of the team and was brought on board for additional rendering responsibilities.

While it doesn't discredit his claim, it does somewhat lend credence to the idea that this was removed not because it was right, but because it was wrong. Even more so when you read what was said (several translations, and I will stress that point, translations). There were so many basic errors and mistakes that nobody working where he does would make. So either he's a raging fanboy, or the translations came out wrong. I refuse to believe anybody could be as factually wrong as he was and be in any form of game design profession. I'm leaning more towards him putting a slight PlayStation spin on things due to being a fan, which clearly didn't come out right in translation and came off massively negative, hence the pull.

Just my perspective anyway.

Would you care to elaborate on Salehi’s points you believe (or know) to be in error? The more facts we learn, the better.
 

Panajev2001a

GAF's Pleasant Genius
I get what you're saying, but a Cell processor the Series X setup is not. It's not even in the same stratosphere. Also, didn't you work for Lionhead?
Not saying it is chock-full of bottlenecks; neither was the PS2, and back then there was no problem questioning anything and everything about its theoretical peaks, that is all.
Also things can be comparable despite not being the same or at the same level of intensity.

Not me, I never worked for Lionhead nor the other greats in the Guildford mega hub :).
 

Dory16

Banned
Not saying it is chock-full of bottlenecks; neither was the PS2, and back then there was no problem questioning anything and everything about its theoretical peaks, that is all.
Also things can be comparable despite not being the same or at the same level of intensity.

Not me, I never worked for Lionhead nor the other greats in the Guildford mega hub :).
This is fascinating. There used to be a time when technical people had to say where and under what conditions an architecture would be bottlenecked, not just wave "bottlenecks" around like a scarecrow to downplay the system that is more powerful than the one they personally prefer.
 

ethomaz

Banned
"Rnlval" is right because we already know how fast Gears 5 is running on XSX based on what Digital Foundry has told and XSX GPU is clearly much faster than standard 5700XT. On the other hand you had to lie in order to prove your point because in reality games (contrary to what you have said) in higher resolutions are using 2080ti GPU almost fully for entire time (of course with fast CPU and no CPU bottleneck) and not to mention 2080ti is faster than just 20% compared to 2070S in games.
It is not a lie... that is what the benchmarks show.
The 2080 Ti's increase in SPs is really under-utilised... the performance gain can't match the increase in units.
There is no 50-70% increase in performance even when the 2070 is bottlenecked by its VRAM speed.

[Charts: relative-performance_1920-1080.png, relative-performance_2560-1440.png, relative-performance_3840-2160.png]

You can compare the 2080 Super to the 2080 Ti too and it will show the same thing... a big increase in SP count gives a disproportionately smaller increase in performance.

BTW his Gears comparison has nothing to do with that... I don't know why he keeps posting something so unrelated even after people called him out.
 
Last edited:

ethomaz

Banned
Yours is nonsense when MS already confirmed the XSX GPU's Gears 5 result scales to RTX 2080 level.

RX 5700 XT has a 1887 MHz average clock speed, hence 9.6 TFLOPS average.

https://www.guru3d.com/articles_pages/gears_of_war_5_pc_graphics_performance_benchmark_review,6.html

Gears 5 with PC Ultra settings at 4K

[Chart: Gears 5 4K benchmark]

Scale the RX 5700 XT's 40 fps (at an average 9.6 TFLOPS) up to the XSX's 12.147 TFLOPS and it lands at RTX 2080-level results.

RX 5700 XT's 40 fps x 1.25 = 50 fps.

Ali Salehi's statement is nonsense.
Again, what does Gears have to do with what I said?
You keep posting that unrelated stuff without making a point lol.
People already called you out before.
 

Allandor

Member
It is not a lie... that is what the benchmarks show.
The 2080 Ti's increase in SPs is really under-utilised... the performance gain can't match the increase in units.
There is no 50-70% increase in performance even when the 2070 is bottlenecked by its VRAM speed.

[Charts: relative-performance_1920-1080.png, relative-performance_2560-1440.png, relative-performance_3840-2160.png]

You can compare the 2080 Super to the 2080 Ti too and it will show the same thing... a big increase in SP count gives a disproportionately smaller increase in performance.

BTW his Gears comparison has nothing to do with that... I don't know why he keeps posting something so unrelated even after people called him out.
Are you really still trying to compare the units-vs-frequency stuff with PC hardware, which is never optimally utilised?

This is a game console. Developers will always be able to utilise all the hardware in it, but only if the performance is guaranteed. Otherwise they can just build a flexible system (e.g. dynamic resolution, dynamic LOD) and hope that it will look the way it is intended to look in specific situations. Btw, those flexible solutions also have big disadvantages, as they can never really fully utilise the hardware, or else they could not react to performance degradation. If you push it too hard, frames would be delivered unevenly, which would result in micro-stuttering. So you must implement a system where you can still influence the frame rendering before it is too late, which really limits your frame time up to that point. And because you don't know what frequency the processor will run at in the next millisecond, it will be really hard to predict what the frame can contain. So you must always underdeliver in order to get an even framerate.
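Allandor's point about flexible systems having to leave headroom can be sketched as a tiny dynamic-resolution controller. This is a minimal illustration only; the 16.6 ms budget, the 90% safety margin and the clamping range are made-up assumptions, not how any real engine is configured.

```python
# Minimal dynamic-resolution controller sketch (illustrative assumptions:
# a 16.6 ms frame budget and a safety margin left deliberately unused).

TARGET_FRAME_MS = 16.6   # 60 fps budget
SAFETY_MARGIN = 0.90     # aim for 90% of the budget, i.e. deliberately "underdeliver"

def next_resolution_scale(current_scale: float, last_gpu_ms: float) -> float:
    """Adjust the render-resolution scale from the previous frame's GPU time.

    Because the controller only reacts after the fact, it has to target less
    than 100% of the budget; otherwise an unexpected slowdown (e.g. a clock
    dip) turns straight into a missed frame and visible stutter.
    """
    budget_ms = TARGET_FRAME_MS * SAFETY_MARGIN
    # GPU cost scales roughly with pixel count, i.e. with scale squared.
    ratio = budget_ms / max(last_gpu_ms, 0.1)
    new_scale = current_scale * ratio ** 0.5
    return min(1.0, max(0.5, new_scale))  # clamp between 50% and 100% resolution

# Example: the last frame took 18 ms on the GPU -> drop resolution next frame.
print(round(next_resolution_scale(1.0, 18.0), 3))  # ~0.911
```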
 

ethomaz

Banned
Are you really still trying to compare the units-vs-frequency stuff with PC hardware, which is never optimally utilised?

This is a game console. Developers will always be able to utilise all the hardware in it, but only if the performance is guaranteed. Otherwise they can just build a flexible system (e.g. dynamic resolution, dynamic LOD) and hope that it will look the way it is intended to look in specific situations. Btw, those flexible solutions also have big disadvantages, as they can never really fully utilise the hardware, or else they could not react to performance degradation. If you push it too hard, frames would be delivered unevenly, which would result in micro-stuttering. So you must implement a system where you can still influence the frame rendering before it is too late, which really limits your frame time up to that point. And because you don't know what frequency the processor will run at in the next millisecond, it will be really hard to predict what the frame can contain. So you must always underdeliver in order to get an even framerate.
But that is what the Crytek guy is saying lol
C'mon.
 
Last edited:

ethomaz

Banned
Gears 5 benchmarks and results are debunking your nonsense.
The opposite... even though it's unrelated to the discussion, it agrees.

RX 5700 XT has 40 CUs.
Xbox 52 CUs.

30% increase in CU count.

Digital Foundry's analysis of the tech demo showed it close to RTX 2080 results... not there yet but pretty close.

The difference between the RX 5700 XT and the RTX 2080 in Gears 5 is around 12%.
30% increase in CUs to around 12% increase in performance.

[Chart: Gears 5 4K benchmark]
 

Dory16

Banned
But that is what the Crytek guy is saying lol
C'mon.
You really wish anyone with developer credentials would be taken at face value as long as they say good things about the PS5, don't you? I'm pretty sure I saw you downplay the statements of the Sony developer with decades of experience in games who said the performance difference was staggering. And he has a lot more credibility than this junior guy who was speaking in Farsi and still got taken down. "C'mon".
 
Last edited:

rnlval

Member
It is not a lie... that is what the benchmarks show.
The 2080 Ti's increase in SPs is really under-utilised... the performance gain can't match the increase in units.
There is no 50-70% increase in performance even when the 2070 is bottlenecked by its VRAM speed.

[Charts: relative-performance_1920-1080.png, relative-performance_2560-1440.png, relative-performance_3840-2160.png]

You can compare the 2080 Super to the 2080 Ti too and it will show the same thing... a big increase in SP count gives a disproportionately smaller increase in performance.

BTW his Gears comparison has nothing to do with that... I don't know why he keeps posting something so unrelated even after people called him out.
Actually, scale the RX 5700 XT's 85% (at 4K resolution) by 1.25 and it lands in the RTX 2080 range.

XSX's TFLOPS increase: 12.147 / 9.66 = 1.257
XSX's memory bandwidth increase: 560 / 448 = 1.25

Keep posting nonsense when you haven't factored in memory bandwidth scaling!
 
Last edited:

ethomaz

Banned
You should really not start to believe what any developer has to say just because it suits your opinion.
Even developers can be fanboys (AWS vs Azure, AMD vs Intel, Nvidia vs AMD, ...)
You really wish anyone with developer credentials would be taken at face value as long as they say good things about the PS5, don't you? I'm pretty sure I saw you downplay the statements of the Sony developer with decades of experience in games who said the performance difference was staggering. "C'mon".
The developer's words are the same as Cerny's and other developers'.
I'm a developer too (not gaming).

And yes, I take their words over your nonsense lol
 

ethomaz

Banned
Actually, scale the RX 5700 XT's 85% (at 4K resolution) by 1.25 and it lands in the RTX 2080 range.

TFLOPS increase: 12.147 / 9.66 = 1.257
Memory bandwidth increase: 560 / 448 = 1.25

Keep posting nonsense when you haven't factored in memory bandwidth scaling!
Where are you getting that 1.25 nonsense?

40 to 52 CUs = 30% increase
40 CUs @ 1755 MHz to 52 CUs @ 1825 MHz = 35% increase

Performance in 4k Ultra = 12% increase

It is another example of what the CryTek dev and Cerny said.
But you keep trying to FUD lol
 
Last edited:
The opposite... even though it's unrelated to the discussion, it agrees.

RX 5700 XT has 40 CUs.
Xbox 52 CUs.

30% increase in CU count.

Digital Foundry's analysis of the tech demo showed it close to RTX 2080 results... not there yet but pretty close.

The difference between the RX 5700 XT and the RTX 2080 in Gears 5 is around 12%.
30% increase in CUs to around 12% increase in performance.

[Chart: Gears 5 4K benchmark]
You have to be one of the "smartest" posters here on the forum, lul.
Everything you post is nonsense.

The difference between AMD and Nvidia in Gears 5 is because it's a Microsoft game that is optimized for GCN.
The Series X outperforms an RTX 2080 Ti in Gears 5.
 

ethomaz

Banned
You have to be one of the "smartest" posters here on the forum, lul.
Everything you post is nonsense.

The difference between AMD and Nvidia in Gears 5 is because it's a Microsoft game that is optimized for GCN.
The Series X outperforms an RTX 2080 Ti in Gears 5.
:messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy:

I thought that FUD was already debunked but people keep spreading it lol
It was running at 4k Ultra with better effects at 100fps on Xbox, no?

And my posts are nonsense :messenger_tears_of_joy:


The Gears 5 tech demo on Series X is near RTX 2080 performance... a bit below, as described by MS and Digital Foundry.
 
Last edited:

rnlval

Member
Where are you getting that 1.25 nonsense?

40 to 52 CUs = 30% increase
40 CUs @ 1755 MHz to 52 CUs @ 1825 MHz = 35% increase

Performance in 4k Ultra = 12% increase

It is another example of what the CryTek dev and Cerny said.
But you keep trying to FUD lol
From https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/34.html

[Chart: clock vs voltage]

The RX 5700 XT has a 1887 MHz average clock, hence 9.66 TFLOPS average.

1887 MHz x (64 x 40 shaders) x 2 = 9,661,440 MFLOPS, or ~9.661 TFLOPS

12.147 TFLOPS / 9.66 TFLOPS = 1.257, or a 25.7% increase.

I don't calculate from the paper spec when boost modes are active.

There is a corresponding memory bandwidth increase between the XSX's 560 GB/s and the RX 5700 XT's 448 GB/s, which is 25%.
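For what it's worth, the arithmetic in this post can be written out end to end; a small sketch that just reproduces the figures quoted above (the shader count, average clock, bandwidths and the 40 fps Gears 5 result are taken from the posts themselves, nothing new is measured).

```python
# Reproducing the scaling arithmetic from the posts above.

def tflops(shaders: int, clock_mhz: float) -> float:
    """FP32 TFLOPS = shaders x clock x 2 ops (FMA) per cycle."""
    return shaders * clock_mhz * 1e6 * 2 / 1e12

rx_5700_xt = tflops(shaders=40 * 64, clock_mhz=1887)  # ~9.66 TFLOPS at the observed average clock
xsx_tflops = 12.147                                   # figure used in the posts

print(f"RX 5700 XT: {rx_5700_xt:.2f} TFLOPS")
print(f"Compute scaling:   {xsx_tflops / rx_5700_xt:.3f}x")  # ~1.257x
print(f"Bandwidth scaling: {560 / 448:.3f}x")                # 1.250x
print(f"Scaled Gears 5 4K result: {40 * xsx_tflops / rx_5700_xt:.0f} fps")  # ~50 fps
```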
 
Last edited:

Allandor

Member
The developer's words are the same as Cerny's and other developers'.
I'm a developer too (not gaming).

And yes, I take their words over your nonsense lol
Sorry to tell you, I'm a software developer too, but that doesn't mean much in this regard. Just stop the nonsense you post the whole time.

You pick benchmarks that suit your opinion, choose a "developer" interview that has been pulled, and you're trying to tell me you believe every bit of the PR some people put out. Please just stop this console war nonsense and try to look at it from an objective position.

What Cerny said with his "frequency is actually better" argument only applies if you have the same chip with the same front end and back end. Yes, the frequency will help to close the gap a bit, but not much more.
The funny thing is, with the PS4 Cerny told a whole other story: more CUs = better. That's also why the PS4 chip had more ACEs, so the CUs wouldn't get under-saturated with work. And now you're trying to tell us that the CUs of a console chip would get under-saturated with work? This is a console, not PC hardware, where something like that really is possible because of missing optimization. On a console almost everything is written for that specific hardware to perform as fast as it can.

Please, just stop that console warrior in yourself.
 

rnlval

Member
:messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy:

I thought that FUD was already debunked but people keep spreading it lol
It was running at 4k Ultra with better effects at 100fps on Xbox, no?

And my posts are nonsense :messenger_tears_of_joy:


The Gears 5 tech demo on Series X is near RTX 2080 performance... a bit below, as described by MS and Digital Foundry.
The XSX's two-week raw Gears 5 port was run in its built-in benchmark mode at PC Ultra settings and it delivered RTX 2080-class performance.
 

tkscz

Member
Going through this thread is like going through the thread where people were pissed that the XSX only has 16GB of GDDR6. It REALLY shows their understanding of how hardware works.

More CUs are only better when those CUs can be used efficiently. If developers aren't constantly using them, then they only waste power and resources for the rest of the GPU. Having more efficient CUs over having more CUs IS better until developers get a better feel for using more CUs. His example of AMD's Bulldozer era of CPUs is the best example of this. AMD did hit 16 cores before Intel, and yet a 16-core Bulldozer was less efficient and overall underperformed compared to the 8-core Intel CPU, because no game took advantage of the extra cores and the cores themselves weren't as fast or efficient.

That's not to say devs can't use the extra CUs to their advantage to give a graphical boost to XSX games; more CUs can be better if properly put to use. Just saying that if devs doing multi-plats aren't using them all, games on the XSX will take a performance hit.

Using two pools of RAM works the same way. Unless devs find a balance when programming for them, the slower pool can seriously bottleneck the faster pool (which should on its own be obvious).

Edit: This kind of makes the interview seem more fake. He's just stating basic hardware facts (not using the hardware efficiently will make the software worse, better architecture is better than higher raw numbers, etc.). When you really read it, besides API talk, he doesn't really say anything.
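tkscz's point about the two memory pools can be made concrete with a rough blended-bandwidth model. The 560 GB/s and 336 GB/s figures are the publicly quoted Series X numbers; the traffic split between the pools is an invented assumption purely for illustration.

```python
# Rough effective-bandwidth model for a split memory pool (illustrative only;
# the traffic split between fast and slow pools is an assumed number).

def effective_bandwidth(fast_gbps: float, slow_gbps: float, fast_fraction: float) -> float:
    """Blend by time spent per byte, weighted by each pool's share of traffic."""
    time_per_gb = fast_fraction / fast_gbps + (1.0 - fast_fraction) / slow_gbps
    return 1.0 / time_per_gb

# Well balanced: 80% of GPU traffic hits the fast 10 GB pool.
print(f"{effective_bandwidth(560, 336, 0.8):.0f} GB/s")  # ~494 GB/s
# Poorly balanced: a 50/50 split drags the average down.
print(f"{effective_bandwidth(560, 336, 0.5):.0f} GB/s")  # 420 GB/s
```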
 
Last edited:

Radical_3d

Member
Are the sources of "Moore's Law is Dead" not considered credible? Because that guy has heard the same thing the Crytek engineer is saying. I'd take this with a grain of salt. Optimisation is not something that starts until the end of a project. "Getting the same frame rate" in the early stages of development may mean that both are running at sub-30FPS frame rates. Once the game is in the final stages it may well translate to that 2160p vs 2000p difference.
 

rnlval

Member
Are the sources of "Moore's Law is Dead" not considered credible? Because that guy has heard the same thing the Crytek engineer is saying. I'd take this with a grain of salt. Optimisation is not something that starts until the end of a project. "Getting the same frame rate" in the early stages of development may mean that both are running at sub-30FPS frame rates. Once the game is in the final stages it may well translate to that 2160p vs 2000p difference.
Atm, the XSX's RTX 2080-like results in Gears 5 suggest it is operating like a PC with a 12.147 TFLOPS RDNA GPU and a desktop-class Ryzen 7 3700X at 3.6 GHz with boost mode disabled.
 

rnlval

Member
Going through this thread is like going through the thread where people were pissed that the XSX only has 16GB of GDDR6. It REALLY shows their understanding of how hardware works.

More CUs are only better when those CUs can be used efficiently. If developers aren't constantly using them, then they only waste power and resources for the rest of the GPU. Having more efficient CUs over having more CUs IS better until developers get a better feel for using more CUs. His example of AMD's Bulldozer era of CPUs is the best example of this. AMD did hit 16 cores before Intel, and yet a 16-core Bulldozer was less efficient and overall underperformed compared to the 8-core Intel CPU, because no game took advantage of the extra cores and the cores themselves weren't as fast or efficient.

That's not to say devs can't use the extra CUs to their advantage to give a graphical boost to XSX games; more CUs can be better if properly put to use. Just saying that if devs doing multi-plats aren't using them all, games on the XSX will take a performance hit.

Using two pools of RAM works the same way. Unless devs find a balance when programming for them, the slower pool can seriously bottleneck the faster pool (which should on its own be obvious).

Edit: This kind of makes the interview seem more fake. He's just stating basic hardware facts (not using the hardware efficiently will make the software worse, better architecture is better than higher raw numbers, etc.). When you really read it, besides API talk, he doesn't really say anything.
Intel's Haswell 4C/8T was effectively Intel's answer to Bulldozer's 8 threads, but with good single-thread performance.
 

pawel86ck

Banned
It is not a lie... that is what the benchmarks show.
The 2080 Ti's increase in SPs is really under-utilised... the performance gain can't match the increase in units.
There is no 50-70% increase in performance even when the 2070 is bottlenecked by its VRAM speed.

[Charts: relative-performance_1920-1080.png, relative-performance_2560-1440.png, relative-performance_3840-2160.png]

You can compare the 2080 Super to the 2080 Ti too and it will show the same thing... a big increase in SP count gives a disproportionately smaller increase in performance.

BTW his Gears comparison has nothing to do with that... I don't know why he keeps posting something so unrelated even after people called him out.
Currently we are talking only about the 2070S, because you wrote that the 2080 Ti is only 20% faster on average than that card, and that's indeed a lie. What's funny is that even your own chart from TechPowerUp shows the 2080 Ti is 34% faster than the 2070S (and as I have proven, there are games that show an above-40% performance gap between the 2070S and the 2080 Ti). Also, you have lied about low 2080 Ti GPU usage compared to the 2070, when in fact at higher resolutions 2080 Ti GPU usage is at 99%, assuming the game is not CPU bottlenecked.
 

Radical_3d

Member
Atm, the XSX's RTX 2080-like results in Gears 5 suggest it is operating like a PC with a 12.147 TFLOPS RDNA GPU and a desktop-class Ryzen 7 3700X at 3.6 GHz with boost mode disabled.
That doesn't answer the question about that channel's sources, but anyway: if RDNA2 is 50% more efficient than RDNA, it should operate like an 18TF RDNA card, hence proving the inefficiency of the system.

Then again, as I said, not a single game has finished development on either platform (Gears was a demo), so the performance figures shouldn't be that representative.
 

ethomaz

Banned
Currently we are talking only about the 2070S, because you wrote that the 2080 Ti is only 20% faster on average than that card, and that's indeed a lie. What's funny is that even your own chart from TechPowerUp shows the 2080 Ti is 34% faster than the 2070S (and as I have proven, there are games that show an above-40% performance gap between the 2070S and the 2080 Ti). Also, you have lied about low 2080 Ti GPU usage compared to the 2070, when in fact at higher resolutions 2080 Ti GPU usage is at 99%, assuming the game is not CPU bottlenecked.
Where it is bandwidth bottlenecked, yes.
Actually I said 23%... that is from the first graph, where the bandwidth doesn't come into play, but we can use the second or third... 30%? The SP increase is not scaling the way some people here believe, and more like what the CryTek dev and Cerny said.
 

pawel86ck

Banned
Going through this thread is like going through the thread where people were pissed that the XSX only has 16GB of GDDR6. It REALLY shows their understanding of how hardware works.

More CUs are only better when those CUs can be used efficiently. If developers aren't constantly using them, then they only waste power and resources for the rest of the GPU. Having more efficient CUs over having more CUs IS better until developers get a better feel for using more CUs. His example of AMD's Bulldozer era of CPUs is the best example of this. AMD did hit 16 cores before Intel, and yet a 16-core Bulldozer was less efficient and overall underperformed compared to the 8-core Intel CPU, because no game took advantage of the extra cores and the cores themselves weren't as fast or efficient.

That's not to say devs can't use the extra CUs to their advantage to give a graphical boost to XSX games; more CUs can be better if properly put to use. Just saying that if devs doing multi-plats aren't using them all, games on the XSX will take a performance hit.

Using two pools of RAM works the same way. Unless devs find a balance when programming for them, the slower pool can seriously bottleneck the faster pool (which should on its own be obvious).

Edit: This kind of makes the interview seem more fake. He's just stating basic hardware facts (not using the hardware efficiently will make the software worse, better architecture is better than higher raw numbers, etc.). When you really read it, besides API talk, he doesn't really say anything.
Shader cores in a GPU can't be compared to CPU cores. Developers have to optimize their games in order to use more CPU cores, while a GPU distributes its workload evenly by itself. The 2080 Ti is an extremely big chip, yet its GPU usage is almost always at 99% at higher resolutions (GPU usage only drops below 99% in CPU-bottlenecked scenarios). On the other hand, CPU usage is almost never 100% in real games.
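pawel86ck's observation that GPU usage only drops when the CPU can't feed it fits a very simple frame-time model; a toy sketch where all millisecond figures are arbitrary, chosen only to illustrate the two cases.

```python
# Toy frame-pipeline model: the GPU sits near 100% unless the CPU can't keep it fed.
# All millisecond figures are arbitrary and purely illustrative.

def gpu_utilization(cpu_ms: float, gpu_ms: float) -> float:
    """If the CPU takes longer than the GPU, the GPU idles while waiting for work."""
    frame_ms = max(cpu_ms, gpu_ms)
    return gpu_ms / frame_ms

print(f"GPU-bound (high resolution): {gpu_utilization(cpu_ms=6.0, gpu_ms=16.0):.0%}")  # 100%
print(f"CPU-bound (low resolution):  {gpu_utilization(cpu_ms=12.0, gpu_ms=7.0):.0%}")  # ~58%
```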
 
Last edited:

ethomaz

Banned
That doesn't answer the question about that channel's sources, but anyway: if RDNA2 is 50% more efficient than RDNA, it should operate like an 18TF RDNA card, hence proving the inefficiency of the system.

Then again, as I said, not a single game has finished development on either platform (Gears was a demo), so the performance figures shouldn't be that representative.
A 50% increase in perf per watt.
That doesn't mean a 50% increase in TFLOPS.

It means that at the same power the card can increase its clocks by about 50%, or at the same clock the card will draw about 33% less power.

Of course, that AMD figure holds at a particular clock and power draw... the more you increase the clock, the more the perf-per-watt gain shrinks.
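The perf-per-watt arithmetic ethomaz is describing is simple enough to write out; a quick sketch of the two ways a "+50% performance per watt" claim can be read (simplified, assuming performance scales linearly with power, which real silicon does not do at the top of the clock curve).

```python
# Two readings of a "+50% performance per watt" claim (simplified model;
# assumes performance scales linearly, which is optimistic at high clocks).

perf_per_watt_gain = 1.5

# Same power budget -> up to 50% more performance.
same_power_perf = 1.0 * perf_per_watt_gain
print(f"Same power:       {same_power_perf:.2f}x performance")

# Same performance -> power drops to 1/1.5 of before, i.e. roughly 33% less.
same_perf_power = 1.0 / perf_per_watt_gain
print(f"Same performance: {same_perf_power:.2f}x power ({1 - same_perf_power:.0%} less)")
```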
 

GymWolf

Gold Member
Going through this thread is like going through the thread where people were pissed that the XSX only has 16GB of GDDR6. It REALLY shows their understanding of how hardware works.

More CUs are only better when those CUs can be used efficiently. If developers aren't constantly using them, then they only waste power and resources for the rest of the GPU. Having more efficient CUs over having more CUs IS better until developers get a better feel for using more CUs. His example of AMD's Bulldozer era of CPUs is the best example of this. AMD did hit 16 cores before Intel, and yet a 16-core Bulldozer was less efficient and overall underperformed compared to the 8-core Intel CPU, because no game took advantage of the extra cores and the cores themselves weren't as fast or efficient.

That's not to say devs can't use the extra CUs to their advantage to give a graphical boost to XSX games; more CUs can be better if properly put to use. Just saying that if devs doing multi-plats aren't using them all, games on the XSX will take a performance hit.

Using two pools of RAM works the same way. Unless devs find a balance when programming for them, the slower pool can seriously bottleneck the faster pool (which should on its own be obvious).

Edit: This kind of makes the interview seem more fake. He's just stating basic hardware facts (not using the hardware efficiently will make the software worse, better architecture is better than higher raw numbers, etc.). When you really read it, besides API talk, he doesn't really say anything.
Not an expert, but are cores in a CPU and CUs on a GPU really comparable?
Again, not an expert, but it's the first time I've read this comparison.
 

tkscz

Member
Not an expert, but are cores in a CPU and CUs on a GPU really comparable?
Again, not an expert, but it's the first time I've read this comparison.

As pawel86ck said, not really. GPUs handle data better than CPUs and need less programming to know what to do with it. My point is more that the architecture, and the understanding of how that architecture works, matters more than just raw numbers.
 

Dural

Member
Where it is bandwidth bottlenecked, yes.
Actually I said 23%... that is from the first graph, where the bandwidth doesn't come into play, but we can use the second or third... 30%? The SP increase is not scaling the way some people here believe, and more like what the CryTek dev and Cerny said.

The 2080 S to 2080 Ti comparison scales exactly as you'd expect based on their teraflops. You proved that with what you posted; the 2080 S has ~17% fewer teraflops than the 2080 Ti, and the 2080 Ti is 20% faster at 4K and 17% faster at 1440p. If higher clock speeds let a lower-TF card make up ground, you'd see it here. This is one of the most ridiculous things I've seen mentioned over the last several weeks, and honestly I know you know better.


Not an expert, but are cores in a CPU and CUs on a GPU really comparable?
Again, not an expert, but it's the first time I've read this comparison.

No, not at all. You're almost never hitting 100% on a CPU when gaming; however, your GPU will almost always be pegged at 100%. GPU workloads are completely different from CPU workloads and scale well with more cores due to their parallel nature.
 

GymWolf

Gold Member
As pawel86ck said, not really. GPUs handle data better than CPUs and need less programming to know what to do with it. My point is more that the architecture, and the understanding of how that architecture works, matters more than just raw numbers.
Got it.
Did you watch yesterday's Inside Xbox show?
The Xbox people said they built the console so it gets all the power every time it needs it; do you think that's a response to the Crytek guy's leak?
 
Last edited:

GymWolf

Gold Member
The 2080 S to 2080 Ti comparison scales exactly as you'd expect based on their teraflops. You proved that with what you posted; the 2080 S has ~17% fewer teraflops than the 2080 Ti, and the 2080 Ti is 20% faster at 4K and 17% faster at 1440p. If higher clock speeds let a lower-TF card make up ground, you'd see it here. This is one of the most ridiculous things I've seen mentioned over the last several weeks, and honestly I know you know better.




No, not at all. You're almost never hitting 100% on a CPU when gaming; however, your GPU will almost always be pegged at 100%. GPU workloads are completely different from CPU workloads and scale well with more cores due to their parallel nature.
The last two Assassin's Creeds on PC at launch beg to differ 😆 😆
(I get your point tho)
 