
The PlayStation 5 GPU Will Be Supported By Better Hardware Solutions, In-Depth Analysis Suggests

S0ULZB0URNE

Member
GTX 1080 Ti, nitrogen and all obviously, but it goes to show how fast things move (PS5's GPU speed is a testament to that).

I hit 2030MHz on air, but man, that's impressive!
 

Ascend

Member
"When that worst case game arrives, it will run at a lower clock speed. But not too much lower, to reduce power by 10 per cent it only takes a couple of percent reduction in frequency, so I'd expect any downclocking to be pretty minor," he explains. "All things considered, the change to a variable frequency approach will show significant gains for PlayStation gamers."



Why are there so many discussions about this? The fact is, the PS5 is power-limited, and the GPU and CPU are allowed to increase clock speeds up to the advertised max clocks, as long as the power budget allows.
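For a rough feel of the power/frequency relationship behind Cerny's "couple of percent" remark, here is a tiny sketch. It assumes dynamic power scales roughly as frequency times voltage squared with voltage tracking frequency (so P ∝ f³); real silicon near its max clock has an even steeper voltage/frequency curve, so the actual savings per percent of clock are larger than this crude model shows.

# Illustrative only: assumes P ~ f * V^2 with V scaling linearly with f, i.e. P ~ f^3.
# Real silicon near Fmax has a steeper voltage/frequency curve than this.
f_max = 2230  # MHz, PS5 GPU max boost clock
for drop_pct in (1, 2, 3, 4):
    f = f_max * (1 - drop_pct / 100)
    power_saved = 1 - (f / f_max) ** 3
    print(f"{drop_pct}% lower clock -> ~{power_saved * 100:.1f}% less dynamic power")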
 

bitbydeath

Member
And you don't think MS's years of profiling console games, and AMD's DECADES of BUILDING CPUs AND GRAPHICS CARDS, involve trying to prevent common and avoidable bottlenecks?

You don't think this is part of trying to build any system?

Are you special??

And who amongst them has ever claimed to have "resolved" *all* bottlenecks??

You don't think everyone throws engineering resources at this?

Can you stay on track please?
All bottlenecks AND every single potential bottleneck are the exact same thing.
 
Last edited:

TLZ

Banned
"When that worst case game arrives, it will run at a lower clock speed. But not too much lower, to reduce power by 10 per cent it only takes a couple of percent reduction in frequency, so I'd expect any downclocking to be pretty minor," he explains. "All things considered, the change to a variable frequency approach will show significant gains for PlayStation gamers."
Why are there so many discussions about this? The fact is, the PS5 is power-limited, and the GPU and CPU are allowed to increase clock speeds up to the advertised max clocks, as long as the power budget allows.
You took that quote the opposite way. So I'll fix it for you.

Fixed version:
Why are there so many discussions about this? The fact is, the PS5 has 10.28 TF, and the GPU and CPU will run at the advertised max clocks, apart from the few games that'll need very minor downclocking.
 

sircaw

Banned
"I just feel Sony has thought about the end game a little bit more on optimisation and design."

What facts are you talking about?
Stop making shit up to fit your narrative.

If you don't like Digital Foundry as a source, what is OK for you?
Less fanboy logic please.

I posted a direct quote from a Digital Foundry video. On PS5's new audio chip, Richard says "I am kinda blown away by it", and in another quote, "This is potentially game changing."

This is a direct quote. You don't like it and say it's cherry picking.
Then you post a "facts don't care about your feelings" picture about a completely different quote. The irony is you are kind of contradicting yourself there with that picture, sport.

Good god, sometimes this forum is more work than it's worth.
 
Last edited:
We can tell who actually listened to what he said and who didn't.

Well you certainly didn't. And that stands out like a distress beacon.

Can you stay on track please?
All bottlenecks AND every single potential bottleneck are the exact same thing.

Cerny never said that all potential bottlenecks across the system were eliminated.

He never said that. Because that's not possible.

It's quite embarrassing for you, that you thought he did say that.
 
Sure, the SSDs in these new consoles are going to bring significant advantages over last gen. But the core components (CPU, GPU, RAM) are still the most important.

This is similar to talking up the Xbox One's ESRAM, where in some circumstances it could outmanoeuvre the PS4 setup. But overall the PS4 smoked it. The main raw numbers don't lie.
While that's true, raw numbers don't tell the whole story. There's a Ryzen 9 CPU out there running at 4.3GHz and still outperforming Intel chips running at 5GHz in some cases.

Why? Efficiency. This is just one example of how that can be achieved.
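To make that point concrete, here's a toy comparison; the IPC figures below are made up purely to show the arithmetic, not measured values for any real Ryzen or Intel part.

# Hypothetical numbers, only to illustrate performance = IPC x clock.
def throughput(ipc, clock_ghz):
    return ipc * clock_ghz  # instructions retired per nanosecond, roughly

high_ipc_chip = throughput(ipc=1.30, clock_ghz=4.3)    # ~5.59
high_clock_chip = throughput(ipc=1.05, clock_ghz=5.0)  # ~5.25
print(high_ipc_chip > high_clock_chip)  # True: the 4.3 GHz part wins on this workload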

So in the end, only time's gonna tell. But again, I highly doubt that this is the "devastating miscalculation" Xbox fanboys are presenting it as, just like it's not going to magically be a much stronger console like PS Fanboys insist it will be. They're gonna be much closer than either camp wants to admit overall.
 
Last edited:
I linked the quote.
He said 'Every Single Potential Bottleneck'
Go back and read it again.

Relating to storage. Not at a platform level.

When a fanboy potato thinks there's a universally perfect system with his favourite badge on it .... smfh.

Edit: depending on how you want to use PS5 storage, it's not immune from bottlenecks either.

There is no magic "manufacturer badge" friendly secret sauce that negates all the tradeoffs in computing.

FFS, if you think there is, you're the biggest problem in the conversation.
 
Last edited:

bitbydeath

Member
Relating to storage. Not at a platform level.

When a fanboy potato thinks there's a universally perfect system with his favourite badge on it .... smfh.

Edit: depending on how you want to use PS5 storage, it's not immune from bottlenecks either.

There is no magic "manufacturer badge" friendly secret sauce that negates all the tradeoffs in computing.

FFS, if you think there is, you're the biggest problem in the conversation.

You should really go and re-watch the video. He clearly said that he resolved bottlenecks for how the data interacts with all hardware components of the PS5, not just the SSD.

The SSD is just one piece of the puzzle. There are a lot of places where bottlenecks can occur between the SSD and the game code that uses the data.
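To illustrate that point with a toy model: whatever the raw SSD number is, the delivered streaming rate is whatever the slowest stage between the flash chips and the game code can sustain. The stage names and GB/s figures below are invented, only the min() logic matters.

# Toy model: end-to-end streaming throughput is capped by the slowest stage.
# Numbers are placeholders, not actual PS5 or XSX figures.
stages = {
    "ssd_raw_read": 5.5,         # GB/s
    "io_controller": 5.5,
    "decompression": 8.0,
    "copy_to_game_memory": 4.0,  # deliberately undersized for the example
    "engine_streaming_code": 6.0,
}
slowest = min(stages, key=stages.get)
print(f"effective throughput ~{stages[slowest]} GB/s, bottlenecked by {slowest}")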
 
You wanna talk about the rabid Xbox Fanboys too that discredit anything positive about the PS5 and how it's engineered?

Why bother at this point? They're few in number, and half the board and the mods deal with them already, so they can't get away with anywhere near the amount of trolling and bad-faith arguments that Sony fanboys do. Those are just the facts.

I remember some on the webz saying power doesn't matter! It's all about that SSD! But for some reason lots of opinion pieces are coming out trying to paint the PS5 as more powerful. Lol. We're only about to start April! Long time till November.

The pivot to obsession over the SSD came as a coping mechanism over the TF "battle" perceived to have been lost. The very moment that number came out (actually, the moment the mention of 36 CUs in the presentation even popped up as a comparison to a theoretical example, which I and others were speculating might be another Oberon revision or the full chip), the narrative swiftly changed to cling onto anything some felt could give them an advantage, which has turned out to be the SSD.

And like clockwork, they're (either intentionally or not) wrongly perceiving aspects of XSX's SSD solution because they need PS5's to be absolutely, certifiably superior. Meanwhile downplaying pretty much every advantage XSX has as "not a big deal".

I just dislike the disingenuous nature of it all. And yes, there have been some Xbox guys downplaying aspects of PS5's SSD (and by association, XSX's SSD), but it was the more rabid Sony fanboys who started this by suddenly doing a lot of mental gymnastics to whittle away any TF advantage, etc. etc. Most of these supposedly technical analyses feel thinly veiled in console warrior hoo-rah, and yes, that unfortunately includes a few channels I otherwise enjoy watching and getting content from. It doesn't mean I won't enjoy their content in the future, but it does let me know they are not impervious to falling into the same traps as any other random in that regard.

Honestly the most enlightening details on these systems I've seen so far are from the disruptive ludens blog. Lots of very technical explanations of the various aspects of the systems, and it seems to keep things very honest, giving both systems their due and avoiding misleading narratives that try to prop one up by putting the other down. Thankfully you can translate the articles into English. I think I saw a post here quoting an article about the "current state of PS5", and the info reported there was probably from an older dev kit, just being mentioned crazy late in that piece. The actual technical analysis on both systems is fantastic and the best I've seen or read from anywhere with regards to next-gen so far tho, easily.

I think that'll do it for me; hopefully convos from here out keep going civil and honest in regards to the systems.
 
Last edited:

mckmas8808

Banned
The pivot to obsession over the SSD came as a coping mechanism over the TF "battle" perceived to have been lost. [...]

I don't know about Sony fanboys, but I'll speak for myself. I personally love everything I'm hearing about the SSDs in both machines, because they're a game changer in consoles. The fact that Sony has put more emphasis on the SSD than TFs isn't the reason I'm personally excited about it. It's one of the key points that'll lift ALL games next-gen, be it on PC, XSX, or PS5. Yes, higher TFs and ray tracing matter a lot. But I don't think next-gen could be what it will be had they both stuck with 5400 RPM mechanical drives.
 
So look, I know that we are in the land of Sony Oz, but PS5 boys, the PS5 is the weaker system. And yes, I have read that long PR paper.

446 GB/s, 256-bit
446 GB/s, 320-bit

Who wins?


It's that easy.
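For anyone checking figures like these: peak GDDR6 bandwidth is just bus width times per-pin data rate. Assuming 14 Gbps modules on both consoles (which is what the official specs point to), the sums work out like this:

# Peak bandwidth in GB/s = (bus width in bits / 8 bits per byte) * data rate in Gbps.
def peak_bandwidth_gbs(bus_width_bits, gbps_per_pin=14):
    return bus_width_bits / 8 * gbps_per_pin

print(peak_bandwidth_gbs(256))  # 448.0 GB/s - PS5's 256-bit bus
print(peak_bandwidth_gbs(320))  # 560.0 GB/s - XSX's full 320-bit bus (the 6 GB region sees 336 GB/s)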
 

SmokSmog

Member
Sony should be proud, they have a console APU which is clocked higher than custom water-cooled and overclocked top AMD/Nvidia discrete graphics cards.

In other words, I don't believe in 2230MHz sustained, even with an underclocked CPU.
 

rnlval

Member
RAM

Let me put this bluntly - the memory configuration on the Series X is sub-optimal.

I understand there are rumours that the SX had 24 GB or 20 GB at some point early in its design process but the credible leaks have always pointed to 16 GB which means that, if this was the case, it was very early on in the development of the console. So what are we (and developers) stuck with? 16 GB of GDDR6 @ 14 GHz connected to a 320-bit bus (that's 5 x 64-bit memory controllers).

Microsoft is touting the 10 GB @ 560 GB/s and 6 GB @ 336 GB/s asymmetric configuration as a bonus but it's sort-of not. We've had this specific situation at least once before in the form of the NVidia GTX 650 Ti and a similar situation in the form of the 660 Ti. Both of those cards suffered from an asymmetrical configuration, affecting memory once the "symmetrical" portion of the interface was "full".

[image: RAM configuration graphic]

Interleaved memory configurations for the SX's asymmetric memory configuration, an averaged value and the PS5's symmetric memory configuration... You can see that, overall, the PS5 has the edge in pure, consistent throughput...

.....

The SX has 2.5 GB reserved for system functions and we don't know how much the PS5 reserves for that similar functionality but it doesn't matter - the Xbox SX either has only 7.5 GB of interleaved memory operating at 560 GB/s for game utilisation before it has to start "lowering" the effective bandwidth of the memory below that of the PS5... or the SX has an averaged mixed memory bandwidth that is always below that of the baseline PS5. Either option puts the SX at a disadvantage to the PS5 for more memory intensive games and the latter puts it at a disadvantage all of the time.
Against "the Xbox SX either has only 7.5 GB of interleaved memory operating at 560 GB/s for game utilisation", argument.

From https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory.
Six gigabytes [runs at] 336GB/s. We call this standard memory.
GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."
In terms of how the memory is allocated, games get a total of 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory.
This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell




When the game is at full screen, the OS front-end render will be suspended.

James Prendergast failed to understand that the CPU is not pretending to be a half-assed GPU like the PS3's CELL SPUs, i.e. CPU and GPU memory bandwidth intensity are different; e.g. the computation intensity difference for XSX's CPU : GPU ratio is about 1 to 13, hence Prendergast's averaging argument is flawed.

Prendergast's argument is flawed when the CPU is the bottleneck, i.e. a similar argument to why the XBO GPU's 1.32 TFLOPS with 200+ GB/s memory bandwidth was nearly useless against the PS4 GPU's 1.84 TFLOPS with 176 GB/s memory bandwidth.


Targeting a 60 Hz game loop, the game programmer slices the machine's bandwidth into per-frame budgets:

PS5:

448 GB/s turns into 7.467 GB of potential per ~16.7 ms frame.

XSX:

336 GB/s turns into 5.60 GB of potential per ~16.7 ms frame (the 3.5 GB game-visible standard pool).

560 GB/s turns into 9.33 GB of potential per ~16.7 ms frame (the 10 GB GPU-optimal pool).

That's a 2.86 : 1 storage ratio between the 10 GB and 3.5 GB pools.

As long as XSX doesn't give equal time to the 5.6 GB and 9.33 GB budgets, XSX has the advantage.
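A quick sanity check on that claim, using only the numbers above: if a fraction x of the frame's memory time is spent on the slow pool, the XSX per-frame budget is x × 5.6 GB + (1 − x) × 9.33 GB. Setting that equal to the PS5's 7.467 GB gives x = (9.33 − 7.467) / (9.33 − 5.6) ≈ 0.50, so a 50/50 time split really is the break-even point.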



Scenario 1: 15% of the frame on the 5.6 GB budget = 0.84 GB; 85% on the 9.33 GB budget = 7.93 GB; frame total = 8.77 GB. XSX has a 17.5% per-frame BW advantage over PS5.

Scenario 2: 10% on 5.6 = 0.56 GB; 90% on 9.33 = 8.397 GB; frame total = 8.957 GB. XSX has a 20% per-frame BW advantage over PS5.

Scenario 3: 5% on 5.6 = 0.28 GB; 95% on 9.33 = 8.8635 GB; frame total = 9.1435 GB. XSX has a 22.5% per-frame BW advantage over PS5.

Scenario 4: 2% on 5.6 = 0.112 GB; 98% on 9.33 = 9.1434 GB; frame total = 9.2554 GB. XSX has a 24% per-frame BW advantage over PS5.





Assigning CPU and GPU memory bandwidth per frame. Let the CPU consume 0.85 GB per frame, similar to a per-frame slice of a 51 GB/s 128-bit DDR4-3200 PC config:

Scenario A: PS5 has 6.617 GB available to its GPU per frame; XSX has 7.92 GB (from Scenario 1's 8.77 GB). The XSX GPU has a 19.7% per-frame memory bandwidth advantage over the PS5 GPU.

Scenario B: PS5 6.617 GB; XSX 8.107 GB (from Scenario 2's 8.957 GB). The XSX GPU has a 22.5% per-frame memory bandwidth advantage over the PS5 GPU.

Scenario C: PS5 6.617 GB; XSX 8.294 GB (from Scenario 3's 9.1435 GB). The XSX GPU has a 25.3% per-frame memory bandwidth advantage over the PS5 GPU.
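For anyone who wants to poke at these numbers themselves, here is a minimal sketch that reproduces the per-frame arithmetic above. The pool bandwidths and the 0.85 GB/frame CPU share are taken from the post; everything else is just the same division by 60.

# Reproduces the per-frame bandwidth budgets from the scenarios above (60 Hz -> 1/60 s frames).
PS5_BW = 448                           # GB/s
XSX_FAST_BW, XSX_SLOW_BW = 560, 336    # GB/s for the 10 GB and 3.5 GB game-visible pools
CPU_SHARE = 0.85                       # GB per frame assumed for CPU traffic (from the post)

def per_frame(bw_gbs):
    return bw_gbs / 60                 # GB available within one ~16.7 ms frame

ps5 = per_frame(PS5_BW)                       # ~7.467 GB/frame
for slow_frac in (0.15, 0.10, 0.05, 0.02):    # Scenarios 1-4
    xsx = slow_frac * per_frame(XSX_SLOW_BW) + (1 - slow_frac) * per_frame(XSX_FAST_BW)
    total_adv = xsx / ps5 - 1
    gpu_adv = (xsx - CPU_SHARE) / (ps5 - CPU_SHARE) - 1   # Scenarios A-C style
    print(f"{slow_frac:.0%} slow-pool time: XSX {xsx:.3f} GB/frame, "
          f"+{total_adv:.1%} total, +{gpu_adv:.1%} for the GPU share")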



Tiled compute methods that work out of both the CPU's and the GPU's multi-MB caches, together with the TMUs/ROPs, can conserve external memory I/O accesses.





--------------------
The reason for the 970's slow 0.5GB:

[image: GTX 970 memory crossbar diagram]

The 0.5GB DRAM segment is bottlenecked because it lacks its own L2 cache slice and a dedicated I/O link into the crossbar.
 

Sosokrates

Report me if I continue to console war
Against "the Xbox SX either has only 7.5 GB of interleaved memory operating at 560 GB/s for game utilisation", argument.

From https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs




nrQAT14.png


When the game is at full screen, OS front-end render will be suspended.

James Prendergast failed to understand CPU is not pretending to be a half-ass'ed GPU like in PS3's CELL SPUs i.e. CPU and GPU memory bandwidth intensity are different e.g. computation intensity difference for XSX's CPU : GPU ratio is 1 to 13, hence Prendergast's averaging argument is flawed.

Prendergast's argument is flawed when the CPU is the bottleneck i.e. a similar argument to why XBO GPU's 1.32 TFLOPS with +200 GB/s memory bandwidth was nearly useless against PS4 GPU with 1.84 TFLOPS and 176 GB/s memory bandwidth.


Targeting 60 Hz game loop, game programmer slices machine's potential in a 60 Hz game loop

PS5:

448 GB/s turns into 7.467 GB per 16ms frame budget potential



XSX:

336 GB/s turns into 5.60 GB per 16ms frame budget potential. 3.5 GB memory storage

560 GB/s turns into 9.33 GB per 16ms frame budget potential. 10 GB memory storage

2.86: 1 memory storage ratio between 10 GB vs 3.5 GB

As long XSX doesn't give equal time between 5.6 GB and 9.33 GB memory pools, XSX has the advantage.



Scenario 1

15% of 16 ms for 5.6GB memory pool= 0.84 GB

85% of 16 ms for 9.33 GB memory pool = 7.93 GB

Frame total bandwidth budget: 8.77 GB

XSX has 17.5% per frame BW advantage over PS5





Scenario 2

10% of 16 ms for 5.6GB memory pool= 0.56 GB

90% of 16 ms for 9.33 GB memory pool = 8.397‬ GB

Frame total bandwidth budget: 8.957‬ GB

XSX has 20% per frame BW advantage over PS5



Scenario 3

5% of 16 ms for 5.6 GB memory pool= 0.28‬ GB

95% of 16 ms for 9.33 GB memory pool = 8.8635 GB

Frame total bandwidth budget: 9.1435‬ GB

XSX has 22.5% per frame advantage over PS5



Scenario 4

2% of 16 ms for 5.6 GB memory pool= 0.112‬ GB

98% of 16 ms for 9.33 GB memory pool = 9.1434‬ GB

Frame total bandwidth budget: 9.2554‬ GB

XSX has 24% per frame advantage over PS5





Assigning CPU and GPU memory bandwidth for frame

Scenario A


Let CPU consumes 0.85 GB for 16 ms frame similar to 16 ms frame slice from 51 GB/s 128 DDR4-3200 PC config

PS5: 6.617‬ GB available to GPU per frame

XSX: 7.92‬ GB‬ available to GPU per frame with 8.77 GB (from Scenario 1)

XSX GPU has 19.7% memory bandwidth advantage per frame over PS5 GPU



Scenario B

Let CPU consumes 0.85 GB for 16 ms frame similar to 16 ms frame slice from 51 GB/s 128 DDR4-3200 PC config

PS5: 6.617 GB available to GPU per frame

XSX: 8.107 GB‬ available to GPU per frame with 8.957 GB (from Scenario 2)

XSX GPU has 22.5% memory bandwidth advantage per frame over PS5 GPU



Scenario C

Let CPU consumes 0.85 GB for 16 ms frame similar to 16 ms frame slice from 51 GB/s 128 DDR4-3200 PC config

PS5: 6.617 GB available to GPU per frame

XSX: 8.296 GB‬ available to GPU per frame with 9.1435 GB (from Scenario 3)

XSX GPU has 25.3% memory bandwidth advantage per frame over PS5 GPU



Tile compute methods on both CPU's and GPU's multi-MB caches with TMU/ROPS can conserve external memory IO access.





--------------------
The reason for 970's slow 0.5GB

3648456-9497634327-gtxcr.jpg


0.5GB DRAM bottlenecked without its own L2 cache and dedicated I/O link into the crossbar.

BOOM

Thread over.
 

JLB

Banned
What facts are you talking about?
Stop making shit up to fit your narrative.

[...]

pff, you are unsmokable. You are seeing ghosts. Let me hit the ignore button and move fwd.
 

Neo_game

Member
Glad you're so confident that PS5 will be weak and not punch above its weight; I would not be so confident. My prediction: for games with last-gen assets XSX will dominate, while with larger games and assets > 10 GB both will be similarly memory-bandwidth bound and the difference will be negligible.



Your calcs are different to others'... and the poster quoted below has serious expertise on this stuff.

[image: screenshot of a forum post by Lady Gaia working through the bandwidth numbers]

Interesting. Just curious, who is Lady Gaia?
 

SatansReverence

Hipster Princess
Yes, there are still upper limits defined by the hardware itself but that is actually not a bottleneck since it's infinite to the problem itself.

No, it's integral to the hardware's function. A thermal bottleneck is every bit as problematic as a memory bus bottleneck.

So again, the PS5 has a grade-schooler's bottleneck.
 

rnlval

Member
This is how a bottleneck looks.

[image]


This is how Mark Cerny has been describing the PS5 with no bottlenecks.

[image]


Yes, there are still upper limits defined by the hardware itself but that is actually not a bottleneck since it's infinite to the problem itself.
CPU itself is the bottleneck. Note why the XBO GPU's 1.32 TFLOPS with 200+ GB/s memory bandwidth (or use the AMD FireGL W5000 example) couldn't beat the PS4 GPU's 1.84 TFLOPS with 176 GB/s memory bandwidth (or use the AMD Radeon R7-265 example).

The Intel Core i7-7820X's X299 platform with 256-bit DDR4-3200 has 104 GB/s memory bandwidth and it's unable to match the Intel 620 IGP in Crysis DX9c render performance (using SwiftShader 3.0 for the CPU).

An 800-900 GFLOPS Zen 2 will not have computation intensity like a 10-12 TFLOPS GPU, and the CPU's GFLOPS numbers are halved when 256-bit AVX2 is not used.
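For reference, here is where ballpark CPU FLOPS numbers like that come from; the 3.5 GHz clock is an assumption for illustration, not the exact console clock, and peak figures ignore sustained-throughput realities.

# Peak FP32 estimate = cores * clock * FLOPs per cycle per core.
# Zen 2 with AVX2: 2 x 256-bit FMA pipes -> 8 fp32 lanes x 2 ops x 2 pipes = 32 FLOPs/cycle.
# Without 256-bit AVX2 (128-bit code paths): roughly half, 16 FLOPs/cycle.
cores, clock_ghz = 8, 3.5          # assumed for illustration
print(cores * clock_ghz * 32)      # 896.0 GFLOPS with AVX2
print(cores * clock_ghz * 16)      # 448.0 GFLOPS without it; a 10-12 TFLOPS GPU is 10,000-12,000 GFLOPS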
 
Last edited:

bitbydeath

Member
No, it's integral to the hardwares function. A thermal bottleneck is every bit as problematic as a memory bus bottleneck.

So again, the PS5 has a grade schoolers bottleneck.

CPU itself is the bottleneck. Note why XBO GPU's 1.32 TFLOPS with 200+ GB/s memory bandwidth(or use AMD FireGL W5000 example) couldn't beat PS4 GPU's 1.84 TFLOPS with 176 GB/s memory bandwidth(or use AMD Radeon R7-265 example).

Intel Core i7-7820X CPU's X299 chipset with 256bit DDR4-3200 has 104 GB/s memory bandwidth and it's unable to match Intel 620 IGP in Crysis DX9c render performance (using Switftshader 3.0 for CPU).

800-900 GFLOPS Zen 2 will not have computation intensity like a 10 -12 TFLOPS GPU. CPU's GFLOPS numbers will be halved when 256 bit AVX 2 is not used.

Bottlenecks aren’t related to physical hardware; they're about how data gets passed between said hardware.
 
Last edited:

bitbydeath

Member
You keep saying it.

Doesn't make you any less incorrect.

Well here’s a definition to further prove my point.

Bottlenecks can be thought of as choke points inside of a computer system. Each component has its respective responsibilities to the data flowing through the computer and is susceptible to encountering a bottleneck. Each component must communicate and pass along the data it’s processed to the next piece in the system.

 
Bottlenecks aren’t related to physical hardware, it’s about how data gets passed between said hardware.
Bottlenecks, while not entirely caused by hardware, are absolutely related to it, and in most cases hardware is completely responsible.

What on Earth would ever lead you to say something like this?
 
You can keep parroting wrong information all you like.

Guess what happens when your processor starts overheating and thermal throttling? All that data gets slowed down. Congratulations, BOTTLENECK.
This is the kind of shit we're dealing with here, and it's getting really old. People who don't actually understand these things think that because someone said something, or because they read it on the internet, they're suddenly capable of holding a debate on it.

It's not even armchair, it's just total ignorance and being entirely out of their depth. This is why the SSD nonsense has spun out of control; none of the people parroting this trash have a clue what they're talking about.
 

bitbydeath

Member
Bottlenecks while not entirely responsible by hardware are absolutely related to it and in most cases completely responsible.

What on Earth would ever lead you to say something like this?

Again, bottlenecks are about how data is transmitted between hardware. Sure, you can get hardware that ultimately performs better, but that has nothing to do with the bottlenecks themselves as they pertain to the hardware, and removing bottlenecks is about dealing with existing hardware.
 

SatansReverence

Hipster Princess
Why don’t you find something to back up your own stance then?
Because it's already been explained to you. You're denying simple facts.

The system is being slowed down because of thermals (or power, doesn't really matter) and thus it has a thermal bottleneck.

Yes, a BOTTLENECK. A rather amateur level bottleneck.
 

bitbydeath

Member
This is the kind of shit we're dealing with here, and it's getting really old. People who don't actually understand these things think because someone said something or that because they read it on the internet that they're suddenly capable of holding a debate on it.

It's not even armchair, it's just total ignorance and being entirely out of their depths. This is why the SSD nonsense has spun out of control, none of the people parroting this trash have a clue what they're talking about.

At least I’m backing up what I say with links. You should do the same.
 
Again, bottlenecks are about how data is transmitted between hardware. Sure you can get hardware that ultimately performs better but that has nothing to do with the bottlenecks itself as it pertains to the hardware and removing bottlenecks is about dealing with existing hardware.
Bottlenecks are a stall or slowdown in transmission caused by an intersection point, i.e. hardware which is incapable of keeping up with the flow of data being provided to it by other hardware within the system.

This is the point we've reached, folks: they're now trying to redefine a bottleneck.



At least I’m backing up what I say with links. You should do the same.
Absolutely 100% not going to happen, this is too laughably dumb for me to take seriously.
 
Last edited:

bitbydeath

Member
Bottlenecks are a stall or slowing in transmission caused by an intersection point i.e. hardware which is incapable of keeping up with the flow of data being provided to it by other hardware within the system.

This is the point we've reached folks, they're now trying to redefine a bottleneck.



Absolutely 100% not going to happen, this is too laughably dumb for me to take seriously.

Because you can’t and won’t admit you are wrong.
You could create the very first wiki article since none currently exists.




 

rnlval

Member
Bottlenecks aren’t related to physical hardware, it’s about how data gets passed between said hardware.
The memory bus is the physical hardware.
The processor is the physical hardware.
Physical hardware's design can bottleneck the processing pipeline.

Memory bandwidth is just a pipe's size.

Reminder: an AVX workload can cause clock speed degradation on the PS5.
 
Last edited:

SatansReverence

Hipster Princess
bitbydeath

"A bottleneck occurs when the capacity of an application or a computer system is limited by a single component"

I guess the heatsink isn't a component now.

Congratulations, you played yourself.
 