
Matt weighs in on PS5 I/O, PS5 vs XSX and what it means for PC.

Panajev2001a

GAF's Pleasant Genius
Basically if their custom work has allowed the world's first 2.23 GHz GPU, that's an achievement to be proud of, in the same vein as their custom SSD work, imo. Did they forget to put the clockspeed achievement on the same pedestal as the SSD?

No, but they believe the SSD is more game-changing for their strategy overall. They also gave a lot of space to their clocking/power-consumption management solution, and not surprisingly you are minimising it / brushing it aside.
They also did say that, for them, higher clocks are preferable at the same performance profile, and why; but if you wanted BS about how that made up the gap or gave them secret performance multipliers to breeze past the XSX, you are barking up the wrong tree and you know it.
Still, thanks for confirming your concern was just "concern", as in F.U.D. concern... I guess...
 

Psykodad

Banned
UE4 will still work on UE5, Epic have already said that.

It's just that the Nanite subset has been optimised FIRST on PS5 and will clearly be more performant on PS5, which is why PS5 was chosen.

I thought the other poster was implying that changes Epic is making for PS5 won't be added to UE5. If not, disregard my comment.
 

geordiemp

Member
So XSX fans, when do you think MS will be able to match the graphic fidelity shown below



Any ideas when or how? Do you think it will ever be matched?

Can XSX stream assets so they are available to the GPU to process in the milliseconds and fractions of a frame necessary (which, as explained by Tim Sweeney, is not the same thing as raw SSD speed)?

Any takers?
 

Bryank75

Banned
(10*560 + 6*336)/16 = (5600+2016)/16 = 7616/16 = 476 GB/s

That's called a weighted average.

476 > 448

QED

3 GB of SX RAM is dedicated to the OS.
So, in fact...

(10*560 + 3*336) = 6608 vs 16*448 = 7168

I find it particularly pathetic that you try to solve for a single GB of RAM. Why try to spin this?
The fact is that PS5 comes out ahead of Series X by about 8% on this total.

We are NOT talking about the bandwidth of one unit, we are talking about the total bandwidth of all the RAM.

If you can't understand or are arguing in bad faith, that is on you.

Understand, longdi & Trueblakjedi?
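
For reference, a quick Python sketch of the two calculations being compared here (the 3 GB OS reservation on the slow XSX pool is the figure assumed in this thread, not a confirmed split):

def weighted_bandwidth(pools):
    """pools: list of (size_gb, bandwidth_gb_per_s) tuples."""
    total = sum(size for size, _ in pools)
    return sum(size * bw for size, bw in pools) / total

# XSX: 10 GB @ 560 GB/s + 6 GB @ 336 GB/s; PS5: 16 GB @ 448 GB/s
print(weighted_bandwidth([(10, 560), (6, 336)]))  # 476.0 GB/s naive weighted average
print(weighted_bandwidth([(16, 448)]))            # 448.0 GB/s

# Capacity x bandwidth totals if 3 GB of the slow XSX pool is assumed reserved for the OS
print(10 * 560 + 3 * 336)  # 6608
print(16 * 448)            # 7168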
 

Dontero

Banned
Basically if their custom work has allowed the world's first 2.23 GHz GPU, that's an achievement to be proud of, in the same vein as their custom SSD work, imo. Did they forget to put the clockspeed achievement on the same pedestal as the SSD?

Not really. They achieved it by basically cutting down the power of the chip. The Xbox GPU will be faster than it and will probably run cooler. What they did achieve, though, is more performance per dollar. I.e. they opted for a cheaper GPU but OC'd it to hell to keep the price competitive.

Imho there will be about a $100 difference between PS5 and Xbox4.

So XSX fans, when do you think MS will be able to match the graphic fidelity shown below

Considering Xbox4 is more powerful... from day 1, and they could run it at a better frame rate with extra effects.
 

Panajev2001a

GAF's Pleasant Genius
3 GB of SX RAM is dedicated to the OS.
So, in fact...

(10*560 + 3*336) = 6608 vs 16*448 = 7168

I find it particularly pathetic that you try to solve for a single GB of RAM. Why try to spin this?
The fact is that PS5 comes out ahead of Series X by about 8% on this total.

We are NOT talking about the bandwidth of one unit, we are talking about the total bandwidth of all the RAM.

If you can't understand or are arguing in bad faith, that is on you.

Understand, longdi & Trueblakjedi?

I have to give it to them, another I/O speed thread derailed back to TFLOPS or RAM or anything but that discussion somehow... they are well coordinated at least...
 

geordiemp

Member
Not really. They achieved it by basically cutting down the power of the chip. The Xbox GPU will be faster than it and will probably run cooler. What they did achieve, though, is more performance per dollar. I.e. they opted for a cheaper GPU but OC'd it to hell to keep the price competitive.

Imho there will be about a $100 difference between PS5 and Xbox4.



Considering Xbox4 is more powerful... from day 1, and they could run it at a better frame rate with extra effects.

Considering XSX I/O and architecture for streaming high-quality assets is considerably less powerful, when will we see high asset streaming looking as good as the UE5 demo?

3 GB of SX RAM is dedicated to the OS.

We don't know PS5's OS size yet, or where game recording will go on either system, or if PS5 will do the same as PS4 Pro for the OS.
 

Panajev2001a

GAF's Pleasant Genius
Not really. They achieved it by basically cutting down the power of the chip. The Xbox GPU will be faster than it and will probably run cooler. What they did achieve, though, is more performance per dollar. I.e. they opted for a cheaper GPU but OC'd it to hell to keep the price competitive.

Imho there will be about a $100 difference between PS5 and Xbox4.



Considering Xbox4 is more powerful... from day 1, and they could run it at a better frame rate with extra effects.

Cheaper != cheap though...
 

Bryank75

Banned
Considering XSX I/O and architecture for streaming high-quality assets is considerably less powerful, when will we see high asset streaming looking as good as the UE5 demo?



We don't know PS5's OS size yet, or where game recording will go on either system, or if PS5 will do the same as PS4 Pro for the OS.
I would imagine it will be suspended to the SSD while a game is active and be recalled on the fly when needed. But we do need clarification.
 

Bryank75

Banned
All that with less detail and/or more pop-in + loading.
They won't get extra frames outta anything... CPUs are practically identical.

I was thinking of standing in shops from time to time around launch and asking parents buying SX if they were buying S-X for their son! Suggesting that it is a sick thing to do...
 

Dontero

Banned
Considering XSX I/O and architecture for streaming high-quality assets is considerably less powerful, when will we see high asset streaming looking as good as the UE5 demo?

There will be no difference. Most of the SSD talk is PR gibberish. What matters for games when they load stuff is random read, not sequential read. And regardless of the controller or disk being used, as long as someone has an SSD it will work mostly the same. There will be barely any difference compared to a normal SATA SSD, let alone an NVMe PCIe Gen3 SSD.

Moreover, the GPU reading stuff directly from the SSD is even bigger PR gibberish. Games operate at GIGABYTES per second, and not a few GIGABYTES but 100s of GIGABYTES per second. So you don't have to be an Einstein to do 2+2 and see that it is not 5.

The SSD will be a huge improvement over an HDD, but most of the Sweeney/Cerny talk and the rest is just gibberish. "But how can devs lie?!" someone might say to that. Look at technology and PR and you will see everyone lying non-stop. "OUR NEW NVMe is twice as fast!" says Kingston; yes, in sequential read, but in random read it is maybe 2-5% faster. "PS3 has the ability to run games at 120 Hz 1080p!!" (if you run simple 3D objects, in small print somewhere).

The reason random read didn't improve much comes down to physics and technology. SSDs don't have seek time anymore (they do, but it is super small) and you can't really improve it a lot unless you completely change the technology involved. Intel Optane has some improvements there, but they are still small.

The only interesting and "seems to be true" tech mentioned is data compression and adjusting what is loaded first. But neither of those is a really huge issue to begin with. If you create any streaming engine you already do it, since you don't want far-away elements to load before those close to the player.

Data compression is already being used right now in everything. No one uses bitmaps for textures, no one uses WAVs for sound files. Everything is compressed and decompressed all the time. Hardware-level decompression is great; for example, Win10 already supports compression via zlib and you can compress your games and files (even Windows files) like that and use them without explicitly decompressing them.

Speaking of Cerny and compression: this is how PR works. Cerny talked about going from 5-8 GB/s to 22 GB/s. This is what non-technical people see. But technical people know that you can't simply stack the decompression rate on top of the transfer rate, because compression and decompression are a highly variable process. Some files can be compressed by as much as 99% and some by as little as 2%, because they are already compressed. If you take a video file and compress it via Kraken or zlib it will still be the same size, because video files are already super-compressed by default. So that 22 GB/s goes back down to 5-8 GB/s, and that is assuming the peak can be sustained and the file itself is nice for sequential read/write.
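
To make the compression point concrete, here is a minimal sketch (illustrative compression fractions of my own choosing, anchored to the 5.5 GB/s raw figure from the Road to PS5 talk): the effective read rate is just the raw rate divided by how small the data actually compresses, so the headline number depends entirely on what is on disk.

RAW_GBPS = 5.5  # PS5 raw SSD throughput quoted in the Road to PS5 talk

def effective_rate(raw_gbps, compressed_fraction):
    """compressed_fraction = compressed size / original size (1.0 = incompressible)."""
    return raw_gbps / compressed_fraction

print(effective_rate(RAW_GBPS, 1.00))  # 5.5 GB/s: already-compressed video/audio gains nothing
print(effective_rate(RAW_GBPS, 0.65))  # ~8.5 GB/s: ballpark for typical game data
print(effective_rate(RAW_GBPS, 0.25))  # 22.0 GB/s: only for unusually compressible data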
 

Bryank75

Banned
There will be no difference. Most of the SSD talk is PR gibberish. What matters for games when they load stuff is random read, not sequential read. And regardless of the controller or disk being used, as long as someone has an SSD it will work mostly the same. There will be barely any difference compared to a normal SATA SSD, let alone an NVMe PCIe Gen3 SSD.

Moreover, the GPU reading stuff directly from the SSD is even bigger PR gibberish. Games operate at GIGABYTES per second, and not a few GIGABYTES but 100s of GIGABYTES per second. So you don't have to be an Einstein to do 2+2 and see that it is not 5.

The SSD will be a huge improvement over an HDD, but most of the Sweeney/Cerny talk and the rest is just gibberish. "But how can devs lie?!" someone might say to that. Look at technology and PR and you will see everyone lying non-stop. "OUR NEW NVMe is twice as fast!" says Kingston; yes, in sequential read, but in random read it is maybe 2-5% faster. "PS3 has the ability to run games at 120 Hz 1080p!!" (if you run simple 3D objects, in small print somewhere).

The reason random read didn't improve much comes down to physics and technology. SSDs don't have seek time anymore (they do, but it is super small) and you can't really improve it a lot unless you completely change the technology involved. Intel Optane has some improvements there, but they are still small.

The only interesting and "seems to be true" tech mentioned is data compression and adjusting what is loaded first. But neither of those is a really huge issue to begin with. If you create any streaming engine you already do it, since you don't want far-away elements to load before those close to the player.

Data compression is already being used right now in everything. No one uses bitmaps for textures, no one uses WAVs for sound files. Everything is compressed and decompressed all the time. Hardware-level decompression is great; for example, Win10 already supports compression via zlib and you can compress your games and files (even Windows files) like that and use them without explicitly decompressing them.

Speaking of Cerny and compression: this is how PR works. Cerny talked about going from 5-8 GB/s to 22 GB/s. This is what non-technical people see. But technical people know that you can't simply stack the decompression rate on top of the transfer rate, because compression and decompression are a highly variable process. Some files can be compressed by as much as 99% and some by as little as 2%, because they are already compressed. If you take a video file and compress it via Kraken or zlib it will still be the same size, because video files are already super-compressed by default. So that 22 GB/s goes back down to 5-8 GB/s, and that is assuming the peak can be sustained and the file itself is nice for sequential read/write.
Yeah cause marketing is Cerny's background! /S

I actually preferred Xbox's highly technical reveal... 'it's a monster', 'it eats monsters for breakfast'. hahaha
 

Marlenus

Member
We know they both have 64 ROPs, as the number of ROPs is dictated by the number of shader arrays (and both have 4). But PS5 has other parts about 20% stronger than XSX, basically everything that is not in the CUs, as both should have the same number of those units (because, again, they are determined by the number of shader arrays, not the number of CUs, and we know XSX has 2 Shader Engines and 4 shader arrays):

- Geometry processor
- Primitive units
- Graphics command processor
- Rasterizer
- Rops
- ACEs (Async Compute Engines)
- The new L1 cache (compared to GCN)


[image: Navi-Slide-2.jpg]

Just because AMD gave their 2-SE design 64 ROPs does not mean that all 2-SE designs have to have 64 ROPs, since they are modular units.

We also don't know that Series X has 2 SEs, because 4 SEs would also be a possibility.
 

Frederic

Banned
So XSX fans, when do you think MS will be able to match the graphic fidelity shown below




Any ideas when or how? Do you think it will ever be matched?

Can XSX stream assets so they are available to the GPU to process in the milliseconds and fractions of a frame necessary (which, as explained by Tim Sweeney, is not the same thing as raw SSD speed)?

Any takers?


What? Match? You mean surpass it, right? I mean the demo was only 1440p, so XSX should be able to output it at 4K native.
I mean, even notebooks can run it easily:


I mean, Epic didn't even give any details; they didn't tell us how much bandwidth the demo actually used, etc. Why not? Because it's not as much as they want us to believe, maybe?
 

Frederic

Banned
There will be no difference. Most of the SSD talk is PR gibberish. What matters for games when they load stuff is random read, not sequential read. And regardless of the controller or disk being used, as long as someone has an SSD it will work mostly the same. There will be barely any difference compared to a normal SATA SSD, let alone an NVMe PCIe Gen3 SSD.

Moreover, the GPU reading stuff directly from the SSD is even bigger PR gibberish. Games operate at GIGABYTES per second, and not a few GIGABYTES but 100s of GIGABYTES per second. So you don't have to be an Einstein to do 2+2 and see that it is not 5.

The SSD will be a huge improvement over an HDD, but most of the Sweeney/Cerny talk and the rest is just gibberish. "But how can devs lie?!" someone might say to that. Look at technology and PR and you will see everyone lying non-stop. "OUR NEW NVMe is twice as fast!" says Kingston; yes, in sequential read, but in random read it is maybe 2-5% faster. "PS3 has the ability to run games at 120 Hz 1080p!!" (if you run simple 3D objects, in small print somewhere).

The reason random read didn't improve much comes down to physics and technology. SSDs don't have seek time anymore (they do, but it is super small) and you can't really improve it a lot unless you completely change the technology involved. Intel Optane has some improvements there, but they are still small.

The only interesting and "seems to be true" tech mentioned is data compression and adjusting what is loaded first. But neither of those is a really huge issue to begin with. If you create any streaming engine you already do it, since you don't want far-away elements to load before those close to the player.

Data compression is already being used right now in everything. No one uses bitmaps for textures, no one uses WAVs for sound files. Everything is compressed and decompressed all the time. Hardware-level decompression is great; for example, Win10 already supports compression via zlib and you can compress your games and files (even Windows files) like that and use them without explicitly decompressing them.

Speaking of Cerny and compression: this is how PR works. Cerny talked about going from 5-8 GB/s to 22 GB/s. This is what non-technical people see. But technical people know that you can't simply stack the decompression rate on top of the transfer rate, because compression and decompression are a highly variable process. Some files can be compressed by as much as 99% and some by as little as 2%, because they are already compressed. If you take a video file and compress it via Kraken or zlib it will still be the same size, because video files are already super-compressed by default. So that 22 GB/s goes back down to 5-8 GB/s, and that is assuming the peak can be sustained and the file itself is nice for sequential read/write.

yup, even a notebook can run it easily:

but we will find out as soon as the games arrive. People will be disappointed
 

Panajev2001a

GAF's Pleasant Genius
Just because AMD gave their 2-SE design 64 ROPs does not mean that all 2-SE designs have to have 64 ROPs, since they are modular units.

We also don't know that Series X has 2 SEs, because 4 SEs would also be a possibility.

Their architecture white papers talk about 4 RBs per Shader Array (each RB being the equivalent of 4 NV ROPs) and two Shader Arrays per Shader Engine, and I have yet to see any GPU defying that (even Sony got more than they needed with PS4 Pro, as they doubled up GPU-wise so to speak).

I think it is more likely that people wish for 4 Shader Engines with all the extra HW out of an "it must be a monster, it cannot just be ~18% faster GPU-wise" feeling, or a "well, maybe... bet you cannot prove it does not" argument (as if those making the claim could prove it does instead...).
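
For what it's worth, the arithmetic behind that layout (the standard RDNA ratio of 4 RBs per shader array, 4 ROPs per RB and 2 shader arrays per shader engine; whether XSX actually keeps that ratio is exactly what is being debated here):

def rop_count(shader_engines, arrays_per_se=2, rbs_per_array=4, rops_per_rb=4):
    # Standard RDNA1 ratio as described in AMD's whitepaper
    return shader_engines * arrays_per_se * rbs_per_array * rops_per_rb

print(rop_count(2))  # 64 ROPs for a 2-SE part like the 5700 XT
print(rop_count(4))  # 128 ROPs if XSX were a 4-SE design with the same ratio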
 

Panajev2001a

GAF's Pleasant Genius
yup, even a notebook can run it easily:

but we will find out as soon as the games arrive. People will be disappointed

Back to the video of the PS5 demo played on a laptop or some other conspiracy that only native Chinese speakers that are not PS5 fans can properly decode?
 

MCplayer

Member
3 GB of SX RAM is dedicated to the OS.
So, in fact...

(10*560 + 3*336) = 6608 vs 16*448 = 7168

I find it particularly pathetic that you try to solve for a single GB of RAM. Why try to spin this?
The fact is that PS5 comes out ahead of Series X by about 8% on this total.

We are NOT talking about the bandwidth of one unit, we are talking about the total bandwidth of all the RAM.

If you can't understand or are arguing in bad faith, that is on you.

Understand, longdi & Trueblakjedi?
Don't forget Xbox has a bigger bus width, which evens out both platforms' RAM; plus OBVIOUSLY PS5 has to dedicate RAM to other resources like the OS etc., they just didn't mention it yet.
 

Rien

Jelly Belly
I actually preferred Xbox's highly technical reveal... 'it's a monster', 'it eats monsters for breakfast'. hahaha

Yeah, that was kinda embarrassing I think. Xbox can often come off as that 'What's up, fellow children' meme. Xbox Series X sounds try-hard as well. Same as renaming Gears of War to simply Gears. Many of these little things. Dunno, just something that makes me cringe. Lol

That said, no bad word about the brand tho. Love their consoles and always have their newest consoles day one.
 

Panajev2001a

GAF's Pleasant Genius
Speaking of Cerny and compression. This is how PR works. Cerny talked about going from 5-8GB/s to 22GB/s

He never said that. He said: “the SSD has 5.5 GB/s throughput and, with average Kraken compression rates, it should probably transfer an equivalent of 8-9 GB/s”.
He then added that the physical limit of the Kraken decompressor, in ideal, edge-case-like scenarios, is a maximum inflation rate of 22 GB/s... technical, no gibberish nor wild PR statements.
 

geordiemp

Member
There will be no difference. Most of the SSD talk is PR gibberish. What matters for games when they load stuff is random read, not sequential read. And regardless of the controller or disk being used, as long as someone has an SSD it will work mostly the same. There will be barely any difference compared to a normal SATA SSD, let alone an NVMe PCIe Gen3 SSD.

Moreover, the GPU reading stuff directly from the SSD is even bigger PR gibberish. Games operate at GIGABYTES per second, and not a few GIGABYTES but 100s of GIGABYTES per second. So you don't have to be an Einstein to do 2+2 and see that it is not 5.

The SSD will be a huge improvement over an HDD, but most of the Sweeney/Cerny talk and the rest is just gibberish. "But how can devs lie?!" someone might say to that. Look at technology and PR and you will see everyone lying non-stop. "OUR NEW NVMe is twice as fast!" says Kingston; yes, in sequential read, but in random read it is maybe 2-5% faster. "PS3 has the ability to run games at 120 Hz 1080p!!" (if you run simple 3D objects, in small print somewhere).

The reason random read didn't improve much comes down to physics and technology. SSDs don't have seek time anymore (they do, but it is super small) and you can't really improve it a lot unless you completely change the technology involved. Intel Optane has some improvements there, but they are still small.

The only interesting and "seems to be true" tech mentioned is data compression and adjusting what is loaded first. But neither of those is a really huge issue to begin with. If you create any streaming engine you already do it, since you don't want far-away elements to load before those close to the player.

Data compression is already being used right now in everything. No one uses bitmaps for textures, no one uses WAVs for sound files. Everything is compressed and decompressed all the time. Hardware-level decompression is great; for example, Win10 already supports compression via zlib and you can compress your games and files (even Windows files) like that and use them without explicitly decompressing them.

Speaking of Cerny and compression: this is how PR works. Cerny talked about going from 5-8 GB/s to 22 GB/s. This is what non-technical people see. But technical people know that you can't simply stack the decompression rate on top of the transfer rate, because compression and decompression are a highly variable process. Some files can be compressed by as much as 99% and some by as little as 2%, because they are already compressed. If you take a video file and compress it via Kraken or zlib it will still be the same size, because video files are already super-compressed by default. So that 22 GB/s goes back down to 5-8 GB/s, and that is assuming the peak can be sustained and the file itself is nice for sequential read/write.

What you just typed was a load of PR TALK. Technical people... are you one? LOL :messenger_beaming:

Is the below just PR to you? We have seen it on PS5; it's too LATE to debunk it as PR, it happened and we have all seen it.



Can you link us to what XSX can do?

 

Dontero

Banned
He never said that. He said: “the SSD has 5.5 GB/s throughput and, with average Kraken compression rates, it should probably transfer an equivalent of 8-9 GB/s”.
He then added that the physical limit of the Kraken decompressor, in ideal and edge case like scenarios, has a maximum inflation rate of 22 GB/s... technical, no gibberish nor wild PR statements.

He presented it as if those speeds are normally achievable. In my book this is PR gibberish. Find me any file that can be compressed at a 99% level, let alone something like 20%. Other than text and scripts, I don't think any exist.

I know because I extensively use zlib compression on my PC with Win10. If people are interested they can try it themselves.

yup, even a notebook can run it easily:

I don't think that was true. I think it was just a video of the demo running on a laptop. Sweeney even confirmed it, I think.

I actually preferred Xbox's highly technical reveal... 'it's a monster', 'it eats monsters for breakfast'. hahaha

Saying "it is the most powerful console" when it is true is OK in my book. Selling obscure features as game-changing features is not. Cerny already did something like this with PS4 and its "Garlic bus".
 

Shmunter

Member
Back to the video of the PS5 demo played on a laptop, or some other conspiracy that only native Chinese speakers that are not PS5 fans can properly decode?
Yeah, it's surprisingly far-fetched that a gaming laptop without RDNA2 and with basic current-gen I/O ran the same demo at equivalent fidelity. Come on, it's almost conspiracy levels.

Cerny always says UE5 will of course run great on XSX. But he never says a word about the demo doing so. There's a reason for it: the demo tapped into the next-gen I/O at 5.5 gig. Doesn't mean other UE5 projects won't look amazeballs on XSX, just not that demo, which was designed to max out streaming, in particular the flyover.
 

Bryank75

Banned
Yeah that was kinda embarrising I think. Xbox can often come off as that ‘Whats up fellow children’ meme. Xbox Series X sounds try hard as well. Same as renaming Gears of War just simply Gears. Many of these little things. Dunno, just something that makes me cringe. Lol

Further no bad word on the brand tho. Love their consoles and always have their newest consoles day one.
Xbox has done a great job in most areas of the console design, I just have one or two issues... but then I have issues with the PS5 too. It's refreshing to get such an honest reply... thank you!
 

FranXico

Member
He presented it as if those speeds are normally achievable. In my book this is PR gibberish. Find me any file that can be compressed at a 99% level, let alone something like 20%. Other than text and scripts, I don't think any exist.

I know because I extensively use zlib compression on my PC with Win10. If people are interested they can try it themselves.
They presented a believable ballpark number of 8 GB/s with Kraken as opposed to 5.5 GB/s raw, which works out to about 30% compression. 22 GB/s is indeed for stuff like text files; it's just not gonna happen. Kraken manages to compress better than zlib, does it not?

If you find those numbers dishonest, what would you say to going from 2.4 GB/s raw to 4.8 GB/s thanks to a whopping 50% compression ratio? That would sound like an even more outrageous lie to me.
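
A quick sanity check on those ratios (using the figures as quoted in this thread, not independent measurements): the average compression each console's headline "effective" number implies.

def implied_compression(raw_gbps, effective_gbps):
    # Fraction by which data must shrink on average to reach the effective rate
    return 1 - raw_gbps / effective_gbps

print(implied_compression(5.5, 8.0))  # ~0.31 -> PS5's ~8 GB/s implies ~30% average compression
print(implied_compression(2.4, 4.8))  # 0.50  -> XSX's 4.8 GB/s implies 50% average compression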
 

Md Ray

Member
Not possible. 4K is 2.25x the pixels of 1440p.

When the GPU difference is a minimum of 16% and maximum of 26%, it simply isn’t enough to facilitate that large of a resolution jump. Best case scenario is 1800p while worst case is 1670p.
Even 1800p would be too much for XSX GPU to handle if PS5's resolution is 1440p. I'd say the best-case scenario is 1620p for XSX.
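
Rough pixel math behind these estimates, assuming resolution scales directly with GPU throughput at a fixed 16:9 aspect ratio (a simplification; real games rarely scale this cleanly):

def pixels(w, h):
    return w * h

print(pixels(3840, 2160) / pixels(2560, 1440))  # 2.25x the pixels going from 1440p to 4K

# If PS5 renders at 1440p and XSX has ~16-26% more GPU throughput:
for advantage in (1.16, 1.26):
    height = round(1440 * advantage ** 0.5)  # vertical resolution grows with sqrt(pixel budget)
    print(f"{height}p")  # roughly 1551p to 1616p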
 

Bryank75

Banned
He presented it as if those speeds are normally achievable. In my book this is PR gibberish. Find me any file that can be compressed at a 99% level, let alone something like 20%. Other than text and scripts, I don't think any exist.

I know because I extensively use zlib compression on my PC with Win10. If people are interested they can try it themselves.



I don't think that was true. I think it was just a video of the demo running on a laptop. Sweeney even confirmed it, I think.



Saying "it is the most powerful console" when it is true is OK in my book. Selling obscure features as game-changing features is not. Cerny already did something like this with PS4 and its "Garlic bus".
I'd take Cerny, Sweeney, Epic VP of tech, Cherno and countless industry professionals over your biased interpretation.... but thanks for the laugh!
 

Vaztu

Member
He presented it as if those speeds are normally achievable. In my book this is PR gibberish. Find me any file that can be compressed at a 99% level, let alone something like 20%. Other than text and scripts, I don't think any exist.

I know because I extensively use zlib compression on my PC with Win10. If people are interested they can try it themselves.

You clearly have not watched Road to PS5. Cerny just mentions in passing that the unit itself is capable of 22 GB/s.

If you want PR gibberish, why has MS stated a fixed figure of 4.8 GB/s after decompression? Normally it's a range, never a fixed number. What is MS's range? 3-4.8 GB/s, or what?
 

Marlenus

Member
Their architecture white papers talk about 4 RBs per Shader Array (each RB being the equivalent of 4 NV ROPs) and two Shader Arrays per Shader Engine, and I have yet to see any GPU defying that (even Sony got more than they needed with PS4 Pro, as they doubled up GPU-wise so to speak).

I think it is more likely that people wish for 4 Shader Engines with all the extra HW out of an "it must be a monster, it cannot just be ~18% faster GPU-wise" feeling, or a "well, maybe... bet you cannot prove it does not" argument (as if those making the claim could prove it does instead...).

The RDNA whitepaper talks more specifically about the 5700XT.

The RX 5700 XT is organized into several main blocks that are all tied together using AMD’s
Infinity Fabric. The command processor and PCI Express interface connect the GPU to the
outside world and control the assorted functions. The two shader engines house all the
programmable compute resources and some of the dedicated graphics hardware. Each of the
two shader engines include two shader arrays, which comprise of the new dual compute units,
a shared graphics L1 cache, a primitive unit, a rasterizer, and four render backends (RBs). In
addition, the GPU includes dedicated logic for multimedia and display processing. Access to
memory is routed via the partitioned L2 cache and memory controllers.

Nothing there indicates that the ratio of units can change, and before this they state:

Graphics processors (GPUs) built on the RDNA architecture will span from power-efficient
notebooks and smartphones to some of the world’s largest supercomputers. To accommodate
so many different scenarios, the overall system architecture is designed for extreme scalability
while boosting performance over the previous generations.

This would indicate that the exact ratio of the various modules can be varied based upon the use case.
 

THE:MILKMAN

Member
They presented a believable ballpark number of 8 GB/s with Kraken as opposed to 5.5 GB/s raw, which works out to about 30% compression. 22 GB/s is indeed for stuff like text files; it's just not gonna happen. Kraken manages to compress better than zlib, does it not?

If you find those numbers dishonest, what would you say to going from 2.4 GB/s raw to 4.8 GB/s thanks to a whopping 50% compression ratio? That would sound like an even more outrageous lie to me.

IIRC Cerny was also very generous to PS4 when comparing how much time PS4 takes to load 1GB of data. He allowed PS4 to have 50% compression. He was pretty conservative with PS5 as he gave it 'only' ~35% (8.5GB/s) even though Kraken is ~10% better than Zlib.
 

Panajev2001a

GAF's Pleasant Genius
He presented it as if those speeds are normally achievable. In my book this is PR gibberish. Find me any file that can be compressed at a 99% level, let alone something like 20%. Other than text and scripts, I don't think any exist.

Take it up with the Kraken developers at RAD Game Tools ( http://www.radgametools.com/oodlekraken.htm ), or with MS architects talking about BCPack + Zlib achieving even higher compression rates on average (2.4 GB/s to 4.8 GB/s), or with game developer talks at GDC ( around 14'35'' in), etc... this did not seem controversial to anyone...
 

Panajev2001a

GAF's Pleasant Genius
The RDNA whitepaper talks more specifically about the 5700XT.



Nothing there indicates that the ratio of units can change and before this they state.



This would indicate that the exact ratio of the various modules can be varied based upon the use case.

It used that chip as an example, yes, but you are assuming scalability at that granularity because the argument you are making needs it.
All can be done to a point (you could have wider vectors, you could add more specialised HW, etc... it does not mean it is all wired and balanced for it)... MS is not willing to spend infinite money on it despite what people wanted to believe (else all memory would run at 560 GB/s and the console would only cost $299...).
 

longdi

Banned
Why are you ignoring this...? longdi

Sorry, I was on mobile.

Imo, the difference, as I said before, is akin to: PS5 = 2070S + PCIe 4 SSD, Series X = 2080 Ti + QLC PCIe 3 SSD.
Granted, this comparison implies a bit bigger gap, but 2070S vs 2080S is much smaller than what I feel it is.
Maybe a 2080 Ti without Nvidia boost on.

Mainly, I'm still suspicious about the 2.23 GHz claims.
 

Bryank75

Banned
Sorry, I was on mobile.

Imo, the difference, as I said before, is akin to: PS5 = 2070S + PCIe 4 SSD, Series X = 2080 Ti + QLC PCIe 3 SSD.
Granted, this comparison implies a bit bigger gap, but 2070S vs 2080S is much smaller than what I feel it is.
Maybe a 2080 Ti without Nvidia boost on.

Mainly, I'm still suspicious about the 2.23 GHz claims.

[image: AkCsUJI.jpg]
 

longdi

Banned
3 GB of SX RAM is dedicated to the OS.
So, in fact...

(10*560 + 3*336) = 6608 vs 16*448 = 7168

I find it particularly pathetic that you try to solve for a single GB of RAM. Why try to spin this?
The fact is that PS5 comes out ahead of Series X by about 8% on this total.

We are NOT talking about the bandwidth of one unit, we are talking about the total bandwidth of all the RAM.

If you can't understand or are arguing in bad faith, that is on you.

Understand, longdi & Trueblakjedi?

Strange that MS's openness with figures is deflected back at them. :messenger_grinning_sweat:
Let's see Sony's figures next.
But I won't worry about RAM capacity. It has always been the case that more is reserved at first and opened up as things mature. It is nothing new really.
 

Marlenus

Member
It used that chip as an example yes, but you are assuming scalability at that granularity sense because the argument you are making needs it.
All can be done to a point (you could have wider vectors, you could add more specialised HW, etc... it does not mean it is all wired and balanced for it)... MS is not willing to spend infinite money on it despite what people wanted to believe (else all memory would run at 560 GB/s and the console would only cost $299...).

GCN had variable RBs per shader engine. Polaris had 2 per SE and Vega had 4 per SE, so it would be a backwards step for RDNA to lose this flexibility.

Like I said several posts back, I don't know how many ROPs it will have, but any number in the 64 to 96 range seems entirely reasonable. Assuming it has 64 seems premature when Series X has an entirely unique GPU, with a shader count and memory controller count we have not seen from an RDNA architecture before.
 

Panajev2001a

GAF's Pleasant Genius
GCN had variable RBs per shader engine. Polaris had 2 per SE and Vega had 4 per SE, so it would be a backwards step for RDNA to lose this flexibility.

Like I said several posts back, I don't know how many ROPs it will have, but any number in the 64 to 96 range seems entirely reasonable. Assuming it has 64 seems premature when Series X has an entirely unique GPU, with a shader count and memory controller count we have not seen from an RDNA architecture before.

Polaris, Vega, etc... are different GCN revisions, like RDNA1 and RDNA2. It is possible, but highly unlikely, as that ratio is part of tuning that family of GPUs; but then again someone could believe in RDNA4 features premiering in PS5 too, and it would be possible but unlikely.
 

longdi

Banned
For sure AMD and Sony worked on this power control together but SmartShift, as I understand it, shifts power to the GPU from the CPU if the current load allows and vice versa. The knock-on effect is one clocks down and the other clocks up but this is a good thing. Maybe we could call the PS5 version SmartShift Turbo or V2.0?

I'm not sure where 'sustained' is coming from? Mark Cerny was clear clocks are variable or continuous boost but will spend "most of the time at or close to" the quoted maximums. I'll take clocking down a little if it means more effective utilisation of the APU.

I will call it SmartShift with the PS games library. This is probably the collaboration Mark spoke of. Software library AI tuning. Nvidia would be jealous; so much games data could be used to train their DLSS.

'Sustained' came from Mark claiming the PS5 heat sink is so good they will never be thermally constrained. I'm skeptical AF, especially for a processor that runs at 2.23 GHz.

Meaning they just need to give the PS5 the best power supply the APU needs, and it can run 2.23 GHz sustained graphics if developers will it. 🤷‍♀️
In theory the power supply should be big enough to feed the CPU + GPU + other parts at their maximum capacity.

Zen 2 doesn't use too much power; I have it in my PC.
That leaves the GPU portion, which is why I'm questioning why the CPU needs to drop clocks to run the GPU at 2.23 GHz. Something is off, no?
 

Panajev2001a

GAF's Pleasant Genius
I will call it SmartShift with the PS games library. This is probably the collaboration Mark spoke of. Software library AI tuning. Nvidia would be jealous; so much games data could be used to train their DLSS.

'Sustained' came from Mark claiming the PS5 heat sink is so good they will never be thermally constrained. I'm skeptical AF, especially for a processor that runs at 2.23 GHz.

Meaning they just need to give the PS5 the best power supply the APU needs, and it can run 2.23 GHz sustained graphics if developers will it. 🤷‍♀️
In theory the power supply should be big enough to feed the CPU + GPU + other parts at their maximum capacity.

Zen 2 doesn't use too much power; I have it in my PC.
That leaves the GPU portion, which is why I'm questioning why the CPU needs to drop clocks to run the GPU at 2.23 GHz. Something is off, no?

Something is off indeed :rolleyes:... but let’s not distract from what you admitted is concern trolling essentially as you are spreading doubt and uncertainty (without much evidence) around something you are not interested in the least beside to put it down.
 

Marlenus

Member
Polaris, Vega, etc... are different GCN revisions, like RDNA1 and RDNA2. It is possible, but highly unlikely, as that ratio is part of tuning that family of GPUs; but then again someone could believe in RDNA4 features premiering in PS5 too, and it would be possible but unlikely.

Vega 11 in Raven Ridge and Picasso has a single SE with 2 RBs (8 ROPs), yet full-fat Vega in the 64 or Radeon VII has 4 RBs per SE for a total of 64 ROPs.
 