
DF Direct: The PS5

Calm down man, come on :messenger_pensive:

WHY and HOW am I biased? This is not propaganda, this is just discussing tech. I even say in my video the rates ARE variable, and I even show that this is already the case for PC hardware, it is not new. But to help clear it up: the PS5 clocks ARE variable as required by the load.

As for the second part, I am truly lost.
I am completely calm. I am also betting that there is absolutely no way the PS5 OS will only take 0.5 GB of RAM because of the SSD. Your comments in the video and the tone you use speak for themselves. You imply that good Sony made a console (with ...variable clock speeds) for developers, while big bad MS made an ultra-powerful console for marketing reasons. Everyone can watch the video, listen to what you are saying and draw their own conclusions.

As for the second part, if you are lost then you have never read either of the two forums.
 
Last edited:

ethomaz

Banned
WTF?

XSX has twice the bandwidth of the PS5;
896 GB/s vs 448 GB/s.
Is that a joke? Because it makes absolutely no sense lol

Xbox memory is 10GB at 560GB/s and 6GB at 336GB/s.
It is only a single bus/pool; you can't access it simultaneously to combine the bandwidths lol

In the best case you have a theoretical 560GB/s memory access... in the worst case a theoretical 336GB/s memory access.

896GB/s is not even a good joke.
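
For anyone who wants to sanity-check why the two figures can't just be added, here's a quick back-of-envelope in Python. The 560 and 336 figures are the ones from the Digital Foundry quote later in the thread; everything else is just illustration.

```python
# Back-of-envelope: why 560 + 336 GB/s does not give 896 GB/s.
# Both figures describe the SAME shared GDDR6 bus: 336 GB/s is what you
# get when an access lands in the slow 6 GB region, 560 GB/s when it
# lands in the fast 10 GB region. They never stack.

FAST_REGION_BW = 560   # GB/s, 10 GB "GPU optimal" region
SLOW_REGION_BW = 336   # GB/s, 6 GB "standard" region

naive_sum = FAST_REGION_BW + SLOW_REGION_BW      # 896 -- not achievable
peak      = max(FAST_REGION_BW, SLOW_REGION_BW)  # 560 -- best case
floor     = min(FAST_REGION_BW, SLOW_REGION_BW)  # 336 -- worst case

print(f"naive sum: {naive_sum} GB/s (never happens on a shared bus)")
print(f"peak: {peak} GB/s, floor: {floor} GB/s")
```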
 
Last edited:

Shmunter

Member
Actually, NX Gamer, can I request your kind input:

The OS RAM allocation being large may actually be, in large part, the gameplay recording from the game DVR functions. Do you think it would be best kept in RAM, or recorded directly to SSD, or is that too punishing on the SSD over time? I suspect it may need to stay in RAM and a couple of gigs are unavoidable.


Also, we need a video on the potential RT performance of both systems, stat! 😇
 

Sosokrates

Report me if I continue to console war
I love the entire "You are biased and here is why" projection I have seen a great deal of today, all this feigned "I am not bothered about any of this, he is wrong, no details on why, but hey, I am too good for that and also, I am so not invested" attitude. Why are you making anything personal here?

The "For Developers" comment is, as I do explain in the video, about getting the buy-in from the team to help support the product. Sony know they need 3rd party as much as 1st party. MS have done a great job and they are pushing things on, but I have spoken with enough devs in detail on the subject, and Sony was in a much better place on the PS4 with a better piece of hardware, SDK and support. This carries a great deal of weight and is the factor I am discussing here.



I have the ego? I actually find that almost all people who know me and watch me think the opposite, but I am happy to discuss this: why do I have an ego?

And what am I "getting wrong" or stretching here so you can enlighten me?

You first say the XSX is the stronger platform at the start of the video, and then go on to say the XSX has all these bottlenecks, which is purely your conjecture, and you mention no potential bottlenecks of the PS5. That seems rather biased when you consider the potential PS5 bottlenecks I listed in my prior post.

Also, you say Sony don't have a goal of getting good headlines. Of course they do: if the XSX was 9TF, don't you think Sony would be screaming from the rooftops about it, like they did with the PS4?

You say there are two strategies at play, wide and fast. Well, the XSX is wide and fast: 1825MHz is very fast for a console, it has not been done before, and Sony's 2.23GHz is just insane for a console; there is a reason why consoles have not traditionally gone with very high clocks.
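
To put rough numbers on "wide vs fast" (the clock speeds are the ones above; the 52 vs 36 CU counts are the widely reported figures, and 64 shaders x 2 ops per clock is just the usual way these TFLOPS numbers get quoted, so treat this as a ballpark only):

```python
# Rough FP32 TFLOPS: "wide" (more CUs, lower clock) vs "fast" (fewer CUs, higher clock).
def tflops(cus, clock_ghz, shaders_per_cu=64, ops_per_clock=2):
    return cus * shaders_per_cu * ops_per_clock * clock_ghz / 1000.0

print(f"XSX: 52 CUs @ 1.825 GHz -> {tflops(52, 1.825):.2f} TF")  # ~12.15 TF
print(f"PS5: 36 CUs @ 2.23 GHz  -> {tflops(36, 2.23):.2f} TF")   # ~10.28 TF at the cap
```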

The flaw in your explanation of the PS5's faster SSD solution is that it will be limited to 1st party games, because devs will not be able to design a game around streaming 8-9GB/s of compressed data directly from the SSD: 3rd party games have to work on the PC and XSX, and even if the PC catches up to the PS5's SSD spec, a large portion of PC gamers will still be using slower SSDs, as will the XSX.
Also, this is not taking into account the real-world performance of the PS5's SSD solution once you factor in the extra 112GB/s of RAM bandwidth and the other innovations MS have made for the XSX.

We don't know how limited streaming assets directly from the SSD will be. While 8GB/s is a huge increase over current gen, there is still a vast delta between that and GDDR6 speeds. The thing about Cerny's explanation that did not make sense is that he made it sound like his SSD will work exactly like RAM, which can't be the case. So what does it mean for games, 4GB worth of textures per 0.5 seconds of gameplay? How would that compare to 2GB worth of textures per 0.5 seconds of gameplay?
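
To make the "per 0.5 seconds" framing concrete, the arithmetic is trivial; here it is in Python. The 8-9GB/s compressed figure is the one being discussed above, the 4GB/s row is only there to show what half that throughput buys, and the per-frame column just divides the same number down.

```python
# Data delivered per half second and per 60fps frame at a given SSD
# throughput, ignoring latency, access patterns and decompression cost.
def streamed(gb_per_s, window_s=0.5, fps=60):
    per_window_gb = gb_per_s * window_s        # GB per 0.5 s window
    per_frame_mb  = gb_per_s * 1000.0 / fps    # MB per frame
    return per_window_gb, per_frame_mb

for rate in (9.0, 8.0, 4.0):                   # GB/s
    window, frame = streamed(rate)
    print(f"{rate} GB/s -> {window:.1f} GB per 0.5 s, ~{frame:.0f} MB per 60fps frame")
```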

I think people jumping on possible scenarios the PS5 SSD might enable, without really knowing enough about the technology and how Microsoft's solution compares, are going on imagination rather than factual data.
 
Last edited:
People should just go to Tom's Hardware (the best tech site for just about every piece of tech), as unbiased and indifferent to consoles as they come, and read the comparison there.
 

McRazzle

Member
Did you just add 560 + 336 GB/s to make 896GB/s? Please explain that logic. Also, I was talking about I/O bandwidth, not memory.
And both Xbox and PS5 have the same RAM. I don't know where you're pulling all this from.

Also, please explain what you mean by the PS5 sharing it with the CPU and how it's different from the Xbox.

It's possible for the GPU to access the 10 GB of RAM and the CPU to access 3.5 GB of the 6 GB of RAM, with 2.5 GB reserved for the OS.

Why do you think this? Both use the same GDDR6 RAM from what I saw.

My mistake, it seems they're both using 14 Gbps RAM.

But it doesn't.



And the PS5's SSD isn't? Genuinely curious...

You will be able to replace the PS5's internal SSD.
from Digital Foundry:
"The internal SSD can be replaced with a bigger hard drive with an off-the-shelf drive - meaning NVMe PC drives will work on your console."
And of course Mark Cerny is an idiot who wouldn't take into account how the drive he himself designed gets hot, and it would throttle to nullify that 5GB/s he worked so hard to achieve, I suppose.

No, but he was giving the maximum sequential speed of the SSD, which is irrelevant for games as random access is what matters in gaming.
All evidence points to the opposite so far.

What evidence?
 

-kb-

Member
It's possible for the GPU to access the 10 GB of ram and the CPU to access 3.5 GB of the 6 GB of ram. with 2.5 GB reserved for the OS.



My mistake,it seems they're both using 14 Gbps ram.



You will be able to replace the PS5's internal SSD.
from Digital Foundry:
"The internal SSD can be replaced with a bigger hard drive with an off-the-shelf drive - meaning NVMe PC drives will work on your console."


No, but he was giving the maximum sequential speed of the SSD, which is irrelevant for games as random access is what matters in gaming.


What evidence?

You cannot add the two memory speeds on the XSX together because they are from the same memory pool; the slower figure is what you get when you access the larger chips. The max peak memory bandwidth is 560GB/s, but it'll likely be lower than this in practice: when the CPU has to access the same pool at 336GB/s, the GPU cannot access the memory at the same time.
 

McRazzle

Member
Is that a joke? Because it makes absolutely no sense lol

Xbox memory is 10GB at 560GB/s and 6GB at 336GB/s.
It is only a single bus/pool; you can't access it simultaneously to combine the bandwidths lol

In the best case you have a theoretical 560GB/s memory access... in the worst case a theoretical 336GB/s memory access.

896GB/s is not even a good joke.
It's possible for the GPU to access the 10 GB of memory and CPU to access 3.5 GB of the slower 6 GB of memory, as 2.5 GB is reserved for the OS.
The GPU can be utilizing 560 GB/s of bandwidth and the CPU can be using 336 GB/s.
So, the XSX doesn't have to share its bandwidth with the CPU, unlike the PS5.
It does not have to be partitioned that way as developers can use it how they want to, but that's likely what most devs will do.


from Digital Foundry:

"Microsoft's solution for the memory sub-system saw it deliver a curious 320-bit interface, with ten 14gbps GDDR6 modules on the mainboard - six 2GB and four 1GB chips. How this all splits out for the developer is fascinating.

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

In terms of how the memory is allocated, games get a total of 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory. This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell. From Microsoft's perspective, it is still a unified memory system, even if performance can vary. "In conversations with developers, it's typically easy for games to more than fill up their standard memory quota with CPU, audio data, stack data, and executable data, script data, and developers like such a trade-off when it gives them more potential bandwidth," says Goossen. "
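
For what it's worth, the capacity numbers in that quote add up like this. The detail that the fast region takes 1GB from every chip is how DF and others describe the striping; the rest comes straight from the quote.

```python
# XSX memory capacity math: six 2GB chips + four 1GB chips.
chips_gb = [2] * 6 + [1] * 4        # capacity of each GDDR6 module
total    = sum(chips_gb)            # 16 GB physical

fast_region = len(chips_gb) * 1     # 1 GB striped from every chip -> 10 GB "GPU optimal"
slow_region = total - fast_region   # 6 GB, the upper half of the 2GB chips

os_reserved = 2.5
for_games   = total - os_reserved   # 13.5 GB available to games

print(total, fast_region, slow_region, for_games)   # 16 10 6 13.5
```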
 

ethomaz

Banned
It's possible for the GPU to access the 10 GB of memory and CPU to access 3.5 GB of the slower 6 GB of memory, as 2.5 GB is reserved for the OS.
The GPU can be utilizing 560 GB/s of bandwidth and the CPU can be using 336 GB/s.
So, the XSX doesn't have to share it's bandwidth with the CPU unlike the PS5.
It does not have to be partitioned that way as developers can use it how they want to, but that's likely what most devs will do.


from Digital Foundry:

"Microsoft's solution for the memory sub-system saw it deliver a curious 320-bit interface, with ten 14gbps GDDR6 modules on the mainboard - six 2GB and four 1GB chips. How this all splits out for the developer is fascinating.

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

In terms of how the memory is allocated, games get a total of 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory. This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell. From Microsoft's perspective, it is still a unified memory system, even if performance can vary. "In conversations with developers, it's typically easy for games to more than fill up their standard memory quota with CPU, audio data, stack data, and executable data, script data, and developers like such a trade-off when it gives them more potential bandwidth," says Goossen. "
It is only a single 320-bit bus... you can't access more than that at the same time lol

You are at max 560GB/s even if the CPU is using one part of the bus and the GPU is using another part of the bus simultaneously.

Your own quote says that... it is still a unified memory pool.
The CPU and GPU share the same bus, like on the PS5.

The difference is that on Xbox they used memory chips of different densities, and the 2GB ones have a lower access speed for half of their capacity.

While Sony uses chips of the same density, which allows the same access speed across all memory chips.
 
Last edited:

DerFuggler

Member
I'll be honest... the most confusing thing about Mark Cerny's big reveal was the fact that he's got a VAIO laptop. Sony sold VAIO off about 5 years ago. Why's this dude shillin' for rent off some divestment from the last decade? What gives, my dudes?

But seriously, it seems like Sony has taken the Mercurial approach while Microsoft has taken the path of power. Speed overcomes strength at a certain threshold but only time will tell who will be the victor. Exciting times for console gaming! I really hope some of these advances carry over to PC gaming.
 

-kb-

Member
It's possible for the GPU to access the 10 GB of memory and CPU to access 3.5 GB of the slower 6 GB of memory, as 2.5 GB is reserved for the OS.
The GPU can be utilizing 560 GB/s of bandwidth and the CPU can be using 336 GB/s.
So, the XSX doesn't have to share it's bandwidth with the CPU unlike the PS5.
It does not have to be partitioned that way as developers can use it how they want to, but that's likely what most devs will do.


from Digital Foundry:

"Microsoft's solution for the memory sub-system saw it deliver a curious 320-bit interface, with ten 14gbps GDDR6 modules on the mainboard - six 2GB and four 1GB chips. How this all splits out for the developer is fascinating.

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

In terms of how the memory is allocated, games get a total of 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory. This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell. From Microsoft's perspective, it is still a unified memory system, even if performance can vary. "In conversations with developers, it's typically easy for games to more than fill up their standard memory quota with CPU, audio data, stack data, and executable data, script data, and developers like such a trade-off when it gives them more potential bandwidth," says Goossen. "

You have misunderstood the presentation. It is a shared 320-bit bus that drops to 192 bits when accessing the higher-density chips, due to striping. When you access the faster 10GB of the memory space you get 560GB/s; when you access the slower 6GB of the memory space you get 336GB/s.

You cannot access these at the same time, and the CPU and GPU cannot access memory at the same time; it's a contended resource, not a split pool.
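
And the two bandwidth figures fall straight out of those bus widths. The 32-bit width per GDDR6 module is a standard detail; the 14Gbps data rate is from the DF quote above.

```python
# Bandwidth from bus width at 14 Gbps per pin: all ten chips (320-bit)
# vs only the six 2GB chips (192-bit) for the slow region.
def bandwidth_gbs(bus_bits, gbps_per_pin=14):
    return bus_bits * gbps_per_pin / 8   # bits per second across the bus -> GB/s

print(bandwidth_gbs(320))   # 560.0 GB/s, fast 10 GB region
print(bandwidth_gbs(192))   # 336.0 GB/s, slow 6 GB region
```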
 

McRazzle

Member
You have misunderstood the presentation. It is a shared 320-bit bus that drops to 192 bits when accessing the higher-density chips, due to striping. When you access the faster 10GB of the memory space you get 560GB/s; when you access the slower 6GB of the memory space you get 336GB/s.

You cannot access these at the same time, and the CPU and GPU cannot access memory at the same time; it's a contended resource, not a split pool.
How is the CPU accessing the slower GDDR6 memory reserved for the OS and front-end shell if they can't be accessed at the same time?
And why is 13.5 GB of RAM available for games, as Microsoft says?
 

-kb-

Member
How is the CPU accessing the slower GDDR6 memory reserved for the OS and front-end shell if they can't be accessed at the same time?
And why is 13.5 GB of RAM available for games, as Microsoft says?

Because it is physically the same pool of RAM. There's a single memory controller for both the fast and slow regions, and you cannot task the memory controller with reading two different areas at once. The 13.5GB is available for games because it's the total amount of memory minus the OS reservation.

The reason it's set up like this is to keep costs down, because Microsoft didn't want 20GB of GDDR6 to be part of the BOM.
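
A rough way to picture that contention: if the shared bus spends some fraction of its time servicing the slow region (or CPU traffic), the effective figure sits somewhere between 336 and 560. This is only a toy time-weighted average, not how the actual memory controller is specified.

```python
# Toy model: effective bandwidth when the shared bus splits its time
# between the fast (560 GB/s) and slow (336 GB/s) regions.
def effective_bw(frac_slow, fast=560.0, slow=336.0):
    return (1 - frac_slow) * fast + frac_slow * slow

for frac in (0.0, 0.1, 0.25, 0.5):
    print(f"{int(frac * 100):>3}% of bus time on slow region -> {effective_bw(frac):.0f} GB/s")
```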
 
Last edited:

Goliathy

Banned
That is what Cerny said. He also said he spent 2 years talking with devs to learn what they want from a new PlayStation. Any reason to think he lied?

Because he wants to sell the console? Also, what devs did he talk to? First-party devs?
Or any multiplatform game creators? I think any dev would prefer a LOCKED frequency instead of variable frequencies.
 


sendit

Member
Because he wants to sell the console? Also, what devs did he talk to? First-party devs?
Or any multiplatform game creators? I think any dev would prefer a LOCKED frequency instead of variable frequencies.

Why would you want a locked frequency when, say, a scene in a game doesn't need all the compute?
 

TLZ

Banned
Because he wants to sell the console? Also, what devs did he talk to? First-party devs?
Or any multiplatform game creators? I think any dev would prefer a LOCKED frequency instead of variable frequencies.
I have to listen to you. You must surely know better than the Lead PS Architect. Armchair analysts ftw! What more can you tell us?
 

Keihart

Member
I’m confused on a couple of points:
if the system can sustain the "boost" speeds... then it's not a "boost", it's just... the speed that the thing operates at. If he meant to explain that the system clocks down when under less load to drop power and heat, that's a fundamental difference from "boosting" when under load. Which is it?

Cerny explains that the CPU and GPU power draw dictates the clock speeds, and that this is a deterministic process. But he also says you can trade CPU power for GPU power. Does this mean that the system can theoretically go higher than the caps listed, where I can trade CPU clock for more GPU clock, or does this actually mean that it can’t sustain both cap clock speeds simultaneously?

36CU at higher clocks is designed to widen the pipe, so to speak, rather than offer up more pipes. However, this limits parallelism in this aspect of the hardware. In modern software trends, threading and parallelism are basically the only way to scale up efficiently. Why did Sony go against the grain, so to speak? The amount of parallelism in modern graphics programming needs as many pipes as it can get.

Clock to heat ratio is not linear; it’s exponential. AMD have had issues with this in their hardware with their consumer cards, where they ran much hotter than their nVidia counterparts. While I don’t doubt Sony’s thermal solution will be sufficient, the noise of the PS4 Pro is not really acceptable for a consumer grade appliance. What thermal solution can Sony offer up that would be power efficient, silent, and sufficient, for those boost clock speeds?

The audio engine is compared to, basically, sticking the entire PS4 CPU on the board and dedicating it to audio. Do developers have more control over this piece of the hardware, or is this a "hands off" area that uses the power independently to produce Sony's desired audio output? If developers can control it, can it only be used for audio? How does it interface with the rest of the machine?
Cerny even mentions that this audio hardware could be used to offload other tasks when it is not needed for processing audio. Similar tricks have been used by first-party studios on past PS consoles, so it's not a shocker, but it is cool that this was considered from the inception instead of being a hack for the situation.
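
On the quoted question about trading CPU power for GPU power, here's a toy sketch of the idea of a fixed shared power budget deciding the clocks. Every number and the linear power curve are invented purely for illustration; Sony has not published the actual model, only the 2.23GHz cap.

```python
# Toy model of a fixed, shared SoC power budget (SmartShift-style idea):
# power the CPU doesn't draw can push the GPU closer to its frequency cap.
# All numbers below are made up; only the 2230 MHz cap is a real figure,
# and linear power-vs-clock is an oversimplification.
TOTAL_BUDGET_W = 200           # hypothetical shared SoC budget
GPU_CAP_MHZ    = 2230          # PS5 GPU frequency cap
GPU_W_PER_MHZ  = 0.06          # invented conversion factor

def gpu_clock(cpu_power_w):
    gpu_power = TOTAL_BUDGET_W - cpu_power_w
    return min(GPU_CAP_MHZ, gpu_power / GPU_W_PER_MHZ)

for cpu_w in (40, 60, 80):     # light vs heavy CPU load
    print(f"CPU drawing {cpu_w} W -> GPU can run up to {gpu_clock(cpu_w):.0f} MHz")
```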
 

sinnergy

Member
Because he wants to sell the console? Also, what devs did he talk to? First-party devs?
Or any multiplatform game creators? I think any dev would prefer a LOCKED frequency instead of variable frequencies.
It's locked at 9.2; this variable BS is just for show and tell IMO. Not in the beginning, but 3 years in we might see some use of variable clocks.
 
Because he wants to sell the console? Also, what devs did he talk to? First-party devs?
Or any multiplatform game creators? I think any dev would prefer a LOCKED frequency instead of variable frequencies.
Just like Nintendo. Those are not our friends.

For the rest, do you have a reason not to trust Cerny? Does he have a track record of faking information?

Devs never use the full power of any hardware 100% of the time; I don't see what's wrong there. You should have called him Captain Obvious, because that's what it is: obvious.
 