
Next-Gen PS5 & XSX |OT| Console tEch threaD


geordiemp

Member
Hard case....

Please list all the contradictions point by point and we will find out who is right and who is wrong.

Why? Everything you posted is flat out wrong for an APU with shared bandwidth; it's not a GPU sharing bandwidth between pools, so where do you start?

The CPU needs to read memory to tell the GPU what to do, so there is constant contention all the time. A GPU sharing bandwidth on its own is totally different.
 

SamWeb

Member
Why? Everything you posted is flat out wrong for an APU with shared bandwidth; it's not a GPU sharing bandwidth between pools, so where do you start?

The CPU needs to read memory to tell the GPU what to do, so there is constant contention all the time. A GPU sharing bandwidth on its own is totally different.
Let's start with the fact that we never discussed total bandwidth usage between the CPU and GPU before this...
You just implied it, keeping that information in your head, while I was talking about sharing bandwidth between the two pools.
 
Last edited:

geordiemp

Member
Let's start with the fact that we never discussed total bandwidth usage between the CPU and GPU before this...
You just implied it, keeping that information in your head, while I was talking about sharing bandwidth between the two pools.

Go read the ResetEra thread on next gen; they're talking about relative L3 cache and CPU write-back and how it could also affect bandwidth, after the bandwidth discussion was put to bed. So XSX might have a better design yet, so don't sweat it yet.

I can't be arsed; you have your agenda and just don't want to learn.
 
Last edited:

SamWeb

Member
I mean, it's not really hard to comprehend.
Just imagine 10 or 20GB on the same 320-bit bus. Would that somehow magically be faster or slower just because the size of the ICs is different?
No, of course not.

That graphic is so stupid
For many people, representations that match their faith seem preferable and more believable.
 
It's much bigger than expected. PS4 vs Xbox One sales in the US are something like 53:47, and with this being a US poll it's pretty surprising.

1. It's an IGN poll... I mean, actual pollsters in politics have a hard time being accurate.

2. 90% of consumers don't actually know that both consoles' specs have been released and that the Xbox is more powerful.

3. No pricing yet.

4. No Lockhart news yet.


This poll is entirely based on brand loyalty. PS4 users voted PS5 and XB1 users voted Xbox, simple.
 

SamWeb

Member
Go read the ResetEra thread on next gen; they're talking about relative L3 cache and CPU write-back and how it could also affect bandwidth, after the bandwidth discussion was put to bed. So XSX might have a better design yet, so don't sweat it yet.

I can't be arsed; you have your agenda and just don't want to learn.
I do not need in-depth knowledge here and now. I do not need to defend a dissertation... In any case, I answered your complaints.
 
Last edited:

Lunatic_Gamer

Gold Member


60fps should be the standard this upcoming console generation. If a developer wants to push for eye-candy visuals at 30fps, at least give us the option to lower the resolution in order to achieve 60fps, aka a "performance mode".
 
Last edited:


60fps should be the standard this upcoming console generation. If a developer wants to push for eye-candy visuals at 30fps, at least give us the option to lower the resolution in order to achieve 60fps, aka a "performance mode".
It should, yes. Fluidity is the baseline of interactivity, not visual fidelity at this point.
 


60fps should be the standard this upcoming console generation. If a developer wants to push for eye-candy visuals at 30fps, at least give us the option to lower the resolution in order to achieve 60fps, aka a "performance mode".

Technically, I don't see any reason why every game can't have a 60fps mode come next-gen. Just drop the resolution and/or use dynamic resolution scaling. The CPU and GPU have enough grunt to make this a possibility. Even if devs had to drop down to 1080p, I doubt people would mind for the better frame rate. Not adding a 60fps mode is either straight-up laziness or limited development time to optimize for such a mode.

Devs at least had a legit excuse for not doing this on current-gen consoles: the weak-ass CPUs.
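For anyone wondering what dynamic resolution scaling actually does under the hood, here is a minimal sketch in Python, with made-up frame timings, thresholds and function names (my own illustration, not code from any real engine):

# Rough sketch: nudge the render resolution so GPU frame time stays inside a 60fps budget.
TARGET_MS = 1000.0 / 60.0            # 16.7 ms per frame for 60fps
NATIVE = (3840, 2160)                # assume a 4K output target

def next_resolution_scale(scale, last_frame_ms):
    """Adjust the resolution scale based on the last measured GPU frame time."""
    if last_frame_ms > TARGET_MS:            # over budget: render fewer pixels
        scale *= 0.95
    elif last_frame_ms < TARGET_MS * 0.85:   # comfortably under budget: claw sharpness back
        scale *= 1.02
    return min(1.0, max(0.5, scale))         # clamp between 50% and 100% of native

scale = 1.0
for frame_ms in (22.0, 20.5, 18.9, 17.2, 16.0, 15.1, 13.8):   # pretend GPU timings
    scale = next_resolution_scale(scale, frame_ms)
    w, h = int(NATIVE[0] * scale), int(NATIVE[1] * scale)
    print(f"{frame_ms:5.1f} ms -> render at {w}x{h} ({scale:.2f} of native)")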
 
Last edited:

01011001

Banned
Technically, I don't see any reason why every game can't have a 60fps mode come next-gen. Just drop the resolution and/or use dynamic resolution scaling. The CPU and GPU have enough grunt to make this a possibility. Even if devs had to drop down to 1080p, I doubt people would mind for the better frame rate. Not adding a 60fps mode is either straight-up laziness or limited development time to optimize for such a mode.

this ☝
 

01011001

Banned
Xbox wins, sorry Naughty Dog, you cannot do anything similar.... hahaha oh the irony


Well, it already looks like it has better gameplay than any Naughty Dog game of the last decade.
 

StreetsofBeige

Gold Member
Technically, I don't see any reason why every game can't have a 60fps mode come next-gen. Just drop the resolution and/or use dynamic resolution scaling. The CPU and GPU have enough grunt to make this a possibility. Even if devs had to drop down to 1080p, I doubt people would mind for the better frame rate. Not adding a 60fps mode is either straight-up laziness or limited development time to optimize for such a mode.

Devs at least had a legit excuse for not doing this on current-gen consoles: the weak-ass CPUs.
Because bumping a game up to 60 fps will make the textures worse, which will send every game designer at the studio haywire because gamers won't be able to notice the ingredients list on a rusty soup can on the ground.

Never mind a solid 60 fps. Some games can't even hold 30. We had PS1 3D polygon games at 30 in 1995. It's now 2020, and some games are still 30.

Funny how most console games don't have performance/res options (in some games you get 2-3 choices), yet on PC they have no problem finding the resources to make a million configs work, with 20-30 sliders to fiddle with so a gamer can adjust performance.

A console game might have visual and audio settings, yet they don't do anything for performance unless there are dedicated perf/res modes.
 
Last edited:
Because bumping a game up to 60 fps will make the textures worse, which will send every game designer at the studio haywire because gamers won't be able to notice the ingredients list on a rusty soup can on the ground.

Never mind a solid 60 fps. Some games can't even hold 30.

Funny how most console games don't have performance/res options (in some games you get 2-3 choices), yet on PC they have no problem finding the resources to make a million configs work, with 20-30 sliders to fiddle with so a gamer can adjust performance.

A console game might have visual and audio settings, yet they don't do anything for performance unless there are dedicated perf/res modes.

You don't have to make the textures much worse; just dropping the resolution gains huge performance. And if you are using a lower resolution anyway, then you can get by going from Ultra to High textures. It's not like a 60fps mode is gonna make it look like a Switch port.
 

PaintTinJr

Member
I don't know about consoles never hitting that peak performance. My GPU runs at 1950 MHz at all times even though Nvidia says its boost clock is only 1750 MHz. My Pro runs so hot there is no way devs like ND and SSM are letting precious clock cycles go free.

I think next gen is going to be very interesting, because even in the worst-case scenario where the audio chip and SSD don't help make up the resolution gap, shorter load times and better audio will give each and every PS5 port several distinct features and advantages.

This is the first time I'm hearing about the L2/L3 cache bandwidth advantage. Where was this mentioned? Also, is the faster rasterization going to make up a 17% gap in TFLOPS? The PS5 only has 22% higher clock speeds and lacks 44% of the CUs; I don't see how faster clocks on 16 fewer CUs will make a dent.

I suspect the 16 fewer CUs will be a moot point with the advent of RT, which I suspect favours higher clocks over a wider CU count.

Thinking about it, current-gen rasterization visuals are composited from many 2D deferred textures that are updated constantly for things like shadow maps and cube mapping, etc. These unit tasks are typically at resolutions (4096x4096) that exceed the final 1080p or 4K framebuffer. From the GPU's perspective, the deferred textures provide good workloads to bin-pack a large CU count with constant work. By comparison, when marching the path of the composited light that gives colour to a pixel in the 1080p framebuffer, it seems that if you don't treat each pixel as an atomic workload to be bin-packed in the frequency domain, you either duplicate workloads in CUs or incur redundancy even when splitting partial pixel ray marches across multiple CUs, and then have the additional work of collating and summing the result.

From what I can see, the PS5's 2230MHz clock divided by a target frame rate (say 30fps) results in each CU getting 74M clocks per frame (more than 1M clocks per WGP thread across the 64 threads) in which to bin-pack as many conclusive path-trace workloads as possible. On the lower clock of 1860MHz, each WGP would drop to 62M, giving each of the 64 threads less than 1M clocks per frame, which could become wasteful as scene complexity and bounce count increase, if it led to an excess of marched rays almost finishing but not quite concluding.
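Running the same numbers quickly (this is just my arithmetic on the figures quoted above, with the 64-threads-per-WGP assumption taken from the post):

# Back-of-the-envelope check of the clocks-per-frame figures above.
for clock_mhz in (2230, 1860):
    cycles_per_frame = clock_mhz * 1_000_000 / 30     # 30fps target
    per_thread = cycles_per_frame / 64                # 64 threads per WGP, as assumed above
    print(f"{clock_mhz} MHz: {cycles_per_frame / 1e6:.1f}M cycles per frame, "
          f"{per_thread / 1e6:.2f}M per thread")
# 2230 MHz -> ~74.3M cycles per frame, ~1.16M per thread
# 1860 MHz -> ~62.0M cycles per frame, ~0.97M per thread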
 

thelastword

Banned
Orphan Of The Machine Gameplay video from Dynamic Voltage Games

Coming to Xbox Series X at 4K 120fps.


I know these guys are top-tier devs who think the PS5 is unbalanced, after all its SSD will be slowed down significantly by its CPU, and I know Orphan of the Machine is going to wrap up all the best graphics awards later this year, but I really wish they had spent as much time on sound as they did on graphics... Orphan of the Machine's sound effects come straight out of a '90s MS-DOS game...
 

BluRayHiDef

Banned
I know these guys are top-tier devs who think the PS5 is unbalanced, after all its SSD will be slowed down significantly by its CPU, and I know Orphan of the Machine is going to wrap up all the best graphics awards later this year, but I really wish they had spent as much time on sound as they did on graphics... Orphan of the Machine's sound effects come straight out of a '90s MS-DOS game...

LOL, the PS5's CPU is only 100MHz slower than that of the XSX, which is a negligible difference. And yes, it will run at that speed most of the time, while the GPU runs at 2.23GHz most of the time. The console is designed to sustain both components at their maximum speeds simultaneously.
 

Neo_game

Member
Let me put this bluntly - the memory configuration on the Series X is sub-optimal.

I understand there are rumours that the SX had 24 GB or 20 GB at some point early in its design process, but the credible leaks have always pointed to 16 GB, which means that, if this was the case, it changed very early on in the development of the console. So what are we (and developers) stuck with? 16 GB of GDDR6 @ 14 GHz connected to a 320-bit bus (that's 5 x 64-bit memory controllers).

Microsoft is touting the 10 GB @ 560 GB/s and 6 GB @ 336 GB/s asymmetric configuration as a bonus, but it sort of isn't. We've had this specific situation at least once before in the form of the Nvidia GTX 650 Ti, and a similar situation in the form of the 660 Ti. Both of those cards suffered from an asymmetrical configuration, which affected memory performance once the "symmetrical" portion of the interface was "full".

Now, you may be asking what I mean by "full". Well, it comes down to two things. First, unlike what some commentators might believe, the maximum bandwidth of the interface is limited by the 320-bit controllers and the matching GDDR6 interface of 10 chips, each 32 bits wide, running at 14 GHz/Gbps per pin.

That means the maximum theoretical bandwidth is 560 GB/s, not 896 GB/s (560 + 336). Secondly, memory has to be interleaved in order to function on a given clock timing and to improve the parallelism of the configuration. Interleaving is why you don't get a single 16 GB RAM chip; instead we get multiple 1 GB or 2 GB chips, because it's vastly more efficient. HBM is a different story, because the dies are accessed in parallel with multiple channels per stack, and different frequencies can be run across each chip in a stack, unlike DDR/GDDR, where all chips have to run at the same frequency.
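To make the interleaving point concrete, here is a toy model in Python of how an address might map onto a Series X-style mix of 2 GB and 1 GB chips; the stripe size and chip ordering are invented for illustration and are certainly not Microsoft's actual controller logic:

GB = 1024 ** 3
STRIPE = 256                                   # assumed interleave granularity in bytes
CHIP_GB = [2, 1, 2, 1, 2, 1, 2, 1, 2, 1]       # ten chips: six 2 GB parts, four 1 GB parts

def chip_for_address(addr):
    """Return (chip index, region) for a physical address in the 16 GB space."""
    if addr < 10 * GB:
        # The first 1 GB of every chip is striped across all ten chips (320 bits wide).
        return (addr // STRIPE) % 10, "fast region, 10-way interleave"
    # The remaining 6 GB lives only in the second gigabyte of the 2 GB chips (192 bits wide).
    wide = [i for i, size in enumerate(CHIP_GB) if size == 2]
    return wide[((addr - 10 * GB) // STRIPE) % len(wide)], "slow region, 6-way interleave"

for addr in (0, 4096, 10 * GB, 10 * GB + 4096):
    print(hex(addr), chip_for_address(addr))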

However, what this means is that you need address-space symmetry in order to have interleaving of the RAM, i.e. you need all of your chips presenting the same "capacity" of memory for it to work. Looking at the diagram below, you can see the SX's configuration: the first 1 GB of each RAM chip is interleaved across the entire 320-bit memory interface, giving rise to 10 GB operating with a bandwidth of 560 GB/s. But what about the other 6 GB of RAM?

Those two banks of three chips on either side of the processor house 2 GB per chip. How does that extra 1 GB get accessed? It can't be accessed at the same time as the first 1 GB, because the memory interface is already saturated. What happens instead is that the memory controller must "switch" to the interleaved address space covered by those six 1 GB portions. This means that the 6 GB of "slower" memory (in reality it's not slower, just less wide) must be addressed on a separate clock cycle if it is to be accessed at the full width of the available bus.

The fallout of this can be quite complicated, depending on how Microsoft has worked out its memory bus architecture. It could be a complete "switch", whereby on one clock cycle the memory interface uses the interleaved 10 GB portion and on the following clock cycle it accesses the 6 GB portion. This implementation would have the effect of averaging the effective bandwidth across all of the memory. If you average this access, you get 392 GB/s for the 10 GB portion and 168 GB/s for the 6 GB portion over a given time frame, even though individual cycles would be counted at their full bandwidth.
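The 392 GB/s and 168 GB/s figures fall out of simple arithmetic if you read the "switch" as alternating cycles, with the four 1 GB chips still serving the fast region while the six 2 GB chips serve the slow one; a quick check in Python (this is my reading of the scenario described above, not a confirmed detail of the hardware):

per_chip = 32 * 14 / 8            # 32 bits x 14 Gbps / 8 = 56 GB/s per GDDR6 chip

fast_cycle = 10 * per_chip        # 560 GB/s: all ten chips hitting the 10 GB region
split_fast = 4 * per_chip         # 224 GB/s: the four 1 GB chips on the alternate cycle
split_slow = 6 * per_chip         # 336 GB/s: the six 2 GB chips serving the upper 6 GB

avg_10gb = (fast_cycle + split_fast) / 2   # 392 GB/s averaged for the 10 GB portion
avg_6gb = split_slow / 2                   # 168 GB/s averaged for the 6 GB portion
print(avg_10gb, avg_6gb, avg_10gb + avg_6gb)   # 392.0 168.0 560.0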

However, there is another scenario, with memory being assigned to each portion based on availability. In this configuration, the memory bandwidth (and access pattern) depends on how much RAM is in use. Below 10 GB, the RAM will always operate at 560 GB/s. Above 10 GB of utilisation, the memory interface must start switching or splitting access between the memory portions. I don't know whether it's technically possible to access two different interleaved portions of memory simultaneously by using the two 16-bit channels of a GDDR6 chip, but if it were (and the standard appears to allow for it), you'd end up with the same memory bandwidths as the "averaged" scenario mentioned above.

If Microsoft were able to simultaneously access and decouple individual chips from the interleaved portions of memory through their memory controller, then you could theoretically push the access to an asymmetric balance, switching between a pure 560 GB/s for 10 GB of RAM and a mix of 224 GB/s from 4 GB of that same portion plus the full 336 GB/s of the 6 GB portion (also pictured below). This seems unlikely given my understanding of how things work, and undesirable from a technical standpoint in terms of both game memory access and architecture design.

In comparison, the PS5 has a static 448 GB/s of bandwidth for the entire 16 GB of GDDR6 (also operating at 14 GHz, across a 256-bit interface). Yes, the SX has 2.5 GB reserved for system functions and we don't know how much the PS5 reserves for similar functionality, but it doesn't matter: the Xbox SX either has only 7.5 GB of interleaved memory operating at 560 GB/s for game utilisation before it has to start "lowering" the effective bandwidth of the memory below that of the PS5... or the SX has an averaged, mixed memory bandwidth that is always below that of the baseline PS5. Either option puts the SX at a disadvantage to the PS5 in more memory-intensive games, and the latter puts it at a disadvantage all of the time.
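For reference, the headline figures being compared all reduce to bus width times per-pin data rate; a one-line check of the numbers used throughout the post (arithmetic only, nothing new):

def bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak theoretical bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_bits * gbps_per_pin / 8

print(bandwidth_gbs(320, 14))   # 560.0 GB/s - XSX, 10 GB interleaved region
print(bandwidth_gbs(192, 14))   # 336.0 GB/s - XSX, 6 GB region on the six 2 GB chips
print(bandwidth_gbs(256, 14))   # 448.0 GB/s - PS5, flat across all 16 GB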



[Image: RAM configuration graphic]



Wow, thanks for posting this. Very interesting read.

So it seems the Xbox doesn't have as big an advantage in RAM as I thought. The difference between the consoles seems smaller and smaller as the days go by. lol
 

Fdkenzo

Member
I do not understand why some people still compare consoles with PC components. On paper you can compare, but not in reality! Don't be surprised if the PS5 and XSX have better performance in new next-gen games than the 2080 Ti, and I'm not exaggerating when I say that. Remember, consoles are totally different from a PC, and the specifications of the new consoles are extremely powerful!
Another point: why all the concern about 4K 60fps?! All games will run smoothly at 4K 60fps (this will be the standard).
 
I do not understand why some people still compare consoles with PC components. On paper you can compare, but not in reality! Don't be surprised if the PS5 and XSX have better performance in new next-gen games than the 2080 Ti, and I'm not exaggerating when I say that. Remember, consoles are totally different from a PC, and the specifications of the new consoles are extremely powerful!
Another point: why all the concern about 4K 60fps?! All games will run smoothly at 4K 60fps (this will be the standard).
4K 60fps will not be a standard on consoles, for the simple reason that AAA games want more detail on screen; of course, some of them will have 60fps.

But once they start using RT in many areas, 4K 30fps will be more common.
 

Fdkenzo

Member
4K 60fps will not be a standard on consoles, for the simple reason that AAA games want more detail on screen; of course, some of them will have 60fps.

But once they start using RT in many areas, 4K 30fps will be more common.

No, optimization is better on console, don't forget that, and don't forget that all of a console's power is dedicated to games. Man, this is new hardware; don't hold next-gen games to the hardware that is available today.
And even now there are new tools like DLSS that can increase the FPS; it can double the FPS with RT on. RT will evolve and will not be the performance killer it was in its first year.
 
Last edited:
No, optimization is better on console, don't forget that.
And even now there are new tools like DLSS that can increase the FPS; it can double the FPS with RT on. RT will evolve and will not be the performance killer it was in its first year.
It's not about optimization; it's about the way developers, or even the business people, want to present something.

The engine leads at the big studios usually try to hit as high as they can; it has always been this way. Even with all these technologies, you want to push more and more.

Just imagine, in the middle of a presentation, Ubisoft for example showing its VPs a next-gen Assassin's Creed: which do you think they will prefer, 4K 60fps or 4K 30fps, even if both use all the reconstruction techniques? They will probably choose 30fps. It's not that they aren't able to make the game run at 60fps; it's all about the objectives of each studio and publisher.
 