
Next-Gen PS5 & XSX |OT| Console tEch threaD


Neofire

Member
Another PS5 concept design.

[four concept render images]


I kinda like it.
No, no, no and no my friend 👀
 

PaintTinJr

Member
Rolling_Start
It isn't a pure HSA system because the memory isn't unified, which undermines one of the cornerstone advantages of HSA to begin with. Failing to appreciate the importance of unified memory looks like a huge mistake, and given the form factor of the XSX they should probably have done a discrete PC-card setup instead. But I guess we'll see in the coming years how well developers cope with it. From a game development view, it feels like they've preemptively removed options from the design by assuming that the CPU and GPU need asymmetric access patterns, thereby encouraging the same access patterns that older games use now, and discouraging GPU-processed data from being fed back to the game logic in a meaningful way, as opposed to visual eye-candy overlays that get sent to the GPU never to return.
 

rnlval

Member
The discrepancies can be explained by bottlenecks in the pipeline and the 5700 XT being power starved at higher clocks.
In terms of IPC or performance per TFLOP they are about on par, and I expect further improvements from Sony's/MS's custom RDNA2 designs.

I think MS/Sony carefully designed their consoles to minimize bottlenecks and made sure the APUs will be sufficiently fed.
MS made sure the XSX GPU's memory bandwidth bottleneck is lessened with 560 GB/s, while also adding the CPU as a memory bandwidth consumer.

Sony recycled the RX 5700/5700 XT's 448 GB/s memory bandwidth and added the CPU as a memory bandwidth consumer.
 

rnlval

Member
Rolling_Start
It isn't a pure HSA system because the memory isn't unified, which undermines one of the cornerstone advantages of HSA to begin with. Failing to appreciate the importance of unified memory looks like a huge mistake, and given the form factor of the XSX they should probably have done a discrete PC-card setup instead. But I guess we'll see in the coming years how well developers cope with it. From a game development view, it feels like they've preemptively removed options from the design by assuming that the CPU and GPU need asymmetric access patterns, thereby encouraging the same access patterns that older games use now, and discouraging GPU-processed data from being fed back to the game logic in a meaningful way, as opposed to visual eye-candy overlays that get sent to the GPU never to return.
The CPU has no business handling the majority of GPU workloads, since the CPU can't keep up with the GPU's computational and memory-access intensity.

AI and collision physics can be offloaded to the GPU with its deep-learning instruction set and RT cores.

An 8-core Zen 2 with AVX2 and gather instructions is not a modern GPU.
 
Last edited:
Rolling_Start
It isn't a pure HSA system because the memory isn't unified, which undermines one of the cornerstone advantages of HSA to begin with. Failing to appreciate the importance of unified memory looks like a huge mistake, and given the form factor of the XSX they should probably have done a discrete PC-card setup instead. But I guess we'll see in the coming years how well developers cope with it. From a game development view, it feels like they've preemptively removed options from the design by assuming that the CPU and GPU need asymmetric access patterns, thereby encouraging the same access patterns that older games use now, and discouraging GPU-processed data from being fed back to the game logic in a meaningful way, as opposed to visual eye-candy overlays that get sent to the GPU never to return.

It is unified memory though, only the bus width is different depending on physical address.

And CPU and GPU do have different access patterns. Not just in terms of range of addresses they will most frequently access within a frame, but in terms of the number of times they will access [edit: particular] addresses within a frame.

I have confidence in both Sony's and MS's analysis of their consoles' subsystems.
 
Last edited:
He's gonna flip, just like MBG, Crapgamer. I can smell it.
I saw the video and he's still unpaid PR for Xbox; the guy mentioned the number of TFLOPS every 30 seconds, so I don't think so.

Before this era of consoles my favorite was the Xbox 360, but then something happened at the end of its life: the exclusives just disappeared, and the few that came weren't as good as on PS3.

When I bought my Xbox One, oh my god, it was not a good experience. All the first-party games were meh (Halo 5, Gears 4, Forza Motorsport, Sunset, Rise) until Forza Horizon 3. For me the Xbox One was not worth it, but the problem persisted, while Sony just kept adding more and more IPs that gave value to its console.

When they finally started to have good games, they announced they would all now be on PC as well, so for me it was WTF, why did I buy your console? For nothing; on PC the games could cost the same or less, I don't need to pay for Gold to play online, and now with Game Pass they devalue the perceived price of new games. Even now, if you asked which Xbox games I'll remember, it would maybe be just Forza Horizon, Ori, and maybe the last Gears.

A couple of months before the new consoles arrive, I am still waiting for them to show something more of their IPs, but why should I buy it if I can just upgrade my PC and have a better experience, rather than watch my old console struggle for two years running the new games?
 

SonGoku

Member
MS made sure the XSX GPU's memory bandwidth bottleneck is lessened with 560 GB/s, while also adding the CPU as a memory bandwidth consumer.

Sony recycled the RX 5700/5700 XT's 448 GB/s memory bandwidth and added the CPU as a memory bandwidth consumer.
Recycled, huh? Idk man, seems to me you had your mind made up before even seeing comparisons and analyses of games.
You do realize 14 Gbps chips on a 256-bit bus are not unique to the 5700... the 2080 has them as well, and if it were really an issue they would take a temporary loss and switch to 16 Gbps chips.

MS has 25% extra bandwidth, but they also have an 18-21% more powerful GPU to feed, and it's not like their asymmetric solution is without drawbacks: there will inevitably be scenarios where the XSX incurs performance penalties while accessing the slower pool, and it's less flexible as well. MS settled for the solution that fit their console best, not for the perfect solution.
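As a sanity check on that proportionality argument, here is a tiny sketch using the publicly quoted peak figures (12.15 TFLOPS / 560 GB/s and 10.28 TFLOPS / 448 GB/s). These are marketing peaks rather than measured throughput, so treat the output as rough:

```python
# Back-of-the-envelope bandwidth-per-TFLOP check using the publicly quoted peaks.
specs = {
    "XSX": {"tflops": 12.15, "bandwidth_gbs": 560.0},  # 52 CUs @ 1825 MHz, 10 GB fast pool
    "PS5": {"tflops": 10.28, "bandwidth_gbs": 448.0},  # 36 CUs @ up to 2230 MHz
}

for name, s in specs.items():
    print(f"{name}: {s['bandwidth_gbs'] / s['tflops']:.1f} GB/s per TFLOP")

extra_bw = specs["XSX"]["bandwidth_gbs"] / specs["PS5"]["bandwidth_gbs"] - 1
extra_tf = specs["XSX"]["tflops"] / specs["PS5"]["tflops"] - 1
print(f"XSX: {extra_bw:.0%} more bandwidth, {extra_tf:.0%} more compute")
```

That works out to roughly 46 vs 44 GB/s per TFLOP, i.e. the 25% bandwidth gap and the ~18% compute gap are close to proportional.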
 

Tripolygon

Banned
XSX SSD IO has two decompression paths
1. General via Zlib with 4.8 GB/s
2. Textures target via BCpack with "more than 6 GB/s"
No

Zlib is the general compression format, meaning everything (audio, textures, animation, video and more) gets compressed with zlib overall. But in addition to that, the textures inside the zlib stream are already stored in a BC format, which all GPUs support decompressing natively; on XSX that texture format is BCPack. It is not two separate decompression paths, so to speak.

Microsoft says they have created their own BC format called BCPack, which is supposed to be better.
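To illustrate the layering being described (not Microsoft's actual pipeline or API; the header and payload here are made-up placeholders), a minimal Python sketch:

```python
import zlib

# Illustrative sketch: textures are block-compressed (BCn; BCPack on XSX) offline,
# then the whole asset package gets a general-purpose zlib layer for storage.
bc_texture_blocks = bytes(4096)                 # stand-in for BCn/BCPack block data
asset_package = b"HEADER" + bc_texture_blocks

on_disk = zlib.compress(asset_package)          # outer, general-purpose layer

# At load time only the general layer is undone in the IO path...
unpacked = zlib.decompress(on_disk)
texture_blocks = unpacked[len(b"HEADER"):]

# ...the BC blocks are not expanded to raw RGBA here: GPUs sample standard BCn
# natively, and on XSX dedicated hardware would handle the BCPack step.
assert texture_blocks == bc_texture_blocks
```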
 
Last edited:

Bo_Hazem

Banned
I saw the video and he's still unpaid PR for Xbox; the guy mentioned the number of TFLOPS every 30 seconds, so I don't think so.

Before this era of consoles my favorite was the Xbox 360, but then something happened at the end of its life: the exclusives just disappeared, and the few that came weren't as good as on PS3.

When I bought my Xbox One, oh my god, it was not a good experience. All the first-party games were meh (Halo 5, Gears 4, Forza Motorsport, Sunset, Rise) until Forza Horizon 3. For me the Xbox One was not worth it, but the problem persisted, while Sony just kept adding more and more IPs that gave value to its console.

When they finally started to have good games, they announced they would all now be on PC as well, so for me it was WTF, why did I buy your console? For nothing; on PC the games could cost the same or less, I don't need to pay for Gold to play online, and now with Game Pass they devalue the perceived price of new games. Even now, if you asked which Xbox games I'll remember, it would maybe be just Forza Horizon, Ori, and maybe the last Gears.

A couple of months before the new consoles arrive, I am still waiting for them to show something more of their IPs, but why should I buy it if I can just upgrade my PC and have a better experience, rather than watch my old console struggle for two years running the new games?

Sony aren't dumb enough to send another IP, especially new-gen ones, to PC. They've seen the huge reaction over just one IP; they can't gamble with their reputation. It was literally to lure PC gamers to PS5, as has been said. I doubt Horizon will be so successful that it makes them think of sending more over.

PC gamers in general can be delusional when comparing console gaming to PC as "cheaper / no PSN or Gold". You pay a freakin' $300 just for the shitty Windows OS; that's worth at least 5 years of the subscription, not to mention 2 free monthly games, which is 120 games in 5 years.

If someone wants to play on PC, that's ok, but let's not lie about how insanely expensive PC gaming is. Buying both consoles would be cheaper than building a PC comparable to the XSX; I would say you would pay at least $3000 for that. And it's not a better experience: it's always crashing, or something is working in the background, or it's updating without you knowing, and it's an unoptimized system overall for gaming.

I personally need a PC for many other things, but not for gaming. If you do game on one, then go ahead, but if gaming is the main goal then spend big or never mention it. After all, 99% of PCs out there are weak compared to next-gen consoles, including mine, which I spent around $3400 on, I think, but as a workstation first, and I didn't even bother to run a game on it.

Consoles are easy and straightforward: no hackers, no threats (except some kid who wants to fu** your mom because you kicked his ass), a lot more stable, and built around gaming. PC gaming is superior for shooters, but on console the odds are even and only the better player gets ahead, not the bigger investment in higher-quality hardware.
 
Last edited:

Fafalada

Fafracer forever
It is unified memory though, only the bus width is different depending on physical address.
Semantics - from an application perspective that's exactly how non-unified memory behaves on consoles (well, most of them, anyway).

XSX SSD IO has two decompression paths
That's not how any of this works - lossless compressors don't give you a "guaranteed" compression ratio. Zlib can just as easily reach a 3x or greater compression ratio depending on the source data you feed it.
Likewise, there will be cases of files that compress at 0% (it doesn't matter which compressor you use) - the Sony/MS numbers attempt to represent a likely "average" for aggregate assets in a typical game scenario (i.e. both numbers are just estimates).
Given that there's no chance they used the same methodology to arrive at their estimates, the 4.8/9 figures are also not directly comparable, nor are they representative of any relative "compressor efficiency".
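The data-dependence is easy to demonstrate with the real zlib module. A rough sketch (the 2.4 GB/s raw figure is just a placeholder to show the multiplication, not either console's measured speed):

```python
import os
import zlib

RAW_DRIVE_GBS = 2.4   # illustrative raw throughput; plug in any drive's figure

samples = {
    "highly redundant data": b"GAME ASSET " * 100_000,        # compresses extremely well
    "already-compressed/random data": os.urandom(1_000_000),  # ~0% gain
}

for name, data in samples.items():
    ratio = len(data) / len(zlib.compress(data))               # achieved compression ratio
    print(f"{name}: {ratio:.2f}x -> effective ~{RAW_DRIVE_GBS * ratio:.1f} GB/s")
```

Any "effective GB/s" headline is really raw throughput multiplied by an assumed average ratio, so the assumed asset mix matters as much as the hardware.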
 

Andodalf

Banned
Recycled, huh? Idk man, seems to me you had your mind made up before even seeing comparisons and analyses of games.
You do realize 14 Gbps chips on a 256-bit bus are not unique to the 5700... the 2080 has them as well, and if it were really an issue they would take a temporary loss and switch to 16 Gbps chips.

MS has 25% extra bandwidth, but they also have an 18-21% more powerful GPU to feed, and it's not like their asymmetric solution is without drawbacks: there will inevitably be scenarios where the XSX incurs performance penalties while accessing the slower pool, and it's less flexible as well. MS settled for the solution that fit their console best, not for the perfect solution.

So having more power and proportionally more bandwidth isn't a good thing? Good lord, the spin. I love how you assume the worst case for everything to do with XSX RAM and SSD too, a beautiful cherry on top here. Let's take the lowest the SX could possibly be, pretending they just had no idea how memory works when they designed it, but assume that the PS5 never drops below theoretical maximums even when it's been admitted it will.
 

SlimySnake

Flashless at the Golden Globes
MS made sure the XSX GPU's memory bandwidth bottleneck is lessened with 560 GB/s, while also adding the CPU as a memory bandwidth consumer.

Sony recycled the RX 5700/5700 XT's 448 GB/s memory bandwidth and added the CPU as a memory bandwidth consumer.
Yup.

Don't forget Cerny said RT is very memory intensive, and we know the 5700 XT is already bandwidth limited. What's going to happen when you have RT and the CPU stealing half the bandwidth? PS4 Pro bandwidth.
 
Sony aren't dumb enough to send another IP, especially the new gen, to PC. They've seen the huge reaction for just one IP, they can't gamble with their reputation. It was literally to lure PC gamers to PS5 as has been said. I doubt Horizon will make that much of success that will make them think of sending them to PC.
Never underestimate how dumb a company can be.
 

rnlval

Member
Recycled, huh? Idk man, seems to me you had your mind made up before even seeing comparisons and analyses of games.
You do realize 14 Gbps chips on a 256-bit bus are not unique to the 5700... the 2080 has them as well, and if it were really an issue they would take a temporary loss and switch to 16 Gbps chips.

MS has 25% extra bandwidth, but they also have an 18-21% more powerful GPU to feed, and it's not like their asymmetric solution is without drawbacks: there will inevitably be scenarios where the XSX incurs performance penalties while accessing the slower pool, and it's less flexible as well. MS settled for the solution that fit their console best, not for the perfect solution.
Turing's deep-learning, integer and floating-point workloads are split across Tensor cores and separate INT and FP pipelines.
RDNA's deep-learning, integer and floating-point workloads share the same FP shader resources.
When integer and DirectML-type workloads are used, Turing can sustain mixed workloads better than RDNA's shared design.

The GPU memory bandwidth difference between the XSX and PS5 is greater than 25 percent once the CPU consumes its share of memory bandwidth.

Part of the XSX's slower 6 GB memory range is the 2.5 GB OS reservation, which is a low-intensity memory bandwidth consumer.
 

SonGoku

Member
So having more power and proportionally more bandwidth isn't a good thing?
Did I say it wasn't? That's a strawman and you know it. I merely pointed out that, proportionally to GPU performance, the XSX doesn't have that much extra bandwidth compared to the PS5.
I love how you assume the worst case for everything to do with XSX RAM and SSD too. Let's take the lowest the SX could possibly be, pretending they just had no idea how memory works when they designed it
Quite the opposite of your strawman, actually:
  • I constantly refer to the 4.8 GB/s figure even though that's a peak figure (perfect 100% compression)
  • I explicitly said MS went with the best possible memory configuration for their design even though it's not perfect or without its drawbacks; the PS5's isn't perfect either
  • I rate it in between the 2080S and 2080 Ti
I see both consoles in their best possible light.
but assume that the PS5 never drops below theoretical maximums even when it's been admitted it will.
Another strawman.
Seriously, did you quote the wrong person or what?
 
Last edited:

Neo_game

Member
Wrong. Both consoles have a similar CPU, hence similar BW usage. The PS5 GPU's BW is under the RX 5700/5700 XT's 448 GB/s once a desktop-class CPU's BW is factored in.

Where are you getting the info that this BW sharing will compromise the GPU?? The PS3 was criticized for its split RAM; having unified memory is what developers want. PC is different, but on a console the programmer has complete control over what they use. Your propaganda about how sharing the BW will compromise the PS5, across those 5 different scenarios, barely made it to 30%. Which is nothing compared to the Xbox One using only 48.3 GB/s for graphics versus the PS4's 156 GB/s, going by your logic. That is a whopping difference of more than 2.5 times. With the PS4 having a 40% advantage in graphics as well, shouldn't the difference between the two have been night and day rather than just 720p vs 900p and 900p vs 1080p?
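For what it's worth, the 48.3 vs 156 GB/s figures fall out of applying the same "~20 GB/s for the CPU" assumption used earlier in the thread to last-gen specs, and they ignore the Xbox One's ESRAM; a quick sketch of that arithmetic:

```python
# "Going by your logic": subtract the same fixed CPU share from last-gen consoles.
CPU_SHARE_GBS = 20.0            # the figure used earlier in this thread, not a measurement

ps4_gfx = 176.0 - CPU_SHARE_GBS     # GDDR5 -> 156 GB/s
xbo_gfx = 68.3 - CPU_SHARE_GBS      # DDR3  -> 48.3 GB/s (ignores the XBO's 32 MB ESRAM)

print(f"PS4 {ps4_gfx:.1f} GB/s vs XBO {xbo_gfx:.1f} GB/s -> {ps4_gfx / xbo_gfx:.1f}x")
```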
 
Last edited:

SonGoku

Member
Turing's deep-learning, integer and floating-point workloads are split across Tensor cores and separate INT and FP pipelines.
RDNA's deep-learning, integer and floating-point workloads share the same FP shader resources.
When integer and DirectML-type workloads are used, Turing can sustain mixed workloads better than RDNA's shared design.
RDNA2 will incorporate similar features on top of whatever customizations Sony/MS add to maximize silicon utilization; hell, the PS4 Pro already has FP16 x2 (aka RPM), which DirectML exploits. Not denying Turing has its strong points, btw.
Another advantage consoles have is that devs will target and exploit said features, making the most out of the HW.
The GPU memory bandwidth difference between the XSX and PS5 is greater than 25 percent once the CPU consumes its share of memory bandwidth.
If 40 GB/s is allocated to the CPU, the difference is 27.4%, so ~2% extra.

Would the XSX incur a performance penalty in loads where the CPU & GPU access the fast & slow pools simultaneously? Fafalada
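The 27.4% figure is simple subtraction; here is the arithmetic spelled out, assuming the same hypothetical 40 GB/s CPU allocation on both machines and counting only the XSX's fast pool:

```python
CPU_ALLOCATION_GBS = 40.0   # hypothetical, applied equally to both consoles

xsx_gpu = 560.0 - CPU_ALLOCATION_GBS   # fast-pool figure only; the 336 GB/s range is ignored
ps5_gpu = 448.0 - CPU_ALLOCATION_GBS

print(f"Peak bandwidth advantage:       {560.0 / 448.0 - 1:.2%}")   # 25.00%
print(f"Advantage after CPU allocation: {xsx_gpu / ps5_gpu - 1:.2%}")  # ~27.45%
```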
 

rnlval

Member
Where are you getting the info that this BW sharing will compromise the GPU?? The PS3 was criticized for its split RAM; having unified memory is what developers want. PC is different, but on a console the programmer has complete control over what they use. Your propaganda about how sharing the BW will compromise the PS5, across those 5 different scenarios, barely made it to 30%. Which is nothing compared to the Xbox One using only 48.3 GB/s for graphics versus the PS4's 156 GB/s, going by your logic. That is a whopping difference of more than 2.5 times. With the PS4 having a 40% advantage in graphics as well, shouldn't the difference between the two have been night and day rather than just 720p vs 900p and 900p vs 1080p?
Under the PS3, the CELL SPUs and RSX had split rendering workloads, which is not the same as a gaming PC setup, i.e. the PC CPU is not pretending to be a half-assed GPU.

A gaming PC still has the Xbox 360-style processing model, with the CPU in command (e.g. ~900 GFLOPS) while the GPU does the mass data-processing grunt work (e.g. in the 12 TFLOPS range).

The AMD-based PS4/PS4 Pro/XBO/X1X consoles still have the Xbox 360-style processing model, with the CPU in command (e.g. ~100 to 143 GFLOPS) while the GPU (e.g. 1.3 to 6 TFLOPS) does the mass data-processing grunt work.

A gaming PC follows the Xbox 360 processing model with the CPU attached to a very large L4-cache-like pool in the form of DDR4, while the GPU still has its large GDDR6 memory pool.

Your propaganda omitted the processing-model differences between a gaming PC and the PS3. The PS3 split-memory argument omits the split-rendering argument!
 

rnlval

Member
RDNA2 will incorporate similar features on top of whatever customizations Sony/MS add to maximize silicon utilization; hell, the PS4 Pro already has FP16 x2 (aka RPM), which DirectML exploits. Not denying Turing has its strong points, btw.
Another advantage consoles have is that devs will target and exploit said features, making the most out of the HW.

If 40 GB/s is allocated to the CPU, the difference is 27.4%, so ~2% extra.

Would the XSX incur a performance penalty in loads where the CPU & GPU access the fast & slow pools simultaneously? Fafalada
1. DirectML comes with metacommands, which are MS's official direct-hardware-access API.

2. Both RDNA 2 and Turing support Shader Model 6's wave32 wavefront size.

3. Both RDNA 2 and Turing RTX support the same major hardware feature set, e.g. hardware async compute, BVH raytracing Tier 1.1, DirectML (e.g. an RT denoise pass), Variable Rate Shading, mesh shaders (aka the geometry engine), Sampler Feedback, Rasterizer Ordered Views, Conservative Rasterization Tier 3, Tiled Resources Tier 3, etc.

4. The XSX's slower-RAM performance penalty depends on how intensively that 6 GB address range is used, e.g. 2.5 GB of the 6 GB is allocated to low-bandwidth-intensity OS-related workloads. For games, that leaves only 3.5 GB in the 336 GB/s pool vs 10 GB in the 560 GB/s pool.

The CPU's memory bandwidth intensity is lower than the GPU's, i.e. the CPU is not pretending to be a half-assed GPU.
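One way to read point 4 is as a weighted average: how much effective bandwidth the XSX sees depends on what fraction of game traffic lands in the 10 GB / 560 GB/s range versus the 3.5 GB / 336 GB/s range. A naive sketch follows; the traffic splits are made up, not published figures, and real behaviour also depends on contention and scheduling:

```python
def blended_bandwidth_gbs(fast_fraction: float) -> float:
    """Naive weighted average of the XSX's two memory pools by share of traffic."""
    FAST_GBS, SLOW_GBS = 560.0, 336.0
    return fast_fraction * FAST_GBS + (1.0 - fast_fraction) * SLOW_GBS

for share in (1.0, 0.9, 0.75):   # hypothetical traffic splits
    print(f"{share:.0%} of traffic in the 10 GB pool -> ~{blended_bandwidth_gbs(share):.0f} GB/s")
```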
 

rnlval

Member
Semantics - from an application perspective that's exactly how non-unified memory behaves on consoles (well, most of them, anyway).

That's not how any of this works - lossless compressors don't give you a "guaranteed" compression ratio. Zlib can just as easily reach a 3x or greater compression ratio depending on the source data you feed it.
Likewise, there will be cases of files that compress at 0% (it doesn't matter which compressor you use) - the Sony/MS numbers attempt to represent a likely "average" for aggregate assets in a typical game scenario (i.e. both numbers are just estimates).
Given that there's no chance they used the same methodology to arrive at their estimates, the 4.8/9 figures are also not directly comparable, nor are they representative of any relative "compressor efficiency".
Note that the GPU has native support for S3TC/BCn-formatted textures.
 
Last edited:

Shmunter

Member
"No developers say they will not program for the special things in the Playstation... DICE is going to... DICE said they will." "... the third party games are gonna use the Playstation 5 to its best ability."

"All my sources are third party devs."

"It's not about the loading times."

Timestamped.


Bold statement; I'm hesitant to say it's unlikely. Keeping a level playing field is easier on PR than having an obvious gulf and having to explain it.
 

Neo_game

Member
Under the PS3, the CELL SPUs and RSX had split rendering workloads, which is not the same as a gaming PC setup, i.e. the PC CPU is not pretending to be a half-assed GPU.

A gaming PC still has the Xbox 360-style processing model, with the CPU in command (e.g. ~900 GFLOPS) while the GPU does the mass data-processing grunt work (e.g. in the 12 TFLOPS range).

The AMD-based PS4/PS4 Pro/XBO/X1X consoles still have the Xbox 360-style processing model, with the CPU in command (e.g. ~100 to 143 GFLOPS) while the GPU (e.g. 1.3 to 6 TFLOPS) does the mass data-processing grunt work.

A gaming PC follows the Xbox 360 processing model with the CPU attached to a very large L4-cache-like pool in the form of DDR4, while the GPU still has its large GDDR6 memory pool.

Your propaganda omitted the processing-model differences between a gaming PC and the PS3. The PS3 split-memory argument omits the split-rendering argument!

Hey man, I have no propaganda. You are the one trying to push your narrative. I was giving you an instance from the previous-gen consoles where the difference between the RAM and GPU was far greater. This time the difference is actually far smaller, and we know how theoretical numbers don't resemble practical performance. But you can carry on 🤷‍♂️
 
Last edited:

Shmunter

Member
Just searching around, I saw that the geometry engine will do primitive shaders, also known as mesh shaders... why do they love to use different names for the same thing?

https://www.starcitizen.gr/2642867-2/
"This Nvidia dev blog goes more in-depth about the system (but this is not Nvidia specific, AMD just calls them primitive shaders) and from what I can tell this is the future."

Do you think Sony just extracted this feature into a separate chip? Or will this be in all RDNA 2 GPUs in the same way?

Because at least from how Xbox PR works, I don't think so; they just say they will use mesh shaders as a feature but never mention it being in a separate chip. And given how angry one Sony dev got when someone said the XSX will have the same as the PS5 (VRS with a primitive shader chip), I think Sony decided to put it in a separate block so the devs can have more control over it (it's just a theory).

It’s interesting. I suspect Sony wouldn’t put focus on something that wasn’t unique to them, or at least enhanced in some way.
 
Last edited:

BluRayHiDef

Banned
I've had enough of this debate about whether the PlayStation 5 or the Xbox Series X is more powerful. So, I am ending this debate once and for all.

The Xbox Series X is the more powerful of the two in terms of rendering output and general processing per cycle; this is due to its GPU having 44% more compute units than the PlayStation 5's GPU (despite them running at a lower frequency) and due to its CPU running at a higher frequency than that of the PlayStation 5 by 100 MHz to 300 MHz.

However, due to the radically novel optimization of I/O in the PS5's design and the more than adequate power of the PS5 in regard to rendering games in 4K at 60 frames per second, the superior power of the Xbox Series X won't be as impactful as the new approaches to game design that developers will be able to implement on the PlayStation 5, such as genuinely instantaneous fast travel, seamless transitions from cutscenes to gameplay, smaller file sizes due to the obsolescence of redundant assets in storage and subsequently more unique assets and therefore larger worlds, and the lack of loading screens and restrictive areas that must be traversed while large portions of worlds are loaded into RAM in the background.

Hence, I hereby declare the PlayStation 5 to be the overall better console relative to the Xbox Series X; and to cement my declaration, I hereby rebut all counterarguments in advance. I have spoken.
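For reference, the 44% figure and the per-cycle argument fall straight out of the published CU counts and clocks (XSX: 52 CUs at 1825 MHz; PS5: 36 CUs at up to 2230 MHz):

```python
def rdna_tflops(cus: int, clock_mhz: float) -> float:
    # 64 FP32 lanes per CU x 2 ops per clock (FMA) = 128 FLOPs per CU per clock
    return cus * 128 * clock_mhz * 1e6 / 1e12

xsx = rdna_tflops(52, 1825)   # ~12.1 TFLOPS
ps5 = rdna_tflops(36, 2230)   # ~10.3 TFLOPS at its maximum (variable) clock

print(f"CU advantage:     {52 / 36 - 1:.0%}")    # ~44%
print(f"TFLOPS advantage: {xsx / ps5 - 1:.0%}")  # ~18%
```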
 

Bogroll

Likes moldy games
Scaling between a 1.3 TFLOP XB1 and a 12 TFLOP XSX is going to seriously gimp Series X games, surely? The games you will be playing will be literally upscaled XB1 games with better performance and effects. PS5's exclusives will make these games look a gen apart if you pay attention to the details.

The decision to gimp all Series X games for 2 years, and the surrender-logic behind putting everything on more powerful PCs, will be a disaster.
For a year or 2, sure, but the PS5 is gimped for its whole life. Don't get me wrong, I'll get one, but it is gimped no matter how people on here try to portray it.
 

SonGoku

Member
1. DirectML comes with metacommands, which are MS's official direct-hardware-access API.

2. Both RDNA 2 and Turing support Shader Model 6's wave32 wavefront size.

3. Both RDNA 2 and Turing RTX support the same major hardware feature set, e.g. hardware async compute, BVH raytracing Tier 1.1, DirectML (e.g. an RT denoise pass), Variable Rate Shading, mesh shaders (aka the geometry engine), Sampler Feedback, Rasterizer Ordered Views, Conservative Rasterization Tier 3, Tiled Resources Tier 3, etc.

4. The XSX's slower-RAM performance penalty depends on how intensively that 6 GB address range is used, e.g. 2.5 GB of the 6 GB is allocated to low-bandwidth-intensity OS-related workloads. For games, that leaves only 3.5 GB in the 336 GB/s pool vs 10 GB in the 560 GB/s pool.

The CPU's memory bandwidth intensity is lower than the GPU's, i.e. the CPU is not pretending to be a half-assed GPU.
1. Ok, did I say otherwise?
2. & 3. Ok... It's great to read that RDNA2 is so feature rich! Never doubted it!

4. I'm well aware of that; I never said it's a bad compromise, and it's quite likely the best MS could do while remaining within the desired budget. I just pointed out that it's not without its drawbacks, the most obvious one being reduced flexibility, i.e. devs must work within those hard limits or else incur a performance penalty.
 
Last edited:

Lort

Banned
How good will speed runs be? 100 gigs for a game means that it will take 20 seconds to finish your 100 gig game on PS5, or 40 seconds on an XSX.
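Those napkin numbers correspond roughly to the raw (uncompressed) SSD rates of about 5.5 and 2.4 GB/s; with the compressed figures quoted elsewhere in the thread the times drop further. A quick sketch:

```python
GAME_SIZE_GB = 100.0

# Raw drive rates, plus the compressed figures quoted in this thread (estimates, not guarantees).
rates_gbs = {
    "PS5 raw": 5.5,
    "PS5 compressed (thread's ~9 GB/s figure)": 9.0,
    "XSX raw": 2.4,
    "XSX compressed (4.8 GB/s figure)": 4.8,
}

for name, gbs in rates_gbs.items():
    print(f"{name}: ~{GAME_SIZE_GB / gbs:.0f} s to stream all 100 GB")
```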
 

pasterpl

Member
Very unlikely, and they had to do damage control even for that one IP. But they went beyond the point of no return, so they can't just scrap the idea after making a deal with Steam.

Can't one play lots of PS4 exclusives via PS Now on PC? I know that these are not direct ports.
 

rnlval

Member
1. Ok, did I say otherwise?
2. & 3. Ok... It's great to read that RDNA2 is so feature rich! Never doubted it!

4. I'm well aware of that; I never said it's a bad compromise, and it's quite likely the best MS could do while remaining within the desired budget. I just pointed out that it's not without its drawbacks, the most obvious one being reduced flexibility, i.e. devs must work within those hard limits or else incur a performance penalty.
1. It's against your optimization argument.

2. & 3. The mentioned features are mostly about resource conservation, and that's against your optimization argument.

4. MS still has a last-minute option to change the four 1 GB chips into four 2 GB chips.



The PS5's 448 GB/s of memory bandwidth has its own drawbacks when there's an additional CPU memory bandwidth consumer; hence the GPU's share is lower than the PC RX 5700/5700 XT's 448 GB/s.

The PS4's 176 GB/s of memory bandwidth has 20 GB/s taken by the CPU links, leaving 156 GB/s for the Pitcairn-class GPU, which is close to the HD 7850's 153.6 GB/s.

Killzone Shadow Fall's CPU vs GPU data storage example:

GPU data dominates.
 

pasterpl

Member

 
I agree completely and I'm looking forward to the games being shown. It was interesting that someone posted low quality vs high for raytracing, because I would totally be happy with low. I'm so ready for this next gen. A little frustrated I'll have to get both systems, however.
I mean, we are talking about a real-time simulation of millions of rays bouncing everywhere; "low settings" RT is still hella complex, and unless they drop entire features to save performance, you're not going to scream in despair over a less-than-ultra simulation of light lol
 
Are we now okay with cross-gen games? :unsure:
The problem is third parties, not first parties. Cross-gen games have always existed and they will exist this time too; forcing ALL devs to develop only for PS5 from the start would be suicide, if it's even possible. This game was most probably developed for PS4, and of course Sony couldn't force that version to be canceled, but at the very least they got them to develop a PS5 version; that's different.
What people don't like about MS's move is doing the same with first parties. Gears of War wasn't on the first Xbox, it went directly to 360; Killzone Shadow Fall and Infamous weren't on PS3. It's silly, or at least seems so, to have COMPLETE CONTROL over an IP as with first parties (contrary to third parties) and choose to limit the games that are going to primarily define your new console rather than the dying one.
Edit: actually the One is already dead. Even more so.
 
Last edited:

Vae_Victis

Banned
Are we now okay with cross-gen games? :unsure:
Some overlap is inevitable initially; nobody really knows when these consoles will really come out given the current situation, how widely available they will be, or how quickly they will sell. In the first 6 months at least, developing for XSX and PS5 exclusively will only happen if Microsoft and/or Sony put something more on the table for the developers that do so (and for first parties, of course).

Even the PS3/Xbox 360 to PS4/Xbox One leap had several cross-gen games during the launch window and beyond, despite the new consoles having a much better general market outlook than now and the generational difference in hardware structure being greater (which meant more work to have both versions done and running). The PS4 launched at the end of 2013; the last Call of Duty for PS3 was at the end of 2015.
 

-kb-

Member
1. It's against your optimization argument.

2. & 3. The mentioned features are mostly about resource conservation, and that's against your optimization argument.

4. MS still has a last-minute option to change the four 1 GB chips into four 2 GB chips.

The PS5's 448 GB/s of memory bandwidth has its own drawbacks when there's an additional CPU memory bandwidth consumer; hence the GPU's share is lower than the PC RX 5700/5700 XT's 448 GB/s.

The PS4's 176 GB/s of memory bandwidth has 20 GB/s taken by the CPU links, leaving 156 GB/s for the Pitcairn-class GPU, which is close to the HD 7850's 153.6 GB/s.

Killzone Shadow Fall's CPU vs GPU data storage example:

GPU data dominates.

I don't believe the bandwidth actually gets taken away unless it's used; it's not like a reserved amount that the GPU can never use, it's just the rate at which the CPU can access the bus.
 