
Matt weighs in on PS5 I/O, PS5 vs XSX and what it means for PC.

longdi

Banned
No, but they believe the SSD is more game-changing for their strategy overall. They also gave a lot of space to their clocking/power-consumption management solution, and not surprisingly you are minimising / brushing it aside.
They also did say that for them higher clocks are preferable at the same performance profile, and why, but if you wanted BS about how that makes up the difference or gives secret performance multipliers to breeze past the XSX, you are barking up the wrong tree and you know that.
Still, thanks for confirming your concern was just “concern” as in F.U.D. concern... I guess...

I will put targeted BOM as part of the consideration. MS seems to like $499 while Sony seems to want to maintain their $399 success.
In an ideal case, Sony's SSD input (R&D) will produce a much higher output (results); in another case, the SSD output is merely proportionate to its input. So it's a wonder how much effort Sony spent on this 'game changer' at a $399 BOM.
IIRC I read on the Epic China forums that Sony chose 825 GB for cost reasons.

I'm just throwing out tons of questions beyond fawning over the SSD. Hopefully this opens up more threads, more people pick it up, and it makes its way to the press or to Sony. We need more clarity. More openness puts pressure on how both consoles set their MSRP, instead of riding on the buzz and excitement. If unfavorable circumstances inflated your targeted BOM, gamers deserve to know that too.
I also think PS5 visuals will start off strongly because of their generation policy. Hopefully the press and gamers take that into consideration. 🤷‍♀️
 

Md Ray

Member
Sorry, I was on mobile.

Imo, the difference, as I said before, is akin to: PS5 = 2070S + PCIe 4.0 SSD; Series X = 2080 Ti + QLC PCIe 3.0 SSD.
Granted, this comparison implies a somewhat bigger gap, but the 2070S vs 2080S gap is much smaller than what I feel it will be.
Maybe a 2080 Ti without Nvidia boost on.

Mainly, I'm still suspicious about the 2.23 GHz claims.
If I'm understanding this correctly, are you saying that the difference between PS5 and XSX GPU's computational power (TFLOPS) is the same as the difference between RTX 2070S and 2080 Ti's compute power?

And why exactly are you suspicious about the 2.23 GHz claim?
 
So XSX fans, when do you think MS will be able to match the graphical fidelity shown below?



Any ideas when or how? Do you think it will ever be matched?

Can XSX stream assets to the GPU for processing in the milliseconds and fractions of a frame necessary (which, as Tim Sweeney explained, is not the same as raw SSD speed)?

Any takers?


I'd like to think it could be matched if not surpassed, considering it's a tech demo before the generation's even officially started. And that applies to both systems; if we can't expect games even a year or two from launch offering the level of fluidity in the UE5 demo at the resolution and framerate it ran in, then these systems will probably be more disappointing than anyone could've guessed.

As to XSX having anything to this demo, well, I'll just preface this by saying the UE5 demo itself is still just a tech demo. There's almost no true game-like AI, complex physics systems, scripting logic or enemies to deal with. Collision systems I would also assume are pretty light in the demo from what we've seen. If this were a slice from a game actually in development it would've been even more impressive than it already is. It's also worth mentioning that MS should've definitely considered showing some type of gameplay demo slice, even just a two or three minute teaser, of a game actually in development (preferably Halo Infinite) showing off the graphical strengths of their platform and some sort of high-speed asset streaming segment, on the SlipSpace Engine just to rub a bit of salt in the wound. Especially seeing now that the PS5 reveal event's been delayed (to what date we don't even know), it would've been a great marketing opportunity for MS to drop such a demo on, say, Friday, or maybe they're planning sometime next week? They need something aside from the hardware event later this month IMHO.

All that said... I'd probably say something in July, since that's the gameplay event. How? Well, firstly I'll just speculate that the fast asset streaming in the UE5 demo was hardly taxing the SSD's full speed or capabilities. I've seen estimates of around 1 GB/s at most for that segment, so it would be feasible on any system with a drive at least that fast. And yes, I'm sure the demo was also using other aspects of the PS5 SSD I/O, but it's not like the XSX doesn't have equivalents for most of those features, regardless of how "powerful" those equivalents might be.

There are always combinations of techniques leveraging the additional GPU power on XSX that could be used to simulate many of the approaches used for the streaming segments in the UE5 demo. For example, if the ML modifications to the GPU are robust enough they wouldn't need to stream the native high-quality textures; simply sample lower-quality ones (reducing the bandwidth footprint), and then scale them to a higher resolution (DLSS-type techniques) during processing. There are already many good cases of this, including textures upscaled this way looking even better than the native high-resolution ones. There are also clever ways one could simply have a standard base (or set of) mesh and texture models for environmental details, the statues, etc., and simply utilize transformation algorithms on the GPU at render time to morph and alter those assets. There's a range to which they can be morphed depending on the original data; I guess some people would call this procedural generation, but it could also be comparable to granular synthesis, where your base determines the number of "seed" permutations (my understanding of granular synthesis comes from studying some sound design/engineering).
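To put a rough number on how much I/O that kind of GPU-side upscaling could save, here's a quick back-of-the-envelope sketch; the 1 byte/texel figure is just an assumption for illustration (a BC7-style block-compressed format), not a spec from either console:

```python
# Back-of-the-envelope: stream a lower-resolution texture and upscale it on the
# GPU versus streaming the native-resolution texture from disk.
# The 1 byte/texel figure assumes a block-compressed (BC7-style) format and is
# purely illustrative; real assets and compressors will differ.

BYTES_PER_TEXEL = 1.0

def texture_bytes(resolution: int) -> float:
    """Approximate on-disk size of a square texture at the given resolution."""
    return resolution * resolution * BYTES_PER_TEXEL

native = texture_bytes(4096)       # stream the full 4K texture
lower_mip = texture_bytes(2048)    # stream a 2K texture, upscale on the GPU

print(f"native 4K source: {native / 2**20:.1f} MiB")
print(f"2K source:        {lower_mip / 2**20:.1f} MiB")
print(f"I/O saved by upscaling: {1 - lower_mip / native:.0%}")  # 75% less data streamed
```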




I don't think it was ever actually debunked. The Epic China guys played back their demo on-stream and watched the video file on the stream itself, but beforehand had run the demo on a laptop fitting certain specifications.

Confusion in terms of branch communications is nothing new; it's been happening since the 1980s. Companies like SEGA became a bit infamous for it during the mid/late 1990s. I'd assume the Epic China guys did in fact run the demo on a laptop fitting a particular specification, but during the stream itself they watched recorded footage of that build running on said laptop. Members of teams among the different branches are never really 100% aware of what people in other branches are doing, so that could explain why Tim was a bit surprised when word got around about that demo.

Regardless it's not like he'd be in a position to affirm the validity of what the Epic China guys did, for legal reasons.

Dontero The fact we don't have random read speed/latency figures on the NAND for PS5 or XSX isn't surprising, but those are very important things. The way I see it, part of Sony's solution was to dedicate a flash channel to each module, so if data is known to be on a given module then you simply select that module and off you go. That doesn't say anything about latency, but as a setup it does help a lot in increasing the random access capabilities of the NAND.

XSX, imho, has a very different setup, and in some ways that setup doesn't facilitate increased ability in random NAND access quite the way Sony's does (this has to do with modules to channels), but for all we know they could be using NAND modules with better latency figures on random access. They're unknowns for now and will probably remain so for a very long time.

Bryank75 You messed up your figures a lot. XSX uses 2.5 GB for the OS, not 3 GB. PS5 very likely reserves 2 GB for its OS.

For the rest of your calculations, it's just mumbo jumbo. If you're trying to factor out the RAM bandwidth the OS will occupy, first off you need to understand that the OS will spread its physical reserve across multiple chips on both systems (2 on PS5, 3 on XSX) so as to ensure the other processors can have data in the RAM in such a way full bandwidth utilization can still be achieved if required. There's almost no point in trying to pettily factor out the OS physical RAM reserve from overall system bandwidth because the systems aren't putting entire GDDR6 chips to OS data reserves, and it's not like having the OS occupy some RAM modules means those modules are now only usable by the OS altogether.

So by your own calculations you'd have to remove 2 GDDR6 modules from PS5, or 112 GB/s, and by your own logic that'd leave the system with 336 GB/s of bandwidth, which is ridiculous. That's not how you account for system bandwidth (doing that, I also noticed you assumed that because XSX's OS reserves some space in the lower 1 GB of three of the 2 GB modules, the upper 1 GB of those 2 GB modules is somehow inaccessible altogether, which is... just an extremely flawed idea). So there's no method by which you can calculate PS5 having more system RAM than XSX, since they both have 16 GB; you can only claim it has an additional 512 MB of usable memory for games, and again that's on the assumption the PS5 OS reserves 2 GB of physical memory.
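For anyone who wants to sanity-check those bandwidth figures, here's a small sketch using the publicly stated PS5 memory config (8x 2 GB GDDR6 chips, 32-bit each, 14 Gbps per pin); the per-chip and "remove two chips" numbers are where the 112 GB/s and 336 GB/s above come from:

```python
# Peak GDDR6 bandwidth per chip and for the whole bus (PS5's stated config:
# 8 x 2 GB chips, 32-bit interface each, 14 Gbps per pin -> 448 GB/s total).

PINS_PER_CHIP = 32
DATA_RATE_GBPS = 14           # gigabits per second per pin
CHIPS = 8

per_chip_gbs = PINS_PER_CHIP * DATA_RATE_GBPS / 8   # 56 GB/s per chip
total_gbs = per_chip_gbs * CHIPS                     # 448 GB/s across the bus

print(f"per chip: {per_chip_gbs:.0f} GB/s, total: {total_gbs:.0f} GB/s")
# Subtracting two whole chips (the flawed accounting discussed above) would give
# 448 - 2 * 56 = 336 GB/s, but the OS reserve is spread across chips precisely
# so game data can still be striped over all of them at full speed.
print(f"'remove two chips' figure: {total_gbs - 2 * per_chip_gbs:.0f} GB/s")
```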
 

longdi

Banned
If I'm understanding this correctly, are you saying that the difference between PS5 and XSX GPU's computational power (TFLOPS) is the same as the difference between RTX 2070S and 2080 Ti's compute power?

And why exactly are you suspicious about the 2.23 GHz claim?

Yeah, I think the perf difference will hypothetically be around there, with this imaginary 2080 Ti on stock boost, and also this 2070S having the same hot-ass TDP as the 2080 Ti.

As an aside, while Nvidia cards advertise stock boost clocks of around 1.5-1.8 GHz, in reality they easily run ~200 MHz higher than that.
 
They presented a believable ballpark number of 8 GB/s with Kraken as opposed to 5.5 GB/s raw, which is about 30% compression. 22 GB/s is indeed for stuff like text files; it's just not gonna happen. Kraken manages to compress better than zlib, does it not?

If you find those numbers dishonest, what would you say to going from 2.4 GB/s raw to 4.8 GB/s thanks to a whopping 50% compression ratio? That would sound like an even more outrageous lie to me.

I agree with you on the 22 GB/s figure being for specific types of data that compress extremely well, or compress well lossily where it doesn't actually affect the files too much. Text, some audio and some video files tend to compress at those rates.

I don't agree with the assertion that the 2.4 GB/s / 4.8 GB/s figure is outrageous or a lie, though; lossless compression is generally around a 2:1 ratio, and 2x 2.4 is 4.8. So nothing really outrageous in that claim. It does raise the question of why PS5's compressed rate is "only" 8-9 GB/s and not 11 GB/s; it could be down to any combination of things with the NAND, the flash memory controller, algorithms, etc. No way to tell.
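For anyone who wants to check those figures, effective throughput is just raw throughput times compression ratio; a quick sketch using the ballpark ratios quoted in this thread (not official numbers):

```python
# Effective SSD throughput = raw throughput x compression ratio.
# Ratios below are the ballpark figures quoted in the thread, not official specs.

def effective_gbs(raw_gbs: float, ratio: float) -> float:
    return raw_gbs * ratio

print(effective_gbs(5.5, 8 / 5.5))  # ~1.45:1 (roughly 30% size reduction) -> 8.0 GB/s
print(effective_gbs(2.4, 2.0))      # 2:1 lossless                        -> 4.8 GB/s
print(effective_gbs(5.5, 4.0))      # ~4:1, only very compressible data   -> 22.0 GB/s
```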
 

longdi

Banned
Something is off indeed :rolleyes:... but let's not distract from what you admitted is essentially concern trolling, as you are spreading doubt and uncertainty (without much evidence) around something you are not interested in in the least, besides to put it down.

Tbf, unless Sony opens up more, everything is speculation.
We can even say Sony designed PS5 for the best in class $399 BOM, while MS aimed for the best $499 BOM.
There is a movement to present this SSD I/O as the sauce that can help a $399 BOM perform close to a $499 BOM. 🤷‍♀️
 

Bryank75

Banned

I said this guy should join Neogaf...
I'd like to think it could be matched if not surpassed, considering it's a tech demo before the generation's even officially started. And that applies to both systems; if we can't expect games even a year or two from launch offering the level of fluidity in the UE5 demo at the resolution and framerate it ran in, then these systems will probably be more disappointing than anyone could've guessed.

As to XSX having anything to this demo, well I'll just preface this by saying the UE5 demo itself is still just a tech demo. There's almost no true game-like AI, complex physics systems, scripting logic or enemies to deal with. Collision systems I would also assume are pretty light in the demo from what we've seen. If this were a slice from a game actually in development it would've been even more impressive than it already is. Also worth mentioning that MS should've definitely considered showing some type of gameplay demo slice, even just a two or three minute teaser, of a game actually in development (preferably Halo Infinite) showing off the graphical strengths of their platform and some sort of high-speed asset streaming segment, and on the SlipSpace Engine just to dab a bit of salt in the wound. Especially seeing now that the PS5 reveal event's been delayed (to what date we don't even know), it would've been a great marketing opportunity for MS to drop such a demo on, say, Friday, or maybe they're planning sometime next week? They need something aside from the hardware event later this month IMHO.

All that said...I'd probably say something in July since that's the gameplay event. How? Well, firstly I'll just speculate that the fast asset streaming in the UE5 demo, I hardly think it was taxing the SSD's full speed or capabilities. I've seen estimates of around 1 GB/s at most for that segment, so it would be capable on any system with a drive of at least that fast. And yes I'm sure the demo was also using other aspects of the PS5 SSD I/O, but it's not like the XSX doesn't have equivalents for most of those features, regardless of how "powerful" those equivalents might be.

There's always combinations of techniques leveraging the additional GPU power on XSX that could be used to simulate many of the approaches for the streaming segments in the UE5 demo. For example, if the ML modifications to the GPU are robust enough they wouldn't need to stream the native high-quality textures; simply sample lower-quality ones (reduce the bandwidth footprint), and then scale them to a higher resolution (DLSS-type techniques) during processing. There's already many good cases for this including textures upscaled this way looking even better than the native high-resolution ones. There's also clever ways that one could simply have a standard base (or set of base) mesh and texture models for environmental details, the statues, etc., and simply utilize transformation algorithms on the GPU at render time to morph and alter those assets. There's a range to which they can be morphed depending on the original data; I guess some people would call this procedural generation, but it could also be comparable to granular synthesis where your base determines the number of "seed" permutations (my understanding on granular synthesis comes from studying some sound design/engineering).




I don't think it was ever actually debunked. The Epic China guys played back their demo on-stream and watched the video file on the stream itself, but beforehand had ran the demo on a laptop fitting certain specifications.

Confusion in terms of branch communications is nothing new, it's been happening since the 1980s. companies like SEGA became a bit infamous for it during the mid/late 1990s. I'd assume the Epic China guys did in fact run the demo on a laptop fitting a particular specification but during the stream itself they watched recorded footage of that build run on said laptop. Members of teams among the different branches are never really 100% aware of what people on other branches are doing, so that could explain why Tim was a bit surprised by it when the word got around about that demo.

Regardless it's not like he'd be in a position to affirm the validity of what the Epic China guys did, for legal reasons.

Dontero Dontero The fact we don't have random read speed/latency figures on the NAND for PS5 or XSX isn't surprising, but those are very important things. The way I see it is part of Sony's solution was to dedicate a flash channel for each module, so if data is known to be on a given module then you simply select that module and off you go. That doesn't say anything to any latency, but it as a setup does help a lot in increasing random access capabilities of the NAND.

XSX, imho, has a very different setup, and in some ways that setup doesn't facilitate for increased ability in random NAND access quite the way Sony's does (this has to do with modules to channels), but for all we know they could be using NAND modules with better latency figures on random access. They're unknowns for now and will probably remain so for a very long time.

Bryank75 Bryank75 You messed up your figures a lot. XSX uses 2.5 GB for the OS, not 3 GB. PS5 very likely reserves 2 GB for its OS.

For the rest of your calculations, it's just mumbo jumbo. If you're trying to factor out the RAM bandwidth the OS will occupy, first off you need to understand that the OS will spread its physical reserve across multiple chips on both systems (2 on PS5, 3 on XSX) so as to ensure the other processors can have data in the RAM in such a way full bandwidth utilization can still be achieved if required. There's almost no point in trying to pettily factor out the OS physical RAM reserve from overall system bandwidth because the systems aren't putting entire GDDR6 chips to OS data reserves, and it's not like having the OS occupy some RAM modules means those modules are now only usable by the OS altogether.

So by your own calculations you'd have to remove 2 GDDR6 modules from PS5, or 112 GB/s, and by your own logic that'd leave the system with 336 GB/s bandwidth, which is ridiculous. That's not how you account for system bandwidth (doing that I also noticed you assumed because XSX's OS is reserving some space on the lower-bound 1 GB of 3x 2 GB modules that somehow means the upper-bound 1 GB on those 3x 2 GB modules are inaccessible whatsoever which is....just extremely flawed idea). So there's no method you can calculate PS5 having more system RAM than XSX since they both have 16 GB; you can only claim it having an additional 512 MB of usable memory for games, and again that's on the assumption the PS5 OS reserves 2 GB of physical memory.
It was still more accurate than anyone else's... which is surprising, since I thought the Xbox crowd would know all the figures by heart, they've been looking at them so much.

I didn't read the rest cause I don't care tbh.
 

Md Ray

Member
Yeah i think the perf difference will be hypothetically around there. With the imaginery 2080Ti on stock boost. And also a 2070S having the same hotass Tdp as this stock 2080Ti.

As an aside, Nvidia cards, while boost are advertised at around 1.5ghz~1.8ghz, in reality they runs 200mhz higher mostly.
What a load of bollocks. 2080 Ti FE has 14 TFLOPS peak and 2070S has 9 TFLOPS. That's 55% higher computational power in favor of 2080 Ti. PS5 to XSX is just 18%. 18 vs 55. How did you come to this stupid conclusion that PS5 and XSX's GPUs are akin to 2080 Ti and 2070S?

The perf difference between the 2070S and 2080 Ti on avg. is about ~33%, i.e. at identical graphics settings and resolution, the 2080 Ti gets about 30-33% higher fps on avg. compared to the 2070S despite that 55% TFLOPS advantage. Not to mention the b/w difference between these two cards (616 GB/s vs 448 GB/s) is even larger than the difference between PS5 and XSX's memory b/w.

I wouldn't be surprised if a game comes out running at identical graphics/resolution on PS5 and XSX, and the avg. frame-rate advantage ends up being less than 18% in favor of XSX.

E.g. Hitman, which ran at the same settings/res on PS4/XB1: despite PS4 having 40% more TFLOPS, the avg. frame-rate advantage ended up being about 30% in PS4's favor in that particular game.
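For reference, peak FP32 TFLOPS is just 2 x shader cores x clock, and the percentage gaps quoted above fall straight out of that; a quick sketch using the publicly listed core counts and clocks for these parts:

```python
# Peak FP32 TFLOPS = 2 ops (FMA) x shader cores x clock (GHz) / 1000.
# Core counts and boost clocks are the publicly listed figures for each part.

def tflops(cores: int, clock_ghz: float) -> float:
    return 2 * cores * clock_ghz / 1000

gpus = {
    "RTX 2070 Super": tflops(2560, 1.77),   # ~9.1 TF
    "RTX 2080 Ti FE": tflops(4352, 1.635),  # ~14.2 TF
    "PS5":            tflops(2304, 2.23),   # ~10.3 TF
    "XSX":            tflops(3328, 1.825),  # ~12.1 TF
}

for name, tf in gpus.items():
    print(f"{name}: {tf:.1f} TFLOPS")

print(f"2080 Ti over 2070S: {gpus['RTX 2080 Ti FE'] / gpus['RTX 2070 Super'] - 1:.0%}")  # ~57%
print(f"XSX over PS5:       {gpus['XSX'] / gpus['PS5'] - 1:.0%}")                        # ~18%
```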

 

quest

Not Banned from OT
Tbf, unless Sony opens up more, everything is speculation.
We can even say Sony designed PS5 for the best in class $399 BOM, while MS aimed for the best $499 BOM.
There is a movement to present this SSD I/O as the sauce that can help a $399 BOM perform close to a $499 BOM. 🤷‍♀️
The PS5 is not the el-cheapo the PS4 was; it is a $499.99 console. If Sony wants to take a loss for the sake of our wallets, hell yes, $399.99, and force Microsoft down to $399.99 too. With how strong the PlayStation brand is, especially in the EU, I see no reason for Sony to sell at a loss; it will sell out regardless at 399, 499, even 549. They are in a position to take 75%-80% market share, thanks to Epic, and knock off their last competitor. So I see $499.99 with a price cut in 18 months. Microsoft, no idea; they are desperate, this is the last stand for the division. The CEO gave them the cash for new studios for one last shot, with the promise of doing better and the subscriptions that the CEO loves. It will be fun to see the pricing play out. I'll be happy to see a return to selling at a loss, for my wallet's sake.
 

ToadMan

Member
Shifts power to the GPU from the CPU if the current load allows, and vice versa. The knock-on effect is one clocks down and the other clocks up, but this is a good thing. Maybe we could call the PS5 version SmartShift Turbo or v2.0?

This is incorrect. Smartshift is only allocating power - Wattage - between components, it isn't (directly) influencing clocks.

The variable clock is a Sony feature and is what happens when SmartShift reaches the limits of its capability to efficiently allocate power based on processor activity. Ultimately every processor has a thermal limit - if its TDP (thermal design power) is exceeded for long enough, it either shuts down (i.e. downclocks to 0!) or becomes damaged.

The reason Cerny has described the clocks as continuous boost but has come up short of saying "max clock all the time", is because that choice is left to devs. If they want max cpu and gpu 100% of the time - they can have it and they achieve it by writing optimized code.

Xsex has exactly the same set of considerations for developers - instead of variable clocks, they get to optimize for variable power. Clocks, like flops, are only two elements of a processor's capability; the others are power supply and heat dissipation. PS5 has provided a solution for the latter built into hardware; MS have decided to deal with it in quality control instead.

MS's approach - if not done adequately well - can lead to system instability, thermal shutdown or even RROD scenarios. Sony has provided a system whereby the console protects itself in a more graceful way.
 

Panajev2001a

GAF's Pleasant Genius
Tbf, unless Sony opens up more, everything is speculation.
We can even say Sony designed PS5 for the best in class $399 BOM, while MS aimed for the best $499 BOM.
There is a movement to present this SSD I/O as the sauce that can help a $399 BOM perform close to a $499 BOM. 🤷‍♀️

This movement is something I see Xbox fans mention, but where is it in practice?
 

geordiemp

Member
I'd like to think it could be matched if not surpassed, considering it's a tech demo before the generation's even officially started. And that applies to both systems; if we can't expect games even a year or two from launch offering the level of fluidity in the UE5 demo at the resolution and framerate it ran in, then these systems will probably be more disappointing than anyone could've guessed.

As to XSX having anything to this demo, well I'll just preface this by saying the UE5 demo itself is still just a tech demo. There's almost no true game-like AI, complex physics systems, scripting logic or enemies to deal with. Collision systems I would also assume are pretty light in the demo from what we've seen. If this were a slice from a game actually in development it would've been even more impressive than it already is. Also worth mentioning that MS should've definitely considered showing some type of gameplay demo slice, even just a two or three minute teaser, of a game actually in development (preferably Halo Infinite) showing off the graphical strengths of their platform and some sort of high-speed asset streaming segment, and on the SlipSpace Engine just to dab a bit of salt in the wound. Especially seeing now that the PS5 reveal event's been delayed (to what date we don't even know), it would've been a great marketing opportunity for MS to drop such a demo on, say, Friday, or maybe they're planning sometime next week? They need something aside from the hardware event later this month IMHO.

All that said...I'd probably say something in July since that's the gameplay event. How? Well, firstly I'll just speculate that the fast asset streaming in the UE5 demo, I hardly think it was taxing the SSD's full speed or capabilities. I've seen estimates of around 1 GB/s at most for that segment, so it would be capable on any system with a drive of at least that fast. And yes I'm sure the demo was also using other aspects of the PS5 SSD I/O, but it's not like the XSX doesn't have equivalents for most of those features, regardless of how "powerful" those equivalents might be.

There's always combinations of techniques leveraging the additional GPU power on XSX that could be used to simulate many of the approaches for the streaming segments in the UE5 demo. For example, if the ML modifications to the GPU are robust enough they wouldn't need to stream the native high-quality textures; simply sample lower-quality ones (reduce the bandwidth footprint), and then scale them to a higher resolution (DLSS-type techniques) during processing. There's already many good cases for this including textures upscaled this way looking even better than the native high-resolution ones. There's also clever ways that one could simply have a standard base (or set of base) mesh and texture models for environmental details, the statues, etc., and simply utilize transformation algorithms on the GPU at render time to morph and alter those assets. There's a range to which they can be morphed depending on the original data; I guess some people would call this procedural generation, but it could also be comparable to granular synthesis where your base determines the number of "seed" permutations (my understanding on granular synthesis comes from studying some sound design/engineering).




I don't think it was ever actually debunked. The Epic China guys played back their demo on-stream and watched the video file on the stream itself, but beforehand had ran the demo on a laptop fitting certain specifications.

Confusion in terms of branch communications is nothing new, it's been happening since the 1980s. companies like SEGA became a bit infamous for it during the mid/late 1990s. I'd assume the Epic China guys did in fact run the demo on a laptop fitting a particular specification but during the stream itself they watched recorded footage of that build run on said laptop. Members of teams among the different branches are never really 100% aware of what people on other branches are doing, so that could explain why Tim was a bit surprised by it when the word got around about that demo.

Regardless it's not like he'd be in a position to affirm the validity of what the Epic China guys did, for legal reasons.

Dontero Dontero The fact we don't have random read speed/latency figures on the NAND for PS5 or XSX isn't surprising, but those are very important things. The way I see it is part of Sony's solution was to dedicate a flash channel for each module, so if data is known to be on a given module then you simply select that module and off you go. That doesn't say anything to any latency, but it as a setup does help a lot in increasing random access capabilities of the NAND.

XSX, imho, has a very different setup, and in some ways that setup doesn't facilitate for increased ability in random NAND access quite the way Sony's does (this has to do with modules to channels), but for all we know they could be using NAND modules with better latency figures on random access. They're unknowns for now and will probably remain so for a very long time.

Bryank75 Bryank75 You messed up your figures a lot. XSX uses 2.5 GB for the OS, not 3 GB. PS5 very likely reserves 2 GB for its OS.

For the rest of your calculations, it's just mumbo jumbo. If you're trying to factor out the RAM bandwidth the OS will occupy, first off you need to understand that the OS will spread its physical reserve across multiple chips on both systems (2 on PS5, 3 on XSX) so as to ensure the other processors can have data in the RAM in such a way full bandwidth utilization can still be achieved if required. There's almost no point in trying to pettily factor out the OS physical RAM reserve from overall system bandwidth because the systems aren't putting entire GDDR6 chips to OS data reserves, and it's not like having the OS occupy some RAM modules means those modules are now only usable by the OS altogether.

So by your own calculations you'd have to remove 2 GDDR6 modules from PS5, or 112 GB/s, and by your own logic that'd leave the system with 336 GB/s bandwidth, which is ridiculous. That's not how you account for system bandwidth (doing that I also noticed you assumed because XSX's OS is reserving some space on the lower-bound 1 GB of 3x 2 GB modules that somehow means the upper-bound 1 GB on those 3x 2 GB modules are inaccessible whatsoever which is....just extremely flawed idea). So there's no method you can calculate PS5 having more system RAM than XSX since they both have 16 GB; you can only claim it having an additional 512 MB of usable memory for games, and again that's on the assumption the PS5 OS reserves 2 GB of physical memory.

I agree on most points; we should get other examples of streaming this gen. Media Molecule playing around with gigavoxels, and Decima and Spider-Man, all have streaming at their core, as will future games on UE5, which is not finished yet of course (if they use that subset of engine features, naturally).

For XSX we have not seen anything yet, but I agree MS need to show something that is more than SSD-speed rhetoric; it's about getting the needed small tiled assets (whatever name they're given lol) into the GPU workflow in the sub-frame times required, to show the ability to perform strongly for this form of rendering.

We also need to know more about the techniques from both Sony and MS, as it is a paradigm shift in rendering philosophy over the current gen for sure.

Finally, you are correct: data is in slices across all the GDDR6 chips and read in the same way, and the timing needs to work in this manner to get the speed, as GDDR6 does 4 operations per clock. Some mind-bending funny maths going on. Also, it's the nanoseconds per frame the bus spends in each "pool" that matters, and whether any time is taken up arranging data between pools.

We also don't know exactly how Sony and MS are doing their OS, and whether both consoles are recording gameplay for upload and where it's being stored. So much to find out.
 

Exodia

Banned
What's easy to market is what people see with their eyes. All the SFS, Geometry Engine, cache scrubber dubbers, velocity big fucking gun super ray illumination, wingardium leviosa is just marketing and PR.

The proof is in the running code, and players can see for themselves what works and what looks like whatever.

So this is the current Sony PS5 marketing: best visuals to date bar none... until they show stuff in the next week or two.



When do you think we will see anything impressive on XSX that gets players talking about next-gen visuals?


Whatever they show in their conference won't look as detailed as that. It will look good in its own way. Just like that cavern video I posted.

Nanite is unique to Epic. No one has that and no one knows how to replicate it. There were hundreds of rendering engineers on Twitter from all sorts of AAA studios guessing and trying to figure it out and failing.

Epic Games partnering with Sony on a demo isn't Epic giving them their source code or telling them the intricate way Nanite works. It's Sony telling them how their SSD API works and giving them tips and tricks concerning any bottlenecks they are seeing.

Nanite is a project of ONE singular person from 15 years ago. And only because of Fortnite money is it possible. Epic hired hundreds of engineers. They have unlimited funding. No studio has that type of funding. This is a complete rendering redesign and I don't see this being replicated for another 4 years.

People erroneously praise Sony first-party studio engines and claim they are the best. But they are missing so many graphical features.
For example, none of their games other than Driveclub had dynamic GI. Think about that.
Not Spider-Man, not Uncharted 4, not Last of Us 2, not HZD, not God of War, not Detroit, not The Order...

 

II_JumPeR_I

Member
Why is it so difficult for some to admit Sony did a fantastic job with the SSD subsystem?
Why is it so difficult for some to admit that MS built a stronger system? This SSD jerkoff fest everywhere gets tiresome.
An SSD won't fix missing GPU performance, etc.
 

Panajev2001a

GAF's Pleasant Genius
Why is it difficult to admit for some that MS built a stronger System? This SSD jerkoff fest everywhere gets tiresome.
An SSD wont fix missing GPU performance etc

Nobody is dismissing the power delta, but people are not going to embellish it either just to keep the same MONSTER BEAST BEAST power dominance rhetoric going...
 

Panajev2001a

GAF's Pleasant Genius
Whatever they show in their conference won't look as detailed as that. it will look good in its own way. Just like that cavern video i posted.

Nanite is unique to Epic. No one has that and no one knows how to replicate it. There were hundreds of Rending engineers on twitter from all diversity of AAA studio guessing and trying to figure it out failing.

Epic games partnering with Sony on a demo isn't them giving them their source code or telling them the intricate way Nanite was. Its Sony telling them how their SSD api works and giving them tips and tricks concerning any bottleneck they are seeing.

Nanite is a project of ONE singular person from 15 years ago. And only because of fortnite money is it possible. Epic hired hundreds of engineers. They have unlimited funding. No studio has that type of refunding. This is a complete rendering redesign and I don't see this being replicated for another 4 years.

People erroneously praise Sony first party studio engines and claim they are the best. But they are missing so many graphical features.
For example none of their games other than Drive Club had dynamic GI. Think about that.
Not spiderman, not uncharted 4, not last of us 2, not HDZ not God of war, not Detroit, not The Order...



I guess other studios could try to license UE5, which ships with the source code too, mmh... ;) (and that is not to mention the first-party developers and the DICE, Ubisoft, etc. of the world... we shall see how easy it is to maximise the use of the extra I/O... or not).
 

geordiemp

Member
Whatever they show in their conference won't look as detailed as that. it will look good in its own way. Just like that cavern video i posted.

Nanite is unique to Epic. No one has that and no one knows how to replicate it. There were hundreds of Rending engineers on twitter from all diversity of AAA studio guessing and trying to figure it out failing.

Epic games partnering with Sony on a demo isn't them giving them their source code or telling them the intricate way Nanite was. Its Sony telling them how their SSD api works and giving them tips and tricks concerning any bottleneck they are seeing.

Nanite is a project of ONE singular person from 15 years ago. And only because of fortnite money is it possible. Epic hired hundreds of engineers. They have unlimited funding. No studio has that type of refunding. This is a complete rendering redesign and I don't see this being replicated for another 4 years.

People erroneously praise Sony first party studio engines but they are missing so many graphical features.
For example none of their games other than Drive Club had dynamic GI. Think about that.



You hope :messenger_beaming:. I don't think small-triangle work or different rendering techniques are really unique to just Epic; Media Molecule did some crazy stuff with gigavoxels.


Frostbite is also mentioned in the above.

I think we will be surprised; not everybody stands still.

Sony planned to show this on the GDC floor and let people play it. Let's see what Sony have cooked up, as they have been in partnership on that concept for 5 years according to Sweeney.
 

longdi

Banned
What a load of bollocks. 2080 Ti FE has 14 TFLOPS peak and 2070S has 9 TFLOPS. That's 55% higher computational power in favor of 2080 Ti. PS5 to XSX is just 18%. 18 vs 55. How did you come to this stupid conclusion that PS5 and XSX's GPUs are akin to 2080 Ti and 2070S?

The perf difference between 2070S vs 2080 Ti on avg. is about ~33%. i.e. at identical graphics settings and resolution, the 2080 Ti has about 30-33% higher fps on avg. compared to 2070S despite that 55% TFLOPS advantage. Not to mention the b/w difference between these two cards (616 GB/s vs 448 GB/s) is even larger than the difference between PS5 and XSX's memory b/w.

I wouldn't be surprised if a game comes out running at identical graphics/resolution on PS5 and XSX, and the avg. frame-rate advantage ends up being less than 18% in favor of XSX.

An e.g. like Hitman which ran at the same settings/res on PS4/XB1. Despite PS4 having 40% more TFLOPS, the avg. frame-rate advantage ended up being 30% higher on PS4, in that particular game.

An overclocked 2070S at 2 GHz would put it at around 10.2 TFLOPS... :goog_hugging_face:

As I said, I took the 2080 Ti over the 2080S because it has a more 'representative gap' for what I expect. If Nvidia had blessed the 2080S with more cores, then I'd take that.
Again, this is my guess: a 25~30% performance advantage on Series X overall.
That's how both pieces of hardware stack up imo.
 

THE:MILKMAN

Member
This is incorrect. Smartshift is only allocating power - Wattage - between components, it isn't (directly) influencing clocks.

This is what I meant. Shifts [electrical] power from say a CPU with low utilisation running at full clocks to a GPU with 100% utilisation running at lowered clocks. The knock-on (indirect) effect being the GPU clocks rise when it gets the extra electrical power.

That's correct, right?
 

longdi

Banned
The PS5 is not the el-cheapo of the ps4 it is a 499.99 console. If Sony wants to take a loss for our wallets hell yes 399.99 and force Microsoft to 399.99 to. With how strong the playstation brand is especially in the EU i see no reason for Sony to sell at loss it is selleout regardless 399 or 499 even 549. They are in position to take 75%-80% market share thanks to epic and knock off its last competitor. So i see 499.99 price cut in 18 months. Microsoft no idea they are desperate this is the last stand for the division. The CEO gave them the cash for new studios for 1 last shot with the promise of doing better and subscriptions that the CEO loves. It will be fun to see the pricing play out. I'll be happy to see the return to the selling at a loss for my wallet

As I said, BOM vs MSRP.
Sony can design PS5 to a certain amount, but rumors are swirling that said amount got inflated due to circumstances other than better performance.
Gamers deserve to know how much perf/$ their consoles are worth, instead of getting caught up in the buzz around one component.
 

longdi

Banned
This is incorrect. Smartshift is only allocating power - Wattage - between components, it isn't (directly) influencing clocks.

The variable clock is a Sony feature and is what happens when SmartShift reaches the limits of its capability to efficiently allocate power based on processor activity. Ultimately every processor has a temperature limit - if that TDP (thermal design power) is exceeded it either shuts down (i.e. down clocks to 0!) or becomes damaged.

The reason Cerny has described the clocks as continuous boost but has come up short of saying "max clock all the time", is because that choice is left to devs. If they want max cpu and gpu 100% of the time - they can have it and they achieve it by writing optimized code.

Xsex has exactly the same set of considerations for developers - instead of variable clocks, they get to optimize for variable power. Clocks, like flops are only 2 elements of a processor's capability, the other is power supply and heat dissipation. PS5 has provided a solution for the latter built into hardware, MS have decided to deal with it in quality control instead.

MS approach - if not done adequately well - can lead to system instability, thermal shutdown or even RROD scenarios. Sony has provided a system whereby the system protects itself in a more graceful way.

:messenger_weary:
Where is Panajev to call out FUD here?
 

geordiemp

Member
Why is it difficult to admit for some that MS built a stronger System? This SSD jerkoff fest everywhere gets tiresome.
An SSD wont fix missing GPU performance etc

The SSD jerkoff is because we have been introduced to new, better-quality rendering techniques which blow everything else out of the water.

And at 1440p that UE5 demo looks better than any 4K image from any PC game (unless it's all preloaded in a small room). That's why everyone is talking about it.

Assets >>> resolution.

MS have built a stronger system for traditional rendering, and a weaker system for high-asset-detail streaming.

It's hard to bear that cross, but deal with it, as it's all you're going to hear about for the next 7 years.
 

Panajev2001a

GAF's Pleasant Genius
A lot of SSD topics on the front page here.
Even Linus Media Group, which is big in YT.ca, got their hands dirty with SSDs. :messenger_grinning_sweat:

Are you just throwing random stuff out? How are the SSD topics negating the shader performance advantage? Just by existing?!
 

longdi

Banned
This is what I meant. Shifts [electrical] power from say a CPU with low utilisation running at full clocks to a GPU with 100% utilisation running at lowered clocks. The knock-on (indirect) effect being the GPU clocks rise when it gets the extra electrical power.

That's correct, right?

If you ask me why PS5 uses SmartShift, my guess is that the components inside it draw more power than what they deemed a suitable power supply.
Hence the system is forced to load-balance. :pie_smirking:
We shall see once the TDP/power-draw benchies are out.
 

Exodia

Banned
I dont think small triangle work or different rendering techniques is really unique to just Epic, Media molecule did some crazy stuff with gigavoxels.


Frostbyte are also mentioned in the above.

That's not what Nanite does.

And EA rendering engineers were some of those on twitter guessing and were way off.

I think we will be surprised, not everybody stands still.. Lets see what Sony have cooked up as they have been in partnershup on that concept for 5 years according to Sweeny.

You are again ignoring facts and logic.

Partnering on a new-gen console is literally just telling Sony/MS what you would like. That happened just as OTHER developers were speaking to Sony/MS about what kind of arch and tech they wanted. It was nothing special. Development of next gen started 2016-ish, I bet.



In the case of Epic Games, they also partnered on a demo. But the partnership for the demo has only been over the past several months. They DIDN'T share Nanite source code and IP with Sony. Partnering on a demo isn't giving away your software or how your system works. All they did was have Sony help them learn what each SSD/I/O API and general new API does and how to utilize it properly.



Sony does not have the Nanite source code or its intricate details. How is it that you don't understand?
 

geordiemp

Member
The fact the GPU can hit 2.23 GHz in a console thermal/power envelope is pretty staggering, to be honest.

Since Cerny said it holds its clock speeds in most cases, I tend to believe him, but I am interested in what scenarios cause clock drops.

Here is a good scenario for a clock drop: a buggy loop which consumes lots of power and does nothing, which can get missed. Do you keep your clock at 2 GHz and pack in lots of cooling for such an edge case, which is a programming error, or do you design it out?


 

Deleted member 775630

Unconfirmed Member
The SSD jerkoff is because we have been introduced to new better quality rendering techniques which blow everything else outof the water.

And at 1440p that UE5 demo looks better than any 4k image on any PC game (Unless its all preloaded in a small room). Thats why everyone is talking about it.
So a demo of (rumored) 300 GB for 5 min of gameplay looks better than 4K gaming on an actual PC game? Who would've thought... Can't wait for the game reveals from first-party studios this month and next.
 

THE:MILKMAN

Member
If you ask me why PS5 uses smartshift, my guess is the components inside it are using power above what they deemed a suitable power supply.
Hence system is forced to load balance. :pie_smirking:
We shall see once the tdp powerwatt benchies are out.

In that post I'm referring strictly to AMD's SmartShift (used in the Dell laptop). It might also apply to PS5's implementation but I'll wait for more details.

If you are referring to watt meter numbers then I will guess XSX will come in at ~240W and possibly closer to 220W. I will give a guess for PS5 when I know what the PSU rating is.
 

geordiemp

Member
Thats not what Nanite does.

And EA rendering engineers were some of those on twitter guessing and were way off.



You are again ignoring facts and logic.

Partnering on new gen console is literally just telling Sony/MS what you would like. That happened the same as OTHER developers were speaking to sony about what kind of arch they wanted. It was nothing special. Development of Next Gen started 2016ish i bet.



In the case of Epic games, they also partnered on a demo. But the partnership for the demo has only been the past several months. They DIDN'T share nanite source code and IP with sony. Partnering on a demo isn't giving away your software or how your system works. All they did was have Sony help them to learn what each SSD/I/O API does and utilize it properly.



Sony does not have nanite source code or its intricate details. How is it that you don't understand?


You're replying to something I never said FFS.

I never said Sony will do Nanite, did I? I suggested Sony engines will leverage the fast IO within the frame time in their own ways, and as they have been involved with the UE5 demo for 5 years they will be well aware of the CONCEPT.

Sony engines will do high-detail assets their own way, but be clear, Sony will throw higher-detail assets into games; you know they will.

Also, in the interview with Sweeney, Sweeney said the partnership with Sony was over the last 3, 4 or 5 years (his words). It's in one of the after-demo interviews, go find it yourself.

I did not suggest Sony were going to steal or copy Nanite tech. Jeesh.
 

geordiemp

Member
So a demo of (rumored) 300GB for 5min of gameplay looks better than 4K gaming on an actual PC game? Who would've thought... Can't wait for game reveals of first party studios this and next month.

FUD. Did you make that number up all by yourself, or is it a Colbert brainchild? Timdog is too stupid.
 

Deleted member 775630

Unconfirmed Member
FUD, did you make that number up all by youdelf or is it Colbert brainchild. Timdog is too stupid.
You did see the word between brackets right? Nonetheless, you are acting as if the XSX is already an outdated console, because apparently Microsoft just builds a strong PC and doesn't discuss these things with devs, only Sony asks people what they want.
 

geordiemp

Member
You did see the word between brackets right? Nonetheless, you are acting as if the XSX is already an outdated console, because apparently Microsoft just builds a strong PC and doesn't discuss these things with devs, only Sony asks people what they want.

Rumoured from where? Link? Give us a laugh.

And the rest of your sentence is a strawman; are you arguing with yourself? I have no idea. High-asset streaming is INTERESTING, I did not say it's best for everything.

High-asset rendering isn't the only game in town. I like fast action like Ninja Gaiden and God of War, and for such games 60 FPS comes first. Even games like TLOU I prefer in performance mode, so...

So until fast asset streaming gets to 60 FPS, which it will, as Epic did say that was the target, there's no need for that crazy detail; scale it back a bit and we are good. It still needs fast IO, maybe even lower-latency and quicker, as you only have 16 ms of frame time.

If fast streaming is not good enough for 60 FPS gaming in games that need it, then load the levels up in a second or two.
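To put the 16 ms point in numbers, here's a rough sketch of how much data can even arrive from the SSD inside a single frame at 30 and 60 fps, using the stated raw throughputs (no decompression assumed):

```python
# How much data an SSD can deliver inside one frame at a given framerate.
# Raw throughputs are the stated figures for each console; compression ignored.

def mb_per_frame(throughput_gbs: float, fps: int) -> float:
    frame_time_s = 1.0 / fps
    return throughput_gbs * 1000 * frame_time_s  # GB/s -> MB per frame

for name, gbs in {"PS5 (raw)": 5.5, "XSX (raw)": 2.4}.items():
    for fps in (30, 60):
        print(f"{name} @ {fps} fps: {mb_per_frame(gbs, fps):.0f} MB per frame")
# e.g. PS5 raw: ~183 MB per frame at 30 fps, ~92 MB at 60 fps - the per-frame
# budget halves as soon as you chase 60 FPS.
```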
 

longdi

Banned
Here is a good scenario of a clock drop, a bug loop which consumes lots of power and dows nothing which can get missed. Do you keep your clock at 2 Ghz and pack lots of cooling for such a edge case which is a programming error, or do you deisgn it out ?



This is nothing new. Even in game menus you get cases of the GPU ramping to 100% load; developers fix this by using a framerate limiter in their menus.
As for power-virus apps, AMD/Nvidia have controls in their drivers to throttle down the GPU when they detect said apps.

What concerns us about the 2.23 GHz figure is real-world load; we hope to see '10.2 TF' of performance in a real game. Like Mark said, a rising tide raises all boats; we want all the boats inside PS5 floating at 2.23 GHz for a game.
(not that we will ever find out, unlike on a PC where you can run hwinfo monitoring :messenger_grinning_sweat: )
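The menu fix mentioned above is just a frame limiter: render, then sleep away the rest of the frame budget so the loop can't spin flat out. A minimal sketch (render_menu is a stand-in, not a real engine call):

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms per frame

def render_menu():
    """Stand-in for the real menu draw; assumed to be trivially cheap."""
    pass

for _ in range(600):  # roughly 10 seconds of "menu" at 60 fps
    frame_start = time.perf_counter()
    render_menu()
    # Sleep off whatever is left of the frame budget so the loop can't spin
    # flat out and turn a static menu into a 100%-load power virus.
    elapsed = time.perf_counter() - frame_start
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)
```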
 

Exodia

Banned
Your replying to something I never said FFS.

I never said Sony will do nanite did I , I suggested Sony engines will leverage the fast IO within the frame time in their own ways and as they have been involved with EU5 demo for 5 years they will be well aware of the CONCEPT.

Also the interview with Sweeny, Sweeny said partnership with Sony was over the last 3 ,4 or 5 years (his words). Its in one of the after demo interviews, go find it yourself.

I did not suggest Sony were gouing to steal Nanite tech. Jeesh.

No they haven't. This is blatantly false.
The demo was a 7-month project that started in late 2019 for GDC.
Stop spreading misinformation.

Also the interview with Sweeny, Sweeny said partnership with Sony was over the last 3 ,4 or 5 years (his words). Its in one of the after demo interviews, go find it yourself.

Yeah, those are discussions. The same discussions that every other developer was having with MS/Sony.
The minute Epic got confirmation that MS and Sony were going to put SSDs in their next-gen systems, they resurrected the Nanite project.
It had nothing to do with Sony in particular, because MS/Sony never gave them specs. They only got the specs and API when devkits shipped in 2019.

Nanite had already been at the tail end of its development. It had nothing to do with Sony. This is why the IO improvements came with 4.25 after the devkits were sent out.

Nanite has already been shown to work on a laptop at 1440p at 40 fps using a (most likely SATA) SSD that's probably plugged in through USB. That's without an onboard NVMe SSD, with no hardware decompression, no custom-designed heatsink to sustain throughput, two sets of memory bottlenecks, and no DirectStorage to eliminate CPU and GPU bottlenecks.

Where do you think development of Nanite happened? On devkits? Devkits weren't here 3, 4, or 5 years ago. They only came last year. It happened on PCs!

Once they finally got the devkits last year, they worked on improving their IO after being motivated by Sony's fast SSD specs as they wanted to take full advantage of it.

Epic ALWAYS showcases a demo on the new PlayStation every next gen. When that time came in Q4 2019, that was when the collaboration on 'Lumen in the Land of Nanite' started.

Your replying to something I never said FFS.

No, that is what you are saying/implying: that Sony knew about the intricacies of the rendering pipeline Epic was building and helped create/facilitate it, thereby giving their studios a heads-up for follow-up research on that pipeline.
 

ToadMan

Member
This is what I meant. Shifts [electrical] power from say a CPU with low utilisation running at full clocks to a GPU with 100% utilisation running at lowered clocks. The knock-on (indirect) effect being the GPU clocks rise when it gets the extra electrical power.

That's correct, right?

No. The clocks are unrelated to SmartShift - it's purely allocating power.

You can see that from AMD's own info. AMD's tech that involves variable clocks is things like Cool 'n' Quiet.

If you're a formula person, the "simple" formula for dynamic power consumption is this:

Power = C × V² × A × f

C is the switched capacitance - roughly, the charge it takes to flip a transistor. It's effectively fixed, so ignore it for this.
V = voltage (this can vary, by the way).
A = activity on the processor/GPU. Think of it as the percentage of transistors flipping in one unit of time.
f = clock rate

From that the answer is obvious: traditional consoles (and old PCs) have fixed clocks, the activity varies, and so does the power consumption until it hits its limit. In PS5 the power is held to a cap and the clocks are allowed to vary if activity would drive that power to be exceeded. It's not that different - it's just looking at the problem from a different perspective. There's always a limit - Sony have chosen to cap power and adjust activity and, if necessary, clocks.
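A toy version of that formula in code; the voltage-frequency curve here is made up purely to show the shape of the effect (a small clock drop yields a bigger power drop), not Sony's actual numbers:

```python
# Toy dynamic-power model: P = C * V^2 * A * f.
# The voltage-vs-frequency curve below is invented purely for illustration.

def power(c: float, voltage: float, activity: float, freq_ghz: float) -> float:
    return c * voltage**2 * activity * freq_ghz

def voltage_for(freq_ghz: float) -> float:
    # Assume voltage must rise roughly linearly with frequency (illustrative only).
    return 0.8 + 0.2 * freq_ghz

C, ACTIVITY = 1.0, 1.0
baseline = power(C, voltage_for(2.23), ACTIVITY, 2.23)
for f in (2.23, 2.18):  # full clock vs roughly a 2% downclock
    p = power(C, voltage_for(f), ACTIVITY, f)
    print(f"{f:.2f} GHz -> {p / baseline:.1%} of peak power")
# A small clock drop buys a larger-than-proportional power drop because the
# voltage can come down with it - the lever described for PS5's variable clocks.
```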


The more practical and simplistic answer follows - assume fixed clocks.

The CPU has a "TDP" - the power it can consume for the cooling it has. The GPU has a separate TDP. Let's say the CPU is 70W TDP and the GPU is 200W - that's in the ballpark for PCs. In a traditional design they have separate power rails.

The clock rate is fixed but power consumption isn't fixed. Power consumption depends on the work (literally the number of transistors flipping in a clock tick) the unit is doing - more work (transistor flips) per clock, more power consumption, more heat that has to go away.

In a traditional system, if code consumes 200W of GPU power and then keeps pushing, it is exceeding the power and cooling capability. If it continuously does that it can cause instability (due to insufficient power supply) or failures (due to insufficient cooling). Ordinarily a processor is probably only rated to continuously use < 100% of its transistors in a single tick - there is supposed to be spare capacity to avoid exceeding the TDP, but it can be exceeded for short periods.

Side note - this is why flops are a "theoretical" limit. GPUs don't flip 100% of their transistors every clock tick but flops are based on this 100% theoretical work. In fact that would be a sure way to either get the system to shutdown or cause damage.

MS will certify code and part of that is to ensure the CPU and GPU do not exceed their TDPs for long enough to cause instability or damage. Of course that relies on good QC and testing all scenarios - i.e. identifying performance bugs.

So let's say our GPU is using 200W and the CPU is using 50W. There's 20W of power available from the CPU that could be given to the GPU if the architecture supported sharing that power (including cooling across the components - an SoC can do this; discrete components, not so easy). This has nothing to do with clocks and everything to do with how much work can be done per clock.


Enter SmartShift - SmartShift lets the CPU and GPU have a combined TDP (and a combined cooling solution, of course) - in our example, 270W to share between CPU and GPU.

So the GPU reaches that 200W limit but wants to do more work per clock - instead of stopping, it keeps going using the extra 20W the CPU isn't using; the alternative would be to start dropping frames (given that we're using fixed clocks).

Now here's the kicker - what happens if the CPU wants to use its power at this moment? That would exceed the 270W combined TDP.

In PS5, Sony down clock "2%" to induce a 10% drop in power consumption, making 27W available so the CPU can continue its work.

AMD say SmartShift happens in 2ms; Cerny seems to suggest the variable clocking is fast and entirely deterministic, so devs choose whether to optimize the power use or allow the downclock.
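And a very simplified sketch of the shared-budget behaviour described above, using this post's own 270W example numbers rather than anything measured:

```python
# Simplified SmartShift-style shared power budget, using the example numbers
# from this post (70W CPU + 200W GPU -> 270W combined); not measured values.

COMBINED_BUDGET_W = 270.0

def allocate(cpu_demand_w, gpu_demand_w):
    """Grant both demands if they fit the shared budget; otherwise trim the GPU
    (standing in for a small downclock) until the total fits."""
    if cpu_demand_w + gpu_demand_w <= COMBINED_BUDGET_W:
        return cpu_demand_w, gpu_demand_w
    return cpu_demand_w, COMBINED_BUDGET_W - cpu_demand_w

print(allocate(50.0, 220.0))  # CPU has headroom: GPU borrows its spare watts -> (50.0, 220.0)
print(allocate(70.0, 220.0))  # both want peak: GPU is trimmed back to fit    -> (70.0, 200.0)
```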

There's one more thing to say. Smartshift is enabling the higher frequency. Why wouldn't Sony just clock the GPU as high as they could and cool it all and provide a power supply?

Well, power supplies and cooling add up (and bulk up), and allowing, say, 220W on the GPU and 70W on the CPU plus ancillary power - that's getting on for +300W. The reality is that both consoles will work to a total TDP of about 200W - maybe 220W. Xsex isn't clocked higher because it can't do anything with the extra clock - it's power limited, so it would be ticking faster but doing less work each clock to stay within the discrete GPU power limit.

The reason Sony can clock so fast is that their system also allows them to shift enough power to usefully fill those clocks with real work.
 
Not possible. 4K is a 2.25x increase in pixels over 1440p.

When the GPU difference is a minimum of 16% and a maximum of 26%, it simply isn't enough to facilitate that large a resolution jump. The best-case scenario is 1800p, while the worst case is 1670p.

With that said, you don't need games to be at native 4K unless you have a really big TV screen. For example, on my 4K PC monitor I don't notice a difference between native 4K and native 1440p. I'm pretty sure many people are in my situation as well.
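As a sanity check on that arithmetic, here's a quick sketch scaling the 1440p pixel count by the quoted GPU advantage, assuming performance scales linearly with pixel count (a simplification, which is why the 1670p-1800p estimates above come out a bit more generous):

```python
# Scale a 1440p pixel count by an assumed GPU advantage and convert back to a
# vertical resolution, assuming performance scales linearly with pixel count
# (a simplification). The 16%/26% range is the poster's estimate, not a spec.

BASE_W, BASE_H = 2560, 1440
base_pixels = BASE_W * BASE_H          # ~3.69 million pixels
uhd_pixels = 3840 * 2160               # ~8.29 million pixels

print(f"4K vs 1440p pixel ratio: {uhd_pixels / base_pixels:.2f}x")  # 2.25x

for advantage in (1.16, 1.26):
    scaled_pixels = base_pixels * advantage
    height = (scaled_pixels * BASE_H / BASE_W) ** 0.5  # keep the 16:9 aspect ratio
    print(f"+{advantage - 1:.0%} GPU -> roughly {height:.0f}p")
# Under linear scaling this lands in the mid-1500p to low-1600p range - either
# way, far short of the 2.25x jump needed for native 4K.
```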
 

THE:MILKMAN

Member
ToadMan

You seem to be misunderstanding me here. I used the phrase 'knock-on effect' and the word 'indirect'. I understand SmartShift itself doesn't alter clocks.

We're agreeing here as far as I can tell! Though the formulas and maths are giving me a headache (I don't like maths!).
 

ToadMan

Member
You did see the word between brackets right? Nonetheless, you are acting as if the XSX is already an outdated console, because apparently Microsoft just builds a strong PC and doesn't discuss these things with devs, only Sony asks people what they want.

I'm curious about the source of 300 GB too - perhaps you should post the source of this rumor. Or remove it from your posts if it's as silly as it sounds...
 

ToadMan

Member
ToadMan ToadMan

You seem to be misunderstanding me here. I used the phrase 'knock-on effect' and the word 'indirect'. I understand SmartShift itself doesn't alter clocks.

We're agreeing here as far as I can tell! Though the formulas and maths is giving me a headache (don't like maths!).

Ah Ok. Sorry about that - misunderstood you then.

As far as I'm aware there's no implementation of SmartShift which directly tweaks clocks either in PS5 or laptops.

But the implementation on PS5 does seem to be orthogonal in purpose compared to laptops. For laptops, they're trying to optimize performance without installing bigger batteries by pumping more watts to the GPU at the expense of the CPU. PS5 is using it to achieve higher clock speeds and keep power consumption and cooling under control, to avoid having to put a cooling tower in people's living rooms.
 

geordiemp

Member
No they haven't. This is blatant false.
The demo was a 7 months project that started late 2019 for GDC.
Stop spreading misinformation.



Yeah those are discussions. The same discussions that every other developer were having with MS/Sony.
The minute Epic got confirmation that MS and Sony were going to put SSDs on their next gen system.
They resurrected the nanite project. It had nothing to do with Sony in particular because MS/Sony never gave them specs. They only got the specs and api when devkits shipped in 2019.

Nanite had already been at the tail end of its development. It had nothing to do with Sony. This is why the IO improvements came with 4.25 after the devkits were sent out.

Nanite has already been proven to work on a laptop at 1440p at 40 fps using a (most-likely sata) SSD that's probably plugged through USB. This is without on board nvme ssd, no hardware decompression, no custom designed heat-sink to sustain throughput, two sets of memory bottle-neck and no directstorage to eliminate cpu and gpu bottlenecks.

Where do you think development of Nanite happened? On devkits? Devkits weren't here 3, 4, or 5 years ago. They only came last year. It happened on PCs!

Once they finally got the devkits last year, they worked on improving their IO after being motivated by Sony's fast SSD specs as they wanted to take full advantage of it.

Since Epic ALWAYS showcases a demo every next gen on the new PlayStation. When that time came Q4 2019. That was when the collaboration on 'Lumen in the land of nanite' started.



No that is what you are saying/implying. That Sony knew about the intricate of the rendering pipeline of what Epic was doing and helped create/facilitate it. That way giving them a heads up to follow-up research on that pipeline for their studio.

Yes, it's a partnership and discussions on high-end graphics and fast storage that have gone on for 3, 4 or 5 years according to Tim Sweeney; go watch the interview, you're making things up again.



11:40 onwards, OWNED yet again.

That demo will not have taken that long to make; they did say how long it took somewhere else.

Again, I posted my link; where are your sources for your narrative (Timdog?)
 

yurinka

Member
We can even say Sony designed PS5 for the best in class $399 BOM, while MS aimed for the best $499 BOM.
There is a movement to present this SSD I/O as the sauce that can help a $399 BOM perform close to a $499 BOM. 🤷‍♀️
I don't know if weed is legal in Washington (I bet Redmond is there), Mr. Phil Spencer. But winners don't use drugs.

Even without considering the advantage of having a 2x faster SSD & I/O to help it get closer to its theoretical peak CPU & GPU performance, just looking at the CPU+GPU theoretical peak difference of only ~18%, we already know they will have pretty close real-world performance. It will be even closer, because PS5 is likely to get closer to its theoretical peak thanks to a more balanced, optimized and flexible architecture, and will also make better use of its memory because it streams twice as fast.

There's also no reason to think Series X will have a $100 higher BOM just because it has a slower-clocked GPU with more CUs, when other than that they have a similar CPU/GPU/RAM, and PS5 has a much faster SSD solution which, as of now, is the best solution on the market, so I doubt it comes cheap.
 

Neo_game

Member
An overclocked 2070S at 2ghz would put it like 10.2tflops... :goog_hugging_face:

As i said i took 2080Ti over 2080S because this hass more 'representative gap' of what i expect. If Nvidia had blessed 2080S with more cores, than i take that.
Again this is my guess, a 25~30% performance advantage on Series X overall.
Thats how both hardware stacked up imo

Xbox is around an RTX 2080, not an RTX 2080 Ti. But whatever makes you feel better. Their biggest advantage, I think, is ~25% more BW. But once games use more than 10 GB, I am not sure the wide-and-narrow split pool of memory is going to help them.
 