
Real Talk: Gaming Hardware Is About Maximizing Efficiency Not TFLOPs

That's kinda the point of his post.

Sure, the Series X is theoretically more powerful. That's undeniable. And you're right that the difference in processing power may not actually amount to much in practice: native 4K vs. ~82% of 4K with checkerboarding, or maybe 1 billion ray bounces vs. 800 million. You're also right that we'll probably need DF to tell us.

But the point the OP poses is that you need to consider, in addition to the theoretical maximum performance, how easily you can extract that performance. Because no computer system runs at 100% performance continuously. So for the sake of argument, say the Series X averages 75% utilization; that's 9.1 TFLOPs. If the PS5 averages 90%, that's 9.2 TFLOPs (numbers plucked out of my arse to make the point).
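To make the arithmetic explicit, here's a quick back-of-envelope sketch. The peak figures are the public specs; the utilization numbers are the post's own illustrative guesses, not measurements:

```python
# Peak TFLOPs are the public specs; utilization figures are illustrative guesses.
specs = {
    "Series X": {"peak_tflops": 12.15, "assumed_utilization": 0.75},
    "PS5":      {"peak_tflops": 10.28, "assumed_utilization": 0.90},
}

for name, s in specs.items():
    effective = s["peak_tflops"] * s["assumed_utilization"]
    print(f"{name}: {s['peak_tflops']} TF peak x {s['assumed_utilization']:.0%} = "
          f"{effective:.2f} TF effective")
```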

The next-gen systems are using very similar RDNA2 architecture. In fact, I would say MS are using a few more of the recent/efficient RDNA2 features in their system, since Sony have explicitly mentioned the Geometry Engine and Primitive Shaders, both of which are RDNA1 features, with the latter now supplanted by Mesh Shaders in RDNA2.

Both companies have taken a lot of measures to address bottlenecks and tune for efficiency. The funny thing is, MS have actually mentioned more of these than Sony. Sony's talk of efficiency in their design basically amounts to: the notion that a smaller/faster GPU is easier to saturate than a larger/slower GPU (which is true only up to a point; when the GPUs are the same architecture and relatively close in clock speed, the larger/slower chip ALWAYS outperforms the smaller/faster one in almost every area of GPU benchmarks), cache scrubbers (which are more a necessity of their design than a general feature, due to the fast GPU clock, though they would definitely be useful in other designs too), the cache coherency engine (again more a requirement of their design, since they have a dedicated processor in the I/O block handling the bulk of access between storage and RAM), SRAM (faster as a storage cache than DRAM but also smaller in capacity, potentially MUCH smaller), and the aforementioned dedicated I/O processor (which, when it's addressing RAM, forces the CPU, GPU, audio, etc. to wait, since the system is a hUMA architecture).

They've hinted at some customizations to the Geometry Engine and Primitive Shaders, but again, the latter suggests they are primarily using the Primitive Shader approach and not the newer, more efficient Mesh Shader approach. The Geometry Engine customizations could be big or small; however, seeing that the GE is a unit AMD already specified themselves in RDNA1, RDNA2 is sure to either have it as well or have a newer take that supplants the Geometry Engine altogether. And aside from Tempest Audio (which also has to manage how much bandwidth it takes up; it could take up to 20%, i.e. 89.6 GB/s of the 448 GB/s, if devs have it do so), that's the extent of what Sony have mentioned regarding efficiency designs and optimizations that aren't related to more banal stuff like the SSD NAND setup (12 chips in parallel; not dismissing it, but it's a smaller thing overall) and variable frequency (which is a compromise to squeeze out a bit more performance from a limited power budget and power-hungry processor components on that power budget).

On the other hand, MS have mentioned (or at least hinted towards):

>Dynamic Latency Input (some patents suggest this is also something being done via hardware customizations, not just software)
>Tier 2 Variable Rate Shading (Sony still haven't mentioned anything on this or whether they have an equivalent. If they do, there's no telling if it's Tier 1 or Tier 2)
>VRR (we can assume Sony has this too or, again, an equivalent if VRR in its specific implementation is an MS thing. But it could also be a situation like the Tier 2 VRS support)
>Mesh Shading (more evidence points to Sony not having Mesh Shading in the more modern sense MS are using, but rather efficiency tuning of Primitive Shading) (There's also some evidence to suggest MS stipulated Mesh Shading support to AMD for XSX/XSS and their PC GPUs because the Primitive Shader units of RDNA1 did not meet their specification requirements for DX12U, suggesting AMD's Mesh Shading implementation might be MS-derived, hence a possible reason Sony aren't using Mesh Shaders)
>BCPack
>Sampler Feedback Streaming
>DirectML
>DXR Ray-Tracing (mention of effectively 13 TFLOPs of performance concurrent with the general 12.147 TF suggests additional MS-derived hardware support for RT has been added to the GPU)
>Explicit mention of ARM co-processors in the APU design by an AMD hardware engineer who worked on the XSX APU (this was mentioned through their LinkedIn a LONG time ago); likely these are ARM cores on the GPU to help facilitate taskwork such as...
>ExecuteIndirect (allowing the GPU to operate on data without needing to stall for CPU instruction commands down the pipeline; XBO already supported this but had several design shortcomings preventing best utilization. Only a few select Nvidia GPUs also support ExecuteIndirect. AMD might introduce it to their PC GPU line going forward due to collaborations with MS)
>I/O hardware decompression block (FWIW, the full details on this haven't been divulged the way Sony has done for the majority of their SSD I/O block. But that isn't saying too much)
>ECC-modified GDDR6 memory (likely to facilitate system use in Azure servers; helps with cutting out memory errors, though general ECC memory has a ~2% reduction in performance compared to non-ECC)

The OP's point is that efficiency is just as important as max power and can equalise things. This exact scenario was demonstrated in real life with both PS3 and Dreamcast which were outperformed by theoretically weaker systems.

Wait, what theoretically weaker system in Dreamcast's generation outperformed it? If anything Dreamcast outperformed some of the other systems that gen in very select areas (such as resolution output support and textures vs. PS2).

Unless you meant other systems on the market in general, like PS1. But there isn't a single PS1 (or N64 or Saturn) game that approaches the technical cleanliness/IQ/resolution of even the less demanding Dreamcast games.
 
To be fair, we've only ever been fed "Megahertz" and "Teraflops"; honestly, that's all we have ever known. No one in here, before this coming generation, really knew, or cared, about bottlenecks, SSDs, I/O, and everything around the CPU/GPU. We don't know what "efficiency" means when it comes to game consoles or game creation. I honestly don't blame people for focusing on that stuff; that's all they knew.

Now we're being told something different. There's a lot of talking, which some people are not buying; we need to be shown what it means, and then we'll get it. I think we do need to be told what's going on in a game, how the new architectures actually help, and how these things are "possible" now.
 

Chiggs

Member
The same.

"Same" as in both sides using their opponent's narrative; or "same" as in Sony fans appreciating the efficiency of the SSD/IO and Tempest audio design choices, with Xbox fans praising the PS5's brute force advantage?

Former: definitely.

Latter: :messenger_grinning_sweat:
 

chilichote

Member
"Same" as in both sides using their opponent's narrative; or "same" as in Sony fans appreciating the efficiency of the SSD/IO and Tempest audio design choices, with Xbox fans praising the PS5's brute force advantage?

Former: definitely.

Latter: :messenger_grinning_sweat:
The same in the sense of "according to reality", like now.
 
The discussion on which next-gen console is more "powerful" has been heating up lately, with most believing the Xbox Series X to be more powerful solely based on higher spec counts in certain categories. Yet some folks counter that the custom hardware in the PS5 will alleviate some of its relative performance deficit and that the difference will be minimal.

Before I proceed, let's really think about what we mean by "powerful" in this context, because it could mean several different things. People tend to just toss that number around and say "system X has more TFLOPs so it's more powerful" or "system Y can run at higher framerates so it's more powerful". It is an important distinction in the context of the next-generation consoles since both systems have advantages in different areas.

For this discussion, I want to focus on actual game performance as the goal, meaning which system can actually process the most data in the shortest amount of time. This will yield richer worlds at higher framerates. Thus, I am getting away from the theoretical, the TFLOPs, and the high-level specs, and focusing on which system ultimately runs games with the same or higher detail at higher framerates.

Now of course let me state the obvious: at this point, nobody really knows which system is more powerful between the Xbox Series X and PS5. Why? Because nobody has seen both running in final hardware form up close with the same game side by side to do a comparison. So I'm not here to declare either one as more "powerful" but just to check some folks on claiming one as superior solely based on numbers on paper or video streams.

Now many people in the know, including developers, have said this, but let me reiterate: virtually no real-world game running on any system does so in a manner which utilizes 100% of that system's capability at all times. As beautiful as TLOU2 or God of War looks on PS4, it is completely incorrect to think that either of those games is extracting the maximum 1.8 TFLOPs of GPU power for any sustained period. Yes, even if the clock speeds are fixed, the actual utilization is based on the software running on it. For example, I can have a 5 GHz CPU and a 2 GHz top-of-the-line GPU running a simple 'for' loop or a simple binary search algorithm. Does that mean that the system is running at its theoretical 14 TFLOPs while running those few lines of code in that for loop, simply because its frequencies are locked? Theoretically, I could build a 15 PetaFLOP machine (15,000 TFLOPS) that is several orders of magnitude more powerful than anything on the market today. But if all it could play were Wii games by design, would that be a system which is utilizing its full potential? Would that be next gen?
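To put a number on that "simple for loop" example: a trivial piece of code extracts only a vanishingly small fraction of a machine's peak rating. The sketch below uses the post's hypothetical 14 TFLOPs figure; it's interpreted Python, so the gap is exaggerated enormously, but it illustrates that the peak is a ceiling, not a description of what arbitrary code achieves.

```python
import time

PEAK_FLOPS = 14e12  # the post's hypothetical "14 TFLOPs" machine

n = 10_000_000
start = time.perf_counter()
total = 0.0
for _ in range(n):
    total += 1.0              # one floating-point add per iteration
elapsed = time.perf_counter() - start

achieved = n / elapsed        # floating-point adds actually executed per second
print(f"Achieved ~{achieved:.2e} FLOP/s, about {achieved / PEAK_FLOPS:.6%} of the 14 TF peak")
```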

The point here is something that I've mentioned several times in this forum and I think a lot of people miss. When we really think about "next gen" gaming and transitioning to a new generation, it really isn't the hardware that achieves those milestones. It's the actual software/games that truly define a new generation. We don't remember the specs of the N64 and how much more horsepower it had over the PS1, but we remember how seeing Super Mario 64 for the first time took our breath away. Try as we might, few people could look at Mario 64 in motion and translate that to exactly what hardware specs made it possible and how any theoretical advantages over competing hardware show up in the images being rendered in front of them. The same could be said in moving to PS2: it was seeing GT3, Metal Gear Solid 2, and GTA III that defined what "next gen" really meant for that generation. It was not a GFLOP count or marketing buzzwords like "Emotion Engine". We could go on with seeing Halo for the first time, Gears of War, Uncharted 2, and Killzone Shadow Fall in later generations, but you get my point. But here is the question: if you didn't know the hardware specs of the system running those games, would that change how you looked at that system? In other words, if Kojima today mentioned that MGS2 on PS2 only used <1 GFLOP of performance, would you now look at the PS2 as being "weaker" than the Dreamcast (capable of a theoretical 1.4 GFLOPS) even though it clearly looked better than anything on the Dreamcast at that time?

With that in mind, we should realize that all of this talk about TFLOPs and theoretical numbers is really moot at the end of the day and misses the point. If we understand that maximum theoretical numbers are quite meaningless in determining actual real-game performance, and we agree that real-world performance, or demonstrative power, is actually more meaningful to evaluate, then we should be focusing on which system will actually be able to deliver its theoretical performance best to the screen. There are indeed a tremendous number of system components and variables that all have to play nice and align perfectly for a system to operate at its maximum capacity. In truth, it almost never happens with real workloads, but the systems that are perceived to be the most "powerful" are generally the ones that come closest to their theoretical maximums…meaning the ones that are most efficient. That truly is the name of the game…trying to remove bottlenecks and create a balanced system that can work together effectively is really the art of designing a game console (or any system).

I recently got into a back and forth with someone who shouted to me: Xbox Series X is clearly more powerful because "The numbers don't lie". I literally LMAO and shouted back "LOL. YES THEY DO!" There are countless examples of this and many on this forum have posted PC GPU comparisons demonstrating the lower TFLOP GPU outperforming (in real games) a higher TFLOP GPU etc. But there are 2 examples I want to remind people of in particular:

  1. The first and more recent example of "numbers telling lies" is the PS3 and Xbox 360 comparison. Now on paper, there is no denying that the PS3 had a much higher theoretical performance ceiling when you factored in the Cell, its SPUs, along with the RSX GPU. Yet most multiplatform games ran better on the Xbox 360. Why? Because the X360 was a much more balanced system that allowed developers to extract more performance with less effort than the PS3. In other words, its "power" was much more accessible and the system more efficient. Its unified memory, symmetrical CPU design, and larger GPU with more parallel pipelines meant there was more power on tap in the X360. This was evident in many third-party games throughout the generation but was especially evident in the first few years (anyone remember Madden 07 running at 60fps on X360 vs only 30fps on PS3?). But other big titles such as Red Dead Redemption, Skyrim, Assassin's Creed, and many others ran at lower resolution and/or lower framerates on the PS3. One way to categorize this at a high level of abstraction (not literal figures, just an example to illustrate the point) is that 70% of the Xbox 360 was better than 40% of the PS3.
  2. For those old enough to remember, the second major example of this was the original PS1 vs the Sega Saturn. People may not remember, but on paper the Sega Saturn was superior to the PS1 in almost every respect! More polygon-pushing power, higher pixel fillrate, more RAM and VRAM, better sprite processing, higher maximum resolution, and more! Yet and still, the vast majority of 3rd-party multiplatform games looked and ran better on the PS1. Games like Tomb Raider, Resident Evil, and Wipeout are just some examples where the Saturn version had poorer performance or was missing visual elements altogether. Why was this? Again, the Saturn was notoriously difficult to develop on, and particularly difficult to harness to its max potential. It featured dual CPUs that were very tricky to code for, and in fact most developers literally ignored the 2nd processor altogether, reducing the theoretical performance of the system by a tremendous amount. The PS1, on the other hand, was well balanced and easy to get the desired level of performance out of. For developers, you got much more out of it with less effort. Again, high-level abstraction description: 60% of the PS1 was a lot better than 30% of the Saturn.

So how does this relate to the current discussions around PS5 and Xbox Series X? Again, let me reiterate: I'm not saying that one is more powerful than the other. In fact, by my comments in this thread, I cannot say that until I've seen games running side by side on both. I believe, like many, that both will have their advantages in different areas. But we've been hearing and talking a lot recently about how so many developers seem to be singing the praises of the PS5, using big hyperbolic words like "masterpiece", "best ever", "dream machine", etc. The general excitement from the development community around the PS5 seems tangible, and there isn't that same vibe at this time around the Series X (despite the higher spec numbers). Why is that?

We've heard things mentioned about the PS5 such as: it's one of the easiest systems ever to develop on, it's very easy to get the power out of it, it removes many of the bottlenecks that have existed for many years, it frees developers from design constraints that they have been working around for decades, etc. These kinds of statements all point to a system that will be extremely efficient and allow developers to harness more power for less time and effort. The fact that we haven't heard the same sorts of statements around Series X leads me to believe that the PS5 is in fact the more efficient of the two.

This means that you can get much closer (still not likely 100%) to that 10.28 TFLOPs of GPU power more consistently in actual workloads. This means that you can utilize much more of those 8 Zen 2 cores for meaningful work that the player will see, as opposed to "under the hood" tasks around data management, audio processing, etc. This means that you can actually achieve near 100% of the theoretical SSD read/write speeds without the traditional bottlenecks that have existed with HDDs in games for years. This means that you can get much more efficient use out of the physical RAM allotment because fewer wasteful or unnecessary assets are taking up space.

The people that truly follow what I'm saying in this thread will realize that these things are much more exciting to both a developer and end user than some higher numbers on a spec sheet. These are the things that can make a meaningful difference in the quality of games we play in the next few years. These are things that will directly improve the quality of the software, which is really what delivers the next gen experience. This is absolutely cause to sing the praises of the PS5 as many developers have done.

Unfortunately for Cerny and the team at Sony, most of the real work and genius in the PS5 design is not easy to communicate to end users. It's also not something that end users can really appreciate since it's not something they can truly understand until they see the results. And that of course will not happen right away at launch in 2020. But ultimately, there is much to be excited about with the innovations Sony is bringing in the PS5 and the level of efficiency they could have possibly achieved.

So while I am not saying the PS5 is definitely more powerful (meaning more performance) than the Series X, I am saying that it is absolutely inaccurate to say that the Series X is more powerful solely based on TFLOPs ratings and other theoretical specs. In other words, despite what the numbers say, it is entirely possible that we may see many cases where games perform better (i.e. more complex scenes and/or higher framerates) on PS5. To use my analogy above: 85% of the PS5 may be better than 60% of the Series X (for example). It wouldn't be the first time that the numbers did not tell the whole truth :)

Great comments, guys. This was the point of this thread. Much of this was already being said by many around the forum, but I thought it would be good to have it centralized and visible for more to see.

Let me add another nugget here. In terms of the efficiency difference between the PS5 and Xbox Series X, we can already see it in practice, particularly with regards to I/O and SSD speeds, where we have some comparison points. Now mathematically, the full 100% utilization of the SSD speed for Series X would be 2.4 GB/s. So to load a full game into memory would take roughly 16 GB / 2.4 GB/s = 6.7s. So IF the Series X was 100% efficient with its SSD loads, then the worst-case scenario for loading would be roughly 6-7s.

Now on the PS5 side, 100% efficiency from its raw SSD speed would be 5.5GB/s. Thus the time it would take to load a full game into memory would be 16GB/5.5GB/s = 2.9s. So assuming 100% efficiency, it should take worst case ~3s to load a full game into memory on the PS5. (NOTE: I intentionally did not account for compression. Worst case would be the data isn't compressed at all)
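Here's that worst-case arithmetic spelled out as a quick sketch (assuming uncompressed data and that 100% of the rated raw read speed is sustained, which in practice it never is):

```python
# Worst case: fill all 16 GB of RAM from the SSD with uncompressed data
# at the full rated raw read speed.
RAM_GB = 16
raw_read_speeds = {"Series X": 2.4, "PS5": 5.5}  # GB/s, raw (uncompressed)

for name, gb_per_s in raw_read_speeds.items():
    print(f"{name}: {RAM_GB} GB / {gb_per_s} GB/s = {RAM_GB / gb_per_s:.1f} s")
```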

Now what have we already seen from both sides? Microsoft has already shown several demos highlighting the speed of their loading and Quick Resume feature. In particular, their official loading demo from their marketing team shows the time to load a game on Series X compared to the Xbox One X. In that demo, the Series X took about 9s to load the game (for those who would point out this isn't a great game to demo due to Backwards Compatibility: it's the game their PR and marketing team selected! Believe me, if they had a better game to demo they would have).

Now what have we seen from Sony specifically with regards to loading? Well, 2 things: one official and one unofficial. The official example, which is great because it's a live real-time game, is the Ratchet & Clank demo on PS5. Go back and watch that trailer and count how long it takes for Ratchet to travel to a new world through the dimensional rifts. He does this 5 times in the trailer, and every single time the new world was loaded in under 3s! Go back with a stopwatch and time it if you need to :)

The other unofficial demo of course is the Spider-Man PS5 loading demo. There we see the PS5 loading a scene 10x faster than the PS4 Pro and doing it in just under 1s. Now you can say this isn't the actual game etc., but it's still important because it actually demonstrates at least one example where the PS5 can load something in under 1s, which was the goal mentioned by Cerny several times. Just as important, we actually see a real example where a 10x improvement over the current generation was realized. What a lot of people miss about this demo is that the PS4 Pro was actually using an SSD! So that 8s load is fairly fast by current-gen standards, but the demo really highlights not just the raw SSD speed on PS5 but rather how PS5 removes the bottlenecks with the I/O and SSD bandwidth to actually get near 100% efficiency out of the raw theoretical specs.

If we go back to the Series X loading demo vs the Xbox One X, the Series X did it in ~9s while the One X did it in ~50s. That's a great improvement for sure, but that's only about a 5x improvement over the One X, which was using a standard HDD (as opposed to an SSD). Remember, the PS5 demo was 10x over an SSD!
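For what it's worth, the speed-up factors from those demos work out like this (the times are rough figures read off the videos, not official benchmarks):

```python
# (old_time_s, new_time_s) as described in the demos above; approximate values.
demos = {
    "Series X vs One X (HDD)":       (50.0, 9.0),
    "PS5 vs PS4 Pro (with an SSD)":  (8.0, 0.8),
}

for label, (old_s, new_s) in demos.items():
    print(f"{label}: ~{old_s / new_s:.1f}x faster")
```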

So guys, this discussion on efficiency isn't conjecture at this point. Everything that we have been privy to thus far including paper specs, direct communication from Sony and Xbox, and demos point to PS5 being the more efficient machine.

Now there has been talk from developers and Sony directly about how some of those efficiencies can benefit the CPU and GPU as well as the SSD. Things like how the Tempest engine, coherency engine, cache scrubbers, and other aspects of the I/O controller block will significantly alleviate the CPU and GPU, allowing them to put more of their power directly into the game processing. IF those efficiency efforts come close to matching what they've achieved with the SSD, then it is not far-fetched to think that in real game workloads, that efficiency could make up for a 2% CPU advantage, 18% GPU advantage, and 20% bandwidth advantage...at least in some cases :)

Got it, the PS5 is efficient; multi-platform games will run better despite the Xbox being more capable (2 TF more powerful, faster RAM, faster CPU), even if they're both using the same tech (RDNA2, Zen 2, GDDR6).
I just hope PS5 has something similar to VRS.
Short answer: Show us results, not talk us to death with PR and no games or running on other hardware.
 
That's not really true. The Series X RAM is split, with 10GB of GDDR6 @ 560GB/s and 6GB of GDDR6 @ 336GB/s.

The CPU side will depend completely on how much of the PS5's unified RAM is used for games. This is the point of the OP, but many of you have dismissed it completely, even though the 12TF number only represents the Series X GPU, not the console as a whole.

Oooo I didn't know you were on Gaf as well.

:messenger_winking_tongue:
 

FacelessSamurai

..but cry so much I wish I had some
360 compared to PS3 is a bad example, as the 360 was easier to squeeze performance out of than the PS3 because of its simpler architecture; same with PS1 vs Saturn. First-party games on Saturn or PS3 looked just as good if not better, especially on PS3.
 

iHaunter

Member
I am not reading that wall of text :)

To summarize (and this was already said in Cerny's talk): it's not about TFLOPs; filling CUs is more important.

Sony went with a higher frequency instead of more CUs because, with a good cooling system, you can get more performance out of fewer TFLOPs.
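As a rough illustration of the "narrower but faster" vs "wider but slower" trade-off, here's a quick sketch using the publicly stated CU counts and clocks. The peak number is just a product of width and frequency; it says nothing about how easily each configuration is kept fully fed:

```python
def gpu_peak_tflops(cus: int, clock_ghz: float) -> float:
    """RDNA-style peak: 64 shader ALUs per CU x 2 FLOPs per cycle (FMA) x clock."""
    return cus * 64 * 2 * clock_ghz / 1000.0

# Publicly stated configurations (the PS5 clock is its maximum boost clock).
print(f"52 CUs @ 1.825 GHz -> {gpu_peak_tflops(52, 1.825):.2f} TF")  # Series X
print(f"36 CUs @ 2.23 GHz  -> {gpu_peak_tflops(36, 2.23):.2f} TF")   # PS5
```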
 

truth411

Member
While that is true to some extent, if you have better hardware you've got more to work with...



You don't even know what exclusives the XSX really has yet.
Microsoft bought a shit ton of studios that haven't showcased anything yet.

Considering how TLOU2 turned out, I don't think that people should just assume that Sony exclusives (especially sequels) are automatically amazing or as good as the first.
You're missing the point: there aren't going to be XSX exclusives for a few years. They're going to be cross-gen games for Xbox One and Xbox Series X/S. Sony is going exclusively next-gen.
 

jakinov

Member
There is nothing on the PS5 that is going to allow it to get more out of the GPU than the XSX can. And in fact, with the PS5 using variable clocks, there is more chance of the PS5 not getting the most out of it.

The reality is, even if the XSX can use the full 12.2 TFLOPs and the PS5 can use the full 10.24 TFLOPs, it's not going to make it look much better, if at all.
That's the only point that needs to be made here.
Both companies are shoving in custom chips designed to help offload work or minimize bottlenecks of primary components. For example, Sony claims that their proprietary GPU cache-scrubbing technology will remove bottlenecks and improve efficiency. Sony also took debatable, pragmatic approaches in certain areas, such as how many CUs to include and the clock rate to use. They argue that it's harder to keep many CUs busy with substantial work, so it's better to have fewer CUs that are faster. They also claim that having a high clock speed drives other components of the GPU to work more efficiently. Things like that can result in the PS5 getting more out of its GPU than the XSX can. But Microsoft does a lot of custom stuff too. Then the SSD also reduces a bottleneck for future workloads that may require working with a lot of data in secondary storage, by reducing the chances of processing components idling because they need data to work with. In computer science this is a commonly taught bottleneck; it's usually discussed in relation to the CPU and main memory, but if your data is in secondary storage, the bottleneck is just exacerbated. Of course, developers can try to program their game to do other things while waiting for data. But that only works if there are other things to do at the time; that work may not be substantial enough to have a positive impact on the user's perception of quality, and there's both development time and resource-management overhead in doing so. On the PS5, developers arguably wouldn't have to worry as much about this and could get more efficiency from the GPU with less/no overhead and work.
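To make that "do other things while waiting for data" idea concrete, here is a minimal, platform-agnostic sketch of the pattern: kick off a read on a worker thread, keep the frame loop busy, and only block when the data is actually needed. All names and timings here are made up for illustration; no console or engine API is implied.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_asset_from_disk(path: str) -> bytes:
    time.sleep(0.05)               # stand-in for slow storage I/O
    return b"\x00" * 1024          # stand-in for the loaded asset

def simulate_frame(i: int) -> None:
    time.sleep(0.016)              # stand-in for one frame of other game work

with ThreadPoolExecutor(max_workers=1) as pool:
    pending = pool.submit(load_asset_from_disk, "level2/terrain.bin")
    frames_done = 0
    while not pending.done():      # keep doing useful work while the read is in flight
        simulate_frame(frames_done)
        frames_done += 1
    asset = pending.result()       # data is ready here, so no stall
    print(f"Asset ready after {frames_done} frames of other work")
```

The overhead mentioned above is exactly this kind of bookkeeping: finding useful work to overlap, and managing the lifetime of data that is still in flight.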
 
Oooo I didn't know you were on Gaf as well.

:messenger_winking_tongue:
I just joined last month. I got tired of arguing with Xbox fanboys who don't know anything on N4G and Gamefaqs. I see the discussions here are somewhat civil, but there are definitely people here who haven't got a clue as to what they're talking about, especially when it comes to teraflops. I guess you got tired of N4G too huh?
 

NullZ3r0

Banned
Realistically?

It's like arguing that:

  • A PC with an i9-9900k running at a faster clock speed, with faster RAM, a 2080TI, and a good SSD...

...is going be outperformed by:

  • A PC with a slower i9-9900k running in boost mode, with slower RAM, a 2070 Super also running in boost mode, but equipped with a great SSD.

I don't buy it. Not for a second.

Now, will Sony's first party developers have better looking games than anything on Series X? That I do buy, and that's because DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS.
It's more like a great SSD vs a custom-designed, awesome SSD faster than anything out there.

Like the rest of the system, the Xbox Series X SSD was designed around sustained performance. SSD performance nosedives when the chips heat up; MS accounted for this in their performance figures.
 

NullZ3r0

Banned
MS: "industry experts" say Xbox One has "a better line-up", gamers "don't buy stats"

Microsoft's corporate vice-president Yusuf Mehdi has told investors that the upcoming clash between Xbox One and PS4 is great for consumers, and reiterated his colleague Albert Penello's claim that the statistical capabilities of each machine are "meaningless" in themselves.
"One of the things that has never happened is we go head-to-head with a competitor on a console launch. "Competitively speaking, so far, I feel like we have a much better... more complete value prop," Mehdi went on. "We do things that aren't found on other platforms with the Kinect, a huge piece of differentiation. As I said before, the fact that we do entertainment and gaming.

"And then even if you just take gamers, hardcore gamers, gamers buy for the game. They don't buy for stats on a spec sheet. And if you look at the games we have, according to most industry experts now, a better lineup of games."

Sep 9, 2013


Phil Spencer felt “even better” after knowing the PS5 specs

Apr 3, 2020

Phil Spencer "Just being honest" Claims Advantage over PS5

“Just being honest, I felt good after seeing their show. I think the hardware advantages that we have built are going to show up as we’re talking more about our games and frame rates and other things,”

Saturday at 2:20 AM






The past 7 years have been a goldmine. Microsoft will never stop being manipulative, bending the rules to fit their narrative. Based on the games we've seen from Microsoft ... after overhyping the event, and claiming for 7 years (every January) to have the best E3 line-up in Xbox history ...

We'd better all have enough aspirin ready, because it's going to be unbearable from now on.

There's a whole new group of people in charge vs the people back in 2013. Same is true with Sony. Past performance is no indication of future success or failure.
 

Chiggs

Member
It's more like a great SSD vs a custom-designed, awesome SSD faster than anything out there.

Like the rest of the system, the Xbox Series X SSD was designed around sustained performance. SSD performance nosedives when the chips heat up; MS accounted for this in their performance figures.

I think that's a fair assessment.
 
I just joined last month. I got tired of arguing with Xbox fanboys who don't know anything on N4G and Gamefaqs. I see the discussions here are somewhat civil, but there are definitely people here who haven't got a clue as to what they're talking about, especially when it comes to teraflops. I guess you got tired of N4G too huh?

Just a word of advice: they don't like console warring at all here, so don't call anyone a pony, an Xbot, or a fanboy.

I still post on N4G from time to time but I like Gaf better. You can actually maintain a lengthy discussion instead of being limited by an invisible bubble count.
 

ZywyPL

Banned
There's a whole new group of people in charge vs the people back in 2013. Same is true with Sony. Past performance is no indication of future success or failure.

Actually, Spencer said during one podcast (at IGN, I believe) that that's not true, not at all: every single person who worked on XB1 and its launch is still there, doing the exact same job. The difference now is that they have been given enough time and proper direction, hence the vastly different outcome. The only person who left is Don Mattrick.
 

Great Hair

Banned
How can you be old enough to use a message board but not old enough to understand how marketing works?

Because it's one thing to deceive, to be manipulative, to lure fans into buying the hype and the additional stuff like in Halo 5, and another to use a trailer with a different character than in the game (TLOU2).

In case you did not notice, there is no Brad Sams and company on Sony's side. People have to be very gullible to think Microsoft has changed a lot. So much so that Gamepass on PC is worse compared to Gamepass on Xbox.

Not long ago, Phil and Aaron hyped the shit out of their May event.
 

geordiemp

Member
I love this bottleneck theory. Tell me about the bottlenecks the Series X has.

I was watching the XSX loading and game-reloading demos; some of them were tiny indie games like Cave, which runs on 1 GB of RAM. One GB...

Can't tell where all the bottlenecks are, but there were a lot in that demonstration, and it was only a month or so ago. Maybe it's been patched since then.
 

UnNamed

Banned
TF is just a metric. It gives a good indication of how powerful a system can be. It's not that bad, and it doesn't have to be accurate.
Better TF than bits.
 

bitbydeath

Member
Show me the quote where Epic said that the PS5 will display more detail than the XSX will.
You got the quote?

Below it's stated that the level of detail shown is only possible on PS5 due to the SSD being able to stream at its incredibly fast rate, which we know is pulling directly from ZBrush.

“This is not just a whole lot of polygons and memory. It’s also a lot of polygons being loaded every frame as you walk around through the environment and this sort of detail you don’t see in the world would absolutely not be possible at any scale without these breakthroughs that Sony’s made.”

Sweeney says that Sony’s storage architecture is far ahead of “the best SSD solution you can buy on PC today. And so it’s really exciting to be seeing the console market push forward the high-end PC market in this way.”

 

Xplainin

Banned
Both companies are shoving in custom chips designed to help offload work or minimize bottlenecks of primary components. For example, Sony claims that their proprietary GPU cache-scrubbing technology will remove bottlenecks and improve efficiency. Sony also took debatable, pragmatic approaches in certain areas, such as how many CUs to include and the clock rate to use. They argue that it's harder to keep many CUs busy with substantial work, so it's better to have fewer CUs that are faster. They also claim that having a high clock speed drives other components of the GPU to work more efficiently. Things like that can result in the PS5 getting more out of its GPU than the XSX can. But Microsoft does a lot of custom stuff too. Then the SSD also reduces a bottleneck for future workloads that may require working with a lot of data in secondary storage, by reducing the chances of processing components idling because they need data to work with. In computer science this is a commonly taught bottleneck; it's usually discussed in relation to the CPU and main memory, but if your data is in secondary storage, the bottleneck is just exacerbated. Of course, developers can try to program their game to do other things while waiting for data. But that only works if there are other things to do at the time; that work may not be substantial enough to have a positive impact on the user's perception of quality, and there's both development time and resource-management overhead in doing so. On the PS5, developers arguably wouldn't have to worry as much about this and could get more efficiency from the GPU with less/no overhead and work.

Whether or not these optimizations outweigh the more "brute-force" approach would depend on the individual games and the overall trend of how games are designed.
You are throwing around the word "efficiency" a bit loosely.
MS has released far more information about their optimizations to increase efficiencies.
VRS
Sampler Feedback
Velocity Architecture
Mesh Shading
Machine Learning
Etc
Etc

At this point all we have is Mark Cerny saying he likes to go faster and narrower. I guess you have to sell what you have.
They have said nothing that gives the PS5 any efficiency over XSX.
There is a saying. "There is no replacement for displacement."
It's apt here.
 

Xplainin

Banned
Below it's stated that the level of detail shown is only possible on PS5 due to the SSD being able to stream at its incredibly fast rate, which we know is pulling directly from ZBrush.



He said PC. He did not say XSX.
Show me a quote where he compares the PS5 directly to the XSX, and where he says that the PS5 will be able to display better details than the XSX.
That was your claim.
 

jakinov

Member
You are throwing around the word "efficiency" a bit loosely.
MS has released far more information about their optimizations to increase efficiencies.
VRS
Sampler Feedback
Velocity Architecture
Mesh Shading
Machine Learning
Etc
Etc

At this point all we have is Mark Cerny saying he likes to go faster and narrower. I guess you have to sell what you have.
They have said nothing that gives the PS5 any efficiency over XSX.
There is a saying. "There is no replacement for displacement."
It's apt here.
I don't see how I'm using it "a bit loosely"; I used it twice, the first time to describe a claim and the second to describe how a developer could easily use more of the available resources, maybe by doing less.

I'm aware that Microsoft has their own custom things for improving efficiency as well; I mentioned it in my post twice. Some of the things you listed are not proprietary hardware but software solutions/techniques. One of them is just a "marketing" name for a bunch of little things (including software solutions) and even encompasses other items on your list.

Like I mentioned in my other post, Sony has a few things that could help devs efficiently use the GPU they have: the SSD (and all the custom components within/around it), proprietary cache scrubbers, and opting for fewer CUs at a higher clock rate.

Again, whether or not this actually makes up for having less raw power would probably depend on the games, as games present different workloads. How much better one would be on average would again depend on future trends: what games developers choose to make and how they make them.
 

Jubenhimer

Member
In the end, I don't think either consumers or developers care that much about having the most powerful hardware. Just give people simple, well-designed hardware that's easy to use and you've got a winner. Time and time again, it's been proven that specs aren't what sell consoles; games and marketing do.
 
Below it’s stated the level of detail shown is only possible on PS5 due to the SSD being able to stream at its incredibly fast rate which we know is pulling directly from zBrush.




TBF, that isn't Sweeney saying anything regarding XSX, just comparing PS5's SSD I/O to PCs ATM. It's not like XSX can't stream directly from the SSD at high levels.

You have to watch the subtle marketing double-speak in pieces like that, which pops up in the last sentence of the first paragraph you quoted. That is a generalized comparison, it's nothing specific to any given hardware and certainly not to another next-gen console given that if anyone at Epic were to even suggest this, MS would not take too kindly to that and probably scale back support of their engine software on their platform and/or other punitive actions.

That's part and parcel of this type of stuff.

They announced a lot of tech and features to increase efficiency. Obviously, between the two, some tech will be better than others. It's up to you to decide which ones you prefer.

Some things aren't down to preference. Some are just outright better or worse. Mesh Shaders, for example, are for all intents and purposes just objectively better than Primitive Shaders, since they do the same thing as Primitive Shaders but a lot more efficiently. You wouldn't really need to roll with Primitive Shaders unless out of necessity.

Whatever fuels that necessity is always up for speculation, though.
 
The level of Dunning-Kruger is astonishing and hilarious 😂; "I am an armchair dev, hear me roar; insert straw man, ad hominem, red herring; I am an authority..."

I'm just going to leave this here:

The "technology" behind Quixel Megascans has nothing at all to do with Nanite's virtualised micro-polygon geometry.
Megascans are assets you can use in your game or other production. That they are included or recommended as part of the Unreal Engine developer kit doesn't mean that's all Nanite is, or that it is the same thing at all.
“1440P” isn’t a texture size, either. Nor is a 4K texture the same as a 4K frame buffer.

Load of gibberish.

We have no idea how much IO latency/bandwidth was required to demonstrate the UE5 demo.

All we know is that it was a collaboration between Epic and Sony, the result of which seems to be Sony selecting the speed they did for PS5, and Sweeney saying what we saw was only possible thanks to the work Sony did on IO.

We also know and have been told that not only is UE5 designed to scale down to mobile devices, but that Nanite itself is also tuneable.

Lumen in the Land of Nanite may very well only be possible on PS5 in the manner it was presented, just as Ratchet & Clank is only possible on PS5 in the manner it was presented in the trailer.

That doesn't mean UE5 or Nanite as a useful technology only works on PS5. We've been told it's scalable across systems with NVMe drives, with how many assets are being loaded at once and the quality of the source assets being potential limiting factors, and how many pixels per triangle being a scalable parameter.

If Lumen in the Land of Nanite was designed to really stretch the IO capabilities of the PS5 at the level of quality demonstrated then it will only be possible on it. That’s not a contentious or controversial point.

If it was only using something like a quarter of its capability then maybe it’s possible to render as shown on almost anything with a decent NVMe drive. If this is the case then it’s a mystery as to why Sony would spend so much money and die space on what they have done if it’s all unnecessary. That implication I do find a bit nonsensical.

But this has already been discussed to death entire chapters ago.
 

bitbydeath

Member
TBF, that isn't Sweeney saying anything regarding XSX, just comparing PS5's SSD I/O to PCs ATM. It's not like XSX can't stream directly from the SSD at high levels.

You have to watch the subtle marketing double-speak in pieces like that, which pops up in the last sentence of the first paragraph you quoted. That is a generalized comparison, it's nothing specific to any given hardware and certainly not to another next-gen console given that if anyone at Epic were to even suggest this, MS would not take too kindly to that and probably scale back support of their engine software on their platform and/or other punitive actions.

That's part and parcel of this type of stuff.

XSX can certainly stream; the quote was specific to the detail being best on PS5.

My interpretation of that is the 8K textures, as it would need a very hefty SSD to deliver such large textures in so little time.
 

Bill O'Rights

Seldom posts. Always delivers.
Staff Member
Folks, a word of advice going forward: the 'brute force' statements require the same level of evidencing as the 9TF argument people are being banned for. Show us how the XsX is brute forcing, using real-world examples or benchmarking, or drop the console-warrior fuel. We're really clamping down on dismissive posts to try and get some sanity back on the boards. If you're going to use a lazy crutch like '9TF' or 'Xbox can only brute force', you're likely to earn warnings, reply bans, and eventually bans. What's good for one side is good for the other.
 
XSX can certainly stream; the quote was specific to the detail being best on PS5.

My interpretation of that is the 8K textures, as it would need a very hefty SSD to deliver such large textures in so little time.

Well, we can actually break down some numbers and see. An 8K texture is 7,680 x 4,320 pixels, or about 33.2 million pixels. At 3 bytes per pixel, each 8K texture is 99,532,800 bytes, or roughly 99.5 MB. This is all uncompressed, by the way.

So let's just say they were streaming one uncompressed 8K texture per frame in the UE5 demo. The demo was capped at 30 FPS, so over 30 frames they'd be streaming about 2.99 GB of data per second (there's a quick sketch of this arithmetic at the end of this post). Now, that's clearly more than XSX's raw bandwidth cap of 2.4 GB/s uncompressed, but I'm talking about a very extreme case here: the demo streaming a new raw 8K texture into RAM every single frame, which I'm almost certain isn't happening.

Not only because that would be excessive for a real game scenario, but because we can also assume on PS5 that when the dedicated processor in the I/O block is streaming data from storage to RAM, the other system components are waiting on bus access, since they're all part of a hUMA architecture. So the GPU isn't going to be able to read the new texture data in RAM until the I/O block returns access of the bus to the other system components. The thing is, then, if the I/O block is writing those 8K textures for 30 consecutive frames, those are 30 frames during which the GPU can't read any of that new data, since it doesn't have access to the memory bus.

This same issue also pops up on XSX since it's also hUMA, but there's a fraction of a CPU core still handling movement of data between storage and RAM in that situation, so while the GPU has to wait, CPU-bound tasks could (in some limited capacity) still access data in the RAM while new data from storage is being copied to it. There are other things that might prevent this, though, or at least limit it a lot, because you probably don't want a game's CPU-bound logic trying to access data in RAM that is actively being replaced or will very soon be replaced, as that could cause errors (this same scenario would happen on PS5 if it handled the transfer of data from storage to RAM the same way, which it doesn't).

So back to the UE5 demo scenario: yes, if it were drawing raw 8K textures at a rate of 1 new texture per frame at 30 frames per second, that would exceed XSX's raw SSD bandwidth. However, literally the only realistic scenario where you would be doing that...is in a tech demo, which is what the UE5 demo was. An actual real game on PS5 won't be able to stream in data at that rate because other game logic has to actually be performed. On both PS5 and XSX the issue can be addressed somewhat by compressing those 8K textures ahead of time and then decompressing them through the decompressors, and while both systems can decompress data MUCH faster than a PC thanks to dedicated decompression hardware, it still adds a bit of a time penalty to decompress.

If you start talking in the realm of using compressed 8K textures, then you get into a scenario where that same UE5 demo streaming in compressed 8K textures could easily stream them in on XSX in addition to PS5 but, again, you're talking about consecutive streaming per frame with a new texture per frame, which is simply not realistic for an actual gameplay scenario where other game logic is functioning.

I think folks have to look at Sweeney's comments in that context because it's the only one that makes sense.
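For anyone who wants to check the 8K-texture arithmetic from earlier in this post, here it is as a quick sketch (uncompressed 24-bit textures, one new texture per frame at 30 fps; the raw SSD figures are the public specs):

```python
width, height, bytes_per_pixel = 7680, 4320, 3   # one uncompressed 24-bit 8K texture
fps = 30

texture_bytes = width * height * bytes_per_pixel      # ~99.5 MB per texture
per_second_gb = texture_bytes * fps / 1e9              # ~2.99 GB/s of raw texture data

print(f"One 8K texture: {texture_bytes / 1e6:.1f} MB")
print(f"At {fps} new textures per second: {per_second_gb:.2f} GB/s "
      f"(vs 2.4 GB/s raw on Series X, 5.5 GB/s raw on PS5)")
```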
 

Xplainin

Banned
XSX isn’t better than PC.
You need to go and see what makes both the PS5's and XSX's SSDs different from a PC SSD, as well as different from just swapping out your PS4 or Xbox One X hard drive for an SSD. And it's not just optimization.
Go and see how both of the new consoles' SSDs are "wired" into the system, and then you will understand why they are both very different to a PC with an SSD.
 

Dlacy13g

Member
Real talk...gaming hardware for the last few generations is about spin control and making the consumer base buy into said PR spin. We have heard the "balanced", "Developed to the metal", "maximized efficiency" and "optimized" spin before.
 

Gamerguy84

Member
Real talk...gaming hardware for the last few generations is about spin control and making the consumer base buy into said PR spin. We have heard the "balanced", "Developed to the metal", "maximized efficiency" and "optimized" spin before.

Yep and every company is doing it.
 

bitbydeath

Member
You need to go and see what makes both the PS5's and XSX's SSDs different from a PC SSD, as well as different from just swapping out your PS4 or Xbox One X hard drive for an SSD. And it's not just optimization.
Go and see how both of the new consoles' SSDs are "wired" into the system, and then you will understand why they are both very different to a PC with an SSD.

That doesn’t change what I said.
XSX isn’t better than PC.
 

Xplainin

Banned
That doesn’t change what I said.
XSX isn’t better than PC.
You are just so wrong.
XSX is far better than a PC with how the SSD is integrated into the APU.
As I said before, a PC SSD will need to load the data into the system RAM and then from the system RAM into the video RAM on the GPU.
That alone shows that the XSX is nothing like a PC.
Then you have custom decompression blocks, which you don't have on PC.

There is a reason that Epic didn't say XSX, and that's because the PS5 isn't that much different to the XSX.
 

ZywyPL

Banned
You need to go and see what makes both the PS5's and XSX's SSDs different from a PC SSD, as well as different from just swapping out your PS4 or Xbox One X hard drive for an SSD. And it's not just optimization.
Go and see how both of the new consoles' SSDs are "wired" into the system, and then you will understand why they are both very different to a PC with an SSD.

But how does it translate to the games/engines themselves, though? What's the end impact for us gamers? After tens of games shown so far, we've only seen R&C that stands out, which again isn't anything new, because Star Citizen has already been doing that on ordinary ~500MB/s SATA SSDs with all their supposed bottlenecks.
 