
Real Talk: Gaming Hardware Is About Maximizing Efficiency Not TFLOPs

Tqaulity

Member
The discussion about which next-gen console is more "powerful" has been heating up lately, with most believing the Xbox Series X to be more powerful based solely on higher spec counts in certain categories. Yet some folks counter that the custom hardware in the PS5 will alleviate some of its relative performance deficit and that the difference will be minimal.

Before I proceed, let's really think about what we mean by "powerful" in this context, because it could mean several different things. People tend to just toss a number around and say "system X has more TFLOPs so it's more powerful" or "system Y can run at higher framerates so it's more powerful". The distinction matters in the context of the next-generation consoles since both systems have advantages in different areas.

For this discussion, I want to focus on actual game performance as the goal: which system can actually process the most data in the shortest amount of time. That is what yields richer worlds at higher framerates. So I am stepping away from theoretical TFLOPS and high-level specs and focusing on which system ultimately runs games with the same or higher detail at higher framerates.

Now of course, let me state the obvious: at this point, nobody really knows which system is more powerful between the Xbox Series X and PS5. Why? Because nobody has seen both running final hardware up close with the same game side by side to do a comparison. So I'm not here to declare either one as more "powerful" but just to check some folks who claim one is superior based solely on numbers on paper or video streams.

Now, many people in the know, including developers, have said this, but let me reiterate: virtually no real-world game running on any system utilizes 100% of that system's capability at all times. As beautiful as TLOU2 or God of War looks on PS4, it is completely incorrect to think that either of those games is extracting the maximum 1.8 TFLOPs of GPU power for any sustained period. Yes, even if the clock speeds are fixed, actual utilization depends on the software running. For example, I can have a 5 GHz CPU and a 2 GHz top-of-the-line GPU running a simple 'for' loop or a simple binary search algorithm. Does that mean the system is running at its theoretical 14 TFLOPs while executing those few lines of code, simply because its frequencies are locked? Theoretically, I could build a 15 PetaFLOP machine (15,000 TFLOPS) that is several orders of magnitude more powerful than anything on the market today. But if all it could play were Wii games by design, would that be a system utilizing its full potential? Would that be next gen?
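To make the "numbers on paper" point concrete: a TFLOPS figure is nothing more than spec-sheet arithmetic. Here's a quick sketch (my own illustration; `peak_tflops` is a made-up helper name, not from any official source):

```python
# Illustrative sketch: a spec-sheet "peak TFLOPS" number is just
# shader cores x clock x ops-per-cycle. It says nothing about how
# much of that a real game actually uses.
def peak_tflops(shader_cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Theoretical peak single-precision TFLOPS (2 = one fused multiply-add per cycle)."""
    return shader_cores * clock_ghz * flops_per_cycle / 1000.0

# Public figures: PS5 has 36 CUs x 64 lanes = 2304 shaders at 2.23 GHz;
# Series X has 52 CUs x 64 = 3328 shaders at 1.825 GHz.
ps5 = peak_tflops(2304, 2.23)    # ~10.28 TFLOPS
xsx = peak_tflops(3328, 1.825)   # ~12.15 TFLOPS
```

Nothing in that arithmetic says how many of those operations a real game actually issues per second, which is the whole point.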

The point here is something that I've mentioned several times in this forum and that I think a lot of people miss. When we really think about "next gen" gaming and transitioning to a new generation, it really isn't the hardware that achieves those milestones. It's the actual software/games that truly define a new generation. We don't remember the specs of the N64 and how much more horsepower it had over the PS1, but we remember how seeing Super Mario 64 for the first time took our breath away. Try as we might, few people could look at Mario 64 in motion and translate that into exactly which hardware specs made it possible, or how any theoretical advantages over competing hardware show up in the images being rendered in front of them. The same could be said of the move to PS2: it was seeing GT3, Metal Gear Solid 2, and GTA III that defined what "next gen" really meant for that generation. It was not a GFLOP count or marketing buzzwords like "Emotion Engine". We could go on with seeing Halo for the first time, Gears of War, Uncharted 2, and Killzone Shadow Fall in later generations, but you get my point. Here is the question: if you didn't know the hardware specs of the system running those games, would that change how you looked at that system? In other words, if Kojima today mentioned that MGS2 on PS2 only used <1 GFLOP of performance, would you now look at the PS2 as being "weaker" than the Dreamcast (capable of a theoretical 1.4 GFLOPS) even though it clearly looked better than anything on the Dreamcast at that time?

With that in mind, we should realize that all of this talk about TFLOPs and theoretical numbers is really moot at the end of the day and misses the point. If we understand that maximum theoretical numbers are quite meaningless in determining actual real game performance, and we agree that real-world performance, or demonstrated power, is actually more meaningful to evaluate, then we should be focusing on which system will best deliver its theoretical performance to the screen. There are indeed a tremendous number of system components and variables that all have to play nice and align perfectly for a system to operate at its maximum capacity. In truth, that almost never happens with real workloads, but the systems perceived to be the most "powerful" are generally the ones that come closest to their theoretical maximums…meaning the ones that are most efficient. That truly is the name of the game…removing bottlenecks and creating a balanced system whose parts work together effectively is really the art of designing a game console (or any system).

I recently got into a back and forth with someone who shouted at me: Xbox Series X is clearly more powerful because "the numbers don't lie". I literally LMAO'd and shouted back "LOL. YES THEY DO!" There are countless examples of this, and many on this forum have posted PC GPU comparisons demonstrating a lower-TFLOP GPU outperforming (in real games) a higher-TFLOP GPU. But there are two examples I want to remind people of in particular:

  1. The first and more recent example of "numbers telling lies" is the PS3 vs Xbox 360 comparison. On paper, there is no denying that the PS3 had a much higher theoretical performance ceiling when you factored in the Cell, its SPUs, along with the RSX GPU. Yet most multiplatform games ran better on the Xbox 360. Why? Because the X360 was a much more balanced system that allowed developers to extract more performance with less effort than the PS3. In other words, its "power" was much more accessible and the system more efficient. Its unified memory, symmetrical CPU design, and larger GPU with more parallel pipelines meant there was more power on tap in the X360. This was evident in many third-party games throughout the generation, but especially in the first few years (anyone remember Madden 07 running at 60fps on X360 vs only 30fps on PS3?). Other big titles such as Red Dead Redemption, Skyrim, Assassin's Creed, and many more ran at lower resolution and/or lower framerates on the PS3. One way to characterize this at a high level of abstraction (not literal figures, just an example to illustrate the point) is that 70% of the Xbox 360 was better than 40% of the PS3.
  2. For those old enough to remember, the second major example of this was the original PS1 vs the Sega Saturn. People may not remember, but on paper the Sega Saturn was superior to the PS1 in almost every respect! More polygon-pushing power, higher pixel fillrate, more RAM and VRAM, better sprite processing, higher maximum resolution, and more! Yet the vast majority of third-party multiplatform games looked and ran better on the PS1. Games like Tomb Raider, Resident Evil, and Wipeout are just some examples where the Saturn version had poorer performance or was missing visual elements altogether. Why was this? Again, the Saturn was notoriously difficult to develop for, and particularly to harness its max potential. It featured dual CPUs that were very tricky to code for, and in fact most developers literally ignored the second processor altogether, reducing the theoretical performance of the system by a tremendous amount. The PS1, on the other hand, was well balanced and it was easy to get the desired level of performance out of it. Developers got much more out of it with less effort. Again, as a high-level abstraction: 60% of the PS1 was a lot better than 30% of the Saturn.

So how does this relate to the current discussions around PS5 and Xbox Series X? Again, let me reiterate: I'm not saying that one is more powerful than the other. In fact, by my own reasoning in this thread, I cannot say that until I've seen games running side by side on both. I believe, like many, that both will have their advantages in different areas. But we've been hearing and talking a lot recently about how so many developers seem to be singing the praises of the PS5, using big hyperbolic words like "masterpiece", "best ever", "dream machine", etc. The general excitement from the development community around the PS5 seems tangible, and there isn't the same vibe at this time around the Series X (despite the higher spec numbers). Why is that?

We've heard things said about the PS5 such as: it's one of the easiest systems ever to develop for, it's very easy to get the power out of it, it removes many of the bottlenecks that have existed for years, it frees developers from design constraints they have been working around for decades, etc. These kinds of statements all point to a system that will be extremely efficient and allow developers to harness more power for less time and effort. The fact that we haven't heard the same sorts of statements about the Series X leads me to believe that the PS5 is in fact the more efficient of the two.

This means you can get much closer (still likely not 100%) to that 10.28 TFLOPs of GPU power more consistently in actual workloads. It means you can devote much more of those 8 Zen 2 cores to meaningful work that the player will see, as opposed to "under the hood" tasks like data management and audio processing. It means you can actually achieve near 100% of the theoretical SSD read/write speeds without the traditional bottlenecks that HDDs have imposed on games for years. And it means you can make much more efficient use of the physical RAM allotment because fewer wasteful or unnecessary assets take up space.

The people that truly follow what I'm saying in this thread will realize that these things are much more exciting to both a developer and end user than some higher numbers on a spec sheet. These are the things that can make a meaningful difference in the quality of games we play in the next few years. These are things that will directly improve the quality of the software, which is really what delivers the next gen experience. This is absolutely cause to sing the praises of the PS5 as many developers have done.

Unfortunately for Cerny and the team at Sony, most of the real work and genius in the PS5 design is not easy to communicate to end users. It's also not something end users can really appreciate until they see the results. And that, of course, will not happen right away at launch in 2020. But ultimately, there is much to be excited about with the innovations Sony is bringing in the PS5 and the level of efficiency they may have achieved.

So while I am not saying the PS5 is definitely more powerful (meaning more performance) than the Series X, I am saying that it is absolutely inaccurate to say the Series X is more powerful based solely on TFLOPs ratings and other theoretical specs. In other words, despite what the numbers say, it is entirely possible that we may see many cases where games perform better (i.e. more complex scenes and/or higher framerates) on PS5. To use my analogy above: 85% of the PS5 may be better than 60% of the Series X (for example). It wouldn't be the first time that the numbers did not tell the whole truth :)
 
Ok, I have a question though that is unrelated to all that. Take multiplatform games out of the equation. The best looking game from the PS3/360 era was probably The Last of Us, God of War 3, or Killzone, so ultimately that console ended up outdoing the Xbox later in the generation. Now, I personally don't care about high frame rates or resolution beyond 720-1080p; it just doesn't matter that much to me as far as graphics are concerned. What matters is mainly the level of detail in things, lack of pop-in, physics, etc. So if, say six years from now, you make something highly detailed on one of these consoles that's optimized purely for that console (and future super PCs will then run it at high frame rates in 4K and all that), which one has the higher likelihood of looking better? And in the Xbox's case, will something like the Series S hold it back?
 
Last edited:
 
Interesting you bring this up. Speaking of efficiency, though, I don't think people should sleep on MS's approach here. Some of the fellas over on Beyond3D did a little digging for AMD patents related to VRS and came across some interesting stuff. iRoboto posted a summary of their own that is pretty interesting and also ties in with something I speculated in the Velocity Architecture thread:

Actual patent above.

It doesn't read like it's Tier 2 to be honest.
Tier 1
  • Shading rate can only be specified on a per-draw-basis; nothing more granular than that
  • Shading rate applies uniformly to what is drawn independently of where it lies within the rendertarget
  • Use of 1x2, programmable sample positions, or conservative rasterization may cause fall-back into fine shading
Tier 2
  • Shading rate can be specified on a per-draw-basis, as in Tier 1. It can also be specified by a combination of per-draw-basis, and of:
    • Semantic from the per-provoking-vertex, and
    • a screenspace image
  • Shading rates from the three sources are combined using a set of combiners
  • Screen space image tile size is 16x16 or smaller
  • Shading rate requested by the app is guaranteed to be delivered exactly (for precision of temporal and other reconstruction filters)
  • SV_ShadingRate PS input is supported
  • The per-provoking vertex rate, also referred to here as a per-primitive rate, is valid when one viewport is used and SV_ViewportIndex is not written to.
  • The per-provoking vertex rate, also referred to as a per-primitive rate, can be used with more than one viewport if the SupportsPerVertexShadingRateWithMultipleViewports cap is marked true. Additionally, in that case, it can be used when SV_ViewportIndex is written to.
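For anyone wondering what the "combiners" bullet in Tier 2 actually means: the shading rates from the three sources get merged pairwise. A rough Python sketch of that semantics (illustrative only; the names are modeled loosely on the D3D12 combiner concept, not real API code):

```python
# Illustrative sketch of Tier 2 VRS rate combining (not real D3D12 code).
# Rates are simplified to an int: 1 = 1x1 (fine), 2 = 2x2, 4 = 4x4 (coarse).
from enum import Enum

class Combiner(Enum):
    PASSTHROUGH = 0  # keep the first source's rate
    OVERRIDE = 1     # take the second source's rate
    MIN = 2          # finer rate wins
    MAX = 3          # coarser rate wins

def combine(a: int, b: int, mode: Combiner) -> int:
    """Merge two shading rates according to the chosen combiner."""
    if mode is Combiner.PASSTHROUGH:
        return a
    if mode is Combiner.OVERRIDE:
        return b
    if mode is Combiner.MIN:
        return min(a, b)
    return max(a, b)

def final_rate(per_draw: int, per_primitive: int, screen_image: int,
               c1: Combiner, c2: Combiner) -> int:
    # Tier 2: rate = c2(c1(per-draw, per-primitive), screen-space image)
    return combine(combine(per_draw, per_primitive, c1), screen_image, c2)
```

So e.g. with MAX/MAX combiners, whichever source asks for the coarsest shading wins for that tile.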

Just to be clear, I don't know what I'm doing. Reading patents is a head scratcher.
But it seems to me, comparing the two, that the MS patent gives each stage of the unified shader pipeline the option to take in shading rate parameters or output them.

Compared to AMD's patent, which also runs VRS through the unified shader pipeline but doesn't seem to indicate that.

If you look at the Tier 2 highlights, it does appear that, yes, there is some form of support for optional shading rate parameters at different stages.

The MS patent is here: https://patents.google.com/patent/US20180047203A1/en

  • The input assembler stage 80 supplies data (triangles, lines, points, and indexes) to the pipeline. It also optionally processes shading rate parameters per object (SRPo), per primitive (SRPp), or per vertex (SRPv), generally referenced at 112, as determined by the application 46 (FIG. 1). As generally indicated at 114, input assembler stage 80 may output the SRPp, or an SRPv if the SRPv is not generated by a vertex shader stage 82.
  • [0042]
    The vertex shader stage 82 processes vertices, typically performing operations such as transformations, skinning, and lighting. Vertex shader stage 82 takes a single input vertex and produces a single output vertex. Also, as indicated at 110, vertex shader stage 82 optionally inputs the per-vertex shading rate parameter (SRPv) or the per-primitive shading rate parameter (SRPp) and typically outputs an SRPv, that is either input or calculated or looked up. It should be noted that, in some implementations, such as when using higher-order surfaces, the SRPv comes from a hull shader stage 84.
  • [0043]
    The hull shader stage 84, a tessellator stage 86, and a domain shader stage 88 comprise the tessellation stages. The tessellation stages convert higher-order surfaces to triangles, e.g., primitives, as indicated at 115, for rendering within logical graphics pipeline 14. Optionally, as indicated at 111, hull shader stage 84 can generate the SRPv value for each vertex of each generated primitive (e.g., triangle).
  • [0044]
    The geometry shader stage 90 optionally (e.g., this stage can be bypassed) processes entire primitives 22. Its input may be a full primitive 22 (which is three vertices for a triangle, two vertices for a line, or a single vertex for a point), a quad, or a rectangle. In addition, each primitive can also include the vertex data for any edge-adjacent primitives. This could include at most an additional three vertices for a triangle or an additional two vertices for a line. The geometry shader stage 90 also supports limited geometry amplification and de-amplification. Given an input primitive 22, the geometry shader can discard the primitive, or emit one or more new primitives. Each primitive emitted will output an SRPv for each vertex.
  • [0045]
    The stream-output stage 92 streams primitive data from graphics pipeline 14 to graphics memory 58 on its way to the rasterizer. Data can be streamed out and/or passed into a rasterizer stage 94. Data streamed out to graphics memory 58 can be recirculated back into graphics pipeline 14 as input data or read back from the CPU 34 (FIG. 1). This stage may optionally stream out SRPv values to be used on a subsequent rendering pass.
  • [0046]
    The rasterizer stage 94 clips primitives, prepares primitives for a pixel shader stage 96, and determines how to invoke pixel shaders. Further, as generally indicated at 118, the rasterizer stage 94 performs coarse scan conversions and determines a per-fragment variable shading rate parameter value (SRPf) (e.g., where the fragment may be a tile, a sub-tile, a quad, a pixel, or a sub-pixel region). Additionally, the rasterizer stage 94 performs fine scan conversions and determines pixel sample positions covered by the fragments.
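Putting those stage descriptions together, the overall flow is roughly "each stage may pass through or emit a shading rate parameter". A toy sketch of that idea (purely illustrative names, nothing taken from the patent itself):

```python
# Toy model of the patent's idea: each pipeline stage optionally
# consumes the shading rate parameter (SRP) from the previous stage,
# or emits one of its own. All names here are invented for illustration.
from typing import List, Optional

class Stage:
    def __init__(self, name: str, emits_srp: bool = False):
        self.name = name
        self.emits_srp = emits_srp

    def run(self, srp: Optional[int]) -> Optional[int]:
        # Emit a new rate (2 = 2x2 for the example) or pass the incoming one on.
        return 2 if self.emits_srp else srp

def run_pipeline(stages: List[Stage], initial_srp: Optional[int] = None) -> Optional[int]:
    srp = initial_srp
    for stage in stages:
        srp = stage.run(srp)
    return srp  # the rate the rasterizer ends up shading with

pipeline = [
    Stage("input_assembler"),
    Stage("vertex_shader", emits_srp=True),  # emits an SRPv
    Stage("rasterizer"),                     # consumes it for coarse shading
]
```

The interesting bit, per the patent, is that any stage in the chain can be the source of the rate, not just a single fixed point.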
As I review this, and the words 'HoloLens' pop up in this patent, it would appear that MS has been working on their own form of VRS for some time, in association with trying to extract as much shader power as possible while reducing the amount of power required to do it.

That research got them to this point; instead of going with AMD's solution, they found their own (the HoloLens one) was better, and they plopped it here.

That may imply that HoloLens and any other VR-type device they've been working on with foveated rendering in mind (and thus VRS) could be driven, in this case, by XSX. For context, HoloLens 3 is supposed to launch with foveated rendering.

In determining the shading rate for different regions of each primitive (and/or different regions of the 2D image), the described aspects take into account variability with respect to desired level of detail (LOD) across regions of the image. For instance, but not limited hereto, different shading rates for different fragments of each primitive may be associated with one or more of foveated rendering (fixed or eye tracked), foveated display optics, objects of interest (e.g., an enemy in a game), and content characteristics (e.g., sharpness of edges, degree of detail, smoothness of lighting, etc.). In other words, the described aspects, define a mechanism to control, on-the-fly (e.g., during the processing of any portion of any primitive used in the entire image in the graphic pipeline), whether work performed by the pixel shader stage of the graphics pipeline of the GPU is performed at a particular spatial rate, based on a number of possible factors, including screen-space position of the primitive, local scene complexity, and/or object identifier (ID), to name a few.
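A crude illustration of the foveated case mentioned there (all thresholds and names invented for the example): shade tiles near the gaze point at full rate and distant tiles coarsely.

```python
# Illustrative foveated-VRS sketch: coarser shading rate for screen tiles
# farther from the gaze point. Coordinates normalized to 0..1; the
# distance thresholds below are made up for the example.
import math

def tile_shading_rate(tile_x: float, tile_y: float,
                      gaze_x: float, gaze_y: float) -> int:
    """Return 1 (1x1, fine), 2 (2x2), or 4 (4x4, coarse) for a tile."""
    d = math.hypot(tile_x - gaze_x, tile_y - gaze_y)
    if d < 0.15:
        return 1   # fovea: full shading rate
    if d < 0.40:
        return 2   # near periphery: quarter the pixel shader work
    return 4       # far periphery: sixteenth of the work
```

The payoff is exactly what the patent describes: pixel shader work scales with where detail is actually needed rather than being uniform across the frame.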

Basically, I speculated a while ago that the chances are very high that MS's engineering team, along with the AMD engineers they worked with on their next-gen platforms, might've been the ones to push for Mesh Shader support in the RDNA2 PC GPUs and MS's Series X/Series S GPUs, the reason being that the Primitive Shaders of RDNA1 are generally less capable and efficient than the Mesh Shaders RDNA2 brings.

But what slipped by me at the time (and what the post I quoted highlights) is this: if MS had a VRS implementation patent of their own (as this post and others in that B3D thread show), and that implementation is closer to (or exactly) the Tier 2 variety versus AMD's Tier 1 implementation, it's possible that MS brought it to their system side and that AMD is using it on the PC side. If that's the case, though, Sony would likely not be able to use that VRS implementation, since it would be a case of AMD leveraging development done with MS (the same potentially applies to Mesh Shading).

This doesn't mean Sony might not have their own variation of VRS (I speculated that they very well might), but whether it's Tier 1 or Tier 2 is genuinely up for debate, since the Tier 2 version would be based on MS's patent and work (likely also influenced by, or designed as a blueprint spec in tandem with, Nvidia). Sony could have a Tier 2-type implementation done a different way. But the fact that Geometry Engine and Primitive Shaders (both clearly mentioned in the RDNA1 spec) are AMD nomenclature and AMD-patented suggests Sony are most likely leveraging those (and therefore RDNA1 features; FWIW, Geometry Engine is also in RDNA2, but Primitive Shaders are not, having been replaced with Mesh Shaders). So if they have some variation of Tier 2 VRS in PS5, it will need to have been heavily customized and implemented somewhat differently.

We can say that they've hinted at this through some mentions of the Geometry Engine in Road to PS5, but in all honesty that talk also fits in line with Tier 1 VRS or an equivalent. Perhaps they'll go in-depth on it in the near future; we'll see.
 
Last edited:

teezzy

Banned
Baby, you know you're Judas, and I'm your priest!
Baby, what I got is not from the least!
Bring it through the stage in the rage of a beast!


Step in the arena, and break the wall down!
Step in the arena, and break the wall down!


So good - You know I got ya' - Soooooo right...
Yeahhh, Yeah...

I wake up from a deep sleep! You're all weak!
You're living in the agony of defeat!
I am the master of your whole heap!
I am the pasture that flock ya like sheep!

Step into the town, and break the wall down!
Your heart beat is the only sound!
Step into the light, and then you'll know!
You were stopped and dropped by the Walls of Jericho!!!!!
 

Slayer-33

Liverpool-2
While you have some valid points, all of that doesn't change the fact that the XSX won't be inefficient, and it's already more powerful by a fair margin.

At this point it's like we're trying to twist reality instead of accepting it.

PS5 is a weaker system.
 

Dnice1

Member
I skimmed through it knowing what conclusion you were coming to. One thing we can agree on is that it's all about the games, but the interesting thing about that is that everyone doesn't like the same games. So that's a subjective argument. What isn't subjective is hardware capability, and no amount of word salad is going to change the fact that the XSX is a more powerful and capable machine.
 

teezzy

Banned
For those that want the TLDR:

"Look, I'm not saying that either system is more powerful but the PS5 is more powerful because Sony devs praised it and also TFLOPS are just like, theoretical and stuff, it is all about efficiency and removing bottlenecks not TFLOPS. Again, not claiming that one is more powerful but the PS5 will be more powerful."

 

Tqaulity

Member
For those that want the TLDR:

"Look, I'm not saying that either system is more powerful but the PS5 is more powerful because Sony devs praised it and also TFLOPS are just like, theoretical and stuff, it is all about efficiency and removing bottlenecks not TFLOPS. Again, not claiming that one is more powerful but the PS5 will be more powerful."
I get what you are doing :). But no, that's not what I am saying. What I am saying is that, yes, the PS5 could have better performance based on some of the work they've done to make it easier to extract the system's performance. It's entirely possible.

My issue is with those that dismiss the possibility based on nothing but theoretical specs.
 
More directly to your main point though... I honestly don't think the PS3/360 and especially Saturn/PS1 analogies are suitable for PS5 and XSX. Even if the PS5 is easier to utilize in some areas, the XSX is nowhere near the level of "complex for complexity's sake" that the PS3 and Saturn were.

Also, FWIW, with the dev sentiments we've been seeing for PS5, like the Epic stuff recently... well, we kinda need to read between the lines. Epic has always used Sony hardware for tech showcases primarily; this goes all the way back to the PS2! So what they're doing right now is just more of the same. The way they in particular are phrasing some of their stuff seems very PR-ish, however; some folks want to call it a "technology partnership", but that doesn't mean there isn't also some PR partnership between them and Sony. Both could be equally valid.

Aside from that, there's also the Crytek stuff (which, FWIW, was quickly taken down); that dev was basically saying a lot of the same thing: for them, PS5 felt like an easier platform to develop for. Which is perfectly fine. The issue, however, is that OTHER PEOPLE were taking "easier" to mean "superior/better", and that was fanboys reading their own console biases into it.

Sony, let's face it, are the #1 player in the gaming market, which means they're in the best position to leverage good coverage and technology/marketing partnerships. That comes in handy if you know how to utilize it, which Sony does. Now, in light of some recent things like Sony PR contacting Variety over their TLOU2 review... we could speculate that they might be abusing this position somewhat, but it doesn't change the fact that they're in a unique position in that regard.

I think if you take a step back, you'll see that a good deal of developers are praising both PS5 and XSX. We might see more doing so for PS5, but there's always some backroom politics and merry-making involved in whatever pre-launch praise next-gen systems get from the dev scene. This applies to both Sony and Microsoft, but again, considering Sony's position in the industry, that kind of thing is going to happen more on their side.

It doesn't invalidate the general praise the systems get, but you have to put it in its truthful context no matter what.
 
I get what you are doing :). But no, that's not what I am saying. What I am saying is that, yes, the PS5 could have better performance based on some of the work they've done to make it easier to extract the system's performance. It's entirely possible.

My issue is with those that dismiss the possibility based on nothing but theoretical specs.

Both the PS5 and XSX are standard x86 systems. There’s no black magic going on here.
 

Quezacolt

Member
Ok, I have a question though that is unrelated to all that. Take multiplatform games out of the equation. The best looking game from the PS3/360 era was probably The Last of Us, God of War 3, or Killzone, so ultimately that console ended up outdoing the Xbox later in the generation. Now, I personally don't care about high frame rates or resolution beyond 720-1080p; it just doesn't matter that much to me as far as graphics are concerned. What matters is mainly the level of detail in things, lack of pop-in, physics, etc. So if, say six years from now, you make something highly detailed on one of these consoles that's optimized purely for that console (and future super PCs will then run it at high frame rates in 4K and all that), which one has the higher likelihood of looking better? And in the Xbox's case, will something like the Series S hold it back?
I guess it depends on what could end up being the bottleneck in each console.
 

teezzy

Banned
I find it hard to imagine many multi-platform titles being optimized for either system.

Instead they'll run better on the more powerful platforms - i.e. PC or Series X

Same as it ever was.
 

teezzy

Banned
It's like arguing that a PC with an i9-9900K and a 2080 Ti with an HDD could be outperformed by a PC with an i7-4790K and a 2070 because the latter build also has a top-of-the-line SSD.

Unless I'm super ignorant here, which is also completely possible.
 

Tqaulity

Member
I skimmed through it knowing what conclusion you were coming to. One thing we can agree on is that its all about the games, but the interesting thing about that is everyone doesn't like the same games. So that's a subjective argument. What isn't is subjective is hardware capability, and no amount of word salad is going to change the fact XSX is a more powerful and capable machine.
You’re right, hardware capability is entirely objective. My point is that that is not really what we care about (or at least what we should care about). What we really care about is how effectively the hardware is used. Using my 15 PetaFLOP example: that hardware is objectively 100x more powerful than anything out today. But so what, if it's only running Wii games (just an extreme example to illustrate the point)?

So no, nobody is refuting the notion that the Series X has higher specs on paper. But that's not the important question. The important question is: which system will actually be able to run games better? That is a loaded question with many factors that play into it besides the theoretical figures.
 
Last edited:
Bleh, it's just specs. The Xbox will be more powerful, and you may see it if you are looking for it. But it's about the games.
 

Chiggs

Member
It's like arguing that a PC with an i9-9900K and a 2080 Ti with an HDD could be outperformed by a PC with an i7-4790K and a 2070 because the latter build also has a top-of-the-line SSD.

Unless I'm super ignorant here, which is also completely possible.

Realistically?

It's like arguing that:

  • A PC with an i9-9900k running at a faster clock speed, with faster RAM, a 2080TI, and a good SSD...

...is going to be outperformed by:

  • A PC with a slower i9-9900k running in boost mode, with slower RAM, a 2070 Super also running in boost mode, but equipped with a great SSD.

I don't buy it. Not for a second.

Now, will Sony's first party developers have better looking games than anything on Series X? That I do buy, and that's because DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS.
 
Last edited:

DonF

Member
Series X is more powerful, period.
But I'm not comparing flops between the Xbox and PS5; I just want to compare the PS4 to the PS5. Look at all the games and how they look and play on PS4; now we will have a way more powerful machine, so it's hard to think that games will look worse. Hell, I can't even imagine how the exclusives are going to look.
All this dick measuring is so useless. Both machines are a massive leap. Go where the games you want are.
Bottlenecks get smaller each generation.
 

Tqaulity

Member
It's like arguing that a PC with an i9-9900K and a 2080 Ti with an HDD could be outperformed by a PC with an i7-4790K and a 2070 because the latter build also has a top-of-the-line SSD.

Unless I'm super ignorant here, which is also completely possible.
Your comparison would be accurate IF both systems were identical aside from the CPU, GPU, and HDD. In that case clearly the first system would dominate.

However, in the case of the Series X and PS5, there are far more differences than just the main components, and those differences cannot be dismissed. Several of the other system components besides the GPU favor the PS5, so how can we be sure the Series X will be faster when we haven’t seen its actual real-world performance compared to the PS5?
 

teezzy

Banned
Realistically?

It's like arguing that:

  • A PC with an i9-9900k running at a faster clock speed, with faster RAM, a 2080TI, and a good SSD...

...is going to be outperformed by:

  • A PC with a slower i9-9900k running in boost mode, with slower RAM, a 2070 Super also running in boost mode, but equipped with a great SSD.

I don't buy it. Not for a second.

Now, will Sony's first party developers have better looking games than anything on Series X? That I do buy, and that's because DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS.

True. I was admittedly being hyperbolic about the systems' discrepancies, but MS has the horsepower on its side this gen - that's inarguable.

I won't pretend like I'm not blown away by what first party devs were able to pull off on PS4 pro... but I don't think it's fair to underestimate all the freshly hired talent at Xbox Game Studios either.

My overall point is that that great SSD really won't be doing anything the SSD in the Series X can't do in regards to multi-platform titles. That's what the OP seems to be getting at by noting the differences between PS1 vs. Saturn, PS3 vs. 360, etc.
 

Stuart360

Member
Your comparison would be accurate IF both systems were identical aside from the CPU, GPU, and HDD. In that case clearly the first system would dominate.

However, in the case of the Series X and PS5, there are far more differences than just the main components, and those differences cannot be dismissed. Several of the other system components besides the GPU favor the PS5, so how can we be sure the Series X will be faster when we haven’t seen its actual real-world performance compared to the PS5?
How so? I haven't been following the tech side lately, and I probably won't go in the 'next gen' thread ever again (total fanboy cesspit), but the XSX GPU is 2 TF more powerful, and it has a faster CPU and faster RAM (albeit a split pool), with the PS5 having the faster SSD.
What am I missing?
 

bitbydeath

Member
The differences are going to be so minor (if any exist at all) that it’s really not worth talking about.

In Sony’s presentation we learned that reaching native 4K isn’t an issue, so any GPU advantage concerns are gone.

CPU specs are even more negligible and RAM is heavily backed by the SSDs.

Hopefully once people come to terms with it we can be done with the ‘wars’.
 

Chiggs

Member
The differences are going to be so minor (if any exist at all) that it’s really not worth talking about.

In Sony’s presentation we learned that reaching native 4K isn’t an issue, so any GPU advantage concerns are gone.

CPU specs are even more negligible and RAM is heavily backed by the SSDs.

Hopefully once people come to terms with it we can be done with the ‘wars’.

I agree; the graphics are going to be so good on both platforms that very few people will give a shit, outside of the sad souls that jerk themselves limp to Digital Foundry.

I think the Series X will be able to provide more consistent performance in its mature phase, but even then, will anyone care?
 
On paper, I feel MS created a pseudo-PC called the Xbox Series X; it's just a console brute-forcing its way to graphical and performance gains for gaming purposes.

Sony, meanwhile, is aiming to eliminate bottlenecks and latency with revolutionary hardware that doesn't follow the PC standard, but is instead a true console gaming experience.

That's my judgment on this... but I feel the games will be what shows the "result" of this approach.
 
Lol, this again. “We’ve got no way to know which is better yet, so here’s why theoretically the PS5 could be better in certain scenarios.” Sure, theoretically. Theoretically I could shit gold bricks tomorrow, but I don’t like my chances.
 
Short answer: show us results; don't talk us to death with PR with no games, or with footage running on other hardware.
On paper, I feel MS created a pseudo-PC called the Xbox Series X; it's just a console brute-forcing its way to graphical and performance gains for gaming purposes.

Sony, meanwhile, is aiming to eliminate bottlenecks and latency with revolutionary hardware that doesn't follow the PC standard, but is instead a true console gaming experience.

That's my judgment on this... but I feel the games will be what shows the "result" of this approach.
Fucks sake OP, see what you’ve done. Now we’ve got another X amount of pages of this bullshit. Threads like this should either be quarantined to their own megathread or banned until both the consoles are released.
 

Bo_Hazem

Banned
Fucks sake OP, see what you’ve done. Now we’ve got another X amount of pages of this bullshit. Threads like this should either be quarantined to their own megathread or banned until both the consoles are released.

What's happening to you? Aren't games the final result? Or do you put the PR in a Word file and play that? Stop acting smart.
 

jakinov

Member
It's like arguing that a PC with an i9-9900K and a 2080 Ti with an HDD could be outperformed by a PC with an i7-4790K and a 2070 because the latter build also has a top-of-the-line SSD.

Unless I'm super ignorant here, which is also completely possible.
It's about the workload and the situation. Both Microsoft and Sony include a lot of custom hardware designed to be good at very specific things or to offload work from the CPU/GPU. All of that helps, and we don't know for sure how good these solutions are. So it's not as simple as comparing two PCs with similar architectures. Even with those PCs, the one with the faster SSD can outperform the one without, depending on the workload.

The faster SSD will play a role in getting better performance, and so may other design decisions Sony made. Teraflops measure raw theoretical performance.

It's like taking a grocery store bagger and counting how many times he can put an item in a bag within a minute. Let's say it's 100 items/min. When we actually put him to work in a store, many things will keep him from actually hitting 100 items/min. How close he gets depends on everything around him: people in his way, the conveyor belt, time to get out bags, etc. If he works a register that relies on items coming down a long conveyor belt (a game that loads a lot of data from the SSD), a faster belt gets him closer to that 100 items/min. If he works somewhere that doesn't depend on getting items off a belt quickly (say, the meat department, for the sake of the analogy), he can bag close to 100 items/min and a fast belt would mean nothing.

Circling back to gaming: a GPU or CPU might idle while waiting for data from storage, so it isn't working as efficiently as it could. Not all games will depend heavily on storage, but more might in the future.
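The bagger analogy is really a pipeline-bottleneck model: sustained throughput is capped by the slowest stage. A tiny sketch, with all rates invented for illustration:

```python
# Toy model of the bagger analogy: a processor (the bagger) with a peak rate,
# and a feed stage (the conveyor belt / storage) with its own delivery rate.
# The pipeline only sustains the rate of its slower stage.

def sustained_rate(peak_process_rate, feed_rate):
    """Items/min actually achieved: min of processing and delivery rates."""
    return min(peak_process_rate, feed_rate)

bagger_peak = 100      # items/min the bagger manages in isolation (the "TFLOPs")
slow_conveyor = 70     # items/min a slow belt delivers (HDD-era storage)
fast_conveyor = 120    # items/min a fast belt delivers (NVMe-class storage)

print(sustained_rate(bagger_peak, slow_conveyor))  # storage-bound: 70
print(sustained_rate(bagger_peak, fast_conveyor))  # compute-bound: 100
```

Note the asymmetry: a faster belt helps only while the belt is the bottleneck; past that point (the "meat department" case), extra feed rate buys nothing.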
 

longdi

Banned
Realistically?

It's like arguing that:

  • A PC with an i9-9900k running at a faster clock speed, with faster RAM, a 2080TI, and a good SSD...

...is going be outperformed by:

  • A PC with a slower i9-9900k running in boost mode, with slower RAM, a 2070 Super also running in boost mode, but equipped with a great SSD.

I don't buy it. Not for a second.

Now, will Sony's first party developers have better looking games than anything on Series X? That I do buy, and that's because DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS.

Richard from DF said it best: platform owners will always add ambiguity or give incomplete presentations when things don't favour them, letting the internet brigade fill in the blanks with positive thoughts.

He chided Xbone and now PS5.
 

Xplainin

Banned
I'm not sure why some people don't want to accept it, but the XSX is the more powerful of the two.
More powerful GPU, faster CPU, faster RAM bandwidth.
The real question is whether anyone will really be able to tell the difference between the two.
If the XSX is 18% more powerful and has a slightly better resolution and a couple more frames per second, is anyone other than DF, with their pixel counting and FPS monitoring, going to notice?
I personally don't think so.
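For what it's worth, the ~18% figure checks out against the publicly announced peak GPU specs (52 CUs at 1.825 GHz for Series X, 36 CUs at up to 2.23 GHz for PS5), using the standard peak-FLOPs formula for these GPUs:

```python
# Peak-TFLOPs arithmetic behind the ~18% figure:
# compute units * 64 shader ALUs * 2 ops/cycle * clock (GHz) / 1000.

def peak_tflops(compute_units, clock_ghz):
    return compute_units * 64 * 2 * clock_ghz / 1000.0

xsx = peak_tflops(52, 1.825)   # ~12.15 TFLOPs (fixed clock)
ps5 = peak_tflops(36, 2.23)    # ~10.28 TFLOPs (at max boost clock)

print(round(xsx, 2), round(ps5, 2), f"{(xsx / ps5 - 1) * 100:.0f}%")
# -> 12.15 10.28 18%
```

That 18% is a gap in theoretical peak, of course; as argued upthread, how much of it survives in shipped games is a separate question.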
 

bitbydeath

Member
I'm not sure why some people don't want to accept it, but the XSX is the more powerful of the two.
More powerful GPU, faster CPU, faster RAM bandwidth.
The real question is whether anyone will really be able to tell the difference between the two.
If the XSX is 18% more powerful and has a slightly better resolution and a couple more frames per second, is anyone other than DF, with their pixel counting and FPS monitoring, going to notice?
I personally don't think so.

Question though, what is slightly better than 4K?
 