
Next-Gen PS5 & XSX |OT| Console tEch threaD


husomc

Member
I don't want to sound like a Microsoft or Xbox hater, because I really am not one, but the more time passes, the more they throw around fancy names and fancy presentations without actually saying or clarifying much.


We already had a console teardown, and yet a lot of aspects beyond the general specifications are still not clear at all, and they never bothered to clarify them later on (a lot of stuff related to the SSD and the actual I/O structure was kept quite vague compared to the PS5 talk, even though that's a pretty major point of the new systems).

There was a full gameplay event, where the only "gameplay" anybody saw were a couple of seconds here and there as part of scripted trailers, and not a single game shown was even actually running on an Xbox (as they thankfully disclaimed, this time around).

Now with this video (and the accompanying blog post) I don't even really know what they were trying to say other than the absolute obvious or the absolutely pointless. All I can gather is that "Optimized for Series X" means that the game is an actual XSX release, not just an XB1 game running in backwards compatibility mode (well you don't fucking say), and that "select games" will run at 4K and "up to 120 fps" (again, this is so vague they are basically saying nothing; can you be bothered to name ONE game that you KNOW will run at 60, let alone one at 120?).

EDIT: I remembered and checked that on a different occasion the developers of Dirt 5 said the game will run at 120 fps on XSX. Why Microsoft doesn't parade this a lot more remains a mystery to me.


I don't doubt that the XSX will be a good piece of hardware, because the components inside it are objectively good, but I'm starting to wonder if something behind the curtain is moving slower than anticipated (like issues with devkit optimization/performance), and they are putting forward as much filler as possible for months on end with little to no substance behind it.

I really hope for Microsoft's sake that they are just playing a weird waiting game and come July they will actually show something tangible running on this machine, because if a Halo video plays and I read "Game footage shown is running on equivalent hardware as it is expected to perform on Xbox Series X", then I'll feel justified anticipating issues at launch.

Tetris runs at 120 fps on the XSX
 

Corndog

Banned
I've been thinking about the scenarios where the PS5 has to drop the speed of its APU. I assume most developers would drop the CPU speed over the GPU in most scenarios, considering how much more CPU power they now have.
Would this result in a fairly steep drop for the CPU, since it is a lot less power hungry compared to the GPU? Is it possible to just run fewer cores and not drop the frequency? That could be a better scenario.

I am also interested in how this all works. Do you select a profile that drops the CPU if you hit above max power? Another profile that drops the GPU above max power? Or something more complex?
 

geordiemp

Member
I've been thinking about the scenarios where the PS5 has to drop the speed of its APU. I assume most developers would drop the CPU speed over the GPU in most scenarios, considering how much more CPU power they now have.
Would this result in a fairly steep drop for the CPU, since it is a lot less power hungry compared to the GPU? Is it possible to just run fewer cores and not drop the frequency? That could be a better scenario.

I am also interested in how this all works. Do you select a profile that drops the CPU if you hit above max power? Another profile that drops the GPU above max power? Or something more complex?

You have not been thinking too hard lol



Poor attempt at FUD. Try and learn about how this stuff works, then try again a little harder.

I would discuss it with you, but you're just trolling and it's too hot.
 
Last edited:

Brudda26

Member
I've been thinking about the scenarios where the PS5 has to drop the speed of its APU. I assume most developers would drop the CPU speed over the GPU in most scenarios, considering how much more CPU power they now have.
Would this result in a fairly steep drop for the CPU, since it is a lot less power hungry compared to the GPU? Is it possible to just run fewer cores and not drop the frequency? That could be a better scenario.

I am also interested in how this all works. Do you select a profile that drops the CPU if you hit above max power? Another profile that drops the GPU above max power? Or something more complex?
You didn't pay much attention to Cerny during the Road to PS5 talk. He stated that a 2% drop in clocks results in something like 10% less power consumption. They will barely have to drop the frequency, if at all.
 

IntentionalPun

Ask me about my wife's perfect butthole
I've been thinking about the scenarios where the PS5 has to drop the speed of its APU. I assume most developers would drop the CPU speed over the GPU in most scenarios, considering how much more CPU power they now have.
Would this result in a fairly steep drop for the CPU, since it is a lot less power hungry compared to the GPU? Is it possible to just run fewer cores and not drop the frequency? That could be a better scenario.

I am also interested in how this all works. Do you select a profile that drops the CPU if you hit above max power? Another profile that drops the GPU above max power? Or something more complex?
Well it's workload based; I don't think anyone has to "select" anything for the final production code of a game. The dev kits support locked profiles for testing, but that's not how final game code works.

You send a workload with more GPU tasks than CPU tasks and the GPU gets more of the power. If you overload both somehow it likely down-clocks both, but as Cerny said in this interview that's pretty much not gonna happen:


Developers don't need to optimise in any way; if necessary, the frequency will adjust to whatever actions the CPU and GPU are performing. I think you're asking what happens if there is a piece of code intentionally written so that every transistor (or the maximum number of transistors possible) in the CPU and GPU flip on every cycle. That's a pretty abstract question, games aren't anywhere near that amount of power consumption. In fact, if such a piece of code were to run on existing consoles, the power consumption would be well out of the intended operating range and it's even possible that the console would go into thermal shutdown. PS5 would handle such an unrealistic piece of code more gracefully.
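To make that concrete, here's a toy model of a fixed SoC power budget where the clocks follow the activity. Every number in it (the 200 W cap, the base wattages, the cubic power curve, the activity values) is made up purely for illustration; this is not Sony's actual algorithm, just the general idea.

// Toy model of a shared SoC power budget, NOT Sony's implementation.
// All numbers and names here are invented purely to illustrate the idea
// that the clocks follow activity, not the other way around.
public class PowerBudgetSketch {
    static final double TOTAL_BUDGET_WATTS = 200.0;   // hypothetical cap

    // Rough assumption: power scales ~cubically with frequency for a given activity level.
    static double estimatePower(double baseWatts, double freqRatio, double activity) {
        return baseWatts * activity * Math.pow(freqRatio, 3);
    }

    // Returns {cpuFreqRatio, gpuFreqRatio}: start at max clocks and shave a couple of
    // percent off the less-loaded unit until the frame's estimated power fits the cap.
    static double[] fitToBudget(double cpuActivity, double gpuActivity) {
        double cpu = 1.0, gpu = 1.0;                   // 1.0 == max clock
        while (estimatePower(60, cpu, cpuActivity)
             + estimatePower(180, gpu, gpuActivity) > TOTAL_BUDGET_WATTS) {
            if (cpuActivity < gpuActivity) cpu -= 0.01; else gpu -= 0.01;
        }
        return new double[] { cpu, gpu };
    }

    public static void main(String[] args) {
        // A typical GPU-heavy frame: the budget isn't even exceeded, so both stay at max clock.
        double[] clocks = fitToBudget(0.4, 0.95);
        System.out.printf("CPU %.0f%%, GPU %.0f%% of max clock%n",
                clocks[0] * 100, clocks[1] * 100);
    }
}

The point of the toy example is the same one Cerny makes: real game workloads don't flip every transistor every cycle, so the cap is rarely the limiting factor in the first place.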
 
Last edited:

FacelessSamurai

..but cry so much I wish I had some
The PCMR people I know typically have a Switch and PS4 Pro, too. Back when I still had the time, money, and foreveraloneability to build top-end stuff I’d always buy consoles, too. They’d barely get used in comparison to my PC, but they sometimes made a nice change of pace, or had some interesting game I couldn’t play elsewhere. Plus it’s just nice buying tech and gaming stuff in general. I had cash burning a hole in my wallet every pay-day back then.

Despite some (typically newer) members of the PCMR cult not realising it's tongue in cheek and a bit of a joke, it's typically less well-off people that have to commit to one console for 7 years and then feel the need to aggressively defend that decision.

As for VR, between me and my immediate family we have 4x Oculus (DK2, Rift, Rift S, Quest), 1x HTC (Vive), 1x Valve (Index) and 3x Sony (1x PS VR V1, 2x PS VR V2) headsets.

The most used headset is probably the Rift S being used for iRacing. The second most by a clear margin is my PS VR V1 being used for Sony’s own Firewall Zero Hour. I also have the HTC Vive and it doesn’t get anywhere near the usage of my PS VR because of that game.

It always comes down to the games.
You are definitely not the norm. I know of 2 people who own VR headsets (Oculus Quest and Valve Index) in my whole circle of friends. Of all the ones that play on PC, only a few have bought a Nintendo Switch, and the ones that bought a PS4 or Xbox One just left them collecting dust: they bought them early thinking they'd use them, and in the end just play on their PCs. And the ones that own desktop PCs also usually own a gaming laptop for on-the-go gaming, for some reason.

I've seen polls online asking PC gamers if the Series X and PS5 were interesting enough for them to buy one to play alongside their PCs, and most PC gamers (usually above 85%) had no interest in getting a console, and that's a sentiment I've seen within my own circle of friends.

PC is about better graphics, but most people don't own high-end GPUs, so it's about choice, an open platform, mods, etc.

If it was all about the games people would own all platforms, yet it's not the case (definitely isn't mine either and it's not for a lack of money either). I firmly stand by my opinion that PC gamers have close to 0 interest in consoles and that most people that say they can play games there better (like Xbox games and now stuff like Horizon) won't actually be playing those games at higher fidelity than on Xbox One X or PS4 Pro (and now next gen consoles) but at a lower fidelity and honestly they don't care! They prefer the openness of the platform and what it brings to the table.
 
You are definitely not the norm. I know of 2 people who own VR headsets (Oculus Quest and Valve Index) in my whole circle of friends. Of all the ones that play on PC, only a few have bought a Nintendo Switch, and the ones that bought a PS4 or Xbox One just left them collecting dust: they bought them early thinking they'd use them, and in the end just play on their PCs. And the ones that own desktop PCs also usually own a gaming laptop for on-the-go gaming, for some reason.

I've seen polls online asking PC gamers if the Series X and PS5 were interesting enough for them to buy one to play alongside their PCs, and most PC gamers (usually above 85%) had no interest in getting a console, and that's a sentiment I've seen within my own circle of friends.

PC is about better graphics, but most people don't own high-end GPUs, so it's about choice, an open platform, mods, etc.

If it was all about the games people would own all platforms, yet it's not the case (definitely isn't mine either and it's not for a lack of money either). I firmly stand by my opinion that PC gamers have close to 0 interest in consoles and that most people that say they can play games there better (like Xbox games and now stuff like Horizon) won't actually be playing those games at higher fidelity than on Xbox One X or PS4 Pro (and now next gen consoles) but at a lower fidelity and honestly they don't care! They prefer the openness of the platform and what it brings to the table.
I'm a PC gamer and I do have a PS4 Pro and a Switch. I will definitely pre-order a PS5 too as soon as it's available. Most of my friends from the PC space are the same, minus the Switch for most of them.
 

Lunatic_Gamer

Gold Member
Inside Xbox Series X Optimized: Call of the Sea

Q: In addition to benefiting from the power and performance of Xbox Series X for quicker load times etc. what Xbox Series X features were you most excited to explore leveraging in the development of Call of the Sea?

A: Without a doubt, benefiting from the power and performance of the new generation is something that everyone is looking for, but it is not the only thing that calls our attention. We believe that some features like Smart Delivery are building the future of gaming by putting the players first and we are happy to be a part of this.

Q: How will these enhancements impact a player’s experience with Call of the Sea?

A: With the Smart Delivery feature you’ll always have access to the best version of the game. Sharing settings and games between different systems. That makes you design the game thinking as a whole and not as something that is tied to a single platform. On the other hand the power and performance of Xbox Series X will allow us to offer the game at beautiful 4K at 60fps, leveraging the rich game environments and making the art really shine.

Q: Why did your development team choose to focus on 4K Resolution, 60FPS and DirectX Raytracing as enhancement areas for Call of the Sea?

A: We are focusing on delivering the most beautiful game possible. Although we have a stylized art style, we are giving it a next-gen look, full of visual effects and movement in the scene. With DirectX Raytracing, we will have the chance to make the island even more present, almost come to life. Players will have the opportunity to enjoy the island’s stunning environments in beautiful 4K, allowing for a greater immersion and an overall better experience.

The power of this new hardware allows us to not have to make compromises between frame rate and resolution. We can finally offer the best of the two worlds to Xbox Series X players!

 

zaitsu

Banned
I've been thinking about the scenarios where the PS5 has to drop the speed of its APU. I assume most developers would drop the CPU speed over the GPU in most scenarios, considering how much more CPU power they now have.
Would this result in a fairly steep drop for the CPU, since it is a lot less power hungry compared to the GPU? Is it possible to just run fewer cores and not drop the frequency? That could be a better scenario.

I am also interested in how this all works. Do you select a profile that drops the CPU if you hit above max power? Another profile that drops the GPU above max power? Or something more complex?
This dude.
Dropping anything is automatic, also it happens in friction of frame and goes back. It's not fucking PC boost.
 
We're all getting amazing next-gen consoles. There's no reason to bicker. Sure, everybody has preferences, but that shouldn't stop all of us from celebrating that we're getting elite consoles and our favorite developers are competing with each other.

Xbox needs Sony, Sony needs Xbox, we all laugh at Nintendo.

This is the cycle of things.
Call brap brap, he specialises in Nintendo.
 

geordiemp

Member
XBOX X = 12 TF
XBOX S = 4 TF
_____________________
16 / 2 = 8 TF

is this true ?


Theoretical, based on RDNA1 numbers; we don't have ROP counts yet and all numbers are max potential, including TF.

Triangle rasterisation is 4 triangles per cycle.

PS5:
4 x 2.23 GHz = 8.92 billion triangles per second

XSX:
4 x 1.825 GHz = 7.3 billion triangles per second

Triangle culling rate is twice the number of triangles rasterised per cycle.

PS5:
8 x 2.23 GHz = 17.84 billion triangles per second

XSX:
8 x 1.825 GHz = 14.6 billion triangles per second

Pixel fillrate is with 4 shader arrays with 4 RBs (render backends) each, and each RB outputting 4 pixels. So 64 pixels per cycle.

PS5:
64 x 2.23 GHz = 142.72 billion pixels per second

XSX:
64 x 1.825 GHz = 116.8 billion pixels per second

Texture fillrate is based on 4 texture units (TMUs) per CU.

PS5:
4 x 36 x 2.23 GHz = 321.12 billion texels per second

XSX:
4 x 52 x 1.825 GHz = 379.6 billion texels per second

Raytracing in RDNA2 is alleged to come from modified TMUs.

PS5:
4 x 36 x 2.23 GHz = 321.12 billion ray intersections per second

XSX:
4 x 52 x 1.825 GHz = 379.6 billion ray intersections per second
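If anyone wants to sanity-check the arithmetic, here's a quick sketch reproducing the figures. The clocks (2.23 / 1.825 GHz) and CU counts (36 / 52) are the known console specs; the per-clock figures (4 triangles, 64 pixels, 4 TMUs per CU) are the RDNA1-based assumptions from the post above, not confirmed RDNA2 numbers.

// Reproduces the theoretical peak rates above: units-per-clock x clock speed.
// Per-clock figures are the post's RDNA1-based assumptions, not confirmed RDNA2 specs.
public class PeakRates {
    static void print(String name, double clockGHz, int cus) {
        System.out.printf("%s @ %.3f GHz, %d CUs%n", name, clockGHz, cus);
        System.out.printf("  Rasterised:  %.2f Gtri/s%n", 4 * clockGHz);
        System.out.printf("  Culled:      %.2f Gtri/s%n", 8 * clockGHz);
        System.out.printf("  Pixel fill:  %.2f Gpix/s%n", 64 * clockGHz);
        System.out.printf("  Texel fill:  %.2f Gtex/s%n", 4 * cus * clockGHz);
    }

    public static void main(String[] args) {
        print("PS5", 2.23, 36);    // 8.92 / 17.84 / 142.72 / 321.12
        print("XSX", 1.825, 52);   // 7.30 / 14.60 / 116.80 / 379.60
    }
}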
 
Last edited:

Corndog

Banned
This dude.
Dropping anything is automatic, also it happens in friction of frame and goes back. It's not fucking PC boost.
You’re right, it’s not PC boost. I am wondering about different scenarios, how profiles work, etc. There hasn’t been much talk about this.

edit: what is friction of frame?
 
Last edited:

zaitsu

Banned
You are definitely not the norm. I know of 2 people who own VR headsets (Oculus Quest and Valve Index) in my whole circle of friends. Of all the ones that play on PC, only a few have bought a Nintendo Switch, and the ones that bought a PS4 or Xbox One just left them collecting dust: they bought them early thinking they'd use them, and in the end just play on their PCs. And the ones that own desktop PCs also usually own a gaming laptop for on-the-go gaming, for some reason.

I've seen polls online asking PC gamers if the Series X and PS5 were interesting enough for them to buy one to play alongside their PCs, and most PC gamers (usually above 85%) had no interest in getting a console, and that's a sentiment I've seen within my own circle of friends.

PC is about better graphics, but most people don't own high-end GPUs, so it's about choice, an open platform, mods, etc.

If it was all about the games people would own all platforms, yet it's not the case (definitely isn't mine either and it's not for a lack of money either). I firmly stand by my opinion that PC gamers have close to 0 interest in consoles and that most people that say they can play games there better (like Xbox games and now stuff like Horizon) won't actually be playing those games at higher fidelity than on Xbox One X or PS4 Pro (and now next gen consoles) but at a lower fidelity and honestly they don't care! They prefer the openness of the platform and what it brings to the table.
I went from PC being my only gaming machine my whole life to PS4. My best PCMR racing buddy is buying his first console when the PS5 launches, because he threw the axe in God of War for 30 minutes straight at my house and played half of the first TLOU. He is interested in Sony exclusives because he says he isn't seeing that kind of game on PC. Clearly he isn't interested in MS franchises.
 

Corndog

Banned
Profiles are only available to developers for debugging and optimization. Released code does not run using a fixed profile. Those questions have also been addressed by Cerny.
Have they said how you determine what gets downclocked? Like I said, it seems like only downclocking the CPU would be the way to go. How is that determined?
 

FranXico

Member
Have they said how you determine what gets downclocked? Like I said, it seems like only downclocking the CPU would be the way to go. How is that determined?
There are profiles available to developers favoring the GPU or the CPU. The assumption that the CPU always gets downclocked stems from the fact that most games are GPU-bound.
Again, all of this is common knowledge.
 
I think people still don't know how scaling works. lol

XSX: 16GB RAM, 3Ghz processor, 12TF GPU
XSS: 7.5GB RAM, "a less powerful processor", 4TF GPU

Yep, simply lowering the resolution is going to make up for all those other shortcomings.

"Scaling" means the ability to handle increased workloads (both horizontally and vertically) by adding more resources to the system. That could mean, for example, bringing more servers online to better handle requests to a website as demand grows, or by adding more RAM to a single system to give it more to work with.

What you're talking about is ... well, completely and utterly wrong.

If you have less RAM, that means you have fewer resources for things like character models, AI, environment details, size, etc., etc. etc. Your less powerful GPU could be helped by cutting the resolution, but even then certain effects and other calculations might be too heavy for it to handle and would have to be cut. See Apple's iPhones: the less powerful iPhones have certain graphics effects cut because the GPUs don't have the grunt to handle them.

And what about the CPU? Lowering the resolution in The Witcher 3 doesn't help in situations where the game is CPU-bound: Novigrad being the example cited most often. There's a lot going on there that simply drawing fewer pixels won't fix. Having fewer people and less going on would, though. Take all those entities out so that they're not trying to interact with the world and one another, and you're going to see a spike in performance.

So, to return to the point - in order to scale, the devs would need to cut content from the XSS version of the game or simply not include it in either version, in order to deliver an even experience.

And no, Smart Delivery doesn't fix these things. That's just fancy marketing speak for free upgrades.
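To put the CPU-bound point into a toy frame-time model: the frame takes as long as the slower of the CPU and GPU, and resolution scaling only ever shrinks the GPU term. All the millisecond numbers below are invented for illustration.

// Toy frame-time model: a frame takes as long as the slower of CPU and GPU work,
// and dropping resolution only shrinks the GPU part. Numbers are made up.
public class FrameTimeSketch {
    static double frameTimeMs(double cpuMs, double gpuMsAtNative, double resolutionScale) {
        double gpuMs = gpuMsAtNative * resolutionScale; // GPU cost ~ pixels drawn
        return Math.max(cpuMs, gpuMs);                  // CPU cost unaffected by resolution
    }

    public static void main(String[] args) {
        // GPU-bound scene: dropping from 4K to 1080p (a quarter of the pixels) helps a lot.
        System.out.println(frameTimeMs(10, 32, 1.0));   // 32 ms (~31 fps)
        System.out.println(frameTimeMs(10, 32, 0.25));  // 10 ms (~100 fps)

        // CPU-bound scene (think Novigrad): resolution barely matters.
        System.out.println(frameTimeMs(28, 20, 1.0));   // 28 ms
        System.out.println(frameTimeMs(28, 20, 0.25));  // still 28 ms
    }
}

Lowering resolution only attacks the GPU term, which is exactly the problem with CPU-bound scenes like Novigrad.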
 

Corndog

Banned
There are profiles available to developers favoring the GPU or the CPU. The assumption that the CPU always gets downclocked stems from the fact that most games are GPU-bound.
Again, all of this is common knowledge.
But you said profiles are only for debugging. Then what happens when you are not using a profile for the release executable? Are they using profiles in development to determine when they hit a power threshold, and then hand-tuning the code to use less CPU or GPU in certain circumstances? I would hope not. That’s my confusion.
 

FranXico

Member
But you said profiles are only for debugging. Then what happens when you are not using a profile for the release executable? Are they using profiles in development to determine when they hit a power threshold, and then hand-tuning the code to use less CPU or GPU in certain circumstances? I would hope not. That’s my confusion.
Not necessarily. And even if that was the case, what you are describing is optimization, which already happens nowadays. Even for PC ports.
 

Corndog

Banned
XSX: 16GB RAM, 3Ghz processor, 12TF GPU
XSS: 7.5GB RAM, "a less powerful processor", 4TF GPU

Yep, simply lowering the resolution is going to make up for all those other shortcomings.

"Scaling" means the ability to handle increased workloads (both horizontally and vertically) by adding more resources to the system. That could mean, for example, bringing more servers online to better handle requests to a website as demand grows, or by adding more RAM to a single system to give it more to work with.

What you're talking about is ... well, completely and utterly wrong.

If you have less RAM, that means you have fewer resources for things like character models, AI, environment details, size, etc., etc. etc. Your less powerful GPU could be helped by cutting the resolution, but even then certain effects and other calculations might be too heavy for it to handle and would have to be cut. See Apple's iPhones: the less powerful iPhones have certain graphics effects cut because the GPUs don't have the grunt to handle them.

And what about the CPU? Lowering the resolution in The Witcher 3 doesn't help in situations where the game is CPU-bound: Novigrad being the example cited most often. There's a lot going on there that simply drawing fewer pixels won't fix. Having fewer people and less going on would, though. Take all those entities out so that they're not trying to interact with the world and one another, and you're going to see a spike in performance.

So, to return to the point - in order to scale, the devs would need to cut content from the XSS version of the game or simply not include it in either version, in order to deliver an even experience.

And no, Smart Delivery doesn't fix these things. That's just fancy marketing speak for free upgrades.
I have to disagree. If you massively cut CPU power I would say you are correct; low CPU power would be a problem. But if it is close, the GPU is proportional to the lower resolution, and memory is proportional to lower-res textures, it should work in most situations. There may be some outliers if you ran a lot of compute on the GPU.
 

Corndog

Banned
Not necessarily. And even if that was the case, what you are describing is optimization, which already happens nowadays. Even for PC ports.
But what determines if and when the cpu downclocks? Does each game specify whether it is cpu or gpu bound? Then when you are about to go over your power budget either the cpu or gpu gets its frequency lowered? Or do you try to use the profilers to write your code in such a way that you never go over your power budget?

Anyways, I’m done. Thanks for your input.
 

K.N.W.

Member
These games aren't using Turing features, so I think it was a fair comparison. If a game used all the Turing features, the difference would only be bigger.
I don’t think the comparison between a higher clocked Turing and a lower clocked Pascal GPU in current games will help in guessing next-gen GPU performance. We do not know yet to what extent RDNA2 GPUs benefit from a higher clock speed. Also, we do not know how the typical rendering workload of a next-gen engine is composed. The Unreal Engine 5 demo might imply that more detail is rendered by micro polygons instead of complex pixel shaders, which could significantly change the workload.
Can't wait for your comparison! I just hope it won't cost you too much time and effort.


Okkkk, I got a bit further into my GPU confrontation battle. Here you can see my 1080 Ti (first video) running at stock frequencies (1481 MHz base clock) with its whopping 11.5 TFLOPS, facing against a 1080 running at a fixed 2000 MHz (1733 MHz being its default max boost frequency, capable of outputting 8.9 TFLOPS; doing some quick and dirty maths we might assume (2000/1733)*8.9 = 10.2 TFLOPS).

Batman Arkham Knight 1440P ALL MAX 1080 Ti FE @ stock



Batman Arkham Knight 1440P ALL MAX 1080 FE @ 2 GHz (From 0:52 to 1:36)




Annnnnd, we can see how they perform just the same o_O A few times the 1080 Ti runs a bit faster, but many times I can see the 1080 dropping way less and, in general, I feel it stays above 100 FPS more easily. So we have a smaller (2560 vs 3584 CUDA cores), less powerful (10.2 vs 11.5 TFLOPS) but faster card (around 2000 MHz vs 1580 MHz) running the same or better compared to the bigger and more expensive card. I don't really know what to say. Just the thought of them performing the same is ridiculous, but it certainly shows that TFLOPS, as a metric, don't count for that much if you don't factor in all of the system's characteristics.
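For reference, those TFLOPS figures fall out of 2 FLOPs (one fused multiply-add) per CUDA core per clock; here's a quick check, using the clocks quoted above (roughly 1.6 GHz typical boost for the Ti is my assumption from the "around 1580 MHz" figure, and the fixed 2 GHz for the 1080):

// Where the TFLOPS figures come from: 2 FLOPs (one FMA) per CUDA core per clock.
public class TflopsCheck {
    static double tflops(int cudaCores, double clockGHz) {
        return 2.0 * cudaCores * clockGHz / 1000.0;
    }

    public static void main(String[] args) {
        System.out.printf("GTX 1080 Ti: %.1f TFLOPS%n", tflops(3584, 1.6));  // ~11.5
        System.out.printf("GTX 1080:    %.1f TFLOPS%n", tflops(2560, 2.0));  // ~10.2
    }
}

With the 1080 held at 2 GHz the paper gap is only about 10%, which helps explain why the two cards trade blows.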
 
Last edited:

Tiago07

Member
I've been thinking about the scenarios where the PS5 has to drop the speed of its APU. I assume most developers would drop the CPU speed over the GPU in most scenarios, considering how much more CPU power they now have.
Would this result in a fairly steep drop for the CPU, since it is a lot less power hungry compared to the GPU? Is it possible to just run fewer cores and not drop the frequency? That could be a better scenario.

I am also interested in how this all works. Do you select a profile that drops the CPU if you hit above max power? Another profile that drops the GPU above max power? Or something more complex?
Okay, what I'm giving you now is my interpretation.

The scenario where the PS5 drops the clock is the scenario where power consumption goes over the budget, and like Mark Cerny said, a 10% reduction in power consumption only takes a couple percent off the clock, "so I would expect any downclock to be pretty minor".

Let's think about it: with SmartShift, when the CPU's power consumption is low but the GPU's is high, spare power from the CPU goes to the GPU so it can maintain 2.23 GHz. A reduction of the CPU clock is possible at that point, but since the CPU is not being used that heavily by the game, we will not see frame rate or resolution drops. The same goes the other way around, when the GPU is low and the CPU is higher.

In this case developers don't have to optimize anything; the system finds the balance by itself.
 

Bo_Hazem

Banned
Okkkk, I got a bit further into my GPU confrontation battle. Here you can see my 1080 Ti (first video) running at stock frequencies (1481 MHz base clock) with its whopping 11.5 TFLOPS, facing against a 1080 running at a fixed 2000 MHz (1733 MHz being its default max boost frequency, capable of outputting 8.9 TFLOPS; doing some quick and dirty maths we might assume (2000/1733)*8.9 = 10.2 TFLOPS).

Batman Arkham Knight 1440P ALL MAX 1080 Ti FE @ stock



Batman Arkham Knight 1440P ALL MAX 1080 FE @ 2 GHz (From 0:52 to 1:36)




Annnnnd, we can see how they perform just the same o_O A few times the 1080 Ti runs a bit faster, but many times I can see the 1080 dropping way less and, in general, I feel it stays above 100 FPS more easily. So we have a smaller (2560 vs 3584 CUDA cores), less powerful (10.2 vs 11.5 TFLOPS) but faster card (around 2000 MHz vs 1580 MHz) running the same or better compared to the bigger and more expensive card. I don't really know what to say. Just the thought of them performing the same is ridiculous, but it certainly shows that TFLOPS, as a metric, don't count for that much if you don't factor in all of the system's characteristics.


Wonderful efforts, mate. Interesting test!
 

Tqaulity

Member
Okkkk, I got a bit further into my GPU confrontation battle. Here you can see my 1080 Ti (first video) running at stock frequencies (1481 MHz base clock) with its whopping 11.5 TFLOPS, facing against a 1080 running at a fixed 2000 MHz (1733 MHz being its default max boost frequency, capable of outputting 8.9 TFLOPS; doing some quick and dirty maths we might assume (2000/1733)*8.9 = 10.2 TFLOPS).

Batman Arkham Knight 1440P ALL MAX 1080 Ti FE @ stock



Batman Arkham Knight 1440P ALL MAX 1080 FE @ 2 GHz (From 0:52 to 1:36)




Annnnnd, we can see how they perform just the same o_O A few times the 1080 Ti runs a bit faster, but many times I can see the 1080 dropping way less and, in general, I feel it stays above 100 FPS more easily. So we have a smaller (2560 vs 3584 CUDA cores), less powerful (10.2 vs 11.5 TFLOPS) but faster card (around 2000 MHz vs 1580 MHz) running the same or better compared to the bigger and more expensive card. I don't really know what to say. Just the thought of them performing the same is ridiculous, but it certainly shows that TFLOPS, as a metric, don't count for that much if you don't factor in all of the system's characteristics.

Appreciate the work and you sharing. I still can't believe there are people here after months of discussion and explanation that still don't get this. There have been a number of examples of that exact phenomenon already posted around the forums. Maybe we need a sticky thread with several real examples of smaller GPUs at higher clocks actually outperforming (or matching) the larger slower GPUs so people can see it clearly.

There will be some workloads where Xbox Series X will indeed outperform PS5. And for the record, IF the Xbox Series X had the advantage in all aspects of the GPU, then it may turn out that we would see that 15-20% realized in the majority of cases. But the 20% clock advantage in favor of PS5, along with several other custom optimizations, will close the gap considerably in general workloads. And yes, there will be some cases where PS5 will match or even exceed the Series X performance. I can't wait for the internet meltdown that will happen once we see an actual game performing better on PS5 :)
 
Last edited:

roops67

Member
Okkkk, I got a bit further into my GPU confrontation battle. Here you can see my 1080 Ti (first video) running at stock frequencies (1481 MHz base clock) with its whopping 11.5 TFLOPS, facing against a 1080 running at a fixed 2000 MHz (1733 MHz being its default max boost frequency, capable of outputting 8.9 TFLOPS; doing some quick and dirty maths we might assume (2000/1733)*8.9 = 10.2 TFLOPS).

Batman Arkham Knight 1440P ALL MAX 1080 Ti FE @ stock



Batman Arkham Knight 1440P ALL MAX 1080 FE @ 2 GHz (From 0:52 to 1:36)




Annnnnd, we can see how they perform just the same o_O A few times the 1080 Ti runs a bit faster, but many times I can see the 1080 dropping way less and, in general, I feel it stays above 100 FPS more easily. So we have a smaller (2560 vs 3584 CUDA cores), less powerful (10.2 vs 11.5 TFLOPS) but faster card (around 2000 MHz vs 1580 MHz) running the same or better compared to the bigger and more expensive card. I don't really know what to say. Just the thought of them performing the same is ridiculous, but it certainly shows that TFLOPS, as a metric, don't count for that much if you don't factor in all of the system's characteristics.

I'm curious. The videos look nowhere near the framerate shown to the left of the screen, is that because you captured at 30fps? And how is it calculating the GPU and CPU loads? It was showing something like 95 to 99% GPU load, surely that can't be right, especially on an 11 TF card as you said? What is that 'GPU load' percentage actually showing?? It definitely can't be occupancy!??
 

Bo_Hazem

Banned
hi guys according to my math.

PS5 disc = 10 TF
PS5 digital = 10 TF
_____________________
20 / 2 = 10 TF




XBOX X = 12 TF
XBOX S = 4 TF
_____________________
16 / 2 = 8 TF

is this true ?



/s

I get what you're doing there, but we need more devs to speak about it. So far, reports/rumors are conflicting, most likely depending on each dev's current vision of next gen, as things might change going forward:


 
Last edited:

Fordino

Member
That is true, however I don't see the point of that math. Assuming similarly capable I/O and CPUs, the Series S will run games designed for the Series X at a lower resolution, that's it.
I don't think it'll be that easy tbh.

Can you imagine a 4TF PS5 Lite running the new Ratchet & Clank at 1080p? There's so much going on per scene that they'd surely have to look at also cutting down on the crazy particle effects, ray tracing, warping/swirling effects, number of enemies on-screen, etc., to have it rendering comfortably on a 4TF GPU.

I suppose considering that all XBox exclusives will be designed to run on the XBox One anyway, and not exclusively for the Series X, this shouldn't be an issue for their devs.
 

Corndog

Banned
Well it's workload based; I don't think anyone has to "select" anything for the final production code of a game. The dev kits support locked profiles for testing, but that's not how final game code works.

You send a workload with more GPU tasks than CPU tasks and the GPU gets more of the power. If you overload both somehow it likely down-clocks both, but as Cerny said in this interview that's pretty much not gonna happen:

How do you send less "workload" to the CPU?
Give me an example in code please. C, Java, or C++. Thanks.
 

Corndog

Banned
// two runnables spun up on two threads: more CPU work submitted that frame
Thread thread = new Thread(runnable);
thread.start();
Thread thread2 = new Thread(runnable2);
thread2.start();

vs.

// only one runnable started: less CPU work submitted
Thread thread = new Thread(runnable);
thread.start();

lol


WTF kind of question is that?
What determines how many threads? What thread are you not running when you hit max power? You obviously can just have it sleep.

Edit: Does that make sense? In other words, how could you decrease your CPU workload unless you are already not maxing out the CPU?

Edit 2: I really think the best method is to leave plenty of headroom on the CPU side, to the point where you can max out the GPU in all situations.
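Something like this is roughly what I'm imagining a game could do to shed CPU work per frame: scale back an optional system (crowd AI updates, extra physics, etc.) when the previous frame ran long. Totally made-up numbers and names, purely to illustrate the idea; not anything Sony or any developer has described.

// Hypothetical per-frame CPU workload governor. Illustrative only.
public class CpuBudgetSketch {
    private int crowdAgentsToUpdate = 500;             // made-up tunable workload

    void tune(double lastFrameCpuMs, double budgetMs) {
        if (lastFrameCpuMs > budgetMs) {
            // over budget: update fewer agents next frame (the rest reuse cached decisions)
            crowdAgentsToUpdate = Math.max(100, crowdAgentsToUpdate - 50);
        } else if (lastFrameCpuMs < budgetMs * 0.8) {
            // comfortably under budget: ramp the work back up
            crowdAgentsToUpdate = Math.min(500, crowdAgentsToUpdate + 25);
        }
    }

    int agentsThisFrame() { return crowdAgentsToUpdate; }

    public static void main(String[] args) {
        CpuBudgetSketch sketch = new CpuBudgetSketch();
        sketch.tune(9.5, 8.0);                          // pretend last frame's CPU time blew an 8 ms budget
        System.out.println(sketch.agentsThisFrame());   // 450
    }
}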
 
Last edited:
Inside Xbox Series X Optimized: Call of the Sea

Q: In addition to benefiting from the power and performance of Xbox Series X for quicker load times etc. what Xbox Series X features were you most excited to explore leveraging in the development of Call of the Sea?

A: Without a doubt, benefiting from the power and performance of the new generation is something that everyone is looking for, but it is not the only thing that calls our attention. We believe that some features like Smart Delivery are building the future of gaming by putting the players first and we are happy to be a part of this.

Q: How will these enhancements impact a player’s experience with Call of the Sea?

A: With the Smart Delivery feature you’ll always have access to the best version of the game. Sharing settings and games between different systems. That makes you design the game thinking as a whole and not as something that is tied to a single platform. On the other hand the power and performance of Xbox Series X will allow us to offer the game at beautiful 4K at 60fps, leveraging the rich game environments and making the art really shine.

Q: Why did your development team choose to focus on 4K Resolution, 60FPS and DirectX Raytracing as enhancement areas for Call of the Sea?

A: We are focusing on delivering the most beautiful game possible. Although we have a stylized art style, we are giving it a next-gen look, full of visual effects and movement in the scene. With DirectX Raytracing, we will have the chance to make the island even more present, almost come to life. Players will have the opportunity to enjoy the island’s stunning environments in beautiful 4K, allowing for a greater immersion and an overall better experience.

The power of this new hardware allows us to not have to make compromises between frame rate and resolution. We can finally offer the best of the two worlds to Xbox Series X players!


This seems as contrived as the Dirt interview. Both seem entirely about mentioning certain marketing bullet points and driving them home instead of really talking about the game or even being remotely specific in features. It’s almost as if the questions were written after the fact, or were presented along with a list of phrases they need to include in the answer.

I get that all interviews are essentially marketing, but this is so wooden and so obviously marketing speak instead of an actual conversation or deep dive.
 

Corndog

Banned
This seems as contrived as the Dirt interview. Both seem entirely about mentioning certain marketing bullet points and driving them home instead of really talking about the game or even being remotely specific in features. It’s almost as if the questions were written after the fact, or were presented along with a list of phrases they need to include in the answer.

I get that all interviews are essentially marketing, but this is so wooden and so obviously marketing speak instead of an actual conversation or deep dive.
Ya. I think Microsoft would be better off just waiting until they show the games next month.
 