
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

Kusarigama

Member
Oh. So that's how it works. When it is something that you claim (i.e. that there's more to SmartShift than that), then suddenly the burden of proof is no longer on the one making the claim, and I have to go look for it myself? GTFO.
You're a liar; you've already been caught spreading nonsense saying MS started "enhanced BC". You want to be excited about the XSX, go on. Now you've been caught again, lying about Cerny saying that. You are clearly here pushing an agenda and trying to spread FUD about the PS5.
 

BatSu

Member
 
The Xbox Series X takes 11 seconds to load State of Decay, an open-world game that is smaller than PlayStation's Spider-Man.




On the other hand, the PlayStation 5 takes less than a second to load Spider-Man, and it's fast enough to keep rendering Spider-Man's massive open world as the camera speeds through it at lightning speed. This is game-changing technology.




Humans are very visual creatures: show them a difference and they will be amazed. If I were MS, I'd hope third parties make loading times for the PS5 and XSX the same, because if word of mouth gets out that games on the PS5 load almost instantly while you still have to wait on the XSX, that's going to spread pretty quickly.
 

pasterpl

Member
I guess indie developers are not developers :pie_thinking:

nope, devs that are saying stuff that some here don’t like/agree with are not devs (simple)

Oh jeez, lol



lol


Re: that Spider-Man demo vs State of Decay. Spider-Man is a tech demo; we don't know anything about what was going on behind that presentation (was the OS running, were there any other games in the background, is this the best-case behaviour we can expect, etc.), while State of Decay is a full game demonstrating backward compatibility and improvements that users will get out of the box.

Humans are very visual creatures: show them a difference and they will be amazed. If I were MS, I'd hope third parties make loading times for the PS5 and XSX the same, because if word of mouth gets out that games on the PS5 load almost instantly while you still have to wait on the XSX, that's going to spread pretty quickly.

MS could counter with better FPS, better RT, better resolution, etc. We have no idea what will be more appealing to customers, and we don't know what the actual load times will be in native next-gen games, as we have seen none of them yet.
 

EliteSmurf

Member
Re: that Spider-Man demo vs State of Decay. Spider-Man is a tech demo; we don't know anything about what was going on behind that presentation (was the OS running, were there any other games in the background, is this the best-case behaviour we can expect, etc.), while State of Decay is a full game demonstrating backward compatibility and improvements that users will get out of the box.
They are both tech demos; it's not that hard to understand.
 

BluRayHiDef

Banned
No, but I can verify that the XSX has the better GPU, CPU, and bandwidth, all at sustained clocks, making it the superior console in terms of power.

All of which will result in a meager 15% more graphical fidelity, which will be insignificant considering that the PS5 will be POWERFUL ENOUGH to render games in 4K with great graphics and stable framerates. Hence, the super-fast loading times and in-game streaming of assets, as well as Sony's terrific first-party games, will make the PS5 the better machine.
 

nosseman

Member
The Xbox Series X takes 11 seconds to load State of Decay, an open-world game that is smaller than PlayStation's Spider-Man.




On the other hand, the PlayStation 5 takes less than a second to load Spider-Man, and it's fast enough to keep rendering Spider-Man's massive open world as the camera speeds through it at lightning speed. This is game-changing technology.




No. That demo with Spider-Man was not loading the game; it was changing location within a map.
 

Deleted member 775630

Unconfirmed Member
The Xbox Series X takes 11 seconds to load State of Decay, an open-world game that is smaller than PlayStation's Spider-Man.
Did you even watch that video? They clearly said it was unoptimised, purely to show how much faster it is even when you don't do anything special. Do you have the feeling that the Spider-Man demo was not optimised?
 

martino

Member

ZehDon

Gold Member
The Xbox Series X takes 11 seconds to load State of Decay, an open-world game that is smaller than PlayStation's Spider-Man.

On the other hand, the PlayStation 5 takes less than a second to load Spider-Man, and it's fast enough to keep rendering Spider-Man's massive open world as the camera speeds through it at lightning speed. This is game-changing technology.

That's... not how comparisons work. Let's pretend that the Spider-Man code wasn't re-engineered for the purposes of being a demo (it was). The base load time of State of Decay 2 is approximately 45 seconds, which the Series X decreases to approximately 7 seconds: just 15% of the original load time. The base load time of Spider-Man on the PS4 is approximately 8 seconds, which the PS5 decreases to approximately 0.8 seconds: just 10% of the original load time. So we're looking at a 10% vs 15% final load time. However, it's likely that a game with a significantly improved asset streaming system, such as Spider-Man in direct comparison to State of Decay 2, would see a better overall improvement from the SSD speeds. So, even when adjusting the numbers to enable a correct comparison, we're still not comparing apples to apples.
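As a rough sanity check of the load-time arithmetic above (all figures are the thread's approximations, not measurements):

```python
# Rough load-time comparison using the figures quoted in this thread.
def reduction(old_seconds, new_seconds):
    """Return the new load time as a percentage of the old one."""
    return 100 * new_seconds / old_seconds

sod2 = reduction(45, 7)        # State of Decay 2: last-gen baseline vs Series X
spiderman = reduction(8, 0.8)  # Spider-Man: PS4 baseline vs PS5 demo

print(f"SOD2 on Series X: {sod2:.1f}% of original load time")     # ~15.6%
print(f"Spider-Man on PS5: {spiderman:.1f}% of original load time")  # 10.0%
```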

Further to your posts, Sony haven't invented "game-changing technology"; they just have a fast SSD. Microsoft are also using a fast SSD, though I think we can agree it's certainly not as fast as Sony's. Microsoft, however, actually have invented some game-changing technology. In addition to Mesh Shaders (worth pointing out that the PS5 has the Geometry Engine to replicate this functionality), the real dark horse is SFS, or Sampler Feedback Streaming. This tech, available in DX12 Ultimate, is going to shrink IO bandwidth requirements and memory footprints pretty significantly for texture assets, which are the vast majority of the game data being loaded in. If you care to learn more, check out Microsoft's 17-minute developer-focused presentation on it. Warning: it's dense and technical in nature.
The short version is that, while Sony can load an 8MB 4K texture quickly, Microsoft have developed a way to load just 800KB of that same texture for the same rendered result. Thus, Microsoft's slower SSD combined with SFS will most likely outpace Sony's faster SSD when loading the same texture assets. As this is patented technology that requires suitable hardware to utilise it, it's unlikely that Sony will be able to replicate it within their own API in the near term.
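For intuition, here is a toy sketch of the partial-residency idea behind sampler-feedback-style streaming: load only the texture tiles the renderer actually sampled, instead of the whole mip. The tile counts are made-up illustrative numbers (64 KiB is the standard D3D tiled-resource tile size), not Microsoft's actual implementation:

```python
# Toy model: feedback-driven texture streaming loads only sampled tiles.
TILE_BYTES = 64 * 1024  # 64 KiB, the standard D3D tiled-resource tile size

def bytes_loaded(total_tiles, sampled_tiles):
    """Bytes for a naive full load vs a feedback-driven partial load."""
    return total_tiles * TILE_BYTES, sampled_tiles * TILE_BYTES

# e.g. a mip occupying 128 tiles (8 MiB), of which feedback flagged only 13
full, partial = bytes_loaded(128, 13)
print(f"naive load: {full // 1024} KiB, feedback-driven: {partial // 1024} KiB")
print(f"ratio: {partial / full:.0%}")  # roughly the ~10x saving claimed above
```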

Anyway, I hope this helps you understand why, while Sony's loading presentation is impressive, I'm not convinced that their raw IO throughput is the best method to achieve the goals of diminished load times and better asset streaming.
 

ethomaz

Banned
PlayStation 5:
GPU: 2,304 TMUs (texture mapping units)
GPU frequency: 2,230 MHz
Fill Rate: 2,304 TMUs x 2,230 MHz = 5,137,920 MTexels/s ≈ 5,137.9 GTexels/s

Xbox Series X:
GPU: 3,328 TMUs (texture mapping units)
GPU frequency: 1,825 MHz
Fill Rate: 3,328 TMUs x 1,825 MHz = 6,073,600 MTexels/s ≈ 6,073.6 GTexels/s

-------------
Calculation of Percentage Difference: 5,137.9 GTexels/s / 6,073.6 GTexels/s = 0.845943 -> 100 x 0.845943 = 84.6%

The PlayStation 5's fill rate is 84.6% of the Xbox Series X's fill rate, which is negligible since its fill rate is fast enough to render games in 4K at 60 frames per second.




Oops, my bad.
I believe these TMU numbers are very wrong.

It is 4 per CU.

PS5: 144 TMUs / 321.12 GTexel/s
Xbox: 208 TMUs? / 379.6 GTexel/s?
 
Last edited:

BluRayHiDef

Banned
I believe these TMU numbers are very wrong.

It is 4 per CU.

PS5: 144 TMUs
Xbox: 208 TMUs?

Fixed.

PlayStation 5:
GPU: 144 TMUs (texture mapping units)
GPU frequency: 2,230 MHz
Fill Rate: 144 TMUs x 2,230 MHz = 321,120 MTexels/s = 321.12 GTexels/s

Xbox Series X:
GPU: 208 TMUs (texture mapping units)
GPU frequency: 1,825 MHz
Fill Rate: 208 TMUs x 1,825 MHz = 379,600 MTexels/s = 379.6 GTexels/s

-------------
Calculation of Percentage Difference: 321.12 GTexels/s / 379.6 GTexels/s = 0.845943 -> 100 x 0.845943 = 84.6%

The PlayStation 5's fill rate is 84.6% of the Xbox Series X's fill rate, which is negligible since its fill rate is fast enough to render games in 4K at 60 frames per second.
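The fill-rate arithmetic above can be checked in a couple of lines (clocks converted from MHz to Hz, results in GTexels/s; TMU counts are the ones quoted in this thread):

```python
# Texture fill rate = TMUs x core clock (clock converted from MHz to Hz).
def gtexels_per_s(tmus, clock_mhz):
    return tmus * clock_mhz * 1e6 / 1e9  # express result in GTexels/s

ps5 = gtexels_per_s(144, 2230)  # 321.12 GTexels/s
xsx = gtexels_per_s(208, 1825)  # 379.60 GTexels/s
print(f"PS5: {ps5:.2f} GTexels/s, XSX: {xsx:.2f} GTexels/s")
print(f"PS5 is {100 * ps5 / xsx:.1f}% of XSX")  # ~84.6%
```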
 
Basically, they are banning anyone who disagrees with Dictator's opinion.
Fixed.

PlayStation 5:
GPU: 144 TMUs (texture mapping units)
GPU frequency: 2,230 MHz
Fill Rate: 144 TMUs x 2,230 MHz = 321,120 MTexels/s = 321.12 GTexels/s

Xbox Series X:
GPU: 208 TMUs (texture mapping units)
GPU frequency: 1,825 MHz
Fill Rate: 208 TMUs x 1,825 MHz = 379,600 MTexels/s = 379.6 GTexels/s

-------------
Calculation of Percentage Difference: 321.12 GTexels/s / 379.6 GTexels/s = 0.845943 -> 100 x 0.845943 = 84.6%

The PlayStation 5's fill rate is 84.6% of the Xbox Series X's fill rate, which is negligible since its fill rate is fast enough to render games in 4K at 60 frames per second.
That's texture fill rate. There is also pixel fill rate, which is handled mostly by the ROPs. The PS5 has a ~20% advantage here (accounting for average clocks).
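The pixel fill-rate claim can be sketched the same way. The 64-ROP count for both GPUs is an assumption here (widely reported for both consoles, but not stated in this thread):

```python
# Pixel fill rate = ROPs x core clock (assuming 64 ROPs on both GPUs).
def gpixels_per_s(rops, clock_mhz):
    return rops * clock_mhz / 1000  # express result in GPixels/s

ps5 = gpixels_per_s(64, 2230)  # 142.72 GPixels/s at PS5's peak boost clock
xsx = gpixels_per_s(64, 1825)  # 116.80 GPixels/s
print(f"PS5 advantage at peak clock: {100 * (ps5 / xsx - 1):.1f}%")  # ~22.2%
```

At the PS5's peak clock the gap is ~22%; a slightly lower average clock brings it down to roughly the ~20% the post mentions.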
 
Last edited:

EliteSmurf

Member
So he basically said open-world games were not affected by SSD speeds, and a dev called him out.
Now he is crying, saying he never said that?

lol, pathetic.

And he asked for the dude who pointed out what happened to be banned... and ERA banned him, lol.
Dictator called the site trash and called out the moderation team over at that place.

And what happens?

The moderation team bows to his every command, and you've got users talking about how great he is. The guy must feel immortal over at that place.
 

BluRayHiDef

Banned
That's... not how comparisons work. Let's pretend that the Spider-Man code wasn't re-engineered for the purposes of being a demo (it was). The base load time of State of Decay 2 is approximately 45 seconds, which the Series X decreases to approximately 7 seconds: just 15% of the original load time. The base load time of Spider-Man on the PS4 is approximately 8 seconds, which the PS5 decreases to approximately 0.8 seconds: just 10% of the original load time. So we're looking at a 10% vs 15% final load time. However, it's likely that a game with a significantly improved asset streaming system, such as Spider-Man in direct comparison to State of Decay 2, would see a better overall improvement from the SSD speeds. So, even when adjusting the numbers to enable a correct comparison, we're still not comparing apples to apples.

Further to your posts, Sony haven't invented "game-changing technology"; they just have a fast SSD. Microsoft are also using a fast SSD, though I think we can agree it's certainly not as fast as Sony's. Microsoft, however, actually have invented some game-changing technology. In addition to Mesh Shaders (worth pointing out that the PS5 has the Geometry Engine to replicate this functionality), the real dark horse is SFS, or Sampler Feedback Streaming. This tech, available in DX12 Ultimate, is going to shrink IO bandwidth requirements and memory footprints pretty significantly for texture assets, which are the vast majority of the game data being loaded in. If you care to learn more, check out Microsoft's 17-minute developer-focused presentation on it. Warning: it's dense and technical in nature.
The short version is that, while Sony can load an 8MB 4K texture quickly, Microsoft have developed a way to load just 800KB of that same texture for the same rendered result. Thus, Microsoft's slower SSD combined with SFS will most likely outpace Sony's faster SSD when loading the same texture assets. As this is patented technology that requires suitable hardware to utilise it, it's unlikely that Sony will be able to replicate it within their own API in the near term.

Anyway, I hope this helps you understand why, while Sony's loading presentation is impressive, I'm not convinced that their raw IO throughput is the best method to achieve the goals of diminished load times and better asset streaming.

Until Microsoft shows Sampler Feedback Streaming in real time, I'll remain unconvinced. Hardware is foundational, and the I/O hardware of the PS5 is inherently faster than that of the XSX. Assuming that SFS will indeed deliver what it intends, I'm pretty sure that Sony can invent software that can match the compression of SFS. Furthermore, with Kraken, the PS5 is purported to be able to process data at 22GB/s.
 

Kusarigama

Member
That's... not how comparisons work. Let's pretend that the Spider-Man code wasn't re-engineered for the purposes of being a demo (it was). The base load time of State of Decay 2 is approximately 45 seconds, which the Series X decreases to approximately 7 seconds: just 15% of the original load time. The base load time of Spider-Man on the PS4 is approximately 8 seconds, which the PS5 decreases to approximately 0.8 seconds: just 10% of the original load time. So we're looking at a 10% vs 15% final load time. However, it's likely that a game with a significantly improved asset streaming system, such as Spider-Man in direct comparison to State of Decay 2, would see a better overall improvement from the SSD speeds. So, even when adjusting the numbers to enable a correct comparison, we're still not comparing apples to apples.

Further to your posts, Sony haven't invented "game-changing technology"; they just have a fast SSD. Microsoft are also using a fast SSD, though I think we can agree it's certainly not as fast as Sony's. Microsoft, however, actually have invented some game-changing technology. In addition to Mesh Shaders (worth pointing out that the PS5 has the Geometry Engine to replicate this functionality), the real dark horse is SFS, or Sampler Feedback Streaming. This tech, available in DX12 Ultimate, is going to shrink IO bandwidth requirements and memory footprints pretty significantly for texture assets, which are the vast majority of the game data being loaded in. If you care to learn more, check out Microsoft's 17-minute developer-focused presentation on it. Warning: it's dense and technical in nature.
The short version is that, while Sony can load an 8MB 4K texture quickly, Microsoft have developed a way to load just 800KB of that same texture for the same rendered result. Thus, Microsoft's slower SSD combined with SFS will most likely outpace Sony's faster SSD when loading the same texture assets. As this is patented technology that requires suitable hardware to utilise it, it's unlikely that Sony will be able to replicate it within their own API in the near term.

Anyway, I hope this helps you understand why, while Sony's loading presentation is impressive, I'm not convinced that their raw IO throughput is the best method to achieve the goals of diminished load times and better asset streaming.
The first Wired article reported that a loading time of 15 seconds on the PS4 Pro was reduced to 0.8 seconds on an early "low-speed" version of the PS5 dev kit.

So the XSX loads about 6.5 times faster than the XOX, while the early "low-speed" PS5 dev kit loads MORE than 15 times faster than the PS4 Pro.
 
Last edited:

ZehDon

Gold Member
Until Microsoft shows Sampler Feedback Streaming in real time, I'll remain unconvinced. Hardware is foundational, and the I/O hardware of the PS5 is inherently faster than that of the XSX. Assuming that SFS will indeed deliver what it intends, I'm pretty sure that Sony can invent software that can match the compression of SFS. Furthermore, with Kraken, the PS5 is purported to be able to process data at 22GB/s.
And that's fair enough. For me, based on Microsoft's specs, and on nVidia's and AMD's releases around DX12 Ultimate utilisation, I'm more inclined to believe their claims - AMD and nVidia both quite literally spent millions building hardware for it.

I actually highlighted Sony's potential to replicate SFS. As you keenly pointed out - hardware is foundational. Microsoft's APU design incorporates hardware specially designed for DX12U implementation. That includes the CU count and the optimal VRAM speeds. Sony will be hard-pressed to replicate the performance of Microsoft's patented technology using hardware that wasn't designed for it. As for Kraken, it's a third-party solution designed for general-purpose compression. It's good stuff, no question. Let's ignore that your 22GB/s figure doesn't match Cerny's own presentation, which touted the SSD as moving around 9GB/s compressed. Microsoft invented a new compression system called BCPack. This system was specially designed for texture compression - textures, as I highlighted in my original post, are the vast majority of a game's assets - and can reportedly outpace Kraken for texture compression. Combined with SFS, the amount of texture data that would clog the IO throughput of Sony's custom-built SSD shrinks significantly, freeing up Microsoft's IO throughput for... well, everything else. Combine that with Microsoft's optimal VRAM configuration, and they have a distinct edge in loading, storing, recalling, and rendering texture data. Sony didn't bother trying to outpace Kraken, their hardware wasn't designed for SFS, and Microsoft own both pieces of technology. Sony's hardware is well designed, no questions there, but Microsoft's software engineers are the best in the world; I wouldn't discount their efforts so easily.
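To put the bandwidth back-and-forth in rough numbers: effective I/O is raw SSD bandwidth times the average compression ratio. The raw figures below are the officially announced specs; the compression ratios are illustrative assumptions, not measured values:

```python
# Effective I/O throughput = raw SSD bandwidth x average compression ratio.
# Raw GB/s figures are the announced specs; the ratios are rough assumptions.
def effective_gb_s(raw_gb_s, compression_ratio):
    return raw_gb_s * compression_ratio

ps5_typical = effective_gb_s(5.5, 1.6)  # Kraken typical -> ~8.8 GB/s (Cerny's "8-9")
xsx_typical = effective_gb_s(2.4, 2.0)  # ~2:1 texture ratio -> ~4.8 GB/s (announced)
print(f"PS5 typical: {ps5_typical:.1f} GB/s, XSX typical: {xsx_typical:.1f} GB/s")
```

The 22 GB/s figure floating around is the Kraken decompressor's peak output, not a sustained rate; the typical numbers above are the more comparable ones.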
 
Seems like some people in here are only listening to what developers say, and it doesn't seem to matter what they actually developed or what platform they develop for.

So, about 15 years ago I developed a game demo by myself, using a free game engine, Blender to create assets, programming in C++, etc.
Am I now a developer who has a say in the matter?

Because as a self-proclaimed developer, I can tell you that I haven't seen any information yet that could possibly tell the whole picture for either console.

I couldn't even tell you which one is better if all the specs, APIs, bandwidths, etc. were made public. Why, you might ask? Because developing a game is not the same as developing an engine. And developing an engine for, let's say, a broad range of PC configurations is not the same as developing a game engine for a specific hardware set.

And in that regard I do see Microsoft at a disadvantage. Their games will need to take PC configurations into account. Because of that Game Pass stuff, they'll probably always need to take PC setups into consideration. (So PC gamers will be able to use their Game Pass to play Xbox games and vice versa.)
This can only be bypassed if they change their cross-platform Game Pass into a streaming service, allowing games to be rendered on their preferred hardware setup on a server farm.

Sony, on the other hand, can pump resources into low-level APIs/engines/software to support developing for the PS5 and thereby utilise their hardware to perfection.

I would suggest that people who debate next-gen possibilities educate themselves on how much one can achieve by perfectly understanding the hardware and utilising it to the utmost degree.

For example, read that and watch some of the stuff those people were able to make with very limited hardware.
 
Last edited:

HawarMiran

Banned
So he basically said open-world games were not affected by SSD speeds, and a dev called him out.
Now he is crying, saying he never said that?

lol, pathetic.

And he asked for the dude who pointed out what happened to be banned... and ERA banned him, lol.

Edit - WOW, even people who showed his quote saying what he said got banned.
That dude has no fucking clue and is talking out of his ass.
 

DForce

NaughtyDog Defense Force
And there you have it: another power battle lost for the Sony Defence Force! No games gonna use more than 10Gb of RAM next gen, an actual dev:

A few weeks ago, they were saying that the SSD would provide nothing but faster loading times.

Now after finding out about Xbox Velocity Architecture and BCPack, they're now saying XsX's SSD will be on par with PS5's SSD.

I thought they laughed at the secret sauces? :messenger_grinning_sweat:
 
Last edited:

kensama

Member
And there you have it: another power battle lost for the Sony Defence Force! No games gonna use more than 10Gb of RAM next gen, an actual dev:



And don't even try to debate it: if by any means, by any unlikely remote chance of that happening in a game ... Xbox 10GB RAM is actually like 30 GB with SFS!




Is she reliable? Because seeing that she hasn't developed a game on PlayStation, and assuming she doesn't have a PS5 dev kit, how can she claim what she said?
 

HawarMiran

Banned
And there you have it: another power battle lost for the Sony Defence Force! No games gonna use more than 10Gb of RAM next gen, an actual dev:



And don't even try to debate it: if by any means, by any unlikely remote chance of that happening in a game ... Xbox 10GB RAM is actually like 30 GB with SFS!


ok, I'll make a game in RPG Maker and after that I get a free pass to talk out of my ass. Dat bitch has the Xbox and Windows logos in her header hahaha. Void2D :messenger_tears_of_joy: never heard of it. She has the same pedigree as the other guy who made a pinball simulator.
 

ZehDon

Gold Member
...And in that regard I do see Microsoft at a disadvantage. Their games will need to take PC configurations into account. Because of that Game Pass stuff, they'll probably always need to take PC setups into consideration. (So PC gamers will be able to use their Game Pass to play Xbox games and vice versa.)
This can only be bypassed if they change their cross-platform Game Pass into a streaming service, allowing games to be rendered on their preferred hardware setup on a server farm.

Sony, on the other hand, can pump resources into low-level APIs/engines/software to support developing for the PS5 and thereby utilise their hardware to perfection...
Sorry, I normally try not to snippet posts... but everything you've written is basically incorrect, and, frankly, that's really not how anything here works at all. "Microsoft" and "Sony" don't make the majority of "engines" that will operate on their platforms - Sony has zero advantage in this area by targeting a single static hardware platform in terms of API development. Your claim that Microsoft can only bypass PC Platforms if they "...change their crossplattform gamepass to a streaming service, allowing games to be rendered on their preferred hardware setup on a serverfarm..." is laughably incorrect. In fact, Microsoft are basically the leaders in terms of simultaneous multi-platform software development. For example, my posts above talk about something called DirectX. This is an API developers can use to, effectively, talk to hardware. Think of it as Microsoft doing the heavy lifting, so all you need to do is tell the hardware what you want to do, and Microsoft will make sure the hardware understands you - regardless of what that hardware actually is.
To go a level deeper, Microsoft have moved towards cross-platform utilisation of their DirectX APIs for the past two decades, to the point that they utilise a "code once, deploy everywhere" mantra for all of their services. For example, their platform .Net Core recently moved to full hardware utilisation on Linux, enabling a developer to code for .Net Core and deploy their application on both Windows and Linux without re-coding anything. For Xbox, this means that literally the same code that runs on your PC can, and will, run on your Xbox Series X. They already did this for the Xbox 360 with the XNA Framework, a .Net Framework derivative that focused on multi-platform game development. This work, and Microsoft's expertise, provides an enormous boon to developers who target the Xbox and PC platforms. As for "utilising their hardware to perfection", the code and asset creation technique that addresses platform hardware utilisation is called scalability, and it's been in use since the mid-90s. It's significantly more developer-dependent, regardless of how many resources Sony pump into it. For example, the most technically impressive multi-platform games of the current generation all employ significant scalability within their multi-platform engines - games like Red Dead Redemption 2, Call of Duty: Modern Warfare, Doom Eternal, and so forth.

I'm not sure you've fully understood the situation, or what you've insinuated with your post. If anything is unclear, please let me know, and we can discuss this further. It's a fun topic to delve into :)
 
Last edited:

joe_zazen

Member
So he basically said open-world games were not affected by SSD speeds, and a dev called him out.
Now he is crying, saying he never said that?

lol, pathetic.

And he asked for the dude who pointed out what happened to be banned... and ERA banned him, lol.

Edit - WOW, even people who showed his quote saying what he said got banned.

“Resetera mod captains are here to help.“ lol.

even NXGamer replied to Dictator:

I think you are looking at this from the wrong angle. Having a vast pool with a much wider pipe to the RAM means that devs, and more so artists, can ramp up the options within density, detail, materials and worlds. The SSD here gives them a much bigger scope and enables a more streamlined approach to procedural placement, not textured surfaces or MIP chains. Teams use procedural creation to reduce the build and layout time; here they can expand the variety within the frustum, the variety of NPC models, clothes, even enemies within RAM, which will reduce the number of times you see Zombie A in Resident Evil or the face of Shopkeep B in Skyrim 15.

These are the options that the SSD and the core design of the process deliver, but a great deal more besides: something like Legacy of Kain's warping of vertices, for example, could now be an entire transportation of everything within your view and world within a couple of frames. Data is still key and will still need to be authored on the disc and SSD; the option now is how much the devs want to cram into the frustum, and 3D space is no longer limited "directly" by {RAM / FPS / objects / stream speed} = maximum density of game frames. The BIG gain in development time is no longer having to go back through the streaming chain and LOD system creating artificial walls to manage this, sector points to initiate seeks, or reducing LOD and MIP bias on a per-segment basis to meet target rates and performance.

and lol at James Sawyer Ford being called a problematic fanboy.
 
Last edited:

BluRayHiDef

Banned
And that's fair enough. For me, based on Microsoft's specs, and on nVidia's and AMD's releases around DX12 Ultimate utilisation, I'm more inclined to believe their claims - AMD and nVidia both quite literally spent millions building hardware for it.

I actually highlighted Sony's potential to replicate SFS. As you keenly pointed out - hardware is foundational. Microsoft's APU design incorporates hardware specially designed for DX12U implementation. That includes the CU count and the optimal VRAM speeds. Sony will be hard-pressed to replicate the performance of Microsoft's patented technology using hardware that wasn't designed for it. As for Kraken, it's a third-party solution designed for general-purpose compression. It's good stuff, no question. Let's ignore that your 22GB/s figure doesn't match Cerny's own presentation, which touted the SSD as moving around 9GB/s compressed. Microsoft invented a new compression system called BCPack. This system was specially designed for texture compression - textures, as I highlighted in my original post, are the vast majority of a game's assets - and can reportedly outpace Kraken for texture compression. Combined with SFS, the amount of texture data that would clog the IO throughput of Sony's custom-built SSD shrinks significantly, freeing up Microsoft's IO throughput for... well, everything else. Combine that with Microsoft's optimal VRAM configuration, and they have a distinct edge in loading, storing, recalling, and rendering texture data. Sony didn't bother trying to outpace Kraken, their hardware wasn't designed for SFS, and Microsoft own both pieces of technology. Sony's hardware is well designed, no questions there, but Microsoft's software engineers are the best in the world; I wouldn't discount their efforts so easily.

I understand that, because Microsoft is the most successful software company in the world, their claims about the efficiency of software that they've invented should be respected. However, I find it strange that they haven't demonstrated SFS being implemented on the Xbox Series X in real time, considering how forthcoming they've been about the console thus far (e.g. revealing what it looks like months ago, allowing YouTubers to assemble it and play games on it, continually tweeting about it, etc.).

In fact, what's particularly strange is that, in the demonstrations of the machine running games, it loads them quickly but much more slowly than the PS5 loads Spider-Man, and the excuse for this is that the games used in these demonstrations were unoptimized. In the demonstration whose primary purpose was to show the machine's faster loading times relative to current-gen tech, it takes a whole ten seconds to do so (i.e. to load State of Decay). If the reason for the long load time relative to what we've seen and heard of the PlayStation 5 is that State of Decay is unoptimized, then why use it in the demonstration? Why not use a game that's optimized for SFS?

Considering that the XSX is similar to a PC in terms of its architecture and software (APIs, drivers, etc.), and considering that SFS is already available on PC via DX12, Microsoft could certainly have demonstrated SFS on the XSX by now. So I have my suspicions that it's not as effective as the PS5's raw, hardware-level processing speed.
 

ZehDon

Gold Member
I understand that, because Microsoft is the most successful software company in the world, their claims about the efficiency of software that they've invented should be respected. However, I find it strange that they haven't demonstrated SFS being implemented on the Xbox Series X in real time, considering how forthcoming they've been about the console thus far (e.g. revealing what it looks like months ago, allowing YouTubers to assemble it and play games on it, continually tweeting about it, etc.).

In fact, what's particularly strange is that, in the demonstrations of the machine running games, it loads them quickly but much more slowly than the PS5 loads Spider-Man, and the excuse for this is that the games used in these demonstrations were unoptimized. In the demonstration whose primary purpose was to show the machine's faster loading times relative to current-gen tech, it takes a whole ten seconds to do so (i.e. to load State of Decay). If the reason for the long load time relative to what we've seen and heard of the PlayStation 5 is that State of Decay is unoptimized, then why use it in the demonstration? Why not use a game that's optimized for SFS?

Considering that the XSX is similar to a PC in terms of its architecture and software (APIs, drivers, etc.), and considering that SFS is already available on PC via DX12, Microsoft could certainly have demonstrated SFS on the XSX by now. So I have my suspicions that it's not as effective as the PS5's raw, hardware-level processing speed.
Excellent post, and an excellent question. One slight correction, though - the comparison I highlighted between SOD2 and Spider-Man isn't about absolute "time", it's about the "reduction" in load time. Worth pointing out, I feel. Anyway, frankly, I don't have a real answer - just speculation. For example, State of Decay 2 is a little infamous for loading problems, so perhaps they wanted to use it as an example to assuage the core Xbox fans? Perhaps they don't have any announced optimised games they can demonstrate without revealing another title. Like I said, excellent question. I'm keen to see their answer.

Lastly, I actually don't think SFS is available in any PC titles yet. It's hardware dependent, and they only recently - Nov 2019, if I recall correctly - unveiled the technology. Do you have a source citing an implementation? I'd be fascinated to grab the title and actually review its VRAM usage.

Edit: Clarified the loading comparison.
 

joe_zazen

Member
I understand that because Microsoft is the most successful software company in the world, their claims about the efficiency of software they've invented should be respected. However, I find it strange that they haven't demonstrated SFS being implemented by the Xbox Series X in real time, considering how forthcoming they've been about the console thus far (e.g. revealing what it looks like months ago, allowing YouTubers to disassemble it and play games on it, continually tweeting about it, etc.).

In fact, what's particularly strange is that in the demonstrations of the machine running games, it loads them quickly but still much more slowly than the PS5 loads Spider-Man, and that the excuse for this is that the games used in those demonstrations were unoptimized. In the demonstration whose primary purpose is to show off the machine's faster loading times relative to current-gen tech, it takes a whole ten seconds to load State of Decay. If the reason for the long load time relative to what we've seen and heard of the PlayStation 5 is that State of Decay is unoptimized, then why use it in the demonstration at all? Why not use a game that's optimized for SFS?

Considering that the XSX is similar to a PC in terms of its architecture and its software (APIs, drivers, etc.), and considering that SFS is already available on PC via DX12, Microsoft could certainly have demonstrated SFS running on the XSX by now. So, I have my suspicions that it's not as effective as the PS5's raw, hardware-level processing speed.

It could be that they aren't currently making games using the tech, as all their games are going to have to run on non-SFS hardware for the foreseeable future, and they don't want to make tech demos showing things that won't be on the console.

I am sure that the trillion-dollar, world-dominating software company is creating proprietary tech to squeeze all competitors out of the markets it wants to rent-seek in, such as game-development toolchains and the like... It may take a decade or so, and it will require the American DoJ to continue not caring about corporate consolidation and market distortions.
 

BluRayHiDef

Banned
Excellent post, and an excellent question. One slight correction, though - the comparison I highlighted between SOD2 and Spider-Man isn't about absolute "time", it's about the "reduction" in load time. Worth pointing out, I feel. Anyway, frankly, I don't have a real answer - just speculation. For example, State of Decay 2 is a little infamous for loading problems, so perhaps they wanted to use it as an example to assuage the core Xbox fans? Perhaps they don't have any announced optimised games they can demonstrate without revealing another title. Like I said, excellent question. I'm keen to see their answer.

Lastly, I actually don't think SFS is available in any PC titles yet. It's hardware dependent, and they only recently - Nov 2019, if I recall correctly - unveiled the technology. Do you have a source citing an implementation? I'd be fascinated to grab the title and actually review its VRAM usage.

Edit: Clarified the loading comparison.

I haven't been able to find any PC titles that implement it. However, considering that it's been available on PC since November or December of last year and that it's their own technology, I think they could have created a demo that implements it by now.
 

ANIMAL1975

Member
Is she reliable? Because, seeing that she hasn't developed a game on PlayStation, and assuming she doesn't have a PS5 dev kit, how can she claim what she said?
Just read the whole conversation, the flip-flopping and moving goalposts... and you've got your answer right there. Even if she's 'reliable' (not disputing that), it's just pure console-wars Twitter bullshit, lol

And of course she doesn't have dev kits; she's just speculating with the information already released by both companies... like everyone else is.
 

ZehDon

Gold Member
I haven't been able to find any PC titles that implememt it. However, considering that it's been available on PC since November or December of last year and that it's their technology, I think that they could have created a demo that implements it by now.
Sorry friend, you might be a touch confused there. It hasn't been "available" - the technology was unveiled in November of 2019, and DirectX 12 Ultimate was only publicly unveiled on March 19 of 2020. Developers may have had access to the technology earlier, but they'll be under tight NDAs if they're using Microsoft's proprietary technology. As for a demo, like all of the DX12U features, Microsoft have released "developer demos". The demo for SFS shows a more accurate scene being rendered faster at 1/10th of the VRAM, and they highlight that the gains will only improve in real-world scenarios due to the nature of the technology. The video I linked above is part of a series, each one showing off a feature of DX12U and going under the hood on the hows and whys. This is why I'm sold on the tech - Microsoft have the science to prove the theory, and AMD and Nvidia built the hardware to use it.
 

BluRayHiDef

Banned
Sorry friend, you might be a touch confused there. It hasn't been "available" - the technology was unveiled in November of 2019, and DirectX 12 Ultimate was only publicly unveiled on March 19 of 2020. Developers may have had access to the technology earlier, but they'll be under tight NDAs if they're using Microsoft's proprietary technology. As for a demo, like all of the DX12U features, Microsoft have released "developer demos". The demo for SFS shows a more accurate scene being rendered faster at 1/10th of the VRAM, and they highlight that the gains will only improve in real-world scenarios due to the nature of the technology. The video I linked above is part of a series, each one showing off a feature of DX12U and going under the hood on the hows and whys. This is why I'm sold on the tech - Microsoft have the science to prove the theory, and AMD and Nvidia built the hardware to use it.

Was the demo running on the XSX? If not, why haven't they created a demo for the XSX? Until they show the tech running on their console, I'll remain skeptical.
 