
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

Vae_Victis

Banned
It's a pretty misunderstood quote, Xbox wire and DF are the sources for two basically identical quotes, here's DF

"The form factor is cute, the 2.4GB/s of guaranteed throughput is impressive, but it's the software APIs and custom hardware built into the SoC that deliver what Microsoft believes to be a revolution - a new way of using storage to augment memory (an area where no platform holder will be able to deliver a more traditional generational leap). The idea, in basic terms at least, is pretty straightforward - the game package that sits on storage essentially becomes extended memory, allowing 100GB of game assets stored on the SSD to be instantly accessible by the developer. It's a system that Microsoft calls the Velocity Architecture and the SSD itself is just one part of the system."

In context, the 100 GB refers to an individual game's assets in this hypothetical scenario, not any arbitrary limit. The quote simply states that the game in storage acts as extended memory. All the talk of "only 100 GB" is a poor inference based on this
Wait, so what does it mean in practice? "Instantly accessible" as opposed to what?

And isn't that at the end of the day the same exact thing the PS5 also does, but much more efficiently (as far as we know from the XSX components)?
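For what "storage as extended memory" means in practice, the closest everyday analogue is memory-mapping a file: the OS pages data in from storage on first touch instead of requiring an explicit up-front load into RAM. A minimal sketch (the file name and layout are made up for illustration, not anything from the Velocity Architecture itself):

```python
import mmap
import os

# Create a small stand-in for a game package file on disk.
path = "assets.pak"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096 + b"TEXTURE_DATA" + b"\x00" * 4096)

# Memory-map the package: the OS pages bytes in from storage on
# first touch, so the file behaves like (slow) extended memory
# rather than something you read() into RAM up front.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as pak:
        asset = pak[4096:4096 + 12]   # touch only the bytes we need
        print(asset)                  # b'TEXTURE_DATA'

os.remove(path)
```

The point of the "100GB instantly accessible" pitch, as far as the quote goes, is that the whole installed package can be addressed this way rather than staged through loading screens.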
 
If PS5 and XSX have the same number of ACEs, then PS5 will be ~20% faster at Nanite crunching?
It would make sense as to why Cerny and Sweeney’s collaboration went with big I/O and a narrow and fast GPU?
We also need to know what the geometry engine is. Is that a name for a technology, or an actual block of hardware in the GPU? It is not inconceivable that the geometry engine with its primitive shaders is custom hardware designed to handle billions of polygons.
 
Last edited:

Sinthor

Gold Member






52 CU vs 36 CU.
Nanite runs faster on XSX. This is an un-disputeable fact.

LOL. Sure thing, Sparky. Point us to the link and proof if you would? I'd really like to see that.

Seriously, I'd like to see that. Where is this "indisputable proof?" And by the way it IS "INDISPUTABLE" not UNDISPUTEABLE. Call your high school English teacher and get your credit back. :)

 

ToadMan

Member
It's a pretty misunderstood quote, Xbox wire and DF are the sources for two basically identical quotes, here's DF

"The form factor is cute, the 2.4GB/s of guaranteed throughput is impressive, but it's the software APIs and custom hardware built into the SoC that deliver what Microsoft believes to be a revolution - a new way of using storage to augment memory (an area where no platform holder will be able to deliver a more traditional generational leap). The idea, in basic terms at least, is pretty straightforward - the game package that sits on storage essentially becomes extended memory, allowing 100GB of game assets stored on the SSD to be instantly accessible by the developer. It's a system that Microsoft calls the Velocity Architecture and the SSD itself is just one part of the system."

In context, the 100 GB refers to an individual game's assets in this hypothetical scenario, not any arbitrary limit. The quote simply states that the game in storage acts as extended memory. All the talk of "only 100 GB" is a poor inference based on this

The 100gb is the arbitrary limit. What happens when a game wants to use 101gb of data?

Presumably that extra gb is accessed via a slower file system, thus reducing access speed, or the developers have to manage their direct access storage to make sure they’re keeping important stuff there - another level of memory management they won’t want to do.
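The kind of bookkeeping described above, keeping the "important stuff" inside a fixed fast-access budget, is essentially cache management. A toy sketch of what that extra management layer could look like (the class name, budget, eviction policy and asset names are all hypothetical illustration, not anything Microsoft has described):

```python
from collections import OrderedDict

class DirectAccessWindow:
    """Toy LRU manager for a fixed fast-storage budget.

    Illustrates the scenario above: if only part of the install can
    be direct-accessed, the engine must decide which assets stay in
    the fast window and which fall back to the slower path.
    """
    def __init__(self, budget_gb):
        self.budget = budget_gb
        self.used = 0.0
        self.window = OrderedDict()   # asset -> size_gb, in LRU order

    def access(self, asset, size_gb):
        if asset in self.window:
            self.window.move_to_end(asset)     # mark as recently used
            return "fast"
        # Evict least-recently-used assets until the new one fits.
        while self.used + size_gb > self.budget and self.window:
            _, evicted = self.window.popitem(last=False)
            self.used -= evicted
        self.window[asset] = size_gb
        self.used += size_gb
        return "slow (first touch)"

w = DirectAccessWindow(budget_gb=100)
print(w.access("world_tiles", 60))   # slow (first touch)
print(w.access("cutscene_a", 50))    # slow; evicts world_tiles
print(w.access("world_tiles", 60))   # slow again - it was evicted
```

If the working set genuinely exceeds the budget, assets churn in and out of the fast window, which is exactly the memory-management burden being described.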
 

DeepEnigma

Gold Member
How come Sony's SSD has no thermal heatsink to prevent overheating?
And how do you know that the XSX SSD doesn't have a DRAM cache? A lot of NVMe SSDs have that.

What are you talking about? How do you know there is no thermal heatsink?

Are you even reading the thread? The heatsink patent suggests it will sandwich the I/O controllers on the opposite side of the mainboard from the APU, much like the Vita setup.

Yup, I remember reading that from the official patent register in some US agency.

xkhahxsexmu41.jpg


It's undeniable at this point that it'll be stacked. This is probably a collaboration between Sony and AMD on innovative tech, like the stacked sensors for smartphones, and even the stacked sensors in their new mirrorless cameras that are dominating. Lately they've introduced 3 layers of stacked sensors:

iedm-2017-sony-stack.png


Sony attempted something like this before with the PS Vita, as @dan_of_orion has pointed out previously:

Sony’s PS Vita Uses Chip-on-Chip SiP – 3D, but not 3D

Inside we found the usual set of wireless chips, motion sensors, and memory, but the key to the increased performance of the PS Vita is the Sony CXD5315GG processor, a quad-core ARM Cortex-A9 device with an embedded Imagination SGX543MP4+ quad-core GPU.

New+Picture+%25281%2529.png


This type of face-to-face connection showed up back in 2006 in the original Sony PSP, and Toshiba had dubbed it “semi-embedded DRAM”, now they are calling it “Stacked Chip SoC”. The ball pitch is an impressive ~45 µm, almost as tight as TI’s copper pillars, but they are staggered to achieve 40-µm pitch.

New+Picture+%25282%2529.png



Sony is thinking out of the box, and driving AMD with it to make more efficient solutions.

It does have on chip SRAM, which is faster than DRAM and doesn't need to be constantly refreshed:
71340_512_understanding-the-ps5s-ssd-deep-dive-into-next-gen-storage-tech.png
 
Last edited:
Yeah I actually think MS got wind of how fast Sony’s storage would be and are scrambling to close the gap with software. They’re definitely trying not to end up with egg on their face, as happened at the PS4/XB1 launch when the performance difference became clear.

Not to mention that the Xsex SSD can only direct address 100gb while PS5 can direct address the entire 825gb of their SSD.

I’m still curious if this will cause a potentially significant restriction on developers and offering strong compression is a mitigation for MS.

You don’t think Velocity Architecture was in their plan from the beginning?
 

Vae_Victis

Banned
The 100gb is the arbitrary limit. What happens when a game wants to use 101gb of data?

Presumably that extra gb is accessed via a slower file system, thus reducing access speed, or the developers have to manage their direct access storage to make sure they’re keeping important stuff there - another level of memory management they won’t want to do.
Asking as somebody who can't do programming: does it make sense from a hardware/software standpoint to have different file systems managing limited portions of the SSD? What negative side effects would just going with the faster solution across the board bring?

Also, could this in any way be related to the fact that XSX has RAM chips at two different speeds?
 

Sinthor

Gold Member
It's a pretty misunderstood quote, Xbox wire and DF are the sources for two basically identical quotes, here's DF

"The form factor is cute, the 2.4GB/s of guaranteed throughput is impressive, but it's the software APIs and custom hardware built into the SoC that deliver what Microsoft believes to be a revolution - a new way of using storage to augment memory (an area where no platform holder will be able to deliver a more traditional generational leap). The idea, in basic terms at least, is pretty straightforward - the game package that sits on storage essentially becomes extended memory, allowing 100GB of game assets stored on the SSD to be instantly accessible by the developer. It's a system that Microsoft calls the Velocity Architecture and the SSD itself is just one part of the system."

In context, the 100 GB refers to an individual game's assets in this hypothetical scenario, not any arbitrary limit. The quote simply states that the game in storage acts as extended memory. All the talk of "only 100 GB" is a poor inference based on this

Well, I think the question then is why Microsoft, who doesn't often miss an opportunity to call out a feature, especially if it's related to power or speed, wouldn't state that their entire SSD is DMAC? Seems like an awfully big miss for them. Also, how does this 100GB partition, if you will, correspond to their stated throughput for the SSD? Seems again a big missed opportunity to talk about how exactly "instant" this access is.
 

Andodalf

Banned
Wait, so what does it mean in practice? "Instantly accessible" as opposed to what?

And isn't that at the end of the day the same exact thing the PS5 also does, but much more efficiently (as far as we know from the XSX components)?

In terms of what it practically means, I'm not sure anyone understands what the full impact could be. It's difficult to directly compare the two, but PS5 will most likely do a decently better job of it, if I had to guess. The key here, I think, will be latency more so than the total rate of data transfer when it comes to feeling the impact of "instant" access, but we already know that even the fastest SSD pales in comparison to RAM in this area.


Well, I think the question then is why Microsoft, who doesn't often miss an opportunity to call out a feature, especially if it's related to power or speed, wouldn't state that their entire SSD is DMAC? Seems like an awfully big miss for them. Also, how does this 100GB partition, if you will, correspond to their stated throughput for the SSD? Seems again a big missed opportunity to talk about how exactly "instant" this access is.

There's no partition; that 100gb refers to the game install on the drive. It's just normal space on the drive, and it all works that way.
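For a rough sense of scale on the raw-throughput side of this comparison, a quick back-of-the-envelope calculation (the working-set size is hypothetical; 2.4 GB/s is the XSX figure from the DF quote, 5.5 GB/s is Sony's stated raw figure for PS5, and compressed effective rates are higher on both):

```python
# Back-of-the-envelope: seconds to pull a working set off each SSD
# at the raw (uncompressed) rates.

def load_time(size_gb, rate_gb_s):
    return size_gb / rate_gb_s

working_set = 10  # GB of assets for a scene change, hypothetical
print(f"XSX raw: {load_time(working_set, 2.4):.2f} s")   # ~4.17 s
print(f"PS5 raw: {load_time(working_set, 5.5):.2f} s")   # ~1.82 s
```

Either way, both are an order of magnitude beyond last gen's hard drives; the latency question raised above is the part raw rates don't capture.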
 
Last edited:

ToadMan

Member
It's how it was reported by a number of outlets; I didn't read it in context with the original interview. Google the quote and you'll find it reported out of context by about a dozen outlets, and that's how someone reported it to me here, on this forum, by telling me devs wouldn't have to optimize.

That is interesting, reading it in context of the full Eurogamer article; there was no ill intention or lack of intellectual ability here.

Jesus fucking christ this fucking forum over-reacts to everything.

Here see an example of the numerous places reporting it how it was reported to me: https://wccftech.com/cerny-devs-don...y-for-ps5s-variable-clocks-its-all-automatic/

So you quoted something you didn’t understand and snarkily told me I was wrong and then want to say I’m overreacting?

it’s ok, I accept your apology and gratefully receive your thanks for expanding your knowledge and correcting your mistake
 

IntentionalPun

Ask me about my wife's perfect butthole
So you quoted something you didn’t understand and snarkily told me I was wrong and then want to say I’m overreacting?

it’s ok, I accept your apology and gratefully receive your thanks for expanding your knowledge and correcting your mistake
I do appreciate the correction; I fail to see where I was snarky? Are you talking about me throwing an "lol" in the post?

I was just trying to keep it light hearted.

I still maintain that optimizing to avoid being throttled is not really an ideal scenario*. There's a reason people have singled out that quote from Cerny saying you don't have to optimize.

* and yes I get that code might be "better" if it truly does produce the same result with a smaller power budget, hopefully engine devs and middleware devs can do most of that optimization so your average dev doesn't have to concern themselves too much with it.. and maybe overall code will be better off for it.
 
Last edited:

Thirty7ven

Banned
You don’t think Velocity Architecture was in their plan from the beginning?

The naming or the hardware itself? :messenger_grinning_sweat:

It seems like an extension of the work they are also doing with DirectStorage on the PC side, so I would say absolutely they thought about it from the beginning.

But that doesn't mean they got the exact results they wanted, or that what they achieved is just perfect. With PS4 and Xbox One they had similar targets, and one ended up with a much more powerful GPU and a much better memory system. And just like people are parroting a bunch of codenames for software applications now, they also parroted "DirectX" last gen (along with the power of the cloud, but let's just move on from that...)

Only price and time will tell us how these two compare in real games, and in which games.
 
Last edited:






52 CU vs 36 CU.
Nanite runs faster on XSX. This is an un-disputeable fact.

Hey, Guru who doesn't need the tech talk from Epic: reply to my arguments instead of just going out and saying definitive things like that.

Why would you want to run something faster when you can't feed it? And if CPU usage is high, you probably can't do it by an important amount anyway.
 
Wait, so what does it mean in practice? "Instantly accessible" as opposed to what?

And isn't that at the end of the day the same exact thing the PS5 also does, but much more efficiently (as far as we know from the XSX components)?

I really think that the reason why it hasn't been clarified yet is because it doesn't do what some people think it does. Which is why they don't clarify it to avoid disappointment.
 

Bo_Hazem

Banned
Epic said it's running 1440p most of the time; I expect this is the bottom end of a dynamic resolution scale. However, it's using their excellent TAAU reconstruction to bring it up to 4K just like they do with Gears 5.

This comes with one caveat, which is temporal artifacts. Stability is very good in this case due to the ~1:1 pixel:triangle target, but there's visual artifacting and trailing on falling rocks, birds, bats and parts of the final section of the demo where the character and camera are moving fast. Basically any fast-moving elements can look a little crispy and noisy.

Watch the full res stream on a decent sized TV (preferably the Vimeo 4K one) and it's probably the one aspect of the demo that can take away a little from the otherwise gorgeous presentation. Check the bats flying up into the sky as the character exits the crevice, they look extremely rough. Fast moving objects + high contrast...

When DF are talking about being unable to pixel count it's likely because the geometric detail is subpixel level, so they don't have a common reference between pixels in a given segment of the image. I expect TAAU on top of this then makes it harder.

If they can get this level of quality at 60fps, they will be able to cut those temporal artifacts in half and further increase the quality of the end result due to the effective doubling of samples (frames) by which reconstruction derives its result. Not to mention that reconstruction techniques are advancing at a crazy rate, and with optimisation in general and on a per-title basis, the end result and artifacting will likely be even better in actual games.

A slight digression, but 60fps will also help Lumen or similar GI solutions, as they use temporal accumulation to derive light bounces over a few frames causing a slight lag/latency between the initial bounce and the rest. (I believe you alluded to this).


While I think 1440p is still a little on the low side and that 1620p-1800p is probably the sweet spot for a dynamic res scale + non-checkerboard reconstruction, keeping at 1440p and pushing for 60fps wherever possible may make more sense in many cases. (I'd categorically say that going above 1800p in almost any circumstance is a waste of resources and totally pointless; it takes an extra 40% GPU power to get to 2160p from 1800p and the difference is practically invisible, especially with decent reconstruction.)

As mentioned above, 60fps increases the samples by which the reconstructed image is derived, meaning higher quality image and less artifacting (not to mention other temporal applications such as Lumen GI). 60fps increases both temporal resolution and the subsequent perception of spatial resolution. It increases responsiveness, it strains the eye less and it diminishes the effects of tearing, uneven frame pacing and framerate drops should they occur.

A higher res at 30fps gives you only one thing relatively speaking. Spatial resolution which is only really apparent in motion; and which is further negated by the nature of motion resolution limitations in modern displays.

Of course, if you're heavily CPU limited, 60 just might not be an option if you have a particular vision to achieve. But wherever it is an option, I think there's far more to be gained from it. I think 60fps might be at least somewhat more common as a result of this wildcard.

What a wonderful thicc wall of text, amazing breakdown of the matter. I wouldn't mind an intelligent solution to reconstruct 4K out of 1440p at 60fps if it produces an even cleaner image than what we've already seen. Of course, the Rebirth trailer is still obviously sharper at 24fps and a wider aspect ratio. I wouldn't mind native 4K myself at a solid 30fps for third-person, storytelling games that tend to be less crowded and less pacy, but an option for 60fps or open framerates like in GOW at a lower res (1440p) would satisfy both sides.

In Uncharted 4 on base PS4, I tend to stop a lot and enjoy the art of building such a beautiful, vibrant world. Yup, let's leave some room for PS5 Pro and enjoy this insane jump in fidelity for the moment. Thanks a lot for the deep dive. I'll try to download the Vimeo 4K version, although I think it's suffering from bandwidth more than a problem with my connection, since it can't even show a still frame.
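The temporal accumulation the quoted breakdown describes, where more frames means more samples for reconstruction, can be sketched with the usual exponential history blend. The alpha value and the jitter pattern here are illustrative, not Epic's actual TAAU implementation:

```python
# Toy temporal accumulation as used by TAA/TAAU-style reconstruction:
# each output pixel blends the current jittered sample with a history
# buffer. Doubling the framerate doubles the samples accumulated per
# unit time, which is why 60fps tightens the reconstruction.

def accumulate(samples, alpha=0.1):
    """Exponentially blend a stream of jittered samples into a history value."""
    history = samples[0]
    for s in samples[1:]:
        history = (1 - alpha) * history + alpha * s
    return history

# Deterministic jitter: samples alternate +/-0.2 around the true value,
# standing in for sub-pixel jitter of a static scene.
truth = 0.5
samples = [truth + (0.2 if i % 2 == 0 else -0.2) for i in range(60)]
result = accumulate(samples)
print(result)  # settles close to the true value of 0.5
```

Fast motion breaks this scheme because the history no longer matches the current frame, which is exactly the trailing/ghosting on the bats and falling rocks mentioned above.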
 
Last edited:

Exodia

Banned
Are you even reading the thread? The heatsink patent suggests it will sandwich the IO controllers on the other side of the mainboard opposite side of the APU, much like the Vita setup.

More speculation

It does have on chip SRAM, which is faster than DRAM and doesn't need to be constantly refreshed:

There are dozens of SSDs with SRAM/DRAM. Again, MS pointed out how their SSD differs from a traditional SSD, which is the fact that it has a heatsink and hardware decompression.

The Xsex SSD dram-less info comes from here :
https://hothardware.com/news/xbox-s...umored-to-use-37gbsec-phison-flash-controller

I think it will be the first and perhaps only NVMe drive not to have some DRAM available.

This is pure speculation and a bad rumor at that. Isn't MS partnered with Seagate for their SSD?
 

Bo_Hazem

Banned
Well Dealer is Dealer and will never change no matter how hard he tries but Alex should be far more neutral than he is. Yes we all have our preferred platforms but in his position it shouldn't come through so obviously and sometimes even scathingly. John and Richard are far better in this regard.

It's funny that Crapgamer has now shifted from being toxic xbox fan into a toxic PS fan :messenger_tears_of_joy:
 
NXGamer put up a nice video about what we can expect from the SSD and I/O bandwidth generational leaps in PS5 and XSX



In the next of our series discussing the next generation leaps we hit the storage solutions, and this is likely going to be the biggest impact; for both consoles, the SSD is a paradigm change. Now, we need to be clear that the improvement is not just from the SSD itself. Let me take you back to a revolution....
 

DeepEnigma

Gold Member
More speculation



There are dozens of SSDs with SRAM/DRAM. Again, MS pointed out how their SSD differs from a traditional SSD, which is the fact that it has a heatsink and hardware decompression.



This is pure speculation and a bad rumor at that. Isn't MS partnered with Seagate for their SSD?

No shit, it's a speculation thread.

Sony also pointed out how theirs wildly differs from anything on the market today. Sweeney too.

Stop clowning, I see what you're doing here.
 
Last edited:

Bo_Hazem

Banned
This might sound strange, but Dreams on PS4, for example, sports global illumination in an impressive way, although he added extra light sources to show off the details, which is a mistake, as the main source coming from the hole was more than enough to replicate the original scene instead of seeking an HDR look. You can't go wrong guessing that Sony's first-party studios/Decima engine are already cooking something even more impressive:

 
Last edited:

Sinthor

Gold Member
In terms of what it practically means, I'm not sure anyone understands what the full impact could be. It's difficult to directly compare the two, but PS5 will most likely do a decently better job of it, if I had to guess. The key here, I think, will be latency more so than the total rate of data transfer when it comes to feeling the impact of "instant" access, but we already know that even the fastest SSD pales in comparison to RAM in this area.




There's no partition; that 100gb refers to the game install on the drive. It's just normal space on the drive, and it all works that way.

Well, I guess that's what we have to see then. Further clarification maybe will clear that up. Just seems odd for MS not to be beating that drum if that's what they meant. Also curious what other hardware they may have in place to allow that as we haven't heard about that either. Will be great as we start getting more official info from Sony and MS on these wee beasties they're about to unleash on us all. :)
 

ToadMan

Member
I do appreciate the correction; I fail to see where I was snarky? Are you talking about me throwing an "lol" in the post?

I was just trying to keep it light hearted.

I still maintain that optimizing to avoid being throttled is not really an ideal scenario*. There's a reason people have singled out that quote from Cerny saying you don't have to optimize.

* and yes I get that code might be "better" if it truly does produce the same result with a smaller power budget, hopefully engine devs and middleware devs can do most of that optimization so your average dev doesn't have to concern themselves too much with it

It’s fine.

So this is what’s going on:

P=CV^2Af

where
P = power requirement
C = Capacitance
V= Voltage
A= Activity
f= Clock speed

First let’s clarify what we mean by A - activity. This is the work done by a processing unit for a given clock tick. Processor operations cause transistors to “flip” which takes power - the more transistors that flip per clock tick the higher the “activity level”. Different instructions invoke different amounts of activity.

With that in mind, conventional console design says keep f constant for the cpu and gpu, and let P rise as A rises. As P rises, the thermal output rises, the fans get louder etc. But the clock is fixed.

Conventional console design then requires the designer to predict the power a processor might consume and develop a cooling solution for that estimate. Get this prediction wrong, and the system can become unstable or even fail (RROD from the 360 days, for example).

For the PS5, Sony caps P while allowing f to vary. In this case, as A rises, P rises until max P is reached - at that point f may be reduced to maintain P within the limit (see below for why it's "may be"). This is what happens when devs haven't optimised their code effectively.

For PS5 the determination of power usage is done using an SoC logical model - not the specific chip in the unit at run time.

That’s where the code optimisation comes from. Developers run their code to meet the max power budget as measured at this “simulated” SoC. If they prepare their code to remain within the power budget in the office - it will remain within the power budget at the user side.

It’s different to the traditional optimisation step but any developer familiar with even rudimentary low level coding will understand what they’re trying to achieve and how to achieve it - this isn’t Cell processor complexity or anything even approaching that.

To complete the story there's also SmartShift. In response to the power cap being reached at runtime there are 2 possible solutions - let's assume it's the GPU demanding more power since that seems to be the normally expected scenario.

The obvious solution is to throttle the clock of the GPU to reduce the power demand - but this reduces performance. SmartShift avoids this performance dip by checking the power usage of the CPU - if the CPU is below its max power budget, that budget is allocated to the GPU and the clock speeds are maintained at 100%.
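The scheme above can be sketched directly from the P = CV^2Af relation: hold P under a cap and solve for the highest clock that fits. All constants below are arbitrary illustration values, not Sony's actual figures:

```python
# Sketch of a power-capped clock scheme under the dynamic-power model
# P = C * V^2 * A * f. Units and numbers are illustrative only.

def clock_under_cap(C, V, A, f_max, P_cap):
    """Return the highest clock <= f_max that keeps P within P_cap."""
    P_at_max = C * V**2 * A * f_max
    if P_at_max <= P_cap:
        return f_max                    # workload fits: full clocks
    return P_cap / (C * V**2 * A)       # downclock just enough

C, V = 1.0, 1.0
P_cap, f_max = 200.0, 2.23             # arbitrary units; GHz

# Moderate activity: power stays under the cap, clocks stay at max.
print(clock_under_cap(C, V, A=80.0, f_max=f_max, P_cap=P_cap))   # 2.23
# Heavy activity (e.g. an AVX-style power virus): clock is reduced.
print(clock_under_cap(C, V, A=100.0, f_max=f_max, P_cap=P_cap))  # 2.0
```

The optimisation target for devs then becomes keeping A low enough, on the reference model, that the first branch is the one taken.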
 

Bo_Hazem

Banned
Won't take any work unless you think the 2080 Ti is one of the slowest Nvidia cards because it has the lowest clocks. I admire the devotion to the NetBurst line of thinking. Just like the 80 CU RDNA2 PC card will be lower clocked but have the best performance.

It might even clock higher than PS5, making up to 26TF:

amd-rx-gamma-leaked-slide-.jpg


amd-rx-gamma-benchmarks-leaked-2.jpg


If the leaks are true, it's gonna "RAPE" RTX 3000 lineup.

April Fools' joke.

 
Last edited:
It might even clock higher than PS5, making up to 26TF:

amd-rx-gamma-leaked-slide-.jpg


amd-rx-gamma-benchmarks-leaked-2.jpg



If the leaks are true, it's gonna "RAPE" RTX 3000 lineup.

This was an April Fool's joke bruh.
 

NickFire

Member
What the hell happened? Two months ago everyone was fighting about TF. Come back around and people seem to be fighting over SSD as memory. I'm still on the fence for next gen because I don't know anything more of substance today than two months ago. But if the fight has moved to SSD, does that mean Sony already won?
 
What the hell happened? Two months ago everyone was fighting about TF. Come back around and people seem to be fighting over SSD as memory. I'm still on the fence for next gen because I don't know anything more of substance today than two months ago. But if the fight has moved to SSD, does that mean Sony already won?

I guess in the end people are trying to argue which difference will be more noticeable. Will it be the PS5's I/O solution or the XSX's more powerful GPU?
 

NickFire

Member
I guess in the end people are trying to argue which difference will be more noticeable. Will it be the PS5's I/O solution or the XSX's more powerful GPU?
I think I get it. So basically MS was winning one battle and decided to up the ante with a second simultaneous battle?

If that's accurate it is a very bold move. But Germany tried that before and it did not work well for it.
 
I think I get it. So basically MS was winning one battle and decided to up the ante with a second simultaneous battle?

If that's accurate it is a very bold move. But Germany tried that before and it did not work well for it.

In my opinion the Xbox Series X won't be superior to the PS5 in every single way. With that said, it will lose some battles in the future. People have to decide what they want the most in a next gen console.

For me, I can deal with a slightly lower resolution to obtain the elimination of load times. Having a blazing fast OS is easily done with a great I/O system. These are my preferences when it comes to the hardware.
 

Panajev2001a

GAF's Pleasant Genius
It’s fine.

So this is what’s going on:

P=CV^2Af

where
P = power requirement
C = Capacitance
V= Voltage
A= Activity
f= Clock speed

First let’s clarify what we mean by A - activity. This is the work done by a processing unit for a given clock tick. Processor operations cause transistors to “flip” which takes power - the more transistors that flip per clock tick the higher the “activity level”. Different instructions invoke different amounts of activity.

With that in mind, conventional console design says keep f constant for the cpu and gpu, and let P rise as A rises. As P rises, the thermal output rises, the fans get louder etc. But the clock is fixed.

Conventional console design then requires the designer to predict the power a processor might consume and develop a cooling solution for that estimate. Get this prediction wrong, and the system can become unstable or even fail (RROD from the 360 days, for example).

For the PS5, Sony caps P while allowing f to vary. In this case, as A rises, P rises until max P is reached - at that point f may be reduced to maintain P within the limit (see below for why it's "may be"). This is what happens when devs haven't optimised their code effectively.

For PS5 the determination of power usage is done using an SoC logical model - not the specific chip in the unit at run time.

That’s where the code optimisation comes from. Developers run their code to meet the max power budget as measured at this “simulated” SoC. If they prepare their code to remain within the power budget in the office - it will remain within the power budget at the user side.

It’s different to the traditional optimisation step but any developer familiar with even rudimentary low level coding will understand what they’re trying to achieve and how to achieve it - this isn’t Cell processor complexity or anything even approaching that.

To complete the story there's also SmartShift. In response to the power cap being reached at runtime there are 2 possible solutions - let's assume it's the GPU demanding more power since that seems to be the normally expected scenario.

The obvious solution is to throttle the clock of the GPU to reduce the power demand - but this reduces performance. SmartShift avoids this performance dip by checking the power usage of the CPU - if the CPU is below its max power budget, that budget is allocated to the GPU and the clock speeds are maintained at 100%.

Very well said. On top of that, I feel that voltage may also be adjusted dynamically (slightly reduced, or brought back to normal, depending on how you want to look at it) when frequency is lowered, potentially helping deliver the non-linear power reduction based on the frequency scaling used.
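That non-linear part is worth a quick numeric check: with P ∝ f·V² and the rough DVFS assumption that voltage scales down with frequency, power falls roughly with the cube of the clock, so a small downclock buys a disproportionate saving. The cube-law scaling is a simplification for illustration, not Sony's actual voltage curve:

```python
# Why a small downclock buys a big power saving: with P proportional
# to f * V^2 and voltage assumed to track frequency (V ∝ f, a crude
# DVFS model), relative power goes roughly as the cube of the clock.

def relative_power(f_scale):
    v_scale = f_scale          # assumption: voltage tracks frequency
    return f_scale * v_scale**2

print(f"{(1 - relative_power(0.90)) * 100:.0f}% power saved at -10% clock")
```

So shaving a few percent off the clock when the cap is hit costs far less performance than the power headroom it recovers, which is the whole appeal of the variable-frequency design.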
 
It might even clock higher than PS5, making up to 26TF:

amd-rx-gamma-leaked-slide-.jpg


amd-rx-gamma-benchmarks-leaked-2.jpg



If the leaks are true, it's gonna "RAPE" RTX 3000 lineup.

April Fool's joke.
:lollipop_neutral: Again, really? Do you need a more severe reinforcement or what? lol
 

Bo_Hazem

Banned
It's kinda strange people only talk about the teraflop difference, which by itself is significant ("I still believe the PS5 is a 9 TF machine"), while ignoring other XSX hardware advantages like:
- 112GB/s higher mem bandwidth
- 3.1 GPixels higher fillrate
- 58 GTexels higher texture rate
Mark my words, the XSX will be > 40% faster

41xrst.jpg
 
Last edited:

NickFire

Member
In my opinion the Xbox Series X won't be superior to the PS5 in every single way. With that said, it will lose some battles in the future. People have to decide what they want the most in a next gen console.

For me, I can deal with a slightly lower resolution to obtain the elimination of load times. Having a blazing fast OS is easily done with a great I/O system. These are my preferences when it comes to the hardware.
All I care about currently is finally seeing some next gen only games, price, price, and price. Until we get that level of dirt, all this other stuff is background noise, because third parties are not about to abandon either company or get on their bad side.
 