
Next-Gen PS5 & XSX |OT| Console tEch threaD


Neo Blaster

Member
People are pointing that out like it's a good thing. It's actually a completely insane ratio.

Still, the only thing I don't like is the thermals. All this talk about TF will be forgotten when the games are shown, especially Sony's 1st party, because they will be mindblowing.
I don't know if that exact ratio is correct, but Cerny said just that: a small decrease in frequency makes power drop by a lot.

What we need right now from Sony is a demo showing, in practice, everything Cerny said in that presentation.
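For intuition on Cerny's point, here's a toy model (my own sketch, not Sony's figures): dynamic power scales roughly with frequency times voltage squared, and since voltage has to rise with clocks, power falls super-linearly when clocks drop.

```python
# Toy model of clock-vs-power scaling (an illustration, not Sony's data).
# Dynamic power ~ capacitance * frequency * voltage^2, and voltage must
# scale up roughly with frequency, so P ~ f^3 in this approximation.
def relative_power(freq_scale: float) -> float:
    """Relative dynamic power, assuming voltage tracks frequency."""
    return freq_scale ** 3

for drop_pct in (2, 5, 10):
    f = 1 - drop_pct / 100
    print(f"{drop_pct}% clock drop -> ~{1 - relative_power(f):.0%} power saved")
# 2% -> ~6%, 5% -> ~14%, 10% -> ~27% under this cubic model
```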
 
It was noticeable. Games like Alien Isolation and Tekken run at a pitiful 720p - which looks like crap on a 4K TV - while the same games on PS4 run at 1080p, looking much better and sharper.

So yeah, I can definitely see a lot of multiplat games not running at native 4K on the PS5, but rather at 1440p - which isn't nearly as bad as 720p on a 4K TV. But again, it sucks that the most impressive-looking games on the PS5 PROBABLY won't manage a constant 4K resolution.

But hey, this whole dynamic resolution thing makes the whole conversation kinda moot.


The power delta between XB SEX and PS5 is not so big that games running at native 4K on the Series X would have to run at 1440p on the PS5. Where do you even get this idea? Most of the time both games will hit the same resolutions, but you might get a situation where the PS5 uses a dynamic resolution here and there, going from 1800p to native 4K. The only reason I see the PS5 dropping to 1440p is if devs want to push 120fps for a game.
 
A 5% difference? You're optimistic. :)
Closer to reality, the difference is like RX 5700 XT vs. RX 5600 XT.


You can't compare two graphics cards running on a PC to the consoles; it doesn't work that way, since both consoles will use their own custom APIs and such.
 

Fafalada

Fafracer forever
Yes, and I think it will be there
I don't doubt that - I think the questions are mainly about how mixing slow/fast access will impact performance.

I can only go off Wikipedia for this, but it looks like Cell also had less than a quarter rate write to GDDR3.
Yes - while the CPU providing a fast path to 'its' memory was always part of the PS3 design (so the GPU got fast access either way), RSX wasn't built the other way around.

I know very little about the PSP (regretfully, looking back it was a pretty cool device) but both the Wii and GC had much larger differences in bandwidth between memory pools.
It's a mix - the GC had a really slow pool in the mix - but the Wii upgraded all of its memory to roughly the same bandwidth, so it was really just latency differences between the SRAM and DRAM pools. The PSP was interesting: bandwidth was largely the same, and the main use for embedded memory was lower latency/direct access for the 'owning' chip (no bus contention).
 

Chumpion

Member
I'm really excited about the AI capabilities of the upcoming consoles. Thanks to their plentiful neural-net power, they're going to be the best video upscalers available, better than any TV. Games will benefit from improved reconstruction algorithms and much improved image quality. Indie games can put NN filters on top and go crazy with art styles. Procedural content generation can leverage generative NNs, for instance levels decorated with neural dreams in real time. Camera functionality will be upgraded thanks to far better scene analysis and pose detection.
 

Imtjnotu

Member
If it were that simple, why doesn't MS do it? Why hasn't big tech been doing it for years? Why worry about heat at all? Just speed ahead to 5.23 GHz clocks and just "CHANGE THE VOLTAGE".

do you want to win the numbers war or the "why doesnt everyone do it" war?

cpus and gpus during intensive gaming always run at the same speeds (dont include any boost mode in this). so when you have something on screen not doing shit, like when youre at the main menu of cod warzone, your cpu and gpu are still running at those speeds, creating excess heat and power draw for no damn reason, and all youre doing is waiting at a menu screen. cerny even showed the slide with the ps4 pro and all the heat and noise it makes when its virtually not doing anything intensive.

the way cerny made it is so that you dont have to burn the power to keep clocks high when you dont need to.
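To make that concrete, here's a minimal sketch of a fixed-power-budget clock governor in the spirit of what Cerny described (all numbers invented for illustration): instead of clocking for the worst case, the clock is chosen so estimated power stays under a cap.

```python
# Minimal sketch of a power-capped clock governor (numbers invented).
POWER_CAP = 1.0  # normalized power budget

def clock_for_workload(activity: float) -> float:
    """Highest clock (as a fraction of max) whose estimated power fits
    the cap. Power is modeled as activity * clock^3 (cubic model)."""
    clock = 1.0
    while activity * clock ** 3 > POWER_CAP and clock > 0.0:
        clock -= 0.01  # shave 1% at a time until under budget
    return clock

print(clock_for_workload(0.5))  # light load (menu): stays at full clock
print(clock_for_workload(1.2))  # heavy scene: drops to ~0.94 of max
```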
 
Well, there goes the whole "I'll buy a PS5 because it will be cheaper" theory out the window...

A closer split between XSX and PS5 than I would have expected.

If MS comes out swinging, firing on all cylinders with Halo Infinite and others at launch, and Sony doesn't have anything new at launch besides enhanced versions of games that came out this spring and summer, they may very well be in trouble.
Well, there goes the whole "I'll buy a PS5 because it will be cheaper" theory out the window...

A closer split between XSX and PS5 than I would have expected.

If MS comes out swinging, firing on all cylinders with Halo Infinite and others at launch, and Sony doesn't have anything new at launch besides enhanced versions of games that came out this spring and summer, they may very well be in trouble.

You are basing this off one random poll? Don't get me wrong, I think Xbox might do a bit better next gen than they did this gen, but unless Sony kicks a dog on live TV they will continue to have a larger market share than Xbox. The PlayStation brand is simply more popular worldwide. Also, a potential $100 price gap will help Sony as well.
 

SamWeb

Member
Talking about a "bad memory decision" on XSX is just ridiculous.

320-bit at 560 GB/s will beat 256-bit at 448 GB/s in any scenario!

XSX does not split its memory at a physical level, and the second 192-bit bus does not exist.
Games have 13.5 GB available: 10 GB of GPU-optimal memory and 3.5 GB of standard memory.

PS4 actually has a pretty similar separation.
"The actual true distinction is that:

"Direct Memory" is memory allocated under the traditional video game model, so the game controls all aspects of its allocation
"Flexible Memory" is memory managed by the PS4 OS on the game's behalf, and allows games to use some very nice FreeBSD virtual memory functionality. However this memory is 100 per cent the game's memory, and is never used by the OS, and as it is the game's memory it should be easy for every developer to use it. "

https://www.eurogamer.net/articles/digitalfoundry-ps3-system-software-memory
This division is likewise an abstraction.

Good example of poor implementation - GTX 970
[images: gtx970_04.gif, gtx970_03.gif - GTX 970 memory benchmarks]

The slow section of the GTX 970's memory has a bandwidth of only 28 GB/s (more than 6x lower than the PS4's 176 GB/s, and more than 12x lower than the XSX's 336 GB/s pool).
Yet when the fast portion is full, the average drop in performance is only 4-6 percent.
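For a feel of why the penalty is so small, here's a toy harmonic-mean model (the traffic split is my assumption; the 196/28 GB/s figures are the card's published split): as long as the driver keeps hot data in the fast 3.5 GB, only a sliver of traffic ever touches the slow segment.

```python
# Toy model: average bandwidth when a share of accesses hits the
# GTX 970's slow 0.5 GB segment (28 GB/s) vs the fast 3.5 GB (196 GB/s).
def avg_bandwidth(slow_share: float, fast=196.0, slow=28.0) -> float:
    """Harmonic-mean bandwidth: per-byte cost weighted by traffic share."""
    return 1 / (slow_share / slow + (1 - slow_share) / fast)

for share in (0.01, 0.02, 0.05):
    print(f"{share:.0%} slow traffic -> {avg_bandwidth(share):.0f} GB/s")
# 1-5% slow traffic costs only ~5-25% of peak bandwidth, and frame
# rates scale sub-linearly with bandwidth, hence the small observed hit.
```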
 

SamWeb

Member
The two aren't comparable. This doesn't even take into account the clock speeds these cards run at.
This is a rough comparison of the performance level of two graphics systems. The frequencies and other aspects don't interest me in this case.
 

FeiRR

Banned
Well, there goes the whole "I'll buy a PS5 because it will be cheaper" theory out the window...

A closer split between XSX and PS5 than I would have expected.

If MS comes out swinging, firing on all cylinders with Halo Infinite and others at launch, and Sony doesn't have anything new at launch besides enhanced versions of games that came out this spring and summer, they may very well be in trouble.
I'd say it's surprisingly low for Microsoft, considering the poll was among American users. For a worldwide distribution, I'd say 3x more for PS5, even before games are shown. After that, well, things will get nasty for Microsoft, especially when they reveal the pricing.

Outside of a 5-6 million hardcore American audience, nobody gives a single fuck about Halo.
 

geordiemp

Member
Talking about a "bad memory decision" on XSX is just ridiculous.

320-bit at 560 GB/s will beat 256-bit at 448 GB/s in any scenario!

XSX does not split its memory at a physical level, and the second 192-bit bus does not exist.
Games have 13.5 GB available: 10 GB of GPU-optimal memory and 3.5 GB of standard memory.

PS4 actually has a pretty similar separation.
"The actual true distinction is that:

"Direct Memory" is memory allocated under the traditional video game model, so the game controls all aspects of its allocation
"Flexible Memory" is memory managed by the PS4 OS on the game's behalf, and allows games to use some very nice FreeBSD virtual memory functionality. However this memory is 100 per cent the game's memory, and is never used by the OS, and as it is the game's memory it should be easy for every developer to use it. "

https://www.eurogamer.net/articles/digitalfoundry-ps3-system-software-memory
This division is likewise an abstraction.

Good example of poor implementation - GTX 970
[images: gtx970_04.gif, gtx970_03.gif - GTX 970 memory benchmarks]

The slow section of the GTX 970's memory has a bandwidth of only 28 GB/s (more than 6x lower than the PS4's 176 GB/s, and more than 12x lower than the XSX's 336 GB/s pool).
Yet when the fast portion is full, the average drop in performance is only 4-6 percent.

No, I prefer to listen to someone with the right expertise, such as below; you are just making stuff up that isn't relevant to a shared bus between CPU and GPU. Yes, the XSX is estimated to have ~15% better bandwidth. Strange number, isn't it? Looks very similar to the TF gap...

Read the first paragraph carefully, then read it again, and again.


[image: qtqfhye.png - screenshot of the quoted bandwidth analysis]
 

SlimySnake

Flashless at the Golden Globes
It's much bigger than expected. PS4 vs XB1 sales in the US are something like 53:47, and this being a US poll, it's pretty surprising.
online polls arent reliable. i remember the polls being 95-5% in favor of ps4 during the xbox one reveal debacle. and yet at launch, both consoles sold a million in the first month anyway and i believe MS actually sold more in december but mostly because sony kept selling out within a day and xbox one was the only console available during thanksgiving and christmas.
 
online polls arent reliable. i remember the polls being 95-5% in favor of ps4 during the xbox one reveal debacle. and yet at launch, both consoles sold a million in the first month anyway and i believe MS actually sold more in december but mostly because sony kept selling out within a day and xbox one was the only console available during thanksgiving and christmas.
But that's the thing. The poll was before Microsoft backtracked on the DRM check-ins. Wouldn't it be reasonable to suggest that the poll results served as a wake-up call to Microsoft that their original vision for the XB1 was a really bad one? Judging the accuracy of polls by sales alone is very misguided.
 

SonGoku

Member
Fafalada
Is this what you meant by naive implementation?
Those two banks of three chips either side of the processor house 2 GB per chip. How does that extra 1 GB get accessed? It can't be accessed at the same time as the first 1 GB because the memory interface is saturated. What happens, instead, is that the memory controller must instead "switch" to the interleaved addressable space covered by those 6x 1 GB portions. This means that, for the 6 GB "slower" memory (in reality, it's not slower but less wide) the memory interface must address that on a separate clock cycle if it wants to be accessed at the full width of the available bus.
The CPU can't access the slower pool at the same time as the GPU accesses the fast pool. Could this be solved with separate buses for CPU & GPU access, like the PS4's?
 

SlimySnake

Flashless at the Golden Globes
But that's the thing. The poll was before Microsoft backtracked on the DRM check-ins. Wouldn't it be reasonable to suggest that the poll results served as a wake-up call to Microsoft that their original vision for the XB1 was a really bad one? Judging the accuracy of polls by sales alone is very misguided.
Sure. i can buy that.

but what can sony backtrack on? Cut the needless I/O shit out and go with a slower ssd to add more CUs? its too late for that. sony is locked in for good.
 
Sure. i can buy that.

but what can sony backtrack on? Cut the needless I/O shit out and go with a slower ssd to add more CUs? its too late for that. sony is locked in for good.
The I/O isn't needless, though. And based on that IGN poll someone shared, and a GAF poll which showed very similar results, this isn't XB1 Part 2: Electric Boogaloo.

Even Nvidia has stated that improving storage-to-memory communication improves performance. But the reason it is not typically done is that the solution is really expensive, so most of the time systems just go with the "brute force" approach because it's cheaper.
 

SlimySnake

Flashless at the Golden Globes
The I/O isn't needless, though. And based on that IGN poll someone shared, and a GAF poll which showed very similar results, this isn't XB1 Part 2: Electric Boogaloo.

Even Nvidia has stated that improving storage-to-memory communication improves performance. But the reason it is not typically done is that the solution is really expensive, so most of the time systems just go with the "brute force" approach because it's cheaper.
the nvidia comparison is interesting because nvidia tflops usually offer more performance somehow. if that advantage is indeed related to i/o, ps5 should be able to close the performance gap despite the tflops advantage. i guess we will find out.
 
the nvidia comparison is interesting because nvidia tflops usually offer more performance somehow. if that advantage is indeed related to i/o, ps5 should be able to close the performance gap despite the tflops advantage. i guess we will find out.
Just always remember that teraflops are theoretical peak ALU performance. Neither console will hit that peak all the time, so they need to be judged on average performance (and 1% lows). And even if the performance difference ends up scaling 1-to-1 with the teraflops difference, we're talking about 1800p vs. 2160p. That's not nearly as discernible as 720p/900p vs. 1080p. Plus, if you use Radeon Image Sharpening (RIS) on 1800p, it looks almost the same as native 4K, and RIS has almost no performance penalty.

What I can say for certain is that the XSX will have better raytracing, because RT performance depends on CU count. As high as the PS5's GPU is clocked, the improvements you'll see from the higher clock speed are faster rasterisation and higher L2/L3 cache bandwidth, not so much in RT. I don't think the PS5's audio chip will close that gap, even though it appears to be better than the XSX's (MS has never specified how powerful its audio chip is, and Sony has experience from the PS3's SPUs).
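A quick pixel-count check backs up the discernibility point (standard resolutions assumed: 1800p = 3200x1800, 4K = 3840x2160):

```python
# Pixel counts for the resolution comparisons above.
def pixels(w: int, h: int) -> int:
    return w * h

print(f"1800p / 2160p: {pixels(3200, 1800) / pixels(3840, 2160):.0%}")  # ~69%
print(f" 720p / 1080p: {pixels(1280, 720) / pixels(1920, 1080):.0%}")   # ~44%
# 1800p keeps ~69% of native 4K's pixels; 720p keeps only ~44% of
# 1080p's, which is why last gen's gaps were far more visible.
```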
 
I think people fail to realize how much of a difference custom silicon can make. Looking at the video about Advanced Mesh Shaders on Xbox Series X, you can see that the amount of model data in both scenes is the same, but the way the hardware renders it makes a huge difference to the render time. We assume the hardware doesn't draw things that are hidden/not seen, but that is not always the case, and this is where a lot of power can be wasted. Just being more efficient when rendering, and being able to ignore faces/objects which don't need to be drawn, could help a lot.

All the new GPUs have that now: XSX, PS5, RTX GPUs, the RX 6000 series.
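As a rough illustration of the culling idea (a CPU-side sketch of the principle only; real mesh shaders do this per cluster on the GPU): skip triangles facing away from the camera before any shading work is spent on them.

```python
# Back-face culling sketch: drop triangles whose normal points away
# from the camera. Pure-Python illustration of the principle.
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def back_facing(v0, v1, v2, view=(0.0, 0.0, 1.0)) -> bool:
    """True if the triangle's normal points along the view direction
    (i.e. away from a camera looking down +z)."""
    return dot(cross(sub(v1, v0), sub(v2, v0)), view) >= 0.0

tris = [((0,0,0), (1,0,0), (0,1,0)),   # normal +z: faces away, culled
        ((0,0,0), (0,1,0), (1,0,0))]   # normal -z: faces camera, kept
kept = [t for t in tris if not back_facing(*t)]
print(f"shading {len(kept)} of {len(tris)} triangles")  # 1 of 2
```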
 

Tiago07

Member
If I'm not wrong, the PS5's SSD is so fast that you can render only what the player sees, instantaneously, and the XSX can't do that.

The PS5, with less power, can do things and create worlds that the XSX can't.

The PS5's efficiency is impressive.

I saw all of that on Moore's Law Is Dead.
 

SlimySnake

Flashless at the Golden Globes
[image: AWj6IrK.jpg]


Just to clear this up for me: what is FULL ray tracing?
Path tracing. everything in the game is done by ray tracing: shadows, lighting, reflections, AO, everything. you arent going to see that in fully 3d games, maybe in 2d games like ori or shit-looking games like minecraft. its far too expensive. even a 2080 cant run full path tracing at 60 fps and 1080p in Quake, which is a 20+ year old game with blocky graphics.
 

Tiago07

Member
If I'm not wrong, the PS5's SSD is so fast that you can render only what the player sees, instantaneously, and the XSX can't do that.

The PS5, with less power, can do things and create worlds that the XSX can't.

The PS5's efficiency is impressive.

I saw all of that on Moore's Law Is Dead.
This is like Hulk vs. Flash.

And physics goes in PS5's favor.
Like,
F = ma
If Flash goes fast enough, he can defeat Hulk.

Anyway,
I'm only joking.
 

CJY

Banned
If I'm not wrong, the PS5's SSD is so fast that you can render only what the player sees, instantaneously, and the XSX can't do that.

The PS5, with less power, can do things and create worlds that the XSX can't.

The PS5's efficiency is impressive.

I saw all of that on Moore's Law Is Dead.

You're not wrong.

It's all theory at the moment, but proof is just a matter of time.
 

SamWeb

Member
No, I prefer to listen to someone with the right expertise, such as below; you are just making stuff up that isn't relevant to a shared bus between CPU and GPU. Yes, the XSX is estimated to have ~15% better bandwidth. Strange number, isn't it? Looks very similar to the TF gap...

Read the first paragraph carefully, then read it again, and again.


[image: qtqfhye.png - screenshot of the quoted bandwidth analysis]
https://www.neogaf.com/threads/next...-analysis-leaks-thread.1480978/post-257491119

I already mentioned that APU memory bandwidth is shared between the CPU and GPU. There is nothing new there, and nothing that contradicts what I said earlier.
Even with my imperfect English, I can tell that you don't understand what you wrote and what you cited.
PS5 and XSX have similar bandwidth only on the CPU side of the memory controller.
 

SlimySnake

Flashless at the Golden Globes
Just always remember that teraflops are theoretical peak ALU performance. Neither console will hit that peak all the time, so they need to be judged on average performance (and 1% lows). And even if the performance difference ends up scaling 1-to-1 with the teraflops difference, we're talking about 1800p vs. 2160p. That's not nearly as discernible as 720p/900p vs. 1080p. Plus, if you use Radeon Image Sharpening (RIS) on 1800p, it looks almost the same as native 4K, and RIS has almost no performance penalty.

What I can say for certain is that the XSX will have better raytracing, because RT performance depends on CU count. As high as the PS5's GPU is clocked, the improvements you'll see from the higher clock speed are faster rasterisation and higher L2/L3 cache bandwidth, not so much in RT. I don't think the PS5's audio chip will close that gap, even though it appears to be better than the XSX's (MS has never specified how powerful its audio chip is, and Sony has experience from the PS3's SPUs).
i dont know about consoles never hitting that peak performance. my gpu runs at 1950 mhz at all times even though nvidia says its boost clock is only 1750 mhz. my pro runs so hot there is no way devs like ND and SSM are letting precious clock cycles go free.

i think next gen is going to be very interesting because even in the worst case, where the audio chip and ssd dont help make up the resolution gap, shorter load times and better audio will give each and every ps5 port several distinct features and advantages.

this is the first time im hearing about the L2/L3 cache bandwidth advantage. where was this mentioned? also, is the faster rasterization going to make up a 17% gap in tflops? ps5 only has 22% higher clockspeeds while xsx has 44% more CUs; i dont see how faster clocks on 16 fewer CUs will make a dent.
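For reference, the teraflops arithmetic behind those percentages (standard RDNA formula: CUs x 64 lanes x 2 FLOPs per clock; the clocks and CU counts are the announced specs):

```python
# Compute-throughput arithmetic for the announced GPU specs.
def tflops(cus: int, clock_mhz: int) -> float:
    """CUs x 64 shader lanes x 2 FLOPs (fused multiply-add) x clock."""
    return cus * 64 * 2 * clock_mhz / 1e6

ps5 = tflops(36, 2230)  # ~10.28 TF
xsx = tflops(52, 1825)  # ~12.15 TF
print(f"PS5 {ps5:.2f} TF vs XSX {xsx:.2f} TF, XSX ahead by {xsx/ps5 - 1:.0%}")
# 22% higher clocks on 36 CUs don't offset 44% more CUs: 1.44 / 1.22 ~ 1.18
```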
 

SlimySnake

Flashless at the Golden Globes
Physics and 4K 60 fps

Maybe it's just this game, but higher framerates seem a general target this gen. Fucking finally.
[images: QXcdq9w.png, KSmqWrP.png]
its a cross gen game. 4k 60 fps should be standard for all cross gen games. the ps5 gpu is roughly equivalent to 18 ps4 tflops, about 10x the ps4's gpu alone. with the 7-8x improvement in cpu, 2x the fps and 4x the resolution (8x the pixel throughput, which fits inside a ~10x gpu uplift) should be expected if not standard.

next gen only games are a different story.
 
PlayStation 5's GPU Clock Speed = 2,230MHz
PlayStation 5's GPU = 2,304 Cores
PlayStation 5's Fill Rate = 2,230 MHz x 2,304 cores = 5,137,920 texels per second

Xbox Series X's GPU Clock Speed = 1,825MHz
Xbox Series X GPU = 3,328 Cores
Xbox Series X's Fill Rate = 1,825MHz x 3,328 Cores = 6,073,600 texels per second

Percentage Difference between Fill Rates of both consoles = [(5,137,920 t/s) / (6,073,600 t/s)] x 100 = 84.6% -> 100% - 84.6% = 15.4%

The PlayStation 5's texture fill rate is 15.4% less than that of the Xbox Series X.
IF the PS5 runs at 2230 MHz, which it won't unless the CPU is held to only 3 GHz.
Also, your numbers are completely wrong.

PS5 has a maximum of 321,120 Mtexels/s.
Series X has 379,600 Mtexels/s.

Also, you don't know the difference between more and less?
Each comparison takes a different base:
PS5 is over 15.4% weaker than Series X.
Series X is over 18.2% stronger than PS5.
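The corrected arithmetic, spelled out (assuming the usual RDNA figure of 4 texture units per CU, which is what those numbers imply):

```python
# Texture fill-rate arithmetic: TMUs x clock, with 4 TMUs per CU assumed.
TMUS_PER_CU = 4

def fill_rate_gtexels(cus: int, clock_mhz: int) -> float:
    """Peak texture fill rate in GTexels/s."""
    return cus * TMUS_PER_CU * clock_mhz / 1000

ps5 = fill_rate_gtexels(36, 2230)  # ~321.1 GTexels/s
xsx = fill_rate_gtexels(52, 1825)  # ~379.6 GTexels/s
print(f"PS5 is {(1 - ps5/xsx):.1%} weaker (base: XSX)")   # ~15.4%
print(f"XSX is {(xsx/ps5 - 1):.1%} stronger (base: PS5)")  # ~18.2%
```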
 

SamWeb

Member
If I'm not wrong, the PS5's SSD is so fast that you can render only what the player sees, instantaneously, and the XSX can't do that.

The PS5, with less power, can do things and create worlds that the XSX can't.

The PS5's efficiency is impressive.

I saw all of that on Moore's Law Is Dead.
It is just hope. This may affect level design, yes, but hardly more than that.
Besides, the PS5 SSD's capacity is finite.
After a year or two, people will need external media to install games, and it will be slower than the built-in drive.
And multi-platform developers (by and large) will not build around its capabilities.
 
It is just hope. This may affect level design, yes, but hardly more than that.
Besides, the PS5 SSD's capacity is finite.
After a year or two, people will need external media to install games, and it will be slower than the built-in drive.
And multi-platform developers (by and large) will not build around its capabilities.
You can't. To use other storage on the PS5 you will need to buy a drive validated by Sony, which guarantees at least the same performance as the original. They are in fact waiting for faster SSDs before validating any.
 

Evilms

Banned
Let me put this bluntly - the memory configuration on the Series X is sub-optimal.

I understand there are rumours that the SX had 24 GB or 20 GB at some point early in its design process but the credible leaks have always pointed to 16 GB which means that, if this was the case, it was very early on in the development of the console. So what are we (and developers) stuck with? 16 GB of GDDR6 @ 14 GHz connected to a 320-bit bus (that's 5 x 64-bit memory controllers).

Microsoft is touting the 10 GB @ 560 GB/s and 6 GB @ 336 GB/s asymmetric configuration as a bonus but it's sort-of not. We've had this specific situation at least once before in the form of the NVidia GTX 650 Ti and a similar situation in the form of the 660 Ti. Both of those cards suffered from an asymmetrical configuration, affecting memory bandwidth once the "symmetrical" portion of the interface was full.

Now, you may be asking what I mean by "full". Well, it comes down to two things: first is that, despite what some commentators might believe, the maximum bandwidth of the interface is limited by the 320-bit controllers and the matching 10 chips x 32-bit x 14 Gbps interface of the GDDR6 memory.

That means that the maximum theoretical bandwidth is 560 GB/s, not 896 GB/s (560 + 336). Secondly, memory has to be interleaved in order to function on a given clock timing to improve the parallelism of the configuration. Interleaving is why you don't get a single 16 GB RAM chip, instead we get multiple 1 GB or 2 GB chips because it's vastly more efficient. HBM is a different story because the dies are parallel with multiple channels per pin and multiple frequencies are possible to be run across each chip in a stack, unlike DDR/GDDR which has to have all chips run at the same frequency.

However, what this means is that you need to have address space symmetry in order to have interleaving of the RAM, i.e. you need to have all your chips presenting the same "capacity" of memory in order for it to work. Looking at the diagram below, you can see the SX's configuration: the first 1 GB of each RAM chip is interleaved across the entire 320-bit memory interface, giving rise to 10 GB operating with a bandwidth of 560 GB/s. But what about the other 6 GB of RAM?

Those two banks of three chips either side of the processor house 2 GB per chip. How does that extra 1 GB get accessed? It can't be accessed at the same time as the first 1 GB because the memory interface is saturated. What happens, instead, is that the memory controller must instead "switch" to the interleaved addressable space covered by those 6x 1 GB portions. This means that, for the 6 GB "slower" memory (in reality, it's not slower but less wide) the memory interface must address that on a separate clock cycle if it wants to be accessed at the full width of the available bus.

The fallout of this can be quite complicated depending on how Microsoft have worked out their memory bus architecture. It could be a complete "switch" whereby on one clock cycle the memory interface uses the interleaved 10 GB portion and on the following clock cycle it accesses the 6 GB portion. This implementation would have the effect of averaging the effective bandwidth for all the memory. If you average this access, you get 392 GB/s for the 10 GB portion and 168 GB/s for the 6 GB portion for a given time frame but individual cycles would be counted at their full bandwidth.

However, there is another scenario with memory being assigned to each portion based on availability. In this configuration, the memory bandwidth (and access) is dependent on how much RAM is in use. Below 10 GB, the RAM will always operate at 560 GB/s. Above 10 GB utilisation, the memory interface must start switching or splitting the access to the memory portions. I don't know if it's technically possible to actually access two different interleaved portions of memory simultaneously by using the two 16-bit channels of the GDDR6 chip but if it were (and the standard appears to allow for it), you'd end up with the same memory bandwidths as the "averaged" scenario mentioned above.

If Microsoft were able to simultaneously access and decouple individual chips from the interleaved portions of memory through their memory controller, then you could theoretically push the access to an asymmetric balance, being able to switch between a pure 560 GB/s for 10 GB RAM and a mixed 224 GB/s from 4 GB of that same portion plus the full 336 GB/s of the 6 GB portion (also pictured below). This seems unlikely to my understanding of how things work, and undesirable from a technical standpoint in terms of game memory access and also architecture design.

In comparison, the PS5 has a static 448 GB/s bandwidth for the entire 16 GB of GDDR6 (also operating at 14 GHz, across a 256-bit interface). Yes, the SX has 2.5 GB reserved for system functions and we don't know how much the PS5 reserves for that similar functionality but it doesn't matter - the Xbox SX either has only 7.5 GB of interleaved memory operating at 560 GB/s for game utilisation before it has to start "lowering" the effective bandwidth of the memory below that of the PS5... or the SX has an averaged mixed memory bandwidth that is always below that of the baseline PS5. Either option puts the SX at a disadvantage to the PS5 for more memory intensive games and the latter puts it at a disadvantage all of the time.



[image: RAM configuration graphic]
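Here's a small model of the "switching" scenario described above (my own sketch; how Microsoft's memory controller actually arbitrates is not public): sustained bandwidth becomes a traffic-weighted harmonic mean of the two regions' peaks.

```python
# Effective XSX bandwidth if the bus serves one region per access:
# the 10 GB interleaved region peaks at 560 GB/s, the extra 6 GB at
# 336 GB/s. (A sketch under assumed arbitration, not confirmed behaviour.)
def effective_bw(fast_share: float, fast=560.0, slow=336.0) -> float:
    """Average GB/s when fast_share of traffic hits the 10 GB region."""
    return 1 / (fast_share / fast + (1 - fast_share) / slow)

for share in (1.0, 0.9, 0.7, 0.5):
    print(f"{share:.0%} fast-region traffic -> {effective_bw(share):.0f} GB/s")
# 100% -> 560; 90% -> 525; 70% -> 467; 50% -> 420 GB/s. Under this model
# the average dips below the PS5's flat 448 GB/s once less than ~62% of
# traffic stays in the fast 10 GB region.
```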


 

SamWeb

Member
You can't. To use other storage on the PS5 you will need to buy a drive validated by Sony, which guarantees at least the same performance as the original. They are in fact waiting for faster SSDs before validating any.
Thank you, captain. :messenger_sunglasses: But none of them will get a 12-channel controller, and they will work over a PCIe 4.0 x4 link.
 

SlimySnake

Flashless at the Golden Globes
Well, no.
Shadow of Mordor, Inquisition, Watch Dogs and such were cross-gen and never saw 60 fps.
Cross-gen is not equal to remaster.
the ps4 cpu was barely even as powerful as the cell. i actually recall seeing some gdc charts showing the cell had an advantage in some simulations.

the cpu wont be a bottleneck next gen.
 
Thank you, captain. :messenger_sunglasses: But none of them will get a 12-channel controller, and they will work over a PCIe 4.0 x4 link.
You're welcome.
But that's exactly why Sony is waiting not just for faster SSDs, but for SSDs faster than the one in the PS5, to compensate for the absence of its customisations with raw speed (check the conference). In this regard, I expect all SSDs validated by Sony to be faster in raw speed than the original.
the ps4 cpu was barely even as powerful as the cell. i actually recall seeing some gdc charts showing the cell had an advantage in some simulations.

the cpu wont be a bottleneck next gen.
Yes, I agree. For that reason I see a bigger focus on frame rates; so far it's going well, with Microsoft targeting up to 120 fps and Sony games like Godfall and this one targeting 4K 60 fps. Let's see how it goes.
 

geordiemp

Member
https://www.neogaf.com/threads/next...-analysis-leaks-thread.1480978/post-257491119

I already mentioned that APU memory bandwidth is shared between the CPU and GPU. There is nothing new there, and nothing that contradicts what I said earlier.
Even with my imperfect English, I can tell that you don't understand what you wrote and what you cited.
PS5 and XSX have similar bandwidth only on the CPU side of the memory controller.

Lady Gaia over at that other place is a professional game-engine optimiser who designed profiling and engine-optimisation tools for a living; everybody listens to, learns from, and respects her posts.

I know who I believe over your confused ramblings lol. Where does your knowledge come from, exactly? Timdog?

Go read others who know what they are talking about.
 

SlimySnake

Flashless at the Golden Globes
Lady Gaia over at that other place is a professional game-engine optimiser who designed profiling and engine-optimisation tools for a living; everybody listens to, learns from, and respects her posts.

I know who I believe over your confused ramblings lol. Where does your knowledge come from, exactly? Timdog?

Go read others who know what they are talking about.
i think we need to stop putting these devs on a pedestal. dont forget klee was misled and made to look like a fool by a dev, his so-called friend. jason schreier has been having a meltdown because he was led to believe that the ps5 was superior and the most revolutionary console in 20 years. osiris black has lost all credibility thanks to devs who lied to his face for months.

devs are people. they have their own biases. we have devs working on ps5 devkits lying to everyone, and lady gaia doesnt even have devkits. its nice to have her opinions since she knows more than the average gamer on the internet, but after recent events, lets take her opinions with a grain, if not a bucket, of salt.
 

SamWeb

Member
Lady Gaia over at that other place is a professional game-engine optimiser who designed profiling and engine-optimisation tools for a living; everybody listens when she posts.

I know who I believe over your confused ramblings lol

Go read others who know what they are talking about.
It's funny how you brandish someone else's authority (against whom I have no complaints)... LOL
Once again I'm convinced that you don't understand what you're citing (no offense).
 

SamWeb

Member
You're welcome.
But that's exactly why Sony is waiting not just for faster SSDs, but for SSDs faster than the one in the PS5, to compensate for the absence of its customisations with raw speed (check the conference). In this regard, I expect all SSDs validated by Sony to be faster in raw speed than the original.
Let me keep my doubts about this. :)
 

geordiemp

Member
It's funny how you brandish someone else's authority (against whom I have no complaints)... LOL
Once again I'm convinced that you don't understand what you're citing (no offense).


You don't understand the subject either, that much is clear; neither do I, fully. But I know she is regarded as an engine-optimisation specialist and everybody always agrees with her, even colbert.

i think we need to stop putting these devs on a pedestal. dont forget klee was misled and made to look like a fool by a dev, his so-called friend. jason schreier has been having a meltdown because he was led to believe that the ps5 was superior and the most revolutionary console in 20 years. osiris black has lost all credibility thanks to devs who lied to his face for months.

devs are people. they have their own biases. we have devs working on ps5 devkits lying to everyone, and lady gaia doesnt even have devkits. its nice to have her opinions since she knows more than the average gamer on the internet, but after recent events, lets take her opinions with a grain, if not a bucket, of salt.

I agree, I give no weight to reporters or devs praising what they are working on; water is wet... But it's nice to have an expert explain how the memory contention works, not self-promoting, just explaining...

Because let's face it, everything from 560 to 192 gets bandied around by the ignorant. Her analysis puts the XSX at ~15% more bandwidth with ~15% more TF to feed - the best explanation I have read so far this week.

Even people like NXGamer, who does his own DF-style analysis, asks her questions and agrees with every word, as do all the others... so until I read something better... We will find out soon enough.

PS: if games use less than 10 GB, then the XSX is a beast... if much more than 10 GB, then the XSX has the same limitations as the PS5, is what I see.

Could be a leveller, but if you want a clear XSX victory, that's not what you want to hear.
 

SamWeb

Member
You don't understand the subject either, that much is clear; neither do I, fully. But I know she is regarded as an engine-optimisation specialist and everybody always agrees with her, even colbert.
A hard case...

Please list all the contradictions point by point, and we will find out who is right and who is wrong.
 