
Next-Gen PS5 & XSX |OT| Console tEch threaD


TBiddy

Member
Sometimes you get the feeling that people consider Cerny an infallible, all-knowing god who single-handedly designs the PlayStation, while the Microsoft engineers are bumbling idiots who have no clue what they are doing.

Comments such as "off the shelf", "just a PC", "brute force", "so many bottlenecks" and what have you are idiotic and should almost be a bannable offence. They ruin any form of discussion.
 
Here is a good article on GDDR6. It's written around Nvidia, but ....


There are also 4 transfers per clock, and timing, parallel-to-serial conversion, and crosstalk also come into play. You're right, it is complex, and maybe timing between the "upper and lower" RAM too?

Maybe that's why it's 10 and 6 independent, and why everyone expected the same RAM for each chip, which is the normal arrangement?

I don't know, but it ain't simple straws lol, it's complex engineering timing, and any explanation is vastly oversimplifying.

This design was definitely meant to be 20 GB, or maybe MS thought they could resolve the complex timing and crosstalk. Let's see. It's unusual for sure, and anyone thinking it's simple digital straws is funny.
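To put rough numbers on it (back-of-envelope only, taking the 14 Gbps per-pin rate and the "4 transfers per clock" at face value):

```python
# Back-of-envelope GDDR6 numbers; pure unit conversion, nothing vendor-specific.
rate_gbps = 14                          # per-pin data rate, Gbit/s
clock_ghz = rate_gbps / 4               # implied interface clock at 4 transfers/clock
bit_time_ps = 1e12 / (rate_gbps * 1e9)  # width of one bit on the wire, in picoseconds

chip_gbs = rate_gbps * 32 / 8           # 32 data pins per chip -> GB/s per chip
print(clock_ghz, round(bit_time_ps, 1), chip_gbs)  # 3.5  71.4  56.0
print(10 * chip_gbs, 6 * chip_gbs)                 # 560.0  336.0
```

Every bit cell is on the order of 70 picoseconds wide at that rate, which is why the timing margins are so unforgiving.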

[GDDR6 timing table, values in picoseconds]
Holy shit that is picoseconds. From Wikipedia -> A picosecond is to one second as one second is to approximately 31,689 years
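The ratio checks out with simple division:

```python
# 1 s / 1 ps = 1e12, so 1e12 seconds expressed in (Julian) years:
seconds_per_year = 60 * 60 * 24 * 365.25
print(round(1e12 / seconds_per_year))  # 31689
```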

 

rnlval

Member
The chips themselves aren't actually slower. By "slow" they mean the bandwidth available to the GPU (10×56) versus the CPU (6×56).

The chips themselves are actually all the same speed.
NAVI 10 is divided into quadrants by four 64-bit memory controllers. Each Shader Engine is split in two, and each half of a Shader Engine has a 64-bit memory controller and four L2 cache banks.
 
Yeah, I know the chips are all the same speed, 14 Gbps... and on wording, I should have said narrower, slower access...

However, think picosecond timing and access at 4 transfers per clock... go read the article on GDDR6.

I read a post on Beyond3D where a guy was saying timing would be difficult with this configuration to keep the whole bus active. Let's see, there may be more to this.

By narrower you just mean "don't use as many lanes", i.e. 6×56 vs 10×56.
 
NAVI 10 is divided into quadrants by four 64-bit memory controllers. Each Shader Engine is split in two, and each half of a Shader Engine has a 64-bit memory controller and four L2 cache banks.

So each controller manages two 32-bit paths to memory?

I like the diagram you posted earlier, thanks for that.

Each 16-bit path is bidirectional, it seems. So one path could be writing while the other is reading, or both could be reading (though probably not both writing) at the same time.

So if, say, the GPU needed the full bandwidth of any memory chip, would it have to consume it through both 16-bit paths, or can one 16-bit path carry all 56 GB/s by itself?

I'm thinking back to your ten-straw analogy. If the GPU can consume all 56 GB/s across the entire lane, that leaves no room for GPU accesses to the 6 GB, correct?
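For what it's worth, the per-path ceiling falls straight out of pin count times per-pin rate (assuming the usual 14 Gbps GDDR6 pins; whether the controllers can actually schedule around it is the open question):

```python
# Each GDDR6 chip exposes two independent 16-bit channels,
# and the per-pin rate caps each path.
GBPS_PER_PIN = 14                    # Gbit/s per data pin (assumed)
per_channel = GBPS_PER_PIN * 16 / 8  # one 16-bit channel -> 28.0 GB/s
per_chip = GBPS_PER_PIN * 32 / 8     # both channels      -> 56.0 GB/s
print(per_channel, per_chip)  # a single 16-bit path can't carry the full 56 GB/s
```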
 

geordiemp

Member
So each controller manages two 32-bit paths to memory?

I like the diagram you posted earlier, thanks for that.

Each 16-bit path is bidirectional, it seems. So one path could be writing while the other is reading, or both could be reading (though probably not both writing) at the same time.

So if, say, the GPU needed the full bandwidth of any memory chip, would it have to consume it through both 16-bit paths, or can one 16-bit path carry all 56 GB/s by itself?

I'm thinking back to your ten-straw analogy. If the GPU can consume all 56 GB/s across the entire lane, that leaves no room for GPU accesses to the 6 GB, correct?

Sorry, I posted the wrong link before; it's page 8 you want to read.

4 transfers per clock and a complex transmission that needs correct timing and close attention to crosstalk; it's not straight "simple digital" access.

anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/8


 
My friend, why not stop the armchair psychoanalysis and personal mockery and keep it informative? Can you explain how that 100GB is "instantly ready" when you can only transfer it at 2.4GB/s (4.8GB/s compressed)? I think you're smarter than me, since you've been arguing that I suffer from psychological or behavioral issues; in that case you should be above posting misleading information that adds nothing new. It's like someone saying "I can eat a burger in one bite" and another responding "I can eat a whole cow in 2 months". It's not the same.

EDIT: PS5 is capped at 22GB/s compressed, and @BGs has seen 20GB/s in action. XSX is capped at 6GB/s.

Yes, it's very easy to explain, actually. The data is NOT transferred over the system PCIe bus; rather, it has a direct connection to the GPU as a virtual memory frame buffer.

The only analogue we have for this seems to be AMD's SSG, or Solid State Graphics, implementation, where they connected an SSD directly into the I/O fabric of the GPU.

According to Phil Spencer and the XSX Velocity Architecture team, this implementation roughly mimics the I/O speed of a slow DRAM.

That's how. That's why they do not seem fazed by the write speeds of Sony's SSD: having access to a 100GB game package nearly instantly trumps the fast-SSD argument.
 
Microsoft's Head of Gaming said:

Thanks to their speed, developers can now use the SSD practically as virtual RAM.
The SSD access times come close to the memory access times of the current console generation.
Of course, the OS must allow developers access that goes beyond that of a pure storage medium. Then we will see how the address space will increase immensely - comparable to the change from Win16 to Win32 or in some cases Win64.
Of course, the SSD will still be slower than the GDDR6 RAM that sits directly on top of the die. But the ability to directly supply data to the CPU and GPU via the SSD will enable game worlds to be created that will not only be richer, but also more seamless.
Not only in terms of pure loading times, but also in terrain mapping. A graphic designer no longer has to worry about when GDDR6 ends and when the SSD starts.

"It is, after all, still very much news as this is the most extensive explanation to date of alternate uses (beyond speeding up loading times) for SSD in next-generation consoles.

Furthermore, Digital Foundry also posted its own speculation video on this very topic a few days ago, postulating that the Xbox Series X could be using some of the tech featured in the Radeon Pro SSG (Solid State Graphics) workstation presented a few years ago by AMD."
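The general idea being described, storage addressed like memory rather than read through explicit I/O calls, is the same one behind memory-mapped files. A toy sketch only (this has nothing to do with the actual Velocity Architecture API, and the file name is made up):

```python
import mmap
import os

PATH = "assets.pak"  # hypothetical game package

# Create a dummy 64 MiB file so the example is self-contained.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.truncate(64 * 1024 * 1024)

with open(PATH, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
        # Slicing `mem` pages data in from storage on demand: the addressable
        # space can be far larger than what is actually resident in RAM.
        offset = 16 * 1024 * 1024
        texture = mem[offset : offset + 4096]
        print(len(texture))  # 4096
```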

 

Kusarigama

Member
"Things just got weird. Ali Salehi, the developer who made these comments on PS5, has apparently withdrawn his statements. The interview itself was apparently also taken offline.

The Twitter user who originally translated Salehi's comments, @ man4dead, deleted all the tweets about the interview and said the Crytek engineer "no longer confirms the content of the interview for personal reasons."

Is that the same as dismissed? I hope that clears the issue up. Let that entire commentary go. He was utterly wrong anyway. Why hold onto it?
Withdrawing is not the same as being dismissed.

More CPU cores are tricky to work with.

As opposed to stables of lead developers and engineers who work hand in hand to develop games and plot out the DirectX features that go into Nvidia's and AMD's HW.

Better than those guys?

This is a very strange perspective to have. Cerny is a brilliant man; Xbox staff will tell you that upfront. But he isn't the only brilliant person in the console HW space with a background in game design.
Okay, then name any three other people with similar experience.
 

Kusarigama

Member
Sometimes you get the feeling that people consider Cerny an infallible, all-knowing god who single-handedly designs the PlayStation, while the Microsoft engineers are bumbling idiots who have no clue what they are doing.

Comments such as "off the shelf", "just a PC", "brute force", "so many bottlenecks" and what have you are idiotic and should almost be a bannable offence. They ruin any form of discussion.
These are all your own words. Relax a bit.
 

pawel86ck

Banned
"Things just got weird. Ali Salehi, the developer who made these comments on PS5, has apparently withdrawn his statements. The interview itself was apparently also taken offline.

The Twitter user who originally translated Salehi's comments, @ man4dead, deleted all the tweets about the interview and said the Crytek engineer "no longer confirms the content of the interview for personal reasons."

Is that the same as dismissed? I hope that clears the issue up. Let that entire commentary go. He was utterly wrong anyway. Why hold onto it?
Ali Salehi claimed MS hadn't updated DX in a long time, but we know for a fact that's not true (DX12 Ultimate). This interview was probably old, and that's why Ali Salehi has withdrawn his statements.
 
HEY GUYS, this is Austin, and Sony just announced the new wireless controller for the PS5 - it's called DualSense.

OH by the way, the Xbox Series X is 20% more powerful.

The controller will feature brand new haptic feedback and adaptive triggers.

Don't forget the Xbox Series X is still the most powerful console, and Microsoft had haptic feedback in the Xbox One controller 7 years ago.

Moving along, Sony opted for a two-tone finish on the DualSense controller and it's different. Yeahhh...

Remember, the Xbox Series X is still really powerful, and the Xbox controller has offset sticks, which are vastly superior.


Lastly, Sony decided to get rid of the "Share" button and replace it with a new "Create" button. Sony is being really coy about this feature and its functionality.

If I haven't already told you guys, the Xbox Series X has 12 TFLOPs, is more powerful than the PlayStation 5, and Microsoft built a Share button into the controller.
--------------------------------------------------------------------------------------------------------------------------------------------------------
Gonna go out on a limb here and say the functionality for the "Share" button isn't going away. Just a hunch.
Share button is soo last gen.
The Create button will also allow PS5 gamers to share game levels and scenarios. Next gen on steroids.
 

DForce

NaughtyDog Defense Force
Yes, it's very easy to explain, actually. The data is NOT transferred over the system PCIe bus; rather, it has a direct connection to the GPU as a virtual memory frame buffer.

The only analogue we have for this seems to be AMD's SSG, or Solid State Graphics, implementation, where they connected an SSD directly into the I/O fabric of the GPU.

According to Phil Spencer and the XSX Velocity Architecture team, this implementation roughly mimics the I/O speed of a slow DRAM.

That's how. That's why they do not seem fazed by the write speeds of Sony's SSD: having access to a 100GB game package nearly instantly trumps the fast-SSD argument.
So let me get this straight.

You believe Xbox Series X's SSD solution is faster than the PlayStation 5's?
 

samsrt_

Neo Member
Nope, those are in USD:



"renewed"


Most of them are discontinued:


This was for $320 with a game as well:

I really don't get what point you're trying to make. The original meme was UK pricing.

Still, going by your links, the Elite Controller is out of stock, so the $300 price tag is from Amazon third-party sellers price gouging. I just looked on the Xbox website, and they sell it for $180. So, if that is too difficult to understand: Microsoft prices it at $180.

Microsoft is also selling the One X for $299 on sale, and the One S for $299 as well. Meaning they clearly have a lot of One Xs they want to get rid of in the next few months, and for whatever financial reason it wasn't worth putting the One S on sale too. It's simple economics.

I don't know why you're so obsessed with Amazon listings, with most prices set by third-party price gougers. Like, of all the things to make fun of Xbox for (and there are a lot), that's the best you can come up with?
 

Darius87

Member
Ali Salehi claimed MS hadn't updated DX in a long time, but we know for a fact that's not true (DX12 Ultimate). This interview was probably old, and that's why Ali Salehi has withdrawn his statements.
Really? Where did he state that? Can you quote him?
All I read from his interview about the API was:
For the Xbox, they have to put DirectX and Windows on the console, which is many years old, but for each new console that Sony builds, it also rebuilds the software and APIs in any way it wants.

I don't see any word about updates. He means that MS builds upon an older version of DX, while Sony starts from zero every generation; that's why its API is better than MS's.
 
D

Deleted member 775630

Unconfirmed Member
It's a mock-up, and should be posted in one of the threads dedicated to that.
Yeah, but what's the point if it can never come into existence? That's like me creating an XSX skin with TLOU2 :pie_tears_joy:
 

rnlval

Member
So each controller manages two 32-bit paths to memory?

I like the diagram you posted earlier, thanks for that.

Each 16-bit path is bidirectional, it seems. So one path could be writing while the other is reading, or both could be reading (though probably not both writing) at the same time.

1. So if, say, the GPU needed the full bandwidth of any memory chip, would it have to consume it through both 16-bit paths, or can one 16-bit path carry all 56 GB/s by itself?

2. I'm thinking back to your ten-straw analogy. If the GPU can consume all 56 GB/s across the entire lane, that leaves no room for GPU accesses to the 6 GB, correct?
1. Unknown. There's not enough data on the XSX's five memory controllers; MS hasn't revealed the XSX GPU layout.
2. It depends on the workloads queued (FIFO) into the five 64-bit memory controllers.
 

DaGwaphics

Member
^ Starting from scratch automatically makes it better? :messenger_grinning:

MS does not use bog-standard DX for the Xbox; the version used on all of the consoles has been lower-level than the PC equivalent. Not as bare-metal as Sony's approach, but that is a conscious decision that allows MS to maintain fluidity in HW design. With that said, Sony did a great job mining the power of the PS4, and I'm sure they'll knock it out of the park with the PS5 as well.
 

geordiemp

Member
1. Unknown. There's not enough data on the XSX's five memory controllers; MS hasn't revealed the XSX GPU layout.
2. It depends on the workloads queued (FIFO) into the five 64-bit memory controllers.

No.

If all the chips could be accessed simultaneously, there would be no need for MS to state the 336 GB/s max bandwidth number in their spec. Yes, it's there in black and white.

Otherwise they would just state 560 GB/s max all the time and leave it.

It's that simple.
 

Darius87

Member
^ Starting from scratch automatically makes it better? :messenger_grinning:

MS does not use bog-standard DX for the Xbox; the version used on all of the consoles has been lower-level than the PC equivalent. Not as bare-metal as Sony's approach, but that is a conscious decision that allows MS to maintain fluidity in HW design. With that said, Sony did a great job mining the power of the PS4, and I'm sure they'll knock it out of the park with the PS5 as well.
Yes, if it's tailored to specific HW like the PS5's, it can gain more performance from the HW features it has.
 

DaGwaphics

Member
No.

If all the chips could be accessed simultaneously, there would be no need for MS to state the 336 GB/s max bandwidth number in their spec. Yes, it's there in black and white.

Otherwise they would just state 560 GB/s max all the time and leave it.

It's that simple.

Obviously, the upper 1GB of each of those 2GB chips could never be accessed at more than 336 GB/s; that memory is only reachable through the memory controllers connected to those chips (with each chip having at most a 32-bit path).
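A sketch of why that follows, assuming a simple address interleave (the real mapping isn't public, and the 256-byte stripe is arbitrary):

```python
# Hypothetical interleave: the first 10 GB stripes across all ten chips,
# the upper 6 GB only across the six 2 GB chips.
STRIPE = 256
FAST_BYTES = 10 * 1024**3

def chip_for(addr):
    if addr < FAST_BYTES:
        return (addr // STRIPE) % 10            # ten chips in parallel -> 560 GB/s aggregate
    return ((addr - FAST_BYTES) // STRIPE) % 6  # six chips in parallel -> 336 GB/s aggregate

print(chip_for(0), chip_for(FAST_BYTES + 7 * STRIPE))  # 0 1
```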
 

Bo_Hazem

Banned
I believe that in the DF teardown they said the XSX uses PCIe 4.0, so please stop spreading FUD.

Re: this comment of yours, it is actually the other way around. Sony allows off-the-shelf SSDs to be inserted into the PS5 as expansion, while MS developed a proprietary expansion card with Seagate. The SSD in the PS5 is the same as the new SSDs that will be available to everyone later this year (when they hit the market). So your statement is correct when you replace Sony with MS. Please stop spreading FUD.

Man, even 7GB/s is not guaranteed to match the internal SSD. The XSX SSD will be outdated quickly with its budget, PCIe-4.0-wannabe 2.4GB/s if NVMe M.2 drives change architecture to match the PS5's 1-3 years later.
 

Bo_Hazem

Banned
Saying that the theoretical peak is 20+GB/s does not mean that it is constant or that I have seen it. Let's try not to read beyond my words.

Indeed, it's not constant; it's a peak that could be hit occasionally. I didn't mean that you meant it as constant.

By the way, how do you feel? And I hope all your family and loved ones are safe. 🙌
 

Gamernyc78

Banned
Really? Where did he state that? Can you quote him?
All I read from his interview about the API was:
For the Xbox, they have to put DirectX and Windows on the console, which is many years old, but for each new console that Sony builds, it also rebuilds the software and APIs in any way it wants.

I don't see any word about updates. He means that MS builds upon an older version of DX, while Sony starts from zero every generation; that's why its API is better than MS's.

Yeah, for every Microsoft version, Sony builds its own API, sometimes better. This was the case last gen.
 

Bo_Hazem

Banned
Yes, it's very easy to explain, actually. The data is NOT transferred over the system PCIe bus; rather, it has a direct connection to the GPU as a virtual memory frame buffer.

The only analogue we have for this seems to be AMD's SSG, or Solid State Graphics, implementation, where they connected an SSD directly into the I/O fabric of the GPU.

According to Phil Spencer and the XSX Velocity Architecture team, this implementation roughly mimics the I/O speed of a slow DRAM.

That's how. That's why they do not seem fazed by the write speeds of Sony's SSD: having access to a 100GB game package nearly instantly trumps the fast-SSD argument.

Seems like you're missing the point here: it's still 2.4GB/s raw (4.8GB/s compressed, ~6GB/s max) vs 5.5GB/s raw (9GB/s compressed, 22GB/s max), and 2 priority levels vs 6 priority levels.
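To make the gap concrete, here's the simple division for a 100 GB package at those rates (peak figures assume the best-case compression ratio holds for the whole package, which it won't in practice):

```python
# Time to move a 100 GB package at the quoted rates (GB/s).
rates = {
    "XSX raw": 2.4, "XSX compressed": 4.8, "XSX peak": 6.0,
    "PS5 raw": 5.5, "PS5 compressed": 9.0, "PS5 peak": 22.0,
}
for label, gbps in rates.items():
    print(f"{label:>15}: {100 / gbps:5.1f} s")
# XSX raw: 41.7 s ... PS5 peak: 4.5 s -- "nearly instantly" is doing a lot of work.
```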
 

Bo_Hazem

Banned
I really don't get what point you're trying to make. The original meme was UK pricing.

Still, going by your links, the Elite Controller is out of stock, so the $300 price tag is from Amazon third-party sellers price gouging. I just looked on the Xbox website, and they sell it for $180. So, if that is too difficult to understand: Microsoft prices it at $180.

Microsoft is also selling the One X for $299 on sale, and the One S for $299 as well. Meaning they clearly have a lot of One Xs they want to get rid of in the next few months, and for whatever financial reason it wasn't worth putting the One S on sale too. It's simple economics.

I don't know why you're so obsessed with Amazon listings, with most prices set by third-party price gougers. Like, of all the things to make fun of Xbox for (and there are a lot), that's the best you can come up with?

Mate, I made that meme. Here:

[meme images]

Due to drastic fail, the X1X has been discontinued

[image]
 
No, she is analysing the APU and shared bus, nothing to do with the PC at all. If the amount of access to the >10 GB memory is 48 GB/s, then the GPU's available max bandwidth is 480 GB/s, which is still better than the PS5 at 400 GB/s, assuming the same non-GPU use....

Also, did MS not say they would put CPU-bound memory in the slower pool? I can't recall. That's where the 48 comes from ....estimated.

And funnily enough, that GPU bandwidth is the same per TF...

Obviously, if a game uses < 10 GB, it does not matter.
Correct, the only dedicated RAM is ~2GB for the OS. The XSX RAM is a shared pool other than that.
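To spell out the arithmetic in the quote (a rough model that assumes the bus simply time-slices between the two pools, and takes the thread's own 48 GB/s estimate for non-GPU traffic):

```python
def gpu_available(total_gbs, pool_rate_gbs, other_demand_gbs):
    # Fraction of bus time spent serving the non-GPU demand out of its pool;
    # whatever time remains serves the GPU at the full rate.
    time_share_other = other_demand_gbs / pool_rate_gbs
    return (1 - time_share_other) * total_gbs

print(gpu_available(560, 336, 48))  # XSX: 480.0 GB/s left for the GPU
print(gpu_available(448, 448, 48))  # PS5: 400.0 GB/s left for the GPU
```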
 

Bo_Hazem

Banned
You used a refurbished (renewed) price, which means they're used consoles. You're being heavily biased and providing misinformation.

It's discontinued, what should I do?

Here's the pricing from the official page:


I just hope you don't say they're biased as well:

[screenshot]


 

rnlval

Member
No.

If all the chips could be accessed simultaneously, there would be no need for MS to state the 336 GB/s max bandwidth number in their spec. Yes, it's there in black and white.

Otherwise they would just state 560 GB/s max all the time and leave it.

It's that simple.
Nope:
1. If all GPU quadrants focus on the 6 GB address range, the GPU would act like an RX 5600 XT's 336 GB/s memory access pattern.
2. If all GPU quadrants focus on the 10 GB address range, the GPU would have a 560 GB/s memory access pattern.

The limiter is the memory address range, regardless of the independent 64-bit memory controllers. The GPU is multitasking and multithreaded; the burden is on the programmer's workload organization.
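One way to picture that workload dependence (a toy, time-weighted model, not anything MS has published): if a fraction f of the GPU's traffic targets the slow 6 GB range, the blended rate is the harmonic mix of the two ceilings.

```python
def effective_bw(f, fast=560.0, slow=336.0):
    # f = fraction of traffic hitting the 6 GB range; time-weighted mix.
    return 1.0 / (f / slow + (1.0 - f) / fast)

for f in (0.0, 0.25, 0.5, 1.0):
    print(f, round(effective_bw(f), 1))  # 560.0, 480.0, 420.0, 336.0
```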

 

DaGwaphics

Member
If there is an Xbox Series S, I would expect the One S to be moved out of production swiftly as well. No point in keeping those alive if the new systems are fully backwards compatible (unless you anticipate meaningful production difficulties on your new system).
 

rnlval

Member
Nope.
Page 6.
"This arrangement reduces the pressure on the globally shared L2 cache, which is still closely associated with the memory controllers"
Nope, the partitioned L2 cache has crossbar-type links to enable global L2 sharing.
 

psorcerer

Banned
Nope, the partitioned L2 cache has crossbar-type links to enable global L2 sharing.

That's an implementation detail of the crossbar.
Memory controllers and the L2 are not logically connected to the shader engines in any way.
All access to RAM goes through the global L2 (via the crossbar).
And it's cool, because otherwise it would be impossible to enforce any "separate" memory pools.
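A minimal sketch of what a globally shared, banked L2 behind a crossbar means (bank count and line size are illustrative, not the real RDNA figures):

```python
LINE = 64    # bytes per cache line (illustrative)
BANKS = 16   # e.g. four banks per controller x four controllers (illustrative)

def l2_bank(addr):
    # The crossbar lets any shader engine reach any bank; which bank a line
    # lives in is a pure function of its address, so every client sees one
    # coherent shared L2 rather than per-engine slices.
    return (addr // LINE) % BANKS

print(l2_bank(0), l2_bank(64), l2_bank(64 * BANKS))  # 0 1 0
```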
 