
Xbox Velocity Architecture - 100 GB is instantly accessible by the developer through a custom hardware decompression block

Fafalada

Fafracer forever
I didnt know PS5 has a 3.7 GHz CPU.
Unless XSX SMT toggle allows runtime switching(which would be comical after all the fixed clock talk) I doubt it'll be often worth it where CPU presents actual bottleneck.
But fair enough, .3 - the number wasn't really point of my post.
 

longdi

Banned
I realize ppl love lists on internet but... what "areas" are we talking here? Faster graphics subsystem encompasses memory and the related checkboxes, its not a set of 'areas' - if you removed something like extra bandwidth, the gap would effectively vanish.

The .1 ghz faster cpu is... another 'thing' I guess?

I feel just saying 'tflops advantage' is hand waving.
Where the Series X has most of the advantages you will want in a console match-up, including BC and cloud integration.
Something like 16 channels of I/O feels *really* small in comparison.
 

Bernkastel

Ask me about my fanboy energy!
I feel just saying 'tflops advantage' is hand waving.
Where the Series X has most of the advantages you will want in a console match-up, including BC and cloud integration.
Something like 16 channels of I/O feels *really* small in comparison.
Regarding the 12 channel vs 4 channel thing
Google Translate
This SSD has been analyzed very thoroughly, and there is no special "black technology" in it. Everything is the result of adapting specifically to the game-console use case. In theory the cost is not high; it may even be lower than the cost of the Xbox Series X's SSD.

That's right: more than twice the speed, yet in theory still cheaper. Why?

Because the SSD controller itself is actually very cheap; its main role is to let manufacturers differentiate pricing. In practice the wholesale price of a high-end controller and a low-end one differ by only a few dollars. Still, a good horse gets a good saddle: high-end SSDs on PC are usually paired with a small DDR cache to store the address lookup table (LUT), and that's about it.

If you want to build a 1TB drive out of a pile of 64GB flash chips, you can use 16 channels with one flash chip per channel. That is the fastest arrangement: 16 x 64GB = 1024GB, and we can call the speed 16X.

The PS5 is such a design: 12 channels, 64GB per channel, 12 x 64 = 768GB. Converting from 1024-based to 1000-based units, that is 825GB. The speed counts as 12X.

The XSX has only 4 channels, but each channel can stack multiple chips, so 4 x 4 x 64GB = 1024GB, with only 4X speed. So if you look at the raw speeds, 5.5GB/s vs 2.4GB/s is roughly a 2.3x gap. Why not 3x? Because a single 64GB flash chip cannot saturate a channel on its own; stacking multiple chips per channel uses the channel bandwidth more completely.

This is also why Phison's 8-channel E16 controller can reach 5GB/s reads while the PS5's 12-channel controller only reaches 5.5GB/s: the incomplete utilization of channel bandwidth by a single flash chip wastes a certain amount of it.

Of course, there is another possibility: the per-channel MT/s of the PS5's controller may simply be lower. In other words, the speed ceiling of each channel on the controller is lower, which again just saves money...

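A minimal sketch of that channel arithmetic, assuming the post's 64GB-per-die figure; the per-channel throughput is just the quoted drive speed divided by the channel count, an inference rather than a spec-sheet number:

```python
# Channel arithmetic behind the 16X / 12X / 4X comparison above.
CHIP_GB = 64  # one flash die, using the post's "64GB" figure

configs = {
    # name: (channels, dies per channel, quoted sequential read in GB/s)
    "Hypothetical 16-channel": (16, 1, None),
    "PS5 (12 channels)":       (12, 1, 5.5),
    "XSX (4 channels)":        (4,  4, 2.4),
}

for name, (channels, dies_per_channel, quoted) in configs.items():
    capacity = channels * dies_per_channel * CHIP_GB
    if quoted is None:
        print(f"{name}: {capacity} GB raw")
    else:
        # One die per channel leaves each channel less saturated (PS5 ~0.46 GB/s
        # per channel vs XSX ~0.60 GB/s), which is the "waste" the post describes.
        print(f"{name}: {capacity} GB raw, {quoted} GB/s quoted, "
              f"{quoted / channels:.2f} GB/s per channel")
```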
===================================

The PS5's controller is DRAM-less, which is also critical, because it saves money.

Sony uses large data blocks to reduce the total number of block addresses, shrinking the LUT mentioned above from the usual GB level down to the KB level, so it fits in a small SRAM cache. That saves the cost of 1GB of DRAM. Don't underestimate that 1GB of DRAM; all these little savings add up in the total cost, and it is worth a dozen or so dollars.

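A back-of-the-envelope illustration of why coarser mapping shrinks the LUT; the 4 KiB page size, 128 MiB block size and 4-byte entry size are illustrative assumptions, not Sony's actual figures:

```python
# Rough illustration of how mapping granularity drives lookup-table (LUT) size.
DRIVE_BYTES = 825_000_000_000          # ~825 GB usable capacity

def lut_size(mapping_granularity_bytes, entry_bytes=4):
    """Size of a flat logical-to-physical table at a given mapping granularity."""
    entries = DRIVE_BYTES // mapping_granularity_bytes
    return entries * entry_bytes

pc_style = lut_size(4 * 1024)            # typical PC SSD: map every 4 KiB page
coarse   = lut_size(128 * 1024 * 1024)   # hypothetical coarse blocks

print(f"4 KiB granularity  : ~{pc_style / 1e6:.0f} MB of table -> needs a DRAM cache")
print(f"128 MiB granularity: ~{coarse / 1e3:.1f} KB of table -> fits in a small SRAM")
```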
Memory access is then simplified to address + offset. The map you use in daily life only shows Building XX, not Room XX on Floor XX of Building XX, right? In a traditional PC SSD the address table is like a map accurate to the individual floor, while the PS5's address table is like a map accurate only to the block. Obviously the PS5's map is much smaller and easier to fit in your pocket.

Then the rest is the offset. For an ordinary PC it is "Floor XX, Room XX"; for the PS5 it is "Building XX, Floor XX, Room XX"... but this has little effect on read speed.

Writes you can think of as a fire drill. On a PC, a write is a fire drill by the residents of "Building XX in Block XX"; on the PS5 it is a fire drill by everyone in "Block XX". In other words, the latter is more laborious, but the impact on the console use case is not large.

Therefore, a design like the PS5's is not only not more expensive than the XSX's, it is likely even a bit cheaper than the XSX SSD, because the smaller drive capacity saves the cost of four flash chips. Four 64GB flash chips are serious money, far more than a slightly higher-end SSD controller...

====================================

How should we view this? Quite simply, Sony's design goal was to save money, not to achieve higher performance than the competitor; the high performance just came along for the ride. As mentioned before, they saved 1/4 of the capacity and a good chunk of the cost; if they didn't add something back, wouldn't the players roast them? So they made this thing. A few more channels does cost a bit more, but not much, and the controller is not the main cost anyway. That's all.

The bandwidth the graphics card needs for page swapping, at current GDDR6 bandwidth levels, cannot produce a scenario that an SSD above 2GB/s fails to satisfy unless one is deliberately engineered. In a normal game scene, most things should already be in video memory and RAM; only in extreme cases (such as a spaceship performing a jump) is massive swapping required. But even with massive swapping, a 5.5GB/s SSD cannot open a gap over a 2.4GB/s SSD, because the entire memory space usable by the game is only about 13GB, a large portion of which stays resident and never needs to be swapped. In practice you would swap a few GB at most. You cannot design a scene where the player jumps back and forth just to soak up that 5.5GB/s. Do you want the player to trigger photosensitive epilepsy? ((

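Rough timing for that "massive swap" scenario; the 4 GB working-set figure is an assumption for illustration (the post only says a few GB at most out of a ~13 GB game-visible pool):

```python
# Back-of-the-envelope timing for a large streaming swap on each console's SSD.
swap_gb = 4.0  # assumed working set to replace; the post's estimate is "a few GB"

for name, gbps in [("PS5 SSD", 5.5), ("XSX SSD", 2.4)]:
    print(f"{name}: {swap_gb / gbps:.2f} s to stream {swap_gb:.0f} GB "
          f"(raw rate, before hardware decompression)")
```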
In addition, Virtual Geometry and Virtual Texture are essentially assets in an image-like format (VG is a geometric model saved in an image format), and there are many very efficient compression algorithms for them that support direct addressing. This is also why both the PS5 and the XSX added a hardware decompression block: it not only frees the CPU, it also enables ultra-low-latency asset streaming. What does that mean? Normally, when you use WinRAR to decompress the 1024 pictures in an archive, the work is done by the CPU. Doing it in hardware gives you not only high bandwidth but also low latency (and the latency is bounded and predictable, which is very important for stable frame times!), and you can directly access the specific picture you want without decompressing the whole archive. So Sony says this is revolutionary for console game development because, rounding up, this is virtual memory for games, friends: directly addressable, with predictable latency, which means a developer with a brain can access almost any asset they need at almost any time, and can issue plenty of accesses in advance, largely (though not entirely) hiding the latency.

=====================================

Next, to answer the original question's specific points:

Will the relatively modest improvement in memory this generation become a bottleneck?

Yes, it will. The problem will be more obvious in ray-traced scenes. Ray tracing, 4K, 60-120fps, denoising: which of these is not a huge consumer of memory bandwidth? They are all bandwidth destroyers. 448GB/s and 560GB/s have, to a certain extent, already set the ceiling on how much memory next-generation games can touch. The gain from the SSD is only capacity, not bandwidth.

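Putting those bandwidth ceilings in per-frame terms (the frame rates are example targets, and real sustained utilization sits well below these peaks):

```python
# How much memory traffic fits into one frame at each console's peak bandwidth.
for name, gbps in [("PS5", 448), ("XSX (fast pool)", 560)]:
    for fps in (30, 60, 120):
        print(f"{name}: at most ~{gbps / fps:.1f} GB of memory traffic "
              f"per frame at {fps} fps")
```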
We know that streaming technologies such as megatexture and virtual texturing can save a lot of memory, but the bottleneck is disk I/O. So can the PS5's disk I/O of up to 8-9GB/s solve this bottleneck?

That depends on how you define the bottleneck. The original MegaTexture and today's virtual texture streaming are essentially one dish served two ways. The bottleneck could be said to be disk I/O, but never mind 8-9GB/s; even a 1GB/s drive is enough, and even with a SATA 3 SSD you won't see anything fail to load in most games. The real bottleneck of virtual texture streaming is random read speed. This is why games with linear levels can use virtual texturing casually and you never see loading hitches: linear levels rarely read randomly. Open-world games are different; a large number of random reads will instantly drag the whole game down. On top of that, both PC and PS4 use the CPU to decompress what is read, so heavy reading also drags down the CPU and causes unstable frame times.

Of course, the brute-force reasoning above applies to PC at 1440p and below; at 4K it scales up appropriately, by more than 2x. But the next generation starts at 2.4GB/s with hardware decompression, so it can be said to have solved the problem perfectly, at a certain cost.

Can such a high-speed SSD help improve picture quality, and thereby make up for the roughly 15% floating-point deficit against Microsoft's XSX?

No. Picture-quality improvements depend on the bandwidth of the entire data path. On the surface the PS5 is only "2 TFLOPS lower" in floating-point performance, but if you look at the essence:
  1. The memory bandwidth is 25% lower, and the difference is 112GB/s, a number far scarier than 9GB/s. Of course, I am just scaring you (runs away).
  2. WGP-level caches such as LDS and L0 are 44% smaller, L2 is 25% smaller, and L1 is uncertain but should also be smaller (otherwise the XSX would struggle to feed its CUs). For GPU workloads, cache size matters far more than frequency. The RTX 2070 Super has 40 SMs at 1.77GHz, yet it still comfortably beats the 40-CU, 1.9GHz 5700 XT. (Of course, this is just an analogy to scare you; in reality there are many factors behind Turing's higher per-TFLOP efficiency than RDNA1, one of the main ones being that Nvidia's data path is more efficient, with a more generous cache allocation per stream processor than the RDNA1 card.) The lazy version: to a certain extent, the performance gained from more cache capacity is much greater than from more frequency, because a faster cache at the same level is at most maybe twice as fast as someone else's, whereas every miss that has to go out to external memory costs more than ten times a normal cache access. (See the rough TFLOPS arithmetic after this list.)
  3. Now you should be able to understand why the UE5 demo could only run at 1440p 30fps. Whether it is Nanite or Lumen, what it actually eats is video memory bandwidth (rather than video memory capacity), or rather, the GPU's ability to make use of video memory.
Overall, the XSX's data-path bandwidth from memory to the CUs is considerably higher (between 25-50%, depending on the scenario), but even so I do not think the XSX could run the UE5 demo at 4K 60fps. The essence of this demo is to show "look what I can make without heavy optimization", not "look how much better my thing runs than everyone else's". Lumen in particular: Lumen is actually SSGI + coarse-mode reflections + sparse-voxel-traced GI, and the coarse reflection and voxel-tracing parts are not implemented on the ray-tracing acceleration hardware. I completely fail to understand why; maybe the API is not ready, or maybe Epic is just being contrarian, but DXR could accelerate the voxel tracing. Anyway, it is baffling. Wouldn't it be better to save that performance and raise the resolution?

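As a sanity check on the 2070 Super / 5700 XT comparison in point 2 above, here is the raw peak-FP32 arithmetic; the 64 lanes per SM/CU and 2 ops per FMA are standard figures, and the clocks are the ones quoted in the post rather than measured boost behaviour:

```python
# Peak-FP32 figures behind the "similar TFLOPS, different results" point above.
def peak_tflops(units, lanes_per_unit, clock_ghz):
    # units = SMs (Turing) or CUs (RDNA1); 2 FLOPs per fused multiply-add
    return units * lanes_per_unit * 2 * clock_ghz / 1000.0

print(f"RTX 2070 Super (40 SM @ 1.77 GHz): {peak_tflops(40, 64, 1.77):.2f} TFLOPS")
print(f"RX 5700 XT     (40 CU @ 1.90 GHz): {peak_tflops(40, 64, 1.90):.2f} TFLOPS")
# Nearly identical paper numbers, yet delivered performance tracks the data path
# (caches, bandwidth) as much as the raw FLOPS -- which is the post's point.
```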
Another example: Nanite does not use Mesh Shaders. Of course this was to be expected, since the PS5 has no such thing. The XSX's geometry performance is much higher than a 2080 Ti's under good optimization. Using mesh shaders, a 2080 Ti can actually render 50M+ triangles and still stay above 30fps, while Nanite in this demo actually renders around 20M triangles at 30fps. Which is a bit embarrassing:
v2-1edbff2f8f7444d591c96ac3981d05e4_720w.jpg

Mesh Shader demo running on a 2080 Ti: 45fps, 48M actually drawn triangles, 4K resolution

If that were reduced to 20M drawn triangles like the UE5 demo, or dropped to 1440p, it would free up a lot of resources for pixel shading and improve performance; rounding up, >60fps would be no problem (

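For reference, the triangle-throughput arithmetic implied by those numbers; both the 48M @ 45fps and the 20M @ 30fps figures are the post's own, not independent measurements:

```python
# Triangle throughput implied by the two demos compared above.
cases = {
    "2080 Ti mesh-shader demo": (48e6, 45),   # drawn triangles per frame, fps
    "UE5 Nanite demo (est.)":   (20e6, 30),
}

for name, (tris, fps) in cases.items():
    print(f"{name}: {tris * fps / 1e9:.2f} billion drawn triangles/second")
```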
As I said before, this once again confirms that the PS5 does not support Mesh Shaders. Its geometry performance is still limited by a Primitive Shader path with insufficient parallelism.

So don't set your expectations too high for something whose main design goal is saving money; that is good for neither fans nor the manufacturer... Although if in the end it really is $399, won't that be hard to argue with...
 
Last edited:

longdi

Banned
Oh sorry, PS5 is 12 channel :messenger_grinning_sweat:

I mean even expandable storage is in favor of Series X.

This is from an Epic engineer. I wonder what he meant below; IIRC Panajev linked a tweet about the PS5's super geometry engine, but this Epic guy is saying the opposite

Another example: Nanite does not use Mesh Shaders. Of course this was to be expected, since the PS5 has no such thing. The XSX's geometry performance is much higher than a 2080 Ti's under good optimization

As I said before, this once again confirms that the PS5 does not support Mesh Shaders. Its geometry performance is still limited by a Primitive Shader path with insufficient parallelism.

So don't set your expectations too high for something whose main design goal is saving money; that is good for neither fans nor the manufacturer... Although if in the end it really is $399, won't that be hard to argue with...

vs

 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Regarding the 16 channel vs 4 channel thing

Not sure it helps the discussion to open up yet another sub-thread by taking random blog posts or equivalent with clear unbiased gems such as "Therefore, a design like the PS5's is not only not more expensive than the XSX's, it is likely even a bit cheaper than the XSX SSD", taking having fewer CUs and milking it by quoting everything inside a CU and counting it up as if it were an additional thing missing ("oh wow... not only do you have fewer CUs, you also have less L0 cache, which is inside the CU and there to support the CU's processing units"), hyping up unconfirmed concurrent integer and floating-point vector execution (people crucify Cerny for talking about FP16 correctly and are now running around with INT4 numbers, of course)...

Some people in this thread are honestly trying to understand how XVA works and all... others are using it as the n-th thread to try to make the difference between XSX and PS5 appear bigger than it is: console warring dressed nicely. More random quotes, mistranslation, stretching and stitching of other quotes, etc... We are not getting insights into an interesting architecture like XSX when we just console war and try to one-up the PS5 and boast.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Oh sorry, PS5 is 12 channel :messenger_grinning_sweat:

I mean even expandable storage is in favor of Series X.

Debatable as we do not know the cost of each and many prefer non proprietary storage, but you completely dropped the pretence to learn what XVA is and are just straight laundry list console warring now?
 
Last edited:

longdi

Banned
Not sure it helps the discussion to open up yet another sub-thread by taking random blog posts or equivalent with clear unbiased gems such as "Therefore, a design like the PS5's is not only not more expensive than the XSX's, it is likely even a bit cheaper than the XSX SSD", taking having fewer CUs and milking it by quoting everything inside a CU and counting it up as if it were an additional thing missing ("oh wow... not only do you have fewer CUs, you also have less L0 cache, which is inside the CU and there to support the CU's processing units"), hyping up unconfirmed concurrent integer and floating-point vector execution (people crucify Cerny for talking about FP16 correctly and are now running around with INT4 numbers, of course)...

Some people in this thread are honestly trying to understand how XVA works and all... others are using it as the n-th thread to try to make the difference between XSX and PS5 appear bigger than it is: console warring dressed nicely. More random quotes, mistranslation, stretching and stitching of other quotes, etc... We are not getting insights into an interesting architecture like XSX when we just console war and try to one-up the PS5 and boast.

Oh, I thought that wall of text was from the Epic China team?
I mean, if it is a random guy then yes, we should ignore it.
 

Bernkastel

Ask me about my fanboy energy!
Not sure it helps the discussion to open up yet another sub-thread by taking random blog posts or equivalent with clear unbiased gems such as "Therefore, a design like the PS5's is not only not more expensive than the XSX's, it is likely even a bit cheaper than the XSX SSD", taking having fewer CUs and milking it by quoting everything inside a CU and counting it up as if it were an additional thing missing ("oh wow... not only do you have fewer CUs, you also have less L0 cache, which is inside the CU and there to support the CU's processing units"), hyping up unconfirmed concurrent integer and floating-point vector execution (people crucify Cerny for talking about FP16 correctly and are now running around with INT4 numbers, of course)...

Some people in this thread are honestly trying to understand how XVA works and all... others are using it as the n-th thread to try to make the difference between XSX and PS5 appear bigger than it is: console warring dressed nicely. More random quotes, mistranslation, stretching and stitching of other quotes, etc... We are not getting insights into an interesting architecture like XSX when we just try to one-up the PS5 and boast.
Therefore, a design like the PS5's is not only not more expensive than the XSX's, it is likely even a bit cheaper than the XSX SSD
That's why I also provided a Microsoft Translate version. In my experience, Microsoft Translate is better for East Asian languages like Chinese/Japanese, but people just want Google Translate, so I provided both. Ultimately I also gave the link so you can use whatever machine translator you prefer.
More random quotes, mistranslation, stretching and stitching of other quotes, etc... We are not getting insights into an interesting architecture like XSX when we just try to one-up the PS5 and boast.
Zhihu is the Chinese Quora, not some random forum. I literally posted his exact quote, no "stretching and stitching of other quotes".
 

Panajev2001a

GAF's Pleasant Genius
Zhihu is the Chinese Quora, not some random forum. I literally posted his exact quote, no "stretching and stitching of other quotes".

Yep, this time it is not console warring by directly taking and stitching other quotes, it is a Chinese translation of someone doing that... more indirect :).

Seriously though, if this thread is about exploring how good XVA is, I am not sure this is the best way to go about it: scouring the Internet for quotes that appear to say what you want to hear about XSX superiority over PS5.
 
Last edited:
Yep, this time it is not console warring by directly taking and stitching other quotes, it is a Chinese translation of someone doing that... more indirect :).

Seriously though, if this thread is about exploring how good XVA is, I am not sure this is the best way to go about it: scouring the Internet for quotes that appear to say what you want to hear about XSX superiority over PS5.

Or just commentary. It could just be comments about what people know/heard/learned. SOME people are taking that comparison as A QUEST for "superiority over PS5" which then they feel the need to constantly comment on.

Again, you appear to have a daylight clear console preference ,which you have exposed on multiple occasions in this thread. Not exactly sure why you remain in this thread other than to police the various examinations of the tech.

I dont really think your interest is learning about the XVA.

Moreover your comments are keenly focused on poopooing any examinations that make it seem like XSX demonstrates any gains over what we know about the PS5.

You arent alone in doing that either, but to pretend as if you are "purely interested in the science" isn't true. You are goaltending for sure.
 

Panajev2001a

GAF's Pleasant Genius
Or just commentary. It could just be comments about what people know/heard/learned. SOME people are taking that comparison as A QUEST for "superiority over PS5" which then they feel the need to constantly comment on.

Again, you appear to have a daylight clear console preference ,which you have exposed on multiple occasions in this thread. Not exactly sure why you remain in this thread other than to police the various examinations of the tech.

I dont really think your interest is learning about the XVA.

Moreover your comments are keenly focused on poopooing any examinations that make it seem like XSX demonstrates any gains over what we know about the PS5.

You arent alone in doing that either, but to pretend as if you are "purely interested in the science" isn't true. You are goaltending for sure.

You are free to believe in whatever you want to believe. Trying to understand how something works and commenting on console warrish BS masqueraded as information seeking/presentation can both coexist.

I do not know what to tell you other than it feels like you and some others want an Xbox.com-centred discussion meant to generate hype, diss other consoles, and celebrate XSX more than discuss it, tiptoeing around anyone who does not take this thread as that or does not appear to (not sure what trying to label Fafalada as a Sony dev was meant to achieve). I find it quite hypocritical/projecting on your part to take this stance, to be fair.

You have proof that something is better than something else? Awesome... does the proof have some backing and does it make sense or is it an over excited fanboy theorising pro PS5 or pro XSX? That makes for a big difference actually.
 
Last edited:
You are free to believe in whatever you want to believe. Trying to understand how something works and commenting on console warrish BS masqueraded as information seeking/presentation can both coexist.

I do not know what to tell you more than it seems like you want an Xbox.com centred discussion meant to generate hype, diss on other consoles, and celebrate XSX more than discussing it.

You have proof that something is better than something else? Awesome... does the proof have some backing and does it make sense or is it an over excited fanboy theorising pro PS5 or pro XSX? That makes for a big difference actually.

If you think that about my position, I would respectfully ask you to find evidence of me doing that anywhere in my commentary. Anywhere.

I have all the time in the world to wait for you to prove that. Go.

My answer to your questions is maybe. Sometimes there is "proof" meaning some statement by a person that hasn't been dismissed by those who prefer Sony like Chris Grannell or even Phil Spencer.

It appears that all information is simply a narrative rather than factual statement and if someone posts something counter to YOUR narrative then its just "console warrior BS."

But never you right?
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
If you think that, about my position I would respectfully ask you to find evidence of me doing that anywhere in my commentary. Anywhere.

The post I was replying to and the other threads trying to police what I am saying, trying to call me out/burn me, seems like a good enough example, but I am sure you have way way more concrete evidence of the contrary and others behaviour.
Or you can keep going on the personal attack for some reason....

It appears that all information is simply a narrative rather than factual statement and if someone posts something counter to YOUR narrative then its just "console warrior BS."

But never you right?

Maybe sometimes, I am not perfect, but building a strawman does not prove the point, just the opposite.
 
Last edited:
The post I was replying to and the other threads trying to police what I am saying, trying to call me out/burn me, seems like a good enough example, but I am sure you have way way more concrete evidence of the contrary and others behaviour.
Or you can keep going on the personal attack for some reason....



Maybe sometimes, I am not perfect, but building a strawman does not prove the point, just the opposite.

I am certainly not attacking you because I'm not making extraordinary or false claims.

What strawman? That in a thread exploring the potential implementation of Xbox technology you have attempted to refute or re-interpret the statements of MS officials and official documents as if every other reading of that documentation was either:

A) simply meant to support a superiority narrative

B) purely nonsensical even as a self-contained statement?

I don't think you and I agree on what a strawman is.

Suffice it to say that I'm looking forward to June and July for clarification, and I thank the many people in this thread who have spent their time seeking and attempting to piece together what are altogether murky statements.

Thanks for your time as well.
 
MS has DLSS for AMD... implemented in XSX HW. So... they are fully gunning for RT implementation this Generation. Probably to show in Halo infinite first.

You mean AMD's RIS (Radeon Image Sharpening)? There's no confirmation that either next-gen console supports this - yet. And I would have thought it would have been perfect for the Series X Minecraft demo.
 
Last edited:

martino

Member
You mean AMD's RIS (Radeon Image Sharpening)? There's no confirmation that either next-gen console supports this - yet. And I would have thought it would have been perfect for the Series X Minecraft demo.

Isn't RIS mostly sharpening? Is it doing image reconstruction / upscaling?
I haven't really looked into it.
 
Last edited:

Tripolygon

Banned
Regarding the 16 channel vs 4 channel thing
This is BS.
The XSX has only 4 channels, but each channel can stack multiple chips, so 4 x 4 x 64GB = 1024GB, with only 4X speed. So if you look at the raw speeds, 5.5GB/s vs 2.4GB/s is roughly a 2.3x gap. Why not 3x? Because a single 64GB flash chip cannot saturate a channel on its own; stacking multiple chips per channel uses the channel bandwidth more completely.

This is also why Phison's 8-channel E16 controller can reach 5GB/s reads while the PS5's 12-channel controller only reaches 5.5GB/s: the incomplete utilization of channel bandwidth by a single flash chip wastes a certain amount of it.
The kind of bullshit people make up is outstanding.

So you think

Solution A will use 16 64GB nand modules stacked on top each other just to reach 1TB at 2.4GB/s

Solution B will be using 12 of the same 64GB nand modules and achieve 825GB at 5.5GB/s

Solution A will be using more nand modules, so more expensive than B, while not providing any tangible benefit. And somehow your conclusion is that solution A is the better design because of supposed waste? What waste?
 
Last edited:

geordiemp

Member
This is BS.

The kind of bullshit people make up is outstanding.

So you think

Solution A will use 16 64GB nand modules stacked on top each other just to reach 1TB at 2.4GB/s

Solution B will be using 12 of the same 64GB nand modules and achieve 825GB at 5.5GB/s

Solution A will be using more nand modules, so more expensive than B, while not providing any tangible benefit. And somehow your conclusion is that solution A is the better design because of supposed waste? What waste?

I did not know MisterXmedia was now Chinese, yes thats funny.
 

martino

Member
If I understand, he says lanes are underutilized if they are PCIe 4.0, but how is that relevant or bad?
Next year PCIe 5.0 will be here with 32 GB/s for 4 lanes... can you imagine how long that will be underutilized... is that a problem? (even more so when DDR5 will be slower than that at first)
In the end the SSD will still be 5.5 GB/s
 

John254

Banned
Oh sorry, PS5 is 12 channel :messenger_grinning_sweat:

I mean even expandable storage is in favor of Series X.

The is from Epic engineer, I wonder what he meant below, IIRC panjev link a tweet about PS5 super geometry engine, but this Epic guy is saying the opposite



vs

What is the source for this Epic guy saying that PS5 doesn't support mesh shaders?
 

Ar¢tos

Member
Debatable as we do not know the cost of each and many prefer non proprietary storage, but you completely dropped the pretence to learn what XVA is and are just straight laundry list console warring now?
If you want to learn and talk tech I suggest Beyond3D forums (the negative there is that everything is literally tech only, after reading a few threads you start missing humans).
Since GAF opened the doors to free emails it kinda turned into Gamefaqs lite (inevitable I guess, maybe the rules should have become stricter before this change).
 

Panajev2001a

GAF's Pleasant Genius
I am certainly not attacking you because I'm not making extraordinary or false claims.

What strawman? That in a thread exploring the potential implementation of Xbox technology you have attempted to refute or re-interpret the statements of MS officials and official documents as if every other reading of that documentation was either:

A) simply meant to support a superiority narrative

B) purely nonsensical even as a self-contained statement?

I don't think you and I agree on what a strawman is.

Making up a fake argument that I did not make and twisting what I say to fit it in. Claims were made, some were not supported with evidence and were debated and the evidence drummed up sometimes fell apart on its own. If I exaggerated, was a bit too harsh, or had a condescending tone I apologise for that, but I do resent the general adversarial tone.

Several of your posts here have been mostly emotes/reactions, and several saying how disappointed you were in a poster or accusing people of policing the thread (any sense of irony seemingly lost in the process), in a way that I feel is either attacking them directly or just making one feel unwelcome.
 
Last edited:

martino

Member
Geometry engine isn't a PS5 feature. Its an AMD feature. All recent AMD GPUs have Geometry Engines.
Watch Road to PS5 again.
It's clear:
Cerny said he took 2 features from RDNA 2, then talked about them:
geometry engines (mesh shaders)
then RT.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
I mean Road to PS5...
2 RDNA features in PS5 confirmed: geometry engine (mesh shaders) and RT.
I don't see Sony lying about that.

I think what is causing confusion is statements such as the ex-PS5 Principal Dev boasting about the PS5 geometry engine's programmability, and Cerny making a point of essentially saying "just because you will see some of the features I am presenting today in future RDNA desktop GPUs (thus not on other consoles; maybe variations, but not the exact thing, so the argument is 'technically correct' more than absolute) it does not mean we did not co-develop them together with AMD for PS5", and then IIRC he went off and started talking about the Geometry Engine (which he separated from the rest).

It could be that, like the SFS and VRS extensions which seem to be unique to XSX, PS5 might have Geometry Engine extensions that will appear in future AMD GPUs but not in competitor consoles in the near future.
 

martino

Member
I think what is causing confusion is statements such as the ex-PS5 Principal Dev boasting about the PS5 geometry engine's programmability, and Cerny making a point of essentially saying "just because you will see some of the features I am presenting today in future RDNA desktop GPUs (thus not on other consoles; maybe variations, but not the exact thing, so the argument is 'technically correct' more than absolute) it does not mean we did not co-develop them together with AMD for PS5", and then IIRC he went off and started talking about the Geometry Engine (which he separated from the rest).

It could be that, like the SFS and VRS extensions which seem to be unique to XSX, PS5 might have Geometry Engine extensions that will appear in future AMD GPUs but not in competitor consoles in the near future.
The thing is, RDNA 2 is still a future desktop GPU today.
The rest is extrapolation on your end because you want it to be special on PS5.
It's not implied at all, but it's also not outside the realm of possibility. It's just not said.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
the thing is rdna2 is still future desktop gpu today.
the rest is extrapolation on your end.

Oh indeed it is purely extrapolation (educated guessing more than random thoughts) on my end , based on past history, AMD description of their semi custom arrangements, Cerny’s words (paraphrased, but he did make that point), and an assumption that these kind of presentations are quite deliberate in how they present their topics and which topics they decide to highlight... I was also careful not to talk about RDNA generations but GPU’s for a reason :).

I do not think it makes it nonsensical though, I think it is fair to think these semi-custom GPU’s contain variations and customisations each that the other console does not have but will be out (or discarded, AMD just reserves the right to use them in its own products) in future AMD GPU’s and Xbox One, PS4, XSX, and PS4 Pro do support inferring such points I think.
 
Last edited:

martino

Member
Oh indeed it is purely extrapolation (educated guessing more than random thoughts) on my end , based on past history, AMD description of their semi custom arrangements, Cerny’s words (paraphrased, but he did make that point), and an assumption that these kind of presentations are quite deliberate in how they present their topics and which topics they decide to highlight... I was also careful not to talk about RDNA generations but GPU’s for a reason :).

I do not think it makes it nonsensical though, I think it is fair to think these semi-custom GPU’s contain variations and customisations each that the other console does not have but will be out (or discarded, AMD just reserves the right to use them in its own products) in future AMD GPU’s and Xbox One, PS4, XSX, and PS4 Pro do support inferring such points I think.

Why I don't agree here:
because of how it's presented.
He quickly previewed 2 features that he introduced as being part of RDNA 2.
What is presented is no different from how RT and mesh shaders are presented elsewhere.
Yes, he said there are custom features like "cache scrubbers", but IMO not those two.
 

Panajev2001a

GAF's Pleasant Genius
Why I don't agree here:
because of how it's presented.
He quickly previewed 2 features that he introduced as being part of RDNA 2.
What is presented is no different from how RT and mesh shaders are presented elsewhere.
Yes, he said there are custom features like "cache scrubbers", but IMO not those two.

I can understand your point, but possibly (the "technically right" part) they have been slightly customised and thus, while not presenting any massive performance advantage, they may still be worth calling out.

Beside the cache scrubbers and the I/O setup there are likely more customisations to the GPU in line with what we got for PS4 for example (increased number of ACE’s and compute queues, volatile bit, additional cache bypassing bus, etc...).
 
Last edited:

THE:MILKMAN

Member
I just watched Austin Evans XSX hands-on video again and see that he did check out the SSD on the SoC board. I completely missed this!



It is completely encased in a heat sink but looks more like a M.2 2230/2242 form factor size wise? Not sure if this tells us much new but it is different to the 12 separate flash chips+Flash controller on PS5?
 

Redlight

Member
Making up a fake argument that I did not make and twisting what I say to fit it in. Claims were made, some were not supported with evidence and were debated and the evidence drummed up sometimes fell apart on its own. If I exaggerated, was a bit too harsh, or had a condescending tone I apologise for that, but I do resent the general adversarial tone.

Several of your posts here have been mostly emotes/reactions, and several saying how disappointed you were in a poster or accusing people of policing the thread (any sense of irony seemingly lost in the process), in a way that I feel is either attacking them directly or just making one feel unwelcome.
I think people sometimes react negatively to your comments because you represent yourself as an 'honest broker' but your console affiliation is very, very clear. You're very positive about any PS5 news, but generally not receptive to positive Series X features or speculation. That's obviously fine (and not unusual here), but you take offence if anyone takes the same stance on the opposite side. I think that can be a little irksome.
 

Thirty7ven

Banned
I feel just saying 'tflops advantage' is hand waving.
Where the Series X has most of the advantages you will want in a console match-up, including BC and cloud integration.
Something like 16 channels of I/O feels *really* small in comparison.

What it feels like is that you want really hard for something to be true. And even when a developer wastes his time telling you otherwise, you decide to move on to the next fantasy.

BC? Cloud integration? You know people will end up choosing the console based on game experiences. Regardless of how hard you fight for it.
 
Last edited:
D

Deleted member 775630

Unconfirmed Member
What it feels like is that you want really hard for something to be true. And even when a developer wastes his time telling you otherwise, you decide to move on to the next fantasy.

BC? Cloud integration? You know people will end up choosing the console based on game experiences. Regardless of how hard you fight for it.
True, and for third party games that experience will be better on the XSX because it's more powerful
 

Panajev2001a

GAF's Pleasant Genius
I think people sometimes react negatively to your comments because you represent yourself as an 'honest broker' but your console affiliation is very, very clear. You're very positive about any PS5 news, but generally not receptive to positive Series X features or speculation. That's obviously fine (and not unusual here), but you take offence if anyone takes the same stance on the opposite side. I think that can be a little irksome.

I am not actually spending words to make myself appear to be or pretend to be THE moral arbiter without any preferences, but I try to stay reasonably objective. I am receptive to some features and some capabilities because of what they are, not who offers it.
It is not the stance other people take I have any issue with, but more a post truth scenario where you can just make claims left, right, and centre to piss other people off or to win your console war or both without offering any backing or ridiculous backing and then going at times on the offensive when people discuss it with you.

I was very happy about UHD Blu-Ray offered by Xbox One S and Xbox One X, but not about PS4 Pro’s choice of basic Blu-Ray or lack of Dolby Vision support for HDR in games. I like the fully virtualised approach MS innovated the Xbox One CPU and GPU with (then again I was disappointed Sony dropped it after PS3, as far as I know)... I am very positive about SF and SFS, they make virtual texturing / texture streaming easier to get it right to more developers (that is what adding HW support for the tough parts of an algorithm does, you are free to bypass it if restrictive and free to use it and not having to re-write it). Easier to use than PRT sure, 2-3x performance improvements over PRT? That was the claim I had problems with...
Not sure why seemingly a test of loyalty is supposed to take place in these kind of discussions...

People disagreeing with me is fine, I will be wrong, I will be right, and I will be neither... if people want to take offence because I do not like just eating some arguments up, there is not much I can do.
Some people are bound to take what I say, what I disagree with and what I agree with, under their own console warrior lens, that I cannot do much beyond trying to be more careful about how I say things a bit more perhaps.
 
Last edited:

Thirty7ven

Banned
True, and for third party games that experience will be better on the XSX because it's more powerful

On paper, that’s the promise, if the definition of better for you is slightly better visuals or less dropped frames. There’s no arguing that until the games are shown.

A certain group spent the whole generation saying multiplats were better on Xbox One because of the gamepad though, so the definition of better is fluid enough.
 

martino

Member
Beside the cache scrubbers and the I/O setup there are likely more customisations to the GPU in line with what we got for PS4 for example (increased number of ACE's and compute queues, volatile bit, additional cache bypassing bus, etc...).

this seems more likely
 

Fafalada

Fafracer forever
112 GB VRAM bandwidth advantage
I spoke to this in my post, bandwidth is part of GPU perf. - take it away and you lose most of that added gpu throughput.
ROPs fall under the same thing - basically think of it like this - PS4/XB1 had 2x the bandwidth, 2x the ROPs, 50% more compute, 8x the async-shading - but people only really ever talked about compute delta - ultimately all of these 'extra items' are cogs depending on one-another to get a meaningfully faster GPU.

Mesh shaders are an API construct, not a hardware feature. So far only Cerny spoke about related underlying hw (we're missing real Xbox details here) but it seems likely it's a common RDNA feature to both.
HDR hack for bc is a SW service, and 100gb/virtual memory is a simple way to explain benefits of SSD I/O, which isn't in favor of XSX anyway.

Packed rapid integer math is a valid question - it's another 'likely' common RNDA element, but it'd be nice to know for sure.
Though it doesn't run concurrent with floating point - just like RT acceleration doesn't. Goes for both consoles. There are no magical TFlop inflation scenarios in RDNA.

I know you used to be a Sony dev
That was many years/platforms ago - been all over the map(figuratively and literally / geographically) since those days.
 

Fafalada

Fafracer forever
I feel just saying 'tflops advantage' is hand waving.
Sort of - but we've used it to hand-wave GPU delta that had an even longer 'list' of differences for the past 7 years. At this point, expanding the 'GPU is faster' into a list of 'stuff' comes across like pretending things have changed 'just because', even though architecturally the differences have never been smaller in history of consoles.

Where the Series X has most of the advantages you will want in a console match-up, including BC and cloud integration.
While I think MS has done a fair share of good things with software-side integration in last 2 years, 'Cloud integration' is one of those buzzwords that's about on-par with 'blockchain' in terms of end-user relevance or desire-ability. Especially in a game console.
But as far as 'match-ups' go - I guess I'm different from most users since buying any console is far from a done deal for me at this point, so it's really not a relative-comparison question... yet.
I/O improvements (on both) are the only system selling thing that has been announced to date (and I say this as someone that's been using 3-4GB/s NVME for several years, most of it lost to PC software-stack dead-end -_-), but I'll need more than that to sell me on one.

I mean even expandable storage is in favor of Series X.
That'll take a lot of explaining given that in 40 years of console history 'proprietary storage' was never considered a win by consumers. Unless we're now going back in time claiming Sony was right all along with the Vita...
 
Last edited:

oldergamer

Member
This is BS.

The kind of bullshit people make up is outstanding.

So you think

Solution A will use 16 64GB nand modules stacked on top each other just to reach 1TB at 2.4GB/s

Solution B will be using 12 of the same 64GB nand modules and achieve 825GB at 5.5GB/s

Solution A will be using more nand modules, so more expensive than B, while not providing any tangible benefit. And somehow your conclusion is that solution A is the better design because of supposed waste? What waste?
Hold on a second, we discussed this before; what you just wrote is missing a few points.

Solution B wouldn't be using the same 64GB modules as Solution A. It can't add up to 825GB: 64x12 = 768! They are possibly using a variant of memory used in camera hardware (which I could see Sony doing considering they make cameras) where one GB is counted as 1000 instead of 1024 (I forget the name of the memory type), and that would give you a number closer to 825GB.

You also aren't factoring in whether either is using faster or slower memory modules (that drives up the cost); this is outside of the lane/channel bandwidth. They could be using anything from 800MT to 1600MT. You also aren't factoring in the limitations of the technology, and that is where "waste" comes in. It's not possible for any single memory module to use all the available bandwidth if it's on a single lane/channel. That introduces inefficiency that could play a factor in real-world vs on-paper performance numbers.
 
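One way to reconcile the 768 vs 825 figures being debated here is simply binary GiB dies versus decimal GB marketing; whether that is actually how Sony counts it is not confirmed in the thread:

```python
# Unit arithmetic only: 12 dies of 64 GiB expressed in decimal GB.
dies = 12
die_gib = 64                       # 64 GiB per NAND die (binary units)
total_bytes = dies * die_gib * 2**30

print(f"{dies} x {die_gib} GiB = {total_bytes / 10**9:.1f} GB (decimal)")  # ~824.6 GB
```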
Last edited:

martino

Member
It's not possible for any single memory module to use all the available bandwidth if it's on a single lane/channel. That introduces inefficiency that could play a factor in real-world vs on-paper performance numbers.
How so?
The waste is on the lane, but the modules can perform in parallel without any bottleneck because of that.
The complex I/O did it the "Hammond way" in some places to achieve its objective, but I don't see how it will not be efficient at it.
 
Last edited:

oldergamer

Member
How so?
The waste is on the lane, but the modules can perform in parallel without any bottleneck because of that.
The complex I/O did it the "Hammond way" in some places to achieve its objective, but I don't see how it will not be efficient at it.
Latency, I suspect. There's more than one factor that determines SSD performance.

One article: https://www.tweaktown.com/news/7010...ss-pcie-4-nvme-up-3-7gb-sec-speeds/index.html

It came from an interview (I can't find the link at the moment) with a Phison employee, where they gave an example of how an 8-channel NVMe drive could reach 5GB/s compared to a 12-channel drive at 5.5GB/s. They were stacking chips to use more of the available bandwidth with the 8-channel variant.
 

martino

Member
Latency, I suspect. There's more than one factor that determines SSD performance.

One article: https://www.tweaktown.com/news/7010...ss-pcie-4-nvme-up-3-7gb-sec-speeds/index.html

It came from an interview (I can't find the link at the moment) with a Phison employee, where they gave an example of how an 8-channel NVMe drive could reach 5GB/s compared to a 12-channel drive at 5.5GB/s. They were stacking chips to use more of the available bandwidth with the 8-channel variant.

Curious, since it's one of the design focuses.
Obviously you can achieve the same speed in different ways, using more or fewer lanes and saturating them more or less... what does this bring here? The XSX is not the same speed.
 
Last edited:
I just watched Austin Evans XSX hands-on video again and see that he did check out the SSD on the SoC board. I completely missed this!



It is completely encased in a heat sink but looks more like a M.2 2230/2242 form factor size wise? Not sure if this tells us much new but it is different to the 12 separate flash chips+Flash controller on PS5?

Exactly. This is the kind of SSD you see in the Series X
Goldendisk-YCdisk-Serial-NGFF-240GB-256GB-128GB-120GB-M-2-font-b-SSD-b-font-font.jpg


Those are cheap Laptop SSDs
Not even the expensive long desktop M2 SSDs with more chips like this:
583430-samsung-970-evo-and-970-pro-nvme-m-2-ssd.jpeg



We don't know the PS5 solution yet (in terms of physical hardware layout), but it has to be 12 chips (stacked, or separate).
We also don't know if the PS5 has a cached or cacheless controller like the Series X. But the controller is not off the shelf like the Series X's.

The PS5 solution is a lot more expensive even with 17.5% less capacity.
 

Ascend

Member
I just watched Austin Evans XSX hands-on video again and see that he did check out the SSD on the SoC board. I completely missed this!



It is completely encased in a heat sink but looks more like a M.2 2230/2242 form factor size wise? Not sure if this tells us much new but it is different to the 12 separate flash chips+Flash controller on PS5?

That by itself doesn't say much. What is interesting is that the SSD is on the same board as the SOC & RAM. That board has the SSD, the custom SSD port for the SSD 'memory card', and HDMI out, and nothing else. The other board has everything else, like USB ports and LAN. They kept the SSD as close as possible to the APU.

And what is still interesting are these pictures;

2020-03-16-ts3_thumbs-c09.jpg


Lk8dsfO.png


They show the SSD being connected directly to the APU. We still don't really know what the APU itself looks like internally, aside from this;

Xbox-Series-X-Specs.jpg


The question still is whether the data needs to be transferred to RAM before being used, or if (some of) it can be transferred directly to cache as well.
 

THE:MILKMAN

Member
We don't know the PS5 solution yet (in terms of physical hardware layout), but it has to be 12 chips (stacked, or separate).

I was thinking for PS5 a motherboard layout like OG PS4:

SAA-001-small.png


But instead of 8 GDDR5 chips on the other side, there would be 12 flash chips+controller and the double-sided heat sink would cool it all.

Not sure how this would work with the M.2 expansion bay though.

The question still is whether the data needs to be transferred to RAM before being used, or if (some of) it can be transferred directly to cache as well.

This is way above my knowledge base to be honest. Maybe one of the programmers here could chime in?
 
This is way above my knowledge base to be honest. Maybe one of the programmers here could chime in?

No. This cache is for the GPU to use based on the current code running on it, used as local memory for the CUs as well as other use cases. They literally store KBs of data in L1 cache, and a few megs in L2. They also have unbelievably fast access times compared to even RAM, and change constantly.
 
Last edited:

Ascend

Member
No. This cache is for the GPU to use based on the current code running on it, used as local memory for the CUs as well as other use cases. They literally store KBs of data in L1 cache, and a few megs in L2. They also have unbelievably fast access times compared to even RAM, and change constantly.
Can you identify what each component of the APU is, based on the image I posted above?

 