Panajev2001a (GAF's Pleasant Genius) said:
Look at it from this angle, since we clarified the maths before already: if BCPACK compression yields a 2:1 compression ratio and lowers the number to 2.8 GB, then you cannot transfer it in 1s.

I'm not so sure I got the calculation wrong. I definitely know I messed up something I said earlier, in a fashion hilarious enough that even I started laughing at it, but I suspect my current understanding may be accurate now.
So here is my thinking.
If the game requests 14GB of texture data, Sampler Feedback Streaming should cut that 14GB demand down to 5.6GB thanks to its 2.5x efficiency advantage.
And here comes the part where some think I messed up or double-counted compression. Keep in mind that what SFS does initially is not considered "compression." Cutting the demand to 5.6GB of actual texture data isn't compression; that's just SFS intelligently informing the system of all it will need for the current scene.
One way to look at this is to say that SFS is telling the system it needs 5.6GB of texture data.
Another way to look at this is SFS is telling the system it needs 2.8GB of BCPack format input data decompressed into main memory.
The number for the calculation must be 2.8GB; otherwise, after decompression, it's no longer 5.6GB of textures. If the number used for the calculation is 5.6GB, then that's 11.2GB worth of textures after decompression, double what SFS suggested is needed.

That's why I don't think the calculation in the quoted post by Rea can work: 5.6 / 2.4 / 2, which would equal 1.16 seconds. That calculation has the Series X decompressing 11.2GB of textures into main memory, way more than what was called for.

Decompressing 5.6GB of data actually yields 11.2GB of texture data, not the 5.6GB of texture data that Sampler Feedback Streaming suggests is required.
So the calculation is actually 2.8GB / 2.4GB/s / 2 = 0.58 seconds.
Another way to do it is to drop the 2 at the end and simply compute 2.8GB / 4.8GB/s = 0.58 seconds.
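Written out as a quick sanity check, using the figures as stated in this thread (14GB requested, 2.5x SFS efficiency, 2:1 BCPack, 2.4 GB/s raw SSD speed), the two forms of this division give the same number:

```python
# Figures as stated in the thread; whether this division is the right
# one is exactly what the reply further down disputes.
needed_textures_gb = 14 / 2.5           # 5.6 GB of textures SFS says are needed
compressed_gb = needed_textures_gb / 2  # 2.8 GB of BCPack data on disk

t1 = compressed_gb / 2.4 / 2            # 2.8 / 2.4 / 2
t2 = compressed_gb / 4.8                # 2.8 / 4.8
print(round(t1, 2), round(t2, 2))       # 0.58 0.58
```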
What are people missing? Just because Sampler Feedback Streaming says 14GB of data isn't required and cuts it down to 5.6GB of texture data, do not confuse that 5.6GB with the COMPRESSED data size. The compressed data size for 5.6GB worth of textures using BCPack is lower still, at 2.8GB. It only becomes 5.6GB of texture data after decompression.
Going off of my post above, I think people only believe my calculation is wrong because they are using the wrong data point: the end result rather than the compressed size of 5.6GB worth of textures, which with BCPack is 2.8GB.
Even Cerny's 2GB / 5GB/s / 1.5 example works this way. That data is only 2GB in its compressed form, but once it's decompressed into main memory it's actually 3GB: 2GB * 1.5 compression ratio = 3GB. In the same example, the 5GB/s SSD effectively becomes 7.5GB/s with compression. This is why 2GB / 7.5GB/s gives the exact same 0.27 result.
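The same arithmetic identity holds for the Cerny figures as quoted in this post (2 GB, 5 GB/s, 1.5x ratio):

```python
# Two forms of the division from the Cerny example as given in the post.
t1 = 2 / 5 / 1.5   # divide by raw speed, then by the compression ratio
t2 = 2 / 7.5       # divide by the boosted 7.5 GB/s in one step
print(round(t1, 2), round(t2, 2))  # 0.27 0.27
```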
This is my understanding of how it works: 5.6GB is what will be in RAM after decompression, but it shouldn't be confused with the data size in its 50% compressed form, 2.8GB.
2.8GB / 4.8GB/s -> this is wrong… it should be 2.8 GB / 2.4 GB/s, as 2.4 GB/s is the maximum SSD I/O speed (which, taking decompression into account, becomes 4.8 GB/s on average). You are factoring in the compression going from 5.6 GB to 2.8 GB and then applying the same factor again to boost the bandwidth. You should also consider that the effects of SFS/PRT+ come after compression, since you stream compressed data (which then gets decompressed by the I/O unit).
Counting all the multiplication factors, in an optimal scenario you could transfer the following amount of data in 1s: 2.4 GB/s * 2 (BCPACK) * 2.5 (PRT/SFS) = 12 GB.
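That ceiling, sketched with the thread's figures (the 2:1 BCPack ratio and 2.5x SFS multiplier are the assumed best-case values from this discussion, not measured numbers):

```python
raw_gbps = 2.4       # Series X raw SSD throughput
bcpack_ratio = 2.0   # assumed best-case 2:1 BCPack compression
sfs_factor = 2.5     # assumed SFS/PRT+ efficiency multiplier

# Texture demand that one second of raw transfer can effectively serve.
effective_gb_per_s = raw_gbps * bcpack_ratio * sfs_factor
print(effective_gb_per_s)  # 12.0
```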