Ugh... Ok. Let me try and explain things in baby steps, because apparently that is needed here....
Imagine you have 10 textures.
With your current I/O system, you can transfer 2 textures per second.
With compression/decompression, you reduce the size of textures to half.
That means that you can now transfer 4 textures per second with the same I/O system.
That is the normal way of doing things.
Now imagine that, of those 4 textures, you have a way to predict exactly which ones you will actually need, and it turns out you only need 2 of them rather than having to load all 4.
That means you're loading half of what you'd normally load, so with the same I/O system you can effectively serve 8 textures' worth of demand instead of 4.
Get it now? They stack.
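To put that same arithmetic in one place (purely the toy numbers from above, not real console specs), here's a quick sketch of why the two savings multiply rather than add:

```python
# Back-of-envelope version of the example above (hypothetical figures,
# not real hardware numbers): the two savings stack multiplicatively.

RAW_TEXTURES_PER_SEC = 2      # baseline I/O throughput from the example
COMPRESSION_RATIO = 2         # compression halves the size on disk
SELECTIVE_LOAD_RATIO = 2      # only ~half the texture data actually gets needed

effective = RAW_TEXTURES_PER_SEC * COMPRESSION_RATIO * SELECTIVE_LOAD_RATIO
print(f"Effective throughput: {effective} textures/sec")  # 2 * 2 * 2 = 8
```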
Actually, now thinking of it for a brief second, I wonder if this is all some method inspired in part by speculative execution/branch prediction, which you see in CPUs from Intel and AMD:
"Speculative execution is an optimization technique where a computer system performs some task that may not be needed. Work is done before it is known whether it is actually needed, so as to prevent a delay that would have to be incurred by doing the work after it is known that it is needed. If it turns out the work was not needed after all, most changes made by the work are reverted and the results are ignored. The objective is to provide more concurrency if extra resources are available. This approach is employed in a variety of areas, including branch prediction in pipelined processors, value prediction for exploiting value locality, prefetching memory and files, and optimistic concurrency control in database systems."
Notice the mention of prefetching there. We know SFS is (along with other parts of XvA) focused on cutting down the prefetch window to try to get texture streaming as "just in time" as possible. Now, actual just-in-time isn't possible with NAND-level storage latencies, but if you focus on reducing the factors that contribute to latency, you can cut the prefetch window down a lot.
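Just to illustrate what "cutting the prefetch window down" means in practice, here's a rough sketch with made-up latency figures (the function and numbers are mine, not anything MS has published):

```python
import math

def prefetch_window_frames(storage_latency_ms: float,
                           decode_latency_ms: float,
                           frame_time_ms: float = 16.7) -> int:
    """How many frames ahead you'd have to issue a texture request so the
    data is resident by the time the GPU actually wants to sample it."""
    total_latency = storage_latency_ms + decode_latency_ms
    return math.ceil(total_latency / frame_time_ms)

# Hypothetical numbers: an HDD-era pipeline vs. NVMe + hardware decompression.
print(prefetch_window_frames(storage_latency_ms=80.0, decode_latency_ms=20.0))  # ~6 frames ahead
print(prefetch_window_frames(storage_latency_ms=5.0,  decode_latency_ms=1.0))   # ~1 frame ahead
```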
SFS (and maybe SF as well), IIRC, basically operates by giving developers a way to provide some type of sample batch of texture data so the system knows what is about to be used. In case the higher-quality mip isn't ready in time, the system can fall back to the lower-detail mip and blend into the higher-detail one when it becomes available. This is done with some custom hardware on the GPU.
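A minimal sketch of that fallback idea, with names of my own invention (the real thing happens in GPU hardware/shaders via sampler feedback, not on the CPU like this):

```python
# Hypothetical illustration of the SFS-style fallback: if the requested mip
# isn't resident yet, sample the best coarser mip that IS resident, then
# blend toward the requested one once it streams in.

def pick_mip(requested_mip: int, resident_mips: set, max_mip: int):
    """Return (mip_to_sample, blend_weight). A blend_weight of 0.0 means
    we're still showing a coarser mip while the requested one streams in."""
    if requested_mip in resident_mips:
        return requested_mip, 1.0
    # Fall back to the next coarser (higher-numbered) mip that is resident.
    for mip in range(requested_mip + 1, max_mip + 1):
        if mip in resident_mips:
            return mip, 0.0
    return max_mip, 0.0  # worst case: the small always-resident tail mip

# Usage: mips 3..7 are resident, the sampler wants mip 1 -> we get mip 3 for now.
print(pick_mip(requested_mip=1, resident_mips={3, 4, 5, 6, 7}, max_mip=7))
```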
This approach flips a few things compared to speculative branch prediction/execution, but the core idea, optimizing the pipeline by preventing delays, doing the work before a particular data asset is needed but as close to the time it's actually required as possible, shares some inspiration with SE, even if SE is (to my knowledge) traditionally a CPU thing. And shrinking the window of frames between queuing texture data for a prefetch and the GPU actually streaming it in requires very good latency figures.
Of course, speaking of speculative execution, we know there's a big controversy going on there with exploits like Meltdown. So, uh, hopefully if MS's approach is doing something to that effect here (I've been thinking those ARM cores mentioned on an Indian AMD engineer's LinkedIn months back might be customizations for the GPU as well), the security is on point and unauthorized exploits don't become a thing xD.
(Honestly tho, speculative execution itself isn't the problem that led to those exploits; it's mainly how Intel's architecture implements SE that made their processors in particular so damn vulnerable. AMD's, for example, are much less vulnerable to those same exploits. Neither console is using Intel obviously, so I think that's worth keeping in mind.)
OptimistPrime
Well, like quest was speaking of, most if not all of XvA will also be deployed on PC within a year or so. That increases the net of hardware supporting it by a metric ton. I would also expect it within whatever server and data center markets you'll find MS's equipment in, such as Azure, since it's designed for scalability and working with a number of drive implementations.
I do think there are aspects of Sony's approach that will be easier to leverage, especially if a game is using UE5, as Epic have gone to great lengths to rewrite parts of their I/O to support Sony's solution (though not at the expense of other solutions, mind). OTOH, I think there are some aspects of Microsoft's solution that will be easier to leverage; it all really comes down to what specific things developers want to do.
Sony's approach seems to automate more of the process, however, which helps with easing developers in. So there is that to take into consideration. I just don't think it's going to be a night-and-day difference in ease of use between the two solutions. They ought to be very close to each other in terms of dev friendliness, with Sony's perhaps having the advantage for some workloads, particularly, again, with specific engines leveraging it like UE5.