Isn't VRS just a software feature? I'm not aware of any piece of hardware that would enable VRS on one system and not the other; they both use the same GPU architecture.
You need the hardware to perform it if you want the benefits of hardware acceleration. Otherwise you implement your own solution in software, which is less efficient but can technically be done.
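To illustrate the software route, here's a minimal sketch of the general idea behind variable-rate shading done purely in software: pick a coarser shading rate for screen tiles far from a focal point, shade once per block, and replicate the result. This is just a toy model of the technique (the tile sizes, distance thresholds, and shader are all made up for illustration), not anything resembling a real driver or console implementation.

```python
# Toy software "VRS" sketch (hypothetical): shade tile-by-tile, using a
# coarser shading rate for tiles far from a focal point and replicating
# the shaded value across each block.

def shading_rate(tile_x, tile_y, focus, tile_size=8):
    # Choose a block size: 1x1 near the focus, 2x2 mid, 4x4 in the periphery.
    cx = tile_x * tile_size + tile_size // 2
    cy = tile_y * tile_size + tile_size // 2
    dist = ((cx - focus[0]) ** 2 + (cy - focus[1]) ** 2) ** 0.5
    if dist < 16:
        return 1
    if dist < 48:
        return 2
    return 4

def shade(x, y):
    # Stand-in for an expensive per-pixel shader.
    return (x * 31 + y * 17) % 256

def render(width, height, focus, tile_size=8):
    fb = [[0] * width for _ in range(height)]
    invocations = 0
    for ty in range(height // tile_size):
        for tx in range(width // tile_size):
            rate = shading_rate(tx, ty, focus, tile_size)
            for by in range(ty * tile_size, (ty + 1) * tile_size, rate):
                for bx in range(tx * tile_size, (tx + 1) * tile_size, rate):
                    color = shade(bx, by)  # shade once per block...
                    invocations += 1
                    for y in range(by, by + rate):
                        for x in range(bx, bx + rate):
                            fb[y][x] = color  # ...replicate across the block
    return fb, invocations
```

The saving comes from `invocations` being well below one shade per pixel; the cost is the extra CPU/GPU work of the loops themselves, which is exactly why hardware support is more efficient.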
IIRC, among AMD GPUs RDNA 1 does not support VRS; only RDNA 2 and onward do. If Sony has an equivalent to VRS, they are implementing it differently, maybe with some customizations to the Geometry Engine and Primitive Shaders. And it would not be called VRS, as Microsoft holds the rights to that particular term.
Some people mentioned MS's and Intel's patents referencing Sony's, but didn't keep in mind that Sony's was for foveated rendering in application with PSVR. That and VRS are similar in some respects but operate and are applied differently. You can have two technologies with similar base DNA but very different implementations and functionality in practice; just look at emerging non-volatile memory technologies like 3D XPoint and ReRAM. It's nothing new.
First, I think the XSX is a very well designed console.
However, the 100 GB figure is bullshit unless Microsoft explains how they calculated that number. You can't logically explain how the GPU would render data straight from the SSD, skipping system/GPU RAM.
The 100 GB bit, some of us have speculated, might refer to the GPU addressing a partition of data on the drive as extended RAM (it sees it more or less as RAM), through GPU modifications built off pre-existing features from the XBO era such as ExecuteIndirect (which only a couple of Nvidia cards support in hardware). GPUDirect Storage, as Nvidia terms it, already allows GPUs in non-hUMA setups to pull data from storage directly into VRAM. That's particularly useful for GPUs in that kind of setup, but since these consoles are hUMA systems, on the surface it wouldn't seem necessary.
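The win from that kind of direct path can be modeled very crudely as counting how many times the data crosses a bus. This is purely a toy model I'm using for illustration (the hop names and the one-cost-per-hop assumption are made up, not real driver behavior):

```python
# Toy model of the two transfer paths: classic bounce-through-system-RAM
# versus a GPUDirect Storage-style direct DMA into VRAM. Hop names and
# costs are illustrative only.

def bounce_path(bytes_moved):
    # Classic PC path: SSD -> system RAM (CPU copy) -> VRAM (GPU copy).
    hops = ["ssd->sysram", "sysram->vram"]
    return hops, bytes_moved * len(hops)  # data crosses a bus twice

def direct_path(bytes_moved):
    # Direct path: SSD DMAs straight into VRAM, one bus crossing.
    hops = ["ssd->vram"]
    return hops, bytes_moved * len(hops)
```

In this crude accounting the direct path halves the total traffic for the same payload, which is the basic appeal of skipping the system-RAM bounce.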
But... what if there's more to that virtual pool partition on XSX than meets the eye? We know the OS is managing the virtual partitions of the two RAM pools on the system. Is it possible, in some cases, for the GPU to access the 4x 1 GB RAM modules while the CPU accesses the lower-bound 1 GB of each of the 6x 2 GB modules? We don't know if that's the case, but if the OS can virtualize a split pool to optimize GPU and CPU bus access and handle contention issues, it might also, theoretically, be able to implement a mode (even if just in specific usage cases) that presents a 4x 1 GB chunk to the GPU and a 6x 1 GB chunk to the CPU, letting them work the bus simultaneously in those instances.
The tradeoff is that only 10 GB of system memory would be accessible collectively, but the OS could just re-virtualize the normal pool partition logic as needed, with the usual timing penalties factored in. Those wouldn't necessarily be massive, either; if Sony can supposedly automate the power-load adjustment in their variable-frequency setup within 2 ms or less, I don't see why MS would be unable to do what's proposed here in an even smaller time range.
Anyway, the 100 GB being "instantly available" was never a reference to access speed, but maybe to something like the scenario I've just described. Even though the data still goes to RAM, and the RAM it can go to is cut down to 4 GB of physical memory with this method (if it needed more RAM than that, or a parallel transfer rate greater than 224 GB/s, the OS would have to re-virtualize the normal memory pool logic), the GPU can at least still transfer data while the CPU simultaneously accesses the 6 GB pool on the rest of the bus.
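The back-of-envelope numbers for this split-access scenario work out like so. This is my own arithmetic from the public XSX specs (ten GDDR6 chips on a 320-bit bus, 560 GB/s aggregate), not anything Microsoft has confirmed:

```python
# Back-of-envelope arithmetic for the hypothetical split-access mode.
# XSX: ten GDDR6 chips on a 320-bit bus, 560 GB/s aggregate.

CHIP_BANDWIDTH_GBPS = 560 / 10  # 56 GB/s per 32-bit chip

gpu_chips = 4  # the 4x 1 GB modules, hypothetically reserved for the GPU
cpu_chips = 6  # the 6x 2 GB modules, lower 1 GB of each going to the CPU

gpu_bw = gpu_chips * CHIP_BANDWIDTH_GBPS  # 224 GB/s for the GPU slice
cpu_bw = cpu_chips * CHIP_BANDWIDTH_GBPS  # 336 GB/s for the CPU slice

gpu_capacity_gb = 4 * 1  # 4 GB visible to the GPU in this mode
cpu_capacity_gb = 6 * 1  # 6 GB visible to the CPU in this mode

print(gpu_bw, cpu_bw, gpu_capacity_gb + cpu_capacity_gb)
```

That's where the 224 GB/s ceiling above comes from: four chips at 56 GB/s each, running in parallel with the CPU's six chips rather than contending with them.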
Again, though, it'd depend on what customizations they've done with the GPU here, and on what level the governing logic in the OS and kernel for virtualizing the memory pool partitions operates at. But it certainly seems like a potential capability and a logical extension of the GPUDirect Storage features already present in Nvidia GPUs, as well as things like AMD's Radeon Pro SSG line (which I'd assume works very similarly, i.e., drawing data directly from the 2 TB of NAND and transferring it to the GPU's onboard HBM2 VRAM, rather than having the CPU pull the data from storage, dump it in system RAM, and then have the GPU shadow-copy those assets into VRAM, as many older-generation CPU/GPU setups on PC do). I'm gonna do a little more thinking on this, because there might be some plausibility in it being what MS has done with their system setup, IMHO.