The bolded is extremely weird to me. I don't get it. It's odd.
Don't get it as in it's weird to present the idea, or don't get it as in it's weird there are people who actually seem to discuss the tech along these terms?
You know that is not true.
This whole debate has been going through the following loops:
1) Microsoft claims they have the best graphics system of the coming generation as the PS5 specs are released. This is also reinforced by every forum warrior out there. That analysis is more or less focused only on TFLOPS.
2) Cerny's talk starts to disseminate. He claims that I/O will play a bigger role in upgrading graphics this coming generation than people think, and spells out how much silicon Sony has dedicated to it in the PS5.
3) Forum warriors laugh at Cerny as if he does not know what he is talking about since I/O does nothing according to them apart from faster load times.
4) UE5 demo is shown and TS says that I/O is key in making the demo as graphically rich as it is.
5) Slow realization that Cerny might (!) know what he is talking about rolls through the internet's underbelly.
6) MS starts to move their marketing from Tflops to Velocity Architecture etc - they seem to realize this as well.
7) Today the OP erroneously tries to state that the demo is not I/O limited in any way apart from requiring an SSD which both systems have. He reaches this conclusion by adding an apple to an orange and dividing it by pi (basically BS).
Questions still remain about how much impact the I/O differences between the systems will have, both for third-party and first-party titles (personally I do not believe it will matter unless a game is designed to take advantage of it, which will most likely be hard outside of first-party titles). However, no one disputes that UE5 will run on a variety of systems; but unless you choose not to believe TS, the graphics in the demo will take some sort of hit on the XSX due to its lower I/O capabilities.
No, I've observed as much. Some people may not outright state it that clearly, of course. But there's a tone you can pick up in a person's use of language, what type of points they stick to, when they make them and how, and it starts to paint a picture. It's just something people do a lot, and granted, it happens on both sides of the aisle too.
1) Yeah, that happened, but that was the fault of forum warriors focused only on TFs, and that was their choice. I was there in those threads. I kept trying to say "TFs don't mean everything", because both sides were having an e-dick competition for the biggest TF count. It only got worse when the GitHub leak happened and the testing data kept coming forward. There are people to this day who pretend Ariel and Oberon are not PS5 chips simply because the final TF and clock speeds are different. I'm looking forward to the phantom PS5 chip finally making its appearance if that's the case.
2) Yes, he did this, and it was a great thing to see. It was what was needed at the time, for sure. Granted, he also tried to downplay TFs in the process and implied that a smaller, narrower GPU at a faster clock is better than a wider GPU at a slightly lower clock, despite almost all GPU benchmarks between cards of the same architecture showing the opposite in nearly every category aside from pixel fillrate and cache speeds.
So he basically downplayed with a half-truth, on a particular architecture feature where they didn't have an advantage in raw numbers. That is something you do as PR control, no matter how technical the rest of your dissertation is.
3) It was wrong of people to dismiss the advantages of SSDs based simply on PC benchmarks, that's true. It was clear even before Road to PS5 that SSDs would bring a lot of I/O advantages going into next-gen, but that presentation opened the floodgates on encouraging that discussion in earnest.
TBF yes, some of that kickback was from PC and Xbox zealots who probably felt a bit cut down after seeing Sony's SSD I/O presentation. But from what I observed, they only got a bit particular about it when pro-Sony fans tried spinning the SSD I/O as a means of closing the graphics gap, and in many cases exceeding graphics on Xbox and PC platforms, without understanding the actual role of the SSD in a console design hierarchy. It just all kept feeding into a vicious feedback loop.
4) Yeah, TS said this...although he worked a LOT of PR fluff into his statements as well. That went conveniently overlooked by people who used his statements to solidify their perception of the next-gen SSD I/Os, some even going as far as to say that non-Sony platforms could never run a demo like that (which, if you want to argue semantics, is true only in the sense that the demo wasn't compiled for any system other than PS5 xD).
5) Here's the thing; people (especially
certain people) take Cerny's view on resolving the I/O issues as being the
ONLY solution. That is where all of those people are
wrong. Throughout the history of technology there have always been MULTIPLE solutions to the same problem, some just being more successful than others and not
always because they were the objectively better solution, at that. x86's success, in fact, can be argued as being down more to ubiquity and mass market saturation (thanks to PCs) than being just an objectively superior solution, since there were always other architectures that did certain things much better for specific tasks that required them (M68K for example having 32-bit registers at a time when x86 processors did not).
The same thing applies to the SSD I/O solutions. Neither Sony's nor Microsoft's approach is the ONLY valid one. They can each work and offer comparable results based on their design principles. I feel that some people don't understand the nuances here, which is why they keep treating these as apples-to-apples solutions and clinging to paper specs so much, when in fact the paths the two approaches take are pretty divergent and have different guiding philosophies behind them.
Sony's approach prioritizes bandwidth; Microsoft's prioritizes latency. This doesn't mean either is necessarily lacking in the other area, just that each puts its main priority on one of them. They are both very valid approaches, but a large contingent of people view Cerny's approach as the only possible solution. The truth is that he is not the first person to notice "Hm, there are some bottlenecks here. Let's try fixing them!", not even by a long shot. And he is not the only one who has found solutions to this problem, either.
Yes, certain solutions may excel over others in specific areas and use-cases; the fact remains there are always multiple valid solutions to similar problems, and it has been this way in technology since dang near its inception. It will remain that way into the future as well, barring some market-capital tomfoolery or money pushing people toward a standardized solution.
6) MS was actually talking about XvA at least from the time Sony talked about their SSD I/O. In fact, in the blog post they put out the day of Sony's Road to PS5, they mention XvA and other things like DirectML right there. But guess what? It was everyone else, still obsessed over TFs, who skipped past that stuff and only focused on the TF count.
So Road to PS5 gets underway and people are still clinging to TFs. Then Sony announces theirs and (keep in mind I was in the YouTube stream when this happened) there's just a flood of "WTF!", "Lol", and other comments of that type. So that shows you where these people's headspace was at, even up to that point. However, once Sony starts delving deep into their SSD I/O, you get some people who might've actually remembered MS's stated performance numbers, kept up with some of the other rumors, etc. They see Sony's stuff here, and now that presents a new avenue for them to focus their discussions on (some doing so as more console-warrior BS, obviously).
It's just very ironic that it took Road to PS5 for people at large to finally focus on something aside from TFs. The reason MS played into TFs early on is that the majority of gamers themselves were only paying attention to TFs! They were just giving people the talking points they wanted, but they did find ways to drop in mentions of things aside from TFs well before Road to PS5, including XvA, Project Acoustics, DLI, etc. You probably just didn't pay attention to that, since you might've been obsessed over GPU TFs. It's essentially tunnel vision, and a lot of people had it (and still do; they've just shifted that tunnel vision to SSD I/O instead).
7) I'll admit OP looked at the 768 MB figure incorrectly, but what do we see people doing now? Doubling down again on "you can only do this on PS5 after all", or some variation of that. It's like there is zero middle ground in this discussion anymore. Some people imply you'd need a "magnitudes larger RAM reserve" for "other systems" to pull off the same thing...sneaky doublespeak like that has been creeping back up once again.
Now, I still assert that there's nothing in this demo that cannot be done on at least the Series systems; they have pretty much all of the same focuses on smart bandwidth use, very low latency (perhaps more so), decompression, etc. as the PS5, just with lower overall maximum bandwidth. PC may be a case where it isn't possible due to I/O and file-system limitations, at least temporarily, but those will likely be resolved in the near future.
The thing is, it would serve the community best if this type of demo can indeed run on other platforms, because if this is the peak of asset streaming and we're getting it in a demo before the generation even starts, that's kind of blowing the load too early. I'd like to think the generation, in terms of fast asset-streaming capabilities, will have more to offer in real gameplay than what was seen in the UE5 demo.
I'm not assuming; I gave a math example for UE running at 60 fps, which Nanite can do. It's only Lumen that is currently limiting the fps.
184 MB or 92 MB per frame doesn't matter too much, as it's the same amount of data per second. It might also be much lower in reality, as part of the SSD budget may be spent on reading other things.
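For reference, here is a rough sketch of the per-frame arithmetic being discussed. It assumes the PS5's stated 5.5 GB/s raw SSD throughput (that figure isn't quoted in this thread, so treat it as an assumption), and it's a ceiling-of-the-budget ballpark, not a measurement of what the demo actually streams:

```python
# Rough streaming-budget arithmetic behind the ~184 MB / ~92 MB figures.
# ASSUMPTION: 5.5 GB/s raw SSD throughput (PS5's publicly stated number,
# not taken from this thread). Real streaming will be lower, since part
# of the budget goes to reading other data.

RAW_THROUGHPUT_MB_S = 5.5 * 1000  # 5.5 GB/s expressed in MB/s

def per_frame_budget(fps: int) -> float:
    """Maximum MB of fresh data the SSD could deliver per rendered frame."""
    return RAW_THROUGHPUT_MB_S / fps

for fps in (30, 60):
    print(f"{fps} fps -> {per_frame_budget(fps):.0f} MB per frame")
```

Halving the frame time halves the per-frame budget, but the data moved per second is identical, which is the point being made above.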
You're still applying your math as an assumption, the assumption being that they are constantly streaming in new data the whole time. If that's indeed what they're doing, it's ultimately a waste of resources with regard to an actual gameplay scenario.
As a controlled demo with limited physics and no advanced AI, world logic, etc. running in tandem (nor NPCs), it certainly fits the bill of being technologically impressive.