
Xbox Velocity Architecture - 100 GB is instantly accessible by the developer through a custom hardware decompression block

KingT731

Member
HW RT is a big deal. It sounded like devs will have to develop their own algorithms to lower the workload on the GPU or CPU when making use of the RT hardware on the PS5 (instructions on how to fill up the BVH). The BVH structures he showed are fundamental data structures for ray tracing workloads, but the algorithms that work with the BVH are where I would have expected them to invest more. At least he mentioned that devs were able to use the RT hardware for reflections with minimal impact on the GPU.
You also have to remember that during that presentation it was the 3rd time Cerny had to confirm that the console even had HW RT. But for the rest you'd need to be privy to what all is in Sony's API.
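For anyone who hasn't worked with one, a BVH is conceptually simple. Here's a generic C++ sketch of the data structure and traversal (purely illustrative, not from either console's SDK; rayHitsBox and rayHitsTriangle are assumed helpers I'm not implementing here):

struct AABB { float min[3]; float max[3]; };   // axis-aligned bounding box
struct Ray  { float origin[3]; float dir[3]; };

// Each node bounds everything beneath it; leaves reference a triangle range.
struct BVHNode {
    AABB bounds;
    int  left = -1, right = -1;      // child node indices; -1 means leaf
    int  firstTri = 0, triCount = 0; // valid only for leaves
};

bool rayHitsBox(const Ray& ray, const AABB& box);    // assumed helper
bool rayHitsTriangle(const Ray& ray, int triIndex);  // assumed helper

// The whole point of the BVH: skip entire subtrees whose boxes the ray
// misses. This pruning is what the RT hardware accelerates.
bool traverse(const BVHNode* nodes, int idx, const Ray& ray) {
    const BVHNode& n = nodes[idx];
    if (!rayHitsBox(ray, n.bounds)) return false;
    if (n.left < 0) {                // leaf: test the actual triangles
        for (int i = 0; i < n.triCount; ++i)
            if (rayHitsTriangle(ray, n.firstTri + i)) return true;
        return false;
    }
    return traverse(nodes, n.left, ray) || traverse(nodes, n.right, ray);
}

The "fill up the BVH" part he alluded to is the build/refit side: deciding how to split geometry into those nodes each frame, which stays on the dev's plate even with hardware traversal.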
 

rntongo

Banned
I read that each processor is performing a different task though.

Also I saw in a video that it's two cores and not 10% of one. I believe it was either NX Gamer or that Red Tech Gaming guy. I'll go watch the video again.

Edit: Nevermind I just saw the video again. My mistake.

No that's not true, the 10% of one Zen 2 core on the XSX does the equivalent of the two I/O co-processors on the PS5 (otherwise the XSX has a co-processor for mapping just like the PS5).

I never said the two co-processors do the same thing; in fact I know exactly what each one does. The first one is used to efficiently stream data into RAM if the game code is using traditional function calls in the API. So if the game developer is using C++ and the iostream library, its work will be handled by this co-processor. On the other hand, if the dev wants to map the whole game into RAM, the work will be offloaded to the second I/O co-processor, the one in charge of mapping.

Edit: I'll just add that it would make sense that MSFT has an I/O co-processor for mapping.
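To make the two paths concrete, here's roughly what they look like in plain C++/POSIX terms (just a sketch of the general technique; the actual console APIs aren't public and will differ):

#include <cstddef>
#include <fstream>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Path 1: traditional streamed I/O. Explicit calls copy data into a buffer,
// and every read is a request the I/O stack has to service.
void loadStreamed(const char* path, char* buf, std::size_t n) {
    std::ifstream f(path, std::ios::binary);
    f.read(buf, static_cast<std::streamsize>(n));
}

// Path 2: memory-mapped I/O. The file is placed into the address space and
// pages are pulled in on demand the first time a pointer into it is touched.
// (Error handling omitted for brevity.)
const char* loadMapped(const char* path, std::size_t n) {
    int fd = open(path, O_RDONLY);
    void* p = mmap(nullptr, n, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);  // the mapping stays valid after the descriptor is closed
    return static_cast<const char*>(p);
}

The claim, as I understand it, is that each path gets its own co-processor so neither eats CPU time.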
 
Last edited:
If I were to make an educated guess, I would say that both companies are using hardware and software acceleration in their I/O systems.

I agree, and frankly I personally feel it is foolish that some people are assuming MS is just copy-pasting old PC techniques just because they haven't detailed the nitty-gritty of how their kit works. I think both consoles have a lot of nifty tricks up their sleeves.
 

Panajev2001a

GAF's Pleasant Genius
TF is now meaningless because the Sony console has less. Simple as that.

Now everyone is an SSD expert because Sony has a faster SSD.

In the thread where the pendulum has been swinging constantly from "the 2x faster SSD bandwidth and low latency only buys you a second less of loading time" to "well, if it matters more, the XSX's SSD has this magic tech that makes it 2-3x better"... it is rich reading this ;).

Yes, the problem is these mysterious PS fans who keep saying TFLOPS do not matter, yet when examples are sought little or nothing is given, despite threads seemingly being flooded by them saying absurd things... talk about projecting... more transistors switching fast is not necessarily more efficient than fewer transistors switching faster as a whole.

BTW, all this talk about efficiency:
 

rntongo

Banned
You also have to remember that during that presentation it was the 3rd time Cerny had to confirm that the console even had HW RT. But for the rest you'd need to be privy to what all is in Sony's API.

True. It also makes sense because, for example, large studios will likely ditch Sony's APIs and use their own for RT. I would expect Rockstar to do something of the sort. And Sony's first-party studios can develop their own as well.
 

Panajev2001a

GAF's Pleasant Genius
No that's not true, the 10% of one Zen 2 core on the XSX does the equivalent of the two I/O co-processors on the PS5 (otherwise the XSX has a co-processor for mapping just like the PS5).

I never said the two co-processors do the same thing; in fact I know exactly what each one does. The first one is used to efficiently stream data into RAM if the game code is using traditional function calls in the API. So if the game developer is using C++ and the iostream library, its work will be handled by this co-processor. On the other hand, if the dev wants to map the whole game into RAM, the work will be offloaded to the second I/O co-processor, the one in charge of mapping.

Edit: I'll just add that it would make sense that MSFT has an I/O co-processor for mapping.

I presume you have links to the devkit docs handy ;)?
 

rntongo

Banned
I presume you have links to the devkit docs handy ;)?

Oh I forgot you're not a programmer and have no idea what memory-mapped I/O is. Cerny explained it in the presentation, so I understood it since I have an education in software analysis and design (data structures and algorithms). One co-processor deals with traditional I/O and the other deals with mapping (memory-mapped I/O).
 
Last edited:
One co-processor deals with traditional I/O and the other deals with mapping (memory-mapped I/O).

So one deals with the reading and writing of the information while the other deals with finding it?

I can see how that could take up some CPU power if they didn't have those processors. Heck, they might even help make those things even better.
 

jimbojim

Banned
No that's not true, the 10% of one Zen 2 core on the XSX does the equivalent of the two I/O co-processors on the PS5 (otherwise the XSX has a co-processor for mapping just like the PS5).

I never said the two co-processors do the same thing; in fact I know exactly what each one does. The first one is used to efficiently stream data into RAM if the game code is using traditional function calls in the API. So if the game developer is using C++ and the iostream library, its work will be handled by this co-processor. On the other hand, if the dev wants to map the whole game into RAM, the work will be offloaded to the second I/O co-processor, the one in charge of mapping.

Edit: I'll just add that it would make sense that MSFT has an I/O co-processor for mapping.

Where are the receipts?

Oh I forgot you're not a programmer and have no idea what memory-mapped I/O is. Cerny explained it in the presentation, so I understood it since I have an education in software analysis and design (data structures and algorithms). One co-processor deals with traditional I/O and the other deals with mapping (memory-mapped I/O).

 

Deto

Banned
.... and we have just entered "fan" territory. Seriously though, it was a valid argument. You aren't applying the same logic to both sides, just to whichever side you like.


This logic should not be applied anywhere, because it is stupid.

I thought that was clear.


Probably those who spend all day replying in this topic, insisting on the nonsense they spout, have an agenda and are being paid for it.

Remember that Microsoft is a company with a track record of making FUD:



In 2000, Microsoft settled the lawsuit out-of-court for an undisclosed sum, which in 2009 was revealed to be $280 million.[19][20][21][22]

At around the same time, the leaked internal Microsoft "Halloween documents" stated "OSS [Open Source Software] is long-term credible… [therefore] FUD tactics cannot be used to combat it."[23] Open source software, and the Linux community in particular, are widely perceived as frequent targets of Microsoft's FUD:



If MS did it openly in 2010, why not do it discreetly today?


- talking about "true 4k" for a year to implicitly call the PS4 PRO a Fake and when the time comes the vast majority of their games are "4k fake"

- talking about "fixed clock" when having a fixed clock is stupid and only the xbox will use it, both PS5 and Radeon and Nvidia use variable clocks, only with very different approaches; Radeon and Nvidia vary with temperature; PS5 with electric consumption.

The two examples above are, for me, examples of discreet FUD.

=============


Sony: 8~9 GB/s

Users X, Y, Z: 7 GB/s, the super SSD heats up

There isn't a single post disputing the Xbox SX's 12 TF, but half of this topic is posts contesting absolutely everything Sony has declared.

I hope the guy gets paid, because I guarantee his FUD won't swing sales by even a single PS5 or XSX unit; the only thing it does is make him and the forum look disgusting.
 
Last edited:
First, RAM is 16GB, so even if the console could instantly (and I use the word 'instantly' colloquially) transfer/stream some portion of the SSD, it could be no more than RAM capacity. Second, if a special part of the SSD is faster than the compressed speed, let's say 2x, then that part would be open to contention among the many games in the library: which one gets to stay there, and what happens when you switch games? Third, a partition within an SSD cannot be software-only; for it to work it really needs a hardware difference if there is an even speedier transfer than the compressed value. So overall that "instantaneous" seems a lot like marketing/PR mumbo jumbo that is not real, not even representative.

The CPU can consume data from ANYWHERE, not just RAM. The idea is that the GPU will look to cache, then VRAM, then extended virtual RAM (which is what that partition is to be used for); beyond that it would source a request through the CPU to query the non-100GB portion of the SSD.

What is the speed of accessing that virtual RAM partition? We don't know. We also knew about this before Cerny's breakdown, so this inquisition isn't actually happening because of the PS5.

In fact we knew about this goal in some form back in 2019, in an interview Phil Spencer gave to PC Mag:

"Thanks to their speed, developers can now use the SSD practically as virtual RAM. The SSD access times come close to the memory access times of the current console generation. Of course, the OS must allow developers access that goes beyond that of a pure storage medium. Then we will see how the address space will increase immensely - comparable to the change from Win16 to Win32 or in some cases Win64.


That sounds like a disadvantage to me since that CPU core could be dedicated to other tasks.

One tenth of one core... the CPU is clocked at 3.6 or 3.8 GHz... not much impact (10% of one core out of eight is barely over 1% of total CPU time).
 

Md Ray

Member
Whatever Louise said, I wouldn't take it seriously in any way (especially when he/she is only an XSX/W10 dev and also follows these guys):

Thanks for pointing that out. I always suspected Kirby was a Discord FUD trooper. After seeing that, I don't think I can take anything she says seriously.



Yeah, Louise isn't credible. Before the specs reveal by Mr. Cerny, Louise insisted and wanted people to believe the PS5 was RDNA 1 despite all the clear signs that Sony was using RDNA 2.
 
Last edited:

rntongo

Banned
So one deals with the reading and writing of the information while the other deals with finding it?

I can see how that could take up some CPU power if they didn't have those processors. Heck, they might even help make those things even better.

Roughly, yes. Its biggest advantage is that the game code will be able to fully utilize the SSD speed. If a developer decides to map the files (the bottom I/O co-processor kicks in), it's more efficient and reduces overhead. Otherwise they can simply use normal function calls and the I/O co-processor on top kicks in. I think the PS5 can also use the game install as virtual RAM by using the second I/O co-processor to map the whole game install (maybe up to 100GB) into RAM. On the other hand, I want to see how the XSX does this, because it seems to transfer everything (the file I/O workload) to the Zen 2 core. I think there are benefits to doing this as well and it seems more innovative. I also wonder what DMA controllers the XSX has to achieve this.
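Conceptually, "game install as virtual RAM" would look something like this (my own POSIX-flavored C++ illustration of demand paging; hypothetical, since neither console's actual API is public):

#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

struct GameImage {
    const unsigned char* base;  // start of the mapped install
    std::size_t size;
};

// Map the entire install once; nothing is read from the SSD yet.
GameImage mapInstall(const char* path) {
    int fd = open(path, O_RDONLY);
    struct stat st {};
    fstat(fd, &st);
    void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    return { static_cast<const unsigned char*>(p),
             static_cast<std::size_t>(st.st_size) };
}

// An "asset load" becomes pointer math: the first touch of each page is
// what would actually trigger the SSD read behind the scenes.
const unsigned char* getAsset(const GameImage& g, std::size_t offset) {
    return g.base + offset;
}

On a console the interesting part is who services those page faults: a dedicated co-processor (the PS5 story) or a slice of a CPU core (the XSX story).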
 
Last edited:
I think there are benefits to doing this as well and it seems more innovative

The way Sony designed the I/O complex and the actual design of the SSD itself is also pretty innovative, like having 12 channels instead of the usual 4 and adding additional priority levels.

It just looks to me that Sony did a better job designing their I/O system than Microsoft did.
 
Last edited:

Ar¢tos

Member
True. It also makes sense because, for example, large studios will likely ditch Sony's APIs and use their own for RT. I would expect Rockstar to do something of the sort. And Sony's first-party studios can develop their own as well.
What??
You can't ditch APIs because you don't like them. You have no choice in the matter.
Studios can't create their own APIs for consoles (they can, to a limited extent, make small changes and submit them to be integrated into the official APIs).
You are stuck with DirectX on XSX and GNM/GNMX on PS5.
 

THE:MILKMAN

Member
The way Sony designed the I/O complex and the actual design of the SSD itself is also pretty innovative, like having 12 channels instead of the usual 4 and adding additional priority levels.

It just looks to me that Sony did a better job designing their I/O system than Microsoft did.

Not necessarily better, but different. They basically traded CUs for I/O, SSD and higher GPU clocks. I'll declare which way appears better when I see first party games.
 

rntongo

Banned
The way Sony designed the I/O complex and the actual design of the SSD itself is also pretty innovative, like having 12 channels instead of the usual 4 and adding additional priority levels.

It just looks to me that Sony did a better job designing their I/O system than Microsoft did.
Sorry, I should have been clearer: I meant in terms of offloading all the file I/O to the CPU. It will be interesting to see if the XSX has an I/O co-processor for mapping. If that's also handled by the Zen 2 core, then it's really impressive.
 

rntongo

Banned
What??
You can't ditch APIs because you don't like them. You have no choice in the matter.
Studios can't create their own APIs for consoles (they can, to a limited extent, make small changes and submit them to be integrated into the official APIs).
You are stuck with DirectX on XSX and GNM/GNMX on PS5.
Really? I understand, for example, that devs can bypass the decompression hardware and the file I/O APIs for accessing the hardware, and use their own software for decompression at the expense of CPU overhead. Maybe I should have just used "software" to describe what I was saying. But I think Sony and Microsoft are allowing low-level access to the RT hardware, so devs don't have to use DX12 for RT if they don't want to, for example.
 
Reading the Eurogamer article again, I think Kirby Louise is right:

The form factor is cute, the 2.4GB/s of guaranteed throughput is impressive, but it's the software APIs and custom hardware built into the SoC that deliver what Microsoft believes to be a revolution - a new way of using storage to augment memory (an area where no platform holder will be able to deliver a more traditional generational leap). The idea, in basic terms at least, is pretty straightforward - the game package that sits on storage essentially becomes extended memory, allowing 100GB of game assets stored on the SSD to be instantly accessible by the developer. It's a system that Microsoft calls the Velocity Architecture and the SSD itself is just one part of the system.



I don't think it will use zero. The CPU still needs to tell the I/O which data to read and where to put it in RAM. The I/O handles the actual transfer and decompression by itself.

Yeah, I think it might be a concept worth considering as being the case as well. Think of it this way: if the 100 GB were just functioning the same as the rest of the NAND storage, why even single it out? Why highlight that 100 GB block in particular if it's just acting as expected and providing data throughput through to main system memory? Especially given that's what the entire drive itself would be doing (presumably)?

"Instant" might be referring more to direct calls in the game code to data on that 100 GB as if it were just additional RAM, so no need to specify specific access to the SSD path. Kind of like how older consoles could address ROM cartridges as extensions of RAM (the tradeoff being that data would be read-only, which I'm assuming this 100 GB cluster would be set to as well once the contents desired to be placed there are actually written to the location).

I'm still curious if this specific 100 GB portion is a higher-quality SLC or even MLC NAND cache; IIRC Phison's controllers have been used in at least one SSD that operates in that manner. I'd like to think so for this particular feature if it's being implemented, but I figure we'll see eventually.

Who is Matt?

Someone who perceives themselves as an authority figure even though they tried getting people to ignore the Github leak almost as soon as it came out (yet it turned out to be very pertinent to the consoles in the end). Basically just another poster like the rest of us, so he would need to back up his statements directly to be taken with any further consideration (personally).

Having a dedicated processor for that to free up CPU power seems better to me.

In a lot of ways it does. But I can see this from MS's POV, too. They want their software implementations of DirectStorage (and others) adaptable across a range of devices within the PC, laptop, tablet etc. space. And server markets, too. So with that consideration, which approach seems more easily feasible?

A: Design a hardware block to completely handle the task, then deploy it in new CPUs, APUs, PCIe cards etc., requiring hardware upgrades that can eat up millions in costs for things like the server market, or..

B: Find a way to manage that on the CPU with as few resources as possible (1/10th of a single core in XSX's case), design it for deployment onto existing CPUs, APUs, servers etc. so it scales its performance based on the hardware they already have, while also being fluid enough to ensure future-proofing as time goes on?

Seeing the kind of company MS is, I can see why they chose the latter approach. The former approach isn't inherently better, neither is the latter. They just have their strengths for their specific purposes.
 
Last edited:
The way Sony designed the I/O complex and the actual design of the SSD itself is also pretty innovative, like having 12 channels instead of the usual 4 and adding additional priority levels.

It just looks to me that Sony did a better job designing their I/O system than Microsoft did.

Maybe. The PCIe 4.0 spec only calls for 4 lanes and raw throughput of over 7 GB/s. So in the near future PC users and others can get raw and compressed speeds exceeding the PS5's.
 

Just seems like Microsoft is trying to use software to replace what dedicated hardware can do for the I/O system. This is something that's probably really useful in PC gaming, since PC I/O systems usually don't contain specialized processors.

Whether or not it's a better option for a console is still a topic worth debating over.
 
Last edited:

Ar¢tos

Member
Really? I understand, for example, that devs can bypass the decompression hardware and the file I/O APIs for accessing the hardware, and use their own software for decompression at the expense of CPU overhead. Maybe I should have just used "software" to describe what I was saying. But I think Sony and Microsoft are allowing low-level access to the RT hardware, so devs don't have to use DX12 for RT if they don't want to, for example.
I'm talking about graphics APIs; I don't know about SSD instructions.
Regardless of low level or high level, an API is always necessary to communicate with the hardware.
DirectX is already a low-level API.
 
"Instant" might be referring more to direct calls in the game code to data on that 100 GB as if it were just additional RAM, so no need to specify specific access to the SSD path. Kind of like how older consoles could address ROM cartridges as extensions of RAM (the tradeoff being that data would be read-only, which I'm assuming this 100 GB cluster would be set to as well once the contents desired to be placed there are actually written to the location).

Is it likely that this 100gig block, or whatever, will always be where whatever game you are playing at the time is loading/streaming from? With the exception of games on external USB storage, of course.
 
Last edited:

Deto

Banned
Whatever Louise said, I wouldn't take it seriously in any way (especially when he/she is only an XSX/W10 dev and also follows these guys):



MisterXmedia

LOL

This one must be astroturfing by MS.

Almost certainly they already knew that developers would praise the PS5, and they needed people defending the Xbox SX.
 
Maybe. The PCIe 4.0 spec only calls for 4 lanes and raw throughput of over 7 GB/s. So in the near future PC users and others can get raw and compressed speeds exceeding the PS5's.

Nah, PCIe 4.0 can easily support more than 4 lanes. NVMe is the current limitation; I believe it only supports 4 PCIe 4.0 lanes (or 8 PCIe 3.0 lanes). Need to check into that.

Future revisions to NVMe should be coming pretty soon.
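For what it's worth, the raw numbers line up: PCIe 4.0 runs at 16 GT/s per lane, which after 128b/130b encoding works out to roughly 1.97 GB/s per lane, so an x4 NVMe drive tops out around 7.9 GB/s raw. That's where the "over 7 GB/s" ceiling comes from; more lanes or PCIe 5.0 would raise it.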

Is it likely that this 100gig block, or whatever, will always be where whatever game you are playing at the time is loading/streaming from? With the exception of games on external USB storage, of course.

Seems likely, especially if the entire game's contents can fit within that 100 GB space. Hopefully what MS and Sony have said about game sizes shrinking next-gen turns out true (and devs don't push up the size of their assets even further to essentially negate that :LOL: ).
 

THE:MILKMAN

Member
Someone who perceives themselves as an authority figure even though they tried getting people to ignore the Github leak almost as soon as it came out (yet it turned out to be very pertinent to the consoles in the end). Basically just another poster like the rest of us, so he would need to back up his statements directly to be taken with any further consideration (personally).

That's unfair to be honest. Matt can be, and was, grumpy with me before but he clearly is in the know/a dev. He was the first to give hints here about next-gen 3 years ago.

All I remember him stating about the Github leak is that it was out of context. Which was correct.
 

Deto

Banned
I find this kind of selective...

She also follows;
Nintendo of America
Sega
Elon Musk
Reggie
Digital Foundry
nVidia Geforce
AMD
AMD PC
RedGamingTech
Kotaku
Chris Grannell
Intel
Unreal Engine
And most importantly; PLAYSTATION

I tend to try and judge what someone said directly, not necessarily who that person is friends with, who they follow, who they had dinner with last Tuesday or who the cousin of their father's friend's neighbor's daughter is.
To take myself as an example. I follow Christina Hoff Sommers. Some people might think that makes me a feminist. At the same time I follow Stefan Molyneux. Some people might think I'm a misogynist. That would be nothing more than confirmation bias, and in reality does not prove anything.


MisterXmedia

the lunatic who INVENTED the GPU hidden in the Xbox One?
Years back, that shitbag MisterXmedia said the Xbox One HAD RT.

Do not insult my intelligence.
 
Maybe. The PCIe 4.0 spec only calls for 4 lanes and raw throughput of over 7 GB/s. So in the near future PC users and others can get raw and compressed speeds exceeding the PS5's.

PCs are not fixed unless you buy one of those laptops that can't be upgraded.

But consoles are fixed hardware, and whatever design decisions these companies make will stick with them for the rest of the generation.

Will PCs surpass the PS5s I/O system in the future?

Most definitely yes.

Will the XSX do the same?

Signs point to no.
 
I think that was my issue with the whole discussion.

I was looking at it from a PS5 vs XSX point of view. However, with Microsoft's operating system I should have looked at it from a PS5 vs Windows (including Xbox) point of view.

However consoles are not the same as PCs so certain designs will be better for them.

True, they aren't. But at the same time... they kinda are :LOL: . They're using PC architectures and technologies these days. The times of CPU architectures and GPUs that you'd only find in one specific gaming console have been over since the 8th gen started.

I still admire what MS and Sony are doing in terms of their designs, and from a design POV this is easily a more exciting gen than PS4 and XBO, as the spirit of those older systems seems to be alive here.

But at the end of the day, they're still heavily leveraging PC architectures, features, and technological standards. So by and large what works well for PC will work well for the consoles, too. That's how similar they are these days.
 

rntongo

Banned
I'm talking about graphics APIs; I don't know about SSD instructions.
Regardless of low level or high level, an API is always necessary to communicate with the hardware.
DirectX is already a low-level API.
Devs don't have to use DX12 for RT if they don't want to!! Or whatever API Sony will have. They can get low-level access to the hardware through the IDE and dev tools provided by the companies.
 

rntongo

Banned
Where are the receipts?




I was trying to ignore you, but I see you keep bringing this up. It's trolling at this point. He thought the exchange was pedantic. I agree it was; I was not offended, but thank you for being more offended on my behalf. In any case, my points are still valid to this day.
 
True, they aren't. But at the same time... they kinda are :LOL: . They're using PC architectures and technologies these days. The times of CPU architectures and GPUs that you'd only find in one specific gaming console have been over since the 8th gen started.

I still admire what MS and Sony are doing in terms of their designs, and from a design POV this is easily a more exciting gen than PS4 and XBO, as the spirit of those older systems seems to be alive here.

But at the end of the day, they're still heavily leveraging PC architectures, features, and technological standards. So by and large what works well for PC will work well for the consoles, too. That's how similar they are these days.

I understand where you're going, but Sony isn't going to design a console based on an operating system that they don't own. Basically, while Sony releases games on PC, they are not going to worry about improving the platform. Microsoft, on the other hand, owns DirectX and Windows, so it's pretty obvious why they need to worry about it.
 

jimbojim

Banned
Who is Matt?

Someone who perceives themselves as an authority figure even though they tried getting people to ignore the Github leak almost as soon as it came out (yet it turned out to be very pertinent to the consoles in the end). Basically just another poster like the rest of us, so he would need to back up his statements directly to be taken with any further consideration (personally).

He has a pretty pristine track record. He doesn't share LOTS of info, but the info he does share is always accurate from what I can remember. Yes, regarding Github, he was right. He was here till 2017.


I was trying to ignore you, but I see you keep bringing this up. It's trolling at this point. He thought the exchange was pedantic. I agree it was; I was not offended, but thank you for being more offended on my behalf. In any case, my points are still valid to this day.

Maybe you should ignore everybody. I think when the time comes (in the next 7 months), you will curl your tail. Lots of your posts will be quoted for a good reason.
 

Ar¢tos

Member
Devs don't have to use DX12 for RT if they don't want to!! Or whatever API Sony will have. They can get low-level access to the hardware through the IDE and dev tools provided by the companies.
You can't use a GPU without an API; you can't program directly to the hardware. You always need an API (or more than one!) to communicate with the hardware driver, which then communicates with the hardware.
[Diagram: Direct3D abstraction layers]
 
That's unfair to be honest. Matt can be, and was, grumpy with me before but he clearly is in the know/a dev. He was the first to give hints here about next-gen 3 years ago.

All I remember him stating about the Github leak is that it was out of context. Which was correct.

It was the tone in which he implied it that I take issue with, not to mention the timing in which he mentioned it. He put it out there in a way that gave others carte blanche to insult or ignore anyone even partially referring to the Github leak and the testing data.

I would not say it was "out of context"; the context is absolutely there. Oberon is PS5's chip, Arden is XSX's chip. There are no other chips fitting the profiles of the respective systems. The better thing would be to say that the full context wasn't provided in that leak and the testing data, which is correct. But at the time he said to disregard it, "out of context" and "not full context" were and still are two completely different things; I would think he was aware enough to know this, but he still phrased it the way he did anyway.

It didn't help that a lot of other insiders went along with this, doubling down on dismissing the Github leak and the testing data. It kind of helped create this two-factions/warring divide among speculators that was unnecessary, and we're still dealing with the results of that IMHO.

I understand where you're going, but Sony isn't going to design a console based on an operating system that they don't own. Basically, while Sony releases games on PC, they are not going to worry about improving the platform. Microsoft, on the other hand, owns DirectX and Windows, so it's pretty obvious why they need to worry about it.

Yes, both of those are true. The thing, though, is that DirectX and Windows are very scalable pieces of software; they're designed for a range of platforms and, more importantly, are tailored to the platforms they are deployed on. I don't think there's a case of MS "spreading themselves thin" by adapting DirectX and Windows to the Xbox platform.

In fact it's smarter to leverage your assets across a range of products. Sony generally does the same when it comes to hardware, leveraging product technologies from their other divisions for PS systems. It doesn't make those technologies less suitable simply because they weren't inherently designed with PS consoles in mind.

jimbojim No, he was only partially right about Github. The data there was still pertinent; you just needed to piece a lot of it together yourself to see how. But some people managed to do so, and a lot of that leak data still checks out. You just had to account for changes along the timeline of system development, tertiary corporate circumstances (such as planned launches, delays, etc.), certain industry reports, etc.

Again, if Matt had elaborated on what he meant by "disregard it" when it came to Github and the leaked data, it would've cleared a lot of air. But he didn't. This has nothing to do with him being a dev or having been right about stuff previously; in the context of PS5/XSX and the Github testing data, he put a message out there implying the entirety of it was wrong when we now know that at some point the entirety of it was correct, and natural development progressed to where a few things changed, even up to the point of some CUs getting disabled or clocks being pushed up (both of which already had historical precedent, and with the same companies no less, so was that even really a surprise?).
 

Panajev2001a

GAF's Pleasant Genius
Oh I forgot you're not a programmer and have no idea what memory-mapped I/O is. Cerny explained it in the presentation, so I understood it since I have an education in software analysis and design (data structures and algorithms). One co-processor deals with traditional I/O and the other deals with mapping (memory-mapped I/O).

Did I say I was not a programmer? Sure, I have an ECE degree (and I am familiar with the concepts of memory-mapped I/O, DMA, etc., and can also read the exact, not at all sibylline, quote from Cerny's speech, thank you), but please proceed with assuming who I am and what I do... as if you were working on the console devkit.
 

rntongo

Banned
You can't use a GPU without an API; you can't program directly to the hardware. You always need an API (or more than one!) to communicate with the hardware driver, which then communicates with the hardware.
[Diagram: Direct3D abstraction layers]
My argument was that you don't always need to use the API provided. For example, you can create your own library for RT and ditch the RT-related API provided by Sony or MSFT. That was my argument.
 

Panajev2001a

GAF's Pleasant Genius
Sorry, I should have been clearer: I meant in terms of offloading all the file I/O to the CPU. It will be interesting to see if the XSX has an I/O co-processor for mapping. If that's also handled by the Zen 2 core, then it's really impressive.

It would be impressive, but it would take a non-trivial amount of CPU cycles away, eating into the CPU frequency gap they have with the PS5's CPU... I am not sure Sony's designers added a custom co-processor to accelerate file I/O for no reason, but then again they move a lot more data per second, so you may be right in that sense.
 
Last edited:

Ar¢tos

Member
My argument was that you don't always need to use the API provided. For example, you can create your own library for RT and ditch the RT-related API provided by Sony or MSFT. That was my argument.
That would NEVER pass Sony or MS certification. Letting devs create their own low-level hardware access is an instant recipe for disaster. All it takes is one dev making an error and suddenly you have a jailbroken console and rampant piracy.
 

jimbojim

Banned

jimbojim No, he was only partially right about Github. The data there was still pertinent; you just needed to piece a lot of it together yourself to see how. But some people managed to do so, and a lot of that leak data still checks out. You just had to account for changes along the timeline of system development, tertiary corporate circumstances (such as planned launches, delays, etc.), certain industry reports, etc.

Again, if Matt had elaborated on what he meant by "disregard it" when it came to Github and the leaked data, it would've cleared a lot of air. But he didn't. This has nothing to do with him being a dev or having been right about stuff previously; in the context of PS5/XSX and the Github testing data, he put a message out there implying the entirety of it was wrong when we now know that at some point the entirety of it was correct, and natural development progressed to where a few things changed, even up to the point of some CUs getting disabled or clocks being pushed up (both of which already had historical precedent, and with the same companies no less, so was that even really a surprise?).

Like I've said, he doesn't talk too much. Regarding Github, he was right. He just said something like "glad we moved away from the github discussion like it was confirmation of anything". Before the github leak (around 4 weeks ago) he implied that there would be around a 15% difference between the two, also saying it depended on other factors. Of course, he was on the XSX side. Well, that was enough:

I would consider a ~15% difference or less to be very close, depending on other factors of course.
A 15% difference in Anaconda's favor? That literally means a 12 TF Anaconda and a 10.2 TF PS5.

 
Last edited:

THE:MILKMAN

Member
It was the tone in which he implied it that I take issue with, not to mention the timing in which he mentioned it. He put it out there in a way that gave others carte blanche to insult or ignore anyone even partially referring to the Github leak and the testing data.

I would not say it was "out of context"; the context is absolutely there. Oberon is PS5's chip, Arden is XSX's chip. There are no other chips fitting the profiles of the respective systems. The better thing would be to say that the full context wasn't provided in that leak and the testing data, which is correct. But at the time he said to disregard it, "out of context" and "not full context" were and still are two completely different things; I would think he was aware enough to know this, but he still phrased it the way he did anyway.

It didn't help that a lot of other insiders went along with this, doubling down on dismissing the Github leak and the testing data. It kind of helped create this two-factions/warring divide among speculators that was unnecessary, and we're still dealing with the results of that IMHO.

Like I said, he was a grump! Like a teacher with a class full of dumb-asses, he probably often thought! As for the Github thing, all I'll say is it wasn't the full details, is all. As for others dismissing Github after Matt's comments, all I can say is that mods/insiders are always put on a pedestal. That really isn't his fault. We can all read his posts and decide for ourselves what to believe, e.g. he also said the XSX is 18% ahead in compute, and the various numbers given for both so far bear that out in practice.
 

rntongo

Banned
Did I say I was not a programmer? Sure, I have an ECE degree (and I am familiar with the concepts of memory-mapped I/O, DMA, etc., and can also read the exact, not at all sibylline, quote from Cerny's speech, thank you), but please proceed with assuming who I am and what I do... as if you were working on the console devkit.

Okay, so you were trolling when you asked me for the devkit docs? Don't play games like that with me, dude.
 