
Digital Foundry's John: Been talking to developers, people will be pleasantly surprised with PS5's results

geordiemp

Member




"Compatibility mode", "emulates", and " not coded for RDNA 2", it literally can't get any clearer then that.

while Series X runs old games with full clocks, every compute unit and the full 12 teraflop of compute, it does so in compatibility mode - you aren't getting the considerable architectural performance boosts offered by the RDNA 2 architecture.

I am saying the same thing, but with proper details and not glossing over it.

What are the considerable architectural performance boosts offered? VRS, mesh shaders and SFS - what else? I said that. These need to be programmed.

Fafalada also mentioned this: most architectural improvements are not turned on and off by game code, it's API-driven.

The features he is talking about - VRS, mesh shaders and SFS - are things you need to program for or recompile.
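
(For anyone wondering what "need to be programmed" looks like in practice, here's a minimal sketch of opting into VRS per draw through the D3D12 API - assuming a device that reports Tier 1 variable-rate shading support. An old title compiled against the previous SDK never makes this call, so it never benefits.)

```cpp
// Minimal sketch: VRS is opt-in, per draw, through the API.
// Assumes the device reports D3D12_VARIABLE_SHADING_RATE_TIER_1 and the
// command list can be queried as ID3D12GraphicsCommandList5.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void DrawWithCoarseShading(ID3D12GraphicsCommandList* cmdList)
{
    ComPtr<ID3D12GraphicsCommandList5> vrsList;
    if (SUCCEEDED(cmdList->QueryInterface(IID_PPV_ARGS(&vrsList))))
    {
        // Shade once per 2x2 pixel block for this draw (no per-primitive or
        // screen-space image combiners in this sketch).
        vrsList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
        vrsList->DrawInstanced(3, 1, 0, 0);
        // Restore full-rate shading for subsequent draws.
        vrsList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    }
    else
    {
        cmdList->DrawInstanced(3, 1, 0, 0); // No VRS support: plain draw.
    }
}
```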

 
Last edited:
Also, the spec's I cited back in 2017 are 100% accurate particularly when you factor in that I added a Teraflop to both devices due to CPU performance - considering if you remove that 1 teraflop of CPU performance I calculated in my predictions - they 100% accurately represent the pure GPU performance cited by both manufacturer's.

So my assertion/prediction's for the industry back then were and remain 100% accurate.

You can say "but those aren't the number's cited for GPU Teraflop performance - which is all either manufacturer has divulged" but all you have to do is remove exactly - precisely - 1 Teraflop - and they are exactly the GPU Teraflop Performance Specifications - without the adage of 1 Teraflop of CPU performance per console that I also factored in.
 
Last edited:

martino

Member
I am saying the same thing, but with proper details and not glossing over it.

What are the considerable architectural performance boosts offered? VRS, mesh shaders and SFS - what else? I said that. These need to be programmed.

Fafalada also mentioned this: most architectural improvements are not turned on and off by game code, it's API-driven.

The features he is talking about - VRS, mesh shaders and SFS - are things you need to program for or recompile.

a blatant case of misinterpreting what is said in the DF video because it's too simplified/not precise enough imo (not on your end)
 
Last edited:

Zathalus

Member
while Series X runs old games with full clocks, every compute unit and the full 12 teraflop of compute, it does so in compatibility mode - you aren't getting the considerable architectural performance boosts offered by the RDNA 2 architecture.

What are the considerable architectural performance boosts offered? VRS, mesh shaders and SFS - what else? I said that.

Fafalada also mentioned this: most architectural improvements are not turned on and off by game code, it's API-driven.

The features he is talking about - VRS, mesh shaders and SFS - are things you need to program for or recompile.

Because the GCN architecture is being emulated in a compatibility layer. You do know what emulators are, right, and how they have performance overheads?

I'm honestly baffled here; Microsoft has communicated how they are performing backwards compatibility, but you refuse to believe it.

It's the exact same type of FUD as from those who didn't believe how PS5's variable clocks worked and were convinced Sony was lying about the 10.28 TFLOP figure.
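
(To illustrate the overhead point with a toy sketch - this is not how Microsoft's actual back-compat layer is implemented, just the general shape of any translation layer: every legacy call pays a remapping cost, and nothing in the old title ever asks for the new hardware's features.)

```cpp
// Toy sketch only - NOT Microsoft's actual BC implementation. The point is
// simply that legacy calls go through a translation step before reaching the
// native path, and never request features the old title wasn't built against.
#include <cstdint>
#include <unordered_map>

struct NativeGpu {
    void bindTexture(uint64_t addr) { (void)addr; /* native path */ }
    void draw(uint32_t verts)       { (void)verts; /* native path */ }
};

struct CompatLayer {
    NativeGpu gpu;
    std::unordered_map<uint32_t, uint64_t> handleRemap; // old handle -> new address

    // Entry points an old title calls, unchanged from the old console.
    void bindTextureLegacy(uint32_t oldHandle) {
        gpu.bindTexture(handleRemap[oldHandle]);   // per-call lookup = overhead
    }
    void drawLegacy(uint32_t verts) {
        gpu.draw(verts);                           // old-style draw; no VRS/mesh shaders/SFS
    }
};
```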
 

geordiemp

Member
a blatant case of misinterpreting what is said in the DF video because it's too simplified/not precise enough imo (not on your end)

OK, let's leave it at that; the MS statement to Eurogamer is purposefully so vague you cannot interpret much that's specific from it.

We have had articles on the benefits of an abstraction-layer API, but now it's the reason for the shortfall, and that's OK.

Let's come back to this when we get more examples, as obviously Sekiro and Hitman 3 (which is a DX12 game) are not enough for some.

Let's see if the next games that come out are not RDNA 2 optimised either; maybe we can use the same reason if a PS5 game is not performing. Let's see...
 
Last edited:

Justin9mm

Member
I have a feeling that right now the performance gap between third party games on Series X and PS5 will be a lot closer than the One X to Pro gap.

In a few years or so, when developers have ironed out the kinks and become efficient with what they can push out, the performance gap will become even wider. But by then I think Sony is going to murder the Series X with their PS5 Pro. I have this feeling it's going to leave the Series X in the dust.

Will have to see what Xbox will do at that time.
 

PaintTinJr

Member
It is an RDNA 2 thing... the intersection units are part of the TMUs, so they share the same silicon... if you use them for one task you can't use them for another at the same time.

It is like async compute... if you are using a CU to render graphics you can't use it for async compute... only the unused CUs can be used for async compute.
That makes sense, and maybe it's the final piece of the puzzle I was missing for what PlayStation are really angling at for the future.

In Bo_Hazem's interesting pixel counting thread, he linked this Sony Atom View video, and by all accounts this tech is the basis of UE5's Nanite - and will even be coming to Unity (Halo's renamed middleware, if I'm not mistaken) - but the broader point is that the textures are largely gone, because it is just the colours of the Atom View atoms (4 polys per pixel in the final dataset) that were crunched down from 64,800 scans (one per quaternion of 360 (deg) positions around the y-axis, while doing 180 (deg) positions (per 360) scanning around the x-axis).



So despite AMD calling the feature async compute, it seems that between the I/O complex data streaming and async compute largely replacing the need for the bulk of draw calls and the conventional rendering pipeline, async will really become the normal graphics compute - and it will be the conventional pipeline graphics that eventually uses the asynchronous holes in a sea of graphics compute.

So if the use of BVH blocks texturing on PS5's solution too - in the same way texturing blocks BVH/RT acceleration use on RDNA 2 - but Atom View/Nanite will be used in all major engines on PS5 (and probably XSX/PC), then it doesn't matter that this is a limitation of RDNA 2. It almost seems like it was planned that way by PlayStation, and that the ACE count comparison is where the two console APUs will really be compared in their power (IMHO), because the bulk of rendering will be async + BVH/RT, not conventional pipeline (without texturing) + BVH.
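
(For reference, a minimal sketch of what "async compute" means at the API level - generic D3D12 here, not PS5's own SDK or any particular engine: a second, compute-only queue whose work the GPU can schedule into gaps left by the graphics queue.)

```cpp
// Minimal D3D12 sketch of async compute: a dedicated compute queue running
// alongside the graphics queue, so compute work can fill idle CU time.
// Generic API usage only - not PS5's libraries or any specific engine.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute-only queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

// Usage (sketch): record compute dispatches on their own command list, submit
// them on this queue, and synchronise against the graphics queue with a fence
// (ID3D12CommandQueue::Signal / Wait) only where the two streams actually
// share resources. Anything independent is free to overlap.
```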
 
Last edited:

martino

Member
OK, let's leave it at that; the MS statement to Eurogamer is purposefully so vague you cannot interpret much that's specific from it.

We have had articles on the benefits of an abstraction-layer API, but now it's the reason for the shortfall, and that's OK.

Let's come back to this when we get more examples, as obviously Sekiro and Hitman 3 (which is a DX12 game) are not enough for some.

Let's see if the next games that come out are not RDNA 2 optimised either; maybe we can use the same reason if a PS5 game is not performing. Let's see...
imo we'll see a good bunch of games not using them in the first year.
But will it be 1:1 between platforms? Depends on how the ports are done on each one.
All scenarios are possible, on paper at least.
In practice they will probably look for some sort of parity.
 

yurinka

Member
What is your source/proof that MS has ps5 dev kits?
Dev kits are sent to studios, not mother companies, and under strict NDAs.
The only MS studio likely to be able to even request a Devkit would be Mojang, but so far they have said nothing about a ps5 version of Minecraft.

The opposite is more likely to be true, since Sony San Diego is confirmed to be making MLB for XSS/XSX too.
Microsoft's Zenimax is now working on 2 PS5 timed console exclusives, Deathloop and Ghostwire Tokyo.
 
Last edited:
Not surprised by John's comments. He seems completely unwilling to believe that maybe, just maybe, the Xbox Series X is simply the far better designed console in every aspect. He's complained all year about not being a fan of Microsoft being cross-gen for about two years, if that, but Sony announced a few games being cross-gen and, outside of a single tweet, I haven't seen, read or heard him complain continuously about it like he's done with Xbox. Like, how does that work? If they're both doing the same shit and Sony seems to be doing cross-gen twice as long according to Jim "dance moves" Ryan, shouldn't he realistically be complaining about Sony's cross-gen approach twice as much?

Love it when I see people say this developer said this or that, or even when these developers tweet. They're making games for both of these consoles at the request of the publishers signing their paychecks. Like, WTF else are they going to say?

Microsoft sent out their Xbox Series X prototypes to gaming media and YouTubers, as well as Digital Foundry, on September 23rd if I remember correctly, which was 7 weeks before launch, yet I don't see Sony doing that shit yet. When will they do that? November 11th? SMH.

When the head to head game comparisons happen in November and the two consoles get compared, I really hope it's any of the other four guys that do the comparing because John just doesn't seem capable of being impartial, fair and critical of Sony and PlayStation 5 if/when needed.
where are you getting this idea that Sony is doing cross gen for twice as long as Microsoft?
 

Clear

CliffyB's Cock Holster
Teraflop's are not a rough measurement of performance - a 12 teraflop card will alway's be able to utilize 12 teraflop's of performance barring an existential error in hardware that in fact render's that part useless. Whereas a 12 Teraflop card with superior architecture will allow you to in fact improve Teraflop performance radically. Period.

No, you're wrong. A single floating point operation simply combines 2 inputs to get an output. That's it. Drawing the same triangle a million times is not the same as drawing a million different triangles, because the source data needs to come from somewhere; it's not just sitting there pre-loaded in the registers.

Actual real-world workloads are far more complex and it matters less how fast you can process than how well you can avoid downtime waiting for fresh data to arrive for processing. Occupancy in short.
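
A quick back-of-envelope makes the point, taking the publicly quoted Series X figures (~12.15 TFLOPS peak, 560 GB/s on the fast memory pool) - the exact numbers matter less than the ratio:

```cpp
// Back-of-envelope occupancy math, using the publicly quoted Series X figures
// (~12.15 TFLOPS FP32 peak, 560 GB/s on the fast memory pool).
#include <cstdio>

int main()
{
    const double peak_flops     = 12.15e12;  // FP32 ops per second
    const double peak_bandwidth = 560.0e9;   // bytes per second

    // FLOPs you must perform per byte fetched just to keep the ALUs busy.
    const double flops_per_byte = peak_flops / peak_bandwidth;
    std::printf("Need ~%.0f FP32 ops per byte read to stay compute-bound\n",
                flops_per_byte);             // ~22 - streaming-heavy work won't get close
    return 0;
}
```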

And I'm not sure how exactly EXACTLY announcing the console spec's Accurately for both Xbox Series X and Playstation 5 - 2 whole years before they were announced -with the inclusion of also factoring in the CPU teraflop performance on top of GPU performance - is a fanfic thing unless... you're a hardcore digital foundry fan - *cringe* but that's your prerogative I suppose.

And this was written 2 years before gamepass was public. Include a mention of gamepass and cite the acquisition (not creation) of new IP's and I'd say it's pretty accurate

No. Hearing rumors about projected specs does not make you Cassandra. The writer of that piece by his own (inflated) claims is an artist. Not a coder. Not a producer. Not a corporate bigwig responsible for strategic planning of global brands.

Once again, stop fetishizing Tf counts. Especially when consoles are closed systems where things like memory layout and bus-speeds/propensity for congestion can vary drastically compared to the PC status quo.
 

Redlancet

Banned
Conspiracy? It's been pretty well known since the PS3/Xbox 360 days: a slight improvement in performance or resolution would be talked about very favourably towards the 360, while when the reverse situation arrived, with the PS4 being stronger than the Xbox One by a much larger margin, it was swept a bit under the rug.

I mean, they are big fans of the PC, and MS invited them to unveil the Xbox One X - you don't think they have a very good working relationship with MS? Leadbetter has always been a fan of them and kinda holds his biases back a bit, which can't be said about the German guy. The most genuine and unbiased person there is John imo, but if we are talking about DF being Richard's baby, then yeah, it's pretty biased towards MS.
That and the Discord leaks.
 
D

Deleted member 471617

Unconfirmed Member
where are you getting this idea that Sony is doing cross gen for twice as long as Microsoft?

Jim Ryan stated it after their last PS5 showcase. He said they will support PS4 for up to 4 more years. Third parties will support the older console for a few years because they're multi-platform but Sony supporting it is funny since Ryan kept repeating that he believes in generations for over a year. Of course, when the company is losing a lot of money on PS5 and overpaying for timed exclusive games which makes absolutely zero sense, his bosses need them to make up that money for investors and shareholders. Can't just throw away 110m+ user install base.
 
Jim Ryan stated it after their last PS5 showcase. He said they will support PS4 for up to 4 more years. Third parties will support the older console for a few years because they're multi-platform but Sony supporting it is funny since Ryan kept repeating that he believes in generations for over a year. Of course, when the company is losing a lot of money on PS5 and overpaying for timed exclusive games which makes absolutely zero sense, his bosses need them to make up that money for investors and shareholders. Can't just throw away 110m+ user install base.
I think that statement was in regards to manufacturing. Sony supported the PS3 until like 2017, but how many first party games released for it after 2013? Same with the PS2 and PS1, Sony supported them for several years after their successors released before discontinuation. Supporting a console means more than just releasing 1st party software.
 
D

Deleted member 471617

Unconfirmed Member
I think that statement was in regards to manufacturing. Sony supported the PS3 until like 2017, but how many first party games released for it after 2013? Same with the PS2 and PS1, Sony supported them for several years after their successors released before discontinuation. Supporting a console means more than just releasing 1st party software.

Then Ryan should be more clear. Horizon 2 will most likely be 2022, which, if it is, would already surpass Microsoft's cross-gen limit, which ends once 2021 ends. Just have to wait and see. Either way, Ryan needs to be transparent and clear instead of speaking "in general".
 
Then Ryan should be more clear. Horizon 2 will most likely be 2022, which, if it is, would already surpass Microsoft's cross-gen limit, which ends once 2021 ends. Just have to wait and see. Either way, Ryan needs to be transparent and clear instead of speaking "in general".
Wouldn't Microsoft's cross gen plan finish at the end of 2022? I thought it was two years: Late 2020 to late 2022.
 

Sushen

Member
Enough of these stories that came from someone who heard it from someone else. Show me real deals or just shut up.
 
D

Deleted member 471617

Unconfirmed Member
Wouldn't Microsoft's cross gen plan finish at the end of 2022? I thought it was two years: Late 2020 to late 2022.

As far as I know, if Halo Infinite launches in late 2021 (which is what I'm expecting), it will be the last cross-gen game from Microsoft's internal studios. Everything else they've announced will be next-gen only. Halo Infinite, Gears Tactics and, if it can run on Xbox One, Flight Simulator would be the exceptions. The only other cross-gen games would be from Global Publishing, which would consist of those deals with external development studios.
 
I am saying the same thing, but with proper details and not glossing over it.

What are the considerable architectural performance boosts offered? VRS, mesh shaders and SFS - what else? I said that. These need to be programmed.

Fafalada also mentioned this: most architectural improvements are not turned on and off by game code, it's API-driven.

The features he is talking about - VRS, mesh shaders and SFS - are things you need to program for or recompile.


Someone should tell Fafalada that Blast Processing was a real thing; it just required knowing the very exact timings of DMA to memory to ensure there wasn't sprite corruption. There's a whole slew of demos around now which show it off. SEGA didn't necessarily ban devs from using it back in the day, but it was very difficult to figure the timings out, so very few had the time or resources to bother. AFAIK, though, Sonic may have been one of the few games that did utilize it, at least in part, for per-pixel DMA in select portions of the game(s).

Regarding XSX, he's still wrong throughout that post, though, because the unenhanced BC titles are still operating under the logic of the old XBO and One X CPU, I/O routines, GPU logic, etc. Not to mention the games themselves would have been programmed with engines and systems operating within the constraints of those older hardware specifications.

It's like that recent thing on Twitter with the girl who loaded Crysis 3 into her VRAM and saw no benefit; it doesn't matter if you give an old game new hardware, because if the old game doesn't understand the new hardware's "language", then it won't know how to utilize said new hardware.

Don't know why this is a crazy concept to understand...


He's not even the first person to bring this up, or even Richard to my knowledge. There were already rumors going back to late last year that Series X devkits were "running behind"; heck, I think even Schreier was saying this closer to summer of 2019.

It's only been a bit more recently that we've learned the reason was the scope of transitioning from XDK to GDK, which is also what was causing some issues with generating Series S profiles for game code. I'd assume those issues were all resolved a while ago, but "a while ago" could still have been somewhat recent, maybe into the late summer.

It'd seemingly be one of the few rumors the insiders got right, too, so that's something to keep in mind.
 
Last edited:

ethomaz

Banned
That makes sense, and maybe it's the final piece of the puzzle I was missing for what PlayStation are really angling at for the future.

In Bo_Hazem's interesting pixel counting thread, he linked this Sony Atom View video, and by all accounts this tech is the basis of UE5's Nanite - and will even be coming to Unity (Halo's renamed middleware, if I'm not mistaken) - but the broader point is that the textures are largely gone, because it is just the colours of the Atom View atoms (4 polys per pixel in the final dataset) that were crunched down from 64,800 scans (one per quaternion of 360 (deg) positions around the y-axis, while doing 180 (deg) positions (per 360) scanning around the x-axis).



So despite AMD calling the feature async compute, it seems that between the I/O complex data streaming and async compute largely replacing the need for the bulk of draw calls and the conventional rendering pipeline, async will really become the normal graphics compute - and it will be the conventional pipeline graphics that eventually uses the asynchronous holes in a sea of graphics compute.

So if the use of BVH blocks texturing on PS5's solution too - in the same way texturing blocks BVH/RT acceleration use on RDNA 2 - but Atom View/Nanite will be used in all major engines on PS5 (and probably XSX/PC), then it doesn't matter that this is a limitation of RDNA 2. It almost seems like it was planned that way by PlayStation, and that the ACE count comparison is where the two console APUs will really be compared in their power (IMHO), because the bulk of rendering will be async + BVH/RT, not conventional pipeline (without texturing) + BVH.

I'm still intrigued by how Nanite does its stuff... it will be interesting to see how they did it when the source code goes open next year.

Yeah, the name Async Compute indeed looks weird... what AMD did is add specific schedulers to use the idle CUs for something else, but it is not really async and doesn't make the hardware more powerful... it is still limited to the number of processing units.

I will say async compute makes the use of the hardware more efficient.
 

Lysandros

Member
I'm still intrigued by how Nanite does its stuff... it will be interesting to see how they did it when the source code goes open next year.

Yeah, the name Async Compute indeed looks weird... what AMD did is add specific schedulers to use the idle CUs for something else, but it is not really async and doesn't make the hardware more powerful... it is still limited to the number of processing units.

I will say async compute makes the use of the hardware more efficient.
PS4/PS4 Pro had 8 ACEs (compared to the Xbox One's 2). RDNA has 4. I am very curious about their number in PS5. I am guessing 8, mainly because of backward compatibility, but who knows...
 

LordOfChaos

Member
PS4/PS4 Pro had 8 ACEs (compared to the Xbox One's 2). RDNA has 4. I am very curious about their number in PS5. I am guessing 8, mainly because of backward compatibility, but who knows...

IIRC, with 3rd and 4th gen GCN, and continuing onto RDNA, the HWS was able to virtualize the work of the ACEs, and the HWS being dual-threaded meant 4 could replicate 8 ACEs.

I'd be curious if Sony again customized this for more compute, but 4 HWS SHOULD be able to do what 8 ACEs did - plus a vastly higher clock speed, again.
 
Last edited:

Lysandros

Member
IIRC, with 3rd and 4th gen GCN, and continuing onto RDNA, the HWS was able to virtualize the work of the ACEs, and the HWS being dual-threaded meant 4 could replicate 8 ACEs.

I'd be curious if Sony again customized this for more compute, but 4 HWS SHOULD be able to do what 8 ACEs did - plus a vastly higher clock speed, again.
In that case 4 seems more likely, to save a bit of die space and cost. Do you have any idea if PS4's extra buses (Onion, Onion+ and Garlic) could still be present in PS5? These were added to enhance GPGPU capabilities, as far as I know.
 

Clear

CliffyB's Cock Holster
To be honest, the main thing that Cerny should be praised for as Chief Architect of PlayStation hardware is his focus on eliminating the failings of Kutaragi's leadership. Ken may have been a visionary, but he had some horrid blind spots in his worldview.

You talk to anyone who worked on PS2 early on and they'll bemoan (1) the spotty documentation and overall quality of the dev environment, (2) the overall difficulty in getting started due to the idiosyncrasies of the hardware design, and (3) the realization that accessing performance basically required fully understanding the hardware design.

A good example being that the PS2's GS graphics chip was so fast that the CPU (EE) was simply unable to feed it data fast enough to tap its potential. To do that, you needed to utilize the VU coprocessors. And of course, to utilize them effectively you had to use them in parallel with the EE - not a simple ask given that multi-threading was a new thing for consoles at that time, and that the speed disparity of these components made stalling* an ever-present concern.

Now it's important to note that some coders liked this sort of setup, problem solving being a key trait and the gains from mastering its intricacies being so large that it made it rewarding. For others though, it was just a pain in the dick!

The PS3 took this to the next level. SPE coding was even worse than coding the VUs: more complexity, more parallelism, and the performance disparity between them and the main Cell core (PPE) was even greater... This was the time when coders learnt the "joy" of loop unrolling! (Look it up, it's pretty arcane.)
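
(For those who don't want to look it up: loop unrolling just trades code size for fewer branch checks and more independent operations in flight. A toy example - compilers usually do this for you nowadays; on the SPEs it was often done by hand:)

```cpp
// Toy example of manual loop unrolling: fewer loop-condition checks and more
// independent multiplies in flight per iteration, so the pipeline stays fed.
void scale(float* dst, const float* src, float k, int n)
{
    int i = 0;
    // Unrolled by 4: one branch check per four elements, and the four
    // multiplies have no dependencies on each other, so they can overlap.
    for (; i + 3 < n; i += 4) {
        dst[i + 0] = src[i + 0] * k;
        dst[i + 1] = src[i + 1] * k;
        dst[i + 2] = src[i + 2] * k;
        dst[i + 3] = src[i + 3] * k;
    }
    // Remainder loop for the leftover 0-3 elements.
    for (; i < n; ++i)
        dst[i] = src[i] * k;
}
```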

Add on top of that the split memory setup being a huge issue (absent of course on 360), and you had a machine that was straight-up hard work. A real issue in a scene that was increasingly typified by multi-platform devs, and where the most popular middleware engine (Unreal) ran like shit on PS3 compared to Xbox without massive customization.

Anyway, sorry for going into so much detail but the reason is you need to understand this long established dynamic between grass-roots devs and Playstation as a hardware platform. This is what Mark Cerny inherited.

Knowing that, it's striking how precisely his goals are targeted at addressing these past failings.

I think the important part to understand is that he thinks and talks like someone in dev. To the end-user, concepts like "time to triangle" are meaningless, because in the end release dates are release dates. However, it's music to the ears of anyone working on the system, and that's an audience that is going to see through any bullshit immediately.

Point being, I'd be inclined to take his claims at face value, because what he's saying is so in tune with this history. Objectively I'm not sure if his style is the best way to reach the mass market (!), but it's absolutely what people on the inside will want to hear above all else. And as I said before, there's no hiding when it's your job to deal with the hardware daily.



*Stalling being when one part of the system is stuck idling while waiting for the rest to catch up, or when the whole pipeline chokes because more data arrives than can be dealt with usefully.
 
No, you're wrong. A single floating point operation simply combines 2 inputs to get an output. That's it. Drawing the same triangle a million times is not the same as drawing a million different triangles, because the source data needs to come from somewhere; it's not just sitting there pre-loaded in the registers.

Actual real-world workloads are far more complex and it matters less how fast you can process than how well you can avoid downtime waiting for fresh data to arrive for processing. Occupancy in short.



No. Hearing rumors about projected specs does not make you Cassandra. The writer of that piece by his own (inflated) claims is an artist. Not a coder. Not a producer. Not a corporate bigwig responsible for strategic planning of global brands.

Once again, stop fetishizing Tf counts. Especially when consoles are closed systems where things like memory layout and bus-speeds/propensity for congestion can vary drastically compared to the PC status quo.
Der duhh duuh duh deee - No.

I and the entire Computer Science Community are correct in asserting that with superior architecture 12 teraflops can only ever be enhanced - these enhancement's are only ever made through software optimization - and that these software optimization's are projected to AGAIN make a single teraflop which was originally a metric of 1 trillion instruction's - a metric that now mean's 4 trillion instructions through optimization - produce twice it's workload at 8 trillion instruction's per teraflop of performance. All due to the Law of Accelerating returns and future software/driver optimizations - to insinuate otherwise FLATLY exposes you're knowledge of the subject as the standard teraflop originally was a measurement of 1 trillion operations and ONLY reached 4 trillion calculation's through standard software optimization efficiency.

It is now a measurement of 4 trillion operations a second purely based on software optimization. Any attempt to argue this point, or attempt's to argue that there are no software optimization's set to bolster the teraflop another 10x further are futile.

I, specifically wrote that piece - I, specifically did not adhere to "rumors" as there were none that matched my specification's exactly - there were other attempt's - none of which perfectly aligned with the official specifications announced 2 years later - once you remove 1 teraflop of adage per cpu/console respectively.

Secondly - you're attempt to deflate the spec's presented in that piece are worthless. Anyone can see, plainly - that I nailed those spec's 100% with the adage of an extra teraflop for CPU performance - which the manufacturers have still not included for the standard consumer.

And this was 2 years before the console spec's were made public. And these prediction's were made without hinging on rumor's.

None of your mere claim's contradicts this.

I am a computer Scientist. I am a CGI artist. The 2 subject's are complimentary.

For other's who aren't hopelessly brainwashed - Teraflop performance is the only raw metric worth evaluating and considering in term's of graphical fidelity - unless you intend on overclocking your hardware or are measuring performance that is not Graphical. The Teraflop metric is the penultimate indicator of performance barring superior architecture improvements that do not in fact negate this barometer.

Also, I find it telling - that the industry was poised to follow my advice when suggesting that 5+ Multiresolution's of sculpt detail become standard at minimum - enter Epic with UE5 and unlimited sculpt detail 2 years later.
 
Last edited:
22 years as developer, coder through to producer/design director. Every major platform over the years, too old and curmudgeonly to give a shit anymore.

Work on your apostrophe use.
You cited otherwise in your previous post's and put my credentials under scrutiny - and attempted to say that is not my reddit account. I'm 3 and 0 here.

You're blatant attempt's to dissuade other's that my spec's are somehow incorrect when they are anything but is laughable. Remove 1 teraflop of CPU Teraflop adage from either of those specs defined and then attempt to tell others my spec prediction's were preposterous.

They are 100% resolute as are the other statement's I've made.
 
Last edited:
22 years as developer, coder through to producer/design director. Every major platform over the years, too old and curmudgeonly to give a shit anymore.

Work on your apostrophe use.

What do you make of the design philosophy behind the PS5, in particular? Do you believe that them focusing on I/O throughput the way they have done might be a bit overkill? Sorry to ask...But really curious as to what a person with your pedigree makes of their current orientation.
 

Clear

CliffyB's Cock Holster
You cited otherwise in your previous post's and put my credentials under scrutiny - and attempted to say that is not my reddit account. I'm 3 and 0 here.

You're blatant attempt's to dissuade other's that my spec's are somehow incorrect when they are anything but is laughable. Remove 1 teraflop of CPU Teraflop adage from either of those specs defined and then attempt to tell others my spec prediction's were preposterous.

They are 100% resolute as are the other statement's I've made.

I actually assumed the huge quoted portion was somebody else, I didn't think anyone would be so egotistical as to recycle such a chunk like holy writ as opposed to simply cutting and pasting the relevant points into the body of the post.

Was wrong about that, and only that.
 
I'm way out of my depth when it comes to things being discussed in this thread. Anyway, based on what John was hinting at: does anyone think the PS5 is utilizing something like AMD's rumored "infinity cache"?
 
I actually assumed the huge quoted portion was somebody else, I didn't think anyone would be so egotistical as to recycle such a chunk like holy writ as opposed to simply cutting and pasting the relevant points into the body of the post.

Was wrong about that, and only that.
Considering they were prediction's that were 100% accurate - and in fact included vast swathes of information on shader performance which was the other topic of discussion here with the caveat that it also included what was needed for higher graphical fidelity - you are insinuating I should have typed out the entire feature AGAIN while excluding about the last 2 maybe 3 paragraphs.

Wrong about mostly all of it if you dare again attempt to dissuade the blissfully ignorant by citing teraflop's are only a rough calculation of performance you imbecile.
 

rnlval

Member
We haven't actually had hands on with the PS5 yet, but just talking to developers... I think people will be surprised in a good way. [I have been] hearing some good things about that PS5.




A bit vague, but it's hard not to read this as the PS5 doing better than its numerical grunt would suggest. What comes to mind is Sony (correctly) saying that flops are just a paper calculation of the CUs - ALUs x 2 x clock speed - but other things on the die scale with clock speed too: caches, buffers, command processors, etc., plus API differences and so on.
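
For reference, that paper calculation worked through with the publicly stated console configurations (the x2 is because each ALU can issue a fused multiply-add per clock):

```cpp
// The "paper" TFLOPS number: CUs x 64 ALUs x 2 ops (FMA) x clock.
// Figures below are the publicly stated console configurations.
#include <cstdio>

double tflops(int cus, double clock_ghz)
{
    return cus * 64 /*ALUs per CU*/ * 2 /*FMA = 2 ops*/ * clock_ghz * 1e9 / 1e12;
}

int main()
{
    std::printf("PS5:      %.2f TFLOPS\n", tflops(36, 2.23));   // ~10.28
    std::printf("Series X: %.2f TFLOPS\n", tflops(52, 1.825));  // ~12.15
    return 0;
}
```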

Meanwhile AMD's "Big NAVI" delivers significant TFLOPS uplift.
 

LordOfChaos

Member
Meanwhile AMD's "Big NAVI" delivers significant TFLOPS uplift.

And they needed to come up with a way to keep shader occupancy up when scaling to so many CUs, i.e. Infinity Cache. Doubling the CU count from the 5700, it's reportedly aiming at 3070-level performance. Mmhmm.

If we end up finding the scaling didn't quite work out so well and other things limit its linear performance increase with CUs, that kind of proves the point.

Yeah, more power will come with more flops over the course of time, but they're not everything when it comes to applying that performance to a real-world game - easy enough to understand if you're not a CGI artist, apparently.
 
Last edited:

Azurro

Banned
You cited otherwise in your previous post's and put my credentials under scrutiny - and attempted to say that is not my reddit account. I'm 3 and 0 here.

You're blatant attempt's to dissuade other's that my spec's are somehow incorrect when they are anything but is laughable. Remove 1 teraflop of CPU Teraflop adage from either of those specs defined and then attempt to tell others my spec prediction's were preposterous.

They are 100% resolute as are the other statement's I've made.

Can you stop it with the trolling? It's not even funny, you just write complete stupidity.
 

Clear

CliffyB's Cock Holster
What do you make of the design philosophy behind the PS5, in particular? Do you believe that them focusing on I/O throughput the way they have done might be a bit overkill? Sorry to ask...But really curious as to what a person with your pedigree makes of their current orientation.

Massively enhancing the pool of effective "resources" available at a given time is absolutely huge. Whether or not the need exists to go to the extremes they appear to have gone to, remains to be seen. However a higher ceiling on anything is always a win.

I mean the advantage of more power is just that you need to worry less frequently about hitting a performance limit, which in turn liberates you from being forced to do stuff in ways that are faster but maybe more limited or difficult to implement. The reality is you can do crazy shit on limited hardware ( "dog standing on two legs"-syndrome as explained to me once!) but in the end no matter how laudable it might be, its not the most efficient way of doing things.

I mean, look at say Shadow Of The Colossus on PS2. That was a ridiculously ambitious thing to attempt both technically and as a game design, but they did it. The point being most devs, most projects aren't about pushing the limits that hard because of what it costs in terms of time and labor.

In short, efficiency, accessibility, and flexibility are super useful for everyone, but the full possibilities and potentials opened up by what having a mega i/o stack offers will only be embraced and utilized by a minority. By which I mean Ratchet + Clank seems like a considered showcase for what's possible, and it does sit very well with the established style of product from Sony's first-party studios.

TL;DR: It can't be bad, but the overall benefit kinda depends on how often its utility is a difference maker. Who knows how competitive MS' Velocity Architecture turns out to be, especially in cross-platform titles.
 

rnlval

Member
PS4/PS4 Pro had 8 ACEs (compared to the Xbox One's 2). RDNA has 4. I am very curious about their number in PS5. I am guessing 8, mainly because of backward compatibility, but who knows...
Hardware Scheduler (HWS): in "Hawaii" and other GPUs based on the Graphics Core Next ISA, the hardware was designed to support a fixed number of compute queues (up to 8 per ACE). Starting with 3rd and 4th-gen GCN, however, the HWS makes it possible to virtualize these compute queues. This means that any number of queues can be supported, and the HWS will assign these queues to the available ACEs as slots become available.

Don't compare recent HWS units to PS4's older ACE units.
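
(To ground that: from the application side you only ever see queue families and queue counts - whether it's 8 fixed-function ACEs or a smaller number of dual-threaded HWS units servicing them underneath isn't visible. A generic Vulkan sketch, not the console SDKs:)

```cpp
// Sketch: what a driver exposes to applications is queue families and queue
// counts; how many ACEs/HWS units service them underneath is not visible here.
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdio>

void printComputeQueues(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const bool compute  = (families[i].queueFlags & VK_QUEUE_COMPUTE_BIT)  != 0u;
        const bool graphics = (families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0u;
        if (compute && !graphics)   // the "async compute" families
            std::printf("family %u: %u compute-only queues\n", i, families[i].queueCount);
    }
}
```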
 
That Reddit post in short: a graphics whore with terrible grammar got a job in the industry and made a ridiculously long post about polygons and trees just to say "MS is smart and playing the long game, and Sony are dumb and will lose because they're weaker".
Wrong, I am a Microsoft fan - check my post's here.

And wrong again as I correctly predicted the spec's of both consoles 2 years ahead of their own reveals with 100% ACCURACY a word which none of you are familiar with - with the caveat being that I added a teraflop to both because as I had also predicted - Microsoft and Sony both have not divulged that the CPUs in fact adds an extra Teraflop in performance overhead. But good try.
 