
Digital Foundry's John: Been talking to developers, people will be pleasantly surprised with PS5's results

Azurro

Banned
Yes, point out where my spec predictions are actually wrong and then and only then - will you figure out you never read anything I said to begin with.

Can you please stop it with the stupidity? As part of the "Computer Science Community" you quote below, what you wrote down here makes zero sense. Theoretical performance is theoretical performance, 1 TFLOP does not equal 8 TFLOPs through "optimization". Jesus.

I and the entire Computer Science Community are correct in asserting that with superior architecture 12 teraflops can only ever be enhanced - these enhancement's are only ever made through software optimization - and that these software optimization's are projected to AGAIN make a single teraflop which was originally a metric of 1 trillion instruction's - a metric that now mean's 4 trillion instructions through optimization - produce twice it's workload at 8 trillion instruction's per teraflop of performance

I know you enjoy the attention, but just go away, you are not funny.
 

Dr Bass

Member
Can you please stop it with the stupidity? As part of the "Computer Science Community" you quote below, what you wrote down here makes zero sense. Theoretical performance is theoretical performance, 1 TFLOP does not equal 8 TFLOPs through "optimization". Jesus.

I know you enjoy the attention, but just go away, you are not funny.

The "Computer Science Community" will argue violently about tabs vs. spaces and this guy is saying they are in all agreement that 1tf will become 4 and then 8 through software enhancements.
 

SLoWMoTIoN

Unconfirmed Member
My dad likes to talk with his hands too and I got hit in the face a few times accidentally.

 
Massively enhancing the pool of effective "resources" available at a given time is absolutely huge. Whether or not the need exists to go to the extremes they appear to have gone to remains to be seen. However, a higher ceiling on anything is always a win.

I mean the advantage of more power is just that you need to worry less frequently about hitting a performance limit, which in turn liberates you from being forced to do stuff in ways that are faster but maybe more limited or difficult to implement. The reality is you can do crazy shit on limited hardware ("dog standing on two legs" syndrome, as it was explained to me once!), but in the end, no matter how laudable it might be, it's not the most efficient way of doing things.

I mean, look at say Shadow Of The Colossus on PS2. That was a ridiculously ambitious thing to attempt both technically and as a game design, but they did it. The point being most devs, most projects aren't about pushing the limits that hard because of what it costs in terms of time and labor.

In short, efficiency, accessibility, and flexibility are super useful for everyone, but the full possibilities and potentials opened up by a mega I/O stack will only be embraced and utilized by a minority. By which I mean Ratchet + Clank seems like a considered showcase for what's possible, and it does sit very well with the established style of product from Sony's first-party studios.

TL;DR: It can't be bad, but the overall benefit kinda depends on how often its utility is a difference maker. Who knows how competitive MS' Velocity Architecture turns out to be, especially in cross-platform titles.

Interesting read and even more interesting times ahead. Thanks for having taken the time to respond!
 
Can you please stop it with the stupidity? As part of the "Computer Science Community" you quote below, what you wrote down here makes zero sense. Theoretical performance is theoretical performance, 1 TFLOP does not equal 8 TFLOPs through "optimization". Jesus.



I know you enjoy the attention, but just go away, you are not funny.
Lol, so tell me - when a teraflop of performance was improved to 4 trillion calculation's a second through software (it seems you are wrongly inferring this never occurred) making 1 Teraflop essentially perform as effectively as 4 Teraflops -where in your studies have you seen this as the end of such software improvement's because currently the roadmap for teraflop improvement dictates that within 15 year's the standard Teraflop will improve 335 percent.

I have cited a concrete performance roadmap for the single Teraflop, running on superior updated (ie: Current) Architecture's and what it will be capable of producing in 15 year's time. This is not some fallacy to be ignored but is taught to computer scientist across countless curriculums. That software improvement ALONE will bolster single teraflop performance to height's thought unattainable by most non computer scientists. a 335% increase means the single Teraflop will eventually perform 375 Trillion calculation's opposed to the current 4 trillion.

This is not some fantastical fallacy preached by fanboy's but factual information taught in computer science curriculum spread across 12 books minimum.

Now with that out of the way, you are Literally saying that the new standard metric of 4 trillion calculation's per teraflop does not in fact mean that 1 teraflop is equal or equivalent to 4 Teraflops from past Teraflop Performance Metrics. Even when these gain's were produced solely by software optimization. You are categorically wrong.

I wish I lived in the world where software optimization that increased Teraflop performance 4x today in the now-now didn't exist, actually I'm so glad and thankful I don't. Can't speak for you lot however.
 

Ar¢tos

Member
Microsoft's Zenimax is now working on 2 PS5 timed console exclusives, Deathloop and Ghostwire Tokyo.
*sigh*
They only just bought the studios; it's not like they (MS) had the devkits available as soon as they were released to study them and improve their own hardware/SDKs based on them (which is what MaulerX was stating).
 
Probably PS5 BC will be worse. But if they had tested this first on PS5 and it had the same performance, we would have a ton of comments, tweets, and videos on YouTube about PS5 being underpowered and incapable of running current-gen games at 4K/60fps. Maybe this is because Sony is pretty silent.
PS5's BC for this game will be better. The game on Pro is already running better than on XBX. But yes, you are right. People have been brainwashed by DF's superlative adjectives and can no longer make an objective analysis of XSX BC when a few games like this run objectively quite badly on XSX.

"If DF have said the XSX BC perf is extreme then it must mean 1.5x better than XBX is extreme"
 

Raonak

Banned
This is gonna be the smallest power difference in like forever.

A far cry from PS4 having a 50% tf increase compared to XB1.
 

geordiemp

Member
Der duhh duuh duh deee - No.

I and the entire Computer Science Community are correct in asserting that with superior architecture 12 teraflops can only ever be enhanced - these enhancement's are only ever made through software optimization - and that these software optimization's are projected to AGAIN make a single teraflop which was originally a metric of 1 trillion instruction's - a metric that now mean's 4 trillion instructions through optimization - produce twice it's workload at 8 trillion instruction's per teraflop of performance. All due to the Law of Accelerating returns and future software/driver optimizations - to insinuate otherwise FLATLY exposes you're knowledge of the subject as the standard teraflop originally was a measurement of 1 trillion operations and ONLY reached 4 trillion calculation's through standard software optimization efficiency.

It is now a measurement of 4 trillion operations a second purely based on software optimization. Any attempt to argue this point, or attempt's to argue that there are no software optimization's set to bolster the teraflop another 10x further are futile.

I, specifically wrote that piece - I, specifically did not adhere to "rumors" as there were none that matched my specification's exactly - there were other attempt's - none of which perfectly aligned with the official specifications announced 2 years later - once you remove 1 teraflop of adage per cpu/console respectively.

Secondly - you're attempt to deflate the spec's presented in that piece are worthless. Anyone can see, plainly - that I nailed those spec's 100% with the adage of an extra teraflop for CPU performance - which the manufacturers have still not included for the standard consumer.

And this was 2 years before the console spec's were made public. And these prediction's were made without hinging on rumor's.

None of your mere claim's contradicts this.

I am a computer Scientist. I am a CGI artist. The 2 subject's are complimentary.

For other's who aren't hopelessly brainwashed - Teraflop performance is the only raw metric worth evaluating and considering in term's of graphical fidelity - unless you intend on overclocking your hardware or are measuring performance that is not Graphical. The Teraflop metric is the penultimate indicator of performance barring superior architecture improvements that do not in fact negate this barometer.

Also, I find it telling - that the industry was poised to follow my advice when suggesting that 5+ Multiresolution's of sculpt detail become standard at minimum - enter Epic with UE5 and unlimited sculpt detail 2 years later.

If you're a computer scientist, have you ever read the RDNA2 white paper? A TF is a TF, but it is not a gaming metric. Hang on, this is a joke post, isn't it?

Vega 64 is better for mining using TF

5700 XT is far superior in every game in every metric.

AMD's white paper says the exact opposite of what you just said.

[Image: architecture comparison table from the AMD RDNA whitepaper]


I could continue, but why bother. I won't even get into the latest AMD patent on shared L1 cache and IPC improvements, effective bandwidth vs. cache misses, CU utilisation, and keeping data closer in fewer clock cycles.

Ah OK, you're just being silly.
 

sircaw

Banned
I agree, EVERYONE has bias. I prefer water to any other drink, but I still drink other things.

I prefer Motorcycles to cars, but I still drive cars also.

I think Sony WWS are at the forefront of game technology and visuals (not the only one) but I love other teams and games equally.

The thing about bias is you have to see past it; mostly it is an emotion. Having been a business developer, architect and manager over far too many years in the corporate world, you lose this connection, and for any scientific analysis you have to remove or see past the "confirmation bias" that can creep in.

Take testing a game's performance: test the heavy parts and the light parts and show the balance across all platforms. Where I see this the MOST is in PC benchmarks; I can rarely match up to a major site's GPU or CPU test, and they almost never show the tested section OR even WHAT it actually is.

Is this bias, laziness, lies or all 3?

you make me wanna become a better man/fish :messenger_grinning:
 

FritzJ92

Member
What does he mean when he says not triple digits? Is that compared to how SX has shown 100% improvements on games? Sort of lost me with his last comment.
 

Lysandros

Member
If you're a computer scientist, have you ever read the RDNA2 white paper? A TF is a TF, but it is not a gaming metric. Hang on, this is a joke post, isn't it?

Vega 64 is better for mining using TF

5700 XT is far superior in every game in every metric.

AMD's white paper says the exact opposite of what you just said.

[Image: architecture comparison table from the AMD RDNA whitepaper]


I could continue, but why bother. I won't even get into the latest AMD patent on shared L1 cache and IPC improvements, effective bandwidth vs. cache misses, CU utilisation, and keeping data closer in fewer clock cycles.

Ah OK, you're just being silly.
It's very interesting that Total Cache Bandwidth/FLOP is used as a final metric here and shows the biggest difference in ratio between the architectures.
 
AMD Whitepaper is inferring it has architecture that increases teraflop performance with less basis on increased clock speed's/CU Counts and bandwidth - this is a welcomed/expected side effect / benefit of die shrinks and will only grow larger and larger the closer we get to 3 nanometer's and ultimately 1.3 nanometers and below.

The Teraflop will remain firm at the standard 4 trillion calculation's per second barring improvement's to the architecture meant solely to boost the calculation throughput - as is to be expected.

Architecture improvement's can also bolster a single teraflop's performance.

Hardware revisions/improvements will continue to see the teraflop calculation throughput bolstered even higher - past the standard 4 trillion calculation per second threshold it is at currently with mere software optimization and efficiency issues sorted out - I would say through hardware optimization's alone you may be able to squeeze out 4.7 trillion calculations a second bare minimum, but up to 10 Trillion calculation's a second and far more than that in the future - utilizing die shrink's as they are currently - but that will improve vastly and these same hardware improvements will permit those previously mentioned software improvement's to produce far.... far... greater result's.

Teraflop performance matter's less (not really - it always ALWAYS matters) when you take a machine learning program *COUGH*DLSS 5.0 *COUGH* that can recreate a scene in game verbatim utilizing very little data - this is what we refer to as disruptive performance gains. And plenty... plenty of those are on the horizon - along with pure software optimization that bolster's performance 100,000%. Lot's to be excited about in that Arena.
 

geordiemp

Member
It's very interesting that Total Cache Bandwidth/FLOP is used as a final metric here and shows the biggest difference in ratio between the architectures.

It's just a summary really, but effective bandwidth to the shaders = bandwidth at each stage / cache miss factors.

Cache miss factors are a massive influence.

Also note the Pascal to Turing white paper; this is not new.


[Image: chart from the NVIDIA Pascal/Turing whitepaper]
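To put very rough numbers on that (purely illustrative figures, not taken from either whitepaper), here is a toy sketch of how much the cache hit rate swings the bandwidth the shaders actually see:

```python
# Toy model: accesses that hit in cache are served at cache bandwidth, misses fall
# through to main-memory bandwidth. Real GPUs are far more complicated (multiple
# cache levels, latency hiding, scrubbers, etc.), but it shows why cache miss
# factors dominate the effective figure.

def effective_bandwidth_gbs(cache_bw_gbs: float, mem_bw_gbs: float, hit_rate: float) -> float:
    """Blend cache and memory bandwidth by the fraction of accesses that hit."""
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * mem_bw_gbs

for hit_rate in (0.95, 0.80, 0.60):
    bw = effective_bandwidth_gbs(cache_bw_gbs=2000.0, mem_bw_gbs=450.0, hit_rate=hit_rate)
    print(f"hit rate {hit_rate:.0%} -> ~{bw:,.0f} GB/s effective")
```

Same raw memory bandwidth, wildly different effective bandwidth depending on how well the caches are fed - which is exactly why a paper TF number on its own tells you so little.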
 

Dr Bass

Member
AMD Whitepaper is inferring it has architecture that increases teraflop performance with less basis on increased clock speed's/CU Counts and bandwidth - this is a welcomed/expected side effect / benefit of die shrinks and will only grow larger and larger the closer we get to 3 nanometer's and ultimately 1.3 nanometers and below.

The Teraflop will remain firm at the standard 4 trillion calculation's per second barring improvement's to the architecture meant solely to boost the calculation throughput - as is to be expected.

Architecture improvement's can also bolster a single teraflop's performance.

Hardware revisions/improvements will continue to see the teraflop calculation throughput bolstered even higher - past the standard 4 trillion calculation per second threshold it is at currently with mere software optimization and efficiency issues sorted out - I would say through hardware optimization's alone you may be able to squeeze out 4.7 trillion calculations a second bare minimum, but up to 10 Trillion calculation's a second and far more than that in the future - utilizing die shrink's as they are currently - but that will improve vastly and these same hardware improvements will permit those previously mentioned software improvement's to produce far.... far... greater result's.

Teraflop performance matter's less (not really - it always ALWAYS matters) when you take a machine learning program *COUGH*DLSS 5.0 *COUGH* that can recreate a scene in game verbatim utilizing very little data - this is what we refer to as disruptive performance gains. And plenty... plenty of those are on the horizon - along with pure software optimization that bolster's performance 100,000%. Lot's to be excited about in that Arena.

Where in the heck do you get this teraflop is 4t operations a second nonsense?
 
1 TFlop = 1 trillion calculations/second
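For reference, here is the whole derivation behind the headline console figures (a quick sketch using the publicly announced CU counts and clocks; peak FP32 = CUs x 64 lanes x 2 FLOPs per lane per clock x clock):

```python
# A teraflop is 10^12 floating-point operations per second, nothing more exotic.
# Peak FP32 throughput for an RDNA2 GPU: CUs x 64 lanes/CU x 2 FLOPs/lane/clock (FMA) x clock.

def peak_tflops(cus: int, clock_ghz: float, lanes_per_cu: int = 64, flops_per_lane: int = 2) -> float:
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000.0

ps5 = peak_tflops(36, 2.23)    # ~10.28 TF at PS5's max boost clock
xsx = peak_tflops(52, 1.825)   # ~12.15 TF
print(f"PS5 ~{ps5:.2f} TF, XSX ~{xsx:.2f} TF, gap ~{(xsx / ps5 - 1) * 100:.0f}%")  # ~18%
```

No software update changes that arithmetic; optimization changes how much useful work you extract from the peak, not the peak itself.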

Your "computer club" is weird if you and them use 1Tf = 4 trillion/s

Is Tommy Fisher the boss of the club?
 

Lysandros

Member
It's just a summary really, but effective bandwidth to the shaders = bandwidth at each stage / cache miss factors.

Cache miss factors are a massive influence.

Also note the Pascal to Turing white paper; this is not new.


[Image: chart from the NVIDIA Pascal/Turing whitepaper]
I see, thanks. So PS5 having around 40% more (if I calculated correctly) Total Cache Bandwidth/FLOP compared to XSX isn't 'that' relevant to performance, correct?
 

onesvenus

Member
Clearly they didn’t do a great job at it. Considering some developers are having a hard time with the development kits.
One thing is the hardware and the other is the implementation of the SDK.
If you believe those rumors, they also say it's because previously developers were using the XDK and now they must use the GDK, which works across Xbox and PC.
That doesn't give any indication of whether the specs of the machine make developers happy or not.
 

geordiemp

Member
I see, thanks. So PS5 having around 40% more (if I calculated correctly) Total Cache Bandwidth/FLOP compared to XSX isn't that relevant to performance, correct?

It's a bit of everything: speed of cache, bandwidth and size of cache to the next stage in the pipeline, and design for mitigation of cache misses as a whole.

PS5 will certainly feed the shaders much faster, XSX has more of them, and they are both so custom that how it comes together can only be seen in benchmarks.
 

PaintTinJr

Member
It's a bit of everything: speed of cache, bandwidth and size of cache to the next stage in the pipeline, and design for mitigation of cache misses as a whole.

PS5 will certainly feed the shaders much faster, XSX has more of them, and they are both so custom that how it comes together can only be seen in benchmarks.
On the topic of this thread and Richard's historical bias (interesting to see if that is still the case), the word benchmarks sort of crystallizes the issue IMHO.

The benchmarks on the console side are opaque, yet even when Richard has known there was a Ubisoft-type "parity or better" clause in the contract between a platform holder and publisher - especially back in the 360 days - he would still write things he knew were false to keep a big slant on the analysis and keep the fires of war burning.

The only reliable benchmarks we'll see this gen where we can be sure the systems have been pushed beyond parity are the first-party exclusives. It won't, and sadly can't, be a direct 1:1 comparison, although the Demon's Souls remake versus any Dark Souls remake will certainly be as good an indicator as any for the first 12 months.

IMO it will largely be about comparing games in the same genre (probably using the same base engine tech) and then comparing how the overall impression of visuals/audio and performance stacks up, before trying to assess RT, particle FX, HDR, resolution, draw distance and frame rate in a more technical way. Even though this generation (and last) has shown a gulf in production quality between the two platforms' first-party games, with Nanite/Lumen being UE5 features, technical capability shouldn't be quite the differentiator this gen (IMO) for wowing people, and hardware capability should be the bigger limiting factor once again.
 

thelastword

Banned
The thing is, we in the next gen thread have been saying that for ages... We don't need developers to tell us this. In discussing the specs we can very well deduce it ourselves, and that's what's been going on in the next gen thread: lots of discussion on the custom hardware, the SSD, the IO block, cache scrubbers and geometry engine... I think we need to give some props to many of our fellow gaffers in the next gen thread first...
 

Lysandros

Member
The thing is, we in the next gen thread have been saying that for ages... We don't need developers to tell us this. In discussing the specs we can very well deduce it ourselves, and that's what's been going on in the next gen thread: lots of discussion on the custom hardware, the SSD, the IO block, cache scrubbers and geometry engine... I think we need to give some props to many of our fellow gaffers in the next gen thread first...
Honestly I think there are far more people with deeper technical insight into these systems on NeoGAF compared to forums like Beyond3D or ResetEra.
 
Not going to get involved in spec talk as I'd be in over my head but will say I'm still always amazed at what the frankly pathetic Jaguar cores and 1.84 TF in the base PS4 have achieved. Must have played every big PS4 exclusive and things like GoW still wrinkle my brain.

The Zen 2 cores will be a huge leap forward, as will 10.4 TF, and the SSD baseline will surely open so many doors.
 

LordOfChaos

Member
What does he mean when he says not triple digits? Is that compared to how SX has shown 100% improvements on games? Sort of lost me with his last comment.

He was just being silly, Chad Warden reference, “PSTriple”.

N Nhranaghacon I don’t want to get too far off topic, but what are your personal rules for apostrophes?

Reply banned, we will never know, but thank goodness
 

yurinka

Member
*sigh*
They only just bought the studios; it's not like they (MS) had the devkits available as soon as they were released to study them and improve their own hardware/SDKs based on them (which is what MaulerX was stating).
Now they are part of Microsoft, so Microsoft has devkits. A $7.5B purchase isn't made in an afternoon; I assume they had been in talks and negotiating about it for months at least.

MS also has close partnerships and friendships with many 3rd party studios, former coworkers and so on, who are making multiplatform games for both next-gen consoles.

Not only that, both consoles are very similar in terms of technology: same AMD Zen 2 and RDNA2 family, same amount of RAM, both go with an SSD, pretty similar overall performance. That's good for 3rd parties because they make mostly multiplatform games, which means it's also good for platform holders because they make most of their money from 3rd party games.

I bet Sony and MS were in talks, maybe with AMD too, to agree on how their next-gen consoles were going to be. Consider that they are also partnering for servers and cloud technology. It's the same thing: they depend on 3rd party companies, who also prefer to have 2 strong markets/consoles/streaming services rather than a single monopoly.

And even if Sony and MS weren't directly sharing the stuff with each other, there are NDAs, but the top companies know what all the other top companies are working on. For sure MS and Sony knew what the other had a while ago, years before the purchase.
 
Clearly they didn’t do a great job at it. Considering some developers are having a hard time with the development kits.

Yes, because all devs are equally talented /s

Honestly I think there are far more people with deeper technical insight into these systems on NeoGAF compared to forums like Beyond3D or ResetEra.

Eh, it depends. Out of the three, B3D still has the most technically accurate posts I've come across and, on average, the most neutral technical analysis, especially from a select handful of the posters (some of whom don't post frequently, but still).

Era has some technical people, but some of them seem compromised by vested interests that might lead them to make outright false claims. I mean, if they have an ex-Nintendo employee there as an admin, this isn't that far a stretch to speculate on. It also doesn't help that they have people like Matt as mods; he's not a dev, but the way he does his drive-by, no-substance, subtle egging-on posts in certain threads about technical aspects of the systems definitely does nothing to stem any warring nonsense that might either be happening in those threads or bubbling under the surface.

Here? Well, I certainly do like engaging with other technically-minded folks (whether we agree or even disagree), but sometimes technical threads here that clearly try presenting information in a neutral way end up getting bogged down by shit-stirring console warriors on both sides. But if you recall what I said before, since there are simply a lot more Sony-only people around, they tend to make up the majority of that bad batch who screw up these kinds of threads sometimes. Same goes over on Era. B3D is surprisingly free of that kind of stuff, which is a blessing IMHO. Although there may be a couple of people who try it every now and then, discussion quickly tends to shift back away from that type of fanboy element.

This is my honest opinion having either participated or lurked on all three forums for a very long time.
 

onQ123

Member
The thing is, we in the next gen thread have been saying that for ages... We don't need developers to tell us this. In discussing the specs we can very well deduce it ourselves, and that's what's been going on in the next gen thread: lots of discussion on the custom hardware, the SSD, the IO block, cache scrubbers and geometry engine... I think we need to give some props to many of our fellow gaffers in the next gen thread first...

People thought I was talking crazy




 

Aladin

Member
Link : Sony May Be Overselling Aspects of the PS5’s Hardware Performance

According to Baker, the increase in memory latency that Cerny mentions is indeed a negative that can make smaller, high-clocked parts a bit less efficient than their wider, slower-clocked brethren.

Cerny - "Also, it’s easier to fully use 36 CUs in parallel than it is to fully use 48 CUs – when triangles are small, it’s much harder to fill all those CUs with useful work."

Dan Baker (Game Architect at Oxide Games) - "Small triangles are indeed inefficient, However, This is specific to the type of renderer. In deferred renderers, which make up most of the market today, most of the shading computation is done in screen space, where the small triangle problem is minimized. Only the material setup really pays the cost for small triangles. For Oxide’s decoupled shading rendering technology, neither the setup nor the shading efficiency is affected by the size of the triangle, so we are impacted even less."
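A crude way to picture the small-triangle cost Baker is describing (an illustrative model only, not from the article): hardware shades pixels in 2x2 quads so derivatives can be computed, which means a triangle covering only a few pixels still launches helper invocations whose results are thrown away. In a deferred renderer most of that waste lands on the relatively cheap material/G-buffer pass, while the expensive lighting runs per pixel in screen space.

```python
# Toy estimate of quad overshading: pixel-shader invocations launched vs pixels that survive.

def overshade_factor(visible_pixels: int, quads_touched: int) -> float:
    """Ratio of shader invocations (4 per 2x2 quad touched) to pixels actually covered."""
    return (quads_touched * 4) / visible_pixels

print(overshade_factor(visible_pixels=3, quads_touched=2))        # tiny triangle: ~2.7x the work
print(overshade_factor(visible_pixels=4000, quads_touched=1050))  # large triangle: ~1.05x the work
```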

18% is 18% no matter how much you want it to be a different number... and 3rd party publishers love parity, the perfect excuse to not go a step further (and save a few $$$).

Now it seems the difference will be bigger than 18%.
 

Aladin

Member
No replacement for displacement.
Even Tom Warren was impressed with Series S's performance. If PS5 performance is good, enjoy it. Why do you have to compare it with the beast Series X? :messenger_beaming: Everybody can enjoy next gen at multiple lower resolutions.
 