
Next-Gen PS5 & XSX |OT| Console tEch threaD


CyberPanda

Banned
CyberPanda
Thank you for your help making the thread and pushing my lazy ass to do it. Couldn't have done it without you.
👌🏻👌🏻👌🏻
 

Fake

Member
That rumor doesn't make sense. Microsoft would be pretty pissed at AMD if they pulled that.
The big jump will be on the CPU side, and they keep talking about cloud/AI/etc... dunno if this would really piss them off. Besides, Navi is still GCN.
I don't believe it either, but who knows.
 

CyberPanda

Banned
Anyone remember the formula for calculating tiddyflops?
ethomaz?
How do you calculate a teraFLOP?

The basic formula for computing teraFLOPS for a GPU is:

(# of parallel GPU processing cores multiplied by peak clock speed in MHz multiplied by two) divided by 1,000,000

The number two in the formula stems from the fact that some GPU instructions can deliver two operations per cycle, and since teraFLOP is a measure of a GPU's maximum graphical potential, we use that metric.

Let's see how we can use that formula to calculate the teraFLOPS in the Xbox One. The system's integrated graphics has 768 parallel processing cores. The GPU's peak clock speed is 853MHz. When we multiply 768 by 853 and then again by two, and then divide that number by 1,000,000, we get 1.31 teraFLOPS.
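
A quick sanity check of that formula in Python (the helper name and the PS4 figures are my own additions; 1152 shaders at 800MHz are the PS4's well-known specs):

```python
# Peak TFLOPS = shader cores x clock (MHz) x 2 ops/cycle / 1,000,000,
# exactly as described above. The 2 assumes one fused multiply-add per cycle.

def teraflops(shader_cores: int, clock_mhz: float, ops_per_cycle: int = 2) -> float:
    """Peak FP32 compute of a GPU in teraFLOPS."""
    return shader_cores * clock_mhz * ops_per_cycle / 1_000_000

print(teraflops(768, 853))   # Xbox One: ~1.31 TF, matching the quote
print(teraflops(1152, 800))  # PS4:      ~1.84 TF
```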
 

LordOfChaos

Member
About software-based RT


That's what I was thinking with the Metro dev interview: they surely have dev kits by now and may have tipped their hand


Let's talk about ray tracing on next-gen console hardware. How viable do you see it being, and what would the alternatives be if not the RTX cards we see on PC? Could we see a future where consoles use something like a voxel GI solution while PC maintains its DXR path?

Ben Archard:
It doesn't really matter - be it dedicated hardware or just enough compute power to do it in shader units, I believe it would be viable. For the current generation - yes, multiple solutions are the way to go.

This is also a question of how long you support a parallel pipeline for legacy PC hardware. A GeForce GTX 1080 isn't an out of date card as far as someone who bought one last year is concerned. So, these cards take a few years to phase out and for RT to become fully mainstream to the point where you can just assume it. And obviously on current generation consoles we need to have the voxel GI solution in the engine alongside the new RT solution. RT is the future of gaming, so the main focus is now on RT either way.

In terms of the viability of RT on next generation consoles, the hardware doesn't have to be specifically RTX cores. Those cores aren't the only thing that matters when it comes to ray tracing. They are fixed function hardware that speed up the calculations specifically relating to the BVH intersection tests. Those calculations can be done in standard compute if the compute cores are numerous and fast enough (which we believe they will be on the next gen consoles). In fact, any GPU that is running DX12 will be able to "run" DXR since DXR is just an extension of DX12.
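
For anyone wondering what those "BVH intersection tests" actually are: traversing a bounding volume hierarchy boils down to huge numbers of ray-vs-box checks like the slab test below. RTX cores run this in fixed-function hardware; without them the same math runs on ordinary shader ALUs. A minimal sketch in Python rather than shader code:

```python
# Ray/AABB "slab test": the core operation of BVH traversal that RT cores
# accelerate. inv_dir is 1/direction per axis, precomputed as usual.

def ray_hits_aabb(origin, inv_dir, box_min, box_max) -> bool:
    """True if the ray intersects the axis-aligned bounding box."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))  # latest entry across all slabs
        t_far = min(t_far, max(t1, t2))    # earliest exit across all slabs
    return t_near <= t_far

# A ray from the origin along (1,1,1) hits a unit box centred at (5,5,5):
print(ray_hits_aabb((0, 0, 0), (1, 1, 1), (4.5, 4.5, 4.5), (5.5, 5.5, 5.5)))  # True
```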


I hope it's hardware accelerated since RTX can still do a lot more rays than the next fastest card running it in pure compute, but that kinda sounded like there's just enough bandwidth and compute to apply a useful amount of the effect, just like the Vega card already did.



Anyone remember the formula for calculating tiddyflops?
ethomaz?


What are the rest of you even talking about?

tiddys * ops per cycle (2 hands) * cycle speed
 
As well as die size, we also have to consider GPU clocks, power draw and heat.

For clocks, we just don't know how high Navi will clock at 7nm. But we should know very soon, with the launch of PC Navi at Computex - Q3 at the latest.

Higher clocks mean more watts, which means higher temps. But also higher Tflops.

So much of the Tflops argument comes down to a single question: how high will Navi clock? If it takes 300 watts to reach 1.7GHz @ 7nm, then we can forget about a 12+ Tflop console.
If Navi is a bit of a miracle and clocks 2.1GHz without melting... then higher Tflops become a real possibility. Maybe 1.8GHz in a console is not so crazy at that point.

Am I confident it will clock high? Not really. When overclocked, the Vega 64 could reach ~1.7GHz (can't count on it to stay that high though) and the Radeon VII could reach ~1.9GHz @ 7nm... using 300+ watts... and it's $700. I did use the word miracle above for a reason.

Navi does not need to perform the same as the Radeon VII though. We still don't know.

Once we know Navi's PC GPU clock speeds, we can start to guesstimate. How much power and what kind of cooling solution will Sony be willing to use? How many CUs can be afforded in a console?

PC Navi will be very revealing in narrowing down the Tflop range for next-gen consoles.
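
To put rough numbers on the clocks-vs-watts tradeoff: dynamic power scales with voltage squared times frequency, and near the top of the voltage/frequency curve voltage has to rise roughly in step with clock, so power grows close to cubically. A toy sketch - the cubic rule of thumb and the 150W baseline are assumptions, not Navi measurements:

```python
# Toy model: P_dynamic ~ C * V^2 * f, with V rising ~linearly with f near
# the top of the curve, giving P ~ f^3. The baseline point is assumed.

BASE_CLOCK_GHZ = 1.7
BASE_POWER_W = 150.0  # assumed GPU power at 1.7GHz

def est_power_w(clock_ghz: float) -> float:
    """Rule-of-thumb cubic power scaling from the baseline point."""
    return BASE_POWER_W * (clock_ghz / BASE_CLOCK_GHZ) ** 3

for ghz in (1.7, 1.8, 2.0, 2.1):
    print(f"{ghz:.1f} GHz -> ~{est_power_w(ghz):.0f} W")
# 1.7 -> 150 W, 1.8 -> ~178 W, 2.0 -> ~244 W, 2.1 -> ~283 W
```

Under those assumptions, a ~24% overclock nearly doubles GPU power, which is why 2.1GHz in a console box looks like a miracle.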
 

SonGoku

Member
I hope it's hardware accelerated since RTX can still do a lot more rays than the next fastest card running it in pure compute, but that kinda sounded like there's just enough bandwidth and compute to apply a useful amount of the effect, just like the Vega card already did.
Me too, but the quote you posted seems to imply otherwise: that next gen might just brute-force RT without dedicated HW.
Hopefully the GPU arch is customized to be as efficient as possible at RT calculations without specialized HW/cores. The dev seems to be guessing as well 🤷‍♂️

If anything it brings more credibility to them 12.9 tyddies and 800 GB/s of bandwidth
 

Evilms

Banned
For those who are wondering how to calculate memory bandwidth: you need to know the bus width, then multiply it by the effective memory frequency.

A simple example is the GTX 1080, which has a 256-bit bus and an effective frequency of 10008 MHz in GDDR5X, which gives us:

Bandwidth = (Memory Bus / 8) * effective frequency

256 bit / 8 = 32

32 x 10008 MHz = 320256 MB/s
or 320.3 GB/s
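
The same arithmetic as a reusable helper (the function name and the One X figures are my additions; 384-bit at 6800MHz effective are the Xbox One X's known memory specs):

```python
# Bandwidth (GB/s) = bus width in bits / 8 (bytes per transfer)
#                    x effective frequency (MHz) / 1000

def bandwidth_gbs(bus_bits: int, effective_mhz: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return (bus_bits / 8) * effective_mhz / 1000

print(bandwidth_gbs(256, 10008))  # GTX 1080:   ~320.3 GB/s, as above
print(bandwidth_gbs(384, 6800))   # Xbox One X: ~326.4 GB/s
```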
 
Who will have the most expensive console though?

I can see Microsoft getting in the mindset of "gamers want powahh" in the same way they went about "gamers want media" with their TV TV TV Xbox One presentation.

XBOX 2, the most POWERFUL CONSOLE EVAHHHHHH £1000
 

SonGoku

Member
Hmmm, so either 56 or 60 CUs is about where we'd expect, right?

That would rule out 14TF unless they have some truly magical secret sauce involved.
Indeed, or Navi miraculously clocks higher than 1.8GHz (14.2TF at 1850MHz).
On the flip side, 13TF seems doable on a 60 CU card at 1.7GHz, or 12.5TF at 1628MHz.
Looking at AdoredTV's updated chart, the latter could be achieved at 150W by fine-tuning the clocks on Navi 20 (RX 3090), especially considering MS ported a 180W card to the X.
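
Running those combos through the TFLOPS formula from earlier in the thread (64 shaders per GCN CU; the CU/clock pairings are this thread's speculation, not confirmed specs):

```python
# TFLOPS = CUs x 64 shaders x clock (MHz) x 2 ops/cycle / 1,000,000

def navi_tf(cus: int, clock_mhz: float) -> float:
    """Peak FP32 TFLOPS for a GCN-style part with 64 shaders per CU."""
    return cus * 64 * clock_mhz * 2 / 1_000_000

for cus, mhz in ((56, 1700), (60, 1628), (60, 1700), (60, 1850)):
    print(f"{cus} CUs @ {mhz} MHz -> {navi_tf(cus, mhz):.1f} TF")
# 56@1700 -> 12.2 | 60@1628 -> 12.5 | 60@1700 -> 13.1 | 60@1850 -> 14.2
```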
Who will have the most expensive console though?

I can see Microsoft getting in the mindset of "gamers want powahh" in the same way they went about "gamers want media" with their TV TV TV Xbox One presentation.

XBOX 2, the most POWERFUL CONSOLE EVAHHHHHH £1000
I doubt either will go above $500.
Either they are both $500 (PS5/Snek), or PS5 is $400 and Snek is $500.
 

bitbydeath

Member
Who will have the most expensive console though?

I can see Microsoft getting in the mindset of "gamers want powahh" in the same way they went about "gamers want media" with their TV TV TV Xbox One presentation.

XBOX 2, the most POWERFUL CONSOLE EVAHHHHHH £1000

Even if one does get more power, both will be displaying at 4K+, where it won't be easy to tell the difference.

Both will likely have ray tracing. Since Sony's announced it already, MS won't hold it back.
Same goes for the SSD.

The major differences will come down to exclusives I think.
 

xool

Member
Good work!

I think the pastebin leak should be in - whether real or fake, I think it's the mother of all subsequent "13/14TF" leaks (Dec 2018?)


I'm a small third-party developer from the EU. For the last 8 months I've been helping a well-known company with a AAA game that is set to release in 2020 as a launch game for PS5.

Some info I'd like to share that is 99% correct (I say 99% because small incremental hardware changes can occur until 2020, although the specs are set in stone).

-PS5 official info from Sony somewhere around next E3 (Sony will not be participating in E3); I'd say a small reveal in Q2 2019
-PS5 release March 2020 or November 2020, not yet finalized
-backward compatible
-physical games & PS Store
-PS Plus & PS Plus Premium (premium: beta early access, create private servers,
-specs: CPU 7nm Ryzen, 8 cores 16 threads, unknown speed
GPU 7nm Navi architecture around 14TF; it's gonna be powerful and power efficient; Sony is working with AMD on Navi; some sort of ray tracing but will not focus on that, more focus on VR and 4K; much better bandwidth overall
24GB GDDR6 + 4GB DDR4 for the OS; we have 32GB dev kits
-2TB HDD, some sort of NAND flash
-8K upscaling
-PSVR2 in 2020 also, revealed with PS5; big resolution boost, probably 2560x1440, 120Hz, 220-degree field of view, eye tracking, wireless, 4-5 hour battery life, integrated headphones, less motion sickness, no breakout box, much less cable management, much more focus on VR for AAA games, price around $250
-DualShock 5, some sort of camera inside for VR, more analog precision for FPS games, something similar to the Steam analog trackpad
-price $499, $100 loss per console at the beginning

PS5 exclusive launch games that I know of:

Gran Turismo 7 (VR)
PUBG remaster, 4K, F2P with PS+, only on PS5
The Last of Us 2 remaster
Ghost of Tsushima remaster
2-3 more AAA games + PSVR2 games

Non-exclusive PS5 games, 2020:

Battlefield Bad Company 3
Harry Potter
GTA 6, Holiday 2020 most probably; not hearing anything PS4-related (hearing that Sony is paying huge money to secure a 1-month timed exclusive for PS5). Been hearing rumors about Miami and New York, so 2 big cities, but I'm not sure if that's 100% true
Assassin's Creed
Horizon 2 so far in 2021
 

xool

Member
lol... it's nothing but a summary of all the speculation on the internet. Not one piece of new info. Fake fake fake...
Sony task force in panic mode?

And yet it predicted Sony would not be at E3, long before that news, plus so much else that has since been officially confirmed - people must be really good at picking their speculation?
 

bitbydeath

Member
Rutheniccookie actually broke that news before the date on that PasteBin.

 

Darius87

Member
Haven't read it properly yet, but wanted to share with you guys before I forget:

Performance of different GCN iterations under heavy geometry load

I really hope Navi is doing something new to lessen this GCN bottleneck (this weakness was exploited by Nvidia field engineers in many PC games over the past decade and has done its fair share to create the Nvidia/AMD flop discrepancy).

There's a fair chance that PS5 comes with improved geometry engines, and then we might see more than 64 CUs in PS5, most likely with lower clocks than the speculated 1.8GHz; improved bandwidth compression would also help performance.
 

SonGoku

Member
Rutheniccookie actually broke that news before the date on that PasteBin.

Will add to the OP
DAMN the words $500 and monster make me drool in anticipation.
 

Silver Wattle

Gold Member
As long as it has at least 11TF, I'd rather they spread the resources around and improve the other aspects, rather than this tunnel vision people seem to have for more teraflops of FP32.

Give us higher CPU clocks, go for extra secret sawce etc.
 

ethomaz

Banned
There's a fair chance that PS5 comes with improved geometry engines, and then we might see more than 64 CUs in PS5, most likely with lower clocks than the speculated 1.8GHz; improved bandwidth compression would also help performance.
There is no way PS5 has more than 64 CUs.

To be fair, to get good yields they will probably choose to disable some CUs so they can use partially defective chips... I think at least 4 CUs will be disabled for that.

60 or 56 CUs will be the most likely config.

Give me some time and I will try to do the math to estimate the possible PS5 die size.
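
In the meantime, here's one rough way such an estimate might go, using public 7nm reference points (Vega 20 fits 64 CUs plus uncore into ~331mm²; a Zen 2 8-core chiplet is ~74mm²); the linear CU scaling and the 10% padding are purely my assumptions:

```python
# Back-of-envelope APU die-size estimate from public 7nm reference points.
VEGA20_MM2 = 331.0   # Vega 20: 64 CUs + uncore at 7nm
VEGA20_CUS = 64
ZEN2_CCD_MM2 = 74.0  # Zen 2 8c/16t chiplet at 7nm

def estimate_apu_mm2(cus: int, overhead: float = 1.10) -> float:
    """Scale Vega 20's area by CU count, add a Zen 2 CPU block,
    and pad ~10% for console-specific uncore/IO (an assumption)."""
    gpu_mm2 = VEGA20_MM2 * (cus / VEGA20_CUS)
    return (gpu_mm2 + ZEN2_CCD_MM2) * overhead

for cus in (56, 60, 64):
    print(f"{cus} CUs -> ~{estimate_apu_mm2(cus):.0f} mm^2")
# 56 -> ~400 mm^2, 60 -> ~423 mm^2, 64 -> ~446 mm^2
```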
 

thelastword

Banned
Good work!

I think the pastebin leak should be in - whether real of fake I think it's the mother of all subsequent "13/14TF" leaks (Dec2018 ?)

I've seen this rumor before, a few weeks back, and I must say, it's the only leak I believe is real...

I think specs will improve though: 32GB GDDR6 (games) + 4-8GB DDR4 (OS) seems like it will make it into the final kit... They could then fuse the GDDR6 with the DDR4 as an extended RAM allocation if the OS does not use all of the DDR4... Would be great for devs...
 

Darius87

Member
There is no way PS5 has more than 64 CUs.

To be fair, to get good yields they will probably choose to disable some CUs so they can use partially defective chips... I think at least 4 CUs will be disabled for that.

60 or 56 CUs will be the most likely config.

Give me some time and I will try to do the math to estimate the possible PS5 die size.

With 7nm they could fit more than 64 CUs if they're supposedly aiming for more or less the PS4's die size. I know more than 64 CUs is not likely gonna happen, but it would be a great solution if the current setup doesn't pass thermals at the performance they need.
 
GCN is limited to 64 CUs. So more than that is not happening.

Edit: the guys at the other place are absolutely insane, or trolling really hard. PS5 at 8 or 9TF and XBox going all the way to 14 and even 16TF. :pie_roffles:
 

ethomaz

Banned
With 7nm they could fit more than 64 CUs if they're supposedly aiming for more or less the PS4's die size. I know more than 64 CUs is not likely gonna happen, but it would be a great solution if the current setup doesn't pass thermals at the performance they need.
They can't... GCN is hardware-limited to a max of 64 CUs.

That is why everybody was disappointed with Navi being the same old limited GCN.

AMD needs a new arch to break these limitations ASAP.

GCN is limited to 64 CUs. So more than that is not happening.

Edit: the guys at the other place are absolutely insane, or trolling really hard. PS5 at 8 or 9TF and XBox going all the way to 14 and even 16TF. :pie_roffles:
There is a lot of misinformation there, like some guys claiming GCN was only software-limited to 64 CUs when in reality the hardware is designed for a max of 64 CUs and only a new architecture could break that... or even some weird maths with memory speeds and bandwidth lol
 

DeepEnigma

Gold Member
They can't... GCN is hardware-limited to a max of 64 CUs.

That is why everybody was disappointed with Navi being the same old limited GCN.

AMD needs a new arch to break these limitations ASAP.


There is a lot of misinformation there, like some guys claiming GCN was only software-limited to 64 CUs when in reality the hardware is designed for a max of 64 CUs and only a new architecture could break that... or even some weird maths with memory speeds and bandwidth lol

Not saying they will, but could a "crossfire"-like setup be done on-chip to theoretically take advantage of more CUs, in a similar manner to running two separate cards in SLI? Or the butterfly method like the Pro?

Let's say for the sake of argument GCN was stuck for another decade (which it shouldn't be, due to their next-gen chip coming after).
 
I've seen this rumor before, a few weeks back, and I must say, it's the only leak I believe is real...

I think specs will improve though: 32GB GDDR6 (games) + 4-8GB DDR4 (OS) seems like it will make it into the final kit... They could then fuse the GDDR6 with the DDR4 as an extended RAM allocation if the OS does not use all of the DDR4... Would be great for devs...
32GB GDDR6? what?
 

Darius87

Member
They can't... GCN is hardware-limited to a max of 64 CUs.

That is why everybody was disappointed with Navi being the same old limited GCN.

AMD needs a new arch to break these limitations ASAP.


There is a lot of misinformation there, like some guys claiming GCN was only software-limited to 64 CUs when in reality the hardware is designed for a max of 64 CUs and only a new architecture could break that... or even some weird maths with memory speeds and bandwidth lol

The only reason GCN is limited to 64 CUs is that the geometry engines can't handle more than 64 CUs' worth of load, so increasing the CU count would be a waste of die space. As I said before, if Sony improved the geometry engines that would allow for more CUs; there's no need for a new arch to improve the geometry engines.
 

ethomaz

Banned
Not saying they will, but could a "crossfire"-like setup be done on-chip to theoretically take advantage of more CUs, in a similar manner to running two separate cards in SLI? Or the butterfly method like the Pro?

Let's say for the sake of argument GCN was stuck for another decade (which it shouldn't be, due to their next-gen chip coming after).
Yes, but isn't Crossfire dead? I thought AMD and Nvidia support for that type of tech ended due to the low efficiency (you get a 20-50% boost in performance with twice the hardware).

I don't think there is any way GCN gets another decade of life, but if that happens then with each new iteration you will see Nvidia distancing itself further from AMD in performance per watt.
 

DeepEnigma

Gold Member
Yes, but isn't Crossfire dead? I thought AMD and Nvidia support for that type of tech ended due to the low efficiency (you get a 20-50% boost in performance with twice the hardware).

Idk, was just spitballing to see if they can come up with other methods on die for Pro systems and beyond.
 

ethomaz

Banned
The only reason GCN is limited to 64 CUs is that the geometry engines can't handle more than 64 CUs' worth of load, so increasing the CU count would be a waste of die space. As I said before, if Sony improved the geometry engines that would allow for more CUs; there's no need for a new arch to improve the geometry engines.
Actually it is limited to 4 Shader Engines, and you can have a max of 16 CUs per Shader Engine... Raja said in an AnandTech interview that they didn't have enough time to make a GCN card with more Shader Engines work... I think that claim shows how hard it is to go beyond 4 Shader Engines in current GCN (that is the limitation we are talking about).

Of course, a newly redesigned architecture could break these limitations.
 

FranXico

Member
I've seen this rumor before, a few weeks back, and I must say, it's the only leak I believe is real...

I think specs will improve though: 32GB GDDR6 (games) + 4-8GB DDR4 (OS) seems like it will make it into the final kit... They could then fuse the GDDR6 with the DDR4 as an extended RAM allocation if the OS does not use all of the DDR4... Would be great for devs...
You're probably the only one who believes that leak. Either we are going to get a bit less RAM, or the PS5 will cost more than $600...
 