
Rumor: PS5 devkits ~ 13 TFLOPS


shark sandwich

tenuously links anime, pedophile and incels
If we’re talking about Vega 64-like performance then I’m not so sure HBM2 would be a handicap. 8GB HBM2 would give it just slightly more VRAM bandwidth than Vega 64 has.

Might not be worth it to go any higher than that. Radeon VII has more than double the bandwidth but that didn’t translate to much of a performance increase.

I also don’t see how split VRAM/system RAM would be a handicap. This is how PCs have done things since forever. It was only a negative for PS3 because the amount of VRAM was so pathetic.

So yeah, I think some bargain 8 GB HBM2 + some DDR4 system memory would be a sensible configuration for a console with slightly below Vega 64 performance.
 

GermanZepp

Member
Once again RAM will be the deciding factor this gen: unified or split, and the type.
I wouldn't be surprised if MS went with unified GDDR6 and Sony split the pool with HBM2 and DDR4.
Not sure who would have the advantage; I know CPU tasks are latency sensitive, but a unified pool is easier for devs.

The rest of the specs we know: both are using Zen 2 8-core (SMT might be disabled or completely removed),
GPUs in the 12-14 TFLOPS range. Both are using SSDs.

Whoever miscalculates the RAM will lose, just like last gen with MS and their DDR3 and ESRAM taking up half the SoC.

I think MS and Sony know what amount and kind of RAM to put in their consoles. And in that sense, whichever kind is fastest for their particular strategy is the better choice. Maybe RAM is a defining piece of the console, but not as defining as the other components; to that extent, they have a budget for RAM. The question is how much of a loss they're going to eat.
I mean, if mandatory Kinect hadn't existed, I think MS could have built a better Xbox One. I think the MS and Sony machines are going to be similar in power; as always, one is going to be slightly more powerful, but the custom stuff is going to mark the difference next gen. IMO.

I don't know if I'm explaining myself well.
 
Last edited:

llien

Member


Who?

13 TFLOPS is Vega II levels.
That's a 331 mm^2, 250 W+ chip.

I'm puzzled how they could get to that with PS5, given recent Navi leaks.

It's Zen 2 clock numbers and we don't know if it's the final silicon or not.

There is a clip of an AMD engineer stating that the clock-boosting game is over and to expect more cores instead, so a 4.2 GHz max is not surprising.
 
Last edited:

xGreir

Member
HBM tech would mean less memory and less bandwidth
The opposite of what you want lol

Ok, I'm lost here right now.

How is GDDR6 supposed to have more bandwidth than HBM2/3? I mean, it was always said that HBM was making the difference in that matter.

I really mean it, I thought that HBM was far superior in Bandwidth xD
 

shark sandwich

tenuously links anime, pedophile and incels
Ok, I'm lost here right now.

How is GDDR6 supposed to have more bandwidth than HBM2/3? I mean, it was always said that HBM was making the difference in that matter.

I really mean it, I thought that HBM was far superior in Bandwidth xD
It depends on the # of chips/HBM stacks, the memory clock speed, and the bus width.

Radeon VII has 4 stacks of HBM2 @ 2 GHz which gives it 1 TB/s bandwidth. That blows away anything else on the market.

Rumored specs are saying 2 stacks @ 1.7 GHz which would give it around 440 GB/s. (Which would put it just slightly below Vega 64.)

Latter could easily be surpassed with GDDR6 if they used enough chips/wide enough memory bus. See SonGoku post #747
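For reference, the arithmetic behind those figures is just stacks x bus width x per-pin data rate. A minimal sketch in Python, assuming each HBM2 stack exposes a 1024-bit interface and treating the quoted "2 GHz"/"1.7 GHz" as effective per-pin rates in Gbit/s:

```python
# Back-of-the-envelope HBM2 bandwidth: each stack has a 1024-bit interface,
# so peak GB/s = stacks * 1024 bits * per-pin rate (Gbit/s) / 8 bits-per-byte.
def hbm2_bandwidth_gb_s(stacks: int, pin_rate_gbps: float) -> float:
    return stacks * 1024 * pin_rate_gbps / 8

print(hbm2_bandwidth_gb_s(4, 2.0))  # Radeon VII: 4 stacks @ 2.0 -> 1024.0 GB/s ("1 TB/s")
print(hbm2_bandwidth_gb_s(2, 1.7))  # rumored config: 2 stacks @ 1.7 -> 435.2 GB/s (~440 GB/s)
print(hbm2_bandwidth_gb_s(4, 1.7))  # 4 stacks @ 1.7 (asked about further down) -> 870.4 GB/s
```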
 

CyberPanda

Banned
Should I get excited? Or is this a wack/BS fake?
It’s a rumour at this point, so I guess try not to get too excited? Hehe
Who?

13 TFLOPS is Vega II levels.
That's a 331 mm^2, 250 W+ chip.

I'm puzzled how they could get to that with PS5, given recent Navi leaks.



There is a clip of an AMD engineer stating that the clock-boosting game is over and to expect more cores instead, so a 4.2 GHz max is not surprising.
It's just a random guy I came across on Twitter, and I guess we'll see if he's legit or not soon enough.
 

xGreir

Member
It depends on the # of chips/HBM stacks, the memory clock speed, and the bus width.

Radeon VII has 4 stacks of HBM2 @ 2 GHz which gives it 1 TB/s bandwidth. That blows away anything else on the market.

Rumored specs are saying 2 stacks @ 1.7 GHz which would give it around 440 GB/s. (Which would put it just slightly below Vega 64.)

Latter could easily be surpassed with GDDR6 if they used enough chips/wide enough memory bus. See SonGoku post #747

Well, if it depends on the stacks, it could well be 4 stacks or whatever they want. I mean, if they go the HBM way, I'm sure they just wouldn't settle for inferior bandwidth xD
 

mckmas8808

Banned
Well, if it depends on the stacks, it could well be 4 stacks or whatever they want. I mean, if they go the HBM way, I'm sure they just wouldn't settle for inferior bandwidth xD

What if it was 4 stacks @ 1.7 GHz? How much bandwidth would that be?
 
It’s a rumour at this point, so I guess try not to get too excited? Hehe

It's just a random guy I came across on Twitter, and I guess we'll see if he's legit or not soon enough.
That twitter account doesn't even try to appear trustworthy. It's a consolewars comedian.
 

Gamernyc78

Banned
That twitter account doesn't even try to appear trustworthy. It's a consolewars comedian.

LOL I haven't looked at his Twitter but I'll take your word for it. I can't wait till both consoles are fully revealed with all the info and we can see what we are getting.
 

xool

Member
What if it was 4 stacks @ 1.7 GHz? How much bandwidth would that be?

If 4 stacks at 2 GHz is 1 TB/s (1000 GB/s) - that's Radeon VII -

then 4 stacks at 1.7 GHz is 1 * (1.7/2) = 0.85 TB/s

(the rumored 2-stack config at 1.7 GHz would be half that, ~0.425 TB/s - not sure if 1 TB here means 1000 or 1024)
 
Last edited:
Guys, I just want to point out again how much of a fraud this AdoredTV is, because a lot of you are basing your Navi expectations on what he puts in his videos.

Back in December last year, he released that hype video on Navi and Ryzen 3000, and recently made another clickbait video on Navi, where he backtracked, but again the info was likely his own fiction.

Today, a Ryzen 16-core engineering sample leaked via TUM_APISAK, who is 100% legit. The Zen 2 sample:

3.3 GHz base clock, 4.2 GHz boost, 16 cores / 32 threads.



This is the CPU that Adored's chart claimed would run at a 4.3 GHz base clock and 5 GHz boost. Not even fucking close. Of course, engineering samples will go up about 200-300 MHz, and I don't expect anyone to know the exact frequencies even if they had legit leak info. But the figures he was pushing (also a 12-core with 4.2 GHz base and 4.8 GHz boost) were so stupid and unrealistic that I called him out, but he kept trying to obfuscate.

The guy is full of shit and it's slowly unravelling, but y'all keep trusting him because he got the RTX naming right. He's got legions of defenders too; I just can't understand it.
 
Last edited:

LordOfChaos

Member
Once again RAM will be the deciding factor this gen: unified or split, and the type.
I wouldn't be surprised if MS went with unified GDDR6 and Sony split the pool with HBM2 and DDR4.

Not sure who would have the advantage; I know CPU tasks are latency sensitive, but a unified pool is easier for devs.

The rest of the specs we know: both are using Zen 2 8-core (SMT might be disabled or completely removed),
GPUs in the 12-14 TFLOPS range. Both are using SSDs.

Whoever miscalculates the RAM will lose, just like last gen with MS and their DDR3 and ESRAM taking up half the SoC.



In either case it's not going to be like split memory in prior generations. All game memory will be a unified pool. The ~4 GB of DDR4 is to run the OS and multitaskable apps. So you still get the advantage of having the CPU and GPU memory in the same physical pool for everything the game needs, while not spending more for memory the OS doesn't benefit from. GPGPU compute, which is the big beneficiary of unified memory, will have that intact.
 
Last edited:

SonGoku

Member
I wouldn't be surprised if MS went with unified GDDR6 and Sony split the pool with HBM2 and DDR4.
Sony wouldn't be stupid enough to fall into such a crippled design. Cerny knows better.
Not sure who would have the advantage,
MS would have a huge bandwidth advantage
If we’re talking about Vega 64-like performance then I’m not so sure HBM2 would be a handicap. 8GB HBM2 would give it just slightly more VRAM bandwidth than Vega 64 has.
The X already has 320 GB/s for a measly 6 TF. With next-gen games designed around the new spec (Navi, 12.5 TF), bandwidth will be one of the factors that makes the GPU punch above its weight, especially considering GCN.

The bandwidth from a measly 8 GB HBM2 stack wouldn't be enough to pick up the DDR4 slack.
 
Last edited:

SonGoku

Member
Ok, I'm lost here right now.

How is GDDR6 supposed to have more bandwidth than HBM2/3? I mean, it was always said that HBM was making the difference in that matter.

I really mean it, I thought that HBM was far superior in Bandwidth xD
It is. If you compare the same pool size, HBM wins by a big margin, but once you factor in price you get much less HBM memory than GDDR6 for the same money. Less memory means fewer stacks, which in turn translates to lower bandwidth.
You would need 16 GB of HBM2 to compete with 24 GB of GDDR6. GDDR6 wins with a massively bigger pool and only slightly less bandwidth than HBM2.
 
Last edited:

xGreir

Member
It is. If you compare the same pool size, HBM wins by a big margin, but once you factor in price you get much less HBM memory than GDDR6 for the same money. Less memory means fewer stacks, which in turn translates to lower bandwidth.
You would need 16 GB of HBM2 to compete with 24 GB of GDDR6. GDDR6 wins with a massively bigger pool and only slightly less bandwidth than HBM2.

Yeah, I've read something like it depends on the number of stacks used, more or less.

So... Yeah, I prefer just 8 TF, and use all the budget left in 60 GB of HBM :messenger_ok: :messenger_smirking:
 

SonGoku

Member
Yeah, I've read something like it depends on the number of stacks used, more or less.

So... Yeah, I prefer just 8 TF, and use all the budget left in 60 GB of HBM :messenger_ok: :messenger_smirking:
Even if you go that route it doesn't make sense: for the money you invest in HBM2 you can get a much bigger GDDR6 pool, which in turn has more bandwidth.
Also, you will need HBM3 to go over 16 GB, and the budget headroom from going with an 8 TF GPU won't make up the costs.

I hope one day HBM tech replaces GDDRx, but today is not that day.
 

SonGoku

Member
This is the CPU that Adored's chart claimed would run at a 4.3 GHz base clock and 5 GHz boost.
Did he specifically mention the 16-core variant? Chips with lower core counts tend to clock higher.
I mean we got lots in here now claiming, after that latest clickbait video on Navi (he's trying to get more Patreons), that PS5 won't be 12-14TFlops because that clown is making things up again and saying Navi is in a terrible state.
I knew it wasn't right; it makes no sense to use for consoles a broken arch that's worse than Vega and that engineers can't wait to ditch.
GPGPU compute, which is the big beneficiary of unified memory, will have that intact.
Didn't know that, can you explain how? or link a vid/article
 

TeamGhobad

Banned
It is. If you compare the same pool size, HBM wins by a big margin, but once you factor in price you get much less HBM memory than GDDR6 for the same money. Less memory means fewer stacks, which in turn translates to lower bandwidth.
You would need 16 GB of HBM2 to compete with 24 GB of GDDR6. GDDR6 wins with a massively bigger pool and only slightly less bandwidth than HBM2.

what do you do about latency? cpu side?
 

CrustyBritches

Gold Member
I find AdoredTV can be inaccurate in the sense that he goes by the best-case marketing slides from AMD instead of their historical real-world performance. So whatever he says, add more power consumption and a higher MSRP and you get a close approximation of the truth.
 
Last edited:

SonGoku

Member
what do you do about latency? cpu side?
Didn't seem to be an issue for the Jaguar CPUs, and it would affect Zen 2 even less, so no noticeable performance penalty.
The CPU jump is huge next gen; the GPU jump, not so much. Better to lose some CPU performance than GPU.

That's my take on it.
 
Last edited:

xool

Member
I think I said this before - but there's not a lot between HBM2 and GDDR6 in terms of bandwidth/bang per buck .. both are the most modern versions of each tech as of 2019

eg suppose you want 8GB ram and are buying the latest chips from Samsung :


Tech             | Chips                              | Pin data rate | Chip data bus width | Bandwidth calc           | Bandwidth in GB/s
GDDR6 (option A) | 4x K4ZAF325BM-HC14 (16 Gbit)       | 14 Gbit/s     | 32 bit              | 4 x 32 x 14 = 1792 Gb/s  | 224
GDDR6 (option B) | 8x K4Z80325BC-HC16 (8 Gbit)        | 16 Gbit/s     | 32 bit              | 8 x 32 x 16 = 4096 Gb/s  | 512
HBM2             | 1x KHA884901X-MN13 (8 GByte stack) | 2.4 Gbit/s    | 1024 bit            | 2.4 x 1024 = 2457.6 Gb/s | 307

[double or triple the numbers above for 16GB or 24GB - there aren't bigger chips to buy right now so relatively nothing changes]

So depending on which option you pick, GDDR6 either wins or loses - but the winning option needs double the chips (which relatively increases the price) .. and the second caveat is that HBM2 is (supposedly) more expensive than GDDR6

I don't think there is a winning choice here - you simply get what you pay for .. (at the end of next gen maybe one or the other will have been shown to be a better choice) - but right now I don't think it's possible to say
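The bandwidth column can be sanity-checked the same way: chips (or stacks) times per-chip bus width times per-pin rate. A small sketch in Python, using only the figures from the table above (the Samsung part numbers are just labels here):

```python
# Recompute the bandwidth column of the table above.
# Each entry: (label, number of chips/stacks, bus width per chip in bits, per-pin rate in Gbit/s)
configs = [
    ("GDDR6 option A (4x K4ZAF325BM-HC14)", 4,   32, 14.0),
    ("GDDR6 option B (8x K4Z80325BC-HC16)", 8,   32, 16.0),
    ("HBM2 (1x KHA884901X-MN13)",           1, 1024,  2.4),
]

for label, chips, bus_bits, rate in configs:
    gbit_s = chips * bus_bits * rate          # aggregate Gbit/s
    print(f"{label}: {gbit_s:g} Gb/s = {gbit_s / 8:g} GB/s")
# -> 1792 Gb/s = 224 GB/s, 4096 Gb/s = 512 GB/s, 2457.6 Gb/s = 307.2 GB/s
```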
 
Last edited:

SonGoku

Member
xool xool
Where are you getting your pricing figures from? Everywhere i read HBM2 is significantly more expensive than GDDR6
16GB is the limit for HBM2 btw
 

xool

Member
xool xool
Where are you getting your pricing figures from? Everywhere i read HBM2 is significantly more expensive than GDDR6

I don't have prices - but I expect that big customers like Sony/MS will get realistic prices for HBM2 - so as far as articles like this are concerned (SK Hynix: Customers Willing to Pay 2.5 Times More for HBM2 Memory), I'm calling BS - customers can just go use GDDR6 (or call a competitor) if the manufacturer gets greedy on HBM2 prices.

..also the "HBM2 is expensive" stuff was from 2 years ago, when it was brand new tech, and probably was expensive .. also at the time they were comparing with GDDR5 not GDDR6 - afaik the latest musings on HBM2 are much more about affordability, not about milking early adopters.

tldr - there is a competing product and competitors in the market - either HBM2 will become reasonably priced now/soon or it's a lemon .. (I don't think it's a lemon)



[edit]
xool xool
16GB is the limit for HBM2 btw

Samsung is only advertising 8 GB stacks today (which is why I chose that one), but the limit was also recently upped to 24 GB per stack (you can't buy those yet).
 
Last edited:
I think I said this before - but there's not a lot between HBM2 and GDDR6 in terms of bandwidth/bang per buck .. both are the most modern versions of each tech as of 2019

eg suppose you want 8GB ram and are buying the latest chips from Samsung :


Tech             | Chips                              | Pin data rate | Chip data bus width | Bandwidth calc           | Bandwidth in GB/s
GDDR6 (option A) | 4x K4ZAF325BM-HC14 (16 Gbit)       | 14 Gbit/s     | 32 bit              | 4 x 32 x 14 = 1792 Gb/s  | 224
GDDR6 (option B) | 8x K4Z80325BC-HC16 (8 Gbit)        | 16 Gbit/s     | 32 bit              | 8 x 32 x 16 = 4096 Gb/s  | 512
HBM2             | 1x KHA884901X-MN13 (8 GByte stack) | 2.4 Gbit/s    | 1024 bit            | 2.4 x 1024 = 2457.6 Gb/s | 307

[double or triple the numbers above for 16GB or 24GB - there aren't bigger chips to buy right now so relatively nothing changes]

So depending on which option you pick, GDDR6 either wins or loses - but the winning option needs double the chips (which relatively increases the price) .. and the second caveat is that HBM2 is (supposedly) more expensive than GDDR6

I don't think there is a winning choice here - you simply get what you pay for .. (at the end of next gen maybe one or the other will have been shown to be a better choice) - but right now I don't think it's possible to say

where do you get your pricing information?

in such a configuration:

[attached image: ps5layout0fk4h.png]



GDDR6

it would cost like 12 x $15 = $180 for 672-768 GB/s



HBM2

2 times 4GB : 2 x $75 = $150 for ~500GB/s

or

4 times 4GB: 4 x $75 = $300 for ~1000GB/s


(if you don't buy in Sony-scale bulk, of course)
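Taking those per-part prices at face value (they're rough forum estimates, not actual contract prices), a quick cost-per-bandwidth comparison would look something like this:

```python
# Cost vs. bandwidth for the hypothetical configs above.
# Prices ($15 per GDDR6 chip, $75 per 4 GB HBM2 stack) and bandwidth figures are the
# rough numbers from this post, not market data.
configs = [
    ("12x GDDR6 chips",      12, 15.0,  720.0),  # midpoint of the 672-768 GB/s range
    ("2x 4 GB HBM2 stacks",   2, 75.0,  500.0),
    ("4x 4 GB HBM2 stacks",   4, 75.0, 1000.0),
]

for label, parts, unit_price, bandwidth_gb_s in configs:
    total = parts * unit_price
    print(f"{label}: ${total:.0f} total, ~${total / bandwidth_gb_s:.2f} per GB/s")
# -> roughly $0.25-0.30 per GB/s for every option
```

On those (very rough) numbers the options land in the same ballpark per GB/s, which lines up with xool's "you get what you pay for" point above.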
 
Last edited:

SonGoku

Member
I don't have prices - but I expect that big customers like Sony/MS will get realistic prices for HBM2
You can use this reasoning for GDDR6 too, though... Sony can get 24 GB for $100-150 in long-term, high-volume contracts.
Can they get a comparable amount of HBM3 for the same price?
 
Last edited:

xGreir

Member
Even if you go that route it doesn't make sense: for the money you invest in HBM2 you can get a much bigger GDDR6 pool, which in turn has more bandwidth.
Also, you will need HBM3 to go over 16 GB, and the budget headroom from going with an 8 TF GPU won't make up the costs.

I hope one day HBM tech replaces GDDRx, but today is not that day.

CAN'T A MAN OF GOOD HAVE DREAMS?

Because right now it's just 30 bucks in early access. Don't break my dreams, be a dreamer Boi
 
Last edited:

xool

Member
You can use this reasoning for GDDR6 too, though... Sony can get 24 GB for under $100-150 in long-term, high-volume contracts.

Yes - but for the same amount of memory, GDDR6 and HBM 1, 2, or 3 will all need about the same die space (i.e. mm^2 of silicon), as they're still based on the same DRAM memory cell. There'll be price differences, not least because packaging is different: HBM stacks silicon into a single "chip", whereas you generally buy multiple GDDR6 chips for the same amount of memory.

So I'm assuming that the cost to manufacture will actually be very similar, excluding the packaging costs - so with the same profit margins, both should be at similar prices per GB.

[edit]
Can they get a comparable amount of HBM3 for the same price?
No idea if HBM3 will be ready for this gen in time.
 
Last edited:

SonGoku

Member
No idea if HBM3 will be ready for this gen in time.
I said that because I thought you couldn't get more than 16 GB with HBM2.
Also, wouldn't supply and demand factor into pricing? ATM more customers are buying GDDR6 and more factories are producing it.
 

xool

Member
I said that because I thought you couldn't get more than 16 GB with HBM2.
Also, wouldn't supply and demand factor into pricing? ATM more customers are buying GDDR6 and more factories are producing it.

Absolutely - I was reading this PDF which talks about the costs of the interposer (CoWoS) that's necessary for HBM (but not GDDR6) - it says "Volume is by far the No.1 factor in the cost equation". They do mention that orders are firm in the server market (cloud), but it's still niche.

It's a chicken-and-egg problem if they can't find customers for HBM to bring down the prices.

[edit] This article/video is interesting and by someone much better informed than me: https://www.extremetech.com/computing/289391-hbm2-vs-gddr6-new-video-compares-contrasts-memory-types - the takeaway I got from the article was "[Woo] says he doesn't expect to see HBM2 to be widely used in consumer hardware, now that GDDR6 is available" - a possible exception here is if Sony tries to make another slimline console like the PS4 and has cooling problems, in which case it might tip them towards HBM over GDDR6.
 
Last edited:

Imtjnotu

Member
Yes - but for the same amount of memory, GDDR6 and HBM 1, 2, or 3 will all need about the same die space (i.e. mm^2 of silicon), as they're still based on the same DRAM memory cell. There'll be price differences, not least because packaging is different: HBM stacks silicon into a single "chip", whereas you generally buy multiple GDDR6 chips for the same amount of memory.

So I'm assuming that the cost to manufacture will actually be very similar, excluding the packaging costs - so with the same profit margins, both should be at similar prices per GB.

[edit]

No idea if HBM3 will be ready for this gen in time.
Rambus was supposed to start producing HBM3 back in January. Haven't really looked into it though.
 

Imtjnotu

Member
I said that because I thought you couldn't get more than 16 GB with HBM2.
Also, wouldn't supply and demand factor into pricing? ATM more customers are buying GDDR6 and more factories are producing it.
Sony should just go with 24 GB of GDDR6 @ 864 GB/s and call it a day. I'm sure Samsung would cut them a deal, especially now that they have started mass production.
 