
Next-Gen PS5 & XSX |OT| Console tEch threaD


Evilms

Banned
Does using HBM2 reduce the size of the SoC compared to GDDR6?

A PS5 with a 320 mm² SoC vs. Anaconda at 380 mm² would lend some credibility to the HBM rumour for PS5.

HBM memory is more expensive than GDDR and does not perform better in gaming, so I strongly doubt Sony will go that route; it would only increase the bill unnecessarily.


 
Last edited:

-kb-

Member
HBM memory is more expensive than GDDR and does not perform better in gaming, so I strongly doubt Sony will go that route; it would only increase the bill unnecessarily.



My understanding is that HBM is meant to have better latency. If so, at the same bandwidth it should perform better for operations that benefit from low latency (such as reading and writing lots of buffers that are bigger than the cache).
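To illustrate the kind of workload I mean (a rough sketch only; the sizes are arbitrary and nothing here measures HBM or GDDR specifically), here's a pointer chase through a buffer far bigger than cache, which pays the full memory latency on every load, next to a streaming pass that is mostly bandwidth-bound:

/* Minimal sketch: latency-bound pointer chasing vs bandwidth-bound streaming.
   Buffer sizes are illustrative; nothing here is specific to HBM or GDDR. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024)   /* ~512 MB of indices, far larger than any cache */

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    float  *data = malloc(N * sizeof *data);
    if (!next || !data) return 1;

    /* Shuffle the indices so each load depends on the previous one
       (latency-bound: the memory latency is paid on every step). */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];   /* dependent loads: latency-bound */
    clock_t t1 = clock();

    float sum = 0.0f;
    for (size_t i = 0; i < N; i++) { data[i] = (float)i; sum += data[i]; }  /* streaming: bandwidth-bound */
    clock_t t2 = clock();

    printf("pointer chase: %.2f s, streaming: %.2f s (p=%zu, sum=%f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, p, sum);
    free(next); free(data);
    return 0;
}

The first loop is the sort of access pattern that would benefit from lower memory latency; the second mostly cares about raw bandwidth.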
 

Aceofspades

Banned
HBM memory is more expensive than GDDR and does not perform better in gaming, so I strongly doubt Sony will go that route; it would only increase the bill unnecessarily.



Your link doesn't support your argument that GDDR6 is better. It says HBM can offer higher bandwidth and uses less power; the main con is that it's expensive to produce.

Edit: back to our topic. The reason I feel HBM is better suited for consoles is that it uses far less power than GDDR6, almost 4x less, which in a closed system is huge: the manufacturer can allocate that power budget somewhere else (the GPU).
 
Last edited:

Evilms

Banned
Your link doesn't support your argument that GDDR6 is better. It says HBM can offer higher bandwidth and uses less power; the main con is that it's expensive to produce.

Edit: back to our topic. The reason I feel HBM is better suited for consoles is that it uses far less power than GDDR6, almost 4x less, which in a closed system is huge: the manufacturer can allocate that power budget somewhere else (the GPU).

Bandwidth isn't everything: the RX Vega 56/64 and the Radeon VII both use HBM2, yet they get beaten by the GTX 10 series on GDDR5X and the RTX 20 series on GDDR6. So I maintain what I said: HBM is expensive and unnecessary for purely gaming use.
 
Last edited:

Aceofspades

Banned
Bandwidth isn't everything: the RX Vega 56/64 and the Radeon VII both use HBM2, yet they get beaten by the GTX 10 series on GDDR5X and the RTX 20 series on GDDR6. So I maintain what I said: HBM is expensive and unnecessary for purely gaming use.

I agree with the bolded, BW is not everything. But power/heat can influence the design of a console.

Notice that the examples you used are discrete gaming GPUs, not closed-system boxes like consoles.

Also, there is a reason why the highest-end products from both AMD and Nvidia use HBM and not GDDR6.
 

-kb-

Member
Bandwidth isn't everything: the RX Vega 56/64 and the Radeon VII both use HBM2, yet they get beaten by the GTX 10 series on GDDR5X and the RTX 20 series on GDDR6. So I maintain what I said: HBM is expensive and unnecessary for purely gaming use.

There's a big difference between consoles and desktops though. Desktop cards with HBM are unlikely to have its advantages leveraged much, because those advantages don't exist on the vast majority of video cards. On a console, developers could easily leverage the advantages HBM brings.
 

DanielsM

Banned
I'm assuming that with the SSD solution, increased RAM would have little to no benefit? So is a sub-16 GB RAM solution possible for the PS5?
 

-kb-

Member
I'm assuming that with the SSD solution, increased RAM would have little to no benefit? So is a sub-16 GB RAM solution possible for the PS5?

That all depends on how much RAM you need, how often you need to go out to the disk, and whether you could even avoid those reads with more RAM. An SSD is still vastly slower than actual DRAM in both bandwidth and latency.
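To put ballpark numbers on that gap (the bandwidth figures below are illustrative assumptions, not the actual console specs), a quick sketch of how long it takes to move an 8 GB working set from each:

/* Rough transfer-time comparison. The bandwidth figures are illustrative
   assumptions (a fast NVMe SSD vs a GDDR6-class pool), not console specs. */
#include <stdio.h>

int main(void)
{
    const double working_set_gb = 8.0;
    const double ssd_gbps  = 5.0;    /* assumed fast NVMe sequential read, GB/s */
    const double dram_gbps = 448.0;  /* assumed GDDR6-class unified pool, GB/s */

    printf("SSD : %.2f s to read %.0f GB\n", working_set_gb / ssd_gbps,  working_set_gb);
    printf("DRAM: %.3f s to read %.0f GB\n", working_set_gb / dram_gbps, working_set_gb);
    /* And that's just bandwidth; SSD access latency is measured in tens of
       microseconds vs roughly 100 ns for DRAM, two to three orders of magnitude. */
    return 0;
}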
 

DanielsM

Banned
To my knowledge 24 GB of GDDR6 is possible since it already exists but it's really expensive right now.


But why (with the SSD sitting there, and assuming it can feed data at a high enough rate)? It was actually a comment I saw on Reddit last week: the guy asked why devs really need more than 8 GB to work in. I'm looking at it from a power-consumption-savings standpoint, meaning anyone can make a console that can rival the latest PC graphics cards, but they can't do it at a total system consumption of 150-200 W.
 

Panajev2001a

GAF's Pleasant Genius
HBM memory is more expensive than GDDR and does not perform better in gaming, so I strongly doubt Sony will go that route; it would only increase the bill unnecessarily.



HBM2 can deliver the same bandwidth at lower power consumption and PCB complexity, and the higher the bandwidth target, the worse it gets for GDDR6. GDDR6 is cheaper if you can afford the power draw and the extra board complexity.
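As a point of reference (the configurations below are just typical examples, not any console's spec): peak bandwidth is simply bus width times per-pin data rate, and HBM2 hits GDDR6-class numbers with a much wider bus at far lower clocks, which is where the power and routing savings come from.

/* Peak bandwidth = (bus width in bits / 8) * per-pin data rate.
   Configurations below are typical examples, not any specific console. */
#include <stdio.h>

static double peak_gbps(int bus_bits, double gbps_per_pin)
{
    return bus_bits / 8.0 * gbps_per_pin;
}

int main(void)
{
    /* GDDR6: narrow bus, very high per-pin rate (e.g. 256-bit @ 14 Gbps). */
    printf("GDDR6 256-bit @ 14 Gbps : %.0f GB/s\n", peak_gbps(256, 14.0));
    /* HBM2: very wide bus, low per-pin rate (e.g. 2 stacks x 1024-bit @ 2 Gbps). */
    printf("HBM2  2048-bit @ 2 Gbps : %.0f GB/s\n", peak_gbps(2048, 2.0));
    return 0;
}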
 

pawel86ck

Banned
That is wrong, PS4 has 2 GCP.
Really? Can you link an article or a PS4 diagram where people can read about it? So far all the information I have read mentions one command processor in the PS4 GPU, not two.

Here's a Polaris RX 480 diagram, and it shows just one command processor:
[RX 480 block diagram]


Here's the Xbox X SoC diagram, and you can see two command processors:
[Xbox X SoC diagram]
 
Last edited:

ethomaz

Banned
Really? Can you link an article or a PS4 diagram where people can read about it? So far all the information I have read mentions one command processor in the PS4 GPU, not two.

Here's a Polaris RX 480 diagram, and it shows just one command processor:
[RX 480 block diagram]


Here's the Xbox X SoC diagram, and you can see two command processors:
[Xbox X SoC diagram]
The RX 480 has half the ACEs of the PS4 too.

I believe the PS4 has only one GCP.
 
Last edited:

ethomaz

Banned
None of these are making any sense to me.
It makes sense to me.

It shows AMD will release three Navi 10 cards, two Navi 12 cards, one Navi 14 card, and two Navi 21 cards.

Relating these codes to the consoles is what doesn't make sense.
 
Last edited:
11 TF baseline, I really don't see either of them going lower than 64

Yeah, especially taking into account that both 4K and RT are resource hogs.
What do you think of this?

 

ethomaz

Banned
BTW, somebody on ERA thinks Lite means versions with units disabled.

Could it be console? Yes.
Could it be PC cards? Yes.
Could it be both? Yes.

The table shows three Navi 10 versions, and AMD has launched only two: the RX 5700 and RX 5700 XT.
There is one more chip not launched yet.
 
Last edited:

LordOfChaos

Member
fwiw, biggest TSMC 7nm chip ever


"Having our first Versal ACAP silicon back from TSMC ahead of schedule and shipping to early access customers is a historic milestone and engineering accomplishment. It is the culmination of many years of software and hardware investments and everything we’ve learned about architectures over the past 35 years. The Versal ACAP is a major technology disruption and will help spark a new era of heterogeneous compute acceleration for any application and any developer.”

50 billion transistors!!
 
Last edited:

Bogroll

Likes moldy games
PS5 will ALWAYS be more powerful because it has the power of Dreams©
Petaflops of Neverending creativity flowing from users directly to Sony HQ, it will be unstoppable!!
It's a good thing Saddam Hussein isn't around anymore, else the world would cease to exist when the PS5 is released :)
 
I think HSA had an issue with AMD never supporting it on the software side. I thought the PS4 used some sort of HSA with Garlic and Onion, but it's been forever since I read into it.
HSA and hUMA are supported on consoles but not on regular PCs: consoles have a custom software stack (OS + drivers + games) and no need for backwards compatibility with old software (since they're not IBM PC compatible), whereas AMD APUs on PC pretend to be a discrete CPU and GPU with separate RAM pools via BIOS emulation.

HSA is old news though, HBCC is the new buzzword in town...
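A rough illustration of why that matters (the PCIe figure is an assumed practical rate for a gen3 x16 link, not a measurement): on a split-pool PC, sharing a buffer with the GPU means staging a copy across PCIe, while on a unified hUMA pool the GPU can read the memory the CPU just wrote, in place.

/* Illustrative cost of a staging copy vs zero-copy sharing.
   PCIe figure is an assumed practical rate for a gen3 x16 link, not measured. */
#include <stdio.h>

int main(void)
{
    const double asset_gb = 1.0;
    const double pcie3_x16_gbps = 12.0;  /* assumed practical throughput, GB/s */

    /* Split pools (typical PC dGPU): CPU writes system RAM, driver copies to VRAM. */
    printf("staging copy over PCIe: ~%.0f ms per %.0f GB\n",
           asset_gb / pcie3_x16_gbps * 1000.0, asset_gb);

    /* Unified pool with hUMA (consoles): CPU and GPU address the same memory,
       so the "transfer" is just passing a pointer -- effectively zero copies. */
    printf("unified pool: no copy, GPU reads the buffer in place\n");
    return 0;
}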
 

Evilms

Banned
NV_NAVI10_P_A0 = 1,
NV_NAVI12_P_A0 = 10,
NV_NAVI14_M_A0 = 20,
NV_NAVI21_P_A0 = 40,
NV_NAVI10_LITE_P_A0 = 0x80,
NV_NAVI10_LITE_P_B0 = 0x81,
NV_NAVI12_LITE_P_A0 = 0x82,
NV_NAVI21_LITE_P_A0 = 0x90,
NV_UNKNOWN = 0xFF
 
Last edited:

LordOfChaos

Member
NV_NAVI10_P_A0 = 1,
NV_NAVI12_P_A0 = 10,
NV_NAVI14_M_A0 = 20,
NV_NAVI21_P_A0 = 40,
NV_NAVI10_LITE_P_A0 = 0x80,
NV_NAVI10_LITE_P_B0 = 0x81,
NV_NAVI12_LITE_P_A0 = 0x82,
NV_NAVI21_LITE_P_A0 = 0x90,
NV_UNKNOWN = 0xFF

I thought it was looking like the RX 5700 was Navi 12, given the CU count relative to Vega.
 
Last edited:

Ovech-King

Gold Member
So I just realized that Scarlett at 672 GB/s of GDDR6 bandwidth is better than the 2080 Ti, so we can assume native 4K 60 fps is confirmed on Microsoft's side.
 
Last edited:
So I just realized that Scarlett at 672 GB/s of GDDR6 bandwidth is better than the 2080 Ti, so we can assume native 4K 60 fps is confirmed on Microsoft's side.
Not a fair comparison, since 2080 Ti is a discrete GPU with a dedicated VRAM bus.

Next-gen APUs will have to feed more hungry CPU cores (Zen 2/AVX256), so memory contention will always be an issue in unified memory pools.
 

LordOfChaos

Member
So I just realized that Scarlett at 672 GB/s of GDDR6 bandwidth is better than the 2080 Ti, so we can assume native 4K 60 fps is confirmed on Microsoft's side.

For one, Nvidia notoriously makes significantly better use of bandwidth; second, the CPU is using the same bandwidth pool here; and third, I doubt the memory bandwidth available will be the limit for a chip in this class, nor would quadrupling the bandwidth bring it closer to a 2080 Ti. The chip is the limit.

Last, as always it's up to the developers how to apply the extra power each generation affords.
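Putting rough numbers on that: the 672 GB/s figure lines up with a 384-bit bus at 14 Gbps (an assumption on my part), while the 2080 Ti's 352-bit bus at 14 Gbps gives 616 GB/s dedicated to the GPU alone. A quick sketch, with a purely illustrative figure for how much of the shared pool the CPU might eat:

/* Bandwidth comparison. Scarlett's 384-bit @ 14 Gbps is an assumption that
   reproduces the quoted 672 GB/s; the CPU share is purely illustrative. */
#include <stdio.h>

static double peak_gbps(int bus_bits, double gbps_per_pin)
{
    return bus_bits / 8.0 * gbps_per_pin;
}

int main(void)
{
    double scarlett  = peak_gbps(384, 14.0);  /* 672 GB/s, matches the rumour */
    double rtx2080ti = peak_gbps(352, 14.0);  /* 616 GB/s, dedicated to the GPU */
    double cpu_share = 60.0;                  /* assumed CPU traffic in the shared pool */

    printf("Scarlett pool : %.0f GB/s (CPU+GPU shared)\n", scarlett);
    printf("2080 Ti VRAM  : %.0f GB/s (GPU only)\n", rtx2080ti);
    printf("GPU-effective on Scarlett if CPU uses %.0f GB/s: ~%.0f GB/s\n",
           cpu_share, scarlett - cpu_share);
    return 0;
}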
 

Hellgardia

Member
NV_NAVI10_P_A0 = 1,
NV_NAVI12_P_A0 = 10,
NV_NAVI14_M_A0 = 20,
NV_NAVI21_P_A0 = 40,
NV_NAVI10_LITE_P_A0 = 0x80,
NV_NAVI10_LITE_P_B0 = 0x81,
NV_NAVI12_LITE_P_A0 = 0x82,
NV_NAVI21_LITE_P_A0 = 0x90,
NV_UNKNOWN = 0xFF

Not a bad lineup, but the problem is that apart from the RX 5700 series, the others will probably be up against Nvidia's 7 nm EUV lineup.
 

Ovech-King

Gold Member
Not a fair comparison, since 2080 Ti is a discrete GPU with a dedicated VRAM bus.

Next-gen APUs will have to feed more hungry CPU cores (Zen 2/AVX256), so memory contention will always be an issue in unified memory pools.

Fair, though platform optimization by the devs and future-proof choices by both Sony and Microsoft still give me peace of mind, because these machines need to deliver until 2023+. It may seem like a lot in mid-2019, but we have to look at longevity here.
 
Last edited:

ethomaz

Banned
NV_NAVI10_P_A0 = 1,
NV_NAVI12_P_A0 = 10,
NV_NAVI14_M_A0 = 20,
NV_NAVI21_P_A0 = 40,
NV_NAVI10_LITE_P_A0 = 0x80,
NV_NAVI10_LITE_P_B0 = 0x81,
NV_NAVI12_LITE_P_A0 = 0x82,
NV_NAVI21_LITE_P_A0 = 0x90,
NV_UNKNOWN = 0xFF

Is that official?

Because it basically confirms my guess that Lite means the same chip with units disabled... RX 5700 XT (Navi 10) and RX 5700 (Navi 10 Lite).
 

archy121

Member
IMO discussions of RT importance are overblown

RT is more of a PR booster for the next console war than a feature that can bring significant benefits without compromises in other areas. It's being oversold as a must-have when, in reality, the returns from immature first-generation tech will be minimal.

Cerny was correct not to give it too much attention; as a developer he knows which features will give actual, measurable returns. No spin doctoring.

Considering that even the full-blown Nvidia cards out next year, costing $1,400+, still won't be able to do full RT without compromises, I really don't know what people are expecting from the comparatively tiny AMD APUs. Miracles?

We will be much better off with a console that has next-to-no boot/load times and maximised CPU power to push FPS and AI.

If Sony were to limit/sacrifice the RT implementation and instead spend more resources on other areas of the APU, I would not see it as a negative.
 
Last edited:

LordOfChaos

Member
Just think a little about...

Why would AMD add console chip data to the Linux driver when no console chip will use Linux as its OS?

Well it's in there:


The consoles are going to use APUs, so when people say it's Navi 10 Lite, they mean the console GPU is based on it in terms of chip configuration, not that it uses the actual part. So Navi 10 Lite is a dedicated part for desktop or mobile, since the string is in the driver.
 

ethomaz

Banned
Well it's in there:


The consoles are going to use APUs, so when people say it's Navi 10 Lite, they mean the console GPU is based on it in terms of chip configuration, not that it uses the actual part. So Navi 10 Lite is a dedicated part for desktop or mobile, since the string is in the driver.
There is no console in that link.

There are actually three Navi 10 products.

Navi 10 Rev. A0 = RX 5700 XT
Navi 10 Lite Rev. A0 = RX 5700?
Navi 10 Lite Rev. B0 = RX 5700?

The Rev. B0 could be either a revision of RX 5700 or another Navi 10 chip with disabled units to be launched in the future.
 
Last edited:
Reading this thread and not understanding a lot of the tech stuff: are you lads suggesting Sony will cheap out and let Microsoft take the lead in graphics power?
 

ethomaz

Banned
Whoever thought the literal console chip would be in there?
Every chip needs to be in the driver, but the consoles' chips won't be added to the Linux driver by AMD themselves (unless somebody in the community wants to do it).
These are only PC card references.
 
Last edited:

LordOfChaos

Member
Every chip needs to be in the driver, but the consoles' chips won't be added to the Linux driver by AMD themselves (unless somebody in the community wants to do it).
These are only PC card references.

We know; these are just the PC products that the console chips will be based on.
 