
Next Xbox is ‘More Advanced’ Than the PS5 according to Insiders.

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
It's no different than optimizing for PC at this point. More toggles and switches for more effects and resolutions. Xbox One X games look phenomenal, but that doesn't mean the Xbox One held any games back. They just enable more settings and provide 4k Asset packs.

But the benefit of a console is that it's "NOT" like developing for a PC. Who cares about scaling when you are making an exclusive game for one console? And yes Xbox One X games were held back by the Xbox One S. Any exclusive X1X title would look miles better than they do now if they didn't have to support the "S".

I would rather have devs work from a top-tier baseline, pushing a Crysis (as in, a game that would not even run on the 4TF box regardless of the shared CPU and resolution) and taking advantage of all that power, than have them scale something that looks cross-gen/mid-gen-refresh from day zero (all gen long), with only a resolution boost and minor tweaks to shadows, etc., because of a handicap.

And what happens if there are mid gen refreshes 3-4 years in again? Even more redundancy of power or an ass'ed out 4TF system?

Exactly! I don't think people honestly realize how making games for a console is different than a PC. The mindset is different. If you are a dev and can design the game from the ground up to only work on a console that has 14 TFs of power, it'll be designed differently than if you had to support a 4TF console.
 

FranXico

Member
I'm so confused by this leaker. Why does it say 12 GBs of DDR4 RAM usable for games and 20 GBs of RAM usable for games?
8 GB HBM2 + 12 GB DDR4 = 20 GB RAM in total. A combination of "fast" and "slow" RAM that the SDK manages automatically. Developers may manage how to distribute the use of both memory pools.

Both memory pools are shared between the CPU and GPU.

This sounds interesting.
 
Last edited:

CyberPanda

Banned
So, why is Sony going for the split memory pool? Wouldn’t it be better to have just a unified GDDR6 memory? The split memory pool reminds me of XB1.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
So, why is Sony going for the split memory pool? Wouldn’t it be better to have just a unified GDDR6 memory? The split memory pool reminds me of XB1.

And I thought bandwidth was key in a console. Why go with HBM2 if the bandwidth is half of what it needs to be?
 
But the benefit of a console is that it's "NOT" like developing for a PC. Who cares about scaling when you are making an exclusive game for one console? And yes Xbox One X games were held back by the Xbox One S. Any exclusive X1X title would look miles better than they do now if they didn't have to support the "S".

Maybe, but the Xbox One X and the Xbox One S are fairly different. The XOS is 1.4TF with significantly slower RAM and the XOX is 6TF with significantly faster RAM. That's a very big difference, but that's also the difference needed to allow current-generation games to hit 4K. The difference for next-gen will be graphics at 1080p or 4K, give or take a few effects and toggles. The games will most likely look prettier on the beefed-up system, kind of like how games scale on a PC.

These consoles are becoming more PC-like in architecture, but they are still focused on games whereas the PC is a multi-tool. But to think they can't scale like they do now is a little naive, especially because all first party will end up on PC anyway and all third party will usually have PC ports.

EDITED HEAVILY.
 
Last edited:

ethomaz

Banned
I'm so confused by this leaker. Why does it say 12 GBs of DDR4 RAM usable for games and 20 GBs of RAM usable for games?
The pool for games is 8GB HBM2 + 12GB DDR4.

So, why is Sony going for the split memory pool? Wouldn’t it be better to have just a unified GDDR6 memory? The split memory pool reminds me of XB1.
There is no split... it is a single pool of memory.

8 GB HBM2 + 12 GB DDR4 = 20 GB RAM in total. A combination of "fast" and "slow" RAM that the SDK manages automatically. Developers may manage how to distribute the use of both memory pools.

Both memory pools are shared between the CPU and GPU.

This sounds interesting.
Not the SDK... HBCC is at the hardware level... the APU directly sees it as a single memory pool.

The key point is that this pool is accessed by 2 memory controllers, so the PS4 issue where CPU usage affects GPU access doesn't exist anymore.
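To show why that matters, here's a toy contention model with made-up numbers (not actual PS4/PS5 specs; PS4's real 176GB/s is used only as a familiar example, and the fast pool figure is illustrative). On one shared controller, every GB/s the CPU pulls also costs the GPU extra bandwidth because the controller keeps switching between two very different access patterns; with a controller per pool, that cross-penalty disappears.

```python
# Toy contention model (illustrative numbers only, not real console specs).
# Shared controller: CPU traffic costs the GPU MORE than the CPU consumes,
# because interleaving CPU and GPU requests adds turnaround/page-switch overhead.
# Split controllers: CPU traffic on its own pool costs the GPU nothing.

def gpu_bw_shared(total_bw, cpu_bw, overhead=1.5):
    """GPU bandwidth left on a single shared controller (GB/s).
    'overhead' is a made-up factor modelling the interleaving penalty."""
    return max(0.0, total_bw - cpu_bw * (1 + overhead))

def gpu_bw_split(gpu_pool_bw, cpu_bw):
    """Separate controllers: the GPU keeps its whole pool regardless of CPU load."""
    return gpu_pool_bw

for cpu in (0, 10, 20, 30):
    print(cpu, gpu_bw_shared(176, cpu), gpu_bw_split(448, cpu))
# CPU GB/s   shared-controller GPU GB/s   split-controller GPU GB/s
#   0                176.0                        448
#  10                151.0                        448
#  20                126.0                        448
#  30                101.0                        448
```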
 
Last edited:

CyberPanda

Banned
Can you explain to me what HBCC is? I asked before... :(

ethomaz

Banned
Can you explain to me what HBCC is? I asked before... :(
I did not see...

But it is better to copy from other sites:

What is HBCC?
HBCC is the controller for AMD’s high-bandwidth cache, what the company has functionally renamed from VRAM. There is no hard threshold as to what governs the naming designation of “high-bandwidth cache,” and should AMD produce a hypothetical GDDR5 Vega GPU, its framebuffer would also be named “high-bandwidth cache.” The card does not need HBM to have its framebuffer designated as HBC, in other words.

AMD’s High-Bandwidth Cache Controller is disabled by default. When enabled, the controller effectively converts VRAM into a last-level cache equivalent, then reserves a user-designated amount of system memory for allocation to the GPU. If the applications page-out of the on-card 8GB of HBM2, a trade-off between latency and capacity occurs and the GPU taps-in to system memory to grab its needed pages. If you’re storing 4K textures in HBM2 and exceed that 8GB capacity, and maybe need another 1GB for other assets, those items can be pushed to system memory and pulled via the PCIe bus. This is less effective than increasing on-card memory, but is significantly cheaper than doing so – even in spite of DDR pricing. Latency is introduced by means of traveling across the PCIe interface, through the CPU, and down the memory bus, then back, but it’s still faster than having to dump memory and swap data locally.

In simple terms, HBCC makes all the system RAM + VRAM appear as a single memory pool to the GPU (or APU).
So when the GPU uses up all 8GB of HBM2, it automatically starts to use the DDR4.
That is at the hardware level.

It is basically the biggest new feature that Vega brought to the table.
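If it helps, here's a minimal sketch in plain Python (not any real SDK or driver API; all names are made up) of the basic accounting: software sees one 20GB pool, allocations land in HBM2 while there's room, and spill over into DDR4 once it's full. Real HBCC works at page granularity in hardware and migrates hot pages automatically, but the idea is the same.

```python
# Toy model of an HBCC-style unified pool: one virtual address space backed by
# a fast tier (HBM2) and a slow tier (DDR4). Purely illustrative.

GB = 1024 ** 3

class UnifiedPool:
    def __init__(self, hbm_bytes=8 * GB, ddr_bytes=12 * GB):
        self.tiers = [["HBM2", hbm_bytes], ["DDR4", ddr_bytes]]  # fastest first

    def total(self):
        # What the APU "sees": one pool, 20 GB
        return sum(free for _, free in self.tiers)

    def alloc(self, size):
        """Place an allocation in the fastest tier with room left,
        spilling the remainder into the slower tier if necessary."""
        placement = []
        for tier in self.tiers:
            name, free = tier
            take = min(size, free)
            if take:
                tier[1] -= take
                placement.append((name, take))
                size -= take
            if size == 0:
                return placement
        raise MemoryError("virtual pool exhausted")

pool = UnifiedPool()
print(pool.total() / GB)                              # 20.0 -- the single pool software sees
print([(n, b / GB) for n, b in pool.alloc(6 * GB)])   # [('HBM2', 6.0)]
print([(n, b / GB) for n, b in pool.alloc(5 * GB)])   # [('HBM2', 2.0), ('DDR4', 3.0)] -- spill-over
```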
 
Last edited:

CrustyBritches

Gold Member
Fine, but then what happens to the One X? Do they kill it given you expect nextgen to be backwards compatible?

My fundamental point on all of this is that bringing 2 new consoles on the market in addition to the existing One, One S, One X, SAD edition, etc makes for potential confusion. Marketing that lot could be a real challenge.
Same thing that happens to the PS4, PS4 Slim, PS4 Pro, and the upcoming PS4 Slim 7nm. They receive cross-gen games for a year or 2 and then it's on to xCloud life support afterwards.
Exactly! I don't think people honestly realize how making games for a console is different than a PC. The mindset is different. If you are a dev and can design the game from the ground up to only work on a console that has 14 TFs of power, it'll be designed differently than if you had to support a 4TF console.
Devs have been wrestling with nasty setups on console the whole of this gen, the XO's weird memory setup included, but it's somehow hard for them to detect the system and select 1080p output instead of 4K?

The real danger will be the desire to cater game design to the ~100million PS4 owners. A lot of games could be designed with the PS4's 1.6GHz Jag CPU in mind and it might hold back more advanced use of the Ryzen CPU. That's like this whole gen for PC gamers, excluding RTS, sim, etc.

PC benchmarks prove the setup works; I've checked countless games against the idea. Go look for yourself...
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Maybe, but the Xbox One X and the Xbox One S are fairly different. The XOS is 1.4TF with significantly slower RAM and the XOX is 6TF with significantly faster RAM. That's a very big difference, but that's also the difference needed to allow current-generation games to hit 4K. The difference for next-gen will be graphics at 1080p or 4K, give or take a few effects and toggles. The games will most likely look prettier on the beefed-up system, kind of like how games scale on a PC.

These consoles are becoming more PC-like in architecture, but they are still focused on games whereas the PC is a multi-tool. But to think they can't scale like they do now is a little naive, especially because all first party will end up on PC anyway and all third party will usually have PC ports.

EDITED HEAVILY.

Well a couple things....

1. GPU processing will be used on a whole lot more things than just resolution and basic graphics. The difference between a 4 TF console and a 14 TF console is HUGE! Nobody is using 10 TFs worth of power just to add a couple of effects and boost the resolution to 4K.

2. And how much time and resources will these "extra effects" get from developers when 40% of the Xbox Next fanbase doesn't even have the Anaconda version, because they have Lockhart? There will be features in whole game engines that will be created because of the amount of power these next-gen systems will have. For example, look at Dreams. What they do in Dreams is impossible with traditional rasterization on current GPUs. They needed to build a compute-shader-based rendering engine.

It lets you create a game engine where a game can look like all of the below. These 4 screenshots have no polygons at all. The engine doesn't use polygons like 99.9% of all other games do.

[Four Dreams screenshots]
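For anyone wondering what rendering "without polygons" even means in practice: Dreams' actual engine is far more advanced (it evaluates signed distance fields and splats points via compute shaders), but a toy raymarcher over a distance function shows the basic idea of producing an image with no triangle meshes at all. This is purely an illustrative sketch, not Media Molecule's technique:

```python
import math

# Toy signed-distance-field renderer: no vertices, no triangles -- the "geometry"
# is just a function returning the distance from a point to the nearest surface.
def scene_sdf(x, y, z):
    return math.sqrt(x*x + y*y + z*z) - 1.0   # a unit sphere at the origin

def raymarch(ox, oy, oz, dx, dy, dz, max_steps=64, eps=1e-3):
    """March along a ray until the SDF says we're on the surface (or give up)."""
    t = 0.0
    for _ in range(max_steps):
        d = scene_sdf(ox + dx*t, oy + dy*t, oz + dz*t)
        if d < eps:
            return t
        t += d
    return None

W, H = 48, 24
for j in range(H):
    row = ""
    for i in range(W):
        # Camera at z = -3 looking down +z; build one ray per character "pixel".
        px = (i / W - 0.5) * 2.0
        py = (0.5 - j / H) * 2.0
        length = math.sqrt(px*px + py*py + 1.0)
        hit = raymarch(0, 0, -3, px/length, py/length, 1.0/length)
        row += "#" if hit is not None else "."
    print(row)   # prints an ASCII sphere with zero polygons involved
```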


Not the SDK... HBCC is at the hardware level... the APU directly sees it as a single memory pool.

The key point is that this pool is accessed by 2 memory controllers, so the PS4 issue where CPU usage affects GPU access doesn't exist anymore.

I wonder if they had to create a smart emulator to allow the PS5 to do the bolded for PS4 games.
 
Last edited:

ethomaz

Banned
I wonder if they had to create a smart emulator to allow the PS5 to do the bolded for PS4 games.
PS4 games will run natively without an emulator... the OS/game/app doesn't know there are 2 physical memory pools... it only sees one virtual memory pool, just like on PS4.

PS4: 5.5GB available for games
PS5: 20GB available for games

What Sony will do is the same thing they did with the PS4 Pro... they will limit PS4 games to PS4-level CPU/GPU/RAM usage.
Of course they will need to work on a Boost Mode like they did on PS4 Pro to fully use the hardware on PS5.

Edit - Just to avoid confusion lol, I don't know what the PS5 actually is... I'm just basing that on the rumor that it uses the HBCC feature.
 
Last edited:

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
PS4 games will run natively without an emulator... the OS/game/app doesn't know there are 2 physical memory pools... it only sees one virtual memory pool, just like on PS4.

PS4: 5.5GB available for games
PS5: 20GB available for games

What Sony will do is the same thing they did with the PS4 Pro... they will limit PS4 games to PS4-level CPU/GPU/RAM usage.
Of course they will need to work on a Boost Mode like they did on PS4 Pro to fully use the hardware on PS5.

Edit - Just to avoid confusion lol, I don't know what the PS5 actually is... I'm just basing that on the rumor that it uses the HBCC feature.

Man, things are getting interesting. I think I can see where MS can claim "best console on Earth". Anaconda will probably have 32 GB of RAM total, and it'll probably go with a high-bandwidth RAM option like GDDR6 instead of HBM2.

But then, by having Lockhart on shelves for $100 less, they can claim both sides of the market: "Affordable" and "The Strongest", on two technically different consoles.
 
I did not see...

But it is better to copy from other sites:



In simple terms, HBCC makes all the system RAM + VRAM appear as a single memory pool to the GPU (or APU).
So when the GPU uses up all 8GB of HBM2, it automatically starts to use the DDR4.
That is at the hardware level.

It is basically the biggest new feature that Vega brought to the table.
Interesting. But still... that DDR4 would be limited to the (supposed) 102GB/s, no? That's not great tbh
 

ethomaz

Banned
Interesting. But still... that DDR4 would be limited to the (supposed) 102GB/s, no? That's not great tbh
Well, benchmarks on PC show it does not lose performance when using more RAM from the DDR4.


[Benchmark charts: Sniper Elite 4, Ashes of the Singularity, and Ghost Recon Wildlands at 4K, plus Wildlands at 1080p, each with HBCC on vs. off]
 
Last edited:

TLZ

Banned
I'm not worried about pretty graphics for the most part. It's gonna be the guts (AI, etc.) that are important to me.

But yes, a midgen refresh is very likely with this gen as well.

Textures and shadows are much more scalable than AI.

Just my worthless opinion.
It's not worthless :)


I'm happy with 13tf :)

Regarding the HBCC rumor, these are the key points we should be looking at:

  • memory automatically managed by HBCC and appears as 20 GB to the developers
  • HBCC manages streaming of game data from storage as well
  • developers can use the API to take control if they choose and manage the memory and storage streaming themselves
  • memory solution alleviates problems found in PS4
  • namely that CPU bandwidth reduces GPU bandwidth disproportionately
  • 2 stacks of HBM have 512 banks (more banks = fewer conflicts and higher utilization)
  • GDDR6 better than GDDR5 and GDDR5X in that regard, but still fewer banks than HBM
It looks like an advanced way of doing things and sounds impressive on paper. The last part answers the question of why not gddr6 instead.
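On the "developers can use the API to take control" bullet, here's a purely hypothetical sketch of what the split between automatic (HBCC-managed) and manual placement could look like. None of these names come from any real Sony SDK; they're invented for illustration only:

```python
# Hypothetical sketch -- invented names, not a real console API.
from enum import Enum

class Pool(Enum):
    AUTO = 0    # let the HBCC decide where pages live (default)
    FAST = 1    # pin to the HBM2 tier
    SLOW = 2    # pin to the DDR4 tier

class GpuAllocator:
    def __init__(self):
        self.allocations = []

    def alloc(self, size_bytes, pool=Pool.AUTO, stream_from_disk=None):
        """Allocate GPU-visible memory; optionally pin it to a pool and/or
        register a file for the memory system to stream into it."""
        self.allocations.append((size_bytes, pool, stream_from_disk))
        return len(self.allocations) - 1   # fake handle

gpu = GpuAllocator()
render_targets = gpu.alloc(512 * 1024**2, pool=Pool.FAST)            # bandwidth-critical
texture_stream = gpu.alloc(4 * 1024**3, pool=Pool.SLOW,
                           stream_from_disk="city_textures.pak")      # latency-tolerant
everything_else = gpu.alloc(2 * 1024**3)                              # let HBCC manage it
```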
 
Last edited:

ethomaz

Banned
It's not worthless :)


I'm happy with 13tf :)

Regarding the HBCC rumor, these are the key points we should be looking at:

  • memory automatically managed by HBCC and appears as 20 GB to the developers
  • HBCC manages streaming of game data from storage as well
  • developers can use the API to take control if they choose and manage the memory and storage streaming themselves
  • memory solution alleviates problems found in PS4
  • namely that CPU bandwidth reduces GPU bandwidth disproportionately
  • 2 stacks of HBM have 512 banks (more banks = fewer conflicts and higher utilization)
  • GDDR6 better than GDDR5 and GDDR5X in that regard, but still fewer banks than HBM
It looks like an advanced way of doing things and sounds impressive on paper. The last part answers the question of why not gddr6 instead.
HBM2 is better than GDDR6 in both access latency and speed with the right bus.

The only issue is PRICE.

GDDR6 is cheaper to buy and implement... HBM2 is expensive to buy and implement.

But looking ahead, HBM3, which is supposed to be cheaper than GDDR6, will be out in 2020 with the same latency and speed advantages as HBM2, while GDDR7 won't come before 2025?
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
HBM2 is better than GDDR6 in both access latency and speed with the right bus.

The only issue is PRICE.

GDDR6 is cheaper to buy and implement... HBM2 is expensive to buy and implement.

But looking ahead, HBM3, which is supposed to be cheaper than GDDR6, will be out in 2020 with the same latency and speed advantages as HBM2, while GDDR7 won't come before 2025?

Is it possible Sony can use HBM3 on future PS5s to replace the HBM2 for cost savings?
 
Well, benchmarks on PC show it does not lose performance when using more RAM from the DDR4.


[Benchmark charts: Sniper Elite 4, Ashes of the Singularity, and Ghost Recon Wildlands at 4K, plus Wildlands at 1080p, each with HBCC on vs. off]
Damn I had no idea about that... it looks really good!
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
That is the basis of the whole rumor... HBM2 and HBM3 are 100% compatible, so Sony got a big deal on HBM2 right now, and in one or two years they will shift to HBM3 to cut costs.

Man that's amazing and crazy good foresight by Sony if true. I'm still saddened by the low 400+ GB/s bandwidth of HBM2 though. :(
 

demigod

Member
Personally, I'm interested to know how our old buddy Leocarian, who was banned here and fled to Era, is now banned over there.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
The bandwidth is to be expected because HBM is fairly new tech; with HBM3 and onwards, I'm sure the bandwidth is going to get better and better.

But Sony will not be able to take advantage of PS5s with HBM3, since it will not be in every PS5 (the launch version will have HBM2). :(
 
Well a couple things....

1. GPU processing will be used on a whole lot more things than just resolution and basic graphics. The difference between a 4 TF console and a 14 TF console is HUGE! Nobody is using 10 TFs worth of power just to add a couple of effects and boost the resolution to 4K.

2. And how much time and resources will these "extra effects" get from developers when 40% of the Xbox Next fanbase doesn't even have the Anaconda version, because they have Lockhart? There will be features in whole game engines that will be created because of the amount of power these next-gen systems will have. For example, look at Dreams. What they do in Dreams is impossible with traditional rasterization on current GPUs. They needed to build a compute-shader-based rendering engine.

It lets you create a game engine where a game can look like all of the below. These 4 screenshots have no polygons at all. The engine doesn't use polygons like 99.9% of all other games do.

[Four Dreams screenshots]




I wonder if they had to create a smart emulator to allow the PS5 to do the bolded for PS4 games.

I see what you are saying, I really do, but it's the same set of principles we run into in PC gaming. There is a reason why some PCs can run at Ultra and others at Medium or lower. 10TF is a lot, but do you want 4K60, or 4K30 with anti-aliasing, SSAO, and ray tracing? 4K and ray tracing, among other things, require the beef.
 
Last edited:

THE:MILKMAN

Member
Yup. >600GB/s bandwidth or bust.

Well, 18Gbps GDDR6 chips were specifically said by Samsung to be going into mass production all the way back in January 2018, yet we have not seen a single product use them. So where are they?

These chips on a 256-bit bus would provide 576GB/s for 16GB of RAM.
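The arithmetic behind that figure, as a quick sketch you can plug other rumored configurations into (the chip speeds and bus widths here are just the examples under discussion, not confirmed specs):

```python
def bandwidth_gb_s(data_rate_gbps_per_pin, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width, divided by 8 bits per byte."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(bandwidth_gb_s(18, 256))   # 576.0 GB/s -- 18Gbps GDDR6 on a 256-bit bus
print(bandwidth_gb_s(18, 320))   # 720.0 GB/s -- same chips on a wider 320-bit bus
print(bandwidth_gb_s(14, 256))   # 448.0 GB/s -- the more common 14Gbps parts
```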
 

Three

Member
You could probably get it down a little bit on a closed system like a console, hence why both Xboxes are rumoured to be 4TF and 12TF.
If this is true they will once again cripple the crap out of next gen. You won't get devs developing new rendering pipelines and games for the 12tf machine. They will develop for 4tf and just boost dynamic resolutions and fps because it is the easiest thing to do. Next gen would just be the 4tf machine but with the option of higher res and fps if you pay through the teeth.
 

DeepEnigma

Gold Member
If this is true they will once again cripple the crap out of next gen. You won't get devs developing new rendering pipelines and games for the 12tf machine. They will develop for 4tf and just boost dynamic resolutions and fps because it is the easiest thing to do. Next gen would just be the 4tf machine but with the option of higher res and fps if you pay through the teeth.

That's what I'm saying. I want Crysis up in this bitch on 12TF that won't run on 4, I don't want your low to high Fortnite scaling (hyperbole obviously).
 
Last edited:

Dural

Member
If this is true they will once again cripple the crap out of next gen. You won't get devs developing new rendering pipelines and games for the 12tf machine. They will develop for 4tf and just boost dynamic resolutions and fps because it is the easiest thing to do. Next gen would just be the 4tf machine but with the option of higher res and fps if you pay through the teeth.
That's what I'm saying. I want Crysis up in this bitch on 12TF that won't run on 4, I don't want your low to high Fortnite scaling (hyperbole obviously).


Why does this have to keep being repeated ad nauseam? The rumors talk of a 4TF 1080p machine and a 12TF 4K machine; they use the same base architecture, therefore they will both take advantage of new rendering pipelines. In fact, the 4TF machine should have a little more headroom at 1080p than the 12TF machine has at 4K! It's like the difference between a 1060 and a 1080. Now, if we are talking about the 12TF machine rendering at 1080p, that's a horse of a different color.
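Rough per-pixel arithmetic behind that claim (back-of-the-envelope only; it ignores CPU, memory bandwidth, and fixed costs that don't scale with resolution):

```python
# Naive compute budget per pixel, assuming GPU cost scales with pixel count.
pixels_1080p = 1920 * 1080          # ~2.07 million
pixels_4k    = 3840 * 2160          # ~8.29 million (4x 1080p)

budget_4tf_at_1080p = 4e12  / pixels_1080p   # ~1.93 MFLOPS per pixel per second
budget_12tf_at_4k   = 12e12 / pixels_4k      # ~1.45 MFLOPS per pixel per second

print(budget_4tf_at_1080p / budget_12tf_at_4k)   # ~1.33 -- the 4TF box has ~33% more
                                                 # per-pixel headroom at its target res
```

Which is roughly why the 1060 vs. 1080 comparison holds: each box targets a similar per-pixel budget at its own resolution, with the lower-res one actually a bit ahead.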
 

DeepEnigma

Gold Member
Why does this have to keep being repeated ad nauseam? The rumors talk of a 4TF 1080p machine and a 12TF 4K machine; they use the same base architecture, therefore they will both take advantage of new rendering pipelines. In fact, the 4TF machine should have a little more headroom at 1080p than the 12TF machine has at 4K! It's like the difference between a 1060 and a 1080. Now, if we are talking about the 12TF machine rendering at 1080p, that's a horse of a different color.

But I thought we needed 4x the processing power to achieve native 4K for all?

Otherwise it seems we will still have 1800p on our hands with select 4K.

You would still get way more out of a 12TF baseline than a 4TF baseline, even at 1080p. I want more effects over just a rez bump. And throw RT into the mix, and that 4TF starts looking even more anemic, or can't do it at all.

Imagine 1440p with some CB rendering pushing 12TF to the wall; it won't happen with a 4TF handicap all gen if a resolution bump is their only focus.
 
Last edited:

Dural

Member
Like I said, the 4TF 1080p machine has more headroom than the 12TF 4K machine; therefore, if a developer decides to make a game at a lower resolution on the 12TF machine, they won't have to drop the res as much on the 4TF machine to achieve the same results. Personally, I'd rather the 12TF machine be a 1080p machine, as I'm sick of stupid resolution bumps taking up so many of the resources.
 