
Next Xbox is ‘More Advanced’ Than the PS5 according to Insiders.

DeepEnigma

Gold Member
Like I said, the 4TF 1080p machine has more headroom than the 12TF 4K machine; therefore, if a developer decides to make a game at a lower resolution on the 12TF machine, they won't have to drop the res as much on the 4TF machine to achieve the same results. Personally, I'd rather the 12TF machine be a 1080p machine, as I'm sick of the stupid resolution bumps taking up so many of the resources.

I definitely agree with the last sentence. 1440p max, if that.
 

SonGoku

Member
He's not wrong. You need 3-4 times the processing power to run a 1080p game at 4K, depending on the game. I have tested it on tons of PC games over the years and it's always in that range.
You could probably get it down a little on a closed system like a console, hence why both Xboxes are rumoured to be 4TF and 12TF.
I'm not claiming either way; I'm just asking how he came to that number.
RTX 2080 is stronger than anything you will see on next-gen consoles...

448GB/s
Nvidia cards are more bandwidth efficient.
Nvidia is also known for its bean counters maximizing profits.

That is the basis of the whole rumor... HBM2 and HBM3 are 100% compatible, so Sony got a big deal on HBM2 right now, and in one or two years they will shift to HBM3 to cut costs.
I doubt it's true.
8GB of mediocre salvaged HBM2 + 16GB of slow DDR4 is the opposite of the winning PS4 design philosophy. It looks more like the Xbone.
Not to mention the added complexity and extra silicon eating into the GPU die budget.

That rumor is 100% fake; if it's real, Sony lost the gen.
 
Last edited:

CrustyBritches

Gold Member
I'm not claiming either way; I'm just asking how he came to that number.
4K is 4x the resolution of 1080p, but MS stated that during their design phase with Scorpio they found that rasterization efficiency scales with resolution, and that it took only 3.5x the power to scale an unnamed (Forza) 1080p X1 game to 4K at the same settings.

Much depends on the architecture and memory config, too. Xbox One X can run some PS4 1080p games at 4K, for example (RDR2). Also, Nvidia matches Vega 56/64 with much less powerful hardware on the compute side of things.
---
Games will be made for native 4K on Anaconda. Let's say it's 12TF like the rumor, similar to the 12.7 TFLOPS Vega 64 in raw compute. Go look up benchmarks for Vega 64 at 4K, then look at 1080p benchmarks and see what cards can run at a similar framerate. In general an RX 470 4GB (4.9 TFLOPS) is more than enough, and it lacks the newer features of Vega like direct pixel engine to L2 cache access. It has about half the memory bandwidth, too. In most cases even a lowly 380X can do the job, but it's harder to find mixed benchmarks.

I took some time to check benchmarks done on the same CPU, RAM, HDD, etc. I compared Vega 64 at 4K to RX 470 4GB at 1080p, same settings (usually max/ultra on Guru3D):
-Sniper Elite 4
Vega 64(4K) = 55fps | RX 470(1080p) = 68fps
-Rise of the Tomb Raider
Vega 64(4K) = 46fps | RX 470(1080p) = 63fps
-Battlefield 1
Vega 64(4K) = 62fps | RX 470(1080p) = 86fps
-Gears of War 4
Vega 64(4K) = 43fps | RX 470(1080p) = 75fps
-Hitman
Vega 64(4K) = 62fps | RX 470(1080p) = 74fps
-Mankind Divided
Vega 64(4K) = 38fps | RX 470(1080p) = 64fps
-DOOM(Vulkan)
Vega 64(4K) = 79fps | RX 470(1080p) = 131fps
-Resident Evil 7
Vega 64(4K) = 46fps | RX 470(1080p) = 86fps
-The Witcher 3
Vega 64(4K) = 44fps | RX 470(1080p) = 51fps
-GTA V
Vega 64(4K) = 49fps | RX 470(1080p) = 76fps

10-game AVG:
Vega 64(4K) = 52.4fps | RX 470 4GB (1080p) = 77.4fps
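
If anyone wants to double-check the averages (and the ratio that gets used later in the thread), here's a throwaway Python snippet. The only inputs are the fps numbers listed above; nothing else is assumed:

# Vega 64 @ 4K vs RX 470 4GB @ 1080p, per-game fps from the list above
benchmarks = {
    "Sniper Elite 4":          (55, 68),
    "Rise of the Tomb Raider": (46, 63),
    "Battlefield 1":           (62, 86),
    "Gears of War 4":          (43, 75),
    "Hitman":                  (62, 74),
    "Mankind Divided":         (38, 64),
    "DOOM (Vulkan)":           (79, 131),
    "Resident Evil 7":         (46, 86),
    "The Witcher 3":           (44, 51),
    "GTA V":                   (49, 76),
}

vega_4k_avg = sum(v[0] for v in benchmarks.values()) / len(benchmarks)
rx470_1080p_avg = sum(v[1] for v in benchmarks.values()) / len(benchmarks)

print(f"Vega 64 @ 4K average:   {vega_4k_avg:.1f} fps")      # 52.4
print(f"RX 470 @ 1080p average: {rx470_1080p_avg:.1f} fps")  # 77.4
print(f"1080p vs 4K frame-rate ratio: {rx470_1080p_avg / vega_4k_avg:.2f}x")  # ~1.48x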

I just like reading through these next-gen threads and I learn a lot, so hopefully people will find this interesting or helpful.
 
Last edited:

TLZ

Banned
4K is 4x the resolution of 1080p, but MS stated that during their design phase with Scorpio they found that rasterization efficiency scales with resolution, and that it took only 3.5x the power to scale an unnamed (Forza) 1080p X1 game to 4K at the same settings.

Much depends on the architecture and memory config, too. Xbox One X can run some PS4 1080p games at 4K, for example (RDR2). Also, Nvidia matches Vega 56/64 with much less powerful hardware on the compute side of things.
---
Games will be made for native 4K on Anaconda. Let's say it's 12TF like the rumor, similar to the 12.7 TFLOPS Vega 64 in raw compute. Go look up benchmarks for Vega 64 at 4K, then look at 1080p benchmarks and see what cards can run at a similar framerate. In general an RX 470 4GB (4.9 TFLOPS) is more than enough, and it lacks the newer features of Vega like direct pixel engine to L2 cache access. It has about half the memory bandwidth, too. In most cases even a lowly 380X can do the job, but it's harder to find mixed benchmarks.

I took some time to check benchmarks done on the same CPU, RAM, HDD, etc. I compared Vega 64 at 4K to RX 470 4GB at 1080p, same settings (usually max/ultra on Guru3D):
-Sniper Elite 4
Vega 64(4K) = 55fps | RX 470(1080p) = 68fps
-Rise of the Tomb Raider
Vega 64(4K) = 46fps | RX 470(1080p) = 63fps
-Battlefield 1
Vega 64(4K) = 62fps | RX 470(1080p) = 86fps
-Gears of War 4
Vega 64(4K) = 43fps | RX 470(1080p) = 75fps
-Hitman
Vega 64(4K) = 62fps | RX 470(1080p) = 74fps
-Mankind Divided
Vega 64(4K) = 38fps | RX 470(1080p) = 64fps
-DOOM(Vulkan)
Vega 64(4K) = 79fps | RX 470(1080p) = 131fps
-Resident Evil 7
Vega 64(4K) = 46fps | RX 470(1080p) = 86fps
-The Witcher 3
Vega 64(4K) = 44fps | RX 470(1080p) = 51fps
-GTA V
Vega 64(4K) = 49fps | RX 470(1080p) = 76fps

10-game AVG:
Vega 64(4K) = 52.4fps | RX 470 4GB (1080p) = 77.4fps

I just like reading through these next-gen threads and I learn a lot, so hopefully people will find this interesting or helpful.
Thanks for doing the work. So just from looking at this, it's possible the same game will always be 1080p/60 on Lockhart, while 4k/30 on Anaconda. Unless it's a game like Hitman, Doom or Battlefield 1.

Or, of course, if devs put in the effort to optimize, they can probably squeeze out a little more.
 
Last edited:

CrustyBritches

Gold Member
Thanks for doing the work. So just from looking at this, it's possible the same game will always be 1080p/60 on Lockhart, while 4k/30 on Anaconda. Unless it's a game like Hitman, Doom or Battlefield 1.

Or, of course, if devs put in the effort to optimize, they can probably squeeze out a little more.
The 10-game avg was 52.4fps on Vega 64 and 77.4fps on RX 470 4GB, so it should mean a 4K/30fps game on Anaconda will be 1080p/44fps on Lockhart. So going 4K/30fps on Anaconda and the same settings at 1080p/30fps on Lockhart will be a breeze. Same with 4K/60fps to 1080p/60fps.

Going by Xbox One X, MS will encourage native 4k, but in the case of 4K checkerboard, you'd need to drop settings on Lockhart. I always start with dropping HBAO+ to SSAO, and shadows to 'medium'.
 
Last edited:

TLZ

Banned
The 10-game avg was 52.4fps on Vega 64 and 77.4fps on RX 470 4GB, so it should mean a 4K/30fps game on Anaconda will be 1080p/44fps on Lockhart. So going 4K/30fps on Anaconda and the same settings at 1080p/30fps on Lockhart will be a breeze. Same with 4K/60fps to 1080p/60fps.

Going by Xbox One X, MS will encourage native 4k, but in the case of 4K checkerboard, you'd need to drop settings on Lockhart. I always start with dropping HBAO+ to SSAO, and shadows to 'medium'.
If the average is 77fps on the RX 470, why would Lockhart only be 1080p/44 instead of 60? The list you showed had them all above 60fps at 1080p.
 
Last edited:

CrustyBritches

Gold Member
If the average is 77fps on the RX 470, why would Lockhart only be 1080p/44 instead of 60? The list you showed had them all above 60fps at 1080p.
It's a ~47% increase in frames at 1080p on the RX 470 over the Vega 64 at 4K. Next-gen games will be targeting 4K/30fps on a ~12TF GPU (PS5, Anaconda). So if Vega 64 benchmarked at 4K/30fps in that imaginary next-gen game, the RX 470 would be at roughly 1080p/44fps going by the average. Vega 64 had a 52fps AVG in those benchmarks, not 30fps.

As you mentioned, in many games excluding the ones you listed, it could be possible to see 1080p/60fps with a few settings dropped (AO, shadows). I believe the best and worst case scenarios were listed next to each other:

-Resident Evil 7
Vega 64(4K) = 46fps | RX 470(1080p) = 86fps *~87% increase in framerate
-The Witcher 3
Vega 64(4K) = 44fps | RX 470(1080p) = 51fps *~16% increase in framerate
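
To make the extrapolation explicit, here's a rough back-of-the-envelope version in Python. The fps figures are the ones already quoted above, and the "scaling factor" is just the ratio of the averages, so treat it as a ballpark, not a guarantee:

# If a next-gen game targets 4K/30fps on a Vega 64-class GPU (Anaconda),
# estimate what an RX 470-class GPU (Lockhart) would do at 1080p, same settings.
vega_4k_avg = 52.4       # 10-game average from the benchmark list
rx470_1080p_avg = 77.4   # 10-game average from the benchmark list

avg_ratio   = rx470_1080p_avg / vega_4k_avg   # ~1.48x
best_ratio  = 86 / 46                         # Resident Evil 7, ~87% faster
worst_ratio = 51 / 44                         # The Witcher 3, ~16% faster

target_4k_fps = 30
print(f"average case: ~{target_4k_fps * avg_ratio:.0f} fps at 1080p")   # ~44 fps
print(f"best case:    ~{target_4k_fps * best_ratio:.0f} fps at 1080p")  # ~56 fps
print(f"worst case:   ~{target_4k_fps * worst_ratio:.0f} fps at 1080p") # ~35 fps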
 

Armorian

Banned
The 10-game avg was 52.4fps on Vega 64 and 77.4fps on RX 470 4GB, so it should mean a 4K/30fps game on Anaconda will be 1080p/44fps on Lockhart. So going 4K/30fps on Anaconda and the same settings at 1080p/30fps on Lockhart will be a breeze. Same with 4K/60fps to 1080p/60fps.

Going by Xbox One X, MS will encourage native 4k, but in the case of 4K checkerboard, you'd need to drop settings on Lockhart. I always start with dropping HBAO+ to SSAO, and shadows to 'medium'.

NEVER drop from HBAO+ if it's available :messenger_pouting:

Some people just don't get what you (and I) are saying: the base (graphics quality) upgrade for next-gen consoles will be ~2x in GPU power:

PS4 1080p (1.8TF) -> Lockhart 1080p (4TF)

X1X (kinda) 4K (6TF) -> Anaconda 4K (12TF)

As I said earlier, it can only have a negative effect on devs that would want to stay 1080p-only for next gen, and I doubt MS will allow that.
 

CrustyBritches

Gold Member
NEVER drop from HBAO+ if it's available :messenger_pouting:

As I said earlier, it can only have a negative effect on devs that would want to stay 1080p-only for next gen, and I doubt MS will allow that.
Sorry:messenger_beaming:

With the advent of checkerboard, and even the decent upscaling from 1800p, there's no reason to go 1080p on a 12TF 4K system unless you're offering a performance mode. In many cases, the Lockhart would still be able to offer something similar with reduced settings in some areas.

I hope the rumors are true and the price comes in around $250-299. I'd like 2 for me and my boys and it's more practical to have two $299 1080p systems than a single $500 4K system in our case.
 

Armorian

Banned
Sorry:messenger_beaming:

With the advent of checkerboard, and even the decent upscaling from 1800p, there's no reason to go 1080p on a 12TF 4K system unless you're offering a performance mode. In many cases, the Lockhart would still be able to offer something similar with reduced settings in some areas.

I hope the rumors are true and the price comes in around $250-299. I'd like 2 for me and my boys and it's more practical to have two $299 1080p systems than a single $500 4K system in our case.

To achieve that price they would have to cut more than just the GPU and RAM in Lockhart; I think it will be $350-400. Memory reduction is the biggest threat to game development: if the rumors are true and LH has 12GB of memory (vs 24GB in Anaconda), devs would have to create games with that amount in mind.
 

ANIMAL1975

Member
I'm not claiming either way; I'm just asking how he came to that number.

Nvidia cards are more bandwidth efficient.
Nvidia is also known for its bean counters maximizing profits.


I doubt it's true.
8GB of mediocre salvaged HBM2 + 16GB of slow DDR4 is the opposite of the winning PS4 design philosophy. It looks more like the Xbone.
Not to mention the added complexity and extra silicon eating into the GPU die budget.

That rumor is 100% fake; if it's real, Sony lost the gen.
Yeah Sony, if you're listening and there is any truth to this rumor, I hope you're planning the right way to go: 16GB HBM2, 8GB DDR4! Beast mode or bust.
 

CrustyBritches

Gold Member
To achieve that price they would have to cut more than just the GPU and RAM in Lockhart; I think it will be $350-400. Memory reduction is the biggest threat to game development: if the rumors are true and LH has 12GB of memory (vs 24GB in Anaconda), devs would have to create games with that amount in mind.
What are you thinking the Anaconda msrp will be? I'm hoping for $299 Lockhart and $499 Anaconda. A $399 Lockhart would surely mean a $599 Anaconda, right?
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
I'm not claiming either way; I'm just asking how he came to that number.

Nvidia cards are more bandwidth efficient.
Nvidia is also known for its bean counters maximizing profits.


I doubt it's true.
8GB of mediocre salvaged HBM2 + 16GB of slow DDR4 is the opposite of the winning PS4 design philosophy. It looks more like the Xbone.
Not to mention the added complexity and extra silicon eating into the GPU die budget.

That rumor is 100% fake; if it's real, Sony lost the gen.

Well, developer-invisible segmentation (see AMD's HBCC feature) is a bit different from the Xbox One setup, which is 0.03125 GB (32 MB) super fast + 8 GB fast, where developers must manually manage both pools ... vs 8 GB super duper fast + 16 GB fast with transparent caching and automatic paging of data in and out, where devs do not necessarily have to worry about memory allocation and manual memory transfers.
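
Purely to illustrate the "transparent caching" idea (a toy sketch of developer-invisible paging, not the actual HBCC implementation or any real console memory controller):

from collections import OrderedDict

PAGE_SIZE = 64 * 1024  # toy page size

class HybridMemory:
    """Toy model of a developer-invisible two-tier memory: a small fast pool
    (think HBM2) caches pages of a larger slow pool (think DDR4). Code just
    reads addresses; page movement happens under the hood, LRU-style."""

    def __init__(self, fast_pool_pages, slow_pool):
        self.fast_pool_pages = fast_pool_pages  # capacity of the fast pool, in pages
        self.slow_pool = slow_pool              # dict: page number -> page bytes
        self.fast_pool = OrderedDict()          # pages currently resident, LRU order

    def read(self, address):
        page, offset = divmod(address, PAGE_SIZE)
        if page in self.fast_pool:                       # fast-pool hit
            self.fast_pool.move_to_end(page)
        else:                                            # miss: page in from the slow pool
            if len(self.fast_pool) >= self.fast_pool_pages:
                self.fast_pool.popitem(last=False)       # evict least-recently-used page
            self.fast_pool[page] = self.slow_pool.get(page, bytes(PAGE_SIZE))
        return self.fast_pool[page][offset]

# The "developer" only ever calls read(); they never juggle the two pools manually.
mem = HybridMemory(fast_pool_pages=4, slow_pool={0: bytes([7]) * PAGE_SIZE})
print(mem.read(3))  # 7 - transparently paged in from the slow pool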
 

Armorian

Banned
What are you thinking the Anaconda msrp will be? I'm hoping for $299 Lockhart and $499 Anaconda. A $399 Lockhart would surely mean a $599 Anaconda, right?

They could price it above PS5 if it's more powerful (in a meaningful way), but I don't think any console maker will repeat the $599 fiasco :)

bed.jpg
 
Last edited:
RTX 2080 is stronger than anything you will see on next-gen consoles...

448GB/s

Well, Nvidia has much better delta color compression than GCN and therefore requires a lot less bandwidth. Vega 56 was bandwidth-starved even with HBM2, so it's highly likely you'll need 500+GB/s to keep an 11+TF Navi fed. That pretty much rules out a 256-bit bus and therefore 16GB of GDDR6, so I'm pretty confident that we will see 18 or 24GB if it's going to be GDDR6.
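
The bus math behind that, sketched out (the per-pin speeds are standard GDDR6 bins; the 500+GB/s figure is just this post's assumption):

# Bandwidth in GB/s = (per-pin speed in Gbps * bus width in bits) / 8
def gddr6_bandwidth(gbps_per_pin, bus_bits):
    return gbps_per_pin * bus_bits / 8

for bus_bits in (256, 320, 384):
    for gbps in (14, 16):
        print(f"{bus_bits}-bit @ {gbps} Gbps: {gddr6_bandwidth(gbps, bus_bits):.0f} GB/s")

# A 256-bit bus tops out around 448-512 GB/s, so a 500+GB/s target points to a
# wider (320/384-bit) bus, which in turn implies capacities other than 16GB.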
 
NEVER drop from HBAO+ if it's available :messenger_pouting:

Some people just don't get what you (and I) are saying: the base (graphics quality) upgrade for next-gen consoles will be ~2x in GPU power:

PS4 1080p (1.8TF) -> Lockhart 1080p (4TF)

X1X (kinda) 4K (6TF) -> Anaconda 4K (12TF)

As I said earlier, it can only have a negative effect on devs that would want to stay 1080p-only for next gen, and I doubt MS will allow that.
From what I remember the X1X had a pretty intuitive dev kit that made it easy for devs to scale games. I don’t think this would be such a problem. I’m not sure why devs would want to stay at 1080p when they will already have to build scaling into the PC counterpart.
 

ethomaz

Banned
Well, Nvidia has much better delta color compression than GCN and therefore requires a lot less bandwidth. Vega 56 was bandwidth-starved even with HBM2, so it's highly likely you'll need 500+GB/s to keep an 11+TF Navi fed. That pretty much rules out a 256-bit bus and therefore 16GB of GDDR6, so I'm pretty confident that we will see 18 or 24GB if it's going to be GDDR6.
But Navi is not Vega.

Let’s see how it turns out.

If Navi has these same issues as Vega, then AMD will be ridiculously behind Nvidia in GPU tech, not just the one generation behind like it looks today.
 
Last edited:

SonGoku

Member
Yeah Sony, if you're listening and there is any truth to this rumor, I hope you're planning the right way to go: 16GB HBM2, 8GB DDR4! Beast mode or bust.
That setup still sucks
Only 16gb
Salvaged slow parts
Added complexity
Extra silicon eating into the GPU die budget

Sony is not stupid enough to make the same mistake their competition made 5 years ago.
24GB GDDR6 OR BUST!

Well, developer invisible segmentation (see AMD HBCC feature) is a bit different than the Xbox One setup which is 0.03125 GB (32 MB) super fast + 8 GB fast where developers must manually manage both pools ... vs 8 GB super duper fast + 16 GB fast with transparent caching and automatic paging in and out of data and where devs do not necessarily have to worry about memory allocation and manual memory transfers.
I'm not even getting into ease of development.
It carries all of the hw limitations of the Xbone design; it's essentially an evolution of the Xbone design philosophy.
Also there's no way the setup will be as fast as 24GB of unified fast RAM (HBM2, GDDR6, etc.)

Limitations of the setup include but are not limited to:
Slow salvaged parts
Added board complexity eating into BOM budget
Extra silicon needed eating into GPU die budget which translates into weaker GPU

This setup doesn't make any sense; all things considered, it won't even save enough money compared to a pure 24GB GDDR6 setup to justify the penalty in bandwidth and GPU performance.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
That setup still sucks
Only 16gb
Salvaged slow parts
Added complexity
Extra silicon eating into the GPU die budget

Sony is not stupid enough to make the same mistake their competition made 5 years ago.
24GB GDDR6 OR BUST!


I'm not even getting into ease of development.
It carries all of the hw limitations of the Xbone; it's essentially an evolution of the Xbone design philosophy.
Also there's no way the setup will be as fast as 24GB of unified fast RAM (HBM2, GDDR6, etc.)

Limitations of the setup include but are not limited to:
Slow salvaged parts
Added board complexity eating into BOM budget
Extra silicon needed eating into GPU die budget which translates into weaker GPU

This setup doesn't make any sense

No, 24 GB of HBM2 would always be better, but also too expensive, and 24 GB of GDDR6 would hardly match the bandwidth per dollar of the solution highlighted above. I do not see it as an evolution of that philosophy; or, put it this way, it is a significantly different beast when you talk 32 MB paired with 8 GB of slower RAM vs an 8 GB + 16 GB setup.
 

Housh

Member
It has to be or MS is doomed. I love Sony but it makes logical sense that next gen will belong to Microsoft. It's just how these cycles work.
 

ANIMAL1975

Member
That setup still sucks
Only 16gb
Salvaged slow parts
Added complexity
Extra silicon eating into the GPU die budget

Sony is not stupid enough to make the same mistake their competition made 5 years ago.
24GB GDDR6 OR BUST!


I'm not even getting into ease of development.
It carries all of the hw limitations of the Xbone design; it's essentially an evolution of the Xbone design philosophy.
Also there's no way the setup will be as fast as 24GB of unified fast RAM (HBM2, GDDR6, etc.)

Limitations of the setup include but are not limited to:
Slow salvaged parts
Added board complexity eating into BOM budget
Extra silicon needed eating into GPU die budget which translates into weaker GPU

This setup doesn't make any sense; all things considered, it won't even save enough money compared to a pure 24GB GDDR6 setup to justify the penalty in bandwidth and GPU performance.
The salvaged parts thing is just one guy's rumor. Not buying it. Wasn't it in this thread that we got the announcement of Samsung's HBM2 going into mass production?
 

SonGoku

Member
No, 24 GB of HBM2 would always be better, but also too expensive, and 24 GB of GDDR6 would hardly match the bandwidth per dollar of the solution highlighted above.
24GB GDDR6 on a 384-bit bus:
16 Gbps per pin / 8 = 2 GB/s per pin
2 GB/s x 384 pins = 768 GB/s

18 Gbps / 8 = 2.25 GB/s per pin
2.25 x 384 = 864 GB/s

20 Gbps / 8 = 2.5 GB/s per pin
2.5 x 384 = 960 GB/s

It's much better in any configuration than salvaged HBM2 alone. We don't know the price consoles get on long-term, high-volume deals.
The 8 + 16GB setup will still be slower than a pure 400GB/s setup, which is not even a big jump compared to the X.
I do not see it as an evolution of that philosophy; or, put it this way, it is a significantly different beast when you talk 32 MB paired with 8 GB of slower RAM vs an 8 GB + 16 GB setup.
  1. Added board complexity
  2. Extra silicon eating into GPU die budget = weaker GPU
  3. Slower overall bandwidth than a pure GDDR6 system
Seems like a repeat of the Xbone and PS4 situation point for point. The roles have just been reversed, and hey! It's more dev-friendly this time. On the bright side, PS5 will hold the balance crown.
Just replace ESRAM with HBM2, DDR3 with DDR4, and 8GB GDDR5 with 24GB GDDR6.
The salvaged parts thing is just one guy's rumor. Not buying it. Wasn't it in this thread that we got the announcement of Samsung's HBM2 going into mass production?
Salvaged would explain the anemic bandwidth
 
Last edited:
No, 24 GB of HBM2 would always be better, but also too expensive, and 24 GB of GDDR6 would hardly match the bandwidth per dollar of the solution highlighted above. I do not see it as an evolution of that philosophy; or, put it this way, it is a significantly different beast when you talk 32 MB paired with 8 GB of slower RAM vs an 8 GB + 16 GB setup.

18/24GB of GDDR6 would be nearly 700GB/s, whereas 8GB of HBM2 would only be around 500GB/s. The only way to get more bandwidth than with GDDR6 on a wide bus is with 16GB of HBM2, and that won't happen because of economics.
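
For reference, the stack math those rough numbers come from (standard HBM2 interface widths, nothing rumor-specific):

# Each HBM2 stack has a 1024-bit interface; bandwidth = pins * Gbps-per-pin / 8
def hbm2_bandwidth(stacks, gbps_per_pin=2.0):
    return stacks * 1024 * gbps_per_pin / 8

print(hbm2_bandwidth(1))        # 256.0 GB/s - a single stack
print(hbm2_bandwidth(2))        # 512.0 GB/s - two stacks at the full 2.0 Gbps
print(hbm2_bandwidth(2, 1.89))  # ~484 GB/s  - two stacks at Vega 64-like clocks

# Compare: 18/24GB of GDDR6 on a 384-bit bus at 14-16 Gbps is 672-768 GB/s.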
 

SonGoku

Member
It's 485GB/s or something at 950MHz in Vega 64. I could clock mine to 1100MHz two years ago. With process improvements, 950-1000MHz should be doable in low-power console use nowadays.
I'm talking about the rumor being discussed in the thread, which claims Sony is using "cheap" salvaged memory, hence 400GB/s.
 

Fafalada

Fafracer forever
Seems like a repeat of the Xbone and PS4 situation point for point.
Let's cut to the chase here.
Consoles and embedded systems have always used hybrid memory pools for cost efficiency reasons - there's never been any 'philosophy' behind it other than money, and this predates all current consoles by 20-30 years, there's no 'recent trends' to debate here.
With all that said - another thing that consoles also repeat all the time is (over)compensate for perceived(or real) past weaknesses. That's why unified memory machines are often followed by designs using discrete pools, and reverse. That's why systems that favor CPU:GPU ratio in one way usually tilt the scales in the other direction the next iteration and on and on. This also isn't anything particular to any one platform holder - they've all done similar (over)adjustments in their system history.

So yes - I'd see it as entirely possible for PS5 to optimize for cost vs peak performance with a hybrid memory pool, and no amount of arm-chair analysis will get around the fact that if it happens, it would be to maximize the $/(GB/s) ratio. Whether there needs to be a marketing spin (frankly most developers don't give a damn what marketing says) for the fanbase, that's something for their PR teams to work out.
I do agree we'd likely want more than 400GB/s, no matter what the configuration used is.

But to go back to what I said earlier - modern hardware already has 2-3 layers of memory before hitting external memory chips, so if this design effectively amounts to an L4/L5 cache structure backed by a very fast solid-state that presents the entire available 'memory', I'll argue that is something of a game changer compared to how we do things today, on any platform (although ironically it's also closer to old cartridge systems). Sure - it's always easier (and safer) to just look for repeat of existing state 'but faster' - but the exciting bit about consoles (Which was largely lost this gen) has been about creative hardware approaches, not just recycling PC components in a fixed configuration.
 
Last edited:
Let's cut to the chase here.
Consoles and embedded systems have always used hybrid memory pools for cost efficiency reasons - there's never been any 'philosophy' behind it other than money, and this predates all current consoles by 20-30 years, there's no 'recent trends' to debate here.
With all that said - another thing that consoles also repeat all the time is (over)compensate for perceived(or real) past weaknesses. That's why unified memory machines are often followed by designs using discrete pools, and reverse. That's why systems that favor CPU:GPU ratio in one way usually tilt the scales in the other direction the next iteration and on and on. This also isn't anything particular to any one platform holder - they've all done similar (over)adjustments in their system history.

So yes - I'd see it as entirely possible for PS5 to optimize for cost vs peak performance with a hybrid memory pool, and no amount of arm-chair analysis will get around the fact that if it happens, it would be to maximize the $/(GB/s) ratio. Whether there needs to be a marketing spin (frankly most developers don't give a damn what marketing says) for the fanbase, that's something for their PR teams to work out.
I do agree we'd likely want more than 400GB/s, no matter what the configuration used is.

But to go back to what I said earlier - modern hardware already has 2-3 layers of memory before hitting external memory chips, so if this design effectively amounts to an L4/L5 cache structure backed by a very fast solid-state that presents the entire available 'memory', I'll argue that is something of a game changer compared to how we do things today, on any platform (although ironically it's also closer to old cartridge systems). Sure - it's always easier (and safer) to just look for repeat of existing state 'but faster' - but the exciting bit about consoles (Which was largely lost this gen) has been about creative hardware approaches, not just recycling PC components in a fixed configuration.

I like your reasoning, but the thing is, 18 gigs of GDDR6 would barely be more expensive than the proposed HBM-DDR hybrid solution while giving you a whole lot fewer problems when it comes to bandwidth requirements. This can only be true if the PS5 turns out to be a lot less powerful than I would expect at this point (like 9 instead of 13 TFLOPS), and in that case you could just go with 16GB of GDDR6 on a 256-bit bus, which should once again be cheaper.
 

SonGoku

Member
Fafalada
I'm still waiting for you to back up your claim of PS3 games running at 2x the 360 resolution. Where are those 5 games you were supposed to name?

The rumored setup doesn't make sense because:
  • Already slow salvaged HBM2 combined with even slower DDR4
  • Board complexity (added cost)
  • Extra silicon eating into GPU die budget = weaker GPU
I could see your point if the HBM used was 1TB/s or something crazy like that.
 
Last edited:

Fafalada

Fafracer forever
I like your reasoning, but the thing is, 18 gigs of GDDR6 would barely be more expensive than the proposed HBM-DDR hybrid solution while giving you a whole lot fewer problems when it comes to bandwidth requirements.
Quite honestly I don't have enough visibility on costs to argue about that, so I'll have to take people's word on it at the moment.
Although on the flip side, I recall a lot of similar debates around Rambus memory use in consoles in the late 90s and early 00s, and while I don't want to reheat the performance characteristics debates there, cost obviously worked out for Sony and Nintendo better than for the direct competition at the time, but who knows, maybe they just got lucky.

I'm still waiting for you to back up your claim of PS3 games running at 2x the 360 resolution. Where are those 5 games you were supposed to name?
Listed on B3D - I'm not interested in debating the list itself; the original point was around the % number of RSX 'on average' being the slower GPU (which you argued based on framerate marks and DF speculation), not asserting that it was somehow naively 2x faster (there were certainly workloads where speed was in its favor - but never 2x). Fact is, when that delta is actually 40% (like this gen) you don't get the slower GPU running 3D games at higher res, not even once.

I could see your point if the HBM used was 1TB/s or something crazy like that.
Yeah, that was my initial read (if the target was in the 700s or more), but I guess that's not likely. Then again, if the TF ratings are as low as DF insists, the bandwidth requirements go down too...
 
Last edited:

SonGoku

Member
Listed on B3D - I'm not interested in debating the list itself, the original point was around performance % comparisons where RSX was still 'on average' the slower GPU, not asserting that it was somehow naively 2x faster (there were certainly workloads where speed was in its favor - but never 2x).
You claimed some PS3 games were 2x the resolution, supporting your argument favoring RSX. The burden of proof is on you.
I ask you to name those 5 games not as a list war but simply because I don't believe they exist.
Yeah, that was my initial read (if the target was in the 700s or more), but I guess that's not likely. Then again, if the TF ratings are as low as DF insists, the bandwidth requirements go down too...
DF never insisted on a number; they said it could be anywhere between 8-10TF, 10-12TF, or 12-14TF.
They didn't give any estimate, as they just don't know (their words).

700GB/s would still not justify the sacrifices and added cost over a pure 24GB GDDR6 setup.
 
Last edited:

SonGoku

Member
Yes, I meant Xbox One. And I was joking. The comparison between the Xbox One and a hypothetical PS5 with a hybrid memory pool is a big stretch, to say the least.
It brings the same hw limitations and sacrifices. Since you didn't bother to read my post, I'll list them again.
  1. Added board complexity
  2. Extra silicon eating into GPU die budget = weaker GPU
  3. Much slower overall bandwidth than a pure GDDR6 system
 

Fafalada

Fafracer forever
I ask you to name those 5 games not as a list war but simply because I don't believe they exist.
I'll take your word on that - but IME that's just a request to move the goalposts and start debating whether those games are valid evidence or not.
To save some time - 2 were Japanese developed during the era where PS3 marketing was bullish on 1080p messaging, and 3 were western pubs, 1 sports title among them. The list is obviously incomplete, so I'm not saying it was conveniently exactly 5.

DF never insisted on a number; they said it could be anywhere between 8-10TF, 10-12TF, or 12-14TF.
This 'quote'(assuming it's real) https://www.neogaf.com/threads/rumo...t-12-9tf-56cu-and-more.1478566/post-254117338 claims anything more than 8TF doesn't pass 'sanity check'.

700GB/s would still not justify the sacrifices and added cost over a pure 24GB GDDR6 setup.
That goes back to what I alluded to in the previous post. The use of Rambus memory in the 3 consoles that had it had a lot to do with a much more competitive 5+ year price curve (which did, in fact, pan out for at least 2 of them) rather than just a direct bandwidth/price advantage on day 1; it's possible this could be the same.
I'll agree board complexity is a compelling argument against this specific setup (GPU budget not so much - as I doubt silicon for memory mapping is even a noteworthy blip in overall die-space).
 
Last edited:

SonGoku

Member
To save some time - 2 were Japanese developed during the era where PS3 marketing was bullish on 1080p messaging, and 3 were western pubs, 1 sports title among them. The list is obviously incomplete, so I'm not saying it was conveniently exactly 5.
Can you just name them (or PM me even)? I'm not going on a scavenger hunt to seek proof of your claim. To sweeten the deal, if those games are indeed 2x the resolution, I won't discuss it any further.
It wasn't even relevant to my argument; I already pointed out how DF analysis covered more than just fps differences and how they always credited those differences to an advantage Xenos had over RSX.
This 'quote'(assuming it's real)
lol why are you quoting a sarcastic troll post? lol
I'll agree board complexity is a compelling argument against this specific setup (GPU budget not so much - as I doubt silicon for memory mapping is even a noteworthy blip in overall die-space).
The rumor talks of extra silicon needed to shuffle things around; big or small, this will impact the GPU, and it's another sacrifice.
Bandwidth needs to be on the high end of what GDDR6 offers (960GB/s) or more to justify such a setup.

Anyway, this rumor doesn't have any weight credibility-wise; it's pointless going on and on about something that most likely isn't actually happening.
We don't know if GDDR6 prices will be cheaper or more expensive long term (compared to this hybrid setup); we do know, however, that GDDR6 is dropping in price long term.
 
Last edited:

ethomaz

Banned
It has to be or MS is doomed. I love Sony but it makes logical sense that next gen will belong to Microsoft. It's just how these cycles work.
Like everybody thought after the 360...

Unless Sony fucks up big time, PlayStation will always be market leader.

The momentum built with PS4 is even stronger than with PS3... PS5 is trending toward an even better launch than PS4.
 
Last edited:

Fafalada

Fafracer forever
Can you just name them (or PM me even)? I'm not going on a scavenger hunt to seek proof of your claim. To sweeten the deal, if those games are indeed 2x the resolution, I won't discuss it any further.
Full Auto and Ridge Racer were two Japanese ones. At least one of the NBA titles was the sports one.

and how they always credited those differences to an advantage Xenos had over RSX
I should note here that I have great appreciation for the work DF does, but a lot of deeper analysis is based on conjecture and speculation. More than once this gen the CPU was highlighted for certain things where there was no real evidence for the claim, and this isn't a fault on DF's side - it's just the limitation of analyzing a game in a black box where you can't profile individual components.
Again - not disputing Xenos was the faster rasterizer on average - the difference just wasn't all that pronounced (and definitely smaller than this gen), and obviously due to different architectures, a single number (like available compute) doesn't paint the full picture nearly as well either.

lol why are you quoting a sarcastic troll post? lol
In the lead-up to console launches people start to do all kinds of crazy things; it's harder to be sure what's clear sarcasm.
And notably, the 8TF rumours have been around for a good while longer than these, so for all I know it's being taken seriously. :/

The rumor talks of extra silicon needed to shuffle things around
Most modern GPUs in this decade have had proper virtual addressing support, so you're really just adding some logic to swap pages in hardware instead of letting the OS do it. Nothing's free, sure - but in the grand scheme of things I don't think this would tilt anything in any direction.

We don't know if GDDR6 prices will be cheaper or more expensive long term (compared to this hybrid setup), we do know however GDDR6 is dropping in price long term.
Well the rumor does go into that specific bit (expectation of HBM2/3 price curve being more favorable). But fair enough, that's getting too speculative to discuss.
 

SonGoku

Member
Full Auto and Ridge Racer were two Japanese ones. At least one of the NBA titles was the sports one.
Full Auto and Ridge Racer don't have 360 counterparts to compare
All NBA versions shared between PS360 are at resolution parity
The fact that RSX had several games that ran at higher (and up to 2x) resolution of 360 counterparts
I'll give you the benefit of the doubt here and assume you had a memory lapse or meant 20%.
I should note here that I have great appreciation for the work DF does, but a lot of deeper analysis is based on conjecture and speculation
Fair enough.
Again - not disputing Xenos was the faster rasterizer on average - the difference just wasn't all that pronounced (and definitely smaller than this gen), and obviously due to different architecture
Raw power metrics were similar, but Xenos had the huge unified shader architecture advantage.
 
Last edited:

Fafalada

Fafracer forever
Full Auto and Ridge Racer don't have 360 counterparts to compare
RR7 was a port of RR6 with a few extra stages and cars. Full Auto on PS3 (and PSP) released the same year as the 360 game - again, it was largely the same content and game.

I got mixed up with NBA Live/2K - anyway, the Cars and Commando games are 1080p on PS3 as well.

Raw power metrics were similar, but Xenos had the huge unified shader architecture advantage.
Yes, which varied with different workloads. A bulk of what the SPEs brought to the table for the GPU in late cross-platform ports was alleviating vertex workload bottlenecks, mitigating that problem.
 

quickwhips

Member
Xbox.
I have more games on Xbox than PS4, so they all transfer for me to play.
I'm getting 4 games a month with Gold.
I prefer the Xbox Elite controller.
My friends are mostly on Xbox.
I like my game share partner on Xbox better.
 
Last edited:

SonGoku

Member
RR7 was a port of RR6 with a few extra stages and cars. Full Auto on PS3 (and PSP) released the same year as the 360 game - again, it was largely the same content and game.

I got mixed up with NBA Live/2K - anyway, the Cars and Commando games are 1080p on PS3 as well.
RR7 and Full Auto 2 (although I can't find FA1's res) had one extra year to polish, but fine, OK.
Cars and Commando are just using less expensive AA to increase res.
Yes, which varied with different workloads. A bulk of what the SPEs brought to the table for the GPU in late cross-platform ports was alleviating vertex workload bottlenecks, mitigating that problem.
I agree wholeheartedly; taking the console as a whole, I believe the Cell + RSX combo to be superior to the 360, albeit much harder to get the most out of.
I was merely discussing the qualities of Xenos and RSX as standalone GPUs.
 