Insane Metal
Member
> GTX 2080 is stronger than anything you will see on next-gen consoles...
> 448GB/s

Yeah, and that's an Nvidia card.
Like I said, the 4TF 1080p machine has more overhead than the 12TF 4K machine; therefore, if a developer decides to make a game at a lower resolution on the 12TF machine, they won't have to drop the res as much on the 4TF machine to achieve the same results. Personally, I'd rather the 12TF machine be a 1080p machine, as I'm sick of the stupid resolution bumps taking up so many of the resources.
> I'm not claiming either way, I'm just asking how he came to that number.

He's not wrong. You need 3-4 times the processing power to run a 1080p game at 4K, depending on the game. I have tested it on tons of PC games over the years and it's always in that range.
You could probably get it down a little on a closed system like a console, hence why both Xboxes are rumoured to be 4TF and 12TF.
> GTX 2080 is stronger than anything you will see on next-gen consoles...
> 448GB/s

Nvidia cards are more bandwidth efficient.
> I doubt it's true

That is the basis of the whole rumor... HBM2 and HBM3 are 100% compatible, so Sony got a big deal on HBM2 right now, and in one or two years they will shift to HBM3 to cut costs.
> I'm not claiming either way, I'm just asking how he came to that number.

4K is 4x the resolution of 1080p, but MS stated that during their design phase with Scorpio they found that rasterization efficiency scales with resolution, and that it took only 3.5x the power to scale an unnamed (Forza) 1080p X1 game to 4K at the same settings.
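The arithmetic behind those two figures is worth spelling out; a throwaway check (the 3.5x is MS's claimed number, not something derivable):

```python
# The "4x the resolution" figure is just pixel count.
pixels_1080p = 1920 * 1080       # 2,073,600 pixels
pixels_4k = 3840 * 2160          # 8,294,400 pixels
print(pixels_4k / pixels_1080p)  # 4.0

# MS's Scorpio finding: the jump cost ~3.5x GPU power rather than the
# naive 4x, i.e. rasterization got ~12.5% more efficient per pixel at 4K.
print(1 - 3.5 / 4.0)             # 0.125
```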
> 4K is 4x the resolution of 1080p, but MS stated that during their design phase with Scorpio they found that rasterization efficiency scales with resolution, and that it took only 3.5x the power to scale an unnamed (Forza) 1080p X1 game to 4K at the same settings.

Thanks for doing the work. So just from looking at this, it's possible the same game will always be 1080p/60 on Lockhart, while 4K/30 on Anaconda. Unless it's a game like Hitman, Doom or Battlefield 1.
Much depends on the architecture and memory config, too. Xbox One X can run some PS4 1080p games at 4K, for example (RDR2). Also, Nvidia matches Vega 56/64 with much less powerful hardware on the compute side of things.
---
Games will be made for native 4K on Anaconda. Let's say it's 12TF like the rumor, similar to the 12.7TFLOPS Vega 64 in raw compute. Go look up benchmarks for Vega 64 at 4K, then look at 1080p benchmarks and see what cards can run at a similar framerate. In general an RX 470 4GB (4.9TFLOPS) is more than enough, and it lacks the newer features of Vega like direct pixel engine access to the L2 cache. It has about half the memory bandwidth, too. In most cases even a lowly 380X can do the job, but it's harder to find mixed benchmarks.
I took some time to check benchmarks done on the same CPU, RAM, HDD, etc. I compared Vega 64 at 4K to the RX 470 4GB at 1080p, same settings (usually max/ultra on Guru3D):
-Sniper Elite 4
Vega 64(4K) = 55fps | RX 470(1080p) = 68fps
-Rise of the Tomb Raider
Vega 64(4K) = 46fps | RX 470(1080p) = 63fps
-Battlefield 1
Vega 64(4K) = 62fps | RX 470(1080p) = 86fps
-Gears of War 4
Vega 64(4K) = 43fps | RX 470(1080p) = 75fps
-Hitman
Vega 64(4K) = 62fps | RX 470(1080p) = 74fps
-Mankind Divided
Vega 64(4K) = 38fps | RX 470(1080p) = 64fps
-DOOM(Vulkan)
Vega 64(4K) = 79fps | RX 470(1080p) = 131fps
-Resident Evil 7
Vega 64(4K) = 46fps | RX 470(1080p) = 86fps
-The Witcher 3
Vega 64(4K) = 44fps | RX 470(1080p) = 51fps
-GTA V
Vega 64(4K) = 49fps | RX 470(1080p) = 76fps
10-game AVG:
Vega 64(4K) = 52.4fps | RX 470 4GB (1080p) = 77.4fps
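Since the projections later in the thread lean on these numbers, here's a quick sanity check of the averages and the implied 4K-to-1080p scaling (just the fps values quoted above, nothing more):

```python
# fps values from the 10-game comparison above, in list order.
vega64_4k   = [55, 46, 62, 43, 62, 38, 79, 46, 44, 49]
rx470_1080p = [68, 63, 86, 75, 74, 64, 131, 86, 51, 76]

avg_vega = sum(vega64_4k) / len(vega64_4k)      # 52.4
avg_470  = sum(rx470_1080p) / len(rx470_1080p)  # 77.4
ratio = avg_470 / avg_vega                      # ~1.48 (~47-48% faster)

# If a next-gen title ran 4K/30fps on the big GPU, the same settings at
# 1080p on the small GPU would land around:
print(round(30 * ratio, 1))                     # 44.3
```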
I just like reading through these next-gen threads and I learn a lot, so hopefully people will find this interesting or helpful.
> Thanks for doing the work. So just from looking at this, it's possible the same game will always be 1080p/60 on Lockhart, while 4K/30 on Anaconda. Unless it's a game like Hitman, Doom or Battlefield 1.

The 10-game avg was 52.4fps on Vega 64 and 77.4fps on the RX 470 4GB, so it should mean a 4K/30fps game on Anaconda will be 1080p/44fps on Lockhart. So going 4K/30fps on Anaconda and the same settings at 1080p/30fps on Lockhart will be a breeze. Same with 4K/60fps to 1080p/60fps.
Or of course, if devs put in the effort to optimize, they can probably squeeze out that little bit more.
> The 10-game avg was 52.4fps on Vega 64 and 77.4fps on the RX 470 4GB, so it should mean a 4K/30fps game on Anaconda will be 1080p/44fps on Lockhart. So going 4K/30fps on Anaconda and the same settings at 1080p/30fps on Lockhart will be a breeze. Same with 4K/60fps to 1080p/60fps.

If the average is 77fps on the RX 470, why would Lockhart only be 1080p/44 instead of 60? The list you showed had them all above 60fps at 1080p.
Going by Xbox One X, MS will encourage native 4K, but in the case of 4K checkerboard, you'd need to drop settings on Lockhart. I always start with dropping HBAO+ to SSAO, and shadows to 'medium'.
> If the average is 77fps on the RX 470, why would Lockhart only be 1080p/44 instead of 60? The list you showed had them all above 60fps at 1080p.

It's a ~47% increase in frames at 1080p over the Vega 64 at 4K. Next-gen games will be targeting 4K/30fps on a ~12TF GPU (PS5, Anaconda). So if Vega 64 benchmarked at 4K/30fps in that imaginary next-gen game, the RX 470 would be at 1080p/44fps going by the average. Vega 64 had a 52fps AVG in those benchmarks, not 30fps.
Sorry, NEVER drop from HBAO+ if it's available.
As I said earlier, it can only have a negative effect on devs that would want to stay 1080p-only for next gen, and I doubt MS will allow that.
With the advent of checkerboard, and even the decent upscaling from 1800p, there's no reason to go 1080p on a 12TF 4K system unless you're offering a performance mode. In many cases, the Lockhart would still be able to offer something similar with reduced settings in some areas.
I hope the rumors are true and the price comes in around $250-299. I'd like 2 for me and my boys and it's more practical to have two $299 1080p systems than a single $500 4K system in our case.
> I'm not claiming either way, I'm just asking how he came to that number.

Yeah Sony, if you're listening and there is any truth to this rumor, I hope you're planning the right way to go: 16GB HBM2 + 8GB DDR4! Beast mode or bust.

---

Nvidia cards are more bandwidth efficient.
Nvidia is also known for bean counters maximizing profits.
I doubt it's true.
8GB of mediocre salvaged HBM2 + 16GB of slow DDR4 is the opposite of the winning PS4 design philosophy. It looks more like the Xbone.
Not to mention the added complexity and extra silicon eating into the GPU die budget.
That rumor is 100% fake; if it's real, Sony lost the gen.
> What are you thinking the Anaconda MSRP will be? I'm hoping for $299 Lockhart and $499 Anaconda. A $399 Lockhart would surely mean a $599 Anaconda, right?

To achieve that price they would have to cut more than just the GPU and RAM in Lockhart; I think it will be $350-400. Memory reduction is the biggest threat to game development: if the rumors are true and Lockhart has 12GB of memory (vs 24GB in Anaconda), devs would have to create games with that amount in mind.
> NEVER drop from HBAO+ if it's available.

From what I remember the X1X had a pretty intuitive dev kit that made it easy for devs to scale games. I don't think this would be such a problem. I'm not sure why devs would want to stay at 1080p when they will already have to build scaling into the PC counterpart.
Some people just don't get what you (and I) are saying: the base (graphics quality) upgrade for next-gen consoles will be ~2x in GPU power:
PS4 1080p (1.8TF) -> Lockhart 1080p (4TF)
X1X (kinda) 4K (6TF) -> Anaconda 4K (12TF)
As I said earlier, it can only have a negative effect on devs that would want to stay 1080p-only for next gen, and I doubt MS will allow that.
> Well, Nvidia has much better delta color compression than GCN and therefore requires a lot less bandwidth. Vega 56 was bandwidth starved even with HBM2, so it's highly likely you'll need 500+GB/s to keep an 11+TF Navi fed. That pretty much rules out a 256-bit bus and therefore 16GB of GDDR6, so I'm pretty confident we will see 18 or 24GB if it's going to be GDDR6.

But Navi is not Vega.
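The bus-width part of that argument is simple arithmetic. Assuming 14Gbps GDDR6 (the common speed bin at the time; 16Gbps parts change the numbers), peak bandwidth is bus width times per-pin rate divided by 8:

```python
def gddr6_bandwidth_gbs(bus_bits: int, gbps_per_pin: float = 14.0) -> float:
    """Peak GDDR6 bandwidth in GB/s for a given memory bus width."""
    return bus_bits * gbps_per_pin / 8  # bits -> bytes

for bus in (256, 320, 384):
    print(bus, gddr6_bandwidth_gbs(bus))  # 448.0, 560.0, 672.0
```

A 256-bit bus tops out at 448GB/s - the same figure quoted for the 2080 earlier in the thread - so a 500+GB/s target does point at a wider bus.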
> Yeah Sony, if you're listening and there is any truth to this rumor, I hope you're planning the right way to go: 16GB HBM2 + 8GB DDR4! Beast mode or bust.

That setup still sucks.
> I'm not even getting into ease of development.

Well, developer-invisible segmentation (see AMD's HBCC feature) is a bit different from the Xbox One setup, which was 0.03125GB (32MB) super fast + 8GB fast, where developers must manually manage both pools... vs 8GB super duper fast + 16GB fast with transparent caching and automatic paging of data in and out, where devs do not necessarily have to worry about memory allocation and manual memory transfers.
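To make "transparent caching and automatic paging" concrete, here is a toy sketch (purely illustrative; HBCC's real mechanism is hardware page translation, and the class and page granularity here are invented for the example): the application reads through one address space, and a small fast pool fronts the large slow pool with LRU eviction.

```python
from collections import OrderedDict

class HybridMemory:
    """Toy model: one visible address space over a fast + slow pool."""

    def __init__(self, fast_pages: int):
        self.fast = OrderedDict()  # page -> data, ordered oldest-first
        self.slow = {}             # backing pool
        self.fast_pages = fast_pages

    def read(self, page: int):
        if page in self.fast:          # hit: already in the fast pool
            self.fast.move_to_end(page)
            return self.fast[page]
        data = self.slow.pop(page, 0)  # miss: page in from the slow pool
        self.fast[page] = data
        if len(self.fast) > self.fast_pages:
            old, val = self.fast.popitem(last=False)  # evict LRU page
            self.slow[old] = val       # page out to the slow pool
        return data
```

The point of the post is exactly this property: the caller never picks a pool, whereas the Xbox One's 32MB ESRAM had to be managed by hand.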
Only 16gb
Salvaged slow parts
Added complexity
Extra silicon eating into the GPU die budget
Sony is not stupid enough to make the same mistake their competition made 5 years ago.
24GB GDDR6 OR BUST!
Im not even getting into ease of development
It carries all of the hardware limitations of the Xbone; it's essentially an evolution of the Xbone design philosophy.
Also, there's no way the setup will be as fast as 24GB of unified fast RAM (HBM2, GDDR6, etc.).
Limitations of the setup include, but are not limited to:
Slow salvaged parts
Added board complexity eating into BOM budget
Extra silicon needed eating into GPU die budget which translates into weaker GPU
This setup doesn't make any sense; all things considered, it won't even save that much money compared to a pure 24GB GDDR6 setup to justify the penalty in bandwidth and GPU performance.
> That setup still sucks.

The salvaged parts thing is just one guy's rumor. Not buying it. Wasn't it in this thread that we got Samsung's announcement of HBM2 going into mass production?
> 24GB GDDR6

No, 24GB of HBM2 would always be better, but also too expensive, and 24GB of GDDR6 would hardly match the bandwidth per dollar of the solution highlighted above.
I do not see it as an evolution of that philosophy; or, put it this way, it is a significantly different beast when you talk 32MB paired with 8GB of slower RAM vs an 8GB + 16GB setup.
> The salvaged parts thing is just one guy's rumor. Not buying it. Wasn't it in this thread that we got Samsung's announcement of HBM2 going into mass production?

Salvaged would explain the anemic bandwidth.
> 8GB HBM2 would only be around 500GB/s

400GB/s, btw.
> It's 485GB/s or something at 950MHz in Vega 64. I could clock mine to 1,100MHz two years ago. With process improvements, 950-1,000MHz should be doable in low-power console use nowadays.

I'm talking about the rumor being discussed in the thread, which claims Sony is using "cheap" salvaged memory, hence 400GB/s.
> Seems like a repeat of the Xbone and PS4 situation, point for point.

Let's cut to the chase here.
Consoles and embedded systems have always used hybrid memory pools for cost-efficiency reasons - there's never been any 'philosophy' behind it other than money, and this predates all current consoles by 20-30 years; there are no 'recent trends' to debate here.
With all that said, another thing that consoles also repeat all the time is (over)compensating for perceived (or real) past weaknesses. That's why unified memory machines are often followed by designs using discrete pools, and vice versa. That's why systems that favor the CPU:GPU ratio in one way usually tilt the scales in the other direction the next iteration, and on and on. This also isn't anything particular to any one platform holder - they've all done similar (over)adjustments in their system history.
So yes - I'd see it as entirely possible for PS5 to optimize for cost vs peak performance with a hybrid memory pool, and no amount of arm-chair analysis will get around the fact that if it happens, it would be to maximize the $/(GB/s) ratio. Whether there needs to be a marketing spin (frankly most developers don't give a damn what marketing says) for the fanbase, that's something for their PR teams to work out.
I do agree we'd likely want more than 400GB/s, no matter what the configuration used is.
But to go back to what I said earlier - modern hardware already has 2-3 layers of memory before hitting external memory chips, so if this design effectively amounts to an L4/L5 cache structure backed by very fast solid-state storage that presents the entire available 'memory', I'll argue that is something of a game changer compared to how we do things today, on any platform (although ironically it's also closer to old cartridge systems). Sure - it's always easier (and safer) to just look for a repeat of the existing state 'but faster' - but the exciting bit about consoles (which was largely lost this gen) has been about creative hardware approaches, not just recycling PC components in a fixed configuration.
> I like your reasoning, but the thing is, 18GB of GDDR6 would barely be more expensive than the proposed HBM-DDR hybrid solution while giving you a whole lot fewer problems when it comes to bandwidth requirements.

Quite honestly I don't have enough visibility on costs to argue about that, so I'll have to take people's word on it at the moment.
> I'm still waiting for you to back up your claim of PS3 games running at 2x the 360 resolution. Where are those 5 games you were supposed to name?

Listed on B3D - I'm not interested in debating the list itself. The original point was about RSX 'on average' being the slower GPU (which you argued based on framerate marks and DF speculation), not asserting that it was somehow naively 2x faster (there were certainly workloads where speed was in its favor - but never 2x). Fact is, when that delta is actually 40% (like this gen) you don't get the slower GPU running 3D games at higher res, not even once.
> I could see your point if the HBM used was 1TB/s or something crazy like that.

Yeah, that was my initial read (if the target was in the 700s or more), but I guess that's not likely. Then again, if the TF ratings are as low as DF insists, the bandwidth requirements go down too...
> Listed on B3D - I'm not interested in debating the list itself. The original point was about RSX 'on average' being the slower GPU, not asserting that it was somehow naively 2x faster (there were certainly workloads where speed was in its favor - but never 2x).

You claimed some PS3 games were 2x the resolution, supporting your argument favoring RSX. The burden of proof is on you.
> Yeah, that was my initial read (if the target was in the 700s or more), but I guess that's not likely. Then again, if the TF ratings are as low as DF insists, the bandwidth requirements go down too...

DF never insisted on a number; they said it could be anywhere between 8-10TF, 10-12TF, or 12-14TF.
Just replace SRAM with HBM2, DDR3 with DDR4, and 8GB GDDR5 with 24GB GDDR6.
> Wow, I didn't know that the X1 had 8GB SRAM...

Xbone, not Xbone X.
> Xbone, not Xbone X.

Yes, I meant Xbox One. And I was joking. The comparison between the Xbox One and a hypothetical PS5 with a hybrid memory pool is a big stretch, to say the least.
> Yes, I meant Xbox One. And I was joking. The comparison between the Xbox One and a hypothetical PS5 with a hybrid memory pool is a big stretch, to say the least.

It brings the same hardware limitations and sacrifices. Since you didn't bother to read my post, I'll list them again.
> I ask you to name those 5 games not as a list war but simply because I don't believe they exist.

I'll take your word on that - but IME that's just a request to move the goalposts and start debating whether those games are valid evidence or not.
> DF never insisted on a number; they said it could be anywhere between 8-10TF, 10-12TF, or 12-14TF.

This 'quote' (assuming it's real) https://www.neogaf.com/threads/rumo...t-12-9tf-56cu-and-more.1478566/post-254117338 claims anything more than 8TF doesn't pass the 'sanity check'.
> 700GB/s would still not justify the sacrifices and added cost over a pure 24GB GDDR6 setup.

That goes back to what I alluded to in the previous post. The use of RDR in the 3 consoles that had it had a lot to do with a much more competitive 5+ year price curve (which did, in fact, pan out for at least 2 of them) rather than just a direct bandwidth/price advantage on day 1; it's possible this could be the same.
> To save some time - 2 were Japanese-developed during the era where PS3 marketing was bullish on 1080p messaging, and 3 were from western pubs, 1 sports title among them. The list is obviously incomplete, so I'm not saying it was conveniently exactly 5.

Can you just name them (or PM me even)? I'm not going on a scavenger hunt to seek proof of your claim. To sweeten the deal: if those games are indeed 2x the resolution, I won't discuss it any further.
> This 'quote' (assuming it's real)...

lol why are you quoting a sarcastic troll post? lol
> I'll agree board complexity is a compelling argument against this specific setup (GPU budget not so much - as I doubt silicon for memory mapping is even a noteworthy blip in overall die space).

The rumor talks of extra silicon needed to shuffle things around; this will impact the GPU. Big or small, it's another sacrifice.
> It has to be, or MS is doomed. I love Sony, but it makes logical sense that next gen will belong to Microsoft. It's just how these cycles work.

Like everybody thought after the 360...
> Can you just name them (or PM me even)? I'm not going on a scavenger hunt to seek proof of your claim. To sweeten the deal: if those games are indeed 2x the resolution, I won't discuss it any further.

Full Auto and Ridge Racer were two of the Japanese ones. At least one of the NBA titles was in the sports category.
> ...and how they always credited those differences to an advantage Xenos had over RSX.

I should note here that I have great appreciation for the work DF does, but a lot of their deeper analysis is based on conjecture and speculation. More than once this gen the CPU was highlighted for certain things where there was no real evidence for the claim, and this isn't a fault on DF's side - it's just the limitation of analyzing a game as a black box where you can't profile individual components.
> lol why are you quoting a sarcastic troll post? lol

In the lead-up to console launches people start to do all kinds of crazy things; it's harder to be sure what's clear sarcasm.
> The rumor talks of extra silicon needed to shuffle things around.

Most modern GPUs in this decade have had proper virtual addressing support, so you're really just adding some logic to swap pages in hardware instead of letting the OS do it. Nothing's free, sure - but in the grand scheme of things I don't think this would tilt anything in any direction.
> We don't know if GDDR6 prices will be cheaper or more expensive long term (compared to this hybrid setup); we do know, however, that GDDR6 is dropping in price long term.

Well, the rumor does go into that specific bit (the expectation of the HBM2/3 price curve being more favorable). But fair enough, that's getting too speculative to discuss.
> The fact that RSX had several games that ran at higher (and up to 2x) resolution than their 360 counterparts...

I'll give you the benefit of the doubt here and assume you had a memory lapse or meant 20%.
> I should note here that I have great appreciation for the work DF does, but a lot of their deeper analysis is based on conjecture and speculation.

Fair enough.
> Again - not disputing Xenos was the faster rasterizer on average - the difference just wasn't all that pronounced (and definitely smaller than this gen), and obviously due to a different architecture.

Raw power metrics were similar, but Xenos had the huge unified shader architecture advantage.
> GTX 2080 is stronger than anything you will see on next-gen consoles...
> 448GB/s

How do you figure? That's only a 1080 Ti equivalent; the Xbox One X is already approximately 50% of that in terms of GPU capability.
> Full Auto and Ridge Racer don't have 360 counterparts to compare.

RR7 was a port of RR6 with a few extra stages and cars. Full Auto on PS3 (and PSP) released the same year as the 360 game - again, it was largely the same content and game.
> Raw power metrics were similar, but Xenos had the huge unified shader architecture advantage.

Yes, which varied with different workloads. A bulk of what the SPEs brought to the table for the GPU in late cross-platform ports was alleviating vertex workload bottlenecks, mitigating that problem.
> RR7 was a port of RR6 with a few extra stages and cars. Full Auto on PS3 (and PSP) released the same year as the 360 game - again, it was largely the same content and game.

RR7 and Full Auto 2 (although I can't find FA1's res) had one year extra to polish, but fine, OK.
I got mixed up with NBA Live/2K - anyway, the Cars and Commando games are 1080p on PS3 as well.
> Yes, which varied with different workloads. A bulk of what the SPEs brought to the table for the GPU in late cross-platform ports was alleviating vertex workload bottlenecks, mitigating that problem.

I agree wholeheartedly. Taking the console as a whole, I believe the Cell + RSX combo to be superior to the 360, albeit much harder to get the most out of.
> How do you figure? That's only a 1080 Ti equivalent; the Xbox One X is already approximately 50% of that in terms of GPU capability.

Wut?
> Wut?

What are you not understanding?