Rumour: PS5 Devkits have released (UPDATE 4th April: Rumoured Specs)

I mean, games like The Order blow everything out of the water from a graphical perspective.
As beautiful as 1886 is - and this is not to take anything away from the achievement - it was a heavily filtered, highly linear game with a near 20% reduction in resolution over native 1080p (a choice that no doubt benefited the game on both a technical front and an artistic front).
 
As beautiful as 1886 is - and this is not to take anything away from the achievement - it was a heavily filtered, highly linear game with a near 20% reduction in resolution over native 1080p (a choice that no doubt benefited the game on both a technical front and an artistic front).
It was more of a set piece than an actual game. The PC is already years ahead of anything possible on consoles. Can you imagine how Star Citizen would run on the Pro? I am hoping they consider a good CPU and have more games running at higher frame rates.
 
Last edited:
PS5 - 2020

NAVI 15Tflops
Zen(2) 8-Core 3.2Ghz
32GB GDDR6

I don't know what they'll do with the price. But you can be sure that the console will be powerful. Write it down.


I feel like we may be starting to run away with it a tad, and it's best to reel expectations in a bit so we're not let down. Some are reporting Navi will be an RX 580 replacement rather than a Vega 64 replacement:

https://www.fudzilla.com/news/graphics/46038-amd-navi-is-not-a-high-end-card
https://www.reddit.com/r/hardware/comments/8bqhgh
Navi 7nm, the 2019 chip, will not be a high-end GPU; it will be a quite powerful performance/mainstream chip.

Think of it as the Radeon RX 580 / 480 replacement. It will be small, and is likely to perform as well as the 14nm Vega that shipped last year. In Nvidia performance terms, Navi should perform close to the GeForce GTX 1080, which is quite good for a mainstream part, but probably on par with the mainstream part planned after the high-end part.

Note there is no direct source, so this rumor should be scrutinized more than a typical rumor.

However, this appears reasonable.

  • Navi will likely debut on a 1st gen 7nm process without the yield benefits of EUV (officially we only know that it's on some kind of 7nm process). Therefore, small chips are better if you're only selling to gamers (7nm Vega 20 has pro customers).
  • The existing mid-range (Polaris 10 & 11) will be 3 years old in 2019 and desperately needs a replacement.
  • AMD has previously dealt with a mid-range-only gaming lineup (see 2016), so it's not entirely unexpected that they'll do it again.
  • Vega still has a 4 shader engine limit, which limits the GPU to 64CUs. That's basically Vega 10. Therefore, Navi 10 can only be a "shrunken" Vega 10 unless this limit is removed. However, this limit has existed since GCN debuted, so I doubt it's going anywhere until post-Navi.
  • On a brief aside, the number of compute engines has been an unexpectedly interesting point of discussion over the years. Back in 2013 we learned that the then-current iteration of GCN had a maximum compute engine count of 4, which AMD has stuck to ever since, including the new Vega 10. Which in turn has fostered discussions about scalability in AMD’s designs, and compute/texture-to-ROP ratios.
    Talking to AMD’s engineers about the matter, they haven’t taken any steps with Vega to change this. They have made it clear that 4 compute engines is not a fundamental limitation – they know how to build a design with more engines – however to do so would require additional work.


So think Vega 64 performance at RX 580 power, which is still a proper generational leap in performance per watt. But it's a mid-range part like the 580, and, going by the PS4, likely tamped down a little further to fit an APU's power envelope shared with the CPU.

The Radeon 580 is 6.1 TFLOPS and the Vega 64 is 12.5, so expecting more than a smidgen above the latter is getting optimistic imo. Rounding up to 13, we're about 7x up from the base PS4 (very, very simply, barring any memory, ROP, TMU etc. bottlenecks), which ain't bad in my books. 15+ TFLOPS is a stretch as far as I see it.
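For what it's worth, those multiples are easy to sanity-check. A quick sketch using paper TFLOPS only (1.84 TF for the base PS4; real-world scaling depends on memory bandwidth, ROPs, TMUs and so on):

```python
# Paper-TFLOPS multiples over the base PS4. These are peak figures,
# not measured performance.

ps4_tflops     = 1.84   # base PS4
rx_580_tflops  = 6.1    # Radeon RX 580
vega_64_tflops = 12.5   # Vega 64 (approx.)
speculated     = 13.0   # the "rounding up" guess above

print(f"RX 580  vs PS4: {rx_580_tflops / ps4_tflops:.1f}x")
print(f"Vega 64 vs PS4: {vega_64_tflops / ps4_tflops:.1f}x")
print(f"13 TF   vs PS4: {speculated / ps4_tflops:.1f}x")
```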
 
“Requires additional work” may be a nice way for Sony and MS to invest in semi custom enhancements to accelerate this work.
This is true. I do often point to the PS4 borrowing 8 ACEs from a then-future GPU product, and the PS4 Pro borrowing Vega texture compression the same way.

Perhaps the PS5 could borrow the removal of this limit from AMD "NextGen".

But it'll work out better if I set my expectations that it does not ;)
 
This is true. I do often point to the PS4 borrowing 8 ACEs from a then-future GPU product, and the PS4 Pro borrowing Vega texture compression the same way.

Perhaps the PS5 could borrow the removal of this limit from AMD "NextGen".

But it'll work out better if I set my expectations that it does not ;)
Pro also "borrowed" FP16 from Vega - whether that makes a difference or not really depends on the implementation (and no, I'm not claiming it'll give you 2xTFlops - it never will, but it may help in some instances, others not at all, really depends on the engine).

I think the interesting question is, will the ID buffer make it to AMD mainstream products? Because personally I'm of the opinion that a good checkerboarding implementation (there are so many, some of which are very much inferior) is a worthwhile tradeoff compared to trying to reach native resolution or using dynamic resolution. That's from a typical TV viewing distance, though; would it be worth it on PCs too, or would the artifacts be too visible?

Edit: and without the ID buffer, the overhead of checkerboarding might not be worth it.
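For anyone unfamiliar with checkerboarding: each frame shades only half the pixels in a checkerboard pattern, alternating halves between frames, and the missing half is reconstructed (which is where the ID buffer helps, by telling the reconstruction which object each sample belongs to). A toy sketch of the sampling pattern, purely illustrative:

```python
# Toy sketch of checkerboard rendering's sampling pattern (illustrative
# only; real implementations work on 2x2 pixel quads and reconstruct the
# missing samples using motion vectors, with the ID buffer resolving
# object edges).

WIDTH, HEIGHT = 8, 4  # toy framebuffer

def shaded_this_frame(x, y, frame):
    # Alternate which half of the checkerboard gets shaded each frame.
    return (x + y + frame) % 2 == 0

for frame in (0, 1):
    count = sum(shaded_this_frame(x, y, frame)
                for y in range(HEIGHT) for x in range(WIDTH))
    print(f"frame {frame}: {count}/{WIDTH * HEIGHT} pixels shaded")
```

Each frame shades exactly half the pixels; the quality of reconstructing the other half is what separates the good implementations from the bad ones.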
 
Pro also "borrowed" FP16 from Vega - whether that makes a difference or not really depends on the implementation (and no, I'm not claiming it'll give you 2xTFlops - it never will, but it may help in some instances, others not at all, really depends on the engine).

I think the interesting question is, will the ID buffer make it to AMD mainstream products? Because personally I'm of the opinion that a good checkerboarding implementation (there are so many, some of which are very much inferior) is a worthwhile tradeoff compared to trying to reach native resolution or using dynamic resolution. That's from a typical TV viewing distance, though; would it be worth it on PCs too, or would the artifacts be too visible?

Edit: and without the ID buffer, the overhead of checkerboarding might not be worth it.
Great questions.
 
AMD focusing on mid-range GPUs and ignoring the high end altogether makes perfect sense; that's how they scored a home run against Fermi with the great 5xxx series.

Mm, sanity check:

Die size
8 core Ryzen is 213mm^2
Vega 64 is 484mm^2

All that on 14nm. So Vega 64 + Ryzen are very close to 700mm^2
Initial DUV 7nm promises 50% area reduction, so, 350mm^2,
Verdict: realistic

Power consumption
8 core Ryzen - 95W (not really, but for our purposes)
Vega 64 power consumption:
balanced - 284W
power-saver - 194W (more realistic for console)

So, close to 300W total on 14nm.
Initial DUV 7nm (as well as the 2nd-gen process in 2019, which mainly brings higher yields) promises a ~60% power reduction, so roughly 120W
Verdict: realistic
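Spelling that sanity check out as arithmetic (note the 50%-area and 60%-power figures are first-gen DUV 7nm foundry marketing claims, not measurements):

```python
# The die-size and power sanity check, as explicit arithmetic.
# All 14nm inputs are public figures; the 7nm scaling factors are
# foundry claims, so treat the outputs as rough estimates.

ryzen_mm2, vega64_mm2 = 213, 484   # 14nm die sizes
ryzen_w  = 95                      # nominal Ryzen TDP (generous)
vega64_w = 194                     # Vega 64 in power-saver mode

total_mm2_14nm = ryzen_mm2 + vega64_mm2
total_w_14nm   = ryzen_w + vega64_w

area_7nm  = total_mm2_14nm * 0.5   # claimed 50% area reduction
power_7nm = total_w_14nm * 0.4     # claimed 60% power reduction

print(f"14nm: {total_mm2_14nm} mm^2, {total_w_14nm} W")
print(f"7nm estimate: ~{area_7nm:.1f} mm^2, ~{power_7nm:.1f} W")
```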
 
I wouldn't take Fudzilla too seriously, they have been wrong before on so many things.
As noted, however, the four-shader-engine limit still exists in GCN, and Navi does not appear to move past GCN as a whole (at earliest that happens with whatever "nextgen" is), so it's still reasonable to think along the lines of a shrunken Vega 64 with whatever wins they can score within four SEs. If Sony or Microsoft customize away this limit, great.
 
As noted, however, the four-shader-engine limit still exists in GCN, and Navi does not appear to move past GCN as a whole (at earliest that happens with whatever "nextgen" is), so it's still reasonable to think along the lines of a shrunken Vega 64 with whatever wins they can score within four SEs. If Sony or Microsoft customize away this limit, great.
Where is this from, though? I've been following Navi since the earliest I can remember, and only recently did I see Digital Foundry claim that Navi is just GCN.
But nowhere can I find anything that backs that up. As far as things stand, and as far as I know (lately I haven't been following Navi much), it will have next-gen memory (I reckon that's GDDR6 or HBM3).
And it can be found in a Linux driver from AMD, but beyond that I haven't come across anything that remotely indicates it will be GCN-based at all (it makes sense in a lot of ways, but is it confirmed?).
They should be able to add CUs; MS did it with the Xbox One X (based on Polaris, like the 36-CU RX 480/580): it has 40+4 CUs, 4 disabled (40 × 64 = 2560 shaders × 2 ops × 1172 MHz ≈ 6 TF).
It might be cheaper to use a more capable GPU and disable compute units instead of adding them, at least that's how it has always been done.
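The peak-FLOPS formula behind that Xbox One X figure, written out (CU counts and clocks here are the publicly quoted ones):

```python
# Peak FP32 TFLOPS for a GCN part: CUs x 64 shaders/CU x 2 ops/clock
# x clock. Paper numbers only; the Xbox One X die physically has 44 CUs
# with 4 disabled for yield.

def gcn_tflops(cus, clock_mhz):
    shaders = cus * 64
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(f"Xbox One X: {gcn_tflops(40, 1172):.2f} TF")  # prints 6.00
print(f"PS4:        {gcn_tflops(18, 800):.2f} TF")   # prints 1.84
```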
 
I do wonder if the lack of Navi info is connected to the PS5. Could it be almost exactly the PS5's chip (one that will end up as a PC GPU later), so that AMD revealing any info would give the game away?
 
Where is this from, though? I've been following Navi since the earliest I can remember, and only recently did I see Digital Foundry claim that Navi is just GCN.
But nowhere can I find anything that backs that up. As far as things stand, and as far as I know (lately I haven't been following Navi much), it will have next-gen memory (I reckon that's GDDR6 or HBM3).
And it can be found in a Linux driver from AMD, but beyond that I haven't come across anything that remotely indicates it will be GCN-based at all (it makes sense in a lot of ways, but is it confirmed?).
They should be able to add CUs; MS did it with the Xbox One X (based on Polaris, like the 36-CU RX 480/580): it has 40+4 CUs, 4 disabled.
More reading between the lines, tbh. They've been on GCN since 2012, and you'd expect a substantial move off it to feature in their very extended hype cycles. Right now, all they seem to be promising for Navi is 7nm, dies connected via Infinity Fabric, and whatever "next-gen memory" is. The architecture actually named "next gen" (sigh) is the earliest point at which most people expect a true move off GCN.


"Performance improvements", scalability (multi-die), faster memory, "focused execution", not quite screaming a ground up move off of GCN, earliest expectation seems to be "next-gen" after Navi.

AMD working on GCN successor after Navi
https://hardforum.com/threads/amd-working-on-next-gen-gpu-after-navi-for-2020-2021.1954109/
"Navi Might Be Final GCN GPU from AMD"
https://www.overclockersclub.com/news/41365/

I do wonder if the lack of Navi info is connected to the PS5. Could it be almost exactly the PS5's chip (one that will end up as a PC GPU later), so that AMD revealing any info would give the game away?
The other possibility -

"Navi may be the last GCN architecture as RTG is looking forward to a non-GCN future.
Work on Navi has been scaled back as RTG plans a brand new architecture or at least drastically different enough to be considered non-GCN.
A major goal is to significantly reduce power consumption.
Of course, if this new non-GCN is not released on schedule, we may be looking at some stopgap GCN architectures."

" development for this next architecture began prior to Raja Koduri's departure from the Radeon Technologies Group, but development has been increasing enough to produce more chatter under RTG's new leadership. Everything is speculation currently, but it seems AMD's intent is to make the leap to the new macro architecture as significant a performance jump as it was from TeraScale to GCN, or greater. Navi-based GPUs are expected to launch in 2019, likely early in the year, putting this GCN-successor in 2020 or 2021, assuming everything goes as planned and hoped. "

Additional features scaled down from Navi to focus on NextGen being a larger leap from all GCN architectures. Navi gets multi-die, faster memory, and 7nm for the meantime.


Now, it may be Sony takes us all by surprise by not using 7nm chips and releasing PS5 way before anyone expected. But while they're certainly capable of some braindead business decisions (this has been proven time and again), surely they're not that stupid?
Once again in Cerny we trust, he maintained incredibly levelheaded decisions in creating the PS4s hardware.
 
I think that's reading too much between the lines, IMHO, so let's leave it at agree to disagree.
These roadmaps, AMD's conference calls, and anything out of them about unreleased products are intentionally vague for a reason.
We'll see what's what when the time comes. I'm neither optimistic nor remotely as excited as I used to be about gaming; must be old age.
 
I do wonder if the lack of Navi info is connected to the PS5. Could it be almost exactly the PS5's chip (one that will end up as a PC GPU later), so that AMD revealing any info would give the game away?
I think that's highly unlikely. Consoles require huge production volumes. And not only huge production volumes, but good production yields as well. If the roadmap posted above is correct (and no reason to believe it wouldn't be) we'll initially see some 7nm "Navi" chips sometime in 2019. That's way too early for the type of mass production consoles require. And besides, surely dedicated GPUs will come before APUs.

Now, it may be Sony takes us all by surprise by not using 7nm chips and releasing PS5 way before anyone expected. But while they're certainly capable of some braindead business decisions (this has been proven time and again), surely they're not that stupid?
 
I think that's highly unlikely. Consoles require huge production volumes. And not only huge production volumes, but good production yields as well. If the roadmap posted above is correct (and no reason to believe it wouldn't be) we'll initially see some 7nm "Navi" chips sometime in 2019. That's way too early for the type of mass production consoles require. And besides, surely dedicated GPUs will come before APUs.

Now, it may be Sony takes us all by surprise by not using 7nm chips and releasing PS5 way before anyone expected. But while they're certainly capable of some braindead business decisions (this has been proven time and again), surely they're not that stupid?
Actually I didn't word my post very well. Think more along the lines of PS4 Pro launch (albeit at lower volume) in late 2016. It would have started manufacturing at the same time/before PC Polaris.

Who knows, maybe Sony would be happy with a more gentle transition with PS5 and PS4 Pro launch window numbers (2-4 million?) would suffice?
 
AMD focusing on mid-range GPUs and ignoring the high end altogether makes perfect sense; that's how they scored a home run against Fermi with the great 5xxx series.

Mm, sanity check:

Die size
8 core Ryzen is 213mm^2
Vega 64 is 484mm^2

All that on 14nm. So Vega 64 + Ryzen are very close to 700mm^2
Initial DUV 7nm promises 50% area reduction, so, 350mm^2,
Verdict: realistic

Power consumption
8 core Ryzen - 95W (not really, but for our purposes)
Vega 64 power consumption:
balanced - 284W
power-saver - 194W (more realistic for console)

So, close to 300W total on 14nm.
Initial DUV 7nm (as well as the 2nd-gen process in 2019, which mainly brings higher yields) promises a ~60% power reduction, so roughly 120W
Verdict: realistic
This is literally what I've been thinking from the beginning. The current Ryzen APUs literally have a Vega core married to the CPU, just with far fewer compute units.

So at 7nm, Zen 2/Navi could see AMD's stock APU approach married to a "Navi 64" and still come in well over 12 TFLOPS at a $399 price tag.
 
Actually I didn't word my post very well. Think more along the lines of PS4 Pro launch (albeit at lower volume) in late 2016. It would have started manufacturing at the same time/before PC Polaris.

Who knows, maybe Sony would be happy with a more gentle transition with PS5 and PS4 Pro launch window numbers (2-4 million?) would suffice?
I get your point, kind of - but the Pro was/is an enthusiast device. PS5 will target the mass market again (I'm assuming), they'll need to have the capacity to sell to a much larger audience. A Cartman-esque marketing campaign along the lines of "here's PS5, you're not getting one!" may work for a little while, but very soon Sony will have to deliver.
 
I get your point, kind of - but the Pro was/is an enthusiast device. PS5 will target the mass market again (I'm assuming), they'll need to have the capacity to sell to a much larger audience. A Cartman-esque marketing campaign along the lines of "here's PS5, you're not getting one!" may work for a little while, but very soon Sony will have to deliver.
Oh I think it is more likely for PS5 to be a mass market device. Just wondering if Sony felt the need to get it out in late 2019 in lower numbers, they could do it. Being unavailable anywhere didn't hurt Switch!
 
Oh I think it is more likely for PS5 to be a mass market device. Just wondering if Sony felt the need to get it out in late 2019 in lower numbers, they could do it. Being unavailable anywhere didn't hurt Switch!
Again, I get your point :) I was very much in the market for a new console when PS4 was released, but didn't preorder - and man, was it hard to find one. It kind of built up the hype too, despite not having at the time any games I'd be definitely interested in. March 2014 I was able to get it (I suppose ordering from abroad would have been an option too, but I didn't want the hassle of a possible dud/guarantee). But if I would have had to wait an entire year for it (assuming late-2019 initial release, late-2020 full release for PS5) I might very well have given up and got something else.

(Just a side note, it's odd reading about the Switch shortages. Here in Finland, there may have been some at launch (didn't have the budget or desire for it at the time), but after that it has been always available at my common retailers. When I eventually decided to get one last November, it was a matter of placing an order, not figuring out where to get one. It seems to me that either Nintendo has some really odd supply issues, or the Switch is wildly popular in some regions/countries and less so elsewhere.)
 
I feel like we may be starting to run away with it a tad, and it's best to reel expectations in a bit so we're not let down. Some are reporting Navi will be an RX 580 replacement rather than a Vega 64 replacement:

https://www.fudzilla.com/news/graphics/46038-amd-navi-is-not-a-high-end-card
https://www.reddit.com/r/hardware/comments/8bqhgh
Navi 7nm, the 2019 chip, will not be a high-end GPU; it will be a quite powerful performance/mainstream chip.

Think of it as the Radeon RX 580 / 480 replacement. It will be small, and is likely to perform as well as the 14nm Vega that shipped last year. In Nvidia performance terms, Navi should perform close to the GeForce GTX 1080, which is quite good for a mainstream part, but probably on par with the mainstream part planned after the high-end part.

Note there is no direct source, so this rumor should be scrutinized more than a typical rumor.

However, this appears reasonable.

  • Navi will likely debut on a 1st gen 7nm process without the yield benefits of EUV (officially we only know that it's on some kind of 7nm process). Therefore, small chips are better if you're only selling to gamers (7nm Vega 20 has pro customers).
  • The existing mid-range (Polaris 10 & 11) will be 3 years old in 2019 and desperately needs a replacement.
  • AMD has previously dealt with a mid-range-only gaming lineup (see 2016), so it's not entirely unexpected that they'll do it again.
  • Vega still has a 4 shader engine limit, which limits the GPU to 64CUs. That's basically Vega 10. Therefore, Navi 10 can only be a "shrunken" Vega 10 unless this limit is removed. However, this limit has existed since GCN debuted, so I doubt it's going anywhere until post-Navi.
  • On a brief aside, the number of compute engines has been an unexpectedly interesting point of discussion over the years. Back in 2013 we learned that the then-current iteration of GCN had a maximum compute engine count of 4, which AMD has stuck to ever since, including the new Vega 10. Which in turn has fostered discussions about scalability in AMD’s designs, and compute/texture-to-ROP ratios.
    Talking to AMD’s engineers about the matter, they haven’t taken any steps with Vega to change this. They have made it clear that 4 compute engines is not a fundamental limitation – they know how to build a design with more engines – however to do so would require additional work.


So think Vega 64 performance at RX 580 power, which is still a proper generational leap in performance per watt. But it's a mid-range part like the 580, and, going by the PS4, likely tamped down a little further to fit an APU's power envelope shared with the CPU.

The Radeon 580 is 6.1 TFLOPS and the Vega 64 is 12.5, so expecting more than a smidgen above the latter is getting optimistic imo. Rounding up to 13, we're about 7x up from the base PS4 (very, very simply, barring any memory, ROP, TMU etc. bottlenecks), which ain't bad in my books. 15+ TFLOPS is a stretch as far as I see it.
It does not matter if the PS5 is 6 or 8 times more powerful than the base PS4; that will always be insufficient to generate next-generation graphics. A Vega 64 with 12.66 TFLOPS cannot run current-generation games at 4K 60fps on ultra settings, so what would make us think a console with 10, 11 or 12.5 TFLOPS could generate next-generation graphics at 4K 30fps, and some of it at 60? I'm not expecting a big generational jump from this generation to the next unless future APIs are very advanced and console-side optimizations produce a breakthrough. Otherwise we will only see a performance jump and the consolidation of 4K across all games. CPUs will enable advanced physics and more simulation, but nothing more than that, which is sad. On the other hand, we could play games at 4K 60fps... who knows, maybe 60fps becomes the standard of the future.
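One way to put numbers on the 4K/60 skepticism: going from 1080p30 to 4K60 is an 8x jump in raw pixel throughput before any increase in per-pixel quality, which is roughly the entire flops gap being discussed. A trivial sketch:

```python
# Rough pixel-throughput arithmetic. Pixels per second is a crude proxy;
# shading cost per pixel also rises each generation, which is the real
# point being argued above.

def pixels_per_second(w, h, fps):
    return w * h * fps

p1080_30 = pixels_per_second(1920, 1080, 30)
p2160_60 = pixels_per_second(3840, 2160, 60)

print(f"4K60 / 1080p30 = {p2160_60 / p1080_30:.0f}x")  # prints 8x
```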
 
It does not matter if the PS5 is 6 or 8 times more powerful than the base PS4; that will always be insufficient to generate next-generation graphics. A Vega 64 with 12.66 TFLOPS cannot run current-generation games at 4K 60fps on ultra settings, so what would make us think a console with 10, 11 or 12.5 TFLOPS could generate next-generation graphics at 4K 30fps, and some of it at 60? I'm not expecting a big generational jump from this generation to the next unless future APIs are very advanced and console-side optimizations produce a breakthrough. Otherwise we will only see a performance jump and the consolidation of 4K across all games. CPUs will enable advanced physics and more simulation, but nothing more than that, which is sad. On the other hand, we could play games at 4K 60fps... who knows, maybe 60fps becomes the standard of the future.

I'm just looking at it from the perspective of where hardware actually is and will be, not picking an expected end result and working backwards to whatever hardware would get us there. So long as Navi is GCN (and I see zero indication otherwise; "next gen" sounds too post-GCN to be a coincidence), the cap on shader engines points me to something that brings Vega 64's performance down to a lower power draw, with some IPC wins gained via the added transistor budget.

Perhaps those little gains could push it above a Vega 64 regardless of the paper GFLOPS measure, but the bridge between 12.5 and 15 is a 20% IPC gain... Again, I'm just keeping my expectations sane so papa Cerny can deliver. Richard at Digital Foundry also agreed 15 was on the stretching end of speculation.

https://www.eurogamer.net/articles/...n-a-potential-ps5-deliver-a-generational-leap

Fwiw, looking at the hardware, a lot of us doubted the XBO X would pull off as many 4K titles as it did; knowing a console's hardware in and out as a tuning target helps. As for 60fps, popular speculation is that Jaguar's poor per-core performance limits many ambitions there anyway, and even aside from that, if you give developers any amount of power, some will always opt for 30fps and better visuals regardless. Zen+ should be a big enabler, though, should they choose.
 
Oh I think it is more likely for PS5 to be a mass market device. Just wondering if Sony felt the need to get it out in late 2019 in lower numbers, they could do it. Being unavailable anywhere didn't hurt Switch!
Nintendo is only half competing with the big boys. Sony can't think like that.

It does not matter if the PS5 is 6 or 8 times more powerful than the base PS4; that will always be insufficient to generate next-generation graphics. A Vega 64 with 12.66 TFLOPS cannot run current-generation games at 4K 60fps on ultra settings, so what would make us think a console with 10, 11 or 12.5 TFLOPS could generate next-generation graphics at 4K 30fps, and some of it at 60? I'm not expecting a big generational jump from this generation to the next unless future APIs are very advanced and console-side optimizations produce a breakthrough. Otherwise we will only see a performance jump and the consolidation of 4K across all games. CPUs will enable advanced physics and more simulation, but nothing more than that, which is sad. On the other hand, we could play games at 4K 60fps... who knows, maybe 60fps becomes the standard of the future.
Nah... sorry LittleAngryDog, but videogame programming on consoles doesn't work that way. A 12 TFLOPS console will simply be able to do more than a 12 TFLOPS PC. That's why nobody should expect all games next gen to target 4K 60fps; it's just a silly target.
 
Ultra Settings are elusive, mostly hardly noticeable and put in there just for the sake of having ultra settings.

I actually expect a 10TF-ish chip (lower clock and possibly fewer CUs than Vega 64)
We can expect high poly count!
That would change the whole scenario!

I do not think we need better textures in the next generation. I just think it would require a higher polygonal density to improve the modeling of the characters.
 
Ultra Settings are elusive, mostly hardly noticeable and put in there just for the sake of having ultra settings.

I actually expect a 10TF-ish chip (lower clock and possibly fewer CUs than Vega 64)
If Navi is not high end, as rumors suggest, then we can expect the 680 to be ~Vega 56 in terms of performance but with much better perf/watt. That GPU is a very likely candidate for the PS5 (as Polaris sits in the Pro/X), so the console will be ~12TF IMO; people expecting 15TF+ are delusional :D
 
If Navi is not high end, as rumors suggest, then we can expect the 680 to be ~Vega 56 in terms of performance but with much better perf/watt. That GPU is a very likely candidate for the PS5 (as Polaris sits in the Pro/X), so the console will be ~12TF IMO; people expecting 15TF+ are delusional :D
Do you think these graphics are possible with 12 TFLOPS from AMD? Remember that 12 TFLOPS from AMD is not the same as 12 TFLOPS from Nvidia; for some reason nobody on NeoGAF could explain, Nvidia gets more performance per teraflop than AMD. If I'm not mistaken they used a GTX 1080 Ti with 11 TFLOPS for these tech demos.

 
Do you think these graphics are possible with 12 TFLOPS from AMD? Remember that 12 TFLOPS from AMD is not the same as 12 TFLOPS from Nvidia; for some reason nobody on NeoGAF could explain, Nvidia gets more performance per teraflop than AMD. If I'm not mistaken they used a GTX 1080 Ti with 11 TFLOPS for these tech demos.
Some games run on AMD GPUs as well as the TF numbers would suggest, Wolfenstein for example. The question is: are the GPUs showing their true capabilities, or is Vulkan just heavily AMD-optimized (it still is)?

If this runs on a 1080 Ti then it could easily run on Vega, just with some things scaled down (resolution, for example), but I don't think we will see this tech-demo-quality graphics on PS5, at least not in the first ~3 years. I'm sure ND, SSM, GG and SP will do their magic ;)
 
Higher frame rate will never be a priority on console. Console developers will always choose graphics vs frame rate. ;)
It's not just graphics. Or at least, I think that word tends to oversimplify the situation. The quality of physics, water, fire, explosions, plant life... the number of explosions on screen at a time, the amount of destructibility in the environment, draw distance, and the number of enemies on screen...

There are a lot of things that are potentially worth choosing over 60fps in certain games. Personally, I'd rather have a game that pushed the boundaries in all of the above examples at 30fps than a game that was cut back in order to hit 60fps. I realize there are plenty of people who would prefer 60fps, and that's fine.

But reducing it to this concept of graphics is just a bit misleading. There's a lot going on in many games. You might have 20 soldiers on screen and three tanks. What happens if everyone throws a grenade at once? Do you optimize for worst case scenario ensuring that you can always hit that 60fps, or do you allow framerate drops when something obscure happens? Or do you prevent it from happening at all by reducing the number of enemies, or changing the AI so fewer grenades are thrown?
 
People expecting 60fps as standard on consoles are going to be disappointed (again...); 30fps is the console standard, and I'm fine with that as long as it's stable. VRR may change things in the future; I imagine many titles will have an unlocked-framerate option for HDMI 2.1 users (like the performance options some games have now).
 
I think that's reading too much between the lines, IMHO, so let's leave it at agree to disagree.
These roadmaps, AMD's conference calls, and anything out of them about unreleased products are intentionally vague for a reason.
We'll see what's what when the time comes. I'm neither optimistic nor remotely as excited as I used to be about gaming; must be old age.
I'm always excited about these new gaming boxes. Can't help it.
 
Nvidia, for some reason no one on NeoGAF could explain, gets more performance per teraflop than AMD. If I'm not mistaken they used a GTX 1080 Ti with 11 TFLOPS for these tech demos.
Pretty easy to tl;dr explain actually.

GFLOPS are a paper calculation: shader cores × 2 operations per core per clock (perfect conditions, which is why measured throughput is always lower) × clock speed in GHz.

Nvidia uses larger cores that each do more, AMD uses more numerous smaller cores, as differing design philosophies.

Net result: Nvidia will have fewer GFLOPS on paper because they have fewer cores, even if those cores do equal or more work than an equivalent AMD card's. In the end, performance per watt and per mm² is king regardless of design philosophy; Nvidia is winning there now, but that doesn't necessarily mean the more-cores design for GPUs is inherently worse.


Navi may eke out more instructions per clock, but nothing so far indicates a move off of GCN, so while it sticks to the GCN design philosophy I'm not expecting anything like Nvidia's performance-per-GFLOPS numbers.
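Plugging real cards into that paper formula makes the point concrete (core counts and boost clocks below are the publicly listed specs):

```python
# The "paper GFLOPS" calculation from the post: shader cores x 2 FMA ops
# per clock x boost clock in GHz. Vega 64 lists more cores and more paper
# flops than a GTX 1080 Ti, yet typically benchmarks at or behind it;
# that gap is the per-flop efficiency difference described above.

def paper_gflops(cores, boost_ghz):
    return cores * 2 * boost_ghz

print(f"GTX 1080 Ti: {paper_gflops(3584, 1.582):.0f} GFLOPS")  # ~11340
print(f"Vega 64:     {paper_gflops(4096, 1.546):.0f} GFLOPS")  # ~12665
```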
 
I personally think 2020 at the earliest would be best. The ps4 sales have yet to start decline. Last year was there best ever year in sales.
This... plus, sending out devkits just means developers will now start thinking about making future games on that platform and under those specs. It'll take another two years for them to actually have a game ready at launch, or, at minimum, a year to port an existing game over.

Personally, I think Sony would be insane to bring out the PS5 any sooner than November 2020.
 
It's not just graphics. Or at least, I think that word tends to oversimplify the situation. The quality of physics, water, fire, explosions, plant life... the number of explosions on screen at a time, the amount of destructibility in the environment, draw distance, and the number of enemies on screen...

There are a lot of things that are potentially worth choosing over 60fps in certain games. Personally, I'd rather have a game that pushed the boundaries in all of the above examples at 30fps than a game that was cut back in order to hit 60fps. I realize there are plenty of people who would prefer 60fps, and that's fine.

But reducing it to this concept of graphics is just a bit misleading. There's a lot going on in many games. You might have 20 soldiers on screen and three tanks. What happens if everyone throws a grenade at once? Do you optimize for worst case scenario ensuring that you can always hit that 60fps, or do you allow framerate drops when something obscure happens? Or do you prevent it from happening at all by reducing the number of enemies, or changing the AI so fewer grenades are thrown?
I completely agree with you; I was summarizing the whole thing behind the word "graphics", but of course there is much more to it than that. :D
 
Aren't AMDs more efficient at lower wattage? Not to mention offering the best price/performance ratio which would be the main focus of a console.
 
So I wanted to create a hypothetical scenario and see which one would be more interesting...
I'm basing it on the die size discussion here:


AMD focusing on the mid-range GPU market and ignoring the high end altogether makes perfect sense; that's how they scored a home run against Fermi with the great 5xxx series.

Mm, sanity check:

Die size
8 core Ryzen is 213mm^2
Vega 64 is 484mm^2

All that on 14nm, so Vega 64 + Ryzen together come to very nearly 700mm^2.
Initial DUV 7nm promises a 50% area reduction, so roughly 350mm^2.
Verdict: realistic

Power consumption
8 core Ryzen - 95W (not really, but for our purposes)
Vega 64 power consumption:
balanced - 284W
power-saver - 194W (more realistic for console)

So, close to 300W total on 14nm.
Initial DUV 7nm (as well as 2nd gen, which is 2019 and which only brings higher yields) promises a 60% power reduction, so roughly 120W.
Verdict: realistic
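Writing that sanity check out as a quick script (the 14nm figures are the approximate public ones used above; the 50% area and 60% power reductions are first-generation 7nm marketing claims, not measurements):

```python
# 14nm building blocks, approximate public figures
ryzen_8c_mm2, ryzen_8c_w = 213, 95    # 8-core Ryzen die area / nominal TDP
vega64_mm2, vega64_w = 484, 194       # Vega 64 die area / power-saver draw (more console-like)

area_14nm = ryzen_8c_mm2 + vega64_mm2     # combined die area on 14nm
power_14nm = ryzen_8c_w + vega64_w        # combined power draw on 14nm

# First-gen 7nm marketing claims: ~50% area reduction, ~60% power reduction
area_7nm = area_14nm * 0.5
power_7nm = power_14nm * 0.4

print(f"~{area_7nm:.0f} mm^2, ~{power_7nm:.0f} W on 7nm")
```

By this math a CPU+GPU combination of that class shrinks to roughly 350mm^2 and 115-120W, which is comfortably inside a console power budget, supporting the "realistic" verdict.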

Right now the Ryzen APU is about 209mm^2 and 65W

Ryzen 2 APU built on Zen 2/Navi could theoretically give the same level of performance in half the power draw and size.

If anything below 180W and 350mm^2 is fair game...
Let's say the move to 7nm allows them to cram in 22 compute units versus the current 11.

How feasible would a dual Ryzen 2 APU configuration with maybe 20 additional compute units be?
I'd even argue that if 7nm allowed higher clocks and double the compute units of the current APU, then just two Ryzen 2 APUs would make it a monster.
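To put a rough number on that dual-APU idea: each GCN compute unit contains 64 shader ALUs, and the current Raven Ridge APU tops out at 11 CUs. The GPU clock below is purely an assumption for illustration:

```python
# Paper TFLOPS for a hypothetical doubled-CU APU
SHADERS_PER_CU = 64   # fixed by the GCN architecture
cus = 22              # hypothetical: double today's 11-CU Raven Ridge
gpu_ghz = 1.25        # assumed console-style GPU clock, not a leaked spec

tflops_per_apu = cus * SHADERS_PER_CU * 2 * gpu_ghz / 1000
print(round(tflops_per_apu, 2), round(2 * tflops_per_apu, 2))  # 3.52 and 7.04
```

So by paper math, even two doubled-up APUs land around 7 TFLOPS at that clock, well short of the 12-15TF figures floated earlier in the thread; a dual-APU route would still need substantially higher clocks or CU counts.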
 
Last edited:
Aren't AMDs more efficient at lower wattage? Not to mention offering the best price/performance ratio which would be the main focus of a console.
"More" is questionable, but at low wattage the mobile Radeon Pros show similarish perf/watt to Pascal, sure. That's in the 30W range, though, which a chip like this would be well above. The more you try to push the clocks, the more AMD's design suffers per watt. DF's speculation toward the stretch 15Tflop range would definitely mean pushing clocks past AMD's sweet spot.


And sure, AMD's willingness to accept lower margins would bode well for better performance per dollar here; I was just responding to the question of why everyone says Nvidia does more per flop than AMD.
 
Last edited:
A 400 dollar machine with 12Tflops of power won't release before 2020.
Considering the Xbox One X and PS4 Pro would still be solid platforms, I don't see ANYTHING "new" priced below 499 USD at day one:

Xbox One S / PS4 Slim @ 149/199 USD as the entry point, bundled with a game
Xbox One X / PS4 Pro @ 349/299 USD as the mainstream console, bundled with a game
Xbox Next / PS5 @ 499/499 USD as the new device
 
I don't see why not? A Nov 2019 PS5 would be similar to the PS4 releasing in Nov 2013: 1.84TF of semi-custom AMD (mid-range plus?) then, versus 12TF of semi-custom AMD that would be roughly mid-range plus by late 2019.
I think there is a desire to see the Xbox One X and PS4 Pro as machines where the console maker pulled out all the stops, went with the bestest and most expensive system they could build, and said "screw profit margins"... while these intentionally stop-gap consoles are quite the opposite attitude, IMHO.
 
Last edited: