
Pachter: PS5 to be a half step, release in 2019 with PS4 BC

Does anyone else think it would be a case of games still running at 30fps on the PS5 or Xbox One Two? Developers will still push graphics over frame rate.

Seriously though, with motion blur getting as good as it is already, I don't see 30fps being a major detriment anymore for the majority of games. Sixty will always be better and ideal, but we've moved past the "unplayable slideshow" hyperbolic nonsense.
 
Seriously though, with motion blur getting as good as it is already, I don't see 30fps being a major detriment anymore for the majority of games. Sixty will always be better and ideal, but we've moved past the "unplayable slideshow" hyperbolic nonsense.

We were never hindered by it in the first place. It's only on the internet among raging fanboys and die-hards that this kind of hyperbolic rhetoric is even a thing.

There is nothing and has never been anything wrong with 30fps games.
 

Theonik

Member
Sorry, I'm not sure how this relates to what I said.
When Sony was considering this they were aware they had the better GPU but didn't know they had a worse CPU.

Seriously though, with motion blur getting as good as it is already, I don't see 30fps being a major detriment anymore for the majority of games. Sixty will always be better and ideal, but we've moved past the "unplayable slideshow" hyperbolic nonsense.
It shouldn't look like a slideshow unless you are dropping frames like mad. 60fps will always look and play better though. Hell 60 isn't even enough.
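For context, the frame-time budgets being argued about here are simple arithmetic: the budget per frame is just 1000 ms divided by the target frame rate, so moving from 30fps to 60fps halves the time available to render each frame.

```python
# Frame-time budget per refresh target: budget_ms = 1000 / fps.
# Going from 30fps to 60fps halves the time available to render a frame.
for fps in (30, 60, 120):
    print(f"{fps}fps -> {1000 / fps:.2f} ms per frame")
```

That 33.33 ms vs 16.67 ms gap is the real trade-off developers make when they "push graphics over frame rate."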
 
It's also important to remember that a) it's not like current systems or even future ones have huge surpluses of GPU power, and b) that approach is faster but much less efficient overall; there are a lot of considerations when implementing such systems in an actual game.
You assume that it's possible to saturate the GPU with graphics (vertex/pixel shader) workloads alone and that there's not much juice left for compute shaders (physics, post-processing effects, etc.).

Apparently this isn't the case for Radeon GCN GPUs.

PS: How exactly is it "less efficient" when GPUs can offer a 100x speedup compared to the CPU?
 

MilkyJoe

Member
Probably the reactionary part, which can't be true considering they both had the same idea at the same time.
Which leads me to believe AMD did a nice sales pitch to get them both on board with the idea.

Correct.

Also him thinking I meant Scorpio is next gen when answering "why do people want PS5 so soon".
 
When Sony was considering this they were aware they had the better GPU but didn't know they had a worse CPU.

The difference is marginal at best, and that's not taking into account the added overhead the virtualization on XB1 imposes. In real games, even now we've barely seen a difference.

Regardless, my speculation was intended as an explanation to Cerny's comment about the "14:4" GPU CU split that he was quizzed on. The point being that assuming the XB1 was essentially a PS4 but with less GPU, the additional GPU on PS4 could be leveraged for compute, considering that developers would design for the lowest common denominator (i.e. XB1).

I don't think incremental increases in CPU clock-speed were relevant or known at the time the pastebin leaks came out mentioning the "14:4" ALU split in the PS4 specs. I certainly don't think it would have factored into Cerny's explanation either.
 

Shin

Banned
This is the low-end/cheapest card in AMD's stable ATM, or will end up being soon enough and certainly by the time PS5 launches.
It probably gives us an idea of what we can expect in terms of performance; Polaris will be phased out and Navi is unknown ATM.
https://videocardz.com/amd/radeon-500/radeon-rx-vega-xl this card should be around ~114W once they switch to 7nm (RX 480 TDP is 150W).
It's the 56CU version (the other two have 64CUs) running at 1600MHz; using GloFo's press release on 7nm vs 14nm gains, this card would be 16TF (in theory).

Basically there won't be any other card besides Vega and Navi for Sony to choose from, so it's going to be one of them; older parts are cheaper.
AMD could throw in some refinements they've made on Navi and deliver compelling price/performance for the platform holders without breaking the bank.
Just doing the raw math, if that's how it works: >40% performance gain over 14nm, so even downclocked to 1300MHz with 56CUs you'd end up at 13TF.
I don't know about you guys, but downclocking a GPU by 300MHz is really a lot; 13TF would be the minimum in that case if it's clocked higher than 1300MHz.

Navi should be more efficient (7nm EUV) but we don't know jack shit what that GPU will be; at least the prospect of a 12TF+ GPU is looking good IMO.
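The raw math above can be sketched explicitly. For a GCN-style part, theoretical FP32 throughput is CUs × 64 shaders per CU × 2 FLOPs per clock (fused multiply-add) × clock speed; the 1.4x factor is GloFo's claimed >40% 7nm gain applied naively, as the post does:

```python
# Theoretical FP32 throughput of a GCN-style GPU.
# TFLOPS = CUs * 64 shaders/CU * 2 FLOPs/clock (FMA) * clock(MHz) / 1e6
def tflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz / 1e6

vega_xl = tflops(56, 1600)
print(round(vega_xl, 1))                 # ~11.5 TF at 14nm clocks
print(round(vega_xl * 1.4, 1))           # ~16.1 TF with GloFo's claimed >40% gain
print(round(tflops(56, 1300) * 1.4, 1))  # ~13.0 TF if downclocked to 1300MHz
```

That reproduces both the ~16TF and ~13TF figures quoted above, so the "raw math" is internally consistent, whatever one thinks of applying a foundry marketing number directly to game performance.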
 
This is the low-end/cheapest card in AMD's stable ATM, or will end up being soon enough and certainly by the time PS5 launches.
It probably gives us an idea of what we can expect in terms of performance; Polaris will be phased out and Navi is unknown ATM.
https://videocardz.com/amd/radeon-500/radeon-rx-vega-xl this card should be around ~114W once they switch to 7nm (RX 480 TDP is 150W).
It's the 56CU version (the other two have 64CUs) running at 1600MHz; using GloFo's press release on 7nm vs 14nm gains, this card would be 16TF (in theory).
Basically there won't be any other card besides Vega and Navi for Sony to choose from, so it's going to be one of them; older parts are cheaper.
AMD could throw in some refinements they've made on Navi and deliver compelling price/performance for the platform holders without breaking the bank.
Just doing the raw math, if that's how it works: >40% performance gain over 14nm, so even downclocked to 1300MHz with 56CUs you'd end up at 13TF.
I don't know about you guys, but downclocking a GPU by 300MHz is really a lot; 13TF would be the minimum in that case if it's clocked higher than 1300MHz.
Navi should be more efficient (7nm EUV) but we don't know jack shit what that GPU will be; at least the prospect of a 10-12TF GPU is looking good.

How do you get >40% performance going from 14nm to 7nm with a GPU with the same number of GPU compute units and a lower clockspeed?

The performance gain going to 7nm comes from the fact that you'll have a larger CU count for the same silicon die area, i.e. smaller geometries = smaller transistors = more transistors / mm²
 

Shin

Banned
How do you get >40% performance going from 14nm to 7nm with a GPU with the same number of GPU compute units and a lower clockspeed?

The performance gain going to 7nm comes from the fact that you'll have a larger CU count for the same silicon die area, i.e. smaller geometries = smaller transistors = more transistors / mm²

From this:

[image: GloFo 7nm vs 14nm slide]


It's why I wrote "if that's how it works", since I can't think of what else they are hinting at.
If they increase the CU count (which would make sense since they save die space from the shrink) then you could still end up with 10+.
 
The point being that assuming the XB1 was essentially a PS4 but with less GPU, the additional GPU on PS4 could be leveraged for compute, considering that developers would design for the lowest common denominator (i.e. XB1).
It was the same thing in the previous gen CPUs: Xenon was a Cell without SPUs (and with more CPU cores/PPUs).

SPU programming was considered arcane/exotic back then and GPU Compute is still considered exotic/not worth the effort by some people...
 

Theonik

Member
You assume that it's possible to saturate the GPU with graphics (vertex/pixel shaders) workloads only and there's not much juice left for compute shaders (physics, post-processing effects etc).

Apparently this isn't the case for Radeon GCN GPUs.

ps: How exactly is it "less efficient" when GPUs can offer a 100x speedup compared to the CPU?
It's a balancing act. In practice, if you do things asynchronously you can do more when your rendering pipeline enters serial paths that would normally waste resources. (SMT works on a similar principle, to better utilise a CPU core in cases where a thread is not running at 100% efficiency.)
What I meant though is that parallelism can be less efficient while still being faster. You can run an operation on two cores and get some perf gains, but unless you gain 2x perf you are not running efficiently. There are of course many reasons why this happens.

The difference is marginal at best, and that's not taking into account the added overhead the vitalization on XB1 imposes. In real games, even now we've barely seen a difference.

Regardless, my speculation was intended as an explanation to Cerny's comment about the "14:4" GPU CU split that he was quizzed on. The point being that assuming the XB1 was essentially a PS4 but with less GPU, the additional GPU could be leveraged for compute, considering that developers would design for the lowest common denominator (i.e. XB1).

I don't think incremental increases in CPU clock-speed were relevant or known at the time the pastebin leaks came out mentioning the "14:4" ALU split on the PS4 specs.. I certainly don't think it would have factored into Cerny's explanation either.
It's a 10% gap and you can see it in practice in several games when they are CPU bound. The difference of course is that many games this gen are GPU bound after developers moved to optimising in that direction. Sony's recommendations are just suggestions. Developers chose to use the CUs differently and that benefited them all the same. It's not a surplus if you use it.

Correct.

Also him thinking I meant scorpio is next gen when answering "why do people want ps5 so soon".
So how is Scorpio relevant here? The only reason we are having this discussion is because someone asked Pachter a question. The two systems are intended to be competitors to one another, not an imaginary PS5. If MS had been first to market, Sony would be responding to MS's proposition and vice versa. First movers get to set the narrative irrespective of time of release. PS Move was in development before the Wii came out, yet people perceived it as a Wii knock-off. It's irrelevant.
 
What I meant though is that parallelism can be less efficient while still being faster. You can run an operation on two cores, get some perf gains but unless you gain 2x perf you are not running efficiently.
If by "efficiency" you mean linear scaling of performance, then that's just not possible. Amdahl's law is a thing.

Most people would agree that a 100x speedup is a lot more efficient than running on the CPU, regardless of the CPU uarch. You can also run vertex shaders on the CPU if you want, but nobody does that anymore. It used to be common during the 3DFX/pre-GeForce era. There is a constant trend of moving parallelizable tasks to the GPU for the last 15-20 years and this isn't gonna stop because of Ryzen or whatever comes up in the future.
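Amdahl's law in miniature, for anyone curious why linear scaling is impossible once any serial fraction exists:

```python
# Amdahl's law: overall speedup = 1 / (serial_fraction + parallel_fraction / N).
def amdahl_speedup(parallel_fraction, n_workers):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# A task that is 95% parallelizable never reaches 2x on two cores,
# and caps out near 20x no matter how many GPU lanes you throw at it.
print(round(amdahl_speedup(0.95, 2), 2))     # 1.9
print(round(amdahl_speedup(0.95, 2048), 1))  # 19.8
```

The numbers are illustrative, but they show both sides of the argument: two cores really don't give you 2x, yet the absolute speedup on thousands of lanes can still be enormous.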
 

Theonik

Member
If by "efficiency" you mean linear scaling of performance, then that's just not possible. Amdahl's law is a thing.

Most people would agree that a 100x speedup is a lot more efficient than running on the CPU, regardless of the CPU uarch. You can also run vertex shaders on the CPU if you want, but nobody does that anymore. It used to be common during the 3DFX/pre-GeForce era. There is a constant trend of moving parallelizable tasks to the GPU for the last 15-20 years and this isn't gonna stop because of Ryzen or whatever comes up in the future.
You are confusing performance with efficiency. Some tasks run much faster on a GPU because they are trivially parallel: colour values of one pixel don't depend on the one next to it, so they can run at close to 100% efficiency in parallel. That is not so of every task. You can get good results from running things in parallel greedily and then discarding incorrect predictions, though.
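A minimal illustration of the distinction being drawn here (the timings are made up): if efficiency means speedup divided by worker count, a task can be far faster on a GPU while each individual lane is poorly utilised.

```python
# Speedup vs parallel efficiency: efficiency = speedup / n_workers.
def speedup_and_efficiency(serial_ms, parallel_ms, n_workers):
    speedup = serial_ms / parallel_ms
    return speedup, speedup / n_workers

# Hypothetical: a 100 ms CPU task finishes in 2 ms across 100 GPU lanes.
s, e = speedup_and_efficiency(100.0, 2.0, 100)
print(s, e)  # 50.0 0.5 -> 50x faster, but only 50% parallel efficiency
```

Both posters are right on their own terms: the 50x is real performance, the 50% is real inefficiency.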
 
You are confusing performance with efficiency. Some tasks run much faster on a GPU because they are trivially parallel: colour values of one pixel don't depend on the one next to it, so they can run at close to 100% efficiency in parallel. That is not so of every task. You can get good results from running things in parallel greedily and then discarding incorrect predictions, though.
You still haven't explained why a 100x speedup is not deemed "efficient" enough. Efficiency increases performance.
 
Next consoles are going to be so beast if they have Zen CPUs.

Even like a 4 core Zen or a half speed 8 core Zen would run circles around the current consoles.

And it would be really interesting to see what the developers do with it, not just open world games or games with lots of destruction, but even SP games in terms of scale or AI and such, and density of physics objects or AI routines.

Would be super interesting. But to develop more complex things also costs more money, so I bet that transition would be slow, except maybe in terms of physics/destruction stuff.

Would love to have a FPS Red Faction again just saying.... THQ Nordic... reboot the series and show us what PS5 can really do :)
 

Lady Gaia

Member
From this:

[image: GloFo 7nm vs 14nm slide]

That's a ballpark estimate of what 7nm delivers relative to 14nm: either more performance at the same power (due to higher clockspeeds), or identical performance at a lower power requirement (same clockspeed).

On a GPU you'll generally opt to keep clockspeeds similar and increase the number of compute units. With a 60%+ reduction in power consumption for the same number of CUs, you could assume 2.5x the number of CUs at equivalent power, which is a much bigger win than the 1.4x from leaning into clockspeed gains.

So you get ~10.5TF using the same basic design. Additional design improvements could improve the situation further, though I expect Sony will trade some of the die area for larger CPU cores instead.
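One way to arrive at that ~10.5TF figure, assuming the PS4 Pro's 36 CUs at 911MHz as the baseline design (my guess at the numbers being used, not stated in the post):

```python
def tflops(cus, clock_mhz):
    # GCN: 64 shaders per CU, 2 FLOPs per clock (fused multiply-add)
    return cus * 64 * 2 * clock_mhz / 1e6

print(round(tflops(36, 911), 1))           # PS4 Pro baseline: ~4.2 TF
print(round(tflops(36 * 5 // 2, 911), 1))  # 2.5x the CUs, same clock: ~10.5 TF
```

Scaling CU count at constant clock lines up with the post's reasoning: the shrink's power savings are spent on width rather than frequency.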
 

THE:MILKMAN

Member
Saw this picture of the upcoming notebook APU AMD has (Raven Ridge?), and considering it is "only" rumoured to be a 4-core/8-thread Zen + 11CU Vega, it looks like it is already around 200mm²?

Even allowing for the 7nm area reduction it could get quite big at expected PS5 specs?

[image: Raven Ridge APU package photo]
 

Shin

Banned
Additional design improvements could improve the situation further, though I expect Sony will trade some of the die area for larger CPU cores instead.

Depends; it's possible that Zen 2 or Zen 3 cores could be smaller, no?
Even so, I'm not sure how small Jaguar cores are compared to other CPU line-ups. I guess we'll see when the APU makes its debut sometime this year or next.

Saw this picture of the upcoming Notebook APU AMD have (Raven Ridge?) and considering it is "only" rumoured to be a 4 core/8 thread Zen+11CU Vega it looks like it is already around 200mm^2?

Even allowing for the 7nm area reduction it could get quite big at expected PS5 specs?

[image: Raven Ridge APU package photo]
Isn't that running at 800MHz? At least I think it's 11CU @ 800MHz (SiSoft DB), a 1.12TF APU?
 

AmyS

Member
I mean, Sony does not *have* to stick with the exact die size and power consumption limits of the PS4 & PS4 Pro APUs for PS5, right? They could give PS5 a somewhat larger silicon/power consumption budget without going to the levels of the launch fat PS3.

What I don't want is the need for a PS5 Pro just 3 years after PS5 releases.
 

THE:MILKMAN

Member
Isn't that running at 800MHz? at least I think it's 11CU @ 800MHz (SiSoft DB), 1.12TF APU?

I believe that is part of the rumour, yes.

I mean, Sony does not *have* to stick with the exact die size and power consumption limits of the PS4 & PS4 Pro APUs for PS5, right? They could give PS5 a somewhat larger silicon/power consumption budget without going to the levels of the launch fat PS3.

What I don't want is the need for a PS5 Pro just 3 years after PS5 releases.

Of course they don't have to, and One X shows that pushing back toward 200W at the wall is something they will probably have to accept. The question is how much will they then charge for the console? Sony and Microsoft will both need pretty gangbusters sales numbers to recoup the much higher investment that 7nm requires. It is a very precarious balance between ROI and ensuring good sales.
 

Blanquito

Member
Saw this picture of the upcoming Notebook APU AMD have (Raven Ridge?) and considering it is "only" rumoured to be a 4 core/8 thread Zen+11CU Vega it looks like it is already around 200mm^2?

Even allowing for the 7nm area reduction it could get quite big at expected PS5 specs?

[image: Raven Ridge APU package photo]
Source please? I would like to read more
 

AmyS

Member
Edge #309 September issue interview with Jim Ryan on the hardware refreshes, he had this to say about it.
Leaving the door open from the looks of it which is the best possible stance, if there's a need then there will be one.

Ryan also talked about the recurring topic of whether or not the hardware refreshes released mid-generation (PlayStation 4 Pro and Xbox One X) will happen again in the future. He seemed to leave the door open to the possibility, though he wouldn’t confirm either way.

Hmm I missed that.

This seems to be the full quote from him in EDGE:

It’s a very interesting question. The cultural phenomenon of regular updates to smartphones and tablets is without question, perhaps subliminally, coloring mindsets. And the days of a 13-year PlayStation 2 cycle will almost certainly never repeat themselves. But equally, a platform is a very delicate ecosystem, and if that platform is to succeed, you’ve got to give those who make content for it the chance to recoup on it. At the end of the day, like it or not, these are businesses.

We struck – and Microsoft has as well – a good balance of innovation within the confines of the platform. Also, services which operate agnostically of particular hardware, like PlayStation Now for example, are something you’re going to see more of. I think we’re only six months into PlayStation 4 Pro, and it’s too early to tell. The Xbox One X hasn’t launched yet. I don’t know if this is the way forward or not.
 

Lady Gaia

Member
Saw this picture of the upcoming Notebook APU AMD have (Raven Ridge?) and considering it is "only" rumoured to be a 4 core/8 thread Zen+11CU Vega it looks like it is already around 200mm^2?

It's impossible to tell the size of the actual silicon die given a photo of the external package. The package size is determined almost entirely by the package style and pin count, not the size of the silicon die it contains.
 

THE:MILKMAN

Member
Source please? I would like to read more

It is being held in the hand of an AMD exec at Computex, but check it out here: https://www.pcper.com/news/Processors/Computex-2017-AMD-Demos-Ryzen-Mobile-SoC-Vega-Graphics

It's impossible to tell the size of the actual silicon die given a photo of the external package. The package size is determined almost entirely by the package style and pin count, not the size of the silicon die it contains.

Sure, but this is just to get a ballpark figure, not a serious and accurate measure. IIRC both Xbox One and PS4 dies look very similar to this Raven Ridge APU, and plenty of guesses/measurements were made with pics of them which weren't far off!

Maybe a pixel-counter here could check how big it is from the below pic, just for fun (the coin is 20mm wide/diameter, apparently)?

 

AmyS

Member
You're right, the quote marks were below the line I quoted; that was the author's opinion.

Yep. So really what it's coming down to is:

A. PS5 is happening, but we won't see it for 'some time'.

B. Another refresh of PS4 with more power is a possibility, but it depends on how 2018 plays out, including how Xbox One X does.

The problem with a further upgraded PS4 (among many) is that while a more powerful GPU is a given, there's nowhere to go on the CPU side if Jaguar is used again, other than matching the XB1X's custom CPU features and 200MHz of extra clockspeed (above PS4 Pro).

What would Sony do:

Release a PS4 Ultra/4K in 2018 with a 2.4GHz Jaguar, 8-10TF GPU and 12GB RAM.

Then PS5 in 2020 with Zen 3, a 20TF GPU and 64GB RAM.
 

Yazzees

Member
I think a clear forward leap in graphics is necessary to get the masses interested. Simply using current assets at 4k doesn't come close to what I'm talking about, and hell, even this gen is often described as a diminished return compared to the transition from PS2->PS3.

Because of that I don't really feel like new consoles are coming soon or that any further "half-steps" are gonna be a thing.
 

AmyS

Member
I think a clear forward leap in graphics is necessary to get the masses interested. Simply using current assets at 4k doesn't come close to what I'm talking about, and hell, even this gen is often described as a diminished return compared to the transition from PS2->PS3.

Because of that I don't really feel like new consoles are coming soon or that any further "half-steps" are gonna be a thing.

I totally agree 1000%
 

Shin

Banned
Yep. So really what it's coming down to is

Probably meant PS5 Pro. They are in an interesting position and can almost do whatever they want, being first out of the gate.
Things were supposed to shift back a bit in Microsoft's favor after they made up for their mistakes, but it looks like exclusives are king.

Vega XTX
Vega XT
Vega XL
[image: Vega XTX/XT/XL benchmark chart]


As you guys can see, that card is behind the GTX 1080 in FireStrike, and the same with other gaming results; it's around a 1080 but definitely not Ti level.
AMD has to keep up in this graphical power race and is most likely looking for a big jump to catch up, like they did with Zen; maybe Navi will be it, maybe it won't.
The point I'm trying to make is that their lowest-end GPU can only be so weak; I'm expecting to see jumps like Pascal > Volta from AMD with Vega > Navi.

AMD have a conference call today (2 hours 15 min from now), you can follow it here: http://ir.amd.com/phoenix.zhtml?c=74093&p=irol-eventDetails&EventId=5259318
Might hear something about their semi-custom business.
 

RoboPlato

I'd be in the dick
I think a clear forward leap in graphics is necessary to get the masses interested. Simply using current assets at 4k doesn't come close to what I'm talking about, and hell, even this gen is often described as a diminished return compared to the transition from PS2->PS3.

Because of that I don't really feel like new consoles are coming soon or that any further "half-steps" are gonna be a thing.
This is part of why I think we'll see more 60fps games. It's a more obvious improvement than a lot of small visual refinements will be and the mid gen refreshes are already pushing good solutions for 4k displays.
 

avaya

Member
Sony will be reluctant to overshoot 350mm² on the die because the time till the next die shrink could be some way off, and you'd be saddled with a large die for longer. This will likely keep them focused on a design with the same die space/TDP as the original PS4.

I cannot realistically see them pushing for a die size greater than 370mm² for this reason. (Liverpool was 348mm².)
 

AmyS

Member
Very much relevant IMO.

AMD's CTO on 7nm, Chip Stacks
Papermaster calls for EUV ASAP

SAN JOSE, Calif. — AMD is among chip designers getting an early taste of 7nm process technologies, said its chief technology officer. He called for accelerated work on wafer-level fan-out packaging and greater use of parallelism in EDA software.

To gear up for 7nm, "we had to literally double our efforts across foundry and design teams... It's the toughest lift I've seen in a number of generations," perhaps back to the introduction of copper interconnects, said Mark Papermaster, in a wide-ranging interview with EE Times.

The 7nm node requires new "CAD tools and [changes in] the way you architect the device [and] how you connect transistors—the implementation and tools change [as well as] the IT support you need to get through it," he said.

Both AMD's Zen 2 and Zen 3 x86 processors will be made in 7nm. "It's a long node, like 28nm... and when you have a long node it lets the design team focus on micro-architecture and systems solutions" rather than redesign standard blocks for the next process, Papermaster said.

The CPUs and GPUs AMD is shipping today were among its first designs in 14/16nm nodes using double patterning lithography and FinFET transistors.

For that work, "our partnerships with foundries and the EDA industry had to deepen. In 7nm it requires even deeper cooperation [because] we have quad patterning on certain critical levels [where] you need almost perfect communications between the design teams," he said.

Papermaster expects foundries will begin to use extreme ultraviolet (EUV) lithography starting in 2019 to reduce the need for quad patterning. EUV "could bring a substantial reduction in total masks and thus lower costs and shorten cycle time for new designs," he said.

"Foundries will introduce [EUV] at different rates but... I urge them all to go as fast as they can," he said.

http://eetimes.com/document.asp?doc_id=1332049

Of course 7nm would be a given for new consoles in late 2019, but if Sony were to wait until 2020, second-gen 7nm+ with EUV (I am only guessing) could be within reach, and might possibly mean the difference between something like an APU with a Zen 2 CPU + 10-12TF GPU and a Zen 3 APU with an ~18TF GPU.

Edit: While somewhat meaningless from certain perspectives, why would an 18 TF GPU in a Fall 2020-released PS5 matter much over a 10-12 TF GPU in a 2019 PS5?

It would mean nearly a straight-up 10x increase in raw performance over the OG PS4's GPU (not counting architectural improvements in AMD GPUs since GCN 1.1) and a 3x leap over the XB1X GPU (which can't be used to make games that wouldn't also work on the OG XBone and XB1S anyway). More importantly, it gives developers more time to release their big current-gen games (e.g. FFVII Remake, TLoU2) and a bit of time to work on next-gen games for PS5. Releasing PS5 in 2019 has both pros and cons, but I think the upside of waiting until 2020 outweighs the downsides.
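A quick sanity check of those multipliers, using the commonly cited figures of 1.84TF for the launch PS4's GPU and 6TF for the XB1X:

```python
# Commonly cited FP32 figures: launch PS4 GPU, Xbox One X GPU,
# and the hypothetical 2020 PS5 discussed above.
ps4_tf, xb1x_tf, ps5_tf = 1.84, 6.0, 18.0
print(round(ps5_tf / ps4_tf, 1))   # ~9.8x the OG PS4's GPU
print(round(ps5_tf / xb1x_tf, 1))  # 3.0x the XB1X's GPU
```

So the "10x over PS4, 3x over XB1X" framing holds on raw teraflops alone, before counting any per-flop architectural gains.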
 

Shin

Banned
Very much relevant IMO.
Edit: While somewhat meaningless from certain perspectives, why would an 18 TF GPU in a Fall 2020-released PS5 matter much over a 10-12 TF GPU in a 2019 PS5?
Nothing new to be honest, as that's known already, and 7nm isn't nearly as long as 28nm was (at least in the number of GPUs released on the node; I don't know how many years it lasted).
I think we should be happy if it's 12TF; I'm already having a hard time seeing them reach even that. I do believe next gen will last longer than this one though (because of the 4K to 8K jump).

Stating the obvious (to us gamers) with this (2013 - 2018):
[image: AMD slide]


Nothing new I think, sales are down, no mention of next year or new deals.
[image: AMD slide]
 

ZOONAMI

Junior Member
Yep. So really what it's coming down to is:

A. PS5 is happening, but we won't see it for 'some time'.

B. Another refresh of PS4 with more power is a possibility, but it depends on how 2018 plays out, including how Xbox One X does.

The problem with a further upgraded PS4 (among many) is that while a more powerful GPU is a given, there's nowhere to go on the CPU side if Jaguar is used again, other than matching the XB1X's custom CPU features and 200MHz of extra clockspeed (above PS4 Pro).

What would Sony do:

Release a PS4 Ultra/4K in 2018 with a 2.4GHz Jaguar, 8-10TF GPU and 12GB RAM.

Then PS5 in 2020 with Zen 3, a 20TF GPU and 64GB RAM.

Or a PS5 in 2019 with Zen 1 or 2 and a 10-15TF GPU with 16GB GDDR6.

LOL at 64GB RAM.

Given the state of AMD's GPU tech right now, being extremely power hungry for gains over even the Fury X, I don't think it's out of the question for Sony to go back to Nvidia and run with emulation for BC.
 

THE:MILKMAN

Member
Very much relevant IMO.

AMD's CTO on 7nm, Chip Stacks
Papermaster calls for EUV ASAP





http://eetimes.com/document.asp?doc_id=1332049

Of course 7nm would be a given for new consoles in late 2019, but if Sony were to wait until 2020, second-gen 7nm+ with EUV (I am only guessing) could be within reach, and might possibly mean the difference between something like an APU with a Zen 2 CPU + 10-12TF GPU and a Zen 3 APU with an ~18TF GPU.

Edit: While somewhat meaningless from certain perspectives, why would an 18 TF GPU in a Fall 2020-released PS5 matter much over a 10-12 TF GPU in a 2019 PS5?

It would mean a straight up 10x increase in raw performance over OG PS4's GPU (not counting architectural improvements in AMD GPUs from GCN 1.1) and a 3x leap over XB1X GPU (which can't be used to make games that can't work on OG XBone and XB1S anyway). More importantly, it gives developers more time to release their big current-gen games (i.e. FFVII Remake, TLoU2) and have a bit of time to work on next-gen games for PS5. Releasing PS5 in 2019 has both pros and cons, but I think the upside in waiting until 2020 outweighs the downsides.

I highly doubt an EUV chip at all, to be honest. If PS5 is 2019 the chip would already be designed, and if 2020 then very close to being designed (for standard 7nm). "EUV is always coming next year" is the standard industry joke...

I'm more worried about the costs of standard 7nm and how that would affect console prices. Papermaster lays out clearly how much more effort and manpower is having to go into 7nm to get it done.

Edit: When was the last time a console launched on a new/months-old node (excluding PS4 Pro, because that isn't very high volume IMO)? I think this is a very important consideration.
 

ZOONAMI

Junior Member
And how is AMD going to pull off a 12TF APU in basically a year and a half when their Vega chip is 13TF and is sucking down like 500W? The PS4 Pro and Xbox One X are already pretty power hungry for a console and are less than half the teraflops. I don't see an APU coming out of AMD with more than 10TF in the next 2 years.

Honestly, an Nvidia 1160/1170 or maybe 1260/1270 might make more sense from a power-consumption standpoint; it would probably come in at around that 10+ TF mark and would be reasonably inexpensive from a BOM perspective. Probably looking at 12GB GDDR6 for a 1160/1260 and 16GB for a 1170/1270. If they go the 12GB route they'll probably add 4GB of system RAM, and if 16GB, a unified pool. We might see 32GB GDDR6 if they go higher end and wait until 2020. I doubt it though.

Also, if Nintendo has a good relationship with NV, it may give Sony more of a reason to rethink sticking with AMD just for BC. It might also be cheap enough by then to just stick a PS4 chip in there for BC, like they did for PS2 on PS3. Not sure if AMD would be happy about that, but it's better than nothing. I don't think a $500 price point is out of the question for Sony for PS5 either.
 
And how is AMD going to pull off a 12TF APU in basically a year and a half when their Vega chip is 13TF and is sucking down like 500W? The PS4 Pro and Xbox One X are already pretty power hungry for a console and are less than half the teraflops. I don't see an APU coming out of AMD with more than 10TF in the next 2 years.

7nm... as well as AMD's Navi GPU micro-architecture. Vega is an unfortunate anomaly for AMD, certainly not the norm.

Shrinking the design to a smaller process node means either a significant drop in power consumption for the same chip design, or similar power consumption for almost double the performance (2x the number of transistors).

This isn't new; it's how greater processor performance is achieved year on year. Perhaps you should have read the discussion in the thread.
 
I highly doubt a EUV chip at all to be honest. If PS5 is 2019 the chip would already be designed and if 2020 then very close to being designed (for standard 7nm). EUV is always coming next year is the standard industry joke.....

I'm more worried about the costs of standard 7nm and how that would affect console prices. Papermaster lays out clearly how much more effort and manpower is having to go into 7nm to get it done.

Edit: When was the last time a console launched on a new/months old node? (excluding PS4 Pro because that isn't very high volume IMO) I think this is a very important consideration.

In late 2019/2020, 7nm won't be months old. And why would you arbitrarily exclude the PS4 Pro? It's a console, built to the maximum die size/TDP that Sony could achieve on the 16nm node, and priced at the same point the PS4 was at launch in 2013. It's absolutely applicable and answers your question. And even if you think the Pro isn't high volume (not necessarily true), the PS4 Slim, which launched on the same node at the same time, is.

Also, the engineering investment AMD makes won't have a direct impact on the price of the PS5. According to AMD, 7nm is a "long" node, so those sunk costs will be recouped over the generation of products AMD develops on it. AMD will sell a number of its own GPU products on 7nm, so there's no reason to think the PS5 contract alone will somehow bear that initial engineering investment.

You also forget that console manufacturers consistently go with AMD because AMD offers them favorable terms in its design contracts. AMD needs the console design business to survive; the opposite isn't true (i.e. Sony/MS could sign deals elsewhere).
 

Shin

Banned
Is an ARM CPU strong enough to handle a native 4K UI, the OS, background downloads, recording, streaming at 1080p, and everything else?
Anything wrong with a 320-bit bus: 10x 32-bit chips, 2GB each? 12GB is like 2x what we have ATM after the OS if it's a unified pool, and that doesn't seem enough for the jump from 1080p to 4K.
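For what it's worth, the peak bandwidth of that bus is easy to check: bus width in bytes times per-pin data rate. A sketch assuming 14 Gbps GDDR6 (announced parts span roughly 12-16 Gbps, so the exact rate is an assumption):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes x per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(320, 14))  # 560.0 -> the proposed 10x 32-bit config
print(peak_bandwidth_gbs(256, 14))  # 448.0 -> a cheaper 8-chip alternative
```

Either figure would be a big step over the PS4 Pro's 218 GB/s, so the bus itself isn't the problem; the question is the cost of ten chips plus the wider board routing.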

7nm... as well as AMD's Navi GPU micro-architecture. Vega is an unfortunate anomaly for AMD. Certainly not the norm.

I looked up the GTX 1050 (75W), 1050 Ti (75W) and 1060 (120W); the RX 480 still lost to those in TDP, and in performance against the GTX 1060, so it's more than just Vega that's an issue.
Its higher teraflops didn't do anything for it, as the GTX 1060 > RX 480 even though the 480 draws 30W more. It's a rather depressing situation; both platform holders would be better off with Nvidia.
If we're strictly talking about efficiency, performance, and price, I think AMD is still better overall, and they have a combined CPU/GPU solution, whereas neither Nvidia nor Intel does.
 

Theonik

Member
ARM is better than x86 in many ways, but I don't think it's happening; it would break BC. At the same time, though, future BC is much easier once you're on ARM.

In late 2019/2020, 7nm won't be months old. And why would you arbitrarily exclude the PS4 Pro? It's a console, built to the maximum die size/TDP that Sony could achieve on the 16nm node, and priced at the same point the PS4 was at launch in 2013. It's absolutely applicable and answers your question. And even if you think the Pro isn't high volume (not necessarily true), the PS4 Slim, which launched on the same node at the same time, is.
Slim was a 30% smaller die though.
 

Shin

Banned
From what I've understood/read, new nodes are usually "reserved" for mobile SoCs, high-end GPUs, or GPUs for data centers: basically wherever the big money is.
Those customers pay the most because the high margins on those products make it worth it; consoles are pretty much last in line when it comes to new nodes.
As the process improves, prices go down, and the fabs recoup their investment in the machinery needed for the die shrink.

It might be more expensive than 16nm due to the complexity of any smaller node, but I don't think it's so bad that it would raise the BoM by, e.g., $50; we're probably talking a couple of cents/dollars here.
Vega XL will be priced at $399, and I can only assume that will drop ASAP (the GTX is going for $530 ATM); it seems like it will replace the RX 580, since it also debuted in that ballpark.
That's probably our baseline GPU, with improvements from the node shrink and/or whatever Navi brings to the table; it's not their mainstream/low-end card yet, but it will be once the RX 580 is phased out.

As per AMD's conference call yesterday:
AMD's Semi-Custom business area had 5 percent lower quarterly revenue than last year.
AMD supplies semi-custom SoCs for the Xbox and PlayStation, and the big growth phase of this console generation is over.
Microsoft will launch the Xbox One X in November, but AMD expects revenue from these chips to decline over the year.
 
Slim was a 30% smaller die though.

Well yeah, but the question was about consoles that launched on a new manufacturing process. Both the Pro and Slim qualify.

At the same time, however, 7nm won't be brand new in late 2019 and certainly not in late 2020 (even assuming 6-9 months lead for manufacturing inventory ready for a Fall launch).
 

THE:MILKMAN

Member
In late 2019/2020, 7nm won't be months old. And why would you arbitrarily exclude the PS4 Pro? It's a console, built to the maximum die size/TDP that Sony could achieve on the 16nm node, and priced at the same point the PS4 was at launch in 2013. It's absolutely applicable and answers your question. And even if you think the Pro isn't high volume (not necessarily true), the PS4 Slim, which launched on the same node at the same time, is.

Also, the engineering investment AMD makes won't have a direct impact on the price of the PS5. According to AMD, 7nm is a "long" node, so those sunk costs will be recouped over the generation of products AMD develops on it. AMD will sell a number of its own GPU products on 7nm, so there's no reason to think the PS5 contract alone will somehow bear that initial engineering investment.

You also forget that console manufacturers consistently go with AMD because AMD offers them favorable terms in its design contracts. AMD needs the console design business to survive; the opposite isn't true (i.e. Sony/MS could sign deals elsewhere).

To the bolded... To be fair, neither I nor anyone else knows how old the node will be at the PS5's launch; all I can do is look at the most recent history. TSMC's 16nm FF+, as used by the GTX 1080, entered risk production in late 2014, and the GTX 1080 launched in May 2016 (18 months). Now, I'm sure other mobile products used this node before that, but it gives a decent ballpark for lead times.

TSMC's 7nm node is reported to have entered risk production in April this year, so applying the same lead time for a non-mobile, volume-production chip over 300mm² (if that makes a difference) takes us to November 2018...

That is if everything goes swimmingly when it is said 7nm is much more of a challenge.
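That lead-time arithmetic can be sketched with stdlib date math; the 18-month risk-to-product lead is taken from the 16nm FF+/GTX 1080 example above, and it lands in the October/November 2018 window:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to the 1st for simplicity)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

# 16nm FF+ risk production (late 2014) -> GTX 1080 launch (May 2016)
lead = (2016 - 2014) * 12 + (5 - 11)
print(lead)  # 18 months

# Apply the same lead to 7nm risk production (April 2017)
print(add_months(date(2017, 4, 1), lead))  # 2018-10-01, i.e. the Oct/Nov 2018 window
```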

I'm really amazed we got two analysts, multiple articles, and comments from Matt about the PS5 when the thing seems barely in R&D and the node it will launch on doesn't really exist yet!

Compare that to the PS4 Pro, where all we got, pretty late on, was a very strong hint (in hindsight) in the 4Gamer Ito interview a year out, and then the GDC leak from PK/Kotaku in March 2016.
 
I'm really amazed we got two analysts, multiple articles, and comments from Matt about the PS5 when the thing seems barely in R&D and the node it will launch on doesn't really exist yet!

This really only amounts to unsubstantiated rumours and speculation. Not much different from what folks here on GAF (or B3D) are saying...

I think you're making too much of the pitiful dribbles of info about the PS5 we have, and trying to base conclusions on them. I think that's a mistake.

It's also pretty clear that the PS5 design will be well underway if they have a hope in hell of reaching a 2019 or 2020 Fall launch. There's no way the next-gen consoles are just "barely in R&D".

If devs are already getting preliminary briefs, it means that Sony and MS probably already have a good idea now about what their next-gen boxes will look like. AMD almost certainly will.
 

THE:MILKMAN

Member
This really only amounts to unsubstantiated rumours and speculation. Not much different from what folks here on GAF (or B3D) are saying...

I think you're making too much of the pitiful dribbles of info about the PS5 we have, and trying to base conclusions on them. I think that's a mistake.

It's also pretty clear that the PS5 design will be well underway if they have a hope in hell of reaching a 2019 or 2020 Fall launch. There's no way the next-gen consoles are just "barely in R&D".

If devs are already getting preliminary briefs, it means that Sony and MS probably already have a good idea now about what their next-gen boxes will look like. AMD almost certainly will.

Yeah, that was a crap phrase to use. I just mean it's very early for any talk this far out. The PS4 Pro got its first substantial rumours and media articles less than a year before launch, and the PS4/Xbox One ~18 months before.

Did Matt state that he or other devs had been given preliminary briefs? I can't remember what, if anything, he said about this.
 
Yeah, that was a crap phrase to use. I just mean it's very early for any talk this far out. The PS4 Pro got its first substantial rumours and media articles less than a year before launch, and the PS4/Xbox One ~18 months before.

Did Matt state that he or other devs had been given preliminary briefs? I can't remember what, if anything, he said about this.

Don't quote me on that... I vaguely recall someone in this thread saying something along those lines. I may be wrong, though...
 

Leak

Member
The jumps to the next PlayStation are 7nm and the CPU change.

If Sony matches the $500 price, it will just be the 7nm Navi successor to the RX 480, so we're talking about 9 TFLOPS: over a Fury X, which is itself a little worse than a GTX 1070. Its power won't be surprising, at least not if they want to keep the base PS4 alive... But killing the PS4 while not killing the PS4 Pro would be hard to sell.

I think there's a possible way for both PS and Xbox:

Xbox One - Xbox One X - Xbox X [the One dies, the One X stands]
PS4 - PS4 Pro - PS Pro [the PS4 dies, the PS4 Pro stands]

They must have thought through several ways to make this new two-console paradigm painless and understandable for the masses, and this could be one. And Ryzen-based processors with 4 cores / 4 threads and "boost speed" headroom for a future overclock would be a good way to help the One X / PS4 Pro survive, while still making a nice processor for 2020's consoles.

Oh, and if Sony matches a $400 price, it should be pretty close to the One X in terms of power. I think Microsoft made a good move here.

Edit: I used to wonder how Microsoft would design Xbox game boxes to help customers tell apart games for Xbox One + One X from future One X-only or One X + successor games... but this approach would solve the problem: Xbox One game boxes and Xbox X game boxes; PS4 game boxes and PS Pro game boxes.
 