
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

pawel86ck

Banned
Because 4K is a meme? Especially when you realize the people going on about it also use FXAA to remove edges, basically trashing image quality like it's nobody's business.
If 4K is so important, then MSAA should be used alongside it, because otherwise you're smearing your image with a post-processing effect; you may as well use a good upscaling method and save tons of GPU time.
FXAA looks worse than MSAA, but the difference in sharpness is very subtle, unlike TXAA or TAA for example. I use FXAA sometimes and it doesn't look like an upscaled picture at all.
 
Last edited:

llien

Member
Based on AMD's $/mm² figures, which are for 250mm² chips, there's no way the Super cards, which are twice as big, could cost less.

I mean, look how small the Radeon VII die is compared to Turing. It's much smaller than even the 2060, but the price is just insane.
Most of that is the 16GB of HBM2, the interposer, yada yada.
 
Last edited:
I thought one of the whole points of going to a smaller node was to save cost? Isn't that what the PS3 and PS4 both did throughout their lifetimes?

You save cost from a node shrink because you can salvage more dies from a (in this case 300mm) wafer: dies get smaller, so more of them fit on one wafer, and smaller dies have a lower probability of containing a defect. Same-size wafers are just going to get more expensive with every shrink.
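The shrink economics in that post can be sketched numerically. This is a toy model, not foundry data: the ~10% edge-loss factor, the defect density, and the two die sizes below are illustrative assumptions.

```python
import math

def dies_per_wafer(die_mm2, wafer_diameter_mm=300):
    """Gross die count: wafer area / die area, minus ~10% for edge loss (assumed)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_mm2 * 0.9)

def die_yield(die_mm2, defects_per_mm2):
    """Poisson defect model: yield falls exponentially with die area."""
    return math.exp(-defects_per_mm2 * die_mm2)

# Illustrative shrink: the same chip before (350mm2) and after (230mm2)
for die in (350, 230):
    good = dies_per_wafer(die) * die_yield(die, defects_per_mm2=0.001)
    print(f"{die}mm2: ~{good:.0f} good dies per 300mm wafer")
```

Smaller dies win twice: more candidates fit per wafer, and a larger fraction of them survive the defect lottery.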
 

Gamernyc78

Banned
All this thread does is throw numbers back and forth and raises and lowers expectations day by day between the same four ppl lol

You guys are not trying to let this thread die down, are you? lol 😭😘😁😁😁
 

CrustyBritches

Gold Member
It looks like we're in for another gen with disappointing GPU performance 😩. Praying to Jesus as we speak
$300-400 GPU performance is disappointing? :messenger_grinning_smiling:
---
Seeing new results in the 3DMark database for 5700 Pro + Ryzen 5 3600(6c/12t).
[image: 5700-3600.jpg]


I can't see you, but I just wanted to remind you how supposedly you'd need a 2080-level GPU to get to 20K in Fire Strike. A 6c/12t 3.6GHz CPU with a 5700 Pro GPU at 1.7GHz = 20K. Black magic:messenger_ghost:

---
Gonzalo = 20K+
 
Last edited:

Mass Shift

Member
[image: Screenshot-51.png]


Very indicative slide if you are still dreaming of ~400mm² dies (and no, Scarlett is not 380mm² confirmed).

Confirmed? No. But Scorpio's announcement render was spot on. If MS followed the same method with Scarlett it would be a safer guess than blind estimations.

If MS went this way, Sony might have as well.
 

mckmas8808

Banned
They saved cost because the dies got smaller. Here we're saying that for the same die size, e.g. 200mm², the cost increases over time, and is quite high on 7nm. But that's countered by the increase in logic density, which is why GPUs keep getting faster even though their die sizes would have been considered lower-end in previous generations, and why people are wrong to look at current GPUs and say they're selling what would be mid-range parts at top-end prices without considering the cost per unit of die area.

AH! Thanks. That makes sense. Looking at these charts, there's no way either of the next-gen consoles will be 400mm² SoCs.
 

SonGoku

Member
they can make 8k and 4k60fps with lower quality, less details ...
Even 4K at 30fps is out of the question if the intention is to deliver next-gen graphics
I'm confident consoles will be equipped with a much better GPU
Very indicative slide if you are still dreaming of ~400mm² dies (and no, Scarlett is not 380mm² confirmed).
Old slide from 2017 that you are not even interpreting correctly. That comparison takes design costs into account, which are much higher, yes, but also immutable: independent of die size.
TSMC's 7nm transistor cost is already cheaper than 16nm's, and 7nm yields are improving at a steady rate, which will make bigger dies cheaper. On 7nm EUV, 66CUs would fit on a 350-360mm² SoC. EUV is also cheaper to manufacture due to reduced complexity.
 
Last edited:

bitbydeath

Member
Even 4K at 30fps is out of the question if the intention is to deliver next-gen graphics
I'm confident consoles will be equipped with a much better GPU

The 5700 isn't even capable of RT, so it should be ruled out of next-gen consoles for that reason alone.

Unlike Navi 10/12, Navi 20 will include ray tracing features. It's assumed a smaller variant of Navi 20 will be in the next generation PlayStation and Xbox consoles, with a larger model going into future extreme performance graphics cards.

 
Last edited:

SonGoku

Member
there's no way either of the next-gen consoles will be 400 mm² SOCs
EUV reduces manufacturing costs, but 400mm² isn't even needed; with EUV a 350-360mm² SoC would suffice
On DUV, a 390-400mm² chip would be quickly shrunk to 330-340mm² using 6nm (which is design-compatible with 7nm DUV) in 2021
The 5700 isn't even capable of RT, so it should be ruled out of next-gen consoles for that reason alone.
It doesn't even have enough juice for 4K/60fps in current-gen games
 
Last edited:

stetiger

Member
5700 isn’t even capable of RT so should already be ruled out of next-gen consoles by that reason alone.



RDNA2 would have to be another leap in performance like Radeon VII to RDNA for anything powerful to fit into a console APU. There is little chance we are getting 330mm²+ APUs. We might if the console is more expensive AND sold at a loss, like $499 with a $50 loss on BOM. That would require bumping the cost of PS Plus, and selling at least two games per console, or increasing game prices. Given how sensitive people are about prices, I would say that is highly unlikely.
 

bitbydeath

Member
RDNA2 would have to be another leap in performance like Radeon VII to RDNA for anything powerful to fit into a console APU. There is little chance we are getting 330mm²+ APUs. We might if the console is more expensive AND sold at a loss, like $499 with a $50 loss on BOM. That would require bumping the cost of PS Plus, and selling at least two games per console, or increasing game prices. Given how sensitive people are about prices, I would say that is highly unlikely.

It would be a leap but we don’t know by how much. Basing next-gen off hardware that won’t be in it is pointless.

Yet we keep going round in circles of what the 5700 can do.
 
Last edited:

SonGoku

Member
RDNA2 would have to be another leap in performance like radeonVII to rdna for anything powerful to fit into a console APU.
  • 54CUs @1520 Mhz = 10.5TF (2080 tier)
  • 60CUs @1600Mhz = 12.28TF (2080S+ tier)
There is little chance we are getting 330mm2+ apus.
And how did you come up with that number?
7nm production costs are decreasing and yields are improving; bigger chips are entirely dependent on yields and nothing more, not to mention EUV reduces manufacturing costs and chip sizes.
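For reference, TF figures like the ones in this post follow from the standard FP32 throughput formula shared by GCN and RDNA: 64 shader ALUs per CU, each doing 2 ops per clock (fused multiply-add).

```python
def fp32_tflops(cus, clock_mhz):
    """FP32 TFLOPs = CUs x 64 ALUs x 2 ops/clock x clock."""
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

print(fp32_tflops(54, 1520))  # ~10.5 TF, the "2080 tier" figure above
print(fp32_tflops(60, 1600))  # ~12.29 TF, the "2080S+ tier" figure
```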
 
Last edited:

LordOfChaos

Member
All this thread does is throw numbers back and forth and raises and lowers expectations day by day between the same four ppl lol

You guys are not trying to let this thread die down are you? lol 😭😘😁😁😁



We have a record to break!


Ah, I remember it well. The gradual progression.

"1Tflop is reasonable"
"They can't even buy a chip under 600Gflops..."
"352...For the die size...maybe?"
"Half of the die is eDRAM...And it's on 40nm...They really managed a 176Gflop part in early 2013..."

And then there was making all the effort of porting the old PowerPC 750 to a 45nm node to keep BC, well into the dark ages...

It's why I keep my expectations in check, and keep almost none for Nintendo specifically, lol
 
Last edited:

R600

Banned
Even 4K at 30fps is out of the question if the intention is to deliver next-gen graphics
I'm confident consoles will be equipped with a much better GPU

Old slide from 2017 that you are not even interpreting correctly. That comparison takes design costs into account, which are much higher, yes, but also immutable: independent of die size.
TSMC's 7nm transistor cost is already cheaper than 16nm's, and 7nm yields are improving at a steady rate, which will make bigger dies cheaper. On 7nm EUV, 66CUs would fit on a 350-360mm² SoC. EUV is also cheaper to manufacture due to reduced complexity.
Yeah, cost per transistor might be cheaper, but next-gen consoles will have 3x more transistors than last gen.

I am pretty sure, from everything (literally) we saw, that 7nm and below is considerably more expensive than larger nodes. EVERYTHING points in that direction, not only consoles but GPUs as well (see Navi vs Polaris).
 
Last edited:

SonGoku

Member
Scarlett die is anywhere from 330mm² to 380mm². To me, it looks smaller than Scorpio.
Pixel-counting experts say it could be anywhere from 380 to 400mm²
I am pretty sure, from everything (literally) we saw, that 7nm and below is considerably more expensive than larger nodes
You are missing the point though: 7nm's increased costs are due to design costs, and those remain the same irrespective of die size
With yields shown to be improving steadily, bigger chips will be possible at lower costs, and even if a small loss is taken on a big 390-400mm² chip, it can be quickly shrunk to 330-340mm² in 2021.

That's before even taking into account 7nm EUV, which doesn't require as big a die and is cheaper to manufacture
And Samsung's 7nm EUV is aggressively undercutting TSMC
 
Last edited:

mckmas8808

Banned
We have a record to break!


At this pace, we'll pass 234 pages before the end of the summer lol
 

R600

Banned
Pixel-counting experts say it could be anywhere from 380 to 400mm²

You are missing the point though: 7nm's increased costs are due to design costs, and those remain the same irrespective of die size
With yields shown to be improving steadily, bigger chips will be possible at lower costs, and even if a small loss is taken on a big 390-400mm² chip, it can be quickly shrunk to 330-340mm² in 2021.

That's before even taking into account 7nm EUV, which doesn't require as big a die and is cheaper to manufacture
And Samsung's 7nm EUV is aggressively undercutting TSMC
You are missing the point. Scarlett's die size was not calculated by pixel counting; it was calculated from perspective and the size of the GDDR modules. BIG, VERY BIG margin of error.

For example, take the Scarlett die and compare it to Scorpio's: you will find something is very off, as it in no way looks bigger than Scorpio. If anything it looks smaller (narrower, similar length).

So they can quickly redesign the chip for a lower node and still end up with dies similar in size to the ones originally released in 2013? Doubt it.

The entire console business has literally been going down in die size, console gen after console gen. I find it very hard to believe they will release new consoles on a node that is still nowhere to be found.

In any case, that picture gives costs per yielded mm² on a 250mm² die. I don't know who is reading it right or wrong, but I suppose this graph and Mark Papermaster from AMD know what they're talking about when they say nodes are getting more and more expensive.
 
Even 4K at 30fps is out of the question if the intention is to deliver next-gen graphics
I'm confident consoles will be equipped with a much better GPU

Even a 2080 Ti struggles and can't provide a solid 60fps in Tomb Raider at 4K maxed out, and that's current gen. Can next-gen consoles run 4K60? The answer is: YES.
It's up to devs whether they target 4K60 or more detailed, advanced graphics at 1440p 60+ fps, etc.

For example, that new Star Wars game, Fallen Order, could run at 4K60 on next gen easily. I'm very disappointed in what we've seen so far; the Halo trailer is garbage, not next-gen to me. The reality is consoles always disappoint. 2080/2080 Super performance in a next-gen box? HELL NO. I hope so, but it's unlikely; we're lucky if we get 1080 Ti performance. I was expecting Vega 56 a year ago, but now that we have the RX 5700/XT I can't complain, it's OK.
 

xool

Member
Yeah, cost per transistor might be cheaper, but next-gen consoles will have 3x more transistors than last gen.

I am pretty sure, from everything (literally) we saw, that 7nm and below is considerably more expensive than larger nodes. EVERYTHING points in that direction, not only consoles but GPUs as well (see Navi vs Polaris).

(These are estimates, from 2019.)

Design costs are growing exponentially, but they're a one-off cost.

[image: And7IUW.png]


Price per transistor is still (estimated to be) going down. Note this chart starts at 16nm, and we are probably comparing with this gen's PS4/XONE at 28nm; from the graph I posted before, costs from 28nm to 16nm are ~60% (i.e. 40% less).

[image: ZPNcEGg.png]

(Using these numbers I get cost per transistor to be ~3.1x less for 7nm vs 28nm.)

Note how there's not much joy after 7nm; power efficiency and possibly frequency should still increase, but the previous gains in transistors/$ stall.

[Economics lesson] There's an expectation that as the node-size limit is reached, chips become a true commodity, which leads to better prices for consumers as margins are cut, and development costs cease to be a big factor. We're already seeing consolidation in the semiconductor foundry business, with the market becoming an oligopoly, a sure sign of commoditization.
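As a sanity check on the arithmetic, the cost-per-transistor comparison can be scripted. The per-node $/billion-transistor values below are back-derived from the thread's own estimates (~$2.65/Btx at 7nm, ~3.1x more at 28nm, 16nm ~40% below 28nm); they are not foundry quotes, and the transistor counts are ballpark.

```python
# $ per billion transistors by node -- the thread's estimates, not quotes
USD_PER_BTX = {"28nm": 8.2, "16nm": 4.9, "7nm": 2.65}

def silicon_cost(transistors_b, node):
    """Marginal silicon cost for a chip with the given transistor budget."""
    return transistors_b * USD_PER_BTX[node]

# A PS4-class 28nm APU (~3.1B transistors) vs a hypothetical 25B 7nm APU
print(silicon_cost(3.1, "28nm"))  # ~$25
print(silicon_cost(25, "7nm"))    # ~$66
```

The second line matches the $66.25 figure quoted for an 80CU APU later in the thread: a chip with ~8x the transistor budget ends up costing only ~2.6x as much silicon, which is the whole density argument in one number.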
 
Last edited:

stetiger

Member
Pixel counter experts say it could be anywhere from 380 to 400 mm2

You are missing the point though, 7nm increased costs are due to designs costs, those costs remain the same irrespective of die size
With yields shown to be improving steadily bigger chips will be possible at lower costs, and even if a small loss was taken on a big 390-400mm2 chip it can be quickly shrunk to 330-340mm2 in 2021.

That's before even taking into account 7nm EUV which doesn't require as big die and is cheaper to manufacture
And Samsung 7nm EUV is aggressively undercutting TSMC
80mm² for Zen 2 chiplets, 250mm² for the RX 5700 XT. Not much space left. However, I am willing to bet a profile picture, and maybe money, that we will not get 390mm² chips. You are way too optimistic my friend, and honestly setting yourself up.
 

SonGoku

Member
You are missing the point. Scarlett's die size was not calculated by pixel counting; it was calculated from perspective and the size of the GDDR modules.
That's how it's done: you use a reference scale (the GDDR6 chips in this case) and then pixel-count to get a size estimate. Are you sure it wasn't done that way?
Let's say it wasn't and there's a significant margin for error; that reasoning applies both ways... you can't use the shot as proof it isn't a big die either.
So they can quickly redesign the chip for a lower node and still end up with dies similar in size to the ones originally released in 2013? Doubt it.
Hold up, you are mixing two separate scenarios:
  1. Designed on 7nm DUV for a 2020 launch, then shrunk in 2021 using 6nm, which is design-compatible, meaning no redesign/retooling required; it's a "free" shrink
  2. Designed from the start on 7nm EUV to launch in 2020
80mm² for Zen 2 chiplets, 250mm² for the RX 5700 XT. Not much space left
The Zen 2 chiplet is 75mm², btw
  • On 7nm EUV they can fit 66CUs on a 350-360mm² SoC
  • On 7nm DUV they can fit 60CUs on a 390-400mm² SoC and shrink to 330-340mm² in 2021
that we will not get 390mm² chips.
If they use 7nm EUV, 390mm² won't even be necessary for good performance (2080S+)
You are way too optimistic my friend, and honestly setting yourself up.
I just look at what's technically possible considering the timelines
I don't need to artificially lowball my expectations to be excited over an unexciting prospect. If consoles turn out to be underpowered machines, I'll just wait until they hit $299 and several worthwhile games are out before I bite.
 

mckmas8808

Banned
(These are estimates, from 2019.)

Design costs are growing exponentially, but they're a one-off cost.

[image: And7IUW.png]


Price per transistor is still (estimated to be) going down. Note this chart starts at 16nm, and we are probably comparing with this gen's PS4/XONE at 28nm; from the graph I posted before, costs from 28nm to 16nm are ~60% (i.e. 40% less).

[image: ZPNcEGg.png]

(Using these numbers I get cost per transistor to be ~3.1x less for 7nm vs 28nm.)

Note how there's not much joy after 7nm; power efficiency and possibly frequency should still increase, but the previous gains in transistors/$ stall.

[Economics lesson] There's an expectation that as the node-size limit is reached, chips become a true commodity, which leads to better prices for consumers as margins are cut, and development costs cease to be a big factor. We're already seeing consolidation in the semiconductor foundry business, with the market becoming an oligopoly, a sure sign of commoditization.

Man, 5nm and 3nm seem to be a total waste. What are we going to do after 7nm? Start stacking?
 

xool

Member
80mm² for Zen 2 chiplets, 250mm² for the RX 5700 XT. Not much space left. However, I am willing to bet a profile picture, and maybe money, that we will not get 390mm² chips. You are way too optimistic my friend, and honestly setting yourself up.

I have to agree. In the table in my post two above, you can see the wafer cost (and therefore cost per mm²) increases 3x from 16nm to 7nm; expecting die size to increase together with that increased cost is not logical.

At best I would expect them to maintain ~320mm²
 

SonGoku

Member
I have to agree. In the table in my post two above, you can see the wafer cost (and therefore cost per mm²) increases 3x from 16nm to 7nm; expecting die size to increase together with that increased cost is not logical.

At best I would expect them to maintain ~320mm²
You forgot about this, lol?
Design costs are growing exponentially, but they're a one-off cost.
This is the key point many are missing when pointing out 7nm costs and making the flawed equivalence to small dies: the 7nm design cost is the same irrespective of die size.
What will determine whether chips are big or small is yields.

That's before even factoring in EUV, which further reduces manufacturing costs.
 

xool

Member
Man 5nm and 3nm seems to be a total waste. What are we going to do after 7 nm? Start stacking?

Yep, diminishing returns. But there's still an arms race to get there, because when Moore's law truly ends and there are no more node shrinks, whoever is still in the game essentially wins everything.

Probably just chiplets + interposers, like current Threadrippers. Stacking works for RAM, but I think the chips will just run too hot to cool if you stack GPUs or CPUs.
 

SonGoku

Member
Even a 2080 Ti struggles and can't provide a solid 60fps in Tomb Raider at 4K maxed out, and that's current gen,
Which is why I think consoles will target 2080+ performance minimum, for a proper next-gen leap
MS/Sony aren't Nintendo; they must show a visual leap to sell the consoles. Even Cerny alluded to this, and both dug themselves into the 4K hole. The 5700 XT just doesn't have the juice for next-gen graphics at 4K.
 
Last edited:

xool

Member
You forgot about this, lol?

This is the key point many are missing when pointing out 7nm costs and making the flawed equivalence to small dies: the 7nm design cost is the same irrespective of die size.
What will determine whether chips are big or small is yields.

That's before even factoring in EUV, which further reduces manufacturing costs.

OK, so design costs at 7nm are ~$300 million. Let's say the PS5 sells 100 million; that's $3 per console.

Cost per billion transistors at 7nm is estimated at $2.65. The AMD 5700 has ~10 billion, Zen 2 is ~5 billion(?), so cost = 15 × 2.65 = $40 per chip.

Increasing the die size by 25% is an extra $10...


I'm not betting the house on the figures in that table, but I expect them to be near enough, at least as far as 7nm.
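As a sketch, that whole back-of-envelope fits in one helper: a one-off design cost amortized over units sold, plus the marginal silicon cost per chip. The figures plugged in are the thread's estimates, not official numbers.

```python
def per_console_silicon(design_usd, units_sold, transistors_b, usd_per_btx):
    """Amortized one-off design cost per unit plus marginal transistor cost."""
    return design_usd / units_sold + transistors_b * usd_per_btx

# $300M design spread over 100M consoles, plus ~15B transistors at $2.65/Btx
print(per_console_silicon(300e6, 100e6, 15, 2.65))  # $3 + $39.75 = $42.75
```

The point of splitting the two terms is that the scary exponential (design cost) divides away with volume, while the per-chip term is what actually scales with die size.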
 

SonGoku

Member
Cost per billion transistors at 7nm is estimated at $2.65. The AMD 5700 has ~10 billion, Zen 2 is ~5 billion(?), so cost = 15 × 2.65 = $40 per chip.

Increasing the die size by 25% is an extra $10...
[image: D-pGn8qWkAEM_7n.png]

Going with an extreme case:
5700 = 10.3B
80CU APU ~25B
$2.65 × 25 = $66.25

Of course this assumes the yields of ~85mm²-class dies, but yields are continuously improving, and a 60-66CU chip won't take that many transistors, so I expect them to meet their target of a $500 launch price with a small loss.
OK, so design costs at 7nm are ~$300 million. Let's say the PS5 sells 100 million; that's $3 per console.
That's dirt cheap for a mainline console; MS invested 500 million on the Xbox controller alone, lol
 
Last edited:

DeepEnigma

Gold Member
LOL. For an article written today (July 9th), that might be the most shallow clickbait Scarlett article I've ever seen. It's like they pieced together what they found out a month ago and rewrote it as a new article.

Just like their SAD Google-sourced clickbait article. Jez got so pissy when Afro made a thread laughing at it here that he cried about it on Twitter.

Windows Central is bottom tier, lol.
 
Last edited:

xool

Member
OK, after thinking about it I'm ready to join the dark side... I think either or both of Sony/MS will go for a ~400mm² APU, and take the extra $10+$5 (die size + yield reduction) hit.

It'd be stupid not to.

The one with the biggest balls wins.


(What do we get for 400mm²? How many TF?)
 
Which is why i think consoles will target 2080+ perfomance minimum, for a proper next gen leap
MS/Sony aren't Nintendo, they must show a visual leap to sell the consoles even Cerny alluded to this and both dug themselves in the 4k hole, the 5700XT just doesnt have the juice for next gen graphics at 4K.

lol? Dug themselves into the 4K hole? For you, 4K means ultra or nothing? 4K60 at low settings is still 4K60.

😧 The ultimate nightmare scenario 😧

$499 7.5TF, 250W, loud AF cooling. 90% of games cross gen

The worst case is:

• 44 CU @ 1550MHz = 8.7TF RDNA, which is ~10.8TF GCN-equivalent
I'm pretty sure next gen is better than Google Stadia

More optimistic are:

• 46 CU @ 1550MHz = 9.1TF RDNA
• 48 CU @ 1550MHz = 9.5TF RDNA

The dream scenario would be:
• 52 CU @ 1550MHz = 10.3TF RDNA
 

SonGoku

Member
For you, 4K means ultra or nothing?
Next-gen graphics
4K60 at low settings is still 4K60.
Consoles need to show a clear visual leap; even Cerny alluded to this
Current-gen graphics at 4K won't cut it
(What do we get for 400mm²? How many TF?)
DUV (390-400mm²)
54CUs @1520MHz = 10.5TF (2080 tier) Very likely
54CUs @1592MHz = 11TF (best case scenario) "dream"

EUV (350-360mm²)
60CUs @1600MHz = 12.28TF (2080S+ tier) Very likely
60CUs @1693MHz = 13TF "dream"

EUV (390-400mm²)
72CUs @1550MHz = 14.2TF (2080Ti+ tier) "dream"
OK, after thinking about it I'm ready to join the dark side... I think either or both of Sony/MS will go for a ~400mm² APU, and take the extra $10+$5 (die size + yield reduction) hit.
Either that, or their plans accounted for 7nm EUV from the start.
 
Last edited:
Next gen graphics

Consoles need to show a clear visual leap, even Cerny alluded to this
Current gen graphics at 4k wont cut it

DUV (390-400mm2)
54CUs @1520Mhz = 10.5TF (2080 tier) Very likely
54CUs @1592Mhz = 11TF (best case scenario) "dream"

EUV (350-360mm2)
60CUs @1600Mhz = 12.28TF (2080S+ tier) Very likely
60CUs @1693Mhz = 13TF "dream"

EUV (390-400mm2)
72CUs @1550Mhz = 14.2TF (2080Ti+ tier) "dream"

Either that or their plans accounted 7nm EUV from the start.

Not current gen at 4K, but if you want 60fps then yes, a little above current gen; you can get better graphics at 30fps. But understand that when he says "consoles need to show a clear visual leap", that's at resolutions lower than 4K. It can do 4K60, but he can't say "if you want to target 60fps at 4K you need to lower some settings" because that's bad marketing. I don't know why you excluded or forgot about Ryzen 7 and what it brings to the table; I posted above that it will add more fps and has 35+MB of cache. Having a game coded for a tablet-class Jaguar processor is not the same as Ryzen.
 

SonGoku

Member
not current gen at 4k but if you want 60 fps yes a little bit above current gen, you can get better graphics at 30fps,
Even at 30fps/4K the 5700 XT does not have the juice to produce the substantially better visuals required of a next-gen console.
At 1440p/30fps, 5700 XT performance will be enough to produce next-gen visuals, hence my comment that Sony/MS dug themselves into the 4K hole.

The expectation for next-gen consoles is 4K, just like 1080p was the expectation for the current gen.
i dont know why you excluded or forgot about Ryzen 7 and what it brings to the table, i posted above that it will add more fps and has 35+Mb cache
It will be a fantastic upgrade, but I was making a point about visuals, and besides, a powerful GPU would better match it.
 
Last edited:

Exentryk

Member
Note that 60fps is not as important this time around thanks to HDMI 2.1 VRR; 4K native at 45 or 50fps will still feel smooth.
For those who don't have HDMI 2.1 TVs, devs might add a 30fps lock option.
 
Last edited:

Shmunter

Member
Note that 60 fps is not as important this time around due to HDMI 2.1 (VRR). 4K native at 45 or 50 fps will be smooth.
For those that don't have HDMI 2.1 TVs, devs might add a 30 fps lock option perhaps.
Visually perhaps, but consistent input response is still highly desirable.
 

xool

Member
I just realised there's some obvious information in the table above: a 300mm-diameter wafer is ~70,000mm², and the price given is ~$10,000 per wafer, so that gives a cost per mm² of ~$0.14.

$14 for 100mm², $28 for 200mm², $42 for 300mm², $56 for 400mm². Interesting figures if correct.
[edit: the above figures include bad chips with defects; see below for yields]

Note the other table's data seems to be based on a die size of ~128mm² with a yield of 76% (price +33% per chip).

At ~400mm² that yield would be expected to extrapolate to ~43%, giving an additional cost increase per die of 1.75x. However, elsewhere yields have been stated to be ~85%; assuming the same die size, that extrapolates to a smaller cost increase at 400mm² of 1.38x.

[edit 2] My calculated APU BOMs are therefore: $42 at 200mm², $80 at 300mm², $132 at 400mm². (I think the initial PS4 APU cost was ~$100 and the initial PS3 Cell+RSX cost was $200+!)
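Those BOM figures can be reproduced with a short script: raw cost per mm² from the $10k wafer, a Poisson yield curve calibrated to the table's 76% yield at 128mm², and raw die cost divided by yield. The model and the calibration point are assumptions taken from this post, not foundry data.

```python
import math

WAFER_USD = 10_000
WAFER_MM2 = math.pi * 150 ** 2        # 300mm wafer ≈ 70,686 mm2
USD_PER_MM2 = WAFER_USD / WAFER_MM2   # ≈ $0.14 per (unyielded) mm2

# Defect density back-solved from the table's 76% yield at 128mm2
D = -math.log(0.76) / 128

def apu_cost(die_mm2):
    """Raw silicon cost divided by Poisson yield exp(-D * area)."""
    return die_mm2 * USD_PER_MM2 / math.exp(-D * die_mm2)

for size in (200, 300, 400):
    print(f"{size}mm2: ${apu_cost(size):.0f}")
```

This lands within a dollar or two of the $42/$80/$132 figures above; the small differences come from rounding the $0.14/mm² figure.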
 
Last edited:

mckmas8808

Banned
I just realised there's some obvious information in the table above: a 300mm-diameter wafer is ~70,000mm², and the price given is ~$10,000 per wafer, so that gives a cost per mm² of ~$0.14.

$14 for 100mm², $28 for 200mm², $42 for 300mm², $56 for 400mm². Interesting figures if correct.
[edit: the above figures include bad chips with defects; see below for yields]

Note the other table's data seems to be based on a die size of ~128mm² with a yield of 76% (price +33% per chip).

At ~400mm² that yield would be expected to extrapolate to ~43%, giving an additional cost increase per die of 1.75x. However, elsewhere yields have been stated to be ~85%; assuming the same die size, that extrapolates to a smaller cost increase at 400mm² of 1.38x.

[edit 2] My calculated APU BOMs are therefore: $42 at 200mm², $80 at 300mm², $132 at 400mm². (I think the initial PS4 APU cost was ~$100 and the initial PS3 Cell+RSX cost was $200+!)

$200 for the PS3 Cell+RSX is INSANE now that I have more information on how this stuff works. Sony just let Krazy Ken do whatever!
 