What? They quite obviously are.
Care to provide any evidence that Pascal cards are horrifically bandwidth constrained? (At the typical resolutions people would be expected to use them, at least).
Also there's a huge difference between having 25GB/s of total memory bandwidth and 150GB/s in absolute terms. Bandwidth per pixel is a much more important metric than bandwidth per compute, since the latter is destined to decrease faster and image quality is about work done per pixel. At 900p30 an XBO has to render 43,200,000 pixels per second. That's less than twice as many as the 27,648,000 pixels per second NS would have to render at 720p30, and the XBO has 6x the memory bandwidth.
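To put rough numbers on that, here's the back-of-the-envelope maths as a quick Python sketch, using the ballpark bandwidth figures quoted in this thread rather than official specs:

```python
# Back-of-the-envelope bandwidth-per-pixel comparison.
# Bandwidth figures are the rough ones quoted in this thread, not official specs.

def pixels_per_second(width, height, fps):
    return width * height * fps

xbo_px = pixels_per_second(1600, 900, 30)   # 43,200,000 px/s at 900p30
ns_px  = pixels_per_second(1280, 720, 30)   # 27,648,000 px/s at 720p30

xbo_bw = 150e9   # ~150 GB/s
ns_bw  = 25e9    # ~25 GB/s

print(xbo_px / ns_px)                         # ~1.56x the pixels per second
print((xbo_bw / xbo_px) / (ns_bw / ns_px))    # ~3.8x the bytes of bandwidth per pixel
```

So even normalised per pixel the XBO keeps a sizeable raw bandwidth advantage; the question is how much of that gap the architectural differences discussed below actually close.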
Yes, bandwidth per compute is a silly metric; that was my point (as that's effectively the way the "insider" on the Anandtech forums was calculating things). Bandwidth per pixel is a fairer comparison,
if the devices you're comparing use the same graphics architecture. The bandwidth consumption of a GCN 1.0 era GPU accessing an uncompressed buffer using immediate-mode rasterisation will be very different from a Pascal GPU accessing a compressed buffer using tile-based rasterisation (especially if we don't know the cache configuration of the latter).
The mobile code paths of UE4 do give up features present in the main branch in order to accommodate FP16. For mobile games it is a good trade-off, but fully featured games probably wouldn't consider it.
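Not UE4 source, but here's a quick Python/numpy sketch of the kind of precision loss FP16 implies, which is why anything spanning a large numeric range (world-space positions, large UV tiling, depth) tends to be kept in full precision:

```python
import numpy as np

# float16 has a 10-bit mantissa, i.e. roughly 3 significant decimal digits.

print(np.float16(1024.37))   # -> 1024.0, the fractional part is lost entirely
print(np.float16(2049.0))    # -> 2048.0, above 2048 even odd integers can't be represented

# Small normalised values (e.g. colours in [0, 1]) fare much better:
print(np.float16(0.7843))    # ~0.7842, plenty for an 8-bit colour channel
```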
Do you have any sources on features being dropped from mobile specifically due to the lack of precision in FP16? (I'm not doubting that such features exist; I've just had a difficult time finding reliable info on this kind of thing online.)
Would a custom X1 really even be a realistic option for the retail unit, though?
Who would produce these chips? Over the past few years there have been plenty of reports about most TSMC customers skipping 20nm entirely in favour of 16nm.
We have this article from January:
http://www.extremetech.com/computin...-10nm-production-this-year-claims-5nm-by-2020
Plus:
http://www.tsmc.com/uploadfile/ir/quarterly/2016/3tBfm/E/TSMC 3Q16 transcript.pdf
Going through TSMC's earnings reports and calls, the 20nm segment barely even gets mentioned, and when it does, it's bundled in with 16nm.
With mid-range smartphone SoCs jumping straight from 28nm to 16/14nm, I think it's safe to say that 20nm is just straight up uneconomical at this point, and if there's any capacity left it's being used for legacy products (Apple still sells some devices that use the A8, for example). The only possible reason Switch would use a 20nm chip would be if Nvidia had entered into a large wafer commitment with TSMC for 20nm and offered Nintendo an obscenely good deal in order to use it up, but I very much doubt that that's the case.
To preempt Hoo-doo: yes, I know that Nintendo didn't follow the logical path with the Wii U and went for an old fab process, but that did cause them a lot of trouble later in the Wii U's life, so one would hope they won't ignore the fact that 20nm will be all but dead in the near future.
It's worth keeping in mind that Nintendo's decision to use eDRAM on Wii U's GPU prevented them from using 28nm, even if they had wanted to.
Preempting the counterpoint doesn't invalidate it. AFAIK they also produced the Wii at 90nm for its entire life, never shrinking to 65nm or 45nm. When there's a history of illogical HW decisions, "potentially creating trouble" isn't a sufficient argument against one.
I don't think this is true at all. While nobody (to my knowledge) has actually decapped Wii CPUs & GPUs from across the production timeline, later models definitely had smaller packages, required smaller heatsinks and consumed less power, all of which strongly indicates that they performed die shrinks just like MS and Sony did.
An audience of people who, thanks to Apple's brilliant marketing of the Retina display many years ago, care more about resolution and clarity of the display than ever before.
A 6" 720p display has a PPI greater than any of Apple's Retina Macs and only slightly lower than their Retina iPads. Let's not pretend that this is going to be a pixelated mess in comparison.