Hitting a locked 30 or 60 might be easier docked. Having adaptive refresh would be more beneficial on the go.
If there's no resolution difference, then I'd agree with you. If games are expected to run at higher resolutions while docked, though, it could potentially be the other way around if the performance jump isn't enough to accommodate.
It'd be nice if Nintendo also supported adaptive-sync over HDMI for those of us who have our consoles hooked up to monitors, but I somehow doubt they'll be supporting AMD's open standard with Nvidia on board.
For the 7.9" iPad Mini, which is much closer in size to the ~6" Switch screen, the Retina display comes in at 326 PPI. That's roughly a 30% difference, which is neither within spitting distance nor "slightly lower".
The iPad Mini doesn't need to be 326 PPI, though. They only arrived at that density because of the way the OS requires 2x resolution jumps for high-PPI scaling (and the previous iPad Mini was 163 PPI). Had iOS handled scaling differently I'm sure they would have stuck with the same 220-264 PPI range that all of their tablets and laptops lie in.
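If anyone wants to sanity-check the PPI figures, it's a one-liner (the `ppi` helper and the 6" screen size are just illustrative; Apple's quoted 326 comes out slightly higher than the raw maths due to rounding):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    # Pixels per inch = diagonal resolution / diagonal screen size.
    return math.hypot(width_px, height_px) / diagonal_inches

# Retina iPad Mini: 2048x1536 at 7.9" (Apple quotes 326 PPI)
print(round(ppi(2048, 1536, 7.9)))  # 324
# A 6" 720p screen, as discussed above
print(round(ppi(1280, 720, 6.0)))   # 245
```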
Nothing that I can find easily; it was something I heard asked of an engineer on a stream (I don't remember where, as it was a while ago). I don't know how significant the differences are, though; I got the impression that it's mostly all there, and that it was more a case of features lagging behind than outright impossibility.
I'm not sure how much lagging features would have to do with FP16, though. In general, an algorithm is going to fall into one of three categories when you run it at a lower precision:
1 - Is completely unaffected by the drop in precision, and always produces precisely the same output as it would at a higher precision.
2 - The drop in precision can cause small to medium-sized errors in the output compared to higher precision. This can range from occasionally flipping the lowest bit (which wouldn't be noticeable in a real-time rendering environment) to frequent errors in higher bits, which can cause visible rendering errors, such as banding.
3 - Is fundamentally unstable at the lower precision, producing results which are totally unrelated to what would be produced at a higher precision.
Any given graphical technique performed in shaders on a GPU is going to fall into one of those categories*. Effectively, it's either going to work at FP16 or it's not, and in most cases this is a characteristic of the algorithm itself rather than of its implementation in code (i.e. it's not simply a matter of tweaking the code to get it working at FP16). In theory you may be able to substitute another algorithm which produces the same result but remains stable at FP16, typically at the expense of some performance. This won't always be possible, however (for example, graphical techniques which attempt to emulate some physical phenomenon, such as PBR, may be constrained by the workings of that phenomenon itself).
To be honest, though, I have no idea to what extent any of this affects engine programmers attempting to optimise for a platform with good FP16 performance. I have no doubt there are quite a few people out there who have worked hard to figure out which techniques work well in FP16 and which don't, but unfortunately none of them seem interested in giving a GDC talk on it. At this point I'm half tempted to write an FP16 emulation library and start testing stuff myself.
* Although I very much doubt any fall into the third category. It's more of an issue for scientific simulations (which, incidentally, is one of the reasons why FP64 is so important for HPC cards).
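In lieu of a proper emulation library, numpy's float16 type is enough to prototype that kind of testing, since it rounds results to half precision much like FP16 shader ALUs would. A minimal sketch, using a clamped Lambertian diffuse term as the algorithm under test (the setup and names here are mine, purely illustrative):

```python
import numpy as np

def lambert(normals, light, dtype):
    # Clamped dot(N, L) diffuse term, evaluated at the given precision.
    n = normals.astype(dtype)
    l = light.astype(dtype)
    d = (n * l).sum(axis=-1)
    return np.clip(d, 0, 1).astype(dtype)

rng = np.random.default_rng(0)
normals = rng.normal(size=(10000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.array([0.0, 1.0, 0.0])  # light pointing straight down the y axis

ref  = lambert(normals, light, np.float32)
half = lambert(normals, light, np.float16).astype(np.float32)
max_err = np.abs(ref - half).max()
# Worst-case error sits well under one 8-bit quantisation step (1/255 ~ 0.004),
# i.e. the benign end of category 2.
print(max_err)
```

The same harness can be pointed at any candidate technique to see which category it lands in before committing to an FP16 code path.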
Pretty much all GPUs these days are bandwidth constrained relative to compute, especially ones that have over 5x the compute power of consoles and less than twice the bandwidth. Compute scales faster than bandwidth, and it's a problem. See 4K and VR if you think this is merely academic as opposed to something people in the industry take seriously.
I don't doubt that there are certain situations where Pascal GPUs can be bandwidth constrained, and 4K and VR can certainly represent issues when running bandwidth-intensive engines.
But my point wasn't about Pascal GPUs sometimes being bandwidth constrained in specific situations. My point arose because an "insider" on Anandtech forums claimed (and I'm paraphrasing here) that "because Switch has 1/6th the bandwidth of XBO, therefore it's limited to 1/6th the performance". It's a ludicrous statement, and I was pointing out how ludicrous it is by showing that, by his own logic, all Pascal GPUs would be "horrifically, cripplingly bandwidth constrained". They're not. If they manage to get the performance they do while 80% of their ALU logic is sitting idle (which is what he's implying with the bandwidth comparisons), then Nvidia are capable of straight-up witchcraft.
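To put rough numbers on that (launch-era public specs, TFLOPS at boost clocks; treat these as ballpark figures, not gospel):

```python
# FLOPS available per byte of bandwidth: if the "1/6th the bandwidth means
# 1/6th the performance" logic held, the card with the higher ratio would be
# the one leaving most of its ALUs idle.
specs = {  # name: (TFLOPS, GB/s), approximate public figures
    "PS4":      (1.84, 176.0),
    "GTX 1080": (8.87, 320.0),
}
ratios = {name: tf * 1000 / bw for name, (tf, bw) in specs.items()}
for name, r in ratios.items():
    print(f"{name}: {r:.1f} GFLOPS per GB/s")
# The 1080 carries roughly 2.7x the compute per unit of bandwidth, yet it
# plainly isn't leaving ~60% of its shader throughput on the table.
```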
As for the XBO vs TX1 situation, Nvidia's bandwidth optimisations aren't going to net significant enough performance increases to offset only having 25.6GB/s of memory bandwidth.
I do see a lot of people engaging in insane amounts of fudge factoring without a single piece of evidence though. Apparently fp16, color compression, "NV FLOPS", and tiled rasterization somehow mean the TX1 level hardware likely to be in the NS is somehow a portable XBO or PS4.
But we don't know if it's TX1 level hardware. It could be around that, or it could be above, or it could be below. We also have no idea whether it has 25GB/s of bandwidth; it could well be 50GB/s, or
it could even theoretically be 256GB/s. Yeah, that last one is extremely unlikely, but my point is that, aside from using a TX1 in dev kits when that was literally the only off-the-shelf chip available to them, we don't really have any reliable, specific information on either the performance level or memory system of Switch, so any claims that it's going to be bandwidth constrained are just being plucked from thin air.
The original semiaccurate.com leak that pointed us in the direction of Nvidia stated that they were pretty distraught over losing the console bids for the PS4/Xbox One and were willing to give Nintendo a really strong deal in terms of both software support and hardware to make it work. It sounded kind of unbelievable at the time but given that the rest of the rumor came true I'm not sure what to believe.
Shield TV was 5-20w with a Maxwell Chip. Pascal will hopefully be less on both the low and high ends. We don't really know until we have a better idea of the internals.
Charlie from Semiaccurate isn't known for being particularly favourable to Nvidia, though, so it's possible that his "taking a loss on the sale" was just an exaggeration of them accepting much lower margins, closer to the ~15% AMD makes from their console business.
Does anyone know of good resources to learn more about this highly technical stuff? Not necessarily limited to Switch, but technical gaming discussion in general? All I can think of are these kinds of GAF threads and Digital Foundry.
Anandtech and Ars Technica frequently have good articles on hardware. For games-specific stuff, GDC talks are also a great resource. They're all up for free online a few months after the conference, and you'll find that lots of the talks about engine optimisation will give you good insight into how hardware decisions affect game developers.