That's more or less the macro way I've been imagining things going with the supplement patent, if implemented in practice.
Apropos, speaking of connection technologies, we can differentiate between two major scenarios: local (namely core unit and supplement over a PtP connection in the same room or apartment - what Thraktor talks about above) and not-so-local/remote (core unit and supplement over a reasonably close TCP/IP network - same neighbourhood or at least same city). In the first scenario, BW-wise, anything that does 1Gb/s (aka 120-ish MB/s) is already perfectly usable for very substantial offloading of work to the supplement, and the latency is, for lack of a better term, ideal. Remember we're talking about compact buffers - mostly render targets - and sync messages flying back and forth during the frame duration, not raw assets. In the second scenario, what can be offloaded drops substantially, thanks to both the BW reduction and, most importantly, the latency hike. BW goes down to a few (perhaps ten) MB/s (which is still not bad), but latency starts requiring some sort of heuristics and robust contingency planning - think netcode in fighting/racing multiplayer games. Now, before you start wondering what good that second scenario could be, here's a rudimentary example: the supplement on a 'remote' connection can compute a fully-realtime texture for the skybox in a game, send it over in motion-frame-compressed format, not unlike how you watch youtube, and let the core unit just paint it over the skybox.
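To put rough numbers on the two scenarios - link speeds and frame rate here are my own illustrative assumptions, nothing from the patent:

```python
def per_frame_budget_bytes(bandwidth_bits_per_s, fps):
    """Rough per-frame transfer budget, ignoring protocol overhead."""
    return bandwidth_bits_per_s / 8 / fps

# Local PtP link: ~1 Gb/s at 60 fps
local = per_frame_budget_bytes(1e9, 60)      # ~2 MB per frame
# Remote link: ~10 MB/s (80 Mb/s) at 60 fps
remote = per_frame_budget_bytes(80e6, 60)    # ~167 KB per frame

print(f"local:  {local / 1e6:.2f} MB/frame")
print(f"remote: {remote / 1e3:.0f} KB/frame")
```

Two MB per frame is plenty for a handful of render targets; 167 KB per frame is where you start thinking in terms of compressed textures and small messages.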
I was actually thinking about a system which does both (although reading back over my post it seems I never actually said this), and would see it as a form of competition to Vita's ability to stream from the PS4 over local wifi or further afield. Of course, I'm imagining Nintendo would have higher standards of usability than that.
The local scenario presents quite a lot of possibilities. Even over a standard home wifi network you should be able to squeeze sufficient bandwidth and sufficiently low (and relatively stable) latency to do some pretty interesting things. The problem is that, of the plausible functionality which I can think of in that scenario, almost none of it translates at all to the low-bandwidth, high-latency scenario. For that, you'd need to look at elements of a game which satisfy all of the following:
1. Is computationally non-trivial
(otherwise you could just do it locally)
2. Reacts to the player's actions
(otherwise you could pre-compute)
3. But doesn't need to react too quickly
(needs to accommodate added latency)
4. Involves small data transfers each way
(needs to accommodate low bandwidth)
5. Should be able to be implemented in a simplified form on the handheld
(or otherwise needs to accommodate the case where the signal drops)
The reason I gave the example of AI is not just that it satisfies the above, but that implementing it over a variable-latency connection is a relatively solved problem: just take the netcode you mention, but instead of a player at the other end there's an NX home console running some AI code. The problem with improved AI as the main use case, though, is that in general the improvement just isn't going to be all that noticeable, even going from a handheld to a home console (unless you're simulating large crowds where a handheld CPU would simply choke, but then you run into a problem with 5 above), so it's hard to sell the whole technology on that basis.
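As a sketch of what that netcode-style approach could look like - the class name, timeout value, and the local-fallback idea are all my own illustration, not any real API:

```python
class RemoteAISession:
    """Toy sketch: request AI decisions from a remote console, but keep
    a cheap local fallback so a slow or dropped link never stalls a frame."""

    def __init__(self, timeout_s=0.1):
        self.timeout_s = timeout_s
        self.pending_since = None   # when we last asked the remote side
        self.remote_plan = None     # last plan the remote side sent back

    def request(self, now):
        self.pending_since = now

    def on_remote_reply(self, plan):
        self.remote_plan = plan
        self.pending_since = None

    def decide(self, world_snapshot, now, local_fallback):
        # If the round trip is taking too long, stop waiting for it.
        if self.pending_since is not None and now - self.pending_since > self.timeout_s:
            self.pending_since = None
        # Act on the freshest remote plan we have, else degrade gracefully.
        if self.remote_plan is not None:
            return self.remote_plan
        return local_fallback(world_snapshot)
```

The point is just the shape of it: the handheld never blocks on the network - it acts on the freshest remote plan it has, and falls back to simplified local AI (point 5 above) whenever the link is slow or gone.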
There are much more visible, and hence more sellable, options once you restrict yourself to a home network, I'll admit. For instance, you could have the home NX dynamically calculate lightmaps for the game's environment, providing high-quality real time environmental lighting while leaving the handheld to only have to handle lighting and shadowing for the player character and other dynamic elements. It wouldn't be quite trivial to implement, but it would be doable for talented engineers, and would be impressive enough in use to be a selling point for the technology.
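Some back-of-the-envelope numbers for that lightmap idea - atlas size, compression ratio and refresh rate are all my guesses for illustration:

```python
# Rough payload estimate for streaming an updated lightmap atlas.
TEXELS = 1024 * 1024          # one 1K lightmap atlas
BYTES_PER_TEXEL = 0.5         # block-compressed (BC1-style)
UPDATES_PER_S = 10            # environmental lighting needn't refresh at 60 Hz

payload = TEXELS * BYTES_PER_TEXEL          # ~0.5 MB per update
stream = payload * UPDATES_PER_S / 1e6      # ~5.2 MB/s sustained
print(f"{stream:.1f} MB/s")
```

Around 5 MB/s sustained is comfortably within home wifi throughput, and because the lighting can lag the scene by a frame or two without anyone noticing, the latency requirement is forgiving too.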
Trying to think up use cases for this beyond the home network becomes a lot more challenging. The best I can think of is fluid simulation. It's very computationally expensive (and can be pretty impressive when you dedicate an entire GPU to the purpose), but requires relatively little data flow back and forth (basically you just need to send the surface mesh and the forces on physics objects it interacts with). The problem is that I can't think of any scenario where you want fluid simulation but don't need it to react instantaneously to the player's actions. If you're playing Wave Race NX, for example, you'll want the water to be displaced immediately by your jet-ski, not after you've already sped over it.
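For what it's worth, rough numbers bear this out - the data flow fits easily within the remote budget, but the latency is the killer (grid size and RTT below are illustrative assumptions):

```python
# Remote fluid sim: bandwidth is fine, latency isn't.
GRID = 128 * 128              # heightfield samples for the surface mesh
BYTES_PER_SAMPLE = 2          # 16-bit height values
FPS = 60

bandwidth = GRID * BYTES_PER_SAMPLE * FPS / 1e6   # ~2.0 MB/s: no problem
rtt_ms = 80                                       # plausible city-scale round trip
frames_late = rtt_ms / (1000 / FPS)               # ~4.8 frames behind
print(f"{bandwidth:.1f} MB/s, {frames_late:.1f} frames of lag")
```

Five frames between your jet-ski hitting the water and the splash appearing is exactly the kind of thing players notice immediately.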