
Nintendo files patent application for cloud gaming devices

Going by the OP's console example at the end, it sounds like a handheld (as others have said) that provides gaming on the go and connects via a local cloud to a box at home that boosts graphics/capabilities and lets you play on an HD TV (or other display). And if you don't have the handheld, just the box, your tablet, phone, or other device will connect to it and let you play. If that's the case, it sounds awesome, and I wonder if it would let you game via your phone while not local by using your provider? I wonder if Apple would have a cow?
I imagine this "cloud box" will be their console, since it looks like their handhelds and consoles will have a similar game library. I suspect they'll also try, in various other ways, to encourage people to buy both their handheld and console.
 

Thraktor

Member
Nice thoughts, Thraktor. I've also thought that this would be something they might choose to launch a year or two after the initial NX hardware, as the console itself will already be an investment for consumers. Also, what kind of message would it send if Nintendo's new hardware already needed a 32X-esque add-on right out of the gate? At the same time, I could see them marketing it partially as an external HDD/NAS, especially if they end up doing two NX console SKUs (one with an optical drive and one with an internal HDD), as that previous patent implied. In that case, it might make sense to release such a device sooner, if only to supplement the optical drive SKU's storage.

Well, I think the important thing is that Nintendo, by and large, wouldn't be selling the actual NX+ hardware itself; they'd be selling the free streaming service that runs on the existing NX. The sell-through rate of NX+'s may only need to be in the ~20% range to give the necessary quality and availability of service, so to 80% of people they'd be saying "You don't need to buy anything new, you can play all these shiny new games over the internet on your existing $149 console without spending a penny extra". To the other 20% it would be a matter of "if you want to buy the new box, we'll give you these rewards for doing so".

Good question. I suppose it could just pause the game, like when the Wii U Gamepad gets moved out of range, if a disconnect happens unexpectedly. Perhaps it could automatically be set to connect to the next closest available SCD if the device you're using has a specific timer associated with its use, or, in the case of a mobile device, if it senses that you're moving out of range.

Well, it would have to move over to another NX+, but if it forced a long pause in the game while it saves the game on one NX+, transmits the save and then loads it up on the other NX+, it would be rather annoying to the player, particularly if they're in the middle of a boss fight and then suddenly out of nowhere there's a minute-long loading screen.

In a purely theoretical environment, the switchover could be seamless: transfer the entire game state to the new NX+, run the game in parallel on both units briefly, and then switch the stream once they're in sync. In practice, though, the "game state" can include everything in RAM (plus CPU registers, GPU memory pools, etc.), and if you're looking at PS5-competitive hardware there may be up to 64GB of RAM in there, which would take over 28 hours to transfer on a 5Mb/s upload. In theory they could mitigate this by forcing games to be able to serve up a "minimal game state" on demand, and by implementing a "speedup mode" to allow the second NX+ to catch up to the first (otherwise it would be behind by however long the transfer took).
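For a sanity check on that figure, here's the arithmetic, using the 64GB and 5Mb/s numbers above (both are assumptions, not specs):

```python
# Back-of-the-envelope transfer time for a full game-state handoff.
# 64GB of state and a 5Mb/s upload are the assumed figures from above.

ram_gb = 64                         # assumed RAM + GPU pools to transfer
upload_mbps = 5                     # assumed residential upload speed

megabits = ram_gb * 8 * 1000        # GB -> gigabits -> megabits (decimal)
hours = megabits / upload_mbps / 3600
print(f"{hours:.1f} hours")         # → 28.4 hours
```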

Great analysis. One question about the ratio of NX+ to NX: let's say Nintendo wants content streaming at launch and doesn't want to wait for an NX+ launch to implement it, but they also don't want to spend on a server farm (or at least not on the standard definition of a server farm).

Could they turn whatever in-store NX kiosks they have planned into supercharged NX servers of sorts? If we assume that pretty much every Target and Walmart in the USA is going to have one of these kiosks, and we have 4,177 Walmart stores and 1,789 Target stores, that's 5,966 kiosks available to act as dedicated servers in the US alone (plus whatever superstores I am forgetting... maybe Toys R Us, Sears, and Kmart depending), and they'll already be in strategic locations around the country.

Would this be enough to get things started if the system sold, let's say, three million units in the first six months of availability? And if not, what kind of ratio are we talking about to make this feasible?
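For what it's worth, a quick division using the store counts above (the three-million figure is hypothetical):

```python
# Rough consoles-per-kiosk ratio from the store counts quoted above.
kiosks = 4177 + 1789               # Walmart + Target stores, US only
consoles_sold = 3_000_000          # hypothetical first-six-months sales

per_kiosk = consoles_sold / kiosks
print(kiosks, round(per_kiosk))    # → 5966 503
```

So each kiosk would need to serve streams for roughly 500 consoles, before even accounting for what fraction of owners actually stream at once.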

If it could work, it seems to me the cheapest way to go about things, since multi-year distribution deals are already set up with those retail partners, and Nintendo already has their army of reps and technicians in place to deal with any issues they'd have with kiosk maintenance.

But if the NX+ software is ready, and the NX+ hardware is ready, why not just start selling NX+'s? In theory they could implement this to help smooth the launch over a little (as that's when there will be fewest NX+'s in the wild and streaming demand will be relatively high), but in the long run the sell-through of NX+ units shouldn't need to be that high to give good coverage.

Interesting points. There's one problem you didn't address, though, and that is getting the game to the source NX. If you are relying on the person having bought the game in the first place, you are cutting down the number of available machines considerably. You would also need to have it stored digitally, because no one's going to be able to change discs, and therefore you would need so much storage it'd be impractical.

Thanks for mentioning this, I actually thought about it and forgot to put it in the post. My thinking is that the NX+ would be sold as having 1TB (say) of storage, but would actually have a 2TB drive inside. The owner would be able to fill their 1TB with their own games, and the other half of the space would be filled with games for streaming to other players, based on projected demand in the area. The NX+ wouldn't have to have every game available for streaming on it, just a selection, so long as there are enough NX+'s with a given game in a given location to meet demand (i.e. Zelda might be loaded on 80% of NX+ units to accommodate the number of people who will want to play it, while a more obscure title may only be loaded onto 20% of NX+ units).
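As a toy sketch of what that demand-weighted stocking might look like (the catalogue and the percentages are invented for illustration):

```python
import random

# Toy sketch: preload the "streaming half" of each NX+ drive so popular
# games land on more units. Titles and demand shares are invented.
CATALOGUE = {
    "Zelda": 0.80,        # wanted by many players -> on 80% of units
    "Mario Kart": 0.60,
    "Obscure RPG": 0.20,  # niche title -> on 20% of units
}

def stock_unit(rng):
    """Decide which games one NX+ unit carries for streaming."""
    return {game for game, share in CATALOGUE.items() if rng.random() < share}

rng = random.Random(42)
units = [stock_unit(rng) for _ in range(10_000)]
zelda_share = sum("Zelda" in u for u in units) / len(units)
print(f"Zelda preloaded on {zelda_share:.0%} of units")  # close to 80%
```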

There's also the issue of data caps limiting the number of available systems. I basically think it'd be unworkable at present as an internet-based system. However, within your subnet I can see it working to some degree. I haven't read the entire patent yet, but it seems like consoles and SCDs can process data requests from other consoles, with SCDs being limited by having no audio, controller, etc. So you could have one in the bedroom, one in the lounge, and use resources from each other if needed. I would still be concerned with latency and bandwidth, but if you're on, say, a wired network with 1-2ms latency, that's up to 4ms both ways; so long as the other machine can render, encode and get the data back to you within ~14ms (or about 28ms at 30fps) you might be able to use it as a rendering platform, à la Wii U → GamePad.

It's all very interesting nonetheless.

Well, as I mentioned above, the NX+ sell-through rate wouldn't need to be that high, so it would really be mainly aimed at those with decent upload speeds and without data caps (or with very high ones). In addition people who want to own an NX+ but don't have a suitable connection could just opt-out of the streaming service.

Bandwidth wouldn't be a particular issue: a ~5-10Mb/s H.265 stream should give decent quality at 1080p, which shouldn't be a huge challenge for 20% of NX customers to provide 3-4 years from now.

Latency would be an issue, but far from an insurmountable one. There are two issues regarding latency when streaming games. The first is the overall latency, and the second is the variability of the latency (i.e. the extent to which latency changes during play).

First, a few comparisons of typical latency values, using a "thumb to eye" measure (i.e. the time between you pressing a button with your thumb and seeing the effect of that on the screen with your eye, assuming the game polls input immediately before rendering a frame).

12ms - A PC game running at 120fps on a high-end monitor with a 4ms response time
27ms - A PC game running at 60fps on a standard monitor with a 10ms response time
33ms - A 60fps Wii U game running on the gamepad (16.7ms render + 16.7ms display latency)
50ms - A 30fps Wii U game running on the gamepad (33.3ms render + 16.7ms display latency)
57ms - A 60fps console game running on a TV with relatively good latency (40ms)
73ms - A 30fps console game running on a TV with relatively good latency (40ms)
167ms - A 60fps console game running on a TV with bad latency (150ms)
183ms - A 30fps console game running on a TV with bad latency (150ms)

(If you want vsync you can add up to 16.7ms to each of the above as well)
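Those figures all come from the same simple formula (one frame of render time plus the display's latency), so they can be reproduced mechanically:

```python
# Reproduce the "thumb to eye" table above: one frame of render time
# at the given framerate plus the display's own latency, in milliseconds.

def thumb_to_eye(fps, display_ms):
    return round(1000 / fps + display_ms)

print(thumb_to_eye(120, 4))     # 12  - 120fps, high-end monitor
print(thumb_to_eye(60, 10))     # 27  - 60fps, standard monitor
print(thumb_to_eye(60, 16.7))   # 33  - 60fps, Wii U gamepad
print(thumb_to_eye(30, 16.7))   # 50  - 30fps, Wii U gamepad
print(thumb_to_eye(60, 40))     # 57  - 60fps, good TV
print(thumb_to_eye(30, 40))     # 73  - 30fps, good TV
print(thumb_to_eye(60, 150))    # 167 - 60fps, bad TV
print(thumb_to_eye(30, 150))    # 183 - 30fps, bad TV
```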

It's worth noting that a lot of people play games on TVs that take as much as 150ms to display a frame after it's arrived in the HDMI port. That means that in a 60fps game what you're seeing is nine frames behind what's actually happening in the console.

Now, what latency would you be looking at for an NX+? (This is extremely hard to estimate accurately without a lot of hard data, but let's try a zeroth-order approximation.) For simplicity, let's assume everyone uses ADSL/VDSL, and the exchanges they connect to have 1,000 connections apiece, with each connection being used by two people. If Nintendo sells one million NX+ units in the US, then there's a 99.81% probability that at least one of the 1,998 other people on your exchange has an NX+ unit to stream from. So, it's safe to assume that you're streaming from someone on your exchange.
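That probability checks out if you assume the million units are spread independently over a US population of roughly 320 million:

```python
# Chance that at least one of the 1,998 other people on your exchange
# owns an NX+, assuming 1M units spread independently over ~320M people.

units, population, others = 1_000_000, 320_000_000, 1998

p_owner = units / population
p_at_least_one = 1 - (1 - p_owner) ** others
print(f"{p_at_least_one:.2%}")   # → 99.81%
```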

Latencies between two people on the same exchange can vary a lot, from as low as a 1ms roundtrip for an FTTH connection upwards, depending on line lengths, connection type, the quality of hardware in the exchange, etc. Let's say we're talking about a 20ms roundtrip maximum. We'll then assume that there's a bit of local latency, say 5ms on each end. Compression and decompression will then add a small amount, say 8ms. In total, then, our theoretical NX+ streaming service would add about 38ms to the total "thumb to eye" latency compared to playing locally. So, where we have a 57ms to 183ms range for playing consoles on TVs above, that range would instead be 95ms to 221ms, a 67% increase on the low end and a 21% increase on the high end.
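Summing those assumed components and applying them to the table's best and worst TV cases:

```python
# Add up the assumed streaming overheads and compare with the local
# thumb-to-eye figures from the table above.

added = 20 + 5 * 2 + 8     # roundtrip + 5ms local at each end + codec
print(added)               # → 38

for local_ms in (57, 183):           # best/worst console-on-TV cases
    streamed = local_ms + added
    print(streamed, f"+{streamed / local_ms - 1:.0%}")
# 95 +67%
# 221 +21%
```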

Would this be as good as playing locally? Obviously not. However, would it be "good enough" for most people? I'd actually say it would be. A huge number of people are used to playing with 100-200ms of latency, and simply don't know that there's an issue. The human brain is pretty good at adapting to delays between actions and results, so long as those delays are predictable.

Which brings us to the variability of the latency. As I said, the brain is pretty good at dealing with X milliseconds of delay so long as X milliseconds stays X milliseconds. Once the latency starts to waver, though, playing becomes increasingly frustrating, as the part of your brain which had just wired itself to perfectly time the button press to send Mario jumping over the lava is suddenly out of sync, causing him to die a fiery death when your brain is telling you you nailed the jump.

This is a big issue for streaming games (or at least has been for me in my experiences with it). Thinking about the NX+ situation above, when your router connects to another router on the same exchange, there are maybe two physical boxes that the packets travel through to be routed from his house to yours. When a game is streamed from a server farm, the number of routing devices it's running through could be up to 20, each of which is adding to the total latency, bit by bit. More importantly, though, those 20 boxes won't be the same for every frame, or even every packet. Each packet that's sent will be routed according to algorithms which, by and large, aim to maximise aggregate bandwidth, rather than minimise typical latency. One second your stream may be running along a relatively low-latency path, and the next second 20% of the packets start getting diverted along a less-congested route which also happens to increase the latency considerably. This is a big difficulty for services like PSNow, and this kind of latency variability is just something they're going to have to live with, unless they somehow figure out how to get a server farm attached to every ADSL exchange in the country...

Streaming from someone on the same exchange as you isn't going to completely eliminate latency variability, but it would reduce it substantially. For one thing, your path is identical for every packet. The routers your packets are going through are still going to add a little latency variability each, depending on demand, but the total latency variability will be pretty low, maybe 2-3ms.

Now, what do you do if you've got a 2-3ms variability in your thumb-to-eye latency and you want to eliminate that? You implement vsync! Vsync is basically just a means of adding latency to an image so that it gets drawn to screen exactly when the screen refresh occurs. With the NX+ they'd use this latency to hide the natural variability of the network latency, so instead of having a latency which varies from, say, 36ms to 40ms, you may have a constant 43ms latency. This does add a little latency to the proceedings, but no more than vsync already does in a traditional console environment.
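A minimal sketch of that smoothing idea, assuming we pick a fixed 43ms presentation deadline that exceeds the worst expected network latency (all timings invented):

```python
# Sketch of the vsync-style smoothing described above: hold each frame
# until a fixed deadline after it was sent, so that 36-40ms of jittery
# network latency is presented as a constant 43ms. Timings are invented.

TARGET_MS = 43

def present_at(sent_ms, arrived_ms):
    """Show the frame at sent + TARGET unless it arrived even later."""
    deadline = sent_ms + TARGET_MS
    return max(deadline, arrived_ms)   # a late frame still slips through

for i, network_ms in enumerate([36, 40, 37, 39]):  # jittery arrivals
    sent = i * 16.7                                # one frame per 16.7ms
    seen = present_at(sent, sent + network_ms) - sent
    print(f"{seen:.0f}ms")                         # → 43ms every frame
```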

I'd be very confident in Nintendo's ability to achieve zero-variability latency for the player in an environment with low-variability video streaming, mainly because they've already done it. The Wii U gamepad hits 16.7ms latency like clockwork, and it's clear that Nintendo placed a lot of emphasis on this when developing the console, given the custom compression and streaming hardware they developed for it.

This has been pretty long, so here's the gist:

tldr: Nintendo should be able to achieve a streaming service with "good enough" latency and zero latency variability, even with a relatively small install base of NX+ units.

And there would have to be some kind of downside to letting someone use your personal resources, otherwise they probably wouldn't offer rewards for doing it.

Not necessarily. The main downside to the user is that sharing the SCD over the Internet will eat into your bandwidth and electricity bill. This distributed cloud compute network is a clever idea, but in order for it to reach its full potential, hundreds of thousands, if not millions, of SCD owners will have to commit to making their unit available as much as possible (without it eating into their own gaming time). It only makes sense for Nintendo to offer rewards to keep people engaged in the initiative.

Well, the main downside would really be that you'd have to spend several hundred dollars on an NX+, whereas you could play the same games on your existing NX for free by streaming them.

The word supplemental has been floating around in my head for a few days now. I'm wondering if I've been looking at this all wrong, and the SCD is a PowerPC device that locally gives access to the legacy Wii U/Wii/GC game library, and also offers some cloud features like game streaming to the NX handheld, or whatever cloud processing Nintendo has in mind for their new online infrastructure. The Wii U SoC is probably pretty cheap these days, and it would allow the NX to "absorb" the Wii U architecture as Iwata once described.

That's an interesting alternative take on it, although if the WiiU SoC is so cheap, why not include it in the console in the first place?

I suppose they could sell it as a storage expansion which also gives you BC, if they throw a decent size drive in there.
 

Dirtie

Member
Just sounds like the game equivalent of a local media server to me, with some cloud/P2P sharing capability.
 

AmyS

Member
ExtremeTech's take on the patent:

Nintendo’s hypothetical console can connect to multiple supplemental devices, measure their latency and performance characteristics, and assign appropriate workloads, all with the goal of improving primary console performance. These supplemental devices are shown as being wired directly to the primary console, as below, but the actual patent text makes it clear that supplemental devices could also connect via Wi-Fi or Bluetooth.

Cluster-based console gaming?

The system Nintendo describes in its patent application sounds more like a compute cluster than a traditional platform. Game streaming is still in its infancy, but virtually every major player has something in the works. The Wii U arguably pioneered local game streaming, though the feature didn’t drive sales the way motion controllers drove the original Wii. Today, Sony and Nvidia have PlayStation Now and GeForce Now, the PlayStation Vita can stream at least some PS4 titles, and the Xbox One can stream games to any Windows 10 PC running on a compatible network. Microsoft also has a cloud computing backend available for additional processing, though I’m not aware of any shipping titles that currently use the feature.

When the Wii U shipped, reviewers noted that the console’s unique controller also limited its overall performance. While the console could technically support more than one controller (albeit at a performance hit), in practice, Nintendo assumed a single Wii U gamepad. The “supplemental” devices that Nintendo’s patent contemplates don’t sap the performance of the console — they improve it.

This patent doesn’t clarify what kinds of scenarios Nintendo believes would be suitable for offloading to supplemental devices, the hardware involved in doing so, or how the company would compensate for the still-significant latency hit of performing calculations remotely. Presumably there would be a way for developers to specify which tasks could be farmed out to supplemental devices, while core gameplay ran natively. It’s also not clear if Nintendo envisions a system in which end-users purchase multiple devices simultaneously, or how much that hardware would cost. Consumers are used to buying consoles as a distinct unit, so selling them on the idea of supplemental hardware could be tricky.

On the other hand, clustering hardware together could offer an interesting way for gamers to invest more money in exchange for better overall performance. Both the Xbox One and PS4 have often struggled to maintain frame rates (the Xbox One has a larger overall problem, but neither console is guaranteed to be lag-free). I don’t know how many Sony or Microsoft fans would pay an additional premium to alleviate these issues and guarantee a smooth 30-60 FPS at 1080p, but we bet some would. Whether Nintendo can launch and ramp such an approach is open to debate — network-related features and online gaming are far from the company’s traditional strengths.

Nintendo has already said it expects to announce the NX console’s release date at next year’s E3, and that the new system will be a complete break from the Wii’s architecture. This last can’t come quickly enough —

http://www.extremetech.com/gaming/2...s-nx-console-is-like-nothing-weve-seen-before
 

Kimawolf

Member
So, the earlier rumors which seemed to conflict with each other perhaps don't? What if what was seen was indeed the base "NX" and one of the more powerful SCDs? That would explain the power differences.
 
lol here we go, another season of listening to idiots hype up the secret power sauce of "the cloud" only for it to not really go anywhere.
 

Kimawolf

Member
lol here we go, another season of listening to idiots hype up the secret power sauce of "the cloud" only for it to not really go anywhere.

It's not a secret power of anything, actually. It's using wired and wireless devices in your own network or over the net. You should read more of the thread; it will be enlightening.
 

LordOfChaos

Member
Hm, so instead of a distant cloud-based system, it sounds like you'd have a "helper" system right in your house. Though this possibly means a stationary NX could assist an NX handheld in the same house.

But the further twist is that it sounds like there's a "sharing" system of resources: you give X compute hours, and you get something in return, such as points or the ability to have other people's systems assist yours.

Hmm.

So the supplemental device works to share processing resources over the cloud? Kinda like Sony promised w/ Cell and PS3? This patent is crazy...


Good old Crazy Ken. Cell in your fridge! Cell in your toaster! All assisting each other, household supercomputer!

Actually...That would have been sweet. But a development nightmare.

I wonder if the NX can get around that issue of unsure resources.
 

AzaK

Member
tldr: Nintendo should be able to achieve a streaming service with "good enough" latency and zero latency variability, even with a relatively small install base of NX+ units.

Thanks for the detailed reply. I see what you're trying to say, and I agree there could be some sort of solution that could technically work, but would it be worth it in the end for the amount of engineering and cost required to develop this part of the system? Ethernet-connected units and physically bus-connected ones, no worries; it'd probably be pretty good. But wireless, out onto the net, to the few machines around that have the necessary data and availability to process what you want, and that need to be turned on... it just feels like you'd be left with a subset of possible connections of less than 1 :)
 

heidern

Junior Member
This patent is device agnostic. It doesn't just apply to a theoretical NX console or handheld, it could also be used to power their mobile games. The supplementary processing devices are also able to be any computing device. So it could be a dedicated SCD, but even the NX console or handheld itself could act as an SCD. The patent also allows for non-dedicated hardware to be used as SCDs which means potentially PCs and Macs.

This generation both Nintendo and MS got burned trying to appeal to both core and casual. This patent seems to me to be Nintendo decoupling elements of the hardware so that they can appeal to everyone effectively. They could release a console for between $150-$250 with their innovation/gimmick to appeal to the casuals, and then have SCDs for $150-$500 (possibly even more) to appeal to the Sony/MS audience, as well as free online gaming. They could then release upgraded SCDs annually. It would be disruptive if they can do this. It would mean they kind of get a 2-3 year headstart over PS5/XB2, and it makes extended console generations no longer viable for anyone.

If they do allow open hardware like PCs to act as SCDs, then that would mean people with PCs, or living near people with PCs, could engage in high-end console gaming plus gimmicks for only the price of a base unit. In fact, with the local cloud or an SCD purchase, the NX would be more powerful than the PC for games, maybe a lot more powerful.
 

AzaK

Member
This patent is device agnostic. It doesn't just apply to a theoretical NX console or handheld, it could also be used to power their mobile games. The supplementary processing devices are also able to be any computing device. So it could be a dedicated SCD, but even the NX console or handheld itself could act as an SCD. The patent also allows for non-dedicated hardware to be used as SCDs which means potentially PCs and Macs.

This generation both Nintendo and MS got burned trying to appeal to both core and casual. This patent seems to me to be Nintendo decoupling elements of the hardware so that they can appeal to everyone effectively. They could release a console for between $150-$250 with their innovation/gimmick to appeal to the casuals, and then have SCDs for $150-$500 (possibly even more) to appeal to the Sony/MS audience, as well as free online gaming. They could then release upgraded SCDs annually. It would be disruptive if they can do this. It would mean they kind of get a 2-3 year headstart over PS5/XB2, and it makes extended console generations no longer viable for anyone.

If they do allow open hardware like PCs to act as SCDs, then that would mean people with PCs, or living near people with PCs, could engage in high-end console gaming plus gimmicks for only the price of a base unit. In fact, with the local cloud or an SCD purchase, the NX would be more powerful than the PC for games, maybe a lot more powerful.

Yup, there's certainly lots of potential in the system as outlined. As you implied, the handheld could be used out and about, and when you get home, all of a sudden the graphics quality improves because it's using an SCD.

I never thought about a PC or Mac as an SCD but of course the patent doesn't discount this. Not sure Nintendo would put a "game player" on a platform like that but man wouldn't it be great.
 

Terrell

Member
And people said there wasn't a way to make NX a PS4-level piece of hardware that also differentiated itself from the other devices in its category.

Hmmm.
 

E-phonk

Banned
I was wondering if this couldn't be for the new gamepad.

They could give it a small mobile chip that would make it strong enough to play (certain?) NX portable/virtual console games, but its additional power could also be used for the NX console. This would solve a few problems the current gamepad has (limited reach, the gamepad taking power away from the console, only one gamepad because of this, ...)

Cost-wise, mobile tech is so cheap they could fit a mid-range ARM chipset in each gamepad and still sell it for a reasonable price.
 

Wildean

Member
the handheld could be used out and about and when you get home, all of a sudden the graphic quality enhances because it uses an SCD.

What would be the point though? Sony have tried twice to bring home console style gaming to handhelds, and it hasn't proved popular. I don't see playing say Zelda U on the move with lower quality textures etc as a very marketable concept.
 

E-phonk

Banned
What would be the point though? Sony have tried twice to bring home console style gaming to handhelds, and it hasn't proved popular. I don't see playing say Zelda U on the move with lower quality textures etc as a very marketable concept.

A lot of the big 3DS releases are console-style games these days: Xenoblade, Zelda, Smash, Mario Kart, ...
 

Wildean

Member
A lot of the big 3DS releases are console-style games these days: Xenoblade, Zelda, Smash, Mario Kart, ...

It's true for the Zelda games, but I thought they were not that well suited to mobile play, as you really need to set at least 25 minutes aside for a go on OoT or MM. Was Xenoblade 3D really popular? Smash and Mario Kart are good for quick games on the move.

The whole concept just sounds like OffTV play to me: a nice feature, but not one that will sell a console.
 

Turrican3

Member
Reading Blu, Thraktor and Fourth Storm is truly fascinating.

But as a person who lives in Italy and is far too well aware of the *awful* state of the average Internet connection (and luckily we usually do NOT have to deal with caps) I really have to wonder: can you imagine Nintendo giving away their carefully crafted gameplay mechanics by forcing themselves to deal with a totally unpredictable variable like that?

I mean, they've been championing flawless 60fps experiences for more than a decade now; going back to 30fps or, even worse, jumping on the variable frame rate bandwagon seems too un-Nintendo to me.

To be honest, I can totally see them leveraging additional local computational resources to improve games, since fewer constraints would probably give them a relatively stable/predictable environment... but over the 'net? Seems quite unlikely, unless we're just talking about some kind of distributed stuff that has little to no impact on actual gameplay for both local and remote users (i.e. players notice no slowdowns, no tearing, no missing inputs, etc.)

ExtremeTech's take on the patent:

Nintendo has already said it expects to announce the NX console’s release date at next year’s E3
Has this changed very recently?!
Last I remember Nintendo's stance on the matter was a generic "NX is 2016 talk", nothing else.
 

LordOfChaos

Member
Guys, nothing indicates this patent could not also be used for the Wii U...

Err, and what, add a break out box for CPU/GPU boosts over...USB 2? Even Thunderbolt limits performance, let alone USB 3, let quadruply alone USB 2.

Besides... Nintendo is definitely getting ready to bury that, as much as they talk about wanting to please its buyers. There's not going to be any major addition to it like a buddy-system console, even if that was a possibility with its inputs. The patent also describes it with a physical connection, after all, which makes it more feasible than a cloud one.

Heck, over local wifi would be worse than USB 2, especially if you're doing two wireless hops, Buddy console to wifi, wifi to Wii U. You may see 9MB/s if you're lucky on wifi that way.
 

AzaK

Member
What would be the point though? Sony have tried twice to bring home console style gaming to handhelds, and it hasn't proved popular. I don't see playing say Zelda U on the move with lower quality textures etc as a very marketable concept.

I wouldn't suggest they market solely on that feature, and I wasn't suggesting console-like gaming on a handheld. They'd still be handheld games, designed for that platform, but they could leverage processing, storage, etc. from the SCDs. There would be more value in SCDs at home to expand the power of the console, but if the system is generic enough it could work for any device.
 

AzaK

Member
Err, and what, add a break out box for CPU/GPU boosts over...USB 2? Even Thunderbolt limits performance, let alone USB 3, let quadruply alone USB 2.

Besides... Nintendo is definitely getting ready to bury that, as much as they talk about wanting to please its buyers. There's not going to be any major addition to it like a buddy-system console, even if that was a possibility with its inputs. The patent also describes it with a physical connection, after all, which makes it more feasible than a cloud one.

Heck, over local wifi would be worse than USB 2, especially if you're doing two wireless hops, Buddy console to wifi, wifi to Wii U. You may see 9MB/s if you're lucky on wifi that way.

9MB/s could still push a frame buffer. Compress it and it's less. Latency is what I'd be worried about. Although they didn't mention it I assume it would work over ethernet too which is good for me.

Regarding the connection between the main unit and the SCD, it mentioned a physical connection, which could also be PCIe (I don't know if Nintendo would go for that). But Thunderbolt or USB-C would definitely be workable.
 

heidern

Junior Member
I never thought about a PC or Mac as an SCD but of course the patent doesn't discount this. Not sure Nintendo would put a "game player" on a platform like that but man wouldn't it be great.

If they did this, then it means they're competing with PC gaming and can fight for the revenues that Valve, for example, are making as a platform holder: software revenue by selling first-party games to that audience, plus hardware revenue on peripherals and on the SCDs themselves. It would in a way be transitioning PC gaming to the living room, where people can use their 50" TVs.

Also, this P2P network would be their infrastructure. The infrastructure effectively makes their hardware better and more valuable. Getting PCs on board would speed up the building of the infrastructure and improve its quality. Instead of investing billions in sending satellites into space and digging cables into the ground, they get the public to provide the infrastructure for them.
 

LordOfChaos

Member
9MB/s could still push a frame buffer. Compress it and it's less. Latency is what I'd be worried about. Although they didn't mention it, I assume it would work over ethernet too, which is good for me.

Regarding the connection between the main unit and SCD, it mentioned a physical connection, which could also be PCIe (I don't know if Nintendo would go for that). But Thunderbolt or USB-C would definitely be workable.

It can't just be a frame buffer though. If it's an assistance system, and not running the whole game since the main console would be doing part of that too, they have to communicate assets, inputs, the whole shebang back and forth.

Dual hop over wifi is a bad idea for most uses anyways. That's why the sneaker net is still so popular :p

(Heck, at 9MB/s, sending 30 or 60 framebuffers a second alone would be hard, if not impossible.)

So I think this definitely won't be for Wii U, partly for the input limits, partly because they're ready to be done with it soon. They'll support it a bit longer, but nothing major like added processing hardware for it.

For the latter part, I don't really see a reason for using PCI-E, as that's a standard for all PCs to use. A console doesn't need to abide by any standards for compatibility with other systems; I don't know any consoles that use PCI-E, it's all proprietary. Thunderbolt is likewise carried over PCI-E. USB-C gen 2 reaches 10Gb (small b, as in bits) per second, which only brings it in line with Thunderbolt 1, and Thunderbolt 2 was still bottlenecking GPUs alone, let alone a CPU+GPU in the buddy console.

Could just be 100% proprietary, why not, it's a console.
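The framebuffer arithmetic in this post can be checked with quick back-of-the-envelope math (a sketch; it assumes an uncompressed 720p, 24-bit feed and takes the ~9MB/s double-hop wifi figure from earlier in the thread as given):

```python
# Rough bandwidth needed to ship raw (uncompressed) framebuffers,
# versus the ~9 MB/s a double-hop wifi link might realistically give you.
WIDTH, HEIGHT = 1280, 720      # 720p frame
BYTES_PER_PIXEL = 3            # 24-bit colour, no alpha
FPS = 60

frame_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 1e6   # ~2.76 MB per frame
raw_mb_per_s = frame_mb * FPS                       # ~166 MB/s uncompressed

WIFI_MB_PER_S = 9.0
print(f"Raw 720p60 needs ~{raw_mb_per_s:.0f} MB/s")           # ~166 MB/s
print(f"A 9 MB/s link carries ~{WIFI_MB_PER_S / frame_mb:.1f} raw frames/s")
```

So a raw 720p60 feed wants roughly 18x what that hypothetical 9MB/s link provides, which is why the Gamepad relies on hardware video compression rather than raw frames.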
 
Yup, there is certainly lots of potential with the system as outlined. As you implied, the handheld could be used out and about, and when you get home, the graphical quality would suddenly improve because it's using an SCD.

I never thought about a PC or Mac as an SCD but of course the patent doesn't discount this. Not sure Nintendo would put a "game player" on a platform like that but man wouldn't it be great.

Actually, the patent describes it the other way around. The SCD would always be dedicated hardware from Nintendo, but the "game console" could be a PC, tablet, smart phone...pretty much anything. I find this part most interesting indeed.

Is it possible to use these devices as basically terminals, but also have them handle a bit of the mundane processing tasks, such as I/O and display processing? The other question I have is if such a set up would require custom hardware, such as the Broadcom chip in the Wii U Gamepad or could the SoCs in most smart devices these days basically be capable enough to carry out the mundane processing and decompression on their own?

Also, to those more knowledgeable in such subjects: is having the A/V drivers on such an "enhanced terminal" something which would make sense in a modern graphics pipeline? I'm thinking back to this part of the patent just for reference:
For instance, the supplemental computing device(s) may include processor(s), memory for storage, and interface(s) for coupling to game consoles, but may be free from display drivers, audio drivers, a user control interface for interfacing with the control 110, or the like.

Some good discussion here.
It can't just be a frame buffer though. If it's an assistance system, and not running the whole game since the main console would be doing part of that too, they have to communicate assets, inputs, the whole shebang back and forth.

Dual hop over wifi is a bad idea for most uses anyways. That's why the sneaker net is still so popular :p

(Heck, at 9MB/s, sending 30 or 60 framebuffers a second alone would be hard, if not impossible.)
Could you perhaps explain how a configuration such as described in the patent would be more difficult than a Wii U/Gamepad setup, in which the entirety of the A/V signal is sent from the Wii U to the Gamepad over a local wireless connection? Along with the user inputs being sent from Gamepad to Wii U over the same connection?
 

LordOfChaos

Member
Could you perhaps explain how a configuration such as described in the patent would be more difficult than a Wii U/Gamepad setup, in which the entirety of the A/V signal is sent from the Wii U to the Gamepad over a local wireless connection? Along with the user inputs being sent from Gamepad to Wii U over the same connection?

With the Gamepad the regular controller inputs are sent to the Wii U, and a video stream is fed back.

From my understanding of the first page patents, you would have a "main" console that could do work on its own, and a "buddy" console that could help it along. Thus you're not simply sending video from A to B; you're splitting work between them, which means sharing assets and a lot more bandwidth than just a video feed.

That's how I see the OP patents anyways. People were also talking about this as possibly a base system that could run games on it's own, and an upgrade add on, which would necessitate the added bandwidth the way I see it.


Maybe I'm overthinking it and it would be more like Playstation Remote Play, where you're either playing a full Vita game, or streaming a full PS4 game to it, not combining the assets of both.

But then the part of the patent where you have a "buddy" console that can also use its resources to help others outside your house, and you get some sort of points for it, is still in question.
 

AzaK

Member
It can't just be a frame buffer though. If it's an assistance system, and not running the whole game since the main console would be doing part of that too, they have to communicate assets, inputs, the whole shebang back and forth.

Dual hop over wifi is a bad idea for most uses anyways. That's why the sneaker net is still so popular :p

(Heck, at 9MB/s, sending 30 or 60 framebuffers a second alone would be hard, if not impossible.)

So I think this definitely won't be for Wii U, partly for the input limits, partly because they're ready to be done with it soon. They'll support it a bit longer, but nothing major like added processing hardware for it.

For the latter part, I don't really see a reason for using PCI-E, as that's a standard for all PCs to use. A console doesn't need to abide by any standards for compatibility with other systems; I don't know any consoles that use PCI-E, it's all proprietary. Thunderbolt is likewise carried over PCI-E. USB-C gen 2 reaches 10Gb (small b, as in bits) per second, which only brings it in line with Thunderbolt 1, and Thunderbolt 2 was still bottlenecking GPUs alone, let alone a CPU+GPU in the buddy console.

Could just be 100% proprietary, why not, it's a console.

WTF WAS I SMOKING!?!?! 9MB/s, not per 30th ;) Yes, that would be pushing it.

Re the bus, I would not be thinking of it as a regular motherboard-to-GPU bus, where data needs to be pulled across it in "realtime". The SCDs, I imagine, would have storage. The game could boot up and copy its textures and vertex data to the SCD as part of loading, maybe even storing a digital copy of the game there. Then all that needs to be shared in real time is dynamic data: object transforms, input, view orientation, etc., plus the results of renders (assuming in this example the SCD is basically a GPU). It doesn't need the bandwidth of a regular bus.

In fact, it may even be easier for them to have the game running on both machines, but with each one rendering half the frame. That said, when disparities between the SCD and the main machine come into play we may see issues, but it's all possible.

The real problem will be how hard it is for developers to leverage this. If it's PS3 levels of complexity then they'll shy away.
 

AzaK

Member
Actually, the patent describes it the other way around. The SCD would always be dedicated hardware from Nintendo, but the "game console" could be a PC, tablet, smart phone...pretty much anything. I find this part most interesting indeed.

Is it possible to use these devices as basically terminals, but also have them handle a bit of the mundane processing tasks, such as I/O and display processing? The other question I have is if such a set up would require custom hardware, such as the Broadcom chip in the Wii U Gamepad or could the SoCs in most smart devices these days basically be capable enough to carry out the mundane processing and decompression on their own?

Also, to those more knowledgeable in such subjects: is having the A/V drivers on such an "enhanced terminal" something which would make sense in a modern graphics pipeline? I'm thinking back to this part of the patent just for reference:

I may be getting overly optimistic, but it seems like the whole patent is pretty open-ended, describing a general system of work sharing amongst connected and disjoint devices, some with varying features (audio and display out vs. not). Essentially, the sky is the limit here.

Buy an SCD, download "Zelda" on iOS and the SCD does all the work and streams it to your iPhone.

Buy a main console, use your PC to help render super graphics.

Buy a main console enjoy standalone and when you have some cash, buy an SCD, attach it and your games look better.

Buy a main console for the lounge, an SCD (assuming cheaper than main console) for your bedroom and play your games on either, taking advantage of work sharing if the connection is good enough between them.

Honestly, this is a nerd's wet dream. However, I don't think Nintendo ever really shoots for the sky like this, so I expect reality will be somewhat more mundane, but it's nice to fantasise.
 
With the Gamepad the regular controller inputs are sent to the Wii U, and a video stream is fed back.

From my understanding of the first page patents, you would have a "main" console that could do work on its own, and a "buddy" console that could help it along. Thus you're not simply sending video from A to B; you're splitting work between them, which means sharing assets and a lot more bandwidth than just a video feed.

That's how I see the OP patents anyways. People were also talking about this as possibly a base system that could run games on it's own, and an upgrade add on, which would necessitate the added bandwidth the way I see it.


Maybe I'm overthinking it and it would be more like Playstation Remote Play, where you're either playing a full Vita game, or streaming a full PS4 game to it, not combining the assets of both.

But then the part of the patent where you have a "buddy" console that can also use its resources to help others outside your house, and you get some sort of points for it, is still in question.

Right. Obviously, if we're talking things such as textures and framebuffer, that's going to require massive bandwidth. That can't be what they have in mind. Not for more distant communications and likely not for close-range communication either, if they are using WiFi and ethernet as examples.

But they've got something in mind. What if it did basically work like Gamepad streaming or Vita Remote play, except with the final display/audio signal processing being performed by the tablet/PC/whatever? Along with this, what if that same tablet or PC was also processing user input and other gameplay before beaming the results to the SCD, which would process AI and more complex physics along with most of the graphics and sound? Does this sound feasible?

The ideal is a console which can be played basically anywhere you have a display. Maybe even ship the SCD with a Chromecast-like device which would be one manifestation of the "game console" described in the patent. This would perhaps match one of Iwata's few hints on NX:
Iwata said:
Since we are always thinking about how to create a new platform that will be accepted by as many people around the world as possible, we would like to offer to them "a dedicated video game platform with a brand new concept" by taking into consideration various factors, including the playing environments that differ by country.
 

AzaK

Member
Right. Obviously, if we're talking things such as textures and framebuffer, that's going to require massive bandwidth. That can't be what they have in mind. Not for more distant communications and likely not for close-range communication either, if they are using WiFi and ethernet as examples.

But they've got something in mind. What if it did basically work like Gamepad streaming or Vita Remote play, except with the final display/audio signal processing being performed by the tablet/PC/whatever? Along with this, what if that same tablet or PC was also processing user input and other gameplay before beaming the results to the SCD, which would process AI and more complex physics along with most of the graphics and sound? Does this sound feasible?

Certainly is feasible. Just think of the SCD as more cores/threads running on your machine; all you need to ensure is that the data they need is there. The problem is really just that the buses to transfer that data are likely MUCH slower than internal buses, unless they do something like PCIe for physically connected devices. For low-bandwidth buses this would likely require preloading, which I don't think would be an issue if the system had significant RAM or actually had storage itself. You'd then want to keep data transfer parallelised and/or discrete from the other parts of the system so they can work on their own. AI decisions, physics, etc. could very well be on an SCD, I imagine.
 

Pokemaniac

Member
With the Gamepad the regular controller inputs are sent to the Wii U, and a video stream is fed back.

From my understanding of the first page patents, you would have a "main" console that could do work on its own, and a "buddy" console that could help it along. Thus you're not simply sending video from A to B; you're splitting work between them, which means sharing assets and a lot more bandwidth than just a video feed.

That's how I see the OP patents anyways. People were also talking about this as possibly a base system that could run games on it's own, and an upgrade add on, which would necessitate the added bandwidth the way I see it.


Maybe I'm overthinking it and it would be more like Playstation Remote Play, where you're either playing a full Vita game, or streaming a full PS4 game to it, not combining the assets of both.

But then the part of the patent where you have a "buddy" console that can also use its resources to help others outside your house, and you get some sort of points for it, is still in question.

You don't really need to be sharing assets in real time. If they really intended to have a setup like this, the SCD would probably have enough storage that things could be mostly preloaded.
 

heidern

Junior Member
Actually, the patent describes it the other way around. The SCD would always be dedicated hardware from Nintendo, but the "game console" could be a PC, tablet, smart phone...pretty much anything.

It says:
Because the sole or primary function of the supplemental computing device(s) may be to enhance the gaming experience by supplementing resources of the game console 102, in some instances the hardware of the supplemental computing device(s) is purposefully limited.

In other words, the primary function of the SCD may not be enhancing the game experience, which opens it up to being any hardware. But it's up to Nintendo whether they want to maximise dedicated SCD sales or maximise the infrastructure and the audience they can appeal to. Using open hardware might introduce further technical hurdles which they might not want to deal with, and they might prefer strict control over all the hardware.

Other quotes that clarify things:

implementations herein are not limited to the particular examples provided, and may be extended to other environments, other system architectures, other types of merchants, and so forth

In still other instances, different portions of a game or other application may be stored across multiple supplemental computing devices such that these portions are "closer" to users' game consoles and therefore may be rendered faster as compared to storing the data at remote servers. For instance, different portions of a map of a game may be stored across a group of supplemental computing devices that are within a relatively close network distance to one another such that the associated game consoles may each access these parts of the game, when needed, relatively quickly.

Relatively close supplemental computing devices may be able to provide services at a nearly real-time speed (e.g. processing real-time graphics and sound effects), while relatively far away devices may only be able to provide asynchronous or supplementary support to the events occurring on the console (e.g. providing for weather effects in games, artificial intelligence (AI), etc.).

After identifying supplemental computing devices within range (e.g., having a threshold connection strength, threshold latency, etc.), the module 214 may present this information to a user of the game console, who may select one or more supplemental computing devices to connect with. After receiving a user selection, the module 214 may attempt to establish a wireless connection with the selected devices and may begin utilizing the devices if successful.

For instance, a console could send a different AI algorithm to different supplemental computing devices and may therefore receive multiple different results calculated using the different algorithms. The game console may then select which of the different results to act on or otherwise render on the display. The supplemental computing devices may additionally or alternatively provide any other type of support to the game console, including resources for performing specialized graphics processing, storing uncompressed game data so that the game console doesn't have to perform the de-compression on the fly, and the like.

In still other instances, the local supplemental computing device 104 may make this determination and, hence, may seek to couple to a remote supplemental computing device for buttressing the processing resources and/or storage available to the game console 102.

users that share resources may similarly utilize other supplemental computing devices, potentially in equal amounts of what they shared (measured in time or resource amounts).

So potentially you're looking at sharing not just processing time but also hard drive space.
 

LordOfChaos

Member
Right. Obviously, if we're talking things such as textures and framebuffer, that's going to require massive bandwidth. That can't be what they have in mind. Not for more distant communications and likely not for close-range communication either, if they are using WiFi and ethernet as examples.

But they've got something in mind. What if it did basically work like Gamepad streaming or Vita Remote play, except with the final display/audio signal processing being performed by the tablet/PC/whatever? Along with this, what if that same tablet or PC was also processing user input and other gameplay before beaming the results to the SCD, which would process AI and more complex physics along with most of the graphics and sound? Does this sound feasible?


Maybe! There's no reference point for such a system so far, so it's kind of hard to say, but perhaps. One unit processing gameplay and input while the other calculates AI, physics, and graphics, as in your example, still sounds like a lot more communication than the relatively simple video feed of the Gamepad. E.g., AI responds to your input to a degree, so more direct interactions would need a quick turnaround. Could the round trip be done in one frame? Lots of questions we can't know until it's out!

An NXCast would be interesting too...One unit tucked away somewhere, cheap dongles for every TV in the house? That could be cool.

You don't really need to be sharing assets in real time. If they really intended to have a setup like this, the SCD would probably have enough storage that things could be mostly preloaded.


Synchronization would still take something more than a raw video feed's worth of bandwidth though; see early SLI bridges and current SLI/Crossfire PCI-E bandwidth needs. Even when each card has the same assets in VRAM, there has to be a lot of communication between them. Even for a simple setup like split-frame rendering, which also happens to be inefficient, since all the expensive action could be happening in just one part of the screen.
 

AzaK

Member
Maybe! There's no reference point for such a system so far, so it's kind of hard to say, but perhaps. One unit processing gameplay and input while the other calculates AI, physics, and graphics, as in your example, still sounds like a lot more communication than the relatively simple video feed of the Gamepad. E.g., AI responds to your input to a degree, so more direct interactions would need a quick turnaround. Could the round trip be done in one frame? Lots of questions we can't know until it's out!

An NXCast would be interesting too...One unit tucked away somewhere, cheap dongles for every TV in the house? That could be cool.




Synchronization would still take something more than a raw video feed's worth of bandwidth though; see early SLI bridges and current SLI/Crossfire PCI-E bandwidth needs. Even when each card has the same assets in VRAM, there has to be a lot of communication between them. Even for a simple setup like split-frame rendering, which also happens to be inefficient, since all the expensive action could be happening in just one part of the screen.

Think of this, though. You have two PS4s, each started from some synchronised state (the first frame of the game), and both reading input from the one controller. Now have one render only half the frame and the other render the other half, even if they both run all the logic. You have basically halved the GPU requirements of each; or, to put it another way, both could potentially render twice the number of pixels. More effects, alpha, etc. Of course, both half-renders need to be joined for display, so one would have to encode and send its half across, although I wouldn't necessarily suggest implementing it like this (both machines would have to have comparable CPU/compute power). Either way, you would hopefully end up with a net benefit.

With something like AI, though, you don't necessarily need to send the whole geometry; pathing graphs would be sufficient, and you could send modifications to them as needed, and they wouldn't need updating every 60th of a second. Then you might need some player state to work from (orientation, perceived threat, etc.), but that wouldn't be huge.

Certainly all doable depending on what the SCD has in it.
 

Schnozberry

Member
That's an interesting alternative take on it, although if the Wii U SoC is so cheap, why not include it in the console in the first place?

I suppose they could sell it as a storage expansion which also gives you BC, if they throw a decent size drive in there.

My thought was that the supplemental unit would be external storage + cloud, and having the Wii U SOC in there for backwards compatibility and other general cloud computation would mean that it would have some very compelling features while remaining completely optional. You don't need backwards compatibility or the cloud device for the NX to function, but if you purchase it, you get storage and cloud processing access as well as full compatibility with a huge library of games. It would be an attractive option for lapsed Nintendo fans who may want to go back and play something they missed but never would have bought a console for, plus it would allow Wii and Wii U owners to potentially transfer their digital software directly onto the SCD for play on NX.
 

Pokemaniac

Member
Maybe! There's no reference point for such a system so far, so it's kind of hard to say, but perhaps. One unit processing gameplay and input while the other calculates AI, physics, and graphics, as in your example, still sounds like a lot more communication than the relatively simple video feed of the Gamepad. E.g., AI responds to your input to a degree, so more direct interactions would need a quick turnaround. Could the round trip be done in one frame? Lots of questions we can't know until it's out!

An NXCast would be interesting too...One unit tucked away somewhere, cheap dongles for every TV in the house? That could be cool.




Synchronization would still take something more than a raw video feed's worth of bandwidth though; see early SLI bridges and current SLI/Crossfire PCI-E bandwidth needs. Even when each card has the same assets in VRAM, there has to be a lot of communication between them. Even for a simple setup like split-frame rendering, which also happens to be inefficient, since all the expensive action could be happening in just one part of the screen.

If you want to do an SLI sort of setup, then of course you will need a lot of bandwidth. However, that is only one of many ways this could be configured. There are other tasks in video games that don't directly involve rendering part of the image and require a lot less bandwidth. We really don't know how (or if) Nintendo plans to use this, so assuming a dual-GPU setup like that is a bit premature.
 

AzaK

Member
My thought was that the supplemental unit would be external storage + cloud, and having the Wii U SOC in there for backwards compatibility and other general cloud computation would mean that it would have some very compelling features while remaining completely optional. You don't need backwards compatibility or the cloud device for the NX to function, but if you purchase it, you get storage and cloud processing access as well as full compatibility with a huge library of games. It would be an attractive option for lapsed Nintendo fans who may want to go back and play something they missed but never would have bought a console for, plus it would allow Wii and Wii U owners to potentially transfer their digital software directly onto the SCD for play on NX.

I personally don't think Wii U backwards compatibility is worth it (although if it's $2, then sure), and cloud is pretty lame if that's all it was. For me to think it's worth buying, it'd have to be a fully functioning console for the most part, with the ability to share work from the main system, so I could upgrade it by daisy-chaining 64 of those fuckers together to solve world hunger.

If you want to do an SLI sort of setup, then of course you will need a lot of bandwidth. However, that is only one of many ways this could be configured. There are other tasks in video games that don't directly involve rendering part of the image and require a lot less bandwidth. We really don't know how (or if) Nintendo plans to use this, so assuming a dual-GPU setup like that is a bit premature.

If the SCDs are a processor with a bit of RAM that needs to get its resources from a network-connected device in realtime, then it's dead in the water IMO. It needs sufficient RAM and processing power to augment the main console's "power". For that, you don't want to be streaming assets across the network for rendering every frame. RAM and/or HDD storage would be my guess.

Nintendo could release NX as a thing with meagre storage that gets its data from disc, like consoles do now, but if you want extra massive storage you don't buy an HDD, you buy an SCD, which has a processor, GPU, HDD, etc. That's a way to cater to those who want something simple and cheap, while power users like us can fork out more and get the "uber" edition.
 

AfroDust

Member
Since it's looking like this thing could be heavily reliant on the cloud, what should they call it? The Nintendo Sky? The Nintendo Horizon?
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
You don't really need to be sharing assets in real time. If they really intended to have a setup like this, the SCD would probably have enough storage that things could be mostly preloaded.

Synchronization would still take something more than a raw video feed of bandwidth though, see early SLI bridges and current SLI/Crossfire PCI-E bandwidth needs. Even when each card has the same assets within VRAM, there has to be a lot of communication between them. Even for a simple setup like split frame rendering, which also happens to be inefficient since something cool could only be happening in one part of the screen.
Pokemaniac is right - in such low-latency distributed systems the traffic is highly asymmetrical - nobody sends back and forth assets, at least not at playtime.

Unit A and unit B both have all the assets they need to operate with, in advance. Then, assuming a master-slave configuration (which is natural, given the control inputs normally arrive at one unit only, say A), unit A requests some work done by unit B. But that does not need to happen per-frame, every frame. Say, A can send a slim request 'compute the distance field of geometry X and light-source Y, animated by Z*, for the next N hundred frames', and for the duration of the next N hundred frames, B sends back a distance field map (the fat response), per frame, every frame. All A needs to send back is a 'keep alive' type of acknowledgement, periodically, but not necessarily per frame.

* where X, Y and Z are assets.
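That asymmetric exchange can be sketched as a toy loop (a sketch only; the message names and keep-alive interval are made up for illustration, not anything from the patent):

```python
# Toy model of the asymmetric master/slave traffic described above:
# one slim request from master A, many fat per-frame responses from
# slave B, plus an occasional keep-alive from A.
from dataclasses import dataclass

@dataclass
class WorkRequest:          # the "slim" request A sends once
    geometry: str
    light_source: str
    animation: str
    n_frames: int

KEEPALIVE_INTERVAL = 30     # A acknowledges every 30 frames, not every frame

def run_session(req: WorkRequest):
    """Count messages in each direction for one work request."""
    a_to_b = 1                          # the initial request
    b_to_a = 0
    for frame in range(req.n_frames):
        b_to_a += 1                     # B returns a distance-field map per frame
        if frame % KEEPALIVE_INTERVAL == 0:
            a_to_b += 1                 # periodic keep-alive from A
    return a_to_b, b_to_a

a_to_b, b_to_a = run_session(WorkRequest("X", "Y", "Z", n_frames=300))
print(a_to_b, b_to_a)   # prints "11 300": traffic is heavily lopsided toward B->A
```

The point of the sketch is the ratio: for 300 frames of work, A sends 11 small messages while B sends 300 fat ones, so the master's uplink needs are tiny compared to the slave's downlink.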
 

LordOfChaos

Member
Pokemaniac is right - in such low-latency distributed systems the traffic is highly asymmetrical - nobody sends back and forth assets, at least not at playtime.

Unit A and unit B both have all the assets they need to operate with, in advance. Then, assuming a master-slave configuration (which is natural, given the control inputs normally arrive at one unit only, say A), unit A requests some work done by unit B. But that does not need to happen per-frame, every frame. Say, A can send a slim request 'compute the distance field of geometry X and light-source Y, animated by Z*, for the next N hundred frames', and for the duration of the next N hundred frames, B sends back a distance field map (the fat response), per frame, every frame. All A needs to send back is a 'keep alive' type of acknowledgement, periodically, but not necessarily per frame.

* where X, Y and Z are assets.


That would be interesting. Still much more bandwidth involved than the simple video stream, but perhaps far less than I was imagining. The scary part?
Microsoft was right
 

Thraktor

Member
Thanks for the detailed reply. I see what you're trying to say, and I agree there could be some sort of solution that could technically work, but would it be worth it in the end for the amount of engineering and cost required to develop this part of the system? Ethernet-connected units and physically bus-connected ones, no worries, it'd probably be pretty good. But wireless out onto the net, to the few machines around that have the necessary data and availability to process what you want, and that need to be turned on, just feels like you'd be left with a subset of possible connections of less than 1 :)

[Edit: In my original version of this post I made a small error in my analysis by assuming that the probability of finding your game on one NX+ is independent of finding it on another one. This is true if we know what the game is, but in this case we don't, so if your game isn't on the first NX+ it's actually less likely to be on the next one (as it's more likely to be an unpopular game), and so on. To fix this we switch to a bayesian approach and update our priors after each step. What's crossed out below is what was in the original version of the post, and the additions/replacements are in italics (I've removed the original italics to prevent confusion).]

Actually, from a statistical point of view, it's ~~extremely~~ very likely that there's a connection available that's "close" to you, even with a very low install base of NX+'s (the 1 million I used in my example above is basically Virtual Boy level numbers).

Let's have a quick look at the numbers again, keeping the install base at 1 million for a start. The expected number of NX+'s on your exchange in that case comes to just over 6 (99.8% is the probability that at least one NX+ is on your exchange, but obviously there could be more). Let's say that there are 6, and four of those NX+'s are being used (which would represent very high demand). Let's also say that each NX+ has 20 games loaded on it (1TB of space and 50GB per game), and assume those games are loaded in simple statistical proportion to the number of players who want to play them (a more advanced technique that intentionally spreads games around geographically would achieve better results, but let's keep it simple for now).

For the purposes of estimating the distribution of games, I'm going to use Steam, as it's the only readily available source of accurate data. I'm going to assume that there are 200 games on the platform, and that the usage statistics of those games match the top 200 games on Steam. Taking these numbers, and doing a bit of work in Excel, tells me that there's ~~an 81.8%~~ *a 69.23%* chance that the game you want to play will be on at least one of the two remaining available NX+ units.
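
For anyone who wants to check the shape of this calculation without the spreadsheet, here's a small Monte Carlo sketch. I don't have the Steam usage data to hand, so it substitutes a Zipf popularity curve (rank r gets weight 1/r) for the real top-200 figures, which means it won't land on 69.23% exactly; but it does capture the dependence described in the edit note above, since a miss on one unit means the wanted game is more likely to be unpopular and therefore missing elsewhere too.

```python
import random
from itertools import accumulate

random.seed(42)  # reproducible runs

N_GAMES = 200
# Zipf-like stand-in for the Steam top-200 popularity data (an assumption).
WEIGHTS = [1 / (rank + 1) for rank in range(N_GAMES)]
CUM = list(accumulate(WEIGHTS))  # cumulative weights, precomputed for speed

def availability(n_free_units, games_per_unit=20, trials=10_000):
    """Estimate P(the game you want is on at least one free NX+ unit),
    with each unit's 20-game library drawn in proportion to popularity."""
    hits = 0
    for _ in range(trials):
        wanted = random.choices(range(N_GAMES), cum_weights=CUM)[0]
        if any(
            wanted in set(random.choices(range(N_GAMES), cum_weights=CUM,
                                         k=games_per_unit))
            for _ in range(n_free_units)
        ):
            hits += 1
    return hits / trials

print(availability(2))   # two free units on your own exchange
print(availability(12))  # twelve free units across neighbouring exchanges
```

Going from 2 to 12 free units pushes the estimate up substantially, which is the same effect as the jump from the on-exchange figure to the neighbouring-exchange figure below.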

Now, even in this unlikely scenario, there's still a small chance that there won't be someone on your exchange (i.e. with minimum physically possible latency), so let's examine that. The next lowest latency would be with people connected to exchanges on the same ISP that are "neighbours" to your exchange (i.e. either directly connected to your exchange, or both connect to the same ISP hub). These should add low single-digit milliseconds of latency to your stream, and still maintain a low variability of latency. Let's say there are five of these exchanges. The six exchanges (including your own) would now have an estimated 37.5 NX+'s, and assuming 66.7% are used, as above, we have 12 free NX+ units to stream from, with almost the minimum physically possible latency. Redoing the calculations, the probability that your game is on at least one of these 12 NX+ units comes to ~~99.996%~~ *89.3%*.

So, in a situation where:

- The NX+ sells almost as badly as the Virtual Boy
- Nintendo implements the laziest possible game pre-loading technique
- Two-thirds of NX+ units are in use (much higher than typical demand should be expected to be)

you are still ~~effectively guaranteed~~ *very likely* to be connected to an NX+ that has your game and can offer next-to-minimal latency, and you have a ~~greater than 80%~~ *almost 70%* chance of being connected to an NX+ right on your exchange, which will give you the lowest latency any video game streaming service could possibly provide.

If we re-do the calculations with more realistic assumptions (5 million NX+'s, and a 25% usage rate), the probabilities start to get ~~a little crazy~~ *considerably higher*. In this case you have:

- a ~~99.9999998%~~ *93.9%* chance that your game is available on an NX+ on your exchange
- a ~~99.99999999999999999999999999999999999999999999999999%~~ *99.6%* chance that your game is available on an NX+ in a neighbouring exchange

~~That is, you are more than twice as likely to be hit by lightning the same second you're hit by a meteor, on the same day you've been drafted by the NBA, while holding winning lottery tickets for both the Powerball and the Euromillions, than you are to not be able to stream a game with almost minimal latency.~~ *Okay, it's not that unlikely, but it's still pretty unlikely, and I should emphasise that the probabilities would go up considerably were a more sensible game distribution technique used.*

Now, granted, the model I'm using is only a very rough approximation of reality, and more real-world data (particularly on the topologies of real ISPs' networks) would help make it more accurate, but I feel it still does a decent enough job of demonstrating a simple statistical fact:

Under any reasonable set of assumptions, you would be able to stream from an NX+ that is, by the standards of the internet, extremely close to you.

Even my estimates for total latency are on the conservative side. By 2019, the average fixed-line broadband speed in North America is expected to hit 43.7Mb/s, in Western Europe 49.1Mb/s and in Japan over 100Mb/s (PDF source, page 19). The technologies that bring in these higher speeds (such as FTTH, DOCSIS 3.1 and VDSL/G.fast with fibre to a street-level cabinet) also reduce latency in the access network. It's quite possible that for a large number of users, the total latency added by the streaming solution would be so low that it would be hidden entirely within the vsync delay, making the service indistinguishable from playing locally.
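
To put the "hidden within the vsync delay" point in numbers: at 60Hz a displayed frame lasts about 16.7ms, so as long as the streaming pipeline's added delay stays under one frame, it can disappear into the buffering that vsync already imposes. The per-leg figures below are illustrative assumptions, not measurements:

```python
refresh_hz = 60
frame_ms = 1000 / refresh_hz  # ~16.7 ms of slack per displayed frame

# Illustrative (assumed) round-trip additions for an on-exchange stream:
access_ms = 4   # last-mile up and down on an FTTH/VDSL-class connection
encode_ms = 5   # hardware video encode on the host NX+
decode_ms = 3   # hardware video decode on the player's console
total_added_ms = access_ms + encode_ms + decode_ms

# 12 ms of added delay fits inside the ~16.7 ms frame, so it can be
# absorbed entirely by the vsync pipeline.
print(total_added_ms < frame_ms)  # True
```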
 

AzaK

Member
Actually, from a statistical point of view, it's extremely likely that there's a connection available that's "close" to you, even with a very low install base of NX+'s (the 1 million I used in my example above is basically Virtual Boy level numbers). [...]

I hear ya, but... I'm in New Zealand. We, and I imagine MANY other countries, won't have the density you suggest.
 

AmyS

Member
So the supplemental device works to share processing resources over the cloud? Kinda like Sony promised w/ Cell and PS3? This patent is crazy...

http://www.google.com/patents/US7321958

[image: xEouSCT.png]

This image represents a massive two-chip configuration with a Cell Processor (with 4 Elements) and a Cell Visualizer GPU (with another 4 Elements).


This image represents a single chip configuration, basically half of each of the above (2 Processing Elements + 2 GPU Elements).


This represents the network of CELLs.

The final PS3 ended up being a Cell with 1 Processor Element and the Nvidia RSX GPU.

These concepts might work for NX with the Supplemental Computing Device, if done locally. We might not ever need another Nintendo console. Just keep adding more Supplemental Computing Devices.

I know, it's fucking Krazy
 

Clefargle

Member
Here is how I hope it plays out

Low settings - NX handheld - 720p 30fps
Med settings - NX console - 1080p 30fps
High settings - NX* + SU** - 1080p 60fps
Max settings - NX*** + 2 SU** - 4K 30fps****

*This could be either console or handheld
** These could be a docked handheld, a remote console through cloud, or a local SU hardware connected with a cable.
*** This can be the NX console only
**** I get it, this could be unrealistic but hopefully the next gen competitor consoles will be 4K and a couple years after launch the NX will support it with required hardware.

These are the possible configs:

Console + handheld (either local docking or vice versa, like Vita Remote Play)

Console + console (through the cloud)

Console + SU (hardware)

Console + handheld + SU (either local or cloud)

Could be really sweet, but Nintendo had better have this figured out at launch and help devs provide scaling game performance. It could be incredible and solve their issues with hardware refreshes and a segmented game selection across devices. But only if it works out of the box, has third parties on board, and has a great launch library on day 1. I am very hyped.
 
I hear ya, but... I'm in New Zealand. We, and I imagine MANY other countries, won't have the density you suggest.

Ah yes, that might limit things for yah. My professor was from New Zealand. I'd love to visit some time myself. Anyway, in such instances, you may be stuck using your own SCD and game console (whatever it may be) locally. The way they describe the SCD, however, it might just have one cable for power (and it may even be a relatively low power device). You could take it wherever you want to play.

Remember some other tidbits from Iwata (R.I.P.). I don't feel like digging up the quotes, as my mind is spinning from that controller patent, but there was one where he mentioned "new ways of payment for customers." This might allude to the option of buying an SCD or using point/cash/whatever to leech off someone else's.

There's also Iwata talking about "redefining a platform" and how anyone with an NNID would basically be a part of the Nintendo "platform." Well, wouldn't this make sense if users w/ an account could use the technology in this patent to play Nintendo games using others' SCDs along w/ their own tablet, PC, "nintendocast", Wii U or whatever as the "game console/terminal"? As blu said, imagine if the Gamepad had even the power of a 2DS. Most smart device SoCs have enough juice that they could process control inputs, A/V out, and basic game instructions no sweat while the SCD takes care of the heavy lifting.
 

maxcriden

Member
Extremetech's article says Nintendo will announce an NX release date at E3. That's just speculation though, right?
 