
Nintendo's Supplemental Computing Devices patent allowed by USPTO, rejection cleared*

So...that Gran Turismo 5 BS they talked about years before its release where it was like "4 PS3s = 120fps in 4K" or something like that?

60fps in 4K. I think they did the same with Wipeout; it was to show off 4K TVs rather than the games. The retail game ended up supporting up to 5 screens, but only arranged side by side, not 2x2.

Edit: I might be confusing Wipeout with an early 3DTV demo, not 4K.
 

Eradicate

Member
Interesting thoughts about the console being the handheld with extra boosting coming from the home unit. I keep looking to see if they add any more patents that discuss these things, but all I've seen recently was (what looked like, anyways) a Nintendo grip game. Test your might!

 

AmyS

Member
So...that Gran Turismo 5 BS they talked about years before its release where it was like "4 PS3s = 120fps in 4K" or something like that?

It was never intended for consumer use.

4 PS3s for 240fps at 1080p

*or*

4 PS3s for native 4K at 60fps.

What's the team at Polyphony Digital doing besides finishing their upcoming Gran Turismo 5 racing simulator? Making some crazy ass tech demos with four PlayStation 3s hooked together to share rendering time. Not only can four PS3s create a 2160p image (that's four 1080p images for a resolution of 3840x2160 blasted on Sony's 4K projector), they can create one single 1080p image that runs at 240 FPS. 240!

 

Snakeyes

Member
The difficulty is in finding those situations where the player indirectly interacts with systems such as liquid or cloth simulation. Most of the time you see cloth simulation in games, the developers make a point of the player directly interacting with it to show it off (e.g. by modelling the player character's clothes, or by having the character walk through curtains, etc.) Having an NPC interact with some kind of cloth in the background would work, but may not be all that visible on a 540p screen, and you may be able to get just as good an effect with pre-computation (e.g. the famous Metro gif of pulling a cloth cover off a car).

I suppose in a god game or RTS you might be able to make interesting use of fluid simulation without latency being an issue (for example blowing up a dam and watching the water flood your enemy's base), but those aren't exactly common genres on handhelds, and you again have the issue of needing a local fallback if the connection drops.

Yeah, guess you'd run into problems if you limit this to a handheld. I'm more intrigued by the potential applications with a higher speed wired connection to a stationary console and an Ethernet connection, though.
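The local-fallback idea from the quoted post can be sketched roughly like this. Everything here is hypothetical (there is no real NX API for this); it just shows the shape of the technique: request a high-quality simulation step from the remote device, and use a cheap local approximation whenever the result misses the frame budget.

```python
# Hypothetical sketch: offload a cloth/fluid step to a remote device,
# falling back to a cheap local approximation if the result is late.

FRAME_BUDGET_MS = 16.6  # one frame at 60fps

def remote_step(state, latency_ms):
    """Pretend remote solver: better quality, but may arrive late."""
    if latency_ms > FRAME_BUDGET_MS:
        return None  # result didn't make it back this frame
    return [x * 0.99 for x in state]  # "high quality" damped step

def local_step(state):
    """Cheap local approximation, always available on-device."""
    return [x * 0.9 for x in state]  # coarser damping

def simulate(state, latency_ms):
    result = remote_step(state, latency_ms)
    return result if result is not None else local_step(state)

fast = simulate([1.0, 2.0], latency_ms=5.0)   # remote result arrives in time
slow = simulate([1.0, 2.0], latency_ms=40.0)  # connection lags: local fallback
```

The key design point is that the fallback path must always exist, which is exactly the cost the quoted post identifies: you end up shipping (and authoring) the effect twice.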
 

Thraktor

Member
Yeah, guess you'd run into problems if you limit this to a handheld. I'm more intrigued by the potential applications with a higher speed wired connection to a stationary console and an Ethernet connection, though.

There are obviously ample technical opportunities for a directly-connected SCD linked to the console by a high-bandwidth connection. With tens of gigabits/s bandwidth (e.g. Thunderbolt or other PCIe-based) you can pretty much just use explicit multi-adapter support in Vulkan* as if you had two GPUs in a single box. With a single gigabit/s bandwidth (e.g. Ethernet) you can still do quite a lot, although you have to be a bit more careful in managing bandwidth. However, as interesting as both of these cases are from a technical point of view, I don't see Nintendo pursuing them, because I don't see a business case for attempting to sell people an "upgrade box" that makes their NX more powerful. It's appealing to people like us who want to see Nintendo games on the newest and most impressive hardware, but most people who buy consoles do so because they're simple one-box solutions which will last them four or five years. Introducing an upgrade system just makes things more complicated for consumers, and if there's one thing Nintendo should have learnt from the success of Wii relative to Wii U, it's that simplicity is king.

That's not even considering the difficulty in getting third parties to get on board. Nintendo have enough difficulty getting third parties to bring games to a system with a single hardware configuration and multiplying the number of target configurations wouldn't exactly help the matter.

*Vulkan actually doesn't have particularly complete EMA support yet (e.g. no direct memory transfers between GPUs), but I would imagine sufficient API support would precede the release of the first SCD in this scenario.
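To put rough numbers on the two bandwidth tiers mentioned above (my own back-of-envelope figures, not anything from the patent): at 60fps a gigabit Ethernet link gives you only about 2 MB per frame, less than a single uncompressed 1080p framebuffer, while a tens-of-gigabits PCIe-class link has room for several framebuffer-sized transfers per frame.

```python
# Back-of-envelope: data budget per frame at 60fps for each link type.
def mb_per_frame(link_gbps, fps=60):
    return link_gbps * 1e9 / 8 / fps / 1e6  # gigabits/s -> megabytes per frame

framebuffer_1080p_mb = 1920 * 1080 * 4 / 1e6  # ~8.3 MB uncompressed RGBA

ethernet_mb = mb_per_frame(1.0)   # gigabit Ethernet: ~2.1 MB per frame
pcie_mb = mb_per_frame(32.0)      # a PCIe 3.0 x4-class link: ~66 MB per frame
```

Which is why the Ethernet case demands careful bandwidth management: you can move compute inputs and results across it, but never raw frames.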
 

Thraktor

Member
The Expansion Pak was simply for RAM. Same as Saturn's 1MB/4MB carts.

This is something quite different from that.

It's also important to note that Nintendo never actually intended to sell the Expansion Pak as a stand-alone upgrade. It was only ever intended to come with the 64DD to act as a buffer to accommodate the disk's slow R/W speed. It was only when developers got hold of it that they started asking Nintendo to release it stand-alone (Rare and Factor 5 being the main two pushing for it, as far as I'm aware).
 
It's also important to note that Nintendo never actually intended to sell the Expansion Pak as a stand-alone upgrade. It was only ever intended to come with the 64DD to act as a buffer to accommodate the disk's slow R/W speed. It was only when developers got hold of it they started asking Nintendo to release it stand-alone (Rare and Factor 5 being the main two pushing for it, as far as I'm aware).
That's interesting; I never knew about the 64DD part. Always figured the 64DD had that RAM inside itself.

Would be interesting to know why they designed it that way. Maybe they foresaw use of it outside of the 64DD at some point, or designed it that way as a backup?
 

AzaK

Member
There are obviously ample technical opportunities for a directly-connected SCD linked to the console by a high-bandwidth connection. With tens of gigabits/s bandwidth (e.g. thunderbolt or other PCIe-based) you can pretty much just use explicit multi-adapter support in Vulkan* as if you had two GPUs in a single box. With a single gigabit/s bandwidth (e.g. Ethernet) you can still do quite a lot, although you have to be a bit more careful in managing bandwidth. However, as interesting as both of these cases are from a technical point of view, I don't see Nintendo pursuing them, because I don't see a business case for attempting to sell people an "upgrade box" that makes their NX more powerful. It's appealing to people like us who want to see Nintendo games on the newest and most impressive hardware, but most people who buy consoles do so because they're simple one-box solutions which will last them four or five years. Introducing an upgrade system just makes things more complicated for consumers, and if there's one thing Nintendo should have learnt from the success of Wii relative to Wii U, it's that simplicity is king.

That's not even considering the difficulty in getting third parties to get on board. Nintendo have enough difficulty getting third parties to bring games to a system with a single hardware configuration and multiplying the number of target configurations wouldn't exactly help the matter.

*Vulkan actually doesn't have particularly complete EMA support yet (e.g. no direct memory transfers between GPUs), but I would imagine sufficient API support would precede the release of the first SCD in this scenario.

There are certainly issues surrounding the SCD concept, such as:

1) Inventory management of multiple devices/add-ons
2) It reduces the ability to reset with a new hook every 5 years
3) It requires Nintendo to build a good set of APIs around shared processing
4) CPU. GPU work is relatively easy to scale, but if you're CPU-bound or want to use a certain level of CPU for AI etc., that can be harder to manage across various scales.
5) Cut-off points. At what point do you say "OK, games don't have to be backwards compatible"? If it's 5 years, then, well, you're effectively back to a single gen.
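Point 4 in the list above is worth illustrating. A GPU-bound game can scale almost continuously (drop resolution or effects), but CPU-bound work like AI usually has to change behaviour, not just quality. A hypothetical sketch (all tiers and numbers invented):

```python
# Hypothetical illustration of point 4: CPU-bound AI scales in discrete,
# gameplay-visible steps rather than smoothly like rendering resolution.
def ai_config(cpu_budget_ghz):
    """Pick AI settings for the available CPU budget (made-up tiers)."""
    if cpu_budget_ghz >= 3.0:   # console plus SCD
        return {"agents": 200, "ai_ticks_per_s": 30}
    if cpu_budget_ghz >= 1.5:   # console alone
        return {"agents": 100, "ai_ticks_per_s": 15}
    return {"agents": 50, "ai_ticks_per_s": 10}  # handheld fallback
```

Halving the budget here halves the agent count and the tick rate, a difference players can feel, which is why CPU scaling across configurations is the harder design problem.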

I don't disagree that it's simpler for Nintendo and customers to continue the way they are, but I'm not sure it's the right way to go in this changing industry.

I don't see a problem with new NX boxes coming every 3-5 years as technology increases. Just release the NX with new tech. Provided you're marketing the device, people will know about it and provided you assure backwards compatibility, people can continue to use their boxes.

I can see people buying MORE devices, especially amongst the "core" gamer to augment their experience.

I can see a good entry level price for the "casual" gamer.

It would sure be a bold move to attempt something like this, and would require careful consideration and planning for sure.
 

Snakeyes

Member
There are obviously ample technical opportunities for a directly-connected SCD linked to the console by a high-bandwidth connection. With tens of gigabits/s bandwidth (e.g. thunderbolt or other PCIe-based) you can pretty much just use explicit multi-adapter support in Vulkan* as if you had two GPUs in a single box. With a single gigabit/s bandwidth (e.g. Ethernet) you can still do quite a lot, although you have to be a bit more careful in managing bandwidth. However, as interesting as both of these cases are from a technical point of view, I don't see Nintendo pursuing them, because I don't see a business case for attempting to sell people an "upgrade box" that makes their NX more powerful. It's appealing to people like us who want to see Nintendo games on the newest and most impressive hardware, but most people who buy consoles do so because they're simple one-box solutions which will last them four or five years. Introducing an upgrade system just makes things more complicated for consumers, and if there's one thing Nintendo should have learnt from the success of Wii relative to Wii U, it's that simplicity is king.

That's not even considering the difficulty in getting third parties to get on board. Nintendo have enough difficulty getting third parties to bring games to a system with a single hardware configuration and multiplying the number of target configurations wouldn't exactly help the matter.

*Vulkan actually doesn't have particularly complete EMA support yet (e.g. no direct memory transfers between GPUs), but I would imagine sufficient API support would precede the release of the first SCD in this scenario.

I think the SCD could be their answer to the problem of faster refresh cycles. If these faster upgrades are here to stay, isn't it better to pay $200 mid-gen for a PS4K type upgrade than buying a brand new system full price, especially when you get perks for sharing it online?
 
It's also important to note that Nintendo never actually intended to sell the Expansion Pak as a stand-alone upgrade. It was only ever intended to come with the 64DD to act as a buffer to accommodate the disk's slow R/W speed. It was only when developers got hold of it they started asking Nintendo to release it stand-alone (Rare and Factor 5 being the main two pushing for it, as far as I'm aware).


It was actually Acclaim; they added a high-resolution mode to Turok 2.
 

AzaK

Member
There are obviously ample technical opportunities for a directly-connected SCD linked to the console by a high-bandwidth connection. With tens of gigabits/s bandwidth (e.g. thunderbolt or other PCIe-based) you can pretty much just use explicit multi-adapter support in Vulkan* as if you had two GPUs in a single box. With a single gigabit/s bandwidth (e.g. Ethernet) you can still do quite a lot, although you have to be a bit more careful in managing bandwidth. However, as interesting as both of these cases are from a technical point of view, I don't see Nintendo pursuing them, because I don't see a business case for attempting to sell people an "upgrade box" that makes their NX more powerful. It's appealing to people like us who want to see Nintendo games on the newest and most impressive hardware, but most people who buy consoles do so because they're simple one-box solutions which will last them four or five years. Introducing an upgrade system just makes things more complicated for consumers, and if there's one thing Nintendo should have learnt from the success of Wii relative to Wii U, it's that simplicity is king.

That's not even considering the difficulty in getting third parties to get on board. Nintendo have enough difficulty getting third parties to bring games to a system with a single hardware configuration and multiplying the number of target configurations wouldn't exactly help the matter.

*Vulkan actually doesn't have particularly complete EMA support yet (e.g. no direct memory transfers between GPUs), but I would imagine sufficient API support would precede the release of the first SCD in this scenario.

Thraktor, have you seen this? I was linked from another thread talking about AMD but interestingly in it is the talk of Explicit Multiadapter - Linked GPUs. GPUs getting smaller but more of them on a SoC. Memory access between them etc.

https://www.youtube.com/watch?v=aSYBO1BrB1I&feature=youtu.be
 

Thraktor

Member
There are certainly issues surrounding the SCD concept, such as:

1) Inventory management of multiple devices/add-ons
2) It reduces the ability to reset with a new hook every 5 years
3) It requires Nintendo to build a good set of APIs around shared processing
4) CPU. GPU work is relatively easy to scale, but if you're CPU-bound or want to use a certain level of CPU for AI etc., that can be harder to manage across various scales.
5) Cut-off points. At what point do you say "OK, games don't have to be backwards compatible"? If it's 5 years, then, well, you're effectively back to a single gen.

I don't disagree that it's simpler for Nintendo and customers to continue the way they are, but I'm not sure it's the right way to go in this changing industry.

I don't see a problem with new NX boxes coming every 3-5 years as technology increases. Just release the NX with new tech. Provided you're marketing the device, people will know about it and provided you assure backwards compatibility, people can continue to use their boxes.

I can see people buying MORE devices, especially amongst the "core" gamer to augment their experience.

I can see a good entry level price for the "casual" gamer.

It would sure be a bold move to attempt something like this, and would require careful consideration and planning for sure.

I think the SCD could be their answer to the problem of faster refresh cycles. If these faster upgrades are here to stay, isn't it better to pay $200 mid-gen for a PS4K type upgrade than buying a brand new system full price, especially when you get perks for sharing it online?

Even taking development difficulties out of the way, I would argue that mid-gen console updates like the PS4K would be a much easier sell than an upgrade box you slap on the side of your existing console. The general public will understand what the PS4K is pretty much immediately when it's announced, just like they understand what the new iPad is or what the new Galaxy S is: a newer, more powerful, fancier version of the preceding product. The issue with the PS4K isn't that it's a new console with faster hardware, it's the implications for continued software support for existing PS4s.

The reason that Apple and Samsung and HTC and so forth can release new phones and tablets on a yearly (or, in the case of Sony, 6-monthly) basis is that when you purchase one of these devices you have a reasonable faith that, even if a replacement comes along, the device you've bought will continue to do what you bought it to do for at least another few years. With a smartphone, the things people expect their phone to do is send emails and texts, browse the internet, support social media and communication apps like Facebook, WhatsApp, etc., take some snapshots and of course make phone calls. The existence of a new iPhone doesn't make the previous models any less capable of doing any of these things, so most iPhone buyers don't have any problem with a new model coming out a year or less after they bought theirs.

With a console, though, people buy them to do one thing; play games. They may occasionally use them for Netflix, but the decision to purchase is dependent almost entirely on the device's ability to play games, both games which have already been released, and unreleased and unannounced games for several years onwards. With the traditional console cycle, the release of a new console has meant the pretty swift death of new software for the console it replaces. At best you'll get a year of poorly performing cross-gen titles and a couple of years of recycled sports games.

Sony obviously wants to change this by forcing developers to make games which run both on PS4K and on the existing PS4. This only solves part of the problem, though, as Sony can ensure third party games still run on PS4, but they can't ensure that they run well. With PS4K almost certainly the lead platform for all releases, and all screenshots, videos, previews and reviews based on the better-looking and better-performing PS4K version, there won't be a whole lot of incentive for developers and publishers to ensure that their games actually run at playable frame rates on existing PS4s. We've already had a number of high-profile games released this gen which consistently fail to hit 30fps on PS4 while it's the lead platform, and with it being relegated to secondary status that situation can only get worse.

This is the danger for iterative games consoles; that the mere existence of the upgrade means that the previous console won't be able to do what people bought it to do. If someone buys a PS4 in early 2016 for $350, but by late 2017 already finds that games he wants to buy are almost unplayable on his console, then that's what's going to differentiate iterative updates in the games console industry from other industries, and that's what's going to prevent it from working here.

If you do want to pursue an iterative update cycle in the games industry, then you need to recognise the fact that someone who pays $300+ for a console has a reasonable expectation of being able to play new games on it, in a playable state, for at least 3 or 4 years after their purchase. You need to find a way to ensure that happens even after the newer model comes out, and you need to be able to assure the customer when they buy the console that that will be the case.

Going back to the NX and SCDs, the situation isn't any different. Nintendo would still need to ensure that games run smoothly on the existing hardware, and they still need a way to show that to the customer. However, in addition to this difficulty, they've got a device which is a lot more complex and confusing than a simple new NX. Look at the history of console add-ons, from the 64DD to the Sega CD and 32X, and they're simply not successful devices. The Expansion Pak doesn't even really count in this category, as it was cheap enough to bundle with a game. For an SCD to be worthwhile, it would have to have substantial processing power, and if Nintendo were planning on using it for even a half-gen jump in performance it would have to be almost as expensive as an entire new console.

Which brings me to the final question. If it's almost as expensive as a new console, then why not just release it as a new console? You get to sell to the entire gaming audience, rather than just existing NX owners, you give developers a much easier time working on a simple single-SoC based system, you have a device which is instantly understandable by consumers at large, and you get to keep yourself away from the horrible commercial failures that almost every single console upgrade before this has been. And if existing owners can easily transfer their games and saves onto the new device, they'll probably end up paying the same or less by selling or trading in their existing system.

There are certainly challenges to implementing an iterative console update cycle, but I think it would be a much more sensible route for Nintendo to take than attempting to sell upgrades that users can bolt onto their existing consoles. In fact, there's one big reason why Nintendo would be best placed among all console makers to do this in a workable manner; the vast majority of games sold on Nintendo platforms are from Nintendo themselves, and although they can't control the experience that third parties provide to owners of older hardware, they can control the experience their own teams give. And with internal teams with literally the best track record of solid frame rates over the past generation or two of any developers, they're well placed to ensure that people who own existing hardware can count on still getting solid versions of new games when the next NX comes out.

It was actually Acclaim; they added a high-resolution mode to Turok 2.

Acclaim was one of the first to release a game which supported it, but I don't know if they ever actually pushed Nintendo to release it (and I doubt they did, as they were less likely to have access to it than Rare and Factor 5). There's an interview around somewhere on Rogue Squadron's development where one of the key people in Factor 5 describes getting a 64DD development kit and realising that the RAM upgrade on its own allowed them to do a lot of extra stuff in the game, so they went to Nintendo and asked them to release it as a stand-alone upgrade. I seem to remember Rare talking about being in a similar position early on in Perfect Dark's development.

Thraktor, have you seen this? I was linked from another thread talking about AMD but interestingly in it is the talk of Explicit Multiadapter - Linked GPUs. GPUs getting smaller but more of them on a SoC. Memory access between them etc.

https://www.youtube.com/watch?v=aSYBO1BrB1I&feature=youtu.be

The guy's obviously fairly knowledgeable, and does a good job of explaining why increased die sizes have lower yields (something people expecting large-die 14nm chips this year would do well to understand), but he makes a few mistakes on the EMA side of things:

1. Yes, two dies of size X will cost less than a single die of size 2X, and four dies of size Y will cost less than one die of size 4Y. However, putting those two or four dies together on the same package is anything but cheap. Have a look at Nintendo's inability to reduce Wii U's price, even though the much more powerful PS4 and XBO are pretty much in the same price bracket as it now. The gamepad is a large part of this, but so is the multi chip module (MCM) at the heart of the machine. That's two small, relatively cheap dies on mature processes packaged together on one module, yet that process of packaging them together on one module is extraordinarily expensive. This was due to the manufacturing differences between the two dies, though; if Nintendo could have fabbed both CPU and GPU on one die on the same process they certainly would have, as it would have saved them a lot of money.
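The yield side of this claim can be made concrete with the standard Poisson defect model (the defect density and die sizes below are invented for illustration): the chance a die is defect-free falls exponentially with its area, so the same wafer produces more good silicon as smaller dies.

```python
import math

# Simple Poisson defect model: P(zero defects) = exp(-D * A),
# where D is defect density and A is die area. D is made up here.
def die_yield(area_cm2, defects_per_cm2=0.2):
    return math.exp(-defects_per_cm2 * area_cm2)

wafer_cm2 = 700.0  # rough usable area of a 300mm wafer

# One big 4 cm^2 die vs two 2 cm^2 dies covering the same silicon.
good_big = (wafer_cm2 / 4.0) * die_yield(4.0)    # good big dies per wafer
good_small = (wafer_cm2 / 2.0) * die_yield(2.0)  # good small dies per wafer

good_area_big = good_big * 4.0      # cm^2 of good silicon, big dies
good_area_small = good_small * 2.0  # cm^2 of good silicon, small dies
```

Small dies win on yield alone, which is the video's point; what this toy model leaves out, as argued above, is the considerable cost of packaging the small dies back together.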

Multi-chip modules where both chips share the same fabrication process are extraordinarily rare, due to the cost of packaging it all together, and they're pretty much restricted to cases where a single die would be absolutely monstrous (an example being IBM, who have occasionally used MCMs for their POWER server chips). It's theoretically possible that yields on future processes become so low that it becomes a feasible strategy, but that's a long way off. Don't take my word for it, though, here's one of AMD's technical staff from their Radeon group talking about it in a recent reddit AMA:

Yes, it is absolutely possible that one future for the chip design industry is breaking out very large chips into multiple small and discrete packages mounted to an interposer. We're a long ways off from that as an industry, but it's definitely an interesting way to approach the problem of complexity and the expenses of monolithic chips.

2. The other alternative is keeping each GPU on its own package, similar to the Pro Duo or other traditional multi-GPU cards. The difficulty with this is that you have to give each GPU its own pool of RAM. The video sort of skims over this, but even with two identical GPUs making the most of those two separate pools of memory is far from trivial. Even with VR, which is treated as one of the ideal cases for multi-GPU setups, you have to duplicate virtually everything across both pools. Talented developers would be able to make better use of the two pools, but they'd still be a lot happier with one big pool.

As a console designer, even if you've got the best APIs in the world to accommodate the two GPUs and the two memory pools, you're still going to vastly prefer a single GPU with a single shared pool of RAM. Not only does it allow developers to much more easily get the most out of the hardware, but it allows for a much simpler mainboard layout, a simpler cooling system, a smaller console and reduced logistical costs. Over the course of the existence of 3D consoles we've seen a reduction in the number of CPUs and ASICs and the number of discrete memory pools for precisely these reasons.
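The cost of split memory pools is easy to quantify with a toy example (the asset sizes below are invented): anything both GPUs need, such as textures in a VR split-eye setup, has to live in both pools, so the effective capacity is well below the sum.

```python
# Two discrete 4 GB pools vs one shared 8 GB pool, with assets that
# both GPUs need. All sizes invented for illustration.
POOL_GB = 4.0
shared_assets_gb = 3.0  # textures, geometry: needed by both GPUs
per_gpu_gb = 0.5        # render targets etc., unique per GPU

# Discrete pools: shared assets must be duplicated into each pool.
free_discrete = 2 * (POOL_GB - shared_assets_gb - per_gpu_gb)

# Unified pool: shared assets stored once.
free_unified = 2 * POOL_GB - shared_assets_gb - 2 * per_gpu_gb
```

With these numbers the discrete setup has 1 GB of total headroom left against 4 GB for the unified pool, which is the "one big pool" preference in a nutshell.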

Have a look at the Saturn motherboard (first image) and the original PlayStation motherboard (second image):


Now, look at the XBO motherboard, followed by the PS4 motherboard:


Aside from an increase in the number of RAM modules, game console motherboards have only become simpler over time. Bringing as much as possible onto a single die simply makes consoles easier to design and manufacture.

I've left Nintendo off this one because the N64 had a fairly modern architecture for the time; one CPU chip, one GPU chip (which also handled sound and contained the memory controller) and a single unified memory pool. The only real layout difference for Nintendo over the years is moving the CPU and GPU onto the same package with Wii U.

3. In a hypothetical NX SCD scenario you're almost certainly looking at an asymmetric performance balance between the two GPUs, potentially a very heavily asymmetric one (particularly if they release an NX now which competes with XBO and PS4 and then an SCD a few years down the line to compete with PS5). In this case I can see a lot of third parties, if they do support the SCD, taking the simplest possible route; do pretty much everything on the more powerful GPU, and perhaps run a few compute shaders on the weaker GPU. This wouldn't be much help to AMD if they want third parties to leverage that experience for PC games, as that kind of approach isn't going to yield much of a performance uplift in a symmetric multi-GPU scenario. Nintendo's internal studios would probably make good use of both chips, but I wouldn't expect much from external teams.
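The "simplest possible route" described above amounts to a static work split like this (job kinds and costs invented): everything graphical stays on the strong GPU, and only a few latency-tolerant compute jobs move to the weak one.

```python
# Hypothetical static split: all rendering on the strong GPU, only
# compute-shader jobs on the weak one. Costs are invented units.
def naive_split(jobs):
    strong = [j for j in jobs if j["kind"] == "render"]
    weak = [j for j in jobs if j["kind"] == "compute"]
    return strong, weak

frame = [
    {"kind": "render", "cost": 10},  # geometry and lighting
    {"kind": "render", "cost": 6},   # post-processing
    {"kind": "compute", "cost": 2},  # e.g. particle sim or culling
]
strong_jobs, weak_jobs = naive_split(frame)
strong_cost = sum(j["cost"] for j in strong_jobs)
weak_cost = sum(j["cost"] for j in weak_jobs)
```

Here the weak GPU carries roughly a tenth of the frame's work, which is why this route yields so little uplift, and why it teaches third parties nothing useful about symmetric multi-GPU on PC.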
 

AzaK

Member
Even taking development difficulties out of the way, I would argue that mid-gen console updates like the PS4K would be a much easier sell than an upgrade box you slap on the side of your existing console. The general public will understand what the PS4K is pretty much immediately when it's announced, just like they understand what the new iPad is or what the new Galaxy S is, it's a newer, more powerful, fancier version of the preceding product. The issue with the PS4K isn't that it's a new console with faster hardware, it's the implications for continued software support for existing PS4s.
Don't disagree. It will be simpler for the customer.

The reason that Apple and Samsung and HTC and so forth can release new phones and tablets on a yearly (or, in the case of Sony, 6-monthly) basis is that when you purchase one of these devices you have a reasonable faith that, even if a replacement comes along, the device you've bought will continue to do what you bought it to do for at least another few years. With a smartphone, the things people expect their phone to do is send emails and texts, browse the internet, support social media and communication apps like Facebook, WhatsApp, etc., take some snapshots and of course make phone calls. The existence of a new iPhone doesn't make the previous models any less capable of doing any of these things, so most iPhone buyers don't have any problem with a new model coming out a year or less after they bought theirs.

With a console, though, people buy them to do one thing; play games. They may occasionally use them for Netflix, but the decision to purchase is dependent almost entirely on the device's ability to play games, both games which have already been released, and unreleased and unannounced games for several years onwards. With the traditional console cycle, the release of a new console has meant the pretty swift death of new software for the console it replaces. At best you'll get a year of poorly performing cross-gen titles and a couple of years of recycled sports games.

Sony obviously wants to change this by forcing developers to make games which run both on PS4K and on the existing PS4. This only solves part of the problem, though, as Sony can ensure third party games still run on PS4, but they can't ensure that they run well. With PS4K almost certainly the lead platform for all releases, and all screenshots, videos, previews and reviews based on the better-looking and better-performing PS4K version, there won't be a whole lot of incentive for developers and publishers to ensure that their games actually run at playable frame rates on existing PS4s. We've already had a number of high-profile games released this gen which consistently fail to hit 30fps on PS4 while it's the lead platform, and with it being relegated to secondary status that situation can only get worse.

I guess we disagree here. There's nothing stopping Sony from doing some quality control on titles to ensure they run on the PS4 as standard and in fact I think Sony MUST ensure quality. People who have a PS4 will be mighty pissed off and feel very burnt if in a year or so their PS4 runs everything at 10fps.

There's also a feedback loop at work here. If a developer releases shit PS4 games then that will also hurt their image, which will hurt future sales, until, I guess, the PS4K is the mainstream console and outselling the PS4 by a large margin, i.e. until it looks like a typical generational shift-over.

If you do want to pursue an iterative update cycle in the games industry, then you need to recognise the fact that someone who pays $300+ for a console has a reasonable expectation of being able to play new games on it, in a playable state, for at least 3 or 4 years after their purchase. You need to find a way to ensure that happens even after the newer model comes out, and you need to be able to assure the customer when they buy the console that that will be the case.

Absolutely. This is the key point I feel and I guess I just feel like it's a possible thing with great benefits all around.

Going back to the NX and SCDs, the situation isn't any different. Nintendo would still need to ensure that games run smoothly on the existing hardware, and they still need a way to show that to the customer. However, in addition to this difficulty, they've got a device which is a lot more complex and confusing than a simple new NX. Look at the history of console add-ons, from the 64DD to the Sega CD and 32X, and they're simply not successful devices. The Expansion Pak doesn't even really count in this category, as it was cheap enough to bundle with a game. For an SCD to be worthwhile, it would have to have substantial processing power, and if Nintendo were planning on using it for even a half-gen jump in performance it would have to be almost as expensive as an entire new console.

I can't say much about this as I'm not sure of the cost of every component. However there are costs to be saved with controller and optical/magnetic drives, as well as economies of scale.

Which brings me to the final question. If it's almost as expensive as a new console, then why not just release it as a new console? You get to sell to the entire gaming audience, rather than just existing NX owners, you give developers a much easier time working on a simple single-SoC based system, you have a device which is instantly understandable by consumers at large, and you get to keep yourself away from the horrible commercial failures that almost every single console upgrade before this has been. And if existing owners can easily transfer their games and saves onto the new device, they'll probably end up paying the same or less by selling or trading in their existing system.
Without knowing the price we can't say, but I agree it's a simpler process than an expansion. There are also a lot of pluses if we're looking at it from the patent's point of view: multiple boxes in the house that people can use to share processing. So they could also be full consoles, but assist each other.

3. In a hypothetical NX SCD scenario you're almost certainly looking at an asymmetric performance balance between the two GPUs, potentially a very heavily asymmetric one (particularly if they release an NX now which competes with XBO and PS4 and then an SCD a few years down the line to compete with PS5). In this case I can see a lot of third parties, if they do support the SCD, taking the simplest possible route; do pretty much everything on the more powerful GPU, and perhaps run a few compute shaders on the weaker GPU. This wouldn't be much help to AMD if they want third parties to leverage that experience for PC games, as that kind of approach isn't going to yield much of a performance uplift in a symmetric multi-GPU scenario. Nintendo's internal studios would probably make good use of both chips, but I wouldn't expect much from external teams.
I imagine it will depend on the APIs. If the GPUs are presented through a coherent API, I can't see why they couldn't be used for split rendering, or post-processing at a minimum. If they are presented as a single GPU, assuming a fast bus, or at least device info so developers can adjust accordingly, it may be even simpler.

In summary I don't totally disagree with your points, I just feel that if someone could sort out the kinks in the plan it could open up a whole lot of opportunities for everyone.
 