
Confirmed: The Nintendo Switch is powered by an Nvidia Tegra X1


tkscz

Member
One question: We know that the Switch is 3 or 4 times more powerful than the last-gen machines in GPU power and has 8 times more RAM.

But how does it compare to the PS3 and X360 in CPU capacity? How much stronger is the Switch CPU?

CPUs are tricky bastards. On pure technical standing, the Cell in the PS3 has advantages over the Jaguar in the PS4, and the same goes for the A57 in the Switch. Compared to the Cell and Xenon, the A57 has a few advantages of its own, mainly that it's far more efficient. However, it's slower (all CPUs now are slower than Cell).
 

Pasedo

Member
Ok how about this for a summary. We can expect graphics to look as good if not better than the best graphics on PS3 with world size and interactions as good if not better than BOTW :p

 
I thought the GPU these days could do most of what the CPU used to do. I think more physics and interactions definitely improve the gameplay experience more than graphics that already looked good several years ago, and I think this is what devs should be focusing on today. It's kind of dumb, then, that there's so much focus on GPU power and GFLOPS. Let's get that CPU power going.

The industry has seemingly decided that graphics sell games and systems far better than things like AI and physics. And it's really not hard to figure out why, since it's far easier to sell someone on a game by showing them a picture or video than explaining how these game systems work and how the game plays. It's upsetting to me personally, but it seems to be the way the big AAA devs are going. And console makers, judging by these current gen CPUs.

So why doesn't the Switch? Does it get lost in the USB-C to HDMI conversion?

Do we know for a fact the Switch can't support HDR and 4k?

Thanks.
As for being relevant... well, I was just trying to understand whether it would make sense for Nintendo to release a more powerful, home-only (no dock, no screen, etc.) Switch by leveraging the very same hardware, just with higher clocks.

Any revision or home-only Switch (which I personally doubt we'll see) will likely have a new chip like the TX2 or Xavier/Volta, depending on when it releases, which is why I say getting a TX1 at its full clock speeds doesn't seem terribly relevant to the Switch. It would also likely take a very large enclosure and a lot of cooling, judging by the OG Shield TV. I doubt a standard videogame box could do it.

AMD drivers always suck at optimising the performance of their cards on PC. Stop using PC benchmarks for that "NVIDIA FLOPS vs. AMD FLOPS" meme. There are differences in how the GPUs work that go beyond the drivers like tile-based rendering and color compression, but that doesn't change the definition of FLOPS.

You also can't use the gap for higher-end GPUs to deduce some theoretical gap for lower-end GPUs. That's a logical fallacy unless you clearly know what the bottlenecks are for both sets of cards and whether they scale linearly.

I'm practically a Nvidia fan but reading these arguments is still cringeworthy.

From what I understand, FLOPS are a measure of the theoretical maximum for calculations per second, and different APIs allow developers to get closer to that maximum than others, which is where the "Nvidia advantage" comes from in PC environments. As for consoles, I don't think z0m3le is saying that the Switch will have that same advantage over PS4/XB1, but if you want to try to compare the Switch to a current PC card, it makes sense to factor in those types of advantages.

And we don't know for sure that the NVN/Vulkan APIs won't give the Switch any effective FLOP advantage over the PS4/XB1, though if they do, it'll be way less than 40%.
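For anyone wanting to check the raw numbers behind this back-and-forth, here's the standard back-of-envelope peak-FLOPS formula in Python. The clocks and core counts below are the commonly reported figures for each machine, not official confirmations:

```python
# Theoretical FP32 peak = shader cores x clock x 2 (one FMA counts as 2 ops).
# Clocks/core counts are the commonly reported figures, not official specs.
def peak_gflops(shader_cores, clock_ghz, ops_per_cycle=2):
    return shader_cores * clock_ghz * ops_per_cycle

print(f"Switch docked:   {peak_gflops(256, 0.768):.0f} GFLOPS")   # ~393
print(f"Switch portable: {peak_gflops(256, 0.3072):.0f} GFLOPS")  # ~157
print(f"XB1:             {peak_gflops(768, 0.853):.0f} GFLOPS")   # ~1310
print(f"PS4:             {peak_gflops(1152, 0.800):.0f} GFLOPS")  # ~1843
```

Note that none of these numbers say anything about per-flop efficiency, which is exactly what's being argued about here.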
 

BDGAME

Member
CPUs are tricky bastards. On pure technical standing, the Cell in the PS3 has advantages over the Jaguar in the PS4, and the same goes for the A57 in the Switch. Compared to the Cell and Xenon, the A57 has a few advantages of its own, mainly that it's far more efficient. However, it's slower (all CPUs now are slower than Cell).

From what I understand, the old CPUs are fast, but modern CPUs can handle multiple tasks a lot better.

It's like having one big line that moves fast vs. multiple lines that are not as fast but can resolve multiple problems at once. Plus, the usage of multiple cores is a lot better on today's CPUs too.
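That queue analogy is basically Amdahl's law. A minimal sketch, where the 80% parallel fraction and the clock figures are made-up illustrations rather than real console numbers:

```python
# Amdahl's law: speedup from N cores when a fraction p of the work can be
# split across them. The 0.8 fraction and the clocks are illustrative only.
def speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

workload_parallel = 0.8
fast_single = 3.2 * speedup(workload_parallel, 1)  # one fast 3.2 GHz "line"
slow_quad   = 1.5 * speedup(workload_parallel, 4)  # four slower 1.5 GHz lines

print(f"single fast core: {fast_single:.2f}")  # 3.20
print(f"four slow cores:  {slow_quad:.2f}")    # 3.75 -> the slow lines win
```

Once enough of the workload can be split up, the slower-but-wider setup pulls ahead; on mostly serial work, the single fast line still wins.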
 
Ok how about this for a summary. We can expect graphics to look as good if not better than the best graphics on PS3 with world size and interactions as good if not better than BOTW :p


Uncharted 3 is probably a bad example given that it uses the Cell processor to the max. Better examples would be the AAA games ported to Shield TV like Crysis 3 and Borderlands 2.
 

Hermii

Member
No, it technically does support it, but there is no video service on Switch yet and what title do you expect to have HDR? Wii U ports are probably not going to get it.

I don't know how hard it is to implement from a dev standpoint, but every single title could potentially benefit from it, and it adds zero extra processing requirements.
 

Marmelade

Member
From what I understand, FLOPS are a measure of the theoretical maximum for calculations per second, and different APIs allow developers to get closer to that maximum than others, which is where the "Nvidia advantage" comes from in PC environments. As for consoles, I don't think z0m3le is saying that the Switch will have that same advantage over PS4/XB1, but if you want to try to compare the Switch to a current PC card, it makes sense to factor in those types of advantages.

And we don't know for sure that the NVN/Vulkan APIs won't give the Switch any effective FLOP advantage over the PS4/XB1, though if they do, it'll be way less than 40%.

I think Fafalada said it best

There are also situations where that advantage is exactly reversed on the same gen GPUs (NVidia ending up running the same code 30-40% slower), but that says nothing for console software on the market. Consoles will target specific hw advantages where it makes sense, and likewise avoid the pitfalls.
Cross-GPU comparisons are only really relevant to PC software where there's little to no hw (as opposed to API or hw-family)-targeted optimizations of any kind.

And what advantage do you expect Vulkan to give Switch over what's used in PS4, for example? (Serious question, as I didn't really think it could give any compared to other consoles' equivalents.)
 

tkscz

Member
From what I understand, the old CPUs are fast, but modern CPUs can handle multiple tasks a lot better.

It's like having one big line that moves fast vs. multiple lines that are not as fast but can resolve multiple problems at once. Plus, the usage of multiple cores is a lot better on today's CPUs too.

Exactly. Efficiency can be more important than pure speed, IF the efficiency is done well, which is where the Jaguar tends to lose ground. The Jaguar CPU is not very efficient due to how AMD made their CPUs at the time (they were made to handle large chunks of data at one time better than handling multiple smaller chunks). The A57 is technically more data-efficient than the Jaguar and Cell, and definitely more efficient than Xenon and Espresso. However, it isn't nearly as fast as Jaguar and Espresso, and nowhere near the speed of Cell and Xenon. There are also things like cache and bus, but I don't want to get into all that without knowing the bus speed and cache levels/sizes of the Switch.
 
I think Fafalada said it best

And what advantage do you expect Vulkan to give Switch over what's used in PS4, for example? (Serious question, as I didn't really think it could give any compared to other consoles' equivalents.)

Well I really don't know as I'm not an expert in APIs or programming at all, but given Vulkan is a more modern API which the PS4 and XB1 don't support (I believe) and NVN is several years newer than anything in the PS4/XB1, and given that Nvidia has demonstrated quite frequently that they make better APIs than AMD, I would expect some small advantage. Maybe as a complete guess 5%?

Exactly. Efficiency can be more important than pure speed, IF the efficiency is done well, which is where the Jaguar tends to lose ground. The Jaguar CPU is not very efficient due to how AMD made their CPUs at the time (they were made to handle large chunks of data at one time better than handling multiple smaller chunks). The A57 is technically more data-efficient than the Jaguar and Cell, and definitely more efficient than Xenon and Espresso. However, it isn't nearly as fast as Jaguar and Espresso, and nowhere near the speed of Cell and Xenon. There are also things like cache and bus, but I don't want to get into all that without knowing the bus speed and cache levels/sizes of the Switch.

Since you are knowledgeable about these CPUs: a couple days ago I was wondering if there could be a game made on PS4/XB1 which utilizes all 6 Jaguar cores so fully that it would simply be impossible to port to the Switch, with 3 cores clocked far slower. Since A57s do have some efficiency advantages, do you think any CPU-heavy tasks could just be reoptimized given enough time, or is there actually a limit to what could feasibly be ported from 6 fast cores to 3 slower but more efficient ones?
 

Pasedo

Member
Uncharted 3 is probably a bad example given that it uses the Cell processor to the max. Better examples would be the AAA games ported to Shield TV like Crysis 3 and Borderlands 2.

Is the Cell processor better than the Switch's? I read somewhere it was a beast that took some time to master, but still... it's like over 10 years old, yeah?
 

z0m3le

Banned
I think Fafalada said it best



And what advantage do you expect Vulkan to give Switch over what's used in PS4, for example? (Serious question, as I didn't really think it could give any compared to other consoles' equivalents.)

My point was zeroed in on by Skittzo0413. I'm trying to give an example of how Switch would run "x" title, and the AMD APU A8 7600 in DX11 is probably closer to the realistic performance we'd see on the Switch because of the bandwidth limitations. The GFLOPS don't have to be 1:1 here, as mixed precision OR Nvidia's advantage over AMD on PC in DX11 is pretty close to a general thing. The point is not "Switch at 400 GFLOPS will perform like a PS4 Pro"; the point is that the A8 7600's performance should be fairly close to what Switch is capable of, at least from the GPU side.

There are a lot of comparisons to what that means for current gen consoles, and I try to bridge that gap somewhat, but the reality is, it doesn't really matter how it compares to the XB1 or PS4. It's a handheld that can play current gen games, and while it will obviously need to reduce performance, it's not going to be a different game that developers need to make. All of these engines scale, and no one who doesn't work at Naughty Dog is sitting around maxing out six and a half PS4 CPU cores all the time. If a game is multiplatform, and it's coming to PC too, don't expect a wide CPU utilization, but a deep one. Developers would much rather use ~4 PS4 CPU cores at maybe 80% and an occasional 5th or 6th thread for smaller scripts, pushing maybe 2 cores higher, because that way they aren't limited by the CPU. And games really don't even show much CPU logic going on; I mean, Zelda is one of the more impressive examples of AI and general compute, and it's running on ancient Wii U CPU cores that can easily move over to Switch.
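As a rough sketch of that porting math, using the thread's 6-vs-3 core assumption and the known clocks; the A57 IPC factor is a hypothetical placeholder, since real per-workload figures aren't public:

```python
# Aggregate CPU budget ~ cores available to games x clock x relative IPC.
# The 1.1 IPC factor for the A57 is a hypothetical placeholder.
def cpu_budget(cores, clock_ghz, relative_ipc=1.0):
    return cores * clock_ghz * relative_ipc

ps4    = cpu_budget(6, 1.6)         # 6 Jaguar cores available to games
switch = cpu_budget(3, 1.02, 1.1)   # 3 A57 cores at the reported 1.02 GHz
print(f"PS4 ~{ps4:.1f} vs Switch ~{switch:.1f} -> {ps4/switch:.1f}x gap")
```

Under those assumptions the gap is around 3x, which is why the "deep, not wide" utilization pattern described above matters so much for ports.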

Well I really don't know as I'm not an expert in APIs or programming at all, but given Vulkan is a more modern API which the PS4 and XB1 don't support (I believe) and NVN is several years newer than anything in the PS4/XB1, and given that Nvidia has demonstrated quite frequently that they make better APIs than AMD, I would expect some small advantage. Maybe as a complete guess 5%?

The advantage we can probably expect is in frame response; Nvidia really kills it in this area even in DX11, with drastic differences vs. AMD in DX12 (which is close to what the XB1, at least, is using for an API).
 

Interfectum

Member
Is the Cell processor better than the Switch's? I read somewhere it was a beast that took some time to master, but still... it's like over 10 years old, yeah?

If I'm not mistaken the Cell processor is even 'better' than PS4, PS4 Pro and Xbone (maybe Scorpio?). The thing was a beast.
 

Pasedo

Member
I don't know how hard it is to implement from a dev standpoint, but every single title could potentially benefit from it, and it adds zero extra processing requirements.

Yes, I was wondering if the Switch could potentially do HDR. It does make a definite difference to visuals. I'm assuming the screen on the Switch wouldn't support HDR, right? I don't even know the reason why screens under 4K don't support it. Pretty sure it would look amazing on any screen, including the 6.2in on the Switch.
 

dr_rus

Member
There are also situations where that advantage is exactly reversed on the same gen GPUs (NVidia ending up running the same code 30-40% slower), but that says nothing for console software on the market. Consoles will target specific hw advantages where it makes sense, and likewise avoid the pitfalls.
Cross-GPU comparisons are only really relevant to PC software where there's little to no hw (as opposed to API or hw-family)-targeted optimizations of any kind.

Best examples of code optimizations for GCN (in any API really) puts them roughly on the same level of per flop efficiency as Paxwells. Assuming that all console software will be ideally optimized for the underlying h/w is quite a stretch. So it's not out of the realms of possibility to expect that Maxwell's average per flop efficiency in Switch will be higher than that of GCN in XBO and PS4.

On the other hand, it's obvious that this won't be enough to reach even XBO's processing power levels, let alone PS4's. Only that the gap would be smaller than one would think from straight peak flops comparison.
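To illustrate the mechanics of that argument, a small sketch: the peak figures are the commonly cited ones, and the efficiency percentages are purely hypothetical, just to show how the on-paper gap narrows.

```python
# 'Effective' throughput = peak GFLOPS x average utilization. The peaks are
# commonly reported figures; the efficiency numbers are purely hypothetical.
def effective_gflops(peak, efficiency):
    return peak * efficiency

xb1    = effective_gflops(1310, 0.70)  # hypothetical average GCN utilization
switch = effective_gflops(393, 0.85)   # hypothetical Maxwell utilization
print(f"paper gap {1310 / 393:.1f}x -> 'effective' gap {xb1 / switch:.1f}x")
```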
 

z0m3le

Banned
Best examples of code optimizations for GCN (in any API really) puts them roughly on the same level of per flop efficiency as Paxwells. Assuming that all console software will be ideally optimized for the underlying h/w is quite a stretch. So it's not out of the realms of possibility to expect that Maxwell's average per flop efficiency in Switch will be higher than that of GCN in XBO and PS4.

On the other hand, it's obvious that this won't be enough to reach even XBO's processing power levels, let alone PS4's. Only that the gap would be smaller than one would think from straight peak flops comparison.

Take note everyone: I 100% agree with all of this.^^^ I'm not looking for "secret sauce" just talking about what Tegra X1 means for current gen games coming to Switch.
 

Pasedo

Member
Hey, another question. I remember when we used to go on about the GPGPU aspect of the Wii U giving it some extra potential. Do we have that on Switch, or is the X1 a totally different beast?
 

tkscz

Member
Well I really don't know as I'm not an expert in APIs or programming at all, but given Vulkan is a more modern API which the PS4 and XB1 don't support (I believe) and NVN is several years newer than anything in the PS4/XB1, and given that Nvidia has demonstrated quite frequently that they make better APIs than AMD, I would expect some small advantage. Maybe as a complete guess 5%?



Since you are knowledgeable about these CPUs: a couple days ago I was wondering if there could be a game made on PS4/XB1 which utilizes all 6 Jaguar cores so fully that it would simply be impossible to port to the Switch, with 3 cores clocked far slower. Since A57s do have some efficiency advantages, do you think any CPU-heavy tasks could just be reoptimized given enough time, or is there actually a limit to what could feasibly be ported from 6 fast cores to 3 slower but more efficient ones?

Physics are the only thing I can think of that could be that CPU-heavy. So if, say, there was a game where every tiny thing was affected by the physics engine uniquely depending on how it was interacted with (think BotW but 10x more physics-heavy), then that game would be impossible to port from the Jaguar to the A57, as they'd have to remove a lot of the physics engine due to the lack of pure power. Other than that, not much else. Did we ever find out if the cache on the Switch's X1 was different from that on a normal X1?
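To put rough numbers on why that kind of physics load is brutal for the CPU, here's a minimal sketch. A naive collision pass checks every pair of objects, so the work grows quadratically with object count (real engines prune with broad-phase structures, but the underlying pressure is the same; the counts below are illustrative only):

```python
# Naive collision detection: every object checked against every other, so
# the work grows quadratically with object count. Counts are illustrative.
def collision_pairs(n_objects):
    return n_objects * (n_objects - 1) // 2

for n in (100, 1_000, 10_000):
    print(f"{n:>6} objects -> {collision_pairs(n):>12,} pair checks per frame")
```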

Is the Cell processor better than the Switch's? I read somewhere it was a beast that took some time to master, but still... it's like over 10 years old, yeah?

Cell was way ahead of its time and would've made a stellar server CPU a few years back. However, it still can't multitask AS WELL as what's currently available. As I said before, AMD's Jaguar (and by extension Bulldozer) was made to handle a lot of data all at once better than other CPUs; however, multitasking was not its strong point (a reason why AMD is so far behind Intel). So to answer your question, Cell is better at handling large loads of data (like just huge loads of data) and runs very fast, but the A57 is better at multitasking.
 

Hermii

Member
Physics are the only thing I can think of that could be that CPU-heavy. So if, say, there was a game where every tiny thing was affected by the physics engine uniquely depending on how it was interacted with (think BotW but 10x more physics-heavy), then that game would be impossible to port from the Jaguar to the A57, as they'd have to remove a lot of the physics engine due to the lack of pure power. Other than that, not much else. Did we ever find out if the cache on the Switch's X1 was different from that on a normal X1?

It's identical; no customizations at all can be spotted in the die shot.
 

Mokujin

Member
Yes, when people took care to specifically craft their code for its many idiosyncrasies, it was quite powerful.

From my understanding, most of the things achieved by good SPU utilization can be done faster and more efficiently by modern GPUs and a multi-core CPU, though.
 
Is the Cell processor better than the Switch's? I read somewhere it was a beast that took some time to master, but still... it's like over 10 years old, yeah?

Yes, it's better, but the Shield TV comparison is really what's important if you're trying to calibrate your Switch expectations. It's the exact same hardware, substantially underclocked for portable use. So take a look at the top-tier games ported to Shield TV, pray to god there are hw optimizations that can mitigate the clock difference, and that's what you can expect from Switch. Its capabilities fall somewhere in between the 360 and PS3 (when properly supported).
 

TitanSloth

Neo Member
I think Fafalada said it best



And what advantage do you expect Vulkan to give Switch over what's used in PS4, for example? (Serious question, as I didn't really think it could give any compared to other consoles' equivalents.)

In a Unity 5.6 preview, Vulkan was using a lot less power to run a game compared to OpenGL ES on mobile, meaning better battery life on the Switch for Unity games.
 

Pasedo

Member
Yes, it's better, but the Shield TV comparison is really what's important if you're trying to calibrate your Switch expectations. It's the exact same hardware, substantially underclocked for portable use. So take a look at the top-tier games ported to Shield TV, pray to god there are hw optimizations that can mitigate the clock difference, and that's what you can expect from Switch. Its capabilities fall somewhere in between the 360 and PS3 (when properly supported).

So the PS3 rendered its games mostly at 720p. If the Switch is similar in portable mode but renders at 480p, you'd still get a pretty good bump in performance, yeah? On a 6.2in screen I reckon 480p would be fine. In docked mode I don't care if they upscale from 720p. I think this is fine personally. It would again mean significantly more leg room.
 

z0m3le

Banned
So the PS3 rendered its games mostly at 720p. If the Switch is similar in portable mode but renders at 480p, you'd still get a pretty good bump in performance, yeah? On a 6.2in screen I reckon 480p would be fine. In docked mode I don't care if they upscale from 720p. I think this is fine personally. It would again mean significantly more leg room.

The Wii U gamepad's 6.2-inch screen is 480p, and without the weird streaming artifacts it definitely is nice. It wouldn't work for small fonts like in Xenoblade X, but it's pretty decent for everything else. 540p should be a "good enough" solution for everyone though, and it's close enough to half of 720p that the docked version of such a game at 720p should be fine.
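For reference, the pixel counts behind those resolution choices (assuming 854x480 for "480p" at 16:9):

```python
# Pixel counts for the handheld resolutions under discussion; 540p carries
# 56% of 720p's pixels, which is why it's 'close enough to half'.
resolutions = {"480p": (854, 480), "540p": (960, 540),
               "720p": (1280, 720), "1080p": (1920, 1080)}
base = 1280 * 720
for name, (w, h) in resolutions.items():
    print(f"{name:>5}: {w * h:>9,} px ({w * h / base:.2f}x of 720p)")
```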
 

bomblord1

Banned
Yes, it's better, but the Shield TV comparison is really what's important if you're trying to calibrate your Switch expectations. It's the exact same hardware, substantially underclocked for portable use. So take a look at the top-tier games ported to Shield TV, pray to god there are hw optimizations that can mitigate the clock difference, and that's what you can expect from Switch. Its capabilities fall somewhere in between the 360 and PS3 (when properly supported).

Ok, this is getting a tad ridiculous. Unless you are talking solely about the CPU (if you are, I apologize), the Switch is way above those expectations even in handheld mode.

First, the Shield TV has examples of games running at 1080p and 60fps when the Xbox 360/PS3 versions were 720p and could not maintain a solid 60fps: https://www.youtube.com/watch?v=je7-Ot4zyf0. And that was while hampered by Android's bloat, inefficient APIs, and the fact that the Shield TV underclocks its GPU under heavy load to the exact same clocks that the Switch uses docked, according to a GAF user who tested it.

Then we have the fact that the Switch has 8x the RAM of the PS360, with higher bandwidth to the main memory pool, on top of the obvious architectural advantages given that the Switch's GPU is 7 GPU generations newer than the one in the PS3/360.

We already have games like Fast doing things that were not seen in any PS360 game that I'm aware of (4k+ textures, PBR, full atmospheric light scattering) and doing it at a higher resolution/framerate than any demanding PS3 game ran at natively.
 
Yes, it's better, but the Shield TV comparison is really what's important if you're trying to calibrate your Switch expectations. It's the exact same hardware, substantially underclocked for portable use. So take a look at the top-tier games ported to Shield TV, pray to god there are hw optimizations that can mitigate the clock difference, and that's what you can expect from Switch. Its capabilities fall somewhere in between the 360 and PS3 (when properly supported).

You realize the Shield TV throttles to (wait for it) around the same clocks as the Switch's X1?
 

LordOfChaos

Member
Would it be possible to *completely* avoid throttling on a chip like the Tegra X1 pushed to the max supported frequency with a properly sized fan+heatsink solution?



Of course, many full-size heatsinks can whisk away far more TDP than the TX1's max.


The chip may have self-limiting thermal hotspotting, but if the cooler was overkill enough I think it could run at max regardless.
The Switch doesn't do any boosting with excess TDP so it's irrelevant there, but if you bolted better coolers onto the Shield it would probably run closer to max, as that stays as high as it can until it thermal throttles.
 
Yes, it's better, but the Shield TV comparison is really what's important if you're trying to calibrate your Switch expectations. It's the exact same hardware, substantially underclocked for portable use. So take a look at the top-tier games ported to Shield TV, pray to god there are hw optimizations that can mitigate the clock difference, and that's what you can expect from Switch. Its capabilities fall somewhere in between the 360 and PS3 (when properly supported).

As others have said, this simply isn't true. The Shield TV throttles after just a few minutes to the same levels we see in the docked Switch, so Shield TV performance should actually be the baseline for expectations for the docked Switch.

However, this is without taking into account the massive Android overhead the Shield TV has to use which substantially reduces performance. So we should be seeing games perform quite a bit better on the Switch than on the Shield TV.

And the Switch has an extra 1GB of RAM, which people (including me just now) tend to forget.
 

z0m3le

Banned
Of course, many full-size heatsinks can whisk away far more TDP than the TX1's max.


The chip may have self-limiting thermal hotspotting, but if the cooler was overkill enough I think it could run at max regardless.

I think the CPU clock is mostly limited because the A57 has heavy power draw. I do wonder if they can push it up to 1.2GHz eventually; even if it means drawing back memory bandwidth, it might be worth it if they find CPU bottlenecks. Looking at AMD APUs and their bandwidth, I'm no longer too worried about what Switch is working with.
 

LordOfChaos

Member
If I'm not mistaken the Cell processor is even 'better' than PS4, PS4 Pro and Xbone (maybe Scorpio?). The thing was a beast.

On pure SIMD it has a higher theoretical peak than the modern consoles.

Pure SIMD isn't what a game does all day though, and Jaguar and A57 are both better suited to branchy, less linear/predictable dynamic game code.

The SPUs, which were the bulk of Cells power, had no prefetching and a small local memory to work off.
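A software analogy for that workload split, contrasting uniform array math with per-element branching. This illustrates the shape of the problem rather than actual Cell or A57 performance, so treat it as a sketch:

```python
import timeit
import numpy as np

# Uniform arithmetic over a big array is the SIMD-friendly shape the SPUs
# loved; per-element, data-dependent branching is the shape they hated.
a = np.random.rand(100_000).astype(np.float32)

def uniform():                      # one rule for every element
    return a * 2.0 + 1.0

def branchy():                      # a decision per element
    return [x * 2.0 if x > 0.5 else x - 1.0 for x in a]

print("uniform:", timeit.timeit(uniform, number=20))
print("branchy:", timeit.timeit(branchy, number=20))  # far slower
```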


Oh boy, the 2006 console wars, memories :p
 

tkscz

Member
On pure SIMD it has a higher theoretical peak than the modern consoles.

Pure SIMD isn't what a game does all day though, and Jaguar and A57 are both better suited to branchy, less linear/predictable dynamic game code.

The SPUs, which were the bulk of Cells power, had no prefetching and a small local memory to work off.


Oh boy, the 2006 console wars, memories :p

Oh god, the SIMD arguments. I mean, back then that was good: more SIMD meant more data chunks it could handle at a time. SIMD is still used, even in the A57, but modern CPUs have so much more for better multitasking.
 
Then we have the fact that the Switch has 8x the RAM of the PS360, with higher bandwidth to the main memory pool, on top of the obvious architectural advantages given that the Switch's GPU is 7 GPU generations newer than the one in the PS3/360.
Switch's memory bandwidth is not likely to be much better in practice than either 360 or PS3. The raw number is marginally higher, but both the last-gen consoles were architected to improve the bottleneck. The PS3 had semi-split memory with separate busses for CPU and GPU (but reduced rates for sharing). The 360 had sizeable high-bandwidth EDRAM to minimize calls on main memory. Switch has neither of those things, which makes it far easier to develop on, but will lower the performance ceiling versus the paper specs.

We already have games like Fast doing things that were not seen in any PS360 game that I'm aware of (4k+ textures, PBR, full atmospheric light scattering) and doing it at a higher resolution/framerate than any demanding PS3 game ran at natively.
The large streaming textures in FAST were definitely done by Rage last gen, and probably more games (didn't one of the Rare games do so?). As for volumetrics, multiple last-gen games had versions of that too (Warhawk, MGS 4, GT 5, Crysis 3, etc.)

As for your last point, FAST RMX does not run at 1080p60, even docked. Resolution dips below that pretty constantly, going as low as 720p I believe (blips even lower are probably bugs). It's more accurately viewed as something like a 900p60 game. And that means PS3 Wipeout and GT 5 have approximately the same performance (but with lower IQ).

All this makes perfect sense. Recent high-end gaming handhelds have typically been a bit better than the previous gen of home consoles. Which is exactly what we're seeing now with Switch, and I predict we'll continue to see: nice improvements from PS360, but nowhere near current gen. Any current gen games ported to Nintendo's new hardware will likely have cutback graphics and worse performance than ever before.
 

Polygonal_Sprite

Gold Member
Pretty much. As for the problem, it is likely an LoD issue, which is why it happens mostly on the Great Plateau and can be avoided by changing the camera angle to reduce the draw distance. It could also be a bandwidth problem that we are seeing. What we do know is that the handheld runs the game much better than the Wii U version, and the docked GPU in Switch is twice as powerful, so even in this port it's clearly above Wii U.

I'm not really going to touch on this again. As I said before, comparing Wii U to Switch with a Wii U game is silly to say the least, and is much the same as comparing ZoE HD to the original game to try and gauge how much more powerful the PS3 is than the PS2.



Wii U has 3 cores and all are available for games; Wii U even closes the background operations, causing the home screen to actually load when it is hit rather than just switching over to the background op.

I can tell you after 50 hours of docked Breath of the Wild play that the framerate issues are much more severe in the other areas. There's a particular area where the framerate is constantly in the teens. It's embarrassing for their flagship launch game.

Again I don't know why you're comparing a Konami outsourced port to a first party flagship launch game. Silly comparison to try and prove your point.

Were the Switch as powerful as you say, it would run Zelda at 900p/30fps without breaking a sweat.
 
I can tell you after 50 hours of docked Breath of the Wild play that the framerate issues are much more severe in the other areas. There's a particular area where the framerate is constantly in the teens. It's embarrassing for their flagship launch game.

Again I don't know why you're comparing a Konami outsourced port to a first party flagship launch game. Silly comparison to try and prove your point.

Were the Switch as powerful as you say, it would run Zelda at 900p/30fps without breaking a sweat.

The framerate hit the teens on the Switch? I'm pretty sure Digital Foundry would have reported that, all I've heard of is as low as 20fps. And besides that one area in the forest I don't see too many drops, especially no sustained drops. The hiccups when a moblin is knocked down are incredibly annoying though, but a different issue altogether.

And anyway, regarding the bolded, this is absolutely not true. A game only runs as well as it has been optimized for the hardware it's on. We had an image a few pages back (I believe) showing Nintendo explicitly saying they did no optimization for the Switch port they did in ~9 months. If they worked exclusively on the Switch from the get go it would be performing far, far better.

EDIT:

Here-


I find it hard to believe they did literally no optimization, but acting like it can't possibly get any better on this hardware is pretty ridiculous. And remember, this was ported between last Spring and this January.
 

Lonely1

Unconfirmed Member
Until Maxwell1 fell out of favor and VRAM requirements caught up with it, the 750 Ti did a surprisingly good job of keeping up with the PS4, even though its specs are lower.
 

bomblord1

Banned
Switch's memory bandwidth is not likely to be much better in practice than either 360 or PS3. The raw number is marginally higher, but both the last-gen consoles were architected to improve the bottleneck. The PS3 had semi-split memory with separate busses for CPU and GPU (but reduced rates for sharing). The 360 had sizeable high-bandwidth EDRAM to minimize calls on main memory. Switch has neither of those things, which makes it far easier to develop on, but will lower the performance ceiling versus the paper specs.


The large streaming textures in FAST were definitely done by Rage last gen, and probably more games (didn't one of the Rare games do so?). As for volumetrics, multiple last-gen games had versions of that too (Warhawk, MGS 4, GT 5, Crysis 3, etc.)

As for your last point, FAST RMX does not run at 1080p60, even docked. Resolution dips below that pretty constantly, going as low as 720p I believe (blips even lower are probably bugs). It's more accurately viewed as something like a 900p60 game. And that means PS3 Wipeout and GT 5 have approximately the same performance (but with lower IQ).

All this makes perfect sense. Recent high-end gaming handhelds have typically been a bit better than the previous gen of home consoles. Which is exactly what we're seeing now with Switch, and I predict we'll continue to see: nice improvements from PS360, but nowhere near current gen. Any current gen games ported to Nintendo's new hardware will likely have cutback graphics and worse performance than ever before.


I'm not sure what point you are trying to make about the RAM, as the chip does have 8x the RAM and an actually higher bandwidth to the main memory pool. If eDRAM made that big of a difference, then the 32MB on the Wii U (3x the 360's) would mean that FAST should perform better on that system, when it does not. It seems silly to pretend the PS360 has any kind of advantage here. Especially the PS3.

As for the game being considered 900p

http://www.eurogamer.net/articles/digitalfoundry-2017-fast-rmx-showcases-switches-power-over-wii-u

From what we understand, there is a small issue with the current firmware on Switch which causes a slight drain on GPU resources - once corrected, we're told that Fast RMX should sustain a full 1080p in docked mode.

A future update is supposed to make it hold at 1080p, so no, we shouldn't view it as a 900p60 game. Even then, PS360 games didn't run at 900p either; near the end of the generation, sub-720p was common.

While some games are doing some similar things (no example was given for PBR), none of them are doing everything FAST is doing at the same time. Wipeout on the PS3 is not even coming close to what FAST is doing. I don't know about GT5, as I have no experience with it. But even if we compare it to games that are using a few of the effects that FAST is, they are not doing it at 1080p60 or even 900p60.

Finally, a notable step up from PS360 games IS near current gen. You seem to think 3-4x the PS360 is not a big leap, but then we have the same or less distance to the power of the Xbox One, and suddenly it means games won't even run at acceptable performance.
 

Chronos24

Member
The framerate hit the teens on the Switch? I'm pretty sure Digital Foundry would have reported that, all I've heard of is as low as 20fps. And besides that one area in the forest I don't see too many drops, especially no sustained drops. The hiccups when a moblin is knocked down are incredibly annoying though, but a different issue altogether.

And anyway, regarding the bolded, this is absolutely not true. A game only runs as well as it has been optimized for the hardware it's on. We had an image a few pages back (I believe) showing Nintendo explicitly saying they did no optimization for the Switch port they did in ~9 months. If they worked exclusively on the Switch from the get go it would be performing far, far better.

Building on this: how easily we all forget the many other games released on current gen that initially also suffered several issues, including performance problems like framerate drops, that got patched later to perform better. I'm extremely confident, if not certain, that Nintendo is working on a patch to address the issues on Switch. Give it time, guys.
 

KingSnake

The Birthday Skeleton
Besides Faron Woods, there is one specific forest that can't be named because of spoilers where the framerate drops are very bad and very constant.

Plus in a lot of other areas where tall grass and rain or where thunderstorms happen.

Let's stop pretending that the issue is very rare.
 
Besides Faron Woods, there is one specific forest that can't be named because of spoilers where the framerate drops are very bad and very constant.

Plus in a lot of other areas where tall grass and rain or where thunderstorms happen.

Let's stop pretending that the issue is very rare.

I never really noticed it much in Faron Woods, though thinking back on it, yeah it was pretty sluggish during the constant thunderstorms there. But naming two places out of an enormous world where it has consistent and sustained drops doesn't mean it's not rare.

In other areas when it drops it's usually only for 1-2 seconds, and definitely not to the teens. And again, a lot of people might be attributing some of these drops to the hiccups that happen due to moblin ragdoll physics, but I think that's an entirely different issue. That happens just as much in handheld mode, and I've had it happen anywhere from a half second to like 5-6 full seconds.

But this is all beside the point because, as the image shows above, they did very little to no optimization to this title for the Switch in order to keep it the same as the Wii U version. So I have no doubt at all that this same game could have been made at 1080p 30fps locked had it been optimized specifically for the Switch from the get-go.
 

LordOfChaos

Member
EDIT:

Here-



I find it hard to believe they did literally no optimization, but acting like it can't possibly get any better on this hardware is pretty ridiculous. And remember, this was ported between last Spring and this January.


Same threading is interesting. I imagine when making a game for the Wii U, you're doing heavier processing on the one thread with 2MB L2 cache, with auxiliary threading on the two with 512KB. The Switch would have four evenly distributed cores, one of which is probably not available, so putting one 'fat' thread out on it wouldn't be the best way to go.


Also if so much is the same it's probably not getting particularly low level with NVN on the GPU side either.
 

Fafalada

Fafracer forever
Marmelade said:
And what advantage do you expect Vulkan to give Switch over what's used in PS4 for example?
Unless you want to talk cross-compatibility - there isn't any for this particular scenario. PS4 GNM is a superset in terms of access to hw-functionality and resources, so any scenario where there would be "API-originating" performance advantages would be on PS4 side. Not that there's any way we could quantify that in a single number anyway.

I don't know what NVN looks like - but that could presumably be a closer equivalent to GNM, though in the end that just means parity in this scenario.

dr_rus said:
Assuming that all console software will be ideally optimized for the underlying h/w is quite a stretch.
That goes without saying - but in cross-platform scenarios(which I presume this debate was about, as nothing else really warrants comparing) you generally end up with most optimization effort centering on the market leading platform. Which Switch could become one day, but won't be in near-term at least.
 
Until Maxwell1 fell out of favor and VRAM requirements caught up with it, the 750 Ti did a surprisingly good job of keeping up with the PS4, even though its specs are lower.
If even the 750Ti can't keep up anymore, that bodes ill for current-gen ports on Nintendo's machine. The desktop card is 2-3 times as powerful as Switch.

I'm not sure what point you are trying to make about the RAM, as the chip does have 8x the RAM and an actually higher bandwidth to the main memory pool. If eDRAM made that big of a difference, then the 32MB on the Wii U (3x the 360's) would mean that FAST should perform better on that system, when it does not. It seems silly to pretend the PS360 has any kind of advantage here. Especially the PS3.
EDRAM doesn't have to make a huge difference to close the bandwidth gap. Switch has ~25 GB/s to main memory; the 360 has ~22. The PS3 also has ~22 on paper, but the separate busses to separate memory pools for Cell and RSX meant that, with careful use, a slight advantage could be gained there. But even if we ignore those things totally, the difference in bandwidth is pretty small.

The amount of RAM in Switch is plenty.
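For reference, those ballpark bandwidth figures fall straight out of bus width times transfer rate. A quick sketch, using the commonly cited memory configurations rather than official spec sheets:

```python
# Peak bandwidth = (bus width in bytes) x (transfer rate). Configs below
# are the commonly cited ones for each machine, not official spec sheets.
def bandwidth_gb_s(bus_bits, mega_transfers_s):
    return (bus_bits / 8) * mega_transfers_s / 1000

print(f"Switch LPDDR4, 64-bit @ 3200 MT/s: {bandwidth_gb_s(64, 3200):.1f} GB/s")  # ~25.6
print(f"360 GDDR3,    128-bit @ 1400 MT/s: {bandwidth_gb_s(128, 1400):.1f} GB/s")  # ~22.4
```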

Even then, PS360 games didn't run at 900p either; near the end of the generation, sub-720p was common.
No last-gen games ran at 900p. But several of them, such as the ones I named, natively rendered anamorphic resolutions of about the same number of pixels.

It's not true that sub-720p increased over the course of last gen. Most franchises stayed the same, or improved.

Wipeout on the PS3 is not even coming close to what FAST is doing. I don't know about GT5, as I have no experience with it. But even if we compare it to games that are using a few of the effects that FAST is, they are not doing it at 1080p60 or even 900p60.
Incorrect. As mentioned above, both games--and GT 5 is quite accomplished looking--were indeed pushing about the same amount of pixels as 900p60. Ridge Racer 7 and Full Auto 2 were pushing even more (full 1080p60), though their rendering is simpler (but better AA in the latter!).
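The pixel math behind that comparison, assuming the commonly cited 1280x1080 anamorphic mode for those PS3 titles and a 1600x900 estimate for FAST RMX docked:

```python
# 1280x1080 anamorphic (commonly cited for GT5/Wipeout HD) vs a 1600x900
# estimate for FAST RMX docked: nearly the same pixel throughput at 60fps.
modes = {"PS3 anamorphic (1280x1080)": 1280 * 1080,
         "900p (1600x900)":            1600 * 900,
         "Full 1080p (1920x1080)":     1920 * 1080}
for name, px in modes.items():
    print(f"{name}: {px:,} px/frame -> {px * 60 / 1e6:.0f} Mpx/s at 60fps")
```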

Finally, a notable step up from PS360 games IS near current gen. You seem to think 3-4x the PS360 is not a big leap, but then we have the same or less distance to the power of the Xbox One, and suddenly it means games won't even run at acceptable performance.
Well no, Xbox One games given less than half the resources probably wouldn't run nearly as well.

But in any case, that's not what Switch offers. It is nowhere near 4x the power of PS360 right now, and if it can maybe get to 3x that'll probably be a years-long crawl of dev acclimation and growing expertise.
 

tkscz

Member
Wait, are we still using Zelda as an example? This has been explained many times: Zelda is not a good example to use. As said before, because of the Switch's low FLOPS, games with framerate issues that take no advantage of the Switch's hardware and data efficiency will keep the framerate issues they had on the previous console. More RAM and a better GPU won't help a game that isn't taking advantage of them but is instead forcing raw power out of something that doesn't work the way it needs it to.

In addition, Zelda ran on a significantly different architecture. The Wii U ran on an IBM POWER-based CPU similar to the GC's, a VLIW5 AMD GPU closely related to the low-end Radeon HD 4000 series, and 1GB of DDR3 RAM. Something created on that tech and ported over in less than a year (Nintendo started porting it in June of last year) wouldn't work well trying to force raw power out of tech it isn't programmed for.
 