
Confirmed: The Nintendo Switch is powered by an Nvidia Tegra X1

Status
Not open for further replies.

NOLA_Gaffer

Banned
Really, that's the biggest takeaway. Nintendo did the best they could to hit their price point and not burn through R&D dollars.

Bingo. Unless Nintendo wanted to dump a bunch of cash into getting custom chips made the X1 is their best bet right now to hit a <$300 price point and get some profit out of it.
 

Hermii

Member
Bingo. Unless Nintendo wanted to dump a bunch of cash into getting custom chips made the X1 is their best bet right now to hit a <$300 price point and get some profit out of it.
Yeah, it's just annoying they would lie and say it's custom. They could have told us from the start.
 
750MHz was the throttled clock; IIRC full GPU speed is like 1050MHz or something.

Our own MDave showed the TX1 Switch dropped to about 1700MHz CPU, 700MHz GPU on sustained load. Still above where the Switch landed on CPU, about right for GPU, but obviously Nintendo chose clocks that would absolutely 100% never be dipped below.

I didn't realize the K1 spec he posted was throttled. That puts it in context, because I am referencing the tests MDave did. I was going to say there's no way that tablet doesn't throttle.

On the CPU side, I wonder if an upclock is possible from Nintendo. I wonder how much overhead they left.
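For context on what those GPU clocks mean for raw throughput, here's a rough back-of-the-envelope sketch. The 256 CUDA cores and the 768MHz docked / 307.2MHz portable Switch clocks are the widely reported figures, not numbers from this thread, so treat the outputs as estimates:

```python
# Rough FP32 throughput estimate for a Maxwell-based Tegra X1:
# each CUDA core can do one FMA (2 FLOPs) per clock cycle.
def fp32_gflops(cuda_cores: int, clock_mhz: float) -> float:
    return 2 * cuda_cores * clock_mhz / 1000.0  # GFLOPS

CORES = 256  # TX1 Maxwell CUDA core count

print(fp32_gflops(CORES, 1000.0))  # TX1 at its full 1GHz clock: 512 GFLOPS
print(fp32_gflops(CORES, 768.0))   # reported Switch docked clock: ~393 GFLOPS
print(fp32_gflops(CORES, 307.2))   # reported Switch portable clock: ~157 GFLOPS
```

This is peak theoretical FP32 only; memory bandwidth and thermals decide what games actually see.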
 

Rand6

Member
Am I the only one who isn't really concerned about this?

It's hands down more powerful than a Wii U/360/PS3. You get portable and home gaming.

No you're not the only one, obviously.
But don't act like it's impressive that the Switch is more powerful than the X360.
X360 release date : 2005.
Switch release date : 2017.
 

LordOfChaos

Member
On the CPU side, I wonder if an upclock is possible from Nintendo. I wonder how much overhead they left.

Digital Foundry clocked the vent area at 52 Celsius, I don't think there's cooling overhead for a post launch overclock.

Unless maybe the fan itself isn't working near potential.
 

KingSnake

The Birthday Skeleton
The good news here is that Nintendo has a pretty straight forward upgrade road ahead for Switch as long as Nvidia produces Tegra chips.

The other interesting thing is how long until Switch OS will run on a Nvidia Shield?

Anyhow, I don't see the big fuss about it being an X1, unless one just made oneself believe in some secret sauce or put too much heart into the Foxconn rumour.

It is what it is, and currently there's no better mobile GPU that Nintendo could have realistically used, since the TX2 doesn't seem suited to small enough devices.
 

emag

Member
I'm sure both... X1 and X2 cost way below $100... you guys are making it look like these small chips cost $200.


What I'm reading lol

X2 may cost $50.

Where are you guys getting $299???

That's how much NVIDIA officially charges for the Jetson TX2 board (Tegra X2 + 8 GB RAM + 32 GB storage) that it sells to OEMs.
 

CronoShot

Member
No you're not the only one, obviously.
But don't act like it's impressive that the Switch is more powerful than the X360.
X360 release date : 2005.
Switch release date : 2017.
I guess this means the Vita and PSP were unimpressive when they came out, too.
 
Digital Foundry clocked the vent area at 52 Celsius, I don't think there's cooling overhead for a post launch overclock.

So basically no overhead, eh? How hot was MDave's Shield running? Holy shit :p

(I know the device is bigger with more area to dissipate heat.)

Was DF's reading taken docked or undocked? You can barely hear it undocked, in my experience.
 

Oregano

Member
No you're not the only one, obviously.
But don't act like it's impressive that the Switch is more powerful than the X360.
X360 release date : 2005.
Switch release date : 2017.

Xbox release date: 2001
Vita release date: 2011

Damn guess Vita wasn't impressive after all.
 

RootCause

Member
No you're not the only one, obviously.
But don't act like it's impressive that the Switch is more powerful than the X360.
X360 release date : 2005.
Switch release date : 2017.
Lol the chip itself came in 2015. So it's not like mobile chips surpassed the 360/PS3 a long time ago.

It's been close to 2 years since we started seeing mobile chips surpass those systems.
 

ZOONAMI

Junior Member
Find me a Type C to HDMI adapter with everything the dock has to offer and then I'll believe you.

USB-C is hot right now. BOM =/= retail price.

That said, you can pick up a USB-C to HDMI adapter for $15 and a multi-port USB-C charger for $15-20. So yeah, $30 more for some extra plastic seems excessive.
 

Lord Error

Insane For Sony
Find me a Type C to HDMI adapter with everything the dock has to offer and then I'll believe you.
I bought this USBC-> HDMI adapter for $20 for my new MBP, and it works perfectly and 100% reliably (i.e. it's not some crap product).

https://www.amazon.com/dp/B01F5E744A/?tag=neogaf0e-20

But I don't know what else the dock has to offer. USBC cables are not some super-expensive thing, they cost more than regular USB cables, but are still like $10-$20.
 

LordOfChaos

Member
So basically no overhead, eh? How hot was MDave's Shield running? Holy shit :p

(I know the device is bigger with more area to dissipate heat.)

Was DF's reading taken docked or undocked? You can barely hear it undocked, in my experience.

That was in docked mode (undocked runs cooler, top)

[image: AXZL6U5.jpg]


For a heatsink to run that hot the chip must be pretty toasty internally. Probably letting it run against its thermal max to keep the fan quiet.
 

ethomaz

Banned
That's how much NVIDIA officially charges for the Jetson TX2 board (Tegra X2 + 8 GB RAM + 32 GB storage) that it sells to OEMs.
Jetson TX2 is the board: WiFi, Bluetooth, eMMC, USB-C, HDMI, etc etc etc... it is a full board... not only the chip that Nintendo buys from them.

I did the maths with an X1, and even with nVidia taking 100% profit on it, Nintendo pays less than $50 per chip.
 

senj

Member
Can someone describe to me what year this chip was considered the best? 2011?

Depends on how you want to look at it.

In terms of announcements? 2015-2016, superseded by the TX2 and now Xavier announcements.

In terms of "I can walk into a store and buy a consumer product with a better-performing mobile SoC"? 2017. This is the current top-of-the-line SoC being used by any company in a consumer product. TX2 dev boards (Jetson) are "coming soon".
 

ggx2ac

Member
I know this is an unpopular opinion for gaming; but I think going the Apple / Android model would be great.

Imagine if just like every year they released a new Nintendo Switch with better specs; while keeping backwards compatibility for at least 5 years. That's a great way to not have to start from scratch every console generation.

Revisions are possible every couple of years like they do with the 3DS but a new Switch with new specs every year isn't feasible for them.

The Switch is not a mass market tablet device, it is a gaming focused tablet. They are not comparable to Samsung or Apple that make tablets with the latest tech every year and sell millions of tablets at a much higher cost than a Switch.

What the Switch is and what Samsung and Apple make are for completely different markets where the gaming focused tablet market is niche in comparison.
 

ZOONAMI

Junior Member
The good news here is that Nintendo has a pretty straight forward upgrade road ahead for Switch as long as Nvidia produces Tegra chips.

The other interesting thing is how long until Switch OS will run on a Nvidia Shield?

Anyhow, I don't see the big fuss about it being an X1, unless one just made oneself believe in some secret sauce or put too much heart into the Foxconn rumour.

It is what it is, and currently there's no better mobile GPU that Nintendo could have realistically used, since the TX2 doesn't seem suited to small enough devices.

Lol, it would actually be cool if Nintendo just partnered with NV to morph the shield tv into a home console only switch. And you know, allowed Netflix and the like.
 

Bowl0l

Member
So, buy a Nvidia Shield and Switch Pro Controller, then wait for people to port all Nintendo games to it? That's actually a brilliant idea from Nvidia.
 

NOLA_Gaffer

Banned
Lol, it would actually be cool if Nintendo just partnered with NV to morph the shield tv into a home console only switch. And you know, allowed Netflix and the like.

Somewhere down the line I could see a TV-only version of the Switch, but it'll be in the waning years of the system. Think of something along the lines of the Wii Mini which stripped out practically everything but the ability to play games.
 
Nobody does custom chips anymore because it doesn't make any sense. In 1988 it made sense, because what PCs were doing was so vastly different from what a Super Nintendo was doing, and throwing off-the-shelf CPU power at Super Mario World would have made the console cost $500.
 
You're telling me this little portable system can beat Xbox 360, PS3 and Wii U? Systems with games like GTA V, The Last of Us, Gears 3, God of War Ascension and Mario Kart 8? And I can play it both handheld and on my TV?

Sounds damn good to me.

/rhetorical questions
 

sneas78

Banned
I think it's impressive for how small the size is... even tho I think it's a huge handheld. It's amazing we are seeing handhelds beating consoles from not that long ago. I mean, the Wii U is still a current home console that you can buy... it's not from 10 years ago. Guys, were many of you even around for the Nintendo Game Boy? Look how far we've come. BELIEVE.
Was not even that long ago when Sony said that if the Vita was as powerful as a PS3, "it would burn in ur pocket".
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Yeah, the way the caches are wired (the CCI).

TX1 uses the oldest form of big.LITTLE.

To run the OS on an A53 while running the game on 4 A57s, you'd need the newest form, HMP, which allows any and all cores to be active at once. The oldest form is full cluster switching; the middle form allows one core of each big.LITTLE pair to be active at once (not useful for this scenario, as you'd still lose an A57).

Best I can tell the wiring situation as well looks damn near identical. A gif switching between the two would help.

Can you point to some technical documentation that states that these operation modes depend on hardware features? Because I come to the conclusion that these operation modes do not depend on hardware features.

Every whitepaper that I can find seems to suggest that all implementations of big.LITTLE use the same CCI-400 CoreLink Cache Coherent Interconnect on a hardware level, and that the operation modes are entirely down to the implementation of the operating system's scheduler.

For instance: https://www.arm.com/files/pdf/big_L...ully_heterogeneous_Global_Task_Scheduling.pdf

Software for big.LITTLE Implementations

In software for big.LITTLE implementations, the fundamental idea is that executing code should be dynamically moved to the right-size processor for the performance needs of the task. There are different levels of granularity for addressing this partitioning of work to the big and LITTLE core clusters. The first two modes are migration modes: they use modifications to existing dynamic voltage and frequency scaling mechanisms to decide when to switch an entire CPU cluster or an individual CPU worth of work to the big or LITTLE side of the compute subsystem. The third mode, Global Task Scheduling, modifies the kernel schedule to be aware of the performance requirements of individual threads, and allocates threads to an appropriately sized core. Global Task Scheduling provides the most flexibility for tuning the performance and power balance in an SoC, but it is also the most complex method to implement. That complexity, however, is addressed in kernel patchsets and a stable Linux Kernel Tree version that incorporates the patches. Therefore an SoC developer's or OEM's work in system bringup is no more complex than for a standard DVFS-based system. It does potentially require more tuning of parameters related to the software for big.LITTLE implementations.

All three modes run in kernel space and require no modifications to user space code or middleware. Existing power management mechanisms for shutting down unused cores are active in all three cases. Additionally, all three modes can run on the same hardware, although only Global Task Scheduling can support different numbers of big and LITTLE cores.
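To make the difference between those scheduling modes concrete, here's a toy illustration in Python (purely hypothetical code, not how any real kernel scheduler is written): cluster migration flips the entire workload between clusters based on aggregate load, while Global Task Scheduling places each task individually, so big and little cores can be busy at the same time.

```python
BIG, LITTLE = "A57", "A53"

def cluster_migration(tasks: dict, threshold: float = 0.5) -> dict:
    # Oldest big.LITTLE mode: the whole workload runs on one
    # cluster at a time, chosen from the average load.
    cluster = BIG if sum(tasks.values()) / len(tasks) > threshold else LITTLE
    return {name: cluster for name in tasks}

def global_task_scheduling(tasks: dict, threshold: float = 0.5) -> dict:
    # HMP/GTS: each task is placed on a big or little core
    # individually, so both clusters can be active at once.
    return {name: (BIG if load > threshold else LITTLE)
            for name, load in tasks.items()}

tasks = {"game_render": 0.9, "game_audio": 0.7, "os_ui": 0.1}
print(cluster_migration(tasks))       # everything lands on one cluster
print(global_task_scheduling(tasks))  # only with GTS does os_ui stay on an A53
```

Under cluster migration the lightweight OS thread gets dragged onto the big cluster with everything else, which is exactly why the "OS on an A53, game on the A57s" split needs GTS/HMP.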
 

LordOfChaos

Member
Nobody does custom chips anymore because it doesn't make any sense. In 1988 it made sense, because what PCs were doing was so vastly different from what a Super Nintendo was doing, and throwing off-the-shelf CPU power at Super Mario World would have made the console cost $500.

?

PS4 was customized, XBO was customized, PS4 Pro is customized, Scorpio will be customized, heck, Wii U was custom in new and bizarre ways :p

It's not on its way out; it's a new pillar of AMD's whole business.
 
Revisions are possible every couple of years like they do with the 3DS but a new Switch with new specs every year isn't feasible for them.

The Switch is not a mass market tablet device, it is a gaming focused tablet. They are not comparable to Samsung or Apple that make tablets with the latest tech every year and sell millions of tablets at a much higher cost than a Switch.

What the Switch is and what Samsung and Apple make are for completely different markets where the gaming focused tablet market is niche in comparison.

Yeah, the closest thing we might get to a spec boost before Switch 2 would probably just be a "new" Switch in like 4 or so years that has a 1080p screen and just runs the games in docked mode while in portable mode.
But Iwata did mention doing more configurations, so I think we could see a "Switch Go" in 2 years or so that is cheaper, smaller, has the controls built in, and doesn't come with a dock, though they'd sell one that fits the system separately.
 

ZOONAMI

Junior Member
Depends on how you want to look at it.

In terms of announcements? 2015-2016, superseded by the TX2 and now Xavier announcements.

In terms of "I can walk into a store and buy a consumer product with a better-performing mobile SoC"? 2017. This is the current top-of-the-line SoC being used by any company in a consumer product. TX2 dev boards (Jetson) are "coming soon".

IIRC a Snapdragon 820's Adreno pretty much goes toe to toe with an X1, at least in a mobile application. And yeah, Apple's chips surpass the X1 pretty easily.
 

Hermii

Member
Revisions are possible every couple of years like they do with the 3DS but a new Switch with new specs every year isn't feasible for them.

The Switch is not a mass market tablet device, it is a gaming focused tablet. They are not comparable to Samsung or Apple that make tablets with the latest tech every year and sell millions of tablets at a much higher cost than a Switch.

What the Switch is and what Samsung and Apple make are for completely different markets where the gaming focused tablet market is niche in comparison.
Not every year; Nvidia doesn't release a new Tegra every year. But I can see a revision every two or three years featuring the latest mobile Tegra. Since it's literally an unmodified off-the-shelf part, very little to no R&D would be needed.
 

LordOfChaos

Member
Can you point to some technical documentation that states that these operation modes depend on hardware features? Because I come to the conclusion that these operation modes do not depend on hardware features.

Every whitepaper that I can find seems to suggest that all implementations of big.LITTLE use on a hardware level the same CCI-400 CoreLink Cache Coherent Interconnect, and that the operation modes are entirely down to to the implementation of the operating system's scheduler.
https://www.arm.com/files/pdf/big_L...ully_heterogeneous_Global_Task_Scheduling.pdf



Would be true, if you use the CCI-400, which TX1 does not.

I.e. if you can support the newest big.LITTLE, you can support all prior versions, but TX1 was a custom implementation that, I guess, didn't prioritize that.

Anandtech:
as NVIDIA uses a custom CPU interconnect and cluster migration instead of ARM's CCI-400 and global task scheduling.

http://www.anandtech.com/show/8811/nvidia-tegra-x1-preview/5
 

senj

Member
Jetson TX2 is the board: WiFi, Bluetooth, eMMC, USB-C, HDMI, etc etc etc... it is a full board... not only the chip that Nintendo buys from them.

I did the maths with an X1, and even with nVidia taking 100% profit on it, Nintendo pays less than $50 per chip.

You did the math on numbers you pulled out of the air (your wafer cost is way too high for TSMC), assumed 20nm and 16nm lithography cost the same (it doesn't) and assumed 100% yields at both 20nm and 16nm (you won't get that, and you won't get identical lossage at each node size, either).
 

Alchemy

Member
Nobody does custom chips anymore because it doesn't make any sense. In 1988 it made sense because what PCs were doing was so vastly different than what a super Nintendo was doing and throwing off the shelf CPU power at super Mario world would have made the console cost $500.

Literally everyone but Nintendo is rocking a custom SoC this gen. It isn't about custom hardware architecture; it's just little SoC adjustments like ESRAM on the Xbox One.
 
That was in docked mode (undocked runs cooler, top)

[image: AXZL6U5.jpg]


For a heatsink to run that hot the chip must be pretty toasty internally. Probably letting it run against its thermal max to keep the fan quiet.

Ah, this makes sense, since you can barely hear the fan in undocked mode from when I used it.

Now I really wonder what MDave's Shield was running at for those clocks. It has to be pretty hot, but the Shield is not portable, so the thermal limits can be higher, I imagine, and no switching mechanic means the game logic is always constant, hence the higher clock.
 