
Nintendo Switch Dev Kit Stats Leaked? Cortex A57, 4GB RAM, 32GB Storage, Multi-Touch.

Status
Not open for further replies.

MacTag

Banned
How many of these rumours have been vetted enough to have made it to the front page of Eurogamer?
They've been vetted by being right on the money with subsequent announcements, much like Eurogamer.

Besides, Eurogamer themselves stressed the X1 was for then-current devkits, not final hardware, and speculated that the noisy exhaust fan meant a newer, cooler and quieter chip would probably end up in the real device.
 

Rodin

Member
I understand this, but as someone who just switched to a 2K monitor, even 1080p looks bad in comparison. I can't even imagine 4K. I just want technology to improve faster than it is. Seeing a 720p screen, something we had 10 years ago, is just disappointing for someone who likes tech.

You had a 720p display on a portable device 10 years ago? DAMN son

4GB Ram? Bruh.

On the other hand, it is Nintendo, so they'll do magic with that 4GB.

Or maybe it will be more than 4, who knows
 
Which means nothing. Do I have to post her previous tweet again?



The person who started the dev kit RUMOR also said that the Switch wouldn't use Tegra at all. It was also pointed out in this thread that the devkit specs she posted are those of other Tegra X1 devices. Eurogamer also posted Wii U specs that were speculation from a GAF thread which the poster said might not be true. Just saying.
Is this dev kit (leak) rumor from this guy?
 

kIdMuScLe

Member
How many of these rumours have been vetted enough to have made it to the front page of Eurogamer?


Have you ever worked in the game industry? As QA? A developer? Final devkits usually arrive two months or less before launch, and it sucks because you have to crunch like crazy to make it gold. And pre-final devkits are all different. I remember testing on a tower for PS3 while another group had a slower-spec square box for PS3. And stop parroting stuff as fact if you have nothing to back it up. Show us the sources.
 
Or maybe it will be more than 4, who knows

I don't think people know the "leak" in the OP is bullshit, but since this is now the de facto Switch tech thread people are going to keep seeing that leak and thinking it could be real. Sigh...

Is this dev kit (leak) rumor from this guy?

Yes, which is why this "leak" shouldn't be taken as such. Also I wouldn't trust any leaks coming out RIGHT after a product unveiling... that's a bit of a red flag.
 

Zil33184

Member
If you use the Eurogamer article as gospel, why are you ignoring their report that the devkit had very noisy active cooling, and that this could point to Parker being used in the final version?

Does it though? The speculation was that in order to simulate a higher-performing part the TX1 was overclocked, thus necessitating more aggressive cooling. However, the NS reveal shows that the final unit has active cooling too, so it's a bit murky there. The NS is smaller than the SHIELD Android TV, packs more components into that space, and is meant to be held by a user for long periods, where excess heat can become quite unbearable. Actively cooling it might just be a consequence of its form factor.

Nvidia has never called any of its chips a "Tegra X2." There is a Drive PX2 chip which is completely different, and a chip called Parker which is essentially the successor to the TX1. But a "TX2" doesn't officially exist.

Also, as far as I'm aware, one of the defining features of the TX1 is the Maxwell architecture, and the major difference between Maxwell and Pascal is the 16nm process, so I would say a hypothetical die-shrunk TX1 would indeed no longer be a TX1, and would be a Pascal Tegra. I could be wrong on that though.

All in all, I'm not even sure what you're arguing anymore. We know for a fact that the Switch uses a custom SoC, and we definitely do not know if it will be closer in performance to a TX1 or to Parker, but that's where speculation based on rumors comes in. Either way, the final unit will not use a stock TX1 because Nvidia has confirmed such.

Nvidia did explicitly state the Drive PX2 contained two Tegra X2s. Also a lightly modified die-shrunk X1 would qualify as a custom SoC.
 

nightside

Member
I understand this, but as someone who just switched to a 2K monitor, even 1080p looks bad in comparison. I can't even imagine 4K. I just want technology to improve faster than it is. Seeing a 720p screen, something we had 10 years ago, is just disappointing for someone who likes tech.


But how long would the battery last? 2 hours? Maybe less?

Anyway, would a RAM expansion in the dock work, or would the latency just be too much?
 

sfried

Member
I like how people are happy to accept low-quality tech. No one would be complaining if it was 1080p and the battery life was just as good.

I don't know where you're expecting to get a 1080p screen with exceptional battery life that doesn't cost $600 or more.
 
I understand this, but as someone who just switched to a 2K monitor, even 1080p looks bad in comparison. I can't even imagine 4K. I just want technology to improve faster than it is. Seeing a 720p screen, something we had 10 years ago, is just disappointing for someone who likes tech.
Guess you didn't even have the 3DS.

Battery tech has not improved as much as other things over the last 10 years, so that is a factor.
 
Nvidia did explicitly state the Drive PX2 contained two Tegra X2s. Also a lightly modified die-shrunk X1 would qualify as a custom SoC.

Source? I seriously cannot find anything on google showing Nvidia officially using the term "Tegra X2" or even TX2.

Also, as to your second point that's essentially what I said. So I guess we agree that the Switch will not use a stock TX1?

And knowing Nintendo those modifications will include a 128-bit memory bus. Bank on it.

Yeah, this can't be stated enough. Nintendo has always put a high priority on RAM, typically spending quite a bit for exotic RAM configurations.
 
I don't think people know the "leak" in the OP is bullshit, but since this is now the de facto Switch tech thread people are going to keep seeing that leak and thinking it could be real. Sigh...



Yes, which is why this "leak" shouldn't be taken as such. Also I wouldn't trust any leaks coming out RIGHT after a product unveiling... that's a bit of a red flag.
Welp... looks like this guy has the same reliability as SuperMetaldave64... What's the point of this thread then? Thanks for the info.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Nvidia did explicitly state the Drive PX2 contained two Tegra X2s. Also a lightly modified die-shrunk X1 would qualify as a custom SoC.
I understand your taste for conjecture, but at this stage it's more of a friggin reality-distortion field:

http://nvidianews.nvidia.com/news/n...-in-car-artificial-intelligence-supercomputer

Its two next-generation Tegra® processors plus two next-generation discrete GPUs, based on the Pascal™ architecture, deliver up to 24 trillion deep learning operations per second, which are specialized instructions that accelerate the math used in deep learning network inference. That's over 10 times more computational horsepower than the previous-generation product.

There's no such thing as TX2.
 
Welp... looks like this guy has the same reliability as SuperMetaldave64... What's the point of this thread then? Thanks for the info.

There is another thread for Switch tech discussion but it hasn't been nearly as active as this one: http://www.neogaf.com/forum/showthread.php?t=1297136


EDIT:
Considering they banked on high-speed-access RAM before (MoSys 1T-SRAM in the GameCube and Wii), do you think they'll go with HBM2, or settle for GDDR5X?

I really don't know; I'm no expert in this field. I'm just sort of reading and consolidating what other people have said and the sources they've presented. That said, HBM2 has been deemed prohibitively expensive, even for Nintendo. But at this point no one really knows, and we don't have any reliable rumors indicating the amount or type of RAM yet.

EDIT2:
It's mentioned in this PDF by NVIDIA research.

Ah, there it is, thanks. So I guess they did officially use that name, but it just evolved to Parker at one point.
 

sfried

Member
Yeah, this can't be stated enough. Nintendo has always put a high priority on RAM, typically spending quite a bit for exotic RAM configurations.
Considering they banked on high-speed-access RAM before (MoSys 1T-SRAM in the GameCube and Wii), do you think they'll go with HBM2, or settle for GDDR5X?
 

Donnie

Member
Yeah, that's what I thought. I don't understand why he's saying he thinks the Switch GPU could be a lightly modified die-shrunk Maxwell yet is skeptical that it could be Parker.

I mean, it'll be custom, so technically its own GPU, but if it's a modified die-shrunk Maxwell then the closest description among current GPUs would be Parker anyway.
 

ZOONAMI

Junior Member
It's mentioned in this PDF by NVIDIA research.

Jesus, that PX2 board is a beast compared to the PX1.

If there really is a single X2 in the Switch, even if cut down quite a bit, we're looking at something pretty beastly.

Edit: I see the PX2 has 2 discrete GPUs. Hopefully they put more than 256 CUDA cores on the Switch SoC. Probably unlikely. If they do though, we should be looking at basically PS4 performance in the dock, if it's not running underclocked.
 

Schnozberry

Member
Jesus, that PX2 board is a beast compared to the PX1.

If there really is a single X2 in the NX, even if cut down quite a bit, we're looking at something pretty beastly.

The Drive PX2 has 2 discrete GPUs attached to it. It's not a consumer SoC redesign of the Tegra X1. It's entirely irrelevant to the Switch.
 

guek

Banned
Power savings.

Plus, it seems as if everyone already believes that a die shrunk Maxwell Tegra X1 automatically qualifies as a 16nmFF Pascal Tegra. However, it therefore can't still be an X1 because apparently it's a logical impossibility or some such bizarre reason.

Also ITT some claim that the Tegra X2 isn't a thing, even though Nvidia says it is. Go figure.
It's a logical impossibility because there's no reason they'd take the die shrink but keep all the other Maxwell specs. It's neither a cost- nor a power-saving measure if you're already using a 16nm fab.
 

MacTag

Banned
I mean it's possible, and keeps closer to the dev kit spec than speculation about Parker being the SoC. I'm still dubious though.
I don't think Parker has really been speculated to be the SoC either; no one's said anything about Denver cores. This is going to end up being its own custom Tegra chip, not X1 or Parker, but likely Pascal-based, according to credible sources.
 

Durante

Member
I still don't get all this discussion about other Nvidia products and their naming. It's completely beside the point.

If Nintendo wants a SoC with Denver cores, lots of Pascal SMPs and a 128-bit bus, they can get that. If they want one with A53s, a single SMM and a 64-bit bus, NV will provide that.

It doesn't really matter in this context -- beyond the basic architecture -- what kind of SoCs NV is selling for cars or even tablets.
 

Zil33184

Member
His hope is that the memory bandwidth will be 25GB/s. This whole thing with devkit theories and the SoC not changing started from the memory bandwidth discussion in the other thread and that theory from Zlatan. That's why it's acceptable that it can get a die shrink but can never be a Parker, because then the memory bandwidth would be too high.

Who's Zlatan? And no, my hope isn't that it's bandwidth constrained like the SHIELD Android TV was. That would just conflict with my own interests since I actually want an NS.

I do feel like I have to temper expectations though, otherwise this thing might get PS4 Pro levels of backlash.
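For context on where a figure like 25GB/s comes from: peak memory bandwidth is just bus width times transfer rate. A minimal sketch in Python; the helper name is my own, and the 64-bit LPDDR4-3200 configuration is the one used by the TX1 in the SHIELD Android TV:

```python
# Back-of-envelope peak memory bandwidth: (bus width in bytes) x (transfers/s).

def peak_bandwidth_gb_s(bus_width_bits, transfer_rate_mt_s):
    """Theoretical peak bandwidth in GB/s for a DRAM interface."""
    return (bus_width_bits / 8) * transfer_rate_mt_s * 1e6 / 1e9

print(peak_bandwidth_gb_s(64, 3200))   # 25.6 GB/s, the figure discussed above
print(peak_bandwidth_gb_s(128, 3200))  # 51.2 GB/s: doubling the bus doubles peak
```

This is why the 128-bit bus speculation matters so much: at the same memory speed, it doubles the theoretical peak.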

And it was officially announced to the world as Tegra Parker.

There isn't a mystical "TX2" which is "just a TX1 on a die shrink".

I never claimed that. In fact as far as I'm concerned a Tegra X1 is a Tegra X1, no matter what node it's produced on.
 

Schnozberry

Member
Power savings.

Plus, it seems as if everyone already believes that a die shrunk Maxwell Tegra X1 automatically qualifies as a 16nmFF Pascal Tegra. However, it therefore can't still be an X1 because apparently it's a logical impossibility or some such bizarre reason.

Also ITT some claim that the Tegra X2 isn't a thing, even though Nvidia says it is. Go figure.

You're being willfully ignorant here. Pascal is a shrunken Maxwell. The architectures are identical. Pascal is Maxwell on meth. The power gains from the shrunken process allow for much higher clock speeds. This guy engineered a hypothetical situation to test, and they perform exactly the same.

https://www.youtube.com/watch?v=nDaekpMBYUA
 
I have a legitimate question. I'm not as tech savvy as some of you, but why are we not looking at the Nvidia Shield, and the Shield TV more specifically, to see at bare minimum what is possible? Because the X1 chip is inside the Shield TV. How much RAM does it have? 3GB?

I would prefer, if possible, that it has at least 6GB of RAM: 5GB for games and apps and 1GB reserved for the OS.

Do we all assume we will be able to suspend and resume games like on the Vita? Also, could this bring the Vita back from the dead?
 

Thraktor

Member
Power consumption is about linear with frequency, while power is quadratic in voltage:

P = C·V²·f + P_s, where P_s is the zero-frequency static power dissipation.

However, there are many more things and constraints to consider, which lead to the huge power increase when frequency rises beyond a certain margin, such as the node size of the transistors. And as frequency and power increase, the temperature produced by the chip rises, which means the electrons inside the chip are in a relatively higher excited state, causing the chip to have higher losses; hence more voltage margin is needed when increasing frequency above a certain threshold.

Power consumption is only linear with frequency in a scenario where voltage is held constant irrespective of frequency (i.e. not a realistic scenario for any ASIC designed for a power or thermally constrained environment). For a chip with sensible voltage scaling and accounting for the Poole Frenkel effect, power curves are quasi-exponential, which shouldn't be a particularly controversial statement.
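To illustrate the point about voltage scaling, here's a minimal sketch of the dynamic power equation under a made-up DVFS table. All numbers (capacitance, voltage/frequency pairs) are illustrative, not real Tegra figures:

```python
# Illustrative sketch of the dynamic power equation P = C * V^2 * f.
# Higher clocks need higher voltage, so power grows much faster than
# linearly with frequency once voltage has to scale too.

def dynamic_power(c_eff, voltage, freq_hz):
    """Dynamic switching power in watts: P = C * V^2 * f (static leakage ignored)."""
    return c_eff * voltage ** 2 * freq_hz

# Hypothetical DVFS table: (frequency in Hz, required voltage in V).
dvfs_points = [
    (500e6, 0.80),   # 500 MHz @ 0.80 V
    (1000e6, 0.95),  # 1.0 GHz @ 0.95 V
    (1500e6, 1.10),  # 1.5 GHz @ 1.10 V
]

c_eff = 1e-9  # effective switched capacitance, farads (illustrative)
for freq, volts in dvfs_points:
    watts = dynamic_power(c_eff, volts, freq)
    print(f"{freq / 1e6:.0f} MHz @ {volts:.2f} V -> {watts:.2f} W")
```

With these made-up numbers, tripling the clock costs roughly 5.7x the power, which is the faster-than-linear behaviour being described.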

My point is a 16nmFF Parker SoC could have been ready to sample much earlier than August, if such a design was in the works, in order to meet Nintendo's timeline.

Also, GCN in the PS4's APU had custom feature sets that weren't in GCN 1.0, not to mention that no AMD APUs at the time even featured GCN GPUs and certainly no APU had ever featured a 1152 shader core part.

Manufacturing larger, power hungry APUs is much more complicated than mobile chips, and attempting to downplay the effort it took to make the console APUs doesn't bolster your argument about the NS SoC being so difficult to sample it had to have come out much later.

It's worth keeping in mind that TSMC's 16nm manufacturing comprises three different processes, not one:

16FF - Early process used for a handful of mobile SoCs
16FF+ - Newer process to replace 16FF, used for GP106 and up
16FFC - Newest process, only used in Apple's A10 thus far

The newer 16FFC isn't intended as a wholesale replacement for 16FF+, but rather as a separate process focussed around mobile hardware, with lower power consumption, increased density and reduced cost (both due to the density improvements and a reduction in the number of mask layers required). If Nintendo is using one of TSMC's 16nm processes for Switch's SoC, then it would seem 16FFC would be the most likely candidate, as it's both cheaper and better suited for the thermal limits of a portable device.

For a process which only hit consumer products in September, it's not at all unreasonable that they wouldn't have a sufficient number of good engineering samples in July to use them in third party dev kits at the time.

Edit:
I still don't get all this discussion about other Nvidia products and their naming. It's completely beside the point.

If Nintendo wants a SoC with Denver cores, lots of Pascal SMPs and a 128-bit bus, they can get that. If they want one with A53s, a single SMM and a 64-bit bus, NV will provide that.

It doesn't really matter in this context -- beyond the basic architecture -- what kind of SoCs NV is selling for cars or even tablets.

People seem to just work on the assumption that Nintendo is cheap, and therefore will just take an existing chip and lightly modify it, ignoring both Nintendo's long history of using heavily customised hardware, and the fact that, even at the lower end of possible Switch sales, this would have by far the largest production run of any Tegra SoC, perhaps by an order of magnitude. Hardly a situation where Nintendo and Nvidia would say "eh, let's just use this automotive SoC we have lying around".
 
I never claimed that. In fact as far as I'm concerned a Tegra X1 is a Tegra X1, no matter what node it's produced on.

Okay, someone correct me if I'm wrong but I believe the bolded simply isn't true because:

Tegra X1 is a single chip. It has a defined number of GPU and CPU cores and it is made on the Maxwell architecture, which is made with a 28nm or 20nm process, and in the Tegra X1 case it is a 20nm process.

Tegra X1 cannot be made on a 16nm process, as then it would be a Pascal based chip. You can have a chip with the same amount of GPU/CPU cores as the Tegra X1 with a Pascal architecture, but that's not the Tegra X1.

I think the confusion here is that you are calling several hypothetical chips Tegra X1s, when there is only one real world configuration called a Tegra X1.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I never claimed that. In fact as far as I'm concerned a Tegra X1 is a Tegra X1, no matter what node it's produced on.
You claimed TX2 was a thing. It's not. It's called Tegra Parker and is the next-gen SoC after TX1, featuring a different CPU, GPU and memory controller setup.

Okay, someone correct me if I'm wrong but I believe the bolded simply isn't true because:

Tegra X1 is a single chip. It has a defined number of GPU and CPU cores and it is made on the Maxwell architecture, which is made with a 28nm or 20nm process, and in the Tegra X1 case it is a 20nm process.

Tegra X1 cannot be made on a 16nm process, as then it would be a Pascal based chip. You can have a chip with the same amount of GPU/CPU cores as the Tegra X1 with a Pascal architecture, but that's not the Tegra X1.

I think the argument here is that you are calling several hypothetical chips Tegra X1s, when there is only one real world configuration called a Tegra X1.
Exactly. I had started typing a similar response but then decided against wasting further time.
 

KingSnake

The Birthday Skeleton
I never claimed that. In fact as far as I'm concerned a Tegra X1 is a Tegra X1, no matter what node it's produced on.

Let me get this clear. Your theory is that Nintendo ordered a 16nm design with the same CPU cores and the same memory controller as the X1 from Nvidia either early this year or late last year, and then used this design to have TSMC produce some tens of chips to be used in the devkits that ended up with third parties in time to be the basis for the July report from Eurogamer?
 
You can't put a platter disc in a tablet.

It's also a handheld, so an HDD is out of the question.

But again it should be pointed out (because this thread is constantly being bumped, by me too) that the "leak" in the OP is 100% BS.

Uh... man, sometimes the technical illiteracy of even GAFfers manages to astound me. You can't be for real, dude...

You want the thing to cost 600+ dollars like phones?

What are you even talking about, AtomicShroom? They came out and said it was a home console first and foremost, right? So having a hard drive in the charging dock, or being able to upgrade to a bigger drive, wouldn't be impossible. You guys act like I'm asking them to put a 1TB notebook hard drive in the tablet portion of the console.
 

diaspora

Member
I understand this, but as someone who just switched to a 2K monitor, even 1080p looks bad in comparison. I can't even imagine 4K. I just want technology to improve faster than it is. Seeing a 720p screen, something we had 10 years ago, is just disappointing for someone who likes tech.
The original iPhone launched 9 years ago with a 320*480 screen iirc. What mobile device did you have at 720p? I mean, yes this system is a trash home console but it's looking like an amazing handheld.
 
The original iPhone launched 9 years ago with a 320*480 screen iirc. What mobile device did you have at 720p? I mean, yes this system is a trash home console but it's looking like an amazing handheld.
It's not even necessarily a "trash" home console; in theory there's nothing stopping it from outputting 1080p to the TV.
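For a rough sense of what the 720p-vs-1080p gap actually costs: fill-rate and shading load scale roughly with the number of pixels rendered, and the ratios below are exact arithmetic, not estimates.

```python
# Pixel counts for common output resolutions; GPU rendering cost scales
# roughly with the number of pixels drawn per frame.
resolutions = {
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "4K": (3840, 2160),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["1080p"] / pixels["720p"])  # 2.25x the pixels of 720p
print(pixels["4K"] / pixels["720p"])     # 9.0x the pixels of 720p
```

So a docked mode rendering at 1080p has to push 2.25x the pixels of the 720p handheld screen, which is one reason a higher docked clock matters.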
 

Somnid

Member
What are you even talking about, AtomicShroom? They came out and said it was a home console first and foremost, right? So having a hard drive in the charging dock, or being able to upgrade to a bigger drive, wouldn't be impossible. You guys act like I'm asking them to put a 1TB notebook hard drive in the tablet portion of the console.

That's what we all assume it does: you plug in your USB hard drive. If you're asking why it might not have a bay with one installed, it's cost.
 

Roo

Member
4GB Ram? Bruh.

On the other hand, it is Nintendo, so they'll do magic with that 4GB.
Wii U had 2GB of RAM but only 1GB was available for games.
Just imagine what they're going to be able to do with 3GB or more if the final specs are around 4-6GB.
 

Zil33184

Member
You're being willfully ignorant here. Pascal is a shrunken Maxwell. The architectures are identical. Pascal is Maxwell on meth. The power gains from the shrunken process allow for much higher clock speeds. This guy engineered a hypothetical situation to test, and they perform exactly the same.

https://www.youtube.com/watch?v=nDaekpMBYUA

Okay, someone correct me if I'm wrong but I believe the bolded simply isn't true because:

Tegra X1 is a single chip. It has a defined number of GPU and CPU cores and it is made on the Maxwell architecture, which is made with a 28nm or 20nm process, and in the Tegra X1 case it is a 20nm process.

Tegra X1 cannot be made on a 16nm process, as then it would be a Pascal based chip. You can have a chip with the same amount of GPU/CPU cores as the Tegra X1 with a Pascal architecture, but that's not the Tegra X1.

I think the confusion here is that you are calling several hypothetical chips Tegra X1s, when there is only one real world configuration called a Tegra X1.

I think you guys are conflating "Maxwell v Pascal" and "20nm v 16nmFF". If Maxwell and Pascal are identical as Schnozberry is claiming (which I don't agree with btw), then you already have Maxwell on 16nm. Hooray! Whereas Skittzo claims you can't have a 16nm Tegra part without it being Pascal (which I don't agree with btw).

First off, can we get some consensus on the whole "Pascal is just 16nm Maxwell" idea? Because for desktop cards it has worse FP16 support, and there have been changes to async compute, colour compression, and PolyMorph. Since it has architectural differences and new features not on Maxwell parts, I'm going to disagree that the two are identical.

If the two are not identical, is it then possible to die-shrink a 20nm Maxwell Tegra X1 and not inherit features from unrelated desktop cards and other products in the pipeline? Yep. There, problem solved: a 16nm TX1 isn't Pascal like some have argued.

You claimed TX2 was a thing. It's not. It's called Tegra Parker and is the next-gen SoC after TX1, featuring a different CPU, GPU and memory controller setup.


Exactly. I had started typing a similar response but then decided against wasting further time.

Parker is bandied about like a codename; the presentation Nvidia gave has Parker in quotation marks. Either way, I never claimed the TX2 was a die-shrunk TX1. I merely responded to people claiming that a "16nm TX1" was logically inconsistent and therefore couldn't exist(?), by saying it could, and you wouldn't exactly call it a TX2.

I still don't get all this discussion about other Nvidia products and their naming. It's completely beside the point.

If Nintendo wants a SoC with Denver cores, lots of Pascal SMPs and a 128-bit bus, they can get that. If they want one with A53s, a single SMM and a 64-bit bus, NV will provide that.

It doesn't really matter in this context -- beyond the basic architecture -- what kind of SoCs NV is selling for cars or even tablets.

Yeah, I personally don't give a crap what the PX2 parts are called either.
 
I think you guys are conflating "Maxwell v Pascal" and "20nm v 16nmFF". If Maxwell and Pascal are identical as Schnozberry is claiming (which I don't agree with btw), then you already have Maxwell on 16nm. Hooray! Whereas Skittzo claims you can't have a 16nm Tegra part without it being Pascal (which I don't agree with btw).

First off, can we get some consensus on the whole "Pascal is just 16nm Maxwell" idea? Because for desktop cards it has worse FP16 support, and there have been changes to async compute, colour compression, and PolyMorph. Since it has architectural differences and new features not on Maxwell parts, I'm going to disagree that the two are identical.

If the two are not identical, is it then possible to die-shrink a 20nm Maxwell Tegra X1 and not inherit features from unrelated desktop cards and other products in the pipeline? Yep. There, problem solved: a 16nm TX1 isn't Pascal like some have argued.

I'm pretty sure it's just semantics. You're claiming that a simple die shrink of the 20nm Tegra X1 to a 16nm chip can also be called a Tegra X1. But the Tegra X1, which is a single defined product, is made on a 20nm process. Is a 16nm Tegra chip with the same CPU/GPU configuration as a Tegra X1 pretty similar, or nearly identical? Probably, but that's not the point: a chip on a 16nm process cannot be a Tegra X1, because part of the definition of the Tegra X1 is a 20nm process.

Like I said, it's semantics, but it seems to be a major problem with the way the argument is being phrased. I honestly have almost lost sight of your initial argument at this point too.

The point: a TX1 shrunk down on 16nm is a custom Tegra SoC, not a TX1, by definition.
 