
Nintendo Switch Dev Kit Stats Leaked? Cortex A57, 4GB RAM, 32GB Storage, Multi-Touch.


antonz

Member
so essentially, Nintendo went to the garbage dump and got what was available for the switch.

If by garbage dump you mean the most technologically advanced mobile chip, then yes, they went to the garbage dump. Nintendo not using the full potential of what they went after can be criticized, but they did not grab 2006 tech and shoehorn it in this time around.
 

pooh

Member
The rumor concerning the Nvidia deal was that Nvidia gave Nintendo a great deal because they were about to default on a large production agreement for 20nm chips.

Most of that has held up so far.

That doesn't really jibe with what Nvidia says on their own blog, however:

But creating a device so fun required some serious engineering. The development encompassed 500 man-years of effort across every facet of creating a new gaming platform: algorithms, computer architecture, system design, system software, APIs, game engines and peripherals. They all had to be rethought and redesigned for Nintendo to deliver the best experience for gamers, whether they’re in the living room or on the move.

A Console Architecture for the Living Room and Beyond

Nintendo Switch is powered by the performance of the custom Tegra processor. The high-efficiency scalable processor includes an NVIDIA GPU based on the same architecture as the world’s top-performing GeForce gaming graphics cards.

Could be marketing speak, but even then, if it were a stock X1 it seems like they would have focused on that.
 

ggx2ac

Member
The rumor concerning the Nvidia deal was that Nvidia gave Nintendo a great deal because they were about to default on a large production agreement for 20nm chips.

Most of that has held up so far.

Wrong, you're perceiving Thraktor's speculation as fact.

Read the OP of that SemiAccurate article thread from long ago; it says nothing about wafer deals.

http://m.neogaf.com/showthread.php?t=1218933

Thraktor speculated that Nvidia had to get rid of 20nm wafer orders, but someone else checked Nvidia's financials and they don't owe any orders to TSMC. There's no "good deal" for Nintendo to be made on cheap 20nm wafers if Nvidia doesn't have any wafer orders owed to TSMC.
 
The rumor concerning the Nvidia deal was that Nvidia gave Nintendo a great deal because they were about to default on a large production agreement for 20nm chips.

Most of that has held up so far.

so essentially, Nintendo went to the garbage dump and got what was available for the switch.

No, this was never a rumor, ever. This really needs to be nipped in the bud; it was only ever a theory posed by Thraktor about how Nvidia could possibly be taking a loss on the Switch deal*. There is no evidence backing it up, and Thraktor has said as much.

*which was colorful wording from the SemiAccurate article, not something we know has happened.
 

Somnid

Member
Just because it came up so many times, a reality check:

The Asus ZenFone AR announced at CES is the first phone with 8GB of RAM (5.7" BTW). 8GB was never, ever on the table. It doesn't matter if PS4 and Xbox have 8GB or what you think Switch needs to be competitive. There was no existing 8GB RAM module for a device that small until now at the very highest end.
 

jdstorm

Banned
Wrong, you're perceiving Thraktor's speculation as fact.

Read the OP of that SemiAccurate article thread from long ago; it says nothing about wafer deals.

http://m.neogaf.com/showthread.php?t=1218933

Thraktor speculated that Nvidia had to get rid of 20nm wafer orders, but someone else checked Nvidia's financials and they don't owe any orders to TSMC. There's no "good deal" for Nintendo to be made on cheap 20nm wafers if Nvidia doesn't have any wafer orders owed to TSMC.


Sorry if I got that wrong. That was about 100* NX/Switch Twitter blasts/rumours ago. Everything starts to get jumbled after a while.

*Not literally 100
 

Astral Dog

Member
Just because it came up so many times, a reality check:

The Asus ZenFone AR announced at CES is the first phone with 8GB of RAM (5.7" BTW). 8GB was never, ever on the table. It doesn't matter if PS4 and Xbox have 8GB or what you think Switch needs to be competitive. There was no existing 8GB RAM module for a device that small until now at the very highest end.
Oh dear.

What about 6?
 

ggx2ac

Member
Just because it came up so many times, a reality check:

The Asus ZenFone AR announced at CES is the first phone with 8GB of RAM (5.7" BTW). 8GB was never, ever on the table. It doesn't matter if PS4 and Xbox have 8GB or what you think Switch needs to be competitive. There was no existing 8GB RAM module for a device that small until now at the very highest end.

Damn.

It has a speedier Snapdragon 821 SoC and a more reasonable 5.7-inch, 2560x1440 Super AMOLED display. It's also packing a whopping 8GB (or 6GB) of RAM.

For storage options, Asus has you covered with flavors in 32/64/128 or 256GB of UFS 2.0 storage plus a MicroSD slot. Rounding out the spec sheet is a 3300mAh battery, 8MP front camera, quick charging, NFC, and a 3.5mm headphone jack.

The Zenfone AR doesn't just do augmented reality, it's also compatible with Google's Daydream VR standard. You can slap the device in a headset and use it to power a VR session.

there was no price attached to the Zenfone AR, but Asus did promise a release window for "Q2 2017."

http://arstechnica.com/gadgets/2017...enfone-ar-the-second-ever-google-tango-phone/
 

10k

Banned
Just because it came up so many times, a reality check:

The Asus ZenFone AR announced at CES is the first phone with 8GB of RAM (5.7" BTW). 8GB was never, ever on the table. It doesn't matter if PS4 and Xbox have 8GB or what you think Switch needs to be competitive. There was no existing 8GB RAM module for a device that small until now at the very highest end.
I didn't want or need 8GB.

6GB LPDDR4 would have been ideal. 1GB for the OS, 5GB for games (the same amount of accessible RAM the XB1 has).
 

Hermii

Member
I didn't want or need 8GB.

6GB LPDDR4 would have been ideal. 1GB for the OS, 5GB for games (the same amount of accessible RAM the XB1 has).
When everything else is so much weaker, it doesn't really need to be on par in RAM. 3.2 gigs is plenty for how weak it is.
 

10k

Banned
When everything else is so much weaker, it doesn't really need to be on par in RAM. 3.2 gigs is plenty for how weak it is.
But what about loading entire game worlds and such into memory?

If you're running, say, The Witcher 3 (just an example) at 1080p on PS4 and 720p on Switch, don't you need the same amount of RAM to keep draw distances and visual quality the same?
 

Schnozberry

Member
Just because it came up so many times, a reality check:

The Asus ZenFone AR announced at CES is the first phone with 8GB of RAM (5.7" BTW). 8GB was never, ever on the table. It doesn't matter if PS4 and Xbox have 8GB or what you think Switch needs to be competitive. There was no existing 8GB RAM module for a device that small until now at the very highest end.

The Switch wouldn't be limited to one RAM module. Phones are dealing with much more limited real estate.
 

Easy_D

never left the stone age
But what about loading entire game worlds and such into memory?

If you're running, say, The Witcher 3 (just an example) at 1080p on PS4 and 720p on Switch, don't you need the same amount of RAM to keep draw distances and visual quality the same?

Sure, but the Switch couldn't handle those settings anyway, so draw distance would be reduced, texture sizes would be smaller, and there'd be less overall detail on screen, and thus less RAM needed. It's not like draw distance is only a strain on available memory; it hits the CPU hard as well. My 12GB of RAM doesn't help me a bit with the highest foliage setting in The Witcher 3 because my CPU just can't handle it.

tl;dr: they're more likely to hit CPU/GPU bottlenecks before a RAM bottleneck.
 

Schnozberry

Member
But what about loading entire game worlds and such into memory?

If you're running, say, The Witcher 3 (just an example) at 1080p on PS4 and 720p on Switch, don't you need the same amount of RAM to keep draw distances and visual quality the same?

Are you under the impression that entire game worlds are loaded into RAM on the PS4 and Xbox One?
 

EDarkness

Member
But what about loading entire game worlds and such into memory?

If you're running, say, The Witcher 3 (just an example) at 1080p on PS4 and 720p on Switch, don't you need the same amount of RAM to keep draw distances and visual quality the same?

With game cards, a lot of that could be streamed from the cart itself, assuming read speeds are fast, so it wouldn't matter that much. Data could be cached and moved on the fly, too. You can run the same games with 3.2GB as you could with 5GB; I doubt games use up ALL that space, and a little compression will help get textures to fit. Again, this won't be a problem, and NS games can (and will) look great. I think my game looks pretty good, and I basically offload and load textures as I need them. There's no point in loading everything into RAM.
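To make the streaming approach described above concrete, here is a minimal Python sketch of an LRU texture cache that keeps only recently used textures resident and evicts the rest when a memory budget is exceeded. All names, sizes, and the read_from_cart callback are hypothetical illustrations, not anything from an actual Switch SDK.

```python
from collections import OrderedDict

class TextureStreamer:
    """Toy LRU texture cache: keep only the textures currently needed in RAM
    and evict the least recently used ones when the budget is exceeded."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.cache = OrderedDict()  # texture_id -> size in bytes

    def request(self, texture_id, size, read_from_cart):
        # Cache hit: mark as most recently used and return.
        if texture_id in self.cache:
            self.cache.move_to_end(texture_id)
            return
        # Evict least recently used textures until the new one fits.
        while self.used + size > self.budget and self.cache:
            _, evicted_size = self.cache.popitem(last=False)
            self.used -= evicted_size
        # "Load" the texture (in a real engine this would be an async cart read).
        read_from_cart(texture_id)
        self.cache[texture_id] = size
        self.used += size

# Example: a 1 GiB texture budget, streaming a 4 MiB texture on demand.
streamer = TextureStreamer(budget_bytes=1 << 30)
streamer.request("rock_diffuse_01", 4 << 20, read_from_cart=lambda t: None)
```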
 

Dehnus

Member
Never go full Stallman.

Always go full Stallman! The man also has good taste in flowers :).

That and to answer earlier questions:
I do browse on an Amiga sometimes for funsies. But then I know how to get an Amiga to work on a modern network, and just because it's old doesn't mean it doesn't get to play outside from time to time :). Old people and things need love too :).

Normally, I do my programming on a Core i5 3570K (much to my dismay; I want an AMD, and will probably be first in line for Ryzen :)) with a GeForce GTX 460 (it still serves my needs). I don't play that many graphically heavy games, and what I need to play still runs fine, but I have to admit I'm starting to feel the stress it puts on the system's graphics card. So when the Ryzen upgrade gets built, I will probably also look into a nice upper-mid-segment Vega-based card :), if they are in that segment at the time.

Does that mean I install stuff from known Scrooglers? Nope! Windows 10 has all its ports blocked in the router and only gets booted when I need to play a game (like right now, playing Wildstar; I only open up ports when I need them, but MS does not get their little "telemetry" packages). Otherwise it's openSUSE Tumbleweed with Firefox as a browser and add-ons that let me block scripts :). It's great to go full Stallman, if only because I bloody hate Eric Schmidt, the crap he pulled at Novell, and his scary comments at Google regarding privacy.

So yes, I am all about the Nintendo Switch, but that moment they go Android? I clock out and stick with my Wii U and N3DS. If not... I'll be the first in line to get that sexy piece of kit :). Can't wait to hold it in my hands, even if it doesn't have the graphical prowess some people want it to have :p. I'm sure it'll be a lot of fun :).
 

10k

Banned
Sure, but the Switch couldn't handle those settings anyway, so draw distance would be reduced, texture sizes would be smaller, and there'd be less overall detail on screen, and thus less RAM needed. It's not like draw distance is only a strain on available memory; it hits the CPU hard as well. My 12GB of RAM doesn't help me a bit with the highest foliage setting in The Witcher 3 because my CPU just can't handle it.

tl;dr: they're more likely to hit CPU/GPU bottlenecks before a RAM bottleneck.
Ah I see.

Don't 1080p and 720p games usually use the same texture quality assets though?
 

EDarkness

Member
Ah I see.

Don't 1080p and 720p games usually use the same texture quality assets though?

I'd say generally. I don't change texture resolution when changing screen resolutions. Though, that doesn't mean that some games and developers can't/won't. Personally, I don't see the point in bothering with that when all that's going on is changing resolution.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
So yes, I am all about the Nintendo Switch, but that moment they go Android?
Nobody is going Android. Also, Android is a collection of technologies, some of which have nothing to do with Google policies or what you find in the frontends of consumer Android devices. For instance, Ubuntu Touch uses the Android GPU HAL & bionic layers for better compatibility on hardware which never got proper Linux GPU stacks to boot. The late Firefox OS used the same approach to create an entirely Firefox-based ecosystem on top of the Android HAL. It's understandable that when people say 'Android' the first picture that pops to mind is Google hegemony, but the underlying technologies are diverse and fundamental enough that one could have an Android-based device which functions, looks and feels nothing like your Galaxy S42 Plus Pro Premium Edition ++.
 

Hermii

Member
But what about loading entire game worlds and such into memory?

If you're running, say, The Witcher 3 (just an example) at 1080p on PS4 and 720p on Switch, don't you need the same amount of RAM to keep draw distances and visual quality the same?
I've given up hope on the Switch getting games like that anyway. It's most likely a 150/390 GFLOPS device.
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
4GB sounds reasonable. But X1 architecture is also looking likely, considering Nvidia is using it. It seems Pascal or Parker or whatever is not intended for this year anyway.
 

Thraktor

Member
Tegra X1 uses 2x 32-bit chips for its 64-bit bus.

The Asus phone there would probably use 4x 12/16Gbit density LPDDR4 to hit 6GB/8GB.

Asus are likely using one of these, which has a 64-bit interface on a single chip. I'm not aware of any phones which use multiple RAM modules, actually, with most using a single chip in a PoP config either on top of the SoC or stacked with the eMMC/UFS module.

Slightly more on-topic, Nintendo could probably get away with two memory modules in Switch, although unless they strictly need to (i.e. are using a 128-bit bus) I'd assume a single module would be both cheaper and make manufacturing simpler.
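As a back-of-envelope illustration of why bus width matters more than module count, peak LPDDR4 bandwidth is simply bus width times transfer rate. The LPDDR4-3200 figure below is an assumption chosen for illustration, not a confirmed Switch spec.

```python
def lpddr4_bandwidth_gb_s(bus_width_bits, transfer_rate_mt_s):
    # Peak theoretical bandwidth = (bus width in bytes) x (transfers per second).
    return (bus_width_bits / 8) * transfer_rate_mt_s * 1e6 / 1e9

# Single 64-bit module vs. a hypothetical 128-bit (two-module) configuration,
# both assuming LPDDR4-3200 purely for illustration.
print(lpddr4_bandwidth_gb_s(64, 3200))   # 25.6 GB/s
print(lpddr4_bandwidth_gb_s(128, 3200))  # 51.2 GB/s
```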
 

Schnozberry

Member
Tegra X1 uses 2x 32-bit chips for its 64-bit bus.

The Asus phone there would probably use 4x 12/16Gbit density LPDDR4 to hit 6GB/8GB.

Depending on who they are buying RAM from, they can get up to 8GB on a single package now.

EDIT: Beaten like a rented mule.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Tegra X1 uses 2x 32-bit chips for its 64-bit bus.

The Asus phone there would probably use 4x 12/16Gbit density LPDDR4 to hit 6GB/8GB.
Aren't 64 bits of aggregate LPDDR4 bus always 4x 16-bit buses? I.e., in theory TX1 could be matched against a 4x chip configuration.
 

AlStrong

Member
Aren't 64 bits of aggregate LPDDR4 bus always 4x 16-bit buses? I.e., in theory TX1 could be matched against a 4x chip configuration.

On X1, they are indeed split into 4chan of 16-bit widths. I'm not sure about a 4-chip config since the LPDDR4 are typically 32-bit widths in production.

I just wanted to write 4chan


Asus are likely using one of these, which has a 64-bit interface on a single chip.

Ah, fair enough.
 

Donnie

Member
Don't be sad my friend just wait. Smile!

I don't think it has to be completely out of the question even at the worst case scenario of 400Gflops to be honest. I'd be surprised if Tegra Maxwell isn't a decent amount more efficient than the PS4 GPU (flop for flop not really being equal). If you drop resolution from 1080p to 720p, reduce quality a bit and optimise for the Tegra architecture they could probably achieve something decent.

Of course that doesn't mean they'd spend the time and effort to do it I suppose. Here's hoping the GPU is more than some on here are assuming.
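A quick bit of arithmetic shows why dropping to 720p closes much of the gap in the worst-case comparison above; the FLOPS figures are just the numbers being tossed around in this thread, not confirmed specs.

```python
# Rough per-pixel shading budget under the thread's worst-case numbers.
ps4_gflops, switch_gflops = 1840.0, 400.0
ps4_pixels = 1920 * 1080      # 1080p: 2,073,600 pixels
switch_pixels = 1280 * 720    # 720p:    921,600 pixels

raw_gap = ps4_gflops / switch_gflops                                 # ~4.6x in raw FLOPS
per_pixel_gap = (ps4_gflops / ps4_pixels) / (switch_gflops / switch_pixels)
print(round(raw_gap, 2), round(per_pixel_gap, 2))                    # 4.6 vs ~2.04 per pixel
```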
 

Hermii

Member
I don't think it has to be completely out of the question even at the worst case scenario of 400Gflops to be honest. I'd be surprised if Tegra Maxwell isn't a decent amount more efficient than the PS4 GPU (flop for flop not really being equal). If you drop resolution from 1080p to 720p, reduce quality a bit and optimise for the Tegra architecture they could probably achieve something decent.

Of course that doesn't mean they'd spend the time and effort to do it I suppose. Here's hoping the GPU is more than some on here are assuming.

They would have to get the game running at 150 too.
 

Vena

Member
I don't think it has to be completely out of the question even at the worst case scenario of 400Gflops to be honest. I'd be surprised if Tegra Maxwell isn't a decent amount more efficient than the PS4 GPU (flop for flop not really being equal). If you drop resolution from 1080p to 720p, reduce quality a bit and optimise for the Tegra architecture they could probably achieve something decent.

Of course that doesn't mean they'd spend the time and effort to do it I suppose. Here's hoping the GPU is more than some on here are assuming.

The architecture isn't that relevant, or more accurately you're over-valuing/conflating/doubling the flops efficiency gains from GCN to Maxwell/Pascal while also counting (double counting) the architecture. It's basically the same thing. There are gains also in RAM use that come from the same nebulous "architecture" if we want to be so vague.

What's more important than architecture (but the flops efficiency gains are still important), is the underlying API. The Switch's APIs as far as we're aware are full-shot Vulkan, that's where you're going to see some serious potential gains over the older hardware of the twins because it allows you to do more with the system and in more efficient/effective ways.

This is also why the 150 in portable mode is well above the Wii U's near-equivalent scalar value. But FLOPS (and performance in general) are really vectors; then again, most of this board doesn't know what a vector is (or understand the nuance of one) and their brains operate in pure scalars.

They would have to get the game running at 150 too.

At a 6.2" screen, you can do a LOT of cutting down that won't even be discernible. Fixed screen and fixed resolution, so 150 being some arbitrary "road block" over 300 is itself arbitrary.
 

AlStrong

Member
I would think 1 or 2MB of shared L3 would be enough. It wouldn't take up that much space, even at 28nm.

hm.... I'm not sure it'd be worth the engineering effort to come up with a shared CPU/GPU cache as opposed to just increasing the size of the L2 for the GPU.
 

Donnie

Member
The architecture isn't that relevant, or more accurately you're over-valuing/conflating/doubling the flops efficiency gains from GCN to Maxwell/Pascal while also counting (double counting) the architecture. It's basically the same thing. There are gains also in RAM use that come from the same nebulous "architecture" if we want to be so vague..

I'm not sure what you're getting at here. I'm not doubling anything or counting anything twice; I'm not trying to be that specific, because being that specific isn't really possible (not with real accuracy, anyway). I'm simply saying that I doubt Tegra Maxwell and PS4's GPU are exactly the same flop for flop. As in, when someone compares 400 GFLOPS for Tegra vs 1.8 TFLOPS for PS4's GPU, that doesn't mean the latter will provide four and a half times the performance.
 

z0m3le

Banned
The architecture isn't that relevant, or more accurately you're over-valuing/conflating/doubling the flops efficiency gains from GCN to Maxwell/Pascal while also counting (double counting) the architecture. It's basically the same thing. There are gains also in RAM use that come from the same nebulous "architecture" if we want to be so vague.

What's more important than architecture (but the flops efficiency gains are still important), is the underlying API. The Switch's APIs as far as we're aware are full-shot Vulkan, that's where you're going to see some serious potential gains over the older hardware of the twins because it allows you to do more with the system and in more efficient/effective ways.

This is also why the 150 in portable mode is well above the Wii U's near-equivalent scalar value. But FLOPS (and performance in general) are really vectors; then again, most of this board doesn't know what a vector is (or understand the nuance of one) and their brains operate in pure scalars.



At a 6.2" screen, you can do a LOT of cutting down that won't even be discernible. Fixed screen and fixed resolution, so 150 being some arbitrary "road block" over 300 is itself arbitrary.

It's actually pretty relevant. The X1 is the first half-precision chip designed for gaming, and an ex-Ubisoft developer did say he can do 70% of the flops in half precision. So 400 GFLOPS becomes 680 GFLOPS (best case), and if your game is showing the 4/3 advantage over GCN, you'd be hitting 900 GFLOPS worth of GCN out of the Switch with everything going in your favor.

Laura and this ex-Ubisoft employee have been saying that the docked clocks can be used when portable, and since the cooling is independent of the dock, this makes sense.
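For anyone checking the arithmetic in the post above: assuming FP16 runs at twice the FP32 rate on the X1, that 70% of shader work can be done in half precision, and the claimed 4/3 efficiency edge over GCN, the 680 and ~900 figures fall out directly. These are the post's assumptions, not confirmed numbers.

```python
fp32_gflops = 400
fp16_share = 0.70  # fraction of shader work assumed to run fine in FP16

# FP16 work counts double if it runs at 2x the FP32 rate.
effective = fp32_gflops * (1 - fp16_share) + fp32_gflops * fp16_share * 2
print(effective)                 # 680 "FP32-equivalent" GFLOPS (best case)
print(round(effective * 4 / 3))  # ~907, i.e. the ~900 GFLOPS-of-GCN figure
```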
 

Mokujin

Member
Laura and this ex-Ubisoft employee have been saying that the docked clocks can be used when portable, and since the cooling is independent of the dock, this makes sense.

This is 100% not going to be the case.

Also why is the RAM argument back again? Jesus
 

Schnozberry

Member
I think they're going to need more than that, 4MB+ depending on the purpose.

They're going to be limited to SRAM, so you're talking about a lot of die space for a significant scratchpad on an SoC. It certainly may be advantageous to have separate L3 caches for the CPU and GPU, to help avoid the latency introduced by retrieving from main memory. Or maybe just larger L2 caches. But with tiled rendering and much better compression technology, they won't need a huge pool of embedded memory to act as a framebuffer and scratchpad like the Wii U did.
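A rough sizing example of why tiling changes the embedded-memory question: full-resolution render targets add up to tens of megabytes, but a single tile's working set fits comfortably in on-chip SRAM. The bytes-per-pixel and tile-size values below are common assumptions, not confirmed Switch details.

```python
def target_mb(width, height, bytes_per_pixel):
    # Size of one render target in mebibytes.
    return width * height * bytes_per_pixel / (1024 * 1024)

print(round(target_mb(1920, 1080, 4), 1))      # ~7.9 MB for one RGBA8 target at 1080p
print(round(target_mb(1920, 1080, 4 * 4), 1))  # ~31.6 MB for a four-target G-buffer
print(16 * 16 * 4 * 4)                         # 4096 bytes for that G-buffer per 16x16 tile
```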
 

Schnozberry

Member
hm.... I'm not sure it'd be worth the engineering effort to come up with a shared CPU/GPU cache as opposed to just increasing the size of the L2 for the GPU.

In my mind I was thinking that it would be towards the goal of saving die space, but you're probably right.
 

Thraktor

Member
hm.... I'm not sure it'd be worth the engineering effort to come up with a shared CPU/GPU cache as opposed to just increasing the size of the L2 for the GPU.

What I'm most curious about is how they'd deal with G-buffer caching, assuming that they want to use deferred rendering for their internal engines and are intending for TBR to compensate for the lack of an embedded memory pool. Could an API extension be used to explicitly define render-to-texture buffers to be tiled alongside the color and z buffers, or would the access patterns of G-buffers make tiling largely irrelevant?
 

Schnozberry

Member
Nintendo is not going to let Switch have shit battery life, stop going too far into the wishful thinking line of thought please.

It's not like he's insane. Allowing developers the flexibility to use higher clocks in portable mode, as long as the consumer is prompted and it's made clear that battery life will suffer, isn't completely out of bounds.
 

atbigelow

Member
It's not like he's insane. Allowing developers the flexibility to use higher clocks in portable mode, as long as the consumer is prompted and it's made clear that battery life will suffer, isn't completely out of bounds.

If they allow docked speeds while portable, I better be able to disable that.
 

Schnozberry

Member
What I'm most curious about is how they'd deal with G-buffer caching, assuming that they want to use deferred rendering for their internal engines and are intending for TBR to compensate for the lack of an embedded memory pool. Could an API extension be used to explicitly define render-to-texture buffers to be tiled alongside the color and z buffers, or would the access patterns of G-buffers make tiling largely irrelevant?

PowerVR uses tag buffers to resolve visibility for a tile and interpolates attributes before executing pixel shading. Perhaps they could use a similar technique?

Edit: Here's the TBDR Pipeline. The Tag Buffer seems to accomplish the same tasks as a G-Buffer would.

[Image: PowerVR TBDR pipeline diagram]
 

Donnie

Member
They're going to be limited to SRAM, so you're talking about a lot of die space for a significant scratchpad on an SoC. It certainly may be advantageous to have separate L3 caches for the CPU and GPU, to help avoid the latency introduced by retrieving from main memory. Or maybe just larger L2 caches. But with tiled rendering and much better compression technology, they won't need a huge pool of embedded memory to act as a framebuffer and scratchpad like the Wii U did.

I know that; I keep having to repeat it when people talk about a lack of rendering bandwidth for Tegra :) But still, we're talking about a system with 4GB or more of RAM, and if it's being used for both GPU and CPU I think they'd need at least a few MB. If they did something for the purposes of compute they'd need even more.

I'd just be very surprised if Nintendo didn't add a significant amount of embedded memory. They love the stuff and the reliable performance it provides.
 

Schnozberry

Member
If they allow docked speeds while portable, I better be able to disable that.

I don't see why you wouldn't be given a choice, although full clocks might be the preferred way to play graphically intense titles that have significantly reduced image quality and resolution in portable mode, even if it means sacrificing battery life.
 