So is there any news or is this just fake?
Is fake
I was under the assumption this had a Pascal-based chip.
Rumors are pointing towards it being a Pascal based chip in the final hardware. But the devkits are running on Maxwell (Tegra X1) according to Eurogamer (who had the other info about the devkit correct).
Which one? The TX1 is 512 at fp32 and 1TF at fp16. But considering the fact that fp16 can only used for a number of computations, to call it a 1TF chip would be disingenuous.
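The 512 GF / 1 TF figures above fall out of simple arithmetic. A minimal sketch, assuming 256 CUDA cores, a 1 GHz clock, 2 FLOPs per core per cycle via FMA, and double-rate packed fp16 (all figures as claimed in this thread, not confirmed specs):

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_core_per_cycle: int) -> float:
    """Theoretical peak throughput: cores x clock x FLOPs issued per core per cycle."""
    return cores * clock_ghz * flops_per_core_per_cycle

fp32 = peak_gflops(256, 1.0, 2)  # one FMA = 2 FLOPs
fp16 = peak_gflops(256, 1.0, 4)  # two fp16 values packed per fp32 lane
print(fp32, fp16)  # 512.0 1024.0 -> the "512 GF fp32 / 1 TF fp16" in the post above
```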
First off can we get some consensus on the whole Pascal is JUST 16nm Maxwell? Because for desktop cards it has worse fp16 support and there have been changes to async compute, colour compression, and PolyMorph. Since it has architectural differences and new features not on Maxwell parts, I'm going to disagree that the two are identical.
If the two are not identical, is it then possible to die-shrink a 20nm Maxwell Tegra X1 and not inherit features from unrelated desktop cards and other products in the pipeline? Yep. There, problem solved: a 16nm TX1 isn't Pascal like some have argued.
It would also be disingenuous to say developers will only use FP32, and thus call it a 500GF chip.
I made a post in the Switch Nvidia chip thread; unfortunately it's going to go unnoticed, slipping to the 3rd page while this one stays on the front.
I dug up what sort of balance of FP16 use we'd expect to see, so we get an idea of the ratio of FP precision usage, and it looks likely that FP16 will be used more than FP32 for shaders.
http://www.neogaf.com/forum/showpost.php?p=221603499&postcount=1707
If Thraktor or anyone with technical background can check this out, greatly appreciate it!
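The ratio question in the linked post can be turned into a rough effective-throughput number. A hedged sketch: the 512/1024 GFLOPS peaks and the 25-50% fp16 shares are assumptions taken from this thread, not measurements, and real shaders are rarely pure-ALU-bound.

```python
def effective_gflops(fp32_peak: float, fp16_peak: float, frac_fp16: float) -> float:
    """Harmonic-mean throughput when a frac_fp16 share of the FLOPs run at fp16 rate."""
    time_per_flop = frac_fp16 / fp16_peak + (1.0 - frac_fp16) / fp32_peak
    return 1.0 / time_per_flop

for share in (0.25, 0.50):
    print(f"{share:.0%} fp16 -> {effective_gflops(512.0, 1024.0, share):.0f} GFLOPS")
# 25% fp16 -> 585 GFLOPS
# 50% fp16 -> 683 GFLOPS
```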
Screen isn't even 1080p? What year is this? Come on, Nintendo. Expected, but so disappointing.
I'm pretty sure it's just semantics. You're claiming that a simple die shrink of a 20nm Tegra X1 to a 16nm chip can also be called a Tegra X1. But the Tegra X1, which is a single defined product, is made on a 20nm process. Is a 16nm Tegra chip with the same CPU/GPU configuration as a Tegra X1 pretty similar, or nearly identical? Probably, but that's not the point: a chip on a 16nm process cannot be a Tegra X1, because part of the definition of the Tegra X1 is a 20nm process.
Like I said, it's semantics, but it seems to be a major problem with the way the argument is being phrased. I honestly have almost lost sight of your initial argument at this point too.
The point: a TX1 shrunk down on 16nm is a custom Tegra SoC, not a TX1, by definition.
On our lowest expectations of the hardware now - it should still run Zelda, MK8+ and Splatoon+ at 1080p over HDMI out, no?
Granted the third party XB1/PS4 ports would have to run lower pixel counts.
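For scale, the raw pixel-count gap between the two resolutions is plain arithmetic:

```python
# Pixels per frame; 1080p carries 2.25x the pixels of 720p, which is the
# gap ports would have to close (or not) when docked.
p1080 = 1920 * 1080  # 2,073,600 pixels
p720 = 1280 * 720    #   921,600 pixels
print(p1080 / p720)  # 2.25
```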
That's pretty arbitrary, since a die shrunk Cell is still a Cell for example. But if that's the major sticking point I'll just call a hypothetical 16nmFF TX1 equivalent "mike" or "exercise bike". Either way, I agree that the 16nmFF exercise bike would qualify as a custom SoC. In fact I made that point earlier.
Getting back to final NS hardware versus the dev kit, I still haven't seen a more recent spec. Given how leaky this industry is that's actually not a great sign.
I'd say so for MK8+ and Splatoon +.
Depends what they want to do for Zelda, I could see them going for more details or other stuff, still at 720p.
It's also only 1GB below the PS4 and the Bone, and the PS4pro and Scorpio don't look to add much either.
Huh? I thought both had 8 gigs?
Wouldn't it take more work to get a custom chip based on a Tegra X1 on 16nmFF than to base it off Pascal?
No reason to go with Maxwell on 16nm when Pascal is so similar, and the preliminary designs (barring Nintendo's customization) have already been done for other chips.
They have rather large allotments of memory reserved for the OS, though I believe developers have been given access to more memory over time.
I doubt it's only four cores. I think we are getting 6 cores and 6 gigs of RAM
2 Denver cores and 4 A57s, if it's really using Parker.
Yeah, but so does the Wii U, so I'm not sure why people expect differently of the Switch?
Hell, as a percentage, the Wii U devotes a greater portion of its RAM to the OS than either of the two other systems.
You can only access 5 as a developer
I think 1GB reserved for the OS is probably a safe bet on the Switch. It just depends on if Nintendo went with 4GB of RAM for the final kit or bumped it up. Either way, the percentage goes down.
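Taking the thread's numbers at face value (1GB of the Wii U's 2GB, the PS4's 8GB with roughly 3GB reserved per the "access 5" posts above, and a guessed 1GB of a 4GB Switch - all unconfirmed), the percentages work out as:

```python
# (OS-reserved GB, total GB) per the figures claimed in this thread.
reserves = {
    "Wii U (1 of 2 GB)": (1, 2),
    "PS4 (3 of 8 GB)": (3, 8),
    "Switch, guessed (1 of 4 GB)": (1, 4),
}
for name, (os_gb, total_gb) in reserves.items():
    print(f"{name}: {os_gb / total_gb:.1%} reserved for the OS")
# Wii U (1 of 2 GB): 50.0% reserved for the OS
# PS4 (3 of 8 GB): 37.5% reserved for the OS
# Switch, guessed (1 of 4 GB): 25.0% reserved for the OS
```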
Closer to six now.
It's not
I'm not sure why that's a safe bet; I'd expect the OS to be more ambitious than the Wii U's.
In what way? Wii U needed a more modern UI and more apps, but it has a modern browser, plus Miiverse and the eShop were fine.
To add context to my previous post (I was asked via PM) without going into too much detail any game that runs on the XB1 or PS4 should run on the NX with little to no issue. What developers choose to or not to port to the console will more than likely depend on consumer support for the thing.
Wii U's OS was slow as molasses, so loading more of its resources into memory would help. Then you've got other features that are relatively standard now, like multitasking beyond what Wii U could do, more seamless Miiverse / Social integration, and gameplay recording.
Also, the Wii U's OS got sped up quite a bit over time. I don't feel it's much slower than the other console OSes at this point.

Wii U was crippled by reading from the really slow built-in flash memory, and the CPU wasn't setting the world on fire either. There are full desktop Linux distributions that run on less than 512MB of RAM. 1GB isn't an unreasonable target.
Pascal in the consumer space is mostly identical to Maxwell. It supports a higher CR (conservative rasterization) tier, but I'm not sure how important that is in practice; regardless, it's a feature unlikely to gain traction anytime soon. It also supports dynamic load balancing, but that feature is probably rendered moot by the number of SMs in the Switch. The enhanced color compression is the only feature that separates it from Maxwell, especially considering the anemic bandwidth figures.

Fine-grained preemption and the PolyMorph engine enhancements for multi-projection are also important consumer features which are new in Pascal (but probably irrelevant for Switch).
https://twitter.com/NWPlayer123/status/789116886109655041
Four ARM Cortex-A57 cores, max 2GHz
NVidia second-generation Maxwell architecture
256 CUDA cores, max 1 GHz, 1024 FLOPS/cycle
4GB RAM (25.6 GB/s, VRAM shared)
32 GB storage (Max transfer 400 MB/s)
USB 2.0 & 3.0
1280 x 720 6.2" IPS LCD
1080p at 60 fps or 4k at 30 fps max video output
Capacitance method, 10-point multi-touch
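To put the leaked 25.6 GB/s figure in context, a rough sketch of raw framebuffer traffic - assuming 32-bit pixels, 60 fps, one write plus one read per pixel, and ignoring compression, caches, overdraw, and texture fetches (all simplifying assumptions):

```python
def framebuffer_gb_per_s(width: int, height: int, bytes_per_pixel: int = 4,
                         fps: int = 60, accesses: int = 2) -> float:
    """Raw color-buffer traffic: pixels x bytes x frames/s x (write + read)."""
    return width * height * bytes_per_pixel * fps * accesses / 1e9

print(framebuffer_gb_per_s(1280, 720))   # ~0.44 GB/s of the 25.6 GB/s
print(framebuffer_gb_per_s(1920, 1080))  # ~1.0 GB/s
```

The color buffer itself is a small slice of the budget; it's everything else sharing that bus (textures, geometry, CPU) that makes the bandwidth figure look anemic.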
Here's more on the subject of how fp16 matters: http://www.realworldtech.com/apple-custom-gpu/

I already asked the same question in the other (or this, can't remember) thread. And Blu responded that indeed fp16 would be used for shaders, which would take up anywhere between 25 and 50%. But he also said not to quote him on that, hah. Thraktor jumped in on the discussion as well.
http://www.neogaf.com/forum/showpost.php?p=221402916&postcount=1540
The discussion with Thraktor is on the next page.
I get a feeling the Switch won't be priced anything under 300.
I would buy it for 300.
The President of Nintendo just reconfirmed that the Switch won't be sold at a loss, but at the same time they are taking consumer price expectations seriously. This is big news IMO, because it kisses the $200 and $350 outliers goodbye and narrows the price down to $250-300. This also gives a better gauge to guesstimate the power of the console.

Best case scenario is that Nintendo breaks even at either 250 or 300. If they go with 300, we could get more power out of it, which helps in the long run.

I think so also. 300 is the best balance between power and price. LCD screen, Joy-Con controllers, dock... I hope they have enough space for storage (more than 32GB) or at least a game included.
Also more power efficient... but... how does that work if two fp16 calculations are made instead of 1 fp32? I take it that it's not more power-efficient then, right? Same with memory bandwidth?

Power efficiency these days is largely a matter of how many bits you need to haul from wherever they were sitting in order to compute your desired result - the ALU is relatively cheap. So you're right that if you increase (read: double) the computations there won't be any gain. But for the same amount of computations, fp32->fp16 is a clear power and performance gain. And that's the crux of the entire movement in the hw industry, both GPU and CPU. Ironically, while doing that, tons of code will end up with accidental-success computations, which, being the worst thing that can happen in computations, will bite the authors (or their successors) in the bottom. /people-don't-do-enough-numerical-analysis-these-days rant
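Both halves of that point - fewer bits to haul, and the numerical hazards - can be shown with Python's stdlib half-precision support (the `struct` "e" format). A toy illustration, not GPU code:

```python
import struct

# Half the bytes to haul per value: fp16 is 2 bytes, fp32 is 4.
print(struct.calcsize("e"), struct.calcsize("f"))  # 2 4

def roundtrip_fp16(x: float) -> float:
    """Round x to the nearest representable IEEE half-precision value."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# The numerical-analysis hazard: fp16 has a 10-bit mantissa, so above 2048
# the spacing between representable values is 2, and the +1 silently vanishes.
print(roundtrip_fp16(2049.0))  # 2048.0
```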