Yeah and I don't think that would be too bad on a 6.2 inch screen. They might be able to hit close to XBone levels of detail with a 720p docked/480p mobile game. Just speculation of course since we don't know the specs yet.
Edit: Actually, 720p down to 480p is a 1.5× scale, which isn't ideal. They'd have to go as low as 360p to have an integer scale factor. That probably wouldn't look very good.
1080p -> 720p isn't an even scale, either. It's also a 1.5× factor.
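To spell out the scale-factor arithmetic being referenced here (nothing beyond simple division):

```latex
\frac{720}{480} = 1.5 \qquad \frac{720}{360} = 2 \qquad \frac{1080}{720} = 1.5
```

So a 720p render only gets integer scaling when dropped all the way to 360p, and 720p on a 1080p panel is the same non-integer 1.5× situation.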
Yeah, 720p looks bad on a 1080p TV.
No it doesn't.
I doubt that the modifications I gave as an example (wider memory bus, more than 2 SMs) would require devs to modify their code much, if at all.
Yes. The visions come whenever I close my eyes. Never been easier. Also, it's very easy to see that Nvidia is more involved with the development of the processor, APIs, and customization of this console than AMD was with the Wii and Wii U.
I have a Shield TV, so I could downclock it to the rumored Switch clock speeds and run a Unity 3D demo on it for benchmarking. It might need clocking a little higher than the Switch because of Android overhead, though, not sure if it's that much. Some high bandwidth usage demos would be interesting! Anything anyone wants me to test?
I don't think Occam would have survived having a razor in hand and reading the last pages of this thread.
Also the "we don't know anything yet" people are the worst, actively trying to censorship a discussion just because they don't like the way it's going. It's mostly educated guess based on several rumoured variables anyhow.
The problem is that people aren't taking this stuff as rumor, they're treating it as fact, when we don't have all of the variables. Speculating is fine, but treat it as such. Until we get more pieces of the puzzle, all of this is just speculation. I can't speak for everyone else, but that's my take on it. There's nothing definitive here yet. We'll know more next week for sure. Even what games look like running on the system.
Did that get the Android Vulkan update? On Vulkan it should make a better comparison (at the same clocks); NVN may be more efficient for being bespoke, but I'm sure these low-level APIs hit diminishing returns. With that you could test them 1:1 on clocks without worrying about Android's OpenGL overhead.
You'd have to go deeper to turn off the second cluster of 4 cores that the Shield has and the Switch does not, though, if you want to get that exact.
Come on dude, really? Do you honestly think clock speeds were the only change Nintendo made? They wouldn't even need Nvidia's involvement if they were using the off-the-shelf X1.
Of course not. Come on dude, really. But we know rumors say developers have had to work with similar hardware at least up until October.
He won't - the TX1 uses cluster switching, so when the big cluster works, the LITTLE one sleeps, and vice versa. Bottom line, busy software would mostly run on the big cluster, seeing 4x A57 cores for the duration of its run.
You wouldn't need to turn off the second cluster of CPU cores, because they don't run at the same time as the first cluster anyway.
I suppose as some kind of beyond worst case scenario that test might be worth something.
Didn't the original come in a version without an HDD too though?
Edit: It did.
http://m.androidcentral.com/nvidia-shield-android-tv-console-specs
PowerVR uses tag buffers to resolve visibility for a tile and interpolates attributes before executing pixel shading. Perhaps they could use a similar technique?
Edit: Here's the TBDR Pipeline. The Tag Buffer seems to accomplish the same tasks as a G-Buffer would.
In my journey to figure out how to downclock my Shield's GPU, it turns out that it downclocks from its stock 1GHz speed anyway because of GPU frequency scaling? Guessing it's something to do with thermal throttling, but yeah! One of Dolphin's developers said that here. The Shield TV could be running the GPU at the same 768MHz frequency as the Switch in its docked mode.
I just need to figure out how to set the frequencies.
In other news, the benchmarking I decided to go for was the Unity Standard Assets project, which features a couple of scenes. All scenes run at 1080p 60fps locked on the "Fantastic" quality settings, which includes 8x MSAA! Vulkan is working very nicely. Then again, these scenes aren't demanding, but it's nice to see. I can't figure out how to force vsync off so I can see how high the FPS can go, must be an Android thing? I need to push this harder ...
EDIT: The Shield TV thermal throttles the GPU to 614MHz, very interesting indeed. I have an app that logs the frequencies, and it doesn't appear to go higher than that. Ran a 3DMark benchmark before and after to make sure, and yep, it stays at 614MHz. Weaker than the Switch's docked mode, hah!
Wait... Really?
This could potentially cast the Switch's rumored specs in a very different light.
At this point, I'm wondering what the max clock speed is for the Jetson TX1 board under full load. Might it also get thermally throttled just like the Shield?
This actually raises a few questions for me:
- What does the cooling unit look like in the Shield TV?
- Why is it running that low with active cooling?
- Could the tiny little fan in the Switch really provide enough airflow to keep the max 768MHz clock speed?
- Is this possibly the reason why the early dev kits had such loud fans?
- Is it possible that they did switch to a smaller manufacturing process to mitigate the thermal throttling issues?
That was my next question. How about the CPU?
Trying to find another app / figure out adb shell commands so I have more than one way of measuring, to really make sure it is topping out at 614MHz when throttled. The Dolphin developer saying the Shield TV needs to be rooted to make it stay at 1GHz and get the most out of the GPU is convincing me enough that that is the frequency it throttles down to.
Going to throw my shield in the fridge for a bit
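In case it's useful, here's a rough sketch of one more way to measure it: polling the GPU clock over adb from a PC while a benchmark runs. This assumes adb is installed and the device is authorized, and the sysfs path is only my guess based on other Tegra X1 devices (the devfreq node for the GPU), so it may differ or need root on the Shield TV.

```c
/* Sketch: log the Shield TV's reported GPU clock once a second via adb.
 * FREQ_NODE is an assumption (Tegra X1 GPU devfreq node); adjust if the
 * device exposes it elsewhere or requires root to read it. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define FREQ_NODE "/sys/devices/57000000.gpu/devfreq/57000000.gpu/cur_freq"

int main(void)
{
    char cmd[256];
    snprintf(cmd, sizeof cmd, "adb shell cat %s", FREQ_NODE);

    for (int i = 0; i < 600; ++i) {          /* ~10 minutes of samples */
        FILE *p = popen(cmd, "r");
        if (!p) {
            perror("popen");
            return 1;
        }
        char buf[64] = {0};
        if (fgets(buf, sizeof buf, p)) {
            buf[strcspn(buf, "\r\n")] = '\0';
            printf("%ld  GPU freq: %s Hz\n", (long)time(NULL), buf);
        }
        pclose(p);
        sleep(1);                            /* one sample per second */
    }
    return 0;
}
```

If that node isn't readable, the debugfs clock tree (which definitely needs root) is another place I've seen Tegra GPU clocks exposed, but I can't vouch for the exact path on this device.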
How about the CPU?
The A57 cores should be clocked at 1.9GHz.
Well, that's the case with the Jetson TX1: http://www.phoronix.com/scan.php?page=article&item=nvidia-jtx1-perf&num=10
I only just saw your edit, and I've been reading a little bit on ImgTech's use of tag buffers, but I'm not sure it would be applicable here. For one, it would likely require a large-scale redesign of the Maxwell/Pascal ROPs, which I don't really see happening (unless Nvidia have already implemented something similar, which is possible). The other issue is that it would be of relatively little benefit to explicitly deferred renderers, as the intent of tag/visibility buffers seems to be largely focussed on gaining some of the benefits of deferred renderers (specifically the computational saving of not shading fragments until visibility is fully resolved) with forward renderers.
I was thinking more along the lines of the API giving developers the ability to give sufficient information to the GPU to know exactly what can be tiled and what can't, to allow any intermediate buffers to benefit from tiling as well as the color buffer.
Reading up a bit more on Vulkan's renderpasses and subpasses, though (PDF link here), it seems that it basically covers this. Each renderpass is a high-level operation which can consist of multiple subpasses over a number of (identically sized) buffer objects. The idea is that rather than using a render-to-texture for g-buffers (which requires pushing the entire frame to memory, and then reading from the texture in a way where tiling can't be guaranteed), you add the g-buffer as a buffer object, or "attachment", within the renderpass. Within the renderpass you can only operate on data from the same pixel within the framebuffer or any attachments, which allows the GPU to tile the entire renderpass however it likes (or not, as the case may be).
Because of the limitation of not being able to reference data from other pixels (which is necessary to ensure it can be tiled) you can't do things like depth-of-field blurring or post-process AA within the renderpass, so it's unlikely that any given renderer can be fully tiled, but it allows the developer to explicitly group together everything that can be tiled, without resorting to vendor-specific extensions or breaking compatibility with immediate-mode rasterisers.
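For illustration, here's roughly what that looks like in Vulkan terms (not NVN, and only a minimal sketch with placeholder formats and names): a render pass with two subpasses, where the lighting subpass reads the g-buffer written by the geometry subpass as an input attachment, restricted to the same pixel, so a tiling GPU is free to keep the g-buffer on-chip per tile and never store it to main memory.

```c
/* Minimal sketch of a tileable deferred-style render pass in Vulkan.
 * Attachment 0 (the g-buffer) is written in subpass 0, read as an input
 * attachment in subpass 1, and never stored to memory. The by-region
 * dependency tells the driver the read is per-pixel, so a tiler can keep
 * the whole thing on-chip. Formats are placeholders. */
#include <vulkan/vulkan.h>

VkRenderPass create_deferred_pass(VkDevice device)
{
    VkAttachmentDescription attachments[2] = {
        { /* 0: g-buffer, transient: never needs to reach main memory */
          .format = VK_FORMAT_R16G16B16A16_SFLOAT,
          .samples = VK_SAMPLE_COUNT_1_BIT,
          .loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR,
          .storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
          .stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
          .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
          .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
          .finalLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL },
        { /* 1: final colour target that actually gets stored */
          .format = VK_FORMAT_B8G8R8A8_UNORM,
          .samples = VK_SAMPLE_COUNT_1_BIT,
          .loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR,
          .storeOp = VK_ATTACHMENT_STORE_OP_STORE,
          .stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
          .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
          .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
          .finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR },
    };

    VkAttachmentReference gbufWrite = { 0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };
    VkAttachmentReference gbufRead  = { 0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL };
    VkAttachmentReference colourOut = { 1, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };

    VkSubpassDescription subpasses[2] = {
        { /* subpass 0: geometry pass, fills the g-buffer */
          .pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
          .colorAttachmentCount = 1, .pColorAttachments = &gbufWrite },
        { /* subpass 1: lighting pass, reads the g-buffer at the same pixel */
          .pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
          .inputAttachmentCount = 1, .pInputAttachments = &gbufRead,
          .colorAttachmentCount = 1, .pColorAttachments = &colourOut },
    };

    VkSubpassDependency dep = {
        .srcSubpass = 0, .dstSubpass = 1,
        .srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
        .dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
        .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT,
        .dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT, /* per-pixel dependency */
    };

    VkRenderPassCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO,
        .attachmentCount = 2, .pAttachments = attachments,
        .subpassCount = 2, .pSubpasses = subpasses,
        .dependencyCount = 1, .pDependencies = &dep,
    };

    VkRenderPass pass = VK_NULL_HANDLE;
    vkCreateRenderPass(device, &info, NULL, &pass);
    return pass;
}
```

The key bits are STORE_OP_DONT_CARE on the g-buffer attachment and the by-region dependency; anything that needs neighbouring pixels (depth-of-field, post-process AA) still has to happen in a separate render pass, as you said.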
So what you want is for every post to begin with "If this is true ..." or "Considering these rumours ..."? Isn't that redundant? We're in a thread about rumours.
There's probably nothing definitive until somebody does a teardown of the Switch after March, and even then it's not guaranteed to be 100% reliable info.
So what then? Never discuss anything until some official info that will never come? It's ridiculous.
So...I'm VLTTP, but is the Switch running on Pascal architecture or not? Some sites I've read say yes, others say no. Some say maybe—it's a hybrid that we'll never really be sure of. Which is the accepted consensus?
Why is it running that low with active cooling?
It's running that low with active cooling because 20nm isn't efficient enough to run at a higher clock.
Could the tiny little fan in the Switch really provide enough airflow to keep the max 768MHz clock speed?
If the Switch is 16nm, the clock would work well, and the extended battery life also makes sense. It could also have other customizations in the chip, such as embedded memory or a larger cache for the GPU and CPU, to provide the increased power we heard about from the October devkits.
Is this possibly the reason why the early dev kits had such loud fans?
Yes, this is a safe assumption IMO.
Is it possible that they did switch to a smaller manufacturing process to mitigate the thermal throttling issues?
This is the most reasonable way they could keep the clocks without throttling. We were wrong about the efficiency that 20nm had, and now the fan makes sense even at 768MHz with 16nm, IMO.
Maxwell and Pascal are kind of the same architecture. There are some differences, and the latter is built on a process with a smaller feature size. The Switch's internals have been described in various rumour permutations as being a custom nvidia chip based off of Maxwell/Pascal.
Thing is, it could be a custom chip built on the earlier 28nm or 20nm fabrication process but incorporating some microarchitectural elements from Pascal. Or it could be a chip that's really entirely just a Maxwell architecturally but that's fabbed on a 16nm process like Pascal is (it seems less likely that they'd go this route, but whatevs). Which of those two options is "Pascal-based", and which is "Maxwell-based"? It's really kind of up in the air what you call it, since the two designs are so similar.
That's probably why you've seen it described as either in various places, especially since the only thing that's been confirmed is that it's a custom Tegra (per Nvidia).
(...also, Pokemaniac beat me with a more succinct answer, but I'm clicking Submit anyway)