
Nintendo Switch Dev Kit Stats Leaked? Cortex A57, 4GB RAM, 32GB Storage, Multi-Touch.


ggx2ac

Member
This adds more weight to Eurogamer's reported clock frequency. It seems that the CPU doesn't need to adjust between modes, though. Do you have any theories on why it was clocked at 1020MHz?

1) To avoid throttling in portable mode
2) To reduce power consumption
3) More than four active CPU cores down clocked to reduce power consumption
4) Something else
 

antonz

Member
I was wondering about that too, but MDave's numbers so far are showing that the CPU can run at full power for at least a few minutes without throttling. He said that he will do more strenuous tests to stress the CPU/GPU.

Power consumption becomes the issue, I would imagine. The Shield TV as a whole uses around 19.8W when gaming. Hopefully Nintendo, with enough trial and error, found the right CPU/GPU ratio to avoid any bottlenecking issues.

Even Miyamoto ended up speaking out about how badly they messed up with the Wii U CPU being a serious bottleneck even for things he wanted to do.
 

ggx2ac

Member
Also it's safe to assume Nvidia's selling the Shield TV at a decent profit. Nintendo can bite the bullet and sell the Switch at cost or even at a small loss because they know they'll earn that money back in software sales.

With the 3DS nearing the end of its product cycle, the Switch will be their main driver of hardware revenue.

Nintendo relies on gross profit from their hardware; with the Switch being mobile hardware, you can expect Nintendo would want to make a nice profit off the system.

In the following, their average gross profit from hardware is around 40%; it's too bad the data hasn't been updated.

[Image: Nintendo gross profit chart]

Source: http://m.neogaf.com/showthread.php?t=740455
 
Reduce power consumption in portable mode.

Yeah CPU clock speed seems to be basically tied to what offers the best battery life while giving adequate enough CPU performance.

The new details on the GPU clocks from MDave are illuminating at least, and give us a good explanation of why Nintendo went with the clocks they did.
Overall nothing has really changed, but we have a better picture of why things are the way they are, and we see Nintendo is not lowering clocks for the sake of battery or anything, but because it's the optimal speed for an X1-based device.

Probably the same reason they went with 768MHz on the GPU. It won't need to throttle in either mode at that speed.

1) To avoid throttling in portable mode
2) To reduce power consumption
3) More than four active CPU cores down clocked to reduce power consumption
4) Something else

Power consumption becomes the issue I would imagine. Shield TV as a whole uses around 19.8W when gaming. Nintendo hopefully with enough trial and error found the right CPU/GPU ratio that it avoids any bottlenecking issues.

Even Miyamoto ended up speaking out about how badly they messed up with the Wii U CPU being a serious bottleneck even for things he wanted to do.

I see. Thank you for all of the replies. How much of a difference is the power consumption for the A57s at 1GHz compared to 2GHz?
 

Vic

Please help me with my bad english
Thanks for this, this is pretty interesting. It would seem to suggest that the Switch SoC is a 20nm chip, as going with the same frequency as the TX1's thermal sweet spot would seem like too much of a coincidence on any other node. We can also see that the base clock is 76.8MHz, which tracks with both Switch's docked GPU clock (76.8 x 10 = 768MHz) and its portable clock (76.8 x 4 = 307.2MHz).
At this point, more than 2 SMs sounds unlikely. They've probably optimized the X1's internal and external memory setup to improve performance for gaming applications. A memory bus width of 128-bit seems probable since they've implemented that functionality in Parker.
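Just to spell those multiples out, here is a toy check using only the clocks mentioned in this thread (nothing else is assumed):

```python
# Reported Switch GPU clocks as multiples of the 76.8 MHz base clock
# discussed above; figures come from this thread.
BASE_MHZ = 76.8

for label, mult in [("GPU portable", 4), ("GPU docked", 10)]:
    print(f"{label}: {BASE_MHZ} MHz x {mult} = {BASE_MHZ * mult:.1f} MHz")
# GPU portable: 76.8 MHz x 4 = 307.2 MHz
# GPU docked:   76.8 MHz x 10 = 768.0 MHz
```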
 

ggx2ac

Member
At 2GHz it uses almost 7W; at 1GHz it's under 2W, AFAIR.

Not surprised it had to be clocked low to have reasonable power consumption because the Portable mode is the baseline.

The thing I wonder about is what they'll use for the OS. Someone noted the Wii U used an ARM core to handle OS tasks, but will the Switch get 4 A53 cores for OS tasks like the TX1? Would they even need 4 cores? Doubtful.

I wonder about Heterogeneous Multi-Processing, but the Jetson TX1 devkit only has 4 A57 cores.
 

Mokujin

Member
Very entertaining last pages thanks to MDave, good job!

At 2ghz it uses almost 7w, at 1ghz it's under 2w AFAIR.

Yes, this graph from AnandTech's review of the Exynos 5433 was brought up earlier by Thraktor; it's a very good example since it's also a 4x A57 + 4x A53 setup made on a 20nm process.

[Image: A57 power curve from AnandTech's Exynos 5433 review]

Given that the Switch should have a ~5W budget in portable mode (I really don't know, but I think that figure was brought up earlier as a rough estimate), the CPU alone would already be very close to 2W.

And given the hybrid nature of the system, the CPU has to work at the same speed in both modes.
 

MuchoMalo

Banned
Profit margins, dude. Nintendo isn't in this game to lose money.

Nintendo isn't in a position where they can afford to launch a console with a huge profit margin, and they know it. Whatever the price is, it's going to be very near cost. We already know that it won't be a loss leader (much to the disdain of spoiled rotten GAF members) but it's not going to be overpriced either. If anyone here feels that $250 is too much, you have unrealistic expectations.

Guys, why isn't 200 dollars possible? The TX1 is a 2015 SoC that launched in a 200-dollar product in 2015.
Right, you have to add the price of the battery and screen, but shouldn't the nearly two-year gap make up for that?

Because the SoC was probably only $50 at launch anyway. Perhaps less, since I recall Nvidia selling the chip for that much at one point. Between the lowered cost of the SoC and RAM, I'd only expect savings of $20-30, and that's on the high end considering the chip was used almost nowhere and is on a very cost-inefficient process. From there, you have to add in the cost of the screen, the battery, the likely more expensive heatsink considering that it has the same performance in a device with less volume, whatever engineering went into the dock and main unit for them to interact as they do, whatever else might be in the box, the extra gig of RAM, and whatever customization might have been done to the SoC, such as an embedded memory pool to make up for the memory bandwidth. You guys are all assuming that it's an off-the-shelf X1 with adjusted clock speeds, but it's being called custom, so we shouldn't rule out there being some form of customized hardware.

And I swear to god, if you say "1 SM" or anything of the sort I will reach through your monitor and slap you silly.
 
Not surprised it had to be clocked low to have reasonable power consumption because the Portable mode is the baseline.

The thing I wonder about is what they'll use for the OS. Someone noted the Wii U used an A9 to handle OS tasks but, will the Switch get 4 A53 cores for OS tasks like the TX1? Would they even need 4 cores? Doubtful.

I wonder about Heterogeneous Multi-Processing but the Jetson TX1 devkits only has 4 A57 cores.

In the Wii U, Cafe OS runs across the IBM cores, IIRC. They reserve the ARM9 core for I/O and Wii BC, and possibly security checks as well; I can't remember.

For the Switch, I hope they don't reserve a whole core for the OS, but it's natural to expect at least part of one will be used for OS features.
 

ggx2ac

Member
Very entertaining last pages thanks to MDave, good job!



Yes, this graphic from Anandtech review of the Exynos 5433 was brought earlier by Thraktor which is a very good example since it's also a 4 A57 + 4 A53 setup made in a 20nm process.

[Image: A57 power curve from AnandTech's Exynos 5433 review]


Given that Switch should have a 5w budget on portable mode (I really don't know but I think that figure has been brought earlier as a rough estimate) that should be already very close to 2W.

And given the hybrid nature of the system CPU has to work at the same speed in both modes.

The Switch could easily be running at around 5W to 8W. 4 A57 cores clocked to 1GHz, as shown in that graph, give about 1.83W. The GPU could be around 2W or less at the assumed clock speed with 256 CUDA cores.

The annoying thing is I can only find the voltage for LPDDR4 RAM (1.1V) and not the wattage or amperage for it.

Then you factor in that the Joy-Cons have their own batteries, so the controllers will not be pulling power from the main body.

Of course this doesn't factor in every little thing, since there's possibly also the fan, but 5W to 8W is a reasonable ballpark for portable-mode power draw.
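For anyone who wants to play with those numbers, here is a rough back-of-the-envelope version of that budget. Every figure is either an estimate quoted in this thread or a plain guess, not a measurement:

```python
# Rough portable-mode power budget built from the estimates in this thread.
# None of these are measured values.
estimates_watts = {
    "4x A57 @ 1 GHz (AnandTech Exynos 5433 curve)": 1.83,
    "GPU @ ~307 MHz, 256 CUDA cores (guess)": 2.0,
    "LPDDR4, screen, fan, Wi-Fi, misc (guess)": 2.0,
}

total = sum(estimates_watts.values())
for part, watts in estimates_watts.items():
    print(f"{part}: ~{watts:.2f} W")
print(f"Estimated total: ~{total:.2f} W (within the 5-8 W ballpark discussed)")
```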

In Wii U, Cafe OS runs across the IBM cores iirc. They reserve the ARM9 core for I/O and Wii BC. Possibly security checks as well, I can't remember.

Ahh, I'm just wondering about the chance people will get peeved because Nintendo reserves an A57 core for the OS, leaving 3 cores for gaming (if that happens).
 

Mokujin

Member
While it's about the PS4 Pro, I wanted to add a link about half precision recently posted on Wccftech:

"Yes. Half precision (16 bit) instructions are a great feature. They were used some time ago in Geforce FX, but didn’t manage to gain popularity and were dropped. It’s a pity, because most operations don’t need full float (32 bit) precision and it’s a waste to use full float precision for them. With half precision instructions we could gain much better performance without sacrificing image quality."

Flying Wild Hog interview
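To illustrate the trade-off the developer is describing, here is a small NumPy toy example. It's CPU-side code rather than shader code, but the storage/bandwidth saving and the precision cost are the same idea:

```python
import numpy as np

# One million values stored at full and half precision.
full = np.linspace(0.0, 1.0, 1_000_000, dtype=np.float32)
half = full.astype(np.float16)

print("float32 size:", full.nbytes, "bytes")  # 4,000,000 bytes
print("float16 size:", half.nbytes, "bytes")  # 2,000,000 bytes: half the storage/bandwidth

# The cost: float16 keeps only ~3 decimal digits of precision.
err = np.abs(full - half.astype(np.float32)).max()
print("max rounding error:", err)
```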
 

LordOfChaos

Member
Not surprised it had to be clocked low to have reasonable power consumption because the Portable mode is the baseline.

The thing I wonder about is what they'll use for the OS. Someone noted the Wii U used an A9 to handle OS tasks but, will the Switch get 4 A53 cores for OS tasks like the TX1? Would they even need 4 cores? Doubtful.

I wonder about Heterogeneous Multi-Processing but the Jetson TX1 devkits only has 4 A57 cores.

The ARM core was IO and Trustzone I think - Cafe OS did run on the PowerPC cores, to which degree it reserved resources we don't know

The kernel runs in supervisor mode on the PowerPC, and performs the basic tasks of a microkernel. It is responsible for process isolation, memory management, and interrupt dispatching, as well as communication with IOSU. Cafe OS applications run as user mode processes, with separate address spaces and W^X memory protection. The kernel provides basic syscalls for running processes.
Processes
A process in Cafe OS represents a single running application, with its own code, memory, and permissions. Cafe OS only executes the code of a single process at a time, but it can hold the data of multiple processes in memory simultaneously, and switch between them. Rather than allowing arbitrary process creation, there is RAM reserved for a single foreground app, a single background app, and various other special processes. Each running process is assigned a unique identifier called a RAMPID:

http://wiiubrew.org/wiki/Cafe_OS
 
Very entertaining last pages thanks to MDave, good job!



Yes, this graphic from Anandtech review of the Exynos 5433 was brought earlier by Thraktor which is a very good example since it's also a 4 A57 + 4 A53 setup made in a 20nm process.



Given that Switch should have a 5w budget on portable mode (I really don't know but I think that figure has been brought earlier as a rough estimate) that should be already very close to 2W.

And given the hybrid nature of the system CPU has to work at the same speed in both modes.

Thanks for linking the chart. From a power-budget perspective, they would do better using 2 clusters of 4x A57s @ 1GHz than running 4x A57s @ 2GHz. I would expect them to use clusters of A53s before that, though.
 

KingSnake

The Birthday Skeleton
But wait, the GPU doesn't need to be clocked down in docked mode if MDave's discovery is true. It can just run the same as in the Shield and Nintendo just recommended to the devs to consider the lower frequency when developing, to be on the safe side.
 

antonz

Member
But wait, the GPU doesn't need to be clocked down in docked mode if MDave's discovery is true. It can just run the same as in the Shield and Nintendo just recommended to the devs to consider the lower frequency when developing, to be on the safe side.

The Shield TV cannot maintain 1000MHz and throttles to 768MHz after a few minutes, so Nintendo did the smart thing and set the max frequency at a level that won't drop, so performance is 100% locked in.

In the end it flipped assumptions on their head. Nintendo is pushing the GPU to its max potential in a device of its size. They aren't cutting corners or anything.
 

Donnie

Member
The Switch could easily be running at around 5W to 8W. 4 A57 cores clocked to 1GHz as shown in that graph gives 1.83W. The GPU could be around 2W or less with the assumed clock speed and 256 CUDA cores.

AFAIR Nvidia showed the Tegra Maxwell GPU using around 1.5W under load at 500MHz.
 

ggx2ac

Member
The Shield TV cannot maintain 1000MHz and throttles to 768MHz after a few minutes, so Nintendo did the smart thing and set the max frequency at a level that won't drop, so performance is 100% locked in.

That's why it's a little shocking hearing about the throttling since it sounds like tech sites didn't check properly?

This is by no means a fair matchup and we need to be clear about this – the SHIELD Android TV has no throttling or power constraints, no need to balance out energy efficiency – but it at least gives us some idea of how the device and Tegra X1 compare to other products.

http://www.anandtech.com/show/9289/the-nvidia-shield-android-tv-review/3

I can see now that the Shield TV product page doesn't list the clock speeds, so they aren't being misleading about real-world performance even though they quote theoretical performance.

https://shield.nvidia.com/node/12

AFAIR Nvidia showed the Tegra Maxwell GPU using around 1.5w under load at 500Mhz.

Hmm. That's something.
 
Even the top mobile phones won't be as powerful for gaming.



How high clocked they are doesn't mean anything though. Even at 1ghz A57 is as powerful or maybe even more powerful overall than Jaguar at 1.6ghz. That's if the CPUs even end up being A57 and not something better.

The number of cores is the only possible issue. But we have no idea if there could be extra smaller cores in there just for the OS (a couple of A53s maybe).

Pretty sure Jaguar is decently faster than standard A57 CPUs. Jaguar uses more power but doesn't have to worry about throttling issues at decent clock speeds. Since the ARM cores aren't going to be clocked all that high, you'd need more than 8 of them to compare for gaming, which isn't happening.
 

MDave

Member
Unity Third Person Assets demo screenshot from my Shield TV

Indie 3D platformer games could have some really sharp clean graphics on this thing!

1080p 60fps 8xAA ... might even be possible for 4K gaming minus the AA hah! As long as the games stick to simple graphics of course, but the bandwidth seems to allow for it.

Any recommendations for testing memory bandwidth speeds?

EDIT: Okay, I thought I'd check what speed the GPU is at while this is running, and it doesn't thermal throttle?! It's staying at 1GHz. The CPU is pretty much not doing anything at all though, so maybe the 3DMark benchmark was using the CPU, thus causing thermal throttling because more power was being delivered to the SoC. More testing needed!
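In the absence of a proper tool, one crude option is a timed large array copy. This is only a sketch: it measures CPU-side memory traffic, so it won't capture what the GPU can actually stream:

```python
import time
import numpy as np

# Crude CPU-side memory bandwidth estimate: time a large array copy.
# This only measures what the CPU can stream, not GPU texture/framebuffer traffic.
N = 256 * 1024 * 1024 // 8          # 256 MiB of float64
src = np.ones(N)
dst = np.empty_like(src)

start = time.perf_counter()
np.copyto(dst, src)
elapsed = time.perf_counter() - start

# A copy reads the source and writes the destination, so count both.
gib_moved = 2 * src.nbytes / 2**30
print(f"~{gib_moved / elapsed:.1f} GiB/s effective copy bandwidth")
```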
 

antonz

Member
That's why it's a little shocking hearing about the throttling since it sounds like tech sites didn't check properly?



http://www.anandtech.com/show/9289/the-nvidia-shield-android-tv-review/3

I can see now the Shield TV doesn't list the clock speeds in the product page so they aren't being misleading about real world performance if they stuck in the theoretical performance.

https://shield.nvidia.com/node/12



Hmm. That's something.

One thing I notice is that in most cases they seem to be referencing CPU performance when discussing the lack of throttling. Not much discussion is actually devoted to the GPU. GPU-wise they really don't give any hard numbers besides listing off Nvidia talking points for the Tegra X1. It honestly doesn't seem like they bothered to check what it was doing beyond what Nvidia said was potentially possible. Then again, there really isn't a single website that has gone to the effort to get into the nitty gritty.
 
Pretty sure Jaguar is decently faster than standard A57 CPUs. Jaguar uses more power but doesn't have to worry about throttling issues at decent clock speeds. Since the ARM stuff isn't going to be clocked all that high you'd need more than 8 cores, which isn't happening for gaming when comparing.

I'm fairly certain A57 has more performance per MHz, but I'm not sure if it would be enough to match the 1.6x clock speed ratio.
 

KingSnake

The Birthday Skeleton
EDIT: Okay thought I'd check what speeds the GPU is at while this is running, and it doesn't thermal throttle?! Staying at 1GHz. The CPU is pretty much not doing anything at all though, so maybe the 3DMark benchmark was using the CPU, thus causing thermal throttling because more power was being delivered to the SoC. More testing needed!

This is quite a roller coaster. But it's cool that you're running these tests.
 

Donnie

Member
Pretty sure Jaguar is decently faster than standard A57 CPUs. Jaguar uses more power but doesn't have to worry about throttling issues at decent clock speeds. Since the ARM stuff isn't going to be clocked all that high you'd need more than 8 cores, which isn't happening for gaming when comparing.

It isn't; clock for clock the A57 is significantly faster. Even at 1GHz it's up there with Jaguar at 1.6GHz, faster in some ways and slower in others, pretty equal overall. Throttling isn't an issue when we're talking about a locked 1GHz either (the Switch CPU speed). Of course 4x A57 cores at 1GHz won't match 8 Jaguar cores at 1.6GHz. But we don't know the full CPU config yet, so...
 

Donnie

Member
Unity Third Person Assets demo screenshot from my Shield TV

Indie 3D platformer games could have some really sharp clean graphics on this thing!

1080p 60fps 8xAA ... might even be possible for 4K gaming minus the AA hah! As long as the games stick to simple graphics of course, but the bandwidth seems to allow for it.

Any recommendations for testing memory bandwidth speeds?

EDIT: Okay thought I'd check what speeds the GPU is at while this is running, and it doesn't thermal throttle?! Staying at 1GHz. The CPU is pretty much not doing anything at all though, so maybe the 3DMark benchmark was using the CPU, thus causing thermal throttling because more power was being delivered to the SoC. More testing needed!

So is the CPU throttling in this test?
 

ggx2ac

Member
AFAIR Nvidia showed the Tegra Maxwell GPU using around 1.5w under load at 500Mhz.

I found a reference for the wattage but not the clock speed.

I remember seeing this article:

For power testing NVIDIA ran Manhattan 1080p (offscreen) with X1’s GPU underclocked to match the performance of the A8X at roughly 33fps.

NVIDIA’s tools show the X1’s GPU averages 1.51W over the run of Manhattan. Meanwhile the A8X’s GPU averages 2.67W, over a watt more for otherwise equal performance.

http://www.anandtech.com/show/8811/nvidia-tegra-x1-preview/3

The annoying thing is I can't find the clock speed for the GXA6850 in the iPad Air 2 they used to compare against the TX1.
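Working just from the two figures quoted above, the efficiency gap comes out like this (simple ratios, nothing more):

```python
# Perf-per-watt from the AnandTech figures quoted above:
# both GPUs were running GFXBench Manhattan 1080p offscreen at ~33 fps.
fps = 33.0
x1_watts = 1.51    # Tegra X1 GPU average
a8x_watts = 2.67   # A8X GPU average

print(f"Tegra X1: {fps / x1_watts:.1f} fps/W")
print(f"A8X:      {fps / a8x_watts:.1f} fps/W")
print(f"X1 efficiency advantage: ~{a8x_watts / x1_watts:.2f}x at equal performance")
```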
 

Instro

Member
With the 3DS nearing the end of its product cycle there will only be Switch as their main driver of hardware revenue.

Nintendo relies on gross profit from their hardware, with the Switch being mobile hardware you can expect Nintendo would want to make a nice profit off the system.

In the following, their average gross profit from hardware is around 40% it's too bad the data hasn't been updated.



Source: http://m.neogaf.com/showthread.php?t=740455

Um maybe I'm misunderstanding, but reading the OP of that thread that graph is based on overall revenue & cost numbers, not purely hardware. There's no way to know what their hardware profitability is because they don't report those numbers.
 

Doctre81

Member
Also it's safe to assume Nvidia's selling the Shield TV at a decent profit. Nintendo can bite the bullet and sell the Switch at cost or even at a small loss because they know they'll earn that money back in software sales.

They already said they will not sell the Switch at a loss, and there is no guarantee software sales will take off. The Switch will not be $200.
 

Donnie

Member
I found a reference for the wattage but not the clock speed.

I remember seeing this article:



http://www.anandtech.com/show/8811/nvidia-tegra-x1-preview/3

The annoying thing is I can't find the clock speed for the GXA6850 in an iPad Air 2 they used to compare to the TX1.

The same article shows TX1 at full speed running Manhattan 1080p at 63.6fps. So I think the consensus was that if Nvidia clocked TX1 down to match the 33fps performance of A8X it must have been about half full clock.

Then again, I suppose the whole throttling thing comes into it. Was it running at the full 1GHz to hit that 63.6fps score? Who knows now, with the tests we've seen here.
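That "about half clock" guess can be sanity-checked from the two framerates in the same article, assuming performance scales roughly linearly with GPU clock (a simplification; real scaling is usually sub-linear):

```python
# Rough clock estimate for the 1.51 W power test, assuming fps scales
# linearly with GPU clock.
full_clock_mhz = 1000.0   # nominal TX1 GPU clock
full_fps = 63.6           # Manhattan 1080p offscreen at full clock (AnandTech)
target_fps = 33.0         # clocked down to match the A8X

est_clock = full_clock_mhz * target_fps / full_fps
print(f"Estimated underclock: ~{est_clock:.0f} MHz")  # ~519 MHz, i.e. about half
```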
 

ggx2ac

Member
Um maybe I'm misunderstanding, but reading the OP of that thread that graph is based on overall revenue & cost numbers, not purely hardware. There's no way to know what their hardware profitability is because they don't report those numbers.

I made a mistake, you're right. I thought it was specifically hardware gross profit due to the graph but the text before it says it is gross profit for both software and hardware.

So, moving to another topic: Nintendo's direct hardware & software profitability.
 

ultrazilla

Gold Member
The Shield TV cannot maintain 1000MHz and throttles to 768MHz after a few minutes, so Nintendo did the smart thing and set the max frequency at a level that won't drop, so performance is 100% locked in.

In the end it flipped assumptions on their head. Nintendo is pushing the GPU to its max potential in a device of its size. They aren't cutting corners or anything.

Interesting. Thanks antonz. I believe you have been verified as either a developer or a solid "source" (in general, not specifically just Nintendo), correct?
 

ggx2ac

Member
The same article shows TX1 at full speed running Manhattan 1080p at 63.6fps. So I think the consensus was that if Nvidia clocked TX1 down to match the 33fps performance of A8X it must have been about half full clock.

Then again I suppose the whole throttling thing comes into it. Was it running at full 1Ghz to hit that 63.6fps score? Who knows now with the tests we've seen here.

That's possible because they were testing the TX1 and not the Shield TV.

As in, they were given something with enough cooling that could run the GPU at 1GHz but it wasn't designed like the Shield TV which would end up throttling.

This is just a guess though.
 

antonz

Member
Interesting. Thanks Antoz. I believe you have been verified as either a developer/solid "source"(in general, not specifically just Nintendo) correct?

I am just replying to what's been discussed here. I went indie dev and have no access to the Switch, so I cannot offer any insight into what it has. Just educated responses based on current knowledge. If MDave discovers new findings, it could change things again.

TL;DR version: speculation based on information we have that can and likely will change, but at least right now we seem to have a better understanding of the clock speeds, if MDave's data holds up.
 

Donnie

Member
That's possible because they were testing the TX1 and not the Shield TV.

As in, they're given something that could run the GPU at 1GHz but it wasn't designed like the Shield TV which would end up throttling.

This is just a guess though.

Yeah I suppose in that kind of test environment Nvidia could easily have had it running at full speed, since they wouldn't be limited to any particular form factor or cooling solution.
 

MDave

Member
Alrighty looks like the picture is starting to get clearer and clearer hah.

I ran my little indie game that pushes the CPU to 2GHz constantly, rendering at 1080p with 8x MSAA, and I'm getting about 30FPS with dips when the CPU really gets hit a little too hard.

http://puu.sh/tfWG7/f39056543e.png

The GPU is thermal throttling again! Even lower than 768MHz sometimes. The temperature is nearly 60°C, but the CPU is staying at a rock-solid 2GHz. I guess that never throttles, so instead the Shield will throttle the GPU if the CPU starts heating things up. Which again lines up with the Dolphin developer, hah.

So in conclusion: the GPU won't throttle down if the CPU is not being pushed to 2GHz. I don't know at what CPU frequency the throttling does start, but yeah, interesting stuff.

So where did the throttling kick in on this timeline?

About when the Physics test started running. Notice the massive dip in FPS shortly after the test started.
 

KingSnake

The Birthday Skeleton
But this no longer explains the GPU clocks on the Switch. As the CPU is already limited to 1GHz, there should be no need for throttling when docked. Back to square one, practically.
 

MDave

Member
But this doesn't explain anymore the GPU clocks on Switch. As the CPU is already limited to 1Ghz there should be no need for throttling when docked. Back to square 1 practically.

Not quite. Nintendo have likely chosen to limit the CPU A) for battery saving, B) to stop the GPU throttling when the CPU is clocked higher than 1GHz, and C) to keep the same 1GHz CPU speed in docked mode so game logic doesn't run faster or slower compared to undocked mode.

Is there no way you can lock the CPU to just over 1Ghz and do the same tests?

That's my next test ;)
 

ggx2ac

Member
Yeah I suppose in that kind of test environment Nvidia could easily have had it running at full speed, since they wouldn't be limited to any particular form factor or cooling solution.

It makes me think, then, that the Switch shouldn't really need an active fan in portable mode if the setup is a standard TX1 downclocked.

Only if it had something like 3 or 4 SMs in the GPU and/or 8 active CPU cores would it need active cooling.

Of course that's just speculation due to a patent so no one should be getting their hopes up.
 

MuchoMalo

Banned
But this doesn't explain anymore the GPU clocks on Switch. As the CPU is already limited to 1Ghz there should be no need for throttling when docked. Back to square 1 practically.

Because the Switch would have a less open design, and because Nintendo wouldn't want to be right on the edge of throttling. If the CPU were running at full speed, the GPU would have been clocked lower still to ensure that it wouldn't throttle even under extreme circumstances. In other words, it's all about tolerance. It's also possible that the GPU clocks were simply locked before the CPU's. This isn't anywhere near square one. I don't know why you want to downplay it.
 

Donnie

Member
But this doesn't explain anymore the GPU clocks on Switch. As the CPU is already limited to 1Ghz there should be no need for throttling when docked. Back to square 1 practically.

Well, it shows that with the CPU at 2GHz the Shield's GPU throttles below the Switch's GPU speed. So perhaps with the Switch's smaller size the GPU would throttle to around 768MHz in worst-case situations even with the CPU at 1GHz.

In that scenario Nintendo would need to set the GPU to max out at that worst case speed. There can't be any room for any throttling at all, not even the smallest amount.

I mean, when we thought the Shield ran at full speed all the time under full load, I found it hard to understand why the Switch's GPU would need to run at its reported speed with active cooling and the CPU at half speed, even with its smaller size. But now that we're seeing the Shield does throttle quite severely at times, I think it's looking much more believable. I'm still not sure why cooling would be needed in handheld mode. But then I do still expect the SoC to be custom and not just a TX1.
 

KingSnake

The Birthday Skeleton
Because the Switch would have a less open design, and because Nintendo wouldn't want to be right on the edge of throttling. If the CPU were running at full speed, the GPU would have been clocked lower still to ensure that it wouldn't throttle even under extreme circumstances. In other words, it's all about tolerance. It's also possible that the GPU clocks were simply locked before the CPU's. This isn't anywhere near square one. I don't know why you want to downplay it.

I don't want to downplay it. I find the tests MDave does fascinating to follow. But what seemed to be quite a clear explanation at some point is back into speculation territory now.

What's interesting is that the Pixel C has the CPU clock only a notch lower and the GPU clock cut down to almost the Switch's docked level. It would be interesting to see how it acts when it throttles.
 

MDave

Member
Is there no way you can lock the CPU to just over 1Ghz and do the same tests?

Alrighty here are the results in my heavy CPU + GPU test at CPU 1GHz limited:

http://puu.sh/tfXNj/5caf0f3a41.png

Getting a lower FPS, but it appears that the GPU is waiting on the CPU. Between frames it is probably clocking down until the CPU catches up. I don't think this is throttling; temperatures are at 45°C.

And this is the GPU only test results:

http://puu.sh/tfY3V/09f37f8819.png

No GPU throttling as expected.
 

Donnie

Member
Alrighty here are the results in my heavy CPU + GPU test at CPU 1GHz limited:

http://puu.sh/tfXNj/5caf0f3a41.png

Getting a lower FPS, but it appears that the GPU is waiting on the CPU. Between frames it is probably clocking down until the CPU catches up. I don't think this is throttling, temperatures are at 45c.

And this is the GPU only test results:

http://puu.sh/tfY3V/09f37f8819.png

No GPU throttling as expected.

What about doing the tests at varying CPU clock speeds to find the CPU speed that causes the GPU to begin throttling?
 

MDave

Member
What about doing the tests at varying CPU clock speeds to find the CPU speed that causes the GPU to begin throttling?

That could be difficult to detect because of the way Android will lower the CPU/GPU clocks when it's waiting or not being used, giving false positives. When I can turn that off, I think I can get more accurate results.
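One way to tell throttling apart from normal DVFS idling is to log GPU frequency alongside temperature over time, so clock drops can be correlated with heat rather than idle periods. This is only a hypothetical sketch; the sysfs paths below are placeholders that differ between devices and Android builds:

```python
import time

# Hypothetical sketch: poll GPU frequency and SoC temperature so that clock
# drops can be matched against temperature (throttling) rather than idle
# periods (normal DVFS). The paths are placeholders, not real Shield TV paths.
GPU_FREQ_PATH = "/sys/class/devfreq/gpu/cur_freq"        # placeholder path (Hz)
TEMP_PATH = "/sys/class/thermal/thermal_zone0/temp"      # placeholder zone (millidegrees C)

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

for _ in range(60):  # one sample per second for a minute
    freq_mhz = read_int(GPU_FREQ_PATH) / 1_000_000
    temp_c = read_int(TEMP_PATH) / 1000
    print(f"GPU {freq_mhz:.0f} MHz  {temp_c:.1f} C")
    time.sleep(1)
```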
 
Alrighty here are the results in my heavy CPU + GPU test at CPU 1GHz limited:

http://puu.sh/tfXNj/5caf0f3a41.png

Getting a lower FPS, but it appears that the GPU is waiting on the CPU. Between frames it is probably clocking down until the CPU catches up. I don't think this is throttling, temperatures are at 45c.

And this is the GPU only test results:

http://puu.sh/tfY3V/09f37f8819.png

No GPU throttling as expected.

Thanks for sharing this. The sacrifices the system makes to maintain the CPU's max clock are interesting. The GPU at moments clocks down to the Switch's handheld-mode levels.
 