Shogmaster
Member
What the crikey fuck is going on in this thread... I haven't seen this much fudging since my niece's 6th birthday.
This is only true if you compare them on dx11. Both Xbox1 and PS4 use low level APIs.
@Skittzo0413 The 970 uses a newer architecture than the 390. 390 vs 780 (Ti) would be a much better comparison. But usually Nvidia needs fewer GFLOPS for equal performance.
To be honest I don't think it would be that big of an issue. Nintendo's internal teams would have had Nvidia hardware to work on from pretty much the moment the deal was done (perhaps two years before launch), and they dealt with arguably a bigger architectural jump from Wii to Wii U (moving from a fixed-function to fully programmable graphics architecture).
Edit: And CPU ISA really wouldn't be that big of a deal outside of whoever's working on the compiler (if they're not using a third party compiler). Also keep in mind that Nintendo have continuously been developing for ARM-based handhelds since the GBA.
I think 6 is the safest bet.
You can also compare the GTX 750 Ti's 1.4 TFLOPS to the PS4's 1.843 TFLOPS; the same similar-performance pattern shows up there, and in fact the 750 Ti is usually faster.
That's actually a good idea. Console generations last much longer than phones, let's say a new Switch releases every 4 years, but depending on specs, some newer games can carry over at lower settings or not be compatible, but that guarantees backwards compatibility with new devices forever onward until Nintendo decides to leave the Switch brand. Kinda like what Microsoft plans to do with Xbox. All Xbox games play on all Xbox systems by going an upgrade route.

If they're the same family of chips, then theoretically it shouldn't matter. You'd be able to keep using your old Shift, until it reaches a point where new games ONLY supported a min version. Kind of like how while you can install iOS on old iPhones, you can only go so far back. Shift software would be the same.
People are used to that now, so it seems logical to do that with game consoles at this point.
You can't fudge late 2013's 200W of processing power into an early 2017's 10W chassis no matter how much mental gymnastics you do.
Stop doing this to yourselves. Just be happy with Wii U power in portable form factor.
You also have to consider that Pascal is much more power-efficient than Maxwell. Also, if I recall, Emily Rogers said that the Switch is at minimum 2x the Wii U in terms of power.

Yes, a TX1 dev board ran the same UE4 demo that runs on XBox One, PS4, and PCs at a lower frame rate, resolution and fidelity. That doesn't change the tiny chassis that the new chip has to work inside with much reduced wattage.
If we are discussing whether a new TX1-based plug-in home set-top-box console could match XBox One performance, I'm right there with you guys. Swapping out a weak x86 CPU for a powerful ARM CPU and going with an Nvidia GPU instead of an AMD GPU, I can easily see a near-parity situation. But that's not what Switch is. It's stuck with a portable form factor that would be lucky to handle 10W of constant heat dissipation.
It's like comparing cars by looking only at horsepower and ignoring everything else, like the weight of the car, the profile of the wheels, the surface the cars are driven on, etc.

That would give Nvidia flops a 1.7x advantage over AMD flops, which sounds way too high to be honest. I'd love it if someone could clarify what causes these differences, if they even truly exist.
(I didn't watch all of the video but I'm pretty sure neither of the chips were overclocked, right?)
When it comes down to it, however, almost every game can be ported in some way; the most important thing is whether the Switch sells well enough to warrant the ports.
Sorry for going off topic, but isn't the Nintendo investors meeting supposed to be happening?
Not that I expect Switch to hit XBO or PS4 performance levels, but it would be entirely technically possible for them to do so within a 10W envelope. The GP104 Pascal GPU (which is 20 SMs or 2560 "cores") consumes 36W at 1060 MHz. By that basis, an appropriately scaled down Pascal GPU with 6 SMs (768 cores) should be able to achieve ~1GHz within 10W for 1.5TF of FP32 or 3 TF of FP16. Easily the match of PS4 or XBO provided the CPU/RAM/etc are up to it.
As I say I'm certainly not expecting that (perhaps half the performance is plausible), but there's a lot to be gained from a wide application of an energy efficient architecture on a new node with a modest clock speed.
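As a rough sanity check on the scaling argument in the post above, here's a minimal sketch; the 36W / 1060 MHz figure for 20 SMs, the 128-cores-per-SM layout, and the assumption that power scales roughly linearly with SM count all come from that post, not from any measurement.

```python
# Back-of-the-envelope check of the GP104 scaling argument above.
# Assumptions (from the post, not measured): 36 W at 1060 MHz for 20 SMs,
# power scaling roughly linearly with SM count, 128 CUDA cores per SM.
sms = 6
cores_per_sm = 128
clock_ghz = 1.0

cores = sms * cores_per_sm                    # 768
fp32_tflops = cores * 2 * clock_ghz / 1000    # 2 FLOPs per core per clock -> ~1.54 TF
fp16_tflops = fp32_tflops * 2                 # double-rate FP16 -> ~3.1 TF
est_power_w = 36 * (sms / 20)                 # naive linear scaling -> ~10.8 W

print(fp32_tflops, fp16_tflops, est_power_w)
```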
I wonder if any dev can request a dev kit now or if they are waiting for the full reveal in January to do that. I remember the Ori devs and the unravel devs saying that getting a dev kit was impossible.
These two cards are a good idea of the difference in performance per flop in real world game benchmarking:
AMD R9 390: 5,914 GFLOPS
Nvidia GTX 970: 3,494 GFLOPS
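If those two cards really do land in roughly the same place in game benchmarks, the implied per-FLOP gap is just the ratio of their peak figures, which is where the ~1.7x number mentioned earlier in the thread comes from:

```python
# Ratio of peak FP32 throughput, R9 390 vs GTX 970 (figures as quoted above).
r9_390_gflops = 5914
gtx_970_gflops = 3494

print(r9_390_gflops / gtx_970_gflops)  # ~1.69x per-FLOP advantage implied for Nvidia
```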
Considering that there ar... "NX" would be between PS4 and PS4 Pro power.
Well, for one, at 10W sustained dissipation just for the SoC, if you assume a very low 2W for the entire rest of the system, you are looking at a 44 minute battery life with a battery capacity like 3DS XL. A bit under 2 hours with the same battery capacity as the 8" Shield tablet.

Why wouldn't it? It has active cooling and the system is at least half as big as the Wii U. And NO disc drive.
It would be very strange to go lower than that in most performance metrics. I'd say at least 2x Wii U in terms of GPU compute is a realistic lower bound.

Are we looking at Wii U power as the bare minimum for portable mode?
I'd think the performance in portable mode hinges entirely on how much energy Nintendo wants it to use when running on battery. Which would mean a very wide range of speculation.
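For what it's worth, the battery-life estimates a few posts up are just capacity divided by draw. A quick sketch: the 12W draw (10W SoC plus 2W for everything else) is taken from that post, while the battery capacities below are rough assumptions, so the exact minutes won't necessarily match the figures quoted.

```python
# Battery life ~= capacity (Wh) / draw (W). Draw from the post; capacities assumed.
draw_w = 10 + 2  # 10 W SoC + 2 W screen/RAM/etc.

batteries_wh = {
    "3DS XL-class battery (assumed ~1750 mAh @ 3.7 V)": 1.75 * 3.7,  # ~6.5 Wh
    "8-inch Shield Tablet (assumed ~19.75 Wh)": 19.75,
}

for name, wh in batteries_wh.items():
    print(f"{name}: ~{wh / draw_w * 60:.0f} minutes")
```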
Didn't Epic confirm Unreal Engine 4 for Switch? We all know how Epic feels about Nintendo hardware so UE4 support should be a good sign of where it might be on the power scale.
The whole 'UE4 equates power' fiasco (and its episode 'wiiU? haha!') started with Sweeney's talk right before the start of the gen about how Epic were planning SVOGI for next-gen consoles, and how that takes 2TF as a minimum. Of course we all know how that story went.

I think that's more about the hardware supporting the featureset of the engine rather than it having X much power; with them going Nvidia there's not really any point in the thing not being able to use modern shaders in some form, so getting UE4 to run on it is probably not a big deal at all.
Well the battery life is apparently very poor. ;-)
In all seriousness, I'm expecting somewhere around the smack-dab midpoint between Wii U and XBO. I think that's a somewhat safe assumption considering the fact it has some form of cooling.
If they can reach a power level that is above roughly half the Xbox One in portable mode, they should be set, right? Multiplats running at 1080p should scale down to 720p pretty much effortlessly, unless I am neglecting some other aspect. If they can supplement it with enough RAM and a good cpu it should be more than fine.
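Purely on pixel counts the scaling works out: 1080p is 2.25x the pixels of 720p, which is why "a bit over half an XBO" tends to be treated as enough for straight 720p downports. A trivial check:

```python
# 1080p has 2.25x the pixels of 720p.
print((1920 * 1080) / (1280 * 720))  # 2.25
```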
Realistically, unless there is some miracle FP32/FP16 balancing, for the most demanding AAA 3rd parties it will be 720p in docked mode and 540p in handheld mode.
From what we know it should be quite a bit above Wii U even in handheld mode so it should be able to hit 720p easily. Plus Nintendo probably would have gone with a 540p screen if that was the case.
EDIT: oh wait, AAA third party games, you're probably right. Though I'd expect scene complexity to take more of a hit.
Yeah, I was talking about 3rd parties AAA. 1st party games I expect to run at 1080p docked and 720p in handheld mode.
Sometimes it requires more work to get lower precision calculations to work (with zero image quality degradation), but so far I haven't encountered big problems in fitting my pixel shader code to FP16 (including lighting code). Console developers have a lot of FP16 pixel shader experience because of PS3. Basically all PS3 pixel shader code was running on FP16.
It is still very important to pack the data in memory as tightly as possible, as there is never enough bandwidth to lose. For example 16 bit (model space) vertex coordinates are still commonly used, the material textures are still DXT compressed (barely 8 bit quality) and the new HDR texture formats (BC6H) commonly used in cube maps have significantly less precision than a 16 bit float. All of these can be processed by 16 bit ALUs in the pixel shader with no major issues. The end result will still eventually be stored to an 8 bit per channel back buffer and displayed.
One example where 16 bit float processing is not enough: Exponential variance shadow mapping (EVSM) needs both 32 bit storage (32 bit float textures + 32 bit float filtering) and 32 bit float ALU processing.
FP16 is more than enough for post processing (DOF, bloom, motion blur, color correction, tone mapping). As FP16 makes post processing math 2x faster on Rogue (all the new iDevices), it will actually be a big thing towards enabling console quality graphics on mobile devices. Obviously FP16 is not enough alone, we also need to solve the bandwidth problem of post processing on mobiles. On chip solutions (like extending the tiling to support new things) would likely be the most power efficient answers.
They will, most likely, use FP32 for some pixel shader code. But the parts that need FP32 aren't actually that frequent. Some developers will want to play it safe and use FP32 wherever there is any doubt, but even then there are shader parts that are obviously fine with FP16.
I don't think it is particularly challenging to write pixel shader code for a "console quality", "high fidelity" mobile game that is something like 70% FP16 and 30% FP32.
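To make the FP16 points above a bit more concrete, here's a small illustration, in NumPy rather than shader code, of why half precision is plenty for an 8-bit-per-channel output yet falls over for an exponential warp like the one EVSM relies on (the e^12 value below is just an example, not EVSM's actual constant):

```python
import numpy as np

# FP16 has a 10-bit mantissa: the step size just above 1.0 is ~0.001, much
# finer than the 1/255 ~= 0.004 step of an 8-bit-per-channel back buffer.
print(np.finfo(np.float16).eps)   # ~0.000977
print(1 / 255)                    # ~0.00392

# But FP16 maxes out at 65504, so even a modest exponential warp overflows,
# hence the need for FP32 storage and ALU processing for EVSM.
print(np.finfo(np.float16).max)   # 65504
print(np.exp(np.float16(12.0)))   # inf, since e^12 ~= 162755 > 65504
```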
Half the Xbox One would be 650GFlops of power. I would guess we're looking at somewhere between 350-400GFlops during portable mode.
I'm by no means a tech expert, but considering a whole lot of XBO games run at resolutions lower than 1080p, it would still have a lot of issues. It's certainly much closer than Wii was to 360 or Vita was to PS3.