
WiiU technical discussion (serious discussions welcome)

Thraktor

Member
And that 1W difference probably comes from mass storage access. I'm pretty sure power management was in one of the Linkedin profiles I've found a while ago, so my guess is that it simply isn't enabled in the current firmware.

It's likely that "power management" in that context is simply related to reducing max power consumption. Hardware techniques to reduce power consumption during periods of low load (like power gating) aren't really things which can be turned on or off by firmware. My guess is that they decided against significant power gating for the GPU die, given that it increases the die size and reduces performance (although only very slightly) and the GPU die is already pretty big and power efficient in the first place.
 

DrWong

Member
Don't know if it's relevant so I let you decide guys.

Wii U CPU and GPU Rendered in High-Res:
Today we got a decent rendering of the Wii U CPU and GPU that complements the previous blurry chip shots. It is not an actual photo, but rather a photoshopped recreation of the package.

[Image: high-res-wii-u.jpg]
 

YuChai

Member
Just got the japan version with monster hunter. Really surprised it's running on mt framework mobile (not the full version)
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Just got the japan version with monster hunter. Really surprised it's running on mt framework mobile (not the full version)
Assuming the 3ds original was using mt mobile, there wouldn't have been much point in moving the title to the more potent version of the engine.
 

z0m3le

Banned
Apparently someone on Wikipedia seems to think that the GPU is based off of the Radeon HD 6450. Thoughts?

That GPU has 370 million transistors, which is roughly half the estimate made in this thread on page 17. While the estimate isn't fact, it's hard to imagine that it would be far off. If over a billion transistors exist in the GPU, and less than a third of that is eDRAM, I highly doubt we will see a GPU below 600 million transistors, and as much as 750 million is a lot more likely. Having said that...

The HD 4770 has 826 million transistors, so something slightly less beefy is reasonable to assume too, if they are staying close to the R700 line, which means somewhere north of 500 shaders or a big architecture change from R700.

Honestly it's pointing right at Turks: 118mm², 716 million transistors. At 600MHz the E6760 (Turks) pulled 576 GFLOPS, and with 1GB of GDDR5 on board it drew 35 watts. So while the Wii U's GPU is custom, that same GPU with the RAM removed and the clock lowered to 550MHz should sit well below 30 watts and fit inside this puzzle piece extremely well.
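For what it's worth, a rough sketch of where those numbers come from, assuming the E6760's 480 VLIW ALUs (each doing a multiply-add, i.e. 2 FLOPs, per clock):

```python
# Rough FLOPS scaling: ALUs x 2 FLOPs (multiply-add) per clock x clock speed.
def gflops(alus, clock_mhz):
    return alus * 2 * clock_mhz / 1000.0

print(gflops(480, 600))  # 576.0 - the quoted E6760 figure
print(gflops(480, 550))  # 528.0 - the same part at a 550MHz Wii U-like clock
```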
 
In the clip Alstrong linked to above it looks like "traditional" alpha drops -- related to the amount and screen size of the blending effect. Which is still surprising to me, since the eDRAM, with the bandwidth numbers thrown around in this thread, shouldn't let that happen.
Which is why I'm beginning to think they are lower than expected. Like most of the system specifications.

Though if they get too low doesn't that kind of nullify the reason for its existence?
 

z0m3le

Banned
Which is why I'm beginning to think they are lower than expected. Like most of the system specifications.

Though if they get too low doesn't that kind of nullify the reason for its existence?

It has to be at a certain bandwidth for backwards compatibility, so I wouldn't think too hard about that.
 

Argyle

Member
It has to be at a certain bandwidth for backwards compatibility, so I wouldn't think too hard about that.

Are you suggesting that the GPU bandwidth when playing Wii U games is limited to Wii levels for the sake of backwards compatibility, or did I misread your comment?
 

mrklaw

MrArseFace
It has to be at a certain bandwidth for backwards compatibility, so I wouldn't think too hard about that.

Considering the entire machine seems to reboot into Wii mode, I'd think they could mess with clock speeds between modes if needed.
 

z0m3le

Banned
Considering the entire machine seems to reboot into Wii mode, I'd think they could mess with clock speeds between modes if needed.

I was talking about the eDRAM's speed. Is there a reason why they would lower it for Wii U but raise it for Wii? At 33 watts the console certainly isn't running hot.

Are you suggesting that the GPU bandwidth when playing Wii U games is limited to Wii levels for the sake of backwards compatibility, or did I misread your comment?

I said that the edram's bandwidth couldn't be lower than Wii's if it wanted BC.
 

Thraktor

Member
I said that the edram's bandwidth couldn't be lower than Wii's if it wanted BC.

Wii's 1T-SRAM bandwidth wasn't all that high (maybe 10GB/s or so). It's the low latency of the 1T-SRAM that'd be the issue for BC (not that that'd be a problem for on-die eDRAM).

Which is why I'm beginning to think they are lower than expected. Like most of the system specifications.

Though if they get too low doesn't that kind of nullify the reason for its existence?

As far as I can tell from reading up on Renesas's eDRAM, Nintendo would have to go seriously out of their way to gimp the eDRAM to such a degree that it'd cause these kinds of alpha issues. And yes, it would almost entirely defeat the purpose of putting the eDRAM on there in the first place.

Given that we've only seen these issues in ports, and we know that the DDR3 pool does have limited bandwidth, I'm still of the opinion that developers are running transparency render targets through the DDR3, a la XBox360, and that this is due to some aspects of the way the GPU interfaces with the eDRAM and DDR3.
 

Durante

Member
Given that we've only seen these issues in ports, and we know that the DDR3 pool does have limited bandwidth, I'm still of the opinion that developers are running transparency render targets through the DDR3, a la XBox360, and that this is due to some aspects of the way the GPU interfaces with the eDRAM and DDR3.
I'm just not really happy with this explanation since it presupposes such a high level of incompetence on part of the developers. Or are you suggesting there's an architecture or tool-related issue that prevents them from implementing it in the obvious(ly better) way?
 

Argyle

Member
I'm just not really happy with this explanation since it presupposes such a high level of incompetence on part of the developers. Or are you suggesting there's an architecture or tool-related issue that prevents them from implementing it in the obvious(ly better) way?

I'm with Durante here, typically the hardware is implemented in a way where the GPU can ONLY write to embedded RAM (IIRC PS2, GameCube/Wii, Xbox 360 all have this limitation)...I think I see what you are suggesting, that the GPU cannot read from embedded RAM, but I think it's more likely that the GPU can only read from embedded RAM to be honest.

IIRC Black Ops is using some form of deferred rendering so I would think that would mean they are going to do a separate translucency pass, so even in some kind of worst case scenario where they have to shuffle buffers around to get things to fit, they would pay such a setup cost once per frame and the actual rendering time for the translucent objects should be fast.

Does the Wii U version show any signs of using a lower res buffer for alpha, out of curiosity?
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I'm with Durante here, typically the hardware is implemented in a way where the GPU can ONLY write to embedded RAM (IIRC PS2, GameCube/Wii, Xbox 360 all have this limitation)...
Both PS2's GS and PSP's GPU can, and so can later GPUs that host eDRAM. Also, Xenos technically can too, during fb resolve.
 

guek

Banned
I'm just not really happy with this explanation since it presupposes such a high level of incompetence on part of the developers. Or are you suggesting there's an architecture or tool-related issue that prevents them from implementing it in the obvious(ly better) way?

I find it surprising that a month after its launch, we still don't seem to have a clear picture of how this system works or why ports might possibly be underperforming. Then again, I'm not sure how long it's historically taken for hardware design to become transparently clear to the public post launch.
 

Argyle

Member
Both PS2's GS and PSP's GPU can, and so can later GPUs that host eDRAM. Also, Xenos technically can too, during fb resolve.

Are you sure about the GS? Granted I haven't looked at the PS2 in like 7-8 years and maybe my memory is failing...I never worked on PSP so no comment there.

And "technically" you are right about Xenos (otherwise you could never see anything as its video out must scan out from main memory, like the GameCube) but that is not what I was talking about (whether it could write directly to main memory using drawing operations).
 

Donnie

Member
Wii's 1T-SRAM bandwidth wasn't all that high (maybe 10GB/s or so). It's the low latency of the 1T-SRAM that'd be the issue for BC (not that that'd be a problem for on-die eDRAM).

24MB 1T-SRAM was 3.9GB/s, while the embedded 1T-SRAM was 15.6GB/s for the 1MB texture cache and 11.4GB/s for the 2MB frame/z buffer. So 27GB/s combined for the eDRAM.

As far as WiiU eDRAM goes, clearly the whole system is heavily designed around eDRAM use, so I see no reason for them to use a narrow bus there.
 

greg400

Banned
So based on current information what's the feature set of the Wii U GPU looking like? OpenGL 3.3 capable feature wise at the most or is OpenGL 4 and above still a possibility?
 

Thraktor

Member
I'm just not really happy with this explanation since it presupposes such a high level of incompetence on part of the developers. Or are you suggesting there's an architecture or tool-related issue that prevents them from implementing it in the obvious(ly better) way?

Well, I was assuming that there is some architectural peculiarity that would require a substantial rewrite of a rendering pipeline designed for PS360 to keep transparencies entirely on the eDRAM, and such a rewrite isn't feasible within the time and resources available for launch ports.

That said, I'm little more than an interested hobbyist here, so I'd gladly accept the viewpoint of those of you with actual experience of these matters.
 

Rolf NB

Member
And what's that chip top right? Security? DSP? A Wii:)
Probably DSP+ARM core. The Wii had an ARM CPU for its IOS stuff, too. Some people speculated it might be memory, but that makes zero sense, given the size, and the hunk of eDRAM already embedded into the GPU.

Which is why I'm beginning to think they are lower than expected. Like most of the system specifications.

Though if they get too low doesn't that kind of nullify the reason for its existence?
It only has to be faster than the paltry 64bit main memory bus to be worth it.

Remember, Nintendo is a worldwide leader in penny pinching.
 

Thraktor

Member
Probably DSP+ARM core. The Wii had an ARM CPU for its IOS stuff, too. Some people speculated it might be memory, but that makes zero sense, given the size, and the hunk of eDRAM already embedded into the GPU.

I'm pretty sure the ARM core and DSP were on-die with the GPU in the Wii, so I'd expect the same for Wii U. The small die is probably EEPROM or something like that.

It only has to be faster than the paltry 64bit main memory bus to be worth it.

Remember, Nintendo is a worldwide leader in penny pinching.

If they were penny-pinching, there wouldn't be any eDRAM on there. That stuff's several orders of magnitude more expensive than something like DDR3.
 

Panajev2001a

GAF's Pleasant Genius
Both PS2's GS and PSP's GPU can, and so can later GPUs that host eDRAM. Also, Xenos technically can too, during fb resolve.

Ahhh inverting the GIF to GS bus, another potentially very nice feature of PS2, but I hear it is not a very fast operation to perform... It would have been soooo cool if that bus were able to transfer data at high speed from eDRAM to DRDRAM and let the VU's read it...
 

Donnie

Member
Probably DSP+ARM core. The Wii had an ARM CPU for its IOS stuff, too. Some people speculated it might be memory, but that makes zero sense, given the size, and the hunk of eDRAM already embedded into the GPU.

It only has to be faster than the paltry 64bit main memory bus to be worth it.

Remember, Nintendo is a worldwide leader in penny pinching.

It isn't going to be cheap to put 32MB eDRAM on chip with the GPU surely, so to then use a narrow bus would be quite mad.

GameCube's 3MB of eDRAM had a combined bus width of 896 bits; even that would give WiiU's eDRAM 60GB/s of bandwidth.

As far as this slowdown goes, well, 360's GPU only has 32GB/s of bandwidth to its eDRAM, and I'd be stunned if WiiU's eDRAM didn't have a lot more than that. However, things such as alpha blends have 256GB/s on 360, since the logic responsible for that sits inside the daughter die that houses the 10MB of eDRAM. What if WiiU's eDRAM has a lot more than 32GB/s but still significantly less than 256GB/s? Couldn't that potentially cause problems if the corresponding logic on WiiU, which normally sits inside the eDRAM die on 360, is asked to do exactly the same thing with less bandwidth?
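A quick back-of-the-envelope of the bus arithmetic behind those figures (peak numbers only, ignoring efficiency):

```python
# Peak bandwidth = bus width in bytes x clock.
def bandwidth_gbs(bus_bits, clock_mhz):
    return bus_bits / 8 * clock_mhz / 1000

print(bandwidth_gbs(896, 550))   # 61.6 - GameCube's combined 896-bit width at a 550MHz clock
print(bandwidth_gbs(1024, 550))  # 70.4 - a 1024-bit bus at the same clock
```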
 

Thraktor

Member
GameCube's 3MB eDRAM had a combined bus width of 896bit, even that would give WiiU's eDRAM 60GB/s bandwidth. 1024Bit is likely the narrowest we'll see though.

1024 bit seems to be the narrowest bus available for 32MB of Renesas 40nm eDRAM.
 

peetfeet

Member
What's with the little QR code?

It's common to attach a barcode to PCBs during manufacture so the board can be tracked throughout the manufacturing process; if it's reworked for whatever reason, that will show up in the records as well. Handy if any problems arise in manufacturing that aren't caught at the time. Traceability. Of course the reason may be different for Nintendo, but that's what they're used for at my place of work (not gaming related).
 

Rolf NB

Member
It isn't going to be cheap to put 32MB eDRAM on chip with the GPU surely, so to then use a narrow bus would be quite mad.
It's cheaper than giving it an external bus of equivalent bandwidth, however high it may be. eDRAM is not bigger than external DRAM in terms of silicon. So why would it be more expensive?
Yields? Don't think so. Going from 115mm² without eDRAM to 156mm² with eDRAM isn't going to kill yields. 45nm fabrication is mature enough to make that still a safe size. IBM said it has figured out how to integrate eDRAM and logic without compromising either.

Donnie said:
GameCube's 3MB eDRAM had a combined bus width of 896bit, even that would give WiiU's eDRAM 60GB/s bandwidth. 1024Bit is likely the narrowest we'll see though.
Gamecube's eDRAM also ran at 162MHz. But it's the Wii's 243MHz the Wii U needs to match or exceed. And it could do that at half the original bus width, with some headroom left, because it's clocked more than twice as fast.
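A quick sanity check of that, treating the widths and clocks above as given:

```python
# Wii's combined eDRAM bandwidth vs. half that bus width at a 550MHz clock.
wii_combined = 896 / 8 * 243 / 1000  # ~27.2 GB/s - matches the ~27 GB/s figure quoted earlier
half_width   = 448 / 8 * 550 / 1000  # ~30.8 GB/s - half the width still clears it, with headroom
print(wii_combined, half_width)
```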

e: after some external reading, changed eDRAM size estimate for 32MB from 60mm² to 40mm²
 

Donnie

Member
It's cheaper than giving it an external bus of equivalent bandwidth, however high it may be. eDRAM is not bigger than external DRAM in terms of silicon. So why would it be more expensive?
Yields? Don't think so. Going from 95mm² without eDRAM to 156mm² with eDRAM isn't going to kill yields. 45nm fabrication is mature enough to make that still a safe size. IBM said it has figured out how to integrate eDRAM and logic without compromising either.

Gamecube's eDRAM also ran at 162MHz. But it's the Wii's 243MHz the Wii U needs to match or exceed. And it could do that at half the original bus width, with some headroom left, because it's clocked more than twice as fast.

The eDRAM will probably only take up about 30-40mm², but that's still going to add extra cost to the GPU whether it kills yields or not; how much is the only question. It's certainly cheaper than using an external bus of the same width, but then you wouldn't need an external bus of the same width, considering you're looking at 1600MHz external memory compared to 550MHz eDRAM.

Anyway, I'm sure Nintendo's decision about how much bandwidth they required from the 32MB of eDRAM wasn't made based on Wii compatibility. I mean, while the eDRAM needs to support the requirements of Wii compatibility as a minimum, it's also designed to support WiiU's capabilities. With GameCube they obviously thought 7.6GB/s was the right kind of bandwidth for a fillrate of 648Mpixels/s (11.4GB/s for 972Mpixels/s with Wii). I'd assume they don't consider 11.4GB/s to be the right kind of bandwidth for a 4-8Gpixels/s fillrate, however (a simplistic comparison, but you get my point).
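To put rough numbers on that simplistic comparison (purely illustrative, using the figures above):

```python
# Bytes of eDRAM bandwidth per pixel of fillrate, per the GameCube/Wii figures above.
def bytes_per_pixel(bandwidth_gbs, fillrate_mpix):
    return bandwidth_gbs * 1000 / fillrate_mpix

print(bytes_per_pixel(7.6, 648))   # ~11.7 on GameCube
print(bytes_per_pixel(11.4, 972))  # ~11.7 on Wii
# Holding that ratio at 4-8 Gpixels/s would imply very roughly 47-94 GB/s of eDRAM bandwidth.
print(11.7 * 4, 11.7 * 8)
```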

Also just FYI IBM won't be fabbing the GPU and it'll likely be a 40nm process.
 

Thraktor

Member
Regarding the issue with transparencies, I was wondering if there were any exclusive titles that had any alpha-heavy scenes, which may give us some indication of what the cause might be. While not an exclusive, I'm in the middle of playing Trine 2*, and it occurred to me that this is probably the best example of an alpha-heavy Wii U game without slowdown. Given that it was released almost a year after other platforms, and the port was done by the original team, it may indicate that it's an issue that can be worked around, rather than a hard bottleneck.

Alternatively, it may be a different kind of technique I'm looking at, so perhaps blu might be able to give some input in that regard (as I understand he owns the game).

*Incidentally, this happens to be the first time I'm using Wii U's ability to use the web browser while a game's paused.

I'd still prefer all that RAM was used for games instead
 
Wow so basically the Wii U only has 1GB of memory.
I know current consoles only have about 512, but they are almost six years old.

Is 1GB going to be enough? I really doubt it. But then again I am not that technically minded.
A lot of current PCs have upwards of 12GB memory don't they?
I know it isn't a fair comparison, but still...

PS3 and Xbox 360 don't have 512MB for games; a good amount is reserved for the OS on both consoles.

The Wii U CPU is clocked lower than the Xbox 360 CPU, but it can do more operations per clock cycle and has more cache. So in the end it will be faster than the Xbox 360 CPU with optimized code.
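A rough break-even sketch of that per-clock claim, using the widely reported clocks (treat the exact figures as assumptions, not confirmed specs):

```python
# Break-even arithmetic for the per-clock claim (assumed clocks: ~1.24GHz Espresso vs 3.2GHz Xenon).
wiiu_clock_ghz = 1.24
x360_clock_ghz = 3.2
print(x360_clock_ghz / wiiu_clock_ghz)  # ~2.58x more work per clock needed just to match Xenon per core,
                                        # before counting the bigger cache or better-optimized code
```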
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Are you sure about the GS? Granted I haven't looked at the PS2 in like 7-8 years and maybe my memory is failing...I never worked on PSP so no comment there.
I haven't worked on the ps2, but I've studied it out of curiosity. There's a fixed edram BW allotted to GS tex fetches (9.6GB/s), and that's outside of the tex caches (8KB). IOW, GS can consume RTs at this rate no prob. IIRC, where it comes to uploaded textures, there is no direct path from GIF to tex caches - all tex data has to pass via edram (and it better be properly paginated there, or else).

And "technically" you are right about Xenos (otherwise you could never see anything as its video out must scan out from main memory, like the GameCube) but that is not what I was talking about (whether it could write directly to main memory using drawing operations).
While I do agree that's more of a semantics nitpick on my end, I don't agree we can equate Flipper's and Xenos' resolves just like that - the former goes over a dumb blitter, while the latter is fed back to Xenos and comprises a special-case memexport, AFAIK.
 

Panajev2001a

GAF's Pleasant Genius
I haven't worked on the ps2, but I've studied it out of curiosity. There's a fixed edram BW alloted to GS tex fetches (9.6GB/s), and that's outside of the tex caches (8KB). IOW, GS can consume RTs at this rate no prob. IIRC, where it comes to uploaded textures, there is no direct path from GIF to tex caches - all tex data has to pass via edram (and it better be properly paginated there, or else).


While I do agree that's more of a semantics nitpick on my end, I don't agree we can equate Flipper's and Xenos' resolves just like that - the former goes over a dumb blitter, while the latter is fed back to Xenos and comprises a special-case memexport, AFAIK.

Regarding PS2, the 512-bit bus width for texture fetches, like the 2048-bit read and write paths, is the width of the path between the Pixel Engines and the 8KB page buffers. Said page buffers, for example, are fed from the eDRAM macros at a much higher speed than the one which is usually quoted. At least that is what I remember reading from the PS2Linux docs about the GS (PS2Linux + sps2dev gave direct access to pretty much everything).
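For reference, the commonly quoted GS bus split those paths add up to, taking the usual ~150MHz eDRAM clock as given:

```python
# GS eDRAM split: 1024-bit read + 1024-bit write + 512-bit texture, all at ~150MHz.
clock_mhz = 150
for name, bits in (("read", 1024), ("write", 1024), ("texture", 512)):
    print(name, bits / 8 * clock_mhz / 1000, "GB/s")  # 19.2 + 19.2 + 9.6 = 48 GB/s aggregate
```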
 

Argyle

Member
I haven't worked on the ps2, but I've studied it out of curiosity. There's a fixed edram BW alloted to GS tex fetches (9.6GB/s), and that's outside of the tex caches (8KB). IOW, GS can consume RTs at this rate no prob. IIRC, where it comes to uploaded textures, there is no direct path from GIF to tex caches - all tex data has to pass via edram (and it better be properly paginated there, or else).

While I do agree that's more of a semantics nitpick on my end, I don't agree we can equate Flipper's and Xenos' resolves just like that - the former goes over a dumb blitter, while the latter is fed back to Xenos and comprises a special-case memexport, AFAIK.

Right, all rendering has to happen within the EDRAM on PS2. The video output scans out directly from the EDRAM as well. Yes, you can reverse the bus to get things like screenshots back out of the EDRAM but that is not something that you would normally do.

And for the purposes of what I said (that the video output on the console requires the front buffer to be in main memory), basically how things are done on GC vs. X360 are functionally equivalent (simply that data is copied from EDRAM to main memory - anything else falls under "implementation detail") so I think you are inventing a disagreement here :)

Anyway, we're getting off-topic. Basically, I haven't heard a satisfactory explanation of why the alpha performance in Wii U titles seems to be worse than we're all expecting. To Thraktor's point, IMHO changing the render pipeline shouldn't require a radical engine rewrite, so I'm wondering if it's just another hardware deficiency we're seeing.
 

mrklaw

MrArseFace
Why would you require a frame buffer to be in main memory to display it? Isn't that adding an extra step for no obvious reason? Is it related to the ability to scale the output based on the connected display?
 

Oblivion

Fetishing muscular manly men in skintight hosery
Random comment, but I started playing Super Mario Galaxy again on my (brand new) 1080p HDTV, and I was actually quite surprised at how great the game still looks. Amazingly, my biggest fear, that it would be full of jaggies, didn't appear to be an issue. Though sadly the game does appear a bit blurrier; a few high-res textures don't look so high-res anymore, which is kind of a shame, but not much of a big deal.

After getting a taste of 1080p titles, I would be disappointed if the next 3D Mario wasn't in 1080p w/60 fps. If the next game is designed in a similar fashion to the first two SMGs, it would probably be doable. Course, NSMBU is 720p, but then again that was handled by Nintendo's B team.
 

Argyle

Member
Why would you require a frame buffer to be in main memory to display it? Isn't that adding an extra step for no obvious reason? Is it related to the ability to scale the output based on the connected display?

The video output circuit responsible for displaying the image on your television is not connected to the EDRAM on these consoles, so the buffer needs to be copied back to main memory for it to be displayed. (Think about where the EDRAM typically lives, usually there is a dedicated bus between the GPU and EDRAM with no other connection, so unless the video output circuit is built into the GPU, there's no way for it to access the front buffer in EDRAM.) Yes, it's an extra step, but typically the increased performance of embedded memory more than offsets the costs of the copies (often called "resolves") that need to be made to main memory.
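A toy model of that flow, purely illustrative (none of these names are a real console API; the buffer sizes are arbitrary):

```python
# Toy model of the eDRAM "resolve" flow described above - illustrative only, not any console's real API.
class EDRAM:
    def alloc_render_target(self):
        return bytearray(1280 * 720 * 4)      # back buffer lives in embedded memory

class GPU:
    def draw_scene(self, target):
        target[:] = b"\x42" * len(target)     # stand-in for rendering into eDRAM
    def resolve(self, src, dest):
        dest[:len(src)] = src                 # the once-per-frame copy out to main memory
        return dest

def render_frame(gpu, edram, main_ram):
    back = edram.alloc_render_target()        # GPU can only render into embedded memory
    gpu.draw_scene(back)                      # drawing traffic stays on the wide eDRAM bus
    return gpu.resolve(back, main_ram)        # video output then scans out from main memory

frame = render_frame(GPU(), EDRAM(), bytearray(1280 * 720 * 4))
```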
 

Fredrik

Member
Random comment, but I started playing Super Mario Galaxy again on my (brand new) 1080p HDTV, and I was actually quite surprised at how great the game still looks. Amazingly, my biggest fear, that it would be full of jaggies, didn't appear to be an issue. Though sadly the game does appear to have higher amounts of blurriness. A few high res textures don't look so high res. anymore, which is kind of a shame, but not much of a big deal.

After getting a taste of 1080p titles, I would be disappointed if the next 3D Mario wasn't in 1080p w/60 fps. If the next game is designed in a similar fashion to the first two SMGs, it would probably be doable. Course, NSMBU is 720p, but then again that was handled by Nintendo's B team.
I would be perfectly happy with 720p/60fps too; you can hardly notice the difference between 720p and 1080p at a normal viewing distance anyway. 60fps is a must though.
 

QaaQer

Member
I would be perfectly happy with 720p/60fps too, you can hardly notice the difference between 720p and 1080p anyway on normal viewing distance. 60fps is a must though.

Depends on the size of your TV. Moreover, 1080p images look less jaggy than 720p images, assuming the same AA on both.

But yeah, 720p 60fps > 1080p 30fps.
 
I would be perfectly happy with 720p/60fps too, you can hardly notice the difference between 720p and 1080p anyway on normal viewing distance. 60fps is a must though.

Not true at all. As somebody who had to step down from 1080p to 720p in Skyrim because of a patch, there is a definite loss of detail and softening of the image.

After getting a taste of 1080p titles, I would be disappointed if the next 3D Mario wasn't in 1080p w/60 fps. If the next game is designed in a similar fashion to the first two SMGs, it would probably be doable. Course, NSMBU is 720p, but then again that was handled by Nintendo's B team.
The NSMBU team is far from one of Nintendo's B Teams. I would give you NSMB2, but not the main NSMB games.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Regarding the issue with transparencies, I was wondering if there were any exclusive titles that had any alpha-heavy scenes, which may give us some indication of what the cause might be. While not an exclusive, I'm in the middle of playing Trine 2*, and it occurred to me that this is probably the best example of an alpha-heavy Wii U game without slowdown. Given that it was released almost a year after other platforms, and the port was done by the original team, it may indicate that it's an issue that can be worked around, rather than a hard bottleneck.

Alternatively, it may be a different kind of technique I'm looking at, so perhaps blu might be able to give some input in that regard (as I understand he owns the game).

*Incidentally, this happens to be the first time I'm using Wii U's ability to use the web browser while a game's paused.

I'd still prefer all that RAM was used for games instead
Well, Trine 2 uses deferred shading. While that may imply nothing about translucencies (those usually come in immediate after-passes), deferred shading is traditionally BW-heavy. Lots of translucencies on top of a deferred-shaded scene can only make things heavier. Now, I'm not saying Trine is a translucencies monster, but it is definitely not shy on particles, fog, godrays, etc. At the same time, Frozenbyte have not mentioned having any problems with the porting; on the contrary, they've said the WiiU port does things the other two consoles might not have been so successful at (that was re: some of the PC-originated DLC bundled in, IIRC). So that's that about Trine.
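To give a feel for why deferred shading is BW-heavy, a purely illustrative G-buffer tally (the layout here is an assumption, not Trine 2's actual one):

```python
# Illustrative G-buffer size at 720p: written every frame, then read back during the lighting pass(es).
width, height = 1280, 720
targets = 4          # e.g. albedo, normals, depth, misc - an assumed layout
bytes_per_pixel = 4  # 32 bits per target
gbuffer_mb = width * height * targets * bytes_per_pixel / 1e6
print(gbuffer_mb)    # ~14.7 MB per frame before any translucencies are drawn on top
```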

Apropos, I'm positive I've posted somewhere on gaf how BW might not be the sole factor affecting performance during typical alpha-heavy scenarios. Since I don't feel like retyping it all again (on the WiiU ATM), I'll try looking it up, but no promises.
 

Stewox

Banned
Anyone yet figured out if it really is HDMI version 1.4?

Check the cables, maybe something's written on them.

The 2011 Iwata interview from Mercury News is a dead link on Wikipedia.

EDIT: Aha, now I remember, Iwata said the tech will be capable of stereoscopic 3D but Nintendo will not focus on it, which means it definitely has HDMI 1.4.

Well, that makes me confused again: "High Speed HDMI 1.3 cables can support all HDMI 1.4 features except for the HDMI Ethernet Channel"
 
This might be my own lack of programming knowledge coming into play, but might the issue be having to rewrite code to swap alpha channel data between main RAM and eDRAM, and figuring out when to do so? Like two loads: one from disk to main RAM, then streaming to eDRAM for bandwidth-heavy sequences?
 

wsippel

Banned
Well. Trine2 uses deferred shading. While that may imply nothing about translucencies (those usually come in immediate after-passes) deferred shading is traditionally BW-heavy. Lots of translucencies on top of a deferred shaded scene can only make things heavier. Now, I'm not saying Trine is a translucencies monster, but it is definitely not shy on particles, fog, godrays, etc. At the same time Frozenbyte have not mentioned having any problems with the porting, on the contrary - they've said the WiiU port does things the other two consoles might not have been so successful at (that was re some of the PC-originated DLC bundled in, IIRC). So that's that about Trine.

Apropos, I'm positive I've posted somewhere on gaf how BW might not be the sole factor affecting performance during typical alpha-heavy scenarios. Since I don't feel like retyping it all again (on the WiiU ATM), I'll try looking it up, but no promises.
The two most convincing Wii U games from a technological standpoint both use custom in-house engines. The teams didn't have to wait for middleware providers or be afraid to mess with something they don't really understand. And in both cases the teams are small, so the ports were done by their main engine guys, who had the opportunity to focus exclusively on this one platform. It's probably really that simple.

To make that clear, I'm not saying "lazy developers" are to blame for sub-par or shoddy ports. The workflow and the motivation at Frozenbyte or Shin'en simply aren't comparable to those of three guys at Treyarch or a few guys at WB Games Montreal hacking something together in time for launch because they were told to. Frozenbyte and Shin'en brought their A teams, because they have no other teams. They have access to everything, can change any random asset and every line of code as required, and they know their tech down to the last detail. And last, but not least: they want to deliver, because their reputation and ultimately their own money is at stake.
 