
WiiU technical discussion (serious discussions welcome)

Its always a binned part....


you have 33 watts to power the whole system, do the math.... funny you have a card that matches the specs but you overlook it because it wrecks your math. The 5550...

156mm2 = ~35mm2 [eDRAM] + xxmm2 [I/O] + 104mm2 [GPU core]

5550: 352 GFLOPS, 39W TDP = WiiU GPU! This is best case.... looks like it's 550MHz too... hmmmmm
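(That 352 GFLOPS figure checks out if you assume the 5550's usual 320 ALUs at 2 FLOPs per clock: 320 x 2 x 550MHz = 352 GFLOPS.)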

the 5550 was just a cut-down 5670. They cut out 2 compute units for chip salvaging. Unlikely they will still need it given the yields we get at 40nm now. Also the 5750M is binned but also has a max 25W TDP; normal power consumption would be about half that.

Since the 40nm process AMD is using is quite mature, it is expected they can enable the full chip and lower the wattage by lowering the voltage driving the chips.
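(Worth spelling out why voltage is the lever here: dynamic power scales roughly with frequency x voltage squared. Purely as an illustration with made-up voltages, a 39W part dropped from ~1.1V to ~0.95V at the same clock lands around 39W x (0.95/1.1)^2 ≈ 29W, before counting any other savings.)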
 

ikioi

Banned
Because you're implying that the frame buffer would be in MEM2 instead of MEM1 (the eDRAM).

No i'm not. Quote me where i implied this!

Heck i didn't even mention the words frame buffer in any of my posts.

The frame buffer will have to be in the MEM1 pool, it's the only memory pool with sufficient bandwidth to act as the frame buffer, z buffer, etc. However unlike the Xbox 360's eDRAM, it doesn't look like the Wii U's ROPs are tied directly into it via a high speed bus. The Xbox 360's ROPs could access the eDRAM at 256 gigabytes per second, the Wii U's as above looks like it's around 70 gigabytes. The Wii U's eDRAM implementation is more aligned with what MEM1 was for the Gamecube and Wii, it's a more general purpose pool of memory. Sure it will be used for the frame buffer, but it also appears to have more versatility in its application than the Xbox 360's, but at a significant bandwidth trade off.

Good luck to any developer trying to use the 10.24GBps of bandwidth from MEM2 as a frame buffer. We'd all be playing at 640x480p.
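(Rough numbers to show why, assuming a plain 32bit colour + 32bit Z target and no compression: 1280 x 720 x 8 bytes ≈ 7.4MB per full pass over the frame, so at 60fps even a single write of every pixel is ~0.44GB/s, and once you add Z read-modify-write, blending, overdraw and all the texture traffic sharing the same bus, a pool in the 10-13GB/s range disappears very quickly.)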
 

Donnie

Member
You mean like how i talked about the Wii U's MEM2 pool being on a 64bit bus. DDR3 1600 on a 64bit bus. 4 chips, 512 megabyte capacity, all on a 16 bit bus. 16bit x 4 = 64bit. 200mhz base clock x 4 x 16 = 10.2gbs per second of bandwidth. This is in comparison to the next gen consoles which appear to be using 256bit for their main memory pools. The Xbox 360 and PS3 also used 128bit bus for their GDDR3, which still provides more raw bandwidth than the Wii U's. Even with a modern memory controller there's no way the Wii U's ram is on par even in the real world vs the Xbox 360's memory.

200Mhz x 4 x 16 = 12.8GB/s, not 10.2GB/s.
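Written out in full (assuming DDR3-1600, i.e. a 200MHz base clock with a 4x multiplier and double data rate, across four 16bit chips): 200MHz x 4 x 2 = 1600MT/s, and 1600MT/s x 64 bits / 8 = 12.8GB/s.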

Or the likelihood that the GPU only has 8 ROPs due to the low memory bandwidth of the Wii U. ROPs are bandwidth dependent, there's no point adding more ROPs unless you can feed them data fast enough.
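(Ballpark to illustrate, assuming 550MHz and a 32bit colour target: 8 ROPs x 550MHz x 4 bytes ≈ 17.6GB/s of write traffic at peak, before Z traffic and blending roughly double it, so the ROP count really does have to be matched to wherever the render target actually lives.)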

WiiU doesn't render to main memory, it renders to 32MB eDRAM and you have no idea how much bandwidth it has at its disposal.

Which i expanded on by using the Xbox 360 as an example. With the Xbox 360 the ROPs were integrated into the eDRAM. Due to this configuration the ROPs could be fed data at around 256 gigabytes per second. The Wii U's eDRAM implementation does not seem to be similar to this, with its bus being considerably slower.

No WiiU's isn't similar, it's a more complete eDRAM implementation. You don't just have the ROPs with full access to eDRAM, the entire GPU has full speed access since the GPU and eDRAM are on the same die. No idea why you're claiming it's slower, where are you getting that idea?

Or the fact the CPU is the size of a single Intel ATOM core and has an incredibly low TDP. It's also based on the decade old IBM PPC 750 based architecture. Its performance is going to be anything but stellar.


There's no smoke without fire. In the case of the Wii U's cpu, the house is well and truly alight. We've seen DICE slam it, Crytek slam it, unnamed sources months ago slam it, even developers publicly comment on how it was an obstacle they had to work around.

There's also no denying the CPU is based on decade-plus-old IBM PPC 750 architecture, and has the transistor count of a single Intel Atom core. It also has an incredibly low TDP.

Jaguar CPU cores are as small or smaller than WiiU's CPU cores on the same process. Also the old "based on this old technology" argument is a very poor one.

DICE didn't slam anything, and it was even clarified that the Metro Last Light devs were working on very early dev kits (the CPU was originally 1GHz) and spent very little time with those. Obviously WiiU's CPU will be less powerful than XBox 3's CPU (8 cores vs 3 cores), that's not in question. The extent of the difference in games is what's in question.

Oh BTW since when have Crytek ever slammed WiiU's CPU?
 

USC-fan

Banned
the 5550 was just a cut-down 5670. They cut out 2 compute units for chip salvaging. Unlikely they will still need it given the yields we get at 40nm now. Also the 5750M is binned but also has a max 25W TDP; normal power consumption would be about half that.

Since the 40nm process AMD is using is quite mature, it is expected they can enable the full chip and lower the wattage by lowering the voltage driving the chips.
You cannot use a binned part in mass-produced products. The 5550 is most likely the base chip. Again they would have to improve this chip to even fit it into the wuu.
 
While ikioi's post had a negative slant, I don't see how it wasn't technical discussion. Unless the thread is only intended for effusive praise of Nintendo's design choices.

Not exactly talking about ikioi. I'm talking about the last several pages in general. The tone of the thread had been "let's assume that nintendo has designed a balanced system - how does this make sense in that context" and has turned to system wars.

It's actually kind of embarrassing.

This thread shouldn't care what Microsoft or Sony is rumored to be doing in the next two years except how it might affect cross-platform development, and even that should be aimed at being in an even tone.

EDIT: Not to say that ikioi isn't guilty of it.
 

Schnozberry

Member
You cannot use a binned part in mass-produced products. The 5550 is most likely the base chip. Again they would have to improve this chip to even fit it into the wuu.

Why would you tell someone not to use binned parts for comparison, and then assert a binned part was used as the "base" for the GPU? We know it's something in the R700 family. Beyond that, we know it has 32MB of eDRAM and is clocked at 550MHz. That's it. The rest is inference from guesswork.
 
You cannot use a binned part in mass-produced products. The 5550 is most likely the base chip. Again they would have to improve this chip to even fit it into the wuu.

They can hit way better power now than when the 5670 launched. I would rather bet on a 4670 with a shrink than a 5550 purely from the r700 perspective.
 

Chronos24

Member
What would be nice is an x-ray or micrograph to give us the real numbers and finally see what the heck is under the hood of the mystery that is the WiiU
 

NBtoaster

Member
This thread shouldn't care what Microsoft or Sony is rumored to be doing in the next two years except how it might affect cross-platform development, and even that should be aimed at being in an even tone.

EDIT: Not to say that ikioi isn't guilty of it.

Yes it should. Wii U doesn't exist in a vacuum, it will always be compared to its competitors. How the Wii U matches up to Durango and Orbis is on-topic technical discussion.
 
Wow, Nintendo engineers must be the most horrible of the lot. eDRAM for the sake of eDRAM. A small port team for BO2 couldn't have made any mistakes in the port, like letting some shader algorithm data get stored in MEM2. But, Shinen game developer comments can't be trusted, because not having a massive game under their name means they would have no idea how much bandwidth is available from the eDRAM.
 

USC-fan

Banned
Why would you tell someone not to use binned parts for comparison, and then assert a binned part was used as the "base" for the GPU? We know it's something in the R700 family. Beyond that, we know it has 32MB of eDRAM and is clocked at 550MHz. That's it. The rest is inference from guesswork.
lol @ the binned part. nothing needs to be said. Just nonsense....

We also have the die size and power usage. It's not that hard to get it right.
 

ikioi

Banned
200Mhz x 4 x 16 = 12.8GB/s, not 10.2GB/s.

true, stupid me was trying to shortcut and jump straight to the bus width for the entire pool.

Doing 200mhz x 4 x 2 x 64. Moron! For some stupid reason i was thinking the memory chips were 64bit each, forgot this isn't PC memory modules but individual chips.

200mhz x 4 x 2 x 16 = right!

So 12.8 gigabytes per second.

WiiU doesn't render to main memory, it renders to 32MB eDRAM and you have no idea how much bandwidth it has at its disposal.

No i dont. But evidence is showing its not that high.

No WiiU's isn't similar, it's a more complete eDRAM implementation. You don't just have the ROPs with full access to eDRAM, the entire GPU has full speed access since the GPU and eDRAM are on the same die. No idea why you're claiming it's slower, where are you getting that idea?

I said the exact same thing myself above. The Wii U's eDRAM is far more versatile than that of the Xbox 360's. As for claiming it's slower, that's my view and it does appear to be the commonly accepted one. I doubt it's going to be able to match the 256 gigabytes per second the Xbox 360's eDRAM could feed its ROPs.

Jaguar CPU cores are as small or smaller than WiiU's CPU cores on the same process. Also the old "based on this old technology" argument is a very poor one.

You can't compare x64 architecture to PPC for starters. And are you telling me PPC 750 is new? or even modern?

The extent of the difference in games is what's in question.

And i'd say massively.

Oh BTW since when have Crytek ever slammed WiiU's CPU?

Crytek, DICE, can't recall who. But it was one of the bigger name houses.
 

EDarkness

Member
Yes it should. Wii U doesn't exist in a vacuum, it will always be compared to its competitors. How the Wii U matches up to Durango and Orbis is on-topic technical discussion.

I don't agree. It's like looking at something just to see what it does; though it could be used as a point of comparison with something else, I was under the impression that this thread was simply about the Wii U hardware as it is. Not how it compares to anything else. As someone mentioned earlier in this thread there are still way too many details that no one really knows, even though the system has been out for a little while now. I thought this thread was about trying to figure those things out....

Degrading into system wars BS is really just people making crazy assumptions about things they really don't know a whole lot about. How is that a real discussion?
 

Donnie

Member
Its always a binned part....


you have 33 watts to power the whole system, do the math.... funny you have a card that matches the specs but you overlook it because it wrecks your math. The 5550...

156mm2 = ~35mm2 [eDRAM] + xxmm2 [I/O] + 104mm2 [GPU core]

5550: 352 GFLOPS, 39W TDP = WiiU GPU! This is best case.... looks like it's 550MHz too... hmmmmm

Forgetting for a moment that WiiU won't be only 33w (look at any console and its later games end up using more power than its launch games) there's a bit of a flaw in your comparison. The HD5550 is just a downclocked HD5670 with 80 shader units disabled, which is why it's as big as 104mm2 (it would be smaller if it was a true 320 shader GPU).

As I already said, remove the 2GB GDDR5 from a HD5670, downclock it to 550MHz, remove some PC legacy features, improve power gating, then plop it down on a more mature 40nm process and you could potentially drop that 60w down to 20w.
 

USC-fan

Banned
Forgetting for a moment that WiiU won't be only 33w (look at any console and its later games end up using more power than its launch games) there's a bit of a flaw in your comparison. The HD5550 is just a downclocked HD5670 with 80 shader units disabled, which is why it's as big as 104mm2 (it would be smaller if it was a true 320 shader GPU).

As I already said, remove the 2GB GDDR5 from a HD5670, downclock it to 550MHz, remove some PC legacy features, improve power gating, then plop it down on a more mature 40nm process and you could potentially drop that 60w down to 20w.

HUH? Did the games overclock the system? lol

We can always dream....
 

Donnie

Member
true



No i dont. But evidence is showing its not that high.



I said the exact same thing myself above. The Wii U's eDRAM is far more versatile than that of the Xbox 360's. As for claiming it's slower, that's my view and it does appear to be the commonly accepted one. I doubt it's going to be able to match the 256 gigabytes per second the Xbox 360's eDRAM could feed its ROPs.



You can't compare x64 architecture to PPC for starters. And are you telling me PPC 750 is new? or even modern?



And i'd say massively.



Crytek, DICE, can't recall who. But it was one of the bigger name houses.

Ok so you admit you have no idea what WiiU's eDRAM bandwidth is but you're happy to claim it as fact that it has a "low memory bandwidth". You're also reeling off names of developers who have apparently slammed the CPU but you don't actually know who said what. Don't you think you're being overzealous here?

Also it's far from the case to suggest that ROPs are the most bandwidth-hungry part of a GPU, shading units are usually much more so these days.

I also didn't say anything about how old or new WiiU's CPU is. Just making the point that the whole "it's based on this CPU which was around 10 years ago, therefore it's 10 years old and slow" isn't a good argument.
 

ikioi

Banned
Donnie, my general view is Nintendo built this console for parity with the Xbox 360 and PS3.

What are your thoughts on the number of ROPs, and eDRAM bandwidth?

I firmly believe it's 8, and the eDRAM is well under 100 gigabytes per second.
 

ozfunghi

Member
Donnie, my general view is Nintendo built this console for parity with the Xbox 360 and PS3.

If that's what they were going for, isn't it strange how much it resembles the concept of Durango? Surely they could have gone for other options, making ports between PS360 and WiiU easier?
 

JordanN

Banned
If that's what they were going for, isn't it strange how much it resembles the concept of Durango? Surely they could have gone for other options, making ports between PS360 and WiiU easier?

So true. Like, why toss in 2GB of RAM when the PS3/360 clearly operate on 512MB?

The logic never made sense.
 
Donnie, my general view is Nintendo built this console for parity with the Xbox 360 and PS3.

What are your thoughts on the number of ROPs, and eDRAM bandwidth?

I firmly believe it's 8, and the eDRAM is well under 100 gigabytes per second.

Shinen disagrees with you, without breaking NDA they hinted at bandwidth in XXXGB/s. Ports could just be what they appear to be "quick".
 

Margalis

Banned
200Mhz x 4 x 16 = 12.8GB/s, not 10.2GB/s.
Doing 200mhz x 4 x 2 x 64. Moron! For some stupid reason i was thinking the memory chips were 64bit each, forgot this isn't PC memory modules but individual chips.

200mhz x 4 x 2 x 16 = right!

So 12.8 gigabytes per second.

So...

200Mhz x 4 x 16 = 12.8 GB/s
200mhz x 4 x 2 x 16 = 12.8 GB/s
200mhz x 4 x 2 x 64 = 10.2 GB/s

Looks legit.

You really need to slow down - what you are posting is literally complete nonsense.

I firmly believe it's 8, and the eDRAM is well under 100 gigabytes per second.

Nobody cares what you "believe", they care about actual information and arguments.
 

Donnie

Member
Donnie, my general view is Nintendo built this console for parity with the Xbox 360 and PS3.

What are your thoughts on the number of ROPs, and eDRAM bandwidth?

I firmly believe it's 8, and the eDRAM is well under 100 gigabytes per second.

I think, and even hope, they've only gone with 8 ROPs, since I don't believe pixel pushing was the limit in the slightest this gen. I say hope because more ROPs take up more space that could be used for more useful logic. I don't know what the bandwidth of the eDRAM is, it's one of the things I don't have a firm and specific opinion on yet. But I wouldn't be surprised at all if it's less than 256GB/s. I would however be very surprised if it isn't significantly higher bandwidth than 360's GPU to eDRAM connection.
 
Honestly, what I really want to know, and what affects my decision in the future, is if this thing is capable of 4xMSAA on a game like Zelda at 720p native 60FPS. That is about all. If not then well I don't think this system is for me.
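(For what it's worth, the buffer itself would at least fit: assuming 32bit colour and 32bit Z per sample, 1280 x 720 x 4 samples x 8 bytes ≈ 28MB, which squeezes into 32MB of eDRAM without the tiling the 360's 10MB pool forced. Whether the rest of the hardware holds 60fps is a separate question.)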
 

JordanN

Banned
Honestly, what I really want to know, and what affects my decision in the future, is if this thing is capable of 4xMSAA on a game like Zelda at 720p native 60FPS. That is about all. If not then well I don't think this system is for me.
Can it do it? Sure. Would Nintendo want to do it? Who knows.

3D Zelda games have always sacrificed frame rate for more effects on screen.
 

AzaK

Member
I have serious doubts about the Wii U's capabilities to run down-ports from the next gen Xbox and Playstation. Heck i have serious doubts the Wii U is much, if at all, more capable than the Xbox 360.

The Wii U's eDRAM appears to have significantly less bandwidth than the Xbox 360's. This is supported by the fact no multi platform game released to date, eg Mass Effect 3 and Assassin's Creed 3, features any improvement in AA and AF on the Wii U. AA and AF are piss easy to tack on at the end of production, yet we don't see the Wii U offering any improvement in this area despite the significant increase in eDRAM capacity. Slow bandwidth seems the only plausible explanation.
Budgets likely did not dictate any special Wii U considerations over the GamePad.

Given the Wii U's slow MEM2 bandwidth, it's also a fair assumption to say it's unlikely the GPU has more than 8 ROPs. ROPs are heavily dependent on bandwidth, there would be absolutely no point going with any more than 8 ROPs on a MEM2 bus as slow as the Wii U's.
Not when you put all your assets in eDRAM and CPU Cache!

The Xbox 360's architecture had the ROPs tied to its eDRAM via a high speed bus, that also doesn't seem to be the case with the Wii U. The Wii U's eDRAM seems to be implemented differently and not tied into the ROPs, nor is the Wii U's eDRAM capable of offering bandwidth in the same ball park as that in the Xbox 360. If anything the Wii U's ROPs may be worse in performance than those in the Xbox 360 due to the shit bandwidth. Either way 8 ROPs for a modern day console is terrible.

Then there's the CPU, which simply put is terrible at more things than it's competent at. Even the things it's competent at are not sufficient for a modern HD console and games. SIMD, MAD, best of luck.
But it does MADD doesn't it? It's the x86 architecture that doesn't.

To finish, FUCK YOU NINTENDO. Can't believe they've yet again delivered us a new console whose performance is 7 years in the past. It would have cost them bugger all to have delivered a console within the same ball park as the next gen Xbox and Playstation with hardware that made down porting a very real and easily achievable process. Heck it would have cost them only a matter of dollars to increase the MEM2 bus to 128bit or even 256bit, and dollars more to get a non munted pathetic CPU.

I agree. Nintendo were cheap fuckers and could have bumped it up a notch, even in memory bandwidth for a small cost that I'd have happily paid.
 
Get back on track post

The R700 architecture that the Wii U GPU is thought to be based on supports much of the DirectX 11 feature set. You can see this by looking at its support for OpenGL 4.x extensions which are equivalent in functionality.

For those who don't know, when a new version of OpenGL is released it is also broken down into a set of extensions. This is so that hardware that is not capable of supporting the full new specification can still allow access to those pieces of it that it can support. The OpenGL 4.x series of APIs targets the same hardware as DirectX 11.x does.
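To make that concrete, here's roughly what checking for one of those pieces looks like at run time. This is a minimal sketch only: it assumes a GL 3.0+ context is already current, that a loader like GLEW has been initialised (glewInit() after context creation), and has_gl_extension is just a name made up for this post.

#include <GL/glew.h>
#include <string.h>

/* Returns 1 if the driver advertises the named extension, 0 otherwise. */
int has_gl_extension(const char *name)
{
    GLint i, count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);   /* how many extensions the driver exposes */
    for (i = 0; i < count; i++) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

So an engine would call something like has_gl_extension("GL_ARB_tessellation_shader") and fall back to a lower-spec path when it returns 0, which is exactly the situation being described for R700-class hardware.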

Looking at the R700 series (and the R600 for comparison) we get the following:
8 of the 13 OpenGL 4.0 extensions are supported. (4 for R600)
5 of the 6 OpenGL 4.1 extensions are supported. (also 5 for R600)
8 of the 12 OpenGL 4.2 extensions are supported. (also 8 for R600)
AMD does not ship OpenGL 4.3 drivers yet so I cannot compare that version.

Out of 31 extensions, 21 are supported and 10 are not.

The 10 that are not supported are:
[4.0] ARB_tessellation_shader
[4.0] ARB_shader_subroutine
[4.0] ARB_gpu_shader5
[4.0] ARB_gpu_shader_fp64
[4.0] ARB_draw_indirect
[4.1] ARB_vertex_attrib_64bit
[4.2] ARB_base_instance
[4.2] ARB_shader_image_load_store
[4.2] ARB_shader_atomic_counters
[4.2] ARB_texture_compression_bptc

Let's go through them.

ARB_tessellation_shader: The R700 does support tessellation (with an improved tessellation unit over the RV670). It is not as programmable as the DX11 and GL4 specs require however.
ARB_shader_subroutine: Adds support for calling functions indirectly (function pointers).
ARB_gpu_shader5: This is a bit of a grab bag. It's hard to say which features are preventing the R700 from supporting it. I would guess some of the indexing abilities it adds, based on the other extensions that are unsupported.
ARB_gpu_shader_fp64, ARB_vertex_attrib_64bit: Double-precision floating point support. This is probably more important to non-game compute applications.
ARB_draw_indirect: This allows the parameters of a draw call to be sourced from buffer memory (which the GPU itself may have written), avoiding a round trip to the CPU.
ARB_base_instance: Used in instanced rendering. It allows an offset to be applied to attributes that do not change per-vertex. (i.e. it does a little extra math when looking up the vertex data to grab)
ARB_shader_image_load_store: Allows shaders flexible read/write access to textures.
ARB_shader_atomic_counters: Allows shaders read/write access to buffer objects containing atomic counters.
ARB_texture_compression_bptc: Improved texture compression formats, including some support for HDR textures.

If you don't want to read the whole post, just read the next bit:
Interestingly, since GPGPU has been directly called out as a feature of the Wii U's hardware by Nintendo, most of the missing features seem to be related to GPGPU functionality. Aside from tessellation and the new texture compression format, you have missing support for double-precision floating point and several extensions involving memory access, either allowing more flexible read/write access or more flexible addressing.

The Evergreen family that succeeded the R700 is an incremental update, so it would be interesting to find out if any of the GPGPU functionality made its way into the Wii U GPU. Or would the eDRAM imply that the R700's memory had to be changed anyway?

Thank you for posting this information. It is a good summary of the feature-set differences between the DirectX and OpenGL versions.
 

ikioi

Banned
Ok so you admit you have no idea what WiiU's eDRAM bandwidth is but you're happy to claim it as fact that it has a "low memory bandwidth".

Happy to admit i do not know the Wii U's eDRAM bandwidth.

To explain how i came to form my view about its likely bandwidth:

We know the Wii U's eDRAM is integrated directly into the GPU die. Do you agree with this?

Also to clarify. When i say the eDRAM is slower than the Xbox 360's, i'm only referring to bandwidth between the eDRAM and the ROPs.

Why do i believe the above?

The Xbox 360's eDRAM and ROPs were tied together into one piece of silicon. Being on the same die this allowed Microsoft to create an incredibly high bandwidth link between the eDRAM and ROPs directly. The bandwidth was around 256 gigabytes per second between the eDRAM and ROPs. Even by today's standards that's crazy. The logic behind this was to allow easy frame and z buffering, anti-aliasing, and a few other things. That high bandwidth to the ROPs allowed these tasks to be done almost penalty-free for developers.

Outside of the ROPs however, the rest of Xenos can't achieve anywhere near that level of bandwidth from the eDRAM. From memory the bandwidth from eDRAM to Xenos is around 32 gigabytes per second. The eDRAM in the Xbox 360 really was geared for the aforementioned frame buffering, z buffering, aliasing etc. It had very limited use and versatility and was geared primarily to feed the ROPs.

The Wii U's eDRAM however seems to be more general purpose, in line with the MEM1 pools in the Wii and Gamecube. The Wii U's eDRAM is not just going to be for frame buffer, z buffer, etc, thus why it's 3.2x the size of the eDRAM in the Xbox 360. It's also designed to provide developers with a small-capacity but incredibly high-bandwidth and low-latency memory pool to do whatever they want with. Developers can use the remaining memory left over to store anything from textures, GPGPU data, through to a cache to reduce I/O to the MEM2 pool and CPU.

Why do i believe the bus is sub 100 gigabytes per second?

With Nintendo opting to integrate the Wii U's eDRAM directly into the GPU's die, that fabrication process is going to be a lot more complex and expensive than the Xbox 360's Xenos GPU. Heck it's complex by any standard.

I cannot see AMD or Nintendo opting to split the eDRAM pool to provide one portion with a very high >200GB/s bus to the ROPs, and another slower memory bus to the remainder of the GPU. That would be an incredibly freaking expensive and complex bus design for any chip let alone a GPU. As such i believe it would be far cheaper and easier for Nintendo and AMD to integrate a single bus between the eDRAM and GPU. That bus is shared between the ROPs and the remainder of the GPU. This bus would be faster than the 32 gigabytes per second of Xenos to its eDRAM, but slower than the 256 gigabytes per second of the Xbox 360's ROPs - eDRAM. The Wii U's eDRAM speed lies somewhere in the middle.

So that's my logic.

Wii U eDRAM is not split, it's a single 32 megabyte block. It acts more like the Gamecube and Wii's MEM1 with developers being able to store whatever they want in the available space left after z buffering and frame buffering. The Wii U's eDRAM has a slower data throughput to its ROPs than the Xbox 360's, but on the reverse the eDRAM is larger, more versatile, and throughput between the remainder of the GPU and eDRAM is significantly higher than both the Xbox 360's eDRAM and its GDDR3.
 
Honestly, what I really want to know, and what affects my decision in the future, is if this thing is capable of 4xMSAA on a game like Zelda at 720p native 60FPS. That is about all. If not then well I don't think this system is for me.

Don't think they will use MSAA but 720p great looking games are very possible. 60FPS is unlikely as well.

You can probably get halo 4 level graphics at 720p 30 fps if nintendo pushed hard enough after a few years. The cpu will probably limit some things, but for pretty-looking graphics the Wii U should be able to do it.
 
Don't think they will use MSAA but 720p great looking games are very possible. 60FPS is unlikely as well.

You can probably get halo 4 level graphics at 720p 30 fps if nintendo pushed hard enough after a few years. The cpu will probably limit some things, but for pretty-looking graphics the Wii U should be able to do it.

This is an incredibly huge leap in logic. We don't know significant portions of the HW and you're already making hyperbolic statements about the system...i'd say to wait...
 
This is an incredibly huge leap in logic. We don't know significant portions of the HW and you're already making hyperbolic statements about the system...i'd say to wait...

It's pretty obvious the gpu is more powerful than the 360. At minimum it should be able to do what the 360 can in terms of graphics. This will of course depend on developer support and how much money and time they take to make the games.
 

ikioi

Banned
It's really hard to gauge the Wii U's performance.

We don't know the GPU's specs well enough. ALUs, ROPs, eDRAM bandwidth, etc. Let alone whether it's entirely based on the R700 series, or a hybrid of R700 and more modern AMD tech.

The CPU does seem weak, however that has to be put into context. It's not like AMD's Jaguar is powerful by any stretch of the imagination. It seems all three next gen consoles are going with more general purpose, lower clocked CPUs, with a focus on high instructions per clock, and moving the heavy SIMD and other work that Cell's SPEs and Xenon's cores did to the GPUs.

The Wii U's MEM2 bandwidth also seems weak. Only 64bit and 12.8 gigabytes per second. But that can be offset somewhat by increased CPU cache, larger general purpose eDRAM for the GPU, and more modern architecture. It's possible Nintendo have been able to engineer a console that has significantly reduced dependency on its MEM2 pool vs the Xbox 360 and PS3, and a heavy focus on using cache and eDRAM to limit reads and writes to I/O. That said the MEM2 data throughput is still not good, it wouldn't have cost Nintendo much to go for a 128 or even 256bit bus.
 
If that's what they were going for, isn't it strange how much it resembles the concept of Durango? Surely they could have gone for other options, making ports between PS360 and WiiU easier?
The "concept" of Durango isn't really that far off the concept of the 360 in terms of memory structure, as far as I can tell.

The Wii U was billed as being easy to port from the 360 in early leaks and PR iirc. It really isn't a stretch to imagine ~360 was the target performance level, within a very small profile and power envelope.
 
The "concept" of Durango isn't really that far off the concept of the 360 in terms of memory structure, as far as I can tell.

The Wii U was billed as being easy to port from the 360 in early leaks and PR iirc.


As with the Gamecube, Wii, and Wii U. Even the PS2 and its eDRAM perhaps? The biggest changes across the generations are the speed and size differences between the fast memory bank and the slow one. I suppose the Xbox, PS3 and PS4 are the odder ones.
 
It's pretty obvious the gpu is more powerful than the 360. At minimum it should be able to do what the 360 can in terms of graphics. This will of course depend on developer support and how much money and time they take to make the games.

I know. I meant more you seemed to be lowballing it. Expecting only Halo 4 at 720p 30fps...we know the gpu is much stronger. I think we can expect more. Of course, like I said, there are still too many unknowns
 

Margalis

Banned
Also to clarify. When i say the eDRAM is slower than the Xbox 360's, i'm only referring to bandwidth between the eDRAM and the ROPs.

The connection between the 360 eDRAM parent and daughter die is 32GB/s, the connection between parent and main memory is 22 GB/s - which you have to go through if you are going to do something useful with what you wrote to eDRAM.

The GPU cannot read from eDRAM, the read bandwidth there is a whopping zero.
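(To put a number on the resolve cost, assuming a plain 1280x720 32bit colour buffer: that's roughly 3.7MB per resolve, a fraction of a millisecond across the 22GB/s link, but do it several times a frame for render-to-texture effects and it adds up, and with 4xMSAA the 10MB daughter die can't even hold a whole 720p frame, so you have to tile and resolve in pieces.)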

Calling it "fast" and "high bandwidth" and the WiiU implementation you suggest slow and low-bandwidth is both an apples to oranges comparison and a purely semantic argument.

I'm pretty sure nobody has used RAM "slowness" to mean "throughput to ROPs" ( you mean "from" ROPS?) until your elaborate backtrack just now.

Before this elaborate clarification you said:

The eDRAM on the GPU seems to be on a slower bus than that of the Xbox 360's.

Now you seem to be saying that you meant write-only bandwidth from ROPs instead, which I'm pretty sure cannot really be called a "bus" at all, certainly not more so than two other relevant actual busses.
 
I know. I meant more you seemed to be lowballing it. Expecting only Halo 4 at 720p 30fps...we know the gpu is much stronger. I think we can expect more. Of course, like I said, there are still too many unknowns

I said it was doable, I did not say it was the absolute max. And I think halo 4 looks very amazing for hardware of 360's caliber.
 
Honestly, and I do love Nintendo, but 30fps isn't going to cut it for me. There is simply no going back in certain games, when you play 60fps all the time.

If they can get 720p 60fps and 2xMSAA out of Zelda I would bite. If not then what the hell is the point. I am NOT repeating another gen of shitty IQ unless said system is cheap as hell.

Seriously, No gamepad, pack in pro controller, and beef specs for this system by 2x and I would have been there launch day. Sigh.
 

ikioi

Banned
The connection between the 360 eDRAM parent and daughter die is 32GB/s, the connection between parent and main memory is 22 GB/s - which you have to go through if you are going to do something useful with what you wrote to eDRAM.

Correct.

eDRAM - GDDR3 - then back. That's the process required.


Calling it "fast" and "high bandwidth" and the WiiU implementation you suggest slow and low-bandwidth is both an apples to oranges comparison and a purely semantic argument.

I'm pretty sure nobody has used RAM "slowness" to mean "throughput to ROPs" until your elaborate backtrack just now.

I'll accept that criticism. I really did fail to clarify wtf i was saying.

So yep i'll cop that.
 

Margalis

Banned
I'll accept that criticism. I really did fail to clarify wtf i was saying.

So yep i'll cop that.

Yeah no, sorry. The problem isn't that you didn't "clarify" what you were saying, the problem is that what you said was nonsense, then in an attempt to justify it "clarified" it by completely changing the meaning of what you said, including changing the meaning of specific technical jargon in order to save face and justify your console warrior ranting. And the final point you ended up at has essentially nothing at all to do with the original point in your rant.

The fact that 360 can do super fast ROP stuff is cool. Yay. Too bad resolving to main memory is itself a huge bottleneck in part because the same design that gives you "super high bandwidth mumble mumble" involves passing through two lower-bandwidth bottlenecks.

You say you find a WiiU developer who hasn't complained about this or that? Find a 360 developer who hasn't complained about resolving textures to main memory, tiling, etc.

You accuse people of being "fanbois" but looking at only the positives of one system and only the negatives of another sure looks like fanboyism to me.

An actual analysis shows that there are a lot of pros and cons to the 360 setup, not the "lolz WiiU slow eDRAM 360 fast eDRAM lolz" nonsense you are putting out.

Your posts are full of errors of all kinds, from basic repeated math errors to constant typos and improper capitalization. You use words one way then in your next post use them a different way - neither being correct. You are obviously hastily typing up rants then later trying to justify them while filling your posts with hyperbole and polemics. It's tiresome to read and you aren't convincing anyone of anything other than that you are for some reason very emotionally invested in this.

If you want to try your hand at honest analysis feel free. If you want to continue to produce silly nonsense wrapped in shoddy technical arguments featuring the constant failure to correctly multiply numbers together feel free not to bother. Please.

Learn the difference between analysis and advocacy.
 