
WiiU technical discussion (serious discussions welcome)

USC-fan

Banned
Well, it's your word versus AN ACTUAL WII U DEVELOPER'S. I wonder whose carries more weight (well, really, any weight at all)?

And this is how things no one said get taken as "fact." Like the 600 GFLOPS GPU...

It's very unlikely the Wii U's eDRAM bandwidth is anywhere near that, based on the bandwidth-limit problems we've seen in the games released so far.
 
I doubt integrating the eDRAM into the GPU would have been cheaper than doubling the MEM2 bus.

eDRAM is quite expensive and affects yields. I can't see how this could have been a cheaper option for Nintendo versus either no eDRAM and a faster MEM2 bus, or a smaller pool of eDRAM off the GPU die.

Why do you think it was cheaper?

I guess I wasn't clear. My apologies. "Cheaper" was in reference to the eDRAM running on a 1024-bit internal bus vs. a 4096-bit bus, which would put bandwidth well into the xxxGB/s range. The inclusion of eDRAM in the first place is likely due to the decision to make the console BC with Wii games.
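(As a rough illustration of what those widths would mean: below is a back-of-envelope calculation assuming, purely for the sake of the example, that the eDRAM runs at the 550 MHz commonly reported for the Wii U's GPU and moves one word per cycle.)

```python
# Back-of-envelope only: bus widths from the post above; the 550 MHz clock and
# one transfer per cycle are assumptions for illustration, not confirmed specs.
def edram_bandwidth_gb_s(bus_width_bits, clock_hz):
    return bus_width_bits / 8 * clock_hz / 1e9

for width in (1024, 4096):
    print(f"{width}-bit @ 550 MHz: {edram_bandwidth_gb_s(width, 550e6):.1f} GB/s")
# 1024-bit @ 550 MHz: 70.4 GB/s
# 4096-bit @ 550 MHz: 281.6 GB/s
```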
 

Donnie

Member
I really don't think they've included eDRAM just for Wii backwards compatibility. eDRAM just suits Nintendo's design philosophy and the way they like their hardware to run: efficient, predictable performance.

Almost as pointless as including 2 GB of RAM but then crippling it with a 64-bit bus?

Who says that bus cripples the 2GB of RAM?
 

ikioi

Banned
I guess I wasn't clear. My apologies. "Cheaper" was in reference to the eDRAM running on a 1024-bit internal bus vs. a 4096-bit bus

How do you know this?

AFAIK we can only speculate and guess at what the Wii U's eDRAM bus and bandwidth are.

That said, I too agree the Wii U's eDRAM bus likely doesn't have throughput in the many hundreds of gigabytes per second. But it is possible. Renesas, the company doing the Wii U GPU fabrication, does have the ability to make eDRAM in the hundreds of gigabytes per second range. But I believe it's not this fast, due to the complexity and cost of building a bus into the GPU to handle such raw throughput. That could add a fair bit to the engineering and cost of the GPU fabrication. Still, it's completely possible, and I'm no expert on fab tech and costs.

That said, the Wii U's eDRAM overall should still provide higher bandwidth and lower latency than the Xbox 360's eDRAM. The Xbox 360's eDRAM was primarily used for frame and Z-buffering and ROP-related tasks. External to the ROPs, the bandwidth from the eDRAM to the GPU was in the order of 32 gigabytes per second, and in many cases a lot less than that. As previously mentioned, in many cases the Xbox 360 needed to copy data out of the eDRAM and write it to the GDDR3 pool before the Xenon or Xenos could read it. So sub 10.8 gigabytes per second, and with pretty high latency. It wouldn't be hard for the Wii U's eDRAM to trounce this.


which would put bandwidth well into the xxxGB/s range. The inclusion of eDRAM in the first place is likely due to the decision to make the console BC with Wii games.

I don't agree with this.

IMHO the Wii U's eDRAM is there both to act as a frame buffer, Z-buffer, etc, and to provide developers with a high-bandwidth, incredibly low-latency memory pool. Again, just my opinion, but I believe it's intended to be used somewhat like the GameCube's MEM1 pool.

Nintendo wouldn't have had to add eDRAM for backwards compatibility, let alone 32 megabytes of it. To emulate the Wii they could have just remapped the memory back to the DDR3 pool.
 
If I'm reading the article I posted correctly, SRAM requires more transistors than DRAM. So with DRAM you actually get better yields than if you were using what MS used. The GPU would also appear to be at an appropriate process node to make this possible.

Let's not forget, GC and Wii used 1T-SRAM, which would cost more at 32MB, as well as requiring more power and transistors.

Devs didn't take advantage of anything in these ports; they had double the amount of memory available, but no higher-resolution textures. Every port, in my opinion, is a straight port, with a focus on GamePad functionality.
 

ikioi

Banned
"8 ROPs on a MEM2 bus"



Just admit you said it, complete the backpedal, and move on.

This is what I said. Let's quote the entire paragraph rather than your little selective quote.

Given the Wii U's slow MEM2 bandwidth, it's also a fair assumption to say it's unlikely the GPU has more than 8 ROPs. ROPs are heavily dependent on bandwidth; there would be absolutely no point going with any more than 8 ROPs on a MEM2 bus as slow as the Wii U's. The Xbox 360's architecture had the ROPs tied to its eDRAM via a high-speed bus; that doesn't seem to be the case with the Wii U. The Wii U's eDRAM seems to be implemented differently and not tied into the ROPs, nor is the Wii U's eDRAM capable of offering bandwidth in the same ballpark as that in the Xbox 360. If anything the Wii U's ROPs may be worse in performance than those in the Xbox 360 due to the shit bandwidth. Either way, 8 ROPs for a modern-day console is terrible.

Is that line I've underlined wrong?

Could 12.8 gigabytes per second happily fulfil the bandwidth requirements of 8 AMD R700-based ROPs?

No it couldn't.

Does that sentence state the 8 ROPs are directly and physically tied in to the MEM2 bus?

No it doesn't.


That's why I then went on to talk about the eDRAM in the Xbox 360 and its relationship with the ROPs, and compared that to how I believe the Wii U's eDRAM is structured. It's obvious to even blind Freddy that 12.8 gigabytes per second isn't going to be enough. Even with the eDRAM, for certain ROP functions you're going to have to use main memory. With MEM2 bandwidth as slow as the Wii U's, ROPs are going to be very limited in performance.

In short, piss off with your selective quoting and intentional misinterpretation of my post.

Edit: Also, you are a fool if you believe the Xbox 360's ROPs didn't use its GDDR3 memory pool for ROP-related tasks. THEY DID. The eDRAM in the Xbox 360 wasn't sufficient to store everything you'd need to feed the ROPs.

http://www.beyond3d.com/content/articles/4/3

this leaves the system memory bus free for texture and vertex data fetches which are both read only and are therefore highly efficient

Which again highlights why the Wii U's MEM2 bandwidth is important even for ROP performance. And why the low bandwidth of the Wii U's MEM2 pool strongly suggests this console has similar ROP performance to the Xbox 360. MEM2 performance is still a limiting factor in ROP performance.
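(For a sense of scale, here is a hedged worst-case sketch of what 8 ROPs could demand if every blended pixel had to go through MEM2. The 550 MHz clock and the per-pixel byte counts are illustrative assumptions, not confirmed Wii U figures.)

```python
# Worst-case sketch: 8 ROPs writing one blended pixel each per clock, with
# 4 B colour read + 4 B colour write + 4 B Z read + 4 B Z write per pixel.
rops = 8
clock_hz = 550e6            # assumed GPU clock for illustration
bytes_per_pixel = 4 + 4 + 4 + 4

demand_gb_s = rops * clock_hz * bytes_per_pixel / 1e9
print(f"Peak ROP traffic: ~{demand_gb_s:.1f} GB/s vs 12.8 GB/s of MEM2")
# Peak ROP traffic: ~70.4 GB/s vs 12.8 GB/s of MEM2
```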
 
How do you know this?

AFAIK we can only speculate and guess at what the Wii U's eDRAM bus and bandwidth are.

I thought that it was clear that I was speculating. But as has been pointed out to me by other more knowledgeable folk, the alpha issues, lack of MSAA, and 720p (sometimes lower) resolutions all need an explanation. "Low" eDRAM bandwidth and 8 ROPs seem like good ones, so I'm going with it until proven otherwise.

ikioi said:
I don't agree with this.

IMHO the Wii U's eDRAM is there both to act as a frame buffer, Z-buffer, etc, and to provide developers with a high-bandwidth, incredibly low-latency memory pool. Again, just my opinion, but I believe it's intended to be used somewhat like the GameCube's MEM1 pool.

Nintendo wouldn't have had to add eDRAM for backwards compatibility, let alone 32 megabytes of it. To emulate the Wii they could have just remapped the memory back to the DDR3 pool.

Yes, it does all those things. Although if we're going to go on comparing it to the 24 MB of 1T-SRAM in GCN, it bears noting that 24/32 MB of RAM was worth a lot more to devs back in those days. But anyway, I'm talking about way back when Nintendo were first conceptualizing the Wii U hardware. They likely decided early on that their next console should be 100% BC with Wii software and thus set out to achieve this in the context of a console which would rival and perhaps exceed current gen in hardware capabilities. They needed to do all this whilst still profiting and including a 6" touch screen on the controller.

I'm not saying anything about the "balance" or even the performance of the console, because by all accounts, the components work together nicely and the next round of games do look promising graphically. However, as a Nintendo fan, I have no problem admitting that they like things "cheap." I remember back when it was revealed that the controller would only boast a single-touch resistive screen. Many, including myself, argued that Nintendo made this choice because the technology offered higher precision and was better for drawing games. Then, in an interview, Iwata came out and said that the lack of multitouch was simply because single-touch screens were "cheap." He outright said it! So I no longer look for alternate explanations for why Nintendo make certain decisions. They are a company that needs to profit, and it is often necessary to balance performance with economics.

Edit: And oh, are you just regurgitating what Grall said on Beyond3D Re: Wii BC? Off chip DDR3 doesn't have nearly the latency to emulate Wii's embedded texture cache.
 
I thought that it was clear that I was speculating. But as has been pointed out to me by other more knowledgeable folk, the alpha issues, lack of MSAA, and 720p (sometimes lower) resolutions all need an explanation. "Low" eDRAM bandwidth and 8 ROPs seem like good ones, so I'm going with it until proven otherwise.



Yes, it does all those things. Although if we're going to go on comparing it to the 24 MB of 1T-SRAM in GCN, it bears noting that 24/32 MB of RAM was worth a lot more to devs back in those days. But anyway, I'm talking about way back when Nintendo were first conceptualizing the Wii U hardware. They likely decided early on that their next console should be 100% BC with Wii software and thus set out to achieve this in the context of a console which would rival and perhaps exceed current gen in hardware capabilities. They needed to do all this whilst still profiting and including a 6" touch screen on the controller.

I'm not saying anything about the "balance" or even the performance of the console, because by all accounts, the components work together nicely and the next round of games do look promising graphically. However, as a Nintendo fan, I have no problem admitting that they like things "cheap." I remember back when it was revealed that the controller would only boast a single-touch resistive screen. Many, including myself, argued that Nintendo made this choice because the technology offered higher precision and was better for drawing games. Then, in an interview, Iwata came out and said that the lack of multitouch was simply because single-touch screens were "cheap." He outright said it! So I no longer look for alternate explanations for why Nintendo make certain decisions. They are a company that needs to profit, and it is often necessary to balance performance with economics.

Edit: And oh, are you just regurgitating what Grall said on Beyond3D Re: Wii BC? Off chip DDR3 doesn't have nearly the latency to emulate Wii's embedded texture cache.

Why not use 1T-SRAM again, compared to eDRAM?
 

ikioi

Banned
Agree 100%

The only reason I can see behind Nintendo going with a paltry 64-bit bus for the DDR3 pool is because it's the cheapest, and can be shrunk down more easily in future redesigns. I don't think they went with that bus for any reason other than financial savings.
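(For reference, the 12.8 GB/s figure used throughout this thread falls straight out of that 64-bit bus if the DDR3 is the commonly assumed DDR3-1600 speed grade.)

```python
# MEM2 peak bandwidth from bus width and data rate.
bus_width_bits = 64
transfers_per_s = 1600e6    # DDR3-1600, i.e. 1600 MT/s (assumed speed grade)

bandwidth_gb_s = bus_width_bits / 8 * transfers_per_s / 1e9
print(f"MEM2 peak bandwidth: {bandwidth_gb_s:.1f} GB/s")   # 12.8 GB/s
```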

32 megabytes of eDRAM, even if it's at >200 gigabytes per second, doesn't have enough physical capacity to feed the ROPs, let alone the rest of the GPU. Thus the system's going to have to fall back on the MEM2 DDR3 pool to supplement it.

I can't see how Nintendo could have gotten around the slow MEM2 bus. The eDRAM helps, but most of its capacity and a fair whack of bandwidth would be used up just on basic stuff like frame and Z-buffering, along with the vertex and texture work. I can't see the eDRAM having much, if any, free space left over after doing the aforementioned tasks.
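(To put a rough number on how much of the 32 MB basic buffers alone would eat, here's a footprint sketch for a 720p setup with 32-bit colour and 32-bit depth; the exact formats and buffer count are illustrative assumptions.)

```python
# Approximate render-target footprint at 1280x720, 4 bytes per pixel per surface.
width, height, bytes_per_pixel = 1280, 720, 4
surface_mb = width * height * bytes_per_pixel / (1024 * 1024)

# Front colour + back colour + depth/stencil: three full-resolution surfaces.
total_mb = 3 * surface_mb
print(f"One surface: {surface_mb:.1f} MB; three surfaces: {total_mb:.1f} MB of the 32 MB eDRAM")
# One surface: 3.5 MB; three surfaces: 10.5 MB of the 32 MB eDRAM
```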

AMD texture compression, a better memory controller, and more CPU cache can all help reduce MEM2 I/O. But for graphics-related tasks you still want and need raw throughput: high bandwidth.

Can there be anything under the hood in that system that offsets the slow MEM2 significantly enough to not just bring it up to performance parity with the Xbox 360 and PS3, but put it ahead?
 

ikioi

Banned
Edit: And oh, are you just regurgitating what Grall said on Beyond3D Re: Wii BC? Off chip DDR3 doesn't have nearly the latency to emulate Wii's embedded texture cache.

What was the latency of the Wii's texture cache?

Also, I was under the impression the Wii mode was not 100% hardware emulated?
 

ikioi

Banned
Different games with different demands.

Correct me if I'm wrong, but isn't W101 due to release in Q1 this year?

As such, wouldn't W101 be at a much later stage of development than Monolith's new title? I don't think we should get too caught up in what games look like early in their development cycle. Comparing Monolith's game, which is still in early development, to a game that's close to release isn't fair. Let alone judging a game's graphics while it's still in development.
 

Schnozberry

Member
Correct me if I'm wrong, but isn't W101 due to release in Q1 this year?

As such, wouldn't W101 be at a much later stage of development than Monolith's new title? I don't think we should get too caught up in what games look like early in their development cycle. Comparing Monolith's game, which is still in early development, to a game that's close to release isn't fair. Let alone judging a game's graphics while it's still in development.

It seems to look better on the television in the vids posted to the Wii U eShop. Video compression? I don't think my eyes are going bad.
 

NBtoaster

Member
Correct me if I'm wrong, but isn't W101 due to release in Q1 this year?

As such, wouldn't W101 be at a much later stage of development than Monolith's new title? I don't think we should get too caught up in what games look like early in their development cycle. Comparing Monolith's game, which is still in early development, to a game that's close to release isn't fair. Let alone judging a game's graphics while it's still in development.

That can be a factor too. But the Monolith title seems to use many more transparencies, with the battle effects, grass, and misty parts of the environment, and thus may need to downsize more. 101 doesn't seem to have a lot in the environment, and effects seem more opaque (the flames, for example, on the back of one of the characters).
 

ikioi

Banned
Also, W101's textures are not as complex as what Monolith appear to be using. W101's cartoony art style allows them to use simple textures and repeat them more, similar to what Nintendo do with Mario. Fewer individual textures, and they can get away with lower resolutions.

Still, I wouldn't use Monolith's video to gauge the Wii U's capabilities. It's still in development, and I don't believe developers typically invest much time into volumetric effects, shadowing, lighting, etc, in the initial stages of development. Typically such things are among the last to be polished and perfected. No?
 

Margalis

Banned
Blu can you just report ikioi to a mod or something?

This is getting ridiculous. The guy can't even multiply a number by 4 correctly after three separate attempts yet continues to post vast volumes of nonsense, half of which is backtracking over what he previously said.

When I read his posts I want to make corrections then realize this is an endless unrewarding task.
 
Easy. For example here:

http://www.youtube.com/watch?feature=player_detailpage&v=6GxUMMGyZcM#t=24s

You can see the tree and the neck of the monster being pixelated by the low-res waterfall behind it.

You're really grasping at straws there. You also have no idea what the intended effect was supposed to look like. Either way, you can hardly tell anything. Set the resolution to the highest available and enlarge the video, and the compression artifacts make it too hard to tell anything, so I'm not sure how you were somehow able to...
 

JordanN

Banned
Any clues to how the mountains in "X" are rendered?

When I first saw the trailer, I was given tessellation vibes similar to the one used in Ruby Whiteout.
[GIFs comparing the mountains in "X" with the Ruby Whiteout tessellation demo]



???
 
Any clues to how the mountains in "X" are rendered?

When I first saw the trailer, I was given tessellation vibes similar to the one used in Ruby Whiteout.
[GIFs comparing the mountains in "X" with the Ruby Whiteout tessellation demo]



???

Not sure how capable the Wii U is of tessellation, but it would be a waste of processing power.

Also, look at the shore line. Some really straight lines that they'd be able to tessellate... keep straight when farther and "smooth out" when closer. But that's not the case. I'm sure a lot of it is decent meshes with good normal maps.

No need to waste power on polygons on mountains.
 

JordanN

Banned
Not sure how capable the Wii U is of tessellation, but it would be a waste of processing power.

Also, look at the shore line. Some really straight lines that they'd be able to tessellate... keep straight when farther and "smooth out" when closer. But that's not the case. I'm sure a lot of it is decent meshes with good normal maps.

No need to waste power on polygons on mountains.
Why would tessellation be a waste? It's supposed to make LOD easier, and it helps save on memory bandwidth.
 

Margalis

Banned
Ideally, if you tessellate something, you do it such that it gains detail past the point your eye can perceive; otherwise you see what is usually called "boiling", where the surface of the object appears to change with distance.

Which means that, ideally, there's no way to look at something using the in-game camera and tell if it is using tessellation. At best, if it looks super detailed up close, you could guess it was using tessellation, but it could always be using other LOD techniques instead. Good tessellation should be invisible.
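(A minimal sketch of the distance-adaptive idea Margalis is describing: pick the subdivision factor so a patch edge projects to roughly one pixel, so added detail stays below what the eye can resolve. The screen height, FOV, and target edge size here are arbitrary example values, not anything from a real engine.)

```python
import math

def tess_factor(edge_len_world, distance, screen_height_px=720,
                fov_y_deg=60.0, target_px=1.0, max_factor=64):
    """Choose a tessellation factor so a patch edge projects to ~target_px pixels."""
    # How many pixels one world-space unit covers at this distance.
    pixels_per_unit = screen_height_px / (2.0 * distance * math.tan(math.radians(fov_y_deg) / 2))
    projected_px = edge_len_world * pixels_per_unit
    # Subdivide until each sub-edge is roughly one pixel on screen.
    return max(1, min(max_factor, math.ceil(projected_px / target_px)))

for d in (10, 100, 1000):
    print(d, tess_factor(edge_len_world=5.0, distance=d))
# Nearby patches hit the max_factor clamp; distant ones need far fewer subdivisions,
# so the silhouette never visibly "boils" as the camera moves.
```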
 

ozfunghi

Member
That can be a factor too. But the Monolith title seems to use many more transparencies, with the battle effects, grass, and misty parts of the environment, and thus may need to downsize more. 101 doesn't seem to have a lot in the environment, and effects seem more opaque (the flames, for example, on the back of one of the characters).

Maybe... but W101 is likely closer to release, it is also running at 60fps (I doubt that is the case for X), and battle sequences seem to be a lot more hectic. So maybe you're right, but I'd be somewhat reluctant to flat out make such a claim at this point. It also has an art style much easier on video compression than X. Though the problem is rather clear during battle.

I also doubt we'll get sharp 45° corners on the water/shoreline. You can also see other things are clearly not finished/finalized. Some animations are missing etc... Someone in the other thread listed a couple of things that were clearly subject to change due to being WIP.

It may be wishful thinking, but the World of Monado trailer also didn't look as good as Xenoblade ended up looking.
 

ugoo18

Member
How does that X trailer stack up to the Zelda HD experience tech demo and the Japanese Garden Tech demo for all of you?

Better?

Worse?

More or less on par?
 

FyreWulff

Member
Not sure how capable the Wii U is of tessellation, but it would be a waste of processing power.

Also, look at the shore line. Some really straight lines that they'd be able to tessellate... keep straight when farther and "smooth out" when closer. But that's not the case. I'm sure a lot of it is decent meshes with good normal maps.

No need to waste power on polygons on mountains.

The shore is probably just down to early footage. Especially if it's just scenic and you can't really do anything out there, making the water look pretty is something you'd push off for later.
 

tipoo

Banned
Not sure how capable the Wii U is of tessellation, but it would be a waste of processing power.

Also, look at the shore line. Some really straight lines that they'd be able to tessellate... keep straight when farther and "smooth out" when closer. But that's not the case. I'm sure a lot of it is decent meshes with good normal maps.

No need to waste power on polygons on mountains.


The HD 4000 series and later graphics cards are pretty efficient at tessellation. The 360 includes the feature, but it wasn't as efficient back then, and I don't know of anything other than Viva Piñata that uses it.

Yes, lots of tessellation can bog down even the highest-end graphics cards, but it's possible they use some degree of it here with minimal performance loss.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
This is getting more ridiculous with every new post...

This is what I said. Let's quote the entire paragraph rather than your little selective quote.
Indeed, let's.

zzz said:
Given the Wii U's slow MEM2 bandwidth, it's also a fair assumption to say it's unlikely the GPU has more than 8 ROPs. ROPs are heavily dependent on bandwidth; there would be absolutely no point going with any more than 8 ROPs on a MEM2 bus as slow as the Wii U's. The Xbox 360's architecture had the ROPs tied to its eDRAM via a high-speed bus; that doesn't seem to be the case with the Wii U. The Wii U's eDRAM seems to be implemented differently and not tied into the ROPs, nor is the Wii U's eDRAM capable of offering bandwidth in the same ballpark as that in the Xbox 360. If anything the Wii U's ROPs may be worse in performance than those in the Xbox 360 due to the shit bandwidth. Either way, 8 ROPs for a modern-day console is terrible.
Is that line I've underlined wrong?
Yes, it is wrong.

You have two memory pools, A and B, available to a GPU: A of known parameters (size, bandwidth, etc), B an unknown outside of its size. ROPs work with pool B, while both pools A and B service asset fetches. Any statement of the form 'Pool A has such-and-such BW characteristics, so the ROP count should be such-and-such' is nonsense - you have neither the BW available to ROPs alone, nor the cumulative bandwidth available for GPU assets.

Here's a hypothetical question for you: what if pool A was not present (or GPU could not directly read assets from it), how many ROPs should a GPU outputting to pool B have? 0?

And here's another one, this time non-hypothetical: how many ROPs do you think PS2's GS had, given its link to the main mem was rated at 1.2GB/s? GS has a top fillrate of 1.2GPix/s when texturing (2.4GPix/s sans texturing), compared to Xenos' 4GPix/s. By your reasoning, GS should have had 22.4 * 1.2 / 4 = 6.72GB/s BW from main mem for GS to have as many as 8 ROPs. Now, humor me and check the actual specs.
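(The arithmetic blu is pointing at, written out; the 22.4 GB/s is the 360's GDDR3 bandwidth, and the fillrate figures are the ones quoted in the post.)

```python
# Apply the "main-memory BW determines ROP count" reasoning being challenged:
# scale the 360's main-memory bandwidth by the fillrate ratio.
x360_main_bw_gb_s = 22.4    # 360 GDDR3
xenos_fillrate    = 4.0     # GPix/s
gs_fillrate       = 1.2     # GPix/s, textured

implied_gs_bw = x360_main_bw_gb_s * gs_fillrate / xenos_fillrate
print(f"Implied main-memory BW for the GS: {implied_gs_bw:.2f} GB/s")  # 6.72 GB/s
print("Actual GS link to main memory, per the post: 1.2 GB/s")
```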

Does that sentence state the 8 ROPs are directly and physically tied in to the MEM2 bus?

No it doesn't.
So '8 ROPs on a MEM2 bus' in that underlined passage is supposed to mean what exactly?

That's why I then went on to talk about the eDRAM in the Xbox 360 and its relationship with the ROPs, and compared that to how I believe the Wii U's eDRAM is structured. It's obvious to even blind Freddy that 12.8 gigabytes per second isn't going to be enough. Even with the eDRAM, for certain ROP functions you're going to have to use main memory. With MEM2 bandwidth as slow as the Wii U's, ROPs are going to be very limited in performance.
What are those 'certain ROP functions' you need main mem for?

In short, piss off with your selective quoting and intentional misinterpretation of my post.

Edit: Also, you are a fool if you believe the Xbox 360's ROPs didn't use its GDDR3 memory pool for ROP-related tasks. THEY DID. The eDRAM in the Xbox 360 wasn't sufficient to store everything you'd need to feed the ROPs.

http://www.beyond3d.com/content/articles/4/3

Which again highlights why the Wii U's MEM2 bandwidth is important even for ROP performance. And why the low bandwidth of the Wii U's MEM2 pool strongly suggests this console has similar ROP performance to the Xbox 360. MEM2 performance is still a limiting factor in ROP performance.
In what way, form or manner do you believe that article supports your claims? Apropos, 360's GDDR3 is the sole pool capable of feeding assets to Xenos (locked L2 notwithstanding). That means that every single render target (RTT) had to be first resolved to main mem, and then read back from there. In contrast, U-GPU is able to skip the entire roundtrip to main mem for all cases where the former and current rendertargets fit in eDRAM. Furthermore, Xenos' 256GB/s of ROP-only BW served one purpose and one alone - to provide free MSAAx4 with alpha blending (which, apropos, went down the drain the moment tiling was involved). Its actual eDRAM bus BW of 32GB/s serves perfectly well the 4GPix/s of max color fillrate that Xenos actually had, sans blending. For blended color pixels sans MSAA, a 64GB/s BW would've sufficed.

edit: Ok, I did neglect the 32GSample/s of zexel performance, which too, is enabled by the 256GB/s BW, but that has little to do with main mem BW.
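(Pure arithmetic on blu's figures: the per-pixel byte budget each bandwidth number implies at the quoted 4 GPix/s peak fillrate. How those bytes break down between colour, Z and MSAA samples is left to interpretation.)

```python
# Bytes per pixel implied by each bandwidth figure at Xenos' quoted 4 GPix/s.
fillrate_pix_s = 4.0e9
figures = [("eDRAM external bus", 32), ("blended colour, no MSAA", 64), ("ROP-internal", 256)]

for label, bw_gb_s in figures:
    print(f"{label}: {bw_gb_s} GB/s -> {bw_gb_s * 1e9 / fillrate_pix_s:.0f} B per pixel")
# eDRAM external bus: 32 GB/s -> 8 B per pixel
# blended colour, no MSAA: 64 GB/s -> 16 B per pixel
# ROP-internal: 256 GB/s -> 64 B per pixel
```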
 
How does that X trailer stack up to the Zelda HD experience tech demo and the Japanese Garden Tech demo for all of you?

Better?

Worse?

More or less on par?

Wow I forgot about that.... was the garden demo really running off Wii U hardware.... and have we seen anything even close?
 

wsippel

Banned
What was the latency of the Wii's texture cache?

Also, I was under the impression the Wii mode was not 100% hardware emulated?
Gamecube: 6.2ns sustainable latency, 10.4GB/s bandwidth @ 162MHz. MEM2 couldn't even substitute just the Wii's eTC with regards to bandwidth.
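(Working those numbers through; the 243 MHz Wii GPU clock used below is the commonly cited figure and is an assumption here, as is the idea that the Wii's texture cache keeps the GameCube's bus width.)

```python
# What the GameCube texture-cache figure implies, and how it scales to Wii clocks.
gc_bw_gb_s  = 10.4
gc_clock_hz = 162e6

bytes_per_cycle = gc_bw_gb_s * 1e9 / gc_clock_hz
print(f"Implied width: ~{bytes_per_cycle:.0f} bytes/cycle")   # ~64 B/cycle, roughly a 512-bit path

wii_clock_hz = 243e6    # commonly cited Wii GPU clock (assumption)
wii_etc_bw = gc_bw_gb_s * wii_clock_hz / gc_clock_hz
print(f"Wii eTC at the same width: ~{wii_etc_bw:.1f} GB/s vs 12.8 GB/s for the Wii U's MEM2")
# ~15.6 GB/s - more than the Wii U's entire MEM2 bus, which is wsippel's point
```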
 

ozfunghi

Member
Blu, Wsippel, maybe it's more constructive from here on out to just ignore certain posts or just label them "void" instead of wasting your time replying at length (or maybe it's shorter to reply to what is actually correct instead of all the crap that is factually wrong). I'd be much happier to read what, if any, remarks you might have on the new trailers, and whether certain things are showing (or lacking). I know this originally wasn't the thread for that either (just "facts/specs"), but it's the only tech thread left with some reputable folk like yourself in it, and there doesn't appear to be much news feeding the topic as of late.
 

joesiv

Member
The only reason i can see behind Nintendo going with a paultry 64bit bus for the DDR3 pool is because its the cheapest, and can be shrunk down easiler in future redesigns. I don't think they went with that bus for any other reason then financial savings.
I think you're on the right track. As fabrication gets better/cheaper/smaller, the cost of the embedded RAM with its bus width (whatever it is) will go down, whereas the cost of the MEM2 bus would not go down at the same rate.

Cost savings for the future.


Gamecube: 6.2ns sustainable latency, 10.4GB/s bandwidth @ 162MHz. MEM2 couldn't even substitute just the Wii's eTC with regards to bandwidth.

Indeed, but it does fit the 24MB of 1T-SRAM and the 3MB embedded buffer quite nicely. The rest of the Wii's 64MB of system memory fits comfortably into MEM2.
 
I think you're on the right track. As fabrication gets better/cheaper/smaller, the cost of the embedded RAM with its bus width (whatever it is) will go down, whereas the cost of the MEM2 bus would not go down at the same rate.

Cost savings for the future.




Indeed, but it does fit the 24MB of 1T-SRAM and the 3MB embedded buffer quite nicely. The rest of the Wii's 64MB of system memory fits comfortably into MEM2.

Huh
 

wsippel

Banned
Indeed, but it does fit the 24MB of 1T-SRAM and the 3MB embedded buffer quite nicely. The rest of the Wii's 64MB of system memory fits comfortably into MEM2.
You mean MEM1? Yes, the eDRAM can (and most likely has to) fit the whole 27MB 1T-SRAM. Required aggregate bandwidth would be ~35GB/s I believe.
 

joesiv

Member
Gamecube: 6.2ns sustainable latency, 10.4GB/s bandwidth @ 162MHz. MEM2 couldn't even substitute just the Wii's eTC with regards to bandwidth.

You mean MEM1? Yes, the eDRAM can (and most likely has to) fit the whole 27MB 1T-SRAM. Required aggregate bandwidth would be ~35GB/s I believe.
Yes, I mean MEM1 for the 1T-SRAM, and MEM2 for fitting the Wii's other 64MB, which is obvious.
 

ikioi

Banned
you have neither the BW available to ROPs alone, nor the cumulative bandwidth available for GPU assets.

Happy to concede that, which is why I was speculating. I was not passing it off as fact.

We know the bandwidth of the DDR3 pool; about the eDRAM I was simply speculating. My thoughts were that the Wii U's eDRAM wouldn't be as fast as the 256 gigabytes per second of throughput the Xbox 360's eDRAM had to its ROPs, but would be faster than the 32 gigabytes per second of external bandwidth the Xbox 360's eDRAM had. That is all. Admittedly, as you said, that was primarily for MSAA etc.


Here's a hypothetical question for you: what if pool A was not present (or GPU could not directly read assets from it), how many ROPs should a GPU outputting to pool B have? 0?

So MEM1 doesn't exist? And the ROPs are reliant only on the DDR3?


And here's another one, this time non-hypothetical: how many ROPs do you think PS2's GS had, given its link to the main mem was rated at 1.2GB/s? GS has a top fillrate of 1.2GPix/s when texturing (2.4GPix/s sans texturing), compared to Xenos' 4GPix/s. By your reasoning, GS should have had 22.4 * 1.2 / 4 = 6.72GB/s BW from main mem for GS to have as many as 8 ROPs. Now, humor me and check the actual specs.

I don't see how my reasoning suggested that at all.

So '8 ROPs on a MEM2 bus' in that underlined passage is supposed to mean what exactly?

My understanding is that the Wii U's ROPs will still need to use MEM2. In other words, the 32 megabytes of eDRAM won't be enough to store all the data for a typical ROP or high-bandwidth GPU workload.

Due to the Wii U's low MEM2 bandwidth, I am questioning how many ROPs the Wii U's architecture could feed.

That is what I meant when I was commenting on the DDR3's relationship with the number of ROPs in the Wii U.

What are those 'certain ROP functions' you need main mem for?

Basically, data that the ROPs require that can't be stored in the eDRAM.

That means that every single render target (RTT) had to be first resolved to main mem, and then read back from there. In contrast, U-GPU is able to skip the entire roundtrip to main mem for all cases where the former and current rendertargets fit in eDRAM.

Thanks, that is informative; I wasn't aware of that. This is really relevant to the quote of yours above.

In your opinion, is the 32 megabytes of eDRAM in the Wii U large enough to hold the data for anti-aliasing, render targets, texturing, etc?

BTW, I do apologise if this has caused a shit storm; I appreciate the feedback. I don't believe I'm the only one questioning just how sufficient the 32 megabytes of eDRAM would be, so clarification like you're providing is appreciated.

I also asked the same question on Beyond3D, and this was the response I received:

http://forum.beyond3d.com/showpost.php?p=1695285&postcount=4276
 