All I wanted to do was play Nintendo games in HD. I never meant for it to cause all this madness.
Can't Wii All Just Get Along?
I'll be laughing my ass off if Sony or MS expect me to buy their next consoles for $400-$500.
$299 is my limit. $350 if they bundle all sorts of goodies plus a game.
What is everyone talking about?
"In about a month's time, gaming journalistic outlets will get their hands on the Wii U to open it up and post pictures of the chips."
I'm sure that will be very exciting. Just how awesome the Wii U is, and then not so awesome, followed by "no no, it's awesum, you know nuttin'", bla bla. It's all about the GPU and its potential, etc. etc.
Kind of OT, but in the general train of thought: did we ever find out the full specs of the Hollywood GPU? I'm just wondering if a teardown of the Wii U will be enough to settle the speculation, or if a full technical leak from Nintendo or AMD would be the only way to settle things.
Anyway, this thread has been spiralling down the drain a bit now. In about a month's time, gaming journalistic outlets will get their hands on the Wii U to open it up and post pictures of the chips. I am certain they already possess a few Wii Us, but they're under NDA not to reveal the tech specs.
First, I'll get out of the way the point I disagreed with: I don't believe it's easy or clear to see what anyone means by "DX11 features but not DX11 capable", or "SM5-like features, but only used sparingly"... How does that make sense to anyone? The changes from DX10.1 to DX11 are very small. It flat out is simply about moving tessellation into the unified shaders, plus GPGPU targeting that same advancement. There is no long list of DX11 features that they might or might not have; it's literally those two things on top of DX10.1, but I'm all ears if you have a list. SM5? Either the chip has it or it doesn't; it's a specification that Nintendo would likely not even mess with.
Now to move on: it seems like I'm rocking the boat or whatever, but how can that be when you, BG and I are plainly saying the same thing? BG is right to ask where he and I disagree, because we don't. He said "no" to there being a different GPU than what was in the early dev kits, but I think what I meant was mostly misunderstood: I simply meant that final silicon was put into the final dev kits, replacing the R700 card originally used. Beyond that, I said it's possible that that chip now supports DX11 (and for our purposes, any changes to improve GPGPU and tessellation would mean this).
As for saying "we don't know" if Wii U's GPU is 2-3x Xenos, I think it's plain that it falls in that range; BG basically agrees with me and I don't see you disagreeing with him. We know that Wii U can render 360-level graphics, and beyond that, we also know that it can render a second scene from the game onto the pad (Batman did this during E3). We also know that Batman isn't exactly pushing the hardware, because it's an unoptimized port.
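To put the second-screen point in rough numbers, here's a back-of-the-envelope pixel-throughput sketch in Python. The 720p/60 figures are my assumptions for illustration, not confirmed output modes; only the 854x480 GamePad resolution is a known quantity, and this ignores everything that actually makes rendering expensive (shading, overdraw, bandwidth).

main_w, main_h = 1280, 720   # assumed TV output resolution (not a confirmed spec)
pad_w, pad_h = 854, 480      # Wii U GamePad screen resolution
fps = 60

main_pixels = main_w * main_h * fps   # ~55.3 million pixels per second
pad_pixels = pad_w * pad_h * fps      # ~24.6 million pixels per second

print(f"Main scene: {main_pixels / 1e6:.1f} Mpix/s")
print(f"Pad scene:  {pad_pixels / 1e6:.1f} Mpix/s ({pad_pixels / main_pixels:.0%} extra)")

So a full second viewpoint on the pad is very roughly another ~44% of raw pixel output on top of the TV image, which is why people take it as evidence of headroom beyond a straight 360-class workload.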
Now even 28nm is being taken seriously because someone else said that it's possible. The reason I don't post here much anymore is exactly this: reasonable discussion gets halted, if not by trolls, then by people who just don't read or understand what others are saying.
I'd love to know where either of you disagrees with me about Wii U specs. I'd love to have that debate, instead of arguing over what is "known" and what is "guessed" at... As for the spec leak in the OP, it's clearly second-hand information from Arkam (he has said as much), so let's try not to treat it like a fact sheet.
BTW: I completely disagree with people trying to crush the E6760 rumor. Most people know it's a custom part, so of course it's not literally an E6760, but it seems we all agree that the target performance is fairly close, and that is all people really care about. Until someone can tell me a DX11 feature they don't expect in GPU7, out of the two choices there really are to pick from, it makes no difference whether people look at the E6760 or at an imaginary custom card with the same performance. GPU7 won't use DX11 anyway, so I hardly see the point in telling people it's not an E6760, because it would have all the same characteristics of whatever GPU7 turns out to be, except for Nintendo's customizations of course.
Sod off, hippie.
Yeah. I hate all of you because Nintendo chose not to make Wii U with two 7970s in Crossfire.
"words of the prophet"
CaptNfantasy... you're my favorite poster in GAF right now. I don't know how you manage to pull those words together with such clarity. I am convinced that you are a wizard and that you have a magic string that pulls us all together, to face the darkness and the heathens of the underworld.
Ok dude, let me stop you right there, I'm the only real Sorcerer around here.
There are 7 sorcerers and you are the 7th. CaptNfantasy is the true and original sorcerer.
WTF~<#%%#%^%<
Fair enough, and I want to make it clear I'm not saying it's impossible, just that there are caveats. What concerns me more is the eDRAM, because volume production typically lags the latest node.
I would think the eDRAM is actually either part of the CPU die or a separate die. Either way, it's most likely IBM-made 45nm SOI. I mean, there was this interesting statement by Bob Patti, CTO of Tezzaron Semiconductor, a few months ago:
http://semimd.com/blog/2012/07/31/the-changing-role-of-the-osat/
"Nintendo's going to build their next-generation box," said Patti. "They get their graphics processor from TSMC and their game processor from IBM. They are going to stack them together. Do you think IBM's going to be real excited about sending their process manifest, their backup and materials, to TSMC? Or maybe TSMC will send it to IBM. Neither of those is ever going to happen. Historically they do share this with OSATs, or at least some material information. And they've shared it typically with us because I'm only a back-end fab. I can keep a secret. I'm not going to tell their competition. There's at least a level of comfort in dealing with a third party that isn't a competitor."
We don't know what the eDRAM is or what purpose it has. It's impossible to answer your question with the info we have. It could be anywhere from a few dozen to several hundred GB/s. To put things into perspective: POWER7 has a regular memory bandwidth of 180 GB/s per chip - and that's just the DDR3 RAM, not the eDRAM.
I asked a couple of questions about 15 pages ago (see how patient I am?)...
But got no answer/response. I know this thread can move pretty fast at times, but I'm getting the impression that these roller-coaster debates are consuming you guys - so much so that sincere questions are being overlooked/disregarded. I'd love to be proven wrong, though...
Anyway, if anyone cares to help me... my questions are in the link below:
http://www.neogaf.com/forum/showthread.php?p=42365465&highlight=#post42365465
*fingers crossed*
I know nothing of edram bandwidths.. but bump quoting just for you <3
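For anyone wondering why the honest answer spans such a huge range: embedded DRAM bandwidth is essentially just bus width times clock, and on-die buses can be made very wide. A quick sketch with made-up but plausible figures (none of these are known Wii U numbers):

# Peak bandwidth = bus width (in bytes) * clock. Purely illustrative values.
def edram_bandwidth_gb_s(bus_width_bits, clock_mhz):
    return (bus_width_bits / 8) * (clock_mhz * 1e6) / 1e9

print(edram_bandwidth_gb_s(256, 500))    # 16.0  -> "a few dozen GB/s"
print(edram_bandwidth_gb_s(1024, 500))   # 64.0
print(edram_bandwidth_gb_s(4096, 500))   # 256.0 -> "several hundred GB/s"

Without knowing the bus width or the clock, "a few dozen to several hundred GB/s" really is as precise as anyone can be. For comparison, the 360's internal eDRAM-to-ROP bandwidth is usually quoted at 256 GB/s, while its GPU-to-eDRAM link is 32 GB/s.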
How does backward compatibility work, given what I think is a very different graphics architecture from the Wii?
Random question: what is eDRAM, and what will the Wii U most likely use it for?
I think it's neither a CPU cache nor a framebuffer. I have a theory, but it's probably all kinds of wrong (and stupid), as that stuff isn't really my area of expertise. We have quite a few people on GAF who know a whole lot more about this than I do, and they'll probably laugh at me, but here it goes: I think the eDRAM is meant to be a fast, low-latency scratchpad for both the CPU and the GPU, (mostly) meant to make GPGPU operations more efficient. A buffer for high-speed data exchange between CPU and GPU to prevent stalls, basically.
Ok, gotcha! I'm hoping they use it in a similar way the GameCube did... not just for a framebuffer? Hmmm, I wonder if they could also use it in a way that benefits general-purpose processing significantly (similar to a CPU cache?).
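A minimal sketch of why a low-latency local pool helps with that kind of CPU<->GPU exchange: even if the data is small, bouncing it through main memory eats into a bus that everything else also needs. All of the numbers below are illustrative assumptions (the 12.8 GB/s figure is just what a single 64-bit DDR3-1600 interface would give, not a confirmed Wii U spec):

# Toy model: a GPGPU pass produces an intermediate buffer the CPU reads back each frame.
buffer_mb = 8          # assumed size of the per-frame intermediate data
round_trips = 2        # written by the GPU, read back by the CPU
fps = 60

traffic_gb_s = buffer_mb * round_trips * fps / 1024
main_ram_gb_s = 12.8   # assumed: 64-bit DDR3-1600 = 8 bytes * 1600 MT/s

print(f"CPU<->GPU exchange traffic: {traffic_gb_s:.2f} GB/s")
print(f"Share of the main-RAM bus if routed through DDR3: {traffic_gb_s / main_ram_gb_s:.0%}")

Keeping that exchange in a fast on-chip pool both hides the latency and frees that slice of the shared bus for textures and CPU traffic, which is the "prevent stalls" part of the theory.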
I'll be laughing my ass off if Sony or MS expect me to buy their next consoles for $400-$500.
$299 is my limit. $350 if they bundle all sorts of goodies plus a game.
I agree. If either Sony or Microsoft have their next-gen efforts retailing for more than $400, they're going to struggle, especially when you consider that Nintendo will have system-selling first-party titles such as Mario Kart, a 3D Mario, Retro's new game and Monolith Soft's new games waiting in the wings in an attempt to scupper their launches.
There's always a slim chance that this time next year the worldwide economy might be in a better state, but I certainly wouldn't expect it myself. If they release their consoles at a price point higher than $400, I can't see them flying off shelves no matter what sort of software they have available at launch.
Yeah, that worked really well for Sony when they expected people to get second jobs to fund a PS3... and those people can't sustain a console's software sales, which is what Sony hinges its hopes on when it sells a console at a loss.
Fact is, there exists a price barrier for consoles, and that barrier seems to be $350 at the top end, which is begrudgingly acceptable. The real expectation and hope of the consumer, though, is $299.
You aren't the only person they are targeting to sell to. So yea, you can wait.
You're correct of course, but I would think Nintendo and TSMC signed the contracts at least a year ago, probably closer to two years. And TSMC expected much higher volumes and better yields for 28nm back then, if I remember correctly. So it's certainly possible that Nintendo planned to go with 28nm and designed the TDP around that, and couldn't really change it anymore - which would explain the rumored manufacturing issues.
What and where are those rumors? I missed that.
Hey bgassasin, appreciate if you could answer this one question.
In regards to the CPU, have your sources commented on its capabilities?
Previous insiders have commented that the Wii U's CPU is weak in comparison to the HD twins, and a few developers have also publicly made comments suggesting this is indeed the case. We recently had the Ninja Gaiden devs saying the Wii U version has fewer on-screen enemies than the PS3/360 versions due to its processor.
What is the likely cause of these sorts of comments:
Developers running unoptimised code on the Wii U
Developers failing to take advantage of GPGPU and other hardware that could perform these tasks or lighten the CPU load
The Wii U's processor is indeed inferior in most regards to the Xenon and Cell processors.
Middleware and development tools not yet optimised enough for developers to take advantage of the CPU's power
The Ninja Gaiden devs didn't say that; I believe it was one of the Koei teams doing Warriors Orochi. Those same devs also said they didn't understand the hardware and that the machine could produce much more beautiful graphics than the other consoles.
The biggest benefit the EDRAM provides is lower power consumption.
Come on people, you should know this by now.
Pretty sure it's not lower power consumption than not having any...
If that's a benefit, it has to be compared to an alternative.
Knowing that it is at least a modified R700, can we expect at least R800-level tessellator performance?
That may be your personal price barrier, but that doesn't mean millions of others won't disagree with you. I personally think anyone is batshit to pay $350 for a Wii U; in context, that doesn't mean anything.
Nothing to laugh at here. But you're missing one detail: if we agree that the main pool is DDR3, then it makes a lot of sense for the GPU to keep its fb (not necessarily in its entirety) in the eDRAM pool. Historically, platforms which did not use any form of local fb but instead stored their fb directly in UMA space suffered from some serious memory contention issues (case in point: the Xbox).
Regardless, I think the eDRAM will be used both as a GPU scratchpad and as a low-latency conduit between the CPU and GPU. That's why one of the architecture details I'm most curious about is how AMD solved any potential CPU/GPU contention issues in that pool.
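To see why a local fb (even a partial one) is plausible at all, here's the napkin math on framebuffer footprints. The 32 MB pool size is only the rumored figure floating around, not something confirmed in this thread:

# Rough framebuffer footprint: one 32-bit color target plus one 32-bit depth/stencil target.
def fb_mb(width, height, bytes_per_pixel=4, targets=2):
    return width * height * bytes_per_pixel * targets / (1024 * 1024)

edram_mb = 32                          # rumored pool size (assumption)
print(fb_mb(1280, 720))                # ~7.0 MB for a 720p color+Z pair
print(fb_mb(854, 480))                 # ~3.1 MB for a GamePad-sized pair
print(edram_mb - fb_mb(1280, 720) - fb_mb(854, 480))   # ~21.8 MB left over

So even keeping a full 720p color+Z pair plus a GamePad-sized target on-die would leave most of the pool free for the scratchpad/conduit role, while keeping all that traffic off the shared DDR3 and away from the Xbox-style contention problem described above.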