It means we need 16GB GDDR5 on the next Xbox.
8GB of GDDR5 RAM, that's all that matters...ALL.
It means it's going to put all that RAM to good use!
What's the best GPU a console manufacturer could use for a fall 2013 release? Would a 7950 be possible?
Is the GPU equal to a 7870, or is it close to it?
You'd have to downclock a 7950 quite considerably, and the performance advantage wouldn't be there.
What I don't understand yet is how the load balancing between the individual queues/pipelines will be performed or controlled.
From the diagram, it looks like the CPU.
GCN 2.0 vs 1.0? What is it, and why does it matter?
2.0.
GCN is AMD's current GPU architecture, and 2.0 is obviously its successor. Since we have no idea what exactly this entails, throwing around the "GCN 2.0" moniker in console tech discussions is pretty meaningless IMHO.
So developers can control the priority of the individual pipelines? That would be pretty neat. I want to program one.
That's really hard to say, since this is an APU and it works differently than a discrete CPU/GPU setup. The APU works as a single entity, with the CPU and GPU feeding off each other, kind of like chocolate and peanut butter: on their own they're good, but put them together and WAH!!!
It would be kind of neat, but that's solely going on what is in the diagram.
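Purely to illustrate the idea (a conceptual sketch only, not any real console or AMD API; every name here is made up): if each compute pipeline exposed a developer-settable priority, the CPU-side scheduling could look something like this.

[code]
#include <cstdio>
#include <queue>
#include <vector>

// Hypothetical model of 8 compute pipelines, each with a developer-assigned
// priority. Higher-priority pipelines get serviced first.
struct Pipeline {
    int id = 0;
    int priority = 0;             // set by the developer, purely illustrative
    std::queue<const char*> jobs; // pending compute jobs
};

int main() {
    std::vector<Pipeline> pipes(8);
    for (int i = 0; i < 8; ++i) pipes[i].id = i;

    pipes[0].priority = 10;            // e.g. latency-sensitive physics
    pipes[0].jobs.push("cloth_sim");
    pipes[7].priority = 1;             // e.g. background GPGPU work
    pipes[7].jobs.push("texture_decode");

    // Trivial scheduler: always drain the highest-priority non-empty pipeline.
    for (;;) {
        Pipeline* best = nullptr;
        for (auto& p : pipes)
            if (!p.jobs.empty() && (!best || p.priority > best->priority))
                best = &p;
        if (!best) break;
        std::printf("pipeline %d runs %s\n", best->id, best->jobs.front());
        best->jobs.pop();
    }
}
[/code]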
It came up with the Durango GPU: it's at ~1.2 TFLOPS, but at 100% efficiency, whereas the 360 was rated at 60% efficiency.
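For a rough sense of scale (using the commonly quoted ~240 GFLOPS peak for the 360's Xenos; back-of-the-envelope numbers only): 240 × 0.6 ≈ 144 GFLOPS of effective throughput on the 360, versus 1200 × 1.0 = 1200 GFLOPS here, i.e. roughly an 8x gap in usable compute.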
The 8 ACEs would load-balance it between their (64) rings, and it seems like each ACE can be addressed by the CPU.
No, the point of the ACEs is to handle task allocation within the GPU. There may be some need for developers to chunk their code up in such a way as to encourage the GPU to be more efficient, though.
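To make the load-balancing idea concrete (a toy model only, assuming the 8 ACEs x 8 rings layout from the diagram; none of this is real hardware behaviour): the CPU addresses a specific ACE, and the ACE spreads incoming work round-robin across its own rings.

[code]
#include <cstdio>

// Toy model: 8 ACEs, each owning 8 rings (64 queues total), per the diagram.
// The CPU picks an ACE; the ACE round-robins new work across its rings.
struct Ace {
    int next_ring = 0;
    int ring_depth[8] = {0};    // jobs currently queued per ring

    void submit() {
        ++ring_depth[next_ring];
        next_ring = (next_ring + 1) % 8;  // simple round-robin balancing
    }
};

int main() {
    Ace aces[8];

    // CPU-side code addresses ACE 3 directly and hands it 20 jobs.
    for (int i = 0; i < 20; ++i)
        aces[3].submit();

    for (int r = 0; r < 8; ++r)
        std::printf("ACE 3, ring %d: %d jobs queued\n", r, aces[3].ring_depth[r]);
}
[/code]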
I wonder if each ACE is directly addressable by the developer?
I wouldn't say it's meaningless; the diagram screams GCN 2.0.
Yea, that's actually one of the reasons I want BF5 to return to WWII. True destruction in a war that was pretty much all about destruction would be epic, especially if they brought naval battles back. Imagine a destroyer shelling a village along the coast with all of its weapons...MMMmmmm.
I hope more MP games are willing to take the risk on destruction and realize the potential instead of just thinking it would ruin the flow of a map.
So are we still to believe the information they've been trickling out is accurate, despite the apparent recent changes to the PS4?
Well, the RAM thing is the only thing that's been "wrong" so far, and that was a last-minute change. VGleaks obviously have a lot of detailed docs for PS4 (a 95-page PDF, IIRC). Why they choose to react to rumours, I don't know.
I personally would just post everything......
Sorry if I'm rambling a bit.
It's appreciated.
Well, my choice would be a 7870 as the best candidate GPU to put into both next-gen consoles, although that is right on the edge. A 7950 is probably pushing the TDP limits of the consoles too far. The PS4 GPU is a very good GPU for a console that is most probably going to run on less than 200 watts.
Think of it as the GPU being able to reach its full potential.
Interesting
So Sony is basically going with the best out there
Get people coming back to their site.
Well yes of course! It's just wearing a little thin now. I understand why they do it, but I just find it annoying as hell.
[img]http://i.imgur.com/UYQq2Z8.jpg[/img]
[url]https://mega.co.nz/#!cdNGFLyC!cDj8ajt1WT0JNSQKEVdRbmk464QVFfVZF4OX7nRdMek[/url]
Dude.......
I wouldn't say "screams", but it certainly suggests it. The prime complaint about general compute on GCN was a lack of granularity and the resulting high cost of context switching that this brought about.
This modification seems to be directly aimed at improving that aspect of the GCN design.
I would now not be at all surprised to see this outlined as the solution when GCN 2.0 details are revealed.
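As a purely illustrative example of what that granularity buys you (made-up numbers): with one coarse queue, a small compute job that arrives behind a 10 ms graphics batch either waits the full 10 ms or forces an expensive full-pipeline context switch; with its own user-level queue, it can be picked up as soon as any CU has spare cycles, no heavyweight switch required.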
How do you know what GCN 2.0 looks like? Are you really expecting the actual shader processors and render backends to be more or less unchanged?
And even if it were to be GCN 2.0, how would naming it as such now help us, since we have nothing to compare it to? I could just see it creating another "GDDR5" buzzword phenomenon.
Well, it might just be EGCN (Enhanced GCN), but the diagram clearly shows 8 compute pipelines with 8 user-level queues each, which is what GCN 2.0 is all about (less data waiting in line to be processed).
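(If that reading of the diagram is right, 8 pipelines × 8 queues gives the same 64 user-level queues as the "(64) rings" mentioned earlier.)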