
The great NeoGAF thread of understanding specs

Durante

Member
I'm reading it and it's very heavy for someone who has minimal knowledge in this field.
Yeah, sorry about that. I feel like especially in the CPU part I overshot the required level of detail (even though I had to stop myself from writing more ;)).

I'm trying to understand this part:
DDR3 RAM has medium latency while GDDR5 has higher per-clock latency (don't really understand the per-clock portion).
"Per clock" means that the latency for DDR3 is much lower in terms of the number of clock cycles it takes to access some data. However, since GDDR5 is usually clocked at a somewhat higher frequency, the difference in time (the more comparable metric) is smaller.

So what exactly is the DDR3 better at doing with regard to latency when it comes to gaming when comparing it to GDDR5.
Well, lower latency will mostly help in CPU tasks that don't fit within the CPU's data cache but also don't consume large amounts of bandwidth, and where the CPU mostly spends its time waiting for memory accesses to complete. An example would be traversing a large sparse/unbalanced data structure in memory. I can't readily think of a good example in games; ideally you'd want to avoid such structures.
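To picture the kind of latency-bound access pattern meant here, a tiny hypothetical sketch (Python, everything made up): each load depends on the result of the previous one, so the CPU mostly waits on memory latency and raw bandwidth barely matters.

import random

# Hypothetical latency-bound workload: walking a randomly-linked chain.
N = 1_000_000
next_index = list(range(N))
random.shuffle(next_index)        # scatter the links so caches and prefetchers can't help

i, visited = 0, 0
for _ in range(N):
    i = next_index[i]             # the next address is only known once this load finishes
    visited += 1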
 

RoboPlato

I'd be in the dick
Excellent thread, Durante. Even though I've been following next-gen rumors and specs for a while, the OP cleared up some things I still wasn't too sure on (mainly SIMD)
 
One thing missing IMO is what the clock frequencies mean. A lot of people think that simply because the number is higher, that means it's better. That's not really the case. The architecture is very important in speed.
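As a rough illustration of why the number alone doesn't tell you much (the IPC figures below are invented, not real measurements): what matters is roughly how many instructions the chip retires per second, which is clock multiplied by instructions per clock.

# Invented IPC numbers, purely to show why clock speed alone is misleading.
def instructions_per_second(clock_ghz, ipc):
    return clock_ghz * 1e9 * ipc

old_high_clock = instructions_per_second(clock_ghz=3.2, ipc=0.5)   # narrow in-order core
new_low_clock  = instructions_per_second(clock_ghz=1.6, ipc=1.5)   # wider out-of-order core
# 1.6e9 vs 2.4e9: the lower-clocked core comes out ahead on this (made-up) workload.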

Might want to explain the difference between programmable and fixed function shaders so people don't think it's some magic sauce. We know for a fact the 3DS still uses some fixed function shaders (as I recall).

Edit - See what I mean?

One thing I don't understand is that if the PS4 has the same cores as the PS3 and yet has a lower frequency, how is it a better CPU? Doesn't it mean that the PS4 can only work on an equal number of threads as the PS3 and yet isn't able to perform as many instructions per second?
 

Kareha

Member
Very nice write-up, Durante. Mods should get this stickied with a note saying "Read this; if you don't understand it, please don't participate in technical discussions unless you know wtf you're talking about".
 

Durante

Member
One thing I don't understand is that if the PS4 has the same cores as the PS3 and yet has a lower frequency, how is it a better CPU? Doesn't it mean that the PS4 can only work on an equal number of threads as the PS3 and yet isn't able to perform as many instructions per second?
It has out-of-order execution, more instruction-level parallelism and a shorter pipeline. Read those sections in the OP to understand what that means. Of course, there absolutely will be some tasks that the PS4 CPU will be slower at than Cell. That's an unprecedented situation in consoles, but luckily many of those tasks will be ones that are well suited to GPUs.
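A toy way to picture the ILP part (just an illustration in Python, not how you'd actually program either CPU): a dependent chain can't be overlapped, while independent operations give an out-of-order core something to execute in parallel.

# Toy illustration of instruction-level parallelism; assumes an even-length list.
def dependent_chain(xs):
    acc = 0
    for x in xs:
        acc = acc * 3 + x          # every step needs the previous result -> little ILP
    return acc

def independent_sums(xs):
    a = b = 0
    for i in range(0, len(xs), 2):
        a += xs[i]                 # these two updates don't depend on each other,
        b += xs[i + 1]             # so a wide out-of-order core can overlap them
    return a + b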
 

Boss Man

Member
Very nice thread. Thank you for taking the time.

Idea: The bottom of the OP ought to have Wii U, PS4, and (rumored) Durango specs.
 

DTKT

Member
Quick question if I may.

While the RAM is unified, the PS4 still has a GPU that will use whatever amount of the unified RAM pool it needs?
 

Oemenia

Banned
Thanks for the thread, but can you give us an idea as to how things are actually carried out on a CPU/GPU? You explained what the different bits of them do, but it was hard to understand their significance without an idea of what they are useful for.
 

Durante

Member
Very nice thread. Thank you for taking the time.

Idea: The bottom of the OP ought to have Wii U, PS4, and (rumored) Durango specs.
This makes a lot of sense in principle, but psychologically it could give the thread too much of a console war bent at the outset ;)

While the RAM is unified, the PS3 still has a GPU that will use whatever amount of the unified RAM pool it needs?
I think you mean the PS4. And yes, the RAM can be partitioned freely between GPU and CPU, and the same area can even be used by both. This is a major advantage of unified architectures.
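A very loose sketch of what that buys you (hypothetical names, Python dicts standing in for what would really be allocator/driver behaviour): with split pools the CPU's result has to be copied into GPU-visible memory, while with a unified pool both processors can work on the same allocation.

# Conceptual only: dicts standing in for separate physical memory pools.
cpu_pool, gpu_pool = {}, {}

# Split pools (PS3-style): produce data on the CPU, then copy it across for the GPU.
cpu_pool["vertices"] = [0.0, 1.0, 2.0]
gpu_pool["vertices"] = list(cpu_pool["vertices"])   # explicit copy over the bus

# Unified pool (PS4-style): one allocation that both CPU and GPU can touch in place.
unified_pool = {"vertices": [0.0, 1.0, 2.0]}
unified_pool["vertices"][0] = 0.5                   # CPU writes it, GPU reads the same memory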
 

Perkel

Banned
Durante, I think you could add more example cases to the memory bandwidth and size sections.


Very nice write-up, Durante. Mods should get this stickied with a note saying "Read this; if you don't understand it, please don't participate in technical discussions unless you know wtf you're talking about".


I am all for that. RAM talk is getting annoying with people starting threads daily.
 

Jinfash

needs 2 extra inches
Great thread... Now I feel smarter, and feel like taking on spec discussions that were once too intimidating. I'm sure others feel the same.

Congrats Durante, you just militarized more warriors!
 

Durante

Member
Thanks for the thread, but can you give us an idea as to how things are actually carried out on a CPU/GPU? You explained what the different bits of them do, but it was hard to understand their significance without an idea of what they are useful for.
That's a fair question, but for now I can't think of a way to do that meaningfully without spending many more hours writing.

For CPUs, you could try reading the "Operation" part of the Wikipedia article.
Sadly I can't really find anything comparable for GPUs.


Congrats Durante, you just militarized more warriors!
What you don't realize is that this has been the plan all along!
 

Wiz

Member
Great thread.

I have one suggestion though. Can you put the system specs of all current-gen (360, PS3, Wii) and next-gen (PS4, Wii U, rumored Durango) consoles in the OP?

Just as a reference. It would help us better visualize after reading the OP.
 

Stumpokapow

listen to the mad man
Yeah all that is well and good but for EG Naughty God will be able to take DDR5 x 8GB = 5 = 2 more than DDR so Naughty Gods will be able to destroy "Durango" (AKA Kinect 2 Floparoo AKA M$)--and based Mark Cerny am steamroll LOL @ "Don" "Mattrick" real "gamer".
 

Margalis

Banned
Thanks for the thread, but can you give us an idea as to how things are actually carried out on a CPU/GPU? You explained what the different bits of them do, but it was hard to understand their significance without an idea of what they are useful for.

The GPU draws stuff. The CPU figures out what to draw - as in the state of the game.

It's a little more complicated than that in reality, but that's the general idea. The CPU handles what is happening in the game, all of the logic, like "this guy is playing this frame of animation, this tree is over here, the player is jumping" and the GPU draws all that stuff.

Edit: Generally making the GPU better will make the game prettier, but if you want to do something like increase the number of enemies or objects in the world you need GPU power to draw them and CPU power to run AI, keep track of them, etc.
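A minimal sketch of that split, with made-up names (Python, not any real engine API): the CPU-side update advances the game state, and the render step just builds the list of things the GPU is asked to draw.

# Hypothetical game-loop sketch; every name here is invented for illustration.
def update(world, dt):
    for enemy in world["enemies"]:
        enemy["x"] += enemy["speed"] * dt            # CPU work: logic, AI, animation state
    return world

def build_draw_list(world):
    # GPU work starts here: these "draw calls" are what the GPU would actually render.
    return [("enemy_mesh", e["x"], e["y"]) for e in world["enemies"]]

world = {"enemies": [{"x": 0.0, "y": 1.0, "speed": 1.5}]}
for frame in range(3):                               # stand-in for the real per-frame loop
    world = update(world, dt=1 / 60)
    draw_calls = build_draw_list(world)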
 

Durante

Member
Yeah all that is well and good but for EG Naughty God will be able to take DDR5 x 8GB = 5 = 2 more than DDR so Naughty Gods will be able to destroy "Durango" (AKA Kinect 2 Floparoo AKA M$)--and based Mark Cerny am steamroll LOL @ "Don" "Mattrick" real "gamer".
I never considered that angle. I may have to rethink much of this.
 

Espada

Member
The GPU draws stuff. The CPU figures out what to draw, what AI should be doing, etc.

It's a little more complicated than that in reality, but that's the general idea. The CPU handles what is happening in the game, all of the logic, like "this guy is playing this frame of animation, this tree is over here, the player is jumping" and the GPU draws all that stuff.

So, if I'm understanding this correctly, the CPU is like a director and the GPU is like the actors and special effects following said orders?

In that case memory capacity would be the number of ideas in the director's head and bandwidth would be how well/efficiently he relays said ideas?
 

Tan

Member
Awesome thread, very informative and something this forum desperately needs.

Confused about some stuff though

The PS3 and 360 CPUs were both clocked at 3.2 GHz, while the PS4 and 720 are rumored to be clocked at 1.6 GHz.

The PS4/720 clockspeed seems really bad? Am I just not understanding something or do other parts of the CPU (e.g. cores) actually make up for the big gap in frequency?
 

dosh

Member
I usually don't understand a word of the specs talk that goes on GAF, so this thread is really interesting to me. Thanks for taking the time to write all that.
 
Thank you. As someone who more or less quit researching PC components after building my last rig (mainly to stave off buyer's remorse), this info was quite helpful and eye-opening.
 
Yeah all that is well and good but for EG Naughty God will be able to take DDR5 x 8GB = 5 = 2 more than DDR so Naughty Gods will be able to destroy "Durango" (AKA Kinect 2 Floparoo AKA M$)--and based Mark Cerny am steamroll LOL @ "Don" "Mattrick" real "gamer".



But seriously. Thanks for the thread, Durante; I hope it stops some of this "Quatsch" (nonsense).
 

Durante

Member
The PS4/720 clockspeed seems really bad? Am I just not understanding something or do other parts of the CPU (e.g. cores) actually make up for the big gap in frequency?
The PS4/720 CPUs are "better" in most of the other performance aspects I explained in the OP (e.g. ILP, pipeline length, out-of-order execution), and also in some I skipped (like branch prediction). There will still be some tasks that they are worse at, though those will mostly be ones that can also be done well with GPGPU.

What is absolutely true is that we're not really seeing the same leap in performance on the CPU side that we see on the GPU or memory sides.
 

aeolist

Banned
Thanks for the thread, but can you give us an idea as to how things are actually carried out on a CPU/GPU? You explained what the different bits of them do, but it was hard to understand their significance without an idea of what they are useful for.

GPUs are good at processing highly parallel tasks, which is basically anything that doesn't have a lot of interdependencies in the code. Drawing a scene in 2D/3D is such a task because groups of shader processors can work on different sections of the scene independently, meaning they don't rely on what's happening elsewhere to figure out what they need to do. Hand a GPU anything with branching code or interprocess dependencies and it crawls.

Basically they work for anything video-related, physics calculations, and anything else that's simulation-type code. They suck for pretty much everything else.

CPUs are good at the stuff GPUs suck at. There's not a lot of overlap, and people seem to be counting on GPGPU (using the shader processors for non graphics-related tasks) to do more than is really feasible right now.
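To make the contrast concrete, a small sketch (Python, invented example): the per-pixel operation has no dependencies between elements, so thousands of GPU threads could each take one, while the branchy AI routine is exactly the kind of control flow a GPU handles poorly.

# Data-parallel: each output pixel depends only on its own input pixel,
# so every element could be handled by a separate GPU thread.
pixels = [0.1, 0.5, 0.9, 0.3]
brightened = [min(p * 1.2, 1.0) for p in pixels]

# Branchy and stateful: the next decision depends on earlier ones,
# which is the kind of code that runs far better on a CPU.
def patrol_ai(state, saw_player):
    if saw_player:
        return "chase"
    if state == "chase":
        return "search_last_known_position"
    return "patrol"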
 
Durante, nice thread. Question: how comparable, or better, are the PS4's 8 Jaguar cores to the PS3's Cell processor? Can they do everything the Cell did, like multiple video streams?
 

Durante

Member
Durante, nice thread. Question: how comparable, or better, are the PS4's 8 Jaguar cores to the PS3's Cell processor? Can they do everything the Cell did, like multiple video streams?
See my post above. Decoding multiple video streams is probably one of the things Cell might actually be better at than 8 Jaguar cores at 1.6 GHz. However, I'd personally say that both are more than good enough at it in practice.
 
The thing which really rustles my jimmies at times like these is the way that cheerleaders line up like ducks in a shooting gallery to be free PR for their favourite companies. Sony knows that the average console enthusiast is far from being a tech expert, so they litter their marketing with things like "OMG Teh Cell" and "8GB GDDR5 WTFBBQ" because idiots read it, don't know what it means except that it's meant to be good and then start vomiting their ignorance all over my favourite internets.

Sony in particular are really good at it because they understand that NeoGAF and similar places are the native habitat for their core market. The whole GDDR5 thing is actually a master stroke of cognitive dissonance among other things. They've managed to get fanboys and tech writers (srsly Anandtech stahp) all in a lather about how unified memory arch is the future despite the fact that the 360 already used one; split pools were only a huge weakness for the PS3. Oh and of course integrated motherboards have been using unified memory pools for like 15 years.

I'd like to think that a thread like this could really make a difference and actually stem the flow of cretinous fanboy ignorance, but I'm not expecting much.
 

deviljho

Member
What is absolutely true is that we're not really seeing the same leap in performance on the CPU side that we see on the GPU or memory sides.

How will that affect physics and AI? And will the new machines make these things easier for developers?
 

Durante

Member
How will that affect physics and AI?
Well, much of the heavy lifting for physics can be done on the GPU, and AI code is probably the type of code that runs better on Jaguar.

And will the new machines make these things easier for developers?
Yes, by a massive amount, particularly compared to PS3. Tools for x86 are extremely mature and unified memory is easier to use than separate memory pools.
 
See my post above. Decoding multiple video streams is probably one of the things Cell might actually be better at than 8 Jaguar cores at 1.6 GHz. However, I'd personally say that both are more than good enough at it in practice.

Hmmm. Why did Sony and MS decide to use such... average(?) CPUs?

But reading your post above it seems there are balanced trade-offs?
 

Boss Man

Member
The thing which really rustles my jimmies at times like these is the way that cheerleaders line up like ducks in a shooting gallery to be free PR for their favourite companies. Sony knows that the average console enthusiast is far from being a tech expert, so they litter their marketing with things like "OMG Teh Cell" and "8GB GDDR5 WTFBBQ" because idiots read it, don't know what it means except that it's meant to be good and then start vomiting their ignorance all over my favourite internets.

Sony in particular are really good at it because they understand that NeoGAF and similar places are the native habitat for their core market. The whole GDDR5 thing is actually a master stroke of cognitive dissonance among other things. They've managed to get fanboys and tech writers (srsly Anandtech stahp) all in a lather about how unified memory arch is the future despite the fact that the 360 already used one; split pools were only a huge weakness for the PS3. Oh and of course integrated motherboards have been using unified memory pools for like 15 years.

I'd like to think that a thread like this could really make a difference and actually stem the flow of cretinous fanboy ignorance, but I'm not expecting much.
Er, the 'GDDR5 thing' is the clearest distinction between the consoles and it's a clear advantage for PS4. It's significant whether people want to cheerlead about it or not.

Also, how did unified memory work out for the 360? Why shouldn't we think it's the right direction?
 

artist

Banned
The whole GDDR5 thing is actually a master stroke of cognitive dissonance among other things. They've managed to get fanboys and tech writers (srsly Anandtech stahp) all in a lather about how unified memory arch is the future despite the fact that the 360 already used one
So much salt here, mmmm!
 

Durante

Member
Hmmm. Why did Sony and MS decide to use such... average(?) CPUs?

But reading your post above it seems there are balanced trade-offs?
I would assume that it's because (1) they expect much of the heavy computational workloads to shift to the GPU, (2) tools are ready and mature for x86 and (3) it's cheap (in cost, die space and power consumption) and what AMD had readily available to integrate.

How much you think (3) matters compared to (1) and (2) depends on how cynical you are ;)
 

Boss Man

Member
I would assume that it's because (1) they expect much of the heavy computational workloads to shift to the GPU, (2) tools are ready and mature for x86 and (3) it's cheap (in cost, die space and power consumption) and what AMD had readily available to integrate.

how much you think (3) matters compared to (1) and (2) depends on how cynical you are ;)
This is something I'm curious about and don't know much about. Is there a particular reason why they're betting on this? Because if it were to fall through for some reason, that'd leave the consoles in a pretty bad place. Maybe hurting the PS4 more, since all of its memory is higher latency, but I'm not sure how 1:1 that trade-off between bandwidth and latency actually is even without a focus on GPU tasks.

If it works out, the unified GDDR5 approach will have the PS4 set up very nicely, won't it? So I guess this may be the big thing to watch for?
 