
The great NeoGAF thread of understanding specs

Durante

Member
This is something I'm curious about and don't know much about. Is there a particular reason why they're betting on this? Because if it were to fall through for some reason, that'd leave the consoles in a pretty bad place.
Well, one data point I can provide is this: 8 years ago when PS360 were conceived, there wasn't a single GPU in the top 500 supercomputers list. Now, well:

[Image: graph of the exponential growth of GPU-equipped systems in the TOP500 supercomputer list]


GPU computing may be a bit of a buzzword, but it's not just that. For tasks suited to them, GPUs blow CPUs out of the water in terms of performance (and performance per watt).
 

Razgreez

Member
Also, how did unified memory work out for the 360? Why shouldn't we think it's the right direction?

Technically the 360 did not use a fully unified memory pool; it still had 10 MB of eDRAM, so the PS4 is unique in that regard. That is actually what has quite a few technology enthusiasts interested.

The poster you're responding to has obviously learned nothing from the OP though
 
Er, the 'GDDR5 thing' is the clearest distinction between the consoles and it's a clear advantage for PS4. It's significant whether people want to cheerlead about it or not.

Also, how did unified memory work out for the 360? Why shouldn't we think it's the right direction?

Can you explain what specific tangible differences 8GB of GDDR5 will make to multiplatform games? And of course it worked out, my point is that it's hardly news.

So much salt here, mmmm!

Judging by where you cut off my quote I'm assuming you think that I'm a Microsoft fan which is about as far from the truth as you can get without falling off a cliff. I'm someone who has an interest in technology and gets tired of listening to cretinous idiot fanboys talking like experts about things they don't understand.
 

aeolist

Banned
This is something I'm curious about and don't know much about. Is there a particular reason why they're betting on this? Because if it were to fall through for some reason, that'd leave the consoles in a pretty bad place.

Because the computationally intensive tasks that game consoles and media boxes need to perform are the types that GPUs are generally good at.

The Jaguar cores are better at general-purpose code than Xenon and Cell but worse at floating point operations, which will be made up for by the GPUs.

Consider that even two Jaguar cores would be more powerful than any current smartphone or tablet, which is more than enough for all of the non-gaming computing tasks most people have.
 

artist

Banned
Judging by where you cut off my quote I'm assuming you think that I'm a Microsoft fan which is about as far from the truth as you can get without falling off a cliff. I'm someone who has an interest in technology and gets tired of listening to cretinous idiot fanboys talking like experts about things they don't understand.
Are you referring to yourself? Because you don't seem to understand the basic memory architecture of previous systems. The original Xbox had a unified pool as well. OMG! WTFBBQ? I'd suggest you read a little deeper and understand why Anand was excited. Don't let your bias get the best of you.

This is something I'm curious about and don't know much about. Is there a particular reason why they're betting on this? Because if it were to fall through for some reason, that'd leave the consoles in a pretty bad place. Maybe hurting PS4 more since all of its memory is higher latency, but I'm not sure how 1:1 that trade off between bandwidth and latency actually is even without a focus on GPU tasks.

If it works out, the unified GDDR5 approach will have the PS4 set up very nicely, won't it? So I guess this may be the big thing to watch for?
Sony and Microsoft have most likely analyzed their game code over the years to get a sense of what calls and tasks are needed most, and went in this direction (besides talking to developers).
 

Boss Man

Member
Thanks for the GPU computing info guys. I think it's something I may read up on a bit. I'm a software guy but not for commercial video games (I made a terrible 3D Minesweeper game once!) or anything, still interesting though.

Can you explain what specific tangible differences 8GB of GDDR5 will make to multiplatform games?
Bill O'Reilly?

I know that GDDR5 allows for significantly better GPU performance. I don't know what the difference between multiplatform games on PS4 and the next Xbox will be, and honestly I don't think anyone can right now. IMO, there's no telling how it will manifest. But I'm sure some people more informed than me might have some really good guesses.
 
Are you referring to yourself? Because you don't seem to understand the basic memory architecture of previous systems. The original Xbox had a unified pool as well. OMG! WTFBBQ? I'd suggest you read a little deeper and understand why Anand was excited. Don't let your bias get the best of you.

What bias? I'm really not sure what you're talking about.

Since you're an expert though, can you please explain to me specifically what it is about the PS4's memory architecture which Anandtech found to be exciting? For bonus points you can explain how it diverges from the future Intel and AMD have been pursuing for the past 5 years.

Bill O'Reilly?

I know that GDDR5 allows for significantly better GPU performance. I don't know what the difference between multiplatform games on PS4 and the next Xbox will be, and honestly I don't think anyone can right now. IMO, there's no telling how it will manifest. But I'm sure some people more informed than me might have some really good guesses.

And this is precisely my point; you just told me that this was the most significant difference between the next consoles but you can't articulate how it will actually make a significant difference.
 

artist

Banned
What bias? I'm really not sure what you're talking about.
Well, I'm not the only one to call you out here ;)

Since you're an expert though, can you please explain to me specifically what it is about the PS4's memory architecture which Anandtech found to be exciting? For bonus points you can explain how it diverges from the future Intel and AMD have been pursuing for the past 5 years.
How about this: there is only one memory controller in the PS4. Here's your homework: go and find out how many memory controllers there were in the previous systems.
 

Durante

Member
And this is precisely my point; you just told me that this was the most significant difference between the next consoles but you can't articulate how it will actually make a significant difference.
Interestingly enough, it can be the most significant difference and still fail to be particularly significant ;)

Though if all the leaks are true I'd say 16 ROPs vs. 32 ROPs may be more significant.
 

Perkel

Banned
The thing which really rustles my jimmies at times like these is the way that cheerleaders line up like ducks in a shooting gallery to be free PR for their favourite companies. Sony knows that the average console enthusiast is far from being a tech expert, so they litter their marketing with things like "OMG Teh Cell" and "8GB GDDR5 WTFBBQ" because idiots read it, don't know what it means except that it's meant to be good and then start vomiting their ignorance all over my favourite internets.

Sony in particular are really good at it because they understand that NeoGAF and similar places are the native habitat for their core market. The whole GDDR5 thing is actually a master stroke of cognitive dissonance among other things. They've managed to get fanboys and tech writers (srsly Anandtech stahp) all in a lather about how unified memory arch is the future despite the fact that the 360 already used one; split pools were only a huge weakness for the PS3. Oh and of course integrated motherboards have been using unified memory pools for like 15 years.

I'd like to think that a thread like this could really make a difference and actually stem the flow of cretinous fanboy ignorance, but I'm not expecting much.

Yeah, you are better than Carmack and you know what is good and what is bad. And you didn't read Durante's thread. The X360 had two memory pools; main RAM was unified, but there was still memory juggling.
 

DrPirate

Banned
People thought increasing the RAM would create better-looking games? lol...

But I do have a noob question concerning the CPUs. I'm a big amateur so please bear with me if the question is really stupid.

You state that:

"Xbox 360 has 3 symmetrical cores, PS4 will have 8 symmetrical cores, PS3 uses 8 asymmetrical cores, and Wii has a single core."

and

"The PS3 and 360 SPUs were both clocked at 3.2 GHz, while PS4 and 720 are rumored to clock at 1.6 GHz. Modern desktop PC processors clock anywhere from 2.5 to 4.2 Ghz."

Is it correct for me to then basically assume that the CPU in the PS3 is more powerful than the CPU in the PS4? I'm going off the information where you basically say that the PS3 has 8 cores clocked at 3.2 GHz, while the ones in the PS4/720 are at 1.6 GHz (so half the speed)?

I'm not quite understanding how the power of a processor can be qualitatively compared. How can I make a judgment on what is better or worse? Or rather, how can I make a layman's comparison of the pros and cons of each?

I am interested in learning, so a serious response would mean a lot to me. I thank you in advance for answering my question.

Edit: If the answer is really, really long, I would also appreciate directions to noob-friendly or well-written documentation so I can read up and learn about it on my own.
 

aeolist

Banned
Thanks for the GPU computing info guys. I think it's something I may read up on a bit. I'm a software guy but not for commercial video games (I made a terrible 3D Minesweeper game once!) or anything, still interesting though.

Articles

OpenCL is the open standard for GPU compute, it's handled by the Khronos Group which is the industry body that sets the OpenGL standards. KG is made up of representatives from all the major players in computing hardware and software like Intel, nVidia, AMD, IBM, ARM, TI, Imagination, Qualcomm, and lots more. Microsoft has their own standards and APIs for GPU compute that are roughly equivalent: http://www.anandtech.com/show/2698

Physics calculations are one of the low-hanging fruit as far as GPU computing goes. nVidia's been licensing PhysX for years but now that the consoles support GPGPU tasks here's hoping some quality cross-platform libraries become popular so everyone will see the benefits and not just nVidia customers: http://www.anandtech.com/show/2285 (this one's a bit outdated but not much has changed in the world of GPU physics, hopefully we'll see the ball start rolling next year)
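To make the "low-hanging fruit" point concrete: physics integration maps well to GPUs because the same arithmetic is applied to every particle independently, with no branching between them. Here's a minimal sketch of such a data-parallel step (pure Python for illustration; on a GPU each particle would map to one thread, and all the numbers are arbitrary):

```python
# Semi-implicit Euler integration applied identically to every particle.
# Each update is independent of the others, which is exactly the kind of
# workload that parallelises trivially across GPU threads.
GRAVITY = -9.81   # m/s^2 along the y axis (illustrative constant)
DT = 1.0 / 60.0   # one frame at 60 fps

def step(particles):
    """particles: list of dicts with 'pos' (x, y) and 'vel' (vx, vy)."""
    out = []
    for p in particles:
        vx, vy = p["vel"]
        vy += GRAVITY * DT  # integrate acceleration into velocity
        x, y = p["pos"]
        out.append({"pos": (x + vx * DT, y + vy * DT), "vel": (vx, vy)})
    return out

particles = [{"pos": (0.0, 10.0), "vel": (1.0, 0.0)} for _ in range(4)]
particles = step(particles)  # advance the whole system by one frame
```

On a GPU the `for` loop disappears: each thread runs the body for one particle, which is why particle counts can scale into the millions.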
 

Perkel

Banned
Interestingly enough, it can be the most significant difference and still fail to be particularly significant ;)

Though if all the leaks are true I'd say 16 ROPs vs. 32 ROPs may be more significant.

I think it is pretty clear that with 1.2 TFLOPS and the bandwidth they are using, they don't plan to hit 1080p as standard if they want to have comparable graphics.
 
I hope to god Microsoft announces some marketing buzz word as a technical achievement just so I can see GAF collectively go insane for the next couple weeks.
 

Durante

Member
I'm not quite understanding how the power of a processor can be qualitatively compared. How can I make a judgment on what is better or worse? Or rather, how can I make a layman's comparison of the pros and cons of each?
First, let me quote myself from the op:
To understand how a processor will perform at a task we first need to decide how that task will be impacted by the individual performance characteristics of the processor. Will there be a lot of SIMD-friendly number crunching? Can the task be distributed across multiple threads efficiently? Will there be lots of data-dependent unpredictable branching?

What this means is that unless two processors use the same or very similar architectures -- or the performance difference is massive -- it will be hard to categorically state that one is better than the other. One of them could be better at some things and worse at others. It all depends on the task at hand.
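As a toy illustration of that point, consider a back-of-the-envelope model (the efficiency factors below are invented for illustration, not measured): peak throughput is clock × cores × SIMD width × ops per cycle, but the fraction of that peak a real task sustains depends entirely on how SIMD-friendly, parallel, and branch-heavy the task is.

```python
# Toy model: effective throughput = theoretical peak * task-dependent efficiency.
# The efficiency figures below are illustrative guesses, not benchmarks.

def peak_gflops(clock_ghz, cores, simd_width, ops_per_cycle):
    """Theoretical peak: every core issues a full-width SIMD op every cycle."""
    return clock_ghz * cores * simd_width * ops_per_cycle

def effective_gflops(peak, efficiency):
    """Fraction of peak a given workload actually sustains."""
    return peak * efficiency

cpu_peak = peak_gflops(clock_ghz=1.6, cores=8, simd_width=4, ops_per_cycle=2)

# Hypothetical sustained fractions for two very different workloads:
tasks = {
    "SIMD-friendly number crunching": 0.70,  # vectorises well, few branches
    "branchy data-dependent logic":   0.05,  # scalar, unpredictable branches
}
for task, eff in tasks.items():
    print(f"{task}: {effective_gflops(cpu_peak, eff):.1f} / {cpu_peak:.1f} GFLOPS")
```

The point is that the same chip can look dominant on one workload and mediocre on another, so a single headline number rarely settles which processor is "better".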
 

aeolist

Banned
Is it correct for me to then basically assume that the CPU in the PS3 is more powerful than the CPU in the PS4? I'm going off the information where you basically say that the PS3 has 8 cores clocked at 3.2 GHz, while the ones in the PS4/720 are at 1.6 GHz (so half the speed)?
Cell is better at certain tasks than the PS4 CPU will be, namely anything heavily threaded with lots of floating point operations. Cell was originally going to perform all the graphics rendering as well as the general-purpose calculations on the PS3, so the SPUs are basically designed in much the same way as GPU cores.

Jaguar is much better at general-purpose code (in games this will be things like scripting and AI). Clock speed is also not directly comparable, because Jaguar not only has branch prediction and better IPC than Cell, but their overall designs are about as different as they could possibly be.

I'm not quite understanding how the power of a processor can be qualitatively compared. How can I make a judgment on what is better or worse? Or rather, how can I make a layman's comparison of the pros and cons of each?

I am interested in learning, so a serious response would mean a lot to me. I thank you in advance for answering my question.
It's basically impossible to make direct comparisons because the usage scenarios are going to be quite different. Cell is better at workloads that will be handled by the GPU in the PS4 and Jaguar is better at everything else.
 

artist

Banned
I hope to god Microsoft announces some marketing buzz word as a technical achievement just so I can see GAF collectively go insane for the next couple weeks.
They can announce:

Enhanced Synthesizer RAM

or

Give out the transistor count.
 
The thing which really rustles my jimmies at times like these is the way that cheerleaders line up like ducks in a shooting gallery to be free PR for their favourite companies. Sony knows that the average console enthusiast is far from being a tech expert, so they litter their marketing with things like "OMG Teh Cell" and "8GB GDDR5 WTFBBQ" because idiots read it, don't know what it means except that it's meant to be good and then start vomiting their ignorance all over my favourite internets.

Sony in particular are really good at it because they understand that NeoGAF and similar places are the native habitat for their core market. The whole GDDR5 thing is actually a master stroke of cognitive dissonance among other things. They've managed to get fanboys and tech writers (srsly Anandtech stahp) all in a lather about how unified memory arch is the future despite the fact that the 360 already used one; split pools were only a huge weakness for the PS3. Oh and of course integrated motherboards have been using unified memory pools for like 15 years.

I'd like to think that a thread like this could really make a difference and actually stem the flow of cretinous fanboy ignorance, but I'm not expecting much.

8GB of GDDR5 is not a marketing ploy from Sony. The console warriors are the only ones making it into a big 'war'. Let's not bring that into this thread.
 

DrPirate

Banned
It's basically impossible to make direct comparisons because the usage scenarios are going to be quite different. Cell is better at workloads that will be handled by the GPU in the PS4 and Jaguar is better at everything else.


What this means is that unless two processors use the same or very similar architectures -- or the performance difference is massive -- it will be hard to categorically state that one is better than the other. One of them could be better at some things and worse at others. It all depends on the task at hand.

Ah, alright, so it's not a straightforward comparison of numbers, but rather understanding their architectures and what kinds of tasks they will be handling. Alright, that helps a lot, thanks guys.
 
Well, I'm not the only one to call you out here ;)

Yeah and you're all as wrong as each other. I'm a PC gamer, and I follow hardware all the time. So I get really pissed at how a bunch of idiots suddenly become experts on something I'm genuinely interested in whenever new consoles are announced because it fuels their fanboy forum wars.

How about this .. there is only one memory controller in the PS4. Here is your homework, go and find out how many memory controllers were there in the previous systems.

Wow, you're not even trying, are you? My point is, and has been, that anyone who pays attention to hardware knows that unified RAM with a single controller is the future, because AMD and Intel have been saying so for years. I have no interest in talking about what was in previous consoles, because my interest is in hardware, not comparing what went into a specific and very small set of embedded devices.

Interestingly enough, it can be the most significant difference and still fail to be particularly significant ;)

Only possible with Cell(TM)

;)
 

aeolist

Banned
That's a nice one actually. Totally true and almost entirely meaningless.

Is it though? I know the Durango will have eDRAM, but if the lower ROP and shader count is true then its die might be smaller than the PS4's.

I dunno how many mm² the eDRAM might take up.
 

Durante

Member
Is it though? I know the Durango will have eDRAM, but if the lower ROP and shader count is true then its die might be smaller than the PS4's.

I dunno how many mm² the eDRAM might take up.
That's true, I have no idea how 32 MB of embedded memory measures up to the rumoured GPU differences in die size.
It would be interesting if the two dies were to be almost exactly the same size :p
 

aeolist

Banned
Ah, alright, so it's not a straightforward comparison of numbers, but rather understanding their architectures and what kinds of tasks they will be handling. Alright, that helps a lot, thanks guys.

Exactly.

The direct comparison would be between overall system designs, i.e. Cell + RSX with split RAM pools vs. 8x Jaguar + a 7850-class GPU with unified memory. Even disregarding the overall performance improvement and the process node upgrade, the PS4 is a much more sensibly designed system. Using off-the-shelf parts makes it cheaper, and AMD's expertise combined with their awful financial situation means they're probably cutting Sony a really good deal. Jaguar was specifically designed to be portable to new process nodes with minimal redesign, so the move to 20nm should be relatively painless.

Cell was not well suited to basically anything and was difficult to work with, and as a bespoke IBM design probably cost a lot more than it should have. nVidia was probably holding Sony over a barrel with RSX licensing, and the split RAM just made everything harder.
 

Truespeed

Member
Durante said:
They execute all the pixel, vertex, geometry and hull shader code that a modern 3D engine throws at the GPU.

Great 101 intro. Is 'hull' a word or did you mean 'all' or 'haul'?
 

artist

Banned
That's a nice one actually. Totally true and almost entirely meaningless.
Well, I'm counting on MS to be vague about the specs in specific areas and/or mask them as much as possible like that.

Uh, it can absolutely make for better-looking games, like higher texture resolution or being able to fit more unique textures.
Yeah, he probably thinks higher fps is the only metric that makes for better-looking games.
 

ekim

Member
An 8-core Jaguar is only rated at ~100 GFLOPS? I've read the specs recently and they said something like 400 GFLOPS per 4-core chip.
 

Durante

Member
An 8-core Jaguar is only rated at ~100 GFLOPS? I've read the specs recently and they said something like 400 GFLOPS per 4-core chip.
Are you sure that was Jaguar? I got the number like this:
1.6 (GHz) × 8 (cores) × 4 (floats per 128-bit SIMD op) × 2 (MUL + ADD) = 102.4 GFLOPS
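The same arithmetic written out as a quick sanity check (a sketch: it assumes each 128-bit SIMD unit handles four 32-bit floats and can issue a multiply and an add per cycle, per the rumoured specs above):

```python
# Peak single-precision throughput for an 8-core 1.6 GHz Jaguar CPU.
clock_ghz = 1.6
cores = 8
floats_per_simd_op = 4  # 128-bit SIMD width / 32-bit floats
ops_per_cycle = 2       # one MUL + one ADD

peak = clock_ghz * cores * floats_per_simd_op * ops_per_cycle
print(f"{peak:.1f} GFLOPS")  # ~102.4
```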
 

DrPirate

Banned
Uh, it can absolutely make for better-looking games, like higher texture resolution or being able to fit more unique textures.

True, my mistake. I meant it in a frames per second kind of aspect as it was stated in the OP. I deserved to be called out on that.
 

ekim

Member
That's an interesting slide. I honestly have no idea what they mean by that number or how they arrive at it.

I'm not that tech savvy so I just interpreted it the most obvious way... :)

Edit: Ah... 190,000 FLOPS is 190 gigaflops, isn't it?
 
Uh, it can absolutely make for better-looking games, like higher texture resolution or being able to fit more unique textures.

I think the big problem is that it's something which relies far more on developers taking advantage of it than other features do. Speed differences are generally easier to see, because dropped frames are pretty obvious and straightforward to benchmark comparatively, so if you compare clock speeds and average framerates on paper you'll be able to see the correlation between the two directly.

Memory is a bit harder, because developers have to use that extra memory for it to represent a clear advantage -- hence my earlier snark about the Cell; it was the more powerful CPU in consoles this generation, but it was poorly understood and woefully underutilised. The thing which makes me sceptical that it's going to be used properly outside first-party content is the way that the Cell ended up being almost a disadvantage for the PS3, because developers were making 360 games and then trying to shoehorn them into its more sophisticated but esoteric architecture.

I could be completely wrong though, and maybe developers will just ship the PS4 versions of multiplatform games with "HD" texture packs like they do with the PC versions of console ports. That should make a reasonable difference, but first-party games will still be the real showcases for what you can get out of 8GB of GDDR5.
 
I'm not that tech savvy so I just interpreted it the most obvious way... :)

Edit: Ah... 190,000 FLOPS is 190 gigaflops, isn't it?

That makes it even worse...
1 billion FLOPS = 1 GFLOPS

Aren't FLOPS usually quoted per clock? Especially since that spec sheet is talking in generalities.
 