
Rumor: PS4 GPU based on AMD's GCN 2.0 architecture?

artist

Banned
One interesting fact about the official spec sheet is that it does not list the frequency of the individual cores. Tech-wise, it is possible for them to bump up the rumored 1.6 GHz CPU core speed before production if they choose to, without much hassle, right?

I'm not saying they should, since they do have 8 cores, processing help from the GPU, and no doubt want to keep heat down and yields up, but isn't it possible at this point?
We have an indirect clock spec for the GPU, reverse-calculating it from the 1.84 TFLOPS figure with 18 CUs.

As for the clock of the CPU, we've heard some rumblings of 1.6+ GHz. Anand also mentioned that it will be closer to 2 GHz than 1.6, or something along those lines.
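For anyone who wants to check the math, here's a rough back-of-the-envelope sketch of that reverse calculation, assuming the standard GCN figures of 64 shader lanes per CU and 2 FLOPs (one FMA) per lane per clock:

```python
# Rough reverse calculation of the PS4 GPU clock from the leaked numbers.
# Assumes standard GCN: 64 shader lanes per CU, 2 FLOPs (one FMA) per lane per clock.
PEAK_TFLOPS = 1.84
CUS = 18
LANES_PER_CU = 64
FLOPS_PER_LANE_PER_CLOCK = 2

flops_per_clock = CUS * LANES_PER_CU * FLOPS_PER_LANE_PER_CLOCK  # 2304
clock_hz = PEAK_TFLOPS * 1e12 / flops_per_clock
print(f"Implied GPU clock: {clock_hz / 1e6:.0f} MHz")  # ~800 MHz
```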
 

i-Lo

Member
We have an indirect clock spec for the GPU, reverse-calculating it from the 1.84 TFLOPS figure with 18 CUs.

As for the clock of the CPU, we've heard some rumblings of 1.6+ GHz. Anand also mentioned that it will be closer to 2 GHz than 1.6, or something along those lines.

I think it'll come down to reliability inspection and yields. It would have been amazing if they could bump the GPU's core clock as well, but given that the CPU draws far less power and generates less heat than the GPU, a bump in its clock speed is a more realistic proposition.

If this tells us something, it's that the first-generation games are being born in the pure turmoil of transition. I can only imagine how things are going to look a few years down the line. I just hope the console(s) have the power to go beyond just rendering and into physics (still seeing clipping, darn it).
 

onQ123

Member

[Image: Eight Days (PS3) screenshot]


Eight Days will be a PS4 launch title. Believe!
 

Krabardaf

Member

I wouldn't be so sure about this. Porting CPU code to GPU just to ensure both versions of a game are perfectly identical is not something I would expect from the average 3rd party developer.

Isn't GPU processing all about parallelism? And doesn't this make it even more so? Is it driven by developer input or by the hardware?

Yeah, GPUs are massively parallel, but sorting the tasks they crunch won't make them do more tasks at once. It will only prevent them from sitting idle. The sorting is handled by hardware; that's the purpose of the ACEs, as I understand them.
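To put that point in toy form (made-up utilization numbers, nothing to do with how finely real ACE scheduling actually works), backfilling idle cycles raises utilization without adding any execution units:

```python
# Toy illustration: async compute doesn't add execution units, it just fills
# cycles the graphics workload would otherwise leave the CUs idle for.
# (Made-up numbers; real ACE scheduling is far more fine-grained than this.)
graphics_busy = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]  # 1 = busy with graphics this "cycle"
pending_compute_jobs = 3                        # each assumed to take one "cycle"

filled = 0
for busy in graphics_busy:
    if not busy and filled < pending_compute_jobs:
        filled += 1  # slot a compute job into the idle cycle

util_before = sum(graphics_busy) / len(graphics_busy)
util_after = (sum(graphics_busy) + filled) / len(graphics_busy)
print(f"Utilization: {util_before:.0%} graphics only -> {util_after:.0%} with async compute")
```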
 

About half of that. I couldn't be more excited for the PS4. With Sony getting the best components at an affordable price, I believe they can be back on top again. Sign me up. I bet this GPU will have the ability to run TressFX.

Still hoping for BDXL drive and Bluetooth 4.0
 

thuway

Member
Well a few days ago I did get an email from a Gaffer who has posted a lot in these topics.

It's a PDF. I don't know how much of it is new or old. (It's a little bit different from the VGleaks document)

What I can say is the current leaked specs haven't changed.

So I'm curious who bgassassin's source is.

That's all I know.
I know what you have. Please don't leak it. Trying to act cool on the internet is not worth having someone lose their job. I can back you up though-

Specs for Durango haven't changed. The GPU is nearly 100% efficient though at 1.23 TF, so that's something most people didn't know.
 

yurinka

Member
If this is true, does it fit with the almost 2 teraflops included in Sony's pdf with the specs?
Or are these things for other stuff, like making things prettier with AA and other tricks?

8's every damn where!


8 Jaguar cores

8 GBs of GDDR5

8 ACEs, each capable of running 8 CLs
How about 1.8GHz for the CPU?
 
I know what you have. Please don't leak it. Trying to act cool on the internet is not worth having someone lose their job. I can back you up though-

Specs for Durango haven't changed. The GPU is nearly 100% efficient though at 1.23 TF, so that's something most people didn't know.

No boost for the CPU? Or are you pulling our leg again :/
 

Biggzy

Member
I know what you have. Please don't leak it. Trying to act cool on the internet is not worth having someone lose their job. I can back you up though-

Specs for Durango haven't changed. The GPU is nearly 100% efficient though at 1.23 TF, so that's something most people didn't know.

Hasn't that rumor come up before? Microsoft must have a very specific TDP for the whole console and have spent a load of R&D engineering their way to that target.
 

ekim

Member
I know what you have. Please don't leak it. Trying to act cool on the internet is not worth having someone lose their job. I can back you up though-

Specs for Durango haven't changed. The GPU is nearly 100% efficient though at 1.23 TF, so that's something most people didn't know.

The vgleaks specs are based on that (old) document - things "could" have changed.
 

thuway

Member
Hasn't that rumor come up before? Microsoft must have a very specific TDP for the whole console and have spent a load of R&D engineering their way to that target.

It's not a rumor now. I am confirming it. You need a link? There is no link. I am the link :). The point is, MS is going for 100% GPU efficiency.
 

thuway

Member
Gemüsepizza;48275352 said:
Well, I guess the PS4 GPU will probably also be near 100% efficient.

This I don't know. Until I see a solid piece of paper with that information, it's a rumor at best.
 

ekim

Member
You are missing out on the mega ton -_-. The GPU is near 100% efficient. The 360 GPU was 60% efficient. If Reiko wants to back me up, he should have a chart that compares them.

What you are aiming at is a standard feature of the GCN architecture. Happy to disclose this stuff via pm.
 

spwolf

Member
I know what you have. Please don't leak it. Trying to act cool on the internet is not worth having someone lose their job. I can back you up though-

Specs for Durango haven't changed. The GPU is nearly 100% efficient though at 1.23 TF, so that's something most people didn't know.


what does 100% efficient mean?
 

DasMarcos

Banned
There are so many PS4 hardware threads these days it boggles and hurts my mind. Kudos to anyone that knows what's going on though.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
You are missing out on the mega ton -_-. The GPU is near 100% efficient. The 360 GPU was 60% efficient. If Reiko wants to back me up, he should have a chart that compares them.

Sorry, that isn't something you can claim. It is a measurement that is per title and a function of time. Stalls will happen, they are a fact of life. MS can't magically make something 100% efficient, they can just design the hardware for increased efficiency and then hope it worked after devs get their hands on the final product.
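In other words, "efficiency" here is just achieved throughput divided by theoretical peak, measured per title and per frame. A minimal sketch with hypothetical numbers (the 1.23 TF peak is the rumored figure; the per-frame values are invented purely for illustration):

```python
# "Efficiency" = achieved throughput / theoretical peak, and it varies per
# title and per frame. Peak is the rumored 1.23 TF; per-frame numbers below
# are invented purely for illustration.
PEAK_TFLOPS = 1.23
frame_achieved_tflops = [0.70, 0.95, 1.10, 0.60, 1.00]

for i, achieved in enumerate(frame_achieved_tflops):
    print(f"frame {i}: {achieved / PEAK_TFLOPS:.0%} of peak")

average = sum(frame_achieved_tflops) / len(frame_achieved_tflops)
print(f"average over this (hypothetical) run: {average / PEAK_TFLOPS:.0%}")
```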
 

Kleegamefan

K. LEE GAIDEN
I find it staggering that people are happily talking about the PS4 possibly undergoing changes, but refuse to even consider the mere thought of the next xbox undergoing some changes as well because it's impossible, etc.

Well, that's true to a certain extent, but I hope you are not implying a change in RAM type, which would require a different memory controller and other major architectural changes much more extensive than a mere doubling of GDDR5, for example.
 

i-Lo

Member
Sorry, that isn't something you can claim. It is a measurement that is per title and a function of time. Stalls will happen, they are a fact of life. MS can't magically make something 100% efficient, they can just design the hardware for increased efficiency and then hope it worked after devs get their hands on the final product.

Yea, that's what I was thinking as well. Unless it's the (you know) special sauce/move engines.

Yeah, GPUs are massively parallel, but sorting the tasks they crunch won't make them do more tasks at once. It will only prevent them from sitting idle. The sorting is handled by hardware; that's the purpose of the ACEs, as I understand them.

Thanks for the explanation.
 

onQ123

Member
A lot of people assume that compute is only about making the GPU do CPU style stuff, but that is far from reality.
The compute capabilities of modern GPUs just remove the old vertex->fragment way of operating from the equation and enable the GPU to work on arbitrary buffers and output arbitrary data. The uses for this just on the graphics side are enormous: post-processing, lighting, building auxiliary data structures like sparse voxel octrees or even replacing the whole graphics pipeline with a non polygon oriented approach.
You still want the old pipeline to be there for the basic rendering workload in most games though.

Why did everyone overlook this post?


Tim Sweeney also talked about going back to software-based rendering using the computing power of GPGPUs in this old interview:

http://arstechnica.com/gaming/2008/09/gpu-sweeney-interview/



Back to voxels?

JS: So you guys are just going to use CUDA or whatever?

TS: It could be any general-purpose programming language. But I assume in that case we'll write an algorithm that takes as its input a scene in our own little representation defined by our own data structures, and spits out a framebuffer full of colors, and generate that using any sort of technique.


Remember the Skaarj?
You could write a software rasterizer that uses the traditional SGI rendering approach; you could write a software renderer that generates a scene using a tiled rendering approach. Take for instance Unreal Engine 1, which was one of the industry's last really great software renderers. Back in 1996, it was doing real-time colored lighting, volumetric fog, and filtered texture mapping, all in real-time on a 90MHz Pentium. The prospect now is that we can do that quality of rendering on a multi-teraflop computing device, and whether that device calls a CPU or GPU its ancestor is really quite irrelevant.

Once you look at rendering that way, you're just writing code to generate pixels. So you could use any possible rendering scheme; you could render triangles, you could use the REYES micropolygon tessellation scheme and render sub-pixel triangles with flat shading but really high-quality anti-aliasing — a lot of off-line movie renderers do that — or you could represent your scene as voxels and raycast through it to generate data. Remember all the old voxel-based games in the software rendering era?

JS: Yeah.

TS: You could do that with enormous fidelity for complete 3D voxel environments now in real-time. You might even be able to do that on an NVIDIA GPU in CUDA right now. But whether or not you can actually do that today, I have little doubt that you'll be able to do that on processors from multiple vendors in a few years.

And what else could you do? You could do a ray tracing-based renderer. You could do a volumetric primitive-based renderer; one idea is to divide your scene into a bunch of little tiny spherical primitives and then just render the spheres with anti-aliasing. So you can get really efficiently rendered scenes like forests and vegetation.

There are really countless possibilities. I think you'll see an explosion of new game types and looks and feels once rendering is freed from the old SGI rendering model.

Remember, the model we're using now with DirectX was defined 25 years ago by SGI with the first OpenGL API. It's a very, very restrictive model that assumes you're going to generate all the pixels in your scene by submitting a bunch of triangles in a fixed order to be blended into some frame buffer using some blending equation, and the fact that we have these ultra-complex programmable pixel shaders running on each pixel—that part of the pipeline has been made extensible, but it's still the same back-end rendering approach underneath.

JS: So to follow up with that, I hear that Larrabee will be more general-purpose than whatever NVIDIA has out at the time, because NVIDIA is still gonna have some hardware blocks that support whatever parts of the standard rasterization pipeline.

TS: That's kind of irrelevant, right? If you have a completely programmable GPU core, the fact that you have some fixed-function stuff off to the side doesn't hurt you. Even if you're not utilizing it at all in a 100 percent software-based renderer, there are economic arguments that say it might be worthwhile to have that hardware even if it goes unused during a lot of the game, for instance, if it consumes far less power when you're running old DirectX applications, or if it can perform better for legacy usage cases.

Because, one important thing in moving to future hardware models is that they can't afford to suddenly lose all the current benchmarks. So DirectX remains relevant even after the majority of games shipping are using 100 percent software-based rendering techniques, just because those benchmarks can't be ignored.

So I think you'll see some degree of fixed-function hardware in everybody's architectures for the foreseeable future, and it doesn't matter. And as long as the hardware is sufficiently programmable, we're fine with that.





[Image: PS4 GPGPU diagram]



And when you look at how the PS4 GPGPU is being designed, you have to wonder if software-based rendering is a big part of the plan.

Could next gen see one of the biggest improvements between console gens thanks to GPGPUs?

I keep trying to get people to talk about the possibilities of software-based rendering on the GPGPU, but it seems like no one cares.
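For what it's worth, here's a tiny CPU-side sketch of the kind of per-pixel loop Sweeney is describing (toy grid, resolution, and scene invented for illustration); a GPGPU "software renderer" would run the same per-pixel body as thousands of parallel threads instead of two nested Python loops:

```python
# Tiny CPU sketch of a "software renderer": march a ray per pixel through a
# voxel grid and shade the first hit. Grid, resolution, and the solid cube in
# the middle are all made up for illustration.
import numpy as np

W, H, GRID = 64, 48, 32
voxels = np.zeros((GRID, GRID, GRID), dtype=bool)
voxels[12:20, 12:20, 12:20] = True  # a solid cube in the middle of the grid

image = np.zeros((H, W))
for y in range(H):
    for x in range(W):
        gx = int(x / W * GRID)  # orthographic ray straight down +z, for simplicity
        gy = int(y / H * GRID)
        for z in range(GRID):
            if voxels[gx, gy, z]:
                image[y, x] = 1.0 - z / GRID  # shade by depth of first hit
                break

print("pixels that hit the cube:", int((image > 0).sum()))
```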
 

Nachtmaer

Member
We have an indirect clock spec for the GPU, reverse-calculating it from the 1.84 TFLOPS figure with 18 CUs.

As for the clock of the CPU, we've heard some rumblings of 1.6+ GHz. Anand also mentioned that it will be closer to 2 GHz than 1.6, or something along those lines.

I've read a few times that Kabini is supposed to be able to boost up to 2.4GHz. I don't think the Jaguar cores will run that high in these consoles but like you said, something like 2GHz shouldn't raise the TDP that much.
 
We have an indirect clock spec for the GPU, reverse-calculating it from the 1.84 TFLOPS figure with 18 CUs.

As for the clock of the CPU, we've heard some rumblings of 1.6+ GHz. Anand also mentioned that it will be closer to 2 GHz than 1.6, or something along those lines.

I guess some users would be on suicide watch if the PS4 CPU is 1.8GHz with 7.5 cores usable for games and the Xbox 3 CPU is 1.6GHz with 6 cores usable for games. I wonder why Sony did not list clock speeds for the CPU, because I guess they should already know what's possible and what's not.
 