
VGleaks: Orbis Unveiled! [Updated]

Correct. It's a balancing act. Instead of wasting GPU resources trying to make the machine juggle expensive compute tasks, a portion of the GPU is carved off to make these tasks relatively cheap. It is a better solution. Hence the 1.4 TF left over will be very efficient at what it does.

A VLIW architecture already depends on the compiler knowing what's best and grouping tasks accordingly. I would rather have seen all 18 CUs available for GPGPU, as I fear the Jaguar cores might need all the help they can get.
 

deanos

Banned
People seem to be freaking out about the 14+4 compute units. Unless they've gimped the segregated 4, or artificially prevent them from rendering, those extra 4 should be equally adept at rendering graphics as the other 14. They're just separated in order to make it easier for a programmer to load balance GPGPU work and general graphics work. They might also have their own local cache in order to improve GPGPU performance.
this.
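To make the load-balancing point concrete, here's a toy sketch in plain C. Every name in it is hypothetical (no real console SDK is public), but it captures the idea: compute jobs get their own CU pool so they never contend with the render queue.

```c
/* Toy illustration of the rumored 14+4 split -- every name here is
 * hypothetical, not a real console SDK. The idea: keeping two pools
 * means long-running compute jobs never stall the render queue. */
#include <stdio.h>

enum pool { GFX_POOL = 14, COMPUTE_POOL = 4 };  /* CUs per pool (per the leak) */

struct job { const char *name; enum pool pool; };

int main(void) {
    struct job frame[] = {
        { "g-buffer pass",    GFX_POOL },
        { "lighting pass",    GFX_POOL },
        { "cloth simulation", COMPUTE_POOL },  /* GPGPU work, off the render path */
        { "particle physics", COMPUTE_POOL },
    };
    for (size_t i = 0; i < sizeof frame / sizeof *frame; i++)
        printf("%-16s -> %2d-CU pool\n", frame[i].name, frame[i].pool);
    return 0;
}
```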
 
But it also says they can be used for rendering... Can they be used for rendering in the same capacity as the dedicated CUs?
Clearly not, if you'd read it. It says they provide a minor boost in rendering. Why they only provide a minor boost is open to speculation at the moment.
 

Nirolak

Mrgrgr
Correct. It's a balancing act. Instead of wasting GPU resources trying to make the machine juggle expensive compute tasks, a portion of the GPU is carved off to make these tasks relatively cheap. It is a better solution. Hence the 1.4 TF left over will be very efficient at what it does.

Yeah given the focus a lot of engines have on compute tasks, I think it's a smart decision.

But it also says they can be used for rendering... Can they be used for rendering in the same capacity as the dedicated CUs?

Well, if they're not part of the graphics rendering pipeline they would still work, but would be less efficient at it.

You would want to use them for compute tasks unless you really can't come up with four compute tasks relevant to your game.
 

Eideka

Banned
I agree. Just seeing the CryEngine 3 being used in a new setting that's not "Fun in the jungle redux" interests me greatly.

Same here. I remember the TV show "Rome", and I would be thrilled by a game set in ancient Rome. There was a game by Capcom called Rome (I think) but it sucked ass.
 

Forsete

Gold Member
If the CUs can be used to help the CPU, what is there to help the Durango CPU? I thought it had an even more limited CPU due to the reserved cores?
 

gofreak

GAF's Bob Woodward
In terms of PS4, the picture has definitely gone from one of very simple 'cleanliness' to something more complicated.

Thinking of the 18 CUs as one entirely unified block, it was easy to see 'freebie wins' over Durango with relatively straightforward approaches.

But if 4 are taken out of the hardware-balanced regime, devs won't be able to just twiddle a few lines of code to get better AF or whatever out of purely having more grunt. It'll take more thinking, and it adds programmer burden (for the benefit, presumably, of improved performance and control if you do want to go out of your way and use your GPU for things other than graphics).

Getting high utilisation out of PS4 will require less naive approaches now too. It makes me go 'ugh' wrt simple development, while raising an interesting new prospect for devs who want to get every last ounce out of the machine... they'll now have to think about fairly substantial GPGPU work, and that'll be interesting to see.

I still think Durango's approach will be less forgiving of naive approaches, though. If we consider the GPUs (compute aside) and the CPUs (more obviously) to be basically a wash now, it still has a more complex bandwidth and memory setup.
 

Gorillaz

Member
So late to this thread, but how does GAF feel about it? Does it seem possible to launch this at $400, at least?

Are they seriously going to include BC? Because holy shit if they can pull that off and still sell it for cheap.
 
So late to this thread, but how does GAF feel about it? Does it seem possible to launch this at $400, at least?

Are they seriously going to include BC? Because holy shit if they can pull that off and still sell it for cheap.
Lol. Both PS4 and Xbox 3 will easily come in at under $400, no doubt at all based on these specs.
 
People seem to be freaking out about the 14+4 compute units. Unless they've gimped the segregated 4, or artificially prevent them from rendering, those extra 4 should be equally adept at rendering graphics as the other 14. They're just separated in order to make it easier for a programmer to load balance GPGPU work and general graphics work. They might also have their own local cache in order to improve GPGPU performance. Basically gofreak summed it up:

Exactly. If a game was absolutely optimized to perfection (not even remotely possible), you would only need to render 1 polygon per pixel - a little over two million polygons per frame at 1920x1080. Or 125 million polygons per second (at 60fps). Theoretically achievable on current HD consoles. Both upcoming consoles likely won't suffer from a lack of polygon pushing ability. Especially since smart and efficient use of tessellation for LOD can greatly aid in putting more advanced scenery on screen.

In theory, but the GCN tessellation unit is not that flexible, at least on PC. Here's hoping that in a closed-box environment they can actually use it to efficiently crank the poly count up.
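For anyone who wants to check, the napkin math in that quote works out; a minimal sketch:

```c
/* Verifying the poly-per-pixel napkin math quoted above. */
#include <stdio.h>

int main(void) {
    long per_frame  = 1920L * 1080L;    /* 1 polygon per pixel = 2,073,600 */
    long per_second = per_frame * 60;   /* at 60 fps */
    printf("polygons/frame:  %ld\n", per_frame);   /* ~2.07 million */
    printf("polygons/second: %ld\n", per_second);  /* 124,416,000, i.e. ~125 million */
    return 0;
}
```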
 

Lord Error

Insane For Sony
Nope. They will use the 3.5GB pool from Orbis and the lesser bandwidth from Durango. Both will have 4xMSAA, Orbis using the extra RAM bandwidth and Durango using the ESRAM.

Seriously, third parties are going lowest common denominator. Make no mistake.
This could play into Sony's favor though, as it takes no effort of any kind to 'utilize' the faster GPU and RAM (meaning you get a better framerate in the game automatically), while it takes conscious effort to use extra RAM.
 

onQ123

Member
Orbis sounds worse and worse. The GPU is not 1.8 TFLOPS, but 1.4 vs Durango's 1.2. Like I predicted, MS will come out on top with a more efficient machine and more RAM.

You say this as if Xbox 3 will never use its GPGPU for computing, or PS4 will always use its GPGPU for computing?


It's still 1.8 TFLOPS vs 1.2 TFLOPS; it's just that 400 of the 1840 GFLOPS will be more balanced for computing.


They both have the same CPU, so chances are that any game that needs the extra computing power of the PS4's GPGPU will also need it on Xbox 3. So where will that extra computing power come from?
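For reference, those GFLOPS figures fall straight out of the standard GCN arithmetic, assuming the usual 64 ALU lanes per CU, 2 FLOPs per lane per clock (FMA), and the rumored 800 MHz clock:

```c
/* Where the 1840/1400/400 GFLOPS figures come from. The 800 MHz clock is
 * from the leak, not an official number. */
#include <stdio.h>

int main(void) {
    double gflops_per_cu = 64 * 2 * 0.8;                  /* 102.4 GFLOPS per CU */
    printf("18 CUs: %.1f GFLOPS\n", 18 * gflops_per_cu);  /* 1843.2 (~1.8 TF)  */
    printf("14 CUs: %.1f GFLOPS\n", 14 * gflops_per_cu);  /* 1433.6 (~1.4 TF)  */
    printf(" 4 CUs: %.1f GFLOPS\n",  4 * gflops_per_cu);  /*  409.6 (~400 GF)  */
    return 0;
}
```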
 

thuway

Member
Yeah given the focus a lot of engines have on compute tasks, I think it's a smart decision.



Well, if they're not part of the graphics rendering pipeline they would still work, but would be less efficient at it.

This needs to be reiterated one more time. Compute tasks are expensive, and a GPU can be crippled by them in terms of performance.
 

Nirolak

Mrgrgr
Finally someone brings it up. Been postulating it in my mind since reading the article. Problem is you're still 2 "SPU"s short...

The odd number to me suggests this decision was made solely for the compute focus of many engines.

Gigantic portions of Unreal Engine 4 run on compute shaders, for example, and any of those that aren't part of the rendering pipeline could run on these more efficiently.
 

tipoo

Banned
What are the chances of a game using 7GB RAM in the foreseeable future? Genuine question.

I think 3.5GB will be adequate for a very long time. Games on Windows with fully decked-out settings don't even use that much, unlike when the PS3 launched with 256MB. It will fare better than the PS3, at least.
 
Sounds like a nice balance of power without being excessively expensive. Now all I need is an open world Uncharted from ND

I think we're more likely to see something like Arkham City than a straight up GTA-style open world game. AC used an open world hub, but still had sections where you'd go into a more contained zone where they could have set pieces.
 

gofreak

GAF's Bob Woodward
They both have the same CPU, so chances are that any game that needs the extra computing power of the PS4's GPGPU will also need it on Xbox 3. So where will that extra computing power come from?

Someone earlier in the thread seemed to suggest Durango also has 'a compute module' with flexible/programmable computing grunt, but I'm not sure where that came from.
 

androvsky

Member
Theory: 4 CUs modified for BC?

I don't know AMD's GCN architecture well enough to say for sure. The raw FLOPS are there, roughly double a Cell in half the cores from what it sounds like. But at the low clock speed the CUs run at, the only way it can get so many FLOPS is through incredibly wide and complex instructions, with even more aggressive SIMD than the Cell SPUs. I'd be really surprised if Sony and AMD manage to pull even close to universal PS3 BC out of that, no matter what method they use (software, microcode, something else...).
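The "double a Cell in half the cores" part does check out, if you take the usual published Cell SPU figures and the rumored 800 MHz GCN clock:

```c
/* Peak single-precision FLOPS: Cell's 8 SPUs vs 4 GCN CUs. Cell figures
 * are the commonly published ones; the 800 MHz GCN clock is rumor. */
#include <stdio.h>

int main(void) {
    double cell = 8 * 4 * 2 * 3.2;   /* 8 SPUs x 4-wide SIMD x FMA x 3.2 GHz = 204.8 */
    double gcn  = 4 * 64 * 2 * 0.8;  /* 4 CUs x 64 lanes x FMA x 0.8 GHz    = 409.6 */
    printf("Cell SPUs: %.1f GFLOPS\n", cell);
    printf("4 GCN CUs: %.1f GFLOPS\n", gcn);  /* double the FLOPS, half the "cores" */
    return 0;
}
```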

If these rumors are true, it'll be the PS2 on PS3 situation again. :/
 

whitehawk

Banned
HYPE

 

thuway

Member
Orbis sounds worse and worse. The GPU is not 1.8 TFLOPS, but 1.4 vs Durango's 1.2. Like I predicted, MS will come out on top with a more efficient machine and more RAM.

Hold on a minute there, console warrior. If you want things like destructible environments, heavy physics, better animation blending, less clipping, and better set pieces, then you'd better be good at compute.

This was a conscious decision, and it is GOOD news. If Durango doesn't have this, you can damn well bet that a significant portion of the GPU will be crippled. GPGPU is the wave of the future, don't be daft.
 
Wait, so they are indeed weaker than people are saying? Like $350 type of weak?
350 dollars is a lot of money. The 7870 GPU with 2GB of GDDR5 is $250 retail, and it is twice as fast as what's in these boxes. MS and Sony could easily get something like that for half the price once you factor out the retail cut, the NVIDIA/AMD cut, the AIB partner cut, the memory, and the other graphics card components you don't need in a console. That would leave $225 out of a $350 BOM for the relatively slow CPU, 4GB of GDDR5 or 8GB of DDR3, a bog-standard Blu-ray drive, the motherboard, and other miscellaneous items. $300 is more than achievable once you consider the weak GPUs in these machines.
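Running that post's own (entirely speculative) figures through, just to show the arithmetic:

```c
/* Back-of-the-envelope BOM math using the speculative figures from the
 * post above -- none of these are sourced component prices. */
#include <stdio.h>

int main(void) {
    int bom        = 350;             /* hypothetical total BOM, dollars */
    int gpu_retail = 250;             /* 7870-class card at retail */
    int gpu_cost   = gpu_retail / 2;  /* assume half retail once middlemen are cut out */
    printf("GPU: $%d, left for everything else: $%d\n",
           gpu_cost, bom - gpu_cost);  /* $125 / $225 */
    return 0;
}
```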
 
I think we're more likely to see something like Arkham City than a straight up GTA-style open world game. AC used an open world hub, but still had sections where you'd go into a more contained zone where they could have set pieces.

Exactly what I meant: a fully explorable world with areas containing set pieces. It became kind of obvious they were headed in that direction with some of the sandbox scenarios in Uncharted 3.
 
But it also says they can be used for rendering... Can they be used for rendering in the same capacity as the dedicated CUs?
I'm gonna go with: no one knows, including the VGleaks guys. I don't feel they understand what they have.

Anyway, at worst it will probably be like farming rendering tasks out to the SPEs, which can provide a considerable boost.

It should be noted that both these consoles have relatively weak CPUs, so there's no doubt that computationally expensive tasks like next-gen physics will have to be assisted by the GPU ALUs.
 

iceatcs

Junior Member
Sounds like a nice balance of power without being excessively expensive. Now all I need is an open world Uncharted from ND

How about no. Not every game needs an open world.

I wouldn't mind Uncharted-level graphics (mainly the animation) in a non-Uncharted open world, though.
 

omonimo

Banned
I think 3.5GB will be adequate for a very long time. Games on Windows with fully decked-out settings don't even use that much, unlike when the PS3 launched with 256MB. It will fare better than the PS3, at least.

I still don't understand why 7GB of DDR3 is considered better than 3.5GB of GDDR5. It's really weird, even with the ESRAM.
 