
VGLeaks: Details on the evolution of multiple Orbis devkits

McHuj

Member
If it's true that we're getting 4GB of GDDR5, that probably means 4Gb GDDR5 chips. Even though they may double the density of the 2Gb parts, they may not (initially) be able to reach the same speeds as their smaller counterparts.

The 7850/7870 GPUs (compute-wise comparable to Liverpool) don't seem BW-starved at 153 GB/sec. If Sony had to drop to 144 GB/sec (4.5 Gb/sec IO speed) or 160 GB/sec (5.0 Gb/sec IO speed), I think the bandwidth would still be sufficient.
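For anyone who wants to sanity-check those numbers: GDDR5 bandwidth is just bus width times per-pin data rate. A quick sketch, assuming the rumoured 256-bit bus (the pin speeds are the ones quoted above, not confirmed figures):

# Peak GDDR5 bandwidth = (bus width in bits / 8) * per-pin rate in Gb/s.
# Assumes the rumoured 256-bit Orbis bus; pin speeds are illustrative.
BUS_WIDTH_BITS = 256

def bandwidth_gb_s(pin_speed_gbps):
    """Peak bandwidth in GB/s for a given per-pin data rate."""
    return BUS_WIDTH_BITS / 8 * pin_speed_gbps

for pin in (4.5, 5.0, 6.0):
    print(f"{pin} Gb/s per pin -> {bandwidth_gb_s(pin):.0f} GB/s")
# -> 144, 160, and 192 GB/s respectively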
 

LiquidMetal14

hide your water-based mammals
You ought not to say stuff like that. But whatever...

It's easy to get that sort of POV since closed environments have always had that advantage. I would say it's more that it can compete with current-gen PCs for some years to come thanks to that closed HW nature.

We should have learned this lesson long ago by seeing some of the current-gen offerings, which are severely constrained by 2004-05 era HW.

This is 2013. We have 6-core consumer-level CPUs from Intel, upcoming Titan GPUs from Nvidia, 64GB of 2000MHz+ DDR3, and fast SSDs. We ain't in Kansas anymore :p
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Really excited about this news. Thanks.

I don't think they will go higher; if anything they will downclock it for power and heat reasons and get less bandwidth. But they should know how much they need based on the GPU and the rest of the system. They don't have eDRAM to fall back on, so they need enough to keep the system humming along.


If it's true that we're getting 4GB of GDDR5, that probably means 4Gb GDDR5 chips. Even though they may double the density of the 2Gb parts, they may not (initially) be able to reach the same speeds as their smaller counterparts.

The 7850/7870 GPUs (compute-wise comparable to Liverpool) don't seem BW-starved at 153 GB/sec. If Sony had to drop to 144 GB/sec (4.5 Gb/sec IO speed) or 160 GB/sec (5.0 Gb/sec IO speed), I think the bandwidth would still be sufficient.

Yep. They are already downclocking the GPU from 850MHz to 800MHz according to the DF rumor, and they did a similar thing to the RSX before release. The 7970M uses 4.8Gbps for 153GB/s; scaling by 800/850 gives 144GB/s, just like your number.
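Worth noting that this scales memory bandwidth by the GPU core clock ratio, which is a heuristic rather than how the memory bus actually works, but the arithmetic checks out. A sketch (the 7970M figures are public specs, the 800/850 ratio is the DF rumour):

# 7970M: 4.8 Gb/s per pin on a 256-bit bus.
pin_gbps = 4.8
bus_bytes = 256 / 8                 # 32 bytes per transfer
bw = pin_gbps * bus_bytes           # 153.6 GB/s
# Scaling by the rumoured 850 -> 800 MHz GPU downclock (a heuristic;
# memory clock is independent of core clock in practice):
scaled = bw * 800 / 850             # ~144.6 GB/s
print(f"{bw:.1f} GB/s -> {scaled:.1f} GB/s")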
 

Elios83

Member
So that 192 GB/s bandwidth figure is from 2011? Sounds like one of those situations where they shoot for the stars and land on the moon. I'm preparing for the final figure to be less, maybe 140 GB/s to 160 GB/s. But that's just a realistic hunch.

Lowering the bandwidth doesn't make any sense, because it's the exact reason why they chose this particular memory type and memory system in the first place.
If they had to lower the bandwidth, GDDR5 wouldn't be there at all, or there would be a split memory system.
 

Nirolak

Mrgrgr

Well, it depends how it's built and how efficient it is.

Even great APIs on PC can have insanely massive overhead. If we go back to the guy who invented FXAA at Nvidia, he pegs it at 10-100x as much as consoles.

This is actually also the core argument people have been making for how Durango could possibly match up to Orbis if it's actually built to remove significantly more overhead than a console normally does.

Like, for example, the Xbox 360 only ran at 60% efficiency, which is still much better than PCs tend to do. If you cranked that up, you could gain a ton of ground at a much lower FLOP rate.

Obviously the part itself wouldn't be able to get anywhere even remotely near the performance a 680 theoretically could in a closed environment, but it's basically an argument of being able to actually use the power you have versus having way, way more power but not being able to access that much of it.
 
So not much new information, except that the RAM figure could drastically change for the worse after all. I think more than 192GB/s is not really feasible. And nothing on the rumoured custom parts, e.g. the CUs, ...
 

quest

Not Banned from OT
So that 192 GB/s bandwidth figure is from 2011? Sounds like one of those situations where they shoot for the stars and land on the moon. I'm preparing for the final figure to be less, maybe 140 GB/s to 160 GB/s. But that's just a realistic hunch.

I figure the speed will depend on how fast these new larger GDDR5 chips are. You could very well be right that it might be lower, since these larger chips might well run slower. Anyone have info on these new larger chips coming out next quarter? It should be easy to figure out speeds from that, since we know it is a 256-bit bus.
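The chip-count arithmetic behind that, for what it's worth (a sketch, assuming standard 32-bit-interface GDDR5 chips and no clamshell mode):

# Each GDDR5 chip exposes a 32-bit interface, so a 256-bit bus takes 8 chips.
bus_bits, chip_bits = 256, 32
chips = bus_bits // chip_bits            # 8 chips
total_gbit = 4 * 8                       # 4 GB = 32 Gb total
per_chip_gbit = total_gbit / chips       # 4 Gb per chip
print(f"{chips} chips x {per_chip_gbit:.0f} Gb each = 4 GB")
# With the older 2 Gb chips, the same bus would top out at 2 GB.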
 
Megaton right here, folks. Time to throw the GDDR5 Kool-Aid out the window. I'm always pessimistic about these figures. It's much easier to go down, and there is very little benefit to going up.

that's my gut feeling as well.

it could be a minor downgrade though, you know, 180 GB/s instead of 192 or something. in which case you aren't really losing anything relevant.
 

ghst

thanks for the laugh
Well, everything a game does takes memory.

AI, shadows, anti-aliasing, textures, level size, the number of enemies, the number of environmental props, the amount of animation, and more.

Really you can fill it up with a lot of things. If it's a cross generation title though, pretty much all of that filling has to be graphics related since you can't break your gameplay on the older consoles unless it's simply something like increasing player count in a Battlefield title.

i think in real world terms they'll see very little immediate benefit, as that extra 1.5GB of GDDR5 is in trade for a larger DDR3 memory pool.

i said something about this in the other thread, but if sony could afford to have less GDDR5 and a whole separate DDR3 pool, they would, and it would surely be a more efficient system.

they likely looked at the cost of implementing a split pool with DDR3 on its own bus, and saw that it was cost effective to just throw another couple of GBs of GDDR5 on there. it's a wonky solution, but as aegies keeps saying, consoles are children of compromise.
 

Ashes

Banned
Well, it depends how it's built and how efficient it is.

Even great APIs on PC can have insanely massive overhead. If we go back to the guy who invented FXAA at Nvidia, he pegs it at 10-100x as much as consoles.

This is actually also the core argument people have been making for how Durango could possibly match up to Orbis if it's actually built to remove significantly more overhead than a console normally does.

Like, for example, the Xbox 360 only ran at 60% efficiency, which is still much better than PCs tend to do. If you cranked that up, you could gain a ton of ground at a much lower FLOP rate.

I presume he meant for a specific part in the system. Not overall.
 
Megaton right here, folks. Time to throw the GDDR5 Kool-Aid out the window. I'm always pessimistic about these figures. It's much easier to go down, and there is very little benefit to going up.

I think you're jumping to conclusions. There's always a benefit to going up. I mean, these are teraflop+ GPUs. An HD 7970 is at 264GB/s, for example.
 

Nirolak

Mrgrgr
i think in real world terms they'll see very little immediate benefit, as that extra 1.5GB of GDDR5 is in trade for a larger DDR3 memory pool.

i said something about this in the other thread, but if sony could afford to have less GDDR5 and a whole separate DDR3 pool, they would, and it would surely be a more efficient system.

they likely looked at the cost of implementing a split pool with DDR3 on its own bus, and saw that it was cost effective to just throw another couple of GBs of GDDR5 on there. it's a wonky solution, but as aegies keeps saying, consoles are children of compromise.

I think he meant that solely in the context of designing a game for only 2 GB of GDDR5 (with no extra RAM) and then suddenly being told you have 3.5 GB of GDDR5, and what developers who had been designing around the smaller pool could now add to their game.

There's definitely an argument for a split pool, but I think he's asking about a straight upgrade scenario.
 

btkadams

Member
Well, that is the hope, but it didn't pan out exactly that way this generation for some titles. Skyrim, for instance, was a complete mess because they evidently planned for the 360's memory limitations rather than the PS3's.

I'm really just worried about another generation of iffy ports.

were there really that many iffy ports? aside from bethesda, i had no problems with 3rd party ps3 games. there obviously were differences, but it's not like many games were anywhere near unplayable on one system.
 
If it's true that we're getting 4GB of GDDR5, that probably means 4Gb GDDR5 chips. Even though they may double the density of the 2Gb parts, they may not (initially) be able to reach the same speeds as their smaller counterparts.

The 7850/7870 GPUs (compute-wise comparable to Liverpool) don't seem BW-starved at 153 GB/sec. If Sony had to drop to 144 GB/sec (4.5 Gb/sec IO speed) or 160 GB/sec (5.0 Gb/sec IO speed), I think the bandwidth would still be sufficient.

I was thinking the same; the 660 Ti, which is the Nvidia counterpart to the HD 7870, only has 144GB/s, for example.

But then I remembered the CPU has to share that bandwidth, so now I'm not sure.
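Rough illustration of that worry, since the CPU and GPU would pull from the same unified pool (the CPU share below is a made-up number for the sketch, not a leaked figure):

# Hypothetical split of a unified 144 GB/s pool between CPU and GPU.
total_bw = 144.0      # GB/s, the lower rumoured figure
cpu_share = 20.0      # GB/s, an illustrative guess at CPU traffic
gpu_left = total_bw - cpu_share
print(f"GPU effectively sees ~{gpu_left:.0f} of {total_bw:.0f} GB/s")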
 

thuway

Member
I think you're jumping to conclusions. There's always a benefit to going up. I mean, these are teraflop+ GPUs. An HD 7970 is at 264GB/s, for example.

-_- It is not going up. There is no benefit to going up. Sony is not looking to use the fastest GDDR5 from what it sounds like. It will be downgraded by a few GB/s, but still enough to get you 1080p 60 FPS. You aren't playing games in 4K, son.
 
Close enough.

That would indicate that they either still use GDDR5 but with a lower clock (a different bus would mean a severe downgrade in bandwidth), or their stacking, wide-IO, etc. implementation is ready for DDR4. I just wonder: if everything really did work out with DDR4, what would be the problem in hitting 192GB/s...

So if something changed, I vote it's a massive hit on bandwidth...
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Wait, so are people assuming the Orbis RAM won't be hitting 192GB/s?

Disappointing if true.

eh, it is just a number. What is important is that the GPU is well fed and not stalling due to bandwidth. Even at 150GB/s it is 3x the best PC DDR bandwidth.
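That multiplier roughly checks out against 2013-era desktop DDR3 (a sketch using public module speeds; "best" depends on whether you count quad-channel boards):

# Peak DDR3 bandwidth = channels * 8 bytes per transfer * MT/s.
def ddr3_bw(channels, mt_per_s):
    return channels * 8 * mt_per_s / 1000    # GB/s

dual_2133 = ddr3_bw(2, 2133)    # ~34 GB/s, high-end dual channel
quad_1866 = ddr3_bw(4, 1866)    # ~60 GB/s, quad channel (X79)
print(150 / dual_2133, 150 / quad_1866)   # ~4.4x and ~2.5x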
 
Downgraded by a few GB/s, nothing substantial. It will allow for good AA, 1080p 60 FPS. 192 GB/s was quite high for the card to begin with.

Ah I see.
By the way thuway, have you heard anything about that special "compute unit" that Orbis is supposed to have?
 

Elios83

Member
Downgraded by a few GB/s, nothing substantial. It will allow for good AA, 1080p 60 FPS. 192 GB/s was quite high for the card to begin with.

You're doing it all by yourself ^^''
You were absolutely sure of 192 GB/s and a 4GB unified GDDR5 pool just the other day.
 
-_- It is not going up. There is no benefit to going up. Sony is not looking to use the fastest GDDR5 from what it sounds like. It will be downgraded by a few GB/s, but still enough to get you 1080p 60 FPS. You aren't playing games in 4K, son.

The more you say stuff like this, the more you show me how clueless you are about computer graphics.
 

Nirolak

Mrgrgr
I presume he meant for a specific part in the system. Not overall.

Yes, for draw calls specifically. Inherently a console GPU doesn't run 100 times more efficiently than a PC one. :p

You might be able to do something like 2-3x though depending on how well set up your system is, 2x being the more realistic number for what traditionally happens.

That's why you usually don't see PCs completely blowing consoles out of the water until they're closer to 4x (and then eventually 10x and 20x a few years later) what consoles do.
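Putting the efficiency argument into numbers (a sketch; the utilisation factors are the rough ones from this thread, not measurements, and the 1.8 TFLOPs is the rumoured Orbis figure):

# Effective throughput = theoretical FLOPs * utilisation.
console_tf, console_eff = 1.8, 0.60   # rumoured Orbis-class GPU
pc_eff = 0.30                         # i.e. the console runs ~2x efficient
breakeven = console_tf * console_eff / pc_eff
print(f"A PC part needs ~{breakeven:.1f} raw TFLOPs just to break even")
# ~3.6 TFLOPs, which is why the gap only looks huge once PCs are
# several times past the console's raw number.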
 
You're doing it all by yourself ^^''
You were absolutely sure of 192 GB/s just the other day.

it's thuway, he isn't really an "insider", he just likes to pretend his pm inbox is some kind of goldmine. a few weeks ago he was preaching that the ps4 would have a quad core cpu, that also didn't turn out to be true.
 

Elios83

Member
it's thuway, he isn't really an "insider", he just likes to pretend his inbox is some kind of goldmine. a few weeks ago he was preaching that the ps4 would have a quad core cpu, that also didn't turn out to be true.

Indeed, thuway doesn't know anything and changes his tune every day.
 

thuway

Member
That would indicate that they either still use GDDR5 but with a lower clock (a different bus would mean a severe downgrade in bandwidth), or their stacking, wide-IO, etc. implementation is ready for DDR4. I just wonder: if everything really did work out with DDR4, what would be the problem in hitting 192GB/s...

So if something changed, I vote it's a massive hit on bandwidth...

I'll take a ban bet with you, it won't be substantial :).
 
I hope Sony is smart enough to have at least 6GB of RAM in there. I know GDDR5 is faster and all, but 3.5GB of available RAM against possibly 7GB of RAM for Durango is a pretty big difference.
 

LiquidMetal14

hide your water-based mammals
There's more to all this than it seems. You guys are going to love seeing the tear downs of this new Playstation as the parts are always fascinating in their custom designs and implementations.
 

Sid

Member
Will that GPU last 5 years at least? Can we expect stable 1080p/30 with no jaggies until the end of its time?
That depends on how much the devs will push it and how much publishers are willing to invest in next-gen games.
 

Ashes

Banned
Yes, for draw calls specifically. Inherently a console GPU doesn't run 100 times more efficiently than a PC one. :p

You might be able to do something like 2-3x though depending on how well set up your system is, 2x being the more realistic number for what traditionally happens.

That's why you usually don't see PCs completely blowing consoles out of the water until they're closer to 4x (and then eventually 10x and 20x a few years later) what consoles do.



Or so a certain PC-centric developer reckons, anyway... ;)

And he knows a metric ton more than most out there.
 

nib95

Banned
-_- It is not going up. There is no benefit to going up. Sony is not looking to use the fastest GDDR5 from what it sounds like. It will be downgraded by a few GB/s, but still enough to get you 1080p 60 FPS. You aren't playing games in 4K, son.

I wouldn't be so sure. What about 3D? And who's to say there won't be 4K options in a few select games?
 

Interfectum

Member
So what's the general consensus so far on PS4 vs Xbox 3 based on all these leaks/rumors? Are they going to be close enough to make next gen just as annoying as the beginning of this gen?
 
What is Thuway freaking out about? New info?

Putting together various hints, even from dev posts from weeks ago that I keep in the memory banks, it points to a slight downgrade to Orbis memory bandwidth.

it would be funny if it dropped to 169 GB/s, so then we could talk about how much the slower ram would hurt the ps4 vs durango's 170 GB/s aggregate (lol)
 

DieH@rd

Banned
With the latest drivers, the Radeon 7850 is ~40% slower than the GTX680 in Battlefield 3 [somebody investigate, latest drivers @ http://www.techpowerup.com/reviews/VTX3D/Radeon_HD_7870_XT_Black/8.html]. The PS4 GPU has two more CUs and a slightly lower clock, for more raw FLOPs overall, so it should be even closer.

IMO, in a closed console environment without driver/OS overheads, Orbis will easily reach GTX680 performance levels, and greatly surpass it a few years later when 1st-party developers get accustomed to it.
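The CU comparison in numbers (a sketch; 18 CUs @ 800 MHz is the rumoured Orbis config, the 7850 figures are public specs):

# GCN throughput: FLOPs = CUs * 64 lanes * 2 ops per FMA * clock.
def tflops(cus, mhz):
    return cus * 64 * 2 * mhz / 1e6

hd7850 = tflops(16, 860)   # ~1.76 TFLOPs
orbis  = tflops(18, 800)   # ~1.84 TFLOPs
print(f"7850: {hd7850:.2f} TF, rumoured Orbis: {orbis:.2f} TF")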
 

ghst

thanks for the laugh
Yes, for draw calls specifically. Inherently a console GPU doesn't run 100 times more efficiently than a PC one. :p

You might be able to do something like 2-3x though depending on how well set up your system is, 2x being the more realistic number for what traditionally happens.

That's why you usually don't see PCs completely blowing consoles out of the water until they're closer to 4x (and then eventually 10x and 20x a few years later) what consoles do.

i wonder if the simplified pc-like architecture will lead to consoles being able to make use of this theoretical advantage earlier on. for all the ability it showed to punch above its weight, the performance of early 360 titles was near enough in line with a PC of a similar hardware spec, before g80 opened up a performance gap that consoles could never really recover from.
 