
Rumor: PS5 devkits ~ 13 TFLOPS


Aceofspades

Banned
Does Sony or MS have to opt for APU designs to maintain BC? Can't they use separate AMD CPU and GPU combos?
 

LordOfChaos

Member
Does Sony or MS have to opt for APU designs to maintain BC? Can't they use separate AMD CPU and GPU combos?

Unclear... Remember when Microsoft made the 360 Slim as a single chip, they actually had to artificially increase the latency between the two to ensure 100% compatibility? Even with a high-bandwidth bridge between the two, it's possible the latency would cause some edge-case issues.

Though this also depends on how low-level vs. portable GNMX and DX for Xbox are.
 
Last edited:

SonGoku

Member
Unclear... Remember when Microsoft made the 360 Slim as a single chip, they actually had to artificially increase the latency between the two to ensure 100% compatibility? Even with a high-bandwidth bridge between the two, it's possible the latency would cause some edge-case issues.

Though this also depends on how low-level vs. portable GNMX and DX for Xbox are.
Wasn't there an inherent performance advantage to having the CPU and GPU on the same die?
 

LordOfChaos

Member
Wasn't there an inherent performance advantage to having the CPU and GPU on the same die?

Yup, that's why the 360 Slim had to artificially increase the latency between the CPU and GPU when they moved to a single chip, to keep it the same as two distinct chips. So the same problem in the reverse direction is possible: if they didn't do an APU, there might be cases where the short round-trip latency not being there anymore, even with all the bandwidth in the world between the two, could cause issues. Again, I have no insight into how portable GNMX was designed to be, though.
 

LordOfChaos

Member
But the 360 Slim wasn't even on the same die afaik.

GNMX is the potato higher-level API though; GNM is where it's at for most AAA, I would think.


They called it the CGPU, jogging my memory now... If the CPU was still a distinct die, maybe it was still moving to an MCM that they had to introduce more latency for.

"they introduced a module to present latency between CPU and GPU imitating as if there is a FSB connection. "


[Image: CGPU.jpg]


You're right, GNM rather.
 
Last edited:

SonGoku

Member
They called it the CGPU before APU was the hawt term
Damn, I just googled it, I'm having the biggest case of Mandela effect.
I remember the forum discussions and articles saying it combined everything into a single package, but the CPU and GPU retained individual dies.
 

LordOfChaos

Member
Damn, I just googled it, I'm having the biggest case of Mandela effect.
I remember the forum discussions and articles saying it combined everything into a single package, but the CPU and GPU retained individual dies.

Yeah, looks like it was, but they did still have to emulate the higher latency. Three dies became two on an MCM.



It would have been easier and more natural to just connect the CPU and GPU with a high-bandwidth, low-latency internal connection, but that would have made the new SoC faster in some respects than the older systems, and that's not allowed. So they had to introduce this separate module onto the chip that could actually add latency between the CPU and GPU blocks, and generally behave like an off-die FSB.
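As a purely conceptual sketch of what such a latency-matching module does (everything below is made up for illustration, not how Microsoft actually implemented it): time each CPU-GPU transaction over the fast on-package link, then stall until it takes as long as the old off-die FSB round trip did.

import time

FSB_ROUND_TRIP_S = 200e-9  # hypothetical latency of the old off-die FSB

def latency_matched(transfer):
    # Run the fast on-package transfer, then deliberately pad the time
    # so software sees the same latency it measured on the two-chip board.
    start = time.perf_counter()
    result = transfer()
    pad = FSB_ROUND_TRIP_S - (time.perf_counter() - start)
    if pad > 0:
        time.sleep(pad)
    return result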


 
Last edited:

Ar¢tos

Member
Does Sony or MS have to opt for APU designs to maintain BC? Can't they use separate AMD CPU and GPU combos?
There is no mandatory need for pure hardware BC. We are talking x86 to x86, so the hardware can easily be virtualized, whether by APU or CPU+GPU (assuming some sort of shared memory pool for the latter, otherwise things get more complex).
 

Aceofspades

Banned
Unclear... Remember when Microsoft made the 360 Slim as a single chip, they actually had to artificially increase the latency between the two to ensure 100% compatibility?

Sony also combined Cell and RSX into one chip in the latest PS3 models to cut costs and improve thermals. It's one hell of an engineering achievement given the complexity of Cell and the PS3's memory system.
 
Last edited:

bitbydeath

Member

LordOfChaos

Member
Playing Days Gone, I welcome this.



The amount they are pushing "making load screens a thing of the past" is making the hybrid solution seem less likely, imo. If they had a rust drive plus a flash storage part that just pulled your most-used games in, loading a game from the hard drive part would be even more painfully slow than it already is, in a generation designed for flash from the ground up. But then I still have to wonder about external drive support if that was the case.

Then again, we spent most of the 8th gen's life without external drive support on the PS4.
 
Last edited:
Playing Days Gone, I welcome this.


Damn! They are really hyping this SSD thing hard. Maybe the rumors about Sony getting a *very* good HBM deal were actually referring to NAND storage? Zen 2 + PCIe 4.0 will bring insane speeds... if Sony somehow managed to get SSD goodness for a great price... :eek: :eek: :eek: :eek:
 

bitbydeath

Member
The amount they are pushing "making load screens a thing of the past" is making the hybrid solution seem less likely, imo. If they had a rust drive plus a flash storage part that just pulled your most-used games in, loading a game from the hard drive part would be even more painfully slow than it already is, in a generation designed for flash from the ground up. But then I still have to wonder about external drive support if that was the case.

Then again, we spent most of the 8th gen's life without external drive support on the PS4.

I'd imagine you could still have an external HDD, but it couldn't run or stream games from it, as that'd impact the playability of the game itself.

It’d be limited to housing games only.
Here’s hoping the HDD they put in is 2TB.
 
Last edited:
I'm curious how Sony is planning on "making loading screens a thing of the past"?

I know they are talking about some very fast SSD and that will definitely help, but faster loading and no loading are different, and there's also a point where faster SSDs don't seem to help anymore (for game loading speeds).

On PC, moving from HDD to SSD makes a huge difference in loading times. But for SOME REASON moving from SSD to NVMe makes almost no difference at all. You basically save ONE SECOND despite the NVMe being 5x faster than the fastest SATA-based SSD.

There's another bottleneck in the system somewhere, but I don't know where it is. Maybe the move from PCIe 3.0 to 4.0 will improve this situation?
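My guess is the bottleneck is the CPU side of loading (decompression, parsing, asset setup), not the drive. A toy model, with every number invented purely for illustration: if reads and CPU work are pipelined, load time is set by the slower stage.

def load_time_s(data_mb, read_mbs, cpu_mbs):
    # Pipelined loading: total time is governed by the slower stage.
    return max(data_mb / read_mbs, data_mb / cpu_mbs)

level = 4000  # MB of assets for one level (made-up figure)
print(load_time_s(level, 100, 300))   # HDD:      40.0 s, I/O-bound
print(load_time_s(level, 500, 300))   # SATA SSD: ~13.3 s, now CPU-bound
print(load_time_s(level, 3500, 300))  # NVMe:     ~13.3 s, no faster

Once the drive outruns the CPU, a faster drive changes nothing, which would explain the one-second observation. PCIe 4.0 alone wouldn't move that ceiling; only faster CPU-side processing would.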
 

bitbydeath

Member
I'm curious how Sony is planning on "making loading screens a thing of the past"?

I know they are talking about some very fast SSD and that will definitely help, but faster loading and no loading are different, and there's also a point where faster SSDs don't seem to help anymore (for game loading speeds).

On PC, moving from HDD to SSD makes a huge difference in loading times. But for SOME REASON moving from SSD to NVMe makes almost no difference at all. You basically save ONE SECOND despite the NVMe being 5x faster than the fastest SATA-based SSD.

There's another bottleneck in the system somewhere, but I don't know where it is. Maybe the move from PCIe 3.0 to 4.0 will improve this situation?

Another interesting part is the example Cerny gave of being able to move faster in games because the world can build out quicker, which to me hints at frame rates also getting a decent boost.
 
Another interesting part is the example Cerny gave of being able to move faster in games because the world can build out quicker, which to me hints at frame rates also getting a decent boost.

Maybe. I thought he was talking about asset-streaming speed in open worlds. That can be a problem on an old HDD. If anything hints at faster framerates, it's the HUGE CPU upgrade. Sure, not all games are CPU-bound, but if there is such a thing as an average framerate for this gen, I think it's definitely going to be higher next gen.
 

bitbydeath

Member
Maybe. I thought he was talking about asset-streaming speed in open worlds. That can be a problem on an old HDD. If anything hints at faster framerates, it's the HUGE CPU upgrade. Sure, not all games are CPU-bound, but if there is such a thing as an average framerate for this gen, I think it's definitely going to be higher next gen.

Wouldn't frame rate be a potential bottleneck to your movement?

Everyone always mentions needing 60fps for twitch shooters, and both examples would require fast (smooth) movement, putting them in the same ballpark.
 
Last edited:

Imtjnotu

Member
I'm curious how Sony is planning on "making loading screens a thing of the past"?

I know they are talking about some very fast SSD and that will definitely help, but faster loading and no loading are different, and there's also a point where faster SSDs don't seem to help anymore (for game loading speeds).

On PC, moving from HDD to SSD makes a huge difference in loading times. But for SOME REASON moving from SSD to NVMe makes almost no difference at all. You basically save ONE SECOND despite the NVMe being 5x faster than the fastest SATA-based SSD.

There's another bottleneck in the system somewhere, but I don't know where it is. Maybe the move from PCIe 3.0 to 4.0 will improve this situation?
Well, God of War didn't have a single loading screen during the game. And it was linear, but to have something remotely similar in an open-world game would be amazing.
 

bitbydeath

Member
Well, God of War didn't have a single loading screen during the game. And it was linear, but to have something remotely similar in an open-world game would be amazing.

The loading is masked.
When you're travelling between realms, everything slows down so that when you exit the door the new area exists.

Also GoW has loading when you boot up the game. Sony is talking about removing all of that.
 
Wouldn't frame rate be a potential bottleneck to your movement?

Everyone always mentions needing 60fps for twitch shooters, and both examples would require fast (smooth) movement, putting them in the same ballpark.

No, framerate isn't exactly a bottleneck for movement speed. You could fly at high speed through an open world like GTA at 5fps if you wanted.

High framerate for twitch shooters is desirable mostly for control response time but also for smooth visuals - a lot can change in a near instant in these games. But framerate won't affect how fast your character can move through the environment.

I'm fairly certain Cerny was talking about streaming in assets in open worlds. A good example of what Cerny was talking about is GTA 5 on the PS3/360. If you bought the digital version you had MUCH more pop-in than if you bought the physical disc. We're talking entire buildings just popping into existence at close range. The reason for this is that the game was designed to stream media in from the disc (slow) AND the HDD (slow) simultaneously. If you got the digital version, you could only stream in assets from the HDD; there was no disc to also stream assets from. Slow + slow is faster than slow alone.
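In back-of-envelope terms (both throughput numbers below are invented for illustration), two slow sources streaming different assets in parallel simply add:

disc_mbs = 15  # optical drive streaming one set of assets (made-up figure)
hdd_mbs = 25   # HDD streaming the rest, throttled by seeks (made-up figure)

print(hdd_mbs)             # digital copy: one source,   25 MB/s
print(disc_mbs + hdd_mbs)  # physical copy: two sources, 40 MB/s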

SSDs improve speeds massively.

What I would REALLY like to see disappear next gen is POP-IN. I hate it. I hate watching things pop into existence constantly. Take Assassin's Creed Odyssey, for example. Great game, but the pop-in is so distracting IMO.
 
Last edited:

Imtjnotu

Member
The loading is masked.
When you're travelling between realms, everything slows down so that when you exit the door the new area exists.

Also GoW has loading when you boot up the game. Sony is talking about removing all of that.
I'm talking loading screens. Yes, it was masked and there might have been a minor slowdown, but there wasn't a single loading screen during the game.

We don't know about boot load screens because the game shown was in-game fast traveling. But if you could reduce or remove the load screens from GTA, Horizon, and Spider-Man, I'm in for all that too.
 

TLZ

Banned
Playing Days Gone, I welcome this.


That's great. I hope it isn't proprietary though; otherwise replacing it means Sony selling us expensive SSDs. Or maybe, on the other hand, they sell like hot cakes and become widespread, a success like Blu-ray that everyone wants.
 

pawel86ck

Banned
But that's buying a single graphics card from a retailer. There are so many points in that chain where entities take a cut and thereby elevate the price.

How much would those Radeon VIIs cost if you bought 20 million (or more) of them, directly from AMD, and paid for them upfront? I can tell you right now it's going to be a lot less than $700. Probably even less if there's no sale-or-return clause in the contract.

Just to illustrate, a teardown of the iPhone XS Max (~$1250) revealed that the components cost ~$450. What's the cost if you bought a few million of those components in bulk? You could maybe halve the total bill if you include things like tax deductions for businesses.
Because Sony needs millions of GPUs, they probably get a very good price from AMD, and on top of that they only need to buy the chip from AMD, without the PCB or memory.

In 2012, Sony paid around $100 for both the CPU and GPU, and there's no way you could buy a GPU and CPU for $100 on the PC market back then.

I think price is not the issue when it comes to consoles, but rather temps and power consumption. In the original Xbox era, MS bought the best chip from Nvidia: a GeForce 3/4 hybrid, a high-end GPU. The same with the X360, the first GPU with unified shaders and amazing performance (just one year before the Xbox 360 launch, people used a 6800 Ultra in SLI to get the same results). By the Xbox One era, however, temps and power consumption were already a problem, so they couldn't use a high-end GPU anymore.
 
Last edited:
Post copied from Reset:

1. If the consoles actually have ray tracing hardware, it will be pretty limited. Mid gen refreshes could increase the ray tracing power.

2. The things I have mentioned aren't that creative because I am a stupid person, but devs are smart and talented and they will find ways to make things possible that weren't before, or only in limited form. If you break it down, yeah, it is just loading assets faster, but it is an order of magnitude never before seen. Your comparison to PC doesn't work because basically every game is designed around the constraints of the 5400rpm hard drives that spin in the consoles, but if ALL next-gen consoles have NVMe 4.0, the engines will adapt to that and allow for new kinds of magic. What PCs with SSDs do is just load games that are designed around HDDs faster, and that's it; they aren't really utilizing it. No game is built around calling 5 gigs of assets in a second; next gen, they will.

Look at it this way, there is a reason why ALL games must be installed to the HDD of modern consoles even though this means that it takes much longer to get into the game when you start it the first time. Having faster access speeds gives devs more freedoms to create cool stuff. Otherwise games would still draw assets from the disc itself.

The PlayStation 4's Blu-ray drive is rated at 6x, a maximum read speed of about 27 MB/s. 5400 RPM drives offer an average of 100 MB/s read. Some users here suggested that the read speed of a PCIe 4.0 NVMe drive might be up to 8 GB/s. That is pretty insane.

https://www.pcgamesn.com/ssd-pcie-4-amd-3rd-gen-ryzen

So if my math is correct, we might be looking at next-gen console read speeds up to 80, read E-I-G-H-T-Y, times faster. According to Cerny it's faster than anything on PC. The fastest on PC are about 4 GB/s, so we will get at least 5 GB/s, which is still 50 fucking times faster than the HDD in the PS4.
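Checking the arithmetic against the figures in the post:

hdd_mbs = 100    # 5400 RPM HDD average read, MB/s
nvme_mbs = 8000  # optimistic PCIe 4.0 NVMe figure, MB/s

print(nvme_mbs / hdd_mbs)  # 80.0 -> the "eighty times" claim
print(5000 / hdd_mbs)      # 50.0 -> the conservative 5 GB/s case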

:messenger_fearful:
 

TLZ

Banned
Would the PS5 have something like this in it?

Radeon Instinct™ MI25 Accelerator

Superior FP16 and FP32 performance for machine intelligence and deep learning.
Compute Units: 64
Stream Processors: 4096
Peak Half Precision (FP16) Performance: 24.6 TFLOPs
Peak Single Precision (FP32) Performance: 12.29 TFLOPs
Peak Double Precision (FP64) Performance: 768 GFLOPs
Memory Size: 16 GB
Memory Type (GPU): HBM2
Memory Bandwidth: 484 GB/s
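For what it's worth, the FP32 figure on a sheet like this falls out of a simple formula: stream processors x 2 FLOPs per clock (one fused multiply-add) x clock speed. Plugging in the MI25's 1.5 GHz peak clock as a sanity check:

def fp32_tflops(stream_processors, clock_ghz):
    # Each stream processor retires one FMA (2 FLOPs) per clock.
    return stream_processors * 2 * clock_ghz / 1000

print(fp32_tflops(4096, 1.5))  # ~12.29 -> matches the spec sheet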
 
Last edited:

SonGoku

Member
Bandwidth is surprisingly low considering it's using 16GB of HBM2. Any ideas about the big drop-off compared to Radeon VII?
 
Last edited:
Would the PS5 have something like this in it?

Radeon Instinct™ MI25 Accelerator

Superior FP16 and FP32 performance for machine intelligence and deep learning.
Compute Units: 64
Stream Processors: 4096
Peak Half Precision (FP16) Performance: 24.6 TFLOPs
Peak Single Precision (FP32) Performance: 12.29 TFLOPs
Peak Double Precision (FP64) Performance: 768 GFLOPs
Memory Size: 16 GB
Memory Type (GPU): HBM2
Memory Bandwidth: 484 GB/s
Edit: nevermind, that's ~12TF.
 
Last edited:

TLZ

Banned
Edit: nevermind, that's ~12TF.
Yea. These Instinct cards are made for servers, but I was curious because the specs match the rumors we're seeing. It's on 14nm though.

This one's on 7nm:

AMD Radeon Instinct™ MI50 Accelerator

Compute Units: 60
Stream Processors: 3840
Peak Half Precision (FP16) Performance: 26.8 TFLOPs
Peak Single Precision (FP32) Performance: 13.4 TFLOPs
Peak Double Precision (FP64) Performance: 6.7 TFLOPs
Memory Size: 16 GB
Memory Type (GPU): HBM2
Memory Bandwidth: 1024 GB/s

Looks excessive.

Oh, and it uses PCIe 4.
 
Last edited:

stetiger

Member
Next gen could be sweet. Loading times have actually stopped me from completing some games. I remember giving up on some RDR2 quests because they were across the map and I honestly could not keep loading there and back again.
 
Last edited:

xGreir

Member
I'll take even 8 TF if we can have HBM tech instead of GDDR and forget about pop-in :messenger_loudly_crying:

I'm satisfied with the current graphics of a base PS4; I'd take a PS5 that's just a PS4 Pro with a better CPU and an SSD/HBM combo every day, every hour.

Better textures, bigger and more detailed worlds, more options, better AI, richer environments, 1080p, fast and furious :messenger_horns::messenger_fire:
 
Last edited:

xool

Member
I remember reading it somewhere in 2012, not sure if it was combining Cell and RSX or shrinking them to a smaller node.

Both got several die shrinks that gen.

Bandwidth is surprisingly low considering it's using 16GB of HBM2. Any ideas about the big drop-off compared to Radeon VII?

Two 8GB packages instead of four 4GB packages explains it perfectly

[edit - well, half the packages and double the package capacity halves the effective bus width, whatever the actual numbers]

[edit2 - it'll be a 2048-bit bus like Vega 64, not 4096-bit like Vega 20. Why? Dunno...]

[edit3 - there was an update to the HBM2 spec in late 2018 that allowed over 8GB per stack]

I'm assuming they didn't just cheap out on RAM, and instead decided that the application (compute) didn't need as much memory bandwidth. The explanation might be that compute doesn't need huge textures like graphics does, so they could get away with less memory bandwidth. But this part isn't going to be cheap - the sort of thing government-funded research institutes buy ($$$) - so I have a hard time explaining this step backwards. Maybe it was the spec given by this big order: https://www.anandtech.com/show/1430...ntier-supercomputer-cray-and-amd-1-5-exaflops
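The stack math as a quick sketch (pin speeds below are back-solved from the published bandwidth figures, so treat them as estimates):

def hbm2_bandwidth_gbs(stacks, pin_speed_gbps):
    # Each HBM2 stack contributes a 1024-bit interface.
    bus_bits = stacks * 1024
    return bus_bits * pin_speed_gbps / 8  # GB/s

print(hbm2_bandwidth_gbs(2, 1.89))  # ~484 GB/s -> MI25, two 8GB stacks
print(hbm2_bandwidth_gbs(4, 2.0))   # 1024 GB/s -> Radeon VII, four 4GB stacks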
 
Last edited:

TeamGhobad

Banned
Once again, RAM will be the deciding factor this gen: unified or split, and type.
I wouldn't be surprised if MS went with unified GDDR6 and Sony split the pool between HBM2 and DDR4.
Not sure who would have the advantage; I know CPU tasks are latency-sensitive, but a unified pool is easier for devs.

The rest of the specs we mostly know: both are using Zen 2 with 8 cores (SMT might be disabled or completely removed),
GPUs in the 12-14 TFLOPS range, and both are using SSDs.

Whoever miscalculates the RAM will lose, just like last gen with MS and their DDR3 and ESRAM taking up half the SoC.
 
Last edited: