
Next-Gen PS5 & XSX |OT| Console tEch threaD


SlimySnake

Flashless at the Golden Globes
I like how Cerny was like 36 RDNA CUs are like 58 PS4 GCN CUs!!!!

Well, PS4 CUs were on 28nm. RDNA 2.0 is on 7nm. That's a 4x reduction. They should have 4x the CUs.

18*4 = 72 CUs. They went so cheap they couldn't even give us a full 4x transistor leap. I have no idea why they picked the GPU to save money.
 
I thought 2.0GHz was ridiculous when that was the rumour; 2.2GHz is beyond mind-boggling. No way would you set out making a console with that in mind. Simple fact is 10.3TF sounds a hell of a lot better than 9.2TF, especially when you know your opposition is bringing 12.1.
I don't care what anyone says, and IMO tech sites will prove it after launch: PS5 won't be running close to 10.3TF when running actual games.

It will. Look at my post above.
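For anyone wanting to sanity-check the TF numbers being thrown around: the paper figure is just CUs × 64 shaders per CU × 2 ops per clock × clock speed. A quick back-of-envelope sketch (peak theoretical throughput only; it says nothing about sustained performance in games):

```python
# Peak FP32 throughput = CUs * 64 shaders/CU * 2 FLOPs/clock * clock (GHz) / 1000
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"PS5  (36 CUs @ 2.23 GHz):  {peak_tflops(36, 2.23):.2f} TF")   # ~10.28 TF
print(f"XSX  (52 CUs @ 1.825 GHz): {peak_tflops(52, 1.825):.2f} TF")  # ~12.15 TF
print(f"PS5 at a fixed 2.0 GHz:    {peak_tflops(36, 2.0):.2f} TF")    # ~9.22 TF
```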
 
The fact they are comparing them to PC parts already means you shouldn't be listening to them on this. You cannot compare off-the-shelf prices to what these companies get.
I don't know what this has to do with my comment, but yeah, I agree. PS5 is clear proof of this; good or bad, it's a very custom system considering it started from standard tech.
 

semicool

Banned
The fact that the clocks are locked shows the headroom is there, but why would they need to up them? I think they have gone for stability and balance with XSX.
My instinct tells me they will wait to perform long-term testing on higher frequencies and then release a system update. That's basically what Nintendo did. They did it more out of need, but I still feel Microsoft will do the same and those new levels will be constant too. It's a possibility.
 

Reindeer

Member
And? That is going to be a potential bottleneck for a large GPU running at 1.825GHz in addition to a 3.8GHz CPU, which was the point the guy was originally making. We had this discussion a couple of days back: Series X is a monster of a console, but its memory configuration is not ideal. It could be starved of bandwidth. Consoles are always going to come with compromises; MS did on memory, Sony pretty much everywhere.
You're forgetting that when it comes to the GPU, Series X has 10GB of faster RAM dedicated to texture streaming etc.; the other 3.5GB of slower RAM is for the CPU and audio, which don't require as much bandwidth. Not sure how you're seeing this as a win for PS5; at best they are gonna be very equal. The Series X GPU will have access to faster RAM than PS5, though.
 
I like how Cerny was like 36 RDNA CUs are like 58 PS4 GCN CUs!!!!

Well, PS4 CUs were on 28nm. RDNA 2.0 is on 7nm. That's a 4x reduction. They should have 4x the CUs.

18*4 = 72 CUs. They went so cheap they couldn't even give us a full 4x transistor leap. I have no idea why they picked the GPU to save money.
My opinion, but at this point I prefer loading as fast as possible and CPU power over raw details.
 

icerock

Member
I thought 2.0GHz was ridiculous when that was the rumour; 2.2GHz is beyond mind-boggling. No way would you set out making a console with that in mind. Simple fact is 10.3TF sounds a hell of a lot better than 9.2TF, especially when you know your opposition is bringing 12.1.
I don't care what anyone says, and IMO tech sites will prove it after launch: PS5 won't be running close to 10.3TF when running actual games.

Matt from REE says the dev kits are actually running at those speeds. Besides, if that figure is just a gimmick, why not spread the FUD of a higher clock? Why not get close to 11?

I feel we should wait till we get clarification on what the base clocks are actually like before speculating about any figures.

I like how Cerny was like 36 RDNA CUs are like 58 PS4 GCN CUs!!!!

Well, PS4 CUs were on 28nm. RDNA 2.0 is on 7nm. That's a 4x reduction. They should have 4x the CUs.

18*4 = 72 CUs. They went so cheap they couldn't even give us a full 4x transistor leap. I have no idea why they picked the GPU to save money.

I chuckled during that part, actually through most of the GPU talk. You knew he had a tough sell when they tried to pitch RDNA 2 Flop =/= PS5 Flop & RDNA 2 CU =/= PS5 CU.

In the end, no amount of high clocks can compensate for raw compute power. But gotta sell what you have, I guess.

You're forgetting that when it comes to the GPU, Series X has 10GB of faster RAM dedicated to texture streaming etc.; the other 3.5GB of slower RAM is for the CPU and audio, which don't require as much bandwidth. Not sure how you're seeing this as a win for PS5; at best they are gonna be very equal. The Series X GPU will have access to faster RAM than PS5, though.

You can spin all you want; a split memory structure will still have a theoretical limit on available bandwidth, which in the scheme of their GPU/CPU is on the lower side. Not sure why this is so hard for you to grasp. Also, not sure where you're reading this as me seeing 'a win for PS5'. There are a number of posts in here where I've said that design is shitty and a compromise.

Also, you're wrong on the RAM; both are using 14Gbps chips.
 

Stuart360

Member
Matt from REE says the dev kits are actually running at those speeds. Besides, if that figure is just a gimmick, why not spread the FUD of a higher clock? Why not get close to 11?

I feel we should wait till we get clarification on what the base clocks are actually like before speculating about any figures.
Well, I did say 'after launch', but yeah, we should wait for actual results.
 

PaintTinJr

Member
According to all the data and theoreticals I have, 2.23GHz is 1.9x to 2x the power draw of the same number of CUs at 1.8GHz... that's a lot of power...

...it would be 33% more power consumption for 10.2 TF on the GPU, compared to 52 CUs at 1.8GHz giving 12 TF...

More power for less TF. I cannot believe this is deliberate design, and not compensation for a poor hand dealt to them by circumstances out of their control.

I think that was the problem they had before their boost-mode strategy – things overheating – until they switched to a fixed power level: clock higher for low CU utilisation, slightly lower for high CU utilisation, and use AMD SmartShift to feed excess power from the CPU to the GPU. That means the APU won't heat up as much and probably puts it below XsX power draw – which I assume we'll find out when they reveal the cooler design.
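On the quoted power numbers: a rough way to see where the "1.9x to 2x" figure can come from is the usual dynamic-power approximation P ∝ f·V², with voltage having to rise roughly in step with frequency at the top of the curve. A toy sketch (the voltage-scaling assumption is purely illustrative, not a measured AMD figure):

```python
# Dynamic power roughly scales as frequency * voltage^2.
# Assume voltage rises roughly linearly with frequency near the top of the
# frequency/voltage curve (an illustrative assumption, not AMD data),
# so per-CU power scales roughly with f^3.
def relative_power(f_new_ghz: float, f_ref_ghz: float, voltage_exponent: float = 2.0) -> float:
    f_ratio = f_new_ghz / f_ref_ghz
    return f_ratio * (f_ratio ** voltage_exponent)  # f * V^2 with V ~ f

per_cu = relative_power(2.23, 1.8)
print(f"Per-CU power at 2.23 GHz vs 1.8 GHz: ~{per_cu:.2f}x")                     # ~1.90x
print(f"36 CUs @ 2.23 GHz vs 52 CUs @ 1.8 GHz: ~{36 * per_cu / 52:.2f}x total")   # ~1.32x
```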
 

StreetsofBeige

Gold Member
I'm too lazy to rewatch the segment, but there was a weird part I kind of heard and didn't fully catch, as I was doing stuff on my work laptop (I guess working from home due to corona has its perks!). When he was talking CUs, there was some BS where I thought I heard him try to make it sound like it was better to only have 36 CUs than to have more.

Did any of you get that, and could you clarify? Maybe I heard wrong.
 

GameSeeker

Member
But the consoles are equal, according to many folks, so why would the XSX be more expensive?

Differences in APU size. Microsoft said the Series X APU is 360 square mm and has 52 CUs. Sony hasn't said how big the PS5 APU is, but it has to be significantly smaller at only 36 CUs. The smaller die size leads to better manufacturing yields and hence lower costs.
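To put the yield argument in rough numbers: a common first-order model treats yield as falling off exponentially with die area times defect density. The defect density and the PS5 die size below are illustrative assumptions – Sony hadn't disclosed its die size at this point – so treat this as a sketch of the direction, not actual foundry numbers:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """First-order Poisson yield model: Y = exp(-A * D0), with A in cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

D0 = 0.1          # defects per cm^2 -- an assumed, illustrative value
xsx_die = 360.0   # mm^2, Microsoft's stated Series X APU size
ps5_die = 300.0   # mm^2, a hypothetical smaller PS5 die for illustration

print(f"XSX yield @ {xsx_die:.0f} mm^2: {poisson_yield(xsx_die, D0):.1%}")  # ~69.8%
print(f"PS5 yield @ {ps5_die:.0f} mm^2: {poisson_yield(ps5_die, D0):.1%}")  # ~74.1%
```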
 

CrustyBritches

Gold Member
More power for less TF. I cannot believe this is deliberate design, and not compensation for a poor hand dealt to them by circumstances out of their control.
Of course not. I'd guess Gonzalo was their 2019-target chip. Oberon is the attempt at 2GHz, and once they found out MS was at 12TF they went balls out on cooling and variable frequency to get above the "10TF" marketing bullet point. They are 0.06021GHz away from having a "9TF" console.

I think we might see $499 on both the PS5 and XSX. We'll have to get a look at the PS5 cooling solution and case. Personally, I'm mostly interested in a Lockhart, but who knows if we'll see that anytime soon, or at all. I thought they'd be using it as the affordable console and to double as xCloud servers, but from what they said the other day about the XSX SoC being able to spin up 4 X1S instances, it sounds like that's the chip they'll use in the servers. Regardless, if Lockhart is a thing, they wouldn't show it until the last minute, as it diminishes the Halo effect of the XSX. If it is still a thing, I'm guessing $299.
 

Stuart360

Member
I'm too lazy to rewatch the segment, but there was a weird part I kind of heard and didn't fully catch, as I was doing stuff on my work laptop (I guess working from home due to corona has its perks!). When he was talking CUs, there was some BS where I thought I heard him try to make it sound like it was better to only have 36 CUs than to have more.

Did any of you get that, and could you clarify? Maybe I heard wrong.
He was comparing PS5 to the Pro, or base PS4, can't remember. He was basically saying PS5 has fewer CUs than PS4 but it's much faster.
 
The different memory clocks mean they're essentially splitting the RAM into two separate pools. I think I recall DF stating that it will be invisible to the developer, but I don't see how this is possible. Just like how a PS5 dev will have to work with the variable clock speeds when pushing PS5 to its limits, XSX devs will no doubt have to contend with varying RAM clock speeds.

Honestly for devs I suspect the memory config situation will be the easier of the two for them to deal with. Different pools of memory for different tasks at differing clocks isn't an alien concept. In fact up until PS4/XBO it was actually the norm. Just look at the memory setups in systems like SNES, Genesis, PS1, Saturn, PS2, PS3, etc. Systems like PS4 and 360 were the exception in that regard, and it even took the XBO until the S and X to join that train.

What PS5 is doing has never really been done with consoles before in terms of its implementation of variable clocks. However, devs who want to control power flux between the CPU and GPU for max throughput on either will have to make sure they don't do anything that can screw up engine stability, especially if they try doing it in fast, tricky ways to squeeze every bit of performance out of the hardware. And that fully depends on how much control over it the OS will allow them.

I'm weirdly turned on thinking of the possibilities for both.
 

SlimySnake

Flashless at the Golden Globes
My opinion, but at this point I prefer loading as fast as possible and CPU power over raw details.
Dude, they are gonna downclock the shit out of that CPU to feed the GPU. You do know how he said he's hitting 2.3GHz, right? They need to take watts away from the CPU, which means downclocking it by god knows how much to get some substantial watts in return to feed the GPU.

This genius has literally designed a system around bottlenecking one part to feed the other.
 

Reindeer

Member
You can spin all you want; a split memory structure will still have a theoretical limit on available bandwidth, which in the scheme of their GPU/CPU is on the lower side.

Also, you're wrong on the RAM; both are using 14Gbps chips.
How is this spin? SMH. Dude, the 10GB of GPU-allocated RAM on Series X is at 560GB/s bandwidth, which is obviously faster than the PS5's 448GB/s RAM. This isn't rocket science. I'm pretty sure they calculated how much RAM the GPU and CPU would need and made the decision. Both approaches are valid and there shouldn't be much between them, as I already said.
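For what it's worth, both sets of bandwidth figures fall straight out of pin speed times bus width – the 14Gbps chips are the same on both machines, the difference is how wide the bus is for each pool. A quick check (the bus widths are the publicly stated ones; the slower Series X pool is effectively 192-bit):

```python
def bandwidth_gb_s(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """GDDR6 bandwidth = bus width (bits) * per-pin speed (Gbps) / 8 bits per byte."""
    return bus_width_bits * pin_speed_gbps / 8.0

print(f"PS5, 256-bit @ 14 Gbps:           {bandwidth_gb_s(256, 14):.0f} GB/s")  # 448
print(f"XSX fast pool, 320-bit @ 14 Gbps: {bandwidth_gb_s(320, 14):.0f} GB/s")  # 560
print(f"XSX slow pool, 192-bit @ 14 Gbps: {bandwidth_gb_s(192, 14):.0f} GB/s")  # 336
```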
 

CJY

Banned
I'm too lazy to rewatch the segment, but there was a weird part I kind of heard and didn't fully catch, as I was doing stuff on my work laptop (I guess working from home due to corona has its perks!). When he was talking CUs, there was some BS where I thought I heard him try to make it sound like it was better to only have 36 CUs than to have more.

Did any of you get that, and could you clarify? Maybe I heard wrong.
Here you go, the segment:

timestamped




Not seeing anything wrong with what he said. 36 is better from an efficiency standpoint and allowed them to save on their silicon budget for other things like the 3D audio SPU and custom I/O. Actual performance in games will probably be a wash with the faster clock speeds.
 
Honestly for devs I suspect the memory config situation will be the easier of the two for them to deal with. Different pools of memory for different tasks at differing clocks isn't an alien concept. In fact up until PS4/XBO it was actually the norm. Just look at the memory setups in systems like SNES, Genesis, PS1, Saturn, PS2, PS3, etc. Systems like PS4 and 360 were the exception in that regard, and it even took the XBO until the S and X to join that train.

What PS5 is doing has never really been done with consoles before in terms of its implementation of variable clocks. However, devs who want to control power flux between the CPU and GPU for max throughput on either will have to make sure they don't do anything that can screw up engine stability, especially if they try doing it in fast, tricky ways to squeeze every bit of performance out of the hardware. And that fully depends on how much control over it the OS will allow them.

I'm weirdly turned on thinking of the possibilities for both.
But so much for the "easiest console to develop for".
 

xool

Member
I'm too lazy to rewatch the segment, but there was a weird part I kind of heard and didn't fully catch, as I was doing stuff on my work laptop (I guess working from home due to corona has its perks!). When he was talking CUs, there was some BS where I thought I heard him try to make it sound like it was better to only have 36 CUs than to have more.

Did any of you get that, and could you clarify? Maybe I heard wrong.
This was the "rising tides raise all boats" thing -- around 31min45sec
  • Efficiency drops (somewhat) as CUs rise - this is true
  • Other GPU components run faster - this is probably not as relevant - it assumes that when you increase CUs you didn't increase the number or size of other GPU function units ..

Well, it's kind of a justification of the hand they got from one point of view, and has some validity from another


..

I'm still coming to terms with the "small triangles made the fans go too fast in GoW" and "simple geometry made HZD map screen overheat the console" claim .. (33-35min - it's real)

.. I just can't anymore

(not buying Sony again this gen .. I can't take anymore of this)
 

Null_Key

Neo Member
According to all the data and theoreticals I have, 2.23GHz is 1.9x to 2x the power draw of the same number of CUs at 1.8GHz... that's a lot of power...

...it would be 33% more power consumption for 10.2 TF on the GPU, compared to 52 CUs at 1.8GHz giving 12 TF...

More power for less TF. I cannot believe this is deliberate design, and not compensation for a poor hand dealt to them by circumstances out of their control.
Strictly speaking, if your only metrics are power consumption and TF, then it makes no sense, like you say; but the intention is not a consumption- or TF-driven design, it's to boost the other metrics such as your L1/L2 cache. Modern CPU/GPU designs throttle based on thermals, reducing or increasing workload, which is why you see a framerate drop if the game becomes too demanding. The PS5 approaches this very differently: the consumption is fixed, but the variable frequency depends on the workload, so if the CPU only demands X amount of computation, the rest of the budget goes to the GPU to push for additional resolution, etc. The GPU isn't fully utilized as much as people think; it's why GPGPU existed last gen, because graphics output doesn't require the full power of the GPU, so general processing can be used in lieu of the traditional CPU, in PS4's case the very weak Jaguar cores. It's also why TF isn't as important as people think; it has its uses, but other factors are at play. In a way the most wasteful tech out there is SLI: you double the cost, but it doesn't double the performance, yet your TF is through the roof, so why not double the power? The short answer is it doesn't need it, or should I say it can't use it, because of other elements at play.

What Sony is counting on is maximum utilization across all the other computational needs of a game, not just TF. At this point it's hard to say what the gap is really going to be like without seeing the games in motion. I think the Xbox's brute power will probably win out on the resolution front, just from the power difference alone, but performance-wise – FPS, stuttering in gameplay, open-world designs requiring instant data delivery – it's a lot blurrier, and it may surprise some that the PS5 could possibly win on this front due to a lack of bottlenecks. I think RDNA 2.0 probably removed the heavy power consumption that was keeping Sony from moving into this realm, but how they keep the thermals constant is going to be interesting.
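A way to picture the fixed-power / variable-frequency idea being described: the silicon gets a constant power budget and the clocks float underneath it, with whatever the CPU doesn't need left over for the GPU. The numbers below are made up purely to illustrate the mechanism – nothing here is an actual PS5 power figure:

```python
# Toy model of a fixed total power budget shared by CPU and GPU.
# All wattages and the cubic clock/power relation are illustrative assumptions,
# not real PS5 numbers.
TOTAL_BUDGET_W = 180.0
CPU_MAX_W, GPU_MAX_W = 50.0, 150.0        # power each part would draw at max clocks
CPU_MAX_GHZ, GPU_MAX_GHZ = 3.5, 2.23

def clock_for_power(power_w: float, max_power_w: float, max_clock_ghz: float) -> float:
    # Invert the rough power ~ clock^3 relation: less power -> slightly lower clock.
    return max_clock_ghz * min(1.0, power_w / max_power_w) ** (1.0 / 3.0)

for cpu_load in (0.5, 0.8, 1.0):          # fraction of CPU max power the workload demands
    cpu_w = CPU_MAX_W * cpu_load
    gpu_w = min(GPU_MAX_W, TOTAL_BUDGET_W - cpu_w)   # GPU gets whatever is left
    gpu_clock = clock_for_power(gpu_w, GPU_MAX_W, GPU_MAX_GHZ)
    print(f"CPU at {cpu_w:4.0f} W -> GPU gets {gpu_w:5.0f} W, ~{gpu_clock:.2f} GHz")
```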
 

Reindeer

Member
Except that it isn't true. They will not hit 3.5GHz CPU clocks if they are hitting 2.23GHz on the GPU, and vice versa.

Just how far the CPU will go down remains to be seen, but that's literally throttling one part of the console to feed the other. That's a bottleneck.
That's what DF said, that you can't run both CPU and GPU at max clocks at the same time.
 

HawarMiran

Banned
We'll have to see how Xbox is handling their version of 3D audio, but the PS5 seems to be doing it in a completely different way than Series X. Xbox has already adopted support for Atmos, but as Cerny said, what they're wanting to achieve is far beyond what Atmos is capable of. They have an audio block with the compute power equal to the entire Jaguar processor that was in the PS4. I think it will end up making a bigger difference than people think. If they are able to get custom HRTF maps for people, that would completely change how a game's audio is perceived for each person, because it will be custom tailored to how your ears are shaped.
Gimme BAD COMPANY 3 WITH THAT AUDIO NAAAAOW. If only I liked playing FPS on consoles :messenger_downcast_sweat: . Hopefully the PS5 will have native K/M support for the single-player part at least
 

dangopee

Neo Member
I'm too lazy to rewatch the segment, but there was a weird part I kind of heard and didn't fully catch, as I was doing stuff on my work laptop (I guess working from home due to corona has its perks!). When he was talking CUs, there was some BS where I thought I heard him try to make it sound like it was better to only have 36 CUs than to have more.

Did any of you get that, and could you clarify? Maybe I heard wrong.
It's not really BS. He's talking about narrow-and-fast vs wide-and-slow. Narrow-and-fast is preferable if it is practical to reach those higher clocks. Key word being practical. A lot of people on here are arguing that the huge power consumption and heat increase from running the GPU at such a high frequency is not practical. So it's up to you if you want to believe Cerny or random gaffers.
 

Null_Key

Neo Member
My initial reaction after glimpsing the specs over at EG was that I wanted to upload the GIF of Beau Bennett from The Ranch being given a salad for his lunch. But after watching the whole thing – intently – these are the points I took away, in order of importance.

1. Going for more CUs than 36 didn't fit with the scene complexity in the game development they analysed. So even if packing workloads into 52 CUs resulted in as little as 10% more wasted compute than 36 CUs, at those relative speeds both are within touching distance.

36 @ 80% efficient * 2.23GHz * 64 * 2 = 8.22TF
52 @ 70% efficient * 1.825GHz * 64 * 2 = 8.5TF

2. Higher clocks provide better performance throughout the GPU, with Fillrate being the major one he mentioned specifically – so might be like comparing PS5 as Nvidia versus XsX as AMD GPU.

3. The die space in the APU used for their custom I/O complex – with SRAM, a Kraken (zlib-enhanced) decompression unit and DMA controller, along with two I/O co-processors – is the real next-gen component and worth the sacrifice (IMHO), as it will make a radical difference in how the 16GB pool of GDDR6 will appear by size to game developers – removing the need to waste 2/3rds of the memory on assets your viewpoint isn't looking at, like in this gen – so probably an effective increase in memory from 2GB to 10GB (1 to 5) for what we see on screen, even if we forget about the PlayStation-unique GPU cache scrubbing feature (garbage collector) and the free Kraken h/w decompression making the 16GB of GDDR6 more effective.

4. The performance in the PS5 ssd flash controller is going to be significantly more important than the memory speed differences and the ssd drive speed differences of the consoles imply.

5. The simplicity and automatic ability to get the new memory controller/SSD benefits on the PS5 without additional work is likely to make it the target system for 3rd-party developers, especially if the PS5 OS takes less memory away from developers.

6. Replacing the NVMe drive in the PS5 with alternatives that meet the technical requirements will be possible, meaning improving specs, falling prices and competition between brands will eventually make upgrading the 825GB NVMe storage much cheaper than XsX's exclusive proprietary Seagate NVMe storage, and the ability to park data on USB HDDs means large, cheap external SSDs will probably be used in the meantime with minimal inconvenience.

7. Their custom Tempest 3D AudioTech is a CU customized to be a lot like an SPU, and seems like it might be worth the effort.

8. Clock boosting looks like a very green balancing act and in the long run will probably result in 9.2TF performance, but those games will be at maximum efficiency workloads, so in reality getting more performance in real-terms. It is almost like the hardware clock is grading the development quality of the publisher by clock speed. My understanding was that it is like two people eat the same food, but one does an office job all day and then needs a high intensity workout to burn off excess energy. Someone else does a physically demanding job all day and so relaxes when they finish as their food and work are in balance.

9. They were very vague about the RT capabilities – which worries me, as that was the real next-gen feature I wanted – but on balance I suspect the magnitude of the bottlenecks that the PS5 design has removed for game developers will result in many games performing better on PS5 than XsX, even if on paper the one important Xbox number is 2.8TF higher.

On the fence about buying straight away because of the vague RT talk, might be more inclined to buy the XsX first if it is better at RT.
I love people who use logic and facts. You will be in my dreams tonight, just kidding, but seriously.... dreams
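For reference, the effective-throughput figures in point 1 of the quoted post drop straight out of the standard CU arithmetic once you apply the quoted utilisation guesses (the 80%/70% factors are the quoted post's assumptions, not measured numbers):

```python
def effective_tflops(cus: int, utilisation: float, clock_ghz: float) -> float:
    # CUs * 64 shaders * 2 FLOPs per clock, scaled by an assumed utilisation factor.
    return cus * utilisation * clock_ghz * 64 * 2 / 1000.0

print(f"36 CUs @ 80% * 2.23 GHz:  {effective_tflops(36, 0.80, 2.23):.2f} TF")   # ~8.22
print(f"52 CUs @ 70% * 1.825 GHz: {effective_tflops(52, 0.70, 1.825):.2f} TF")  # ~8.50
```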
 

mitchman

Gold Member
The different memory clocks mean they're essentially splitting the RAM into two separate pools. I think I recall DF stating that it will be invisible to the developer, but I don't see how this is possible. Just like how a PS5 dev will have to work with the variable clock speeds when pushing PS5 to its limits, XSX devs will no doubt have to contend with varying RAM clock speeds.
It's invisible to the developer because regular "slow" memory is allocated with the standard library allocation methods, while graphics memory is allocated using the texture allocation methods in DirectX. So the system will basically know what memory you want and provide it for you.
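A conceptual sketch of what "the system knows what memory you want" can look like. This is a toy allocator, not the real DirectX/GDK API; the pool sizes are just the publicly stated game-visible splits, and the routing rule is an assumption for illustration:

```python
# Toy model of routing allocations to two bandwidth pools based on intent.
# Purely illustrative; not the actual XDK/DirectX allocation API.
class SplitMemory:
    def __init__(self):
        self.pools = {
            "gpu_optimal": {"size_gb": 10.0, "bandwidth_gb_s": 560, "used_gb": 0.0},
            "standard":    {"size_gb": 3.5,  "bandwidth_gb_s": 336, "used_gb": 0.0},
        }

    def alloc(self, size_gb: float, usage: str) -> str:
        # Texture/render-target style allocations go to the fast pool,
        # CPU-side data (game logic, audio buffers) to the standard pool.
        pool_name = "gpu_optimal" if usage in ("texture", "render_target") else "standard"
        pool = self.pools[pool_name]
        if pool["used_gb"] + size_gb > pool["size_gb"]:
            raise MemoryError(f"{pool_name} pool exhausted")
        pool["used_gb"] += size_gb
        return pool_name

mem = SplitMemory()
print(mem.alloc(2.0, "texture"))     # -> gpu_optimal (560 GB/s pool)
print(mem.alloc(1.0, "game_state"))  # -> standard (336 GB/s pool)
```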
 

PaintTinJr

Member
This was the "rising tides raise all boats" thing -- around 31min45sec
  • Efficiency drops (somewhat) as CUs rise - this is true
  • Other GPU components run faster - this is probably not as relevant - it assumes that when you increase CUs you didn't increase the number or size of other GPU function units ..

Well, it's kind of a justification of the hand they got from one point of view, and has some validity from another


..

I'm still coming to terms with the "small triangles made the fans go too fast in GoW" and "simple geometry made HZD map screen overheat the console" claim .. (33-35min - it's real)

.. I just can't anymore

(not buying Sony again this gen .. I can't take anymore of this)

It is because you are providing high power for a high workload; then the workload drops on the map screen, but the power is still being drawn into the system and has to go somewhere when little is happening, so it heats the surface of the chip, which then makes the fan go mad. And because the chip's ambient temperature has risen, even going back to the game and high utilisation doesn't fix the temperature or noise.

The boost design means the system pushes the clocks when less is happening, so the power is used by the higher clocks and doesn't heat the chip in the first place.
 
Here’s something I’m thinking about:

It's really stupid for Sony to limit themselves like that, having to design a GPU that can mimic the PS4 Pro just to do BC with older titles. What do they have to do when they want to abandon their 36 CU design in future PlayStation iterations, like maybe a PS5 Pro or PS6?

What will they do to abandon that design while still retaining PS4 + PS4 Pro title support?


This here is a big question for Sony and a technical challenge that they'll have to overcome, to be honest.

Imagine if the PS5 Pro is 20 TFLOPS of GPU power; what will they have to do when they want to design, say, a 64 CU console? Will they abandon PS4 BC support?

That's why I'm really disappointed that this design is holding them back from doing what Microsoft is doing, and I think it is stupid, because eventually they'll have to leave that 36 CU design behind when they announce a PS5 Pro.

What do you guys think?
Totally agree.

The only way Sony can put this right is if they aim for a $399 PS5 or the final price is at least $100 less than the XSX.

I have watched the event again for a second time and man, they can't talk about the PS5 to the general public in this way.

It is like BGs BGs said today: what the hell are you doing, Sony? If this is the first time you are going to talk about the PS5 in a public event, show the fans what your console can really do and stop the crap.

Anyway, I am going to buy the console no matter what.

At least we have all passed the speculation step, and man, it has been hard :messenger_tears_of_joy:
 
Strictly speaking, if your only metrics are power consumption and TF, then it makes no sense, like you say; but the intention is not a consumption- or TF-driven design, it's to boost the other metrics such as your L1/L2 cache. Modern CPU/GPU designs throttle based on thermals, reducing or increasing workload, which is why you see a framerate drop if the game becomes too demanding. The PS5 approaches this very differently: the consumption is fixed, but the variable frequency depends on the workload, so if the CPU only demands X amount of computation, the rest of the budget goes to the GPU to push for additional resolution, etc. The GPU isn't fully utilized as much as people think; it's why GPGPU existed last gen, because graphics output doesn't require the full power of the GPU, so general processing can be used in lieu of the traditional CPU, in PS4's case the very weak Jaguar cores. It's also why TF isn't as important as people think; it has its uses, but other factors are at play. In a way the most wasteful tech out there is SLI: you double the cost, but it doesn't double the performance, yet your TF is through the roof, so why not double the power? The short answer is it doesn't need it, or should I say it can't use it, because of other elements at play.

What Sony is counting on is maximum utilization across all the other computational needs of a game, not just TF. At this point it's hard to say what the gap is really going to be like without seeing the games in motion. I think the Xbox's brute power will probably win out on the resolution front, just from the power difference alone, but performance-wise – FPS, stuttering in gameplay, open-world designs requiring instant data delivery – it's a lot blurrier, and it may surprise some that the PS5 could possibly win on this front due to a lack of bottlenecks. I think RDNA 2.0 probably removed the heavy power consumption that was keeping Sony from moving into this realm, but how they keep the thermals constant is going to be interesting.
This is exactly why I want to see games running.
 

CrustyBritches

Gold Member
Something I kind of breezed over earlier while I was having too much fun was this snippet from Mark Cerny's presentation. The more I hear it, the more curious I become:
If you see a similar discrete GPU available as a PC card at roughly the same time as we release our console, that means our collaboration with AMD succeeded in producing technology useful in both worlds. It doesn't mean that we at Sony simply incorporated the PC part into our console.

Is this a stealth AMD GPU announcement?
 

Reindeer

Member
It's not really BS. He's talking about narrow-and-fast vs wide-and-slow. Narrow-and-fast is preferable if it is practical to reach those higher clocks. Key word being practical. A lot of people on here are arguing that the huge power consumption and heat increase from running the GPU at such a high frequency is not practical. So it's up to you if you want to believe Cerny or random gaffers.
The thing is, the frequency is variable because this is based on SmartShift tech from AMD and the GPU can only maintain those high clocks if the CPU clocks are lowered. So yes, it's practical, but not practical for both the CPU and GPU to run at boost clocks at the same time. One has to give in to the other based on what the workload requires. This was already explained by AMD.
 
The benefits of the SSD, its speed, and getting rid of I/O bottlenecks have to be demonstrated in games. At this point our best comparison is current gen. Let's say the PS4 Pro had all of its current specs EXCEPT with the SSD technology and I/O bottleneck removals, just like the PS5. How would current PS4 Pro games look better if such a huge "game-changing bottleneck removal" had been implemented? What would I, or any other average layman, notice?
 
Totally agree.

The only way Sony can put this right is if they aim for a $399 PS5 or the final price is at least $100 less than the XSX.

I have watched the event again for a second time and man, they can't talk about the PS5 to the general public in this way.

It is like BGs BGs said today: what the hell are you doing, Sony? If this is the first time you are going to talk about the PS5 in a public event, show the fans what your console can really do and stop the crap.

Anyway, I am going to buy the console no matter what.

At least we have all passed the speculation step, and man, it has been hard :messenger_tears_of_joy:
Right now we are still speculating, because it seems to me that very few here have a full comprehension of what the PS5 can do, and that's normal I guess.
 

Null_Key

Neo Member
This is exactly why I want to see games running.
Yup, Cerny just schooled us in knowledge, but results are what's important and hopefully that isn't too far off. Sony has been so secretive it's getting annoying; let's start seeing some of those games at work. I really do enjoy Microsoft's open approach; their PR is doing such a great job.
 

Ovech-King

Gold Member
Don't underestimate how fast and optimized that SSD will be for leveraging GPU power output. Pretty sure Cerny knows what he's doing here by going for smarter unified efficiency. If the machine is also cheaper, they may win the generation again.
 

Reindeer

Member
AMD SmartShift is dictated by thermals, and it's the same thing for PS5. Mark Cerny tried to downplay the role of thermals in the variable frequency, but it's obviously dictated by thermals, or else both the CPU and GPU would be able to maintain max clocks at the same time. Obviously Sony are pushing that GPU hard, and because of thermal limitations SmartShift (variable frequency) is used.
 

Null_Key

Neo Member
The thing is, the frequency is variable because this is based on SmartShift tech from AMD and the GPU can only maintain those high clocks if the CPU clocks are lowered. So yes, it's practical, but not practical for both the CPU and GPU to run at boost clocks at the same time. One has to give in to the other based on what the workload requires. This was already explained by AMD.
Well, that is true in a sense, but rarely has any game needed to run both the CPU and GPU at their limits at the same time. The game determines the workload, and AMD SmartShift adjusts the frequency based on its needs, not the thermals. It's a different approach, and it's hard to say what the result of it is when we haven't really seen it in motion.
 
This was the "rising tides raise all boats" thing -- around 31min45sec
  • Efficiency drops (somewhat) as CUs rise - this is true
  • Other GPU components run faster - this is probably not as relevant - it assumes that when you increase CUs you didn't increase the number or size of other GPU function units ..

Well, it's kind of a justification of the hand they got from one point of view, and has some validity from another


..

I'm still coming to terms with the "small triangles made the fans go too fast in GoW" and "simple geometry made HZD map screen overheat the console" claim .. (33-35min - it's real)

.. I just can't anymore

(not buying Sony again this gen .. I can't take anymore of this)

lol, imagine being a technical fellow helping design one of the most sophisticated game consoles in history and seeing a post like this, with no background to speak of, trying to shit on your comments. Ay dios mio.
 