
Next-Gen PS5 & XSX |OT| Console tEch threaD

1) He is not stating that the PS5 is this generation's Xbone.
2) He is saying, with that comparison, that both consoles are basically using the same hardware. The difference will be barely noticeable. There is no way to know the exact difference on non-final hardware.

I am not saying that the PS5 is slower, but don't read too much into each line.
After the Azure partnership was announced, I've been thinking that maybe Sony & MS collaborated even further to reduce the rising R&D costs:

[IBS chart: chip design costs by process node]


So there's a chance we're talking about the exact same APU (which will also be used in Azure PS5 blades for streaming) with slightly different clocks/cooling systems.

TL;DR: maybe (traditional) console wars as we know them will be dead.
 

McHuj

Member
Does that mean Reiner was right? Maybe...

I think this should be rephrased as:

I don't think Reiner lied or made stuff up. Some devs told him that PS5 is (was?) more powerful.

Without knowing anything more than that, even Reiner himself can't really say (or did he?). He simply relayed what some devs told him. Nor do we know anything about the devs who supposedly relayed this information: are they the technical types (engine designers, programmers, etc.) or more on the other side of development (artists, marketing)? "Dev" can be a very broad term. Without knowing who made the statement that the PS5 is more powerful, and based on what criteria, we shouldn't treat this as gospel, but as just another data point until the specs are revealed.
 

JLMC469

Banned
PS5 more powerful, or more accurately, the next Xbox not more powerful than the PS5. So why would people pick it over the PS5? Another major fuckup by MS. You would think professional pride at a trillion-dollar company like MS would force them to design a more powerful system, but no.

What are you talking about? Calm down :messenger_tears_of_joy:
 
I speculate the PS5 will give me a BJ whenever I press the L2 trigger. Surely my speculation is welcome here, right? :messenger_unamused:

Go shit up the ResetEra thread... oh wait.

I go out for the evening and this is the welcome I return to? No matter.

The entertainment for me was figuring things out, not any console warring. People just take things too personally.

There's no need to worry. I won't be spoiling things from here.

I had fun even if no one else did ;)
 

SonGoku

Member
I would expect closer to 10TF, and this would be quite a way above my expectation, so I would be very happy.
Someone also asked him about 10TF (40CU @ 2000MHz) and he said no.
If you believe the 2GHz leak, that leaves anywhere from 11.2TF to 14.3TF.

Taking into account his "won't blow you away" comment, I'm thinking 11.2-12.2TF.
Anyone still expecting or hoping for 14.2TF or 18.4TF is captured in this brilliant moment :messenger_tears_of_joy:
I wouldn't mind 10TF in a $400 box; for $500, 11TF minimum.
24GB is not happening.
Could happen, but if they have a small DDR4 pool for the OS, 20GB of GDDR6 would be good enough.
This is laughable.
The reason for the delay? Agreed.
I'm thinking the RDNA layout is meant to scale up: instead of adding an additional SE, the current 2 SEs could be beefed up with extra cache, ROPs, and geometry processors.
60CU - 56CU enabled
56CU - 52CU enabled
 

MadAnon

Member
Remember this? A 2.1GHz GPU doesn't sound so crazy now lol
I think this 2GHz is the real deal, and most likely 36/40CUs. Those previously leaked Flute benchmarks fell in line with a Ryzen 1700X + 5700 XT, if I remember correctly?

I'm calling 40CU at 2GHz, which breaks into 10TF territory, 10.2 to be precise. That would fall in line with what Klee said on ResetEra: "More than 9.2TF but don't get too excited."

2.1GHz might be a target to beat that Stadia marketing number, but still under 11TF.
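For anyone who wants to sanity-check the arithmetic in these posts: paper FLOPs for an RDNA GPU are just CUs × 64 ALUs × 2 ops per clock (FMA) × clock. A quick Python sketch, using only the CU counts and clocks speculated above (nothing official):

```python
# Peak FP32 throughput of an RDNA-style GPU:
# each CU has 64 shader ALUs, and each ALU does 2 ops per clock (one FMA).
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(tflops(36, 2.0))  # 9.216  -> Klee's "more than 9.2TF" floor
print(tflops(40, 2.0))  # 10.24  -> the "10.2 to be precise" figure
print(tflops(40, 2.1))  # 10.752 -> above Stadia's advertised 10.7, still under 11TF
```

Paper FLOPs only, of course; this says nothing about real-world performance per TF.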
 

SonGoku

Member
but adding more CUs would break this configuration,
Not necessarily
Individual WGPs can be disabled without turning off an entire SE.
I'm calling 40CU at 2GHz, which breaks into 10TF territory, 10.2 to be precise. That would fall in line with what Klee said on ResetEra: "More than 9.2TF but don't get too excited."
He said no; that leaves 44 to 48 CUs = 11.2TF to 12.2TF, IF you believe the 2GHz leak.
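To show where that 44-48 CU window comes from, here is the same formula swept over the CU counts discussed in the thread, assuming the rumoured 2GHz clock (a sketch, not a spec):

```python
# Sweep the CU counts discussed in the thread at the rumoured 2GHz clock.
# RDNA: 64 ALUs per CU, 2 ops per clock.
CLOCK_GHZ = 2.0

for cus in (36, 40, 44, 48, 52, 56):
    tf = cus * 64 * 2 * CLOCK_GHZ / 1000.0
    print(f"{cus} CU -> {tf:.2f} TF")

# 36 CU -> 9.22 TF
# 40 CU -> 10.24 TF
# 44 CU -> 11.26 TF   <- bottom of the 11.2-12.2TF window
# 48 CU -> 12.29 TF   <- top of it
# 52 CU -> 13.31 TF
# 56 CU -> 14.34 TF
```

Only 44 and 48 CUs land inside the 11.2-12.2TF window; everything past 48 blows through it.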
 

TLZ

Banned
I'm thinking the RDNA layout is meant to scale up: instead of adding an additional SE, the current 2 SEs could be beefed up with extra cache, ROPs, and geometry processors.
60CU - 56CU enabled
56CU - 52CU enabled
Oh, OK. I'm just thinking of the possibility of taking the PS4's 18CUs and multiplying that by 3 = 54. Maybe easier for BC that way.
 
I don't think 2GHz makes sense: more heat, more money for cooling. I think AMD's 1800MHz game clock is indicative of what their GPU can handle before going nuts (heat), unless EUV allows ~1980MHz (~2GHz). Add more CUs, 46 total, for 11.7TF.
 

PUNKem733

Member
PS5 more powerful, or more accurately, the next Xbox not more powerful than the PS5. So why would people pick it over the PS5? Another major fuckup by MS. You would think professional pride at a trillion-dollar company like MS would force them to design a more powerful system, but no.

What? Professional pride?! Is that code for money? 'Cause it's ALWAYS about the bottom line.
 

LSWilson

Member
We all know, based on so many rumors, that they will be around 12TF, which is great!! Anything above that is genie-in-a-bottle wishing. The stuff the OG PS4 spits out, like Second Son and God of War at 1.8TF, is amazing, so 12TF on PS5 will be eye-melting. I mean, look at TLOU 2 and Ghost of Tsushima graphics, taking base models into account; that stuff looks next-gen juicy.
I am hoping for somewhere around 12 as well.
 

LSWilson

Member
2TF CPU? Where?! Even Cell wasn't that powerful...

CPU optimization isn't going anywhere. It's here to stay, especially if you consider the fact that Jaguar and Zen have some similarities (2 clusters of 4 cores) that will definitely require optimization.

Regarding GPU compute, RX 5700 seems to perform worse than RX 580 (XB1X GPU).

[Chart: GPU compute benchmarks, RX 5700 series vs RX 580]


Consoles will definitely need something stronger than that for future-proofing.


18.4 TF is close to 24 TF GCN in rasterization efficiency (4x GPU difference vs Scorpio).
Wow that chart makes the XT look terrible
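As an aside on the quoted "18.4 TF is close to 24 TF GCN" line: the implied conversion is simply 24 / 18.4 ≈ 1.3x performance per FLOP for RDNA over GCN, measured against the 6TF GCN GPU in Scorpio (Xbox One X). A rough sketch of that reasoning, with the 1.3x uplift treated purely as the poster's assumption, not a measured figure:

```python
# "GCN-equivalent" TF figure implied by the quoted post.
RDNA_PER_FLOP_UPLIFT = 1.3   # assumption: the 24 / 18.4 ratio the post implies
SCORPIO_GCN_TF = 6.0         # Xbox One X GPU (GCN)

rdna_tf = 18.4
gcn_equiv_tf = rdna_tf * RDNA_PER_FLOP_UPLIFT
print(gcn_equiv_tf)                    # ~23.9 "GCN TF"
print(gcn_equiv_tf / SCORPIO_GCN_TF)   # ~4.0, the "4x GPU difference vs Scorpio"
```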
 
Regarding GPU compute, RX 5700 seems to perform worse than RX 580 (XB1X GPU).
The RX 5700 XT performs better than the Vega 64 in gaming.
 

MadAnon

Member
Rasterization != compute

You missed this part...

Unfortunately, as I mentioned earlier in my testing observations, the state of AMD's OpenCL driver stack at launch is quite poor. Most of our compute benchmarks either failed to have their OpenCL kernels compile, triggered a Windows Timeout Detection and Recovery (TDR), or would just crash. As a result, only three of our regular benchmarks were executable here, with Folding@Home, parts of CompuBench, and Blender all getting whammied.


And "executable" is the choice word here, because even though benchmarks like LuxMark would run, the scores the RX 5700 cards generated were nary better than the Radeon RX 580. This a part that they can easily beat on raw FLOPs, let alone efficiency. So even when it runs, the state of AMD's OpenCL drivers is at a point where these drivers are likely not indicative of anything about Navi or the RDNA architecture; only that AMD has a lot of work left to go with their compiler.


That said, it also serves to highlight the current state of OpenCL overall. In short, OpenCL doesn't have any good champions right now. Creator Apple is now well entrenched in its own proprietary Metal ecosystem, NVIDIA favors CUDA for obvious reasons, and even AMD's GPU compute efforts are more focused on the Linux-exclusive ROCm platform, since this is what drives their Radeon Instinct sales. As a result, the overall state of GPU computing on the Windows desktop is in a precarious place, and at this rate I wouldn't be entirely surprised if future development is centered around compute shaders instead.

Consoles don't use OpenCL anyway. After doing some searching on OpenCL, it seems like it has very poor adoption and is barely used.
 

nowhat

Member
Is Sony making anything this Gamescom?
They don't have their own press event, but they are present in the "Opening Night Live" show, starting today at 19:00 BST (14:00/11:00 EDT/PDT). I wouldn't expect much in terms of hardware, or much in general since they aren't hosting their own event, but supposedly there should at least be something about Death Stranding.
 

You missed this part...

Consoles don't use OpenCL anyway.
I didn't miss anything. Consoles have an OpenCL equivalent via low-level APIs (GNM, DX12).

Software is one part of the equation. Hardware is another.

1st-gen Navi seems to have sacrificed compute in favor of rasterization, but consoles need both, so I expect Sony/MS to inject their semi-custom secret sauce.

Turing, on the other hand, is the first NVIDIA uarch that excels at both rasterization and compute, unlike Pascal/Maxwell.
 
If these consoles can do this in real time... then I don't care if it was 9TF or not.


Looking at what developers did with 1.8TF, 9TF is going to be amazing. Even Forza Horizon 4 looks pretty stellar on a 1.2TF machine. I think people's expectations were set too high, but being armchair hardware developers will do that.
 