
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

pasterpl

Member
So, considering the difficulty of joining the console market, you don't think any company will actually try to do it, apart from Google with their failed effort? Also, do you think Google will make another attempt?

I think Samsung or some Chinese companies are more likely to enter the race.
 

M1chl

Currently Gif and Meme Champion
Could we consider the CUs are a form of SPEs?
Well, if you meant it like that, then yes, in some form today's GPUs are built around that principle. However, you were talking about the APU, which consists of a CPU, a memory controller, and a GPU. The CPU cannot directly operate the CUs, and programming is obviously not the same across the whole chip. So I am looking at this more from a dev perspective, because I have experience with it.
 

kareemna

Member
Well, if you meant it like that, then yes, in some form today's GPUs are built around that principle. However, you were talking about the APU, which consists of a CPU, a memory controller, and a GPU. The CPU cannot directly operate the CUs, and programming is obviously not the same across the whole chip. So I am looking at this more from a dev perspective, because I have experience with it.

Exactly, and noticing this is why I asked. I think I watched that video by MVG before; I'll watch it again.
 

M1chl

Currently Gif and Meme Champion
M1chl The video shared earlier highlighted the importance of the cache coherency engines on the PS5's SoC; what do you think their benefit is?
It matters because that way every compute unit (whether on the GPU or the CPU) can access and map the same pool of memory, so you don't have to think (at a low level) about where specifically to store the data. That's without any API; the API solves that for you, so another benefit is that you don't have to wait until the memory region you want to write to is free. But we are talking really low level here, and the vendor's API takes care of that. The real benefit is that you can get by with one memory controller, which means cheaper hardware and fewer transistors, so at the same price you can build a chip with more power (and a smaller die, which also means a cheaper product).

When it comes to the PS3, the issue is that an SPE does not have its own memory controller, so everything was stalled by the PPC core that owned the controller.
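The saving the coherent shared pool gives you can be put in rough numbers. This is a toy calculation with invented buffer sizes and an illustrative bandwidth figure, not anything measured from real hardware:

```python
# Toy model (invented numbers): cost of shuttling a buffer between
# discrete CPU and GPU memory pools, versus a cache-coherent unified
# pool where both sides map the same memory and no copy is needed.

def copy_time_ms(size_bytes: float, bandwidth_gb_s: float) -> float:
    """Time to move size_bytes over a bus with the given bandwidth."""
    return size_bytes / (bandwidth_gb_s * 1e9) * 1e3

frame_budget_ms = 1000 / 60          # a 60 fps frame budget
buffer = 256 * 1024 * 1024           # hypothetical 256 MB of shared data

# Discrete pools: read to the CPU side, then write back (two transfers).
split_cost = 2 * copy_time_ms(buffer, 336.0)   # 336 GB/s is illustrative

# Unified, coherent pool: CPU and GPU map the same physical memory.
unified_cost = 0.0

print(f"split pools : {split_cost:.2f} ms of a {frame_budget_ms:.1f} ms frame")
print(f"unified pool: {unified_cost:.2f} ms")
```

Even at hundreds of GB/s, round-tripping a large buffer eats a measurable slice of a 16.7 ms frame, which is the overhead the coherency engines let you skip.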
 
Last edited:

mitchman

Gold Member
I know that AMD's Epyc CPUs are for servers and not for games, but I'd like to know how the 32- and 64-core models would stack up against the CPUs in the PS5 and XSX as part of a build with comparable graphics capabilities and bandwidth.
I have a 3970X 32c/64t Threadripper for work and it runs at a base clock of 3.7GHz, which is comparable to the consoles. I would expect the single-core performance of the consoles to not be far behind this monster CPU. Zen 2 scales pretty well with multiple cores, and massive multicore workloads are what we use these for (compiling 15M+ lines of code).
The console CPU is even more comparable to the Ryzen 7 3700X (my home CPU), with the same 8c/16t setup. In some ways, being an APU rather than a chiplet design, memory access latency might be better on the consoles than on the desktop CPUs, but I'm speculating now.
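The scaling point can be sketched with Amdahl's law. The parallel fraction below is an assumed figure for a large build job, purely for illustration:

```python
# Rough Amdahl's-law sketch (parallel fraction is an assumption): how a
# mostly-parallel workload like a big compile scales from 8 cores
# (console-class CPU) up to 32/64 cores (Threadripper-class CPU).

def speedup(cores: int, parallel_fraction: float) -> float:
    """Amdahl's law: the serial part never shrinks, the parallel part divides."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.95  # assume the compile is ~95% parallelisable
for cores in (8, 16, 32, 64):
    print(f"{cores:2d} cores -> {speedup(cores, p):.1f}x over single core")
```

The remaining serial fraction is why a 32-core part is nowhere near 4x an 8-core part on most real workloads, and why single-core performance still matters for games.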
 

travisktl

Member
Wasn’t there a similar rumour about Sony buying Konami some time ago?
It was about Sony buying the Castlevania, Silent Hill, and Metal Gear IPs, which I don't see happening either. Konami has no reason to sell those. I could see Sony doing a licensing deal for those IPs, though.
 
So what are the assumptions of PS5's frequency range? I'm so tired of the fanboys out in the world who still insist that PS5 is mainly a 9.2 Tflop machine that sometimes hits boost speeds.

I personally believe Cerny when he says it will hit max speeds most of the time, but I'm just curious how much of a drop we will get. Maybe 2-4%?
 

Aceofspades

Banned
So what are the assumptions of PS5's frequency range? I'm so tired of the fanboys out in the world who still insist that PS5 is mainly a 9.2 Tflop machine that sometimes hits boost speeds.

I personally believe Cerny when he says it will hit max speeds most of the time, but I'm just curious how much of a drop we will get. Maybe 2-4%?

You heard it from the man himself (Cerny) and yet you still need validation from random forum posters? 😁
 

pasterpl

Member
>Tonight between 10PM and Midnight (est) Microsoft will finalize the purchase of the entire gaming catalogue from Konami.
>Posted more than three weeks ago.

Nothing to see here.

To be fair, regardless of who bought Konami, it wouldn't be announced via a simple PR release; they would probably announce it during some kind of event (online or offline).

I might be wrong, but IMO it wouldn't be a good business decision. While I appreciate Konami's history (I spent countless nights playing ISS/PES on the PSX), I don't think their IPs are strong enough to justify a huge deal.
 
MisterXmedia.com?
Do share.



  • Tales from your ass or you gonna post a link to where you read this

Of course. But not from my ass. XSX is RDNA2+ or 3! PS5 is even RDNA 1.5. And Jason is sooooo pathetic

 

SSfox

Member
Thoughts, remember that rumor that PS5 would cost 450 bucks?

This is pure speculation, but what if the 175GB 5GB/s SSD costs 50 bucks? Then they could have cut that number at the last minute to be able to sell the console at $399.
 
Well he didn't say exactly how low it would go. Do you expect it to ever be under 10 Tflops?
The truth is that we don't know. The variability is based on the power budget (watts consumed), and we don't know how much power any given game, or any scene within it, will require. If power consumption goes too high, developers will set a lower clock for the GPU in advance. But again, how this is handled depends on the game, scene by scene; for all we know the clock could stay fixed the whole time because a game was designed not to exceed the power budget, regardless of temperatures (which are taken care of by the cooling).
Also, we're not taking SmartShift into account: if the CPU has power to spare, that power will be transferred to the GPU if needed, meaning the GPU could remain at max clocks even when it would otherwise surpass the power consumption limit, because the CPU will lower its clocks; total power consumed stays the same, but the GPU gets a boost.
We can't predict how SmartShift will work because we don't know how often Zen 2 will be at full 100% load and thus unable to lower its clocks to raise those of the GPU while still keeping the power balanced.
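The power-sharing idea above can be sketched as a crude model. All the wattage figures here are invented for illustration (only the clock caps are the publicly stated PS5 numbers), and the linear scaling is a simplification of whatever AMD actually does:

```python
# Hypothetical SmartShift-style power sharing (wattages invented): a
# fixed SoC power budget is split between CPU and GPU, and unused CPU
# headroom lets the GPU hold a higher clock.

BUDGET_W = 200.0          # invented total SoC power budget
GPU_MAX_CLOCK = 2.23      # GHz, PS5's stated GPU cap
CPU_MAX_CLOCK = 3.5       # GHz, PS5's stated CPU cap

def gpu_clock(cpu_draw_w: float, gpu_demand_w: float) -> float:
    """Scale the GPU clock down linearly when its demand exceeds what
    remains of the budget after the CPU takes its share (a crude model)."""
    available = BUDGET_W - cpu_draw_w
    if gpu_demand_w <= available:
        return GPU_MAX_CLOCK
    return GPU_MAX_CLOCK * (available / gpu_demand_w)

# Light CPU scene: the GPU keeps its max clock.
print(gpu_clock(cpu_draw_w=50.0, gpu_demand_w=150.0))
# Heavy CPU + heavy GPU scene: the GPU drops a few percent.
print(gpu_clock(cpu_draw_w=60.0, gpu_demand_w=150.0))
```

Even in this toy version, the drop in the contended case works out to only a few percent of max clock, which matches the kind of small deficit being asked about.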
 
I wasn't trying to be a dick when I said that. I just meant that there's a good chance the game will look much better, considering what the devs have said.
Sure, I was just clarifying; it was only a jab at the game, not at the PS5, after the devs were so confident about "imma do it all".
I hope so, because the water at the start should improve a lot with RT alone. But aside from that, if they are a small studio they have other things to worry about than complaints about the graphics in the YouTube comments.
 

PaintTinJr

Member
The CPU has no business handling the majority of GPU workloads, since the CPU can't keep up with the GPU's computational and memory-access intensity.

AI and collision physics can be offloaded to the GPU with its machine learning instruction set and RT cores.

An 8-core Zen 2 with AVX2 gather instructions is not a modern GPU.

Physics that alters gameplay can only be offloaded to the GPU if the results of the simulation are periodically returned to the game logic (same with AI), i.e. it incurs copies from the 10GB pool to the 6GB pool at 336GB/s (AFAIK): a read into the CPU cache, followed by a write to the 6GB region, which isn't required on the PS5. Forward-only simulations that run on the GPU merely to improve the visuals wouldn't incur such a cost, obviously; a standard wave simulation used to render an ocean is likely done in this way. By comparison, a soldier stood on a beach with a BFG, shooting voids through those waves to capsize inbound vessels as the adjacent water collapses into the voids, would require compute on both the GPU and the CPU to handle the physics/collision/AI and game logic, all incurring bi-directional data copies on a system that semantically has discrete memory pools for the CPU and GPU.
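To put a rough ceiling on that round-trip cost, here is a back-of-envelope calculation. The per-frame copy budget is an arbitrary assumption; only the 336 GB/s figure comes from the discussion above:

```python
# Back-of-envelope (copy budget is assumed): how much simulation state
# could be round-tripped GPU -> CPU -> GPU per frame over the quoted
# 336 GB/s region, spending only 1 ms of a 16.7 ms (60 fps) frame.

BW_GB_S = 336.0           # quoted bandwidth of the XSX 6 GB region
COPY_BUDGET_MS = 1.0      # arbitrary slice of the frame given to copies
ROUND_TRIPS = 2           # one read to CPU caches, one write back

budget_s = COPY_BUDGET_MS / 1e3
bytes_per_frame = BW_GB_S * 1e9 * budget_s / ROUND_TRIPS
print(f"~{bytes_per_frame / 1e6:.0f} MB of gameplay sim state per frame")
```

So the raw bandwidth cost of the copies is large but not prohibitive; the harder problems are the synchronisation stalls and contention the copies introduce, which this arithmetic doesn't capture.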
 
Have people noticed that the PS5 specs page has been updated to show that it supports the VRR spec of HDMI 2.1? Just like people have been saying? Yet folks want to argue because it wasn't specifically mentioned by Cerny.

Support of 4K 120Hz TVs, 8K TVs, VRR (specified by HDMI ver.2.1)
Aren't you confusing VRR (Variable Refresh Rate, i.e. FreeSync) with VRS (Variable Rate Shading)?
 
Semantics: from an application perspective that's exactly how non-unified memory behaves on consoles (well, most of them anyway).

It's not as good as a single 320-bit bus across the address range, absolutely, but for 16 GB of ram that was never an option. And there's no realistic situation in which it won't be better than a single 256-bit bus.

From an application perspective I do think there's a difference, even on a system where two pools of memory share a common address range. When moving between memory pools, I believe (I read a lot but actually do a lot less, so forgive me if I'm wrong) that there can be wildly different costs for different parts of the system accessing the different pools. For example, IIRC Cell's reads from the GPU's GDDR3 were incredibly, unbelievably slow, something like a thousandth of a percent of the pool's actual bandwidth.

XSX handles such situations very gracefully compared to a split memory pool, with relatively small and manageable gradations in total effective bandwidth depending on the access pattern. It appears to be a pretty forgiving way of getting a worthwhile boost in effective bandwidth for a system that can only stretch to 16GB of RAM. And you don't have to DMA between regions unless you got something wrong to begin with.

I can't think of any split-memory-pool console that would be as fast, or handle being treated as one single pool as well, as this.
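Those gradations in effective bandwidth can be estimated with a simple blend. The bandwidth figures are the published XSX numbers; the traffic mix is invented, and real behaviour depends on scheduling details this ignores:

```python
# Sketch of the "gradations" idea (traffic mix is invented): XSX exposes
# 10 GB at 560 GB/s and 6 GB at 336 GB/s, so the bandwidth you actually
# see depends on what fraction of accesses hit each region.

FAST_BW, SLOW_BW = 560.0, 336.0   # GB/s, the published XSX figures

def effective_bw(fast_fraction: float) -> float:
    """Harmonic blend: each byte takes time at its own pool's rate, so
    the average is dominated by time spent in the slower region."""
    return 1.0 / (fast_fraction / FAST_BW + (1.0 - fast_fraction) / SLOW_BW)

for frac in (1.0, 0.8, 0.6):
    print(f"{frac:.0%} fast-pool traffic -> {effective_bw(frac):.0f} GB/s")
```

The curve degrades smoothly between 560 and 336 GB/s rather than falling off a cliff, which is the "small and manageable gradations" point, and it stays above a flat 256-bit/448 GB/s-class bus as long as most traffic lands in the fast region.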
 

PaintTinJr

Member
It's not as good as a single 320-bit bus across the address range, absolutely, but for 16 GB of ram that was never an option. And there's no realistic situation in which it won't be better than a single 256-bit bus.

From an application perspective I do think there's a difference, even on a system where two pools of memory share a common address range. When moving between memory pools, I believe (I read a lot but actually do a lot less, so forgive me if I'm wrong) that there can be wildly different costs for different parts of the system accessing the different pools. For example, IIRC Cell's reads from the GPU's GDDR3 were incredibly, unbelievably slow, something like a thousandth of a percent of the pool's actual bandwidth.

XSX handles such situations very gracefully compared to a split memory pool, with relatively small and manageable gradations in total effective bandwidth depending on the access pattern. It appears to be a pretty forgiving way of getting a worthwhile boost in effective bandwidth for a system that can only stretch to 16GB of RAM. And you don't have to DMA between regions unless you got something wrong to begin with.

I can't think of any split-memory-pool console that would be as fast, or handle being treated as one single pool as well, as this.
(IMHO) 320-bit still isn't ideal unless you have data packets that conveniently align that way without padding, because fragmentation and padding kill bandwidth efficiency. I would love for Xbox to do an early tech analysis of their RT Minecraft to let us see just how well their hardware choices align with the early code for the demos they showed.

Although you say the XSX handles these situations gracefully, I wonder how we can tell at this stage. What games will try to achieve on XSX and PS5 is yet to be defined, and looking at what has been described of the split memory access, isochronous traffic is not a trivial traffic type to handle under high bandwidth contention without its priority access impeding utilisation. From a data comms perspective, it isn't hard to envisage controller lag, frame pacing, and frame-rate stability being occasional casualties of the bandwidth contention conundrum.
 