
Next-Gen PS5 & XSX |OT| Console tEch threaD

I'm wondering if a next-gen Spiderman patch means it could potentially decrease the install size of the game?

Cerny has said they duplicate assets 400 times to optimize HDD seek patterns (a technique used since the PS1/CD era).

SSD space will be valuable, so better not squander it.

If that's the reason game updates are so bloated this gen, then wow... although I suspect 1080p (and soon 4K) assets don't help either.
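A rough back-of-envelope of what removing that duplication could save (all numbers below are made up purely for illustration; no one has published an actual breakdown):

```python
# Hypothetical figures, just to illustrate how HDD seek-avoidance duplication inflates installs.
unique_assets_gb = 20.0    # size if every asset were stored exactly once
duplicated_share = 0.5     # assume half the assets get duplicated near each zone that needs them
copies_per_asset = 4       # assumed average copy count (the worst cases reportedly reach ~400)

hdd_install = unique_assets_gb + unique_assets_gb * duplicated_share * (copies_per_asset - 1)
ssd_install = unique_assets_gb  # fast random reads mean each asset only needs to exist once

print(f"HDD-style install: {hdd_install:.0f} GB")  # 50 GB
print(f"SSD-style install: {ssd_install:.0f} GB")  # 20 GB
```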
 

DeepEnigma

Gold Member
I'm wondering if a next-gen Spiderman patch means it could potentially decrease the install size of the game?

Cerny has said they duplicate assets 400 times to optimize HDD seek patterns (a technique used since the PS1/CD era).

SSD space will be valuable, so better not squander it.

If that's the reason game updates are so bloated this gen, then wow... although I suspect 1080p (and soon 4K) assets don't help either.

I was actually thinking the very same thing after the article came out as well.
 
The 128 CU part, yes, but in this case we're talking about a semi-custom chip with fewer (80) CUs, so they could have added ROPs/TMUs and other GPU-related circuitry.

Either way, we know Arcturus is a Vega successor and next-gen consoles are getting Navi/RDNA.

Sony might have also experimented with Vega-based devkits.
This business with "Arcturus" needs to stop. Arcturus is not an architecture. It's not a successor to Vega, it is a derivative of Vega. It is a singular chip with no graphics functionality whatsoever. It has no geometry, no ROPs, no TMUs, no display output. It is just for compute. It has no other function.

_But what if they took Arcturus and put the graphics functionality back in?_

There already is an Arcturus with graphics, it's called Vega 20 (Radeon VII/Instinct MI50 & MI60). It would be absolutely pointless having a 128CU Vega graphics processor in a console because GCN is massively bottlenecked in graphics workloads. There are occupancy problems with the SIMDs, the cache hierarchy is suboptimal, and there is a geometry bottleneck.

AMD made RDNA for gaming, so Microsoft and Sony will be using Navi derivatives, with some RT customisation.
Arcturus is Vega and Vega, while being an excellent compute architecture, is woeful at graphics. Let it die.

Why is there this bizarre obsession with Arcturus?
 

Perrott

Member
I still don’t get why people went crazy over Sony’s E3 that year.
It was filled with megatons, from the Big Three (TLG, FF7R and Shenmue) to COD switching sides, and from Guerrilla doing something new to the jaw-dropping Uncharted 4 demo. The conference also had a really good pace that maintained the hype of previous reveals throughout the whole show.
 

Evilms

Banned
I still don’t get why people went crazy over Sony’s E3 that year.

Because

[hype GIFs]
 

pawel86ck

Banned
You are not getting native 4K and you are not getting 60 fps.

BOTH things require 2x the GPU resources: going from 1440p to native 4K, and going from 30 fps to 60 fps. You are essentially taking a 10 TFLOPs GPU and turning it into a 2.5 TFLOPs GPU. Devs will never waste precious GPU cycles on rendering more pixels when they can use them to add detail to those pixels. They will never waste half of the GPU resources on 60 fps, unless of course they are competing with CoD and need their multiplayer shooters to be 60 fps.

I do agree that if we are only getting 5700 performance, we can forget about 4K. But even at 12-14 TFLOPs, you won't see devs target native 4K, unless of course one console is 8 TFLOPs and the other is 14 TFLOPs, in which case devs will target the lowest common denominator and simply use the remaining 6 TFLOPs on pushing native resolution, like they do with mid-gen refreshes.

And yeah, my 2080 struggles to run games at native 4K 60 fps with ray tracing turned on. Gears of War runs at 45 fps at native 4K. I really don't see how next-gen GPUs will run anything at native 4K 60 fps unless they are indie games not worried about pushing graphics effects like destruction, NPCs, ray tracing and other kinds of simulations devs previously couldn't do. I expect to see 100% of open world games at 4K checkerboard and 30 fps. It will be like Uncharted 4: campaign 30 fps, multiplayer 60 fps.
14TF Navi should be about as fast in games as 18TF Polaris, so do you really think that kind of power would not be enough to run native 4K 60fps in the majority of games? Of course a 12-core CPU and a 14TF GPU leak sounds too good to be true, especially at a $500-599 price point, but like I have said, I would expect native 4K from next-gen consoles, especially when the Xbox One X's 6TF GPU can already run many games at 4K.
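To put concrete numbers on the claims being traded here, a quick pixel-rate calculation, assuming (naively) that GPU cost scales linearly with pixels per second and ignoring CPU limits. Note that 1440p -> 4K is actually 2.25x the pixels, not 2x, so the "10TF becomes 2.5TF" figure above is slightly optimistic:

```python
def scale_factor(src_res, dst_res, src_fps, dst_fps):
    """Naive ratio of pixels-per-second between two rendering targets."""
    src_px = src_res[0] * src_res[1]
    dst_px = dst_res[0] * dst_res[1]
    return (dst_px * dst_fps) / (src_px * src_fps)

factor = scale_factor((2560, 1440), (3840, 2160), 30, 60)
print(f"1440p30 -> 4K60 costs ~{factor:.2f}x")                     # ~4.50x
print(f"10 TFLOPs effectively becomes ~{10 / factor:.1f} TFLOPs")  # ~2.2
```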
 

TeamGhobad

Banned
14TF Navi should be about as fast in games as 18TF Polaris, so do you really think that kind of power would not be enough to run native 4K 60fps in the majority of games? Of course a 12-core CPU and a 14TF GPU leak sounds too good to be true, especially at a $500-599 price point, but like I have said, I would expect native 4K from next-gen consoles, especially when the Xbox One X's 6TF GPU can already run many games at 4K.

More. They said 1 Navi TFLOP --> 30-60% more than a Vega TFLOP.
 

SlimySnake

Flashless at the Golden Globes
14TF Navi should be about as fast in games as 18TF Polaris, so do you really think that kind of power would not be enough to run native 4K 60fps in the majority of games? Of course a 12-core CPU and a 14TF GPU leak sounds too good to be true, especially at a $500-599 price point, but like I have said, I would expect native 4K from next-gen consoles, especially when the Xbox One X's 6TF GPU can already run many games at 4K.
You are not getting it. It's not about whether or not the GPU is capable enough to do native 4K 60 fps; it's whether devs will use the GPU resources to make better looking games vs making current-gen looking games at a higher resolution.

RDR2 uses 4.2 TFLOPs just to render native 4K. The PS4 runs that game at 1080p using its 1.84 TFLOPs. It will become much harder to render games at native 4K when devs start to pack more detail into each pixel. Forget about 60 fps in open world games.
 

Lort

Banned
Obviously (as you're only pretending not to know this) the videos being referenced by the person I quoted...

a gif of a religious cult leader throwing out Half-Life 3?!?

literal gifs of war depicting the "console wars"
 

TLZ

Banned
Obviously (as you're only pretending not to know this) the videos being referenced by the person I quoted...

a gif of a religious cult leader throwing out Half-Life 3?!?

literal gifs of war depicting the "console wars"
Well, you quoted me, not him. The gifs are quite funny regardless of which side you're on.
 

Lort

Banned
You are not getting it. It's not about whether or not the GPU is capable enough to do native 4K 60 fps; it's whether devs will use the GPU resources to make better looking games vs making current-gen looking games at a higher resolution.

RDR2 uses 4.2 TFLOPs just to render native 4K. The PS4 runs that game at 1080p using its 1.84 TFLOPs. It will become much harder to render games at native 4K when devs start to pack more detail into each pixel. Forget about 60 fps in open world games.

This is a very complex issue actually... it used to be much simpler.

How does Gears 5 do 4K 60 fps with awesome graphics? ... optimisation...

But you can't go faster than the hardware, however you can always optimise... when every game had a simple renderer, if you wanted better textures you needed more bandwidth... if you wanted more polygons you needed more vertex shaders...

Now most GPUs could do 4K 60 fps on any scene (even with multiple layers of pixel FX) if they only had to draw the visible triangles... so it's all about the back-end code: triangle culling, game world streaming, and pixel shader optimisation...

Games that do that well can look amazing and run 4K 60... even on current-gen hardware... but you can't do all the tricks in all situations... open world games can't order the triangles and cull the map anywhere near as much... so it's way harder to get RDR or GTA to 60 fps than, say, Gears.

Triple-A devs should be able to make 4K 60 fps open world games on next gen regularly.

...but there will be plenty of 30 fps games where devs want to throw crazy amounts of triangles and shaders and don't have the time, money or capability to deliver 60 fps... and will lock 30-55 fps games to 30... or support variable refresh rate ;)
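For anyone curious what that "triangle culling" back-end work actually involves, here's a minimal illustrative sketch of two classic rejection tests (backface and frustum). Real engines do this hierarchically, per mesh cluster and largely on the GPU, so treat this as a toy rather than how any particular engine does it:

```python
import numpy as np

def backface_cull(tri, camera_pos):
    """Keep a triangle only if its front face points towards the camera."""
    a, b, c = tri
    normal = np.cross(b - a, c - a)          # winding order defines the front face
    return np.dot(normal, camera_pos - a) > 0.0

def frustum_cull(centre, radius, planes):
    """Keep a bounding sphere only if it isn't fully outside any frustum plane.
    Each plane is (unit_normal, d), with the inside satisfying dot(n, p) + d >= 0."""
    return all(np.dot(n, centre) + d >= -radius for n, d in planes)

cam = np.array([0.0, 0.0, 5.0])
tri = [np.array([-1.0, -1.0, 0.0]), np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(backface_cull(tri, cam), backface_cull([tri[0], tri[2], tri[1]], cam))  # True False

# A single plane one unit in front of the camera (camera looks down -z): keeps anything with z <= 4.
near_plane = (np.array([0.0, 0.0, -1.0]), 4.0)
print(frustum_cull(np.array([0.0, 0.0, 0.0]), 1.0, [near_plane]))   # True  (in view)
print(frustum_cull(np.array([0.0, 0.0, 10.0]), 1.0, [near_plane]))  # False (behind the camera)
```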
 

SmokSmog

Member
You are not getting it. It's not about whether or not the GPU is capable enough to do native 4K 60 fps; it's whether devs will use the GPU resources to make better looking games vs making current-gen looking games at a higher resolution.

RDR2 uses 4.2 TFLOPs just to render native 4K. The PS4 runs that game at 1080p using its 1.84 TFLOPs. It will become much harder to render games at native 4K when devs start to pack more detail into each pixel. Forget about 60 fps in open world games.
LOL, what a lie!
The PS4 Pro renders 1920x2160 = ~4 million pixels; 4K is ~8 million pixels.
The PS4 Pro has slow memory bandwidth.

Cerny doubled the GPU CUs from 18 to 36 and increased the bandwidth only slightly, from 176 --> 217 GB/s.
The PS4 Pro is a little turd.

The mid-gen refresh cycle was a mistake; this is what you get when you sell a console for less than $400 at a profit. An underpowered toy.

I hope next gen will cost no less than $500, with maybe a $600 BOM.
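For context on the bandwidth complaint, here's the ratio using the publicly quoted specs. It's a very blunt metric (the Pro's GPU also reportedly gained better colour compression), but it shows the direction:

```python
# Publicly quoted specs: PS4 ~1.84 TFLOPs @ 176 GB/s, PS4 Pro ~4.2 TFLOPs @ 217 GB/s
consoles = {"PS4": (1.84, 176.0), "PS4 Pro": (4.2, 217.0)}
for name, (tflops, gbps) in consoles.items():
    print(f"{name}: {gbps / tflops:.0f} GB/s per TFLOP")
# PS4: ~96 GB/s per TFLOP, PS4 Pro: ~52 -- compute grew much faster than bandwidth did
```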
 

SlimySnake

Flashless at the Golden Globes
LOL, what a lie!
The PS4 Pro renders 1920x2160 = ~4 million pixels; 4K is ~8 million pixels.
The PS4 Pro has slow memory bandwidth.

Cerny doubled the GPU CUs from 18 to 36 and increased the bandwidth only slightly, from 176 --> 217 GB/s.
The PS4 Pro is a little turd.

The mid-gen refresh cycle was a mistake; this is what you get when you sell a console for less than $400 at a profit. An underpowered toy.

I hope next gen will cost no less than $500, with maybe a $600 BOM.
Learn to read. 1.8 TFLOPs + 4.2 TFLOPs = 6 TFLOPs.

I am saying that if RDR2 on the base PS4 took 1.84 TFLOPs to render the native 1080p image, then all the remaining horsepower in the X1X (4.2 TFLOPs out of 6 total) went into simply upping the resolution to native 4K. And that's at 30fps. To run the same game at twice the frame rate you would need a better CPU AND a shit ton of GPU resources.

Devs are simply not going to waste the GPU natively rendering 8 million pixels.
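Extending that same (crude) linear pixels-times-framerate assumption forward. Note that the real X1X manages 4K30 at ~6TF, so linear scaling slightly overestimates, but it's in the right ballpark:

```python
# Crude extrapolation: assume GPU cost scales linearly with pixels * fps from the PS4 baseline.
base_tflops, base_pixels, base_fps = 1.84, 1920 * 1080, 30  # RDR2 on base PS4

def needed_tflops(width, height, fps):
    return base_tflops * (width * height * fps) / (base_pixels * base_fps)

print(f"4K30: ~{needed_tflops(3840, 2160, 30):.1f} TFLOPs")  # ~7.4 (the X1X does it with ~6)
print(f"4K60: ~{needed_tflops(3840, 2160, 60):.1f} TFLOPs")  # ~14.7, before any CPU headroom
```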
 

Mass Shift

Member
Skimming the last few pages for TF guesstimates, I see 9TF, 10TF and some posts about 14TF.

Where are all the insiders?

Elsewhere. Playing it safe, hiding behind caution mostly. And who could blame them? At any moment something could be revealed that totally blows up their credibility. lol
 
Skimming the last few pages for TF guesstimates, I see 9TF, 10TF and some posts about 14TF.

10TF is probably where it's going to be.

There was a tweet some time back about how one of the consoles' GPUs was scoring over 20,000 in the 3DMark Fire Strike test*. The nearest graphics card to that 20,000 score is a GTX 1080 Ti, and that card runs at around 10-11TF.

There are plenty of videos on YouTube showing that card doing 40-50fps in games at 4K, over 60fps at 1440p, and well over 80fps at 1080p.

But since we're on a console and can optimise heavily, it wouldn't be unbelievable for games to hit 60fps in 4K. But, of course, this is entirely up to the devs.

(* - yes, yes, we're all well aware that 3DMark doesn't exist on consoles, but these are dev kits running all sorts of software tools and people find ways of making this stuff happen).
 

pawel86ck

Banned
You are not getting it. It's not about whether or not the GPU is capable enough to do native 4K 60 fps; it's whether devs will use the GPU resources to make better looking games vs making current-gen looking games at a higher resolution.

RDR2 uses 4.2 TFLOPs just to render native 4K. The PS4 runs that game at 1080p using its 1.84 TFLOPs. It will become much harder to render games at native 4K when devs start to pack more detail into each pixel. Forget about 60 fps in open world games.
RDR2 is a big open world game with insane graphics, and yet it runs on a 6TF GPU. In order to run the same game at 60fps you will of course need even more GPU resources, but 14TF Navi (an 18-20TF Polaris equivalent) would have enough, because it's roughly 3x the Xbox One X's power. Of course some developers would still choose 30fps even on 14TF Navi, but many would go for 60fps or at least offer NATIVE 4K. I would expect native 4K as a standard on PS5 the same way people expected 1080p from the PS4. The thing is, these days more and more people own 4K screens, and a native picture offers unmatched quality. The Xbox One X and PS4 Pro are mid-gen refreshes, so 4K is not a standard just yet, but on PS5 and the next Xbox it should be.

With something like the RX 5700, developers will not put 70% of the GPU resources into native 4K alone, so I can understand what you are saying. The RX 5700 is just too weak to offer 4K plus a nice graphical fidelity improvement over current-gen games. They need more TFLOPs, like for example 14TF Navi. People say the 14TF leak sounds too good to be true, and I agree, especially given the current facts (the RX 5700 is already a big and power-hungry chip), but the thing is, with technology normal people never know everything.

Did anyone expect unified shaders in the Xbox 360? No one had even heard about it a year before the 360 launched, and on the PC market we had to wait a whole year before the first GPUs with unified shaders even launched. Not long ago people would have laughed if you had suggested HW RT on next-gen consoles, and yet there you have it.

So personally I'm not going to discredit this 14TF leak just because it sounds too good. Until MS and Sony tell us the official specs of their next-gen consoles, we can't be sure about the TF numbers. All people can do right now is estimate the TF number based on current knowledge and technology, but both PS5 and Scarlett will launch a year from now and will probably use some clever technologies that are planned for the PC market in 2021.

If however next-gen consoles end up at 8-9TF, people will of course still buy them regardless of the resolution games run at, but then I would expect a cheap price point like $399, because RX 5700 performance (especially after NV launches their Ampere GPUs) will be considered average at best.
 

Racer!

Member
If some rumors are true, PS5 and Scarlett will get some kind of machine learning capabilities built in.

I wonder if this is also in part because of next-gen animation systems. Tim Sweeney was asked on Twitter about a year ago how he saw the future of character animation, and answered that we were probably a couple of years away from a "revolution".

I recently came across this tweet by an animator at Naughty Dog:

[embedded tweet]
Could this be the "revolution"?

Excited for next gen!
 
If some rumors are true, PS5 and Scarlett will get some kind of machine learning capabilities built in.
Machine learning (among other things) will need tons of compute power to deliver a next-gen leap.

What most people don't understand is that while 3D graphics are scalable (i.e. a 14TF/4K target can be downgraded to 1440p on 8-9TF), compute algorithms (such as neural networks, AI pathfinding, physics etc.) are NOT scalable by nature. We're talking about gameplay-enhancing algorithms, and it's not acceptable to downgrade gameplay (unlike resolution).

So, if some game devs are really hellbent on delivering a next-gen leap via GPGPU algorithms, then you will also need raw compute power, aka high/double-digit TF.

I feel like some people are too fixated on the CPU, but it's not the CPU that is going to run all this crazy stuff. GPGPU isn't a forced "gimmick" because of Jaguar shenanigans. It's here to stay. FOREVER!

PCs are different, because there's PCIe latency between the discrete CPU and the GPU, so GPGPU isn't always beneficial. On PCs you need the CPU's FPU/vector unit (which is on the same die, so zero latency) to do stuff like physics etc.

Consoles utilize a monolithic APU die and fast, unified DRAM. There is no game-breaking latency between the CPU and the GPU. Consoles are specifically made to take advantage of heterogeneous computing. If you're not using it, you're doing it wrong!

TL;DR: having a PC-centric (aka CPU-centric) way of thinking to understand consoles is an exercise in futility. :)
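A toy illustration of the scalability point (numbers invented): the render bill tracks pixel count, so it shrinks when you drop resolution, while a gameplay system like crowd AI tracks agent count, which you can't cut without changing the game.

```python
# Purely illustrative cost model -- the shapes of the curves are the point, not the numbers.
def render_cost(width, height, cost_per_pixel=1.0):
    return width * height * cost_per_pixel            # scales with resolution

def crowd_ai_cost(agents, cost_per_agent=50_000.0):
    return agents * cost_per_agent                    # independent of output resolution

for w, h in [(3840, 2160), (2560, 1440)]:
    print(f"{w}x{h}: render={render_cost(w, h):,.0f}  crowd AI={crowd_ai_cost(1000):,.0f}")
# Dropping from 4K to 1440p cuts the render cost ~2.25x; the AI bill doesn't move an inch.
```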
 

Fake

Member
But does AI need dedicated hardware, or can they just deliver it via the cloud? That's an important part of Microsoft's vision.
 

Avtomat

Member
10TF is probably where it's going to be.

There was a tweet some time back about how one of the consoles' GPUs was scoring over 20,000 in the 3DMark Fire Strike test*. The nearest graphics card to that 20,000 score is a GTX 1080 Ti, and that card runs at around 10-11TF.

There are plenty of videos on YouTube showing that card doing 40-50fps in games at 4K, over 60fps at 1440p, and well over 80fps at 1080p.

But since we're on a console and can optimise heavily, it wouldn't be unbelievable for games to hit 60fps in 4K. But, of course, this is entirely up to the devs.

(* - yes, yes, we're all well aware that 3DMark doesn't exist on consoles, but these are dev kits running all sorts of software tools and people find ways of making this stuff happen).

I think 9TF is more realistic, looking at the die sizes of the individual components.

But I would be very pleased to see a 10TF console from either, and absolutely over the moon if we get 24GB of main memory with a separate 4GB of DDR4 for the OS.
 

Racer!

Member
Machine learning (among other things) will need tons of compute power to deliver a next-gen leap.

What most people don't understand is that while 3D graphics are scalable (i.e. a 14TF/4K target can be downgraded to 1440p on 8-9TF), compute algorithms (such as neural networks, AI pathfinding, physics etc.) are NOT scalable by nature. We're talking about gameplay-enhancing algorithms, and it's not acceptable to downgrade gameplay (unlike resolution).

So, if some game devs are really hellbent on delivering a next-gen leap via GPGPU algorithms, then you will also need raw compute power, aka high/double-digit TF.

I feel like some people are too fixated on the CPU, but it's not the CPU that is going to run all this crazy stuff. GPGPU isn't a forced "gimmick" because of Jaguar shenanigans. It's here to stay. FOREVER!

PCs are different, because there's PCIe latency between the discrete CPU and the GPU, so GPGPU isn't always beneficial. On PCs you need the CPU's FPU/vector unit (which is on the same die, so zero latency) to do stuff like physics etc.

Consoles utilize a monolithic APU die and fast, unified DRAM. There is no game-breaking latency between the CPU and the GPU. Consoles are specifically made to take advantage of heterogeneous computing. If you're not using it, you're doing it wrong!

TL;DR: having a PC-centric (aka CPU-centric) way of thinking to understand consoles is an exercise in futility. :)

Have you been drinking, sir? That was a lot of gibberish :messenger_tears_of_joy:

Specialized hardware for neural nets sees huge performance boosts. Orders of magnitude. (Not to mention that they need it to drive the denoiser for ray tracing.) It can run neural nets blazing fast, including the kind of animation tech demonstrated in the video in my previous post. Motion matching is a type of animation used in upcoming titles such as The Last of Us Part II. Neural-net-accelerated state machines/motion matching are a way to turbocharge this in quality and speed. That, as I understand it, is the future of character animation and the way things are headed.
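For anyone who hasn't looked at motion matching: at its core it's a nearest-neighbour search. Every few frames you build a feature vector from the character's current pose and desired trajectory, find the closest frame in a big mocap database, and play from there. A brute-force sketch with made-up feature dimensions (real implementations use acceleration structures, and the "learned" variants replace the search with a small neural net, which is where the ML hardware comes in):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mocap database: one row of pose/trajectory features per animation frame.
# 24 is an invented feature count (e.g. foot positions, hip velocity, future trajectory points).
db_features = rng.standard_normal((100_000, 24)).astype(np.float32)

def motion_match(query, features, weights=None):
    """Return the index of the database frame whose features best match the query."""
    diff = features - query
    if weights is not None:
        diff = diff * weights  # lets animators weight trajectory matching over pose matching, etc.
    return int(np.argmin(np.einsum("ij,ij->i", diff, diff)))  # per-row squared distance, argmin

# Built from the live character every few frames; random here just so the sketch runs.
query = rng.standard_normal(24).astype(np.float32)
print("jump playback to database frame", motion_match(query, db_features))
```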
 

Racer!

Member
But does AI need dedicated hardware, or can they just deliver it via the cloud? That's an important part of Microsoft's vision.

It will probably have to be done locally because of input lag!? Recent advances in neural nets and tensor tech are very promising. Google is running many of its neural nets on your Android phone these days.
 
Well, what you wrote had nothing to do with the post of mine you were replying to, so there's that.
I was motivated by your post to explain some things, since the Vega/GCN vs Navi/RDNA (compute vs rasterization) flops debate just doesn't want to die.

Why were you offended?

Here's another example of machine learning in the context of next-gen AI:

[embedded video]

All this stuff has to run locally on the same hardware. No cloud BS (unless the whole game runs on the cloud).

We know that Navi will support INT4/INT8/INT16/FP16/FP32 acceleration, so it's going to be suitable for all sorts of compute workloads. No need for a dedicated "AI chip" or Tensor cores (like nVidia does). Where's the disagreement here?
 

Racer!

Member
I was motivated by your post to explain some things, since the Vega/GCN vs Navi/RDNA (compute vs rasterization) flops debate just doesn't want to die.

Why were you offended?

Here's another example of machine learning in the context of next-gen AI:

[embedded video]

All this stuff has to run locally on the same hardware. No cloud BS (unless the whole game runs on the cloud).

We know that Navi will support INT4/INT8/INT16/FP16/FP32 acceleration, so it's going to be suitable for all sorts of compute workloads. No need for a dedicated "AI chip" or Tensor cores (like nVidia does). Where's the disagreement here?


Yes, it supports INT4/INT8/INT16/FP16/FP32, but it's not optimized for just one of them. You could make much better use of the transistors by arranging them to handle just low precision, no?

Also, my post was in regard to character animation. I'm excited for what next gen means for those kinds of things. Not just graphics.
 
Yes, it supports INT4/INT8/INT16/FP16/FP32, but it's not optimized for just one of them. You could make much better use of the transistors by arranging them to handle just low precision, no?
It's optimized for all of them, not just for rasterization/3D graphics.

AFAIK, Navi only lacks FP64 acceleration (which is a Vega uarch specific feature) and this makes sense, because FP64 requires a lot more transistors.

Example:

Navi 10TF (FP32)
FP16 -> 20TF (double performance at half accuracy)
INT8 -> 40 TOPS (integer operations per second, since it's not floating point anymore)
INT4 -> 80 TOPS

Machine learning uses INT8 right now and there's some research going on about INT4.

Modern GPUs aren't just for pixel/vertex shaders. I remember people mocking Rapid Packed Math, because they thought it was for pixel shaders. We live in 2019, not in 2003.
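The same arithmetic in code form, assuming ideal packed-math scaling (real workloads rarely hit the theoretical doubling every time the precision halves):

```python
def packed_math_throughput(fp32_tflops):
    """Ideal throughput at each precision, assuming perfect 2x packing per halving."""
    return {
        "FP32": f"{fp32_tflops:.0f} TFLOPs",
        "FP16": f"{fp32_tflops * 2:.0f} TFLOPs",
        "INT8": f"{fp32_tflops * 4:.0f} TOPS",
        "INT4": f"{fp32_tflops * 8:.0f} TOPS",
    }

print(packed_math_throughput(10))
# {'FP32': '10 TFLOPs', 'FP16': '20 TFLOPs', 'INT8': '40 TOPS', 'INT4': '80 TOPS'}
```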

Also, my post was in regard to character animation. I'm excited for what next gen means for those kinds of things. Not just graphics.
Same. Are you not excited for next-gen, self-learning AI? :)

This will be the biggest leap ever in terms of AI...
 

Racer!

Member
It's optimized for all of them, not just for rasterization/3D graphics.

AFAIK, Navi only lacks FP64 acceleration (which is a Vega uarch specific feature) and this makes sense, because FP64 requires a lot more transistors.

Example:

Navi 10TF (FP32)
FP16 -> 20TF (double performance at half accuracy)
INT8 -> 40 TOPS (integer operations per second, since it's not floating point anymore)
INT4 -> 80 TOPS

Machine learning uses INT8 right now and there's some research going on about INT4.

Modern GPUs aren't just for pixel/vertex shaders. I remember people mocking Rapid Packed Math, because they thought it was for pixel shaders. We live in 2019, not in 2003.


Same. Are you not excited for next-gen, self-learning AI? :)

This will be the biggest leap ever in terms of AI...

Yes, it's optimized for all of them, which makes it not optimal for any one of them. That's optimal for when you want flexibility, not efficiency. If they found a way to optimize 100% for all of them, that would be the holy grail. There are always trade-offs for flexibility.

Oh yes, I'm excited for every gameplay-enhancing feature.

Sorry if I came off a little harsh btw, my apologies!
 
Yes, it's optimized for all of them, which makes it not optimal for any one of them. That's optimal for when you want flexibility, not efficiency. If they found a way to optimize 100% for all of them, that would be the holy grail. There are always trade-offs for flexibility.
AMD has made some improvements so that each CU can execute multiple workloads at differing precisions. It's an evolution of asynchronous compute, if you will.

nVidia, on the other hand, has discrete INT32 pipelines and Tensor ones (INT8) in Turing.

If history is anything to go by (GeForce 7 vs Xenos), unification is inevitable.

Oh yes, I'm excited for every gameplay-enhancing feature.

Sorry if I came off a little harsh btw, my apologies!
No worries!
 
It's safe to say that both PS5 and Xbox Scarlett are going to have WiFi 6. Having WiFi 6 connections at the consumer level is a whole other story... Has anyone ever experienced network/online play over a WiFi 6 connection on PC?
 