
If Senua's Saga: Hellblade II is actually what we can expect from next-gen consoles, does that mean the RTX 2080 will be outdated by 2020?

lmao outdated LOL

You're going to actually see games built from the ground up that take advantage of hardware features currently not being utilized. Improvements to Engines, APIs, Drivers, Tools and developer knowledge all account for what you're seeing in Hellblade 2... Your 2080 will be fine.
 

sol_bad

Member
For me, I'm happy to spend $500-$700 AUD on a new graphics card, but come the end of 2020 or beginning of 2021 I'd have to buy a new motherboard and CPU, so I'd easily be spending over $1k to upgrade.

Currently I have a gaming laptop with an NVIDIA GTX 1060. My PC in Australia is over 6 years old; I'm pretty sure it has a GTX 660 Ti.

I basically won't upgrade until there's a graphics card equal to or better than what the PS5/XSX can do in that $500-$700 AUD price range.

So for me, it's not worth investing in PC hardware at the start of the generation.
 
Last edited:

RaySoft

Member
That Carmack tweet is outdated as fuck and hasn't been true the entire generation

Ever since consoles went x86, the "optimization" has become mostly a myth; a similarly specced PC gives the same performance as a console at the same settings and resolution.

And no, the RTX 2080 will not be outdated; it will still be stronger than the consoles.
The theoretical performance maybe, but never inside a PC. You just have too many bottlenecks (the PC's biggest problem is its legacy hardware: motherboard layout, buses, southbridge, etc.). A console is more streamlined to deliver maximum peak level across the system. That's why Carmack's "2x" still applies.
 
Last edited:
Well, in some games the 1080 Ti is faster than the 2080. So it's already obsolete.
But seriously, people lack foresight these days. The whole 2xxx series is a tech demo. It's the first card(s) with dedicated RT cores specifically designed for ray tracing. Everything that is first or groundbreaking in tech is bound to be made obsolete by newer cards. If people want proper Nvidia ray tracing, they should wait a few generations before investing serious money in higher-tier cards.
 
Last edited:

Senua

Member
The theoretical performance maybe, but never inside a PC. You just have too many bottlenecks (the PC's biggest problem is its legacy hardware: motherboard layout, buses, southbridge, etc.). A console is more streamlined to deliver maximum peak level across the system. That's why Carmack's "2x" still applies.
2x is utter shite.
 
Well, in some games the 1080 Ti is faster than the 2080. So it's already obsolete.
But seriously, people lack foresight these days. The whole 2xxx series is a tech demo. It's the first card(s) with dedicated tensor cores specifically designed for ray tracing. Everything that is first or groundbreaking in tech is bound to be obsolete by newer cards. If people want proper Nvidia ray tracing, they should wait a few generations before investing serious money in higher-tier cards.
Not being the best =/= obsolete...
Tensor cores are not responsible for ray-tracing.
What is "proper ray-tracing"?
 
Not being the best =/= obsolete...
Tensor cores are not responsible for ray-tracing.
What is "proper ray-tracing"?
I meant RT cores, dunno why I wrote tensor cores.
Proper ray tracing is where all objects in the game are ray traced, not just a select few like metallic surfaces and puddles of water.
We're years away from that.
 

93xfan

Banned
Remember that this was in-engine, not in-game. And yes, there is a difference.

As for the RTX 2080 becoming obsolete, it might, but not for the reasons implied in the OP.

Very true. I expect this to be more of a tech demo. If the final game looks like this, I can't imagine it being very interactive during these parts.
 
A console is more streamlined to deliver maximum peak level across the system. That's why Carmack's "2x" still applies.
Again, let's see some multiplatform titles that run twice as fast on console as on a PC with similar hardware. If what Carmack says is still true, that would apply to almost every game out there, so coming up with dozens of examples should be a breeze for you.
 

LordOfChaos

Member
I honestly don't know what people are laughing at... try running a game like God of War on a PC with the same parts as a PS4 and see what happens. This is what I mean (and Carmack too).

Does optimization still exist? Absolutely, and so do particularly large first-party budgets that have less to do with what it runs on and more to do with how much they spend on making it work well.

But everyone else is also right: this quote predates Mantle/DX12/Vulkan/Metal, from an era when API draw calls had over a literal order of magnitude more impact than they do now. That, I think, is largely what Carmack was talking about; not that he isn't a genius who definitely knows more than this entire forum about 3D graphics.
 

VFXVeteran

Banned
lmao outdated LOL

You're going to actually see games built from the ground up that take advantage of hardware features currently not being utilized. Improvements to Engines, APIs, Drivers, Tools and developer knowledge all account for what you're seeing in Hellblade 2... Your 2080 will be fine.

What features are those?
 

VFXVeteran

Banned
The theoretical performance maybe, but never inside a PC. You just have too many bottlenecks (the PC's biggest problem is its legacy hardware: motherboard layout, buses, southbridge, etc.). A console is more streamlined to deliver maximum peak level across the system. That's why Carmack's "2x" still applies.

Just because it's a closed box doesn't make it more streamlined. It has the same x86 architecture and perhaps a custom bus, but even these consoles have OS overhead. I don't think a motherboard layout is going to cause a massive difference in framerate drops. If I were to guess, they would completely program the base game engine on the PC and its hardware components (motherboard, graphics card, etc.) and downport to the console's x86/Linux OS platform without changing anything other than turning down features that are too heavy to meet an FPS budget. If we used x86 64-bit SIMD assembly function calls or the CUDA cores on Nvidia's GPU, we'd be doing the same thing in the downported version with AMD's chip.
 
Last edited:

Vidius

Neo Member
Just because it's a closed box doesn't make it more streamlined. It has the same x86 architecture and perhaps a custom bus, but even these consoles have OS overhead. I don't think a motherboard layout is going to cause a massive difference in framerate drops. If I were to guess, they would completely program the base game engine on the PC and its hardware components (motherboard, graphics card, etc.) and downport to the console's x86/Linux OS platform without changing anything other than turning down features that are too heavy to meet an FPS budget. If we used x86 64-bit SIMD assembly function calls or the CUDA cores on Nvidia's GPU, we'd be doing the same thing in the downported version with AMD's chip.

This brings me back, and I guess we wouldn't be entering another generation without all this huffing and puffing from Microsoft and Sony and their respective fans. Do you remember the "secret sauce" that ended up being the hardware scalers for the Xbox One? Also, I remember people making a big deal about the unified memory architecture, and that turned out to be a big nothing as well, since you still needed to define explicit memory partitions for the GPU and CPU and couldn't share addresses. To further your point, what would make anyone believe that they would use a radically different bus design or deviate from set standards when a) they have been proven to work just fine and b) they are faster out of the gate because you are not nearly as power-limited as a console is?

RaySoft, if you can give one GOOD piece of evidence for why x86 PCs, from a software engineering perspective, have too many bottlenecks vs. consoles, I will concede my argument.
 

Turk1993

GAFs #1 source for car graphic comparisons
If an RTX 2080 is outdated next year, then all you AMD fanboys can throw your GPUs in the bin lol. There is still nothing from AMD that matches that card. Also, it's funny that the people trashing RTX cards are now hyping up ray tracing for consoles when they were the ones making fun of it in another thread. By the time the next-gen consoles release, Nvidia will probably have announced the RTX 3000 series, which will again widen the power gap. But the consoles offer damn incredible value for the price, if they price them at 499. I'll probably get them both and upgrade my PC or build a new one. The future is looking good if this is the quality we get from a launch game.
 

Coflash

Member
Assuming the title is true, what does that indicate?

Whatever it indicates, the same thing is going to be true for the next-generation consoles for the ~7 years that follow their launch dates. It'll be true for PC for maybe a couple of months.
 
Last edited:

-kb-

Member
Just because it's a closed box doesn't make it more streamlined. It has the same x86 architecture and perhaps a custom bus, but even these consoles have OS overhead. I don't think a motherboard layout is going to cause a massive difference in framerate drops. If I were to guess, they would completely program the base game engine on the PC and its hardware components (motherboard, graphics card, etc.) and downport to the console's x86/Linux OS platform without changing anything other than turning down features that are too heavy to meet an FPS budget. If we used x86 64-bit SIMD assembly function calls or the CUDA cores on Nvidia's GPU, we'd be doing the same thing in the downported version with AMD's chip.

Whilst the difference may not always be a static 2x, or the console may not even be faster at all, there are plenty of advantages to a console that simply do not exist on PCs.

Having a single shared pool of memory greatly helps communication between the CPU and GPU by not requiring all data to traverse the PCIe bus.

Having a graphics API that lets you use all the features of the GPU and does not restrict you to a generic DX/OGL/Vulkan feature set.

Having a single target for the GPU and CPU allows you to eke more performance out of the hardware by letting you take advantage of features or design decisions that would make your code slower on other configurations.

Hardware changes are generally made by Sony and MS to get more performance out of the consoles. (See this for a good example from the current gen: http://vgleaks.com/orbis-gpu-compute-queues-and-pipelines/)
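To make the shared-pool point concrete, here is a minimal sketch of the explicit upload a discrete PC GPU typically needs, using the CUDA runtime API purely as a stand-in for "any explicit upload path"; the buffer name and size are made up for illustration. On a unified-memory console, the data the CPU just wrote is already sitting in the one pool the GPU reads from.

// Minimal sketch (illustrative, not engine code): per-frame data the CPU
// produces has to be copied across PCIe before a discrete GPU can read it.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const size_t kBones = 256;
    std::vector<float> boneMatrices(kBones * 16, 1.0f);        // CPU-side frame data (hypothetical)
    const size_t bytes = boneMatrices.size() * sizeof(float);

    float* gpuBones = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&gpuBones), bytes);    // allocate VRAM on the card

    // The explicit hop across the PCIe bus. With one shared pool of GDDR,
    // the GPU could simply be pointed at the memory the CPU just wrote
    // (subject to the coherency caveats discussed later in the thread).
    cudaMemcpy(gpuBones, boneMatrices.data(), bytes, cudaMemcpyHostToDevice);

    // ... a kernel or draw call would consume gpuBones here ...

    cudaFree(gpuBones);
    std::printf("uploaded %zu bytes across the bus\n", bytes);
    return 0;
}

This compiles as host-only C++ against the CUDA runtime; the point is only where the copy happens, not the particular API.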
 

Vidius

Neo Member
Whilst the difference may not always be a static 2x, or the console may not even be faster at all, there are plenty of advantages to a console that simply do not exist on PCs.

Having a single shared pool of memory greatly helps communication between the CPU and GPU by not requiring all data to traverse the PCIe bus.

Having a graphics API that lets you use all the features of the GPU and does not restrict you to a generic DX/OGL/Vulkan feature set.

Having a single target for the GPU and CPU allows you to eke more performance out of the hardware by letting you take advantage of features or design decisions that would make your code slower on other configurations.

Hardware changes are generally made by Sony and MS to get more performance out of the consoles. (See this for a good example from the current gen: http://vgleaks.com/orbis-gpu-compute-queues-and-pipelines/)

Yeah, sure it didn't have to traverse a PCIe bus, but it's not as simple as you suggest, since the GPU and CPU cannot cross-address each other's memory. I remember there was a lot of talk about this and pointer sharing before it was discovered that it didn't work like that at all.
 

-kb-

Member
Yeah, sure it didn't have to traverse a PCIe bus, but it's not as simple as you suggest, since the GPU and CPU cannot cross-address each other's memory. I remember there was a lot of talk about this and pointer sharing before it was discovered that it didn't work like that at all.

That doesn't make any sense to me for a console. Why wouldn't the CPU and GPU be able to share memory non-coherently in the console APUs? And even if you need coherence, you can just snoop the CPU's cache.
 
Last edited:
Well, since the 3080 Ti will be coming out in 2020, then yes, the 2080 will be outdated in 2020. The consoles will have no bearing on that, though.

What an odd question.

Edit: the 3080 Ti will blow the consoles out of the water, so all of you console-only folks should really keep it in your pants.
 

VFXVeteran

Banned
Variable Rate Shading, double rate FP16, Mesh/Amplification shaders, texture-space Shading... and tensor cores.

My point is that I think all of those things are already being used. Not all in one game, mind you, but I'm sure there is a game out there that can say "hey, we did that!" running on a PC.
 

Grinchy

Banned
As much as Microsoft loves lying and showing fake reveals of things, they usually keep that to their E3 conferences. It would be a really bone-headed move to explicitly state that everything was in-engine if it was CG.

And without the technical knowledge to describe this properly, look at the character model that's lit up by the torch. Everything looks fantastic, but it does look "videogamey" and not CG.

[screenshot: the torch-lit character model from the trailer]


I think I actually believe that a significant portion of this trailer was really in-engine. Is all of it? What the fuck do I know. Maybe we'll get a DF analysis. Paging D dark10x :messenger_smiling_with_eyes:
 
Last edited:

VFXVeteran

Banned
Whilst the difference may not always be a static 2x, or the console may not even be faster at all, there are plenty of advantages to a console that simply do not exist on PCs.

Having a single shared pool of memory greatly helps communication between the CPU and GPU by not requiring all data to traverse the PCIe bus.

True. But I'd rather cross the bus fewer times with LOTS more data than have to continuously flush my cache and do a read from an HDD because my assets can't fit into memory. That's the reason for the virtual memory tech behind the SSDs in the consoles. But I would still prefer 64GB of RAM + 12GB of VRAM vs. a console's 16GB of combined VRAM+RAM.

Having a graphics API that lets you use all the features of the GPU and does not restrict you to a generic DX/OGL/Vulkan feature set.

There is no such graphics API. Which 1st party company are you speaking of? And what is specifically restricting about DX/Vulkan?

Having a single target for the GPU and CPU allows you to eke more performance out of the hardware by letting you take advantage of features or design decisions that would make your code slower on other configurations.

A graphics engine is mainly dominated by compiled x86 code written in C/C++. All platforms have this in common. What we've seen in reality is performance struggling with fillrate, which is directly tied to shading and resolution. That's why you see so many tests where the fix has been lowering the actual rendering resolution to some odd value and then upscaling. There is nothing else.
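For a sense of scale on that resolution lever: shading cost grows roughly with pixel count, so a quick calculation (the resolutions below are just common examples, not anything from this game) shows how much work rendering below native 4K and then upscaling saves.

// Rough arithmetic behind "lower the rendering resolution, then upscale".
#include <cstdio>

int main() {
    struct Res { const char* name; int w, h; };
    const Res native = {"2160p", 3840, 2160};
    const Res targets[] = {{"1800p", 3200, 1800}, {"1440p", 2560, 1440}, {"1080p", 1920, 1080}};

    const double nativePixels = double(native.w) * native.h;
    for (const Res& r : targets) {
        const double pixels = double(r.w) * r.h;
        std::printf("%s shades %.0f%% of the pixels of %s\n",
                    r.name, 100.0 * pixels / nativePixels, native.name);
    }
    return 0;
}

It prints roughly 69%, 44% and 25%, which is why dropping the internal resolution is the first lever everyone reaches for.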

Hardware changes are generally made by Sony and MS to get more performance out of the consoles. (See this for a good example from the current gen: http://vgleaks.com/orbis-gpu-compute-queues-and-pipelines/)

These things are needed for consoles. Not necessary on the PC. This does NOT mean that a console game like Modern Warfare on a PS4 would outperform a PC with a 2xxx series Nvidia GPU.
 
Last edited:

Clear

CliffyB's Cock Holster
But no game today is even doing it. Which is why it's impressive.

They are though; it's mostly just higher triangle counts and bigger textures. It's why the complexity of the shots matters: there's a comparatively much higher amount of RAM and fillrate being utilized compared to what we're used to seeing used to create the exact same scenarios.

I want to see actual complex scenes of gameplay, not just cinematics, because those are always a step above what you get in action. Let's not forget the Quantic Dream "Dark Sorcerer" demo that was shown at the PS4 reveal. That was legitimately running in real time on the hardware, but it presented far higher fidelity than you'd see on that system for years.
 

VFXVeteran

Banned
As much as Microsoft loves lying and showing fake reveals of things, they usually keep that to their E3 conferences. It would be a really bone-headed move to explicitly state that everything was in-engine if it was CG.

And without the technical knowledge to describe this properly, look at the character model that's lit up by the torch. Everything looks fantastic, but it does look "videogamey" and not CG.

[screenshot: the torch-lit character model from the trailer]


I think I actually believe that a significant portion of this trailer was really in-engine. Is all of it? What the fuck do I know. Maybe we'll get a DF analysis. Paging D dark10x :messenger_smiling_with_eyes:

It is in-engine. I told you guys that. I spoke with the producer of that demo today. It's PC though. :messenger_winking:
 

-kb-

Member
True. But I'd rather cross the bus fewer times with LOTS more data than have to continuously flush my cache and do a read because my assets can't fit into memory. In other words, give me 64GB of RAM + 12GB of VRAM vs. a console's 16GB of combined VRAM+RAM.



There is no such graphics API. Which 1st party company are you speaking of? And what is specifically restricting about DX/Vulkan?



A graphics engine is mainly dominated by compiled x86 code written in C/C++. All platforms have this in common. What we've seen in reality is performance struggling with fillrate, which is directly tied to shading and resolution. That's why you see so many tests where the fix has been lowering the actual rendering resolution to some odd value. There is nothing else.



These things are needed for consoles. Not necessary on the PC. This does NOT mean that a console game like Modern Warfare on a PS4 would outperform a PC with a 2xxx series Nvidia GPU.

I don't think memory will really be a problem. The games on consoles look great, and doubling that memory (at least) should do great; pairing that with a fast SSD to reduce the streaming buffers should make for plenty of graphics memory.

There's plenty restricting about DX and Vulkan: they expose a generic interface for hardware which is not generic, which requires features either to not be exposed or for an abstraction layer to exist. For an example of an API that does not have this issue, GNM and the console versions of DX generally expose more than desktop APIs because they have a single target.

There's plenty of difference between x86 microarchitectures that can make code that is fast on one architecture slower on another vs. a different implementation of the same function. Additionally, targeting a single GPU allows you to do the exact same thing: writing a single code path that targets the exact architecture. There's plenty more to graphics than fillrate, but I think you'll see these consoles be fillrate monsters, and they will probably also contain some new AMD compression tech to help with bandwidth.

The changes aren't "required" for a console; what they do is allow you to make your GPU perform better than a desktop GPU of the same specs. It's insane to compare a PS4 to an Nvidia 20xx; no one is doing that.
 

Dante83

Banned
If the next-gen consoles used an RTX 2080 as their GPU, with a closed architecture and optimisations, you guys would be singing a different tune...
 

VFXVeteran

Banned
I don't think memory will really be a problem. The games on consoles look great, and doubling that memory (at least) should do great; pairing that with a fast SSD to reduce the streaming buffers should make for plenty of graphics memory.

Memory is always a problem. I don't understand your statement. I could choke a Quadro with 48GB of VRAM very easily if I wanted to. Streaming is the solution to this memory problem. As asset quality increases, the data simply won't fit in a given frame.

There's plenty restricting about DX and Vulkan: they expose a generic interface for hardware which is not generic, which requires features either to not be exposed or for an abstraction layer to exist. For an example of an API that does not have this issue, GNM and the console versions of DX generally expose more than desktop APIs because they have a single target.

Can you give me a specific example of this GNM and where it makes a difference that would matter? Pointing me to a link would suffice.

--EDIT: NVM looked it up. No, it doesn't do anything special. What I read and what I was told are the same thing. Just a wrapper around DX11.

There's plenty of difference between x86 microarchitectures that can make code that is fast on one architecture slower on another vs. a different implementation of the same function.

Are you talking about AMD cores vs. Intel cores? A gaming company will use the same hardware registers, AVX, SIMD, etc. for their code. Are you talking about benchmark metrics for each of the processors? I'm not sure how a console with an x86 arch is different from a PC with the same x86 CPU.

Additionally, targeting a single GPU allows you to do the exact same thing: writing a single code path that targets the exact architecture.

Too costly. I want my code to work for 10yrs, not 5yrs and then have to refactor it for other platforms.

The changes aren't "required" for a console; what they do is allow you to make your GPU perform better than a desktop GPU of the same specs. It's insane to compare a PS4 to an Nvidia 20xx; no one is doing that.

But that's the kicker right there. Brute force is going to win out. It's not that the GPU on the PC just won't run the console code. I think you are sending mixed messages. The PC is the agnostic hardware. Any company (ND, GG, DICE, Epic, etc.) can make their game on a PC with a typical x86 architecture and a modern GPU. There is literally no difference between the platforms that would warrant any kind of special coding like in the PS3 days.
 
Last edited:

Vidius

Neo Member
That doesn't make any sense to me for a console. Why wouldn't the CPU and GPU be able to share memory non-coherently in the console APUs? And even if you need coherence, you can just snoop the CPU's cache.

It's due to the design of the ONION and GARLIC buses. Even though the memory is unified, the hardware keeps separate TLBs for the CPU and GPU portions respectively. Let's say you want to look at CPU-paged memory from the GPU: you would have to make a request for the CPU to grab the memory, and if it is uncached, it has to traverse the ONION bus. For rendering this is SLOW, and the data the GPU needs would not be cached on the CPU anyway, so you can't avoid going through ONION. It's potentially useful for some cases, but someone probably decided it was just easier to partition the memory so it acts more like two separate pools. Also keep in mind ONION is only about 20 GB/s. You really want to save that for the CPU if possible.

If you want a more in-depth technical explanation, it's really easy to find this information directly from AMD, dating back as far as 2011.
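A quick back-of-the-envelope on why you would "save ONION for the CPU": at the roughly 20 GB/s figure quoted above, the per-frame budget on that coherent path is small. The 60 fps target in the snippet is an assumption for illustration, not something from the post.

// Per-frame budget of a ~20 GB/s coherent CPU<->GPU path at 60 fps.
#include <cstdio>

int main() {
    const double onionBytesPerSecond = 20e9;   // bandwidth figure quoted above
    const double framesPerSecond     = 60.0;   // assumed target framerate

    const double bytesPerFrame = onionBytesPerSecond / framesPerSecond;
    std::printf("~%.0f MB per frame can cross the coherent bus at 60 fps\n",
                bytesPerFrame / 1e6);
    return 0;
}

That works out to roughly 333 MB per frame, which is why you keep bulk rendering data off that path.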
 

VFXVeteran

Banned
Yeah, sure it didn't have to traverse a PCIe bus, but it's not as simple as you suggest, since the GPU and CPU cannot cross-address each other's memory. I remember there was a lot of talk about this and pointer sharing before it was discovered that it didn't work like that at all.

Ahha! The details are now coming out. Tell me more about the incompatible pointer sharing across addressable memory spaces between the CPU and GPU. I would love to read up on this but I just don't have time. So basically you are saying you have to allocate your chunk of GPU space and your chunk of CPU space one time and process from there?
 
Last edited:

Vidius

Neo Member
Ahha! The details are now coming out. Tell me more about the incompatible pointer sharing across addressable memory spaces between the CPU and GPU. I would love to read up on this but I just don't have time. So basically you are saying you have to allocate your chunk of GPU space and your chunk of CPU space one time and process from there?

I replied to -kb- above. It really is all very interesting from an architecture viewpoint, but I think it just lost out / was way more effort than it was worth for the type of computation games do. I suppose I should have been a bit more specific and stated that they cannot cross-address memory DIRECTLY. You can think of the TLBs as handling the two pools of memory, and you can't directly access one from the other without a huge penalty.
 

VFXVeteran

Banned
I replied to -kb- above. It really is all very interesting from an architecture viewpoint, but I think it just lost out / was way more effort than it was worth for the type of computation games do. I suppose I should have been a bit more specific and stated that they cannot cross-address memory DIRECTLY. You can think of the TLBs as handling the two pools of memory, and you can't directly access one from the other without a huge penalty.

I see. So almost the same latency as doing a copy from CPU memory to GPU memory on a PC through the PCIe bus? I wouldn't want to change things going from GPU to CPU. I guess that's why CUDA forces you to make copies before acting on the data.

In the end, brute force wins out again. Give me 64G of RAM and 16G of VRAM and I'll be much better off.
 
Last edited:

Kenpachii

Member
So I just smoked a spliff and, you know, I started to think:
there is nothing out there that looks technically close to what they showed.
I have a pretty good PC; most of the games I play run on ultra at 1440p with framerates above 60.
Mine will be outdated for sure.

And the video benchmarks I've seen of the RTX 2080, most of them are at 4K and really struggling to lock 60fps. And we don't have next-gen games yet.

So, what do you guys think?

PS: English is not my first language, so "bear" with me.

You will be fine; nothing currently announced in that Xbox will make that 2080 even remotely obsolete.
 

ethomaz

Banned
We will be very lucky if the next-gen consoles are at RTX 2080 level.

And you're forgetting that the RTX 3080 launches six months before the next-gen consoles.
 
Last edited:

Vidius

Neo Member
I see. So almost the same latency as doing a copy from CPU memory to GPU memory on a PC through the PCIe bus? I wouldn't want to change things going from GPU to CPU. I guess that's why CUDA forces you to make copies before acting on the data.

In the end, brute force wins out again. Give me 64G of RAM and 16G of VRAM and I'll be much better off.

Yeah, you can pretty much look at it like that. The latency is probably better, but you would be splitting hairs at that point. There is more low-hanging fruit to optimize first, I think.
 

-kb-

Member
Memory is always a problem. I don't understand your statement. I could choke a Quadro with 48GB of VRAM very easily if I wanted to. Streaming is the solution to this memory problem. As asset quality increases, the data simply won't fit in a given frame.



Can you give me a specific example of this GNM and where it makes a difference that would matter? Pointing me to a link would suffice.



Are you talking about AMD cores vs. Intel cores? A gaming company will use the same hardware registers, AVX, SIMD, etc. for their code. Are you talking about benchmark metrics for each of the processors? I'm not sure how a console with an x86 arch is different from a PC with the same x86 CPU.



Too costly. I want my code to work for 10yrs, not 5yrs and then have to refactor it for other platforms.



But that's the kicker right there. Brute force is going to win out. It's not that the GPU on the PC just won't run the console code. I think you are sending mixed messages. The PC is the agnostic hardware. Any company (ND, GG, DICE, Epic, etc.) can make their game on a PC with a typical x86 architecture and a modern GPU. There is literally no difference between the platforms that would warrant any kind of special coding like in the PS3 days.

GNM and other APIs like it let you target the hardware in the consoles directly and do not require the use of an abstraction layer like DX. Another example of this is the versions of DX that are used in the consoles (like DX11.x).

The same generic x86-64 interface is used by both AMD and Intel for the most part, but the speeds at which instructions execute on the CPUs can differ greatly. This is the exact same situation as the GPU case I mentioned in my previous post: fixed hardware allows you to target the fastest path for that exact hardware. This is something that is time-prohibitive on desktop.
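A minimal sketch of the PC-side cost being described, assuming a GCC/Clang toolchain on x86; the blend functions are hypothetical stubs that only report which path was chosen. A desktop build probes the CPU at startup and dispatches between the paths it shipped, while a console build can compile exactly one path for the one CPU that will ever run it.

// Runtime feature dispatch that a multi-target PC engine typically carries.
#include <cstdio>

static void blend_avx2(const float*, const float*, float*, int) { std::puts("AVX2 path"); }
static void blend_sse2(const float*, const float*, float*, int) { std::puts("SSE2 fallback path"); }

using BlendFn = void (*)(const float*, const float*, float*, int);

static BlendFn pick_blend() {
    __builtin_cpu_init();                        // GCC/Clang CPU feature detection
    // On a fixed console target this whole probe-and-branch step disappears:
    // you compile exactly one path for the known CPU.
    return __builtin_cpu_supports("avx2") ? blend_avx2 : blend_sse2;
}

int main() {
    BlendFn blend = pick_blend();
    blend(nullptr, nullptr, nullptr, 0);         // stub call; real code would pass buffers
    return 0;
}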
 

Kenpachii

Member
The overhead thing doesn't make much sense in 2020 anymore.

CPU-wise, I have tons of stuff running and my CPU barely uses 2%, probably less than 1% if I shut it down. That hardly weighs up against the core locking Sony and Microsoft do on their consoles, let alone the lower clock speeds the cores run at.

Memory-wise, a 2+8GB setup ran all their games this generation on an Nvidia card with fewer TFLOPS than what's in the console, at equal or better performance. Which makes overhead at the GPU level not much of a thing.

Let's say they double the memory: that means 4GB of VRAM usage and 16GB of memory. Hardly anything to write home about. And with the focus on 4K, memory usage will probably be much lower for PC users who sit at lower resolutions.

So yeah, it's safe to say overhead really isn't much of a thing anymore.
 

-kb-

Member
The overhead thing doesn't make much sense in 2020 anymore.

CPU-wise, I have tons of stuff running and my CPU barely uses 2%, probably less than 1% if I shut it down. That hardly weighs up against the core locking Sony and Microsoft do on their consoles, let alone the lower clock speeds the cores run at.

Memory-wise, a 2+8GB setup ran all their games this generation on an Nvidia card with fewer TFLOPS than what's in the console, at equal or better performance. Which makes overhead at the GPU level not much of a thing.

Let's say they double the memory: that means 4GB of VRAM usage and 16GB of memory. Hardly anything to write home about. And with the focus on 4K, memory usage will probably be much lower for PC users who sit at lower resolutions.

So yeah, it's safe to say overhead really isn't much of a thing anymore.

To check just how 'little' of a thing overhead is now: when Death Stranding comes out on PC, see just how well it runs on a GPU that matches the PS4's.
 

Dante83

Banned
Next-gen consoles are going to be outdated when they are released next year. They are already locked to a 7nm node when there is already a better, more efficient 7nm+ EUV node out there, and the 5nm node will be out next year. On top of that, PC CPUs and GPUs are going to get a lot stronger next year. Some of the stuff may sound cool this year, but not so great next year.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Next-gen consoles are going to be outdated when they are released next year. They are already locked to a 7nm node when there is already a better, more efficient 7nm+ EUV node out there, and the 5nm node will be out next year. On top of that, PC CPUs and GPUs are going to get a lot stronger next year. Some of the stuff may sound cool this year, but not so great next year.

Performance evolution of CPUs and GPUs is slowing down. People try to hard-sell this breakneck speed of evolution (alongside pushing for mid-generation updates) because it is a lot cheaper than competing and gaining market share by reducing costs and prices... always better to get a new model out and, with that, those sweet high profit margins.
I think 7nm+ and 5nm are being oversold by vendors who are overselling the returns from those technology transitions.

The biggest differentiators between PCs and consoles are really cost (feel free to buy that Mac Pro for $52k ;)) and size, and thus power requirements (dissipation and consumption), where PCs can and do use much, much bigger boxes.

MS and Sony, with CY Q4 2020 semi-custom HW and a custom OS, stand to raise the bar for PC gaming. Of course, after a bit, some users with 32 GB of RAM or more will start to use RAM-disk solutions to brute-force their way out, but that is beside the point.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
The overhead thing doesn't make much sense in 2020 anymore.

CPU-wise, I have tons of stuff running and my CPU barely uses 2%, probably less than 1% if I shut it down. That hardly weighs up against the core locking Sony and Microsoft do on their consoles, let alone the lower clock speeds the cores run at.

Memory-wise, a 2+8GB setup ran all their games this generation on an Nvidia card with fewer TFLOPS than what's in the console, at equal or better performance. Which makes overhead at the GPU level not much of a thing.

Let's say they double the memory: that means 4GB of VRAM usage and 16GB of memory. Hardly anything to write home about. And with the focus on 4K, memory usage will probably be much lower for PC users who sit at lower resolutions.

So yeah, it's safe to say overhead really isn't much of a thing anymore.

You have just said that your HW setup can brute-force its way past the current console specs and overhead; you have not proved that overhead is not a thing, that fixed-HW-specific optimisations are not a thing, or that this will also hold at the next-generation launches. There could be DF-sized threads examining your results and the console results on a level playing field and ensuring we had matching IQ and framerate, but you can start with the fact that you quoted 10 GB of overall memory vs 8 GB total, and that your CPU is likely a much more complex chip (core for core) than the Jaguar in the game consoles and clocked far above 1.6 GHz (a big part of the “overhead” is mostly on the CPU side of things, where the OS and driver can impact performance the most... the rest is really what devs who have the time and possibility to optimise around a fixed and well-documented HW target can get to).

Given how hungry modern desktop OSes are and how many apps have switched to that resource-wasting tech called Electron... reports of running many tasks in the background at 1-2% max usage feel dubious :), but that is not the point.
 

Denton

Member
The theoretical performance maybe, but never inside a PC. You just have too many bottlenecks (the PC's biggest problem is its legacy hardware: motherboard layout, buses, southbridge, etc.). A console is more streamlined to deliver maximum peak level across the system. That's why Carmack's "2x" still applies.
No, it literally doesn't. There have been so many console vs. "PC with similar specs" comparisons by DF and others, and the performance is roughly equal. Especially now with low-level APIs like Vulkan and DX12 being utilized on PC.
 

FireFly

Member
So, the 5700 XT is 7.95 TFLOPS and performs similarly to a 2070/2060 Super. If the GPU in the Series X is 50% more powerful at 12 TFLOPS, wouldn't that put it into 2080 Ti territory?
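For the scaling part of that, the arithmetic checks out, with the usual caveat that TFLOPS only compare cleanly within the same GPU architecture:

7.95 TFLOPS × 1.5 ≈ 11.9 TFLOPS, i.e. roughly the quoted 12 TFLOPS.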
 
Last edited: