
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.
It's funny, with this change of generation, to see all those PC-only gamers who spend all their time talking about specifications but in the end know nothing, because now it turns out the 12 TF of the new Xbox is like a 5700 XT, or they think a SATA SSD is equivalent to a custom NVMe. :messenger_squinting_tongue:
 

SlimySnake

Flashless at the Golden Globes
I heard they’re working on reviving it, and Richard Dean Anderson and Amanda Tapping might be involved. We should make a thread in OT for rumors and the like.

As for the rest of the thread, take the PS360 sales talk out of here and move it to a new thread; this must be the 100th time for that tiring subject.
lol i seriously fear for your sanity. these people cant help themselves lmao.
 

icerock

Member
Dude, seriously? They also tested a 3rd, native mode with 36 CUs at 2GHz, which was referred to as "full chip" performance. If it was just BC tests, then explain that native "full chip" mode.

Native mode implies native clock speed; for BC purposes, the machine could be trying to mimic the PS4 Pro boost mode at its native clock speed.

Besides, none of this actually affects the overall CU count of the iGPU or the original argument, which was about having "disabled CUs", because it most definitely does have them. And as I already explained to you earlier, 36/36 active CUs is not realistic, because without redundancy any defective CU would render the chip useless. Console manufacturers will always leave themselves some breathing room to improve yields, hence they go for a slightly bigger die with room for redundant CUs.

The Oberon A0 tested in the repo has 18/36 active CUs at times for a specific purpose; how many were disabled could be 4, could be more, we don't know. This can also be explained by the Arden data found in the repo: it gave us 56 'active' CUs, nothing on how many CUs were disabled, and no, MS isn't manufacturing an iGPU with 56/56 active CUs either.
 

SlimySnake

Flashless at the Golden Globes


Last thing I've got for tonight. Something interesting came to me: if both systems are targeting 12TF (we at least know one has hit that target, but let's keep it simple and say that's what they'll both be), and RDNA2 is 50% more efficient than RDNA1, and the 2080TI is 34% stronger than the 5700XT (9.7TF), which is an RDNA1 GPU...

In that case next-gen could be 16% better than 2080TI. And the 2080TI is already 13.45TF, so 12TF RDNA2 should come in around 13.8TF - 13.92TF relative to 2080TI!!

People might've been aiming too low thinking "only" regular 2080 would get lapped here. Very impressive RDNA2 is looking to be 👍

And again, RDNA2 kinda needs it 'cuz like the video says, it's targeting Turing, not Ampere. That's likely what RDNA3 will be for.

sorry to burst your bubble but a perf/watt increase doesn't mean an IPC (instructions per clock) increase. they can do more with fewer watts but we don't know if those tflops are worth more just yet.

so at the moment 12 navi tflops are just 12 turing tflops. let's wait and see what they can get out of rdna 2.0 tflops.
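
To make the two positions concrete, here's a rough back-of-the-envelope in Python. The 34% and 50% figures are just the claims quoted above, taken at face value, not measured numbers:

Code:
# the quoted post's reading: subtract the two claimed percentages
rdna1_tf        = 12.0    # hypothetical 12 TF next-gen GPU
turing_vs_rdna1 = 0.34    # claimed 2080 Ti lead over the 5700 XT
rdna2_vs_rdna1  = 0.50    # AMD's claimed RDNA2 perf/watt gain

effective_gain = rdna2_vs_rdna1 - turing_vs_rdna1    # 0.16
print(rdna1_tf * (1 + effective_gain))               # 13.92 "Turing-equivalent" TF

# the reply's counter: perf/watt says nothing about per-TF throughput,
# so 12 RDNA2 TF may simply stay ~12 "Turing TF" until benchmarks say otherwise.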
 

Zero707

If I carry on trolling, report me.
You people need to stop with the 50% gains right now; it holds the same weight as penis growth ads.
 

xPikYx

Member
Ok, I want to do some maths here, trying to explain why I was expecting a minimum 14tf PS5 in order to really believe we'd have a high-end console and not the usual casual, mid-tier system.

I know I can't really calculate it this easily or in this way, but just for fun:

The Radeon 5700 XT is 9.7 TF at 225 W.
Increase that by 50% and you get 14.55 TF.

This means a 12 TF XSX has a TDP of 185+ but <200 W, which is Radeon 5700 (non-XT) level, which is slightly less powerful than an RTX 2060, the least powerful of the RTX Nvidia GPUs.

SO THAT SUCKKKKKS

I don't even wanna imagine Sony going below 12 TF, that would be terrible. I expect them to hit a TDP of 200+ but <225 W, which means between 13 and 14.55 TFLOPS... and we are still talking about a mid-tier GPU.
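
For what it's worth, here's that perf/watt scaling spelled out (a rough sketch; the +50% is AMD's claimed RDNA2 gain taken at face value, and console power budgets never map this cleanly onto desktop cards):

Code:
tf_5700xt, watts_5700xt = 9.7, 225.0              # Radeon 5700 XT reference point
tf_per_watt_rdna1 = tf_5700xt / watts_5700xt      # ~0.043 TF/W
tf_per_watt_rdna2 = tf_per_watt_rdna1 * 1.5       # ~0.065 TF/W with the claimed +50%

target_tf = 12.0                                  # rumoured XSX GPU
print(target_tf / tf_per_watt_rdna2)              # ~185.6 W -> the "185+ <200 W" range above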
 
Ok, I want to do some maths here, trying to explain why I was expecting a minimum 14tf PS5 in order to really believe we'd have a high-end console and not the usual casual, mid-tier system.

I know I can't really calculate it this easily or in this way, but just for fun:

The Radeon 5700 XT is 9.7 TF at 225 W.
Increase that by 50% and you get 14.55 TF.

This means a 12 TF XSX has a TDP of 185+ but <200 W, which is Radeon 5700 (non-XT) level, which is slightly less powerful than an RTX 2060, the least powerful of the RTX Nvidia GPUs.

SO THAT SUCKKKKKS

I don't even wanna imagine Sony going below 12 TF, that would be terrible. I expect them to hit a TDP of 200+ but <225 W, which means between 13 and 14.55 TFLOPS... and we are still talking about a mid-tier GPU.
Join the #13TeraflopsGang :messenger_sunglasses:
 

MadAnon

Member
Native mode implies native clock speed; for BC purposes, the machine could be trying to mimic the PS4 Pro boost mode at its native clock speed.

Besides, none of this actually affects the overall CU count of the iGPU or the original argument, which was about having "disabled CUs", because it most definitely does have them. And as I already explained to you earlier, 36/36 active CUs is not realistic, because without redundancy any defective CU would render the chip useless. Console manufacturers will always leave themselves some breathing room to improve yields, hence they go for a slightly bigger die with room for redundant CUs.

The Oberon A0 tested in the repo has 18/36 active CUs at times for a specific purpose; how many were disabled could be 4, could be more, we don't know. This can also be explained by the Arden data found in the repo: it gave us 56 'active' CUs, nothing on how many CUs were disabled, and no, MS isn't manufacturing an iGPU with 56/56 active CUs either.
You just gave me a pointless lecture about things I already know. CUs disabled for yields are a completely different matter, because that involves CUs damaged during production, and those are usually permanently disabled.

You don't call it a "full chip" if there are non-yield disabled CUs.

It's most likely a 40CU chip with 4 disabled just like 5700 (36/40) compared to 5700xt (40/40)
 
Last edited:

 
sorry to burst your bubble but a perf/watt increase doesn't mean an IPC (instructions per clock) increase. they can do more with fewer watts but we don't know if those tflops are worth more just yet.

so at the moment 12 navi tflops are just 12 turing tflops. let's wait and see what they can get out of rdna 2.0 tflops.

Fair enough; it'd be best for AMD, MS and Sony if it actually does lead to increases that big, though. I mean, given Turing is 34% more efficient than RDNA and RDNA2 is supposedly 50% more efficient, I just took the difference between the two. Thought that was fair enough.

So yeah, it might not be the 13.8 number, but if the gains are as much as AMD claims and not just marketing PR it does actually put at least one (possibly both) next-gen systems a bit over 2080TI. At the very least they blow past the 2080, let alone Stadia.

*Also, I'm only claiming they would theoretically perform equivalent to a 13.8/13.92TF Turing card, not that they'd actually have a TF count equal to that, and only if the efficiency gains of RDNA2 over RDNA1 are actually met.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Fair enough; it'd be best for AMD, MS and Sony if it actually does lead to increases that big, though. I mean, given Turing is 34% more efficient than RDNA and RDNA2 is supposedly 50% more efficient, I just took the difference between the two. Thought that was fair enough.

So yeah, it might not be the 13.8 number, but if the gains are as much as AMD claims and not just marketing PR it does actually put at least one (possibly both) next-gen systems a bit over 2080TI. At the very least they blow past the 2080, let alone Stadia.
turing isn't 34% more efficient though. don't fall for nvidia's reported tflops; their turing gpus average out at 1.9-1.95 ghz during gameplay, which makes the 2080 an 11.4 tflops gpu and the 2070 a 9.1 tflops gpu during gameplay.

amd, on the other hand, inflates the number, but not by much. their average clocks for the 5700xt were around 1.8 ghz, i.e. about 9.3 tflops. since the 5700xt is roughly equivalent to a 2070, we can say they are basically on par when it comes to tflops.
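
For anyone who wants to check those numbers, peak FP32 is just shader count x 2 ops per clock (FMA) x clock. A quick sketch using the game clocks quoted above (the ~1.9-1.95 GHz and ~1.8 GHz figures are the averages claimed in the post, not official boost clocks):

Code:
def tflops(shaders, clock_ghz):
    # FP32 TFLOPS = shaders * 2 FLOPs/clock (fused multiply-add) * clock in GHz / 1000
    return shaders * 2 * clock_ghz / 1000

print(tflops(2944, 1.94))   # RTX 2080   -> ~11.4
print(tflops(2304, 1.97))   # RTX 2070   -> ~9.1
print(tflops(2560, 1.80))   # RX 5700 XT -> ~9.2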
 

SlimySnake

Flashless at the Golden Globes
Ok, I want to do some maths here, trying to explain why I was expecting a minimum 14tf PS5 in order to really believe we'd have a high-end console and not the usual casual, mid-tier system.

I know I can't really calculate it this easily or in this way, but just for fun:

The Radeon 5700 XT is 9.7 TF at 225 W.
Increase that by 50% and you get 14.55 TF.

This means a 12 TF XSX has a TDP of 185+ but <200 W, which is Radeon 5700 (non-XT) level, which is slightly less powerful than an RTX 2060, the least powerful of the RTX Nvidia GPUs.

SO THAT SUCKKKKKS

I don't even wanna imagine Sony going below 12 TF, that would be terrible. I expect them to hit a TDP of 200+ but <225 W, which means between 13 and 14.55 TFLOPS... and we are still talking about a mid-tier GPU.
what are you talking about?

12 rdna 2.0 tflops would be better than the 2080 which is 11.4 tflops.

what are you doing using the tdp of older cards that are only 7.9 tflops? just stop.
 

TLZ

Banned
Dude, seriously? They also tested a 3rd, native mode with 36 CUs at 2GHz, which was referred to as "full chip" performance. If it was just BC tests, then explain that native "full chip" mode. Maybe go read those files again. They were not just BC tests.

There won't be a 36/36 CU GPU? No shit, Sherlock.
Can't it simply mean testing 36 CUs at the native speed of the chip, which is 2GHz? Why's that hard to comprehend? First clocked down to test at PS4 speed, then clocked down to test at Pro speed, all good? Ok. Now let's test it at the full capacity of this chip, 2GHz. We still good? No issues detected? Great. Now we can run all BC games at 2GHz and squeeze more out of them, make them run great.

That can be a possibility, no?
 

TLZ

Banned
Dude, seriously? They also tested a 3rd, native mode with 36 CUs at 2GHz, which was referred to as "full chip" performance. If it was just BC tests, then explain that native "full chip" mode. Maybe go read those files again. They were not just BC tests.

There won't be a 36/36 CU GPU? No shit, Sherlock.
I just thought of another possibility.

Maybe they're testing that chip at PS4 and Pro speeds because they're planning to use it in PS4 super slim and Pro slim models? Just a thought.
 
Last edited:

jroc74

Phone reception is more important to me than human rights
Found a very good post (among lots of others) over on Beyond3D that should put the Ariel/Oberon stuff in much better perspective for people who don't understand what the Oberon tests were actually testing. Post is from user AbsoluteBeginner



This is basically the best working case for Oberon that also fits what most insiders have been saying: that Oberon's tests were regression tests run against the Ariel iGPU profile. Since Ariel was an RDNA1-based chip, it did not have RT/VRS built into it. Even if Oberon has RT/VRS (in fact it's pretty much guaranteed now after today's AMD Financials thingy), they would not be enabled when running an Ariel iGPU regression; even users here like R600 mentioned this months ago.

It also would indicate that the Oberon tests datamined so far do not tell everything about the chip. They may or may not mention the chip's CU count (IIRC the first Oberon stepping listed "full chip" in its log), but we've already seen later steppings change the memory controller to increase the bandwidth to the chip. We don't know if Oberon has an extra cluster of CUs disabled in steppings beyond the very first one; if there were, they would have been there from the 2nd stepping onward, and I'd think something like that would call for a chip revision instead of just another stepping, but I dunno. Even so, we don't know how many additional CUs are present, if any.

And something else to consider: I saw some people mentioning that AMD said "multi-GHz GPUs" during a segment on GPU products and systems releasing this year? Did that happen? If so, I don't think they would use that phrase if they weren't talking 2GHz or greater, and we know Oberon has a clock of 2GHz. And now we practically know PS5 is RDNA2, which has upwards of 50% more efficiency versus RDNA1. That would obviously also shift the sweetspot northward, which makes an RDNA2 chip at those clocks a lot more feasible. It's still maybe something crazy, but not as crazy as a lot of people were thinking before today's news, eh?

Although that actually raises an interesting question about why XSX's clocks are "so low" if RDNA2 efficiency is so much better. Either the 50% claim over RDNA1 is AMD PR talk, or MS felt no need to push the clock higher and chose guaranteed stability at a cooler GPU clock. That obviously also means their design has room to up the clocks if Sony outperforms them on the GPU front in terms of TFs. The fact they've seemingly gone with a 1.675GHz - 1.7GHz clock on an RDNA2 chip (with the sweetspot probably shifted a good bit north of RDNA1's 1.7GHz - 1.8GHz) might hint that they are fairly certain they have the stronger of the two machines, but the question now is by how much? (Also, I kinda shamelessly took the idea of XSX clocks indicating anything relative to PS5 from another post over there, but I thought it was worth thinking about.)

So yeah, there are still a lot of unknowns, but given Oberon E0 was tested into December of last year, I'm pretty much 100% sure Oberon is the PS5 chip. However, I'm also pretty much 100% sure we haven't really seen a benchmark testing Oberon itself, just the Ariel iGPU profile regressed on Oberon, meaning we haven't seen the entirety of the chip (I think this is exactly why Matt also said "disregard it" in reference to Github, because it wasn't testing the full chip or even much of anything beyond the Ariel iGPU). And that's the fun part, because it can run a wide gamut. However, knowing RDNA2 efficiency and XSX's pretty "tame" GPU clock, and the fact that high-level MS and Sony people would know a lot more about each other's systems than any of us, I think that might signal MS is comfortable with the lower clock because they're fairly certain they at least have the bigger chip. Whether that means PS5 is 36/40 or (as a die estimate from a few months ago speculated) 48 CUs, or maybe even into the very low 50s, is unknown.

That's why I've been rolling with 48 CUs as Oberon's actual size, with four probably disabled for yields. At 2GHz that actually hits around 11.26TF, which is better than my earlier numbers, even. It does kinda depend on Oberon's full size actually being 48, though, and on whether they can keep the 2GHz clock stable, because that is probably still a tad north of RDNA2's upper sweetspot range.

Either way I think we can ALMOST certainly put the 9.2TF PS5 talk to rest now, but funnily enough today's news just reaffirms the datamines, the leak and even the insiders if there's more to Oberon in terms of CUs than the initial test that showed 40 as the "full chip" (which, to be perfectly fair, could have just been referencing the Ariel iGPU profile, since Ariel is a 40CU RDNA1 chip). And being 100% fair, while I do think MS clocking XSX as low as it is (1.675GHz - 1.7GHz) is both odd and maybe indicative they're comfortable they have a performance edge over PS5, Oberon could also be a 58 or 60 CU chip if we're being honest, because again there's the whole butterfly thing and 18x3 gives you 54. So it could be more a case MS knows they have an advantage right now but Sony could have upped performance and then you get MS responding by having headroom to push their clocks higher.

Or it could even be a case where MS doesn't know as much about PS5 as some think, but they might know Oberon is also a big chip, and they want to see for certain where PS5 actually lands by throwing 12TF out there. So if PS5 reveals their number and it's the same or somewhat larger, MS can enable an upclock on the GPU to match or surpass it. And I would think they have already tested the GPU at higher clocks by now just in case that type of scenario plays out. That's the other way to see their announcement from last week, anyway.

But again, it all hinges on what Oberon actually fully is, and we'll only know for sure if another benchmark test gets datamined that isn't running the chip on an Ariel iGPU profile. That could maybe come this week, or within the next few weeks. Hopefully soon. If it does and we still see it's a max 40CU chip, then it's time for people to accept that. If it's a larger chip, at around 48CUs, then they could either be running it with 4 CUs disabled or all 48 on, which would get them between 11.26TF - 12.28TF @ 2GHz (see the sketch after this post), aka virtually identical to XSX. If it's even larger, like a 60CU chip, and they're still running it at 2GHz even in that case, then it just means MS can upclock the XSX at a rate they've already internally tested as a contingency plan to close the performance gap, because anything beyond 2GHz in a console-like form factor is probably gonna melt silicon.

Thing is, all three of those scenarios have an even chance of playing out, and we're only going to get a better, fuller indication a few weeks from now. Don't throw away one of those possibilities even if you prefer another, because there honestly isn't a very strong reason to throw any of these scenarios out of the window just yet.

but we CAN throw out the idea PS5 isn't using RDNA2, that much is essentially official.
First, amazing post.

I'll be the first to say I am not as well versed in this as some are.

But I swear that it wasn't until recently that the github data was thought to be a possible mix of Ariel and Oberon data in key areas.

Before recently it was mostly "the RT and VRS were just turned off, not tested."

I could be wrong, like how for 2 months nobody saw that clip from the AMD engineer about both consoles having RDNA 2 that had both forums on fire.

We were still going on Oberon being Navi 10. That's where most of the "dur hur, PS5 doesn't have ray tracing" came from.
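
To put rough numbers on the CU-count scenarios in the quoted post above, here's a quick sketch (64 shaders per CU and 2 FLOPs per shader per clock are the standard RDNA figures; the 56 CU / ~1.675 GHz XSX configuration is only the rumour being discussed, not a confirmed spec):

Code:
def console_tf(active_cus, clock_ghz):
    # RDNA: 64 shaders per CU, 2 FLOPs per shader per clock (FMA)
    return active_cus * 64 * 2 * clock_ghz / 1000

print(console_tf(36, 2.000))   #  9.2 TF -- the GitHub "36 CU @ 2 GHz" reading
print(console_tf(44, 2.000))   # 11.3 TF -- a 48 CU chip with 4 disabled
print(console_tf(48, 2.000))   # 12.3 TF -- all 48 CUs active
print(console_tf(56, 1.675))   # 12.0 TF -- the rumoured XSX configuration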
 
So does PS5 being RDNA2 support or debunk the GitHub leaks? Is PS5 being RDNA2 even confirmed? Which one is it?

I won’t believe anything until Mark Cerny himself says so.
 
Last edited:

Silver Wattle

Gold Member
What if the 13.3 number includes the CPU? An AMD 3700X is around 480 GFLOPS; if you round that down to 400 GFLOPS due to the reduced frequency and cache in the console, then add 12.9 TFLOPS...
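
Spelling that idea out (a rough sketch; the ~480 GFLOPS figure is the poster's own, and it implies counting roughly 16 FP32 FLOPs per core per cycle, i.e. a single 256-bit FMA pipe, whereas Zen 2's theoretical peak with both FMA units is about double that):

Code:
cores, flops_per_cycle, clock_ghz = 8, 16, 3.75    # assumption that reproduces ~480 GFLOPS
cpu_gflops = cores * flops_per_cycle * clock_ghz   # 480
cpu_tf = 0.4                                       # rounded down for console clocks/cache
print(12.9 + cpu_tf)                               # 13.3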
 
And I don't agree in the slightest that Sony doing the API on its own won't have the same or better gains. This was the same talk during the DX12 era, with again many insiders and people talking about an Xbox advantage in that field, and Sony created their own APIs which were as good or better.

Sony has some geniuses over there that know their shit.
ICE Team... second to none.
 

DaGwaphics

Member
So does PS5 being RDNA2 support or debunk the GitHub leaks? Is PS5 being RDNA2 even confirmed? Which one is it?

I won’t believe anything until Mark Cerny himself says so.

RDNA2 is what's on the AMD roadmap for this year, with PC parts likely shipping before these consoles. It would be extremely unusual for either MS or Sony not to leverage this tech in their respective consoles. Both companies had access to the same features from AMD; any "sauce" will be from in-house co-processors/tweaks/etc. that could live there next to the AMD tech.
 

SlimySnake

Flashless at the Golden Globes
So does PS5 being RDNA2 support or debunk the GitHub leaks? Is PS5 being RDNA2 even confirmed? Which one is it?

I won’t believe anything until Mark Cerny himself says so.
it debunks the github leaks because the gpu in the gonzalo and flute apus was a navi 10 lite gpu.

it does not debunk a 9.2 tflops gpu however. it's possible sony was going with a smaller chip and, because rdna 2.0 is so goddamn efficient, they were able to clock it higher. however, it's still a dumb fucking design and no one in their right mind would pick that over a wide and slow design that will get them more tflops for fewer watts.

ps5 is rdna 2.0, confirmed today by amd.
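
A quick illustration of the "wide and slow" argument (figures are purely illustrative; 64 shaders per CU and 2 FLOPs per clock are the standard RDNA numbers): the same TF target can be hit with a narrow chip at a high clock or a wider chip at a lower clock, and since power rises faster than linearly with clock and voltage, the wider chip gets there with fewer watts.

Code:
def console_tf(active_cus, clock_ghz):
    return active_cus * 64 * 2 * clock_ghz / 1000   # RDNA FP32 TFLOPS

print(console_tf(36, 2.0))   # ~9.2 TF -- narrow chip pushed to 2.0 GHz
print(console_tf(48, 1.5))   # ~9.2 TF -- wider chip cruising at 1.5 GHz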
 

IR3TR0

Neo Member
Really looking forward to these next consoles, can't wait to see the likes of GT and Motorsport using RT etc. Fun times ahead.
 
RDNA2 is what's on the AMD roadmap for this year, with PC parts likely shipping before these consoles. It would be extremely unusual for either MS or Sony not to leverage this tech in their respective consoles. Both companies had access to the same features from AMD; any "sauce" will be from in-house co-processors/tweaks/etc. that could live there next to the AMD tech.

Thanks for pointing that out for me. It seems weird MS would be RDNA2 and not Sony, which would put AMD’s relationship with Sony in jeopardy.
 

icerock

Member
You just gave me a pointless lecture about things I already know. CUs disabled for yields are a completely different matter, because that involves CUs damaged during production, and those are usually permanently disabled.

What is wrong with you? Do you even read your own posts?

This is what kick-started this discussion.

36 CUs at 2GHz were specifically referred to as "full chip" performance. This "we don't know how many CUs were disabled" narrative needs to die already.

Obviously that doesn't mean it's 100% the PS5 chip.

You, in your own words, are guessing how many CUs were disabled (see below). Which, again, in plain English means "we don't know how many CUs were disabled"; it's all conjecture.

It's most likely a 40CU chip with 4 disabled just like 5700 (36/40) compared to 5700xt (40/40)

You don't call it a "full chip" if there are non-yield disabled CUs.

I want to be condescending here but I'll refrain. The "full chip" is specifically referring to the 18 WGPs which were activated to run those tests, which for the umpteenth time were related to BC. We don't know how many CUs were disabled for testing purposes or for yield purposes, which is what I elucidated in my original post. Also, Sony and their engineers aren't restricted by AMD's PC offerings; AMD don't say to them "here is what is available to you, pick from the 5600/5700/5800 series." They offer them all the choices from their available architectures and technologies to pick and build from. If Sony wanted to create a bigger Navi based on RDNA1, they could have done that.

I've been polite in all of our discussion, and answered all your questions, including the one on native clock speed. Stop hanging onto everything that has been provided on Github; it lacks a lot of context. For instance, Oberon A0 shares a resemblance with Ariel B0 according to the repo, and Ariel B0 is Navi 10, aka an RDNA1 chip, meaning no HW RT/VRS. As of now, we know the PS5 houses an RDNA 2 chip, so does that mean Oberon is also based on RDNA1? Maybe, maybe not. We just don't know, because the Github data is messy.

First, amazing post.

I'll be the first to say I am not as well versed in this as some are.

But I swear that it wasn't until recently that the github data was thought to be a possible mix of Ariel and Oberon data in key areas.

Before recently it was mostly "the RT and VRS were just turned off, not tested."

I could be wrong, like how for 2 months nobody saw that clip from the AMD engineer about both consoles having RDNA 2 that had both forums on fire.

We were still going on Oberon being Navi 10. That's where most of the "dur hur, PS5 doesn't have ray tracing" came from.

All the confusion exists because the data itself is pretty confusing. It provides a lot of information but raises a lot of questions we don't have a definite answer to, just more theories. For instance, on RT and VRS, it has been pointed out that those blocks simply aren't required to be activated for the purpose of BC testing, since neither the PS4 nor the PS4 Pro uses such tech.

I feel our interpretation of Oberon was wrong all along; the testing being done on Oberon was from the Ariel iGPU profile, as thicc_girls_are_teh_best mentioned in their post. Ariel was RDNA1; imo it was an early 2019 PS5 design, lacking many features such as VRS and RT. The big shift already happened when Sony moved to Oberon: it was RDNA 2 all along, we just didn't know, because the testing being done in the repo was a regression test of the Ariel iGPU.

None of this affects the CU count btw; I don't think the CU count changed from Oberon A0 -> E0. Whatever the total CU count on Oberon A0 is, imo, is the final CU count on the PS5 iGPU. So, if Oberon A0 housed 40 CUs with 4 disabled, then that's the PS5 we will get.
 
Last edited:
it debunks the github leaks because the gpu in the gonzalo and flute apus was a navi 10 lite gpu.

it does not debunk a 9.2 tflops gpu however. it's possible sony was going with a smaller chip and, because rdna 2.0 is so goddamn efficient, they were able to clock it higher. however, it's still a dumb fucking design and no one in their right mind would pick that over a wide and slow design that will get them more tflops for fewer watts.

ps5 is rdna 2.0, confirmed today by amd.

Yeah, it doesn't make sense for Sony to go for a lower spec unless they're going for a cheaper system. But I'd like to hope Cerny knows what he's doing.
 
Why would Sony bother to go with such cutting-edge tech just to go only halfway, to 9.2 TF? Wasn't there a report that Sony is going for an expensive cooling solution, which would seem redundant if RDNA2 is that much more efficient?
 
Last edited:

Sinthor

Gold Member
Why would Sony bother to go with such cutting-edge tech just to go only halfway, to 9.2 TF? Wasn't there a report that Sony is going for an expensive cooling solution, which would seem redundant if RDNA2 is that much more efficient?

Here's my speculation on the whole cooling system thing. It could be because they're running a smaller chip at really high clocks to maximize performance, sure, that's possible, or it could be that they have a larger chip but, instead of using a bigger 'tower' type of form factor, want to stick to a smaller one, something like the PS4 Pro is today. So it could be for a few reasons. Myself, I tend to favor that they have a system that's very close to the XBsX in performance but are sticking with a smaller form factor, so they're using the cooling unit to maintain temperatures in that small form as well as to make it quieter. But we'll see. Heck, they could do both... run a larger chip AND crank it up higher using the cooling unit. That kind of seems to be where it's headed, if the rumored leaks from developers and such are accurate. This is part of what makes speculation so much fun until we get a reveal, right?

Either way, I'm HYPED for this generation. I think it's pretty clear that for once the consoles are going to be comparable to at LEAST the mid range of PC performance. Given optimizations in a closed system, etc., they may well get close to the high end, at least for the next year or so. That's pretty awesome! And I really am impressed by the games we have NOW. Things like God of War and what we're seeing in Ghost of Tsushima... just WOW. I can't even imagine how it will be with all this extra power to bring to bear. In fact, I think people are focusing TOO much on the GPUs. I think those will be similar and that's great... but the CPU is where it's at, I think. The CPUs will be MUCH more powerful than consoles have had in the past. GPUs are great and improved graphics are awesome, but the CPUs are what will allow much more complex coding AROUND those graphics. So the games themselves can be much more complex and can do much more. That, I think, will give us a greater leap in a lot of ways than the GPU power will. Yes, games will be prettier, but the MECHANICS of the games will be more sophisticated due to the CPUs.
 

bitbydeath

Gold Member
Why would Sony bother to go with such cutting-edge tech just to go only halfway, to 9.2 TF? Wasn't there a report that Sony is going for an expensive cooling solution, which would seem redundant if RDNA2 is that much more efficient?

Despite the github leak being dead, some are clinging to the idea that Sony could still use a 36CU device. Seems extremely unlikely, but whatever floats their boat.
 
The better question is: will this be the biggest technological leap between consoles since the 3D era? Seems like it to me, with the specs on the systems aiming at higher-end PCs: GPU power similar to $499+ cards, a powerful CPU, and an SSD that will destroy SATA drives. The SSD by itself is a massive leap forward for the system and will be the single biggest upgrade, even over the GPU/CPU, as it will impact everything in the system dramatically.

The PS3 --> PS4 jump was extremely underwhelming, especially with the garbage CPU that bottlenecked games all gen on top of the bandwidth. This gen is more on par with the earlier generational leaps, if not bigger. The systems are coming out at the right time.

I am excited. Now if only we could kill off all the cookie cutter AAA games so they actually start to use the hardware to innovate instead.
 
Last edited:

StreetsofBeige

Gold Member
I am excited. Now if only we can kill off all the cookie cutter AI games where they actually use the hardware to innovate.
Good luck with that. AI hasn’t improved in probably 20 years.

the biggest innovations the past year are raytrace lighting and 3D audio.

after that, it’s whatever Nvidia promotes like better hair strands.
 
The better question is: will this be the biggest technological leap between consoles since the 3D era? Seems like it to me, with the specs on the systems aiming at higher-end PCs: GPU power similar to $499+ cards, a powerful CPU, and an SSD that will destroy SATA drives. The SSD by itself is a massive leap forward for the system and will be the single biggest upgrade, even over the GPU/CPU, as it will impact everything in the system dramatically.

The PS3 --> PS4 jump was extremely underwhelming, especially with the garbage CPU that bottlenecked games all gen on top of the bandwidth. This gen is more on par with the earlier generational leaps, if not bigger. The systems are coming out at the right time.

I am excited. Now if only we could kill off all the cookie cutter AAA games so they actually start to use the hardware to innovate instead.

Well, the jumps from Xbox to Xbox 360 and PS2 to PS3 were bigger; maybe the only field where the jump is bigger this time is the SSD thing.
 
Good luck with that. AI hasn’t improved in probably 20 years.

the biggest innovations the past year are raytrace lighting and 3D audio.

after that, it’s whatever Nvidia promotes like better hair strands.

Problem with AI in games is devs intentionally don't make it good. It's not that they can't make it better; they refuse to because they think we the players don't want it. You can look at games in the same franchise from the past decade that have better AI.

Maybe you could say the garbage CPU in the X1/PS4 helped contribute to this problem, but that's not really true. It's because devs started to put the power into graphics instead of pushing it towards gameplay. I can't tell you how cool it used to be to see AI where I'd snap a guy's neck in the dark, then his team would spot me and cautiously approach through the dark. Since they didn't have NVGs they'd toss flashbangs into the pitch black to make sure they weren't at a disadvantage. Or enemies who would smartly pull out NVGs to find me. Stuff like that was cool. How did it never evolve? Now they're so brain-dead dumb they don't even tax the player.

Even the AI in a game like Horizon Zero Dawn: when I first saw a Sawtooth aggressively pursue me, that was the most intense encounter ever... until I ran back 40ft and it decided to completely reset. WHY, devs? There's no excuse for that now with a Zen 2 CPU, so I better not see it anymore! There wasn't an excuse then either, but I'll give you a pass for the crappy Jaguar CPUs.
 
Last edited:

James Sawyer Ford

Gold Member
The better question is: will this be the biggest technological leap between consoles since the 3D era? Seems like it to me, with the specs on the systems aiming at higher-end PCs: GPU power similar to $499+ cards, a powerful CPU, and an SSD that will destroy SATA drives. The SSD by itself is a massive leap forward for the system and will be the single biggest upgrade, even over the GPU/CPU, as it will impact everything in the system dramatically.

The PS3 --> PS4 jump was extremely underwhelming, especially with the garbage CPU that bottlenecked games all gen on top of the bandwidth. This gen is more on par with the earlier generational leaps, if not bigger. The systems are coming out at the right time.

I am excited. Now if only we could kill off all the cookie cutter AAA games so they actually start to use the hardware to innovate instead.

yeah this gen is going to be insane

the leap looks to be one of the biggest ever in terms of improvements that could have a huge influence on game design and not just prettier graphics
 