
Next-Gen PS5 & XSX |OT| Console tEch threaD


VFXVeteran

Banned
UPDATE: A quick update on backward compatibility – With all of the amazing games in PS4’s catalog, we’ve devoted significant efforts to enable our fans to play their favorites on PS5. We believe that the overwhelming majority of the 4,000+ PS4 titles will be playable on PS5.

We’re expecting backward compatible titles will run at a boosted frequency on PS5 so that they can benefit from higher or more stable frame rates and potentially higher resolutions. We’re currently evaluating games on a title-by-title basis to spot any issues that need adjustment from the original software developers.

In his presentation, Mark Cerny provided a snapshot into the Top 100 most-played PS4 titles, demonstrating how well our backward compatibility efforts are going. We have already tested hundreds of titles and are preparing to test thousands more as we move toward launch. We will provide updates on backward compatibility, along with much more PS5 news, in the months ahead. Stay tuned!

Well, Sony fans will love this because Bloodborne could possibly be made to run at 4K/60FPS.
 
Has this already been posted? Update on PS5 BC.

UPDATE: A quick update on backward compatibility – With all of the amazing games in PS4’s catalog, we’ve devoted significant efforts to enable our fans to play their favorites on PS5. We believe that the overwhelming majority of the 4,000+ PS4 titles will be playable on PS5.

We’re expecting backward compatible titles will run at a boosted frequency on PS5 so that they can benefit from higher or more stable frame rates and potentially higher resolutions. We’re currently evaluating games on a title-by-title basis to spot any issues that need adjustment from the original software developers.

In his presentation, Mark Cerny provided a snapshot into the Top 100 most-played PS4 titles, demonstrating how well our backward compatibility efforts are going. We have already tested hundreds of titles and are preparing to test thousands more as we move toward launch. We will provide updates on backward compatibility, along with much more PS5 news, in the months ahead. Stay tuned!
Good on them for being more clear about it. Personally, BC is not a deal breaker for me, I don't care about it that much, but a lot of people do, so this should put their minds at ease.
 

SonGoku

Member
Zelda looks much better than Metro imo but that doesn't mean it's technically better. We simply can't rely on artistic visuals to be the factor in determining whether a console has the better graphics tech. It's been a weapon of choice for "winning" an argument by some gamers for a long time. What I'm excited about this coming generation is that exclusives won't be just for the PS5/XSX. Some will be coming out for the PC. In that regard, I'm excited to see what the studios develop with a mindset that they can make assets and 3D features that go beyond what the PS5/XSX can do at interactive rates.
I used those examples because at the core they are designed around PS4/XBONE HW (Jaguar, 5 GB RAM, HDD, 1.3 TF).
A game designed around Zen 2 cores, 10+ TF, 14 GB RAM, and an SSD would be technically superior in a lot of aspects.
 

VFXVeteran

Banned
I used those examples because they are designed around PS4/XBONE HW (Jaguar, 5 GB RAM, HDD, 1.3 TF).
A game designed around Zen 2 cores, 10+ TF, 14 GB RAM, and an SSD would be technically superior in a lot of aspects.

Right, but even with that hardware it won't be enough to run Metro @ 4k/Ultra/RTX/60FPS (I even doubt 30FPS).

Developers are going to be careful with their engine subsystems because while you'll have the throughput to place assets in memory very, very quickly, your bottleneck will end up somewhere else -- mainly THE GPU (which is both consoles' weak spot IMO). No amount of SSD speed or 12 TFLOPS will be enough to keep from overloading the GPU with extremely high-res textures, full RT lighting, global illumination and shadows at very high pixel density (i.e. resolution).
 

HeisenbergFX4

Gold Member
10 TF is still better than most people expected (for comparison, the RTX 2080 is also 10 TF). Maybe the real reason some PlayStation fans feel disappointed now is that they really believed what fake insiders told them, and of course 10 TF looks less impressive than 13-14 TF :p.

That's what I have been saying.

The main reason people are not happy is that they had unrealistic expectations.

We are still getting a kick ass system to play kick ass games.
 

Leskov

Neo Member
With your reasoning the PS4 should have had 60 fps as a standard while the One had 30 fps. Surprise: both sucked.
We've seen uncapped performance of the PS4 and Xbox One in games that targeted 60 fps. The difference in framerate is clearly related to GPU power.
Target framerate is a design decision made by developers. Aside from the several studios that mostly make shooters, all target 30 fps. Some multiplayer modes in a couple of games are 60 fps, but that's marginal, because the tradeoff in image quality was too big. My hope for next gen was that developers could finally achieve a constant 60 fps in all games. Right now I'm not so sure of that.
But this is my not-so-competent understanding of what is going on with the next-gen consoles and the PS5 in general, and this is why I hope that someone could explain it to me.
 

Aceofspades

Banned
I thought about doing a deep dive into the next-generation consoles from the standpoint of a developer in a new thread, but I can't open threads yet, so I posted this here. I would like to take you on a journey where we investigate certain aspects, dive into the specifications, and look at what they could mean for game development and the technologies they enable. I apologize in advance for any grammatical, spelling, or other errors. Remember I am only human and very likely made some mistakes (please point them out if you can).

This will be a dive into the specifications of both consoles, so we will compare them and look at what each console can provide in each specific area and what kind of differences we can expect. That being said, this is still very much speculation on my part, since I haven’t got to play with either console.

Starting with the CPU:

The processor on both consoles is going to be nearly identical, we don’t know every specific detail, yet we can expect them to be very much the same in terms of design and architecture.

Xbox:
The Series X is running an 8-core, 16-thread CPU @3.6 GHz with SMT enabled and @3.8 GHz with SMT disabled.

PlayStation:
The PlayStation 5 is running an 8-core, 16-thread CPU @3.5 GHz with a variable frequency (more on the variable frequency later).

We will start by talking about simultaneous multithreading (SMT):
SMT is a multithreading technology that presents each physical CPU core as two hardware threads, giving the CPU the option to run two workloads on each core simultaneously. This parallelism enables the CPU to get much better efficiency out of the workload distribution and allows for faster workload completion if used correctly. The technology relies heavily on programming effort, as programmers must distribute the different workloads among the threads of the CPU. SMT will especially shine in CPU-intensive games, for instance those that rely on physics-based calculations like the Euphoria engine used by Rockstar Games. AI in general also relies a lot on the CPU, since most AI logic is calculated and processed there.

This technology is therefore perfect for open-world games, which need a lot of cores for all the different things going on at the same time. The following picture shows the difference between linear load processing and parallel workload processing.

[Image: diagram comparing instruction slots on a CPU core with and without SMT]


The CPU without SMT does one task after the other, which results in idle time, visualized by the white spots in the picture. While this is an extreme case, there is a lot of idle time in which the processor loses processing power doing nothing but waiting for the next instruction (workload). The SMT processor, on the other hand, has far fewer empty slots, indicating a more efficient distribution of the different types of tasks across its threads. So, in short, if used correctly the same workload can be processed much more efficiently by spreading it among the threads.
Computation-heavy workloads (physics engines, object streaming, AI, destruction systems, etc.) benefit the most from this technology, so we can expect some nice improvements during the next generation.
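To make the idea concrete, here is a toy Python model of the picture above. The numbers are purely illustrative (they are not real CPU timings), and the SMT case is idealized: one thread's compute is assumed to hide perfectly in the other's stalls.

```python
# Toy model of SMT: a core runs tasks that alternate "compute" slots and
# "stall" slots (e.g. waiting on memory). Without SMT the stalls are idle
# time; with two hardware threads, one thread's compute fills the other's
# stalls. All numbers are illustrative, not real CPU timings.

def run_without_smt(tasks):
    """Each task is (compute_slots, stall_slots); slots run back to back."""
    return sum(c + s for c, s in tasks)

def run_with_smt(tasks_a, tasks_b):
    """Idealized SMT: one thread's compute hides in the other's stalls."""
    compute = sum(c for c, _ in tasks_a) + sum(c for c, _ in tasks_b)
    stalls_a = sum(s for _, s in tasks_a)
    stalls_b = sum(s for _, s in tasks_b)
    # Total time can't go below total compute; stalls that can't be
    # overlapped still cost time.
    return max(compute, stalls_a, stalls_b)

tasks = [(3, 2), (4, 1), (2, 3)]        # e.g. physics, AI, streaming jobs
serial = run_without_smt(tasks * 2)     # two batches on one thread
smt = run_with_smt(tasks, tasks)        # two batches on two threads
print(serial, smt)                      # SMT finishes in fewer slots
```

In this idealized case the total time collapses from the sum of all slots down to roughly the total compute, which is exactly the "fewer white spots" effect in the diagram.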

The processor in our current-generation consoles is severely underpowered and holds back most of the other components. It is also the reason most games lack good framerates; this is especially true on the PS4 Pro and Xbox One X. It is why games rarely hit 60 fps, but with the next-gen consoles 60 fps is very doable, while genres like racing could even opt for 120 fps. This is very exciting indeed and will surely enrich game worlds to the point where, in a direct comparison, current-generation games are very distinguishable from next-generation ones.

Remember, the biggest reason we are not seeing more AI characters walking around in our games is mostly a lack of CPU power. A good example is Assassin's Creed Unity, where the PlayStation 4 and Xbox One really struggle to keep up for the most part; the framerate even dips below 20 fps, which makes the game very hard to enjoy.
[Video: gameplay analysis showing the fps on both consoles for comparison.]
The Zen 2 processors in both new consoles will allow developers to put much more AI on screen at the same time, without having to worry that the framerate will dip into the low 20s. They could also allow for much smarter AI with many more (thinking) operations per second, enabling deeper and smarter logic.

The Xbox Series X allows SMT to be disabled, which in return clocks the CPU an additional 200 MHz higher. This will most likely be used for backwards compatibility or for engines which do not yet make use of SMT. Games which do not need a lot of parallel compute but benefit from any additional frequency boost (games like Counter-Strike: Global Offensive) will probably also make use of this.

Now, before we jump to the next part, I want to talk a bit about the variable clock frequency of the PS5 and why it is less of a big deal than people think. The CPU rarely runs at 100% load sustained over a long period, which makes fixed clocks inefficient in terms of power consumption and heat. In a closed system such as a games console, engineers have to keep power consumption and heat within limits; going beyond what the cooling can handle brings enormous problems with it, as some early PlayStation 4 adopters experienced themselves.

The APUs inside the consoles are usually more power efficient than the traditional PC design of two separate parts, yet both components share the same die and rely on the same cooling element. This means the CPU could take away upclock potential from the GPU, which is much more likely to need the additional clock speed. While the CPU is not fully utilized or simply not needed as much, it can slightly underclock itself to give the GPU more headroom. Power consumption goes hand in hand with clock speed, and the relationship is far from linear (the higher the clock, the more drastically power consumption rises), so lowering the CPU clock by only a few percent should be more than enough to give the GPU its desired maximum clock speed of 2.23 GHz.

This is AMD's SmartShift technology, with which the CPU can help the GPU by redistributing power to squeeze out a bit more performance. Simply said, if the CPU is underused it will draw less power so the GPU can use more, while staying within the same power limit the engineers originally intended. This will not be a massive jump in performance and will only yield a few fps in the real world, but again, every bit helps.
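As a rough sketch of that power/frequency tradeoff, here is a Python toy model. I am assuming a cubic power-to-frequency relationship (power scales roughly with voltage squared times frequency, and voltage tends to scale with frequency), and the wattage figure is made up for illustration; Sony has not published these numbers.

```python
# Rough sketch of the SmartShift idea under an ASSUMED cubic
# power/frequency relationship (P ~ f^3). The wattage below is made up
# for illustration; it is not Sony's actual power budget.

def power(base_power, base_freq, freq):
    return base_power * (freq / base_freq) ** 3

CPU_BASE_GHZ = 3.5
CPU_POWER_W = 30.0     # assumed CPU share of the fixed power budget

# Downclock an underused CPU slightly...
cpu_freq = 3.4
freed = CPU_POWER_W - power(CPU_POWER_W, CPU_BASE_GHZ, cpu_freq)
# ...and hand the freed watts to the GPU within the same total budget.
print(f"CPU at {cpu_freq} GHz frees ~{freed:.1f} W for the GPU")
```

The point of the cube law is that a tiny frequency drop buys a disproportionately large power saving, which is why only a few percent of CPU clock should be enough to keep the GPU at its maximum.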

Now then, to the juicy part: comparing the CPUs in both consoles. Will the 100 MHz higher frequency make any difference in the real world? Well, yes and no. Let me explain: for benchmark purposes you would of course see a few points going to the CPU of the Xbox Series X, but in real-world applications like games, a performance delta of under 3% is not going to make any difference whatsoever. So the CPU performance of both consoles is essentially the same.
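Where that sub-3% figure comes from is just the clock ratio:

```python
# XSX CPU: 3.6 GHz with SMT enabled; PS5 CPU: up to 3.5 GHz.
xsx_ghz, ps5_ghz = 3.6, 3.5
delta = (xsx_ghz - ps5_ghz) / ps5_ghz * 100
print(f"XSX clock advantage: {delta:.1f}%")  # comfortably under 3%
```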


Then on to the next topic, SSDs:

Solid state drives are truly something magical for the next generation, as they will simply allow much richer worlds without the hassle that game developers have to go through right now. Let me elaborate by going into detail on how a standard HDD works and why it is such a limiting factor in game creation right now.
The main problem with HDDs is the way they work: they are mechanical drives, and as such there is a lag before they can read data from the disk. Imagine a disk spinning at a certain speed, let's say 5,400 revolutions per minute. Each spin represents a chance to read data from the specific area in which the required data is stored. This means there is a certain latency while the head assembly travels to the location of the needed data; this is referred to as seek time.
This in itself is already a problem, because it is time in which the CPU and the rest of the components sit idly and wait for the data to be delivered.
[Image: labeled diagram of a hard drive's platters and head assembly]

Looking at this picture, we can understand that the head has to move to the area of the required data in order for it to be read. Depending on the location of the stored data, the transfer speed varies: data on the outer ring can be read faster than data on the inner ring, and so on. So now we have another problem, variable read and write speeds. All this amounts to a lot of problems:

  • The hard drives inside the current-gen consoles manage about 100 MB/s, and this speed is a very limiting factor. With the low transfer speed of HDDs, assets need to be stored in RAM to be accessible much more quickly. This eats an unnecessary amount of RAM in the current-gen consoles; inefficiency is the keyword here. It is also one of the reasons game worlds can't currently get much bigger than they are while staying as rich as they are. In order to stream in more objects, we simply need faster read and write speeds so we do not rely so much on RAM. Overstepping the speed limitations results in various problems, one very well-known effect being object pop-in. [Video: an illustration of that effect; it can also be caused by the CPU, this is just an example.]
  • The seek time introduces latency in which the other components must wait for the data to be accessed by the hard drive. Combined with the first problem, programmers constantly have to invent new tricks to bypass this limiting factor in the current consoles. The example given by Mark Cerny is a very good, real-world solution to this problem: storing an asset multiple times in various places on the hard drive (his example was a mailbox stored 400 times in different places). This is of course very inefficient in terms of storage utilization and takes away space needlessly. So it is not just a problem and limiter for game designers but a headache for programmers as well.
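The seek-time point is easy to quantify. On top of the time the head arm needs to move, the platter on average has to spin half a revolution before the requested data passes under the head; a quick Python sketch:

```python
# Average rotational latency of an HDD: on average the platter must spin
# half a revolution before the requested sector passes under the head.
# This is in addition to the time the head arm needs to move (seek time).
def avg_rotational_latency_ms(rpm):
    seconds_per_revolution = 60.0 / rpm
    return 0.5 * seconds_per_revolution * 1000.0

for rpm in (5400, 7200):
    print(f"{rpm} rpm: ~{avg_rotational_latency_ms(rpm):.1f} ms on average")
```

A few milliseconds per random access sounds small, but at 60 fps a whole frame is only 16.7 ms, which is why engines avoid random HDD reads at all costs.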

Now, with the SSDs installed in the next-generation consoles, we bypass all of these problems. Firstly, the raw speeds of the SSDs are very high:
PlayStation 5 = 5.5 GB/s (raw), 8-9 GB/s (compressed);
Xbox Series X = 2.4 GB/s (raw), 4.8 GB/s (compressed);
While the Xbox Series X is very fast, the PlayStation 5 has an insane amount of raw speed. This is very exciting especially for designers of open-world games: if they really utilize the full potential of the PS5's SSD, they could pull off some very amazing things. Secondly, SSDs have no seek time, as data is stored in flash and no mechanical part has to do any work, so the latency issue is also resolved.
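Putting the quoted figures side by side in Python. Note the ~13.5 GB of "game-available RAM" is my own assumption for illustration, not an official figure:

```python
# The quoted SSD figures, side by side. GAME_RAM_GB is an assumed
# amount of RAM available to games, purely for illustration.
specs = {
    "PS5": {"raw": 5.5, "compressed": 9.0},   # GB/s (9 = upper bound quoted)
    "XSX": {"raw": 2.4, "compressed": 4.8},   # GB/s
}
GAME_RAM_GB = 13.5

for name, s in specs.items():
    ratio = s["compressed"] / s["raw"]
    refill_s = GAME_RAM_GB / s["compressed"]
    print(f"{name}: ~{ratio:.1f}x effective compression, "
          f"full RAM refill in ~{refill_s:.1f} s")
```

Both consoles can replace essentially their entire working set in a couple of seconds, which is what makes the "stream the world in behind the player" design possible.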

But what does this mean for the real world?

Well it isn’t just faster loading times, it also allows much bigger game worlds to be loaded, while being much richer. Game objects can be streamed in directly from the ssds, not needing to rely on the ram to handle as much caching for the hard drive, giving a more efficient use of the available ram hence the less required jump in terms of ram capacity (more on that later).
If you want to see what could be possible with the next generation ssds, Star Citizen is a very nice example to look at. This game heavily recommends an ssd to be used for gameplay and if you ever try playing it without one


this is pretty much what awaits you. But the SSD requirement at the same time enables amazing game worlds, which not only look amazing but are stunningly rich given their size.

Comparing the two next-gen consoles, the more than 2x SSD speed advantage of the PlayStation 5 will initially not make a huge difference over the Xbox Series X in terms of capabilities. Developers first have to learn to make use of the crazy speeds both consoles provide, and it will take some time before we reach speeds where the Xbox Series X becomes the limiting factor. This is especially true in multiplatform games, which are always programmed for the weakest platform in any given component and then, if there are enough resources and time, optimized for the stronger hardware. First-party Sony exclusives could very well show the true potential of the PlayStation SSD, but do not expect multiplatform games to do the same; these will generally just load faster and allow other slight improvements over the Xbox Series X.

Then let's compare SSD storage on the PC, where it has already been utilized for years. Why is there no real difference between a SATA SSD and an NVMe SSD despite the huge difference in speed? The answer is simple: it comes down to how data is transferred. Engines right now do not utilize NVMe speeds and simply assume that you still use an HDD. Hopefully this will change very soon, even on PCs.


The ram:

PlayStation 5: 16gb GDDR6 @448GB/s
Xbox series X: 16gb GDDR6 (10GB @560GB/s, 6GB @ 336 GB/s)

The RAM this generation is frankly very unsurprising and not very interesting; I already wrote a little about it in another thread, but I will summarize. The amount of RAM this generation is lacking in terms of a generational leap: only 2x, while the last generation had a 16x jump in raw capacity. That's not very exciting, yet it is more than enough. In the SSD section I wrote that SSDs will let game assets be read directly from the drives without being cached as much in RAM, which will lead to the RAM being utilized much better. The bandwidth is also more than enough on both consoles; I do not believe that the RAM will be the limiting factor in any way.
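One way to put those bandwidth figures in perspective is per frame rather than per second, which is closer to how a renderer actually experiences them:

```python
# Peak memory bandwidth expressed per 60 fps frame (16.7 ms) instead of
# per second. Peak figures; real achievable bandwidth is always lower.
def gb_per_frame(bandwidth_gb_s, fps=60):
    return bandwidth_gb_s / fps

print(f"PS5: {gb_per_frame(448):.1f} GB touchable per frame")
print(f"XSX fast pool: {gb_per_frame(560):.1f} GB touchable per frame")
```

Several gigabytes of traffic per 16.7 ms frame is plenty for reading every texture and buffer a frame needs multiple times over, which is why I do not expect RAM to be the bottleneck.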

The GPU:

The teraflop metric has ruled the GPU world for a long time, and any slight uplift in teraflops is expected to make a world of difference, but is this really true?
Well, the answer is probably going to upset a lot of people, but I will explain why it is not the unbelievable difference maker most people think it is. Now, before we begin, let us look at both GPUs:

Xbox Series X: Custom RDNA 2 GPU, 52 CUs @1.825 GHz, resulting in 12.1 teraflops.

PlayStation 5: Custom RDNA 2 GPU, 36 CUs @2.23 GHz, resulting in 10.28 teraflops (variable frequency; same explanation as for the CPU).
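For those wondering where the teraflop numbers come from, they fall out of a simple formula: CU count × 64 shader ALUs per CU × 2 operations per clock (a fused multiply-add counts as two floating-point ops) × clock speed.

```python
# Where the teraflop figures come from: shader count times clock.
# Each RDNA CU contains 64 shader ALUs, and a fused multiply-add
# counts as 2 floating-point operations per clock.
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

xsx = tflops(52, 1.825)   # ~12.15 TF
ps5 = tflops(36, 2.23)    # ~10.28 TF
print(f"XSX {xsx:.2f} TF, PS5 {ps5:.2f} TF, "
      f"on-paper delta {(xsx / ps5 - 1) * 100:.0f}%")
```

Note this is a peak arithmetic throughput figure only; it says nothing about how well a game actually keeps all those ALUs fed.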

On paper the Xbox Series X has the clear advantage, with roughly an 18% performance delta between the two at peak PlayStation 5 performance. This is quite a nice lead for Microsoft, but historically speaking it is one of the smallest we have ever had between the two console makers.

Going back to the PlayStation 4 and Xbox One: the PS4 has a 1.84-teraflop GPU, while the Xbox One has a 1.31-teraflop GPU. That's roughly a 40% difference in power, and it is the same story with the Xbox One X and the PlayStation 4 Pro: X1X = 6 teraflops vs. PS4 Pro = 4.2 teraflops, about a 43% performance difference between the two. Looking at those consoles, we saw some very hefty performance gaps, but did they really make so much of a difference that either console was a deal breaker? In my opinion, not really. The only effect was that the more performant console had the better-looking multiplatform games, which simply ran a bit better, but it was still the very same game running. What game developers usually do with this much of a performance delta is simply adjust the resolution of the game and/or bump up some of the graphics settings.
Coming back to this generation then: with roughly an 18% performance delta, can we really expect that much of a difference between the two consoles, seeing as it is probably the smallest performance gap of any console generation yet? In my opinion, no. Multiplatform games will simply look or run slightly better on Xbox Series X and load faster on PlayStation 5. But it should in no way, shape, or form make either of the two consoles a dealbreaker because of some slight performance difference. Simply pick the console to your liking and you will very likely be pleased with your purchase.

Returning to a more technical standpoint and less of a comparative one: RDNA 2 is very exciting, as both consoles have very powerful hardware indeed. These GPUs should allow developers to push graphics beyond what we have right now. No matter whether you play on console or PC, remember that games are always made with the weakest hardware in mind, so you can expect a lot once games are built from the ground up for next-generation hardware. In combination with technologies like real-time ray tracing, games will indeed become much more lifelike. I keep coming back to Star Citizen, but I find it a good representation of what we can expect next-generation games to look like as a standard, if you will.


I will not be talking about the audio chips yet, as I find I have too little information to go into that topic right now, but I hope to come back to it soon.


Sadly, I could not make a new thread and had to post this here; excuse the long post. If anyone wants to make this into a new thread, be my guest.
I really hope you enjoyed this little journey into the next generation.

Thanks for reading.


Fantastic post mate, detailed and to the point 👍

Someone give this chap a gold membership so he can post quality technical threads .
 

Mr Moose

Member
Well, Sony fans will love this because Bloodborne could possibly be made to run at 4K/60FPS.
If only they'd patch it *cries in blood*
We need DriveClub patch, too, and a few others.

So basically, an attempt at copying what the Xbox is doing.
How long are your arms? PS systems have had BC since PS2.

So I am guessing that at launch it’s not going to be all the games? They still didn’t really clear this up.
We believe that the overwhelming majority of the 4,000+ PS4 titles will be playable on PS5

It'll be like on the Pro, probably, some titles (like a handful of them from what I remember) didn't like the boost mode.
 

Bo_Hazem

Banned
10 TF is still better than most people expected (for comparison, the RTX 2080 is also 10 TF). Maybe the real reason some PlayStation fans feel disappointed now is that they really believed what fake insiders told them, and of course 10 TF looks less impressive than 13-14 TF :p.

Those fake insiders aren't playing around; they meant to create this atmosphere of disappointment. The more we understand the PS5, the more I feel the XSX is in trouble with its $500 minimum price tag and lack of games.
 

kareemna

Member
Cerny clearly presented that there will be BC for the entire library of PS4 games, but only a handful can handle the high GPU and CPU clock rates without crashing. That is where the top 100 games come in, which are already enabled to utilize that "boost mode"; more games will be tested.
 

splattered

Member
Right, but even with that hardware it won't be enough to run Metro @ 4k/Ultra/RTX/60FPS (I even doubt 30FPS).

Developers are going to be careful with their engine subsystems because while you'll have the throughput to place assets in memory very very quickly, your bottleneck will end up somewhere else -- mainly THE GPU (which is both console's weak spots IMO). No amount of SSD speed or 12TFLOPS will be enough to stay away from overloading the GPU with extremely high res textures, full RT lighting, global illumination and shadows at very high pixel density (i.e. resolution).

You're saying that consoles won't be able to play these at 4k/Ultra/RTX/60FPS or even 30FPS, but I feel like you're kind of using current PC hardware and versions of the game as your basis here.

The XB1X version already looks incredible... you don't think with literally twice the amount of power and new DX12U tools at its disposal it can turn things up a notch and implement Ultra textures and RT similar to the (unoptimized) upgrades we've already seen for Gears 5?

This will be an exciting thing to keep an eye on, even if it ends up a bit disappointing.
 

SonGoku

Member
Right, but even with that hardware it won't be enough to run Metro @ 4k/Ultra/RTX/60FPS (I even doubt 30FPS).
100% agree, there's no console magic bullet; anything it does, a similarly specced PC should be able to replicate.
My point was that next-gen-only games designed around the new baseline will look and feel much better than Metro at max settings; they will also be more technically advanced in different areas.
RT implementation won't be as advanced; I'm sure they'll look for cheaper, more efficient implementations, or if they go for advanced RT it will likely be dynamic 4K/30fps.
 
I'm still wrapping my head around this, there are so many variables:
  • PS5 has fewer CUs but they are clocked higher
  • 36 CUs will have fewer idle resources; it's easier to use all 36 CUs in parallel than 52 CUs
  • Other GPU components run at a higher frequency
  • Does dropping resolution free up CU resources?
My guess is games will run at feature parity, with PS5's resolution being lower. Care to chime in on these points?

Third-party games, I suspect, won't take advantage of XSX's extra CUs the way described, and it'll be harder for them to get the (essentially) "free" boost in graphics-orientated performance vs. PS5, since the games have to target parallel pipelining more directly.

Basically what I'm trying to get at is: PS5 will be capable of fewer unique compute and graphics-orientated tasks simultaneously (including combination tasks relying on the product of two separate tasks calculated on the same cycle) than XSX, but can do those tasks on a wider body of unique data assets that are all present in memory simultaneously. However, the rate at which it can feed those assets to the GPU is over 100 GB/s slower than XSX's. To make up for this, the smaller range of simultaneous calculation it CAN do, it can do at a somewhat higher frequency (405 MHz faster).

With XSX, it has a smaller block of GPU-orientated memory (10 GB), meaning it holds fewer unique assets at any given time vs. PS5. However, it can feed those assets to its GPU over 100 GB/s faster (560 GB/s), and its GPU can calculate more unique graphics and compute tasks concurrently/in parallel versus PS5, due to having 16 extra CUs. This is slightly offset by the fact that the rate at which it can cycle through data related to those tasks is slower, up to 405 MHz slower. To offset this, since that 10 GB memory pool is over 100 GB/s faster, it can "swap out" unique assets in that block of memory quickly enough to simulate operating on a pool of unique assets virtually as large as PS5's.

This is only me focusing on graphical polygonal models, textures and the like in main memory, specifically, but it can apply to non-graphics assets and data that, in XSX's case, you would rather put in the 10 GB partition than the 3.5 GB partition. In terms of additional GPGPU features, XSX would seem to have the upper hand due to the 16 extra CUs, and for PS5 to match that level of GPGPU capability, either graphics fidelity has to drop or framerates will drop, since that work would be shifted to the CPU cores instead. Basically, XSX can do more GPGPU compute for non-graphics loads while matching PS5's graphics fidelity in all areas besides perhaps some texture or low-priority data streaming (small bits of text, maybe certain audio data, due to PS5's faster SSD), or it can have near-parity with PS5's GPGPU capabilities (still slightly more, due to faster CPU cores) while offering notably more graphical fidelity in most areas and/or more effective/accurate RT performance.

There unfortunately isn't a simpler way I can explain it (at least for now), but maybe an analogy worth looking at is the SNES's architecture vs. the Mega Drive's. In most aspects I'd say the XSX is closest to the SNES's wide & slow approach and the PS5 is closer to the Mega Drive's narrow & fast approach. REALLY rough analogies/comparisons, but maybe easier to get a grasp on.
 

Zoradeity

Neo Member
Uhm? Yes. If we are going down this road, why not question CU count, CPU cores, SSD speed, etc.?
He is the lead PS5 designer and knows the ins and outs of the system; if he says it stays at 10.27 TF or close to it most of the time, then I believe him. He has a pretty solid track record, no reason to doubt him.
You can't compare apples to oranges; the PS5's boost system and power delivery are different from any PC or laptop. Also, a 10% decrease in power translates to a 2% frequency reduction.

See, a lot of people apparently missed that. I just read an article on Wccftech suggesting that the PS5 sustaining a 2.23 GHz clock is misleading, but how can it be when Cerny literally said the opposite? It's so baffling to see these tech gurus fail at reading comprehension so miserably.
 

Bo_Hazem

Banned
That's what I have been saying.

The main reason people are not happy is that they had unrealistic expectations.

We are still getting a kick ass system to play kick ass games.

Because most console gamers aren't familiar with PC gaming. As you can see, the difference in a real test is laughable, and that's without the SSD secret sauce and Mark Cerny's other witchcraft.
 

Ascend

Member
Xbox fans, sorry to piss on your party, but the XSX may not have more performance than the PS5, as I suspected was the case, and it echoes what an industry insider hinted at on here:



This reality is slowly being disseminated after the premature celebrating. Sony's strategy with the tech talk was piss-poor though; they'll need to clear up the FUD.
Sounds like a bunch of vague nonsense.
"to the point where the GPU could have X number of flops that it can actually perform, but if the developer isn't able to actually access all of it for whatever reason, then it doesn't even matter, and there are so many other variables here that go into it. "

Really? And what exactly would be the reason that the Xbox cannot access the performance of all its 52CUs? There is none.

"At the end of the day, that is fundamentally the big question -- when Assassin's Creed Kingdom, or whatever it's called, Assassin's Creed Vikings comes out this fall, presumably, corona aside. Presumably it comes out this fall on both Xbox Series X and PlayStation 5 -- which one will it look better on, which one will have a better resolution and better framerate on? I don't think we can know the answer to that question just from the spec sheet, and that's the point I'm making."

I'm calling it right now that it will be superior on the XSX.
 

Aceofspades

Banned
See, a lot of people apparently missed that. I just read an article on Wccftech that suggests PS5 sustaining a 2.23GHz clock is misleading, but how can it be when Cerny literally said the opposite? It's so baffling to see these tech gurus fail at reading comprehension so miserably.

They understood every word from Cerny, but decided to push their agenda instead.

Honest, truth-seeking journalism has been DEAD for a long time.
 

Thedtrain

Member
Proven?

You can't just say "Because Mark Cerny said so"

The fact that it even boosts to unheard of and typically unsafe levels of heat and performance should tell you that there is smoke and mirrors going on here.

AMD SmartShift provides a ~10% increase in performance, but it's playing with fire using tech intended for the laptop market.

10.28TF - 10% = 9.252TF
How long has school been out for you?
 

Ascend

Member
How long are your arms? PS systems have had BC since PS2.
Uh... Not entirely true.

The PS2 had BC with PS1 games
The PS3 temporarily had BC with PS2 games until they removed the feature for no reason.
The PS4 doesn't have BC. Unless of course, you count paying for a subscription to play games you technically already bought as backwards compatibility

By that mind set, Xbox is attempting to copy what PlayStation is doing: being a game console. (PlayStation came first)
Uh... No... BC was technically always about being able to play older games on the newer consoles. Who came up with the ability to improve framerate, resolution and even graphics with BC games? It's a good thing that Sony is going this route. But at least admit that it was Microsoft that innovated here.

So many uptight people here. Jesus.
 

Bo_Hazem

Banned
Someone that some don't like, but you must admit, he has the most elegant accent/narration among all gaming YouTubers :messenger_winking_tongue: I like his videos personally.

 

Rudius

Member
A comparison I would like to see is between a heavily overclocked 2060 Super and a downclocked 2080, so that they have identical teraflops. If faster clocks don't matter, only flops, they should perform the same.
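Since the teraflops figure is just shader cores × 2 FLOPs (FMA) × clock, you can compute the clocks needed to equalize the two cards. A quick sketch (the core counts are the public specs for those cards; the 8 TF meeting point is an arbitrary number I picked for illustration):

```python
def tflops(shader_cores, clock_ghz):
    # FP32 throughput: 2 FLOPs per shader core per clock (fused multiply-add)
    return 2 * shader_cores * clock_ghz / 1000.0

def clock_for_target(shader_cores, target_tflops):
    """Clock (GHz) needed to hit a given TFLOPS figure."""
    return target_tflops * 1000.0 / (2 * shader_cores)

cores_2060s, cores_2080 = 2176, 2944  # CUDA core counts
target = 8.0                          # arbitrary meeting point in TFLOPS
print(f"2060 Super needs {clock_for_target(cores_2060s, target):.2f} GHz")
print(f"2080 needs {clock_for_target(cores_2080, target):.2f} GHz")
```

If the narrow-and-fast card keeps pace in real games at the same flops, clocks matter; if it falls behind, the wider GPU's extra resources are doing work the flops number doesn't capture.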
 

Mr Moose

Member
Uh... Not entirely true.

The PS2 had BC with PS1 games
The PS3 temporarily had BC with PS2 games until they removed the feature for no reason.
The PS4 doesn't have BC. Unless of course, you count paying for a subscription to play games you technically already bought as backwards compatibility
PS3 had BC with PSOne games as well (all versions I think).
PS4 has a weird BC with PS2 games, I think? I'm not sure how that shit works, but it's not a good form of BC.
 

Ascend

Member
PS3 had BC with PSOne games as well (all versions I think).
PS4 has a weird BC with PS2 games, I think? I'm not sure how that shit works, but it's not a good form of BC.
Ah yes. I forgot about the PS3 BC with PS1 games.

That they are going this route is a good thing. It will take time, but hopefully they ultimately implement it well.
 
Sounds like a bunch of vague nonsense.
"to the point where the GPU could have X number of flops that it can actually perform, but if the developer isn't able to actually access all of it for whatever reason, then it doesn't even matter, and there are so many other variables here that go into it. "

Really? And what exactly would be the reason that the Xbox cannot access the performance of all its 52CUs? There is none.

"At the end of the day, that is fundamentally the big question -- when Assassin's Creed Kingdom, or whatever it's called, Assassin's Creed Vikings comes out this fall, presumably, corona aside. Presumably it comes out this fall on both Xbox Series X and PlayStation 5 -- which one will it look better on, which one will have a better resolution and better framerate on? I don't think we can know the answer to that question from the spec sheet, and that's the point I'm making."

I'm calling it right now that it will be superior on the XSX.

A lot of people are really misinformed on what a NAND-based SSD actually entails and how comparing it with graphics compute CUs, volatile memories or CPU clock rates is like comparing apples to oranges.

Now, one of the most immediate benefits of PS5's SSD is that it will assist in player local-proximity texture asset streaming, but that only really matters with decompressed data files. Otherwise, the data will still have to be read from the NAND at a page-file size (since NAND operates on pages and blocks (which are clusters of pages)), decompressed, and written back to the NAND IC by doing a block-level erase. But block-level erases also erase data on that block that very likely has not even been altered (including decompression altering), and wears on power/erase cycles.

I suspect PS5's flash controller will obviously have some advanced wear-leveling built into it, but this doesn't negate the inherent limitations of flash NAND technology. Same goes for the XSX's custom SSD, too. Data on NAND cannot be accessed at the byte level, altered at the byte or bit level, written back to NAND at the byte level, or accessed at true random access speeds (as in, random access that is not magnitudes slower than sequential read and write). Those are ALL properties of volatile memory like GDDR6.

So when it comes to something like texture streaming, the approach will generally benefit large textures that fit the NAND IC's page-size format and don't need frequent modification at a byte or bit level, or where alterations can be applied in a way that effectively scales to a page-like level (in terms of any GPU algorithms or onboard AI hardware that can shift values as sorts of "masks" on the textures). That mainly benefits certain textures within the proximity of the player avatar (as Cerny mentioned) and textures at a ranged distance that are expected to either be fairly static, change at levels that are not bit- or byte-dependent, or change at a rate where the "slow" (compared to GDDR6) SSD speeds can make it appear relatively seamless to the player (i.e. no texture distortion, pop-in, tearing etc. that the player will notice during gameplay).

I think if some folks had a better grasp on this as a whole we could all come to a better consensus on where the two systems actually land in terms of overall capabilities, strengths, weaknesses etc. And it'd also cut down on the spin, which is always appreciated ;)
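To make the page/block point concrete, here's a toy model of why small in-place changes are expensive on NAND (the geometry numbers are assumptions for illustration, not the PS5's actual flash layout):

```python
PAGE_BYTES = 4096        # assumed page size
PAGES_PER_BLOCK = 64     # assumed block geometry
BLOCK_BYTES = PAGE_BYTES * PAGES_PER_BLOCK

def in_place_update_cost(bytes_changed):
    """Bytes that must be erased and re-programmed to alter data
    in place, assuming the change falls within a single block."""
    # NAND can't flip individual bytes: the whole block is erased,
    # then every page in it (changed or not) is re-programmed.
    return BLOCK_BYTES

print(f"1-byte change rewrites {in_place_update_cost(1)} bytes "
      f"({in_place_update_cost(1) // 1024} KiB of wear)")
```

That write amplification is exactly why byte-level random access is a volatile-memory (GDDR6) property, and why streaming works best with page-aligned assets that don't get mutated.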
 

SonGoku

Member
  1. In real world scenarios without perfect parallel use of CUs, do you think there will be any difference in multiplatform games besides resolution
  2. How come the PS4/xbones were at feature parity besides resolution in most games despite PS4 being more capable GPGPU wise
  3. Does dropping resolution free up CU resources or none at all?
 
Last edited:

VFXVeteran

Banned
You're saying that consoles won't be able to play these at 4k/Ultra/RTX/60FPS or even 30FPS, but I feel like you're kind of using current PC hardware and versions of the game as your basis here.

The XB1X version already looks incredible... you don't think with literally twice the amount of power and new DX12U tools at its disposal it can turn things up a notch and implement Ultra textures and RT similar to the (unoptimized) upgrades we've already seen for Gears 5?

Every graphics card has a limit. Memory bandwidth is the limit for these consoles and PCs, and it will continue to be for a long time. Casting rays per pixel at 1080p costs 4x less than casting those same rays at 4K. That's not an opinion but a fact. 12TFLOPs is the theoretical max. Period. So no matter what you throw at the GPU, you have to decide what you want to keep and what you can go without. The PC is an agnostic machine that can scale much larger than the consoles. Their main testing/benchmarking, I can almost guarantee you, will be on the PC. No, I don't think these consoles have the memory bandwidth needed to run Metro @ 4K/60FPS/Ultra/RTX.
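The 4x figure is just pixel count. A sketch of the ray-budget math (1 ray per pixel and 60 fps are arbitrary illustration numbers, not any game's actual budget):

```python
def rays_per_frame(width, height, rays_per_pixel):
    # Ray count scales linearly with resolution at a fixed per-pixel budget
    return width * height * rays_per_pixel

r_1080p = rays_per_frame(1920, 1080, 1)
r_4k = rays_per_frame(3840, 2160, 1)
print(f"4K casts {r_4k / r_1080p:.0f}x the rays of 1080p")
print(f"At 60 fps, 4K at 1 ray/pixel needs {r_4k * 60 / 1e9:.2f} Grays/s")
```

Every extra ray per pixel multiplies that again, which is why resolution is the first knob that gets turned down when RT enters the picture.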
 
Last edited:

splattered

Member
How long has school been out for you?

A very long time now... getting old here :)

I was saying Smartshift provides around 10% increase in performance.

If it doesn't work as well as people hope and you don't consistently see that 10% increase in real world performance... well 10.28tf minus that beneficial performance increase would make performance scale back to 9.252tf.
 

Reindeer

Member
Someone that some don't like, but you must admit, he has the most elegant accent/narration among all gaming YouTubers :messenger_winking_tongue: I like his videos personally.


The guy said in the video that PS5 is far more advanced in tech than Series X and that Series X is more prone to bottlenecks 🤦‍♂️. The denial is strong with this one.
 
Last edited:

Mrdeveus

Member
Every graphics card has a limit. Memory bandwidth is the limit for these consoles and PCs, and it will continue to be for a long time. Casting rays per pixel at 1080p costs 4x less than casting those same rays at 4K. That's not an opinion but a fact. 12TFLOPs is the theoretical max. Period. So no matter what you throw at the GPU, you have to decide what you want to keep and what you can go without. The PC is an agnostic machine that can scale much larger than the consoles. Their main testing/benchmarking, I can almost guarantee you, will be on the PC. No, I don't think these consoles have the memory bandwidth needed to run Metro @ 4K/60FPS/Ultra/RTX.

Any info about this? He goes into more detail if you keep reading the thread



 
Last edited:
We've seen uncapped performance on PS4 and Xbox One in games that targeted 60 fps. The difference in framerate was clearly related to GPU power.
Target framerate is a design decision made by developers. Aside from the few studios that mostly make shooters, everyone targets 30 fps. Some MP modes in a couple of games are 60 fps, but it's marginal, because the tradeoff in image quality was too big. My hope with next gen was that developers could finally achieve a constant 60 fps in all games. Right now I'm not so sure about that.
But that's my not-so-competent understanding of what is going on right now with next-gen consoles and PS5 in general, and this is why I hope someone can explain it to me.
You answered yourself. Fps are a target. I can make a 60 fps game on SNES and easily stress the Series X to reach only 30. SeX being somewhat more powerful won't make developers choose 60 fps, and PS5 won't make them choose 30. It's not about the console that "finally can achieve constant 60 fps". You're fixating too much on small numbers, and you can rest assured: if PS5 needs to be capped at 30 because the game runs around 35, SeX won't do any magic; they could scale both versions to reach 60 fps easily.
Point is: if a developer wants it, they can do it. Something they should have done even before, if you ask me, but now I don't buy that "the tradeoff in image quality is too big" line, like it's better to sacrifice fluidity for relatively better graphics that are incredible anyway.
 
Last edited: