
Next-Gen PS5 & XSX |OT| Console tEch threaD


01011001

Banned

I must be blind, but I don't recall PS3 games having particle effects like Second Son's, as you say, 'fancy particles effects'.

well... yes that's the point. the only really big jump in terms of graphics Second Son showed compared to a high budget PS3 title was its particle effects... that's why I brought them up, because PS3 games didn't use particle effects at such a rate and density.
if you take those away it could be ported to PS3 with very similar graphics... of course downgraded in the usual areas: resolution, texture detail, LOD, polycount and shadow detail.

the only thing it brought to the table that was truly "next gen", "never before seen" back then, was its extensively used particle effects.
 
well if you have no idea what is shown, like you... a casual... then I suppose that Minecraft demo was not impressive
well if you look at that demo and don't know why it is impressive, you clearly have no idea what is on display.
a similar RT-rendered game on PC, Quake 2 RTX, will bring even an RTX 2080 Ti to its knees when pushed higher than 1080p. and Quake 2 RTX's graphics are arguably simpler than Minecraft's due to its way smaller environments.
Sure! :messenger_tears_of_joy:
 

KingT731

Member
well... yes that's the point. the only really big jump in terms of graphics Second Son showed compared to a high budget PS3 title was its particle effects... that's why I brought them up, because PS3 games didn't use particle effects at such a rate and density.
if you take those away it could be ported to PS3 with very similar graphics... of course downgraded in the usual areas: resolution, texture detail, LOD, polycount and shadow detail.

the only thing it brought to the table that was truly "next gen", "never before seen" back then, was its extensively used particle effects.
So if you take away all the stuff that you can't do on the PS3...it looks like a PS3 game....okay...return to your crackpipe
 

PaintTinJr

Member
well if you look at that demo and don't know why it is impressive, you clearly have no idea what is on display.
a similar RT-rendered game on PC, Quake 2 RTX, will bring even an RTX 2080 Ti to its knees when pushed higher than 1080p. and Quake 2 RTX's graphics are arguably simpler than Minecraft's due to its way smaller environments.
The thing that sort of gets lost in those demos - by anyone who hasn't been waiting since the PS3's Cell for RT - is that whether they are high-fidelity models or not, the number of rays would be the same. The Minecraft RT demo is visually mind-blowingly different (IMHO) from the basic rasterization version. With Quake 2 RTX, its visuals look better in some ways than many modern games, and there is a certain irony about Quake 2's original static lightmap lighting - which is the de facto lighting in all the highly polished Nintendo games, e.g. MK8 Deluxe - because it was ray tracing baked into the textures for a static configuration, and it took hours to calculate in Q3Radiant on decent hardware. Being able to do that in real-time at interactive rates is impressive in my book.
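To make that concrete: a minimal sketch of what "ray tracing baked into the textures" means (my own illustration with a hypothetical scene API; a real bake traces many rays per texel and handles bounces):

```python
import math

# Minimal flavour of a lightmap bake (hypothetical scene API, my own sketch):
# for each lightmap texel, shoot a shadow ray to each light and store what
# isn't blocked. Offline tools did this once per level over hours; RTX-class
# hardware re-asks the same visibility question every frame.
LIGHTS = [{"pos": (0.0, 5.0, 0.0), "intensity": 100.0}]

def occluded(point, light_pos) -> bool:
    # Stand-in for a ray/scene intersection test; empty scene blocks nothing.
    return False

def bake_texel(point) -> float:
    radiance = 0.0
    for light in LIGHTS:
        if not occluded(point, light["pos"]):            # shadow ray
            d2 = sum((a - b) ** 2 for a, b in zip(point, light["pos"]))
            radiance += light["intensity"] / (4 * math.pi * d2)  # falloff
    return radiance

print(bake_texel((0.0, 0.0, 0.0)))  # the value a lightmap texel would store
```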
 
I don't know why people pushed the Gears 5 demo aside a little too easily, but with RT it looked much more believable. The environment shadows were really good, so objects didn't feel as though they were floating above the ground like they do on both consoles at lower settings right now. If both consoles can even manage RT GI, I will be a very happy man. Fully path-traced rendering is mostly impossible on console except in scenarios like Minecraft and Quake. And that Minecraft demo MS showed really excited me, since it's the first time it has been made possible on a console.
 

01011001

Banned
So if you take away all the stuff that you can't do on the PS3...it looks like a PS3 game....okay...return to your crackpipe

again, the topic was launch window games that would have been impossible on the previous gen. it was brought up because of the whole Microsoft 1st party Xbox One support debate. the point was that Second Son would have worked on PS3.

I get that's too much to comprehend for most on this forum, which at this point is filled with mindless fanboys and people that can't tell AA from AO
 

M-V2

Member
Someone sent me this and it's very interesting reading... If it's not translated, then translate it in Google Chrome


 

CJY

Banned
Someone sent me this and it's very interesting reading... If it's not translated, then translate it in Google Chrome


Thanks for this. Best interview I have read so far. It's Ali Salehi, Rendering Engineer of Crytek. Seems very bullish on PS5. Here's the (not so good) Google translated version:

The hardware specifications of the PlayStation 5 and Xbox Series X were officially announced a few weeks ago by Sony and Microsoft, and Digital Foundry had the opportunity to take a deep technical look at what we can expect. Although there aren't many games for the consoles yet, and we don't know much about their overall performance and user experience, the two companies are constantly competing in technical and complex debates that no one but engineers and programmers can understand, and neither avoids providing the deepest technical information.
To make sense of the information and the spec sheets, it seemed best to talk with an engineer and programmer at Crytek, one of the world's most tech-savvy companies, with a powerful game engine to talk about. That's why I called Ali Salehi, a rendering engineer from Crytek, and asked him, as an expert, to answer our questions about the consoles' teraflops and the power of their hardware, and to comment on which one is more powerful. He gave convincing answers, with simple and understandable explanations that ran contrary to expectations and the numbers on paper.
In the following, you will read the conversation between Mohsen Vafnejad and Shayan Ziaei with Ali Salehi about the hardware specifications of the PlayStation 5 and Xbox Series X.


Console | PlayStation 5 | Xbox Series X
Processor | 8x Zen 2 cores at 3.5GHz (variable frequency) | 8x Zen 2 cores at 3.8GHz (3.6GHz with SMT)
Graphics processor | 10.28 TFLOPs, 36 CUs at 2.23GHz (variable frequency) | 12 TFLOPs, 52 CUs at 1.825GHz
Memory | 16GB GDDR6 / 256-bit | 16GB GDDR6
Memory bandwidth | 448GB/s | 10GB at 560GB/s, 6GB at 336GB/s
Storage | 825GB custom NVMe SSD | 1TB custom NVMe SSD
I/O throughput | 5.5GB/s (raw), typical 8-9GB/s (compressed) | 2.4GB/s (raw), 4.8GB/s (compressed)
Expandable storage | NVMe SSD slot | 1TB expansion card
Optical drive | 4K UHD Blu-ray drive | Blu-ray drive
Vijayato: In short, what is the job of a rendering engineer in a gaming company?
Ali Salehi: The technical visual side of every game is our responsibility. That is, supporting new consoles, optimizing current algorithms, troubleshooting existing ones, and implementing new technologies and features like ray tracing are all things we do.

What is the significance of teraflops, and does a higher teraflops figure mean a console is stronger?
Teraflops indicates how much the processor could do if it were in the best and most ideal state possible. The teraflops figure assumes ideal, theoretical conditions. In practice, however, a graphics card and a console are complex entities: several elements must work together, each feeding its output to the next, and if any one of them fails to work properly, the efficiency of the rest drops. A good example of this is the PlayStation 3. Because of its SPUs, the PlayStation 3 had far more teraflops on paper than the Xbox 360. But in practice, because of its complex architecture, memory bottlenecks and other problems, you never reached that peak efficiency.


The PlayStation 3 had a hard time running multi-platform games compared to the Xbox 360. Red Dead Redemption and GTA IV, for example, ran at 720p on the Microsoft console, but the PlayStation 3 had poorer output and only reached 720p with upscaling. But Sony's own studios were able to offer more detailed games such as The Last of Us and the second and third Uncharted titles thanks to their greater familiarity with the console and the development of custom software interfaces.
That is why you cannot put so much value on this figure. The 12-teraflops figure assumes that all the parts of the Xbox Series X work optimally and the GPU runs at its peak, which is not possible in practice. On top of all this, we also have the software side. The example we saw on PC was the arrival of Vulkan and DirectX 12: the hardware did not change, but because the software architecture changed, the hardware could be used better.
The same can be said for consoles. Sony runs the PlayStation 5 on its own operating system, but Microsoft has put a customized version of Windows on the Xbox Series X. The two are very different. Because Sony develops the PlayStation 5's software itself, it can definitely give developers far more capabilities than Microsoft, which uses almost the same DirectX as on PC for its consoles.

How have you experienced working with both consoles and how do you evaluate them?
I can't say anything right now, but I'm quoting others who have made public statements. Game developers say the PlayStation 5 is the easiest console they have ever coded for when it comes to reaching the console's peak performance. In terms of software, coding on the PlayStation 5 is extremely simple and has many features that leave the developer free. All in all, the PlayStation 5 is a better console.

If I understood correctly, is teraflops the measure of how optimally the different parts of the GPU work, or not? And what do these floating-point operations mean? How would you describe it for a user who doesn't understand this information?
The problem lies with whoever made these figures public and now has to explain them. This technical information does not matter to the average user and is not a criterion for comparison.
Play with numbers

At E3 2016, Microsoft spoke for the first time about the Scorpio project (later known as the Xbox One X), and it was that year that teraflops-based processing power entered the console conversation. As improvements were made to the mid-generation console's hardware, Phil Spencer, Xbox brand manager, was looking for a benchmark to compare the power of the mid-generation console with the launch console, and thus began the era of the term teraflops. Until then, the real criterion for measuring consoles had been gaming performance, and from the very beginning of the word's use, experts repeatedly warned that it is just a game of words and numbers, and that the teraflops figure should not be the standard.
Graphics cards, for example, have some 20 different sections, one of which is the compute units, which do the processing. If the rest of the components support them in the best possible way, there are no bottlenecks, and the processor can get as much data as it needs, then the CUs can perform 12 trillion floating-point operations per second. So in an ideal world where we remove all the limiting parameters that is possible, but in reality it is not.
A good example of this is the Xbox Series X hardware. Microsoft has split the RAM in two, the same mistake it made with the Xbox One. One part of the RAM has high bandwidth and one part has low bandwidth. Obviously, coding for this console will be a story: the amount of data we have to fit into the fast RAM is so large that it will be a pain again, and if we want to support 4K, that's another story. So there will be parts that prevent the graphics card from reaching that peak speed.

You talked about the shaders. The PlayStation 5 has 36 CUs, and the Xbox Series X has 56, of which 4 are reserved, leaving 52 units available to the developer. What is the difference?
The main difference is that the working frequency of the PlayStation 5 is much higher; its CUs run at a higher clock. That's why, despite the difference in CU count, the two don't differ that much in practice. An interesting analogy from an IGN reporter was that the Xbox Series X is a big, neat and tidy 8-cylinder engine, while the PlayStation 5 is a six-cylinder engine turbocharged to the limit. Raising the clock speed on the PlayStation 5 also speeds up things like the memory, the rasterizer, and the other parts of the graphics card whose performance is tied to this clock (things that are separate from the CUs and have nothing to do with teraflops). So the rest of the PlayStation 5's GPU works faster than the Series X's. That's what lets the console operate closer to its announced peak of 10.28 teraflops. The Series X, because the rest of its sections are slower, will probably run well below its teraflops figure in general, and only reach 12 teraflops in ideal conditions.

Doesn't this difference show its impact at the end of the generation, when developers become more familiar with the X-Series hardware?
No, because the PlayStation's software interface generally leaves your hands more open, and usually later in each generation Sony consoles produce the more exotic output. For example, early in the seventh generation even shared games for both consoles performed poorly on the PlayStation 3, but late in the generation Uncharted 3 and The Last of Us got the most out of the console. I think the next generation will be the same. But at higher resolutions the PlayStation 5 will probably be in trouble, and the Series X will be able to display more pixels.

Sony says the smaller the CU count, the easier it is to keep the tasks spread across all of them. What does Sony's claim mean?
It is costly to use all the CUs at the same time, because when CUs want to run code they need resources that are limited on the graphics card. If the graphics card cannot distribute the resources across all the CUs to execute a piece of code, it is forced to idle a number of them. For example, instead of 52 it might use only 20, because it simply doesn't have enough resources for all of them at all times.
Aware of this, Sony opted for a faster GPU instead of a larger one, which also reduces production costs. A more striking example of this was with CPUs. AMD has had high-core-count CPUs for a long time, and even Intel's higher-core-count CPUs didn't necessarily perform better: 4- or 8-core CPUs with much higher per-core performance usually did better in gaming. Obviously a 16- or 32-core CPU has higher theoretical throughput, but a CPU with fewer, faster cores will often do a better job, because it is hard for game programmers to use all the cores all the time; they prefer fewer but faster cores.

Could the hyperthreading feature included in the Series X turn out to be Microsoft's winning card in the last years of the generation?
Technically, hyperthreading has been on desktop computers since the Pentium 4: each physical core is presented to the system as two virtual cores, and in most cases it helps performance. The Series X lets the developer decide whether to use these virtual cores or turn them off in exchange for a higher CPU clock. And it is exactly as you say: it's not an easy decision to make right at the start, so use of hyperthreading will likely only arrive toward the end of the generation.

Do you open the door saying "there is no way out"?
Making that call requires very precise profiling of your code, so it's not something everyone knows right now. There are much more important concerns when getting to know new console hardware, and developers will likely work with fewer cores at a higher clock at the beginning of the generation, and then move on to this feature.

There are 3,328 shaders across the Xbox Series X's compute units. What is a shader, what does it do, and what does 3,328 shaders mean?
When CUs want to execute code, they do so through units called wavefronts; the shader count is just the number of CUs multiplied by the shaders per CU. But it doesn't really matter, and everything I said about the CUs applies here too. Again, there are limitations that keep all of these shaders from being usable, and having many of them isn't necessarily good.
There is another important issue to consider, as Mark Cerny put it: CUs and even teraflops are not necessarily equivalent across architectures. That is, teraflops figures cannot simply be compared to decide which is numerically superior. So you can't trust these numbers at all or make them the criterion.

Comparisons between Android devices and Apple iPhones have recently been drawn alongside the console comparisons, with Internet discussions noting that Android devices have more RAM but poorer performance than iPhones. Does that comparison carry over to the consoles?
The software stack that sits on top of the hardware determines everything. Sony has always had better software, whereas Microsoft has to use Windows. So yes, that comparison is fair.

Microsoft insists that the Xbox Series X frequency is stable under all circumstances, but Sony took a different approach: it gives the console a fixed power budget and lets the frequencies vary depending on the situation. What are the differences between the two, and which will be better for the developer?
What Sony has done is much more logical, because depending on the processing load it decides at any moment whether the graphics card's frequency or the CPU's frequency should be higher. For example, on a loading screen only the CPU is needed and the GPU is barely used; in a close-up of a character's face the GPU gets involved and the CPU plays a very small role. On the other hand, it's good that the Series X has good cooling and can keep its frequency constant without throttling, but the practical freedom Sony has given is really a big deal.

Doesn't this freedom of action make things harder for the developer?
Not really, because we're already doing this on the engine side. For example, the Dynamic Resolution Scaling technique used by some games already measures various metrics to work out how much pressure the graphics card is under and how far the resolution should drop to hold the frame rate. So it's very easy to hook these together.

What is the use of the Geometry Engine that Sony is talking about?
I don't think it will see much use in the first year or two. We'll probably see more of an impact in the second wave of games released on the console, but it won't be used much at the start.

The Series X chipset is 7 nanometers, and we know that the smaller the number, the better the chip. Can you dig into the nanometer topic and the transistor count?
A smaller process node means more transistors, and controlling their heat in greater numbers and smaller spaces. A newer production technology is better, but the nanometer figure itself is not that important; what matters is the number of transistors.

PlayStation 5 SSD speeds reach 8-9 GB/s at peak. Now that we've reached these speeds, what else will change apart from faster loading and more detail?
The first thing is removing loading screens from games. Microsoft also showed the ability to suspend and resume games, running several simultaneously and moving between them in less than 5-6 seconds; on the PlayStation that time should be close to zero. Another thing to expect is a change in game menus: when there is no loading, there is no waiting, and you no longer need to watch a video while the game loads in the background.

How will PC games fare in the meantime? After all, having an SSD is optional for a PC user.
Consoles have always set the standard. Game developers build their games around the consoles, and if someone's PC doesn't have an SSD they will either have to put up with long loads or think about buying one.

As a programmer and developer, which do you consider the best console for working and coding? PlayStation 5 or Xbox X series?
Definitely PlayStation 5.
As a programmer, I'm saying that the PlayStation 5 is much better, and I don't think you could find a programmer who would pick the Xbox Series X over the PlayStation 5. For the Xbox, they have to put DirectX and Windows on the console, which are many years old, but with every new console Sony builds, it also rebuilds its software and APIs however it wants. That is in their interest and in ours, because there is only one way to do anything, and it is the best way possible.
 
I didn't expect any real argument, so that's as expected given the level of discussion this thread has dived to in recent weeks.
If you did not expect any real argument, why did you try to bait me with 3 pointless posts in a row?

Think about that ;)

 
Someone sent me this and it's very interesting reading... If it's not translated, then translate it in Google Chrome
OK Here we go! It is a long one but full of info.
INTRO
The hardware specifications of the PlayStation 5 and Xbox Series X were officially announced a few weeks ago by Sony and Microsoft, and Digital Foundry had the opportunity to take a deep technical look at what we can expect. Although there aren't many games for the consoles yet, and we don't know much about their overall performance and user experience, the two companies are constantly competing in technical and complex debates that no one but engineers and programmers can understand, and neither avoids providing the deepest technical information this time around.

To make sense of the information and the spec sheets, it seemed best to talk with an engineer and programmer at Crytek, one of the world's most tech-savvy companies, with a powerful game engine to talk about. That's why I called Ali Salehi, a rendering engineer from Crytek, and asked him, as an expert, to answer our questions about the consoles' teraflops and the power of their hardware, and to comment on which one is more powerful. He gave convincing answers, with simple and understandable explanations that ran contrary to expectations and the numbers on paper.

In the following, you will read the conversation between Mohsen Vafnejad and Shayan Ziaei with Ali Salehi about the hardware specifications of the PlayStation 5 and Xbox Series X.

INTERVIEW
[Questions bolded,
answers not]
Vijayato: In short, what is the job of a rendering engineer in a gaming company?

Ali Salehi: The technical visual side of every game is our responsibility. That means supporting new consoles, optimizing current algorithms, troubleshooting existing ones, and implementing new technologies and features like ray tracing are all things we do.

What is the significance of Teraflops, and does higher Teraflops mean a console is stronger?

Teraflops indicates how much the processor could do if it were in the best and most ideal state possible. The teraflops figure assumes ideal, theoretical conditions. In practice, however, a graphics card and a console are complex entities: several elements must work together, each feeding its output to the next, and if any one of them fails to work properly, the efficiency of the rest drops. A good example of this is the PlayStation 3. Because of its SPUs, the PlayStation 3 had a lot more power on paper than the Xbox 360. But in practice, because of its complex architecture, memory bottlenecks and other problems, you never reached that peak efficiency.

There is an image here with the following caption:
[Woes of PlayStation 3
The PlayStation 3 had a hard time running multi-platform games compared to the Xbox 360. Red Dead Redemption and GTA IV, for example, ran at 720p on the Microsoft console, but the PlayStation 3 had poorer output and only reached 720p with upscaling. But Sony's own studios were able to offer more detailed games such as The Last of Us and Uncharted's second and third installments thanks to their greater familiarity with the console and the development of custom software interfaces.]

That is why you cannot put so much value on this figure. The 12-teraflops figure assumes that all the parts of the Xbox Series X work optimally and the GPU runs at its peak, which is not possible in practice. On top of all this, we also have the software side. The example we saw on PC was the arrival of Vulkan and DirectX 12: the hardware did not change, but because the software architecture changed, the hardware could be used better.

The same can be said for consoles. Sony runs the PlayStation 5 on its own operating system, but Microsoft has put a customized version of Windows on the Xbox Series X. The two are very different. Because Sony develops the PlayStation 5's software itself, it can definitely give developers far more capabilities than Microsoft, which uses almost the same DirectX as on PC for its consoles.

How have you experienced working with both consoles and how do you evaluate them?

I can't say anything right now about my own work, but I'm quoting others who have made public statements. Developers say the PlayStation 5 is the easiest console they have ever coded for when it comes to reaching the console's peak performance. In terms of software, coding on the PlayStation 5 is extremely simple and has many features that leave the developer free. All in all, the PlayStation 5 is a better console.

If I understood correctly, is teraflops the measure of how optimally the different parts of the GPU work, or not? And what do these floating-point operations mean? How would you describe it for a user who doesn't understand this information?

The problem lies with whoever made these figures public and now has to explain them. This technical information does not matter to the average user and is not a criterion for comparison.

Graphics cards, for example, have some 20 different sections, one of which is the compute units, which do the processing. If the rest of the components support them in the best possible way, there are no bottlenecks, and the processor gets as much data as it needs, then the CUs in this mode can perform 12 trillion floating-point operations per second. So in an ideal world where we remove all the limiting parameters that is possible, but in reality it is not.

A good example of this is the Xbox Series X hardware. Microsoft has split the RAM in two, the same mistake it made with the Xbox One. One part of the RAM has high bandwidth and one part has low bandwidth. Obviously, coding for this console will be a story: the amount of data we have to fit into the fast RAM is so large that it will be a pain again, and if we want to support 4K, that's another story. So there will be parts that prevent the graphics card from reaching that peak speed.

You talked about the shaders. The PlayStation 5 has 36 CUs, and the Xbox Series X has 52 CUs available to the developer. What is the difference?

The main difference is that the working frequency of the PlayStation 5 is much higher; its CUs run at a higher clock. That's why, despite the difference in CU count, the two don't differ that much in practice. An interesting analogy from an IGN reporter was that the Xbox Series X is a big, neat and tidy 8-cylinder engine, while the PlayStation 5 is a six-cylinder engine turbocharged to the limit. Raising the clock speed on the PlayStation 5 also speeds up things like the memory, the rasterizer, and the other parts of the graphics card whose performance is tied to this clock. So the rest of the PlayStation 5's GPU works faster than the Series X's. That's what lets the console operate closer to its announced peak of 10.28 teraflops, whereas the Series X, because the rest of its sections are slower, will probably sit well below its figure in general and only reach 12 teraflops in highly ideal conditions.
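For reference, the peak-teraflops arithmetic behind both headline numbers, as a quick back-of-the-envelope sketch (my own, not from the interview):

```python
# Peak FP32 = CUs x 64 shaders per CU x 2 FLOPs per shader per clock x clock.
# (64 and 2 are the usual RDNA factors; clocks are from the spec table above.)
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"PS5: {peak_tflops(36, 2.23):.2f} TFLOPs")   # -> 10.28
print(f"XSX: {peak_tflops(52, 1.825):.2f} TFLOPs")  # -> 12.15
```

Incidentally, 52 x 64 is also where the 3,328-shader figure later in the interview comes from.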

Doesn't this difference show its impact at the end of the generation, when developers become more familiar with the X-Series hardware?

No, because the PlayStation's software interface generally leaves your hands more open, and usually late in each generation Sony consoles produce even more exotic output. For example, early in the seventh generation even multi-platform games performed poorly on the PlayStation 3, but late in the generation Uncharted 3 and The Last of Us got the most out of the console. I think this generation will be the same. But toward the end, at higher native resolutions, the PlayStation 5 will probably be in a little trouble, and the Series X will be able to display more pixels.

Sony says the smaller the CU count, the easier it is to keep the tasks spread across all of them. What does Sony's claim mean?

It is costly to use all the CUs at the same time, because when CUs want to run code they need resources that are limited on the graphics card. If the graphics card cannot distribute the resources across all the CUs to execute a piece of code, it is forced to idle a number of them. For example, instead of 52 it might use only 20, because it doesn't have enough resources for all of them at all times.

Aware of this, Sony opted for a faster GPU instead of a larger one, which also reduces production costs. A more striking example of this was with CPUs. AMD has had high-core-count CPUs for a long time, and even Intel's higher-core-count CPUs didn't necessarily perform better: 4- or 8-core CPUs with much higher per-core performance usually did better in gaming. Clearly a 16- or 32-core CPU has higher theoretical throughput, but a CPU with fewer, faster cores will often do a better job, because it's hard for game programmers to use all the cores all the time; they prefer fewer but faster cores.

Could the hyperthreading feature included in the Series X turn out to be Microsoft's winning card in the last years of the generation?

Technically, hyperthreading has been on desktop computers since the Pentium 4: each physical core is presented to the system as two virtual cores, and in most cases it helps performance. The Series X lets the developer decide for themselves whether to use these virtual cores or turn them off in exchange for a higher CPU clock. And it is exactly as you say: it's not an easy decision to make right at the start, so use of hyperthreading will likely only arrive toward the end of the generation.

Do you open the door saying "there is no way out"?

Making that call requires very precise profiling of your code, so it's not something everyone knows right now. There are much more important concerns when getting to know new console hardware, and developers will likely work with fewer cores at a higher clock at the beginning of the generation, and then move on to this feature.

There are 3,328 shaders across the Xbox Series X's compute units. What is a shader, what does it do, and what does 3,328 shaders mean?

When CUs want to execute code, they do so through units called wavefronts; the shader count is just the number of CUs multiplied by the shaders per CU. But it doesn't really matter, and everything I said about the CUs applies here too. Again, there are limitations that keep all of these shaders from being usable, and having many of them isn't necessarily good.

There is another important issue to consider, as Mark Cerny put it: CUs and even teraflops are not necessarily equivalent across architectures. That is, teraflops figures cannot simply be compared to decide which is numerically superior. So you can't trust these numbers at all or make them the criterion.

Comparisons between Android devices and Apple iPhones have recently been drawn alongside the console comparisons, with Internet discussions noting that Android devices have more RAM but poorer performance than iPhones. Does that comparison carry over to the consoles?

The software stack that sits on top of the hardware determines everything. Sony has always had better software, whereas Microsoft has to use Windows. So yes, that comparison is fair.

Microsoft has insisted that the Xbox Series X frequency is constant under any circumstances, but Sony does not have such an approach and provides the console with a certain amount of energy to use it as a variable and depending on the situation. What are the differences between the two and which will be better for the developer?

What Sony has done is much more logical, because depending on the processing load it decides at any moment whether the graphics card's frequency or the CPU's frequency should be higher. For example, on a loading screen only the CPU is needed and the GPU is barely used; in a close-up of a character's face the GPU gets involved and the CPU plays a very small role. On the other hand, it's good that the Series X has good cooling and can guarantee a constant frequency without throttling, but the practical freedom Sony has given is really a big deal.
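A toy model of that power-sharing idea, for illustration only (the budget figure is made up and the real boost logic is far more involved):

```python
# Toy model of a shared power envelope (SmartShift-style); purely
# illustrative, with a made-up budget figure, not the real PS5 logic.
POWER_BUDGET_W = 200.0

def split_budget(cpu_load: float, gpu_load: float):
    # The busier side gets the larger share of the fixed total budget.
    total = cpu_load + gpu_load
    cpu_share = cpu_load / total if total > 0 else 0.5
    cpu_w = POWER_BUDGET_W * cpu_share
    return cpu_w, POWER_BUDGET_W - cpu_w

print(split_budget(0.9, 0.1))  # loading screen, CPU-heavy -> (180.0, 20.0)
print(split_budget(0.2, 0.8))  # cinematic close-up, GPU-heavy -> (40.0, 160.0)
```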

Doesn't this freedom of action make things harder for the developer?

Not really, because we're already doing this on the engine side. For example, the Dynamic Resolution Scaling technique used by some games already measures various metrics to work out how much pressure the graphics card is under and how far the resolution should drop to hold the frame rate. So it's very easy to hook these together.
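A minimal sketch of what such an engine-side feedback loop can look like (hypothetical names and constants, not any real engine's code):

```python
# Hypothetical dynamic-resolution controller: nudge the render scale so the
# measured GPU frame time converges on the frame budget.
class DynamicResolution:
    def __init__(self, budget_ms: float = 16.6, min_scale: float = 0.6):
        self.budget_ms = budget_ms
        self.min_scale = min_scale
        self.scale = 1.0  # fraction of native resolution per axis

    def update(self, gpu_time_ms: float) -> float:
        # Proportional step: over budget -> drop resolution, under -> raise.
        error = (self.budget_ms - gpu_time_ms) / self.budget_ms
        self.scale = max(self.min_scale, min(1.0, self.scale + 0.1 * error))
        return self.scale

drs = DynamicResolution()
for t in (18.0, 17.0, 16.8, 16.2):  # made-up GPU timings for four frames
    print(f"{t:4.1f} ms -> render at {drs.update(t):.0%} of native")
```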

What is the use of the Geometry Engine that Sony is talking about?

I don't think it will see much use in the first year or two. We'll probably see more of an impact in the second wave of games released on the console, but it won't be used much at the start.

The Series X chipset is 7 nanometers, and we know that the smaller the number, the better the chip. Can you dig into the nanometer topic and the transistor count?

A smaller process node means more transistors, and controlling their heat in greater numbers and smaller spaces. A newer production technology is better, but the nanometer figure itself is not that important; what matters is the number of transistors.

PlayStation 5 SSD speeds reach 8-9 GB/s at peak. Now that we've reached these speeds, what else will change apart from faster loading and more detail?

The first thing is removing loading screens from games. Microsoft also showed the ability to suspend and resume games, running several simultaneously and moving between them in less than 5-6 seconds; on the PlayStation that time should be close to zero. Another thing to expect is a change in game menus: when there is no loading, there is no waiting, and you no longer need to watch a video while the game loads in the background.

How will PC games fare in the meantime? After all, having an SSD is optional for a PC user.

Consoles have always set the standard. Game developers build their games around the consoles, and if someone's PC doesn't have an SSD they will either have to put up with long loads or think about buying one.

As a programmer and developer, which do you consider the best console for working and coding? PlayStation 5 or Xbox X series?

Definitely PlayStation 5.

As a programmer, I would say that the PlayStation 5 is much better, and I don't think you could find a programmer who would pick the Xbox Series X over the PlayStation 5. For the Xbox, they have to put DirectX and Windows on the console, which are many years old, but with every new console Sony builds, it also rebuilds its software and APIs however it wants. That is in their interest and in ours, because there is only one way to do everything, and theirs is the best way possible.
There is a thread with an updated translation; I suggest you go there.
 

Shmunter

Member
Thanks for this. Best interview I have read so far. It's Ali Salehi, Rendering Engineer of Crytek. Seems very bullish on PS5. Here's the (not so good) Google translated version:
Yes, solid professional insight. These are the sort of interviews that inform rather than simply entertain with marketing speak.

Interestingly, there was no discussion of SSD streaming.
 
Someone sent me this and it's very interesting reading... If it's not translated, then translate it in Google Chrome



Thank you for sharing this, man. This is like finding water in a desert.

And thank you too to CJY & Apollo Helios for the translation :messenger_heart:
 
Someone sent me this and it's very interesting reading... If it's not translated, then translate it in Google Chrome



Thank you for sharing this. It was really interesting to read.
 

Gediminas

Banned
Someone sent me this and it's very interesting reading... If it's not translated, then translate it in Google Chrome


this article sounds like the iceberg for the xbox titanic :D

i can already see the meltdown on youtube and the forums :D
 
Good interview. Surprised how open they were. MS isn't going to be happy. XboxEra Discord emergency meeting commissioned.

I heard from the XboxEra Discord that developers are having to reduce their TFLOPs to 8 due to extremely bad chip yields

this took Sony totally by surprise and they are now scrambling to do everything they can to delay the console until 2022
 
Good interview. Surprised how open they were. MS isn't going to be happy. XboxEra Discord emergency meeting commissioned.
this article sounds like the iceberg for the xbox titanic :D

i can already see the meltdown on youtube and the forums :D
I heard from the XboxEra Discord that developers are having to reduce their TFLOPs to 8 due to extremely bad chip yields

this took Sony totally by surprise and they are now scrambling to do everything they can to delay the console until 2022
Made a thread for it haha, let's see if there are Little Boy or Fat Man size explosions.

Edit: Forgot to link the thread, so here it goes;
 

icy121

Member
I heard from the XboxEra Discord that developers are having to reduce their TFLOPs to 8 due to extremely bad chip yields

this took Sony totally by surprise and they are now scrambling to do everything they can to delay the console until 2022
A buddy of mine works for a massive distribution chain that's responsible for supplying some of Sony's components - specifically their PSU. He told me Sony is going to walk back their claim that certain off-the-shelf M.2 drives will work with the PS5. The reason being: their PSU is incredibly custom and not "consumer-proof", and Sony is seriously worried that a large portion of the user base who decide to upgrade storage in the future will mess something up. They're currently in internal talks with Samsung to fast-track a solution for proprietary storage in the form of memory cards, a la Microsoft.

I'll keep you guys posted.

LOL NOT! BUT WAIT! Jez was hearing similar things from his one developer friend who probably isn't a programmer or an engineer, and "heard" that Sony was having "overheating" issues from someone over on the XboxEra Discord. Someone send this to RTU but cut out the spoiler tag. LOL I want to see if he would actually make a video on this.
 
Thanks for this. Best interview I have read so far. It's Ali Salehi, Rendering Engineer of Crytek. Seems very bullish on PS5. Here's the (not so good) Google translated version:

What a fantastic machine Sony has built.

So smart and efficient. No wonder it has been receiving so much praise.

The SmartShift technology is actually a very good thing and much praised here. It saves power and redirects it to other components.

A higher frequency is also a smart choice.

Can't wait to see the PS5's performance.
 
A buddy of mine works for a massive distribution chain that's responsible for supplying some of Sony's components - specifically their PSU. He told me Sony is going to walk back their claim that certain off-the-shelf M.2 drives will work with the PS5. The reason being: their PSU is incredibly custom and not "consumer-proof", and Sony is seriously worried that a large portion of the user base who decide to upgrade storage in the future will mess something up. They're currently in internal talks with Samsung to fast-track a solution for proprietary storage in the form of memory cards, a la Microsoft.

I'll keep you guys posted.

Why would the PSU supplier know about the M.2 drives?
Why would a custom PSU impact the ability to add in an M.2 drive?

Your buddy is talking rubbish.
 

rnlval

Member
Someone sent me this and it's very interesting reading... If it's not translated, then translate it in Google Chrome


Reminder: the RTX 2080 Super has the FP32 equivalent of 48 CUs, with a 1,919MHz average clock speed and 496GB/s of memory bandwidth.
 
Good interview. Surprised how open they were. MS isn't going to be happy. XboxEra Discord emergency meeting commissioned.

PS5 is a smarter, more democratic and more efficient machine from what I understood.

Actually, favoring higher clocks instead of more CUs is a smart decision, as using all CUs at the same time is unlikely given how many resources it demands.

Letting the cores work at a much higher frequency helps a lot more to stay near the theoretical 10.3, while a slower clock keeps the CUs from working efficiently enough to reach peak performance.

SmartShift with the variable clock speed was also much praised, and it indeed makes perfect sense.

The PS5 sounds more amazing the more you actually care about how it works and why Cerny made the choices he did.
 
Good interview. Surprised how open they were. MS isn't going to be happy. XboxEra Discord emergency meeting commissioned.
Also, this came from a AAA dev famous in the graphics department, not an indie. Yes, indie devs are important for the industry,
but they are far from being the leaders in this kind of thing. :goog_unsure:
 

rnlval

Member
For all the NVIDIA DLSS lovers out there, Checkerboard 2.0 and Variable Rate Shading will save more than enough power at high resolutions on next gen consoles. Not every reconstruction technique needs to be hardware or machine learning based.
RTX's Tensor cores conserve shader resources.
 

INC

Member
PS5 is a smarter, more democratic and more efficient machine from what I understood.

Actually, favoring higher clocks instead of more CUs is a smart decision, as using all CUs at the same time is unlikely given how many resources it demands.

Letting the cores work at a much higher frequency helps a lot more to stay near the theoretical 10.3, while a slower clock keeps the CUs from working efficiently enough to reach peak performance.

SmartShift with the variable clock speed was also much praised, and it indeed makes perfect sense.

The PS5 sounds more amazing the more you actually care about how it works and why Cerny made the choices he did.

that's all very well, but does it look like a fridge? checkmate to xbox
 

rnlval

Member
PS5 is a smarter, more democratic and more efficient machine from what I understood.

Actually, favoring higher clocks instead of more CUs is a smart decision, as using all CUs at the same time is unlikely given how many resources it demands.

Letting the cores work at a much higher frequency helps a lot more to stay near the theoretical 10.3, while a slower clock keeps the CUs from working efficiently enough to reach peak performance.

SmartShift with the variable clock speed was also much praised, and it indeed makes perfect sense.

The PS5 sounds more amazing the more you actually care about how it works and why Cerny made the choices he did.
PS5's approach is like overclocking an RTX 2070's 36 CUs to 2.2GHz and still getting beaten by an RTX 2080 Super's 48 CUs at 1.9GHz.
 
OK Here we go! It is a long one but full of info.
The asymmetrical RAM configuration has been criticized more frequently lately. I came across this post and this person explained how the XSX's RAM setup isn't ideal.

After getting my head wrapped around his post, it makes sense. Memory bandwidth is determined by (data rate per pin) x (# of chips) x (bus bits per chip) / (8 bits per byte). Each chip has 1GB of "GPU optimized" RAM: (14Gbps) x (10 chips) x (32 bits per chip) / (8) = 560GB/s. Six of the ten chips have an additional 1GB of "slower" RAM, so the slower RAM's bandwidth is (14Gbps) x (6 chips) x (32 bits per chip) / (8) = 336GB/s.

Here's the issue with that configuration. In the 2GB chips, the "fast" and "slow" RAM have to share those 32 bits of bus width, and because bandwidth is partly determined by how much of the bus is used, sharing it lowers the bandwidth. If memory usage goes above 10GB, some of that width has to be shared with the "slower" RAM.

Let's say the "slower" RAM in each chip needs to be accessed. If 16 bits per chip are given over to the "slower" RAM, then the "GPU optimized" RAM's overall bandwidth decreases: [(14Gbps) x (4 chips) x (32 bits) / (8)] + [(14Gbps) x (6 chips) x (16 bits) / (8)] = 392GB/s.
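Here is that arithmetic as a quick sketch you can run (my own restatement of the math above):

```python
# The same arithmetic in code ("width" is the bus width per chip in bits;
# 14 is the GDDR6 per-pin data rate in Gbps, which the post calls 14 GHz).
def bw_gbs(rate_gbps: float, chips: int, width_bits: int) -> float:
    return rate_gbps * chips * width_bits / 8  # 8 bits per byte

print(bw_gbs(14, 10, 32))  # 560.0 GB/s: all ten chips at full 32-bit width
print(bw_gbs(14, 6, 32))   # 336.0 GB/s: the six 2GB chips on their own
# Contended case: the six 2GB chips give half their width to the "slow" pool
print(bw_gbs(14, 4, 32) + bw_gbs(14, 6, 16))  # 392.0 GB/s left for "fast"
```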

And then, I read through Ali Salehi's interview and he's making the same criticism:

A good example of this is the Xbox Series X hardware. Microsoft has split the RAM in two, the same mistake it made with the Xbox One. One part of the RAM has high bandwidth and one part has low bandwidth. Obviously, coding for this console will be a story: the amount of data we have to fit into the fast RAM is so large that it will be a pain again, and if we want to support 4K, that's another story. So there will be parts that prevent the graphics card from reaching that peak speed.
 

rnlval

Member
The asymmetrical RAM configuration has been criticized more frequently lately. I came across this post and this person explained how the XSX's RAM setup isn't ideal.

After getting my head wrapped around his post, it makes sense. Memory bandwidth is determined by (data rate per pin) x (# of chips) x (bus bits per chip) / (8 bits per byte). Each chip has 1GB of "GPU optimized" RAM: (14Gbps) x (10 chips) x (32 bits per chip) / (8) = 560GB/s. Six of the ten chips have an additional 1GB of "slower" RAM, so the slower RAM's bandwidth is (14Gbps) x (6 chips) x (32 bits per chip) / (8) = 336GB/s.

Here's the issue with that configuration. In the 2GB chips, the "fast" and "slow" RAM have to share those 32 bits of bus width, and because bandwidth is partly determined by how much of the bus is used, sharing it lowers the bandwidth. If memory usage goes above 10GB, some of that width has to be shared with the "slower" RAM.

Let's say the "slower" RAM in each chip needs to be accessed. If 16 bits per chip are given over to the "slower" RAM, then the "GPU optimized" RAM's overall bandwidth decreases: [(14Gbps) x (4 chips) x (32 bits) / (8)] + [(14Gbps) x (6 chips) x (16 bits) / (8)] = 392GB/s.

And then, I read through Ali Salehi's interview and he's making the same criticism:


Each GDDR6 chip has dual 16-bit channels, hence 10 chips have 20 16-bit channels (straws).


Scenario 1

For XSX GPU memory bandwidth

1st slice (odd straws), 28 GB/s x 6 chips = 168 GB/s potential

2nd slice (even straws) , 28 GB/s x 6 chips = 168 GB/s potential

3rd slice, 56 GB/s x 4 chips = 224 GB/s potential

IF the CPU consumes 50 GB/s from the 1st slice, i.e. 168 - 50 = 118, then the total GPU bandwidth is 510 GB/s

----


For PS5 GPU memory bandwidth

448 - 50 = 398 GB/s

-----


XSX GPU has 28% memory bandwidth advantage.


PS5's GPU takes slightly more than half of that penalty hit (i.e. ~4.5%), like RX 5600 OC 36 CU with 336 GB/s vs RX 5700 36 CU with 448 GB/s = 8% performance hit.

Not factoring in PS5's TFLOPS scaling issues with gimped GPU memory bandwidth.

Scenario 2

For XSX GPU memory bandwidth

1st slice (odd straws), 28 GB/s x 6 = 168 GB/s potential

2nd slice (even straws) , 28 GB/s x 6 = 168 GB/s potential

3rd slice, 56 GB/s x 4 = 224 GB/s potential

IF the CPU consumes 100 GB/s from the 1st slice, i.e. 168 - 100 = 68, then the total GPU bandwidth is 460 GB/s

----


For PS5 GPU memory bandwidth

448 - 100 = 348 GB/s

-----


XSX GPU has 32% memory bandwidth advantage.

PS5 GPU takes a penalty hit like RX 5600 OC 36 CU (336 GB/s) vs RX 5700 36 CU (448 GB/s) = 8% performance hit.




Don't underestimate memory bandwidth at 4K resolution.
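For anyone who wants to replay the arithmetic, a small sketch of the two scenarios (my own restatement of the numbers above):

```python
# Sketch of the two scenarios above: each GDDR6 chip exposes two 16-bit
# channels ("straws") of 28 GB/s each at 14 Gbps.
CHANNEL_GBS = 28

def xsx_gpu_bw(cpu_draw_gbs: float) -> float:
    slice1 = CHANNEL_GBS * 6 - cpu_draw_gbs  # odd channels of the 2GB chips
    slice2 = CHANNEL_GBS * 6                 # even channels of the 2GB chips
    slice3 = CHANNEL_GBS * 2 * 4             # both channels of the 1GB chips
    return slice1 + slice2 + slice3

for cpu in (50, 100):
    xsx, ps5 = xsx_gpu_bw(cpu), 448 - cpu
    print(f"CPU {cpu} GB/s -> XSX {xsx:.0f} vs PS5 {ps5} GB/s "
          f"({100 * (xsx / ps5 - 1):.0f}% advantage)")
```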
 
Made a thread for it haha, let's see if there are Little Boy or Fat Man size explosions.

Edit: Forgot to link the thread, so here it goes;
Gonna be honest, it feels like a good bit is lost in translation, and the Xbox fanboys (especially the ones claiming to be 'neutral gamers') will seize on that.

Don't be surprised if they try to stir the pot on some anti-MS conspiracy again too.

That being said, interesting perspective.
 
Let's see IF AMD is lying with NAVI's scalability claims

I agree that if AMD doesn't improve how much bandwidth they need, they will have a problem, especially the PS5, and the XSX to a lesser degree, but we don't know.

But I couldn't care less; I have no shares in AMD or NVIDIA. I am going to buy the two consoles no matter what hardware they have.

Later, when I see my GTX 1080 suffering to run games, I will build another high-end PC.
 
On the XSX and PS5, both the CPU and GPU access the shared memory pool through the same memory controllers. Note how two of the 1GB chips share the same controller with two of the 2GB chips. The only chips that won't suffer from CPU/GPU access sharing are the other two 1GB chips. As a result, your graphic is actually inaccurate.

This also does not take into account that the XSX will still suffer varying memory bandwidths. As soon as the system accesses the "slow" memory, it's going to bog the "fast" memory down because it has to share the 32 bits of bus width per chip. If the OS footprint is put in the "slow" memory, then it's going to reduce the "fast" memory's bandwidth. If you put the OS footprint in the "fast" memory, then you'll have less of it to work with.

There's also the issue that on the XSX you need to split the "fast" RAM into two separate pools for loading and streaming, because you can't load to a frame while the frame is rendering. The PS5, however, has GPU cache scrubbers which allow the system to avoid this drawback, giving it much more RAM for streaming. psorcerer does a good job breaking this down: https://www.neogaf.com/threads/ssd-and-loading-times-demystified.1532466/

And even if I were to take your numbers to heart, that advantage is not as large as the percentage difference in CUs between the PS5 and XSX (44.4%). With more CUs, you need to feed more data for the CUs to crunch; the GPU can only perform as well as the rate at which data is fed to it. The higher rate at which the PS5 can move data from storage to memory and from memory to cache is what lets its GPU hit its theoretical peak more effectively. Add to that the fact that the cache scrubbers can selectively evict parts of the cache, whereas the XSX's GPU needs to flush the whole cache before loading new data, and this will also play a role in efficiency.
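For reference, a quick check of the percentages being traded here (a sketch using only the spec-sheet numbers quoted in the thread):

```python
# Quick check of the percentages in this exchange (spec-sheet numbers only).
cu_gap = 100 * (52 / 36 - 1)
print(f"XSX CU-count advantage: {cu_gap:.1f}%")  # 44.4%, as quoted above

# ...versus the 28-32% bandwidth advantage computed earlier in the thread:
for bw_gap in (28, 32):
    print(f"bandwidth gap {bw_gap}% < CU gap {cu_gap:.1f}%: {bw_gap < cu_gap}")
```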
 

rnlval

Member
On the XSX and PS5, both the CPU and GPU access the shared memory pool through the same memory controllers. Note how two of the 1GB chips share the same controller with two of the 2GB chips. The only chips that won't suffer from CPU/GPU access sharing are the other two 1GB chips. As a result, your graphic is actually inaccurate.

This also does not take into account that the XSX will still suffer varying memory bandwidths. As soon as the system accesses the "slow" memory, it's going to bog the "fast" memory down because it has to share the 32 bits of bus width per chip. If the OS footprint is put in the "slow" memory, then it's going to reduce the "fast" memory's bandwidth. If you put the OS footprint in the "fast" memory, then you'll have less of it to work with.

There's also the issue that on the XSX you need to split the "fast" RAM into two separate pools for loading and streaming, because you can't load to a frame while the frame is rendering. The PS5, however, has GPU cache scrubbers which allow the system to avoid this drawback, giving it much more RAM for streaming. psorcerer does a good job breaking this down: https://www.neogaf.com/threads/ssd-and-loading-times-demystified.1532466/

And even if I were to take your numbers to heart, that advantage is not as large as the percentage difference in CUs between the PS5 and XSX (44.4%). With more CUs, you need to feed more data for the CUs to crunch; the GPU can only perform as well as the rate at which data is fed to it. The higher rate at which the PS5 can move data from storage to memory and from memory to cache is what lets its GPU hit its theoretical peak more effectively. Add to that the fact that the cache scrubbers can selectively evict parts of the cache, whereas the XSX's GPU needs to flush the whole cache before loading new data, and this will also play a role in efficiency.
What are you talking about with a shared memory controller? Prove that "two of the 1GB chips share the same controller with the two 2GB chips".

[image: NAVI 10 die shot]


For its 256-bit bus, NAVI 10 has eight 32-bit memory controllers. Each 32-bit memory controller is divided into two 16-bit channels, which can be seen in NAVI 10's die shot.

Each GDDR6 chip has two 16-bit channels.

Your "32 lanes per chip" argument is flawed and obsolete given GDDR6's dual 16-bit channels.
 

B_Boss

Member
I think that's a different guy :p

I posted his LinkedIn in the thread about this interview.

Ah, great, thanks dude lol. Amazing how he has a very similar (if not the exact same) name, and even such a similar academic and professional background :messenger_grinning_squinting:.
 