
Ali Salehi, a rendering engineer at Crytek, contrasts the next-gen consoles in an interview (Update: tweets/article removed)

This right here is a perfect example of exactly what took place with the 13.3-teraflop PS5 mess, but it's even worse this time because the specifications are an open book and the meltdown when things don't pan out can be completely avoided. People need to understand the hierarchy in hardware. Practical or theoretical, it doesn't matter; the Series X is better on both fronts.

What's taking place here is a total denial of reality. Instead of accepting it and simply moving on, people are fishing for anything that might continue to fuel their delusions, and then they double down on them. This is getting really old; it's not even a debate at this point. It's come down to one side understanding the reality of the situation and the other trying to manipulate it.

Series X is the more powerful console: the GPU is better, the CPU is better, the memory setup is better, and there's no argument in opposition that can't be shot down instantly. Accept where things lie and move on; all that waits for you otherwise is a barrage of disappointment and embarrassment.

I'm in favor of having them believe what they want at this point. We've been through this dance before. We mostly all have a good enough understanding of the numbers and their ultimate meaning for games. If people want to assume some degree of legendary incompetence on the part of Microsoft in not being able to build a console and then support it well enough from a software or API standpoint so that developers are able to actually extract the amount of performance Microsoft suggests will be possible, then I say let them assume away. I think AMD and Microsoft knew what they were doing.
 

SonGoku

Member
I'd suggest a title change: PS5's better dev tools and streamlined hardware design enable developers to get closer to peak performance compared to XSX
As stated in the interview, XSX still has the power advantage to push extra pixels (resolution), but the difference in performance is closer than the numbers suggest. That's basically the tl;dr version of the interview.
 
The core bottleneck of the OG Xbox One was the slower RAM and more specifically the ESRAM.


It had nothing to do with the GPU or the CPU; that's not why we saw what we did in side-by-side comparisons between the two consoles.

Seriously now? It must be opposite day. Since when does a weaker GPU not matter at all? Nothing to do with it? Anyway, time to exit this thread. I didn't think the specs of both consoles being out would lead to the craziness I'm seeing lately.
 
Seriously now? It must be opposite day. Since when does a weaker GPU not matter at all? Nothing to do with it? Anyway, time to exit this thread. I didn't think the specs of both consoles being out would lead to the craziness I'm seeing lately.
Yeah, it's devolved into something beyond what I could have expected. For the first time in my life I'm actually seeing people try to peddle a universally weaker console of the same architecture as having a computational leg up. It's the Twilight Zone around here; the inability to accept it is off the charts.
 

GHG

Member
That's why I explicitly mentioned it was a mixture of both. In your own case as a developer you may lean that way, but that won't be the case for all developers. Some of them do weigh financial practicality quite highly, because ultimately video games are as much a business as they are an art form.

Your reason for not taking MS at their word is... odd, to say the least. That's something from several years ago. It's similar to people who still clung to PS3 problems when expressing their caution about or dislike of the PS4 (and that did happen). At some point you have to acknowledge that things have changed, be it management, company/division culture, policies, etc., and start looking at things from that perspective.

IMO that change has mostly manifested within the Xbox division; holding them to the same standards as the 2013 Xbox division is doing them a big disservice.

My ultimate stance on this is let's wait and see what other 3rd party developers have to say about the dev tools and how they compare. Eventually that will transition to "let's look at the games".

If we flip this around and Sony had come out and said (prior to this interview) that their dev tools would be at least on par with Microsoft's efforts with DX12 Ultimate, would you have believed them flat out, or would you have said let's wait to see what the developers are saying? Or would it take a developer saying something like that for you to believe it's true? Because personally I've not been interested in what Cerny has had to say for the vast majority of the time.

As far as I'm concerned both of these companies can put as much PR out there as they want but I'll listen to what a developer who has access to both platforms has to say over them any day. These guys are the ones actually wrestling with the new hardware and dev tools.

What I can't understand is why, now that we've got the most in-depth insight to date from somebody who is working with the consoles, it's being met with more scepticism than your typical PR fluff piece gets.

And the vast majority of developers are not as soulless as you think they are :)
 
I'd suggest a title change: PS5's better dev tools and streamlined hardware design enable developers to get closer to peak performance compared to XSX
As stated in the interview, XSX still has the power advantage to push extra pixels (resolution), but the difference in performance is closer than the numbers suggest. That's basically the tl;dr version of the interview.

I think this one is better, much more neutral, so people can draw their own conclusions:

"Ali Salehi, a rendering engineer at Crytek, talking about PS5/XSX"
 

StreetsofBeige

Gold Member
Because MS didn't do a reveal event that garnered massive backlash from their core constituency (regular gamers) and generated confusion and disappointment the way Sony did. Whether you personally liked the Road to PS5 event or not (I did), it's pretty obvious that it didn't go over too well with actual gamers who were excited to hear about the PS5 and may've expected a lot more, since they literally promoted the event on their social media accounts.
The funny thing is it's not really Sony at fault. If they want to choose a 10 TF system, go ahead. But they never misled anyone by saying something else beforehand. Cerny's reveal was the first time they said anything about it in detail.

So even though his show was odd, you actually can't blame him for any of this. It's not like he claimed PS5 was going to be 12 tf in an interview in 2019 but showed a 10 tf system.

If anyone is to blame, it's all the so-called insiders who played up PS5 > SeX and PS5 being 12-13 TF for a year. There were even last-minute mentions of the PS5 hitting 14-15. All BS.

Goes to show what kind of BS and hidden agendas they all have, since most of them clustered together saying basically the same thing since last summer. I don't remember any insiders on forums/Twitter saying the PS5 was going to be about 10 TF. The only people I remember were those GitHubbers tweeting their dirt-digging every week about Oberon, where they stood firm at 9.2.

I think what's happening with some gamers is that they're stretching for those 12-13 TF rumours to be true by treating the PS5's 10 TF boost clock and SSD as equivalent to a raw 12-13 TF.

All the insiders had to do (the ones who actually did know) was play up that the PS5 is 10 TF. Then, when Cerny shows a 10 TF system, there's nothing to be confused about or disappointed by. It's like SeX: "double the power", so most people pegged SeX at double digits, and when SeX is announced at 12 TF it's not really a surprise. I think the biggest surprise for gamers wasn't even the 12 TF; it's that it was confirmed RDNA 2.
 

benjohn

Member
Reads really well! You don't have to fix the rest if you don't want to, you're not getting paid :p
It was more time-consuming than I first thought, but I'm glad I did it. Sorry for the bad English.


OK Here we go! It is a long one but full of info.

INTRO
The hardware specifications of the PlayStation 5 and Xbox Series X were officially announced a few weeks ago by Sony and Microsoft, and Digital Foundry had the opportunity to take a deep technical look at what we can expect. Although there aren't many games for these consoles yet, and we don't know much about their overall performance and user experience, the two companies are constantly competing in technical and complex debates that hardly anyone but engineers and programmers can follow. This time around, neither has shied away from providing the deepest technical details.

As we tracked down the information, read the specifications, and searched for more details on the matter, it seemed best to talk with an engineer and programmer at Crytek, one of the world's most tech-savvy companies, with a powerful game engine. That's why I called Ali Salehi, a rendering engineer at Crytek, and asked him, as an expert, to answer our questions about the Xbox's teraflops advantage over the PS5 and the power of the consoles, and to comment on which one is more powerful. He gave convincing answers, with simple and understandable explanations, that ran contrary to expectations and to the numbers on paper.

In the following, you will read the conversation between Mohsen Vafnejad and Shayan Ziaei with Ali Salehi about the hardware specifications of the PlayStation 5 and Xbox Series X.

INTERVIEW
[Questions bolded,
answers not]
Vijayato: In short, what is the job of a rendering engineer in a gaming company?

Ali Salehi: The technical visual side of each game is what we do. Supporting new consoles, optimizing current algorithms, troubleshooting them, and implementing new technology and features like ray tracing are some of the things we do.

What is the significance of Teraflops, and does higher Teraflops mean a console is stronger?

Teraflops indicates how much a processor can deliver in the best and most ideal conditions possible; the teraflops figure describes ideal, theoretical conditions. In practice, however, a graphics card and a console are complex entities that rarely reach their full potential. Several elements must work together in harmony, each feeding part of its output to the next. If any of these elements fails to work properly, the efficiency of the other parts decreases. A good example of this is the PlayStation 3. Because of its SPUs, the PlayStation 3 had a lot more power on paper than the Xbox 360. But in practice, because of its complex architecture, memory bottlenecks, and other problems, you never reached that peak efficiency.
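
[For reference, here is a rough sketch of where the headline teraflops figures come from, using the announced CU counts and clocks and assuming the usual 64 ALUs per CU with two floating-point operations (a fused multiply-add) per cycle; as he says, real workloads rarely sustain this peak:]

def peak_tflops(cus: int, clock_ghz: float, alus_per_cu: int = 64) -> float:
    # 2 FLOPs per ALU per clock assumes one fused multiply-add every cycle.
    return cus * alus_per_cu * 2 * clock_ghz / 1000.0

print(f"PS5 peak: {peak_tflops(36, 2.23):.2f} TFLOPS")   # ~10.28
print(f"XSX peak: {peak_tflops(52, 1.825):.2f} TFLOPS")  # ~12.15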

There is an image here with the following caption:
[Woes of the PlayStation 3
The PlayStation 3 had a hard time running multi-platform games compared to the Xbox 360. Red Dead Redemption and GTA IV, for example, ran at 720p on the Microsoft console, while the PlayStation 3 had poorer output, rendering at a lower resolution that was upscaled to 720p. But Sony's own studios were able to offer more detailed games such as The Last of Us and Uncharted 2 and 3 thanks to their greater familiarity with the console and the development of special software access.]

That is why it is not a good idea to base our opinions only on numbers. If all the parts of the Xbox Series X could work optimally and the GPU could run at its peak, which is not possible in practice, we could achieve 12 TFLOPS. On top of all this, there is also the software side. An example is the advent of Vulkan and DirectX 12: the hardware did not change, but due to the change in software architecture, the hardware could be put to better use.

The same can be said for consoles. Sony runs the PlayStation 5 on its own operating system, but Microsoft has put a customized version of Windows on the Xbox Series X. The two are very different. Because Sony has developed exclusive software for the PlayStation 5, it will definitely give developers many more capabilities than Microsoft, which uses almost the same DirectX for PC and for its consoles.

How have you experienced working with both consoles and how do you evaluate them?

I can't say anything right now about my own work, but I'm quoting others who have made a public statement. Developers say the PlayStation 5 is the easiest console they've ever coded for, so they can reach the console's peak performance. In terms of software, coding on the PlayStation 5 is extremely simple and it has many features that leave a lot of options open for developers. All in all, the PlayStation 5 is a better console.

If I understood correctly, are teraflops the final defining factor for GPU power? And what do these floating-point operations mean? How would you describe it for a user who doesn't understand all of this?

I think it was a bad PR move to put all this information out. This technical information does not matter to the average user and is not a final judgement of GPU power.

Graphics cards, for example, have 20 different sections, one of which is the Compute Units, which perform the processing. If the rest of the components are put to use in the best possible way, there are no other restrictions, there is no bottleneck in memory, and the processor always has the information it needs, then 12 TFLOPS can be achieved. So in an ideal world where we remove all the limiting parameters it's possible, but in reality it isn't. (He means we cannot remove all bottlenecks, so the 12 TFLOPS only remains on paper.)

A good example of this is the Xbox Series X hardware. Microsoft has two separate pools of RAM, the same mistake they made with the Xbox One. One pool of RAM has high bandwidth and the other has lower bandwidth. As a result, coding for the console is sometimes problematic, because the amount of data we have to fit into the faster pool is so large that it becomes a nuisance again, and to add insult to injury, 4K output needs even more bandwidth. So there will be factors that bottleneck the XSX's GPU.
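
[A hypothetical sketch of the budgeting problem he describes: bandwidth-hungry resources compete for the 10 GB pool that runs at 560 GB/s, and whatever spills over lands in the slower 336 GB/s pool. The resource names and sizes below are made up for illustration, not real engine code:]

FAST_POOL_GB = 10.0   # "GPU-optimal" memory at 560 GB/s (announced Series X spec)
SLOW_POOL_GB = 3.5    # roughly what remains for a game in the 336 GB/s pool

resources = [  # (name, size in GB, wants the high-bandwidth pool?)
    ("render targets / 4K framebuffers",  1.5, True),
    ("streamed texture pool",             6.0, True),
    ("geometry and vertex buffers",       2.0, True),
    ("ray tracing acceleration structs",  1.0, True),
    ("audio, animation, gameplay data",   2.5, False),
]

fast_used = slow_used = 0.0
for name, size_gb, wants_fast in resources:
    if wants_fast and fast_used + size_gb <= FAST_POOL_GB:
        fast_used += size_gb
        print(f"fast pool: {name} ({size_gb} GB)")
    else:
        slow_used += size_gb
        note = "  <- wanted high bandwidth, potential GPU bottleneck" if wants_fast else ""
        print(f"slow pool: {name} ({size_gb} GB){note}")

print(f"fast: {fast_used:.1f}/{FAST_POOL_GB} GB, slow: {slow_used:.1f}/{SLOW_POOL_GB} GB")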

You talked about the CUs. The PlayStation 5 has 36 CUs, and the Xbox Series X has 52 CUs available to the developer. What is the difference?

The main difference is that the PlayStation 5's working frequency is much higher; its CUs run at a higher clock. That's why, despite the difference in CU count, the two consoles' performance is almost the same. An interesting analogy from an IGN reporter was that the Xbox Series X GPU is like an 8-cylinder engine, while the PlayStation 5 is like a turbocharged 6-cylinder engine. Raising the clock speed on the PlayStation 5 seems to me to have a number of benefits for things like memory management, rasterization, and other elements of the GPU whose performance is tied to frequency rather than CU count. So in some scenarios the PlayStation 5's GPU works faster than the Series X's. That's what lets the console's GPU operate at its announced peak of 10.28 teraflops more often. But the Series X, because the rest of its elements are slower, will probably not reach its 12 teraflops most of the time, and will only hit 12 teraflops under highly ideal conditions.
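
[A rough illustration of the point about clock-scaled units, assuming 64 ROPs and 4 rasterized primitives per clock on both GPUs (typical RDNA-class figures, not officially confirmed for either console): the ALU peak favors the Series X, while the per-clock front-end throughput favors the PS5:]

for name, cus, clock_ghz in (("PS5", 36, 2.23), ("XSX", 52, 1.825)):
    alu_tflops  = cus * 64 * 2 * clock_ghz / 1000   # scales with CU count and clock
    fill_gpix_s = 64 * clock_ghz                    # pixel fill rate: scales with clock only
    tri_gtri_s  = 4 * clock_ghz                     # rasterized primitives: clock only
    print(f"{name}: {alu_tflops:.2f} TFLOPS, {fill_gpix_s:.1f} Gpixel/s, {tri_gtri_s:.2f} Gtri/s")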

Doesn't this difference decline at the end of the generation, when developers become more familiar with the X-Series hardware?

No, because the PlayStation API generally gives devs more freedom, and usually at the end of each generation Sony consoles produce more detailed games. For example, early in the seventh generation even multi-platform games performed poorly on the PlayStation 3, but late in the generation Uncharted 3 and The Last of Us came out on that console. I think the next generation will be the same. But generally speaking, the XSX should have less trouble pushing more pixels. (He emphasizes "only" pixels.)

Sony says the smaller the number of CUs, the more you can integrate the tasks. What does Sony's claim mean?

It costs resources to use all the CUs at the same time, because CUs need resources that are allocated by the GPU when they want to run code. If the GPU fails to distribute enough resources across all the CUs to execute some code, it will be forced to drop a number of CUs from use; for example, instead of 52 it might use only 20 of them, because the GPU doesn't have enough resources for all CUs at all times.

Aware of this, Sony has used a faster GPU instead of a larger GPU to reduce allocation costs. A more striking example of this is in CPUs. AMD has had high-core-count CPUs for a long time; Intel, on the other hand, has used fewer but faster cores, and Intel CPUs with fewer, faster cores perform better in gaming. Clearly, a 16- or 32-core CPU has a higher teraflops number, but a CPU with faster cores will definitely do a better job, because it's hard for game programmers to use all the cores all the time, so they prefer fewer but faster cores.
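
[A rough sketch of the resource-allocation point above: each CU has a fixed budget of vector registers and local memory, and a shader that uses a lot of either limits how many wavefronts can be resident at once, leaving ALUs idle. The budget figures below are illustrative of RDNA-class hardware, not exact console numbers:]

VGPRS_PER_SIMD   = 1024        # vector registers per SIMD (illustrative)
LDS_BYTES_PER_CU = 64 * 1024   # local data share per CU (illustrative)
MAX_WAVES        = 20          # cap on resident wavefronts per SIMD (illustrative)

def resident_waves(vgprs_per_wave: int, lds_per_group: int, waves_per_group: int = 4) -> int:
    by_registers = VGPRS_PER_SIMD // vgprs_per_wave
    by_lds = (LDS_BYTES_PER_CU // max(lds_per_group, 1)) * waves_per_group
    return min(MAX_WAVES, by_registers, by_lds)

print(resident_waves(vgprs_per_wave=64,  lds_per_group=8 * 1024))   # light shader -> 16 waves
print(resident_waves(vgprs_per_wave=256, lds_per_group=32 * 1024))  # heavy shader -> 4 waves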

Could the hyperthreading feature included in the Series X be Microsoft's winning ace at the end of the generation?

Technically, hyperthreading has been on desktop computers since the Pentium 4: the CPU presents each physical core as two virtual cores, and in most cases this helps performance. The Series X lets the developer decide whether to use these virtual cores or turn them off in exchange for a higher CPU clock, and that's exactly what you're describing. It's not an easy decision to make correctly from the start, so hyperthreading is likely to be used later in the generation rather than at first.

Can you elaborate?

That is, making that decision requires very precise analysis of how the code executes, so it's not something everyone knows how to do right now. There are much more important concerns when getting to know the console hardware, and developers are likely to work with a smaller number of cores at a higher clock at the beginning of the next generation, and then move on to using SMT (hyperthreading).
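
[A hypothetical sketch of the trade-off he describes: the Series X CPU runs 8 cores at 3.8 GHz with SMT off, or 16 hardware threads at 3.6 GHz with SMT on (the announced figures). SMT does not double throughput, so the 25% gain below is purely illustrative:]

PHYSICAL_CORES = 8

def rough_throughput(smt_on: bool, smt_gain: float = 0.25) -> float:
    clock_ghz = 3.6 if smt_on else 3.8           # announced Series X CPU options
    scale = (1 + smt_gain) if smt_on else 1.0    # modest benefit from the extra threads
    return PHYSICAL_CORES * scale * clock_ghz

print(f"SMT off: {rough_throughput(False):.1f} core-GHz")  # 30.4
print(f"SMT on : {rough_throughput(True):.1f} core-GHz")   # 36.0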

There are 3,328 shaders available across the Xbox Series X's compute units. What is a shader, what does it do, and what does having 3,328 shaders mean?

When developers want to execute code, they do so through units called wavefronts. Multiply the number of CUs by the wavefront width and you get the number of shaders. But it doesn't really matter, and everything I said about the CUs applies here too. Again, there are limitations that keep all of these shaders from being usable, and having many of them isn't necessarily good on its own.

There is another important issue to consider, as Mark Cerny put it: CUs, and even teraflops, are not necessarily the same across architectures. That is, you can't compare teraflops between devices and decide which one is actually superior just from the number. So you can't trust these numbers and call it a day.
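
[For reference, the 3,328 figure in the question is simply the CU count multiplied by the 64 shader ALUs in each CU:]

print(52 * 64)  # 3328 shader ALUs across the Series X's 52 CUs
print(36 * 64)  # 2304 across the PS5's 36 CUs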

Comparisons between Android devices and Apple iPhones have also recently come up as an analogy for the consoles, with internet discussions pointing out that Android phones have more RAM but poorer performance than iPhones. Is that comparison applicable to the consoles?

The software stack that sits on top of the hardware determines everything, and its impact keeps growing as performance updates accumulate. Sony has always had better software because Microsoft has to use Windows. So yes, the comparison holds.

Microsoft has insisted that the Xbox Series X's frequencies are constant under all circumstances, but Sony does not take that approach: it gives the console a fixed power budget and lets the frequencies vary depending on the situation. What are the differences between the two, and which will be better for the developer?

What Sony has done is much more logical, because depending on the processing load it can decide at any given moment whether the GPU or the CPU gets the higher frequency. For example, on a loading screen only the CPU is needed and the GPU is barely used, while in a close-up scene of a character's face the GPU is heavily involved and the CPU plays a very small role. On the other hand, it's good that the X-Series has good cooling and guarantees to keep the frequency constant without throttling, but the practical freedom that Sony has given is really a big deal.

Doesn't this freedom of action make things harder for the developer?

Not really, because we're already doing that in the engine. For example, the dynamic resolution scaling technique used by some games already measures how much pressure the GPU is under and how far the resolution should be lowered to hold the frame rate. So it's very easy to hook these systems together.
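
[A minimal sketch of the dynamic resolution scaling loop he describes: measure how long the GPU took relative to the frame budget and nudge the render resolution so the frame rate stays fixed. The constants and the simple proportional controller are illustrative:]

TARGET_MS = 16.6                      # frame budget for 60 fps
MIN_SCALE, MAX_SCALE = 0.6, 1.0

def update_scale(scale: float, gpu_frame_ms: float) -> float:
    error = (TARGET_MS - gpu_frame_ms) / TARGET_MS   # >0 means headroom, <0 means over budget
    scale += 0.25 * error                            # small proportional step
    return max(MIN_SCALE, min(MAX_SCALE, scale))

scale = 1.0
for gpu_ms in (15.0, 18.5, 19.0, 16.0, 14.2):        # made-up per-frame GPU timings
    scale = update_scale(scale, gpu_ms)
    print(f"gpu {gpu_ms:4.1f} ms -> render at {scale:.2f}x of native resolution")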

What is the use of the Geometry Engine that Sony is talking about?

I don't think it will be very useful in the first year or two. We'll probably see more of an impact for the second wave of games released on this console, but it doesn't have much use at the start.

The Series X chip is built on a 7-nanometer process, and we're told that the smaller the number, the better the chip. Can you explain the nanometer figure and transistors?

Shrinking the process means more transistors, and controlling their heat in larger numbers and smaller spaces. A better production technology is what counts; the nanometer number itself is not very important, what matters is the number of transistors.

The PlayStation 5's SSD reaches 8-9 GB/s at peak. Now that we've reached this speed, what else will happen apart from faster loading and more detail?

The first thing is to remove loading screens from games. Microsoft has also shown the ability to suspend and resume games, running multiple games simultaneously and switching between them in less than 5-6 seconds; on the PlayStation that time will be under a second. Another thing to expect is a change in game menus. When there is no loading there is no waiting, and you no longer need to watch a video while the game loads in the background.
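
[A back-of-envelope check on the loading claim, using the 8-9 GB/s figure quoted above for the PS5 and the roughly 4.8 GB/s compressed figure Microsoft has quoted for the Series X; at these rates, refilling the consoles' 16 GB of RAM takes on the order of seconds:]

for name, throughput_gb_s in (("PS5 (compressed, ~9 GB/s)", 9.0), ("XSX (compressed, ~4.8 GB/s)", 4.8)):
    # PS5 ~1.8 s, XSX ~3.3 s to stream in a full memory's worth of data
    print(f"{name}: ~{16 / throughput_gb_s:.1f} s to refill all 16 GB of RAM")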

How will games fare on PC in the meantime? Having an SSD is optional for a PC user, after all.

Consoles have always determined what the standard is. Game developers also build games based on consoles, and if someone has a PC and doesn't have an SSD on it, they have to deal with long loads or think about buying an SSD.

As a programmer and developer, which do you consider the better console to work on and code for: the PlayStation 5 or the Xbox Series X?

Definitely PlayStation 5.

As a programmer, I would say the PlayStation 5 is much better, and I don't think you could find a programmer who would choose the Xbox over the PS5. For the Xbox they have to put DirectX and Windows, which are many years old, on the console, whereas for each new console Sony builds, it also rebuilds its software and APIs however it wants. That is in their interest and in ours, because there is only one way to do everything, and it's the best way possible.
 

Psykodad

Banned
 
so you believe more cubic inches in a sports car’s engine is a better engine?
Not exactly the best sword to fall on here or a great analogy.

We already know for a fact that, on the same architecture, if the same teraflops are targeted with more CUs at a lower frequency on one GPU and fewer CUs at a higher frequency on another, the GPU with more CUs will outperform the one pushing higher frequencies, even though their net floating-point throughput is the same.

Well, a bit of bad news here: the Series X is pushing 16 more CUs, which amounts to a minimum 1.87-teraflop surplus. So even if the Series X at 1,825 MHz had only 44 active CUs, which would pin it at about 10.3 teraflops, it would still marginally outperform the PS5 GPU.

What's the goal here? What are you guys aiming to achieve? We already know these hardlined facts, there's no argument to be had.
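
[For what it's worth, the arithmetic in the post above checks out against the usual peak formula (CUs x 64 ALUs x 2 ops x clock), with all the theoretical-peak caveats from the interview:]

def peak_tf(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"{peak_tf(52, 1.825) - peak_tf(36, 2.23):.2f}")  # ~1.87 TFLOPS gap at peak
print(f"{peak_tf(44, 1.825):.2f}")                      # ~10.28 TFLOPS for a 44 CU part at XSX clocks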
 

AGRacing

Member
“On the other hand, it's good that the X-Series has good cooling and guarantees to keep the frequency constant without throttling, but the practical freedom that Sony has given is really a big deal.”

I don’t like what this implies about PS5 cooling.

SONY : I am NOT buying a hair dryer this time.
 
I'd suggest a title change: PS5's better dev tools and streamlined hardware design enable developers to get closer to peak performance compared to XSX
As stated in the interview, XSX still has the power advantage to push extra pixels (resolution), but the difference in performance is closer than the numbers suggest. That's basically the tl;dr version of the interview.

Agreed. It's possible for one developer to hold the opinion that the console with fewer CUs at a significantly higher clock rate is more efficient overall, while at the same time acknowledging that the other console has more raw horsepower (higher res).

This is not another 'power of the cloud' or 'secret sauce' false equivalency that some fanboys are trying to push. It's simply two different approaches and philosophies that each have their pros and cons (IMO, Sony had a smaller budget than MS and tried to make up for it by going for speed and efficiency).
 
“On the other hand, it's good that the X-Series has good cooling and guarantees to keep the frequency constant without throttling, but the practical freedom that Sony has given is really a big deal.”

I don’t like what this implies about PS5 cooling.

SONY : I am NOT buying a hair dryer this time.
It really doesn't imply anything about PS5 cooling. It's saying the Series X took steps to ensure good cooling while running at locked max frequency.
 

Mendou

Banned
A lot of people in this thread are talking in absolutes as if this guy has been clear-cut, but there's an issue with that. Early in the interview, when asked if he has actual experience working with both consoles, he says quite clearly...

"I can't say anything right now about my own work, but I'm quoting others who have made a public statement."

So not really much different from any poster here building a case by quoting 'people on the internet'. It's hardly a technical deep dive by an expert familiar with both consoles.
This.
 
“On the other hand, it's good that the X-Series has good cooling and guarantees to keep the frequency constant without throttling, but the practical freedom that Sony has given is really a big deal.”

I don’t like what this implies about PS5 cooling.

SONY : I am NOT buying a hair dryer this time.

And yet Mark Cerny specifically highlighted how loud the PS4 and Pro were and kept talking about how the cooling is robust. I think they realize having a quiet system is a big deal.
 

Vroadstar

Member
It was more time-consuming than I first thought, but I'm glad I did it. Sorry for the bad English.


Thanks for translating, it's a very good read now and much easier to understand.
 

Neo Blaster

Member
You're really going to run with one person's point of view and base your whole opinion about the PS5 on this? Have you even heard of this person before this article? Don't do this, not like this.
Funny you should say that: how many Xbox fanboys accepted as truth what a random guy commented on that YouTube video, and he wasn't even a developer/engineer from a known company? Gosh, even Daniel and Jez from Windows Central did that.
 

dano1

A Sheep
Not exactly the best sword to fall on here or a great analogy.

We already know for a fact that, on the same architecture, if the same teraflops are targeted with more CUs at a lower frequency on one GPU and fewer CUs at a higher frequency on another, the GPU with more CUs will outperform the one pushing higher frequencies, even though their net floating-point throughput is the same.

Well, a bit of bad news here: the Series X is pushing 16 more CUs, which amounts to a minimum 1.87-teraflop surplus. So even if the Series X at 1,825 MHz had only 44 active CUs, which would pin it at about 10.3 teraflops, it would still marginally outperform the PS5 GPU.

What's the goal here? What are you guys aiming to achieve? We already know these hardlined facts, there's no argument to be had.


Well, it's what came to mind. I really want both systems to do well! I wish Xbox had Sony's lineup of games, because I really want to support an American company!
That said, I really don't think you're going to see much of a difference with 3rd-party games.
They certainly didn't take advantage of the extra power with the Xbox One X!
Only 7 more months to go! Can't wait!!
 
Well, it's what came to mind. I really want both systems to do well! I wish Xbox had Sony's lineup of games, because I really want to support an American company!
That said, I really don't think you're going to see much of a difference with 3rd-party games.
They certainly didn't take advantage of the extra power with the Xbox One X!
Only 7 more months to go! Can't wait!!
Ugh, basically 97% of the face-off analyses would beg to differ.
 
Who's this "we"? Dude, you are not a developer, and this isn't a sports team. It's not a political party. It's not an "us vs. them" bullshit thing, or it shouldn't be, anyway. But that's your mentality when you talk like that.

Also, what makes you think the XSX won't support low-level programming? That's one of the things DX12 (Ultimate even more so) is pushing for. And, FWIW, too much to-the-metal low-level access can hamstring you in BC solutions; just look at the compromise Sony had to take in going with a 36 CU GPU in the PS5 (apparently it was either 36 CUs, 48 CUs (Cerny's hypothetical example), or 72 CUs :S).

Same with the memory pool setup in the XSX (the 336 GB/s bandwidth pool seemingly exists to accommodate XBO X BC; if that's true, though, then I would think the entire 2 GB chips can be accessed at that speed, because the X had 12 GB of physical memory, not 6 GB. It might also suggest the XSX can access the 2 GB chips' full capacity at 56 GB/s across the board, which might put to rest one of the arguments about bandwidth and physical memory contention going around, assuming this is actually the case).
We as "enthusiasts of gaming", "members of NeoGAF", "gamers" (do I really need to specify this?), and yes, I am a dev, sorry to hurt your feelings...

According to Ali, the level of access in DX12 is not the same as what's used for PlayStation (GNM), and this is not the first time we've heard similar things from a dev, so your conjecture about DX12 being as effective as a low-level API is just that, conjecture. And you say the main reason Cerny had so many problems is the low-level optimization used in PS4 games, yet apparently that doesn't happen on Xbox; I'm sure it must be the magic of money, or maybe DX11/12 just doesn't optimize every fragment of code at a low level.
Because Sony has developed exclusive software for the PlayStation 5, it will definitely give developers many more capabilities than Microsoft, which uses almost the same DirectX for PC and for its consoles.

Regarding the memory, he says they need to be careful not to exceed the capacity of the fast memory, so they will have to optimize around the split memory pools, because the other pool is slow enough that it could hurt overall performance, given that 12 TF is only the peak theoretical figure. Whether that amount of fast memory is enough is another topic.

Personally, I enjoy reading your opinions even when they differ from mine, but if you are going to answer me in that tone (what was that first line?), please avoid quoting me.
 

Pallas

Member
lol another thread for console wars. Both consoles gonna be amazing ffs.

I told you guys some Xbox fanboys would attack this thread and pretend they're more educated than a rendering engineer working for Crytek 🤭
Sure just like how Sony fans attack any positive Xbox thread. :messenger_ok:
What I don't understand is: if people are so happy with the Series X specs (and they should be), then why are some so hellbent on downplaying any positive talk that arises for the PS5? People should be sitting comfortably. This is a developer who clearly knows his shit, yet people in here are doubting his credentials, etc.? Meanwhile some no-name dev who creates a basic 2D dolphin game is the one we should be listening to, according to some of the same people.

It makes no fucking sense. It's almost as if we've been here before at some point. Every fucking generation.

That is true but it also goes both ways, you saw a lot of fans downplaying the TF, CU differences as well.
 

B_Boss

Member
Now, I've come across a few posts here saying the engineer "trashed the Xbox", but seriously, do we have to read the article that way? I did not. He is speaking technically with, I should assume(?), no emotional attachment to the argument, or am I in error to think that? Technically accurate facts (regardless of whether Salehi has spoken them) do not care how we feel as consumers, professionals, you name it. I simply read the take of a professional who is far more adept than I am at explaining what he knows in relation to the next-gen consoles. It could easily have been positive or negative for either, but in this case his responses seem to independently corroborate what we've been hearing others say, beginning with Schreier. If it weren't, I'd wonder what's up with the contradiction, but I certainly wouldn't consider Salehi in error, a shill, etc. I'm genuinely interested in emotionally neutral analysis of Salehi's information; I'm also interested in whether anyone can, beyond reasonable doubt, show any errors in his thinking, because whatever the case, that can get us closer to understanding the facts.
 
I think the most interesting part is this:

"The same can be said for consoles. Sony runs PlayStation 5 on its own operating system, but Microsoft has put a customized version of Windows on the Xbox Series X. The two are very different. Because Sony has developed exclusive software for the PlayStation 5, it will definitely give developers much more capabilities than Microsoft, which has almost the same directX PC and for its consoles."

It's quite possible that the Windows overhead will come with some disadvantage (just like when comparing consoles to PC).

I wonder how much. Also, doesn't Microsoft run the OS in a virtual machine? That should create even more overhead.
 

longdi

Banned
Who is this?
Sounds like he's just hating on the Windows API, and we don't know how things will be next gen.

In Phil backwards compatibility stance we trust
 