
EDGE: "Power struggle: the real differences between PS4 and Xbox One performance"

Log4Girlz

Member
This all sounds like the devs are trying to test how a larger memory pool will affect character animations. Naughty Dog devs were forced to cram all of Joel's animations [walking, climbing, fighting with every weapon and melee], plus Ellie and all enemies [and their weapons], into 4 to 5 MB of RAM. Now that was an achievement.

I wonder what they will do with 50 MB of RAM dedicated to animations. :D

I don't own the game, but I've seen lots of videos... GTA V is black magic, then. The crazy animations, physics, minute details on everything. It's stunning.
 
By the way, I still don't really understand why Microsoft went with eSRAM and not eDRAM. Apparently, eSRAM gives them more options for manufacturing. But is that worth the eDRAM's obvious benefit of having far more capacity? Couldn't they have gone for a discrete on-package eDRAM die manufactured on a different process, like Intel did with Crystalwell, which provides the CPU/GPU with 128MB of L4 cache?

It would've been harder to cost-reduce (especially bad with Sony holding a $100 price advantage), harder to manufacture, more expensive up front, and would've led to many fewer units available at launch. And the benefits of low-latency ESRAM that can be used as a true data source for the GPU will prove to be a bigger win than EDRAM once devs get better at choosing which data to put in there and how to properly manage it.

The upside of the ESRAM is being heavily overlooked, or so I'm told by a friend in first-party development for the system. It may require some work, but from what I've heard the upsides are very real, and once devs have a solid game plan for working it into their game outside of the obvious, devs will likely keep coming up with more and more creative ways to get extra benefit out of it. And they believe that this continued experimentation and discovery of how best to tackle its use will more than future-proof the system for its entire lifespan. All I've managed to gather tells me that the bandwidth benefits are a very limited view of what the ESRAM means to the system as a whole, as important as that part of it might be.

Jesus Christ....


I hope you don't believe that "magically optimized super-drivers", move engines, the audio block, "power of the cloud" and "special sauce" will make up for the missing RAM, missing CUs, missing ROPs, etc. etc.

Nobody believes that, but this idea that things such as the move engines, the dedicated decompression hardware on one of the move engines, that very powerful audio block, and improved drivers and/or development tools will somehow be completely irrelevant and in no way helpful to the overall performance capabilities of the XB1 is every bit as ridiculous. Any view that doesn't take into consideration some of the custom work done to the system and the specific capabilities or usefulness of that custom hardware, and how it might help with features such as tiled resources, or even just with offloading work from the CPU and/or GPU, is a view that is automatically invalid. Perhaps it can be said that people are overestimating them, and that's certainly debatable, but to pretend as if they're completely useless? Come on. Forget cloud; the XB1 has some customizations in there that are being ignored. At a very basic level they are major extensions of pre-existing GPU and CPU functions. So whether or not it fits with any customizations that AMD plans on making in their next range of GPUs, or whether they are identical to the PS4's customizations, that doesn't somehow make them any more or less valid. Sony made customizations to core components of the GPU to fit with what they want to do, and Microsoft made customizations to core components of the GPU to fit with what they want to do.

The move engines look specifically designed to make the XB1 GPU more efficient at handling tiled resources. In fact, people have talked about tier 2 PRT, but regardless of whether the XB1 has tier 1 or tier 2 support for PRT, Microsoft still appear to have made hardware customizations that clearly expand on base PRT support.
 

KidBeta

Junior Member
It would've been harder to cost-reduce (especially bad with Sony holding a $100 price advantage), harder to manufacture, more expensive up front, and would've led to many fewer units available at launch. And the benefits of low-latency ESRAM that can be used as a true data source for the GPU will prove to be a bigger win than EDRAM once devs get better at choosing which data to put in there and how to properly manage it.

The eSRAM latency might be low, but that is most likely going to be dwarfed by the time it takes for the GPU to traverse its cache hierarchy. Even if the eSRAM were incredibly low latency, the practical time to access the memory wouldn't be that low once you factor in the cache traversal.
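To put rough numbers on that point, here's a toy additive model; every cycle count in it is an illustrative guess, not a measured GCN figure:

```python
# Toy latency model: total observed latency = cache traversal + memory.
# All cycle counts are illustrative guesses, not measured GCN numbers.
l1_miss = 30    # hypothetical cost of checking and missing the GPU L1
l2_miss = 120   # hypothetical cost of checking and missing the GPU L2
esram   = 20    # hypothetical raw eSRAM access latency
ddr3    = 150   # hypothetical raw DDR3 access latency

total_esram = l1_miss + l2_miss + esram   # 170 cycles end to end
total_ddr3  = l1_miss + l2_miss + ddr3    # 300 cycles end to end

# The raw latencies differ ~7.5x, but with the fixed cache-traversal
# cost on top, the end-to-end figures differ only ~1.8x.
print(total_esram, total_ddr3, round(total_ddr3 / total_esram, 2))
```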
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
I've heard from a secret source that if you pay for Xbox Platinum, it unlocks the extra dGPU and the other 4GB to give you 12GB.
 
(...)
And they believe that this continued experimentation and discovery of how best to tackle its use will more than future-proof the system for its entire lifespan. All I've managed to gather tells me that the bandwidth benefits are a very limited view of what the ESRAM means to the system as a whole, as important as that part of it might be.

This sounds recycled from the PS3 Cell era.
 

ICPEE

Member
Loving the tech talk. Thanks guys.

Was wondering (and I'm hoping someone can clear this up for me) if the CPU can assist in some GPGPU tasks. And if so, can GPGPU functions then be used for an even prettier world, or am I getting this all wrong?

*LOVE GAMING*
 
This sounds recycled from the PS3 Cell era.

So the Cell's unique attributes proved useless to the PS3?

And keep in mind that nobody is saying that these things somehow make the XB1 just as powerful as the PS4, or somehow more powerful. It isn't even being suggested that it significantly closes some performance gap. All that's being said is that it's a possibility that Microsoft aren't as clueless as some think and specifically made sure the XB1 was pretty well architected to take advantage of things that they expect to be a major part of game engines and content creation pipelines going forward.

The XB1 looks purpose-built around tiled resources, and tiled resources also bring strong benefits to shadows without the cost usually associated with such improvements. The presentation from Build regarding tiled resources specifically talks about how they allow ultra-high-density shadow buffers, and Microsoft showcased a video demo of it in action.

Page 38 of the following presentation.

http://www.google.com/url?sa=t&rct=...=xp5uIWa73_tER6TCFk5pkw&bvm=bv.52434380,d.dmg

The full presentation video - skip to 22:34, where the demo of how tiled resources improve shadow mapping is shown.

http://channel9.msdn.com/Events/Build/2013/4-063

If the XB1 really is specially built for tiled resources, then this is also something that will improve as a result of some of the custom work MS did. Keep in mind I'm not referring only to the XB1 supporting PRT. I'm specifically talking about the tiled resources feature as exposed in Microsoft's API, and how the XB1 seems almost entirely customized with it in mind. 32MB of ESRAM looks a lot bigger when you realize how much data tiled resources allow you to utilize using a tiny amount of physical memory space.
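As a rough sketch of that last point: with partially resident textures, only the tiles you actually sample need physical memory. The 64 KiB tile size below is the standard D3D tiled-resources granularity; the 16K texture and the 5% residency figure are purely illustrative guesses.

```python
# Back-of-the-envelope PRT math (texture size and residency fraction
# are illustrative guesses; 64 KiB is the standard D3D tile size).
TILE = 64 * 1024                  # bytes per hardware tile
MIB = 2**20

tex_w = tex_h = 16384             # a 16K x 16K virtual texture
bytes_per_texel = 0.5             # BC1 compression: 0.5 bytes per texel
virtual_bytes = tex_w * tex_h * bytes_per_texel

resident_fraction = 0.05          # guess: ~5% of tiles visible per frame
resident_bytes = virtual_bytes * resident_fraction

print(virtual_bytes / MIB)        # 128 MiB of virtual texture data
print(virtual_bytes / TILE)       # 2048 tiles backing it
print(resident_bytes / MIB)       # ~6.4 MiB actually needing memory
```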

Page 49 of the presentation, where they give an overview of the Granite runtime (one of the middlewares that will support tiled resources), stresses both residency analysis and decompression. One of the XB1-specific DirectX enhancements, according to Microsoft's dev conference presentation, is ESRAM residency control, and one of the 4 move engines has its own dedicated decompression hardware. All 4 move engines are also capable of tiling and untiling, another important factor in tiled resources.
 

ekim

Member
Jesus Christ....

If this was directed at me: read again. It's my interpretation of what Albert wants to explain to us.
I hope you don't believe that "magically optimized super-drivers", move engines, the audio block, "power of the cloud" and "special sauce" will make up for the missing RAM, missing CUs, missing ROPs, etc. etc.

Not completely make up for it, but at least help. These things are there for a reason.
 

Durante

Member
fixe'd that for you :p.

I don't believe ACEs handle graphics jobs.
Correct! Thanks.

So the Cell's unique attributes proved useless to the PS3?
Not at all.

However, I think a fundamental difference is that coding to the Cell's unique attributes was required to unlock a theoretical compute capability significantly higher than that of its competitor. Conversely, not even the most efficient XB1 program will get more than 100% performance out of the architecture, and that 100% is simply less than the PS4's 100% (or 80%, for that matter).
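In numbers (the TFLOPS figures are the ones cited throughout this thread; the utilization percentages are just the 100%/80% from the paragraph above):

```python
# Effective throughput = peak TFLOPS x achieved utilization.
xb1_peak, ps4_peak = 1.31, 1.84   # commonly cited GPU peaks, in TFLOPS

print(xb1_peak * 1.00)   # 1.31 TFLOPS: a *perfect* XB1 program
print(ps4_peak * 0.80)   # ~1.47 TFLOPS: the PS4 at only 80% utilization
```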
 
Their spin - their interpretation that minimises disadvantages and maximises advantages. I wonder if we'll get any graphs this time around. A nice eSRAM latency graph perhaps? Some added-up bandwidths?

More spin and fuzzy math from Mr. Penello. He and Larry Hryb have been in spin overdrive mode since that Edge Online piece on the 13th.
 

Tycho_b

Member
So the Cell's unique attributes proved useless to the PS3?

Not useless, but pretty much the only way to make up for a mediocre, weaker and outdated GPU (architecturally, Xenos was a generation ahead of RSX)

Does it sound familiar???

I am not saying that the special hardware inside the Xbox One does nothing, but it's only there to make up for the (relative) lack of juice on the GPU/CPU side. It brings unnecessary complication and the need to spend much more time making the code run as expected.

Even if it brings some interesting possibilities, this architecture is less preferred by the ones affected: devs. One of the X360's strengths was its unified memory pool and flexible, very 'new' GPU. I am struggling to understand why that wasn't the No. 1 priority when designing the new machine.
 

Nozem

Member
Nobody believes that, but this idea that things such as the move engines, the dedicated decompression hardware on one of the move engines, that very powerful audio block, and improved drivers and/or development tools will somehow be completely irrelevant and in no way helpful to the overall performance capabilities of the XB1 is every bit as ridiculous.

But these points can be made equally for the PS4 hardware, drivers and development tools. And it's all guesswork now anyway, so yeah, I'd say it's pretty irrelevant when comparing the systems right now.
 
Correct! Thanks.

Not at all.

However, I think a fundamental difference is that coding to the Cell's unique attributes was required to unlock a theoretical compute capability significantly higher than that of its competitor. Conversely, not even the most efficient XB1 program will get more than 100% performance out of the architecture, and that 100% is simply less than the PS4's 100% (or 80%, for that matter).

I say this all the time, but it's irrelevant how it matches up with whatever the PS4 is doing. All that matters is how it helps the XB1 produce even better games, and if it accomplishes just that much, then it's worth it. Utilizing the XB1's ESRAM effectively is also required on the XB1. It's required to get the most out of the architecture that Microsoft designed. All that's left after that is a belief that developers will either prove incapable of this, or they will rise to the occasion. Even if third parties don't make good use of it, there's a good chance that specific first parties will. For example, there should be little doubt that what 343i does will be amazing. And Turn 10 will no doubt keep getting better and better. Rare will hopefully work on something besides Kinect Sports-style games, and if that wave race section of Kinect Sports Rivals is any indication, I'm eager to see what they do. I'm especially looking forward to how Remedy uses the system as well.

Basically, all we can do is look forward to the games. I've said time and time again that the Xbox One need not concern itself with what the PS4 is doing, Microsoft just needs to do whatever is necessary to help devs get the most out of the architecture they built, and Microsoft has a pretty good reputation in that regard.

But these points can be made equally for the PS4 hardware, drivers and development tools. And it's all guesswork now anyway, so yeah, I'd say it's pretty irrelevant when comparing the systems right now.

I have to stress this again. In no way am I suggesting that these things won't be true of the PS4, too. I'm simply pointing out that Microsoft has done things with their hardware that aren't being properly appreciated, largely because it's known they aren't packing as much raw power as the PS4, and I think that's a bit shortsighted. If Microsoft had a 2-teraflop GPU, everybody would defer more to some of their customizations, but because they instead have a 1.3 TFLOP GPU that is weaker than Sony's, the customizations they've put the man-hours into making are seemingly dismissed as useless because they don't magically close the ROPs/TFLOPS gap.
 

KidBeta

Junior Member
Page 49 of the presentation, where they give an overview of the Granite runtime (one of the middlewares that will support tiled resources), stresses both residency analysis and decompression. One of the XB1-specific DirectX enhancements, according to Microsoft's dev conference presentation, is ESRAM residency control, and one of the 4 move engines has its own dedicated decompression hardware. All 4 move engines are also capable of tiling and untiling, another important factor in tiled resources.

Tiling is nothing new in graphics cards; it's been around for decades, if not longer. Microsoft likes to mention standard features, though, as if developers might not understand they are there otherwise.
 
But these points can be made equally for the PS4 hardware, drivers and development tools. And it's all guesswork now anyway, so yeah, I'd say it's pretty irrelevant when comparing the systems right now.

This is what I don't understand. How are Penello and his tech guy going to compare the XB1 to the PS4 if they don't know the optimisations and the CPU and GPU clock speeds of the PS4? Is the little presentation they are going to put on for us just to ease the minds of future XB1 owners, or just plain old marketing spin?
 

stryke

Member
I say this all the time, but it's irrelevant how it matches up with whatever the PS4 is doing.

A well-balanced, optimised and efficient weak machine is still a weak machine.

"Irrelevant"? I'm sorry, did you forget what thread you're posting in? It's a tech vs. thread, and obviously one is weaker than the other. It is very much relevant to discuss the implications of the performance gap and what that means for developers as well as consumers.
 

Bundy

Banned
Nobody believes that, but this idea that things such as the move engines, the dedicated decompression hardware on one of the move engines, that very powerful audio block, and improved drivers and/or development tools will somehow be completely irrelevant and in no way helpful to the overall performance capabilities of the XB1 is every bit as ridiculous. Any view that doesn't take into consideration some of the custom work done to the system and the specific capabilities or usefulness of that custom hardware, and how it might help with features such as tiled resources, or even just with offloading work from the CPU and/or GPU, is a view that is automatically invalid. Perhaps it can be said that people are overestimating them, and that's certainly debatable, but to pretend as if they're completely useless? Come on. Forget cloud; the XB1 has some customizations in there that are being ignored. At a very basic level they are major extensions of pre-existing GPU and CPU functions. So whether or not it fits with any customizations that AMD plans on making in their next range of GPUs, or whether they are identical to the PS4's customizations, that doesn't somehow make them any more or less valid. Sony made customizations to core components of the GPU to fit with what they want to do, and Microsoft made customizations to core components of the GPU to fit with what they want to do.

The move engines look specifically designed to make the XB1 GPU more efficient at handling tiled resources. In fact, people have talked about tier 2 PRT, but regardless of whether the XB1 has tier 1 or tier 2 support for PRT, Microsoft still appear to have made hardware customizations that clearly expand on base PRT support.

SenjutsuSage, that all sounds nice, etc.
But it doesn't change the fact that the PS4 is clearly more powerful on the gaming side.
MS is telling us that isn't the case.
Even if you call the audio block "very powerful".... it doesn't change anything.
Albert can talk about "optimizing, optimizing, optimizing" all he wants.
It won't "make up for the raw power difference."

You and Albert are trying to tell us how "optimized" the XBone is and how un-optimized the PS4 is. So the "raw power" difference is meaningless, because the PS4 won't reach all that power, but the XBone will.
That's bullcrap! The PS4 has been optimized back and forth (to have no bottleneck, as good as it gets). Cerny, Guerrilla, Evolution, etc. already explained that far too often.

So the audio block and Microsoft's hallucinated ESRAM bandwidth (200+) won't change the fact that the XBone is the weaker console.

I can't wait to see what Albert and his tech team are "cooking" at the moment.
Their upcoming tech-comparison will be...... let's say.... "interesting".

If this was directed at me: read again. It's my interpretation of what Albert wants to explain to us.
No, not to you.
I'm just shocked that they are really trying to do this.

Not completely make up for it, but at least help. These things are there for a reason.
Of course! And yes, it will "help". But that's all!
Just like some developers told EDGE (regarding the CPU/GPU clock boost) --> "It's not significant! It does not change things that much. Of course, something is better than nothing.”
 
This is what I don't understand. How are Penello and his tech guy going to compare the XB1 to the PS4 if they don't know the optimisations and the CPU and GPU clock speeds of the PS4? Is the little presentation they are going to put on for us just to ease the minds of future XB1 owners, or just plain old marketing spin?

A lot of what Microsoft PR has done is basically preach to the choir, and I imagine this tech talk won't be too different. I mean, right now they're doing exactly the same thing Sony used to do: using emotive codenames for their pieces of hardware (SHAPE, Move Engines, etc.) to make the hardware sound more impressive than it probably is. There's probably some nice information to be gained from the presentation, but it's doubtful that they can actually make any real comparisons with the PS4.

I mean, right now do any of us really know what the secondary processor is in the PS4? Is it a single core ARM processor? Quad core? Or is it something else? If we know close to nothing, how would Microsoft know?
 

JaggedSac

Member
So the Cell's unique attributes proved useless to the PS3?

And keep in mind that nobody is saying that these things somehow make the XB1 just as powerful as the PS4, or somehow more powerful. It isn't even being suggested that it significantly closes some performance gap. All that's being said is that it's a possibility that Microsoft aren't as clueless as some think and specifically made sure the XB1 was pretty well architected to take advantage of things that they expect to be a major part of game engines and content creation pipelines going forward.

The XB1 looks purpose-built around tiled resources, and tiled resources also bring strong benefits to shadows without the cost usually associated with such improvements. The presentation from Build regarding tiled resources specifically talks about how they allow ultra-high-density shadow buffers, and Microsoft showcased a video demo of it in action.

Page 38 of the following presentation.

http://www.google.com/url?sa=t&rct=...=xp5uIWa73_tER6TCFk5pkw&bvm=bv.52434380,d.dmg

The full presentation video - skip to 22:34, where the demo of how tiled resources improve shadow mapping is shown.

http://channel9.msdn.com/Events/Build/2013/4-063

If the XB1 really is specially built for tiled resources, then this is also something that will improve as a result of some of the custom work MS did. Keep in mind I'm not referring only to the XB1 supporting PRT. I'm specifically talking about the tiled resources feature as exposed in Microsoft's API, and how the XB1 seems almost entirely customized with it in mind. 32MB of ESRAM looks a lot bigger when you realize how much data tiled resources allow you to utilize using a tiny amount of physical memory space.

Page 49 of the presentation, where they give an overview of the Granite runtime (one of the middlewares that will support tiled resources), stresses both residency analysis and decompression. One of the XB1-specific DirectX enhancements, according to Microsoft's dev conference presentation, is ESRAM residency control, and one of the 4 move engines has its own dedicated decompression hardware. All 4 move engines are also capable of tiling and untiling, another important factor in tiled resources.

I haven't been following much, but has it been said whether the Bone is Tier 2 in regards to PRT?
 

twobear

sputum-flecked apoplexy
Not useless, but pretty much the only way to make up for a mediocre, weaker and outdated GPU (architecturally, Xenos was a generation ahead of RSX).
I've seen a lot of people claim this recently, but to the best of my knowledge the RSX was not in any real sense significantly weaker than Xenos and was in some ways better. PS3 was hamstrung by Cell and the split memory pool, not the RSX.

Someone with better knowledge (e.g. Durante) might be able to clarify though.
 
This is what I don't understand. How are Penello and his tech guy going to compare the XB1 to the PS4 if they don't know the optimisations and the CPU and GPU clock speeds of the PS4? Is the little presentation they are going to put on for us just to ease the minds of future XB1 owners, or just plain old marketing spin?

Maybe they talk to developers who are working on both and listen to their responses, and also get a glimpse of side-by-side comparisons? Then you have somebody like John Carmack, who knows all about coding, suggesting they are very alike as well.

We will see soon enough when these games do finally come out. They will show the differences, and then we can go from there. I predict the same thing will occur: people will automatically retreat to exclusive games if those multiplat titles don't show real, tangible differences. I am talking about real differences, not a couple of frames per second or a few blades of grass more than the other. I'm talking about games on the PS4 running at 60 frames per second while the Xbox One version runs at 30. If that doesn't happen, they will compare exclusives, because there is no real way to make the argument 100% factual, since every developer has their own style and every game has various things going on that put demands on the hardware.

The problem I foresee, which is still happening on current consoles, is that we play the numbers game instead of simply enjoying the games. I have no doubt the PS4 has better hardware in some areas, but how much of that will really matter in the end? This is not like the Wii in comparison to the Xbox 360 and PS3, which means they will again be awfully close, with the PC once again being the platform to go to for those who truly care about getting the best performance and experience.
 

Durante

Member
I've seen a lot of people claim this recently, but to the best of my knowledge the RSX was not in any real sense significantly weaker than Xenos and was in some ways better. PS3 was hamstrung by Cell and the split memory pool, not the RSX.

Someone with better knowledge (e.g. Durante) might be able to clarify though.
In terms of absolute total throughput, it wasn't really weaker. However, Xenos was the first mainstream GPU to offer unified shader processing (something that is now common on even the lowliest integrated GPU), which gave it a significant effective utilization advantage. How big that advantage is depends on what a game is doing.
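A toy model of where that utilization advantage comes from; the unit counts and workload numbers below are illustrative, not the real RSX/Xenos configurations:

```python
# Split design: each workload can only use its own partition of units.
def split_util(vertex_work, pixel_work, v_units, p_units):
    done = min(vertex_work, v_units) + min(pixel_work, p_units)
    return done / (v_units + p_units)

# Unified design: every unit can take whatever work exists.
def unified_util(vertex_work, pixel_work, units):
    return min(vertex_work + pixel_work, units) / units

# A pixel-heavy frame: 4 "units" of vertex work, 60 of pixel work.
print(split_util(4, 60, 16, 32))   # 0.75 -- twelve vertex units sit idle
print(unified_util(4, 60, 48))     # 1.0  -- every unit stays busy
```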
 

twobear

sputum-flecked apoplexy
In terms of absolute total throughput, it wasn't really weaker. However, Xenos was the first mainstream GPU to offer unified shader processing (something that is now common on even the lowliest integrated GPU), which gave it a significant effective utilization advantage. How big that advantage is depends on what a game is doing.
Could clever optimization maximise usage of the RSX or is it the kind of thing that developers don't have control over?
 

Nozem

Member
I have to stress this again. In no way am I suggesting that these things won't be true about the PS4, too. I'm simply pointing out that Microsoft has done things with their hardware that aren't being properly appreciated largely because it's known they aren't packing as much raw power as the PS4, and I think that's a bit shortsighted. If Microsoft had a 2 teraflop GPU, everybody would defer more to some of their customizations, but because they instead have a 1.3TFLOP GPU that is weaker than Sony's, the customizations they've put in the man hours to make are seemingly overlooked as useless because they don't magically close the ROPs, TFLOPS gap.

Nobody is denying that the adjustments made by Microsoft are beneficial for the performance of the Xbox One. But you can't use them as an argument for why the gap in power between the X1 and the PS4 won't be as big as the raw specs suggest, because Sony will have made similar adjustments.

Say these customizations give the X1 a 20% (or pick a number) increase in effective performance. That increase will be roughly the same for the PS4 with its own customizations, so the gap is still as wide as it ever was.
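The arithmetic is trivial but worth spelling out (20% is just the example number from above):

```python
# A boost applied to both machines leaves the ratio untouched.
xb1, ps4, boost = 1.31, 1.84, 1.20   # TFLOPS peaks; 20% example boost

print(round(ps4 / xb1, 2))                      # ~1.40 before
print(round((ps4 * boost) / (xb1 * boost), 2))  # ~1.40 after, unchanged
```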

So yes, customizations are a good thing for the X1, but no, it won't make the gap any smaller.
 

Durante

Member
Could clever optimization maximise usage of the RSX or is it the kind of thing that developers don't have control over?
It could to some extent, but it would require a lot of micromanagement of your rendering and you'd probably still never get utilization as good as with a unified architecture.
 
More spin and fuzzy math from Mr. Penello. He and Larry Hryb have been in spin overdrive mode since that Edge Online piece on the 13th.
I thought they were busy touring malls and other outlets likely to have more "not so well informed" people, where their message can come across more easily?
 
Nobody is denying that the adjustments made by Microsoft are beneficial for the performance of the Xbox One. But you can't use them as an argument for why the gap in power between the X1 and the PS4 won't be as big as the raw specs suggest, because Sony will have made similar adjustments.

Say these customizations give the X1 a 20% (or pick a number) increase in effective performance. That increase will be roughly the same for the PS4 with its own customizations, so the gap is still as wide as it ever was.

So yes, customizations are a good thing for the X1, but no, it won't make the gap any smaller.

I'm seeing this argument a lot online. It seems to be a repackaged 'special sauce' argument. You see the same with the '16 chips! Does Sony have those?' argument.

As you mentioned, this ignores any custom stuff the PS4 has in order to push the Xbox special sauce argument.
 

mrklaw

MrArseFace
I have to stress this again. In no way am I suggesting that these things won't be true of the PS4, too. I'm simply pointing out that Microsoft has done things with their hardware that aren't being properly appreciated, largely because it's known they aren't packing as much raw power as the PS4, and I think that's a bit shortsighted. If Microsoft had a 2-teraflop GPU, everybody would defer more to some of their customizations, but because they instead have a 1.3 TFLOP GPU that is weaker than Sony's, the customizations they've put the man-hours into making are seemingly dismissed as useless because they don't magically close the ROPs/TFLOPS gap.

I think we've tried to dig down into both systems and see what optimisations there have been. We have had exhaustive threads discussing the move engines and other elements of the architecture, and still haven't come up with anything other than tweaks to the standard GCN architecture.

Everything points to a more complex and difficult-to-leverage architecture on the Xbox One. If there are to be any optimisations, they'll need to come from the driver level IMO, making the use of the ESRAM more automatic, etc. Even the move engines - supposed to help mitigate bandwidth - still use up the main bandwidth, so if you don't schedule your data moves carefully you'll get bad results. The big question for Xbox (for me at least) is how they can exploit that ESRAM and what that means. Whether it takes developers time to learn how to use it, or whether MS provides tools to do that, I'm just curious about the practical benefits.
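A crude budget illustrating that move-engine point; 68 GB/s is the commonly cited Xbox One DDR3 peak, while the copy-traffic figure is purely an illustrative guess:

```python
# Copies staged into eSRAM by the move engines still come out of the
# same shared DDR3 budget the CPU and GPU read from.
ddr3_peak = 68.0      # GB/s, commonly cited Xbox One DDR3 peak
copy_traffic = 15.0   # GB/s spent on staging copies (illustrative guess)

print(ddr3_peak - copy_traffic)   # 53.0 GB/s left for everything else
```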

As for optimisation of the GPU - again arguably the PS4 has an advantage here. It should be more straightforward to saturate the PS4 GPU through the fine grained compute and therefore hit a high efficiency percentage.

I'm looking forward to what MS share on the tech details - I think they make for interesting threads on GAF. But right now it still looks like the PS4 is:
1) more powerful at a raw specs level
2) simpler for developers to tap that performance due to straightforward architecture
3) better positioned for growth and efficiency, with fine-grained compute getting the most out of the GPU
 

onQ123

Member
I say this all the time, but it's irrelevant how it matches up with whatever the PS4 is doing. All that matters is how it helps the XB1 produce even better games, and if it accomplishes just that much, then it's worth it. Utilizing the XB1's ESRAM effectively is also required on the XB1. It's required to get the most out of the architecture that Microsoft designed. All that's left after that is a belief that developers will either prove incapable of this, or they will rise to the occasion. Even if third parties don't make good use of it, there's a good chance that specific first parties will. For example, there should be little doubt that what 343i does will be amazing. And Turn 10 will no doubt keep getting better and better. Rare will hopefully work on something besides Kinect Sports-style games, and if that wave race section of Kinect Sports Rivals is any indication, I'm eager to see what they do. I'm especially looking forward to how Remedy uses the system as well.

Basically, all we can do is look forward to the games. I've said time and time again that the Xbox One need not concern itself with what the PS4 is doing, Microsoft just needs to do whatever is necessary to help devs get the most out of the architecture they built, and Microsoft has a pretty good reputation in that regard.



I have to stress this again. In no way am I suggesting that these things won't be true of the PS4, too. I'm simply pointing out that Microsoft has done things with their hardware that aren't being properly appreciated, largely because it's known they aren't packing as much raw power as the PS4, and I think that's a bit shortsighted. If Microsoft had a 2-teraflop GPU, everybody would defer more to some of their customizations, but because they instead have a 1.3 TFLOP GPU that is weaker than Sony's, the customizations they've put the man-hours into making are seemingly dismissed as useless because they don't magically close the ROPs/TFLOPS gap.


The fact that the Xbox One GPU is 1.31 TFLOPS and the PS4 GPU is 1.84 TFLOPS, but devs are saying that the PS4 GPU is 50% faster, tells me that maybe the PS4 also has better customization to its chip.
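For what it's worth, the FLOPS numbers alone don't get you to 50%:

```python
# Raw ALU ratio from the commonly cited peaks; anything beyond ~40%
# has to come from something other than shader FLOPS.
print(round(1.84 / 1.31, 2))   # ~1.40, i.e. ~40% faster on paper
```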
 

RoboPlato

I'd be in the dick
The fact that the Xbox One GPU is 1.31 TFLOPS and the PS4 GPU is 1.84 TFLOPS, but devs are saying that the PS4 GPU is 50% faster, tells me that maybe the PS4 also has better customization to its chip.

The 50% number seems to be overall performance, not just GPU. It's probably also factoring in RAM benefits, API ease of use, and OS reserves.
 

EagleEyes

Member
The fact that the Xbox One GPU is 1.31 TFLOPS and the PS4 GPU is 1.84 TFLOPS, but devs are saying that the PS4 GPU is 50% faster, tells me that maybe the PS4 also has better customization to its chip.
No, it sounds to me like the PS4 is or was 50% faster due to crappy drivers on Microsoft's part. I really think Microsoft has been scrambling this entire year, and they are late with everything at this point.
 

mrklaw

MrArseFace
No, it sounds to me like the PS4 is or was 50% faster due to crappy drivers on Microsoft's part. I really think Microsoft has been scrambling this entire year, and they are late with everything at this point.

no, the hardware is literally 50% faster all over.

The only way there could be parity would be if the MS drivers and tools could get, e.g., 90% efficiency out of the Xbox One while the Sony tools only got 50% efficiency.

But again, comments and rumours point to Sony's tools actually being more mature than MS'.

There are *zero* data points that give MS a lead in anything at the moment. Except for the number of HDMI inputs.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
There are *zero* data points that give MS a lead in anything at the moment.

I would still like to see any evidence or argument supporting the notion that Microsoft's architecture is more efficient than the raw numbers suggest. I have the impression that it is all just based on the assumption that "it just cannot be" that Microsoft let this performance gap happen.
 

Bundy

Banned
No, it sounds to me like the PS4 is or was 50% faster due to crappy drivers on Microsoft's part. I really think Microsoft has been scrambling this entire year, and they are late with everything at this point.
Don't just blame the drivers. They're not the only reason.
That's too easy ;)
 

onQ123

Member
The 50% number seems to be overall performance, not just GPU. It's probably also factoring in RAM benefits, API ease of use, and OS reserves.

No, it sounds to me like the PS4 is or was 50% faster due to crappy drivers on Microsoft's part. I really think Microsoft has been scrambling this entire year, and they are late with everything at this point.

The PS4's drivers/API/OS could be just as bad compared to what they're going to be once they are updated.
 

Crisco

Banned
Best-case scenario, the ESRAM is utilized as a 32MB framebuffer to prevent the GPU from being bottlenecked by the main pool of relatively low-bandwidth DDR3. That's it and that's all. It's not going to magically boost shader performance or anything like that. The worrying part is that it looks like, at least initially, developers are forced to manage its utilization manually rather than it being treated like a dedicated cache for the GPU, either at the hardware or API level. MS needs to sort that shit out.
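Some straightforward arithmetic on how fast 32MB goes at 1080p; the four-target G-buffer layout is just a typical deferred-rendering example, not any particular game's setup:

```python
# Render-target sizes at 1920x1080 with 32-bit formats.
W, H, MIB = 1920, 1080, 2**20

rgba8 = W * H * 4 / MIB   # one RGBA8 colour target: ~7.9 MiB
depth = W * H * 4 / MIB   # 32-bit depth/stencil: ~7.9 MiB

print(round(rgba8 + depth, 1))       # simple forward setup: ~15.8 MiB
print(round(4 * rgba8 + depth, 1))   # deferred G-buffer: ~39.6 MiB, over 32
```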
 