
EuroGamer: More details on the BALANCE of XB1

Yoday

Member
I have to say that it was a good article for what it was intended to do: confuse the masses. There is enough tech talk to confuse most people, and enough "balance" talk to convince them that there will be little difference, or even an advantage for MS. I expect to see articles on sites like IGN reporting on this, written by editors who don't know what a GPU is. They will declare both consoles identical in power because MS's machine is highly balanced to take full advantage of every single tiny aspect of the hardware, while the PS4's extra power will be written off as redundancy that holds back third-party developers.
 

Durante

Member
Please don't ever bring this stupid argument into a tech thread. Having poor eyesight or a crappy setup doesn't negate a significant objective difference.
Yeah, that's ridiculous. The difference between 900p and 1080p is obvious, just like the difference between 1080p and 1440p. It gets harder when you have perfect AA (like 8xSSAA), but none of these games will have that.
 
They're also a crap way to compare. Is the 360 better because Bayonetta is much better on there? Is the PS3 better because FFXIII is better on there?

A single game, or even a handful of games, isn't enough to establish a pattern; at most they reflect the experience a single developer had with the systems. But the combination of all multiplatform games released within the same time frame does give a fair estimate of how the systems compare during that time frame. (That's an important distinction.)

Bayonetta is also a good example of how a single game can be used to draw such comparisons, provided you have some knowledge of the differences. The game is heavy on transparency effects, something that is pretty straightforward on 360, but on PS3 you'd probably have to come up with a way to use the Cell to handle that in order to get comparable results... which is not impossible (just look at the ZOE HD re-release, which post-patch performs a lot better than its 360 counterpart), but it does take budget away from something else (be it research, development or even computing resources on the Cell).

To be clear, that's what I think the multiplatform games tell me when comparing the HD twins: they are pretty much on equal footing, despite some well-known advantages on either machine, but getting the same performance on PS3 isn't always as simple as it is on 360, especially if you haven't planned for the PS3 from the start.
 
Yeah, that's ridiculous. The difference between 900p and 1080p is obvious, just like the difference between 1080p and 1440p. It gets harder when you have perfect AA (like 8xSSAA), but none of these games will have that.

You'll never get 8xSSAA at 900p anyway; if you've got those kinds of resources to spare, you might as well run at 1080p.

As you say, ridiculous argument.
 
On the other hand the highest char poly count in Ryse is almost 4 times the one in KZ and there are more characters on screen as well.

Oh, you mean this here?

http://i.imgur.com/SFZ83X2.jpg

This is only about the player character Marius. Other models have a significantly lower poly count and less details as can be seen in screenshots. And can you please post a source about the number of on-screen characters? And the poly count for specific scenes?

And Ryse lighting is 100% dynamic even their GI, which is pre baked on KZ...

Source?

I could go on and on, but my point is: you are constraining your comparison to a single spec, while there are lots of aspects where the goals and technologies of these two games differ, which means your performance comparison is flawed because you are assuming everything they are doing is the same.

Oh I doubt that.
 

benny_a

extra source of jiggaflops
Yeah, that's ridiculous. The difference between 900p and 1080p is obvious, just like the difference between 1080p and 1440p. It gets harder when you have perfect AA (like 8xSSAA), but none of these games will have that.
You can say that you don't value resolution as much as other things that make up the graphics of a game. (For example: Geometry, effects, lighting.)

Saying that's impossible to tell at such a distance doesn't gel with what little I know of optics and what the eye can resolve.

Also you can go from 1080p to 900p while keeping your graphical effects and framerate and even adding some bells-and-whistles. You can't do the opposite.
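To put rough numbers on that trade-off, here's a back-of-the-envelope sketch (raw pixel counts only; it ignores everything else that goes into a frame):

```python
# Pixel counts for the two render resolutions being argued about.
resolutions = {"900p": (1600, 900), "1080p": (1920, 1080)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["900p"], pixels["1080p"])    # 1,440,000 vs 2,073,600
print(pixels["1080p"] / pixels["900p"])   # ~1.44, i.e. ~44% more pixels to shade at 1080p
```

That ~44% of per-pixel work is roughly the budget you free up by dropping to 900p, which is where the extra bells-and-whistles come from.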
 

Durante

Member
Microsoft's approach to asynchronous GPU compute is somewhat different to Sony's - something we'll track back on at a later date.
I'm curious what this part means. We more or less know Sony's approach -- having 8 ACEs to, well, feed the chip with up to 8 compute workloads asynchronously. I wonder what (if anything?) MS proposes to counter that.

You can say that you don't value resolution as much as other things that make up the graphics of a game.
Maybe (if you are a foolish fool). But that's not what I was arguing against. I was arguing against the supposed impossibility of telling a difference between the resolutions.
 

Sky78

Banned
MS may have inadvertently confirmed that they're a lot more fill limited than most of us had assumed which is very troubling news for anyone planning to buy an Xbone.

I'm planning on buying an X1 and I don't feel remotely troubled; even the first wave of games look pretty good graphically for me.
 

artist

Banned
I'm curious what this part means. We more or less know Sony's approach -- having 8 ACEs to, well, feed the chip with up to 8 compute workloads asynchronously. I wonder what (if anything?) MS proposes to counter that.
It's obvious they have fewer compute resources (by disabling two CUs), and having fewer ACEs means granularity isn't as good. I'm guessing they'll point to the increase in CPU clock for that.
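Rough numbers on that compute gap, using the widely reported specs (18 CUs at 800MHz for the PS4, 12 active CUs at 853MHz for the XB1; treat the figures as assumptions rather than official confirmations):

```python
def gpu_tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    """Peak single-precision throughput: CUs x 64 shader lanes x 2 FLOPs (FMA) per clock."""
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000

print(gpu_tflops(12, 0.853))   # XB1: ~1.31 TFLOPS
print(gpu_tflops(18, 0.800))   # PS4: ~1.84 TFLOPS, roughly 40% more peak compute
```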
 

benny_a

extra source of jiggaflops
Maybe (if you are a foolish fool). But that's not what I was arguing against. I was arguing against the supposed impossibility of telling a difference between the resolutions.
I agree about the impossibility, but I don't agree that one is a foolish fool to value other graphical stuff that aren't resolution in a game.

I think Skyward Sword at 480p looks better than Minecraft at 1080p.
 

Bundy

Banned
It's obvious they have fewer compute resources (by disabling two CUs), and having fewer ACEs means granularity isn't as good. I'm guessing they'll point to the increase in CPU clock for that.
Which will be ridiculous if they do that.
 
Is there any way to block posts from senjutsusage? This guy just posts a lot of nonsense through his Microsoft goggles and it bothers me.
 

benny_a

extra source of jiggaflops
Is there any way to block posts from senjutsusage? This guy just posts a lot of nonsense through his Microsoft goggles and it bothers me.
Click on the name, go to the profile and click "Add xxx to ignore list." That way his posts aren't displayed by default, except when they are quoted.

The guy that sparked off this 1080p vs 900p thing was comparing the same game on his 42" TV 6 ft away.
And if he had said he didn't think it made much of a difference, everything would have been fine. But he said it's impossible, which is dumb.
 

vpance

Member
You can say that you don't value resolution as much as other things that make up the graphics of a game. (For example: Geometry, effects, lighting.)

Saying that's impossible to tell at such a distance doesn't gel with what little I know of optics and what the eye can resolve.

Also you can go from 1080p to 900p while keeping your graphical effects and framerate and even adding some bells-and-whistles. You can't do the opposite.

The guy that sparked off this 1080p vs 900p thing was comparing the same game on his 42" TV 6 ft away.
 

Sky78

Banned
Please don't ever bring this stupid argument into a tech thread. Having poor eyesight or a crappy setup doesn't negate a significant objective difference.

... It's just basic common sense. The differences between resolutions and how noticeable they are depend on the size of the screen and the distance from that screen. The human eye can only resolve so much detail.
 

Durante

Member
I agree about the impossibility, but I don't agree that one is a foolish fool to value other graphical stuff that aren't resolution in a game.

I think Skyward Sword at 480p looks better than Minecraft at 1080p.
I had hoped that my choice of words would make clear that I was mostly joking with that part. It's a subjective question. Personally, I feel like graphical artifacts (aliasing, flickering, bad filtering, tearing, you name it) totally ruin graphics, and I'd much rather have less bling and solid IQ than the opposite.
 
If anyone wants to actually see the tech behind Killzone, and why it is pretty amazing for a launch title, just go here:

http://www.guerrilla-games.com/presentations/Valient_Killzone_Shadow_Fall_Demo_Postmortem.pdf

Pretty sure the tech behind Killzone Shadow Fall is extraordinary work. There's far more going on than just high poly counts and "dynamic" lighting: volumetric area lighting per light source, with pure HDR.

Is there a tech sheet like this for an X1 game yet?

One cool thing from the presentation was that all particles were being done on the CPU. But they state they plan to actually use compute in the future, which means the end product could make use of the GPGPU functionality of the PS4. Killzone and Resogun using GPU compute at launch? Impressive.
 
MS engineers referenced Vgleaks articles about 14+4 PS4 CUs that was later debunked by Cerny in an interview.




hilarious. lol

I will try to find the Cerny interview where he was asked about this.

Cerny didn't debunk the idea that the system is balanced for 14 CUs doing graphics and 4 doing GPGPU; he debunked the notion that those 4 couldn't be used for graphics too.

What he said did give some credibility to the notion that going beyond 14 CUs for graphics won't give you linear performance scaling.
 

gofreak

GAF's Bob Woodward
It's obvious they have fewer compute resources (by disabling two CUs), and having fewer ACEs means granularity isn't as good. I'm guessing they'll point to the increase in CPU clock for that.

More likely to discuss eSRAM/latency here methinks.
 

Feindflug

Member
I'm curious what this part means. We more or less know Sony's approach -- having 8 ACEs to, well, feed the chip with up to 8 compute workloads asynchronously. I wonder what (if anything?) MS proposes to counter that.

Maybe (if you are a foolish fool). But that's not what I was arguing against. I was arguing against the supposed impossibility of telling a difference between the resolutions.

WTF is this crap?
 

LiquidMetal14

hide your water-based mammals
Nah, you can never compare two exclusives, only multiplats from the same dev. Even if both were thrown together quickly, the best hardware that's the easiest to program for will show itself just fine.

It should, but the differences will become more apparent as devs get a grasp on the HW. I do agree that the launch games should already give you some indication, but it might not be as pronounced as it will become.
 

benny_a

extra source of jiggaflops
I had hoped that my choice of words would make clear that I was mostly joking with that part. It's a subjective question. Personally, I feel like graphical artifacts (aliasing, flickering, bad filtering, tearing, you name it) totally ruin graphics, and I'd much rather have less bling and solid IQ than the opposite.
If I identified primarily as a PC gamer I would hope console developers would focus the fuck out of anything that wasn't IQ and then port the game to PC. :p

This 60FPS stuff on consoles is not good for people with beefy rigs. ;-)

WTF is this crap?
He clarified that he was joking.
 

Durante

Member
WTF is this crap?
A reference to a personal preference, formulated in a jocular manner so as to lighten the conversation. Should I put a few smileys around it next time?

Anyway, to put the whole resolution thing to rest:
http://cdn.avsforum.com/4/47/47015717_1080vs4kvs8kvsmore.png
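For what it's worth, here's a rough sketch of the kind of visual-acuity calculation those viewing-distance charts are built on (assuming the commonly cited ~1 arcminute resolving limit for 20/20 vision; the exact threshold varies per person):

```python
import math

def max_useful_distance_m(diagonal_in, horiz_px, aspect=16/9, acuity_arcmin=1.0):
    """Distance beyond which a single pixel subtends less than the acuity limit,
    i.e. extra resolution stops being resolvable for a typical eye."""
    width_in = diagonal_in * math.cos(math.atan(1 / aspect))  # horizontal screen size
    pixel_in = width_in / horiz_px                            # pixel pitch
    theta = math.radians(acuity_arcmin / 60)                  # acuity limit in radians
    return (pixel_in / math.tan(theta)) * 0.0254              # inches -> metres

for label, px in [("720p", 1280), ("900p", 1600), ("1080p", 1920)]:
    print(label, round(max_useful_distance_m(42, px), 2), "m")
# On a 42" set: roughly 2.5 m for 720p, 2.0 m for 900p, 1.7 m for 1080p,
# which is exactly why seating distance dominates these charts.
```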
 

KKRT00

Member
Gemüsepizza said:
Oh, you mean this here?

http://i.imgur.com/SFZ83X2.jpg

This is only about the player character Marius. Other models have a significantly lower poly count and less details as can be seen in screenshots. And can you please post a source about the number of on-screen characters? And the poly count for specific scenes?
Source?

Oh I doubt that.
Sure, the barbarians don't have 150k polys, but they probably don't have fewer than 70k. The same goes for KZ:SF characters: some will have 45k, others 32k; it depends on the model's requirements.

This screen alone shows more than 50 characters, and that's without counting the bodies lying on the ground:
http://i2.minus.com/izO6ZrEEeItQY.png

And looking just at resolution in a tech comparison is about as valid as comparing GoW Collection II, which runs at 1080p, to GoW 3, which runs at 720p.

--
And yes, the GI and the whole lighting pipeline are dynamic; it's CryEngine 3. Even Crysis 2 and 3 on current-gen consoles had real-time GI applied in some areas.
 
I thought this stuff was pretty significant information, even though people are acting like they literally say nothing at all in this article except balance this or balance that. They say a whole hell of a lot more than that. They explain how the ESRAM is able to read and write simultaneously.

"There are four 8MB lanes, but it's not a contiguous 8MB chunk of memory within each of those lanes. Each lane, that 8MB is broken down into eight modules. This should address whether you can really have read and write bandwidth in memory simultaneously," says Baker

Each 8MB block is seriously 8 separate modules for a total of 32 memory modules for the full 32MB of ESRAM? We sure as hell didn't know that before.

"Yes you can - there are actually a lot more individual blocks that comprise the whole ESRAM so you can talk to those in parallel. Of course if you're hitting the same area over and over and over again, you don't get to spread out your bandwidth and so that's one of the reasons why in real testing you get 140-150GB/s rather than the peak 204GB/s... it's not just four chunks of 8MB memory. It's a lot more complicated than that and depending on how the pattern you get to use those simultaneously. That's what lets you do read and writes simultaneously. You do get to add the read and write bandwidth as well adding the read and write bandwidth on to the main memory. That's just one of the misconceptions we wanted to clean up."

And I think it's quite cool to have confirmation that they've measured ESRAM achieving a full 140-150GB/s in real running games. Unless they're not being honest, they seem quite clear on this.

"That's real code running. That's not some diagnostic or some simulation case or something like that. That is real code that is running at that bandwidth. You can add that to the external memory and say that that probably achieves in similar conditions 50-55GB/s and add those two together you're getting in the order of 200GB/s across the main memory and internally."

So 140MB-150MB (obviously means GB/s) is a realistic target and DDR3 bandwidth can really be added on top?

"Yes. That's been measured."

They even finally explain the 204GB/s vs 218GB/s discrepancy.

That equivalent on ESRAM would be 218GB/s. However just like main memory, it's rare to be able to achieve that over long periods of time so typically an external memory interface you run at 70-80 per cent efficiency.

"The same discussion with ESRAM as well - the 204GB/s number that was presented at Hot Chips is taking known limitations of the logic around the ESRAM into account. You can't sustain writes for absolutely every single cycle. The writes is known to insert a bubble [a dead cycle] occasionally... one out of every eight cycles is a bubble so that's how you get the combined 204GB/s as the raw peak that we can really achieve over the ESRAM. And then if you say what can you achieve out of an application - we've measured about 140-150GB/s for ESRAM.

Essentially, this is a crazy amount of information that is far from just all PR bull. There are real architectural details about the layout of the system and real measured numbers in running games, not random simulations. If after all this people can still claim that Microsoft literally answered nothing at all, then people simply weren't interested in answers in the first place, and obviously that's not entirely a surprise, but I'm glad they finally did this.

We especially now have confirmation that the Xbox One GPU is not old GCN, and is instead based on the newer Sea Islands tech, like the PS4's appears to be. Unless they're lying out of their asses, which is a view some seem to be taking, I think there is a lot of good information in this article.
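For anyone who wants to sanity-check those numbers, here's the arithmetic as I understand it. This is only a sketch: it assumes the reported 853MHz clock and a 128-byte read path plus a 128-byte write path to the ESRAM, which is my reading of the interview rather than a confirmed layout:

```python
clock_hz = 853e6           # reported GPU / ESRAM clock
bytes_per_dir = 128        # assumed 1024-bit path per direction (my assumption)

read_peak  = clock_hz * bytes_per_dir / 1e9   # ~109 GB/s one way (the original minimum figure)
write_peak = read_peak * 7 / 8                # one write "bubble" every 8 cycles, per Baker
combined   = read_peak + write_peak           # ~204 GB/s, the Hot Chips peak number
efficiency = 150 / combined                   # the measured 140-150 GB/s is ~70-73% of that peak
total      = 150 + 55                         # plus ~50-55 GB/s of DDR3 gives the ~200 GB/s claim

print(round(read_peak), round(combined), round(efficiency, 2), total)
```

The 218GB/s figure is simply 2 x 109 with no bubble accounted for, which is why 204GB/s is the more honest peak.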
 

gofreak

GAF's Bob Woodward
Cerny didn't debunk the idea that the system is balanced for 14 CUs doing graphics and 4 doing GPGPU; he debunked the notion that those 4 couldn't be used for graphics too.

What he said did give some credibility to the notion that going beyond 14 CUs for graphics won't give you linear performance scaling.

The scaling of performance won't conveniently change beyond x number of CUs, and how performance scales will be dependent on the game.

In the absence of a bottleneck elsewhere, those extra CUs will happily, linearly, improve performance on a given workload. There are of course bounds elsewhere, but one key bound is significantly looser on PS4 than XB1 (pixel fill), and I'm pretty sure games will be happier having 18 CUs feeding 32 ROPs than 12 or 14, whether they have significant compute workloads or not.
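To illustrate how much looser that pixel-fill bound is, a quick sketch using the widely reported ROP counts and clocks (again, treat the numbers as assumptions):

```python
def peak_fill_gpix_s(rops, clock_ghz):
    """Peak pixel fill rate, assuming one pixel written per ROP per clock."""
    return rops * clock_ghz

print(peak_fill_gpix_s(16, 0.853))   # XB1: ~13.6 Gpixels/s
print(peak_fill_gpix_s(32, 0.800))   # PS4: ~25.6 Gpixels/s, nearly double
# 1080p60 only needs ~0.12 Gpixels/s of final output, so the ceiling mostly
# matters for overdraw, transparencies and multiple render targets.
```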
 

Durante

Member
And looking just at resolution in a tech comparison is about as valid as comparing GoW Collection II, which runs at 1080p, to GoW 3, which runs at 720p.

Yeah, comparing the resolution between different games is silly. You really need the same game if you want to make any conclusions about hardware, and even then you have to contend with the possibility of one of the implementations being more efficient or highly tuned.
 

Feindflug

Member
A reference to a personal preference, formulated in a jocular manner so as to lighten the conversation. Should I put a few smileys around it next time?

Anyway, regarding the whole resolution thing:
http://cdn.avsforum.com/4/47/47015717_1080vs4kvs8kvsmore.png

Yeah, I'm sure some smileys would help the next time you try to "lighten up" the mood by calling people names and dismissing opinions that differ from yours.
 
Setting aside for a minute that we know the CUs can be used for whatever a dev wants... could someone elaborate on what exactly "balanced for 14 CUs" is actually even supposed to mean?

That beyond 14CUs the 32 ROPs and 72 TMUs become the bottleneck? Or the RAM bandwidth? That the extra 4 CUs are just there to sit idle for a few years until people start off-loading CPU work to them?
 

GameSeeker

Member
This article confirms several things we knew:
1) The PS4 unified memory approach is easier to program for than the Xbone
2) Most of the fancy audio HW in Xbone is reserved for Kinect processing
3) The 218GB/s number Albert Penello came up with was totally bogus. The 204GB/s number is unachievable in actual code. The best real code will ever do is 140-150GB/s, and that's only if the memory access pattern of reads/writes/banks is perfect. Otherwise, you get 109GB/s.
4) The Move Engines are Microsoft just renaming DMA engines which have been around for decades.
5) Only the Xbone has an HDMI input, which helps make it the "center of your home entertainment system"

Net: The PS4 is a significantly more powerful piece of gaming hardware (Xbone wins at TV input though). Xbone fans can't argue "secret sauce" anymore. The more facts we get from Microsoft, the more the "secret sauce" argument of ESRAM/low latency/audio blocks/balanced performance/etc., becomes moot.

I also believe the PS4 has the better, more powerful solution for GPGPU. 1st party titles will take advantage of this power in the 2015-2016 timeframe (as Mark Cerny has stated) and the gap with the Xbone will become wider over time, not narrower.
 

Durante

Member
Yeah I'm sure some smileys would help next time that you'll try to "lighten up" the mood by calling names and dismissing opinions that differ from yours.
Let me get this straight:
1) You thought I was referring to a group of people as "foolish fools" in all seriousness.
2) You were offended by that.

Just for the record: if those are true, I do apologize profusely.
 

Duke2k

Neo Member
A reference to a personal preference, formulated in a jocular manner so as to lighten the conversation. Should I put a few smileys around it next time?

Anyway, to put the whole resolution thing to rest:
http://cdn.avsforum.com/4/47/47015717_1080vs4kvs8kvsmore.png

This chart seems to have been prepared by 4K marketing folks.
 
I thought this stuff was pretty significant information, even though people are acting like they literally say nothing at all in this article except balance this or balance that. They say a whole hell of a lot more than that. They explain how the ESRAM is able to read and write simultaneously.



Each 8MB block is seriously 8 separate modules for a total of 32 memory modules for the full 32MB of ESRAM? We sure as hell didn't know that before.



And I think it's quite cool to have confirmation that they've measured ESRAM achieving a full 140-150GB/s in real running games. Unless they're not being honest, they seem quite clear on this.



They even finally explain the 204GB/s vs 218GB/s discrepancy.



Essentially, this is a crazy amount of information that is far from just all PR bull. There are real architectural details about the layout of the system and real measured numbers in running games, not random simulations. If after all this people can still claim that Microsoft literally answered nothing at all, then people simply weren't interested in answers in the first place, and obviously that's not entirely a surprise, but I'm glad they finally did this.

We especially now have confirmation that the Xbox One GPU is not old GCN, and is instead based on the newer Sea Islands tech, like the PS4's appears to be. Unless they're lying out of their asses, which is a view some seem to be taking, I think there is a lot of good information in this article.

Good post, but there's no point. The same posters that flood every X1 thread have already selectively picked this article apart. I still come to GAF for news, but certainly not to partake in any meaningful next-gen discussion. At least not until things cool down post-launch.
 

Durante

Member
This chart seems to have been prepared by 4K marketing folks.
Unlike other charts of this type that I have seen, it does cite its sources, which are published in an SMPTE journal. Since that's not my field I have no idea how well regarded that is, but I'd still take it over unsourced material.
 
Yeah, comparing the resolution between different games is silly. You really need the same game if you want to make any conclusions about hardware, and even then you have to contend with the possibility of one of the implementations being more efficient or highly tuned.

Well I don't think it's unreasonable to compare two games made by some of the most tech talented devs in the industry.

Multiplatform games would be perfect if devs didn't shortchange one of the systems, but we will have to wait and see how the differences express themselves in the real world. Personally, I think 2nd-gen games will be the best early indicator; things don't get that much better beyond 2nd-gen games, as seen with Killzone 2 on PS3 and Gears of War on Xbox 360.

E3 next year is going to be a bloodbath for the forum, and I can't wait to see how Halo 5 will compare against whatever Sony has for the PS4. Timing sounds right for Naughty Dog to announce a new game, since by fall of 2014 it will have been 3 years since Uncharted 3 shipped. Naughty Dog and 343i are two studios you just expect to make the hardware shine, and it will be the best way to see how far apart these systems are.
 