
EuroGamer: More details on the BALANCE of XB1

Feindflug

Member
Let me get this straight:
1) You thought I was referring to a group of people as "foolish fools" in all seriousness.
2) You were offended by that.

Just for the record.

Oh please, there were two of us who didn't understand the "lighten up the mood" part of your post, so it seems you had a problem conveying your joking/lighthearted intent in the first place.

Just for the record.
 

benny_a

extra source of jiggaflops
I don't understand why Microsoft went with the ESRAM route and not the basic PC-style route like Sony
Basic PC style would be split memory and dedicated CPU and GPU. The PS4 is a SoC with CPU and GPU combined with one big memory pool that is addressable by both CPU and GPU.

That is not common in PCs.

Unlike other charts of the type I have seen, it does cite its sources. Which are published in a SMPTE journal -- since that's not my field I have no idea how well regarded that is, but I'd still take it over unsourced material.
Someone with some chops in math could just use regular human visual acuity of 1 arc minute (20/20) and the PPI of a device and then make a chart.
Maybe nothing as pretty as what they did, but for the common scenarios it would be interesting to see. Like people did back when Steve Jobs made the (IMO correct) claim that the iPhone 4 is a Retina display at 12 inches.
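If anyone wants to run those numbers, here's a minimal sketch of that chart math (my own, not from the article): it assumes the standard 20/20 criterion of one arc minute per pixel and computes the distance beyond which individual pixels blend together. The example displays are just illustrative.

```python
import math

ARC_MINUTE_RAD = math.radians(1 / 60)   # 20/20 acuity: ~1 arc minute per pixel

def ppi(diagonal_in, width_px, height_px):
    """Pixels per inch for a display of the given diagonal and resolution."""
    return math.hypot(width_px, height_px) / diagonal_in

def resolvable_distance_in(ppi_value):
    """Viewing distance (inches) beyond which a 20/20 eye can no longer
    resolve single pixels: pixel pitch == distance * tan(1 arc minute)."""
    return (1.0 / ppi_value) / math.tan(ARC_MINUTE_RAD)

for name, diag, w, h in [("iPhone 4 (3.5in, 960x640)", 3.5, 960, 640),
                         ("47in 1080p TV", 47, 1920, 1080),
                         ("47in 720p TV", 47, 1280, 720)]:
    d = resolvable_distance_in(ppi(diag, w, h))
    print(f"{name}: pixels blend together beyond ~{d:.0f} in ({d / 12:.1f} ft)")
```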
 

Durante

Member
Well I don't think it's unreasonable to compare two games made by some of the most tech talented devs in the industry.
In theory, this may be the case, but in practice we simply don't have enough information about what two completely different games are doing in detail to derive useful objective conclusions about the hardware.

Perhaps a real dumb question but I'll throw it out there:

Is it possible that MS releases xboxone consoles 3-4 years out with better specs to close performance gap?
Only if XB1 fails utterly. And then not to bridge the performance gap, but to start next gen early.
 
Sure, barbarians do not have 150k polys, but they probably don't have less than 70k. The same goes for KZ:SF characters: some will have 45k, others 32k; it depends on the model requirements.

This screen alone shows more than 50 characters, and that's without counting the bodies lying on the ground:
http://i2.minus.com/izO6ZrEEeItQY.png

And looking just at resolution in a tech comparison is as valid as comparing the GoW Collection II running at 1080p to GoW 3 running at 720p.

--
And yes, GI and the whole lighting setup are dynamic; it's CryEngine 3, and even Crysis 2 and 3 on current-gen consoles had real-time GI applied in some areas.

Any evidence for this?
 

Duke2k

Neo Member
Unlike other charts of the type I have seen, it does cite its sources. Which are published in a SMPTE journal -- since that's not my field I have no idea how well regarded that is, but I'd still take it over unsourced material.

There are many older charts for 1080p, like this one: http://carltonbale.com/1080p-does-matter/, based on 20/20 vision and 1/60th of a degree (one arc minute) of resolution. According to them, 50" is fine at 10 ft with 1080p.

And suddenly, with the 4K era coming, the same diagonal / distance needs 16K resolution.
 
They're not saying that. That would be nonsense.

And saying the CPU upclock often helps avoid sudden drops due to CPU activity isn't the same as saying the game is typically CPU-bound. Maybe, for a microsecond, the CPU might become the bound if it's doing something suddenly intensive. Their comments elsewhere indicate that the bound typically resides elsewhere in software - in the GPU, somewhere other than the CUs.

That is exactly what they are saying. They use the actual phrase "CPU-bound".

"Interestingly, the biggest source of your frame-rate drops actually comes from the CPU, not the GPU," Goosen reveals. "Adding the margin on the CPU... we actually had titles that were losing frames largely because they were CPU-bound in terms of their core threads. "

When they were talking about the GPU, they didn't say it was bound at all. What they said was that they got a bigger improvement from increasing the clock speed than from simply adding more CUs. That implies that if they did add more CUs, another part of the GPU would limit the theoretical improvement, but it doesn't imply that the GPU is the overall limiting part of the system. What they said was that the GPU is balanced, so that if you increased one part (CUs) without increasing other parts (memory access speed, for example), it would become out of balance. That is how they justified the GPU clock increase being better than extra CUs.

"Right. By fixing the clock, not only do we increase our ALU performance, we also increase our vertex rate, we increase our pixel rate and ironically increase our ESRAM bandwidth," continues Goosen.

Their whole point is that the CPU is the weak link and that they put a considerable amount of effort into strengthening that part.

This in part explains why several of the custom hardware blocks - the Data Move Engines - are geared towards freeing up CPU time. Profiling revealed that this was a genuine issue, which has been balanced with a combination of the clock speed boost and fixed function silicon - the additional processors built in to the Xbox One processor.

"We've got a lot of CPU offload going on. We've got the SHAPE, the more efficient command processor relative to the standard design, we've got the clock boost - it's in large part actually to ensure that we've got the headroom for the frame-rates,"


So yes, I say once again that Microsoft says it is the CPU to look at. I know that PlanetSide 2 was CPU-bound, but that was because it couldn't take advantage of multiple cores. I've also heard that RTS games can be CPU-bound. What I want to know is: has any dev stated that the CPU was the limiting factor in their console game?
 
Good post, but there's no point. The same posters that flood every X1 thread have already selectively shredded this article. I still come to GAF for news, but certainly not to partake in any meaningful next-gen discussion. At least not until things cool down post-launch.

Oh, trust me, I know that already. Some people were never actually interested in hearing anything about the system, hence why so much of what is actually said in this article is more or less being ignored altogether, or is being twisted to mean something entirely different from what was actually said. Fact is, for people that were actually interested in getting these details out of genuine curiosity about the system, and weren't just looking for something they could troll and make fun of, there is a lot of interesting and detailed information to reference in this article. Some of the more negative reaction this article has been getting is proof that there was never any information or clarification that Microsoft could have had the engineers or architects who actually worked on this system provide that would satisfy those people.

Never mind the fact that many of the more strongly held beliefs or claims constantly made by posters about the Xbox One architecture were, in some cases, completely shut down as false. None of that matters, apparently. What would one of those things be? For a good long while I've been saying that a comparison between the 7770 and the Xbox One GPU made very little sense, and that it made much more sense to compare the Xbox One GPU to Bonaire, or the 7790 -- something I was mocked often for saying -- and as it now turns out, the Xbox One GPU is based on the Sea Islands architecture and, as such, is a lot more similar to Bonaire than many thought.

And that's just a tiny fraction of the information shared in this article. Again, for people that were looking forward to this, there are lots of cool bits of info, and some things were finally cleared up. I look forward to seeing what some tech sites say about the information shared. I would especially be interested in seeing what Anandtech and some other sites have to say about the revelation that the ESRAM layout is actually 32 individual modules.

Any evidence for this?

Not sure, a lot of info Crytek put out seems to lean towards that being the case, but I can't remember where I saw all that information on the game.
 

Feindflug

Member
Well I don't think it's unreasonable to compare two games made by some of the most tech talented devs in the industry.

Multiplatform games would be perfect if devs didn't shortchange one of the systems, but we will have to wait and see how the differences express themselves in the real world. Personally, I think 2nd-gen games will be the best early indicator. Things don't get that much better beyond 2nd-gen games, as seen with Killzone 2 on PS3 and Gears of War on Xbox 360.

E3 next year is going to be a bloodbath for the forum, and I can't wait to see how Halo 5 will compare against whatever Sony has for the PS4. Timing sounds right for Naughty Dog to announce a new game, since by fall of 2014 it will have been 3 years since Uncharted 3 shipped. Naughty Dog and 343i are two studios you just expect to make the hardware shine, and it will be the best way to see how far apart these systems are.

Gears 1 looks like crap compared to Judgment, so I wouldn't say that it doesn't get much better... In the past, second-wave games were hardly an indication of what to expect from a console graphics-wise. Sure, Killzone 2 is still one of the best-looking games on the PS3, but it's the exception, not the rule.
 

Dragon

Banned
Good post, but there's no point. The same posters that flood every X1 thread have already selectively shredded this article. I still come to GAF for news, but certainly not to partake in any meaningful next-gen discussion. At least not until things cool down post-launch.

In order to engage in meaningful discussion you have to have something meaningful to say. All I see from you is a big persecution complex and a bunch of complaining. Seriously, you're the problem: someone who cannot get around his own biases to look at the facts objectively.
 
If anyone wants to actually see the tech behind Killzone, and why it is pretty amazing for a launch title, just go here:

http://www.guerrilla-games.com/presentations/Valient_Killzone_Shadow_Fall_Demo_Postmortem.pdf

Pretty sure the tech behind Killzone Shadow Fall is extraordinary work. There's far more going on than just high poly counts and "dynamic" lighting: volumetric area lighting per light source with pure HDR.

Is there a tech sheet like this for an X1 game yet?

One cool thing from the presentation was that all particles were being done on the CPU. But they state they plan to actually use compute in the future, which means the end product could actually use the GPGPU functionality of the PS4. Killzone and Resogun using GPU compute at launch? Impressive.

It's interesting that the demo only uses 874 shaders, which is less than 14 CUs. (Page 62). I wonder what the other 4 CUs were doing, maybe compute?
 

ekim

Member
A reference to a personal preference, formulated in a jocular manner so as to lighten the conversation. Should I put a few smileys around it next time?

Anyway, to put the whole resolution thing to rest:
[image: chart comparing 1080p vs 4K vs 8K and beyond by screen size and viewing distance]

I'm having a hard time believing this chart. I sit 10 feet away from a 47" TV and can barely tell the difference between 720p and 1080p. And I have good eyes.
 

KidBeta

Junior Member
And that's just a tiny fraction of the information shared in this article. Again, for people that were looking forward to this, there are lots of cool bits of info, and some things were finally cleared up. I look forward to seeing what some tech sites say about the information shared. I would especially be interested in seeing what Anandtech and some other sites have to say about the revelation that the ESRAM layout is actually 32 individual modules.

It isn't that much of a revelation TBH; we've known for a while that it was 4 sets of 8, and now we know that each 8 has 8 1MB blocks in it. This doesn't tell us anything useful (like the latency).
 
In theory, this may be the case, but in practice we simply don't have enough information about what two completely different games are doing in detail to derive useful objective conclusions about the hardware.

I don't think we need a tech breakdown to see which game looks better. You can see it in the texturing, the lighting, the shadows, the resolution, the framerate, the amount of detail. The on-screen result is what matters.
 

benny_a

extra source of jiggaflops
I'm having a hard time believing this chart. I sit 10 feet away from a 47" TV and can barely tell the difference between 720p and 1080p. And I have good eyes.
I'm sorry that I'm the one to tell you this but I have bad news... :-D
 

Durante

Member
Someone with some chops in math could just use regular human visual acuity of 1 arc minute (20/20) and the PPI of a device and then make a chart.
Maybe nothing as pretty as what they did, but for the common scenarios it would be interesting to see. Like people did back when Steve Jobs made the (IMO correct) claim that the iPhone 4 is a Retina display at 12 inches.
Because of the characteristics of rendered digital images and aliasing I'd say that Vernier acuity is actually more applicable for such a chart (at ~0.13 arc minutes). Particularly if you are concerned with game resolution.
 
Gears 1 looks like crap compared to Judgment, so I wouldn't say that it doesn't get much better... In the past, second-wave games were hardly an indication of what to expect from a console graphics-wise. Sure, Killzone 2 is still one of the best-looking games on the PS3, but it's the exception, not the rule.

Halo 4, I feel, is the best looking game on the Xbox 360, although I do have some other personal favorites. I happen to think Blue Dragon is quite beautiful for such an early title, as well as a number of other games.

It isn't that much of a revelation TBH; we've known for a while that it was 4 sets of 8, and now we know that each 8 has 8 1MB blocks in it. This doesn't tell us anything useful (like the latency).

They didn't tell us anything useful? I don't agree. One of the most controversial questions regarding the ESRAM, how it could possibly read and write simultaneously, was answered quite clearly by the architects of the system, so I don't know if I would say there was nothing useful. That's the opposite of not useful. That's big info, and I can't wait to see what Anandtech and other sites say about it. We also have confirmation of real measured results of what's being achieved with the ESRAM in real games, no longer just anonymous rumors. We have a far better understanding of how the ESRAM and main memory operate in parallel with one another, and we have a quite clear defense, with actual examples, of why the architects feel the ESRAM is a superior, less limiting design than the Xbox 360's eDRAM. We also have a real explanation of the discrepancy between 218GB/s and 204GB/s. Neither number was bogus; we just didn't have the clarity we do now on why we weren't understanding what was happening.

The latency numbers would have been cool, but considering very few companies ever fully detail their latencies for everything they do, I guess we can't be surprised they didn't tell us that. It's usually hell trying to get this kind of info out of Nvidia and AMD, and the times I've seen it from Intel have been very rare circumstances where it seems the writer of the article had to pull teeth to get an answer.

I'll try to find out from my friend and see if I can repeat what the latency is here.
 

tipoo

Banned
The One not benefiting from more CUs doesn't mean the PS4 would not. The PS4 also has more ROPs and TMUs, the lack of which may be what limited the One's performance gains from extra CUs, making a clock speed increase better for that chip while more CUs were better for the PS4.
 

gofreak

GAF's Bob Woodward
That is exactly what they are saying. They use the actual phrase "CPU-bound".

In some titles, which is perfectly believable.

But a GPU upclock would be pointless (wrt improving framerate) if they were generally CPU-bound. And they talk about a performance advantage in 'a lot of titles' from the GPU upclock. Kind of meaningless if that frame-time advantage were being negated by a CPU bound.

Not to mention, we can tangibly see a GPU bound in some games. Ryse didn't dial back resolution because it was CPU bound...

So yes, I say once again that Microsoft says it is the CPU to look at. I know that PlanetSide 2 was CPU bound but that was because it couldn't take advantage of multiple cores. I've also heard that RTS games can be CPU bound. What I want to know is has any dev stated that CPUs were the limiting factor in their console game?

That obviously varies from game to game and the target res. But I'd stand by the read of their commentary around the GPU upclock. A performance improvement within the context of the GPU frametime alone would be pointless in those games if they were CPU bound, and render the upclock pointless in the context of those games. The simpler explanation is that in the games referred to there, there was a non-CU GPU bound. Of course that doesn't mean that in other games there isn't a CPU bound.
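gofreak's point is easy to see with a toy frame-time model (my own sketch, assuming CPU and GPU work fully overlap and the frame is gated by the slower of the two; real engines have sync points that muddy this):

```python
def frame_time_ms(cpu_ms, gpu_ms):
    """Simplest possible model: CPU and GPU work overlap,
    so the frame is gated by whichever side takes longer."""
    return max(cpu_ms, gpu_ms)

def gpu_upclock(gpu_ms, old_mhz=800, new_mhz=853):
    """An upclock scales GPU time down proportionally (idealised)."""
    return gpu_ms * old_mhz / new_mhz

# GPU-bound title: the upclock shows up directly in the frame time.
print(round(frame_time_ms(20.0, 33.3), 1), "->",
      round(frame_time_ms(20.0, gpu_upclock(33.3)), 1))   # 33.3 -> 31.2 ms

# CPU-bound title: the same upclock changes nothing visible.
print(round(frame_time_ms(33.3, 25.0), 1), "->",
      round(frame_time_ms(33.3, gpu_upclock(25.0)), 1))   # 33.3 -> 33.3 ms
```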
 

Respawn

Banned
I read the article many times, but I still have no idea how they got the 204GB/s number.

Why not 218GB/s? Heck, why not another random number like 206GB/s?
It's the maximum theoretical output, not something that will be reached in real-world performance. It's the R/W performance going both ways, I'm assuming.

It is an interesting read, even with the spin.
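For what it's worth, the 204GB/s figure does fall out of the article's numbers if you take 109GB/s in each direction (128 bytes per cycle at 853MHz) and assume the extra write can only be slotted in on about seven of every eight cycles. A back-of-the-envelope check (my reading, not an official formula):

```python
# Back-of-the-envelope check of the ESRAM figures: 128 bytes per cycle in
# each direction at 853MHz, with the extra write slotted in on roughly
# 7 of every 8 cycles (the "bubble" the article describes).
clock_hz = 853e6
one_way = clock_hz * 128 / 1e9            # ~109 GB/s each direction

paper_peak = 2 * one_way                  # ~218 GB/s (read + write every cycle)
quoted_peak = one_way + one_way * 7 / 8   # ~204 GB/s (write bubbles 1 in 8)

print(f"{one_way:.1f} / {paper_peak:.1f} / {quoted_peak:.1f} GB/s")
```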
 
We got the configuration of the GPU, the reasoning for the upclock, the ESRAM, clarity on its functionality, etc.

Unfortunately, the article could've been a lot better if it had been someone else asking tougher questions.

Sure. I think I'm just over the war part of the spec discussion at this point.
 

JaggedSac

Member
"Everybody knows from the internet that going to 14 CUs should have given us almost 17 per cent more performance," he says, "but in terms of actual measured games - what actually, ultimately counts - is that it was a better engineering decision to raise the clock. There are various bottlenecks you have in the pipeline that can cause you not to get the performance you want if your design is out of balance."

Shots fired at you guys.

"Exemplar ironically doesn't need much ALU. It's much more about the latency you have in terms of memory fetch, so this is kind of a natural evolution for us," he says.

Interesting, I had never heard the Kinect skeletal computation system called Exemplar before. Also, latency. That should send GAF into a tizzy again.


Cool read.
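The arithmetic behind that "almost 17 per cent" line, for anyone who wants it (assuming the 800MHz to 853MHz upclock and the usual 128 ops per GCN CU per clock; the point being that the clock bump lifts ROPs, geometry and ESRAM along with the ALUs, while extra CUs only lift ALU):

```python
OPS_PER_CU_PER_CLOCK = 128      # 64 lanes x 2 FLOPs (FMA) per GCN CU

def tflops(cus, clock_hz):
    return cus * OPS_PER_CU_PER_CLOCK * clock_hz / 1e12

print(f"12 CUs @ 853MHz: {tflops(12, 853e6):.2f} TFLOPS "
      f"(+{(853 / 800 - 1) * 100:.1f}% to ALU, ROPs, geometry and ESRAM alike)")
print(f"14 CUs @ 800MHz: {tflops(14, 800e6):.2f} TFLOPS "
      f"(+{(14 / 12 - 1) * 100:.1f}% to ALU only)")
```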
 
It's the maximum theoretical output, not something that will be reached in real-world performance. It's the R/W performance going both ways, I'm assuming.

It is an interesting read, even with the spin.

Theoretical peaks are just that.

It is what you would get if there were never a miss of any kind; if there is a miss, there is a penalty. And well, there are always misses, which is why he states it is more around 140-150GB/s in real gaming terms, because there is a miss every 8th cycle or so.

I still call BS on that number. Maybe in in-house tech demos using extremely controlled environments will there be a miss only once every 8th cycle, but in a real-world gaming environment I think that number will be higher. The more misses, and the higher the penalty, the lower your bandwidth becomes. The GDDR5 number is 176GB/s peak, but that goes down pretty quickly the more misses you have. Which is why hUMA was important: it allows the GPU and the CPU to "see" the thread schedulers and queued jobs and know which one to pull next, so the number of "misses" is lowered significantly, and it also takes some load off the CPU. The fact that the GPU sits side by side with the CPU also means the usual high miss penalty of GDDR5 is much lower than you would find on a standard PC where everything is separated.

If I had to take a wild guess, I would say that on a normal PC with 176GB/s of memory bandwidth between the GPU and GDDR5, your real-world bandwidth would be more like 70-90GB/s, maybe lower, but on the PS4 it is likely more around 110-125GB/s, due to the lower cycle-miss penalty, the drop in actual misses, and so on. Of course, developers can write benchmarks, if none are provided, to run tests on their product and see what the actual bandwidth performance is.
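Nobody outside a devkit can verify those guesses, but the shape of the argument is easy to model: take a one-way rate and ask how often a second transfer (or a useful cycle at all) actually happens. A purely hypothetical sketch with made-up utilisation figures, just to show how 204GB/s on paper can turn into the 140-150GB/s range quoted in the article:

```python
def esram_bw(one_way_gbps=109.0, co_issue_fraction=0.875):
    """Reads every cycle, writes co-issued on some fraction of cycles.
    0.875 reproduces the ~204GB/s paper peak; the 140-150GB/s the article
    mentions corresponds to a much lower fraction."""
    return one_way_gbps * (1 + co_issue_fraction)

def gddr5_bw(peak_gbps=176.0, utilisation=0.7):
    """Shared bus: one peak figure derated by whatever fraction of cycles
    actually carries useful data (the utilisation values are guesses)."""
    return peak_gbps * utilisation

for frac in (0.875, 0.5, 0.3):
    print(f"ESRAM, co-issue {frac:.0%}: ~{esram_bw(co_issue_fraction=frac):.0f} GB/s")
for util in (0.9, 0.7, 0.5):
    print(f"GDDR5, utilisation {util:.0%}: ~{gddr5_bw(utilisation=util):.0f} GB/s")
```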
 

2MF

Member
Unlike other charts of the type I have seen, it does cite its sources. Which are published in a SMPTE journal -- since that's not my field I have no idea how well regarded that is, but I'd still take it over unsourced material.

Scientifically I have nothing against it as I haven't read the sources, but empirically it does seem very exaggerated, wouldn't you agree?
 

Pain

Banned
Oh, trust me, I know that already. Some people were never actually interested in hearing anything about the system, hence why so much of what is actually said in this article is more or less being ignored altogether, or is being twisted to mean something entirely different from what was actually said. Fact is, for people that were actually interested in getting these details out of genuine curiosity about the system, and weren't just looking for something they could troll and make fun of, there is a lot of interesting and detailed information to reference in this article. Some of the more negative reaction this article has been getting is proof that there was never any information or clarification that Microsoft could have had the engineers or architects who actually worked on this system provide that would satisfy those people.

Never mind the fact that many of the more strongly held beliefs or claims constantly made by posters about the Xbox One architecture were, in some cases, completely shut down as false. None of that matters, apparently. What would one of those things be? For a good long while I've been saying that a comparison between the 7770 and the Xbox One GPU made very little sense, and that it made much more sense to compare the Xbox One GPU to Bonaire, or the 7790 -- something I was mocked often for saying -- and as it now turns out, the Xbox One GPU is based on the Sea Islands architecture and, as such, is a lot more similar to Bonaire than many thought.

And that's just a tiny fraction of the information shared in this article. Again, for people that were looking forward to this, there are lots of cool bits of info, and some things were finally cleared up. I look forward to seeing what some tech sites say about the information shared. I would especially be interested in seeing what Anandtech and some other sites have to say about the revelation that the ESRAM layout is actually 32 individual modules.
This is nothing more than technical PR. They are talking about their strengths while flat-out ignoring those of the competition. It doesn't really change anything. It is great to get some specific details though. I'm sure some people will really appreciate that.
 

Feindflug

Member
Halo 4, I feel, is the best looking game on the Xbox 360, although I do have some other personal favorites. I happen to think Blue Dragon is quite beautiful for such an early title, as well as a number of other games.

Gears Judgment is a better example, I think, because it didn't compromise anything to look much better, whereas in Halo 4 there were compromises (reduced scale, worse particles, fewer enemies on screen, no motion blur, etc.) to get the game to look that good.

Blue Dragon is a great-looking game, I agree, but we haven't seen a game built on an evolution of that engine to see what further improvements were possible on the same hardware.
 

gofreak

GAF's Bob Woodward
It's interesting that the demo only uses 874 shaders, which is less than 14 CUs. (Page 62). I wonder what the other 4 CUs were doing, maybe compute?

I think that's a reference to the number of distinct vertex shaders in use in the scene, not the number of jobs in flight on the GPU at one time. There'd be tens of thousands of jobs in flight.
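Rough numbers behind "tens of thousands", assuming GCN's published limits of 40 resident wavefronts per CU and 64 work-items per wavefront (figures from public GCN documentation, not the presentation):

```python
cus = 18                  # PS4 GPU
wavefronts_per_cu = 40    # GCN: 10 wavefronts per SIMD x 4 SIMDs per CU
lanes_per_wavefront = 64

in_flight = cus * wavefronts_per_cu * lanes_per_wavefront
print(f"Up to {in_flight:,} work-items resident on the GPU at once")   # 46,080
```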
 

KidBeta

Junior Member
Shots fired at you guys.



Interesting, I had never heard the Kinect skeletal computation system called Exemplar before. Also, latency. That should send GAF into a tizzy again.


Cool read.

Yeah, I'm surprised they didn't mention the latency of the ESRAM.
 

KKRT00

Member
Any evidence for this?

Nope, but half seems adequate, especially when the head poly count will be similar across all characters, and that's already 24k.

===
Is there a tech sheet like this for an X1 game yet?
Not exactly, but most of the stuff from those presentations is applicable to Ryse.
http://crytek.com/cryengine/presentations
I'd especially recommend these:
http://crytek.com/download/Playing with Real-Time Shadows.pdf
http://crytek.com/download/Sousa_Graphics_Gems_CryENGINE3.pdf
http://crytek.com/download/fmx2013_c3_art_tech_donzallaz_sousa.pdf
http://crytek.com/download/Sousa_Tiago_Rendering_Technologies_of_Crysis3.pptx
http://crytek.com/download/S2011_SecretsCryENGINE3Tech.ppt
http://crytek.com/assets/Crysis-2-Key-Rendering-Features.pdf

There is also a presentation from an Autodesk convention about Ryse:
http://www.youtube.com/watch?v=VYr980beuIQ

==
Most high-end features from Crysis 3 are confirmed for Ryse, with the exception of Area Lights, SSDO and Pixel Accurate Displacement Mapping.
 
The information came straight from Sony's own documentation, and speaking of pushing known misinformation: Remember that Xbox One GPU downclock that some of those exact same insiders were so sure of that they were already drawing the chalk lines around the Xbox One, and calling for an autopsy?

Those same insiders? Listen, far be it from me to doubt Sony devs, but the information was derived from official documentation. Am I supposed to listen to certain insiders, one of whom, according to other posters (I haven't seen this myself), supposedly said that Sony planned on upping their GPU speed to 1GHz AFTER the PS4 launched, knowing how batshit insane that sounds? I won't deny that some of these guys have information or know people, but some, not all, of the insiders that have given us information have appeared, at least to me, and certainly to others if they're willing to speak up, to have a clear agenda behind some of what they say. I'm not referring to cboat. I'm not even referring to the devs and Sony employees.

The 6GB spec was later confirmed by another website.

http://gaminglately.com/playstation...ined-flexible-if-devs-working-closely-w-sony/

Today we have more information which has been cleared to us by our source which is a development studio that is working on a PS4 title and prefers to go unnamed at this time (though they have provided identifying information in our original post concerning the project they are working on). This new information will come as a pleasant surprise to many gamers who were worried about the PS4′s RAM capabilities. The RAM of PS4 will be flexible to some extent, in regard to how much is used for the Operating System and how much is used for the games themselves. While 5.5gb of RAM out of the 8gb in total is set aside from the OS to be used for the game, there is a buffer amount of around 512mb extra which can be applied for by any developer so that 6gb can be used for the game instead.
 
It's kind of funny observing Microsoft's constant re-juggling of their PR message hoping something sticks. If at first you don't succeed, try, try again...

"Cloud is going to give you more computational power!" - No, it's not.
"We never targeted the best specifications on our machine." - Fair enough, but...
"We up-clocked the CPU, this makes us better!" - Well, it is a good start but...
"Everything is going to be even, 50% is not going to happen!" - Yeah, 50% is probably not what we'll see but...
"... this shit is balanced. It's been designed to work together. Basically, it's secret sauce." - Okay but... the whole initial push of the PS4 was how there is no bottlenecks and how it's perfectly balanced.


And yet every single step of the way they pick up some stragglers who have been waiting to be wrapped in the warm bosom of Microsoft once again.

I see it as them giving fuel to the uninformed to spread on chats and forums, hoping that a little misinformation or spin here and there will take root.

Not a wholly unintelligent plan, as I've seen the fruits of it spring up here and in other venues.
 

Plinko

Wildcard berths that can't beat teams without a winning record should have homefield advantage
Anyway, to put the whole resolution thing to rest:
[image: chart comparing 1080p vs 4K vs 8K and beyond by screen size and viewing distance]

I find it incredibly fascinating that before 4K TVs existed, we had a much different chart.

Then, when 4K TVs are out and 8K TVs are on the horizon, voila! This magically appears.
 

Durante

Member
Scientifically I have nothing against it as I haven't read the sources, but empirically it does seem very exaggerated, wouldn't you agree?
I think one reason for that may be that what they call "enough" in the chart is what you get when you use the "98% indistinguishable from reality" pixel count per arcminute. At least if I'm reading the source right. Which is a valid definition for "enough" I guess (and appeals to my perfectionism), but you'd still get "95% indistinguishable from reality" with half as many pixels.
 
Out of curiosity, I went back to the claims/comparisons that led to the contention, that led to the technical fellow, that led to the Digital Foundry article.


  1. 18 CU's vs. 12 CU's =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU's, so it's simply incorrect to say 50% more GPU.
  2. Adding to that, each of our CU's is running 6% faster. It's not simply a 6% clock speed increase overall.
  3. We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
  4. We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
  5. We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 - it's called Kinect.
  6. Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.
How well does the article address or elaborate on these?

I still love this one.
I'm going to hazard a guess that what Penello really meant to say, and what the article elaborates on, is that with an upclock it's not just a 6% increase to the CUs, it's a 6% increase to the GPU in general. But it ended up coming out almost backward...?
 

Insane Metal

Gold Member
Sure, barbarians do not have 150k polys, but they probably don't have less than 70k. The same goes for KZ:SF characters: some will have 45k, others 32k; it depends on the model requirements.

This screen alone shows more than 50 characters, and that's without counting the bodies lying on the ground:
http://i2.minus.com/izO6ZrEEeItQY.png

And looking just at resolution in a tech comparison is as valid as comparing the GoW Collection II running at 1080p to GoW 3 running at 720p.

--
And yes, GI and the whole lighting setup are dynamic; it's CryEngine 3, and even Crysis 2 and 3 on current-gen consoles had real-time GI applied in some areas.
Dude, look at that screen. Those dudes in the background have no more than 10k polys. They look like ass. Seriously.
 
The article is a good read, and quite insightful about the choices Microsoft made. But it doesn't address how they are going to close the performance gap, for obvious reasons.
MS can never close the performance gap. It's up to 1st party devs with artistic and technical wizardry to try and close that gap.
 

Skeff

Member
Out of curiosity, I went back to the claims/comparisons that led to the contention, that led to the technical fellow, that led to the Digital Foundry article.


  1. 18 CU's vs. 12 CU's =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU's, so it's simply incorrect to say 50% more GPU.
  2. Adding to that, each of our CU's is running 6% faster. It's not simply a 6% clock speed increase overall.
  3. We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
  4. We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
  5. We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 - it's called Kinect.
  6. Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.
How well does the article address or elaborate on these?

It doesn't, and to be honest, they won't be able to, because at least two of these are direct comparisons to things on the PS4, one of which is completely wrong (cache-coherent bandwidth) and one of which is unannounced (CPU speed).

Albert already changed his mind on the 204 number, but now it has been changed back, and they have revealed that the best they've measured when specifically testing this bandwidth is 140-150GB/s. Then there's the 6% one, which can be refuted by basic algebra. Microsoft will abandon those points because they are unprovable, and in some cases clearly wrong.
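The "basic algebra" being referred to, as I understand it (treating CU count times clock as a crude proxy for raw ALU throughput):

```python
# CU count x clock (MHz) as a crude proxy for raw ALU throughput.
xb1 = 12 * 853
ps4 = 18 * 800

print(xb1, ps4, round(ps4 / xb1, 2))   # 10236 14400 1.41
# A 6.6% upclock on 12 CUs still leaves 18 CUs at 800MHz ~41% ahead on paper.
```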
 

JaggedSac

Member
"So a lot of what we've designed for the system and the system reservation is to offload a lot of the work from the title and onto the system. You have to keep in mind that this is doing a bunch of work that is actually on behalf of the title," says Andrew Goosen.

"We're taking on the voice recognition mode in our system reservations whereas other platforms will have that as code that developers will have to link in and pay out of from their budget. Same thing with Kinect and most of our NUI [Natural User Interface] features are provided free for the games - also the Game DVR."

This is an interesting bit here. I was at a panel recently with Mike Capps (the guy from Epic) where he mentioned that they wanted to add some ancillary Kinect stuff to Gears, but adding it in meant that they lost something like 10% of their computing resources. That doesn't seem to be the case here. Perhaps devs will be more willing to use the various features provided by Kinect since there will be no hit to their compute budget.
 

vpance

Member
I find it incredibly fascinating that before 4K TVs existed, we had a much different chart.

Then, when 4K TVs are out and 8K TVs are on the horizon, voila! This magically appears.

Until we can't distinguish looking at a screen from looking out a window, there's still room for improvement. Even people who can't tell 720p from 1080p at 10 ft would see the difference that ultra-high resolutions will eventually bring.
 
ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
Unless I'm missing something, can't the GDDR5 in PS4 also do reads/writes simultaneously? Not to mention simultaneous CPU access to RAM w/ the Onion (or is it Garlic?) bus?

On their own, 272 total / 204 read+write look like good numbers, but they only get them one out of every eight cycles, and only certain (very limited) processes actually allow that sort of bandwidth to be reached. No matter how MS puts it, GDDR5 is the better memory solution.
 

KidBeta

Junior Member
This is an interesting bit here. I was at a panel recently with Mike Capps (the guy from Epic) where he mentioned that they wanted to add some ancillary Kinect stuff to Gears, but adding it in meant that they lost something like 10% of their computing resources. That doesn't seem to be the case here. Perhaps devs will be more willing to use the various features provided by Kinect since there will be no hit to their compute budget.

The quote is weird, though: those features are only "free" insofar as they don't cost the game extra performance, but they achieve this by reserving that performance all the time, meaning games that don't use them won't get the performance they would have had without the constant reservation.
 
"We've done things on the GPU side as well with our hardware overlays to ensure more consistent frame-rates," Goosen adds. "We have two independent layers we can give to the titles where one can be 3D content, one can be the HUD. We have a higher quality scaler than we had on Xbox 360. What this does is that we actually allow you to change the scaler parameters on a frame-by-frame basis."

Any experts on dynamic resolution out there? Is this any different from what Wipeout for PS3 did? Does this mean that developers can lower the resolution of the 3D content layer for even one individual frame in order to maintain a consistent frame rate? Is this exactly what Wipeout did, or could Wipeout not lower the resolution on a frame-by-frame basis?
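I don't know how Wipeout's implementation compared, but the general idea of per-frame dynamic resolution is simple to sketch. This is a hypothetical controller, not anything described in the article: drop the 3D layer's render scale when the GPU runs long, raise it back when there's headroom, and let the scaler stretch the result to 1080p while the HUD layer stays native.

```python
def next_render_scale(gpu_frame_ms, scale, target_ms=16.6,
                      min_scale=0.75, max_scale=1.0, step=0.05):
    """Pick the 3D layer's render scale for the next frame based on how
    long the GPU took on the last one; the HUD layer stays at native res."""
    if gpu_frame_ms > target_ms * 0.95:       # running hot: drop resolution
        scale = max(min_scale, scale - step)
    elif gpu_frame_ms < target_ms * 0.80:     # comfortable headroom: raise it
        scale = min(max_scale, scale + step)
    return scale

scale = 1.0
for ms in [15.0, 17.2, 18.0, 16.0, 13.0, 12.5]:   # fake per-frame GPU timings
    scale = next_render_scale(ms, scale)
    print(f"GPU {ms:4.1f} ms -> render at {int(1920 * scale)}x{int(1080 * scale)}")
```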
 