The cloud is a good source of electrolytes.
Does the cloud make it unbalanced?
Multiplayer is at 60fps
Singleplayer is at 30fps
Does the cloud make it unbalanced?
After transistors and the infinite power of the cloud comes balance. LOL
So why is single player at 30fps? Does AI logic consume too much CPU?
Yeah, that's ridiculous. The difference between 900p and 1080p is obvious, just like the difference between 1080p and 1440p. It gets harder when you have perfect AA (like 8xSSAA), but none of these games will have that.
Please don't ever bring this stupid argument into a tech thread. Having poor eyesight or a crappy setup doesn't negate a significant objective difference.
They're also a crap way to compare. Is the 360 better because Bayonetta is much better on there? Is the PS3 better because FFXIII is better on there?
Yeah, that's ridiculous. The difference between 900p and 1080p is obvious, just like the difference between 1080p and 1440p. It gets harder when you have perfect AA (like 8xSSAA), but none of these games will have that.
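For reference, the raw pixel counts behind that comparison are plain arithmetic (assuming "900p" means 1600x900):

```python
# Pixel counts behind the 900p / 1080p / 1440p comparison (simple arithmetic,
# assuming "900p" means 1600x900).
resolutions = {"900p": (1600, 900), "1080p": (1920, 1080), "1440p": (2560, 1440)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels)   # 900p: 1440000, 1080p: 2073600, 1440p: 3686400
print(f"1080p renders {pixels['1080p'] / pixels['900p']:.2f}x the pixels of 900p")    # 1.44x
print(f"1440p renders {pixels['1440p'] / pixels['1080p']:.2f}x the pixels of 1080p")  # 1.78x
```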
On the other hand, the highest character poly count in Ryse is almost four times that of KZ, and there are more characters on screen as well.
And Ryse's lighting is 100% dynamic, even its GI, which is pre-baked in KZ...
I could go on and on, but my point is: you are constraining your comparison to a single spec, but there are lots of aspects where the goals and technologies of these two games differ, which means your performance comparison is flawed because you are assuming everything they are doing is the same.
You can say that you don't value resolution as much as other things that make up the graphics of a game. (For example: Geometry, effects, lighting.)
Yeah, that's ridiculous. The difference between 900p and 1080p is obvious, just like the difference between 1080p and 1440p. It gets harder when you have perfect AA (like 8xSSAA), but none of these games will have that.
I'm curious what this part means. We more or less know Sony's approach -- having 8 ACEs to, well, feed the chip with up to 8 compute workloads asynchronously. I wonder what (if anything?) MS proposes to counter that.
Microsoft's approach to asynchronous GPU compute is somewhat different to Sony's - something we'll track back on at a later date.
Maybe (if you are a foolish fool). But that's not what I was arguing against. I was arguing against the supposed impossibility of telling a difference between the resolutions.
You can say that you don't value resolution as much as other things that make up the graphics of a game.
MS may have inadvertently confirmed that they're a lot more fill limited than most of us had assumed, which is very troubling news for anyone planning to buy an Xbone.
It's obvious they have fewer compute resources (with two CUs disabled), and having fewer ACEs means granularity isn't as good. I'm guessing they'll point to the increase in CPU clock for that.
I'm curious what this part means. We more or less know Sony's approach -- having 8 ACEs to, well, feed the chip with up to 8 compute workloads asynchronously. I wonder what (if anything?) MS proposes to counter that.
The PS4 is Hitomi Tanaka. Amazing.
From you? Come on beta, this is lame and puts you in a corner you long said you wouldn't belong in.
I agree about the impossibility, but I don't agree that one is a foolish fool to value other graphical stuff that isn't resolution in a game.
Maybe (if you are a foolish fool). But that's not what I was arguing against. I was arguing against the supposed impossibility of telling a difference between the resolutions.
Which will be ridiculous if they do that.
It's obvious they have fewer compute resources (with two CUs disabled), and having fewer ACEs means granularity isn't as good. I'm guessing they'll point to the increase in CPU clock for that.
Click on the name, go to the profile and click "Add xxx to ignore list." That way his posts aren't displayed by default unless they are quoted.
Is there any way to block posts from senjutsusage? This guy just posts a lot of nonsense through his Microsoft goggles and it bothers me.
And if he had said he doesn't think it makes much of a difference, everything would have been fine. He just said it's impossible, which is dumb.
The guy that sparked off this 1080p vs 900p thing was comparing the same game on his 42" TV 6 ft away.
You can say that you don't value resolution as much as other things that make up the graphics of a game. (For example: Geometry, effects, lighting.)
Saying it's impossible to tell at such a distance doesn't gel with what little I know of optics and what the eye can resolve (rough numbers sketched below).
Also you can go from 1080p to 900p while keeping your graphical effects and framerate and even adding some bells and whistles. You can't do the opposite.
Please don't ever bring this stupid argument into a tech thread. Having poor eyesight or a crappy setup doesn't negate a significant objective difference.
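For a rough sense of the optics point above: compare the screen's pixel density in pixels per degree of visual angle with the common ~60 pixels-per-degree rule of thumb for 20/20 vision. A minimal sketch, assuming a 16:9 42-inch panel viewed from 6 feet; the panel geometry and acuity threshold are assumptions, not figures from the thread:

```python
import math

# Rough optics check for the "42-inch TV from 6 feet" example above.
# Assumptions: 16:9 panel, ~60 pixels per degree (one arcminute per pixel)
# as the 20/20-acuity rule of thumb. Purely illustrative.
DIAGONAL_IN = 42.0
DISTANCE_IN = 6 * 12.0                                  # 6 feet in inches
width_in = DIAGONAL_IN * 16 / math.hypot(16, 9)         # ~36.6 inches of picture width

def pixels_per_degree(h_res: int) -> float:
    """Horizontal pixels packed into one degree of visual angle at this distance."""
    pixel_width = width_in / h_res                      # inches per pixel
    deg_per_pixel = math.degrees(2 * math.atan(pixel_width / (2 * DISTANCE_IN)))
    return 1 / deg_per_pixel

for name, h_res in (("900p", 1600), ("1080p", 1920)):
    print(name, round(pixels_per_degree(h_res)))        # ~55 ppd vs ~66 ppd
# 900p lands below the ~60 ppd rule of thumb and 1080p just above it, so a
# visible difference at that distance is plausible rather than "impossible".
```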
I had hoped that my choice of words would make clear that I was mostly joking with that part. It's a subjective question. Personally, I feel like graphical artifacts (aliasing, flickering, bad filtering, tearing, you name it) totally ruin graphics, and I'd much rather have less bling and solid IQ than the opposite.
I agree about the impossibility, but I don't agree that one is a foolish fool to value other graphical stuff that isn't resolution in a game.
I think Skyward Sword at 480p looks better than Minecraft at 1080p.
Leadbetter said: Xbox One's circa 200GB/s of "real-life" bandwidth trumps PS4's 176GB/s peak throughput
MS engineers referenced Vgleaks articles about the PS4's 14+4 CU split, which was later debunked by Cerny in an interview.
hilarious. lol
I will try to find the Cerny interview where he was asked about this.
Is there any way to block posts from senjutsusage? This guy just posts a lot of nonsense through his Microsoft goggles and it bothers me.
It's obvious they have fewer compute resources (with two CUs disabled), and having fewer ACEs means granularity isn't as good. I'm guessing they'll point to the increase in CPU clock for that.
I'm curious what this part means. We more or less know Sony's approach -- having 8 ACEs to, well, feed the chip with up to 8 compute workloads asynchronously. I wonder what (if anything?) MS proposes to counter that.
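For anyone unfamiliar with the idea, here is a rough CPU-side analogy of what multiple asynchronous compute queues buy you: independent queues of work that get drained alongside the "main" job. This is a toy illustration only, not real GPU or console SDK code, and the queue count and timings are made up:

```python
# Toy CPU-side analogy of asynchronous compute queues (NOT real GPU or SDK code):
# several independent queues accept work and are drained concurrently while a
# separate "graphics" task keeps running.
import queue
import threading
import time

NUM_QUEUES = 8                                   # illustrative, one per "ACE" in this toy
compute_queues = [queue.Queue() for _ in range(NUM_QUEUES)]

def compute_worker(q: queue.Queue) -> None:
    """Drain one compute queue until a None sentinel arrives."""
    while True:
        job = q.get()
        if job is None:
            break
        time.sleep(0.001)                        # stand-in for a small GPGPU dispatch

def graphics_frame() -> None:
    """Stand-in for the rendering work that proceeds without waiting on compute."""
    time.sleep(0.016)                            # roughly one 60 fps frame

workers = [threading.Thread(target=compute_worker, args=(q,)) for q in compute_queues]
for w in workers:
    w.start()
for i in range(32):                              # spread small jobs across the queues
    compute_queues[i % NUM_QUEUES].put(f"job-{i}")
graphics_frame()                                 # graphics and compute overlap in time
for q in compute_queues:
    q.put(None)
for w in workers:
    w.join()
print("frame rendered while 32 async compute jobs drained across", NUM_QUEUES, "queues")
```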
Maybe (if you are a foolish fool). But that's not what I was arguing against. I was arguing against the supposed impossibility of telling a difference between the resolutions.
Nah, you can never compare two exclusives, only multiplats from the same dev. Even if both were thrown together quickly, the best hardware that's the easiest to program for will show itself just fine.
If I identified primarily as a PC gamer I would hope console developers would focus the fuck out of anything that wasn't IQ and then port the game to PC.
I had hoped that my choice of words would make clear that I was mostly joking with that part. It's a subjective question. Personally, I feel like graphical artifacts (aliasing, flickering, bad filtering, tearing, you name it) totally ruin graphics, and I'd much rather have less bling and solid IQ than the opposite.
He clarified that he was joking.
WTF is this crap?
A reference to a personal preference, formulated in a jocular manner so as to lighten the conversation. Should I put a few smileys around it next time?
WTF is this crap?
No.
What he said did give some credibility to the notion that going higher than 14 CUs for graphics won't give you a linear performance scale.
Sure, the barbarians do not have 150k polys, but they probably don't have less than 70k. The same goes for KZ:SF characters: some will have 45k, others will have 32k; it depends on model requirements.
Gemüsepizza;83118521 said: Oh, you mean this here?
http://i.imgur.com/SFZ83X2.jpg
This is only about the player character Marius. Other models have a significantly lower poly count and less detail, as can be seen in screenshots. And can you please post a source about the number of on-screen characters? And the poly count for specific scenes?
Source?
Oh I doubt that.
"There are four 8MB lanes, but it's not a contiguous 8MB chunk of memory within each of those lanes. Each lane, that 8MB is broken down into eight modules. This should address whether you can really have read and write bandwidth in memory simultaneously," says Baker
"Yes you can - there are actually a lot more individual blocks that comprise the whole ESRAM so you can talk to those in parallel. Of course if you're hitting the same area over and over and over again, you don't get to spread out your bandwidth and so that's one of the reasons why in real testing you get 140-150GB/s rather than the peak 204GB/s... it's not just four chunks of 8MB memory. It's a lot more complicated than that and depending on how the pattern you get to use those simultaneously. That's what lets you do read and writes simultaneously. You do get to add the read and write bandwidth as well adding the read and write bandwidth on to the main memory. That's just one of the misconceptions we wanted to clean up."
"That's real code running. That's not some diagnostic or some simulation case or something like that. That is real code that is running at that bandwidth. You can add that to the external memory and say that that probably achieves in similar conditions 50-55GB/s and add those two together you're getting in the order of 200GB/s across the main memory and internally."
So 140MB-150MB (obviously means GB/s) is a realistic target and DDR3 bandwidth can really be added on top?
"Yes. That's been measured."
That equivalent on ESRAM would be 218GB/s. However just like main memory, it's rare to be able to achieve that over long periods of time so typically an external memory interface you run at 70-80 per cent efficiency.
"The same discussion with ESRAM as well - the 204GB/s number that was presented at Hot Chips is taking known limitations of the logic around the ESRAM into account. You can't sustain writes for absolutely every single cycle. The writes is known to insert a bubble [a dead cycle] occasionally... one out of every eight cycles is a bubble so that's how you get the combined 204GB/s as the raw peak that we can really achieve over the ESRAM. And then if you say what can you achieve out of an application - we've measured about 140-150GB/s for ESRAM.
Cerny didn't debunk the claim that the system is balanced for 14 CUs doing graphics and 4 doing GPGPU; he debunked the notion that those 4 couldn't be used for graphics too.
What he said did give some credibility to the notion that going higher than 14 CUs for graphics won't give you a linear performance scale.
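One way to see how that could happen is a toy Amdahl-style model: if part of the frame is limited by something other than CU throughput (bandwidth, ROPs, fixed-function work), extra CUs buy progressively less. The numbers below are invented purely for illustration, not measurements of either console:

```python
# Toy Amdahl-style model with invented numbers, only to illustrate why adding
# CUs past a "balanced" point can give sub-linear gains: part of the frame
# scales with CU count, part does not.
ALU_MS_AT_ONE_CU = 140.0     # made-up: ALU-bound work if a single CU did it all
FIXED_MS = 10.0              # made-up: per-frame cost that does not scale with CUs

def frame_ms(cus: int) -> float:
    """Frame time when only the ALU portion scales with CU count."""
    return ALU_MS_AT_ONE_CU / cus + FIXED_MS

for cus in (12, 14, 16, 18):
    print(cus, "CUs ->", round(frame_ms(cus), 2), "ms")
# 12 -> 21.67, 14 -> 20.0, 16 -> 18.75, 18 -> 17.78: going from 14 to 18 CUs is
# +29% compute but only ~12% faster in this toy, i.e. clearly not linear scaling.
```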
And looking just at resolution in a tech comparison is as valid as comparing GoW Collection II, which runs at 1080p, to GoW 3, which runs at 720p.
A reference to a personal preference, formulated in a jocular manner so as to lighten the conversation. Should I put a few smileys around it next time?
Anyway, regarding the whole resolution thing:
http://cdn.avsforum.com/4/47/47015717_1080vs4kvs8kvsmore.png
Yeah, I'm sure some smileys would help next time you try to "lighten up" the mood by calling people names and dismissing opinions that differ from yours.
Let me get this straight:
Yeah, I'm sure some smileys would help next time you try to "lighten up" the mood by calling people names and dismissing opinions that differ from yours.
MS has joined the Nintendo club, with this whole balance and efficiency talk. I guess they forgot to tell this to developers.
I thought this stuff was pretty significant information, even though people are acting like they literally say nothing at all in this article except balance this or balance that. They say a whole hell of a lot more than that. They explain how the ESRAM is able to read and write simultaneously.
Each 8MB block is seriously 8 separate modules for a total of 32 memory modules for the full 32MB of ESRAM? We sure as hell didn't know that before.
And I think it's quite cool to have confirmation that they've measured, in real running games, that the ESRAM is achieving 140-150GB/s. Unless they're being dishonest, they seem quite clear on this.
They even finally explain the 204GB/s vs 218GB/s discrepancy.
Essentially, this is a crazy amount of information that is far from just all PR bull. There are real architectural details about the layout of the system and real measured numbers in running games, not random simulations. If after all this people can still claim that Microsoft literally answered nothing at all, then people simply weren't interested in answers in the first place, and obviously that's not entirely a surprise, but I'm glad they finally did this.
We especially now have confirmation that the Xbox One GPU is not old GCN, and is instead newer Sea Islands tech, like the PS4's appears to be. Short of them lying out of their ass, which is a view some seem to be taking, I think there is a lot of good information in this article.
GDDR5, so good it's uncomfortable.
Unlike other charts of the type I have seen, it does cite its sources, which are published in a SMPTE journal -- since that's not my field I have no idea how well regarded that is, but I'd still take it over unsourced material.
This chart seems to be prepared by 4k marketing folks.
Yeah, comparing the resolution between different games is silly. You really need the same game if you want to draw any conclusions about hardware, and even then you have to contend with the possibility of one of the implementations being more efficient or highly tuned.