
150MHz CPU boost on XBO, now in production


Klocker

Member
Because flushing a cache is bad, it's something you want to avoid. It means that any reads afterwards will result in a memory read instead of a chance at a cache hit, increasing off-chip bandwidth usage by quite a lot.
I understand that, but my point really is that all of the armchair computer science theory means nothing until we know everything about how the system functions and what MS has designed to combat that


and clearly we do not know. Some may think they know it all, but it is obvious there are some things MS is doing that we do not understand yet with regard to the memory subsystem and how devs will work around that issue.
 

KidBeta

Junior Member
You did kinda make some pretty massive programming-related assumptions there. The PS4 will clearly have an edge on GPGPU-related stuff, but anything a dev does related to GPGPU would need to be reduced on the Xbox One by 50%? I'm no programmer, and even I know something sounds very off about what you said.

Who says the GPGPU related task, whatever it/they may be, will even be too much for the Xbox One to handle in the first place? And who says that any coherent write to memory alongside normal graphics operations will automatically cause a massive hit? The PS4 possibly being better optimized and equipped to deal with such operations doesn't automatically mean any such attempt on Xbox One GPU architecture will cause some massive hit. By saying such a thing, I suspect you aren't just significantly underestimating the Xbox One GPU, but you're also very much underestimating a big part of the reason GCN was created in the first place, which was to enable this kind of simultaneous and efficient cooperation between graphics and GPGPU operations.

Without getting into a drawn out discussion on this one, I'm not sure you're properly qualified to make assumptions at this level. Unless, of course, you're a games programmer, in which case you're qualified, and I'll instead just call you a "lazy developer." :)

I never said it would be too big for it to handle, I just said it will cause a large hit, which any cache flush will. There's no way around it: coherent memory writes from GPGPU on the XBONE cause the entire cache to be flushed to memory.

If the hit to eSRAM is so small, why even bother having a cache? There's a good reason it's there, and coherent GPGPU writes on the XBONE invalidate and flush the entire thing: L1, L2 and even the read-only texture cache, I am led to believe.

It would appear that outside of the eSRAM itself, Microsoft has done nothing about this issue.

It should be noted that GCN in general suffers the same fate; it's not just the XBONE. It's something you normally just deal with, but it's something you don't have to deal with on the PS4.

You seem to think you're qualified to call me out on my qualifications, and you comment rather rigorously on most technical threads. What are your qualifications to be making such comments, may I ask?

I'm not making any assumptions, unless you think that assuming a cache has much higher bandwidth and lower latency than any off-chip memory is a 'large' assumption.


I understand that, but my point really is that all of the armchair computer science theory means nothing until we know everything about how the system functions and what MS has designed to combat that


and clearly we do not know. Some may think they know it all, but it is obvious there are some things MS is doing that we do not understand yet with regard to the memory subsystem and how devs will work around that issue.

From the rather detailed vgleaks, it would seem that Microsoft has done next to nothing to combat it (aside from the aforementioned eSRAM).
 
You did kinda make some pretty massive programming-related assumptions there. The PS4 will clearly have an edge on GPGPU-related stuff, but anything a dev does related to GPGPU would need to be reduced on the Xbox One by 50%? I'm no programmer, and even I know something sounds very off about what you said.

Who says the GPGPU related task, whatever it/they may be, will even be too much for the Xbox One to handle in the first place? And who says that any coherent write to memory alongside normal graphics operations will automatically cause a massive hit? The PS4 possibly (correction: I know the PS4 is better equipped) being better optimized and equipped to deal with such operations doesn't automatically mean any such attempt on Xbox One GPU architecture will cause some massive hit. By saying such a thing, I suspect you aren't just significantly underestimating the Xbox One GPU, but you're also very much underestimating a big part of the reason GCN was created in the first place, which was to enable this kind of simultaneous and efficient cooperation between graphics and GPGPU operations.

Without getting into a drawn out discussion on this one, I'm not sure you're properly qualified to make assumptions at this level. Unless, of course, you're a games programmer, in which case you're qualified, and I'll instead just call you a "lazy developer." :)

Well, one of the first things you learn when learning to optimize code is to write more cache-friendly code, because cache misses are expensive. You don't have to be a game programmer to know this. It applies to all programs.
 

KidBeta

Junior Member
Well, one of the first things you learn when learning to optimize code is to write more cache-friendly code, because cache misses are expensive. You don't have to be a game programmer to know this. It applies to all programs.

Exactly, access patterns and cache hits are king (aside from maths ops) for good performance.
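To make the access-pattern point concrete, here is a minimal C sketch (nothing console-specific, just a plain desktop example): both loops sum the same 64MB array, but the row-major walk reuses each cache line it loads while the column-major walk strides 16KB between accesses and misses almost every time, which typically makes it several times slower.

#include <stdio.h>
#include <time.h>

#define N 4096

static int grid[N][N];               /* 64MB, far bigger than any CPU cache */

/* cache-friendly: walks memory in the order it is laid out */
static long long sum_rows(void)
{
    long long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += grid[i][j];
    return s;
}

/* cache-hostile: 16KB stride between consecutive accesses */
static long long sum_cols(void)
{
    long long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += grid[i][j];
    return s;
}

int main(void)
{
    clock_t t0 = clock();
    long long a = sum_rows();
    clock_t t1 = clock();
    long long b = sum_cols();
    clock_t t2 = clock();
    printf("row-major: %.3fs  column-major: %.3fs  (sums %lld %lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, a, b);
    return 0;
}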
 

TechnicPuppet

Nothing! I said nothing!
I said this earlier in the thread but it's worth repeating.

If this latest information is true, then it will show in every single game at launch. There can be no hiding such a massive advantage behind any excuses.

That sound about right?
 

Chobel

Member
I said this earlier in the thread but it's worth repeating.

If this latest information is true, then it will show in every single game at launch. There can be no hiding such a massive advantage behind any excuses.

That sound about right?

Are you talking about the 150MHz boost or the 50% advantage?
 
I never said it would be too big for it to handle, I just said it will cause a large hit, which any cache flush will. There's no way around it: coherent memory writes from GPGPU on the XBONE cause the entire cache to be flushed to memory.

If the hit to eSRAM is so small, why even bother having a cache? There's a good reason it's there, and coherent GPGPU writes on the XBONE invalidate and flush the entire thing: L1, L2 and even the read-only texture cache, I am led to believe.

It would appear that outside of the eSRAM itself, Microsoft has done nothing about this issue.

It should be noted that GCN in general suffers the same fate; it's not just the XBONE. It's something you normally just deal with, but it's something you don't have to deal with on the PS4.

You seem to think you're qualified to call me out on my qualifications, and you comment rather rigorously on most technical threads. What are your qualifications to be making such comments, may I ask?

I'm not making any assumptions, unless you think that assuming a cache has much higher bandwidth and lower latency than any off-chip memory is a 'large' assumption.

Don't get me wrong, I know I'm not qualified, but I have a sneaking suspicion that you aren't qualified enough to make such large assumptions, either. And I'm not trying to be disrespectful when I say this. I just think that if you're going to make such a major assumption, there should at least be some reasonable background or experience on the particular subject to go hand in hand with the claim. That said, I don't want to discourage you from saying your piece, as you have every right to, and I do love discussing this kind of stuff. I'm not sure what you said would prove accurate if a more qualified person were to assess the soundness of the argument, but we'll operate on the assumption that you're right, although I still think saying it will be a "massive" hit is perhaps a bit much.

Now, obviously the GPU needs the caches. They all do. Not having them would be absolute suicide and destroy performance, so the caches aren't necessarily there because ESRAM isn't helpful enough. They are there because they must be, regardless of what design decision Microsoft or Sony opted for.

Also, the ESRAM is sure to be further out from the GPU's L1 and L2 and other caches, but it's definitely not off-chip memory. It's on chip, and is a very similar style of memory to what's used inside the GPU's L1 and L2 caches, only obviously not as fast. The DDR3 is the only off-chip memory in the Xbox One equation. And, hypothetically, since Microsoft put in place ESRAM residency control so that developers have more control over which pieces of data can simply remain inside ESRAM without ever having to be moved (something I have confirmation they did), why couldn't a dev take advantage of the still rather low-latency ESRAM, and have the GPGPU operation take place from within the ESRAM without having to flush the GPU cache in the first place? And the ESRAM obviously has plenty more capacity to spare than the much smaller GPU caches, so perhaps that's a way they could handle things?

ESRAM residency control is one of the main DX11.x enhancements that Microsoft highlights in their development document for the Xbox One along with

Tiled Resources.
Enhanced support for multithreaded graphics operations.
Batched resource creation.
Improved texture format and swizzling support.

Well, one of the first things you learn when learning to optimize code is to write more cache-friendly code, because cache misses are expensive. You don't have to be a game programmer to know this. It applies to all programs.

Oh, believe me, I know this. I say it all the time around here regarding ESRAM and the dangers of cache misses. But what I'm saying is, how does he know that Microsoft doesn't have a solution in place to deal with just that issue? There's another piece of on-chip memory that's the same kind of memory as the GPU's caches, just not nearly as fast, but still quite low latency. Instead of flushing the cache, why not take advantage of the, by cache standards, enormous amount of space inside the ESRAM to help perform the coherent writes? Microsoft's Hot Chips presentation did showcase that the ESRAM appears to have some access to coherent memory.
 

goonergaz

Member
Fantastic post. I'm leaning towards the Xbox One version for certain MP games, because that's literally what some of my friends are getting, but mostly because I prefer the controller. However, there's no way I'm buying a crappy version that the PS4 version possibly runs circles around, so I'll make certain first before I purchase. The only MP titles I'm guaranteed to get on the Xbox One are the sports titles; for everything else I'll wait to see what the differences are first.

So you tried the DS4? What's it like compared to the DS3?

Also, I doubt the logic of games being closer due to fewer complexities... I see it as they'll get the game running fine on XBO and then port to PS4 and up the res/get better fps with little to no effort.
 

KidBeta

Junior Member
Now, obviously the GPU needs the caches. They all do. Not having them would be absolute suicide and destroy performance, so the caches aren't necessarily there because ESRAM isn't helpful enough. They are there because they must be, regardless of what design decision Microsoft or Sony opted for.

Also, the ESRAM is sure to be further out from the GPU's L1 and L2 and other caches, but it's definitely not off-chip memory. It's on chip, and is a very similar style of memory to what's used inside the GPU's L1 and L2 caches, only obviously not as fast. The DDR3 is the only off-chip memory in the Xbox One equation. And, hypothetically, since Microsoft put in place ESRAM residency control so that developers have more control over which pieces of data can simply remain inside ESRAM without ever having to be moved (something I have confirmation they did), why couldn't a dev take advantage of the still rather low-latency ESRAM, and have the GPGPU operation take place from within the ESRAM without having to flush the GPU cache in the first place? And the ESRAM obviously has plenty more capacity to spare than the much smaller GPU caches, so perhaps that's a way they could handle things?

ESRAM residency control is one of the main DX11.x enhancements that Microsoft highlights in their development document for the Xbox One along with

Tiled Resources.
Enhanced support for multithreaded graphics operations.
Batched resource creation.
Improved texture format and swizzling support.

I'm not going to talk about my qualifications because they are irrelevant to the discussion, and it would be nice if you stopped trying to discredit what people say by constantly asking for qualifications.

Sure, it would be great to do all your GPGPU work in the eSRAM, and that would work nicely too, but the problem is that you can't selectively bypass the caches in the GPU (at least not in the XBONE from what we know), so you are still going to get cached values and you are still going to get a cache invalidate/flush when you try to do coherent GPGPU work.

Sure, having the GPGPU work in the eSRAM will cause a smaller hit than going to, say, the DDR3, but it's still going to cause a rather large hit compared to not having to flush your graphics data at all.

The main reason the eSRAM is a scratchpad probably comes down to the fact that a cache that large is much bigger than just a bunch of SRAM cells (which is what the eSRAM is), because it requires the tags/addresses to be stored and hardware to check all of those tags quickly, and they didn't want to take the die-size hit that would have caused.
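For a rough sense of why the tag overhead matters, here is a back-of-envelope sketch in C; the 64-byte line size, 40-bit physical address and 16-way associativity are purely illustrative assumptions of mine, not anything from the leaks.

#include <stdio.h>

int main(void)
{
    /* All figures below except the 32MB size are illustrative assumptions. */
    const long long cache_bytes = 32LL * 1024 * 1024;   /* the 32MB of eSRAM  */
    const int line_bytes  = 64;                          /* assumed line size  */
    const int phys_bits   = 40;                          /* assumed addr width */
    const int ways        = 16;                          /* assumed assoc.     */
    const int state_bits  = 2;                           /* valid + dirty      */

    long long lines = cache_bytes / line_bytes;          /* 524,288 lines      */
    long long sets  = lines / ways;                      /* 32,768 sets        */
    int offset_bits = 6;                                 /* log2(64)           */
    int index_bits  = 15;                                /* log2(32,768)       */
    int tag_bits    = phys_bits - index_bits - offset_bits;   /* 19 bits       */

    long long tag_store_bits = lines * (tag_bits + state_bits);
    printf("lines: %lld  sets: %lld  tag+state bits/line: %d  tag store: ~%.2f MB\n",
           lines, sets, tag_bits + state_bits,
           tag_store_bits / 8.0 / (1024 * 1024));        /* ~1.3MB + comparators */
    return 0;
}

So on those assumptions you'd be adding well over a megabyte of tag SRAM plus the comparison logic just to turn the scratchpad into a cache.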
 
And one last point, since I really need to head to bed so I can get up for work: even if you can't run the GPGPU operation straight from the 32MB of ESRAM, flushing the cache might not be such a bad or costly thing to do in the first place.

Now, why would I say that? Remember, a cache miss as we know is costly, especially when you have to go offchip to retrieve data. However, therein lies the rub with the Xbox One architecture. There is another piece of on chip memory that is very low latency on the Xbox One GPU that would save the much more expensive trip to main memory. Why couldn't the caches on the GPU be flushed, and then the data quickly received from the neighboring 32MB of low latency ESRAM which is also on chip and close to the GPU's execution units, even if further out from the L1 and L2 cache? The presence of the ESRAM, to me, sounds like a very fast way to rearm that GPU's cache with the data it will need to perform the coherent write to memory. All without ever having to make a more expensive trip out to main memory, or even worse, the hard drive.

I'm not going to talk about my qualifications because they are irrelevant to the discussion, and it would be nice if you stopped trying to discredit what people say by constantly asking for qualifications.

Sure, it would be great to do all your GPGPU work in the eSRAM, and that would work nicely too, but the problem is that you can't selectively bypass the caches in the GPU (at least not in the XBONE from what we know), so you are still going to get cached values and you are still going to get a cache invalidate/flush when you try to do coherent GPGPU work.

Sure, having the GPGPU work in the eSRAM will cause a smaller hit than going to, say, the DDR3, but it's still going to cause a rather large hit compared to not having to flush your graphics data at all.

The main reason the eSRAM is a scratchpad probably comes down to the fact that a cache that large is much bigger than just a bunch of SRAM cells (which is what the eSRAM is), because it requires the tags/addresses to be stored and hardware to check all of those tags quickly, and they didn't want to take the die-size hit that would have caused.

Forget the qualifications bit. It wasn't my intent to insult you, nor was I necessarily asking for your qualifications. I was simply assuming that unless you're hands-on and working on the hardware, your assumptions may not be entirely correct. But I've already moved past that point and agreed to discuss the matter anyway; the rest, regarding whether or not you're right or even qualified, is irrelevant.
 

Krakn3Dfx

Member
I said this earlier in the thread but it's worth repeating.

If this latest information is true, then it will show in every single game at launch. There can be no hiding such a massive advantage behind any excuses.

That sound about right?

I guess we'll find out in November. You're clearly going to see varying degrees of performance gap depending on the developer, budget, and intent.

All I want to know is: does Sony have this upclock?

I would say, all things being equal, Sony doesn't need it if they don't have it, and if they did, they're not in a position where they need to announce it to the world.

I would be surprised if you hear anything out of Sony about CPU or GPU clock speeds between now and launch, unless the tables somehow turned and they felt the need to publicize their specs more than they already have.
 

KidBeta

Junior Member
And one last point, since I really need to head to bed so I can get up for work: even if you can't run the GPGPU operation straight from the 32MB of ESRAM, flushing the cache might not be such a bad or costly thing to do in the first place.

Now, why would I say that? Remember, a cache miss as we know is costly, especially when you have to go offchip to retrieve data. However, therein lies the rub with the Xbox One architecture. There is another piece of on chip memory that is very low latency on the Xbox One GPU that would save the much more expensive trip to main memory. Why couldn't the caches on the GPU be flushed, and then the data quickly received from the neighboring 32MB of low latency ESRAM which is also on chip and close to the GPU's execution units, even if further out from the L1 and L2 cache? The presence of the ESRAM, to me, sounds like a very fast way to rearm that GPU's cache with the data it will need to perform the coherent write to memory. All without ever having to make a more expensive trip out to main memory, or even worse, the hard drive.

You're still missing the point entirely. Even the eSRAM is horribly slow in comparison to the cache, and having to go even that far will most likely cause a performance drop.

Not to mention that a full cache invalidate / flush takes nigh on 400 cycles.

Also, we don't know enough yet to say where the cache flushes/invalidates to, but if you wanted the CPU to see the information it would probably be wise to flush to the DDR3.
 

TechnicPuppet

Nothing! I said nothing!
I guess we'll find out in November. You're clearly going to see varying degrees of performance gap depending on the developer, budget, and intent.

First gen/cross platform games are going to be a mixed bag at this point.

Right now people are not listening to MS because the evidence, namely the specs, says different. I think come November 22nd the games should become the evidence and nothing else.
 

demolitio

Member
You're trying too hard.

For the past few pages especially.

I do love the "my history exempts me" response though. This is your recent history in this thread alone so why would anyone bother looking into it further to somehow justify the cliche trolling we've seen so much already?

so the xbone has more power than the ps4. with the kinect 500$ feels like a bargain.

Get over it. Ms has a higher CPU clock. It will be nice snapping apps while watching tv and playing games. Hit me up when Sony does that.

Its amazing that the xbox can squeeze out more power while this ps4 can't. Maybe they built the box big to harness the power. The tiny ps4 looks like a toy. No offense.

PS4 doesn't need to.

i agree. hence why i categorize it with a toy.

ps4 has too much power for that tiny case. overheating anyone?

read my history.. never shit on the ps4. actually am buying one.. love infamous.. try again.

sorry the ps4 looks like a toy.. get over it.

But sure, just curious!
 
You're still missing the point entirely. Even the eSRAM is horribly slow in comparison to the cache, and having to go even that far will most likely cause a performance drop.

Not to mention that a full cache invalidate / flush takes nigh on 400 cycles.

No way are we dealing with 400 cycles with the ESRAM right there on chip. Anything is slow in comparison to the cache, but I think you're badly underestimating how quickly GCN is equipped to get the necessary information back in there to handle the GPGPU task. This takes me back to an earlier point. Simply because the PS4 has these extra optimizations, you're almost essentially discrediting the GCN architecture to the point that you make it seem like it simply isn't effective at doing precisely what it was designed to do in the first place, which is handle GPGPU tasks alongside graphics-related tasks, and do so effectively.

Now, we may not agree on the minor details, but I simply cannot agree with you so strongly disregarding the ability of AMD's GCN architecture to accomplish the very thing it was designed to do. With some of the things you've said, and since PC GPUs (well, all of them, really) do not have PS4-type optimizations, you're beyond the point of underestimating the GPGPU capabilities of the Xbox One architecture; you're effectively underestimating and indirectly labeling as flawed AMD's entire line of GCN-class GPUs. It may not handle it as effectively as the PS4 will, but the Xbox One can most definitely handle GPGPU. And GCN hardware that exists right now on the PC can handle GPGPU, or compute tasks such as TressFX, just fine. GCN was designed to handle these things, as was the Xbox One, even if not to the extent that the PS4 was. And one of Microsoft's key takeaways for the developers they briefed on the Xbox One architecture was to start looking at GPGPU. That's not something you say to developers if your hardware is somehow not very well equipped to handle it, or I would hope not.

Just my 2 cents. This chat was very informative, and while I'll be heading off to bed, we'll def continue this tomorrow. Good chatting with you, dude. And night, everyone!
 
For the past few pages especially.

I do love the "my history exempts me" response though. This is your recent history in this thread alone so why would anyone bother looking into it further to somehow justify the cliche trolling we've seen so much already?

But sure, just curious!
awww those are my comments! thanks for summing them up! btw this upclock is huge.
 

Krakn3Dfx

Member
Right now people are not listening to MS because the evidence, namely the specs, says different. I think come November 22nd the games should become the evidence and nothing else.

Whatever you're trying to insinuate, you're thinking far too black and white for game development. A game like Bayonetta on the PS3 was bad because of the developers at Sega who ported it, not because the PS3 wasn't capable of it.

News at 11: there will still be poorly developed titles not given enough time to finish properly next generation, especially at launch. Whatever we see in November is not going to be indicative of what we can expect this generation.
 

TechnicPuppet

Nothing! I said nothing!
Whatever you're trying to insinuate, you're thinking far too black and white for game development. A game like Bayonetta on the PS3 was bad because of the developers at Sega who ported it, not because the PS3 wasn't capable of it.

News at 11: there will still be poorly developed titles not given enough time to finish properly next generation, especially at launch. Whatever we see in November is not going to be indicative of what we can expect this generation.

I'm not insinuating anything. All I am saying is that we should let the games do the talking come launch day.
 

KidBeta

Junior Member
No way are we dealing with 400 cycles with the ESRAM right there on chip. Anything is slow in comparison to the cache, but I think you're badly underestimating how quickly GCN is equipped to get the necessary information back in there to handle the GPGPU task. This takes me back to an earlier point. Simply because the PS4 has these extra optimizations, you're almost essentially discrediting the GCN architecture to the point that you make it seem like it simply isn't effective at doing precisely what it was designed to do in the first place, which is handle GPGPU tasks alongside graphics-related tasks, and do so effectively.

Now, we may not agree on the minor details, but I simply cannot agree with you so strongly disregarding the ability of AMD's GCN architecture to accomplish the very thing it was designed to do. With some of the things you've said, and since PC GPUs (well, all of them, really) do not have PS4-type optimizations, you're beyond the point of underestimating the GPGPU capabilities of the Xbox One architecture; you're effectively underestimating and indirectly labeling as flawed AMD's entire line of GCN-class GPUs. It may not handle it as effectively as the PS4 will, but the Xbox One can most definitely handle GPGPU. And GCN hardware that exists right now on the PC can handle GPGPU, or compute tasks such as TressFX, just fine. GCN was designed to handle these things, as was the Xbox One, even if not to the extent that the PS4 was. And one of Microsoft's key takeaways for the developers they briefed on the Xbox One architecture was to start looking at GPGPU. That's not something you say to developers if your hardware is somehow not very well equipped to handle it, or I would hope not.

Just my 2 cents. This chat was very informative, and while I'll be heading off to bed, we'll def continue this tomorrow. Good chatting with you, dude. And night, everyone!

It takes 400 cycles to invalidate and flush the actual cache.

You're going to have to flush the cache on a coherent write.

This includes graphics-related items.

You don't need to do this on the PS4; in fact you can avoid doing it at all on the PS4 by bypassing the cache entirely.

I think you don't understand the implications of the differences between the two: no amount of lowish-latency eSRAM is going to save you from having to do expensive cache flushes all the time.

Read this, it should inform you on some topics.

http://www.vgleaks.com/playstation-4-includes-huma-technology/

then read this

http://www.vgleaks.com/more-exclusi...plementation-and-memory-enhancements-details/

Whilst I may be wrong on the overall timings, it's in the hundreds of cycles to flush the thing.
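For scale, here is a quick back-of-envelope sketch in C; the 853MHz GPU clock and the flush counts are my own assumptions for illustration, only the ~400-cycle figure comes from the discussion above, and the refill misses that follow each flush are extra on top of this.

#include <stdio.h>

int main(void)
{
    const double gpu_hz           = 853e6;    /* assumed Xbox One GPU clock */
    const double cycles_per_flush = 400.0;    /* the figure quoted above    */
    const double frame_ms         = 1000.0 / 60.0;

    for (int flushes = 10; flushes <= 10000; flushes *= 10) {
        double ms = flushes * cycles_per_flush / gpu_hz * 1000.0;
        printf("%5d flushes/frame: %.3f ms of the %.2f ms budget\n",
               flushes, ms, frame_ms);
    }
    return 0;
}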
 
I seriously suggest you go read the Geneva Convention. You'll see the section on "no offense", right below the section on "with all due respect."

http://www.youtube.com/watch?v=Af-Id_fuXFA#t=0m17s

The fact that you're falling back on the Geneva Convention in a NeoGAF forum topic about console specs is just idiotic and screams of desperation, especially when it's pretty obvious that wasn't how it was initially supposed to be interpreted. No offence. ;)
 
Flushing the cache has nothing to do with ESRAM.

Indirectly it does, because the data that will go back into that cache has to come from somewhere, and if it's coming from ESRAM, which is low latency and on chip and close to the GPU's execution units, then there's no way it's taking 400+ cycles to get some data back in those caches for processing. PC GPUs without embedded low latency ESRAM handle GPGPU tasks quite effectively already, so certainly even without packing the most raw horsepower, the Xbox One isn't exactly incapable of doing these things effectively.

http://www.youtube.com/watch?v=4MfYuP6L44k#t=2m50s

Ryse's boat scene, as shown in that vid up top, is a perfect example of impressive GPGPU work in action on the Xbox One. I really don't think the system will have an issue with this type of stuff. I know I said I was headed to bed, but I just wanted to post this last thing.

The fact that you're falling back on the Geneva Convention in a NeoGAF forum topic about console specs is just idiotic and screams of desperation, especially when it's pretty obvious that wasn't how it was initially supposed to be interpreted. No offence. ;)

You're a regular chip off the old block. :p
 
Ryse's boat scene, as shown in that vid up top, is a perfect example of impressive GPGPU work in action on the Xbox One. I really don't think the system will have an issue with this type of stuff. I know I said I was headed to bed, but I just wanted to post this last thing.

Serious question, how do you know that was GPGPU and not simply done by the CPU? Did the developers reveal how they achieved it? If so that's cool, I just might have missed the story/topic.
 

KidBeta

Junior Member
Indirectly it does, because the data that will go back into that cache has to come from somewhere, and if it's coming from ESRAM, which is low latency and on chip and close to the GPU's execution units, then there's no way it's taking 400+ cycles to get some data back in those caches for processing. PC GPUs without embedded low latency ESRAM handle GPGPU tasks quite effectively already, so certainly even without packing the most raw horsepower, the Xbox One isn't exactly incapable of doing these things effectively.

http://www.youtube.com/watch?v=4MfYuP6L44k#t=2m50s

Ryse's boat scene, as shown in that vid up top, is a perfect example of impressive GPGPU work in action on the Xbox One. I really don't think the system will have an issue with this type of stuff. I know I said I was headed to bed, but I just wanted to post this last thing.



You're a regular chip off the old block. :p

To flush the entire L2 on the GPU would actually take 4096 eSRAM cycles.

512KB / (128 bytes/cycle) = 4096 cycles.
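Spelling that arithmetic out in C (the 853MHz clock is my assumption; the 512KB and 128 bytes/cycle figures are the ones quoted above):

#include <stdio.h>

int main(void)
{
    const double gpu_hz        = 853e6;         /* assumed GPU clock            */
    const double l2_bytes      = 512.0 * 1024;  /* GCN L2 size used above       */
    const double bytes_per_clk = 128.0;         /* eSRAM write width used above */

    double cycles = l2_bytes / bytes_per_clk;   /* = 4096 cycles                */
    double usec   = cycles / gpu_hz * 1e6;      /* ~4.8 microseconds            */
    printf("full 512KB L2 writeback: %.0f cycles (~%.1f us at %.0f MHz)\n",
           cycles, usec, gpu_hz / 1e6);
    return 0;
}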
 

Codeblew

Member
Indirectly it does, because the data that will go back into that cache has to come from somewhere, and if it's coming from ESRAM, which is low latency and on chip and close to the GPU's execution units, then there's no way it's taking 400+ cycles to get some data back in those caches for processing. PC GPUs without embedded low latency ESRAM handle GPGPU tasks quite effectively already, so certainly even without packing the most raw horsepower, the Xbox One isn't exactly incapable of doing these things effectively.

http://www.youtube.com/watch?v=4MfYuP6L44k#t=2m50s

Ryse's boat scene, as shown in that vid up top, is a perfect example of impressive GPGPU work in action on the Xbox One. I really don't think the system will have an issue with this type of stuff. I know I said I was headed to bed, but I just wanted to post this last thing.



You're a regular chip off the old block. :p

The 400 cycles (or whatever it is) is how long it takes to flush the cache, not repopulate it. This has nothing to do with ESRAM.
 
Jesus Christ, why are some of you compelled to try to defend the indefensible?

It's pretty damn clear that the PS4 is more powerful than the Xbone in terms of raw hardware power. We've had 3 or 4 developers say that already. The spec sheet speaks for itself. Heck, we had the article in which AMD themselves stated that the PS4 was more powerful, right? Why would Microsoft keep announcing hardware upgrades if there was no need to close the gap, perceived or otherwise?

Does this somehow diminish the awesomeness of Titanfall?

Reading some of the posts here feels like watching salmon try to swim upstream. If you are buying an Xbone and are happy with what you've seen, then be content with your system and move on. I'm just as guilty as the next man of contributing to the system wars bullshit that is going on now, but there comes a time when you have to look at all the evidence and just accept the facts.

PS4 is more powerful but how will that factor in the development of the games?
What if Microsoft becomes the de facto system for developers like last gen?
How is the power difference translating into actual game performance?

I feel like these are the questions we should actually be debating, not some nonsense PR answer about DirectX and ESRAM.
The gifs are pretty awesome though. :)
 
Serious question, how do you know that was GPGPU and not simply done by the CPU? Did the developers reveal how they achieved it? If so that's cool, I just might have missed the story/topic.

I thought it was confirmed already in one of their interviews or tech demos.

http://www.youtube.com/watch?feature=player_embedded&v=vF1zjDSqPoo

I guess I might be wrong. I thought it was confirmed in this CryEngine demo that contains that very scene from Ryse, but I guess not. Maybe it was only referring to the GPGPU weather effect.
 
Get over it. Ms has a higher CPU clock. It will be nice snapping apps while watching tv and playing games. Hit me up when Sony does that.

Well, I would be very glad to not have a snap feature, if this means that the console does not have to reserve 2 cores and 10% of the GPU for the OS. If I had to choose between 60fps and a snap feature, I'd take 60 fps any day of the week.
 

sirap

Member
Well, I would be very glad to not have a snap feature, if this means that the console does not have to reserve 2 cores and 10% of the GPU for the OS. If I had to choose between 60fps and a snap feature, I'd take 60 fps any day of the week.

I'm still scratching my head over this feature. Is it an American thing? I don't know why I'd want to see my friend's webcam feed while gaming or watching movies. If you're playing a game you'd probably want to focus on doing that instead of staring at a floating head. Voice seems good enough...
 