
How much different will X1 and PS4 multiplats be visually?

Enough of a performance difference to force MS to hype the shit out of superior online, system versatility, and motion control integration.

MS already lost the power battle, they better take advantage of their usability advantages...

Mandating features in 3rd party games that simply aren't possible on PS4 (Kinect 2) would be great; the problem is they haven't shown what those exclusive features are, so no one is confident they will make a difference.

They are in a pickle, because if Kinect hardly adds anything to games and third parties ignore Kinect integration, then all they're left with is people paying the same price for inferior versions of games. And later down the line people will leave an already sinking ship because their money isn't working for them.
 

Averon

Member
After reading the old posts I noticed that the X1 camp went from "MS will go all out this gen, Sony can't compete financially" to "40% is negligible, we have AUDIO" lol

EPIC TRANSFORMATION

Oh man, this thinking was rampant back then. MS was going to have this beastly console that Sony just couldn't match because they had no money. I also noticed this notion has now transferred to third party games.

"Sony just can't afford to compete with MS's money hats to buy 3rd party exclusivity!!"
 

ekim

Member
Based on the hardware, what makes you think that? What does "5 to 10 percent" even mean?

PC GPUs with the same performance gap have shown pretty big real-world differences.

Although they share a similar base architecture, both machines still have some peculiarities. In the end, it depends on how much the unique features in the X1 can make up for the more straightforward approach combined with the raw power advantage of the PS4. And all things considered, with my basic understanding, the gap isn't as large as many think. So 5-10% seems about right and could mean a higher resolution or a slightly better frame rate on the PS4 - or just simple things like better shadow resolution.
 

TheKurgan

Member
25 fps can feel very sluggish compared to a locked 30 fps. So actually 5 fps can make a big difference.

Agreed, but the difference between 55 fps and 60 fps won't be as noticeable. Both GPUs will be capable of decent frame rates.

But if you think all that extra raw GPU power will only result in slightly sharper textures and a small FPS bump, you're mistaken. It will likely result in better image quality (resolution, AA) and better/more transparencies (more foliage, etc.). Third party devs can take advantage of this extra power without a lot of effort.

Again, I agree with you. I just don't think the differences you are talking about will amount to anything but fuel for the fanboy war. I know people always get excited about little differences in draw distance, foliage density, etc., but it doesn't change the gameplay experience. We aren't talking about PS2 to Xbox differences here.

My point was don't worry about hardware specs; they don't matter that much in the end. Worry about the games you want to play. If you want to play Killzone and Infamous, buy a PS4; if you would prefer Halo and Forza, buy an Xbox One. The multiplats will be similar enough not to be a determining factor in your console choice.
 

CoG

Member
Agreed, but the difference between 55 fps and 60 fps won't be as noticable. Both GPU's will be capable of decent frame rates.

It's a huge difference actually, because rendering at 60 fps or above will lock at 60 and give a smooth experience, while anything between 30 and 60 fps will not lock to the monitor's refresh rate and won't be as good an experience visually.
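To put rough numbers on that, here's a toy model of it (my own sketch, assuming plain double-buffered vsync on a 60 Hz display, no triple buffering or adaptive sync - real games vary):

```python
import math

# Toy model: with double-buffered vsync, a finished frame still waits for the
# next refresh of a 60 Hz display, so the effective frame time gets rounded up
# to a whole number of refresh intervals. (Assumed setup, not from this thread.)
REFRESH_HZ = 60.0
REFRESH_MS = 1000.0 / REFRESH_HZ  # ~16.7 ms per refresh

def displayed_fps(render_ms):
    refreshes = max(1, math.ceil(render_ms / REFRESH_MS - 1e-9))  # epsilon for float safety
    return 1000.0 / (refreshes * REFRESH_MS)

for gpu_fps in (65, 60, 55, 45, 30, 25):
    render_ms = 1000.0 / gpu_fps
    print(f"GPU renders at {gpu_fps} fps -> screen shows {displayed_fps(render_ms):.0f} fps")

# Anything rendered at 60 fps or faster locks to a clean 60. A game that only
# manages 55 or 45 fps drops to 30 on screen (or judders between 60 and 30 as
# frame times vary), and 25 fps falls to 20 - which is why "a few fps" around
# these thresholds matters more than the raw numbers suggest.
```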
 
After reading the old post I noticed that X1 camp went from " MS will go all out this gen, Sony can't compete financially" to " 40% is negligible, we have AUDIO" lol

EPIC TRANSFORMATION

Strange. I think most sane posters realized that both companies would pursue very similar models to the 360 and PS3 - comparable power levels with MS looking for higher margins.
 
I just don't think the differences you are talking about will amount to anything but fuel for the fan boy war.

Consumer satisfaction for getting superior products for the same price.

Constantly stealth-trolling xbones on forums as to which version is superior and running a tally of all the games which are superior with graphs and everything.

profit.

You might not think it makes any difference whatsoever, but in reality it does.
 

Piggus

Member
Agreed, but the difference between 55 fps and 60 fps won't be as noticable. Both GPU's will be capable of decent frame rates.



Again I agree with you. I just don't think the differences you are talking about will amount to anything but fuel for the fan boy war. I know people always get excited about little differences in draw distance, foliage density,etc.. but it doesn't change the game play experience. We aren't talking about PS2 to XBOX differences here.

My point was don't worry about hardware specs, they don't matter that much in the end. Worry about the games you want to play. If you want to play Killzone and Infamous buy a PS4, if you would prefer Halo and Forza buy a XBOX ONE. The multiplats will be similar enough to not be a determining factor in your console choice.

I think it's more important for people who own both systems. It's not really great for MS if every multiplatform game is better on PS4, even if just a little. Because that's the version people will buy.

I'm buying third party games on PC anyway so it doesn't matter to me. :p
 

TheKayle

Banned
Agreed, but the difference between 55 fps and 60 fps won't be as noticable. Both GPU's will be capable of decent frame rates.



Again I agree with you. I just don't think the differences you are talking about will amount to anything but fuel for the fan boy war. I know people always get excited about little differences in draw distance, foliage density,etc.. but it doesn't change the game play experience. We aren't talking about PS2 to XBOX differences here.

My point was don't worry about hardware specs, they don't matter that much in the end. Worry about the games you want to play. If you want to play Killzone and Infamous buy a PS4, if you would prefer Halo and Forza buy a XBOX ONE. The multiplats will be similar enough to not be a determining factor in your console choice.


This should be added to the OP.
 

TheKayle

Banned
I think it's more important for people who own both systems. It's not really great for MS if every multiplatform game is better on PS4, even if just a little. Because that's the version people will buy.

I'm buying third party games on PC anyway so it doesn't matter to me. :p

There aren't a lot of owners of both consoles, I think... but yes, if every multiplat looks better on PS4 it would be bad for MS (though I don't think that will happen).
 

vpance

Member
No one can say anything is certain at this point until we can compare multiplat titles side by side. But even then we have to consider which game it is. BF4 would be the best measuring stick, since it'll probably be the most intensive on the hardware.

My prediction? 900p+ on PS4, 720p on Xbone with less AA and lower texture quality.
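For what it's worth, the raw pixel math behind guesses like that looks like this (just pixel counts - it ignores AA, effects and CPU load, and assumes shading cost scales roughly with resolution):

```python
# Pixel counts for the resolutions people keep throwing around.
resolutions = {
    "720p":  (1280, 720),
    "900p":  (1600, 900),
    "1080p": (1920, 1080),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
for name, count in pixels.items():
    print(f"{name:>5}: {count:,} pixels")

print(f"900p  vs 720p: {pixels['900p'] / pixels['720p']:.2f}x the pixels")
print(f"1080p vs 900p: {pixels['1080p'] / pixels['900p']:.2f}x the pixels")
print(f"1080p vs 720p: {pixels['1080p'] / pixels['720p']:.2f}x the pixels")

# 900p is ~1.56x the pixels of 720p, and 1080p is 2.25x. So the ~40% GPU gap
# people keep quoting lines up far more naturally with a 720p-vs-900p split
# than with 720p-vs-1080p at otherwise matched settings.
```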
 

TheKurgan

Member
I think it's more important for people who own both systems. It's not really great for MS if every multiplatform game is better on PS4, even if just a little. Because that's the version people will buy.

I'm buying third party games on PC anyway so it doesn't matter to me. :p

Yeah, if you own both platforms I could see the differences mattering. I know for a few games this gen I based my buying decision on which was the lead platform. But in most cases I just bought the multiplatform game with the best exclusive content. :p
 

eastmen

Banned
No one can say anything is certain at this point until we can compare multi plat titles side by side. But even then we have to consider which game it is. BF4 would be the best measuring stick, since it'll probably be the most intensive on the hardware.

My prediction? 900p+ on PS4, 720p on Xbone with less AA and lower texture quality.

Not sure a launch title will matter.


You'd have to wait a few years, really, to see what the difference is. There is a lot we don't know about. The PS4 seems to have a GPU advantage, the Xbox One a CPU advantage. But who knows what either will equate to, and whether we even know everything about both boxes yet.

The PS4 may have more than just a faster GPU. Some of the changes that MS won't talk about until the NDA is up might narrow the GPU gap. The audio blocks in the Xbox One could be much more advanced than what is in the PS4, and thus the PS4 could fall further behind in CPU power by having to make up for sound processing.

Too much to really know. Everyone thought the PS3 would barrel over the Xbox 360 leading up to and months after release. But in the end it was the 360 that produced the majority of the better visual experiences.

So like I said, we have to wait a lot longer than launch to see what's up.
 

pixlexic

Banned
Hope you guys realize that the toolsets for the next gen consoles are nowhere near the level of maturity of current gen tools. There is going to be a pretty large performance loss because of this.

It will take time for the Unreals and Frostbites to get the new consoles going like they should.
 

Knuf

Member
Although they share a similar base architecture, both machines still have some singularitys. In the end, it depends how much the unique features in the X1 can make up for the more straight forward approach combined with the raw power advantage on the PS4. And all things considered, with my basic understanding, the gap isn't that large as many think. So 5-10% seems about right and could mean a higher resolution or a slightly better frame rate on the PS4 - or just simple things like better shadow resolution.

I simply don't understand how you and some others can think the real gap between the two will be smaller than in theory.
For the first time in console history we have two very similar architectures, with the more powerful one (most gaming code these days pushes the GPU; the CPU matters comparatively little) also being the one that is easier to develop for and, more importantly, the one with the simpler architecture. How anyone can think the weaker, more exotic console will be more efficient than the other, when logic suggests it is much more prone to bottlenecks somewhere, is beyond me. I'd expect the real-world gap to be even wider than the theoretical 30-40%, not the opposite.
 

cormack12

Gold Member
I imagine they'll take the game's final build and run it through the dev kits. They'll then strip out whatever assets they need to in order to hit the performance levels they're aiming for. They'll probably need to strip out less from the PS4 builds, and may also be able to lock the frame rate higher.

I doubt they'll make an Xbox build and then port it to PS4; the architectures are now really similar. I expect very few games to run at 1080p natively, simply because of anti-aliasing. Aside from that, though, there could be partnerships in which the 3rd party agrees to release equivalent builds in order to remain independent.
 

TechnicPuppet

Nothing! I said nothing!
I simply don't understand how you and some others can think the real gap between the two will be smaller than in theory.
For the first time in console history we have 2 very similar architectures, with the most powerful (most of the gaming code these days pushes the gpu, the cpu counts relatively) also being the one easier to develop for and more importantly the one with the simpler architecture: how can you even think that the weaker and more exotic console will be more efficient than the other, when logic suggests that the first is much more prone to have bottlenecks somewhere, is beyond me: I'd expect the gap in real world to be even wider than the theoretical 30-40%, not the opposite.

At the same time as you are asking why some think the differences will be smaller than the numbers indicate in theory, you are suggesting the gap will actually be bigger than that.
 

ekim

Member
I simply don't understand how you and some others can think the real gap between the two will be smaller than in theory.
For the first time in console history we have 2 very similar architectures, with the most powerful (most of the gaming code these days pushes the gpu, the cpu counts relatively) also being the one easier to develop for and more importantly the one with the simpler architecture: how can you even think that the weaker and more exotic console will be more efficient than the other, when logic suggests that the first is much more prone to have bottlenecks somewhere, is beyond me: I'd expect the gap in real world to be even wider than the theoretical 30-40%, not the opposite.

Because you are simply looking at only a few numbers in your equation. The eSRAM is probably more powerful than some of you assume. Just look at what the eDRAM did for the 360 - you can get several forms of post-processing and AA basically for zero bandwidth cost. That's a clear advantage. And as Albert Penello hinted, there seem to be even more variables involved that are unknown at this point. This is the basis for my assumption that the multiplat difference will be smaller than expected. I could also be totally wrong.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Because you are simply looking at only a few numbers in your equation. The ESRam is probably more powerful than some of you assume. Just look for what the Edram has done for the 360 - you can get several forms of post processing and AA basically for zero bandwidth cost.

Why zero? Everything that you read/write into the eSRAM should consume eSRAM bandwidth. The 360 had the benefit of having additional pixel processors attached to it that performed some tasks like AA "for free", but that was not an immediate effect of the eDRAM itself.
 

ekim

Member
Why zero? Everything that you read/write into the eSRAM should consume eSRAM bandwidth. The 360 had the benefit of having additional pixel processors attached to it that performed some tasks like AA "for free", but that was not an immediate effect of the eDRAM itself.

As I said - I'm not an expert - I just read that from an actual developer:
http://beyond3d.com/showpost.php?p=1738762&postcount=3325

On Xbox 360, the EDRAM helps a lot with backbuffer bandwidth. For example in our last Xbox 360 game we had a 2 MRT g-buffer (deferred rendering, depth + 2x8888 buffers, same bit depth as in CryEngine 3). The g-buffer writes require 12 bytes of bandwidth per pixel, and all that bandwidth is fully provided by EDRAM. For each rendered pixel we sample three textures. Textures are block compressed (2xDXT5+1xDXN), so they take a total 3 bytes per sampled texel. Assuming a coherent access pattern and trilinear filtering, we multiply that cost by 1.25 (25% extra memory touched by trilinear), and we get a texture bandwidth requirement of 3.75 bytes per rendered pixel. Without EDRAM the external memory bandwidth requirement is 12+3.75 bytes = 15.75 bytes per pixel. With EDRAM it is only 3.75 bytes. That is a 76% saving (over 4x external memory bandwidth cost without EDRAM). Deferred rendering is a widely used technique in high end AAA games. It is often criticized to be bandwidth inefficient, but developers still love to use it because it has lots of benefits. On Xbox 360, the EDRAM enables efficient usage of deferred rendering.

Also a fast read/write on chip memory scratchpad (or a big cache) would help a lot with image post processing. Most of the image post process algorithms need no (or just a little) extra memory in addition to the processed backbuffer. With large enough on chip memory (or cache), most post processing algorithms become completely free of external memory bandwidth. Examples: HDR bloom, lens flares/streaks, bokeh/DOF, motion blur (per pixel motion vectors), SSAO/SSDO, post AA filters, color correction, etc, etc. The screen space local reflection (SSLR) algorithm (in Killzone Shadow Fall) would benefit the most from fast on chip local memory, since tracing those secondary rays from the min/max quadtree acceleration structure has quite an incoherent memory access pattern. Incoherent accesses are latency sensitive (lots of cache misses) and the on chip memories tend to have smaller latencies (of course it's implementation specific, but that is usually true, since the memory is closer to the execution units, for example Haswell's 128 MB L4 should be lower latency than the external memory). I would expect to see a lot more post process effects in the future as developers are targeting cinematic rendering with their new engines. Fast on chip memory scratchpad (or a big cache) would reduce bandwidth requirement a lot.
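Writing his example out as a quick script (these are the numbers from the quoted post, not mine) shows where the 76% figure comes from:

```python
# Bandwidth arithmetic from the quoted post: a 360 deferred renderer writes its
# G-buffer into EDRAM and only reads textures from external main memory.
gbuffer_bytes_per_pixel = 12        # depth + 2x 8888 render targets
texture_bytes_per_texel = 3         # 2x DXT5 + 1x DXN, block compressed
trilinear_overhead = 1.25           # ~25% extra texels touched by trilinear filtering

texture_bytes_per_pixel = texture_bytes_per_texel * trilinear_overhead      # 3.75

external_without_edram = gbuffer_bytes_per_pixel + texture_bytes_per_pixel  # 15.75
external_with_edram = texture_bytes_per_pixel                               # 3.75

print(f"External bandwidth without EDRAM: {external_without_edram} bytes/pixel")
print(f"External bandwidth with EDRAM:    {external_with_edram} bytes/pixel")
print(f"Saving: {1 - external_with_edram / external_without_edram:.0%}")  # ~76%
print(f"Ratio:  {external_without_edram / external_with_edram:.1f}x")     # ~4.2x
```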
 
I simply don't understand how you and some others can think the real gap between the two will be smaller than in theory.
For the first time in console history we have 2 very similar architectures, with the most powerful (most of the gaming code these days pushes the gpu, the cpu counts relatively) also being the one easier to develop for and more importantly the one with the simpler architecture: how can you even think that the weaker and more exotic console will be more efficient than the other, when logic suggests that the first is much more prone to have bottlenecks somewhere, is beyond me: I'd expect the gap in real world to be even wider than the theoretical 30-40%, not the opposite.
Not enough is known about the ways the two differ from each other to be certain about anything just yet. I mean, until we see second generation software that has had the time, resources, and experience to actually operate according to what the hardware is capable of and how best to achieve it, I think it's way too early to get comfortable with any sense of in-game differences equal to the difference in raw FLOP counts.
 
I simply don't understand how you and some others can think the real gap between the two will be smaller than in theory.
For the first time in console history we have 2 very similar architectures, with the most powerful (most of the gaming code these days pushes the gpu, the cpu counts relatively) also being the one easier to develop for and more importantly the one with the simpler architecture: how can you even think that the weaker and more exotic console will be more efficient than the other, when logic suggests that the first is much more prone to have bottlenecks somewhere, is beyond me: I'd expect the gap in real world to be even wider than the theoretical 30-40%, not the opposite.
I'm in the same boat.

This gen kind of showed that devs won't take the time to make the most of exotic hardware. You can't really blame them; they work to tight deadlines.

That's why being straightforward is the best approach, and that's why, when Sony asked devs what they wanted, they gave them exactly what they asked for.

Let's not forget, Cerny said that their number one wish was a single pool of memory. And that's what they now have. So ekim, this eSRAM that you think is going to solve all of your problems - devs don't want it.
 
You'd have to wait a few years really to see what the difference is.

Why would a platform with a 40% better GPU and GDDR5, and which is easier to program for, suffer from identical frame drops and tearing? The differences will be there day one, and that's assuming the laziest of lazy devs.
 

ekim

Member
You're misreading him. When he talks about using a scratchpad 'reducing bandwidth consumption' or 'costing no bandwidth', he's talking about 'external bandwidth' - to DDR3 in this case. These things still all consume bandwidth to/from eSRAM.

Which is still a good thing though. :p
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
You're misreading him. When he talks about using a scratchpad 'reducing bandwidth consumption' or 'costing no bandwidth', he's talking about 'external bandwidth' - to DDR3 in this case. These things still all consume bandwidth to/from eSRAM.

Yes, but while the GPU is using that eSRAM for post-processing work, the DDR3 bandwidth is totally freed up for the CPU. If so, that's quite efficient.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
As I said - I'm not an expert - I just read that from an actual developer:
http://beyond3d.com/showpost.php?p=1738762&postcount=3325

They are not really saving bandwidth but saving "external bandwidth" (= "external memory bandwidth requirement"), that is, bandwidth on the bus between GPU and main memory, since, speaking in a simplified manner, they are only reading texture data from main memory but rendering (i.e., writing pixels) into the internal eDRAM. The total bandwidth that they consume does not change; it is just split between main memory and eDRAM.

That is nothing new, since it is exactly what the eDRAM/eSRAM's main purpose is. It is small enough to hold render targets ("pixel buffers") which, for 1080p, take 1920 x 1080 x the information per pixel (in the example you quoted, 12 bytes) ~= 23MB (or ~11MB at 720p). Despite their relatively small memory footprint, reading and writing into those buffers consumes a lot of bandwidth, especially if you are rendering in multiple passes, which you do in the case of deferred rendering. In this technique you gather information in a first pass and store it in a G-buffer. You then use that information in a second pass to render the actual picture more efficiently. That was actually something of a problem for the 360, since a G-buffer like the one you described didn't quite fit into its 10MB of eDRAM. That is the reason why games like Halo 3 didn't run at true 720p.

Anyway, in summary, you don't save total bandwidth, you save bandwidth between GPU and main memory. Since the bandwidth of DDR3 is limited, mitigating that limit is the main purpose of eSRAM.
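To put rough numbers on the sizes (using the 12 bytes/pixel from the example above; the 32 MB eSRAM capacity is MS's published figure, not something stated in this thread):

```python
# G-buffer size at common resolutions for a 12 bytes/pixel layout, compared
# against the 360's 10 MB eDRAM and the X1's reported 32 MB eSRAM.
EDRAM_MB = 10
ESRAM_MB = 32  # assumed from MS's public specs

def gbuffer_mb(width, height, bytes_per_pixel=12):
    return width * height * bytes_per_pixel / (1024 * 1024)

for name, (w, h) in {"720p": (1280, 720), "1080p": (1920, 1080)}.items():
    size = gbuffer_mb(w, h)
    print(f"{name}: ~{size:.1f} MB "
          f"(fits in {EDRAM_MB} MB eDRAM: {size <= EDRAM_MB}, "
          f"fits in {ESRAM_MB} MB eSRAM: {size <= ESRAM_MB})")

# ~10.5 MB at 720p - just over the 360's 10 MB, hence sub-720p render targets
# (Halo 3) or tiling - and ~23.7 MB at 1080p before any extra buffers are added.
```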
 

Neo Ankh

Neo Member
Why would a platform with 40% better gpu, gddr5 and easier to program for suffer from identicle frame drops and tearing. The differences will be there day1 and that's assuming the laziest of lazy devs.

This. Everyone will still just blame the dev though.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Yes but while the GPU is using that eSRAM for post processing work. The DDR3 bandwidth is totally freed up for the CPU. If so. That's quite efficient.

Having a unified memory pool with the combined bandwidth of two DDR3/eSRAM pools is not less efficient, though. In fact, it's more flexible. You don't run into problems like not being able to fit your render targets into a small eSRAM pool. The render targets of KZ:SF are already ~40MB in size.
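A trivial sanity check on that point (the 32 MB figure is the X1's eSRAM capacity as published by MS, not something stated in this thread):

```python
# ~40 MB of render targets (the KZ:SF number above) cannot all sit in a 32 MB
# eSRAM pool at once, so something has to spill to DDR3, shrink, or be tiled.
ESRAM_MB = 32              # assumed X1 eSRAM capacity
render_targets_mb = 40     # KZ:SF figure quoted above

spill_mb = max(0, render_targets_mb - ESRAM_MB)
print(f"Render targets: {render_targets_mb} MB, eSRAM: {ESRAM_MB} MB, "
      f"spill to DDR3: {spill_mb} MB ({spill_mb / render_targets_mb:.0%})")

# A single unified pool has no such split to manage, which is the flexibility
# point being made here.
```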
 
I think the multiplats will look better on the PS4, but not by as much as people think. I'm thinking something like the difference between the PS3 and X360 for Red Dead Redemption. It would be noticeable side by side and to Digital Foundry, but both versions would be perfectly usable. I don't think the extra GPU resources of the PS4 will necessarily show themselves as better frame rates or resolutions - it's unclear how that will work. The most likely advantage will be better textures and effects. The only time you'd see a PS4 game at 60fps and an X1 version at 30fps is if someone has difficulty extracting performance from the memory system on the X1.

Also, I do wish people would stop quoting Microsoft on the 15 custom processors bit. It's been clarified that 12 of those were the CUs on the GPU. That leaves 3 custom processors. My guess is those would be audio (SHAPE), Kinect processing and some sort of trust zone. One of the problems with Microsoft's messaging is how willing they have been to deliberately obfuscate things to avoid admitting that they are down on raw power. That's led to idiocies like the 'infinite power of the cloud', 5 billion transistors and now 15 custom processors. It comes across as slimy, and you'd be surprised how many people see through it. They need to stop letting PR people like Hryb drive the messaging and let people like Marc Whitten take over that department. They immediately come across as more trustworthy.
 

Knuf

Member
At the same time as you are asking why some think the differences will be less than the numbers indicate in theory you are suggesting the gap will actually bigger than this.

If you think about it, you'll see why. Bottlenecks aside, this time there is a paradox: the "lazier" devs will actually get much better results on the more powerful console, since they'd have to optimize their code for the weaker target if they wanted it to run on par, not the other way around.

Because you are simply looking at only a few numbers in your equation. The ESRam is probably more powerful than some of you assume. Just look for what the Edram has done for the 360 - you can get several forms of post processing and AA basically for zero bandwidth cost. That's a clear advantage. And as Albert Penello hinted, there seem to be even more variables involved that are unknown to that date. This is the basis for my assumption, that multiplat difference will be smaller than expected. I can also be totally wrong.

Even if the eSRAM is more powerful in practice than in theory, it's there for only one reason: increasing the memory bandwidth available to the GPU, since DDR3 would be too slow to access. Let's pretend its bandwidth is on par with or better than the GDDR5 memory: this doesn't change the fact that the GPU in the PS4 is faster and better at computation than the one found in the Xbone, and that is a clear advantage.
The other unknown variables are most probably the DMEs, SHAPE and the other custom chips, but I think their main purpose is to free the CPU from some of its tasks. Then again, in order to fully exploit them, devs will probably have to put in some non-trivial effort here too: will they think it's worth their time?
On the PS3, most devs clearly showed they don't.
 

buenoblue

Member
I think the X1 could have up to 30% more CPU power than the PS4. Here's why.

X1 has dedicated servers freeing up to 10% of the CPU power for multiplayer and heavily integrated online games that the PS4 CPU has to deal with. Add the 5-10% CPU savings from the X1 having dedicated audio processors, then the 10% higher clock than the PS4, and you have a significant CPU advantage for the X1 over the PS4.

I also believe the X1's use of DirectX gives it an advantage simply because it's the industry standard and such a mature toolset.

I mean, anyone who's not a fanboy can see that Killzone, Knack and Driveclub look mediocre, and on top of that run at 30fps. The more I see of the X1 the more balanced it looks.

I've preordered both the PS4 and X1 so I'll be enjoying all the goodness. From the initial specs I thought the PS4 was going to blow the X1 out of the water, but that doesn't seem to be true. Also, can I just say that dedicated servers on every game is a game changer. No more host advantage, the reason I fell out with online play this past gen. Yay.
 

Hollow

Member
I mean anyone who's not a fanboy can see that killzone, knack and driveclub look mediocre, and on top of that run at 30fps. The more I see of x1 the more balanced it looks.

 

Caronte

Member
I think the X1 could have up to 30% more CPU power than ps4. Here's why.

X1 has dedicated servers freeing up to 10% CPU power for multiplayer and heavily integrated online games that the ps4 CPU has to deal with. Add on the 5-10% CPU savings from x1 having dedicated audio processors, Then the 10% higher clock than ps4 and you have a significant CPU advantage on x1 over ps4.

I also believe the x1's use of directx gives it an advantage simply because its the industry standard and such a mature toolset.

I mean anyone who's not a fanboy can see that killzone, knack and driveclub look mediocre, and on top of that run at 30fps. The more I see of x1 the more balanced it looks.

I've preordered both ps4 and x1 so ill be enjoying all the goodness. From the initial specs I thought the ps4 was gonna blow the x1 out the water, but that doesn't seem to be true. Also can I just say that dedicated servers on every game is a gamechanger. No more host advantage, the reason I fell out with online on past gen. Yay.

Jesus Christ.
 

RoboPlato

I'd be in the dick
I think the X1 could have up to 30% more CPU power than ps4. Here's why.

X1 has dedicated servers freeing up to 10% CPU power for multiplayer and heavily integrated online games that the ps4 CPU has to deal with. Add on the 5-10% CPU savings from x1 having dedicated audio processors, Then the 10% higher clock than ps4 and you have a significant CPU advantage on x1 over ps4.

I also believe the x1's use of directx gives it an advantage simply because its the industry standard and such a mature toolset.

I mean anyone who's not a fanboy can see that killzone, knack and driveclub look mediocre, and on top of that run at 30fps. The more I see of x1 the more balanced it looks.

I've preordered both ps4 and x1 so ill be enjoying all the goodness. From the initial specs I thought the ps4 was gonna blow the x1 out the water, but that doesn't seem to be true. Also can I just say that dedicated servers on every game is a gamechanger. No more host advantage, the reason I fell out with online on past gen. Yay.
You don't have to be a fanboy to think Killzone looks mediocre? OK. That's also 60fps in multiplayer.

Offloading AI to the cloud is only useful for multiplayer games, and any game that uses the cloud can do the same thing on PS4.

The DirectX API may be standard but several devs haven't liked the most recent changes to it and have said that the PS4 toolset and API are easier to use and allow for lower level access.
 
I think the X1 could have up to 30% more CPU power than ps4. Here's why.

X1 has dedicated servers freeing up to 10% CPU power for multiplayer and heavily integrated online games that the ps4 CPU has to deal with. Add on the 5-10% CPU savings from x1 having dedicated audio processors, Then the 10% higher clock than ps4 and you have a significant CPU advantage on x1 over ps4.

I also believe the x1's use of directx gives it an advantage simply because its the industry standard and such a mature toolset.

I mean anyone who's not a fanboy can see that killzone, knack and driveclub look mediocre, and on top of that run at 30fps. The more I see of x1 the more balanced it looks.

I've preordered both ps4 and x1 so ill be enjoying all the goodness. From the initial specs I thought the ps4 was gonna blow the x1 out the water, but that doesn't seem to be true. Also can I just say that dedicated servers on every game is a gamechanger. No more host advantage, the reason I fell out with online on past gen. Yay.

 

TechnicPuppet

Nothing! I said nothing!
I think the X1 could have up to 30% more CPU power than ps4. Here's why.

X1 has dedicated servers freeing up to 10% CPU power for multiplayer and heavily integrated online games that the ps4 CPU has to deal with. Add on the 5-10% CPU savings from x1 having dedicated audio processors, Then the 10% higher clock than ps4 and you have a significant CPU advantage on x1 over ps4.

I also believe the x1's use of directx gives it an advantage simply because its the industry standard and such a mature toolset.

I mean anyone who's not a fanboy can see that killzone, knack and driveclub look mediocre, and on top of that run at 30fps. The more I see of x1 the more balanced it looks.

I've preordered both ps4 and x1 so ill be enjoying all the goodness. From the initial specs I thought the ps4 was gonna blow the x1 out the water, but that doesn't seem to be true. Also can I just say that dedicated servers on every game is a gamechanger. No more host advantage, the reason I fell out with online on past gen. Yay.

Killzone does not look mediocre, not in any way, shape or form.
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Having a unified memory pool with the combined bandwidth of two DDR3/eSRAM pools is not less efficient, though. In fact, it's more flexible. You don't run into problems like not being able to fit your render targets into a small eSRAM pool. The render targets of KZ:SF are already ~40MB in size.
You don't factor in the very low latency of eSRAM.
 

pixlexic

Banned
I think the X1 could have up to 30% more CPU power than ps4. Here's why.

X1 has dedicated servers freeing up to 10% CPU power for multiplayer and heavily integrated online games that the ps4 CPU has to deal with. Add on the 5-10% CPU savings from x1 having dedicated audio processors, Then the 10% higher clock than ps4 and you have a significant CPU advantage on x1 over ps4.

I also believe the x1's use of directx gives it an advantage simply because its the industry standard and such a mature toolset.

I mean anyone who's not a fanboy can see that killzone, knack and driveclub look mediocre, and on top of that run at 30fps. The more I see of x1 the more balanced it looks.

I've preordered both ps4 and x1 so ill be enjoying all the goodness. From the initial specs I thought the ps4 was gonna blow the x1 out the water, but that doesn't seem to be true. Also can I just say that dedicated servers on every game is a gamechanger. No more host advantage, the reason I fell out with online on past gen. Yay.


Knack and Driveclub I will give you, but Killzone? Not by a long shot.
 

buenoblue

Member
I'm just saying I don't think Killzone single player, Driveclub and Knack running at 30fps showcase a night and day upgrade over the X1. There must be some reason for this, and a CPU advantage for the X1 could be it. I'm primarily a PC gamer and, as I've said, I'm buying both machines at launch. I'm all about the games and an objective view.
 

buenoblue

Member
Killzone Shadow Fall reminds me a lot of Uncharted and Killzone on PS3: stunning in-engine cutscenes followed by lower quality actual gameplay.
 