That could be straight from some PR flyer.
Which is probably where it's from; too many guerrilla marketing morons these days. There, I said it.
I thought it was known that PS4 has 4.5 GB GDDR5 available to devs and X1 5 GB DDR3? There's that additional 512 MB on PS4 but that's for some caching/virtual disk thing. Might wanna update those PR flyers unless you want to be sued for pretending otherwise.
You cannot use what we've seen so far as any kind of determinant. Most (all?) of the games shown at E3 and elsewhere have been running on powerful PC hardware, nothing like what's in the final consoles.
Edit in response to your edit: stop using the DF article as any kind of source, their comparison is garbage, as has been discussed at great length here and elsewhere.
I have an ATI card and it doesn't look like that -- currently. If you do a reverse image search on that shot you'll see that it was taken a day after launch, which was before AMD got their shit together and released the correct drivers for the game.
It's kinda sad that even to this day id catches so much flak for something that was entirely AMD's fault...
@ The Kayle :
Sigh... you don't make any sense. First, the power gap (which exists) is not a huge one. We're not talking about a two-to-three times more powerful machine, so the differences should be minimal (AA, a more stable framerate, perhaps better post-processing...). Second, we don't have a way of comparing when there are so many variables creating noise (how long was the development of the titles on each platform? what about the budgets? what sacrifices were made to get the engine running?...). Plus, what are you comparing exactly: Ryse vs Knack?
You're going to have to wait until November to see them doing the same laps.
I do not believe anyone has shown their game running on both pieces of hardware, or side by side. At E3 everyone was showing on one or the other - although, tellingly, mostly on PS4 or, supposedly, PC.
So we'll see when the systems and games are released how they compare 'on the same laps'. In the meantime, though, for people trying to make a decision now around which to get or which to preorder, I'm not sure how anyone could argue PS4 isn't the safe bet.
I judge a GAMING console by the GAMES they show me... I don't get off on Cerny's or Major Nelson's words... and if we have to go by specs, I really don't like talking about APU machines, because that's like trying to win an F1 race with a Volvo.
And again, like last gen, there were third-party games that looked better than exclusives (MGS5, The Division)... exclusives running on what should be the less powerful console looked better than games running on the one that's most powerful on paper... and there will be some games that show the power differences, etc. etc.
That's it... another PS3/X360 gen.
OK, let's put it this way.
There are two cars:
1 - a 1200 cc
2 - a 1800 cc turbo
Both cars need to be presented to the customers by doing some laps.
Before any of this starts, we already know that the first car (1200) is not only slower, but its mechanics are also behind on optimizing the aerodynamics, engine, etc. Everyone knows this.
The second car (1800 turbo) is ready: its engine EEPROM is well developed and optimized, everything works great, and on paper we already know this car has at least a 50% advantage over the first one.
The day of the presentation arrives and they do their laps.
When they run, the 1800 turbo doesn't destroy the 1200; instead they finish pretty much equal, or people need the photo finish, because the 1200 probably did better than the 1800 turbo.
This is what happened with the presentations of the games running on both consoles.
On paper the PS4 should destroy the XB1.
If one of the two consoles has better-developed drivers (PS4), better dev tools (PS4), a 50% more powerful GPU (PS4), a unified pool of faster RAM (PS4), more RAM allocated for games (PS4), and a lighter OS (PS4),
do you need years of learning curve to show the power differences?
And for example, even if both were running unoptimized games, it would be EASY for the most powerful console, with this big a gap, to show the power differences... even the worst developer should manage to show that.
It should be clear as night/day, but we didn't see it, and something doesn't add up to me,
because I didn't feel the differences AT ALL, and I'm sure that for the average Joe, graphics like those of Ryse or Quantum Break could be the best of what we've seen.
Is this underwhelming for devs working on a console that is so much better than the other? Or will the gap be just what DF showed us: a number of frames you can count on two hands' fingers?
MGS2 and ZOE2, on the PS2, were more impressive than many Xbox games, and the Xbox was a lot more powerful. They were made by talented developers and targeted the strengths of the machine (bandwidth, mainly...).
As for games looking better on the Xbox One, I'd like to know which ones you're referring to...
I think that, graphically speaking, Ryse looked impressive... but TO ME, if the words of the Remedy creative director are true and it was real-time in-engine, Quantum Break was the best of the show.
Yes, some fps, something around 25% in certain situations (I think DF talked about this).
That was before the upclock... now it's something around 18%.
That was a cutscene, not a gameplay segment. By that logic we should also be able to include The Order 1886, which was easily just as impressive, and the Dark Sorcerer demo, an even more impressive bit, proven to be real-time. As for the games playable on devkits (Ryse, Forza, Killzone, KI, Infamous SS), Infamous SS was the best of the bunch, imo.
Clearly the PS4 is the safer bet.
But will MS go all in on games (giving people lots of exclusives), knowing they've got the weaker console? Could this be a "game changer" too, or not?
Obviously things like art direction and creative design will trump technical specs, but if you're going to be comparing titles across platforms...
My point is you actually haven't seen anything of the sort. Everything you've seen was running on jacked-up PC hardware. No one knows what those games will look like running on the actual consoles yet.
I really don't think it's going to be so cut and dried. Ironically, despite far more similar base architectures, the overall directions Microsoft and Sony are pushing have diverged quite a bit (or rather, Microsoft wants to try new directions). Either way, I think going forward we'll see much more pronounced differences, both in games and otherwise, this gen.
I don't know.
I think a look at platform holders' track records in supporting exclusive development across a system's lifecycle might inform a potential buyer here. I don't think there's anything to indicate PS4 is a riskier bet on that front, quite to the contrary perhaps.
If someone has specific games they want that are only available on Xbox One, and are heart set on those franchises or games, obviously that's another scenario. But if someone wants 'the best choice' in general for a console for the next 6 or 7 years, with no specific ties, Xbox is a hard sell right now.
You can't compare GPUs on PC, running the same driver and the same code, to the Xbox One's and PS4's custom hardware.
For starters, the drivers for the Xbox One and PS4 are different and optimised for the hardware. The rendering pipeline and render code base will be different too. And finally, devs are developing on a fixed platform.
I've no doubt there might be differences, but your analysis method is severely flawed.
Debatable.Not as flawed as Digital Foundry's
We can debate the intricacies of the system all we want but the fact of the matter is that the hardware limitations are already in place. No amount of optimization can overcome that unless you are saying that Sony will have less optimized drivers compared to Microsoft.
It really, REALLY isn't.
Debatable.
Xbox One and PS4 are not the same. Devs will have to use different techniques to get the most out of them. I do believe that the eSRAM, if used correctly, could significantly help performance on the Xbox One GPU.
Although on paper the PS4 looks a fair bit more powerful, I think in the real world the performance gap won't be nearly as big as folks think it will be. Certainly not the laughable 50% you've been trying to push.
Sony winning or losing the gen has nothing to do with having the superior hardware. It's a nice argument to have, and could sway some buyers, but there are more important issues (price, ease of use, games catalogue, quality of services, network features, advertising...). They're on the right track, though...
As for the continuing support for the PS3, I don't share your view at all. Not only does it signal that the company commits to its platforms longer than its competitors do, building a trust relationship with its customers (I, for example, will be buying the PS4 solely on the promise of getting a steady stream of first-party games from some of my favorite developers at Sony for the next 6 years, at least), but it's also the better decision financially. The PS3 consumer base is much larger, and Sony wants those profits. Plus, the PS4 will sell regardless of the number of first-party exclusives at launch. The tempting price, the good PR, the brand name and the big third-party sellers will carry it for quite a few months...
Holy irony, Batman!
Everything you said applies just as equally to the Digital Foundry article badb0y was responding to.
Just suppose the situation were reversed (Xbone with better specs than the PS4).
What would Carmack's article, or the media's approach in general, look like?
I bet we would have lots of articles from some sources with titles like:
"Xbox One hardware 50% better than PS4"
"MS shows a monster at E3, Sony in panic"
"DF: We tested both specs, Xbone 25% better FPS, PS4 pales!!!"
"Sony overclocks PS4 CPU 6%, not enough to achieve success!!!"
Are you serious? I am providing raw data, not some pixie dust... lol
My method is not perfect, but it's certainly more accurate than Digital Foundry's. The best method would be the one I mentioned in my original post:
Holy shit, those idiots over at Digital Foundry have really fucked up this conversation, haven't they?
To preface this post, I want to say that although I admire what Digital Foundry tried to do, their methodologies were not only flawed but straight-up wrong.
The first thing they did wrong was selecting the wrong graphics cards to compare the two situations.
For the Xbox One they used an HD 7850 over an HD 7790, while for the PS4 they used a 7870 XT over a 7850. This is wrong because using the 7850 for the Xbox One thrusts it into a whole different category of graphics cards, one that brings many advantages not present in the Xbox One GPU (32 ROPs vs 16 ROPs).
The methodology they used would have been perfect if they had grabbed an HD 7790 and clocked it to ~731 MHz for the Xbox One's GPU, and used an HD 7850 clocked to 898 MHz for the PS4's.
Xbox One = 1.310 teraflops ------- HD 7790 (Bonaire) @ 731 MHz: 0.731 * 2 * 896 = ~1.310 teraflops
PS4 = 1.84 teraflops ------- HD 7850 (Pitcairn) @ 898 MHz: 0.898 * 2 * 1024 = ~1.84 teraflops
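Those peak-throughput figures can be sanity-checked directly: for these GCN parts, peak FP32 throughput is clock x 2 ops per shader per cycle (one fused multiply-add) x shader count. A quick sketch using only the numbers quoted above:

```python
def peak_tflops(clock_mhz: float, shaders: int) -> float:
    # clock (MHz) * 2 FP32 ops per shader per cycle (FMA) * shader count,
    # scaled from MFLOPS to TFLOPS
    return clock_mhz * 2 * shaders / 1e6

xb1_proxy = peak_tflops(731, 896)    # downclocked HD 7790 (Bonaire)
ps4_proxy = peak_tflops(898, 1024)   # overclocked HD 7850 (Pitcairn)
print(f"Xbox One proxy: {xb1_proxy:.3f} TFLOPS")  # -> 1.310
print(f"PS4 proxy:      {ps4_proxy:.3f} TFLOPS")  # -> 1.839
```

Both match the ~1.310 and ~1.84 TFLOPS figures in the post.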
I reckon using these cards at these clocks would really show the GPU difference between the two consoles, but alas, Digital Foundry went about it some other dumbass way.
To get to the point: since I don't have the hardware mentioned above on hand to conduct the tests myself, I can extrapolate data from graphics cards that are similar to what goes into these consoles. For the Xbox One that would be an HD 7770, because it has a similar amount of GPU power to the GPU used in the Xbox One (1.310 Tflops vs 1.28 Tflops), while for the PS4 I would use the HD 7850 (1.84 Tflops vs 1.76 Tflops), and then compare the results for a realistic look at the performance gap. Oh, by the way, the gap between 1.28 (HD 7770) and 1.76 (HD 7850) is smaller than the gap between 1.31 (Xbox One) and 1.84 (PS4), so that's already giving a slight break to the Xbox One in this comparison.
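The "slight break" claim is easy to put in numbers: the proxy cards' FLOPS gap is a touch smaller than the consoles' gap, so the substitution, if anything, flatters the Xbox One side. All figures here are the ones quoted in the post:

```python
# PS4-over-Xbox-One gap vs. the HD 7850-over-HD 7770 proxy gap,
# both expressed as a percentage advantage.
console_gap = 1.84 / 1.31   # PS4 vs Xbox One
proxy_gap = 1.76 / 1.28     # HD 7850 vs HD 7770
print(f"console gap: +{(console_gap - 1) * 100:.1f}%")  # -> +40.5%
print(f"proxy gap:   +{(proxy_gap - 1) * 100:.1f}%")    # -> +37.5%
```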
For my analysis I am using benchmark numbers from Anandtech.com:
http://anandtech.com/bench/product/777?vs=778
Using the data above I came up with the following chart:
I submit to you that the performance difference between the Xbox One and the PS4 will be much more apparent than what Digital Foundry has hypothesized, and that the overall difference in power will not only be tangible but possibly even staggering.
Hey John, let's read the article:
http://www.eurogamer.net/articles/digitalfoundry-rage-face-off
Man, the performance gap between PS3 and 360 is less than 20%, but DF still pointed out quite a few noticeable differences in the game.
Now the performance gap between PS4 and Xbone is more than 40%, and you're telling me they'll still be very close?
In their defense, the bandwidth differential between a 7790 and 7850 would have made it a far worse comparison. The difference between 96GB/s and ~150GB/s of bandwidth is vastly greater at 1080p than the difference between 16 and 32 ROPs.
The higher number of ROPs mainly improves MSAA performance at 1080p (rather than baseline performance without) and Eurogamer ran its tests without MSAA. It wasn't a perfect comparison but it was closer than many people realize.
You are comparing PC parts. When you've done a deep dive on a next-gen dev kit, let me know.
Sorry, I don't understand what you are saying: both consoles use the same CPU, and by virtue of the architecture anything that runs on Xbox One will run better on the PS4 unless there are bottlenecks in other areas.
I can't imagine that situation on consoles. When devs code a game for consoles, they want to stress all components to the max. A good multiplatform game coded for consoles will demand the maximum performance peak from both CPU and GPU. It's not the same situation as in PC games, where you can have a very strong CPU + weak GPU, or crank up GPU-related graphics options (resolution, AA, shaders...) or CPU options (shadows, LOD, draw distance...). You, as the final user, are the performance balancer, tweaking options or buying components.
You have to offer a similar final framerate to the users of both machines, so for a well-balanced game in the console world, you can't end up with better performance just by having a stronger GPU, because your CPU will be stressed to the max. Your only option is to add new, purely graphical differences exclusive to the machine with the stronger GPU, so you don't affect the base CPU requirement. And this is a very undesirable situation for a multiplatform developer.
On top of that, you have to consider that many developments started on a DX11 platform as a base, and not everyone is going to devote the time necessary to achieve an optimal port to Sony's libraries. Perhaps the gap in graphics power over the Xbone will be used to save weeks of development on the port: once similar performance is achieved, they will close development, although more time would have meant a better outcome for the PS4.
So what you're saying is that these systems could have been $299, 800-GFLOPS machines and nobody would notice the difference in visuals?
What I'm saying is it seems easier for GAFfers to talk about a super-duper 50% advantage than for devs to show it in real games... or maybe the Driveclub etc. devs are not good enough... or maybe the difference is just 8-10 fps (which is a lot, but not what people are expecting).
The consoles are using PC parts. What's this obsession with trying to distance the consoles from the PC? These consoles are literally mid-range PCs in a box.
What, in the same way that the Xbox 360 and PS3 were using PC GPUs?
Ludicrous.
There's nothing wrong with comparing two PC GPUs that have the same architecture as the PS4 and Xbox One GPUs and come from the same manufacturer, which produces the same drivers.
We are talking about two systems where one has the advantage in pretty much everything, unlike previous generations, where even in Xbox vs PS2 there were situations where the PS2 hardware was more capable.
The bandwidth between the PS4 and Xbox One is not the same anyway; effective bandwidth on the PS4 is better.
I agree with your other points.
I don't believe it's valid to search for a current PC GPU that seems similar to leaked specs and then run benchmarks on it as a proxy for next-gen console performance. It's laughable.
So you believe the difference in performance comes from the difference in memory bandwidth? If you leave the core clock alone and OC the memory (which increases bandwidth), the performance gain is so minuscule it wouldn't change anything in the grand scheme of things. Maybe you'd squeeze out 5% extra performance, but the fact of the matter is the GPU would run out of processing power way before the memory bandwidth is saturated.
Sure, but that difference is nowhere near the difference between 128-bit and 256-bit interfaces.
The HD 7770 in your chart represents a GPU with 72GB/s and compares it to a GPU with 150GB/s.
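The bandwidth figures thrown around in this exchange can be reproduced from bus width times effective memory transfer rate. The per-card GDDR5 rates below are the stock reference clocks (my assumption; the thread only quotes the resulting GB/s):

```python
def bandwidth_gb_s(bus_bits: int, rate_gtps: float) -> float:
    # peak bandwidth = (bus width in bits / 8 bits per byte) * effective rate (GT/s)
    return bus_bits / 8 * rate_gtps

# Stock reference memory clocks assumed for each card:
print(bandwidth_gb_s(128, 4.5))  # HD 7770: 72.0 GB/s
print(bandwidth_gb_s(128, 6.0))  # HD 7790: 96.0 GB/s
print(bandwidth_gb_s(256, 4.8))  # HD 7850: 153.6 GB/s (the "~150")
```

So the 7770-vs-7850 pairing compares 72 GB/s against ~150 GB/s, while the 7790-vs-7850 pairing would have compared 96 GB/s against the same ~150 GB/s.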