
Battlefront beta - PC Performance Benchmarks

Using a reference 980Ti with stock clocks? Really? Every 980Ti can be overclocked by at least 200MHz. The same cannot be said of the Fury X.
 
Gemüsepizza;181267054 said:
Using a reference 980Ti with stock clocks? Really? Every 980Ti can be overclocked by at least 200MHz. The same cannot be said of the Fury X.

With how big AMD's lead is, a 980Ti at 1500/8000 would still either be tying or losing, depending on which site you go by.
 
quick question

Game runs surprisingly well on my PC: 70 fps on an i5 4460, 8 GB RAM, MSI Gaming R9 280 (972 MHz / 1050 MHz). Settings on Ultra for everything but shadows, AA and texture quality (I think, or another one in the list).

Didn't think it would run this well on the 800€ PC I built in February.

The thing is, it runs smooth during the entire match, but when the game score comes up after the round I get huge framerate drops until the next match starts.

What could be the reason? I turned all settings to medium to check whether it's the graphics settings, but it's still the same! Do I need more RAM? Why is the game perfect with all the stuff going on during the match, while in a menu where nothing happens my PC starts having a heart attack?
 

KKRT00

Member
quick question

Game runs surprisingly well on my PC: 70 fps on an i5 4460, 8 GB RAM, MSI Gaming R9 280 (972 MHz / 1050 MHz). Settings on Ultra for everything but shadows, AA and texture quality (I think, or another one in the list).

Didn't think it would run this well on the 800€ PC I built in February.

The thing is, it runs smooth during the entire match, but when the game score comes up after the round I get huge framerate drops until the next match starts.

What could be the reason? I turned all settings to medium to check whether it's the graphics settings, but it's still the same! Do I need more RAM? Why is the game perfect with all the stuff going on during the match, while in a menu where nothing happens my PC starts having a heart attack?
Everybody has that. Probably just a bug.
 

dr_rus

Member

It's an interesting case for sure, but the game still runs at ~60 fps on a 770 on Ultra at 1080p, so I don't think it matters much if AMD's cards are faring a bit better in it:

[benchmark chart]
 
It's an interesting case for sure, but the game still runs at ~60 fps on a 770 on Ultra at 1080p, so I don't think it matters much if AMD's cards are faring a bit better in it:

[benchmark chart]

It continues the trend of AMD outperforming Nvidia in modern game engines. I don't see why it wouldn't make a difference.
 

dr_rus

Member
It continues the trend of AMD outperforming Nvidia in modern game engines. I don't see why it wouldn't make a difference.

It wouldn't make a difference because this game is unlikely to push anyone into buying a new videocard for it.

And there is no such trend.

[gamegpu.ru benchmark charts: The Vanishing of Ethan Carter Redux (2560), Dota 2 Reborn (3840), Metal Gear Solid V: The Phantom Pain (2560), Mad Max (2560)]


Each engine and game has its own preference for h/w; it has been like this since Voodoo Graphics and it's nothing new.

Considering that SWBF is an AMD Gaming Evolved title, it's hardly a surprise that they've gimped NV's hardware as much as they could. No revelations here.
 

Xyber

Member
It's an interesting case for sure, but the game still runs at ~60 fps on a 770 on Ultra at 1080p, so I don't think it matters much if AMD's cards are faring a bit better in it:

[benchmark chart]

But 60 fps at 1080p is not really what a lot of people with these high-end cards are going for. 120/144 Hz at 1440p and above is where these cards shine, and there you want all the power you can get.

And I definitely think it's a good thing that AMD has a card that is very competitive with Nvidia, because I don't want it to end up like Intel and AMD on the CPU side.
 
It wouldn't make a difference because this game is unlikely to push anyone into buying a new videocard for it.

And there is no such trend.

[gamegpu.ru benchmark charts: The Vanishing of Ethan Carter Redux (2560), Dota 2 Reborn (3840), Metal Gear Solid V: The Phantom Pain (2560), Mad Max (2560)]


Each engine and game has its own preference for h/w; it has been like this since Voodoo Graphics and it's nothing new.

Considering that SWBF is an AMD Gaming Evolved title, it's hardly a surprise that they've gimped NV's hardware as much as they could. No revelations here.
I bought a new card just for Battlefront. R9 290x.
 
It wouldn't make a difference because this game is unlikely to push anyone into buying a new videocard for it.

And there is no such trend.

Each engine and game has its own preference for h/w; it has been like this since Voodoo Graphics and it's nothing new.

Considering that SWBF is an AMD Gaming Evolved title, it's hardly a surprise that they've gimped NV's hardware as much as they could. No revelations here.

Dude, if any game would convince people to buy a new GPU, it's Star Wars. UE4 is the only modern engine likely to favor Nvidia. Mad Max is a cross-gen, rush-job port of a PS360 title to their new engine; wait for the JC3 benchmarks to get an idea of how their engine really compares between GPUs. All four games you listed have their development firmly rooted in old technology, btw.
 

Kezen

Banned
I find it hard to believe they have "gimped" Nvidia hardware; rather, the optimization process has not gone as far for Nvidia as it must have for AMD's GCN, for practical reasons, namely that GCN is present across all three of their platforms.

I can't blame devs for not going the extra mile for Nvidia; it's up to Nvidia to reach out to devs and assist them whenever they can.

I hope Battlefront supports DX12 at some point, so we can see how the hardware landscape evolves from DX11.
 
I find it hard to believe they have "gimped" Nvidia hardware; rather, the optimization process has not gone as far for Nvidia as it must have for AMD's GCN, for practical reasons, namely that GCN is present across all three of their platforms.

I can't blame devs for not going the extra mile for Nvidia; it's up to Nvidia to reach out to devs and assist them whenever they can.

I hope Battlefront supports DX12 at some point, so we can see how the hardware landscape evolves from DX11.

Battlefront is huge; I'm pretty sure Nvidia worked a lot with DICE. Having GCN in both consoles is just a huge benefit for AMD, and there's not much Nvidia can do about it. It's only going to get worse when DX12 comes.
 

Durante

Member
And there is no such trend.
It's amazing how people always detect trends in single (or a handful of) games while ignoring the vast majority of releases.

There is one real trend confirmed by all the results, and that is that AMD's DX11 drivers still hold back all but the most powerful CPUs.

I can't blame devs for not going the extra mile for Nvidia; it's up to Nvidia to reach out to devs and assist them whenever they can.
That's an absolutely sane and valid perspective, and I fully agree.

I wish some people who probably also agree with this sentence would still agree if you substitute "Nvidia" with "AMD" in it. It's obviously equally valid.
 

jmga

Member
From a specs point of view, this game just runs according to the theoretical power of AMD's VGAs, while most games need a much more powerful AMD card to catch up with NVIDIA.

And a 750 Ti with an i3 seems to run on par with the PS4, so I wouldn't exactly say NVIDIA is getting trashed.
 

Kezen

Banned
Battlefront is huge; I'm pretty sure Nvidia worked a lot with DICE. Having GCN in both consoles is just a huge benefit for AMD, and there's not much Nvidia can do about it. It's only going to get worse when DX12 comes.
That begs the question: how far can Nvidia and DICE possibly go? There is obviously a limit to what Nvidia can achieve with a game tailored around (or at least whose design decisions were heavily influenced by) console hardware. My point is that what we're seeing here is maybe not the peak of what Nvidia hardware is capable of, but that's to be expected. Nvidia can only do so much, and I believe they have tried to suggest improvements for their hardware.
We have little visibility on the optimization process from our perspective; I don't pretend to know how much better it could run on Nvidia hardware had they done this or that.

That's an absolutely sane and valid perspective, and I fully agree.

I wish some people who probably also agree with this sentence would still agree if you substitute "Nvidia" with "AMD" in it. It's obviously equally valid.
Obviously it would be, as AMD can't quite match Nvidia's drivers in very CPU-intensive situations; again, they have their backs to the wall in those scenarios because their drivers just let them down. Taking time out of devs' busy schedules is not wise when the answer is: improve your drivers.

However, so far Nvidia have managed to do well in games heavily tailored for their competitor, I wonder if DX12 will genuinely change that.
 

Hellcrow

Member
Game runs fine when actually in game. Menus freeze all the time, and a Hoth round usually starts with a 3-4 second freeze.
 

Daffy Duck

Member
Getting quite a few crashes. Like almost every match

latest drivers installed

Everything Ultra at 1080p 120fps

980ti MSI TF
4690k
16gb

Pretty much the same system here and no issues for me.

Out of interest, what CPU temps are you/people getting in this game?
 

byropoint

Member
i7 6700k at stock
R9 390 at stock
16gb ram

I get ~90 fps average maxed out at 1080p; I was surprised to see the game pushing system RAM usage to 10 GB.
 
That begs the question: how far can Nvidia and DICE possibly go? There is obviously a limit to what Nvidia can achieve with a game tailored around (or at least whose design decisions were heavily influenced by) console hardware. My point is that what we're seeing here is maybe not the peak of what Nvidia hardware is capable of, but that's to be expected. Nvidia can only do so much, and I believe they have tried to suggest improvements for their hardware.
We have little visibility on the optimization process from our perspective; I don't pretend to know how much better it could run on Nvidia hardware had they done this or that.

Yeah, I'm sure there are some low-level code paths in every engine that are going to be fundamentally more beneficial to one architecture than another.
 
A 2500K and a 980 is a POS rig? I've got the same CPU and just upgraded from a 560 Ti to a 970 after almost 4.5 years, and I'm blown away by how well I can run things, including the Battlefront beta.
I call it a POS because I get screen tearing in Batman Origins and Stanley Parable ran at 10 fps. Something's wrong with my PC, hence the POS part.
 

Kezen

Banned
Yeah, I'm sure there are some low-level code paths in every engine that are going to be fundamentally more beneficial to one architecture than another.
Which is what made me say Nvidia are doing really well; it's a new situation for them. But I hope they made the right architectural choices for Pascal. I wish they were less aggressive when it comes to power consumption; I want more performance and I can handle more watts.

The R7 360 is even with the 750 Ti in that bench; pretty good despite being weaker than the 260X that the 750 Ti usually competed with.
It's a strong show for AMD all around.
 
Which is what made me say Nvidia are doing really well; it's a new situation for them. But I hope they made the right architectural choices for Pascal. I wish they were less aggressive when it comes to power consumption; I want more performance and I can handle more watts.


It's a strong show for AMD all around.

I think Nvidia themselves have stated that Pascal is the same architecture as Maxwell. No changes are coming until at least Volta.
 

dr_rus

Member
Dude, if any game would convince people to buy a new GPU, it's Star Wars. UE4 is the only modern engine likely to favor Nvidia. Mad Max is a cross-gen, rush-job port of a PS360 title to their new engine; wait for the JC3 benchmarks to get an idea of how their engine really compares between GPUs. All four games you listed have their development firmly rooted in old technology, btw.

A game must put pressure on h/w to make people go and buy a new one. With SWBF running at 60 fps at 1080p on Ultra on a GTX 770, this just won't happen. What don't you understand here?

As for Source 2, Fox Engine and Avalanche's engine being "firmly rooted in old technology", I don't even know what that means. Let me see if I got this right: Frostbite was initially made to run on PS2-level hardware, so that means the FB3 used for SWBF is "firmly rooted in old technology" as well. Am I correct?

I find it hard to believe they have "gimped" Nvidia hardware; rather, the optimization process has not gone as far for Nvidia as it must have for AMD's GCN, for practical reasons, namely that GCN is present across all three of their platforms.

Pretty much every GE title gimps NV's h/w in one way or another and you find it hard to believe? It's an expected thing, AMD will push for the strong points of their GPUs, NV will push for theirs. They are equal in this, the only difference is that NV has much better driver support which usually comes to their rescue with them fixing stuff via driver updates.

It's amazing how people always detect trends in single (or a handful of) games while ignoring the vast majority of releases.

There is one real trend confirmed by all the results, and that is that AMD's DX11 drivers still hold back all but the most powerful CPUs.

There is a trend of Kepler h/w often being slower than rival GCN and Maxwell h/w in newer titles; I fully agree with this. But it seems that some people are starting to extend that claim to all NV h/w, which is simply false.

Battlefront is huge; I'm pretty sure Nvidia worked a lot with DICE. Having GCN in both consoles is just a huge benefit for AMD, and there's not much Nvidia can do about it. It's only going to get worse when DX12 comes.

Sure, not much NVIDIA can do if a developer is willing to ignore optimizing for 75% of PC graphics h/w because some company paid them not to bother with this.

I think you're overestimating the effect GCN being in consoles has on NV's GPUs. Console GPUs are at 750 Ti levels of performance and have their own peculiarities (ACEs, UMA, SRAM, different APIs) which basically mean that you still need to rewrite your code to make it work on PC. You can look no further than BAK to see what happens when a developer doesn't do this. And when you do the porting, you have an opportunity to optimize for PC graphics h/w as well. You may skip that for some reason, but that's a generally bad idea all around, considering that PC h/w isn't fixed like console h/w is.
 

Kezen

Banned
Pretty much every GE title gimps NV's h/w in one way or another and you find it hard to believe? It's an expected thing, AMD will push for the strong points of their GPUs, NV will push for theirs. They are equal in this, the only difference is that NV has much better driver support which usually comes to their rescue with them fixing stuff via driver updates.
In that case we do not agree on what constitutes "gimping". Of course AMD Gaming Evolved games are tasked with casting AMD hardware in the best possible light, but there is a difference between making the most of X and artificially "gimping" or crippling the competition. I don't believe DICE have been doing what they can to make sure Nvidia hardware underperforms and AMD are seen as the best option. However, I'd agree with the notion that a developer putting an absolutely equal amount of resources into optimization for both IHVs is idealistic at best, hence why some games just favor X or Y; there is no avoiding that, because developer resources are finite.

I don't subscribe to the view that AMD and their partners put malicious code in their games for Nvidia to pull their hair out, but it's clear that games in the Gaming Evolved program are fine-tuned for GCN, just like GameWorks is first and foremost optimized for Nvidia cards.
 

dr_rus

Member
In that case we do not agree on what constitutes "gimping". Of course AMD Gaming Evolved games are tasked with casting AMD hardware in the best possible light, but there is a difference between making the most of X and artificially "gimping" or crippling the competition. I don't believe DICE have been doing what they can to make sure Nvidia hardware underperforms and AMD are seen as the best option. However, I'd agree with the notion that a developer putting an absolutely equal amount of resources into optimization for both IHVs is idealistic at best, hence why some games just favor X or Y; there is no avoiding that, because developer resources are finite.

I don't subscribe to the view that AMD and their partners put malicious code in their games for Nvidia to pull their hair out, but it's clear that games in the Gaming Evolved program are fine-tuned for GCN, just like GameWorks is first and foremost optimized for Nvidia cards.

I know pretty much first hand that AMD's code in GE titles is written in such a way as to exploit the weaknesses of NV's hardware as much as possible while staying fast on GCN and being "proper" - meaning that they don't use anything like "if it's NV then slow down the function" and instead try to hit the slow parts of NV's architecture. You may not believe me, of course. But it's the exact same tactic NV uses in their titles, with high tessellation loads for example.

As for DICE doing stuff to cripple NV hardware - well, they've spent a ridiculous amount of effort on Mantle, and their engine is much better suited to AMD's h/w as a result. It's not so much about crippling anything as about using stuff that performs better on AMD's h/w. I call this a lack of proper optimization in general, because in my opinion an engine must be optimized for all gaming-grade h/w currently on the market.
 

EatChildren

Currently polling second in Australia's federal election (first in the Gold Coast), this feral may one day be your Bogan King.
i5 2500K @ 4.3GHz
8GB RAM
980 Ti
2560x1440
Ultra, no AA
-1.000 LOD Bias
High Quality x16 AF

Performance is a bit weird: 60-90 basically on a full Walker Assault, drops sub-60 if I'm caught RIGHT in the middle of an explosion, which is to be expected and not noticeable. Taking off my LOD bias tweak adds an average of 10 frames, I think.

My gut says the newest NVidia drivers reduced my performance a little bit, but that could be bullshit.
 

Kezen

Banned
I know pretty much first hand that AMD's code in GE titles is written in such a way as to exploit the weaknesses of NV's hardware as much as possible while staying fast on GCN and being "proper" - meaning that they don't use anything like "if it's NV then slow down the function" and instead try to hit the slow parts of NV's architecture. You may not believe me, of course. But it's the exact same tactic NV uses in their titles, with high tessellation loads for example.
First hand? How? I don't want to come across as apologetic towards anything AMD does as a company (the narrative that they are "the good little underdog that needs to be saved" is puerile at best), but I find it a bit hard to believe, although there might not be such mutual exclusiveness between tapping into your own hardware and exploiting weaknesses in the competitor's. Just like I'm not sold on the idea that Nvidia are doing everything they can to slow down AMD: it comes down to tessellation performance for HairWorks, for example, and there's nothing they can do if AMD's geometry engines are not yet on par. Just like some compute workloads very much tailor-made for GCN will not run that well on anything Nvidia. If Nvidia do not fully support async compute, then what can AMD do about that? Every hardware vendor has to face their responsibilities and the choices they made.

As long as one is not deliberately looking for ways to cripple or hurt the competition, then it's "fair", I suppose.

I suspect the reality is far from that; I'm just desperately trying to be optimistic.
I don't know anything, I've not reverse-engineered any code, I've not run RenderDoc or GPUView, I'm just speculating, and ultimately it's very hard to tell if a game truly runs as well as it could on any hardware. Who's to say that HairWorks couldn't run better on AMD; maybe there is a way to increase the performance of effect X but it would require reworking major components. Or that whatever techniques AMD and their partners are working on could not be more efficiently written for Kepler/Maxwell? As usual, both IHVs know perfectly well that devs are working on borrowed time all the time, and taking time they don't have out of their schedules to specifically optimize for another vendor's hardware is not an option.

As for DICE doing stuff to cripple NV hardware - well, they've spent a ridiculous amount of effort on Mantle, and their engine is much better suited to AMD's h/w as a result. It's not so much about crippling anything as about using stuff that performs better on AMD's h/w. I call this a lack of proper optimization in general, because in my opinion an engine must be optimized for all gaming-grade h/w currently on the market.
Come on now. Working on Mantle, which was like a dream come true for Andersson, who claimed he wanted an API of this kind on PC, is not a sign that they neglected or excluded Nvidia. I know this is not exactly what you imply, but I don't see DICE as being "partisans". They are one of the very best tech houses out there; obviously, as I said earlier, it's unlikely they would put as many resources into Nvidia-specific optimization, but we can't conclude they haven't done anything to ensure their games run "well" on Nvidia cards. In absolute terms Nvidia performance in DICE/Frostbite games is good, but it is true that AMD are stronger. As an Nvidia owner I really can't complain.
I'd like all games to truly and methodically push every piece of hardware to its limits, but that's not very realistic. When you have a multiplatform environment dominated by GCN, it makes a great deal of sense to tailor your engine to it (which does not mean ignoring every other arch).

So, what we're seeing with Battlefront: is it due to raw hardware or simply the result of a relative lack of focus on Nvidia hardware? I guess we'll never know for sure, but I don't think this game is fueling Nvidia's nightmares. They would be more concerned if cases like Ryse were to become more commonplace; that game exhibits a more drastic gap between Nvidia and AMD.
 

dr_rus

Member
First hand? How?
Used to work in the field, still have some connections. It's actually rather funny at times to see them both throwing feces the other way while doing the exact same thing themselves a couple of months down the road.

I don't want to come across as apologetic towards anything AMD does as a company (the narrative that they are "the good little underdog that needs to be saved" is puerile at best), but I find it a bit hard to believe, although there might not be such mutual exclusiveness between tapping into your own hardware and exploiting weaknesses in the competitor's. Just like I'm not sold on the idea that Nvidia are doing everything they can to slow down AMD: it comes down to tessellation performance for HairWorks, for example, and there's nothing they can do if AMD's geometry engines are not yet on par. Just like some compute workloads very much tailor-made for GCN will not run that well on anything Nvidia.
Exactly. There are a lot of things which have widely different performance profiles on different architectures; tessellation is just the one on the surface, because you can usually notice it as an "effect" with your own eyes. But it's hardly the only thing in modern GPUs which both vendors do somewhat differently, and there is a lot of stuff in GCN which, while being within spec, runs much slower on NV's h/w. They both exploit this as much as they can, and this is why we see different comparative results between GE and TWIMwhatever titles across the whole range of h/w.

If Nvidia do not fully support async compute, then what can AMD do about that? Every hardware vendor has to face their responsibilities and the choices they made.
Ah, AMD's FUD has settled in, I see. "Async compute" means that you can launch compute jobs on the GPU before a graphics job has finished. That's all, nothing more. It doesn't say anywhere in the spec how exactly these jobs should be executed. Executing them serially is thus as full support for async compute as executing them concurrently is. With that being said, I think that Maxwell 2 will do fine with concurrent execution - at least during this console gen, with a low number of additional compute queues.
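
To illustrate at the API level: in D3D12 terms this just means creating a second queue of type COMPUTE next to the graphics (DIRECT) queue and submitting work to it; the only thing the spec pins down is the fence-based ordering between the queues, not whether they actually overlap on the GPU. A minimal sketch, assuming a Windows/D3D12 setup, with command list recording omitted:

Code:
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Illustrative sketch only (link against d3d12.lib), not any game's actual code.
int main()
{
    // Create the device on the default adapter.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics (DIRECT) queue - the "normal" rendering queue.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Additional COMPUTE queue - work submitted here may run concurrently
    // with graphics on h/w that supports it, or be serialized by the driver;
    // the API is satisfied either way.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

    // A fence is the only ordering guarantee between the two queues.
    ComPtr<ID3D12Fence> fence;
    UINT64 fenceValue = 0;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // ... record and ExecuteCommandLists() on both queues here ...

    // GPU-side sync: the graphics queue waits until the compute queue has
    // signalled the fence, regardless of how the work was actually scheduled.
    computeQueue->Signal(fence.Get(), ++fenceValue);
    gfxQueue->Wait(fence.Get(), fenceValue);

    return 0;
}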

Come on now. Working on Mantle, which was like a dream come true for Andersson, who claimed he wanted an API of this kind on PC, is not a sign that they neglected or excluded Nvidia. I know this is not exactly what you imply, but I don't see DICE as being "partisans".
If you're building your renderer on tech which is supported by one vendor only, then no matter whether you're a "partisan" or not, your renderer will end up skewed towards that h/w in particular. This has happened plenty of times already with Carmack's engines, Unreal engines, Crytek engines, etc. I don't see why FB would be any different. DICE has been exclusively AMD-sponsored since BF4 (BF3 was kind of in both programs). Their games' performance reflects that.

So, what we're seeing with Battlefront: is it due to raw hardware or simply the result of a relative lack of focus on Nvidia hardware? I guess we'll never know for sure, but I don't think this game is fueling Nvidia's nightmares. They would be more concerned if cases like Ryse were to become more commonplace; that game exhibits a more drastic gap between Nvidia and AMD.
What's "due to hardware"? If they'd crank up the tessellation, GCN would be biting dust behind most of NV's mid-to-top offerings. Would that be "due to hardware" or because of their conscious choice to balance the workload in NV's favour? The same can be applied to AMD - they balance the workload in their favour, and it's not "due to hardware", it's because they consciously decided so. NV's h/w (Maxwell 2 especially) is in no way worse or less advanced than GCN. The difference lies in how you actually use it.
 
i5 2500K @ 4.3GHz
8GB RAM
980 Ti
2560x1440
Ultra, no AA
-1.000 LOD Bias
High Quality x16 AF

Performance is a bit weird: 60-90 basically on a full Walker Assault, drops sub-60 if I'm caught RIGHT in the middle of an explosion, which is to be expected and not noticeable. Taking off my LOD bias tweak adds an average of 10 frames, I think.

My gut says the newest NVidia drivers reduced my performance a little bit, but that could be bullshit.

-1 LOD bias increases the resolution of distant textures, right?
Why not increase the internal resolution instead? It has a similar effect but also reduces aliasing and makes distant objects easier to see.
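
For what it's worth, a negative LOD bias is just a texture sampler setting. A minimal sketch of what that -1.000 tweak corresponds to, assuming a plain D3D11 renderer; the CreateBiasedSampler helper below is only for illustration, the game/driver wires this up internally:

Code:
#include <d3d11.h>
// Link against d3d11.lib. Illustrative sketch only.

// Creates an anisotropic sampler with a negative mip LOD bias, which makes
// the GPU pick higher-resolution mip levels for distant surfaces - sharper
// textures in the distance at the cost of a bit more aliasing/shimmer.
ID3D11SamplerState* CreateBiasedSampler(ID3D11Device* device)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.Filter        = D3D11_FILTER_ANISOTROPIC;
    desc.AddressU      = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressV      = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW      = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MaxAnisotropy = 16;        // "High Quality x16 AF"
    desc.MipLODBias    = -1.0f;     // the "-1.000 LOD Bias" tweak
    desc.MinLOD        = 0.0f;
    desc.MaxLOD        = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&desc, &sampler);
    return sampler;
}

Raising the internal resolution instead gets the extra sharpness from more actual samples per pixel rather than more aggressive mip selection, which is why it also reduces aliasing.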
 
That's (a) a really good resource, thanks for the link, and (b) can pretty much be summed up by what dr_rus just said: "There are a lot of things which have widely different performance profiles on different architectures".

I never said there wasn't, I just don't believe DICE didn't work with Nvidia. It's also looking like Maxwell is going to be a repeat of Kepler.
 

dr_rus

Member
That's (a) a really good resource, thanks for the link,

There are a lot of assumptions in there which aren't really based on anything we currently know. I would suggest everyone at least wait for the promised driver enabling concurrent compute execution on Maxwell 2 and/or some more real-world benchmarks of the feature.
 

belmonkey

Member
Even at 900p high settings (with medium AO and shadows), my 750 Ti doesn't seem to hold 60 fps as well as the PS4. That's unfortunate. Well, I suppose it's actually quite close if I overclock a bit: 1300 MHz core and 6 GHz memory.
 

Kyari

Member
I seem to be falling into the small (but acknowledged) category of people that have a serviceable rig, but a bug with how the game handles some AMD graphics cards causes the game to spike CPU use to 100% :/
 

Kezen

Banned
Used to work in the field, still have some connections. It's actually rather funny at times to see them both throwing feces the other way while doing the exact same thing themselves a couple of months down the road.
At the end of the day it's a matter of public relations, so both are actually going out of their way to "cripple" the competition? A part of me still does not want to believe this, it's just so sad. Focus on your core strengths rather than wasting time putting down X or Y.

Exactly. There are a lot of things which have widely different performance profiles on different architectures; tessellation is just the one on the surface, because you can usually notice it as an "effect" with your own eyes. But it's hardly the only thing in modern GPUs which both vendors do somewhat differently, and there is a lot of stuff in GCN which, while being within spec, runs much slower on NV's h/w. They both exploit this as much as they can, and this is why we see different comparative results between GE and TWIMwhatever titles across the whole range of h/w.
I can only wonder what will happen with AMD-sponsored games now that both consoles have GCN under the hood. Hitman, Rise of the Tomb Raider, Deus Ex MD: how will those games run on Nvidia cards? Are Nvidia even allowed to submit code that could improve performance on their hardware? Surely you've heard of AMD's allegations that in one game they were essentially locked out of the optimization process and could not offer suggestions to speed things up on their cards; I believe Richard Huddy mentioned this in a PCPer podcast around the GameWorks controversy.
That's already sad enough, assuming it's true; I don't want AMD to sign deals with developers to enforce similar clauses.

Ah, AMD's FUD has settled in, I see. "Async compute" means that you can launch compute jobs on the GPU before a graphics job has finished. That's all, nothing more. It doesn't say anywhere in the spec how exactly these jobs should be executed. Executing them serially is thus as full support for async compute as executing them concurrently is. With that being said, I think that Maxwell 2 will do fine with concurrent execution - at least during this console gen, with a low number of additional compute queues.
Alright, I should have been more specific: what I meant was "concurrent execution of graphics and compute workloads". Apparently, this is not something Nvidia's Kepler or Maxwell were designed for.

If you're building your renderer on tech which is supported by one vendor only, then no matter whether you're a "partisan" or not, your renderer will end up skewed towards that h/w in particular. This has happened plenty of times already with Carmack's engines, Unreal engines, Crytek engines, etc. I don't see why FB would be any different. DICE has been exclusively AMD-sponsored since BF4 (BF3 was kind of in both programs). Their games' performance reflects that.
While their core rendering technologies might be heavily influenced by GCN, I'm sure they have taken the necessary steps to ensure Nvidia hardware is not left out in the cold, but it might not perform as well because the optimization focus is on the dominant architecture. My wording is apologetic, I suppose, because I understand their resources are finite, and considering Nvidia are only relevant on PC, it's not easy to argue that they really should put an equal amount of resources into optimization. I understand the practical reasons why AMD hardware is going to be prioritized, which again does not mean no attention whatsoever has been given to Nvidia.
It's possible Nvidia are more worried about games heavily advertised by AMD, because that might mean even less thought has been given to how well Nvidia hardware will perform.

What's "due to hardware"? If they'd crank up the tessellation, GCN would be biting dust behind most of NV's mid-to-top offerings. Would that be "due to hardware" or because of their conscious choice to balance the workload in NV's favour? The same can be applied to AMD - they balance the workload in their favour, and it's not "due to hardware", it's because they consciously decided so. NV's h/w (Maxwell 2 especially) is in no way worse or less advanced than GCN. The difference lies in how you actually use it.
My point was: is what we're seeing here on Nvidia's side the best the hardware can do? Hard to say for sure, but I would guess not, because they have two major platforms with GCN as the common denominator.
 