Gemüsepizza said: Using a reference 980Ti with stock clocks? Really? Every 980Ti can be overclocked by at least 200MHz. The same cannot be said of the Fury X.
Everybody has that. Probably just a bug.

Quick question:
The game runs surprisingly well on my PC: 70 fps on an i5 4460, 8 GB RAM, MSI Gaming R9 280 (972 MHz / 1050 MHz). Settings on Ultra for everything except shadows, AA and texture quality (I think, or the other one in the list).
Didn't think it would play this well on the 800 PC I built in February.
The thing is, it runs smooth during the entire game, but when the score screen comes up after the round I get huge framerate drops until the next match starts.
What could be the reason? I turned all settings to medium to check if it's the graphics settings, but it's still the same! Do I need more RAM? Why is the game perfect with all the stuff going on during the match, while in a menu where nothing happens my PC starts to have a heart attack?
It's an interesting case for sure, but the game still runs at ~60 fps on a 770 on Ultra at 1080p, so I don't think it matters much if AMD's cards are faring a bit better in it:
It continues the trend of AMD outperforming Nvidia in modern game engines. I don't see why it wouldn't make a difference.
It's an interesting case for sure, but the game still runs at ~60 fps on a 770 on Ultra at 1080p, so I don't think it matters much if AMD's cards are faring a bit better in it:
I bought a new card just for Battlefront. R9 290x.

It wouldn't make a difference because this game is unlikely to push anyone into buying a new videocard for it.
And there is no such trend.
Each engine and game has its own preference for h/w; it's been like this since Voodoo Graphics and it's nothing new.
Considering that SWBF is an AMD Gaming Evolved title, it's hardly a surprise that they've gimped NV's hardware as much as they could. No revelations here.
I find it hard to believe they have "gimped" Nvidia hardware; rather, the optimization process has not gone as far as it has for AMD's GCN, for practical reasons: GCN is present across all three of their platforms.
I can't blame devs for not going the extra mile for Nvidia; it's up to Nvidia to reach out to devs and assist them whenever they can.
I hope Battlefront supports DX12 at some point, so we can see how the hardware landscape evolves from DX11.
It's amazing how people always detect trends in single (or a handful of) games while ignoring the vast majority of releases.
That's an absolutely sane and valid perspective, and I fully agree.
Obviously it would be, as AMD can't quite match Nvidia's drivers in very CPU-intensive situations; again their backs are against the wall in those scenarios because their drivers just let them down. Taking time out of devs' busy schedules is not wise when the answer is: improve your drivers.
I wish some people who probably also agree with this sentence would still agree if you substitute "Nvidia" with "AMD" in it. It's obviously equally valid.
Getting quite a few crashes. Like almost every match
latest drivers installed
Everything Ultra at 1080p 120fps
980ti MSI TF
4690k
16gb
That begs the question: how far can Nvidia and DICE possibly go? There is obviously a limit to what Nvidia can achieve with a game tailored around (or at least whose design decisions were heavily influenced by) console hardware. My point is that what we're seeing here is maybe not the peak of what Nvidia hardware is capable of, but that's to be expected. Nvidia can only do so much, and I believe they have tried to suggest improvements for their hardware.
We have little visibility into the optimization process from our perspective; I don't pretend to know how much better it could run on Nvidia hardware had they done this or that.
Game runs on Ultra at 59.7 fps, locked. 980 and a 2500k. Wish all games ran this nicely on my POS rig.
A 2500k and a 980 is a POS rig? I've got the same CPU and just upgraded from a 560ti to a 970 after almost 4.5 years, and I'm blown away by how I can run things, including the Battlefront beta.

I call it a POS because I get screen tears in Batman Origins and Stanley Parable ran at 10fps. Something's wrong with my PC, hence the POS part.
CPU benchmarks courtesy of computerbase.de:
http://www.computerbase.de/2015-10/star-wars-battlefront-erste-benchmarks-der-open-beta/#abschnitt_prozessorbenchmarks
980ti vs Fury X CPU scaling
Indeed, the Fury X is easily limited by the CPU.
Yeah, I'm sure there are some low-level coding paths in every engine that are going to be fundamentally most beneficial to a given architecture.
The R7 360 is even with the 750 Ti in that bench; pretty good despite being weaker than the 260X that the 750 Ti usually competed with.
Which is what made me say Nvidia are doing really well; it's a new situation for them. But I hope they made the right architectural choices for Pascal. I wish they were less aggressive when it comes to power consumption: I want more performance and I can handle more watts.
It's a strong show for AMD all around.
I think Nvidia themselves have stated that Pascal is the same architecture as Maxwell. No changes are coming until at least Volta.
That would be severely disappointing. I expect at least a 30% improvement.
Dude, if any game would convince people to buy a new GPU, it's Star Wars. UE4 is the only modern engine likely to favor Nvidia. Mad Max is a cross-gen, rush-job port of a PS3/360 title to their new engine; wait for the JC3 benchmarks to get an idea of how their engine really compares between GPUs. All four games you listed have their development firmly rooted in old technology, btw.
I find hard to believe they have "gimped" Nvidia hardware, rather the optimization process has not gone as far as it must have for AMD GCN, for practical reasons, the fact that the GCN is present across all three of their platforms.
It's amazing how people always detect trends in single (or a handful of) games while ignoring the vast majority of releases.
There is one real trend confirmed by all the results, and that is that AMD's DX11 drivers still hold back all but the most powerful CPUs.
Battlefront is huge; I'm pretty sure Nvidia worked a lot with DICE. Having GCN in both consoles is just a huge benefit for AMD; not much Nvidia can do. Only going to get worse when DX12 comes.
Pretty much every GE title gimps NV's h/w in one way or another and you find it hard to believe? It's an expected thing: AMD will push the strong points of their GPUs, NV will push theirs. They are equal in this; the only difference is that NV has much better driver support, which usually comes to their rescue, with them fixing stuff via driver updates.
In that case we do not agree on what constitutes "gimping". Of course AMD Gaming Evolved games are tasked with casting AMD hardware in the best possible light, but there is a difference between making the most of X and artificially "gimping" or crippling the competition. I don't believe DICE have been doing what they can to make sure Nvidia hardware underperforms and AMD are seen as the best option. However, I'd agree with the notion that a developer putting an absolutely equal amount of resources into optimization for both IHVs is idealistic at best, hence why some games just favor X or Y; there is no avoiding that because developer resources are finite.
I don't subscribe to the view that AMD and their partners put malicious code in their games for Nvidia to pull their hair out over, but it's clear that games in the Gaming Evolved program are fine-tuned for GCN, just like GameWorks is first and foremost optimized for Nvidia cards.
I know pretty much first hand that AMD's code in GE titles is made in such a way as to exploit the weaknesses of NV's hardware as much as possible while staying fast on GCN and being "proper" - meaning they don't use anything like "if it's NV then slow down the function" and instead try to hit the slow parts of NV's architecture. You may not believe me, of course. But it's the exact same tactic NV uses in their titles with high tessellation loads, for example.

First hand? How? I don't want to come across as apologetic towards anything AMD does as a company (the narrative that they are "the good little underdog that needs to be saved" is puerile at best), but I find it a bit hard to believe, although there might not be such mutual exclusiveness between tapping into your own hardware and exploiting weaknesses in the competitor's. Just as I'm not sold on the idea that Nvidia are doing everything they can to slow down AMD: it comes down to tessellation performance for Hairworks, for example, and there's nothing they can do if AMD's geometry engines are not yet on par. Just as some compute workloads very much tailor-made for GCN will not run that well on anything Nvidia. If Nvidia do not fully support async compute, then what can AMD do about that? Every hardware vendor has to face their responsibilities and the choices they made.
As for DICE doing stuff to cripple NV hardware - well, they've spent a ridiculous amount of effort on Mantle, and their engine is much better suited to AMD's h/w as a result. It's not so much about crippling anything as about using stuff that performs better on AMD's h/w. I call this a lack of proper optimization in general, because in my opinion an engine must be optimized for all gaming-grade h/w currently on the market.

Come on now. Working on Mantle, which was like a dream come true for Andersson, who claimed he wanted an API of this kind on PC, is not a sign that they neglected or excluded Nvidia. I know this is not exactly what you imply, but I don't see DICE as "partisans". They are one of the very best tech houses out there; obviously, as I said earlier, it's unlikely they would put as many resources into Nvidia-specific optimization, but we can't conclude they have done anything to ensure their games run "well" on Nvidia cards. In absolute terms, Nvidia performance in DICE/Frostbite games is good, but it is true that AMD are stronger. As an Nvidia owner I really can't complain.
First hand? How?

Used to work in the field, still have some connections. It's actually rather funny at times to see them both throwing feces the other way while doing the exact same thing themselves a couple of months down the road.
Exactly. There are a lot of things which have widely different performance profiles on different architectures; tessellation is just the one on the surface, because you can usually notice it as an "effect" with your own eyes. But it's hardly the only thing in modern GPUs that the two vendors do somewhat differently, and there is lots of stuff in GCN which, while being within spec, runs much slower on NV's h/w. They both exploit this as much as they can, and this is why we see different comparative results between GE and TWIMwhatever titles across the whole range of h/w.
Ah, AMD's FUD has settled in, I see. "Async compute" means that you can launch compute jobs on the GPU before a graphics job finishes. That's all, nothing more. It doesn't say anywhere in the specs how exactly these jobs should be executed; executing them serially is thus as much full support for async compute as executing them concurrently. With that being said, I think Maxwell 2 will do fine with concurrent execution, at least during this console gen, with its low number of additional compute queues.
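The distinction being drawn here, asynchronous submission versus concurrent execution, can be illustrated by analogy with an ordinary CPU thread pool (plain Python, not a GPU API; the job names are made up for illustration). Both a one-worker and a two-worker pool satisfy the same "submit without blocking" interface; only the second may actually overlap the two jobs in time:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for a graphics job and a compute job; the names are
# illustrative, not taken from any real GPU API.
def graphics_job():
    return "frame rendered"

def compute_job():
    return "particles simulated"

# "Async" only promises that submission does not block: the caller may
# queue the compute job before the graphics job has finished. It says
# nothing about whether the two jobs overlap in time.
serial = ThreadPoolExecutor(max_workers=1)      # jobs run one after another
concurrent = ThreadPoolExecutor(max_workers=2)  # jobs may overlap

for pool in (serial, concurrent):
    g = pool.submit(graphics_job)  # submit graphics work...
    c = pool.submit(compute_job)   # ...and queue compute right away
    # Both schedules fulfil the same interface; only throughput differs.
    assert g.result() == "frame rendered"
    assert c.result() == "particles simulated"
    pool.shutdown()
```

A serial GPU scheduler is analogous to the one-worker pool: the compute job is accepted immediately but simply runs after the graphics job completes.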
If you're building your renderer on a tech which is supported by one vendor only, then no matter whether you're a "partisan" or not, your renderer will end up skewed towards that h/w in particular. This has happened plenty of times already with Carmack's engines, Unreal engines, Crytek engines, etc. I don't see why FB is any different. DICE has been exclusively AMD-sponsored since BF4 (BF3 was kind of in both programs). Their games' performance reflects that.
So, what we're seeing with Battlefront: is it due to raw hardware or simply the result of a relative lack of focus on Nvidia hardware? I guess we'll never know for sure, but I don't think this game is fueling Nvidia's nightmares. They would be more concerned if cases like Ryse became more commonplace; that game exhibits a more drastic gap between Nvidia and AMD.

What's "due to hardware"? If they'd crank up the tessellation, GCN would be biting dust behind most of NV's mid-to-top offerings. Would that be "due to hardware", or because of a conscious choice to balance the workload in NV's favour? The same can be applied to AMD: they do balance the workload in their favour, and it's not "due to hardware", it's because they consciously decided so. NV's h/w (Maxwell 2 especially) is in no way worse or less advanced than GCN. The difference lies in how you actually use it.
That's (a) a really good resource, thanks for the link, and (b) pretty much summed up by what dr_rus just said: "There are a lot of things which have widely different performance profiles on different architectures".
i5 2500K @ 4.3GHz
8GB RAM
980 Ti
2560x1440
Ultra, no AA
-1.000 LOD Bias
High Quality x16 AF
Performance is a bit weird. 60-90 basically on full Walker Assault; drops sub-60 if I'm caught RIGHT in the middle of an explosion, which is to be expected and not noticeable. Taking off my LOD bias tweak adds an average of 10 frames, I think.
My gut says the newest NVidia drivers reduced my performance a little bit, but that could be bullshit.
Even at 900p with high settings (and medium AO and shadows), my 750 Ti doesn't seem to hold 60 fps as well as the PS4 does. That's unfortunate.
At the end of the day it's a matter of public relations, so both are actually going out of their way to "cripple" the competition? A part of me still does not want to believe this; it's just so sad. Focus on your core strengths rather than wasting time putting down X or Y.
I can only wonder what will happen with AMD-sponsored games now that both consoles have GCN under the hood. Hitman, Rise of the Tomb Raider, Deus Ex MD: how will those games run on Nvidia cards? Are Nvidia even allowed to submit code which could improve performance on their hardware? Surely you've heard AMD's allegations that in one game they were essentially locked out of the optimization process and could not offer suggestions to speed things up on their cards; I believe Richard Huddy mentioned this in a PCPer podcast around the GameWorks controversy.
Alright, I should have been more specific: what I meant was "concurrent execution of graphics and compute workloads". Apparently, this is not something Nvidia's Kepler or Maxwell were designed for.
While their core rendering technologies might be heavily influenced by GCN, I'm sure they have taken the necessary steps to ensure Nvidia hardware is not left out in the cold, but it might not perform as well because of the optimization focus on the dominant architecture. My wording is apologetic, I suppose, because I understand their resources are finite, and considering Nvidia are only relevant on PC, it's not easy to argue that they really should put an equal amount of resources into optimization. I understand the practical reasons why AMD hardware is going to be prioritised, which again does not mean no attention whatsoever has been given to Nvidia.
My point was: is what we are seeing here on Nvidia's side the best the hardware can do? Hard to say for sure, but I would guess not, because they have two major platforms with GCN as the common denominator.
It would be nice to be able to turn up the tessellation distance and factor; it is a bit low.