
Anandtech: The DirectX 12 Performance Preview

Diablo 3 is not a good example at all; the game is horribly unoptimized on PC. I get significant FPS lag in 4-player games, and my rig is just as good as yours: 4690K @ 4.5, 280X.

It gets especially bad in Rakkis Crossing and the Cesspools. A lot of other people have the same problems with these two areas as well.

That's entirely my point.

It's poorly optimized, but it's still CPU-bound given the optimization that has been done.

SC2 suffers from the same problems, and it's on the same engine. The bottleneck seems to be the huge number of draw calls the engine issues, which spikes 1-2 cores very heavily. This is exactly the scenario where DX12 would help tremendously.
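To make that bottleneck concrete, here's a minimal CPU-side sketch (plain C++, no real graphics API; the per-draw cost loop is a made-up stand-in for driver overhead) of why DX11-style submission pins one core while DX12-style multithreaded command recording spreads the same work across all of them:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical per-draw CPU cost: validation, state setup, driver work.
// DX11 funnels this through one immediate context (one thread); DX12 lets
// each thread record its own command list and submit them together.
static void record_draws(int count) {
    volatile int sink = 0;
    for (int i = 0; i < count; ++i)
        for (int j = 0; j < 2000; ++j) sink += j; // stand-in for driver overhead
}

int main() {
    const int total_draws = 100000; // Star Swarm territory
    using clock = std::chrono::steady_clock;

    // "DX11": one thread records everything.
    auto t0 = clock::now();
    record_draws(total_draws);
    double serial_ms =
        std::chrono::duration<double, std::milli>(clock::now() - t0).count();

    // "DX12": N threads each record a slice of the frame.
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    t0 = clock::now();
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back(record_draws, total_draws / n);
    for (auto& w : workers) w.join();
    double parallel_ms =
        std::chrono::duration<double, std::milli>(clock::now() - t0).count();

    std::printf("serial: %.1f ms, parallel (%u threads): %.1f ms\n",
                serial_ms, n, parallel_ms);
}
```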
 

Diffense

Member
Mantle only works on AMD hardware
DirectX12 only works on Windows
OpenGL works with Intel, AMD, Nvidia HW and on Windows, Linux, Mac

So they each will have their niche, though Mantle will perhaps be the least relevant in terms of developers coding directly to it. But AMD is a stakeholder in the development of the next GL revision and has offered the Khronos group insight into how Mantle is designed, so the work they have put into developing it will still be relevant. It's not like they went down some kind of dead-end path. DirectX and OpenGL will follow Mantle's philosophy, though at a higher level of abstraction (DX to support Nvidia and Intel hardware, and GL to support various hardware/OS combinations).

GL is Linux's only 3D graphics API, so that links it closely to Android and Valve's Steambox efforts. As I'm a Linux user, I'm a little biased because DirectX isn't directly relevant to me. Though it's great to see the overall evolution of computer graphics and the changing relationship between the increasingly programmable GPU and the rest of the PC. Interfaces to the GPU have been getting more general. APIs are actually getting leaner and less graphics-focused because the shader programs can handle most of the domain-specific tasks. So the goal now is to set the GPU programs free.
 

dr_rus

Member
AMD might make Mantle multi-platform in the future, so there's that.

There's glNext for that.
Mantle is out as soon as DX12 and glNext are in.
No developer will spend time coding for an API which is supported by only one GPU vendor when there is an API which gives the same results with much wider vendor support.
This is the prime reason why I always said that AMD's Mantle initiative was a waste of time and resources for them and for the industry. They could have pushed for DX12 to happen sooner instead.

As for the benchmarks - Star Swarm is a highly tuned benchmark which shows extreme results unreachable in real-world games.
I fully expect DX12 to be on par with Mantle results in Frostbite games. Which is nice, but in no way as groundbreaking as the Star Swarm benchmark tends to suggest.
Most games are GPU limited and no amount of CPU optimizations will help with that.
 

FLAguy954

Junior Member
Ah. Completely forgot about Glide.

Still, IMO DirectX 12 will win. Of course AMD fans will try to convince themselves otherwise, but with Nvidia and Intel backing it (instead of Mantle), major engines supporting it, and it being cross-platform, Mantle stands no chance. The fact that Mantle is anything but open drastically limits its uptake. AMD want to keep the ball for themselves, and they are entitled to that, but they can't complain when everyone moves on and forgets about their tech.

I admit, I am an AMD fan, but I can see the writing on the wall that DX12 will win this one. I applaud AMD for pushing API devs to further graphics performance on PCs. It doesn't always have to be black or white, or red or green ;).

And even if Mantle is faster, how much faster would it need to be for devs to bother? For Windows, DX12 is the sensible choice, and for Linux/Mac, OpenGL... unless you only want to sell your games to GCN owners.


In its defense, Mantle has a decent number of games, but I fear it needed to do much better to stand a ghost of a chance against a superior (and Nvidia-supported) solution.
I'd go as far as to say at least a third of the devs who committed to Mantle only did it to better prepare themselves for DirectX 12.
The AnandTech benchmarks reveal something somewhat embarrassing: DX12, while not even in beta, already trades blows with an API that launched a year earlier. You can only wonder what it will be like when the drivers, API, and tools are finalized.

DX12 will drastically alter the PC gaming landscape and I can't wait to see what devs do with it on PC, be it with exclusives or multiplatform games.

This is a great point, and we already have evidence of it in the Star Swarm team developing a DX12 rendering path (they were pioneers of developing on the Mantle API).
 

Crisium

Member
I wonder how much gain I would see on my potato Phenom II going from my 6870 to, say, a GTX 750 or some similarly performing card...
Anyone have a guess?

Why would you do that? I'm not even sure a GTX 750 is faster than a 6870. You'd probably get the same performance. A 750 Ti would be faster, but only by one tier.

What in the world?
 

FordGTGuy

Banned
Mantle only works on AMD hardware
DirectX12 only works on Windows
OpenGL works with Intel, AMD, Nvidia HW and on Windows, Linux, Mac

So they each will have their niche, though Mantle will perhaps be the least relevant in terms of developers coding directly to it. But AMD is a stakeholder in the development of the next GL revision and has offered the Khronos group insight into how Mantle is designed, so the work they have put into developing it will still be relevant. It's not like they went down some kind of dead-end path. DirectX and OpenGL will follow Mantle's philosophy, though at a higher level of abstraction (DX to support Nvidia and Intel hardware, and GL to support various hardware/OS combinations).

GL is Linux's only 3D graphics API, so that links it closely to Android and Valve's Steambox efforts. As I'm a Linux user, I'm a little biased because DirectX isn't directly relevant to me. Though it's great to see the overall evolution of computer graphics and the changing relationship between the increasingly programmable GPU and the rest of the PC. Interfaces to the GPU have been getting more general. APIs are actually getting leaner and less graphics-focused because the shader programs can handle most of the domain-specific tasks. So the goal now is to set the GPU programs free.

I like how you worded it to make it sound like DX12 doesn't support Intel, Nvidia and AMD hardware....

DX12 might only work on Windows but Windows also controls the overwhelming majority of gamers and non-gamers alike.
 
What a completely useless article from Anandtech.

When a 750 Ti goes at 22 fps and a 290X at 8 fps you know that the results are rigged.

What's the point of using a rigged benchmark? Not a single game out there, on DX11, runs three times as fast on a 750 Ti as on a 290X.

So what are we looking at here? It's completely pointless for an actual game benchmark.
 
What a completely useless article from Anandtech.

When a 750 Ti goes at 22 fps and a 290X at 8 fps you know that the results are rigged.

What's the point of using a rigged benchmark? Not a single game out there, on DX11, runs three times as fast on a 750 Ti as on a 290X.

So what are we looking at here? It's completely pointless for an actual game benchmark.
This is a bad post and you should feel bad. So much wrong in here. You sure you read the benchmark / understand the purpose of the benchmark / even know what benchmarking is?
 
This is a bad post and you should feel bad. So much wrong in here. You sure you read the benchmark / understand the purpose of the benchmark / even know what benchmarking is?

This benchmark is only measuring one single aspect of the rendering, and we know that.

But the mistake is that the benchmark compares DX12 code to BAD DX11 code.

So let's take a real game like Shadow of Mordor:
750 ti = 41 fps
290x = 79 fps

So it's safe to assume that the 290X usually doubles the fps of the 750 Ti.

We take the actual DX11 fps of the 750 Ti in the benchmark: 22 fps.

And then we double that number to get the realistic performance of the 290X in a scenario where the code wasn't written by someone deliberately crippling it.

What do we get? The 290X would do 44 fps in DX11, compared to 43 in DX12.

Meaning that in a real-world situation, the DX12 performance is IDENTICAL.

Look at those numbers. The chart says the 750 Ti goes at 22 fps, the 980 at 27 fps. Tell me in what kind of game you see a 980 getting only 5 fps more than a 750 Ti.

It's just bad code, pushing only one case, and it's nowhere even close to a real game scenario. So: completely useless.
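Spelled out, the poster's extrapolation above is just a cross-benchmark scaling (figures as quoted in the post; whether the scaling is valid at all is exactly what's disputed in the replies below):

```cpp
#include <cstdio>

int main() {
    // Shadow of Mordor figures quoted above.
    const double mordor_750ti = 41.0, mordor_290x = 79.0;
    const double scale = mordor_290x / mordor_750ti; // ~1.93x

    // Star Swarm DX11 result for the 750 Ti, scaled to estimate a
    // "well-optimized DX11" 290X result, then compared to its DX12 result.
    const double starswarm_750ti_dx11 = 22.0;
    const double est_290x_dx11 = starswarm_750ti_dx11 * scale; // ~42.4 fps
    const double starswarm_290x_dx12 = 43.0;

    std::printf("estimated 290X DX11: %.1f fps vs. measured DX12: %.1f fps\n",
                est_290x_dx11, starswarm_290x_dx12);
}
```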
 

Durante

Member
This is a timely article, especially since we were just discussing here on GAF a few days ago how much worse AMD's DX11 drivers are in terms of CPU utilization compared to NV's DX11 drivers. There was no hard data out there, but - at least in my opinion - lots of circumstantial evidence pointing towards "a lot worse". Now it's rather rigorously confirmed.

DX12 might only work on Windows but Windows also controls the overwhelming majority of gamers and non-gamers alike.
That used to be the case. Now, the majority of gaming revenue is not generated on devices which run Windows, and the majority of computing devices sold don't run Windows.
 

tuxfool

Banned
DX12 is available on Xbox One and PC. Hence my cross-platform comment.
The next revision of OpenGL might make some noise, but I would not bet on it ever being as relevant as DirectX 12.

The W10 upgrade is free for W7 and W8 owners for a whole year. Needless to say, DX12's future looks bright while Mantle's is really grim. I see no point in using it when you can have low-level efficiency on Nvidia and AMD hardware with DX12; why would devs waste their time implementing a Mantle renderer?

I really don't have anything against AMD, of course, but I don't see a future for Mantle. Good on them for having put pressure on both Nvidia and Microsoft, but that's about it.

Well, AMD themselves have said Mantle is merely a means to an end. If there are other APIs that do the same things, then they will drop it (statement almost verbatim), so provided DX12 and glNext get wide acceptance, AMD will probably retire Mantle.
 

Durante

Member
Also, about the relevance of various APIs, this is what people at Unity think:
[image: glnext_unitytauc4.png]
 

Crisium

Member
I do think it's misleading. I think it's quite clear that AMD didn't optimize DX11 drivers for this game because they spent resources on Mantle instead. There is no game ever released that has this kind of horrid AMD DX11 performance. And even if this one does release like this, you have Mantle anyway, so what's the point?

I find the CPU benchmarks far more relevant than the AMD DX11 vs. DX12 comparison. It's still an important article, but those using this to further their claim of "lol AMD drivers" need to look at the superb Mantle performance and realize AMD performance is just fine. People always try to further their cause, though...
 

TSM

Member
This benchmark is only measuring one single aspect of the rendering, and we know that.

But the mistake is that the benchmark compares DX12 code to BAD DX11 code.

So let's take a real game like Shadow of Mordor:
750 ti = 41 fps
290x = 79 fps

So it's safe to assume that the 290X usually doubles the fps of the 750 Ti.

We take the actual DX11 fps of the 750 Ti in the benchmark: 22 fps.

And then we double that number to get the realistic performance of the 290X in a scenario where the code wasn't written by someone deliberately crippling it.

What do we get? The 290X would do 44 fps in DX11, compared to 43 in DX12.

Meaning that in a real-world situation, the DX12 performance is IDENTICAL.

You don't seem to understand what that benchmark is showing. It's showing how much less efficient the AMD DX11 driver is at handling draw calls than the Nvidia DX11 driver. The 100k+ draw calls the demo generates completely choke the AMD driver, to the point where neither the CPU nor the GPU is the limiting factor. Meanwhile, the Nvidia DX11 driver is able to triple the number of draw calls the GPU can process. If you look at the DX12/Mantle numbers, you can clearly see the 290X is almost twice as powerful as the 750 Ti.
 

KissVibes

Banned
It's going to be tough for AMD to keep Mantle relevant when DX12 ships.

Impressive results compared to DirectX 11 for both lower-level APIs, but I'm really curious how much more expensive taking advantage of them is compared to the thick DX11.

AMD gave access to their code to the OpenGL guys to push for better standards, so I doubt they're all that upset. I mean yeah, now it gives them one less major selling point over Nvidia but they have always set themselves up to be more open.
 

tuxfool

Banned
There is no game ever released that has this kind of horrid AMD Dx11 performance. And even if this one does release like this, you have Mantle anyway, so what's the point?

If anyone had read the article, they would know that the author makes this very remark. The DX11 results here don't really reflect real game benchmarks. The AMD driver is a lot less effective than the Nvidia driver, but not to the degree shown here.

AMD gave access to their code to the OpenGL guys to push for better standards, so I doubt they're all that upset. I mean yeah, now it gives them one less major selling point over Nvidia but they have always set themselves up to be more open.

That is because Khronos is a consortium of a lot of graphics stakeholders, including AMD and Nvidia (and pretty much everyone else you can think of). But because it is a committee, it gets dragged down by all the conflicting interests of each company. DX stepped in because it had the advantage of being spearheaded by one company, MS. However, you'd be very foolish if you thought that MS develops Direct3D in a vacuum; they take input from all the vendors, from hardware to software.
 
You don't seem to understand what that benchmark is showing. It's showing how much less efficient the AMD DX11 driver is at handling draw calls than the Nvidia DX11 driver. The 100k+ draw calls the demo generates completely choke the AMD driver, to the point where neither the CPU nor the GPU is the limiting factor. Meanwhile, the Nvidia DX11 driver is able to triple the number of draw calls the GPU can process. If you look at the DX12/Mantle numbers, you can clearly see the 290X is almost twice as powerful as the 750 Ti.

Yes, and in real-world scenarios you have many ways to get around a bottleneck when you know there is one.

What I'm saying is that improving one single aspect of rendering speed may actually yield nothing in a REAL GAME, because not a single game out there shows the kind of performance you see in that benchmark.

It's as if someone today wrote a benchmark that stressed the last 500 MB of your video card's memory, and so you'd get a result where every single card is now faster than a 970.

The result? It proves a technical point, but it says nothing about whether the margin is actually useful and will produce a measurable difference in a real game.

This simply means that DX12 requires marginally less work than DX11 to run well. But it also means that if you want DX12 in your game you need to write two different code paths, meaning again you're doing twice the work to gain maybe 5% in performance (or nothing).

Hence, right now: completely useless and won't make a difference (like Mantle).
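For what "ways to get around a bottleneck" looks like in practice, here is a toy sketch of the classic DX11-era workaround: batch N identical meshes into one instanced draw so the per-draw overhead is paid once. The draw()/draw_instanced() functions are hypothetical stand-ins, not a real API (the real D3D11 analogues would be DrawIndexed vs. DrawIndexedInstanced):

```cpp
#include <cstdio>
#include <vector>

// Hypothetical stand-ins for real API calls: each call to draw() pays a
// fixed CPU-side driver cost; draw_instanced() pays it once for N copies.
struct Transform { float x, y, z; };

static int g_draw_calls = 0;
static void draw(const Transform&)                        { ++g_draw_calls; }
static void draw_instanced(const std::vector<Transform>&) { ++g_draw_calls; }

int main() {
    std::vector<Transform> ships(10000, Transform{0, 0, 0});

    // Naive path: one draw call per ship -> 10,000 calls, CPU-bound on DX11.
    for (const auto& t : ships) draw(t);
    std::printf("naive: %d draw calls\n", g_draw_calls);

    // Batched path: upload all transforms once, issue a single instanced
    // draw -> the driver overhead no longer scales with object count.
    g_draw_calls = 0;
    draw_instanced(ships);
    std::printf("instanced: %d draw call(s)\n", g_draw_calls);
}
```

The trade-off is that instancing only works for identical meshes; Star Swarm's point is that with DX12 you don't have to contort your scene this way.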
 

TSM

Member
Yes, and in real-world scenarios you have many ways to get around a bottleneck when you know there is one.

What I'm saying is that improving one single aspect of rendering speed may actually yield nothing in a REAL GAME, because not a single game out there shows the kind of performance you see in that benchmark.

It's as if someone today wrote a benchmark that stressed the last 500 MB of your video card's memory, and so you'd get a result where every single card is now faster than a 970.

The result? It proves a technical point, but it says nothing about whether the margin is actually useful and will produce a measurable difference in a real game.

This simply means that DX12 requires marginally less work than DX11 to run well. But it also means that if you want DX12 in your game you need to write two different code paths, meaning again you're doing twice the work to gain maybe 5% in performance (or nothing).

Hence, right now: completely useless and won't make a difference (like Mantle).

I think it shows how much the industry struggles with draw call performance. If you wanted to make a real game with all the stuff going on in that demo, you literally couldn't without rendering it unplayable on AMD hardware. The reason no real-world game acts like that is that nobody can make a game that uses that many draw calls as long as DX11 is the limit. Once DX12 is released, a developer can turn the Star Swarm demo into an actual game without having to compromise on the number of draw calls used.
 

tuxfool

Banned
I think it shows how much the industry struggles with draw call performance. If you wanted to make a real game with all the stuff going on in that demo, you literally couldn't without rendering it unplayable on AMD hardware. The reason no real-world game acts like that is that nobody can make a game that uses that many draw calls as long as DX11 is the limit. Once DX12 is released, a developer can turn the Star Swarm demo into an actual game without having to compromise on the number of draw calls used.

Sure, but only on PC; there is no way console CPUs could handle those draw calls. Consoles already have these kinds of APIs and they're barely treading water against a mid-level PC.

So the added features in PC versions of multi-platform games would be superficial in most instances.
 
Why would you do that? I'm not even sure a GTX 750 is faster than a 6870. You'd probably get the same performance. A 750 Ti would be faster, but only by one tier.

What in the world?

Read the OP? Because DX12 promises gains (gains, OK? Not 3x the performance, but some gains) in CPU-dependent scenarios, and on Nvidia's DX11 drivers you are already much closer to that DX12 level than on AMD's (which are terrible in that benchmark).
I'm obviously not going to buy a 750 to replace my 6870, but I wondered how much better my performance would have been, in the (many, many) games where my old Phenom II is bottlenecking me, if I had gone with a 750-equivalent Nvidia card instead of a 6870.

It was a question for someone knowledgeable about the subject to provide insight on.

I think DX12 and any CPU performance it can free up is exciting, as CPU performance is at a complete and total standstill now; it's not going up on the hardware level anymore, so every percent gained or bottleneck removed at the software level counts.
 

FLAguy954

Junior Member
This is a timely article, especially since we were just discussing here on GAF a few days ago how much worse AMD's DX11 drivers are in terms of CPU utilization compared to NV's DX11 drivers. There was no hard data out there, but - at least in my opinion - lots of circumstantial evidence pointing towards "a lot worse". Now it's rather rigorously confirmed.

That used to be the case. Now, the majority of gaming revenue is not generated on devices which run Windows, and the majority of computing devices sold don't run Windows.

I would actually argue that there is hard data if one were to collate the data presented from game benchmarks in the last couple of months. Star Swarm is just another game added to the evidence pile imo. The Omega drivers helped a little but they need to keep working at it.
 

Derp

Member
Wait, but we won't see gains this big in real-world gaming situations, right?

Also, will current games see an improvement in performance, or only future games developed on DX12?
 

FordGTGuy

Banned
This is a timely article, especially since we were just discussing here on GAF a few days ago how much worse AMD's DX11 drivers are in terms of CPU utilization compared to NV's DX11 drivers. There was no hard data out there, but - at least in my opinion - lots of circumstantial evidence pointing towards "a lot worse". Now it's rather rigorously confirmed.

That used to be the case. Now, the majority of gaming revenue is not generated on devices which run Windows, and the majority of computing devices sold don't run Windows.

[image: A9K41tX.png]


89% of the Windows market share on Steam will have the option to get Windows 10 for free.

We're talking about PC gaming and the effect of DX12 on PC gaming.
 

RexNovis

Banned
Also, about the relevance of various APIs, this is what people at Unity think:
[image: glnext_unitytauc4.png]


Any research on how well DX12 scales with mobile GPUs? Curious how much of a boost I can expect out of my GTX 670M 3GB mobile GPU; it's coupled with a quad-core i7 at 2.8GHz. Hoping it's a decent one, but it seems to scale lower with lower-clocked GPUs.
 

fardeenah

Banned
Hoping to keep my Core i5 2500 and R9 270X a little longer; maybe AC games will finally run well. Can't wait to see what the future holds.

Anyway, will existing DX11 games perform better with DX12, or will they need to be patched?
 

FordGTGuy

Banned
How so? I don't expect XB1 to get much boost from DX12.

There are a few things to consider when discussing how it is going to affect the Xbox One.

As much as some people don't want to believe it, the Xbox One was built with DX12 in mind. This means that the Xbox One is ready to fully utilize the feature set of DX12 when it is finally made available to developers.

There are two major things that DX12 is bringing to Xbox One when it comes to development. One is the feature set of DX12 and the other is multi-core communication.
 

Seanspeed

Banned
I do think it's misleading. I think it's quite clear that AMD didn't optimize DX11 drivers for this game because they spent resources on Mantle instead. There is no game ever released that has this kind of horrid AMD DX11 performance. And even if this one does release like this, you have Mantle anyway, so what's the point?
Do you have Mantle, though? I know you can't expect complete and immediate adoption, but Mantle is extra work on a developer's part and far from guaranteed to be there.

Now, you can argue the same thing about DX12, but when the extra work is likely to benefit 95% of your consumers, unlike Mantle, I'd say there's a lot more incentive to actually take advantage of it.

Sure, but only on PC; there is no way console CPUs could handle those draw calls. Consoles already have these kinds of APIs and they're barely treading water against a mid-level PC.

So the added features in PC versions of multi-platform games would be superficial in most instances.
Consoles *can* take advantage of those draw calls, though. Consoles lag behind on core performance, but they can also get away with having only 30 fps games and nobody will complain.

Not to mention that not every PC game is multiplatform.
 

Etnos

Banned
89% of the Windows market share on Steam will have the option to get Windows 10 for free.

We're talking about PC gaming and the effect of DX12 on PC gaming.

That is changing quickly; OS X keeps growing year after year. Why let go of 11% of the market?

Consider that most (if not all) of the important engines out there already support OpenGL.

Why would the important companies making middleware invest in a proprietary renderer that limits them to MS platforms?
 
There are a few things to consider when discussing how it is going to affect the Xbox One.

As much as some people don't want to believe it, the Xbox One was built with DX12 in mind. This means that the Xbox One is ready to fully utilize the feature set of DX12 when it is finally made available to developers.

There are two major things that DX12 is bringing to Xbox One when it comes to development. One is the feature set of DX12 and the other is multi-core communication.
I swear I've read this before. Did this come from a PR statement somewhere?
 

c0de

Member
Great news for the Windows gaming community.
To glNext: I hope they are able to standardize OpenGL among all the different platforms, because the current state, where everything has GL in its name but is totally diverse in what it's capable of, is awful.
 

Seanspeed

Banned
That is changing quickly; OS X keeps growing year after year. Why let go of 11% of the market?

Consider that most (if not all) of the important engines out there already support OpenGL.

Why would the important companies making middleware invest in a proprietary renderer that limits them to MS platforms?
Is it really changing quickly, though?

I seriously doubt Windows is going to lose any significant foothold on the PC gaming market anytime soon, no matter how much some of you want it to.
 

dr_rus

Member
MS is still working to make it thinner.

Why? It is as thin as it can be.
DX12 on XBO means that a game written for DX12 should basically just run on both XBO and a Windows 10 PC. That's the main plus here - any indie who wants to build a PC game is basically building an XBO version at the same time.
 

jfoul

Member
DX12 is looking pretty good so far. I'm glad Windows 7 & 8 users are getting a free upgrade to Windows 10.
 

dr_rus

Member
Anyway, will existing DX11 games perform better with DX12, or will they need to be patched?

They will run on the same DX11, which isn't going anywhere, so they will perform exactly the same.

To take advantage of DX12 you need a new renderer. I can see some games getting one in a post-release patch, sure. But on average I don't expect that many games to use DX12 at all. It will be easier for most developers to use DX11.3.
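For a sense of what "a new renderer" means in practice, here's a minimal sketch (hypothetical IRenderer interface, no real API calls) of the backend abstraction a multi-API engine ends up maintaining; every feature has to be implemented once per backend, which is exactly the extra cost being weighed here:

```cpp
#include <cstdio>
#include <memory>

// Hypothetical renderer abstraction: each supported API is a full backend.
struct IRenderer {
    virtual ~IRenderer() = default;
    virtual void draw_frame() = 0;
};

struct D3D11Renderer : IRenderer {
    void draw_frame() override {
        std::printf("DX11 path: driver manages hazards and memory\n");
    }
};

struct D3D12Renderer : IRenderer {
    // A real DX12 backend must handle what the DX11 driver did for free:
    // explicit synchronization, resource residency, barriers, etc.
    void draw_frame() override {
        std::printf("DX12 path: engine manages hazards and memory\n");
    }
};

std::unique_ptr<IRenderer> make_renderer(bool dx12_supported) {
    if (dx12_supported) return std::make_unique<D3D12Renderer>();
    return std::make_unique<D3D11Renderer>();
}

int main() {
    auto r = make_renderer(/*dx12_supported=*/true);
    r->draw_frame();
}
```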
 

Durante

Member
Any research on how well DX12 scales with mobile GPUs? Curious how much of a boost I can expect out of my GTX 670M 3GB mobile GPU; it's coupled with a quad-core i7 at 2.8GHz. Hoping it's a decent one, but it seems to scale lower with lower-clocked GPUs.
DX12 (or any other low-level API) won't do too much for you if you are already GPU limited.

Despite what people want to believe, if you are limited on the GPU (and not CPU), there isn't any large overhead on PCs in the first place which could be mitigated to greatly increase performance. Almost all benchmarks of multi-platform titles confirm this.
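A crude way to see the point: if you model frame time as the max of CPU submission time and GPU render time (a simplification that ignores pipelining), shrinking the CPU side only matters while it is the larger term. A minimal sketch with made-up numbers:

```cpp
#include <algorithm>
#include <cstdio>

// Toy model: the frame can't finish faster than its slowest stage.
static double frame_ms(double cpu_ms, double gpu_ms) {
    return std::max(cpu_ms, gpu_ms);
}

int main() {
    // CPU-bound case: halving CPU overhead (think DX11 -> DX12) helps a lot.
    std::printf("CPU-bound: %.1f ms -> %.1f ms\n",
                frame_ms(20.0, 10.0), frame_ms(10.0, 10.0)); // 20 -> 10 ms
    // GPU-bound case: the same CPU saving changes nothing.
    std::printf("GPU-bound: %.1f ms -> %.1f ms\n",
                frame_ms(5.0, 16.0), frame_ms(2.5, 16.0));   // 16 -> 16 ms
}
```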

We're talking about PC gaming and the effect of DX12 on PC gaming.
Your original claim was about gaming in general, not PC gaming. That was wrong.
 

Kayant

Member
Does this mean hyperthreading may benefit?

Doesn't look like it.

The CPU they are using is an i7-4960X, emulating an i5 and an i3 by disabling cores and Hyper-Threading, and based on the results there isn't much of a difference. That could change in an actual game, but in this benchmark it shows no advantage so far.


This is due to the fact that at these settings, even pushing over 100K draw calls, both GPUs are solidly GPU limited. Anything more than 4 cores goes to waste as we’re no longer CPU-bound. Which means that we don’t even need a highly threaded processor to take advantage of DirectX 12’s strengths in this scenario, as even a 4 core processor provides plenty of kick.

Your original claim was about gaming in general, not PC gaming. That was wrong.

That goalpost moving.
 
There are a few things to consider when discussing how it is going to affect the Xbox One.

As much as some people don't want to believe it, the Xbox One was built with DX12 in mind. This means that the Xbox One is ready to fully utilize the feature set of DX12 when it is finally made available to developers.

There are two major things that DX12 is bringing to Xbox One when it comes to development. One is the feature set of DX12 and the other is multi-core communication.

No. Just no.

Will the Xbox benefit? Yes, but in minor circumstances.

Xbox already utilises most of what DX12 brings to the table. Phil Spencer said it himself:

“On the DX12 question, I was asked early on by people if DX12 is gonna dramatically change the graphics capabilities of Xbox One and I said it wouldn’t. I’m not trying to rain on anybody’s parade, but the CPU, GPU and memory that are on Xbox One don’t change when you go to DX12. DX12 makes it easier to do some of the things that Xbox One’s good at, which will be nice and you’ll see improvement in games that use DX12, but people ask me if it’s gonna be dramatic and I think I answered no at the time and I’ll say the same thing.”

There will be no more than a 5% or so increase in performance on Xbox imo.

And 5% of not very much, is not very much.

A plus could be that creating a game for PC allows a straight, or near straight transfer to the Xbox platform, via renaming the effects option from "low" on PC to "Xbox".
 

derExperte

Member
That is changing quickly, OSx keeps growing year after year. Why let go 11% of the market?

Consider most (if not all) of the important engines out there already support OpenGL

Why would the important companies making middleware invest in a proprietary rendered that limits them to MS platforms?

Large parts of the 11% are Windows users; they just won't be able to get W10 for free. A bunch will still buy it, while OS X and Linux show minuscule gains. Those gains are slightly higher when looking at the whole market, but in the PC gaming space not a lot is changing, and W10 could become a hit going by the early buzz. Ignoring everything but Windows is still viable.
 

Livelife

Banned
No. Just no.

Will the Xbox benefit? Yes, but in minor circumstances.

Xbox already utilises most of what DX12 brings to the table. Phil Spencer said it himself:



There will be no more than a 5% or so increase in performance on Xbox imo.

And 5% of not very much, is not very much.

A plus could be that creating a game for PC allows a straight, or near straight transfer to the Xbox platform, via renaming the effects option from "low" on PC to "Xbox".



The Xbox One has DX12 hardware that uses the full feature set; it will obviously benefit in raw performance and power efficiency. In addition, there's the ability to communicate simultaneously between CPU and GPU with all 7 CPU cores instead of 1 core at a time.


I can't see MS investing 3 billion in Xbox hardware for only a minimal performance increase. Phil Spencer also said to think of the launch-era 360 Perfect Dark Zero to Halo 4 jump,

i.e. Perfect Dark Zero = 1152×640 (no AA) at 30 fps to Halo 4 = 1280×720 (post-AA) at 30 fps [plus all those effects and detailed textures].

As CryEngine, Unreal 4, and Snowdrop will be full DirectX 12 game engines running on Windows 10 (as on Xbox), any game developed with these engines will benefit without extra work and resources. I am sure we will see more when the NDAs expire, hopefully by E3 this year.
 