I would love to champion AMD any day but this last page or so has been intolerable to read.
Agreed. Quiet, children.
GTX 1070 will be the 980 rebadged.

This card sounds badass, but at this point IMO you might as well just wait for the 20nm/16nm parts. I wouldn't be surprised to see a GTX 1070 with gaming performance very close to Titan X at less than half the price. Probably only 6-8GB of VRAM though. (Or 5.5GB lol get it?)
Are we expecting even the next series of Maxwell GPUs to be 28nm?
This: Are we expecting even the next series of Maxwell GPUs to be 28nm?
These are all of the Maxwell chips. If it works out as it did last generation, the 1080 will use the Titan X chip and the 1070 will be a re-branded 980.
Nonsense. It really wouldn't. That's a 110MHz-overclocked 970 vs a 30MHz-overclocked 290X; at stock clocks the story would be far different.
Correct. Sean, those are outdated benchmarks with shitty metrics. I know you know better.
As it is right now, the 290X does outperform the 970. AMD has done a lot, especially since the Omega Driver release, which isn't reflected in some of the older stuff.
But it's also a power-hungry bitch and definitely produces more heat. The other person doesn't seem to understand that watts consumed are, almost one for one, watts of heat produced.
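To put rough numbers on the heat point: essentially every watt a card draws gets dissipated as heat, so the TDP gap between two cards is also the heat gap. A quick sketch, using the board-power figures commonly quoted for these cards (treat them as approximations, not measurements):

```python
# Nearly all electrical power a GPU consumes ends up as heat in the case.
# Assumed TDPs (approximate, commonly quoted board-power figures):
tdp_watts = {"R9 290X": 290, "GTX 970": 145}

extra_heat = tdp_watts["R9 290X"] - tdp_watts["GTX 970"]
print(f"Extra heat the 290X dumps into the case: ~{extra_heat} W")
```

So the 290X's cooler and your case airflow have roughly 145W more heat to deal with, which is why the thermals and noise differ so much between the two cards.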
Take these with a huge grain of salt because I have no idea where they originally came from so no idea if they are legit or not. I saw them on Anandtech but there was no source provided.
Based on the rumored Titan X specs, the gain over the 980 seems a little lower than you would expect: only 35% at 4K, where I would have expected 40%+ given the extra VRAM and memory bandwidth on top of a 50% shader increase.
We have been hearing about the imminent demise of Moore's Law quite a lot recently. Most of these predictions have been targeting the 7nm node and 2020 as the end-point. But we need to recognize that, in fact, 28nm is actually the last node of Moore's Law.
Beyond this point, we can continue to make smaller transistors and pack more of them into the same size die, but we cannot continue to reduce the cost. In most cases, in fact, the same SoC will actually have a higher cost!
...
Beyond 28nm, the SRAM bit-scaling rate is about 20% per node instead of the historical 50%.
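To see what that scaling difference means compounded over a couple of nodes, here is a quick sketch using the 50% and 20% per-node figures quoted above:

```python
# Compound SRAM bit-area scaling over several nodes. Historically each
# node cut bit area ~50%; the article claims only ~20% per node past 28nm.
def remaining_area(per_node_reduction, nodes):
    return (1 - per_node_reduction) ** nodes

historical = remaining_area(0.50, 2)  # two nodes at the historical rate -> 0.25x
post_28nm = remaining_area(0.20, 2)   # two nodes at the claimed new rate -> 0.64x
print(historical, post_28nm)
```

Two nodes used to shrink SRAM to a quarter of its area; at the new rate you only get down to about two-thirds, which is why cost per transistor stops falling.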
The decision to go for an 8GB Fiji rather than the planned 4GB version was in part attributed to Nvidia's Titan X 12GB card announcement. This is just the first part of the story. One of the main reasons is that the card is expected to perform so well in 4K gaming that the 4GB frame buffer could impose a serious limitation.
Our sources are confident that the card is coming this summer, or early summer to be precise, but we don't have a better date than that. It could be as early as Computex, which starts in early June, but we would expect it to happen slightly later than that. Our friends from Sweclockers.com were reporting that the cards should come in Q2 2015 with 4GB HBM memory, but I guess that plan might be slightly altered.
50% faster than the 980 isn't nearly as much as I would have hoped. The 4K/60 FPS dream remains just a dream.
Instead of assuming he read the graphs incorrectly, you assume he is holding the PR wagon and spreading FUD. Dude, you have to calm down... seriously.

It's not 50% faster, it's ~35% faster. An overclocked Titan X is 50% faster than a non-overclocked 980. Stop spreading or believing FUD and lies from Nvidia; it's embarrassing.
That seems odd since you have higher fps on higher resolutions?
I thought gen 1 HBM was maxed at 4GB?
I think what he meant was the Nvidia slide in the OP and the title of this thread. I don't think he (or us) perused those Chinese graphs to conclude it was 50% faster.

Instead of assuming he read the graphs incorrectly, you assume he is holding the PR wagon and spreading FUD. Dude, you have to calm down... seriously.
Because an aggressive strategy, especially coming right off the 970 fiasco, could really give AMD a huge boost in marketshare, something they've been lacking for a long time.
I wouldn't think $500, but $600 might just be a genius, bold move.
This was being discussed earlier in the thread, though I'm not sure about the credibility of this rumor:
Fiji Radeon 390X comes with 8GB frame buffer
http://www.fudzilla.com/news/graphics/37258-fiji-radeon-390x-comes-with-8gb
It's interesting that the Nvidia slide says 50% while all practical benchmarks seen so far say ~35%. I wonder if they are making theoretical claims or plan to actually back that up with benchmarks at GTC. I also wonder whether that slide is actually official or not (though I can't imagine it being unofficial).
It's 50% clock for clock.
Obviously the bigger chip isn't going to clock as high in practical situations. It has cooling and power limits to stay inside.
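A back-of-the-envelope way to see this: theoretical shader throughput scales roughly with shader count times clock. Using the rumored Titan X figures (3072 shaders at ~1000MHz base) against the GTX 980 (2048 shaders at 1126MHz base); both sets of numbers are assumptions from the specs floating around:

```python
# Theoretical throughput ~ shader count x clock. Clock for clock the big
# chip is 50% wider, but at realistic clocks the gap shrinks.
def rel_throughput(shaders, clock_mhz):
    return shaders * clock_mhz

# Assumed: Titan X 3072 shaders @ ~1000 MHz, GTX 980 2048 @ 1126 MHz base.
speedup = rel_throughput(3072, 1000) / rel_throughput(2048, 1126)
print(f"~{(speedup - 1) * 100:.0f}% faster")  # ~33%, close to the ~35% seen in benchmarks
```

Which is exactly why "50% clock for clock" and "~35% in practice" can both be true at once.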
Sorry to ask again: is a 750-watt power supply expected to be sufficient, running a 5820K and 8GB of DDR4?
So it is theoretical, then.
As long as you aren't planning an SLI setup in the future (or now), you should be good.
It could probably be, which is why I posted with a disclaimer.

Article reads like a pile of rubbish, to be honest.
A lot of games can go over 3GB already.

And presumably a 12GB option too, right?
Marketing-wise, it's just 2/3 as capable versus 1/3.
(It's hard for me to imagine many games will require more than 4GB, given how almost all games are limited by having console versions.)
They are all sad: $399, $349, or $999.

...and I thought people getting defensive over consoles was bad. People getting defensive over $1000 hardware is just something else.
Yep. Though if it's a poor-quality 750W, and you're pushing your 5820K to the bleeding edge on an overclock, and want to do the same with a Titan X, then you might (but still probably wouldn't) run into a problem.
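For anyone who wants to sanity-check the 750W question themselves, a rough budget looks like this. The TDP figures used (~140W for the 5820K, ~250W for a Titan X, ~100W for motherboard/RAM/drives/fans) are assumptions, not measurements:

```python
# Rough PSU budget sketch with assumed TDPs; real draw varies per system.
budget_watts = {"5820K": 140, "Titan X": 250, "rest of system": 100}

stock_draw = sum(budget_watts.values())  # stock clocks
worst_case = stock_draw + 150            # generous allowance for CPU + GPU overclocks
print(stock_draw, worst_case)            # both comfortably under 750 W
```

Even the overclocked worst case leaves over 100W of headroom on a decent 750W unit, which is why the answer is "yes, unless the PSU itself is junk."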
The reference design for the R9 290/290X is really shitty.

I have one and it's loud and runs at 94°C... the worst part? It throttles a lot.

But I bought a €40 aftermarket cooler... that made such a difference. It runs quietly and barely reaches 75°C under load, overclocked to 1GHz.
Weren't there rumors that the compute side of things were gimped this time around? Doesn't look like it.
Are we seriously expecting 2 more years of 28nm? Nvidia has been on 28nm for 3 years already (the 600 series launched in March 2012).

GTX 1070 will be the 980 rebadged.
You're talking about 2+ years out for the next die shrink and architecture change.
The new architecture (Pascal) is coming in 2016. But I was under the impression that we'd see 16nm GPUs in 2015.
This guy is pretty salty about the reference cooler... Is he right? https://youtu.be/zytKg9169Zc
So roughly 50% faster at 200% of the cost of a 980?
I know he sounds extremely negative. I just wanted opinions about his points regarding the reference cooler.

Who the hell is he, even? He has like 50 views on a week-old video, and I can see why: all he does is say "I would never buy a reference cooler design" for 6 minutes straight without giving any reason why he doesn't like it or why it's objectively bad... hell, he doesn't even know how the card performs with that cooler, because nobody has tried the thing.
Most people I have heard from like the Titan-style reference cooler. I will watch the video to see what he means.
No, roughly 35% faster at 200% of the cost. (Nvidia's charts claim 50% because they are comparing an overclocked Titan to a reference-clocked 980, which is misleading. Seems typical for Nvidia these days.)
One highly respected developer can see things moving in another direction based on the way games are now made.
"I can totally imagine a GPU with 1GB of ultra-fast DDR6 and 10GB of 'slow' DDR3," he says. "Most rendering operations are actually heavily cache dependent, and due to that, most top-tier developers nowadays try to optimise for cache access patterns... with correct access patterns, correct data preloading and swapping, you can likely stay in your L1/L2 cache all the time."
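The "access patterns" point he's making is about walking memory in the order it is laid out, so consecutive reads stay in cache. A minimal illustration of the two orders (Python hides the actual cache behaviour, so this only shows the access patterns themselves; in a compiled language the row-order walk over a real array is measurably faster):

```python
# In a C-style row-major array, grid[i][j] and grid[i][j+1] sit next to
# each other in memory, while grid[i][j] and grid[i+1][j] are N apart.
N = 500
grid = [[1] * N for _ in range(N)]

# Cache-friendly: the inner loop moves along a row (unit stride).
row_order = sum(grid[i][j] for i in range(N) for j in range(N))

# Cache-hostile: the inner loop jumps a whole row (stride N) every read.
col_order = sum(grid[i][j] for j in range(N) for i in range(N))

assert row_order == col_order == N * N  # same answer; only access order differs
```

Same result either way, which is the point: the optimisation he describes is purely about *when* you touch each address, not *what* you compute.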