PLASTICA-MAN
Member
WCCFTech posted a rumor that 1080 Ti is coming in Jan 2017, and gave the following specs:
Well from the other rumours I read it will be GDDR5 and not GDDR5X. I hope what you posted is true.
If these specs are true, Titan X owners got screwed. Since 1080 Tis will come with superior aftermarket coolers, I can easily see them offering better performance through overclocking to make up for the marginally fewer CUDA cores.
Yeah, but Titan owners always get screwed eventually. It happened with the original and the X. That's the price to pay (literally and figuratively) for getting that level of power at the first opportunity.
I'm quite comfortable with my 980 Ti and G-Sync, so I'll be waiting it out and seeing what hits next year.
Do they want to lose the trust of consumers? Like, damn. Don't get me wrong, I'd be glad, but I'd feel bad for someone who picked up a 1080.
Will probably wait till the new line to upgrade from my 970.
I did consider getting a 1070 before its launch, but they priced me out. If I'm going to pay Nvidia's higher "we're popular and in demand" prices, then I at least want a significant upgrade. The 970 is mostly doing me fine, outside of outliers like Forza Horizon 3, which runs rather poorly all things considered.
Are there any leaps forecast (or that 'just' happened) in the GPU industry? The only two I remember are HBM and stacked RAM, neither of which has taken off yet. Are things looking like the same-ish gradual improvements, or is there something 'big' going on?
CPU development has 'stagnated' in my view, relatively speaking, since multi-core appeared and then things stalled again. Nvidia seems to put major emphasis on Tegra. I think they did great with the latest GTX cards (10xx), but nothing major industry-wide tech-wise.
Going to ride my 980Ti through at least 2017.
At that point I'll either jump on a dropped in price 2017 GPU or start scoping what 2018 has to offer. I also plan on seeing what exactly Scorpio has to offer and its price point.
The smallish gap between the Titan and the 1080 makes slotting in a 1080 Ti tricky. Either 1080 buyers or Titan buyers may feel shortchanged.
That makes no sense. The GTX 1080 already has GDDR5X, so why would the 1080 Ti go back to GDDR5?
Anyway, the rumors are saying the 1080 Ti will have GDDR5X.
Makes no sense because it's probably bullshit. It will have GDDR5X.
Exactly.
Anyway, at GTC 2017 (in April) I expect Nvidia to update their roadmap by showing the name and time frame of their next GPU architecture after Volta. This year was the first time Nvidia didn't update their roadmap, after shifting it around in previous years: first Volta, then Pascal, then Pascal followed by Volta.
Einstein
Here's the fun bit to contemplate. Nvidia has demonstrated it can deliver 42.5 teraflops of peak performance and about 28.7 teraflops of sustained performance in a 4U server node. That is about 10.6 teraflops per 1U of rack form factor space. IBM, Nvidia, and Mellanox are working to get more than 40 teraflops of capacity into a 2U Witherspoon system next year with the Summit supercomputer they are building for the US Department of Energy facility at Oak Ridge National Laboratory. To our way of thinking, that means the future Volta GV100 accelerator cards should have about twice the performance of the Pascal GP100 cards currently shipping, if IBM sticks with the 2U form factor and only puts four GPU accelerators into the Witherspoon box alongside a pair of 24-core Power9 chips, as we expect.
The current rumor is 10GB. GDDR5X or non-X doesn't seem to be confirmed yet. However, a 12GB 1080 Ti with GDDR5 (non-X) memory would still have more memory bandwidth than the 8GB 1080 with GDDR5X. I haven't calculated 10GB yet... too lazy.
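The bandwidth comparison above is easy to check on paper. The bus widths and per-pin data rates below are my own assumptions (384-bit/8 Gbps GDDR5 for a 12GB card, 320-bit/8 Gbps for a 10GB card, against the 1080's known 256-bit/10 Gbps GDDR5X), not confirmed specs:

```python
# Back-of-envelope GDDR bandwidth: bus width (bits) x per-pin data rate (Gbps) / 8.
# The Ti configurations here are assumptions based on the rumored capacities.

def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

gtx_1080 = mem_bandwidth_gbs(256, 10.0)   # 8 GB GDDR5X, 256-bit -> 320.0 GB/s
ti_12gb  = mem_bandwidth_gbs(384, 8.0)    # 12 GB GDDR5, 384-bit -> 384.0 GB/s
ti_10gb  = mem_bandwidth_gbs(320, 8.0)    # 10 GB GDDR5, 320-bit -> 320.0 GB/s

print(gtx_1080, ti_12gb, ti_10gb)
```

So under those assumptions the 12GB GDDR5 card does out-bandwidth the 1080, while a 10GB card on a 320-bit bus would only tie it.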
The Summit configuration also tells us, perhaps, something about the Volta GPUs, or at least the ones being used inside of Summit.
Way back when, in early 2015, Nvidia said that it would be able to deliver Pascal GPUs with 32 GB of HBM memory on the package that delivered 1 TB/sec of bandwidth into and out of that GPU memory. What really happened was that Nvidia was only able to get 16 GB of memory on the package and only delivered 720 GB/sec of bandwidth with that on-package HBM with the Tesla P100 card. No one is making promises about the amount of GPU memory or bandwidth coming with Volta, as you can see above. What Nvidia has said, way back in 2015, is that the Volta GV100 GPUs would deliver a relative single precision floating point general matrix multiply (SGEMM) efficiency of 72 gigaflops per watt, compared to 40 SGEMM gigaflops per watt for the Pascal GP100.
If you use that ratio, and then cut it in half for double precision, then a Volta GPU held at a constant 300 watts (the same as the Pascal package) would have a little over 9.5 teraflops of double precision performance, and four of them would deliver 38.2 teraflops of oomph, the vast majority of the more than 40 teraflops of performance expected in the Summit node. Six GPUs at this performance level would deliver a total of 57.2 teraflops from the GPUs alone, which no one has promised, and that is why we think Nvidia is gearing these Volta GPUs down to maybe 200 watts. If you cut the clocks, and therefore the thermals, down by 100 watts on each card, you can stay in the same 1,200 watt GPU envelope as four Pascal P100 cards but maybe only cut performance by 20 percent to 25 percent against that 33 percent wattage drop. By moving from four Voltas to six Voltas per node, the HBM memory per node could increase by a lot (100 percent in capacity per card and another 50 percent from having more cards), and the performance per watt and the aggregate performance could be pushed a little further, too. With a geared-down Volta card running at 200 watts, you could have a V100 card that delivers 7.6 teraflops at double precision and 38.2 gigaflops per watt, compared to something like 31.8 gigaflops per watt for the faster V100 card we theorized above.
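The per-card scaling in that paragraph can be sanity-checked with quick arithmetic. These are the article's own estimates (with a 20 percent performance cut assumed for the 100-watt reduction), not official Volta specs:

```python
# Reproducing the scaling math above with the article's own numbers.
dp_tf_300w = 9.55                      # ~9.5 TF double precision per 300 W Volta card
four_cards = 4 * dp_tf_300w            # -> 38.2 TF, most of the >40 TF node target
six_cards = 6 * dp_tf_300w             # -> ~57.3 TF (text rounds to 57.2), never promised
dp_tf_200w = dp_tf_300w * (1 - 0.20)   # assume ~20% perf cut for a 33% power cut
gf_per_watt_200w = dp_tf_200w * 1000 / 200   # ~38.2 GF/W for the geared-down card
print(four_cards, six_cards, dp_tf_200w, gf_per_watt_200w)
```

That 7.64 TF at 200 watts matches the "7.6 teraflops" figure, and the efficiency gain over the 300-watt card (roughly 31.8 GF/W) is the whole argument for gearing the GPUs down.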
For fun, let's call it 50 teraflops per node in Summit. That is a total of 512 GB of main memory (with 120 GB/sec of bandwidth, according to specs provided by IBM earlier this year), and if Nvidia can reach on Volta the original goal of 32 GB of HBM memory per GPU accelerator that it hoped to hit with Pascal, that works out to 192 GB of HBM memory with 6 TB/sec of bandwidth. That is a lot more than the 64 GB of HBM memory and aggregate 2.8 TB/sec of GPU memory bandwidth in the current Minsky Power Systems LC precursor to Summit's Witherspoon node. There is another 800 GB of non-volatile memory in the Summit node; we are pretty sure it is not Intel's 3D XPoint memory, and we would guess it is flash capacity (probably NVM-Express drives) from Seagate Technologies, but Oak Ridge has not said. The math works with this scenario: with 512 GB of DDR4 main memory, a total of 192 GB of HBM memory on the GPUs, and 800 GB of flash, across 4,600 nodes that is a total of 6.9 PB of aggregate memory. (By the way, that chart has an error. The Titan supercomputer has 32 GB of DDR3 memory plus 6 GB of GDDR5 memory per node to reach a total of 693 TB of aggregate memory.)
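Tallying those per-node figures shows the aggregate lands in petabytes (these numbers are the quoted estimates, with 32 GB of HBM per GPU still unconfirmed for Volta):

```python
# Per-node memory tally for Summit, using the figures quoted above.
ddr4_gb = 512     # DDR4 main memory per node
hbm_gb = 192      # 6 GPUs x 32 GB HBM (the original Pascal-era goal)
nvm_gb = 800      # non-volatile (likely flash) memory per node
node_gb = ddr4_gb + hbm_gb + nvm_gb      # 1,504 GB per node
total_pb = node_gb * 4600 / 1e6          # ~6.9 PB across 4,600 nodes
print(node_gb, round(total_pb, 1))
```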
At that 50 teraflops of performance per node, which we think is doable if the feeds and speeds for Volta work out, that is a 230 petaflops cluster peak, and if the performance of the Volta GPUs can be pushed to an aggregate of 54.5 teraflops per node, then we are talking about crossing through 250 petaflops, a quarter of the way to exascale. And this is also a massive machine that could, in theory, run 4,600 neural network training runs side by side for machine learning workloads (we are not saying it will), but at the half precision math used in machine learning, that is above an exaflops of aggregate compute capacity.
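The cluster-level totals follow directly from the per-node estimates:

```python
# Cluster peak from the per-node estimates above.
nodes = 4600
peak_pf_50 = 50.0 * nodes / 1000      # 230 PF at 50 TF per node
peak_pf_545 = 54.5 * nodes / 1000     # ~250.7 PF at 54.5 TF per node
print(peak_pf_50, peak_pf_545)
```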
Boom.
Help me with the math: what would a single GV100 do in single precision?
Then we might know roughly how much a pared-back GV102 (Titan X Volta) would do, assuming Nvidia goes with a GV102 rather than a GV100 for the GTX Titan and GTX 1180 Ti!
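A rough answer to the question above, taking the article's 72 SGEMM gigaflops per watt at face value and assuming the 300-watt TDP carries over from Pascal (SGEMM runs below theoretical peak, so this is a conservative-ish figure, not a spec):

```python
# Single precision SGEMM estimate for a hypothetical 300 W GV100,
# derived from the 72 GF/W efficiency figure quoted earlier in the thread.
sgemm_gf_per_watt = 72.0
tdp_watts = 300.0          # assumption: same package power as the Tesla P100
sp_tf = sgemm_gf_per_watt * tdp_watts / 1000   # ~21.6 TF single precision SGEMM
print(sp_tf)
```

A cut-down GV102 would presumably land somewhat below that, in proportion to however many SMs Nvidia trims.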
It took them a year and a half to refresh the 980. There's no way this will be released in May 2017 unless it's for the supercomputing sector.
The Maxwell refresh was limited by the availability of the 16nm production process. If Volta uses the same 16nm process, there won't be any reason to wait for anything but the readiness of the architecture itself.
Depends a lot on what AMD puts out with Vega. If Vega competes with their 1070/1080 cards, Nvidia will probably put out a new lineup; otherwise they might wait.
Meanwhile, displays are not keeping up with GPUs. We still only have 4K @ 60 Hz displays. I was hoping for 120+ Hz models by the end of this year, but it looks like they will be available next summer at the earliest, if even then.
Shit, and I was planning on building my first gaming PC this Cyber Monday. Should I wait for Volta to release?
Probably not. You could grab a cheap 980 Ti and use that. When heavily overclocked it is almost the same as a stock 1080.
You can say that every year. You can wait or you can go now, but the GPU industry releases stuff pretty fast, so you can always play the waiting game or just make a decision and jump in.
They can't release it too fast after the 1080 Ti, since people would be really pissed off without at least a year of owning the top-end card.