OK, so the card arrives on Monday (I'm EST in the US). I'll likely be able to start posting benchmarks by late afternoon (in my world, that's 5pm). The titles I'll be running are:
1.) OctaneBench (by request)
2.) Star Citizen (big, poorly optimized, perfect!)
3.) Ghost Recon: Wildlands (this ran inconsistently in 4K at max settings on my Titan Xp)
4.) new Doom (for Vulkan/DX12 perf)
5.) AC: Unity (I should just start saying "Ubisoft" as my reason ;P )
6.) The Witcher 3 (to test the dreaded HairWorks! Otherwise this already runs smoothly at 4K/60 on the Titan Xp)
7.) AC: Origins (currently hovers around 50fps in 4K/max with the Titan Xp)
8.) Mass Effect: Andromeda (also ran janky at 4K/max settings, hoping for smooth play)
If you feel a specific title really needs to be added or removed from this list, please let me know. I'll do my best to provide as much data as I can, but please keep in mind that I make no promises that I will get to your title.
@people touting "non gaming" use
Is Titan V really usable in "pro" setting? If so, what is the point of "Tesla V100"?
GTA V 4k
He isn't. It's just the automatic Nvidia defence from the usuals.
What Deep Learning company do you work for Kirankara? And when did you bench a Titan V in the correct algorithms to say with authority that this is worth the price?
Why an NVMe drive? They're an order of magnitude slower than actual RAM.
Why not just install more system memory and use it as a cache?
This kind of memory caching has been around forever, and used to be touted as a feature starting with AGP cards, as I recall. (And it's still in use with on-die graphics. They just carve out a chunk of system ram.)
Hi there. I can answer that for you. Don't think of the 12GB of HBM as a hard limit. Instead, think of this memory as a cache.
HP has a workstation that will take 3TB of RAM; I'm sure there are other solutions.
TB+ data sources are the problem; check out why the SSG does what it does. Of course RAM would be maxed out, but even that can't get up there.
https://pro.radeon.com/en/product/pro-series/radeon-pro-ssg/
The older systems you're thinking of also weren't as effective as using the VRAM as a last-level cache in front of system memory for large datasets. Old systems were dumber: they might split a high-need asset into system RAM, for instance, and suffer slowdowns because of it, so it was best to keep everything within VRAM.
https://techgage.com/article/a-look-at-amd-radeon-vega-hbcc/
Cool, that's what I wondered. Basically the HBCC concept.
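The caching idea being discussed above can be sketched in a few lines. This is a toy model, not how HBCC actually works: a small fast pool (standing in for HBM) fronts a large slow store (system RAM or NVMe), with least-recently-used eviction. All names here are made up for illustration.

```python
from collections import OrderedDict

# Toy sketch of "VRAM as last-level cache": a small fast pool fronting
# a large slow backing store, with LRU eviction on overflow.
class LastLevelCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.cache = OrderedDict()   # asset id -> data, kept in LRU order
        self.backing = backing_store

    def fetch(self, asset_id):
        if asset_id in self.cache:           # hit: serve from fast memory
            self.cache.move_to_end(asset_id)
            return self.cache[asset_id]
        data = self.backing[asset_id]        # miss: page in from slow store
        self.cache[asset_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return data

store = {i: f"texture-{i}" for i in range(100)}
vram = LastLevelCache(capacity=12, backing_store=store)
vram.fetch(7)   # miss: paged in from the backing store
vram.fetch(7)   # hit: served from the "HBM" pool
```

The point of the real hardware feature is the same as this sketch: the working set lives in the fast pool, while the addressable dataset can be far larger than the physical VRAM.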
HP has a workstation that will take 3TB of RAM, I'm sure there are other solutions.
I just imagine a huge speed penalty for dropping down to an SSD cache instead of RAM, but I guess there's something to be said for not having to go over the PCIe bus.
I see the chief sticking point being the naming convention. Branding it as a Titan card causes the gaming media to add the obligatory "technically this is not a gaming card!" comment at the head of an article, but then they quickly delve into its value as a gaming card. The PC HEDT people have also, perhaps begrudgingly, embraced the Titan series as an enthusiast card.
Nvidia, for their part, does a good job making the point that it's not a gaming card, while always giving gamers just enough to want to bite and see it as a gaming card. The price point for this card is excellent considering what it is designed for. It's just fools such as myself that even bring it into the gaming-world equation.
I do look forward to t̶h̶i̶s̶ ̶f̶o̶o̶l̶s̶ err, I mean your review though. Numbers and unboxing thread would be cool.
Will this actually be able to run 99% of games at 4k 60?
How long until you post benchmarks ?
110 Tflops @FP16? Holy fuck!
That's really for the deep learning side of things. For gaming purposes this should be seen as a 15 Tflops card that with some tweaking might hit 16 Tflops. Minecraft should run butter smooth.
Ah, I see. Thanks. Yea, minecraft should finally hit 60fps@4K.
Here's me (and folks like myself who aren't willing to put down US $3K (or more like CAD $5K) on a graphics card) hoping that the tech does trickle down to next gen consoles in around 2-3 years time esp. HBM2 and 12nm fab.
I'm fortunate to be able to afford such a financially questionable purchase and plan to donate this card to a research group, who can properly use it, when I switch to my next card. Rest assured, Gaffers, this card will do some good for the world long before it calculates its last poly! ;P
As for next gen hardware, I would expect it to be all on AMD hardware CPU/GPU, with the GPU using GDDR6 memory.
Wow! That is truly fantastic of you. Might I ask what kind of application this card would be best serving (aside from bitcoin mining)?
EDIT: Also, isn't AMD also pursuing HBM2 tech?
Deep learning/AI mostly. I happen to work at a well-known university with a fair amount of research going on, so finding a suitable home for this card should be fairly easy! As for HBM2 memory, I'm guessing it's too cost-prohibitive to be the path forward.
Really looking forward to your benchmarks. Run Unigine Superposition as well. You should do an unboxing too. Too bad it is getting late here in Sweden.
Sweet. What's the rest of your build? Is that a BitFenix Prodigy?
The Titan V is a slightly cut-down version of the Tesla V100's GV100 chip. As an AI/deep learning GPU it's quite a bargain, and a big deal exactly because it competes very favorably with a Tesla V100. That said, it does not seem to support SLI or NVLink, so the trade-off for the relatively "cheap" price of this budget card is its inability to be stacked in a computational array.
800 Watt PSU, can't recall off hand who made it, the shame of it all!
be quiet!
That's the maker.
Also please add these benchmarks:
SuperPosition
FireStrike/TimeSpy Extreme
Thx.
Is SLI support really removed, or is it still possible to run two Titan V's in one computer?
They will probably run together just fine for HPC applications.
Not exactly. It's 110 Tflops at tensor math. A tensor core is a unit that multiplies two 4×4 FP16 matrices.
Gflops at base clock: 12,288
Gflops at boost clock: 14,899
Double those for non-tensor FP16.
https://en.wikipedia.org/wiki/Tensor
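Those Gflops figures can be sanity-checked with back-of-the-envelope arithmetic. The specs below are assumed from public Titan V listings (5120 CUDA cores, 640 tensor cores, 1200 MHz base / 1455 MHz boost), so treat this as a sketch rather than authoritative numbers:

```python
# Assumed Titan V specs (from public listings, not verified here).
CUDA_CORES = 5120
TENSOR_CORES = 640
BASE_HZ = 1200e6
BOOST_HZ = 1455e6

# One FMA per CUDA core per clock counts as 2 floating-point ops.
fp32_base = CUDA_CORES * 2 * BASE_HZ / 1e9    # Gflops at base clock
fp32_boost = CUDA_CORES * 2 * BOOST_HZ / 1e9  # Gflops at boost clock

# Each tensor core does a 4x4x4 matrix FMA per clock:
# 64 multiplies + 64 adds = 128 ops.
tensor_boost = TENSOR_CORES * 128 * BOOST_HZ / 1e12  # Tflops

print(fp32_base)     # 12288.0 -> matches the quoted base-clock figure
print(fp32_boost)    # 14899.2 -> matches the quoted boost-clock figure
print(tensor_boost)  # ~119 Tflops peak; the marketed 110 implies a lower sustained clock
```

The FP32 numbers land exactly on the figures quoted above, which suggests those were derived the same way.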
Shadow of War at 4K, max settings:
https://drive.google.com/file/d/1eiFF76V1afHDtyZ-wlVw8Okv8d6dPRI3/view?usp=sharing
~61% faster than my GTX 1080, not bad but not exactly spectacular either.
Has it actually been confirmed anywhere that GV100 has double-rate FP16 on its main SIMDs? This seems unnecessary when there's an abundance of FP16 processing power in the tensor ALUs, and the main point of having FP16 there is deep learning, which fits really well on tensor ops 99.9% of the time.
For arbitrary floating point arithmetic, TensorCore would not be used in the general case. Therefore, 16-bit and 32-bit (and 64-bit) floating point ALUs are still provided for general purpose arithmetic uses, i.e. any use case that does not map into a 16-bit input hybrid 32-bit matrix-matrix multiply and accumulate.
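That mixed-precision matrix-multiply-accumulate can be sketched numerically. This NumPy model only approximates the hardware (which multiplies natively in FP16 and accumulates in FP32; here the operands are upcast before the multiply), and the function name is made up for illustration:

```python
import numpy as np

# Sketch of the tensor-core primitive D = A @ B + C, where A and B are
# 4x4 FP16 matrices and C/D are FP32 accumulators. Approximation only:
# real hardware multiplies in FP16, this upcasts first.
def tensor_core_mma(a_fp16, b_fp16, c_fp32):
    product = a_fp16.astype(np.float32) @ b_fp16.astype(np.float32)
    return product + c_fp32

a = np.ones((4, 4), dtype=np.float16)
b = np.ones((4, 4), dtype=np.float16)
c = np.zeros((4, 4), dtype=np.float32)
d = tensor_core_mma(a, b, c)  # every element is 4.0, dtype float32
```

The key point from the quote above is visible in the signature: the op only helps when your workload maps onto exactly this 16-bit-input, 32-bit-accumulate matrix shape; anything else falls back to the ordinary FP16/FP32/FP64 ALUs.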