
Fudzilla - Nvidia GP102 (Pascal) Titan coming "soon"

PnCIa

Member
The ultra-high-end community is always after the fastest, fanciest thing, even if it doesn't make sense. Hence why something like the Titans sells.
 
It's not necessarily the performance of GDDR5X that's the 'issue' for some. I think it's more to do with the power savings HBM2 would bring to a 16-18GB card. That much GDDR5X VRAM is going to take a comparatively large amount of power, I'd imagine, and to a lesser extent, space.
 

dr_rus

Member
The ultra-high-end community is always after the fastest, fanciest thing, even if it doesn't make sense. Hence why something like the Titans sells.

Well, yeah, but in that case, if it's $999 you'll get 50% more performance for what, 33% more money over an AIB 1080? So it kinda makes sense if you have the money for it.
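A quick back-of-the-envelope check of that price/performance claim (a sketch only; the ~$750 AIB GTX 1080 street price is an assumption, not a figure from the thread):

```python
# Hypothetical prices: $999 Titan vs. ~$750 AIB GTX 1080 (assumed street price).
titan_price = 999.0
gtx1080_price = 750.0

# Rumored performance uplift over the 1080.
perf_premium = 0.50

# How much more money the Titan costs, as a fraction.
price_premium = titan_price / gtx1080_price - 1  # ~0.33, i.e. 33% more money

# Performance per dollar relative to the 1080 (>1 means the Titan is the
# better value under these assumptions).
perf_per_dollar_ratio = (1 + perf_premium) / (1 + price_premium)  # ~1.13
```

Under those assumed numbers the Titan actually comes out slightly ahead on performance per dollar, which is the "kinda makes sense" point.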

It's not necessarily the performance of GDDR5X that's the 'issue' for some. I think it's more to do with the power savings HBM2 would bring to a 16-18GB card. That much GDDR5X VRAM is going to take a comparatively large amount of power, I'd imagine, and to a lesser extent, space.

From what was shown, GDDR5X's power savings are comparable to HBM2's. Although in the case of HBM a lot seems to depend on the I/O width.
 
It's not necessarily the performance of GDDR5X that's the 'issue' for some. I think it's more to do with the power savings HBM2 would bring to a 16-18GB card. That much GDDR5X VRAM is going to take a comparatively large amount of power, I'd imagine, and to a lesser extent, space.

HBM being more efficient means it has lower power draw per MB/s.
As HBM speeds go up, it'll quickly surpass GDDR5 in power consumption.



HBM1 in Fury etc. was so low power because it was very, very slow (for HBM, obviously; it was plenty fast in absolute terms).

HBM2 has twice as much bandwidth and already consumes more power than HBM1

GDDR5X is also more efficient than GDDR5.

The space saved on the PCB would be a bigger argument for me (I have a fairly small case, an Antec Sonata III, which can barely fit 11" GPUs), but it's a niche thing.
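The "power scales with speed" point above can be sketched numerically. The energy-per-bit figures below are rough illustrative assumptions (real values vary by vendor, voltage, and I/O width), not measured specs:

```python
def memory_power_watts(bandwidth_gb_s: float, pj_per_bit: float) -> float:
    """Rough DRAM interface power: bandwidth (GB/s) * 8 bits/byte * energy per bit."""
    return bandwidth_gb_s * 1e9 * 8 * pj_per_bit * 1e-12

# Hypothetical ballpark energy costs per transferred bit.
GDDR5_PJ_PER_BIT = 20.0
HBM_PJ_PER_BIT = 7.0

# HBM at Fury X-like bandwidth still draws less than slower GDDR5:
gddr5_power = memory_power_watts(336, GDDR5_PJ_PER_BIT)  # ~53.8 W at 336 GB/s
hbm_power = memory_power_watts(512, HBM_PJ_PER_BIT)      # ~28.7 W at 512 GB/s

# But push HBM bandwidth high enough and total power climbs past GDDR5:
hbm2_power = memory_power_watts(1024, HBM_PJ_PER_BIT)    # ~57.3 W at 1 TB/s
```

Per MB/s HBM stays ahead, but total draw rises with bandwidth, which is why a fast HBM2 setup can end up consuming more than a slower GDDR5 one.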
 
HBM being more efficient means it has lower power draw per MB/s.
As HBM speeds go up, it'll quickly surpass GDDR5 in power consumption.

HBM1 in Fury etc. was so low power because it was very, very slow (for HBM, obviously; it was plenty fast in absolute terms).

HBM2 has twice as much bandwidth and already consumes more power than HBM1.

GDDR5X is also more efficient than GDDR5.

The space saved on the PCB would be a bigger argument for me (I have a fairly small case, an Antec Sonata III, which can barely fit 11" GPUs), but it's a niche thing.

Interesting. I've not compared HBM2 to GDDR5X before. I don't think I've compared HBM1 to GDDR5X either, but I knew how much more power efficient HBM1 was over standard GDDR5, and it's still more power efficient than GDDR5X at the same speeds.

But yeah the stackable nature of HBM is an undeniable plus.
 

ethomaz

Banned
Personally it's HBM2 or bust at that price point. I'm just going to wait until next year to get an elite GPU.
A 384-bit bus + GDDR5X and you will be fine.

Depends how big Titan is? They could have anywhere from 50-100% more shaders than 1080. Surely at that point you are going to want more bandwidth than even GDDR5X can give you?

Although I don't know why it's expected to be a cut down GP102. Isn't that too DP focused? Why not just make a big GP104?
We're talking nearly 500GB/s with GDDR5X here... you won't need more.
 

AmyS

Member
Also picked up by numerous other sites.

http://vrworld.com/2016/07/05/nvidia-gp100-titan-faster-geforce-1080/
http://wccftech.com/nvidia-pascal-gp100-geforce-gtx-titan-launch/
http://techreport.com/news/30347/rumor-nvidia-next-titan-might-be-the-titan-p
http://www.tweaktown.com/news/52886/nvidias-next-gen-titan-50-faster-geforce-gtx-1080/index.html

VR World:

What matters is the performance. At the time of writing, we can report that Tesla and Quadro clocks are complete, but the engineering team is still trying to extract the last bit of performance from the cards. Expect the chip packaging to remain the same as on Tesla and Quadro cards, thus we should see two versions of Titan boards, the 16GB and 12GB one, with HBM2 memory clocked higher than was the case with the Tesla family (Tesla boards are geared towards HPC use – industrial design, ECC-enabled memory, 60-to-72 month lifetime cycle).

The target is at least 50% higher performance than GeForce GTX 1080 Founders Edition, and our sources are saying they’re now bound by the CPU. Even Core i7-6950X isn’t enough to feed all the cards, and in a lot of scenarios you could see an Intel Core i7-6700K, with its supreme clock (4.0 vs. 3.0 GHz), easily feed the GP100 more efficiently than the Broadwell-E based Core i7 Extreme Edition. The running joke inside Nvidia is “don’t buy the 6950X – buy 6700K and a Titan” but we’re not sure that Nvidia will use this for an official tagline. Truth be told, they might be right – we need Intel to return to the 4-core next-gen mainstream/enthusiast part and then an X-core big-daddy part using the same architecture, rather than the current cadence which makes sense only if you’re working for Intel. AMD’s 8-core ZEN cannot come soon enough.

If all things go well, Nvidia should unveil GeForce GTX Titan P on Gamescom in Cologne, Germany – August 17-21, 2016.
 

AmyS

Member
The thing is, the rumored 480 GB/sec bandwidth (HBM2) seems really disappointing for a Titan product. The Radeon R9 Fury X with HBM1 gets 512 GB/sec. I know Nvidia's Pascal architecture is far more efficient than Fiji, but still...

480 GB/sec sits right between GDDR5X and HBM1.

The bandwidth spec seems more in line with what you'd expect from a 1080 Ti.
 

Oxn

Member
I know the new Titan will not be $1000, and I normally would not entertain the idea of buying a $1K GPU. But if it releases for that price, I'll buy it.

Realistically, though, I know it'll be between $1300-1500.
 

dr_rus

Member
The thing is, the rumored 480 GB/sec bandwidth (HBM2) seems really disappointing for a Titan product. The Radeon R9 Fury X with HBM1 gets 512 GB/sec. I know Nvidia's Pascal architecture is far more efficient than Fiji, but still...

480 GB/sec sits right between GDDR5X and HBM1.

The bandwidth spec seems more in line with what you'd expect from a 1080 Ti.

A 384-bit GDDR5X bus @ 10Gbps will provide that exact figure of 480GB/sec of bandwidth. No HBM2 in sight.

I have no idea why anyone still expects any GeForce to use HBM2 this year. HBM2 would provide 720-1024 GB/s of bandwidth, which is a complete waste for a gaming GPU with the peak processing power of 1.5x GP104. And if you halve that number by going with 2 stacks / a 2048-bit bus, you're basically getting the same figure as a 384-bit GDDR5X bus, which would be far less expensive to implement and produce.

I don't think NV will use HBM2 in the gaming space until Volta at the earliest - and possibly even beyond that, when 10nm will allow them to pack enough processing power to actually require HBM2 bandwidth.
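The bus-width arithmetic behind those figures is straightforward; a quick sketch (the 384-bit @ 10 Gbps case is the rumored configuration, the HBM rows are the comparison points from the thread):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth: bus width (bits) * per-pin data rate (Gb/s) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(384, 10.0))   # 480.0 -> the rumored Titan figure (GDDR5X)
print(bandwidth_gb_s(4096, 1.0))   # 512.0 -> Fury X's HBM1 (4096-bit @ 1 Gb/s)
print(bandwidth_gb_s(2048, 2.0))   # 512.0 -> two HBM2 stacks at 2 Gb/s, roughly the same figure
```

So a halved (2-stack) HBM2 configuration and a 384-bit GDDR5X bus really do land within ~7% of each other, which is the "same figure, less expensive" argument.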
 

laxu

Member
As a 980 Ti owner, I still wouldn't bother with a new Titan even if it was 50% faster than what I have now. The thing is that an overclocked 980 Ti or 1080 runs everything really well at 1440p, and for 4K you're stuck at 60 Hz until late this year or next year. So until 120+ Hz 4K displays are on the market I have no reason to upgrade. Even if you can't run anything at an actual 120 fps, the higher refresh rate still generally makes for less perceived motion blur.
 
The running joke inside Nvidia is “don’t buy the 6950X – buy 6700K and a Titan” but we’re not sure that Nvidia will use this for an official tagline.

This line made me cringe.
Don't buy an insanely expensive $1700 CPU, buy a $350 CPU... and then what? A $1350 GPU that is just as insanely expensive?

As a 980 Ti owner, I still wouldn't bother with a new Titan even if it was 50% faster than what I have now. The thing is that an overclocked 980 Ti or 1080 runs everything really well at 1440p, and for 4K you're stuck at 60 Hz until late this year or next year. So until 120+ Hz 4K displays are on the market I have no reason to upgrade. Even if you can't run anything at an actual 120 fps, the higher refresh rate still generally makes for less perceived motion blur.

A 980 Ti is not enough for 60 fps at 4K in the newest (and most demanding) games, let alone a locked 60 fps (no drops below 60), the kind of framerate you need when gaming.
 

ethomaz

Banned
The thing is, the rumored 480 GB/sec bandwidth (HBM2) seems really disappointing for a Titan product. Radeon R9 Fury X with HBM1 gets 512 GB/sec. I know Nvidia's Pascal architecture is far more efficient than the Fiji X, but still...

480 GB/sec is like right between GDDR5-X and HBM1

The bandwidth spec seems more in line with what you'd expect from 1080Ti.
480GB/s is overkill for the 1080... so think a little about how wasteful it is to use HBM in a Fury X lol

Titan with GDDR5X @ 480GB/s is actually the way to go.
 

ocean

Banned
That's crazy. I like to max out my games, but I can't see myself spending over a grand for the next Ti-branded card. Might just stick to console.
So it's either full Ultra 60 FPS 4K or 1080p medium 30 FPS consoles? I feel there's a healthy in between.
 

wowzors

Member

Seems a bit asinine; a 6950X can easily hit 4.0 overclocked, as can the whole Broadwell-E line. A 6700K can hit 4.5, but that's like a Broadwell-E hitting 4.3, which is still doable. I don't understand.
 

Celcius

°Temp. member
I hope it releases next month... I'm ready to upgrade from my GTX 780 Ti. It will be glorious!
I think it will be called the Titan V though (since it's the 5th Titan).
 

ethomaz

Banned
Seems a bit asinine; a 6950X can easily hit 4.0 overclocked, as can the whole Broadwell-E line. A 6700K can hit 4.5, but that's like a Broadwell-E hitting 4.3, which is still doable. I don't understand.
Skylake IPC > Broadwell-E IPC.

That is why even at a high overclock Broadwell-E can't rival Skylake in games.
 

Frozone

Member
Yea, I'm tempted but will wait for Volta. I've got a Titan X now and am satisfied with Ultra 4K @ 30FPS in W3 (no HairWorks). I want a big jump for it to be worth spending $1.5K. Besides, most games aren't going to require that kind of power before the next-gen Nvidia boards come out.
 

wowzors

Member
Skylake IPC > Broadwell-E IPC.

That is why even at a high overclock Broadwell-E can't rival Skylake in games.

Isn't the IPC gain like 8-12%? No chance in hell something is being bottlenecked on one and breezing on the other with that minimal a difference.
 