
RTX 3080 design possibly leaked

LOLCats

Banned
XFX tried a fan setup like that on AMD cards. It didn't go great... maybe XFX just sucks though, I think the design is okay.
 
Sounds like you might have had a bad PSU. You shouldn't need a 1200W PSU for a 2080 Ti; 650W is the recommendation.

You had two options: RMA the PSU,

or just buy another PSU with a similar wattage.

Trust me, if you were overloading the PSU you wouldn't be getting a buzzing sound.

Could be right about the PSU, but that's the adventure of going overboard on gaming hardware :)
 

CrustyBritches

Gold Member
Possible 3080 Ti GPU design leaked from the factory?



Wooo! What a beast!

 

ZywyPL

Banned


Wow, if true Nvidia ain't joking this time. 3080 could be based off the big die, GA102. I hope AMD is prepared for that.


3080 having the same specs as the 2080 Ti but at 7nm doesn't sound exciting to me, quite the opposite. So either all the improvements will go into the Tensor and RT cores, or the rumor is simply false.
 

llien

Member
I think AMD will actually try to compete

Note how Leonidas couldn't answer the question about which AMD chip is "RIP"-ed by the "leaked" 3080.

AMD is not going after a halo product (just yet). 505mm2 is too small for that.
What NV is gonna get is AMD annoying it with cheaper and faster products across a broader spectrum; NV would need something substantially larger than 500mm2 for niche cards not challenged by AMD.

RT will be a total non-factor, with only a handful of games supporting it and even then only for a handful of effects, so expect lots of Leonidas-minded sites to spin it hard: "AMD is cheaper and faster and (perhaps) consumes less, but that RT thing in NV-sponsored games doesn't run that fast." We'll likely see Tessellation Returns: RT edition.
 
AMD is not going after a halo product (just yet). 505mm2 is too small for that.
What NV is gonna get is AMD annoying it with cheaper and faster products across a broader spectrum; NV would need something substantially larger than 500mm2 for niche cards not challenged by AMD.

The CFO at AMD literally stated to investors in no uncertain terms that [the chip affectionately referred to as] Big Navi will be a halo product for them.

Whether or not 505mm2 is "too small" for the halo end of the market depends entirely on performance:

GP102 was 471mm2 and had 3840 shaders, while Vega 10 was 495mm2 and had 4096 shaders. I think all of you can remember which of those chips was faster at the end of the day. Vega 10 was thoroughly beaten in power and performance by a cut-down 1080 Ti which had only 3584 shaders.
 

Leonidas

Member
3080 having the same specs as the 2080 Ti but at 7nm doesn't sound exciting to me, quite the opposite. So either all the improvements will go into the Tensor and RT cores, or the rumor is simply false.

Have you ever been excited by an x80 GPU?
 

llien

Member
Big Navi will be a halo product for them.
Yes. Emphasis on "for them".
Remind me when the fastest product you have isn't your halo product.
Or who would expect RDNA2 not to have something much faster than 5700XT.

Whether or not 505mm2 is "too small" for the halo end of the market depends entirely on performance:
It won't beat NV's halo, which will be a bigger chip.

GP102 was 471mm2 and had 3840 shaders, meanwhile Vega10 was 495mm2 and had 4096 shaders.
Yep. And you chose Vega to compare, because?

That 495mm2 translated into a 330mm2 7nm chip, which, even though it was equipped with HBM2 memory, is barely faster than AMD's own 250mm2 Navi chip with cheaper memory.

That being said, Vega is exceptional at computing (but that's beside the point).

Now you can also see why many are glad Raja isn't making AMD great again any more.
 
Yes. Emphasis on "for them".
Remind me when the fastest product you have isn't your halo product.
Or who would expect RDNA2 not to have something much faster than 5700XT.
Of course it's a halo device "for them". Who else would it be a halo device for? They're making it.
Besides,

https://www.pcgamer.com/amd-big-navi-first-rdna-product/

You don't say, and I quote:
"Big Navi is a Halo Product [...] enthusiasts love to buy the best, and we are certainly working on giving them the best. "

You don't make this kind of statement when you're not trying to compete at the highest end of the market. Whether or not they succeed and "win" the halo end of the market remains to be seen. But it is a halo device.

It won't beat NV's halo, which will be a bigger chip.

Will it? Do you know the die size for GA102?

We know GA100 is massive - 826mm2. That's a big boy. But it's not coming to consumers. It definitely is not coming to consumers.
It's designed with 128 SMs (8192 CUDA cores) and a 6144-bit HBM2 bus. And yet, the actual Ampere A100 product runs with 108 SMs and a 5120-bit bus. Its yields must be absolutely appalling if they need to cut down 16% of the die for a usable product. On top of that, it runs at a GPU frequency of around 1400MHz, which is slower than both GV100 and GP100, all while consuming more power than its predecessors: an insane 400W.
This doesn't even factor in that GA100 doesn't have any RT cores at all, and has billions of transistors dedicated to AI/ML features that are totally meaningless in games, like BFloat16 instructions, FP64, low-precision integer instructions, and so on.
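As a quick sanity check on those cut-down figures (a minimal sketch; the 64 FP32 cores per SM and the six 1024-bit HBM2 stacks are NVIDIA's published GA100 layout, not something from this thread):

```python
# Quick check of the GA100 cut-down figures quoted above.
# Assumptions: 64 FP32 CUDA cores per SM and six 1024-bit HBM2 stacks,
# per NVIDIA's published GA100 layout.
full_sms, shipped_sms = 128, 108
cores_per_sm = 64

print(full_sms * cores_per_sm)      # 8192 CUDA cores on the full die
print(shipped_sms * cores_per_sm)   # 6912 CUDA cores on the shipping A100
print(f"{1 - shipped_sms / full_sms:.1%} of SMs disabled")  # ~15.6%, the "16%" above

print(6 * 1024, 5 * 1024)           # 6144-bit bus as designed, 5120-bit as shipped
```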

Of GA102 we know nothing. The only info we have is from Kopite7kimi, about shader counts. The largest form of GA102 apparently has 84 SMs (5376 CUDA cores). Kopite has been pretty much spot-on with Nvidia leaks; he nailed GA100's rough specs a full year before its announcement, so I think these numbers are perfectly valid.
So if we take 84/128 = 0.65, then 0.65 x 826mm2 = ~540mm2.
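Spelling that back-of-the-envelope estimate out (a rough sketch only; it assumes die area scales linearly with SM count and ignores memory controllers and other fixed-function blocks):

```python
# Rough GA102 die-size estimate by scaling GA100's area with the rumored SM count.
# Assumption: area scales linearly with SM count; memory controllers and other
# fixed-function blocks are ignored, so treat the result as a ballpark only.
ga100_area_mm2 = 826
ga100_sms = 128
ga102_sms_rumored = 84   # from the Kopite7kimi leak mentioned above

scale = ga102_sms_rumored / ga100_sms
print(f"{scale:.2f} -> ~{scale * ga100_area_mm2:.0f} mm2")   # 0.66 -> ~542 mm2
```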

That's bigger than Big Navi to be sure, but not hugely so. Now of course it could be bigger; Nvidia do like their chips to be thicc. But do bear in mind that GDDR6 memory controllers are larger than HBM2 controllers, and bear in mind all of the AI transistors they're going to remove entirely. Perhaps that transistor budget is exactly equal to the supposedly new and improved RT cores, but we don't know.

We don't know how either will perform. Based on history you'd expect Nvidia to win. But that doesn't mean AMD aren't going to try and force them right to the very edge.


Yep. And you chose Vega to compare, because?

Why wouldn't I choose Vega? Vega 10 launched around the same time as GP102. Vega 10 was based on GlobalFoundries' 14nm, while GP102 was based on TSMC's 16nm, both being roughly equivalent nodes. They both represent products that competed against each other. It's pretty much the perfect comparison.


That 495mm2 translated into a 330mm2 7nm chip, which, even though it was equipped with HBM2 memory, is barely faster than AMD's own 250mm2 Navi chip with cheaper memory.

Yes. That's down to Navi being a much, much better gaming architecture. Vega 20 is a fine architecture for compute, but it's much, much worse than Navi 10 at gaming. But Navi 10 launched a full two years after Vega 10 and, if you count the Radeon Instinct MI60, roughly a year after Vega 20. You would almost certainly expect a much more modern, succeeding architecture to outperform its direct predecessors. That's why I compared Pascal to Vega: both were launched around the same time. It's a "fairer" comparison.

Pascal is effectively a die shrink of 28nm Maxwell to 16nm. Turing is also on 16nm (they modified it to have a larger reticle for Volta and call it 12FFN, but it is essentially the same process), yet it's much, much faster in gaming than Pascal is. TU104 with a paltry 3072 shaders is roughly equivalent to, or even slightly faster than, a 1080 Ti (GP102) which has 3584 shaders. Certainly TU104 is a larger chip, but fewer shaders are still outperforming more shaders. This is all down to architectural improvements.

These architectural improvements contribute a great deal to performance gains. Particularly nowadays where process shrinks aren't quite as large and dramatic as they used to be.

I brought all this up because it's evidence that die size isn't everything. Architecture is the big thing. GP102 did more than Vega, with less.

Go all the way back to ATi's Cypress, based on TeraScale 2 (the HD 5870), way back in 2009. Cypress was a tiny 331mm2 chip on 40nm that was duking it out against Fermi - GF100.

GF100 was an absolute titanic (for the time) 530mm2.

So we have Cypress at 330mm2 (2.1 billion transistors) up against GF100 (3.1 billion transistors).
GF100 was indeed a bit faster than Cypress, but not fast enough. GF100, and the GTX 480 based on it, was the original "hot and loud" meme. There's a reason the PC community referred to it as Thermi.
The GTX 480 was around 5-10% faster than the HD 5870, but it consumed 50% more power for that paltry performance advantage. In fact, the GTX 480 consumed more power than the HD 5970, a GPU that contained two full Cypress chips and was ultimately significantly faster.
Same exact process - both used TSMC's 40nm node - but one was so much more efficient. A 60% difference in die size and roughly comparable performance.

The whole situation basically reversed when Kepler came out and faced off against Tahiti and GCN 1.0. A smaller, more efficient GK104 (GTX 680) was able to outperform the larger Tahiti (HD 7970).
Of course, GCN and its drivers matured to the point where that situation reversed again and the 7970 is now faster than the 680, but that comes down to drivers. At the time though, the smaller, more efficient chip beat the larger chip.

Anyway, enough history lessons.
My point is, die size isn't everything. The architecture is what makes the difference. There are a number of historical instances where this has been the case.
Recent history is in Nvidia's favour and that counts for something, to be sure. But it ain't over till it's over, and we haven't seen the benchmarks yet.
 

ZywyPL

Banned
Have you ever been excited by an x80 GPU?

The 1080 was quite a jump compared to Maxwell GPUs, not to mention the Ti version. But that was achieved thanks to a lower process node, so I expect a similar jump this time as well thanks to 7nm. I mean, we already know the specs of the full die.
 

Leonidas

Member
The 1080 was quite a jump compared to Maxwell GPUs, not to mention the Ti version. But that was achieved thanks to a lower process node, so I expect a similar jump this time as well thanks to 7nm. I mean, we already know the specs of the full die.

Then why jump to the conclusion that the 3080 possibly having the same core count as the 2080 Ti is unimpressive, or that the rumor is false?
 

The vents on the backplate have me second guessing my thoughts on the fan directions. Blower GPUs have those vents, but this is clearly not a blower GPU.

Maybe they are both intake? Does the upward facing fan on the long end pull in air then push it through heat pipes in the fins before the second fan which pulls directly onto the GPU and fires it out the rear...



Yeah, I need to see how this looks taken apart... such a strange design, I can't even imagine how tacky the vendor cards based on this are going to look.
 

llien

Member
You don't make this kind of statement when you're not trying to compete at the highest end of the market. Whether or not they succeed and "win" the halo end of the market remains to be seen. But it is a halo device.
Fair enough.
Just my opinion: they'd need to go bananas, way beyond 505mm2, to win "I have the fastest of the obnoxiously priced cards", given how much value NV sees in that.

It could still be touted to "compete at the highest end" with "pay $500 less and get an only 15% slower card".

On the Fermi times: as Anand later revealed (I don't remember what the article was actually about, sorry), there was a major issue with that node and AMD happened to find a brilliant solution to it. That level of discrepancy could happen now only if TSMC is vastly superior to Samsung's process node (as there seem to be no gotchas).

As for why Vega ain't relevant: Vega comes from "starving AMD" times (+Raja).
Perf/transistor wise, Navi is far ahead of it, roughly on par with NV. (so was Polaris, by the way).

Will it? Do you know the die size for GA102?
Just my speculation, based on GA100 being that massive this early into 7nm and how NV was acting in the past.

GA100 is sold at 20k a piece, 10 per package (mkay, maybe a bit less as two AMD CPUs in that package should cost 10-20k-ish). It is basically "the best you can cram" kind of card.

But what happens to stuff that has only some faulty units? NV doesn't sell a product with them.
1.4Ghz being relatively low: well, with some of the units/non-gaming number crunchers disabled, they surely can go higher.

1080 was quite a jump compared to Maxwell GPUs
1080 wasn't interesting to 980Ti owners with good OCed AIBs.
 
Fair enough.
Just my opinion: they'd need to go bananas, way beyond 505mm2, to win "I have the fastest of the obnoxiously priced cards", given how much value NV sees in that.

It could still be touted to "compete at the highest end" with "pay $500 less and get an only 15% slower card".
That depends entirely on how things stack up in terms of performance. AMD could go bigger; in fact, in HPC they undoubtedly will. It's a matter of whether they think it's worth the investment of making another massive 600+mm2 die like Fiji. Who knows.
These kinds of decisions are effectively akin to making bets. AMD have made a bet on a large, but not gigantic, die.
If they win in performance, they get to dictate prices and make a nice chunk of profit. If they lose in performance by just a little bit, they can undercut Nvidia and still make a nice chunk of profit. Remember, the bigger the die, the higher the cost per (working) chip. If AMD can match or beat a larger chip with a smaller one, they effectively "win", because at the same price they make more money.
Whether that is a successful bet remains to be seen.
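To put a rough number on that bet (a minimal sketch; it uses a simple Poisson yield model, the ~0.1 defects/cm2 N7 figure cited further down the thread, and a wafer cost normalised to 1.0 since real wafer pricing isn't public):

```python
import math

# How die size feeds into cost per *working* chip, with a normalised wafer cost.
# Assumptions: simple Poisson yield model, ~0.1 defects/cm2 (the TSMC N7 figure
# cited later in the thread), and the two die sizes being discussed here.
DEFECT_DENSITY = 0.1                  # defects per cm2
WAFER_AREA_MM2 = math.pi * 150**2     # 300mm wafer, ignoring edge losses

def relative_cost_per_good_die(die_mm2: float) -> float:
    yield_fraction = math.exp(-DEFECT_DENSITY * die_mm2 / 100)  # Poisson yield
    gross_dies = WAFER_AREA_MM2 / die_mm2                       # optimistic count
    return 1.0 / (gross_dies * yield_fraction)                  # in "wafers" per good die

big_navi = relative_cost_per_good_die(505)   # rumored Big Navi
ga102 = relative_cost_per_good_die(540)      # GA102 estimate from earlier in the thread
print(f"~{ga102 / big_navi:.2f}x the cost per good die")        # ~1.11x
```

Under those assumptions the ~540mm2 die comes out roughly 10% more expensive per good chip than the ~505mm2 one; change any input and the exact figure moves, but the direction of the bet doesn't.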

On the Fermi times: as Anand later revealed (I don't remember what the article was actually about, sorry), there was a major issue with that node and AMD happened to find a brilliant solution to it. That level of discrepancy could happen now only if TSMC is vastly superior to Samsung's process node (as there seem to be no gotchas).

Indeed yes. But that's just engineering talent. They engineered their way around the problem and won.

As for why Vega ain't relevant: Vega comes from "starving AMD" times (+Raja).
Perf/transistor wise, Navi is far ahead of it, roughly on par with NV. (so was Polaris, by the way).

Whether the designer of the architecture was Raja Koduri, David Wang, or even Lisa Su herself doesn't matter. At the end of the day, Vega had AMD's logo on it. It's relevant because they still released the product. They still made big promises. They still made stupid marketing decisions like the inane "Poor Volta" campaign. It's a product under their brand. Whether it was the product of bad management or bad finances doesn't matter.

Just my speculation, based on GA100 being that massive this early into 7nm and how NV was acting in the past.

GA100 is sold at 20k a piece, 10 per package (mkay, maybe a bit less as two AMD CPUs in that package should cost 10-20k-ish). It is basically "the best you can cram" kind of card.

GA100 is huge because Nvidia have always gone big with their HPC offerings for AI/ML. GP100 was 610mm2. GV100 was 815mm2 and now GA100 is 826mm2. In a market where cost is no object they can afford to do this.

But what happens to stuff that has only some faulty units? NV doesn't sell a product with them.
1.4Ghz being relatively low: well, with some of the units/non-gaming number crunchers disabled, they surely can go higher.

826mm2 is a huge die. And not only is it huge, it's also very dense. It is on the same node as Navi 10, yet it has 50% higher transistor density, because it seems they're using higher-density mobile libraries instead of the less dense HPC libraries. I guess it was the only way to cram the number of transistors they needed into the reticle. It's probably also why GA100 doesn't clock too well and draws so much power. The libraries are probably optimised for small, low-clock chips, not huge high-clocking datacentre beasts.
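For what it's worth, the publicly reported transistor counts bear that out roughly (the figures below are assumptions pulled from NVIDIA's and AMD's published specs, not from this thread):

```python
# Rough transistor-density comparison on the same N7 node.
# Assumed figures (publicly reported specs, not from this thread):
#   GA100:   ~54.2 billion transistors in 826mm2
#   Navi 10: ~10.3 billion transistors in ~251mm2
ga100_density = 54.2e9 / 826    # ~65.6 million transistors per mm2
navi10_density = 10.3e9 / 251   # ~41.0 million transistors per mm2
print(f"~{ga100_density / navi10_density:.2f}x denser")  # ~1.6x, same ballpark as
                                                          # the "50% higher" figure above
```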

So, what happens to the dies that aren't faulty? Let's run the numbers:



Most wafers are 300mm diameter. In such a wafer at most you can have 58 GA100 chips per wafer. That's not a lot. TSMC's N7 has a defect rate of around 0.1 per cm2. That's very very good. But even with a defect rate that low, with that huge 826mm2 die you're left with only 27 fully functioning chips per wafer. Think about HPC. If you're an HPC client you're placing orders for hundreds of chips at a time. I would wager Nvidia has a number of clients each looking to get hundreds of GA100s they can slot into their AI/ML datacentres. Nvidia needs to supply that volume. They also need to make their DGX servers. They also need to make their single PCIe slot versions. In order to meet that volume Nvidia needs to pump out chips at a rate.
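The arithmetic behind those figures, roughly (a minimal sketch; it takes the 58 candidate dies per wafer as given and applies Murphy's yield model, which is just one common approximation):

```python
import math

# Ballpark of fully working GA100 dies per 300mm wafer, using the figures above:
# 58 candidate dies per wafer and a defect density of ~0.1 defects/cm2 on TSMC N7.
# Murphy's yield model is one common approximation; other models shift the result a bit.
die_area_cm2 = 826 / 100      # 826mm2 expressed in cm2
defect_density = 0.1          # defects per cm2
candidate_dies = 58           # gross dies per wafer, as quoted above

ad = die_area_cm2 * defect_density
murphy_yield = ((1 - math.exp(-ad)) / ad) ** 2
print(f"yield ~{murphy_yield:.0%}, ~{candidate_dies * murphy_yield:.0f} good dies per wafer")
# -> yield ~46%, ~27 fully working dies per wafer
```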
There are two options for them. They could salvage the chips that do not have critical defects and sell them as a lower SKU, and keep the fully functioning dies as a higher-tier SKU. If you're an HPC client, you're going to want the highest-tier SKU, right? But there are only 27 of those per wafer. So Nvidia needs to buy more wafers and fab more GA100s so they can meet the HPC demand.

Unfortunately, Nvidia has a maximum allocation of wafers per year from TSMC. They need to balance those wafers with their other 7nm chips. Do they waste all of their TSMC 7nm wafers on GA100, and leave their highest end gaming chips to a second source like Samsung?

I expect that, for the time being, Nvidia decided it would be more profitable to simply examine a sample of fabricated GA100s, decide based on their testing how much of an average defective die is still functional, and set that as a minimum. Then they take every single GA100, cut it down to that minimum, and sell it as fast as they can. The rest of their allocated wafers go to making GA102 for their highest-end gaming graphics and any other chips they deem to need TSMC's excellent node.

There is an opportunity cost to this. Trust me, Nvidia would have sold the fully unlocked GA100 if they could - they could probably have commanded a higher price. It seems to me, though, that they can't, which is why they've had to settle for cutting them down.

Bear in mind that a cut-down GA100 at a relatively low frequency is drawing 400W. Imagine how much power a fully unlocked GA100 would consume; even if you dropped the clocks further, it would be astronomical. So I think they've cut it down for more than just economic reasons. It simply might not be possible for a fully functional GA100 to exist as a saleable product at this time.

The situation is complicated in that way.
 

LordOfChaos

Member
Curious about the fans on both the top and bottom. If these were both blowing through the same heatsink, it seems like they would contend and create turbulence [since multi-fan GPUs always have fans in the same orientation]. I wonder if this is a physically split heatsink design (kind of like Sony was patenting for the PS5), where it's split into two thermal zones that both fans blow cleanly out of?
 
anybody else feel sorry for the awesome cards that become obsolete

remember the titan x?

this card one day will get laughed at

remember all the 970s?

hundreds of thousands of them

where are they now?

in some dark corner collecting dust
 

llien

Member
Whether the designer of the architecture was Raja Koduri, David Wang, or even Lisa Su herself doesn't matter. At the end of the day, Vega had AMD's logo on it. It's relevant because they still released the product. They still made big promises. They still made stupid marketing decisions like the inane "Poor Volta" campaign. It's a product under their brand. Whether it was the product of bad management or bad finances doesn't matter.
Tell me about a single case of that sort of nonsense, once Koduri was gone.
There ain't any.
Now, that might be a coincidence, but uh..

And for perf/transistor parity, I was comparing Navi (AMD's latest) to Tesla (NV's latest).
AMD made a major leap forward, out of the blue (seriously, nobody expected Navi to be that far ahead of Vega).

826mm2 is a huge die. And not only is it huge, it's also very dense. It is on the same node as Navi 10
Mm, that would be weird, how come it's not on the same node as Navi 20?

Do they waste all of their TSMC 7nm wafers on GA100
I think here, we had "The Leather man has played 'piss partners off' with yet another company" case. :)
Just my speculation though.

Most wafers are 300mm diameter. In such a wafer at most you can have 58 GA100 chips per wafer. That's not a lot. TSMC's N7 has a defect rate of around 0.1 per cm2. That's very very good. But even with a defect rate that low, with that huge 826mm2 die you're left with only 27 fully functioning chips per wafer.
And then you get 10k+ for each of those 27. How much is that 300mm wafer plus processing?
I recall Anand saying NV would have gone even bigger, if it had been available (last gen).

Bear in mind that a cut-down GA100 at a relatively low frequency is drawing 400W. Imagine how much power a fully unlocked GA100 would consume; even if you dropped the clocks further, it would be astronomical.
GA100 is a (in a way, certainly when compared to CPUs, very dumb) number cruncher, which is likely to be 100% busy, unlike gaming GPUs.
Drop part of the circuits, drop the number of CUs, and suddenly you can push clocks further.
 

llien

Member
Interesting analysis hinting that the card/leak could be very much real:

1. One of the shrouds has a tiny "NVIDIA" logo. No faker would invest time into putting a small logo on these types of things; it would require pretty specific tooling, and fakers usually love to put the logo big so that there's no ambiguity to the fake.
2. Irregular PCB shape - both Komachi and Kitty have said this in prior leaks.
3. It says EMC Certification Pending on the PCI-E connector.
4. The design looks in line with NVIDIA's style, and it's clear NVIDIA loves to change their cooler design every generation. It's definitely unique for a reference card, but too complex for a faker to make themselves; most fakers do something dumb like add three fans to the existing cooler and call it a day for their "leak".
5. The blue plastic wrap: a faker wouldn't leave this on, they always like the logos to appear unobstructed. It also just seems too legit to have the blue plastic wrap on there.
 

dispensergoinup

Gold Member
Almost pulled the trigger on a 2080S this year but I'll wait now.

Between this, PS5, and possibly a new CPU/Mobo at the end of the year my wallet's gunna suffer a famine. :messenger_grinning_sweat:
 

KungFucius

King Snowflake
Don't give a shit what it looks like as I only look inside my PC when building/upgrading/troubleshooting. Curious how this through channel cooler will work. Did they really manage to shrink the PCB that much or is there something else going on there? Either there is some cool design work or this is fake.
 

Kenpachii

Member
Sounds like you might have had a bad PSU. You shouldn't need a 1200W PSU for a 2080 Ti; 650W is the recommendation.

You had two options: RMA the PSU,

or just buy another PSU with a similar wattage.

Trust me, if you were overloading the PSU you wouldn't be getting a buzzing sound.

650W = full-system recommendation. 1200W is utter overkill unless he has two of those 2080 Tis, and even then it's overkill.
 

ZywyPL

Banned
Then why jump to the conclusion that the 3080 possibly having the same core count as the 2080 Ti is a bad thing? The only spec in that tweet you could have jumped to conclusions from was the core count.

An x80 GPU has never had higher core counts than the previous gen x80 Ti.

But there was a clock gain in return with each new generation: Pascal could go as high as 2.1GHz compared to 1.4-1.5GHz on Maxwell, which is why there was such a jump in performance. When switching from Pascal to Turing the jump was basically non-existent, because the clocks can only go 25-50MHz higher at best. So if both the core count and clock speeds remain the same in Ampere GPUs again then sorry, I cannot get excited. Bear in mind that once next-gen-only games start to show up, games not made with PS4/XB1 in mind, the requirements will go through the roof if we want to maintain 60FPS and more. Some rumors suggest the RT performance will quadruple, with even better DLSS on top of that; others suggest up to 30TF GPUs. Too many different/opposite rumors are out there to draw a concrete conclusion, if you ask me. I'm personally hoping for all-around improvements: in rasterization, RT, DLSS, mesh shading, and maybe even some new fancy tech NV will come up with.
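As a rough way to frame those rumors (a sketch only; the 2080 Ti numbers are its public reference specs, the Ampere core counts are the leaked figures from this thread, and peak FP32 is taken as cores x 2 FLOPs x clock, which ignores any "IPC" change):

```python
# Rough peak-FP32 arithmetic: TFLOPs = cores * 2 FLOPs per clock * clock (GHz) / 1000.
# The 2080 Ti figures are its public reference specs; the Ampere core counts are the
# leaked numbers discussed in this thread. Real performance also depends on "IPC",
# memory bandwidth, etc., so this only frames the rumors, it doesn't settle them.
def peak_tflops(cores: int, clock_ghz: float) -> float:
    return cores * 2 * clock_ghz / 1000

print(peak_tflops(4352, 1.545))   # ~13.4 TF: 2080 Ti at its reference boost clock
print(peak_tflops(4352, 1.9))     # ~16.5 TF: same core count with a Pascal-like clock bump
print(peak_tflops(5376, 1.9))     # ~20.4 TF: full rumored GA102 at that same clock
# A "30 TF" Ampere would need either far more cores or clocks in the 2.8-3.4GHz range.
```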
 

Kenpachii

Member
So how does that design even work? The air from the front fan goes toward the outside, and the fan on the back sucks all the hot CPU air into the card?

Sounds like an absolute disaster.

Also, is the card bent then? Or are the fans super small? Because otherwise you will have to leave extra room between the CPU cooler and the GPU, which basically makes it a dud for many setups.
 

Kenpachii

Member
But there was a clock gain in return with each new generation: Pascal could go as high as 2.1GHz compared to 1.4-1.5GHz on Maxwell, which is why there was such a jump in performance. [...] I'm personally hoping for all-around improvements: in rasterization, RT, DLSS, mesh shading, and maybe even some new fancy tech NV will come up with.

Yeah, agree with this. The 3080 needs to deliver 100% over 1080 Ti performance for me to care, to be honest. If not, I'll wait for a 3080 Ti. For some reason I've got the feeling the 3080 Ti will be a 70% increase at best and the 3080 50%. Not worth it.
 

Xyphie

Member
Judging by the fan blades, both fans blow into the card, so the heatsink probably either extends the entire length underneath the shroud or comes in two pieces connected with heatpipes. It's interesting how short the PCB is; the area isn't much bigger than those 17cm ITX GPUs. So either the card is very power efficient (~150-200W?) with a small VRM, or there's some sort of daughterboard for the VRM in the part with the fan blowing downwards.
 

Leonidas

Member

Damn. $150 for the cooler. 320-350W TBP, according to Igor's Lab.

Can't wait to see full specs and performance figures in the coming months :goog_smile_face_eyes:

But there was a clock gain in return with each new generation: Pascal could go as high as 2.1GHz compared to 1.4-1.5GHz on Maxwell, which is why there was such a jump in performance

True, but you jumped to conclusions without knowing "IPC" or clock improvements of Ampere vs. Turing.

If someone had simply seen the 1080 vs. 980 Ti core counts, no one would have been impressed. But that's simply not enough info to go on, which is why I'm confused that you've jumped to conclusions when the core count, which actually is impressive for an x80 GPU, was the only thing mentioned in the tweet.

and when switching from Pascal to Turing the jump was basically non-existent, because the clocks can only go 25-50MHz higher at best, so if both the core count and clock speeds remain the same in Ampere GPUs again then sorry, I cannot get excited.

Turing had improvements over Pascal which allowed GPUs with fewer or the same number of cores as Pascal to perform better, despite similar frequency. Ampere will almost certainly improve over Turing in multiple areas, which will ultimately result in the 3080 easily outperforming the 2080 Ti even if the core counts are the same.

if you ask me, I'm personally hoping for all around improvements, in rasterization, RT, DLSS, Mesh Shading...

It's safe to say Ampere will bring improvements to all of those areas.
 

Faenrir

Member
You are really salty about that, aren't you? I wonder when a PC will be able to have an SSD I/O subsystem as fast as the PS5's. A year, two maybe? On topic, I don't really get the point of an elaborate GPU design; they all kind of look the same and go inside your PC case anyway.
A good design improves cooling. That's the point.
 

Azurro

Banned
A good design improves cooling. That's the point.

I don't mean the cooling system, I mean the backplate, with the green paint, angular design and whatnot. Some of them even have RGB if I recall correctly. The whole "xtreeem!!" aesthetics just looks to be made for children.
 
I don't mean the cooling system, I mean the backplate, with the green paint, angular design and whatnot. Some of them even have RGB if I recall correctly. The whole "xtreeem!!" aesthetics just looks to be made for children.
Lots of people have tempered glass cases and can see inside their PC. A backplate looks better. Not complicated.
 

Faenrir

Member
I don't mean the cooling system, I mean the backplate, with the green paint, angular design and whatnot. Some of them even have RGB if I recall correctly. The whole "xtreeem!!" aesthetics just looks to be made for children.
The backplate is part of the cooling system. A good cooling system seals up the air so that the airflow follows the desired path.
The RGB stuff isn't for "children"; it's not to your taste, but some people have cases with clear sides that show the components and want the parts to match their visual theme.
It's childish to actually say it's for children. "It's not how I want it, so it's for children, waah waah waah."
 

Kenpachii

Member
150 bucks for a cooler; watch that card cost 1200 bucks, and then the Ti version double that.

Good luck with that. I'll sail on my 1080 Ti until the 4000 series hits, then.
 

Azurro

Banned
The backplate is part of the cooling system. A good cooling system seals up the air so that the airflow follows the desired path.
The RGB stuff isn't for "children"; it's not to your taste, but some people have cases with clear sides that show the components and want the parts to match their visual theme.
It's childish to actually say it's for children. "It's not how I want it, so it's for children, waah waah waah."

I mean, some people would say "visual theme", but the entire PC gamer product line seems made for children and teens. I mean, the gaming chairs are modelled on racing seats, headsets are covered in neon and shitty plastic, and the laptops are covered in the cheapest shitty plastic in angular designs to be able to fit the best CPU/GPU at the lowest possible price. And why would you want a light show while you're doing anything on your computer?

It's not exactly a stretch to say that the design for this demographic is immature and childish.
 

Faenrir

Member
Yeah, we get it, anything you dislike is for children.
It's immature to get things you enjoy? You sound like someone who's just jealous because you can't afford it, tbh.
If not, why does it bother you?

I do like lighting myself, especially for keyboards; I usually work in low-light environments with dimmed screens and the light on the keys helps. But hey, maybe I'm just a child. Or maybe you should stop trolling.
 