> Did you check with another tool? Because it didn't use 9 GB of VRAM... the in-game tool's math is wrong... it is broken.
We are not talking about the in-game VRAM usage info; that's real VRAM usage, and it can go even higher than that, at least in 4K.
> We are not talking about the in-game VRAM usage info; that's real VRAM usage, and it can go even higher than that, at least in 4K.
Source?
If I hear another "wait for Navi" I will just go ahead and buy everything intel and nvidia because you can wait forever for new hardware as there is always something new on the horizon with all of the promise of summer.
> Source?
I have linked you a screenshot with REAL VRAM usage, so what more do you want to see?
I tried to find one, and there is nothing on Google.
Is there any difference in graphics quality at 4K max settings between the lower-VRAM and higher-VRAM cards?
> And the 2080 doesn't possess the non-gaming features that could be provided by 16 GB of HBM2.
Yes, and DLSS is and will forever be useless, right?
Most importantly, those RTX features are essentially meaningless when the performance and support are non-existent.
As of right now, after close to six months on the market, Battlefield V is still the only game that supports ray tracing.
> Based on the reviews, this has been a disaster. It's hard to fathom how AMD could release these for review with the drivers in such poor shape.
> Despite that, the performance in a lot of cases is decent, but not exceptional.
> The thing that surprised me is how good they made the 1080 Ti look. It's too bad everyone missed out when they could be had for $600-$650.
I didn't.
> Yes, and DLSS is and will forever be useless, right?
It's quite possible.
Exactly what I was predicting.
Same price as RTX 2080, louder, hotter, higher power consumption, worse performance in most games, and none of the RTX features.
Might be good for “prosumers” but there’s very little reason for a gamer to choose this card.
This is clearly a stopgap product just so AMD can at least show their face in the high-end gaming market. Wait for Navi.
> Yes, and DLSS is and will forever be useless, right?
Yes, you are asking folk to consider RTX for a feature that's pretty much useless and missing in all games except BFV, with no appreciable visual or performance gains... As opposed to that, people have been benefiting since yesterday from the Radeon VII's extra 8 GB and its 2.1x bandwidth in games at 4K, with less stutter. You can also go crazy on Radeon with AA and supersampling at 4K, and productivity sees an immense boost over the competition.
> Is it weird that the Radeon VII works better on an AMD Threadripper than on an Intel i7 8700K?
No doubt; I think the Radeon VII and Navi are gonna sing with Ryzen 3000 performance.
> I had hopes for this, but I think it's time to hop back on the Nvidia train. Isn't Intel releasing a graphics solution this year, or did I dream that up?
Intel is planning on entering the discrete GPU market, but at this point that is literally all we know.
> I had hopes for this, but I think it's time to hop back on the Nvidia train. Isn't Intel releasing a graphics solution this year, or did I dream that up?
Intel's dGPU won't come out until 2020.
> The problem with AMD is their driver team must be seriously underfunded compared to Nvidia's. You see this in Battlefield V benchmarks, where the Radeon VII is 8% faster at 4K.
> Then you look at 'smaller', unoptimized games like Dragon Quest XI, where it's a massive 30%+ slower!
I think they've invested heavily in the driver/software team recently; their drivers are pretty good, tbh, but every product will have some issues at launch... Turing had more issues: space-invaders corruption, pixelated screens with heavy dot pitch, dying cards, etc. I don't think the Radeon VII's issues are too bad in contrast. Some people were even able to get the card overclocked and working nicely with cool temps and lower dB... OptimumTech knows what's up.
I think it's also important to update folk on FP64 computing on the Radeon VII... I guess some are quick to report anything they deem bad news and try to obfuscate any good news or updates... I know some posters made a big issue about FP64 performance on the Radeon VII a while ago... Yet this is what we got at launch:
"The Radeon VII graphics card was created for gamers and creators, enthusiasts and early adopters. Given the broader market Radeon VII is targeting, we were considering different levels of FP64 performance. We previously communicated that Radeon VII provides 0.88 TFLOPS (DP=1/16 SP). However based on customer interest and feedback we wanted to let you know that we have decided to increase double precision compute performance to 3.52 3.46 TFLOPS (DP=1/4SP).
If you looked at FP64 performance in your testing, you may have seen this performance increase as the VBIOS and press drivers we shared with reviewers were pre-release test drivers that had these values already set. In addition, we have updated other numbers to reflect the achievable peak frequency in calculating Radeon VII performance as noted in the [charts]."
https://www.anandtech.com/show/13923/the-amd-radeon-vii-review/3
FP64 comparison list:
Radeon VII = 3.46 TFLOPS
Radeon VII (prior to launch) = 880 GFLOPS
Vega 64 = 786.4 GFLOPS
Vega 56 = 660.4 GFLOPS
RX 590 = 445 GFLOPS
RX 580 = 385.9 GFLOPS
Titan RTX = 509.8 GFLOPS
RTX 2080 Ti = 420.2 GFLOPS
RTX 2080 = 314.6 GFLOPS
RTX 2070 = 233.3 GFLOPS
RTX 2060 = 201.6 GFLOPS
Titan X (Pascal) = 342.9 GFLOPS
GTX 1080 Ti = 354.4 GFLOPS
https://www.techpowerup.com/gpu-specs/
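As a sanity check, the FP64 figures above follow directly from shader count × clock × 2 FLOPs per cycle (one fused multiply-add), scaled by the card's FP64:FP32 rate. A quick sketch; the 3840 stream processors and ~1802 MHz peak clock for the Radeon VII are the commonly reported specs, assumed here:

```python
def peak_tflops(shaders, clock_mhz, fp64_ratio):
    # Theoretical peak = shaders * clock * 2 FLOPs/cycle (FMA),
    # scaled by the card's FP64:FP32 rate.
    fp32_tflops = shaders * clock_mhz * 1e6 * 2 / 1e12
    return fp32_tflops * fp64_ratio

# Radeon VII: 3840 stream processors at a ~1802 MHz achievable peak
print(round(peak_tflops(3840, 1802, 1 / 4), 2))    # 3.46 after the 1/4-rate update
print(round(peak_tflops(3840, 1802, 1 / 16), 2))   # ~0.86 at the old 1/16 rate
```

The 1/16-rate result lands near the 0.88 TFLOPS AMD first quoted (they used a slightly higher clock in that calculation), which is why the pre-launch figure differed.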
> Did you check with another tool? Because it didn't use 9 GB of VRAM... the in-game tool's math is wrong... it is broken.
Digital Foundry specifically mentioned that RE2 seems to fill whatever VRAM is available, but doesn't suffer stuttering even at 4K max on a GTX 1060 6GB, suggesting that it's not VRAM-bound.
> Digital Foundry specifically mentioned that RE2 seems to fill whatever VRAM is available, but doesn't suffer stuttering even at 4K max on a GTX 1060 6GB, suggesting that it's not VRAM-bound.
That means it is only allocating all the VRAM, not using all of it.
> That means it is only allocating all the VRAM, not using all of it.
There are a few vids that break down the difference between texture settings. Going from 'High 8GB' to 'High 4GB' is basically nothing. Digital Foundry went into it a little in the second vid linked below.
That is why I'm asking if there is any tool that really shows how much VRAM a game uses, not just how much it allocates.
Another way is to use maxed settings on an 8GB card vs. a 16GB one and see if there is any difference in image quality.
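The allocated-vs-used distinction comes down to how streaming engines manage textures: they typically keep everything they have loaded resident until a VRAM budget is hit, so allocation grows to fill whatever is available even when a frame only touches a fraction of it. A toy sketch of that behavior (purely illustrative; the budget and texture sizes are made up, not measured from any game):

```python
from collections import OrderedDict

class TextureCache:
    """Toy model of a streaming texture cache: textures stay resident until a
    VRAM budget is hit, so the allocated footprint grows to fill the budget
    even if a single frame only samples a fraction of what is resident."""

    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()  # texture id -> size in MB, LRU order

    def sample(self, tex_id, size_mb):
        if tex_id in self.resident:            # already resident: mark recently used
            self.resident.move_to_end(tex_id)
            return
        # Evict least-recently-used textures until the new one fits the budget.
        while self.resident and sum(self.resident.values()) + size_mb > self.budget_mb:
            self.resident.popitem(last=False)
        self.resident[tex_id] = size_mb

    def allocated_mb(self):
        return sum(self.resident.values())

cache = TextureCache(budget_mb=8000)
for tex_id in range(100):       # stream 100 unique 100 MB textures over many frames
    cache.sample(tex_id, 100)
print(cache.allocated_mb())     # 8000: budget is full, so "VRAM usage" looks maxed out
working_set_mb = 20 * 100       # ...but suppose the current frame touches only 20
print(working_set_mb)           # 2000: the memory actually needed per frame
```

On an 11 GB or 16 GB card the same engine would simply grow its allocation toward that larger budget, which is why an overlay's allocation number alone cannot distinguish "needed" from "opportunistically cached".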
Which just shows that it's a prosumer card and not 100% dedicated to gaming. You simply don't need this feature for gaming, so putting in the silicon to support FP64 to this extent shows that it's either a cut-down workstation GPU or aims to be an entry-level workstation GPU. That's actually a point against it being a good gaming GPU.
I mean, the Radeon 7 is just a renamed Instinct MI50.
Radeon Fantasy VII - 33 games benchmarked
Radeon Fantasy VII manages to consistently keep up with the 3-year-old 1080 Ti in most games. A huge win for AMD! (spoiler: /s)
The real GOAT of this really is the 1080 Ti. I've had my 1080 Ti for more than 2 years now and I don't see myself replacing it anytime soon, especially if Ampere is planned for 2020 as rumored. Great video card which ended up having much more longevity than I could ever have anticipated.
Here's the reality: AMD matched the 1080 Ti and 2080 while making the card friendlier for prosumers and for gaming applications which require ridiculous amounts of RAM.
The card can be undervolted just like the Vega 64 and 56, making it consume much less power; it can be made to run quietly, and it can be run on liquid. Let's not forget these things when trying to go nuts on AMD. What did Nvidia do? They released a redundant 1080 Ti and priced themselves out of their own consumer market by making the 2080 Ti $1,200.
They pushed their flagship consumer card to the cost of a Titan, cards that basically no one buys because they're so cost-prohibitive. So for actual normal consumers, all they did was release a 1080 Ti with ray tracing and DLSS, which just about nothing uses and most things probably won't for another two hardware generations.
So how exactly is this a disaster for AMD? Their card is more useful.
> Which just shows that it's a prosumer card and not 100% dedicated to gaming. You simply don't need this feature for gaming, so putting in the silicon to support FP64 to this extent shows that it's either a cut-down workstation GPU or aims to be an entry-level workstation GPU. That's actually a point against it being a good gaming GPU.
Yes, the Radeon VII is a great PROsumer card, but it's a great gaming card too; it's already showing less stutter and better frametimes than the RTX 2080, and drivers will only improve its performance.
> Look at the RX 580 at launch and now versus the GTX 1060...
I'm not disagreeing with your post, but it was the RX 480 that launched against the GTX 1060, not the RX 580. Many of the RX 580 AIBs ship with base clocks near the RX 480's max OC levels. The RX 580 is generally ~7% faster than the RX 480 out of the box.
> Anybody who says VRAM is not important or 16 GB is too much is straight bonkers...
I'd guess a theoretical 8GB Radeon VII would have a better price/performance ratio than the 16GB model. If I could get the same card with 8GB and save $150 while only losing a few frames and the "promise" of fine wine, I'd go for it, easy.
Anybody who says VRAM is not important or 16 GB is too much is straight bonkers...
> I'm not disagreeing with your post, but it was the RX 480 that launched against the GTX 1060, not the RX 580. Many of the RX 580 AIBs ship with base clocks near the RX 480's max OC levels. The RX 580 is generally ~7% faster than the RX 480 out of the box.
The RX 580 is essentially the RX 480 with slightly better clocks. The 580 has been AMD's product in the low-to-mid range for almost two years now; the 480 is no longer in production. I compared the 580 at its launch against the 1060 to now because it's the card AMD is selling now, and it's been the 1060's competitor much longer than the 480 ever was... If you compare the 1060 with GDDR5X against the 580 that launched almost two years ago, I have no problem with that, btw.
The games that the RX 480 does better in are better optimized because consoles use AMD/Polaris architecture, which relies more on compute; it has nothing to do with the amount of VRAM. I find the GTX 1060 6GB to be a better card than the RX 480 8GB for gaming.
I'd guess a theoretical 8GB Radeon VII would have a better price/performance ratio than the 16GB model. If I could get the same card with 8GB and save $150 while only losing a few frames and the "promise" of fine wine, I'd go for it easy.
That's what makes this more of a converted PROsumer card than a made-for-gaming card. So in that sense, we agree.
> I guarantee you, we'll be around 8 to 10 GB of VRAM for the next 3-4 years.
Before this gen started, when people saw the PS4 specs, they said 8 GB was overkill... 2 GB was more than enough, even 4 GB was a pipe dream, and 8 GB was an insane-asylum-level prediction. It's the same today: consoles are dropping next year, and I'll tell you something, they will have more than 16 GB. Since AMD has the consoles locked up, devs will use the higher VRAM count in their games to the best of their ability, and they will push visuals, fidelity, and even higher resolutions on account of that RAM and better bandwidth... Even RTX (in its current form) is a VRAM fiend, and you're only talking about a low-quality hybrid solution in Turing, only for reflections. So tell me, when they offer ray tracing in other parts of the pipeline, like shadows, what happens then? Why can't we see past our toes?
VRAM is important, but not as important as you think.
People think that if you can fit 16 GB of HD or 4K textures, games will look super realistic, completely forgetting that the complete image is what counts.
If a GPU can only render things like shadows and illumination at medium settings, then 4K textures make the scene look even worse than if the textures were a bit muddier.
Just look at Minecraft: do you think this game would look beautiful with 4K realistic textures but no shaders applied? That would look horrendous. But take even HD textures and add shaders, and it already looks 10x better than 4K textures alone. If VRAM were all it took, we'd be seeing 32GB VRAM cards left and right, but guess what... we're not, because VRAM alone doesn't solve many issues.
What I'm trying to say is that banking on one aspect of the GPU alone, and counting on it to deliver superb performance and future-proofing, is naive at best.
People say we'll need a lot of VRAM once next gen consoles arrive because of the bump in quality we'll see happen to games.
I agree that consoles will push the image quality and looks of games upwards, but that means computational complexity will go up as well, and VRAM alone won't be the holy grail that circumvents those problems. The lower you have to go with your graphics settings, the less VRAM matters: crappy lighting and effects coupled with 4K textures just look odd.
> The RX 580 is essentially the RX 480 with slightly better clocks. The 580 has been AMD's product in the low-to-mid range for almost two years now; the 480 is no longer in production. I compared the 580 at its launch against the 1060 to now because it's the card AMD is selling now [...]
The RX 480 launched against the GTX 1060 in the 150W-and-below class. The RX 480 ended up being more power hungry than they let on, and the RX 580 even more so, consuming over 200W on some models against the GTX 1060's ~130W power consumption.
Well, it's not like Nvidia didn't refresh the 1060... Besides, NV GPUs (Pascal) are generally more overclockable than AMD GPUs... If you keep rebranding and overclocking your chip, at some point you will be ahead in performance, lol.
RX 480 -> RX 580 -> RX 590
> The RX 480 launched against the GTX 1060 in the 150W-and-below class. The RX 480 ended up being more power hungry than they let on, and the RX 580 even more so, consuming over 200W on some models against the GTX 1060's ~130W power consumption.
FineWine is real... Again, go back to PUBG at launch versus now, Arma 3 on AMD hardware then versus now, Vermintide, Kingdom Come: Deliverance, and so many more...
I've used both cards since they released and I never experienced fine wine as a result of more memory. Can you point me to a specific example of that?
Anyway, I agree with you that the Radeon VII is a PROsumer card, and I consider the RX 480 to be a great card for AMD and an equal to the GTX 1060. I just wanted to nitpick a bit.
> I just ordered an AMD... 580. OK, I mainly did it for the 2 free games. XFX better not let me down.
The RX 580 is a pretty good "bang for your buck" card. I got mine a few months ago and plan to wait until AMD releases Arcturus, when they will finally break away from GCN.
> FineWine is real...
I feel that was tied more to Nvidia sabotaging their older hardware, while AMD had poorer initial drivers that they eventually worked out.
> It's not only about memory, but memory plays a big role, especially when trying to play at higher resolutions and when you want to avoid or minimize stuttering in games...
I've made previous posts on how erroneous RE2's VRAM reporting is. It's a poor example anyway, since the half-memory models perform so closely in this game, and the frame gap can largely be explained by the lower memory clocks on the half-sized models.
> I just ordered an AMD... 580. OK, I mainly did it for the 2 free games. XFX better not let me down.
I have been using a Sapphire RX 580 for a month already. It is a fine card, IMO, though it gets quite warm under the desk when under stress, haha.
> Remember "wait for Vega"?
See post 54; the cycle has begun anew.
> this card is just a prank bro until Navi drops
Navi is mid-range like Polaris, though.