FatKingBallman
> You said AF is the same. So, DF also said it was not. Yeah, Xbox fan, synonym for spinning
Spinning that I was right? They said it had lower AF in the original comparison like I said.
> You said AF is the same. So, DF also said it was not. Yeah, Xbox fan, synonym for spinning
At least he got you to activate your Windows. I'm also pretty sure you misunderstood what he said.
The "same" as in the same as this comparison, lower than PS5
Oh, my misunderstanding then. My bad. Then how the hell did you conclude I was wrong when I also said a few times that in the launch comparison AF was also lower on XSX than on PS5?
My point was originally that if you could run the X1X version on Series X it would force 16x AF, so that says to me it's a problem with the GDK.
> Is it 16x AF in BC in this game?
Series consoles force 16x AF on Xbox One games, but you can't play the X1X version on Series X as it just prompts you to upgrade. On PS5 you can play either version.
> My point was originally that if you could run the X1X version on Series X it would force 16x AF, so that says to me it's a problem with the GDK.
The 16x AF for One games is surely due to the ample GPU & bandwidth overhead available on Series X executing software designed for half that.
> What the heck are you talking about? The 3090 came out in 2020 and it can already do 36 TFs. AMD doesn't make a chip that big, and architectural differences give different TF results for them, but how in god's name can you think AMD will not have an affordable chip 7 years from now that delivers the 40 TFs we could almost achieve a year ago? Did you think the behemoth that was the original Titan was the peak of graphics development or something? Because a dirt-cheap RX 480 (similar to what the Xbox One X has) that launched 3 years later outperformed it. There are several big graphics advancements in the pipeline that will likely release well before 2027, and chips containing these advancements will run circles around the bleeding-edge tech we had last year. By the time the PS6 comes it should blow the crap out of a 3090, just like the PS5 does to the Nvidia Titan (2013).
Here we go, another person I have to explain to that comparing Ampere (RTX 30 series) teraflops to consoles is a bad idea. Ampere teraflops do not scale 1:1, or anything close to it, against Turing (RTX 20 series) and RDNA2 (what the consoles use). There have been videos and articles on how teraflop comparisons between different architectures are irrelevant at this point. All you have to do is look at the 3080 and 6800 XT being identical in performance even though the 3080 has 10 more teraflops. Another example would be the 3070 and 2080 Ti, which are identical in performance, yet the 3070 shows 7 more teraflops. This is why you have to stick to RDNA2 if you're talking about consoles, and scale and compare teraflops from there.
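Since the "teraflops" being argued about is just a paper figure (shader cores × boost clock × 2 FLOPs per FMA), the cross-architecture mismatch is easy to reproduce. A minimal sketch, using approximate public boost-clock specs (the numbers and performance notes in the comments are illustrative, not benchmarks):

```python
# Back-of-the-envelope FP32 "paper" teraflops: the figure vendors quote is just
# shader_cores * boost_clock_GHz * 2 (two FLOPs per fused multiply-add per clock).
# Specs below are approximate public boost-clock numbers, for illustration only.
def paper_tflops(cores: int, boost_ghz: float) -> float:
    return cores * boost_ghz * 2 / 1000  # GFLOPS -> TFLOPS

gpus = {
    "RTX 3090 (Ampere)":    (10496, 1.695),  # Ampere counts both FP32 datapaths per SM
    "RX 6900 XT (RDNA2)":   (5120,  2.250),  # far fewer paper flops, comparable gaming perf
    "RTX 2080 Ti (Turing)": (4352,  1.545),
    "RTX 3070 (Ampere)":    (5888,  1.725),  # widely reported as ~2080 Ti gaming perf
}

for name, (cores, clock) in gpus.items():
    print(f"{name:22s} {paper_tflops(cores, clock):5.1f} TFLOPS")
```

The ~36 TF vs ~23 TF gap between the 3090 and 6900 XT comes entirely from Ampere counting its second FP32 datapath, which is why comparing Ampere teraflops directly to RDNA2 console figures is misleading.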
> You said AF is the same. So, DF also said it was not. Yeah, Xbox fan, synonym for spinning
Thanks for the subtitles, no one can spin that, right?
It can't force it to 16x AF if the next-gen version isn't 16x AF at all. Hitman 3 on XSX has 8x while X1X has 4x, for example.
Just checked DF's video on boosted mode on XSX. No, there is no 16x AF in GTA V, Far Cry 4, or Sniper Elite. Blurriness is clearly visible.
> Here we go, another person I have to explain to that comparing Ampere (RTX 30 series) teraflops to consoles is a dumb idea. […]
brilliant marketing by nvidia bro. as you can see, it really works. they will keep blabbering about the 36 tflops 4ever now
I have a dumb question.
What is anisotropic filtering and why should anyone care?
Genuine question by the way.
> I have a dumb question. What is anisotropic filtering and why should anyone care? Genuine question by the way.
it makes the textures in the distance look sharper
The AF is a real issue on XSX; not sure why that would be...
> brilliant marketing by nvidia bro. as you can see, it really works. they will keep blabbering about the 36 tflops 4ever now
this is not nvidia's first prank tbh. they did this before;
(4.2 tflops) NVIDIA GeForce GTX 780: GK110, 902 MHz, 2304 cores, 192 TMUs, 48 ROPs, 3072 MB GDDR5 @ 1502 MHz, 384-bit (techpowerup.com)
(2.1 tflops) NVIDIA GeForce GTX 1050 Ti: GP107, 1392 MHz, 768 cores, 48 TMUs, 32 ROPs, 4096 MB GDDR5 @ 1752 MHz, 128-bit (techpowerup.com)
everyone can observe how the card with 2x more flops actually performs
it's really impossible to convey these facts to people
theoretical maximum FP32 performance does not always translate to better gaming performance
the same situation can be observed between the vega 64 and a gtx 1080. the vega 64 was a computational beast, managing a maximum theoretical FP32 throughput of 12.66 tflops. the gtx 1080 with 7.6 tflops trounced the vega 64 for the entirety of the generation. only recently has the vega 64 seen a slight win against the 1080, but it's so slight that it's mostly irrelevant
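For what it's worth, the paper figures in the techpowerup specs quoted above fall straight out of the standard cores × clock × 2 formula; a quick sketch (spec numbers as listed in the post, performance claims deliberately left out of the calculation):

```python
# Reproducing the "paper" FP32 figures from the techpowerup specs quoted above.
# TFLOPS = shader_cores * clock_MHz * 2 (one FMA = 2 FLOPs) / 1e6.
def fp32_tflops(cores: int, clock_mhz: int) -> float:
    return cores * clock_mhz * 2 / 1e6

gtx_780     = fp32_tflops(2304, 902)    # ~4.16 TFLOPS, matches the "(4.2 tflops)" label
gtx_1050_ti = fp32_tflops(768, 1392)    # ~2.14 TFLOPS, matches the "(2.1 tflops)" label
print(f"GTX 780:     {gtx_780:.2f} TFLOPS")
print(f"GTX 1050 Ti: {gtx_1050_ti:.2f} TFLOPS")
# Despite twice the paper flops, the 780 does not deliver twice the frame rate:
# newer architectures extract far more gaming performance per theoretical flop.
```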
gives textures in the distance better detail when they are viewed at an angle; 16x is the highest value
> it makes the textures in the distance look sharper
Thank you, learn something new every day.
consoles have a limited bandwidth budget, hence developers used to run 4x AF for resource-saving purposes
pc gpus have the benefit of bandwidth completely reserved to the gpu itself, hence it usually seems cost-free to run 16x AF
people thought that with beastly GPUs this generation of consoles would force 16x AF in every game, but the problem is that most console gamers still play on couches at a distance from their TV, so most of them wouldn't notice the difference between 4x and 16x, and some developers will keep using 4x for better budget management
since we're early into the generation, i don't think there should be any budget limitations, but we may never know
some vendors can use their own specific AF algorithms; it seems like PS5 is doing something similar for some games and provides better AF quality across the board. but honestly, with a TV-couch setup, the difference between 4x and 16x may mostly be irrelevant.
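As a rough illustration of why the bandwidth budget matters: each anisotropic "x" level allows another trilinear probe along the texture's major axis, so worst-case texel traffic grows with the AF cap. A toy model (tap counts are the textbook worst case, not any console's actual numbers):

```python
# Toy model of anisotropic filtering cost: AF takes up to N trilinear probes
# (8 texel taps each) along the texture's major axis, where N is clamped by
# the AF setting. Illustrative only, not taken from any console SDK.
def taps_per_pixel(anisotropy_ratio: float, max_af: int) -> int:
    """Worst-case texel fetches for one pixel: one trilinear probe (8 taps)
    per sample, with the sample count clamped to the max AF level."""
    samples = min(max(1, round(anisotropy_ratio)), max_af)
    return samples * 8  # trilinear = 2 mip levels x 4 bilinear taps each

# A floor texture viewed at a grazing angle (anisotropy ratio ~16):
print("4x AF: ", taps_per_pixel(16, 4), "taps")   # clamped to 4 probes -> 32 taps
print("16x AF:", taps_per_pixel(16, 16), "taps")  # full 16 probes -> 128 taps
```

Worst-case texel traffic is roughly 4x higher at 16x than at 4x, which is why bandwidth-limited consoles historically shipped 4x while PC drivers force 16x almost for free (texture caches absorb most of it, but the budget still has to be planned for).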
we can observe a similar effect on nvidia and amd gpus;
(image comparison: imgsli.com)
driver level AF usually gives better texture filtering compared to in-game filters
odyssey;
(image comparison: imgsli.com)
ubisoft's default 16x AF tends to look worse than driver-level AF implementations. so xbox needs to refine their AF if they care about it, but so far they have only forced 16x AF on backwards-compat games
> Did we ever figure out the AF issue with PS4 vs Xbox One?
Scrawny memory bandwidth on PS4. AF was first thing on chopping block.
GTA V
> Thank you, learn something new every day.
yes, on the ps5 version distant textures at angles will appear less blurry despite the overall lower resolution, but who knows, maybe an xsx gdk update will improve anisotropy in the future
So PS5 has better filtering at this point and will give the impression of an overall more detailed and clearer image, even at lower resolutions, as more content will be visible in the background.
Interesting, thanks again lads.
> You know why. As long as ethomaz-tier console war shitposts are allowed (muh 675p), not much will change.
Realistic hits hard lol
> You can't possibly calculate that accurately knowing that the clock speeds are variable. The PS5 is not running at peak GPU performance 100% of the time.
Yes, it runs most of the time at 2.23 GHz; only in some heavy workloads like AVX does it drop the clock a bit.
> Scrawny memory bandwidth on PS4. AF was first thing on chopping block.
The bandwidth situation was clearly better on PS4 overall: 176 GB/s vs 68 GB/s for main RAM bandwidth, and the 109 GB/s ESRAM was very tiny at only 32 MB. Xbox One games with better anisotropic filtering are quite few, and some of them were later fixed on PS4. Actually, there are far more games with better anisotropic filtering on PS4.
XOne esram helped.
> Did we ever figure out the AF issue with PS4 vs Xbox One?
Legit tools issue (on the PS4 SDK). It happened on a dozen games and most were eventually patched without any performance repercussions. I had a source at Japan Studio back then who told me this.
The funny thing is that if you could run the X1X version on Series X it would force 16x AF. It's obviously a setting in the GDK that is causing this.
> Always the MS victim card. No, people do not dump on the XSX, as it is a great console. The only pushback you hear is when some people try to push it as God-level HW, console war around the forum gloating, and unfairly represent its competition to make the gap appear as giant as they feel it has to be. Only then will people go into the compromises both HW are making and into comparative analysis of each architecture's pros and cons. Months later, seeing how close both consoles are in third-party titles, there is some vindication in that.
> It is not as a PS5 owner that I have an issue with the XSS, but as an XSX one: apparently being part of the club does not mean enjoying a console and its games, but agreeing to and praising its business strategy.
> I will keep saying it: as a console user who likes to keep the console model going, I think a digital-only XSX with half the storage (512 GB instead of 1 TB) could have sold for $349-399 and would have made the situation better for a lot of gamers. Apparently, given Riky's analysis, having the XSS version is the reason many Xbox Series games are much bigger than their PS5 equivalents, so if we only had the regular XSX and a cheaper, half-storage, digital-only XSX, games would be smaller too. This is on top of making XSX support more complex (you need to target two HW profiles, not one).
> Yet another reason the XSS is a solution that looked good to MS (before the PS5 DE was announced), but does not look good for, nor help, XSX users.
> Gears 5 Hivebusters is a first-party cross-gen title (not sure why people expect the XSS to fare that much better when the XSX is pushed without having to worry about Xbox One, and much of its GPU is used for resolution-independent processing) that targets between 1080p and 1440p at a stable 60 FPS, but even there you have graphical settings differences (reflections toned down in anything but the XSX version).
Maybe you haven't been paying attention, but pretty much every bit of positive Xbox news gets greeted by negativity. I get it, the Xbox is less popular worldwide, so it is what it is, but let's not pretend it doesn't happen. Look at how Jim Ryan's and Phil Spencer's cross-generational statements were taken, even though they were basically doing the same thing.
> Where did you get the info? Can you please share the source, if you don't mind?
I just thought it was a sensible conclusion. Why release a Pro model three years into the generation only to sell it at a loss, when it's a premium model already aiming at a niche of the main install base to begin with?
> Here we go, another person I have to explain to that comparing Ampere (RTX 30 series) teraflops to consoles is a bad idea. […]
You don't have to explain anything, because my original post literally already addressed TF differences between different architectures. I'm not going to repeat all the words I already typed. Basically, 2020 tech will be ancient in 2027, the PS6 will trump the 3090 and AMD's current flagship, and 40 TFs, AMD TFs or not, will be affordable enough to be in a console. End of story.
> You know why. As long as ethomaz-tier console war shitposts are allowed (muh 675p), not much will change.
He's a dumbass. Always been one, on any forum. Even Reeeeeeee figured that out early on. Just hit ignore and move on.
> You don't have to explain anything because my original post literally already addressed TF differences between different architectures. […]
You say you understand the difference, yet you quoted the 36 teraflops of the 3090 as some kind of comparison point, which makes your whole comment a bit daft. And yet again you bring up the 3090 as some form of benchmark, the same card that is only equivalent to 23-24 tflops of RDNA2 console GPUs.
> Series X CPU: 3.8 GHz Custom Zen 2 (constant)
> Series S CPU: 3.6 GHz Custom Zen 2 (constant)
> PS5 CPU: 3.5 GHz Custom Zen 2 (variable)
> The PS5 does not have a CPU edge in any instance.
That 3.8 GHz is with SMT off, and the PS5 has a lower-level API so it has less CPU overhead.
> With SMT it's still higher, with the option for devs to not use it if their engine doesn't need it and instead go for higher clocks. Either way the Xbox is ahead.
Sure, but the Xbox has higher CPU overhead because of the API, and there is the I/O.
> And for some reason PS5 holds on par or better fps performance, but clock is lower. So, how so?
The magic of lower resolution.
> Yeah, so much lower that the stronger GPU produces screen tearing and worse texture filtering. :/
You asked, I answered. No need to be butthurt.
> I assume your uncle works at Ubisoft or something?
I assume you are being deliberately stupid at this point, because I made three posts trying to explain to you how it could work, yet you still persist with stupid, childish argumentation. Believe whatever you want and carry on with your stupid console war.
Btw, I never have screen tearing on my XSX
> What are you playing it on? I thought it looked great on PS5, especially considering it probably has the most advanced and impressive world simulation ever seen in a game. Every NPC is out there living a life unrelated to the actions of your character.
I'm on PS5. I mean, there is barely any AA at all, let alone LOD and draw distance. Valhalla is leagues ahead from a technical perspective. But that's my subjective observation. I am massively disappointed. But I don't mind; only paid 20€ new, so fuck it.
> Tools lol, just another excuse... Am I doing it right?
Here it was a very specific setting not used by the devs, so they were actually not using the API correctly. It was not a general performance problem like on Xbox. The "tools" excuse in the case of the XSX is just a way of saying their libraries will be better optimized (games will run better) in later versions of their SDK (which is always the case on all systems, consoles and PCs).
> And you have evidence that devs are purposely turning off AF on Xbox Series X because it is underpowered below Xbox One AF capabilities, and that it's not a similar bug?
No one said the Series X is underpowered, eh. Probably the game ran worse with higher AF and they lowered it. That doesn't mean the impact on performance was dramatic, but it could help eliminate some annoying drops here and there. Not sure I understand what you're trying to say about Xbox One AF capabilities.