
[Digital Foundry] Xbox Series X Complete Specs + Ray Tracing/Gears 5/Back-Compat/Quick Resume Demo Showcase!

01011001

Banned
Nothing is ever enough. We should all know that by now. 16G of memory isn't enough for the visuals that everyone wants. The next-gen consoles won't have all available memory just for graphics. The PC has a separate memory pool for CPU stuff (i.e. running the game, loading levels, etc.). If we shoot for ray-tracing, it will be memory bandwidth limited. 1080p will surely be the standard for the consoles while the high-end GPUs will be able to render at true 4K with much larger memory footprints.

this all depends on how well Microsoft's "Velocity Engine" actually works. as of now this sounds very much like Blast Processing but who knows.
I say wait and see. Consoles were always way behind in RAM compared to PC, and for the most part it worked out fine. This generation the consoles had a surprisingly large RAM pool and it really didn't help them much it seemed.
 

cryogenic7

Member
I was looking at the photos at xbox wire, the console is amazing and all but I just realized that it doesn't have an optical out port on the back... How am I going to connect my sound system to this now?
It's time to upgrade your receiver. Many will need to, as HDMI 2.1 will force it if we want to take full advantage of the system's capabilities.
 

Shin

Banned
You know, 7 years later I still haven't come across a single article with a developer claiming they are restrained/limited by the 5.5GB (PS4) or whatever.
Granted 4K will require more memory and bandwidth, but along with those also comes improved compression just like every generation.
Considering the engineering that went into X and XS, I'm sure they profiled the shit out of the devkits + developer feedback to come to 16GB.
 

VFXVeteran

Banned
this all depends on how well Microsoft's "Velocity Engine" actually works. as of now this sounds very much like Blast Processing but who knows.
I say wait and see. Consoles were always way behind in RAM compared to PC, and for the most part it worked out fine. This generation the consoles had a surprisingly large RAM pool and it really didn't help them much it seemed.

But it didn't work out fine. Texture resolution is a BIG factor in a game's visuals. Last generation, we saw several games have lower resolution everything across the board. That's not "working out fine" imo.
 
It's time to upgrade your receiver. Many will need to, as HDMI 2.1 will force it if we want to take full advantage of the system's capabilities.
For a company touting backwards and forwards compatibility as much as MS does, I think that is a big oversight. I have had two consoles, a PC, a home entertainment system and my guitar connected to the same receiver for many years; if the PS5 doesn't force me to replace it, then it is sadly bye-bye Series X for me, as this is a deal breaker. I was considering buying one in mid-2021 when MS exclusives might drop. A splitter might work, but good ones that are also compatible with HDCP and modern audio formats are expensive, and they unnecessarily add cost and complication to what ought to be a simple plug-and-play device.
 

Mattyp

Gold Member
Can you tell me what speeds it has? Because NVMe.

Is there an NVMe system, like USB, that non-tech people can just plug into the front of their case without fear of damaging the drive or board?

Or is that system exactly what Microsoft has just created, and the entire point of my last message?
 

CrustyBritches

Gold Member
Nothing is ever enough. We should all know that by now. 16G of memory isn't enough for the visuals that everyone wants. The next-gen consoles won't have all available memory just for graphics. The PC has a separate memory pool for CPU stuff (i.e. running the game, loading levels, etc.). If we shoot for ray-tracing, it will be memory bandwidth limited. 1080p will surely be the standard for the consoles while the high-end GPUs will be able to render at true 4K with much larger memory footprints.
Doubt it. 8GB VRAM is fine for 2080 performance. Try my example from earlier: Use a PC with RX 480/580 8GB with 4GB system RAM and see how it performs in comparison to X1X. Additionally, do you have hands-on experience with XSX to know how well their memory-saving techniques work anyway?

As for RT, the more likely outcome is that they'll use RT only for select effects, like PC does now, and use it in tandem with AI image upscaling for playable frame rates, like PC does now. Path tracing at 1080p isn't going to happen. It's just a tech demo. We'll find out more tomorrow.
 

01011001

Banned
But it didn't work out fine. Texture resolution is a BIG factor in a game's visuals. Last generation, we saw several games have lower resolution everything across the board. That's not "working out fine" imo.

look how fast the quick resume worked. that switched games within 5 seconds, saving the whole game state on the SSD and loading the previously saved game state of the other game including assets... 5 seconds.
I think we underestimate how much the SSD can be used to stream assets.
I am pretty sure the SSD and their "velocity engine" thingy is the reason they are confident in the amount of RAM they have.
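some back-of-napkin math (a sketch only, assuming the published figures: ~13.5GB of game-visible RAM, 2.4GB/s raw SSD throughput, ~4.8GB/s effective with hardware decompression) lines up with that:

```python
# rough sanity check on the ~5s quick-resume swap, built only on
# Microsoft's published numbers, not on measured behavior
game_ram_gb = 13.5       # RAM a title can actually occupy on XSX
ssd_raw_gbps = 2.4       # sequential throughput, uncompressed
ssd_comp_gbps = 4.8      # effective throughput with hardware decompression

write_s = game_ram_gb / ssd_raw_gbps   # dump outgoing game state: ~5.6 s
read_s = game_ram_gb / ssd_comp_gbps   # load incoming saved state: ~2.8 s
print(f"write ~{write_s:.1f}s, read ~{read_s:.1f}s per swap")
```

in practice a title won't be occupying all 13.5GB, which is how the swap lands around 5 seconds.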
 
Last edited:

VFXVeteran

Banned
Doubt it. 8GB VRAM is fine for 2080 performance. Try my example from earlier: Use a PC with RX 480/580 8GB with 4GB system RAM and see how it performs in comparison to X1X. Additionally, do you have hands-on experience with XSX to know how well their memory-saving techniques work anyway?

Probably because I have experience in that regard. How can you say 8GB VRAM is fine for 2080 performance when I've been able to completely exhaust 12GB of VRAM on one character with several 4K maps? Do the math and you'll realize it too. Just because you *want* it to be the case doesn't make it the case. One can easily take assets and set them all to 4K to see how much memory they would take up (including compression algorithms like DXTC).
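Here's the napkin version (a sketch, assuming 4096x4096 maps with full mip chains and a ten-map material set; the bytes-per-pixel figures are the standard ones):

```python
# VRAM cost of one character's texture set at 4K, with full mip chains.
# Bytes per pixel: uncompressed RGBA8 = 4, BC7/DXT-class = 1, BC1/DXT1 = 0.5.
MIP_OVERHEAD = 4.0 / 3.0            # a full mip chain adds ~33%
PIXELS = 4096 * 4096                # one 4K map

def map_mb(bytes_per_pixel):
    return PIXELS * bytes_per_pixel * MIP_OVERHEAD / 2**20

maps = 10                           # albedo, normal, roughness, etc.
print(f"uncompressed RGBA8: {maps * map_mb(4):.0f} MB")    # ~853 MB
print(f"BC7 compressed:     {maps * map_mb(1):.0f} MB")    # ~213 MB
print(f"BC1/DXT1:           {maps * map_mb(0.5):.0f} MB")  # ~107 MB
```

Run uncompressed float maps and several characters at once and you blow past 12GB fast; block compression is the only reason games get away with 8GB.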

As for RT, the more likely outcome is that they'll use RT only for select effects, like PC does now, and use it in tandem with AI image upscaling for playable frame rates, like PC does now. Path tracing at 1080p isn't going to happen. It's just a tech demo. We'll find out more tomorrow.

High end PCs will have access to more power to run ALL the effects available at 4k@60FPS or very near it with Ampere's release. The 2080Ti just doesn't have the bandwidth. The consoles will be no different.
 
Last edited:

VFXVeteran

Banned
look how fast the quick resume worked. that switched games within 5 seconds, saving the whole game state on the SSD and loading the previously saved game state of the other game including assets... 5 seconds.
I think we underestimate how much the SSD can be used to stream assets.
I am pretty sure the SSD and their "velocity engine" thingy is the reason they are confident in the amount of RAM they have.

RAM is king and will always be king. 5s is too slow compared to being able to fit entire games in RAM if available. One could easily buy 128GB of high-speed RAM and create a RAM disk with all of the game's assets in it, totally bypassing the PCIe bus to retrieve data. Streaming data is feasible, but you can still see LOD switching in the most demanding games.
 

01011001

Banned
RAM is king and will always be king. 5s is too slow compared to being able to fit entire games in RAM if available. One could easily buy 128GB of high-speed RAM and create a RAM disk with all of the game's assets in it, totally bypassing the PCIe bus to retrieve data. Streaming data is feasible, but you can still see LOD switching in the most demanding games.

I also just feel that the asset detail we already have in AAA games like TLOU2 or Red Dead 2 is so good that we won't see a massive jump in that department next gen, and both of those run on less than 8GB of RAM, since I think RDR2 has no differences in asset streaming or quality across One X and PS4. I think the new way forward is better shading and better physics (cloth, hair, soft body).
but again, it's hard to know how well they will be able to keep up without having at least a handful of actual next-gen gameplay demos and tech analysis.

if the trailer of Hellblade 2 is what this system is actually capable of in real time, I think they're good for the majority of games. we will see hopefully sooner than later
 
Last edited:

VFXVeteran

Banned
I also just feel that the asset detail we already have in AAA games like TLOU2 or Red Dead 2 is so good that we won't see a massive jump in that department next gen. I think the new way forward is better shading and better physics (cloth, hair, soft body).
but again, it's hard to know how well they will be able to keep up without having at least a handful of actual next-gen gameplay demos and tech analysis.

if the trailer of Hellblade 2 is what this system is actually capable of in real time, I think they're good for the majority of games. we will see hopefully sooner than later

Better physics that can at least match Zelda: BOTW. That game has the best physics engine I've ever seen in a game. Incredible technical optimization there.

I wouldn't count out big texture sizes. They are always a part of the PC version of games (multiplat or not).

Shading is good outside of RT rendering. I think we have that down except a few other shading techniques that are missing.

Hair - forget it. Not there yet at all. Cloth - expensive. Soft body - depends on what you want done on the softbody.

FX needs a big revamp. Playing Zelda allowed me to see the big strides that could be taken for next-gen.
 

CrustyBritches

Gold Member
Probably because I have experience in that regard. How can you say 8GB VRAM is fine for 2080 performance when I've been able to completely exhaust 12GB of VRAM on one character with several 4K maps? Do the math and you'll realize it too. Just because you *want* it to be the case doesn't make it the case. One can easily take assets and set them all to 4K to see how much memory they would take up (including compression algorithms like DXTC).
We're talking games, not CGI for movies. Games that will more or less be Control or Gears 5 with a new coat of paint. Major difference being that on PC gamers shoot for 60fps, while the graphical showcases on consoles, PS5 in particular, will be 30fps.

High end PCs will have access to more power to run ALL the effects available at 4k@60FPS or very near it with Ampere's release. The 2080Ti just doesn't have the bandwidth. The consoles will be no different.
We're not talking about "High-end PCs" or yet-to-be released GPUs. This is about 2080 performance.
 

VFXVeteran

Banned
We're talking games, not CGI for movies. Games that will more or less be Control or Gears 5 with a new coat of paint. Major difference being that on PC gamers shoot for 60fps, while the graphical showcases on consoles, PS5 in particular, will be 30fps.

I've done both. Don't discount what I know because I'm not a game developer.

We're not talking about "High-end PCs" or yet-to-be released GPUs. This is about 2080 performance.

2080Ti performance is still higher than next-gen consoles. That's just a fact. Now will the consoles be able to handle 4k @ 60FPS with full RTX like in Control? I seriously doubt it. Something will have to be taken away from the pipeline. Lower res something. Decreasing the resolution is the thing I would look at first. It significantly reduces the amount of rays to compute (as you saw in the Minecraft demo).
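The ray math itself is trivial (a sketch assuming one primary ray per pixel, before bounces and extra samples):

```python
# primary-ray count scales linearly with pixel count, so resolution is
# the biggest lever on ray-tracing cost
for w, h in [(3840, 2160), (2560, 1440), (1920, 1080)]:
    rays = w * h
    print(f"{w}x{h}: {rays / 1e6:.1f}M primary rays/frame "
          f"({rays / (3840 * 2160):.0%} of 4K)")
```

Dropping from 4K to 1080p cuts the primary-ray budget to a quarter before you even touch bounce counts.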
 
Last edited:

01011001

Banned
Better physics that can at least match Zelda: BOTW. That game has the best physics engine I've ever seen in a game. Incredible technical optimization there.

I wouldn't count out big texture sizes. They are always a part of the PC version of games (multiplat or not).

Shading is good outside of RT rendering. I think we have that down except a few other shading techniques that are missing.

Hair - forget it. Not there yet at all. Cloth - expensive. Soft body - depends on what you want done on the softbody.

FX needs a big revamp. Playing Zelda allowed me to see the big strides that could be taken for next-gen.

BotW's physics are impressive, but the impressive part is not how accurate or realistic they are. the impressive part is how all of it works organically with everything else, as you would expect it to. hot air gives an updraft, wooden objects and other light objects float, rocks roll, etc. but it's not really accurately simulating real world physics (which of course is also partially down to game design, because realism is not always a great thing)
what I am talking about is more what Control did, a lot of (relatively) realistically destructible environments with a fuck ton of small objects.

current consoles almost go up in flames trying to keep up with that game. and that game is set in a very closed-off game world. the One X can barely run the game at a steady 30fps, and even then it dips down hard in many instances. because with every move you make you break something: small pebbles fly across the room, desks fall apart into small pieces and interact with each other, every bookshelf actually has dozens of physically interactive books in it, you can at any time rip a chunk out of a wall and throw it at an enemy.
 

CrustyBritches

Gold Member
I've done both. Don't discount what I know because I'm not a game developer.



2080Ti performance is still higher than next-gen consoles. That's just a fact. Now will the consoles be able to handle 4k @ 60FPS with full RTX like in Control? I seriously doubt it. Something will have to be taken away from the pipeline. Lower res something. Decreasing the resolution is the thing I would look at first. It significantly reduces the amount of rays to compute (as you saw in the Minecraft demo).
You seem confused, and intent on making this about yourself. Let's revisit what was said:
-I stated the following facts:
1. DF said benchmark results for Gears 5 were like 2080.
2. Consoles aren't like PC with memory requirements. Try using an RX 480/580 8GB with 4GB system RAM and find out.
3. Path tracing is computationally expensive.

You respond:
1. MS memory saving techniques are "Nothing"
2. "16G of memory isn't enough for the visuals that everyone wants."
3. "1080p will surely be the standard for the consoles" with RT enabled

You're just making guesses, and poor ones at that. Now you're on about, "High end PCs will have access to more power to run ALL the effects available at 4k@60FPS or very near it with Ampere's release."

How did we end up there from my original statements? Why are you intent on twisting Xbox Series X discussion into talking about Nvidia GPUs?
 

Kenpachii

Member
Yes, but PC SSDs don't use hardware decompression and the whole technology stack that went into the velocity architecture. The video from my previous post explains in detail why developers will be able to use 2-3x as much RAM (to make a long story short: thanks to the velocity architecture they don't need to load nearly as much into RAM).

They won't be able to use 2-3x more RAM because there is no 2-3x more RAM available; it's that simple.

PCs can use SSDs perfectly fine for the same virtual-memory exchanges, which PC already does right now with far more complex solutions than anything the consoles have shown. Maybe you should check out Star Citizen. DF even did a small video about it that goes into a lot of detail on what it actually does on PC, if you want to read up on it.

While you call it endless memory, PC users call it SSDs, aka virtual memory; I use it all day long.

Funny enough, that's exactly what Digital Foundry mentions, and Microsoft itself, which explains this concept perfectly fine, yet people see numbers and come to idiotic conclusions because they don't understand what the numbers mean in their own context.

The fundamental issue with SSDs is that they are 1000 times slower on access speed and a good 4-5 times slower on throughput (and that is with their compression technique; otherwise make it 12-15 times slower than RAM). If you think speed that's only 4-5 times slower than RAM isn't a big deal, well, call up those Nvidia customers who sued Nvidia and won when they realized they got scammed with 512MB of their 4GB pool being half speed. There you go. Guess what happens when that slow portion gets addressed: fps tanks. Now imagine 5x slower, 1000 times slower access speed, and compression delay on top of it. Yeah, gg.

The SSD will be fine for loading in information that is slow; what Xbox demonstrated with swapping to another game in 5 seconds, for example, shows slow data can be swapped out more rapidly. It still doesn't replace RAM. And that's exactly what a hard drive, aka SSD, aka virtual memory, does.

I agree with people that 16GB is low, especially when part of that is locked away from games and ray tracing will probably also eat a good chunk of it, but it's acceptable, as the price would probably be too high to increase it. I do however think Sony will opt for more for that very reason.
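Put the published numbers side by side and the gap is obvious (a sketch; the XSX bandwidth figures are quoted specs, the latency values are typical ballpark numbers for DRAM and NVMe flash, not XSX specs):

```python
# XSX quoted figures: 2.4 GB/s raw SSD, 4.8 GB/s compressed,
# 560 GB/s GDDR6 (fast pool). Latency figures are ballpark:
# ~100 ns for DRAM vs tens of microseconds for NVMe flash.
gddr6_gbps = 560.0
ssd_comp_gbps = 4.8
print(f"bandwidth gap: {gddr6_gbps / ssd_comp_gbps:.0f}x")   # ~117x

dram_latency = 100e-9    # seconds, ballpark
nvme_latency = 80e-6     # seconds, ballpark, drive-dependent
print(f"latency gap:  ~{nvme_latency / dram_latency:.0f}x")  # ~800x
```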

Maybe not. Parts of the dash could be paged to SSD or elsewhere. And you aren't going to watch Netflix PiP with the processing and overlay running on this thing.

Game runs on this, your social media shit runs on your phone (you social twat!!).

4 x 5 GB per game means 20 GB.

A cloud instance could manage 20GB with frequency-reduced 16Gbit RAM at 12GHz.

For all the important game-running shit that's fine.



Imagine thinking that a PC NVMe drive was the same as what's talked about here.

Imagine thinking that peak PC performance rates are sustainable in perpetuity regardless of drive, or that just any PC could run the compression, paging and blending operations in real time regardless of performance impact.

Imagine being that guy.

Also: I expect PS5 to be as forward thinking, or perhaps even more. Peak means little, sustainable means everything.

Imagine thinking that compression is even remotely needed, because somehow that compression eliminates all the issue's SSD's exactly have over actual RAM.

Imagine having compressing already on a slow source of data so u can compare it towards something that is beyond more faster and think its comparable.
 
Last edited:
You seem confused, and intent on making this about yourself. Let's revisit what was said:
-I stated the following facts:
1. DF said benchmark results for Gears 5 were like 2080.
2. Consoles aren't like PC with memory requirements. Try using an RX 480/580 8GB with 4GB system RAM and find out.
3. Path tracing is computationally expensive.

You respond:
1. MS memory saving techniques are "Nothing"
2. "16G of memory isn't enough for the visuals that everyone wants."
3. "1080p will surely be the standard for the consoles" with RT enabled

You're just making guesses, and poor ones at that. Now you're on about, "High end PCs will have access to more power to run ALL the effects available at 4k@60FPS or very near it with Ampere's release."

How did we end up there from my original statements? Why are you intent on twisting Xbox Series X discussion into talking about Nvidia GPUs?
Definitely not on RTX 2080 level. I definitely understand that's the whole point of PR: to try and get more people interested, boost sales, etc... But at least be honest and transparent about it.

Then again, the truth will sting for many people when they realize it's just another pipe dream, like each prior generation.
[attached: Gears 5 PC benchmark charts at three resolutions for the RTX cards]
 
Last edited:
Imagine thinking that compression is even remotely needed, because somehow that compression eliminates all the issue's SSD's exactly have over actual RAM.

Imagine having compressing already a slow source of data so u can compare it towards something that is beyond more faster and think its comparable.

Imagine being a technologically illiterate cancer like Kenpachii, who thought that compression wasn't needed because "PC".

Imagine being so intellectually crippled you would actually proffer this inane mind-self-fuckery as wisdom:

"Imagine having compressing already a slow source of data so u can compare it towards something that is beyond more faster and think its comparable."

That's an actual word-thing you have to deal with if you engage with people like the Sage 'Kenpachii'.
 

pawel86ck

Banned
They won't be able to use 2-3x more RAM because there is no 2-3x more RAM available; it's that simple.

PCs can use SSDs perfectly fine for the same virtual-memory exchanges, which PC already does right now with far more complex solutions than anything the consoles have shown. Maybe you should check out Star Citizen. DF even did a small video about it that goes into a lot of detail on what it actually does on PC, if you want to read up on it.

While you call it endless memory, PC users call it SSDs, aka virtual memory; I use it all day long.

Funny enough, that's exactly what Digital Foundry mentions, and Microsoft itself, which explains this concept perfectly fine, yet people see numbers and come to idiotic conclusions because they don't understand what the numbers mean in their own context.

The fundamental issue with SSDs is that they are 1000 times slower on access speed and a good 4-5 times slower than RAM on performance. If you think speed that's only 4-5 times slower than RAM isn't a big deal, well, call up those Nvidia customers who sued Nvidia and won when they realized they got scammed with 512MB of their 4GB pool being half speed. There you go. Guess what happens when that slow portion gets addressed: fps tanks. Now imagine 5x slower, 1000 times slower access speed, and compression delay on top of it. Yeah, gg.

The SSD will be fine for loading in information that is slow; what Xbox demonstrated with swapping to another game in 5 seconds, for example, shows slow data can be swapped out more rapidly. It still doesn't replace RAM.

I agree with people that 16GB is low, especially when part of that is locked away from games, but it's acceptable, as the price would probably be too high to increase it.



Imagine thinking that compression is even remotely needed, because somehow that compression eliminates all the issue's SSD's exactly have over actual RAM.

Imagine having compressing already a slow source of data so u can compare it towards something that is beyond more faster and think its comparable.
The SSD doesn't need to be as fast as RAM to make a huge difference in streaming speed. There will still be 16GB of RAM, but thanks to the velocity architecture developers can stream only what's REALLY needed into RAM and make huge RAM savings because of that.

[attached: screenshots from the Xbox Velocity Architecture video]


Everything is explained there, and MS engineers must know what they are doing. Because the velocity architecture consists of 4 components working together, we will not see something similar on PC in the near future (on PC we have only one component, the NVMe SSD, and that's it).
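Restated as arithmetic (a hypothetical illustration; the "fraction actually sampled" values are MS's rough claims, not measurements):

```python
# If profiling shows only a fraction of resident texture data is ever
# sampled, streaming just the needed tiles stretches the same pool.
physical_gb = 16
for used_fraction in (0.5, 1/3):
    effective_gb = physical_gb / used_fraction
    print(f"if only {used_fraction:.0%} of loaded assets are touched: "
          f"~{effective_gb:.0f} GB effective")   # 32 GB and 48 GB
```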

VFXVeteran is probably right, however, that loading an entire game into 128GB of RAM on a PC would do the job even better; the problem is that 128GB of RAM costs more than the XSX itself 😂😂.
 
Last edited:

Ascend

Member
I was looking at the photos at xbox wire, the console is amazing and all but I just realized that it doesn't have an optical out port on the back... How am I going to connect my sound system to this now?
Sound through HDMI, and using the optical from your TV to your sound system.
 
Seriously, the Xbox Series X GPU quite literally has ALL the goodies. Mesh Shading, Hardware Accelerated Ray Tracing, VRS
FP16, and they can go FP8 and FP4... Just as you heard on PS4 and Pro all those years... What ONQ got derided and ridiculed for all these years...


Where are you getting that from? 560GB/s and 336GB/s are the same RAM pool; 6GB of it is clocked lower... It does not exceed 560GB/s, and if the lower-speed memory is used it can get slightly lower.

Tommy Fisher was as accurate as a die on Xbox Series X specs... He also gave information on another console... You can check on it in that thread...

Not true. To developers it looks like a single unified pool and both can and will be addressed simultaneously by the GPU for more than 560GB/s. The 10GB of RAM that runs at 560GB/s is just the most optimal ram for the GPU, but it doesn't mean that the slower side can't be used to augment the faster side. It can be, that's confirmed.
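For what it's worth, here's where both figures come from (a sketch assuming the published layout: ten 14Gbps GDDR6 chips, four 1GB and six 2GB, on a 320-bit bus):

```python
# Published XSX memory layout (assumed here): ten 14 Gbps GDDR6 chips
# on a 320-bit bus. The 10 GB "GPU optimal" region stripes across all
# ten chips; the extra 6 GB lives only on the six 2 GB chips.
gbps_per_pin = 14

def gb_per_s(bus_bits):
    return gbps_per_pin * bus_bits / 8      # GB/s across the stripe

print(f"10 GB region (320-bit stripe): {gb_per_s(320):.0f} GB/s")  # 560
print(f" 6 GB region (192-bit stripe): {gb_per_s(192):.0f} GB/s")  # 336
```

Whether combined traffic can exceed 560GB/s in practice is exactly what's being argued here; the stripe widths above are just the published peaks.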
 

Ascend

Member
But it didn't work out fine. Texture resolution is a BIG factor in a game's visuals. Last generation, we saw several games have lower resolution everything across the board. That's not "working out fine" imo.
I think the magical HDR addition to non-HDR games, the Gears upgrade, and especially the effect of RT on Minecraft all show that textures are really not that vital anymore for good graphics. They are generally more than good enough to make games look good, even at 4K.

Now will the consoles be able to handle 4k @ 60FPS with full RTX like in Control? I seriously doubt it.
No current modern game is fully ray traced. Just thought I'd mention that. Most likely you already know that, but, I want to make sure everyone else does. And honestly, I see no reason why the Xbox Series X would not be able to do this. It seems perfectly capable of doing what RTX is doing at this point, which is rasterization with a handful of ray traced effects.
 

CrustyBritches

Gold Member
It's Gears 5 in 3 different resolutions with all of the RTX cards for comparison. Just going from what Microsoft said... Things just don't seem to add up exactly.
I saw the PC benchmarks, but I was wondering where you were getting the Xbox numbers for comparison. I've only skimmed the DF vids and articles, and saw they mentioned, "we were shown benchmark results that, on this two-week-old, unoptimised port (Gears 5), already deliver very, very similar performance to an RTX 2080."

Aside from that I haven't seen anything else comparing performance metrics to PC. Later tonight I'll actually watch the vids in case I missed something.
 

Ascend

Member
A lot of TVs will convert the 5.1 signal to stereo when doing it like that.
Then there's always this:

Definitely not on rtx 2080 level. I definitely understand that's the whole point of PR, to try and get more people interested, boost sales, etc... But at least be honest and transparent about it.

Then again the truth will sting for many people when they realize it's just another pipe dream, like each prior generation.
Definitely not RTX 2080 level based on what? Let's do simple math, shall we?

RX 5700XT = 40 CU
Xbox series X = 52 CU
That is 30% more CUs.

Then we have;
RX 5700XT game clock = 1755 MHz
Xbox Series X clock = 1825 MHz
That's a 4% clock boost, although, some boost clocks of AIB cards are in the same range as the Xbox Series X, so for ease, let's assume the same clock speed for both.

Then we still have;
5700XT = RDNA1
Xbox Series X = RDNA2
We don't know anything about RDNA2 at this point. So once again, let's assume the worst case scenario, which is that RDNA2 performs exactly the same as RDNA1.

So at worst, the Xbox Series X GPU is 30% faster than the 5700XT. The RTX 2080 is 15% faster than a 5700XT. Do you know what is closest to 30% faster? A 2080 Ti, which is 34% faster. What happens if you take the clock speed and architectural improvements into account?

In conclusion, assuming that the Xbox Series X GPU is the equivalent of an RTX 2080 is actually conservative. It can actually be faster than a 2080Ti, if RDNA2 has significant improvements over RDNA1. The only way this would not be true is if they significantly cut down on ROPs and TMUs, which we currently don't have info about. But it would be really weird to increase CUs and decrease those, so...
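The same comparison in raw FP32 TFLOPS (64 ALUs per CU, 2 ops per clock for FMA, using the clocks above):

```python
# peak FP32 throughput = CUs x 64 ALUs x 2 ops/clock x clock
def tflops(cus, mhz):
    return cus * 64 * 2 * mhz * 1e6 / 1e12

xsx = tflops(52, 1825)        # ~12.1 TF (matches MS's quoted 12.15 TF)
rx_5700xt = tflops(40, 1755)  # ~9.0 TF at game clock
print(f"XSX {xsx:.1f} TF vs 5700XT {rx_5700xt:.1f} TF "
      f"(+{xsx / rx_5700xt - 1:.0%})")
```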
 
Last edited:
Definitely not RTX 2080 level based on what? Let's do simple math, shall we?

RX 5700XT = 40 CU
Xbox series X = 52 CU
That is 30% more CUs.

Then we have;
RX 5700XT game clock = 1755 MHz
Xbox Series X clock = 1825 MHz
That's a 4% clock boost, although, some boost clocks of AIB cards are in the same range as the Xbox Series X, so for ease, let's assume the same clock speed for both.

Then we still have;
5700XT = RDNA1
Xbox Series X = RDNA2
We don't know anything about RDNA2 at this point. So once again, let's assume the worst case scenario, which is that RDNA2 performs exactly the same as RDNA1.

So at worst, the Xbox Series X GPU is 30% faster than the 5700XT. The RTX 2080 is 15% faster than a 5700XT. Do you know what is closest to 30% faster? A 2080 Ti, which is 34% faster. What happens if you take the clock speed and architectural improvements into account?

In conclusion, assuming that the Xbox Series X GPU is the equivalent of an RTX 2080 is actually conservative. It can actually be faster than a 2080Ti, if RDNA2 has significant improvements over RDNA1. The only way this would not be true is if they significantly cut down on ROPs and TMUs, which we currently don't have info about. But it would be really weird to increase CUs and decrease those, so...
If you truly believe you'll get the power of the current best consumer GPU, which AMD themselves haven't been able to beat or even come close to, I honestly don't know what to say. Even if Microsoft could get similar performance at wholesale price, say around 50% of the cost of a 2080 Ti, you'd still be looking at over $500. Take into account the RAM, motherboard, CPU, I/O controllers, WiFi, SSD, controller, case, marketing, shipping, and advertising... Do I even need to go on? Pretty sure you get the gist of where I'm going with this. Yes, consoles are sold at a loss, but to think you're getting a full-fledged, cream-of-the-crop mini PC for a fraction of the price of the best GPU on the market is the most ridiculous thing ever. The BOM of consoles isn't that much more than what they initially sell for. There are so many holes in the idea that a console will even come close to enthusiast levels of hardware on the PC side.

Look at the frame drops in both Minecraft and Gears 5, and ask yourself if this even comes close to an RTX 2080, because you won't get frame drops into the 30s in Minecraft on one. You can say it's not optimized and whatnot, but showing off a pretty poor demo of each last-gen game definitely does not scream RTX 2080 performance. If the XSX struggles to play last-gen games at 1080p@30fps... you're in for a shit show of bad performance in current titles and next-gen titles. There's a reason they are only showing last-gen games.
 
Last edited:

Ascend

Member
If you truly believe you'll get the power of the current best consumer GPU, which AMD themselves haven't been able to beat or even come close to, I honestly don't know what to say. Even if Microsoft could get similar performance at wholesale price, say around 50% of the cost of a 2080 Ti, you'd still be looking at over $500. Take into account the RAM, motherboard, CPU, I/O controllers, WiFi, SSD, controller, case, marketing, shipping, and advertising... Do I even need to go on? Pretty sure you get the gist of where I'm going with this. Yes, consoles are sold at a loss, but to think you're getting a full-fledged, cream-of-the-crop mini PC for a fraction of the price of the best GPU on the market is the most ridiculous thing ever. The BOM of consoles isn't that much more than what they initially sell for. There are so many holes in the idea that a console will even come close to enthusiast levels of hardware on the PC side.

Look at the frame drops in both Minecraft and Gears 5, and ask yourself if this even comes close to an RTX 2080, because you won't get frame drops into the 30s in Minecraft on one. You can say it's not optimized and whatnot, but showing off a pretty poor demo of each last-gen game definitely does not scream RTX 2080 performance. If the XSX struggles to play last-gen games at 1080p@30fps... you're in for a shit show of bad performance in current titles and next-gen titles. There's a reason they are only showing last-gen games.
You get all that performance on a 360mm² chip, the same size as the Xbox One X chip, which is why the price can remain relatively low compared to modern-day GPUs.

More importantly: are you implying that 52 CUs of RDNA2 are going to be just as fast as or slower than 40 CUs of RDNA1 at similar clock speeds?
 

Three

Member
Is there an NVMe system, like USB, that non-tech people can just plug into the front of their case without fear of damaging the drive or board?

Or is that system exactly what Microsoft has just created, and the entire point of my last message?
'System', you mean like slapping regular HDDs in a proprietary plastic case and charging customers 2 times as much like the good old days?

Let's see if they allow those who know how to insert a bare board to use whatever cheap housing they like, instead of actively banning and blocking anyone using standard parts.
 

KAZme

Banned
For a company touting backwards and forwards compatibility as much as MS does, I think that is a big oversight. I have had two consoles, a PC, a home entertainment system and my guitar connected to the same receiver for many years; if the PS5 doesn't force me to replace it, then it is sadly bye-bye Series X for me, as this is a deal breaker. I was considering buying one in mid-2021 when MS exclusives might drop. A splitter might work, but good ones that are also compatible with HDCP and modern audio formats are expensive, and they unnecessarily add cost and complication to what ought to be a simple plug-and-play device.

You can use the audio return channel (ARC) of your TV.
 
Gears 5 at 100fps and 4K Ultra? I wonder how they did it; on PC not even an RTX 2080S can run Gears 5 that smoothly at ultra settings (just around 50fps, never mind even higher settings). RDNA2 is either far more efficient than I thought, or the game is using some dynamic resolution or VRS (after all, VRS can offer up to a 70% boost to performance according to 3DMark).

And it looks like the SSD is even more impressive than the GPU:



The velocity architecture utilises the SSD in order to multiply the effective physical memory developers can use (2-3x, so around 32-48GB RAM equivalent). Texture quality on next-gen consoles will skyrocket compared to current gen, and I wonder how much VRAM will be needed to run next-gen ports on PC (Titan RTX 24GB owners should be fine, however :p).


On PC, devs don't do low-level system optimisation. On consoles they do.
 
Nothing is ever enough. We should all know that by now. 16G of memory isn't enough for the visuals that everyone wants. The next-gen consoles won't have all available memory just for graphics. The PC has a separate memory pool for CPU stuff (i.e. running the game, loading levels, etc.). If we shoot for ray-tracing, it will be memory bandwidth limited. 1080p will surely be the standard for the consoles while the high-end GPUs will be able to render at true 4K with much larger memory footprints.


It's enough. Devs will make games around 16GB of RAM.

Also, are you a dev?
 

Dunnas

Member
Look at the frame drops in both Minecraft and Gears 5, and ask yourself if this even comes close to an RTX 2080, because you won't get frame drops into the 30s in Minecraft on one. You can say it's not optimized and whatnot, but showing off a pretty poor demo of each last-gen game definitely does not scream RTX 2080 performance. If the XSX struggles to play last-gen games at 1080p@30fps... you're in for a shit show of bad performance in current titles and next-gen titles. There's a reason they are only showing last-gen games.
The Gears 5 footage shown was 4K Ultra. Based on the benchmarks you posted, the average FPS for the 2080 and 2080S are 51 and 53 (with minimums of 43 and 45). Therefore, even if it was a 2080S you were watching, you would still see frame drops, completely invalidating that argument. The 2080 Ti is also only 62 avg and 51 min.

As for Minecraft, it is fully path traced, making it and Quake II RTX the best demos to show off the RT capabilities.
 

Dunnas

Member
You get all that performance on a 360mm² chip, the same size as the Xbox One X chip, which is why the price can remain relatively low compared to modern-day GPUs.

More importantly: are you implying that 52 CUs of RDNA2 are going to be just as fast as or slower than 40 CUs of RDNA1 at similar clock speeds?
Obviously it is slower, because it is in a console, and as everybody knows, consoles just suck and make everything slow for reasons. If you were to build a giant console case and stick a Ferrari in it, it would turn into a Hyundai. It is just simple physics. Are you too dumb to understand?
 
Last edited:
The Gears 5 footage shown was 4K Ultra. Based on the benchmarks you posted, the average FPS for the 2080 and 2080S are 51 and 53 (with minimums of 43 and 45). Therefore, even if it was a 2080S you were watching, you would still see frame drops, completely invalidating that argument. The 2080 Ti is also only 62 avg and 51 min.

As for Minecraft, it is fully path traced, making it and Quake II RTX the best demos to show off the RT capabilities.
Definitely not 4K at 100fps. An RTX Titan can't even achieve that with everything maxed out. I'll get over 100fps with everything maxed out and the HD texture pack @ 3440x1440, dropping into the high 90s in combat.
 
lol this does not apply to console.
Obviously not this gen; consoles can't even maintain native 4K 30fps. But if you think a console will compete or be on par with an RTX 2080, why wouldn't it consume near 8GB of VRAM or more? Would those textures and assets not be full res?
 

pawel86ck

Banned
lol this does not apply to console.
Agree. In the near future 8GB will not be enough on PC, but that won't be a problem on XSX. Thanks to the velocity architecture, developers will be able to use more memory (according to MS 2-3x more, 32-48GB effectively), and that's plenty.
 
The recommendation for GPU VRAM at 4K Ultra was 8+GB in 2019, so games this year will only consume more.



Read up on the Velocity Architecture of theirs. Run-time decompression on dedicated hardware other than the CPU/GPU, plus the ability to stream MIPs on the fly, will change how LOD transitions are handled across the system for all games, and will pretty much null and void the pop-in of objects/meshes while streaming in higher-detail MIPs as the viewport gets closer. Streaming will be handled at the system level as long as the game engine uses the API that was made entirely new just for this goddamn console and its bespoke NVMe+decompression hardware solution. PS5 will use exactly the same method, albeit with a different API and different runtime calls. THUS consoles are revolutionizing on-the-fly asset streaming assisted by hardware. PCs don't have that yet even now, but will have to follow suit if the entire industry moves in that direction.
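The concept in miniature (a sketch with hypothetical names; the real mechanism is Sampler Feedback plus DirectStorage-style I/O on dedicated hardware, not this toy API): only the mip level the current view actually needs gets made resident, with the coarsest mip kept as a fallback:

```python
import math

def needed_mip(distance_m):
    """Finest mip level needed at this view distance (0 = full res)."""
    return max(0, int(math.log2(max(distance_m, 1.0))))

class StreamedTexture:
    def __init__(self, mip_count):
        self.mip_count = mip_count
        self.resident = {mip_count - 1}    # coarsest mip always resident

    def update(self, distance_m):
        want = min(needed_mip(distance_m), self.mip_count - 1)
        if want not in self.resident:
            # hypothetical call: hardware-decompressed read straight
            # from SSD into the mip slot, bypassing the CPU
            # io_queue.load_mip(self, want)
            self.resident.add(want)
        return want

tex = StreamedTexture(mip_count=13)        # 4096x4096 -> 13 mip levels
for d in (100, 25, 6, 1):                  # camera closing in
    print(f"{d:>3} m -> mip {tex.update(d)}, resident: {sorted(tex.resident)}")
```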
 
Agree. In the near future 8GB will not be enough on PC, but that won't be a problem on XSX. Thanks to the velocity architecture, developers will be able to use more memory (according to MS 2-3x more, 32-48GB effectively), and that's plenty.
Good luck with bottlenecks if you think it'll work like that. SSDs will never be anywhere near the speed of RAM, especially when you have to cram all of the graphical data and everything else through that same pipeline. Frame-pacing issues galore. The best engineering can't beat physics.

Also, what about the read/write wear on the SSD? Wouldn't that increasingly degrade the drive? And if they are going to stream directly from the drive like that, there would need to be a partition specifically for that, and I doubt it would magically grow in size.
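For scale, the wear math (a sketch assuming a 1TB TLC drive rated around 600 TBW, a typical consumer figure; reads barely wear NAND, it's writes that count):

```python
# flash endurance is rated in terabytes written (TBW); streaming assets
# is almost all reads, which don't meaningfully wear the cells
tbw_rating = 600            # TB written over rated life, typical 1TB TLC drive
writes_gb_per_day = 50      # generous: installs, saves, quick-resume dumps
days = tbw_rating * 1000 / writes_gb_per_day
print(f"~{days / 365:.0f} years at {writes_gb_per_day} GB written per day")  # ~33 years
```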

Read up on the Velocity Architecture of theirs. Run-time decompression on dedicated hardware other than the CPU/GPU, plus the ability to stream MIPs on the fly, will change how LOD transitions are handled across the system for all games, and will pretty much null and void the pop-in of objects/meshes while streaming in higher-detail MIPs as the viewport gets closer. Streaming will be handled at the system level as long as the game engine uses the API that was made entirely new just for this goddamn console and its bespoke NVMe+decompression hardware solution. PS5 will use exactly the same method, albeit with a different API and different runtime calls. THUS consoles are revolutionizing on-the-fly asset streaming assisted by hardware. PCs don't have that yet even now, but will have to follow suit if the entire industry moves in that direction.
LOD and pop-in haven't been an issue on PC since the OG XB1 and PS4 launched. SSDs have been around for a while, you know. I would ask for examples, but I doubt you saw anything next-gen shown the other day; I only saw last-gen games and nothing impressive or new. It all sounds good on paper, but until you or I see proof, it makes more sense to wait and see than to hype all of this up.
 
Great Machine!!! But 16GB GDDR6?!!!!

That's it?


The drive is so fast it can (basically) be treated as extended RAM, if I understood Digital Foundry correctly. They also added features to reduce RAM and graphics-memory footprints compared to what is normal on PC.
 
Last edited:

Dunnas

Member
Definitely not 4K at 100fps. An RTX Titan can't even achieve that with everything maxed out. I'll get over 100fps with everything maxed out and the HD texture pack @ 3440x1440, dropping into the high 90s in combat.
Nobody said what you were watching was supposed to be 100fps. The 100fps comment made by the devs was unrelated to the demo that was shown and didn't mention anything about resolution or settings. Again, nothing you are saying in any way supports your nonsensical claim that a 2070 has more performance than the Xbox SX.
 

Shin

Banned
Nobody said what you were watching was supposed to be 100fps. The 100fps comment made by the devs was unrelated to the demo that was shown and didn't mention anything about resolution or settings. Again, nothing you are saying in any way supports your nonsensical claim that a 2070 has more performance than the Xbox SX.
That's because he's a PC gamer who's been crying since yesterday. Almost 24h later he's still at it; the insecurity and stupidity are off the charts.
Using TweakTown for your argument is plain stupid. They are like GamingBolt, worse in fact, because according to them the PS5 will have dual GPUs.
 