
Next Xbox is ‘More Advanced’ Than the PS5 according to Insiders.

WHOA! How much of a RAM difference? Will one console have 24 GBs of RAM and the other have 12 GBs of RAM?
My assumption is that Anaconda will be a PS5 level system or beyond it in some capacity (GPU likely) meant to run at 4K. Lockhart, or whatever it is, will likely be a 1080p variant of the exact same console, meant to linearly scale the graphical profile of Anaconda at a lower resolution. Anaconda will probably be 12 teraflops with 16-24 GBs of RAM and Lockhart will probably be 4 teraflops with 12 GBs of RAM.
 

splattered

Member
Why would they bother with a 4tf machine when the 6tf X1X already exists and is at a decent price point during frequent sales? Couldn't they just put out a revised X1X with some of the features from Anaconda and make it the new entry tier machine? Or are we expecting MS to fully carry 3 to 4 different consoles through the beginning of next gen?
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
My assumption is that Anaconda will be a PS5 level system or beyond it in some capacity (GPU likely) meant to run at 4K. Lockhart, or whatever it is, will likely be a 1080p variant of the exact same console, meant to linearly scale the graphical profile of Anaconda at a lower resolution. Anaconda will probably be 12 teraflops with 16-24 GBs of RAM and Lockhart will probably be 4 teraflops with 12 GBs of RAM.

So what happens if a Sony 1st party developer makes a game for the PS5 (that has 24 GBs of RAM) and they design the game around all 20 GBs of usable video game RAM but at a 1440p resolution (with the help of Checkerboard Rendering "CB" it looks 90% close to native 4K)? You do realize that the same game on a 1080p TV will get free AA and look better than ANY game made for either Xbox Lockhart or Xbox Anaconda, right?
 

Nikana

Go Go Neo Rangers!
Why would they bother with a 4tf machine when the 6tf X1X already exists and is at a decent price point during frequent sales? Couldn't they just put out a revised X1X with some of the features from Anaconda and make it the new entry tier machine? Or are we expecting MS to fully carry 3 to 4 different consoles through the beginning of next gen?

The One X's architecture is different from future gens'. It would be harder to continue developing for two architectures versus one.
 

NickFire

Member
Why would they bother with a 4tf machine when the 6tf X1X already exists and is at a decent price point during frequent sales? Couldn't they just put out a revised X1X with some of the features from Anaconda and make it the new entry tier machine? Or are we expecting MS to fully carry 3 to 4 different consoles through the beginning of next gen?
Perhaps they really want the internet to start talking about an Xbox 2/3?
 

ethomaz

Banned
Why would they bother with a 4tf machine when the 6tf X1X already exists and is at a decent price point during frequent sales? Couldn't they just put out a revised X1X with some of the features from Anaconda and make it the new entry tier machine? Or are we expecting MS to fully carry 3 to 4 different consoles through the beginning of next gen?
To make development between Lockhart and Anaconda easier, plus decrease the level of optimization required.

Developing a game for Xbox One X and Anaconda at the same time would be more like developing for one and porting to the other... MS didn't want that... so they need to have the same features at the hardware level.
 

Chizzle

Neo Member
Games load in less than a second, eh? Remember when they told us that PS4 and Xbone games would be playable within seconds because they would install as we played them? Yeah, now I get a notification saying "Game is ready to start!" and I get to play ... the start screen. I'll wait until the systems actually come out before I get excited for any "features". Also, I don't care about 8k. The goal at this point should be 4k at 60FPS.
 
So what happens if a Sony 1st party developer makes a game for the PS5 (that has 24 GBs of RAM) and they design the game around all 20 GBs of usable video game RAM but at a 1440p resolution (with the help of Checkerboard Rendering "CB" it looks 90% close to native 4K)? You do realize that the same game on a 1080p TV will get free AA and look better than ANY game made for either Xbox Lockhart or Xbox Anaconda, right?
No it wouldn't, because Anaconda could run it all the same as the PS5, and Lockhart could simply run it at a lower resolution and employ the same checkerboard technique to make up any deficit. Also, the RAM usage would be completely different at each respective resolution. It's all about linear scalability: it would be designed to do exactly what the other system could, but at a fraction of the resolution.
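The "fraction of the resolution" idea is easy to sanity-check with pixel math. A quick sketch, using the rumored TF figures from this thread (not confirmed specs):

```python
# Pixel count scales with resolution area, which is the basis of the
# "Lockhart = Anaconda at a lower resolution" argument.
res = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in res.items()}

print(pixels["4K"] / pixels["1080p"])    # 4.0  -> 4K pushes 4x the pixels of 1080p
print(pixels["4K"] / pixels["1440p"])    # 2.25
print(12 / 4)                            # 3.0  -> rumored Anaconda/Lockhart TF ratio
```

Note the ratios don't line up exactly: 4K is 4x the pixels of 1080p, while the rumored TF gap is only 3x, so a pure resolution split would actually leave Lockhart with proportionally less headroom per pixel.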
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
No it wouldn't, because Anaconda could run it all the same as the PS5, and Lockhart could simply run it at a lower resolution and employ the same checkerboard technique to make up any deficit. Also, the RAM usage would be completely different at each respective resolution. It's all about linear scalability: it would be designed to do exactly what the other system could, but at a fraction of the resolution.

Creating a game world with 20 GBs of RAM in mind is completely different than if you only have 8 GBs of usable RAM. All of this can't be made up in "resolution" differences. Resolution isn't that big of a differentiator.
 
Creating a game world with 20 GBs of RAM in mind is completely different than if you only have 8 GBs of usable RAM. All of this can't be made up in "resolution" differences. Resolution isn't that big of a differentiator.
Well, that all depends on a number of factors. First off, why the arbitrary introduction of 8 GBs? Also, capacity can be offset by bandwidth: the higher your bandwidth, the less that needs to be stored in memory. This also applies to instruction passthrough between the CPU and GPU; more bandwidth means more instructions can be sent. Furthermore, resolution makes a decent difference in VRAM usage on its own, and when applying AA to a game the usage goes up dramatically, and every game these days uses AA.

None of this is cut and dried. Also, to be completely honest, I find your 20 GB example asinine to begin with. We're not even using 9 GB right now for total system memory in a native 4K game like Red Dead Redemption 2, which is inarguably one of the best looking games ever released, and it also functions on a massive scale.
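To put rough numbers on the resolution-vs-VRAM point, here's a back-of-the-envelope sketch of how much memory a single 32-bit color buffer takes at each resolution, with and without 4x MSAA. Real games keep many more buffers (G-buffer, shadow maps, TAA history, etc.), so these are floors, not totals:

```python
def framebuffer_mib(width, height, bytes_per_pixel=4, msaa=1):
    """Size of one color render target in MiB."""
    return width * height * bytes_per_pixel * msaa / (1024 ** 2)

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    print(f"{name}: {framebuffer_mib(w, h):.1f} MiB, "
          f"{framebuffer_mib(w, h, msaa=4):.1f} MiB with 4x MSAA")
```

A single 4K buffer costs 4x its 1080p counterpart, which supports the point that resolution and AA move VRAM usage, but also shows why render targets alone can't explain a 20 GB vs 8 GB gap.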
 
Next gen the selling point will be loading the game in 10 nanoseconds. If you are loading in 20 nanoseconds, your console will be considered "garbage" by Dual Shockers.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Well, that all depends on a number of factors. First off, why the arbitrary introduction of 8 GBs? Also, capacity can be offset by bandwidth: the higher your bandwidth, the less that needs to be stored in memory. This also applies to instruction passthrough between the CPU and GPU; more bandwidth means more instructions can be sent. Furthermore, resolution makes a decent difference in VRAM usage on its own, and when applying AA to a game the usage goes up dramatically, and every game these days uses AA.

None of this is cut and dried. Also, to be completely honest, I find your 20 GB example asinine to begin with. We're not even using 9 GB right now for total system memory in a native 4K game like Red Dead Redemption 2, which is inarguably one of the best looking games ever released, and it also functions on a massive scale.

If the rumors are saying the Xbox Lockhart will have 12 GBs of total RAM, then I'm assuming 4 of those gigs will be used as "system RAM", leaving only 8 GBs usable for games. And RDR2 will look and feel like a last gen game by the year 2022.

It always happens this way. GTA5 was equally as massive for a "last gen" game and look at it now. Once the PS4 and Xbox One were out for a year or two, it totally felt like a last gen game.
 

DeepEnigma

Gold Member
It matters when Sony has the edge.

 

thelastword

Banned
HBM2 doesn't seem like the right direction. The bandwidth is really low compared to GDDR6. I also thought NAVI was supposed to be GDDR6.
Sony has always gone with the latest technology at release........They waited on GDDR5........A PS5 will have HBM3, not spare parts HBM2, that's never been Sony's way of doing things.......The PS5 will be either HBM3, which heightens bandwidth even more and lowers latency, or it will have GDDR6.......Either of the latter technologies....

The HBM2 via HBCC is a neat idea, don't get me wrong, that combined with streaming off an SSD is pretty good, but such features can still be implemented without going for a more expensive HBM2 solution....People have to understand, newer technologies usually will give us better performance for cheaper..(unless you're Nvidia of course).....HBM3 will have improved efficiencies and layouts over HBM2, it will also be based on 7nm, which is the PS5 node for everything more or less.......It may very well use a 2048 bit bus with stacking...I can see stacked 4-8GB HBM3 chips to round out to 32GB total on a PS5, it will use less power, have bandwidth out the wazoo and if Sony so desires, they can even have 8GB of DDR4 for the OS and link it to the Vram pool via HBCC....

If you go back to my prediction specs, not only did I say 1TB SSD for PS5, I also said 4TB HDD (mechanical drive), so they can use StoreMI to link two HD technologies whilst maintaining top seek and read speeds......The technology is already there, but it will no doubt be improved significantly with PCIE 4.0.....
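For context on the HBM-vs-GDDR talk, peak memory bandwidth is just bus width times data rate. The first two configurations below are illustrative guesses, not leaks; the last one matches the Xbox One X's published 326.4 GB/s:

```python
def bandwidth_gbs(bus_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s: bus width (bits) x transfer rate (GT/s) / 8."""
    return bus_bits * data_rate_gtps / 8

print(bandwidth_gbs(2048, 2.0))   # hypothetical 2048-bit HBM2 at 2.0 GT/s -> 512 GB/s
print(bandwidth_gbs(256, 14.0))   # hypothetical 256-bit GDDR6 at 14 GT/s -> 448 GB/s
print(bandwidth_gbs(384, 6.8))    # Xbox One X GDDR5: 384-bit at 6.8 GT/s -> 326.4 GB/s
```

This is why HBM's appeal is the very wide bus at low clocks, while GDDR6 gets comparable numbers from a narrow bus at high transfer rates.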

Exactly, that's how it is on PC. Not in consoles. Pretty much every dev complained about separate pools in the PS3 compared to the shared one on XBox 360.
No, the separate pools were not the problem, the problem was devs not having access to enough Vram.......We always seem to forget that the PS3 OS used much more RAM than the 360 OS......The 360 never catered to bluray playback functionality, move functionality, remoteplay, sound features and media features as complex as the PS3 did......Essentially, devs were limited to 256MB on the GPU side, whilst 360 devs could offset certain 360 features and use more than 256MB for textures and whatever else they wanted to focus on in the pipeline, because it was one pool, where much less was reserved for the OS.....If PS3 had 768MB of Vram like my 8800 GTX Ultra had, it would be an entirely different story.....PS3 games would look a whole generation ahead of 360 games with the cell in tow......Funny saying that, because PS3 games already outshone 360 exclusives by far....

But the benefit of a console is that it's "NOT" like developing for a PC. Who cares about scaling when you are making an exclusive game for one console? And yes Xbox One X games were held back by the Xbox One S. Any exclusive X1X title would look miles better than they do now if they didn't have to support the "S".



Exactly! I don't think people honestly realize how making games for a console is different than a PC. The mindset is different. If you are a dev and can design the game from the ground up to only work on a console that has 14 TFs of power, it'll be designed differently than if you had to support a 4TF console.
Absolutely, the crazy thing about this is you want to start with parity hardware......I don't mind PRO consoles a bit later on if folks want to put their 8k TV's into use or if they want some kit that ushers them closer to the new gen. Frankly, I think the PS4 was so well designed that it holds up even now...GOW, Days Gone, Spiderman all look awesome on PS4 hardware, many multiplats are still a full fat 1080p even as the gen winds down.....So it was well designed......That 1.84TF showed its mettle throughout this gen........And yes, the GOWs, Horizon Zero Dawns, Spidermans, Detroits etc....were done on 1.84TF hardware even with the PRO available, that's why they look so good and perform so well on vanilla........So I agree that the spec must be as high as it can be for a PS5 next gen, so devs can push gaming graphics, framerate, AI and physics forward....

Having separate spec/power skus would be a first for a console launch.......12TF vs 4TF, that's insane......You will spend resources catering to the 4TF box and optimizing for it, when your first run should be just for 1 sku (12TF) and you could go balls to the wall with squeezing every last bit of perf from that and making a showcase....That won't be the case with two XB2 skus at launch including a discless system........MS will not want games to look too appalling on the weaker skus, and 12TF Xbox owners will start balking about MS not taking advantage of their 12TF machines when ND and Santa Monica start melting eyeballs on their 14+ TF machine......

Maybe, but the Xbox One X and the Xbox One S are fairly different. XOS being 1.4TF with significantly slower RAM and the XOX at 6TF with significantly faster RAM. That's a very big difference, but that's also the difference it needs to allow current generation games to hit 4K. The difference for next-gen will be graphics at 1080p or 4k and give or take a few effects and toggles. The games will most likely look prettier on the beefed up system kind of like how games scale on a PC.

These consoles are becoming more PC like in architecture, but they are still focused on games whereas the PC is a multi-tool. But to think they can't scale like they do now is a little naive, especially because all first party will end up on PC anyway and all third party will usually have PC ports.
Well, if this is just an XBOX argument, I'd agree......Hence the reason why Sony's strategy makes more sense.....Their games aren't on PC, they have the best devs in this industry.......They can go all out on producing the best graphics and perf on one common power spec, where there will be no PC version and no lower quality version, all you will see is the best representation of LOU3 or Savage Starlight or GT7 or GOW2 or Spiderman 2....When a PRO comes, it will be to touch up on the graphics, give a bit better perf, render in 8k, but the base game would have already been mighty impressive from a base 14TF with no compromises, no focusing on other skus and the like....People have to understand, time and resources are critical to the final product we see, how good the graphics are etc......Even moreso if the team is very talented, it layers on the shine some more compared to the competition...

Funny that every time people want to talk about a really impressive piece of tech, they quote AMD, but when they buy or recommend GPU hardware, they shit on AMD and quote Nvidia. AMD has Radeon Chill, HBCC, StoreMI, Primitive Shaders, RPM, DSBR and a whole slew of great features, they also banked on NCUs and that's proving to be revolutionary with the latest APIs like DX12 and Vulkan, but people would rather they be behind the 8 ball in terms of features so NV could continue to offer them GTX 1050's and 1650's at higher prices than an RX 570, which dumps all over these cards at $130 and also comes with two games......

You'd be a fool to believe that NV would ever offer consoles anything close to AMD performance in gaming consoles this gen or the next......Had it been up to NV, current consoles would probably have been packing GT 740 performance at a much higher asking price.....If some of you guys won't learn from NV's past, you never will......Had it not been for AMD, we would be rocking 4 core CPUs and maybe 1660 Ti GPU performance next gen at exorbitant prices.......Sony and MS would have to sacrifice their first borns if they wanted Jensen Huang's fresh off the presses "RTX" that "just works"......Don't kid yourself.......Thank AMD you will be getting 8 core 16 thread CPUs in a console next gen....Thank AMD you will be getting a high end GPU with RT support in a console next gen........You would get nothing close with Nvidia....

If you doubt this, it is times like this when I wish MS had gone fully Intel+NV, so you could give your story on how superior MS hardware would be......To match PS, MS would have to price their consoles at $800 or above to compare....

But I thought we needed 4x the processing power to achieve native 4K for all?

Otherwise it seems we will still have 1800p on our hands with select 4K.

You would still get way more out of 12TF baseline than a 4TF baseline even at 1080p. I want more effects over just a rez bump. And throw RT into the mix, and that 4TF seems a hair more anemic or none at all.

Imagine 1440p with some CB rendering pushing 12TF to the wall, it won't happen with a 4TF handicap all gen if resolution bump is their only focus.
People think it's just 4x TF power, it's not......What about bandwidth, architecture, ram type and ram setup......Many things can affect a game at render time and whether it's able to maintain its rez without buckling under subpar framerates....

Graphics next gen will be more complex, textures and lighting more complicated, even GPU hardware will push raytracing, which will tie into sound and AI....It will be a formidable leap on a holistic level where all aspects of the pipeline are tied to each other........So many people see next gen as just COD: Advanced Warfare in 4k, it's not...It's much more than that.....Alpha rez and complexity will be higher, game worlds will be larger with much more to texture, lods will improve....We can't just believe that all that's needed is some baseline exponential from this gen to the next.....Cerny said that he is working on a true next gen leap from the PS4 PRO, which he deems is very much a current gen machine.....

It has to be or MS is doomed. I love Sony but it makes logical sense that next gen will belong to Microsoft. It's just how these cycles work.
Hey, if you wish MS to win, maybe you are a fan, that's fine.....Yet let's put some receipts on all of this. When was the last time MS won a gen? If we're going by history...never.....So there's no precedent there.........To be completely clear, the only reason Wii won last gen was because PS3 was a $600 console....

Full Auto and Ridge Racer don't have 360 counterparts to compare
All NBA versions shared between PS360 are at resolution parity

I'll give you the benefit of doubt here and assume you had a memory lapse or meant 20%

Fair enough.

Raw power metrics were similar but Xenos had the huge unified shader arch advantage
Full Auto and RR6 were on Xbox 360; granted, they may have released earlier than the PS3 versions, but that's because the 360 released 1 year prior to the PS3, and it was essentially the same game...Granted the devs made some improvements for the PS3 versions, but if the GPU was exceedingly better on 360 they would not have been able to drive a pixel count of over 2x.....

I do agree that 360 Xenos was better designed, and perhaps if Sony went to NV on day 1 for the GPU portion maybe they would have developed something much better, like say a 768MB solution.....but it is what it is, so I agree that Xenos was better, but not as far off from RSX as you would think....I think the lack of memory hurt PS3 ports much more than the GPU's power tbh...

Sony definitely paid too much for what they got, but it was not so left field that devs could not make it work, especially if they made a small investment in cell usage....Some third parties were able to do just fine on PS3, like the Prototype games, FF games etc....and first parties consistently pulled in better visuals...If RSX was much worse it would not have been possible....Remember FF13 was higher rez on PS3 too, with the same 2xMSAA...

How do you figure? That's only a 1080 Ti equivalent, the Xbox One X is already approximately 50% of that in terms of GPU capability.
How do you go about spreading this FUD? If we're gaming at 4k, you really believe a XBONEX with lower clocks than an RX580, with no dedicated vram path and ram path, but instead using the same bus across CPU/GPU/MEM, is 50% of a 2080/1080ti..........That's what you refer to as getting it all wrong.....There is nothing I can give a 1/10 in this answer.....You better do this math again...

The X GPU is not half a 1080Ti, nvidia has more performance per flop than AMD
I do agree however 400GB/s is way too low for next gen, 600GB/s minimum considering the X already has 326.4GB/s
Nvidia does not have more performance than AMD flop to flop; NVIDIA uses compression technology that decreases IQ to lift perf.....A flop is a flop, it's math...The difference lies in architecture and what NV sacrifices to achieve similar performance to higher TFLOP AMD parts........All of that is thrown out of the window when using low level APIs like DX12 and Vulkan; AMD pulls ahead because their TFLOPS is much higher........Under DX11, with compression tech and more CPU hogging for AMD, NV can shine, but those days are going the way of the dodo.......Pull up any undervolted Vega, not even OC'd, on RE2, Forza, Strange Brigade, World War Z, DMC, Division 2, RE7, Battlefield, NMS Vulkan, even some DX11 titles like Kingdom Come Deliverance, and Vega perf pulls ahead of equivalent NV GPUs.......
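"A flop is a flop" is literally true at the spec-sheet level: peak FP32 throughput is shader cores x 2 (a fused multiply-add counts as two ops) x clock. A quick check against the published shader counts and clocks of the parts discussed in this thread; whether games actually extract those flops is exactly the architecture/API argument above:

```python
def peak_tflops(shader_cores, clock_ghz):
    """Peak FP32 TFLOPS: cores x 2 FMA ops per clock x clock (GHz) / 1000."""
    return shader_cores * 2 * clock_ghz / 1000

print(peak_tflops(1152, 0.800))   # PS4:        ~1.84 TF
print(peak_tflops(2560, 1.172))   # Xbox One X: ~6.00 TF
print(peak_tflops(2304, 1.340))   # RX 580:     ~6.17 TF
```

The PS4 result lands on the familiar 1.84TF figure quoted elsewhere in the thread, which is a decent sanity check on the formula.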
 

Lort

Banned
(quoting thelastword's posts above)
TLDR

Xbox one x > ps4 pro
Xbox 5 pro > ps5
 

Ar¢tos

Member
And I'm so bitter about it, so much wasted potential.
I bought one just to play Tearaway and Murasaki Baby, ended up giving it to my nephew because Sony just abandoned it.
If they used SD cards instead of proprietary shit, it would still be alive and kicking!
 

Housh

Member
I always thought 360 won last gen overall, and then PS3 started to shine at the end when first party devs learned how to use the Cell processor to power single player narratives, and after PSN got hacked they were forced to focus on their online strategy. Don't get me wrong, I love PS2, PS3 and PS4 over anything Microsoft has put out, but I have a feeling that out the door MS will have a compelling launch next gen, especially with Game Pass value.

I'll still stick to Japanese consoles plus a gaming PC for life, though.
 
So, does anyone really expect Anaconda to be 64 CUs? Honestly, I don't see that happening: historically, consoles ship with either CPU cores or GPU CUs disabled (or both) for better yields. Getting a completely new APU, on 7nm, to come out at the limit of its capacity with absolutely no room to spare sounds highly unlikely to me.
 
Last edited:
How do you go about spreading this FUD? If we're gaming at 4K, do you really believe an Xbox One X, with lower clocks than an RX 580, no dedicated VRAM path but instead a single bus shared across CPU/GPU/memory, is 50% of a 2080/1080 Ti? That's what I refer to as getting it all wrong. There is nothing here I can even give a 1/10. You'd better do this math again...
That's cute.

Probably because it has 4 more CUs than the RX 580, so it doesn't need to run at the same frequency: it has 2560 shader cores vs. the 580's 2304, and 160 TMUs vs. the 580's 144. On top of that, it has a 384-bit bus and 326 GB/s of memory bandwidth vs. the RX 580's 256-bit bus and 256 GB/s. Add the custom improvements Microsoft made to the IGP over the standard Polaris architecture, and that's yet another thing to account for. As for the shared bus, that doesn't matter much since the total bandwidth is still higher; a CPU's bandwidth needs from the memory bus rarely exceed 20 GB/s.
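For anyone who wants to check the paper math, the theoretical numbers work out like this. A quick sketch, not gospel: clocks are the public boost figures, and "TFLOPs" here is just the standard shaders × 2 FLOPs/clock × clock formula.

```python
# Theoretical FP32 throughput: shader cores * 2 ops/clock (FMA) * clock in GHz.
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

x1x = tflops(2560, 1.172)    # Xbox One X: ~6.0 TF, 326 GB/s over a 384-bit bus
rx580 = tflops(2304, 1.340)  # RX 580:     ~6.2 TF, 256 GB/s over a 256-bit bus

print(f"X1X: {x1x:.2f} TF, RX 580: {rx580:.2f} TF")
```

So the X1X trades clock speed for a wider chip and more bandwidth, which is exactly the point about CU count vs. frequency.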

 

SonGoku

Member
So, does anyone really expect Anaconda to be 64 CUs? Honestly, I don't see that happening: historically, consoles ship with either CPU cores or GPU CUs disabled (or both) for better yields. Getting a completely new APU, on 7nm, to come out at the limit of its capacity with absolutely no room to spare sounds highly unlikely to me.
Maybe, with 2 or 4 CUs disabled.
Honestly, I don't expect more than a 1TF difference between PS5 and Snek, if they're not outright identical.
 
Maybe, with 2 or 4 CUs disabled.
Honestly, I don't expect more than a 1TF difference between PS5 and Snek, if they're not outright identical.
I expect no more than 3TF, tbh. Even then, relatively speaking, that's a smaller gap than vanilla PS4 vs. vanilla XBO. The difference should be minimal. Also, each console will probably have its own 'secret sauce'.

Here's my theory: 60 CUs for Snek and 56 CUs for PS5.
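For reference, here's the GCN rule of thumb those CU counts imply. A sketch only: 64 shaders per CU is the standard GCN layout, but the 1.6 GHz clock is purely a guess for illustration, since nobody knows the real clocks.

```python
# GCN rule of thumb: 64 shaders per CU, 2 FLOPs per shader per clock (FMA).
def cu_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

# Hypothetical 1.6 GHz clock, just to show how the CU counts compare:
for cus in (56, 60, 64):
    print(f"{cus} CUs @ 1.6 GHz -> {cu_tflops(cus, 1.6):.2f} TF")
```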
 
So, does anyone really expect Anaconda to be 64 CUs? Honestly, I don't see that happening: historically, consoles ship with either CPU cores or GPU CUs disabled (or both) for better yields. Getting a completely new APU, on 7nm, to come out at the limit of its capacity with absolutely no room to spare sounds highly unlikely to me.
Disable 4 CUs for yields and improve other aspects of the pipeline, like they did with the X, to offset the loss of compute units.
 
To have the same power difference as PS4 vs. XB1, we'd need something like:

PS5 11TF
Anaconda 15.4TF

I can't see that happening again... ever.
If we're talking pure teraflops, yes, but the number of ROPs matters, as do TMUs, shader cores, memory bandwidth, etc. Most people jump straight to CUs and frequency without accounting for the rest; there's a lot more that goes into it than that.
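The ratio arithmetic behind those 11 and 15.4 figures, for anyone checking, using the launch-spec PS4 and XB1 numbers:

```python
# Launch-gen gap: PS4 at 1.84 TF vs. XB1 at 1.31 TF.
ps4, xb1 = 1.84, 1.31
ratio = ps4 / xb1              # ~1.40x
print(f"{11 * ratio:.2f} TF")  # an 11 TF PS5 would need a ~15.4 TF rival for the same gap
```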
 

Armorian

Banned
If we're talking pure teraflops, yes, but the number of ROPs matters, as do TMUs, shader cores, memory bandwidth, etc. Most people jump straight to CUs and frequency without accounting for the rest; there's a lot more that goes into it than that.

True, but if these consoles (PS5 and Anaconda) are targeting the same price point, then it's more than likely they'll end up within ~10% of each other.
 

SonGoku

Member
Yes, if the rumours are true. I'm no insider; I have no idea if that's what you were thinking :p
I just wanted to hear your reasoning for Snek being more powerful.
If we're talking pure teraflops, yes, but the number of ROPs matters, as do TMUs, shader cores, memory bandwidth, etc. Most people jump straight to CUs and frequency without accounting for the rest; there's a lot more that goes into it than that.
Why do you assume Sony will cripple PS5 in any of those aspects?
 

joe_zazen

Member
Microsoft can just add another $150 onto the PS5 price for Mr. Snek; since Lockhart is their focus, they won't care about lost sales. That is why I think Sony won't be able to match Anaconda's theoretical performance: Microsoft doesn't actually care about Anaconda sales numbers.
 

SonGoku

Member
Clever and amusing. lol.
ikr! I don't like the era bunch but they can be clever/funny sometimes
It's not necessarily that they will cripple themselves; I don't think they have the willingness to invest as much money as Microsoft.
Then they'll aim for a lower TF target instead of aiming higher and creating potential bottlenecks.
There isn't much difference in R&D really; both get the same designs from AMD. The difference will come down to release timing and price bracket.
 

SonGoku

Member
Microsoft can just add another $150 onto the PS5 price for Mr. Snek; since Lockhart is their focus, they won't care about lost sales. That is why I think Sony won't be able to match Anaconda's theoretical performance: Microsoft doesn't actually care about Anaconda sales numbers.
So say PS5 goes for $500: a $650 Snek, then?
Not really seeing that happen, but it would be nice to see what kind of console they'd cook up with that budget.
 

stetiger

Member
Why would they bother with a 4tf machine when the 6tf X1X already exists and is at a decent price point during frequent sales? Couldn't they just put out a revised X1X with some of the features from Anaconda and make it the new entry tier machine? Or are we expecting MS to fully carry 3 to 4 different consoles through the beginning of next gen?
Well, teraflops don't really scale that way. Most likely the extra flops will be used for rasterization and post-processing, whereas geometry will stay close to the same if the CU count difference is not too dramatic. See this video to understand better
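Part of why flops don't scale linearly with "graphics" is that only per-pixel work scales with resolution. The raw pixel-count arithmetic looks like this; it's a sketch that deliberately ignores geometry, CPU, and other fixed costs:

```python
# 4K has exactly 4x the pixels of 1080p, so per-pixel shading cost scales by 4,
# while geometry setup, simulation, etc. stay roughly constant.
scale = (3840 * 2160) / (1920 * 1080)  # 4.0
print(scale)
print(12 / scale)  # a 12 TF 4K target needs only ~3 TF of per-pixel shading at 1080p
```

Which is why a ~4 TF Lockhart at 1080p isn't as far behind a ~12 TF 4K box as the headline numbers suggest, at least for the resolution-bound parts of the frame.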
 

Panajev2001a

GAF's Pleasant Genius
Again, that's not how game development works. I don't know how many different ways it can be reiterated to you: what you see on a console is not what was created. It's a scaled-down build of the original texture resolution, assets, environmental geometry, shadows, reflections, LoD distance and draw, etc.

Despite beliefs to the contrary, you will base your rendering budget and development time on maximising revenue, and thus on the likelihood that your target users purchase the game (i.e., you cater the experience towards them). So, as a PC developer, unless you are a bit crazy and, on an odd business model, target tomorrow's average PCs as your average consumer instead of what exists today (see Star Citizen), you will design your game to scale cheaply across a wide variety of hardware, optimising your choices not around the PC you develop on but around the PCs you want to sell on, minimising the rendering paths implemented and reducing QA time. You do the same in mobile development.

The fact that the models are higher resolution in ZBrush etc. and that your artists' PCs hold higher-resolution textures makes it cheaper to brute-force your way to HD/4K asset packs, but that is not the same thing. Example: back when Polybump / normal maps were created from multi-million-polygon models to add fine detail to low-poly ones, the models you actually produced for the game (which would be normal-mapped, skinned and animated) catered to this "minimum common denominator" hardware we are talking about.

Also, not everything automagically scales down with the wave of a wand without bringing pain to your devs and QA team (aka money). Yes, they could do that and maximise every piece of hardware they run on, since the PC used to simulate that hardware is much faster and better resourced, but they do not, because the return on investment would generally be very poor. The same thing happens on mobile, for hardware specs but also OS features: until an OS is on 60-80% or more of your customers' phones, you will see little incentive to use and test all the shiny features of the new OS unless you have to. Otherwise your codebase becomes a hard-to-maintain mess of specialised code paths, any of which could break only when you switch testing device, making your unit and integration tests more complex and numerous and slower to run, which decreases iteration time, etc., making your product worse, late, lesser-featured, or all three ;).

Or, when we see exclusive next-generation titles making a major jump over Xbox One S and PS4 / PS4 Pro titles (despite the new consoles not being, say, 4-6x faster than a PS4 Pro), will we say it is just because the developers' PCs are faster :)?
 

SonGoku

Member
It would be the RAM capacity as well; a system with a different target resolution doesn't need the same amount of RAM.
Yes... but half the RAM would create problems.
Or, when we see exclusive next-generation titles making a major jump over Xbox One S and PS4 / PS4 Pro titles (despite the new consoles not being, say, 4-6x faster than a PS4 Pro), will we say it is just because the developers' PCs are faster :)?
I'm curious: couldn't third parties just develop around PS5/Snek as the base hardware and downgrade until it runs on Lockhart?
It would certainly make Lockhart look bad, and MS wouldn't be too happy about it, but I think it's the best course of action.
 