HBM2 doesn't seem like the right direction. The bandwidth is really low compared to GDDR6. I also thought NAVI was supposed to be GDDR6.
Sony has always gone with the latest technology at release. They waited on GDDR5. A PS5 will have HBM3, not spare-parts HBM2; that's never been Sony's way of doing things. The PS5 will use either HBM3, which raises bandwidth even further and lowers latency, or GDDR6. One of those two newer technologies, either way.
The HBM2-via-HBCC idea is neat, don't get me wrong, and combined with streaming off an SSD it's pretty good, but those features can still be implemented without going for a more expensive HBM2 solution. People have to understand that newer technologies usually give us better performance for cheaper (unless you're Nvidia, of course). HBM3 will have improved efficiency and layouts over HBM2, and it will be built on 7nm, which is the PS5's node for more or less everything. It may very well use a 2048-bit bus with stacking. I can see stacked 4-8GB HBM3 chips rounding out to 32GB total on a PS5; it would use less power, have bandwidth out the wazoo, and if Sony so desires, they could even add 8GB of DDR4 for the OS and link it to the VRAM pool via HBCC.
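For anyone wondering where these bandwidth claims come from: peak bandwidth for stacked memory is just bus width times per-pin data rate. A quick sketch with published HBM2 figures (1024-bit per stack, 2.0 Gbps per pin); the HBM3-class numbers are hypothetical, since that spec wasn't final at the time.

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps
def peak_bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    """Return peak bandwidth in GB/s."""
    return bus_width_bits / 8 * pin_rate_gbps

# Two HBM2 stacks (1024-bit each) at 2.0 Gbps per pin:
hbm2 = peak_bandwidth_gbs(2 * 1024, 2.0)   # 512.0 GB/s
print(round(hbm2, 1))

# A hypothetical 2048-bit HBM3-class setup at 3.2 Gbps per pin:
hbm3 = peak_bandwidth_gbs(2048, 3.2)       # ~819 GB/s
print(round(hbm3, 1))
```

So a wider bus or a faster pin rate both scale bandwidth linearly, which is why a 2048-bit stacked setup would comfortably outrun a typical GDDR configuration.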
If you go back to my prediction specs, not only did I say 1TB SSD for the PS5, I also said 4TB HDD (mechanical drive), so they can use StoreMI to link the two storage technologies while maintaining top seek and read speeds. The technology is already there, and it will no doubt improve significantly with PCIe 4.0.
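The general idea behind SSD+HDD tiering is simple: frequently read data gets promoted to the fast tier, cold data stays on the big slow drive. Here's a toy sketch of that idea; the class, thresholds, and eviction policy are all made up for illustration and are not AMD's actual StoreMI algorithm.

```python
# Toy model of tiered storage: hot blocks get promoted to the fast tier.
# This is an illustration of the general concept, not StoreMI's real policy.
class TieredStore:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity   # how many blocks the SSD tier holds
        self.fast = set()                    # blocks currently on the SSD
        self.reads = {}                      # read count per block

    def read(self, block):
        self.reads[block] = self.reads.get(block, 0) + 1
        # promote a block once it has been read "often enough" (3 here, arbitrary)
        if block not in self.fast and self.reads[block] >= 3:
            if len(self.fast) >= self.fast_capacity:
                # evict the coldest block from the fast tier to make room
                coldest = min(self.fast, key=lambda b: self.reads[b])
                self.fast.remove(coldest)
            self.fast.add(block)
        return "ssd" if block in self.fast else "hdd"

store = TieredStore(fast_capacity=2)
for _ in range(3):
    store.read("game_textures")          # third read triggers promotion
print(store.read("game_textures"))       # now served from the fast tier: ssd
```

A real implementation tracks access at the block level below the filesystem, but the payoff is the same: the working set of a game migrates to the SSD while bulk storage stays cheap.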
Exactly, that's how it is on PC. Not in consoles. Pretty much every dev complained about separate pools in the PS3 compared to the shared one on XBox 360.
No, the separate pools were not the problem; the problem was devs not having access to enough VRAM. We always seem to forget that the PS3 OS used much more RAM than the 360 OS. The 360 never catered to Blu-ray playback, Move functionality, Remote Play, or sound and media features as complex as the PS3's. Essentially, devs were limited to 256MB on the GPU side, while on the 360 they could offset certain features and use more than 256MB for textures and whatever else they wanted to focus on in the pipeline, because it was one pool with much less reserved for the OS. If the PS3 had 768MB of VRAM like my 8800 GTX Ultra had, it would be an entirely different story; PS3 games would look a whole generation ahead of 360 games with the Cell in tow. Funny saying that, because PS3 exclusives already outshone 360 exclusives by far.
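The split-vs-unified argument boils down to simple budget arithmetic: in a split design, spare room in one pool can't help the other. A sketch using the 256MB+256MB vs 512MB shared figures from the thread; the OS-reserve numbers are hypothetical round values, just to show the tradeoff.

```python
# Split pools (PS3-style): each side must fit in its own pool.
def fits_split(gpu_need, cpu_need, vram=256, sysram=256, os_reserve=64):
    # spare room in the system-RAM pool cannot be lent to the GPU side
    return gpu_need <= vram and cpu_need <= (sysram - os_reserve)

# Unified pool (360-style): one budget, split however the game wants.
def fits_unified(gpu_need, cpu_need, total=512, os_reserve=32):
    return gpu_need + cpu_need <= total - os_reserve

# A texture-heavy workload: 300 MB for the GPU, 150 MB for game logic.
print(fits_split(300, 150))    # False: 300 MB can't fit a 256 MB VRAM pool
print(fits_unified(300, 150))  # True: 450 MB fits a 480 MB shared budget
```

Which is exactly the poster's point: the same total memory goes further when nothing is stranded in the wrong pool, and a heavier OS reserve shrinks the budget either way.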
But the benefit of a console is that it's "NOT" like developing for a PC. Who cares about scaling when you are making an exclusive game for one console? And yes Xbox One X games were held back by the Xbox One S. Any exclusive X1X title would look miles better than they do now if they didn't have to support the "S".
Exactly! I don't think people honestly realize how making games for a console is different than a PC. The mindset is different. If you are a dev and can design the game from the ground up to only work on a console that has 14 TFs of power, it'll be designed differently than if you had to support a 4TF console.
Absolutely, and the crazy thing about this is that you want to start with parity hardware. I don't mind Pro consoles a bit later on if folks want to put their 8K TVs to use, or want some kit that ushers them closer to the new gen. Frankly, I think the PS4 was so well designed that it holds up even now. God of War, Days Gone, and Spider-Man all look awesome on PS4 hardware, and many multiplats are still full-fat 1080p even as the gen winds down. So it was well designed; that 1.84TF showed its mettle throughout this gen. And yes, the God of Wars, Horizon Zero Dawns, Spider-Mans, Detroits, etc. were built on 1.84TF hardware even with the Pro available; that's why they look so good and perform so well on the vanilla console. So I agree that the spec must be as high as it can be for the PS5 next gen, so devs can push gaming graphics, framerate, AI, and physics forward.
Having separate spec/power SKUs at launch is a first for any console launch I've seen. 12TF vs 4TF, that's insane. You will spend resources catering to the 4TF box and optimizing for it, when your first run should target just one SKU (12TF) so you could go balls to the wall squeezing every last bit of perf from it and making a showcase. That won't be the case with two XB2 SKUs at launch, including a discless system. MS will not want games to look too appalling on the weaker SKU, and owners of the 12TF Xbox will start balking about MS not taking advantage of their machines when ND and Santa Monica start melting eyeballs on their 14+ TF machine.
Maybe, but the Xbox One X and the Xbox One S are fairly different. XOS being 1.4TF with significantly slower RAM and the XOX at 6TF with significantly faster RAM. That's a very big difference, but that's also the difference it needs to allow current generation games to hit 4K. The difference for next-gen will be graphics at 1080p or 4k and give or take a few effects and toggles. The games will most likely look prettier on the beefed up system kind of like how games scale on a PC.
These consoles are becoming more PC-like in architecture, but they are still focused on games, whereas the PC is a multi-tool. But to think they can't scale like they do now is a little naive, especially because all first-party games will end up on PC anyway and third-party games will usually have PC ports.
Well, if this is just an Xbox argument, I'd agree. Hence why Sony's strategy makes more sense. Their games aren't on PC, and they have the best devs in this industry. They can go all out producing the best graphics and perf on one common power spec, where there will be no PC version or lower-quality version; all you will see is the best representation of LOU3 or Savage Starlight or GT7 or GOW2 or Spider-Man 2. When a Pro comes, it will be to touch up the graphics, give a bit better perf, render in 8K, but the base game would already have been mighty impressive from a base 14TF, with no compromises and no focusing on other SKUs. People have to understand that time and resources are critical to the final product we see, how good the graphics are, etc. Even more so if the team is very talented; that layers on extra shine compared to the competition.
AMD's Vega GPU architecture brings many notable features to the table, but the one to find its way into Radeon chief Raja Koduri's heart is HBCC - or "high-bandwidth cache controller". In this article, we're going to take a look at what HBCC is, why it offers no benefit right this moment, and...
techgage.com
Funny that every time people want to talk about a really impressive piece of tech, they quote AMD, but when they buy or recommend GPU hardware, they shit on AMD and quote Nvidia. AMD has Radeon Chill, HBCC, StoreMI, Primitive Shaders, RPM, and DSBR, a whole slew of great features. They also banked on NCUs, and that's proving to be revolutionary with the latest APIs like DX12 and Vulkan. But people would rather AMD be behind the 8-ball in terms of features so NV could keep offering them GTX 1050s and 1650s at higher prices than an RX 570, which deposits all over those cards at $130 and also comes with two games.
You'd be a fool to believe that NV would ever offer consoles anything close to AMD performance this gen or the next. Had it been NV, current consoles would probably be packing GT 740 performance at a much higher asking price. If some of you guys won't learn from NV's past behavior, you never will. Had it not been for AMD, we would be rocking 4-core CPUs and maybe 1660 Ti GPU performance next gen at exorbitant prices. Sony and MS would have to sacrifice their firstborns if they wanted Jensen Huang's fresh-off-the-presses "RTX" that "just works". Don't kid yourself. Thank AMD you will be getting 8-core, 16-thread CPUs in a console next gen. Thank AMD you will be getting a high-end GPU with raytracing support in a console next gen. You would get nothing close with Nvidia.
If you doubt this, it's at times like these that I wish MS had gone fully Intel+NV, so you could give your story on how superior MS hardware would be. To match PS, MS would have to price their console at $800 or above.
But I thought we needed 4x the processing power to achieve native 4K for all?
Otherwise it seems we will still have 1800p on our hands with select 4K.
You would still get way more out of a 12TF baseline than a 4TF baseline, even at 1080p. I want more effects over just a rez bump. And throw RT into the mix, and that 4TF box looks even more anemic, if it can manage RT at all.
Imagine 1440p with some CB rendering pushing 12TF to the wall, it won't happen with a 4TF handicap all gen if resolution bump is their only focus.
People think it's just 4x the TF power; it's not. What about bandwidth, architecture, RAM type, and RAM setup? Many things can affect a game at render time and whether it's able to maintain its rez without buckling under subpar framerates.
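For reference, the "4x the processing power" rule of thumb quoted earlier comes straight from pixel counts; the quick check below shows where the factor comes from, and why it's only a rough first-order guide (it says nothing about bandwidth or architecture).

```python
# Where the "4x for native 4K" rule of thumb comes from: pixel counts.
def pixels(w, h):
    return w * h

p1080 = pixels(1920, 1080)      # 2,073,600 pixels
p4k   = pixels(3840, 2160)      # 8,294,400 pixels
print(p4k / p1080)              # exactly 4.0

# 1440p sits in between, with roughly 1.78x the pixels of 1080p
p1440 = pixels(2560, 1440)
print(round(p1440 / p1080, 2))  # 1.78
```

So 4K really is 4x the pixels of 1080p, but per-pixel cost also rises with fancier shading, which is why TF alone doesn't settle the argument.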
Graphics next gen will be more complex, textures and lighting more complicated, and GPU hardware will even push raytracing, which will tie into sound and AI. It will be a formidable leap on a holistic level, where all aspects of the pipeline are tied to each other. So many people see next gen as just COD: Advanced Warfare in 4K; it's not. It's much more than that. Alpha rez and complexity will be higher, game worlds will be larger with much more to texture, and LODs will improve. We can't just believe that all that's needed is some baseline multiplier from this gen to the next. Cerny said he is working on a true next-gen leap from the PS4 Pro, which he deems very much a current-gen machine.
It has to be or MS is doomed. I love Sony but it makes logical sense that next gen will belong to Microsoft. It's just how these cycles work.
Hey, if you wish MS to win, maybe you are a fan; that's fine. Yet let's put some receipts on all of this. When was the last time MS won a gen? If we're going by history: never. So there's no precedent there. To be completely clear, the only reason the Wii won last gen was because the PS3 was a $600 console.
The other thread was getting too big for its own good, and as has been requested often enough, an easy-to-find compilation of discovered resolutions is needed. I or other mods may update this thread as time goes on. Here is how resolutions can be determined...
forum.beyond3d.com
Full Auto and Ridge Racer don't have 360 counterparts to compare
All NBA versions shared between PS360 are at resolution parity
I'll give you the benefit of doubt here and assume you had a memory lapse or meant 20%
Fair enough.
Raw power metrics were similar, but Xenos had the huge unified shader architecture advantage
Full Auto and RR6 were on Xbox 360; granted, they may have released earlier than the PS3 versions, but that's because the 360 released a year prior to the PS3, and they were essentially the same games. Granted, the devs made some improvements for the PS3 versions, but if the GPU were vastly better on 360, the PS3 versions would not have been able to drive a pixel count of over 2x.
I do agree that the 360's Xenos was better designed, and perhaps if Sony had gone to NV on day 1 for the GPU portion they would have developed something much better, say a 768MB solution. But it is what it is, so I agree that Xenos was better, though not as far off from RSX as you would think. I think the lack of memory hurt PS3 ports much more than the GPU's power, tbh.
Sony definitely paid too much for what they got, but it was not so left-field that devs could not make it work, especially if they made a small investment in Cell usage. Some third parties did just fine on PS3, like the Prototype games, the FF games, etc., and first parties consistently pulled off better visuals. If RSX were much worse, that would not have been possible. Remember, FF13 was higher rez on PS3 too, with the same 2xMSAA.
How do you figure? That's only a 1080 Ti equivalent, the Xbox One X is already approximately 50% of that in terms of GPU capability.
How do you go about spreading this FUD? If we're gaming at 4K, do you really believe an XBONEX, with lower clocks than an RX 580, with no dedicated VRAM path but instead the same bus shared across CPU/GPU/memory, is 50% of a 2080/1080 Ti? That's what I refer to as getting it all wrong. There is nothing I can give even a 1/10 in this answer. You'd better do this math again.
The X GPU is not half a 1080Ti, nvidia has more performance per flop than AMD
I do agree, however, that 400GB/s is way too low for next gen; 600GB/s minimum, considering the X already has 326.4GB/s
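The 326.4GB/s figure falls straight out of bus width times pin rate (the Xbox One X uses a 384-bit GDDR5 bus at 6.8 Gbps per pin). A quick sketch of that math, plus one plausible way a next-gen box could clear a 600GB/s floor; the GDDR6 speed chosen is just one common bin, not a leaked spec.

```python
# Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps
def bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    return bus_width_bits / 8 * pin_rate_gbps

# Xbox One X: 384-bit GDDR5 at 6.8 Gbps per pin
print(round(bandwidth_gbs(384, 6.8), 1))   # 326.4 GB/s

# One way past a 600 GB/s floor: keep the 384-bit bus, move to 14 Gbps GDDR6
print(round(bandwidth_gbs(384, 14.0), 1))  # 672.0 GB/s
```

In other words, faster memory alone on the same bus width gets well past the 600GB/s minimum being argued for here.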
Nvidia does not have more performance than AMD flop for flop; Nvidia uses compression technology that trades IQ to lift perf. A flop is a flop, it's math. The difference lies in architecture and what NV sacrifices to achieve similar performance to higher-TFLOP AMD parts. All of that is thrown out the window when using low-level APIs like DX12 and Vulkan; AMD pulls ahead because its TFLOP count is much higher. Under DX11, with compression tech and AMD's heavier CPU overhead, NV can shine, but those days are going the way of the dodo. Pull up any undervolted Vega, not even OC'd, in RE2, Forza, Strange Brigade, World War Z, DMC, Division 2, RE7, Battlefield, NMS on Vulkan, even some DX11 titles like Kingdom Come: Deliverance, and Vega's perf pulls ahead of equivalent NV GPUs.
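For anyone following the "a flop is a flop" back-and-forth: peak FP32 TFLOPs is just shader count times two operations per clock (a fused multiply-add) times clock speed. The sketch below uses public specs for two cards that traded blows in DX11 despite very different paper numbers, which is really the whole debate in miniature.

```python
# Peak FP32 TFLOPs = shaders * 2 ops/clock (fused multiply-add) * clock in GHz / 1000
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

rx580   = tflops(2304, 1.34)   # ~6.2 TFLOPs
gtx1060 = tflops(1280, 1.70)   # ~4.4 TFLOPs
print(round(rx580, 1), round(gtx1060, 1))
```

The math is identical for both vendors, so the raw number really is architecture-neutral; the disagreement in the thread is over how much of that peak each architecture delivers under a given API.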