Next-Gen PS5 & XSX |OT| Console tEch threaD


01011001

Banned
Can someone explain the XSX's hardware component that's dedicated to ray tracing? Does it really completely take the load off the GPU and make the XSX's compute performance for ray tracing equivalent to 25 teraflops? Also, is it true that the PS5 doesn't have an equivalent to this?

Considering how poorly Nvidia's very expensive RTX cards perform while ray tracing, I'm curious how Microsoft was able to implement a superior hardware solution in a game console that's going to cost much less than those cards.

uhm, no.
the ray tracing hardware is embedded into the Compute Units, and it works the same in both systems. basically every compute unit (CU) has one of these "Intersection Engines" built into it.
that 25TF number comes from the fact that you would need roughly 13TF of GPU compute to calculate as many rays as the dedicated hardware inside the Series X can calculate on its own. but it doesn't off-load all of the work; the GPU still has to shade, render etc.
so if the Series X didn't have any ray tracing acceleration, it would need a ~25TF GPU to run Minecraft Ray Tracing, for example, at the same quality it gets with its 12TF GPU + RT acceleration hardware.
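As a back-of-envelope check (a sketch only: the 12.15TF shader figure and the ~13TF ray-calculation equivalence are the numbers quoted above, not measurements):

```python
# Back-of-envelope for the "25TF" figure, using the numbers quoted in the post:
# ~13TF of shader compute would be needed to match the dedicated RT hardware.
shader_tf = 12.15        # Series X FP32 compute: 52 CUs * 64 lanes * 2 ops * 1.825 GHz
rt_equivalent_tf = 13.0  # claimed shader-equivalent cost of the intersection tests

print(f"effective total: {shader_tf + rt_equivalent_tf:.2f} TF")  # ~25 TF
```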
 

BluRayHiDef

Banned
uhm, no.
the ray tracing hardware is embedded into the Compute Units, and it works the same in both systems. basically every compute unit (CU) has one of these "Intersection Engines" built into it.
that 25TF number comes from the fact that you would need roughly 13TF of GPU compute to calculate as many rays as the dedicated hardware inside the Series X can calculate on its own. but it doesn't off-load all of the work; the GPU still has to shade, render etc.
so if the Series X didn't have any ray tracing acceleration, it would need a ~25TF GPU to run Minecraft Ray Tracing, for example, at the same quality it gets with its 12TF GPU + RT acceleration hardware.
Do you have a source? I've seen numerous posts asserting that this tech is exclusive to the XSX.
 

01011001

Banned
Do you have a source? I've seen numerous posts asserting that this tech is exclusive to the XSX.

that's not the case. what you've most likely seen are posts about DXR 1.1, which Microsoft developed together with AMD to fully support their way of doing ray tracing within DirectX, and DirectX is a Microsoft exclusive of course.

all AMD RDNA2 GPUs work like this.
they have what they call an "intersection engine" in each compute unit. this part of the hardware is fully dedicated to calculating rays. both consoles have these; the Series X has a lot more of them, but they work the same in both.
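For a rough sense of scale, assuming RDNA2's reported rate of 4 ray-box intersection tests per CU per clock (Microsoft has quoted ~380 billion intersections per second for the Series X, which lines up):

```python
# Peak ray-box intersection throughput, assuming RDNA2's reported rate of
# 4 ray-box tests per CU per clock.
def ray_box_tests_per_sec(cus: int, clock_ghz: float, tests_per_cu: int = 4) -> float:
    return cus * tests_per_cu * clock_ghz * 1e9

print(f"Series X: {ray_box_tests_per_sec(52, 1.825) / 1e9:.0f}G tests/s")  # ~380
print(f"PS5:      {ray_box_tests_per_sec(36, 2.23) / 1e9:.0f}G tests/s")   # ~321
```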
 

SonGoku

Member
Nice... only it's a 1 TB SSD 😉
I can believe a special edition would have more storage.
How would a fast SSD limit open world design instead of enhancing it?
Well, to be fair, he didn't say it would limit it either; he said it wouldn't have a big impact on open world visuals (asset quality and diversity), which is still false, but you get the point.
Personally i think devs will rely on offline procedurally generated assets, built on powerful servers, to keep up with the massive increase in world size, scale and complexity the SSD enables (together with the CPU/GPU/RAM of course).
 

BluRayHiDef

Banned
that's not the case. what you've most likely seen are posts about DXR 1.1, which Microsoft developed together with AMD to fully support their way of doing ray tracing within DirectX, and DirectX is a Microsoft exclusive of course.

all AMD RDNA2 GPUs work like this.
they have what they call an "intersection engine" in each compute unit. this part of the hardware is fully dedicated to calculating rays. both consoles have these; the Series X has a lot more of them, but they work the same in both.
Understood. However, considering how taxing ray tracing is, I doubt that either console will be powerful enough to implement it on visually complex games (i.e. not games like Minecraft) without massive drops in frame rate.
 

01011001

Banned
Understood. However, considering how taxing ray tracing is, I doubt that either console will be powerful enough to implement it on visually complex games (i.e. not games like Minecraft) without massive drops in frame rate.

what you saw with the Minecraft ray tracing demo Microsoft showed was massively impressive, at least compared to current PC cards.
that Minecraft demo was rendered 100% through path tracing, meaning every pixel you see has been generated by shooting rays.
it apparently ran at 1080p at a framerate of around 45fps, but not steady... so it went up and down from there, it seems.

now if we compare this to a similar case on PC, we have Quake 2 RTX, another fully path-traced game. this game runs at 1440p, also at around 45fps give or take... on a GeForce RTX 2080 Ti.

the thing is, fully path-traced games are not what developers will actually make in 99% of cases. usually what you see in every major game, like Battlefield V, Control, Wolfenstein etc., is a mix of your normal, everyday rasterized graphics with ray-traced effects layered on top, often supported by screen-space effects to save performance.

so Quake 2 RTX and Minecraft RT are the two most demanding games you can currently throw at a ray tracing GPU.
another great comparison is Quake 2 RTX vs. Control on a non-RTX card:
- the GTX 1070 can run Control at 1080p/30fps with medium ray tracing settings, yet it can't even run Quake 2 RTX at a steady 720p/30fps
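A crude way to compare those two path-traced data points is raw pixels per second; rays per pixel and scene complexity differ, so treat this only as a ballpark:

```python
# Pixels shaded per second for the two fully path-traced data points above.
minecraft_rt = 1920 * 1080 * 45  # Series X demo: ~1080p at ~45fps
quake2_rtx = 2560 * 1440 * 45    # RTX 2080 Ti: ~1440p at ~45fps

print(f"Minecraft RT: {minecraft_rt / 1e6:.0f} Mpx/s")
print(f"Quake 2 RTX:  {quake2_rtx / 1e6:.0f} Mpx/s ({quake2_rtx / minecraft_rt:.2f}x)")
```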
 
Yeah, I enjoyed that exchange. The other place can have some nice technical input; I like reading Lady Gaia's perspective.

yeah lady gaia has some of the best technical posts there

I like gofreak and that guy with the rat avatar (ananananou or something lol I can’t remember...is he banned?)
 

ZehDon

Gold Member
Fantasies aside, let's listen to Mark Cerny again ;)

Timestamped: the 22GB/s figure
Road to PS5, 17:50 mark.

"but the unit itself is capable of outputting as much as 22GB/s if the data had compressed particularly well"
Sorry folks, I can see that my post was pretty poorly worded. My post was in response to the claim that the SSD itself was capable of 22GB/s. As per Cerny's presentation, the SSD can move approximately 9GB/s of compressed data, using Kraken for its compression, with a base speed of 5.5GB/s. The Kraken unit, separate from the SSD while still part of the I/O chain, is capable of 22GB/s if the data is compressed particularly well. This is quite separate from the SSD, and the reason I highlighted it specifically was because we were discussing the compression systems at play. To be a little clearer: if Sony's custom SSD itself were capable of 22GB/s, prior to Kraken's involvement, Microsoft's compression system could never hope to drag its slower SSD to a comparable figure, regardless of how good it was. I should've been clearer that I was only referring to the physical custom SSD's speeds, rather than compression output. I'll do better in the future to be clear about what I'm talking about, as specifics matter in these topics. Thanks for pointing this out.
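Put in code, a minimal sketch of how those figures relate (5.5GB/s raw, ~9GB/s typical, 22GB/s decompressor ceiling; the compression ratios are illustrative assumptions):

```python
# PS5 effective read bandwidth from Cerny's figures: 5.5 GB/s raw, ~9 GB/s
# typical with Kraken, and a 22 GB/s ceiling on the decompressor's output.
RAW_GBPS = 5.5          # physical SSD speed
DECOMP_CAP_GBPS = 22.0  # maximum output of the Kraken unit

def effective_bandwidth(compression_ratio: float) -> float:
    """Raw speed scaled by how well the data compressed, clamped at the cap."""
    return min(RAW_GBPS * compression_ratio, DECOMP_CAP_GBPS)

print(effective_bandwidth(1.64))  # ~9.0 GB/s, the "typical" Kraken case
print(effective_bandwidth(4.0))   # 22.0 GB/s, data that compressed particularly well
```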

...I think I'm having déjà vu. "We invented DirectX" and other bold statements of the previous generation change.

As far as Microsoft's "leadership" in multi-platform development goes: for example, they totally failed at migrating their OS to ARM, and I think it's even been discontinued. I don't claim they make no progress, but if you have at least a vague idea of how large corporations operate, you know that being a market leader and cutting-edge innovative is a very rare combination. There's no point in investing extensively in R&D if you dominate the market and your shareholders are happy with the cash you make.
Microsoft successfully migrated their Windows OS to ARM, and they continue supporting it to this day. So... not sure what you're talking about? As for the market leader: aren't Microsoft famous for not winning a single console generation, and for lagging behind in the gaming space in terms of raw numbers? Hardly the market leader then, and it explains why they're looking to innovate in several key ways. For example, the competition from Vulkan, and the continued support of OpenGL, have pushed Microsoft's DirectX platform enormously toward better performance and lower-level hardware access. DX12 has a long way to go *cough*Division2hasterribleDX12performance*cough* but it's still an incredible achievement, and part of Microsoft's larger move to embrace multi-platform development. In fact, in terms of pure platform agnosticism, I'd say Microsoft are just about the global leaders. And let's be clear - it's not from the goodness of their hearts. They're a multi-billion dollar global corporation. These moves are because more platforms = more users = more revenue. Still, it doesn't detract from their efforts.

Ok, then explain to everyone: if MS has this magical feature, why did they choose to use unoptimized content to show their tech?
I already did, in response to BluRayHiDef's excellent post, and highlighted that I don't have a real answer, only speculation. I can only guess that they're not ready to break their NDAs yet, or believe that unoptimized code was enough to demonstrate their goals. Your guess is as good as mine in this respect.
 
Understood. However, considering how taxing ray tracing is, I doubt that either console will be powerful enough to implement it on visually complex games (i.e. not games like Minecraft) without massive drops in frame rate.

Part of the reason there's been such a big performance impact is that games were developed fully with rasterized shadows and reflections, with ray-traced versions of those effects tacked on after the fact to specific materials/surfaces or instances (e.g. Metro Exodus using RTGI in outdoor spaces, but not indoors).

I agree we probably won't see fully path-traced games with the visuals most are expecting from "next gen", but developers will find ways to balance RT and rasterization techniques with regard to performance and time.

 

SlimySnake

Flashless at the Golden Globes
yeah lady gaia has some of the best technical posts there

I like gofreak and that guy with the rat avatar (ananananou or something lol I can’t remember...is he banned?)
lady gaia got upset at me because of my chicken little tactics or perhaps because i thought she was a dude. lmao.

she's alright, but after seeing so many devs mislead insiders and journalists like klee, osiris, jason, colin and reiner, im no longer as trusting as i used to be. i want to see this magical ssd in action. im cautiously hopeful.
 

SlimySnake

Flashless at the Golden Globes
Technically the XSX isn't "Fastest. Most Powerful."

But "Widest. Most Powerful." hasn't got the same ring to it..

=P
love how blatantly ms has been lying since e3 2019 and no one seems to have an issue with it. but when cerny said his ssd is faster than anything on pc, people freaked out and called him a liar. when he said rt is hardware based, they went "but is it really?" (dictator). and now, after he was right about both of those things, they have switched to questioning him on variable clocks and even ssd speeds (brad sams).

maybe they are talking about the fastest cpu clocks? because they definitely don't have the fastest gpu clocks or ssd speeds.
 
Is it wrong for me to assume that, with greater asset streaming thanks to the SSD, Sony might aim for fidelity over peak output? Meaning sub-4K and lower frame rates for higher LOD?
 
lady gaia got upset at me because of my chicken little tactics or perhaps because i thought she was a dude. lmao.

she's alright, but after seeing so many devs mislead insiders and journalists like klee, osiris, jason, colin and reiner, im no longer as trusting as i used to be. i want to see this magical ssd in action. im cautiously hopeful.

People get so hung up over pure TF numbers, and I think this has something to do with the past generation's need for ever-increasing resolutions. This gen will be quite different in that respect, since the mid-gen refreshes already got us to checkerboard 4K and we really aren't going to go beyond native 4K for any reason (and hopefully devs opt to go lower to push graphics further).

We've never really had a situation where the consoles had a big difference in storage throughput, so this is the first time we are coming to terms with what that could mean. Ever since we left the cartridge era, loading has just been something developers put up with in order to get access to a LOT of data (even if it's slow to access); streaming systems became much more modern and important, yet the slow throughput remained a big bottleneck.

What you see on the screen is not single-handedly determined by TFLOPS. They're one factor, but people don't see the sheer amount of detail these games lack due to throughput limitations. Back in 2005, RAM was a big deal. Shaders were becoming mainstream on console GPUs. Epic warned Microsoft that they would be at a severe disadvantage if they did not increase their RAM to 512 MB. So they doubled it, and Sony followed suit. I feel like this generation will have a similar vibe, but where Sony doubled their throughput and got a bit exotic while Microsoft went with a more conventional approach that, in retrospect, will probably look like a missed opportunity and not very forward-looking.

The I/O speeds are a big deal in terms of the detail you see on screen. The GPUs will process what they can process, and if that means a slight dip in resolution to account for a gap, then that's what will happen. But if the GPU cannot get texture detail fast enough, devs are going to reuse the same textures.

Super-fast SSDs enable LOD streaming the likes of which we've never seen. Detail abundant absolutely everywhere, because the PS5 can refill most of VRAM in about a second. Objects getting closer to the screen? No problem, we can wipe out the data we don't need that second (because it can be added back a second later if need be), and you will see ridiculously detailed faces, pores, skin textures, etc. The closer you zoom in, the more EFFICIENT the compression becomes as well, because a lot of the detail consists of similar image characteristics... so you're looking at faces with 22GB/s of detail behind them. It will look ABSURD. Why? Because the system can do it. It is not a problem. Imagine all the texture complexity of an entire open-world game being applied to a few characters, then to one character, then to only that character's face. That's what the ex-ND technical director was trying to convey.
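As a rough sanity check on that streaming claim, assuming 16 GB of shared console memory and the PS5 I/O figures from Cerny's talk (not all 16 GB is game-visible, so real refills would be faster):

```python
# Time to refill memory from the SSD, assuming 16 GB of shared console RAM
# (not all of it is game-visible, so real refills would be faster).
RAM_GB = 16
for label, gbps in [("raw 5.5 GB/s", 5.5),
                    ("typical Kraken ~9 GB/s", 9.0),
                    ("Kraken peak 22 GB/s", 22.0)]:
    print(f"{label}: {RAM_GB / gbps:.1f} s for 16 GB")
```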

I definitely think each console will have a certain "look" when it comes to third-party games, with PS5 versions potentially much more richly diverse and detailed if devs take advantage of it. XSX will have an effectively higher resolution, but the difference will be so small that nobody is really going to notice. So you'll have higher resolution... that's blurrier, because the texture streaming solution is significantly weaker... which will look better, do you think? First party will go absolutely bonkers on this setup and blow away the competition more so than they EVER have, IMHO... especially when they can focus 100% on PS5 and won't be held back by Xbox One or Lockhart.

This is why Jason Schreier was saying that the PS5 is one of the most exciting advances in console history, according to some of his developer friends. Computational power isn't as much of a limitation as the sheer amount of data you can feed it. I suspect the big limitation this gen will be file sizes, because it will be tempting for a lot of devs to really go nuts with what they can do on the PS5.
 

B_Boss

Member
The developer stated why they are not supporting PlayStation:

Where's the support for PlayStation?
PS4/5 development kits are far too expensive for me to obtain (they are in excess of $4000, compared to Switch's $400 and Xbox's $0 dev mode). Unless someone wants to donate a couple of PS4/5 devkits to me, I have no plans to support the platform. I apologize for any inconvenience this may cause. Talk to Sony about lowering the price of their devkits to something reasonable if you are unhappy. You are of course free to make your own PS4/5 fork of Void2D, as long as it remains open source. If I do manage to get my hands on some PS4/5 devkits, I may even use your code in the main branch, and of course you will be fully credited.


So the developer has a grudge over Sony's devkit prices and decides to downplay PlayStation because he cannot afford one? Ok, let's just throw professionalism out the window and be bitter.

I remember an old Polygon article detailing development for the PS4; I wonder if it holds true for the PS5 in some regards:

 
People get so hung up over pure TF numbers, and I think this has something to do with the past generation's need for ever-increasing resolutions. [...]

Outstanding post.
 
People get so hung up over pure TF numbers, and I think this has something to do with the past generation's need for ever-increasing resolutions. [...]

This, ladies and gentlemen, is the comment I was waiting for. 🔥
 

rnlval

Member


RX 5700 OC (~2150MHz, 2304 shading units, 256-bit memory bus, 448GB/s memory bandwidth, ~9.9TF FP32)
RX 5700 XT stock (~1750MHz, 2560 shading units, 256-bit memory bus, 448GB/s memory bandwidth, ~8.9TF FP32)
+256 shading units vs. higher frequency

Try it with RX 5600 XT 36CU OC models since your example didn't factor in desktop-class CPU's memory bandwidth consumption.
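For reference, the FP32 figures in that quote follow from the standard formula, shading units × 2 ops per clock (FMA) × clock speed; a quick check:

```python
# FP32 TFLOPS = shading units * 2 ops per clock (FMA) * clock speed.
def fp32_tflops(shading_units: int, clock_mhz: float) -> float:
    return shading_units * 2 * clock_mhz * 1e6 / 1e12

print(f"RX 5700 OC (2304 @ 2150 MHz):  {fp32_tflops(2304, 2150):.2f} TF")  # ~9.91
print(f"RX 5700 XT (2560 @ 1750 MHz): {fp32_tflops(2560, 1750):.2f} TF")   # ~8.96
```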
 

SlimySnake

Flashless at the Golden Globes
People get so hung up over pure TF numbers, and I think this has something to do with the past generation's need for ever-increasing resolutions. [...]
i want to believe.

i have also heard that the ps5 will have better detail, better LOD and better character models, so it's going to be extremely interesting to see what happens next gen. did you see the coreteks video? apparently the i/o stuff cerny has done might be even more important than the ssd. he seems to have taken a hint from nvidia's turing cards. it seems rdna 2.0 is also doubling down on i/o efficiencies, but cerny has gone all out.

are you aegon?
guilty as charged. i have a feeling im gonna be perma'd during my 3 week long ban. despite being a hardcore sony fan, i always manage to get myself banned for making fun of or criticizing sony.
 

mitchman

Gold Member
AMD Radeon ‘Big Navi’ RX Gamma Flagship GPU Specs And Benchmarks Leaked – An Absolute Beast

The interesting part related to the PS5 high clock speed is this quote: "What is (pleasantly) surprising is the fact that the card sustains a clock speed of 2.633 GHz - which is absolutely insane. "

So if this leak is accurate, it confirms that RDNA2 can run at much higher clocks than previous generations thanks to its vastly improved performance per watt. I suspect that card will be priced at eye-watering levels, but man do I want one.
 
Are you sure Sony should be more afraid of Xbox than Nintendo?

Because, as I understand it, with Game Pass and the Xbox One X, Xbox recovered the market... or maybe not...




My point is that I don't think any company puts specs forward as the main reason to buy a console, because that doesn't work in real life; services alone also look like they're not enough (what a surprise).

So if any company (Xbox or PlayStation) wants our money, the best way to get it is with good games; if they fail at that, it doesn't matter how many GPU flops you have or how much bandwidth your SSD has, your sales will suffer.
 

SonGoku

Member
guilty as charged.
Big fan of your posts on the previous/current Ree next-gen threads lol (just a few days ago I was thinking how similar your posting style was). sorry i got annoyed with you at first in this thread; i wasn't familiar with your extreme skepticism, or chicken little tactics as you put it :messenger_grinning_sweat:, and thought you were either trolling or biased.
i have a feeling im gonna be perma'd during my 3 week long ban. despite being a hardcore sony fan, i always manage to get myself banned for making fun of or criticizing sony.
Yeah... it seems as if they were just looking for a reason to punish you for being one of the contributors to yesterday's debacle. Ree staff can't handle losing control, and they make sure to scold outliers/"troublemakers" to keep the herd in line.

I mean, they ban you for this:
[screenshot]

while overlooking this:
[screenshot]
 
Understood. However, considering how taxing ray tracing is, I doubt that either console will be powerful enough to implement it on visually complex games (i.e. not games like Minecraft) without massive drops in frame rate.
The PS5 can already do it in a 4K/60 game (a kind of Dead Space-like game):

 

SonGoku

Member
“In order to avoid confusion, AMD is dropping the ‘+’ from its roadmaps. In speaking with AMD, the company confirmed that its next generations of 7nm products are likely to use process enhancements and the best high-performance libraries for the target market; however, it is not explicitly stating whether this would be N7P or N7+, just that it will be ‘better’ than the base N7 used in its first 7nm line.”

AMD 7nm can be N7P or N7+.
It would be interesting if the consoles used 7nm EUV; those high clocks wouldn't take as much power.
btw, did you see the coreteks video? he theorizes the PS5/XSX use 7nm+ (EUV) based on SRAM used in the I/O block, which is listed as a 7nm+ part on the Synopsys site.
Or maybe features, or both?
What do you mean, both?
 

ethomaz

Banned
It would be interesting if the consoles used 7nm EUV; those high clocks wouldn't take as much power.
btw, did you see the coreteks video? he theorizes the PS5/XSX use 7nm+ (EUV) based on SRAM used in the I/O block, which is listed as a 7nm+ part on the Synopsys site.

What do you mean, both?
Yeap, I guess the 50% perf/watt increase is related to 7nm+ EUV.

I believe he is saying that you can have different processes in the same chip... like AMD does with the I/O controller of Ryzen, which is 14nm while the rest of the CPU is 7nm.
 

SlimySnake

Flashless at the Golden Globes
Big fan of your posts on the previous/current Ree next-gen threads lol (just a few days ago I was thinking how similar your posting style was). [...]
yep, i noticed that too. i actually survived six hours after making that post, and only got banned after alex threw a fit and basically forced the mods to ban everyone.

as for trolling, i like to joke around and make fun of both sony and MS, and especially nintendo. they say there is a hint of truth in every joke, and that's why i get banned so often, i guess. my fault is assuming everyone knows im a sony fan, and knows that if im being skeptical of cerny's variable clocks design, im not doing it because im an xbox MVP in disguise. i just genuinely disagree with those choices.

even my extreme skepticism was me being genuinely disappointed by cerny's presentation. the coreteks video made me understand some of his choices, and ive begun to come around. like james pointed out, this could be a very special console. sony just needs to show it already.
 