
Next-Gen PS5 & XSX |OT| Console tEch threaD


onesvenus

Member
No, the article says the RDNA 2 ISA document doesn’t detail what type of BVH tree is used. It also says that, from some of the info in the ISA document, it can be inferred that the RT instructions for RDNA 2 expect a BVH4 tree, which is a BVH tree with 4 child nodes per node.

The ISA instructions point to the BVH nodes, which are stored in memory, implying the BVH has already been created. How, you ask? By the developer, using a compute shader. Which is self-evident: AMD’s RDNA 2 RT hardware solution does not accelerate BVH tree creation or traversal, so why would you expect instructions related to those to be part of the ISA? You wouldn’t.

BVH tree creation is done using a compute shader. Ray/box and ray/triangle intersection tests are done using the dedicated hardware (hence their description in the ISA doc). BVH tree traversal is then also achieved in software, using a compute shader.
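To picture that division of labour, here’s a minimal C++ sketch (the node layout and function names are made up for illustration; the real formats live in AMD’s ISA doc). The stack, the loop and the visit order are all the developer’s software; only the box/triangle tests inside the loop correspond to the hardware’s intersection instructions:

```cpp
#include <algorithm>
#include <cstdint>

struct Ray {
    float org[3];      // origin
    float inv_dir[3];  // 1/direction per axis (degenerate axes ignored here)
    float tmax;        // maximum hit distance
};

// Hypothetical binary BVH node, built beforehand by a compute pass.
struct BvhNode {
    float bmin[3], bmax[3];        // bounding box
    int32_t left = -1, right = -1; // child indices; both -1 => leaf
    int32_t tri = -1;              // triangle index for leaves
};

// Slab test: the kind of work the dedicated intersection hardware does.
bool intersect_box(const BvhNode& n, const Ray& r) {
    float t0 = 0.0f, t1 = r.tmax;
    for (int a = 0; a < 3; ++a) {
        float lo = (n.bmin[a] - r.org[a]) * r.inv_dir[a];
        float hi = (n.bmax[a] - r.org[a]) * r.inv_dir[a];
        if (lo > hi) std::swap(lo, hi);
        t0 = std::max(t0, lo);
        t1 = std::min(t1, hi);
        if (t0 > t1) return false;
    }
    return true;
}

// Stub standing in for the hardware ray/triangle test.
bool intersect_tri(int /*tri*/, const Ray& /*r*/) { return false; }

// The traversal itself stays in software (a compute shader on RDNA 2).
bool trace(const BvhNode* nodes, int32_t root, const Ray& r) {
    int32_t stack[64];
    int sp = 0;
    stack[sp++] = root;
    while (sp > 0) {
        const BvhNode& n = nodes[stack[--sp]];
        if (!intersect_box(n, r)) continue;            // hardware-accelerated
        if (n.left < 0) {                              // leaf
            if (intersect_tri(n.tri, r)) return true;  // hardware-accelerated
        } else {
            stack[sp++] = n.left;   // software decides the visit order
            stack[sp++] = n.right;
        }
    }
    return false;
}
```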

That final line is referencing HLSL, i.e. DirectX’s High-Level Shading Language, used for writing custom shaders in Direct3D. The writer is remarking on his desire to see these ISA instructions exposed in HLSL, where currently in DirectX they’re used as part of the DXR (DirectX Raytracing) API in a more restrictive and limited format... of course, being DirectX, this has no bearing on PS5.



As above, BVH creation and traversal are not exposed in the RDNA 2 ISA because there exists no hardware accelerating those features to expose to developers in the first place. It has to be done in software using GPU compute shaders.
Thanks for your explanation.
One more question though: the ISA needing a BVH4 tree means you can't really create the BVH tree however you want, doesn't it? Meaning you're not able to create a BVH tree with 8 children per node, for example.
 
Well, AMD have created their own acceleration structure (BVH) for RT on the PC cards, and with so many different PC configurations out there, they blocked access to anything BVH-related and only allowed devs to have control over ray intersection tests.

Sony might already be using AMD’s acceleration structure, at launch at least, but in that tech PDF that was shared on this thread, they talked about how they’re researching/working on advanced acceleration structures for “faster” RT, with input from developers also being considered throughout the entire process. And since they’re using their own API and are allowing low-level access, they’re giving devs a bit more fine-grained control over the RT process. Btw, neither the current-gen consoles nor the RDNA 2 PC cards have HW-accelerated BVH traversal, so I’m guessing that happens on the shaders (SPs in a CU) themselves through asynchronous compute.

Correction:
AMD hasn’t created any acceleration structures for RT on the PC. The philosophy behind their RT hardware implementation is to provide more flexibility than Nvidia’s, at the cost of some performance. Nvidia’s hardware solution is rigidly designed around BVH trees, using dedicated hardware to accelerate both ray intersection tests and BVH traversal. AMD, on the other hand, provides dedicated hardware to accelerate ray/box and ray/tri intersection tests and that’s it, which is actually more interesting for exploring opportunities to use alternative acceleration structures and/or traversal schemes.

RT in real-time rendering is still in its infancy. So just because Nvidia chose to bake BVH trees and their specific method of BVH traversal into hardware doesn’t mean that’s the most efficient or best method. With more time, exploration and research, better, more performant approaches may emerge, and AMD’s RT hardware offers the opportunity for their practical application in a way Nvidia’s doesn’t.

Going even further, RT on PC is dominated by DirectX with DXR (DirectX Raytracing), which is a black box for devs, closing off exposure to the low-level hardware features that devs would want to use to explore alternative approaches to writing RT algorithms. This is even worse for AMD, whose entire philosophy behind their hardware implementation is designed around offering devs the opportunity to explore new approaches in software. This is a key issue on PC and, by extension, Xbox.

For PS5, they will have their own API for RT exposing all the hardware features, meaning devs might choose to explore the use of k-d trees, for example, in place of BVH trees for the acceleration structure. Sony’s engineers already have patents in this area, and it wouldn’t surprise me to see PS5 devs exploring these alternate approaches to great effect. It wouldn’t surprise me if this were what Insomniac are using for Spider-Man’s RT implementation: although k-d trees take more computation to build, higher scene complexity with more object occlusion far more greatly favours k-d trees versus BVHs, and the increased build cost is mitigated somewhat by the fact that RT is only used in a limited sense (i.e. for reflections etc).
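To illustrate why, here’s a toy k-d tree ray traversal in C++ (node layout and names are hypothetical, purely illustrative; nothing here is from Sony’s patents or Insomniac’s code). The split planes impose a strict front-to-back order, so in a heavily occluded scene the walk can stop at the first hit:

```cpp
#include <cmath>

// Hypothetical k-d tree node.
struct KdNode {
    int axis = -1;                   // 0/1/2 = split axis, -1 = leaf
    float split = 0.0f;              // split plane position
    const KdNode* left = nullptr;    // coord <  split
    const KdNode* right = nullptr;   // coord >= split
    int first_tri = 0, num_tris = 0; // triangle range (leaves only)
};

// Stub standing in for the (hardware-accelerated) ray/triangle tests.
bool hit_any_triangle(const KdNode& /*leaf*/, const float* /*org*/,
                      const float* /*dir*/, float /*tmax*/) { return false; }

// Front-to-back traversal: the near side is always visited first, so the
// first hit terminates the walk early.
bool traverse(const KdNode* n, const float org[3], const float dir[3],
              float tmin, float tmax) {
    if (!n || tmin > tmax) return false;
    if (n->axis < 0)                            // leaf: run the tests
        return hit_any_triangle(*n, org, dir, tmax);

    float o = org[n->axis], d = dir[n->axis];
    const KdNode* near_side = (o < n->split) ? n->left : n->right;
    const KdNode* far_side  = (o < n->split) ? n->right : n->left;

    if (std::fabs(d) < 1e-8f)                   // parallel to the split plane
        return traverse(near_side, org, dir, tmin, tmax);

    float t = (n->split - o) / d;               // where the ray crosses the plane
    if (t < 0.0f || t > tmax)                   // never crosses within range
        return traverse(near_side, org, dir, tmin, tmax);
    if (t < tmin)                               // crossed before tmin
        return traverse(far_side, org, dir, tmin, tmax);

    if (traverse(near_side, org, dir, tmin, t)) return true;  // early exit
    return traverse(far_side, org, dir, t, tmax);
}
```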

Thanks for your explanation.
One more question though: the ISA needing a BVH4 tree means you can't really create the BVH tree however you want, doesn't it? Meaning you're not able to create a BVH tree with 8 children per node, for example.

Not necessarily. Certain functions within the ISA reference a BVH4, so 4 child nodes per node is the maximum number of children those specific functions will accept.

A BVH4 will grant the most efficient utilisation of the hardware, as the intersection hardware is designed to do 4 ray/box or ray/tri intersection tests per clock cycle. You can use a BVH2 or BVH3, but you’ll likely leave some of that throughput idle.
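Roughly what such a node might look like, as a C++ sketch (a hypothetical structure-of-arrays layout; the actual RDNA 2 node format is in AMD’s ISA doc). The four children’s bounds are packed together so one operation can test a ray against all four boxes at once, and any empty slot is a wasted lane:

```cpp
#include <algorithm>
#include <cstdint>

struct Ray { float org[3], inv_dir[3], tmax; };

// Hypothetical BVH4 node: bounds stored structure-of-arrays so the four
// child boxes feed a 4-wide intersection unit in one fetch.
struct Bvh4Node {
    float bmin[3][4];  // [axis][child]
    float bmax[3][4];
    int32_t child[4];  // child node index, or -1 for an empty slot
};

// Software model of one 4-wide box test (what the hardware does per clock).
// Returns a bitmask of the children the ray overlaps.
uint32_t intersect4(const Bvh4Node& n, const Ray& r) {
    uint32_t mask = 0;
    for (int c = 0; c < 4; ++c) {      // these lanes run in parallel in HW
        if (n.child[c] < 0) continue;  // a BVH2/3 leaves lanes idle here
        float t0 = 0.0f, t1 = r.tmax;
        for (int a = 0; a < 3; ++a) {
            float lo = (n.bmin[a][c] - r.org[a]) * r.inv_dir[a];
            float hi = (n.bmax[a][c] - r.org[a]) * r.inv_dir[a];
            if (lo > hi) std::swap(lo, hi);
            t0 = std::max(t0, lo);
            t1 = std::min(t1, hi);
        }
        if (t0 <= t1) mask |= 1u << c;
    }
    return mask;  // the software traversal decides what to do with the hits
}
```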

On the other hand, given the general nature of the ISA, other acceleration structures entirely can be used. It’s just that the dev will have to be careful to ensure the selected structure maps to the hardware limitations in order to maximise performance.

These are things devs will likely only be able to explore on PS5 for now, until MS exposes the RDNA 2 RT ISA instructions in HLSL so that devs can start to write their own custom shaders for them.
 

roops67

Member
Not gonna pretend that I understand everything you have been posting but I get the gist , and really impressed with your in-depth knowledge on the subjects and ability to explain in a non-patronizing way. Its apparent your knowledge goes beyond enthusiast, have you worked in this field?
 

Thanks very much.

Not at all. I’m an engineer, but in an entirely non-computing/IT-related field. I’ve just taken a keen interest in computer graphics and rendering technology, and I love to read about this stuff.

So in many respects I’m just an enthusiast like many of you.
 

SSfox

Member
It’s worth noting that neither does the XSX, contrary to MS’s claims.

Mixed-precision packed math for integer ops doesn’t really qualify as dedicated hardware for ML. It’s an advancement to the general integer compute functionality of the graphics and compute array that also benefits ML computation.

Tensor cores are an example of dedicated hardware for ML. Neither console has anything like this.
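For a concrete picture of the difference, here’s a small C++ model of the kind of packed dot-product-accumulate op that packed math provides (a software stand-in for illustration, not an actual intrinsic): four INT8 multiply-adds from one pair of 32-bit registers. Great for ML inference throughput, but it’s still the general-purpose integer pipeline, not a matrix engine:

```cpp
#include <cstdint>

// Software model of a packed 4x INT8 dot-product-accumulate. In hardware
// this is one instruction reusing the general integer ALUs, not dedicated
// ML silicon like a tensor core.
int32_t dot4_i8(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t ai = int8_t(a >> (8 * i));  // lane i of operand a
        int8_t bi = int8_t(b >> (8 * i));  // lane i of operand b
        acc += int32_t(ai) * int32_t(bi);
    }
    return acc;
}
```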



Have you been living under a rock?

PS Now is more the equivalent of xCloud + Game Pass. On PS Now you can stream or download any game aside from PS3 titles, so it’s not just a streaming service.
And the MGS HD Collection is a PS3 game.
 
Radeon Rays could turn out to be AMD's solution for raytracing. It is open source after all.
 


Accuse Nuclear Blast GIF by Machine Head


Nah, just kidding. I really appreciate your input and find what you share quite interesting. Don't let the craziness of GAF discourage you from sharing, as many of us like to read your type of posts. It's a breath of fresh air compared to the usual drivel that we get here.
 
Radeon Rays could turn out to be AMD's solution for raytracing. It is open source after all.
Nah, that’s just their own software based RT solution.

I’m sure it can offer devs the opportunity to pinch pieces of the code, since it’s freely available. Most devs, however, would write their own RT code for their own game engines.

Most games this new gen, at least, will use RT for limited effects like reflections, shadowing and AO. As the gen goes on we’ll see ever more clever implementations utilising whole new approaches that may or may not take advantage of the dedicated RT hardware at all.

Game developers are an incredibly creative and clever bunch.
 

yewles1

Member
Would it be possible to do a 3 cone solution for, say, checkerboard reflections?
 

3liteDragon

Member


0:00 Intro Banter
3:21 What were Tech Deals’ expectations for 2020?
8:56 Cyberpunk 2077’s Launch Performance
14:30 AMD Zen 3 vs Comet Lake
19:27 Nvidia Ampere & AMD RDNA 2 Discussion
30:28 Are PlayStation 5 & XBOX Series X Future Proof?
43:01 Comparing the XSX & PS5 to XB1 & PS4
45:41 Final thoughts on Comet Lake & Zen 3
54:26 When will Intel be competitive again?
1:07:00 Do we think Intel Xe will be successful? (And why we need it to be)
1:21:22 RTX 3060 & RX 6700XT - GPU Launches in Early 2021
1:25:00 Is the midrange dying? (We need High End APUs)
1:34:34 Is Ray Tracing actually relevant yet in 2020?
1:43:59 Has Nvidia pressured Tech Deals to talk about Ray Tracing?
1:50:29 Undervolting RDNA 2 vs Ampere
1:55:47 AMD’s Answer to DLSS, Do we need to change the way we review tech?
2:07:36 Final Hopes & Expectations for 2021
 

Md Ray

Member
It's time for me to finally go silent.

Would like to wish everyone a happy Christmas and a safe new year no matter what side of the console park you sit on.

It's time to concentrate on Mince pies, writing stories, and giving the Turkey a hell of a good stuffing.

To the friends I made, those that supported me, and even you @Mod of War, you dirty Bugsnax lover, have a great one.

Hope to see you in the new year.

xxxxx
Fish Rule :messenger_beaming:
Miss u dude.
 

bitbydeath

Member
Should be finding out the next PS5 PS+ game within the next 24 hours. What are you hoping for?

I’d like to see Planet Coaster. I want to try it, but I’m not sure how much I’d enjoy it on consoles.
 

vpance

Member




31:00 "There are some pretty severe memory problems with the Series X already"


confused cat GIF
 
31:00 "There are some pretty severe memory problems with the Series X already"


confused cat GIF

Had to watch that part to get the full context. Essentially he's saying that he wishes both systems had over 20GB of unified memory, so currently neither is meeting his expectations. However, he talks about the memory setup of both, and developers told him they are running into some memory issues with the Series X. That's probably due to the disadvantages that come from a split memory setup.

It could just be BS that he's making up to make the XSX look bad, but you can't deny that multiplats have been weird in the comparisons for some reason.
 

vpance

Member

Yes, the source of the quote is what he heard from devs.

If it's true then it should be easier to see in future shootouts with more demanding games. Valhalla probably showing it the best at the moment with XSX having to dip lower in res for scenes where it could be bandwidth limited.
 

That's what some were saying here. If a game doesn't need more than 10GB for the GPU then there's no issue on the XSX. If the GPU needs more than that, it has to use the much slower 6GB of RAM. That's where the problems will take place.

If I were to guess which games will have that problem, it's probably the ones that require a lot of RAM. So basically not your average 2D indie.
 

roops67

Member
Even if games use less than 10GB, the CPU and other clients still have to get at their memory in between, over the reduced-width data lanes, which is still a bandwidth hindrance to the GPU's 10GB memory access.

Microsoft asked AMD to design them a high-performance system with specs they may have outlined. Then MS crippled it by reducing the memory from 20GB (which the memory bus is built for) to 16GB. This is the only logical explanation for the royal cock-up of the memory bus architecture.

It could be that they will have the full 20GB in the server configuration but cut corners in the console config to save money, inadvertently hampering its performance without fully researching the consequences... but hey, the specs looked good on paper and that's all that mattered!
 

Question about the 20GB theory.

Is that unified or split?
 
It would have been unified if they'd done it the way the bus was designed for. The XSX memory bus is 320 bits wide, i.e. ten 32-bit channels, and ten channels go into 20GB evenly with matched 2GB modules. You can't get to 16GB without fiddling about with different-size memory modules, which leaves only part of the address space spread across all the channels (sort of like that)
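The published figures line up with that reading; a quick back-of-envelope check, using the 14 Gbps GDDR6 data rate MS quoted:

```cpp
// Ten 32-bit channels = 320-bit bus. Only six of the ten chips are 2GB,
// so the "extra" 6GB spans just a 192-bit subset of the bus.
constexpr double gbps_per_pin = 14.0;                // GDDR6 per-pin data rate
constexpr double fast = 320.0 / 8.0 * gbps_per_pin;  // 560 GB/s (10GB region)
constexpr double slow = 192.0 / 8.0 * gbps_per_pin;  // 336 GB/s (6GB region)
static_assert(fast == 560.0 && slow == 336.0, "matches MS's published numbers");
```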

It's very strange that they made that decision if it causes issues for developers.
 
They had the architecture laid out by AMD, then made some snips here and there to save money and stop going over budget. Maybe the APU design was already committed and fixed by that time.

Kind of strange to screw your system at the last minute. I guess RAM prices must have spiked for that to happen. I don't think they would have created that bottleneck without knowing about the consequences.
 

SSfox

Member
How, in this day and age, can a game be completed before we've seen any screenshots or gameplay?

Because even when the game is complete, polish can still take quite some time, and some devs would rather start communicating about their games once they feel they're far enough along, or even finished. I actually don't think that's a bad idea at all, quite the opposite. This applies especially to huge-scale games.

And I mean, look at Cyberpunk 2077: it's a finished game when it comes to content, but when it comes to polish, that's another story. It's out already, but in a sense it still needs like six months to a year, at least, to be called a finished, complete game. Also, Elden Ring is a Japanese game, and Japanese studios are usually more serious about polishing their games compared to Western devs. Remember how buggy TW3 and Skyrim were at launch, while games like Sekiro, Nier: Automata, Dark Souls 3, FF15 and others worked just fine, with minor bugs (that got fixed super fast too), and not the crazy buggy stuff of TW3, Skyrim, Fallout or, recently, Cyberpunk.
 

I mean, it was only a matter of time before they copied those features. Devs and reviewers had been praising them, and the new Xbox controller, while pretty good, played it pretty safe by only adding a share button after 7 long years. The thing is, though, Microsoft doesn't know the exact design of the adaptive triggers, only how they look in the patents, which means they can't copy the design entirely, only specific portions of it, as an exact carbon copy would classify as patent infringement. Pretty nice to see Xbox following, though; it goes to show that Sony was thinking ahead of time.
 

Panajev2001a

GAF's Pleasant Genius
It will be good if they put this to use because:

a.) the feature works really well and together with the rest of haptic feedback in the controller it is a multiplatform game decision tie breaker

and

b.) there are few things as hilarious as seeing hardcore Xbox fans who called them a useless gimmick and/or mocked them having to take their creative writing exercises to 11 in order to praise the new Xbox controller feature without also praising the PlayStation DualSense controller by proxy and thus admitting they were wrong 😂.
 


Great 👍🏻, looking forward to them! Better stolen than self-made but wrongly designed.

Microsoft the most innovative company ever


... some said

Lol, let's hope Xbox is still relevant by then

I love it when people criticize something they clearly don't know anything about. The patent was filed 3 years ago, so what is a filed patent? It's the first step of the process, where you provide an initial, "incomplete" overview of the concept. After that, it is possible to publish the patent, where you describe it completely (that was done in 2018 for the patent we're talking about: https://onedrive.live.com/?authkey=!AEO8J_kmK2SIZT0&cid=693B6E47405BB294&id=693B6E47405BB294!5567&parId=root&o=OneUp). And finally, you need to wait for your patent to be granted, as is the case for this one, per the data shared in the tweet.
 

Lunatic_Gamer

Gold Member
PS5 shipments to reach 16.8-18 million units in 2021

Production for Sony's PS5 game consoles is likely to reach 16.8-18 million units in 2021, fueled by additional capacity support from TSMC and backend services firms, according to industry sources.

 

MrFunSocks

Banned

Why would they be referencing a prior patent? They’re not, it’s the date the patent was filed. So they filed this long before Sony revealed their controller, meaning they aren’t “copying Sony”.

I’ll be turning it off if an Xbox controller ever gets it, just makes pressing triggers slower.
 

v_iHuGi

Banned
31:00 "There are some pretty severe memory problems with the Series X already"


confused cat GIF

I've been saying this since I joined GAF; nobody gave a shit back then.

"MS's claims about the most powerful console will look ridiculous in a few weeks"
"Split RAM will cause massive bottlenecks"

etc. etc... Check my post history, everything is there.

Nothing new to me. Series X is a console bottlenecked to the absolute limit; it was made to serve Azure, not consumers directly, and YES I do have, I REPEAT, I DO HAVE people who know the matter and work with Azure, Optane, you name it.

Also in before "my uncle works at Nintendo".

command and control GIF by St John Ambulance
 

RespawnX

Member
Wasn’t this already present in the XOne controller?
The Xbox One controller has impulse triggers, in other words rumble motors inside the triggers. They work very well, some games make great use of them, and they feel good. In my opinion the adaptive triggers on the DualSense are far too weak and the construction doesn't look very durable. Nevertheless, I think Sony is going the right way. By the way, Nintendo showed this with the Joy-Cons. I think it's great that Microsoft is considering making its controller more tactile as well. It increases the chances that more developers will use these functions.
 

And I can also claim that's not a real problem; devs just need to learn how to access and use the memory in an efficient way. That simply requires more work, but it's not a real bottleneck. You'll see.
 