
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

ethomaz

Banned
Anexanhume from Era found this, looks like it's related to PS5 (maybe the geometry engine?)
http://www.freepatentsonline.com/10510183.pdf
I could be wrong, but that is a software abstraction patent, not a hardware one.

What does that mean? It means Sony patented a way, via software (an API), to use the hardware to achieve a determined result.

An abstraction layer over the hardware units.
 

Ptarmiganx2

Member
It's more than just TFs that make the XSX superior to the PS5. Are you guys really going to do this the whole gen? Now we get to hear all this until games come out. 3rd party games will show they play best on the XSX, and I fear that there will then be something else to spin.

And quote this, all of you, when the PS5 Pro comes out: TFs will matter again. Book it!

Now everyone stay safe out there. Corona is no joke!

Once the actual death rate statistics are determined we will see the amount of overreaction. Worst case 0.9% looking closer to 0.5%. We need antibody testing on a mass scale to get accurate numbers. I'd push the actual number of infections to 100-200k in the US if not higher.
 
The Geometry Engine is part of RDNA1/RDNA2.


? Neither console is on 7nm+. XSX is "7nm enhanced" and we know that's not 7nm+.
As far as we know that chip only exists in the PS5, but it looks like it will provide a feature like mesh shaders using hardware dedicated only to this. The GPU of the PS5 is RDNA 2, but Sony added this chip because they know that, this way, it will work better when using things like VRS.

Both consoles will be made on the same N7P which, as you say, is just an improved version of the first 7nm.
 

SonGoku

Member
if it's 10.08TF under the most extreme situations then why didn't Cerny lock the GPU clock to get 10.08TF?
Short answer: because it's higher than that most of the time
Fixed clocks will always be better for consoles than variable clocks
False
By having a fixed power budget with variable frequencies (depending on load), they don't have to guess the appropriate cooling solution for the worst-case game power consumption. In other words, they can pinpoint the exact cooling solution needed to run cool and quiet without going overkill.
Cerny's whole deal, if you paid attention, was that you don't need to have a CPU stuck at that speed if it isn't doing much of anything at that time. That's why they went with variable frequency, to only have the CPU ramp up when needed.

Doing it that way leads to less heat, less fan, less noise.
Actually, both the CPU & GPU can stay simultaneously at their theoretical max frequencies (or very close to them). It's the type of workload that affects power consumption and triggers the system to drop/raise frequencies depending on the power allocated.

Cerny gave the example of 256-bit CPU instructions as being particularly power hungry
 

Bo_Hazem

Banned
When there is such a tiny gap between the two consoles, why do people argue that the SSD in the PS5 is such a game changer?

Looks to me like maybe third-party devs can't use that faster SSD to its full potential, or that its faster speed doesn't really matter that much.

Because it can reach up to 22GB/s, with devs seeing 20GB/s in action, faster than DDR4 RAM at 15GB/s. That makes it a VRAM you can run your OS directly from, instead of wasting 2.5GB of RAM like in the XSX. Plus you need to load and calculate far fewer assets, as you can upload/offload up to 22GB/s, or 2GB per 0.1 second!!! Assuming you're playing DOOM and turning insanely fast.

It's so huge that normal people like us are still struggling to comprehend it until we see some demo in action.

Think about it like the Horizon Zero Dawn technique but on steroids:




Making you deal with MUCH, MUCH less work on the GPU/CPU/RAM.
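
As a quick sanity check of the arithmetic in the post above (the 22GB/s peak is the poster's claim; the official figures are 5.5GB/s raw and 8-9GB/s typical compressed), a couple of lines of Python reproduce the "2GB per 0.1 second" figure:

```python
# Data moved over a time window at a given bandwidth, in GB.
def data_per_interval(bandwidth_gb_s: float, seconds: float) -> float:
    return bandwidth_gb_s * seconds

# At the claimed 22 GB/s peak, a tenth of a second moves about 2.2 GB,
# which is where the "2GB per 0.1 second" figure comes from.
print(round(data_per_interval(22, 0.1), 2))
```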
 

ethomaz

Banned
As far as we know that chip only exists in the PS5, but it looks like it will provide a feature like mesh shaders using hardware dedicated only to this. The GPU of the PS5 is RDNA 2, but Sony added this chip because they know that, this way, it will work better when using things like VRS.

Both consoles will be made on the same N7P which, as you say, is just an improved version of the first 7nm.
There is no chip in the patent...
Those are common GPU units.
 

StreetsofBeige

Gold Member
Because it can reach up to 22GB/s, with devs seeing 20GB/s in action, faster than DDR4 RAM at 15GB/s. That makes it a VRAM you can run your OS directly from, instead of wasting 2.5GB of RAM like in the XSX. Plus you need to load and calculate far fewer assets, as you can upload/offload up to 22GB/s, or 2GB per 0.1 second!!! Assuming you're playing DOOM and turning insanely fast.

It's so huge that normal people like us are still struggling to comprehend it until we see some demo in action.

Think about it like the Horizon Zero Dawn technique but on steroids:




Making you deal with MUCH, MUCH less work on the GPU/CPU/RAM.

Official specs are 8-9GB/s compressed; you edged it up to 11GB/s a few days ago and now it's 22GB/s?

lol

So when does it hit 33GB/s?
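
For reference, the compression ratio each of these effective-bandwidth claims would require over the official 5.5GB/s raw link can be worked out directly (a quick sketch using only figures quoted in the thread):

```python
RAW = 5.5  # GB/s, the official raw read bandwidth

# Compression ratio a given effective rate would require over the raw link.
def implied_ratio(effective_gb_s: float) -> float:
    return effective_gb_s / RAW

# The official "typical 8-9 GB/s" implies roughly 1.45-1.64:1; a sustained
# 22 GB/s would need 4:1, plausible only for unusually compressible data.
for rate in (8, 9, 11, 22):
    print(f"{rate} GB/s implies a {implied_ratio(rate):.2f}:1 ratio")
```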
 
Which is a totally subjective claim, as both of us discussed a day or two ago.

If he was so confident in the specs, he'd say the PS5 can handle the CPU and GPU at max speed all the time, or clarify how often it can and can't.

Saying "most of the time" means nothing and you know it.

Ya, ya. I know. Like the last time you supported Cerny, "most of the time" means 98-99%+, which is total guesswork.

If it's all the time, then the clocks by definition AREN'T VARIABLE. He was confident in what he said, including about the variability. Both the CPU and GPU can work at their top frequency together as long as the total workload does not exceed the power cap. If that happens, the downclock will be minor. It's sufficient to lower clocks by 2% to lower power consumption by 10%. Dropping power consumption by 10% IS NOT a 10% drop in performance.
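
The "2% clock for 10% power" relationship isn't linear, which is easy to illustrate. Under the common simplified model where dynamic power scales as frequency times voltage squared, with voltage scaled alongside frequency (so P ∝ f³), a 2% downclock already saves about 6%; the 10% figure implies the voltage curve near the top of the range is even steeper than that. A hedged sketch of the cubic model:

```python
# Dynamic power under the simplified model P ∝ f·V^2 with V ∝ f, i.e. P ∝ f^3.
# Real silicon near its frequency ceiling is steeper still, which is how a
# 2% downclock can plausibly buy the ~10% power reduction quoted above.
def power_fraction_cubic(clock_fraction: float) -> float:
    return clock_fraction ** 3

saving = 1 - power_fraction_cubic(0.98)
print(f"~{saving:.1%} power saved from a 2% downclock under the cubic model")
```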
 

StreetsofBeige

Gold Member
If it's all the time, then the clocks by definition AREN'T VARIABLE. He was confident in what he said, including about the variability. Both the CPU and GPU can work at their top frequency together as long as the total workload does not exceed the power cap. If that happens, the downclock will be minor. It's sufficient to lower clocks by 2% to lower power consumption by 10%. Dropping power consumption by 10% IS NOT a 10% drop in performance.
 
Bo_Hazem Bo_Hazem Okay bro you lured me back in if just for a second :nougat_rofl:



Really interesting mention of RDNA2 for consoles not being the same as the RDNA2 coming to PC later this year. That could just mean that certain non-priority features have been removed (Sony alluded to doing such on Wednesday), it could also mean there are exclusive hardware features on both that won't be present in the upcoming batch of PC RDNA2 cards. Could even mean maybe some other stuff in them like (maaaaybe) some RDNA3 or CDNA2 features?

Regardless, it's interesting to know and shows both systems aren't just taking literal PC architectures and slamming them into console boxes. He also mentions BCPack compression; there's some developer (or developers?) mentioning it has about a 50% compression rate potential, versus Kraken's 20%-30%, and Microsoft is looking to push BCPack very hard, but how hard they push it will determine how good the compression can get.

It seems MS has a lot of details on the XSX tech they're still looking to go in-depth on; it also makes sense Sony would do the same for PS5. While Kraken's peak compression may be lower than BCPack's, the video mentions there's a way devs on PS5 can encode their textures and combine that with Kraken to effectively match BCPack's compression rates. It also mentions that BCPack is specifically targeted at game textures; that could mean Kraken is targeted at general data compression including game textures, which could give it an advantage with non-texture data over BCPack, depending on whether BCPack is really focused primarily on game texture compression methods.

Sounds like some pretty interesting stuff, it'll be fun seeing more on the systems over the next few weeks 👍
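
To put those compression-rate figures in bandwidth terms: a rate here means the fraction shaved off the data size, so 50% halves the data and doubles effective throughput, while Kraken's quoted 20-30% buys roughly 1.25-1.43x. A quick sketch:

```python
# A compression "rate" of 0.5 means the output is 50% smaller, so the same
# link effectively carries 1 / (1 - 0.5) = 2x the data.
def bandwidth_multiplier(compression_rate: float) -> float:
    return 1 / (1 - compression_rate)

for rate in (0.2, 0.3, 0.5):
    print(f"{rate:.0%} compression -> {bandwidth_multiplier(rate):.2f}x effective bandwidth")
```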
 

Disco_

Member
As far as we know that chip only exists in the PS5, but it looks like it will provide a feature like mesh shaders using hardware dedicated only to this. The GPU of the PS5 is RDNA 2, but Sony added this chip because they know that, this way, it will work better when using things like VRS.

Both consoles will be made on the same N7P which, as you say, is just an improved version of the first 7nm.
Another chip? The Geometry Engine is a feature of RDNA; it was even in GCN, IIRC. From what I've been reading, and if I understood it correctly, geometry engine = Sony PR speak for primitive/mesh shaders.
Both XSX and PS5 have a bunch of RDNA2 features that they assigned their own names to and are trying to pass off as their own. Some of them are indeed "bespoke", but not all.
 

Yeah, sure. You tried. An Xbone fan desperately trying to spread FUD that the PS5 is a 9.2 TF console. LOL

If games can surely sustain max resolution most of the time, then surely the console will be at its power peak most of the time. Like in the bunch of DF and NXG game comparisons where it was mentioned so often that resolution drops are a rarity (depends on platform).

I've given you a simple explanation in these posts. Looks like you're one of those who probably DELIBERATELY didn't listen when Cerny stated clearly that they expect both the CPU and GPU to spend most of their time at their top frequencies, and that drops will happen in worst-case scenarios.
 

Bo_Hazem

Banned
Bo_Hazem Bo_Hazem Okay bro you lured me back in if just for a second :nougat_rofl:



Really interesting mention of RDNA2 for consoles not being the same as the RDNA2 coming to PC later this year. That could just mean that certain non-priority features have been removed (Sony alluded to doing such on Wednesday), it could also mean there are exclusive hardware features on both that won't be present in the upcoming batch of PC RDNA2 cards. Could even mean maybe some other stuff in them like (maaaaybe) some RDNA3 or CDNA2 features?

Regardless, it's interesting to know and shows both systems aren't just taking literal PC architectures and slamming them into console boxes. He also mentions BCPack compression; there's some developer (or developers?) mentioning it has about a 50% compression rate potential, versus Kraken's 20%-30%, and Microsoft is looking to push BCPack very hard, but how hard they push it will determine how good the compression can get.

It seems MS has a lot of details on the XSX tech they're still looking to go in-depth on; it also makes sense Sony would do the same for PS5. While Kraken's peak compression may be lower than BCPack's, the video mentions there's a way devs on PS5 can encode their textures and combine that with Kraken to effectively match BCPack's compression rates. It also mentions that BCPack is specifically targeted at game textures; that could mean Kraken is targeted at general data compression including game textures, which could give it an advantage with non-texture data over BCPack, depending on whether BCPack is really focused primarily on game texture compression methods.

Sounds like some pretty interesting stuff, it'll be fun seeing more on the systems over the next few weeks 👍


Welcome back! Glad to see you here ;)

That 20-30% is kinda odd. Saying 5.5GB can be compressed up to 22GB sounds more like 75% compression, and BGs has seen 20GB/s in action as well. Great video, I just need to watch it now at 1.25x speed :messenger_winking_tongue:
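
The 75% reading does follow from those numbers: going from 5.5GB/s raw to a 22GB/s effective rate is a 4:1 ratio, i.e. the compressed data is 25% of its original size, a 75% reduction. (The video's 20-30% is presumably Kraken's typical rate, not this best case.)

```python
raw, peak = 5.5, 22.0        # GB/s: official raw rate vs the claimed peak
ratio = peak / raw           # 4:1
reduction = 1 - raw / peak   # fraction of the original size removed
print(f"{ratio:.0f}:1 ratio = {reduction:.0%} size reduction")
```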
 

kyliethicc

Member
I believe consoles (I will check PS4) display base2... so a 1TB HDD shows 931GB there.

Edit - You are right.

I used a pen drive with a 311MB file in base 2 (1024), which means 326.xMB in base 10 (1000).

The PS4 shows the file as 326.9MB.

That means the PS4 shows base 10 (1000 bytes = 1 kilobyte).

The PS5 will probably show 825GB.

Yeah, so this means what we all think of as a 50 GB or 100 GB game matches up with the 825GB or 1 TB etc. So it's a bit annoying I guess, but most people will never notice or care.
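
The base-2 vs base-10 arithmetic being tested above is just a units conversion, e.g.:

```python
# Marketing and (per the pen-drive test above) the PS4 UI use base 10:
# 1 GB = 1000^3 bytes. Windows and many tools use base 2: 1 GiB = 1024^3
# bytes, which is why a "1TB" drive reads as ~931GB there.
def base10_gb(n_bytes: int) -> float:
    return n_bytes / 1000**3

def base2_gib(n_bytes: int) -> float:
    return n_bytes / 1024**3

one_tb = 1_000_000_000_000
print(f"a 1TB drive is {base2_gib(one_tb):.0f} GiB")  # the familiar ~931 figure
```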
 
Bo_Hazem Bo_Hazem Okay bro you lured me back in if just for a second :nougat_rofl:



Really interesting mention of RDNA2 for consoles not being the same as the RDNA2 coming to PC later this year. That could just mean that certain non-priority features have been removed (Sony alluded to doing such on Wednesday), it could also mean there are exclusive hardware features on both that won't be present in the upcoming batch of PC RDNA2 cards. Could even mean maybe some other stuff in them like (maaaaybe) some RDNA3 or CDNA2 features?

Regardless, it's interesting to know and shows both systems aren't just taking literal PC architectures and slamming them into console boxes. He also mentions BCPack compression; there's some developer (or developers?) mentioning it has about a 50% compression rate potential, versus Kraken's 20%-30%, and Microsoft is looking to push BCPack very hard, but how hard they push it will determine how good the compression can get.

It seems MS has a lot of details on the XSX tech they're still looking to go in-depth on; it also makes sense Sony would do the same for PS5. While Kraken's peak compression may be lower than BCPack's, the video mentions there's a way devs on PS5 can encode their textures and combine that with Kraken to effectively match BCPack's compression rates. It also mentions that BCPack is specifically targeted at game textures; that could mean Kraken is targeted at general data compression including game textures, which could give it an advantage with non-texture data over BCPack, depending on whether BCPack is really focused primarily on game texture compression methods.

Sounds like some pretty interesting stuff, it'll be fun seeing more on the systems over the next few weeks 👍


Yeah, yeah, the XSX decompression chip has a theoretical max of around 6GB/s, while the PS5's has a theoretical max of 22GB/s. It's really funny when people try to highlight the XSX decompression solution (OMG, it's tech from space, something like alien technology), yet the PS5's decompression tech demolishes the XSX with a peak number over 3x higher.
 
I could be wrong, but that is a software abstraction patent, not a hardware one.

What does that mean? It means Sony patented a way, via software (an API), to use the hardware to achieve a determined result.

An abstraction layer over the hardware units.
Yeah, it looks more related to an API than to the hardware.

As a possible solution when there is an increase in pixels, like now with 4K,
you give each pixel an ID in order to know which primitive it came from; you also know if some pixel is not part of the same primitive,
so you can improve things like the rasterization process or even stuff like the anti-aliasing process.

Sorry if I misunderstood the patent :lollipop_squint_tongue:
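
A hypothetical sketch of the idea as read here (not the actual patent's implementation, and all names are invented): if each pixel carries the ID of the primitive that produced it, geometry edges are simply the pixels whose neighbours carry a different ID, and anti-aliasing work can be confined to those pixels:

```python
# Given a per-pixel primitive-ID buffer, flag pixels whose right or bottom
# neighbour came from a different primitive. AA (or other per-pixel work)
# could then be restricted to the flagged pixels.
def edge_mask(id_buffer):
    h, w = len(id_buffer), len(id_buffer[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if x + 1 < w and id_buffer[y][x] != id_buffer[y][x + 1]:
                mask[y][x] = mask[y][x + 1] = True
            if y + 1 < h and id_buffer[y][x] != id_buffer[y + 1][x]:
                mask[y][x] = mask[y + 1][x] = True
    return mask

ids = [[1, 1, 1, 1],
       [1, 1, 2, 2],
       [1, 1, 2, 2]]
print(edge_mask(ids))  # flags only the pixels along the 1/2 boundary
```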
 

StreetsofBeige

Gold Member
Yeah, sure. You tried. An Xbone fan desperately trying to spread FUD that the PS5 is a 9.2 TF console. LOL

If games can surely sustain max resolution most of the time, then surely the console will be at its power peak most of the time. Like in the bunch of DF and NXG game comparisons where it was mentioned so often that resolution drops are a rarity (depends on platform).

I've given you a simple explanation in these posts. Looks like you're one of those who probably DELIBERATELY didn't listen when Cerny stated clearly that they expect both the CPU and GPU to spend most of their time at their top frequencies, and that drops will happen in worst-case scenarios.
Who's spreading FUD? I'm not Rogame or those dataminers. But I'd rather support some data than believe a bunch of random people on the net with hidden agendas spreading FUD about 12-13TF and PS1-PS3 BC.

Maybe next time you should put more trust in data than in internet users whispering vague tips.

The 9.2TF was closer to the 10.3TF; then there was all the FUD for a year about 12-13TF, which people like you got hooked on for a year. The only thing Oberon missed was a GPU boost. They even got the same 36 CU chip right, which in theory is a unique claim, because who lately releases new systems with the same CU count as a system from 4 years ago? But they stuck to it and were right.

Guys like Jason Schreier I'm sure knew the whole time but didn't want to spill the beans until 2 hrs before the presentation. And I'm sure there were some true insiders and devs who could have told the truth, but they didn't. And you believed their smoke and mirrors.

My latest call was even 10.6-10.9TF and even that was overstated. I gave Sony the benefit of the doubt and made a random call right in the middle between 9.2 and 12-13, and even I was closer than most people.
 
Another chip? The Geometry Engine is a feature of RDNA; it was even in GCN, IIRC. From what I've been reading, and if I understood it correctly, geometry engine = Sony PR speak for primitive/mesh shaders.
Both XSX and PS5 have a bunch of RDNA2 features that they assigned their own names to and are trying to pass off as their own. Some of them are indeed "bespoke", but not all.
Yeah, you could be right, but we need to know how RDNA 2 is made. Maybe Sony is just using PR speak, maybe they put in another chip which does the same things (I believe it's this), or it's just a useless chip.
 

Chun Swae

Banned
Honestly it's the fault of both companies for not showing games or at least tech demos first when talking about their new systems; your average person isn't a hardware enthusiast and couldn't care less. Right now this just feels like an extremely long, boring ramp-up to next gen.
 
Who's spreading FUD? I'm not Rogame or those dataminers. But I'd rather support some data than believe a bunch of random people on the net with hidden agendas spreading FUD about 12-13TF and PS1-PS3 BC.

Maybe next time you should put more trust in data than in internet users whispering vague tips.

The 9.2TF was closer to the 10.3TF; then there was all the FUD for a year about 12-13TF, which people like you got hooked on for a year.

Guys like Jason Schreier I'm sure knew the whole time but didn't want to spill the beans until 2 hrs before the presentation. And I'm sure there were some true insiders and devs who could have told the truth, but they didn't. And you believed their smoke and mirrors.

My latest call was even 10.6-10.9TF and even that was overstated. I gave Sony the benefit of the doubt and made a random call right in the middle between 9.2 and 12-13, and even I was closer than most people.

I really don't understand what the hell you were trying to prove with that gif???

But those people were right about the XSX. Yeah! Cool that you were close. Thumbs up
 

StreetsofBeige

Gold Member
I really don't understand what the hell you were trying to prove with that gif???

But those people were right about the XSX. Yeah! Cool that you were close. Thumbs up
And almost everyone was way off with the PS5's 12-13TF. Going by trackers, only a handful were in that 9.2-10 range. The majority clustered at 12-13TF.

There's a reason why nobody wanted to spill the beans about the PS5 being the Oberon leak + a GPU boost. Think about it.
 

pasterpl

Member
I am quite interested to hear more about how both will use machine learning. Sony also didn't mention anything about input lag, so for many people playing online FPS games the XSX might be the "better" choice.
 
People need to read what VFX Veteran has said regarding the SSD hype in the "PS5 focus is high performance, not high power. Are people underestimating the value of the approach?" thread.
 

SgtCaffran

Member
I read your post but there's no info that suggests the XSX audio chip will handle audio RT.
The Project Acoustics engine can be used by the CPU & GPU as well; it's more likely audio RT will be handled by GPU RT hardware on both consoles.
I am not suggesting that the engine will do audio RT. That can be done by the RT cores.

However, the results of Project Acoustics might replace audio RT. They make a representational model of the game environment and do a very complex wave simulation on Azure servers. They then simplify this into a usable sound model for use in the actual game. I imagine the Acoustics engine will have dedicated hardware to utilise this model in-game.

They bake the audio model similar to how some games bake lighting using offline calculations.
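
The "bake offline, look up at runtime" pattern described above can be sketched in a few lines (a hypothetical illustration; this is not the actual Project Acoustics API, and the probe positions, parameters and names are invented):

```python
import math

# Baked offline by the expensive wave simulation:
# probe position -> (occlusion_db, reverb_seconds)
BAKED = {
    (0.0, 0.0): (0.0, 0.4),     # open courtyard
    (10.0, 0.0): (-12.0, 1.8),  # behind a thick wall, long reverb
}

# Runtime is just a cheap lookup: the nearest baked probe stands in
# for a live acoustic simulation.
def acoustic_params(listener_pos):
    probe = min(BAKED, key=lambda p: math.dist(p, listener_pos))
    return BAKED[probe]

print(acoustic_params((9.0, 1.0)))  # -> (-12.0, 1.8), the "behind the wall" bake
```

The design point is the same as baked lighting: all the expensive physics runs offline, and the shipped game only pays for a lookup.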
 

SonGoku

Member
Say for 99% of the time at 2.13 GHz it's running fine, and 1% of the time, when it sees certain commands like AVX, it heats up and needs 2 GHz for like half a frame to handle the heat, and the CPU lends a hand
It won't heat up, since it can't go over the fixed power budget. If a particular CPU instruction needs more power than it's being given at the time, it will either:
A) Run the CPU at a lower frequency to accommodate the increased power consumption of the load/instruction
B) Siphon power from the GPU (causing GPU frequency to drop) to run the instruction without dropping CPU frequency
C) Siphon power from the GPU (switching the GPU to a lighter load while retaining frequency) to run the instruction without dropping CPU frequency

In scenario A) GPU runs at max frequency and CPU runs at lower frequency
In scenario B) GPU runs at lower frequency and CPU runs at max frequency
In scenario C) both GPU & CPU run at max frequency

This is all done by the devs; it's their choice and design, and of course there can also be in-between scenarios, I just explained the extremes.
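
Scenarios A-C can be expressed as a toy power-budget allocator (illustrative only; the budget and wattages are invented, not PS5 specs):

```python
BUDGET = 200  # watts, a hypothetical total SoC power budget (not a real spec)

# Resolve an over-budget frame the way scenarios A/B above describe.
# Scenario C corresponds to the GPU's demand itself dropping (lighter load),
# after which the sum fits the budget and both chips keep max frequency.
def allocate(cpu_demand, gpu_demand, strategy):
    total = cpu_demand + gpu_demand
    if total <= BUDGET:              # fits: both run at max frequency
        return cpu_demand, gpu_demand
    if strategy == "A":              # CPU downclocks; GPU keeps its power
        return BUDGET - gpu_demand, gpu_demand
    if strategy == "B":              # GPU downclocks; CPU keeps its power
        return cpu_demand, BUDGET - cpu_demand
    raise ValueError(strategy)

# An AVX-heavy CPU burst (90 W) while the GPU wants 130 W: 20 W over budget.
print(allocate(90, 130, "A"))  # (70, 130): the CPU gives way
print(allocate(90, 130, "B"))  # (90, 110): the GPU gives way
print(allocate(90, 100, "A"))  # (90, 100): within budget, no throttling
```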
 

Bo_Hazem

Banned
Most people don't know what they want til they experience it.

There's a difference between hearing audio and being placed in the soundscape itself; Sony's solution takes care of every link in the chain to achieve this.

The audio angle is a riskier move as it's harder to market, as are the benefits of a super fast, parallel and granular SSD beyond loading times. But at least Sony are pushing literal game changers, things that fundamentally move the industry forward. Even when most of those advantages will only be seen by consumers in First Party titles.

Without a doubt, MS' marketing has been way better so far, and they have the advantage of more compute width. But the real-world performance differential will likely amount to something like ~2034p vs ~2160p.

You can prefer whichever you like, no judgement, no ill will meant.

But to me, what Sony's doing is fundamentally more important moving forward and the compromises they had to make to achieve them within their budget/thermal/power constraints are more than worth it. I feel like MS are more concerned with hitting TF numbers.

Again, that's fine if that's where your priorities lie. When it comes to getting that last bit of quality out of something, I've been on the receiving end of plenty of confusion and ridicule in my life for my ethos when collecting movies and music...

For movies I have to get the absolute best transfer and master with the best bitrate on the best format. With music I'll look for the best mix+master and then try and find that on the best format possible. This can result in hours of research and comparison simply because I care about mass-archival and preservation, I care about experiencing content as faithfully as possible, effectively as intended by its creators; or at least true to an original release.

...So yeah, I understand about wanting the very best possible, but games are multifaceted and compromises have to be made. The negligible compromises Sony made are worth it.

I was wondering; do you use any of those fancy, expensive Sony Walkman devices?
 
Who's spreading FUD? I'm not Rogame or those dataminers. But I'd rather support some data than believe a bunch of random people on the net with hidden agendas spreading FUD about 12-13TF and PS1-PS3 BC.

Maybe next time you should put more trust in data than in internet users whispering vague tips.

The 9.2TF was closer to the 10.3TF; then there was all the FUD for a year about 12-13TF, which people like you got hooked on for a year. The only thing Oberon missed was a GPU boost. They even got the same 36 CU chip right, which in theory is a unique claim, because who lately releases new systems with the same CU count as a system from 4 years ago? But they stuck to it and were right.

Guys like Jason Schreier I'm sure knew the whole time but didn't want to spill the beans until 2 hrs before the presentation. And I'm sure there were some true insiders and devs who could have told the truth, but they didn't. And you believed their smoke and mirrors.

My latest call was even 10.6-10.9TF and even that was overstated. I gave Sony the benefit of the doubt and made a random call right in the middle between 9.2 and 12-13, and even I was closer than most people.
Everything you're saying about this insider chit-chat has nothing to do with the argument.
It's clear you don't know how the PS5 handles clocks, and it's clear you have no idea what the 22 GB/s refers to; it was all in the conference.
You are stating the exact contrary of what the lead architect said in the conference.
Do you have proof? Yes or no?
 

Bo_Hazem

Banned
I know about power efficiency, but temperatures will always play a role IMO. Even the PS5 SSD will produce heat, and the PS5 will try to push 2.23GHz most of the time; no matter how efficient the GPU is, that is a high clock speed for a console.
Let's say the thermal paste dries out and you use the PS5 actively; then 3 years down the line the PS5 won't keep up and will only push its base clock or close to it. On the other hand, even if the XSX were in an oven, the chip will always push 1.825GHz.

I think I won't comment on the PS5's TF anymore until they show us the cooling system

With conventional cooling the XSX could just as well produce more heat; it packs more CUs and RT. So let's wait and see :)
 

BGs

Industry Professional
"Fuck out" was an ironic way of saying that you don't post here often, as you said yourself. There wasn't any evil intent in it. No, it doesn't mean "crazy".
Also, I have no idea where you got the whole Era vs GAF thing from, at least not from my post. I just asked if and why there are more devs there, because it seems so.
Thanks for the info.
Really, only the first paragraph was for you.

The Era vs GAF thing is evident on both forums. It wasn't aimed directly at you.
 

StreetsofBeige

Gold Member
Everything you're saying about this insider chit-chat has nothing to do with the argument.
It's clear you don't know how the PS5 handles clocks, and it's clear you have no idea what the 22 GB/s refers to; it was all in the conference.
You are stating the exact contrary of what the lead architect said in the conference.
Do you have proof? Yes or no?
SSD: 825GB
5.5GB/s Read Bandwidth (Raw)

Internal Storage: Custom 825GB SSD
IO Throughput: 5.5GB/s (Raw), Typical 8-9GB/s (Compressed)
 
SSD: 825GB
5.5GB/s Read Bandwidth (Raw)

Internal Storage: Custom 825GB SSD
IO Throughput: 5.5GB/s (Raw), Typical 8-9GB/s (Compressed)
You didn't watch the conference; it's fine, buddy.

Can someone post the clip from Cerny regarding this? Because I just realized I give zero fucks at this level of discussion.
 
With conventional cooling the XSX could just as well produce more heat; it packs more CUs and RT. So let's wait and see :)
More CUs at much lower clocks. It's why MS has the right approach: more CUs at lower clocks gives you more bang for your buck with fewer potential cooling issues. Hence why the XSX does not need variable clocks like the PS5. Having locked clocks gives devs more predictability, allowing for much easier development.
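
The TF arithmetic behind "more CUs at lower clocks" is straightforward: FP32 peak = CUs × 64 shader lanes × 2 ops per clock × clock speed. Plugging in the announced specs:

```python
# RDNA FP32 peak throughput: CUs x 64 shader lanes x 2 ops/clock x clock (GHz),
# divided by 1000 to convert GFLOPS to TFLOPS.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"XSX: {tflops(52, 1.825):.2f} TF")  # 12.15 TF at a fixed clock
print(f"PS5: {tflops(36, 2.23):.2f} TF")   # 10.28 TF at its peak clock
```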
 
Welcome back! Glad to see you here ;)

That 20-30% is kinda odd. Saying 5.5GB can be compressed up to 22GB sounds more like 75% compression, and BGs has seen 20GB/s in action as well. Great video, I just need to watch it now at 1.25x speed :messenger_winking_tongue:

The 75% figure you're referring to might be Kraken + the encoding method the video brings up in that section, and it might refer to particular types of very well-compressed data that can hit that level, as Cerny was saying Wednesday.

Also, there are time stamps, 'cuz I already know how some folks feel about the speech speed :LOL: (not a bother for me personally, but everyone's got different preferences with that).
 

Audiophile

Member
I was wondering; do you use any of those fancy, expensive Sony Walkman devices?

Nope, I use Foobar on Windows or PowerAmp on my phone > USB DAC > headphone amp/stereo speaker amp. If my music is in the analogue or physical-digital domain I'll rip it in hi-res. I find most dedicated devices to be clunky, and you're saddled with whatever DAC or amp chips they pack into their device. I honestly find Sony's music devices underwhelming these days; they just reissue old stuff from the Asian market years later and tend to bork something.
 
Because it can reach up to 22GB/s, with devs seeing 20GB/s in action, faster than DDR4 RAM at 15GB/s. That makes it a VRAM you can run your OS directly from, instead of wasting 2.5GB of RAM like in the XSX. Plus you need to load and calculate far fewer assets, as you can upload/offload up to 22GB/s, or 2GB per 0.1 second!!! Assuming you're playing DOOM and turning insanely fast.

It's so huge that normal people like us are still struggling to comprehend it until we see some demo in action.

Think about it like the Horizon Zero Dawn technique but on steroids:




Making you deal with MUCH, MUCH less work on the GPU/CPU/RAM.

They won't run the whole OS on it; they'll probably use it to store what doesn't need to be written often, like suspended apps, assets and such.
 