
Next-Gen PS5 & XSX |OT| Console tEch threaD

This opinion is wrong. I personally like Microsoft hardware. Streaming is going to suck and everyone knows it.
I don't like streaming either, and sure, I like the Surface Pro; I think it's a great overpriced device to compete with the overpriced Apple hardware. As far as gaming consoles go, I don't know if Microsoft delivered with the Xbox One. Sure, the Xbox One X has great specs, but it's just sitting there in my room and I only have Halo for it. I would have much rather paid a $15-a-month streaming subscription just to play Halo and that's it.
 

xool

Member
Soldering on an SSD just seems like a terrible idea regardless of the benefits.

The soldered SSD is from the (unconfirmed) dev kit board, I think?

Ignoring wear leveling, it seems like a bad idea from an upgradeability point of view - I'd 100% expect the storage to be expandable ..

.. and a PCIe 4.0 pluggable SSD standard doesn't even exist right now (M.2 only goes to PCIe 3.0). Maybe the PS5 will have a PCIe slot .. there's a lot of ifs here.

..I'd guess a soldered SSD (to save money) plus an M.2 slot for expandability.
 

SonGoku

Member
It probably interfaces with any plug-and-play SSD or drive. I would not panic at all in this department; that would just be pointless FUD.
I hope for this outcome too, but it wouldn't be unthinkable for it to be fused to the board, and they could always offer upgradeable cold storage
 

DeepEnigma

Gold Member
I hope for this outcome too, but it wouldn't be unthinkable for it to be fused to the board, and they could always offer upgradeable cold storage

I do not think they are going to toss out external storage solutions which people use to play games as well.

It is possibly how they work their interface with the memory in the I/O: any drive will work like we have had, you will just get varying performance ranges depending on your solution (for installs). It probably loads the game (or several) from whatever storage solution you use onto their SSD, hot-swaps them out, etc. So you can still have your USB storage or the like holding all of your content, and whatever you played recently gets loaded into their solution. Like caching, essentially.
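Purely as an illustration of that idea, here's a speculative sketch of such a recently-played cache - every name in it is made up, nothing here is confirmed hardware or OS behaviour:

```python
# Speculative sketch of a least-recently-played install cache: games are
# staged from slow external storage onto the fast internal SSD, and the
# least-recently-played install gets evicted when space runs out.
from collections import OrderedDict

class InstallCache:
    def __init__(self, ssd_capacity_gb):
        self.capacity = ssd_capacity_gb
        self.used = 0.0
        self.resident = OrderedDict()  # game -> size, ordered by last played

    def play(self, game, size_gb, copy_from_external):
        if game in self.resident:
            self.resident.move_to_end(game)   # already on the fast SSD
            return
        # Evict least-recently-played installs until the new game fits;
        # their data still lives on the slower external drive.
        while self.used + size_gb > self.capacity and self.resident:
            _, old_size = self.resident.popitem(last=False)
            self.used -= old_size
        copy_from_external(game)              # the slow HDD/USB -> SSD transfer
        self.resident[game] = size_gb
        self.used += size_gb
```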

onQ123 was talking about this very thing, or something similar, for months now, and people scoffed at it.
 
Last edited:

joe_zazen

Member
1TB soldered with the option for a regular HDD as backup. Sure, you might have to wait five minutes while a game is transferred from HDD to NVMe SSD, but so what? They need to make sure the UI is super clear though.

The highly customised software stack says to me that you will need a very specific ssd.
 
Last edited:

SonGoku

Member
I do not think they are going to toss out external storage solutions which people use to play games as well.
What if they are used for cold storage only?
1TB soldered with the option for a regular HDD as backup. Sure, you might have to wait five minutes while a game is transferred from HDD to NVMe SSD, but so what? They need to make sure the UI is super clear though.
Plus this way they guarantee speed across the board
 

Fake

Member
Hope the PS5 OS looks similar to the PS3's. Nothing against the PS4, but that PS3 OS theme is great and very smooth. Even selecting HDMI/sound options looks quite refined in comparison with the PS4.
 

SonGoku

Member
Explain?

Edit: nvm, we are on the same page I think, since my last edit.
Yup, use an external HDD or other user-replaceable drive as cold storage; in order to play a game it must be installed on the internal SSD.
Streaming does suck, but paying $200 for the cheaper model is still way too expensive to play Halo. Everything else I can play on my PS4 or PC.
Is it worth it though? Input lag ruins the experience.
Isn't Halo coming to PC anyway?
 
Last edited:
Is it worth it though? Input lag ruins the experience.
Isn't Halo coming to PC anyway?
Is it worth it? OK, let's put it this way: you wanna fuck an escort with fake tits and the bitch charges $3,000 an hour. Is it worth the $3,000 just for the fake tits? You could find a chick that looks as good as her but without the fake tits, or a street whore that charges $250 an hour but doesn't have fake tits. You could just use that $3,000 to trick a hoe and get multiple fucks out of it versus just an hour fuck. A financially smarter person would say nah, it's not worth it, I'ma just go with the cheaper option or trick a hoe and pay for her implants.
 

SonGoku

Member
Is it worth it? OK, let's put it this way: you wanna fuck an escort with fake tits and the bitch charges $3,000 an hour. Is it worth the $3,000 just for the fake tits? You could find a chick that looks as good as her but without the fake tits, or a street whore that charges $250 an hour but doesn't have fake tits. You could just use that $3,000 to trick a hoe and get multiple fucks out of it versus just an hour fuck. A financially smarter person would say nah, it's not worth it, I'ma just go with the cheaper option or trick a hoe and pay for her implants.
lol, I don't see the point in paying for condom (input lag) sex, it's a downgraded experience; I'd rather just play with myself or get a gf (console)
 
Last edited:

ethomaz

Banned
Soldered SSD is even worse than a proprietary SSD lol

SSD should be a standard replaceable PC part... if not, Sony will charge you the way they did with the infamous Vita cards lol
 
Last edited:

LordOfChaos

Member
Post I've found online about a Sony patent for SSD
This will be one for people interested in some potentially more technical speculation. I posted in the next-gen speculation thread, but was encouraged to spin it off into its own thread.
I did some patent diving to see if I could dig up any likely candidates for what Sony's SSD solution might be.
I found several Japanese SIE patents from Saito Hideyuki along with a single (combined?) US application that appear to be relevant.
The patents were filed across 2015 and 2016.

Caveat: This is an illustrative embodiment in a patent application. i.e. Maybe parts of it will make it into a product, maybe all of it, maybe none of it. Approach it speculatively.
That said, it perhaps gives an idea of what Sony has been researching. And does seem in line with what Cerny talked about in terms of customisations across the stack to optimise performance.

http://www.freepatentsonline.com/y2017/0097897.html

There's quite a lot going on, but to try and break it down:
It talks about the limitations of simply using an SSD 'as is' in a games system, and a set of hardware and software stack changes to improve performance.

Basically, 'as is', an OS uses a virtual file system, designed to virtualise a host of different I/O devices with different characteristics. Various tasks of this file system typically run on the CPU - e.g. traversing file metadata, data tamper checks, data decryption, data decompression. This processing, and interruptions on the CPU, can become a bottleneck to data transfer rates from an SSD, particularly in certain contexts e.g. opening a large number of small files.

At a lower level, SSDs typically employ a data block size aimed at generic use. They distribute blocks of data around the NAND memory to distribute wear. In order to find a file, the memory controller in the SSD has to translate a request to the physical addresses of the data blocks using a look-up table. In a regular SSD, the typical data block size might require a look-up table 1GB in size for a 1TB SSD. An SSD might typically use DRAM to cache that lookup table - so the memory controller consults DRAM before being able to retrieve the data. The patent describes this as another potential bottleneck.
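The table sizes the patent quotes are easy to sanity-check. A minimal back-of-envelope sketch, assuming a 4-byte table entry per block and a 128MiB write-once block size (both assumptions of mine, chosen so the arithmetic lands on the patent's figures):

```python
# Back-of-envelope check of the lookup-table sizes the patent quotes.
# Assumptions (not from the patent): 4-byte entries, 128MiB coarse blocks.
TIB = 2**40  # one tebibyte of NAND

def lookup_table_bytes(drive_bytes, block_bytes, entry_bytes=4):
    # One table entry per logical block on the drive.
    return drive_bytes // block_bytes * entry_bytes

# Generic SSD granularity (4KiB blocks): a table around a gigabyte.
print(lookup_table_bytes(TIB, 4 * 2**10))    # 1073741824 -> ~1GB per 1TB
# Coarse write-once granularity (128MiB blocks): small enough for SRAM.
print(lookup_table_bytes(TIB, 128 * 2**20))  # 32768 -> the patent's 32KB
```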

Here are the hardware changes the patent proposes vs a 'typical' SSD system:

- SRAM instead of DRAM inside the SSD for lower latency and higher throughput access between the flash memory controller and the address lookup data. The patent proposes using a coarser granularity of data access for data that is written once, and not re-written - e.g. game install data. This larger block size can allow for address lookup tables as small as 32KB, instead of 1GB. Data read by the memory controller can also be buffered in SRAM for ECC checks instead of DRAM (because of changes made further up the stack, described later). The patent also notes that by ditching DRAM, reduced complexity and cost may be possible, and cost will scale better with larger SSDs that would otherwise need e.g. 2GB of DRAM for 2TB of storage, and so on.

- The SSD's read unit is 'expanded and unified' for efficient read operations.

- A secondary CPU, a DMAC, and a hardware accelerator for decoding, tamper checking and decompression.

- The main CPU, the secondary CPU, the system memory controller and the IO bus are connected by a coherent bus. The patent notes that the secondary CPU can be different in instruction set etc. from the main CPU, as long as they use the same page size and are connected by a coherent bus.

- The hardware accelerator and the IO controller are connected to the IO bus.

An illustrative diagram of the system:
uS6bo2P.png


At a software level, the system adds a new file system, the 'File Archive API', designed primarily for write-once data like game installs. Unlike a more generic virtual file system, it's optimised for NAND data access. It sits at the interface between the application and the NAND drivers, and the hardware accelerator drivers.

The secondary CPU handles prioritisation of access to the SSD. When read requests are made through the File Archive API, all other read and write requests can be prohibited to maximise read throughput.

When a read request is made by the main CPU, it sends it to the secondary CPU, which splits the request into a larger number of small data accesses. It does this for two reasons - to maximise parallel use of the NAND devices and channels (the 'expanded read unit'), and to make blocks small enough to be buffered and checked inside the SSD SRAM. The metadata the secondary CPU needs to traverse is much simpler (and thus faster to process) than under a typical virtual file system.

The NAND memory controller can be flexible about what granularity of data it uses - for data requests sent through the File Archive API, it uses granularities that allow the address lookup table to be stored entirely in SRAM for minimal bottlenecking. Other granularities can be used for data that needs to be rewritten more often - user save data, for example. In these cases, the SRAM partially caches the lookup tables.

When the SSD has checked its retrieved data, it's sent from SSD SRAM to kernel memory in the system RAM. The hardware accelerator then uses a DMAC to read that data, do its processing, and then write it back to user memory in system RAM. The coordination of this happens with signals between the components, and not involving the main CPU. The main CPU is then finally signalled when data is ready, but is uninvolved until that point.

A diagram illustrating data flow:
yUFsoEN.png
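To make that sequence concrete, here's a structural sketch of the read path in Python. Everything below is invented for illustration - the patent describes component roles, not an API - so treat file_archive_read and the stubbed functions as hypothetical:

```python
# Structural sketch of the patent's read path; all names are hypothetical.
BLOCK = 128 * 2**20   # assumed coarse write-once block size
CHUNK = 64 * 2**10    # assumed per-access size that fits the SSD's SRAM

sram_lookup_table = {}  # logical block -> physical NAND address, held in SRAM

def nand_read(phys, length): return bytes(length)  # stub: one small NAND access
def ecc_check(buf): pass              # stub: verified inside the SSD, in SRAM
def dma_to_kernel_memory(buf): pass   # stub: SSD SRAM -> kernel memory, no CPU
def accelerator_decode(req): pass     # stub: tamper check/decrypt/decompress,
                                      # then DMA into user memory
def signal_main_cpu(req): pass        # stub: the first point the main CPU wakes

def file_archive_read(offset, length):
    """Runs on the secondary CPU; the main CPU stays idle until signalled."""
    for off in range(offset, offset + length, CHUNK):
        # Split one big request into many small accesses so the NAND
        # devices/channels work in parallel (the 'expanded read unit').
        phys = sram_lookup_table.get(off // BLOCK, off)  # SRAM hop, no DRAM
        buf = nand_read(phys, min(CHUNK, offset + length - off))
        ecc_check(buf)
        dma_to_kernel_memory(buf)
    accelerator_decode((offset, length))
    signal_main_cpu((offset, length))
```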


Interestingly, for a patent, it describes in some detail the processing targets required of these various components in order to meet certain data transfer rates - what you would need in terms of timings from each of the secondary CPU, the memory controller and the hardware accelerator in order for them not to be a bottleneck on the NAND data speeds:
CYH6AMw.png


Though I wouldn't read too much into this, in most examples it talks about what you would need to support an end-to-end transfer rate of 10GB/s.

The patent is also silent on what exactly the IO bus would be - that would obviously be a key bottleneck itself on transfer rates out of the NAND devices. Until we know what it is, it's hard to know what the upper end of the transfer rates could be, but it seems a host of customisations are possible to try to maximise whatever that bus will support.

Once again, this is one described embodiment - not necessarily exactly what the PS5 solution will look like. But it is an idea of what Sony's been researching in how to customise an SSD and software stack for faster read throughput for installed game data.
TL;DR
- some hardware changes vs the typical inside the SSD (SRAM for housekeeping and data buffering instead of DRAM)
- some extra hardware and accelerators in the system for handling file IO tasks independent of the main CPU
- at the OS layer, a second file system customized for these changes

all primarily aimed at higher read performance and removing potential bottlenecks for data that is written less often than it is read, like data installed from a game disc or download.


Adding this to the other rumors (double the usual RAM cache per terabyte of NAND - and now it may be SRAM instead of DRAM - plus PCIe 4.0 doubling available throughput, possibly using the Phison E16 controller), we have the blueprints for a very high throughput SSD indeed, enough that I'd say calling it the "broadband" SSD isn't bullspeak lol.

What I wonder is if they'll have to block external hard drives: if games are remade to take full advantage of an SSD's architecture, they'd perform even worse on HDDs than the sheer increase in game size next gen already implies, which could push load times into unacceptable territory. And what about external SSDs?
 
Last edited:

joe_zazen

Member
Is it worth it? OK, let's put it this way: you wanna fuck an escort with fake tits and the bitch charges $3,000 an hour. Is it worth the $3,000 just for the fake tits? You could find a chick that looks as good as her but without the fake tits, or a street whore that charges $250 an hour but doesn't have fake tits. You could just use that $3,000 to trick a hoe and get multiple fucks out of it versus just an hour fuck. A financially smarter person would say nah, it's not worth it, I'ma just go with the cheaper option or trick a hoe and pay for her implants.

holy shit...LOL

Soldered SSD is even worse than a proprietary SSD lol

Depends on what the benefits are.
 

SonGoku

Member
(double the usual RAM cache per terabyte of NAND,

The idea I got is that by using SRAM the cache size could be decreased considerably; doesn't that contradict the rumor?
What I wonder is if they'll have to block external hard drives: if games are remade to take full advantage of an SSD's architecture, they'd perform even worse on HDDs than the sheer increase in game size next gen already implies, which could push load times into unacceptable territory. And what about external SSDs?
I assume they would treat external drives and any other form of user-replaceable drive as cold storage.
 

vpance

Member
I can see an OS that intelligently caches the top 5 games you play the most or something like that. And new games you buy and install will bump an older game off the cache.
 

CyberPanda

Banned
If it's soldered to the board, does it really matter? They could always offer user-replaceable cold storage.
From the looks of it, it's not the SSD that's customized but the board, with added hardware (memory controller, secondary processor, SRAM, HW accelerator), so maybe it could be user-upgradeable fast NVMe.
The SSDs I have in my MacBook Pro are blazing fast. It makes such a huge difference.
 

LordOfChaos

Member
The idea I got is that by using SRAM the cache size could be decreased considerably; doesn't that contradict the rumor?

I assume they would treat external drives and any other form of user-replaceable drive as cold storage.


Maybe both are true: the dev kit version has double the DRAM/TB to replicate what the final version, with half the RAM but faster SRAM, will have.

I hope some of the more hardcore PC review sites dig into this SSD, could be really interesting. Wonder if the Phison version for PCs will be the same.
 
I don't like streaming either, and sure, I like the Surface Pro; I think it's a great overpriced device to compete with the overpriced Apple hardware. As far as gaming consoles go, I don't know if Microsoft delivered with the Xbox One. Sure, the Xbox One X has great specs, but it's just sitting there in my room and I only have Halo for it. I would have much rather paid a $15-a-month streaming subscription just to play Halo and that's it.
If you have a PC, then consider your wish granted. But just because YOU don't play it doesn't mean that 50 million others don't. Microsoft came out on the wrong foot this gen, and the previous gen was much better for them. But just because you don't do as well as your competitor at selling consoles doesn't mean you didn't make money selling games. You don't just exit the console market when you have over $10 billion in revenue from gaming alone.
 

SonGoku

Member
Maybe both are true: the dev kit version has double the DRAM/TB to replicate what the final version, with half the RAM but faster SRAM, will have.

I hope some of the more hardcore PC review sites dig into this SSD, could be really interesting. Wonder if the Phison version for PCs will be the same.
Maybe, that being said 16GB would be so disappointing.
 

LordOfChaos

Member
Maybe, that being said 16GB would be so disappointing.

16GB? Was talking about this part, the SSD Cache
3 Samsung K4AAG085WB-MCRC, 2 of those close to the NAND acting as DRAM cache (unusual 2GB DRAM per 1 TB NAND)

Maybe the prototype is using double the DRAM to match the latency SRAM would provide with less of it
 
Last edited:

SonGoku

Member
16GB? Was talking about this part, the SSD Cache
3 Samsung K4AAG085WB-MCRC, 2 of those close to the NAND acting as DRAM cache (unusual 2GB DRAM per 1 TB NAND)

Maybe the prototype is using double the DRAM to match the latency SRAM would provide with less of it
I thought you were at first, then got confused lol.
 

ethomaz

Banned
16GB? Was talking about this part, the SSD Cache
3 Samsung K4AAG085WB-MCRC, 2 of those close to the NAND acting as DRAM cache (unusual 2GB DRAM per 1 TB NAND)

Maybe the prototype is using double the DRAM to match the latency SRAM would provide with less of it
I'm a bit confused here... SSDs already use DRAM for cache (about 1GB per TB of SSD) and that is what the patent talks about... the patent changes the 1GB of DRAM to a much smaller amount of SRAM, and that way the cache becomes way faster and smaller.
 

SonGoku

Member
I'm a bit confused here... SSDs already use DRAM for cache (about 1GB per TB of SSD) and that is what the patent talks about... the patent changes the 1GB of DRAM to a much smaller amount of SRAM, and that way the cache becomes way faster and smaller.
yes.. that's what he is saying:
The dev kit version has double the DRAM/TB to replicate what the final version, with half the DRAM but faster SRAM, will have.
But wouldn't the final version have zero DRAM?
 
Last edited:

xool

Member
(ignoring the patent) .. there's no way they're going to use SRAM for an SSD cache - SRAM is so fast the place to put it would be some part of main memory (or render buffer ...)

.. the SRAM would read data faster than any GDDR6/HBM2 could be written to ..
 

SonGoku

Member
(ignoring the patent) .. there's no way they're going to use SRAM for an SSD cache - SRAM is so fast the place to put it would be some part of main memory (or render buffer ...)

.. the SRAM would read data faster than any GDDR6/HBM2 could be written to ..
Read the patent, its use is justified
 

ethomaz

Banned
(ignoring the patent) .. there's no way they're going to use SRAM for an SSD cache - SRAM is so fast the place to put it would be some part of main memory (or render buffer ...)

.. the SRAM would read data faster than any GDDR6/HBM2 could be written to ..
That is why they are using SRAM.
 

Aceofspades

Banned
Post I've found online about a Sony patent for SSD
This will be one for people interested in some potentially more technical speculation. I posted in the next-gen speculation thread, but was encouraged to spin it off into its own thread.
...
all primarily aimed at higher read performance and removing potential bottlenecks for data that is written less often than it is read, like data installed from a game disc or download.

Damn, Sony is going all out. I hope this stuff makes it into PS5 hardware.
 

kikonawa

Member
Is it worth it? OK, let's put it this way: you wanna fuck an escort with fake tits and the bitch charges $3,000 an hour. Is it worth the $3,000 just for the fake tits? You could find a chick that looks as good as her but without the fake tits, or a street whore that charges $250 an hour but doesn't have fake tits. You could just use that $3,000 to trick a hoe and get multiple fucks out of it versus just an hour fuck. A financially smarter person would say nah, it's not worth it, I'ma just go with the cheaper option or trick a hoe and pay for her implants.


$250 an hour is a lot
 

Ar¢tos

Member
Even if its soldered, I'm sure there will be a slot for a normal sata hdd (or ssd). Just be ready to lose some space in the main ssd, to be reserved as cache for games from the hdd.
 

LordOfChaos

Member
yes.. that's what he is saying:

But wouldn't the final version have zero DRAM?

I said RAM, not sure how that D snuck in there ;)

2x DRAM prototype -> 1x SRAM final

I'm a bit confused here... SSDs already use DRAM for cache (about 1GB per TB of SSD) and that is what the patent talks about... the patent changes the 1GB of DRAM to a much smaller amount of SRAM, and that way the cache becomes way faster and smaller.

From the OQA PCB specs:

3 Samsung K4AAG085WB-MCRC, 2 of those close to the NAND acting as DRAM cache (unusual 2GB DRAM per 1 TB NAND)
 
Last edited:

LordOfChaos

Member
(ignoring the patent) .. there's no way they're going to use SRAM for an SSD cache - SRAM is so fast the place to put it would be some part of main memory (or render buffer ...)

.. the SRAM would read data faster than any GDDR6/HBM2 could be written to ..


It's more the latency than the throughput, imo. You have an even faster SRAM cache on the SSD controller holding the lookup table used to fetch from the NAND. This would be an IOPS monster - useful if you want a generation that largely streams games off the drive rather than hitting load screens....
 

GermanZepp

Member
Soldered SSD is even worse than a proprietary SSD lol

SSD should be a standard replaceable PC part... if not, Sony will charge you the way they did with the infamous Vita cards lol

If you can't manage 1 TB of space for games you deserve to pay the price.
 
Last edited:

xool

Member

It's two different caches with different purposes - the DRAM and SRAM caches for the SSD. I don't think A is supposed to replace B.

The patent is about using SRAM within the SSD's hardware controller ("flash controller"), primarily for address translation lookups - that's not the same as replacing the 4GB of DRAM that Sony (allegedly) is going to be using as an intermediate cache between the SSD and main memory.

The SRAM address translation table could point to the DRAM [instead of/additionally to locations on the SSD] (which should/could help, but isn't mentioned in the patent), but it's not going to replace 4GB of DDR4.

They also mention using the SRAM as a buffer (not the same as a cache) - but when they do this they need to load it from the SSD first.

This isn't the same as the expected use of the 4GB of DDR4 - which would be a cached copy of SSD data that can be read faster, without ever needing to access the relatively slow SSD.
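A toy way to see the distinction between the two roles (all names invented for illustration):

```python
# Toy contrast of the two caches: the SRAM table answers "where is the data",
# a DDR4-style staging cache holds the data itself. Purely illustrative.
def nand_read(phys):
    return b"..."  # stub for the comparatively slow NAND access

sram_map = {0: 4096}  # tiny: logical block -> physical NAND address
dram_cache = {}       # large: logical block -> the data itself

def read_block(lba):
    if lba in dram_cache:        # data cache hit: the SSD is never touched
        return dram_cache[lba]
    phys = sram_map[lba]         # translation via SRAM: fast, but only "where"
    data = nand_read(phys)       # the NAND still has to be read
    dram_cache[lba] = data       # stage it in DRAM for next time
    return data
```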
 
Last edited: