
PS5 SSD & I/O Complex Patent

M1chl

Currently Gif and Meme Champion
Sorry, I’m with the iPhone. Anyways, northern enough.


I am sorry for my out-of-place rude comment :D
 

Great Hair

Banned
I knew it! When I said in the Next Gen thread that the constant video recording for the "share" functionality would destroy the SSD in a very short time, they told me that modern SSDs are capable of handling it.

It should handle 1TB and more per day without breaking a sweat. Mine dropped from 100% to 81% health after 10 years (OS drive). Even streaming at 4K60 won't hurt the SSD.

Normal use will never affect them negatively, and even heavy use won't be an issue.

At the best possible quality, Stadia will use 35 Mbps, or about 15.75GB per hour. At Google's recommended minimum quality, Stadia will use about 4.5GB per hour.

24*16*365 = 140,160GB per year (~140TB, 24/7) @35Mbit (Stadia 4K preset)
24*32*365 = 280,320GB per year (~280TB, 24/7) @70Mbit
24*320*365 = 2,803,200GB per year (~2,800TB, 24/7) @700Mbit

SAMSUNG 970 EVO (500GB, 2018)
Rated for 300TBW (total terabytes written) over its 5-year warranty - roughly 164GB of writes per day, far more than any realistic share-recording workload.

The 2TB model is rated for 1,200TBW.
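
If anyone wants to check the arithmetic, here's a quick script using the figures above (the TBW numbers are Samsung's published warranty ratings; the rest is back-of-the-envelope, so treat it as a rough check rather than gospel):

```python
# Back-of-the-envelope SSD endurance check for 24/7 stream-quality recording.
# Bitrates are the Stadia figures quoted above; TBW (total terabytes written)
# is the 970 EVO 500GB's rated warranty endurance.

def gb_written_per_year(mbit_per_s: float) -> float:
    """GB written per year when recording 24/7 at a given bitrate."""
    gb_per_hour = mbit_per_s / 8 / 1000 * 3600  # Mbit/s -> GB per hour
    return gb_per_hour * 24 * 365

def years_until_tbw(tbw_terabytes: float, mbit_per_s: float) -> float:
    """Years of nonstop recording before the TBW rating is used up."""
    return tbw_terabytes * 1000 / gb_written_per_year(mbit_per_s)

for rate in (35, 70, 700):
    print(f"{rate:>3} Mbit/s -> {gb_written_per_year(rate) / 1000:7.1f} TB/year, "
          f"300 TBW lasts {years_until_tbw(300, rate):5.2f} years of 24/7 use")
```

So nonstop 4K recording would chew through the 500GB model's rating in a couple of years, but at a realistic few hours of recording per day, five years of writes still land comfortably under it.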

 

Radical_3d

Member
It should handle 1TB and more per day without breaking a sweat. Mine dropped from 100% to 81% health after 10 years (OS drive). Even streaming at 4K60 won't hurt the SSD.

Normal use will never affect them negatively, and even heavy use won't be an issue.

At the best possible quality, Stadia will use 35 Mbps, or about 15.75GB per hour. At Google's recommended minimum quality, Stadia will use about 4.5GB per hour.

24*16*365 = 140,160GB per year (~140TB, 24/7) @35Mbit (Stadia 4K preset)
24*32*365 = 280,320GB per year (~280TB, 24/7) @70Mbit
24*320*365 = 2,803,200GB per year (~2,800TB, 24/7) @700Mbit

SAMSUNG 970 EVO (500GB, 2018)
Rated for 300TBW (total terabytes written) over its 5-year warranty - roughly 164GB of writes per day, far more than any realistic share-recording workload.

The 2TB model is rated for 1,200TBW.

Thanks. This is a relief.
 

Trimesh

Banned
So they are changing memory addresses to lessen the rewriting impact on SSDs? That seems like a good idea. Good on them.

Yes, precisely. This is just a description of the wear management system that Sony have used in their drive, written in maximally confusing patentese. Note that all SSDs have some form of wear management, because if they didn't, the most frequently written blocks would quickly be destroyed. Also note that for at least the last few decades, the majority of patents don't really describe any advances in the state of the art - just different ways of doing things that aren't covered by existing patents - which is why poring over them trying to extract anything of value is often a pointless exercise.
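
To make "wear management" concrete, here's a toy sketch of the idea (illustrative only - real flash translation layers are far more involved, and this is not Sony's scheme):

```python
# Toy wear-leveling sketch. A flash controller never rewrites a logical
# block in place: each write goes to a fresh physical block, chosen here
# as the least-worn free one, so erase cycles spread across the whole
# drive instead of hammering one hot block to death.

import heapq

class ToyFTL:
    def __init__(self, num_blocks: int):
        # Min-heap of (erase_count, physical_block): least-worn pops first.
        self.free = [(0, pb) for pb in range(num_blocks)]
        heapq.heapify(self.free)
        self.mapping = {}  # logical block -> (erase_count, physical block)

    def write(self, logical: int) -> int:
        if logical in self.mapping:
            count, pb = self.mapping.pop(logical)
            # The old block gets erased and rejoins the pool one cycle older.
            heapq.heappush(self.free, (count + 1, pb))
        count, pb = heapq.heappop(self.free)  # pick the least-worn block
        self.mapping[logical] = (count, pb)
        return pb

ftl = ToyFTL(num_blocks=4)
# Rewriting the same logical block still rotates across physical blocks:
print([ftl.write(0) for _ in range(8)])  # -> [0, 1, 2, 3, 0, 1, 2, 3]
```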
 

Kenpachii

Member
A healthy dose of skepticism is always good...

Uncharted and God of War are designed that way to push visuals forward as far as possible, not because the HDD is shit.

The load times in between, yes, that's hard drives. But walking through corridor areas isn't because the HDD can't keep up; it's because they wanted to design the game like that, whether as filler or for the simple gameplay they aimed for. That's absolutely not a HDD limitation, or even a memory one. (Not talking about the God of War teleport walks, just the corridors.)

We have games on the PS4 like AC Odyssey, RDR2 and tons more open-world games as examples of this, all of which work perfectly fine and are magnitudes bigger in scope, without loading.

What he should have talked about is that SSDs will give you more detail on the fly in a game - exactly what Sony said. As an example, you could picture the detail jump like this.

From PS4:

[gif]


To this:

[gif]


From an extremely limited parallel world that can only be really simplistic, to multiple fully fledged worlds with minimal delay in between, all of which can be swapped on the fly at high quality and affect each other. Now imagine hell / heaven / earth / ghost worlds you can swap between all day long, as fully blown high-quality environments in a massive open-world game, without loading that breaks up the experience. (Basically, press Witcher senses and you are in an entirely new world; release it and boom, you are back in the other world and playing immediately.)

That's SSD and memory speed for you, and that's exactly what Sony should have demonstrated in their PS5 presentation. It would make any current open-world game or concept look extremely static and outdated, and it would explain the gains of an SSD far better than their shitty-ass jax example, which is no longer relevant even today in the way people think it is. Have a PS4 load worlds with quests that go between them, and a PS5 run the same quests: you will sit in loading screens forever on the PS4, and even the Xbox Series X will look like a slug. The PS5? You will actually be playing the game.

Even where there is still a delay, it can be covered with just an animation.
 

Lethal01

Member
Can't wait to finally have open-world games where you can walk into houses, leave, and move further into the world without loading.

Oh wait.
The houses you can go into can now be far more complex and far more distinct from everything around them, allowing much more unique interiors even for houses that are close together. It's also far easier to implement, even in the cases you mention, meaning devs will be able to do it more often and at a far larger scale - and at higher speeds.

What you're doing is like saying you aren't impressed because we can already seamlessly transition between the open-world and houses when we are playing Minecraft and Fortnite.

Game devs have done a great job of making these kinds of transitions happen despite the limitations of a hard drive, but how crazy things will get when they can do it with no difficulty is definitely going to be a "game changer".

Flying is neat too though.
 
Yes, precisely. This is just a description of the wear management system that Sony have used in their drive, written in maximally confusing patentese. Note that all SSDs have some form of wear management, because if they didn't, the most frequently written blocks would quickly be destroyed. Also note that for at least the last few decades, the majority of patents don't really describe any advances in the state of the art - just different ways of doing things that aren't covered by existing patents - which is why poring over them trying to extract anything of value is often a pointless exercise.
This is more than wear management. It's also about a low-level SSD access system that works alongside a traditional one, but is used for accessing read-only data (think textures, level data, code) in a very low-latency/high-bandwidth way. The low-level system has a much simpler and more condensed way of storing access metadata, so that much more of it can be kept in the SRAM of the SSD controller. This avoids needing to first go to the SSD or main memory just to look up housekeeping data before even accessing the data you really want. Cutting out that middleman reduces latency a lot, and helps maintain bandwidth when making lots of small read requests. (A rough sketch of the lookup idea follows the list below.)

There is other stuff in there too, like:
  • using priority levels to give that low-level access first dibs on the data
  • using accelerators to check data integrity, detect tampering, and handle decompression
  • splitting reads into small chunks that can be processed in parallel and staged on the controller before ultimately going to main memory
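
To give a feel for why condensed metadata matters, here's a rough sketch of the lookup idea as I read the patent - the names and numbers are made up for illustration, not taken from the patent:

```python
# Sketch: why condensed read-only metadata can live in controller SRAM.
# A conventional FTL maps every 4KB page individually, so the table is far
# too big for SRAM and a lookup may first need a trip to DRAM or NAND just
# to fetch the mapping entry. Read-only game data never gets rewritten, so
# it can be laid out in big contiguous extents described by one tiny record
# per file - small enough for SRAM, and a lookup is pure arithmetic.

PAGE = 4096
ENTRY_BYTES = 4  # ~4 bytes of mapping per 4KB page in a conventional FTL

def conventional_table_size(drive_bytes: int) -> int:
    """Mapping-table size if every 4KB page has its own entry."""
    return drive_bytes // PAGE * ENTRY_BYTES

# Condensed map: file -> (start of contiguous extent, length). Hypothetical.
sram_extent_map = {
    "textures.pak": (0x0_0000_0000, 8 << 30),
    "level01.bin":  (0x2_0000_0000, 2 << 30),
}

def read_only_lookup(file_id: str, offset: int) -> int:
    """Resolve a physical address with no DRAM/NAND metadata fetch."""
    start, length = sram_extent_map[file_id]  # the whole map fits in SRAM
    assert offset < length
    return start + offset

print(f"per-page table for a 1TB drive: {conventional_table_size(1 << 40) >> 20} MB")
print(f"physical address: {read_only_lookup('textures.pak', 123456):#x}")
```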
 

Trimesh

Banned
This is more than wear management. It's also about a low-level SSD access system that works alongside a traditional one, but is used for accessing read-only data (think textures, level data, code) in a very low-latency/high-bandwidth way. The low-level system has a much simpler and more condensed way of storing access metadata, so that much more of it can be kept in the SRAM of the SSD controller. This avoids needing to first go to the SSD or main memory just to look up housekeeping data before even accessing the data you really want. Cutting out that middleman reduces latency a lot, and helps maintain bandwidth when making lots of small read requests.

There is other stuff in there too, like:
  • using priority levels to give that low-level access first dibs on the data
  • using accelerators to check data integrity, detect tampering, and handle decompression
  • splitting reads into small chunks that can be processed in parallel and staged on the controller before ultimately going to main memory
But none of these techniques are new - all they've done is mash them up into a specific implementation and wrap it in enough patentese to pass the now very low bar for "innovation" you need to get a patent. I've been involved in this game myself - when you're designing something you make a whole bunch of implementation decisions, and it's now common practice to try to describe them in a way that's possibly patentable, to give you extra ammunition against other people.

Personally, I absolutely hate it - you end up expending engineering effort on stupid crap that's just different enough to avoid the minefield created by all the other nonsense patents that have been issued, rather than concentrating on the things that will actually make a difference. But it makes work for "IP lawyers", and I guess that has to count for something ... maybe.
 
But none of these techniques are new - all they've done is mash them up into a specific implementation and wrap it in enough patentese to pass the now very low bar for "innovation" you need to get a patent. I've been involved in this game myself - when you're designing something you make a whole bunch of implementation decisions, and it's now common practice to try to describe them in a way that's possibly patentable, to give you extra ammunition against other people.

Personally, I absolutely hate it - you end up expending engineering effort on stupid crap that's just different enough to avoid the minefield created by all the other nonsense patents that have been issued, rather than concentrating on the things that will actually make a difference. But it makes work for "IP lawyers", and I guess that has to count for something ... maybe.
I was responding to you saying this was about SSD wear management. It wasn't. Wear management was hardly mentioned in the patent. This patent was primarily about how to remove the bottlenecks that would cripple a very fast SSD.

I happen to agree that way too much stuff gets patented. No, there is nothing earth-shattering in this patent that anyone else with expertise in the tech wouldn't think of if given the task. Although adding the ability of the storage system to tailor itself to each individual file, to make storage and retrieval of that file more efficient, at least sounds like something that isn't widely done - at least not at that granular a level.
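
To give a feel for what per-file tailoring could mean, here's a purely hypothetical sketch - the patent doesn't spell out parameters like these, so the names and heuristics are invented:

```python
# Hypothetical per-file storage tailoring at install time. Parameter names
# and heuristics are invented for illustration; they are not from the patent.

from dataclasses import dataclass

@dataclass
class FilePolicy:
    chunk_kb: int     # read-unit size the file's data is laid out in
    compression: str  # codec chosen for this file's content type
    priority: int     # 0 = highest (latency-critical streaming data)

def pick_policy(name: str, streamed: bool) -> FilePolicy:
    """Choose storage parameters from what is known about the file."""
    if name.endswith((".tex", ".pak")) and streamed:
        # Big sequentially streamed assets: large chunks, texture codec.
        return FilePolicy(chunk_kb=256, compression="texture", priority=0)
    if name.endswith(".bin"):
        # Small random-access data: small chunks, general-purpose codec.
        return FilePolicy(chunk_kb=16, compression="general", priority=1)
    return FilePolicy(chunk_kb=64, compression="general", priority=2)

print(pick_policy("city.tex", streamed=True))
print(pick_policy("savegame.bin", streamed=False))
```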

The real benefit of the patent is that it gives insight into what the PS5's SSD is doing. While nothing is exceptionally new, it is still more than what is standard and likely more than what is in the XSX. Sony spent real design and hardware resources to get the latency low enough and the transfer speed high enough that the SSD can work as a major force multiplier on system RAM. That will be a PS5 differentiator next gen, just like the higher CU count will differentiate the XSX. The thing is, though, a higher CU count is pretty standard stuff - there is not a lot of mystery there. A really fast SSD built for gaming is new, which explains at least my excitement to read about it.
 