
Next-Gen PS5 & XSX |OT| Console tEch threaD


SNG32

Member
2e08250d-playstation5-ps5-psv-vr4player-003-1140x641.jpg




If this came in at 1000 dollars, I would actually pay for that.
 
The "EU low power off" thing is a legacy of when TVs/HiFi systems had massive iron cored transformers in the power supply that remained energized in standby mode and drew a lot of power because of their low efficiency.

Now all that is needed is a tiny microcontroller to monitor the power button, (and possibly a wireless chip) for an on signal so that it can switch a relay to start sucking power again.

0.5W standby is pretty normal for home electronics standby mode in the EU nowadays..

Now if they could keep the GDDR6 to retain memory in that 0.5W standby mode I would be impressed .. I don't think that is what we will get though [edit - this Enhancing DRAM Self-Refresh for Idle Power Reduction puts lowest self refresh power at ~6W for 32GB of DDR4 .. too high )
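Rough numbers on what those two standby figures mean over a year, just to put them in perspective (my own arithmetic, nothing official):

```cpp
// Back-of-the-envelope standby energy: 0.5W (EU-style low power off)
// vs ~6W (DRAM self-refresh for 32GB, per the paper linked above).
#include <cstdio>

int main() {
    const double hours_per_year = 24.0 * 365.0;
    const double low_power_w    = 0.5;
    const double self_refresh_w = 6.0;

    std::printf("0.5W standby: ~%.1f kWh/year\n", low_power_w    * hours_per_year / 1000.0); // ~4.4
    std::printf("6W standby:   ~%.1f kWh/year\n", self_refresh_w * hours_per_year / 1000.0); // ~52.6
    return 0;
}
```

Roughly an order of magnitude apart, which is presumably why the regulation cares about that 0.5W target.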
I'll always take 6W over 0.5W, especially if the custom SSD is built-in with no upgrade option.

With a 2+GB/s SSD, is it needed though?

100-1000 write cycles for QLC (the cheapest and densest variant of NAND). That's awfully low to waste on suspend-to-disk, isn't it?

DRAM/SRAM allow for unlimited writes.

Does Sony seriously want to deal with bricked consoles and disgruntled consumers? I mean, I get the PR aspect of 0.5W ("saving the planet" is a noble goal), but it's not worth the trade-off for me.
 

SonGoku

Member
I'll always take 6W over 0.5W, especially if the custom SSD is built-in with no upgrade option.



100-1000 write cycles for QLC (the cheapest and densest variant of NAND). That's awfully low to waste on suspend-to-disk, isn't it?

DRAM/SRAM allow for unlimited writes.

Does Sony seriously want to deal with bricked consoles and disgruntled consumers? I mean, I get the PR aspect of 0.5W ("saving the planet" is a noble goal), but it's not worth the trade-off for me.
Writes are inevitable to install games, though.
 
Writes are inevitable to install games, though.
Yeah, but installs are far less frequent. Suspending a game you play on a daily basis is going to add up (16-32GB × 365 days × 5-10 years; rough math in the sketch below).

Not to mention that we will have the option of USB HDDs, whether it's for cold storage or not. I reckon games will have appropriate flagging depending on their requirements.

Cross-gen AAA games and indies should play just fine off an external HDD, while true next-gen games written specifically around the SSD baseline (no door/elevator scenes to mask loading times) will be flagged to require SSD storage.
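For a rough sense of scale on the wear argument (my assumptions, not confirmed specs: a 32GB suspend image written once a day, and a hypothetical 1TB QLC drive rated at ~300 P/E cycles):

```cpp
// Rough suspend-to-disk wear estimate vs a hypothetical QLC drive's endurance.
#include <cstdio>

int main() {
    const double gb_per_suspend = 32.0;            // assumed RAM image size
    const double days           = 365.0 * 10.0;    // 10-year console lifespan
    const double written_tb     = gb_per_suspend * days / 1000.0;

    const double drive_tb  = 1.0;                  // hypothetical 1TB QLC SSD
    const double pe_cycles = 300.0;                // optimistic QLC rating
    const double tbw       = drive_tb * pe_cycles; // ~300 TB written of endurance

    std::printf("Suspend writes over 10 years: ~%.0f TB\n", written_tb); // ~117 TB
    std::printf("Hypothetical drive endurance: ~%.0f TBW\n", tbw);
    return 0;
}
```

Not a drive-killer on its own under those assumptions, but it stacks on top of installs and patches, which is the concern above.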
 

xool

Member
About GPU compute
  1. GPU compute was a thing because of the weak PS4 CPU and will go away next gen
  2. GPU compute is the future
Which is it, 1 or 2? Is it really possible to do compute on the GPU without affecting gfx performance?
 
About GPU compute
  1. GPU compute was a thing because of the weak PS4 CPU and will go away next gen
  2. GPU compute is the future
Which is it, 1 or 2?
Offloading CPU tasks to the (GP)GPU is not going anywhere. Why? Because there's a historical, long-term trend that proves it again and again.

Remember when nVidia released the GeForce 256? 3Dfx was the top dog back then. They kept saying "hardware T&L is a fad; you only need a faster CPU to calculate matrices faster".

And where's 3Dfx now? That's exactly my point. ;) Adapt or perish.

Is it really possible to do compute on the GPU without affecting gfx performance?
Yes, it is. In fact, the GCN ISA has been designed that way.

When people say that GCN is less efficient than Maxwell/Pascal, they mean GCN typically sits at only ~70% GPU occupancy vs ~90% on nVidia. So nVidia is more efficient from the get-go. That doesn't mean that GCN cannot surpass 90% with proper coding.

That means there's ~30% of untapped capacity in the GPU pipelines that you can take advantage of via Asynchronous Compute. You're not really affecting gfx performance (traditional vertex/pixel shaders) when you fill that 30% with compute tasks. GCN is designed to work that way.

nVidia Kepler also had something similar:

large-video-hyper-q-2-en.jpg


It's basically the equivalent of the HyperThreading that we've had since the Pentium 4 era. Inside a modern CPU there are lots of pipelines that are not working at their fullest capacity. AMD is even pondering the possibility of offering 4-way SMT in future Zen CPUs.

For reference, modern GPUs offer the equivalent of 8-10-way HT. They perform exceptionally well when you fill them with lots of threads. Even Turing gets serious performance benefits from async compute shaders.

TL;DR: if games like Uncharted 4, Gears 5 (raytracing running via async), God of War, Spider-Man, and DOOM 2016 have not convinced you about the immense benefits of GPGPU compute, then I don't know what else will. :)
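For anyone curious what "filling the idle capacity with compute" looks like from the API side, here's a minimal sketch of creating an async compute queue next to the usual graphics (direct) queue in D3D12 - just the queue setup, not a full renderer, and whether the work actually overlaps is up to the driver/hardware:

```cpp
// Minimal D3D12 sketch: a direct (graphics) queue plus an async compute queue.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available");
        return 1;
    }

    // Graphics (direct) queue: vertex/pixel work is submitted here.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Async compute queue: compute shaders submitted here can overlap with
    // graphics work and soak up otherwise-idle CU cycles.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    std::puts("Direct + async compute queues created");
    return 0;
}
```

The engine still has to pick compute jobs that don't fight the graphics work for bandwidth and registers, which is where the occupancy discussion above comes in.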
 

xool

Member
It's basically the equivalent of HyperThreading that we've had since the Pentium 4 era.
Good (literal) analogy

When people say that GCN is less efficient than Maxwell/Pascal, they mean GCN typically sits at only ~70% GPU occupancy vs ~90% on nVidia.
My assumption was that the gains in RDNA over old GCN are mostly due to shifting this number towards Nvidia's level of occupancy - but that's just me speculating.

I wonder what the deal is with bandwidth (and the GPU's local memory/register arrays) when using GPU compute ..
 

Darius87

Member
My assumption was that the gains in RDNA over old GCN are mostly due to shifting this number towards Nvidia's level of occupancy - but that's just me speculating.

I wonder what the deal is with bandwidth (and the GPU's local memory/register arrays) when using GPU compute ..
The RDNA architecture introduces a new scheduling and quality-of-service feature known as Asynchronous Compute Tunneling that enables compute and graphics workloads to co-exist harmoniously on GPUs. In normal operation, many different types of shaders will execute on the RDNA compute unit and make forward progress. However, at times one task can become far more latency sensitive than other work. In prior generations, the command processor could prioritize compute shaders and reduce the resources available for graphics shaders. As Figure 5 illustrates, the RDNA architecture can completely suspend execution of shaders, freeing up all compute units for a high-priority task. This scheduling capability is crucial to ensure seamless experiences with the most latency sensitive applications such as realistic audio and virtual reality.
 

xool

Member
The RDNA architecture introduces a new scheduling and quality-of-service feature known as Asynchronous Compute Tunneling that enables compute and graphics workloads to co-exist harmoniously on GPUs. In normal operation, many different types of shaders will execute on the RDNA compute unit and make forward progress. However, at times one task can become far more latency sensitive than other work. In prior generations, the command processor could prioritize compute shaders and reduce the resources available for graphics shaders. As Figure 5 illustrates, the RDNA architecture can completely suspend execution of shaders, freeing up all compute units for a high-priority task. This scheduling capability is crucial to ensure seamless experiences with the most latency sensitive applications such as realistic audio and virtual reality.
Mmh, interesting - sounds a bit like thread priority
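If you want the "thread priority" flavour of that from the API side, D3D12 does expose a priority field on command queues; the full suspend-the-graphics-shaders tunneling the whitepaper describes is hardware/driver behaviour rather than something you toggle directly. A minimal sketch, same caveats as the queue example further up:

```cpp
// Hedged sketch: a higher-priority async compute queue in D3D12, the closest
// API-visible knob to the "latency sensitive compute" idea quoted above.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_HIGH; // vs _NORMAL for regular async work
    ComPtr<ID3D12CommandQueue> latencySensitiveQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&latencySensitiveQueue));
    return 0;
}
```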
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Plus Sony has talked about a more Apple-like approach: a shortish reveal-to-release window. I can see April for the hardware reveal. Heck, the Switch was shown like 4.5 months before launch.

Nah! They will give us at least 6 months before release. 6 months has to be the minimum.
 

xool

Member

So, summing up the article's content - I annotated the useful table:

XNfcpjQ.png


There's a better source on the RX 5600: https://www.theinquirer.net/inquirer/news/3079033/amd-radeon-rx-5600-leak - max 24 CUs - Komachi leak (mmh, actually the same amount of information as in https://wccftech.com/amd-navi-14-rx-5600-series-gpu-leaked-24-cus-1536-sps-1900mhz/ .. that was July - well, if there's no news, just reprint old news)

[edit] Is anyone expecting better than 5700/5700 XT performance next gen (raytracing extras excluded)?
 

LordOfChaos

Member
It's basically the equivalent of the HyperThreading that we've had since the Pentium 4 era. Inside a modern CPU there are lots of pipelines that are not working at their fullest capacity. AMD is even pondering the possibility of offering 4-way SMT in future Zen CPUs.


Great post. Though on this, I've been speaking to some peeps in the know who seem to think this is a pesky rumor that keeps springing up and won't die - AMD's backend is already completely occupied with 2-way SMT, and you'd also have to substantially beef up the TLB and other buffers to avoid losses from that much resource sharing.

Now, POWER8 has offered that for years, so it's possible, but at the time it only eked out wins in a few areas against Intel, and anyway there are differences between optimizing for consumer code and massive enterprise workloads. At any rate you'd end up with a substantially beefed-up architecture.
 
So, summing up the article's content - I annotated the useful table:

XNfcpjQ.png


There's a better source on the RX 5600: https://www.theinquirer.net/inquirer/news/3079033/amd-radeon-rx-5600-leak - max 24 CUs - Komachi leak (mmh, actually the same amount of information as in https://wccftech.com/amd-navi-14-rx-5600-series-gpu-leaked-24-cus-1536-sps-1900mhz/ .. that was July - well, if there's no news, just reprint old news)

[edit] Is anyone expecting better than 5700/5700 XT performance next gen (raytracing extras excluded)?
Source info:
Based on the specifications posted by 3DCenter, the GPU could offer 30 to 50% more stream processors than Navi 10, which maxes out at 2560 SPs. If that ends up being the case, then we could see a core count similar to the RX Vega 56. It's also stated that the card would retain a 256-bit bus interface and would utilize GDDR6 memory, but we could possibly see faster memory clocks resulting in higher bandwidth.
However, the source doesn’t rule out the possibility of getting HBM2 memory alongside Navi 12.
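Quick sanity check on that "30 to 50% more SPs than Navi 10" line - just arithmetic on the quoted numbers, nothing new:

```cpp
// 30-50% on top of Navi 10's 2560 SPs, compared with RX Vega 56's 3584 SPs.
#include <cstdio>

int main() {
    const int navi10_sps = 2560;
    std::printf("+30%%: %d SPs\n", navi10_sps * 13 / 10); // 3328
    std::printf("+50%%: %d SPs\n", navi10_sps * 15 / 10); // 3840
    std::printf("RX Vega 56: 3584 SPs\n");                // sits inside that range
    return 0;
}
```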
 

LordOfChaos

Member
Panos on stage with a semi-custom Ryzen in the Surface Laptop 3 ("custom GPU cores"?), not sure if it's the first mobile AMD 7nm part or not...

 

LordOfChaos

Member
12nm, alas. I had a faint hope that they had struck a launch-exclusive deal for the first 7nm AMD mobile parts, but it's still cool that AMD got into a Surface.
 

LordOfChaos

Member
While Microsoft announced the basic specs for Project Scarlett, it left us confused as to why we had to wait another year for the console’s release. I posited that one reason the console wouldn’t arrive until 2020 was that AMD was slow to get ray tracing support onto Navi, its next-generation graphics platform that will be the basis for both Sony and Microsoft’s future consoles. In the message, the tipster said, “Correct that AMD Navi v late.”

So I did some poking around because the tipster made curious claims about the speed of the devices, the quality of their graphics, and even what kind of cameras they’d use for streaming. We typically wouldn’t run a story with a source who won’t confirm their identity. But, again, this one had something other tipsters do not: a photograph of a device. And not just any photo. Their photo looks just like official Sony illustrations pulled from a registry on a government website. The illustrations are rumoured to be mockups of a pre-production PS5, and they circulated widely after their discovery in August. Our tipster sent us the photo in June. (In the interest of protecting our tipster, we won’t post the photos here.)




 

xool

Member

also here https://www.neogaf.com/threads/ps5-devkit-name-nextbox-camera-a-huge-priority.1504789

Seriously, fuck these Gizmodo clowns.

Got pics - doesn't print them

Got tech info - doesn't print it

Whole thing is just a clickbait pile of nothing that says "we got a leak, can't tell you, something camera something".

Then there's this :

Update: A Microsoft spokesman denied any camera technology is in development and that none has been delivered to developers in any form. The orignial post remains as originally written.

So MS just said "fake" about the single point Gizmodo felt able to reveal? Or did they get a spoofed MS call? wtf. They usually "don't comment" .. maybe they got shook by the prospect of bad "camera spy" publicity after Kinect.

Also love the professionalism of not even spellchecking the update. Clowns
 

Imtjnotu

Member
also here https://www.neogaf.com/threads/ps5-devkit-name-nextbox-camera-a-huge-priority.1504789

Seriously, fuck these Gizmodo clowns.

Got pics - doesn't print them

Got tech info - doesn't print it

Whole thing is just a clickbait pile of nothing that says "we got a leak, can't tell you, something camera something".

Then there's this :



So MS just said "fake" about the single point Gizmodo felt able to reveal? Or did they get a spoofed MS call? wtf. They usually "don't comment" .. maybe they got shook by the prospect of bad "camera spy" publicity after Kinect.

Also love the professionalism of not even spellchecking the update. Clowns
Pretty much clickshit. Over it at this point
 

Fake

Member
With MS's big push for AI and neural computing, I pray Sony doesn't get left behind.
The last report from Richard was about how devs are quite happy with the PS5 devkit in comparison with the nextbox devkit.
IMO both will end up with similar or nearly identical specs, but the secret sauce from each will make the difference.
 