
Next-Gen PS5 & XSX |OT| Console tEch threaD

I imagine because it would double the cost of power consumption.
Streaming hardware is a costly endeavor.
Yeah, people assume that 40TF Google Stadia blades will be free. Nope, it doesn't work that way.

They can potentially even offer 100TF blades, but be prepared (as a dev) to pay 10x the regular cost. :)
 

SonGoku

Member
Yeah, people assume that 40TF Google Stadia blades will be free. Nope, it doesn't work that way.

They can potentially even offer 100TF blades, but be prepared (as a dev) to pay 10x the regular cost. :)
Yeah if they scaled the spec up to 20TF it would cut their infrastructure in half.
 

ethomaz

Banned
Some people argue that GCN (7xxx) is an evolution of VLIW4 (6xxx).

Is there any truth to that?

4xxx/5xxx series was VLIW5 (one SIMD core had 5 x 16 ALUs), which later became VLIW4 (4 x 16 ALUs = 64 just like GCN CUs) for efficiency reasons.

Maybe it's something similar this time around...
Not exactly the same, but yes, they are finally changing the core SIMD engine again... the last time was with GCN 1.0 years ago.
 

CrustyBritches

Gold Member
So pretty much confirmed the PS5 is using GDDR6, I guess.
AMD said that Navi can be either GDDR6 or HBM depending on the customer's needs. Here's the PCGamesN interview with AMD's David Wang and Scott Herkelman:
So, does that mean, as with Nvidia, that the high-end of AMD’s graphics stack – the datacentre, heavily compute-focused cards – remain tied into HBM2, while the gaming cards shift to GDDR6? Not necessarily.


“I would say it’s opportunistic,” says Scott Herkelman. “It depends on how we see our roadmap, how we would like to play it out with some of our partners, and the innovations we want to have and what we want to do in the professional space. But we are fully committed to HBM and we’re going to be fully committed to GDDR6, and let the best solution win.


“And maybe it evolves into different market places, but it will really depend on what we’re trying to solve.”
 

Fake

Member

Racer!

Member
Nice info. So no confirmation at all. Can be one of them. If the HBM price drops, or even HBM3, who knows.
Cool to have so many options, at least in the GPU market.

With HBM3 they can get insane bandwidth... it would push those 8K textures at lightning speed.
 

Panajev2001a

GAF's Pleasant Genius
Nice info. So no confirmation at all. Can be one of them. If the HBM price drops, or even HBM3, who knows.
Cool to have so many options, at least in the GPU market.

It certainly allows Sony, for example, to really balance their design needs and constraints:

“AMD Navi Interview” said:
“I think HBM technology is a great technology for datacentre/workstation type of application,” David Wang explains as talk turns to the AMD Navi GPUs, “also certain applications require a smaller form factor. Certainly you pay for it, right? It’s lower power, it’s a smaller form factor.”

If they need to reduce power consumption, reduce cooling cost and complexity, and reach a smaller form factor, maybe they will pay the price for HBM2 or whatever customised variation they go for.

If they can tolerate and manage the power consumption and cooling when using GDDR6 at similar bandwidth / throughput then they can save R&D budget and invest it somewhere else. AMD is a kick ass semi-custom partner: great job Lisa Su!
 

Panajev2001a

GAF's Pleasant Genius
Some people argue that GCN (7xxx) is an evolution of VLIW4 (6xxx).

Is there any truth to that?

4xxx/5xxx series was VLIW5 (one SIMD core had 5 x 16 ALUs), which later became VLIW4 (4 x 16 ALUs = 64 just like GCN CUs) for efficiency reasons.

Maybe it's something similar this time around...

(Note: not trying to be patronising, as I suspect you already know this well :), but I extended the post past my intended original reply in case it is useful to others)

I think warp width + number of ALUs and VLIW/Scalar are orthogonal bits here. GCN uses more conventional 16-lane SIMD vector units, while VLIW4/5 in previous AMD GPUs had multiple groups (16 groups per what they called a SIMD engine, for some reason) of 4/5 SPs (ALUs / SFU or Special Function Unit for trigonometric functions and log or exp / LSU or Load Store Unit).
The compiler has to figure out how to take each thread (think of a pixel as a thread) and schedule work for all 4/5 units in parallel, while GCN schedules a single instruction across the 16 lanes of its SIMD vector units, as if each lane processed a different thread (vertex or pixel data, for example) in parallel.
Scalar designs also allow you to easily extend the vector ALU width to many lanes (see AVX-512 and ARM's proposed wider variable-width vector extensions) since each lane processes independent items.

It does change how you want to store and process your data though.

From:

Array of Structures form (each register holds a single vector's items: say xyzw, but they could be arbitrary parameters in each component, they do not have to be 3D coordinates)

to:

Structure of Arrays (you split a vector, say a 3D coordinate, into as many registers as there are items in the logical vector, and in each register you store one of the items... e.g. [xxx...x], [yyy...y], [zzz...z], [www...w]. As you can see, the length of each register can vary and be chosen to match the width of the vector unit).
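
A minimal C++ sketch of the two layouts described above (the Vec4/positions names and array sizes are just illustrative, not anything specific to GCN or VLIW):

```cpp
#include <cstddef>

// Array of Structures: each element keeps its components together.
// This is the natural fit when one VLIW bundle works on one item's x/y/z/w.
struct Vec4 { float x, y, z, w; };
Vec4 positions_aos[1024];

// Structure of Arrays: each component lives in its own contiguous array,
// so lane i of a wide SIMD/SIMT unit can process element i independently
// of every other lane.
struct PositionsSoA {
    float x[1024];
    float y[1024];
    float z[1024];
    float w[1024];
};
PositionsSoA positions_soa;

// Scaling every position: in the SoA form each statement walks one
// contiguous stream per component, which maps directly onto
// "one element per lane" execution on a 16-wide (or wider) vector unit.
void scale_soa(PositionsSoA& p, float s, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        p.x[i] *= s;
        p.y[i] *= s;
        p.z[i] *= s;
        p.w[i] *= s;
    }
}
```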

The bet with VLIW is that within each data item / thread's operations there is enough non-interdependent instruction-level parallelism we can extract, and we can let the compiler do it (as opposed to an OOOe architecture where the HW does the scheduling and dependency tracking for you). The bet with scalar models mapped onto wide vector units, which nVIDIA calls SIMT or Single Instruction Multiple Threads, is that we have enough independent processing streams to feed all the lanes of the wide vector ALU.

[Image: VLIW4 dependency handling diagram]


(Slide 14, nice slides): https://www.archive.ece.cmu.edu/~ec...1.3-simd-and-gpus-part3-vliw-dae-systolic.pdf

 

psorcerer

Banned
I remember ND commenting on how quickly they filled the available 5GB and how they had to make the most of it; they are not known for wasting resources.

They did not have an SSD, so they could not stream assets into RAM fast enough.
That's why you've got all these "opening large doors" animations. And "dark tunnel" transitions.
If the new SSD has even 2-3GB/s transfer rates, 16GB of RAM will be more than enough.
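
A rough back-of-envelope check of that claim in C++, using only numbers floating around this thread (the ~5GB PS4-era game budget and the rumored 2-3GB/s SSD rate); the ~13GB figure below is my own assumption for a 16GB console with ~3GB reserved for the OS:

```cpp
#include <cstdio>

int main() {
    // RAM budgets: the ~5GB PS4-era game budget mentioned in the thread,
    // and an assumed ~13GB budget for a 16GB console with ~3GB reserved.
    const double ram_budget_gb[] = {5.0, 13.0};
    // Sustained read rates: a ~100MB/s HDD versus the rumored 2-3GB/s SSD.
    const double read_gb_per_s[] = {0.1, 2.0, 3.0};

    for (double ram : ram_budget_gb)
        for (double rate : read_gb_per_s)
            std::printf("Refill %4.1f GB at %.1f GB/s: %6.1f s\n",
                        ram, rate, ram / rate);
    return 0;
}
```

Even a full 13GB refills in roughly 4-7 seconds at those SSD rates, versus a couple of minutes from an HDD, which is the whole argument for streaming assets instead of hoarding them in RAM.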
 

psorcerer

Banned
The bet with scalar models mapped with wide vector units, nVIDIA calls it SIMT or Single Instruction Multiple Threads, is that we have enough independent processing streams to feed all the lanes of the wide vector ALU.

It's pretty simple:
1. VLIW is VERY hard to write a good compiler for.
2. Nvidia and GCN are just SMT (akin to multithreading) but with 1000s of threads going on at the same time. In fact, modern GPUs are just how CPUs were supposed to be built for the "no more Moore's Law" situation.
 

Panajev2001a

GAF's Pleasant Genius
It's pretty simple:
1. VLIW is VERY hard to write a good compiler for.
2. Nvidia and GCN are just SMT (akin to multithreading) but with 1000s of threads going on at the same time. In fact, modern GPUs are just how CPUs were supposed to be built for the "no more Moore's Law" situation.

Sure, SIMT/SPMD is similar to SMT, but SMT is not a programming model, it is a HW characteristic (instructions from multiple threads tracked across the CPU pipeline).
The big difference between SMT and SoEMT/CMT is how and when you switch between threads. I see the compiler and programming model changing between VLIW GPUs and Scalar/SIMT ones, but I do not disagree with you, as I wrote before. I was not really talking about programming it, but about the structure of it.

Fundamentally, SMT / HT / FGMT is Out of Order Execution for Thread Level Parallelism (you have HW doing the dependency and hazard tracking and finding available instructions from multiple threads to feed multiple execution units... just as OOOe does in HW what VLIW asks the compiler to do... extended to multiple threads).
 

Geki-D

Banned


So, new dashboard and OS. Nice. Hopefully we get more info about the PS5 UI.
Ok guys, I know a guy who knows a guy who is the brother of a guy who is friends with the sister of the cousin of a guy who is friends with a guy who once met a guy who isn't relevant to this info but does also know the best buddy of a guy who passed down this totally real PS5 info:

The PS5 will have a new physical plastic shell and its own unique packaging. I can confirm this info is 100% correct.
 

TeamGhobad

Banned
The PS4 is 300 dollars.
The Xbox One S is 300 dollars.

These are 6-year-old machines and still this expensive. I wouldn't be surprised if the Anaconda was $599-699, same with the PS5 at $599.
 

Bogroll

Likes moldy games
Ok guys, I know a guy who knows a guy who is the brother of a guy who is friends with the sister of the cousin of a guy who is friends with a guy who once met a guy who isn't relevant to this info but does also know the best buddy of a guy who passed down this totally real PS5 info:

The PS5 will have a new physical plastic shell and its own unique packaging. I can confirm this info is 100% correct.
Well that's more concrete than some of the shit we've got on here 😉
 

Achillias

Member
There was a jeuxvideo leak picked up from reddit (posted c. Jan 15 2019, reported 28 Jan 2019)

  • $249 Lockhart- 8-core CPU, a 4TF GPU, 12GB of RAM, 1TB SSD and DirectX Ray Tracing support.
  • $499 Anaconda.. same plus stronger 12TF GPU and 16GB of RAM
SonGoku - can this go in the first post list? Thanks. Done.

A quick summary (website in French):

Two consoles, both launch holiday 2020.

Lockhart (streaming)
  • $ 249
  • CPU : Zen2 (custom) 8 cores/16 threads
  • GPU : Navi (custom) 4+ TF
  • Memory : 12GB GDDR6
  • "hard Drive" : 1TB NVMe SSD @ 1+GB/s
  • DirectX Raytracing + MS AI
Anaconda
  • $ 499
  • CPU : Zen2 (custom) 8 cores/16 threads
  • GPU : Navi (custom) 12+ TF
  • Memory : 16GB GDDR6
  • "hard Drive" : 1TB NVMe SSD @ 1+GB/s
  • DirectX Raytracing + MS AI
They also claimed to know the PS5 specs (!):
  • $399
  • CPU : Zen2 (custom) 8 cores/16 threads
  • GPU : Navi (custom) 8+ TF
  • Memory : 12GB GDDR6
  • "hard Drive" : 1TB SSD
  • Delayed from holiday 2019 to early 2020 or later

Originally posted by u/NovaMonbasa (deleted account) on reddit (imgur screencaps). They also claimed Asobo, IO Interactive, PlatinumGames, Turtle Rock, Bluepoint, Relic, and The Farm 51 (and others) were in talks about acquisitions.

Xbox launch games :
  • Halo Infinite (crossgen)
  • Forza Motorsport (crossgen)
  • Age of Empires 4 (crossgen)
  • Perfect Dark
  • Killer Instinct 2
  • Bleeding Edge (crossgen)
So the general consensus is that the next-generation Xbox console is going to have more raw power than the one from Sony (the opposite of Xbox One vs PS4), nice.
 

sinnergy

Member
If we get 3072 SPs we should be around 10.2 TFLOPS stock. With some modifications we could get 11 TFLOPS, I think, but that could also be wishful thinking, unless they lower the clock, in which case we will be looking at something closer to 9 TFLOPS.
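
For reference, the usual back-of-envelope formula is peak FP32 TFLOPS = shader processors × 2 FLOPs per clock (FMA) × clock. A quick C++ sketch below; the clocks are simply the values that make the 10.2 / 11 / ~9 TFLOPS figures above fall out, not known specs:

```cpp
#include <cstdio>

// Peak FP32 = shader processors * 2 FLOPs per clock (FMA) * clock.
double tflops(int shader_processors, double clock_ghz) {
    return shader_processors * 2.0 * clock_ghz / 1000.0; // GFLOPS -> TFLOPS
}

int main() {
    // Clocks are assumptions chosen so the post's figures fall out;
    // they are not leaked specs.
    std::printf("3072 SPs @ 1.66 GHz: %.1f TF\n", tflops(3072, 1.66)); // ~10.2
    std::printf("3072 SPs @ 1.79 GHz: %.1f TF\n", tflops(3072, 1.79)); // ~11.0
    std::printf("3072 SPs @ 1.47 GHz: %.1f TF\n", tflops(3072, 1.47)); // ~9.0
    return 0;
}
```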
 

Ellery

Member


255mm² and 1080 class performance does seem much lower than I expected for the discrete graphics card part.
For the PS5 that is going to be fine, but what does AMD expect us to pay for a "1080 class performance card" in mid to late 2019?

Pretty disappointing or maybe Digital Foundry has it wrong, because between 10% faster than RTX 2070 and GTX 1080 there is quite a huge difference.
 


255mm² and 1080 class performance does seem much lower than I expected for the discrete graphics card part.
For the PS5 that is going to be fine, but what does AMD expect us to pay for a "1080 class performance card" in mid to late 2019?

Pretty disappointing or maybe Digital Foundry has it wrong, because between 10% faster than RTX 2070 and GTX 1080 there is quite a huge difference.

What did everyone truly expect for Navi 10? You aren't going to get a wine-tier GPU at beer-budget prices. The customizations on consoles are where they shine; more bandwidth and a wealth of GDDR6 is where they will get their edge. But to expect 2070 Ti or better performance was setting yourselves up for disappointment to begin with.
 

Fake

Member


255mm² and 1080 class performance does seem much lower than I expected for the discrete graphics card part.
For the PS5 that is going to be fine, but what does AMD expect us to pay for a "1080 class performance card" in mid to late 2019?

Pretty disappointing or maybe Digital Foundry has it wrong, because between 10% faster than RTX 2070 and GTX 1080 there is quite a huge difference.

It's not 'that' bad taking the 'custom' part into consideration.
Again, we're still trying to figure out what it could be. None of those desktop GPUs will have RT for now. A 1080 plus Ryzen from the dev perspective, or what they can achieve with it.
If you rewatch the DF video, Tom talked about not making comparisons of consoles against computers.
Edit: 5:34
 

ethomaz

Banned
DF is using the AMD benchmark to make that claim.

GTX 1080 is about 3-5% faster than RTX 2070.
Navi 10 is supposed to be a bit stronger than RTX 2070 too.

255mm² and 1080 class performance does seem much lower than I expected for the discrete graphics card part.
Navi 10 is a mainstream chip to replace Polaris 10.
It was built to fight against the RTX 2070... so 1080-class.

Vega 20 7nm with full 64CUs is 331 mm² with a bit over 1080-class performance.

If we try to use old chip size for comparison:
Vega 10 14nm: 495 mm² 64CUs
Polaris 10 14nm: 232 mm² 36CUs

Polaris 10 is about half of the Vega 10 size with a bit over half of the CUs.

So if we take the Vega 20 7nm size... half should be about 170 mm²; that tells us that at 255 mm², Navi 10 has way more CUs than 36.

Perhaps Navi 10 is a 48 CU chip.
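
Sketching that area scaling as napkin math in C++ (this assumes CU count scales roughly linearly with die area on the same 7nm node, ignoring I/O, memory controllers and other uncore, so treat it as loosely as the post does):

```cpp
#include <cstdio>

int main() {
    // Figures from the post: Vega 20 on 7nm is ~331 mm² with 64 CUs,
    // and Navi 10 is rumored at ~255 mm².
    const double vega20_area_mm2 = 331.0;
    const int    vega20_cus      = 64;
    const double navi10_area_mm2 = 255.0;

    const double mm2_per_cu = vega20_area_mm2 / vega20_cus;   // ~5.2 mm² per CU
    const double est_cus    = navi10_area_mm2 / mm2_per_cu;   // ~49 CUs

    std::printf("~%.1f mm² per CU -> Navi 10 estimate: ~%.0f CUs\n",
                mm2_per_cu, est_cus);
    return 0;
}
```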
 

Ellery

Member
DF is using the AMD benchmark to make that claim.

GTX 1080 is about 3-5% faster than RTX 2070.
Navi 10 is supposed to be a bit stronger than RTX 2070 too.


Navi 10 is a mainstream chip to replace Polaris 10.
It was built to fight against the RTX 2070... so 1080-class.

Vega 20 7nm with full 64CUs is 331 mm² with a bit over 1080-class performance.

If we try to use old chip size for comparison:
Vega 10 14nm: 495 mm² 64CUs
Polaris 10 14nm: 232 mm² 36CUs

Polaris 10 is about half of the Vega 10 size with a bit over half of the CUs.

So if we take the Vega 20 7nm size... half should be about 170 mm²; that tells us that at 255 mm², Navi 10 has way more CUs than 36.

Perhaps Navi 10 is a 48 CU chip.

Let's hope it will have Polaris 10 pricing then. Pricing is what it all comes down to.

I am not going to pay more than $350 for GTX 1080 performance in 2019.

What was Polaris pricing when it came out? It was $229, wasn't it? That would be okay, I guess.

Also, the Vega 20 7nm is like 20-30% over the GTX 1080. I wouldn't call that "a bit".
 
I find the HBM + DDR4 rumor to be weird. Wouldn't that require 2 separate controllers on board, which would increase price and use more space?
IIRC, that was the reason PS3 Super Slim never got a unified chip (Cell + RSX) on the same die, while both PS2 and XBOX 360 achieved this.

The PS4 is 300 dollars.
The Xbox One S is 300 dollars.

These are 6-year-old machines and still this expensive. I wouldn't be surprised if the Anaconda was $599-699, same with the PS5 at $599.
PS4 7nm will drop RRP to $199.

XBOX ONE S will be phased out. It's not a scalable design (eSRAM + 16 x DDR3 chips).
 

ethomaz

Banned
Let's hope it will have Polaris 10 pricing then. Pricing is what it all comes down to.

I am not going to pay more than $350 for GTX 1080 performance in 2019.

What was Polaris pricing when it came out? It was $229, wasn't it? That would be okay, I guess.

Also, the Vega 20 7nm is like 20-30% over the GTX 1080. I wouldn't call that "a bit".
Prices?

$499 Navi 10 XT
$399 Navi 10 Pro

That was what Sapphire leaked.

The RTX 2070, which is the 1080-class performer, is $499 too.
 

Ellery

Member
Prices?

$499 Navi 10 XT
$299 Navi 10 Pro

I guess $299 would still be okay for GTX 1080 performance in 2019.
That is a bit worse price/performance ratio than the Vega 56, which can be had for like 230€ now with custom designs, but you get better perf/watt, I would guess.
 

ethomaz

Banned
I guess $299 would still be okay for GTX 1080 performance in 2019.
That is a bit worse price/performance ratio than the Vega 56, which can be had for like 230€ now with custom designs, but you get better perf/watt, I would guess.
Typo sorry... it was $399.

And no... Pro performance will be between RTX 2060 and RTX 2070... GTX 1070-class.
 

Ellery

Member
Typo sorry... it was $399.

And no... Pro performance will be between RTX 2060 and RTX 2070... GTX 1070-class.

That doesn't make sense because the RTX 2060 is better than the GTX 1070.
The RTX 2060 is nearly on GTX 1080 level and the RTX 2070 is like 8-10% better than the GTX 1080.
 

ethomaz

Banned
Doing some maths with 48 CUs.

48 CUs @ 1700MHz = 10.4 TF

The performance of a 10.4 TF Navi 10, using the 1.25x uplift shown by AMD, is equal to a 13 TF Vega 20.
The RX 5700 XT (full Navi 10) is supposed to perform between the RTX 2070 and RTX 2080 / below the Radeon VII (13.44 TF)...

Fits.
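
Written out as a quick C++ check (64 shader processors per CU and 2 FLOPs per clock are the standard GCN/Navi figures; the 48 CUs, 1.7GHz clock and 1.25x perf-per-TF factor are the assumptions from the post, not confirmed specs):

```cpp
#include <cstdio>

int main() {
    const int    cus         = 48;    // assumed CU count
    const double clock_ghz   = 1.7;   // assumed clock
    const double navi_uplift = 1.25;  // AMD's claimed perf-per-TF gain over Vega

    // 64 shader processors per CU, 2 FLOPs per clock (FMA).
    const double tf = cus * 64 * 2.0 * clock_ghz / 1000.0;     // ~10.4 TF
    std::printf("48 CUs @ %.1f GHz = %.1f TF\n", clock_ghz, tf);
    std::printf("Vega-equivalent   = %.1f TF (Radeon VII: 13.44 TF)\n",
                tf * navi_uplift);                              // ~13.1 TF
    return 0;
}
```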
 

Ellery

Member
Just a bit better (RTX 2060 > GTX 1070), but the GTX 1080 is better than the RTX 2070.

GTX 1080 > RTX 2070 > RTX 2060 > GTX 1070

oh boy I have some surprising news for you.

The RTX 2070 is better than the GTX 1080. If you want I can get some benchmarks for you but they are easy to find like this one.



or this one



I can go on but they basically all look like this.
 

ethomaz

Banned
oh boy I have some surprising news for you.

The RTX 2070 is better than the GTX 1080. If you want I can get some benchmarks for you but they are easy to find like this one.



or this one



I can go on but they basically all look like this.

Edit - Wrong graphs... you are probably right.
 
oh boy I have some surprising news for you.

The RTX 2070 is better than the GTX 1080. If you want I can get some benchmarks for you but they are easy to find like this one.



or this one



I can go on but they basically all look like this.

PC performance =/= console performance. Too many discrepancies to compare. I have a 1080 currently, but I can almost guarantee the consoles will perform better due to optimization, extra bandwidth, fewer limitations, and more memory. For instance, the Xbox One X has a customized 580 and can do 4K rather well; the PC counterpart cannot.
 

Fake

Member
They will be close... both Radeon VII and RX 5700 will fall between RTX 2070 and RTX 2080 with the first close to RTX 2080.
I know, but at that point AMD will probably launch a high-end version of Navi. Maybe with HBM3... who knows. I guess it's important to compare mid-range with mid-range.
Besides, we're talking too much about desktop GPUs and not enough about the next consoles. Getting RTX 2070 performance on consoles would be great for the dev side.
 

Ellery

Member
Nahhh, no need... the GTX 1080 still performs better than the RTX 2070... two games don't tell the whole story.
They are practically even at 4K though.

[Images: performance-per-watt charts at 1920×1080, 2560×1440 and 3840×2160]


Holy shit. This is performance per watt.

Dude what are you doing? Are you for real right now?
 