
Next-Gen PS5 & XSX |OT| Console tEch threaD

If nVidia releases an equivalent 7nm GPU with HBM2 memory, then you'll understand why I'm saying that a 12nm GPU with slower memory is punching above its weight.

Turing is obviously a better uarch for 3D graphics; there's no question about that.
 
Yeah, and hopefully variable rate shading makes it to Navi as well; it's going to be another game changer
Can you explain the difference between TBR and VRS? Both seem to operate similarly
They're not the same thing, even though both help in different ways.

Tiled rasterization basically means there's an on-die (L2) cache in the GPU where most of the rendering is done (kinda like an evolved eSRAM, in a way). This helps reduce memory accesses to the power-hungry GDDR5 pool.

VRS means that certain segments (e.g. your peripheral vision) are rendered at a lower shading rate, with one pixel-shader invocation covering a block of pixels instead of every single pixel. This could be very useful for PSVR2.
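To make the tiling idea concrete, here's a minimal Python sketch of the binning step (the tile size and data layout are invented for illustration; this is not how AMD or nVidia actually lay things out):

# Sketch of tiled rasterization: bin triangles into screen-space tiles,
# then rasterize tile by tile so intermediate pixel traffic stays in
# on-die cache instead of hitting external GDDR.
TILE = 16  # pixels per tile edge (illustrative value, not a real GPU's)

def bin_triangles(triangles):
    """Map each triangle to every tile its bounding box touches."""
    bins = {}
    for tri in triangles:  # tri = [(x0, y0), (x1, y1), (x2, y2)]
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
            for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

# The renderer then walks the bins one tile at a time: each tile's color
# and depth values fit in cache, and only finished tiles get written out.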
 

TLZ

Banned
For people who are interested in the usefulness of an SSD on PS5, there is this very interesting video on Spider-man.



(17:30 - 26:30)

This is why better devs and optimizations make a big difference. The way Spider-Man streams its world actually helped whatever was done in the PS5 demo hit that 0.8s loading speed. Another game doing things differently will show different results, so it doesn't mean every game will have the same instant loading speed.
 
They're equal compute-wise (13.45 TF vs 13.44 TF) in case you didn't notice. Look it up.

I'm pretty sure Radeon VII is faster in heavy compute workloads due to its higher memory bandwidth. It's a card made for productivity/mining and less so for 3D graphics.

3D graphics require clever tricks like tiled rasterization and rely much less on raw power. GCN GPUs have raw power, but they lack clever tricks (they're getting better though with every iteration).

Not sure where you're getting your numbers from.

2080Ti is 16.97 Tflops - stock no OC.

Without an overclock the 2080 Ti will hold a stable in-game clock @ 1950 MHz ( easily up to 2050 OC'd on air ).

Radeon VII is 13.44 Tflops - stock no OC

But the story for the Radeon VII is not so rosy. Its advertised boost clock is 1750 MHz, but it can't hold it.

"As our test progresses and Vega 20 warms up, we see that Radeon VII starts switching back and forth between ~1500 MHz and 1740 MHz on the open bench. The jumps progressively become larger and more frequent as the temperature rises. In a closed case, the changes in frequency also seem to be dependent on the rate at which temperature changes. The card even sporadically drops to 1367 MHz!" - Tom's Hardware

I think you're going off the advertised boost clocks. You can ignore those. For Nvidia, the advertised boost clocks are always quite a bit LOWER than what you'll actually get in game ( thanks to Nvidia GPU Boost ). Meanwhile AMD's boost clock is truly a best-case scenario, and you're lucky if you can get there and hold it.
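If you want to sanity-check the numbers yourself, the FP32 math is simple. A quick Python sketch, using the clocks quoted in this post rather than official spec sheets:

# FP32 TFLOPs = shader cores x 2 (one FMA = 2 ops) x clock.
def tflops(cores, mhz):
    return cores * 2 * mhz * 1e6 / 1e12

print(tflops(4352, 1545))  # 2080 Ti at advertised boost    -> ~13.45
print(tflops(4352, 1950))  # 2080 Ti at observed in-game    -> ~16.97
print(tflops(3840, 1750))  # Radeon VII at advertised boost -> ~13.44
print(tflops(3840, 1500))  # Radeon VII when it throttles   -> ~11.52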
 
Last edited:

SonGoku

Member
They're not the same thing, even though both help in different ways.

Tiled rasterization basically means there's an on-die (L2) cache in the GPU where most of the rendering is done (kinda like an evolved eSRAM, in a way). This helps reduce memory accesses to the power-hungry GDDR5 pool.
Correct me if I'm wrong, but doesn't TBR select specific tiles for rendering/drawing/rasterizing depending on what is actually seen on screen?
VRS means that certain segments (e.g. your peripheral vision) are rendered at a lower shading rate, with one pixel-shader invocation covering a block of pixels instead of every single pixel. This could be very useful for PSVR2.
Ah, I see, but it would also free up GPU resources for regular gaming; it's not just for VR:
  • Variable Rate Shading is a new rendering technique enabled by Turing GPUs. It increases rendering performance by applying full GPU shading horsepower to detailed areas of the scene, and less GPU horsepower to less detailed areas
  • Variable Rate Shading works by varying the number of pixels that can be processed by a single pixel shader operation. Single pixel shading operations can now be applied to a block of pixels, allowing applications to effectively vary the shading rate in different areas of the screen.
Having these two additions in Navi will surely help it punch above its weight (see the sketch below).
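For the VR case specifically, here's a toy Python sketch of how a foveated VRS pass might pick rates. The radii are made up; the rates mirror the 1x1/2x2/4x4 coarse-shading tiers VRS exposes:

# Toy foveated shading-rate selection: one pixel-shader invocation can
# cover 1x1, 2x2 or 4x4 pixels depending on distance from the gaze point.
def shading_rate(x, y, gaze_x, gaze_y):
    d = ((x - gaze_x) ** 2 + (y - gaze_y) ** 2) ** 0.5
    if d < 300:
        return (1, 1)  # full rate where the player is looking
    if d < 700:
        return (2, 2)  # a quarter of the shader invocations
    return (4, 4)      # 1/16 of the invocations in the far periphery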
 
Last edited:
This is why better devs and optimizations make a big difference. The way Spider-Man streams its world actually helped whatever was done in the PS5 demo hit that 0.8s loading speed. Another game doing things differently will show different results, so it doesn't mean every game will have the same instant loading speed.
ND games also utilized a lot of experimental tech on PS3:

1) Uncharted 3 was the first one that could be downloaded in "packages" (different languages, 2D/3D cutscenes, MP etc.)

2) TLOU PS3 digital version utilized an experimental form of PlayGo, where you download a small client from the PS store and the rest was downloaded from ND servers.

Rest assured, both PS4 Pro and XBOX ONE X are experimental testbeds for next-gen consoles. :)

Not sure where you're getting your numbers from.

2080Ti is 16.97 Tflops - stock no OC.
 
Yeah, that's the problem. You can't use that boost clock to calculate Tflops.

Any 2080ti will boost to 1950 stock on air and above 2000 OC. Above 2100 on water.

4352 cores x 2 x 1950 = 16.97 Tflops.
 
Last edited:
Correct me if I'm wrong, but doesn't TBR select specific tiles for rendering/drawing/rasterizing depending on what is actually seen on screen?
I think you're talking about another part of the GPU pipeline that discards polygons/pixels depending on whether they're visible or not (culling and early-Z rejection).

Correct.

Ah, I see, but it would also free up GPU resources for regular gaming; it's not just for VR
I'll be impressed if VRS is not just for VR, since it actually reduces rendering quality in certain areas (areas that don't really matter much, but still). Seems like a more refined version of CB (checkerboard) rendering, which reduces quality across the entire screen.

If this becomes a reality, then I guess onQ123 and his 8.4 TF shenanigans might come to fruition. :)

Yeah, that's the problem. You can't use that boost clock to calculate Tflops.

Any 2080ti will boost to 1950 stock on air and above 2000 OC. Above 2100 on water.

4352 cores x 2 x 1950 = 16.97 Tflops.
So the listed specs are wrong?

What about this, straight from the horse's mouth:


Boost clock is 1545 MHz, base clock is 1350 MHz.

1545 * 2 * 4352 = 13.45 TF
1350 * 2 * 4352 = 11.75 TF (pretty close to next-gen consoles)

Explain how 2080 Ti is 16.97 Tflops at stock clocks/no OC?

Water cooling is niche and therefore irrelevant for most users.
 

SonGoku

Member
I'll be impressed if VRS is not just for VR, since it actually reduces rendering quality in certain areas (areas that don't really matter much, but still). Seems like a more refined version of CB (checkerboard) rendering, which reduces quality across the entire screen.
Variable Rate Shading allows developers to selectively reduce the shading rate in areas of the frame where it won’t affect visual quality, letting them gain extra performance in their games. This is really exciting, because extra perf means increased framerates and lower-spec’d hardware being able to run better games than ever before.
VRS also lets developers do the opposite: using an increased shading rate only in areas where it matters most, meaning even better visual quality in games.
According to Microsoft, VRS provided a 14% performance boost under DirectX 12 (and NVIDIA RTX hardware) when used in Civilization VI by Firaxis. Most importantly, there was no noticeable reduction in image quality.
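A back-of-the-envelope way to see where a gain like that 14% can come from. Every fraction in this Python sketch is invented for illustration, not taken from the Civilization VI integration:

# If pixel shading is ~60% of the frame and a third of the screen drops
# to 2x2 coarse shading (1/4 the invocations), estimate the saving.
shade_share = 0.60  # assumed fraction of frame time spent pixel shading
coarse_area = 0.33  # assumed fraction of pixels shaded at 2x2
coarse_cost = 0.25  # 2x2 VRS -> one invocation covers four pixels

new_shade = shade_share * ((1 - coarse_area) + coarse_area * coarse_cost)
new_frame = (1 - shade_share) + new_shade
print(f"~{(1 - new_frame) * 100:.0f}% shorter frame time")  # ~15%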

 

ethomaz

Banned
The RTX 2080 Ti non-Founders (no OC) is 13.45 TFs.

The boost clock is 1545 MHz.

The Founders Edition (factory OC) runs its boost clock at 1635 MHz.
 
Last edited:
Yeah, the listed clocks are kind of wrong.

It's confusing, I don't know why Nvidia does that.

The piece you're missing is "GPU Boost 3.0"; it's basically a GPU self-overclock. You don't have to do anything, it does it itself.

As soon as you push a 2080ti it will clock itself right up to 1950.

Another example is with my 1080 Ti. Its listed "boost clock" is only 1582 MHz, but as soon as you actually use it, it clocks itself right up to 1960.

If you're curious about GPU Boost tech there's a short write up on it here...
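And if it helps, here's a crude toy model of the behaviour in Python. The constants are placeholders picked to match my 1080 Ti numbers above; this is not Nvidia's actual algorithm:

# GPU Boost in spirit: start as high as headroom allows, shed clocks as
# the card heats up, and settle well above the advertised boost clock.
advertised_boost = 1582  # MHz, the spec-sheet number (1080 Ti)
clock, temp, temp_limit = 2050, 40, 84  # MHz, degrees C

while temp < temp_limit:
    temp += 3                      # card warms up under load
    clock = max(clock - 10, 1960)  # steps down a bin at a time
print(clock, ">", advertised_boost)  # settles near 1960, not 1582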

 
The RTX 2080 Ti non-Founders (no OC) is 13.45 TFs.

The boost clock is 1545 MHz.

The Founders Edition (factory OC) runs its boost clock at 1635 MHz.

That is a listed spec, yes, but it is not really true.

The 2080 Ti will boost to 1950 MHz all by itself. No overclocking required.

This is because of GPU Boost. You can read about it above. For Pascal they were already at GPU Boost 3.0. Not sure if Turing has a newer version.
 

SonGoku

Member
Whatever helps boost framerates is good in my book.

VRS needs to be implemented by game devs, right? It's not like tiled rendering where it just works.
Yes, but it's supposed to be very straightforward:
On top of that, we designed VRS to be extremely straightforward for developers to integrate into their engines. Only a few days of dev work integrating VRS support can result in large increases in performance.
No reason to give away ez free performance if both consoles support the tech.
 

ethomaz

Banned
That is a listed spec, yes, but it is not really true.

The 2080 Ti will boost to 1950 MHz all by itself. No overclocking required.

This is because of GPU Boost. You can read about it above. For Pascal they were already at GPU Boost 3.0. Not sure if Turing has a newer version.
I’m not sure about that.

What happens is that all GPU manufacturers set the boost clock higher than the nVidia reference... even the Founders Edition from nVidia has a higher boost clock.

I will check some cards.
 
I’m not sure about that.

What happens is that all GPU manufacturers set the boost clock higher than the nVidia reference... even the Founders Edition from nVidia has a higher boost clock.

I will check some cards.

Feel free to look into it. AFAIK everything I've said is true.

No Nvidia GPUs ( since at least Pascal ) run at their advertised boost clocks. They all run WAY higher because of GPU Boost 3.0.

"Ok, so nearly every day I see on forums how people are very confused that their card (be it a reference/founders edition, or a custom board partner variant) seems to be boosting way pas the max advertised boost clock of the GPU. "

"It is fairly safe to say that 99.99% of 1070's (and other Pascal cards) can hit the 1950MHz range "


Meanwhile their "boost clocks" are all listed in the 1500 MHz range.

Yes, it IS confusing.
 
Last edited:
That is a listed spec, yes, but it is not really true.

The 2080 Ti will boost to 1950 MHz all by itself. No overclocking required.

This is because of GPU Boost. You can read about it above. For Pascal they were already at GPU Boost 3.0. Not sure if Turing has a newer version.
I think we're arguing semantics here. So GPU boost is a form of automated OC, not stock speed, right?

Either way, what you're saying kinda proves my point: the higher the Teraflops are, the more GB/s you need to feed those hungry cores.

This doesn't really apply to nVidia GPUs (post-Kepler era), because they use clever tricks to minimize memory accesses (from compression to tiled rendering).
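To put rough numbers on it, a quick Python sketch using the public spec-sheet bandwidth figures, with the 2080 Ti at the ~1950 MHz observed clock quoted earlier in the thread:

# GB/s of memory bandwidth available per TFLOP of compute.
cards = [("Radeon VII", 1024, 13.44),   # 4096-bit HBM2 @ 2 Gbps
         ("RTX 2080 Ti", 616, 16.97)]   # 352-bit GDDR6 @ 14 Gbps
for name, gbs, tf in cards:
    print(f"{name}: {gbs / tf:.0f} GB/s per TF")
# ~76 vs ~36: Turing lives on half the bandwidth per flop, thanks to
# delta color compression and tiled caching.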

Where's the disagreement here?

nVidia uses old, less efficient tech (12nm, GDDR), while AMD uses more efficient tech (7nm, HBM). It's the underlying architecture that makes the difference (not raw specs); that's why it punches above its weight, as I said.

This will become very clear whenever nVidia releases a 7nm GPU with HBM (probably Ampere will be that). Navi is AMD's (RTG's) last chance to redeem themselves.

Yes, but it's supposed to be very straightforward:

No reason to give away ez free performance if both consoles support the tech.
I remember Cerny saying that PS4 Pro patches are minimal work for devs:


I hope you're right.
 

Evilms

Banned
Nvidia =/= AMD at the theoretical performance level. For proof:

A GTX 1070 with 6.5 Tflops easily rivals the RX Vega 56 and its 10.6 Tflops.

So there's no point comparing the next consoles with Nvidia GPUs.
 
Nvidia =/= AMD at the theoretical performance level. For proof:

A GTX 1070 with 6.5 Tflops easily rivals the RX Vega 56 and its 10.6 Tflops.

So there's no point comparing the next consoles with Nvidia GPUs.
If Navi is as efficient as the latest nVidia GPUs, then it will be fair to compare them.

Remember pre-Ryzen AMD CPUs? We're at this stage now (Polaris/Vega GPUs) and anxiously waiting for Navi details to surface...
 
I think we're arguing semantics here. So GPU boost is a form of automated OC, not stock speed, right?

Either way, what you're saying kinda proves my point: the higher the Teraflops are, the more GB/s you need to feed those hungry cores.

This doesn't really apply to nVidia GPUs (post-Kepler era), because they use clever tricks to minimize memory accesses (from compression to tiled rendering).

Where's the disagreement here?

nVidia uses old, less efficient tech (12nm, GDDR), while AMD uses more efficient tech (7nm, HBM). It's the underlying architecture that makes the difference (not raw specs); that's why it punches above its weight, as I said.

This will become very clear whenever nVidia releases a 7nm GPU with HBM (probably Ampere will be that). Navi is AMD's (RTG's) last chance to redeem themselves.

It's not really semantics so much as the advertised boost clock is wrong. It IS an automated OC, but it is ALSO STOCK - because every GPU ( that we're talking about here ) comes configured this way out of the box with nothing the user has to do. Right out of the box, as soon as you use it a 2080ti will clock itself to 1950 and NOT its advertised boost clock of 1545. I don't even know how you could get a 2080ti to run at 1545. I guess you'd have to under clock it.

There's no disagreement here.

I'm just pointing out that if you calculate Tflops using the listed clocks you will arrive at the wrong answer. The 2080 Ti also does not punch "above its weight" when compared to the Radeon VII. The 2080 Ti is simply in a higher weight class.

Feel free to look into this yourself and not believe me because of how weird this sounds, but when Nvidia lists a boost clock the reality is that your GPU is going to clock much higher ( right out of the box ). But when AMD lists a boost clock, that's really just what you're going to get ( although you might not even be able to HOLD that boost clock as is the case with the Radeon 7 ).
 
It's not really semantics so much as the advertised boost clock is wrong. It IS an automated OC, but it is ALSO STOCK - because every GPU ( that we're talking about here ) comes configured this way out of the box with nothing the user has to do. Right out of the box, as soon as you use it a 2080ti will clock itself to 1950 and NOT its advertised boost clock of 1545. I don't even know how you could get a 2080ti to run at 1545. I guess you'd have to under clock it.

There's no disagreement here.

I'm just pointing out that if you calculate Tflops using the listed clocks you will arrive at the wrong answer. The 2080 Ti also does not punch "above its weight" when compared to the Radeon VII. The 2080 Ti is simply in a higher weight class.

Feel free to look into this yourself and not believe me because of how weird this sounds, but when Nvidia lists a boost clock the reality is that your GPU is going to clock much higher ( right out of the box ). But when AMD lists a boost clock, that's really just what you're going to get ( although you might not even be able to HOLD that boost clock as is the case with the Radeon 7 ).
I'm not saying I don't believe you, but it seems we have a different definition of what constitutes "punching above someone's weight". nVidia has the performance crown (seems that's your definition) due to efficiency reasons (not raw numbers), but they haven't released a consumer 7nm/HBM2 GPU yet.

Does GPU Boost 3.0 affect memory clocks too? Either way, I doubt it reaches 1 TB/s... again: just wait for an equivalent nVidia GPU with 1 TB/s of memory bandwidth, then you'll understand the point I'm trying to make. :)
 
I'm not saying I don't believe you, but it seems we have a different definition of what constitutes "punching above someone's weight". nVidia has the performance crown (seems that's your definition) due to efficiency reasons (not raw numbers), but they haven't released a consumer 7nm/HBM2 GPU yet.

Does GPU Boost 3.0 affect memory clocks too? Either way, I doubt it reaches 1 TB/s... again: just wait for an equivalent nVidia GPU with 1 TB/s of memory bandwidth, then you'll understand the point I'm trying to make. :)
I understand the point you're trying to make and I have enjoyed arguing semantics with you. :messenger_winking: Off to work now.
 

SonGoku

Member
I remember Cerny saying that PS4 Pro patches are minimal work for devs:
Most games come with pro patches these days even if the CB implementations are lacking compared to exclusives.
The big difference however is that if both consoles support the tech, devs are more likely to use it.
If Navi is as efficient as the latest nVidia GPUs, then it will be fair to compare them.
Better to keep expectations conservative; let's not get bummed if Navi doesn't match Nvidia flops.
I will be super happy if 13 TF of AMD flops makes it to the PS5; anything more will enter megaton status lol
 
Last edited:

HoldTheAir

Member
I have a question in regards to Lockhart. Some of the rumors I have read have suggested that Lockhart will be disc-less. I have around 10 games on discs and was wondering if Microsoft or Gamestop would partner up to give digital codes in exchange for your physical copies.
 

SonGoku

Member
I have a question in regards to Lockhart. Some of the rumors I have read have suggested that Lockhart will be disc-less. I have around 10 games on discs and was wondering if Microsoft or Gamestop would partner up to give digital codes in exchange for your physical copies.
Unlikely to be a standard procedure, but individual stores might do trade deals of their own accord.
I'm curious: this being a gaming forum and all, I assume most users here will choose snek over Lockhart. What makes you interested in Lockhart without knowing the price yet?
 

HoldTheAir

Member
Unlikely to be a standard procedure, but individual stores might do trade deals of their own accord.
I'm curious: this being a gaming forum and all, I assume most users here will choose snek over Lockhart. What makes you interested in Lockhart without knowing the price yet?
I just like to keep my options open. If the games look just as good but run at 1080p, that's fine with me as long as the price is right. I'm a gamer on a budget.
 

SonGoku

Member
True, but aren't budgets just increasing in general? And I expect devs to be more cautious next-gen with their spending.
Most engines will just be upgraded, since the hardware is an evolution rather than a revolution, unlike previous gens. Also, devs already make next-gen-grade assets; they just don't use them in real time (UC4 trailer, GTS photo mode, any Ubi trailer/gameplay reveal :messenger_tears_of_joy: etc.)
Rather than investing time making tools from scratch for completely different architectures, they can focus on creating new experiences.

It goes without saying the most innovative next-gen experiences won't come before mid-gen.
 

CyberPanda

Banned
Most engines will just be upgraded, since the hardware is an evolution rather than a revolution, unlike previous gens. Also, devs already make next-gen-grade assets; they just don't use them in real time (UC4 trailer, GTS photo mode, any Ubi trailer/gameplay reveal :messenger_tears_of_joy: etc.)
Rather than investing time making tools from scratch for completely different architectures, they can focus on creating new experiences.

It goes without saying the most innovative next-gen experiences won't come before mid-gen.
That’s true. A lot of the engines now will be used and upgraded for future hardware.
 

CyberPanda

Banned

Microsoft is working on a new Xbox game streaming service, backed by the power of its cloud technologies. This Netflix-style service aims to expand the horizons of the Xbox family, bridging flagship console gaming and mobile phones. We took a deeper dive into Microsoft's mysterious xCloud plans and those working on the project.

Understanding Project xCloud

Project xCloud is Microsoft's first venture into game streaming, mobilizing its Xbox One library via remote, cloud-hosted gameplay. While a duo of next-generation Xbox devices codenamed Anaconda and Lockhart appears to be in the works, xCloud extends full-fledged "console-quality" titles beyond the living room. Microsoft remains tight-lipped on details during development, though it plans to host public trials sometime in 2019.

The ambitions of xCloud come as little surprise, with reports of internal testing dating back to 2013. Between cloud and gaming empires, shifting efforts to game streaming is only a natural fusion of Microsoft's strengths. Work is underway not only on scaling this future-facing platform, but also on extensive tools for third-party developers. While traditional Xbox consoles are here to stay, xCloud is the next pioneering bet for a more ubiquitous, device-agnostic ecosystem.

Who is working on Project xCloud?

Project xCloud is a substantial undertaking, positioned as a critical pillar of future Xbox plans. It's Microsoft's first real opportunity to extend its ecosystem to a broader pool of gamers, and with rivals fast accumulating, it's crucial that xCloud gets it right from the start. Mobilizing an existing library, it must retain console philosophies while adapting to mobile norms.

With high stakes for Project xCloud, Microsoft is rapidly hiring and shifting its top in-house talent over to the service. Head of gaming cloud Kareem Choudhry leads the charge, overseeing an all-new gaming division focused on embracing existing cloud successes. The team features key masterminds of Xbox history, previously driving Xbox backward compatibility, Xbox video streaming, and more.
While ongoing work remains cryptic in Project xCloud's early days, we took a deeper dive into the talent leading the venture. While far from a comprehensive list, it provides a peek into the expertise of those publicly known to be leading the project, and what they bring to the platform. As an ever-evolving roundup, don't hesitate to drop any key figures my way.

Kareem Choudhry — Corporate Vice President, Gaming Cloud

Heading Microsoft's newly-formed gaming cloud division, Kareem Choudhry is a kingpin of game streaming efforts. At the helm of future-facing Xbox endeavors, Choudhry oversees its cloud portfolio beyond the home console. Alongside ongoing Project xCloud goals, the focus lies on scaling the Azure PlayFab suite for developers.
With over two decades at Redmond, Choudhry has spanned Outlook and Windows DirectX before settling with Xbox. Although formerly head of Kinect development, he moved on to supervise Xbox software engineering once production of the peripheral wound down. Upon securing a VP role, Choudhry stepped up to fuse Microsoft's cloud and gaming empires through an all-new division. Well-accustomed to the Xbox platform, he's a driving force behind Project xCloud in 2019.

Kevin LaChapelle — General Manager, Project xCloud

Moving onto Project xCloud with a vibrant past, Kevin LaChapelle is another long-standing Microsoft veteran. Jumping onboard with Microsoft in 1989, he aided in the founding years of Windows, later shipping the first iteration of Movie Maker, and pivoting to Expression Encoder and Microsoft Silverlight.
Since onboarding with Xbox, LaChapelle led Xbox 360 video technologies and Microsoft's praised backward compatibility efforts. His role as general manager on Project xCloud parallels his past video streaming and encoding work, essential to the platform's success. With quality and latency as core pillars of game streaming, LaChapelle is a fitting piece of the xCloud puzzle.

Bill Stillwell — Director of Product Planning, Project xCloud

From Windows PCs, Windows Phones, to all corners of Xbox, Project xCloud's director of product planning has a seasoned history with Microsoft's consumer lineup. After two decades at Microsoft, Bill Stillwell has moved away from the Xbox platform and backward compatibility teams onto coordinating Project xCloud. From concept to final service, Stillwell has previously discussed the challenges and solutions behind the xCloud vision.
Brandon Riffe — Principal Program Manager Lead, Project xCloud

Among the hosts of xCloud's Game Developers Conference (GDC) 2019 presence, Brandon Riffe joined the project as principal program manager lead. Despite nearly a decade at Microsoft under his belt, Riffe comes fresh off three years at id Software as a senior producer on the free-to-play shooter Quake Champions. Beforehand, Riffe spent years on the LaChapelle-led Expression encoding team, alongside work on Xbox video streaming, apps, and the marketplace. He now helps lead the xCloud efforts, recently discussing the Touch Adaptation Kit for mobile devices.

Laying the foundations
It's still early days for Xbox Project xCloud, and innumerable unanswered questions lie in the months ahead. However, game streaming is more advanced than ever, and 2019 could be a massive break for Microsoft's cloud ambitions. Let us know your thoughts on the current state of Project xCloud in the comments.

Also, Benji's predictions for MS at E3:



His predictions are pretty predictable, but here's the summary.
  • Playground Games Fable announcement.
  • MS acquiring a new studio or possibly creating a new one.
  • Cyberpunk demo on stage.
  • MS spending time talking about its services like Game Pass and xCloud, and their evolution.
  • Ninja Theory reveals their new IP for current gen.
 
Last edited:

bitbydeath

Member
Xbox could be 10 TF, with a hardware ray-tracing unit punching well above its weight, making it look like a 16 TF machine, with Sony having the upper hand in TF.

You’re not the first to say something like this.

Is this pre-emptive damage control in case it gets announced to be weaker, so people can MisterXMedia it later on with hidden GPUs?
 

sinnergy

Member
Nah, MS has been working on ray tracing with DirectX for years, so it's not that weird that they'd modify their console to run it.
 

Imtjnotu

Member
Xbox could be 10 TF, with a hardware ray-tracing unit punching well above its weight, making it look like a 16 TF machine, with Sony having the upper hand in TF.
 

SaucyJack

Member
That’s true.
The Xbox announcement would be deflated if MS announces a weaker console after them.

Within our gamer bubble things like this seem important, but console sales over holiday 2020 are not going to be affected by this sort of thing.

As things stand, on the verge of a new generation, there is no doubt that Sony are in pole position. They have sold at least double the number of consoles in the current gen and have a raft of top-notch AAA exclusive titles ready to drop over the last year of the current gen and the first year(s) of the next. They have all the momentum; the onus is on Microsoft to show their hand first.

I have no doubt that Sony will keep their powder dry until Microsoft have announced.
 